First star formation in extremely early epochs

Mana Ito^1 and Kazuyuki Omukai^1

^1 Astronomical Institute, Graduate School of Science, Tohoku University, Aoba, Sendai 980-8578, Japan
mana.ito@astr.tohoku.ac.jp, omukai@astr.tohoku.ac.jp

arXiv:2405.10073v1 [astro-ph.GA] (also astro-ph.CO, astro-ph.SR), May 16, 2024

Keywords: stars: formation — stars: Population III — cosmic background radiation — dark ages, reionization, first stars
First stars play crucial roles in the development of the universe, influencing events like cosmic reionization and the chemical enrichment of the intergalactic medium. While first stars are conventionally thought to form at around z ∼ 20-30 in the standard Λ Cold Dark Matter (ΛCDM) cosmology, observational constraints on small-scale (< Mpc) density fluctuations remain limited, and such fluctuations may differ significantly from the scale-invariant ones assumed in the ΛCDM model. Should this be the case, the formation of first stars could occur much earlier than typically predicted.
In this study, we investigate the formation process of first stars in the extremely early epochs of z ≳ 100 in the post-recombination universe. At such early times, the effects of the warm cosmic microwave background (CMB) become significant. We calculate the collapse of primordial star-forming clouds using a one-zone thermo-chemical model that accounts for CMB influences on radiative heating, Compton cooling, and photodissociation reactions.
We found that the impact of the CMB on the evolution is limited at z ≲ 100, with the temperature evolution closely resembling the conventional model. However, within the range 100 ≲ z ≲ 400, the formation of H_2 via the H^- channel is impeded by H^- photodetachment induced by the CMB, leading to higher temperatures compared to standard thermal evolution. Consequently, first stars with masses exceeding 1000 M_⊙ can emerge at z ≳ 100. Furthermore, at z ≳ 500, the temperature evolution becomes nearly isothermal at several thousand Kelvins solely due to atomic cooling, as H_2 formation is entirely suppressed, including the less efficient H_2^+ channel, which is blocked by H_2^+ photodissociation.
In such cases, supermassive stars with masses around ∼ 10^5 M_⊙ are expected to form solely via atomic cooling. These findings emphasize the significant variation in the typical mass of the first stars depending on the epoch of formation.
§ INTRODUCTION
The launch of the James Webb Space Telescope (JWST) has revolutionized our exploration of the high-redshift universe, shedding new light on the cosmic dawn. Early observations with the JWST have unveiled a substantial population of bright galaxies at redshifts greater than 10 <cit.>. While some galaxies initially identified through photometric observations have been revealed as low-redshift interlopers upon subsequent spectroscopic analysis, over ten galaxies have been spectroscopically confirmed at redshifts exceeding 10 <cit.>. Their unexpectedly high number density, approximately an order of magnitude greater than extrapolations from lower redshifts, has introduced tension with models assuming a constant star formation efficiency <cit.>. Additionally, there are candidates for extremely massive galaxies (M_∗∼ 10^10-11 M_⊙) at somewhat lower redshifts, around z ∼ 7-9 <cit.>. The presence of these galaxies challenges the standard ΛCDM scenario, suggesting either exceptionally high star formation efficiencies or the need for entirely new theoretical frameworks <cit.>.
These findings have prompted the proposal of various astrophysical mechanisms, aimed at explaining the abundance of bright galaxies at high redshifts. These include suggestions of a top-heavy initial mass function (IMF) in early galaxies <cit.>, high star formation efficiencies <cit.>, or reduced dust extinction <cit.>. Ongoing research actively investigates the feasibility of these proposals.
On the other hand, alternative scenarios have been proposed that go beyond the framework of the standard ΛCDM cosmology. The cosmological structure formation process involves density fluctuations present in the early universe growing under gravity, becoming nonlinear, and collapsing gravitationally to form dark matter halos, within which stars form. The scales at which halos form at different epochs are determined by the spectrum of primordial fluctuations. Since the amplitude of primordial fluctuations on scales smaller than 1 Mpc has not been directly observed, it is typically assumed that scale-invariant fluctuations on larger scales continue on smaller scales as well <cit.>. However, the possibility of larger fluctuations existing on smaller scales cannot be ruled out. If such fluctuations did exist, it would imply the formation of galaxies at earlier cosmic epochs.
For instance, <cit.> and <cit.> have proposed models featuring a bumpy enhancement in the spectrum of fluctuations on sub-Mpc scales. Similarly, <cit.> and <cit.> have considered a blue-tilted spectrum with a bend on the small-scale side. In the former works of each pair, the number of galaxies formed was estimated analytically, while the latter conducted N-body numerical simulations. Both studies argue that the presence of bright high-redshift galaxies observed by JWST can be explained by these models.
Various factors contribute to fluctuations on small scales, including the running of the power spectrum arising from certain inflation models <cit.> and phase transitions in the early universe <cit.>. Recently, there has been increasing research interest in the possibility of primordial black holes (PBHs) forming in the early universe <cit.>. If PBHs exist, their spatial distribution is known to generate isocurvature fluctuations, leading to significant fluctuations on small scales. It has been suggested that if PBHs comprise a small fraction of dark matter (∼ 10^-6-10^-3), the observed number density of high-redshift galaxies could be explained by this effect <cit.>.
Additionally, recent observations from Pulsar Timing Arrays, such as NanoGrav15yr <cit.>, suggest the presence of background gravitational waves. These could be interpreted as cosmological non-linear secondary gravitational waves, leading to large density fluctuations on small scales and the generation of PBHs <cit.>. It has also been suggested, perhaps more plausibly, that the fluctuations are not large enough to form PBHs; even in such cases, non-linearities in the evolution of the universe could lead to the early formation of halos <cit.>.
Furthermore, the presence of primordial magnetic fields has been proposed to generate secondary density fluctuations on small scales, potentially leading to earlier galaxy formation <cit.>.
Due to the factors mentioned above, if the formation of first stars occurs much earlier than typically assumed, it is expected that the evolution of star-forming clouds and the resulting masses of the first stars will differ from the usual pattern.
In the standard ΛCDM scenario, the formation of first stars is conventionally hypothesized to occur within the redshift range of z=20-30 <cit.>. Even in rare scenarios where only one such object might be observable in the universe, the formation epoch is estimated to be approximately z=60-70 <cit.>. Consequently, the formation of first stars beyond z>100 remains scarcely explored.
In this context, <cit.> conducted cosmological simulations to explore the process of first star formation in very early epochs (z≃ 100-200), particularly in scenarios where the matter power spectrum exhibits a blue tilt on small scales. They found that
the dissociation of H^- by the high-temperature cosmic microwave background (CMB) inhibits the formation of H_2, a crucial coolant for primordial gas. Consequently, the star-forming gas becomes hotter than usual. They concluded that the first stars formed under such conditions tend to be somewhat more massive.
In this paper, we explore the possibility of first star formation occurring in the universe significantly earlier than what is typically assumed in the standard cosmological scenario.
We extend the investigation of potential formation epochs for the first stars from the typical epoch of early star formation (redshifts of a few tens) to just after cosmological recombination (redshifts of a few hundred), going beyond the scope of the study by <cit.>, who calculated the evolution of several first-star-forming clouds forming within a redshift range of z ∼ 20-200 selected from cosmological simulations. Here, we systematically study the thermal and chemical evolution of such clouds in a much wider redshift range of z ∼ 20-700. In addition, we calculate the entire prestellar collapse evolution up to the protostellar density of n_H=10^22 cm^-3, whereas <cit.> stopped their calculation at n_H=10^13 cm^-3.
We aim to understand how the thermal evolution of star-forming clouds would differ across this broad range of formation epochs and discuss the properties of the stars formed under these conditions, using a one-zone thermo-chemical model.
The structure of this paper is as follows: Section 2 outlines the methodology, followed by Section 3, which presents the computational results. In Section 4, we explore the properties of the stars formed based on these computational results and discuss other relevant aspects.
Finally, in Section 5, we provide a brief summary.
For the cosmological parameters, we adopt the following values: Ω_Λ = 0.76, Ω_ m = 0.24, Ω_ b = 0.04 and H_0 = 70 km s^-1 Mpc^-1.
§ METHOD OF CALCULATION
In this paper, we calculate the thermal and chemical evolution process of primordial clouds in high-density cosmological regions,
starting from the initial expansion, maximum expansion, gravitational contraction, and virialization up to the formation of protostars after further gravitational collapse, using a one-zone model. We employ a modified version of the computational model used by <cit.>, who investigated the thermal evolution during the typical first star formation <cit.>.
Here, we outline the model and describe the modifications made for the extension of calculations to extremely high-redshift periods.
§.§ Pre-Virialization Evolution
The primary focus of this calculation is the thermal evolution during the contraction of high-density cores, which significantly impacts the mass of forming stars. To establish initial conditions, it is necessary to compute the pre-virialization evolution.
Initially, for each halo's virialization epoch z_ vir, the density evolution is computed until reaching the virialization density using the spherical top-hat density evolution model <cit.> as follows:
ρ_tot = (9π^2/2) [ (1+z_turn)/(1-cosθ) ]^3 ρ_cr,0 Ω_m,
where ρ_cr,0 is the current critical density of the universe (=3H_0^2/8π G), and z_turn is the turnaround redshift, which is related to z_vir as 1+z_turn = 2^(2/3) (1+z_vir). The parameter θ is related to the redshift as
1+z = (1+z_turn) [ (θ-sinθ)/π ]^(-2/3).
Additionally, the total matter density
ρ_ tot
comprises contributions from both dark matter (ρ_ DM) and baryons (ρ). It is assumed that prior to virialization, the amounts of dark matter and baryons are proportional to the cosmic mean.
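To make these initial conditions concrete, the parametric top-hat relations above can be inverted numerically. The following sketch is our own illustration, not part of the paper's code; SI units, the quoted cosmological parameters, and scipy's root finder are assumed.

```python
import numpy as np
from scipy.optimize import brentq

def tophat_density(z, z_vir, Omega_m=0.24, H0=70e3 / 3.086e22):
    """Total matter density (kg m^-3) of a top-hat perturbation at redshift z
    that virializes at z_vir (Equation 1); valid for z > z_vir."""
    G = 6.674e-11
    rho_cr0 = 3 * H0**2 / (8 * np.pi * G)
    z_turn = 2**(2 / 3) * (1 + z_vir) - 1
    # invert 1+z = (1+z_turn) * ((theta - sin theta)/pi)**(-2/3) for theta;
    # the left-hand side decreases monotonically in theta, so the root is unique
    f = lambda th: (1 + z_turn) * ((th - np.sin(th)) / np.pi)**(-2 / 3) - (1 + z)
    theta = brentq(f, 1e-4, 2 * np.pi - 1e-9)
    return 4.5 * np.pi**2 * ((1 + z_turn) / (1 - np.cos(theta)))**3 * rho_cr0 * Omega_m
```

At turnaround (θ = π) this reduces to (9π²/16) times the mean matter density at z_turn, as expected for the spherical top-hat model.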
The evolution of ionization and temperature until the maximum expansion of the halo (z=z_turn) is not calculated with our own model. Instead, we utilize RecFast <cit.>, which accurately computes the recombination processes in the early universe. In this computation, however, the density does not evolve monotonically as in the original RecFast code, but follows the evolution dictated by the aforementioned spherical top-hat model corresponding to each z_vir.
Subsequent to z_ turn, when calculating the evolution of baryonic gas as it contracts, the thermal evolution code is switched to the one used in this study, evolving until the density reaches the virialization density.
As outlined above, for clouds at each formation epoch z_ vir, the state at z_ turn, which serves as the initial condition for our thermal evolution calculations, is uniquely determined by the density as prescribed by Equation (1). Additionally, the temperature and ionization degree are obtained by solving the cosmic recombination thermal evolution up to the time z_ turn
using the RecFast code, based on the density evolution given by Equation (1).
RecFast does not account for molecular reactions such as those involving hydrogen molecules. Therefore, for the abundance of hydrogen molecules, we adopted the values presented as a function of
z in Figure 4 of Galli and Palla (1998), who solved detailed chemical reactions and temperature evolution in a uniform early universe intergalactic medium (IGM).
The abundances of other minor species were set to be zero. These assumptions regarding chemical abundances have no significant impact on our results.
§.§ Post-Virialization Evolution
Once the density of the high-density region under consideration reaches the virialization density ρ_ vir, DM = 8 ρ_ DM(z_ turn), where ρ_ DM denotes the dark matter density at z_ turn, we consider the dark matter to have virialized, and its density remains constant at this value thereafter. Note that since z_ vir corresponds to the time when the density diverges in the top-hat model, the time when the density reaches the virialization density is slightly earlier than this. On the other hand, the CMB temperature T_ CMB varies with time until z_ vir, after which it is fixed to the value corresponding to each virialization epoch.
Meanwhile, the baryonic gas initiates a runaway collapse <cit.>. Assuming nearly free fall, we increase its density according to the timescale t_ ff = √(3π/32Gρ_ tot) as follows:
dρ/dt = ρ/t_ ff.
It should be noted that runaway collapse can only occur if the gas mass consistently exceeds the instantaneous Jeans mass throughout the protostellar collapse.
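In practice, this free-fall density evolution can be marched forward with a simple explicit scheme. The sketch below is our own construction (the paper does not specify its integrator); it freezes the dark matter density at its virialization value and advances the gas density on a small fraction of the instantaneous free-fall time.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def t_ff(rho_tot):
    """Free-fall time for the instantaneous total (gas + dark matter) density."""
    return np.sqrt(3 * np.pi / (32 * G * rho_tot))

def collapse_track(rho_gas, rho_dm, rho_end, step=1e-3):
    """Integrate d(rho)/dt = rho/t_ff with explicit Euler; each step advances
    time by step * t_ff, so the gas density grows by a factor (1 + step)."""
    rho, t, history = rho_gas, 0.0, []
    while rho < rho_end:
        t += step * t_ff(rho + rho_dm)  # DM density held fixed after virialization
        rho *= 1 + step                 # d(rho) = (rho / t_ff) * dt
        history.append((t, rho))
    return history
```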
The temperature evolution of the gas is computed by solving the energy equation:
de/dt = -Pd/dt(1/ρ) - Λ_ net,
where the pressure is given by P = ρ k_B T/μ m_H, the specific internal energy by e = P/(γ_ ad - 1)ρ, and Λ_ net represents the net cooling rate per unit mass. The other symbols retain their usual meanings.
Here, the considered cooling and heating processes for Λ_ net include line emission from the primordial gas (such as H Lyα, H_2, and HD) and continuum emission (mainly H_2 collision-induced emission (CIE) and H^- free-bound emission), as well as heating and cooling processes associated with chemical reactions. Additionally, in this study, the influence of the early universe's CMB is considered, incorporating the Compton cooling process. The cooling rate for this process is provided by the formula from <cit.> (equation B23).
Furthermore, in <cit.>, the calculation of line cooling rates was conducted using fitting functions without solving for the occupation numbers of energy levels. In our case, to account for the effect of heating by the CMB, we replaced the cooling rate with Λ(T)-Λ(T_ CMB) during the computation.
The continuum cooling rate was computed in <cit.> by assuming local thermodynamic equilibrium (LTE) and utilizing the Planck mean opacity. This approach is a good approximation for typical first star formation scenarios, where only H_2 CIE is crucial for cooling among continuum processes. However, in our calculations for star formation at extremely high redshifts, where clouds can reach temperatures of several thousand degrees, there are opacity regimes not covered by available data, and chemical equilibrium might not hold as usual. Therefore, assuming local thermodynamic equilibrium for computing the cooling rate may no longer be a good approximation. To address this issue and consistently provide continuum cooling rates even in such cases, we computed the continuum cooling rate (Table A2) as well as Planck and Rosseland mean opacity (Tables A3 & A4) using the method outlined in <cit.>.
The optical depth for continuum radiation is treated in the grey approximation. Taking into account scattering effects, it is computed as τ=√(τ_ Rτ_ P)
<cit.>, where τ_ R and τ_ P represent the optical thickness of the cloud estimated using Rosseland and Planck mean opacity, respectively. It is worth noting that while Rosseland mean opacity considers both absorption and scattering, Planck mean opacity only accounts for pure absorption.
The size of the cloud is determined based on the following considerations. We model the star-forming cloud as uniform, corresponding to solving the evolution of the central region of the high-density core undergoing runaway collapse. The size of this central region is approximately the local Jeans length at that time, given by λ_ J = √(π k_ B T/Gμ m_ Hρ_ tot), where ρ_ tot denotes the instantaneous total density. When considering optical thickness, we estimate the cloud size as λ_J. Using the instantaneous gas density ρ, the optical thickness is given by τ_ R, P = κ_ R, Pρλ_ J.
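In code, the cloud size and the effective grey optical depth described above reduce to two short functions (a minimal sketch; the variable names are ours):

```python
import numpy as np

G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27  # SI units

def jeans_length(T, mu, rho_tot):
    """Local Jeans length, adopted as the size of the uniform central region."""
    return np.sqrt(np.pi * k_B * T / (G * mu * m_H * rho_tot))

def tau_eff(kappa_R, kappa_P, rho, size):
    """Effective grey optical depth with scattering, tau = sqrt(tau_R * tau_P),
    with tau_{R,P} = kappa_{R,P} * rho * size."""
    return np.sqrt((kappa_R * rho * size) * (kappa_P * rho * size))
```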
The abundance of chemical species is determined by solving the chemical reactions of primordial gas with the temperature and density evolution computed as described above. Here, we consider a total of 23 species composed of H, D, He, and Li: H, H_2, e^-, H^+, H_2^+, H_3^+, H^-, He, He^+, He^2+, HeH^+, D, HD, D^+, HD^+, D^-, Li, LiH, Li^+, Li^-, LiH^+, Li^2+, and Li^3+, and we solve the reactions between them, as listed in Table 1 of <cit.>. All reactions, including forward and reverse reactions, are paired (107 pairs in total), and the reverse reaction rates are calculated from the equilibrium constants using the forward reaction rates. Therefore, when the density becomes sufficiently high, the chemical composition is modeled to approach the correct chemical equilibrium values given by the Saha equations. Additionally, we include the photodissociation reaction of H_2 as described below.
In <cit.>, photochemical reactions were considered in cases when the cloud becomes optically thick and the radiation field within it is dominated by thermal radiation.
The radiation intensity is given by
J_ν= (1-e^-τ)B_ν(T),
where B_ν(T) is the Planck function
and the photochemical reaction coefficient is calculated as
k_ dissoc= (1-e^-τ)k_ assoc K_ eq(T),
where τ represents the optical thickness of the cloud for continuum radiation, k_ assoc is the reverse radiative association reaction coefficient and K_ eq(T) is the equilibrium constant.
However, in our case, since strong CMB radiation irradiates the cloud from outside, we incorporate the CMB into the radiation field as well:
J_ν=(1-e^-τ)B_ν(T)+ e^-τB_ν(T_ CMB),
and accordingly modify the photochemical reaction coefficient as:
k_ dissoc = (1-e^-τ) k_ assoc(T) K_ eq(T)
+ e^-τ k_ assoc(T_ CMB) K_ eq(T_ CMB).
In our case, particularly important photochemical reactions include the photodetachment of H^- and photodissociation of H_2^+. The reaction rates for these species are treated as described above. Additionally, we incorporate the photodissociation of H_2, which was not considered in <cit.>. The reaction rate is determined using the expression from <cit.>.
However, as we will discuss later, there were no instances where photodissociation of H_2 due to the CMB became significant.
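Schematically, the modified coefficient can be implemented as below (a sketch; k_assoc and K_eq stand for the user-supplied radiative-association rate coefficient and equilibrium constant of the particular reaction, e.g., H^- photodetachment or H_2^+ photodissociation):

```python
import numpy as np

def k_photo(tau, T, T_cmb, k_assoc, K_eq):
    """Photo-reaction coefficient combining the cloud's own thermal radiation
    (weight 1 - e^-tau) with the CMB leaking in from outside (weight e^-tau)."""
    w = np.exp(-tau)
    return (1 - w) * k_assoc(T) * K_eq(T) + w * k_assoc(T_cmb) * K_eq(T_cmb)
```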
§ RESULTS
In this section, we present the results of the thermal and chemical evolution of primordial gas clouds as they collapse to form protostars at each virialization epoch z_ vir=20-700. We investigate the thermal processes underlying their collapse, with particular emphasis on the influence of the CMB at high redshifts.
§.§ Early Cosmic Expansion Phase
First, we see the initial evolution of primordial gas clouds, starting with their expansion due to cosmic expansion, reaching maximum expansion, followed by contraction towards virialization, resulting in the onset of runaway collapse of cloud cores driven by self-gravity.
The temperature evolution of primordial clouds at such early phases is shown in Figure <ref> for different formation epochs z_ vir=20, 100, 200, ..., 700, as a function of number density. We also depict the CMB temperature when each cloud reaches its respective density by dashed lines.
Shown is the evolution from a pre-recombination epoch of z=2000 downward. Each cloud experiences expansion initially, following the cosmic expansion, then reaches the maximum expansion before starting contraction. Note that both the virialization density, which is 18π^2 times the mean density of the universe at that time, and the density at maximum expansion, which is 1/8 of the virialization density, are proportional to (1+z_ vir)^3, reflecting the higher average density in the universe.
At very early phases, the gas temperature decreases for all values of z_vir. Throughout the early expansion phase, the gas temperature remains approximately equal to the CMB value for the cases with z_vir ≳ 100, since the gas is tightly coupled to the CMB thermally via Compton scattering. Consequently, the slope of the thermal evolution is shallower than the adiabatic one (d ln T/d ln n = 2/3).
In cases of lower formation redshift, for example z_vir=20, corresponding to the standard epoch of Population III star formation, the gas is initially tightly coupled to the CMB (at T ≳ 600 K, i.e., z ≳ 200), but at lower temperatures and redshifts it thermally decouples from the CMB, resulting in adiabatic expansion.
For z_ vir≳ 200,
even after reaching the maximum expansion and after the onset of the collapse, the temperature continues decreasing further
due to Compton coupling. Subsequently, with further compression, as heating due to gravitational compression exceeds the Compton cooling,
whose effect weakens also due to electron recombination in clouds, the temperature begins to rise with increasing density, eventually turning to adiabatic contraction.
At lower z_ vir, the temperature rises almost adiabatically from just after the maximum expansion.
Recall that the maximum Jeans mass during the contraction/virialization phases is a few 10^5 M_⊙ across all cases considered. This represents the minimum gas mass required for further collapse and subsequent star formation. Correspondingly, the required halo mass is M_ vir = a few 10^6 M_⊙.
It should be noted that we are considering haloes that are larger than this mass.
These haloes possess a compact virial radius, given by:
r_vir = [ 3 M_vir / (4π · 18π^2 ρ_crit Ω_m (1+z)^3) ]^(1/3)
= 30 pc (M_vir/10^6 M_⊙)^(1/3) (100/(1+z)),
where ρ_ crit is the critical density of the universe,
with dimensions similar to those of local molecular clouds.
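A quick numerical check of this scaling (our sketch; SI constants and the adopted cosmological parameters are assumed):

```python
import numpy as np

G, Msun, pc = 6.674e-11, 1.989e30, 3.086e16
H0, Omega_m = 70e3 / 3.086e22, 0.24
rho_cr = 3 * H0**2 / (8 * np.pi * G)      # current critical density

def r_vir(M_vir, z):
    """Virial radius of a halo of mass M_vir (kg) virializing at redshift z."""
    rho_vir = 18 * np.pi**2 * rho_cr * Omega_m * (1 + z)**3
    return (3 * M_vir / (4 * np.pi * rho_vir))**(1 / 3)

print(r_vir(1e6 * Msun, 99) / pc)   # ~35 pc, consistent with the ~30 pc scaling above
```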
§.§ Thermal Evolution in Star-forming Cloud Cores
Initially, baryonic gas and dark matter
contract in the same way. After virialization of the halo, while the dark matter density becomes constant, the gas contracts further and its density continues increasing owing to energy dissipation due to radiative cooling as long as the halo is massive enough.
The gravitational collapse of such high-density cloud cores eventually leads to the formation of protostars. Here, we examine the thermal evolution of star-forming clouds at these higher densities.
Figure <ref> illustrates the thermal evolution of cloud cores forming at various cosmological epochs, over a much wider range of densities than in Figure <ref>, extending up to the densities typical of protostar formation.
For cases with z_ vir≤ 400, where H_2 serves as an important coolant, we also present the evolution of H_2 abundance and ionization fraction in Figure <ref>. Recall that the evolution of the ionization degree is calculated using the RecFast code up to the maximum expansion (represented by the lightly colored portion of the dash-dotted curves in the figure), while the initial H_2 abundance is derived from the IGM values provided by <cit.>.
In the subsequent discussion, we commence with the familiar case of a first-star forming cloud at a usual epoch of z_ vir=20 and progressively explore the evolution of those forming at earlier epochs, i.e., higher z_ vir.
§.§.§ Standard collapse sequence for first star formation: z_vir=20
In this section, we revisit the well-known thermal evolution of first-star forming clouds
at a conventional formation epoch, z_vir=20, which serves as a benchmark for comparison with those at higher-z cases to be discussed later.
For this case, the temperature evolution is illustrated by the purple line in Figure <ref>. Additionally, the evolution of hydrogen molecule and electron abundances within the cloud as a function of density is depicted by the purple lines in Figure <ref>. Furthermore, Figure <ref> (a) displays the contribution of individual processes to the cooling and heating rates at various densities.
Initially, up to n_H∼ 1 cm^-3, the temperature increases adiabatically with density due to the absence of effective radiative cooling. During this phase, compressional heating dominates among the cooling/heating processes (as observed in Figure <ref> a). Once the density reaches n_H∼ 1 cm^-3, the temperature exceeds ≳ 1000 K, facilitating H_2 formation via the H^- channel reaction:
H + e →H^- + γ
H^- + H →H_2 + e
This leads to the conversion of about 1/1000 of hydrogen into molecular form, but the H_2 abundance saturates at around this value due to the recombination of electrons, catalysts of the above reaction (Figure <ref>). Consequently, the star-forming clouds cool via H_2 line emission, causing the temperature to decrease (Figure <ref>).
As the density increases to n_H∼ 10^3-10^4 cm^-3, the temperature reaches a minimum value of 200K, where the rotational level population of hydrogen molecules attains LTE. Beyond this density, radiative cooling becomes less efficient compared to compressional heating, resulting in a gradual temperature rise.
Subsequently, at densities exceeding n_H > 10^9cm^-3, another phase of hydrogen molecule formation commences through three-body reactions, eventually transitioning the primordial gas to an almost fully molecular state (Figure <ref>). Despite a slight drop in temperature (Figure <ref>) accompanying this increase in H_2 abundance and subsequent cooling, the chemical heating associated with H_2 formation and photon trapping counterbalances the cooling effect (Figure <ref> a). Consequently, the temperature gradually increases once again.
However, at densities around n_H∼ 10^13-10^14 cm^-3, continuum cooling via H_2 collision-induced emission becomes effective (Figure <ref> a), momentarily decreasing the temperature (Figure <ref>). Yet, this cooling is transient, as densities exceeding n_H∼ 10^16 cm^-3 render the cloud cores optically thick to continuum absorption, severely limiting radiative cooling efficiency. Consequently, the temperature rises adiabatically, triggering hydrogen molecule dissociation. Although chemical cooling temporarily mitigates the temperature increase during dissociation, once most hydrogen molecules dissociate, the temperature rises rapidly (Figure <ref>). Subsequently, with no effective cooling processes at n_H≳ 10^20-10^21 cm^-3, the gas temperature increases almost adiabatically, leading to the formation of a hydrostatic protostar as the increasing pressure counterbalances the gravitational pull. This nascent star initially possesses a very small mass (∼ 10^-3 M_⊙), corresponding to the local Jeans mass at its birth, but the mass subsequently increases as gas accretion from the surrounding medium ensues, ultimately determining the final stellar mass, a discussion of which is reserved for Section 4.
§.§.§ Diminished H_2 cooling due to CMB:
z_vir =100-400
In the case of z_ vir=100,
the minimum temperature, which is attained when the H_2 level populations reach LTE around n_H=10^3-10^4 cm^-3, is approximately 300 K, somewhat higher than the standard value of 200 K observed in the z_vir=20 case (Figure <ref>). This difference arises from the inability of radiative cooling to lower the temperature below the CMB temperature, the so-called CMB floor (see Figure <ref>).
Although the density at which the temperature reaches around 1000 K is slightly higher than at z_ vir=20, resulting in a slightly higher onset density for H_2 formation via the H^- channel, the amount of formed H_2 and subsequent thermal and chemical evolution remain largely unchanged compared to the z_ vir=20 case (Figure <ref>).
For higher z_ vir, however, not only is the CMB floor higher (Figure <ref>), but also the amount of formed H_2 itself is significantly lower, by an order of magnitude or more, at densities below n_ H≲ 10^9 cm^-3, before the onset of three-body reactions (Figure <ref>). This reduction arises from the destruction of intermediate species H^-, which forms H_2, due to photodetachment by CMB photons, hindering H_2 formation (a quantitative discussion of this effect will be provided in <ref> later). Consequently, the efficiency of H_2 cooling decreases, leading to higher temperatures (Figure <ref> and Figure <ref> b).
For z_vir = 200-400, the minimum temperature reaches approximately 10^3 K in all cases (Figure <ref>). Once the density exceeds n_H≳ 10^8 cm^-3, H_2 formation via three-body reactions initiates, and the subsequent evolution mirrors that of the standard case of star formation at later epochs. Note also that, for z_ vir=300-400, the temperature exhibits a somewhat abrupt decrease at n_ H=10^8-10^10 cm^-3 as H_2 cooling via three-body reactions begins, owing to the higher temperature before that.
§.§.§ Direct collapse via atomic cooling: z_vir > 500
The thermal evolution of clouds forming at earlier epochs, i.e., higher z_ vir, differs completely. As evident from Figures <ref> and <ref>, once Compton-decoupled from the CMB, the temperature continues to rise, reaching as high as 5000-6000 K. Subsequently, the evolution proceeds nearly isothermally until a density as high as n_H∼ 10^16 cm^-3.
This behavior closely resembles the so-called direct collapse scenario, observed at lower z_vir ∼ 10, where H_2 formation and its subsequent cooling are inhibited by such processes as far-ultraviolet irradiation <cit.>, dense shock from cold streams <cit.>, and dynamical heating from other halo mergers <cit.>. In such cases, the collapse occurs solely due to atomic cooling (specifically, H^- free-bound emission).
However, in the current study, the inhibition of H_2 formation is attributed to H^- photodetachment by the CMB.
It should be noted that in the conventional direct collapse scenario at z ∼ 10, H Lyα cooling becomes significant in the low-density regime (n_ H<10^5 cm^-3) while, in our case, within this density range, Compton cooling becomes crucial, followed by cooling via H^- free-bound emission at higher densities (illustrated as "cont" in Fig. <ref>c), as in the standard case.
Following this thermal evolution, star-forming clouds experience minimal fragmentation, undergoing monolithic collapse into massive clumps at their centers <cit.>. Ultimately, supermassive stars exceeding approximately 10^5 M_⊙ are formed. It is conjectured that these supermassive stars undergo gravitational collapse due to general relativistic instabilities, known as post-Newtonian instability, eventually leading to the formation of black holes <cit.>.
§.§ Quantitative analysis of photo-reactions
§.§.§ H^-, H_2^+ photodissociation
So far, we have observed that H_2 formation reactions
in star-forming clouds at extremely high redshifts
are inhibited by CMB photons, resulting in higher temperatures compared to the standard case of z_ vir=20. Now, we quantitatively estimate when such photo-chemical reactions become significant.
First, let us examine the thermal evolution and the H_2 and e abundances without considering H^- photodetachment. These are illustrated in Fig <ref>, showing significant differences from Figs <ref> and <ref>, where H^- photodetachment is properly considered. In Fig <ref>(b), molecular hydrogen is abundant, with a fraction of 10^-3-10^-2, especially at higher z, owing to the increased gas temperature. The temperatures reach the CMB floors, leading to isothermal evolution at low densities (Fig <ref> a). In particular, a significant difference in the H_2 abundance is observed at z_vir ≳ 200.
This can be understood from the following analysis.
At low densities, H_2 formation primarily occurs through the H^- channel. However, in the presence of radiation fields (above a threshold of >0.76 eV), H^- ions are photo-detached. The relevant chemical reactions for this channel can be described as follows:
H + e ⇄ H^- + γ (radiative attachment with rate coefficient k_1; reverse photodetachment with rate k_2),
H^- + H → H_2 + e (associative detachment with rate coefficient k_3).
The fraction of H^- ions formed via the first reaction that goes on to form H_2 is k_3 n_H/(k_2 + k_3 n_H). Thus, the effective rate coefficient of H_2 formation is given by
k_form = k_1 k_3 n_H/(k_2 + k_3 n_H).
This effective H_2 formation reaction coefficient is plotted in Fig <ref> (red line). Representative values of gas density and temperature (n_H=1000 cm^-3, T=1000 K) are used. From this figure, it is evident that the formation rate via the H^- channel significantly decreases under the influence of the CMB for T_ CMB≳ 400 K (z ≳ 130).
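To reproduce this suppression numerically, one can insert representative rate fits into the effective coefficient. The specific fitting formulas below are illustrative ones of the kind compiled by Galli & Palla (1998), so the exact numbers should not be over-interpreted:

```python
import numpy as np

k1 = lambda T: 1.4e-18 * T**0.928 * np.exp(-T / 16200)         # H + e -> H^- + gamma [cm^3/s]
k3 = lambda T: 1.3e-9                                          # H^- + H -> H2 + e    [cm^3/s]
k2 = lambda T_cmb: 0.11 * T_cmb**2.13 * np.exp(-8823 / T_cmb)  # H^- photodetachment  [1/s]

def k_form(n_H, T, T_cmb):
    """Effective H2 formation coefficient through the H^- channel."""
    return k1(T) * k3(T) * n_H / (k2(T_cmb) + k3(T) * n_H)

# suppression relative to a negligible radiation field, at n_H = 10^3 cm^-3, T = 10^3 K
for z in (100, 130, 200, 400):
    print(z, k_form(1e3, 1e3, 2.73 * (1 + z)) / k_form(1e3, 1e3, 2.73))
```

With these fits the suppression factor falls below unity around z ∼ 130 and drops by orders of magnitude by z ∼ 200, in line with the statement above.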
In addition to the H^- channel, there exists another H_2 formation pathway called the H_2^+ channel, catalyzed by H^+ ions, where H_2^+ acts as an intermediate:
H + H^+ ⇄ H_2^+ + γ (radiative association with rate coefficient k_1; reverse photodissociation with rate k_2),
H_2^+ + H → H_2 + H^+ (rate coefficient k_3).
In the absence of strong radiation fields, this pathway is less significant compared to the H^- channel due to lower formation rates by approximately two orders of magnitude. However, when the H^- channel is blocked by radiation fields such as the CMB, the H_2^+ channel becomes important because of the higher photo-dissociation threshold of H_2^+ (2.79 eV from the ground state). The effective formation rate through this channel is also plotted in Fig <ref> (blue line). It is evident from this figure that the formation rate via the H_2^+ channel surpasses that of the H^- channel for z ≳ 200. The H_2 formation via the H_2^+ channel has been confirmed to be effective in star-forming regions at such high-redshift epochs
also by cosmological simulations by <cit.>.
Furthermore, at z ≳ 400, the formation rate via the H_2^+ channel also decreases significantly due to photodissociation of H_2^+. As a result, H_2 formation is strongly suppressed at higher redshifts, and gas cooling relies solely on atomic cooling. This is consistent with the findings in Sections 3.1 and 3.2, indicating that the collapse of star-forming clouds during this epoch follows the so-called direct collapse scenario.
§.§ H_2 photodissociation
In addition to the indirect inhibition of H_2 formation due to the destruction of intermediate products such as H^- and H_2^+ via their photodetachment, the H_2 abundance can also be reduced by photodissociation of H_2 itself. Here, we aim to estimate when the latter effect becomes significant.
In a cloud collapsing at the free-fall rate, the H_2 formation reaction proceeds as much as possible in the free-fall time t_ ff, producing H_2 with its fraction y(H_2) ≃ k_ form n_H x_e t_ff, where x_e is the ionization degree. In other words, the timescale for molecular hydrogen formation also becomes t_ ff. Estimating this with the virialization density ρ_ vir, we have:
t_ff = √(3π/(32 G ρ_vir)) ≃ 2.98 × 10^12 (1081/(1+z))^(3/2) [s].
On the other hand, the timescale for photodissociation is given by the inverse of the photodissociation rate, k_ pd^-1. Hence, the condition for photodissociation to become significant is when t_ ff>k_ pd^-1.
Since the photon energy at which photodissociation occurs (≃ 12.4 eV) lies above the peak of the CMB spectrum, using the Wien approximation and neglecting the shielding effect, we have:
1/k_pd ≃ 1.8 × 10^-9 exp(52800/(1+z)) [s].
This becomes smaller than t_ ff when 1+z > 1081. Furthermore, due to the exponential dependence of the photodissociation rate on 1+z, it is evident that photodissociation becomes unimportant very rapidly as z decreases. Thus, in the post-recombination era we are considering (z < 1000), photodissociation of H_2 by the CMB is entirely negligible.
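The two closed-form timescales can be compared directly:

```python
import numpy as np

t_ff = lambda z: 2.98e12 * (1081 / (1 + z))**1.5    # free-fall time [s]
t_pd = lambda z: 1.8e-9 * np.exp(52800 / (1 + z))   # H2 photodissociation time [s]

for z in (700, 900, 1080, 1200):
    print(z, t_ff(z) / t_pd(z))   # exceeds unity only for 1+z > ~1081
```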
§ DISCUSSION
§.§ Estimating the mass of forming stars
Here, by using the temperature evolution of the star-forming clouds obtained in the preceding section, we estimate the masses of stars formed. First, as an upper limit for the mass of stars, we evaluate the mass of dense cores, which serve as the mass reservoir for star formation.
When the temperature in clouds decreases with increasing density while undergoing gravitational collapse, the clouds tend to become filamentary in shape. Later, when the temperature begins to increase, these clouds fragment into clumps that are about the size of the Jeans mass at that moment, leading to the formation of dense cores <cit.>.
Even when fragmentation is not as efficient as envisaged above, a temporary slowdown in contraction occurs due to the pressure force, known as the "loitering phase," when the temperature shifts from decreasing to increasing <cit.>. Following this, a runaway collapse begins as sufficient mass, exceeding the Jeans mass, accumulates in the central region, leading to the formation of a dense core. Therefore, the Jeans mass at the density where temperature is minimized serves as an indicator of the mass of dense cores <cit.>, as plotted in the figure illustrating the temperature evolution of the star-forming clouds as a function of density (Figure <ref>a). The Jeans mass at this temperature minimum for each formation epoch is plotted in Figure <ref> b. For typical first stars (at z_ vir=20), this Jeans mass is approximately 2000 M_⊙, consistent with the mass of dense cores obtained from numerical simulations <cit.>.
As redshift increases, the temperature rises while the fragmentation density remains relatively constant. At z=150, however, this density is slightly higher because the almost isothermal temperature evolution makes the location of the temperature minimum delicate, leading to a temporary decrease in the fragmentation mass. Nonetheless, as the redshift exceeds 200, larger dense-core masses are observed, reaching approximately 7000 M_⊙.
The estimated mass of dense cores is significantly larger than the mass determined by stellar radiative feedback, as estimated below, indicating the abundant mass reservoir for star formation. For the case of direct collapse at z>500, there is no significant temperature decrease after virialization. Hence, the gas collapses nearly isothermally without fragmentation, i.e., monolithic collapse. The necessary gas mass for this collapse is given by the maximum Jeans mass during the collapse evolution, exceeding 10^5 M_⊙.
Within these dense cores, after the runaway collapse, the central regions become extremely dense, reaching densities as high as 10^20 cm^-3. At this point, the temperature evolution becomes adiabatic, and gravitational collapse is halted by pressure gradient forces, leading to the formation of protostars in hydrostatic equilibrium <cit.>. Initially, the mass of these protostars is very small, but it increases subsequently due to accretion from the surrounding gas.
The accretion rate is related to the temperature, hence the sound speed c_ s, of the dense core and is given by:
Ṁ≃ϕc_ s^3/G.
Here, ϕ is a dimensionless number typically ranging from O(1) to O(10), representing the degree to which the collapse toward protostar formation proceeds dynamically <cit.>. For instance, in the Larson-Penston solution <cit.>, which is
the self-similar solution corresponding to the dynamic limit of protostar formation through runaway collapse, ϕ=47. In contrast, for the Shu solution <cit.>, which is the self-similar solution where gas accretes statically from unstable hydrostatic equilibrium cores, ϕ=0.975. When collapse begins from realistic initial conditions, it is known to take values between these extremes <cit.>.
A protostar growing in this manner often has its final mass determined by its own radiative feedback, which stops accretion before all the material within the dense core, acting as a mass reservoir, accretes onto the star <cit.>.
This radiation feedback, in the case of first stars, occurs when the stellar mass reaches several tens of M_⊙ <cit.>. At this point, intense ultraviolet radiation from the star irradiates the circumstellar accretion disk, causing the gas in the disk to evaporate due to ionization heating, thereby halting accretion.
<cit.> conducted 2D radiation hydrodynamic simulations of primordial-gas clouds within minihalos, chosen from cosmological simulations as potential sites for first star formation. They investigated the final mass of stars due to radiation feedback during their growth. Their findings revealed a strong correlation between the accretion rate on the scale of dense cores and the mass of stars formed, which can be expressed by the following equation <cit.>:
M_∗ = 250 M_⊙(Ṁ/2.8 × 10^-3 M_⊙ yr^-1)^0.7.
On the other hand, <cit.> performed similar calculations for more massive atomic cooling halos, which are expected to give rise to more massive stars. They found a relationship between mass and accretion rate with even stronger dependence. Their numerical results were consistent with an analytical model of photoevaporation of circumstellar disks <cit.>. This relationship, although not expressible by a simple power-law, can be understood by referring to their Figure 10.
In our calculations, we estimate the accretion rate on the scale of dense cores based on their temperature, i.e., the minimum temperature. In doing so, we chose the dimensionless number ϕ to be 8.06 to ensure that the accretion rate estimated from the typical temperature evolution during first star formation in our calculations (at z=20) matches the typical value of the accretion rate reported by <cit.>, which is 3× 10^-3 M_⊙ yr^-1.
The typical accretion rate obtained using this approach for each redshift is depicted in Figure <ref>. Accretion rates increase with higher redshifts, reflecting higher core temperatures. For instance, at redshifts below 100, typical accretion rates match the value of 3× 10^-3 M_⊙ yr^-1 for standard first star formation. However, at redshifts ranging from 200 to 400, they increase to approximately 2-4 × 10^-2 M_⊙ yr^-1, roughly an order of magnitude higher.
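The chain from minimum core temperature to accretion rate to final stellar mass can be reproduced with a few lines (our sketch; the mean molecular weight mu = 1.22 for atomic primordial gas is our assumption, while phi = 8.06 is the calibration described above):

```python
import numpy as np

G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27
Msun, yr = 1.989e30, 3.156e7
phi, mu = 8.06, 1.22

def mdot(T_min):
    """Accretion rate phi * c_s^3 / G in Msun/yr for core temperature T_min (K)."""
    c_s = np.sqrt(k_B * T_min / (mu * m_H))
    return phi * c_s**3 / G * yr / Msun

def m_star(md):
    """Final stellar mass from the feedback relation quoted above."""
    return 250 * (md / 2.8e-3)**0.7

print(mdot(200), m_star(mdot(200)))     # ~3e-3 Msun/yr, ~260 Msun (z_vir = 20)
print(mdot(1000), m_star(mdot(1000)))   # ~3e-2 Msun/yr, >1000 Msun (z_vir = 200-400)
```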
The estimated masses of stars based on these accretion rates are depicted in Figure <ref>, with plots corresponding to the relationships proposed by <cit.> and <cit.>. Since our case encompasses various cases from molecular to atomic cooling, it is not entirely clear which relationship is more appropriate. Therefore, we discuss each relationship separately.
If we adopt the relationship proposed by <cit.>, the mass of stars ranges from 260 M_⊙ at z=20 to increasing masses at earlier cosmic epochs, exceeding 1000 M_⊙ for z>200 and reaching 1660 M_⊙ at z=400. On the other hand, <cit.>'s relationship shows a stronger dependence of mass on accretion rate. At z=20, the mass is 74 M_⊙, but it rapidly increases to over 1000 M_⊙ at z=200 and reaches 3800 M_⊙ at z=400.
Regardless of which relationship is employed, the formation of very massive stars, approximately 1000 M_⊙ in mass, occurs for z>200. Moreover, in cases where z>500, gigantic clouds undergo direct collapse, giving rise to supermassive stars. In these instances, protostars formed at the core of the clouds rapidly become supermassive at tremendous accretion rates of 0.1-1 M_⊙ yr^-1. They quickly reach masses of approximately 10^5 M_⊙ before collapsing into black holes due to general relativistic instabilities.
§.§ Caveats
§.§.§ Impact of Compton drag on gas dynamics
In our calculations, we have assumed nearly free-fall, runaway collapse for the dynamics of star-forming clouds. However, in the early universe where the radiation energy is high, consideration must be given to radiative viscosity, commonly known as Compton drag <cit.>. The Compton drag force acting on unit mass of gas moving with peculiar velocity v relative to the cosmic expansion is given by:
f_drag = (4/3) σ_T a T_CMB^4 n_e v/(ρ c)
≃ σ_T a T_CMB^4 x_e v/(m_H c).
Here, σ_T is the Thomson scattering cross-section, a the radiation energy constant, n_e the electron number density, and x_e the ionization degree. Consequently, the timescale on which drag influences gas motion is given by:
t_ drag = m_ H c/σ_ T a T_ CMB^4 x_ e.
Comparing this timescale with the timescale of cosmic expansion, characterized by the Hubble time:
t_ H = H_0^-1Ω_m^-1/2 (1+z)^-3/2,
we find that the era when Compton drag becomes significant:
1+z > 140 x_ e^-2/5.
This indicates the necessity of considering the drag force for ionized gas at z ≳ 100-200. On the other hand, the star-forming clouds under consideration are predominantly neutral after the cosmic recombination, with a low ionization fraction of approximately x_e≃ 10^-4. Thus, drag can be neglected, justifying our assumptions.
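A one-line check of this criterion for the mostly neutral clouds considered here:

```python
x_e = 1e-4                   # typical post-recombination ionization degree
print(140 * x_e**(-2 / 5))   # ~5600: drag is relevant only for ionized gas
```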
§.§.§ Impact of baryon streaming motions
Supersonic coherent baryonic flows, known as streaming motions, exist just after cosmic recombination. These flows have root-mean-square velocities of σ_rms ≃ 30 km/s on scales of a few comoving Mpc <cit.>. The relative streaming motions between dark matter and baryons can lead to a suppression of halo formation in the early universe and a reduction in the gas content within these haloes, thereby increasing the minimum mass required for cooling and star formation <cit.>. This effect significantly delays the formation of the very first stars in the universe at z ≃ 60-70, with an estimated delay time of up to 3 Myr in standard cosmology <cit.>.
By analyzing the results of previous numerical simulations, <cit.> proposed the following equation to determine the minimum cooling threshold for halos in terms of the circular velocity:
V_cool(z) = √((3.714 km/s)^2 + [4.015 · v_bc(z)]^2).
In this equation, the first term under the square root on the right-hand side represents the threshold for molecular cooling, while the second term accounts for the effect of baryon streaming motions. It is important to note that the streaming motions are decelerated by cosmic expansion, as described by:
v_bc(z) = v_bc,rec ((1+z)/1100),
where v_bc,rec is the streaming velocity at the time of recombination (z=1100).
Below we estimate the influence of streaming velocities on the formation of halos and the stars within them. Using the most common velocity in the distribution, v_bc,rec = 0.82 σ_rms, and comparing the first and second terms under the square root in equation (<ref>), we observe that the effect of streaming velocity becomes significant at redshifts z > 40.
At the very early epochs z>100 we are considering,
the first term can be neglected:
V_cool(z) = 8.7 km/s ((1+z)/100)
for the most common velocity. The corresponding halo mass is given by:
M_ cool = r_ vir V_ cool^2/G.
Substituting equation (<ref>) for r_ vir
we find
M_ cool = 4 × 10^5 M_⊙(1+z/100)^3/2.
The effects of baryon streaming motions are negligible for halos more massive than this threshold. Since the halos we are considering have virial masses M_ vir > 10^6-10^7 M_⊙ as mentioned in Section 3.1, we expect that streaming motions will not significantly influence the collapse of these halos, except perhaps at the highest redshifts we have considered.
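For reference, the fits above can be tabulated across the redshifts of interest (a sketch; v_bc,rec = 0.82 σ_rms is the most common streaming velocity, as in the text):

```python
import numpy as np

def V_cool(z, v_bc_rec=0.82 * 30.0):    # km/s
    v_bc = v_bc_rec * (1 + z) / 1100    # streaming velocity decays with expansion
    return np.sqrt(3.714**2 + (4.015 * v_bc)**2)

def M_cool(z):
    """Minimum halo mass for cooling, in Msun, from the closed-form estimate above."""
    return 4e5 * ((1 + z) / 100)**1.5

for z in (99, 199, 399, 699):
    print(z, V_cool(z), M_cool(z))
```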
Nevertheless, while gravitational collapse of these clouds leads to star formation despite streaming motions, the dynamics during collapse may still be affected. In regions of high velocity, the fraction of gas remaining in halos might not only decrease, but even if contraction begins, the gas motion could be heavily influenced by streaming motions. Depending on the velocity, this could either promote the fragmentation of gas clouds or inhibit it, leading to contraction in larger clumps. This complex dependency has been demonstrated in numerical simulations by <cit.> for first-star-forming halos forming at a redshift of z ≃ 30.
Investigating how the dynamics of first-star-forming gas behave in response to streaming velocity in the earlier universe remains an intriguing topic for future studies.
§.§ Future observational prospects
From our calculation, first stars in the very early universe (z ≳ 100) typically have masses exceeding several hundred M_⊙. Such stars would collapse without explosion, leaving behind intermediate-mass black holes (IMBHs) <cit.>
although, within a narrow mass range of approximately 5× 10^4 M_⊙, the collapse due to general relativistic instabilities triggers runaway nuclear reactions leading to explosions <cit.>.
On the other hand, in cases where there is a mass distribution among forming stars, those with somewhat smaller masses ranging from 140-260 M_⊙ may also form and subsequently undergo pair-instability supernovae (PISNe) <cit.>. These supernovae, approximately an order of magnitude more energetic than typical core-collapse supernovae, are anticipated to be detectable up to approximately z∼ 10 in forthcoming observations facilitated by instruments such as the Roman Space Telescope or GREX-PLUS <cit.>. However, given the extreme dimness and far-infrared wavelengths associated with stars at z∼ 100, direct detection in the near future, even if these stars exist, would likely prove impossible.
Could detection of first stars in the extreme early universe be feasible through some means? First stars are often formed as binaries
<cit.>. Moreover, it is common for supermassive stars to also form as binaries <cit.>. After the binary stellar evolution, binary BHs can ultimately form, and if they are sufficiently close (<0.1au), they can merge within the age of the universe, making such events observable through gravitational waves.
To estimate when the merger events occur, we need to consider the delay time. This refers to the time taken from the birth of stars to the merger of binary black holes. It is determined by the (very uncertain) separation distance between binaries, which necessitates further research in this area.
If the merger occurs around z ∼ several tens, gravitational waves from these IMBH binary mergers can be detected by the Laser Interferometer Space Antenna (LISA).
Furthermore, if mergers occur even earlier, the realization of a space experiment using cold atoms to search for ultra-light dark matter and to detect gravitational waves, known as the Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE), could make it possible to observe IMBH mergers even at z∼ 1000 <cit.>.
If such IMBH merger events were observed in the early universe, they could also potentially originate from primordial black hole (PBH) binaries. While PBHs can form binaries through their three-body interactions, it is generally believed that their binary fraction is lower compared to those originating from first stars <cit.>. Additionally, the redshift distribution of merger events would differ from those originating from first stars, as stellar BH mergers would occur after a certain period of star formation, leading to distinct characteristics <cit.>.
Detailed modeling of these scenarios through future study is highly desirable. If IMBHs indeed existed during those periods, it could provide a direct explanation for the presence of supermassive black holes (SMBHs) found at high redshifts (z > 6-7) <cit.>.
In this study, we explore scenarios where the first stars form at significantly earlier epochs (e.g., z > 100) than conventionally predicted (typically z = 20-30) due to the possible enhancement of density fluctuations on small (<Mpc) scales. This is motivated by recent JWST observations revealing an unusual abundance of bright galaxies at high redshifts. Specifically, we examine how the formation processes of these first stars might differ from typical trajectories and assess the masses of the stars that result.
Although we have not adopted a specific model for small-scale deviations in the fluctuation spectrum, estimating the number and timing of first star formation necessitates assuming some specific models. As discussed in Section 1, various models are possible, each featuring characteristics such as a blue-tilt, bumps, a running spectral index etc. in the density fluctuation spectrum on small scales. Integrating these models with our findings on the masses of first stars formed at different epochs will be crucial for developing theoretical predictions that meet observational constraints.
Particularly, identifying which models of the small-scale fluctuation spectrum can explain the observed number density of high-z galaxies by JWST, while also meeting various observational constraints, such as the electron scattering optical depth measured by CMB polarization <cit.>, the abundance of dwarf galaxies <cit.>, and
the pulsar timing data
<cit.>, remains a fascinating and challenging task for future research.
§ SUMMARY
We have investigated the formation of first stars in the extremely early universe, z ≳ 100. Employing a one-zone thermochemical model, we have studied the pre-stellar collapse of primordial gas clouds across various formation epochs, ranging from the usual epoch of z=20 to an extremely early epoch of z=700. Our analysis takes into account the influence of the cosmic microwave background (CMB) on radiative heating, Compton cooling, and photodissociation reactions.
We have observed that the influence of the CMB on the evolutionary process is minimal at z ≲ 130, where temperatures closely resemble conventional expectations. However, between 130 ≲ z ≲ 500, H_2 formation via the H^- channel is impeded by CMB-induced photodetachment of H^-, resulting in higher temperatures compared to the standard thermal evolution. Additionally, at z ≳ 500, the temperature evolution becomes nearly isothermal at several thousand kelvins, driven solely by atomic cooling, as the less efficient H_2^+ channel is also blocked by H_2^+ photodissociation and thus H_2 cooling is entirely suppressed.
Furthermore, we have estimated the mass of forming stars by computing the fragmentation mass and the mass accretion rate during the loitering phase. By linking the stellar mass with the accretion rate as proposed by <cit.>, we found that for z<200, stellar masses typically range from 70 to 700 M_⊙, while at z>200 they exceed 1000 M_⊙. For z>500, primordial-gas clouds undergo direct collapse, giving rise to supermassive stars with masses exceeding ∼ 10^5 M_⊙. We conclude that first stars formed at higher redshifts tend to be more massive.
This progression highlights a clear trend: the initial masses of the first stars tend to increase with redshift. The existence of such massive stars in the early universe could potentially be constrained in the future by observations, such as gravitational wave detections from remnant IMBH mergers. Furthermore, the dependency of stellar mass on the formation epoch found in this study will enable us to compute the abundance and timing of the first stars within early galaxies based on specific small-scale fluctuation spectral models. Identifying models that satisfy various observational constraints will be a crucial challenge for future research.
We would like to thank K. Eric Sadanari and Shingo Hirano for their helpful comments.
We also would like to thank the anonymous reviewer for the constructive comments, which were instrumental in improving the manuscript.
This work is financially supported by the Grants-in-Aid for Basic Research by the Ministry of Education, Science and Culture of Japan (KO:22H00149).
for_pasj
Fritsche et al.
Technical University Darmstadt, Darmstadt, Germany
lars.fritsche@es.tu-darmstadt.de
andy.schuerr@es.tu-darmstadt.dePhilipps-Universität Marburg, Marburg, Germany
alexander.lauer@uni-marburg.de
taentzer@mathematik.uni-marburg.de
Using application conditions to rank graph transformations for graph repair
This work was partially funded by the German Research Foundation (DFG), project “Triple Graph Grammars (TGG) 3.0”.
Lars Fritsche^1 [0000-0003-4996-4639], Alexander Lauer^2 [0009-0001-9077-9817], Andy Schürr^1 [0000-0001-8100-1109], Gabriele Taentzer^2 [0000-0000-0000-0000]
May 20, 2024
When using graphs and graph transformations to model systems, consistency is an important concern.
While consistency has primarily been viewed as a binary property, i.e., a graph is consistent or inconsistent with respect to a set of constraints, recent work has presented an approach to consistency as a graduated property.
This allows living with inconsistencies for a while and repairing them when necessary.
When repairing inconsistencies in a graph, we use graph transformation rules with so-called impairment- and repair-indicating application conditions to understand how much repair gain certain rule applications would bring.
Both types of conditions can be derived from given graph constraints.
Our main theorem shows that the difference between the number of actual constraint violations before and after a graph transformation step can be characterized by the difference between the numbers of violated impairment-indicating and repair-indicating application conditions.
This theory forms the basis for algorithms with look-ahead that rank graph transformations according to their potential for graph repair.
An initial evaluation shows that graph repair can be well supported by rules with these new types of application conditions.
§ INTRODUCTION
Graph transformation has proven to be a versatile approach for specifying and validating software engineering problems <cit.>.
This is true because graphs are an appropriate means for representing complex structures of interest, the constant change of structures can be specified by graph transformations, and there is a strong theory of graph transformation <cit.> that has been used to validate software engineering problems.
When applying graph transformation, it is typically important that processed graphs are consistent with respect to a given set of constraints.
Ensuring graph consistency involves two tasks.
First, to specify what a consistent graph is and to check whether a graph is indeed consistent, and second, to ensure that graph transformations preserve or even improve consistency.
Throughout this paper, we consider the Class Responsibility Assignment (CRA) problem as a running example <cit.>.
It is concerned with an optimal assignment of features (i.e., methods and attributes) to classes.
The constraints enforcing that each feature belongs to one and only one class, are invariants for all class model modifying operations.
To validate the quality of a class model, often coupling and cohesion metrics are used.
Related design guidelines like “minimize dependencies across class boundaries” may be formulated as constraints, too.
This example shows that constraints may serve different purposes; some are considered so essential that they must always be satisfied, while others are used for optimization; they may be violated to a certain extent, but the number of violations should be kept as small as possible.
Nested graph constraints <cit.> provide a viable means of formulating graph properties.
The related notion of constraint consistency introduced in <cit.> is binary: a graph is either consistent or inconsistent.
Since graph repair is often only gradual, it is also interesting to consider graph consistency as graduated property, as was done in <cit.>.
To support gradual repair, graph transformations are analyzed in <cit.> w.r.t. their potential to improve (sustain) the consistency level of a processed graph, i.e., to strictly decrease (preserve) the number of constraint violations in a graph.
A static analysis approach is presented in <cit.> for checking whether or not a graph transformation rule is always consistency-sustaining or -improving.
Unfortunately, the approach does not yet support graph constraints with different priorities or propositional logic operators.
In addition, scenarios in which a rule either increases or decreases graph consistency depending on the context of the matched and rewritten subgraph are not supported.
To mitigate these issues, we introduce a new dynamic analysis approach that ranks matches of rules and related rule applications according to their effects on improving graph consistency as follows:
* Graph transformation rules are equipped with impairment-indicating and repair-indicating application conditions.
These conditions no longer block rule applications, but count the number of constraint violations introduced or removed by a given graph transformation step.
They are derived from nested graph constraints based on the constructions presented in <cit.>.
* Our main theorem shows that the number of additional constraint violations caused by a rule application can be characterized by the difference between the numbers of violations of associated impairment-indicating and repair-indicating application conditions.
* This theory forms the basis for graph repair algorithms with a look-ahead for the graph-consistency-improving potential of selectable rule applications. Based on a prototypical implementation of a greedy algorithm, we show in an initial evaluation that our approach scales well with the available number of rule applications for each consistency-improving transformation step.
§ RUNNING EXAMPLE
To illustrate the problem addressed, we consider a version of the Class Responsibility Assignment (CRA) problem <cit.> for our running example.
The CRA aims to provide a high-quality design for object-oriented class structures.
Given a set of features, i.e., methods and attributes, with dependencies but no classes, a solution to the CRA involves a class design with high cohesion within classes and low coupling between classes.
We slightly adapt the CRA problem by starting with a predefined class diagram that is then refactored by moving features between classes. The goal of the refactoring steps is to reduce method-attribute dependencies across different classes and group methods with similar attribute dependencies in the same class.
Fig. <ref> shows our running example with a class diagram consisting of three classes modeling an online shopping session on the left.
On the right, we see an alternative, graph-like representation of the class diagram, in which classes, methods and attributes are represented as graph nodes.
In addition to the class diagram on the left, there are edges between methods and attributes that model access dependencies of methods on attributes[The original CRA problem also considers call dependencies between methods, which are ignored in this paper to keep the running example as small as possible.].
Next, we define two simple refactoring rules and constraints that these rules have to respect in Fig. <ref>.
The rule moves an attribute from one class to another one, while the rule does the same for methods.
From the set of language constraints on class structures, we select the two constraints _1 and _2, which impose two basic properties on class models, and consider them as hard constraints.
They state that methods and attributes must not be contained in more than one class.
There are also weak constraints, which are concerned with an optimal class design.
They specify (quantifiable) quality requirements for the class design.
We consider two weak constraints: Constraint _1 states that two methods within the same class ' should have at least one common attribute dependency on an attribute within the same class '.
Constraint _2 says that methods should have no dependencies on attributes of other classes.
These constraints are weak, which means that violating them is acceptable (at least for a certain time).
In fact, the two weak constraints may even contradict each other, since _2 may cause us to move a method to a class with other methods with which it has no other attribute dependencies in common.
Obviously, neither rule can violate the hard constraints, but moving features between classes can remove or add weak constraint violations, i.e., repair or impair the regarded graph's consistency.
For example, moving the method from to would repair a violation of _2 because the method depends on the attribute.
However, it would introduce a new violation of the same constraint due to the dependency on the attribute in the class.
In the following, we will study this example using the previously introduced graph (refactoring) rules and nested graph constraints.
We will present a new methodology that derives application conditions for each refactoring rule, based on which we can calculate a look-ahead for each refactoring transformation (i.e., rule application) telling us how many repairs and impairments it will perform.
Based on this rule ranking information, different CRA optimization algorithms may be implemented including a greedy algorithm that always selects refactoring rule applications with a maximum consistency gain.
§ PRELIMINARIES
In this section, we briefly recall key notions that are used throughout this paper.
Our theory for the construction of model repair algorithms is based on graphs or, more precisely, typed graphs, as introduced for graph transformations in <cit.>.
In our running example, classes, methods, and attributes are represented as nodes. Object references, such as the attribute dependencies between a method and an attribute, are represented as edges.
A graph G = (G_V, G_E, s_G, t_G) consists of a set G_V of nodes, a set G_E of edges and two mappings s_G G_E → G_V and t_G G_E → G_V that assign the source and target nodes for each edge of G. If a tuple as above is not given explicitly, the set of nodes (edges) is denoted by G_V (G_E) and the source (target) mapping is denoted by s_G (t_G).
Throughout the paper, we assume that a graph is always finite, i.e. the set of nodes and the set of edges are finite.
A graph morphism f G → H between two graphs G and H consists of two mappings f_V G_V → H_V and f_E G_E → H_E that preserve the source and target mappings, i.e., f_V ∘ s_G = s_H ∘ f_E and f_V ∘ t_G = t_H ∘ f_E.
A graph morphism is called injective (surjective) if both mappings f_V and f_E are injective (surjective).
An injective morphism is denoted by f G ↪ H. An injective morphism f G ↪ H is called an inclusion if f_E(e) = e for all e ∈ G_E and f_V(v) = v for all v ∈ G_V.
Given a graph morphism p G → H and an inclusion i G' ↪ G, the restriction of p to G', denoted by p_|G', is defined as p_|G' := p ∘ i.
Given a graph TG, called the type graph, a typed graph over TG is a tuple (G, type) consisting of a graph G and a graph morphism type G → TG. Given two typed graphs G = (G', type_G) and H = (H', type_H), a typed graph morphism f G → H is a graph morphism f G' → H' such that type_H ∘ f = type_G.
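To make the graph model concrete, the following is a minimal sketch of how such graphs and morphisms could be represented in code; the class and function names are our own and not part of any cited tooling.

class Graph:
    """A finite graph: a set of nodes and a dict mapping edge ids to (source, target) pairs."""
    def __init__(self, nodes, edges):
        self.nodes = set(nodes)
        self.edges = dict(edges)  # edge id -> (source node, target node)

def is_graph_morphism(f_v, f_e, G, H):
    # f_v: node map G -> H, f_e: edge map G -> H, both given as dicts.
    # Preservation of sources and targets: f_v(s_G(e)) = s_H(f_e(e)) and
    # f_v(t_G(e)) = t_H(f_e(e)) for every edge e of G.
    return all((f_v[s], f_v[t]) == H.edges[f_e[e]]
               for e, (s, t) in G.edges.items())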
To formulate hard and weak constraints, we use nested graph conditions introduced by Habel and Pennemann <cit.>.
It has been shown that the class of nested graph conditions is equivalent to first-order logic <cit.> and that almost all OCL formulae can be translated into nested graph constraints <cit.>.
Nested graph constraints, or constraints for short, are nested graph conditions that can be evaluated directly on a given graph, whereas, in general, graph conditions must be evaluated with respect to graph morphisms, which usually represent rule matches.
A nested graph condition over a graph P is of the form
true, or ∃(e P ↪ Q, d) where d is a condition over Q, or d_1 ∨ d_2 or ¬d_1, where d_1 and d_2 are conditions over P.
A condition over the empty graph ∅ is called a constraint. We use the abbreviations false := ¬true, d_1 ∧ d_2 := ¬(¬d_1 ∨ ¬d_2), d_1 ⇒ d_2 := ¬d_1 ∨ d_2 and ∀(e P ↪ Q, d) := ¬∃(e P ↪ Q, ¬d).
When e is of the form e ∅ ↪ Q, we use the short notation ∃(Q,d) and ∀(Q,d).
For a condition c = ∀(Q,d), we call Q the premise and d the conclusion of c.
Throughout the paper, we assume that each constraint is finite, i.e., the graphs used and the number of nesting levels are finite.
Each graph morphism p P → G satisfies true. It satisfies a condition c = ∃(e P ↪ Q, d), denoted by p ⊨ c, if there is a morphism q Q ↪ G with p = q ∘ e and q ⊨ d.
For Boolean operators, satisfaction is defined as usual.
A graph G satisfies a constraint c, denoted by G ⊨ c, if the unique morphism p ∅ → G satisfies c.
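As an illustration, a brute-force evaluator for nested conditions could look as follows. It is a sketch under simplifying assumptions of our own: a morphism is identified with its injective node map (edge preservation is checked separately), and the embedding e P ↪ Q is represented by P's nodes being literally contained in Q.

from itertools import permutations

def injective_extensions(p, Q, G):
    # All injective node maps Q.nodes -> G.nodes extending p that preserve
    # edges; a naive, exponential matcher, for illustration only.
    free = [v for v in Q.nodes if v not in p]
    used = set(p.values())
    targets = [v for v in G.nodes if v not in used]
    gpairs = set(G.edges.values())
    for image in permutations(targets, len(free)):
        q = dict(p)
        q.update(zip(free, image))
        if all((q[s], q[t]) in gpairs for s, t in Q.edges.values()):
            yield q

def satisfies(p, cond, G):
    # cond is ('true',), ('exists', Q, d), ('not', d) or ('or', d1, d2),
    # where Q extends the domain of p.
    tag = cond[0]
    if tag == 'true':
        return True
    if tag == 'exists':
        _, Q, d = cond
        return any(satisfies(q, d, G) for q in injective_extensions(p, Q, G))
    if tag == 'not':
        return not satisfies(p, cond[1], G)
    if tag == 'or':
        return satisfies(p, cond[1], G) or satisfies(p, cond[2], G)
    raise ValueError(f'unknown condition {tag}')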
The graph shown in Fig. <ref> satisfies the hard constraints _1 and _2. No attribute and no method is contained in two classes.
However, the graph does not satisfy the weak constraint _1 shown in Fig. <ref>.
The methods and are contained in the same class , but does not contain an attribute used by both methods.
The graph also does not satisfy the weak constraint _2. This is because uses the attribute , which is contained in the class .
Graph transformation rules are used to specify state-changing operations for a system of interest.
In the case of our CRA example, rules are used to define the set of all available refactoring operations.
Their associated application conditions are unsatisfied whenever a rule application changes the consistency state of a rewritten graph.
In the following, we recall the formal definitions of graph transformation rules and graph transformations <cit.>.
A graph transformation rule, or rule for short, ρ = (L ⟵l K ⟶r R) consists of a graph L, called the left-hand side (LHS), a graph K, called the context, a graph R, called the right-hand side (RHS), and injective morphisms l K ↪ L and r K ↪ R.
The inverse rule of ρ, denoted by ρ^-1, is defined as ρ^-1 := (R ⟵r K ⟶l L).
A left (right) application condition for a rule is a nested condition over its LHS (RHS).
For simplicity, we present the more constructive, set-theoretic definition of graph transformation, which has been shown to be equivalent to the commonly used double-pushout approach, based on category theory <cit.>.
Given a graph G, a rule ρ = (L ⟵l K ⟶r R) and an injective morphism m L ↪ G, a graph transformation t, denoted by t G ⟹_ρ, m H, via ρ at m can be constructed by (a) deleting all nodes and edges of L that do not have a preimage in K, i.e., constructing the graph D = G ∖ m(L ∖ l(K)), and (b) adding all nodes and edges of R that do not have a preimage in K, i.e., constructing the graph H = D ∪̇ (R ∖ r(K)), where ∪̇ denotes the disjoint union.
The rule ρ is applicable at m if and only if D is a graph, i.e., if it does not contain any dangling edges. In this case, m is called a match and the newly created morphism n R ↪ H is called the comatch. We call G the original graph, D the interface and H the result graph of the transformation t.
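The following sketch mirrors this set-theoretic construction, including the dangling-edge check. It reuses the Graph class sketched earlier; K is assumed to be given as a subgraph of both L and R (inclusions), and the match is given by node and edge maps — all names are ours.

def apply_rule(G, L, K, R, m_v, m_e):
    # One transformation step G ==(rho, m)==> H: (a) delete m(L \ l(K)),
    # (b) add R \ r(K) as fresh elements. Returns H, or None if a dangling
    # edge makes the rule inapplicable at m.
    del_v = {m_v[v] for v in L.nodes - K.nodes}
    del_e = {m_e[e] for e in set(L.edges) - set(K.edges)}
    D_nodes = G.nodes - del_v
    D_edges = {e: st for e, st in G.edges.items() if e not in del_e}
    # dangling-edge check: the interface D must be a graph
    if any(s in del_v or t in del_v for (s, t) in D_edges.values()):
        return None
    # glue in R \ r(K); preserved nodes keep their image under the match
    img = {v: m_v[v] for v in K.nodes}
    img.update({v: ('new', v) for v in R.nodes - K.nodes})
    H_nodes = D_nodes | {img[v] for v in R.nodes - K.nodes}
    H_edges = dict(D_edges)
    for e, (s, t) in R.edges.items():
        if e not in K.edges:
            H_edges[('new', e)] = (img[s], img[t])
    return Graph(H_nodes, H_edges)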
The derived span of t, denoted by span(t), is defined as
span(t) := (G ⟵g D ⟶h H),
where D is the interface, G is the original graph, H is the result graph, and g and h are the transformation morphisms of t <cit.>.
The track morphism of t <cit.>, denoted by tr_t G ⇢ H, is a partial morphism defined as
tr_t(e) := h(g^-1(e)) if e ∈ g(D), and undefined otherwise.
§ COUNTING CONSTRAINT VIOLATIONS
For ranking rule applications, a method is needed to evaluate the increase or decrease in consistency with respect to weak constraints.
Here, we rely on the number of violations introduced by Kosiol et al. <cit.>.
This notion allows to detect an increase in consistency even if it is not fully recovered.
For example, this notion is able to detect whether an occurrence of the premise that does not satisfy the conclusion is deleted or extended so that it satisfies the conclusion in the result graph of the transformation.
While this notion was introduced for constraints in so-called alternating quantifier normal form, i.e., the set of all nested constraints that do not use Boolean operators, we extend this notion to support universally bounded nested conditions that may use Boolean operators in the conclusion. In particular, this notion evaluates conditions w.r.t. occurrences q of the premise of the condition.
For existentially bounded constraints, the number of violations acts as a binary property, i.e., it is equal to 1 if the constraint is unsatisfied and equal to 0 if the constraint is satisfied.
To use a constraint for ranking, it must be universally quantified.
Weak constraints are universally quantified and have the form ∀(P, d), where d = true or d is a Boolean formula over conditions of the form ∃(e' P ↪ Q, true); i.e., all constraints with a nesting level of at most 2 that are universally bound and do not use Boolean operators at the highest nesting level are allowed.
We focus on this form because, in our experience, it is the most commonly used in practice.
There are no restrictions for hard constraints.
In addition, for a condition with premise P, we allow restricting violations to injective morphisms i with codomain P if necessary, i.e., if some parts of the violation do not need to be considered.
Given a condition c = ∀(e P ↪ Q, d) and a graph morphism p P → G, the set of violations of c in p, denoted by Viol_p(c), is defined as
Viol_p(c) := {q Q ↪ G | p = q ∘ e and q ⊭ d}.
The set of violations of a constraint c = ∀(e ∅ ↪ P, d) in a graph G, denoted by Viol_G(c), is defined as Viol_G(c) := Viol_p(c), where p ∅ → G is the unique morphism into G.
Given a morphism i Q' ↪ Q such that there exists a morphism i' P ↪ Q' with i ∘ i' = e,
the set of violations of c in p restricted to i, denoted by Viol_p,i(c), is defined as
Viol_p,i(c) := {q ∘ i | q ∈ Viol_p(c)}.
The number of violations of a condition c in p restricted to i, denoted by nviol_p,i(c), is defined as nviol_p,i(c) := |Viol_p,i(c)|.
The number of violations of a constraint c in a graph G, denoted by nviol_G(c), is defined as nviol_G(c) := |Viol_G(c)|.
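Using the helpers sketched earlier, the set of violations of a weak constraint ∀(P, d) could be computed as follows — again a naive sketch with our own names:

def violations(G, P, d):
    # Viol_G(c) for a constraint c = forall(P, d): all occurrences of the
    # premise P in G that do not satisfy the conclusion d.
    return [q for q in injective_extensions({}, P, G)
            if not satisfies(q, d, G)]

# The number of violations is then: nviol = len(violations(G, P, d))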
As discussed in Example <ref>, the graph G in Fig. <ref> does not satisfy _1 and _2.
For _1 there are two occurrences of the premise that do not satisfy the conclusion of _1. The premise of _1 can be mapped to the violation described in Example <ref> in two different ways, so nviol_G(_1) = 2. If we restrict Viol_G(_1) to a morphism i Q' ↪ Q, where Q' contains only the node , we get nviol_q,i(_1) = 1 with q ∅ → G.
For _2 there are two occurrences of the premise. Both methods and use the attribute , which is contained in the class , so nviol_G(_2) = 2.
§ APPLICATION CONDITIONS FOR CONSISTENCY MONITORING
The main idea of our approach is to predict the change in consistency induced by an application of a rule.
This allows us to apply the most consistency-increasing rule without using a trial and error approach.
To obtain this prediction, we use application conditions that do not block the application of constraint-violating rules, but instead annotate rule matches with the number of constraint impairments and/or repairs caused by the related transformation; thus, the conditions monitor the consistency change.
In addition, we use the observation that a rule introduces a violation of a constraint c if and only if the inverse rule repairs a violation of c. Thus, repair-indicating application conditions for a rule ρ are impairment-indicating application conditions for ρ^-1.
To construct the application conditions, we use the well-known techniques introduced by Habel and Pennemann to construct so-called consistency-preserving and consistency-guaranteeing application conditions <cit.>.
§.§ Preliminaries
In the following we will briefly introduce the preliminaries that are needed for our construction of application conditions.
An overlap GH = (i_G, i_H, GH) of graphs G and H consists of jointly surjective morphisms[Two morphisms p P G and q Q G into the same graph G are called jointly surjective if for each element e G either there is an element e' ∈ P with p(e')=e or there is an element e' ∈ Q with q(e') = e.] i_G G GH and i_H G GH, called the overlap morphisms of GH, and a graph GH called the overlap graph.
The shift along morphism operator allows shifting a condition c over a graph P along a morphism i_P P PL, resulting in an equivalent condition over PL.
Given a condition c over a graph P and a morphism i_P P ↪ PL, the shift along i_P <cit.>, denoted by Shift(i_P, c), is defined as follows:
* if c = true, Shift(i_P, c) = true, and
* if c = ∃(e P ↪ Q, d), Shift(i_P, c) = ⋁_(e', i_Q) ∃(e' PL ↪ QL, Shift(i_Q, d)), where (e', i_Q, QL) is an overlap of PL and Q such that e' ∘ i_P = i_Q ∘ e, i.e., the square below is commutative, and
* if c = c_1 ∨ c_2, Shift(i_P, c) = Shift(i_P, c_1) ∨ Shift(i_P, c_2), and
* if c = ¬c_1, Shift(i_P, c) = ¬Shift(i_P, c_1).
[Diagram: the commutative square formed by e P ↪ Q, i_P P ↪ PL, i_Q Q ↪ QL and e' PL ↪ QL, i.e., e' ∘ i_P = i_Q ∘ e.]
Note that, when shifting a constraint ∀(P, d) over a morphism p ∅ P', the first step is to construct all overlaps PP'.
The shift over rule operator transforms a right application condition into an equivalent left application condition and vice versa.
Given a rule ρ = (L ⟵l K ⟶r R) and a condition c over R, the shift of c over ρ <cit.>, denoted by Shift(ρ, c), is defined as follows:
* If c = true, Shift(ρ, c) = true, and
* If c = ∃(e R ↪ P, d), Shift(ρ, c) := ∃(n L ↪ P', Shift(span(t)^-1, d)), where P' is the resulting graph and n the comatch of the transformation t P ⟹_ρ^-1, e P'. If there is no such transformation, we set Shift(ρ, c) = false.
* If c = c_1 ∨ c_2, Shift(ρ, c) := Shift(ρ, c_1) ∨ Shift(ρ, c_2).
* If c = ¬c_1, Shift(ρ, c) := ¬Shift(ρ, c_1).
§.§ Shifting of Overlaps and Conditions
We will now discuss the construction of impairment- and repair-indicating application conditions.
These are actually annotated application conditions, i.e. pairs of application conditions and overlaps, which contain the first morphism of the application condition.
This additional information is used to directly obtain occurrences of the premise of a weak constraint to be repaired (or violated) when evaluating the set of violations of the application condition.
Given a rule ρ = (L ⟵ K ⟶ R), an application condition c = ∀(i_L L ↪ PL, d), and an overlap PL = (i_L, i_P, PL), the pair (c, PL) is called a left annotated application condition for ρ.
Right annotated application conditions for ρ are defined analogously.
In order to count violations of application conditions, we require that the match m L G of a transformation factors through the inclusion i_L L PL of the LHS in the overlap graph and an inclusion p PL G of the overlap graph in the original graph of the transformation, i.e., m = p ∘ i_L.
So we only need to consider overlaps of the LHS of a rule and the premise of a constraint, where the rule is applicable at the inclusion of the LHS in the overlap graph.
We decompose the set of overlaps into the sets O^del and O^pres, where O^del contains each overlap for which applying the rule at the inclusion of the LHS destroys the occurrence of the constraint premise, and O^pres contains each overlap for which it does not, so that the impact on the constraint conclusion can be investigated.
In particular, the set O^del is used to construct application conditions that check whether an occurrence of the premise to be deleted satisfies the conclusion before the transformation; if that occurrence does not satisfy the conclusion, deleting it increases consistency. The set O^pres is used to construct application conditions that check whether an occurrence of the premise that is neither deleted nor created by the transformation satisfies the conclusion in the original and the result graph of the transformation.
If it satisfies the conclusion only in the result graph, consistency is increased.
If it satisfies the conclusion only in the original graph, consistency is decreased.
Given a rule ρ = (L ⟵l K ⟶r R) and a graph P, the set of overlaps of ρ and P, denoted by O(ρ,P) := O^del(ρ,P) ∪ O^pres(ρ,P), contains every overlap PL = (i_P, i_L, PL) such that ρ is applicable at i_L, i.e., there is a transformation PL ⟹_ρ, i_L PR, where
O^del(ρ, P) := {(i_P, i_L, PL) | i_P(P) ∩ i_L(L ∖ l(K)) ≠ ∅} and
O^pres(ρ, P) := {(i_P, i_L, PL) | i_P(P) ∩ i_L(L ∖ l(K)) = ∅}.
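In code, the decomposition into O^del and O^pres amounts to a simple intersection test; the sketch below checks nodes only, for brevity, and all names are our own.

def overlap_kind(i_P, i_L, L, K):
    # 'del' if the premise occurrence shares a node with the part of L that
    # the rule deletes, 'pres' otherwise; edges are omitted here for brevity.
    deleted_images = {i_L[v] for v in L.nodes - K.nodes}
    return 'del' if set(i_P.values()) & deleted_images else 'pres'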
The shift of an overlap over a rule transforms an overlap of the RHS of the rule and a graph P into an overlap of the LHS of the rule and the graph P.
The inverse rule is applied to the inclusion of the RHS in the overlap graph.
If the newly created elements of the rule (i.e. the elements in R ∖ r(K)) and P overlap non-emptily, the occurrence of P is destroyed during the shift.
The result is an overlap of the LHS with a subgraph of P called the remaining graph of P after Shift.
Given a rule ρ = (L ⟵l K ⟶r R), a graph P, and an overlap PR = (i_P, i_R, PR) ∈ O(ρ^-1, P), the remaining graph of P after the transformation t^-1 PR ⟹_ρ^-1, i_R PL, denoted by P(t^-1), is defined as
P(t^-1) := i_P^-1(i_P(P) ∖ i_R(R ∖ r(K))),
i.e., (1) and (2) in Fig. <ref> are commutative.
Note that, by construction, the morphism tr_t^-1 ∘ i_P|P(t^-1) is total and that P(t^-1) = P if PR ∈ O^pres(ρ^-1, P).
The shift of an annotated condition over a rule shifts a right annotated application condition to a left one, using both the shift over a rule and the shift of an overlap over a rule.
Given a rule ρ = (L ⟵l K ⟶r R) and a graph P:
* Given an overlap PR = (i_R, i_P, PR) ∈ O(ρ^-1, P), the shift of PR over ρ, denoted by Shift(ρ, PR), is defined as
Shift(ρ, PR) := (i_L, tr_t^-1 ∘ i_P|P(t^-1), PL),
where i_L L ↪ PL is the comatch of the transformation t^-1 PR ⟹_ρ^-1, i_R PL.
* Given an annotated application condition ac = (c, PR), where c is a right application condition for ρ, the shift of ac over ρ, denoted by Shift(ρ, ac), is defined as
Shift(ρ, ac) := (Shift(ρ, c), Shift(ρ, PR)).
The following lemma allows us to simplify the application conditions if we assume that hard constraints are always satisfied. If we consider a hard constraint of the form c = ¬∃(P) (which is satisfied by a graph G if P is not a subgraph of G) and a condition ∃(e P' ↪ Q, d), a morphism p P' → G cannot satisfy ∃(e P' ↪ Q, d) if Q ⊭ c, since otherwise P would be a subgraph of G.
Therefore, when simplifying an application condition, we can replace any condition of the form ∃(e P' ↪ Q, d) with Q ⊭ c by false.
Since ∀(e P' ↪ Q, d) = ¬∃(e P' ↪ Q, ¬d), conditions of the form ∀(e P' ↪ Q, d) can be replaced by true if Q ⊭ c.
In our running example, this simplification drastically reduces the number and complexity of derived impairment- and repair-indicating application conditions.
Let a constraint c = ∀(e ∅ ↪ P, false) and a condition c' = ∃(e' P' ↪ Q', d) with Q' ⊭ c be given; then
G ⊨ c ⟹ p ⊭ c'
for each morphism p P' → G.
The proof of this Lemma can be found in Appendix <ref>.
§.§ Construction of Application Conditions
When constructing impairment- and repair-indicating application conditions, all cases in which an impairment can be introduced or a violation can be repaired must be considered.
A transformation t G ⟹ H can introduce impairments of a constraint c = ∀(P,d) in the following two ways:
* Impairment of the premise: A new occurrence of P is introduced that does not satisfy d. That is, there is an occurrence p P → H with p ⊭ d such that there is no p' P → G with tr_t ∘ p' = p.
* Impairment of the conclusion: An occurrence of P satisfies d in G but not in H. That is, there is an occurrence p P → G with p ⊨ d such that tr_t ∘ p is total and tr_t ∘ p ⊭ d.
Similarly, the transformation t can repair violations in the following ways:
* Repair of the premise: An occurrence of P in G that does not satisfy d is deleted. That is, there is a morphism p P → G with p ⊭ d such that tr_t ∘ p is not total.
* Repair of the conclusion: There is an occurrence of P which satisfies d in H but does not satisfy d in G. That is, there is a morphism p P → G with p ⊭ d, tr_t ∘ p is total and tr_t ∘ p ⊨ d.
Note that we use the same construction of impairment-indicating and repair-indicating application conditions for the premises and conclusions of constraints. The only difference is that we switch the roles of the LHS and the RHS.
Thus, intuitively, the repair-indicating application conditions can be obtained by computing the impairment-indicating application conditions of the inverse rule and shifting them to the LHS.
As a necessary prerequisite, we introduce overlap-induced pre- and post-conditions. Intuitively, given an overlap of a constraint with the LHS of a given rule, an overlap-induced pre-condition checks whether an occurrence of the premise of a constraint satisfies the conclusion of the constraint in the original graph of a transformation.
The overlap-induced post-condition checks whether an occurrence of the premise satisfies the conclusion in the result graph of a transformation.
All graphs and morphisms are visualised in Fig. <ref>.
A detailed example of the construction process can be found in Appendix <ref>.
Given a rule ρ = (L ⟵l K ⟶r R), an overlap PL = (i_P, i_L, PL) ∈ O(ρ,P), and a condition d over P, the overlap-induced pre- and post-conditions of ρ w.r.t. PL and d, denoted by Pre_ρ(PL,d) and Post_ρ(PL,d), are defined as
* Pre_ρ(PL,d) := Shift(i_P, d), and
* Post_ρ(PL,d) := Shift(span(t), Shift(tr_t ∘ i_P, d)) if PL ∈ O^pres(ρ,P), and Post_ρ(PL,d) := false if PL ∈ O^del(ρ,P),
with t PL ⟹_ρ, i_L PR.
Note that Post_ρ(PL,d) = Pre_ρ^-1(PR,d) if PL = Shift(ρ, PR).
The correctness of overlap-induced pre- and post-conditions is formalized and proven by Lemma <ref> which can be found in Appendix <ref>.
Given a constraint c = ∀(e ∅ ↪ P, d) and a rule ρ = (L ⟵l K ⟶r R), the set of impairment-indicating application conditions for ρ w.r.t. c consists of a set Impair^P(ρ,c) of impairment-indicating application conditions for the premise and a set Impair^C(ρ,c) of impairment-indicating application conditions for the conclusion, where
Impair^P(ρ,c) := {Shift(ρ, (∀(i_R R ↪ PR, Pre_ρ^-1(PR,d)), PR)) | PR ∈ O^del(ρ^-1, P)}
and
Impair^C(ρ,c) := {(∀(i_L L ↪ PL, Pre_ρ(PL,d) ⇒ Post_ρ(PL,d)), PL) | PL ∈ O^pres(ρ, P)}.
Given a constraint c = ∀(e ∅ ↪ P, d) and a rule ρ = (L ⟵l K ⟶r R), the set of repair-indicating application conditions of ρ for c consists of a set Repair^P(ρ,c) of repair-indicating application conditions for the premise and a set Repair^C(ρ,c) of repair-indicating application conditions for the conclusion, where
Repair^P(ρ,c) := {(∀(i_L L ↪ PL, Pre_ρ(PL,d)), PL) | PL ∈ O^del(ρ, P)}
and
Repair^C(ρ,c) := {Shift(ρ, (∀(i_R R ↪ PR, Pre_ρ^-1(PR,d) ⇒ Post_ρ^-1(PR,d)), PR)) | PR ∈ O^pres(ρ^-1, P)}.
Note that for the examples and the implementation we have simplified the constructed application conditions with respect to the hard constraints _1 and _2 using Lemma <ref>.
Consider the annotated application conditions shown in Figs. <ref> and <ref>.
The annotating overlaps are implicitly given by the node names.
The impairment-indicating and repair-indicating application conditions of for _2 are shown in Fig. <ref>.
Note that can only repair and impair the premise, since the conclusion of _2 is equal to false.
The impairment-indicating and repair-indicating application conditions of for _1 are shown in Fig. <ref>.
The premise of _1 does not contain an edge from a class to an attribute, so can only introduce impairments and repairs of the conclusion.
For the impairment-indicating application condition _1, the left side of the implication checks whether the occurrence of the premise of _1 in the annotating overlap satisfies the conclusion of _1 in the original graph of the transformation.
The right side of the implication checks whether this occurrence will satisfy the conclusion of _1 in the result graph of the transformation.
For the repair-indicating application condition _1, this implication is reversed, i.e. the left-hand side of the implication checks whether this occurrence will satisfy the conclusion of _1 in the result graph of the transformation and the right-hand side checks whether the occurrence satisfies the conclusion of _1 in the original graph of the transformation.
The detailed construction of these application conditions can be found in Appendix <ref>.
When evaluating a repair-indicating application condition (c', (i_L, i_P, PL)) w.r.t. to a rule match m, we restrict the set of violations of c' in m to i_P. Then, the set of violations of c' in m restricted to i_P contains occurrences of the premise of the constraint that are repaired by the transformation.
We proceed in a similar way for impairment-indicating application conditions.
Our main theorem states that the difference _H(c) - _G(c) in the number of violations of a transformation t G ⟹_ρ,m H can be evaluated by computing the difference in the number of violations of associated impairment-indicating and repair-indicating application conditions.
This allows us to evaluate the change in inconsistency before the transformation is performed by simply counting violations of application conditions.
In the following, let PL = (i_P, i_L, PL).
Given a transformation t G ⟹_ρ, m H and a constraint c, then
nviol_H(c) − nviol_G(c) = |⋃̇_(c', PL) ∈ Impair^P(ρ,c) Viol_m,i_P(c')| + |⋃_(c', PL) ∈ Impair^C(ρ,c) Viol_m,i_P(c')| − |⋃_(c', PL) ∈ Repair^P(ρ,c) Viol_m,i_P(c')| − |⋃_(c', PL) ∈ Repair^C(ρ,c) Viol_m,i_P(c')|.
Follows by applying Lemma <ref>.
Note that we use the disjoint union for the impairment-indicating application conditions for the premise, even though we use the union for the other application conditions.
As discussed earlier, the overlap of a pair (c', PL) ∈ Impair^P(ρ,c) is an overlap of the LHS and a proper subgraph of the premise of the constraint.
If c' is violated, the transformation extends an occurrence of this subgraph to at least one occurrence of the premise that does not satisfy the conclusion of the constraint. Using the union, this occurrence would be counted once, even though it could be extended to multiple occurrences of the premise that do not satisfy the conclusion.
To deal with this loss of information, we use disjoint union so that an occurrence of this subgraph can be counted multiple times.
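In an implementation, the theorem turns into a simple count over the four families of annotated application conditions. The sketch below uses hypothetical helper signatures of our own: each application condition is modelled as a function ac(G, m) returning its set of violating occurrences (node maps) at match m.

def predicted_change(G, m, acs):
    # Look-ahead following the theorem above: impairments minus repairs.
    # acs maps 'imp_p', 'imp_c', 'rep_p', 'rep_c' to lists of such functions.
    def union_count(conds):
        seen = set()
        for ac in conds:
            seen |= {tuple(sorted(v.items())) for v in ac(G, m)}
        return len(seen)
    def disjoint_count(conds):
        # impairments of the premise are counted with multiplicity (disjoint union)
        return sum(len(ac(G, m)) for ac in conds)
    return (disjoint_count(acs['imp_p']) + union_count(acs['imp_c'])
            - union_count(acs['rep_p']) - union_count(acs['rep_c']))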
As discussed in Example <ref>, we have nviol_G(_2) = 2 for the graph G shown in Fig. <ref>.
When using the rule to move from the class to the class , the application conditions _1 and _2 are satisfied, so |⋃̇_(c', PL) ∈ Impair^P(ρ,c) Viol_m,i_P(c')| = 0.
There is a violation of _1, while _2 is satisfied, so |⋃_(c', PL) ∈ Repair^P(ρ,c) Viol_m,i_P(c')| = 1.
So this transformation does not introduce an impairment of _2 but repairs a violation, i.e., nviol_H(_2) = 1, where H is the result graph of this transformation.
Kosiol et al. <cit.> introduced the notions of consistency-sustaining and consistency-improving transformations: a transformation t G ⟹ H is consistency-sustaining w.r.t. a constraint c if nviol_H(c) − nviol_G(c) ≤ 0; it is called consistency-improving w.r.t. c if nviol_H(c) − nviol_G(c) < 0. Our main theorem implies that we can predict whether a transformation is consistency-sustaining (-improving) by evaluating the impairment-indicating and repair-indicating application conditions.
§ EVALUATION
To evaluate the practical relevance of our approach, we implemented a greedy graph optimization algorithm for the CRA case study (cf. <ref>).
This was done using the graph transformation tool eMoflon[<www.emoflon.org>], which incrementally computes all matches to a given graph for rules and their application conditions.
This is a prerequisite for an efficient implementation of this approach, as calculating all matches from scratch after each rule application can easily become a serious performance bottleneck.
Based on these matches provided by eMoflon, our implementation then counts violations of derived application conditions and ranks rule matches w.r.t. the number of constraint violations that are removed or added by the application of the considered rule at the considered match (see Theorem <ref>).
Then, the rule application with the highest rank is greedily selected and applied, and the ranking is updated.
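The greedy loop itself is then only a few lines. In the sketch below, `matches`, `rank_of` and `apply_step` stand in for the corresponding eMoflon-backed operations of our prototype and are assumptions of this sketch; `rank_of` returns the gain of a step, i.e., repairs minus impairments.

def greedy_repair(G, matches, rank_of, apply_step):
    # Repeatedly apply the rule application with the highest consistency gain
    # until no application with a positive gain remains (a local optimum).
    while True:
        ranked = [(rank_of(G, rule, m), rule, m) for rule, m in matches(G)]
        if not ranked:
            return G
        gain, rule, m = max(ranked, key=lambda x: x[0])
        if gain <= 0:
            return G
        G = apply_step(G, rule, m)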
Note that the application conditions are currently not derived automatically, but designed and implemented by hand in eMoflon based on our formal construction.
Also, our evaluation was run on a Ryzen 7 3900x and 64GB RAM on Windows 11 23H2.
It is available as a VM [<www.zenodo.org/records/10727438>] with a detailed description of how to reproduce our results.
Finding all the matches for rules and application conditions can become very expensive if there are many matches to find.
The CRA case study is particularly challenging because a feature can be moved from one class to any other class, which means that the number of refactoring steps, and thus the number of matches, grows rapidly with an increasing number of classes and features.
Therefore, we pose the following research questions: (RQ1) How does our approach scale with respect to the size of a processed class diagram (graph)? and (RQ2) Can we reduce the number of violations?
To answer the first question, we need to examine the two phases of our approach, which are related to how eMoflon works and incrementally provides us with collected matches.
First, we measure the time it takes to compute the initial collection of all rule and application condition matches.
Then, we use the application conditions matches to rank the rule matches.
Depending on the size of the model, this is expected to take longer than applying a rule and incrementally updating eMoflon's internal structures as well as the rule ranking.
Second, we measure the time taken to perform 10 repair steps, where we have to judge which repair to apply next, based on an actually selected repair step.
For this, we use the most promising (highest ranked) rule application, where repairs and impairments are uniformly weighted with a value of 1.
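If constraints or violation kinds should be prioritized, the uniform weights used here generalize naturally; a hypothetical variant:

def weighted_gain(repairs, impairments, w_rep=1.0, w_imp=1.0):
    # Gain of a candidate step; our evaluation uses the uniform weights (1, 1),
    # but higher-priority constraints could contribute with larger weights.
    return w_rep * repairs - w_imp * impairments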
To investigate the scalability of our approach, we created synthetic class diagrams of varying sizes with increasing numbers of classes, where each class has five methods and five attributes.
Each method has two dependencies on attributes of the same class and a further three dependencies on attributes of other classes.
Having more dependencies means that it is less likely to move features from one class to another, as more features would form a dependency clique within a class.
<Ref> shows our evaluation results, where the left plot shows the time in seconds for processing models with up to 2,751 elements.
Starting with 276 elements, the initialization time is 2.2s and performing 10 refactoring steps takes 0.5s.
With 2,751 elements, this time increases to 65s for the initialization and 2.8s for the 10 steps.
Obviously, the initialization takes 30 times longer for a 10 times larger model, while the incremental updates scale better and take only 5 times longer.
The reason for this non-linear increase lies in the structure of our rules, where the number of matches increases rapidly with each new class.
For a model with 276 elements, we collected 600 rule matches and 36,000 application condition matches, while for the largest model with 2,751 elements, we found 622,500 rule matches and 3.7 million condition matches.
So, while it takes 30 times as long to run a model 10 times the size, we found 118 times as many matches.
This shows that even for a challenging scenario like the CRA use case, our approach scales reasonably well given the number of repair steps available (RQ1).
To answer RQ2, we took a model with 2,201 elements and measured the aggregated number of impairments and repairs after n iterations (repair steps) along with their differences (gains).
As before, in each iteration we chose the rule application with the highest gain and continued until there was no rule application with a positive gain left, i.e., the application of a rule would have no effect or would cause more impairments than repairs at this point.
The results are shown in <Ref> on the right.
After 589 iterations with a total gain of 1,661, the process terminated after finding a (local) optimum.
So the resulting model contains 1,661 fewer violations than before, which answers RQ2.
As shown, our approach can incrementally maintain the necessary rule-ranking information (RQ1) and improve consistency in a rather low number of iterations (RQ2).
This is particularly interesting considering the fact that the search space of all rule matches can grow very rapidly, which is a particularly challenging scenario for our approach.
Threats to validity
Currently, we only investigate the CRA case using synthetic class diagrams, which grow only by adding more classes with a fixed number of features and dependencies.
To evaluate the general scalability of our approach, more scenarios should be investigated, preferably using real-world data, e.g., extracted from public code repositories.
Also, the application conditions are currently constructed by hand (following our formal construction process), which is a source of error.
In addition, since we only have a look-ahead of 1, there may not always be a good next step to improve consistency, even though a better overall solution exists.
Therefore, future work should investigate how our approach performs when the greedy strategy is replaced by another strategy such as simulated annealing.
§ RELATED WORK
Habel and Pennemann <cit.> introduced the original process for generating application conditions from nested graph constraints,
which are consistency-guaranteeing, meaning that applying a rule with such conditions is guaranteed to produce a graph consistent with the given (hard) constraints.
In our paper, we extend the binary case of satisfying hard constraints by ranking rule applications based on how many constraint violations they add or remove w.r.t. a set of weak constraints.
Kosiol et al. <cit.> also count violations but still consider only one type of constraint (hard constraints).
While they consider constraints in alternating normal form with arbitrary nesting levels but without Boolean operators, we focus on constraints up to level 2 but allow Boolean operators on level 2. Our experience has shown that this kind of constraints is mostly used in practical applications.
Since the resulting set of application conditions can be large and thus expensive to evaluate, Nassar et al. <cit.> showed that some subconditions can be filtered if they check for cases that cannot occur.
We also filter the resulting application conditions, but based on our set of additional hard constraints, e.g., by filtering out conditions that check for features contained in multiple classes simultaneously.
In <cit.>, application conditions are constructed to make rule applications consistency-sustaining, but there is no such construction for consistency-improving rule applications.
Moreover, the rule applications are consistency-sustaining in the strict sense that no new constraint violations are allowed.
In contrast to all existing literature, we also construct application conditions for consistency improvement and use them to rank rule applications, not to block them.
This approach can be used in a transformation engine like eMoflon, which supports the incremental computation of rule matches.
In <cit.>, a similar ranking approach is presented for model repair, identifying impairments as negative side effects and repairs as positive side effects of model repair sequences.
All constraints have equal priority.
Given a model with a set of violations, all possible repairs are computed and ranked according to their side effects.
A repair consists of a sequence of repair actions of limited length, each of which repairs only a single model element or a single property of an element.
The ranking is not done in advance, but is determined by first executing all computed repair action sequences and then observing their effects on the number of constraints violated.
In contrast, we define repair rules that can be arbitrarily complex and perform multiple repair actions in parallel (in one rewrite rule step).
Furthermore, we determine the positive and negative effects of all options to apply a given repair rule to all (weak) constraints simultaneously without changing the (model) graph for this purpose.
In <cit.>, constraint violations are also detected by incremental graph pattern matching.
They distinguish between ill-formedness and well-formedness constraints, where occurrences of the latter are desirable and occurrences of the former are to be avoided.
Using genetic algorithms with a set of graph-modifying rules as mutators, they then search for a graph that maximizes the number of occurrences of well-formedness constraints minus the number of occurrences of ill-formedness constraints.
Compared to our approach, they have to change the graph to track consistency and detect violations and repairs, whereas we use a look-ahead to plan next steps.
The running example is based on the well-known CRA problem <cit.>, which was the focus of attention at the TTC 2016 <cit.>.
Solutions like <cit.> have shown that greedy-based approaches like ours achieve quite good results; our approach additionally provides a look-ahead that can be used to find local optima faster.
§ CONCLUSION
In this paper, we introduce a new dynamic analysis approach that ranks rule matches based on their potential for graph repair.
This potential is computed using application conditions that are automatically derived from a set of nested graph constraints.
While some of these conditions indicate repair steps, others detect violations.
We formally showed that the potential of a rule application can indeed be characterized by the difference between the numbers of violations of repair-indicating and impairment-indicating application conditions.
We illustrated and evaluated our approach in the context of the well-known CRA problem, and showed that even for a worst-case scenario, the performance scales reasonably well.
For the future, we want to fully automate the ranking of graph transformations based on an automated construction of impairment-indicating and repair-indicating application conditions.
In addition, we will investigate different strategies besides greedy-based algorithms for specifying and optimizing graphs in different scenarios.
To further strengthen our ranking approach with a look-ahead of 1, it may be advantageous to combine several repair rules into a larger one by composing concurrent rules <cit.>.
§ ADDITIONAL FORMAL RESULTS AND PROOFS
We assume that G ⊨ c and that there is a morphism p P' → G with p ⊨ c', i.e., there is a morphism q Q' ↪ G with p = q ∘ e' and q ⊨ d. Since Q' ⊭ c, there is a morphism p' P ↪ Q' and hence there is a morphism q ∘ p' P ↪ G. It follows that G ⊭ c; this is a contradiction.
The following lemma shows that the counting method introduced in Definition <ref> is an extension of the one introduced in <cit.>.
Given a graph G and a constraint c = ∀(e ∅ ↪ P, d), the number of violations of c in G is given by
nviol_G(c) = |{q P ↪ G | q ⊭ d}|.
We show that {q P ↪ G | q ⊭ d} = {q P ↪ G | p = q ∘ e and q ⊭ d} with p ∅ → G.
* ⊆: Let q ∈ {q P ↪ G | q ⊭ d}. It holds that p = q ∘ e, since the empty morphism into G is unique. So q ∈ {q P ↪ G | p = q ∘ e and q ⊭ d}.
* ⊇: Let q ∈ {q P ↪ G | p = q ∘ e and q ⊭ d}; it follows immediately that q ∈ {q P ↪ G | q ⊭ d}.
Given a nested condition c over a graph C and a morphism p' C → C', for each morphism p C' → G it holds that
p ⊨ Shift(p', c) ⟺ p ∘ p' ⊨ c.
Given a plain rule ρ = (L ⟵l K ⟶r R) and a condition c over R, for each transformation t G ⟹_ρ, m H with comatch n it holds that
n ⊨ c ⟺ m ⊨ Shift(ρ, c).
Given a rule ρ = (L ⟵l K ⟶r R), an overlap PL = (i_P, i_L, PL) ∈ O(ρ, P), a condition d over P, and a morphism p PL → G. Then,
* p ∘ i_P ⊨ d ⟺ p ⊨ Pre_ρ(PL,d), and
* tr_t ∘ p ∘ i_P ⊨ d ⟺ p ⊨ Post_ρ(PL,d), where t G ⟹_ρ, p ∘ i_L H.
* Since Pre_ρ(PL,d) = Shift(i_P, d), the statement follows by using Lemma <ref>.
* If PL ∈ O^del(ρ, P), i.e., i_P(P) ∩ i_L(L ∖ l(K)) ≠ ∅, the morphism tr_t ∘ p ∘ i_P is not total and therefore cannot satisfy d.
If PL ∈ O^pres(ρ, P), then Post_ρ(PL,d) = Shift(span(t_1), Shift(tr_t_1 ∘ i_P, d)) with t_1 PL ⟹_ρ, i_L PR. There is the transformation t_2 G ⟹_span(t_1), p H. With Lemma <ref> it follows that p ⊨ Post_ρ(PL,d) ⟺ n ⊨ Shift(tr_t_1 ∘ i_P, d), where n is the comatch of t_2.
n ∘ tr_t_1 ∘ i_P = tr_t ∘ p ∘ i_P ⊨ d ⟺ n ⊨ Shift(tr_t_1 ∘ i_P, d) follows using Lemma <ref>.
Given a transformation t G ⟹_ρ, m H and a constraint c = ∀(e ∅ ↪ P, d), the following equations hold:
(1) |{p P → H | p is an impairment of the premise}| = |⋃̇_(c', (i_L, i_P(t_1), P(t_1)L)) ∈ Impair^P(ρ,c) Viol_m,i_P(t_1)(c')|
(2) {p P → G | p ⊨ d, tr_t ∘ p is total and tr_t ∘ p ⊭ d} = ⋃_(c', (i_L, i_P, PL)) ∈ Impair^C(ρ,c) Viol_m,i_P(c')
(3) {p P → G | p ⊭ d and tr_t ∘ p is not total} = ⋃_(c', (i_L, i_P, PL)) ∈ Repair^P(ρ,c) Viol_m,i_P(c')
(4) {p P → G | p ⊭ d, tr_t ∘ p is total and tr_t ∘ p ⊨ d} = ⋃_(c', (i_L, i_P, PL)) ∈ Repair^C(ρ,c) Viol_m,i_P(c')
Note that, given a transformation t G ⟹_ρ, m H, a morphism p P → H is an impairment of the premise of a constraint c = ∀(e ∅ ↪ P, d) if p ⊭ d and there is no q P → G with p = tr_t ∘ q.
The morphisms used throughout this proof are visualised in Fig. <ref>.
* Equation (1):
The main idea of the proof is to first show that each element p ∈{p P H |p is an impairment of the premise} has an associated p' ∈⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c'), i.e., p_|P(t_1) = _t ∘ p' and vice versa. In the second part we show that for two morphisms p_1, p_2 ∈{p P H |p is an impairment of the premise} there are two morphisms p_1', p_2' ∈⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c') so that p_1 is associated with p_1' and p_2 is associated with p_2'.
* ≤:
Let a morphism p ∈{p P H |p is an impairment of the premise} be given, i.e., p d and there is no morphism q P G with p = _t ∘ q. The graph P(t_1) = p^-1(p(P) ∖ n(R ∖ r(K)), with n being the comatch of t, already existed in G, i.e., there is a morphism p' P(t_1) G with _t ∘ p' = p_|P(t_1).
We show that p' ∈⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c'). Since p has been introduced by t, there is an overlap PR = (i_P, i_R, PR) with i_P(P) ∩ i_R(R ∖ r(K)) ≠∅ and a morphism n_1 PR H with p = n_1 ∘ i_P (and in particular p_|P(t_1) = n_1 ∘ i_P|P(t_1)) and n = n_1 ∘ i_R. By construction, the pair ((ρ, ∀(i_R R PR, _ρ^-1(PR,d))), (i_L,_t_1∘ i_P|P(t_1), PL)), where i_L is the comatch of t_1 PR ⟹_ρ^-1, i_R PL, is an element of (ρ,c).
Also, there is a transformation t_2 H ⟹_(t_1), n_1 G. Since p = n_1 ∘ i_P d it follows that n_1 _ρ^-1(PR,d)) using Lemma <ref>. So m_1 PL G, the comatch of t_2, does not satisfy ((t_1)^-1,_ρ^-1(PR,d)) (Lemma <ref>) and
m_1 ∘ tr_t_1 ∘ i_P|P(t_1) = tr_t_2 ∘ p_|P(t_1) = p' ∈ ⋃̇_(c', (i_L, i_P(t_1), P(t_1)L)) ∈ Impair^P(ρ,c) Viol_m,i_P(t_1)(c').
Let p_1, p_2 ∈{p P H |p is an impairment of the premise} be two morphisms with p_1 ≠ p_2.
If there are morphisms p_1' P_1(t_1) G, p_2' P_2(t_1) G with P_1(t_1) = p_1^-1(p_1(P) ∖ n(R ∖ r(K)), P_2(t_1) = p_2^-1(p_2(P) ∖ n(R ∖ r(K)),
_t ∘ p'_1 = p_1|P_1(t_1), _t ∘ p'_2 = p_2|P_2(t_1) and p'_1 ≠ p_2', the first part of this proof implies that p'_1, p'_2 ∈⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c').
If P(t_1) = p_1^-1(p_1(P) ∖ n(R ∖ r(K)) = p_2^-1(p_2(P) ∖ n(R ∖ r(K)) and there is a morphism p' P(t_1) G with _t ∘ p' = p_1|P(t_1) = p_2|P(t_1), p_1 and p_2 differ only in elements created by t. So there are different overlaps PR_1 = (i_P1, i_R1, PR_1)∈(ρ^-1,P) and PR_2 = (i_P2, i_R2, PR_2) ∈(ρ^-1,P) of P and R with i_P1(P) ∩ i_R1(R ∖ r(K)) ≠∅ and i_P2(P) ∩ i_R2(R ∖ r(K)) ≠∅ and morphisms n_1 PR_1 H and n_2 PR_2 H with p_1 = n_1 ∘ i_P1, p_2 = n_2 ∘ i_P2 and n = n_1 ∘ i_R1 = n_2 ∘ i_R2.
For each of these overlaps, (ρ, c) contains a pair (c',(ρ, (i_R,
i_P, PR)). The first part of this proof implies that p' ∈_m,i_P(c') for both of these pairs. So |⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c')| contains p' at least twice.
It follows that
|{p P → H | p is an impairment of the premise}|
≤ |⋃̇_(c', (i_L, i_P(t_1), P(t_1)L)) ∈ Impair^P(ρ,c) Viol_m,i_P(t_1)(c')|.
* ≥:
Let a morphism
p' P(t_1) G ∈⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c')
be given. We show that t introduces a new occurrence p P H of P that does not satisfy d, and corresponds to p' in the sense that _t ∘ p' = p_|P(t_1). The graph P(t_1) is a subgraph of P and there is an overlap PR = (i_R, i_P, PR) ∈(ρ^-1,P) with i_P(P) ∩ i_R(R∖ r(K) ≠∅ such that p' is an element of _m, i_P(t_1)((ρ, ∀(i_R R PR, _ρ(PR,d))) with PL = (i_L, i_P(t_1), PL) = (ρ, PR).
I.e., there is a morphism m_1 PL G with m = m_1 ∘ i_L, p' = m_1 ∘ i_P(t_1) and m_1 ((t_1), _ρ(PR,d)) with t_1 PL ⟹_ρ, i_L PR.
Also, there is the transformation t_2 G ⟹_(t), m_1 H and since m_1 ((t_1), _ρ(PR,d)), n_1 PR H, the comatch of t_2, does not satisfy _ρ(PR,d) (Lemma <ref>). So the newly introduced p = n_1 ∘ i_P P H does not satisfy d (Lemma <ref>) and _t ∘ p' = n_1 ∘ i_P|P(t_1).
Let p'_1, p'_2 P(t_1) G ∈ |⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c')| be two morphisms.
If p'_1 ≠ p'_2, the first part implies that there are newly created occurrences p_1, p_2 P H with p_1, p_2 d, _t ∘ p'_1 = p_1|P(t_1) and _t ∘ p'_2 = p_2|P(t_1). Since p'_1 ≠ p'_2 it follows that p_1 ≠ p_2.
If p'_1 = p'_2, there are two different overlaps PR_1 = (i_R1, i_P1, PR_1) ∈(ρ^-1,P) and PR_2 = (i_R2, i_P2, PR_2) ∈(ρ^-1,P) with PR_1 = PR_2, i_R1 = i_R2, i_R1(R ∖ r(K)) ∩ i_P1(P) ≠∅ and i_R2(R ∖ r(K)) ∩ i_P2(P) ≠∅ such that p'_1 arises from the application condition constructed with PR_1 and p'_2 arises from the application condition constructed with PR_2.
Using the first part of this proof, there are occurrences n_1 PR_1 H and n_2 PR_2 H with n_1 = n_2, _t ∘ p'_1 = _t ∘ p'_2 = n_1 ∘ i_P1|P(t_1) = n_2 ∘ i_P2|P(t_1), n_1 ∘ i_P1d and n_2 ∘ i_P2d. Since the overlaps PR_1 and PR_2 are different, i_P1≠ i_P2 and therefore, n_1 ∘ i_P1≠ n_2 ∘ i_P2.
I.e., if a morphism p' P(t_1) G occurs twice in |⋃̇_(c',(i_L, i_P(t_1), P(t_1)L)) ∈(ρ,c)_m,i_P(t_1)(c')| there are at least two newly introduced occurrences p P H of P with p d and _t ∘ p' = p_|P(t_1).
It follows that
|{p P → H | p is an impairment of the premise}|
≥ |⋃̇_(c', (i_L, i_P(t_1), P(t_1)L)) ∈ Impair^P(ρ,c) Viol_m,i_P(t_1)(c')|.
* Equation (2):
* ⊆:
Given a morphism p ∈{p P G | p d and _t ∘ p d}. Since p is not destroyed by t, there is an overlap PL = (i_P, i_L, PL) with i_L(L ∖ l(K)) ∩ i_P(P) = ∅ and a morphism m_1 PL G with m = m_1 ∘ i_L and p = m_1 ∘ i_P. So (ρ,c) contains an application condition ∀(PL,_ρ(PL,d) _ρ(PL,d) ) built by using this overlap.
It follows that m_1 _ρ(PL,d) and m_1 _ρ(PL,d) using Lemma <ref>.
So p ∈⋃_(c',(i_L, i_P, PL)) ∈(ρ,c)_m,i_P(c').
* ⊇:
Given a morphism p P G ∈⋃_(c',(i_L, i_P, PL)) ∈(ρ,c)_m,i_P(c'). We need to show that p d and _t ∘ p d. There is an overlap PL= (i_P, i_L, PL) with i_P(P) ∩ i_L(L∖ l(K)) = ∅ and a morphism m_1 PL G with p = m_1 ∘ i_P, m = p ∘ i_L, m_1 _ρ(PL,d) and m_1 _ρ(PL,d).
Lemma <ref> implies that p d and p _t ∘ p, so p ∈{p P G | p d, _t ∘ p is total and _t ∘ p d}.
* Equation (3):
* ⊆:
Given a morphism p ∈{p P G | p d and _t ∘ p is not total}.
Since _t ∘ p is not total, there is an overlap PL = (i_P, i_L, PL) with i_P(P) ∩ i_L(L∖ l(K)) ≠∅.
In particular, there is a annotated condition (∀(i_L L PL,_ρ^-1(PL,d)), PL) ∈(ρ,c)
and a morphism m_1 PL G with m = m_1 ∘ i_L and p = m_1 ∘ i_P.
Since p d Lemma <ref> implies that m_1 _ρ^-1(PL,d) and therefore, p = m_1 ∘ i_P ∈⋃_(c',(i_L, i_P, PL)) ∈(ρ,c)_m,i_P (c').
* ⊇:
Given a morphism p P G ∈⋃_(c',(i_L, i_P, PL)) ∈(ρ,c)_m,i_P (c'), i.e., there is an overlap PL = (i_L, i_P, PL) with i_L(L ∖ l(K)) ∩ i_P(P) ≠∅ and a morphism m_1 PL G with m = m_1 ∘ i_L, p = m_1 ∘ i_P and m_1 _ρ^-1(PL, d).
Lemma <ref> implies that p d and _t ∘ p is not total.
I.e., p ∈{p P G | p d and _t ∘ p is not total}.
* Equation (4):
Analogous to the proof of Equation (2).
§ EXAMPLE: DETAILED CONSTRUCTION OF APPLICATION CONDITIONS
In this section, we will present the detailed constructions of the impairment-indicating application conditions shown in Fig. <ref> and <ref>. The repair-indicating application conditions can be constructed in a similar way.
First, we explain our construction in a semi-formal way using a constraint of the form ∀(∅ ↪ P, ∃(e P ↪ Q)) and a rule ρ. All graphs and morphisms are displayed in Fig. <ref>.
The impairment-indicating application condition of the premise P checks whether a transformation introduces a new occurrence of P that does not satisfy the conclusion of c.
Such a condition is constructed by first computing every overlap PR = (i_P, i_R, PR) of the RHS and P so that an application of the inverse rule will not destroy P. If this graph PR occurs in the result graph of the transformation, ρ has introduced a new occurrence of P.
Then, we compute the overlap-induced pre-condition that checks whether this newly introduced occurrence of P satisfies the conclusion of c. For this purpose we use the shift along morphism, i.e. we compute all commutative squares (4) (Fig. <ref>) resulting in a condition of the form ⋁(∃(e' PR QR, ).
So we obtain the annotated application condition (∀(i_R R PR, ⋁(∃(e' PR QR, ), PR).
After shifting this condition to the LHS, (∀(i_L L PL,
⋁(∃(e' PR QR, ), _ρ(PR)) is an impairment-indicating application condition for the premise of c. Note that this is the only case in which _ρ(PR) is an overlap of the LHS of ρ and a proper subgraph P(t') of P.
The impairment-indicating application condition of the conclusion checks whether each occurrence of P that satisfies the conclusion in the original graph also satisfies the conclusion in the result graph of a transformation.
Here we first compute every overlap PL = (i_L, i_P, PL) of the LHS and P such that an application of the rule will not destroy the occurrence of P.
Then, we compute the overlap-induced pre-condition, which checks whether the occurrence of P satisfies the conclusion.
We compute all commutative squares (3) (Fig. <ref>) resulting in a condition of the form ⋁(∃(e' PL QL, ).
We use the overlap-induced post-condition to check whether the occurrence of P satisfies the conclusion of ρ after the transformation. We compute every commutative square (4) and shift the condition ⋀∃(e' PR QR, ) over ρ.
The annotated condition (∀(i_L L PL, ⋁(∃(e' PL QL, ) (ρ, ⋁∃(e' PR QR, ),
PL) is an impairment-indicating application condition of the conclusion.
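The enumeration of overlaps used throughout these constructions can be made concrete. The following Python sketch (ours, not part of the paper) enumerates all overlaps of two discrete graphs, i.e., node sets without edges; each overlap corresponds to a partial injective identification of nodes of R with nodes of P. The node names and the restriction to the discrete case are our simplifying assumptions.

```python
from itertools import combinations, permutations

def overlaps(R, P):
    """Enumerate all overlaps (PR, i_R, i_P) of two discrete graphs,
    given as lists of node names (assumed disjoint between R and P).
    Each overlap is determined by a partial injective match R -> P."""
    results = []
    for k in range(min(len(R), len(P)) + 1):
        for r_sub in combinations(R, k):        # nodes of R to be identified
            for p_img in permutations(P, k):    # their images in P
                glue = dict(zip(r_sub, p_img))
                pr = list(P) + [r for r in R if r not in glue]  # glued graph
                i_R = {r: glue.get(r, r) for r in R}  # embedding of R into PR
                i_P = {p: p for p in P}               # embedding of P into PR
                results.append((pr, i_R, i_P))
    return results

# e.g. a 2-node graph and a 1-node graph have exactly 3 overlaps
for pr, i_R, i_P in overlaps(["a", "b"], ["x"]):
    print(pr, i_R)
```

For graphs with edges, the partial matches would additionally have to preserve (and glue) edges, but the combinatorial structure of the enumeration is the same.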
§.§ Impairment-indicating application condition for the premise
We start by constructing the impairment-indicating application condition for the premise of _2 = ∃(P).
All overlaps of the RHS of the rule and P are shown in Fig. <ref>. We only consider overlaps where an application of the inverse rule would destroy the occurrence of P. In Fig. <ref>, these overlaps are highlighted with red boxes.
The impairment-indicating application conditions before shifting to the LHS of the rule are shown in Fig. <ref>. The associated overlap and its morphisms are implicitly given by the node names.
Shifting these application conditions to the LHS of the rule destroys the graph P. The graph P(t'), i.e., the parts of P that are not destroyed by the shift, is shown in Fig. <ref>. Again, the associated overlap of P(t') and L is implicitly given by the node names.
§.§ Impairment-indicating application condition for the conclusion
For the impairment-indicating application condition for the conclusion of _1 = ∀(P ∃(Q)), we consider every overlap of the LHS of the rule and P, such that an application of the rule does not destroy P. One such overlap is shown in Fig. <ref>. In fact, this is the only overlap of the LHS and P such that the resulting impairment-indicating application condition cannot be simplified to .
The condition _(_1) is shown in Fig. <ref>. When filtering this condition using Lemma <ref>, the second condition is replaced by , since a/a' is contained in two classes and we assume that _2 is always satisfied.
The condition _(_1) before shifting to the LHS of the rule is shown in Fig. <ref>. The resulting condition _(_1) is shown in Fig. <ref>. This condition is created by shifting the condition shown in Fig. <ref> to the LHS of the rule.
The condition ∀(PL, _(_1) _(_1)) is shown in Fig. <ref>. The condition _1 shown in Fig. <ref> is obtained by simplifying this condition using Lemma <ref>. Again, the associated overlaps are implicitly given by the node names.
|
http://arxiv.org/abs/2405.10017v1 | 20240516115945 | Mechanism for the Broadened Linewidth in Antiferromagnetic Resonance | [
"Yutian Wang",
"Jiang Xiao"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics and State Key Laboratory of Surface Physics, Fudan University, Shanghai 200433, China
[Corresponding author: ]xiaojiang@fudan.edu.cn
Department of Physics and State Key Laboratory of Surface Physics, Fudan University, Shanghai 200433, China
Institute for Nanoelectronics Devices and Quantum Computing, Fudan University, Shanghai 200433, China
Shanghai Research Center for Quantum Sciences, Shanghai 201315, China
Shanghai Branch, Hefei National Laboratory, Shanghai 201315, China
The linewidth of antiferromagnetic resonance (AFMR) is found to be significantly broader than that of ferromagnetic resonance (FMR), even when the intrinsic Gilbert damping parameter is the same for both systems. We investigate the origin of this enhanced damping rate in AFMR by studying a bipartite magnet model. Through analytical calculations and numerical simulations, we present three perspectives on understanding this linewidth broadening in AFMR: i) The non-dissipative Heisenberg exchange interaction develops a damping-like component in the presence of Gilbert damping, ii) The transverse component of the exchange coupling reduces the AFMR frequency, thereby increasing the damping rate, and iii) The antiferromagnetic eigenmode exhibits characteristics of a two-mode squeezed state, which is inherently linked to an enhanced damping rate. Our findings provide a comprehensive understanding of the complex dynamics governing magnetic dissipation in antiferromagnets and offer insights into the experimentally observed broadened linewidths in AFMR spectra.
Mechanism for the Broadened Linewidth in Antiferromagnetic Resonance
Jiang Xiao (萧江)
May 20, 2024
====================================================================
Introduction -
Ferromagnetic resonance (FMR) refers to the precession of magnetic moments in a ferromagnetic material around an external magnetic field at a specific resonance frequency <cit.>. This phenomenon is widely used to study magnetic properties and has various applications, including magnetic storage, spintronics, and magnetic resonance imaging <cit.>. The linewidth of FMR, which characterizes the rate at which the magnetization returns to equilibrium after being perturbed, is affected by several factors, with magnetic damping being a significant contributor. Therefore, the linewidth of FMR provides valuable information about the magnetic properties of materials, particularly regarding the investigation and characterization of magnetic damping mechanisms.
Just like ferromagnetic materials exhibit ferromagnetic resonance (FMR), antiferromagnetic materials demonstrate antiferromagnetic resonance (AFMR) <cit.>. The study of AFMR also offers insights into the dynamics and properties of antiferromagnetic materials <cit.>.
The resonance frequency of AFMR is influenced by factors such as the strength of the exchange interaction, magnetic anisotropy, and the applied field. Both the AFMR resonance frequency and its linewidth provide valuable information about antiferromagnetic order, spin wave excitations, magnetic anisotropies, and especially the spin-related interactions in antiferromagnetic materials.
The linewidth of the resonance is typically proportional to the resonance frequency. The damping rate, the ratio between the linewidth and the resonance frequency, roughly characterizes how many cycles the oscillation can perform before damping out.
This damping rate is usually determined by a phenomenological damping coefficient, such as the viscous coefficient in a driven mechanical oscillator or the Gilbert damping parameter in FMR. Mechanisms like spin pumping give a correction to the damping coefficient, enhancing the Gilbert damping parameter <cit.>. There are some other mechanisms, such as two-magnon scattering <cit.>, that will introduce an extra damping effect aside from Gilbert damping. In coupled systems, the damping rate of the normal modes does not typically exceed the damping rate of the individual subsystems, provided that the coupling is coherent and does not introduce additional dissipation <cit.>.
The antiferromagnet can be conceptualized as two magnetic sublattices interconnected through Heisenberg exchange coupling. In this context, it is not surprising that the antiferromagnet is likened to a pair of coupled oscillators, with each magnetic sublattice represented as an oscillator. From this viewpoint, the damping rate for the antiferromagnet is expected to be similar to that of coupled oscillators. However, this paper highlights that this seemingly intuitive perspective is actually incorrect. The distinction between coupled oscillators and the antiferromagnet lies in the fact that coupled oscillators couple two normal particles, while the antiferromagnet, in the mean-field approximation, couples a particle with its anti-particle <cit.>.
This dissimilarity also results in distinct characteristics in the linewidth of the spectrum in the antiferromagnet: the damping rate in the antiferromagnet can exceed the damping rate of individual sublattices.
The dissipation is built in via the Gilbert damping parameter α, which has been shown to be a more realizable parametrization of the magnetic dissipation <cit.>. An experimentally more relevant parameter for characterizing magnetic dissipation is the linewidth, which is directly related to the imaginary part of the eigenfrequency ω. Apart from a field/frequency-independent contribution, the linewidth is typically found to be proportional to the resonance frequency, Im ω ∝ Re ω. We define a damping rate as the ratio between the imaginary and the real part of the eigenfrequency:
α_eff ≡ Im ω / Re ω,
whose inverse characterizes the number of cycles before an excitation damps out. In FMR experiments, this damping rate is usually associated with the Gilbert damping constant used in the LLG equation above: α_eff = α. Naively, this identification makes sense because the Gilbert damping is the only dissipative mechanism in the exchange-coupled LLG equations in eqn:LLG, and the Heisenberg coupling does not introduce any extra dissipation. In this Letter, we emphasize that the damping rate for AF is actually larger than the sublattice Gilbert damping α, and more importantly, we provide an explanation for the linewidth broadening in AF from three different perspectives: from a microscopic torque analysis to a quasiparticle point of view, and then connecting to the generalized concept of spin wave polarization in antiferromagnets.
Bipartite Model -
We consider a minimal model of a bipartite magnet described by the following Hamiltonian
H = - ∑_{j=1}^{2} [ (K/2)(m_j·ẑ)^2 + B·m_j ] + J m_1·m_2,
where K, J, and B represent the uniaxial anisotropy along ẑ, the Heisenberg exchange, and the external magnetic field, respectively.
The dynamics of m_{1,2} are governed by the phenomenological Landau-Lifshitz-Gilbert (LLG) equations <cit.>
ṁ_1 = -γ m_1 × (B + K m_1^z ẑ - J m_2) + α m_1 × ṁ_1,
ṁ_2 = -γ m_2 × (B + K m_2^z ẑ - J m_1) + α m_2 × ṁ_2.
The Heisenberg exchange term favors the (anti)ferromagnetic configuration for negative (positive) J.
In the absence of external magnetic field, the resonance frequencies for the bipartite magnet are given by
ω_±^ FM = (K+J±J)(1+iα),
ω_±^ AF =
±√((J+K)^2-J^2)
+ iα(J+K).
The opposite signs of eigenfrequencies ω_±^ AF for the AF phase indicate their opposite polarizations, being right-handed and left-handed, respectively <cit.>.
From these expressions, one immediately sees that the damping rate for FM is equal to the sublattice Gilbert damping constant α, while the damping rate for AF is larger than α:
α_eff^ AF/α = 1/√(1-[J/(J+K)]^2) = cosh(2r) ≥ 1
with tanh(2r) = J/(J+K).
We should note that this enhanced damping for the eigenmodes does not affect the fluctuation-dissipation theorem, which is governed by the imaginary part of the linear response susceptibility function.
Several works <cit.> have touched on this damping rate enhancement in AF; however, the physical explanation and intuitive understanding of the enhancement remain elusive. Therefore, in this paper, we try to demonstrate the phenomenon based on a theoretically minimal bipartite model and provide the physical understanding of the damping rate enhancement in AF.
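As a quick numerical illustration of the enhancement in eqn:alphaeffAF (our own sketch, not code from the paper), one can diagonalize the linearized equation of motion given in Appendix C and compare the resulting damping rate with α cosh(2r); the parameter values are arbitrary.

```python
import numpy as np

K, J, alpha = 1.0, 10.0, 0.05            # anisotropy, exchange, Gilbert damping

# Linearized AF dynamics (see Appendix C): omega * D * Psi = H * Psi
D = np.diag([1 - 1j * alpha, 1 + 1j * alpha])
H = np.array([[K + J, J],
              [-J, -(K + J)]], dtype=complex)
omega = np.linalg.eigvals(np.linalg.solve(D, H))

r = 0.5 * np.arctanh(J / (J + K))        # squeezing parameter, tanh(2r) = J/(J+K)
print(np.abs(omega.imag / omega.real))   # damping rate of both AF modes
print(alpha * np.cosh(2 * r))            # predicted alpha_eff (leading order in alpha)
```

Both eigenmodes show a damping rate well above the bare α, matching α cosh(2r) up to corrections of order α².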
Simulation -
To illustrate the enhanced damping in AF, we consider a bipartite magnet initially in an antiferromagnetic state with a positive exchange constant (J > 0); the application of an external field can then alter the ground state. This process is scrutinized under two distinct conditions: when the field is aligned parallel (∥) and perpendicular (⊥) to the easy axis, as shown in fig:long_trans(a,e).
fig:long_trans(b) shows the simulated spin wave excitation spectrum with increasing longitudinal field B_∥ for J/K = 10. The system experiences a sequential transformation from the antiferromagnetic ground state through a spin-flop state into a ferromagnetic state at critical fields denoted as B_∥^a = √(K(2J-K)) and B_∥^b = 2J-K <cit.>. Here we focus on the linewidth of the resonances for these three phases. A line cut at frequency ω = 6K has three resonance peaks, in the AF, spin-flop, and ferromagnetic phases, respectively. Each peak has a different linewidth, even though it is for the same system with fixed Gilbert damping of α = 0.05 at the same resonance frequency.
fig:long_trans(c) shows the damping rate enhancement for the modes in fig:long_trans(b).
fig:long_trans(d) shows the calculated eigenfrequencies (see Appendix A) for these three phases on a complex-ω plane <cit.>. The FMR is right on the straight line of slope α, while the excitations upon the AF and spin-flop states have a higher damping rate than α.
fig:long_trans(f) shows a similar simulated spin wave excitation spectrum at J/K = 5 with the magnetic field transverse to the anisotropy axis.
In this case, the ground state evolves from antiferromagnetic to a canted configuration, eventually achieving a ferromagnetic state at a critical field B_⊥^c = 2J+K. At a line cut at fixed frequency (ω = 2K, 6K), there are two resonance peaks for canted antiferromagnetic and ferromagnetic ground states, respectively. It can be seen that the linewidth in the canted AF phase is larger than that in the FM phase.
fig:long_trans(g) shows the damping rate enhancement for the modes in fig:long_trans(f).
fig:long_trans(h) shows the calculated complex resonance frequency on a complex-ω plane (see Appendix B), which is above the slope α as well. Therefore, in the above examples for the longitudinal and transverse field cases in fig:long_trans, we see that the damping rate is indeed enhanced for the antiferromagnet, but also for the non-collinear spin-flop or canted antiferromagnet.
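The field-free version of this behavior can also be reproduced directly from eqn:LLG with a minimal time-domain integration, sketched below (ours, not the simulation code used for the figures; arbitrary units, zero external field). The ringdown of the transverse magnetization decays at the enhanced rate α(K+J) rather than αΩ.

```python
import numpy as np
from scipy.integrate import solve_ivp

K, J, alpha, gamma = 1.0, 10.0, 0.05, 1.0
z = np.array([0.0, 0.0, 1.0])

def mdot(m, Beff):
    # LLG in explicit (Landau-Lifshitz) form, equivalent to
    # m' = -gamma m x Beff + alpha m x m'
    tau = -gamma * np.cross(m, Beff)
    return (tau + alpha * np.cross(m, tau)) / (1 + alpha**2)

def rhs(t, y):
    m1, m2 = y[:3], y[3:]
    return np.concatenate([mdot(m1, K * m1[2] * z - J * m2),
                           mdot(m2, K * m2[2] * z - J * m1)])

# small tilt away from the AF ground state (m1 up, m2 down)
eps = 0.05
y0 = [eps, 0, np.sqrt(1 - eps**2), eps, 0, -np.sqrt(1 - eps**2)]
sol = solve_ivp(rhs, (0.0, 40.0), y0, max_step=1e-2, rtol=1e-9)
# the envelope of m1^x decays like exp(-alpha*(K+J)*t), faster than alpha*Omega
```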
To understand the mechanisms behind enhancing the damping rate, especially in the antiferromagnet, we employ three distinct analytical approaches. Firstly, a microscopic examination of how the non-dissipative exchange torque contributes to damping. Secondly, an analysis that distinguishes the effects of longitudinal and transverse components of the Heisenberg exchange interactions. Lastly, attention is focused on the influence of magnon squeezing and polarization in modulating the damping rate. Each perspective offers unique insights into the complex dynamics governing magnetic damping in antiferromagnetic materials.
Torque analysis -
The Heisenberg exchange coupling is inherently even in time reversal and thus non-dissipative by itself. However, the non-dissipative exchange torque can influence the damping behavior in a time-reversal-broken antiferromagnetic state. For an antiferromagnetic eigenmode of circular polarization, the magnetic moments m_{1,2} undergo circular trajectories at frequency Ω = √(K(2J+K)) around the axis ẑ with respective cone angles θ_{1,2} (see fig:torque). For m_1, the field-like precessional torque has magnitude Ωθ_1, and the damping-like Gilbert torque α m_1 × ṁ_1 ≃ αΩθ_1. Therefore, if only the Gilbert torque contributed to the damping, the damping rate would erroneously appear as the constant α, not the enhanced value α_eff^AF in eqn:alphaeffAF.
The mistake in the above analysis lies in the assumption that the exchange torque J m_1 × m_2 points perfectly in the x-y plane and is thus a purely field-like precessional torque. This is indeed the case if m_1, m_2, and ẑ all lie in the same plane, or equivalently if m_1 and m_2 point in opposite directions in the x-y projection plane (see fig:torque). However, when taking the magnetic damping into account, an important observation for an antiferromagnetic dynamical state is that the three vectors m_{1,2}(t) and ẑ do not lie in the same plane. Instead, m_2 becomes m'_2, which has an extra (average) phase delay of α relative to m_1 in the projection plane, in addition to the original π phase, as shown in fig:torque(b) (see Appendix C).
This misalignment between m_1 and m_2 leads to a tilting of the exchange torque out of the precessional plane, thus giving rise to a damping-like component of magnitude J(αθ_2) on m_1.
Consequently, the total damping-like torque on _1 now reads
αΩθ_1 + α Jθ_2
= α (K+J) θ_1
= K+J/√(K(2J+K))αΩθ_1,
where θ_2/θ_1 = J/(K+J+Ω) is used <cit.>.
In comparison with the precessional torque Ωθ_1 on m_1, we recover the enhanced damping rate as in eqn:alphaeffAF. A similar analysis applies to m_2 as well.
Longitudinal & transverse coupling -
An alternative understanding of the enhanced damping rate in AF is to rewrite the exchange coupling as:
J_1 ·_2
= Jm_1^zm_2^z + J' (m_1^xm_2^x + m_1^ym_2^y),
separating the longitudinal and transverse exchange coupling. For the typical isotropic Heisenberg interaction, J' = J. Consequently, the eigenfrequencies in eqn:frequency_nosf are rewritten as
ω_±^ FM = (1+iα)(K+J±J'),
ω_±^ AF = ±√((K+J)^2- J'^2) +iα(K+J),
which indicate that the longitudinal and transverse coupling are qualitatively different. More interestingly, for AF case, the transverse coupling J' does not affect the dissipative imaginary part, but reduces the real part of the eigenfrequencies.
Consequently, the damping rate for AF spin wave becomes larger than α. The reduction of the eigenfrequency due to J' also implies the gap difference between the left- and right-circular AF modes ω_±^ AF reduces because of the transverse coupling, manifesting a level attraction behavior <cit.>.
For comparison, in the FM case, the transverse coupling J' affects both real and imaginary parts in the same fashion, thus leaving the damping rate unchanged.
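This splitting of roles can be seen by reusing the linearized matrix of Appendix C with the longitudinal and transverse couplings treated as independent parameters (our sketch): the imaginary part stays pinned near α(K+J) while the real part shrinks as J' grows.

```python
import numpy as np

K, J, alpha = 1.0, 10.0, 0.05
D = np.diag([1 - 1j * alpha, 1 + 1j * alpha])
for Jp in (0.0, 5.0, 10.0):              # transverse coupling J'
    H = np.array([[K + J, Jp], [-Jp, -(K + J)]], dtype=complex)
    w = np.linalg.eigvals(np.linalg.solve(D, H))
    w = w[np.argmax(w.real)]             # right-handed mode
    print(f"J'={Jp:4.1f}: Re w={w.real:7.3f}, Im w={w.imag:7.4f}")
```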
Two-mode Squeezing - In terms of the magnon creation and annihilation operators a_j, a_j^† for m_{1,2}, the Hamiltonian for the bipartite magnet can be written as
Ĥ = (K+|J|)(a_1^† a_1 + a_2^† a_2) + |J'| × { -(a_1^† a_2 + a_1 a_2^†) (FM);  (a_1^† a_2^† + a_1 a_2) (AF) },
which shows that the transverse J' coupling is a particle-number-conserved coupling for FM but particle-number-non-conserved coupling for AF.
Hamiltonian with the particle-number-non-conserved form is known to give rise to mode squeezing. In the present case, the transverse coupling in AF causes the two-mode squeezing between the excitations of m_1 and m_2 with squeezing parameter r: tanh(2r) = |J'|/(K+|J|) (see Appendix D). Surprisingly, this squeezing parameter r is the same r as in the damping rate enhancement in eqn:alphaeffAF. This means that the damping rate enhancement in AF is related to the squeezing caused by the particle-number-non-conserving coupling. In contrast, the FM Hamiltonian has no squeezing, so there is no damping rate enhancement in FM.
Similar to the two-mode squeezing of the antiferromagnetic magnon discussed here, there can be single-mode squeezing for the ferromagnetic magnon. The squeezing Hamiltonian can be found in ferromagnetic systems with anisotropy, inhomogeneous magnetic texture, or dipolar interactions. Such ferromagnetic squeezing also leads to a damping rate enhancement of ferromagnetic spin waves. Because the squeezing of ferromagnetic magnons also implies an elliptical polarization of the spin wave, it is no surprise that damping rate enhancement is also found for non-circular ferromagnetic spin waves <cit.> and the soft modes in magnetic skyrmions with inhomogeneous magnetic texture <cit.>.
Conclusion -
Our results reveal that the dissipationless exchange interaction can significantly influence the dissipative properties of antiferromagnetic resonance, and more broadly, the spin excitations in systems with inhomogeneous magnetic ground states. This effect can be elucidated from both microscopic and macroscopic viewpoints. At the microscopic level, the intrinsic Gilbert damping slightly alters the antiferromagnetic mode, allowing the originally non-dissipative exchange torque to develop a damping-like component. At the macroscopic level, the dynamical (transverse) exchange interaction between the two magnetic sublattices serves to decrease the antiferromagnetic resonance frequency, thereby enhancing the damping rate relative to both the linewidth and the resonance frequency. Additionally, the increased damping in antiferromagnetic systems is linked to the fact that the AF eigenmode exhibits characteristics of a squeezed mode.
Acknowledgements -
J. X. acknowledges fruitful discussion with Gerrit Bauer. This work was supported by the National Key Research and Development Program of China (Grant no. 2022YFA1403300) and Shanghai Municipal Science and Technology Major Project (Grant No.2019SHZDZX01).
§ APPENDIX A: COMPLEX FREQUENCIES ON LONGITUDINAL FIELD SCAN
For an antiferromagnetic bipartite magnet system, when the external field is applied parallel to the easy axis (∥), there are two phase transitions: from the antiferromagnet (AF) to the spin-flop (SF) phase at B_∥^a = √(K(2J-K)), and from SF to the ferromagnet at B_∥^b = 2J-K.
From the linearized coupled LLG equations, we can get the complex frequencies for the spin wave excitation in these three phases
ω_±^AF = [±Ω + iα(J+K)](1 ± B/Ω),   B ≤ B_∥^a,
ω_+^SF = 2√(J J_B) + iα(J+J_B),   B_∥^a < B < B_∥^b,
ω_±^FM = (B+K-J±J)(1+iα),   B ≥ B_∥^b,
where Ω=√(K(2J+K)) is the antiferromagnet eigenfrequency and 2J_B = (2J+K)(B/B_∥^b)^2-K. The opposite signs of eigenfrequencies ω_±^AF for the AF phase indicate their opposite polarizations, being right-handed and left-handed, respectively. In the SF phase, there is also a zero-frequency Goldstone mode with ω_-^SF = 0 precessing about the field direction.
§ APPENDIX B: COMPLEX FREQUENCIES ON TRANSVERSE FIELD SCAN
When the external field is applied perpendicular to the easy axis (⊥), there is only one phase transition at B_⊥^c=2J+K, separating the canted antiferromagnet (CAF) phase from the saturated ferromagnetic (FM) phase. The complex frequencies for the CAF and FM phases are calculated as
ω_±^CAF = √(K(2J+K) + B^2(J±J-K)/(2J+K)) + iα[K+J ± B^2(2J∓K)/(2(2J+K)^2)],   B < B_⊥^c,
ω_±^FM = √((B-J±J)(B-J±J-K)) + iα(B-J±J-K/2),   B ≥ B_⊥^c.
§ APPENDIX C: THE EXTRA DAMPING-RELATED PHASE DELAY BETWEEN THE TWO MAGNETIC SUBLATTICES IN AF
The equation of motion derived from the linearized coupled LLG equations for the bipartite antiferromagnet is
-i (d/dt) [[1-iα, 0], [0, 1+iα]] Ψ = [[K+|J|, |J'|], [-|J'|, -K-|J|]] Ψ,
where Ψ = (ψ_1, ψ_2)^T with ψ_j=m_j^x+i m_j^y the transverse magnetization component in complex form. The eigenmodes of the EOM above are
Ψ_+ = [-cosh r, (1-iα) sinh r]^T e^{+iΩt} ≃ [-cosh r, e^{-iα} sinh r]^T e^{+iΩt},
Ψ_- = [-sinh r, (1-iα) cosh r]^T e^{-iΩt} ≃ [-sinh r, e^{-iα} cosh r]^T e^{-iΩt},
with tanh(2r)=|J'|/(K+|J|). These eigenstates mean that when the Gilbert damping is taken into account, there is an extra phase delay of α between the two magnetic sublattices, in addition to the original π phase.
In comparison, we consider the equation of motion for the bipartite ferromagnet
-i (d/dt) (1-iα) Ψ = [[K+|J|, -|J'|], [-|J'|, K+|J|]] Ψ,
whose eigenmodes are
Ψ_± = 1/√(2)(1
± 1) e^iω_±^FM.
Therefore, the eigenstates for the FM are not modified by the Gilbert damping, different from the AF case above.
§ APPENDIX D: TWO-MODE SQUEEZING IN AF
The magnon Hamiltonian for the bipartite antiferromagnet can be written as
Ĥ = (K+|J|)(a_1^† a_1 + a_2 a_2^†) + |J'|(a_1^† a_2^† + a_1 a_2).
It can be diagonalized by Bogoliubov transformation
[b_1; b_2; b_1^†; b_2^†] = [[cosh r, 0, 0, -sinh r], [0, cosh r, -sinh r, 0], [0, -sinh r, cosh r, 0], [-sinh r, 0, 0, cosh r]] [a_1; a_2; a_1^†; a_2^†],
with tanh(2r)=J'/(K+J). The parameter r is assumed to be real for simplicity. The diagonalized Hamiltonian is
Ĥ = Ω(b_1^† b_1 + b_2 b_2^†).
By introducing the two-mode squeezing operator with squeezing parameter r
U_r = e^{r(a_1 a_2 - a_1^† a_2^†)},
the Bogoliubov transformation can be presented as b_j = U_r^† a_j U_r. Therefore, the AF eigenstates are squeezed states with squeezing parameter r.
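As a consistency check of this transformation (our sketch, with illustrative parameter values), the anomalous term cancels and the diagonal coefficient reduces to Ω = √((K+|J|)^2 - |J'|^2) precisely when tanh(2r) = |J'|/(K+|J|):

```python
import numpy as np

K, J, Jp = 1.0, 10.0, 10.0               # isotropic exchange: J' = J
r = 0.5 * np.arctanh(Jp / (K + J))
u, v = np.cosh(r), np.sinh(r)

# substitute a_1 = u b_1 - v b_2^dag, a_2 = u b_2 - v b_1^dag into H:
anomalous = -2 * (K + J) * u * v + Jp * (u**2 + v**2)  # coeff. of b1^dag b2^dag + h.c.
Omega = (K + J) * (u**2 + v**2) - 2 * Jp * u * v       # coeff. of b1^dag b1 (and b2)

print(anomalous)                           # ~ 0: off-diagonal term cancels
print(Omega, np.sqrt((K + J)**2 - Jp**2))  # both equal sqrt(21)
```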
|
http://arxiv.org/abs/2405.10283v1 | 20240516174153 | Power-law relaxation of a confined diffusing particle subject to resetting with memory | [
"Denis Boyer",
"Satya N. Majumdar"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"q-bio.PE"
] | |
http://arxiv.org/abs/2405.08716v1 | 20240514155822 | Commuting Clifford actions | [
"John W. Barrett"
] | math-ph | [
"math-ph",
"hep-th",
"math.MP",
"math.QA",
"15A66, 58B34"
] |
Commuting Clifford actions
John W. Barrett
School of Mathematical Sciences
University of Nottingham
University Park
Nottingham NG7 2RD, UK
E-mail john.barrett@nottingham.ac.uk
14th May 2024
===================================================================================================================================================================
It is shown that if a vector space carries commuting actions of two Clifford algebras, then the quadratic monomials using generators from either Clifford algebra determine a spinor representation of an orthogonal Lie algebra.
Examples of this construction have applications to high energy physics, particularly to the standard model and unification. It is shown how to use Clifford data to construct spectral triples for the Pati-Salam model that admit an action of Spin(10).
§ INTRODUCTION
A Clifford module is a representation of a Clifford algebra on a complex vector space. A standard result is that taking quadratic monomials in the generators (gamma matrices) determines a spinor representation of the corresponding orthogonal Lie algebra.
This paper studies commuting actions of two real Clifford algebras,
Cl(p_1,q_1) and Cl(p_2,q_2), on a vector space ℋ. According to Theorems <ref> and <ref>, the quadratic monomials in pairs of generators taken from either Clifford algebra determine a spinor representation of the Lie algebra 𝔰𝔬(q_1+p_2,p_1+q_2) on ℋ. This result holds despite the fact that the gamma matrices from one Clifford algebra commute with the gamma matrices of the other rather than anticommute, which would be the case if they both belonged to one larger Clifford algebra. Note that in some (but not all) cases the tensor product of two Clifford algebras can be made into one big Clifford algebra <cit.>, but this is not the result reported here (although this does come into the proof of Theorem <ref>).
The general results in this paper were inspired by several different examples (for specific p_i, q_i) that appear in <cit.> in connection with the fields of particle physics. In particular, one can take commuting actions of Cl(0,q) and Cl(p,0) with any p+q=10, giving a representation of the unification group Spin(10) on a Hilbert space formed by the fermions of the standard model.
For the case q=4 and p=6, the
Clifford algebras are Cl(0,4) ≅ M_2(ℍ) and Cl(6,0) ≅ M_4(ℍ), and the spin groups for each factor are Spin(4) and Spin(6).
Thus this structure determines, in a natural way, a representation of the Pati-Salam group G_PS = Spin(4)×Spin(6)/ℤ_2 on ℋ as a subgroup of Spin(10). The two subgroups lie in the even parts of these Clifford algebras, which are ℍ⊕ℍ and M_4(ℂ).
It is shown that a simple construction from this determines real spectral triples with Hilbert space ℋ and algebra 𝒜 = ℍ⊕ℍ⊕M_4(ℂ). The Clifford structures allow one to define an action of the unification group Spin(10) on the Hilbert space of these spectral triples.
§ SPIN GROUP ACTIONS
§.§ Clifford modules
A real Clifford algebra <cit.> is determined by a real vector space V with a quadratic form η (here assumed non-degenerate). A Clifford module <cit.> is a Clifford algebra together with a representation of the algebra in a complex vector space ℋ. Thus there is a map c V→(ℋ), making ℋ a left module for the algebra. Choosing a basis {e^a} for V, the image of a basis vector is the `gamma matrix' γ^a=c(e^a). The gamma matrices satisfy
γ^aγ^b+γ^bγ^a=2η^ab.
A Clifford module is said to be unitary if ℋ is a Hilbert space and the operation * of Hermitian conjugation in (ℋ) is an involution on V. This makes the Clifford algebra a *-algebra.
The standard example of a Clifford algebra Cl(p,q) is for V=ℝ^n with the quadratic form η a diagonal matrix, with p diagonal entries +1 and q diagonal entries -1, with n=p+q. The unitary structure on a module for this Clifford algebra is chosen so that the gamma matrices c(e^a), with e^a the standard basis vectors, are unitary matrices (and are thus either Hermitian or anti-Hermitian).
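For concreteness, one standard way to build such a unitary module is to construct mutually anticommuting Hermitian matrices recursively from Pauli matrices and multiply the last q of them by i. The sketch below is ours, not from the paper, and the function names are illustrative.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def hermitian_gammas(n):
    """n mutually anticommuting Hermitian matrices squaring to +1."""
    if n == 1:
        return [X]
    if n == 2:
        return [X, Y]
    g = hermitian_gammas(n - 2)
    d = g[0].shape[0]
    return [np.kron(Z, m) for m in g] + [np.kron(X, np.eye(d)), np.kron(Y, np.eye(d))]

def clifford_module(p, q):
    """Unitary gamma matrices for Cl(p,q): first p square to +1, last q to -1."""
    g = hermitian_gammas(p + q)
    return g[:p] + [1j * m for m in g[p:]]

G = clifford_module(1, 3)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for a in range(4):
    for b in range(4):
        assert np.allclose(G[a] @ G[b] + G[b] @ G[a], 2 * eta[a, b] * np.eye(4))
```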
The Clifford modules have standard definitions of a chirality operator γ and an antilinear map J called the real structure, with properties that depend on the signature of η, in fact on the parameter s=q-p 8 <cit.>. These properties are J^2=ϵ, Jγ^a=ϵ'γ^a J and Jγ=ϵ”γ J, with the signs given in Table <ref>.
It is worth explaining a bit more detail about these operators, as this will be useful later. The product of all of the gamma matrices in a Clifford module is denoted P=γ^1γ^2…γ^n. A calculation shows that P^2=(-1)^s(s+1)/2, so the chirality operator is
γ= i^s(s+1)/2P.
Obviously the definition of P depends on the ordering of the basis, so P and γ are only determined by the Clifford module up to sign.
An important point is that P is in the Clifford algebra, so it is useful to express the structures in terms of P. In particular,
JPJ^-1P^-1=ϵ'.
In the even s cases, the table lists one real structure but there is a second one defined by
J̃ = JP.
This obeys J̃^2 = ϵ''ϵ, J̃γ^a = -γ^a J̃ and J̃γ = ϵ''γ J̃.
§.§ Spin representations
The quadratic monomials of Cl(p,q),
T^ab=1/2γ^aγ^b
with a ≠ b, generate the Lie algebra 𝔰𝔬(p,q). Since T^ab=-T^ba, a spanning set is obtained by taking a<b. A calculation shows that the generators obey
[T^ab,T^cd]=η^bc T^ad-η^ac T^bd+η^bd T^ca-η^ad T^cb.
Note that the generators T̃^ab=-T^ab obey the same relations but for the matrix η̃^ab=-η^ab, which demonstrates the isomorphism 𝔰𝔬(p,q)≅𝔰𝔬(q,p).
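These bracket relations (together with the convention T^aa = 0) can be checked mechanically; the sketch below (ours) does so for a module of Cl(1,3), built from Pauli matrices.

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# gamma matrices for Cl(1,3): the first squares to +1, the rest to -1
G = [np.kron(Z, I2), np.kron(1j * X, X), np.kron(1j * X, Y), np.kron(1j * X, Z)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def T(a, b):
    return 0.5 * G[a] @ G[b] if a != b else np.zeros((4, 4), complex)

for a, b, c, d in product(range(4), repeat=4):
    lhs = T(a, b) @ T(c, d) - T(c, d) @ T(a, b)
    rhs = (eta[b, c] * T(a, d) - eta[a, c] * T(b, d)
           + eta[b, d] * T(c, a) - eta[a, d] * T(c, b))
    assert np.allclose(lhs, rhs)
```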
If the Clifford module is irreducible, the elements of ℂ^k are called Dirac spinors (or pinors) and the representation of 𝔰𝔬(p,q) is called the Dirac spinor (or pinor) representation.
If p+q is even (and greater than 0), the Dirac spinor representation splits into two inequivalent representations of 𝔰𝔬(p,q) by the eigenspaces of the operator P. These representations are called Weyl spinor (or semispinor) representations, and are irreducible.
Since P is an invariant polynomial in the generators of 𝔰𝔬(p,q),
P = (2^{n/2}/n!) ϵ_{a_1… a_n} T^{a_1a_2}T^{a_3a_4}… T^{a_{n-1}a_n},
it is a Casimir operator and so its eigenvalue characterises the representation.
If p+q is odd, the Dirac spinor is already an irreducible representation of 𝔰𝔬(p,q). Although in the odd case there are two inequivalent representations of the Clifford algebra, distinguished by the eigenvalues ±1 of the chirality operator, the corresponding representations of 𝔰𝔬(p,q) are equivalent.
Each irreducible representation of 𝔰𝔬(p,q) has structure maps P and/or J commuting with the Lie algebra action. These are shown in Table <ref>. Note that the table is symmetrical if one replaces s with -s. For s=2,6, the eigenvalues of P are ± i and so the map J does not survive the projection to the Weyl spinors. However, J relates the two Weyl spinor representations as complex conjugates of each other.
§.§ Commuting actions
Consider a finite-dimensional vector space ℋ with commuting actions of two Clifford algebras. The following result is that this determines a representation of a spin group using the generators from both algebras.
Let ℋ be a Clifford module for both Cl(p_1,q_1) and Cl(p_2,q_2), such that the two actions commute. Denote the gamma matrices for the first action by {Γ_1^a} and for the second action {Γ_2^α}.
The quadratic monomials formed from pairs of different generators in the set {Γ_1^a,Γ_2^α} determine a representation of the Lie algebra 𝔰𝔬(q_1+p_2,p_1+q_2) on ℋ.
Define generators of the Lie algebra by T_1^ab=1/2Γ_1^aΓ_1^b, T_2^αβ= 1/2Γ_2^αΓ_2^β (for a ≠ b and α ≠ β) and U^aβ=1/2Γ_1^aΓ_2^β for all a,β. For convenience, define T_1^aa=T_2^αα=0. Then the Lie brackets are
[T_1^ab,T_1^cd] =η_1^bc T_1^ad-η_1^ac T_1^bd+η_1^bd T_1^ca-η_1^ad T_1^cb
[T_2^αβ,T_2^γδ] =η_2^βγ T_2^αδ-η_2^αγ T_2^βδ+η_2^βδ T_2^γα-η_2^αδ T_2^γβ
[T_1^ab,U^cδ] =η_1^bc U^aδ-η_1^ac U^bδ
[U^aβ,T_2^γδ] =η_2^βγ U^aδ-η_2^βδ U^aγ
[U^aβ,U^cδ] =η_1^ac T_2^βδ-η_2^βδ T_1^ca
Note that these are not the generators of 𝔰𝔬(p_1+p_2,q_1+q_2) because of the signs in the last line. Instead, the generators are mapped to those of 𝔰𝔬(q_1+p_2,p_1+q_2) by defining n_1=p_1+q_1 and
T^α+n_1,β+n_1=T_2^αβ
T^a,β+n_1=U^aβ
T^α+n_1,b=-U^bα
and η=(-η_1)⊕η_2.
The relative signature flip is a bit surprising, and one might wonder what happens with the commuting actions of three Clifford algebras. However in that case, the commutators of quadratic expressions do not close to a Lie algebra.
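The sign flip can be verified directly in a small case (our sketch): commuting actions of Cl(0,2) and Cl(2,0) on ℂ^2⊗ℂ^2, whose quadratic monomials, mapped as in the proof, close on 𝔰𝔬(4), since here η = (-η_1)⊕η_2 = diag(+,+,+,+).

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

G1 = [np.kron(1j * X, I2), np.kron(1j * Y, I2)]   # Cl(0,2): squares to -1
G2 = [np.kron(I2, X), np.kron(I2, Y)]             # Cl(2,0): squares to +1, commutes with G1

eta = np.eye(4)   # (-eta_1) + eta_2 = diag(+,+,+,+): the algebra is so(4)

def T(A, B):
    if A == B:
        return np.zeros((4, 4), complex)
    if A < 2 and B < 2:
        return -0.5 * G1[A] @ G1[B]         # T^{ab} = -T_1^{ab}
    if A >= 2 and B >= 2:
        return 0.5 * G2[A - 2] @ G2[B - 2]  # T_2^{alpha beta}
    if A < 2:
        return 0.5 * G1[A] @ G2[B - 2]      # U^{a beta}
    return -0.5 * G1[B] @ G2[A - 2]         # -U^{b alpha}

for A, B, C, D in product(range(4), repeat=4):
    lhs = T(A, B) @ T(C, D) - T(C, D) @ T(A, B)
    rhs = (eta[B, C] * T(A, D) - eta[A, C] * T(B, D)
           + eta[B, D] * T(C, A) - eta[A, D] * T(C, B))
    assert np.allclose(lhs, rhs)
```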
The representation determined by Theorem <ref> is determined by spinor representations of the Lie algebra, as one might expect. The Hilbert space splits into irreducible representations of the two Clifford algebras, so it suffices to consider this case.
Suppose that there are irreducible representations of (p_1,q_1) on ^k_1 with gamma matrices γ_1^a, and (p_2,q_2) on ^k_2 with gamma matrices γ_2^α. Then the representation of the Lie algebra (q_1+p_2,p_1+q_2) on ^k_1⊗^k_2 given by Theorem <ref> with Γ_1^a=γ_1^a⊗ 1 and Γ_2^α=1⊗γ_2^α is equivalent to a Dirac spinor representation if n_1=p_1+q_1 is even or n_2=p_2+q_2 is even. If both n_1 and n_2 are odd, the representation is one of the two Weyl spinor representations.
Consider the odd-odd case. According to <cit.>, an irreducible Clifford module of type (p,q)=(q_1+p_2,p_1+q_2) is given by the gamma matrices
[ 0 γ_1^a; -γ_1^a 0 ]⊗ 1, [ 0 1; 1 0 ]⊗γ_2^α
Note that the first set of matrices square to -η_1^aa, so the signature for these is reversed.
The quadratic monomials for this Clifford module are
1/2[ -γ_1^aγ_1^b 0; 0 -γ_1^aγ_1^b ]⊗ 1, 1/2[ γ_1^a 0; 0 -γ_1^a ]⊗γ_2^β, 1/2[ 1 0; 0 1 ]⊗γ_2^αγ_2^β.
generating the Lie algebra 𝔰𝔬(q_1+p_2,p_1+q_2).
The matrix t=[ 1 0; 0 -1 ]⊗ 1 commutes with these generators, and so the ± eigenspaces of t are the two different spinor representations. However, the t=1 eigenspace is exactly the representation (<ref>), which proves the result.
Now suppose n_1 is even. Then an irreducible Clifford module of type (q_1+p_2,p_1+q_2) is given by the gamma matrices
iγ_1^a⊗ 1, γ_1⊗γ^β_2
using the chirality operator γ_1, which anti-commutes with the γ_1^a.
Therefore the quadratic expressions formed from these,
S^ab=-1/2γ_1^aγ_1^b⊗ 1,
S^a, n_1+β= i/2γ_1^aγ_1⊗γ^β_2,
S^n_1+α, n_1+β=1/2 1⊗γ^α_2γ^β_2
generate the Dirac spinor representation of (q_1+p_2,p_1+q_2). Define the operator V=exp(iπγ_1/4)⊗ 1 (which is unitary if the Clifford module is unitary). This has the property that V(γ^a_1⊗ 1)V^-1=iγ_1γ^a_1⊗ 1. This operator determines the equivalent representation
VS^abV^-1=-1/2γ_1^aγ_1^b⊗ 1=-T_1^ab
VS^a,β+n_1V^-1=1/2γ_1^a⊗γ_2^β=U^aβ
VS^α+n_1,β+n_1V^-1=1/2 1⊗γ^α_2γ^β_2=T_2^αβ
which agrees exactly with (<ref>). There is a similar argument if n_2 is even.
It remains to construct the structure maps for the spin representations constructed in Theorem <ref>.
It is useful to write p=q_1+p_2 and q=p_1+q_2. Then, s=q-p=s_2-s_1, with s_i=q_i-p_i. For the cases when s is even, the operator P can be defined so that its eigenvalues distinguish the Weyl spinor submodules in the results of Theorem <ref>.
In the even-even case, multiplying the gamma matrices in (<ref>) gives
P=(-1)^n_1/2P_1⊗ P_2,
which is unchanged by the unitary transformation, i.e., VP V^-1=P. In the odd-odd case, multiplying the gamma matrices shows that
P=(-1)^(n_1-1)/2 P_1⊗ P_2.
An antilinear structure map J for the tensor product is obtained from a pair of structure maps, one for each factor, providing they either both commute with the gamma matrices or both anti-commute with the gamma matrices. The formula
J=J_1⊗ J_2
commutes with the Lie algebra in the even-even cases, the odd-odd cases where s=0 or 4, or the even-odd (and odd-even) cases
where s_1 (or s_2) is 3 or 7.
In the even-odd cases with s_2=1 or 5 one has to take
J = J̃_1 ⊗ J_2,
and similarly in the odd-even cases with s_1=1 or 5,
J = J_1 ⊗ J̃_2.
The remaining odd-odd cases don't have a general formula for J because the Weyl spinor representation does not have an antilinear structure.
In the even-even cases there is also the antilinear structure map
J̃ = JP = (-1)^{n_1/2} J̃_1 ⊗ J̃_2.
§ EXAMPLES
Instances of this construction in particle physics appear in <cit.>, including Examples <ref> and <ref>.
The quaternion algebra ℍ has an irreducible representation σ: ℍ → M_2(ℂ). Applied to the
imaginary quaternions q_1,q_2,q_3 ∈ ℍ this gives the gamma matrices γ_1^a=σ(q_a) for a Clifford algebra Cl(0,3) acting in ℂ^2. The number γ_2^1=i acting in ℂ is the single gamma matrix for the type (0,1) Clifford algebra Cl(0,1) ≅ ℂ. The tensor product of the two is ℍ⊗_ℝℂ = M_2(ℂ), which acts in ℂ^2⊗ℂ ≅ ℂ^2. The generators of 𝔰𝔬(3,1) are T_1={1/2σ(q_1),1/2σ(q_2),1/2σ(q_3)} and U={i/2σ(q_1),i/2σ(q_2),i/2σ(q_3)} and the representation in ℂ^2 is a Weyl spinor representation. This agrees with the holomorphic representation of SL(2,ℂ).
The quaternions and octonions can be considered real vector spaces. Let ℋ = ℂ⊗_ℝℍ⊗_ℝ𝕆, a complex vector space by scalar multiplication in the first factor. This has the structure of a non-associative algebra (the Dixon algebra). Left multiplication by the imaginary octonions generates a module for Cl(0,7) and left multiplication by i times the imaginary quaternions generates a module for Cl(3,0).
Applying Theorem <ref> gives a representation of 𝔰𝔬(10) on ℋ ≅ ℂ^16⊗ℂ^2 that is a Weyl spinor representation with multiplicity two.
The next example is new.
Consider the tensor product of unitary irreducible modules for Cl(4,0) and Cl(0,6), as in Theorem <ref>. This results in the Dirac spinor representation of 𝔰𝔬(10) on ℋ = ℂ^4⊗ℂ^8 = ℂ^32.
This construction has a distinguished subgroup of Spin(10), the Pati-Salam group
G_PS = Spin(4)×Spin(6)/ℤ_2. It is shown here that Example <ref> can be used to construct spectral triples for the internal space of the Pati-Salam model.
A spectral triple is defined using ℋ, J=J_1⊗ J_2, γ=γ_1⊗γ_2 and algebra
𝒜 = Cl(4,0)_even ⊕ Cl(0,6)_even ≅ (ℍ⊕ℍ) ⊕ M_4(ℂ).
Denote the projection onto the positive chirality eigenspace for the second action by π_2^+, and write π_2^-=1-π_2^+. Then Jπ_2^+=π_2^-J.
The left action of a=(a_1,a_2)∈𝒜 on ℋ is defined to be
l(a)=c_1(a_1)⊗π_2^+ + 1⊗ c_2(a_2)π_2^-,
The right action of the spectral triple can be computed from it
r(a)= Jl(a^*)J^-1=c_1(a_1^*)⊗π_2^- + 1⊗ c_2(a_2^*)π_2^+
and it can be seen that the zeroth-order condition
[l(a),r(b)]=0,
is satisfied, making ℋ a bimodule over 𝒜. This part of the construction gives the same Hilbert space and algebra as the Connes-Chamsesddine spectral triples <cit.>.
The gauge group can be defined by the elements u=(u_1,u_2) ∈ Spin(4)×Spin(6), which form a subgroup of the unitary elements of 𝒜. This subgroup automatically satisfies Connes' unimodular condition <cit.>. The gauge group action on ℋ is the adjoint action
l(u)r(u^*)=(c_1(u_1)⊗π_2^+ + 1⊗ c_2(u_2)π_2^-)(c_1(u_1)⊗π_2^- + 1⊗ c_2(u_2)π_2^+)
=c_1(u_1)⊗ c_2(u_2),
which gives a faithful action of the correct subgroup G_PS ⊂ Spin(10).
Dirac operators for this spectral triple are defined by
D=c_1(d)⊗ 1
where d is any Hermitian element of Cl(4,0)_odd. This can be written in terms of the gamma matrices as D=d_aγ^a_1⊗ 1 for a vector (d_1,d_2,d_3,d_4) ∈ ℝ^4. The physical interpretation of d is that it is the electro-weak Higgs field, with the gauge group acting in the vector representation of Spin(4). Note that a Clifford algebra interpretation of this Higgs field has been noted previously in the context of the graded tensor product of Clifford algebras <cit.>. The operator D is Hermitian, commutes with J, anticommutes with γ and satisfies the first order condition
[[D,l(a)],r(b)]=0
for all a,b∈𝒜, defining a real spectral triple of KO-dimension s=6.
As a final remark, one can easily generalise the construction of spectral triples to other Clifford algebras, but it is not yet clear which ones are interesting for physical models.
§ ACKNOWLEDGEMENT
Thanks are due for the hospitality of Nichol Furey and the Physics Department of Humboldt University and a visitor grant from the Kolleg Mathematik Physik Berlin that enabled the initial phase of this research.
|
http://arxiv.org/abs/2405.10316v1 | 20240516175921 | Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model | [
"Zheng Gu",
"Shiyuan Yang",
"Jing Liao",
"Jing Huo",
"Yang Gao"
] | cs.CV | [
"cs.CV",
"cs.GR"
] |
guzheng@smail.nju.edu.cn
0000-0001-9914-3922
City University of Hong Kong and State Key Lab for Novel Software Technology, Nanjing University
China
s.y.yang@my.cityu.edu.hk
0000-0001-8213-5803
City University of Hong Kong and Tianjin University
China
Jing Liao and Jing Huo are the co-corresponding authors.
jingliao@cityu.edu.hk
0000-0001-7014-5377
City University of Hong Kong
China
[1]
huojing@nju.edu.cn
0000-0002-8504-455X
State Key Lab for Novel Software Technology, Nanjing University
China
gaoy@nju.edu.cn
0000-0002-2488-1813
State Key Lab for Novel Software Technology, Nanjing University
China
Visual In-Context Learning (ICL) has emerged as a promising research area due to its capability to accomplish various tasks with limited example pairs through analogical reasoning. However, training-based visual ICL has limitations in its ability to generalize to unseen tasks and requires the collection of a diverse task dataset. On the other hand, existing methods in the inference-based visual ICL category solely rely on textual prompts, which fail to capture fine-grained contextual information from given examples and can be time-consuming when converting from images to text prompts.
To address these challenges, we propose Analogist, a novel inference-based visual ICL approach that exploits both visual and textual prompting techniques using a text-to-image diffusion model pretrained for image inpainting. For visual prompting, we propose a self-attention cloning (SAC) method to guide the fine-grained structural-level analogy between image examples. For textual prompting, we leverage GPT-4V's visual reasoning capability to efficiently generate text prompts and introduce a cross-attention masking (CAM) operation to enhance the accuracy of semantic-level analogy guided by text prompts.
Our method is out-of-the-box and does not require fine-tuning or optimization. It is also generic and flexible, enabling a wide range of visual tasks to be performed in an in-context manner. Extensive experiments demonstrate the superiority of our method over existing approaches, both qualitatively and quantitatively. Our project webpage is available at https://analogist2d.github.io.
Examples of in-context visual generation by our method using a pretrained Stable Diffusion Inpainting model are demonstrated. With an example image pair A and A', illustrating a visual transformation, and a query image B, our method enhances the model's capacity for visual in-context comprehension, producing a reasonable output B' that follows the same visual pattern. Source images: ImageNet <cit.>, LOL <cit.>, InstructPix2Pix <cit.>, TongYi QianWen APP, UBC-Fashion <cit.>, ScanNet <cit.>, DAVIS <cit.>, DALLE-3 <cit.>.
Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model
Yang Gao
May 20, 2024
===============================================================================
§ INTRODUCTION
As one of the most popular research topics in the recent field of natural language processing (NLP), in-context learning (ICL) represents a paradigm wherein large language models (LLMs) acquire the ability to learn tasks based on a limited set of demonstrative examples <cit.>. Unlike supervised learning, ICL directly generates predictions using pretrained LLMs <cit.>. This paradigm offers an interpretable interface for interacting with LLMs through language demonstrations, mirroring human decision-making by learning through analogies and similar experiences. ICL significantly lowers computational costs for adapting models to new tasks, making language-model-as-a-service feasible and enabling practical applications in large-scale, real-world tasks such as machine translation <cit.>, information extraction <cit.>, and complexity reasoning <cit.>.
Following the success of NLP, research in visual In-Context Learning has entered its embryonic stage of exploration <cit.>.
Specifically, when the demonstration is a pair of images A and A', visual in-context learning can be considered as an image analogy problem <cit.>. This involves analogizing the observed transformation from A to A' and applying it onto a query image B, resulting in B'. This analogy capability holds significant potential in computer graphics and vision tasks <cit.>.
For example, as shown in Figure <ref>, with just a single pair of examples without training on a large dataset, the pretrained model can perform tasks ranging from low-level tasks such as colorization, deblurring, denoising, etc., to high-level tasks such as image editing, image translation, motion transfer, etc. Visual ICL also offers significant potential in enhancing creative workflows. Designers can leverage a model to learn design ideas such as color themes, typography, and visual motifs from an example pair and adapt them analogously to different contents.
Existing visual ICL works fall into two categories: training-based and inference-based. Training-based methods train the generative model on diverse in-context tasks <cit.>. The ICL capabilities primarily exhibit tasks similar to their training tasks and have limitations when applied to unseen tasks. Moreover, collecting and organizing the data into in-context task format is laborious. Inference-based methods conduct ICL via appropriate prompting the model during inference, possessing better generalizability. However, existing methods <cit.> convert the given images into textual prompts, falling short in two aspects. First, the textual prompting is coarse-grained and cannot cover the detailed information presented in the image examples. Second, textual inversion from images requires iterative optimization, which is still time-consuming.
In this work, we propose Analogist, a novel inference-based visual ICL approach, to address the aforementioned challenges. We introduce both visual and textual prompting techniques on a pretrained text-to-image diffusion model.
Firstly, we introduce a novel visual prompting technique to overcome the coarse-granularity issue in textual prompting. Inspired by MAEVQGAN <cit.>, we formulate the ICL task as an image inpainting task by arranging the exemplary image pair A and A', the query image B, and the unknown image B' in a 2 × 2 grid. Then, we utilize a pretrained diffusion inpainting model to fill in the region of B'. To guide the inpainting process with fine-grained visual contextual information, we propose a self-attention cloning (SAC) method. This method clones the self-attention maps between A and B to the self-attention maps between A' and B' during the forward propagation of the diffusion inpainting model. Since the self-attention maps represent similarity between pixels, the SAC method effectively helps learn structural-level relationships between A and B, which are then applied to A' to generate B' analogically.
In addition to visual prompting offering structural-level guidance, we incorporate textual prompting to offer semantic-level guidance by providing appropriate text prompts to the inpainting model. However, unlike previous methods <cit.> that rely on time-consuming textual inversion optimization, we propose utilizing GPT-4V's visual reasoning capability to analyze the semantic transformation between A and A' and apply it analogically to B to generate a textual description of B'. This is facilitated by our well-designed graphical and textual instructions fed into GPT-4V. Furthermore, we introduce a cross-attention masking (CAM) operation to restrict the interaction between text and image to the B' region only, which ensures that the textual prompt more accurately guides the generation of B'.
With both semantic-level (coarse-grained) and structural-level (fine-grained) contextual information respectively provided by textual and visual prompting techniques, our approach is capable of performing a wide range of visual tasks in an in-context manner, as illustrated in Figure <ref>. Our approach is an out-of-the-box solution that only requires one forward step of a pretrained diffusion model, without the need for fine-tuning or optimization. Extensive experiments and comparisons across different tasks have confirmed that our method outperforms existing training-based and inference-based visual ICL methods, both qualitatively and quantitatively. Our method is primarily designed for applications where the input A and A' are spatially aligned. Nonetheless, we show that it holds promise for applications in misaligned scenarios as well. In summary, our contributions can be summarized as follows:
* We introduce Analogist, an out-of-the-box approach for visual in-context learning that utilizes a pretrained diffusion inpainting model along with effective visual and textual prompting techniques.
* In visual prompting, we propose a Self-Attention Cloning (SAC) method that effectively guides the image inpainting model to exploit fine-grained contextual information in the 2 × 2 grid visual prompt.
* In textual prompting, we propose to efficiently generate textual prompts using GPT-4V and enhance the accuracy of textual guidance by introducing a Cross-Attention Masking (CAM) operation.
§ RELATED WORK
§.§ Visual In-context Learning
Inspired by the taxonomy in Dong et al. dong2022survey, we categorize current visual in-context learning into two groups, training-based and inference-based, based on the criterion of whether the model is trained on in-context tasks.
Training-based Methods
Training-based methods train (or finetune) the model on diverse in-context tasks. Painter <cit.> uses paired input and output images as visual prompts to train a Vision Transformer <cit.>, which enables the model to learn and perform a wide range of vision tasks. The follow-up work SegGPT <cit.> extends the in-context learning capabilities of Painter specifically for precise and adaptable segmentation across various domains.
More recently, several work progressively exhibits the ICL ability of state-of-the-art diffusion models <cit.>. PromptDiffusion <cit.> introduces ControlNet <cit.> to tune a pretrained Stable Diffusion on six manually designed vision-language tasks. The proposed method is able to generalize to similar, contextually related unseen tasks. However, it poses challenge for users to offer detailed and precise text descriptions.
ImageBrush <cit.> introduces a novel framework for image manipulation using in-context visual instructions, rather than natural language. An additional prompt encoder is introduced to translate the visual changes depicted in the example images into text features to guide the inpainting model. ImageBrush is built on a diffusion-based inpainting model and trained on several vision datasets.
The above training-based methods necessitate the construction of high-quality and diverse tasks, making the pipeline laborious and inflexible. Meanwhile, the test tasks should ideally bear some similarity to the training tasks, suggesting opportunities for improving generalizability.
Inference-based Methods Instead of tuning the model parameters, inference-based methods elicit the model's understanding of the given demonstrations at inference time, possessing better generalizability.
Among them, MAEVQGAN <cit.> innovatively proposes a visual prompting format of inpainting the missing patch in a 2 × 2 grid-like image. The model is pre-trained on figures from computer vision papers which are typically in a regular grid pattern and emerges with ICL capability. However, the generation effects are not entirely satisfactory due to limitations in dataset size and model capacity in comparison with the latest diffusion models.
VISII <cit.> considers the demonstration as images before and after image editing. This approach estimates the editing instruction based on a pretrained text-based image editing model <cit.>, producing results with higher quality. However, reverse-engineering the textual description of the differences between two images through optimization remains time-consuming. What's more, by transferring visual information to coarse-grained text, the generation process is merely driven by textual descriptions. The role of visual prompting is not fully leveraged, leading to inaccurate contextual understanding.
Our work falls into the category of inference-based methods and, notably, eliminates the need for additional optimization steps. Instead of solely relying on textual prompts, our approach leverages both textual and visual prompting. This allows us to respectively understand semantic-level and structural-level contextual information for visual ICL. Besides, our method utilizes GPT-4V to get textual prompts instead of textual inversion.
§.§ Image Analogies
Defined by A:A'::B:B', the goal of image analogies <cit.> is to find an “analogous” image B' that relates to B in the same way as A' relates to A. Such idea can be extended in various ways of image synthesis <cit.>.
Recently, DIA <cit.> investigates the image analogies task with a diffusion model. This method estimates the CLIP features of the given images. The CLIP features are injected into a pretrained text-to-image diffusion model to provide in-context guidance. DIA is capable of executing example-based image editing that encompasses complex, higher-level contextual or structural relationships. However, since the goal of CLIP is to align image and text spaces, the estimated features are high-level and struggle to capture detailed image information.
Our work aims to tackle the problem of image analogies in the paradigm of visual in-context learning. Different from traditional texture synthesis approaches <cit.>, the analogy is achieved by prompting a pre-trained text-to-image diffusion model and can be applied to more applications such as low-level tasks, manipulation tasks, and vision tasks.
§.§ Prompt-based Image Editing
Recent multimodal approaches have demonstrated superior text-image feature alignment capabilities <cit.>, leading to a series of works on prompt-based image editing. Previous GAN-based methods perform manipulation in the latent space via GAN inversion <cit.>. More recent methods utilize text-to-image diffusion models to attain leading outcomes <cit.>. However, these methods struggle to do image analogy task since they take textual descriptions as input, which is not sufficiently intuitive and accurate to depict details related to the image structure. In contrast, our work takes a pair of images as demonstration input, utilizes self-attention to provide structure-related information, and automatically acquires the corresponding textual description through GPT-4V.
§ PRELIMINARY
Since our approach utilizes a pretrained Stable Diffusion inpainting model, we briefly review latent Stable Diffusion in Section <ref> as well as the Stable Diffusion inpainting model in Section <ref>.
§.§ Latent Diffusion Models.
Denoising Diffusion Probabilistic Models (DDPM) <cit.> are a class of generative models that gradually convert random noise into structured data through a series of reverse diffusion steps based on a Markov chain.
Latent Diffusion Models (LDM) like Stable Diffusion (SD) <cit.> enhance DDPM by employing an encoder E to map high-dimensional data x into a lower-dimensional latent space z=E(x). The generation of Stable Diffusion can be guided by an additional text embedding c(y), encoded by CLIP <cit.> from a text prompt y. During training, a UNet model, parameterized by θ, is optimized to eliminate the noise ϵ introduced into z_t:
ℒ = 𝔼_{z∼E(x), y, ϵ∼𝒩(0,1), t} [ ‖ϵ - ϵ_θ(z_t, t, c(y))‖_2^2 ].
During inference, a randomly sampled latent z_T ∼𝒩(0,1) is progressively denoised through the model to produce a clean latent representation z_0 by
z_{t-1} = (1/√(α_t)) [ z_t - ((1-α_t)/√(1-α̅_t)) ϵ_θ(z_t, t, c(y)) ],
where α̅_t = ∏_{i=1}^{t} α_i. Subsequently, the clean latent is fed into the decoder to obtain the generated image D(z_0).
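Schematically, the reverse process amounts to the following loop (our simplified sketch; the stochastic noise-injection term of DDPM sampling is dropped, and eps_model stands in for the UNet ϵ_θ):

```python
import torch

@torch.no_grad()
def sample(eps_model, c, z_T, alphas, alpha_bars):
    """Iterate the update rule above from t = T-1 down to t = 0.
    alphas and alpha_bars are 1-D tensors of the noise schedule."""
    z = z_T
    for t in reversed(range(len(alphas))):
        eps = eps_model(z, t, c)                           # predicted noise
        z = (z - (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t]) * eps) \
            / torch.sqrt(alphas[t])
    return z                                               # clean latent z_0
```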
§.§ Stable Diffusion Inpainting Model
We apply our method over the pretrained Stable Diffusion inpainting model, which is fine-tuned to provide the additional capability of image inpainting. The forward process of the inpainting pipeline is as follows:
z_t-1 = 1/√(α_t) [ z_t - (1-α_t)/√(1-α̅_t) ϵ_θ ( z_t, t, c(y), E(I_m), M ) ].
The UNet is updated to include five extra input channels: four dedicated to the encoded masked image E(I_m) and one for the mask M itself. These two extra inputs are concatenated with z_t and fed into the UNet to predict the noise at each time step.
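A minimal sketch of this input assembly, with tensor shapes as stated above (the function name is illustrative):

```python
import torch

def build_inpainting_input(z_t, masked_image_latent, mask):
    # 4 latent channels + 4 masked-image channels + 1 mask channel = 9.
    assert z_t.shape[1] == 4 and masked_image_latent.shape[1] == 4
    assert mask.shape[1] == 1
    return torch.cat([z_t, masked_image_latent, mask], dim=1)  # (b, 9, h, w)
```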
§ METHOD
The goal of ICL is to encourage a pretrained model to learn tasks given only a few examples in the form of a demonstration <cit.>. Specific to the image domain, the demonstration is defined as an example image pair A and A', where A' is the result obtained by applying a certain visual effect or transformation to A. Given a new query image B, the model is expected to apply the same effect to B, thus creating a new image B', so that A:A'::B:B' <cit.>. This process demonstrates the model's understanding and replication of visual transformations from a given demonstration to a new context, exhibiting the ICL ability.
As illustrated in Figure <ref>, we approach this issue from both a visual, structural-level (Section <ref>) and a textual, semantic-level (Section <ref>) perspective. For visual prompting (red region in Figure <ref>), we formulate the input images into a 2×2 grid image and utilize a pretrained diffusion inpainting model to fill in the missing region in Section <ref>. To introduce more fine-grained visual information, we propose Self-Attention Cloning (SAC) in Section <ref>. For textual prompting (blue region in Figure <ref>), GPT-4V is employed to provide semantic-level guidance to the generation process in Section <ref>. To foster semantic correspondence between the inpainted image and the text prompt, we propose Cross-Attention Masking (CAM) in Section <ref>.
§.§ Visual Prompting
To introduce fine-grained structural-level visual guidance in the in-context inference process, we construct a visual prompt in the form of a 2× 2 grid-like image for the pretrained inpainting model, and provide visual contextual information by cloning the self-attention associations between the given images.
§.§.§ 2× 2-grid Prompting
Image inpainting models fill in unknown areas of an image based on its known regions, which naturally aligns with the concept of ICL. As shown in Figure <ref>, to take advantage of this property, we first rearrange the input images A, A', and B into a single 2×2 grid-like image, denoted as I. Image B is also pasted into the bottom-right corner of the grid image, yielding image I'. We extract the features of the pasted image, E(I'), and add noise to them via the diffusion forward process to obtain the initial z_T. To align with the interface of the pretrained model, a mask image M is simultaneously generated, in which the bottom-right region is entirely ones while the remaining regions are zeros. At each timestep t, the latent z_t ∈ℝ^b × 4 × h × w is concatenated with the feature E(I) ∈ℝ^b × 4 × h × w and mask M ∈ℝ^b × 1 × h × w, constructing the input of the UNet. By establishing such a 2×2-grid prompt, we encourage the model to fill in the content of the unknown area (B') based on the contextual regions (A, A', and B) in the image.
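The grid-and-mask construction can be sketched as follows; the tile size and bilinear resizing are assumptions for illustration, and in the actual pipeline the grid is further encoded into the latent space before diffusion:

```python
import torch
import torch.nn.functional as F

def make_grid_prompt(A, A_prime, B, size=256):
    # Inputs are (3, H, W) image tensors; B is also pasted into the
    # bottom-right quadrant to initialize the unknown region B'.
    tiles = [F.interpolate(x.unsqueeze(0), (size, size), mode="bilinear")
             for x in (A, A_prime, B, B)]
    top = torch.cat([tiles[0], tiles[1]], dim=3)
    bottom = torch.cat([tiles[2], tiles[3]], dim=3)
    grid = torch.cat([top, bottom], dim=2)       # (1, 3, 2*size, 2*size)
    mask = torch.zeros(1, 1, 2 * size, 2 * size)
    mask[..., size:, size:] = 1.0                # ones exactly on B'
    return grid, mask
```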
§.§.§ Self-Attention Cloning
The key to in-context learning is to recognize the task instruction from the given demonstration. Previous inference-based works extract visual instructions through cross-attention injection, which only provides coarse and imprecise guidance. Differently, we introduce fine-grained, structure-aware contextual information via self-attention.
Our motivation comes from the observation that the diffusion model accurately constructs associations between different positions in the known areas through self-attention. We show a visualization of self-attention relations in Figure <ref>: we calculate the attention values between key semantic positions (e.g., the eyes, mouth, and flower in the first row and the spire, building, and background grassland in the second row) in A and all regions in B. The results demonstrate that visual associations between images can be accurately identified through self-attention, which can serve as more precise guidance than abstract semantic text prompts.
Based on this observation, we propose to use self-attention as a structural-level prior to guide the in-context generation procedure by modulating self-attention in the UNet. We show an example of translating a cat into a tiger in Figure <ref>. The relative positional relationship between the tiger in B' and the tiger in A' should be consistent with the relative positional relationship between the two cats in B and A.
We present detailed illustration of the proposed self-attention cloning (SAC) in Figure <ref>. Denote the image feature before self-attention as F_i ∈ℝ^h × w × c. The self-attention map ℳ_s ∈ℝ^hw × hw records the similarity of each position on the entire image with other positions, which also includes the similarities between A and B, as well as between A' and B'. We extract the sub self-attention map ℳ_s(A,B) ∈ℝ^hw/4×hw/4 and assign its value to ℳ_s(A',B') ∈ℝ^hw/4×hw/4:
ℳ_s(A',B') := ℳ_s(A,B) · s,
where s is a coefficient used to balance the degree of preserving the structure of image B and the degree of applying transformations. We perform the self-attention cloning operation before softmax to prevent the original self-attention results being excessively affected.
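A sketch of the SAC operation; the quadrant index tensors are assumed to be pre-computed from the fixed grid layout, and, as stated above, the cloning acts on the pre-softmax attention logits:

```python
import torch

def self_attention_cloning(attn_logits, idx_A, idx_B, idx_Ap, idx_Bp, s=1.3):
    # attn_logits: (heads, hw, hw) pre-softmax self-attention scores.
    # Copy the sub-map M_s(A, B) onto M_s(A', B'), scaled by s.
    rows_src, cols_src = torch.meshgrid(idx_A, idx_B, indexing="ij")
    rows_dst, cols_dst = torch.meshgrid(idx_Ap, idx_Bp, indexing="ij")
    attn_logits[:, rows_dst, cols_dst] = attn_logits[:, rows_src, cols_src] * s
    return attn_logits
```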
§.§ Textual Prompting
Cloning self-attention effectively handles basic in-context visual guidance, yet the diffusion model's celebrated text-to-image capability remains underutilized for providing semantic-level guidance. To address this, we utilize GPT-4V's visual reasoning abilities <cit.> to provide semantic guidance to the inpainting model.
§.§.§ GPT-4V Prompting
We prompt GPT-4V to generate a coherent text description to aid the inpainting process. For consistency with the rest of the pipeline, we feed the whole 2×2 grid-like image directly into GPT-4V with a pre-designed problem description, as depicted in Figure <ref>. We employ two carefully designed graphical instructions to make it easier for GPT-4V to understand the task. Firstly, inspired by <cit.>, we place a letter mark (A, A', B, B') in the top-left corner of each grid cell. Secondly, we add prominent arrow markers (→) between A and A', as well as between B and B', to indicate the relationship between the two images. These approaches introduce structured, easily identifiable reference points, facilitating more effective and accurate responses to queries involving visual content. Then, GPT-4V is asked to perform an analogy and output the text description for B'. Finally, we use GPT-4V's answer as the semantic-level positive text prompt to reinforce the model's ICL capabilities.
We also employ negative text prompts (i.e., “Messy, Disordered, Chaotic, Cluttered, Haphazard, Unkempt, Scattered, Disheveled, Tangled, Random”) to prevent the diffusion model from generating irregular and illogical results. These two prompts work cooperatively to inject semantic-level guidance into the model.
§.§.§ Cross-Attention Masking
Note that the prompt obtained from GPT-4V is specifically tailored for B', yet the textual guidance impacts the entire image through cross-attention in the UNet. To address this issue, we propose cross-attention masking (CAM): in the cross-attention layers, we restrict the text to interact only with the region corresponding to B'. Specifically, denote the cross-attention map by ℳ_c ∈ℝ^hw × L, where L denotes the length of the text embedding. We reuse the indices of the different regions identified in the previous SAC process and set the attention values between the text and regions other than B' (i.e., A, A', and B) to zero:
ℳ_c(A):=0; ℳ_c(A'):=0; ℳ_c(B):=0.
As illustrated in Figure <ref>, we utilize the attention map post-softmax, as we are completely obstructing the relationship between the text and regions outside of B'.
As for the attention map indexing in SAC and CAM, due to the fixed positions of each image, we are able to pre-calculate the indices required for extracting the necessary sub-attention maps (e.g., ℳ_s(A,B) and ℳ_c(A)) from the entire attention map. This pre-determination streamlines the entire pipeline, enhancing its simplicity and efficiency.
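A sketch of CAM; consistent with the description above, the masking is applied to the post-softmax attention map, and the quadrant indices are the same pre-computed ones reused from SAC:

```python
def cross_attention_masking(attn_probs, idx_A, idx_Ap, idx_B):
    # attn_probs: (heads, hw, L) post-softmax cross-attention map.
    # Zero out the text's influence on every region except B'.
    for idx in (idx_A, idx_Ap, idx_B):
        attn_probs[:, idx, :] = 0.0
    return attn_probs
```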
§ EXPERIMENTS
§.§ Implementation Details
We implement our work in PyTorch <cit.>. The input images A, A', and B are resized to 256 × 256 and spatially combined to form a 512×512 grid-like image. We use a publicly available Stable Diffusion inpainting model[https://huggingface.co/runwayml/stable-diffusion-inpainting]. The model is initialized with SD1.2 and trained on the inpainting task, and is therefore capable of inpainting missing areas specified by a mask. The UNet architecture contains 16 blocks, each consisting of one cross-attention and one self-attention layer. We perform SAC and CAM from layers 3 to 10 at all timesteps in the diffusion process. The scale for classifier-free guidance is set to 15. The coefficient for self-attention cloning is s=1.3 in all experiments except skeleton-to-image, where s=1.4. All experiments are conducted on an RTX 3090 GPU.
§.§ Evaluation Setup
Dataset We employ the following three major categories, totaling ten tasks to evaluate the effectiveness of the proposed method quantitatively: low-level tasks, manipulation tasks, and more challenging vision tasks.
* Low-level tasks. We test our method on four low-level tasks, i.e., image colorization, image deblurring, image denoising, and image enhancement. For the first three tasks, we sample in-the-wild images from ImageNet <cit.> and apply the corresponding transformations (i.e., grayscale conversion, Gaussian blur, adding noise). For image enhancement, we use the LOL dataset <cit.>, which consists of low/normal-light image pairs. We collect 100 samples for each low-level task.
* Manipulation tasks. We select three kinds of image manipulation tasks (i.e., image editing, image translation, and style transfer) from the CLIP-filtered subset processed by InstructPix2Pix <cit.>. Since the dataset is constructed for general image editing, we split the samples into three tasks based on keywords: instructions containing “add” or “remove” are considered image editing tasks, while those with “make”, “turn”, or “change” are image translation tasks. Each manipulation task contains 200 samples.
* Vision tasks. We select three more challenging vision tasks for evaluation: skeleton-to-image generation from UBC-Fashion <cit.>, mask-to-image generation from ScanNet <cit.>, and image inpainting from the DAVIS dataset <cit.>. Each task contains 200 samples.
By developing these three major categories, we can evaluate whether the pretrained model is capable of understanding, processing, and utilizing visual information across various levels, while also evaluating its ability to generalize effectively across these tasks.
Baseline methods We take four methods, MAEVQGAN <cit.>, PromptDiffusion <cit.>, DIA <cit.>, and VISII <cit.>, as our baselines. All baselines use the official implementations and checkpoints provided. Since PromptDiffusion <cit.> requires text as part of its input, but most of the test datasets (such as the low-level ones) do not have paired text descriptions, we feed PromptDiffusion the same text prompts obtained from GPT-4V as ours to ensure a fair comparison.
Evaluation Metrics We evaluate the model's ICL capacity via the CLIP direction similarity between the demonstration and the produced results. We utilize the image encoder from CLIP to extract the image features of A, A', B, and the generated B'. Then, we calculate the cosine similarity between the directional changes from A to A' and from B to B'. The higher the similarity, the more consistent the inferred B' is with the transformation applied to A. Due to the generation diversity of diffusion models, we do not compare pixel-level metrics like SSIM and PSNR. Instead, we calculate FID between the generated B' images and the ground truth images. In order to obtain more accurate results, we merge all the data in each major category to calculate the FID values for comparison.
§.§ Qualitative Results
Figure <ref> presents a comparison of our method with the baselines on all ten tasks. For MAEVQGAN <cit.>, due to the lack of specific structuring of training data into the form of tasks and the absence of textual guidance, the quality of the generated output is relatively poor, especially for high-level tasks like manipulation.
For PromptDiffusion <cit.>, the bias in its training tasks (i.e., image-to-HED, HED-to-image) significantly impacts the ICL generalizability of the model. As shown in the deblurring and translation examples, its results tend toward line drawings similar to edge detection outputs.
For the other two inference-based methods, DIA <cit.> and VISII <cit.>, in-context learning is conducted solely through estimated text, making it difficult to provide sufficiently accurate prompt information to generate correct results. Our method takes into account guidance at both the visual and semantic levels and can produce accurate and reasonable in-context outputs. Notice that GPT-4V prompting may struggle with vision tasks, giving coarse descriptions. For example, “person in dress standing” in the skeleton-to-image example does not describe in detail what pose the woman should be standing in. However, thanks to the proposed SAC operation, this structure-aware in-context information can still be captured and utilized to produce correct results. Figure <ref> shows further results of Analogist on these tasks, demonstrating the ICL capabilities of our proposed method. More randomly selected results are shown in the supplementary materials.
Additionally, we conducted a comparison with ImageBrush <cit.>. Since ImageBrush has not released the code, the comparison is made in the range of training tasks of ImageBrush. As shown in Figure <ref>, it is worth noting that our method is more effective at preserving the details in Image B. Especially in manipulation tasks, the color of the aurora, the contour structure of the animals, and the texture on the clothing are better preserved. This is because our proposed visual and textual prompting contain more detailed in-context information. On the three vision tasks, we achieve competitive results with ImageBrush. Note that our model is not fine-tuned specifically for these tasks, which demonstrate our superiority of in-context generalizability as an inference-based method.
§.§ Quantitative Comparisons
CLIP Direction We compute the following CLIP direction similarity, cos[(ℰ(B')-ℰ(B)), (ℰ(A')-ℰ(A))], to evaluate how faithfully the transformations produced by the model adhere to the transformations contained in the given examples. The results are shown in Table <ref>. Note that VISII <cit.> achieves acceptable results on manipulation tasks since the model it utilizes is pretrained on this ip2p dataset <cit.>. Overall, our method demonstrates superior ICL capabilities across all these tasks.
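For concreteness, this direction-similarity metric can be sketched as follows, assuming a CLIP image encoder `encode` that maps preprocessed image batches to feature vectors:

```python
import torch.nn.functional as F

def clip_direction_similarity(encode, A, Ap, B, Bp):
    fA, fAp, fB, fBp = (encode(x) for x in (A, Ap, B, Bp))
    d_demo = fAp - fA   # edit direction A -> A'
    d_out = fBp - fB    # edit direction B -> B'
    return F.cosine_similarity(d_demo, d_out, dim=-1).mean()
```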
Fréchet inception distance (FID) We calculate FID between generated images and ground truth on the entire major category. The results are shown in Table <ref>. The proposed Analogist outperforms all baselines across the three major tasks. Notice that VISII <cit.> outperforms other baselines on manipulation tasks. This is because VISII leverages an InstructPix2Pix <cit.> model which is pretrained on the same dataset, making it more familiar with generating data of similar quality.
User Study We conduct a user study to evaluate the perceptual performance of our method. The user study consisted of 50 questions covering all 10 kinds of tasks, with 42 participants involved. In each question, we first presented the participants with images A and A', asking them to analyze the changes between them. Then, we provided image B and tasked them with predicting the expected transformation of B following the same pattern. Subsequently, we displayed the outputs generated by different methods for this task, and the participants were required to select the one they deemed most consistent with the identified pattern and of the highest generative quality. We report the average selection results for the three major task categories, low-level tasks, manipulation tasks, and vision tasks, in Table <ref>. Our proposed method exhibited the highest selection rate across all three categories.
§.§ Ablation Study
Effectiveness of proposed components To evaluate the effectiveness of the proposed components, we conduct a series of ablation studies, with results presented in Figure <ref>. (a) The baseline pretrained inpainting model generates rough, low-quality results. (b) Pasting B into the bottom-right corner of the grid image makes the outputs more structurally consistent with B. (c) Adding negative prompts helps stabilize the generation process and avoid messy results. (d-1) Crucially, when performing self-attention cloning via ℳ_s(B,B'):=ℳ_s(A,A'), the model retains the information from B but is unable to extract accurate context from A' to infer the transformed result. (d-2) When executing SAC via ℳ_s(A',B'):=ℳ_s(A,B), the model is instead required to keep the structural relation between A and B consistent after they have been transformed into A' and B'. Thus, we use (d-2) instead of (d-1). (e) When adding textual prompts from GPT-4V over the whole grid image, the model rarely focuses the text guidance on the target inpainting area B'. (f) Finally, with the proposed CAM, our full approach not only maintains respectable generation quality but also successfully identifies the necessary visual editing (adding sunglasses), effects (applying a cubist style), and transformations (changing a church into a mosque) for the ICL task.
GPT-4V Prompting We ablate the designed graphical instructions used to hint GPT-4V in Figure <ref>. Without the visual marks on the grid image, GPT-4V may not know the correspondence between the given images and is therefore unable to correctly analyze the content according to the instructions. By explicitly marking the positions of the images (A, A', B, and B') on the constructed grid image, GPT-4V readily understands the information contained in the pictures. Meanwhile, the introduced arrows from A to A' and B to B' successfully convey the transformation relations, making it easier for GPT-4V to produce the ideal response of adding a “pagoda in the snowy forest”. This text prompt introduces semantic contextual information that helps the pretrained model understand the task. Note that our method is generic and supports other vision-language models <cit.> as well.
Hyper-parameters We present an ablation on the parameter sensitivity of our proposed method in Figure <ref>. As for the SAC coefficient s, utilizing a smaller value (s=0.5) results in an output more closely resembling the original image B, whereas a larger value (s=1.3) tends to imbue the result with characteristics of A'. However, an excessively large coefficient (s=1.8) leads to an overly unbalanced attention map, which in turn reduces generation quality.
We also ablate the selection of UNet layers in which we perform SAC and CAM. The results indicate that it is necessary to perform the operations simultaneously in both the encoder and the decoder. Furthermore, if the operations are performed at a shallow level (high resolution), the outcome is merely a simple replication of some colors and coarse textures, leading to poor quality. If they are performed at a deeper level (low resolution), the excessive compression of information makes the generated result similar to the original image B. In our experiments, we perform SAC and CAM at the middle levels of the UNet.
§.§ Analysis
Different In-context examples
A model with contextual reasoning abilities should be able to produce different results based on different in-context examples when given the same input. To verify that our approach has such capability, we conducted the experiment shown in Figure <ref>. Given the same image A depicting wolves, we first translate A into different example outputs { A'_1, A'_2, A'_3, A'_4 } using MasaCtrl <cit.>, obtaining different animals such as a lion, tiger, dog, and panda. We then construct different ICL tasks, keeping images A and B the same while varying A'. Our method recognizes the translation from A to A' accordingly and generates the corresponding animal in B', demonstrating the ICL capacity of our Analogist.
Inference Runtime
In this section, we compare the execution time of a single run for the different ICL methods. Our experiment is conducted on an RTX 3090 GPU, and we measure the time taken to generate one image. The results are shown in Table <ref>. MAEVQGAN <cit.> is the least time-consuming, taking 0.4 seconds, since it generates very few tokens without iterative denoising. Our method Analogist takes about 4 seconds, the same as PromptDiffusion <cit.>, which is the typical sampling time for diffusion models, but requires no task-specific fine-tuning. The previous inference-based methods DIA <cit.> and VISII <cit.> take a rather long time (258 and 685 seconds, respectively) to estimate the CLIP feature and the editing instruction.
§ APPLICATION
In this section, we extend Analogist to three categories of applications: (a) A and A' are aligned, (b) A and B are aligned, and (c) A, A', and B are all misaligned. For (b) and (c), we make adjustments to our method accordingly.
§.§ A and A' are aligned
Under the condition that A and A' are aligned, we show example applications in Figure <ref>, e.g., photo-to-caricature, sketch-to-portrait, normal-to-RGB, and icon-to-image tasks. The results show that our method generates reasonable results on these tasks. Notice that there are slight structural changes between A and A' for photo-to-caricature and icon-to-image. However, our method is still robust to these minor misalignments since we provide in-context information at both the structural and semantic levels.
§.§ A and B are aligned
We make it possible to address tasks where A is aligned with B instead of A'. We give an example of object multiplication in Figure <ref>, where A contains one brick and A' contains a brick stack. This problem cannot be handled by our original pipeline. To tackle it, we swap the positions of A' and B in the grid image, constructing a new grid image where A' contains one brick and B contains a stack of bricks. In this way, we reduce the task to one where A and A' are aligned again, i.e., turning the task of multiplying one brick into a brick stack into the task of changing bricks into golden bricks. This strategy can be applied to tasks like motion transfer and image analogy, where A and A' are misaligned, as shown in Figure <ref>. We also demonstrate our method's ability to address tasks with multiple transformations, such as combined motion editing and style transfer, and object multiplication with editing.
§.§ A, A', and B are all misaligned
We extend our method to tasks where A, A', and B are all misaligned in Figure <ref>, such as changing a circle to a square, resizing a big circle to a smaller one, and extrapolating new content from numbers and letters. We test our method without SAC to prevent incorrect structural guidance. Analogist produces reasonable results and outperforms MAEVQGAN. It should be pointed out that the quality of long letter-sequence generation still has room for improvement due to the notorious tendency of diffusion models to struggle with generating high-quality text. Nevertheless, we believe these results demonstrate that pre-trained generative models have ample in-context abilities yet to be tapped.
§ LIMITATION
Although our approach enhances in-context learning abilities, it is important to consider three possible limitations. Firstly, the inpainting model might be misled by incorrect text descriptions. In Figure <ref>, when the transformation from A to A' is minor (i.e., the added object in the first case is small and easily overlooked), GPT-4V fails to recognize it. The second case shows a style transfer task of drawing “a sketch of an elephant”; however, GPT-4V recognizes the object as a lion instead of an elephant, leading to inaccurate guidance. A potential solution is to leave an interface for users to monitor and customize the text prompts in real time.
Secondly, the model struggles to produce data that it seldom sees during the training stage. As shown in Figure <ref>, when asked to produce unnatural images like normal maps and line-drawing icons, the model fails to generate accurate results since most of its training data are natural RGB images. This also explains our method's mediocre performance on vision tasks compared to ImageBrush <cit.>. We believe this could potentially be addressed by employing a more powerful pretrained base model.
Finally, the proposed self-attention cloning may struggle in scenarios in which A, A', and B are all misaligned, as shown in Figure <ref>. The structural-level information is not applicable in this case. One possible solution is to rely on semantic-level information to produce the transformation, as discussed in Section <ref>.
§ CONCLUSION
Addressing the limitations of inaccurate instruction and tedious optimization of existing inference-based methods, we introduced Analogist, a novel approach for visual In-Context Learning (ICL) combining visual and textual prompting. The proposed method utilizes a text-to-image diffusion model pretrained for image inpainting, making it an out-of-the-box solution for a wide range of visual tasks. We innovate with Self-Attention Cloning (SAC) for visual prompting, enabling fine-grained structural-level analogy, and leverage GPT-4V's visual reasoning for efficient textual prompting, supplemented by Cross-Attention Masking (CAM) for enhanced semantic-level analogy accuracy. Our approach, without the need for extra training or optimization, demonstrates superior performance in both qualitative and quantitative measures, showcasing robust ICL capabilities.
This work was supported in part by the National Natural Science Foundation of China under Grant 62276128, Grant 62192783 in part by the Collaborative Innovation Center of Novel Software Technology and Industrialization, and a GRF grant from the Research Grants Council (RGC) of the Hong Kong Special Administrative Region, China [Project No. CityU 11216122].
|
http://arxiv.org/abs/2405.10297v1 | 20240516175236 | Low-Degree Polynomials Are Good Extractors | [
"Omar Alrabiah",
"Jesse Goodman",
"Jonathan Mosheiff",
"João Ribeiro"
] | cs.CC | [
"cs.CC",
"math.CO"
] |
Low-Degree Polynomials Are Good Extractors
Omar AlrabiahUC Berkeley. . Supported by a Saudi Arabian Cultural Mission (SACM) Scholarship, NSF Award CCF-2210823, and a Simons Investigator Award (Venkatesan Guruswami) Jesse GoodmanUT Austin. . Supported by a Simons Investigator Award (#409864, David Zuckerman). Jonathan MosheiffBen-Gurion University. . Supported by an Alon Fellowship.João RibeiroNOVA LINCS & NOVA School of Science and Technology. . Supported in part by NOVA LINCS (ref. UIDB/04516/2020) with the financial support of FCT - Fundação para a Ciência e a Tecnologia.
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
|
http://arxiv.org/abs/2405.09746v1 | 20240516010844 | Algebraic Geometric Rook Codes for Coded Distributed Computing | [
"Gretchen L. Matthews",
"Pedro Soto"
] | cs.IT | [
"cs.IT",
"cs.DC",
"cs.DM",
"math.AG",
"math.IT",
"11T71",
"E.4"
] |
Algebraic Geometric Rook Codes for Coded Distributed Computing
Gretchen L. Matthews & Pedro Soto
Department of Mathematics
Virginia Tech
Blacksburg, Virginia 24061 USA
{gmatthews, pedrosoto}@vt.edu
============================================================================================================================================
We extend coded distributed computing over finite fields to allow the number of workers to be larger than the field size.
We give codes that work for fully general matrix multiplication and show that in this case we serendipitously have that all functions can be computed in a distributed fault-tolerant fashion over finite fields.
This generalizes previous results on the topic. We prove that the associated codes achieve a recovery threshold similar to the ones for characteristic zero fields but now with a factor that is proportional to the genus of the underlying function field.
In particular, we have that the recovery threshold of these codes is proportional to the classical complexity of matrix multiplication by a factor of at most the genus.
§ INTRODUCTION
In this paper we consider the problem of coded distributed computation over a finite field.
Coded distributed computing and, in particular, coded distributed matrix multiplication has attracted a large surge of research interest as of late <cit.>.
In this paper we will extend the batch matrix multiplication problem in <cit.> to the case where there are more workers than there are elements in the field.
We show that over finite fields, our rook codes can encode all functions. We use codes constructed from algebraic function fields. Prior works that use algebraic geometry codes include <cit.>, <cit.>, <cit.>, and evaluation codes <cit.>.
This paper is organized as follows. Section <ref> gives an implicit construction that solves the general coded distributed matrix-matrix multiplication problem and is optimal up to a factor of 2(g+1), where g is the genus of a particular function field;
Section <ref> gives an explicit construction that performs well for small parameters;
and Section <ref> gives a construction that computes any function and is optimal up to a factor of ℓ(g+1), where ℓ is the degree of the function and g is again the genus of a yet unspecified function field.
*Background and Notation
Consider a function field F of genus g over a finite field . The set of places of F is denoted by ℙ_F.
The divisor of a nonzero rational function f ∈ F is
(f)=(f)_0-(f)_∞, where
(f)_0:=∑_P ∈ℙ_F, v_P(f) > 0 v_P(f) P and (f)_∞:=∑_P ∈ℙ_F, v_P(f) < 0 (-v_P(f)) P denote the zero and pole divisors of f, and v_P(f) denotes the discrete valuation of f at the place P.
Consider a divisor G=∑_P ∈ℙ_F a_P P of F. Its degree is deg(G):=∑_P ∈ℙ_F a_P deg(P), and its support is supp(G):={ P ∈ℙ_F: a_P ≠ 0 }.
The Riemann-Roch space of G is
ℒ(G):= { f : (f) ≥ -G }∪{ 0 }, meaning f ∈ℒ(G) if and only if f has a zero of order at least -a_P at each P ∈supp(G) with a_P<0 and the only poles of f are at
places P ∈supp(G) with a_P>0, with pole order at most a_P. The dimension of the divisor G is ℓ(G):=dimℒ(G). If deg(G) > 2g-2, then according to the Riemann-Roch Theorem, ℓ(G)= deg(G) + 1 - g.
Given divisors G and D:=P_1+…+P_n on F with disjoint support, where each P_i is a rational place, the associated algebraic geometry code is C(D,G):={( f(P_1), …, f(P_n) ): f ∈ℒ(G) }. It is well known that C(D,G) is an [n,k,d] code over 𝔽_q, with length n, dimension k=ℓ(G)-ℓ(G-D), and minimum distance d ≥ n- deg(G). Hence, if 2g-2< deg(G)<n, then k=deg(G)+1-g. For additional details, see <cit.>.
§ DIAGONAL ALGEBRAIC GEOMETRIC ROOK PRODUCT
§.§ Batch Matrix Multiplication
We will consider the following problem: given k pairs of matrices
A_1,B_1,…,A_k,B_k,
where A_i ∈𝔽^t_1 × t_2 and B_i ∈𝔽^t_2 × t_3 for i ∈ [k] and a field 𝔽,
compute the products
A_1 B_1 ,…,A_k B_k
in the distributed master worker topology in which the master node gives n worker nodes coded matrices of the form
à _w = ∑_i ∈ [k]α_i^(w) A_i
∈^t_1 × t_2
, B̃ _w = ∑_i ∈ [k]β_i^(w) B_i
∈^t_2 × t_3
where α_i^(w)∈ and w ∈ [n] indexes over the worker nodes.
We are concerned with the minimum number of worker nodes that need to return their values so that the master can recover the desired products. This will be formalized in Definition <ref>.
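As a toy illustration of this protocol (in floating point for readability; over a finite field every operation would be reduced modulo q), the encoding at the master and the per-worker products can be sketched as:

```python
import numpy as np

def worker_products(A_list, B_list, alpha, beta):
    # alpha, beta: (n_workers, k) encoding coefficients.
    A = np.stack(A_list)                           # (k, t1, t2)
    B = np.stack(B_list)                           # (k, t2, t3)
    A_tilde = np.einsum("wi,iab->wab", alpha, A)   # coded A per worker
    B_tilde = np.einsum("wi,ibc->wbc", beta, B)    # coded B per worker
    return np.einsum("wab,wbc->wac", A_tilde, B_tilde)
```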
Before we move on to constructing the actual codes, we will show that this batch matrix problem is actually the most general form of the distributed matrix multiplication problem since it implicitly solves the general matrix-matrix multiplication problem.
§.§ General Matrix-Matrix Multiplication
Given two matrices
A = [ A_1,1 … A_1,ζ; ⋮ ⋱ ⋮; A_χ,1 … A_χ,ζ; ]
,
B = [ B_1,1 … B_1,υ; ⋮ ⋱ ⋮; B_ζ,1 … B_ζ,υ; ],
with
A_i,j∈^t_1 × t_2 and B_i,j∈^t_2 × t_3, we can take an optimal (χ, υ, ζ) fast matrix multiplication tensor of rank r, i.e., for t ∈ [r], take
 _t = ∑_i,j ∈ [χ] × [ζ]γ_i,j^(t) A_i,j , B̂ _t = ∑_i,j ∈ [ζ ]× [υ ]δ_i,j^(t) B_i,j,
and then A_t,B_t := Â_t,B̂_t for t ∈ [r], as in <cit.>, which shows that the general matrix-matrix multiplication problem reduces exactly to batch matrix multiplication. It may seem that computing the linear functions defined by γ,δ is an undesirable overhead, but one must compute the functions given by α,β anyway; in particular, one may compose the α,β with the γ,δ so that there is no overhead. The number r above is often called the tensor rank or the bilinear complexity.
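For the (2, 2, 2) case, Strassen's rank-7 decomposition is one such optimal tensor. The sketch below (helper names are ours) turns one block product into a batch of r = 7 independent pair products in the format above and recombines the answers:

```python
def strassen_pairs(A, B):
    # A, B: 2x2 nested lists of equally sized blocks (e.g., numpy arrays).
    (A11, A12), (A21, A22) = A
    (B11, B12), (B21, B22) = B
    return [
        (A11 + A22, B11 + B22),  # M1
        (A21 + A22, B11),        # M2
        (A11, B12 - B22),        # M3
        (A22, B21 - B11),        # M4
        (A11 + A12, B22),        # M5
        (A21 - A11, B11 + B12),  # M6
        (A12 - A22, B21 + B22),  # M7
    ]

def strassen_combine(M1, M2, M3, M4, M5, M6, M7):
    # Recover the four blocks of C = A @ B from the seven products.
    return [[M1 + M4 - M5 + M7, M3 + M5],
            [M2 + M4, M1 - M2 + M3 + M6]]
```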
The primary goal of this paper is to generalize the main result of <cit.> (which only holds in full generality in characteristic zero) to more general settings over finite fields.
This paper overcomes the major obstacle when the field is finite, namely the limited number of evaluation points (i.e., the number of workers) or, more generally, the length of the code.
§.§ Implicit Construction
Let F/𝔽_q be a function field of transcendence degree 1.
Given a divisor G on F, we say that ℒ(G) is
R-recoverable for a positive integer R if there exist functions x_1,…,x_k ∈ℒ(G_1), y_1,…,y_k ∈ℒ(G_2)
for some divisors G_1 and G_2 of F with G=G_1+G_2
so that
the values of A_i B_i (i.e., the diagonal elements) can be recovered from any R columns of the product
C
[ x_1y_1(P_1) x_1y_1(P_2) … x_1y_1(P_n); ⋮ ⋮ ⋮; x_1y_k(P_1) x_1y_k(P_2) … x_1y_k(P_n); x_2y_1(P_1) x_2y_1(P_2) … x_2y_1(P_n); ⋮ ⋮ ⋮ ; x_2y_k(P_1) x_2y_k(P_2) … x_2y_k(P_n); x_ky_1(P_1) x_ky_1(P_2) … x_ky_1(P_n); ⋮ ⋮ ⋮ ; x_ky_k(P_1) x_ky_k(P_2) … x_ky_k(P_n); ]
for some matrix
C = (A_1B_1 ⋯ A_1B_k A_2B_1 ⋯ A_2B_k ⋯ A_kB_1⋯ A_kB_k)
where A_iB_j ∈^t_1 × t_3 for all i,j ∈ [k]. In this case, we may say that G is R-recoverable.
Let G and D = P_1+…+P_n be divisors with disjoint supports on F / 𝔽_q(x). We call 𝒞(G,D) a (diagonal) [n,k]_q-rook code if there exist bases {x_1,…,x_k} for ℒ(G_1) and {y_1,…,y_k} for ℒ(G_2)
satisfying for all i,j ∈ [k] and l ∈ [n]
P_l ∉supp(x_iy_j) ⟺ i = j = l,
where
G_1 and G_2 are divisors on F / 𝔽_q(x) such that G=G_1+G_2.
An [n,k]_q rook code given by bases
{x_1,…,x_k} and {y_1,…,y_k}
satisfies the following generalization of the decodability condition of <cit.> for all i,j,k
(x_k) + (y_k) = (x_i) + (y_j) ⟺ k = i and k = j.
The proof is immediate from the fact that (x_iy_j) = (x_i) + (y_j). Thus, the two expressions have the same supports.
The term rook code is inspired by the name given to the codes in <cit.>, since Lemma <ref> shows that Definition <ref> is a generalization of the decodability condition presented in <cit.>;
the two are equivalent up to code equivalence.
If x_1,…,x_k ∈ℒ(G_1) and y_1,…,y_k ∈ℒ(G_2) satisfy
Equation <ref> and supp(G_1+G_2) ∩supp(D) = ∅,
then G_1+G_2 is R-recoverable for some R ≤ deg(G) + 1.
Worker w will receive the values
Ã(P_w) = ∑_i ∈ [k] A_ix_i(P_w), B̃(P_w) = ∑_i ∈ [k] B_iy_i(P_w).
The matrix consisting of their products will be exactly the matrix given by Equation <ref>.
In particular, worker w will return
C̃(P_w)=Ã(P_w) B̃(P_w)
to the master node.
The proof now follows from the fact that the minimum distance d of C(D,G) satisfies d ≥ n - deg(G); hence any n - d + 1 ≤ deg(G) + 1 returned evaluations suffice to recover the k blocks of data A_iB_i, which form a subset of the entries in Equation <ref>.
§.§ Existence of Codes for All Fields and Numbers of Rational Places
Next, we demonstrate that diagonal rook codes exist over every field.
Given a function field F/𝔽_q with at least n rational places, there exists an [n,k]_q-rook code for any k ∈ [n].
Fix an element x ∈ F that is transcendental over 𝔽_q and rational places P_1, …, P_k of F.
By repeated application of the Approximation Theorem (see <cit.> for instance), there exists some a_1,…,a_k ∈ F such that
v_P_i(x-a_i) = 0
and
v_P_j(x-a_i) = 1
for i≠ j. Then the functions
x_i = y_i = ∏_j ∈ [k]∖{i} x - a_i.
satisfy Equation <ref> by construction.
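For intuition, in the genus-0 case F = 𝔽_q(x) with a_i ∈𝔽_q, the functions above are ordinary polynomials, and the rook condition can be checked numerically: x_i(a_l) ≠ 0 exactly when l = i, so x_i(a_l)x_j(a_l) ≠ 0 ⟺ i = j = l. The following sketch (assuming q prime and large enough to supply distinct points) tabulates the evaluations x_i(P_u):

```python
def rook_basis_table(a, pts, q):
    # a: the k special points a_1..a_k in GF(q); pts: evaluation points.
    # Returns the k x len(pts) table with entries x_i(p) mod q, where
    # x_i(x) = prod_{j != i} (x - a_j).
    table = []
    for i in range(len(a)):
        row = []
        for p in pts:
            v = 1
            for j, aj in enumerate(a):
                if j != i:
                    v = v * (p - aj) % q
            row.append(v)
        table.append(row)
    return table
```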
The proof of Theorem <ref> suggests another possible generalization of the decodability condition of <cit.>, namely,
v_P_k (x_k) + v_P_k(y_k) = v_P_k(x_i) + v_P_k(y_j) ⟺ k = i and k = j.
§.§ A More Efficient Construction
For i ∈ [k], let r_i:=min{α∈ H(P_i): α > 0 }, where H(P)={α∈ℕ: ℓ(α P) ≠ℓ ((α -1)P) } is the Weierstrass semigroup of a rational place P. Then there exist functions z_i such that
⟨ 1, z_i ⟩ = ℒ (r_iP_i).
Then we define
x_i := ∏_ j∈ [k]∖{i} z^-1_j.
Our coding scheme will send matrices
Ã(P_w) = ∑_i ∈ [k] A_ix_i(P_w), B̃(P_w) = ∑_i ∈ [k] B_ix_i(P_w)
to worker w. Let
b = ∑_i ∈ [k] r_i deg(P_i).
Then we have that
⟨ 1, x_1,…,x_k ⟩⊂ℒ (Q_1+…+Q_k),
where Q_i = (z_i^-1)_∞ and deg(∑_i Q_i) ≤ b. Assume further that there exist rational places
P_k+1, …, P_n+k of F
so that
supp(D) ∩supp(G) = ∅
where G=Q_1+…+Q_k and D = P_k+1+…+P_n+k. Then consider the code 𝒞(D,G).
Before proving that the previous construction is indeed a rook code we introduce the following measure of complexity which will turn out to (upper) bound the recovery threshold.
Given a place P of F / 𝔽_q, we define the min pole number of P, denoted μ(P), as the smallest integer r such that there exists a non-constant z ∈ F with
⟨ 1, z ⟩ = ℒ (rP);
that is, the min pole number of P is the multiplicity of the Weierstrass semigroup of the place P.
We define the (k-th) min pole sum as
σ_k(F) := min{∑_i∈[k]μ(P_i) : P_i ∈ℙ_F, deg(P_i)=1, and P_i ≠ P_j for all i ≠ j }.
The functions x_i given in (<ref>) satisfy Equation <ref>. The associated construction has recovery threshold given by
R = 2σ_k(F).
Since the P_i were chosen to be distinct and supp(z_i^-1)_0 = supp(z_i)_∞ = {P_i}, we have that
⋃_j ∈ [k] ∖{i}supp(z_j)_∞
= ⋃_j ∈ [k] ∖{i}supp(z^-1_j)_0 = supp(x_i)_0
We have that
P_i ∉supp(z_j)_ ∞ ,
for i ≠ j by construction. Therefore,
P_i ∉⋃_j ∈ [k] ∖{i}supp(z_j)_∞
= ⋃_j ∈ [k] ∖{i}supp(z^-1_j)_0 = supp(x_i)_0.
Similarly, we have that
P_ℓ∈supp(z_j)_ ∞
for ℓ = j by construction; therefore,
P_ℓ∈⋃_j ∈ [k] ∖{i}supp(z_j)_∞= ⋃_j ∈ [k] ∖{i}supp(z^-1_j)_0 = supp(x_i)_0,
for any ℓ∈ [k] such that ℓ≠ i, by construction.
Therefore, Equation <ref> is satisfied.
We then have that
b = ∑_i ∈ [k] r_i deg(P_i) = σ_k(F),
for an optimal choice of P_1,…,P_k.
§.§ Analysis: Upper Bounds on the Recovery Threshold
The bound in Theorem <ref> can be given a coarse upper bound as stated in the next result.
The codes defined by Equation <ref> have the following upper bound:
R(n,k,q) ≤ 2( g_n,q+1 ) k,
where g_n,q is the smallest genus of a function field over 𝔽_q with at least (g_n,q+1)k+n rational places.
According to the Weierstrass Gap Theorem, for any rational place P of a function field F of genus g, ℕ∖ H(P) ⊆{1, …, 2g-1} and |ℕ∖ H(P)| = g. Hence, μ(P) ≤ g +1 and σ_k(F) ≤ (g+1)k, provided F has at least k rational places. The result then follows from Theorem <ref> and the fact that the scheme requires n reserved rational places.
§.§.§ Hasse-Weil and Lower Bounds on the Recovery Threshold
The recovery threshold R of a scheme that computes the corresponding general coded distributed matrix-matrix multiplication satisfies the bound
𝒯≤ R ≤ 2σ_𝒯 (F) ≤ 2( g_n,q+1 ) 𝒯,
where 𝒯 denotes the bilinear complexity of the corresponding (χ, υ, ζ) matrix multiplication tensor.
In particular, if we take χ = ζ = υ = τ, then the recovery threshold is asymptotically bounded as
R = O(σ_τ^ω(F)) = O(g_n,qτ^ω),
where ω is the matrix multiplication exponent.
Repeating the arguments in <cit.> and Section <ref>, the recovery threshold for batch matrix multiplication bounds the complexity of general matrix multiplication to within a factor of two.
It is possible that the recovery threshold is far smaller than the genus bound suggests. In particular, for hyperelliptic function fields we have σ_k(F) = 2k.
Future research could seek other families of curves where the genus bound on the recovery threshold can be replaced with the gonality.
§ ENTANGLED ALGEBRAIC GEOMETRIC ROOK PRODUCT
In order to separate the different constructions, we will call the rook codes from the previous section diagonal rook codes (diagonal codes for short) and the codes from this section entangled rook codes (entangled codes for short). However, we will see in Section <ref> that diagonal rook codes can encode the most general functions in a straightforward way when the field is finite.
We will see that the difference between the two is that the diagonal codes implicitly encode matrix multiplication while the entangled codes attempt to do two things at once: 1) code the matrices and 2) be a fast matrix multiplication tensor.
Diagonal codes, in contrast, simply take an already optimal fast matrix multiplication tensor and encode that as a batch matrix multiplication.
§.§ Entangled Codes Do Matrix-Matrix Multiplication Well for Small Cases
In the classic characteristic-zero case, codes of this form achieve at best the naive cubic recovery threshold, and thus it is unlikely they perform as well as the diagonal ones.
However, for small values of k they do better. For example, using the entangled polynomial codes construction from <cit.>, we get that for A a 2 × 2 matrix and B a 2 × 2 matrix, the entangled polynomial codes have a recovery threshold of 9=2·2·2+2-1, while the LCC <cit.> and CSA <cit.> constructions achieve a recovery threshold of 13=2·7-1.
For more explanation, please see <cit.>.
§.§ Entangled Codes as an Explicit Construction
In the entangled rook code case, instead of implicitly expressing the general matrix multiplication as a batch of k = 𝒯 matrix multiplications, one directly looks for a code that simultaneously performs fast matrix multiplication.
Since the implicit batch multiplication can already bring the recovery threshold to within a factor of 2(g+1) (or just a factor of 2 when 𝔽 is an infinite field, since one can use MDS codes without running out of rational places), this entails trying to bring the factor below that of the diagonal rook codes.
For the entangled codes, we need to redefine what recovery and rook codes means.
Given x_1,1,…,x_χ,ζ∈ℒ(G_1), y_1,1,…,y_ζ,υ∈ℒ(G_2), and x_i,jy_k,ℓ∈ℒ(G_1 + G_2), we say that ℒ(G_1 + G_2) is R-recoverable if the values of ∑_j ∈ [ζ] A_i,j B_j,k (i.e., the dot products) can be recovered from any R columns of the result of
C
[ x_1,1y_1,1(P_1) x_1,1y_1,1(P_2) … x_1,1y_1,1(P_n); x_1,1y_1,2(P_1) x_1,1y_1,2(P_2) … x_1,1y_1,2(P_n); ⋮ ⋮ ⋱ ⋮ ; x_χ, ζy_ζ, υ(P_1) x_χ, ζy_ζ, υ(P_2) … x_χ, ζy_ζ, υ(P_n); ],
where
0.9!C = [ A_1,1B_1,1 A_1,1B_1,2 … A_χ,ζB_1,1 … A_χ,ζB_ζ,υ ].
Let G = G_1+G_2 and D = P_1+…+P_n. We call 𝒞(G,D) an [n,k]_q-entangled rook code if there are bases x_i,j for ℒ(G_1) and y_k,ℓ for ℒ(G_2) such that,
after reindexing places with triple indices,
(x_i,jy_k,ℓ) = (x_i',j'y_k',ℓ') ⟺ j = k = j' = k'
is satisfied.
Assume there is some r such that
⟨ 1, z ⟩ = ℒ (rP)
for some rational place P.
Then we define
x_i,j := (z^υζ i + j)^-1, y_k ,ℓ := (z^ζℓ+ ζ - k)^-1
then we have that
(x_i,jy_k,ℓ)_0 = r(υζ i + j + ζℓ + ζ - k)P.
It should be clear that only when j=k do we have
(x_i,jy_k,ℓ)_0 = r(υζ i + ζℓ + ζ) P,
and thus (after normalizing the x_i,j and y_k,ℓ) we have that the coefficient of x_i,ky_k,ℓ in the product ( ∑_i,jA_i,jx_i,j)(∑_k,ℓB_k,ℓy_k,ℓ) is equal to ∑_kA_i,kB_k,ℓ.
Thus we have an alternate coding scheme that achieves a cubic recovery threshold.
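The collision structure of these exponents, which underlies decodability, can be verified by direct enumeration: each "diagonal" exponent υζ i + ζℓ + ζ is hit exactly by the tuples (i, j, j, ℓ), whose coefficients sum to the desired dot product. A sketch (0-indexed for convenience):

```python
from collections import defaultdict

def entangled_exponent_buckets(chi, zeta, ups):
    # Exponent of z in x_{i,j} * y_{k,l}: ups*zeta*i + j + zeta*l + zeta - k.
    buckets = defaultdict(list)
    for i in range(chi):
        for j in range(zeta):
            for k in range(zeta):
                for l in range(ups):
                    e = ups * zeta * i + j + zeta * l + zeta - k
                    buckets[e].append((i, j, k, l))
    # Buckets of the diagonal exponents, keyed by output position (i, l).
    return {(i, l): buckets[ups * zeta * i + zeta * l + zeta]
            for i in range(chi) for l in range(ups)}
```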
We postpone the analysis of the previous construction since it would asymptotically give a cubic recovery threshold (i.e., the complexity of naive matrix multiplication), up to a factor proportional to the genus.
It is likely that one can extend the impossibility results from <cit.>, which were proven using additive combinatorics, to the case of the semigroup of a single point.
The main intuition behind the diagonal design is to consider semigroups of many points, allowing more elbow room in the construction so that such impossibility results do not hinder us.
§ DIAGONAL ROOK CODES FOR TENSORS
§.§ Multi-linear Functions
By a tensor, T, we mean a function
T: V_1 ×⋯× V_ℓ→𝔽 such that
T(v_1,…,α v_i + β w, …,v_ℓ)
= α T(v_1,…, v_i , …,v_ℓ) + β T(v_1,…, w, …,v_ℓ)
for all i; we call ℓ the order of the tensor. Given bases ℬ_i for the V_i, we can represent a tensor by its values on tuples of basis vectors.
Given linear functions w_i : V_i →𝔽, we define the rank-1 tensor associated to (w_1,…,w_ℓ) as
w_1 w_2 … w_ℓ (v_1,…, v_ℓ) :=
∏_ i ∈ [ℓ] w_i(v_i)= [w_1⊗…⊗ w_ℓ] (v_1,…,v_ℓ).
If V_1=…=V_ℓ, then
we can further define the ℓ^th power of a linear form as the rank-1 tensor
v^ℓ = v ⋯ v.
For simplicity, we consider the case where V_1=…=V_ℓ.
§.§ Implicit Construction
Given x_1^(i),…,x_k^(i)∈ℒ(G_i), and ∏_i ∈ [ℓ] x^(i)_j_i∈ℒ(G_1 + … + G_ℓ) for all (j_1,…,j_ℓ) ∈ [k]^ℓ, we say that ℒ(G_1 + …+G_ℓ) is R-recoverable if the values of w^ℓ_i(v_1,…,v_ℓ), i.e., the diagonal elements, can be recovered from any R columns of the result of the product of C=
[ [ w_1 … w_1 w_1 … w_2 … w_k … w_1 … w_k … w_k ]; (v_1,…,v_ℓ) ]
with
[ x_1^(1)… x^(ℓ)_1(P_1) x_1^(1)… x^(ℓ)_1(P_2) … x_1^(1)… x^(ℓ)_1(P_n); x_1^(1)… x^(ℓ)_2(P_1) x_1^(1)… x^(ℓ)_2(P_2) … x_1^(1)… x^(ℓ)_2(P_n); ⋮ ⋮ ⋱ ⋮ ; x_k^(1)… x^(ℓ)_k(P_1) x_k^(1)… x^(ℓ)_k(P_2) … x_k^(1)… x^(ℓ)_k(P_n); ].
Here, C is the result of applying all of the possible products w_i_1… w_i_ℓ to (v_1,…,v_ℓ).
Let G = G_1+…+G_ℓ and D = P_1+…+P_n. We call 𝒞(G,D) a [n,k]_q-tensor rook code, if there are bases x_1^(i),…,x_k^(i)∈ℒ(G_i) such that
P_k ∉supp(x_i_1^(1)… x_i_ℓ^(ℓ)) ⟺ i_1 = … = i_ℓ = k.
is satisfied.
Definition <ref> seems to encode only the “symmetric” tensors. This is true for infinite fields, but as we will see, over finite fields this models all possible functions, so we do not have to bother encoding more general tensors; thus, for simplicity, we only consider the symmetric case. However, it is straightforward to generalize the construction implied by Definition <ref> to non-symmetric tensors.
§.§ True Generality Over Finite Fields
A tensor T:V^ℓ→𝔽 is symmetric if
T(v_1,…,v_i,…,v_j,…,v_ℓ) = T(v_1,…,v_j,…,v_i,…,v_ℓ)
for all i,j ∈ [ℓ], and we call the symmetric rank of T the smallest number r such that there exist w_1,…,w_r ∈ V^* with
T(v_1,…,v_ℓ) = ∑_i ∈ [r] w_i^ℓ(v_1,…,v_ℓ).
The space of all symmetric tensors on V of order ℓ is commonly denoted as
S^ℓ(V).
Section 7.1 of <cit.> shows that the symmetric rank of a symmetric tensor bounds its algebraic complexity in the arithmetic circuit model; in particular, it bounds the complexity for diagonal depth-3 circuits, or depth-3 powering circuits, sometimes denoted ΣΠ^ℓΣ circuits.
We now proceed to show that any function
f : 𝔽_q^t →𝔽_q^u
has its complexity bounded by the symmetric rank, and thus, our model gives a scheme to perform coded distributed computing of any function over a finite field.
Every function
f : 𝔽_q^t →𝔽_q
over a finite field is given by a multivariate polynomial
p_f ∈𝔽_q[x_1,…,x_t].
In particular, any multivariate function
(f_1,…,f_u) : 𝔽_q^t →𝔽_q^u
is given by a polynomial map p_f where p_f,i∈𝔽_q[x_1,…,x_t].
Every degree-ℓ polynomial f in t variables is the specialization of a symmetric tensor T_f ∈ S^ℓ(𝔽_q^t+1); i.e.,
f(x_1,…,x_t) = T_f(v,…,v) for v = (x_1,…,x_t,1).
Every function
f : 𝔽_q^t →𝔽_q^u
is given by some symmetric tensor T_f ∈(S^ℓ(𝔽_q^t))^u.
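For the univariate case, the lemma above is constructive via the indicator polynomials 1-(x-a)^q-1, which equal 1 at x = a and 0 elsewhere by Fermat's little theorem. A sketch for prime q:

```python
def poly_mul(p, r, q):
    out = [0] * (len(p) + len(r) - 1)
    for i, pi in enumerate(p):
        for j, rj in enumerate(r):
            out[i + j] = (out[i + j] + pi * rj) % q
    return out

def interpolate_over_Fq(f, q):
    # Coefficients (lowest degree first) of p(x) = sum_a f(a)*(1-(x-a)^(q-1)),
    # which agrees with f on all of GF(q) for prime q.
    coeffs = [0] * q
    for a in range(q):
        term = [1]
        for _ in range(q - 1):
            term = poly_mul(term, [(-a) % q, 1], q)   # multiply by (x - a)
        term = [(-c) % q for c in term]               # -(x - a)^(q-1)
        term[0] = (term[0] + 1) % q                   # 1 - (x - a)^(q-1)
        for d, c in enumerate(term):
            coeffs[d] = (coeffs[d] + f(a) * c) % q
    return coeffs
```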
§.§ Explicit Construction
We define
x_i := ∏_ j∈ [k]∖{i} z^-1_j.
just as before, but now define the code by sending the ℓ coded vectors
w̃_j(P_u) := ∑_i ∈ [k] w_i x_i^(j)(P_u),
to worker u, where x^(1)_i = ... = x^(ℓ)_i = x_i, so that w̃ _1 =...=w̃_ℓ =: w̃.
The workers then return
w̃^ℓ(P_u).
The construction of the z_i satisfies Equation <ref>; in particular, the construction given by Equation <ref> has recovery threshold given by
R = ℓσ_k(F).
§.§ Analysis: Upper bounds on the Recovery Threshold for Tensors
The bound in Theorem <ref> can be given a coarse upper bound as follows:
The codes defined by Equation <ref> satisfy
R(n,k,q) ≤ (g_n,q+1) ℓ k,
where g_n,q is the smallest genus of a function field over 𝔽_q with at least (g_n,q+1)k+n rational places.
Let 𝒯(f) be the ℓ-linear complexity of computing the polynomial function f of degree ℓ. Then the recovery threshold of a scheme that computes the corresponding function is bounded as follows:
𝒯≤ R ≤ℓσ_𝒯(F) ≤ℓ (g_n,q+1) 𝒯.
§ ACKNOWLEDGMENT
The first author is partially supported by NSF DMS-2201075 and the Commonwealth Cyber Initiative.
|
http://arxiv.org/abs/2405.09591v2 | 20240515115808 | A Comprehensive Survey on Data Augmentation | [
"Zaitian Wang",
"Pengfei Wang",
"Kunpeng Liu",
"Pengyang Wang",
"Yanjie Fu",
"Chang-Tien Lu",
"Charu C. Aggarwal",
"Jian Pei",
"Yuanchun Zhou"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Zaitian Wang (wangzaitian23@mails.ucas.ac.cn) and Pengfei Wang (pfwang@cnic.cn, ORCID 0000-0003-1075-0684), Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China. Both authors contributed equally to this research.
Kunpeng Liu (kunpeng@pdx.edu), Portland State University, Portland, OR, USA.
Pengyang Wang (pywang@um.edu.mo), University of Macau, Macau, China.
Yanjie Fu (yanjie.fu@asu.edu), Arizona State University, Tempe, AZ, USA.
Chang-Tien Lu (ctlu@vt.edu), Virginia Tech, Falls Church, VA, USA.
Charu C. Aggarwal (CharuCAggarwal@gmail.com), IBM T. J. Watson Research Center, Yorktown, NY, USA.
Jian Pei (j.pei@duke.edu), Duke University, Durham, NC, USA.
Yuanchun Zhou (zyc@cnic.cn; contact author), Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China.
Data augmentation is a series of techniques that generate high-quality artificial data by manipulating existing data samples.
By leveraging data augmentation techniques, AI models can achieve significantly improved applicability in tasks involving scarce or imbalanced datasets, thereby substantially enhancing their generalization capabilities.
Existing literature surveys focus only on a single specific data modality and categorize these methods from modality-specific and operation-centric perspectives, which lacks a consistent summary of data augmentation methods across multiple modalities and limits the comprehension of how existing data samples serve the data augmentation process.
To bridge this gap, we propose a more enlightening taxonomy that encompasses data augmentation techniques for different common data modalities.
Specifically, from a data-centric perspective, this survey proposes a modality-independent taxonomy by investigating how data augmentation takes advantage of the intrinsic relationships between data samples, covering single-wise, pair-wise, and population-wise sample augmentation methods.
Additionally, we categorize data augmentation methods across five data modalities through a unified inductive approach.
A Comprehensive Survey on Data Augmentation
Yuanchun Zhou
Received May 20, 2024;
===========================================
§ INTRODUCTION
With the rapid development of Artificial Intelligence (AI) in the past decades, AI methods have shown their superiority over human beings and other traditional approaches across most tasks, from universal tasks in our daily lives such as image recognition <cit.> and text translation <cit.> to complicated scientific research tasks such as protein structure prediction <cit.> and weather forecasting <cit.>.
Recently, powered by Latent Diffusion Models <cit.> and the Transformer <cit.>, Stable Diffusion <cit.> and ChatGPT <cit.> have profoundly changed the way people work, live, and entertain themselves.
The success of AI products is usually attributed to AI models' in-depth understanding of the accumulated data, which intrinsically uncovers data patterns and learns data-task correlations.
The performance of AI models is affected by the quantity and quality of the training data; e.g., models trained with limited data suffer from over-fitting, with significantly degraded performance on test datasets, while models trained with imbalanced samples exhibit poor generalization ability.
In most situations, researchers have to overcome difficulties caused by limited amounts of data samples and imbalanced sample distributions.
An intuitive solution to these problems is to acquire more data, but data are scarce or difficult to collect in many cases, and labeling the data is yet another labor-intensive task.
To solve these problems, data augmentation has been extensively applied and proven to be effective and efficient <cit.>.
The core idea of data augmentation is to artificially enlarge the training dataset by creating modified copies of existing data.
It also introduces more diversity and fills the gap between training datasets and real-world applications.
As technology continues to evolve, a variety of techniques are applied to augment data samples.
Some simple approaches only randomly mask part of the data <cit.>, while more sophisticated augmentation schemes involve generative adversarial networks <cit.> or reinforcement learning agents <cit.>.
An ideal survey on data augmentation should be modality-independent, because only in this way can it focus on the intrinsic mechanisms of data augmentation, regardless of the data type, and thus provide insight into the nature of data augmentation.
Existing literature surveys summarize data augmentation methods from different perspectives <cit.>. As summarized in Table. <ref>, each of them concentrates on a certain data modality.
Most of them categorize data augmentation methods with modality-specific (e.g. image modality and text modality), or operation-centric (e.g. feature scaling operation, data perturbation operation) taxonomies.
These surveys reflect the development of data augmentation and its application in different learning scenarios.
However, they fail to cover data augmentation methods across data types or to uncover the common patterns shared despite different modalities, and thus limit readers' understanding of the essence of data augmentation.
To fill this gap, we investigate data augmentation methods comprehensively, across five most popular data modalities.
We analyze how data augmentation methods utilize different sample numbers and how they leverage different components of the data information.
Then, we propose a unified taxonomy that focuses on the way data augmentation derives new data samples.
Finally, we investigate up-to-date literature and categorize them by our proposed taxonomy.
The main contributions of this survey can be summarized as follows:
* We propose a novel modality-independent taxonomy from a data-centric perspective that accommodates data augmentation techniques for all modalities consistently and inductively;
* To the best of our knowledge, this is the first survey that covers data augmentation techniques across five data modalities, i.e. image, text, graph, tabular, and time-series modalities;
* We investigate how information is consistently contained in each of these data modalities and can be utilized for data augmentation.
* We include and categorize up-to-date literature in data augmentation.
§ BACKGROUND
§.§ Early Evolution
Despite the fact that data augmentation only became popular in no more than a decade, some of its embryonic forms were proposed quite early.
The concept of data augmentation was conceived at the very beginning of Deep Learning.
For example, the use of random distortion can be found in LeNet <cit.>, where ninefold the distorted images are added to the dataset to verify that increasing the size of the training set can effectively reduce test error.
Since then, data augmentation has gradually been recognized as a best practice for training Convolutional Neural Networks (CNNs) <cit.>.
AlexNet <cit.> explicitly employs several data augmentation techniques to reduce overfitting. It augments the dataset by extracting patches from images and altering the color intensity. It also applies dropout, which randomly sets the output of some neurons to zero to prevent their co-adaptation. Although dropout is often regarded as a regularization approach, its idea is similar to data augmentation, and it has indeed inspired a set of augmentation methods <cit.>.
These applications of data augmentation mainly focus on increasing the size of the training dataset and introducing diversity to it.
SMOTE <cit.> is another early and representative work on data augmentation but from a different perspective. Focusing on addressing the class imbalance problem, it suggests that oversampling the minority class can achieve better classification performance when categories are not equally represented in the dataset.
§.§ Existing Surveys
Given the rapid development of CNNs and their wide application in image-related tasks, most early data augmentation methods are proposed for image data.
Also, it is natural to augment image data because many data augmentation methods suit the traits of CNNs very well, such as translation invariance.
Published in 2018, <cit.> is one of the first surveys on data augmentation. It gives a brief discussion of the effect of traditional and neural-based augmentation methods.
In 2019, the use of data augmentation on image data is thoroughly studied in <cit.>, which is up to now the most extensive work on this track. It categorizes image data augmentation methods into basic image manipulation and deep learning approaches, with a wide coverage. It also introduces how meta-learning is used to wisely choose and combine various basic operations for data augmentation.
<cit.> is a recent survey on image data augmentation published in 2022. It formalizes commonly used operations for image data augmentation with equations and illustrates their effects with examples. In the same year, <cit.> also elaborates on image data augmentation with plenty of examples. It adopts taxonomies similar to those of <cit.> but includes more modern approaches and evaluates the improvements in model performance when these augmentation methods are used.
Data augmentation on text data was not researched as early or as thoroughly as on image data, possibly due to the discrete and correlated nature of text components. Still, several augmentation methods have been proposed for, or transplanted to, text data.
<cit.> is among the first surveys on text data augmentation. It adopts a simple taxonomy but places more emphasis on applications.
Built on previous work on image data augmentation, <cit.> continues to investigate how data augmentation is used to enrich text datasets, and methods are classified as symbolic augmentation and neural augmentation.
<cit.> categorizes text data augmentation methods by identifying whether they are used on the data space or feature space. It also presents the performance improvements of different methods.
Following the success of data augmentation methods for image and text data, there is a growing trend in augmenting graph data, too. Since the graph represents data in a relatively complex form compared with image and text, there are more opportunities for data augmentation. This also leads to more possible ways to categorize graph data augmentation methods.
For example, <cit.> views graph data augmentation from the employed techniques and application scenarios, while <cit.> discusses graph data augmentation in terms of its data modality, task level, and whether the augmentation operation is rule-based or learned.
<cit.> tries to elaborate on all feasible taxonomies for graph data augmentation and uses detailed schematic illustrations to explain the overview framework as well as the idea of some typical methods.
Tabular data is common in reality, but it offers less room for data augmentation than other data types because it lacks some of the features that augmentation can exploit. Some surveys only discuss augmentation techniques in the context of deep learning on tabular data <cit.>, and others mainly focus on how to generate tabular data <cit.>.
With its growing application in scenarios such as speech recognition and IoT, time-series data is attracting more attention. <cit.> discusses basic and advanced augmentation methods. <cit.> focuses on generative approaches. <cit.> gives more discussion on mixing time-series data.
To date, data augmentation techniques for individual data types have been discussed exhaustively; however, there has not been a comprehensive survey that summarizes data augmentation for all types of data.
Such a survey can be profitable because data augmentation techniques for different modalities share some common methodologies when leveraging information from existing data, such as altering some features of a sample or mixing values between multiple samples. Summarizing these similarities can help reveal the common pattern of data augmentation.
A comprehensive survey does not mean simply combining existing surveys, because the taxonomies proposed in these surveys view data augmentation from different angles and are incompatible with each other.
Some taxonomies contrast basic operations to advanced approaches; some categorize methods by their application scenarios.
Besides, these taxonomies provide limited insight into what happens to the information.
Indeed, huge gaps lie between the operations for different data modalities due to their distinctive natures, and these operation-based or application-based taxonomies do reflect the development of data augmentation techniques.
However, from such taxonomies we cannot grasp the essence of data augmentation, and we lack a perception of the big picture of the field.
Hence there is a need for a comprehensive survey that covers data augmentation techniques for all data modalities with an all-embracing and information-centric taxonomy.
§ TAXONOMY
Before presenting the taxonomy, we formalize data augmentation as follows. Given a labeled dataset 𝒟_L = {𝐗, 𝐲}, where 𝐗 stands for the data and 𝐲 stands for the labels, data augmentation can be represented by a function f_θ such that the augmented dataset 𝒟'_L = {𝐗', 𝐲'} is derived by:

f_θ: 𝒟_L = {𝐗, 𝐲} → 𝒟'_L = {𝐗', 𝐲'}
This representation also applies to an unlabeled dataset 𝒟_U = {𝐗} or a partially labeled dataset 𝒟 = {𝐗_L∪𝐗_U,𝐲_L}, but for simplicity, in this section, we only use a labeled dataset to explain the taxonomy.
Some data augmentation approaches are used in the input data space, while others are used in the latent feature space. This section uses 𝐗 as a space-agnostic representation.
In this survey, we propose a two-tier taxonomy from an information-centric perspective that can be applied to all data modalities.
The main consideration of our taxonomy is where the information in the augmented data comes from.
It differentiates data augmentation techniques by asking two research questions:
RQ1: How many samples are used to generate each new sample?
RQ2: Which part of the information is used to generate new data?
The answers to these questions constitute the taxonomy hierarchy which we present below.
By answering RQ1, we can divide data augmentation approaches into Individual, Multiple, and Populational Augmentation, which build the first tier of the taxonomy in our survey.
Individual Augmentation.
This type of data augmentation method transforms one data sample without referring to others to create new data.
It introduces a certain perturbation to the data, such as masking and noise addition.
The information of new data comes from exactly one sample in the original dataset, and each new sample often lies around its original sample in the feature space.
We formalize Individual Augmentation as follows, where ϵ(𝐱_i) stands for a perturbation that depends on the original sample 𝐱_i:

𝐱' = 𝐱_i + ϵ(𝐱_i),    y' = y_i
We further answer RQ2 to build the second tier of the taxonomy.
To answer this question, we must consider how the information is contained in the data.
Typically, data is made up of a certain type of element that carries unit information with a value, such as the coloring of a pixel or the wording in a sentence.
For most data, there is also some structural information that connects these elements in a certain way, such as the positional relationship of pixels or the syntax of a sentence.
The key to designing data augmentation methods is to think of how to properly leverage the two types of information.
So, we can categorize Individual Augmentation according to which part of the information is used for data augmentation:
* Value-based Transformation, which perturbs the value that an element carries.
* Structure-based Transformation, which perturbs elements' structural relationships.
Multiple Augmentation.
This type of data augmentation method makes use of multiple data to acquire new ones.
Information from different data samples is combined by operations such as interpolation and concatenation.
New samples derived from these methods often lie between the source data in the feature space.
We formalize Multiple Augmentation as follows, where 𝐱_i and 𝐱_j are two data samples from the original dataset, and λ is a scalar factor that typically lies in [0,1]:

𝐱' = λ·𝐱_i + (1-λ)·𝐱_j,    y' = λ· y_i + (1-λ)· y_j
Now we answer RQ2 for Multiple Augmentation.
Similarly to Individual Augmentation, this type of data augmentation is also conducted utilizing either value or structure information, and the detailed classification is as follows:
* Value-based Mixture, which arithmetically mixes the values of multiple data.
* Structure-based Combination, which pieces multiple data (or parts of them) together.
Populational Augmentation.
This type of data augmentation method does not explicitly make use of one or several samples but rather uses the population of the data.
It comprehensively learns the features and abstractions of the entire dataset and samples completely new data from a learned distribution of the data.
The generated data should also fall into this distribution.
We formalize Populational Augmentation as follows, where P(·) stands for a probability distribution learned from the original dataset 𝒟_L = {𝐗, 𝐲}:

𝐱' ∼ P(𝐗),    y' ∼ P(𝐲 | 𝐱')
The answer to RQ2 for Populational Augmentation is different from the two categories mentioned above.
When generating new data based on population distributions, most methods simultaneously consider value and structural information.
Some methods also introduce external knowledge when learning from the data population. So, we have the following classification:
* Vanilla Generation, where the generation process only resorts to the existing dataset.
* Exogenous Generation, where the generation process refers to external resources such as other datasets or expert knowledge.
We have presented a general taxonomy for data augmentation. The categories are deliberately high-level, and each data modality has specific augmentation techniques that depend on its nature. In addition, the equations we use as a general formalization are abstract and conceptual; they do not literally represent every specific data augmentation technique in these categories. From Section <ref> to Section <ref>, we elaborate on how this taxonomy applies to each data modality and review some typical data augmentation methods.
§ DATA AUGMENTATION FOR IMAGE DATA
The elemental components of an image are pixels.
A pixel carries unit information that uses a numerical value to represent the color.
Each pixel also has spatial relationships with other pixels, so images are well-structured.
Data augmentation for image data revolves around the manipulation of the colors and spatial relationships of image pixels.
§.§ Individual Image Augmentation
§.§.§ Value-based Image Transformation
Two kinds of methods are used to perturb the value a pixel carries and thus change the color it displays. One is Pixel Erasing, which completely masks out the color information in an area of the image; the other is Photometric Transformation, which only imposes a soft adjustment that changes the appearance of the image while the essential information remains visible.
Pixel Erasing.
Pixel Erasing is one of the simplest yet most effective types of data augmentation. The idea is to mask some information in the original input space or the feature space. Typical works like Cutout <cit.> and random erasing <cit.> first select a rectangular region and erase all pixels in that region. The erased region is then filled with a certain color or with Gaussian noise.
Table: Region selection strategy for Pixel Erasing methods (Random vs. Location-based)
  Cutout <cit.>: Random
  Image-aware Random Erasing <cit.>: Random
  Object-aware Random Erasing <cit.>: Location-based
  Region-aware Random Erasing <cit.>: Location-based, plus random erasing of the background
  Hide-and-seek <cit.>: Random
  Grid-Mask <cit.>: Location-based
  Random Erasing in the frequency domain <cit.>: Random
If erasing is performed by randomly selecting an area in the image, it is called Image-aware Random Erasing (IRE) <cit.>. In some tasks, the location of the target in the image is known, so random erasing can be restricted to the object's bounding box, known as Object-aware Random Erasing (ORE) <cit.>. As further argued in <cit.>, the erased area within the bounding box should be limited to a manageable size, and a part of the less informative background should also be erased. <cit.> proposes Region-aware Random Erasing (RRE), which first applies ORE with upper bounds on the width and height of the erased region, and then randomly erases the background.
Random Erasing typically adds one or two large rectangular masks to the image, while some other methods deploy more flexible implementations. Hide-And-Seek <cit.> proposes to randomly remove multiple parts of the image so that the model has to learn from several parts of the image rather than only the most discriminative one. Similarly, the Grid-Mask method <cit.> generates a set of squared masks under a grid layout. It can prevent excessive deletion and reservation and achieve a reasonable balance between deleted and reserved regional information.
In most research, random erasing is used in the spatial domain, while REF <cit.> extends the use of random erasing to the frequency domain. It uses the discrete Fourier transform to process the image, applies random erasing on the image's frequency domain, and uses the inverse discrete Fourier transform to recover the image to the spatial domain.
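To make the idea concrete, the following is a minimal numpy sketch of IRE-style random erasing; the function name, parameter ranges, and constant fill value are our own illustrative choices, not those of the cited papers.

```python
import numpy as np

def random_erase(img, scale=(0.02, 0.2), value=0.0, rng=None):
    """Erase a random rectangle from an H x W (x C) image, Cutout/IRE style."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    area = h * w * rng.uniform(*scale)            # target erased area
    aspect = rng.uniform(0.5, 2.0)                # random aspect ratio
    eh = min(h, int(round(np.sqrt(area * aspect))))
    ew = min(w, int(round(np.sqrt(area / aspect))))
    top, left = rng.integers(0, h - eh + 1), rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = value     # constant fill; noise also works
    return out
```

Restricting `top` and `left` to a known bounding box would turn this sketch into an ORE-style variant.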
Photometric Transformation.
In contrast to Pixel Erasing, which puts masks on an area of the image, Photometric Transformation changes the overall color and style of the image.
Some methods alter the value of each pixel without considering other pixels. The simplest way is to adjust the brightness of the image. Color inversion replaces pixels with their "opposite" colors. Color casting applies different operations to different channels; for example, red color casting increases the image's red channel while keeping the green and blue channels untouched. Noise injection adds Gaussian or other types of noise to the images. It helps the model learn from various distorted features and become robust. In addition, this method can produce a large amount of augmented data with low cost and complexity.
Some other methods decide how to adjust the value of a pixel according to other pixels. Histogram equalization <cit.>, for example, improves the image by enhancing its contrast, and white balancing simulates alternative light cast on the object. Kernel filters are matrices that slide across the original image and are convolved with it to generate new images. Depending on the values in the matrix, a kernel filter can sharpen or blur the image, or make other changes. A sharpening filter helps extract features, and a blurring filter can improve the robustness of the model. <cit.>, for instance, uses kernel filters to improve edge detection performance.
Grayscaling transforms a colored picture into black-and-white by combining the channels for different colors. This change forces the model to focus on the shapes or edges of target objects instead of their colors. Apart from this, grayscaling is especially useful for compressing data and improving the model's efficiency. A counter-operation of grayscaling is Grayscale-to-Color Conversion (GCC) <cit.>. Since most pre-trained models for image-related tasks are trained on colored images, single-channel grayscale images should be colored to fit these models.
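As a sketch of how such value-based adjustments compose, the toy function below (our own simplification, assuming pixel values normalized to [0, 1]) combines a random brightness shift with Gaussian noise injection.

```python
import numpy as np

def photometric_jitter(img, brightness=0.2, noise_std=0.05, rng=None):
    """Global brightness shift plus Gaussian noise injection."""
    rng = rng or np.random.default_rng()
    shift = rng.uniform(-brightness, brightness)   # uniform brightness change
    noisy = img + shift + rng.normal(0.0, noise_std, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)                # keep a valid pixel range
```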
§.§.§ Structure-based Image Transformation
This type of Individual Image Augmentation manipulates the spatial relationship between pixels. Image Cropping breaks the original image into pieces, and Geometric Transformation perturbs the structure within the original data.
Image Cropping.
Image Cropping refers to selecting an area from the original image and removing other parts of the image.
Some cropping techniques are discussed in the context of reducing the cost of image storage and transmission, improving image quality, and reducing computation complexity and cost when training artificial intelligence models.
However, it can also be used as a data augmentation technique, since image cropping can produce multiple samples by selecting different regions of the original image and prevents the model from being affected by irrelevant background information.
Primitive cropping methods make use of manual selection or random cropping, while some methods employ more advanced techniques to perform this task, including attention-based approaches and aesthetics-based approaches <cit.>.
Attention-based approaches use certain attention scores to evaluate a subimage. Regions that contain the main topic or other types of information often have high attention scores.
<cit.> proposes a general solution, which uses the entropy, area ratio, and center distance of the image to evaluate the performance of the image segmentation and accordingly adjust the cropping parameters.
<cit.> uses an adaptive cropping method that applies different processes depending on the classification result of the image.
These studies discuss how to evaluate the importance of each part of the image but overlook how to search for the optimal cropping rectangle.
Formulations of this problem and algorithms with low computational complexity are given by <cit.>.
Apart from these automatic methods, there are also some interactive image cropping methods.
<cit.>, a gaze-based image cropping approach, uses eye-tracking techniques to capture human fixations and thus infer the location of important content in the image.
Unlike fully hand-crafted image cropping, the implicit interaction applied in eye tracking is less burdensome and less time-consuming.
Aesthetics-based approaches <cit.> are similar to quality evaluation. For example, <cit.> detects several subject regions and one background region, extracts features from these regions, estimates and combines posterior probabilities, and determines the quality score according to these probabilities. A2-RL <cit.> formulates image cropping as a sequential decision-making process and uses an aesthetics-aware reinforcement learning model to solve the best-cropping problem.
Beyond attention-based and aesthetics-based cropping, <cit.> proposes a change-based approach. It takes into account the content that is removed and extracts two sets of specially designed features to model what changes after cropping.
Geometric Transformation.
As one of the most classical image data augmentation methods, Geometric Transformation changes the spatial relationship of pixels within an image while keeping the value of each pixel untouched to create more data without extra sampling or labeling. It increases the number of training samples, extends the input space, and better represents the real world by deriving more possible views from the original images. For example, flipping the image of a cat can be understood as taking another picture of the cat from a different orientation.
Affine transformations like flipping, shearing, rotation, and translation manipulate the image while preserving the colinearity and ratios of distances.
Image flipping reflects the image around its vertical axis, horizontal axis, or both, and rotation turns the image by an angle about a fixed point.
Image shearing fixes all pixels along a given line and shifts the other pixels parallel to that line by a distance proportional to their perpendicular distance from it.
Translation means moving the image in a certain direction. It can introduce more variance in the position of target objects in the image.
In non-affine transformations, the spatial relationship between pixels is changed further than in affine transformations because colinearity and the ratios of distances are broken.
One example of non-affine transformation is perspective transformation <cit.>, which mimics images taken from other views of the object.
Image stretching <cit.> is a kind of content-aware image resizing. It preserves or even enlarges the prominent contents in the image and compresses the background or unimportant foreground information. For example, if there is a person in the middle of an image, the middle part of the image is made wider, and the left and right parts are made narrower.
Distortion <cit.> also stretches the pixels but often focuses on a certain region. It aims at fixing distortion in the original images and is especially useful in digit classification tasks where the handwriting is poor, or when the images were distorted by a camera lens (e.g. a wide-angle lens) during collection.
PatchShuffle <cit.> divides an image into block matrices and shuffles the pixels within each block under a certain probability. This transformation introduces more variance in the image and extends the input space or feature space of the training data.
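A minimal sketch of two such structure-based operations, flipping and translation, is given below; only pixel positions change, never their values, and the zero padding and shift range are our own choices.

```python
import numpy as np

def random_shift_flip(img, max_shift=8, rng=None):
    """Randomly flip the image horizontally, then translate it with zero padding."""
    rng = rng or np.random.default_rng()
    if rng.uniform() < 0.5:
        img = img[:, ::-1]                                    # horizontal flip
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)  # translation offsets
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    out[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)] = \
        img[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
    return out
```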
§.§.§ Image Value-structure Transformation
Automatic Data Augmentation methods search for an optimized augmentation policy. The policy is drawn from a search space that includes both value-based and structure-based transformations, so when the augmentation is applied, both the coloring and the positions of pixels are changed.
Automatic Data Augmentation.
AutoAugment <cit.> is among the most significant achievements in Automatic Data Augmentation. It represents data augmentation as a decision-making process and uses reinforcement learning to search for the best operations. In AutoAugment, an operation can be any basic value-based transformation, such as histogram equalization, or structure-based transformation, such as shearing. A sub-policy consists of two operations along with the probabilities and magnitudes with which they are applied, and a policy consists of many sub-policies. All sub-policies and their parameters form the search space (action space), and a controller RNN (agent) picks sub-policies from the search space to get a policy (action). Images are processed under the policy, which produces several augmented images. The augmented images are fed to downstream networks (environment), and AutoAugment then uses the network's performance (reward) to update the controller. As a result, the controller is trained to pick the policies and parameters that yield the best network performance. The optimal policy is applied to all images and is also transferable, so it can be used on other datasets, too.
Fast AutoAugment <cit.> proposes a more efficient searching algorithm for augmentation policies, which works significantly faster than AutoAugment and thus allows a larger number of sub-policies, which should benefit the generalization performance.
Similarly, RandAugment <cit.> removes the need for a separate search phase on a proxy task, and makes the training process less complicated and less expensive.
Population Based Augmentation (PBA) <cit.> learns an augmentation schedule rather than the fixed policy of AutoAugment, to improve efficiency.
These methods combine the benefits of both value-based transformations and structure-based transformations and further discover the potential of data augmentation with individual image data.
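The control flow these searched policies share can be sketched in a few lines. Below is a RandAugment-style sampler; it is our own toy version, with only two stand-in operations, and the shared magnitude parameter of the real method is omitted.

```python
import numpy as np

def _flip(img, rng):
    return img[:, ::-1].copy()                               # toy structure-based op

def _jitter(img, rng):
    return np.clip(img + rng.uniform(-0.2, 0.2), 0.0, 1.0)   # toy value-based op

def rand_augment(img, n_ops=2, rng=None):
    """Apply n_ops operations drawn at random from the pool, in sequence."""
    rng = rng or np.random.default_rng()
    ops = [_flip, _jitter]
    for i in rng.integers(0, len(ops), size=n_ops):
        img = ops[int(i)](img, rng)
    return img
```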
§.§ Multiple Image Augmentation
§.§.§ Value-based Image Mixture
This type of Multiple Image Augmentation computes an arithmetical mixture (e.g. interpolation) of the color from different images.
Image MixUp.
Image mixup is a category of data augmentation methods that combine two images pixel-wise: each pixel of the augmented image is produced by interpolating the corresponding pixels of the original images.
Mixup <cit.> mixes two random images on the input layer. Pixels from two original images each contribute a portion to the new image:
𝐱' = λ𝐱_i + (1-λ)𝐱_j,    𝐲' = λ𝐲_i + (1-λ)𝐲_j

where (𝐱_i, 𝐲_i) and (𝐱_j, 𝐲_j) are two labeled images from the dataset, and λ ∈ [0,1].
As a variant, manifold mixup <cit.> is operated on the hidden layers of the neural network.
Most mixup methods combine two different images, while AugMix <cit.> applies three augmentation operations to one source image, and combines the three augmented images to generate a new sample.
In the above-mentioned mixup approaches, the mixing factor is even across the entire image, so they are region-consistent.
In region-variant mixup approaches, the mixing operation varies in strength across regions.
For example, SmoothMix <cit.> applies a soft-edged mask when mixing.
Two more sophisticated methods are Co-mixup <cit.> and Puzzle mix <cit.>, which both use saliency to decide the mixing strength of different regions, potentially leading to better mixing performances.
Most mixup methods create new samples with soft labels, as in Eq. <ref>, but there are some exceptions.
SamplePairing <cit.>, for example, uses the label of one of the original images, while that of the other image is left unused.
ReMix <cit.> is designed to address the class imbalance problem, so the label of the augmented data is taken from the minority class.
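A minimal sketch of input-space mixup following Eq. <ref> is shown below; drawing λ from a Beta(α, α) distribution follows the original mixup paper, while the function signature is our own.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convex combination of two images and their one-hot label vectors."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing factor λ
    x = lam * x1 + (1.0 - lam) * x2       # pixel-wise interpolation
    y = lam * y1 + (1.0 - lam) * y2       # soft label
    return x, y
```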
Neural Blending.
In mixup, we blend different images explicitly and arithmetically; however, this process can benefit from the use of neural networks.
AutoMix <cit.>, for example, integrates a Mix Block in the training process. It builds a bridge between the selection of mixup policy and the optimization of the model.
In AutoMix, the augmentation module is optimized along with the classification feature map, and the mixup functions and mixing ratio are updated according to the classification loss.
Another image blending technique is proposed as Smart Augmentation <cit.>.
At the input end of Smart Augmentation, two random samples are fed into the network through two channels, and a CNN sub-network yields a one-channel output with the same shape as the input, which means one new image is generated based on the information of the two input images.
The augmented image is then used for downstream tasks, and the augmentation sub-network adjusts to the task performance accordingly.
§.§.§ Structure-based Image Combination
Methods that combine images based on the positions of pixels and on image substructure are called Image Patching. This refers to putting together small fragments of many images or pasting a part of one image over another.
Image Patching.
RICAP <cit.> randomly crops four images and patches them together to construct a new image. CutMix <cit.> makes use of two images. It is developed from Cutout <cit.>, but the removed region is replaced by a patch from another image. Attentive CutMix <cit.> further enhances CutMix by adding a feature extraction module that selects an important or representative region and pastes this attentive patch onto the other image. Image patching is not limited to rectangular shapes: CowMask <cit.>, for example, introduces patches with irregular shapes rather than rectangles.
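A CutMix-style operation can be sketched as follows (a simplification with our own parameter choices): a random patch of one image is pasted onto another, and the labels are mixed by the area ratio of the patch.

```python
import numpy as np

def cutmix(x1, y1, x2, y2, rng=None):
    """Paste a random rectangle of x2 onto x1; mix labels by area ratio."""
    rng = rng or np.random.default_rng()
    h, w = x1.shape[:2]
    lam = rng.uniform()                               # fraction of x1 to keep
    ph, pw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    top, left = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    out = x1.copy()
    out[top:top + ph, left:left + pw] = x2[top:top + ph, left:left + pw]
    lam = 1.0 - (ph * pw) / (h * w)                   # exact area ratio
    return out, lam * y1 + (1.0 - lam) * y2
```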
§.§ Populational Image Augmentation
§.§.§ Vanilla Image Generation
Generative Adversarial Networks.
The rise of Generative Adversarial Networks (GAN) <cit.> makes it possible for humans to generate new images on a large scale using computers. This nature also leads to a new path of data augmentation: creating new images instead of modifying or combining old ones.
GANs have now been widely used as a data augmentation technique <cit.>, especially in the medical field <cit.>, where data is relatively scarce. Apart from simply enlarging the dataset, GANs can be used in other ways, such as removing background noise and cleaning the dataset <cit.>. They can also generate different views or poses of an object based on the original image <cit.> and thus add more variation to the dataset.
§.§.§ Exogenous Image Generation
Neural Style Transfer introduces style information from an external dataset when augmenting an existing one. Computer Graphics Modelling leverages expert knowledge of what objects in the existing dataset look like, together with the power of computer software, to generate new image samples.
Neural Style Transfer.
Neural Style Transfer <cit.> learns the artistic style from a group of image data and uses this artistic style to depict the topic of images from another dataset.
As in <cit.>, Neural Style Transfer can be used to mimic the drawing styles of particular artists. Such an augmentation process can help the model learn a more general representation of the topics. <cit.> proposes a more realistic use of neural style transfer: it transfers the weather or illumination of reference images to the target dataset. This is particularly useful because it is hard to collect sufficient images under certain weather conditions.
Computer Graphics Modelling.
Computer Graphics Modelling refers to artificially creating images with computer software, and it can be used to produce datasets for model training <cit.>. These datasets have several advantages over ordinary image datasets. First, collecting the data requires little human effort. Second, by varying viewpoints, poses, and lighting conditions, computer graphics modelling can generate images with high diversity. Third, the images are all fully labelled with accurate ground truth.
§ DATA AUGMENTATION FOR TEXT DATA
The elemental components of text data are words, and sometimes the words are processed in the form of tokens for later convenience.
Words are organized into sentences following syntactic rules.
The sentence is a sequential representation since one word comes after another, but there is also a tree-structured grammatical dependency behind the sentence.
Data augmentation for text data often changes the wording or reshapes the structure of a sentence.
§.§ Individual Text Augmentation
§.§.§ Value-based Text Transformation
Text data augmentation of this type uses word-level manipulation. It changes the choice of words, or adds or removes some words.
Invariant Replacement.
Invariant Replacement refers to augmentations that use alternative wording in a given sentence while conveying the same meaning.
EDA <cit.>, for example, proposes a set of text augmentations, including synonym replacement. It randomly selects several words from a sentence, substituting them with their synonyms, i.e. words that mean exactly or nearly the same.
There are several ways to implement Synonym Replacement.
<cit.> uses WordNet <cit.>, a lexical database, to find synonyms for verbs and nouns in sentences and thus make replacements.
Apart from WordNet, <cit.> also derives word embeddings by Word2Vec <cit.> and replaces a word with others who have high cosine similarity.
Another possibility is to use a knowledge graph.
A knowledge graph describes the relationships of entities, and by utilizing the “is equivalent" relation, we can easily find a series of synonyms for a given word.
A special case of synonym replacement is discussed in <cit.>. Non-standard tokens are frequently used in people's daily lives, especially for text messages and social network websites, posing challenges to language processing tasks. <cit.> models the process of generating non-standard tokens from standard dictionary words by letter removal or substitution and proposes a letter transformation approach to handle non-standard tokens. Although used as a normalization method, this technique could also be applied in data augmentation tasks to transform non-standard tokens to standard form, or vice versa.
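The skeleton of synonym replacement is simple; the sketch below uses a toy in-line dictionary as a stand-in for WordNet, an embedding neighborhood, or a knowledge graph lookup.

```python
import random

# Toy synonym table; a real implementation would query WordNet, embeddings,
# or a knowledge graph instead.
SYNONYMS = {"quick": ["fast", "rapid"], "happy": ["glad", "joyful"]}

def synonym_replace(sentence, n=1, rng=random):
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if w in SYNONYMS]
    for i in rng.sample(candidates, min(n, len(candidates))):
        words[i] = rng.choice(SYNONYMS[words[i]])   # swap in a synonym
    return " ".join(words)

print(synonym_replace("the quick fox is happy", n=2))
```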
Token Addition.
Token Addition refers to adding extra tokens to the original text, either introducing junk tokens as noise or enriching the sentences and making them more informative.
The random insertion, proposed in EDA <cit.>, inserts a random synonym of a random word at a random position in the sentence.
The words added in EDA play the role of data noising and may carry little or even harmful information.
As a variant of EDA, the technique of AEDA <cit.> only inserts punctuation marks.
It only adds limited noise and avoids the potentially negative effect of random insertion.
INSNET <cit.> is an insertion-based text generation model that can sequentially insert new tokens into an incomplete sentence, and with consecutive insertion steps, the sentence is enriched and contains more information.
LeCA <cit.> introduces lexical constraints to improve the performance of neural machine translation.
In LeCA, a set of words is supposed to appear in the translation result.
These words are called constraints whose embeddings are appended to those of the original tokens.
The embeddings are then fed to the transformers' encoding layer, and these constraint words are very likely to be included in the output text.
A very similar work is <cit.>, which also uses lexical constraints.
The difference is that in LeCA the constraints are soft suggestions, rather than being forced to appear as in <cit.>.
Token Deletion.
Random deletion <cit.> randomly removes some words from a sentence. In <cit.>, noising techniques are used to avoid overfitting, and the randomly selected word is replaced by a blank placeholder token "_" rather than simply being removed. In <cit.>, apart from word dropout, which is similar to random deletion, word blanking is also used, replacing a word with a special "<BLANK>" placeholder.
On Word Selection.
Most methods randomly select the words to be replaced, added, or deleted, but some methods consider the role a word plays in the sentence when deciding whether to perturb it.
For example, TEXTFOOLER <cit.> ranks words by their importance and replaces the important words with their synonyms.
Syntax-aware data augmentation <cit.> argues that most previous text data augmentation approaches are only computation-motivated and cannot assure the quality of the generated text. Instead of modifying all words with the same probability, syntax-aware data augmentation calculates, for each word, the probability of being modified based on its syntactic role in the sentence. Specifically, it utilizes a dependency tree to calculate the importance of each word, i.e. its depth in the tree. A word far from the root is considered syntactically less important and is more likely to be removed (by means of dropout and blanking) or replaced during the augmentation process.
Table: Word selection strategies (Random, Role-based, or Learned) of text augmentation methods: EDA <cit.>, TEXTFOOLER <cit.>, LetterTran <cit.>, AEDA <cit.>, INSNET <cit.>, LeCA <cit.>, GBS <cit.>, STA <cit.>, and the approaches of <cit.> and <cit.>.
Instead of using a syntax tree to assess word importance, Selective Text Augmentation (STA) <cit.> focuses more on the statistical and semantic features of words to describe the role of words. It proposes two perspectives to view words in text classification tasks, namely, statistical correlation, which means how frequently a word co-occurs with a certain category, and semantic similarity, which measures the semantic overlap of the given words and the category label. Using the metrics as axes, STA divides the roles of words into four quadrants: gold, venture, bonus, and trivial. Words with different roles are treated differently when imposing augmentation operations on the sentence. For example, gold words will be kept in selective deletion and synonyms of gold words are added in selective insertion.
§.§.§ Structure-based Text Transformation
We categorize transformations that perturb sentence structure into two types: Sentence Cropping, which splits a sentence into many pieces, and Sentence Morphing, which swaps the locations of words or grammatical components.
Sentence Cropping.
Borrowing the idea of image cropping, which selects segments from the images, Sentence Cropping <cit.> divides sentences into several small pieces. It often uses dependency trees to extract the relationships between subjects and objects of the sentence and produces multiple smaller sentences that focus on different parts of the original sentence. This operation breaks the syntactic solidity and semantic comprehensiveness of the entire sentence but preserves local syntactic tags and shallow semantic labels.
Sentence Morphing.
These methods manipulate sentence structure by altering the position of words or terms and thus changing the syntax of the sentence.
Random swap <cit.> randomly chooses two words in a sentence and swaps their positions.
This perturbs the structure and adds noise to the text data.
<cit.> studies two types of syntactic augmentations: inversion and passivization.
Inversion swaps the subject and object in a sentence, and the semantics might be different in most cases.
Passivization converts a sentence to its passive form, and the meaning of the new sentence is typically consistent with the original one.
Sentence rotation <cit.> borrows the idea of image rotation, where the image is rotated around its center.
In sentence rotation, the root of the sentence is chosen as the center of the sentence, and other sentence fragments are connected to their center via the dependency tree.
For some languages, such an operation is semantic-preserving, while for other languages the sentence's meaning is subject to change.
The DRNNs model <cit.> also utilizes the dependency tree and subject-object knowledge for data augmentation.
It samples the shortest dependency path from a sentence's dependency parse tree and then extracts two sub-paths, i.e. the subject-predicate path and the object-predicate path.
Augmentation is done by changing the order of the two paths.
§.§.§ Text Value-structure Manipulation
This type of method perturbs both the value and the structure information of a sentence. Two typical methods are Back-translation Augmentation and Hierarchical Data Augmentation.
Back-translation Augmentation.
Back-translation <cit.> means translating text data from one language to another, or a sequence of other languages, and finally translating back to the original language. The semantic information is preserved during the translation process, but the process can introduce some variance in syntax and wording. Such transformation can help the model learn more flexible expressions that convey the same meaning.
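The procedure reduces to chaining a translator through pivot languages, as in the sketch below; `translate` is a hypothetical stand-in for any machine-translation model or service.

```python
def back_translate(sentence, translate, pivots=("de", "fr"), src="en"):
    """Translate through pivot languages and back to the source language."""
    text, prev = sentence, src
    for lang in list(pivots) + [src]:
        text = translate(text, source=prev, target=lang)
        prev = lang
    return text  # same meaning, likely different wording and syntax

# demo with an identity "translator"; a real MT model would go here
print(back_translate("data augmentation is useful", lambda t, source, target: t))
```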
Hierarchical Data Augmentation.
Hierarchical Data Augmentation (HDA) <cit.> augments text data at both word and sentence levels. It evaluates the roles of words and sentences by training a hierarchical attention network to obtain the attention values of the text data at both levels. The attention value is used to help decide the importance of components and augment the data by cropping and concatenating these components.
§.§ Multiple Text Augmentation
§.§.§ Value-based Text Mixture
Mixup <cit.> has been a widely used data augmentation approach especially used for image data.
However, unlike image data, whose color information is expressed by numerical values and can be interpolated at the input layer, words are hard to interpolate directly, so text data are usually interpolated at embedding layers.
Text Mixup.
The adaptation of Mixup in sentence classification is studied in <cit.>, which proposes wordMixup and senMixup.
WordMixup interpolates a pair of word embeddings and then feeds the interpolation to the sentence encoder.
SenMixup, on the other hand, interpolates a pair of sentence embeddings.
The embedding vector produced by WordMixup or SenMixup is then passed to the softmax layer to produce the target distribution of the classification task.
The use of mixup for text data augmentation has also been studied extensively in works such as nonlinear Mixup <cit.> and DoubleMix <cit.>.
A special example is GPT3Mix <cit.>, which makes use of the latest large language model to mix sentences.
By inserting two source sentences in a prompt template that describes the instruction to the model, we can get a mixed sentence of high quality with a soft label.
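Since words are discrete, the interpolation happens on embedding vectors; below is a senMixup-style sketch (names and defaults are our own) that mixes two sentence embeddings and their label vectors before the softmax layer.

```python
import numpy as np

def sen_mixup(emb1, y1, emb2, y2, lam=None, rng=None):
    """Interpolate two fixed-size sentence embeddings and their labels."""
    rng = rng or np.random.default_rng()
    lam = rng.uniform() if lam is None else lam
    return lam * emb1 + (1 - lam) * emb2, lam * y1 + (1 - lam) * y2
```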
§.§.§ Structure-based Text Combination
Some methods combine text data by merging text fragments from one sentence into another.
Text Fragment Merging.
Substructure Substitution <cit.> and Subtree Swapping <cit.> both use tree structures to analyze parts of speech or roles in the sentence and thus make reasonable swaps. A simpler method, GECA <cit.>, removes a fragment from a sentence and fills the gap with another fragment that occurs in some common environment.
§.§ Populational Text Augmentation
§.§.§ Vanilla Text Generation
We have discussed how augmented text data samples can be produced based on existing text materials.
However, modifying existing data is not the only solution to data augmentation.
By leveraging various generative models, we can easily acquire high-quality text data on a large scale, and such abundance in data availability could greatly improve the performance of language models especially in terms of their generalization and robustness.
Data augmentation by text generation previously relied on Traditional Models, while Large Language Models have now become the dominant technique.
Traditional Models.
The use in data augmentation of some early generative language models, such as seq2seq <cit.>, VAE <cit.>, and CVAE <cit.> has been investigated in <cit.>, which shows improvement in the accuracy of an extrinsic low resource classification task.
Large Language Models.
Recent advances in large language models (LLM) and generative pre-training (GPT) <cit.> models have demonstrated unprecedented power in text understanding, classification, and generation tasks. Apart from supporting intelligent chat-bots that help people in their daily lives, the capacity of these state-of-the-art language generation models establishes a new paradigm of text data augmentation.
LAMBADA <cit.> is among the first attempts to use a state-of-the-art language generator to synthesize new text data for data augmentation, and its procedure is as follows. LAMBADA first trains a baseline classifier from a small dataset, and then it fine-tunes the GPT-2 <cit.> model. A set of labels is thereafter fed to the fine-tuned language model, which generates a large dataset of labeled sentences. These sentences are evaluated by the classifier trained prior according to its confidence score, and top-ranked sentences are retained to compose the synthesized dataset.
Similarly, <cit.> integrates GPT-2 in its text data augmentation process and improves the performance of active learning on tiny datasets. It uses the language model to generate a tree of tokens as candidates and construct a sentence from them. The set of new sentences is repeatedly refined by active learning, and the outcome is an augmented text dataset.
Some other research also applies GPT-2 to effectively augment low-resource text datasets, such as <cit.> and <cit.>. Despite their success, both of them point out the need for more sophisticated methods to generate text data. <cit.> for example, argues that human-in-the-loop for relabeling might be rewarding. <cit.> adopts human-in-the-loop for seed selection and further discusses the potential benefits of injecting expert knowledge for specialized domains. Notably, Data Boost <cit.> proposes a reinforcement learning framework to guide the process of text data augmentation with GPT-2 and succeeds in generating data of high quality and high similarity to the original data.
The rise of GPT-3 <cit.> and later LLMs is now providing more possibilities for text generative augmentation. Their use and improvements are studied in the latest research such as <cit.> and <cit.>.
§ DATA AUGMENTATION FOR GRAPH DATA
A graph is represented by 𝐆 = {𝒱, ℰ}, where 𝒱 stands for the vertex set, and ℰ stands for the edge set.
The topology structure can be alternatively represented by an adjacency matrix 𝐀∈ℝ^|𝒱|×|𝒱|, where 𝐀_ij is 1 if there is an edge connecting vertex i and vertex j, and is 0 otherwise.
In a deep graph learning scenario, the graph also contains a feature matrix 𝐗∈ℝ^|𝒱| × d that represents the attributes of the nodes.
Hence another representation for a graph is 𝐆 = {𝐀, 𝐗}.
Graph data augmentation perturbs either the values in the node feature matrix 𝐗 or the topology structure represented by the adjacency matrix 𝐀.
§.§ Individual Graph Augmentation
§.§.§ Value-based Graph Transformation
Graph nodes often contain rich attributes in the feature matrix. This matrix is generally similar to tabular data, so augmenting node attributes does not need to consider the specialty of graph data.
Node Attributes Augmentation.
Various typical augmentation approaches on node attributes from random masking <cit.> to perturbing attributes with random noises <cit.> can be applied, as we do for common tabular data.
These methods are unified in Eq. <ref>, where 𝐌 ∈ {0,1}^|𝒱| × d is a binary mask with the same shape as 𝐗, ⊙ denotes the Hadamard product, and 𝐍_W ∼ 𝒩(0, Σ) is Gaussian white noise:

𝐗' = 𝐗 ⊙ 𝐌 + 𝐍_W
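A direct numpy rendering of this equation could look as follows; the masking rate and noise scale are illustrative choices.

```python
import numpy as np

def augment_node_features(X, mask_rate=0.1, noise_std=0.01, rng=None):
    """X' = X ⊙ M + N_W: random masking plus Gaussian white noise."""
    rng = rng or np.random.default_rng()
    M = (rng.uniform(size=X.shape) > mask_rate).astype(X.dtype)  # binary mask
    return X * M + rng.normal(0.0, noise_std, size=X.shape)
```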
Some other methods are more sophisticated. For example, <cit.> first learns the topology information of the graph by random walk, and uses the topology embedding together with the original features as inputs to a dual graph neural network. Similarly, LA-GNN <cit.> applies a local augmentation mechanism. It samples features from nodes' neighborhoods as additional information when augmenting the attributes. A different approach is proposed in DropMessage <cit.>. Instead of directly modifying the attributes on the nodes, it applies random dropping in the message-passing phase.
§.§.§ Structure-based Graph Transformation
Graph topology structure has immense information representation capacity.
Many augmentation methods designed for graph data focus on modifying the topology structure.
Naive Topology Perturbation methods modify a single node or edge component or consider a set of nodes as a whole and modify the substructure.
There is also Subgraphing that extracts subgraphs from the complete graph.
Graph Diffusion, Graph Rewiring and Graph Structure Learning are more sophisticated techniques that focus on adjusting the edges to obtain a better graph.
Naive Topology Perturbation.
Due to its straightforward idea and easy implementation, node dropping and addition are among the first applied augmentation approaches in graph learning tasks.
As given in Eq. <ref>, node dropping <cit.> randomly discards some nodes from the graph. Such an operation also removes the attributes that these nodes carry and the edges incident to them:

𝒱' = Drop(𝒱),    𝐆' = {𝐀', 𝐗'} = {𝐀[𝒱', 𝒱'], 𝐗[𝒱', :]}
In some special cases where node dropping is adopted, the edges on the removed nodes are not simply dropped. For example, in MoCL <cit.>, when dropping a general carbon, the two sides of the carbon atom are reconnected to ensure the completeness of the molecule. So, in MoCL, the node is "skipped" rather than removed.
As for node addition, an example is MPNN <cit.>. It introduces a latent master node that is connected to all other nodes, which helps messages pass through the graph network.
An edge describes the relationship between two nodes, and by perturbing the edges, we also introduce diversity to the structural information of a graph.
DropEdge <cit.> perturbs the edge set by randomly setting a portion of the adjacency matrix to zero. It first produces a sparse matrix from a random subset of the original edges, then derives the augmented edge set by subtracting the sparse matrix from the original adjacency matrix.
Edge masking in <cit.> constructs a Bernoulli distribution, and randomly masks edges under that probability.
FairDrop <cit.> computes a mask according to the network's homophily and uses this mask to guide the edge-dropping process so that unfair connections are removed.
AD-GCL <cit.> applies a GNN augmenter to learn how to drop edges so that the mutual information between the augmented graph and the original graph is minimized.
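Both flavors of naive topology perturbation fit in a few lines; the sketch below (our own simplification, assuming an undirected graph without self-loops, represented by a dense adjacency matrix) implements uniform node dropping and DropEdge-style edge dropping.

```python
import numpy as np

def drop_nodes(A, X, keep_rate=0.9, rng=None):
    """Remove random nodes together with their attributes and incident edges."""
    rng = rng or np.random.default_rng()
    keep = np.flatnonzero(rng.uniform(size=A.shape[0]) < keep_rate)
    return A[np.ix_(keep, keep)], X[keep]

def drop_edges(A, drop_rate=0.1, rng=None):
    """Zero out a random subset of edges, keeping the matrix symmetric."""
    rng = rng or np.random.default_rng()
    keep = np.triu(rng.uniform(size=A.shape) >= drop_rate, 1)  # upper triangle
    return A * (keep | keep.T)
```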
Graph substructure perturbation is beneficial for molecular graphs.
A molecule graph represents atoms with nodes and chemical bonds with edges.
Functional groups, which often decide the chemical characteristics of molecules, can be represented by substructures.
So, by wisely sampling substructures, the global semantics of a molecule graph can be well preserved.
MoCL <cit.>, for example, utilizes substructure information in one of its augmentation schemes.
It performs graph augmentation by replacing functional groups, effectively introducing reasonable noise to the graph.
GEAR <cit.> also employs this technique. It augments molecule graphs by so-called environment replacement, which introduces noise from environment subgraphs.
Subgraphing.
Subgraphing is based on the idea that graph semantics are partially represented in local structures <cit.>, and some subgraphing methods, such as those used in GCC <cit.> and SUGAR <cit.> use random walk to sample a subgraph from the original graph.
GraphCrop <cit.> applies a node-centric strategy to crop connectivity-preserved subgraphs and expands the dataset with these augmented subgraphs. For each subgraph, the rest of the graph is removed, but compared with edge-dropping and node-dropping methods, GraphCrop better maintains the topological characteristics of the original graph. Similarly, SUBG-CON <cit.> uses the Personalized PageRank algorithm to select the important nodes.
NeuralSparse <cit.> proposes to obtain the subgraph by learning. It uses a sparsification network to generate sparsified subgraphs.
Graph Diffusion.
Traditional message passing in graph neural networks only leverages one-hop connections, while the edges of a real graph are often noisy or arbitrarily defined, and many relationships in the network are only indirectly contained in higher-order neighborhoods. To address this issue, GDC <cit.> proposes to calculate a series of transition matrices 𝐓^k, by means such as random walks, and to combine them with weights θ_k to obtain a new adjacency matrix:

𝐀' = ∑_k=0^∞ θ_k 𝐓^k

The resulting graph directly contains node connections that are hidden in the original graph. The new graph can also be understood as the output of a denoising filter, because graph diffusion smooths out the neighborhood over the graph <cit.>.
This technique is widely used by self-supervised approaches to transform the graph and generate different views for contrastive learning, such as MVGRL <cit.>, SelfGNN <cit.>, and MERIT <cit.>.
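The personalized-PageRank instance of this idea, where θ_k = α(1-α)^k, can be sketched with a truncated series (our own dense, toy implementation; real systems sparsify the result, and the sketch assumes no isolated nodes).

```python
import numpy as np

def ppr_diffusion(A, alpha=0.15, k_max=32):
    """Approximate Σ_k θ_k T^k with PPR weights θ_k = α(1-α)^k."""
    T = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
    S = np.zeros(A.shape, dtype=float)
    Tk = np.eye(A.shape[0])                # T^0
    for k in range(k_max):                 # truncate the infinite series
        S += alpha * (1 - alpha) ** k * Tk
        Tk = T @ Tk
    return S                               # dense diffusion matrix
```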
Graph Rewiring.
The topology structure has a significant influence on the performance of GNN models. Message passing plays an indispensable role in GNNs, but this process suffers from the over-squashing problem when information spreads over long distances in a graph with bottlenecks <cit.>. Also, the over-smoothing problem leads to performance degeneration of GNNs on heterophilous graphs <cit.>. Moreover, the graph's robustness to perturbations poses a stability problem for GNN models <cit.>. To address these issues, a series of methods referred to as Graph Rewiring has been proposed, which adjusts the structure of graphs to help models learn better graph representations and yield better inference in downstream tasks <cit.>.
Graph rewiring, as presented in Eq. <ref>, is initially proposed as randomly swapping edges <cit.> to improve a network's robustness and mitigate malicious attacks.
Some later research, such as smart rewiring <cit.>, takes into account the degrees of the chosen nodes when swapping their edges.
Stochastic Discrete Ricci Flow <cit.> is a curvature-based graph rewiring method, which in each step adds an edge that can improve the most negatively curved edge, and then removes the edge with the most positive curvature.
Its results demonstrate the ability of graph rewiring to alleviate the over-squashing problem.
DiffWire <cit.> introduces the CT-LAYER and GAP-LAYER in a GNN model, which respectively learns the commute times and spectral gap of the graph.
It rewires the adjacency graph so that the graph's spectral gap is minimized, and by doing so tries to predict the optimal graph structure for the downstream task.
DHGR <cit.> first learns a similarity matrix from the original graph.
Then it rewires the graph by adding edges between high-similarity node pairs and removing edges from low-similarity ones.
Such an operation can effectively improve the homophily of the graph and thus improve GNN performance.
HDHGR <cit.> is a method that follows DHGR <cit.>.
It computes multiple similarity matrices under different meta-paths and thus generates several rewired meta-path subgraphs.
𝐀' = 𝐀 ⊙ (1 - 𝟏_r) + (1 - 𝐀) ⊙ 𝟏_r

where 𝟏_r is a binary rewiring mask: the first term performs edge removal and the second term edge addition.
Graph Structure Learning.
Graph Neural Networks are subject to noisy, wrong, or incomplete graph structures.
Similar to graph rewiring, graph structure learning (GSL) also aims at simultaneously optimizing the graph structure and its corresponding representations <cit.> to deal with the above-mentioned problems; however, GSL is often implemented in a more complicated manner and discussed in a broader context than graph rewiring.
GSL applies various techniques to learn and model graph structures, and it is often depicted with a pipeline that includes iterative learning and updating <cit.>.
Also, unlike graph rewiring, which only rewires a graph, GSL can be used to learn and construct a graph structure from non-graph datasets <cit.>, so that many models specific to graph datasets, e.g. GNNs can be trained on these data. Based on how the graph structure is modeled, GSL can be categorized into three classes: metric-based, neural, and direct modeling <cit.>.
Metric-based modeling evaluates the similarity between node pairs through some kernel functions, and in the new graph, the weights of the edges are decided by the similarities of the node pairs. Many early GSL methods fall into this group.
For example, IDGL <cit.> and HGSL <cit.> both use cosine function for similarity learning to evaluate the optimal adjacency of nodes.
Neural approaches use neural networks for both graph learning and graph convolution tasks, as presented in the work of GLCN <cit.>, which introduces a simple single-layer neural network to learn a new graph from the original node attributes before the graph data is learned by the graph convolution module. PTDNet <cit.> focuses on dropping noisy, task-irrelevant edges from the graph structure. VIB-GSL <cit.> learns graph structure from an information theory perspective. SUBLIME <cit.> proposes an unsupervised learning paradigm for GSL that benefits more types of downstream task.
Direct GSL approaches do not involve estimating edge weights from the vertex or generating a graph by training neural networks. Pro-GNN <cit.> for example, directly updates the learned adjacency matrix itself during the training process. To avoid sub-optimal results, it jointly learns the graph structure and downstream GNN model. The learned graph and model parameters are updated sequentially in each training epoch. LDS-GNN <cit.> treats graph learning as a probabilistic distribution problem. When learning from the objectives, it updates the parameter of a Bernoulli distribution, and samples the learned graph from this distribution.
§.§ Multiple Graph Augmentation
§.§.§ Value-based Graph Mixture
Feature or label information from different nodes can be mixed by Graph Propagation, which is often used for tasks such as node classification.
Graph Propagation.
Propagation refers to spreading information from one node to other nodes along the graph. This type of method can be divided into feature propagation and label propagation.
Feature propagation shares node features with other nodes.
After feature vectors of some nodes are masked out, GRAND <cit.> further augments the node features by randomly propagating them through edges.
MV-GCN <cit.> first creates three different views from the original graph and propagates the features of the three views respectively before their representations are combined.
Label propagation algorithm (LPA) <cit.> propagates labels from labeled nodes to all unlabeled nodes according to their proximity through areas with dense unlabeled nodes.
AutoGRL <cit.> applies LPA in their semi-supervised framework to label more nodes utilizing a few ground truth nodes.
<cit.> performs two types of label propagation, correcting the base predictions and smoothing the final prediction, respectively.
Though not in a typical graph-based scenario, <cit.> constructs a nearest neighbor graph on the classifier's manifolds. The affinity matrix, which is the analogy to the adjacency matrix in this case, is constructed based on the k nearest neighbors of each node's embeddings, and this matrix decides how labels propagate.
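A minimal sketch of classical label propagation on a dense adjacency matrix; labels are integers with -1 marking unlabeled nodes (the function name and defaults are ours):

import numpy as np

def label_propagation(A, y, num_classes, num_iters=50):
    # Spread one-hot labels of labeled nodes (y[i] >= 0) to unlabeled
    # nodes (y[i] == -1) along the edges of A, clamping known labels.
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.clip(deg, 1e-12, None)   # row-normalized transition matrix
    F = np.zeros((n, num_classes))
    labeled = y >= 0
    F[labeled, y[labeled]] = 1.0
    for _ in range(num_iters):
        F = P @ F                       # one propagation step
        F[labeled] = 0.0
        F[labeled, y[labeled]] = 1.0    # re-clamp the ground-truth labels
    return F.argmax(axis=1)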
§.§.§ Graph Value-structure Mixture
Graph Mixup refers to mixing graph nodes or graph instances. This often involves both node attribute values and the topology structure information.
Graph Mixup.
The technique of mixup has been extensively applied in various augmentation approaches for image and text tasks.
By interpolating feature vectors and labels, mixup expands the feature space of training data and effectively improves the model's robustness and generalization ability.
However, the application of mixup in graph data augmentation is challenged by the special structure of graph datasets.
Due to the irregular connectivity and various topological structures of graph data, it is hard to directly combine nodes or graphs, as we do when combining two images.
For node-level mixup, one solution to this problem is temporarily ignoring the edges and mixing node features in latent space.
For example, GraphMix <cit.> interpolates the hidden states and labels of input nodes to train a better FCN layer, whose parameters are shared with the GNN layer.
A topology-involved node mixup method is proposed in <cit.>.
It randomly pairs nodes and includes the nodes' local topology representations when interpolating them.
Although topology information is involved in <cit.>, it does not mention how edges are constructed for the new node.
S-Mixup (Structural Mixup) <cit.> also considers neighboring edges when pairing nodes but works in the input space and discusses how to connect the node with the original graph with edges.
It matches nodes based on their classification confidence of a GNN classifier and selects edges with high gradients to connect with the mixed node.
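A minimal sketch of node-level mixup in the latent or input feature space, in the spirit of GraphMix: edges are ignored and randomly paired node features and one-hot labels are interpolated (names and defaults are ours):

import numpy as np

def node_mixup(X, Y, alpha=1.0, rng=np.random):
    # Interpolate randomly paired node features X and one-hot labels Y;
    # the graph's edges are left untouched.
    lam = rng.beta(alpha, alpha)          # mixing ratio from Beta(alpha, alpha)
    perm = rng.permutation(X.shape[0])    # random node pairing
    X_mix = lam * X + (1 - lam) * X[perm]
    Y_mix = lam * Y + (1 - lam) * Y[perm]
    return X_mix, Y_mix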
<cit.> also proposes a method for graph-level mixup.
It uses GNNs to embed the complex and irregular graph structures and conduct mixup in the embedding space.
However, this method cannot mix graphs in the input layer.
To directly mix two inputs at the graph level, ifMixup <cit.> adds dummy nodes so that two graphs have the same number of nodes.
These dummy nodes have all-zero feature vectors and are not connected to any other nodes, or in other words, the weights of edges on dummy nodes are zeros.
When mixing the graph, the interpolation of a normal, one-hot labeled edge and a zero-weighted edge yields an edge with a soft label.
Similarly, S-Mixup (Graph Mixup with Soft Alignments) <cit.> also aligns graphs first before mixing them.
It obtains an assignment matrix by a soft assignment and uses the assignment matrix to transform a graph so that every node corresponds with a node in the other graph.
Another approach that mixes graph-structured datasets at the graph level and input space is Graph Transplant <cit.>.
It extracts a salient subgraph from the source graph and a random subgraph from the destination graph.
The augmented graph is produced by transplanting the source subgraph to the destination subgraph and adaptively interpolating the labels of the original graphs.
Compared with the above-mentioned approaches, G-Mixup <cit.> proposes a more sophisticated way to interpolate graph data. It first uses graphs within the same class to estimate a graphon, i.e. graph generator:
𝒢→ W_𝒢,
ℋ→ W_ℋ
The graphon W: [0,1]^2 → [0,1] is a matrix, where W(i,j) describes the probability of an edge between nodes i and j, and graphs of the same class are considered to be generated from the same graphon.
Graphons of two classes of graphs are then interpolated to produce a mixed graphon:
W_ℐ = λ W_𝒢 + (1-λ) W_ℋ
By sampling from the graphon mixup W_ℐ, we acquire a series of graphs with the same soft labels:
𝐲_ℐ = λ 𝐲_𝒢 + (1-λ) 𝐲_ℋ
At the graphon estimation phase, G-Mixup aligns the original node features and obtains the graphon node features by average pooling, which are used as node features of the generated graphs.
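The sketch below illustrates this pipeline under strong simplifications: graphons are estimated by degree-sorted averaging of resized adjacency matrices (G-Mixup uses more careful estimators), mixed, and then sampled from; all names are ours.

import numpy as np

def estimate_graphon(graphs, resolution=20):
    # Crude step-function graphon estimate for one class: sort nodes by
    # degree, resample each adjacency matrix to a common resolution, average.
    W = np.zeros((resolution, resolution))
    for A in graphs:
        order = np.argsort(-A.sum(axis=1))           # align nodes by degree
        A = A[np.ix_(order, order)]
        idx = np.linspace(0, A.shape[0] - 1, resolution).astype(int)
        W += A[np.ix_(idx, idx)]
    return W / len(graphs)

def gmixup_sample(W_G, W_H, lam, n_nodes, rng=np.random):
    # Mix the two class graphons and sample one synthetic graph; its soft
    # label is lam * y_G + (1 - lam) * y_H.
    W = lam * W_G + (1 - lam) * W_H
    u = rng.randint(0, W.shape[0], size=n_nodes)     # latent position per node
    P = W[np.ix_(u, u)]
    A = np.triu((rng.rand(n_nodes, n_nodes) < P).astype(float), 1)
    return A + A.T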
DAGAD <cit.> mixes graph representations by performing random permutations and it shows that data augmentation is very helpful in scenarios such as graph anomaly detection.
§.§ Populational Graph Augmentation
§.§.§ Vanilla Graph Generation
Although Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can both be used to generate graph data, they face many problems in practice.
VAE models require expensive steps such as a massive graph-matching process, and GANs on graphs easily fall into mode collapse <cit.>.
So, graph data augmentation resorts to another recently proposed technique, namely, generative diffusion models.
Generative Diffusion Models.
The denoising diffusion model <cit.> is a generative paradigm that has become popular in recent years. Though not yet widely applied as an approach to graph data augmentation, the Generative Diffusion Model boasts enormous potential in graph generation and can possibly be adopted as a promising data augmentation technique. To implement diffusion models on graph data, some properties of the graph data must be properly treated. For example, diffusion models often rely on a continuous Gaussian noise process, but the graph structures are discrete <cit.>. DiGress <cit.> designs a discrete denoising diffusion model for graph generation and achieves state-of-the-art performance on molecular and non-molecular datasets. The use of Generative Diffusion Models for data augmentation especially benefits studies on molecules, proteins, and materials, whose data are often in the form of graph instances.
§ DATA AUGMENTATION FOR TABULAR DATA
A table is made up of cells, which store numerical or categorical values that represent attributes of the described entities. Cells are organized into columns and rows, but this structural information is trivial, and switching the sequence of the columns and rows does not affect the information conveyed by the table. So, most data augmentation methods leverage the value information.
§.§ Individual Tabular Augmentation
§.§.§ Value-based Table Transformation
Tabular data is much simpler than other modalities, so it is hard to design complex augmentation methods that exploit spatial or sequential relationships within a data sample as we do for image or text data; most data augmentation methods for tabular data therefore perturb the values stored in the table cells.
Table Masking is a simple technique often applied for tabular data augmentation.
These corrupted data are often used with the original data under a self-supervised learning framework to learn stronger representations of the dataset.
Another approach to tabular data augmentation borrows ideas from Feature Engineering.
Some works on feature engineering design automatic feature selection and transformation frameworks to generate an optimal feature space, and these operations can also be used to augment tabular datasets.
Table Masking.
VIME <cit.> uses a mask generator to corrupt features. In a self-supervised scenario, two pretext tasks respectively reconstruct the original data and estimate the mask vector. The model updates its weights based on the reconstruction loss and the mask vector estimation loss of the two pretext tasks. VIME also considers a semi-supervised scenario. In that case, multiple masks are generated and applied to the feature of the unlabeled samples. Despite not being labeled, the model predictions on these corrupted samples should be similar, and thus a consistency loss is calculated. For labeled samples, a supervised loss is calculated based on the prediction. The two losses jointly contribute to the back-propagation phase.
SCARF <cit.> applies data augmentation in a contrastive learning framework. Its corruption process is similar to that of VIME but it uses InfoNCE loss to evaluate the similarity between the embeddings of the corrupted and original views and then update the model accordingly.
MET <cit.> works in a pattern similar to VIME and SCARF, which calculates the reconstruction loss of a partially masked sample.
MTR <cit.> randomly replaces part of the tokenized embeddings, instead of working directly in the input space.
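As an illustration of table masking, the sketch below implements a SCARF-style corruption in which masked cells are replaced by draws from each feature's empirical marginal; the mask is also returned, since VIME-style pretext tasks try to estimate it (a simplified reading of these methods; names are ours).

import numpy as np

def scarf_corrupt(X, corruption_rate=0.6, rng=np.random):
    # For each sample, replace a random subset of features with values
    # drawn from that feature's empirical marginal distribution.
    n, d = X.shape
    mask = rng.rand(n, d) < corruption_rate       # cells to corrupt
    donor_rows = rng.randint(0, n, size=(n, d))   # independent donor per cell
    X_marginal = X[donor_rows, np.arange(d)]      # feature-wise resampling
    return np.where(mask, X_marginal, X), mask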
Feature Engineering.
<cit.> continuously generates new features and records them in a transformation graph, using reinforcement learning to select the feature set with the best cross-validation performance.
Some methods formulate the feature selection problem into a multi-agent reinforcement learning task <cit.>. They assign each feature to an agent, and the agents learn from the reward whether to select or unselect the features.
However, the use of multi-agent reinforcement learning increases the computational burden and hardware cost, and some single-agent frameworks are further proposed for this problem <cit.>. In these methods, only one agent is employed to traverse the whole feature set using the Monte Carlo algorithm and early stopping.
Compared to the above-mentioned automatic feature selection methods, RAFT <cit.> and GRFG <cit.> emphasize feature transformation and generation. Unlike latent representation learning methods, they use arithmetic operations to generate new features, providing better traceability and explainability.
GAINS <cit.> and MOAT <cit.> extend feature selection and transformation to continuous space. In GAINS <cit.>, the selected feature subset is encoded into a continuous embedding space. It searches for better embeddings in the learned space and uses the decoder to generate an optimal selection of the dataset. MOAT <cit.> adapts this process to a feature transformation scenario, where features go through arithmetic operations transformation before the continuous embedding and searching stage, and the decoded features are also processed by a series of transformations.
TFWT <cit.> computes weights for samples and features, and the weights are combined with the data for later tasks.
Although these studies focus on feature engineering, their way of thinking and application of reinforcement learning is very enlightening for data augmentation tasks.
§.§.§ Structure-based Table Transformation
Table Subsetting.
Inspired by image cropping <cit.>, SubTab <cit.> divides a table into multiple subsets by splitting its columns. Each subset contains only a percentage of all features. The model is forced to learn the representation of the samples by these features and is required to reconstruct all the features from the subset.
§.§ Multiple Tabular Augmentation
§.§.§ Value-based Tabular Mixture
Mixup is the technique of interpolating samples to produce new data. Tabular data, due to its well-aligned structure, is suitable for this augmentation approach.
Table Mixup.
SMOTE <cit.> is among the first works to study data interpolation, even before the proposal of Mixup <cit.>. It addresses the problem of class imbalance by oversampling from the minority class. The samples are selected within the k nearest neighbors in the feature space and new synthetic data are produced by a random interpolation operation. Since tabular features are often nominal, two variants, namely SMOTE-NC and SMOTE-N, are also proposed along with the initial version of SMOTE <cit.>. As argued in <cit.>, SMOTE is subject to unrealistic samples when interpolating in a sparse feature space. To solve this problem, TAEI <cit.> first encodes the sparse data into a dense latent space, interpolates the new data, and then decodes them to obtain the original data plus the augmented data. Contrastive Mixup <cit.> applies mixup for tabular data in a self- and semi-supervised learning framework. SAINT <cit.> combines CutMix <cit.> with contrastive learning to learn from tabular data. ExcelFormer <cit.> proposes two schemes, namely Feat-Mix and Hidden-Mix. They interpolate data in the input layer and the hidden layer, respectively.
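A minimal SMOTE sketch with brute-force nearest neighbors on a dense minority-class feature matrix (the original also handles nominal features via the SMOTE-NC/SMOTE-N variants, which this omits; names are ours):

import numpy as np

def smote(X_minority, n_new, k=5, rng=np.random):
    # For each synthetic sample, pick a minority point and interpolate a
    # random fraction of the way toward one of its k nearest minority neighbors.
    n = X_minority.shape[0]
    diff = X_minority[:, None, :] - X_minority[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                 # exclude self-matches
    neighbors = np.argsort(dist, axis=1)[:, :k]    # k nearest per point
    samples = []
    for _ in range(n_new):
        i = rng.randint(n)
        j = neighbors[i, rng.randint(k)]
        gap = rng.rand()                           # interpolation ratio in [0, 1)
        samples.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.vstack(samples)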
§.§.§ Struture-based Tabular Combination
Contrary to Individual Tabular Augmentation, which lacks structural information to use, Multiple Tabular Augmentation suffers from strict and fixed table structure. Columns differ in their nomenclatures and representations, so it is hard to combine and learn from multiple distinct tables. If these restrictions are relaxed, models can learn across different tables, be more transferable, and better predict unseen tables.
Structure Relaxation.
TransTab <cit.> suggests that multiple tables could share partially overlapped columns in the real world, but traditional methods often fail to fully utilize the data due to the removal of non-overlapping columns and mismatched samples when cleaning the data. TransTab first divides the table into three parts, each containing categorical, binary, and numerical data. These three types of data undergo different operations, are all converted to embeddings, and are again integrated into a new and encoded table. The processed tables can be joined in many ways, depending on the application scenarios. The augmented table contains information from many original tables and helps the model learn across them.
§.§ Populational Tabular Augmentation
§.§.§ Vanilla Tabular Generation
An Autoencoder (AE) learns from the dataset and extracts the essential structure, which can be used to generate a more robust dataset.
Generative Adversarial Network (GAN) is another powerful method that can produce more augmented data and is especially useful for oversampling and alleviating the problem of class imbalance.
Autoencoder.
<cit.> tries out four types of autoencoder, concluding that all of these autoencoders can generate data sets that produce better task performance, while the variational autoencoder (VAE) provides the most robust result. Some other works <cit.> also use VAE as their data generation approach. SDAT <cit.> adds extra noise to the latent space of its VAE to obtain augmented samples. These samples, along with the original ones, are then trained under a semi-supervised framework. <cit.> combines VAE with active learning. The samples are iteratively selected by active learning, labeled by an oracle, reconstructed by the VAE, and discriminated and classified.
Generative Adversarial Networks.
Table-GAN <cit.> mainly considers the privacy issue, and experiments show that the augmented table given by GAN is compatible with downstream machine learning models and produces similar performance. CTGAN <cit.> focuses on the class imbalance problem and proposes a conditional generator that accounts for the imbalance in categorical columns. <cit.> shares a similar goal and method with CTGAN <cit.> but employs a more sophisticated loss function that better incorporates the discriminator and the auxiliary classifier. ITS-GAN <cit.> proposes a framework that aims to preserve functional dependencies in tables. It models functional dependencies with autoencoders and includes a term that characterizes the set of functional dependency constraints in the generator loss function. Arguing that GANs are difficult to train on small datasets, ConvGeN <cit.> designs a mechanism in which the generator and discriminator cooperate: the generator produces batches of convex combinations of minority neighborhood samples, and the discriminator classifies these augmented minority samples against majority batches.
§.§.§ Exogenous Tabular Generation
The potential of tabular data is limited by its simple nature compared with data from other modalities. Due to the lack of spatial or relational relationships, many popular augmentation approaches and network configurations are not suitable for tabular datasets. A solution is to construct and introduce relational structure based on the tabular dataset.
Relational Structure Construction.
GOGGLE <cit.> proposes a solution to this problem. It learns the relational structure and corresponding functional relationships from tabular datasets and uses this basis to generate new samples. In other words, it learns a graph from the features in the table and obtains augmented data from that graph. PET <cit.> depicts multiple tabular data instances as a hypergraph and uses the labels to help construct hyperedges. These methods are based on external assumptions and introduce extra structural information beyond the tabular dataset.
§ DATA AUGMENTATION FOR TIME-SERIES DATA
Time-series data represent sequential values within a period, and each timestamp is considered as an elemental component. Data augmentation methods for time-series data either perturb the value at each timestamp or perturb the sequence of timestamps.
§.§ Individual Time-series Augmentation
§.§.§ Value-based Time-series Transformation
A type of simple and straightforward value-based method is Value Perturbation, which changes the value represented by a sequence. There is also another type of method called Decomposition, which decomposes the value of a sequence into different parts that express the sequential values from different aspects.
Value Perturbation.
Value Perturbation augments time-series data by altering the value on the Y-axis. It often perturbs the amplitude at a certain timestamp or in a certain frequency channel.
One of the most simple but popular augmentation methods is to add random noise to the value at each timestamp. For time-series data, this is referred to as jittering. In <cit.>, jittering is used to simulate the additive noise ϵ of wearable sensors:
x(ϵ) = {x_1+ϵ_1, ⋯, x_t+ϵ_t, ⋯, x_T+ϵ_T}
Scaling <cit.> alters the data by multiplying the magnitude of the entire sequence with a scalar α.
x(α) = {α x_1, ⋯, α x_t, ⋯, α x_T}
Magnitude warping <cit.> is similar to scaling, but it uses a smooth curve to warp the sequence so that magnitude at different timestamps is scaled with different parameters.
x(α) = {α_1 x_1, ⋯, α_t x_t, ⋯, α_T x_T}
Rotation <cit.>, or flipping, assumes the data is symmetric with respect to the X-axis, and makes the data upside down by applying a rotation matrix R.
x(R) = {R x_1, ⋯, R x_t, ⋯, R x_T}
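These perturbations are straightforward to implement; the following is a minimal numpy sketch of jittering, scaling, and magnitude warping (with piecewise-linear knot interpolation standing in for the smooth warping curves of the cited work; names and defaults are ours):

import numpy as np

def jitter(x, sigma=0.03, rng=np.random):
    # x_t + eps_t: additive Gaussian noise at every timestamp.
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=np.random):
    # alpha * x: multiply the whole sequence by one random scalar.
    return x * rng.normal(1.0, sigma)

def magnitude_warp(x, sigma=0.2, n_knots=4, rng=np.random):
    # alpha_t * x_t: scale each timestamp by a slowly varying random curve.
    T = len(x)
    knots = rng.normal(1.0, sigma, size=n_knots)
    alpha = np.interp(np.arange(T), np.linspace(0, T - 1, n_knots), knots)
    return x * alpha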
Apart from the amplitude, the phase spectrum provides another option for data augmentation. RobustTAD <cit.> converts the time-series data from the time domain to the frequency domain, as shown in Eq. <ref>, and then increases all phases θ(ω_k) by a small perturbation.
F(ω_k) = (1/N) ∑_{t=0}^{N-1} x_t e^{-jω_k t} = A(ω_k) exp[jθ(ω_k)]
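A minimal sketch of such a phase perturbation, keeping the amplitude spectrum fixed and jittering the phases before transforming back (a simplified reading of RobustTAD; names are ours):

import numpy as np

def phase_perturb(x, sigma=0.1, rng=np.random):
    # Perturb the phase spectrum while keeping amplitudes, then invert.
    F = np.fft.rfft(x)
    amp, phase = np.abs(F), np.angle(F)
    phase = phase + rng.normal(0.0, sigma, size=phase.shape)
    return np.fft.irfft(amp * np.exp(1j * phase), n=len(x))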
Decomposition.
Empirical Mode Decomposition (EMD) <cit.> is used to produce a set of intrinsic mode functions (IMFs). This decomposition relies on extracting the energy linked to different time scales that are inherent to the data. EMD has been utilized as a method of data augmentation in <cit.> for the purpose of categorizing impact noise in vehicles.
Another way to decompose time-series data is STL <cit.>, which decomposes signals into seasonal, trend, and remainder:
y_t = τ_t + s_t + r_t, t = 1, 2, ⋯ N
Other works such as <cit.> and <cit.> focus on improving robustness with the help of STL components.
§.§.§ Structure-based Time-series Transformation
Unlike Value Perturbation, sequence perturbation is implemented on the X-axis, either on the time domain or the frequency domain.
Window Slicing splits the time-series data into pieces.
It is also the basis of many Sequence Morphing methods because they apply varied operations on these slices.
Window Slicing.
Inspired by image cropping, which is widely used for image data augmentation, MCNN <cit.> proposes window slicing. It extracts slices from the entire sequence as new training samples. Each slice is assigned with the label of the original sequence.
Sequence Morphing.
Based on window slicing, <cit.> extends or compresses the generated spans, which is called window warping. This is equivalent to speeding up or slowing down a span of the sequence.
Vowel stretching <cit.> is a special kind of window warping technique. It only perturbs the sequence by prolonging the vowels. This is meant to be consistent with how children speak.
<cit.> proposes to randomly permute these slices and concatenate them into a long sequence. Similar processing is also called Random Shuffling and is used in <cit.>.
VTLP <cit.> applies window warping in the frequency domain. It adds a small distortion to the central frequency.
SpecAugment <cit.> also proposes to augment the frequency domain. It generates masks of length sampled from a uniform distribution and uses each of them to mask consecutive frequency channels.
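A minimal sketch of window slicing and window warping for a univariate sequence (the ratios and defaults are illustrative, not taken from the cited papers):

import numpy as np

def window_slice(x, ratio=0.9, rng=np.random):
    # Crop a random contiguous window covering `ratio` of the sequence;
    # the slice inherits the label of the full sequence (as in MCNN).
    T = len(x)
    w = max(1, int(T * ratio))
    start = rng.randint(0, T - w + 1)
    return x[start:start + w]

def window_warp(x, window_ratio=0.1, speed=2.0, rng=np.random):
    # Speed up / slow down one random span by resampling it, then resample
    # the whole sequence back to its original length.
    T = len(x)
    w = max(2, int(T * window_ratio))
    start = rng.randint(0, T - w)
    span = x[start:start + w]
    warped = np.interp(np.linspace(0, w - 1, int(w * speed)), np.arange(w), span)
    out = np.concatenate([x[:start], warped, x[start + w:]])
    return np.interp(np.linspace(0, len(out) - 1, T), np.arange(len(out)), out)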
§.§ Multiple Time-series Augmentation
§.§.§ Value-based Sequence Mixing
Most methods that combine information from different sequences only use the values, and they are referred to as sequence averaging or sequence mixing.
For example, DTW Barycenter Averaging (DBA) <cit.> revolves around the averaging problem for Dynamic Time Warping. It develops a global technique for computing the average of a set of sequences and avoids using iterative pairwise averaging.
In Weighted-DBA <cit.>, instead of each time series contributing equally to the final average, some can contribute more than others, so that an infinite number of new examples from any set of given time series can be generated.
<cit.> applies both interpolation and extrapolation in the feature space to augment time-series data. It first uses a sequence autoencoder to learn a feature space from unlabeled data. The data is then encoded and augmented with additive noise, interpolation, and extrapolation. Finally, the data in the feature space is decoded and classified.
Extending sequence mixing to the frequency domain, <cit.> applies the Equalized Mixture Data Augmentation (EMDA) to create new data. It computes the weighted average of two randomly chosen spectrograms with the same label.
§.§ Populational Time-series Augmentation
§.§.§ Vanilla Time-series Generation
GANs are widely used to generate virtual time-series data.
Generative Adversarial Networks.
<cit.> uses an RGAN and an RCGAN to produce realistic medical time-series data.
TimeGAN <cit.> combines supervised and unsupervised losses to train its autoencoding components and adversarial components.
<cit.> propose WaveGAN and SpecGAN to generate time-series data. WaveGAN is used on waveforms, while SpecGAN is used on spectrograms; SpecGAN demonstrates how GANs can be applied to generate time-series data in the frequency domain.
§.§.§ Exogenous Time-series Generation
Some methods view time-series data outside the existing data and from a statistical perspective. They are based on human knowledge on statistics and can predict how the value changes in future timestamps.
Statistical Generation.
<cit.> uses a statistical algorithm called LGT (Local and Global Trend) to forecast paths of the time-series data and samples from them to get extra data.
GRATIS <cit.> employs mixture autoregressive (MAR) models to generate collections of time series and examine the variety and extent of the produced time series within a feature space for time series.
§ DISCUSSION
Influence of Data Augmentation Operations.
Some data augmentation methods aim to produce noisy or even “wrong” data to improve the AI models' robustness by learning from those low-quality or adversarial samples; in contrast, some are designed to improve data quality by removing noise or error from the data, so that the models can learn a clean representation of the data; other methods do not intend to affect data quality at all: they make semantic-preserving perturbations, enlarge the dataset size, and cover more area in the feature space.
Evaluation Metrics Specific to Data Augmentation.
Most research on data augmentation uses the models' performance on downstream tasks to evaluate the effect of data augmentation methods. However, this criterion depends on too many extraneous factors and may fail to reflect the influence of data augmentation methods on the data themselves. So, there is a need for a set of evaluation metrics that assess considerations specific to data augmentation, such as how much diversity is introduced and how the data distribution in the feature space changes.
Harnessing Latest Techniques.
Large language models possess an outstanding ability to manipulate text data, but up to now, most text data augmentation methods rely on symbolic analysis and rules. The question of how to utilize advanced techniques, such as LLMs, to perform data augmentation tasks remains open and is worth exploring.
§ CONCLUSION
This survey presents a comprehensive summary of data augmentation techniques across five data modalities and proposes a modality-independent taxonomy from a data-centric perspective, which focuses on where the augmented data is derived from. It further enumerates related research papers on these data augmentation methods and annotates them with descriptive information in detail.
The McKay correspondence for dihedral groups: The Moduli Space and the Tautological Bundles

John Ashley Capellan
Graduate School of Mathematics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8602, Japan
m20049e@math.nagoya-u.ac.jp
2010 Mathematics Subject Classification: Primary 14D20, 14E16, 14J17

Abstract. A conjecture in <cit.> states that for a finite subgroup G of GL(2, ℂ), a resolution Y of ℂ^2/G is isomorphic to a moduli space ℳ_θ of G-constellations for some generic stability parameter θ if and only if Y is dominated by the maximal resolution. This paper affirms the conjecture in the case of dihedral groups as a class of complex reflection groups, and offers an extension of the McKay correspondence (via <cit.>, <cit.>, and <cit.>).
§ INTRODUCTION
The classical McKay correspondence relates representations of a finite subgroup G ⊂ GL(2, ℂ) to the dual graph of exceptional divisors of the minimal resolution of the quotient variety ℂ^2/G.
An algebro-geometric viewpoint of the correspondence was found by <cit.> in the case of G ⊂ SL(2, ℂ), via some locally free sheaves. In <cit.> and <cit.>, these locally free sheaves are realized as tautological bundles. An explicit description is obtained using the G-Hilbert scheme G-Hilb(ℂ^2) as the minimal (crepant) resolution of the quotient variety ℂ^2/G. The McKay correspondence of SL(2, ℂ) was obtained by computing the minimal generators of the G-module I/(𝔪 I + 𝔫) of each G-cluster. This computation is related to the tops and socles of each G-cluster (which is defined in Definition <ref>).
The aforementioned viewpoints of the correspondence (both <cit.> and <cit.>) give a rigorous proof of the correspondence between representations of G ⊂ SL(2) and the exceptional divisors of the minimal resolution of ℂ^2/G. This correspondence can be generalized further into equivalences of derived categories.
A natural generalization of the McKay correspondence is an equivalence between the G-equivariant geometry of ℂ^n and the geometry of a crepant resolution Y of ℂ^n/G, expressed in the language of derived categories. When Y ⊂ G-Hilb(ℂ^n) is the irreducible component of G-Hilb(ℂ^n) containing the open subset of all reduced G-clusters, the celebrated result of <cit.> gives conditions under which τ: Y → ℂ^n/G is a crepant resolution and Φ: D(Y) → D^G(ℂ^n), given by the Fourier-Mukai transform, is a derived equivalence. In particular, for n ≤ 3 and G ⊂ SL(n, ℂ), Φ defines a derived equivalence.
However, the story does not stop here. The moduli space of G-clusters provide one candidate for a crepant resolution of the quotient variety ℂ^n/G. With the conjecture of Reid (Conjecture 4.1 of <cit.>) in mind, which states that if τ: Y →ℂ^n/G is a crepant resolution, then Φ: D(Y) D^G(ℂ^n) for some derived equivalence Φ, there is an ongoing search for such crepant resolutions. One of such candidates is the moduli space of G-constellations.
A generalization of the Hilbert scheme of G-orbits is the moduli space of G-constellations (on an affine space), introduced in <cit.>. The moduli space depends on a stability parameter θ, and the moduli space of θ-stable G-constellations is denoted by ℳ_θ. If G is a subgroup of SL(n, ℂ) acting on ℂ^n and n ≤ 3, then ℳ_θ is a crepant resolution of ℂ^n/G for a generic stability parameter θ. The main results in <cit.>, <cit.>, <cit.> realize a (projective) crepant resolution Y of ℂ^3/G (for any finite subgroup G ⊂ SL(3, ℂ)) as a moduli space ℳ_θ of G-constellations for some generic stability parameter θ. More precisely, there is a generic stability parameter θ such that Y ≅ ℳ_θ.
If we generalize from SL(2, ℂ) to GL(2, ℂ), which can be either small (i.e. which does not contain pseudoreflections) or non-small, we get a more general McKay correspondence.
The paper <cit.> obtained an algebraic-geometric viewpoint of the correspondence in the case of small subgroups of GL(2, ℂ). Other explicit descriptions of the correspondence, by derived functors and by the work of <cit.>, were obtained in <cit.>, using the G-Hilbert scheme G-Hilb(ℂ^2) as the minimal resolution of the quotient variety ℂ^2/G, and determined their tops and socles of each G-cluster recovering the Ito-Nakamura type of correspondence, i.e. the socles of a G-cluster recover the same G-modules as with the module I/(𝔪 I + 𝔫).
Since ℳ_θ is a resolution of ℂ^2/G, there is a fully faithful functor from D(ℳ_θ) ↪ D^G(ℂ^2). In relation to the DK hypothesis and the maximal resolution in <cit.>, Ishii posed the conjecture in <cit.> that Ỹ is isomorphic to ℳ_θ for some generic stability parameter θ if and only if Ỹ is between the minimal and maximal resolution of ℂ^2/G, where the maximal resolution means the smooth variety which has unique maximal coefficients satisfying the inequality in Definition <ref>. So far, this conjecture is solved in the cases of abelian subgroups and small subgroups of GL(2, ℂ).
It is natural to ask if it is possible to formulate the explicit descriptions of the McKay correspondence in the case of complex reflection groups. This case is particularly interesting because the quotient variety ℂ^2/G is isomorphic to ℂ^2 itself in which the minimal resolution is the identity map which reveals no data about the exceptional divisors. Hence, we consider the aforementioned maximal resolution in the hopes of recovering a McKay correspondence. In this paper, we offer an explicit description of the McKay correspondence in the case of dihedral groups via its derived equivalence.
Notation: Let D_2n = ⟨τ := [ 0 1; 1 0 ], σ := [ ϵ 0; 0 ϵ^-1 ] (ϵ^n = 1) ⟩ ⊂ GL(2,ℂ) be a dihedral group of order 2n embedded in the general linear group GL(2, ℂ). Unless explicitly stated, G is the dihedral group D_2n. The representations of D_2n are ρ_0, the trivial representation; ρ_0', ρ_n/2, ρ_n/2', the non-trivial 1-dimensional representations; and ρ_j (j ≠ 0, 0', n/2, n/2'), the 2-dimensional representations. The character table is as follows:
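(The table itself did not survive extraction; the following is the standard character table of the dihedral group, shown for even n with ϵ = e^{2πi/n}. For odd n, the rows ρ_{n/2} and ρ_{n/2}' are absent, and the sign conventions for those two rows are our assumption.)

         | σ^k               | τσ^k
ρ_0      | 1                 | 1
ρ_0'     | 1                 | -1
ρ_{n/2}  | (-1)^k            | (-1)^k
ρ_{n/2}' | (-1)^k            | -(-1)^k
ρ_j      | ϵ^{jk} + ϵ^{-jk}  | 0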
The main results of this paper are the following:
The maximal resolution Y_max of (ℂ^2/G, B̂), defined as the smooth variety which has unique maximal coefficients satisfying the inequality in Definition <ref>, is isomorphic to the quotient variety ℤ_n-Hilb(ℂ^2)/ℤ_2 := ⟨σ⟩-Hilb(ℂ^2)/(D_2n/⟨σ⟩), where B̂ is a ℚ-divisor defined by the equation K_ℂ^2 = π^*(K_ℂ^2/D_2n + B̂), and π: ℂ^2 →ℂ^2/D_2n is the projection map. It is also the minimal embedded resolution of (ℂ^2/G, B̂).
The minimal embedded resolution of (ℂ^2/G, B̂) is the smooth surface obtained after the least number of monoidal transformations such that the strict transform of (the support of) B̂ is smooth.
A resolution of singularities Y →ℂ^2/D_2n≅ℂ^2 is isomorphic to ℳ_θ for some generic θ if and only if Y is dominated by the maximal resolution of the pair (ℂ^2/D_2n, B̂).
We point out here that ℂ^2/D_2n is non-singular. Here, a resolution Y → ℂ^2/D_2n means a proper birational morphism from a smooth variety Y.
Section 3 is devoted to the proof of the first two theorems.
We define the stack associated to the maximal resolution 𝒴 := [Y_max] realized as the 2nd root stack
𝒴 = √((O_Y_max(𝔹), 1_𝔹)/(Y_max)),
where B is the boundary divisor, i.e. the strict transform (f_2)_*^{-1}(B̂) of B̂ under the maximal resolution f_2: Y_max → ℂ^2/G, and 𝔹 := ⌈B⌉ (for prime divisors B_α of B = Σ_α b_α B_α, ⌈B⌉ := Σ_α ⌈b_α⌉ B_α); the global section 1_𝔹 is induced by the inclusion of divisors O_Y_max ↪ O_Y_max(𝔹).
We refer to Section 2.2 of <cit.> for the detailed definition of the root stack.
Explicitly, the objects of 𝒴 over a scheme S are quadruples (f,M,t,ϕ), where f: S → Y_max is a morphism, M is an invertible sheaf on S, t ∈Γ(S,M), and ϕ: M^⊗ 2→ f^*(O_Y_max(𝔹)) is an isomorphism such that ϕ(t^2) = f^*(1_𝔹).
By Theorem <ref>, we obtain the isomorphism between stacks 𝒴≅ [ℤ_n-Hilb(ℂ^2)/ℤ_2].
We have the Fourier-Mukai transforms (defined in Section 4):
Φ: D([ℂ^2/D_2n]) → D(𝒴)
𝔊 ↦ϕ_*(Rp_[ℤ_n-Hilb(ℂ^2)/D_2n]* (p_[ℂ^2/D_2n]^*(𝔊) ⊗ O_[𝒵/D_2n]))
Ψ: D(𝒴) → D([ℂ^2/D_2n])
ϵ ↦ R(p_[ℂ^2/D_2n]*) (p_[ℤ_n-Hilb(ℂ^2)/D_2n]^*(ϕ^*(ϵ)) ⊗ det(ρ_nat) ⊗ O_[𝒵/D_2n]^[2])
The functors Ψ and Φ are equivalences via Theorem 4.1 of <cit.>.
We define the tautological sheaf associated to a representation ρ of D_2n as
ℛ̂_ρ := Φ(O_ℂ^2 ⊗ ρ^∨).
The tautological bundles on the stack 𝒴 are described by the following:
Table: The tautological sheaves for odd n

Tautological Bundle | Description | Chern Class
ℛ̂_ρ_0 | 𝒪_𝒴 | 0
ℛ̂_ρ_0' | 𝒪_𝒴(ℬ_3 - 𝒟) | ℬ_3 - 𝒟
ℛ̂_ρ_i (rank 2) | 0 → 𝒪_𝒴 → ℛ̂_ρ_i → 𝒪_𝒴(𝒟_i + ℬ_3 - 𝒟) → 0 | 𝒟_i + ℬ_3 - 𝒟
Table: The tautological sheaves for even n

Tautological Bundle | Description | Chern Class
ℛ̂_ρ_0 | 𝒪_𝒴 | 0
ℛ̂_ρ_0' | 𝒪_𝒴(ℬ_1 - ℬ_2) | ℬ_1 - ℬ_2
ℛ̂_ρ_i (rank 2) | 0 → 𝒪_𝒴 → ℛ̂_ρ_i → 𝒪_𝒴(𝒟_i + ℬ_1 - ℬ_2) → 0 | 𝒟_i + ℬ_1 - ℬ_2
ℛ̂_ρ_(n/2) | 𝒪_𝒴(ℬ_2) | ℬ_2
ℛ̂_ρ_(n/2)' | 𝒪_𝒴(ℬ_1) | ℬ_1
where π: 𝒴 → Y_max is the morphism to the coarse moduli space and p: ℤ_n-Hilb(ℂ^2) → ℤ_n-Hilb(ℂ^2)/ℤ_2 is the projection; ℬ_i is a prime divisor on 𝒴 such that 2ℬ_i = π^{-1}(B_i) (this is the stacky locus); 𝒟 := π^{-1}(D), where D is a prime divisor of Y_max that satisfies all of the following properties: (1) D does not intersect B_i for any i, (2) D is transversal to the exceptional divisor intersecting B_1 and B_2 (or B_3), and (3) D · E_j = 0 for j ≠ m; and 𝒟_i := π^{-1}(D_i) with D_i := p(D̃_i + g · D̃_i), where D̃_i is a divisor transversal to an exceptional divisor Ẽ_i of the minimal resolution ℤ_n-Hilb(ℂ^2) → ℂ^2/ℤ_n. (The abuse of notation is due to Theorem <ref>.)
The rank one tautological bundles on the stack are uniquely determined by their Chern classes; and the rank two tautological bundles are determined by an extension of two line bundles. Furthermore, there is only one possible (non-trivial) extension class, making these descriptions unique.
The aforementioned theorems are proved by explicitly tracing the image of the quotient variety ℂ^2/G ≅ G-Hilb(ℂ^2), regarded as a subscheme of G-Hilb(ℂ^3), under flopping operations via <cit.>; this shows that all of the two-dimensional counterparts can be realized as moduli spaces of D_2n-constellations ℳ_θ for some generic stability parameter θ. In particular, the maximal resolution realized as a moduli space of D_2n-constellations can be used to construct a McKay correspondence similar to that of <cit.>.
Building from the works <cit.>, <cit.> and <cit.>, we formulate an analogous description of the socles of the G-constellations over exceptional divisors on the stack:
For a given D_2n-constellation F_st on the exceptional divisors over the quotient stack 𝒴, where the exceptional divisors ℰ_i satisfies π^-1(E_i) = ℰ_i,
top(F_st) = ρ_0 ⊕ ρ_0',

socle(F_st) =
ρ_i, if [F_st] ∈ ℰ_i and [F_st] ∉ ℰ_j for j ≠ i;
ρ_i ⊕ ρ_j, if [F_st] ∈ ℰ_i ∩ ℰ_j;
ρ_{n/2 - 1} ⊕ ρ_{n/2} ⊕ ρ'_{n/2}, if [F_st] ∈ ℰ_{n/2 - 1} ∩ ℰ_{n/2};
ρ_{n/2} ⊕ ρ'_{n/2}, if [F_st] ∈ ℰ_{n/2} - (ℰ_{n/2 - 1} ∪ {ℬ_1, ℬ_2});
ρ_{n/2}, if [F_st] = ℬ_1;
ρ'_{n/2}, if [F_st] = ℬ_2.
In the last two main results, the McKay correspondence was constructed over the quotient stack. The reader may ask whether such a correspondence can be made over the coarse moduli space. Unfortunately, it is not possible; a comparison between the tautological sheaves over the stack and over the moduli space is given in Sections 4 and 5.
§ PRELIMINARIES
§.§ G-constellations on ℂ^n
§.§.§ Definitions
Let V = ℂ^n be an affine space and G ⊂ GL(V) be a finite subgroup.
A G-constellation on V is a G-equivariant coherent sheaf E on V such that H^0(E) is isomorphic to the regular representation of G as a ℂ[G]-module. In symbols, H^0(E) ≅ℂ[G].
When E = O_Z, the structure sheaf O_Z of a G-cluster Z inside V, E is a G-constellation.
Let R(G) = ⊕_ρ∈Irr(G)ℤρ be the representation ring of G, where Irr(G) denotes the set of irreducible representations of G. The parameter space of stability conditions of G-constellations is the ℚ-vector space:
Θ := {θ∈Hom_ℤ(R(G), ℚ) | θ(ℂ[G]) = 0 },
where ℂ[G] is regarded as the regular representation of G.
Given θ∈Θ, a G-constellation E is θ-stable (resp. θ-semistable) if every proper G-equivariant coherent subsheaf 0 ⊊ F ⊊ E satisfies θ(H^0(F)) > 0 (resp. θ(H^0(F)) ≥ 0). We regard H^0(F) here as an element of R(G).
A parameter θ∈Θ is generic if a θ-semistable G-constellation is also θ-stable.
By Proposition 5.3 of <cit.>, there is a fine moduli scheme ℳ_θ = ℳ_θ(V) of θ-stable G-constellations on V for generic θ.
There is a morphism τ: ℳ_θ(V) → V/G which sends a G-constellation to its support. By Proposition 2.2 of <cit.>, τ is a projective morphism when θ is generic.
The subset Θ^gen⊂Θ of generic parameters is open and dense. It is the disjoint union of finitely many convex polyhedral cones C in Θ (see Lemma 3.1 of <cit.>). The convex polyhedral cone C is called a chamber in Θ.
For θ∈Θ^gen, the moduli space ℳ_θ only depends on the open Geometric Invariant Theory (GIT) chamber C ⊂Θ containing θ∈Θ, so that we can write ℳ_C instead of ℳ_θ for any θ⊂ C. The following theorem gives an example:
For a finite subgroup G ⊂ SL(3, ℂ), suppose that Y →ℂ^3/G is a projective crepant resolution. Then Y ≅ℳ_C for some GIT chamber C ⊂Θ.
The following theorem describes the structure of G-constellations for n = 2. The arguments of Theorems 1.1 and 1.2 in <cit.> can be adapted to guarantee not only a resolution of singularities of ℂ^2/G, but also the embedding of the corresponding derived categories, which encodes the relationship between canonical divisors via inequalities, following the DK hypothesis in <cit.>.
Let G be a finite subgroup of GL(2, ℂ). If θ is generic, then the moduli space ℳ_θ is a resolution of singularities of ℂ^2/G. Moreover, the universal family of G-constellations defines a fully faithful functor
Φ_θ: D^b(coh(ℳ_θ)) → D^b(coh^G(ℂ^2)).
§.§ The Maximal Resolution
Let G be a finite subgroup of GL(2, ℂ), not necessarily small (i.e. the action may not be free on ℂ^2 - {0}). Then the quotient variety X = ℂ^2/G and its projection π: ℂ^2 → X are equipped with a boundary divisor B determined by the equality K_ℂ^2 = π^*(K_X + B), expressed as B = Σ_j ((m_j - 1)/m_j) B_j, where B_j ⊂ X is the image of a one-dimensional linear subspace whose pointwise stabilizer subgroup G_j ⊂ G is cyclic of order m_j. Furthermore, G is small if and only if B = 0.
Consider the abelian group generated by the matrices [ 1 0; 0 ϵ_3 ] and [ -1 0; 0 1 ], i.e. this is the abelian group G ≅ℤ_3 ×ℤ_2.
There is a relation between canonical divisors: K_ℂ^2 = π^*(K_ℂ^2/G + 1/2div(x^2) + 2/3div(y^3)).
The log pair (X,B) is a log terminal singularity.
From this theorem, given a resolution of singularities τ: Y → X, writing K_Y + τ_*^{-1}(B) = τ^*(K_X + B) + Σ_i a_i E_i, where the E_i are the exceptional divisors and a_i ∈ ℚ, we have a_i > -1 for all i. Then, among all the resolutions Y which satisfy a_i ≤ 0 for all i, we define the maximal resolution of (X,B):
Let (X,B) be a log terminal pair of a surface X and a ℚ-divisor B. We can assume the surfaces Y and Z are smooth. A resolution of singularities f: Y → X is a maximal resolution of (X,B) if K_Y + f_*^-1(B) = f^*(K_X + B) + Σ_i a_i E_i, where -1 < a_i ≤ 0, and for any proper birational morphism g: Z → Y that is not an isomorphism, we have K_Z + h_*^-1(B) = h^*(K_X + B) + Σ_j b_j F_j, h = fg and for some b_j > 0.
A quotient singularity (X,B) of a surface has a unique maximal resolution (which we denote by Y_max).
§ REALIZING BLOW-UPS AS MODULI SPACES
In this section, we prove Theorem 1.2 (or the conjecture in the case of dihedral groups) by embedding an affine open subset of each blow-up of ℂ^2/D_2n to a crepant resolution of ℂ^3/D_2n.
Throughout the rest of this paper (unless explicitly mentioned), in G := D_2n represented by ⟨σ = [ e^2π i/n 0; 0 e^-2π i/n ], τ = [ 0 1; 1 0 ]⟩ and H := ⟨σ⟩≅ℤ_n, we have the following commutative diagram:
ℂ^2
 │ τ_1
 ▼
ℂ^2/H ←──f_1── X_1                Y
 │ τ_2           │ τ_2'          ╱ f_3
 ▼               ▼             ↙
ℂ^2/G ←──f_2── Y_1 ←──h_max── (Y_1)_max
where the corresponding varieties and the morphisms are:
X_1 := H-Hilb(ℂ^2) = ℤ_n-Hilb(ℂ^2)
Y_1 := X_1/(G/H) = X_1/(ℤ_2)
f_1 : X_1 → ℂ^2/H, the minimal (crepant) resolution
τ_1 : ℂ^2 → ℂ^2/H, the quotient morphism
τ_2, τ_2' : the quotient morphisms by G/H ≅ ℤ_2
f_2 : Y_1 → ℂ^2/G, the birational morphism induced by f_1
(Y_1)_max := the maximal resolution of the pair (Y_1, B'), where B' is defined by K_X_1 = τ_2'^*(K_Y_1 + B')
f_3 : Y → Y_1, the Hilbert-Chow morphism
Y := (G/H)-Hilb(X_1) = ℤ_2-Hilb(X_1)
By the commutative diagram above, because f_1 is a birational map, f_2 is also a birational map. We can see this because the projection τ_2 induces an inclusion between the ring of rational functions (k(ℂ^2)^H)^G/H↪ k(ℂ^2)^H.
We denote the following exceptional divisors, and refer to Figures <ref> and <ref> for the configuration:
Ẽ_i := the exceptional divisors of f_1 on X_1
E_i := τ_2'(Ẽ_i)
We define the ramification divisors on ℂ^2/G (i.e. the support of the discriminant divisor B̂ defined by the equation K_ℂ^2/H = τ_2^*(K_ℂ^2/G + B̂)) with their corresponding explicit equations as:
B̂_1: ⟨ (x^n/2 + y^n/2)^2 = 0 ⟩
B̂_2: ⟨ (x^n/2 - y^n/2)^2 = 0 ⟩
B̂_3: ⟨ (x^n - y^n)^2 = 0 ⟩
so that we can define their corresponding strict transformations for i = 1,2,3 as:
B_i := (f_2)_*^-1 (B̂_i)
B̃_̃ĩ := (τ_2')_*^-1(B_i).
Using the notations in <cit.>, we define f_1 := x^2m+1 + y^2m+1 and f_2 := x^2m+1 - y^2m+1 in the odd n case; and f_1 := x^m + y^m and f_2 := x^m - y^m in the even n case. We also note here that ℂ[x,y]^D_2n = ℂ[xy, x^n + y^n].
We prepare some propositions.
The surface Y_1 is smooth. Hence, f_2 is a resolution of (ℂ^2/G, B̂).
We compute (X_1)^ℤ_2, the fixed locus of the ℤ_2-action on X_1 by taking a closed subscheme V to g · V, where g is an element of ℤ_2.
The points of X_1 are H-invariant 0-dimensional subschemes of ℂ^2 (whose space of global sections is isomorphic to the regular representation, i.e. H^0(O_Z) ≅ ℂ[H]), and so each can be realized as an ideal defining such a closed subscheme of ℂ^2. Referring to Thm. 2.2 of <cit.>, and Remark 9.7, Lemma 12.2, and Theorem 12.3 of <cit.>, we can identify the points on the exceptional divisors of X_1 as I_i(a_i : b_i) = ⟨ a_ix^i - b_iy^{n-i}, x^{i+1}, xy, y^{n+1-i}⟩, where 1 ≤ i ≤ n and (a_i:b_i) ∈ ℙ^1; or equivalently, using the open affine covers of X_1 = ⋃_{i=1}^n U_i := ⋃_{i=1}^n Spec(ℂ[ x^i/y^{n-i}, y^{n+1-i}/x^{i-1}]), the points ( x^i/y^{n-i}, y^{n+1-i}/x^{i-1}) = (0, b) and ( x^i/y^{n-i}, y^{n+1-i}/x^{i-1}) = (a, 0) on the exceptional divisor correspond to I_{i-1}(b:1) and I_i(1:a), respectively.
Furthermore, the ℤ_2-action sends I_i(a_i : b_i) = ⟨ a_ix^i - b_iy^n-i, x^i+1, xy, y^n+1-i⟩ to I_n-i(b_i :a_i) = ⟨ a_iy^i - b_ix^n-i, x^n-i+1, xy, y^i+1⟩, so that the fixed points on the exceptional divisors of X_1 are:
When n is odd, I_(n-1)/2(0:1) = I_(n+1)/2(1:0).
When n is even, there are two fixed points, I_n/2(1:1) or I_n/2(-1:1).
From the commutative diagram below, we can check the smoothness of Y_1 by verifying that ℤ_2 acts on X_1 as a pseudoreflection. This amounts to showing that the fixed points under the ℤ_2-action are on ((f_1)_*^{-1} ∘ (τ_2)_*^{-1})(B̂_{i,0}).
(τ_2)_*^{-1}(B̂_{i,0}) ⊂ ℂ^2/H - {(0,0)}  ←──f_1──  X_1 - Σ(Ẽ_i)
              │ τ_2                                  │ τ_2'
              ▼                                      ▼
B̂_{i,0} := B̂_i - {(0,0)} ⊂ ℂ^2/G - {(0,0)}  ←──f_2──  Y_1 - Σ(E_i)
First, we define the open affine covering of X_1 = ⋃_i=1^n U_i := ⋃_i=1^n Spec(ℂ[ x^i/y^n-i, y^n+1-i/x^i-1]).
By the definition of B̂_i from (<ref>), we compute the strict transform ((f_1)_*^-1∘ (τ_2)_*^-1) (B̂_i) on each of the affine open sets covering X_1.
On Spec(ℂ[ x^i/y^n-i, y^n+1-i/x^i-1]),
(x^n - y^n)^2 = (x^i/y^{n-i})^{2(n+1-i)}(y^{n+1-i}/x^{i-1})^{2(n-i)} - 2(x^i/y^{n-i})^{n}(y^{n+1-i}/x^{i-1})^{n} + (x^i/y^{n-i})^{2(i-1)}(y^{n+1-i}/x^{i-1})^{2i}
For B̂_1:
0 = x^n + 2(xy)^{n/2} + y^n = (x^{n/2}/y^{n/2})^{(n-2)/2}(y^{(n+2)/2}/x^{(n-2)/2})^{n/2}[ (x^{n/2}/y^{n/2}) + 1 ]^2,
so that the strict transform is the line x^{n/2}/y^{n/2} = -1 on the open set U_{n/2}. The coordinate (x^{n/2}/y^{n/2}, y^{(n+2)/2}/x^{(n-2)/2}) = (-1,0) corresponds to the point I_{n/2}(1:-1) on the Hilbert scheme X_1. This works similarly for U_{(n/2)+1} and U_{(n+1)/2} (for odd n). The same argument works for B̂_2, in which we obtain I_{n/2}(1:1), and for B̂_3, in which we obtain I_{(n-1)/2}(0:1) = I_{(n+1)/2}(1:0).
To show further that the closure ((f_1)_*^{-1} ∘ (τ_2)_*^{-1})(B̂_i) does not pass through the origins of the open sets other than U_{n/2}, U_{(n/2)+1}, U_{(n+1)/2}, we again notice that, WLOG:
(x^n - y^n)^2 = (x^i/y^{n-i})^{2(i-1)}(y^{n+1-i}/x^{i-1})^{2i}[ (x^i/y^{n-i})^{2(n+2-2i)}(y^{n+1-i}/x^{i-1})^{2(n-2i)} - 2(x^i/y^{n-i})^{n+2-2i}(y^{n+1-i}/x^{i-1})^{n-2i} + 1 ]
This implies that setting either coordinate of U_i to zero does not give a point on the strict transform, which is defined by 0 = (x^i/y^{n-i})^{2(n+2-2i)}(y^{n+1-i}/x^{i-1})^{2(n-2i)} - 2(x^i/y^{n-i})^{n+2-2i}(y^{n+1-i}/x^{i-1})^{n-2i} + 1. This completes the description of the strict transform of the boundary divisors.
These all imply that dim((X_1)^ℤ_2) = 1, or equivalently, ℤ_2 acts as a pseudo-reflection on X_1, which implies that the boundary divisor B' on Y_1 determined by the equation K_X_1 = τ_2'^*(K_Y_1 + B') is smooth and so Y_1 is smooth.
A more general statement for the smoothness of τ_2' is as follows:
For a normal subgroup H ⊂ SL(2) of G with X_1 := H-Hilb(ℂ^2) and G/H cyclic, the quotient X_1/(G/H) is smooth if and only if G/H is a (cyclic) complex reflection group (or equivalently, if G/H has a local linear action on X_1 by pseudoreflections).
The dihedral group is a special case of this.
The morphism f_2: Y_1 → (ℂ^2/G, B̂) is a crepant resolution.
From the fact that f_1 is a crepant resolution:
K_X_1 = f_1^*(K_ℂ^2/H)
K_Y_1 + (f_2)_*^-1(B̂) = f_2^*(K_ℂ^2/G + B̂) + Σ_j a_j F_j
K_X_1 = (τ_2')^*(K_Y_1 + B')
We need to show that (f_2)_*^-1(B̂) = B' = B_Y_1 by showing that a_j = 0 for all j.
K_ℂ^2/H = τ_2^*(K_ℂ^2/G + B̂)
K_X_1 = f_1^*(K_ℂ^2/H) = f_1^*(τ_2^*(K_ℂ^2/G + B̂))
= (τ_2 ∘ f_1)^*(K_ℂ^2/G + B̂)
= (f_2 ∘τ_2')^*(K_ℂ^2/G + B̂)
= τ_2'^*(K_Y_1 + (f_2)_*^-1(B̂) - Σ_j a_j F_j),
= τ_2'^*(K_Y_1 + (f_2)_*^-1(B̂)) - Σ_j a_j τ_2'^*(F_j)
This implies that the τ_2'^*(F_j) are exceptional divisors for f_1; since f_1 is crepant, this forces the discrepancies a_j to be zero.
This proposition implies that Y_1 is also a crepant resolution of (ℂ^2/G, B̂). Also, we recall the notion of the minimal embedded resolution of (ℂ^2/G, B̂).
Let C_0 be an irreducible curve in the surface X_0. Then there exists a finite sequence of monoidal transformations (with suitable centers) X_n → X_n-1→ ... → X_1 → X_0 such that the strict transform C_n of C_0 on X_n is nonsingular.
The minimum n that satisfies this proposition is the minimal embedded resolution of (X_0, C_0). This is minimal in the sense that if Y is a smooth surface and dominates X_0, then it dominates X_n as well. For instance, a smooth surface Y with normal crossings dominates the minimal embedded resolution. A more detailed description is given in Theorem 3.9 (Ch. V) of <cit.>.
The maximal resolution Y_max of (ℂ^2/G, B̂) is isomorphic to the quotient variety Y_1. It is also the minimal embedded resolution of (ℂ^2/G, B̂). Furthermore, the iterated Hilbert scheme Y is also isomorphic to the maximal resolution Y_max.
First, from the proof of Theorem 1 in <cit.>, the maximal resolution of (Y_1, B') defined by h_max is also the maximal resolution of (ℂ^2/G, B̂) defined by h_max∘ f_2.
Because Y_1 has at worst cyclic quotient singularities, and (Y_1, B_Y_1) has the smoothness property for both the variety and the boundary divisor, the minimal resolution of (Y_1, (f_2)_*^-1(B̂)), i.e. f_3: Y → (Y_1, (f_2)_*^-1(B̂)) is crepant, and more strongly, f_3 = id, which implies that the maximal resolution of (ℂ^2/G, B̂) is Y_1.
An explicit way to do this is to consider the affine open covers of Y_1 via the open affine covers of X_1. We show this for the odd n case, since the argument for the even case is similar.
Because f_2 is a crepant resolution, it remains to compute the discrepancy of the blow-up h: Z := Blp(Y_1) → Y_1, and we divide it into two cases depending on where the center of the blow-up is. The more interesting case is where the center of h is on f_2*^-1(B̂) (this can be realized also as the boundary divisor for the morphism τ_2'):
In the odd n case, in ℤ_n-Hilb(ℂ^2), the open set Spec(ℂ[x^(n+1)/2/y^(n-1)/2, y^(n+1)/2/x^(n-1)/2]) which covers the invariant locus under the ℤ_2-action is ℤ_2-invariant. Thus, we consider the open set Spec(ℂ[x^(n+1)/2/y^(n-1)/2, y^(n+1)/2/x^(n-1)/2])^ℤ_2 = Spec(ℂ[xy, f_1/(xy)^m]). The boundary locus in ℤ_n-Hilb(ℂ^2) is (x^(n+1)/2/y^(n-1)/2 - y^(n+1)/2/x^(n-1)/2)^2 = 0, which translates to (f_1/(xy)^m)^2 - 4xy = 0 on the invariant open set.
For any point (xy, f_1/(xy)^m) = (1/4a^2, a) on B̃_m, performing the coordinate change, we obtain the new equation: (f_1/(xy)^m)^2 + 2af_1/(xy)^m + a^2 = (f_1/(xy)^m + a)^2 = 4(xy + 1/4a^2) = 4xy + a^2.
On Spec(ℂ[(xy)^m+1/f_1, f_1/(xy)^m]), the equation transforms to f_1/(xy)^m - 2a = 4(xy)^m+1/f_1. The exceptional divisor defines the equation f_1/(xy)^m = 0.
Thus, the intersection number of h_*^-1(B') with the exceptional divisor is 1/2. Using the relation between canonical divisors, the discrepancy a_m+1 = 1/2. For the even n case, this reduces to a blow-up along lines which is treated similarly.
Thus, Y_1 is the maximal resolution of (ℂ^2/D_2n, B̂). Furthermore, because f_3 is a crepant resolution of Y_1, f_3 must be an isomorphism. Hence, Y ≅ (ℂ^2/G, B̂)_max≅ Y_1.
This particular assertion tells us that the maximal resolution can be realized as a moduli space of G-constellations which will help in our computations later.
Once again, we refer to (<ref>) for the definition of the exceptional divisors for the next lemma:
For an exceptional divisor Ẽ (resp. E) of X_1 (resp. Y_1), we know that the normal bundles 𝒩_Ẽ/X_1 are of degree -2, or equivalently, 𝒩_Ẽ/X_1≅ O_Ẽ(-2). Then:
𝒩_E/Y_1 ≅
O_E(-1), if E = E_m;
O_E(-2), otherwise.
The first statement for X_1 is well-known since it is the (minimal) crepant resolution of the quotient singularity ℂ^2/ℤ_n.
We can compute the self-intersection number E^2 via the adjunction formula and given our computations in Proposition <ref> regarding the fixed points of X_1 under the ℤ_2-action.
Because f_2 is crepant, we have K_Y_1 + (f_2)_*^-1(B̂) = (f_2)^*(K_ℂ^2/G + B̂), so that K_Y_1· E_m = -1 and K_Y_1· E = 0 for E ≠ E_m.
The only resolutions dominated by the maximal resolution of (ℂ^2/G, B̂) are essentially the blow-ups from (ℂ^2/G, B̂) with center the singular point of the (strict transforms of the) boundary divisor B̂.
Using the same argument as in Lemma <ref>, after the blow-down of the (-1)-curve on Y_1 and so on, we obtain the result.
The next lemma provides an isomorphism between the minimal resolution of the variety and the quotient variety.
For a complex reflection group G ⊂ GL(n,ℂ), there is an isomorphism between the G-Hilbert scheme and the quotient variety. In symbols:
G-Hilb(ℂ^n) ≅ℂ^n/G.
We consider the moduli functor for G-clusters:
h: S ↦ { flat families 𝒵 ⊂ S × ℂ^n of G-clusters over S } / ≡
for a locally Noetherian scheme S over ℂ where E_S ≡ F_S if and only if there is an L in Pic(S) such that E_S ≅ F_S ⊗ L.
The G-Hilbert scheme G-Hilb(ℂ^n) represents the functor h. Thus:
h(S) ≅Hom_Sch(S, G-Hilb(ℂ^n)). We wish to show that h(S) ≅Hom_Sch(S, ℂ^n/G).
We first construct the map from Hom_Sch(S, ℂ^n/G) to h(S).
(1) From υ: Hom_Sch(S, ℂ^n/G) to h(S).
Given γ_S ∈Hom_Sch(S, ℂ^n/G), consider the fiber product diagram:
S ×_ℂ^n/G ℂ^n [d, "p_1"][r, "p_2"] ℂ^n [d, "p"]
S [r, "γ_S"] ℂ^n/G
It remains to show that every fiber of p is a G-cluster. This implies that the fiber product S ×_ℂ^n/Gℂ^n is a flat family of G-clusters.
By the Chevalley-Shephard-Todd theorem, the morphism p is flat. Then, by the decomposition p_*(O_{ℂ^n}) = ⊕_{ρ∈Irr(G)} M_ρ ⊗ ρ over the representations of the finite group G in characteristic 0, where M_ρ = (p_*(O_{ℂ^n}) ⊗ ρ^∨)^G is a finitely generated O_{ℂ^n/G}-module, each of the modules M_ρ is locally free.
Over the free locus on ℂ^n/G, the fiber consists of a G-cluster. Thus, considering a family of representations of a finite group G, the fiber over the non-free locus is also a G-cluster.
(2) From Λ: h(S) to Hom_Sch(S, ℂ^n/G).
Given a flat family 𝒵 of G-clusters over a scheme S, which is a subscheme of S ×ℂ^n, we wish to construct a scheme morphism δ_𝒵: S →ℂ^n/G. We consider first the following diagram:
𝒵 [d, "p_1"][r, "p_2"] ℂ^n [d, "p"]
S [r, dashed, "δ_𝒵"] ℂ^n/G
Taking note that the action of G on S is trivial, so that again, there is a decomposition of (p_1)_*(O_𝒵) = ⊕_ρ∈Irr(G) S_ρ⊗ρ.
Taking the G-invariant sections gives [(p_1)_*(O_𝒵)]^G = S_{ρ_0}, which is of rank 1 and generated by the non-vanishing global section 1. This implies that S_{ρ_0} = O_S and 𝒵/G = S.
Thus, the morphism δ_𝒵: 𝒵/G = S →ℂ^n/G is induced by the morphism p_2: 𝒵→ℂ^n.
(3) Now that we have constructed the maps, we want to show that υ and Λ are mutually inverse bijections. First, we show that Λ ∘ υ = 1_{Hom_Sch(S, ℂ^n/G)}.
By the construction of the map Λ, p_2 induces the map (p_2)/G: S = (S ×_{ℂ^n/G} ℂ^n)/G → ℂ^n/G. From the construction, S is the categorical quotient for p_1. Thus, by the universal property of the categorical quotient applied to the morphism p ∘ p_2, γ_S = (p_2)/G.
(4) We now show that υ ∘ Λ = 1_{h(S)}. This amounts to showing that 𝒵 = S ×_{ℂ^n/G} ℂ^n.
Consider the inclusion morphism i: 𝒵↪ S ×_ℂ^n/Gℂ^n, which is a closed immersion (via the closed immersion S ×_ℂ^n/Gℂ^n ↪ S ×_ℂℂ^n), induced by the universal property of the fiber product diagram. Then we have the exact sequence:
0 →ℐ→ O_S ×_ℂ^n/Gℂ^n→ i_*(O_𝒵) → 0
where ℐ is the kernel of O_S ×_ℂ^n/Gℂ^n→ i_*(O_𝒵).
Because p_1 is finite, the pushforward functor (p_1)_* is exact:
0 → (p_1)_*ℐ→ (p_1)_*(O_S ×_ℂ^n/Gℂ^n) → (p_1)_*(i_*(O_𝒵)) = (p_1)_*(O_𝒵) → 0
Because both S ×_{ℂ^n/G} ℂ^n and 𝒵 are flat families of G-clusters over S, the sheaf (p_1)_*(O_𝒵) is flat and every fiber of both (p_1)_*(O_{S ×_{ℂ^n/G} ℂ^n}) and (p_1)_*(O_𝒵) is a G-cluster. Taking the fibers over s ∈ S in the exact sequence, both [(p_1)_*(O_{S ×_{ℂ^n/G} ℂ^n})](s) := (p_1)_*(O_{S ×_{ℂ^n/G} ℂ^n})_s ⊗ (O_{S,s}/𝔪_{S,s}) and [(p_1)_*(O_𝒵)](s) have the same dimension as vector spaces over ℂ, hence are isomorphic. This leaves the fiber [(p_1)_*ℐ](s) = 0 for all s ∈ S. By the Nakayama lemma for local rings applied to the coherent sheaf (p_1)_*ℐ (ℐ is coherent and p_1 is finite), the stalk ((p_1)_*ℐ)_s = 0, which implies that (p_1)_*(ℐ) = 0.
Again, because p_1 is finite (hence affine), the natural map 0 = (p_1)^*(p_1)_*(ℐ) → ℐ is surjective, which implies that ℐ = 0. It follows that O_{S ×_{ℂ^n/G} ℂ^n} ≅ i_*(O_𝒵), and so 𝒵 = S ×_{ℂ^n/G} ℂ^n.
The G-clusters corresponding to the points on the boundary divisor of ℂ^n/G are non-reduced closed subschemes.
Given a log pair of the quotient variety (ℂ^2/D_2n, B̂) determined by the projection morphism π: ℂ^2 →ℂ^2/D_2n via the relation K_ℂ^2 = π^*(K_ℂ^2/D_2n + B̂) from the previous section, then the following hold for the blow-ups of the quotient variety ℂ^2/D_2n:
* The maximal resolution for (ℂ^2/D_2n, B̂) is obtained after m = (n-1)/2 (for odd n) and m = n/2 (for even n) blow-ups with the singular points of the boundary divisors as centers, which satisfies the inequality in Definition <ref>.
* For each resolution Ỹ→ℂ^2/D_2n dominated by the maximal resolution Y_max, there is a generic θ in the parameter space of G-constellations such that Ỹ≅ℳ_θ.
(1) Because X_1 has n-1 exceptional divisors, Y_1 has (n-1)/2 (for odd n) or n/2 (for even n) exceptional divisors. That Y_1 is the maximal resolution follows from Theorem <ref>, and the contraction of exceptional divisors yielding a smooth resolution follows from Corollary <ref>.
(2) Because (Y_1)_max≅ Y_1 ≅ Y, we can look instead at the iterated Hilbert scheme and naturally embed Y into ℤ_2-Hilb(ℤ_n-Hilb(ℂ^3)).
We embed the group D_2n ⊂ GL(2) ↪ SL(3) via the determinant, so that SL(3) ⊃ D_2n = ⟨[ 0 1 0; 1 0 0; 0 0 -1 ], σ := [ ϵ 0 0; 0 ϵ^-1 0; 0 0 1 ]⟩, which induces the group action of D_2n on ℂ^3 by matrix multiplication. This iterated Hilbert scheme ℤ_2-Hilb(ℤ_n-Hilb(ℂ^3)) is identified with X_{0...(m-1)} via Theorems 5.1 and 5.2 of <cit.> and Example 6.1 of <cit.>. The quotient variety ℂ^2/D_2n can be realized as a subscheme of D_2n-Hilb(ℂ^3).
We refer to the table of the normal bundles of the exceptional divisors (and their flops) 𝒩_X/E with their corresponding open covers in the same Theorems 5.1 and 5.2 of <cit.> to see which open sets cover the floppable (-1,-1) curve.
From the notation in <cit.>, under a suitable generic parameter θ^i satisfying the inequalities of Theorem 6.4 of <cit.>, we define Y_i' := ℳ_θ^i(ℂ^2) realized as moduli space of G-constellations of ℂ^2 which can be embedded in X_0...i := ℳ_θ^i(ℂ^3) realized as moduli space of G-constellations of ℂ^3.
Y_m-1' = Y_max = ℤ_2-Hilb(ℤ_n-Hilb(ℂ^2)) ⇢(T_m-1) Y_m-2' ⇢(T_m-2) ⋯ ⇢(T_1) Y_0' ⇢(T_0) ℂ^2/G ≅ G-Hilb(ℂ^2)
X_0...(m-1) = ℤ_2-Hilb(ℤ_n-Hilb(ℂ^3)) ⇢ X_0...(m-2) ⇢ ⋯ ⇢ X_0...0 ⇢ G-Hilb(ℂ^3)
Here the dashed arrows T_i in the first row are birational transformations of surfaces, the dashed arrows in the second row are flops of threefolds, and each surface in the first row embeds (↪) into the threefold below it.
The number of flops from the iterated Hilbert scheme X_0...(m-1) to the G-Hilbert scheme G-Hilb(ℂ^3) is the same as the number of exceptional divisors of the morphism f_2: Y_max→ℂ^2/G.
After each flop of a (-1,-1) curve, the number of exceptional divisors over the surface must either: (a) increase by one, (b) decrease by one, or (c) stay the same.
Furthermore, after m flops from X_0...(m-1), we eventually reach G-Hilb(ℂ^3), in which the image of Y_max must be G-Hilb(ℂ^2) ≅ℂ^2/G. Thus, all of the birational transformations over the surfaces must decrease the number of exceptional curves by one.
We examine the flop restricted to the surface, most especially the open cover containing the exceptional divisor E_i, in Figure <ref>.
Since the flopping (-1,-1) curve of the threefold lies on the surface, blowing up the threefold with the flopping curve as the center leaves the exceptional curve on the surface unchanged. Over the threefold, this produces the ℙ^1 ×ℙ^1 exceptional surface; contracting in the other direction contracts the exceptional divisor on the surface.
Because each of the surfaces Y_i' is smooth, this implies that each of the dashed arrows between the two-dimensional varieties is a blowdown morphism of a (-1)-curve, and comparing with Corollary <ref>, every resolution dominated by Y_max can be realized as a moduli space of G-constellations, which proves the main theorem.
We fix the following notations for the open sets for odd n:
U_1' ≅ Spec(ℂ[z/f_2, f_1, xy])
U_i' ≅ Spec(ℂ[(xy)^{i-1}z/f_2, f_1/(xy)^{i-1}, xy]), i ≤ m+1
U_i ≅ Spec(ℂ[(xy)^{i-1}z/f_2, f_2/(xy)^{i-2}z, zf_1/f_2]), i ≤ m+1
U_i” ≅ Spec(ℂ[zf_1/f_2, (xy)^i/f_1, f_1/(xy)^{i-1}]), i ≤ m
U_{m+2} ≅ Spec(ℂ[z^2, f_1/(xy)^m, f_2/(xy)^m z])
X_0...i = ⋃_{k=1}^{i+1} U_k” ∪ U_{i+2}' ∪ ⋃_{k=i+3}^{m+2} U_k
We fix the notations also for the open sets for even n:
U_i ≅ Spec(ℂ[(xy)^{i-1}z/f_1f_2, f_1f_2/(xy)^{i-2}z, zf_1/f_2]), i ≤ m
U_{m+1} ≅ Spec(ℂ[zf_2/f_1, f_1f_2/(xy)^{m-1}z, zf_1/f_2])
U_i' ≅ Spec(ℂ[(xy)^{i-1}z/f_1f_2, f_1^2/(xy)^{i-1}, xy]), i ≤ m
U_{m+1}' ≅ Spec(ℂ[zf_2/f_1, f_1^2/f_2^2, f_2^2/(xy)^{m-1}])
U_i” ≅ Spec(ℂ[zf_1/f_2, (xy)^i/f_1^2, f_1^2/(xy)^{i-1}]), i ≤ m-1
U_m” ≅ Spec(ℂ[zf_1/f_2, f_2^2/f_1^2, f_1^2/(xy)^{m-1}])
V_i' ≅ Spec(ℂ[(xy)^{i-2}z^2/f_2^2, xy, f_1f_2/(xy)^{i-2}z]), i ≤ m
V_{m+1}' ≅ Spec(ℂ[(xy)^{m-1}z^2/f_2^2, f_2^2/(xy)^{m-1}, f_1f_2/(xy)^{m-1}z])
V_{m+2}' ≅ Spec(ℂ[z^2, f_2^2/(xy)^{m-1}, f_1/zf_2])
V_i” ≅ Spec(ℂ[(xy)^{i-2}z^2/f_2^2, f_2^2/(xy)^{i-3}z^2, zf_1/f_2]), i ≤ m+1
V_{m+2}” ≅ Spec(ℂ[z^2, f_2^2/(xy)^{m-1}z^2, zf_1/f_2])
V_{m+3}” ≅ Spec(ℂ[z^2, f_1^2/(xy)^{m-1}, f_2/zf_1])
X_{0...i}^{m...(m-j)} = ⋃_{k=1}^{i+1} U_k” ∪ U_{i+2}' ∪ ⋃_{k=i+3}^{m-j} U_k ∪ V_{m-j+1}' ∪ ⋃_{k=m-j+2}^{m+3} V_k”
An explicit way to approach the theorem is to consider also the open sets so that U_m” ∪ U_{m+1}' cover the exceptional divisor, with coordinates E_m : (f_1 : (xy)^m) and (f_1^2 : (xy)^m).
From the proof of Theorems 5.1 and 5.2 of <cit.>, under the flopping transformation of the exceptional divisor E_m, the open sets U_m” ∪ U_{m+1}' map to the open sets U_m' ∪ U_{m+1} covering the flop of E_m.
The coordinates of the flop of E_m are Ê_m : ((xy)^{m-1}z : f_2) and ((xy)^{m-1}z : f_1f_2),
so that on the new variety X_0...(m-2), the exceptional divisor vanishes (or is contracted in the two dimensional case).
We also note the gluing between the open sets U_m” and U_m+1' via
(zf_1/f_2, (xy)^m/f_1, f_1/(xy)^m-1) ↦ (zf_1/f_2·(xy)^m/f_1, ((xy)^m/f_1)^-1, (xy)^m/f_1·f_1/(xy)^m-1)
(zf_1/f_2, f_1^2/(xy)^m-1, f_2^2/f_1^2) ↦ (zf_1/f_2·f_2^2/f_1^2, (f_2^2/f_1^2)^-1, f_1^2/(xy)^m-1·f_2^2/f_1^2)
Thus, with the realization of the iterated Hilbert scheme over the surface as the locus from earlier, the loci zf_1/f_2 = 0 and (xy)^m z/f_2 = 0 collapse the open set U_{m+1}. The open cover implies that this is identical to the blow-down of the exceptional curve Ê_m. In symbols, we have the commutative diagram:
X_0...(m-2) ⊃ U_m' ∪ U_{m+1} ⊃ Ê_m ⇠(Ψ_{m-1})⇠ E_m ⊂ U_m” ∪ U_{m+1}' ⊂ X_0...(m-1)
Y_{m-2}' ⊃ T_{m-1}(E_m) = pt ←(T_{m-1})← E_m ⊂ Y_max
with vertical inclusions i: Y_{m-2}' ↪ X_0...(m-2) and i: Y_max ↪ X_0...(m-1) making the square commute.
From the flop of E_m, we obtain another crepant resolution. Proceeding similarly with the open sets U_i” ∪ U_{i+1}' for 1 ≤ i ≤ m-1 gives the remaining crepant resolutions. This works in both cases.
§ THE TAUTOLOGICAL BUNDLES AND THE MCKAY CORRESPONDENCE
In the next sections, we explicitly construct a McKay correspondence by investigating the exceptional divisors of the maximal resolution Y_max→ℂ^2/D_2n using the realization of the maximal resolution as a moduli space of D_2n-constellations.
In this section, in the spirit of <cit.>, we consider the tautological bundles and their stacky descriptions (or equivalently their ℤ_2-equivariant sheaves) to construct the McKay Correspondence for the dihedral groups. The main result in this section is a description parallel to the proof of Theorem 1.11 in <cit.>, in particular the description of a rank n indecomposable reflexive module as an extension of two vector bundles (one of which is a line bundle) of the form 0 → O_X̃^⊕ (n-1)→M̃→ O_X̃(D̃) → 0, whose first Chern class c_1(M̃) corresponds to a vertex of the Dynkin diagram associated to X̃.
We also insert a comparison between the tautological sheaves on the stack and the tautological sheaves on the coarse moduli space to show how they behave differently in those spaces. Together with the results in the final section, we give evidence that constructing a McKay correspondence is more plausible over the stack than over the coarse moduli space.
We consider the representations of ℤ_n and D_2n as described by the following character table. For even n there are four one-dimensional representations ρ_0, ρ_0', ρ_n/2, ρ_n/2' (for odd n, only ρ_0 and ρ_0'). For 1 ≤ j ≤ n/2 (for even n) or 1 ≤ j ≤ (n-1)/2 (for odd n), the two-dimensional representations ρ_j are defined by their matrix generators: ρ_j = ⟨τ := [ 0 1; 1 0 ], σ^j := [ ϵ^j 0; 0 ϵ^-j ] (ϵ^n = 1) ⟩.
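As a quick numerical illustration (our own check, not part of the source text), the irreducibility pattern of these representations can be verified from their characters: the norm ⟨χ,χ⟩ equals 1 exactly when ρ_j is irreducible, while for even n the two-dimensional matrix representation at j = n/2 has norm 2, i.e., it splits into the two one-dimensional characters denoted ρ_{n/2} and ρ_{n/2}' above.

import numpy as np

def rep_elements(j, n):
    """All 2n matrices of the two-dimensional representation rho_j of D_2n."""
    eps = np.exp(2 * np.pi * 1j * j / n)
    sigma_j = np.diag([eps, np.conj(eps)])        # sigma acts as diag(eps^j, eps^-j)
    tau = np.array([[0, 1], [1, 0]], dtype=complex)
    rots = [np.linalg.matrix_power(sigma_j, k) for k in range(n)]
    return rots + [tau @ r for r in rots]

def char_norm(j, n):
    """<chi, chi> = (1/|G|) * sum_g |tr rho_j(g)|^2; equals 1 iff irreducible."""
    return sum(abs(np.trace(g)) ** 2 for g in rep_elements(j, n)) / (2 * n)

for n in (7, 8):  # one odd and one even example
    print(n, {j: round(char_norm(j, n), 6) for j in range(1, n // 2 + 1)})
# -> for n = 7 all norms are 1; for n = 8 the norm at j = 4 is 2, so rho_{n/2}
#    decomposes into the one-dimensional characters rho_{n/2} and rho_{n/2}'.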
Consider the diagram given by the universal subscheme 𝒵 ⊂ X_1 × ℂ^2 together with the natural projections p_X_1: X_1 × ℂ^2 → X_1 and p_ℂ^2: X_1 × ℂ^2 → ℂ^2.
Since the group D_2n acts on each of the schemes X_1 and ℂ^2, by defining [X/G] as the (quotient) stack associated to the scheme X/G, we can construct the analogous diagram [𝒵/D_2n] ⊂ [(X_1 × ℂ^2)/D_2n], with the natural projections p_[X_1/D_2n]: [(X_1 × ℂ^2)/D_2n] → [X_1/D_2n] and p_[ℂ^2/D_2n]: [(X_1 × ℂ^2)/D_2n] → [ℂ^2/D_2n].
Because ℤ_n acts trivially on X_1, there is a natural morphism ϕ: [X_1/D_2n] →[X_1/ℤ_2 ], so that the pushforward morphism ϕ_* sends a D_2n-equivariant coherent sheaf ℱ on X_1 to the ℤ_2-equivariant coherent sheaf ℱ^ℤ_n.
We also use the fact that the category of (quasi-)coherent sheaves on the quotient stack [X/G] is equivalent to the category of G-equivariant (quasi-)coherent sheaves on the scheme X, i.e. (Q)Coh([X/G]) ≅ (Q)Coh^G(X).
By Theorem <ref>, the maximal resolution and the quotient variety ℤ_n-Hilb(ℂ^2)/ℤ_2 are identical, and hence so are the corresponding boundary divisors. Therefore, by considering the 2nd root stack √((O_{ℤ_n-Hilb(ℂ^2)/ℤ_2}(B'), 1_{B'})/(ℤ_n-Hilb(ℂ^2)/ℤ_2)), we obtain the isomorphism of stacks 𝒴 ≅ [ℤ_n-Hilb(ℂ^2)/ℤ_2].
Defining 𝒵 ⊂ ℤ_n-Hilb(ℂ^2) × ℂ^2 as the universal family of ℤ_n-constellations/clusters, we consider the diagram [𝒵/D_2n] ⊂ [(ℤ_n-Hilb(ℂ^2) × ℂ^2)/D_2n], where p_[ℤ_n-Hilb(ℂ^2)/D_2n] and p_[ℂ^2/D_2n] are the natural projections onto [ℤ_n-Hilb(ℂ^2)/D_2n] and [ℂ^2/D_2n], respectively.
We have the Fourier-Mukai transforms:
Φ: D([ℂ^2/D_2n]) → D(𝒴)
𝔊 ↦ϕ_*(Rp_[ℤ_n-Hilb(ℂ^2)/D_2n]* (p_[ℂ^2/D_2n]^*(𝔊) ⊗ O_[𝒵/D_2n]))
Ψ: D(𝒴) → D([ℂ^2/D_2n])
ϵ ↦ R(p_[ℂ^2/D_2n]*) (p_[ℤ_n-Hilb(ℂ^2)/D_2n]^*(ϕ^*(ϵ)) ⊗ det(ρ_nat) ⊗ O_[𝒵/D_2n]^∨[2])
The functors Ψ and Φ are equivalences via Theorem 4.1 of <cit.>.
We define the tautological sheaf associated to a representation ρ of D_2n as
ℛ̂_ρ := Φ(O_ℂ^2⊗ρ^∨) = (p_ℤ_n-Hilb(ℂ^2)*(O_𝒵) ⊗ρ^∨)^ℤ_n = ( [⊕_ϵ∈Irr(ℤ_n)ℛ_ϵ⊗ϵ] ⊗ρ^∨)^ℤ_n,
which are ℤ_2-equivariant locally free sheaves since the ℛ_ϵ are locally free sheaves, and the ℤ_2-sheaf structure comes from the representation induced from ℤ_n to D_2n.
We also consider the morphisms f: ℤ_n-Hilb(ℂ^2) → 𝒴 and π: 𝒴 → ℤ_n-Hilb(ℂ^2)/ℤ_2, whose composition is p = π ∘ f: ℤ_n-Hilb(ℂ^2) → ℤ_n-Hilb(ℂ^2)/ℤ_2, to define the corresponding tautological sheaves on ℳ_θ = ℤ_n-Hilb(ℂ^2)/ℤ_2, for some θ ∈ Θ.
The tautological bundle ℛ := p_X_1*(O_𝒵) on ℤ_n-Hilb(ℂ^2) decomposes as:
ℛ = ⊕_ϵ∈Irr(ℤ_n)ℛ_ϵ^∘⊗ϵ.
By considering ℛ_ϵ^∘ := (ℛ⊗ϵ^∨)^ℤ_n and ℛ_ρ^∘ := (p_*(ℛ) ⊗ρ^∨)^D_2n as subsheaves of K(ℂ^2) ⊗ϵ_i^∨ and K(ℂ^2) ⊗ρ_i^∨, respectively, we have the following images of the tautological bundles on ℤ_n-Hilb(ℂ^2) under the pushforward p_*:
p_*(ℛ_ϵ_i⊗ϵ_i ⊕ℛ_ϵ_n-i⊗ϵ_n-i) = (ℛ_ρ_i⊗ρ_i)^⊕ 2 (i ≠ n/2)
p_*(ℛ_ϵ_0⊗ϵ_0) = ℛ_ρ_0⊗ρ_0 ⊕ℛ_ρ_0'⊗ρ_0'
p_*(ℛ_ϵ_n/2⊗ϵ_n/2) = ℛ_ρ_n/2⊗ρ_n/2⊕ℛ_ρ_n/2'⊗ρ_n/2'
We collect some lemmas in order to establish a parallel statement of the correspondence:
For a (Cartier) divisor D on X_1, the following hold:
* det(p_*(O_X_1(D))) = det(p_*(O_X_1)) ⊗ O_Y_1(p_*(D))
* det(p_*(O_X_1)) = O_Y_1(L), where L is a ℚ-divisor satisfying 2L = -(B_1 + B_2) (for even n) and 2L = -B_3 (for odd n). Please refer again to (<ref>) and (<ref>) for the definition of the ramification divisors. This shows that the sheaf O_Y_1(L) is a line bundle.
* p_*(O_X_1) = O_Y_1⊕ O_Y_1(L)
The statements (1) and (2) are essentially (Ch. IV.2, Ex. 2.6) of <cit.>, but we provide the complete proof. In the following, we distinguish K_X as the canonical divisor, and ω_X = O_X(K_X) is the canonical sheaf.
For an effective divisor D_1, consider the exact sequence:
0 → O_X_1→ O_X_1(D_1) → O_D_1(D_1) → 0.
Applying the push-forward p_* gives the exact sequence:
0 → p_*(O_X_1) → p_*(O_X_1(D_1)) → p_*(O_D_1(D_1)) → 0.
Taking the Chern classes over the exact sequence gives statement (1) for the effective divisor D_1.
If D is not effective, we can write D as D = D_1 - D_2, where both D_1 and D_2 are effective. We have shown the statement for the effective divisor D_1. For the divisor -D_2, we apply the similar sequence:
0 → O_X_1(-D_2) → O_X_1→ O_D_2→ 0.
Applying the push-forward p_* and taking the Chern classes gives the statement (1) for any divisor D = D_1 - D_2.
By duality for a finite flat morphism (Ch. III.6, Ex. 6.19 of <cit.>), with ω_Y_1 the dualizing sheaf of Y_1, the sheaf p^!(ω_Y_1) := ℋom(p_*(O_X_1), ω_Y_1)^∼ (where, for a p_*(O_X_1)-module ℳ, ℳ^∼ is the associated O_X_1-module, as seen in Ch. II.5, Ex. 5.17 of <cit.>) is a dualizing sheaf for X_1, so that ω_X_1 ≅ p^!(ω_Y_1) and we have the isomorphism:
p_*(O_X_1) ≅ p_*ℋom_X_1(ω_X_1, ω_X_1) ≅ℋom_Y_1(p_*(ω_X_1), ω_Y_1) ≅ (p_*(ω_X_1))^∨⊗ω_Y_1.
Taking the Chern classes gives the isomorphism:
det(p_*(ω_X_1)) ≅det(p_*(O_X_1))^-1⊗ω_Y_1^⊗ 2.
By plugging D = K_X_1 into statement (1), we compare different expressions for det(p_*(ω_X_1)):
det(p_*(O_X_1))^-1⊗ω_Y_1^⊗ 2 ≅det(p_*(O_X_1)) ⊗ O_Y_1(p_*(K_X_1))
det(p_*(O_X_1))^⊗ 2 ≅ O_Y_1(2K_Y_1 - p_*(K_X_1)).
From the relation K_X_1 = p^*(K_Y_1 - L), where -2L = B_1 + B_2 (in the even n case) and -2L = B_3 (in the odd n case), taking the pushforward p_*, we then get statement (2):
p_*(K_X_1) = 2K_Y_1 - 2L
det(p_*(O_X_1))^⊗ 2 ≅ O_Y_1(2L).
Because ℤ_2 acts trivially on Y_1 and p is a finite flat morphism, p_*(O_X_1) decomposes into a ℤ_2-invariant locally free sheaf and ℤ_2-anti-invariant locally free sheaf; or more precisely:
p_*(O_X_1) = ℱ_0 ⊕ℱ_1 ⊗δ,
where δ is the nontrivial representation of ℤ_2.
Certainly ℱ_0 = O_Y_1, and it follows that ℱ_1 = det(p_*(O_X_1)) = O_Y_1(L), showing (3).
There exists a (Weil) divisor D on the maximal resolution Y_max such that:
* D is transversal to the exceptional divisor E_k; D · E_k = 1, for some 1 ≤ k ≤ m.
* D · E_j = 0, j ≠ k.
* For even n, D does not coincide with either of B_1 or B_2; D ≠ B_1, B_2.
We prove this in odd n case because the same argument will work in the even n case.
We refer to the open sets of the maximal resolution from Remark <ref>.
Consider the locus in ℂ^2/G defined by the equation W_k: f_1 - (xy)^k = 0 (1 ≤ k ≤ m = (n-1)/2).
We shall illustrate for k = 1 as the rest of the cases can be performed very similarly.
On U_1”, the locus W_1 is the line xy/f_1 = 1, which only intersects the coordinate axis at (xy/f_1, f_1) = (1,0), which corresponds to a point on the exceptional divisor E_1.
By the gluing between the open sets U_1” and U_2”, this identifies the same point (xy/f_1, f_1) = (1,0) = (f_1/xy, (xy)^2/f_1).
Once again, by the gluing between the open sets U_2” and U_3” (and so on), the following equation defines the locus, which has no intersection with the coordinate axes of U_i” for i ≠ 1, 2; this implies that there is no intersection with the exceptional divisors other than E_1:
0 = f_1/xy - 1 = ((xy)^{k+1}/f_1)^{k-1} · (f_1/(xy)^k)^{k-1} · (f_1/(xy)^k) - 1.
For odd n, the boundary divisor B_3 is defined by a quadratic equation via Remark <ref>, hence, any divisor transversal to E_m must intersect the boundary divisor.
From the equivalence of categories Coh^ℤ_2(X_1) ≅ Coh(𝒴):
O_X_1⊗δ ↦ O_𝒴(𝒞) := O_𝒴(ℬ_3 - 𝒟) if n is odd, and O_𝒴(ℬ_1 - ℬ_2) if n is even,
where ℬ_i is a prime divisor on 𝒴 such that 2ℬ_i = π^-1(B_i) (this is the stacky locus), and 𝒟 := π^-1(D), where D satisfies the conditions in Lemma <ref>. (Refer to (<ref>) and (<ref>) for the definition of B.)
We comment first on the derivation for this description. The ℤ_2-equivariant sheaf O_X_1⊗δ is a torsion element of Pic^ℤ_2(X_1) of order 2, so that the corresponding sheaf on the stack is also a torsion element of Pic(𝒴) of order 2. Because of the following relations:
O_𝒴 (2ℬ_3 - 2𝒟) = π^*(O_Y_1(B_3 - 2D)) ≅π^*(O_Y_1) = O_𝒴
O_𝒴 (2ℬ_1 - 2ℬ_2) = π^*(O_Y_1(B_1 - B_2)) ≅π^*(O_Y_1) = O_𝒴
it follows that O_𝒴(ℬ_3 - 𝒟) and O_𝒴(ℬ_1 - ℬ_2) are torsion elements of Pic(𝒴). It is important to point out that these are distinct divisors, by considering ℬ_i = [B_i/ℤ_2]. Another way is to consider the Fourier-Mukai images of ℬ under Φ via Theorem <ref>. The stabilizer groups of ℬ_i and 𝒟 are ℤ_2 and {e} (except for the intersection points with ℬ_i), respectively.
We further note that such D exists by the previous Lemma <ref>.
We consider the following commutative diagram, which defines the equivalence between the two categories; our main argument is to trace through it. The square has horizontal maps pr_X_1: X_1 ×ℤ_2 → X_1 and π: X_1 →𝒴, and vertical maps a: X_1 ×ℤ_2 → X_1 (the action map) and π: X_1 →𝒴, so that π ∘ a = π ∘ pr_X_1.
This gives the isomorphism α: a^*π^*(O_𝒴(𝒞)) ≅ pr_X_1^*π^*(O_𝒴(𝒞)).
By considering the fiber at each point, the stabilizer group of points on 𝒞 is ℤ_2, while for the other points it is {e}. This induces a non-trivial action of ℤ_2, and thus α is not the identity.
The tautological bundles on the stack 𝒴 = [ℤ_n-Hilb(ℂ^2)/ℤ_2] are described by the following:
Table: the tautological sheaves for odd n (bundle; description; Chern class).
ℛ̂_ρ_0; 𝒪_𝒴; 0.
ℛ̂_ρ_0'; 𝒪_𝒴(ℬ_3 - 𝒟); ℬ_3 - 𝒟.
ℛ̂_ρ_i (rank 2); 0 →𝒪_𝒴→ℛ̂_ρ_i→𝒪_𝒴(𝒟_i + ℬ_3 - 𝒟) → 0; 𝒟_i + ℬ_3 - 𝒟.
Table: the tautological sheaves for even n (bundle; description; Chern class).
ℛ̂_ρ_0; 𝒪_𝒴; 0.
ℛ̂_ρ_0'; 𝒪_𝒴(ℬ_1 - ℬ_2); ℬ_1 - ℬ_2.
ℛ̂_ρ_i (rank 2); 0 →𝒪_𝒴→ℛ̂_ρ_i→𝒪_𝒴(𝒟_i + ℬ_1 - ℬ_2) → 0; 𝒟_i + ℬ_1 - ℬ_2.
ℛ̂_ρ_(n/2); 𝒪_𝒴(ℬ_2); ℬ_2.
ℛ̂_ρ_(n/2)'; 𝒪_𝒴(ℬ_1); ℬ_1.
The rank one tautological bundles on the stack are uniquely determined by their Chern classes, and the rank two tautological bundles are determined by an extension of two line bundles. Furthermore, there is only one possible non-trivial extension class, making these descriptions unique.
We write an exact sequence involving (rank two) tautological sheaves ℛ̃_ρ_i (as ℤ_2-equivariant sheaves) similar to the proof of Theorem 1.11 of <cit.>:
0 →𝒪_X_1→𝒪_X_1(D̃_i) ⊕𝒪_X_1(g ·D̃_i) = ℛ̃_ρ_i→𝒪_X_1(D̃_i + g ·D̃_i) ⊗δ→ 0,
where the first map i is the inclusion, δ is the nontrivial representation of ℤ_2 (and g corresponds to the non-trivial element of ℤ_2), and the second map is pr(h_1, h_2) = h_1 - h_2.
We show that this extension over X_1 is unique, which implies uniqueness for the tautological sheaves on the stack. We let F = ΣẼ_i be the fundamental cycle. In the following, ℤ_2-Ext^1_X(-,-) is defined as the ℤ_2-invariant part of Ext_X^1(-,-).
ℤ_2-Ext_O_X_1^1(𝒪_X_1(D̃_i + g ·D̃_i) ⊗δ, 𝒪_X_1) = ℤ_2-Ext_O_X_1^1(𝒪_X_1, 𝒪_X_1(- D̃_i - g ·D̃_i) ⊗δ)
= H^1(X_1, 𝒪_X_1(- D̃_i - g ·D̃_i) ⊗δ)^ℤ_2
= H^1(F, 𝒪_X_1(- D̃_i - g ·D̃_i)|_F)^ℤ_2
= H^1(F, 𝒪_F(- D̃_i - g ·D̃_i))^ℤ_2
≅ℂ
which is one dimensional over ℂ. Thus, there is only one possible non-trivial extension (up to scalar).
From Lemma <ref>, this corresponds, over the global quotient stack 𝒴, to
0 →𝒪_𝒴→ℛ̂_ρ_i→𝒪_𝒴(𝒟_i + 𝒞) → 0,
where ℛ̂_ρ_i and 𝒪_𝒴(𝒟_i) are the images under the isomorphism of the tautological sheaf ℛ̃_ρ_i and the invertible sheaf 𝒪_X_1(D̃_i + g ·D̃_i), respectively.
The rank one tautological sheaves ℛ̂_ρ_0', ℛ̂_ρ_n/2, ℛ̂_ρ_n/2' are obtained via the commutative diagram, where Φ^ℤ_n is the equivalence defined in <cit.>:
D(coh^D_2n(ℂ^2)) --Φ--> D(coh^ℤ_2(X_1))
D(coh^ℤ_n(ℂ^2)) --Φ^ℤ_n--> D(coh(X_1))
with vertical forgetful functors "for" on both sides making the square commute.
We demonstrate this for ρ_n/2; the rest of the cases are similar. From the definition of ℛ̂_ρ_n/2, it must come from a ℤ_2-lift of (Φ^ℤ_n∘for)(O_ℂ^2⊗ρ_n/2) = ℛ̃_ϵ_n/2, which must be a ℤ_2-line bundle of degree 1. But the only divisors satisfying such a property are B̃_1 and B̃_2. Since B̃_2 is invariant under the action of ρ_n/2, this describes the tautological sheaf ℛ̃_ρ_n/2 = O_X_1(B̃_2). In addition, ℛ̃_ρ_n/2' = ℛ̃_ρ_n/2⊗δ, which completes the description of the tautological sheaves on X_1.
The theory of Chern classes also works for Deligne-Mumford stacks via the theory of Chow groups with rational coefficients. This is shown in Section 3 of <cit.>.
We now compare the tautological sheaves over the stack, and the bundles over the coarse moduli space. We will see that the rank two bundles split over the coarse moduli space.
Taking the pushforward and the ℤ_2-invariants of the exact sequence above, we get the exact sequence on Y_1:
0 → O_Y_1→ℛ_ρ_i→ O_Y_1(D_i + L) → 0,
where D_i is the image p(D̃_i).
The exact sequence over Y_1 is split. Equivalently, H^1(Y_1, 𝒪_Y_1(L + D_i)) = 0.
Let Z be the fundamental cycle of the maximal resolution Y_max→ℂ^2/D_2n.
The sheaf in question is ℱ := 𝒪_Y_1(L + D_i). Computing the degrees of ℱ|_E_j, ℱ|_Z, and ℱ⊗𝒪(-nZ)|_Z:
deg(ℱ|_E_j) = (L + D_i) · E_j
deg(𝒪(Z)|_Z) = -1
deg(ℱ⊗𝒪(-nZ)|_Z) = n
Thus, H^1(ℱ|_Z) = 0, and H^1(ℱ⊗𝒪(-nZ)|_Z) = 0.
Using induction and the exact sequence:
0 →ℱ⊗𝒪(-nZ)|_Z →ℱ|_(n+1)Z→ℱ|_nZ→ 0,
we have H^1(ℱ|_nZ) = 0, for all n > 0. By the theorem of formal functions (see Ch.III, Sec.11 of <cit.>), since R^1(f_2)_*(ℱ) is supported on the origin of ℂ^2/D_2n with f_2^-1(0) = Z, this implies the lemma.
The lemma can also be applied if D_i is replaced by any of the boundary divisors B_1 or B_2 since it will use the very same argument. This is needed for the following description of tautological bundles on the coarse moduli space.
The tautological bundles on the coarse moduli space Y_1 = ℤ_n-Hilb(ℂ^2)/ℤ_2 are described by the following:
Table: the tautological sheaves for odd n (bundle; description; Chern class).
ℛ_ρ_0; 𝒪_Y_1; 0.
ℛ_ρ_0'; 𝒪_Y_1(L); L.
ℛ_ρ_i (rank 2); ℛ_ρ_i = 𝒪_Y_1⊕𝒪_Y_1(D_i + L); D_i + L.
Table: the tautological sheaves for even n (bundle; description; Chern class).
ℛ_ρ_0; 𝒪_Y_1; 0.
ℛ_ρ_0'; 𝒪_Y_1(L); L.
ℛ_ρ_i (rank 2); ℛ_ρ_i = 𝒪_Y_1⊕𝒪_Y_1(D_i + L); D_i + L.
ℛ_ρ_(n/2); 𝒪_Y_1(B_1 + L); B_1 + L.
ℛ_ρ_(n/2)'; 𝒪_Y_1(B_2 + L); B_2 + L.
Statement (3) of Lemma <ref> describes the tautological sheaves ℛ_ρ_0 and ℛ_ρ_0'. Lemma <ref> implies the description of the rank 2 tautological bundles ℛ_ρ_i.
Using an exact sequence similar to the one applied in Lemma <ref>:
0 → O_X_1→ O_X_1(B̃_1) ⊕ O_X_1(B̃_2) → O_X_1(B̃_1 + B̃_2) ⊗δ→ 0.
Taking the push-forward p_* and ℤ_2-invariants gives the exact sequence:
0 → O_Y_1→ [p_*(O_X_1(B̃_1) ⊕ O_X_1(B̃_2))]^ℤ_2→ O_Y_1→ 0.
Because Ext^1(O_Y_1, O_Y_1) = H^1(Y_1, O_Y_1) = H^1(ℂ^2/G, O_ℂ^2/G) = 0, this implies that [p_*(O_X_1(B̃_1) ⊕ O_X_1(B̃_2))]^ℤ_2 = (O_Y_1)^⊕ 2. Furthermore, by statement (1) of Lemma <ref>, we have the corresponding descriptions for ℛ_ρ_n/2 and ℛ_ρ_n/2'.
* A consequence of Theorem <ref> is that the rank 2 tautological sheaves can be expressed as an extension of two line bundles on the stack. Because the line bundles involved are torsion elements of the Picard group of the stack, as given in Lemma <ref>, they are certainly not generated by global sections, which is an obstruction to writing an exact sequence similar to Theorem 1.1 of <cit.>.
* Lemma <ref> implies that over the coarse moduli space Y_1, the rank two tautological bundles split, whereas over the stack [X_1/ℤ_2] they do not. Furthermore, a description parallel to <cit.> can be applied to stacks (especially when identified with ℤ_2-equivariant sheaves on X_1). This makes the stack a more appropriate venue to establish the McKay Correspondence for the dihedral group than the coarse moduli space.
* The rank two tautological bundles ℛ̃_ρ_i (i ≠ 0, 0', n/2, (n/2)') can be realized as an extension via the exact sequences:
0 → p^*(ℛ_ρ_i) →ℛ̃_ρ_i→ O_B̃→ 0
0 → O_Y_max→ p_*(O_ℤ_n-Hilb(ℂ^2)) → O_Y_max(L) → 0
where ℛ̃_ρ_i = O_X_1(D̃_i) ⊕ O_X_1(g ·D̃_i) and p_*(ℛ̃_ρ_i) = O_Y_max⊕ O_Y_max(D_i) ⊗ O_Y_max(L) ⊗δ. Here D_i is any transversal to the exceptional divisor E_i not intersecting E_j for j ≠ i and not intersecting the boundary divisors; correspondingly, D̃_i is a transversal to Ẽ_i not intersecting the other exceptional divisors nor the boundary divisors. The exact sequence is obtained by evaluating the Chern classes of the sheaves, using the fact that the boundary divisor(s) B̃ are affine lines.
The extension is classified by:
Ext^1(O_B̃, p^*(ℛ_ρ_i)) ≅ Ext^1(O_B̃, O(D̃_i + D̃_n-i - B̃))
≅ H^0(B̃, O(D̃_i + D̃_n-i)|_B̃)
≅ H^0(B̃, O_B̃^⊕ 2)
≅ (ℂ[B̃])^⊕ 2
§ THE MCKAY CORRESPONDENCE VIA THE TOP AND THE SOCLES
In this section, we show that the top and the socles (defined in Definition <ref>) can be described over the stacks, whereas over the coarse moduli space they are not fine enough to construct such a correspondence.
We refer to the previous section for the functors Φ and Ψ.
We wish to evaluate Φ(O_0 ⊗ρ_i^*) for ρ_i ∈Irr(D_2n).
By referring to the results in <cit.> regarding the functor Φ^ℤ_n, the computation of Φ(O_0 ⊗ρ_i^*) reduces to the task of determining ℤ_2-equivariant structures on Φ^ℤ_n(for(O_0 ⊗ρ_i^*)).
We define (P_i, ρ_i) as a representation of D_2n and (D_i, ϵ_i := (ρ_i)|_ℤ_n) as its restriction to the cyclic group ℤ_n. Furthermore, we consider the exact sequence 0 → D_i ↪ P_i → P_i/D_i → 0, so that (P_i/D_i, δ_i) is a representation of D_2n/ℤ_n ≅ℤ_2. We have the following:
Using the Fourier-Mukai transform
Φ: D^D_2n(ℂ^2) → D^ℤ_2(X_1)
𝔊 ↦ Rp_X_1* (p_ℂ^2^*(𝔊) ⊗ O_𝒵),
the images of the structure sheaves at the origin are the following:
Φ(O_0 ⊗ρ_0^*) = O_F ⊗δ_0
Φ(O_0 ⊗ρ_0'^*) = O_F ⊗δ_0'
Φ(O_0 ⊗ρ_j^*) = (O_Ẽ_j(-1) ⊕ O_Ẽ_n-j(-1))[1]
Φ(O_0 ⊗ρ_n/2^*) = O_Ẽ_n/2(-B̃_2)[1]
Φ(O_0 ⊗ρ_n/2'^*) = O_Ẽ_n/2(-B̃_1)[1]
Φ(O_0 ⊗ρ_(n-1)/2^*) = O_Ẽ_(n-1)/2(-B̃_3 ) [1]
where we refer to (<ref>), (<ref>), and (<ref>) for the definitions of Ẽ and B̃, and F is the fundamental cycle ΣẼ_i. Also, the group D_2n/ℤ_n ≅ℤ_2 fixes the subschemes F and Ẽ_i ∪Ẽ_n-i; thus ℤ_2 acts on the line bundles O_F, O_Ẽ_i(-1) ⊕ O_Ẽ_n-i(-1), and O_Ẽ_n/2(-1).
From the commutative diagram above,
Φ^ℤ_n(for(O_0 ⊗ρ_0)) = Φ^ℤ_n(O_0 ⊗ϵ_0) = O_F
Φ^ℤ_n(for(O_0 ⊗ρ'_0)) = Φ^ℤ_n(O_0 ⊗ϵ_0) = O_F
Φ^ℤ_n(for(O_0 ⊗ρ_j)) = Φ^ℤ_n(O_0 ⊗ϵ_j ⊕ O_0 ⊗ϵ_n-j) = O_Ẽ_̃j̃(-1)[1] ⊕ O_Ẽ_n-j(-1)[1]
Φ^ℤ_n(for(O_0 ⊗ρ_n/2)) = Φ^ℤ_n(O_0 ⊗ϵ_n/2) = O_Ẽ_n/2(-1)[1]
Φ^ℤ_n(for(O_0 ⊗ρ'_n/2)) = Φ^ℤ_n(O_0 ⊗ϵ_n/2) = O_Ẽ_n/2(-1)[1]
The sheaf O_Ẽ_̃j̃(-1)[1] ⊕ O_Ẽ_n-j(-1)[1] is certainly a ℤ_2-equivariant sheaf. Thus, it remains to determine ℤ_2 equivariant structures on O_F and O_Ẽ_n/2(-1).
By considering the canonical isomorphism μ_g: g^*O_F ≅ O_g^-1(F) = O_F, and given the G-sheaf structure λ_g^O_F: O_F → g^*(O_F), the composition μ_g ∘λ_g^O_F∈Hom(O_F, O_F) = ℂ. Thus μ_g ∘λ_g = c, so that λ_g = c μ_g^-1. Using the cocycle condition for G-sheaves, c = ± 1, which in turn determines the ℤ_2-equivariant structures.
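To spell out the last step (our own one-line expansion, under the standard normalization that the canonical isomorphisms compose to the identity for the order-two element g):

id_{O_F} = λ_{g^2} = g^*(λ_g) ∘ λ_g = (c · g^*(μ_g^{-1})) ∘ (c · μ_g^{-1}) = c^2 · id_{O_F},

hence c^2 = 1 and c = ±1, the two signs giving the two ℤ_2-equivariant structures on O_F.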
Similarly, this gives the ℤ_2-equivariant structures on O_Ẽ_n/2(-1), as Hom(O_Ẽ_n/2(-1), g^*(O_Ẽ_n/2(-1))) ≅Hom(O_Ẽ_n/2, g^*(O_Ẽ_n/2)) ≅Hom(O_Ẽ_n/2, O_Ẽ_n/2) ≅ℂ. Unfortunately, as a ℤ_2-equivariant sheaf, the degree -1 divisor defining O_Ẽ_n/2(-1) must be supported on the ℤ_2-fixed locus, which can only be one of the B̃_i. Since O_Ẽ_n/2(-B̃_1) (resp. -B̃_2) is invariant under the action of ρ_n/2' (resp. ρ_n/2), this completes the description of the Fourier-Mukai images of the skyscraper sheaves.
Because there is a stacky structure on the fixed points of the ℤ_2, we restate the proposition above in terms of coherent sheaves on the global quotient stack [X_1/ℤ_2].
For the next proposition, we define some notations:
We consider the morphism of schemes p: X_1 → Y_1 and stacks π : [X_1/ℤ_2] → Y_1. We define the following closed substacks on [X_1/ℤ_2]:
ℰ_i := [p(Ẽ_i ∪Ẽ_n-i)/ℤ_2] ≅ p(Ẽ_i ∪Ẽ_n-i)/ℤ_2 (i ≠ (n-1)/2, n/2, (n+1)/2)
ℰ_(n-1)/2 := [p(Ẽ_(n-1)/2∪Ẽ_(n+1)/2)/ℤ_2] =: ℰ_(n+1)/2
ℰ_n/2 := [p(Ẽ_n/2)/ℤ_2]
ℱ := [p(F)/ℤ_2]
ℬ_1 := [p(B̃_1)/ℤ_2], ℬ_2 := [p(B̃_2)/ℤ_2], ℬ_3 := [p(B̃_3)/ℤ_2]
The exceptional divisors ℰ on the stack are smooth except for ℰ_(n-1)/2, which is not smooth because the fixed point of ℤ_n-Hilb(ℂ^2) lies on the intersection of two distinct exceptional divisors.
Using the Fourier-Mukai transform
Φ: D^D_2n(ℂ^2) → D^ℤ_2(X_1) ≅ D([X_1/ℤ_2])
𝔊 ↦ Rp_X_1* (p_ℂ^2^*(𝔊) ⊗ O_𝒵),
the structure sheaves at the origin have the following images on the quotient stack [X_1/ℤ_2]:
Φ(O_0 ⊗ρ_0^*) = O_ℱ
Φ(O_0 ⊗ρ_0'^*) = O_ℱ(ℬ_1 - ℬ_2)
Φ(O_0 ⊗ρ_j^*) = O_ℰ_j(-1)[1]
Φ(O_0 ⊗ρ_n/2^*) = O_ℰ_n/2( -ℬ_2 ) [1]
Φ(O_0 ⊗ρ_n/2'^*) = O_ℰ_n/2( -ℬ_1 ) [1]
Φ(O_0 ⊗ρ_(n-1)/2^*) = O_ℰ_(n-1)/2( -ℬ_3 ) [1]
This is a restatement in terms of coherent sheaves on the global quotient stack. Furthermore, the correspondence for O_F ⊗δ_i can be found in Lemma <ref>.
Take the even n case and the divisor ℰ_n/2 for instance. The following justification uses the root stack construction; an introduction and details can be found in <cit.>, <cit.>, and <cit.>.
Since p(Ẽ_n/2)/ℤ_2 is smooth, we can realize ℰ_n/2 as the 2nd root stack
ℰ_n/2 := √((O_p(Ẽ_n/2)(p(B̃_1) + p(B̃_2)), 1)/(p(Ẽ_n/2))),
where we perform the appropriate modification of the definition of the 2nd root stack seen after Remark <ref>. Refer to Section 2.2 of <cit.> for further details.
The setup propositions and theorems earlier in this section will be used to compute the top and socles, in the hope of obtaining a description similar to the McKay correspondence of Ito-Nakamura <cit.> for the SL(2) case and Ishii <cit.> for the small GL(2) case. The top and socles can be explicitly computed via the ideals I_y defining the closed subschemes, from the quotients I_y/mI_y and (I_y : m)/I_y needed to compute the top and the socle, respectively.
The data for the McKay Correspondence is given by the following:
For a given G-constellation F on the moduli space of θ-stable G-constellations ℳ_θ:
top(F) := F/⟨ x, y ⟩F
socle(F) := { a ∈F | ⟨ x,y ⟩ a = 0}
Before we state the proposition: the maximal resolution Y_max can be realized as a moduli space of θ-stable G-constellations ℳ_θ for some generic stability parameter θ, via the isomorphisms ℳ_θ ≅ Y ≅ Y_1 ≅ Y_max. The realization of iterated Hilbert schemes as moduli spaces of G-constellations is justified in Theorem 1.5 of <cit.>. The specific stability parameter θ is computed in Table 5 of <cit.>.
For a given D_2n-constellation F on the maximal resolution Y_max,
top(F) = ρ_0 ⊕ρ_0' if [F] ∈ Exc(Y_max→ℂ^2/G), and 0 otherwise;
socle(F) =
ρ_i if [F] ∈ E_i and [F] ∉ E_j for j ≠ i,
ρ_i ⊕ρ_j if [F] ∈ E_i ∩ E_j,
ρ_{n/2-1}⊕ρ_{n/2}⊕ρ'_{n/2} if [F] ∈ E_{n/2-1}∩ E_{n/2},
ρ_{n/2}⊕ρ'_{n/2} if [F] ∈ E_{n/2} - E_{n/2-1},
where E_i is an exceptional divisor on Y_max such that, given the projection morphism p: X_1 → Y_1 ≅ Y_max, p^-1(E_i) = Ẽ_i ∪Ẽ_n-i (refer again to (<ref>) for the definition of E).
We consider the morphisms f × id_ℂ^2: ℤ_n-Hilb(ℂ^2) ×ℂ^2 →𝒴×ℂ^2 and π× id_ℂ^2: 𝒴×ℂ^2 → (ℤ_n-Hilb(ℂ^2)/ℤ_2) ×ℂ^2, whose composition is p × id_ℂ^2.
The universal flat family of ℤ_n-clusters on ℤ_n-Hilb(ℂ^2) ×ℂ^2 is denoted by 𝒰_0.
The universal flat family over the stacks 𝒰̃ and the coarse moduli space 𝒰 are defined as follows:
𝒰̃ = (f × id_ℂ^2)_*(𝒰_0); 𝒰 = (π× id_ℂ^2)_*(𝒰̃)
Our main interest here is to know the socles over the fixed point. The non-stacky points follow from the Fourier-Mukai image of the skyscraper sheaf 𝒪_ỹ⊕𝒪_g ·ỹ on ℤ_n-Hilb(ℂ^2).
For a fixed point ỹ in ℤ_n-Hilb(ℂ^2) under the ℤ_2-action and y = p(ỹ), we consider the exact sequence of skyscraper sheaves over the stack 𝒴:
0 →𝒪_ỹ⊗δ_0' →𝒪_π^-1y→𝒪_ỹ⊗δ_0 → 0
which realizes the skyscraper sheaf 𝒪_π^-1y as a nontrivial extension of two ℤ_2-equivariant sheaves.
Using the Fourier-Mukai transform Ψ, the exact sequence translates to:
0 →𝒰_0, ỹ⊗δ_0' →𝒰̃_π^-1y→𝒰_0, ỹ⊗δ_0 → 0
By the definition of the universal families, the fiber is realized as:
𝒰_x = ((π× id_ℂ^2)_* 𝒰̃)_x = 𝒰̃⊗_𝒪_𝒳𝒪_π^-1(x).
Realizing 𝒰_ỹ as a ℤ_2-invariant cluster yields that the socle of 𝒰_ỹ⊗δ_0 is ρ_n/2 and, similarly, the socle of 𝒰_ỹ⊗δ_0' is ρ_n/2'.
We first enumerate the D_2n-constellations on each open subset of the maximal resolution Y. We are mainly interested in the open sets U_{m+1}' and U_m”, which cover the exceptional divisor E_n/2:
Open set: U_m” = Spec(ℂ[(x^m + y^m)^2/(xy)^{m-1}, (x^m - y^m)^2/(x^m + y^m)^2])
ℤ_n-constellation: Z_m = {1, y, y^2, ⋯, y^{m-1}, x, x^2, ⋯, x^{m-1}, x^m - y^m}
D_2n-constellation (with α := (x^m - y^m)/(x^m + y^m)):
1, (x, y), (x^2, y^2), ⋯, x^m - y^m
α, α(y, -x), α(y^2, -x^2), ⋯, α(x^m - y^m)
Open set: U_{m+1}' = Spec(ℂ[(x^m + y^m)^2/(x^m - y^m)^2, (x^m - y^m)^2/(xy)^{m-1}])
ℤ_n-constellation: Z_m = {1, y, y^2, ⋯, y^{m-1}, x, x^2, ⋯, x^{m-1}, x^m + y^m}
D_2n-constellation (with β := (x^m + y^m)/(x^m - y^m)):
1, (x, y), (x^2, y^2), ⋯, x^m + y^m
β, β(y, -x), β(y^2, -x^2), ⋯, β(x^m + y^m)
We compute the top and the socle as follows: for example, in U_m” (and similarly for U_{m+1}'), the corresponding open set in X_1 is given by A = Spec(ℂ[(x^m + y^m)^2/(xy)^{m-1}, (x^m - y^m)/(x^m + y^m)]), so that every ℤ_n-cluster is given by the ideal I_{a,b} = ⟨(x^m + y^m)^2 - a(xy)^{m-1}, (x^m - y^m) - b(x^m + y^m), (x^{2m} - y^{2m}) - ab(xy)^{m-1}⟩. Thus x · (x^m - y^m) = x^{m+1} - xy^m = 0 (and also x ·α(x^m - y^m) = x · b(x^m - y^m) = x · b^2(x^m + y^m) = 0, and similarly for multiplication by y), by interpreting the ideals as a ℤ_n-cluster defined by the quotient ℂ[x,y]/⟨ p_i x^m - q_i y^m, x^{m+1}, y^{m+1}, xy ⟩.
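As a computational illustration of this recipe (our own sketch, not from the source; we fix m = 2 and generic scalars p = q = 1 for concreteness, so basis choices may differ from the tables above), the top and socle of such a quotient can be read off by linear algebra from the multiplication matrices on a monomial basis:

import sympy as sp

x, y = sp.symbols("x y")
m = 2
# The quotient C[x,y]/<p*x^m - q*y^m, x^(m+1), y^(m+1), x*y> with p = q = 1.
G = sp.groebner([x**m - y**m, x**(m + 1), y**(m + 1), x * y], x, y, order="grevlex")

# Monomial basis of the quotient: monomials that are their own normal form.
mons = [x**a * y**b for a in range(2 * m + 2) for b in range(2 * m + 2)]
B = [mu for mu in mons if G.reduce(mu)[1] == mu]

def mult_matrix(f):
    """Matrix of multiplication by f on the quotient ring, in the basis B."""
    M = sp.zeros(len(B), len(B))
    for jcol, mu in enumerate(B):
        r = sp.Poly(G.reduce(sp.expand(f * mu))[1], x, y)
        for irow, nu in enumerate(B):
            M[irow, jcol] = r.coeff_monomial(nu)
    return M

Mx, My = mult_matrix(x), mult_matrix(y)
# socle = common kernel of multiplication by x and y; top = F/<x,y>F.
socle = sp.Matrix.vstack(Mx, My).nullspace()
top_dim = len(B) - sp.Matrix.hstack(Mx, My).rank()
print("basis:", B)
print("socle:", [sum(v[i] * B[i] for i in range(len(B))) for v in socle])
print("dim top:", top_dim)
# The socle comes out one-dimensional, spanned by y**2 (which equals x^m = y^m
# in this quotient), while the top is spanned by the class of 1.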
Based on the character table, given that the natural representation ρ_nat is isomorphic to its dual, i.e. ρ_nat≅ρ_nat^∨, the McKay quiver for D_2n is the following, which is identical to the McKay quiver of the binary dihedral group in the SL(2) case. For odd n, the vertices ρ_0 and ρ_0' are each joined to ρ_1 by a pair of opposite arrows, followed by the chain ρ_1 ⇄ρ_2 ⇄⋯⇄ρ_{(n-1)/2}. For even n, the vertices ρ_0 and ρ_0' are each joined to ρ_1, the chain runs ρ_1 ⇄ρ_2 ⇄⋯⇄ρ_{n/2-1}, and ρ_{n/2-1} is joined to both ρ_{n/2} and ρ_{n/2}'.
Consider the following bijection:
E_i ↦ρ_i
E_n/2 ↦ρ_n/2⊕ρ_n/2'
It is imperative to comment on the possible McKay correspondence via top and socles. Compared to the representation of a binary dihedral group as a small finite subgroup of GL(2, ℂ), particularly in the even case, the exceptional divisor E_n/2 corresponds to the two-dimensional representation ρ_n/2⊕ρ_n/2'. This is because the socle fails to separate the two one-dimensional representations ρ_n/2 and ρ_n/2'. This tells us that such a correspondence is not the 'ideal' correspondence on the coarse moduli space.
Considering the dual graph of exceptional divisors on Y_max, and referring to the computation of socles over the coarse moduli space, there is a bijection between exceptional divisors and representations of the group G. More precisely:
For odd n, we consider such bijection: E_i ↦ρ_i.
However, for even n, the mapping given by
E_i ↦ρ_i
E_n/2 ↦ρ_n/2⊕ρ_n/2'
gives a bijection between two-dimensional irreducible representations of G and the exceptional divisors whose self-intersection number is -2; and the exceptional divisor corresponding to the two-dimensional decomposable representation has self-intersection number -1.
For a given D_2n-constellation F_st on the exceptional divisors over the quotient stack 𝒴 (refer to (<ref>) for the definitions),
top(F_st) = ρ_0 ⊕ρ_0';
socle(F_st) =
ρ_i if [F_st] ∈ℰ_i and [F_st] ∉ℰ_j for j ≠ i,
ρ_i ⊕ρ_j if [F_st] ∈ℰ_i ∩ℰ_j,
ρ_{n/2-1}⊕ρ_{n/2}⊕ρ'_{n/2} if [F_st] ∈ℰ_{n/2-1}∩ℰ_{n/2},
ρ_{n/2}⊕ρ'_{n/2} if [F_st] ∈ℰ_{n/2} - (ℰ_{n/2-1}∪{ℬ_1, ℬ_2}),
ρ_{n/2} if [F_st] = ℬ_1,
ρ'_{n/2} if [F_st] = ℬ_2.
Now we consider the derived equivalences Ψ and Φ and compute at the level of the global quotient stacks. We use the derived equivalence in (7.1) of <cit.> and let ϕ: D(𝒴) ≅ D^ℤ_2(X_1) be the derived equivalence between coherent sheaves on 𝒴 and ℤ_2-equivariant sheaves on X_1:
Hom_D(𝒴)^k(Φ(O_0 ⊗ρ_i^*), O_y) ≅Hom_D^ℤ_2(X_1)^k(ϕ(Φ(O_0 ⊗ρ_i^*)), ϕ(O_y)) ≅Hom_D^G(ℂ^2)^k(O_0 ⊗ρ_i^*, Ψ(O_y)).
It is imperative to notice here that ϕ(O_y) depends on whether the point is stacky or not:
ϕ(O_y) = O_y (for a stacky point y), and ϕ(O_y) = O_y ⊕ O_g · y (otherwise).
Once again, from (7.2) of <cit.>, where Z_y is the subscheme of ℂ^2 corresponding to y and ℱ^∨ := RHom_O_ℂ^2(ℱ, O_ℂ^2) is the derived dual:
Ψ(O_y) = O_Z_y^∨⊗ K_ℂ^2[2] (for a stacky point y), and Ψ(O_y) = (O_Z_y^∨⊕ O_Z_g · y^∨) ⊗ K_ℂ^2[2] (otherwise).
By Serre duality:
Hom_D^G(ℂ^2)^k(O_0 ⊗ρ_i^*, Ψ(O_y)) = G-Hom_ℂ^2^k(O_0 ⊗ρ_i^*, Ψ(O_y))
≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_i ⊗ det(ρ_nat), O_Z_y (⊕ O_Z_g · y))
≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_i ⊗ρ_0', O_Z_y (⊕ O_Z_g · y))
We are now in the position to compile each of the equivalences:
Hom_D(𝒴)^k(O_ℱ, O_y) ≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_0', O_Z_y (⊕ O_Z_g · y))
Hom_D(𝒴)^k(O_ℱ(ℬ_1 - ℬ_2), O_y) ≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_0, O_Z_y (⊕ O_Z_g · y))
Hom_D(𝒴)^k(O_ℰ_i(-1), O_y) ≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_i, O_Z_y (⊕ O_Z_g · y))
Hom_D(𝒴)^k(O_ℰ_m(-ℬ_3), O_y) ≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_m, O_Z_y (⊕ O_Z_g · y))
Hom_D(𝒴)^k(O_ℰ_m(-ℬ_2), O_y) ≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_n/2', O_Z_y (⊕ O_Z_g · y))
Hom_D(𝒴)^k(O_ℰ_m(-ℬ_1), O_y) ≅ G-Hom_ℂ^2^2-k(O_0 ⊗ρ_n/2, O_Z_y (⊕ O_Z_g · y))
Setting k=2 in the above equivalences, we obtain a more refined structure of the socles over the quotient stack.
We can define the top and the socle of a D_2n-constellation on the stacky locus of the quotient stack as half of the D_2n-constellation on the coarse moduli space. In simpler terms, this is simply the ℤ_2-invariant ℤ_n-cluster corresponding to the fixed point of ℤ_n-Hilb under the ℤ_2-action. This is reflected in the proof of the proposition.
§ REFERENCES
[AGV] D. Abramovich, T. Graber, and A. Vistoli, Gromov-Witten theory of Deligne-Mumford stacks, Amer. J. Math., Vol. 130, No. 5, 2008, pp. 1337-1398.
[AV] M. Artin and J.L. Verdier, Reflexive Modules over Rational Double Points, Math. Ann., Vol. 270, 1985, pp. 79-82.
[BCHM] C. Birkar, P. Cascini, C. Hacon, and J. McKernan, Existence of Minimal Models for Varieties of Log General Type, Journal of the American Mathematical Society, Vol. 23, No. 2, 2010, pp. 405-468.
[BKR] T. Bridgeland, A. King, and M. Reid, The McKay correspondence as an equivalence of derived categories, Journal of the American Mathematical Society, Vol. 14, 2001, pp. 535-554.
[Cad] C. Cadman, Using stacks to impose tangency conditions on curves, Amer. J. Math., Vol. 129, No. 2, 2007, pp. 405-427.
[CI04] A. Craw and A. Ishii, Flops of G-Hilb and Equivalences of Derived Categories by Variation of GIT Quotient, Duke Math. J., Vol. 124, 2004, No. 2, pp. 259-307.
[GV] G. Gonzalez-Sprinberg and J.L. Verdier, Construction géométrique de la correspondance de McKay. (French) [Geometric construction of the McKay correspondence], Ann. Sci. École Norm. Sup. (4) 16 (1983), No. 3, 409-449 (1984).
[Hart] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, 52, Springer-Verlag, New York, 1977.
[Ish02] A. Ishii, On the McKay Correspondence for a finite small subgroup of GL(2, ℂ), J. Reine Angew. Math., Vol. 549, 2002, pp. 221-233.
[Ish20] A. Ishii, G-constellations and the maximal resolution of a quotient surface singularity, Hiroshima Mathematical Journal, Vol. 50, 2020, pp. 375-398.
[IIN] A. Ishii, Y. Ito and A. Nolla de Celis, On (G/N)-Hilb of N-Hilb, Kyoto J. Math., Vol. 53, 2013, pp. 91-130.
[IU15] A. Ishii and K. Ueda, The Special McKay Correspondence and Exceptional Collections, Tohoku Math. J., Vol. 67, 2015, pp. 585-609.
[IN1] Y. Ito and I. Nakamura, McKay Correspondence and Hilbert Schemes, Proc. Japan Acad. 72 (1996), 135-138.
[IN2] Y. Ito and I. Nakamura, Hilbert Schemes and Simple Singularities, New trends in algebraic geometry (Warwick, 1996), 151-233, London Math. Soc. Lecture Notes Ser., 264, Cambridge University Press, Cambridge, 1999.
[Kaw20] Y. Kawamata, Derived McKay Correspondence for GL(3,ℂ), Adv. Math., Vol. 328, 2020, pp. 1199-1216.
[Kin94] A. King, Moduli of Representations of Finite-Dimensional Algebras, Quart. J. Math. Oxford Ser. 45 (1994), 515-530.
[KM98] J. Kollar and S. Mori, Birational Geometry of Algebraic Varieties, Cambridge Tracts in Mathematics, 134, Cambridge University Press, 1998.
[KS88] J. Kollar and N. Shepherd-Barron, Threefolds and Deformations of Surface Singularities, Inventiones Mathematicae, Vol. 91, 1988, No. 2, pp. 299-338.
[NS17] A. Nolla de Celis and Y. Sekiya, Flops and Mutations for Crepant Resolutions of Polyhedral Singularities, Asian Journal of Mathematics, Vol. 21, 2017, No. 1, pp. 1-45.
[Ols] M. Olsson, Algebraic spaces and stacks, American Mathematical Society Colloquium Publications, 62, American Mathematical Society, 2016.
[Pot17] R. Potter, Derived Categories of Surfaces and Group Actions, Thesis, University of Sheffield, 2017.
[R] M. Reid, McKay correspondence, Proc. of algebraic geometry symposium (Kinosaki, Nov 1996), T. Katsura (Ed.), pp. 14-41.
[V] A. Vistoli, Intersection theory on algebraic stacks and on their moduli spaces, Inventiones Mathematicae 97, 613-670 (1989).
[W] J. Wunram, Reflexive Modules on Quotient Surface Singularities, Mathematische Annalen 279, 583-598 (1988).
[YAb] R. Yamagishi, Moduli of G-constellations and crepant resolutions I: the abelian case, Advanced Studies in Pure Mathematics 88, 2023, McKay Correspondence, Mutation and Related Topics, pp. 159-193.
[YNAb] R. Yamagishi, Moduli of G-constellations and crepant resolutions II: the Craw-Ishii conjecture, preprint, arXiv:2209.11901 [math.AG], 46 pp.
| http://arxiv.org/abs/2405.10113v1 | 20240516140513 | Master thesis: High-rate multipartite quantum secret sharing with continuous variables | [ "Jacopo Angeletti" ] | quant-ph | [ "quant-ph" ] |
Quantum cryptography has undergone substantial growth and development within the multi-disciplinary field of quantum information in recent years. The field is constantly advancing with new protocols being developed, security measures being improved, and the first practical applications of these technologies being deployed in optical fibers and free space optical beams. In this paper, we present a comprehensive review of a cutting-edge metropolitan-scale protocol for continuous-variable quantum cryptography. The protocol allows an arbitrary number of users to send modulated coherent states to a relay, where a generalised Bell detection creates secure multipartite correlations. These correlations are then distilled into a shared secret key, providing a secure method for quantum secret-sharing. This novel approach to quantum cryptography has the potential to offer high-rate secure multipartite communication using readily available optical components, making it a promising advancement in the field.
Master thesis: High-rate multipartite quantum secret sharing with continuous variables
Jacopo Angeletti
Received April 30, 2024; accepted Month Date, Year
======================================================================================
§ INTRODUCTION
Quantum key distribution (QKD) <cit.> with continuous-variable (CV) systems <cit.> has garnered significant attention in recent years. The design of CV-based QKD protocols utilizing Gaussian quantum states of optical beams has proven to be particularly effective, and these states can now be easily produced in laboratory settings. The ideal implementation of QKD protocols that utilize CV systems <cit.> and Gaussian states <cit.> has the potential to approach the PLOB bound <cit.>, which is the ultimate limit of point-to-point communication. These advancements demonstrate the exciting progress and potential for continued development in the field of QKD with CV systems. Recently, there has been a significant push towards an end-to-end approach that can be applied to network implementations <cit.>. This approach utilizes an intermediate relay as a means of communication, allowing parties to perform measurement-device-independent (MDI) QKD protocols <cit.>, even if the relay is untrusted. This development provides a solution that can greatly benefit network implementations and has garnered significant attention in the field.
We analyze a cutting-edge multipartite protocol for secure quantum secret-sharing (QSS) that utilizes CV systems and an MDI configuration. This protocol can be easily implemented using linear optics and provides a secure method for key distribution. In this protocol, an arbitrary number of users are divided into groups and send Gaussian-modulated coherent states to an untrusted relay. A generalized multipartite Bell detection is performed at the relay and the results are publicly broadcast. QSS enables the distribution of a secret key among all users, which requires their collaboration for validity. In the case of non-collaboration, a threshold behavior is manifested and allows for the detection of “dummy" users, leading to the potential abort of the protocol. This multipartite protocol based on CV systems and MDI configuration provides a promising solution for secure key distribution in a network setting.
We consider a configuration where users are distributed asymmetrically around a relay station and analyze the security of the protocol against collective attacks. In this scenario, we assume that Eve uses independent entangling cloners <cit.> and analyze the asymptotic regime of many (ideally infinite) exchanged signals. The links connecting the parties to the relay are modeled as memory-less thermal-loss channels, with the assumption that users in the same ensemble share both common transmissivity and thermal noise. Under these realistic conditions, we demonstrate that the protocol is suitable for metropolitan-scale areas. For example, the ultimate limit for bipartite secure communication still allows for the establishment of a secret key between two groups in a noisy environment within a radius of 10 km.
The paper is organized as follows: in Sec. <ref>, we describe the communication scheme. Sec. <ref> focuses on the analysis of bi-partitions of users for a thermal-loss channel. In Sec. <ref>, we examine two specific configurations, referred to as the Y- and X-schemes, which allow for secure secret-sharing among three and four groups, respectively. Finally, in Sec. <ref>, we summarize our findings and provide concluding remarks. To facilitate a deeper understanding of the protocol, the mathematical tools used in our analysis are provided in the appendices.
§ DESCRIPTION OF THE COMMUNICATION SCHEME
We provide the definition of a generic secret-sharing protocol as follows:
An (M,N)-threshold scheme is a procedure for dividing a message into N pieces, called shadows or shares, such that no subset of fewer than M shadows can reveal the message, but any set of M shadows can be used to reconstruct it <cit.>.
To illustrate this concept, consider the scenario of Alice setting up a launch program for a nuclear warhead from a remote location. To ensure that the launch cannot be initiated by a single person, she divides the launch code into N parts, and distributes them among N individuals. These shares are encrypted and contain no information about the original launch code individually. However, if M individuals cooperate, they would be able to reconstruct the complete launch code. This makes it more challenging for any single person to gain unauthorized access, as they would need to collude with M-1 others.
In order to perform a QSS protocol, consider an arbitrary number N of trusted users (referred to as “Bobs") arranged into M groups, with N_j users in each group, where j={1, …, M}. The sum of all users in the groups should not exceed N (see Fig. <ref>), and when ∑_j=1^MN_j=N, we refer to this as the “full-house" case. The users send random Gaussian-modulated coherent states |α_k⟩ through a thermal-loss channel Φ_j to an untrusted relay, where a generalized multipartite Bell detection is performed, as depicted in Fig. <ref>. The relay is modeled as an N-port interferometer consisting of N beamsplitters, with increasing transmittivities T_1=1 to T_k=1-k^-1 for k={2, …, N}, followed by N homodyne detections. The first output is measured in p̂, while the rest are q̂-homodyned, where q̂ and p̂ are the two quadrature operators of the optical mode such that [q̂, p̂]=2i. The outcome γ:=(p, q_2,…, q_N) is broadcast to all Bobs, who can then remove the local displacement caused by the measurements. Further mathematical details are provided in App. <ref>.
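The mode mixing implemented by this cascade can be checked numerically. The following is our own minimal sketch (not code from the source; the exact interferometer layout is given in the referenced appendix), assuming each beamsplitter T_k mixes mode k into the running combination with the convention a' = √T a + √(1-T) b, b' = -√(1-T) a + √T b; since the network is passive, the same real orthogonal matrix acts on the q̂ and p̂ quadratures.

import numpy as np

def relay_matrix(N):
    """Mode-mixing matrix of the cascade; slot 0 accumulates the running
    symmetric combination (finally p-homodyned), while the remaining slots
    carry relative coordinates (q-homodyned)."""
    S = np.eye(N)
    for k in range(2, N + 1):
        T = 1.0 - 1.0 / k                 # T_k = 1 - k^-1; T_1 = 1 is trivial
        B = np.eye(N)
        B[0, 0], B[0, k - 1] = np.sqrt(T), np.sqrt(1.0 - T)
        B[k - 1, 0], B[k - 1, k - 1] = -np.sqrt(1.0 - T), np.sqrt(T)
        S = B @ S
    return S

N = 5
S = relay_matrix(N)
assert np.allclose(S @ S.T, np.eye(N))             # passive (orthogonal) network
assert np.allclose(S[0], np.ones(N) / np.sqrt(N))  # balanced N-mode combination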
The theoretical assessment of the protocol is performed in the entanglement-based (EB) representation. In this representation, each source of coherent states is represented by a two-mode squeezed vacuum (TMSV) state ρ̂_AB, which undergoes heterodyne detection. The B̂ modes are kept at each user's station, while the  modes are sent to the relay for detection. As a result, each user is equipped with a TMSV state ρ̂_AB that has a zero mean and a covariance matrix (CM) that is equal to
V_AB =
( μI          √(μ^2-1)Z )
( √(μ^2-1)Z   μI        ),
where Z=diag{1, -1}, I=diag{1, 1}, 1≤μ:=cosh 2r ∈ℝ [We sometimes omit that μ and ω are in SNU for simplicity of notation. This is also due to the fact that it is usually not possible to carry out a dimensional check in this field.], and the modes are ordered as (q̂^A, p̂^A, q̂^B, p̂^B)^T. Here, r is the squeezing parameter. By heterodyning mode B̂, each Bob remotely prepares a coherent state |β⟩ on mode Â, the amplitude of which is modulated by a complex Gaussian with variance μ - 1. For large modulation μ≫ 1, the outcome of the measurement β≃α^* is approximately equal to the projected amplitude α.
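For concreteness, here is a small numerical sketch of this CM (our own check, not from the source): V_AB is the covariance matrix of a pure state, i.e., both of its symplectic eigenvalues equal 1 in SNU.

import numpy as np

def tmsv_cm(mu):
    """CM of the TMSV state above, modes ordered (q_A, p_A, q_B, p_B), in SNU."""
    c = np.sqrt(mu**2 - 1.0)
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    return np.block([[mu * I, c * Z], [c * Z, mu * I]])

V = tmsv_cm(5.0)
Omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
nus = np.sort(np.abs(np.linalg.eigvals(1j * Omega @ V)))[::2]
print(nus)  # -> [1. 1.]: both symplectic eigenvalues are 1, so the state is pure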
The CM of the TMSV state ρ̂_AB Eq. (<ref>), upon the action of the channel Φ_j, undergoes the transformation
V'_AB =
( x_j I   z_j Z )
( z_j Z   y I   ),
with j={1, …, M}. Here, each thermal-loss channel Φ_j is characterized by its transmissivity η_j and thermal noise ω_j, such that
x_j =η_jμ+(1-η_j)ω_j,
y =μ,
z_j =√(η_j(μ^2-1)).
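These formulas can be verified against the standard beamsplitter dilation of a thermal-loss channel (our own sketch, not code from the source): couple mode A to a thermal environment of variance ω via a beamsplitter of transmissivity η and trace out the environment.

import numpy as np

def tmsv_cm(mu):
    c = np.sqrt(mu**2 - 1.0)
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    return np.block([[mu * I, c * Z], [c * Z, mu * I]])

def thermal_loss_on_A(V, eta, omega):
    """Dilation: attach a thermal mode E (variance omega), beamsplit (A, E)
    with transmissivity eta, trace out E. Ordering: (qA, pA, qB, pB, qE, pE)."""
    VE = np.block([[V, np.zeros((4, 2))],
                   [np.zeros((2, 4)), omega * np.eye(2)]])
    S = np.eye(6)
    t, r = np.sqrt(eta), np.sqrt(1.0 - eta)
    S[0:2, 0:2], S[0:2, 4:6] = t * np.eye(2), r * np.eye(2)
    S[4:6, 0:2], S[4:6, 4:6] = -r * np.eye(2), t * np.eye(2)
    return (S @ VE @ S.T)[0:4, 0:4]

mu, eta, omega = 10.0, 0.6, 1.1
V1 = thermal_loss_on_A(tmsv_cm(mu), eta, omega)
x = eta * mu + (1.0 - eta) * omega           # x_j
z = np.sqrt(eta * (mu**2 - 1.0))             # z_j (y = mu is unchanged)
V2 = np.block([[x * np.eye(2), z * np.diag([1.0, -1.0])],
               [z * np.diag([1.0, -1.0]), mu * np.eye(2)]])
assert np.allclose(V1, V2)                   # dilation reproduces x_j, y, z_j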
After the Bell measurement and communication of the outcome γ, the modes B̂:= B̂_1⋯B̂_N are projected onto a symmetric N-mode Gaussian state (see also App. <ref>). The users are divided into M groups, each consisting of N_j members, and the global state is represented by ρ_M|γ, where M̂:= N̂_1⋯N̂_M represents all the members of the M groups. The members of each group can apply local operations (LOs) [In the EB representation, these LOs can be implemented by means of suitable interferometers, one on each side. These passive LOs can be available also at the post-processing stage, after the action of the relay, for the equivalent PM description.] on ρ_M|γ to establish a common secret key among the M groups. These local Gaussian operations concentrate the quantum correlations of all the Bobs, transforming ρ_M|γ into an effective M-mode Gaussian state ρ_M|γ with CM [Across the M-partition, plus a tensor product of thermal states for the remaining modes.] <cit.>
V_M|γ =
( Γ_11  Γ_12  ⋯  Γ_1M )
( Γ_21  Γ_22  ⋯  Γ_2M )
(  ⋮     ⋮    ⋱   ⋮   )
( Γ_M1  Γ_M2  ⋯  Γ_MM ),
where [The index k always runs from 1 to M, but we explicitly show it only once for cleanliness.]
Γ_ij = y I δ_ij - z_i z_j diag( [δ_ij∑_k≠ i N_k 𝔛^(ki) - (1-δ_ij) 𝔛^(ij)] / [∑_k≠ i N_k 𝔛^(i)], √(N_iN_j) / ∑_{k=1}^M N_k x_k ),
with δ_ij the Kronecker delta, and
𝔛^(αβ):=∏_k≠α≠β x_k.
As a result, we can consider the situation from the perspective of M aggregated entities, commonly referred to as “super users," which correspond to the M groups into which the users are divided. This is shown in Fig. <ref>. For more information on this process, refer to App. <ref>.
We consider the practical limitations that arise during the implementation of the multipartite Bell detection. The presence of inefficiencies in the detectors is accounted for by including detector efficiencies τ<1 in the homodyne measurements. This is achieved through the use of N beam splitters with transmissivity τ. In CV-Bell detections, it is possible to attain high efficiencies at both optical and telecom frequencies, both with and without fiber components. Despite the technical difficulties, homodyne detection can reach detection efficiencies of up to 90% <cit.>.
To account for the finite effects due to a limited number of exchanged signals between the parties, we must consider the reconciliation efficiency ξ of the classical codes used for error correction and privacy amplification <cit.>. Despite being crucial for extracting a secret key, this process typically has an efficiency ξ<1, with typical values ranging from ξ≃ 0.95÷ 0.985 <cit.>. In addition to this, there may be other imperfections that arise from the relay, such as the asymmetric behavior of the interferometer beamsplitters <cit.>. However, this case is nontrivial and requires numerical solutions, and is therefore not considered in this analysis.
Assuming asymptotic security and infinite Gaussian modulation [In order to reach the optimal and asymptotic performances provided by the infinite-dimensional Hilbert space.], the secret-key rate of the protocol against collective attacks is simply given by <cit.>
R=ξ I_B|γ-χ_E|γ.
For practical purposes, the secret-key rate must be optimized over the modulation parameter μ, as outlined in App. <ref>. To analyze the potential performance of the protocol, we will focus on scenarios that are suitable for experimental testing, by considering the cases with M={2, 3, 4}.
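Evaluating the Holevo bound χ_E|γ requires von Neumann entropies of Gaussian states, which are functions of the symplectic eigenvalues of the relevant CMs. The following is a minimal sketch of this standard toolbox (our own helper, assuming the (q̂_1, p̂_1, q̂_2, p̂_2, …) ordering and the SNU convention used above):

import numpy as np

def h(nu):
    """Entropic function of a symplectic eigenvalue nu >= 1, in bits."""
    if nu <= 1.0:                       # pure-mode eigenvalue contributes 0
        return 0.0
    a, b = (nu + 1.0) / 2.0, (nu - 1.0) / 2.0
    return a * np.log2(a) - b * np.log2(b)

def symplectic_eigenvalues(V):
    """Symplectic spectrum of a 2N x 2N covariance matrix V (in SNU)."""
    N = V.shape[0] // 2
    Omega = np.kron(np.eye(N), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    # eigenvalues of i*Omega*V come in pairs +/- nu_k
    return np.sort(np.abs(np.linalg.eigvals(1j * Omega @ V)))[::2]

def von_neumann_entropy(V):
    return sum(h(nu) for nu in symplectic_eigenvalues(V))

# sanity check: a single-mode thermal state of variance omega has entropy h(omega)
print(von_neumann_entropy(np.diag([1.5, 1.5])), h(1.5))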
§ RESULTS
§.§ Bipartite system
In a QSS session, users are divided into M=2 groups, referred to as group “1" and group “2", with group “2" positioned deeper [Since the interferometer consists of a cascade of beam splitters T_k, we may define a user's depth: a user whose channel is characterized by T_i is deeper than another (T_j) if i>j.] in the interferometer and serving as the decoder [We are actually inverting the roles of encoder and decoder between the groups with respect to Refs. <cit.>. This will end up with a trivial and irrelevant exchange of roles between Alice and Bob.] (see also App. <ref>). As shown in Fig. <ref>, the performance of the protocol is evaluated in terms of the secret-key rate, measured in bits per channel use, as a function of the distance d_1 between group “1" and the relay (measured in kilometers for a standard optical fiber with 0.2 dB/km attenuation). The relay is fixed at a distance of d_2 = 0 from group “2".
A splitting of the kind “X/Y" means that X% (Y%) of all users belongs to group “1" (“2").
In Fig. <ref>, we compare the optimal rate for different channel types (thermal- and pure-loss) and detection efficiencies, with group “2" fixed at the relay and group “1" at varying distances. The parameters of the thermal-loss channel are detailed in the figure caption. Our Gaussian QSS scheme achieves outstanding performance compared to qubit-based protocols, with secret-key rates that are at least three orders of magnitude higher <cit.> over comparable distances, for which one has ≲ 10^-4 bit/use at ≲ 25 km. While the symmetric configuration is limited to 3.8 km, our apparatus can reach a maximum distance of 170 km in standard optical fibers, with a high key rate of 2×10^-4 bit/use. With a clock of 25 MHz, this corresponds to a key rate of the order of 5 kbits/sec for all users. The optimal bipartition corresponds to the “full-house" case and symmetric splitting, resulting in the same best (ideal) performance as standard CV-MDI-QKD <cit.> in its asymmetric configuration (black curve). This is possible because the state ρ_2|γ is independent of the number of users N. Our results show that, while imperfections and noise do have a generally destructive effect, they do not appreciably affect the performance of the protocol (solid and dashed lines coincide). However, the presence of thermal noise with ω_1=1.1 SNU and a detection efficiency of τ=98% reduces the performance by about 150%. These results highlight the feasibility of high-rate secure CV-MDI-QKD QSS in a noisy environment at a metropolitan scale.
§.§.§ Secret-key rate versus group distance
In this study, we examine the behavior of the secret-key rate with respect to the distance of one of the two groups, while the other is fixed. We vary the distance d_i = {0.1, 1, 10} km of one of the groups from the relay. We focus on the full-house case for different splittings, including 50/50, 5/95, and 1/99. The 50/50 splitting is also analyzed in a noisy environment.
The trends [We present only the results for 0.1 km, the others being entirely analogous.] in the secret-key rate behavior with respect to the distance of a group are presented in Fig. <ref>. Our analysis shows that for a pure-loss channel, the performance is worse when the splittings are extreme and does not vary [A protocol is invariant with respect to asymmetrical splittings if they lead to the same result.] under asymmetrical splittings. This indicates that there are no depth effects induced by the relay. Despite this, we are pleased to find that the ultimate limit for bipartite secure communication still allows for the establishment of a secret key between two groups within a radius of 10 km on a metropolitan scale (not shown). When studying the impact of noise, we observe that in the case of asymmetrical noise [That is, when only one of the two groups is affected by noise (we exclude the case in which both are, clearly the worst possible scenario).], the group closer to the relay can tolerate more noise in its link, and the performance is not significantly affected. At present, reasonable values of excess noise are in the range of ϵ=0.04÷ 0.05 SNU <cit.>, which expresses the remarkable tolerance of our protocol to noise, although the conversion from excess noise to thermal noise ω is not immediate.
Finally, Fig. <ref> (see also Fig. <ref>) further illustrates the impact of distance on the secret-key rate when considering non-ideal reconciliation efficiency (ξ≤ 1). As expected, the rate decreases as the distance increases, and the impact of imperfect reconciliation becomes more pronounced. The results clearly demonstrate the need for efficient reconciliation to achieve high secret-key rates over large distances.
§.§.§ Threshold behaviour
The results of this study, depicted in Fig. <ref>, showcase the threshold behavior characteristic of QSS. As depicted, when one or more users do not cooperate, the performance drops significantly, which allows for easy detection and potential termination of the session.
Our study focuses on determining the maximum distance achievable by one group of users, d_j^ max [We omit the superscript max in the figures for simplicity of sketching.], as a function of the number of users N, while keeping the distance of another group fixed at d_i. Three different types of user bipartitions were considered. As previously stated, the optimal split of 50/50 (represented by orange in the figure) has a performance that does not depend on N. Additionally, we analyze the cases where one “dummy" user is present in each group, leading to two scenarios: N_1 = N/2 - 1, N_2 = N/2 (purple) and N_1 = N/2, N_2 = N/2-1 (blue).
We also considered the scenario where two dummy users are present, resulting in three possible combinations: N_1 = N_2 = N/2 - 1 (red), N_1=N/2-2, N_2=N/2 (brown), or N_1=N/2, N_2=N/2-2 (pink). Our analysis shows that, regardless of the user group positioning, the worst effect occurs when users of the shallowest group do not cooperate, implying possible depth effects.
The introduction of reconciliation efficiency has a compressing effect on the rate, making the threshold behavior even more pronounced, as can be seen in Fig. <ref>(b).
§.§ M-partite systems: Y- and X-schemes
We present a study of a specific configuration in which there are M=3 groups, each with N_j=N/M users and a pure-loss channel of equal noise variance, ω_j=1 SNU for j={1, 2, 3}, resulting in an optimal M-partition. The details of this setup are discussed in App. <ref>. In the Y-scheme configuration, the third group, located farthest from the relay, is placed at a distance of d_3, while the first and second groups are positioned at an equal distance d_1/2 from the relay. This configuration is depicted in Fig. <ref>. Additionally, the relay can be configured to act as a switch, connecting two groups at a time, as shown in Fig. <ref>.
The impact of the distance d_1/2 of the shallowest group on the secret-key rate can be seen in Fig. <ref>, where the fixed distance of the deepest group, d_3={10 , 50 , 100 } km, is kept constant. The two sets of curves in the figure represent two different relay configurations, with the right set representing the switch case, which enhances the performance by a factor of nearly three.
The inset provides a visual comparison to highlight the scaling. When considering an analogous two-group scenario, with the deepest group, i.e., group “2", located d_2=100 km from the relay, and varying the distance d_1 of the other group, secure communication at a distance of nearly 60 km can be achieved even with a non-optimal split of N_1=2N/3 and N_2=N/3. However, adding a third group results in a drastic drop in performance that stabilizes immediately afterwards (if other groups are to be added). In a Y-scheme with an optimal three-partition, the restriction is to approximately 1 km, while four groups permit secure communication up to approximately 600 m. Finally, Fig. <ref> presents the results of the same scenario as Fig. <ref> but with non-ideal Bell detection.
Fig. <ref> demonstrates the robustness of the secret-key rate to changes in d_3 for both operational modes of the relay. The scalability of the scheme is further proven by the extension to four groups in the Y-scheme set-up shown in Fig. <ref>. The deepest group (group “4") is located at a distance d_4 from the relay, while the other three groups (“1", “2", and “3") are positioned at an equal distance d_1/2/3 from the relay. In addition, we analyze another configuration, known as the X-scheme, in which the users are distributed with an optimal M-partition of the form N_j=N/M, where M=4. In this configuration, groups “1" and “2" are positioned at a distance d_1/2 from the relay, while groups “3" and “4" are positioned at a distance d_3/4 from the relay (as shown in Fig. <ref>).
The secret-key rate of the protocol as a function of the distance d_1/2 of the first two groups, with the other distances d_3/4 fixed, is shown in Fig. <ref>. As the best protocol performance is achieved when the deepest group(s) in the interferometer is (are) located closer to the relay, we fix their distance. When distributed in an X-scheme (represented by green curves), the four groups perform better (by slightly less than 15%) than they would in a Y-scheme (represented by red curves). This is because the larger the number of users closer to the relay, the better the performance.
§ CONCLUSIONS
We have presented a novel multipartite CV-MDI-QKD protocol that enables secure quantum secret sharing among an arbitrary number of users. This protocol builds upon the asymmetric configuration from previous works <cit.> and extends the capabilities of standard CV-MDI-QKD <cit.>. Our analysis focuses on the asymptotic security of the protocol, ignoring finite-size effects and assuming individual uncorrelated attacks. Despite these limitations, the results are promising, especially considering the high level of excess thermal noise we have used in our analysis, which is even higher than what has been achieved experimentally <cit.>. Moreover, the challenges associated with modeling a correlated attack make this a highly nontrivial task both theoretically and computationally.
The performance of a M-partite CV-MDI-QKD protocol with M>2 groups has been analyzed in this study. To simplify the analysis, two specific configurations, the Y- and X-schemes, have been considered. The results show that the “switch" variant of the Y-scheme leads to improved performance. The protocol also demonstrates robustness to changes in the distance of the deepest group in the interferometer, providing a foundation for building a network of nodes.
In conclusion, it is important to keep in mind that the security of the presented protocol is only proven in the asymptotic limit of many exchanged signals and does not take into account finite-size effects. Further research is needed to improve the security and performance of the protocol. This includes the study of multipartite Bell detections, which have been limited to only a few users so far <cit.>. Alternatives such as a squeezed state protocol or a thermal-state protocol in the THz frequency range [It is attractive for the potential boosting of data rate of wireless communication.], as well as discrete modulation [In order to exploit advantage distillation and post-selection protocols, allowing to improve the achievable distance (paying a price in terms of the key-rate per use of the protocol), one may substitute the Gaussian modulation of the signal states with a discrete one.] <cit.>, may lead to improved results. Additionally, exploring other set-ups, such as smaller groups connected with two-by-two Bell-like detections, could help extend the study to more complex networks and clusters of networks. The potential for improvement in this field is vast and provides ample opportunities for future research.
§ ACTION OF THE INTERFEROMETER
The relay station in our model is represented by the N-port interferometer outlined in Sec. <ref>. This interferometer operates on the travelling modes  and is described by the symplectic linear transformation <cit.> given by
Â_1 → A_1 =1/√(N)∑_j=1^NÂ_j,
Â_k → A_k =1/√(k(k-1))[(k-1) Â_k-∑_i=1^k-1Â_i]
for k ={2, …, N}.
For clarity, we will use A instead of  to represent the travelling modes after they have undergone the transformation of the interferometer.
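As a quick numerical illustration (ours, not part of the original derivation), the transformation above is a real orthogonal, Helmert-type matrix acting identically on the q̂ and p̂ quadratures, hence symplectic; the Python sketch below builds it and checks orthogonality.

```python
# Sketch: the N-port interferometer of the equations above as a real
# orthogonal (Helmert-type) matrix; acting identically on q and p, it is
# symplectic. Illustrative code, not from the original work.
import numpy as np

def interferometer_matrix(N: int) -> np.ndarray:
    """Row k gives the output mode A_k as a combination of the inputs."""
    U = np.zeros((N, N))
    U[0, :] = 1.0 / np.sqrt(N)            # A_1 = (1/sqrt(N)) * sum_j Ahat_j
    for k in range(2, N + 1):
        norm = np.sqrt(k * (k - 1))
        U[k - 1, :k - 1] = -1.0 / norm    # -(sum_{i<k} Ahat_i)/sqrt(k(k-1))
        U[k - 1, k - 1] = (k - 1) / norm  # +(k-1)*Ahat_k/sqrt(k(k-1))
    return U

U = interferometer_matrix(5)
assert np.allclose(U @ U.T, np.eye(5))    # orthogonality check
```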
§.§ Bipartite system
To provide further clarity, let us consider the case where there are only two groups (M=2). After the interferometer has acted, the global input state is described by the CM
V_B=
(
[ y I_2N Υ; Υ^T Ξ; ]),
where, for the sake of calculational simplicity, the order of the modes has been changed to {B̂_1, …, B̂_N, A_1, …, A_N}, with B̂_j=(q̂_j^B, p̂_j^B)^T and A_j=(q_j^A, p_j^A)^T. Note that in this case, the absence of the hat symbol distinguishes the modes before and after the interferometer's action. In general, for the case M=2, the entries of the matrices Υ and Ξ can be calculated using Eq. (<ref>), where the value of ⋆ = {1, 2} depends on the group, and Λ_a, b := ax_1 + bx_2. It is noteworthy that, with a proper rearrangement of the modes, the matrix Υ is upper-triangular. To give a concrete example, let us consider the case where N=5, N_1=2, and N_2=3. This scenario is depicted by the block matrices
Υ=
(
[ z_1/√(5) -z_1/√(2) -z_1/√(6) -z_1/2√(3) -z_1/2√(5); z_1/√(5) z_1/√(2) -z_1/√(6) -z_1/2√(3) -z_1/2√(5); z_2/√(5) 0 √(2/3)z_2 -z_2/2√(3) -z_2/2√(5); z_2/√(5) 0 0 √(3)/2z_2 -z_2/2√(5); z_2/√(5) 0 0 0 2z_2/√(5); ])⊗Z,
Ξ=
(
[ Λ_2, 3/5 0 -√(2/15)Λ_1, -1 -Λ_1, -1/√(15) -Λ_1, -1/5; 0 Λ_1, 0 0 0 0; -√(2/15)Λ_1, -1 0 Λ_1, 2/3 Λ_1, -1/3√(2) Λ_1, -1/√(30); -Λ_1, -1/√(15) 0 Λ_1, -1/3√(2) Λ_1, 5/6 Λ_1, -1/2√(15); -Λ_1, -1/5 0 Λ_1, -1/√(30) Λ_1, -1/2√(15) Λ_1, 9/10; ])⊗I .
In general, Υ takes the upper-triangular form
Υ=(
[ z_1/√(N) -z_1/√(2) ⋯ -z_1/√(j(j-1)) ⋯ -z_1/√(k(k-1)) ⋯ -z_1/√(N(N-1)); ⋮ z_1/√(2) ⋯ ⋯ ⋯ ⋯ ⋯ -z_1/√(N(N-1)); z_1/√(N) 0 ⋱ ⋯ ⋯ ⋯ ⋯ ⋮; z_2/√(N) 0 ⋯ √(j-1/j)z_2 ⋯ ⋯ ⋯ -z_2/√(N(N-1)); ⋮ ⋮ ⋯ 0 ⋱ ⋯ ⋯ -z_2/√(N(N-1)); ⋮ ⋮ ⋯ ⋮ 0 √(k-1/k)z_2 ⋯ ⋮; ⋮ 0 ⋯ ⋯ ⋯ 0 ⋱ -z_2/√(N(N-1)); z_2/√(N) 0 ⋯ 0 ⋯ ⋯ 0 √(N-1/N)z_2 ])⊗Z,
and the entries of Υ and Ξ read explicitly
Υ: ⟨B̂_iA_j⟩ =0 for 1≠ i>j,
⟨B̂_mA_1⟩ =z_⋆/√(N) for all m,
⟨B̂_lA_k⟩ =-z_⋆/√(k(k-1)) for 1≠ l<k,
⟨B̂_kA_k⟩ =√((k-1)/k) z_⋆ for k≥ 2;
Ξ: ⟨ A_1^2⟩ =Λ_N_1, N_2/N,
⟨ A_k^2⟩ =Λ_2, k(k-1)-2/(k(k-1)) for k> 2,
⟨ A_2A_j⟩ =0 for all j≠ 2,
⟨ A_1A_k⟩ =-2Λ_1, -1/√(Nk(k-1)) for k> 2,
⟨ A_lA_k⟩ =2Λ_1, -1/√(l(l-1)k(k-1)) for l≠ k, with l, k> 2.
§ GENERALISED MULTIPARTITE BELL DETECTION
To perform the multipartite Bell detection, N-1 homodyne detections in the q̂-quadrature and one homodyne detection in the p̂-quadrature are carried out. Using the example in Eq. (<ref>), in the scenario with N=5, N_1=2, and N_2=3, the resulting conditional global input state is described as
V_B|γ=(
[ A C E E E; C A E E E; E E B D D; E E D B D; E E D D B; ]),
where
A =yI-(
[ 1/x_1Λ_3, 1/Λ_3, 2 0; 0 1/Λ_2, 3 ])z_1^2,
B =yI-(
[ 2/x_2Λ_1, 1/Λ_3, 2 0; 0 1/Λ_2, 3 ])z_2^2,
C =(
[ x_2 /x_11/Λ_3, 2 0; 0 -1/Λ_2, 3 ])z_1^2,
D =(
[ x_1 /x_21/Λ_3, 2 0; 0 -1/Λ_2, 3 ])z_2^2,
E =(
[ 1/Λ_3, 2 0; 0 -1/Λ_2, 3; ])z_1 z_2.
§ UNITARY ENTANGLEMENT LOCALISATION OF M-SYMMETRIC STATES
Eq. (<ref>) displays a distinctive symmetry that remains consistent for any value of M, making it possible to simplify our problem. To demonstrate this simplification, let us examine one quadrature (the same reasoning can be applied to the other). As a straightforward application of linear algebra <cit.>, let us consider a N× N matrix of the form
W_N_j:= (d_j-c_j) I_N_j+N_jc_j P_N_j
= (
[ d_j c_j c_j ⋯ c_j; c_j d_j c_j ⋱ c_j; c_j c_j d_j ⋱ c_j; ⋮ ⋱ ⋱ ⋱ ⋮; c_j c_j c_j c_j d_j ]),
where P_N_j denotes the projection matrix onto the vector v_N_j=N_j^-1/2(1, 1, …, 1)^T [It is the matrix with all elements equal to N_j^-1.]. With the above Eq. (<ref>), it is straightforward to see that the matrix is diagonal in the basis defined by v_N_j and N_j-1 orthogonal vectors, that is
W'_N_j =R_N_j^-1W_N_jR_N_j
=(
[ d_j-c_j 0 0 ⋯ 0; 0 d_j-c_j 0 ⋱ 0; 0 0 d_j-c_j ⋱ 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 0 0 0 d_j+(N_j-1)c_j ]).
The matrix R_N_j is the rotation that diagonalizes the matrix, which can be obtained from the basis of eigenvectors {e_k}_k=1^N_j of the matrix itself. It is given by R_N_j=N_j^-1/2(e_1, …, e_N_j)^T. We define
An M-symmetric state is a multi-partite state of ∑_j=1^M N_j modes characterized by its CM V_M. The state is constructed by incorporating diagonal blocks,
O_N_jN_j=(d_j-c_j) I_N_j+N_jc_j P_N_j≡W_N_j,
with the same symmetry as W_N_j, and off-diagonal blocks,
O_N_iN_j≡P_f_ij^-1 (i≠ j),
which are proportional to P_N_j and have all elements equal to f_ij.
For clarity, we present the example of a tripartite CM,
V_3=(
[ W_N_1 P_f_12^-1 P_f_13^-1; P_f_12^-1 W_N_2 P_f_23^-1; P_f_13^-1 P_f_23^-1 W_N_3; ]).
As previously stated, Eq. (<ref>) is an example of a particular V_2. By applying the same reasoning as in Eq. (<ref>), we can find the transformed CM of a general M-symmetric state, as
V'_M=⊕_i=1^MR_N_i^-1V_M⊕_j=1^MR_N_j,
whose blocks are therefore simply given by
O'_N_iN_j=R_N_i^-1O_N_iN_jR_N_j.
Thus, the transformed CM V'_M describes an effective state of M modes, since (∑_j=1^M N_j)-M of them are thermal (or vacuum) states that are uncorrelated with each other [as seen in Eq. (<ref>)]. The effective M-mode state is described by
O'_N_iN_j=[d_i+(N_i-1) c_i]δ_ij+(1-δ_ij) f_ij√(N_iN_j).
For example, the transformed CM of a 2-symmetric state takes the form
V'_2=(
[ d_1-c_1 0 ⋯ 0 0 0 ⋯ 0; 0 d_1-c_1 ⋯ 0 0 0 ⋯ 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋯ ⋮; 0 0 ⋯ d_1+(N_1-1)c_1 f_12√(N_1N_2) 0 ⋯ 0; 0 0 ⋯ f_12√(N_1N_2) d_2+(N_2-1)c_2 0 ⋯ 0; ⋮ ⋮ ⋯ ⋮ ⋮ ⋱ ⋯ ⋮; 0 0 ⋯ 0 0 ⋯ d_2-c_2 0; 0 0 ⋯ 0 0 ⋯ 0 d_2-c_2 ]),
where the central 2×2 block carries all of the correlations,
and therefore V'_2 corresponds to N_1+N_2-2 uncorrelated thermal modes, while the effective 2-mode state is described by
(
[ d_1+(N_1-1)c_1 f_12√(N_1N_2); f_12√(N_1N_2) d_2+(N_2-1)c_2 ]).
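A compact numerical check of this entanglement localisation is sketched below (our illustration; the values of d_j, c_j, and f_12 are arbitrary placeholders): the block rotations leave a single correlated 2×2 block, all remaining modes being uncorrelated.

```python
# Sketch: localisation of correlations in a 2-symmetric CM. Parameter
# values are arbitrary illustrative numbers, not from the paper.
import numpy as np

def block_rotation(n):
    """Orthogonal matrix whose first column is v_n = (1,...,1)/sqrt(n)."""
    R = np.zeros((n, n))
    R[0, :] = 1.0 / np.sqrt(n)
    for k in range(2, n + 1):
        norm = np.sqrt(k * (k - 1))
        R[k - 1, :k - 1] = -1.0 / norm
        R[k - 1, k - 1] = (k - 1) / norm
    return R.T                                 # columns = new basis vectors

N1, N2 = 2, 3
d1, c1, d2, c2, f12 = 3.0, 0.5, 4.0, 0.7, 0.9
W1 = (d1 - c1) * np.eye(N1) + c1 * np.ones((N1, N1))   # W = (d-c)I + N c P
W2 = (d2 - c2) * np.eye(N2) + c2 * np.ones((N2, N2))
V = np.block([[W1, f12 * np.ones((N1, N2))],
              [f12 * np.ones((N2, N1)), W2]])

R = np.block([[block_rotation(N1), np.zeros((N1, N2))],
              [np.zeros((N2, N1)), block_rotation(N2)]])
Vp = R.T @ V @ R
# only modes 0 and N1 remain correlated, with off-diagonal f12*sqrt(N1*N2)
assert np.isclose(Vp[0, N1], f12 * np.sqrt(N1 * N2))
```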
By induction and following the above reasoning, the general post-reduction CM of the effective M-mode state is given by Eqs. <ref> and <ref> in the main text. To clarify these expressions, let us consider the case of
V_3|γ=(
[ Γ_11 Γ_12 Γ_13; Γ_12 Γ_22 Γ_23; Γ_13 Γ_23 Γ_33; ]).
where
Γ_11 =yI-(
[ N_3x_2+N_2x_3/Θ_q 0; 0 N_1/Θ_p ])z_1^2,
Γ_22 =yI-(
[ N_3x_1+N_1x_3/Θ_q 0; 0 N_2/Θ_p ])z_2^2,
Γ_33 =yI-(
[ Λ_N_2, N_1/Θ_q 0; 0 N_3/Θ_p ])z_3^2,
Γ_12 =√(N_1N_2)(
[ x_3/Θ_q 0; 0 -1/Θ_p ])z_1z_2,
Γ_13 =√(N_1N_3)(
[ x_2/Θ_q 0; 0 -1/Θ_p ])z_1z_3,
Γ_23 =√(N_2N_3)(
[ x_1/Θ_q 0; 0 -1/Θ_p ])z_2z_3,
with Θ_q:=N_1x_2x_3+cyclics and Θ_p:=∑_j=1^3N_jx_j. The change in labeling is made to facilitate comprehension [see Eqs. <ref> and <ref>].
§ SECRET-KEY RATE
Before the action of the eavesdropper and the measurements, the global input state that describes the parties (the Bobs) and the eavesdropper (Eve) is pure and Gaussian. After her action and before the measurements, the global output state is still pure, although it may be non-Gaussian. The local measurements commute, so we can defer the Bobs' heterodyne detections until after Eve's measurement. As a result, the Bobs and Eve share a pure conditional state ρ̂_BE|γ, where we label the local modes as B̂:= B̂_1⋯B̂_N. The reduced states for the Bobs and Eve are ρ̂_B|γ and ρ̂_E|γ, respectively. Since the conditional state is pure, the von Neumann entropies S of the subsystems are equal, meaning
S(ρ̂_B|γ)=S(ρ̂_E|γ).
Analogously, in the conditional post-relay scheme, the action of the Bobs projects ρ̂_BE|γ into a pure [This occurs because heterodyne detection is a rank-1 measurement, hence the purity.] state ρ̂_BE|γβ̃^(N), yielding to
S(ρ̂_B|γβ̃^(N))=S(ρ̂_E|γβ̃^(N)).
As a consequence, the amount of information that Eve can obtain about Bobs' variables β̃^(N):={β̃_j}_j=1^N, conditioned on γ, is upper-bounded by her Holevo quantity
χ_E|γ =S(ρ̂_E|γ)-S(ρ̂_E|γβ̃^(N))
=S(ρ̂_B|γ)-S(ρ̂_B|γβ̃^(N)),
which is fully determined by the conditional state ρ̂_B|γ. Indeed, assuming asymptotic security and infinite Gaussian modulation, the secret-key rate of the protocol can then be expressed as
R=ξ I_B|γ-χ_E|γ,
where ξ<1 is the reconciliation efficiency. It is worth noting that, even though ∑_j=1^M N_j < N, Eve still has a purification of the global state due to the assumption of all trusted users, and as a result, the secret-key rate is covariant [In the sense that preserves Eq. (<ref>) in form.] <cit.>.
§ BIPARTITE SYSTEM
In the QSS protocol, the parties are divided into M=2 groups, and the effective two-mode CM can be obtained following the methods presented in Refs. <cit.>. Using the same notation, the resulting CM is given by
V_2|γ=(
[ Δ_1 Γ'; Γ' Δ_2; ]),
where, for l={1, 2}, one explicitly has
Δ_l =y-diag((N-N_l) z_l^2/Λ_N-N_1, N-N_2, N_l z_l^2/Λ_N-N_2, N-N_1),
Γ' =z_1z_2√(N_1N_2) diag(1/Λ_N-N_1, N-N_2, -1/Λ_N-N_2, N-N_1),
and again Λ_a, b:= ax_1+bx_2. One may compute the symplectic eigenvalues of the CM Eq. (<ref>) as <cit.>
ν_± = √((Δ±√(Δ^2-4 det V_2|γ))/2),
where Δ= det Δ_1+ det Δ_2+2 det Γ' and det(·) indicates the determinant. However, it is not known beforehand if the rate will be asymptotically maximum or if there exists an optimal modulation value μ that maximizes it. Our analysis shows that an optimal modulation exists whenever not all users cooperate (see below). Additionally, we also study the asymptotic trends for large modulation values in the full-house (FH) scenario, where all users participate, and find that
ν_+^(FH) →|η_1-η_2|√(N_1N_2/(𝒩_12𝒩̃_12)) μ,
ν_-^(FH) →1/|η_1-η_2|√(λ_12λ̃_12/(N_1N_2)),
with
λ_ij := N_i ω_j(1 - η_j)+N_j ω_i(1 - η_i),
λ̃_ij := N_i ω_i(1 - η_i)+N_j ω_j(1 - η_j),
𝒩_ij := N_iη_j+N_jη_i,
𝒩̃_ij := N_iη_i+N_jη_j,
which are all symmetrical with respect to the interchange of the indices i and j. The removable discontinuity at η_1=η_2 does not represent a problem, as we will prove in the next Sec. <ref>.
§.§ Conditioning: Heterodyne Detection
Assuming group “2" serves as the decoder, the conditional CM after local heterodyne detection of Eq. (<ref>) is calcalated as <cit.>
V_1|γ 2=Δ_1-Γ'(I + Δ_2)^-1Γ'^T,
which is diagonal and therefore its symplectic eigenvalue can be obtained by the symplectic invariance of its determinant, that is
ν_N^SS=√( det V_1|γ 2).
Specifically, in the FH limit, the symplectic eigenvalue is given by
ν_N^SS→1/η_1√((λ_12+N_1η_2)(λ̃_12+N_2η_2)/(N_1N_2)).
Having the total and conditional symplectic spectra [Eqs. (<ref>) and (<ref>), respectively], the Holevo quantity can be computed as
χ=h(ν_+)+h(ν_-)-h(ν_N^SS),
where the entropic function h is defined as
h(ν):=ν+1/2log_2(ν+1/2)-ν-1/2log_2(ν-1/2).
The function is equal to zero for the vacuum noise h(1)=0 and asymptotically approaches
h(ν)=log_2(eν/2)+O(ν^-1).
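For concreteness, the entropic function and the Holevo combination can be evaluated numerically; the following Python sketch (ours, not part of the original analysis) implements them directly from the definitions above.

```python
# Sketch: the entropic function h and the Holevo quantity chi.
import numpy as np

def h(nu: float) -> float:
    """Entropy of a thermal mode with symplectic eigenvalue nu >= 1; h(1) = 0."""
    if nu <= 1.0:
        return 0.0
    return ((nu + 1) / 2) * np.log2((nu + 1) / 2) \
         - ((nu - 1) / 2) * np.log2((nu - 1) / 2)

def holevo(nu_plus, nu_minus, nu_cond):
    """chi = h(nu_+) + h(nu_-) - h(nu_N^SS), as in the equation above."""
    return h(nu_plus) + h(nu_minus) - h(nu_cond)
```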
The continuity of the Holevo quantity χ in the transition from the asymmetrical to the symmetrical configuration, i.e., in {η_1=η_2, ω_1=ω_2}, must be verified. This can be done by comparing their respective symplectic eigenvalues in the FH case. When η_1=η_2:=η and ω_1=ω_2:=ω, the following holds
ν_±^(FH) →√(y(y-z^2/x))
=√([(1-η)μω+η]μ/((1-η)ω+ημ)),
in agreement with Refs. <cit.>. The same continuity holds for ν_N^SS, which converges to
ν_N^SS→√(τ_12τ_21/(τ̃_12τ̃_21)),
where we define
τ_ij := η(N_i+N_jμ)+Nω(1-η)μ,
τ̃_ij :=η(N_i+N_jημ)+Nω(1-η).
§.§ Mutual Information
The mutual information can be expressed compactly as <cit.>
I=1/2log_2Σ,
where we make the assumption that group “2" serves as the decoder, allowing us to write
Σ=(1+ det Δ_1+Tr{Δ_1})/(1+ det V_1|γ 2+Tr{V_1|γ 2}),
with Δ_1 and V_1|γ 2 defined in Eqs. (<ref>) and (<ref>), respectively. A closer examination of the denominator σ_n:=1+ det V_1|γ 2+Tr{V_1|γ 2} reveals
σ_n→
η_2(η_1+η_2)N(N-N_1-N_2)^2μ^2/(N-N_1)[N(η_1+η_2)-𝒩_12][(N-N_2)(η_1+η_2)-N_1η_2].
The (quadratic) dependence on the modulation μ highlights the importance of identifying an optimal value of μ to maximize the secure communication performance. In contrast, in the FH case, there is no such dependence, as
σ_n^(FH)→(λ_12+𝒩_12)(λ̃_12+𝒩̃_12)/(N_1N_2η_1^2).
This suggests that for a bipartite system, there is a direct relationship between full-house and asymptotic behavior. Whenever all users cooperate, the rate is maximized for high values of modulation μ≫1. On the other hand, as soon as one or more users do not cooperate, there exists an optimal modulation value that maximizes the rate.
§.§ Secret-key rate
In a thermal-loss channel with asymptotic security, the secret-key rate against collective attacks can be obtained using Eq. (<ref>), assuming perfect reconciliation (ξ=1) and utilizing an infinite Gaussian modulation as
R=1/2log_2Σ-h(ν_+)-h(ν_-)+h(ν_N^SS).
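The full chain above (CM blocks, symplectic spectra, Σ, rate) can be assembled numerically; the sketch below is our illustration, in which the post-channel entries x_j = η_jμ+(1-η_j)ω_j, y=μ, z_j=√(η_j(μ^2-1)) are an assumption consistent with the symmetric full-house limit quoted above.

```python
# A minimal numerical sketch (ours, not from the original work) of the
# bipartite rate pipeline. The parametrization of x_j, y, z_j is assumed.
import numpy as np

def h(nu):  # entropic function, as in the sketch above
    return 0.0 if nu <= 1 else ((nu+1)/2)*np.log2((nu+1)/2) - ((nu-1)/2)*np.log2((nu-1)/2)

def secret_key_rate(N, N1, N2, eta1, eta2, w1, w2, mu, xi=1.0):
    x1, x2 = eta1*mu + (1-eta1)*w1, eta2*mu + (1-eta2)*w2
    z1, z2 = np.sqrt(eta1*(mu**2-1)), np.sqrt(eta2*(mu**2-1))
    y = mu
    Lq = (N-N1)*x1 + (N-N2)*x2            # Lambda_{N-N1, N-N2}
    Lp = (N-N2)*x1 + (N-N1)*x2            # Lambda_{N-N2, N-N1}
    D1 = np.diag([y-(N-N1)*z1**2/Lq, y-N1*z1**2/Lp])
    D2 = np.diag([y-(N-N2)*z2**2/Lq, y-N2*z2**2/Lp])
    G  = z1*z2*np.sqrt(N1*N2)*np.diag([1/Lq, -1/Lp])
    V  = np.block([[D1, G], [G, D2]])
    Delta = np.linalg.det(D1) + np.linalg.det(D2) + 2*np.linalg.det(G)
    disc = np.sqrt(max(Delta**2 - 4*np.linalg.det(V), 0.0))
    nu_p, nu_m = np.sqrt((Delta+disc)/2), np.sqrt((Delta-disc)/2)
    Vc = D1 - G @ np.linalg.inv(np.eye(2) + D2) @ G.T   # heterodyne on group 2
    nu_c = np.sqrt(np.linalg.det(Vc))
    Sigma = (1 + np.linalg.det(D1) + np.trace(D1)) / (1 + np.linalg.det(Vc) + np.trace(Vc))
    return xi*0.5*np.log2(Sigma) - (h(nu_p) + h(nu_m) - h(nu_c))

# full-house example: the rate grows with the modulation, as stated below
print(secret_key_rate(N=4, N1=2, N2=2, eta1=0.4, eta2=1.0, w1=1.0, w2=1.0, mu=1e4))
```

Outside the full-house case one would scan over μ to locate the optimal modulation.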
One may determine its FH asymptotic limit, resulting in
R^asy =log_2(2η_1η_2/|η_1-η_2|√(N_1N_2/((λ_12 + 𝒩_12)(λ̃_12 + 𝒩̃_12))))
-h(1/|η_1 - η_2|√(λ_12λ̃_12/(N_1 N_2)) )
+h[1/η_1√((λ_12 + N_1 η_2)(λ̃_12+ N_2 η_2)/(N_1 N_2))].
The asymmetric configuration, when ideal conditions are met, enables secure long-distance communication. In particular, for η_2=1 (which corresponds to a distance of 0, for the second user), the secret-key rate expression in Eq. (<ref>) simplifies to
R^asy(η_2=1) =log_2(2η_1/e(1-η_1)√(N_1N_2/{N_1+N_2[ω_1(1-η_1)+η_1]}{N_2+N_1[ω_1(1-η_1)+η_1]}))
-h(ω_1)+h{1/η_1√([N_1+N_2ω_1(1-η_1)][N_2+N_1ω_1(1-η_1)]/N_1 N_2)}.
Under the condition that group “1" has pure-loss links (ω_1=1), Eq. (<ref>) can be further simplified to
R^asy (η_2=1, ω_1=1)=log_2[2η_1√(N_1N_2)/(e(1-η_1)N)]
+h{1/η_1√([N_1+N_2(1-η_1)][N_2+N_1(1-η_1)]/N_1 N_2)}.
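As an illustration (ours), this last expression can be evaluated directly as a function of the group-1 fiber distance; the 0.2 dB/km loss figure is an assumption for standard fiber.

```python
# Sketch: the pure-loss, full-house asymptotic rate above versus the
# distance of group 1 (eta_1 from an assumed 0.2 dB/km fiber loss).
import numpy as np

def h(nu):  # entropic function, as in the sketch above
    return 0.0 if nu <= 1 else ((nu+1)/2)*np.log2((nu+1)/2) - ((nu-1)/2)*np.log2((nu-1)/2)

def R_asy_pure_loss(d_km, N1, N2):
    eta1 = 10 ** (-0.02 * d_km)
    N = N1 + N2
    nu = np.sqrt((N1 + N2*(1 - eta1)) * (N2 + N1*(1 - eta1)) / (N1 * N2)) / eta1
    return np.log2(2 * eta1 * np.sqrt(N1 * N2) / (np.e * (1 - eta1) * N)) + h(nu)

for d in (1, 5, 10, 25):
    print(d, R_asy_pure_loss(d, N1=2, N2=2))
```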
The rate Eq. (<ref>) is based on the assumption of infinite use of the relay channel. However, this can be closely approximated after a large but finite number of rounds, as demonstrated in Fig. <ref>. The fast convergence of the rate expressed in Eq. (<ref>) to its asymptotic value is particularly noteworthy. Additionally, it is observed that the configurations “X/Y" and “Y/X" show the same behavior, indicating that there are no depth effects introduced by the relay. Furthermore, when N_1 + N_2 < N, there exists an optimal modulation value μ that maximizes the rate, as confirmed by Fig. <ref> in the case of a pure-loss channel with ω_1 = ω_2 = 1.
§.§ Non-ideal Bell detector
In the bipartite asymmetrical scenario, the transformation rule Eq. (<ref>) is represented as
(N-N_1)x_1 +(N-N_2)x_2
↦(N-N_1)x_1+(N-N_2)x_2+N(1-τ)/τ,
(N-N_2)x_1 +(N-N_1)x_2
↦(N-N_2)x_1+(N-N_1)x_2+N(1-τ)/τ.
This generalizes the results from Refs. <cit.>. Furthermore, in the case of a FH scenario, with N_1+N_2=N, Eq. (<ref>) simplifies to
x_j↦ x_j+(1-τ)/τ, j={1, 2}.
One can then generalize Eq. (<ref>) by following the approach presented in Ref. <cit.> and applying the transformations from Eq. (<ref>). Although Eq. (<ref>) is not expressed in terms of x_j, y, and z_j, but rather in terms of the channel parameters η_j, ω_j, and the modulation μ, making this substitution non-trivial, after developing the usual analysis, it can be shown that, in the FH case, the asymptotic non-ideal secret-key rate for two groups with a thermal-loss channel is given by
R^asy =log_2(2τη_1η_2/|η_1-η_2|√(N_1N_2/(L_12L_21)))-h(1/(τ|η_1 - η_2|)√(S_12 S_21/(N_1 N_2)))
+h(1/(τη_1)√(R_12R_21/(N_1 N_2))),
where
S_ij =N_i[1-τ+τω_1(1-η_1)]
+N_j[1-τ+τω_2(1-η_2)],
R_ij =N_i[1-τ+τω_1(1-η_1)]
+N_j[1-τ(1-η_2)+τω_2(1-η_2)],
L_ij =N_i[1-τ(1-η_1)+τω_1(1-η_1)]
+N_j[1-τ(1-η_2)+τω_2(1-η_2)].
§ M-PARTITE SYSTEMS: Y- AND X-SCHEMES
The standard procedure is followed when analyzing the security of a system with more than two groups. However, when dealing with multiple groups, it is important to pay attention to the rate. The smallest possible secret-key rate is obtained by subtracting the maximum Holevo quantity χ from the minimum mutual information I between any two groups. This is because the rate between different groups can vary and a potential eavesdropper may attack the group with the lower rate. To consider the worst-case scenario, we need to take the lowest possible rate into account.
To optimize the system, we must first find the symplectic eigenspectrum of V_M|γ [see Eq. (<ref>)]. The conditioning of the system can then be performed in M different ways, and we need to find the method that maximizes the Holevo quantity or minimizes the conditional von Neumann entropy S_cond. To do this, we perform local heterodyne detection on V_M|γ and measure the group that is furthest from the relay, as this group will result in the minimum conditional entropy [This behaviour does not depend on the modulation μ. If several equidistant groups lie furthest from the relay, it makes no difference which one is considered.].
For the switch to function correctly, we must consider all combinations of two groups and perform a heterodyne measurement. These are characterised by the M(M-1)/2 matrices V_yz, each 4×4, built from V_M|γ with the blocks Γ_yy, Γ_zz, and Γ_yz of the two groups “Y" and “Z" of interest, with y, z={1, …, M}. The minimum is still obtained by measuring the group that is furthest from the relay.
The correct mutual information Eq. (<ref>) can be determined by considering all the M(M-1)/2 Σ-matrices two-by-two. In the tripartite case, we label Σ_xy|ζ such that Γ_xx is the numerator block and V_y|ζ the denominator one. Given that the groups lie at distances d_x≥ d_y≥ d_z from the relay, the minimum mutual information is obtained by considering groups “Z" and “X" and conditioning the measurement on group “Y". The secret-key rate is then given by Eq. (<ref>) after optimizing the modulation μ.
|
http://arxiv.org/abs/2405.09411v1 | 20240515150730 | Spin-spin Interactions in General Relativity versus Linearized Massive Gravity: $N$-body Simulations | [
"Eren Gulmez",
"Bayram Tekin"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-th",
"math-ph",
"math.MP"
] |
Spin-spin Interactions in General Relativity versus Linearized Massive Gravity: N-body Simulations

Eren Gulmez (eren.gulmez@metu.edu.tr) and Bayram Tekin (btekin@metu.edu.tr)
Department of Physics, Middle East Technical University, 06800, Ankara, Turkey
We simulated spin-spin interactions of N bodies in linearized General Relativity (GR) and linearized Massive Gravity of the Fierz-Pauli type (mGR). It was noted earlier that there is a discrete difference between the spin-spin interaction potentials in GR and mGR for a 2-body system, akin to the van Dam-Veltman-Zakharov discontinuity in the static Newton's potential. Specifically, at large distances, GR favors anti-parallel spin orientation with the total spin pointing along the interaction axis, while mGR favors parallel spin orientation with the total spin perpendicular to the axis between the sources.
For an N-body system, a simulation in mGR has hitherto not been done, and one would like to know the total spin of the system in both theories. Here we remedy this. In the simulations of GR, we observed that the total spin tends to decrease from a random initial configuration, while for mGR at large separations the total spin increases.
§ INTRODUCTION
General Relativity (GR) and Massive Gravity (mGR) have different implications on the spin orientations of two spinning point-like sources at large separations; and they lead to different total spins of the system. In the linearized GR, the potential energy expression between two sources is given by <cit.>
U_GR^spin-spin= -G/r^3 (J⃗_⃗1⃗·J⃗_⃗2⃗ -3J⃗_⃗1⃗·r̂J⃗_⃗2⃗·r̂ ),
where J⃗_⃗i⃗ are the spins of the localized sources and r⃗ is the radial vector between them. On the other hand, the potential energy function in mGR is a little more complicated: <cit.>
U_mGR^spin-spin= -G e^-x(1 + x + x^2)/r^3 ( J⃗_⃗1⃗·J⃗_⃗2⃗ -3 J⃗_⃗1⃗·r̂J⃗_⃗2⃗·r̂(1 + x + 1/3x^2)/1 + x + x^2 ),
where x:= m_g c r/ħ and m_g is the mass of the graviton, which is assumed to be non-zero but very small. The Yukawa decay is expected, but the relative coefficient between the two spin-spin interaction terms also gets modified in a mass- and distance-dependent way. One should note that as r →∞, the potential energy function (<ref>) reduces, up to the overall Yukawa factor e^-x(1 + x + x^2), to
U_mGR^spin-spin→ -G/r^3 (J⃗_⃗1⃗·J⃗_⃗2⃗ -J⃗_⃗1⃗·r̂J⃗_⃗2⃗·r̂ ).
Comparing (<ref>) and (<ref>), one realizes that the relative factor of 3 in the former becomes 1 in the latter, and that makes all the difference when one considers the total spin of the system. In the GR case, the total spin of a 2-body system is minimized, while in the latter it is maximized. One can find the analytical proof of this statement in the appendix of <cit.>. The final, equilibrium spin configurations are depicted in Figure 1 (<ref>) and Figure 2 (<ref>). One rather curious observation is the following: in the ħ=1, c=1 units, the distance between the spinning masses in terms of the inverse graviton mass plays a crucial role, as the structure of the spin-spin interaction in mGR changes character. Namely, for separations r ≤1.62/m_g, the spin-spin interaction of mGR reduces to that of GR, while for r > 1.62/m_g, they differ discretely as noted above. The approximate value 1.62 is the Golden ratio (1+√(5))/2.
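To make the comparison concrete, the two potentials (<ref>) and (<ref>) can be evaluated directly; the following sketch (ours, in units G=ħ=c=1) reproduces the preferred configurations quoted above.

```python
# Sketch: GR vs mGR spin-spin potentials for two unit spins (G = hbar = c = 1).
import numpy as np

def U_GR(J1, J2, rvec):
    r = np.linalg.norm(rvec); n = rvec / r
    return -(J1 @ J2 - 3*(J1 @ n)*(J2 @ n)) / r**3

def U_mGR(J1, J2, rvec, m_g):
    r = np.linalg.norm(rvec); n = rvec / r
    x = m_g * r
    pref = np.exp(-x) * (1 + x + x**2) / r**3
    ratio = 3 * (1 + x + x**2/3) / (1 + x + x**2)   # -> 3 (GR) as x->0, -> 1 as x->inf
    return -pref * (J1 @ J2 - ratio*(J1 @ n)*(J2 @ n))

n = np.array([1.0, 0.0, 0.0])
para_perp = (np.array([0.0, 0, 1]), np.array([0.0, 0, 1]))    # parallel, perp. to axis
anti_axis = (np.array([1.0, 0, 0]), np.array([-1.0, 0, 0]))   # antiparallel, along axis
for J1, J2 in (para_perp, anti_axis):
    print(U_GR(J1, J2, 10*n), U_mGR(J1, J2, 10*n, m_g=1.0))
```

At this separation (x = 10), the mGR energy is lower for the parallel-perpendicular pair, while the GR energy is lower for the antiparallel pair along the axis, matching the statements above.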
For the sake of brevity and not to burden the reader, we shall not give here the detailed derivation of (<ref>) and (<ref>) as they were given in <cit.> and extended to other theories of gravity as well as to generic D-dimensions; and in <cit.> it was extended to sources that carry orbital angular momentum, linear momentum and spin. There are at least two rather beautiful expositions of massive gravity theories <cit.> and hence we shall also not discuss the problems and possible resolutions to the graviton mass issue. In any case, at large distances, that is the domain of our interest, we consider the Fierz-Pauli massive gravity as the linearized theory <cit.>.
Here, our concrete goal is to go beyond the 2-body problem and assume that there are N widely separated point masses whose spins are only affected by the spin-spin interactions and not by the tidal forces or other forces. So the Universe is assumed to be composed of these spinning "gas" whose elements can be considered as galaxy clusters.
We will study the orientation of spins for different possible graviton mass (m_g) values by using the interaction in mGR (<ref>). For this purpose, let us define the reduced Compton wavelength of the massive graviton as
ƛ_c = ħ/(m_g c),
so that x becomes
x = r/ƛ_c,
where r is the distance between two sources.
One can assume that, for the possible graviton masses, ƛ_c divided by the measured size of the universe is a number between 0 and 1. Therefore, let us define ξ as a number in the range (0,1):
ƛ_c/R∈ (0,1), ƛ_c = R ξ,
where R is the size of the universe.
Therefore,
x = r/(R ξ).
Note that as ξ→ 0, the potential energy function reduces to the expression (<ref>) with r →∞, which results in parallel spin orientations. However, for ξ near 1 it reduces to the potential energy of GR (<ref>).
For different ξ values, we shall present the simulation results.
The question is: in the case when there are more than two spinning sources, how do the spins become oriented such that their total potential energy becomes minimum? In addition, the question of whether the total spin of the N bodies increases or decreases compared to the initial distribution in GR and mGR is important, as is how significant the change is in both theories. In the simulations, we used around 2000 spinning objects; each can be considered to be a galaxy cluster, as noted above.
§ MINIMIZATION ALGORITHM
Point masses in our code have only two properties: spins and positions. Spins of point masses are represented by three-dimensional vectors. Positions of objects are distributed randomly in a three-dimensional cubic space. The initial spins are created randomly, with both magnitudes and directions sampled in a chosen range.
The position and spin data are given separately in two arrays which have dimensions 3 × N where N is the number of objects, which is 2000 in our simulations.
The code block of the minimization algorithm is such that it can be iterated more than once to get more accurate results.
The potential energy function used in the algorithm has only a single changeable variable, that is, the spin of one object; the other variables are constraints. In one iteration of the code, this function is run specially for each object, changing its spin orientation.
That function returns the sum of the potential energy values between the changeable spin and all other spins. Therefore, the changeable spin will be modified such that it gives the minimum energy.
The steps of the algorithm are
* The first object in the array representation is taken.
* The potential energy function is operated with the input of that object's spin, which is the changeable spin.
* The input's spin orientation for which the potential energy is the minimum is found by a minimization algorithm.
* The spin data is updated with the new spin of the input object.
* The algorithm passes to the next object.
Via this procedure, the associated spins of the objects are changed to the last object.
It should be noted that in the algorithm, magnitudes of spins are constant while their directions change to minimize the potential energy function.
To reduce the computational workload, we defined a sphere for each object such that only objects inside the sphere are considered for the potential energy function.
The directions of spins that give the minimum will no longer be valid once the spins around them change. Therefore, the minimization block must be executed more than once to get more accurate results. The question is how one can tell whether the spin orientations have been accurately calculated, which is investigated in the accuracy analysis below.
The sum of spins is calculated before the execution and after each iteration of the code block. It is seen that for the first two or three iterations, the sum of the spins changes significantly. However, for the rest, the rate of change of the total spin approaches zero. This implies that the effect of the potential energy function on the spins decreases as the number of iterations increases, which shows that the spins approach their optimal orientations.
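An illustrative re-implementation of this sweep is sketched below (ours, not the authors' code): it uses the GR potential of Eq. (<ref>), a reduced object count, and scipy's Nelder-Mead minimizer over spherical angles; the cutoff radius plays the role of the sphere described above.

```python
# Sketch of the per-object minimization sweep (illustrative parameters).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, L, R_CUT = 200, 100.0, 30.0               # fewer objects than the paper's 2000
pos = rng.uniform(0, L, (N, 3))
spin = rng.uniform(-10, 10, (N, 3))

def pair_U(J1, J2, rvec):                    # GR spin-spin potential, G = 1
    r = np.linalg.norm(rvec); n = rvec / r
    return -(J1 @ J2 - 3*(J1 @ n)*(J2 @ n)) / r**3

def sweep():
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = np.where((d > 0) & (d < R_CUT))[0]   # only objects inside the sphere
        mag = np.linalg.norm(spin[i])              # magnitude is held fixed
        def U_of_angles(a):                        # (theta, phi) -> total energy
            th, ph = a
            J = mag * np.array([np.sin(th)*np.cos(ph),
                                np.sin(th)*np.sin(ph), np.cos(th)])
            return sum(pair_U(J, spin[j], pos[j] - pos[i]) for j in nbr)
        res = minimize(U_of_angles, x0=[np.pi/2, 0.0], method='Nelder-Mead')
        th, ph = res.x
        spin[i] = mag * np.array([np.sin(th)*np.cos(ph),
                                  np.sin(th)*np.sin(ph), np.cos(th)])

for it in range(5):                           # iterate until the total spin stabilizes
    sweep()
    print(it, np.linalg.norm(spin.sum(axis=0)))
```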
§ SIMULATION RESULTS
In the simulations, we took the volume of the cubic space to be 10^6 (the length of one side is 100), with each spin component drawn randomly in the range (-10,10). The number of objects is taken to be 2000. The volume of the cutoff sphere is chosen as 1.13*10^5, whose ratio to the total volume is 0.1131.
We carried out a total of 4 different types of simulations. The first one uses the potential energy function for GR (<ref>) while the second one uses the limiting case of mGR (<ref>), which is for large distances. In the third type, we combined both equations such that for small distances, the GR interaction, and for large distances the mGR interaction is used. In the fourth type, the exact potential energy function for mGR (<ref>) is used by taking the graviton mass m_g as the variable for each different start.
For the GR case, the initial & final total spins, the percentage changes of the total spins, and the sums of the lengths of the individual spins (total lengths) are obtained. We included the total lengths in the tables to compare the relative magnitudes of all spin measurements.
In these GR simulations, since we created the initial spins randomly, the total initial spins are already near zero because of the cancellations. Therefore, to see the effect of the interaction of spins better, we again simulated GR using (<ref>) in the case when the initial total spin is non-zero.
With the potential energy expression of mGR (<ref>), we first simulated the special case when the distance between the two sources is infinity, r →∞. Therefore, for this
computation, we used (<ref>), which has a numerical difference from the expression of GR (<ref>).
For these simulations, the code block is iterated more than 25 times. Again, the initial & final total spin, percentage changes, and the sum of lengths of each spin are obtained.
In each simulation of mGR, the sum of spins increased such that it became comparable to the total length of individual spins of objects.
The total spin is observed to be maximized, with pairs of spins oriented such that they become parallel. For 2000 spins, we observed that nearby spins become parallel to each other, as shown in the following plot.
Since the mGR potential energy expression (<ref>) is accurate at large distances, and the linearized mGR equation (<ref>) reduces to the GR equation (<ref>) at short distances, we simulated a hybrid scheme in which the GR equation (<ref>) governs at small distances, while the mGR equation (<ref>) governs at larger distances.
if r<d : use GR,
if d<r<300 : use mGR (r →∞),
where r is the distance between sources, and d is the variable distance that determines which potential energy function is used.
In this simulation, the length of one side of the cube is chosen as 1000.
We finally simulated the orientation of spins for different possible graviton mass (m_g) values by using the exact potential energy expression of mGR (<ref>). The variable ξ is used to represent the comparison of the size of the universe R and m_g. The variable x in (<ref>) is then determined by using (<ref>), which is
x = r/(R ξ).
We simulated different runs by giving ξ several values that are in the range (0,1).
§ CONCLUSIONS
We performed 4 different types of simulations where we used different potential energy functions and combinations of them. In each simulation type, we took different runs and obtained the total initial & final spins, and the percentage changes for each one.
For two spinning sources, the spin-spin potential energy expressions of these theories differ significantly such that they yield different total spin orientations. It turns out that in GR, the minimum energy configuration of spin sources is antiparallel, while the configuration in mGR depends on a coefficient ξ∈ (0,1) that compares the graviton Compton wavelength with the size of the universe. For ξ→ 0, mGR favors parallel spin orientations, while for ξ→ 1 the potential energy expression reduces to the equation in GR, which favors antiparallel spins. Therefore, by using the potential energy formulas for two sources, we simulated the spin orientations of N bodies for GR, for mGR at large distances, for a combination of them, and for the exact mGR expression. We should note that here we have been interested in the minimization of the total spin-spin interaction potential energy. Of course, when the system relaxes to the minimum configuration state, due to the conservation of total angular momentum, gravitational radiation carrying spin will be emitted.
Finally, what would these results suggest for our Universe if we take each spinning object to represent a galaxy cluster? Massive Gravity, as opposed to General Relativity, at this level of numerical and theoretical approximation suggests a rotating Universe as the total spin is maximized. In the past, rotating universe ideas were put forward by Hawking <cit.> and Birch <cit.>. The suggestion, albeit coming from our limited simulation capacity, that massive gravity predicts a rotating Universe is worth pursuing.
|
http://arxiv.org/abs/2405.08928v1 | 20240514194802 | Constant-roll inflation with a complex scalar field | [
"Ramón Herrera",
"Mehdi Shokri",
"Jafar Sadeghi"
] | gr-qc | [
"gr-qc",
"astro-ph.CO"
] |
|
http://arxiv.org/abs/2405.09704v1 | 20240515210612 | Neutron and $\boldsymbolγ$-ray Discrimination by a Pressurized Helium-4 Based Scintillation Detector | [
"Shubham Dutta",
"Sayan Ghosh",
"Satyajit Saha"
] | physics.ins-det | [
"physics.ins-det",
"astro-ph.IM"
] |
Shubham Dutta^1,* (shubhamdutta_16@yahoo.com), Sayan Ghosh^2, Satyajit Saha^3

^1 High Energy Nuclear and Particle Physics Division, Saha Institute of Nuclear Physics - a CI of Homi Bhabha National Institute, Block AF, Sector I, Bidhannagar, Kolkata 700064, India

^2 Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA

^3 Applied Nuclear Physics Division, Saha Institute of Nuclear Physics - a CI of Homi Bhabha National Institute, Block AF, Sector I, Bidhannagar, Kolkata 700064, India

^* Corresponding author
A pressurized helium-4 based fast neutron scintillation detector offers a useful alternative to organic liquid-based scintillators due to its relatively low response to γ-rays compared to the latter type of scintillator. In the present work, we have investigated the capabilities of a pressurized ^4He (PHe) detector for the detection of fast neutrons in a mixed radiation field where both neutrons and γ-rays are present. Discrimination between neutrons and γ-rays is achieved by using the fast-slow charge integration method. We have also conducted systematic studies of the attenuation of fast neutrons and γ-rays by high-density polyethylene (HDPE). These studies are further corroborated by simulation analyses conducted using GEANT4, which show qualitative agreement with the experimental results. Additionally, the simulation provides detailed insights into the interactions of the radiation quanta with the PHe detector. Estimates of the scintillation signal yield are made based on our GEANT4 simulation results by considering the scintillation mechanism in the PHe gas.
Pressurized helium-4 detector neutron - gamma discrimination neutron spectroscopy neutron attenuation
GEANT4 simulation
§ INTRODUCTION
Neutrons, as radiation quanta, are found in nature because of their release in various forms of nuclear reactions, the most common being nuclear fission and (α,n) reactions caused by the natural radioactivity of the remnants of the Uranium-Thorium (U-Th) decay chain. Free neutrons are, however, unstable against β-decay, with a half-life of about 15 minutes. In spite of this fact, free neutrons pass through and penetrate matter, since they are charge neutral, and deposit partial or full energy through hadronic interactions. These interactions include capture into nuclei and elastic scattering resulting in nuclear recoil of the stopping media. Such interactions take place over a very short time scale compared to the half-life of free neutrons, which makes it possible to detect them in real life.
Neutrons are produced in large number inside the core of the nuclear reactors and also emerge out of the spent nuclear fuels by spontaneous fission and (α,n) reactions<cit.>. These neutrons, detected with the help of suitable neutron detectors, are often used to monitor the spent fuel repositories and also at the strategic surveillance stations for monitoring hidden nuclear materials. In that respect, neutron detectors capable of detecting neutrons up to around 10 MeV energy serve the purpose. However, since the neutrons are most often found in a mixed radiation field with dominance of mostly γ-rays, X-rays and electrons coming out of the same sources, it is important to achieve discrimination between the neutrons and the other radiation quanta before any meaningful result can be extracted.
Neutron background for a typical dark matter search experiment, usually set up at underground laboratories, interferes with the signal due to possible dark matter candidates, as both the radiation quanta interact with active media to produce overlapping signals. These neutrons are predominantly produced by the (α,n) reactions due to the U–Th decay chain products emitting α-particles. Careful measurements of the neutron background is essential at every site to assess the sensitivity limits. Pressurized helium-4 (PHe) detector has been used recently at such facilities to monitor the residual neutron flux<cit.>.
Liquid helium<cit.> was investigated as a scintillator for neutron detection more than 60 years ago. However, the scintillation properties of PHe gas were examined much later for successful implementation as a fast neutron detector<cit.>. The major advantage of PHe gas as a scintillator is its relatively weak response to β-particles and γ-rays, because the density of available electrons in helium is much lower than in standard scintillators, organic and inorganic. On the other hand, neutrons cause nuclear recoil of the helium-4 nuclei inside the pressurized gas, resulting in multiple processes of ionization and other interactions, leading to scintillation through transitions from the singlet excimer states or the triplet excimer states<cit.>. Both transitions lead to emission in the extreme ultraviolet (EUV) region of the electromagnetic spectrum, with the wavelength of maximum emission around 80 nm. A wavelength shifting (WLS) compound is used to make the EUV scintillation light output readable by photomultipliers (PMTs) or silicon photomultipliers (SiPMs)<cit.>.
In recent times, PHe detectors are made commercially available by Arktis Radiation Detectors Ltd, Switzerland<cit.>. Front-end electronics and data acquisition system is provided as a package for measuring the fast and the thermal neutron fluxes after pulse shape discrimination using Time-over-Threshold (ToT) technique to discriminate between γ-rays, fast neutrons and the thermal neutrons. Incidentally, the inner wall of the detector is coated with a Lithium-based compound to make the detector sensitive to the thermal neutrons. These neutrons undergo capture by Lithium to produce energetic α-particles which result in scintillation inside the detector volume. Corresponding ToT signals are found to be larger than those produced by the fast neutrons. Some technical details about the variant detector model S670, which is not sensitive to the thermal neutrons, can be found elsewhere<cit.>.
The aims of this work are the following: 1) discrimination of fast neutrons from γ-rays and electrons using the fast-slow charge integration method; 2) a qualitative study of the attenuation of fast neutrons and γ-rays by high-density polyethylene (HDPE); 3) estimation of the threshold for neutron and γ-ray discrimination; 4) a detailed simulation to understand the interaction of neutrons and γ-rays with the PHe medium; 5) following the systematics of energy transfer from neutrons and γ-rays to ^4He (nuclear recoil) and electrons (electron recoil), respectively; and 6) estimation of the resulting scintillation signals following excitation and charge transfer collision processes in the PHe.
§ EXPERIMENTAL DETAILS
§.§ Instrumentation
A photograph of the experimental set up is shown in the Fig. <ref>. Arktis-made PHe detector and a radioactive source (^252Cf source inside a PTFE capsule), placed in front of the detector, are displayed in the photograph. The active medium of the detector is ^4He gas at a pressure of 150 - 180 bar, enclosed inside a stainless steel tube of 60 mm inner diameter and 600 mm active length. Details about the detector, related electronics and their working principles are given in many references available as published articles<cit.>. The detector is packaged with SiPM arrays as photon readout located inside the pressurized detector volume. The detector is segmented into three parts along the length of the cylindrical tube. Photons from each segment are read out by an array of 8 SiPMs. Signals from two SiPMs are summed and fed into each output, so that 4 output signal pulses are generated from each segment.
For our work, the digital electronics readout system was replaced by a multichannel analog circuit (provided by the manufacturer) for analog signal processing using conventional electronics and a 12-bit current integrating analog to digital converter or QDC (16 channel Phillips Scientific charge ADC, Model 7187). The pulses from the SiPMs are fed to the analog circuit, which consists of a preamplifier and a shaping amplifier with baseline restorer for each SiPM channel. Traces of the output pulses from the amplifier are shown in the Fig. <ref>. The output signals were inverted using high speed pulse transformer (ALT 4532M) to match with the polarity and pulse timing requirements. Out of the four output signals from a segment, one signal was fed to a low threshold discriminator to generate the logic gates for the QDCs required for pulse shape discrimination (PSD) between neutrons and γ-rays or electrons. The PSD parameter (P) is defined as:
P=N Q_S/(Q_S+Q_L), where Q_S and Q_L are the charge contents of the pulse within the durations of the short gate and the long gate, respectively, and N is a scaling factor that converts the ratio to suitable integer values for the plots.
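A sketch (ours) of this charge integration on a digitized trace is given below; the gate widths are placeholder values, while the ~300 ns delay of the short gate follows the optimisation described later in the text.

```python
# Sketch: fast-slow charge integration for the PSD parameter P.
# Gate widths are placeholders; delay_ns follows the text (~300 ns).
import numpy as np

def psd_parameter(trace, dt_ns, threshold, short_ns=300.0, long_ns=2000.0,
                  delay_ns=300.0, scale=1000.0):
    t0 = int(np.argmax(trace > threshold))          # threshold-crossing sample
    i0 = t0 + int(delay_ns / dt_ns)                 # delayed short-gate start
    q_long = trace[t0:t0 + int(long_ns / dt_ns)].sum() * dt_ns    # Q_L
    q_short = trace[i0:i0 + int(short_ns / dt_ns)].sum() * dt_ns  # Q_S
    P = scale * q_short / (q_short + q_long)        # P = N*Q_S/(Q_S+Q_L)
    return P, q_long
```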
§.§ Measurements with Radioactive sources
The systematic studies were done using different neutron and γ-ray sources placed at a certain distance from the detector as shown in the photograph (see Fig. <ref>). Provision for placement of different absorber materials in between for systematic studies was also made. An unmoderated ^252Cf spontaneous fission source (half life = 2.645 years, spontaneous fission branching ratio = 3.09%<cit.>) was used for simultaneous detection of neutrons and γ-rays. The source emits fast neutrons with average energy of 2.12 MeV. The source was sealed inside a cylindrical PTFE capsule having an opening on one of the flat faces. The aperture diameter was 2 mm, which was sealed with a 100 μm thick polyethylene terepthalate (PET) window.
A typical scatter plot of the PSD parameter P vs. Q_L is shown in Fig. <ref> for the ^252Cf source placed near the detector. Two distinct bands can be seen in the scatter plot
obtained after 5 hours of exposure to the ^252Cf source. The relative configuration of the gates, optimized by adjusting gate widths and delays by looking at P, is also shown in the Fig. <ref>. Optimum delay between the threshold for the up-swing of the pulse and the trigger point of the short gate was found to be around 300 nanosecond for achieving good discrimination.
An uncalibrated ^137Cs monoenergetic (662 keV) γ-ray source was used for the experiment to identify the γ-band as distinct from the neutron band. The scatter plot, obtained after exposing the detector with the γ-ray source, is displayed in the Fig. <ref>, which shows a single band (γ-band) as expected.
From the scatter plot of Fig. <ref>, the neutron and the γ-bands are found to merge at low Q_L values, which qualitatively indicates the low energy threshold for discrimination. A comparison of projection of the neutron and the γ-bands on the Q_L axis is shown in the Fig. <ref>. It reveals that the detector response to the γ-rays, mediated mostly through electron recoil, is quite low compared to that for the neutrons having energies of the same order. It can be seen from the plots that the nuclear recoil spectrum due to the neutrons terminate abruptly at higher channels (≳ 250), which is due to the saturation of the pulses of the SiPM signal amplifiers provided by the manufacturer.
§.§ Calibration of the detector
Energy calibration of the detector was done using the prompt neutrons from a ^252Cf source. It is expected that the energy spectrum of the neutrons originating from the spontaneous fission is Maxwellian. However, after detailed analysis by various groups on the properties of neutron sources, and several meetings under the banner of International Atomic Energy Agency (IAEA), held during 1980 to 1987, the expert group on ^252Cf spontaneous fission-based fast neutron sources, had proposed a corrected Maxwellian spectrum, based on experimental data and theoretical estimates<cit.> over various energy segments from 0.2 MeV to 20 MeV. Accordingly, the corrected Maxwellian spectrum F(E_n) as function of the kinetic energy (E_n) of the neutrons for the prompt fission neutrons is given by
F(E_n)=R(E_n) 2/√(π) √(E_n)/T^3/2exp [-E_n/T],
where, R(E_n) is the proposed correction factor to the Maxwellian spectrum, and T is the nuclear temperature of ^252Cf before fission, with typical value of 1.42 MeV<cit.>. Based on Mannhart's correction factors, a least squares-fitted polynomial regression model was introduced in the Los Alamos ORNL MCNP-DSP code <cit.>. The R(E_n) values, based on the above, were used to obtain the corrected Maxwellian form for the ^252Cf prompt neutrons to arrive at the benchmark ^252Cf prompt neutron spectrum. Finally, the simulated spectrum was folded by the intrinsic detector efficiency data available from Ref. <cit.> by applying cubic spline interpolation. It may be noted that the intrinsic efficiency data, obtained from the time of flight (TOF) measurements, is available in the range of 0.5 MeV to 6.5 MeV, with backward interpolation extended to 0.35 MeV.
The measured and the simulated efficiency values given in Ref. <cit.> considerably differ over various energy ranges. Therefore, the experimentally obtained spectrum was compared with the theoretical one after folding the latter with: 1) the experimentally obtained efficiencies ϵ_ ex (E_n), 2) simulated efficiencies ϵ_ th (E_n), and 3) average of the two efficiencies as mentioned above, estimated at each energy. Our comparisons reveal the best match for the ϵ_ ex (E_n), which is shown in the Fig. <ref>. It is evident that the lower energy part of the spectrum reveals a cut-off around 0.75 MeV, which is possibly contributed by the threshold discriminator setting and also the merging threshold of the neutron and the γ-bands in our experiment (refer to Fig. <ref>).
A comparison of the experimentally obtained ^252Cf prompt neutron spectrum and the one theoretically obtained as above, are shown in the Fig. <ref>. They are found to be in good agreement over the energy range of 0.7 MeV to 3.2 MeV, which has enabled calibration of the detector. It is important to mention two major points here. a) The upper limit of the available energy range is due to the gain settings of the built-in amplifiers associated with the ^4He detector. b) Our comparison, shown in the Fig. <ref>, manifests that the measured spectrum is constrained by a lower energy cut-off around 0.75 MeV, whereas the estimated spectrum, folded by the efficiency data, has a lower energy cut-off of around 0.35 MeV which arises entirely from the available intrinsic efficiency data in Ref. <cit.>.
There are uncertainties in the calibration, caused by the sparse data on the intrinsic efficiency. The measured and the simulated efficiency results differ considerably over various energy ranges<cit.>. Besides, the interpolation procedure followed leaves scope for introducing additional uncertainties. Because of these uncertainties, we have adopted a linear calibration without a cut-off. However, a least-squares fit of the calibration data up to the quadratic term (i.e., E_n(x) = a_0+a_1 x + a_2 x^2, where x is the channel number) results in the following best-fit values:
a_0=0.00167 ± 0.00212 MeV, a_1=0.0125 ± 3.93× 10^-5 MeV/ch, a_2=1.87× 10^-8± 1.48× 10^-7 MeV/ch^2. Because of the relatively small intercept and quadratic term, our choice of calibration procedure is well-justified. Since identical gain settings were maintained throughout the experiment, we utilized the same energy calibration for the neutrons obtained from different radioactive sources.
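The fit itself is a standard least-squares polynomial regression; the sketch below (ours) illustrates it with hypothetical channel-energy pairs whose endpoints match the calibrated range quoted above.

```python
# Sketch: linear vs quadratic calibration fit. The (channel, energy) pairs
# are hypothetical placeholders, not the measured calibration points.
import numpy as np

ch = np.array([60, 100, 150, 200, 256])         # hypothetical channels
En = np.array([0.75, 1.25, 1.87, 2.50, 3.20])   # MeV

a2, a1, a0 = np.polyfit(ch, En, deg=2)          # quadratic: E = a0 + a1*x + a2*x^2
b1, b0 = np.polyfit(ch, En, deg=1)              # adopted linear calibration
print(a0, a1, a2)
print(b0, b1)                                   # slope ~0.0125 MeV/ch
```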
It may be noted that the detector is also capable of detecting the γ-rays due to its electron recoil response, though the intrinsic detection efficiency may be very small compared to that of the neutrons. Though considerable details about the shape of the energy spectral distribution of γ-rays from spontaneous fission of ^252Cf is available<cit.>, similar energy calibration procedure could not be done for the γ-ray spectra, because of the overlap of the neutron and the γ-bands (see Fig. <ref>) at lower energies. However, extending the energy calibration for the neutrons, which essentially relates to the recoil energy of ^4He, we refer the common energy scale as equivalent neutron energy E_ en.
§.§ Measurements with Am X-ray source
A ^241Am X-ray source, embedded in a glass matrix and sealed inside a cylindrical stainless steel capsule, was also studied with the detector. The source, intended for energy calibration of X-ray detectors, was used in the experiment to determine the response of the detector to low energy photons so that a low energy cut-off for electron recoil could be established. The scatter plot of P vs. Q_L, obtained after 5 hours of exposure, is shown in the Fig. <ref>. A clear neutron band, besides the γ-band, was observed to appear. This was a bit surprising for the following reasons: 1) the source had so far been used for energy and relative efficiency calibration of X-ray detectors, and 2) it had never been monitored with neutron detectors or dosimeters. Note that the same exposure time (5 hours) was used for the ^241Am and the ^252Cf sources with the PHe detector.
In order to confirm that the band is due to neutrons being detected, attenuation of the band population by high-density polyethylene (HDPE) was studied. For this purpose, 4 layers of HDPE, each being 25 mm thick with a total thickness of 100 mm, were placed between the ^241Am source and the PHe detector to attenuate the neutrons. The scatter plot (P vs. Q_L), obtained over the same exposure time, is shown in the Fig. <ref>. It clearly reveals attenuation of the neutron band. The plots, obtained by gating on the neutron band and taking the projection along the Q_L axis (see Fig. <ref>), provide a comparison of the neutron attenuation by the HDPE layers. Low energy neutrons are expected to be attenuated more than those of higher energies; however, from the plot, the attenuation factor appears to be fairly independent of energy. This is due to the moderation of the energetic neutrons, which enhances counts at low energy and suppresses them at higher energy. A similar projection spectrum for the γ-band is shown in the Fig. <ref>, where attenuation of γ-rays is observed. We have also observed a build-up of counts in the high-energy tail (larger Q_L). This was investigated in detail through simulation in Sec. <ref>.
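From such projections, an effective macroscopic removal cross-section for HDPE can be estimated from the transmitted fraction; the sketch below (ours) uses a hypothetical transmitted fraction for illustration only.

```python
# Sketch: effective removal cross-section Sigma = -ln(C/C0)/t for HDPE.
# The transmitted fraction below is a hypothetical placeholder value.
import numpy as np

t_cm = 10.0                         # 4 x 25 mm HDPE layers
transmitted = 0.45                  # hypothetical neutron-band count ratio C/C0
Sigma = -np.log(transmitted) / t_cm            # cm^-1
print(Sigma, np.log(2) / Sigma)                # and the half-value thickness
```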
§ RESULTS AND ANALYSIS
§.§ Simulation
A detailed simulation of response systematic of the detector to the relevant radiation quanta (neutrons, γ-rays) were done to understand the nuclear-recoil events (NR) due to the neutrons and the electron-recoil (ER) events due to the γ-rays, and to compare them qualitatively with the results obtained from the experiment. It is evident from the previous section that the scope of simulation are 5-fold: 1) emission of relevant radiation quanta from the radioactive source used, particularly the ^241Am source; 2) response of the detector to the neutrons, more specifically to the fast neutrons; 3) attenuation of the fast neutrons by suitable plastic absorbers (HDPE); 4) response of the detector to the γ-rays emitted from the radiation sources and also resulting from neutron absorption in HDPE; 5) discrimination between the neutrons and the γ-rays using signal analysis.
Simulations were carried out with the GEANT4 (G4) simulation toolkit <cit.>, version 10.7.4. G4 provides numerous handles to tune the simulation parameters, allowing users to focus on specific aspects of interest at a time. A custom physics-list is utilized in our simulation, which is based on the one developed by Mendoza et al.<cit.>. We briefly outline the G4 packages used in this physics-list to model various interactions. The G4RadioactiveDecayPhysics package is used to model the radioactive decays relevant for the ^241Am source. It uses the ENSDF database <cit.> for the various decay parameters, including the energy levels of the daughter nuclei. The production of neutrons from the (α,n) reaction is modeled using the G4ParticleHP package, which is capable of utilizing the ENDF-6 formatted data libraries<cit.>. The database used for this purpose is the JENDL/AN-2005 dataset <cit.>. The QGSP_BIC_HP model is used for the hadronic interactions and uses the G4 Neutron Data Library (G4NDL) to implement low energy neutron interactions with high precision. The EM interactions are modeled with the G4EmStandardPhysics_option4 package.
The geometry to obtain the particle spectra consists of a cylindrical stainless steel source capsule of 6 mm diameter × 9 mm length, with the ^241Am source, embedded in a pyrex glass matrix of 3 mm diameter × 6 mm thickness, placed inside and sealed with a stainless steel cover. The source capsule is surrounded by a spherical dummy detector, which is made of air. This is placed to track the particles that are being emitted from the source.
^241Am (432 years half-life) is used as the primary α-emitter nucleus embedded within the glass matrix. It decays to ^237Np by emitting α-particles at 5.486 MeV (∼ 85% decay branch), 5.443 MeV (∼ 13% decay branch), and the rest at other energies. The ^237Np daughter nucleus has a half-life approximately 4 orders of magnitude longer than that of ^241Am. Therefore, the source is approximated as an alpha emitter whose energy is sampled from the spectrum as obtained from G4. These α-particles are allowed to penetrate isotropically through the base material of the source capsule, where they undergo (α,n) reactions with the constituent nuclei. Based on the cross-section data available from the database, the respective elemental compositions of the materials, and the relevant energies of the α-particles, we find that the dominant reactions producing neutrons are ^11B(α, n)^14N (∼69%) and ^23Na(α, n)^26Al (∼11%), the rest being produced by various other reactions.
The neutron yield from these (α,n) reactions is found to be very low, ∼1 per million impinging α-particles. Accumulating sufficient statistics would therefore require a significant amount of computation time. The physics-list has the option to bias the (α, xn) cross-section by a fixed factor in order to increase the neutron yield. The developers of the physics-list have tested and verified that the biasing technique does not impact the neutron energy spectrum <cit.>. A biasing factor of 10000 was used. The resulting neutron spectral distribution is shown in Fig. <ref>. The neutron spectrum has a peak at ∼1 MeV and extends up to ∼4 MeV.
§.§ Neutron Response Systematics by G4
The geometry of the PHe detector is mentioned in Sec. <ref>. For this study, the source is placed at a fixed distance from the PHe detector to accommodate the neutron absorbers (HDPE) in between. The HDPE absorber slabs, each 25 mm thick, are cumulatively placed to record the resulting neutron energy spectral distributions as seen by the PHe detector.
As mentioned in Sec. <ref>, a neutron scatters off a ^4He nucleus, transferring a part of its kinetic energy (E_n). The recoiling ^4He deposits its entire recoil energy within the fiducial volume of the detector via ionization, followed by scintillation; this deposited energy is designated as E_dep. The E_dep spectra for monoenergetic neutrons at different energies ranging from 0.25 to 3 MeV are estimated as shown in Fig. <ref>. Since there is considerable spread in the E_dep spectra, we define a marker Δ E of the recoil signal registered by the detector as the upper limit on E_dep below which 90% of the total area under the spectral profile lies. A plot of Δ E as a function of E_n is shown in Fig. <ref>. The Δ E is found to be fairly linear with the incident energy E_n. Therefore, from the E_dep spectrum, an estimate of the incident energy can be made.
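The Δ E marker can be extracted numerically from any binned E_dep spectrum. The following is a minimal numpy sketch of this 90%-area prescription; the function name and the toy spectrum are our own illustrative choices, not part of the analysis code used here.

import numpy as np

def delta_e_marker(e_dep, counts, fraction=0.90):
    # Cumulative area under the binned spectrum, normalized to unity
    cum = np.cumsum(counts, dtype=float)
    cum /= cum[-1]
    # First bin at which the cumulative fraction reaches the target:
    # its E_dep value is the marker Delta-E of the recoil signal
    idx = np.searchsorted(cum, fraction)
    return e_dep[idx]

# Toy example: a mock E_dep spectrum binned in keV
e_dep = np.arange(0.0, 1000.0, 10.0)                   # bin centres
counts = np.exp(-0.5 * ((e_dep - 400.0) / 150.0)**2)   # mock profile
print(delta_e_marker(e_dep, counts))                   # ~ 590 keV here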
One neutron is found to cause multiple nuclear recoils during its passage; each recoil is therefore also recorded as a distinct event for systematic studies and for the calculation of the S1 signal (to be explained in Sec. <ref>). The HDPE slabs are cumulatively placed and the simulations are done for 50 million events for each thickness of the absorber. Only the fast neutron events, selected by an energy cut of ≲ 70 keV, are recorded. Fig. <ref> shows the distribution of E_dep corresponding to the fast neutron events, which can be compared with the spectra obtained experimentally (see Fig. <ref>). Energies of the incident neutrons are obtained from the calibration done with our measurements based on the ^252Cf source (see Sec. <ref>). The relative ordering of the plots for different thicknesses of the HDPE absorbers agrees with the experiment. However, because of incomplete information on the scaling between E_dep and the incident energy of the neutrons, further quantitative comparisons through simulations could not be done.
§.§ Electron recoil events
γ-rays falling on the PHe detector cause electrons inside the gas to recoil and thus produce an electron recoil signal. This is demonstrated in our experiment (see Fig. <ref>) with a ^137Cs standard γ-ray source as well. The range of E_dep values can be seen to merge with the lower range of E_dep values for the neutrons, as shown in Fig. <ref>, where the spectra for the neutrons and the γ-rays from ^252Cf are overlaid. Applying the energy calibration for the neutrons, we can conclude that the γ-ray spectrum as seen by the PHe detector extends from ∼0.6 to 1.2 MeV in neutron equivalent energy E_en. Reaching lower energy values in E_en is constrained by the threshold of the detector for neutron-γ discrimination.
It has already been demonstrated by exposing the detector to the ^241Am source that the neutrons are attenuated by the HDPE layers (see Sec. <ref>). It is also shown that while the γ-ray spectrum at the lower channels along the Q_L axis gets significantly attenuated (see Fig. <ref>) after passing through the HDPE layers, the higher channels corresponding to larger E_dep values get populated as well. For measurements with different numbers of HDPE layers in between, we observe that after traversing two HDPE layers, the spectrum at the higher channels (i.e., region of interest, or RoI, > 150) shows a rising trend and a bump around channel number 240 until it reaches saturation at the highest channel. There is not much visible relative change for four HDPE layers. However, the spectral population over the RoI diminishes after passing through five HDPE layers. This may be interpreted as being due to the production of 2.225 MeV γ-rays from neutron moderation within the HDPE, followed by absorption in HDPE through the n + p → d + γ process. A simulation of the γ-ray interaction with the detector is done to understand the origin and nature of the γ-ray spectra and the underlying systematics of the process.
The γ-rays falling on the detector will produce electron recoils within the pressurized ^4He volume. The study is complicated by the presence of the stainless steel housing of the detector, which contains high-Z elements. Since the γ-rays are energetic enough to undergo pair creation, positrons are also produced, which result in the production of 511 keV γ-rays. Therefore, although the electron-recoil events are dominated by effects due to the electrons, there is an effect due to the positrons as well. Fig. <ref> includes the E_dep spectra for the electron recoil events when the neutrons emitted from the ^241Am source are incident on the detector.
§.§ Gamma-ray Response Systematics by G4
In the simulation procedure, mono-energetic γ-rays of different energies (E_γ) are allowed to impinge upon the detector. The E_dep spectrum of the recoiling electrons is shown in Fig. <ref>. A peak begins to appear at E_dep∼ 280 keV as the energy of the incident γ-ray is increased. The γ-ray energy threshold for the appearance of the peak is found to be around 650 keV. Therefore, the 847 keV γ-ray, which arises from the ^56 Fe(n,n'γ) reaction, would also give rise to the ∼ 280 keV peak due to the presence of the SS housing, when no HDPE slab is placed. A detailed event-by-event tracking reveals that this number corresponds to the average E_dep of the recoiling electrons given the geometry (diameter) of the detector. The Δ E values for the E_dep spectra as a function of E_γ are plotted in Fig. <ref>, which indicates a saturating trend at higher energies. The trend is fitted with a quadratic function, modulated by a smooth step function, of the form:
F(x) = 0.5· (1-tanh((x-p_0)/p_1))·(p_5· x^2 + p_3· x + p_2) + 0.5· p_4· (1+tanh((x-p_0)/p_1)),
where p_0 to p_5 are the parameters of the fit. The step function makes the quadratic term dominant in the low-E_γ region and the constant term (saturation value) dominant in the high-E_γ region. Specifically, p_0 marks the threshold where the saturation begins to dominate, while p_4 denotes the saturation value.
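As a concrete illustration, this fit can be performed with a standard least-squares routine. The sketch below uses scipy's curve_fit on mock data; the mock (E_γ, Δ E) points and the initial guesses are assumptions for demonstration only, not the measured contour.

import numpy as np
from scipy.optimize import curve_fit

def step_quadratic(x, p0, p1, p2, p3, p4, p5):
    # Quadratic term dominant below the threshold p0, saturation p4 above
    s = np.tanh((x - p0) / p1)
    return 0.5*(1 - s)*(p5*x**2 + p3*x + p2) + 0.5*p4*(1 + s)

# Mock (E_gamma, Delta_E) points standing in for the measured trend
e_gamma = np.linspace(0.3, 3.0, 30)                        # MeV
d_e = step_quadratic(e_gamma, 1.2, 0.4, 0.02, 0.15, 0.30, 0.05)
d_e += np.random.default_rng(0).normal(0.0, 0.005, e_gamma.size)

popt, pcov = curve_fit(step_quadratic, e_gamma, d_e,
                       p0=[1.0, 0.5, 0.0, 0.1, 0.3, 0.0])
print(popt)   # fitted p0 (threshold), p4 (saturation value), etc.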
The saturating behaviour in the Fig. <ref> arises due to the fact that the recoiling electrons do not lose their entire energy within the detector. This can be seen more explicitly in Fig. <ref>, where the E_dep corresponding to the recoiling electrons is plotted against their kinetic energy (KE) for mono-energetic incident γ-rays of 1 MeV and 2.225 MeV. The kinetic energy of the recoiling electrons increases with E_γ as expected. The sharp right edge in the plots is the Compton edge. It is evident from Fig. <ref> that a significant fraction of the electrons with higher recoil energies lose only ∼280 keV of their energy within the detector. Therefore, we can conclude that the peak observed at higher channel numbers in the Fig. <ref> corresponds to ∼ 280 keV of E_dep.
To further confirm this geometric effect, the simulation was repeated by changing the diameter of the fiducial volume of the detector to 30 mm and 90 mm. Corresponding E_dep spectra are plotted in the Fig. <ref>. A clear shift in the peak can be seen, which is at ∼130 keV for 30 mm diameter (solid line) and ∼400 keV for 90 mm (dotted line).
Another systematic study was done to understand the effect of the SS cover of the detector. In this case, the SS cover was removed (keeping the original diameter of 60 mm unchanged for the active volume) and the corresponding E_dep spectrum was obtained. It was observed that the height of the peak at ∼280 keV was reduced in the spectrum, indicating that the peak was, indeed, enhanced by the presence of the SS cover. The contribution from pair-production events to the RoI and to the overall spectra was evaluated from the E_dep spectrum obtained after vetoing those events where pair-production had taken place. No significant deviation was observed from the spectrum of Fig. <ref>, indicating that pair-production does not make a significant contribution to the spectra. Additionally, it was observed from simulation that in almost all cases a single recoil electron is produced from a single γ-ray. The number of events with two recoil electrons from the incident γ-ray is almost two orders of magnitude lower than that of the single recoil events.
Based on the experimental and simulation results on the γ-ray response of the PHe detector, we can conclude that: 1) the E_dep∼ 280 keV peak arises from the most probable energy deposit by the recoiling electrons within the confined geometry of the PHe detector. This is also seen as a rising bump at the higher channel numbers (∼ 240) in the experiment (see Fig. <ref>), apart from the spectral feature at the lower channel numbers (≤ 150), which has been shown to arise from accompanying low energy γ-rays of the so-called room background. Based on the spectral profile, we have fitted the peak after subtraction of an exponential background for the E_dep spectra shown in Fig. <ref>, for γ-rays from 662 keV to 2.225 MeV. The average E_dep value for the peak was found to be 282 ± 10 keV. Based on the above, and the fact that the x-axes of the plots in Fig. <ref> scale as E_dep, a first-order calibration with a calibration constant of 1.175 keV/channel is applied to the x-axis. 2) The observed attenuation of the low energy γ-rays by the HDPE layers, which is mostly due to scattering of the γ-rays, qualitatively agrees with the simulation. However, no quantitative comparison could be attempted in our study due to the lack of information about the discrete γ-rays from the sources. 3) The production of the 2.225 MeV γ-rays and its correlation with the attenuation of the fast neutrons by the HDPE layers demonstrate reasonable agreement between our experiment and the G4 simulation.
§ SCINTILLATION SIGNAL ESTIMATES FOR THE PHE DETECTOR
In our simulation work, we have so far followed the transfer of energy from the radiation quanta (neutrons or γ-rays) to the recoiling medium particles (^4He or electrons). There are primarily three mechanisms by which a recoiling ^4He (NR events) loses its energy along its track, viz. ionization, charge-exchange collisions, and excitation collisions. The cross-sections of these interactions depend on the charge state of the recoiling ^4He itself. Electron-ion pairs are produced by ionization. In charge-exchange collisions, ions are generated through electron-capture interactions by the recoiling He^1+ and He^2+ ions, while free electrons are produced from He^0 and He^1+ ions through electron loss. When these electron-ion pairs recombine, helium excimer states are formed. Excimers are also formed when excited He atoms resulting from excitation collisions combine with ground-state He atoms. Therefore, one of the main outcomes of these interactions is the formation of helium excimer molecules. These excimer molecules can be in spin-singlet or spin-triplet states (their proportion depending on the production mechanism). The spin-singlet states decay promptly via radiative transitions and give rise to prompt scintillation, termed the S1 signal, whereas the spin-triplet states are long-lived but also decay radiatively and contribute to the delayed S3 signal<cit.>.
The above mechanism also applies to the ER events, except for the absence of charge-exchange collisions. From the discussion, it can be concluded that the parameter determining the intensity of these signals is the number of electron-ion pairs and excitations produced for a given recoil energy. The calculation of the yield (number per unit recoil energy) and the signal intensities has been worked out in detail <cit.>. The density of helium-4 gas at 180 bar pressure and 300 K temperature is taken as 0.0247 g·cm^-3, and the corresponding number density is 3.7×10^21 cm^-3. Figure <ref> shows the yield vs. recoil energy of ^4He, where the blue curve is the electron-ion ionization yield and the red curve is the excitation yield. The ionization and excitation yields due to ER are considered constant at 22.7 keV^-1 and 10.2 keV^-1, respectively <cit.>. Figure <ref> depicts the S1 and S3 signal intensities as functions of the recoil energy. The blue and red curves represent NR and ER events, respectively. The S1 signal distributions for NR and ER events for incident neutrons are shown in Fig. <ref> and Fig. <ref>, respectively, for different numbers of HDPE layers placed in between. The incident energy of the neutrons is sampled from the energy distribution obtained from the ^11B(α,n)^14N reaction (see Fig. <ref>). The γ-rays produced from the ^241Am source, and also those due to interactions of the source neutrons with the intervening media, have been taken into account for the S1-S3 signal estimates.
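To make the mapping from recoil energy to signal intensity concrete, the sketch below estimates S1 and S3 counts for an electron recoil using the constant ER yields quoted above. The singlet fraction and the unit recombination probability are purely illustrative assumptions; the actual values follow the Guo and McKinsey formalism and differ between NR and ER tracks.

ION_YIELD_ER = 22.7   # electron-ion pairs per keV (constant ER yield)
EXC_YIELD_ER = 10.2   # direct excitations per keV (constant ER yield)
F_SINGLET = 0.5       # assumed singlet excimer fraction (illustrative)

def s1_s3_er(recoil_keV, recomb_prob=1.0):
    # Excimers form via direct excitation and electron-ion recombination;
    # singlets decay promptly (S1), triplets decay late (S3)
    n_exc = EXC_YIELD_ER * recoil_keV
    n_rec = ION_YIELD_ER * recoil_keV * recomb_prob
    s1 = F_SINGLET * (n_exc + n_rec)
    s3 = (1.0 - F_SINGLET) * (n_exc + n_rec)
    return s1, s3

print(s1_s3_er(100.0))   # (S1, S3) excimer counts for a 100 keV ER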
We have plotted the events on the S1-S3 plane for incident neutrons to estimate the bands corresponding to NR and ER events. This is shown in the Fig. <ref>. Here the narrow band towards the left side is due to the ER events (from the γ-rays produced in the nuclear reactions), while the band towards the right side is due to the NR events. In the NR band, most of the events cluster within a curve-like region (indicated by red). This is a consequence of saturation of the S1 signal at larger recoil energies of ^4He, as illustrated in the Fig. <ref>.
Based on the above simulations, we can conclude that by combining the S1 and S3 scintillation signals arising from the fast and the slow decay of the ^4 He^*_2 excimer states, a discrimination between NR and ER events can be achieved. Our present simulation-based study does not have the scope to estimate the realistic background, which would constrain the corresponding discrimination threshold. The unusually long decay time (∼ 13 s) of the triplet state leading to the S3 signal may also severely limit the suitability of the proposed method.
§ CONCLUSION
A pressurized ^4He detector, capable of detecting and discriminating between the neutrons and the γ-rays in a mixed radiation field, has been evaluated in detail both through source-based experiments and G4-based simulations.
The experimental studies based on the ^252Cf fast neutron source, the ^137Cs γ-reference source, and the ^241Am source have successfully demonstrated that a fast and a slow gate-based integration of the detector output pulses by the QDC is effective in achieving discrimination. Attenuation of the neutron band by the HDPE layers placed between the source and the detector also confirms the discrimination method followed. The charge contents of the pulses over the long gate effectively scale as the energy E_dep deposited by the recoiling ^4He in the case of the neutrons, or by the recoiling electrons in the case of the γ-rays. Based on the estimated fast neutron energy spectrum arising from the spontaneous fission of ^252Cf<cit.>, and the PHe detector efficiency data<cit.> for the fast neutrons, the measured E_dep spectral profile has been found to match reasonably well with the theoretically estimated profile, folded with the detector efficiency. The lower E_dep part of the spectral profile, however, does not match, due to the difference between the threshold of our measurement and that of the PHe detector efficiency data, which were based on time-of-flight measurements. The measured spectral distribution was calibrated over a range in terms of the neutron energy, based on the above match of the profiles. The same calibration was applied to the measured neutron spectra pertaining to the ^241Am source based measurements.
Parallel measurements of the γ-ray spectral profiles for the radioactive sources mentioned above were also performed. The γ-ray spectra, arising from the scintillation caused by the corresponding electron recoil events, reveal peak-like structures at the larger E_dep values, which are broadly correlated with the placement of HDPE absorbers between the source and the detector.
Interaction of the neutrons and the γ-rays with the HDPE layers was investigated in detail through Geant4 based simulation. Production of the neutrons from the ^241Am source was demonstrated to be largely due to the (α,n) reactions on the constituent elements of the glass substrate. The corresponding neutron spectrum was used for the event generation. Neutron attenuation by the HDPE layers was successfully reproduced, qualitatively matching the experimental results. Interactions with the pressurized helium active medium of the γ-rays emitted from the ^241Am source, as well as those produced through neutron absorption by the HDPE layers, were studied to find the origin of the peak-like structures in the measured spectra. It is found that a significant number of relatively high energy γ-rays (E_γ≳ 650 keV) produce recoiling electrons, which deposit around 280 keV of energy, resulting in the peak-like structure. The roles of the SS housing of the detector and its geometry were also investigated as part of the systematic studies in support of the above.
The scintillation signals from the pressurized ^4He have been estimated following the formalism of Guo and McKinsey<cit.>, and the S1 and S3 signals due to scintillation were worked out. We have demonstrated that a combination of the (S1, S3) signals can be used as a potential method to discriminate between the neutrons and the γ-rays. It may be useful for a future dark matter direct search experiment, provided a very low threshold on the NR-ER discrimination can be achieved. Alternatively, a time projection chamber (TPC) based on pressurized ^4He as the active medium should be investigated, where the possibility of discrimination using the scintillation (S1) and the ionization (S2) signals may be explored. As mentioned before, the relatively small number of available electrons makes the PHe detector much less sensitive to the γ-ray / electron recoil background.
We have estimated the sensitivity limit of a suitable pressurized ^4He detector (either as a scintillator or a TPC) if it can be configured and deployed as a detector in search of dark matter candidates. Considering low mass WIMPs as a suitable dark matter candidate, the exclusion limit is estimated assuming zero background, which makes this an ideal scenario. Given that ^4He has a mass of about 77 kg when filled inside a 1 metre diameter × 1 metre long barrel at 180 bar pressure, the total exposure for a year-long physics run amounts to 77 kg·year. The WIMP-nucleon cross-section is calculated following the method described by Lewin and Smith <cit.> for the spin-independent case. It assumes a Maxwellian distribution for the WIMP velocity, with 220 km·s^-1 as the most probable value. The mean WIMP density is taken as 0.4 GeV·c^-2·cm^-3<cit.>. The recoil energy range is taken as 0.5 - 100 keV. The upper limit at 90% CL on the expected signal is determined to be 2.44 events in the absence of any observation, using the Feldman-Cousins method <cit.>. This gives an expected event rate of 0.032 kg^-1 year^-1 for the 77 kg·year exposure. The corresponding projected sensitivity limit is depicted in Fig. <ref>. The experiment will be optimally sensitive for ruling out the existence of a WIMP of mass ∼ 3.5 GeV·c^-2 at the 3.5×10^-44 cm^2 cross-section level. Obviously, this is an idealized situation; the presence of background will push the projected sensitivity limit up further.
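The zero-background exclusion estimate described above can be reproduced to order of magnitude with a short script. The sketch below implements a simplified Lewin-Smith spin-independent rate (Maxwellian halo with v_0 = 220 km/s, no Earth or escape velocity, unit nuclear form factor, a fair approximation for the light ^4He nucleus); it is an illustrative cross-check under these stated assumptions, not the exact calculation behind the figure.

import numpy as np

RHO, V0, C = 0.4, 2.2e7, 2.998e10        # GeV/cm^3, cm/s, cm/s
A, M_N, M_P = 4, 4*0.9315, 0.9383        # He-4 target; masses in GeV
EXPOSURE, N_FC = 77.0, 2.44              # kg.yr; Feldman-Cousins UL
E_LO, E_HI = 0.5e-6, 100e-6              # recoil window in GeV
SEC_PER_YR, N_T = 3.156e7, 6.022e26/A    # seconds/yr; nuclei per kg

def rate_kg_yr(m_chi, sigma_p):
    # Spin-independent rate (kg^-1 yr^-1): coherent A^2 enhancement
    # and the exponential recoil spectrum of a simple Maxwellian halo
    mu_a = m_chi*M_N/(m_chi + M_N)
    mu_p = m_chi*M_P/(m_chi + M_P)
    sigma_a = sigma_p * A**2 * (mu_a/mu_p)**2
    e0r = 0.5*m_chi*(V0/C)**2 * 4*m_chi*M_N/(m_chi + M_N)**2
    r0 = (2/np.sqrt(np.pi)) * (RHO/m_chi) * sigma_a * V0 * N_T
    return r0 * (np.exp(-E_LO/e0r) - np.exp(-E_HI/e0r)) * SEC_PER_YR

def sigma_limit(m_chi):
    # 90% CL upper limit for zero observed events: the rate is linear
    # in sigma_p, so scale a reference cross-section to 2.44 events
    return 1e-44 * N_FC / (rate_kg_yr(m_chi, 1e-44) * EXPOSURE)

print(f"{sigma_limit(3.5):.1e} cm^2")    # ~ few x 1e-44 near 3.5 GeV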
§ ACKNOWLEDGEMENTS
We are grateful to Prof. Satyaki Bhattacharya of SINP, India, for his invaluable discussions on the statistical analysis for the sensitivity calculation. We would also like to acknowledge technical and operational support from Chandranath Marick of SINP, India. One of us (SS) would like to acknowledge financial support from the Department of Atomic Energy Raja Ramanna Fellowship (DAE-RRF) scheme to carry out this work.
|
http://arxiv.org/abs/2405.08905v1 | 20240514184520 | Enhancing Morphological Measurements of Cosmic Web with Delaunay Tessellation Field Estimation | [
"Yu Liu",
"Yu Yu",
"Pengjie Zhang",
"Hao-Ran Yu"
] | astro-ph.CO | [
"astro-ph.CO"
] |
Yu Liu; Yu Yu; Pengjie Zhang
liuyu9@tsinghua.edu.cn; yuyu22@sjtu.edu.cn; zhangpj@sjtu.edu.cn
MFs with DTFE
Yu Liu et al.
0000-0002-9734-906X]Yu Liu
Department of Astronomy, Tsinghua University, Beijing, 100084, P.R. China
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, 200240, P.R. China
Key Laboratory for Particle Astrophysics and Cosmology (MOE)/Shanghai Key Laboratory for Particle Physics and Cosmology, P.R. China
0000-0002-9359-7170]Yu Yu
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, 200240, P.R. China
Key Laboratory for Particle Astrophysics and Cosmology (MOE)/Shanghai Key Laboratory for Particle Physics and Cosmology, P.R. China
0000-0003-2632-9915]Pengjie Zhang
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, 200240, P.R. China
Key Laboratory for Particle Astrophysics and Cosmology (MOE)/Shanghai Key Laboratory for Particle Physics and Cosmology, P.R. China
Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai, 200240, P.R. China
0000-0001-5277-4882]Hao-Ran Yu
Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, P. R. China
The density fields constructed by traditional mass assignment methods are susceptible to irritating discreteness, which hinders morphological measurements of cosmic large-scale structure (LSS) through Minkowski functionals (MFs). To alleviate this issue, fixed-kernel smoothing methods are commonly used in the literature, at the expense of losing substantial structural information. In this work, we propose to measure MFs with the Delaunay tessellation field estimation (DTFE) technique, with the goal of maximizing the extraction of morphological information from sparse tracers. We perform our analyses starting from matter fields and progressively extending to halo fields. At the matter field level, we elucidate how discreteness affects the morphological measurements of LSS. Then, by comparing with the traditional Gaussian smoothing scheme, we preliminarily showcase the advantages of DTFE for enhancing measurements of MFs from sparse tracers. At the halo field level, we first numerically investigate various systematic effects on MFs of DTFE fields, which are induced by finite voxel sizes, halo number densities, halo weightings, and redshift space distortions (RSDs), respectively. Then, we explore the statistical power of MFs measured with DTFE for extracting the cosmological information encoded in RSDs. We find that MFs measured with DTFE exhibit improvements by ∼ 2 orders of magnitude in discriminative power for RSD effects and by a factor of ∼ 3-5 in constraining power on the structure growth rate over the MFs measured with Gaussian smoothing. These findings demonstrate the remarkable enhancements in statistical power of MFs achieved by DTFE, showing the enormous application potential of our method in extracting various key cosmological information from galaxy surveys.
§ INTRODUCTION
Ambitious on-going and up-coming cosmological surveys [e.g., HETDEX (), LSST (), Euclid (), 4MOST (), PFS (), SPHEREX (), DESI (), WFIRST (), and CSST ()], particularly the fifth-generation surveys [e.g., WST (), MSE (), MegaMapper (), and MUST[MUltiplexed
Survey Telescope (MUST): <https://must.astro.tsinghua.edu.cn/must/en/index.html>]], will provide high-precision maps of cosmic large-scale structure (LSS), which encode a wealth of valuable cosmological information about our Universe. Efficient extraction of this critical information will help us greatly deepen our understanding of many key fundamental questions in cosmology (e.g., dark energy properties, neutrino masses, gravities and inflation models, etc.). This necessitates the development of powerful statistical tools to characterize or quantify LSS properties from various angles.
Two-point statistics (i.e., the two-point correlation function and power spectrum) have played crucial roles in the analyses of LSS data, especially for studies on baryonic acoustic oscillations (e.g., ; ; ; ). These statistics characterize the clustering properties of LSS, but can only give a complete description for a Gaussian random field. In reality, LSS has evolved into a highly non-Gaussian field in the present-day Universe, which means that they cannot capture appreciable non-Gaussian information on small scales, thus requiring measurements of an infinite hierarchy of N-point statistics (i.e., N-point correlation functions and polyspectra). At present, accurately measuring and theoretically modelling higher-order N-point statistics is challenging[Still, some specific progresses have been made recently in this direction (see ; ; and Refs. therein).], due to the complexities in all possible combinations of multiplets. Moreover, even if the first N orders of information are extracted, other interesting information may still remain in higher-order terms.
Consequently, various summary statistics for non-Gaussian information have been proposed as potential supplements to two-point statistics, e.g., count in cells (; ; ), void statistics (; ; ), peak statistics (; ; ; ), Voronoi statistics (; ), and even scattering transform (; ), etc. In particular, morphological statistical methods [e.g., -Skeleton (; ), Betti numbers (; ; ), persistent topology (; ; ; ; ), minimal spanning tree (; ), cosmic web skeleton (; ), wavelet analyses (; ), shape statistics (; ), genus statistics (; ; ), Minkowski tensors (; ), Minkowski functionals (; ; ), etc.] are actively employed to characterize the geometrical and topological properties of LSS, providing alternative ways to capture higher-order information directly, in complementary to traditional N-point formalism.
As a conceptual generalization of genus (; ), Minkowski functionals (MFs) can elegantly describe the global morphological properties of LSS, and have been employed in various cosmological studies [e.g., detecting primordial non-Gaussianities (; ; ; ), serving as standard ruler (; ; ), constraining cosmological parameters (; ; ), probing neutrino masses (; ; ), analyzing effects of redshift space distortions (; ), testing cosmologies and gravities (; ; ), etc.]. One popular method [another method can also be found in the literature, i.e., the germ-grain approach (cf. ; )] for measuring MFs is the iso-density approach, which we will focus on in our study (see Section <ref>). The MFs measured in this way can be well theoretically modelled for Gaussian (; ; ) and weakly non-Gaussian (; ; ; ; ; ; ) fields [even for fields with RSDs (; )], showing distinct advantages over the germ-grain approach.
The iso-density approach estimates the four MFs with a series of excursion sets, which are specified by a series of iso-density contours of LSS. Therefore, this method relies heavily on the reconstruction of the underlying continuous density field from discrete point tracers. In realistic applications, the number densities of tracers (i.e., halos/galaxies) are low, thereby inducing significant shot noise (i.e., discreteness effects) in the tracer fields constructed by commonly used mass assignment methods [i.e., Nearest Grid Point (NGP), Cloud-in-Cell (CIC), Triangular-Shaped Cloud (TSC), etc.]. As a result, this renders the tracer fields exceedingly discontinuous. To alleviate this problem, smoothing methods with fixed kernel widths, at least larger than the mean tracer spacing, are commonly employed in previous studies. These methods can help eliminate noise components and produce nicely smoothed continuous fields. However, the recipes of fixed smoothing also erode the texture of the underlying density distribution, i.e., discard substantial structural information below the scales of the kernel widths, consequently downgrading the statistical power of MFs in various cosmological studies (cf. ; ).
In this study, we opt for the Delaunay Tessellation Field Estimator (DTFE)[Delaunay Tessellation Field Estimator (DTFE): <https://github.com/MariusCautun/DTFE>] (; ; ; ; ) to address this issue[Previously, attempted to use a wavelet-denoising method to alleviate the problem induced by fixed-kernel smoothing. Additionally, endeavored to boost the topological information of genus statistics with DTFE.]. This method helps to recover the largest number of structural elements from point sets, allowing the maximum amount of morphological information to be extracted with MFs. Therefore, the statistical power of MFs with DTFE (hereafter DTFE MFs) can be significantly enhanced over that of fixed smoothing methods (). We first demonstrate this at the matter field level and further extend it to the halo field level. In particular, for DTFE MFs of halo fields, we numerically explore the main systematic effects on their measurements, caused by finite voxel size, halo number density, halo weighting scheme, and RSDs, and showcase their strong discriminative and constraining power in RSD studies. Hopefully, the applications of DTFE will dramatically improve the performance of MFs in extracting various key cosmological information (e.g., neutrino masses (), modified gravities (), primordial non-Gaussianities (; ), and cosmological parameters (; ), etc.) from sparse tracers.
Historically, due to its exceptional characteristics, DTFE has been applied in various advanced pipelines for characterizing, identifying, and classifying structures of LSS. Examples include MMF () and NEXUS (), which leverage multi-scale geometry of structural components, and SpineWeb () and DisPerSE (; ), which utilize topology of cosmic web via Morse theory (; ; ). Also, DTFE has been employed in WVF (), the first watershed-based void finder that can identify cosmic voids irrespective of sizes and shapes. Subsequently, proposed a closely related technique, ZOBOV, based on Voronoi Tessellation Field Estimator (VTFE) (cf. ). Thereafter, later watershed-based void finding pipelines, such as VIDE (), REVOLVER (), and V^2 (; ), are built upon ZOBOV. In addition, DTFE can construct not only continuous density fields but also volume-covering velocity fields and their corresponding divergences, shears, and vorticities (; ; ).
This paper is structured as follows. In Section <ref>, we give a brief overview for MFs. In Section <ref>, we describe the basic DTFE algorithm for generating continuous fields from point sets. In Section <ref>, we introduce in detail the data used in this work. In Section <ref>, we preliminarily demonstrate the advantages of DTFE in measuring MFs from sparse point set, at matter field level. Then, at halo field level, we investigate various main systematic effects on DTFE MFs in Section <ref> and show strong statistical power of DTFE MFs in extracting cosmological information in Section <ref>. Finally, we give summaries and discussions in Section <ref>. Appendix <ref> shows the details of our strategy to determine the smoothing lengths used in Gaussian smoothing method. Appendix <ref> displays the RSD signals and the associated RSD signal-to-noise (S/N) ratios extracted from DTFE MFs. Appendix <ref> presents the technical details of Fisher forecasts employed in our study.
§ MINKOWSKI FUNCTIONALS
MFs are a family of morphological (i.e., geometrical and topological) descriptors with properties of additivity, motion invariance, and conditional continuity for any manifold in D-dimensional space. These descriptors are originally derived from the theory of convex bodies and integral geometry, and were first introduced into cosmology by Ref. to characterize the morphology (i.e., geometry and topology) of cosmic web. According to Hadwiger's theorem (), the topo-geometrical properties of any given manifold in D-dimensional space can be completely characterized by D+1 MFs. Thus it opens a unique way to comprehensively access all orders of N-point correlation information at once.
In LSS studies, the manifolds (𝕄) of typical interests are excursion sets (E_ν) of 3D cosmological scalar fields (i.e., dark matter fields or halo/galaxy fields),
E_ν={𝐱∈𝕄: ν(𝐱) ≥ν},
where E_ν is the set of all points 𝐱 with density ν(𝐱)≥ν and ν is the density threshold serving as diagnostic parameter for displaying morphological features. Four MFs quantify the enclosed volume (V_0) of E_ν, as well as the surface's area (V_1), integrated mean curvature (V_2), and Euler characteristic (V_3) of the set boundary ∂ E_ν (i.e., the iso-density surface),
V_0(ν)=1/|𝒟|∫_E_νd^3x,
V_1(ν)=1/6|𝒟|∫_∂ E_νdS(𝐱),
V_2(ν)=1/6π|𝒟|∫_∂ E_ν(1/R_1(𝐱)+1/R_2(𝐱))dS(𝐱),
V_3(ν)=1/4π|𝒟|∫_∂ E_ν1/R_1(𝐱)R_2(𝐱)dS(𝐱),
where R_1(𝐱) and R_2(𝐱) are two principal radii of curvature of the set's surface orientated toward lower-density regions. These quantifiers provide complete morphological description, including size (i.e., V_0 and V_1), shape (i.e., V_2), and connectivity (i.e., V_3) of the excursion sets. In particular, according to Gauss-Bonnet theorem, the last MF (i.e., Euler characteristic) is simply related to the number of isolated regions (balls), empty regions inside balls (bubbles), and holes in ball surfaces (tunnels) per unit volume,
V_3=1/|𝒟|(N_ball+N_bubble-N_tunnel),
having direct relation with genus (G=1-V_3), which is the first topological descriptor (e.g., ) widely used in cosmic web's topological analyses.
Two standard grid-based numerical algorithms to compute MFs from a regularly gridded density field[In the literature, several novel triangulation-based algorithms (cf. ; ; ), which estimate MFs by using triangulated iso-density surfaces, have also been proposed in succession, aiming to improve the accuracies of MFs' estimations.], i.e., the Koenderink invariant from differential geometry () and Crofton's formula from integral geometry (; ), have been developed in the literature (). In this work, we employ Crofton's formula to quote our results, because it is more stable and is the most commonly used method in previous works. All MFs are measured as functions of the density threshold ν≡ 1+δ with ν∈[0.003, 1000], where δ is the density contrast. The error bars are estimated via the standard errors of MFs, i.e., s_e=σ/√(n_f). Here, σ is the standard deviation of the MFs, calculated from n_f subfields obtained by equally subdividing the original field. For DM and halo fields, the numbers of subfields are n_f=4^3=64 and n_f=8^3=512, respectively. Since the error bars are too tiny to be visible in most scenarios, they are omitted in all figures in this paper, except for Fig. <ref> and Fig. <ref>. Note that all MFs are visualized with a logarithmic x-axis, considering that the probability distribution function of LSS roughly obeys a log-normal form at low redshift.
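For readers who wish to experiment, the first two MFs of an excursion set can be estimated from a gridded field with standard Python tools. The sketch below measures V_0 as a simple volume fraction and V_1 from a triangulated iso-density surface (via marching cubes, a triangulation-based alternative to the Crofton's formula adopted in this work); the toy Gaussian field, grid size, and box size are illustrative assumptions.

import numpy as np
from skimage import measure

def v0_v1(delta, nu, box_size):
    # Excursion set {1 + delta >= nu}: V0 is its volume fraction,
    # V1 = S / (6 |D|) with S the area of the triangulated surface
    field = 1.0 + delta
    v0 = np.mean(field >= nu)
    spacing = (box_size / delta.shape[0],) * 3
    verts, faces, _, _ = measure.marching_cubes(field, level=nu,
                                                spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    return v0, area / (6.0 * box_size**3)

# Toy example: Gaussian random field on a 64^3 grid in a 100 Mpc/h box
delta = np.random.default_rng(0).normal(0.0, 0.2, (64, 64, 64))
for nu in (0.8, 1.0, 1.2):
    print(nu, v0_v1(delta, nu, 100.0))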
§ DELAUNAY TESSELLATION FIELD ESTIMATION
Given the positions of a set of points 𝐱_i (generators) with weights w_i (i=1, 2, 3 … N) in D-dimensional space, the first step of the DTFE procedure is to self-adaptively tessellate the space into a union of space-filling and mutually disjoint Delaunay cells [i.e., simplexes, which are triangles (tetrahedra) in 2D (3D) space] using the Delaunay tessellation technique, which imposes that the circumsphere of each tetrahedron does not contain any generators. Under the assumption of uniform sampling (i.e., the point set is an unbiased sample of the underlying density field), the estimated density at each generator 𝐱_i is determined by the normalized inverse of the volume of its contiguous Voronoi cell V(𝒲_i) (see for more technical details),
ρ(𝐱_i)=(1+D) w_i/V(𝒲_i),
where the contiguous Voronoi cell 𝒲_i is the union of all adjacent Delaunay cells 𝒟_𝐱_i^adj with 𝐱_i as one of their D+1 vertices,
V(𝒲_i)=∑V(𝒟_𝐱_i^adj).
In real survey data (i.e, galaxy samples), the point set is actually always modulated by specified selection process (i.e., systematic non-uniform sampling), which can be quantified by a priori selection function ψ(𝐱_i) varying with sky position and redshift. In this scenario, the equation (<ref>) will be generalized to be
ρ(𝐱_i)=(1+D) w_i/ψ(𝐱_i) V(𝒲_i).
To obtain a continuous field, the density values inside a Delaunay cell, at position 𝐱=(x^1, x^2, x^3 … x^D ), then are estimated by multi-dimensional linear interpolation from the D+1 density values of cell's vertices ρ(𝐱_n) (n=0,1,2,3 … D)
[Here, the subscript n seems to conflict with the previous subscript i. To clarify this confusion for careful readers, we'd like to note that the subscript n used here is only reserved for labelling one of the D+1 vertices of a certain Delaunay cell.],
ρ(𝐱)=ρ(𝐱_0)+∇ρ·(𝐱-𝐱_0).
Here, ∇ρ is the linear constant gradient inside the cell, which can be calculated by
∇ρ=([ ∂ρ/∂ x^1; ∂ρ/∂ x^2; ∂ρ/∂ x^3; ⋮; ∂ρ/∂ x^D ])^T=𝐉^-1([ Δρ_1; Δρ_2; Δρ_3; ⋮; Δρ_D ]) ;
𝐉=([ Δ x^1_1 Δ x^2_1 Δ x^3_1 ⋯ Δ x^D_1; Δ x^1_2 Δ x^2_2 Δ x^3_2 ⋯ Δ x^D_2; Δ x^1_3 Δ x^2_3 Δ x^3_3 ⋯ Δ x^D_3; ⋮ ⋮ ⋮ ⋱ ⋮; Δ x^1_D Δ x^2_D Δ x^3_D ⋯ Δ x^D_D ]),
where Δρ_n = ρ(𝐱_n)-ρ(𝐱_0) and Δ x^j_n = x^j_n - x^j_0, for j=1, 2, 3 … D as well as n=1, 2, 3 … D. Employing this interpolation scheme for each Delaunay cell, then we can straightforwardly produce a continuous space-filling field (i.e., DTFE density field) on a regular grid, which exploits the same anisotropic and self-adaptive scaling features of Delaunay tessellation and guarantees mass conservation, i.e., its volume integral can reproduce the total mass,
∫ρ(𝐱) d𝐱=∑_i=1^N w_i=W=cst.
DTFE is in essence a first-order version of the natural neighbour interpolation procedure. Nevertheless, its self-adaptive nature enables the automatic capture of subtle structural elements in high-density regions with the maximum possible resolution. Meanwhile, it can also properly smooth low-density regions to avoid irritating discreteness. Consequently, DTFE can sharply reconstruct conspicuous structures and hierarchical features of the cosmic web from a spatial distribution of sparse tracers, which is of crucial importance for extracting the maximum amount of morphological information with MFs. In the following sections, we will illustrate its compelling performance with quantitative results to highlight these virtues.
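A compact 3D realization of the above procedure is sketched below with scipy, assuming uniform weights, no selection function, and non-periodic boundaries; the public DTFE code referenced above handles these complications and should be preferred in practice.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dtfe_density(points, n_grid, box_size):
    # Delaunay-tessellate the point set (D = 3)
    tri = Delaunay(points)
    simp = points[tri.simplices]                        # (M, 4, 3)
    # Tetrahedron volumes: |det of the three edge vectors| / 6
    vols = np.abs(np.linalg.det(simp[:, 1:] - simp[:, :1])) / 6.0
    # Contiguous Voronoi volume V(W_i): sum over adjacent tetrahedra
    vor = np.zeros(len(points))
    np.add.at(vor, tri.simplices.ravel(), np.repeat(vols, 4))
    rho = (3 + 1) / vor          # density (D+1) w_i / V(W_i), w_i = 1
    # Linear interpolation inside each Delaunay cell, sampled on a grid
    interp = LinearNDInterpolator(tri, rho, fill_value=0.0)
    x = (np.arange(n_grid) + 0.5) * box_size / n_grid
    gx, gy, gz = np.meshgrid(x, x, x, indexing="ij")
    return interp(gx, gy, gz)

# Toy example: 10^4 random points in a (100 Mpc/h)^3 box
pts = np.random.default_rng(1).uniform(0.0, 100.0, (10_000, 3))
grid = dtfe_density(pts, 64, 100.0)
print(grid.mean() * 100.0**3 / len(pts))   # ~ 1: mass conservation check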
§ COSMOLOGICAL N-BODY SIMULATIONS AND DATA SAMPLES
In this work, we adopt two high-resolution pure cold dark matter (DM) N-body simulations for different application requirements. One simulation is dubbed as WMAP_3072_600, which is from the CosmicGrowth simulation suite () realized by running an adaptive parallel particle–particle–particle-mesh (P^3M) N-body code (; ). This simulation incorporates N_p=3072^3 DM particles with mass resolution of 5.5×10^8 h^-1M_⊙ in a periodic cubic box with size of L=600 h^-1Mpc, adopting a WMAP cosmology, i.e., [Ω_c, Ω_b, h, n_s, σ_8] = [0.2235, 0.0445, 0.71, 0.968, 0.83]. Another one is called TianZero simulation (; ), realized by using a publicly-available P^3M N-body code CUBEP3M (). This simulation, parameterized with [Ω_c, Ω_b, h, n_s, σ_8] = [0.27, 0.05, 0.67, 0.96, 0.83], evolves N_p = 6912^3 DM particles with mass resolution of 4.6×10^8 h^-1M_⊙ in a cubic box of width L = 1200 h^-1Mpc. Both simulations assume flat cosmology, imposing Ω_Λ=1-Ω_m, where Ω_m = Ω_b + Ω_c.
The WMAP_3072_600 and TianZero are employed to perform DM and halo field level analyses, respectively. As for WMAP_3072_600, in addition to the complete sample of particles (at z=0) with number density of n̅_p=1.34×10^2 (h^-1Mpc)^-3, we also construct six (10%, 1%, 0.1%, 0.01%, 0.001%, 0.0001%) downgraded particle subsamples produced by random-down-sampling processes without repetition, to study shot noise effects on MFs' measurements. In TianZero, halos are identified by using CUBEP3M’s own on-the-fly spherical overdensity (SO) halo finder, which is set to resolve halo masses down to 2.3 × 10^11 h^-1M_⊙ with a minimum of 500 particles per halo. Similarly, to investigate the impacts of halo number densities on MFs' measurements, we construct three halo catalogues (at z=0.01) with number densities of n̅_h=1.6 × 10^-2 (h^-1Mpc)^-3, n̅_h=1.6 × 10^-3 (h^-1Mpc)^-3, and n̅_h=1.6 × 10^-4 (h^-1Mpc)^-3, by discarding halos with masses below mass cutoffs of M_min≃ 2.3 × 10^11 h^-1M_⊙, M_min≃ 3.2 × 10^12 h^-1M_⊙, and M_min≃ 3.1 × 10^13 h^-1M_⊙, respectively. The sample selection is chosen due to the consideration that galaxy samples in most observations are determined with faint flux limits (or low mass limits). Note that throughout this paper, subhalos are excluded from analyses.
§ THE MFS OF DM FIELDS
In this section, we strive to preliminarily demonstrate the superiorities of DTFE method in measuring MFs from sparse tracers. The analyses are performed at matter field level.
§.§ The shot noise effects on MFs
Based on the full and 10%, 1%, 0.1%, 0.01%, 0.001%, 0.0001% downgraded particle samples (cf. Section <ref>), we construct seven DM fields in real space, by employing a representative mass assignment method, i.e., CIC interpolation. Each field is interpolated on N_g = 1024^3 regular grid, with grid cell size L_g =L/(N_g)^1/3≃ 0.59 h^-1Mpc,
δ(𝐱) ≡n_p(𝐱)/n̅_p-1,
where n_p is the particle number density and n̅_p is its mean value. Due to the extremely high particle number density, the DM field constructed from the full particle sample can be safely regarded as the underlying noise-free DM field (hereafter noise-free field). The downgraded samples are unbiased samples of the underlying density field, such that the downgraded DM fields (hereafter downgraded fields) can be used to study pure shot noise effects on MFs. Then, we measure the MFs for each field (hereafter CIC MFs). For simplicity, the results exclusively for the noise-free field and the 10%, 1%, 0.1% downgraded fields are presented in Fig. <ref>.
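For reference, the CIC assignment described above can be written down in a few lines. The following is a minimal periodic-box sketch in Python (the grid size and the random toy points are illustrative), not the exact code used in our pipeline.

import numpy as np

def cic_density(points, n_grid, box_size):
    # Each particle is shared among its 8 nearest grid cells with
    # trilinear weights; returns the field 1 + delta = n_p / nbar_p
    counts = np.zeros((n_grid,) * 3)
    s = points / (box_size / n_grid) - 0.5      # cell coordinates
    i0 = np.floor(s).astype(int)
    f = s - i0                                  # fractional offsets
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) *
                     np.abs(1 - dy - f[:, 1]) *
                     np.abs(1 - dz - f[:, 2]))
                idx = (i0 + [dx, dy, dz]) % n_grid   # periodic wrap
                np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return counts / counts.mean()

pts = np.random.default_rng(2).uniform(0.0, 600.0, (100_000, 3))
one_plus_delta = cic_density(pts, 128, 600.0)
print(one_plus_delta.mean())                    # = 1 by construction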
We find that the shapes of MFs are severely distorted by shot noises. The issue becomes increasingly conspicuous as the particle number density decreases. This is because particles successively become poorer tracers of the underlying DM field, making the downgraded fields choppier (cf. the top panels of Fig. <ref>). Low-density regions are more vulnerable to the down-sampling process, since shot noises mostly affect these regions. In particular, in poorly sampled regions, with sparse or even no particles, density fields are severely discontinuous or even blank (i.e., 1+δ = 0) (cf. the top panels of Fig. <ref>). As particle number density decreases, blank regions become larger, thereby the volume fraction of non-zero density regions become smaller (cf. the left panel of Fig. <ref> and top panels of Fig. <ref>). Due to the existences of blank regions, the regions with 1+δ > 0 only account for a certain proportion of total box volume (cf. the left panel of Fig. <ref>). Thus, MFs have step changes[Note that the values of MFs at 1+δ = 0, which respectively are V_0=1, V_1=0, V_2=0, and V_3=0, cannot be plotted in Fig. <ref>, because of the logarithmic x-axis.], when 1+δ = 0 ⇒ 1+δ > 0.
Moreover, shot noises give rise to various spurious structures on the excursion sets at different density thresholds, depending on down-sampling level. Specifically, for 10% sampling, at low-density thresholds, more and larger isolated under-dense regions (bubbles) are produced by discrete-sampling process, such that V_1 and V_3 become larger and V_2 becomes smaller, relative to the case of noise-free field (the same below). At median density thresholds, we observe a lower negative minimum in V_3, indicating more porous structures (tunnels), produced by down sampling in the surfaces of excursion sets. As particle number density decreases, downgraded fields will be increasingly dominated by `meatball’ topology (i.e., a preponderance of isolated high-density regions). Because of this, for the extreme case of 0.1% sampling, V_2 and V_3 are always positive when 1+δ< 1. For the same case, we see that V_2 and V_3 have higher maximums at high-density thresholds, suggesting that the abundance of isolated regions is increased by breaking up structures into multiple objects as particles are taken out. For the intermediate case of 1% sampling, the corresponding results naturally fall somewhere in between those of the former two cases.
In conclusion, the discreteness effects pose challenges in accurately delineating iso-density contours (i.e., the excursion sets), thereby hindering the proper reflections of intrinsic morphological properties of particle-traced LSS with MFs.
Furthermore, in Appendix <ref>, interested readers can find additional results illustrating shot noise effects on MFs with Gaussian smoothing (cf. ; for genus scenarios).
§.§ The MFs with Gaussian smoothing
Gaussian smoothing with a fixed-kernel size is the most commonly used method to tackle the issue of shot noises (e.g., ; ; ; ). In this subsection, to produce continuous fields, we smooth the 1%, 0.1%, 0.01% downgraded fields with Gaussian window functions
W(𝐫)=1/[(2π)^3/2 R_G^3]exp(-|𝐫|^2/2 R_G^2)
of smoothing lengths R_G=2.93 h^-1Mpc, R_G=5.86 h^-1Mpc, and R_G=11.72 h^-1Mpc, respectively (cf. the middle panels of Fig. <ref>). Then, we measure the corresponding MFs of these smoothed fields. Hereafter, we refer to the MFs measured with this scheme as CIC+GS MFs. The smoothing lengths adopted here can provide enough smoothing for adequately suppressing discreteness effects without discarding too much structural information (cf. Appendix <ref> for the details of the R_G determinations). If the smoothing length is too small (e.g., R_G < d̅∼n̅_p^-1/3, where d̅ is the mean tracer spacing), the algorithm may tend to pick out isolated high-density regions (; ), leading to the so-called `meatball shift' (cf. ). Actually, the free parameter R_G is always empirically specified with the requirement of R_G≥d̅. Therefore, the determination of a smoothed field is not unique.
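In practice this convolution can be applied directly to the gridded field. A minimal sketch using scipy follows, assuming a periodic box and converting R_G to voxel units; our actual measurements use the setup described above.

from scipy.ndimage import gaussian_filter

def gaussian_smooth(field, r_g, box_size):
    # Gaussian window with periodic (wrap) boundaries;
    # r_g and box_size in the same units (e.g., Mpc/h)
    sigma_vox = r_g / (box_size / field.shape[0])
    return gaussian_filter(field, sigma=sigma_vox, mode="wrap")

# e.g., smooth the 1% downgraded field with R_G = 2.93 Mpc/h:
# smoothed = gaussian_smooth(one_plus_delta, 2.93, 600.0)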
The results for the 1% downgraded field are presented in Fig. <ref>, alongside the CIC MFs of the noise-free field serving as references. The CIC+GS method to some degree recovers the shapes of the MFs of the noise-free field, but shows much lower amplitudes for V_1, V_2, and V_3. The amplitude suppression depends on the adopted smoothing length, i.e., a larger smoothing length leads to more severe amplitude suppression (cf. Fig. <ref>). Moreover, after smoothing, the MF curves are evidently squeezed towards the direction of intermediate density thresholds (i.e., 1 + δ∼ 1), indicating that the smoothed fields become more uniform. This is because the abundance of structural elements is substantially reduced by the smoothing process, significantly diluting the intrinsic structures of LSS, especially for strongly clustered regions (cf. the middle panels of Fig. <ref>). On the other hand, the MFs of the smoothed fields approach those of Gaussian random fields, such that they may describe more the properties of the Gaussian kernels than the real morphology of the cosmic web (cf. ). Predictably, the recipe of CIC+GS will downgrade the discriminative and constraining powers of MFs in various LSS studies [e.g., neutrino mass effects (), semi-analytic models of galaxy formation (), etc.], thus limiting its applications.
§.§ The MFs with DTFE
The goal of our efforts is to maximize the extraction of true morphological properties with MFs from a set of points. This necessitates an optimal method for reconstructing continuous fields, such that their morphology mirrors the authentic underlying morphology of the point sets as faithfully as possible. As LSS exhibits a multiscale nature, the smoothing scheme of this method should possess the characteristics of anisotropy and spatial self-adaptivity. In fact, these characteristics align precisely with the iconic features of the DTFE method advocated in this paper. Additionally, DTFE also has the virtue of mass conservation and does not rely on any a priori parameters, ensuring the uniqueness of the constructed continuous fields (cf. Section <ref>). In this subsection, we explore how the DTFE method contributes to recovering the morphology of the noise-free field from downgraded particle samples.
We construct three DTFE fields with N_g=1024^3 regular grid cells in real space (cf. the bottom panels of Fig. <ref>), using the same downgraded samples as employed in Section <ref>,
δ_m(𝐱) ≡ρ_p(𝐱)/ρ̅_p-1,
and then directly measure their corresponding MFs. For comparison, the results for the 1% downgraded field are also presented in Fig. <ref>. As shown, the shapes of the DTFE MFs exhibit noticeable non-Gaussian features, revealed by the asymmetries of the MF curves. In contrast to the CIC+GS MFs, they more closely resemble the MFs of the noise-free field, with much higher amplitudes for V_1, V_2, and V_3. This improvement is attributed to DTFE's capability to resolve a larger number of structural elements from the point distribution (cf. the bottom panels of Fig. <ref>). Nevertheless, these amplitudes are comparatively smaller than those of the CIC MFs of the noise-free field, particularly at low-density thresholds, due to the inevitable losses of information[Note that the amplitudes of DTFE MFs will be further suppressed for the cases of downgraded fields with lower particle number densities, resulting from information losses (cf. the bottom panels of Fig. <ref> and Section <ref> for the scenarios of halo fields)]. On the other hand, as depicted in Fig. <ref>, the DTFE MFs instead exhibit slightly higher amplitudes at high-density thresholds. This is probably because DTFE can resolve more structural elements than the traditional CIC method in high-density regions, where the impacts of shot noise are less pronounced. To conclude, these results suggest the great advantages of the DTFE method in MFs' measurements, i.e., making morphological information more accessible and ultimately providing strong statistical power (cf. Section <ref>).
§ SYSTEMATIC EFFECTS ON DTFE MFS OF HALO FIELDS
In realistic data analyses, the measurements of MFs are inevitably affected by many systematic effects. These effects can introduce diverse modifications to the shapes and amplitudes of MFs and should be accurately taken into account to draw any sensible conclusions. Hence, it is theoretically interesting to investigate these systematic effects both analytically and numerically. Previous related works mainly focus on genus statistics (e.g., ; ; ; ; ; ; ). In particular, finite pixel (voxel) size effects were analytically studied for 2- () and 3-dimensional genus (); the RSD effects were analytically investigated in the linear regime (); in the weakly non-linear regime, an analytic formula for the effects of non-linear gravitational evolution was provided (; ), which was later confirmed by (). For MFs, the RSD effects were recently studied numerically (), providing insights into the distinctions in CIC+GS MFs between redshift and real spaces.
Using a state-of-the-art simulation (i.e., TianZero; cf. Section <ref>), in this section, we investigate various dominant systematic effects on DTFE MFs, which are caused by finite voxel sizes, halo number densities, halo weighting schemes, and RSDs, with the goal to comprehend how these effects alter the morphological measurements with DTFE MFs. Note that, starting from this section, we perform the analyses for DTFE MFs at halo field level.
§.§ Systematic effects from finite voxel sizes
In the implementation of DTFE, the density field is ultimately sampled on a regular grid (cf. Section <ref>), which effectively eliminates the adaptive nature of DTFE below the scale of the voxel size. As the grid becomes coarser, certain fine-scale or faint structures of the cosmic web (e.g., substructures of voids, small filamentary features, etc.) tend to be insufficiently resolved. This deficiency results in amplitude drops for V_1, V_2, and V_3 (cf. and for the scenario of genus with Gaussian smoothing), irrespective of halo number densities, weighting schemes (cf. Section <ref>), etc. We demonstrate this in Fig. <ref>, where DTFE MFs are measured from halo fields in real space with grid resolutions of N_g=4096^3, N_g=2048^3, and N_g=1024^3 (corresponding to voxel sizes of 0.293 h^-1Mpc, 0.586 h^-1Mpc, and 1.172 h^-1Mpc), respectively. Here, we employ the case of uniform weighting with n̅_h=1.6 × 10^-2 (h^-1Mpc)^-3 to quote our results. In essence, the finite voxel size effects (i.e., smoothing effects caused by the grid window) are entirely numerical artifacts, which should be minimized by adopting sufficiently large grid sizes while keeping acceptable memory overheads. In the following, we choose N_g=2048^3 to construct halo fields.
§.§ Systematic effects from halo number densities
In observation, the morphological properties of LSS can only be measured from biased tracers. The situation becomes more complicated due to the entanglement between the effects of shot noise and halo/galaxy bias, as compared with the case of matter field (cf. Section <ref>). In our study, it is quite non-trivial to separate these two effects, as the effective masses of our halo samples, which essentially determine the biasing effects[It should be theoretically intriguing to explore `morphological bias' by analyzing MFs of halo samples within different mass bins (cf. the `topological bias' found in ). As this falls outside the scope of this paper, systematic studies are left for future research.], have intrinsic relations with halo number densities. On the other hand, certain degrees of smoothness are also inevitably introduced in DTFE fields due to the limited halo number densities, which determine the effective smoothing lengths of the sophisticated DTFE windows. Therefore, the systematic effects caused by halo number density are actually combined effects jointly determined by halo bias, shot noise, and DTFE smoothing.
To investigate the combined effects, we measure DTFE MFs of uniform-weighted halo fields in real space constructed from halo samples with different number densities. The results are presented in Fig. <ref>. We note that these effects are the strongest effects within our tested domain. As observed, a reduction in halo number density results in the compression of MFs along the 1+δ∼ 1 direction and leads to lower amplitudes for V_1, V_2, and V_3. This occurs because lower halo number densities induce larger smoothing effects, producing smaller root-mean-square (r.m.s) values of densities. Additionally, the decrease in halo number density also makes the halos less effective as tracers of the underlying matter field, naturally leading to the losses of intrinsic structural information of LSS. In particular, when halo number density becomes sufficiently low (e.g., the case of n̅_h=1.6 × 10^-4 (h^-1Mpc)^-3), V_3 is consistently nonnegative across all density thresholds (i.e., V_3 ≥ 0 for 1+δ∈ (0, +∞)), which indicates that the halo field is dominated by structures of isolated objects (i.e., `meatball' structures).
§.§ Systematic effects from halo weighting schemes
DTFE halo fields can be constructed under different halo weightings (cf. Section <ref>), which can yield different halo biases (). In this work, we consider three weighting schemes: uniform, mass (), and optimal[The `optimal weighting' refers to the weighting scheme that minimizes the stochasticity of halos with respect to the underlying dark matter (; ; ).] (; ) weightings, represented by the forms w_i(M) = 1, M_i, M_i + M_0, respectively. Here, M_0 is a free parameter, and we adopt M_0=3M_min[Note that this is an empirical relation found in , which may not hold under different conditions, e.g., using a different halo-finder algorithm (cf. ). Here, we simply employ this relation as a proxy for the optimal weighting in our tests.] (), where M_min is the mass cutoff (cf. Section <ref>). Since the last two weightings both depend on halo masses, we collectively denote them as mass-dependent weightings. These two weightings have been utilized in various cosmological studies [e.g., primordial non-Gaussianities (), growth rate of structure formation (), and initial condition reconstruction (), etc.], given that they can significantly improve the correlation between the halo field and the underlying matter field (cf. ; ; ).
Compared to uniform weighting, mass-dependent weightings tend to upweight the regions with higher halo abundance, as these regions are more likely to contain massive halos (cf. ; ). As a result, the density contrasts between high- and low-density regions are intensified in mass-dependent weighted halo fields, leading to the stretching of the corresponding MFs in two opposite directions (cf. Fig. <ref> and Fig. <ref>). Also, because halo fields are dominated by low-density regions, the MFs of mass-dependent weighted halo fields are visually shifted towards low-density thresholds. Moreover, we observe that the amplitudes of V_1, V_2, and V_3 for mass-dependent weightings are enhanced relative to the case of uniform weighting. This implies that mass-dependent weightings presumably aid in resolving more structural elements when estimating halo fields, consistent with previous findings that mass information can enhance the correlation between the halo field and the underlying matter field (; ; ).
§.§ Systematic effects from RSDs
In galaxy surveys, LSS is actually mapped in redshift space, where the positions of galaxies are misrepresented due to peculiar velocities, induced by gravitational field, along the line of sight (LOS). This phenomenon is referred to as redshift space distortions (RSDs). The RSDs blur density field in redshift space, leading to a striking anisotropic feature in the direction of LOS. On larger scales, coherent infall of galaxies produces squashed pancake-like distortions, known as Kaiser’s effect (). Whereas, on smaller scales, peculiar velocities of bound objects tend to generate elongated structures, known as fingers-of-God (FOG) effect (). In cosmology, these effects provide a generic way to probe peculiar velocity field, and are commonly used to measure the growth rate of structure formation.
To investigate the effects of RSDs on DTFE MFs, we compute the MFs of uniform-weighted halo fields with n̅_h=1.6 × 10^-2 (h^-1Mpc)^-3 separately in real and redshift space (cf. Fig. <ref>). In redshift space, the distant-observer approximation is adopted to obtain halo positions,
𝐬=𝐫+(1+z)𝐯_∥/H(z),
where 𝐫, z, 𝐯_∥, and H(z) are the halo position in real space, the redshift, the LOS component of the halo peculiar velocity, and the Hubble parameter, respectively. We observe that in redshift space, all curves of MFs are stretched in two opposite directions compared to their counterparts in real space, indicating that RSD effects increase the r.m.s values of densities (). Nevertheless, the main effect of RSDs is the decrease in amplitudes of V_1, V_2, and V_3.
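As a minimal sketch, the mapping of Equation (<ref>) can be applied to a halo catalog as follows, assuming comoving positions in h^-1Mpc, peculiar velocities in km/s, H(z) in km s^-1 (h^-1Mpc)^-1, and periodic boundary conditions along the LOS; the function name is ours:

```python
import numpy as np

def to_redshift_space(r, v, z, Hz, box_size, los=2):
    """Distant-observer mapping s = r + (1+z) v_los / H(z), applied to
    an (N, 3) position array `r` and velocity array `v`; positions are
    wrapped periodically along the chosen line-of-sight axis."""
    s = r.copy()
    s[:, los] += (1.0 + z) * v[:, los] / Hz
    s[:, los] %= box_size        # periodic boundary conditions
    return s
```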
Since valuable information about the velocity field is encoded in RSD effects, an accurate modelling of these effects on MFs would open up a unique window to extract information on the structure growth rate. In the linear regime, it was found that Kaiser's effect on the genus (equivalent to V_3) can be predicted by
G^(z)(ν)=3 √(3)/2√(C)(1-C) G^(r)(ν),
where G^(z)(ν) and G^(r)(ν) is the genus in redshift and real space, respectively. The parameter C is expressed as
C=1/31+6/5 f b^-1+3/7(f b^-1)^2/1+2/3 f b^-1+1/5(f b^-1)^2.
Here, b is the halo/galaxy bias, and f is the dimensionless linear growth rate, defined as
f ≡d ln D/d ln a≈Ω_m^4/7+Ω_Λ/70(1+Ω_m/2),
where D is the linear growth factor, and a is the expansion parameter (; ). Equation (<ref>) suggests that the extent of the amplitude decline caused by RSDs depends on the growth rate and the halo/galaxy bias, so this effect can be utilized to constrain these cosmological parameters.
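For reference, the amplitude factor implied by Equations (<ref>)-(<ref>) can be evaluated with a few lines of Python; this is a sketch and the function names are ours:

```python
import numpy as np

def growth_rate(Om, OL):
    """Approximation f = Om^(4/7) + (OL/70)(1 + Om/2) from the text."""
    return Om**(4.0 / 7.0) + (OL / 70.0) * (1.0 + Om / 2.0)

def kaiser_genus_ratio(f, b):
    """Linear-theory amplitude ratio G^(z)(nu)/G^(r)(nu), built from the
    C parameter defined above with x = f/b."""
    x = f / b
    C = (1.0 + 1.2 * x + (3.0 / 7.0) * x**2) / (
        3.0 * (1.0 + (2.0 / 3.0) * x + 0.2 * x**2))
    return (3.0 * np.sqrt(3.0) / 2.0) * np.sqrt(C) * (1.0 - C)
```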
DTFE MFs can naturally capture non-linear structure formation signatures induced by the FOG effect. This is because DTFE tends to preserve the maximum amount of non-linear structure, which is typically smoothed out in the CIC+GS scheme with a large R_G (cf. the (quasi-)linear-scale scenarios). Therefore, our results do not fit well with linear theory predictions (e.g., Equation (<ref>) under the Kaiser approximation; also cf. Fig. <ref>). Indeed, using N-body simulations, it was found that the amplitude of the genus is more suppressed than expected by linear theory, consistent with our results (cf. the bottom-right panel of Fig. <ref>). Nevertheless, in principle, we can still apply emulator-based approaches to parameter estimation (cf. ; ), where significant RSD signatures should be critical to improving the constraining power of MFs on cosmological parameters. This subject is beyond the scope of this work, and we defer the investigations to future studies.
§ THE STATISTICAL POWER OF DTFE MFS IN EXTRACTING COSMOLOGICAL INFORMATION
In this section, we proceed to illustrate the strong statistical power of DTFE MFs in extracting cosmological information. For practical purposes, we restrict our analyses to extracting the cosmological information encoded in RSDs. Similar to Section <ref>, we also present the results of CIC+GS MFs as references for comparisons, but with a different strategy for determining smoothing lengths. Here, the smoothing lengths are determined by the mean halo spacing, i.e., d̅∼n̅^-1/3_h. For our halo samples with number densities in descending order, they are R_G=6.8 h^-1Mpc, R_G=14.6 h^-1Mpc, and R_G=31.4 h^-1Mpc, respectively. Note that these smoothing lengths are traditionally regarded as the smallest smoothing lengths that can be employed for CIC+GS MFs. Therefore, the comparative results displayed in this section are relatively conservative.
§.§ The discriminative power for RSDs
In this subsection, we quantitatively assess the discriminative power of DTFE MFs for RSD effects. To this end, we calculate the differences in MFs of uniform-weighted halo fields with various number densities between real and redshift space,
Δ V_i( ν)=V_i^(z)(ν)-V_i^(r)( ν),
where V_i^(r)(ν) and V_i^(z)( ν), with i=0,1,2,3, represent the MFs in real and redshift space, respectively. The results are shown in Fig. <ref>, where the error bars are obtained by
σ = √(σ_z^2+σ_r^2).
Here, σ_z and σ_r denote the errors of MFs (cf. Section <ref>) in real and redshift space, respectively.
We see that, for V_1, V_2, and V_3, RSD signals in DTFE MFs are significantly higher than those in CIC+GS MFs, regardless of halo number density. Our results are consistent with those found in previous works, but show more pronounced signals in the DTFE case. Moreover, intriguingly, despite our analyses being performed at the halo field level, we find that the signs and trends of Δ V_i(ν) in our work are broadly the same as those found previously for CIC+GS DM fields; for interpretations of Δ V_i(ν), we refer interested readers to that work for more details. In Fig. <ref>, we also present the S/N ratios of RSD effects, defined as |Δ V_i| / σ_i. As shown, the amplitudes of RSD S/N ratios for DTFE MFs are also larger than those for CIC+GS MFs, except for |ΔV_0| / σ_0 with n̅_h=1.6 × 10^-2 (h^-1Mpc)^-3 (cf. the bottom subplot in the top-left panel of Fig. <ref>), where the amplitudes in the two cases are basically comparable.
In fact, comparing S/N ratio amplitudes is not an effective way for assessing the relative performance of these two methods in extracting RSD signals. A more suitable approach is to compare the areas between the S/N curves and the x-axis. For this reason, we calculate the mean RSD S/N ratios, which are defined as
| Δ V_i| / σ_i≡( ∫_ν_min^ν_max| Δ V_i| / σ_i dν)/(ν_max-ν_min),
where ν_min=0.003 and ν_max=1000. Note that the numerator on the right-hand side of Equation (<ref>) is exactly the area between the S/N curve and the x-axis within the range of [ν_min, ν_max]. The results are shown in Fig. <ref>. Expectedly, DTFE leads to remarkable improvements over CIC+GS in terms of discriminative power for RSD effects, by ∼2 orders of magnitude (cf. Fig. <ref>). Unquestionably, this is because DTFE MFs are more informative and thus more sensitive to any modifications in halo fields.
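In practice, Equation (<ref>) can be evaluated by trapezoidal integration over the measured threshold grid; the following sketch assumes `nu`, `dV`, and `sigma` are the measured thresholds, RSD signals, and errors:

```python
import numpy as np

def mean_rsd_snr(nu, dV, sigma, nu_min=0.003, nu_max=1000.0):
    """Mean RSD S/N over density thresholds: the area under |dV|/sigma,
    divided by (nu_max - nu_min), on the (sorted) threshold grid `nu`."""
    m = (nu >= nu_min) & (nu <= nu_max)
    return np.trapz(np.abs(dV[m]) / sigma[m], nu[m]) / (nu_max - nu_min)
```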
Additionally, due to information losses, |Δ V_i| / σ_i becomes lower as halo number density decreases, regardless of the scheme for measuring MFs. In particular, when the halo number density becomes too low, halo fields are dominated by structures of isolated objects, making the intrinsic topologies of these structures less susceptible to the deformations caused by RSDs (cf. Fig. <ref> and Fig. <ref>). On the other hand, we also notice a somewhat counter-intuitive result: for DTFE MFs with n̅_h=1.6 × 10^-3 (h^-1Mpc)^-3, |Δ V_i| / σ_i is higher than in the case with n̅_h=1.6 × 10^-2 (h^-1Mpc)^-3. This may be caused by a data truncation effect: density thresholds are limited to [0.003,1000] in our calculation, while there are still RSD signals above ν=1000 (cf. Fig. <ref>); for larger n̅_h, DTFE MFs capture signals from higher-density regions, but more of these signals are discarded by the truncation. As for CIC+GS MFs, they do not suffer from such a data truncation effect, so their corresponding results appear reasonable.
§.§ The constraining power on structure growth rate
In the analyses of the last subsection, we ignored the correlations between data points at different density thresholds. However, in reality, these correlations do exist and are quite significant. This can be seen from Fig. <ref>, where correlation matrices for DTFE MFs of uniform-weighted halo fields under various number densities are presented. One can observe that the correlation matrices exhibit certain regular patterns. By comparing Fig. <ref> and Fig. <ref>, we notice that the data points on MF-curve segments with the same trend are positively correlated, while the data points on rising MF-curve segments are negatively correlated with those on falling MF-curve segments.
In this subsection, by taking into account these correlations, we go a step further to investigate the sensitivities of DTFE MFs to cosmological parameters, i.e., the constraining power. As a case study, we conduct simple Fisher forecasts for the errors on structure growth rate fσ_8(z=0.01), a key cosmological parameter often constrained in RSD studies (cf. Appendix <ref> for technical details). Here, σ_8 is the amplitude of density fluctuations within a sphere of comoving radius R=8 h^-1Mpc, which can be obtained by
σ^2_8=1/2 π^2∫ P_m(k) |W(k R)|^2 k^2d k,
where
W(kR)=3[sin (kR)-kR cos (kR)]/k^3R^3
is the top-hat window function in Fourier space and P_m(k) is the matter power spectrum. In this study, Fisher forecasts are performed for each V_i(ν), under various halo number densities, and the results are shown in Fig. <ref>.
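For completeness, a minimal sketch evaluating Equation (<ref>) from a tabulated matter power spectrum (assumed input arrays; trapezoidal quadrature; the function name is ours):

```python
import numpy as np

def sigma8(k, Pk, R=8.0):
    """sigma_8 from a tabulated P_m(k), with k in h/Mpc and P in
    (h^-1 Mpc)^3, using the Fourier-space top-hat window above."""
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    return np.sqrt(np.trapz(Pk * W**2 * k**2, k) / (2.0 * np.pi**2))
```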
As shown, the predictive errors increase as the halo number density decreases, attributed to information losses. In particular, DTFE MFs yield much stronger constraining power, exhibiting a remarkable improvement by a factor of ∼ 3-5 over CIC+GS MFs within our tested domain. The results unequivocally demonstrate that DTFE MFs outperform traditional CIC+GS MFs in constraining cosmology. Again, this is due to the fact that DTFE MFs can capture more morphological information of LSS from the halo distribution. In Fig. <ref>, the results are obtained by utilizing V_i with 32 data points. As a consistency check, we also employ V_i with 64 data points for the Fisher forecasts (cf. Fig. <ref>). The results from both approaches are consistent, and they are specific to the survey volume of our simulation box, i.e., 1.728 (h^-1Gpc)^3. If the volume increases, the predictive errors will be further reduced. It is noteworthy that various numerical and observational systematic effects (e.g., data selection, irregular survey masks, and those investigated in Section <ref>) can affect the predictions, but they are unlikely to change our conclusions.
§ SUMMARIES AND DISCUSSIONS
In cosmology, constructing density fields from point sets is a basic task for performing grid-based analyses of LSS. The choice of a particular method is often a compromise between the desired field properties and the limitations of the method. In particular, constructing continuous fields is essential for specifying iso-density surfaces when estimating MFs. To achieve this, traditional mass assignment methods typically require tracer samples with high number densities to ensure sufficient sampling per grid cell. Otherwise, poorly sampled under-dense regions will be dominated by prominent shot noise, severely limiting the applicability of these methods to sparse datasets. Therefore, these methods are always utilized in conjunction with smoothing recipes, at the cost of erasing substantial structural information. This is clearly not an optimal solution, because the ultimate goal should be to preserve the intricate multi-scale patterns of LSS as faithfully as possible. A significant advancement toward this objective can be attained by leveraging the DTFE technique, which can produce piece-wise continuous fields with the unique features of being parameter-free, self-adaptive in scale, and mass-conserving.
In this work, we propose to optimize the extractions of morphological information from cosmic web tracers with DTFE MFs. We perform systematic analyses in a step-by-step manner, starting from matter field level and progressing to halo field level:
* At matter field level, we first investigate shot noise effects on CIC MFs, elucidating the challenges posed by severe discreteness in the density fields, constructed by traditional mass assignment methods, for proper morphological measurements from sparse tracers with MFs. Then, we measure CIC+GS MFs (i.e., the traditional scheme) and DTFE MFs (i.e., the new scheme) from the same downgraded particle samples and compare them to preliminarily demonstrate the superiorities of DTFE method in measuring MFs from point sets. For CIC+GS MFs, we explore the corresponding shot noise effects and propose a strategy for determining smoothing lengths to sufficiently eliminate shot noises without excessively erasing structural information.
* At halo field level, we first numerically study various dominant systematic effects on DTFE MFs, induced by finite voxel size, halo number densities, halo weighting schemes, and redshift space distortions. Then, we showcase the robust statistical power of DTFE MFs for extracting cosmological information encoded in RSDs. We find that DTFE MFs remarkably outperform traditional CIC+GS MFs by ∼ 2 orders of magnitude in discriminative power for RSD effects and by a factor of ∼ 3-5 in constraining power on the structure growth rate. This is because the DTFE scheme helps conserve the maximum morphological information of LSS from sparse tracers, rendering DTFE MFs more sensitive to cosmological parameters.
In view of the strong statistical power of DTFE MFs, we will employ this method to extract various critical cosmological signatures [e.g., neutrino masses (), modified gravities (), and primordial non-Gaussianities ()] imprinted on halo/galaxy fields in our ongoing projects.
The implementation scheme for measuring MFs proposed in this paper is conceptually related to other Delaunay-based methods developed in previous works (; ). These works compute MFs from triangulated iso-density surfaces, which are directly specified from the Delaunay tessellation of a point set, without interpolating the density field onto a regular grid. When density values at tessellation vertices in these methods are estimated in the same manner as DTFE, their performance in measuring MFs should be highly similar to that of our method, with the added advantage of being free from the finite voxel size issue. Alternatively, it might also be possible to evaluate MFs directly from alpha shapes of discrete tracers, which are specified via filtrations of the Delaunay tessellation (). Despite all that, the method presented in this work stands out by its simplicity and convenience in implementation, as it does not require any additional complex code designs for extracting MFs from triangulated isosurfaces. This is undeniably one of the main merits of our method.
Moreover, it is well known that Delaunay-based methods are sensitive to point perturbations, which lead to substantial rearrangements of the Delaunay tessellation. These methods also tend to produce prominent undesired spike-like artifacts due to highly elongated tetrahedra, which hinders the accurate identification of faint structures in the point distribution. These issues still seem to be particularly problematic for morphological studies of LSS. One potential solution [Recently, proposed a phase-space DTFE (PS-DTFE), a hybrid method trying to combine the advantages of phase-space density estimators (; ) and DTFE. However, the PS-DTFE field exhibits discontinuities at fold caustic surfaces. It also relies on prior knowledge of tracers before shell crossing, limiting its feasibility to simulated data.] to address these concerns is the use of an ensemble-based DTFE technique (). This method computes the mean DTFE field from an ensemble of point realizations obtained by perturbing the original point set following geometric constraints. It can be regarded as a natural generalization of DTFE and shares the same advantageous characteristics. Therefore, the measurement of MFs with ensemble-based DTFE can be easily implemented on top of our method and merits investigation in future studies.
§ ACKNOWLEDGMENTS
Y.L. would like to thank Cheng Zhao and Charling Tao for useful communications. We also thank the anonymous referee for their very detailed and helpful comments. This work was supported by National Science Foundation of China (No. 12303005, 12273015, 12273020, 11621303, 11890691), National Key R&D Program of China (No. 2023YFA1605601, 2023YFA1607800, 2023YFA1607802), and the science research grants from China Manned Space Project with No. CMS-CSST-2021-A03. Y.L. acknowledges supports from Shuimu Tsinghua Scholar Program (No. 2022SM173). This work made use of the Gravity Supercomputer at the Department of Astronomy, Shanghai Jiao Tong University.
§ THE DETERMINATIONS OF SMOOTHING LENGTHS
In this appendix, we illustrate our strategy for determining the smoothing lengths adopted in Section <ref> and display the effects of shot noise on CIC+GS MFs. In previous works, smoothing lengths were empirically determined with the restriction R_G ≥d̅∼n̅_tracer^-1/3, where n̅_tracer is the tracer number density. Actually, in our tests, setting R_G = d̅ cannot provide adequate smoothing for completely eliminating shot noise effects. Our strategy is to strike a balance between removing shot noise effects and retaining sufficient information. To achieve this, R_G is determined through an iterative, stepwise refinement process, such that the differences of MFs between the downgraded field and the noise-free field are controlled within a 1-σ deviation. We note that this strategy seems not to have been proposed before; therefore, it is worth exploring theoretically.
The results are visualized in Fig. <ref>, where MFs of noise-free field and three successive downgraded fields adopt the same R_G for each column. It explicitly illustrates the specified smoothing lengths, i.e., R_G=2.93 h^-1Mpc, R_G=5.86 h^-1Mpc, and R_G=11.72 h^-1Mpc for the 1%, 0.1%, and 0.01% downgraded fields, respectively. Meanwhile, we can also see that shot noises characteristically make the amplitudes of V_1, V_2, and V_3 increase (cf. ) and result in MF curves being stretched in two opposite directions. This is because discrete effects generate more pseudo structures on excursion sets, leading to choppier density fields with larger r.m.s of densities. This process for R_G determinations would also be applicable to the cases of halo fields. In consideration that it is not the main concern of our paper, we leave the systematic investigations on this topic to future investigations.
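The selection loop can be sketched as follows, where `measure_mfs` stands for an assumed MF-measurement helper (not part of any existing package) returning the MF values and their errors at the tested thresholds:

```python
import numpy as np

def refine_rg(measure_mfs, field_down, field_ref, rg_candidates):
    """Sketch of the iterative R_G selection: return the smallest
    smoothing length for which the downgraded-field MFs agree with the
    noise-free ones within 1 sigma at every density threshold."""
    for rg in sorted(rg_candidates):
        v_d, s_d = measure_mfs(field_down, rg)
        v_r, s_r = measure_mfs(field_ref, rg)
        if np.all(np.abs(v_d - v_r) <= np.sqrt(s_d**2 + s_r**2)):
            return rg
    return None   # no candidate passes; enlarge the candidate list
```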
§ THE RSD SIGNALS AND S/N RATIOS EXTRACTED FROM DTFE MFS
In this appendix, we provide the RSD signals (cf. Equation <ref>) and the associated RSD S/N ratios extracted from DTFE MFs (cf. Section <ref>). The results are shown in Fig. <ref>. The figure is displayed here because of its large size and because the main information it conveys is already effectively presented in Fig. <ref>.
§ THE METHODOLOGY OF FISHER FORECASTS FOR ERRORS ON STRUCTURE GROWTH RATE
In our work, a Gaussian likelihood is assumed, and the dependence of the covariance matrix on model parameters is ignored. Therefore, the elements of the Fisher matrix F can be written as
F_αβ=∂μ^T/∂θ_αC^-1∂μ/∂θ_β,
where α and β label the parameters of interest, μ, θ, and C are the mean of data vector x, vector of model parameters, and
data covariance matrix, respectively (cf. ). The elements of C are computed as
C_i j=∑_r=1^N(x_i^r-μ_i)(x_j^r-μ_j)/N-1,
where μ_i(j)=∑_r x_i(j)^r/N, i(j)=1, 2, 3 … n_b, r represents the r-th data realization, and N and n_b denote the number of data realizations and the number of data vector elements, respectively. Furthermore, we rescale the inverse covariance matrix C^-1 with a factor (N-n_b-2)/(N-1) to correct for the bias induced by the finite number of data realizations[The validity of this correction assumes Gaussian errors and uncorrelated data vectors. This is not strictly true for most scenarios. To minimize the impacts of this assumption, N needs to be sufficiently larger than n_b. ] (). Finally, the diagonal element of the inverse of the Fisher matrix, (F^-1)_ii, yields the lower limit to the error of the i-th model parameter (marginalized over the other parameters),
σ_θ_i^2 ≥ (F^-1)_ii.
In our scenario, data vectors refer to the MFs in redshift space V_i^(z)(ν), and data realizations refer to the V_i^(z)(ν) measured from n_f=8^3=512 subfields (cf. Section <ref>). Specifically, the covariance matrices are first calculated by using the 512 subfields with RSDs and then are rescaled by 1/n_f[This method is based on an implicit assumption that all subfields are statistically independent. This is not really satisfied in practice. Therefore, it only gives rough estimations (cf. ; ). Since we mainly focus on comparisons of the two schemes for measuring MFs, this issue, in principle, does not affect our conclusions. To solve this issue, one can use a large number of simulation realizations to give accurate but expensive covariance estimates. We leave this for future works.]. To calculate the derivatives in Equation <ref>, we construct two additional halo fields with modified RSDs by artificially increasing and decreasing the peculiar velocity of each halo by a factor of 0.03, respectively. Then, the response functions are given as Δμ=μ_v_+-μ_v_-=Δ V_i^(z)(ν), where subscript v_+ and v_- indicate the two cases of increasing and decreasing halo peculiar velocities, respectively. Moreover, in RSD studies, there is an approximate relational expression between structure growth rate fσ_8(z) and halo/galaxy velocity bias b_v (cf. ),
.δ(f σ_8)/f σ_8|_k, z≃-.δ b_v/b_v|_k, z.
Therefore, in our scenario, the change in fσ_8 can be expressed as Δ(f σ_8) ≃-Δ b_v/b_vf σ_8=-b_v_+-b_v_-/b_vf σ_8=-0.06f σ_8. At this point, we get
∂μ/∂θ≃μ(θ+dθ)-μ(θ-dθ)/2dθ=Δ V_i^(z)(ν)/Δ(f σ_8).
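Putting the above together, a one-parameter Fisher forecast can be sketched as follows; the 1/n_f covariance rescaling and the Hartlap-type correction follow the text, the function name is ours, and the sign of Δ(fσ_8) cancels in F:

```python
import numpy as np

def fisher_error_fs8(realizations, V_plus, V_minus, fs8, n_f=512):
    """One-parameter Fisher forecast for f*sigma_8 (a sketch).
    realizations: (N, n_b) array of V_i^(z) measured from the N subfields;
    V_plus/V_minus: mean data vectors with halo peculiar velocities scaled
    by factors (1 +/- 0.03), so |d(fs8)| = 0.06 * fs8 per the text."""
    N, n_b = realizations.shape
    C = np.cov(realizations, rowvar=False) / n_f      # rescale to full volume
    # Correction for the finite number of realizations.
    Cinv = np.linalg.inv(C) * (N - n_b - 2) / (N - 1)
    dmu = (V_plus - V_minus) / (0.06 * fs8)           # response function
    F = dmu @ Cinv @ dmu
    return 1.0 / np.sqrt(F)                           # forecast error on f*sigma_8
```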
In our calculation, we select n_b data points for each V_i(ν). These data points are evenly distributed within [ν_min,ν_max] in logarithmic coordinates. To avoid problems caused by non-invertible covariance matrices, the density threshold range [ν_lower,ν_upper] actually used should be narrowed as the halo number density decreases. This results in a decrease in the number of data points actually used in the calculations. We illustrate this in Fig. <ref>[We only display the results of n_b=64, considering the striking similarity between these results and those of n_b=32.] (cf. the unshielded regions) and Table <ref>. In order to draw conclusions as reliable as possible, two n_b cases are tested, namely n_b=32 (cf. Fig. <ref>) and n_b=64 (cf. Fig. <ref>). This provides a consistency check for our methodology. In general, in comparison with Fig. <ref>, the predictive errors in Fig. <ref> are relatively smaller, depending on the case. Specifically, as the number of data points increases, we find the following:
(a) For CIC+GS MFs, the predictive errors exhibit minimal changes, except for a few cases, e.g., the case of V_0 with n_h=1.6 × 10^-2 (h^-1Mpc)^-3;
(b) For DTFE MFs, the predictive errors remain relatively stable in the case of n_h=1.6 × 10^-2 (h^-1Mpc)^-3, but show a noticeable decrease in the cases of n_h=1.6 × 10^-3 (h^-1Mpc)^-3 and n_h=1.6 × 10^-4 (h^-1Mpc)^-3.
Nevertheless, the performance of DTFE MFs still significantly outperforms that of CIC+GS MFs, thus our conclusions do not change.
|
http://arxiv.org/abs/2405.09764v1 | 20240516021042 | Clearing time randomization and transaction fees for auction market design | [
"Thibaut Mastrolia",
"Tianrui Xu"
] | q-fin.TR | [
"q-fin.TR",
"math.OC"
] |
Clearing time randomization and transaction fees for auction market design
Thibaut Mastrolia, Tianrui Xu
===========================================================================
Flaws of the continuous limit order book mechanism raise the question of whether combining a continuous trading session with a periodic auction session would bring better efficiency. This paper goes further in designing a periodic auction when both a continuous market and a periodic auction market are available to traders. In a periodic auction, we discover that a strategic trader can take advantage of the information accumulated over the auction duration by arriving at the latest moment before the auction closes, increasing the price impact on the market. Such price impact moves the clearing price away from the efficient price and may disturb the efficiency of a periodic auction market. We thus propose and quantify the effect of two remedies to mitigate these flaws: randomizing the auction's closing time and optimally designing a transaction fees policy. Our results show that these policies encourage a strategic trader to send their orders earlier, enhancing the efficiency of the auction market, as illustrated with data extracted from Alphabet and Apple stocks.
Keywords: Microstructure, auction market design, market making, optimal stopping
§ INTRODUCTION
§.§ Periodic auction and continuous limit order book
Continuous limit order book (CLOB for short, or continuous double auction) and periodic auction (also called batch auction or call auction) are the two most commonly used electronic trading systems around the world. For example, the New York Stock Exchange, NASDAQ (U.S.), and the London Stock Exchange all use the continuous limit order book system during normal trading hours and switch to a periodic auction system for the opening and closing auctions to determine the open price and the closing price of each trading day. Cboe Europe runs both a continuous limit order book and a periodic auction during normal trading hours. CowSwap, a cryptocurrency exchange, uses periodic auctions to settle orders.
The trading mechanisms of the two systems are as follows. A continuous limit order book market executes incoming orders continuously, i.e., whenever a market order matches a limit order. Every matched order trades at a price that depends on its requested price and the price of the limit order it is matched to. In comparison, a periodic auction market executes incoming orders as a batch and applies a uniform price to all executed orders in this batch after a specific time horizon. To be more specific, a market order initiates an auction that remains open for a specific time interval until a terminal time, which is called the clearing time. During the auction, the exchange receives orders from market participants. Market participants give the exchange a proposed price at which they are willing to buy or sell the asset and a specific volume. When the auction closes, the exchange determines a clearing price set to maximize the number of fulfilled orders (or to minimize the imbalance). Every order may be executed at the clearing price instead of its proposed price. Due to this rule, limit buying orders with a proposed price below the clearing price and limit selling orders with a proposed price above the clearing price are not executed. Despite the differences between the two trading systems, a continuous limit order book can be considered a periodic auction whose duration equals 0 seconds, see for example <cit.>.
§.§ Comparison and main flaws of limit order book
The literature on promoting general market quality, discovering better trading mechanisms, and improving market competition goes back to the 60s; see <cit.>. The continuous trading system has the advantage of providing ”immediate execution”. No one likes to wait; in <cit.>, it is shown empirically that people prefer to trade in a continuous market instead of an auction market. However, such immediacy also creates a problem, especially after the emergence of high-frequency traders. <cit.>, <cit.>, and <cit.> question the efficiency of the limit order mechanism relative to periodic auctions. They study the efficiency of periodic auctions in monitoring high-frequency trading advantages and increasing market efficiency. <cit.> compares two highly correlated stocks from real data and finds that the continuous trading system creates arbitrage opportunities over small time intervals. These arbitrage opportunities can be caught by high-frequency traders and thus incite competition in speed rather than price. High-frequency trading has brought the execution time down from several seconds at the start of the 2000s to microseconds nowadays. <cit.> reaches a similar conclusion, using simulations to show that high-frequency traders are latency arbitrageurs and widen the bid-ask spread. <cit.> discusses the negative impacts of high-frequency trading and proposes using periodic auctions (they also propose pro-rata rules with continuous markets and randomized auction durations with periodic auctions).
Following these works, more studies focus on the advantages and disadvantages of continuous limit order books and periodic auctions. <cit.> use exchange message data to quantify the speed competition in <cit.>. However, restricting the competition in speed is only one of the characteristics of the periodic auction system compared to the continuous system. Recall that the other characteristic of a periodic auction differing from the CLOB is that the clearing price is set by combining the opinions of a batch of orders instead of just two. Such a characteristic reflects market supply and demand more comprehensively and thus could improve the price discovery process. <cit.> shows that optimally setting a clearing rule (price discovery and auction duration) for the periodic auction system enables the clearing price of most assets to be closer to the efficient prices compared to the continuous limit order book system. However, the continuous system could sometimes be optimal in terms of the above-mentioned price discovery process. <cit.> shows, using real data, that if we replace the continuous German Electricity Market with a frequent batch auction, there will be less traded volume but better price discovery (the price is less noisy and closer to the fundamental value) and a lower liquidity cost measured by the cost of a round trip (CRT). There are, of course, different opinions. In <cit.>, they show empirically that a sub-second frequent batch auction leads to a decline in adverse selection cost but an increase in relative spread and a decrease in information efficiency measured by the ”autocorrelation of midpoint returns”. One thing to note about all these works is that researchers use different assumptions, models, and measures to reach their conclusions, so seemingly contrasting conclusions do not necessarily imply a contradiction.
Echoing the conclusion of <cit.>, “One size does not fit all”. Neither the periodic auction nor the continuous limit order book is the best by all measures, and neither would benefit all affiliated groups. The interest of this paper is not to compare periodic auctions and CLOBs, but to study the possibility of a co-existence of the two systems. <cit.> proposes an Ad Hoc Electronic Auction Design (AHEAD) which allows traders to switch between continuous trading sessions and periodic trading sessions. They show that this design enables a less volatile clearing price and that traders, especially the smaller players, benefit from this design compared to a pure continuous system or a pure periodic auction system.
In addition to the CLOB and the periodic auction market, there have been focuses and advances in other trading mechanisms. Dark pools differ from CLOBs and auctions in that orders are not displayed to the public; see <cit.>, <cit.>, <cit.>, <cit.> for studies on whether dark pools harm or help market efficiency. See <cit.> for a latency floor design on CLOBs to limit high-frequency trading. Note also that cryptocurrency trading markets have interesting mechanism designs as well; see <cit.> for a design combining batch auctions with automated market makers.
We also want to mention additional works such as <cit.>, <cit.>, <cit.>, and <cit.>. Each of these articles studies the design of periodic auctions to answer the following question: What is the optimal auction duration? <cit.> uses simulation to show how fast and slow traders would choose between a continuous market and a periodic auction if the two markets run together; the model in this work does not consider strategic timing, though. <cit.> proposes to add a size discovery market (”workup”) along with a batch auction market to increase allocation efficiency. A size discovery market allows traders to exchange inventory at a fixed price, so traders need not worry about their price impact. The main differences between their model and ours are that they assume a strategic player in a batch auction submits a demand function instead of an order price or quantity, and that they focus on balancing the inventory level of each strategic player, as they assume that an equal distribution of inventory among traders is the most desired. Despite the differences, their proposed design is worth considering. Relating to <cit.>'s concern, <cit.> uses a Nash equilibrium model to show that strategic players in an auction lower demand and supply to avoid moving the clearing price away from their interests, and such behavior could lead to a loss of trading volume in the market.
§.§ Optimal policies to cure auction's inefficiencies and related works
In this paper, we are interested in furthering the design of AHEAD, the concurrence of a continuous market and a periodic auction market, and we focus specifically on the periodic auction part. We want to see whether a strategic trader could take advantage of the current setting when an auction is open. The main question raising our motivation is the following.
Would a strategic player disturbs market efficiency by strategically picking their arrival time?
If so, how could we improve the auction design to bring back efficiency?
The efficiency measure we use is a price discovery measure, the difference between the actual clearing price of an asset and its theoretical efficient price. To get a larger picture, “market efficiency” usually refers to either “external efficiency” or “internal efficiency” (see <cit.>). A market is externally efficient if its prices reflect all available information. Fama develops this definition in <cit.> (also see <cit.>). Relevant criteria to measure market quality include the auto-correlation of returns, the delay measure, and the Sharpe ratio (see <cit.> and <cit.>). Such efficiency is called external because it depends not on trading systems but on the outside world, such as how information is spread among traders. Internal efficiency refers to whether a market enables traders to trade at prices close enough to their desired prices. The difference can arise from the distance between the executed price and the true price (also called pricing error, price discovery measure, or price formation measure) and from transaction costs such as liquidation costs and the bid-ask spread. We adopt the price discovery measure because it suits our purpose the most and because most relevant work uses this measure (see all relevant works cited in the introduction and see <cit.>).
The main results of our paper are as follows. By studying the behavior of a strategic trader in a periodic auction, we emphasize that the trader benefits from arriving only at the end of an auction, right before it closes. Such a strategic choice of arrival time enables the trader to take advantage of all known information to submit a strategic price and have the greatest price impact on the market. Not only is such strategic timing unfair to other market participants, but the strategic pricing could also drive the clearing price away from the efficient price of the underlying asset if the strategic trader has an incorrect guess about the efficient price. It can be seen as a disturbance to market efficiency.
We propose two regulatory policies to respond to this mechanical problem inherent in periodic auction markets. First, we introduce a randomization design for the auction's closing time. Second, we introduce a transaction fee indexed on the arrival time of a strategic trader in the auction, i.e., the later a trader arrives, the more they pay. We prove that the randomization and the transaction fee design address the problem efficiently and bring better quality to the market. Note that the randomization of the closing time is mentioned in <cit.> and <cit.>. However, none of these works studies the possible effects of such a design or quantifies them. In reality, randomization of the closing time is implemented, so it is possible to conduct an empirical analysis of such a design. The London Stock Exchange adds a 30-second random period to the opening auction and the closing auction. Cboe randomizes the whole duration of its periodic auction, i.e., an auction might close in 0 to 100 milliseconds. However, we caution that conclusions from empirical analyses of real-world auction designs might not carry over to auction design in general, because real-world auctions have specific settings, including auction duration and priority rules. Excluding or changing any of these settings could lead to a different conclusion.
Additionally, we would like to remind that our model of a periodic auction adopts the most basic set of rules. Periodic auction markets in reality could be much more complicated. Apart from Cboe's priority rule we mentioned above, the opening and closing auction of NYSE and Nasdaq each add their own matching and execution rules. These rules could possibly limit many advantages of a periodic auction market; see <cit.> for an empirical analysis of NYSE and Nasdaq's closing auctions.
The structure of this study is the following. In Section 2, we introduce the auction mechanism and the modeling without fees or randomization. Section <ref> presents the main characteristics of the market, the mathematical model, and the information set available to the strategic trader. Section <ref> sets the clearing price rule for the considered asset, ensuring the largest number of matching orders (see Proposition <ref>). Section <ref> defines the optimization problem of the strategic trader without regulation (randomization of the clearing time and transaction fees), together with its impact on market quality. Section <ref> introduces the data set and the calibration of the relevant parameters for the study. Section <ref> presents the solution of the problem when the strategic trader has full information on the traded asset, illustrated with numerical results. Section <ref> studies the case where the strategic trader is imperfectly informed about the efficient price of the asset. Section <ref> turns to the impact of a clearing time randomization and a transaction fees policy on the strategic trader's behavior, the price impact, and the market quality. The bilevel optimization is first introduced, followed by the results considering only a randomization of the clearing time without fees, and then adding the fees. We consider two different problems from the exchange's perspective to improve market quality: either to reduce the price impact including the fees paid by the trader (Section <ref>), or to reduce the distance between the clearing price and the efficient price while benefiting from the fee structure (Section <ref>). Section <ref> concludes the study and provides future perspectives.
§ AUCTIONS MARKET MODELING WITH TRANSACTION FEES AND RANDOMIZATION
§.§ The market characteristics
We consider an auction to trade a risky asset starting at time 0 with duration T>0. We denote by P^cl_T the clearing price of the auction, determined by the exchange to maximize the number of trades at the clearing time T. During the auction's duration, limit orders arrive; each limit order i is characterized by a limit price P_i at which a trader is willing to buy or sell the asset and a volume determined by a supply function Q_i = K(P^cl_T - P_i). The parameter K>0 is the slope of the supply function, assumed to be fixed for each limit order. We assume that (P_i)_i≥ 1 is a family of independent normally distributed random variables with mean μ^mm and standard deviation σ^mm. Note that if Q_i ≤ 0, the order is a buying order; if Q_i ≥ 0, the order is a selling order. We assume that the efficient price of the risky asset, denoted by P^*, is a normal random variable with mean μ^* and standard deviation σ^*. We assume μ^mm = μ^* and σ^mm = σ^* = σ, for some σ>0. We model the arrival of these limit orders by a Poisson process M with rate λ>0, i.e., with cumulative intensity λ_t=λ× t. In other words, N_t : = M_t + 1 denotes the number of market makers active in the auction up to time t.[Here, "+1" practically means that there is at least one trader in the auction market for it to be open and theoretically avoids division by zero. This assumption is consistent with the existence of liquidity in a CLOB transferred to an auction at, for example, the end of the day, see <cit.>.] We model this process as limit orders pre-existing in a CLOB, set by market makers and sent continuously in the coexistence system of CLOB and auction market. These limit orders have been set in the CLOB and are transferred to the auction for execution as block trades.
We define a family of σ-algebras ℱ = {ℱ_t}_0 ≤ t ≤ T generated by the available information up to time t and composed with the number of limit orders arrived in the auction and their limit prices P_i that is ℱ_t := σ{N_t,(P_i)_i=1^N_t}.
We consider a strategic seller joining the auction at a deterministic time τ between 0 and T, aiming at optimally liquidating her position in the risky asset.[We could similarly assume that the strategic trader is a buyer, but we only consider the seller case, motivated by the optimal liquidation problem, for the sake of simplicity.] The strategic seller controls the exact time τ she arrives in the auction, together with the price P at which she is willing to sell the asset, and the direction of the trade (which is selling). In this case, the volume sent by this strategic seller is Q = K(P^cl_T - P). We assume that the price P=P_τ^μ, sent at time τ in the auction, follows a normal distribution with mean μ and fixed variance σ^2, where μ is a random variable controlled by the seller and measurable with respect to the information available to the trader. Note that while the variances for the market makers and the strategic seller are the same for the sake of simplicity, the strategic trader knows up to time t the number of arrivals N_t and the prices of these orders {P_i}_i=1^N_t. In particular, the strategic trader may be imperfectly informed about the efficient price of the asset, since μ is determined through ℱ, which does not take into account the efficient price. The seller uses the information available at time t to determine μ_t, so μ can be viewed as a function μ_t = μ(t, N_t,{P_i}_i=1^N_t). Denoting by P^μ_t the price the seller proposes when entering the auction at time t, we assume that P^μ_t follows a normal distribution 𝒩(μ(t, N_t,{P_i}_i=1^N_t),σ^2), where the function μ is controlled by the strategic seller when she enters at time t and sees N_t arrivals with associated prices {P_i}_i=1^N_t.
Since the strategic seller controls the direction she trades, her order would not be executed if P^cl_T < P^μ_t. However, since all other market makers do not control the directions of their trades, their orders will be executed for sure. The quantity the strategic seller trades is thus given by K(P^cl_T - P^μ_t) if P^cl_T > P^μ_t, and 0 otherwise.
The strategic trader does not necessarily know exactly the mean μ^* of the efficient price and the mean μ^mm of the other transferred limit orders. We denote by μ_g^* and μ_g^mm the strategic trader's estimates of μ^* and μ^mm, respectively. We denote by 𝔼_g the expectation when the means of P^* and P_i correspond to these estimates, and we denote by 𝔼 the expectation in the case μ_g^*=μ^* and μ_g^mm=μ^mm.
§.§ Clearing Price rule
The clearing price of an auction is set to maximize trading volumes. When there is no strategic trader, corresponding to the case when no traders control the direction of the proposed prices, this clearing price is the equilibrium between supply and demand, see <cit.> or <cit.>. That is
P^cl_T = ∑_i=1^N_T P_i /N_T,
which sets the clearing price to eliminate any imbalance between buy and sell orders.
Assume now that the strategic seller sends a price P^μ_t at time t to trade the asset at the clearing time T and will active only if the clearing price is above P^μ_t, we have the following result.
The clearing price of the auction is determined by
P^cl_T =
{ (∑_i=1^N_T P_i + P^μ_t)/(N_T + 1) if ∑_i=1^N_T P_i/N_T > P^μ_t
∑_i=1^N_T P_i/N_T otherwise.
The governing rule of setting a clearing price is that the price maximizes the traded volume.
Assume first that ∑_i=1^N_T P_i/N_T > P^μ_t; then (∑_i=1^N_T P_i + P^μ_t)/(N_T+1) > (N_T P^μ_t + P^μ_t)/(N_T+1) = P^μ_t. If we set P^cl_T = (∑_i=1^N_T P_i + P^μ_t)/(N_T+1), the order at price P^μ_t would be executed as a selling order. Then it follows from (<ref>) that P^cl_T is set as above.
Assume now that ∑_i=1^N_T P_i/N_T≤ P^μ_t. There are two possible cases: either P^cl_T < P^μ_t or P^cl_T ≥ P^μ_t. If P^cl_T < P^μ_t, then the strategic seller's order would not be executed anyway, so P^cl_T = ∑_i=1^N_T P_i /N_T follows from (<ref>). If P^cl_T ≥ P^μ_t, the number of executed buying orders is
N^buy = min{ K∑_i:P_i > P^cl_T (P_i - P^cl_T), K∑_i:P_i < P^cl_T (P^cl_T - P_i) + K (P^cl_T - P^μ_t) }
= K∑_i:P_i > P^cl_T (P_i - P^cl_T), because K∑_i=1^N_T (P_i - P^cl_T) ≤ KN_T(P^μ_t-P^cl_T) ≤ 0 ≤ K (P^cl_T - P^μ_t), so the first term in the minimum is the smaller one.
However, N^buy≤ K∑_i:P_i > ∑_i=1^N_T P_i/N_T (P_i - ∑_i=1^N_T P_i/N_T). This implies that the traded volume would be larger if the clearing price were set to ∑_i=1^N_T P_i/N_T. This violates the clearing price setting rule whenever P^cl_T > P^μ_t. Thus P^cl_T ≤ P^μ_t, and so P^cl_T = ∑_i=1^N_T P_i /N_T.
The clearing price (<ref>) could also be written as:
P^cl_T = ( ∑_i=1^N_T P_i + 1_{∑_i=1^N_T P_i/N_T > P^μ_t} P^μ_t )/( N_T + 1_{∑_i=1^N_T P_i/N_T > P^μ_t} ).
The clearing price P^cl_T depends on the strategic trader's input P^μ_t. In the following, we would write P_T^cl instead of P_T^cl(P^μ_t) for convenience; however, P_T^cl is a function of P^μ_t.
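For illustration, the clearing rule of Proposition <ref> can be written as a short Python function (a sketch; the names are ours):

```python
import numpy as np

def clearing_price(limit_prices, p_strategic):
    """Clearing price of the Proposition above: the strategic sell order
    at price p_strategic enters the average only when it would actually
    be executed, i.e. when the mean of the other limit prices exceeds it."""
    mean_mm = np.mean(limit_prices)
    if mean_mm > p_strategic:
        return (np.sum(limit_prices) + p_strategic) / (len(limit_prices) + 1)
    return mean_mm
```

For instance, clearing_price(np.array([10.0, 12.0]), 9.0) returns 31/3 ≈ 10.33, while clearing_price(np.array([10.0, 12.0]), 13.0) returns 11.0 and the strategic order goes unexecuted.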
§.§ Strategic Trader's optimization and market quality
§.§.§ Strategic trader optimization
The strategic seller sends at time τ a volume K(P_T^cl-P^μ_τ) in the auction to sell the asset under the condition P^cl_T > P^μ_τ. If executed, the value of the strategic seller's trade at time T is benchmarked against the efficient price. The seller's payoff at the clearing time is thus K(P^cl_T - P^μ_τ)(P^cl_T - P^*) if P^cl_T > P^μ_τ, and 0 otherwise.
For any (t,n,(p_i)_i=1^n) ∈ [0,T]×ℕ×ℝ^n, we define
μ̂(t,n,(p_i)_i=1^n) := max_μ𝔼_g[1_p_μ≤ P_T^cl K(P^cl_T - p_μ)(P^cl_T - P^*) | N_t = n, (P_i)_i=1^N_t = (p_i)_i=1^n ],
where p_μ refers to a normal random variable 𝒩(μ,σ^2).
In the proof of Theorem <ref> and remark <ref> below, we will show the uniqueness of μ̂. However, if equation (<ref>) outputs more than one max value, we define μ̂ as the minimum of these outputs.
We set
μ̂_τ := μ̂(τ,N_τ,(P_i)_i=1^N_τ),
P̂_τ=P_τ^μ̂∼𝒩(μ̂_τ,σ^2).
Since the strategic seller wants to maximize payoff, the seller's problem is written as:
V^∘=sup_τ V^∘(τ),
with
V^∘(τ)= 𝔼_g[1_P̂_τ≤ P_T^cl{ K(P^cl_T - P̂_τ)(P^cl_T - P^*) }].
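To give a feel for this objective, the following Monte Carlo sketch evaluates the seller's expected payoff for a fixed, non-adaptive choice of μ, using Apple's calibrated parameters from Section <ref> below; since μ̂_τ conditions on ℱ_τ and optimizes, this only provides a lower bound on V^∘(τ):

```python
import numpy as np

rng = np.random.default_rng(0)

def seller_payoff(mu, T=10.0, lam=1.0, K=10.0, mu_star=184.39,
                  sigma=1.76, n_mc=200_000):
    """Monte Carlo estimate of the seller's expected payoff for a price
    centred at a fixed constant mu (no conditioning on interim
    information, and true parameters assumed known)."""
    N_T = rng.poisson(lam * T, n_mc) + 1             # N_t = M_t + 1
    sum_P = rng.normal(mu_star * N_T, sigma * np.sqrt(N_T))
    P_star = rng.normal(mu_star, sigma, n_mc)        # efficient price
    P_mu = rng.normal(mu, sigma, n_mc)               # seller's submitted price
    mean_mm = sum_P / N_T
    executed = mean_mm > P_mu                        # execution condition
    P_cl = np.where(executed, (sum_P + P_mu) / (N_T + 1), mean_mm)
    payoff = np.where(executed, K * (P_cl - P_mu) * (P_cl - P_star), 0.0)
    return payoff.mean()
```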
Intuitively, the optimal arrival time of the trader should always be T, as the trader can then use more information to make decisions. The following theorem proves that the optimal time to arrive is indeed τ =T; the later the trader joins the auction, the better the payoff.
The strategic trader benefits from arriving as late as possible in an auction. In other words, for any 0≤ s≤ t≤ T we have V^∘(s) ≤ V^∘(t).
We have
V^∘(s) = 𝔼_g[1_P̂_s≤ P_T^cl K(P^cl_T - P̂_s)(P^cl_T - P^*) ]
= 𝔼_g[ 𝔼_g[1_P̂_s≤ P_T^cl K(P^cl_T - P̂_s)(P^cl_T - P^*) |ℱ_s ]]
= 𝔼_g{𝔼_g[ 𝔼_g[1_P̂_s≤ P_T^cl K(P^cl_T - P̂_s)(P^cl_T - P^*) |ℱ_t ] |ℱ_s ] }
where the last equality is based on ℱ_s ⊂ℱ_t.
Similarly,
V^∘(t) = 𝔼_g{𝔼_g[ 𝔼_g[1_P̂_t≤ P_T^cl K(P^cl_T - P̂_t)(P^cl_T - P^*) |ℱ_t ] |ℱ_s ] }.
As μ̂_s is ℱ_s-measurable and ℱ_s ⊂ℱ_t, μ̂_s is ℱ_t-measurable. Then by the definition of μ̂ in (<ref>),
𝔼_g[1_P̂_t≤ P_T^cl K(P^cl_T - P̂_t)(P^cl_T - P^*) |ℱ_t ] ≥𝔼_g[1_P̂_s ≤ P_T^cl K(P^cl_T - P̂_s)(P^cl_T - P^*)| ℱ_t ], ℙ-a.s..
Then V^∘(t) ≥ V^∘(s).
§.§.§ Market quality and exchange's viewpoint
While the strategic trader wants to maximize her payoffs, the exchange would benefit from an arrival of the strategic trader which minimizes the spread between the clearing price P^cl_T and the efficient price P^*. We denote by P^cl,τ_T the clearing price given by (<ref>) for P=P̂_τ, that is the clearing price if the strategic seller arrives at time τ. We have
P^cl,τ_T =
{ (∑_i=1^N_T P_i + P̂_τ)/(N_T + 1) if ∑_i=1^N_T P_i/N_T > P̂_τ
∑_i=1^N_T P_i/N_T otherwise.
We introduce two different measures (disutility functions) of market quality:
(MQ)(τ): = 𝔼[ |P^cl,τ_T - P^*|^2 ],
(MQ)^ρ(τ): = 𝔼[ exp(ρ|P^cl,τ_T - P^*|) ],
where ρ>0 is the risk aversion of the exchange with respect to the spread. As a benchmark and first-best case scenario, we assume that the exchange controls the arrival of the strategic seller. The exchange aims at solving
min_τ_reg MQ (τ_reg) = min_τ_reg𝔼[ |P^cl,τ_reg_T - P^*|^2 ],
or
min_τ_reg MQ^ρ (τ_reg) = min_τ_reg𝔼[ exp(ρ|P^cl,τ_reg_T - P^*|) ].
§.§ Data and numerical analysis
We now investigate the optimal arrival of the strategic seller solving (<ref>), together with the optimal deviation from the efficient price μ̂ proposed in the auction and the market quality given by (<ref>) or (<ref>). We refer to Appendix <ref> for the details of the computations performed in the numerical study of this section.
We set T = 10 and discretize the time span by assuming that traders only join the markets at times τ∈{1,2,...,9, 10}. We use trading data extracted from YahooFinance for Apple and Alphabet (Google) stock over the period Oct-2-2023 to Dec-29-2023 to calibrate the parameters μ^mm and σ. We consider three months' data to avoid being affected by any single period's abnormal behavior.
We set the mean μ^mm of P^* and {P_j}_j to be the average day price of the three month period considered. For each trading day, we compute each stock's day price by (Open Price + Close Price + High Price + Low Price)/4. We get Apple's μ^mm = 184.39 and Alphabet's μ^mm = 134.24.
As for the standard deviation σ of P^* and so P_t and {P_j}_j, we use the formula
σ^2 = 1/N∑_k=1^N (S_k - S_k-1)^2,
where S_k and S_k-1 are the prices of the stock on day k and day k-1, respectively, and N denotes the total number of days of the period. We obtain from the data set Apple's σ = 1.76 and Alphabet's σ = 2.11.
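A sketch of this calibration, assuming a hypothetical CSV exported from Yahoo Finance with Open/Close/High/Low columns (the column names and the file layout are assumptions on our side):

```python
import pandas as pd

def calibrate(csv_path):
    """Calibrate (mu_mm, sigma) from daily OHLC data as in the text."""
    df = pd.read_csv(csv_path)
    day_price = (df["Open"] + df["Close"] + df["High"] + df["Low"]) / 4
    mu_mm = day_price.mean()
    # sigma^2 = (1/N) sum_k (S_k - S_{k-1})^2, with S_k the day price.
    sigma = ((day_price.diff() ** 2).sum() / len(day_price)) ** 0.5
    return mu_mm, sigma
```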
When the strategic seller is imperfectly informed about the drift of the efficient price and the drift of the prices proposed by the other traders, i.e., about the values of μ^* and μ^mm, we work under two scenarios. We first consider the case where the strategic seller under-estimates these parameters and name this case Case (-), that is μ_g^*=μ^*-σ and μ_g^mm=μ^mm-σ. Symmetrically, we consider the case where the strategic seller over-estimates μ^* and μ^mm and name this case Case (+), that is μ_g^*=μ^*+σ and μ_g^mm=μ^mm+σ.
We further restrict the strategic trader's μ̂ to the bounded set [μ_g^* - 4σ,μ_g^* + 4σ].[Note that this assumption is not unrealistic in view of the existing literature, since most mathematical market making models require choosing a spread in a bounded interval. Moreover, in Remark <ref>, we see that μ̂ could go to infinity in some cases, so it is necessary to bound μ̂ when solving numerically.]
We assume that the slope of the supply function is K = 10 and take λ = 1 in consideration of the computation cost.
§.§ Strategic trader with full information: efficient but unfair market
In a perfect world setting, μ^*_g = μ^* and μ^mm_g = μ^mm, i.e. the strategic trader either guesses correctly or has insider information.
§.§.§ The stock exchange prefers a strategic trader to join the market
In this subsection, we emphasize the benefit for the exchange of attracting a strategic trader to the market. We modify the assumption in Section <ref> by assuming ℱ_T = σ{N_T, (P_i)_i=1^N_T,P^*}. Note that such a modification does not change the result of Theorem <ref>, which says that a strategic trader always joins at τ = T. We define P_T^cl,∅ to be the clearing price in an auction market when there is no strategic trader. According to (<ref>), we have
P_T^cl,∅ = ∑_i=1^N_T P_i /N_T.
Hence, the market quality when no strategic trader arrives in the auction is given by
MQ^∅ = 𝔼[ |P^cl,∅_T - P^*|^2 ].
Without a strategic trader, the exchange would prefer more limit orders in the auction. This is because MQ^∅ = σ^2(1 + (1-e^{-Tλ})/(Tλ)), which decreases monotonically with λ.
Assume that the strategic trader is either a seller or a buyer, i.e., the trader's objective is defined as
sup_τ𝔼[ K(P^cl_T - μ_τ)(P^cl_T - P^*)],
where μ_t = argmax_μ𝔼[ K(P^cl_T -μ)(P^cl_T - P^*)|ℱ_t]. Then MQ(τ̂) = 1/4 MQ^∅ < MQ^∅, where τ̂ is the optimal arrival time of the strategic trader optimizing (<ref>). In other words, the exchange prefers the arrival of a strategic trader in the auction to improve the market quality.
Since τ̂ = T, the strategic trader's problem is to find the ℱ_T-measurable optimizer μ̅ for
argmax_μ𝔼[{ K(P^cl_T - μ)(P^cl_T - P^*) }| ℱ_T]
= argmax_μ𝔼[ { K( (∑_i=1^N_T P_i + μ)/(N_T + 1) - μ )( (∑_i=1^N_T P_i + μ)/(N_T + 1) - P^* ) }| ℱ_T]
= argmax_μ𝔼[{ -N_T/(N_T+1)^2 μ^2 + μ( (1-N_T)∑_i=1^N_T P_i/(N_T+1)^2 + N_T/(N_T+1) P^* ) + ∑_i=1^N_T P_i/(N_T + 1)( ∑_i=1^N_T P_i/(N_T + 1) - P^* ) }| ℱ_T],
where the positive factor K has been dropped in the last line, since it does not affect the argmax.
The function f defined by
f(x) = 𝔼[ -N_T/(N_T+1)^2 | ℱ_T ] x^2 + 𝔼[ (1-N_T)∑_i=1^N_T P_i/(N_T+1)^2 + N_T/(N_T+1) P^* | ℱ_T ] x + 𝔼[ ∑_i=1^N_T P_i/(N_T+1)( ∑_i=1^N_T P_i/(N_T+1) - P^* ) | ℱ_T ]
= -N_T/(N_T+1)^2 x^2 + ( (1-N_T)∑_i=1^N_T P_i/(N_T+1)^2 + N_T/(N_T+1) P^* ) x + ∑_i=1^N_T P_i/(N_T+1)( ∑_i=1^N_T P_i/(N_T+1) - P^* ),
where the second line uses that all quantities involved are ℱ_T-measurable,
is maximized at its vertex
x = ( N_T(N_T +1)P^* - (N_T -1)∑_i=1^N_T P_i )/(2N_T).
By the symmetry of the parabola f, the optimizer is
μ̅ = ( N_T(N_T +1)P^* - (N_T -1)∑_i=1^N_T P_i )/(2N_T).
If the strategic trader sends price at μ̅, then
P_T^cl(μ̅) - P^* = 1/2( ∑_i=1^N_T P_i/N_T - P^*) = 1/2( P_T^cl,∅ - P^*).
Consequently,
𝔼[|P_T^cl(μ̅) - P^*|^2] =1/4𝔼[|P_T^cl,∅ - P^*|^2]=1/4 MQ^∅.
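As a numerical illustration, the following Monte Carlo sketch (with illustrative Poisson order flow and the Apple parameters above) verifies that sending μ̅ halves the clearing-price error sample by sample, so that the ratio of market qualities is 1/4.

# Monte Carlo sanity check of MQ(tau_hat) = MQ^0/4; parameter values are
# illustrative (Apple calibration, lambda = 1, T = 10 from the text).
import numpy as np

rng = np.random.default_rng(0)
mu_mm, sigma, lam, T = 184.39, 1.76, 1.0, 10.0
n_samples = 100_000
err_no_trader = np.empty(n_samples)
err_with_trader = np.empty(n_samples)
for i in range(n_samples):
    n = max(int(rng.poisson(lam * T)), 1)     # N_T limit orders (avoid 0)
    p_i = rng.normal(mu_mm, sigma, size=n)    # transferred limit orders
    p_star = rng.normal(mu_mm, sigma)         # efficient price P^*
    s = p_i.sum()
    mu_bar = (n * (n + 1) * p_star - (n - 1) * s) / (2 * n)
    p_cl0 = s / n                             # clearing price without trader
    p_cl = (s + mu_bar) / (n + 1)             # clearing price with mu_bar
    err_no_trader[i] = (p_cl0 - p_star) ** 2
    err_with_trader[i] = (p_cl - p_star) ** 2
print(f"ratio = {err_with_trader.mean() / err_no_trader.mean():.3f}  (theory: 0.250)")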
In this remark, we illustrate where μ̂ is achieved. Since τ̂ = T, the strategic seller's problem in (<ref>) is to find the optimizer μ̂ for
argmax_μ𝔼[1_{P^μ_T ≤ P_T^cl}{ K(P^cl_T - P^μ_T)(P^cl_T - P^*) }|ℱ_T ].
The difference between the above problem and problem (<ref>) is the presence of the indicator function 1_{P^μ_T ≤ P_T^cl}, and the fact that the trader sends P^μ_T centered around μ instead of sending μ itself. If we remove the indicator function from the above problem, we have μ̂ = μ̅ by the symmetry of the probability distribution of a normal random variable.
(1) Suppose for the chosen event ω∈ℱ_T, ( P_T^cl,∅ - P^*) ≥ 0. By Theorem <ref>, μ̂(ω) ≥μ̅(ω) due to the presence of the indicator function 1_P^μ_T ≤ P_T^cl which shifts μ̂(ω) upward (see appendix <ref> for more details).
(2) Suppose for the chosen event ω∈ℱ_T, (P_T^cl,∅ - P^*) < 0. For any given μ, when P^μ_T > P_T^cl we have 1_{P^μ_T ≤ P_T^cl} K(P^cl_T - P^μ_T)(P^cl_T - P^*) = 0, and when P^μ_T ≤ P_T^cl we have 1_{P^μ_T ≤ P_T^cl} K(P^cl_T - P^μ_T)(P^cl_T - P^*) = K(P^cl_T - P^μ_T)(P^cl_T - P^*) ≤ K(P_T^cl - P^μ_T)(P_T^cl,∅ - P^*) < 0. Thus, to maximize the objective (<ref>), the strategic trader would pick μ̂(ω) as large as possible, μ̂(ω) = ∞.
§.§.§ Numerical results and economical insights
The numerical analysis is presented in Table <ref>. We find the optimal τ̂_reg = 10 by solving (<ref>) or (<ref>), which implies that the regulator benefits from an arrival of a strategic trader with full information at the end of an auction. Figure <ref> shows that the absolute value of the slope of MQ^ρ increases when ρ gets larger. This implies that the exchange would prefer a late-arriving trader in an auction even more if the exchange is highly risk averse (higher ρ) on market quality spread. In addition, we find the optimal τ̂= 10 for the arrival of the strategic seller solving (<ref>) which confirms Theorem <ref> that traders prefer to arrive at the end.
The result τ̂_reg = τ̂ = 10 confirms the “Law of One Price”, which says that arbitrageurs bring price convergence to the efficient price.[Suppose the efficient price is $5 per share and the market price is $4 per share; then arbitrageurs would buy at $4 to sell later at $5. As they buy, they push the market price upward until it reaches $5, at which point arbitrageurs stop buying and thus stop moving the market price.] From this law, the exchange would prefer the strategic trader to arrive at a time that maximizes their arbitrage impact, which is τ = 10 when the trader knows everything about the market. In Table <ref>, two columns show that the strategic trader's arbitrage impact increases as τ increases: column `Strategic Seller' shows that the seller's expected payoff increases with the arrival time, and column `Price Impact' shows that the later the trader joins the auction, the more the trader moves the clearing price away from the other transferred limit orders' aggregated opinion.[Price Impact := 𝔼[|P_T^cl,∅-P_T^cl|^2]] From the last column, we see that the strategic trader exerts price-impacting power by proposing higher prices as τ increases. Note that the proposed price is always higher than the true mean of the efficient price of the underlying asset, implying that the seller wants to sell high to gain profits; in other words, the later the trader arrives, the larger the spread they propose.
The agreement τ̂_reg = τ̂ seems satisfying, but letting the strategic trader join only at τ = 10 raises several concerns. The strategic trader obviously uses time as a lever to gain an advantage over traders who joined earlier. Recall that our model of an auction market is a zero-sum game, so the greater the gain of the trader, the larger the loss of the other traders. This informational unfairness could result in general traders losing interest in the auction market and turning entirely to the continuous market. In addition, the price impact of the strategic trader is a concern. From the table, we see that the trader has the largest price manipulation power at τ = 10. This does not pose a problem in a perfect world with full information, but such a world is unrealistic, and this behavior leads to flaws in a less perfect world, as we discuss in the next section.
§.§ Imperfect information and inefficiency of auctions
In a more realistic framework, the strategic trader has a misconception of μ^* and μ^mm. In the previous case, the trader used price-impacting power to drag the clearing price towards the efficient price, but when the trader has a different target than the efficient price, the story changes.
We test two cases: case (-) when μ^*_g = μ^* - σ and μ^mm_g = μ^mm - σ and case (+) when μ^*_g = μ^* + σ and μ^mm_g = μ^mm + σ.[Note that Case (-) is more likely to happen than Case (+) since we are studying a seller not a buyer.]
The numerical analysis is presented in Table <ref>.[Alphabet's graphs carry the same spirit as Apple's graphs. For the sake of simplicity, we only analyze the data from Apple in this part.] In Case (-), we find the optimal τ̂_reg = 1 by solving (<ref>) or (<ref>), which implies that the regulator wants the strategic trader to arrive at the beginning of an auction. In this case, the optimal arrival time of the strategic trader disturbs market efficiency, since τ̂ = 10. In addition, we observe that the market quality worsens as τ increases, while the strategic seller's payoff increases with τ. Thus the exchange would prefer the strategic trader to arrive as early as possible to minimize the spread between the efficient price and the clearing price, while the trader prefers to arrive as late as possible to maximize expected payoff.
In Case (+), we find the optimal τ̂_reg = 1 when using the measure MQ^ρ with large ρ, such as ρ = 1 or ρ = 1.2, and τ̂_reg = 10 when using the measure MQ or MQ^ρ with small ρ. It is important to note that the difference between MQ(t) or MQ^ρ(t) across different t is not significant, see Figure <ref>. A possible explanation is that the trader tends to propose a high price due to the misconception μ^*_g = μ^* + σ, so that many of the high-price orders do not get executed because P_T^cl < P. As the trader's price manipulation power gets restricted in this way, the market quality measure varies little across different τ (the arrival time of the strategic trader). Following the same logic, the reason why we see a divided result (i.e., what τ̂_reg is) between large ρ and small ρ is that when ρ grows, the exchange becomes extremely sensitive to any possible difference between the clearing price and the efficient price. Thus small-probability events get exaggerated; for example, the event where P_T^cl is high above P^* and the strategic trader manages to propose a high P such that P < P_T^cl. The strategic trader is better able to impact the price and catch this opportunity when arriving at τ = 10 instead of τ = 1, resulting in MQ^ρ(τ = 10) > MQ^ρ(τ = 1). In addition, note that MQ^ρ(τ = 4) > MQ^ρ(τ = 10) even though the trader has a larger price impact at τ = 10. Recall from Section <ref> that the exchange prefers a trader to join an auction market; the trader has more information enabling a successful execution (i.e., P < P_T^cl) at τ = 10 than at τ = 4, which explains this relation. Overall, Case (-) deserves more attention than Case (+) because the misconception in Case (+) self-restricts the strategic seller's price-impacting power.
§ MONITORING POLICIES: TRANSACTION FEES AND CLEARING TIME RANDOMIZATION
§.§ Bilevel optimization between the exchange and the strategic trader
The previous section emphasized some flaws in auctions and a need to regulate the arrival behavior of traders in this type of market. To mitigate the conflict between the optimal arrival τ̂ from the strategic trader's viewpoint and τ̂_reg, the one that the exchange would prefer for market quality reasons, we investigate and provide a quantitative analysis of two tools: a transaction fee policy and a randomization of the clearing time. We denote by ξ(t) the transaction fee function, where t refers to the arrival time of traders, and by τ^cl the duration of an auction, assumed to be a random variable whose parameter is controlled by stock exchanges.
In this model, a strategic seller who arrives at time t in the auction pays ξ(t) per share of the asset traded. We then define, for any (t,n,(p_i)_i=1^n), the set M̂_t,n,(p_i)_i=1^n^ξ,τ^cl of optimizers
μ̂(t,n,(p_i)_i=1^n;ξ,τ^cl) ∈argmax_μ𝔼_g[1_{p_μ≤ P_τ^cl^cl} K(P^cl_τ^cl - p_μ)(P^cl_τ^cl - P^*-ξ(t)) | N_t = n, (P_i)_i=1^N_t = (p_i)_i=1^n ],
where p_μ refers to a normal random variable 𝒩(μ,σ^2). The problem of the strategic seller becomes
V^fee (ξ,τ^cl): = sup_τ V^fee_ξ,τ^cl(τ)
with
V^fee_ξ,τ^cl(τ)= 𝔼_g[1_τ≤τ^cl1_P̂_τ≤ P_τ^cl^cl{ K(P^cl_τ^cl - P̂_τ)(P^cl_τ^cl - P^*) - K(P^cl_τ^cl - P̂_τ)ξ(τ) }],
where[Note that if the set M̂_t,n,(p_i)_i=1^n^ξ,τ^cl is not reduced to one element, the choice of the optimizer does not affect the value function V^fee.]
μ̂_τ := μ̂(τ,N_τ,(P_i)_i=1^N_τ,ξ,τ^cl), and P̂_τ∼𝒩(μ̂_τ,σ^2).
We denote by ℳ^ξ,τ^cl the set of optimizers (μ̂,τ̂) such that τ̂ is optimal for (<ref>) and μ̂∈M̂_τ̂,N_τ̂,(P_i)_i=1^N_τ̂^ξ,τ^cl.
Assume that the strategic seller arrives in the auction at time τ̂, proposing a price P̂_τ̂. We denote by P^cl,τ̂_τ^cl the clearing price set by the exchange, defined by (<ref>) with P=P̂_τ̂ and T=τ^cl. The problem of the exchange depends on where its primary interest lies. If the exchange wants to improve the actual price paid by the trader, it aims at solving the bilevel optimization
Λ_0 : = min_ξΛ_0(ξ)
where
Λ_0(ξ) = min_τ^cl,(τ̂, μ̂)𝔼[ (|P^cl,τ̂_τ^cl - P^*| + ξ(τ̂) 1_P̂_τ̂≤ P^cl,τ̂_τ^cl )^2],
or by assuming that the exchange is risk averse
Λ^ρ_0: = min_ξΛ^ρ_0(ξ)
where
Λ^ρ_0(ξ)= min_τ^cl,(τ̂, μ̂)𝔼[exp(ρ(|P^cl,τ̂_τ^cl - P^*| + ξ(τ̂) 1_P̂_τ̂≤ P^cl,τ̂_τ^cl))],
subject to
(IC): (τ̂, μ̂)∈ℳ^ξ,τ^cl,
(R): V^fee(ξ,τ^cl) ≥γ/2𝔼_g[ 1_P̂_τ̂≤ P^cl,τ̂_τ^cl K(P^cl,τ̂_τ^cl - P̂_τ̂) ],
where γ is the difference between the best bid price and the best ask price. The constraint (IC) is called the incentive compatibility constraint and models the best-response action (τ̂,μ̂) of the strategic seller when the exchange announces a transaction fee ξ and a clearing time rule τ^cl. The constraint (R) is set to bound the transaction fee ξ and to ensure that the trader benefits from the auction and does not turn to the continuous trading market to avoid the transaction fee in the periodic auction market. With this reservation utility constraint, the auction remains more competitive than trading on the CLOB directly.
This problem can also be seen as that of a “trader-focused” exchange. We assume the exchange wants to minimize the total spread for the trader, where
total spread = MQ + transaction spread
= spread between efficient price and clearing price +
spread between clearing price and after-fee price (real executed price).
We define the transaction spread of a trader in this way: suppose the market clears at P_T^cl and the transaction fee is ξ per share, a buyer would pay P_T^cl + ξ to buy a share and a seller would receive P_T^cl - ξ to sell a share. Average transaction spread is thus 1/2[(P_T^cl + ξ) -(P_T^cl - ξ)] = ξ.
If the exchange focuses only on the market efficiency and wants to increase both fee gains and market quality, the problem becomes
Λ_0 : = min_ξΛ_0(ξ)
where
Λ_0(ξ) = min_τ^cl,(τ̂, μ̂)𝔼[ |P^cl,τ̂_τ^cl - P^*|^2 - ξ(τ̂) 1_P̂_τ̂≤ P^cl,τ̂_τ^cl ],
or by assuming that the exchange is risk averse
Λ^ρ_0: = min_ξΛ^ρ_0(ξ)
where
Λ^ρ_0(ξ)= min_τ^cl,(τ̂, μ̂)𝔼[exp(ρ(|P^cl,τ̂_τ^cl - P^*| - ξ(τ̂) 1_P̂_τ̂≤ P^cl,τ̂_τ^cl))],
subject to
(IC): (τ̂, μ̂)∈ℳ^ξ,τ^cl,
(R): V^fee(ξ,τ^cl) ≥γ/2𝔼_g[ 1_P̂_τ̂≤ P^cl,τ̂_τ^cl K(P^cl,τ̂_τ^cl - P̂_τ̂) ].
For the numerical solutions, we set the bid-ask spread γ by referring to the estimation method in <cit.>: for a period of N days, γ = 1/N∑_i=1^Nγ_i, where
γ_i = √(max{4(c_t- (l_t+h_t)/2)(c_t- (l_t+1+h_t+1)/2),0}),
c_t is the daily close log price, l_t is the daily low log price, and h_t is the daily high log price. By this method, we set Apple's γ = 0.0039 and Alphabet's γ = 0.0065.
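For reference, a short sketch of this spread estimator, assuming NumPy arrays of daily close, low, and high prices (variable names are ours):

# Sketch of the spread estimator gamma defined above from daily prices.
import numpy as np

def estimate_gamma(close, low, high):
    c, l, h = np.log(close), np.log(low), np.log(high)
    mid = (l + h) / 2.0                        # daily mid-range log price
    prod = 4.0 * (c[:-1] - mid[:-1]) * (c[:-1] - mid[1:])
    gamma_i = np.sqrt(np.maximum(prod, 0.0))   # per-day estimates gamma_i
    return float(gamma_i.mean())               # gamma = average over the period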
Note that for both market impact or market efficiency optimization problems, we focus solely on the spread and fee of the strategic trader. We do not include the spread and fee of the other transferred limit orders because we assume these traders are transferred from CLOB to the periodic auction for execution and thus do not face the transaction fee imposed in the periodic auction.
§.§ Randomization without fees
In this section, we focus on the solution to (<ref>) or (<ref>), (<ref>) or (<ref>) when ξ=0. Echoing the discussion in Section <ref>, we focus on Case (-), that is, a misconception μ^*_g = μ^* - σ and μ^mm_g = μ^mm - σ for the strategic trader. We assume that τ^cl is a Bernoulli random variable taking values in {9,10} with p = ℙ(τ^cl=9) = 1-ℙ(τ^cl=10) and p ∈ [0,1]. The optimization over τ^cl in both (<ref>) or (<ref>), (<ref>) or (<ref>) then reduces to optimizing p, that is
MQ : = min_p∈[0,1] MQ (p) ,
with
MQ (p)= min_(τ̂, μ̂)𝔼[ |P^cl,τ̂_τ^cl - P^*|^2 ],
subject to
(IC): (τ̂, μ̂)∈ℳ^0,τ^cl,
(R): V^fee(0,p) ≥γ/2𝔼[ 1_P̂_τ̂≤ P^cl,τ̂_τ^cl K(P^cl,τ̂_τ^cl - P̂_τ̂) ].
We recall that the strategic trader decides at time 0 when to arrive (i.e., chooses τ̂ before the auction starts). For both Apple and Alphabet, we observe in Table <ref> that the optimal p̂ is around 0.09.
With randomization set at p̂ = 0.09, the stock exchange successfully encourages the strategic trader to arrive earlier. We now have τ̂ = 9 instead of τ̂ = 10, which is the optimal arrival time without randomization. We also observe that the market quality improves from 3.6161 to 3.5850 for Apple and from 5.1885 to 5.1438 for Alphabet compared with Table <ref>. Looking closely at the results in Table <ref>, we observe that even a small p, as small as 0.09 (meaning that the auction has only a 0.09 probability of ending at 9 instead of 10), is sufficient to discourage the strategic trader from arriving at τ = 10. This explains why the optimal p̂ is very close to 0: the exchange prefers the case where the strategic trader arrives before the closing time, and at p = 0.09 there is only a 9% chance that the strategic trader arrives at the closing time of the auction, which happens to be 9.
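In practice, this calibration can be organized as a simple grid search over p around the trader's best response. The sketch below is schematic: best_response and market_quality stand in for the conditional-expectation computations of the Appendix and are not actual library routines; the grid resolution is an arbitrary choice.

# Schematic outer loop for calibrating the randomization probability p.
import numpy as np

def optimize_randomization(best_response, market_quality, grid=None):
    grid = np.linspace(0.0, 1.0, 101) if grid is None else grid
    results = []
    for p in grid:
        tau_hat, mu_hat = best_response(p)        # trader's reaction to the rule
        results.append((market_quality(p, tau_hat, mu_hat), p, tau_hat))
    mq, p_hat, tau_hat = min(results)             # exchange picks the best p
    return p_hat, tau_hat, mq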
§.§ Optimal transaction fees indexed on time to improve price impact for the trader
We now turn to the solutions of (<ref>) and (<ref>). As a classical result of contract theory and bilevel optimization, the shape of the contract ξ has to be specified in a discrete time framework. We assume that the exchange proposes two types of fees: either a linear fee indexed on the time of arrival of the strategic trader, ξ_ℓ(t) = at, or a square fee structure, ξ_s(t) = at^2, both combined with randomization of the closing time τ^cl. By selecting either a linear or a square fee structure, the bilevel optimization problem thus becomes
Λ_0 : = min_ξ∈{ξ_ℓ,ξ_s}Λ_0(a),
where for a choice of ξ∈{ξ_ℓ,ξ_s}
Λ_0(a)=min_τ^cl,(τ̂, μ̂)𝔼[ (|P^cl,τ̂_τ^cl - P^*| +ξ(τ̂) 1_P̂_τ̂≤ P^cl,τ̂_τ^cl )^2],
or by assuming that the exchange is risk averse
Λ^ρ_0: = min_aΛ_0^ρ(a)
where
Λ_0^ρ(a)=min_τ^cl,(τ̂, μ̂)𝔼[exp(ρ(|P^cl,τ̂_τ^cl - P_T^*| + ξ(τ̂) 1_P̂(ξ)≤ P^cl,τ̂_τ^cl))],
subject to
(IC): (τ̂, μ̂)∈ℳ^ξ,τ^cl,
(R): V^fee(ξ) ≥γ/2𝔼_g[ 1_P̂_τ̂≤ P^cl,τ̂_τ^cl K(P^cl,τ̂_τ^cl - P̂_τ̂) ].
As before, we assume that p = ℙ(τ^cl=9)=1-ℙ(τ^cl=10) and the exchange optimizes on the parameter p to optimize τ^cl.
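Before turning to the results, we note that this bilevel problem can be searched numerically in the same spirit as the randomization loop above. The following hedged sketch nests the trader's best response inside a grid search over the fee shape, the fee slope a, and the randomization probability p; best_response, exchange_cost, and reservation_ok are placeholders for the computations behind (IC), the exchange objective, and (R), not real library functions, and the grids are illustrative.

# Hedged sketch of the bilevel search over fee structures and clearing-time
# randomization; helper routines and grids are assumptions.
import numpy as np

def optimize_fees(best_response, exchange_cost, reservation_ok):
    candidates = []
    for shape in ("linear", "square"):
        for a in np.arange(0.0, 0.101, 0.005):
            fee = (lambda t, a=a: a * t) if shape == "linear" \
                else (lambda t, a=a: a * t ** 2)
            for p in np.linspace(0.0, 1.0, 11):
                tau_hat, mu_hat = best_response(fee, p)      # constraint (IC)
                if not reservation_ok(fee, p, tau_hat, mu_hat):
                    continue                                  # constraint (R)
                cost = exchange_cost(fee, p, tau_hat, mu_hat)
                candidates.append((cost, shape, a, p, tau_hat))
    return min(candidates)  # (cost, fee shape, a, p, trader's arrival time)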
The results are presented in Table <ref> with the optimal fee structure for Apple ('Stock'-'Apple' cells) and Alphabet ('Stock'-'Alphabet' cells). The second column “MQ measure” represents the choice of problem (<ref>) or (<ref>) for different values of ρ. The third column gives the optimal fee structure, while the fourth column gives the optimal value of p. The fifth column gives the optimal arrival time τ̂ of the strategic trader solving (<ref>) with the corresponding optimal fees and p value. The sixth column “Exchange gain” reports the value Λ_0 or Λ_0^ρ for different values of ρ, corresponding to the value function of the exchange, while the last column compares this value with the market quality MQ(τ̂) when the fee is 0 (no transaction fees).
From the last two columns of the table, we see that our transaction fee model improves market quality, since Λ_0 is always smaller than the market quality when ξ = 0. From the strategic trader's perspective, they now have an incentive to arrive earlier, preferring to arrive at time τ = 1 instead of τ = 10 as in the no-fee case. We can thus conclude that the added fee is effective in curing the flaws of a periodic auction system observed in Section <ref>. Note moreover that p = 0 is optimal, that is, the randomization of the clearing time becomes useless once transaction fees are added. As a matter of fact, the regulator calibrates the fees optimally to make the strategic seller arrive in the auction as early as possible. This removes the impact of randomizing the clearing time between τ^cl = 9 and τ^cl = 10.
We now turn to a deeper study of how our transaction fee model improves market quality for Apple's stock price.[Alphabet's graphs carry the same spirit as Apple's graphs. For the sake of simplicity, we only analyze the data from Apple in this part.] Both the linear and the square transaction fee structures encourage the strategic seller to arrive earlier in the market. From Figure <ref> (a) and (b), we see that as the fee increases (driven by increasing a), the strategic seller's optimal τ gradually declines from 10 to 1, and the decline rate seems to coincide with the structure of the fee model: Figure <ref> (a) shows a linear declining pattern and Figure <ref> (b) shows an accelerating declining pattern.
From Figure <ref>, we see the competition between the spread term |P_T^cl - P^*| and the fee term ξ in the stock exchange's utility functions Λ_0 and Λ_0^ρ. When the fees increase initially, the spread improves because the strategic trader is willing to arrive earlier; the decrease of the spread overpowers the increase in fees. However, after the fee reaches its optimal level, a = 0.03, the strategic trader picks the same τ̂ = 1 no matter how large a becomes. The exchange then has no need to increase the transaction fee further. We can see from Figure <ref> (b) that the minimum a encouraging the trader to arrive at τ = 1 is a = 0.03, which explains why the optimal fee structure is very consistent across the different utility measures we test.
§.§ Optimal transaction fees indexed on time: improving market quality while benefiting from the fees
We now turn to the solutions of (<ref>) and (<ref>). The bilevel optimization problem thus becomes
Λ_0 : = min_ξ∈{ξ_ℓ,ξ_s}Λ_0(a),
where for a choice of ξ∈{ξ_ℓ,ξ_s}
Λ_0(a)=min_τ^cl,(τ̂, μ̂)𝔼[ |P^cl,τ̂_τ^cl - P_T^*|^2 - ξ(τ̂) 1_P̂_τ̂≤ P^cl,τ̂_τ^cl ],
or by assuming that the exchange is risk averse
Λ^ρ_0: = min_aΛ_0^ρ(a)
where
Λ_0^ρ(a)=min_τ^cl,(τ̂, μ̂)𝔼[exp(ρ(|P^cl,τ̂_τ^cl - P_T^*| - ξ(τ̂) 1_P̂(ξ)≤ P^cl,τ̂_τ^cl))],
subject to
(IC): (τ̂, μ̂)∈ℳ^ξ,τ^cl,
(R): V^fee(ξ) ≥γ/2𝔼_g[ 1_P̂_τ̂≤ P^cl,τ̂_τ^cl K(P^cl,τ̂_τ^cl - P̂_τ̂) ].
As before, we assume that p = ℙ(τ^cl=9)=1-ℙ(τ^cl=10) and the exchange optimizes on the parameter p to optimize τ^cl.
In an informal way, we can see from (<ref>) or (<ref>) that
Exchange value function = market quality cost - fees,
or in other words
Market quality cost = Exchange gain + Fees' gain.
The results are presented in Table <ref>. The seventh column is the income generated by the transaction fees, that is, the part ξ(τ̂), calculated informally as column "MQ^ξ" minus "Exchange's Gain". The eighth column shows the market quality MQ(τ̂) under the optimal fees ξ, while the last column compares this value with the market quality MQ(τ̂) when the fee is 0 (no transaction fees).
Given that the exchange is both market quality and fee driven, we see that the transaction fee model gives satisfying results, as in the previous section. From the last two columns, we see that market quality improves, since MQ(τ̂) with the optimal transaction fees ξ, calibrated by solving the exchange's problems (<ref>) or (<ref>), is always smaller than the market quality when ξ = 0. The exchange also collects better fee gains from the strategic trader. From the strategic trader's perspective, they now have an incentive to arrive earlier, at time τ = 7, 8, or 9, instead of τ = 10 as in the no-fee case. Unlike Section <ref>, we find p = 0.5 optimal for Apple under several Λ_0^ρ measures. This is because under the fee structure 0.04t the trader's τ̂ changes across different p, while under 0.03t^2 the trader's τ̂ stays at 1.
Under Λ_0 and Λ_0^ρ, Apple's optimal fee structure is 0.04t. The reason is that when the fee is larger than 0.04t, the strategic trader keeps arriving at τ = 9 or τ = 10, at which market quality is worst. However, when the fee is smaller than 0.04t, the strategic trader is truly encouraged to arrive much earlier than τ = 8 or 9, thus improving market quality, but the fee gain decreases more. The exchange therefore finds a balance at 0.04t between better market quality and better fee gains.
§ CONCLUSIONS
We study the strategic arrival of a trader and show that a strategic trader always joins an auction at the last moment to have the greatest price manipulation power. Such behavior could impair fairness and market quality, especially if the trader has a misconception of the efficient price. We propose two solutions: randomizing the closing time and introducing time-dependent transaction fees. With randomization (a 91% chance that the auction closes at T = 10 and a 9% chance that it closes at T = 9), the strategic trader joins the auction before the last moment and market quality is improved. With transaction fees, the strategic trader enters the auction even earlier, which further improves market quality. We consider two possible interests of the exchange, improving market quality and making fee gains; under either consideration, our solution provides better results than without randomization or fees.
Our results certainly have limitations. We assume the presence of a single strategic trader instead of allowing multiple strategic traders to compete. In terms of model setting, <cit.> studies a general optimal control and stopping problem with discrete controls and proves the existence of an equilibrium in a game in which every player is strategic, without studying randomization or transaction fee policies. Our paper is a specific optimal control and stopping problem with discrete stopping times, and we assume all but one player are non-strategic. <cit.> sheds light on a possible extension of our model to the more general case with more than one strategic player in an auction. <cit.> studies the optimal execution strategy of a strategic trader in a continuous market whose orders have price impact, and finds that the existence of price manipulation strategies depends on the choice of model. This reminds us that changing certain settings of our model may change the behavior of the strategic trader and possibly our conclusions. Updating the model to include interactions of multiple strategic traders would probably lead to results similar to those of this paper: strategic traders would very likely still choose to arrive at the last moment to avoid sharing their views too early and to gain information from others to exploit. We also assume a discrete optimal stopping control instead of a continuous one; the continuous version of this problem is in the works. We assume the auction market clears all order imbalance, while in reality order imbalance exists. Finding a workable model allowing order imbalance is a future direction to consider. In addition, we model only one round of an auction. Running the auction for several rounds could reveal a larger negative impact of the strategic trader's arrival-timing strategy and a greater need to regulate the arrival of traders.
Finally, we did not mention priority rules for a periodic auction. This is because our model assumes zero order imbalance, so we see no need for a priority rule. In reality, order imbalance exists; for example, if A wants to sell 10 shares of stock, B wants to buy 4 shares, and C wants to buy 8 shares, B or C or both might only receive part of what they request. Priority rules then need to be set to divide the 10 shares. Cboe's periodic auction market assigns price priority over size priority over time priority; <cit.> also mentions that price priority should be given over time priority. In general, time priority should be the last to consider. Therefore, we argue that our observation (a strategic trader lacks an incentive to join early) would still be valid in the presence of order imbalance and priority rules. However, learning how to set priority rules for order imbalance could be a meaningful future study.
§ APPENDIX: NUMERICAL METHODS
§.§ Problem of a Strategic Seller
Recall that the objective of a strategic seller is V^∘ = sup_τ V^∘(τ). Fix (t, n, {p_i}_i=1^n) and recall that the price P(t, n, {p_i}_i=1^n) submitted by the strategic seller is a normal random variable.
Denote by q(p, x, y) the joint density function of P(t, n, {p_i}_i=1^n), P^*, and ∑_{j: τ_j ≥ t} P_j, where τ_j is the arrival time of the j-th market maker with price P_j. Each of the three random variables follows a normal distribution. For simplicity, assume the three normal random variables are mutually independent.
Define N_(t,T) := N_T - N_t. Assume N_(t,T) is independent of P(t, n,{p_i}_i=1^n) , P^*, {P_j}_τ_j ≥ t. Denote f_N_(t,T) to be the probability density function of N_(t,T).
Fix N_t = n and {P_i}_i=1^N_t = {p_i}_i=1^n. Then μ̂(t, n, {p_i}_i=1^n) is the optimizer of
sup_μ 𝔼[ 1_{P_μ ≤ ∑_i=1^N_T P_i/N_T} { -K N_T/(1+N_T)^2 · p_μ^2 + p_μ( K(1-N_T) ∑_i=1^N_T P_i/(1+N_T)^2 + K N_T P^*/(1+N_T) )
+ K(∑_i=1^N_T P_i)^2/(1+N_T)^2 - K P^* ∑_i=1^N_T P_i/(1+N_T) } | N_t = n, (P_i)_i=1^N_t = (p_i)_i=1^n ]
= sup_μ ∫ 𝔼[ ⋯ | N_t = n, (P_i)_i=1^N_t = (p_i)_i=1^n, N_(t,T) = m ] dN_(t,T)(m)   (same integrand as above)
= sup_μ ∑_m=0^∞ f_N_(t,T)(m) 𝔼[ 1_{P_μ ≤ (∑^n p_i + ∑^m P_j)/(n+m)} { -K p_μ^2 (n+m)/(n+m+1)^2 + K p_μ (∑^n p_i + ∑^m P_j)(1-n-m)/(n+m+1)^2
+ K p_μ P^* (n+m)/(n+m+1) + K(∑^n p_i + ∑^m P_j)^2/(n+m+1)^2 - K P^* (∑^n p_i + ∑^m P_j)/(n+m+1) } ]
= sup_μ ∑_m=0^∞ f_N_(t,T)(m) ∫_-∞^∞∫_-∞^∞∫_-∞^∞ 1_{p_μ ≤ (∑^n p_i + y)/(n+m)} { -K p_μ^2 (n+m)/(n+m+1)^2 + K p_μ (∑^n p_i + y)(1-n-m)/(n+m+1)^2
+ K p_μ x (n+m)/(n+m+1) + K(∑^n p_i + y)^2/(n+m+1)^2 - K x (∑^n p_i + y)/(n+m+1) } q(p_μ,x,y) d(p_μ,x,y),
where the second equality is due to independence.
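A possible numerical strategy for this optimizer, consistent with the derivation above, is to truncate the Poisson sum over m and estimate the remaining triple integral by Monte Carlo under the stated independence assumptions. The sketch below is ours; all helper names and default values are illustrative, and μ̂ is then obtained by maximizing the objective over a grid of μ values.

# Monte Carlo evaluation of the seller's conditional objective; the Poisson
# sum over m is truncated at m_max. All names and defaults are assumptions.
import numpy as np
from scipy.stats import poisson

def objective(mu, t, n, sum_p, T, lam, K, mu_mm, mu_star, sigma,
              m_max=40, n_mc=20_000, seed=1):
    rng = np.random.default_rng(seed)
    val = 0.0
    for m in range(m_max + 1):
        w = poisson.pmf(m, lam * (T - t))                    # f_{N_(t,T)}(m)
        p_mu = rng.normal(mu, sigma, n_mc)                   # seller's price
        x = rng.normal(mu_star, sigma, n_mc)                 # efficient price P^*
        y = rng.normal(m * mu_mm, sigma * np.sqrt(m), n_mc)  # sum of m orders
        s, n_tot = sum_p + y, n + m
        p_cl = (s + p_mu) / (n_tot + 1)                      # clearing price
        executed = p_mu <= s / max(n_tot, 1)                 # indicator
        val += w * np.mean(executed * K * (p_cl - p_mu) * (p_cl - x))
    return val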
§.§ Problem of the Regulator
𝔼[|P^cl,t_T - P^*|^2] = 𝔼[ | (∑^N_T P_i + 1_{P_t ≤ ∑^N_T P_i/N_T} P_t)/(N_T + 1_{P_t ≤ ∑^N_T P_i/N_T}) - P^* |^2 ]
= ∫ 𝔼[ ( (∑^N_T P_i + 1_{P_t ≤ ∑^N_T P_i/N_T} P_t)/(N_T + 1_{P_t ≤ ∑^N_T P_i/N_T}) - P^* )^2 | N_(t,T) ] dN_(t,T)
= ∫ 𝔼 𝔼[ ( (∑^N_T P_i + 1_{P_t ≤ ∑^N_T P_i/N_T} P_t)/(N_T + 1_{P_t ≤ ∑^N_T P_i/N_T}) - P^* )^2 | ℱ_t, N_(t,T) ] dN_(t,T) .
By the Disintegration Theorem <cit.>, the inner conditional expectation satisfies
𝔼[ ( · )^2 | ℱ_t, N_(t,T) ] = ∫ | (∑^N_t P_i + y + 1_{p ≤ (∑^N_t P_i + y)/(N_(t,T)+N_t)} p)/(N_(t,T) + N_t + 1_{p ≤ (∑^N_t P_i + y)/(N_(t,T)+N_t)}) - x |^2 q(p,x,y) d(p,x,y),
where p refers to P, x refers to P^*, y refers to ∑^N_(t,T) P_j.
§ APPENDIX: ILLUSTRATION OF REMARK <REF>
To better illustrate the proof, we draw in Figure <ref> the line
y = -(KN_T)x + K∑_i=1^N_TP_i - K N_TP^*.
For each point on the line, the x coordinate would be
x(p) = P_T^cl(p) - P^* = ∑^N_TP_i + p/N_T +1 - P^*
and the y coordinate would be
y(p) = K(P_T^cl(p) - p) = K(∑^N_TP_i + p/N_T +1 - p),
where p is the price sent by the strategic seller. The shaded area is the trader's gain.
The optimizer μ̅(ω) is achieved at the midpoint of the line segment, and P_μ̅(ω) is normally distributed as 𝒩(μ̅(ω), σ^2).
[Abdi and Ranaldo (2017)] Abdi, F., and Ranaldo, A. (2017). A Simple Estimation of Bid-Ask Spreads from Daily Close, High, and Low Prices. The Review of Financial Studies, 30(12), 4437–4480.
[Alfonsi and Blanc (2016)] Alfonsi, A., and Blanc, P. (2016). Dynamic optimal execution in a mixed-market-impact Hawkes price model. Finance and Stochastics, 20(1), 183–218.
[Aquilina et al. (2022)] Aquilina, M., Budish, E., and O'Neill, P. (2022). Quantifying the High-Frequency Trading “Arms Race.” The Quarterly Journal of Economics, 137(1), 493–564.
[Baldacci et al. (2023)] Baldacci, B., Manziuk, I., Mastrolia, T., and Rosenbaum, M. (2023). Market making and incentives design in the presence of a dark pool: a Stackelberg actor–critic approach. Operations Research, 71(2), 727–749.
[Brinkman and Wellman (2017)] Brinkman, E., and Wellman, M. P. (2017). Empirical Mechanism Design for Optimizing Clearing Interval in Frequent Call Markets. Proceedings of the 2017 ACM Conference on Economics and Computation, 205–221.
[Budish et al. (2015)] Budish, E., Cramton, P., and Shim, J. (2015). The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response. The Quarterly Journal of Economics, 130(4), 1547–1621.
[Canidio and Fritsch (2024)] Canidio, A., and Fritsch, R. (2024). Arbitrageurs' profits, LVR, and sandwich attacks: batch trading as an AMM design response. arXiv:2307.02074v5 [cs.DC].
[Derchu et al. (2020)] Derchu, J., Guillot, P., Mastrolia, T., and Rosenbaum, M. (2020). AHEAD: Ad Hoc Electronic Auction Design.
[Du and Zhu (2017)] Du, S., and Zhu, H. (2017). What is the Optimal Trading Frequency in Financial Markets? The Review of Economic Studies, 84(4), 1606–1651.
[Duffie and Zhu (2017)] Duffie, D., and Zhu, H. (2017). Size Discovery. The Review of Financial Studies, 30(4), 1095–1150.
[Fama (1970)] Fama, E. (1970). Efficient Capital Markets: A Review of Theory and Empirical Work. The Journal of Finance, 25(2), 383–423.
[Farmer and Skouras (2012)] Farmer, D., and Skouras, S. (2012). Review of the benefits of a continuous market vs. randomised stop auctions and of alternative priority rules (policy options 7 and 12). Manuscript, Foresight, Government Office for Science, UK.
[Fricke and Gerig (2018)] Fricke, D., and Gerig, A. (2018). Too fast or too slow? Determining the optimal speed of financial markets. Quantitative Finance, 18(4), 519–532.
[Gayduk and Nadtochiy (2020)] Gayduk, R., and Nadtochiy, S. (2020). Control-Stopping Games for Market Microstructure and Beyond. Mathematics of Operations Research, 45(4), 1289–1317.
[Garbade and Silber (1979)] Garbade, K. D., and Silber, W. L. (1979). Structural organization of secondary markets: Clearing frequency, dealer activity and liquidity risk. The Journal of Finance, 34(3), 577–593.
[Goldberg and Tenorio (1997)] Goldberg, L., and Tenorio, R. (1997). Strategic trading in a two-sided foreign exchange auction. Journal of International Economics, 42(3–4), 299–326.
[Graf et al. (2024)] Graf, C., Kuppelwieser, T., and Wozabal, D. (2024). Frequent Auctions for Intraday Electricity Markets. The Energy Journal, 45(1), 231–256.
[Griffin et al. (2010)] Griffin, J. M., Kelly, P. J., and Nardari, F. (2010). Do market efficiency measures yield correct inferences? A comparison of developed and emerging markets. The Review of Financial Studies, 23(8), 3225–3277.
[Jegadeesh and Wu (2022)] Jegadeesh, N., and Wu, Y. (2022). Closing auctions: Nasdaq versus NYSE. Journal of Financial Economics, 143(3), 1120–1139.
[Jusselin et al. (2021)] Jusselin, P., Mastrolia, T., and Rosenbaum, M. (2021). Optimal Auction Duration: A Price Formation Viewpoint. Operations Research, 69(6), 1734–1745.
[Kalay et al. (2002)] Kalay, A., Wei, L., and Wohl, A. (2002). Continuous Trading or Call Auctions: Revealed Preferences of Investors at the Tel Aviv Stock Exchange. The Journal of Finance, 57(1), 523–542.
[Kallenberg (2002)] Kallenberg, O. (2002). Foundations of Modern Probability, second edition. Springer.
[Liu and Chen (2020)] Liu, L., and Chen, Q. (2020). How to compare market efficiency? The Sharpe ratio based on the ARMA-GARCH forecast. Financial Innovation, 6(1), 1–21.
[Madhavan (1992)] Madhavan, A. (1992). Trading Mechanisms in Securities Markets. The Journal of Finance, 47(2), 607–641.
[Malkiel (2003)] Malkiel, B. G. (2003). The Efficient Market Hypothesis and Its Critics. The Journal of Economic Perspectives, 17(1), 59–82.
[Melton (2017)] Melton, H. (2017). Market mechanism refinement on a continuous limit order book venue: a case study. SIGecom Exchanges, 16(1), 72–77.
[Shreve (2004)] Shreve, S. E. (2004). Stochastic Calculus for Finance II: Continuous-Time Models. Springer.
[Wah and Wellman (2013)] Wah, E., and Wellman, M. P. (2013). Latency arbitrage, market fragmentation, and efficiency: a two-market model. Proceedings of the Fourteenth ACM Conference on Electronic Commerce, 855–872.
[Wah et al. (2016)] Wah, E., Hurd, D., and Wellman, M. (2016). Strategic market choice: Frequent call markets vs. continuous double auctions for fast and slow traders. EAI Endorsed Transactions on Serious Games, 3(10), 1–10.
[West (1975)] West, R. R. (1975). On the Difference between Internal and External Market Efficiency. Financial Analysts Journal, 31(6), 30–34. https://doi.org/10.2469/faj.v31.n6.30
[Ye (2024)] Ye, L. (2024). Understanding the impacts of dark pools on price discovery. Journal of Financial Markets, 68, 1–39.
[Ye (2011)] Ye, M. (2011). A Glimpse into the Dark: Price Formation, Transaction Cost and Market Share of the Crossing Network. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1521494
[Zhang and Ibikunle (2023)] Zhang, Z., and Ibikunle, G. (2023). The market quality effects of sub-second frequent batch auctions: Evidence from dark trading restrictions. International Review of Financial Analysis, 89, 102737.
[Zhu (2014)] Zhu, H. (2014). Do Dark Pools Harm Price Discovery? The Review of Financial Studies, 27(3), 747–789.
http://arxiv.org/abs/2405.08728v1 | 20240514161313 | Dimensionality reduction in bulk-boundary reaction-diffusion systems | Tom Burkart, Benedikt J. Müller, Erwin Frey | physics.bio-ph | physics.bio-ph
These authors contributed equally.
Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilians-Universität München, Theresienstraße 37, D-80333 München, Germany
These authors contributed equally.
Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilians-Universität München, Theresienstraße 37, D-80333 München, Germany
frey@lmu.de
Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilians-Universität München, Theresienstraße 37, D-80333 München, Germany
Max Planck School Matter to Life, Hofgartenstraße 8, D-80539 München, Germany
Intracellular protein patterns regulate many vital cellular functions, such as the processing of spatiotemporal information or the control of shape deformations.
To do so, pattern-forming systems can be sensitive to the cell geometry by means of coupling the protein dynamics on the cell membrane to dynamics in the cytosol.
Recent studies demonstrated that modeling the cytosolic dynamics in terms of an averaged protein pool disregards possibly crucial aspects of the pattern formation, most importantly concentration gradients normal to the membrane.
At the same time, the coupling of two domains (surface and volume) with different dimensions renders many standard tools for the numerical analysis of self-organizing systems inefficient.
Here, we present a generic framework for projecting the cytosolic dynamics onto the lower-dimensional surface that respects the influence of cytosolic concentration gradients in static and evolving geometries.
This method uses a priori physical information about the system to approximate the cytosolic dynamics by a small number of dominant characteristic concentration profiles (basis), akin to basis transformations of finite element methods.
As a proof of concept, we apply our framework to a toy model for volume-dependent interrupted coarsening, evaluate the accuracy of the results for various basis choices, and discuss the optimal basis choice for biologically relevant systems.
Our analysis presents an efficient yet accurate method for analysing pattern formation with surface–volume coupling in evolving geometries.
Dimensionality reduction in bulk–boundary reaction–diffusion systems
Erwin Frey
May 20, 2024
====================================================================
§ INTRODUCTION
Living organisms ensure their viability by precisely orchestrating a wide array of cellular functions, ranging from subcellular information processing to morphogenesis.
At the heart of these functions lie out-of-equilibrium molecular systems: proteins whose spatio-temporal organization in cells is driven by chemical interactions with other proteins and transport across the cell, commonly referred to as reaction–diffusion systems.
From these two building blocks, complex information processing systems can emerge that allow the transmission of chemical signals within the cell or encode temporal and spatial cues <cit.>.
Popular examples of such pattern-forming protein systems are the Min system of Escherichia coli which exhibits pole-to-pole oscillations in vivo <cit.> and spiral waves or labyrinth patterns in vitro <cit.>;
localization of the budding site via Cdc42 in Saccharomyces cerevisiae <cit.>;
or polarity establishment by PAR proteins in Caenorhabditis elegans <cit.>.
In dynamically deforming geometries, pattern-forming systems may gain the ability to engage in mechanochemical feedback loops that can give rise to even more interesting self-organizing dynamics and that play a key role in the regulation of many cellular functions.
Examples on the sub-cellular scale include the sensing and generation of membrane curvature by membrane-binding proteins <cit.> or the adaptive establishment of cell polarity <cit.>.
On the single-cell and tissue level, pattern formation in deforming geometries has been appreciated in the context of biochemical coordination of cell motility <cit.>, cell shape changes <cit.>, and the control of tissue morphogenesis <cit.>.
A central common feature of many pattern-forming systems in dynamic geometries is the coupling of two concurrently deforming domains – the cell volume (or bulk) and the cell membrane (boundary) – which models need to consider explicitly in order to accurately capture the mechanochemical coupling of the protein dynamics to the geometry <cit.>.
The realization of such systems in numerical simulations, however, poses two fundamental challenges: first, most numerical frameworks are not designed for solving reaction–diffusion dynamics in a deforming geometry, and including such deformations often degrades performance considerably.
Second, since the diffusion of membrane-bound proteins is typically slow compared to cytosolic proteins <cit.>, the length scale of protein patterns on membranes can be orders of magnitude shorter than the length scale of protein gradients in the cell volume or the length scale of deformations.
This requires a high spatial resolution of the membrane and its vicinity, resulting in a fine mesh with a high number of degrees of freedom that further slows down simulations.
A plethora of mathematical studies have focused on reaction–diffusion dynamics on the surface of a deforming geometry.
Popular methods that are used in such cases include specialized applications of the finite element method (FEM) <cit.>, level-set methods <cit.>, and mesh-free approaches <cit.>.
However, it is often challenging to generalize these approaches to systems with bulk–boundary coupling, i.e., systems with dynamics both in the volume and on the surface of the deforming geometry, as this requires handling the deformation effects in two different domains simultaneously and in a self-consistent manner <cit.>.
Many studies of biological systems handle this limitation by averaging the volume dynamics either over infinitesimal surface patches (projection) or over the entire volume (reservoir) <cit.>.
While this approach yields acceptable results when the volume dynamics are purely diffusive <cit.>, it was recently demonstrated that accounting for the bulk dynamics is crucial for accurate predictions in the presence of bulk reactions <cit.>.
This is because bulk reactions generically lead to protein concentration gradients in the cytosol which, by virtue of the bulk–boundary coupling, directly affect the protein dynamics on the membrane and thereby provide the basis for one type of geometry sensing <cit.>.
In studies committed to faithfully representing the reaction–diffusion dynamics in deforming geometries, the phase–field method has emerged as a promising strategy <cit.>.
Phase–field models represent the dynamic geometry as an indicator function, allowing for arbitrary shape changes and even topological changes <cit.>.
However, this benefit comes at the cost of either requiring fine mesh resolution everywhere in the simulated domain or using adaptive mesh refinement <cit.>.
Furthermore, coupling bulk dynamics to surface dynamics in phase–field models raises additional challenges that are subject of ongoing development <cit.>.
Here, we present an approach that exploits a priori knowledge about cytosolic gradients for modeling reaction–diffusion dynamics with bulk–boundary coupling in deforming geometries.
Rather than solving the ensuing partial differential equations (PDEs) in a meshed bulk, we propose to project the bulk dynamics onto the surface of the deforming geometry by decomposing the bulk dynamics into a few dominant basis functions, essentially reducing the spatial dimension of the system by one [Fig. <ref>].
We show how to design projection methods based on information about the steady state of the system at hand and compare three different techniques – based on step functions, polynomials, and exponential functions – with respect to their accuracy in flat and in deforming geometries.
Our analysis demonstrates that good approximations of the actual dynamics can be achieved already with a single nonlinear basis function, whereas a naive averaging of the bulk dynamics often fails to capture the relevant dynamics entirely.
Furthermore, we find that the projection method outperforms standard FEM approaches with an up to five-fold speedup in computation time, with the most significant improvements in systems with large volume.
This paper is structured as follows:
In Sec. <ref>, we first introduce a generic projection method in a static one-dimensional geometry and subsequently generalize our approach to dynamic higher-dimensional geometries.
We show how to transfer techniques from FEM implementation to arbitrary basis choices and how a PDE defined on a volume domain can be approximated by a set of PDEs defined only on the volume's boundary.
In Sec. <ref> we test our approach on a toy model that shows geometry-dependent arrested coarsening and compare different projection methods based on the quality of the approximation and the computational efficiency.
We conclude with a concise summary and an outlook.
§ BULK PROJECTION
In this section, we introduce a method to project the dynamics of proteins in the bulk onto the membrane for a flat geometry.
We illustrate this method using a generic model for a single protein species that can bind to and unbind from the membrane and is degraded linearly in the bulk.
We then extend the projection method to dynamically deforming membranes.
§.§ Static geometry
Consider a one-dimensional bulk ℬ of height h [Fig. <ref>a].
The bulk (cytosolic) protein concentration on this line, denoted by c(z, t), is assumed to obey no-flux (Neumann) boundary conditions at the top of the line (z=h) and Robin boundary conditions – where reactive fluxes are balanced by diffusive fluxes onto the boundary – at the bottom of the line (membrane at z=0, denoted by 𝒮).
The protein concentration on the membrane is denoted by m(t).
A generic (non–mass-conserving) bulk–boundary reaction–diffusion system for a single protein species with linear degradation in the bulk can be written as
∂_t c(z,t) = D_c ∂_z^2 c(z,t) - λ· c(z,t) ,
∂_t m(t) = f_0(m(t), c(0,t)) ,
where the bulk dynamics are coupled to the membrane via Robin and no-flux boundary conditions
- D_c . ∂_z c(z,t) |_z=0 = - f_0(m(t), c(0,t)) ,
D_c . ∂_z c(z,t) |_z=h = 0 .
Here, D_c denotes the diffusion constant in the bulk, λ is a cytosolic degradation rate, and f_0(m(t), c(0,t)) denotes the reactive fluxes at the surface 𝒮.
A generic choice for the reaction at the boundary is
f_0(m(t), c(0,t)) = a(m(t)) · c(0,t) - d(m(t)) · m(t) ,
where the attachment a and detachment d to and from the surface 𝒮 may include (autocatalytic) nonlinear interactions.
To project the dynamics in a one-dimensional bulk onto the zero-dimensional surface 𝒮, we aim to rewrite the partial differential equation (PDE) for the bulk dynamics in Eq. (<ref>) as a set of N conveniently chosen ordinary differential equations (ODEs) that approximate the exact solution.
Analogous to reaction–diffusion equations that can be split into reaction and diffusion parts [Eq. (<ref>)] and boundary conditions [Eq. (<ref>)], we thus aim for a set of ODEs in the form of
∂_t u_k(t) = g_React + g_Diff + g_BC
for k ∈{0,…,N-1}.
The symbolic functions g = g({u_k(t)}, m(t)) are placeholders for the contributions to the ODE stemming from the bulk reactions, bulk diffusion, and the boundary conditions, respectively.
How can one derive such a set of ODEs (or, in higher dimensions, PDEs)?
To answer this question, we draw inspiration from numerical methods for solving PDEs, specifically the finite element method (FEM).
The general idea of FEM approaches is to approximate the exact dynamics by a linear combination of contributions derived for simple basis functions (finite elements), where each basis function represents the field value at a specific point in the meshed simulation domain <cit.>.
Since, for generic applications, the dynamics of the concentration fields are not known a priori, it is vital for an efficient FEM solver to use simple basis functions with little overlap on a well-resolved mesh.
For physical systems, however, certain properties of the field dynamics can be derived a priori.
For example, in a reaction–diffusion system with linear bulk reactions as in Eq. (<ref>), the characteristic diffusive length scale in the bulk is ℓ = √(D_c/λ) <cit.>.
In the following, we show how a more sophisticated choice of basis functions that exploits such knowledge can simplify the problem and reduce the complexity of numerical implementations solving the system's dynamics.
While FEM in the context of numerical simulations typically uses a single basis function per mesh point in the simulated domain <cit.>, we pursue a different approach using multiple basis functions v_k(z) anchored to the boundary of the domain [Fig. <ref>].
The different basis functions are designed to reflect the geometry of the bulk domain and to ensure that the relevant aspects of concentration gradients perpendicular to the boundary can be captured appropriately.
In contrast to FEM, the coefficients u_k(t) are defined for mesh points on the membrane so that all bulk variations are captured by the height-dependent basis functions v_k(z).
With this (incomplete) basis, the actual bulk dynamics are approximated by
c(z,t) ≈∑_k u_k(t) · v_k(z) .
Inserting this ansatz into the reaction–diffusion equations (<ref>) and (<ref>) yields, after partially integrating the diffusion term and taking the inner product ⟨·, ·⟩ with a basis function v_k (Galerkin projection <cit.>), a set of coupled ODEs for the coefficients u_k (detailed derivation in Appendix <ref>):
G_kl ∂_t u_k(t) = - f_0(m(t), {u_k(t)}) v_l(0) _g_BC +
-D_c A_ml u_m(t)_g_Diff - λ G_lm u_m(t)_g_React .
Here and in the following, we use Einstein sum convention over double indices.
With the inner product
⟨ v(z), w(z) ⟩ = ∫_0^hd z v(z) w(z)
the mass matrix (or Gram matrix) G_kl and stiffness matrix A_kl are defined as
G_kl = ⟨ v_k(z) , v_l(z) ⟩ ,
A_kl = ⟨∂_z v_k(z) , ∂_z v_l(z) ⟩ .
From here on we will omit all function arguments unless needed for clarity.
The set of ODEs in Eq. (<ref>), together with the basis {v_k}, yield an approximation of the time evolution of the bulk field c, where the quality of the approximation critically depends on the basis choice.
What is a good choice for the basis {v_k}?
In general, the basis needs to be chosen such that it can capture the main features of the bulk dynamics.
Specifically, this requires that both the field c as well as its gradient ∂_z c are approximated well by the projection u_k v_k.
Indeed, it has been shown that the coefficients u_k obtained from u_k = (G^-1)_kl⟨ v_l, c ⟩ yield the best possible approximation for the field c for a given basis choice {v_k} <cit.>.
However, no such statement exists for the gradient ∂_z c, and the same coefficients u_k can result in a low-quality approximation of the gradient for improper basis choices.
Here we present two ways to resolve this problem:
(i) by choosing a basis that is expected a priori to yield good approximations for both the field c and the gradient ∂_z c, or
(ii) by making additional choices for the gradients and thereby correcting for their low-quality approximation.
The former approach strongly depends on the physical problem that is being studied.
For example, for protein reaction–diffusion systems with linear bulk reactions as specified in Eq. (<ref>) and no-flux boundary conditions at z=h, the steady state distribution can be derived as c(z) ∼cosh((h-z)/ℓ), with ℓ = √(D_c/λ) <cit.>.
In this case, a natural choice for the basis would be a set of hyperbolic cosine functions on varying length scales [Fig. <ref>b], e.g.,
{v_k(z)} = {1, cosh(h-z/ℓ), cosh(h-z/2ℓ) , …} .
A mathematically more tractable choice for the same system is {v_k(z)} = {(h-z)^k}, which corresponds to a polynomial fit to the field c and can be expected to approximate shallow gradients well [Fig. <ref>c].
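To make the projection concrete, the matrices G_kl and A_kl of Eqs. (<ref>) can be assembled by numerical quadrature once a basis has been fixed. The following Python sketch does this for the hyperbolic-cosine basis above; the height h and the rate constants are placeholder values of our choosing.

# Illustrative assembly of the mass matrix G and stiffness matrix A for the
# cosh basis {1, cosh((h-z)/ell), cosh((h-z)/(2 ell))}; parameters are assumed.
import numpy as np
from scipy.integrate import quad

h, D_c, lam = 5.0, 1.0, 1.0
ell = np.sqrt(D_c / lam)                  # characteristic bulk length scale

v = [lambda z: 1.0,
     lambda z: np.cosh((h - z) / ell),
     lambda z: np.cosh((h - z) / (2 * ell))]
dv = [lambda z: 0.0,
      lambda z: -np.sinh((h - z) / ell) / ell,
      lambda z: -np.sinh((h - z) / (2 * ell)) / (2 * ell)]

N = len(v)
G = np.array([[quad(lambda z: v[k](z) * v[l](z), 0, h)[0]
               for l in range(N)] for k in range(N)])
A = np.array([[quad(lambda z: dv[k](z) * dv[l](z), 0, h)[0]
               for l in range(N)] for k in range(N)])
# The coefficient ODEs of Eq. (<ref>) then read (summation over repeated k):
#   G_lk du_k/dt = -f_0({u_k}, m) v_l(0) - D_c A_lk u_k - lam G_lk u_k .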
For the latter approach revolving around making additional choices for the gradients, we only discuss a basis composed of orthogonal step functions, as this choice will prove highly useful for deforming geometries later on.
For this, the bulk is divided into N sections of height h_k, where each basis function v_k takes the form of a rectangle function [Fig. <ref>d].
Since the derivative of these basis functions at the interfaces z_k are not well-defined, we instead approximate the gradients in between the centers of the step functions as an additional set of piecewise constant functions, as derived in Appendix <ref>.
To highlight this additional approximation, we indicate auxiliary stiffness matrices constructed from such an artificial gradient choice by a bar over the symbol (A̅_kl).
For piecewise constant gradients, the resulting set of ODEs for the coefficients u_k then reads [Appendix <ref>]
∂_t u_k = - δ_k0 f_0 v_0(0) - D_c A̅_kl u_l - λ u_k ,
A̅_kk = 2/h_k + h_k+1+ 2/h_k + h_k-1 ,
A̅_k,k±1 = -2/h_k + h_k±1 ,
where δ_kl is the unit matrix and all other entries A̅_kl =0.
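For completeness, the auxiliary stiffness matrix of Eq. (<ref>) is straightforward to assemble; the sketch below drops the neighbor terms at the two no-flux ends and uses an arbitrary graded set of section heights as an example.

# Sketch of the auxiliary stiffness matrix A_bar for a step-function basis
# with section heights h_k; the graded heights are an arbitrary example.
import numpy as np

def step_stiffness(heights):
    n = len(heights)
    A = np.zeros((n, n))
    for k in range(n):
        if k > 0:                      # coupling to the section below
            w = 2.0 / (heights[k] + heights[k - 1])
            A[k, k - 1], A[k, k] = -w, A[k, k] + w
        if k < n - 1:                  # coupling to the section above
            w = 2.0 / (heights[k] + heights[k + 1])
            A[k, k + 1], A[k, k] = -w, A[k, k] + w
    return A                           # rows sum to zero: diffusion conserves mass

A_bar = step_stiffness(np.array([0.25, 0.5, 1.0, 2.25]))  # fine near membrane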
It is instructive to compare this choice with standard finite element methods on a predefined mesh: in FEM implementations, basis functions are typically non-zero only in the direct vicinity of a specific mesh point, similar to the step functions that are non-zero only for a fraction of the bulk domain [Fig. <ref>d] <cit.>.
Furthermore, FEM basis functions are typically piecewise linear, such that their gradients are piecewise constant and thus the stiffness matrix takes a similar form as in Eq. <ref>.
Rather than fully committing to the FEM approach, we here stick to piecewise constant basis functions and the auxiliary stiffness matrix A̅_kl since this will greatly simplify the generalization to deforming geometries in the following section.
§.§ Dynamically deforming geometries
We now generalize the idea of bulk projection to dynamically deforming geometries.
Consider a two-dimensional domain ℬ parametrized by r⃗(s,z), where s denotes the (arc-length) parametrization of the one-dimensional membrane 𝒮, r⃗_𝒮 (s,t), and z ∈ [0,h] is the distance from the membrane [Fig. <ref>]:
r⃗(s,z; t) = r⃗_𝒮 (s,t) + z n⃗̂⃗(s,t) .
The vector n⃗̂⃗ is the normal vector on the membrane.
The bulk dynamics specified in Eq. (<ref>) in such a dynamic geometry read <cit.>
1/√(g)∂_t (√(g) c) = D_c Δ_LB c - λ· c ,
Δ_LB c = 1/√(g)∂_i ( √(g) g^ij∂_j c ) ,
with the Laplace-Beltrami operator Δ_LB, the metric tensor g_ij, and the square root of the metric determinant √(g)≡√((g)).
Furthermore, we generalize the inner product, the Gram matrix G, and the stiffness matrix A to the deformable geometry:
⟨ u(z), v(z) ⟩^ij = ∫_0^h dz √(g(s,z;t)) g^ij(s,z;t) u(z) · v(z) ,
G_kl^ij(s,t) = ⟨ v_k, v_l ⟩^ij ,
A_kl^ij(s,t) = ⟨∂_z v_k, ∂_z v ⟩^ij ,
so that G_kl≡ G^zz_kl = ⟨ v_k, v_l ⟩^zz, and analogous for A_kl.
In Appendix <ref>, we use these definitions to derive the weak formulation and Galerkin projection <cit.> of the bulk dynamics in a deforming geometry as specified in Eq. (<ref>).
The resulting generalized set of PDEs for the coefficients reads
G_kl∂_t u_k = -u_m (∂_t G)_lm_g_Geom + D_c ∂_s (G^ss_lm ∂_s u_m )_g_𝒮 +
+ D_c [√(g)∂_z c v_l ]_0^h - D_c u_m A_lm - λ u_m G_lm .
In comparison to the set of ODEs in Eq. (<ref>), two new terms have emerged in addition to the redefined coupling matrices:
The contribution g_Geom accounts for deformations of the domain (dilation and compression), and g_𝒮 accounts for the diffusion parallel to the membrane.
Strikingly, this formulation holds for any basis where all elements {v_k} are weakly differentiable on [0,h].
In addition, the formulation can be adjusted for a basis of orthogonal step functions by appropriate rescaling of the stiffness matrix A̅_kl [Appendix <ref>].
This offers significant advantages compared to an explicit simulation of the bulk dynamics:
first, the number of degrees of freedom can be reduced significantly by avoiding a meshed bulk domain.
Second, similar to other methods that use an explicit parametrization of the geometry, our approach also makes a dynamic remeshing of the deforming bulk obsolete.
These advantages come at the cost of having to settle on a basis a priori, where the quality of the approximation critically depends on the basis choice.
With the projection step in Eq. (<ref>), one therefore converts a reaction–diffusion system with coupled PDEs in the bulk and on the membrane into a set of PDEs that are defined only on the membrane.
This approach yields an effective reduction of the system's spatial dimension (e.g., from a three-dimensional bulk to a two-dimensional membrane) while accounting for bulk gradients in an approximate manner and thereby respects the effect of the bulk geometry on the protein dynamics.
The accuracy of this approximation depends on the basis choice {v_k}.
These benefits come at the cost of introducing one additional PDE on the membrane for each basis function.
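As a hedged usage illustration (our own sketch, with a constant influx J at the membrane standing in for the membrane kinetics f_𝒮), the projected steady state of linear bulk dynamics in a flat, static geometry reduces to a single linear solve:

```python
# Sketch: steady state of the projected bulk dynamics
# G du/dt = -D_c Abar u - lam G u + b in a flat geometry, with a constant
# influx J at the membrane (z = 0) as a stand-in for the membrane kinetics f_S.
import numpy as np

D_c, lam, J = 1.0, 1.0, 1.0
h = np.full(20, 0.1)                       # 20 uniform layers, bulk height 2 (units of ell)
N = len(h)
G = np.diag(h)                             # Gram matrix of the step basis
Abar = np.zeros((N, N))                    # tridiagonal stiffness matrix from above
for k in range(N - 1):
    w = 2.0 / (h[k] + h[k + 1])
    Abar[k, k] += w; Abar[k + 1, k + 1] += w
    Abar[k, k + 1] -= w; Abar[k + 1, k] -= w

b = np.zeros(N); b[0] = J                  # boundary term: v_0(0) = 1, all others vanish at z = 0
u = np.linalg.solve(D_c * Abar + lam * G, b)
# u decays away from the membrane on the scale ell = sqrt(D_c/lam),
# approximating the cosh profile discussed in the Examples section
```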
§ EXAMPLES
In the following, we present and discuss an explicit application of our dimensionality reduction method for a simple toy model, with particular focus on comparing different basis choices and on the influence of the bulk geometry.
For this, we extend a previously studied model showing interrupted coarsening by introducing an explicit sensitivity to the bulk geometry <cit.>.
This choice makes it easy to quantify the effect of the bulk geometry and, most importantly, how well the bulk dynamics are captured by the projection method, by means of the pattern length scale after coarsening is interrupted.
In this model, we consider a protein that switches between an active and an inactive state and prevails either in the bulk or on the membrane [Fig. <ref>a].
In the active state, the protein can bind and recruit itself to the membrane, and undergoes enzyme-mediated detachment.
Active proteins deactivate both on the membrane and in the bulk, but are assumed to be reactivated only on the membrane and assisted by membrane-proximal active bulk proteins.
For simplicity, we furthermore assume that the inactive proteins comprise an abundant reservoir maintaining a constant concentration on the membrane and are therefore not modeled explicitly.
The reaction–diffusion equations for the active proteins on the membrane m(x,t) and in the bulk c(x⃗, t) then read <cit.>
∂_t m = D_m ∂_x^2 m + f(m, .c|_𝒮) + ζ (m, .c|_𝒮) ,
∂_t c = D_c ∇^2 c - λ c ,
with Robin boundary conditions
D_c . n⃗·∇ c |_𝒮 = - f(m, .c|_𝒮) ,
where the mass-conserving binding kinetics of active proteins are given by
f(m, .c|_𝒮) = (1+m) · .c|_𝒮 - m/(1+m) .
Here, .c|_𝒮 = c(x, z=0) denotes the cytosolic concentration at the membrane, and we have set all reaction rates and parameters to 1 for simplicity.
In addition, the non-mass-conserving part corresponding to the (de-)activation of proteins on the membrane (by exchange with an abundant reservoir [Fig. <ref>a]) is
ζ(m,.c|_𝒮) = p .c|_𝒮 - ϵ m
with an effective activation rate p and deactivation rate ϵ.
With Robin boundary conditions at the membrane and linear degradation, the bulk concentration has an associated length scale ℓ = √(D_c/λ) characterizing gradients at the steady state <cit.>.
For simplicity, we further nondimensionalize this toy model by enforcing ℓ = 1, so that the system height h as well as all other distances will always be given in units of the characteristic length scale (full parameter list in Table <ref>).
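In code, the membrane kinetics translate directly (a sketch of ours; the rate values p and ϵ below are placeholders, the actual values being listed in Table <ref>):

```python
# Direct transcription (sketch) of the nondimensionalized membrane kinetics.
def f(m, cS):
    """Mass-conserving attachment kinetics: recruitment minus enzyme-mediated detachment."""
    return (1.0 + m) * cS - m / (1.0 + m)

def zeta(m, cS, p=0.1, eps=0.1):
    """(De-)activation by exchange with the reservoir; p, eps are placeholder values."""
    return p * cS - eps * m
```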
In Fig. <ref>a, we show kymographs of two realizations of this system on a domain of width L=20 and at bulk heights h=1 and h=10 obtained from FEM simulations that explicitly account for the bulk dynamics.
The system undergoes interrupted coarsening, where initially multiple peaks of high protein concentration on the membrane (green) form that subsequently merge or vanish (coarsening) until only a few peaks remain (interruption).
After coarsening is interrupted, peaks rearrange to accommodate an approximately equidistant spacing Λ between them.
A signature of interrupted coarsening is the mean peak distance μ_Λ saturating at finite values [Fig. <ref>b].
For the reaction–diffusion system in Eq. (<ref>), the coarsening process is predominantly controlled by the (de-)activation of proteins on the membrane ζ(m, . c|_𝒮) <cit.> and thus depends on the membrane-proximal bulk concentration . c |_𝒮.
Since this membrane-proximal bulk concentration in turn depends on the system height h [Fig. <ref>b], the coarsening interruption is sensitive to the bulk geometry.
Importantly, similar to the steady-state concentration c^*_𝒮(h) shown in Fig. <ref>b, we expect a nonlinear dependence of the mean peak distance on the system height μ_Λ(h), with an approximately linear dependence for h ≲ℓ and no variation for h ≫ℓ.
§.§ Basis choices
As a naive direct projection of the bulk dynamics onto the membrane one may choose to average the bulk field c over the z-direction.
This corresponds to choosing a basis v_k(z) = {1 } with a constant function as the only basis element [Fig. <ref>a].
By definition, this ansatz disregards all gradients in the bulk field.
These gradients are shallow for small bulk heights h < ℓ, however they become significant when the bulk height is larger than the characteristic length scale of bulk gradients, h > ℓ.
Consequently, this naive projection captures the average pattern length scale μ_Λ only for small bulk heights but fails for large bulk heights [Fig. <ref>a].
In the next step, we extend the projection basis by one additional element to capture the bulk gradients, i.e., variations in z-direction.
To specify this additional basis element, we use the system's laterally homogeneous steady state (constant parallel to the membrane), which can be calculated analytically for the linear reactions as defined in Eq. (<ref>) <cit.>:
c^*(x⃗) = c^*(z=0) · cosh((h-z)/ℓ)/cosh(h/ℓ) ,
where c^*(z=0) = c^*_𝒮 is the steady state concentration at the membrane.
In the rescaled system with ℓ=1, the steady state profile can be represented exactly by a one-dimensional basis v_k(z) = {cosh(h-z)}.
Using this basis, gradients on the length scale of the characteristic scale ℓ are captured exactly [Fig. <ref>b].
In between these two limiting cases (naive averaging using v_k={1} and full recovery of the steady state using v_k={cosh(h-z)}) a plethora of other basis choices can be imagined.
The suitability of a specific basis choice depends on the desired application.
For example, while a hyperbolic cosine-derived basis captures the steady state bulk profile exactly in a flat geometry, no closed-form expressions for the required Gram tensor G^ij_kl and stiffness tensor A^ij_kl as defined in Eq. <ref> may exist for deformed geometries with a non-trivial metric g_ij, making such a basis unsuitable for numerical implementations.
In particular, it is desirable to calculate the Gram and stiffness tensors as functions of time – which requires the inner products to be solved analytically – since this allows a more flexible implementation compared to recalculating the tensors after each time step by solving the inner products numerically.
To achieve this, one may expand the exact profile as a power series that respects the no-flux boundary condition at z=h, i.e., using a polynomial basis with even powers [Fig. <ref>c].
Alternatively, one may choose a set of piecewise constant basis functions that splits the bulk into N distinct sections of height h_k to approximate the bulk concentration profiles [Fig. <ref>d].
Similar to finite element methods, the coefficients u_k then denote the average value of the bulk field in the k^th section.
By tuning the section heights h_k the approximation can be refined in regions with comparably sharper bulk gradients.
For example, the membrane-proximal region for the toy model in Eq. (<ref>) requires a finer resolution along the z-direction than regions at the opposing no-flux boundary due to the shallow gradient at z=h [Fig. <ref>d].
The accuracy of the approximation can further be improved by increasing the number of basis functions [Fig. <ref>e].
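The qualitative effect of refining the basis can be checked in a few lines (a self-contained sketch, assuming the rescaled units ℓ = 1 and the unnormalized steady-state profile c*(z) ∝ cosh(h−z)):

```python
# Sketch: L2 error of projecting the exact steady-state profile onto N uniform
# piecewise constant basis functions (layer averages = least-squares coefficients).
import numpy as np

h = 5.0
z = np.linspace(0.0, h, 20001)
c_exact = np.cosh(h - z)                      # steady state up to normalization (ell = 1)

for N in (1, 2, 4, 8, 16):
    edges = np.linspace(0.0, h, N + 1)
    c_proj = np.empty_like(z)
    for k in range(N):
        mask = (z >= edges[k]) & (z <= edges[k + 1])
        c_proj[mask] = np.trapz(c_exact[mask], z[mask]) / (edges[k + 1] - edges[k])
    print(N, np.sqrt(np.trapz((c_exact - c_proj) ** 2, z)))  # decreases roughly like 1/N
```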
The comparison of the approximated steady-state concentration profile with the exact counterpart can provide preliminary intuition about appropriate basis choices.
However, when studying the dynamic case, additional aspects of the bulk concentration may become relevant.
For example, the hyperbolic cosine in Fig. <ref>b can capture the long-term (steady state) dynamics but will not account for sharp bulk gradients as the steady state is approached.
How do the basis choices perform for approximating the protein dynamics?
§.§ Flat and static geometry
To address this question, we first focus on flat and static geometries [Fig. <ref>].
Since the projection of the bulk dynamics onto the membrane is a qualitative approximation, it is not expected that patterns obtained from the full system and from the approximation match quantitatively.
Instead, we are interested in capturing the qualitative characteristics of the patterns and compare the pattern's statistical properties to measure the quality of a basis choice.
For the interrupted coarsening system in Eq. (<ref>), the most important statistical property is the mean value μ_Λ of the distance between peaks Λ after coarsening is interrupted [Fig. <ref>].
For this, we solved the PDEs for the full system and for the projected counterparts on a domain of length L=50 with varying bulk height 10^-1≤ h ≤ 10^1 until coarsening was interrupted (total simulation time T=10^7; random perturbations around the homogeneous steady state as initial conditions) and extracted the mean peak distance μ_Λ at the final time point.
To reduce artifacts from initial conditions, the results are averaged over five repeats with varying initial conditions each.
In Fig. <ref>a, we compare these results for the full system (reference) as well as three different basis choices derived from hyperbolic cosines:
N =1: v_k ={cosh(h-z)} ,
N =2: v_k ={1, cosh(h-z)} ,
N =3: v_k ={1, cosh(h-z), cosh((h-z)/ℓ_2)}
with a variable length scale ℓ_2 in the latter case.
All basis choices reliably capture the characteristic shape of the reference μ_Λ(h), however they consistently overestimate the true mean peak distance [Fig. <ref>a].
Importantly, even though the basis with v_k^(N=1) = {cosh(h-z)} can capture the steady state exactly as shown in Fig. <ref>b, additional basis functions are required to reduce the deviation from the reference in the dynamic case.
Figure <ref>b shows the quantification of mean peak distances using a piecewise constant basis for projecting the bulk dynamics [Fig. <ref>a,d,e].
The case N=1 corresponds to a naive averaging of the bulk dynamics and deviates significantly from the reference value μ_Λ(h) already for small bulk heights h< ℓ.
For h≥ℓ, the naive projection predicts no patterns at all (shaded region) and is insufficient for capturing the bulk–boundary coupled reaction–diffusion dynamics.
The predictions can again be improved by introducing additional basis functions, where already N=2 basis functions approximately capture the characteristic shape of the reference μ_Λ(h) and additional improvements are obtained for N=3.
Importantly, even with just a single basis function meaningful approximations of the actual dynamics can be achieved for an adequate choice of the basis.
This becomes conspicuous by comparing the mean peak distances μ_Λ(h) in the case N=1 for a cosh-derived basis and the piecewise constant basis in Fig. <ref> (light green and blue lines).
With only one basis function, the naive averaging cannot be used to approximate the system's dynamics, whereas for the hyperbolic cosine basis a single basis function is sufficient to capture the geometry-dependent characteristics of the system.
This improvement is achieved by leveraging the a priori knowledge about the system's sensitivity to the bulk geometry and respecting this sensitivity in the projection step.
§.§ Curved and dynamic geometry
So far, we discussed the accuracy of the projection ansatz for a flat and static geometry.
How does this method perform in a dynamically deforming geometry?
In general, two types of deformations can be distinguished: deformations that lead to a (local) in-plane compression or stretching of the boundary domain while keeping the shape of the entire domain unchanged, and deformations that keep the metric of the membrane invariant but cause out-of-plane shape changes [Fig. <ref>a].
In the former case, the membrane is directly affected by the deformation, making it difficult to disentangle the effects of the bulk deformation from the membrane deformation.
Instead, we here focus on the latter case where the reaction–diffusion dynamics on the membrane are not directly affected by the geometry deformation but only indirectly via the coupling to the dynamic bulk.
This allows us to compare how well a certain basis choice can capture bulk deformations.
To study such deforming geometries, we apply the projection method presented in Section <ref> to the toy model in Eq. (<ref>) that shows height-dependent interrupted coarsening.
For this, we choose a geometry parametrization r⃗(s,z; t) that leaves the metric on the boundary invariant (i.e., | ∂_s r⃗(s,0; t) | = 1) but has a spatio-temporally varying membrane curvature K(s,t) [Fig. <ref>b, Appendix <ref>].
Due to this dynamic curvature, the local ratio of bulk volume to membrane area and thereby the magnitude of the bulk field c(r⃗, t) now also vary in space and time <cit.>.
Note that in the static and flat case discussed above this ratio was only affected by the bulk height h, since in this case the curvature is K = 0 at all times [Fig. <ref>b].
Similar to the static case, where the bulk height affects the mean peak distance μ_Λ(h), the interrupted coarsening in a dynamic geometry depends on the bulk–boundary ratio d V/dA and thus on the membrane curvature K as shown in Fig. <ref>b.
Modulating the membrane curvature over time thus provides a straightforward method for probing the ability of the projection method to capture effects of the dynamic geometry by comparing the mean peak distance μ_Λ(K(t)).
For this, we vary the curvature periodically and homogeneously,
K(s,t) = K_max/2(1 - cos(2 π t/T ) )
with periodicity T=10^6 and maximum membrane curvature K_max = -1.
Figure <ref>a shows a typical kymograph of the membrane concentration m(s,t) during multiple deformation cycles.
When the deformation is strongest, the bulk–boundary ratio is maximal (equivalent to an increased bulk height h, Fig. <ref>) and the mean distance between peaks is large.
As the geometry returns to the flat shape, the pattern adapts by spawning additional peaks and thereby decreasing the mean peak distance μ_Λ.
In Fig. <ref>b,c we compare the periodic coarsening and spawning of peaks for various basis choices, using a bulk height h=1 and maximum curvature K_max=-1.
Similar to the flat case, the naive projection using a single basis function v_k(z) = {1} fails to capture the height-dependent dynamics and predicts no patterns at any deformation state at this bulk height (not shown).
To reproduce the reference dynamics, at least two basis functions are required.
Note that a hyperbolic cosine basis yields non-algebraic contributions for the projected PDEs in deforming geometries and is therefore not suitable for numerical implementation.
Instead, we achieve qualitatively matching results by expanding the hyperbolic cosine to lowest order, in this case using a polynomial basis with v_k = {1, (h-z)^2} [Fig. <ref>b].
For sufficiently small deformations, the polynomial basis captures the reference results with high accuracy, but considerable deviations are observed at strong deformations.
As an alternative, a piecewise constant basis with two or more basis functions may be used [Fig. <ref>c].
Interestingly, the piecewise constant basis captures the pattern statistics more robustly across the entire deformation cycle than the polynomial basis, even with only N=2 basis functions.
The reason for this lies in the polynomial basis being derived from the steady state concentration profile in a flat geometry.
For strong deformations, this steady state profile may change its shape significantly so that the polynomial basis does not capture the relevant bulk gradients anymore.
§ DISCUSSION AND CONCLUSION
In conclusion, we have presented a method for using a priori knowledge about the physical characteristics of a bulk–boundary reaction–diffusion system to reduce the complexity of the underlying model and thereby simplify the numerical implementation of the system.
Specifically, we proposed a systematic approach to project the field dynamics in a volume onto the surface while respecting the (possibly deforming) geometry of the volume and accounting for spatial gradients normal to the surface.
We showed that this method yields more accurate approximations of the actual dynamics than a naive projection based on averaging the field dynamics over the volume.
At the same time, the projection approach effectively lowers the dimension of the problem by one and thus reduces the computational cost of numerically solving the reaction–diffusion dynamics considerably (approximately 5-fold compared to direct FEM implementations).
As a proof of concept, we applied our method to a generic model showing interrupted coarsening, where the statistical properties of the coarsening dynamics are sensitive to the geometry of the volume.
We evaluated the accuracy of multiple projection methods in both static and deforming geometries and found that for linear bulk reactions a set of two basis functions already leads to good approximations of the actual dynamics for all bulk heights.
Using three or more basis functions provides minor improvements that in general do not outweigh the increase in computation time due to the additional fields.
We emphasize that the projection of the bulk dynamics onto the membrane as presented in this article is only an approximation of the actual dynamics.
In contrast to finite element methods, where accuracy can often be increased simply by refining the mesh <cit.>, the accuracy of the projection strongly depends on the basis choice.
Leveraging the full potential of the projection method therefore requires to make use of a priori knowledge about the physical properties of the system at hand, such as the system's steady state profiles <cit.>.
A physics-informed projection can, by virtue of the dimensionality reduction, simplify the analytical and numerical assessment of pattern-forming systems.
However, we stress that no significant benefits compared to FEM implementations are to be expected for uninformed or arbitrary basis choices.
In our derivations of the projection method in deforming geometries we relied on the geometry to be parametrizable and to have a constant height (as measured from the membrane).
As a natural extension to our results, it would be interesting to release these constraints.
In particular, the projection method could be coupled to phase–field implementations of reaction–diffusion dynamics on deforming surfaces, which have been extensively studied in the past <cit.>.
For this, a key challenge will be to implement a spatio-temporally varying support of the basis functions to account for variable volume height.
Other extensions to our approach could include hydrodynamic coupling of the field dynamics to the deforming geometry <cit.>, which is an important aspect of many intracellular reaction–diffusion systems <cit.>, or basis functions that depend on time or are non-local in time, as recently done for a phase-separating system with bulk–boundary coupling <cit.>.
Beyond the modeling of reaction–diffusion dynamics in parametrized deforming geometries, we expect our results to have valuable applications for systems that reciprocally couple the geometry deformation to the field dynamics, specifically for curvature-generating protein systems <cit.>.
Although the projection method only yields an approximation of the true dynamics, we believe that our approach can have significant advantages for large-scale sampling of the model parameter space in simulations.
§ ACKNOWLEDGEMENTS
We thank Tobias Roth and Henrik Weyer for stimulating discussions.
T.B. acknowledges support by the Joachim Herz Foundation.
This work was funded by the European Union (ERC, CellGeom, project number 101097810) and the Chan-Zuckerberg Initiative (CZI).
§ GENERAL DERIVATION OF PROJECTION METHODS
In this section, we provide a step-by-step derivation of the projection method for arbitrarily deforming geometries.
For additional information on finite element methods, which are conceptually similar to the projection method introduced here, we refer to pertinent textbooks <cit.>.
We start from a reaction–diffusion equation for a single field in a deforming parametrizable geometry (specified by r⃗(s⃗,z,t), c.f. Fig. <ref>, with the bulk extending in z-direction) with Robin boundary conditions at the surface 𝒮 (specified by r⃗_𝒮(s⃗,t) = r⃗(s⃗,0, t)) and no-flux boundaries elsewhere:
1/√(g)∂_t (√(g) c) = D_c Δ_LB c + f(c) ,
-D_c . n⃗̂⃗(s⃗, t) ·∇⃗c |_𝒮 = f_𝒮(. c |_𝒮; s⃗,t) ,
with Δ_LB c = 1/√(g)∂_i ( √(g) g^ij∂_j c ) .
The functions f and f_𝒮 denote the bulk reactions and the bulk–boundary coupling, respectively, and are both assumed to be polynomial in c.
We constrain ourselves to systems with constant bulk height h measured in the normal direction at each point of the membrane.
Consequently, the z-direction is chosen to be normal to the membrane.
This allows us to write the metric tensor in block-diagonal form,
g_ij = [ g_𝒮, i'j' 0; 0 1 ] ,
where the indices i,j run over all coordinates {s⃗, z} and the indices i',j' run only over the surface coordinates {s⃗} so that g_𝒮, i'j' is the metric along the coordinates s⃗ of the reactive surface 𝒮.
In this parametrization, the metric determinant det(g_ij) is identical to the determinant of the sub-metric, g ≡ det(g_𝒮, i'j').
Analogous to finite element methods <cit.>, we start by transforming the PDE for c(s⃗, z, t) to its weak form by introducing a test function v(z) and an inner product ⟨·, ·⟩^ij on the dynamic geometry along the z-direction anchored at each point s⃗ on the reactive surface:
⟨ c(s⃗, z, t) , v(z) ⟩^ij = ∫_0^h d z √(g) g^ij c(s⃗, z, t) · v(z) ,
⟨ c(s⃗, z, t) , v(z) ⟩ = ∫_0^h d z √(g) c(s⃗, z, t) · v(z) ,
where we make use of the special form of the metric (<ref>) to single out the z,z component of the inner product ⟨·, ·⟩^zz = ⟨·, ·⟩ for easier readability.
With this, the individual terms of the partial differential equation (<ref>) in weak form can be expressed as
⟨1/√(g)∂_t (√(g) c),v ⟩ = ∫_0^h d z ∂_t (√(g) c) · v = ⟨∂_t c, v⟩ + ⟨1/√(g) c ·∂_t √(g), v⟩ ,
⟨ D_c Δ_LB c ,v ⟩ = D_c ∫_0^h d z [ ∂_i'( √(g) g^i'j' ∂_j' c) · v + ∂_z ( √(g) ∂_z c) · v ]
= D_c ∂_i'⟨∂_j' c , v ⟩^i'j' + D_c [ √(g) ∂_z c · v ]_0^h - D_c ⟨∂_z c, ∂_z v ⟩ .
Note that the index i' runs over all coordinates excluding the z-coordinate, so that the derivative ∂_i' and the integral over z can be interchanged.
In the next step, we apply a Galerkin projection <cit.>: instead of solving the equations for arbitrary fields c(s⃗, z, t) ∈ L^2(ℬ), we only aim to find solutions for a subspace V ⊂ L^2(𝒮) × H^1([0,h]).
We choose this subspace such that all functions u(s⃗, z, t) ∈ V can be expressed as
u(s⃗, z, t) = u_k(s⃗, t) v_k(z) ,
where {v_k(z)} is a set of N basis functions in the Sobolev space H^1([0,h]) with k ∈{0,…,N-1}, and we use Einstein summation over double indices.
To highlight the difference between indices denoting spatial coordinates (i,j) and indices denoting elements of the basis functions (k,l,m), we use co-/contravariant notation for the spatial coordinates but consistently lower indices for the basis functions.
For all elements of the subspace V, the weak form of the differential equations can be rewritten further via this separation ansatz (<ref>):
⟨1/√(g)∂_t (√(g) c),v ⟩ = (∂_t u_k) ⟨ v_k, v ⟩ + u_k ∂_t ⟨ v_k, v ⟩ ,
⟨ D_c Δ_LB c ,v ⟩ = D_c ∂_i'[ (∂_j' u_k ) ⟨ v_k, v ⟩^i'j'] + D_c [ √(g) ∂_z c · v ]_0^h - D_c u_k ⟨∂_z v_k, ∂_z v ⟩ .
For the boundary term [√(g) ∂_z c · v]_0^h, one may at this point insert the boundary condition f_𝒮 (at z=0) and the no-flux boundary condition (at z=h).
In the Galerkin projection, the polynomial bulk reaction term f(c) takes a special form.
For the first and second order, for example, the corresponding expressions are
⟨ c , v ⟩ = u_k ⟨ v_k, v ⟩ ,
⟨ c^2, v ⟩ = u_k u_l ∫_0^h d z √(g) v_k v_l v ,
and similarly for higher orders.
Since the weak forms Eqs. (<ref>)-(<ref>) hold for all test functions v ∈ V, one may conveniently choose the basis functions v_k to obtain PDEs for the coefficients u_k(s⃗, t).
Introducing a generalized Gram tensor G_kl^ij, a multilinear variant of the Gram tensor G^ij_k_1⋯ k_n, and a generalized stiffness tensor A_kl^ij as
G_kl^ij = ∫_0^h d z √(g) g^ij v_k v_l = ⟨ v_k , v_l ⟩^ij ,
G_k_1⋯ k_n^ij = ∫_0^h d z √(g) g^ij v_k_1⋯ v_k_n ,
A_kl^ij = ∫_0^h d z √(g) g^ij (∂_z v_k) (∂_z v_l) = ⟨∂_z v_k , ∂_z v_l ⟩^ij ,
and abbreviating the entry G^zz_kl = ⟨ v_k, v_l ⟩≡ G_kl (similarly for A_kl and G_k_1⋯ k_n) yields a concise formulation for the dynamics of u_k(s⃗, t):
(∂_t u_k) G_kl + u_k ∂_t G_kl_g_Geom = D_c ∂_i'[ (∂_j' u_k) G^i'j'_kl]_g_𝒮-Diff - . √(g) f_𝒮(u_k v_k) · v_l |_z=0_g_BC - D_c u_k A_kl_g_z-Diff + ∑_n (1/n!) (∂^n f/∂ c^n)|_0 u_k u_l_2⋯ u_l_n G_k l_2⋯ l_n l_g_React .
In the case of purely linear bulk reactions, the reaction term reduces to g_React = λ u_k G_kl with linear reaction rate λ, as applied in the toy model in Section <ref>.
Importantly, for a parametrizable geometry, the generalized Gram tensor and stiffness tensor can be calculated independently of the actual reaction–diffusion dynamics.
In particular, they can be calculated prior to solving the partial differential equations for the coefficients u_k.
The task of solving the bulk dynamics is therefore largely shifted to calculating the Gram and stiffness tensors, which greatly reduces the number of degrees of freedom required to numerically solve the PDEs.
It should be emphasized that this is the primary distinguishing aspect compared to finite element methods: in FEM implementations, the Galerkin projection is performed for each mesh point in the bulk and on the boundary <cit.>.
For a mesh size a and system width L and height h in d dimensions, this results in a matrix of size 𝒪((L/a)^d-1 (h/a) ) that needs to be diagonalized in each step.
In contrast, by projecting the bulk dynamics onto the membrane using N basis functions the computational complexity is reduced to diagonalizing a matrix of size 𝒪(N (L/a)^d-1), where FEM (or other standard methods) can be used to solve the dynamics on the membrane.
Specifically, the projection method changes the size of the relevant matrices by a factor N · a/h, so that the computational benefit is largest for low mesh sizes a and a small number of basis functions N.
This improvement is possible by explicitly allowing highly nonlinear basis functions in the projection approach and by adapting the basis functions choice to the physical problem that is to be solved.
For FEM, in contrast, it is often convenient to choose basis functions v_k with small support centered around the mesh points, as this renders the Gram matrix G as well as other important matrices sparse and thereby speeds up numerical applications.
The projection method abolishes this need for basis functions with small support in the bulk, since the Gram and stiffness matrices for the bulk dynamics only need to be calculated once before solving the system dynamics (the Gram and stiffness matrices for the boundary dynamics, however, still need to be diagonalized at each time step).
When choosing the basis for the projection method, one is therefore not constrained by the small support of the bulk basis functions, and instead the basis can be tuned to the physical properties of the system.
A good choice of basis functions is therefore crucial for leveraging the full potential of this dimensionality reduction approach.
§ BASIS OF ORTHOGONAL STEP FUNCTIONS
In this section, we discuss how the stiffness matrix A_kl needs to be adjusted when choosing a basis consisting of step functions.
For this, consider a one-dimensional bulk [0,h] subdivided into N segments at {0, z_1, z_2, …, z_N-1, h } so that each segment has size h_k = z_k+1-z_k.
The corresponding basis functions are
v_k(z) = 1 for z_k ≤ z ≤ z_k+1 , and 0 else.
This basis is orthogonal by design and thus the Gram matrix takes the form G = diag({h_0, …, h_N-1}).
However, to construct the stiffness matrix A_kl it is necessary to find derivatives of the basis functions or at least derivatives in the weak sense <cit.>, i.e., functions w_k(z) for which
∫_0^h d z v_k(z) · (∂_z ϕ(z)) = - ∫_0^h d z w_k(z) ·ϕ(z)
for all differentiable test functions ϕ(z).
Since step functions are not weakly differentiable <cit.>, the stiffness matrix A_kl as defined in Eq. (<ref>) is not applicable for a step function basis.
At first glance, one may try to circumvent this issue by approximating the step functions using smooth analogues, for example a logistic sigmoid σ_w(z) = (1+e^-z/w)^-1, where the parameter w quantifies the width of the step.
The basis elements may then be written as v_k=σ_w(z-z_k) σ_w(z_k+1 -z).
This basis is no longer orthogonal; however, the Gram matrix remains to lowest order G = diag({h_0, …, h_N-1}) + 𝒪(w).
The stiffness matrix, on the other hand, takes a tridiagonal form to lowest order,
A = 1/6w[ -2 1 0 ; 1 -2 1 ⋯; 0 1 -2 ; ⋮ ⋱ ] + 𝒪(w) .
As a result, the diffusion term g_Diff with such a stiffness matrix enters the differential equation for the coefficients (e.g., Eq. (<ref>)) with a prefactor ∼w^-1, leading to diverging terms in the limit w→ 0.
This is because the true gradients ∂_z c(z) are poorly approximated by the step functions, where gradients are ∂_z u ≈ (u_k+1 - u_k)/w in the w-neighborhood of the nodes z_k and zero elsewhere.
To be able to use step functions (or their smooth analogues) as a basis, the stiffness matrix needs to be reconstructed artificially.
For this, it is necessary to define approximations of the gradient terms ∂_z ṽ_k as stand-ins for the weak derivatives of the actual basis v_k.
As the simplest approach, one may emulate standard FEM strategies and use linear interpolation between the separate layers [Fig. <ref>].
This yields piecewise constant functions for the approximate gradients:
∂_z ṽ_k = +2/(h_k-1 + h_k) for z_k - h_k-1/2 ≤ z ≤ z_k + h_k/2 ,
∂_z ṽ_k = -2/(h_k + h_k+1) for z_k+1 - h_k/2 ≤ z ≤ z_k+1 + h_k+1/2 ,
∂_z ṽ_k = 0 else .
The resulting reconstructed stiffness matrix A̅_kl then evaluates to the tridiagonal form
A̅_kk = 2/h_k + h_k+1+ 2/h_k + h_k-1 ,
A̅_k,k±1 = -2/h_k + h_k±1 ,
and zero for all other entries.
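This reconstruction is easy to verify numerically (a self-contained sketch of ours; boundary rows simply lack the term of the missing neighbor):

```python
# Sketch: sample the interpolated gradients on a fine grid and verify by quadrature
# that they reproduce the tridiagonal stiffness matrix given above.
import numpy as np

h = np.array([0.1, 0.2, 0.4, 0.8]); N = len(h)
zn = np.concatenate(([0.0], np.cumsum(h)))            # nodes z_0, ..., z_N
z = np.linspace(0.0, zn[-1], 300001)

def grad_v(k):
    g = np.zeros_like(z)
    if k > 0:                                          # rising edge around z_k
        g[(z >= zn[k] - h[k-1]/2) & (z <= zn[k] + h[k]/2)] = 2.0 / (h[k-1] + h[k])
    if k < N - 1:                                      # falling edge around z_{k+1}
        g[(z >= zn[k+1] - h[k]/2) & (z <= zn[k+1] + h[k+1]/2)] = -2.0 / (h[k] + h[k+1])
    return g

A_quad = np.array([[np.trapz(grad_v(k) * grad_v(l), z) for l in range(N)]
                   for k in range(N)])                 # matches Abar up to grid resolution
```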
A priori knowledge about the system, e.g., about the steady state distribution, can be used to obtain better estimates for the gradients.
For all results presented in this paper we use the FEM-inspired stiffness matrix as defined in Eq. (<ref>) when working with the step function basis.
§ NUMERICAL IMPLEMENTATION
To test the projection method with the toy model as presented in Section <ref>, we performed numerical simulations using COMSOL Multiphysics v6.0 for the different projection approaches (basis choices) and, for reference, for a non-projected system, with simulation parameters as stated in Table <ref>.
For the reference simulations, the bulk dynamics were solved in a two-dimensional domain coupled to a one-dimensional boundary accounting for the membrane dynamics via reactive boundary conditions.
The boundary domain was meshed regularly with a mesh size of 0.05.
The bulk domain was subsequently meshed using a triangular Delaunay tessellation (irregular) combined with local adaptive mesh refinement, where the mesh resolution was reduced at distances far away from the membrane, controlled by setting the software's configuration parameter to 1.5.
Note that the adaptive mesh refinement already assumes shallow gradients far away from the membrane.
For comparison, we also produced solutions for a bulk domain that was meshed in a regular rectangular grid with mesh size 0.05.
For the projected systems, all dynamics (membrane and projected bulk) were solved on a one-dimensional domain corresponding to the boundary in the reference simulations.
This domain was meshed regularly with a mesh size of 0.05.
In Fig. <ref> we compare the solution times for different simulation methods (reference and projections) for three different values of the system height h.
Each data point in Fig. <ref> is the mean of five samples obtained from different initial conditions and otherwise identical parameters.
In the reference simulations (both regular mesh and adaptive refined mesh, black), solution times are consistently larger than ten seconds, and increase to approximately one minute (adaptive refined mesh) and ten minutes (regular mesh) at large system heights h=10 where the number of mesh points becomes large.
In contrast, for simulations using bulk projection with up to N=3 basis functions the solution time remained below 20 seconds, and reached approximately five seconds for low system heights (h=0.1, where coarsening is interrupted earlier and a steady state is reached faster, c.f. Fig. <ref>).
Importantly, the number of mesh points in the projected system is independent of the bulk height in contrast to the reference system, and therefore the computational speedup is most prominent for large bulk height and a regular mesh.
Comparing the reference simulations with adaptive refined mesh to the simulations using N=1 basis functions derived from a hyperbolic cosine, a speedup factor of >3 is achieved for h =0.1, and a factor >4 for h = 1 and h = 10.
For benchmarking the solution times, the simulations were performed on a 64 core AMD Ryzen Threadripper processor at 2.70 GHz.
§ PROJECTION OF MASS-CONSERVING SYSTEMS
In reaction–diffusion systems, the diffusion part always conserves the total protein mass n = ∫d V c(x⃗, t); i.e., for a system with no-flux conditions at all boundaries and without any reaction terms, defined by
∂_t c(x⃗, t) = D_c ∇^2 c(x⃗, t)
the change in total mass is <cit.>
∂_t n = ∫_ℬd V ∂_t c(x⃗, t) = D_c ∮_∂ℬd A n⃗·. ∇ c(x⃗, t) |_∂ℬ = 0 .
This property of mass-conserving diffusion should be preserved under the projection method, in particular for systems where all reactions are also mass-conserving.
Under what circumstances does the projection method respect mass conservation in the diffusion?
For this, it is sufficient to consider diffusion in a one-dimensional bulk (i.e., no diffusion parallel to the membrane) with no-flux boundary conditions at both boundaries.
For such a system, the governing equations for the fields u_k(t) derived in Eq. (<ref>) reduce to
G_kl (∂_t u_k) = -D_c u_k A_kl .
Without loss of generality, assume furthermore that the basis {v_k} is orthonormal so that the Gram matrix is a unit matrix G_kl = δ_kl.
Since each component k contributes to the total concentration by u_k v_k(z), the change in the total mass is then
∂_t n = ∂_t u_k ∫d z v_k(z) = - D_c [∫d z v_k(z)] · A_kl u_l .
The total mass is thus conserved if and only if
A_kl ∫d z v_k(z) = 0 ∀ l .
This poses an additional constraint on the basis choice, complementing the requirements to approximate the true distribution c(z, t) and its gradient ∂_z c(z,t) well.
We now show that this requirement is always fulfilled if one of the basis functions is constant in z-direction.
For this, assume that v_0(z) = 1/√(h) in a bulk domain of height h.
This allows to rewrite the integral over a single basis function in terms of the Gram matrix,
∫d z v_k(z) = √(h)∫d z v_k(z) v_0(z) = √(h) G_k0 .
Inserting this into the requirement for mass conservation in Eq. (<ref>) and using the orthonormality of the basis G_kl = δ_kl yields
√(h) A_kl G_k0 = √(h) A_0l = 0 ,
which is always fulfilled for a basis that contains a constant basis function (in this case v_0).
This follows from the fact that A_kl = ⟨∂_z v_k , ∂_z v_l ⟩, and by choice of the basis function ∂_z v_0 = 0.
Since the orthonormalization step only affects the representation of the system but not the dynamics, the same line of argument holds for a basis where only a linear combination of basis functions is constant, i.e., where
α_k ∂_z v_k(z) = 0
for real constants α_k.
In particular, this includes a set of piecewise constant basis functions as presented in Sec. <ref>.
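Numerically, the criterion is immediate to check (a standalone sketch; with G_kl = h_k δ_kl the total mass is n = Σ_k u_k h_k, so vanishing column sums of A̅ guarantee conservation):

```python
# Sketch: mass conservation of the projected diffusion for the step-function basis.
# With G = diag(h), dn/dt = -D_c * sum_k (sum_l Abar[l, k]) u_k, so the column
# sums of Abar must vanish.
import numpy as np

h = np.array([0.1, 0.2, 0.4, 0.8]); N = len(h)
Abar = np.zeros((N, N))
for k in range(N - 1):
    w = 2.0 / (h[k] + h[k + 1])
    Abar[k, k] += w; Abar[k + 1, k + 1] += w
    Abar[k, k + 1] -= w; Abar[k + 1, k] -= w
print(np.allclose(Abar.sum(axis=0), 0.0))   # True: diffusion conserves mass
```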
§ PARAMETRIZATION OF EVOLVING GEOMETRIES
In this section, we specify the parametrization used to evaluate the projection method in a deforming geometry as discussed in Sec. <ref>.
Since we require the basis functions v_k(z) to be identical for all points r⃗(s⃗, 0) on the membrane 𝒮, a necessary constraint for the parametrizations is that they have constant height h everywhere.
Note that here the height is defined as the distance from the membrane 𝒮 to the opposing no-flux boundary in the direction normal to 𝒮.
This also implies that these two boundaries are always parallel.
In addition, deformations need to avoid intersections of the bulk with itself, since in such cases the metric diverges.
Therefore, the maximum height of the bulk in a deforming geometry is constrained by the maximum (principal) curvature, h < 1/K_max.
Furthermore, we here choose a parametrization for which the metric on the membrane is constant and unity, . g_ij|_z=0 = δ_ij, since this ensures that no stretching or compression of the membrane can affect the coarsening dynamics and all observed effects are due to the bulk dynamics.
For the examples in Sec. <ref>, we examine a geometry consisting of an annular segment with spatially constant but temporally varying curvature K(t).
This geometry is parametrized by
r⃗(s, z, t) = 1/K(t)[ ( 1 + z K(t) ) sin (s K(t)); ( 1 + z K(t) ) cos (s K(t)) -1 ] ,
where we choose K(t) = K_max (1 - cos (2 π t/T))/2 with K_max < 0.
This geometry is flat at t = nT for n ∈ℕ and morphs into an arc with curvature |K_max| at t = (n+1/2) T, with periodicity T.
This parametrization ensures that the surface is never inward bent and therefore no problematic self-intersections can occur.
|
http://arxiv.org/abs/2405.09971v1 | 20240516103209 | A study of the fine-structure constant dependence of radiative capture in Halo-EFT | [
"Ulf-G. Meißner",
"Bernard Ch. Metsch",
"Helen Meyer"
] | nucl-th | [
"nucl-th",
"astro-ph.CO",
"hep-ph",
"hep-th",
"nucl-ex"
] |
meissner@hiskp.uni-bonn.de
Helmholtz-Institut für Strahlen- und Kernphysik,
Rheinische Friedrich-Wilhelms Universität Bonn, D-53115 Bonn, Germany
Bethe Center for Theoretical Physics,
Rheinische Friedrich-Wilhelms Universität Bonn, D-53115 Bonn, Germany
Institute for Advanced Simulation (IAS-4),
Forschungszentrum Jülich, D-52425 Jülich, Germany
metsch@hiskp.uni-bonn.de
Institute for Advanced Simulation (IAS-4),
Forschungszentrum Jülich, D-52425 Jülich, Germany
Helmholtz-Institut für Strahlen- und Kernphysik,
Rheinische Friedrich-Wilhelms Universität Bonn, D-53115 Bonn, Germany
hmeyer@hiskp.uni-bonn.de
Helmholtz-Institut für Strahlen- und Kernphysik,
Rheinische Friedrich-Wilhelms Universität Bonn, D-53115 Bonn, Germany
Bethe Center for Theoretical Physics,
Rheinische Friedrich-Wilhelms Universität Bonn, D-53115 Bonn, Germany
We study the fine-structure constant dependence of the rates of some
selected radiative capture reactions within the framework of so-called
Halo Effective Field Theory in order to assess the adequacy of some
assumptions made on the Coulomb penetrability. We find that this
dependence deviates from that implied by a parameterization of the
cross sections of this effect via a simple penetration factor. Some
features of this fine-structure dependence are discussed, in
particular its potential impact on the abundances of the light
elements in primordial nucleosynthesis.
A study of the fine-structure constant dependence of radiative capture in Halo-EFT
Helen Meyer
May 20, 2024
==================================================================================
§ INTRODUCTION
In Ref. <cit.> we made a re-assessment of the
electromagnetic fine-structure constant dependence of the light element abundances
in primordial nucleosynthesis or Big Bang nucleosynthesis
(BBN). This required a description of the fine-structure constant
dependence of the pertinent cross sections of the leading reactions in
the BBN network. Only for the leading nuclear reaction, i.e.
the radiative capture reaction p + n → d + γ a detailed and
sufficiently accurate theoretical description within the framework of
pionless Effective Field Theory (EFT) is available,
see <cit.>. For the other reactions we relied on a
parameterization of the fine-structure constant dependence that
accounted for the dependence of Q-values of the nuclear reactions
through changes in the nuclear binding energies due to the Coulomb
interaction of the protons as well as a modeling of the Coulomb
penetration factors in the form
P(x) = x/(e^x-1)
with
x = 2π Z_a Z_b μ_ab c^2 α/(c p) = √(E_G(α)/E)
in terms of the so-called Gamow energy for a two-particle reaction channel ij
E_G(α) = 2 π^2 Z_i^2 Z_j^2 μ_ij c^2 α^2
and the center-of-mass (CMS) energy E or E+Q for the entrance and
the exit channel, respectively. Here, p is the corresponding CMS
momentum, Z_i the charge (in units of the elementary charge e) of
nuclide i, μ_ij the reduced mass, c is the speed of light and α denotes the
fine-structure constant. In addition we accounted for a simple linear
dependence on α in case of radiative capture reactions as well
as a trivial α dependence reflecting the final momentum
dependence if assuming dominance of dipole radiation,
see <cit.>. We also noted in <cit.>
that for some other radiative capture reactions an effective field
theory description, viz. “Halo-EFT”, is available that
potentially offers the possibility to study the α dependence of
the cross sections analytically and thus assess the validity of the
assumptions made in <cit.>. This is the purpose of
the present paper: We shall thus study the α dependence of the
cross sections and the corresponding rates for the following radiative
capture reactions: The neutron induced reaction
n + 7Li→8Li + γ ,
as treated in Refs. <cit.>,
the proton induced reaction
p + 7Be→8B + γ ,
as treated in Ref. <cit.> and the two reactions that are
most relevant to BBN:
3H + 4He→7Li + γ ,
and
3He + 4He→7Be + γ ,
as treated in Refs. <cit.>.
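To set the scale of the Coulomb suppression encoded in the penetration factor above, a short numerical illustration may help (our own sketch, not taken from the cited works; the reduced mass below is an approximate value for the p + 7Be channel inserted only for this example):

```python
# Sketch: penetration factor P(x) = x/(e^x - 1) with x = sqrt(E_G(alpha)/E)
# for the p + 7Be entrance channel.
import numpy as np

alpha = 7.2973525693e-3
mu_c2 = 820.0                    # MeV, approximate reduced mass of p + 7Be (assumption)
Za, Zb = 1, 4

E_G = 2.0 * np.pi**2 * Za**2 * Zb**2 * mu_c2 * alpha**2   # Gamow energy, ~13.8 MeV
for E in (0.01, 0.1, 1.0):                                 # CMS energies in MeV
    x = np.sqrt(E_G / E)
    print(E, x / np.expm1(x))                              # P(x) = x/(e^x - 1)
```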
The paper is organized as follows: In Sect. <ref> we
recapitulate the formulas for the radiative capture cross section in
Halo-EFT. We then compare the results for the nominal α value
with experimental data in Sect. <ref>. The results on the
α dependence of the cross sections or astrophysical S-factors
and the corresponding rates are discussed in Sect. <ref>.
The impact on the changes of the light element abundances with a
variation of the fine-structure constant is presented in
Sect. <ref>. We summarize our findings in Sect. <ref>. Some
technicalities not given in
Refs. <cit.>-<cit.> are relegated to
the Appendices.
§ BASIC FORMALISM
In Halo-EFT the nuclear system is assumed to consist of a
“core”-system with mass m_c , charge number Z_c and spin s_c
and a “valence”-system with mass m_v , charge number Z_v and
spin s_v . Furthermore, M=m_c+m_v and μ=m_c m_v / M denote
the total mass and the reduced mass of the system, respectively. With L_i
denoting the orbital angular momentum of the relative motion in the
initial state the total spin and angular momentum in the initial state
is given by S⃗_i = s⃗_c+s⃗_⃗v⃗ and J⃗_i =
S⃗_i + L⃗_i , respectively. Within the effective range
expansion the initial state interaction is then specified by the
parameters a_ζ_i, r_ζ_i, and s_ζ_i, (ζ_i
= S_iL_iJ_i) which for L_i=0 correspond to the s-wave
scattering length, effective range and the first shape parameter,
respectively.
The final state in the radiative capture reaction has mass M_f ,
charge number Z_f, excitation energy E_x and total nuclear spin
J_f . In terms of partial waves S_fL_fJ_f the final state is
written as
|M_f,E_x;J_f⟩ = ∑_S_f,L_f a_S_f,L_f |s_c s_v S_f L_f J_f⟩ = ∑_ζ_f a_ζ_f |ζ_f⟩
with S_f the total spin of the di-nuclear-cluster and L_f the
relative orbital angular momentum quantum number.
The coefficients a_S_f,L_f are the amplitudes for the decomposition
of the final state in terms of di-nuclear states. Then
B_ζ=M-(M^ζ_f+E_x^ζ)
is called the separation energy
with respect to the clusters “v” and “c” and
γ_ζ=√(2 μ B_ζ)
the binding momentum of this
state. We define
k_C = Z_c Z_v α μ
as the inverse Bohr radius of a di-nuclear system in the case of
charged particles.
§.§ Electric dipole radiative capture
The formulas in this section are adopted from
Ref. <cit.>. Assuming that the radiative capture
proceeds through an electric dipole transition and that only a single
state contributes, the cross section is given by the expression
σ_E1(p) = 1/(16π M^2) 1/((2s_c+1)(2s_v+1)) ∑_ζ |a_ζ|^2 (k^(ζ)_γ/p) |M_E1^(ζ)|^2 ,
where p is the magnitude of the relative momentum in the CMS with
E=p^2/(2μ) the non-relativistic expression for the energy of
the relative motion and
k^(ζ)_γ = (p^2+γ_ζ^2)/(2 μ)
the non-relativistic approximation to the momentum of the photon in
the final state.
The dimensionless amplitude squared reads
|M_E1^(ζ)|^2 = 64π α (2J^ζ_f+1) (Z_v m_c - Z_c m_v)^2/(μ γ_ζ) 𝒩(η^ζ_γ,ρ^ζ_γ) × [ |𝒜(p)|^2 + 2 |Y(p)|^2 ] .
Here, we defined η^ζ_γ = k_C/γ_ζ and
ρ^ζ_γ = ρ_1^ζ/γ_ζ with ρ_1^ζ =
ħ c/r_1^ζ the effective momentum and r_1^ζ the
effective range in the channel ζ . The normalisation is given
by
𝒩(η,ρ) = 2π/( -ρ + 4 η h(η) + 2 η^2 (η^2-1) h'(η) ) ,
where
h(η) = ψ(η) + 1/(2η) - log(η)
and ψ(η) = Γ'(η)/Γ(η) is the digamma
function.
The normalisation is thus completely determined by the binding energy
of the di-nuclear cluster (via η) and the effective momentum (via
ρ) in the final state.
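In practice, 𝒩(η,ρ) is straightforward to evaluate numerically; the small sketch below (ours, with h'(η) obtained analytically from the trigamma function, and using 𝒩 as reconstructed above) also reproduces the neutral limit quoted next:

```python
# Sketch: the normalisation N(eta, rho) with h(eta) = psi(eta) + 1/(2 eta) - ln(eta).
import numpy as np
from scipy.special import digamma, polygamma

def h(eta):
    return digamma(eta) + 1.0 / (2.0 * eta) - np.log(eta)

def h_prime(eta):
    return polygamma(1, eta) - 1.0 / (2.0 * eta**2) - 1.0 / eta

def N(eta, rho):
    return 2.0 * np.pi / (-rho + 4.0 * eta * h(eta)
                          + 2.0 * eta**2 * (eta**2 - 1.0) * h_prime(eta))

print(N(1e-4, 1.0), -2.0 * np.pi / (1.0 + 3.0))   # neutral limit: -2 pi/(rho + 3)
```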
In case of a neutral cluster k_C = 0 and thus η =
k_C/γ = 0 . Then
.𝒩(η,ρ)|_η=0 = -2π/(ρ+3) = -2π γ/(ρ_1+3 γ) .
With η_p=k_C/p the capture from the initial s-wave is
given by the amplitude
|𝒜(p)|/C_0(η_p) = | X(p) - (2π/μ^2) (B(p) + μ J_0(p) + μ^2 k^(ζ)_γ L^(ζ)_E_1)/([C_0(η_p)]^2 p (cot(δ_0) - i)) | ,
where L^(ζ)_E_1 is the low-energy constant of the two-body current contact term and
X(p) = 1 + (2/3) κ Γ(2+η_γ)/C_0(η_p) ∫_0^∞ dρ W_-η_γ,3/2(2 κ ρ) [ -F_0(η_p,ρ)/ρ + ∂_ρ F_0(η_p,ρ) ] ,
is the s-wave contribution without initial state strong interactions
in terms of Coulomb functions F_ℓ and Whittaker functions
W_η,μ, which are the solutions to the pure Coulomb problem.
In the case where k_C=0 (i.e. if a neutral particle is
involved) this reduces to
.X(p)|_η_γ=0 = 1 - (2/3) p^2/(p^2+γ^2) .
The s-wave contribution from strong initial-state interactions is given by
(B(p) + μ J_0(p))/(μ^2 p) = -(1/3π) κ^3/(1+κ^2) + η_p C(p)/μ^2 + Δ B(p)/(μ^2 p) - (η_p/2π)[ 2 h(η_p) + 2 γ_E - 5/3 + log(4π) ] .
Here, the function C(p) is given by a double integral, treated in
App. <ref> and the finite contribution Δ B(p) is
evaluated as follows: The integrand
ℬ(κ,η_γ;ρ) = -(κ/3π) Γ(2+η_γ) Γ(1+ i κ η_γ) W_-η_γ,3/2(2 κ ρ) × [ -W_-iκη_γ,1/2(-2iρ)/ρ + ∂_ρ W_-iκη_γ,1/2(-2iρ) ]
where κ = η_p/η_γ=γ/p , in
B(p)/(μ^2 p) = ∫_0^∞ dρ ℬ(κ,η_γ;ρ) ,
is quadratically divergent for ρ→ 0 . Noting that the
integrand depends on α via η_γ = k_C/γ
and k_C ∝α , the integrand can be
regularized by subtracting the terms from zero and single
photon contributions, i.e. the terms of
𝒪(α^0) and 𝒪(α^1). Then,
with
α ∂ℬ(α)/∂α = η_γ ∂ℬ(κ,η_γ;ρ)/∂η_γ ,
the finite contribution[
As pointed out in Ref. <cit.> the divergent
pieces of B cancel the divergent pieces in J_0, this was accounted
for in arriving at Eq. (<ref>) .
]
is given by
Δ B(p)/(μ^2 p) = ∫_0^∞ dρ [ ℬ(κ,η_γ;ρ) - ℬ(κ,0;ρ) - (∂_η_γℬ)(κ,0;ρ)·η_γ ]
which can be integrated numerically[
The partial derivative involved might be difficult to find
analytically. Although in principle not particularly stable, one could
use numerical approximations to the (partial) derivative of a function
ℬ(κ,η_γ;ρ) , such as (with ℬ_j
= ℬ(κ,j h;ρ)) :
.(∂_η_γℬ)(κ,η_γ;ρ)|_η_γ=0 = (8 (ℬ_+1 - ℬ_-1) - (ℬ_+2 - ℬ_-2))/(12 h) + 𝒪(h^4) = (45 (ℬ_+1 - ℬ_-1) - 9 (ℬ_+2 - ℬ_-2) + (ℬ_+3 - ℬ_-3))/(60 h) + 𝒪(h^6)
for some small finite h .
].
For neutral particles, i.e. k_C=0, one obtains
.(B(p)+μ J_0(p))/(μ^2 p)|_k_C=0 = -(1/3π) κ^3/(1+κ^2) - i/(2π) .
Finally, the contribution from the initial d-wave states to the
capture process is given by the amplitude
Y(p) = (2/3) κ Γ(2+η_γ) ∫_0^∞ dρ W_-η_γ,3/2(2 κ ρ) [ 2 F_2(η_p,ρ)/ρ + ∂_ρ F_2(η_p,ρ) ]
which for k_C=0 reduces to
.Y(p)|_k_C=0 = (2/3) p^2/(p^2+γ^2) .
§.§ Magnetic dipole radiative capture
In case of single nucleon radiative capture there are additional
relevant contributions from magnetic dipole transitions to the final
states.
§.§.§ Neutron induced magnetic dipole contribution
In case of the n + 7Li→8Li + γ reaction
we recapitulate the formulas from Ref. <cit.>.
Earlier work on this reaction, including the M1-contribution,
can be found in Ref. <cit.>.
The cross section for the M1 contribution to the radiative capture
in the 7Li + n →8Li + γ reaction through the
3^+ resonance according to Ref. <cit.> is given by
σ_M1(p) = (1/14)(7/3) (α μ/m_p^2) |h^2 𝒵^(ζ)| (k/p)^3 | p^2/(-1/a_1^(3) + (1/2) r_1^(3) p^2 - i p^3) |^2 × { | (2/3)(γ^3 - i p^3)/(γ^2+p^2) K^(2) + β^(2) |^2 + | (2/3)(γ^3 - i p^3)/(γ^2+p^2) K^(1) + β^(1) |^2 } ,
with m_p the proton mass. The asymptotic normalization of the
final 8Li-states (ζ=2^+ for the ground state or
ζ=1^+ for the first excited state) with binding momentum
γ^(ζ) is given by
h^2 𝒵^(ζ) = -2π/(3 γ^(ζ)+r_1^(ζ)) ,
the gyro-magnetic factors in the
5P3→3P2
and the
5P3→5P2
M1-transitions
are given by
K^(1) = √(3/2) ( 3/2 g_c - 3/2 g_v ) ,
K^(2) = √(3/2) ( 3/2 g_c + 1/2 g_v + 2 μ m_n Z_c/m_c^2 ) ,
in terms of the gyro-magnetic ratios g_c and g_v describing the
magnetic moments of the core and the valence system, respectively,
and β^(i), i=1,2 are constants reflecting the two-body current
terms. Finally, a_1^(3) and r_1^(3) are the scattering volume
and the effective momentum in the 5P3 scattering channel.
§.§.§ Proton induced magnetic dipole contribution
This section summarizes the results quoted in
Ref. <cit.> for the M1 contribution in the reaction
7Be + p →8B(2^+) + γ ,
through the 1^+ resonance. Only the 5P1→5P2 transition is considered, assuming that the 1^+ resonance is dominantly a proton p1/2 coupled to the 7Be(3/2^-) ground state, with the amplitude
⟨[ 3/2^-×[1/2^+× 1^-]^1/2]^1 | [[ 3/2^-×1/2^+]^2 ×1^-]^1⟩ = √(5/6) .
The cross section for a magnetic dipole radiative reaction through the
1^+ resonance is then given by <cit.>
σ_M1(p) = 1/(16π M^2) (1/6) ∑_ζ ((k_γ^(ζ))^3/p^3) |M^(ζ)_M1|^2 .
The squared matrix element reads
|M^(ζ)_M1|^2 = (2J^(ζ)_f+1) (8π M^2 α μ^3/m_p^2) (|𝒜_1(p)|^2/|C_1(η_p)|^2) (2π/μ) 𝒵^(5P2) (μ^2/(432 π^4)) |L_22(p)|^2 ,
where
L_22(p)
= 2π/μ{9π/√(40)[
3 g_c + g_v
+ 4 μ m_p
(
Z_ϕ/m_ϕ^2 + Z_ψ/m_ψ^2)
]
×D(p,γ_ζ)/μ^2
-β_22}
with μ⃗_c = g_c j⃗_c and μ⃗_v = g_v j⃗_v the (spin) magnetic moments of the constituents c, v of the di-nuclear
system and
μ⃗_L = μ m_p( Z_c/m_c^2 + Z_v/m_v^2) L⃗
the magnetic moment due to the current associated with the relative orbital motion.
[
Note that the current due to the velocity of fragment i is given
by the operator (Z_i e)/(m_i) p⃗_i . Accordingly the associated orbital magnetic moment, expressed
in units of μ_N = (e ħ c)/(2 m_p c^2) , reads μ⃗_i = Z_i (m_p/m_i) μ_N L⃗_i with L⃗_i the angular
momentum of fragment i .
]
Furthermore
D(p)/(μ^2 γ) = (1/3π) (κ^3-1)/(1+κ^2) + η_γ D'(η_p,κ)/μ^2 + Δ D(k_C;p,γ)/(μ^2 γ) ,
where we defined
ρ = p r , η_γ = k_C/γ ,
κ=γ/p , η_p = κ η_γ = k_C/p.
The integrand
𝒟(κ,η_γ;ρ) = -(κ/3π) Γ(2+ i κ η_γ) Γ(2+η_γ) W_-η_γ,3/2(2 κ ρ) W_-iη_p,3/2(-2iρ)
in
D(k_C;p,γ) = μ^2 p ∫_0^∞ dρ 𝒟(κ,η_γ;ρ)
is again divergent for ρ→ 0 . By subtracting the zero and
single photon contributions one then defines the finite term
Δ D(k_C;p,γ) = μ^2 p ∫_0^∞ dρ [ 𝒟(κ,η_γ;ρ) - 𝒟(κ,0;ρ) - (∂_η_γ𝒟)(κ,0;ρ)·η_γ ] .
The evaluation of the second term D'(η_p,κ) on the r.h.s. of
Eq. (<ref>) is given in App. <ref>.
The initial-state interaction in the 5P1 channel is given
by the amplitude
γ^2 𝒜_1(p) = (2π γ/μ) × 9 (C_1(η_p))^2 e^(2 i σ_1) / ( -1/(a_1^(5P1) p^2 γ) + (1/2) r^(5P1)/γ - 2 (p/γ) η_p (η_p^2+1) H(η_p) ) ,
where a^(5P1) is the scattering volume
and r^(5P1) the effective momentum to reproduce the 1^+ resonance
with position E_R=p_R^2/(2μ) = 0.630(3) MeV and
width Γ_R = 0.0357(6) MeV .
The normalization of the final state is determined by
γ (2π/μ) 𝒵^(5P2) = 2π/( -ρ_γ + 2 η_γ^2(η_γ^2-1) h'(η_γ) + 4 η_γ h(η_γ) )
with ρ_γ = r_1^(5P2)/γ .
§ CROSS SECTIONS, ASTROPHYSICAL S-FACTORS AND REACTION RATES.
The cross sections σ or the corresponding astrophysical S-factors, given as
S(E) = E σ(E) exp(√(E_G/E)) ,
with E_G
the Gamow energy in the entrance channel, were calculated according to
the formulas given in the previous section, Sect. <ref>.
The nuclear structure parameters are given in Tab. <ref>,
while the reaction parameters for the nucleon induced reactions are
given in Tab. <ref> and for the radiative capture to
7Li and 7Be are given in Tab. <ref>.
The reaction rate in thermal equilibrium at temperature T then
follows from the cross section via
γ(T) = N_A √(8/(π μ (kT)^3)) ∫_0^∞ dE σ(E) E e^(-E/kT) ,
where N_A is the Avogadro number and k the Boltzmann constant.
Essentially, this expression for the rate has the form of a
Laplace-transform of the cross section multiplied by the CM-energy.
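Numerically, such Laplace-type integrals are conveniently evaluated with Gauss–Laguerre quadrature; the sketch below (our own, not one of the codes cited later; σ(E) is an arbitrary placeholder cross section in cm² and μc² an assumed reduced mass in MeV) illustrates the procedure:

```python
# Sketch: thermal rate gamma(T) = N_A <sigma v> via Gauss-Laguerre quadrature of
# int dE sigma(E) E exp(-E/kT). Units: sigma in cm^2, E and mu*c^2 in MeV.
import numpy as np

N_A, c_cm = 6.02214076e23, 2.99792458e10     # 1/mol, cm/s
k_B = 8.617333262e-11                         # Boltzmann constant in MeV/K

def rate(sigma, T, mu_c2, n=60):
    kT = k_B * T                              # MeV
    x, w = np.polynomial.laguerre.laggauss(n)
    integral = kT**2 * np.sum(w * sigma(kT * x) * x)   # substitution E = kT x
    return N_A * c_cm * np.sqrt(8.0 / (np.pi * mu_c2 * kT**3)) * integral

# toy example: constant cross section of 1 mb = 1e-27 cm^2 at T = 1 GK
print(rate(lambda E: 1e-27 * np.ones_like(E), T=1e9, mu_c2=820.0))  # cm^3/(mol s)
```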
In the next subsections we present the resulting cross sections or
astrophysical S-factors as well as the corresponding rates according
to Eq. (<ref>) for the four reactions studied here, all
calculated at the present value of the fine-structure constant that we
shall call the nominal value of α given by
α_0 = 7.2973525693(11) × 10^-3 = 1/137.035999084(2)
from Ref. <cit.>.
For each of the reactions considered here we present the calculated rates for
two parameter sets used in the Halo-EFT calculations in order to give an impression
of the systematic uncertainty. In addition we compare the resulting rates
with those used in the original versions of some publicly available BBN codes: viz.
<cit.>,
<cit.>,
<cit.> and
<cit.>,
if available. These were also considered in our study <cit.>
mentioned in the introduction.
The most recent code ,
see <cit.>, by default uses the
-rates and thus in this context is not discussed
separately.
§.§ The n + 7Li→8Li + γ reaction
In this case Z_v = 0, Z_c=3 , s_v = 1/2 , s_c=3/2 and
Z_f=4 with J_f = 2 for the ground state and J_f=1 for the
excited state. The final 2^+ state is supposed to be an equal
mixture of the 3P2 and 5P2 states,
i.e.
|2^+⟩ = 1/√(2)|3P2⟩ + 1/√(2)|5P2⟩ ,
while the excited 1^+ state is supposed to be
|1^+⟩ = -1/√(6)|3P1⟩ + √(5/6)|5P1⟩ .
The total radiative capture cross section in this case is given by the
sum of the expression for the electric dipole contribution given in
Eqs. (<ref>,<ref>) with the special formulas
for k_C=0 given in
Eqs. (<ref>,<ref>,<ref>,<ref>)
and the magnetic dipole contribution of Eq. (<ref>) .
The resulting cross section (scaled with the laboratory neutron
velocity) is compared to the experimental data in
Fig. <ref>.
The calculated rates for the parameter sets “A” and “ANC” (these
correspond to the parameter sets called “EFT A” and “EFT ANC” in
Ref. <cit.>, respectively; “ANC” standing for:
“parameters corresponding to empirical A(symptotic) N(ormalization)
C(oefficient)”) are compared to the rates as parameterized in
<cit.>,
<cit.>
and <cit.>
as well as to the rate resulting from the following novel
parameterization of the cross section, accounting for the 1^+
resonance via a non-relativistic Breit-Wigner parameterization
√(E) σ(E) = 0.0675 (1-0.045 E+0.7 E^2)/(1+0.001 E+0.7 E^2) + 0.018/(1+5000.0 (E-0.2215)^2) ,
(in mb MeV^1/2, with E in MeV)
in Fig. <ref>. Indeed this parameterization yields a rate
very similar to those of the Halo-EFT calculation.
§.§ The p + 7Be→8B + γ reaction
In this case Z_v = 1, Z_c=4 , s_v = 1/2 , s_c=3/2 and
Z_f=5 with J_f = 2 for the ground state, which, as the
corresponding ground state of the mirror nucleus is supposed to be an
equal mixture of the 3P2 and 5P2 states. The
total radiative capture cross section is given by the sum of the
expression for the electric dipole contribution in
Eqs. (<ref>,<ref>) with k_C 0 and the
resonant magnetic dipole contribution as given in
Eq. (<ref>) .
The resulting S-factor
S(E) = 0.018 (1+0.3 E+0.125 E^2)/(1+0.017 E^2) + 0.090/(1+2500.0 (E-0.63)^2) ,
(in MeV mb, with E in MeV)
is compared to experimental data and to the parameterization of Eq. (<ref>) in Fig. <ref>.
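For orientation, converting this S-factor back to a cross section only requires undoing the exponential (a sketch of ours; the Gamow energy E_G ≈ 13.8 MeV reuses the approximate reduced mass assumed in the snippet shown in the introduction):

```python
# Sketch: cross section (mb) from the parameterized S-factor (MeV mb) via
# sigma(E) = S(E)/E * exp(-sqrt(E_G/E)) for p + 7Be.
import numpy as np

E_G = 13.8   # MeV, approximate Gamow energy from the earlier sketch (assumption)

def S(E):    # parameterization above, E in MeV
    return (0.018 * (1 + 0.3*E + 0.125*E**2) / (1 + 0.017*E**2)
            + 0.090 / (1 + 2500.0*(E - 0.63)**2))

def sigma_mb(E):
    return S(E) / E * np.exp(-np.sqrt(E_G / E))

print(sigma_mb(0.1))   # strongly Coulomb-suppressed at low CMS energies
```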
The calculated rate for the parameter sets “NNLO” and “ANC”, where
these labels refer to the parameter sets labeled
“EFT_ gs I NNLO” and those
related to a determination from A(symptotic) N(ormalization)
C(oefficients), respectively, see Ref. <cit.>,
are compared to the rates as parameterized in
<cit.>,
<cit.>
and <cit.> as well as to
the rate corresponding to the parameterization of the S-factor of
Eq. (<ref>) in Fig. <ref>.
§.§ The 3H + 4He→7Li + γ reaction
In this case Z_v = 1, Z_c=2 , s_v = 1/2 , s_c=0 and Z_f=3 with
J_f = 3/2 for the ground state and J_f = 1/2 for the first
excited state. The radiative capture cross section is determined by
electric dipole contributions only, i.e. by the expression
for the electric dipole contribution in
Eqs. (<ref>,<ref>) with k_C 0. The
parameter sets labeled “fit” and “A” correspond to the parameter
sets labeled “χ^2” and “Model A” in
Ref. <cit.>, respectively.
The S-factor for this reaction
S(E) = 0.01 (1-1.15 E+1.0 E^2)/(1+0.01 E+0.5 E^2)
(in MeV mb, with E in MeV) .
is compared to experimental data and to the parameterization used in Ref. <cit.>, as well as to an improved parameterization thereof, see Eq. (<ref>), in Fig. <ref>.
This new parameterization of the S-factor is closer to the
calculations within the framework of Halo-EFT studied here and improves
the description of the data,
in particular for energies E_cm>1 MeV,
and indeed
yields a rate that is much smaller at higher temperatures, see Fig. <ref>.
§.§ The 3He + 4He→7Be + γ reaction
In this case Z_v = 2, Z_c=2 , s_v = 1/2 , s_c=0 and Z_f=4 with
J_f = 3/2 for the ground state and J_f = 1/2 for the first
excited state. The radiative capture cross section is again
determined by electric dipole contributions only, i.e. by
the expression for the electric dipole contribution in
Eqs. (<ref>,<ref>) with k_C 0.
The parameter sets labeled “fit” and “AII” correspond to
the parameter sets labeled “χ^2” and “Model AII”
in Ref. <cit.>, respectively.
The astrophysical S-factor is displayed in Fig. <ref>.
The resulting nominal rates are given in Fig. <ref>.
§ THE FINE-STRUCTURE CONSTANT DEPENDENCE OF THE RATES
In order to study the fine-structure constant dependence we calculated the rates for
α = α_0 (1+δ) ,
where the fractional change in α, i.e. δ, was
varied in the range [-0.05,+0.05]. We shall distinguish direct
and indirect effects of the variation of the fine-structure constant:
Direct effect
First of all the fine-structure constant α enters the
calculation of the radiative capture cross section as a linear factor
due to the coupling of electromagnetic field to the charges and
currents, which in the amplitude is proportional to e and hence in
the cross section leads to a proportionality e^2 ∝α.
Furthermore α enters the cross section via the inverse
Bohr-radius k_C = Z_v Z_c c^2 μ α , that in turn determines
the dimensionless quantities η_γ = k_C/γ, where
γ is the binding momentum, η_ρ = k_C/ρ, where
ρ the p-wave effective range, and η_p = k_C/p (the
Sommerfeld-parameter), that enter the expressions for the
normalization 𝒩(η_γ,η_ρ)
(Eq. (<ref>)), the amplitudes
𝒜(η_γ;η_p) (Eq. (<ref>)) , via
X(η_γ;η_p) (Eq. (<ref>)), and
ℬ(η_γ;η_p) (Eq. (<ref>)) , as well
as Y(η_γ;η_p) (Eq. (<ref>)) . The
Sommerfeld-parameter η_p also enters the astrophysical
S-factor.
Because the dependence of k_C on α is linear, k_C ∝α, we have
k_C(α)
=
k_C(α_0 (1+δ_α))
=
k_C(α_0) (1+δ_α) .
We shall call this the “direct effect”.
Indirect effect
On top of this, the value of α influences the nuclear binding
energies, i.e. the α dependence of the nuclear mass of the
nuclide i is given by
m^i(α)
=
m^i_N + V^i_C (1+δ_α)
=
m^i + V^i_C δ_α ,
where V^i_C denotes the (repulsive) Coulomb-energy contribution to the nuclear mass.
This in turn influences the Q-value of the reaction, i.e.
Q(α)
=
m_v(α) + m_c(α) - M_f(α) = B_f(α)
and thus the binding momentum
γ(α)
=
√(2 μ(α) c^2 B_f(α)) .
Concerning the kinematics of the reaction: For a given
CMS kinetic energy E, the CMS relative momentum in the entrance
channel p and the CMS final photon momentum k_γ are given by
s(E) = (m_v + m_c + E)^2 ,
p(E) = √( (s(E)-(m_c-m_v)^2)(s(E)-(m_c+m_v)^2) / (4 s(E)) ) ≈ √(2 μ E) ,
k_γ(E) = (s(E)-M_f^2)/√(4 s(E)) ≈ Q + E
and thus all depend on α. Because in general (for
δ_α≈ 0.1) Δ V^i_C/m^i ≈𝒪(10^-4) the dependence of M and μ on α is
expected to be rather small (Δμ/μ≈𝒪(10^-4)), whereas the change in the Q-value can be
appreciable, ΔQ/Q ≈ 𝒪(10^-1). Accordingly,
the effect of the variation of μ with α on the
value of k_C = Z_v Z_c μ c^2 α will be ignored.
We call the total of these kinematical variations the “indirect effect”.
In Ref. <cit.>, we introduced an approximation to the
dependence of the rate on α by evaluating the effect on the
parameterized S-factor at an energy where the S-factor is supposed
to be maximal. For charged particle induced reactions this energy is
given by
E = (kT/2)^2/3 (E_G^i)^1/3 .
This then leads to a temperature dependent factor, that gives a fair
approximation to parameterized results, both with and without
including the indirect effects.
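A rough numerical transcription of this idea, for the direct effect only, is sketched below: the rate ratio is estimated as the linear coupling factor α/α_0 times the change of the penetration factor exp(-√(E_G/E)) at the fixed energy above, with the Gamow energy E_G = 2 μ c^2 (π Z_v Z_c α)^2. The p + 7Be parameters used here are assumptions of the sketch, and the exact prescription is that of Ref. <cit.>.

import numpy as np

ALPHA0 = 1.0/137.036

def gamow_energy(mu_c2, ZvZc, alpha):
    """Gamow energy E_G = 2 mu c^2 (pi Zv Zc alpha)^2, in MeV."""
    return 2.0 * mu_c2 * (np.pi * ZvZc * alpha)**2

def rate_ratio_direct(T9, delta, mu_c2=820.7, ZvZc=4):
    """Approximate gamma(alpha;T)/gamma(alpha_0;T), direct effect only."""
    alpha = ALPHA0 * (1.0 + delta)
    kT = 0.08617 * T9                               # MeV
    EG0 = gamow_energy(mu_c2, ZvZc, ALPHA0)
    EG  = gamow_energy(mu_c2, ZvZc, alpha)
    E0  = (0.5*kT)**(2.0/3.0) * EG0**(1.0/3.0)      # fixed evaluation energy
    # linear factor alpha/alpha_0 times the shift of exp(-sqrt(E_G/E)) at E = E0
    return (alpha/ALPHA0) * np.exp(np.sqrt(EG0/E0) - np.sqrt(EG/E0))

for delta in (-0.05, 0.05):
    print(f"delta = {delta:+.2f}: ratio ~ {rate_ratio_direct(1.0, delta):.3f}")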
§.§ The n + 7Li→8Li + γ reaction
Since the neutron in the entrance channel is uncharged, there is no Coulomb
interaction between the clusters and accordingly the direct effect of
varying α is completely determined by the fact that the cross section
is strictly linear in α. In addition there is the indirect effect
stemming from the fine-structure constant dependence of the Coulomb
contributions to the binding energies of 7Li and
8Li, which affects the Q-value of the reaction.
The variation of the rate with α is displayed in
Fig. <ref>, where the relative variation
γ(α;T)/γ(α_0;T) is plotted as a function of the
temperature. Although the calculated rates, see
Fig. <ref>, do differ slightly, the relative changes
of the rates are almost identical. The bottom panel in Fig. <ref> indeed
merely reflects that the cross section for this reaction depends
trivially linearly on α: if α varies by
5%, then the direct-effect rate also varies by 5%, and this effect is
temperature-independent. Small deviations occur if the variation
of the binding energies of the Li nuclides is also taken into account, see
the top panel of Fig. <ref>.
§.§ The p + 7Be→8B + γ reaction
The fine-structure dependence of the temperature dependent rate of the
proton-induced radiative capture reaction is more interesting. In
Fig. <ref> the variation of the temperature-dependent
rate with α of the calculated values with the two parameter
sets is compared to the values obtained with the parameterization of
the α-dependence of the rates based on the parameterized cross
sections as done in Ref. <cit.>. From this figure one
infers that the relative variation in the Halo-EFT calculations is
smaller by about 40% than that found in Ref. <cit.>.
Excluding the M1 contribution yields practically identical results.
Furthermore it is observed, that considering also the indirect effect,
i.e. also the effect on the binding energies and thus on the
Q-value of the reaction, enhances this difference.
§.§ The 3H + 4He→7Li + γ reaction
The relative variation with α of the temperature-dependent rate
for the two parameter sets “fit” and “A” is compared to that of
the rate based on the parameterization of Ref. <cit.>
in Fig. <ref>. Contrary to the previous reaction, this
variation is larger for the Halo-EFT results, in particular for the
parameter set “fit”, than for the parameterized rate.
Considering the direct effect only leads to the same conclusion.
§.§ The 3He + 4He→7Be + γ reaction
The relative variation with the value of the
fine-structure constant α of the temperature-dependent rates
for the two parameter sets “fit” and “AII” is compared to that
of the rate based on the parameterization of
Ref. <cit.> in Fig. <ref>. Here, the
relative variation with the two Halo-EFT parameter sets is much larger
than for the parameterization used previously, in particular if the
fine-structure constant is smaller than the nominal value. We shall
discuss the reason for this in the next section,
Sect. <ref>.
§.§ Discussion
First of all we observe that although the resonant magnetic dipole
contribution in the nucleon induced reaction accounts for a prominent
feature in the cross sections (or astrophysical S-factors), this
contribution is of minor importance in the variation of the rates with
α, see e.g. Fig. <ref>. Below we
shall therefore focus on the effects from the dominant electric dipole
contributions. With the exception of the neutron induced reaction the
effects of the α-variation differ from what was estimated on
the basis of the parameterization of the cross-section used before in
Ref. <cit.>. The dominant effect from the electric
dipole contribution seems to be the α variation of the
normalisation of Eq. (<ref>). The variation with α of
the relative normalisation N(α)/N(α_0) for the three
charged particle induced reactions is displayed in
Fig. <ref>. For the proton induced reaction the results
are shown for both the 5P2 and the 3P2
amplitudes contributing with equal weight to the ground state
capture. For the other two reactions the normalisation of the
2P3/2 (ground state) and the
2P1/2 (excited state) are shown. This figure
illustrates the main effects observed in the variation of the rates
with α:
* Because of the absence of Coulomb interactions the variation with
α of the cross sections and corresponding rates of the neutron
induced reaction is trivially linear.
* For the proton induced reaction the normalisation varies with
α almost linearly by ± 40% for δ∈ [-0.05,0.05].
Furthermore, the results for the two parameter sets considered here
are almost identical. The variation of the rates is slightly larger
for negative δ, while considering the direct effect alone leads to a
variation symmetric in δ, in accordance with the variation of the
normalisation.
Also displayed in Fig. <ref> is
the variation of the parameterized rate with α on the basis of
the approximation introduced in Ref. <cit.>, by evaluating
the effects at a fixed energy, see also Eq. (<ref>).
Indeed for the proton induced
reaction this temperature dependence is larger than the result
calculated in Halo-EFT in both cases.
* In the case of the 3H and 3He induced reactions the
results with the approximation discussed above almost coincide with the
results on the basis of the parameterization, as was already
demonstrated in Ref. <cit.>, and the variation in these cases
is indeed much smaller than what is to be expected on the basis of the
normalisation. In any case, the α dependence of the normalisation is
rather asymmetric in δ as shown in
Fig. <ref>. For the 3He induced reaction
this is even more prominent since the
denominator in the expression for the norm, see Eq. (<ref>),
vanishes for δ_α < -0.06,
corresponding to a pole in the normalisation and thus leading to a
very asymmetric δ dependence in this case.
* Accordingly, within Halo-EFT the study of the α dependence of the
rates is limited to a rather
moderate relative variation of α of 5% only.
§ ABUNDANCES
In order to assess the relevance of the variation of the rates with
a variation of the fine-structure constant on the variation of the
resulting abundances of the light elements in BBN, we used five
different publicly available codes, viz.
<cit.>,
<cit.>,
<cit.>,
<cit.> and
<cit.>.
We use the rates as in our previous work,
see <cit.>, substituting the α dependence of
the rates for the four reactions considered here as discussed above.
More specifically, for calculating the α dependence of the
abundances, we used the parameter sets “NNLO”, “fit” and “fit”
for the
p + 7Be→8B + γ,
3H + 4He→7Li + γ and
3He + 4He→7Be + γ reactions,
respectively, the α-dependence of neutron induced radiative
capture reaction n + 7Li→8Li + γ being
practically linear anyway. As was demonstrated in
Sect. <ref>, these parameter sets showed the largest
variation of the rates with α.
In Table <ref> we list the results obtained with the nominal rates, i.e.
for α = α_0, see Eq. (<ref>). The results show
that with the exception of the values for the
7Li+Be abundance, which are larger by about
10%, and thus slightly deteriorate the so-called “Li-problem”,
the treatment of the four reactions in Halo-EFT as considered here
leads to results practically identical to those obtained previously in
Ref. <cit.>.
The fine-structure constant dependence
of the primordial abundances is depicted in Fig. <ref>.
Again, the results for the d-, 3H+He-,
4He- and 6Li- abundances are very similar to those
obtained previously, see Fig. 5 in
Ref. <cit.>. Moreover, the five BBN-codes considered
here produce consistent results, in spite of the fact that these codes
differ in details, such as the number of reactions in the BBN network
or the manner in which the rate equations are solved numerically.
This then also applies to the values for the resulting response matrix
elements. The (linear) response matrix elements
∂log(Y_n/Y_H)/∂logα = c_1 and the
coefficients of the quadratic term (c_2) in a quadratic
least-squares fit of the form
P_k(δ_α) = c_0 (1 + c_1 δ_α + c_2 δ_α^2) ,
are given and compared to the results obtained previously in
Table <ref>.
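For reference, extracting these coefficients amounts to an ordinary quadratic least-squares fit in δ_α. The sketch below illustrates the extraction with mock abundance values; the numbers are placeholders, not actual BBN output.

import numpy as np

rng = np.random.default_rng(0)
delta = np.linspace(-0.05, 0.05, 11)       # grid of fractional alpha shifts

# Mock abundance curve standing in for a BBN-code output (placeholder numbers)
c0_true, c1_true, c2_true = 4.5e-10, 5.0, 40.0
P = c0_true * (1.0 + c1_true*delta + c2_true*delta**2)
P *= 1.0 + 1.0e-3 * rng.standard_normal(delta.size)   # small numerical scatter

# Least-squares fit of P_k(delta) = c0 (1 + c1 delta + c2 delta^2)
a2, a1, a0 = np.polyfit(delta, P, 2)       # P ~ a0 + a1 delta + a2 delta^2
c0, c1, c2 = a0, a1/a0, a2/a0
print(f"c0 = {c0:.3e}, c1 = {c1:.2f}, c2 = {c2:.1f}")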
We do find a very different result for the α-dependence of the
7Li+Be abundance: In particular the linear
response coefficient is approximately five times larger than the value
obtained previously and moreover the response is far from linear, the
quadratic coefficient being approximately 40 times larger than the
value previously obtained in <cit.>, as can also be
seen from a comparison of Fig. <ref> with Fig.5 of
Ref. <cit.>.
If instead of the parameter sets “NNLO”, “fit” and “fit” for
the reactions p + 7Be→8B + γ, 3H +
4He→7Li + γ and 3He + 4He→7Be + γ, respectively, we use the parameter sets
“ANC”, “A” and “AII” of Tables <ref>
and <ref> for these three reactions, respectively, we
find similar results, except for the 7Li+Be response
coefficients: In accordance with the fact that, as was shown in
Sect. <ref>, the change of the rates with α was
found to be smaller for these parameters, the linear response
coefficient c_1 is about half as large and the quadratic coefficient
is smaller by a factor 2.5, still corresponding to an appreciable
curvature.
§ SUMMARY
In this work we have studied the fine-structure constant dependence of
some BBN-relevant radiative capture reactions within the framework of
Halo-EFT. We concentrated on the main effects, refraining from
implementing a coupled channel approach as would be dictated by strict
EFT power counting. Nevertheless we studied for each reaction two
parameter sets in order to obtain an indication of the systematic
errors. We found that the effects do deviate from what has been found
previously on the basis of parameterized cross section data and a
simple parameterization of the α dependence motivated by a
simple penetration factor. While for a neutron induced radiative
capture reaction the results are almost strictly linear, as is to be
expected since the radiative capture reaction amplitude is linear in
the electromagnetic coupling and thus the cross section is linear in
α, for charged particle reactions the direct effect can both be
smaller, as is the case for the 7Be(p,γ)8B
reaction, or larger, as is the case for the
4He(3H,γ)7Li and the
4He(3He,γ)7Be radiative captures, than
what is to be expected on the basis of the parameterized treatment.
In spite of these substantial deviations from the
α-dependence of the parameterized rates obtained for these
reactions previously, the impact on the resulting abundances and on
their α-dependence of the light elements 2H,
3H+He, 4He, 6Li with the rates
calculated within the framework of Halo-EFT is very minor only. In
contrast for the 7Li+Be-abundance we do find that
the α-dependence differs appreciably from that of the previous
parameterized results, this α-dependence being much more
pronounced and clearly non-linear with the Halo-EFT rates. Also the
nominal abundance (i.e. calculated with the current value of
the fine-structure constant α_0) of 7Li+Be is
larger by almost 10 %, whereas the other abundances remain
practically unchanged.
For reactions involving charged particles, the Halo-EFT calculation
accounts for the charged particle repulsion by inclusion of the full
Coulomb propagator in all reaction steps. As the present study shows,
these Coulomb effects cannot always be approximated by a universal
penetration factor. It was also found that in some cases the study of
the fine-structure dependence of cross sections and the corresponding
rates within the framework of Halo-EFT can be limited by singularities
appearing in the normalisation, that enters as a factor in the
resulting cross sections. This was found to be relevant for the
3He + 4He→7Be + γ reaction,
limiting the study to relative variations of α smaller than
6% . Furthermore, it should be stressed that the Halo-EFT framework
is of course restricted to those reactions where the di-nuclear
structure assumption underlying this is indeed applicable. Therefore
a definite assessment of the fine-structure dependence of rates relevant for
primordial nucleosynthesis should ultimately be performed within a framework
that allows for a genuine ab initio treatment of nuclear reaction
dynamics. Indeed recent progress within the framework of nuclear lattice
effective field theory (NLEFT), see e.g. Ref. <cit.>,
shows that NLEFT seems to be a promising candidate for such
a treatment.
This project is part of the ERC Advanced Grant “EXOTIC” supported by
the European Research Council (ERC) under the European Union's Horizon
2020 research and innovation programme (grant agreement
No. 101018170). We further acknowledge support by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) and the NSFC
through the funds provided to the Sino-German Collaborative Research
Center TRR110 “Symmetries and the Emergence of Structure in QCD”
(DFG Project ID 196253076 - TRR 110, NSFC Grant No. 12070131001), and
the Chinese Academy of Sciences (CAS) President's International
Fellowship Initiative (PIFI) (Grant No. 2018DM0034).
§ C-INTEGRAL
To calculate

C(p) = lim_δ↓0 μ^2/(6 π^2 (p^2+γ^2)) ∫_0^1 dx ∫_0^1 dy (1/√(x (1-x))) (1/√(1-y))
× ( x p^2 log[ (π/(4 k_C^2)) ( -y p^2 + (1-y) γ^2/x - δ ) ]
+ p^2 log[ (π/(4 k_C^2)) ( -y p^2 - (1-y) p^2/x - δ ) ]
+ x γ^2 log[ (π/(4 k_C^2)) ( y γ^2 + (1-y) γ^2/x - δ ) ]
+ γ^2 log[ (π/(4 k_C^2)) ( y γ^2 - (1-y) p^2/x - δ ) ] ) .
Because the integral over y can be performed analytically this
reduces to a single integral and, with the substitution x = sin^2 ϑ, one obtains
with κ = γ/p and η_p = k_C/p:

C(p)/μ^2 = C_0 + C_1(κ) ,

where

C_0 = (1/(2π)) log[ π/(4 η_p^2) ]

and

C_1(κ) = (1/(3 π^2 (1+κ^2))) ∫_0^{π/2} dϑ { sin^2 ϑ c_1(κ;sin^2 ϑ) + c_2(κ;sin^2 ϑ)
+ sin^2 ϑ κ^2 c_3(κ;sin^2 ϑ) + κ^2 c_4(κ;sin^2 ϑ) } ,

to be evaluated numerically with the integrands
c_1(κ;x) = ∫_0^1 dy (1/√(1-y)) log[ -y + (1-y) κ^2/x - δ ]
= 2 log[κ^2/x] - 4
+ (2/√(κ^2/x+1)) { log[ (√(κ^2/x+1)+1)/(√(κ^2/x+1)-1) ] - π } ,

c_2(κ;x) = ∫_0^1 dy (1/√(1-y)) log[ -y - (1-y)/x - δ ]
= -2 log[x] - 2π - 4
+ 4 √(x/(1-x)) arctan(√((1-x)/x)) ,

c_3(κ;x) = ∫_0^1 dy (1/√(1-y)) log[ κ^2 y + (1-y) κ^2/x - δ ]
= 2 log[κ^2/x] - 4
+ 4 √(x/(1-x)) arctan(√((1-x)/x)) ,

c_4(κ;x) = ∫_0^1 dy (1/√(1-y)) log[ κ^2 y - (1-y)/x - δ ]
= -2 log[x] - 2π - 4
+ (2κ/√(1/x+κ^2)) { log[ (√(1/x+κ^2)+κ)/(√(1/x+κ^2)-κ) ] + π } .
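Closed forms like these are easy to mistype, so a quick numerical cross-check is useful. The sketch below verifies the quoted closed form of c_3, whose log argument stays positive so that no δ-prescription is needed; the substitution u = √(1-y) removes the integrable endpoint singularity.

import numpy as np
from scipy.integrate import quad

def c3_closed(kappa, x):
    """Closed form of c_3(kappa; x) quoted above."""
    return (2.0*np.log(kappa**2/x) - 4.0
            + 4.0*np.sqrt(x/(1.0-x))*np.arctan(np.sqrt((1.0-x)/x)))

def c3_numeric(kappa, x):
    """Direct quadrature of Int_0^1 dy log[k^2 y + (1-y) k^2/x]/sqrt(1-y)."""
    # substitute u = sqrt(1-y), i.e. y = 1 - u^2, so dy/sqrt(1-y) = 2 du
    f = lambda u: 2.0*np.log(kappa**2*(1.0-u**2) + u**2*kappa**2/x)
    val, _ = quad(f, 0.0, 1.0)
    return val

for kappa, x in [(0.5, 0.3), (2.0, 0.7)]:
    print(f"kappa={kappa}, x={x}: closed={c3_closed(kappa,x):+.6f}, "
          f"numeric={c3_numeric(kappa,x):+.6f}")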
§ D'-INTEGRAL
To calculate

D'(k_C;p,γ) = lim_δ↓0 μ^2/(6 π^2 (p^2+γ^2)) ∫_0^1 dx ∫_0^1 dy √(x/(1-x)) (1/√(1-y))
× ( p^2 log[ (1/(4 k_C^2)) ( -y p^2 + (1-y) γ^2/x - δ ) ]
+ p^2 log[ (1/(4 k_C^2)) ( -y p^2 - (1-y) p^2/x - δ ) ]
+ γ^2 log[ (1/(4 k_C^2)) ( y γ^2 + (1-y) γ^2/x - δ ) ]
+ γ^2 log[ (1/(4 k_C^2)) ( y γ^2 - (1-y) p^2/x - δ ) ] ) .
Thus, with η_p = k_C/p and κ = γ/p = η_p/η_γ:

D'(η_p,κ)/μ^2 = lim_δ↓0 (1/(6 π^2 (1+κ^2))) ∫_0^1 dx ∫_0^1 dy √(x/(1-x)) (1/√(1-y))
× ( log[ (1/(4 η_p^2)) ( -y + (1-y) κ^2/x - δ ) ]
+ log[ (1/(4 η_p^2)) ( -y - (1-y)/x - δ ) ]
+ κ^2 log[ (1/(4 η_p^2)) ( y κ^2 + (1-y) κ^2/x - δ ) ]
+ κ^2 log[ (1/(4 η_p^2)) ( y κ^2 - (1-y)/x - δ ) ] ) ,

i.e.

D'(η_p,κ)/μ^2 = -(1/(3π)) log(4 η_p^2)
+ (1/(3 π^2 (1+κ^2))) ∫_0^{π/2} dϑ sin^2 ϑ { c_1(κ,sin^2 ϑ) + c_2(κ,sin^2 ϑ)
+ κ^2 c_3(κ,sin^2 ϑ) + κ^2 c_4(κ,sin^2 ϑ) } ,

in terms of the integrands of Eqs. (<ref>)-(<ref>) of Section <ref>.
§ REFERENCES
Meissner:2023voo
U.-G. Meißner, B. C. Metsch and H. Meyer,
Eur. Phys. J. A 59 (2023) no.10, 223
doi:10.1140/epja/s10050-023-01131-3
[arXiv:2305.15849 [hep-th]].
Rupak:1999rk
G. Rupak,
Nucl. Phys. A 678, 405-423 (2000).
10.1016/S0375-9474(00)00323-7
[arXiv:nucl-th/9911018 [nucl-th]].
Fernando:2011ts
L. Fernando, R. Higa and G. Rupak,
Eur. Phys. J. A 48 (2012), 24
doi:10.1140/epja/i2012-12024-7
[arXiv:1109.1876 [nucl-th]].
Higa:2020kfs
R. Higa, P. Premarathna and G. Rupak,
Eur. Phys. J. A 57 (2021) no.9, 269
doi:10.1140/epja/s10050-021-00516-6
[arXiv:2009.09324 [nucl-th]].
Higa:2022mlt
R. Higa, P. Premarathna and G. Rupak,
Phys. Rev. C 106 (2022) no.1, 014601
doi:10.1103/PhysRevC.106.014601
Higa:2016igc
R. Higa, G. Rupak and A. Vaghani,
Eur. Phys. J. A 54 (2018) no.5, 89
doi:10.1140/epja/i2018-12486-5
[arXiv:1612.08959 [nucl-th]].
Premarathna:2019tup
P. Premarathna and G. Rupak,
Eur. Phys. J. A 56 (2020) no.6, 166
doi:10.1140/epja/s10050-020-00113-z
[arXiv:1906.04143 [nucl-th]].
Workman:2022ynf
R. L. Workman et al. [Particle Data Group],
PTEP 2022, 083C01 (2022).
10.1093/ptep/ptac097
Kawano:1992ua
L. Kawano,
Let's go: Early universe. 2. Primordial nucleosynthesis: The computer way.
Report FERMILAB-PUB-92-004-A (1992).
Arbey:2011nf
A. Arbey,
Comput. Phys. Commun. 183, 1822-1831 (2012).
10.1016/j.cpc.2012.03.018
[arXiv:1106.1363 [astro-ph.CO]].
Arbey:2018zfh
A. Arbey, J. Auffinger, K. P. Hickerson and E. S. Jenssen,
Comput. Phys. Commun. 248, 106982 (2020).
10.1016/j.cpc.2019.106982
[arXiv:1806.11095 [astro-ph.CO]].
Pisanti:2007hk
O. Pisanti, A. Cirillo, S. Esposito, F. Iocco, G. Mangano, G. Miele and P. D. Serpico,
Comput. Phys. Commun. 178 956-971 (2008).
10.1016/j.cpc.2008.02.015
[arXiv:0705.0290 [astro-ph]].
Consiglio:2017pot
R. Consiglio, P. F. de Salas, G. Mangano, G. Miele, S. Pastor and O. Pisanti,
Comput. Phys. Commun. 233, 237-242 (2018).
10.1016/j.cpc.2018.06.022
[arXiv:1712.04378 [astro-ph.CO]].
Gariazzo:2021iiu
S. Gariazzo, P. F. de Salas, O. Pisanti and R. Consiglio,
Comput. Phys. Commun. 271, 108205 (2022).
10.1016/j.cpc.2021.108205
[arXiv:2103.05027 [astro-ph.IM]].
Pitrou:2018cgg
C. Pitrou, A. Coc, J. P. Uzan and E. Vangioni,
Phys. Rept. 754, 1-66 (2018).
10.1016/j.physrep.2018.04.005
[arXiv:1801.08023 [astro-ph.CO]].
Burns:2023sgx
A.-K. Burns, T. M. P. Tait and M. Valli,
[arXiv:2307.07061 [hep-ph]] (2023).
PRyMordial-Code
A.-K. Burns, T. M. P. Tait and M. Valli,
<https://github.com/vallima/PRyMordial>
Nagai:2005qu
Y. Nagai, M. Igashira, T. Takaoka, T. Kikuchi, T. Shima, A. Tomyo, A. Mengoni and T. Otsuka,
Phys. Rev. C 71 (2005), 055803
doi:10.1103/PhysRevC.71.055803
Imhof:1959zz
W. L. Imhof, R. G. Johnson, F. J. Vaughn and M. Walt,
Phys. Rev. 114 (1959), 1037-1039
doi:10.1103/PhysRev.114.1037
Gorres:1989zz
J. Gorres, M. Wiescher, S. Graff, R. B. Vogelaar, B. W. Filippone, C. A. Barnes, S. E. Kellogg, T. R. Wang and B. A. Brown,
Phys. Rev. C 39 (1989), 8-13
doi:10.1103/PhysRevC.39.8
Firestone:2016gos
R. B. Firestone and Z. Revay,
Phys. Rev. C 93 (2016) no.5, 054306
doi:10.1103/PhysRevC.93.054306
Koltypin:1956xyz
E. A. Koltypin and V. M. Morozov, Dokl. 1 (1956) 65
Lynn:1991zz
J. E. Lynn, E. T. Jurney and S. Raman,
Phys. Rev. C 44 (1991), 764-773
doi:10.1103/PhysRevC.44.764
Heil:1998xyz
M. Heil, F. Käppler, M. Wiescher, A. Mengoni, Astrophys. J. 507,
1002 (1998)
Blackmon:1996zz
J. C. Blackmon, A. E. Champagne, J. K. Dickens, J. A. Harvey, M. A. Hofstee, S. Kopecky, D. C. Larson, D. C. Powell, S. Raman and M. S. Smith,
Phys. Rev. C 54 (1996), 383-388
doi:10.1103/PhysRevC.54.383
Buompane:2022qbt
R. Buompane, A. Di Leva, L. Gialanella, A. D'Onofrio, M. De Cesare, J. G. Duarte, Z. Fülöp, L. R. Gasques, G. Gyürky and L. Morales-Gallegos, et al.
Phys. Lett. B 824 (2022), 136819
doi:10.1016/j.physletb.2021.136819
Junghans:2010zz
A. R. Junghans, K. A. Snover, E. C. Mohrmann, E. G. Adelberger and L. Buchmann,
Phys. Rev. C 81 (2010), 012801
doi:10.1103/PhysRevC.81.012801
Junghans:2003bd
A. R. Junghans, E. C. Mohrmann, K. A. Snover, T. D. Steiger, E. G. Adelberger, J. M. Casandjian, H. E. Swanson, L. Buchmann, S. H. Park and A. Zyuzin, et al.
Phys. Rev. C 68 (2003), 065803
doi:10.1103/PhysRevC.68.065803
[arXiv:nucl-ex/0308003 [nucl-ex]].
Junghans:2001ee
A. R. Junghans, E. C. Mohrmann, K. A. Snover, T. D. Steiger, E. G. Adelberger, J. M. Casandjian, H. E. Swanson, L. Buchmann, S. H. Park and A. Zyuzin,
Phys. Rev. Lett. 88 (2002), 041101
doi:10.1103/PhysRevLett.88.041101
[arXiv:nucl-ex/0111014 [nucl-ex]].
Junghans:2004spv
A. R. Junghans, E. C. Mohrmann, K. A. Snover, T. D. Steiger, E. G. Adelberger, J. M. Casandjian, H. E. Swanson, L. R. Buchmann, A. M. Laird and S. Park, et al.
Nucl. Phys. A 746 (2004), 210-214
doi:10.1016/j.nuclphysa.2004.09.035
ISOLDE:2002bco
L. T. Baby et al. [ISOLDE],
Phys. Rev. C 67 (2003), 065805
[erratum: Phys. Rev. C 69 (2004), 019902]
doi:10.1103/PhysRevC.69.019902
[arXiv:nucl-ex/0212011 [nucl-ex]].
ISOLDE:2002qgw
L. T. Baby et al. [ISOLDE],
Phys. Rev. Lett. 90 (2003), 022501
[erratum: Phys. Rev. Lett. 92 (2004), 029901]
doi:10.1103/PhysRevLett.90.022501
[arXiv:nucl-ex/0208005 [nucl-ex]].
ISOLDE:2003dpg
L. T. Baby et al. [ISOLDE],
Nucl. Phys. A 718 (2003), 487-489
doi:10.1016/S0375-9474(03)00865-0
Strieder:2001ozz
F. Strieder, L. Gialanella, G. Gyürky, F. Schümann, R. Bonetti, C. Broggini, L. Campajola, P. Corvisiero, H. Costantini and A. D'Onofrio, et al.
Nucl. Phys. A 696 (2001), 219-230
doi:10.1016/S0375-9474(01)01121-6
Hammache:1997rz
F. Hammache, G. Bogaert, P. Aguer, C. Angulo, S. Barhoumi, L. Brillard, J. F. Chemin, G. Claverie, A. Coc and M. Hussonnois, et al.
Phys. Rev. Lett. 80 (1998), 928-931
doi:10.1103/PhysRevLett.80.928
[arXiv:nucl-ex/9712003 [nucl-ex]].
Filippone:1983tkv
B. W. Filippone, A. J. Elwyn, C. N. Davids and D. D. Koetke,
Phys. Rev. C 28 (1983), 2222-2229
doi:10.1103/PhysRevC.28.2222
Filippone:1983zz
B. W. Filippone, A. J. Elwyn, C. N. Davids and D. D. Koetke,
Phys. Rev. Lett. 50 (1983), 412-416
doi:10.1103/PhysRevLett.50.412
Vaughn:1970gv
F. J. Vaughn, R. A. Chalmers, D. Kohler and L. F. Chase,
Phys. Rev. C 2 (1970), 1657-1665
doi:10.1103/PhysRevC.2.1657
Kavanach:1960xyz
R. W. Kavanagh,
Nucl. Phys. 15 (1960), 411-420
https://doi.org/10.1016/0029-5582(60)90322-9.
Parker:1966zz
P. D. Parker,
Phys. Rev. 150 (1966), 851-856
doi:10.1103/PhysRev.150.851
Bystritsky:2017gtz
V. M. Bystritsky, G. N. Dudkin, E. G. Emets, M. Filipowicz, A. R. Krylov, B. A. Nechaev, A. Nurkin, V. N. Padalko, A. V. Philippov and A. B. Sadovsky,
Phys. Part. Nucl. Lett. 14 (2017) no.4, 560-570
doi:10.1134/S1547477117040057
Griffith:1987xyz
G. M. Griffith, R. A. Morrow, P. J. Riley, J. B. Warren,
Can. J. Phys. 39 (1961), 1397
Burzynski:1987bic
S. Burzyński, K. Czerski, A. Marcinkowski and P. Zupranski,
Nucl. Phys. A 473 (1987), 179-188
doi:10.1016/0375-9474(87)90160-6
Schroder:1987opz
U. Schröder, A. Redder, C. Rolfs, R. E. Azuma, L. Buchmann, C. Campbell, J. D. King and T. R. Donoghue,
Phys. Lett. B 192 (1987), 55-58
doi:10.1016/0370-2693(87)91141-5
Utsunomiya:1988yqx
H. Utsunomiya, R. P. Schmitt, Y. W. Lui, D. R. Haenni, H. Dejbakhsh, L. Cooke, P. Heimberg, A. Ray, T. Tamura and T. Udagawa,
Phys. Lett. B 211 (1988), 24-28
doi:10.1016/0370-2693(88)90800-3
Utsunomiya:1990trd
H. Utsunomiya, Y. W. Lui, L. Cooke, H. Dejbakhsh, D. R. Haenni, P. Heimberg, A. Ray, B. K. Srivastava, R. P. Schmitt and T. Udagawa,
Nucl. Phys. A 511 (1990), 379-406
doi:10.1016/0375-9474(90)90165-I
Utsunomiya:1990zz
H. Utsunomiya, Y. W. Lui, D. R. Haenni, H. Dejbakhsh, L. Cooke, B. K. Srivastava, W. Turmel, D. O'Kelly, R. P. Schmitt and D. Shapira, et al.
Phys. Rev. Lett. 65 (1990), 847-850
doi:10.1103/PhysRevLett.65.847
Utsunomiya:1992xyz
H. Utsunomiya, Y.-W. Lui, S. R. Haenni, H. Dejbakhsh, L. Cooke, B. K. Srivastava,
W. Turmel, D. O'Kelly, R. P. Schmitt, D. Shapira, J. Gomez del Campo, A. Ray, T. Udagawa,
Phys. Rev. Lett. 69 (1992), 863-863(E)
doi:10.1103/PhysRevLett.69.863.2
Tokimoto:2001rd
Y. Tokimoto, H. Utsunomiya, T. Yamagata, M. Ohta, Y. W. Lui, R. P. Schmitt, S. Typel, Y. Aoki, K. Ieki and K. Katori,
Phys. Rev. C 63 (2001), 035801
doi:10.1103/PhysRevC.63.035801
Brune:1994zz
C. R. Brune, R. W. Kavanagh and C. Rolfs,
Phys. Rev. C 50 (1994), 2205-2218
doi:10.1103/PhysRevC.50.2205
Xu:2012uw
Y. Xu, S. Goriely, A. Jorissen, G. Chen and M. Arnould,
Astron. Astrophys. 549, A106 (2013).
Kontos:2013qoa
A. Kontos, E. Uberseder, R. deBoer, J. Görres, C. Akers, A. Best, M. Couder and M. Wiescher,
Phys. Rev. C 87 (2013) no.6, 065804
doi:10.1103/PhysRevC.87.065804
NaraSingh:2004vj
B. S. Nara Singh, M. Hass, Y. Nir-El and G. Haquin,
Phys. Rev. Lett. 93 (2004), 262503
doi:10.1103/PhysRevLett.93.262503
[arXiv:nucl-ex/0407017 [nucl-ex]].
Gyurky:2007qq
G. Gyurky, F. Confortola, H. Costantini, A. Formicola, D. Bemmerer, R. Bonetti, C. Broggini, P. Corvisiero, Z. Elekes and Z. Fulop, et al.
Phys. Rev. C 75 (2007), 035805
doi:10.1103/PhysRevC.75.035805
[arXiv:nucl-ex/0702003 [nucl-ex]].
LUNA:2007ffz
F. Confortola et al. [LUNA],
Phys. Rev. C 75 (2007), 065803
doi:10.1103/PhysRevC.75.065803
[arXiv:0705.2151 [nucl-ex]].
Brown:2007sj
T. A. D. Brown, C. Bordeanu, K. A. Snover, D. W. Storm, D. Melconian, A. L. Sallaska, S. K. L. Sjue and S. Triambak,
Phys. Rev. C 76 (2007), 055801
doi:10.1103/PhysRevC.76.055801
[arXiv:0710.1279 [nucl-ex]].
DiLeva:2009zz
A. Di Leva, L. Gialanella, R. Kunz, D. Rogalla, D. Schurmann, F. Strieder, M. De Cesare, N. De Cesare, A. D'Onofrio and Z. Fulop, et al.
Phys. Rev. Lett. 102 (2009), 232502
[erratum: Phys. Rev. Lett. 103 (2009), 159903]
doi:10.1103/PhysRevLett.102.232502
Carmona-Gallardo:2012apk
M. Carmona-Gallardo, B. S. Nara Singh, M. J. G. Borge, J. A. Briz, M. Cubero, B. R. Fulton, H. Fynbo, N. Gordillo, M. Hass and G. Haquin, et al.
Phys. Rev. C 86 (2012), 032801
doi:10.1103/PhysRevC.86.032801
Elhatisari:2021eyg
S. Elhatisari, T. A. Lähde, D. Lee, U.-G. Meißner and T. Vonk,
JHEP 02, 001 (2022).
10.1007/JHEP02(2022)001
[arXiv:2112.09409 [hep-th]].
|
http://arxiv.org/abs/2405.10043v1 | 20240516122539 | Crash Landing onto "you": Untethered Soft Aerial Robots for Safe Environmental Interaction, Sensing, and Perching | [
"Pham Huy Nguyen"
] | cs.RO | [
"cs.RO"
] |
Crash Landing onto “you”: Untethered Soft Aerial Robots for Safe Environmental Interaction, Sensing, and Perching
Pham Huy Nguyen
Empa (Swiss Federal Laboratories for Materials Science and Technology) and Imperial College London
Email: huy.pham@empa.ch
May 20, 2024
================================================================================================================================================
There are various desired capabilities for creating aerial forest-traversing robots capable of monitoring both biological and abiotic data. These capabilities range from multi-functionality to robustness and adaptability. These robots have to weather turbulent winds and various obstacles such as forest flora and wildlife, which amplifies the complexity of operating in such uncertain environments. The key to successful data collection is the flexibility to intermittently move from tree to tree, in order to perch at vantage locations for extended periods. This effort to perch not only reduces the disturbance caused by multi-rotor systems during data collection, but also allows the system to rest and recharge for longer outdoor missions. Current systems feature the addition of perching modules that increase the aerial robots' weight and reduce the drone's overall endurance. Thus in our work, the key questions currently studied are: “How do we develop a single robot capable of metamorphosing its body for multi-modal flight and dynamic perching?”, “How do we detect and land on perchable objects robustly and dynamically?”, and “What spatial-temporal data is important for us to collect?”
Spatial-temporal maps play a crucial role for researchers seeking to comprehend patterns and trends of regional biodiversity and abiotic environmental variables <cit.>. They inform researchers about climate patterns, environmental conditions, and the local populations of animals and plants over space and time. In order to collect data for these maps, there is a need for sensors that monitor both a region's inhabitants and habitat conditions simultaneously. This progress will enable researchers to develop more sophisticated conservation and climate strategies, enhancing efforts to protect biodiversity in these delicate habitats. However, creating these maps of the understory layer of forests has been an extremely difficult task, with multiple sensors needing to be placed and collected manually at various locations <cit.>. The introduction of robots for sensor placement has potential, but the technology is still at its nascent stage <cit.>.
We have seen numerous bio-inspired principles as the bedrock of innovative solutions for aerial perching robots <cit.>. Similar to grasping and gripping principles, these have ranged from fibre-based dry adhesives like gecko-inspired solutions, to deployable microspines, tensile perching like spiders, and avian-inspired grippers <cit.>. For forest environments, dry adhesives often struggle because of the roughness of the tree branches, and deployable microspines are often paired with another grasping mechanism to enhance gripping quality. Tensile perching and avian-inspired grippers, on the other hand, require additional modules plugged onto a commercial unmanned aerial vehicle, thus increasing the overall weight of the robot, reducing the possible payload it could carry, and therefore compromising mission endurance.
As a result, current roboticists have started to explore shape-changing, reconfigurable, and metamorphic hardware that is very lightweight and/or emphasizes shared functionality to adapt to the locomotion requirements in different environments, as seen in Fig. <ref>, thus aiming to reduce the additional weight and components required for each mode of mobility. Furthermore, the addition of embodied compliance in aerial robot bodies will improve their adaptability to the environmental uncertainties and physical disturbances in these forest environments.
With the research objective centralized around the development of a metamorphic robot body and its interaction with different environments, one exciting strategy for developing aerial robots is through materials-based approaches seen in soft robotics <cit.>. Soft robotic methodologies open pathways to develop aerial perching robots that are imbued with soft robotic components, such as soft structures, sensing, and actuation <cit.>. The goal would not be to create a completely soft-bodied aerial robot, because classically soft systems still suffer from slow actuation speeds, but to use a combination of flexible, rigid and soft materials with encoded mechanical shape-changing and sensing capabilities instead. I believe this design methodology will lead to aerial robot designs that can manipulate and sense their shape for shared functionality, but are still safe and robust enough to interact with the environment and transition between flight and perching.
§ PAST RESEARCH EXPERIENCE
Previously, through the study and characterization of intrinsically soft materials, with a focus on hyperelastic elastomers and multi-layered fabric materials, my team and I were able to create a computational model library that captured both their anisotropic and isotropic behavior <cit.>. This library was then utilized for optimizing the actuator designs for payload capacity and motion capabilities. These models were extended to investigate the capabilities of soft wearable assistive/rehabilitative devices, graspers, and continuum arms, that were also manufactured and tested. Further, we also embedded distributed sensors made of conductive rubber stretch sensors and fabrics to track the movement of soft continuum units that could perform multi-axis bending, helical twisting, and contraction <cit.>. With proprioceptive and motion capture sensors, we also proposed control models to improve the system tracking performance of these units <cit.>.
With the goals centered around the development of physically intelligent aerial robot platforms, our team has worked on aerial robots that are highly adaptable and physically robust to collisions with the environment <cit.>. We developed an aerial robot with a fully soft body made of inflatable fabrics co-developed with a fabric-based bistable grasper <cit.>. This work emphasized the potential of utilizing a variable stiffness soft body for mitigating impacts due to collisions, landing, and dynamic vertical perching. The conclusion of this work was the improved robustness of a soft-bodied frame in comparison to conventional frames and the frame’s ability to absorb impact when dynamically perching to extend contact time with the grasping object, for highly successful aerial grasping.
Further, our collaborative efforts have seen aerial-aquatic drones equipped with a soft remora-inspired suction cup, capable of perching on uneven structures underwater and on land, thus extending the importance of perching for aerial robots to multiple media <cit.>.
More recently, we explored the metamorphic design methodology to establish shared functionality in the aerial robot arms, utilizing the same robot hardware <cit.>. In essence, our aerial robot arms were capable of switching between flight and grasping for perching, without the need of an external gripper. Because of its origami-based design, the robot was able to wrap around perching structures of different sizes. This shared utility of the arms was able to reduce the overall weight of the system, which contributed to operational endurance as well. These soft robotic design methodologies showed enhanced flexibility in terms of aerial metamorphic robot development, along with capabilities to interact well with the environment with added robustness.
§ FUTURE WORK
Recently, different versions of morphing arm aerial robots (MAARs) have been developed, showcasing their ability in enabling adjustable flight dynamics and facilitating slow descent vertical perching <cit.>. Although these works establish an encouraging benchmark, I aim to extend the capabilities of these systems towards successful field missions by:
* Developing MAAR systems that fully explore the complexities of high-speed, high-impact dynamic perching from various angles on more complex objects.
* Tackling the complexities of graspable branch detection utilizing onboard vision-based perception methods.
* Equipping robot with sensors that are capable of collecting environmental data (such as acoustic or eDNA sensors) that can actually support ecologists and perform field missions alongside them.
Dynamic Perching: To enable high-impact dynamic perching, we would need the next version of our robot to absorb impact even better. This means taking a page from nature and enhancing our robot arms with elastic underactuated tendons <cit.>. This would aid in absorbing impact energy at the joints, during the free-fall dynamic perching phase, while still providing softness to adapt to the perching object and surface. A similar development process will be utilized to make the body structure of the drone for impact absorption without hindering perching.
Performing autonomous dynamic perching would require the aerial robot to execute a post-stall maneuver, adjusting the flight speed, flight trajectory, perching mechanism, and motor disarming time. This needs to be a harmonious process between perception, planning and control. The same principles are relevant to dynamic takeoff scenarios as well, where the arms would need to swiftly transition back to their rigid configuration as the motor and propeller pairs turn on.
Vision-based Branch Detection: Detecting perchable tree branches with an onboard sensor is a considerable challenge due to the randomness of their orientation, size, shape and overlapping nature, and the occlusive textures surrounding them, such as leaves <cit.>. Relying on a vision-based sensor and the limited processing power of an onboard computer complicates the problem further. Past work has seen convolutional neural networks identify and segment trees, branches, twigs, and leaves, but these datasets remain limited <cit.>. The best approach for our team would instead be a synthetic dataset generation pipeline, to ease the difficulties of data collection and labeling as well as to enhance the amount of feature variability possible.
Environmental Sensing Network:
The eventual goal of creating a wireless network of robots monitoring a forest area can be pursued with a single robot intermittently moving from location to location and eventually scaled toward multiple robots. Onboard each robot, we aim to have a camera, eDNA, and acoustic sensors to track biodiversity and possibly poachers, as well as wide-spectrum sensors for measuring gas levels, temperature, humidity, air pressure, CO_2, etc.
|
http://arxiv.org/abs/2405.09609v1 | 20240515180000 | Open star clusters and their asymmetrical tidal tails | [
"Pavel Kroupa",
"Jan Pflamm-Altenburg",
"Sergij Mazurenko",
"Wenjie Wu",
"Ingo Thies",
"Vikrant Jadhav",
"Tereza Jerabkova"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Pavel Kroupa (pkroupa@uni-bonn.de): Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany; and Astronomical Institute, Charles University, V Holesovickach 2, 18000 Praha, Czech Republic
Jan Pflamm-Altenburg: Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany
Sergij Mazurenko: Universität Bonn, Regina-Pacis-Weg 3, 53113 Bonn, Germany
Wenjie Wu: Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany
Ingo Thies: Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany
Vikrant Jadhav: Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany
Tereza Jerabkova: European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching, Germany
Stars that evaporate from their star cluster by the energy
equipartition process end up either in a leading or a trailing tidal
tail. In Newtonian gravitation and for open star clusters in the
Solar vicinity, the tidal threshold, or práh, for escape is
symmetrical, such that the leading and trailing tails are equally
populated. The data by six independent teams that applied the
convergent point method to map out the tidal tails of four open
clusters (the Hyades, the Praesepe, Coma Berenices and COIN-Gaia13)
using Gaia DR2 and DR3 are here applied to test for the expected
symmetry. All tidal tails contain more stars in the leading
tail. The combined confidence amounts to an 8 σ
falsification of the práh symmetry. The same test using Milgromian
dynamics leads to consistency with the data. More effort needs to be
exerted on this matter, but the data indicate with high confidence
that the tidal práh of an open star cluster is asymmetrical with
the corresponding confidence that Newtonian gravitation is
falsified. Open star clusters depopulate more rapidly in Milgromian
than in Newtonian dynamics and the COIN-Gaia13 cluster is here found
to be nearly completely dissolved. In view of these results,
the wide-binary star test and the Keplerian Galactic rotation
curve finding are briefly discussed.
§ INTRODUCTION
The stars in an open cluster with initial mass M_ oc, 0 orbit
chaotically within it with the many weak gravitational encounters
leading to an on-going redistribution of kinetic energy amongst them
as the cluster evolves towards energy equipartition which cannot be
reached. As a consequence of this two-body-relaxational process there
is a near-constant rate of loss of stars across the tidal threshold,
or práh (a term we adopt for the tidal threshold in
connection with Milgromian dynamics following <cit.>,
where an explanation of its meaning, related to the foundation of
Prague “on the threshold to a mystical world”, is provided),
Ṁ_ oc∝ M_ oc, 0 (eq. 12 in <cit.>).
By virtue of the stars leaking out of their cluster having nearly the
same velocity as the cluster, they are on very similar Galactocentric
orbits thus either drifting ahead or behind the cluster.
<cit.>, followed by <cit.>, were the first to show how the observed tails of
some star clusters in the Galaxy take their shape and how and why the
(observed) under- and over-densities are theoretically found
and explained. Their work was later confirmed and, also, interpreted
in terms of epicyclic orbits <cit.>. Tidal
tails grow constantly and uniformly in length for open star clusters
on almost circular Galactocentric orbits (fig. 7 in
<cit.>). Owing to the linearity of Newtonian
gravitation and near the Solar circle and at larger Galactocentric
distances, for such star clusters the leading and trailing tails
contain, within Poissonian fluctuations, the same number of stars. The
symmetry of the tail populations has been quantified by
<cit.>, who show the evaporation process to be stochastic
and describable as a Bernoulli process. This allows quantifying the
number of stars in the leading, n_ l, and in the trailing tail,
n_ t, and also how likely a certain degree of asymmetry is,
given a total number of detected tail stars, n=n_ l+n_ t.
The notion to test whether Newtonian gravitation is valid on the
scales of open star clusters by using n_ l and n_ t was
introduced in <cit.> based on the new compact convergent
point (CCP) method developed by <cit.> to map out the
full extent of cluster tidal tails. These latter authors quantified for
the first time the full length of the tidal tails of the Hyades,
Praesepe and Coma Berenices, and later also of NGC752
<cit.>. By applying Milgromian and Newtonian gravitational
models to these data, <cit.> showed that while the
full tails of the Praesepe, Coma Berenices and NGC752 are
consistent with Newtonian symmetry, the data are also consistent with
Milgromian gravitation. The full-length tails of the Hyades cluster,
however, are inconsistent with the Newtonian symmetry at more than
5 σ confidence (see also <cit.>), while being
consistent with Milgromian gravitation. The Galactic bar does
not influence the evolution of star clusters orbiting at
Galactocentric distances larger than ≈ 4kpc
<cit.>. <cit.> subsequently showed
that the Milky Way's non-axisymmetric bar potential cannot lead to the
observed asymmetry. The significant Hyades tail asymmetry thus
supports the possibility that the práh is Milgrom-asymmetric,
namely, that more stars escape per unit time on the Galaxy-near
side of the open cluster than on the far side.
If this were to be affirmed with additional data then we would be
forced to discard Newtonian gravitation with corresponding
implications for all of galactic, extragalactic and cosmological
research. The tidal tail data clearly need significant improvement to
either confirm or discard the Milgrom-asymmetric práh. The aim here
is to test data on tidal tails of open star clusters that have been
obtained with the established convergent point (CP) method and by
research teams operating independently of the Jerabkova et al. and
Kroupa et al. efforts and prior to the publication of
<cit.>. These latter authors made the observation
(their sec. 2.2) that the tidal-tail data extracted for the Hyades,
Praesepe, COIN-Gaia13 and Coma Berenices by six different teams
using the standard CP method appear to show an asymmetry, with the
leading tail having more stars than the trailing tail. This
asymmetry based on data extracted using the CP method was not
quantified though, and instead the new tidal tail data extracted
using the new CCP method were used. Here we return to the previous
CP-based observations that were obtained prior to the invention of
the CCP method. Thus, while in <cit.> extended tidal
tail data in the distance range 50 to 200 pc ahead and behind the
three clusters Hyades, Praesepe and Coma Berenices and in the
distance range 50 to 130 pc ahead and behind the cluster NGC 752
were used, here we restrict the distance range from 10 pc to an
upper distance value for which data is reported by the six different
teams that used the CP method. The CP method is well known and finds
candidate ex-cluster members that co-move with the cluster and that
are still in the vicinity of the cluster (typically to within about
150 pc). The more involved new CCP method instead can identify more
distant candidate ex-cluster-members. But it relies on a model of
the tidal tail because the CP method breaks down when the linearity
assumption is violated since the stars increasingly deviate from the
cluster centre-of-mass velocity: as they drift further from the
cluster they are accelerated in the Galactic potential. Here we
ignore the cluster NGC 752 used in <cit.> because this
cluster does not have tidal tail information that was published
prior to the invention of the CCP method. The Milgromian models
reported in <cit.> were consistent with the tidal tail
data of the three clusters Hyades, Praesepe and Coma Berenices, but
the extended tidal tail data of the Hyades were asymmetric with more
than 5 σ confidence. The tidal tails of the two other
clusters did not indicate a strong asymmetry. Here we use the same
three clusters but rely on the less-extended tidal tail data
extracted prior to the invention of the CCP method using the more
robust CP method and we add the cluster COIN-Gaia13 for which such
data also exist. These data on the tidal tails closer to the open
clusters are more sensitive to the potential of the cluster plus
Galaxy combination since they assess the more recent escape of stars
through the práh.
The here-used data and the analysis as to their possible tidal práh
asymmetry are introduced in Sec. <ref> and in
Sec. <ref>, respectively. The Milgromian and Newtonian
models are compared with these data in Sec. <ref> and the
results are presented in Sec. <ref>. Sec. <ref>
contains the conclusions with a brief discussion of recent advances
concerning the validity of Milgromian and Newtonian dynamics.
§ THE CLUSTER DATA
The astrometric quality of the Gaia data release 2 (DR2) allows stars
to be extracted from the surroundings of four nearby open
clusters that most likely originated from the respective clusters. The
standard CP method (see <cit.> and references
therein) was applied by six different teams on the Hyades
<cit.>, the Praesepe <cit.>,
COIN-Gaia13 (<cit.>, using Gaia EDR3) and Coma Berenices
<cit.> open clusters. This method picks up
likely cluster ex-members based on their very similar space motion to
that of the cluster, and can therefore only map out the ex-members in
the vicinity of the cluster to a maximal distance d_ max beyond
which the CP approach breaks down. With the new CCP method,
<cit.> introduce a phase-space transformation allowing
also the ex-members in the full-length tidal tails to be found. As
reasoned in Sec. <ref>, here we resort only to the previous
work based on the CP method, as we aim to be as conservative as
possible in testing gravitational theory on the cluster práh.
The position and velocity data of the clusters are from the respective
publications, as collated in Table <ref>. The
Galactocentric Cartesian coordinate system X, Y, Z has X including
the Galactic Centre and pointing from the anchor-point towards the
Galactic centre, the explicit definition depending on various authors'
usage with the X-anchor-point sometimes being the Sun, the position
of the cluster or local standard of rest (see also
Sec. <ref>). Galactic rotation is in the positive Y
direction and the Galactic North pole in the positive Z direction.
Table: Data of the present-day star clusters and tidal tails.

Cluster | T (Myr) | T̅ (Myr) | b (pc) | X, Y, Z (pc) | V_X, V_Y, V_Z (pc/Myr) | n_l | n_t | d_max (pc)
Hyades <cit.> | 580–720 | 650 | 3.1 | -8344.44, 0.06, 10.22 | -32.01, 212.37, 6.13 | 234 | 184 | 175
Hyades <cit.> | “ | “ | “ | “ | “ | 40 | 38 | 120
Praesepe <cit.> | 708–832 | 770 | 3.7 | -8441.57, -68.90, 127.03 | -32.56, 216.53, -2.74 | 214 | 124 | 210
COIN-Gaia13 <cit.> | 150–350 | 250 | 3.4 | -8621.0, 109.3, 65.3 | 28.86, 243.83, -3.82 | 222 | 77 | 210
Coma Ber <cit.> | 700–800 | 750 | 2.7 | -8306.71, -5.91, 112.44 | 9.12, 236.90, 6.43 | 44 | 21 | 35
Coma Ber <cit.> | “ | “ | “ | “ | “ | 8 | 5 | 35

Notes: Except for COIN-Gaia13, the ages (T) and the
Galactocentric positions and velocities stem from table 1 in
<cit.>. T̅ is the mean of the given ages. The
Plummer parameters b are from <cit.> for the
Hyades, Praesepe and Coma Berenices; the derivation of the Plummer
parameter of COIN-Gaia13 is described in Sect. <ref>
(the corresponding half-mass radii being
r_ h≈ 1.305 b, e.g. <cit.>). The
numbers of reported candidate stars in the leading and trailing
tail of each cluster, n_ l and n_ t, respectively (see
Fig. <ref>), stem from the respective authors
applying the standard CP method. The maximum extent of the tidal
tail as obtained by these authors is d_ max.
The cluster COIN-Gaia13 needs special attention: The data are obtained
from <cit.> who provide a total number of stars of 478 and a
total mass of 439 M_⊙. Based on the local Oort constants they
calculated a tidal radius of 11 pc. However, the stellar sample is
spread up to a distance of 200 pc from the cluster centre. The
majority of the stellar mass used for the calculation of the tidal
radius lies outside of the central 11pc radius. In order to
derive the proper star cluster (tidal) mass and the corresponding half
mass radius we extract the data (using WebPlotDigitizer,
<https://automeris.io>) from fig. 9 (left) in
<cit.>, which shows the relative cumulative mass fraction of the
stellar sample with a mass less than and
above 1 M_⊙. Fig. <ref> shows the
relative cumulative radial mass distribution, with r being the
three-dimensional distance from the cluster centre, which is here
interpolated by
M(≤ r)/M_i,tot =
{[ ρ_1 r ; r< r_1 ,; ρ_1 r_1 + ρ_2(r-r_1) ; r_1 ≤ r < r_2
,; ρ_1r_1+ρ_2(r_2-r_1) +; (1-ρ_1r_1+ρ_2(r_2-r_1)) ×; (1-e^-ρ_3(r-r_2)) ; r_2 ≤ r ,; ].
with parameters r_1 = 45 pc, ρ_1=0.012 pc^-1,
r_2 = 120 pc, ρ_2=0.0043 pc^-1 and
ρ_3=0.035 pc^-1, and M_ i, tot being the total mass
of the n stars.
The required internal tidal mass, M_t,
of a star cluster as a function of the tidal radius, r_t,
in a disk galaxy with differential rotation is given by <cit.>
M_t(r_ t) = 2(A-B)^2/G r_t^3 ,
where A ad B are the local Oort's costants. Here we use
A=14.5 km s^-1 kpc^-1 and
B=-13.0 km s^-1 kpc^-1 from <cit.>. The
intersection of this radial tidal mass function, Eq. <ref>,
and the radial cumulative mass distribution
(Fig. <ref>) determines a tidal mass of
M_ t=M_ oc=29 M_⊙ and the tidal radius to be
r_ t≈ 5.6 pc (Fig. <ref>,
see e.g. for the method). It follows that
COIN-Gaia13 has a half mass radius of
r_h≈ 4.4 pc and a Plummer parameter
b≈ 3.4 pc. These values supersede those reported by
<cit.>.
Because the open clusters are on near-circular orbits about the
Galactic centre with relatively small excursions above and below the
mid-plane, we analyse the star-counts by projecting the data onto the
X-Y plane, as shown in Fig. <ref>. For each of the six
measurements, a line passing through the cluster centre but at right
angle to the respective cluster's velocity vector in the X-Y plane
is constructed and stars that are 10pc ahead and behind this line
in the X-Y plane are counted as being in the leading and trailing
tails, respectively. We use a nominal distance of 10 pc ahead
and behind the cluster because this is observationally pragmatic and
is similar to the tidal radius,
r_ tid≈(0.5 M_ oc/M_
gal)^1/3 D_⊙≈ 11pc for an open cluster
weighing M_ oc=300 M_⊙ at the distance of
D_⊙=8300pc from the Galactic centre in a logarithmic
potential corresponding to a Galactic mass of
M_ gal=0.6× 10^11 M_⊙ within D_⊙. The
masses of the clusters in Table <ref> are (table 1 in
) M_ oc=275 M_⊙ (Hyades),
311 M_⊙ (Praesepe) and 112 M_⊙ (Coma Ber), while
COIN-Gaia13 is derived here to weigh M_ oc=29 M_⊙. The
CP method cannot assess ex-clusters stars beyond about d_ max
ahead or behind the clusters (for which the
CCP method is needed) and the star counts (n_ l, n_ t,
respectively) are thus for the distance range between 10pc
and d_ max (as listed in Table <ref>) ahead and
behind the cluster. This distance rage will be implemented in the
models in Sec. <ref>. From the data in
Table <ref> we note that each of the six measurements has
n_ l>n_ t.
As stars drift away from the open clusters, under- and over- densities
along the star cluster tails form kinematically
<cit.>. The existence and location of these
under- and over- densities (see figs. 11 and 12 in
) have been, later, also interpreted in
terms of epicyclic motions relative to the centre of mass of the
cluster <cit.>. Overdensities of stars thus
form at regular spacings ahead and behind the cluster, with
<cit.> for the first time reporting evidence for their
existence for an open star cluster. In Newtonian gravitation, the
first epicyclic overdensities have a distance from the centre of mass
position of the open cluster of
Δ/ pc≈± 350 (M_
oc/16400 M_⊙)^1/3 (fig. 6 in )
such that Δ≈ 64pc for M_ oc=100 M_⊙ and
Δ≈ 51pc for M_ oc=50 M_⊙, and are thus
within the range 10pc–d_, max. In Newtonian gravitation,
the overdensities and gaps are spaced symmetrically at equal distances
from the cluster ahead and behind it <cit.>, but in Milgromian gravitation
the leading overdensity is at a larger distance from the cluster than
the trailing one <cit.>. This indicates that the escape
speed is lower towards the leading tail such that stars escape
slightly faster, and thus the relative spacing of the epicyclic
overdensities is a sensitive measure of gravitational theory. For the
purpose here, the number of stars in the leading and trailing tails,
n_ l, n_ t, respectively, thus contains this
information. If Newtonian gravitation were to be correct, then
n_ l≈ n_ t also because the leading and trailing
Küpper epicyclic overdensities are symmetrically spaced relative to
the cluster's centre of mass position.
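The Newtonian scaling of the first Küpper epicyclic overdensity quoted above is likewise a one-line calculation; this Python sketch (illustrative only) reproduces the Δ≈ 64pc and Δ≈ 51pc values.

```python
def kuepper_overdensity_pc(m_oc):
    """Newtonian distance of the first epicyclic overdensity from the
    cluster centre: Delta ~ +/- 350 pc * (M_oc / 16400 M_sun)**(1/3)."""
    return 350.0 * (m_oc / 16400.0) ** (1.0 / 3.0)

print(kuepper_overdensity_pc(100.0))  # ~64 pc
print(kuepper_overdensity_pc(50.0))   # ~51 pc
```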
§ SYMMETRY ANALYSIS
If the four open clusters are orbiting in a smooth potential and are
thus unperturbed then the stochastic Bernoulli calculation by
<cit.> can be applied to assess the likelihood of the
observed n_ l and n_ t values to occur assuming the null
hypothesis that Newtonian gravitation is valid. Following
<cit.>, the normalised asymmetry parameter is defined as
ϵ = (n_ l-n_ t)/(n_ l+n_ t) .
This definition has the advantage that the quantity ϵ is
symmetrical about 0. Under the null hypothesis, the expectation value
is μ_ϵ=0 with standard deviation σ_ϵ =
1/√(n), where n=n_ l+n_ t. For each of the six measurements, the asymmetry
significance σ = |μ_ϵ - ϵ|/σ_ϵ is
calculated and displayed in Fig. <ref>. For example, for the
Hyades <cit.> the asymmetry significance is
σ=|0-0.12|·√(418)≈ 2.45. The data for the Praesepe and for
COIN-Gaia13 constitute significant deviations from the null
hypothesis, which is rejected with, respectively, 4.9 σ and
8.39 σ confidence.
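As an illustration of this Bernoulli significance test, the following Python sketch computes ϵ and σ; the tail counts in the Hyades-like example are hypothetical values chosen only to reproduce ϵ≈ 0.12 with n=418.

```python
import math

def asymmetry_significance(n_l, n_t):
    """Normalised asymmetry eps = (n_l - n_t)/(n_l + n_t).  Under the null
    hypothesis (symmetric Newtonian tails) eps has expectation 0 and
    standard deviation 1/sqrt(n), so the significance is |eps|*sqrt(n)."""
    n = n_l + n_t
    eps = (n_l - n_t) / n
    return eps, abs(eps) * math.sqrt(n)

# Hypothetical tail counts chosen to reproduce the Hyades example in the
# text (eps ~ 0.12, n = 418, significance ~ 2.45 sigma):
eps, sig = asymmetry_significance(234, 184)
print(f"eps = {eps:.2f}, significance = {sig:.2f} sigma")
```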
Is it possible that Newtonian gravitation is valid but that the star
counts and the application of the CP and CCP methods lead to a bias
that creates a number asymmetry of the tidal tails? An interesting
insight is obtained by the different teams reporting, for the same
clusters, rather different numbers of tail members, despite using the
same Gaia data releases (Table <ref>). This indicates
the need to further study the tail-member extraction
algorithms. Nevertheless, while the extracted numbers differ, in all
cases the leading tail has more stars than the trailing
tail. Particularly interesting is that the re-analysis by
<cit.> of the Hyades using the CCP method but allowing the
model cluster to evolve in non-axisymmetric Galactic potentials
extracts different tail candidates than the original study by
<cit.> who used an axisymmetrical potential. But both
extracted tidal tails have comparable tail lengths and the asymmetry
also remains similar. The clusters in Table <ref> are in
different directions as seen from the Sun, and this suggests that
observational bias will not affect the star counts in the same
manner. If such an effect were there, we might expect a more random
result in terms of which half-tail contains more reported
stars. Again, the finding that all measurements of all clusters
available today indicate the same symmetry breaking suggests that an
observational bias leading to this asymmetry is not likely.
Is it possible that Newtonian gravitation is valid but that the tidal
tail asymmetries are a result of the open star clusters being
perturbed? According to the data used here, the Praesepe and
COIN-Gaia13 show a very significant asymmetry
(Fig. <ref>). This could be due to a perturbation. Indeed,
<cit.> studied the possibility that the Hyades was
heavily perturbed by a recent encounter, which would have led to the significant
asymmetry of the number of stars in the leading and trailing full-length
tidal tails. A recent encounter with a massive perturber can produce
an effect comparable to the observed asymmetry of the full-length tidal
tails, but the required mass (≈ 10^7 M_⊙) and proximity
(≈ 120pc) make this unlikely: neither is
a perturbation of the Solar-neighbourhood field population known, nor
are there any correspondingly massive molecular clouds there, as
pointed out by <cit.>. The analysis here based on the
inner tidal tail data as obtained by the CP method shows the Hyades to
be consistent with the null hypothesis while the Praesepe (a
4.9σ deviation) and COIN-Gaia13 (an 8.39σ
deviation) are not. Combining the available information: the extended
tidal tails of the Hyades are in >5σ tension with the null
hypothesis <cit.>, and the inner tidal
tails of the Praesepe are in 4.9σ and of COIN-Gaia13 in
>5σ tension with the null hypothesis. The asymmetry is the
same in all three cases, namely, the leading tail contains
significantly more stars than the trailing tail. The remaining
cluster, Coma Berenices, does not show a highly significant asymmetry
but nevertheless also has n_ l>n_ t at the near-3σ
confidence level. While no perturber with a corresponding mass is
evident to be present, the possibility that all of these clusters
suffered an encounter at the same time leading to the same type of
asymmetry appears to not be physically plausible.
§ MODELS
Sec. <ref> documents that there is strong evidence that
Newtonian gravitation may not be universally valid. In
Sec. <ref> Newtonian models of a Hyades-like cluster are
studied to assess if a realistic orbit of an open cluster which
includes passing through the Galactic mid-plane can lead to an
asymmetry of its tidal tails. In Sec. <ref> Milgromian-
(MOND, <cit.>)
and Newtonian-dynamics models are studied in order to advance our
knowledge on the tidal tail asymmetry of open star clusters. We refer
to <cit.> for a thorough introduction and discussion of
the problem.
§.§ Tidal-tail asymmetry due to Z-excursions?
Three Newtonian simulations (referred to in the following as
models 1, 2 and 3 with different random number seeds for the stellar
masses, position and velocity vectors but otherwise identical
initial parameters) of a Hyades-like star cluster with an initial
total mass of 1300 M_⊙ and an initial Plummer parameter of
b=2.3pc (half-mass radius of 3.0pc) are performed to test if
a realistic Galactocentric orbit that is inclined to the Galactic
plane might lead to periodic tidal tail asymmetries similar to those
observed. The direct N-body code PETAR <cit.> is
applied. It is a newly developed high-end code resting partially on
the developments that lead to the Aarseth-suite of N-body models
such as Nbody6 <cit.>. PETAR allows precise
and accurate star–star force calculations and thus the integration
of stellar orbits by being based on the Barnes-Hut tree method, the
Hermite integrator and invoking slow-down algorithmic regularisation
and thus also caters for high initial binary fractions.
The model clusters are initialised according to the Plummer
phase-space distribution function <cit.>. Modelling
open star clusters with the Plummer phase-space distribution
function is motivated by its simplicity (e.g. <cit.>) and by <cit.> and <cit.> finding it to
match the observed density profiles of the Hyades and the Praesepe,
respectively. The Plummer phase-space distribution function is
also the simplest fully analytical solution of the stationary
collision-less Boltzmann equation (e.g. <cit.>).
The computations assume a canonical stellar IMF <cit.>
but no initial binary population, as these would significantly slow
down the calculations without affecting the tidal tail symmetry. As
Aarseth's Nbody6 and Nbody7, PETAR incorporates stellar evolution
using the updated single-stellar evolution/binary-stellar evolution
(SSE/BSE) algorithms <cit.>. The
Milky Way is modelled as an axisymmetric bulge+disk+dark halo
potential as given by table 1 (MWPotential2014) in <cit.>.
All clusters are initially positioned at coordinates corresponding
to the Hyades, as detailed in Table <ref> and the
integrations of the stellar equations of motion extend up to
700Myr.
The computations are conducted within a coordinate system that
combines the Galactocentric system with a translation to the
cluster's center of mass. To align the model at any snapshot
with a co-rotating frame, a rotational transformation along the
Z-axis of this system is performed. This adjustment ensures that
in the resultant X-Y plane, the cluster consistently moves towards
the positive Y-axis, with its center maintained at the
origin. Tidal tail members are selected as stars in the leading tail
that have a Y-coordinate ranging from +10 to +175pc, and
those from -175 to -10pc as trailing tail stars (as for the
Hyades, Table <ref>). Fig. <ref>
illustrates this approach to selecting tidal tail members at 200Myr.
At the beginning of the calculations, the tidal tails are not yet
formed. Consequently, to analyse the asymmetry, it is necessary to
consider only the snapshots taken after a certain period. This
determination is based on the scatter plots of stars for each model,
akin to those illustrated in Fig. <ref>, but
capturing various time points. Our analysis indicates that the tidal
tails are well-developed by 200Myr. Fig. <ref>
presents the evolution of the normalised asymmetry parameter
ϵ (Eq. <ref>) in conjunction with the
Z-position in the Galactocentric coordinate system. The findings
demonstrate that excursions in the Z-position do not contribute to
a positive ϵ. Further analysis on the significance of the
asymmetry is shown in Fig. <ref>. For models 1 and 3,
the significance consistently remains below σ=2 once the
tidal tails have formed. In contrast, model 2 exhibits its highest
significance, nearly reaching σ=3, at around
200Myr. However, by 700Myr, the asymmetry parameter
ϵ becomes negative. No convincing correlation is evident
between ϵ, σ and Z.
To avoid the stochastic effects stemming from model initialisation,
the average numbers of stars in the leading and trailing tails are
calculated at each snapshot by combining all three
models. Fig. <ref> displays the resulting asymmetry
parameter, ϵ, and its significance, σ. The asymmetry
parameter indicates a positive asymmetry, meaning the leading tail
consistently contains more stars than the trailing tail. However,
the significance of this finding is minimal, remaining below
σ < 1.7. A significant correlation with Z is not apparent
in the model-averaged values of ϵ and σ.
In conclusion, a realistic orbit of an open cluster oscillating
about the mid-plane of the Galactic disk therefore does not lead to
a significant (σ>3) asymmetry of the tidal tails. Thus
Milgromian models are considered next in comparison to Newtonian
models computed with the new MLD N-body code.
§.§ MOND and the MLD code
MOND is a non-relativistic theory that generalises Newtonian
gravitation and is non-linear such that the potential around an open
star cluster near the Solar circle is asymmetrical. In particular,
fig. 3 in <cit.> explains why a Milgromian star cluster
loses more stars per unit time into the leading tail than a Newtonian
star cluster. First of all, in Newtonian gravitation the restoring
force towards the star cluster's centre is equal and opposite on
opposing sides of the cluster's centre such that both tails are fed
equally by evaporating stars. In contrast, in Milgromian dynamics,
the restoring force towards the cluster is reduced by about 15 per
cent on the side facing the Galactic centre relative to the far side
(for the point-mass approximation shown in fig. 3 in <cit.>),
such that more stars can exit there, ending up in the leading tail,
which is fed by stars falling towards
the Galaxy's centre. The fractional reduction of the restoring force
however depends on the mass and extent of the open cluster and on the
particular formulation of MOND used – see below – and needs further
theoretical and empirical exploration. It is also possible that the
escape speeds are very similar but that the práh on the Galactic
near side has a larger extent than the práh at the far side such
that more stars can escape towards the Galaxy. Investigations are
on-going as to the details of stellar escape in Milgromian dynamics.
<cit.> demonstrated that this práh asymmetry leads to
the type of asymmetry detected here, namely that the leading tidal
tail of a star cluster has, most of the time, more stars than the
trailing tail. Those models concentrated on the full extent of the
tidal tails in comparison with data extracted using the CCP method,
and the models were idealised by the clusters being set-up on circular
mid-plane orbits in the Galactic disk. Here realistic orbits are
studied considering tidal tail data extracted using the standard CP
method which means only the tidal tails near to the cluster are
assessed (Sec. <ref>). Discussed next, there are three
different formulations of MOND as a gravitational theory that allow
the integration of the equations of motion of particles in a
force-field generated by the matter distribution: AQUAL <cit.>,
QUMOND <cit.> and MLD <cit.>. AQUAL and QUMOND are
field-formulations relating the true (Milgromian) gravitational field,
Φ, to its source, the baryonic matter distribution, ρ. MLD
is a particle-based formulation and rests on calculating the particle
forces directly.
Very briefly on AQUAL: consider the following quasilinear elliptic
partial differential equation of second order where the left hand side
is the p-Laplace operator with u = Φ_p/a_0 being the generalised
potential with unit of length,
∇⃗·[ |∇⃗ u|^p-2 ∇⃗ u
] = 4π G ρ/a_0. It can be seen that for p=2
the standard Poisson equation is obtained with the matter density
ρ sourcing the (Newtonian) potential Φ_p=2. For p=3,
ρ sources the non-Newtonian potential Φ_p=3. In both cases
the negative gradient, -∇⃗Φ_p, is the acceleration.
The p=3 case corresponds to the deep-MOND limit where the equations
of motion are space-time-scale invariant <cit.>. The full
description includes the transition from the p=2 to the p=3
Laplace operator when
-∇⃗Φ≈ a_0 ≈ 3.9pc/Myr^2 with
Φ=Φ_p=2 when -∇⃗Φ≫ a_0 and
Φ=Φ_p=3 when -∇⃗Φ≪ a_0. It can be
formulated in terms of a Lagrangian <cit.>. The transition of
non-relativistic dynamics away from Newtonian dynamics to Milgromian
dynamics may be due to the quantum vacuum
<cit.>. Relativistic formulations that encompass these
non-relativistic field equations have been developed (see
for a review). In <cit.> the related
Lagrangian-based quasi-linear formulation of MOND (QUMOND) was applied
to do the simulations because it is computationally more
efficient. QUMOND rests on the idea that the Newtonian potential
generated by ρ can be augmented by a phantom dark matter
potential (which does not consist of real dark matter particles) such
that the combination of both potentials yields the total Milgromian
potential. The AQUAL and QUMOND formulations of MOND differ when
-∇⃗Φ_p ≈ a_0 and this regime is also sensitive to
the transition function between the p=2 and the p=3 regimes. By
choosing to use QUMOND and a particular transition function,
<cit.> explored the general effect of MOND on the escape
of stars across the práh of their cluster. However, this approach
becomes untenable to model the low-mass open clusters in
Table <ref>. Numerical simulations of stellar dynamical
systems by use of field theoretical descriptions require a
sufficiently smooth mass density distribution. QUMOND simulations
show that the dynamical evolution of already intermediate mass star
clusters (≈ 5000 M_⊙) is affected by the limitation of
the grid resolution and the graininess of the gravitational potential
<cit.>. Therefore, the self-consistent simulation of
star clusters with smaller masses require direct N-body methods in a
MONDian context. The above non-linear MONDian field equations have not
yet been discretised such that an N-body code has, until now, not
been available to study the dynamical evolution of open star clusters.
In Milgromian Law Dynamics (MLD), and as a first step towards
such a MOND N-body code, Milgrom's law <cit.> is postulated
to be valid in vectorial form <cit.>,
μ(a_ i/a_0) a⃗_ i = g⃗_ i ,
which connects the kinematical acceleration of star i,
a⃗_ i (a_ i=|a⃗_ i|), to its formal
Newtonian gravitational acceleration, g⃗_ i. The
acceleration g⃗_ i of each particle is obtained from the
sum of the Newtonian gravitational forces from all other particles,
just as in standard Newtonian N-body codes (e.g. <cit.>). The acceleration,
a⃗_ G, i(D⃗_ i), acting on particle i from the
full Galactic potential at the particle's location, D⃗_ i,
is added vectorially to g⃗_ i. The transition function has
the property μ⟶ 1 for a_ i≫ a_0 and
μ⟶ a_ i/a_0 for a_ i≪ a_0 such that
Newtonian dynamics is obtained in the former case (e.g. in the
planetary-regime of the Solar system). The standard transition
function, μ(x) = x/(1+x^2)^1/2 <cit.>, is
applied here. The external field effect (EFE), a unique new physical
phenomenon in MOND and non-existent in Newtonian dynamics
(e.g. ), is taken into
account automatically in MLD because the transition function μ is
on the left-hand side of Eq. <ref>. The MLD code is published
by <cit.> where detailed tests are documented and conserved
quantities are derived.
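To make Eq. <ref> concrete, the following Python sketch inverts Milgrom's law for scalar acceleration magnitudes with the standard transition function. This is a simplification of the vectorial MLD force calculation (which also includes the external field effect), intended only to illustrate the Newtonian and deep-MOND limits.

```python
import math

A0 = 3.9  # Milgrom's constant in pc/Myr^2

def mu(x):
    """Standard transition function mu(x) = x / sqrt(1 + x^2)."""
    return x / math.sqrt(1.0 + x * x)

def milgrom_accel(g, a0=A0):
    """Solve mu(a/a0) * a = g for the magnitude a of the kinematical
    acceleration; for the standard mu the inversion is analytic."""
    y = g / a0
    x2 = 0.5 * y * y * (1.0 + math.sqrt(1.0 + 4.0 / (y * y)))
    return a0 * math.sqrt(x2)

print(milgrom_accel(100 * A0) / (100 * A0))  # ~1: Newtonian regime
print(milgrom_accel(0.01 * A0) / A0)         # ~0.1: deep MOND, a ~ sqrt(g*a0)
```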
In the MLD code, the standard Hermite scheme used in direct N-body
codes <cit.> is
implemented to integrate the equations of motion of the stellar
particles using the accelerations and jerks in a predictor-corrector
method. The MOND-accelerations, a⃗_ i, are obtained by
solving Eq. <ref>, and the corresponding jerks are calculated
as the time derivative of the accelerations. In order to avoid the
Newtonisation of the centre of mass of the star cluster in this MOND
formulation and to avoid the handling of rare but
computationally-intensive close encounters, the gravitational N-body
force has been softened <cit.>. Here, a softening
parameter ε=0.1pc is used. This softening does not allow
a realistic assessment of the true evaporation rate which is driven by
the two-body relaxational process. It avoids excessive computational
time but allows the softened particles to self-consistently generate
the Milgromian or Newtonian potentials and thus to map out the
directionality of the relative mass loss from the cluster. In this
sense the MLD code is a first step akin to <cit.> in the
Newtonian case and will be developed further by including
regularisation methods as well as stellar and binary-star evolution
algorithms. The simulations in pure Newtonian dynamics are performed
with the same MLD code with the same softening by setting the
threshold acceleration to a_0=3.9× 10^-20 pc/Myr^2.
To model the open clusters in Table <ref> using the MLD
code, first their initial positions in the Galaxy need to be obtained
by backwards integration to then forward-integrate the initialised
cluster to its presently observed position. Thus the present-day
position of the star cluster centre is calculated backward in time for
a time equal to the mean estimated age T̅ (same procedure as
used in ). At this position a Plummer phase-space
distribution function is set up as in Sec. <ref> with an
initial Plummer parameter b as in Table <ref> and
containing N=2000 particles of equal mass m_
i=0.5 M_⊙. In the case of the simulations of COIN-Gaia13,
b=3.4pc and the particle number is N=500 with a total mass of
439 M_⊙ (Sec. <ref>) in order to reach the dissolved
state of this cluster.
In the next step each particle i in the star cluster is integrated
forward in time using the MLD code. All particles are kept in the
calculation, and a spherical logarithmic Galactic gravitational
potential is used, for which the acceleration of particle i points
towards the Galactic centre with magnitude
|a⃗_ G, i| = V_ c^2/( X_ G, i^2+Y_ G, i^2+Z_ G, i^2 )^1/2. It corresponds to a
flat rotation curve of V_ c=225km/s, with
X_ G, i, Y_ G, i, Z_ G, i being the Galactocentric
Cartesian coordinates of particle i. The calculation proceeds until
the density centre of the star cluster comes closest to the current
position of the observed star cluster. The Solar position in this
coordinate system is
X_ G⊙=-8300pc, Y_ G ⊙=0pc, Z_ G ⊙=27pc.
The position of the density centre of each model is found by
calculating the density-weighted radius based on the innermost 20 per
cent of particles <cit.>. At
this point the model cluster has a very similar position relative to
the Sun and its tidal tails are extracted just as for the observed
clusters in Sec. <ref>. In order to obtain sufficient
statistics of the tail occupation numbers, for each of the observed
star clusters listed in Table <ref> three models with
different initial random number seeds are calculated in MLD and in
Newtonian dynamics.
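For illustration, the acceleration from the spherical logarithmic Galactic potential used in the forward integration can be sketched in Python as follows; the km/s to pc/Myr conversion factor is standard, and the snippet is our illustration, not the MLD code itself.

```python
import numpy as np

KMS_TO_PCMYR = 1.0227          # 1 km/s in pc/Myr
V_C = 225.0 * KMS_TO_PCMYR     # flat rotation speed in pc/Myr

def log_potential_accel(pos):
    """Acceleration in a spherical logarithmic Galactic potential
    Phi = V_c^2 * ln(R): a = -V_c^2 * pos / R^2, i.e. magnitude V_c^2/R
    directed towards the Galactic centre (pos in pc, a in pc/Myr^2)."""
    pos = np.asarray(pos, dtype=float)
    return -V_C**2 * pos / np.dot(pos, pos)

print(log_potential_accel([-8300.0, 0.0, 27.0]))  # near the Solar position
```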
§ RESULTS
§.§ The observed clusters
In Sec. <ref> the asymmetry-significance, i.e. the
probability of obtaining the number of stars in the leading and
trailing tails, was computed for each observational study of the four
open clusters listed in Table <ref>, the results being
documented in Fig. <ref>. In order to assess the
probability whether the combined data of the clusters are consistent
with the null hypothesis (symmetrical tails, i.e. Newtonian dynamics
is valid) the observed cluster data are stacked. Since the escape of
stars under the null hypothesis can be very well described as a
stochastic process <cit.> we can add the leading and
trailing tails in the observed clusters (Fig. <ref>),
n_ l, sum=Σ_ i=1^6 n_ l,i and
n_ t, sum=Σ_ i=1^6 n_ t,i. This combined data
set is extremely significantly discrepant with the null hypothesis
because the available measurements of the tidal tail membership have
significantly more stars in the leading than in the trailing
tail. Based on the observational data, the null hypothesis is
therewith rejected with 8.99 σ confidence
(Fig. <ref>).
§.§ The models
The above asymmetry-significance is also calculated for each of the
models in order to assess if these confirm the rejection: Are the
tidal tails of the Milgromian models as asymmetric as the observed
ones? And do the Newtonian models confirm the expected symmetry
<cit.>?
As described in Sec. <ref>, for each open star cluster in
Table <ref> three models are computed with the MLD code
in each of the gravitational theories in order to improve the
statistics in the model data. The final snapshots, when the
respective model is at the position of the observed cluster, are
stacked and shown in Fig. <ref>–<ref>.
The distribution of particles on the sky of the stacked models are
shown in Fig. <ref> to illustrate the sky-position, size
and extent of the tidal tails of each of the open star clusters in
Table <ref>. It is evident that the Milgromian and
Newtonian models look, at first sight, similar. More subtle
differences can be seen in the case of Coma Ber which is a close-by
old cluster close to disruption (the Praesepe has a similar age and
contains significantly more stars, Table <ref>).
The probabilities that the individual stacked models are consistent
with the null hypothesis are evaluated next. The numbers of particles
and probabilities are shown in Fig. <ref>. It is
already readily apparent, by comparing with the observed clusters
(Fig. <ref>), that the Milgromian models are indeed highly
inconsistent with symmetric tails, the leading tail always containing
significantly more stellar particles than the trailing one. The
Newtonian models, on the other hand, confirm these to be consistent
with symmetrical tidal tails.
In order to assess the overall probability that Milgromian or
Newtonian models are consistent with the null hypothesis, all
Milgromian and Newtonian models are stacked into one respective
representation, as done for the observed clusters
(Fig. <ref>). As shown in
Fig. <ref>, the Milgromian models are consistent
with the observed tails by both being significantly dislodged from the
normalised probability distribution, while the Newtonian ones are well
consistent with this distribution. Thus, the Newtonian models computed
with the MLD code confirm that Newtonian tidal tails are symmetrically
occupied, while the Milgromian computations with this same code
confirm the MOND-asymmetry already noted by <cit.> on the
basis of a QUMOND code. The combined Milgromian models indeed show a
comparable asymmetry significance (8.63 σ,
Fig. <ref>) as the combined observed clusters
(8.99 σ, Fig. <ref>). Interesting
to note is also that the combined Milgromian models have
n_ M=411 particles in the leading and trailing tails, while the
Newtonian models have n_ N=385 such particles. The difference,
n_ M-n_ N, corresponds to a three-sigma effect that
suggests Milgromian open clusters to dissolve more rapidly than
equivalent Newtonian ones as discussed in <cit.>. This may
be one reason why observed open star clusters are found to be
dissolving more quickly than expected from Newtonian N-body models
<cit.>.
The four observed open star clusters that have tidal tail data thus
appear to compellingly indicate Milgromian rather than Newtonian
gravitation to be the valid description of gravitational dynamics on
the scale of a pc.
§ CONCLUSIONS
The tidal tails of open star clusters near to
the Sun allow gravitational theory to be tested. The leading and trailing
tails have, within statistical uncertainties, the same number of
stars if Newtonian gravitation is valid. If Milgromian gravitation
is valid, then the leading tail will have significantly more stars
than the trailing tail. We use the data from six teams that had
extracted tidal tail candidate stars for the four nearby open star
clusters Hyades, Praesepe, COIN-Gaia13 and Coma Berenices using the
standard CP method that allows one to find co-moving ex-cluster member
stars still in the vicinity of an open star cluster. The available
data reject Newtonian symmetry with 8.99 σ confidence, but
are well consistent with Milgromian gravitation.
The Milky Way's bar potential cannot produce this asymmetry
<cit.>, and encounters with massive
structures also cannot simultaneously account for the similar
asymmetries observed in open star clusters that are at different
locations around the Sun. Newtonian simulations performed here show
that a Hyades-like cluster which periodically oscillates through the
Galactic disk over 700 Myr never shows a significant asymmetry as a
consequence of the disk crossings. While star-cluster simulations
in Newtonian gravitation cannot explain this asymmetry, simulations
in Milgromian gravitation naturally produce the observed asymmetry.
Further tidal-tail data are needed for confirmation and additional
Newtonian modelling is required including perturbations of the Milky
Way potential through its spiral arms <cit.> to sharpen
these results.
The Milgromian and Newtonian N-body computations presented here
support Milgromian open clusters to be dissolving more rapidly than
their Newtonian counterparts, and the present analysis finds the open
cluster COIN-Gaia13 to be nearly completely dissolved. Open star
clusters near the Galactocentric distance of the Sun are strongly
subject to the EFE because the external field from the Galaxy is
comparable to a_0 and the internal acceleration of the open clusters
is much smaller than a_0 <cit.>. The results here
suggest MOND rather than Newtonian dynamics to be relevant for
understanding the dynamical evolution of open star clusters. But the
relatively modest available data requires a significant further
effort on obtaining more tidal tail data in conjunction with
computer modelling in order to constrain the correct formulation of
Milgromian dynamics and the transition function. Additional tidal
tail data will become available for open star clusters at larger
distances from the Sun than the currently available clusters. Open
clusters that are ahead of the Sun in terms of Galactic rotation
will allow Gaia data to assess their trailing tails with more
accuracy and precision than the leading tails, while open clusters
behind the Sun will allow a more accurate and precise mapping of
their leading arms than their trailing arms. This leads to a bias
that needs to be catered for in the tail-symmetry analysis.
Placing the above findings into a broader context, is Milgromian
dynamics relevant beyond open star clusters? The Hubble Tension has
been shown to be resolved by galaxies falling in a Milgromian
gravitational field to the sides of a Gpc-sized local void we are
in, a void that is not possible in the standard dark-matter-based
cosmological model but readily forms in a MOND cosmological model
<cit.>.
Galaxy clusters have been posing some tension in that a factor of
two in mass appears to be missing but can be accounted for in MOND
if sterile neutrinos exist (e.g. <cit.> and
references therein) and also if appropriate boundary conditions are
taken into account for these large structures
<cit.>. The prominent Bullet cluster of
galaxies has been used as a default object for the proof of the
existence of dark matter particles. But the fulfilment of
hydrostatic equilibrium required for the mass determination of the
hot gas seems to be problematic <cit.>. Problems of
explaining the Bullet (and the El Gordo) galaxy clusters in the
standard ΛCDM cosmological model of structure formation have
been noted. These galaxy clusters are, however, well understood in a
Milgromian cosmological model <cit.>.
It is already well established that Milgromian gravitation correctly
accounts for the properties of elliptical galaxies <cit.>, and of disk galaxies (e.g. <cit.>; specific example: M33, <cit.>; natural
formation of exponential disk galaxies: <cit.>;
star-formation properties: <cit.>). The availability of
Gaia DR3 has allowed independent teams to assess the rotation curve of
the Galaxy at Galactocentric distances of 19 to
27 kpc. <cit.> and <cit.>
both find it to be decreasing over this distance range by about
30km/s, consistent with a Keplerian decline. While a flat
MONDian rotation curve is rejected with 3 σ confidence by
<cit.>, <cit.> stress that globular clusters and
satellite galaxies lead to significantly higher rotation speeds at
distances from ≈ 15 to ≈ 200kpc. The Keplerian
fall-off between ≈ 19 and ≈ 27kpc is associated with
divergent stellar radial velocity components (fig. 8 in
<cit.>), which compromises simple solutions of the Jeans
equation and suggests a strongly perturbed outer Galactic disk. The
Keplerian fall-off is also associated with a break of the stellar
surface density at about 17kpc with a steep radial decline to
larger Galactocentric distances (fig. 4 in
<cit.>). This feature is very similar to such a break
at 20kpc and similar decline of the disk surface density in
Milgromian models of the Galaxy that involve an encounter with
Andromeda ≈ 10Gyr ago (fig. 3 in <cit.>). Such
Milgromian models of the dynamical history of the Local Group need
more exploration for the exact timing, close-encounter distance and
initial galaxy configurations (e.g. radii and masses of the
pre-encounter galactic disks). But we already know that they
naturally explain the observed thin/thick disk components, warps of
the Milky Way and of Andromeda disks, the planar satellite galaxy
arrangements around both of them, while also being consistent with the
present-day inclinations of the Galactic and Andromeda stellar and
gaseous disks as well as their relative distance and velocity of
approach <cit.>.
Recent work on dwarf spheroidal satellite galaxies
<cit.> reports difficulties in matching their
kinematical data by MOND. This problem remains unsolved in Milgromian
dynamics needing attention, but does not imply that Newtonian
solutions with dark matter exist
(e.g. <cit.>).
On the scale of thousands of AU, the very-wide-binary-star test has
been shown to falsify Newtonian dynamics with the data being
consistent with Milgromian dynamics <cit.>. Contrary to these results,
<cit.> find that very wide binary stars disprove MOND. This
was rebutted by <cit.>, who stress that the
sub-sample of close wide-binaries need to be shown by the same
method to comply with Newtonian solutions. This gauging of the
wide-binary-star test was not demonstrated by <cit.>.
While the alignment of orbital elements of outer Solar System bodies
is reported to probably be due to MOND <cit.>, <cit.> find this
to not be possible, and on the Saturnian distance scale
<cit.> report problems of matching the planet
position data with Milgromian dynamics. Relevant in this work
is that the field-equation underlying MOND (Sec. <ref>)
needs a discretised analogy for the application to few-body dynamics
which so-far has not been discovered. The technical details of the
above modelling on the scale of thousands of AU may thus contain
inconsistencies such that the conclusion reached on the validity or
non-validity of Milgromian dynamics in this regime remain to be
questionable <cit.>.
It would appear to be implausible that Milgromian dynamics be valid
rather than Newtonian dynamics and that dark matter particles that
are not part of the standard model of particle physics also
exist. Indeed, apart from the above and apart from the fact that
dark matter particles have not been found experimentally despite
40 yr of search, tests for the existence of dark matter particles
that had been hoped to be accounting for the apparent non-Newtonian
phenomena on galaxy scales have been yielding negative results: the
presence of dark matter halos has been significantly questioned by
the properties of dwarf galaxies in the Fornax galaxy cluster
<cit.>. Applying the Chandrasekhar dynamical friction
test to various galaxy systems <cit.>, and
most recently on the Milky-Way/Large-/Small-Magellanic-Cloud triple
system <cit.>, shows no orbital solutions to be
possible as the systems merge too quickly to be consistent with
their observed configuration in phase-space. Solutions without
dark matter particles but with Milgromian potentials are readily
obtained though. Noteworthy in this context is that earlier work
had already shown that the observed rotation curves of disk
galaxies cannot be reproduced if the theoretically-predicted dark
matter halos are assumed <cit.>, and
elliptical galaxies take too long to assemble in the
dark-matter-based structure formation models to be consistent with
their rapid early formation <cit.>. Independently of
the above results concerning Milgromian dynamics, these tests thus
suggest that dark matter particles do not exist.
In summary, the overall data situation thus indicates that the
validity of the universal law of Newtonian gravitation is
challenged, with Milgromian dynamics apparently accounting for the
observed celestial dynamics. The first structure-formation
simulations in a Milgromian cosmological model have successfully
been published (particle-based: <cit.>, and
hydrodynamical: <cit.>). The
Milgromian-interpretation of dynamics requires significant further
attention and the implications of these findings for galactic
astrophysics and cosmology are major if confirmed by further
research. Hereby we need to keep in mind that different
formulations of MOND exist due to insufficient knowledge which
formulation is correct, and that additional differences in the
detailed dynamical phenomena arise through different transition
functions which are also not well understood, and that it remains
possible that the formulation of MOND as is known today may be a
simplification of a deeper matter–space-time coupling not fully
understood at the present. This might be the case for example if
MOND is related to the properties of the quantum vacuum
(e.g. <cit.>) that may change under different
conditions.
We thank an anonymous referee for very helpful comments. Vikrant
Jadhav acknowledges support through the Alexander von Humboldt
Foundation in the form of an AvH Research Fellowship. Wenjie Wu
acknowledges support through a studentship from the stellar
populations and dynamics research (SPODYR) group at the University
of Bonn. We thank the DAAD-Bonn-Prague exchange programme at the
University of Bonn for support. Much of this manuscript was written
at the Department of Astrophysics, Astronomy and Mechanics at
Aristotle University of Thessaloniki, and PK thanks Padelis
Papadopoulos and other institute members there for their kind
hospitality. This work relies on data obtained by the Gaia
astrometric space mission. Stacy McGaugh's Triton Station was an
important source of information on the problem concerning the
Gaia-DR3-derived Keplerian fall-off of the Galaxy's rotation curve.
|
http://arxiv.org/abs/2405.09752v1 | 20240516012530 | Time-Varying Graph Signal Recovery Using High-Order Smoothness and Adaptive Low-rankness | ["Weihong Guo", "Yifei Lou", "Jing Qin", "Ming Yan"] | eess.SP | ["eess.SP", "cs.NA", "math.NA", "math.OC"] |
Time-Varying Graph Signal Recovery Using High-Order Smoothness and Adaptive Low-rankness
Weihong Guo
Department of Mathematics, Applied Mathematics, and Statistics
Case Western Reserve University
Cleveland, OH 44106
wxg49@case.edu
Yifei Lou
Department of Mathematics & School of Data Science and Society
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599
yflou@unc.edu
Jing Qin Department of Mathematics
University of Kentucky
Lexington, KY 40506
jing.qin@uky.edu
Ming Yan School of Data Science
The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)
Shenzhen, China 518172
yanming@cuhk.edu.cn
May 20, 2024
==========================================
Time-varying graph signal recovery has been widely used in many applications, including climate change, environmental hazard monitoring, and epidemic studies. It is crucial to choose appropriate regularizations to describe the characteristics of the underlying signals, such as the smoothness of the signal over the graph domain and the low-rank structure of the spatial-temporal signal modeled in a matrix form. As one of the most popular options, the graph Laplacian is commonly adopted in designing graph regularizations for reconstructing signals defined on a graph from partially observed data. In this work, we propose a time-varying graph signal recovery method based on the high-order Sobolev smoothness and an error-function weighted nuclear norm regularization to enforce the low-rankness. Two efficient algorithms based on the alternating direction method of multipliers and iterative reweighting are proposed, and convergence of one algorithm is shown in detail. We conduct various numerical experiments on synthetic and real-world data sets to demonstrate the proposed method's effectiveness compared to the state-of-the-art in graph signal recovery.
§ INTRODUCTION
Many real-world datasets are represented in the form of graphs, such as sea surface temperatures, Covid-19 cases at regional or global levels, and PM 2.5 levels in the atmosphere. Graphs play a crucial role in data science, facilitating the mathematical modeling of intricate relationships among data points. Typically composed of vertices with either undirected or directed edges, graphs regard each data point as a vertex and use edges to represent pairwise connections in terms of distances or similarities. A graph signal is a collection of values defined on the vertex set. The graph structure can be either provided by specific applications or learned from partial or complete datasets.
As an extension of (discrete) signal processing, graph signal processing <cit.> has become an emerging field in data science and attracted tremendous attention due to its capability of dealing with big data with irregular and complex graph structures from various applications, such as natural language processing <cit.>, traffic prediction <cit.>, climate change monitoring <cit.>, and epidemic prediction <cit.>. Graph signal recovery aims to recover a collection of signals with certain smoothness assumptions defined on a graph from partial and/or noisy observations. Unlike signals defined in traditional Euclidean spaces, the intricate geometry of the underlying graph domain must be considered when processing and recovering graph signals. Graph signals typically exhibit smoothness either locally or globally over the graph.
There are some challenges in graph signal recovery when exploiting the underlying graph structure to improve signal reconstruction accuracy. First, the topology of a graph desires a comprehensive representation involving many graph components, such as structural properties, connectivity patterns, vertex/edge density, and distribution. Second, it may be insufficient to describe the smoothness of graph signals by simply restricting the similarity of signal values locally. Moreover, the growth of graph size leads to a significant computational burden. To address them, various techniques have been developed, including graph-based regularization methods <cit.>, spectral graph theory <cit.>, and optimization algorithms <cit.>.
§.§ Time-Varying Graph Signal Recovery
A time-varying or spatial-temporal graph signal can be considered as a sequence of signals arranged chronologically, where each signal at a specific time instance is defined on a static or dynamically changing spatial graph.
Consider an undirected unweighted graph G=(V, E), where V is a set of n vertices and E is a set of edges. We consider a collection of time-varying graph signals {x_t}_t=1,…,m with x_t∈ℝ^n defined on V and indexed by time t. Let X=[x_1,…,x_m]∈ℝ^n× m be the data set represented in matrix form. The pairwise connections on the graph G can be modeled by an adjacency matrix A, where the (i,j)-th entry of A is one if there is an edge between vertices i and j, and zero otherwise. This binary adjacency matrix can be extended to the non-binary case for a weighted graph, where each entry indicates the similarity between two vertices. Throughout the paper, we use a standard k nearest neighbor (kNN) approach based on the Euclidean distance of data points to construct the adjacency matrix.
Given an adjacency matrix A, we further define the graph Laplacian matrix, L = M - A ∈ℝ^n× n, where M is a diagonal matrix with its diagonal element M_ii = ∑_j A_ij. The graph Laplacian serves as a matrix representation of the graph structure and can be used to describe some important characteristics of a graph, such as node connectivity and similarity. For example, geographic locations in the form of coordinates, i.e., longitude and latitude, are typically used to calculate the pairwise distance and, thereby, the graph Laplacian for geospatial data. For some data sets without obvious graph domains, a preprocessing step of graph learning can be implemented; see <cit.> for a comprehensive review of graph learning techniques.
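As a concrete illustration of this construction (not the authors' MATLAB implementation), the following Python sketch builds a binary kNN adjacency matrix from point coordinates and forms the symmetrically normalized graph Laplacian; weighted variants (e.g., weights inversely proportional to squared distance) follow by replacing the binary entries.

```python
import numpy as np

def knn_graph_laplacian(coords, k=5, normalized=True):
    """Binary kNN adjacency from point coordinates and the corresponding
    graph Laplacian L = M - A (symmetrically normalised if requested)."""
    n = coords.shape[0]
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    A = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbours per node
    for i in range(n):
        A[i, nbrs[i]] = 1.0
    A = np.maximum(A, A.T)                # symmetrise the kNN relation
    deg = A.sum(axis=1)
    L = np.diag(deg) - A
    if normalized:                        # D^{-1/2} L D^{-1/2}
        dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
        L = L * dinv[:, None] * dinv[None, :]
    return L
```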
Time-varying graph signal recovery aims to recover an underlying matrix from its partially observed entries that are possibly polluted by additive noise. Mathematically, a forward model is Y = J∘ X + 𝒩, where ∘ denotes the Hadamard (entrywise) product, Y is the observed data, J ∈{0,1}^n× m is a sampling matrix, and 𝒩 is random noise. In this work, we focus on recovering time-varying signals, represented by the matrix X, from incomplete noisy data Y defined on static spatial graphs in the sense that the vertex set and the edges do not change over time. In addition, we adopt a symmetrically normalized graph Laplacian that is pre-computed based on geographic locations.
§.§ Related Works
The recovery of graph signals from partial observations is an ill-posed problem due to missing data. Graph regularization plays a crucial role in developing a recovery model for time-varying signals by enforcing temporal correlation and/or describing the underlying graph topology.
An intuitive approach for recovering time-varying graph signals is to apply interpolation methods to fill in the missing entries, such as natural neighborhood interpolation (NNI) <cit.>. Numerous recovery models with diverse smoothness terms have been proposed to further preserve the underlying geometry. For example, Graph Smoothing (GS) <cit.> characterizes the smoothness of the signal using the graph Laplacian of X. Alternatively, temporal smoothness is incorporated in Time-Varying Graph Signal Recovery (TGSR) <cit.> by formulating the graph Laplacian of DX, where D is a first-order temporal difference operator. The combination of the
graph Laplacian of X and the Tikhonov regularity of DX was considered in <cit.>. In contrast, the graph Laplacian of DX with an additional low-rank regularity of X was formulated as Low-Rank Differential Smoothness (LRDS) <cit.>. In the Tikhonov regularization, ‖XD‖_F^2=tr(XDD^TX^T) implies that DD^T is treated as the temporal graph Laplacian. In <cit.>, the graph Laplacian matrix L is replaced by (L+ϵ I)^r, where I is the identity matrix and r≥1, for a high-order Sobolev spatial-temporal smoothness. Its main advantage lies in faster convergence, as this approach does not necessitate extensive eigenvalue decomposition or matrix inversion. Recently, another low-rank and graph-time smoothness (LRGTS) method has been proposed in <cit.>, where the sum of the nuclear norm and the Tikhonov regularizer on the second-order temporal smoothness is adopted to promote the low-rankness and the temporal smoothness, respectively.
All the models mentioned above can be condensed into one minimization framework:
min_X 1/2‖Y-J∘ X‖_F^2+α/2 tr(D_θ^TX^TL_sXD_θ)+β R(X) + γ/2 tr(XL_tX^T),
where D_θ is a θ-th order temporal difference operator, L_s and L_t are the spatial and temporal graph Laplacian matrices, respectively, R(X) is the regularization term applied to X describing its characteristics, and α≥ 0, β≥ 0, γ≥ 0 are three parameters. Two common choices of θ are (1) θ = 0, which corresponds to D_θ=I, and (2) θ =1, used in TGSR. Additionally, L_s can be a transformed version of the classical graph Laplacian L, e.g., L̃=(L+ϵ I)^r in the Sobolev method <cit.>, where ϵ>0 and r≥ 1, which can be non-integer. The temporal graph Laplacian can be constructed by using the τ-th order temporal difference operator, i.e., L_t=D_τ D_τ^T, in which case the temporal Laplacian term can be expressed via the Frobenius norm tr(XD_τD_τ^TX^T)=‖XD_τ‖_F^2 (see Tikhonov with τ=1 and LRGTS with τ=2). The regularization R(X) can be chosen as the nuclear norm of X if the underlying time-varying graph signal X is low rank.
Various models utilize different choices of D_θ, L_s/L̃, L_t, and the regularization R. Leveraging the recent growth in deep learning, some time-varying graph signal recovery methods incorporate the unrolling technique <cit.>, graph neural networks (GNNs) <cit.>, and joint sampling and reconstruction of time-varying graph signals <cit.>. In this work, we are dedicated to developing unsupervised time-varying graph signal recovery algorithms that do not involve or rely on data training.
Following the general framework (<ref>), we propose a novel low-rank regularization R(X) based on the error function (ERF) <cit.> for sparse signal recovery (see Section <ref>). In addition, to handle non-Gaussian types of noise such as Laplace noise, we propose a variant model in which the Frobenius norm-based data fidelity term is replaced with the ℓ_1-norm data fidelity (see Section <ref>). In Table <ref>, we provide a summary of the proposed models and relevant works pertaining to the general framework outlined in (<ref>).
§.§ Contributions
The major contributions of this work are described as follows.
* We develop a generalized time-varying graph signal recovery framework encompassing several state-of-the-art works as specific cases. We also develop two new models with a new regularization based on ERF.
* The proposed models combine high-order temporal smoothness and graph structures with the temporal correlation exploited by iteratively reweighted nuclear norm regularization.
* We propose an efficient algorithm for solving the proposed models. Convergence analysis has shown that the algorithm generates a sequence that converges to a stationary point of the problem.
* We conduct various numerical experiments, utilizing both synthetic and real-world datasets (specifically PM2.5 and sea surface temperature data), to validate the effectiveness of the proposed algorithm.
§.§ Organization
The subsequent sections of this paper are structured as follows. In Section <ref>, we introduce a pioneering framework for recovering time-varying graph signals, leveraging Sobolev smoothness and ERF regularization. Additionally, we put forth an efficient algorithm based on the alternating direction method of multipliers (ADMM) and iterative reweighting scheme. A comprehensive convergence analysis of the proposed algorithm is also provided. In Section <ref>, we present numerical experiments conducted on synthetic and real-world datasets sourced from environmental and epidemic contexts. Finally, Section <ref> encapsulates our conclusions and outlines potential avenues for future research.
§ PROPOSED METHOD
§.§ Error Function Weighted Nuclear Norm Regularization
To enhance the low-rankness of a matrix, weighted nuclear norm minimization (WNNM) has been developed with promising performance in image denoising <cit.>. Specifically, the weighted nuclear norm (WNN) is defined as
‖X‖_w,*:=∑_i w_iσ_i(X),
where σ_i(X) is the i-th singular value of a matrix X in decreasing order and the weight vector w=(w_i) is in non-decreasing order with w_i≥0 being the i-th weight. Choosing the weights is challenging in sparse and low-rank signal recovery problems. Iteratively reweighted ℓ_1 (IRL1) minimization <cit.> was proposed for the sparse recovery problem, where the weights are updated based on the previous estimate. It can handle many problems with complicated sparse regularizations, exhibiting improved sparsity and convergence speed.
In this work, we introduce a novel ERF-weighted nuclear norm based on the ERF regularizer <cit.> and use linearization to obtain WNN. For any real matrix X with n singular values σ_1(X)≥…≥σ_n(X), the ERF-weighted nuclear norm is
‖X‖_ erf=∑_i=1^n∫_0^σ_i(X)e^-t^2/σ^2 dt,
where σ>0 serves as a filtering parameter. To solve the ERF-weighted nuclear norm regularized minimization problem, we use iterative reweighting (linearization) to obtain a WNN with adaptive weights.
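Since the integrand is Gaussian, each term of (<ref>) has the closed form ∫_0^s e^-t^2/σ^2 dt=(σ√(π)/2)·erf(s/σ), which the following Python sketch (our illustration; the paper's experiments use MATLAB) uses to evaluate ‖X‖_ erf.

```python
import numpy as np
from scipy.special import erf

def erf_nuclear_norm(X, sigma):
    """ERF-weighted nuclear norm: sum_i (sigma*sqrt(pi)/2) * erf(s_i/sigma),
    where s_i are the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return 0.5 * sigma * np.sqrt(np.pi) * np.sum(erf(s / sigma))
```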
§.§ Fractional-Order Derivative
Inspired by the Grünwald-Letnikov fractional derivative <cit.>, we introduce the total θ-th order temporal forward difference matrix with a zero boundary condition, as shown below
D_θ=[ C(0); ⋮ ⋱; C(k) ⋯ C(0); ⋱ ⋱; C(k) ⋯ C(0) ]∈ℝ^m× m.
Here the coefficients {C(i)}_i=0^k are defined as
C(i)=(-1)^i Γ(θ+1)/Γ(i+1)Γ(θ+1-i), 0≤ i≤ k,
where Γ(x) is the Gamma function. Notice that if θ is a positive integer, k is determined automatically. For example, if θ=1, then k=1 and we have C(0)=1 and C(1)=-1, which reduces to the first-order finite difference case. If θ=2, then it reduces to the temporal Laplacian operator. Generally, if θ=n, then only the first n+1 coefficients {C(i)}_i=0^n are nonzero and thereby k=n. For any fractional value of θ, we have to choose the parameter k. The difference matrix (<ref>) is built upon the zero boundary condition, while other types of boundary conditions, e.g., Neumann and periodic boundary conditions, can also be used. Alternatively, we can use low-order difference schemes for boundary conditions, e.g., the first-order forward difference based on the first m-1 time points and the zeroth order for the last time point.
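In practice, the coefficients C(i) are conveniently generated by the stable recurrence C(i)=C(i-1)(i-1-θ)/i with C(0)=1, which avoids evaluating the Gamma function at negative arguments; a minimal Python sketch of D_θ follows (our illustration).

```python
import numpy as np

def gl_coeffs(theta, k):
    """Grunwald-Letnikov coefficients C(i) = (-1)^i * binom(theta, i),
    generated by the recurrence C(i) = C(i-1) * (i - 1 - theta) / i."""
    C = np.ones(k + 1)
    for i in range(1, k + 1):
        C[i] = C[i - 1] * (i - 1 - theta) / i
    return C

def frac_diff_matrix(m, theta, k):
    """m x m theta-th order temporal difference matrix (zero boundary):
    banded lower-triangular Toeplitz with C(j) on the j-th subdiagonal."""
    C = gl_coeffs(theta, k)
    D = np.zeros((m, m))
    for j in range(min(k, m - 1) + 1):
        D += np.diag(C[j] * np.ones(m - j), -j)
    return D

print(gl_coeffs(1.0, 3))   # [ 1. -1.  0.  0.]: first-order difference
print(gl_coeffs(1.8, 3))   # fractional case used in the experiments
```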
§.§ Proposed Algorithm 1
We propose the following ERF regularized time-varying graph signal recovery model
min_X 1/2‖Y-J∘ X‖_F^2+α/2 tr(D_θ^TX^T(L+ϵ I)^rXD_θ)+β‖X‖_ erf.
Here we use the least squares as a data fidelity term, the Sobolev smoothness of time-varying graph signals <cit.> as the graph regularization, and the ERF-based regularization defined in (<ref>) to exploit the temporal low-rank correlation.
We apply ADMM with linearization to solve the problem (<ref>). First, we introduce an auxiliary variable Z to rewrite the problem (<ref>) into an equivalent constrained problem:
min_X,Z 1/2Y-J∘ X_F^2+α/2(D_θ^TX^T(L+ϵ I)^rXD_θ)+βZ_erf, s.t. X=Z.
Since the proximal operator of ‖·‖_ erf is difficult to compute, we apply linearization on the ERF term to obtain a WNN when solving the subproblem for Z. The ADMM iterates as follows,
w_i← exp(-σ_i^2(X)/σ^2), for i=1,…,m
Z← argmin_Z β‖Z‖_w,*+ρ/2‖X-Z+Ẑ‖_F^2
X← argmin_X 1/2‖J∘ X-Y‖_F^2+
α/2 tr(D_θ^TX^T(L+ϵ I)^rXD_θ)+ρ/2‖X-Z+Ẑ‖_F^2
Ẑ← Ẑ + (X-Z),
where ρ>0 is a stepsize that affects the convergence; please refer to Theorem <ref> for more details. We derive closed-form solutions for both the Z- and X-subproblems in (<ref>).
Specifically, the Z-subproblem can be solved via the weighted singular value thresholding operator, i.e.,
Z= SVT(X+Ẑ)=U diag(shrink(σ,(β/ρ)w))V^T,
where UΣ V^T is the singular value decomposition of X+Ẑ with the vector of singular values σ, and diag(·) is a diagonalization operator turning a vector into a diagonal matrix with the same entries as the vector. Here the shrink operator shrink(x,ξ)=sign(x)·max(|x|-ξ,0) is implemented entrywise, where sign(x) returns the sign of x if x≠0 and zero otherwise.
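A minimal Python sketch of the weight update (<ref>) and the weighted singular value thresholding step is given below (our illustration; note that the ERF weights are automatically non-decreasing when the singular values are sorted in decreasing order, as WNN requires).

```python
import numpy as np

def erf_weights(X, sigma):
    """Linearised ERF weights w_i = exp(-s_i(X)^2 / sigma^2); they are
    non-decreasing because the s_i are sorted in decreasing order."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.exp(-(s ** 2) / sigma ** 2)

def weighted_svt(B, w, tau):
    """Weighted singular value thresholding: minimiser of
    tau*||Z||_{w,*} + (1/2)*||B - Z||_F^2 for non-decreasing weights w."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau * w, 0.0)) @ Vt

# In Algorithm 1: Z = weighted_svt(X + Zhat, erf_weights(X, sigma), beta/rho)
```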
In the X-subproblem, we can rewrite the second term of the objective function as
tr(D_θ^TX^T(L+ε I)^rXD_θ) =‖(L+ε I)^r/2XD_θ‖_F^2
=‖(D_θ^T⊗ (L+ε I)^r/2)vec(X)‖_2^2
:=‖A vec(X)‖_2^2,
where ⊗ is the Kronecker product and vec(·) stacks the columns of a matrix into a vector. Thus, the X-subproblem has the closed-form solution
X=vec^-1[(J̃+α A^TA+ρ I)^-1(J̃^T vec(Y)+ρ vec(Z-Ẑ))],
where J̃=diag(vec(J)). Note that J̃^T vec(Y)=vec(Y) since J̃ is a diagonal matrix with binary entries on the diagonal, whose nonzero entries correspond to the sampled spatial points.
Furthermore, considering that the matrix J̃+α A^TA+ρ I is symmetric and positive definite, we perform its Cholesky factorization as J̃+α A^TA+ρ I=L̃L̃^T. Subsequently, we leverage forward/backward substitution as a substitute for matrix inversion, thereby reducing the computational time.
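The following Python sketch assembles the vectorized X-update; it is a dense illustration suitable only for small n and m, with scipy routines standing in for the Cholesky-based solver described above, and it assumes Y is zero at the unsampled entries so that J̃^T vec(Y)=vec(Y).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, fractional_matrix_power

def x_update(Y, J, L, D_theta, Z, Zhat, alpha, rho, eps=0.1, r=3.0):
    """Dense sketch of the X-update: solve
    (Jtil + alpha*A^T A + rho*I) vec(X) = vec(Y) + rho*vec(Z - Zhat)
    with A = kron(D_theta^T, (L + eps*I)^(r/2)); small n*m only."""
    n, m = Y.shape
    S = fractional_matrix_power(L + eps * np.eye(n), r / 2.0).real
    A = np.kron(D_theta.T, S)
    Jtil = np.diag(J.flatten(order='F'))     # diag(vec(J)), column-major
    M = Jtil + alpha * (A.T @ A) + rho * np.eye(n * m)
    rhs = Y.flatten(order='F') + rho * (Z - Zhat).flatten(order='F')
    x = cho_solve(cho_factor(M), rhs)        # Cholesky + back-substitution
    return x.reshape((n, m), order='F')
```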
The pseudo-code of the proposed approach for minimizing the model (<ref>) is given in Algorithm <ref>.
§.§ Proposed Algorithm 2
In real-world applications, the type of noise could be unknown, and it is possible to encounter a mixture of different types of noise. To enhance the robustness against noise, we propose the second model,
min_X ‖Y-J∘ X‖_1+α/2 tr(D_θ^TX^T(L+ϵ I)^rXD_θ)+β‖X‖_ erf.
Compared with (<ref>), this new model utilizes the ℓ_1-norm data fidelity to accommodate various types of noise.
Because of the ℓ_1 term, we need to introduce an additional variable V to make the subproblems easy to solve. The equivalent constrained problem is
min_J∘ X-Y=V
X=Z
V_1+α/2(D_θ^TX^T(L+ϵ I)^rXD_θ)+βZ_erf.
Therefore, the ADMM with linearization on the ERF term has the following subproblems
V ← argmin_V ‖V‖_1+ρ_1/2‖J∘ X-Y-V+V̂‖_F^2
Z ← argmin_Z β‖Z‖_w,*+ρ_2/2‖X-Z+Ẑ‖_F^2
X ← argmin_X α/2 tr(D_θ^TX^T(L+ϵ I)^rXD_θ)+ρ_1/2‖J∘ X-Y-V+V̂‖_F^2
+ρ_2/2‖X-Z+Ẑ‖_F^2,
followed by the dual updates V̂←V̂+(J∘ X-Y-V) and Ẑ←Ẑ+(X-Z).
For the V-subproblem, we get the closed-form solution expressed via the shrink operator
V=shrink(J∘ X-Y+V̂, 1/ρ_1).
Similar to Algorithm 1, the solution of the Z-subproblem is given by (<ref>) with ρ replaced by ρ_2. For the X-subproblem, we get the closed-form solution
X=vec^-1[(ρ_1J̃+α A^TA+ρ_2 I)^-1(ρ_1J̃^T vec(Y+V-V̂)+ρ_2 vec(Z-Ẑ))].
The entire algorithm is described in Algorithm <ref>.
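A Python sketch of the entrywise shrink operator and the resulting V-update (<ref>) is given below; the form of the V-subproblem follows the standard ADMM derivation sketched above, so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def soft_shrink(x, xi):
    """Entrywise shrink(x, xi) = sign(x) * max(|x| - xi, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - xi, 0.0)

def v_update(X, Y, J, Vhat, rho1):
    """V-subproblem of Algorithm 2 under the standard ADMM splitting:
    V = shrink(J o X - Y + Vhat, 1/rho1), with o the Hadamard product."""
    return soft_shrink(J * X - Y + Vhat, 1.0 / rho1)
```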
§.§ Convergence Analysis of Algorithm <ref>
For simplicity, we define
f(X):=1/2‖Y-J∘ X‖_F^2+α/2 tr(D_θ^TX^T(L+ϵ I)^rXD_θ),
and hence the augmented Lagrangian function is given by
ℒ(X,Z,Ẑ)=f(X)+β‖Z‖_ erf+ρ⟨Ẑ,X-Z ⟩ +ρ/2‖X-Z‖_F^2.
The function f is convex and continuously differentiable. In addition, ∇ f is Lipschitz continuous with a constant L (throughout this subsection, L denotes the Lipschitz constant rather than the graph Laplacian).
Let ρ>L and let {(X^k, Z^k,Ẑ^k)} be a sequence generated by Algorithm <ref>. Then the sequence is bounded and has a limit point that is a stationary point of problem (<ref>).
Consider one iteration of Algorithm <ref>. The update of Z^k+1 gives
ℒ(X^k,Z^k+1,Ẑ^k)-ℒ(X^k,Z^k,Ẑ^k)
= β‖Z^k+1‖_ erf+ρ/2‖X^k-Z^k+1+Ẑ^k‖_F^2-β‖Z^k‖_ erf -ρ/2‖X^k-Z^k+Ẑ^k‖_F^2
≤ β‖Z^k+1‖_w^k,*-β‖Z^k‖_w^k,*+ρ/2‖X^k+Ẑ^k-Z^k+1‖_F^2-ρ/2‖X^k+Ẑ^k-Z^k‖_F^2
≤ -ρ/2‖Z^k+1-Z^k‖_F^2.
The first inequality holds because the error function is concave for positive arguments, so its linearization ‖·‖_w^k,* majorizes the change in ‖·‖_ erf. The second inequality is valid because Z^k+1 is the optimal solution of the strongly convex Z-subproblem.
Then we consider the updates of X^k+1 and Ẑ^k+1, which together give
ℒ(X^k+1,Z^k+1,Ẑ^k+1)-ℒ(X^k,Z^k+1,Ẑ^k)
= f(X^k+1)+ρ⟨Ẑ^k+1,X^k+1-Z^k+1⟩+ρ/2‖X^k+1-Z^k+1‖_F^2
-f(X^k)-ρ⟨Ẑ^k,X^k-Z^k+1⟩-ρ/2‖X^k-Z^k+1‖_F^2
= f(X^k+1)-f(X^k)+ρ⟨Ẑ^k+1,X^k+1-X^k⟩
+ρ‖Ẑ^k+1-Ẑ^k‖_F^2-ρ/2‖X^k+1-X^k‖_F^2,
where the last equality uses the update Ẑ^k+1=Ẑ^k+X^k+1-Z^k+1. Since f is smooth, the updates of X^k+1 and Ẑ^k+1 show that ρẐ^k+1+∇ f(X^k+1)=0. The convexity and smoothness of f give f(X^k+1)+⟨∇ f(X^k+1),X^k-X^k+1⟩+1/(2L)‖∇ f(X^k+1)-∇ f(X^k)‖_F^2≤ f(X^k). Therefore, we have
ℒ(X^k+1,Z^k+1,Ẑ^k+1)-ℒ(X^k,Z^k+1,Ẑ^k)
≤ (max(1/ρ-1/(2L),0)L^2-ρ/2)‖X^k+1-X^k‖_F^2.
If ρ>L, then max(1/ρ-1/(2L),0)L^2-ρ/2<0.
Combining the equations (<ref>) and (<ref>), we see that ℒ(X^k,Z^k,Ẑ^k) is decreasing. Furthermore, if ρ>L, we have
ℒ(X^k,Z^k,Ẑ^k) = f(X^k)+β‖Z^k‖_ erf+ρ⟨Ẑ^k,X^k-Z^k⟩+ρ/2‖X^k-Z^k‖_F^2
= f(X^k)+β‖Z^k‖_ erf-⟨∇ f(X^k),X^k-Z^k⟩+ρ/2‖X^k-Z^k‖_F^2
≥ f(Z^k)+β‖Z^k‖_ erf+(ρ-L)/2‖X^k-Z^k‖_F^2≥ 0,
where the last inequality comes from the Lipschitz continuity of ∇ f. So ℒ(X^k,Z^k,Ẑ^k) is bounded from below. Therefore, ℒ(X^k,Z^k,Ẑ^k) converges and
lim_k→∞ (X^k+1-X^k)=0, lim_k→∞ (Z^k+1-Z^k)=0.
Since ∇ f is Lipschitz continuous and ρẐ^k=-∇ f(X^k), we can get
lim_k→∞ (Ẑ^k+1-Ẑ^k)= lim_k→∞ (X^k+1- Z^k+1)=0.
Next, we show that (X^k,Z^k,Ẑ^k) is bounded. We have shown in (<ref>) that
ℒ(X^k,Z^k,Ẑ^k)
≥ f(Z^k)+β‖Z^k‖_ erf +(ρ-L)/2‖X^k-Z^k‖_F^2.
Therefore, when ρ>L, the boundedness of ℒ(X^k,Z^k,Ẑ^k) gives the boundedness of f(Z^k)+β‖Z^k‖_ erf and ‖X^k-Z^k‖_F^2. Thus, the sequences {X^k} and {Z^k} are also bounded. Because ρẐ^k=-∇ f(X^k), the sequence {Ẑ^k} is also bounded.
Since the sequence {(X^k,Z^k,Ẑ^k)} is bounded, there exists a convergent subsequence, that is, (X^k_i,Z^k_i,Ẑ^k_i)→ (X^⋆,Z^⋆,Ẑ^⋆). The limits (<ref>) and (<ref>) show that (X^k_i+1,Z^k_i+1,Ẑ^k_i+1)→ (X^⋆,Z^⋆,Ẑ^⋆). Then we have X^⋆=Z^⋆ and 0∈β∂‖Z^⋆‖_ erf-ρẐ^⋆. Thus, X^⋆ is a stationary point of the original problem (<ref>).
§ NUMERICAL EXPERIMENTS
In this section, we conduct various numerical experiments on synthetic and real data to demonstrate the performance of our proposed methods. In particular, we compare our methods - Algorithm 1 and Algorithm 2 - with other related states of the art, including natural neighbor interpolation (NNI) <cit.>, graph smooth (GS) <cit.>, Tikhonov <cit.>, TGSR <cit.>, LRDS <cit.>, and Sobolev <cit.>. To evaluate the reconstruction quality, we adopt the root mean square error (RMSE) as a comparison metric, defined as follows
RMSE=‖X̂-X‖_F/√(nm),
where X̂ is the approximation of the ground-truth graph signal X∈ℝ^n× m defined on a spatial-temporal graph with n nodes and m time instances.
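This metric is straightforward to compute; the short sketch below (plain NumPy, with X_hat denoting the reconstruction) mirrors the definition above.

```python
import numpy as np

def rmse(X_hat: np.ndarray, X: np.ndarray) -> float:
    """Root mean square error between a reconstruction and the ground truth.

    X_hat, X: (n, m) arrays -- n graph nodes, m time instances.
    """
    n, m = X.shape
    return np.linalg.norm(X_hat - X, "fro") / np.sqrt(n * m)
```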
All the numerical experiments are implemented on Matlab R2021a in a desktop computer with Intel CPU i9-9960X RAM 64GB and GPU Dual Nvidia Quadro RTX5000 with Windows 10 Pro.
§.§ Synthetic Data
Following the work of <cit.>, we generate N=100 nodes randomly from the uniform distribution in a 100× 100 square area. The graph weight is determined using k-nearest neighbors. Specifically, the weight between any two nodes is inversely proportional to the square of their Euclidean distance. We consider k=5 and visualize the corresponding graph in Fig. <ref>.
Denote the weight matrix by W, its degree matrix D, and the graph Laplacian L has eigen-decomposition L=UΛ U^T, where Λ =diag(0, λ_2, ⋯, λ_N).
We further define L^-1/2=UΛ^-1/2U^T where Λ^-1/2 =diag(0, λ_2^-1/2, ⋯, λ_N^-1/2). Starting from x_1, we generate the time-varying graph signal
x_t=x_t-1 + L^-1/2 f_t, for t = 2, ⋯, T,
where f_t is an i.i.d. Gaussian signal rescaled so that ‖f_t‖_2=κ, with κ controlling the temporal smoothness of the signal. Stacking {x_t} as column vectors, we obtain a data matrix X=[x_1, x_2, ⋯, x_T]. We generate a low-rank data matrix by starting with an empty matrix X and repeating X=[X, x_1, ⋯, x_10, x_10, x_9,⋯, x_1] 10 times, thus also obtaining a 100× 200 data matrix. The measurement noise at each node is i.i.d. Gaussian noise 𝒩(0,η^2), where η is the standard deviation.
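A minimal sketch of this generator is given below, assuming a connected graph (so only the first Laplacian eigenvalue is zero) and a fixed random seed for reproducibility; it produces the base signal matrix before the low-rank repetition step.

```python
import numpy as np

def generate_signal(L: np.ndarray, x1: np.ndarray, T: int, kappa: float,
                    rng=np.random.default_rng(0)) -> np.ndarray:
    """Generate x_t = x_{t-1} + L^{-1/2} f_t with ||f_t||_2 = kappa."""
    lam, U = np.linalg.eigh(L)                  # L = U diag(lam) U^T
    inv_sqrt = np.zeros_like(lam)
    inv_sqrt[1:] = lam[1:] ** -0.5              # drop the zero eigenvalue
    L_inv_half = U @ np.diag(inv_sqrt) @ U.T    # L^{-1/2}
    X = [x1]
    for _ in range(1, T):
        f = rng.standard_normal(L.shape[0])
        f *= kappa / np.linalg.norm(f)          # rescale so ||f_t||_2 = kappa
        X.append(X[-1] + L_inv_half @ f)
    return np.column_stack(X)                   # (N, T) data matrix
```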
Parameter tuning. For the proposed Algorithm <ref>, we fix the following parameters: k=3 and θ=1.8 in the definition of fractional-order derivative (<ref>); σ=10^3 in the definition of the ERF regularization (<ref>); ϵ=0.1 and r=3 in the Sobolev graph Laplacian; and the step size ρ=10^-6 in the ADMM iterations (<ref>). In each set of experiments, we carefully tune two parameters (α,β) that determine the weights for the spatial-temporal smoothness and the low-rankness, respectively, in the proposed model (<ref>). We choose the best combination of (α,β) among α∈{0, 10^-5,10^-4, 10^-3, 10^-2,10^-1, 1, 10} and β∈{0, 10^-8, 10^-7, 10^-6, 10^-5,10^-4, 10^-3, 10^-2, 10^-1, 1, 10}.
As demonstrated in Table <ref>, some competing methods are special cases of the proposed models, and hence, we only tune the parameters α,β for these methods while keeping other parameters fixed.
Reconstruction errors with respect to sampling rates. We begin by evaluating the performance of competing methods under different sampling rates. The smoothness level is set as κ=1, while the standard deviation of the Gaussian noise is η=0.1. The reconstruction performance is evaluated via RMSE, defined in (<ref>), showing that the recovery errors of all the methods decrease with the increase of the sampling rates. The comparison results are visualized in Fig. <ref>. The proposed method achieves significant improvements over the competing methods. Surprisingly, LRDS, equipped with the nuclear norm, does not yield stable reconstruction performance in the low-rank case.
Reconstruction errors with respect to noise levels. We then investigate the recovery performance under different noise levels by setting the noise variance η^2 = {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1}. In this set of experiments, we fix the sampling rate as 40% and smoothing level κ=1. The noise level affects the magnitude of the least-squares fit, and as a result, we adjust the search window to α∈{0, 10^-3,10^-2, 10^-1, 1, 10, 10^2,10^3, 10^4}. The parameter β remains the same: β∈{0, 10^-8, 10^-7, 10^-6, 10^-5,10^-4, 10^-3, 10^-2, 10^-1, 1, 10}. The results are presented in Fig. <ref>, demonstrating the superior performance of the proposed Algorithm <ref> under various noise levels.
§.§ Real Data
In the real data experiments, we first test the daily mean Particulate Matter (PM) 2.5 concentration dataset from California provided by the US Environmental Protection Agency <https://www.epa.gov/outdoor-air-quality-data>. We used the data captured daily from 93 sensors in California for the first 200 days in 2015. The constructed graph is depicted in Fig. <ref>. In Fig. <ref>, we compare the average recovery accuracy of all the comparing methods over 50 trials when the sampling rates are 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45. In Table <ref>, we also compare the performance of Algorithm 1 and Algorithm 2, which shows that Algorithm 2 can improve the accuracy of Algorithm 1 at some sampling rates, at the cost of longer runtime in general.
Next, we test the sea surface temperature dataset, which was captured monthly by the NOAA Physical
Sciences Laboratory (PSL). The data set can be downloaded from the PSL website <https://psl.noaa.gov/>. We use a subset of 200 time points on the Pacific Ocean
within 400 months. The constructed graph is illustrated in Fig. <ref>. We see from Fig. <ref> that the proposed algorithm outperforms other methods significantly and consistently across all sampling rates. In Table <ref>, we also compare the performance of Algorithm <ref> and Algorithm <ref>, which indicates Algorithm <ref> can improve the accuracy of Algorithm <ref> under certain sampling rates but with more computational time in general.
§.§ Discussions
Using the sea surface temperature data, we conduct an ablation study of the proposed model (<ref>) without the smoothing regularization by setting α = 0 or without the low-rank ERF term by setting β =0. We plot the RMSE curves with respect to the sampling rates and the noise levels in Fig. <ref>, showing that the ERF regularization has a larger influence on the performance compared to the Sobolev-base graph Laplacian regularization.
Using the same sea surface temperature data, we investigate whether the proposed model (<ref>) is sensitive to the parameters (r,ϵ) defining the Sobolev graph Laplacian and σ^2 defining the ERF regularization. Fig. <ref> shows that the proposed approach is not sensitive to the various degrees of smoothness controlled by r and ϵ. Although the ERF regularization plays an important role in the recovery performance, as illustrated in the ablation study, the proposed model is not sensitive to the choice of σ^2 as long as it is larger than 10,000.
In addition, we compare the proposed Algorithm <ref> and Algorithm <ref> using the sea surface temperature data and show the results in Tables <ref> and <ref>. One can see that the two algorithms lead to similar RMSE, but Algorithm <ref> is slower overall. We therefore prefer to use Algorithm <ref> unless the data is heavily polluted by the non-Gaussian type of noise, such as Laplace noise.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we exploit high-order smoothness across the temporal domain and adaptive low-rankness for time-varying graph signal recovery. In particular, we propose a novel graph signal recovery model based on a hybrid graph regularization involving a general-order temporal difference, together with an error-function weighted nuclear norm. We also derive an effective optimization algorithm with guaranteed convergence by adopting a reweighting scheme and the ADMM framework. Numerical experiments have demonstrated the efficiency and accuracy of the proposed methods. In the future, we will explore using high-order difference schemes to construct a temporal Laplacian, together with low-rankness, for recovering graph signals with dynamic graph topology.
§ ACKNOWLEDGEMENTS
The authors would like to thank the support from the American Institute of Mathematics during 2019-2022 for making this collaboration happen. WG, YL, and JQ would also like to thank the Women in Data Science and Mathematics Research Workshop (WiSDM) hosted by UCLA in 2023 for the support of continuing this collaboration. YL is partially supported by NSF CAREER 2414705.
JQ is partially supported by the NSF grant DMS-1941197. MY was partially supported by the Guangdong Key Lab of Mathematical Foundations for Artificial Intelligence, the Shenzhen Science and Technology Program ZDSYS20211021111415025, and the Shenzhen Stability Science Program.
|
http://arxiv.org/abs/2405.09876v1 | 20240516075551 | Engineering Challenges in All-photonic Quantum Repeaters | [
"Naphan Benchasattabuse",
"Michal Hajdušek",
"Rodney Van Meter"
] | quant-ph | [
"quant-ph",
"cs.NI"
] |
Engineering Challenges in All-photonic Quantum Repeaters
Naphan Benchasattabuse14,
Michal Hajdušek14,
and Rodney Van Meter34
1Graduate School of Media and Governance, Keio University Shonan Fujisawa Campus, Kanagawa, Japan
3Faculty of Environment and Information Studies, Keio University Shonan Fujisawa Campus, Kanagawa, Japan
4Quantum Computing Center, Keio University, Kanagawa, Japan
{whit3z,michal,rdv}@sfc.wide.ad.jp
May 20, 2024
======================================================================================================================================================================================================================================================================================================================================================================================
Quantum networking, heralded as the next frontier in communication networks, envisions a realm where quantum computers and devices collaborate to unlock capabilities beyond what is possible with the Internet.
A critical component for realizing a long-distance quantum network, and ultimately, the Quantum Internet, is the quantum repeater.
As with the race to build a scalable quantum computer with different technologies, various schemes exist for building quantum repeaters.
This article offers a gentle introduction to the two-way “all-photonic quantum repeaters,” a recent addition to quantum repeater technologies.
In contrast to conventional approaches, these repeaters eliminate the need for quantum memories, offering the dual benefits of higher repetition rates and intrinsic tolerance to both quantum operational errors and photon losses.
Using visualization and simple rules for manipulating graph states, we describe how all-photonic quantum repeaters work.
We discuss the problem of the increased volume of classical communication required by this scheme, which places a huge processing requirement on the end nodes.
We address this problem by presenting a solution that decreases the amount of classical communication by three orders of magnitude.
We conclude by highlighting other key open challenges in translating the theoretical all-photonic framework into real-world implementation, providing insights into the practical considerations and future research directions of all-photonic quantum repeater technology.
§ INTRODUCTION
The Internet has transformed how we interact with each other and with the world to the point that it has become an indispensable part of our lives.
The possibility of connecting quantum devices together in a similar fashion has sparked the interest of scientists, who imagine the impact and the new capabilities that the Quantum Internet will unlock <cit.>.
Long-range entanglement, the primary resource in quantum networks, can enable a multitude of industrially and scientifically useful applications.
These include generating secret keys to make communications over the Internet more secure, building better sensors (such as telescopes with improved resolution), increasing the computational power of quantum computers by connecting them together, and executing blind quantum computations, where programs are computed off-site while nothing about the program, the input, or the output is known to the off-site server.
The quantum network, akin to the classical computer network, connects quantum computers or quantum devices together.
However, the basic task of a quantum network is not merely sending or receiving quantum data; it is distributing generic entangled states between two or more distant parties in the network.
These generic entangled states can then be consumed to execute the applications previously mentioned, including transferring data.
Sharing these generic entangled states equates to establishing quantum communication channels.
Each entangled state is single-use, thus many applications consume a large quantity of this resource.
Therefore, distributing high-quality entanglement at a fast rate is crucial in realizing a usable quantum network.
The smallest entangled state that a quantum network needs to be able to distribute, which can be used to build larger entangled states, is the Bell pair[A Bell pair is a maximally entangled bipartite state, a state where measuring one qubit gives a complete description of the state of the other qubit.].
Although it would be more efficient for the network to generate generic multipartite entangled states, it makes managing the network layer itself much more difficult.
We consider a network that only distributes Bell pairs, and if the application requires larger entangled states, the application layer can create them from these Bell pairs.
The primary mechanism for generating long-distance Bell pairs between network nodes relies on exchanging single photons.
Attenuation of optical signals in fiber leads to an exponentially vanishing probability of photon arrival as the distance between network nodes increases, limiting practical quantum communication via direct photon transmission to only a few tens of kilometers.
To overcome this distance and scaling limit, quantum repeaters were introduced <cit.>.
Unlike the classical repeaters, where signals are amplified or regenerated, arbitrary quantum states cannot be copied due to the “no-cloning” theorem.
Instead, with the help of stationary quantum memories[We use the term stationary quantum memory to emphasize that the qubits are located in the repeaters.], quantum repeater nodes store the link-level Bell pairs – Bell pairs generated via optical fiber – and join them together to make them span multiple hops.
It is natural to think that quantum networks and quantum repeaters cannot be built before a working quantum computer because both share a need for controllable, long-coherence-time quantum memories.
In the last decade, however, Azuma et al. <cit.> showed that this is in fact not necessary by introducing an all-photonic scheme without any quantum memories.
This seminal result led to a new variation of quantum repeaters <cit.> whose development can be independent of the development of quantum computers or quantum memories.
Although this transition shifts the hardware challenge to the creation of a deterministic and controllable source of indistinguishable photons, it concurrently broadens the spectrum of methodologies for realizing quantum repeaters.
It is crucial to note that a quantum network is a hybrid system and that classical networking is an indispensable part of the quantum communication infrastructure.
Classical channels are used to transmit information about measurement outcomes, enabling error correction, and establishing entanglement between distant quantum nodes.
Therefore, while quantum repeaters represent a significant stride toward scalable quantum communication networks, their seamless integration with classical networking components remains a fundamental requirement for the realization of practical and reliable long-distance quantum communication.
In particular, classical communication plays a major role in the all-photonic repeater scheme, potentially imposing limitations on the overall Bell pair generation rate.
The scheme introduces redundancy to photonic qubits through quantum error-correcting codes to forego the necessity for quantum memories.
However, the ratio of the photons to end-to-end Bell pairs can span several orders of magnitude.
All the generated photons must be measured and tracked in order to create a Bell pair between two end nodes, thus necessitating a high bandwidth and processing power at the end nodes.
The contributions of this article are summarized as follows.
We first give a tutorial on the all-photonic quantum repeater scheme based on the repeater graph state (RGS), proposed by Azuma et al. <cit.>.
We continue by highlighting a particular disadvantage of this scheme, namely that better performance requires ever larger classical information that must be communicated and processed by the end nodes.
We propose a communication protocol that reduces the bandwidth and the data that end nodes are required to process.
We end the article by identifying some of the key open problems and potential solution ideas in realizing the RGS scheme.
§ QUANTUM REPEATERS
The quantum repeater <cit.> stands as a cornerstone in the development of quantum communication networks, enabling the distribution of entanglement over long distances.
At its core, the quantum repeater operates by splicing two shorter Bell pairs into a longer Bell pair through a process known as entanglement swapping.
To establish an end-to-end Bell pair, the process begins with the generation of link-level entanglement – a Bell pair between two adjacent nodes.
Through a series of entanglement-swapping iterations, this link-level entanglement is transformed into an end-to-end Bell pair, as shown in Fig. <ref>.
Quantum repeaters can be classified into three generations <cit.> based on how they manage quantum operational and photon loss errors (see Table <ref> for more details).
First-generation repeaters (1G) manage both errors in a heralded fashion.
Second-generation repeaters (2G) still manage the loss error via a heralded approach while using quantum error-correcting codes to manage quantum operational errors <cit.>.
Third-generation repeaters (3G) circumvent both types of errors via quantum error correction.
Generating shared entangled states is accomplished via distributed computation in 1G and 2G networks.
A unique feature of 3G networks is that they also enable store-and-forward, packet-switched transmission of quantum data.
This is done by decoding and re-encoding the encoded photonic quantum states at every repeater station <cit.>.
2G networks place very high demands on link-level entanglement generation rates, and local gate and memory operation fidelities.
3G networks require high photon detection probability and local gate fidelities.
The all-photonic quantum repeater <cit.> is classified as 3G, placing rather different demands on the development of hardware components.
It should be noted that generations with higher numbers do not imply a better performance.
Each generation has its parameter regime of hardware characteristics where it is expected to perform best.
Thus, the generation of repeaters is a guide to how one would choose to build a network based on the equipment at hand.
The choice between all-photonic and memory-based repeater schemes depends on several considerations.
Memory-based repeaters are favored when high-quality quantum memories are available.
This means the memories have long coherence time, high-fidelity
[Fidelity is a measure of how close the actual physical state or operations are to the ideal ones.]
local gates, and can be coupled to optical fibers, as detailed later in challenges and open problems section.
Conversely, all-photonic schemes are better suited for scenarios where good quantum memories are unavailable but high-quality photon sources and adaptive measurement devices are.
§ ALL-PHOTONIC QUANTUM REPEATERS
Interestingly, the entanglement swap does not need to be performed after link-level entanglement is created.
It can be performed first, provided that the probability of link-level generation is high enough.
This can be achieved by introducing redundancy in the number of trials encoded via a particular quantum state known as the repeater graph state (RGS).
This “time-reversed” procedure is the core concept of the all-photonic quantum repeater <cit.>.
In subsequent discussions, we will refer to this repeater scheme as the RGS scheme.
§.§ Repeater graph state
The workings of the RGS scheme are best explained via the language of graph states.
Graph states <cit.> are a class of quantum states that have an intuitive description in terms of mathematical graphs, as shown in Fig. <ref>.
Each vertex in the graph corresponds to a qubit while the edges connecting them correspond to an entangling operation between the two qubits.
Graph states belong to a special class of quantum states that have an efficient description that scales quadratically in the number of qubits.
Manipulations of these graph states admit efficient and compact mapping as long as the operation maps one graph state to another.
In this picture, vertices that belong to the same connected component are parts of the same entangled system.
A Bell pair, viewed from the graph state formalism, is a two-vertex graph with an edge connecting the two vertices.
The graph state representation of a quantum state exhibits a non-uniqueness of its description.
Two different graphs can describe two equivalent graph states, provided the two states are related by application of local Clifford operations[Single qubit Clifford gates encompass rotations around the X, Y, or Z axis of a single qubit, with the rotation angle being an integer multiple of π/2.].
The Clifford operator acting on each vertex is called a vertex operation (VOP), shown in Fig. <ref> as a letter next to the vertex except for when the VOP corresponds to the identity operator I.
The application of single qubit Clifford operators to a graph state can result in a distinct graph state representation, possibly changing its original edge structure and VOPs.
Notably, employing local complementation[Local complementation on a vertex v of a graph is obtained by complementing or inverting a subgraph induced by the neighbors of v. Simply put, for every two neighbor vertices of v, delete the edge if they are previously connected or join them with an edge if they are not.] on a graph induces a graph with an alternative description of the same quantum state, as illustrated in Fig. <ref>.
Measuring a qubit in a Pauli basis results in another graph state.
In this article, we only need to familiarize ourselves with Z basis measurement and X basis measurements performed on two adjacent vertices that share no neighbors, also called XX measurement.
The effect of these two sequences of measurements is shown in Fig. <ref>.
The RGS comprises 2m physical qubits, referred to as outer qubits, and 2m logical qubits, referred to as inner qubits.
The inner qubits are arranged in a complete graph, with the outer qubits being connected to a single inner qubit, as shown in Step 1 of Fig. <ref>.
RGS is defined by the parameter m and a branching vector b⃗, where m refers to the number of arms and b⃗ = (b_1, b_2, …, b_n) denotes the logical tree encoding of the inner qubits used for counterfactual measurement.
These two parameters determine how much loss and error the RGS can tolerate.
§.§ RGS scheme
In the RGS scheme, the previously designated repeater nodes and BSA nodes are redefined as RGS source nodes (RGSS) and adaptive measurement nodes, or advanced Bell state analyzer nodes (ABSA).
As suggested by their names, RGSS generates the photonic RGS and transmits them to ABSA for measurement.
The RGS is generated at all RGSSs along the path between the two end nodes intending to share a Bell pair, as depicted in Step 1 of Fig. <ref>.
At each RGSS, the RGS undergoes a split into two halves, with one half directed to the left ABSA and the other to the right ABSA.
Each ABSA receives the halves of two distinct RGSs and executes three stages of adaptive measurements.
In the initial stage, ABSA performs rotated Bell-state measurement (BSM) between outer qubits from the left and right RGS, as shown in Step 2 of Fig. <ref>.
This rotated-BSM is akin to initially connecting the two vertices with an edge and subsequently performing the XX measurement on them.
This process is repeated for all m pairs.
Given the inherent probabilistic nature of the BSM[BSM implemented via linear optics has a theoretical limit of 50% success probability without introducing ancilla photons.], the success or failure of the outer qubits' BSM is uncertain, compounded by the possibility of photons being lost in the optical fiber during the journey from the RGSS to the ABSA.
If at least one pair of outer qubits is successfully measured in the first stage, ABSA moves on to the second stage.
Measuring a qubit in the Z basis is equivalent to removing the corresponding vertex from the graph state, as shown in Fig. <ref>.
Therefore, Z measurements are performed on the inner qubits connected to the outer qubits that failed the BSM, removing them from the RGS.
If all BSMs fail in any ABSA along the connection path, the entire connection attempt is deemed a failure and must be restarted.
The inner qubits linked to successfully measured outer qubits also undergo Z measurements, excluding a single pair.
Following the completion of all Z measurements, the resultant state forms a linear chain graph state.
Step 3 of Fig. <ref> shows the concluding stage, where all inner qubits in the linear chain graph state undergo measurement in the X basis, constituting the XX measurement since only two qubits are left in each ABSA at this point.
The outcome is a locally equivalent Bell pair shared between the two end nodes.
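To make the three stages concrete, below is a toy sketch of the decision logic at a single ABSA. It abstracts the stage-1 BSMs to independent coin flips (the success probability and seed are placeholders) and only returns which inner-qubit measurements to schedule; it does not model the quantum state itself.

```python
import random

def absa_measurement_plan(m: int, p_bsm_success: float, rng=random.Random(7)):
    """Decide the adaptive measurements at one ABSA for an RGS with m arms.

    Returns None if every outer-qubit BSM failed (the connection attempt
    aborts); otherwise (kept_arm, z_measured) where z_measured lists the arms
    whose inner qubits are removed via Z measurements before the final stage.
    """
    outcomes = [rng.random() < p_bsm_success for _ in range(m)]   # stage 1
    successes = [i for i, ok in enumerate(outcomes) if ok]
    if not successes:
        return None
    kept = successes[0]                           # keep exactly one successful pair
    z_measured = [i for i in range(m) if i != kept]               # stage 2
    return kept, z_measured                       # stage 3: X on the kept pair
```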
§.§ Logical measurements of inner qubits
Logical encoding of the inner qubits is an essential part of the RGS scheme that allows us to forego quantum memories.
The RGS scheme hinges on measurements of all inner qubits to be successful.
The tree encoding of inner qubits is mainly to combat photon loss.
The logical measurement operation on inner qubits is achieved by measurement of physical qubits, where the basis of measurements of all qubits in the same level of the tree is the same, and the basis of measurement changes depending on the even or odd level of the tree, as shown in the bottom panel of Fig. <ref>.
To perform a logical Z measurement, qubits in the odd levels of the tree are measured in the Z basis while those in the even levels are measured in the X basis.
Similarly, for the logical X measurement, measurement bases are X in the odd levels and Z in the even levels.
The success of these logical X and Z measurements is determined via the success or failure of the first level of physical qubit measurements.
In this tree structure, a physical qubit inside the tree can be measured indirectly in the Z basis – the measurement is considered successful even if the photon for this qubit is lost, because the outcome can be deduced from the measurement results of its neighbors and their neighbors one and two levels below it.
It is also worth noting that the success probabilities of logical Z and X measurements are different.
As seen from Fig. <ref>, a logical Z measurement is successful if all the first level Z measurements on physical qubits, either direct or indirect, are successful.
A logical X measurement succeeds if at least one of the X measurements at the first level succeeds.
This leads to a nontrivial tradeoff between increasing the number of arms m to increase the link-generation probability and the probability of successfully measuring all inner qubits.
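A rough numerical illustration of this tradeoff is sketched below, under simplified assumptions: losses are independent, a linear-optics BSM succeeds with probability q only when both photons arrive, and the tree recursion is collapsed into a single per-measurement success probability p_logical (even though, as noted above, logical X and Z successes differ in practice). The parameter values are placeholders.

```python
def link_success(m: int, p_arrive: float, q_bsm: float = 0.5) -> float:
    """P(at least one of the m outer-qubit BSM attempts succeeds)."""
    p_single = q_bsm * p_arrive ** 2   # both photons must arrive, then the BSM fires
    return 1.0 - (1.0 - p_single) ** m

def absa_success(m: int, p_arrive: float, p_logical: float,
                 q_bsm: float = 0.5) -> float:
    """Success at one ABSA: one arm connects AND all 2m tree-encoded
    inner-qubit measurements succeed."""
    return link_success(m, p_arrive, q_bsm) * p_logical ** (2 * m)

for m in (4, 8, 16, 32):   # more arms help the link but tax the tree measurements
    print(m, round(absa_success(m, p_arrive=0.9, p_logical=0.99), 4))
```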
§.§ Advantages of the RGS scheme
The RGS scheme offers a higher Bell pair generation rate compared to the traditional memory-based approaches.
This is due to the time-reversed protocol since repeaters no longer need to hold quantum memories awaiting messages heralding link-level generation before entanglement swapping can be performed.
The generation rate is only limited by the RGS generation time, thus a link shared between two RGSS nodes is only occupied by this generation time and can be reallocated for another connection much faster.
The waiting time at end nodes, on the other hand, is still limited by the same duration as memory-based repeaters: the time it takes for a message from the furthest ABSA to arrive.
It is not yet clear how shortening the busy time of the RGSS and ABSA nodes affects the overall performance and the service quality of a network.
This suggests that figures of merit for routing in the RGS scheme might be different from those of memory-based repeaters.
§ CLASSICAL COMMUNICATION FOR THE RGS SCHEME
Current research efforts prioritize the optimal use of quantum resources <cit.> (e.g., number of photons in the RGS, how the RGS is generated, or tradeoff between end-to-end Bell pair rate against the number of quantum emitters) while treating classical messages as a free resource.
In a real-world implementation, the network bandwidth that these classical messages need to take and their processing could become the bottleneck of the end-to-end Bell pair generation rate.
Consider end nodes separated by 1,000 km with RGSSs located every ∼4 km, and fiber attenuation of 0.2 dB/km.
The RGS structure that maximizes the Bell pair rate per emitter is m = 14 and b⃗ = (10, 5), as shown in <cit.>.
The total amount of classical information that needs processing is 545,462 bits.
Various quantum network applications require Bell pairs generated at megahertz rates, thus half a terabit of information needs to be processed per second to prepare these Bell pairs.
In the original proposal of all-photonic repeaters <cit.>, all this information is to be communicated to the end nodes for processing.
In this section, we propose to process this information in a distributed fashion in order to decrease the total load on the end nodes.
§.§ Clifford Side Effects (VOPs) of the RGS
Until now, we have assumed that the RGSSs are capable of generating the RGS without any VOPs, which is not always the case.
Generation of the RGS either via photonic fusion <cit.> or deterministic quantum emitters <cit.> introduces possible VOPs to most of the physical qubits since both rely on joining smaller graph states via measurements.
Therefore, one can either first apply operations to the photons at RGSS before sending them out to ABSA or track all the VOPs of all the qubits and send this information to be processed at end nodes.
We consider the latter approach, where the photons are sent to ABSA immediately after being generated and the information regarding the VOPs is sent later.
The crucial part is that the ABSA does not need to wait for this VOP information in order to decide which basis to measure in, provided that the possible set of VOPs is restricted.
This is true in the case of the RGS generation proposed in <cit.>, where the VOPs are restricted to be I or Z.
ABSA performs Pauli Z and X measurements on the inner qubits.
Only the X measurement results get flipped if the qubit has Z as its VOP[The Pauli Z operator (π rotation around the Z axis) inverts the directions of the X axis.].
The measurement outcomes are the same as having no VOPs in all other cases.
This implies that the ABSA does not need to know the VOPs of qubits in advance to select the basis of measurements and can follow the steps mentioned previously without any modification.
§.§ Transmission order of photons in the RGS
The basis selection of the inner qubits is dependent on the BSM outcome of the outer qubits.
It can be seen that as long as the outer qubits arrive at the ABSA before their connected inner qubits, the measurement basis can be determined locally by the ABSA.
The physical qubits composing the inner qubits can be sent in any order as long as it is known to the ABSA beforehand.
We note that even if the photons are lost in the fiber, assuming that the photons are well separated temporally, the ABSA can deterministically flag the loss event.
This well-separated assumption is also commonly adopted in memory-based repeater schemes for multiplexing photons from multiple memories into a single fiber <cit.>.
§.§ One-Stage versus Two-Stage Correction Method
We now consider the number of classical bits that needs to be processed to obtain a single deterministic end-to-end Bell pair.
Each measured photon produces two bits of information; the measurement outcome, and the VOP.
One method of correcting the VOPs is to communicate the classical information to the end nodes.
This is the usual method in the literature <cit.>, and we refer to it as the One-Stage Correction Method.
We consider an optimized structure of the RGS from <cit.> in terms of the number of outer qubits m and the encoding parameter b⃗.
This structure tries to maximize the Bell pair generation per quantum emitter given the distance between the end nodes.
The total number of classical bits per Bell pair that need to be communicated to and processed by the end nodes is shown in Fig. <ref> by the blue bars.
To lessen the communication and processing load placed on the end nodes, we propose a Two-Stage Correction Method.
First, each ABSA gathers the VOP information from its adjacent RGSSs and reduces it down to only 2 bits of information; the two outcomes of X measurements on the appropriate inner qubits.
Next, these two bits are sent to be processed at the end nodes, calculating which correction operations are required to be performed to obtain the correct Bell pair.
Splitting the correction process into two stages has two clear advantages.
First, the bulk of the classical information generated by the RGS scheme does not need to be communicated to the end nodes, reducing the total load on the network and on the end nodes themselves.
Second, by processing the generated classical information in a distributed fashion at the ABSAs, the end nodes can compute the final correction faster and with less effort.
The new amount of classical information that has to be received and processed by the end nodes is represented by the orange bars in Fig. <ref>, showing an improvement of three orders of magnitude compared to the One-Stage Method.
The Two-Stage Method incurs only a constant processing load on the ABSAs, as shown by the red dashed line in Fig. <ref>.
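The bookkeeping behind this comparison can be summarized in two lines; the per-photon count of two bits (outcome plus VOP) and the two reduced bits per ABSA are from the text, while the node and photon counts passed in are placeholders that depend on the chosen RGS structure and repeater spacing.

```python
def one_stage_bits(photons_per_rgs: int, n_rgss: int) -> int:
    """Every measured photon contributes an outcome bit and a VOP bit."""
    return 2 * photons_per_rgs * n_rgss

def two_stage_bits(n_absa: int) -> int:
    """Each ABSA forwards only the two reduced X-measurement outcome bits."""
    return 2 * n_absa
```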
§ CHALLENGES AND OPEN PROBLEMS
Although the all-photonic repeater is well understood at the theoretical level, there are engineering challenges in designing protocols between network nodes and other open problems, where further research is needed to improve the practicality of the scheme.
Generation of RGS. —
Creating highly entangled states with flying photonic qubits is difficult.
One approach uses the photonic fusion operator, which is a probabilistic process for entangling two multi-photon states by performing a joint measurement on two of their photons.
This approach imposes a large overhead penalty if the success probability of fusion is low.
Another way is to create the RGS deterministically via quantum emitters where entangling gates between emitters can also be deterministically applied <cit.>.
In theory, this is more efficient and would enable a faster generation rate but the current hardware lacks the required functionality.
Recent experimental advances in photonic fusion gates <cit.>, which can also be used to entangle quantum emitters, suggest that the two approaches do not need to be mutually exclusive.
Table <ref> summarizes state-of-the-art experimental data on quantum memories <cit.>, which play a pivotal role in both memory-based and RGS scheme implementations with deterministic generation approaches. As seen from the table, different memory types exhibit different characteristics. There is no clear winner between RGS scheme repeaters and memory-based repeaters, nor is there a clear preference among different generations of repeaters. However, the RGS scheme performs best when end-to-end coupling efficiencies exceed 85%, gate fidelity approaches ideal, and emitter coherence times exceed 2500 times the gate duration <cit.>.
Optimization of RGS structure depending on link characteristics and connection paths. —
In a complex network, the path between two nodes is not fixed and is determined upon request via a routing algorithm. However, the RGSS decides the two parameters m and b⃗ based on path characteristics.
Furthermore, if the error characteristics of each link are different, determining a good RGS structure is necessary to maintain good services of the network.
Currently, the optimal RGS structure for a given separation distance with RGSS and ABSA nodes evenly placed is found via exhaustive search <cit.>.
There is a need for algorithms that find good RGS structures on the fly for each connection path and can also adapt to changes in link characteristics.
Time synchronization between nodes in the network. —
In order for the link-level generation to succeed, the two photons sent to ABSA for BSM (one from the left and another from the right) need to arrive at the same time[The two photons need to be indistinguishable for the BSM to be successful. Thus, this time window that can be considered as “same time” varies with the efficiency of photon detectors and also the properties of photons themselves.].
For the memory-based quantum repeaters, time synchronization is required only between each pair of repeaters that share a link because the generated Bell pairs are then stored in memories.
On the other hand, the time synchronization for the RGS scheme is more complex if the end-to-end RGS is created in one go, leading to the need to synchronize the clocks for all the RGSS nodes and ABSA nodes along the connection path.
This problem can be alleviated if certain parts of RGS generation are delayed, causing a drop in the generation rate, or the two halves of the RGS, which need to be sent to different ABSAs, are generated independently.
End node participation. —
So far, research has focused mainly on the RGSS and the ABSA, making it unclear how end nodes should participate in the case where the generated Bell pairs are to be used for applications beyond quantum key distribution.
These applications often require end nodes to have actual corrected Bell pairs.
The Bell pairs need to be stored in quantum memories, as opposed to just directly measuring the RGS and processing only the classical outcomes for secret key generation.
One approach would require end nodes to have the same number of available quantum memories as the number of RGS arms, all emitting photons to the ABSA.
Upon success, only one of the memories holds the generated Bell pair, wasting the capacity of storing more entangled states in the memories.
A better approach would be to create just half of the RGS and anchor it to the memory.
Integration with memory-based repeaters. —
It is often thought that the all-photonic quantum repeaters and the memory-based repeaters are incompatible with each other.
The approach for including end nodes in the RGS scheme could bridge this incompatibility.
Due to the high generation rate and short busy time of RGSS and ABSA, combining the two architectures could lead to a better service in a quantum network.
The research area of combining different types of repeaters in a single network is still an open question to be explored.
Routing, multiplexing, and multipartite state distribution. —
Although the basic service of a quantum network is to distribute Bell pairs, distributing multipartite states directly from the network instead of creating them at the application layer can be much more efficient, especially in the early stages where quantum memories have short coherence times.
It is unclear whether this could be efficiently achieved for the RGS scheme or not since the behavior of the RGS scheme is akin to the circuit-switched network and the interplay between routing and multiplexing is not fully understood in the more flexible operations of memory-based repeaters.
This creates a need for efficient quantum network simulators to study this emergent behavior, which is likely different from the patterns seen in classical networks.
The open problems and challenges listed here are by no means exhaustive.
Some of the above open problems are being addressed <cit.>.
§ CONCLUSION
The all-photonic quantum repeater represents an alternative and sometimes overlooked approach towards the realization of quantum repeaters.
We have provided the operational intuition as a foundation for understanding the topic.
However, many engineering challenges still await solutions.
We proposed a distributed approach to dealing with the large volume of classical information generated per Bell pair, making all-photonic repeaters more amenable towards practical implementation.
Moreover, we have highlighted various open questions and research opportunities that beckon exploration, serving as stepping stones toward the ultimate realization of robust and scalable quantum networks.
§ ACKNOWLEDGMENT
This work is supported by JST [Moonshot R&D][JPMJMS226C-104].
ChatGPT, a language model developed by OpenAI, was used in the development of this article for grammar checking, paragraph rewriting, and overall text improvement without generating novel content.
|
http://arxiv.org/abs/2405.09748v1 | 20240516011413 | A Mathematical Reconstruction of Endothelial Cell Networks | [
"Okezue Bell",
"Anthony Bell"
] | q-bio.CB | [
"q-bio.CB",
"math.CO",
"math.GR",
"q-bio.QM"
] |
A Mathematical Reconstruction of Endothelial Cell Networks
Okezue Bell, Anthony Bell
May 20, 2024
==============================================================
§ INTRODUCTION
The endothelium, a monolayer of endothelial cells lining the blood and lymphatic vessels, plays a central role in vascular homeostasis <cit.>. Far from a passive conduit for blood flow, the endothelium actively regulates vascular permeability, blood coagulation, angiogenesis, and inflammatory responses. These functions arise from the ability of endothelial cells to form complex networks with specialized cell-cell junctions. Adherens junctions and tight junctions are ubiquitous structures that mechanically and chemically couple endothelial cells, while gap junctions mediate the passage of ions and small signaling molecules between cells. More recently discovered structures, such as nectin junctions, also contribute to endothelial connectivity. Collectively, endothelial cell-cell junctions enable the coordination of endothelial responses across vascular networks.
Endothelial network integrity is critical for vascular health, as disruption of interendothelial junctions and resulting increases in vascular permeability are key events in atherosclerosis, ischemia-reperfusion injury, sepsis, and other disease states <cit.>. Aberrant endothelial connectivity also enables the pathological angiogenesis seen in cancer and eye diseases <cit.>. Quantitative analysis of endothelial network structure therefore has the potential to yield new biomarkers and therapeutic strategies for vascular dysfunction <cit.>.
Network representations have become an invaluable tool for understanding the structure and function of complex biological systems. Prominent examples include neuron maps <cit.>, metabolic system representations <cit.>, cell-cell communication models <cit.>, and protein-protein interaction networks <cit.>. In these contexts, network models have revealed fundamental organizing principles such as small-world topology, modularity, and centrality. A unifying mathematical language for biological networks has emerged from graph theory, which captures connectivity patterns between large numbers of interacting elements at a high fidelity.
However, standard graphs are not sufficient to capture the rich junction architecture connecting endothelial cells into vascular networks. A single edge between two cells cannot represent the multiple junction types that may exist between them<cit.>. Instead, endothelial network connectivity is inherently a multi-relation structure. Furthermore, the concept of the line graph, which can represent multi-edges, does not naturally capture the distinct types of endothelial junctions and their set-like organization around cells. As a result, a more sophisticated mathematical representation is needed.
Here we develop a mathematical formalism called π-graphs to meet this need. π-graphs are abstract structures that faithfully represent the multi-junction connectivity of endothelial networks using an intuitive set-based language. We define π-graphs and their morphisms, and prove several results linking π-graphs to their underlying unnested endothelial graphs. We also describe a framework based on topological realization to represent the spatial embedding of endothelial networks. Our work provides an expressive, tractable, and generalizable framework to quantitatively interrogate endothelial networks.
§ MODELS AND METHODS
§.§ Basic Definitions
We begin by formally defining the notion of an endothelial π-graph. Let ℰ be a finite set, whose elements we call endothelial cells (ECs). We consider four types of intercellular junctions that can connect ECs:
* Adherens junctions (AJs): Cell-cell adhesion structures that mechanically link adjacent ECs via homophilic interactions between transmembrane VE-cadherin proteins<cit.>.
* Tight junctions (TJs): Multiprotein complexes composed of claudins, occludins, and adaptor proteins that form a seal between adjacent ECs to regulate paracellular permeability<cit.>.
* Gap junctions (GJs): Intercellular channels formed by connexin proteins that allow direct exchange of ions and small molecules (<1 kDa) between the cytoplasm of adjacent ECs<cit.>.
* Nectin junctions (NJs): Adhesive structures formed by heterophilic interactions between nectin proteins that are distinct from but colocalize with AJs and help initiate cell-cell contacts<cit.>.
We represent each of these junction types as a distinct symmetric binary relation on the set ℰ:
Let ∼_AJ, ∼_TJ, ∼_GJ, ∼_NJ be symmetric binary relations on ℰ representing adherens junctions, tight junctions, gap junctions, and nectin junctions respectively. The set of all junction relations is denoted 𝒥_ℰ := {∼_AJ, ∼_TJ, ∼_GJ, ∼_NJ}.
The relations in 𝒥_ℰ jointly determine the connectivity of the endothelial network. To capture the set of junctions incident at each EC, we introduce the notion of π-incidence:
A π-incidence on ℰ is a function π : ℰ→𝒫(𝒥_ℰ) satisfying:
* (Junction Consistency) For all x ∈ℰ and ∼∈𝒥_ℰ,
∼∈π(x) ⟺ ∃ y ∈ℰ s.t. x ∼ y
* (Nondegeneracy) For all x ∈ℰ, π(x) ≠∅
The Junction Consistency condition states that a cell x is π-incident to a junction relation ∼ if and only if x is connected by ∼ to some other cell. The Nondegeneracy condition requires that each cell participates in at least one junction. Note that distinct elements x,y ∈ℰ may share multiple junction relations, i.e. |π(x) ∩π(y)| > 1. This multi-relational structure is a key feature of endothelial networks that motivates the π-graph formalism.
With these ingredients, we can now define the central object of study:
An endothelial π-graph is a tuple G = (ℰ,π) where ℰ is a finite set of ECs and π is a π-incidence on ℰ. The set of all π-graphs is denoted ϖ.
Thus, a π-graph G ∈ϖ encodes the network connectivity of an endothelial monolayer as a set ℰ of ECs and a π-incidence reflecting the junction architecture connecting the ECs. Figure <ref> illustrates a simple example.
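To make the definition concrete, below is a minimal sketch of a π-graph as a data structure. The encoding is illustrative (names and the validity check are not part of the formalism): junction relations are stored as sets of unordered pairs, so Junction Consistency holds by construction.

```python
JUNCTIONS = {"AJ", "TJ", "GJ", "NJ"}

class PiGraph:
    """Illustrative encoding of a pi-graph: cells plus typed junction relations."""

    def __init__(self, cells, relations):
        # relations: dict mapping junction type -> set of frozenset({x, y}) pairs
        self.cells = set(cells)
        self.relations = {j: set(relations.get(j, ())) for j in JUNCTIONS}

    def pi(self, x):
        """pi-incidence of x: junction types linking x to some other cell.

        Junction Consistency is automatic, since pi is derived from relations.
        """
        return {j for j, pairs in self.relations.items()
                if any(x in pair for pair in pairs)}

    def nondegenerate(self):
        """Nondegeneracy: every cell participates in at least one junction."""
        return all(self.pi(x) for x in self.cells)

G = PiGraph({"x1", "x2", "x3"},
            {"AJ": {frozenset({"x1", "x2"})},
             "GJ": {frozenset({"x2", "x3"})}})
assert G.pi("x2") == {"AJ", "GJ"} and G.nondegenerate()
```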
§.§ Elementary Properties
π-graphs satisfy a number of basic properties that reflect natural features of endothelial networks. We highlight a few key properties here.
First, the local junction architecture around an EC is encoded by its π-incidence:
Let G = (ℰ,π) be a π-graph. For any x ∈ℰ, the set π(x) uniquely determines the set of all ECs sharing a junction with x, i.e.
{y ∈ℰ : ∃∼∈π(x) s.t. x ∼ y} = ⋃_∼∈π(x) [x]_∼∖{x}
where [x]_∼ denotes the equivalence class of x under the relation ∼.
Let y ∈ℰ with y ≠ x. If y shares a junction with x, then there exists ∼∈π(x) such that x ∼ y. By definition of equivalence class, this implies y ∈ [x]_∼∖{x}.
Conversely, suppose y ∈ [x]_∼∖{x} for some ∼∈π(x). By definition of equivalence class, we have x ∼ y, so x and y share a junction. Therefore
{y ∈ℰ : ∃∼∈π(x) s.t. x ∼ y} = ⋃_∼∈π(x) [x]_∼∖{x}
as desired.
This property allows us to recover the local "junction neighborhood" of a cell from its π-incidence alone. In biological terms, the junction neighborhood reflects the set of all cells that can directly communicate with or mechanically influence a given cell.
Next, we consider the global connectivity of π-graphs. Define a path of length n from x to y in G as a sequence of cells x = x_0, x_1, …, x_n = y such that for each 0 ≤ i < n, there exists ∼∈π(x_i) ∩π(x_i+1) with x_i ∼ x_i+1. We say G is connected if there exists a path between any two cells:
A π-graph G = (ℰ,π) is connected if and only if for all x,y ∈ℰ, there exists a path from x to y.
Connectivity of the π-graph captures the ability of ECs to communicate across the endothelial monolayer via a combination of junction types. This communication may take the form of mechanical forces transmitted via AJs, chemical and electrical signals propagated by GJs, or regulation of paracellular transport through the coordination of TJs and NJs. In this sense, π-graph connectivity abstractly reflects integrated endothelial function.
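Connectivity in this sense is easy to verify computationally. Building on the illustrative PiGraph sketch above, a breadth-first search over the union of all junction relations suffices:

```python
from collections import deque

def is_connected(G: "PiGraph") -> bool:
    """BFS over the union of all junction relations of an illustrative PiGraph."""
    if not G.cells:
        return True
    adj = {x: set() for x in G.cells}
    for pairs in G.relations.values():
        for pair in pairs:
            a, b = tuple(pair)
            adj[a].add(b)
            adj[b].add(a)
    start = next(iter(G.cells))
    seen, queue = {start}, deque([start])
    while queue:
        for nb in adj[queue.popleft()] - seen:
            seen.add(nb)
            queue.append(nb)
    return seen == G.cells
```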
Finally, we define an operation called π-union that models the merging of two π-graphs over a common subset of ECs:
Let G_1 = (ℰ_1, π_1) and G_2 = (ℰ_2,π_2) be π-graphs with ℰ_1 ∩ℰ_2 ≠∅. The π-union of G_1 and G_2 is the π-graph G_1 ⊔_π G_2 = (ℰ_1 ∪ℰ_2, π_1∪2) where for x ∈ℰ_1 ∪ℰ_2,
π_1∪2(x) =
π_1(x) if x ∈ℰ_1 ∖ℰ_2
π_2(x) if x ∈ℰ_2 ∖ℰ_1
π_1(x) ∪π_2(x) if x ∈ℰ_1 ∩ℰ_2
Intuitively, the π-union glues together two π-graphs along their common ECs and unions the π-incidences of the common ECs (Figure <ref>). This models the formation of a larger endothelial tissue by merging two smaller tissues with some overlapping cells. The notation ⊔_π was chosen to suggest a modified disjoint union operation that preserves the common elements and their π-incidences.
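In the same illustrative encoding, the π-union simply merges the cell sets and unions the typed junction relations; since π-incidence is derived from the relations, the three cases of the definition are handled automatically:

```python
def pi_union(G1: "PiGraph", G2: "PiGraph") -> "PiGraph":
    """Glue two illustrative PiGraphs along their common cells."""
    merged = {j: G1.relations[j] | G2.relations[j] for j in JUNCTIONS}
    return PiGraph(G1.cells | G2.cells, merged)
```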
The π-union provides a natural way to build up complex π-graphs from simpler building blocks, and to decompose π-graphs into components. One can show that π-union is associative and commutative up to π-isomorphism (defined in the next section), giving the set of π-graphs the algebraic structure of a commutative monoid.
Having established the basic definition and properties of π-graphs, we turn next to the notion of π-graph isomorphism to characterize when two π-graphs have the same connectivity structure.
§ RESULTS
§.§ pi-graph Isomorphisms
A fundamental question in the construction of such π-graphs is when two π-graphs have the same connectivity structure. To formalize this notion, we introduce the concept of π-graph isomorphism:
Let G_1 = (ℰ_1,π_1) and G_2 = (ℰ_2,π_2) be π-graphs. A π-graph isomorphism from G_1 to G_2 is a bijective function φ : ℰ_1 →ℰ_2 such that for all x ∈ℰ_1 and ∼∈𝒥_ℰ_1,
∼∈π_1(x) ⟺ φ(∼) ∈π_2(φ(x))
where φ(∼) := {(φ(a),φ(b)) : a ∼ b}.
If such an isomorphism exists, we say G_1 and G_2 are π-isomorphic and write G_1 ≅_π G_2. The set of all π-isomorphisms from G_1 to G_2 is denoted Iso_π(G_1,G_2).
Intuitively, a π-isomorphism is a bijection between the EC sets of two π-graphs that preserves the π-incidence structure. The condition ∼∈π_1(x) ⟺ φ(∼) ∈π_2(φ(x)) ensures that if two ECs are connected by a junction in G_1, then their images under φ are connected by the same type of junction in G_2, and vice versa.
Figure <ref> illustrates an example of π-isomorphic graphs, with the dashed arrows indicating the bijection φ that preserves the color-coded π-incidence structure.
The following proposition establishes basic properties of π-isomorphisms:
Let ϖ be the class of π-graphs.
* For any G ∈ϖ, the identity function id_ℰ : ℰ→ℰ is a π-isomorphism from G to itself.
* For any π-isomorphism φ : G_1 → G_2, the inverse function φ^-1 : ℰ_2 →ℰ_1 is a π-isomorphism from G_2 to G_1.
* For any π-isomorphisms φ : G_1 → G_2 and ψ : G_2 → G_3, the composition ψ∘φ : ℰ_1 →ℰ_3 is a π-isomorphism from G_1 to G_3.
Therefore, π-isomorphism defines an equivalence relation on ϖ.
(1) For any x ∈ℰ and ∼∈𝒥_ℰ, we have ∼∈π(x) ⟺ id_ℰ(∼) ∈π(id_ℰ(x)) since id_ℰ(∼) = ∼ and id_ℰ(x) = x. Thus id_ℰ is a π-isomorphism.
(2) Let φ : (ℰ_1,π_1) → (ℰ_2,π_2) be a π-isomorphism. For any y ∈ℰ_2 and ∼∈𝒥_ℰ_2, we have
∼∈π_2(y) ⟺ φ^-1(∼) ∈π_1(φ^-1(y))
⟺ {(φ^-1(a),φ^-1(b)) : a ∼ b}∈π_1(φ^-1(y))
since φ is a bijection. Thus φ^-1 is a π-isomorphism.
(3) Let φ : (ℰ_1,π_1) → (ℰ_2,π_2) and ψ : (ℰ_2,π_2) → (ℰ_3,π_3) be π-isomorphisms. For any x ∈ℰ_1 and ∼∈𝒥_ℰ_1, we have
∼∈π_1(x) ⟺ φ(∼) ∈π_2(φ(x))
⟺ ψ(φ(∼)) ∈π_3(ψ(φ(x)))
⟺ (ψ∘φ)(∼) ∈π_3((ψ∘φ)(x))
Thus ψ∘φ is a π-isomorphism.
Properties (1)-(3) are the reflexive, symmetric, and transitive properties of an equivalence relation<cit.>, respectively. Therefore, π-isomorphism is an equivalence relation on the class of π-graphs.
This proposition justifies the notation G_1 ≅_π G_2 for π-isomorphic graphs, and allows us to partition the class of π-graphs into equivalence classes [G]_π = {H ∈ϖ : H ≅_π G} represented by a distinguished element G ∈ϖ.
§.§ Relationship to Standard Graph Isomorphism
We explore the relationship between π-isomorphism and the standard notion of graph isomorphism. To make this relationship precise, we introduce an "unnested" representation of an endothelial π-graph.
Given a π-graph G = (ℰ,π), define the unnested endothelial junction graph (or simply unnested graph) of G as U_G = (ℰ, ⋃_x ∈ℰπ(x)). In other words, U_G is the graph whose vertices are the ECs of G and whose edges represent the individual junctions, without the π-incidence structure. Figure <ref> illustrates this construction.
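In the illustrative encoding used earlier, forming the unnested graph amounts to forgetting the junction types; a short sketch:

```python
def unnested_graph(G: "PiGraph"):
    """Vertices of G together with all junction pairs, junction types forgotten."""
    edges = set()
    for pairs in G.relations.values():
        edges |= pairs
    return G.cells, edges
```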
We can now relate π-isomorphisms of π-graphs to standard graph isomorphisms of their unnested graphs:
Let G_1 = (ℰ_1,π_1) and G_2 = (ℰ_2,π_2) be π-graphs with unnested graphs U_G_1 = (ℰ_1,R_1) and U_G_2 = (ℰ_2,R_2), respectively. If φ : ℰ_1 →ℰ_2 is a π-isomorphism from G_1 to G_2, then φ is a graph isomorphism from U_G_1 to U_G_2.
Let x,y ∈ℰ_1. We must show that (x,y) ∈ R_1 if and only if (φ(x),φ(y)) ∈ R_2. Suppose (x,y) ∈ R_1. By definition of unnested graph, there exists ∼∈π_1(x) such that x ∼ y. Since φ is a π-isomorphism, this implies φ(∼) ∈π_2(φ(x)) and φ(x) ∼φ(y). Therefore (φ(x),φ(y)) ∈ R_2. The converse follows by a symmetric argument.
This proposition shows that π-isomorphism is a stronger condition than unnested graph isomorphism, i.e. π-isomorphic π-graphs necessarily have isomorphic unnested graphs. However, the converse is not true in general:
There exist π-graphs G_1 and G_2 such that U_G_1 and U_G_2 are isomorphic as graphs but G_1 ≇_π G_2.
We construct a counterexample. Let ℰ = {x_1, x_2, x_3} and consider the following π-graphs:
G_1 = (ℰ,π_1) where π_1(x_1) = {∼_AJ}, π_1(x_2) = {∼_AJ,∼_GJ}, π_1(x_3) = {∼_GJ}
G_2 = (ℰ,π_2) where π_2(x_1) = {∼_AJ,∼_GJ}, π_2(x_2) = {∼_GJ}, π_2(x_3) = {∼_AJ}
and junction relations given by:
x_1 ∼_AJ x_2 and x_2 ∼_GJ x_3 in G_1, and x_1 ∼_GJ x_2 and x_1 ∼_AJ x_3 in G_2 (together with their symmetric counterparts).
The unnested graphs are:
U_G_1 = (ℰ,{(x_1,x_2),(x_2,x_1),(x_2,x_3),(x_3,x_2)})
U_G_2 = (ℰ,{(x_1,x_2),(x_2,x_1),(x_1,x_3),(x_3,x_1)})
which are isomorphic via the bijection φ : ℰ→ℰ defined by φ(x_1) = x_1, φ(x_2) = x_3, φ(x_3) = x_2.
However, G_1 ≇_π G_2 since there is no bijection ψ : ℰ→ℰ that preserves the π-incidence structure, i.e. such that ∼∈π_1(x) ⟺ ψ(∼) ∈π_2(ψ(x)) for all x ∈ℰ and ∼∈𝒥_ℰ. Intuitively, this is because the distribution of AJ and GJ among the ECs differs between G_1 and G_2, even though they have the same total number of junctions.
This proposition highlights a key feature of the π-graph formalism: it captures connectivity information beyond what is represented by the unnested graph. The additional information is precisely the π-incidence structure that encodes which subsets of ECs are connected by each junction type.
Despite this negative result, there are some conditions under which π-isomorphism and unnested graph isomorphism coincide:
Let G_1 = (ℰ_1,π_1) and G_2 = (ℰ_2,π_2) be π-graphs satisfying the following conditions:
* ∀ x,y ∈ℰ_i, |π_i(x) ∩π_i(y)| ≤ 1 (i = 1,2).
* ∀ x ∈ℰ_i, ∀∼,≈∈π_i(x) with ∼≠≈, [x]_∼∩ [x]_≈= {x} (i = 1,2).
If φ : ℰ_1 →ℰ_2 is an unnested graph isomorphism (i.e. a graph isomorphism from U_G_1 to U_G_2), then φ is also a π-isomorphism from G_1 to G_2.
Assume the hypotheses and let φ : ℰ_1 →ℰ_2 be an unnested graph isomorphism. We must show that φ preserves the π-incidence structure, i.e. for all x ∈ℰ_1 and ∼∈𝒥_ℰ_1,
∼∈π_1(x) ⟺ φ(∼) ∈π_2(φ(x))
(⟹) Suppose ∼∈π_1(x). By definition of π_1, there exists y ∈ℰ_1 with y ≠ x such that x ∼ y. Since φ is an unnested graph isomorphism, this implies (φ(x),φ(y)) ∈ R_2, i.e. there exists ≈∈π_2(φ(x)) such that φ(x) ≈φ(y). We claim that ≈ = φ(∼).
Suppose for contradiction that ≈≠φ(∼). Then by condition (2), [φ(x)]_≈∩ [φ(x)]_φ(∼) = {φ(x)}. But φ(y) ∈ [φ(x)]_≈ by definition of ≈, and φ(y) ∈ [φ(x)]_φ(∼) since y ∈ [x]_∼ and φ is a graph isomorphism. This contradicts [φ(x)]_≈∩ [φ(x)]_φ(∼) = {φ(x)}. Therefore ≈ = φ(∼), so φ(∼) ∈π_2(φ(x)).
(⟸) Conversely, suppose φ(∼) ∈π_2(φ(x)). By definition of π_2, there exists z ∈ℰ_2 with z ≠φ(x) such that φ(x) ∼ z. Since φ is a bijection, z = φ(y) for some y ∈ℰ_1 with y ≠ x. Moreover, since φ is an unnested graph isomorphism, (φ(x),φ(y)) ∈ R_2 implies (x,y) ∈ R_1, i.e. there exists ≈∈π_1(x) such that x ≈ y.
We claim that ≈ = ∼. Suppose for contradiction that ≈≠∼. By condition (1), |π_1(x) ∩π_1(y)| ≤ 1, so ≈ is the unique junction relation between x and y in G_1. But by the forward direction of the proof, ≈∈π_1(x) implies φ(≈) ∈π_2(φ(x)), and φ(≈) ≠φ(∼) since ≈≠∼ and φ is injective. This contradicts condition (1) for G_2, since φ(∼),φ(≈) ∈π_2(φ(x)) ∩π_2(φ(y)). Therefore ≈ = ∼, so ∼∈π_1(x).
The conditions in this proposition have a natural biological interpretation. Condition (1) states that any two ECs share at most one type of junction. Condition (2) states that distinct junction relations on a common EC connect that EC to disjoint sets of ECs. In other words, the conditions preclude having multiple redundant junctions between ECs and ensure that each junction relation represents a unique connectivity pattern.
Under these biologically plausible assumptions <cit.>, the π-graph and unnested graph representations are equivalent up to isomorphism. This suggests that for endothelial networks satisfying the assumptions, one can work with the simpler unnested graph representation without losing connectivity information. However, the π-graph representation is still valuable for encoding the multiplicity of junction types and explicitly representing the junction architecture around each EC.
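For concreteness, the two conditions can be checked mechanically on a π-graph whose junction relations are stored explicitly. The sketch below is a minimal illustration; it assumes, as in the proof above, that [x]_∼ consists of x together with its ∼-neighbours, and the example relations are those of G_1 from the earlier counterexample:

def satisfies_conditions(cells, relations):
    # relations: dict mapping a junction type to a set of ordered EC pairs
    def pi(x):                      # junction types incident to x
        return {j for j, r in relations.items() if any(x in p for p in r)}
    def cls(x, j):                  # [x]_~ : x together with its ~-neighbours
        return {x} | {y for (a, y) in relations[j] if a == x}
    cond1 = all(len(pi(x) & pi(y)) <= 1
                for x in cells for y in cells if x != y)
    cond2 = all(cls(x, j1) & cls(x, j2) == {x}
                for x in cells for j1 in pi(x) for j2 in pi(x) if j1 != j2)
    return cond1 and cond2

rel1 = {"AJ": {("x1", "x2"), ("x2", "x1")},
        "GJ": {("x2", "x3"), ("x3", "x2")}}
print(satisfies_conditions(["x1", "x2", "x3"], rel1))  # True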
§.§ Spatial Representation
We now introduce a framework to represent the spatial embedding and geometry of π-graphs, extending the abstract connectivity structure to incorporate biologically relevant spatial information. Our approach is based on the theory of topological graphs and their embeddings into geometric spaces.
§.§ Topological π-graphs
We begin by defining a topological π-graph, which endows an abstract π-graph with a topology inherited from its vertex and edge sets.
A topological π-graph is a tuple 𝒢 = (G,𝒯_ℰ,𝒯_𝒥) where:
* G = (ℰ,π) is an abstract π-graph.
* 𝒯_ℰ is a topology on the vertex set ℰ.
* 𝒯_𝒥 is a topology on the set of junctions 𝒥_G := ⋃_x ∈ℰ π(x).
* (Continuity) The π-incidence map π : ℰ→𝒫(𝒥_G) is continuous with respect to 𝒯_ℰ and a hyperspace topology on 𝒫(𝒥_G) induced by 𝒯_𝒥 (e.g. the Vietoris topology used below).
Intuitively, a topological π-graph equips the vertex and edge sets of an abstract π-graph with topologies that are compatible with the π-incidence structure, in the sense that the π-incidence map is continuous. This allows us to treat π-graphs as geometric objects and study their topological properties.
A natural choice for the vertex topology 𝒯_ℰ is the discrete topology, which captures the intuition that ECs are distinct, separated objects. For the junction topology 𝒯_𝒥, there are several biologically motivated options, such as the discrete topology (treating junctions as distinct objects), the Euclidean topology (embedding junctions into Euclidean space), or a more general geometric topology (e.g. representing junctions as simplicial complexes). The choice of junction topology depends on the specific geometric properties one wishes to model.
Figure <ref> illustrates an example of a topological π-graph with discrete vertex topology and Euclidean junction topology.
§.§ Spatial Embeddings of π-graphs
We now turn to the problem of embedding topological π-graphs into geometric spaces in a way that respects the π-incidence structure and junction geometry. We focus on embeddings into Euclidean space ℝ^d, but the framework can be generalized to other spaces (e.g. manifolds) as needed.
Let 𝒢 = (G,𝒯_ℰ,𝒯_𝒥) be a topological π-graph and d ≥ 1. A spatial embedding of 𝒢 into ℝ^d is a pair of continuous maps (φ_ℰ,φ_𝒥) where:
* φ_ℰ : (ℰ,𝒯_ℰ) →ℝ^d embeds the vertices.
* φ_𝒥 : (𝒥_G,𝒯_𝒥) →𝒦(ℝ^d) embeds the junctions, where 𝒦(ℝ^d) is the hyperspace of nonempty compact subsets of ℝ^d with the Vietoris topology.
* (Incidence Compatibility) For all x ∈ℰ and j ∈𝒥_G,
j ∈π(x) ⟺ φ_ℰ(x) ∈φ_𝒥(j)
Here, vertices are embedded as points in ℝ^d via the continuous map φ_ℰ, while junctions are embedded as nonempty compact subsets of ℝ^d via the continuous map φ_𝒥. The compactness requirement ensures that junctions are bounded and closed, which are natural geometric constraints. The hyperspace 𝒦(ℝ^d) is equipped with the Vietoris topology, which provides a notion of convergence for sequences of compact sets. Finally, the Incidence Compatibility condition ensures that the spatial embedding respects the π-incidence structure: a vertex x is incident to a junction j in the π-graph if and only if the embedded vertex point φ_ℰ(x) lies in the embedded junction region φ_𝒥(j).
Figure <ref> illustrates spatial embedding for a simple π-graph.
The definition of spatial embedding provides a general framework to represent the geometry of π-graphs. By choosing appropriate vertex and junction topologies and embedding maps, one can model a variety of biologically relevant scenarios, such as:
* ECs as point particles and junctions as line segments or polygonal chains representing the physical connections between cells.
* ECs as extended spatial regions (e.g. polygons or ellipsoids) and junctions as overlap regions between cells.
* ECs as point particles and junctions as probability distributions over ℝ^d representing the likelihood of a junction occurring at each point in space.
In each case, the embedding maps φ_ℰ and φ_𝒥 can be tailored to capture the desired geometric properties, such as continuity, smoothness, or distance constraints.
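A minimal numerical sketch of the first scenario follows: ECs are embedded as points in ℝ^2 and each junction as a finite point sample of the segment joining its two incident ECs (all coordinates are illustrative). The final check is exactly the Incidence Compatibility condition:

import numpy as np

# Vertex embedding: ECs as points in R^2 (coordinates are illustrative)
phi_E = {"x1": np.array([0.0, 0.0]),
         "x2": np.array([1.0, 0.0]),
         "x3": np.array([2.0, 0.0])}

def segment(p, q, n=50):
    # Point sample of the compact segment from p to q
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * p + t * q

# Junction embedding: each junction as a sampled compact subset of R^2
phi_J = {("AJ", "x1", "x2"): segment(phi_E["x1"], phi_E["x2"]),
         ("GJ", "x2", "x3"): segment(phi_E["x2"], phi_E["x3"])}

def incidence_compatible(tol=1e-9):
    # j in pi(x) iff the embedded vertex lies in the embedded junction set
    ok = True
    for (jtype, a, b), pts in phi_J.items():
        for x, p in phi_E.items():
            in_set = np.min(np.linalg.norm(pts - p, axis=1)) < tol
            ok = ok and (in_set == (x in (a, b)))
    return ok

print(incidence_compatible())  # True for this embedding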
§.§ Topological Invariants
An important aspect of spatial embeddings is their behavior under continuous deformations <cit.>, which capture the idea of elastic transformations that preserve intrinsic structure. This leads to the notion of topological invariants, which are properties of spatial embeddings that are preserved by continuous deformations.
Let 𝒢 be a topological π-graph and let (φ_ℰ,φ_𝒥) and (ψ_ℰ,ψ_𝒥) be spatial embeddings of 𝒢 into ℝ^d. A continuous deformation from (φ_ℰ,φ_𝒥) to (ψ_ℰ,ψ_𝒥) is a pair of continuous maps (H_ℰ,H_𝒥) where:
* H_ℰ : ℰ× [0,1] →ℝ^d is a homotopy from φ_ℰ to ψ_ℰ, i.e. H_ℰ(x,0) = φ_ℰ(x) and H_ℰ(x,1) = ψ_ℰ(x) for all x ∈ℰ.
* H_𝒥 : 𝒥_G × [0,1] →𝒦(ℝ^d) is a homotopy from φ_𝒥 to ψ_𝒥, i.e. H_𝒥(j,0) = φ_𝒥(j) and H_𝒥(j,1) = ψ_𝒥(j) for all j ∈𝒥_G.
* (Incidence Compatibility) For all x ∈ℰ, j ∈𝒥_G, and t ∈ [0,1],
j ∈π(x) ⟺ H_ℰ(x,t) ∈ H_𝒥(j,t)
If such a deformation exists, we say (φ_ℰ,φ_𝒥) and (ψ_ℰ,ψ_𝒥) are topologically equivalent embeddings.
Intuitively, a continuous deformation is a continuous path in the space of spatial embeddings that preserves the incidence compatibility condition at each point along the path. Topological equivalence of embeddings is then the equivalence relation generated by the existence of a continuous deformation.
A topological invariant of a spatial embedding is a property that is preserved by topological equivalence. Figure <ref> illustrates some examples of topological invariants for simple π-graph embeddings.
The study of topological invariants provides a powerful tool to classify and compare the intrinsic structure of spatial embeddings, independent of the specific geometry. By computing invariants for different endothelial networks and comparing them across conditions, one can gain insight into the topological organization of the endothelium and how it relates to function.
§.§ Temporal Dynamics of π-graphs
To capture the dynamic nature of endothelial networks, we extend the π-graph formalism by introducing a temporal dimension. Let T ⊆ℝ be a time interval of interest. We define a temporal π-graph as a tuple 𝒢 = (G, τ) where:
* G = (ℰ, π) is an abstract π-graph.
* τ: T →ϖ, where ϖ denotes the space of π-graphs, is a continuous function that assigns a π-graph τ(t) = (ℰ_t, π_t) to each time point t ∈ T.
Intuitively, a temporal π-graph represents the evolution of an endothelial network over time, with the topology of the network encoded by the time-varying π-incidence function π_t. We require that the vertex set ℰ_t and junction set 𝒥_ℰ_t vary continuously with time to ensure well-defined dynamics.
The temporal π-graph formalism enables us to study the dynamics of endothelial network connectivity, such as the formation and dissolution of junctions, the migration of ECs, and the remodeling of network topology. By considering the time-dependent properties of π-graphs, we can develop more realistic models of angiogenesis, vascular permeability, and other dynamic processes in the endothelium (Figure <ref>).
§.§ Evolution of Topological Invariants
The spatial embedding framework introduced in Section <ref> naturally extends to temporal π-graphs. We define a spatiotemporal embedding of a temporal π-graph 𝒢 = (G, τ) into ℝ^d as a pair of continuous maps (Φ_ℰ, Φ_𝒥) where:
* Φ_ℰ: ℰ× T →ℝ^d embeds the vertices.
* Φ_𝒥: 𝒥_𝒢× T →𝒦(ℝ^d) embeds the junctions, where 𝒥_𝒢 := ⋃_t ∈ T𝒥_ℰ_t is the set of all junctions across time.
The spatiotemporal embedding maps vertices and junctions into ℝ^d continuously over time, respecting the π-incidence structure at each time point.
With this framework in place, we can study the evolution of topological invariants of endothelial networks. For instance, we can track the number of connected components, cycles, or higher-dimensional holes as the network evolves under physiological or pathological conditions (Figure <ref>). The persistence of these invariants over time provides insight into the robustness and adaptability of the network topology.
To quantify the evolution of topological invariants, we can employ techniques from persistent homology, which provide a multiscale description of the topology of a spatial embedding and its change over time. Persistent homology computes the birth and death times of topological features as a filtration parameter (such as a distance threshold or time) varies, yielding a barcode or persistence diagram that summarizes the topological structure of the embedding across scales.
By applying persistent homology to spatiotemporal embeddings of π-graphs, we can track the emergence, persistence, and disappearance of topological features in evolving endothelial networks. This provides a powerful tool to compare the topological dynamics of different networks, detect critical transitions or bifurcations, and identify topological biomarkers of vascular dysfunction.
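As a concrete illustration, degree-zero persistence (connected components) of a temporal π-graph can be computed with nothing more than a union-find structure: every EC is born at t = 0, and a component dies when a newly formed junction merges it into an older one. The junction appearance times below are illustrative only:

class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[ry] = rx
        return rx != ry

cells = ["x1", "x2", "x3", "x4"]
timed_junctions = [(0.5, "x1", "x2"), (1.0, "x3", "x4"), (2.0, "x2", "x3")]

uf, bars = UnionFind(cells), []
for t, a, b in sorted(timed_junctions):
    if uf.union(a, b):                 # two components merge:
        bars.append((0.0, t))          # the younger one dies at time t
bars += [(0.0, float("inf"))] * len({uf.find(x) for x in cells})
print(bars)  # [(0.0, 0.5), (0.0, 1.0), (0.0, 2.0), (0.0, inf)]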
§ DISCUSSION
In this paper, we have introduced a novel mathematical framework called π-graphs to represent the complex multi-scale connectivity structure of endothelial networks. π-graphs capture the essence of endothelial connectivity by abstracting the detailed physiology of intercellular junctions into a concise, mathematically tractable formalism based on set theory, topology, and graph theory.
The key ingredients of the π-graph framework are:
* A finite set ℰ of endothelial cells (ECs).
* A collection 𝒥_ℰ = {∼_AJ, ∼_TJ, ∼_GJ, ∼_NJ} of symmetric binary relations on ℰ representing different types of intercellular junctions.
* A π-incidence map π : ℰ→𝒫(𝒥_ℰ) assigning to each EC the set of junctions it participates in, subject to basic consistency and nondegeneracy conditions.
The π-incidence map is the heart of the formalism, encoding the local junction architecture around each EC in a manner that is both biologically expressive and mathematically elegant. In contrast to standard graph representations, π-graphs natively capture the multiplicity of junction types and their combinatorial arrangement around ECs.
We have developed the basic theory of π-graphs, including notions of π-graph isomorphism, π-graph union, and π-graph connectivity, that highlight the rich mathematical structure of the formalism. In particular, we have shown that π-graph isomorphism is a strictly stronger notion than isomorphism of the underlying "unnested" EC graphs obtained by forgetting the π-incidence structure (Proposition <ref>). This underscores the importance of the π-incidence in capturing the full complexity of endothelial connectivity.
At the same time, we have established conditions under which π-graph isomorphism and EC graph isomorphism coincide (Proposition <ref>), namely when any two ECs share at most one type of junction and different junction types link ECs to disjoint sets of neighbors. These conditions have a natural biological interpretation and suggest that in certain regimes, one can work with the simpler EC graph representation without losing essential connectivity information.
To further ground the π-graph formalism in biological reality, we have introduced a spatial representation framework that allows embedding π-graphs into Euclidean space in a geometry-preserving manner. The key idea is to represent ECs as points or regions in space and junctions as lines, surfaces, or more general compact subsets, constrained by an incidence compatibility condition that ensures the geometric embedding respects the π-incidence structure.
The spatial embedding framework is highly flexible and can accommodate a variety of biologically relevant geometric features, such as the size and shape of ECs, the thickness and tortuosity of junctions, and the density of junctions per unit volume. Mathematically, the framework draws on concepts from topological graph theory and the theory of abstract cell complexes, providing a rigorous foundation for studying the interplay between the combinatorial connectivity and geometric arrangement of endothelial networks.
A central theme that emerges from the spatial embedding framework is the importance of topological invariants - properties of embedded π-graphs that are preserved by continuous deformations of the embedding. Examples of key topological invariants include the number of connected components, the number and type of cycles, and the homology groups that capture higher-dimensional "holes" in the embedding.
The study of topological invariants provides a powerful tool to classify and compare endothelial networks across different biological contexts, independent of the precise geometric details. For instance, one could use topological invariants to quantify the degree of network remodeling during angiogenesis, or to characterize the topological defects that arise in pathological conditions such as vascular leakage or tumor angiogenesis.
The introduction of a temporal dimension to the π-graph formalism and the study of evolving topological invariants open up new avenues to investigate the dynamics of endothelial networks. By representing the time-varying connectivity and spatial embedding of endothelial networks, temporal π-graphs provide a natural framework to model angiogenesis, vascular remodeling, and other dynamic processes in the endothelium.
The application of persistent homology to track the evolution of topological invariants offers a principled way to quantify the multiscale topology of endothelial networks and its change over time. Persistent homology can detect the birth, persistence, and death of topological features such as connected components, cycles, and higher-dimensional holes, which may reflect the formation, stability, and regression of vascular structures. By comparing the persistence diagrams or barcodes of different endothelial networks, we can gain insight into the topological basis of vascular function and dysfunction.
From a broader perspective, the temporal π-graph formalism and evolving topological invariants contribute to the growing field of dynamical systems and network evolution. Endothelial networks provide a rich biological context to study the coupling between network topology, geometry, and dynamics, and to explore the emergence of complex behavior from simple local rules. The π-graph framework also has potential applications to other biological systems that exhibit time-varying connectivity, such as neural networks, cellular signaling pathways, and ecological communities.
To fully realize the potential of the temporal π-graph formalism, several challenges and opportunities for future work remain. Some key directions include:
* Developing efficient algorithms to compute and update π-graphs and their topological invariants from time-series data, such as live-cell imaging of endothelial junctions.
* Integrating temporal π-graphs with models of endothelial cell mechanics, migration, and signaling to predict the dynamics of network remodeling and its feedback on cell behavior.
* Investigating the role of junction plasticity, EC heterogeneity, and microenvironmental cues in shaping the temporal topology of endothelial networks.
* Applying the temporal π-graph framework to study the topological basis of vascular patterning, anastomosis, and pruning during development and disease.
* Comparing the topological dynamics of endothelial networks across different tissues, species, and experimental conditions to identify conserved and divergent features.
* Constructing temporal π-graphs from experimental data, such as imaging of junction protein localization or freeze-fracture electron microscopy, to bridge the gap between molecular and tissue-scale dynamics.
The temporal π-graph formalism and evolving topological invariants in particular provide a powerful lens to study the dynamics of endothelial networks across scales. By integrating tools from algebraic topology, dynamical systems, and network science, this framework can drive new discoveries and applications in vascular biology and beyond. As the field of mathematical biology continues to grow, we anticipate that temporal π-graphs will find broader utility in modeling and analyzing the structure, function, and dynamics of complex biological systems.
As the explosion of experimental data on endothelial network structure continues, the π-graph formalism can provide a unifying mathematical language to organize, analyze, and interpret this data in a principled way. At the same time, the study of π-graphs as mathematical objects in their own right may inspire new theoretical questions and constructions that enrich the broader field of applied topology and network science.
In conclusion, the π-graph framework introduced here represents an exciting new direction in the systems biology of the endothelium that can bridge the gap between molecular-level mechanisms and tissue-level function. By providing a rigorous mathematical foundation for studying the connectivity, geometry, and topology of endothelial networks, π-graphs have the potential to catalyze new discoveries and advances in vascular biology and beyond. As such, we believe the π-graph formalism will be a valuable addition to the rapidly growing toolkit of mathematical methods in biology and medicine.
| http://arxiv.org/abs/2405.09247v1 | 20240515110042 | Graph Neural Network based Handwritten Trajectories Recognition | ["Anuj Sharma", "Sukhdeep Singh", "S Ratna"] | cs.CV | ["cs.CV", "cs.LG"] |
Graph Neural Network based Handwritten Trajectories Recognition
Anuj Sharma, Sukhdeep Singh, and S Ratna
May 2024
================================================================
Graph neural networks have proven to be an efficient machine learning technique in real-life applications. Handwriting recognition is one such application, where both offline and online recognition are required. Chain codes, as a feature extraction technique, have shown significant results in the literature, and we have been able to use chain codes together with graph neural networks. To the best of our knowledge, this work presents, for the first time, a novel combination of handwritten trajectory features as chain codes and graph neural networks. Handwritten trajectories for offline handwritten text are evaluated via recovery of the drawing order, whereas online handwritten trajectories are used with chain codes directly. Our results show that the present combination surpasses previous results and minimizes the error rate within only a few epochs.
§ INTRODUCTION
One of the important applications of Artificial Intelligence (AI) is the recognition of human handwritten text. Handwriting Recognition (HWR) refers to recognizing handwriting by machines. Recognition of scanned handwritten text is offline HWR, whereas recognition while writing is online HWR <cit.>. A handwriting trajectory refers to a handwritten stroke, which is a set of sequential pixels in online HWR and a set of pixels in offline HWR <cit.>. In offline HWR, the writing order of these trajectories is recovered through drawing-order recovery techniques, whereas in online HWR the digital pen strokes directly constitute the trajectories <cit.>. In either case, these trajectories are important sources of information for understanding and recognizing handwriting. Handwriting trajectories can also be understood as the paths traced by the movements of a writing pen or stylus across a writing surface. These trajectories capture the spatial and temporal aspects of handwriting, including the sequence of strokes, their direction, and the relative positioning of characters <cit.>. The trajectories in online HWR carry important information such as the start and end of movements, the velocity of pen movement, curvature, graphical shape, time-series information, the dynamic nature of pen movements, and behavioral aspects of writing. Advances in digital pen technologies have made it easier to capture and analyze handwriting trajectories for use in different domains. Chain code features are an important trajectory-related feature extraction technique and a popular way to recognize these trajectories <cit.>. Chain codes describe the boundary of an object by encoding the sequence of directions or transitions between neighboring points along the trajectory <cit.>. Chain codes are formed on the basis of boundary tracing, using techniques such as Huffman chain codes <cit.>. The sequence of chain codes can follow the original direction of the trajectory or manually chosen points. Chain code features offer a compact representation of object boundaries and are invariant to translation, rotation, and scaling transformations.
Graph Neural Networks (GNNs) are a class of neural networks designed to process and analyze data structured as graphs; the field is also referred to as machine learning on graphs <cit.>. Nodes and edges are the main components of a graph, and the chain code information of trajectories can be transformed into node and edge information. In this way we can represent both offline and online HWR data as graphs suitable for GNNs. More generally, any data that can be well represented in the form of graphs may be processed with GNNs as an alternative to traditional machine learning approaches <cit.>. The other important parts of a GNN, such as message passing, graph convolution, aggregation functions, and pooling, complete the recognition process of handwritten trajectories. The data may be graphically rich in raw form or may be converted to a graphical form. The present work is related to HWR, where the raw handwritten strokes or trajectories are graphically rich in nature. Therefore, a GNN is a suitable choice in this field, where the input takes the form of chain-code-based trajectories.
Based on the properties of HWR trajectories, chain codes, and GNNs, we ask: "Is it possible to implement a deep learning approach using GNNs that accepts graphs derived from handwritten strokes or trajectories?" Our experimental results on benchmark datasets show that it is. The main contributions of this study are:
* To the best of our knowledge, the present work is the first to use handwriting trajectories, chain codes, and GNNs together.
* Various GNN operators are explored experimentally in order to achieve state-of-the-art results.
* The approach is simple and mathematically rich in nature. Its hybrid nature strengthens the system and indicates future use for research problems in other domains.
* Benchmark and in-house datasets are transformed into GNN-compatible graph datasets based on feature vectors.
Organization of the rest of the paper: Section <ref> includes the system overview and an explanation of its components; Section <ref> explains handwritten trajectories and the chain code procedure used to form the feature vector; Section <ref> explains the GNN working; Section <ref> discusses experimentation; and the last section, <ref>, concludes the paper.
§ SYSTEM OVERVIEW
This section presents an overview of the proposed system architecture, its components, and their interactions, explaining the functionality of the system components and how the different parts work together to achieve the desired outcomes. Our system connects the properties of different domains, such as handwriting trajectories, graphs, chain codes, and deep learning training. As in traditional AI approaches, we form a dataset in the desired form, which is split into train and test parts. Large datasets need special attention, and batch loaders are used to train the data in batches. Our system accepts batch-based data, which reduces the system complexities common in deep learning. A GNN dataset is a special type of dataset in graphical form, and any relational, hierarchical, or other type of dataset needs to be converted into graphical data <cit.>. Our system, explained in this section, collects graph properties and results in a graphical dataset.
The proposed system design is presented in Figures <ref> and <ref>. Figure <ref> includes five stages, (a) to (e). The first stage (a) collects the scanned image, which is transformed into the desired drawing order in stage (b). Stage (b) is applied directly in the case of online HWR strokes, since the drawing order is available during data capture. Stage (c) is the chain code implementation, whose output is a set of small segments, where each segment corresponds to a direction code. This yields a simple graph structure in which nodes are the segment start and end points and edges are the links between these nodes. In this way we collect graph information for the respective nodes and edges, together with other features such as node degrees, isolated nodes, or self-loops, as shown in stage (d) of Figure <ref>. Finally, the GNN dataset is formed, as depicted in stage (e).
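As an illustration of stage (e), one trajectory can be packed into a graph object. The sketch below uses PyTorch Geometric; the one-hot node features, the class label, and the example chain code are placeholders, not the exact encoding used in our experiments:

import torch
from torch_geometric.data import Data

chain_code = [0, 1, 1, 2, 3]             # illustrative 8-direction codes
num_nodes = len(chain_code) + 1          # segment end points are the nodes

# Undirected edges between consecutive end points, in COO format
src = list(range(num_nodes - 1)) + list(range(1, num_nodes))
dst = list(range(1, num_nodes)) + list(range(num_nodes - 1))
edge_index = torch.tensor([src, dst], dtype=torch.long)

# Node features: one-hot of the outgoing direction code (placeholder choice)
x = torch.zeros(num_nodes, 8)
for i, d in enumerate(chain_code):
    x[i, d] = 1.0

graph = Data(x=x, edge_index=edge_index, y=torch.tensor([3]))  # class label
print(graph)  # Data(x=[6, 8], edge_index=[2, 10], y=[1])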
After the GNN dataset stage, Figure <ref> explains the GNN working. The inner working of a GNN is complex in nature. GNN operators are an important part of a GNN; they carry the idea of convolution over to graphs, transforming node features into latent node features. Further, various aggregation functions can be applied, such as mean, sum, or trainable layers <cit.>. The input feature matrix at layer l is 𝐇^(l)∈ℝ^N × d^(l), with N nodes and d^(l) feature dimensions; the learnable weight matrix at layer l is 𝐖^(l)∈ℝ^d^(l)× d^(l+1); and the adjacency matrix of the graph with added self-loops is 𝐀̂ = 𝐀 + 𝐈_N. 𝐃̂ is the degree matrix of 𝐀̂, defined as 𝐃̂_ii = ∑_j 𝐀̂_ij, and 𝐃̂^-1/2 is the normalized degree matrix. The result is a new feature matrix 𝐇^(l+1)∈ℝ^N× d^(l+1) at layer l+1, where d^(l+1) is the number of output features. The pseudocode in <ref> covers the working of Figures <ref> and <ref> in sequence.
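A minimal NumPy sketch of the layer update defined above (on a toy random graph, with no nonlinearity or training included) reads:

import numpy as np

def gcn_layer(A, H, W):
    # One propagation step H' = D^-1/2 (A + I) D^-1/2 H W, as defined above
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # normalized degree matrix
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

# Toy graph with N = 3 nodes, d^(l) = 4 input and d^(l+1) = 2 output features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 2))
print(gcn_layer(A, H, W).shape)  # (3, 2)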
§ CHAIN CODE BASED FEATURE EXTRACTION
The process of capturing and representing characteristics of patterns is referred to as feature extraction. These features help in analyzing patterns for various applications such as handwriting recognition, image recognition, and biometric verification. Common feature extraction techniques include histograms of oriented gradients, the scale-invariant feature transform, local binary patterns, zoning and grid-based features, statistical features, shape-based structural features, and chain-code-based features. The present study uses chain code feature extraction, which has been applied to both offline and online HWR.
Offline HWR operates on scanned handwritten text. The scanned input is an image and needs preprocessing before feature extraction. Preprocessing includes conversion to a binary image, which is passed to a thinning function that outputs a thin image <cit.>. The thin image is a skeleton-line form of the original image. The pixel list of the thin image requires a choice of start point and a sequential ordering of pixels, which is called recovery of the drawing order. The recovery of drawing order technique has been explained in detail in the literature <cit.>. In this technique, the start point is the point with one neighbor in the top-left position of the thin image. The next point is selected as the next neighbor, or as the point among the available points that falls closest to the previous direction. Once all points in the pixel list are covered, the drawing order of the handwritten trajectory is recovered. The next step is the formation of direction chain codes based on the drawing order <cit.>. In online HWR, recovery of the drawing order is not required, as the input handwritten text is already a sequential order of pixels; thus, direction chain codes are formed directly <cit.>.
The direction chain codes comprise segments based on the pixel order. The direction of each segment is evaluated against a set of reference directions. We consider eight directions, which yields chain code features for the input pixel list. Stages (a) to (c) of Figure <ref> show the progression from input image to chain code segments. These segments are assigned directions, which results in the chain code feature vector. In stage (c) of Figure <ref>, the end points of each segment are nodes, and the links between these end points are edges. These nodes and edges are the two main parts of a graph G=(V,E), where V is the set of nodes and E is the set of edges. The graph based on the resultant feature vector is the input data for the recognition stage using a GNN.
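A minimal Python sketch of this step, assuming an already-ordered pixel list in which consecutive points differ by at most one pixel per axis (the direction numbering below is one common convention, with 0 pointing east), is:

# 8-neighbourhood direction codes: 0 = east, numbered counter-clockwise
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(pixels):
    # Direction codes between consecutive points of an ordered pixel list
    sign = lambda v: (v > 0) - (v < 0)
    return [DIRECTIONS[(sign(x1 - x0), sign(y1 - y0))]
            for (x0, y0), (x1, y1) in zip(pixels, pixels[1:])]

stroke = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3)]  # e.g. a recovered order
print(chain_code(stroke))  # [0, 1, 2, 3]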
§ GNN
Graph representation learning is the field of machine learning concerned with graph-structured data, and GNNs are the technique that works with such data. Recently, the success of GNNs has been witnessed in applications such as computer vision, recommender systems, drug discovery, social networks, and pattern classification <cit.>. A GNN is a type of neural network with deep learning features and graphical properties, including multiple layers, a message passing mechanism, and loss minimization in each iteration. The propagation of information between neighboring nodes over multiple iterations is part of message passing <cit.> <cit.>. The final layer output then serves various downstream tasks, such as node classification, link prediction, or graph classification. An important intrinsic property of GNNs is their mathematically rich working, which allows step-wise analysis of the GNN and its graphical representation. The relationships of nodes, edges, and graphs, with their various properties, yield domain-level understanding. One important part of a GNN is its operator function, which transforms the features and structure of the graph. This is made possible by message passing, which exchanges information between neighboring nodes <cit.>. Operators commonly include convolution, pooling, attention, heterogeneous, and point cloud functions <cit.>. GNNs can capture both local and global structural information, which helps in understanding the basic building components and the overall behavior simultaneously <cit.>. Some graphically rich data problems can be classified with GNNs that were not tractable with traditional machine learning methods.
The recent literature suggests that the field of GNNs is constantly evolving, with new architectures, algorithms, and implementations in real-life applications. Attention-based mechanisms in GNNs allow more fine-grained control over the information flow between nodes and edges in the graph <cit.>. Spatial-temporal modeling provides the ability to model spatial and temporal information in graph-structured data, capturing the dynamic nature of the underlying data and resulting in more accurate predictions and representations. Similarly, closely related meta-learning techniques can also train GNNs, allowing them to quickly adapt to new domains with labeled data. Scalability is another important feature of GNNs, helping them scale to large graphs efficiently; graph partitioning and sparsification further reduce the complexity of large graphs.
Therefore, in view of the above properties of GNNs, they have been applied here to a pattern recognition problem, handwriting recognition, where trajectories, which are graphically rich in nature, are the input data. The handwritten trajectory information satisfies all major components of the GNN pipeline: nodes, edges, GNN operators, layer connections, pooling, and aggregation functions. GNN operators vary in nature, and their suitability for the domain data is subject to experimentation. We explored various operators from the literature and implemented them for handwritten data. The working of common GNN operators has been discussed in the literature <cit.>.
* The Graph Convolutional Network convolution <cit.> is expressed as:
𝐇^(l+1) = 𝐃̂^-1/2𝐀̂𝐃̂^-1/2𝐇^(l)𝐖^(l)
* The ChebConv operator <cit.> is:
𝐇^(l+1) = ∑_k=0^K-1Θ^(l)_k 𝐓_k(𝐋̃) 𝐇^(l)𝐖^(l)
Here, Θ^(l)_k ∈ℝ^d^(l)× d^(l+1) is the kth learnable Chebyshev filter at layer l, 𝐋̃ = 2𝐋/λ_max-𝐈_N is the normalized graph Laplacian with eigenvalues scaled to the range [-1, 1], and 𝐋 = 𝐈_N - Ã is the Laplacian matrix. 𝐓_k(𝐋̃) is the Chebyshev polynomial of degree k evaluated at 𝐋̃.
* The Sample and aggregate convolutional operator <cit.> is represented as,
𝐇^(l+1)_i = σ( 𝐖^(l)·CONCAT( 𝐇^(l)_i, MEAN_j ∈𝒩(i){𝐇^(l)_j}) )
where σ is an activation function, MEAN_j ∈𝒩(i){𝐇^(l)_j} is the mean of the input feature vectors of the neighbors of node i, 𝒩(i) is the set of neighbors of node i, and CONCAT is a concatenation operation that combines the feature vector of node i with the mean of the feature vectors of its neighbors.
* The transformer convolution operator <cit.> is expressed as,
𝐱^'_i = 𝐖_1 𝐱_i + ∑_j ∈𝒩(i)α_i,j𝐖_2 𝐱_j
where the attention coefficients α_i,j are computed via multi-head dot product attention:
α_i,j = softmax( (𝐖_3𝐱_i)^⊤ (𝐖_4𝐱_j)/√(d))
* The Gated Graph Convolution <cit.> working is,
𝐙^(l+1)= σ( 𝐃̃^-1/2𝐀̃𝐃̃^-1/2𝐇^(l)𝐖_z^(l))
𝐑^(l+1)= σ( 𝐃̃^-1/2𝐀̃𝐃̃^-1/2𝐇^(l)𝐖_r^(l))
𝐇^(l+1)= tanh( 𝐃̃^-1/2𝐀̃𝐃̃^-1/2CONCAT( 𝐑^(l+1)⊙𝐇^(l)𝐖_h^(l), 𝐙^(l+1)⊙𝐇^(l)) )
where σ is the sigmoid activation function and ⊙ denotes element-wise multiplication. The matrices 𝐙^(l+1) and 𝐑^(l+1) are the update and reset gates of the gating mechanism.
* The Fused based Gated Convolution <cit.> is,
𝐇_i^(l+1) = σ( ∑_j ∈𝒩_isoftmax_j ( LeakyReLU( 𝐚^(l)T𝐖^(l) [𝐇_i^(l), 𝐇_j^(l)] ) ) 𝐖^(l)𝐇_j^(l))
The attention score is subject to the linear transformation term rather than being computed separately, which is one of the key differences between GatedGraphConv and FusedGATConv.
* The GATv2Conv <cit.> can be expressed as,
𝐞_ij^(l)= LeakyReLU( 𝐚⃗^(l)T [𝐖^(l)𝐇_i^(l), 𝐖^(l)𝐇_j^(l)] )
α_ij^(l)= exp(LeakyReLU(𝐞_ij^(l)))/∑_k ∈𝒩_iexp(LeakyReLU(𝐞_ik^(l)))
𝐇_i^(l+1)= σ( 𝐖^(l)( ∑_j ∈𝒩_iα_ij^(l)𝐇_j^(l) + 𝐇_i^(l)) )
* The unified message passing <cit.> is based on the following concept:
v^(l)_c;j = W^(l)_c;v h^(l)_j + b^(l)_c;v
ĥ^(l+1)_i = ‖_c=1^C ∑_j ∈ N(i)α^(l)_c;ij(v^(l)_c;j + e_c,ij)
where ‖ is the concatenation operation over the C attention heads. The source feature h_j is transformed to v_c;j∈ R^d for the weighted sum.
* The Topology Adaptive Graph Convolution <cit.> is,
𝐇^(l+1)= σ( ∑_p=0^P-1∑_q=0^Q-11/|𝒩|^p·𝐅_p,q^(l)·𝐇^(l)· (𝐇^(l))^q )
where, 𝐅_p,q^(l) is a topology-adaptive convolutional filter which can be defined as,
𝐅_p,q^(l) = ∑_r=0^R-1𝐖_p,q,r^(l)·𝐌_r^(l)
𝐌_r^(l)= ∑_i=1^|𝒩|( 𝐃_𝒩^-1/2·𝐀^(l)·𝐃_𝒩^-1/2)^r ·𝐇^(l)· (𝐇^(l))^T
where 𝐖_p,q,r^(l) is a learnable weight tensor and R is a hyperparameter that controls the number of iterations used to compute the topology-adaptive filter.
* The Simplified Graph Convolution <cit.> is,
𝐇^(l+1) = σ( 𝐃̃^-1·Ã·𝐇^(l)·𝐖^(l))
* The Gaussian Mixture Model Convolution <cit.> is,
𝐇^(l+1) = ∑_i=1^M∑_j=1^N𝐏_i,j^(l)·∑_k=1^K𝐇_j,k^(l)·Θ_i,k^(l)
𝐏_i,j^(l) = softmax( exp(-1/2·(𝐱_i - 𝐲_j)^T·𝐀·(𝐱_i - 𝐲_j))/∑_j'=1^Nexp(-1/2·(𝐱_i - 𝐲_j')^T·𝐀·(𝐱_i - 𝐲_j')))
where 𝐏_i,j^(l) is a probability matrix that assigns each node i to a set of K Gaussian mixture components centered at nodes j, 𝐇_j,k^(l) is the feature vector of the k-th component centered at node j, Θ_i,k^(l) is the weight matrix of the k-th component assigned to node i, 𝐱_i is the feature vector of node i, 𝐲_j is the feature vector of node j, 𝐀 is a learnable affinity matrix that controls the similarity between nodes, and softmax is a softmax function that ensures that the weights assigned to each Gaussian mixture component sum to 1.
* The spline convolution <cit.> is,
𝐇^(l+1) = ∑_k=1^K𝐔_k^(l)·spline_order( (Λ_k^(l))^1/2·𝐗^T·𝐇^(l)·Θ_k^(l)/σ_k^(l))
where, Θ_k^(l) is the weight matrix of the k-th spline kernel, 𝐔_k^(l) is the normalization matrix of the k-th spline kernel, spline_order is a spline function, Λ_k^(l) is the eigenvalue matrix of the graph Laplacian corresponding to the k-th spline kernel, 𝐗 is the feature matrix of the graph nodes, σ_k^(l) is a normalization constant for the k-th kernel.
* The neural message passing <cit.> is,
𝐇^(l+1) = σ(∑_j ∈𝒩(i)𝐖_j^(l)·𝐗_j^(l))
* The dynamic graph convolutional neural network <cit.> is,
𝐞_i,j = f_edge(𝐇_i, 𝐇_j, 𝐞_i,j)
𝐦_i = ∑_j ∈𝒩(i) f_node(𝐇_i, 𝐇_j, 𝐞_i,j)
𝐇_i^(l+1) = f_out(𝐇_i^(l), 𝐦_i)
where 𝐇_i is the feature vector of node i, 𝒩(i) is the set of neighbors of node i in the graph, 𝐞_i,j is the edge attribute between nodes i and j, f_edge, f_node, and f_out are learnable functions, and 𝐦_i is the aggregated message vector for node i.
* The cluster graph convolution network <cit.> is,
𝐇_i^(l+1) = σ(∑_j∈𝒩_i1/c_ic_j𝐇_j^(l)𝐖^(l))
where 𝒩_i is the set of neighboring nodes of node i in the subgraph, and c_i and c_j are normalization constants that are used to control the effect of subgraph size. The subgraph is constructed by taking a random sample of nodes from each cluster, and then adding all edges between those nodes. The normalization constants c_i and c_j are defined as the square root of the size of the subgraph that contains node i and node j, respectively.
* The Weisfeiler-Lehman Graph Convolution <cit.> is,
𝐇^(l+1) = σ(𝐃^-1/2𝐀𝐃^-1/2𝐇^(l)𝐖^(l))
* The Anti-Symmetric graph convolution network <cit.> represented as,
𝐇^(l+1)_i = ∑_j ∈𝒩_i𝐖^(l)(𝐀_ij) · (𝐇^(l)_i - 𝐇^(l)_j)
* The equation for Point Transformer <cit.> can be written as follows:
y_i = ∑_x_j ∈𝒳(i) ρ (γ (φ (x_i) - ψ (x_j) + δ ) ) ⊙ (α (x_j) + δ )
where x denotes the input point features and y the output features, 𝒳(i) is the set of points in a local neighborhood of x_i, ρ is a normalization function, and φ, ψ and γ are feature transformations.
* The equation for xConv <cit.> can be written as follows:
F_p = X-Conv(K, p, P, F) = Conv(K, MLP(P - p) × [MLP(P - p), F])
Here K denotes the trainable convolution kernels, p a representative point, P the set of its neighboring points, and F the features associated with those points; MLP is a multi-layer perceptron applied to the local coordinates P - p.
§ RESULTS
In this section, we present experimental results on benchmark datasets. As our study includes both online and offline HWR, we use the MNIST dataset <cit.> for offline HWR, where handwritten trajectories are recovered. For online HWR, we use the UNIPEN <cit.> and Indic handwritten strokes <cit.> datasets, as mentioned in the literature. As GNNs are a deep learning technique and require machines to compute lengthy calculations, we use batch loaders that train and test the dataset in batches. This results in fast computation and better handling of memory resources. As discussed in previous sections, GNNs need datasets in graphical form, and we converted the datasets accordingly. The source code for the formation of the GNN dataset is included in the Supplementary file.
The offline HWR dataset, MNIST, is used for experimentation with the recovery of drawing order technique. This dataset includes 10 classes, the digits 0 to 9. We use a feature vector of length 41 consisting of chain codes only; details of these features are discussed in the literature <cit.>. MNIST includes 60000 images of handwritten numerals, so we formed 60000 graphs from the respective feature vectors; similarly, the 10000 test images yield 10000 test graphs. Our architecture includes three convolution steps and 16 hidden channels: the first convolution maps the feature vector to the hidden channels, the second maps hidden channels to hidden channels, and the third maps hidden channels to the number of classes. The Adam optimizer is used with a learning rate of 0.01. The MNIST dataset has been used extensively in the literature, with error rates reported as low as 0.19% <cit.>. We note that our work performs on par with reported GNN performances for MNIST, with an error rate of 0.86% <cit.>. A sketch of this architecture is given below.
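The following PyTorch Geometric sketch mirrors the three-layer, 16-hidden-channel architecture described above. The GCNConv operator, the ReLU activations, and the mean-pooling readout are illustrative stand-ins; as noted below, our best results used the DeeperGCN operator rather than plain GCN:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class ChainCodeGNN(torch.nn.Module):
    # Three convolution steps with 16 hidden channels, as described above
    def __init__(self, num_features=41, hidden=16, num_classes=10):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = self.conv3(x, edge_index)
        return global_mean_pool(x, batch)   # graph-level prediction

model = ChainCodeGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)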
One online HWR dataset, the Gurmukhi HandWritten Text (GHWT) dataset, is used as discussed in the literature <cit.> <cit.>. It includes 62 classes, and the feature vector length is 25. We use the same number of hidden channels (16) as in the offline case, three convolution steps, and the Adam optimizer with a learning rate of 0.01. The other online HWR dataset used is UNIPEN <cit.>, which includes online handwritten digit strokes for the ten classes 0 to 9. Our results show that the GNN outperforms literature results in both the offline and online cases. The train and test parts of the respective datasets are randomly shuffled over all classes to ensure unbiased outcomes. Table <ref> reports results for both offline and online HWR datasets; it clearly shows that the present approach outperforms literature results for the same feature vectors using GNNs. The number of epochs in the GNN experimentation is moderate: we ran the GNN experiments only up to the epoch at which previous results were surpassed, which occurred within the first few epochs. The choice of GNN operator is one advantage in obtaining the best results, as not all operators perform equally. In our case, the DeeperGCN operator <cit.> performed the same as or better than other GNN operators for these chain-code-based feature vectors. Running the GNN operators with common parameters was another important factor considered in the experiments.
GNNs do have limitations as classifiers. One challenge is over-smoothing, which occurs in deep architectures: as information is propagated through multiple layers, node representations tend to become more similar, losing discriminative power. GNNs are also computationally expensive and require high-end machines to handle big data. The computational cost remains high compared to other classifiers on medium-size datasets, but the resulting accuracy favors GNNs. Like other deep learning architectures, GNNs have a black-box nature and lack interpretability compared to traditional graph algorithms. Another challenge is sensitivity to structural perturbations: a small change in the graph's nodes or edges can change the outcome. Sparsity and irregularity further complicate classification. In our case, handwritten trajectories, especially those obtained through drawing-order recovery, are studied here with GNNs for the first time, and these challenges did not dominate the classification process.
This work opens many future directions connected to chain codes and trajectory formation. There are many real-life data challenges where chain-code-based graph understanding could be useful, especially complex large-scale graphs. The mathematically rich GNN machinery can be used to generate theoretically sound and diverse solutions. Further, transfer learning with GNNs could optimize existing solutions or feature extraction techniques. In view of the present outcomes, GNNs have been shown to be effective for handwritten trajectories using the traditional chain code method; an interesting part is the drawing-order formation for offline handwriting.
§ CONCLUSION
In this paper, a novel offline and online HWR technique is proposed using chain code feature vectors and GNNs, where each handwritten trajectory is understood as a graph. The results are better than those in the literature for the same datasets and feature vectors: we obtain improved accuracy rates for all the datasets. Our results surpass previous results within a few epochs, and the system is computationally efficient thanks to the batch-loader-based experimental setup. The GNN requirement of datasets in graphical form opens a new direction for handwritten trajectories, where graphs present a simple understanding of the data. The use of recovery of drawing order for offline handwritten text with GNNs is a first in this direction, and the results are encouraging. Further, the online HWR results also support the chain code and GNN combination. The work done in this study opens new scope to extend the present system and to apply it in other domains with similar feature sets.
Availability of data and materials: The data and code that support the findings of this paper are available from the corresponding author, and the code is available in the supplementary files.
Ethics and informed consent for data used: The research does not involve human participants and/or animals. The data used come from openly available sources and in-house lab datasets. No new dataset was generated for this paper.
Competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work related to this paper.
| http://arxiv.org/abs/2405.08665v1 | 20240514144632 | Note: Shoving model and the glass transition in one-component plasma | ["Sergey Khrapak"] | cond-mat.stat-mech | ["cond-mat.stat-mech", "cond-mat.soft", "physics.chem-ph", "physics.plasm-ph"] |
Sergey.Khrapak@gmx.de
Joint Institute for High Temperatures, Russian Academy of Sciences, 125412 Moscow, Russia
A modified shoving model is applied to estimate the location of the glass transition in a one-component plasma. The estimated value of the coupling parameter Γ≃ 570 at the glass transition is compared with other predictions available in the literature.
Note: Shoving model and the glass transition in one-component plasma
Sergey Khrapak
May 20, 2024
====================================================================
A one-component plasma (OCP) model is an idealized system of point charges immersed in a uniform neutralizing background of opposite charge <cit.>. This model is of relevance in a wide interdisciplinary context, including laboratory and space plasmas, planetary interiors, white dwarfs, liquid metals, and electrolytes. There are strong relations to various soft matter systems such as charged colloidal suspensions and complex (dusty) plasmas <cit.>. OCP represents a very convenient system to test and verify the applicability of different theoretical approaches used in condensed matter research.
The mobile particles forming the OCP are interacting via a very soft and long-ranged Coulomb repulsive interaction potential, ϕ(r)= e^2/r, where e is the electric charge and r is the distance between a pair of particles. The particle-particle correlations and thermodynamics of the OCP are characterized by a single dimensionless coupling parameter Γ=e^2/aT, where a=(4π n/3)^-1/3 is the Wigner-Seitz radius in three dimensions (3D), and T is the temperature in energy units (≡ k_ BT). The coupling parameter is equivalent to the inverse temperature in conventional matter. Since the interaction potential is purely repulsive, the OCP does not exhibit a gas-liquid phase transition, gas-liquid coexistence, critical and gas-liquid-solid triple points. The one-dimensional phase diagram of the OCP is very simple. There is a phase transition from the fluid phase to the body-centred cubic (bcc) solid phase at sufficiently strong coupling (low temperature), Γ_ fr≃ 174 <cit.>, where the subscript “fr” refers to freezing. There is also a gas-to-liquid dynamical crossover, which has been recently located at Γ/Γ_ fr≃ 0.05, that is at Γ∼ 10 <cit.>.
Over the years there have been predictions that a supercooled OCP fluid might exhibit a glass transition at strong coupling. However, the location of this transition differs greatly between these studies. Ichimaru and Tanaka used a generalized viscoelastic theory to demonstrate the possibility of a glass transition at Γ_ g = 900 - 1000 <cit.>. Cardenas and Tosi <cit.> studied the supercooled-fluid region and the transition to an amorphous glassy state in the OCP within the replica-symmetry-breaking scenario developed by Franz and Parisi <cit.>. They obtained the threshold value Γ_ g≃ 1500 for the glass transition. Glass transition properties for the Yukawa (screened Coulomb) potential were investigated by Yazdi et al. using the traditional mode coupling theory (MCT) with the structural information obtained from the Ornstein-Zernike relation and the hypernetted-chain (HNC) approximation closure <cit.>. In the infinite screening length limit, corresponding to the OCP, they obtained Γ_ g≃ 590. Lucco Castello and Tolias combined the traditional MCT with three closures of the integral equation theory of liquids to estimate the glass transition line in the Yukawa system <cit.>. With the HNC closure they located the glass transition point in the OCP at Γ_ g≃ 575, slightly lower than the result of Yazdi et al. With the isomorph-based empirically-modified hypernetted chain (IEMHNC) approach <cit.> and the variational modified hypernetted chain (VMHNC) approximation <cit.>, the obtained coupling parameters are almost a factor of two lower, Γ_ g≃ 290 and Γ_ g≃ 280, respectively.
Considerable variation in the locations of the glass transition point reported in different studies suggests checking these predictions with other methods and tools. The purpose of this Note is to apply a simple model of the glass transition based on elastic arguments – the shoving model <cit.>. The model considers a "flow event", which requires a local volume increase. The activation energy for a flow event is identified with the work done in shoving aside the surrounding liquid. This work can be expressed using the infinite frequency elastic moduli (the infinite frequency shear modulus in the original formulation <cit.>). The shoving model is one of the elastic models discussed in connection with glass-forming liquids <cit.>. In the simplest approximation these elastic models lead to a Lindemann-like criterion for the glass transition <cit.>, an analogy that will be elaborated further below. At the moment we remind the reader that the energy-landscape version of the shoving model predicts that the following dimensionless combination is nearly constant at the glass transition <cit.>:
(G_∞/nT)·(K_∞+4G_∞/3)/(2K_∞+11G_∞/3) ≃ const.
Here K_∞ and G_∞ are the infinite frequency bulk and shear moduli and no distinction between the high-frequency plateau moduli and idealized instantaneous affine moduli (see Ref. <cit.> for details) is made in case of the OCP.
The value of the constant in Eq. (<ref>) is expected to be quasi-universal. For a wide range of metallic glasses the values of the const are scattered in the vicinity of const≃ 30. (In Fig. 2 of Ref. <cit.> the data points are scattered around a value of ≃ 0.03, but the results there are expressed in units of GPa cm^3/J = 10^3, which corresponds to the dimensionless const≃ 30.)
The instantaneous bulk modulus of the OCP system is infinite: due to the very soft and long-ranged character of the Coulomb interaction, the dispersion relation has a plasmon branch instead of the conventional acoustic one. If we formally substitute K_∞→∞ in Eq. (<ref>) and adopt the constant appropriate for metallic glasses, we obtain G̃_∞≃ 60 as a preliminary estimate of the glass transition point in the OCP. Here G̃_∞=G_∞/nT is the reduced shear modulus. Within the quasi-localized charge approximation (QLCA) <cit.>, the elastic moduli of the OCP system can be directly expressed via the excess internal energy. At strong coupling, the OCP excess energy is not very sensitive to whether the OCP forms a fluid, solid, or glass. The ion sphere model provides a reasonable estimate <cit.>, resulting in <cit.>
G̃_∞≃ 0.1936 e^2 n^1/3/T≃ 0.12Γ.
In terms of the coupling parameter this yields Γ_ g≃ 500. Still, this estimate is not very convincing, because it neglects the contribution from the longitudinal collective mode.
Equation (<ref>) should be modified to account for the non-acoustic character of the longitudinal collective mode in the OCP system. We discuss this modification below. Let us start with estimating the amplitude of the atomic vibrations in the harmonic approximation. We have
⟨δ r^2 ⟩=3T/m⟨1/ω^2⟩,
where m is the atomic mass. Averaging can be performed over normal modes,
⟨1/ω^2⟩ = 1/3N∑_ kω_ k^-2.
Furthermore, the sum over frequencies can be converted to an integral over k using the standard procedure
1/V∑_ k(...)→1/(2π)^3∫ (...) d k,
where V is the volume. One longitudinal (compressional) and two transverse (shear) modes are supported in the solid state. We get
⟨1/ω^2⟩ = 1/6π^2 n∫_0^k_ maxk^2dk(1/ω_l^2+2/ω_t^2),
where ω_l and ω_t are the frequencies of the longitudinal and transverse modes and the cutoff k_ max= (6π^2 n)^1/3 ensures that ⟨𝒳⟩ = 𝒳 for a quantity 𝒳, which is independent of k.
Substituting the acoustic dispersion relations ω_l=c_l k and ω_t=c_t k into Eq. (<ref>), and taking into account the relations between the sound velocities and elastic moduli, mnc_l^2=M_∞=K_∞+4G_∞/3 and mnc_t^2=G_∞, we get
⟨δ r^2⟩/Δ^2 = [3nT/((6π^2)^2/3 G_∞)]·(2K_∞+11G_∞/3)/(K_∞+4G_∞/3).
Here Δ=n^-1/3 is the average interatomic separation and Eq. (<ref>) represents an expression for the Lindemann measure in the harmonic approximation. According to the Lindemann melting rule, melting of a crystal occurs when the average vibrational amplitude exceeds a universal fraction (∼ 0.1) of the inter-atomic distance. By virtue of Eq. (<ref>), an analogue of the Lindemann rule applies to the glass transition, as discussed previously <cit.>. The only difference is in the relative vibrational amplitude. Adopting const≃ 30 in Eq. (<ref>) we arrive at
⟨δ r^2⟩/Δ^2≃ 0.0066
at the glass transition point.
It is now obvious how the shoving approximation should be modified in case of the OCP. Performing averaging in Eq. (<ref>) we have to substitute the plasmon dispersion ω_l(k)≃ω_ p instead of the acoustic one. Here ω_ p=√(4π e^2n/m) is the plasma frequency. The acoustic approximation for the transverse dispersion relation remains adequate. Even more accurate results can be expected by substituting almost exact QLCA long-wavelength dispersion relations for ω_l(k) and ω_t(k) <cit.> in Eq. (<ref>). There is no need to perform averaging here. This has been already done in connection to the self-diffusion mechanism and the Stokes-Einstein relation between the self-diffusion and viscosity coefficients in the strongly coupled OCP <cit.>. The result, which applies equally to strongly coupled fluid and amorphous solid phases is ⟨ω_ p^2/ω^2⟩≃ 9.76 <cit.>. Combining Eqs. (<ref>) and (<ref>) we get the following condition for the glass transition in the OCP model
Γ_ g≃ 570.
This is our best estimate for the glass transition point within the shoving model. It should be pointed out, however, that the shoving model is an approximation and the value of the constant in Eq (<ref>) based on metallic glasses may not be optimal for the OCP system. Nevertheless, the present estimate correlates much better with recent MCT predictions of Refs. <cit.>, compared to earlier works <cit.>.
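The final estimate is easily reproduced numerically. The short Python sketch below chains together the quantities quoted above (the QLCA frequency average and the threshold value of the reduced vibrational amplitude); the algebraic reduction in the comment follows from the harmonic-amplitude expression with ω_p^2 = 4π e^2 n/m and a n^1/3 = (3/4π)^1/3:

import numpy as np

avg = 9.76            # QLCA average <omega_p^2 / omega^2> for the OCP
threshold = 0.0066    # <dr^2>/Delta^2 at the glass transition

# <dr^2>/Delta^2 = 3 T n^{2/3} <1/omega^2> / m  reduces, in OCP units, to
# 3 <omega_p^2/omega^2> / (4 pi (3/(4 pi))^{1/3} Gamma).
prefactor = 3.0 * avg / (4.0 * np.pi * (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0))
gamma_g = prefactor / threshold
print(round(gamma_g))  # 569, i.e. Gamma_g ~ 570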
To summarize, the energy-landscape version of the shoving model has been modified to account for the non-acoustic character of the longitudinal collective mode in the OCP system. This modified version of the shoving model predicts the glass transition in OCP at Γ_ g≃ 570. Similar modifications can be straightforwardly implemented in other simple fluids for which deviations from acoustic asymptotes are important. This includes for instance 2D OCP as well as 2D and 3D Yukawa fluids and related systems with soft interaction potentials <cit.>. Generalization of the shoving model to mixtures seems desirable in view of applications to complex plasma experiments targeting the glassy state with polydisperse dust.
| http://arxiv.org/abs/2405.09809v1 | 20240516044221 | Biomarker Selection for Adaptive Systems | ["Joshua Pickard", "Cooper Stansbury", "Amit Surana", "Anthony Bloch", "Indika Rajapakse"] | q-bio.MN | ["q-bio.MN", "math.OC"] |
Biomarker Selection for Adaptive Systems
Joshua Pickard[jpic@umich.edu], Cooper Stansbury, Amit Surana, Anthony Bloch, and Indika Rajapakse
May 2024
======================================================================================================
Biomarker selection and real-time monitoring of cell dynamics remain active challenges in cell biology and biomanufacturing.
Here, we develop scalable adaptations of classic approaches to sensor selection for biomarker identification on several transcriptomics and biological datasets that are otherwise cannot be studied from a controls perspective.
To address challenges in system identification of biological systems and provide robust biomarkers, we propose Dynamic and Structure Guided Sensors Selection (DSS and SGSS), methods by which temporal models and structural experimental data can be used to supplement traditional approaches to sensor selection.
These approaches leverage temporal models and experimental data to enhance traditional sensor selection techniques.
Unlike conventional methods that assume well-known, fixed dynamics, DSS and SGSS adaptively select sensors that maximize observability while accounting for the time-varying nature of biological systems.
Additionally, they incorporate structural information to identify robust sensors even in cases where system dynamics are poorly understood.
We validate these two approaches by performing state estimation from partial observations on several high dimensional systems derived from temporal gene expression data.
§ INTRODUCTION
The selection of biomarkers or sensor genes is, at least in principle, a classic problem of systems theory.
As in many engineered, industrial, and socioeconomic processes, a central objective of experimental science lies in minimizing the requisite measurements and data collected while preserving our capacity to accurately estimate, detect, and forecast the state of a complex system.
Historically, biomarker identification for cancer and disease has relied upon domain knowledge of the biological system <cit.>; however, such an approach is limited to explaining known or characterized phenomena <cit.>.
As the recent advent of real-time sequencing technologies ushers in a new era in genomics <cit.>, model-based biomarker identification has the potential to uncover uncommon sensors and biomarkers directly from data.
Model-based biomarker identification selects sensors to maximize the observability of dynamic models of a biological system.
A system is called observable when the measurements or data collected from sensors provide sufficient information to determine unmeasured states <cit.>. Yet, while observability is a classic problem of systems theory <cit.>, many challenges remain in applying the input/output and state space models typical of controls engineering to the study of biological systems <cit.>.
In contrast to many physical systems – such as the pendulum, where the position and velocity capture the state and the equations of motion are known – dynamics of biological systems, and often the correct state representation, remain unknown.
The high dimensionality and low temporal resolution of data gathered in many biological experiments present a further challenge, as these data are not compatible with standard methods for identification and learning dynamics of complex systems <cit.>.
In spite of these challenges, many models to predict cell trajectories during differentiation, perturbation, and reprogramming have been proposed <cit.>.
Remarkably, the famous cell reprogramming (controller) protocols of Weintraub <cit.> and Yamanaka <cit.> were found by characterizing biomarker genes (observer) of the target cell types, which exemplifies a classic principle of control theory in biological systems: the dual concepts of controllability and observability.
Nevertheless, our observability analysis of gene regulation supports the notion that steering and monitoring biological systems are in fact not equivalent problems.
To address these challenges, we introduce a framework for biomarker detection founded on dynamic models of gene regulation.
We present two templates for sensor selection: Dynamic and Structure-Guided Sensor Selection (DSS and SGSS).
We demonstrate their efficacy in identifying biomarkers that optimize the observability of dynamics on gene regulatory networks derived from time-series gene expression
datasets.
Our focus lies on the linear time variant (LTV) state-space model
x(t+1) = A(t) x(t)
y(t) = C(t) x(t).
Here, x(t) ∈ ℝ^n is the system state, representing, for instance, the expression of each gene as a vector; A(t) ∈ ℝ^{n×n} signifies a state transition matrix, akin to a gene regulatory network; and C(t) ∈ ℝ^{p(t)×n} stands as the sensor or measurement matrix, dictating our data collection process, so that y(t) ∈ ℝ^{p(t)} denotes our measurements or data (where p(t) ≪ n).
If A(t) is fixed for all t, we call the system linear time invariant (LTI).
When it is cost-prohibitive to measure the full state at each time, the sensor selection problem involves crafting a measurement matrix C(t) to ensure that the low-dimensional data y(t) gathered throughout time or during an experiment offer the greatest insight into the complete state of the system x(t).
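To make the roles of A(t), C(t), and y(t) concrete, below is a minimal Python sketch of simulating this model with coordinate-selecting sensors. The system is synthetic, and all names and sizes (n, p, T) are illustrative assumptions rather than values from our datasets or implementation.

```python
# A minimal sketch of the LTV sensing model on a synthetic system.
import numpy as np

rng = np.random.default_rng(0)
n, p, T = 10, 2, 5                       # state dim, sensors per step, horizon

# Mildly perturbed identity dynamics A(t) and a trajectory x(t)
A = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(T)]
x = [rng.standard_normal(n)]
for t in range(T):
    x.append(A[t] @ x[t])

def selection_matrix(idx, n):
    """C built from unit rows: each row measures one coordinate of x."""
    C = np.zeros((len(idx), n))
    C[np.arange(len(idx)), idx] = 1.0
    return C

# Different sensors at each time step (dynamic sensor selection)
sensors = [rng.choice(n, size=p, replace=False) for _ in range(T + 1)]
y = [selection_matrix(s, n) @ x[t] for t, s in enumerate(sensors)]
print([yi.shape for yi in y])            # p-dimensional outputs, p << n
```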
§ RESULTS
§.§ Dynamic Sensor Selection
Traditional methods for sensor selection first evaluate each variable as a sensor and then suggest monitoring as many top-ranked sensors as feasible, considering experimental constraints.
However, by alternating the sensors at each time step to measure different variables, the estimation of nonlinear and time-varying systems can be improved.
Motivating Example: Biological Oscillators.
As a first example, consider how to best observe a network of coupled oscillators
dx(t)/dt = F(x(t), μ(t)) - L(x(t))
y(t) = C(t) x(t).
Here, x is a vector representing the locations or values of each oscillator, F is the dynamics of the individual oscillators with internal parameters μ, and L is the diffusion operator specific to the network structure.
Turing's theory of morphogenesis, Smale's two cell system, the repressilator, and other higher order motifs exemplify the dynamics of many biological systems described by <ref>, highlighting the importance of its observation <cit.>.
In <ref>, the trajectory of three coupled Van der Pol oscillators is shown.
When measuring the state of any two oscillators x_1, x_2, or x_3, the observations are 2D projections of the 3D trajectory.
With fixed sensors, the question “which oscillators are the best sensors?" is akin to asking “data from which 2D plane enables the best reconstruction of the 3D shape?"
This supposes the observed data measures the same two variables at all times.
However, the information content of each projection changes as the oscillators synchronize and phase lock (<ref>).
As a result, alternating the plane of observed data throughout time provides a clearer picture of the 3D shape and enables better estimation and prediction of the network trajectory.
As a root node (<ref>), x_1 is a good sensor to monitor the long term behavior of the system <cit.>.
Nevertheless, modification of oscillator connectivity or parametrization before reaching the limiting behavior may necessitate sensor reallocation.
As dynamics evolve, the number and distribution of sensors should change as well.
For example, synchronized networks require fewer sensors than unsynchronized ones (<ref>).
Similarly, changes in parameterization and connectivity of oscillators necessitates reallocation of sensors (SI.<ref>).
The cell cycle and differentiation stages exemplify temporal interactions where sensors are dynamically allocated.
For instance, interactions between key regulatory genes, such as P27, P21, CYCLIN D1, CDK4, and MYOD, change between proliferation, differentiation, and quiescence <cit.>.
The PIP-FUCCI biomarker, developed first as FUCCI, employs fluorescent biomarkers to distinguish cell cycle stages.
Initially, CDT1 and GEM gene expression distinguished the G1 stage from S, G2, and M <cit.> (SI.<ref>).
Adding PIP to monitor the PCNA gene enabled accurate detection of G2 phase transitions <cit.>.
Monitoring of CDT1, GEM, and PCNA between different cell cycle stages exemplifies DSS.
Recently, the introduction of adaptive sequencing, which allows for a sequencer to update in real time which genes, cells, or other markers are measured, provides a flexible framework for DSS on high dimensional genomics experiments <cit.>.
Maximizing Observability.
We propose two formulations of DSS.
Output energy measures the magnitude of the observation y(t) over time.
At time T, sensor selection to maximize the energy ℰ is formulated as
max_{C(t)} ℰ for all t, where ℰ = ∑_{t=0}^{T} y(t)^⊤ y(t).
Adapting the approach of <cit.>, <ref> is solved through its Lagrange dual form (SI.<ref>).
<Ref> is predicated on a prediction of x(t), and while this assumption is reasonable in many scenarios,
the observability Gramian offers a generalized measure of output energy.
To form the discrete-time observability Gramian, let Φ(t_2,t_1) = A(t_2-1) A(t_2-2) ⋯ A(t_1) denote the transition matrix from time t_1 to t_2, so that the observability Gramian is
G_o = ∑_{t=0}^{T} Φ(t,0)^⊤ C(t)^⊤ C(t) Φ(t,0).
By the relation y(t) = C(t) Φ(t,0) x(0), the Gramian relaxes the need for a prediction of x(t).
By summing the inner products of C(t) Φ(t,0) with its transpose, <ref> is a direct generalization of the energy ℰ in <ref>.
In contrast to ℰ, G_o is a matrix rather than a scalar, and several measures of observability derived from G_o have been proposed. We consider the problem
min_{C(t)} J(G_o)
where J(·) denotes the trace, logarithm of the determinant, smallest eigenvalue, or rank, each of which provides a different observability measure (SI.<ref>).
For the trace, <ref> is solved with a linear program and can be applied to high dimensional systems (SI.<ref>).
The methodologies of <ref> and <ref> can handle time-varying sensors, incorporate additional constraints such as SGSS, and support the implementation of scalable algorithms.
When compared with alternative sensor selection techniques in <ref>, these approaches are versatile.
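As an illustration of the Gramian formulation, the following sketch ranks candidate single-coordinate sensors by their contribution to tr(G_o) on a synthetic system. This is a toy stand-in for the optimization above, not our analysis pipeline; it uses the fact that, for sensors built from unit rows, the trace decomposes over (sensor, time) pairs.

```python
# A sketch of trace-of-Gramian sensor ranking on a synthetic LTV system.
import numpy as np

rng = np.random.default_rng(1)
n, T, k = 20, 10, 3
A = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(T)]

Phi = [np.eye(n)]                        # Phi(t,0) by forward products
for t in range(T):
    Phi.append(A[t] @ Phi[-1])

# For C built from unit rows e_i, tr(G_o) = sum_t ||Phi(t,0)[i, :]||^2
score = np.zeros(n)
for P in Phi:
    score += np.sum(P**2, axis=1)        # row norms accumulate output energy

best = np.argsort(score)[::-1][:k]       # top-k sensors by trace contribution
print("selected sensors:", best)
```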
§.§ Structure Guided Sensor Selection
Sullivan's maxim “form ever follows function”
has long established the essence of the structure-function (S-F) causality dilemma.
When the system identification problem remains unresolved and the model of function contains errors, SGSS can exploit information in both the structure and function domains to constrain the DSS optimization problems.
SGSS considers system geometry and spatial arrangement, leveraging orthogonal experimental methods, to mitigate modeling errors and identify robust sensors.
Knowledge of the structure can aid our estimation and understanding of the dynamics based upon its function.
This perspective resonates with approaches in other domains both (1) algorithmically, where methods such as PageRank <cit.> and the fast multipole method <cit.> leverage additional structures to compute on complex systems, and (2) from data, where the S-F relationship has been recognized in the brain <cit.>, gene regulation <cit.>, and community structures <cit.>.
Paired data of the position (S) and effect (F) together, such as genome structure (Hi-C, S) and gene expression (RNAseq, F), is more powerful than either information alone.
Observability can be viewed as either a binary or a scalar feature.
A well known limitation of the binary Kalman observability test is that almost all systems of the form of <ref> are observable.
Apropos of this constraint, in 1974, Lin proposed structural observability, where the sparsity structures of the operators A and C determine a binary observability condition <cit.>.
In contrast, scalar measures of observability, derived directly from A(t) and C(t), provide graded measures of observability (SI.<ref>).
While DSS adopts the scalar metric perspective, SGSS departs from the binary view of observability.
The structure considered by SGSS is independent of the sparsity of A(t) and is instead based upon external attributes or structures of our system that may not appear in the dynamics.
While this notion of structure in SGSS varies from Lin's usage of the word, the challenge remains the same: despite great experimental advancements over the past half century, system identification and learning the dynamics is not a solved problem for biological systems.
Obtaining the data for traditional system identification techniques to be successful is both experimentally challenging and cost-prohibitive.
Present methodologies have utilized LTI methods on time-series gene expression signals <cit.>, and SGSS seeks to complement these methods by incorporating readily accessible data pertaining to genome structure.
Observability in a Small World.
The tendency to meet strangers with mutual acquaintances is a byproduct of the spatial structures that shape small world networks.
For instance, Milgram's famous experiment was guided by the geography of individuals from Nebraska to Boston <cit.>; the Watts-Strogatz (WS) model positions each vertex in a lattice before forming the network <cit.>; and the small world structure of gene regulatory networks is guided by the 3D organization of chromatin <cit.>.
In each case, the structure guides the formation and dissolution of interactions in the system.
The positioning of nodes on the lattice determines the expected value of each node as a sensor in small world networks generated with the WS model.
We constructed an ensemble of small world networks and evaluated the contribution of each node to the network observability based upon the Gramian.
The node contributions to observability on the lattices resembled their average contribution as sensors over all small world networks generated from each lattice (SI.<ref>).
Moreover, evenly spacing sensor nodes across the lattice proves an effective strategy for placing sensor nodes on small world networks when the precise small world adjacency structure A(t) is unknown (SI.<ref>).
This suggests that when the precise set of regulatory interactions or network edges of A(t) is only partially known but the underlying structure is well characterized, sensor selection can be guided by the structure.
The Nucleus is a Small World.
While network models of gene regulation and chromatin architecture have been developed from self organization principles <cit.> and molecular dynamics simulations <cit.>, quantification of properties of the genome from structural data remains unexplored.
We developed a four parameter network model whose adjacency structure qualitatively mirrors Hi-C (<ref>, SI.<ref>).
The model's small world and caveman properties capture the diagonal dominance and block structure characteristic of the fractal globule chromatin architecture and Hi-C data <cit.>.
Based upon our ability to fit networks to Hi-C with relatively few parameters, we proceeded to quantify the Small World Quotient (SWQ) for several Hi-C datasets (SI.<ref>).
Varying from individual chromosomes to the full genome, the SWQ of Hi-C networks was estimated at several different resolutions, and we observed small world properties in all cases.
The SWQ increased with the resolution and size of the Hi-C network and matrix, and the small world properties at multiple resolutions reflect the self similar, fractal structures of classic, multi-scale perspectives of Hi-C <cit.>.
Utilizing Hi-C data collected in parallel with the proliferation and reprogramming datasets, we evaluated the SWQ throughout time; however, in neither dataset did we observe a significant change in the SWQ throughout time.
The consistent small world propensity of Hi-C motivates augmenting DSS of gene regulatory networks based upon chromatin structure.
§.§ Application to Data
We applied DSS and SGSS on a range of data including both genomic and EEG signals (<ref>, SI.<ref>).
High dimensional, low frequency gene expression data are at the frontier of observability theory whereas the low dimensional, high frequency EEG signals are a classic problem to study.
We used standard approaches to learn LTI and LTV models (SI.<ref>), and the sensors of each model are assessed based on their ability to estimate the full system state from the sensor measurements (SI.<ref>).
Proliferation.
To validate our models of gene expression dynamics, we employed established biomarkers from the literature to estimate gene expression during cell proliferation <cit.> (SI.<ref>).
Human fibroblasts were synchronized in terms of both the cell cycle stage and circadian rhythm, offering optimal conditions for learning LTI and LTV models.
For sensor selection, we employed the KEGG pathway database, which contains manually curated sets of genes <cit.> (SI.<ref>).
Initially, we investigated pathways associated with the cell cycle, such as the Basal Transcription Factors (hsa03022), Cell Cycle (hsa04110), Circadian Rhythm (hsa04710), Circadian Entrainment (hsa04713), and Cellular Senescence (hsa04218) pathways.
LTV models had median component wise errors bounded near 10%, which outperformed LTI models when using sensors from all pathways except hsa04713 (SI.fig.<ref>).
Although LTV dynamics generally offer superior estimation, we observed that they exhibit decreased robustness due to issues such as overfitting and poor conditioning of the observability matrix (SI.<ref>).
Considering the role of transcription factors (TFs) in determining cell fate and the duality of controllability and observability, we hypothesized that including TFs is essential to forming effective sensor sets.
Consistent with this, while hsa04713 contained the third most genes of the sensor pathways considered thus far, it contained no TFs.
Repeating the estimation anew with all human KEGG pathways as sensor sets (n=346), we discovered that neither the presence of a large number of TFs nor a large sensor set is necessary for good estimation, thereby challenging our hypothesis (<ref>, SI.<ref>).
Mathematically, TFs' effectiveness as controllers but not observers, which is contrary to linear systems theory, is ascribed to the nonlinearity of biological systems.
Biologically, TFs' relatively low expression levels result in low output energy and less variability in their concentrations, necessitating more sensitive observer and estimation approaches.
We observed a bifurcating behavior in the estimation procedure.
Of the sensors that poorly estimate the initial state, the failed predictions deviate from biologically meaningful values by several orders of magnitude.
This improves the interpretability of our approach by offering a clear indicator of failure, even in cases where the true state of the system is unknown.
Pesticide Detection.
We built models of the gene regulatory network for Pseudomonas fluorescens SBW25 and selected biomarkers for detection of malathion, a commonly used insecticide <cit.>.
In one model, we learned LTI dynamics (A) and time invariant sensors (C), and in another model, we learned LTV dynamics (A(t)) and used DSS (C(t)).
Varying the number of sensors, we assessed the estimation capabilities of each model, and found that LTV dynamics and DSS improved prediction accuracy for reconstructing the expression levels of individual genes.
Although the objectives in <ref> and <ref> can always be further increased by adding more sensors, in practice, increasing the number of sensors may not improve estimation, as illustrated in <ref>.C1.
Cellular Reprogramming.
The low efficiency of Weintraub's famous myogenic reprogramming experiment remains an active challenge in cell reprogramming <cit.> (SI.<ref>).
Monitoring cells throughout reprogramming may offer insight to this issue;
however, both formulations of DSS fail to perform well on this system, likely due to the unsynchronized and noisy experimental conditions.
We applied SGSS to improve state estimation and increase observability by selecting spatially distributed genes.
Based on the hypothesis that colocalized genes are coregulated, we clustered genes according to Hi-C data and constrained DSS to select at most one sensor from each cluster (SI.<ref>).
By including constrained selection from Hi-C, the distribution of sensors across chromosomes shifted to mirror the distribution of genes (SI.<ref>-<ref>).
While we cannot measure the spatial proximity of clustered genes, we observed correlation in the expression values of several gene clusters, consistent with the concept of transcription factories.
Regardless, the estimation was improved by the Hi-C constrained SGSS.
When using few sensors, SGSS reduced the variance and improved estimation accuracy by approximately 25%.
To improve estimation further, we amplified the weak reprogramming signal by sampling genes involved in myogenesis and proliferation (SI.<ref>).
This targeted dataset provides improved conditions for biomarker identification, counteracting the experimental conditions of reprogramming.
Under these conditions, the estimation of the initial state for the reduced data shows median component-wise errors below 15% with all combinations of fixed or dynamic sensors from energy or Gramian based selection.
To close the design-build-test loop for myogenicSignal, we utilized these sensors to estimate the state of the complete reprogramming data.
Sensors selected from the reduced data, when optimized for energy, fail to estimate the full data well.
This occurs since the high energy genes in the targeted data have low energy in the complete reprogramming signals.
However, biomarkers identified via the Gramian on the targeted dataset continue to perform well at estimating the full data.
The median component-wise error is improved when applying Gramian selected biomarkers from the targeted data to the full data.
In converse to targeted observability <cit.>, where sensors are selected on the full reprogramming time series to observe only the myogenic signal, Gramian based sensor selection identified genes on the reduced data that estimate the full system well.
Beyond Genomics.
We employed DSS to rank different sensors observed in EEG signals.
The brain's properties are well-documented, and current research suggests EEGs are observable with few sensors <cit.>.
We ranked the sensors of 64-lead EEG signals based on their contributions to output energy and the observability Gramian.
Relative to the genomics data, where synchronized or controlled experiments have low frequency, high dimensional measurements, EEG data are high frequency and low dimensional, and the EEG signals are unsynchronized.
Instead, EEG signals were partitioned according to different tasks the participants performed, such as opening or closing their eyes, prior to performing sensor selection.
The sensor rankings exhibit great variability across different activities, which underscores the utility of DSS when participants change between tasks, a common occurrence in clinical settings.
In this context, the significance of sensors is determined by the participants' activities or states rather than specific time points from the start of the EEG signals.
Consistent with the principles of DSS, transitions between states coincide with variations in the most relevant sensors.
§ DISCUSSION
Many biological systems exhibit high dimensional, unknown dynamics that evolve over time, often in an unpredictable manner.
Here, we have extended state space and network observability methods to develop a template for the observability of systems that are constrained to high dimensional and temporally sparse data.
Beyond the initial step of over measuring the system prior to sensor selection, we stress the assumptions and limitations of our study.
In particular, time dependent observability is sensible when monitoring synchronized or perturbed systems, where control signals act as reference points in time for sensor selection.
When dealing with systems where the state evolves but cannot be determined a priori based upon the time, it is more appropriate to consider state dependent observability.
Furthermore, while we apply SGSS based upon gene clustering from Hi-C data to identify transcription factors, alternative procedures based upon gene regulatory networks, chromatin accessibility, or alternative data and clustering techniques may be utilized.
Although our application of SGSS based on Hi-C improves estimation, several user defined choices are made in this process which can be further refined.
The inherent flexibility and freedom of these procedures to be adapted for different systems and data make these templates versatile for sensor selection both in and beyond the genome.
Our work also raises several questions worthy of future pursuit.
Implicit in our state space model is the representation of genes as model states.
Expanding the state space representation to incorporate isoform, chromatin accessibility, or other increasingly available omics data could enhance these models.
Moreover, the time series experimental datasets utilized in our study are divorced from the RNA velocity and pseudotime approaches that are also utilized to study genome dynamics.
Such methodologies may be married with the framework of our study to facilitate the analysis of single cell resolved dynamics.
As contemporary trends in both science and industry emphasize harnessing computing power for modeling from larger data, it is crucial to highlight that data quantity must not compromise focused experimentation.
More data, Big Data, and recent excitement around AI models are not a panacea for science.
Rather, the collection of data to maximize observability must work in parsimony with modeling approaches to gain new insights into complex systems.
PART: Supporting Information
§ INTRODUCTION
This Supporting Information is organized as follows. Section <ref> provides information about how we build LTI and LTV models of dynamics from time series data and previous perspectives on observability. Sections <ref> and <ref> provide mathematical formulations of Dynamic Sensor Selection and Structure Guided Sensor Selection respectively. Finally, Section <ref> outlines particular data sets and results obtained on the data. Supplementary figures are provided in Section <ref>, and the references in the Supporting Information are distinct from those in the main text.
§ RELATED WORK ON LEARNING DYNAMICS, OBSERVABILITY, AND SENSOR SELECTION
The problem of learning dynamics and selecting sensors or measurements is a wide field of study with a deep history <cit.>. Here, we clarify some of the particular algorithms we rely on for learning the dynamics A(t) and survey classic measures of observability to distinguish our contributions. We consider the time-variant system with linear outputs
x(t+1) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t),
where x(t) ∈ ℝ^n is the state vector, A(t) ∈ ℝ^{n×n} is the state transition matrix, B(t) is the input matrix with control signal u(t), C(t) ∈ ℝ^{p(t)×n} is the measurement matrix, and y(t) ∈ ℝ^{p(t)} is the system output. The measurements are lower dimensional than the state of our system (i.e. p(t) ≪ n).
§.§ Learning Dynamics
We focus on the parameterization of the state transition matrices A(t) from evenly spaced time series data D, where the data are of the form
D = [ x(0) x(1) ⋯ x(T-1) x(T) ].
Here, D ∈ ℝ^{n×(T+1)} is a matrix with n rows, where each row corresponds to a measurement, and T+1 columns, where each column contains all measurements at a particular instant in time. Consistent with many data, we assume the data are collected at even intervals in time (columns), and we will learn a model where each state variable comes from a single measurement (row). We first review Dynamic Mode Decomposition (DMD), then shift focus to the LTV systems under the Data Guided Control (DGC) model.
§.§.§ Dynamic Mode Decomposition (DMD)
Here we outline the basic steps of DMD to provide a concise summary of the models we employ. For a comprehensive review on DMD and its applications in modeling biological and other complex systems, see <cit.> and references therein. Given time series data D, consider the first and last T samples, denoted by D^- and D^+, where
D^- = [ x(0) x(1) ⋯ x(T-1) ] and
D^+ = [ x(1) ⋯ x(T-1) x(T) ].
The purpose of the DMD algorithm is to learn the best linear model of the dynamics x(t+1) = A x(t) to explain the observed data. Learning the dynamics can be formalized by solving the matrix minimization
min_A ‖ D^+ - A D^- ‖_F^2,
whose solution is given by A = D^+ (D^-)^†, where † denotes the pseudo inverse. Consider the Singular Value Decomposition (SVD) D^- = U Σ V^⊤, where U ∈ ℝ^{n×r}, Σ ∈ ℝ^{r×r}, and V ∈ ℝ^{T×r} with r ≪ n. The least squares problem <ref> has the solution
A = D^+ (D^-)^† = D^+ V Σ^{-1} U^⊤,
which is the best linear model to explain the data D.
Often, when we have multiple replicates of an experiment, rather than the data being an n×T matrix, D will be an n×T×q tensor, where q denotes the number of replicates or separate instances of time series data generated from the system of interest. When there are multiple replicates, as in the case of all time series datasets we consider, we form D^+ and D^- similarly by removing the first and last time points to generate n×(T-1)×q tensors. Alternatively, the data can be averaged over multiple replicates to obtain an n×T matrix and be treated similarly as if only one replicate had been obtained.
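The following sketch implements the exact DMD steps above on synthetic data; the data matrix and dimensions are illustrative assumptions.

```python
# A minimal exact-DMD sketch: A = D^+ V S^{-1} U^T from the SVD of D^-.
import numpy as np

rng = np.random.default_rng(2)
n, T = 50, 30
A_true = np.eye(n) + 0.02 * rng.standard_normal((n, n))
D = np.empty((n, T + 1))
D[:, 0] = rng.standard_normal(n)
for t in range(T):
    D[:, t + 1] = A_true @ D[:, t]       # x(t+1) = A x(t)

Dm, Dp = D[:, :-1], D[:, 1:]             # D^- and D^+
U, S, Vt = np.linalg.svd(Dm, full_matrices=False)
r = int(np.sum(S > 1e-10 * S[0]))        # numerical rank
U, S, Vt = U[:, :r], S[:r], Vt[:r, :]

A_dmd = Dp @ Vt.T @ np.diag(1.0 / S) @ U.T
print("one-step residual:", np.linalg.norm(A_dmd @ Dm - Dp))
```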
§.§.§ Model Reduction
DMD is classically applied to physical systems, which are often relatively low dimensional when compared with gene expression and biological systems.
The high dimensional number of variables in omics data lends itself to state space models of higher dimensions than most applications of DMD. For instance, the human genome contains on the order of 20 thousand genes, each of which can be modeled as a state variable in x; however, such a model omits other important biological states or features such as isoform or protein expression. These large state spaces require model reduction to be computationally tractable, whereby the number of free parameters is reduced. Such an approach is consistent with many biological processes, where understanding the dynamics of only a few variables is sufficient to understand the dynamics of the system.
To study the dynamics of our high dimensional data D, we can change the coordinates to a lower dimensional space where it is easier to integrate, compute, and analyze our model.
Principal component (PC) based model reduction is standard in DMD. From the SVD of D^- computed in the DMD algorithm, the left singular vectors or PCs map gene expression to principal component space.
To make such a change of coordinates, we apply the transformation x̃(t) = U^⊤ x(t). Then the reduced state vector will have r components, where r is the rank of the data. Applying this transformation, we can produce a model reduced state transition matrix Ã, where Ã = U^⊤ A U. To see this reduction, consider
x̃(t+1) = U^⊤ x(t+1) = U^⊤ A x(t) = U^⊤ A U x̃(t) = Ã x̃(t).
From this, Ã can be derived directly from the data as
Ã = U^⊤ A U = U^⊤ D^+ V Σ^{-1},
which is an r×r matrix.
Returning to the observability of <ref>, we can cast the system output measurements in terms of the model reduced system as
x̃(t+1) = Ã x̃(t),
y(t) = C U x̃(t),
where x̃(0) = U^⊤ x(0).
Thus, the sensor selection problem can be cast in terms of the reduced system.
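As a sketch of this reduction, the snippet below forms the reduced operator and evolves the system entirely in principal component coordinates; the data are synthetic and the truncation rank is an illustrative choice.

```python
# A sketch of PC-based model reduction on top of DMD.
import numpy as np

rng = np.random.default_rng(3)
n, T, r = 50, 30, 5
D = np.cumsum(rng.standard_normal((n, T + 1)), axis=1)   # toy time series
Dm, Dp = D[:, :-1], D[:, 1:]

U, S, Vt = np.linalg.svd(Dm, full_matrices=False)
U, S, Vt = U[:, :r], S[:r], Vt[:r, :]    # truncate to rank r

A_red = U.T @ Dp @ Vt.T @ np.diag(1.0 / S)   # A~ = U^T D^+ V S^{-1}, r x r
z = U.T @ D[:, 0]                            # z(0) = U^T x(0)
for _ in range(T):
    z = A_red @ z                            # evolve in reduced coordinates
x_hat = U @ z                                # lift back to the full state space
```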
§.§.§ Switching Systems
Transitioning from LTI systems learned with DMD to LTV dynamics, consider a switching system, where two models of dynamics A_1 and A_2 describe the flow of the system during two distinct phases in time. The dynamics of this model can be described as
x(t+1) = A_1 x(t), 0 ≤ t ≤ T_s
x(t+1) = A_2 x(t), t > T_s,
where T_s denotes the time at which the dynamics switch from the first phase with A_1 dynamics to the second phase with A_2 dynamics. Dynamics such as these are well studied in social and communication systems and are also characteristic of many biological dynamics, such as bistable systems in cell regulation <cit.>.
Given data D_1 generated before T_s and D_2 generated after T_s, we can employ the DMD algorithm to model the dynamics A_1 and A_2 for each respective time period. Let D_i^- = U_i Σ_i V_i^⊤. Model reduction can then be applied to each time period, so that the dynamics of the system are expressed
x̃(t+1) = Ã_1 x̃(t), 0 ≤ t ≤ T_s
x̃(t+1) = Ã_2 x̃(t), t > T_s.
The full state space and reduced model are related to one another by the relation x̃(t) = U_i^⊤ x(t), where i = 1 or 2, depending on the phase of the system.
Given an initial condition x(0), the system evolves as
x̃(1) = Ã_1 U_1^⊤ x(0), (model reduction)
x̃(2) = Ã_1 x̃(1), (step forward)
⋮
x̃(T_s) = Ã_1 x̃(T_s - 1),
x̃(T_s + 1) = Ã_2 U_2^⊤ U_1 x̃(T_s), (switch dynamics)
x̃(T_s + 2) = Ã_2 x̃(T_s + 1), (step forward)
⋮
x̃(T) = Ã_2 x̃(T - 1).
Thus, at any time point t, the reduced, one step, state transition matrix for a switching system can be written:
Ã(t) = Ã_1 U_1^⊤, t = 0;
Ã_1, 0 < t ≤ T_s;
Ã_2 U_2^⊤ U_1, t = T_s + 1;
Ã_2, t > T_s + 1.
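The snippet below sketches the propagation of such a reduced switching system, following the piecewise one-step matrix above; the two bases and reduced operators are synthetic placeholders.

```python
# A sketch of propagating a reduced switching system.
import numpy as np

rng = np.random.default_rng(4)
n, r, Ts, T = 20, 4, 10, 20
U1, _ = np.linalg.qr(rng.standard_normal((n, r)))   # basis for phase 1
U2, _ = np.linalg.qr(rng.standard_normal((n, r)))   # basis for phase 2
A1r = 0.95 * np.eye(r) + 0.01 * rng.standard_normal((r, r))
A2r = 0.95 * np.eye(r) + 0.01 * rng.standard_normal((r, r))

x0 = rng.standard_normal(n)
z = A1r @ (U1.T @ x0)                    # t = 0: reduce, then step forward
for t in range(1, T):
    if t == Ts + 1:
        z = A2r @ (U2.T @ (U1 @ z))      # switch bases and dynamics at Ts + 1
    else:
        z = (A1r if t <= Ts else A2r) @ z
```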
§.§.§ State Transition Matrices Φ
Given time points t_1 and t_2 where t_1 ≤ t_2, we are interested in defining a single transition matrix between these times. For a LTI system, the transition matrix between t_1 and t_2 is found as A^{t_2 - t_1} or via a similar matrix exponentiation in the continuous case. For the LTV system <ref>, we can define a matrix Φ(t_2, t_1) that maps directly from time t_1 to t_2 as
Φ(t_2, t_1) = A(t_2 - 1) A(t_2 - 2) ⋯ A(t_1).
Then, given the initial condition x(t_0) and the control signals u(t_0), u(t_0 + 1), ⋯, u(t - 1) input to <ref>, the state x(t) can be written as
x(t) = Φ(t, t_0) x(t_0) + ∑_{i=t_0}^{t-1} Φ(t, i+1) B(i) u(i).
The transition matrix Φ has the following properties:
Φ(k_2, k_0) = Φ(k_2, k_1) Φ(k_1, k_0), k_0 ≤ k_1 ≤ k_2,
Φ(k, k) = I.
These properties hold for the reduced transition matrix Φ̃, which is similarly defined as Φ̃(t_2, t_1) = Ã(t_2 - 1) Ã(t_2 - 2) ⋯ Ã(t_1).
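A small helper sketch for Φ(t_2, t_1), with the semigroup property checked numerically; the A(t) here are synthetic.

```python
# A helper for the state transition matrix Phi(t2, t1) = A(t2-1) ... A(t1).
import numpy as np

def phi(A_list, t2, t1):
    """Product A(t2-1) @ ... @ A(t1); the identity when t2 == t1."""
    P = np.eye(A_list[0].shape[0])
    for t in range(t1, t2):
        P = A_list[t] @ P                # left-multiply the newest matrix
    return P

rng = np.random.default_rng(5)
A_list = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(6)]
# Semigroup property: Phi(k2, k0) = Phi(k2, k1) Phi(k1, k0)
assert np.allclose(phi(A_list, 5, 0), phi(A_list, 5, 2) @ phi(A_list, 2, 0))
```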
§.§.§ Data Guided Control (DGC) Model
The DGC model for approximating time variant linear systems was proposed by <cit.> to model the dynamics throughout cell reprogramming. This approach to learning dynamics considered the discrete-time LTV system with control:
x(t+1) = A(t) x(t) + B(t) u(t),
where B(t) is the control configuration matrix and u(t) is the input or control signal.
The fundamental assumption of the model is that gene expression of a population does not change considerably over time. Hence, the state transition matrix should be similar to the identity I. From this, the authors of <cit.> define A(t) as a rank one perturbation from the identity:
A(t) = I + (x(t+1) - x(t)) x(t)^⊤ / (x(t)^⊤ x(t)).
The authors then posed the challenge of selecting the control inputs for cell reprogramming as an optimal control problem. Based on the success of this model for controlling cell dynamics, we chose to study its observability.
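The rank one construction is simple to reproduce; below is a sketch building the DGC matrices from a toy expression time series, with the defining property A(t) x(t) = x(t+1) verified.

```python
# A sketch of the DGC rank-one state transition matrices.
import numpy as np

rng = np.random.default_rng(6)
n, T = 8, 5
X = np.abs(rng.standard_normal((n, T + 1)))      # toy expression time series

def dgc_step(x_now, x_next):
    """A(t) = I + (x(t+1) - x(t)) x(t)^T / (x(t)^T x(t))."""
    return np.eye(len(x_now)) + np.outer(x_next - x_now, x_now) / (x_now @ x_now)

A_t = [dgc_step(X[:, t], X[:, t + 1]) for t in range(T)]
assert np.allclose(A_t[0] @ X[:, 0], X[:, 1])    # A(t) x(t) = x(t+1) exactly
```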
§.§ Observability and Sensor Selection
The problem of observing dynamical systems is a broad area with deep roots in systems theory <cit.>. Mathematically, the ability to uniquely determine x(0) is sufficient to call a system observable, since the knowledge of A(t), x(0), and possibly a known control signal is sufficient to determine the state x(t) at any future point in time. Here we survey several classic tests for observability of linear systems, challenges associated with nonlinear observability, and how we can use observed outputs to estimate the full state of a system.
§.§.§ Tests for Observability of Linear Systems
Classic tests for observability seek to provide binary criteria regarding whether <ref> is observable. From the binary perspective, a system is mathematically observable if the full system initial condition x(0) can be uniquely determined.
In many instances, however, it is beneficial to consider more refined notions of observability, such as local observability, which is the ability to determine x(0) within a subset of possible states, or targeted observability, which is the ability to determine a subset of relevant state variables in x(0) <cit.>.
Here, we survey the Kalman Rank Condition, Popov-Belevitch-Hautus Test, and Structural Observability, three classical tests of observability.
Kalman Rank Condition.
The Kalman rank condition for observability, likely the most famous criterion, guarantees a system is observable when the initial state of the system x(0) can be uniquely determined from the measurements y(0), …, y(n-1). As a test for this, the rank of the so called observability matrix 𝒪 is compared with the dimension of the state: when
rank(𝒪) = dim(x), where 𝒪 = [ C; C A; C A^2; ⋮; C A^{n-1} ],
the linear system
[ y(0); y(1); y(2); ⋮; y(n-1) ] = 𝒪 x(0)
has a unique solution for x(0), and the system is observable. This approach is the foundation upon which subsequent tests and our estimation procedure are developed.
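The rank test is direct to compute; the sketch below builds the observability matrix for a toy LTI pair (A, C) and checks the Kalman condition.

```python
# A sketch of the Kalman rank test for observability.
import numpy as np

def observability_matrix(A, C):
    blocks, CA = [], C.copy()
    for _ in range(A.shape[0]):
        blocks.append(CA)
        CA = CA @ A                      # next block: C A^k
    return np.vstack(blocks)

A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])               # measure only the first state
O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O) == A.shape[0])   # True: observable
```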
PBH Test.
Building upon the Kalman condition, the Popov-Belevitch-Hautus (PBH) test, also known as the Hautus Lemma, guarantees observability when
rank(𝒫) = dim(x), where 𝒫 = [ A - λI; C ],
for all eigenvalues λ of A <cit.>. Rank deficiency in the PBH test can be seen as the existence of an eigenvector in the null space of the observability matrix 𝒪. By verifying rank(𝒫) = n for all λ, the PBH test provides an equivalent check that rank(𝒪) = n, or that x(0) can be determined from the first n measurements y(0), …, y(n-1).
Although the Kalman Rank Condition and PBH Test offer convenient and consistent methods for assessing the observability of a LTI system, based on the matrices defining dynamics and observables, they are seldom utilized in practice due to three main challenges:
(C1) Identifiability: If we fail to identify A to infinite precision, which will certainly be the case in any experimental system such as the ones studied here, then both the Kalman rank condition and PBH test will almost certainly deem the system observable <cit.>.
(C2) Cost: Explicitly constructing and determining the rank of 𝒪 or 𝒫 for all λ is relatively expensive, particularly for high dimensional systems. For instance, many numerical schemes compute the SVD to determine the matrix rank, which is approximately a 3rd order polynomial operation, once the matrix is constructed.
(C3) Condition: Both the PBH and Kalman conditions provide a binary condition for observability where either YES a system is observable or NO it is not. In practice, however, many systems which are experimentally studied are unobservable, yet this does not deter us from gaining new insights. Rather, a graded degree of observability or targeted observability, where we are interested in only a few hidden states, is sufficient to understand internal dynamics from limited outputs.
To address C1-3, various alternatives have been proposed. For instance, Lin's structural controllability, in the next section, addressed both C1 and C2. Continuing this work, the measures of observability and formulations proposed in <ref> address C2 and C3 by allowing fast algorithms to provide scalar measures of observability, and the structural constraints of <ref> are designed to address C1.
Lin's Structural Observability.
To address the concern of system identifiability and, more recently, the cost involved in assessing observability for large or high dimensional systems, the theory of structural controllability was developed for LTI systems, and structural observability is formulated similarly <cit.>. To address (C1), Lin identified graph structures, based on the sparsity structures of A and C, that allow LTI dynamics to be observable based on the PBH test <cit.>. To address (C2), Liu, Slotine, and Barabási developed the Minimum Inputs theorem, from which a fast, graph based maximum matching algorithm can be applied to efficiently select sensor nodes.
Structural observability and controllability have been employed to study a variety of input/output systems across domains <cit.>. Yet there remain limitations to this approach as well <cit.>.
For our work in particular, it leaves challenge C3 unanswered, as structural observability provides a binary YES/NO condition.
§.§.§ Measures of Observability
The system output or observability energy provides a direct method to measure observability and address (C3). Output energy quantifies the amount of energy, defined as a norm, of the output measurements y(0), y(1), … transmitted from a system with the equation
ℰ = ∑_{t=0}^{∞} y(t)^⊤ y(t).
When the output energy is small or zero, we lack useful information about the system's state. Therefore, one way to frame the sensor selection problem is as a maximization of the output energy.
The observability Gramian is a generalization of output energy from which many measures of observability have been proposed. Based on the relation y(t) = C A^t x(0), the Gramian generalizes <ref> as
G_o = ∑_{t=0}^{∞} (A^t)^⊤ C^⊤ C A^t = 𝒪^⊤ 𝒪.
This definition of the observability Gramian can also be derived from the discrete-time Lyapunov equation A^⊤ G_o A - G_o = -C^⊤ C. Due to the infinite summation, A must be stable with all eigenvalues bounded inside the unit circle to prevent G_o from diverging.
The eigenvalues of G_o represent the relative observability of each system mode:
* The minimum eigenvalue λ_min(G_o) is a measure of the output energy for the least observable mode, and λ_min^{-1}(G_o) characterizes the maximum estimation uncertainty.
* The maximum eigenvalue λ_max(G_o) is the measure of the output energy for the most observable mode, and λ_max^{-1}(G_o) characterizes the minimum estimation uncertainty.
* The eigenvector corresponding to the maximum eigenvalue λ_max(G_o) is the direction with the largest gain (and thus the most observable mode); therefore a small perturbation in that direction yields an output energy equivalent to that of a larger perturbation in the direction of the eigenvector corresponding to the minimum eigenvalue λ_min(G_o) (the direction of the least observable mode). Because the condition number κ(G_o) = λ_max(G_o)/λ_min(G_o) measures the ratio of the maximum eigenvalue to the minimum eigenvalue, an observability Gramian G_o with a large condition number indicates that the output energy is dominated by some modes, while others are difficult to observe. Furthermore, κ(G_o^{-1}) captures the shape of the estimation uncertainty ellipsoid.
* The log det(G_o^{-1}) = -log det(G_o) measures the log of the volume of the estimation uncertainty ellipsoid.
* The trace of the Gramian, i.e., tr[G_o], captures the average output energy, and tr[G_o^{-1}] measures the average estimation uncertainty.
Based on these, the following measures of observability have been proposed,
J_1(G_o) = tr[G_o^{-1}], J_2(G_o) = log det(G_o^{-1}), J_3(G_o) = -λ_min(G_o),
J_4(G_o) = rank(G_o), J_5(G_o) = tr[G_o],
where J_1, J_2, and J_3 are minimized to minimize the uncertainty in the state estimate, while J_4 and J_5 are instead maximized. Note that J_1 and J_2 are convex functions. Also, minimizing J_3 is equivalent to maximizing λ_min(G_o), which is a concave function; hence, J_3 is also a convex function.
The measures J_i for i = 1, 2, 3 are defined only when G_o is full rank. To handle such unobservable cases, one could use metrics such as tr[G_o^†], which corresponds to the average energy required to move the system around the observable subspace, or the log product of nonzero eigenvalues, which relates to the “volume” of the subspace reachable with one unit of input energy.
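The sketch below computes these measures from a truncated Gramian on a synthetic LTI pair; the horizon, tolerance, and system are illustrative assumptions.

```python
# A sketch computing the observability measures J_i from a truncated Gramian.
import numpy as np

rng = np.random.default_rng(7)
n = 6
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # spectral radius 0.9
C = np.eye(n)[:2]                                        # two sensors

G = np.zeros((n, n))
Ak = np.eye(n)
for _ in range(50):                      # truncated sum of (A^t)' C' C A^t
    G += Ak.T @ C.T @ C @ Ak
    Ak = A @ Ak

eig = np.linalg.eigvalsh(G)
full_rank = eig.min() > 1e-12
J = {
    "tr(G^-1)":     np.trace(np.linalg.inv(G)) if full_rank else np.inf,
    "logdet(G^-1)": -np.sum(np.log(eig[eig > 1e-12])),
    "-lambda_min":  -eig.min(),
    "rank":         int(np.sum(eig > 1e-12)),
    "tr(G)":        np.trace(G),
}
print(J)
```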
§.§.§ Least Squares Estimation
The mathematical notion of observability and our ability to estimate hidden states of a system from limited output are intimately related.
The relationship between these concepts begins with the Kalman rank condition, which guarantees mathematical observability for LTI systems
when there is a unique solution for x(0) to the system
[ y(0); y(1); ⋮; y(n-1) ] = [ C; C A; ⋮; C A^{n-1} ] x(0).
This problem is solved for x(0) directly as x(0) = 𝒪^† Y, where Y is the stacked vector of outputs on the left of <ref> and 𝒪 is the observability matrix as defined in <ref>.
This approach is readily adapted for LTV systems with control. Given a LTV system <ref>, with time dependent dynamics A(t), measurements C(t), input matrices B(t), and control signals u(t), the output measurements of the system evolve over time as
y(0) = C(0) x(0),
y(1) = C(1) x(1) = C(1) Φ(1,0) x(0) + C(1) B(0) u(0),
⋮
y(T) = C(T) x(T) = C(T) Φ(T,0) x(0) + C(T) ∑_{i=0}^{T-1} Φ(T, i+1) B(i) u(i).
Thus, one can express the system as
Y = 𝒪 x(0) + H, with the solution x(0) = 𝒪^† (Y - H),
where
Y = [ y(0); y(1); ⋮; y(T) ],
𝒪 = [ C(0); C(1) Φ(1,0); ⋮; C(T) Φ(T,0) ], and H = [ 0; C(1) B(0) u(0); ⋮; C(T) ∑_{i=0}^{T-1} Φ(T, i+1) B(i) u(i) ].
There is a unique solution for x(0) in the above two equations when 𝒪 has full column rank, motivating the Kalman rank condition rank(𝒪) = dim(x).
Assuming that the measurements are corrupted by independent and identically distributed (i.i.d.) zero-mean Gaussian noise with distribution 𝒩(0, σ), using a weighted least squares formulation, the minimum variance estimate x^*(0) is given by
x^*(0) = (𝒪^⊤ 𝒪)^{-1} 𝒪^⊤ (Y - H).
The estimate x^*(0) has error covariance
Σ_0 = 𝔼[(x^*(0) - x(0))(x^*(0) - x(0))^⊤] = σ (𝒪^⊤ 𝒪)^{-1} = σ G_o^{-1}.
The least squares estimator is known to be efficient (i.e., the Cramer-Rao lower bound is achieved); therefore the Fisher Information Matrix (FIM) F = σ^{-1} 𝒪^⊤ 𝒪 is exactly the inverse of the estimation covariance <cit.>.
To see the relation between the least squares estimation procedure and the Gramian, note that the observability Gramian can be expressed in terms of 𝒪 based on the relationship
𝒪^⊤ 𝒪 = [ C(0)^⊤ Φ(1,0)^⊤ C(1)^⊤ ⋯ Φ(T,0)^⊤ C(T)^⊤ ] [ C(0); C(1) Φ(1,0); ⋮; C(T) Φ(T,0) ]
= ∑_{i=0}^{T} Φ(i,0)^⊤ C(i)^⊤ C(i) Φ(i,0) = G_o.
By relation (<ref>), the observability Gramian G_o is proportional to the FIM and inversely proportional to the estimate covariance. Consequently, the eigenvalues of the observability Gramian directly control the Fisher information and inversely control the estimation covariance, and can be used to define sensor selection measures/metrics. Moreover, when G_o is full rank, the least squares estimate x^*(0) in <ref> is well determined.
Despite this relationship between observability and estimating x(0), to date, the marriage of these concepts for biological systems has been prevented due to practical considerations.
* Noise: Experimental observations or measurements in a real system are corrupted with noise.
* Scalability: the least squares approach requires a significant number of observations to achieve the theoretical guarantee of observability.
In the experimental setting, our measurements y(0), y(1), … will always be corrupted with noise, and for high dimensional systems where x ∈ ℝ^n and n is large, such as genomics data (n ≫ 20,000), it is cost prohibitive and experimentally infeasible to obtain n measurements. Fortunately, through careful sensor selection, truncated observability matrices 𝒪, where the rows are truncated to only include time points for which we have data, can be designed so that least squares estimation is still well conditioned. We aim to address these considerations in addition to C1-3 in the following sections regarding Dynamic and Structure Guided Sensor Selection.
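The sketch below carries out this least squares recovery of x(0) from a truncated observability matrix on a synthetic LTV system with noisy, partial observations (no control, so H = 0); the sizes and noise level are illustrative.

```python
# A sketch of least squares estimation of x(0) from partial observations.
import numpy as np

rng = np.random.default_rng(8)
n, p, T = 15, 3, 12
A = [np.eye(n) + 0.02 * rng.standard_normal((n, n)) for _ in range(T)]
C = [np.eye(n)[rng.choice(n, p, replace=False)] for _ in range(T + 1)]

x0 = rng.standard_normal(n)
O_blocks, Y, Phi = [], [], np.eye(n)
for t in range(T + 1):
    O_blocks.append(C[t] @ Phi)          # rows C(t) Phi(t,0)
    Y.append(C[t] @ Phi @ x0 + 1e-3 * rng.standard_normal(p))
    if t < T:
        Phi = A[t] @ Phi

O = np.vstack(O_blocks)                  # truncated observability matrix
x0_hat, *_ = np.linalg.lstsq(O, np.concatenate(Y), rcond=None)
print("relative error:", np.linalg.norm(x0_hat - x0) / np.linalg.norm(x0))
```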
§ DYNAMIC SENSOR SELECTION
In this section, we return to <ref> and address the challenge of selecting the best measurement matrices C(t) when given known dynamics and state transition matrices A(t). We first review some basic objects and control perspectives on observability for LTI systems and then present two formulations of sensor selection for time variant systems.
§.§ Oscillator Networks
Oscillator networks are characterized by internal dynamics of each oscillator, as well as interactions among them. This is described by the equation:
dx/dt = F(x, μ) - L(x),
as seen in <ref>.
Here, x is a vector representing the locations or values of each oscillator, F is the dynamics of individual oscillators with internal parameters μ, and L is the diffusion operator specific to the network structure.
The dynamics F can be any oscillator, such as the Van der Pol or Andronov-Hopf oscillators, where each oscillator may possess its unique parameters. Additionally, the coupling between oscillators in L can occur across networks with diverse directionalities and weights. This framework via <ref> encompasses several well-known oscillators, including: (1) the Kuramoto oscillator, in which individual oscillators positioned along the unit circle interact with sinusoidal functions <cit.>; (2) Turing's equation, which generates oscillations by coupling two stable systems, and Smale's related work <cit.>.
Continuing the work of Kuramoto, Turing, and Smale, systems of this form have garnered recent interest in the study of higher order structures. For instance, network motifs couple groups of oscillators to form complex and emergent behavior that has been studied in network science <cit.>, systems biology <cit.>, communication systems <cit.>, among other systems. In network biology, genetic circuits such as the Repressilator <cit.>, Goodwin Oscillator <cit.>, and Toggle Switch <cit.> can be mathematically expressed in the form of <ref>.
§.§.§ Repressilator
The canonical Repressilator shown in <ref> is one such instance of a network of coupled oscillators, each of which represents the expression from DNA to mRNA and finally protein. In Escherichia coli, the expression from mRNA to protein of lacI, tetR, and cl can be seen as three oscillators, where the mRNA and protein concentration for a single gene is both regulated internally (F) and repressed by coupled proteins (L). Borrowed from <cit.>, the continuous deterministic Repressilator has dynamics
dm_lacI/dt = -m_lacI + α/(1 + p_cl^n) + α_0,
dm_tetR/dt = -m_tetR + α/(1 + p_lacI^n) + α_0,
dm_cl/dt = -m_cl + α/(1 + p_tetR^n) + α_0,
dp_lacI/dt = -β p_lacI + β m_lacI,
dp_tetR/dt = -β p_tetR + β m_tetR,
dp_cl/dt = -β p_cl + β m_cl.
Here, there are six chemical species describing the mRNA (m) and protein (p) concentrations of the lacI, tetR, and cl genes; α_0, α, β, and n are parameters; and there is a pair of equations describing the dynamics of the mRNA and protein concentrations separately. Each form of the equations can be expressed in terms of <ref>, where F describes how the concentrations change as a function of the concentration of a given species, and L describes how the concentration changes as a function of either the upstream mRNA or the repressing protein.
§.§.§ Oscillator Types
Oscillator networks, particularly those found in gene regulatory or neurological systems, can be directly modeled from data, as we do in the analysis of various genomics and EEG time series data; however, in addition to empirical approaches, we may also explore theoretical models of such oscillators. We turn to the following well known oscillators:
* Van der Pol (VP):
d²x/dt² - μ(1 - x²) dx/dt + x = 0, or dx/dt = μ(1 - y²) x - y,
dy/dt = x.
* Goodwin (GW):
dx_1/dt = α/(1 + x_3^n) - x_1,
dx_2/dt = x_1 - x_2,
dx_3/dt = x_2 - x_3.
* Andronov-Hopf (AH):
dx/dt = ax - by - x(x² + y²),
dy/dt = bx + ay - y(x² + y²).
The VP oscillator was introduced to model vacuum tubes by the Dutch physicist Balthasar van der Pol (1889-1959).
This second order differential equation, and its representation as a pair of first-order differential equations via the Koopman theory, describes a non-conservative oscillator that exhibits nonlinear damping.
This model has since been applied to study several other biological and complex systems, such as neurological circuit action potentials, and has been well studied <cit.>.
The GW oscillator, proposed by biologist Brian Goodwin (1931-2009), was an early model of a genetic oscillator with three variables to model concentrations of RNA, proteins, and a final product all produced from the same gene and regulated with negative feedback <cit.>. During the early to mid-1960s, this model of a genetic oscillator was proposed shortly after François Jacob and Jacques Monod introduced their model of gene regulation <cit.>. The oscillator was developed as a Hamiltonian system and was among the early uses of Hill functions in biology.
The AH oscillator, named for the physicist Aleksandr Andronov (1901-1952) and the astronomer and mathematician Eberhard Hopf (1902-1983), as shown in <ref>, is one particular form of a general AH oscillator: a pair of coupled oscillators x and y that exhibit a Hopf bifurcation (when a critical point becomes a limit cycle); the Hopf bifurcation is sometimes also referred to as the Poincaré-Andronov-Hopf bifurcation to provide attribution to Poincaré and Andronov for their work studying this bifurcation <cit.>. Numerous other ecological and biological oscillators, such as the Lotka–Volterra, SIR, and Hodgkin–Huxley models, exhibit Hopf bifurcations as well <cit.>.
Internal Oscillator Parameters.
The trajectories of an oscillator network are governed in part by the internal dynamics of each oscillator, which are in turn governed by parameters associated with the individual oscillators.
The VP oscillator is governed by the parameter μ, which controls the stability and limit cycle of the system. In <ref>.A, the dynamics of three instances of the VP oscillator are shown. As μ changes from less than zero to greater than zero, the limiting behavior of the system changes from the origin being a stable critical point to a limit cycle with fast and slow transitions. The top row in <ref>.A plots the position x in terms of time, and the stability of this system is interpreted based upon how the oscillation amplitude changes between periods. For instance, in the top row where μ<0, the contraction of the position over time indicates stability. The bottom row of <ref>.A plots the position x relative to the velocity y, and there the stability can be seen based upon the shift of the trajectory toward the origin or into a stable limit cycle. For instance, in the bottom row where μ>0, the trajectory enters a limit cycle where it remains throughout time, similar to the fixed amplitude oscillations above. By varying μ, the behavior of individual oscillators is changed, which can affect the dynamics of an oscillator network.
The GW oscillator of <ref> is governed by two parameters α and n. Often, and in the original parlance of Goodwin, this oscillator is written with more parameters, but the number of free parameters may be reduced to 2 in the case of equal degradation rates when written in the dimensionless form, which is useful for the purpose of considering its bifurcation behavior <cit.>. Regardless, the Goodwin oscillator exhibits a Hopf bifurcation, as illustrated in <ref>.B. By fixing the value of n and varying α, we see that for certain values of α the system is stable while other values produce internal oscillations in the form of a limit cycle. The AH, VP, Liénard, and many other oscillators exhibit Hopf bifurcations as well.
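To illustrate the role of μ, the sketch below integrates the first-order VP form for a stable and an unstable parameterization; the values are illustrative.

```python
# A sketch of the Van der Pol oscillator for two values of mu.
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, s, mu):
    x, y = s
    return [mu * (1 - y**2) * x - y, x]

for mu in (-0.5, 0.5):                   # stable focus vs. limit cycle
    sol = solve_ivp(vdp, (0, 40), [1.0, 0.0], args=(mu,),
                    t_eval=np.linspace(0, 40, 2000))
    amp = np.abs(sol.y[0, -500:]).max()  # late-time amplitude of x
    print(f"mu = {mu:+.1f}: late amplitude ~ {amp:.3f}")
```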
§.§.§ Diffusion and Strongly Coupled Oscillators
The trajectories of oscillator networks are governed in part by the network structure and external interactions between oscillators.
Diffusive coupling of oscillators forces the state of an oscillator toward the states of its neighboring oscillators. In particular, when the oscillators are coupled on a graph 𝒢 with a Laplacian L, the diffusive coupling of a single oscillator x_i can be written as
L(x_i) = λ ∑_{j∈𝒩(i)} (x_i - x_j),
where 𝒩(i) is the set of neighbors or oscillators that are adjacent to oscillator i on the graph 𝒢, λ denotes the coupling strength, and the vector x_i represents all state variables of the oscillator.
Returning to <ref>, there are three VP oscillators coupled in a directed chain. Each of these oscillators has the internal dynamics F(x_i) of <ref>, and the chain of interactions in <ref>.D defines the graph Laplacian L with two edges, from oscillator x_1 to x_2 and from x_2 to x_3. The full dynamics of this system are written as
dx_1/dt = μ_1 (1 - y_1²) x_1 - y_1,
dy_1/dt = x_1,
dx_2/dt = μ_2 (1 - y_2²) x_2 - y_2 - λ(x_2 - x_1),
dy_2/dt = x_2 - λ(y_2 - y_1),
dx_3/dt = μ_3 (1 - y_3²) x_3 - y_3 - λ(x_3 - x_2),
dy_3/dt = x_3 - λ(y_3 - y_2).
Here, there are three Van der Pol oscillators written in their two dimensional form F(x_i, μ_i), each with its own internal state x_i = [x_i y_i]^⊤ and internal parameter μ_i discussed in the prior section. In eq. <ref>, x_1 and y_1 are not influenced by external parameters from other oscillators, so the long term behavior of the oscillator network is controlled by oscillator 1.
From Lin's perspective of structural observability/control, oscillator 1 is called a root node. This means all other nodes can be influenced by oscillator 1, while it is unaffected by any other node. As a result, from the perspective of structural observability/control, oscillator 1 is a good sensor/actuator. This is based on the notion that the LTI system defined on the adjacency matrix of the graph from <ref>.D is observable/controllable with oscillator 1 as a sensor/actuator according to the PBH test.
One factor not considered by Lin's structural perspective is the strength or weight of the coupling between oscillators. In eq. <ref>, all oscillators are coupled with an identical parameter λ>0, but in practice, each interaction could have its own coupling strength. Since coupled oscillators interact by adjusting their phase relative to one another, the coupling coefficient λ controls both the speed and direction with which oscillators interact <cit.>.
In <ref> and <ref>, for instance, several instances of coupled VP and AH oscillators are shown. We consider oscillator networks containing either 2 or 5 oscillators coupled in a ring or cycle network where all interactions have a weight λ. In the first two rows, we consider the case where μ_1 = 1 and μ_2 = -1, such that if the oscillators were uncoupled, one would decay quickly toward the origin and one would exhibit a limit cycle. In the top row, λ>0, such that the oscillator in the limit cycle induces synchronized oscillations in the otherwise stable oscillator; in the second row, λ<0, so that the induced oscillations of the stable VP oscillator are out of phase with the oscillator in the limit cycle. Rows 3 and 4 recreate rows 1 and 2 where both oscillators are parameterized with μ>0 so that both oscillators exhibit limit cycles. Here the coupling λ synchronizes the oscillations rather than inducing new ones, and comparing between the left and right columns, where the magnitude of λ changes, we see that as the coupling strength increases, the oscillators shift into synchrony or phase locking more quickly. Finally, rows 5 and 6 recreate the results of rows 3 and 4 with an expanded number of oscillators so that the synchronization and phase locking are clearer.
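A sketch of the three-oscillator chain above, with diffusive coupling and oscillator 1 as the uncoupled root node, is given below; parameter values and initial conditions are illustrative.

```python
# A sketch of the directed chain of three coupled Van der Pol oscillators.
import numpy as np
from scipy.integrate import solve_ivp

mu, lam = [0.5, 0.5, 0.5], 1.0

def chain(t, s):
    x1, y1, x2, y2, x3, y3 = s
    return [
        mu[0] * (1 - y1**2) * x1 - y1,                    # root: uncoupled
        x1,
        mu[1] * (1 - y2**2) * x2 - y2 - lam * (x2 - x1),  # pulled toward 1
        x2 - lam * (y2 - y1),
        mu[2] * (1 - y3**2) * x3 - y3 - lam * (x3 - x2),  # pulled toward 2
        x3 - lam * (y3 - y2),
    ]

s0 = [1.0, 0.0, -0.5, 0.5, 0.2, -1.0]                     # unsynchronized start
sol = solve_ivp(chain, (0, 60), s0, t_eval=np.linspace(0, 60, 3000))
x1, x3 = sol.y[0], sol.y[4]
print("late-time |x1 - x3| max:", np.abs(x1 - x3)[-300:].max())
```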
§.§ Sensor Selection Formulations
Based on the oscillator networks as a guiding example and the reviewed methods of observability in <ref>, here we propose two methods for dynamically selecting sensors.
Consider the system
𝐱_t+1 =𝐀_t𝐱_t
𝐲_t =𝐂_t𝐱_t,
which is equivalent to <ref> without control. We summarize two methods of sensor selection and design for <ref>, which are extensions of similar methods for LTI systems.
* Energy based selection: One graded approach to maximize observability is to maximize the output energy of the system, formalized as:
max_𝐂_tℰ=max_𝐂_t𝐱_0^⊤𝐖_o𝐱_0 subject to 𝐂_t𝐂_t^⊤=𝐈.
A similar formulation with LTI dynamics and fixed outputs was used by Hasnain et al. to select biomarkers <cit.>. This maximization problem is a quadratic program with linear constraints, and particularly when 𝐀 has been computed from DMD, the sensor rankings from this optimization can be obtained quickly.
* Gramian based selection: Sensors can be selected to optimize a particular function of the Gramian that corresponds to the observability of the system. We consider the maximization
max_𝐂_t J(𝐖_o),
where J is one of the five functions discussed in <ref>.
Summers et al. utilized this formulation for fixed sensor selection on LTI systems <cit.>. The five proposed functions J of the Gramian are submodular, making them conducive to greedy algorithms for sensor selection, and when the trace is used, <ref> is solved with a linear program.
Since the objective functions of the optimization problems in <ref> and <ref> are continuous, (C3) is addressed by providing a scalar value of observability rather than a binary criterion.
Moreover, both optimizations allow for the use of model reduction and can be solved with fast algorithms, addressing (C2).
In the subsequent sections, we provide further detail regarding how to solve these optimization problems and select sensors.
§.§ Output Energy Maximization
Here we provide our method to solve <ref> based on its Lagrangian dual form. We first discuss how this problem is solved when the sensors are fixed for all time and then consider the dynamic selection of sensors.
§.§.§ Time Invariant Sensors
The objective is to select sensors that maximize the signal or output energy of the system ℰ where
ℰ=∑_i=t_0^T𝐲_i^⊤𝐲_i=∑_i𝐱_0^⊤Φ(i,t_0)^⊤𝐂^⊤𝐂Φ(i,t_0)𝐱_0,
which is formalized as
max_𝐂 ℰ subject to 𝐂𝐂^⊤=𝐈.
<Ref> is the fixed sensor formulation of <ref>. Hasnain et al. show that
the Lagrangian dual formulation of this problem is
max_𝐂 ℰ+ℒ where ℒ=tr(Λ(𝐂𝐂^⊤-𝐈)),
where Λ are the dual variables <cit.>. Following eq. 5 of <cit.>,
∂(ℰ+ℒ)/∂𝐂^⊤=2𝐆𝐂^⊤-2Λ𝐂^⊤=0, such that 𝐆𝐂^⊤=Λ𝐂^⊤,
where 𝐆=∑_iΦ(i,t_0)𝐱_0𝐱_0^⊤Φ(i,t_0)^⊤ collects the trajectory outer products so that ℰ=tr(𝐂𝐆𝐂^⊤). This last expression implies that the eigenvectors of 𝐆 are the sensor weights, or the importance of each sensor, at a critical point of the signal output energy with respect to the sensors.
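A minimal sketch of this fixed-sensor ranking, assuming time-invariant dynamics 𝐱_t+1=𝐀𝐱_t so that Φ(i,t_0)=𝐀^(i-t_0) (the horizon T is an illustrative choice):

```python
import numpy as np

def rank_fixed_sensors(A, x0, T):
    """Rank state variables as fixed sensors for x_{t+1} = A x_t.

    The output energy is E = tr(C M C^T) with M = sum_t x_t x_t^T, so the
    stationary points of E under C C^T = I are spanned by eigenvectors of M;
    the entries of the leading eigenvector weight each state's contribution.
    """
    x = np.asarray(x0, dtype=float)
    M = np.zeros((x.size, x.size))
    for _ in range(T + 1):
        M += np.outer(x, x)
        x = A @ x
    _, eigvecs = np.linalg.eigh(M)              # M is symmetric PSD
    weights = np.abs(eigvecs[:, -1])            # leading eigenvector
    return np.argsort(weights)[::-1], weights   # sensors ranked best-first
```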
We extend the approach to the selection of time varying sensors.
§.§.§ Time Variant Sensors
Suppose we seek to maximize the signal energy at time t, such that our objective function is
ℰ(t)=𝐲_t^⊤𝐲_t=𝐱_0^⊤Φ(t,t_0)^⊤𝐂_t^⊤𝐂_tΦ(t,t_0)𝐱_0.
Applying the prior Lagrangian formulation here, we seek to maximize ℰ(t) with respect to 𝐂_t at all times. Since 𝐂_t_1 and 𝐂_t_2 are independent of one another for all t_1 and t_2, we can maximize the energy at each time ℰ(t) independently. To do so, we extend the fixed sensor selection optimization of ℰ to an optimization of ℰ(t), using a similar approach as in <cit.>. This is formulated as follows:
∂(ℰ_t+ℒ)/∂𝐂_t^⊤ =∂/∂𝐂_t^⊤(𝐱_0^⊤Φ(t,t_0)^⊤𝐂_t^⊤𝐂_tΦ(t,t_0)𝐱_0-tr(Λ(𝐂_t𝐂_t^⊤-𝐈)))
=∂/∂𝐂_t^⊤(tr(𝐱_0^⊤Φ(t,t_0)^⊤𝐂_t^⊤𝐂_tΦ(t,t_0)𝐱_0)-tr(Λ(𝐂_t𝐂_t^⊤-𝐈)))
=∂/∂𝐂_t^⊤(tr(𝐂_tΦ(t,t_0)𝐱_0𝐱_0^⊤Φ(t,t_0)^⊤𝐂_t^⊤)-tr(Λ(𝐂_t𝐂_t^⊤-𝐈))).
At this point, let
𝐆(t, t_0)= Φ(t,t_0)𝐱_0𝐱_0^⊤Φ(t,t_0)^⊤.
Then, returning to <ref>, we can obtain the optimal sensors 𝐂_t:
∂(ℰ_t+ℒ)/∂𝐂_t^⊤ =∂/∂𝐂_t^⊤(tr(𝐂_t𝐆(t,t_0)𝐂_t^⊤)-tr(Λ(𝐂_t𝐂_t^⊤-𝐈)))
= 2𝐆(t,t_0)𝐂_t^⊤-2Λ𝐂_t^⊤=0.
This has a similar interpretation to the time invariant case, where the eigenvectors of 𝐆(t,t_0) denote the contribution of each state variable to observability at time t.
To solve this and select sensors from time t_0,…,t, we must form 𝐆(t, t_0) for all t. This requires integrating the system forward from the initial condition 𝐱_0, which can be performed efficiently using model reduction, and then computing the largest eigenvector of the matrices 𝐆(t, t_0), for which there are fast algorithms.
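Because 𝐆(t,t_0) is the rank-one matrix 𝐱_t𝐱_t^⊤ (with 𝐱_t = Φ(t,t_0)𝐱_0), its leading eigenvector is simply the normalized propagated state, so the per-time weights reduce to the entrywise magnitudes of 𝐱_t. A minimal sketch:

```python
import numpy as np

def rank_time_varying_sensors(A_seq, x0):
    """For LTV dynamics x_{t+1} = A_t x_t, G(t, t0) = x_t x_t^T is rank one,
    so its leading eigenvector is x_t / ||x_t|| and the sensor weights at
    time t are the magnitudes of the propagated state's entries."""
    x = np.asarray(x0, dtype=float)
    rankings = [np.argsort(np.abs(x))[::-1]]
    for A_t in A_seq:
        x = A_t @ x
        rankings.append(np.argsort(np.abs(x))[::-1])
    return rankings  # rankings[t] orders the state variables best-first at time t
```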
§.§ Gramian Based Observability
Here we provide our method to solve <ref> with standard optimization techniques. We begin by highlighting a few key properties of the observability Gramian for LTV systems, then provide the formulation of <ref> as an integer program. Finally, we provide a continuous relaxation of the integer program, making it solvable as a linear program.
§.§.§ Integer Programming Formulation
To formulate the sensor selection problem in terms of integer programming, consider the case where there is a binary variable denoting whether or not each state variable x_i is observed or measured at time t. Let 𝐂_tj be the j-th row of the matrix 𝐂_t ∈ R^p_t× n. Then the observability Gramian can be rewritten as
𝐖_o = ∑_i=0^TΦ(i,0)^⊤𝐂_i^⊤𝐂_iΦ(i,0),
= ∑_i=0^T∑_j=1^p_iΦ(i,0)^⊤(𝐂_ij)^⊤𝐂_ijΦ(i,0).
Let α_ij∈{0,1} be a binary variable which indicates whether variable x_j is measured at time i. Then one can express the above relation as
𝐖_o(α) = ∑_i=0^T∑_j=1^nα_ij𝐆_ij,
where α=(α_11,α_21,⋯, α_(T+1)n)^⊤∈ R^n(T+1), T is the upper bound on the first summation used to compute the observability Gramian, and
𝐆_ij=Φ(i,0)^⊤(𝐂_ij)^⊤𝐂_ijΦ(i,0).
Within this representation, note that 𝐂_ij is a row vector with j-th entry equal to 1 and zeros otherwise, and that each 𝐆_ij is a positive semidefinite matrix. Because composition with affine mappings preserves convexity, each measure J_i(𝐖_o) discussed in Section <ref> is also a convex function J_i(𝐖_o(α)) of the sensor selection variable α.
Given the sensor selection variables α, the sensor selection problem can be written as a mixed-integer convex problem,
min_α J(𝐖_o(α)) subject to ∑_j=1^nα_tj≤ p_t,
t=0,1,⋯,T, where α_tj∈{0,1}.
Here, the first constraint restricts the number of selected sensors to be no more than p_t for each time point. Because mixed-integer programs do not scale well for large problems, a convex relaxation of (<ref>) provides a useful solution alternative.
§.§.§ Continuous Relaxation
In the continuous relaxation, the observation of variable j at time t is relaxed to the interval α_tj∈[0,1]. This leads to the convex program,
min_α J(𝐖_o(α)) subject to ∑_j=1^nα_tj≤ p_t, t=0,1,⋯,T, where 0≤α_tj≤ 1.
The advantage of the relaxation is that it can be solved in time that is polynomial in the number of variables using efficient techniques such as interior point methods. Furthermore, if the solution to the relaxed problem is such that α_tj∈{0,1} (within numerical tolerance), then the original mixed-integer problem has been solved. The relaxation thus serves two roles: an approximate (suboptimal) solution to the mixed-integer problem obtained by rounding α_tj, and, in some cases, a fast optimal solution to the mixed-integer problem.
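When J is the trace, tr(𝐆_ij) is the squared norm of the j-th row of Φ(i,0), so the objective is linear in α and the relaxation is an ordinary linear program. A minimal sketch (assuming a constant budget p per time step and maximizing the trace, i.e., minimizing its negative):

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_trace_selection(Phi_seq, p):
    """Continuous relaxation with J = trace. Since tr(G_ij) reduces to the
    squared norm of row j of Phi(i,0), tr(W_o(alpha)) is linear in alpha and
    the relaxed problem is an LP (linprog minimizes, so the sign is flipped)."""
    T1, n = len(Phi_seq), Phi_seq[0].shape[0]
    c = -np.concatenate([np.sum(Phi**2, axis=1) for Phi in Phi_seq])
    A_ub = np.kron(np.eye(T1), np.ones((1, n)))  # sum_j alpha_tj <= p per time t
    res = linprog(c, A_ub=A_ub, b_ub=np.full(T1, float(p)), bounds=(0, 1))
    return res.x.reshape(T1, n)                  # alpha[t, j] in [0, 1]
```

With this objective, the box and budget constraints make the optimum integral: at each time the LP simply selects the p state variables with the largest row norms of Φ(i,0).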
In both the mixed-integer problem (<ref>) and the convex relaxation (<ref>), the desired number of sensors was explicitly constrained to be p_t. Another approach is to allow the number of sensors to be a free variable and enforce a sparse solution, which can be achieved, for example, by an l_1 regularization technique that yields the convex problem,
min_α J(𝐖_o(α))+c‖α‖_1 subject to 0≤α_tj≤ 1,
where the constant c≥ 0 is the weighting on the l_1-norm penalty. By varying the weight c, the number of sensors in the solution set will change to balance the sparsity penalty with the observability measure; tracing the resulting Pareto tradeoff curve between sparsity and observability can be used to achieve a particular level of observability.
§ STRUCTURE GUIDED SENSOR SELECTION
For many systems where the exact network edge structure is unknown or partially known, we can supplement our understanding of the system from structure among the nodes that extends beyond the adjacency structure of the network.
For instance, in constructing social, telecommunications, or postal networks, where the contacts of only some individuals are known, the locations of individuals are informative of network structure <cit.>;
and given our knowledge of the transcription cluster and chromosome territories, the positioning of genes on different chromosomes is important for understanding gene regulatory networks <cit.>.
In each case, we can assign location/positional attributes to each node that form an underlying structure between the nodes, independent of the edge or adjacency structure on the network.
§.§ Small World Networks
Small World networks, characterized by relatively small diameters (distance across the network) and high clustering coefficients (similarity between nodes), are often formed from an underlying structure among the nodes. In 1967, Stanley Milgram's famous experiment, from which the Small World and related six degrees of separation theories stem, mapped the first Small World network by considering the social network between Nebraska and Boston <cit.>; the location of individuals around the United States informed the social network through which packages were mailed to friends and acquaintances. Subsequently, in 1998, Duncan Watts and Steve Strogatz proposed a tunable model, dubbed the Watts-Strogatz model (WS), for generating networks with Small World properties <cit.>. This model introduces long range connections, or “weak ties", to highly structured lattices <cit.>. The initial lattice structure, resembling a spatial distribution, serves as the framework for forming Small World networks.
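A minimal sketch of generating these regimes with networkx (the parameter values are illustrative):

```python
import networkx as nx

n, k = 100, 4
for p in (0.0, 0.1, 1.0):  # lattice -> small world -> random-like
    G = nx.connected_watts_strogatz_graph(n, k, p, tries=200, seed=1)
    print(f"p={p}: clustering={nx.average_clustering(G):.3f}, "
          f"CPL={nx.average_shortest_path_length(G):.2f}")
```

At intermediate p, the clustering coefficient stays close to that of the lattice while the characteristic path length drops sharply, which is the Small World regime.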
Small World properties have since garnered recognition across a variety of fields where both structural and functional or relational data are captured <cit.>.
Considering the significance of structure in Small World networks and their ubiquity, we study the observability of and effective sensor selection methods for such networks. We present results from two simulations investigating the observability and quality of sensor placement on highly structured, Small World, and random networks, focusing on scenarios where only the lattice or spatial structure is known.
§.§.§ Sensor Ranking Distribution from Lattices
Suppose the lattice structure used to generate a network from the WS model is known, but the rewired, long-range interactions are not.
What can the lattice tell us about the expected ranking of each node as a sensor in the network?
To investigate this question, we considered the following model. Small World networks were generated according to the WS model (WattsStrogatz in <ref>) using the lattice structures in <ref>. The unweighted, bidirectional adjacency matrix from each network was used in place of 𝐀_t in <ref> or <ref>. The value of each node as a sensor was evaluated based upon its contribution to observability as defined by <ref>.
Based upon this model, the expected contribution of each node as a sensor to observability in a small world network mirrors its impact on observability within the lattice structure. In <ref>, the sensor rankings of each node on the lattice are juxtaposed with the average sensor ranking over 2500 instances of networks generated from the lattice, as well as the sensor rankings on a single network. Qualitatively, the anticipated sensor contributions of each node closely resemble those observed on the lattice.
To assess and provide a scale for the utility of the lattice structure in this sensor selection, we measured the similarity of the sensor selection on the Small World and random networks relative to the lattice structures. As a baseline, we utilized the Erdős-Rényi-Gilbert (ERG) model for random graphs and ensured that the lattice, Small World, and random reference networks shared identical node and edge counts <cit.>. We recorded the contribution to observability of each node in a vector and calculated the distance between these vectors using the Frobenius norm, as illustrated in <ref>.
Our choice of the Frobenius norm is arbitrary, so for completeness, we evaluate the similarity based upon several additional distances in <ref>-<ref>. Regardless of the choice of norm, the Small World and ERG distributions of similarity in sensor rankings relative to the lattice show clear separation, and across all norms we considered, except for ·_1, the Small World sensor rankings were typically more similar to the lattice than the random ones.
Across all norms, the square, diagonal, and isometric lattices typically show more similarity to the sensor ranking on the lattice than the random graph, but this is not true of the ring lattice. In the ring lattice, all nodes are equivalent, such that the ring lattice is symmetric up to any rotation of nodes; however, in each of the other three lattices, the structure creates center, edge, and even corner nodes, each with different properties such as their degrees or contributions to observability, as shown in <ref>. Similar to the ring lattice, the expected structure of an ERG network is perfectly symmetric. The increased symmetry of the ring lattice explains why, across the shown norms, the distributions generated from the ring and random lattices shown in <ref> are similar.
§.§.§ Observability from Spatially Distributed Sensors
Given that the expected sensor ranking of a network can be determined from the lattice, we consider effective strategies for placing sensors. In line with the supposition that only the lattice structure is known, we consider the performance of evenly placed sensors on the lattice, as shown in <ref>.
When is placing sensors evenly in space an effective strategy?
To investigate this question, we consider the following experiment. Sensors are uniformly distributed in space on a lattice.
[Figure: similar to fig. 1 in <cit.>; here we fix p and change the number of iterations, whereas Watts' figure focused on the variability of p.]
Subsequently, while the sensors remain fixed, we conduct multiple iterations of IterativeWattsStrogatz (see <ref>) wherein edges are progressively rewired, similar to the WS model. Over multiple iterations, the network structure gradually changes from a lattice, to a small world network (see <ref>), and finally to a random network, and we evaluate the observability from the fixed sensor nodes throughout. As a baseline for comparison, we also measured the observability of the system given random sensors.
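A minimal sketch of this experiment (the parameters and the unit-row sensor matrix are illustrative; the adjacency matrix is rescaled so the induced LTI proxy is stable):

```python
import numpy as np
import networkx as nx

def rewiring_observability(n=60, k=4, sensors=(0, 15, 30, 45),
                           iters=40, rewires=5, T=20, seed=0):
    """Track tr(W_o) from fixed, evenly spaced sensors while a ring lattice
    is progressively rewired (a WS-like process toward a random graph)."""
    rng = np.random.default_rng(seed)
    G = nx.watts_strogatz_graph(n, k, 0.0, seed=seed)  # pure ring lattice
    C = np.zeros((len(sensors), n))
    C[np.arange(len(sensors)), list(sensors)] = 1.0
    history = []
    for _ in range(iters):
        for _ in range(rewires):  # rewire one randomly chosen edge
            edges = list(G.edges())
            u, v = edges[rng.integers(len(edges))]
            w = int(rng.integers(n))
            if w not in (u, v) and not G.has_edge(u, w):
                G.remove_edge(u, v)
                G.add_edge(u, w)
        A = nx.to_numpy_array(G)
        A /= np.abs(np.linalg.eigvals(A)).max()   # scale to a stable proxy
        W, M = np.zeros((n, n)), np.eye(n)
        for _ in range(T + 1):                    # finite-horizon Gramian
            W += M.T @ C.T @ C @ M
            M = A @ M
        history.append(float(np.trace(W)))
    return history
```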
In the early, highly structured phase, the evenly spaced sensors have increased network observability relative to the randomly placed sensors on all lattices except for the ring (see <ref>). As the iterations progress and the network approaches a random graph, the utility of the evenly spaced sensors relative to randomly selected sensors diminishes, and the overall observability of the system decreases. The initially increased observability from the structure guided sensors, followed by its decrease, suggests there exists a set of networks whose sensor selection can be based on the underlying node structure. The underlying lattice structure alone is not sufficient to guarantee good sensor placement, but it does suggest that enforcing constraints to space out the sensors may be beneficial on small world networks.
§.§ Constrained Optimizations
The DSS energy and observability Gramian based sensor selection problems (<ref> and <ref>) can be constrained according to the structure of the system. In the small world example above, we saw that placing sensors evenly in space can be an effective strategy for selecting sensors. Here, we modify the DSS optimizations in order to constrain the selection so that at most one sensor per cluster is selected. In principle, these clusters can be created in space or based upon spatial data so that the sensors will be evenly placed in space, but the selection of a clustering method is not of primary importance.
§.§.§ Linear Constraints in Gramian Optimization
Using the trace of the observability Gramian as a measure of observability, the DSS optimization <ref> is formulated as a linear program in eq. (<ref>). In the unconstrained DSS optimization, the only constraint imposed on the linear program is to bound the number of sensors selected. However, we can modify the constraints of this problem based on the spatial clustering of different state variables and solve the optimization problem exactly.
§.§.§ Heuristic Approach for Energy Optimization
To modify the energy based optimization of <ref>, we use a heuristic, ranking approach. We first solve the maximization problem of <ref> directly and rank the contribution of each sensor at every time. Then, we apply a greedy selection algorithm that selects the top ranked sensors at each time according to their rankings, while never selecting more than one sensor per cluster. Pseudocode for this procedure is provided in <ref>.
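A minimal sketch of such a greedy, cluster-constrained selection (the per-time scores and the cluster labels are assumed given; this illustrates the idea rather than reproducing <ref> exactly):

```python
import numpy as np

def greedy_cluster_constrained(scores, clusters, p):
    """scores[t, j]: ranking score of state j as a sensor at time t;
    clusters[j]: cluster label of state j. At each time, select up to p
    sensors, taking the best-ranked states with at most one per cluster."""
    selections = []
    for w in scores:
        used, picks = set(), []
        for j in np.argsort(w)[::-1]:        # best-ranked first
            if clusters[j] not in used:
                picks.append(int(j))
                used.add(clusters[j])
            if len(picks) == p:
                break
        selections.append(picks)
    return selections
```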
§.§ Small Worldness of the Genome
Networked models of the nucleus are well established, and the propensity for small world properties has even been seen in molecular dynamics simulations of chromatin conformation <cit.>.
The genome and gene regulatory networks exhibit numerous parallels with classic Small World networks:
* Similar to Milgram's experiment, our current understanding of the full structure in the nucleus and complete set of regulatory interactions of the genome remains incomplete.
* While our understanding of the system is incomplete, like Milgram, recent experimental assays allow us to partially observe both the structure, the location of individuals in Milgram's experiment or the folding of chromatin in the nucleus, and function, social or gene regulatory interactions, of the genome.
* Similar to the WS model, the genome's architecture arises from the polymer structure of chromatin. The modeling and simulation of such polymers employs a bead-on-a-string model that imposes a one dimensional lattice structure while allowing for distant long range interactions or weak ties to form <cit.>.
Despite these similarities, there has been no empirical quantification of the small-worldness of the nucleus from experimental data. Based upon the observability of Small World networks, and to apply SGSS for biomarker identification, we examine Small World properties observed in experimental data by (1) constructing networks with adjacency structures that qualitatively resemble Hi-C and (2) quantitatively evaluating the small world quotient from Hi-C data.
§.§.§ Overview of Experimental Data
Here we provide a concise discussion of the experimental data types we consider in our study.
Hi-C is a genome wide experimental technique to map the spatial arrangement of chromatin within the nucleus <cit.>. Similar to other Chromosome Conformation Capture methodologies, the Hi-C assay (1) cross-links, (2) digests, (3) ligates, and (4) sequences pieces of DNA to identify proximal genomic loci. The resulting data structure is typically viewed as an occurrence or frequency matrix, where the i,jth element denotes the number of times the ith and jth loci of chromatin were observed near one another.
From a data science or controls perspective, Hi-C is both interesting and challenging to consider because it can be analyzed at multiple resolutions, as shown in <ref>. Whereas gene expression occurs in discrete units, Hi-C is analyzed by binning or summing over the base pairs that form the chromatin polymers. Examining genome wide Hi-C at base pair resolution is both challenging, as it requires a matrix with ≈ 9× 10^18 entries, and unnecessary, as viewing the data at lower resolutions by averaging the number of contacts yields interesting and relevant information. Also, in contrast to EEG or gene expression signals, population Hi-C is less variable throughout time, particularly when the cells are unsynchronized.
In the following sections, we propose and validate models to demonstrate that the genome exhibits Small World properties. However, a point of clarity must be made regarding the relationship between Hi-C and gene expression. The motivation to identify biomarkers for methods such as adaptive sequencing based upon RNAseq datasets such as Proliferation, Reprogramming, MyogenicSignal, and SBW25 utilizes a gene-centric perspective. From these data, we characterize gene expression dynamics using <ref>, where the node set or state vector represents individual gene expression levels. In contrast, the state representation of Hi-C data classically utilizes a basepair-centric perspective, where nodes represent distinct regions of chromatin that may or may not overlap with one or more genes. Consistent with SGSS and the prior consideration of observing Small World networks, our investigation of the Small World properties of the genome is focused on nuclear structure and not on the gene regulatory dynamics.
§.§.§ Network Model for Hi-C
We propose a simple 4-parameter network model that qualitatively captures the main features found in a single chromosome of Hi-C. Hi-C data are characterized by (1) strong diagonal dominance, whereby nearby regions of chromatin are likely to interact with one another, (2) more distant self-interacting regions, such as Topologically Associated Domains (TADs) or, at a coarser scale, the arms of a chromosome, and (3) looping structures where distant loci have strong interactions that appear far from the diagonal of the matrix.
To recapitulate these three structures, we utilize a combination of the WS and Caveman models of networks. In a caveman network, nodes are tightly clustered within their cave and have few interactions between caves. This is similar to the high number of contacts observed within a TAD or arm of a chromosome as compared to more distal regions. In our model, we
begin with two line lattices of size n_1 and n_2, rewire interactions on both with probability p_1 according to the WS model, and then rewire interactions between the two now SW networks with probability p_2. The four parameters n_1, n_2, p_1, and p_2 constitute the model, and the process to form this model is shown in <ref>.
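A minimal sketch of this construction (here the cross-arm step adds long range edges with probability p_2, a simplification of the rewiring described above):

```python
import numpy as np
import networkx as nx

def chromosome_model(n1, n2, p1, p2, seed=0):
    """Two line lattices (the two arms), WS-rewired within each arm with
    probability p1, then connected by random cross-arm edges with
    probability p2 (the loop-like long range contacts)."""
    rng = np.random.default_rng(seed)
    G = nx.path_graph(n1)                                          # arm 1
    G.add_edges_from((n1 + i, n1 + i + 1) for i in range(n2 - 1))  # arm 2
    for u, v in list(G.edges()):                                   # WS step
        if rng.random() < p1:
            arm = range(n1) if u < n1 else range(n1, n1 + n2)
            w = int(rng.choice(list(arm)))
            if w != u and not G.has_edge(u, w):
                G.remove_edge(u, v)
                G.add_edge(u, w)
    for u in range(n1):                                            # cross-arm edges
        if rng.random() < p2:
            G.add_edge(u, int(rng.integers(n1, n1 + n2)))
    return G
```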
We fit one instance of this model to the individual chromosomes from <cit.> (see <ref>). This model performs best, in terms of its qualitative resemblance to the Hi-C network, for the larger chromosomes and when the chromosome arms are of relatively equal length. Regardless, this model is able to capture the diagonal dominance, self-interacting regions, and spotty long range interactions of Hi-C with relatively few parameters.
§.§.§ The Nucleus is a Small World
To empirically validate the hypothesis that the nucleus is a small world, we computed the Small World Quotient (SWQ) for all Hi-C data paired with the Proliferation and Reprogramming datasets. We considered this data at various resolutions, ranging from 10kb to 25mb, and computed SWQs for the entire genome as well as for individual chromosomes. For individual chromosomes, we used Toeplitz normalization (observed/expected).
To quantify small worldness, the SWQ measures the clustering coefficient and distance across the network relative to random graphs. In particular, the SWQ σ is the ratio of the clustering coefficient and the diameter, characteristic path length (CPL), or other distance measure of the network relative to the corresponding statistics of a random network, i.e.,
σ=(C/C_r)/(L/L_r).
Here, C and L are the clustering coefficient and diameter of the network, and C_r and L_r are the mean clustering coefficient and diameter of random graphs of a similar size to the tested network. A small world coefficient σ>1 indicates small world properties <cit.>.
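A minimal sketch of computing the SWQ against ERG references (assuming the thresholded network and the reference graphs are connected):

```python
import numpy as np
import networkx as nx

def small_world_quotient(G, n_ref=100, seed=0):
    """sigma = (C/C_r) / (L/L_r) against ERG graphs of matched size."""
    rng = np.random.default_rng(seed)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)       # CPL; assumes G is connected
    n, m = G.number_of_nodes(), G.number_of_edges()
    p = 2 * m / (n * (n - 1))                    # matched edge density
    Cr, Lr = [], []
    for _ in range(n_ref):
        R = nx.gnp_random_graph(n, p, seed=int(rng.integers(2**31)))
        if nx.is_connected(R):                   # skip disconnected references
            Cr.append(nx.average_clustering(R))
            Lr.append(nx.average_shortest_path_length(R))
    return (C / np.mean(Cr)) / (L / np.mean(Lr))
```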
Computing distances and clustering on Hi-C networks, particularly at very low or very high resolutions, presents several challenges. At high resolution, the number of nodes is relatively large, so computing the exact CPL, diameter, or clustering coefficient can be expensive. At low resolution, where there are fewer nodes, the network becomes weighted and dense, and the sparse, small world structure is not immediately present; to address this, we threshold the Hi-C to enforce sparsity. After thresholding Hi-C at various values, we removed all disconnected vertices in the remaining network.
We employed a hundred ERG models as a baseline reference used to compute C_r and L_r.
Properties such as the SWQ, distance, clustering coefficients, and the threshold values used are shown in <ref> for the Hi-C data from <cit.>. Thresholds for the minimum number of Hi-C contacts to count as an edge were set independently for each Hi-C matrix in order to include a fixed percent of the observed edges. As a result, the utilized thresholds are inversely proportional to the resolution (top right of <ref>). Across both datasets, at various resolutions, and various thresholds, Hi-C data exhibits small world properties. Other metrics, such as the clustering coefficient and diameter change in a manner consistent with the resolution and network size.
§.§ Hi-C Guided Biomarker Identification
Based on the prior sections discussing observability in a Small World network and the Small World properties of Hi-C, we aim to use knowledge of Hi-C structure to select biomarkers and sensor genes that render gene expression networks observable. Following the red panel of <ref>, we first learn the structure among our state variables, i.e., genes, from Hi-C, and then propose constraints for DSS.
§.§.§ Learning Structure from Hi-C
Whereas the networks previously considered assign each node a position in terms of (x,y) or θ, Hi-C data does not explicitly contain the geometry or architecture of the genome. Moreover, the variable resolution of Hi-C measures contacts on a state space that is not a well defined function of the gene state variables whose dynamics we consider. To address these challenges, we propose a two part framework to learn the structure of genes based on Hi-C:
(1) Construct gene resolution Hi-C matrices that indicate the contacts between gene coding regions
(2) Utilize clustering as an unsupervised learning approach to identify genes which are spatially similar.
Gene by Gene Hi-C Matrix.
To construct a matrix from Hi-C data that both represents the structure of the genome and has nodes for individual genes, we propose GeneXGeneHiC (see <ref>). Figure <ref> illustrates this algorithm, which consists of only two steps. First, each row/column of the Hi-C matrix is assigned gene(s), or the absence of them, based upon the genomic coordinates of the row/column and the location of the genes. Rows/columns representing intergenic regions may be assigned no genes, while others might be assigned multiple genes. Subsequently, the average contact frequency of the rows/columns for each pair of genes is recorded in a new matrix. The i,jth entry of this new matrix is the average value in the Hi-C matrix associated with gene i and gene j. The new matrix will have exactly one row/column per gene and can be interpreted in the context of our LTV dynamic models. A similar approach was employed by Chen et al.; see, for instance, <cit.>.
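A minimal sketch of GeneXGeneHiC's two steps (the bin-to-gene assignment is assumed given as a list of gene sets per Hi-C row/column):

```python
import numpy as np

def gene_x_gene_hic(hic, bin_genes):
    """hic: (m x m) contact matrix; bin_genes[i]: genes overlapping bin i
    (possibly empty). Returns the gene list and a gene-by-gene matrix whose
    (i, j) entry is the mean contact over the bins assigned to each gene."""
    genes = sorted({g for gs in bin_genes for g in gs})
    bins_of = {g: [i for i, gs in enumerate(bin_genes) if g in gs]
               for g in genes}
    out = np.zeros((len(genes), len(genes)))
    for a, gi in enumerate(genes):
        for b, gj in enumerate(genes):
            out[a, b] = hic[np.ix_(bins_of[gi], bins_of[gj])].mean()
    return genes, out
```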
Learning Spatially Similar Genes.
To identify genes with similar spatial characteristics, we clustered them based on a gene-by-gene Hi-C matrix. We employed agglomerative Principal Component (PC) clustering using a Euclidean metric and determined the optimal number of clusters using the Silhouette score. Leading eigenvectors or PCs were used for clustering due to their simplicity and prior implication in chromatin accessibility and epigenetic markers, such as DNAse1, H3K27^me3, and H3K36^me3 <cit.>.
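A minimal sketch of this clustering step with scikit-learn (the number of PCs and the candidate range of cluster counts are illustrative choices):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_genes(gene_hic, n_pcs=10, k_range=range(2, 15)):
    """Agglomerative (Ward/Euclidean) clustering of genes on the leading PCs
    of the gene-by-gene Hi-C matrix; k is chosen by the Silhouette score."""
    pcs = PCA(n_components=n_pcs).fit_transform(gene_hic)
    scores = {k: silhouette_score(
                  pcs, AgglomerativeClustering(n_clusters=k).fit_predict(pcs))
              for k in k_range}
    best_k = max(scores, key=scores.get)
    return AgglomerativeClustering(n_clusters=best_k).fit_predict(pcs)
```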
This approach to clustering genes from spatial data was selected due to its simplicity, and the end to end procedure is illustrated in <ref>. For instance, the mathematics required to obtain such clusters amount to only averaging gene counts, the Singular Value Decomposition to compute the PCs, and computing the distance between vectors. While simple and direct, this provides several areas for potential improvement of the clustering approach, including but not limited to:
* Dimension reduction: rather than using PCA for dimension reduction, we could consider alternative methods of dimension reduction
* Improved metric: rather than a Euclidean metric, this approach could be modified or improved by using alternative metrics or kernel functions for the clustering.
We would like to note the limitations of this approach. Namely, the generated clusters are not guaranteed to identify genes that are physically close to one another in space. Moreover, the available Hi-C data were not phased, meaning that the expression and spatial features of maternal and paternal alleles are treated equivalently, despite the likelihood of great variability in their positioning in the nucleus. Regardless of these limitations, incorporating Hi-C constraints improves the estimation on the Reprogramming dataset and does distribute the sensors across chromosomes, a proxy for distribution in space.
§.§.§ Estimation with SGSS
Due to challenges in estimating the Reprogramming dataset, we applied SGSS based on the above gene clustering approach to identify sensor genes. Due to the similar cell type and improved experimental conditions, we utilized Hi-C from the Proliferation dataset at 100kb resolution to generate the gene clusters, and we applied the Gramian formulation of DSS.
Performing cross validation, we constructed LTV models using 2 replicates of the Reprogramming data and tested the models' ability to estimate the held out replicate. For a relatively small number of sensors, applying SGSS improves the average estimation error and reduces the variance in estimation errors. The relative improvement diminishes as more sensors are included in the estimation procedure (see <ref>), which motivates the further work on the MyogenicSignal dataset (<ref>).
To validate that SGSS does in fact improve the spatial distribution of biomarkers, we considered the distribution of sensor genes across chromosomes (<ref>). We built LTI and LTV models of the dynamics and identified the top 2,000 ranked sensor genes from the observability Gramian, both with and without Hi-C SGSS. When SGSS is applied, the distribution of sensor genes mirrors the distribution of gene placement on the different chromosomes. For example, the disproportionate number of sensor genes placed on chromosomes 6 and 17 is remedied when the structured constraints are applied. We observed a similar improvement in sensor distribution when selecting time varying sensors as well (<ref>-<ref>).
§ APPLICATIONS TO DATA
Here we summarize and provide references for accessing the data used in our study. <Ref> summarizes the time series data sets utilized in our study, and the sourcing and processing of each dataset are discussed in the sections below. The full data and associated code will be made available upon acceptance for publication.
§.§ Human Fibroblast Proliferation
To investigate the 4D Nucleome, that is, how the genome architecture or chromatin folding changes over time, Chen et al. generated a time series RNAseq and Hi-C dataset <cit.>. Human fibroblasts were cell cycle and circadian rhythm synchronized, and bulk RNAseq and Hi-C were collected at 8 hour intervals. The experimental protocols to generate this data are available in the Materials and Methods of the appendix of <cit.>. The FASTQ files from this study were aligned to Homo_sapiens.GRCh38.107 using Bowtie2 with the parameter –very-sensitive. Raw read counts were obtained using HTSeq and converted to transcripts per million (TPM).
§.§ Cellular Reprogramming
To study myogenic reprogramming, Liu et al. introduced exogenous MYOD to fibroblast cells to force the cells to change lineage <cit.>. Recreating Weintraub's original cell reprogramming experiment, paired RNAseq and Hi-C data were collected at 8 hour intervals <cit.>.
The experimental protocols to generate this data are available in the supplement of <cit.>. The corresponding FASTQ files were obtained and aligned to Homo_sapiens.GRCh38.107 using Bowtie2 with the parameter –very-sensitive. Raw read counts were obtained using HTSeq and converted to transcripts per million (TPM).
§.§ Myogenic Signal
To generate a reduced dataset amplifying the myogenic signal from the Reprogramming data set, we identified groups of genes involved in myogenesis and cell proliferation, the two dominant biological processes in a reprogramming experiment. Genes related to the cell cycle were taken from the KEGG cell cycle pathway (hsa04110), genes associated with Fibroblasts and Myogenic cells from PanglaoDB, and other genes known to be involved in cell proliferation and myogenesis. A complete list is as follows:
ABI3, ABL1, ACHE, ACTA1, ACTA2, ACTC1, ACTG2, ACTN2, ACTN3, ADAM12, ADAM33, ADAMTS10, ADIPOQ, ADIPOR2, ADM, ADPRHL1, ALPK3, ANAPC1, ANAPC10, ANAPC11, ANAPC13, ANAPC15, ANAPC16, ANAPC2, ANAPC4, ANAPC5, ANAPC7, ANGPT2, ANKRD1, ANKRD2, AQP1, ARAP1, ARHGAP26, ARL4D, ART3, ASB2, ATM, ATR, ATRX, AURKB, BDNF, BMP4, BRAF, BUB1, BUB1B, BUB3, CA3, CAPN1, CAPZA3, CASQ2, CAV3, CCL11, CCL19, CCL2, CCNA1, CCNA2, CCNB1, CCNB2, CCNB3, CCND1, CCND2, CCND3, CCNE1, CCNE2, CCNH, CD109, CD300E, CD36, CDC14A, CDC14B, CDC16, CDC20, CDC23, CDC25A, CDC25B, CDC25C, CDC26, CDC27, CDC45, CDC6, CDC7, CDCA5, CDH11, CDH15, CDH3, CDK1, CDK2, CDK4, CDK6, CDK7, CDKN1A, CDKN1B, CDKN1C, CDKN2A, CDKN2B, CDKN2C, CDKN2D, CDT1, CHEK1, CHEK2, CHODL, CKM, CKMT2, CLOCK, CMKLR1, CNN1, COL13A1, COL4A3, COL4A4, COL7A1, CORO6, CPT1A, CREBBP, CSRP3, CUL1, CXCL1, CXCL3, CXCR4, DBF4, DBF4B, DDX11, DES, DKK1, DLL1, DMD, DOCK1, DOCK5, E2F1, E2F2, E2F3, E2F4, E2F5, EDN1, EGFR, EN1, ENO3, EP300, ESCO1, ESCO2, ESPL1, FABP4, FAP, FBLN7, FBXO5, FGF2, FGF23, FGFR4, FHL2, FIBIN, FLNC, FMOD, FOXF1, FST, FZR1, GADD45A, GADD45B, GADD45G, GATA4, GATA5, GATA6, GEM, GFAP, GJA5, GJB2, GPIHBP1, GRWD1, GSK3B, HAMP, HAND1, HAND2, HAS1, HDAC1, HDAC2, HDAC8, HEY2, HGF, HHIP, HSPB7, IGF2, IL11RA, IL1R1, IL4, IL6, ITGA3, ITGA7, JPH2, KLF4, KNL1, KRT14, KRT17, KRT5, LAMA2, LAMB3, LAMC2, LDB1, LDB3, LIF, LOX, LRRK1, MAD1L1, MAD2L1, MAD2L1BP, MAD2L2, MAU2, MB, MCM2, MCM3, MCM4, MCM5, MCM6, MCM7, MDFI, MDM2, MEDAG, MEF2B, MEF2D, MEOX1, MFN2, MITF, MME, MMP2, MMP3, MMP9, MRC1, MSTN, MTBP, MTTP, MUSK, MYBPC3, MYC, MYD88, MYF5, MYF6, MYH1, MYH11, MYH14, MYH4, MYH6, MYH7, MYH7B, MYH8, MYL1, MYL2, MYL3, MYL4, MYL7, MYLK, MYLPF, MYOD1, MYOG, MYOM1, MYOM2, MYOZ1, MYOZ2, NBR1, NDC80, NEB, NEXN, NFATC1, NGF, NGFR, NID2, NIPBL, NKX2-5, NOG, NOTCH1, NOTCH3, NOX4, NPHS1, NPPA, NPPB, NPPC, NT5E, OBSCN, ORC1, ORC2, ORC3, ORC4, ORC5, ORC6, PAMR1, PAX3, PAX7, PCNA, PCSK6, PDE1A, PDE4D, PDLIM5, PDS5A, PDS5B, PDZRN3, PKMYT1, PLD1, PLK1, PLN, PNMT, POPDC2, PPP2CA, PPP2CB, PPP2R1A, PPP2R1B, PPP2R5A, PPP2R5B, PPP2R5C, PPP2R5D, PPP2R5E, PRG4, PRKCQ, PRKDC, PTGIR, PTK2, PTTG1, PTTG2, PXN, PYGM, RAD21, RB1, RBL1, RBL2, RBM20, RBM24, RBX1, RRAD, RYR2, SCARA5, SERPINB10, SERPINB5, SFN, SGO1, SGPL1, SIX1, SIX4, SKP1, SKP2, SLC5A1, SLC6A13, SLN, SMAD2, SMAD3, SMAD4, SMC1A, SMC1B, SMC3, SMIM3, SMPX, SORBS2, SOX18, SOX2, SPEG, SPHK1, STAG1, STAG2, STC1, STK40, STRN, SULF1, TAGLN, TBX18, TBX20, TBX3, TCAP, TFDP1, TFDP2, TGFB1, TGFB2, TGFB3, THBS4, TICRR, TMOD4, TNNC1, TNNC2, TNNI1, TNNI2, TNNI3, TNNT1, TNNT2, TNNT3, TNS1, TNXB, TP53, TP63, TRDN, TREML4, TRIM63, TRIO, TRIP13, TRPV1, TTK, TTN, TXLNB, VLDLR, WAPL, WASHC1, WEE1, WEE2, WIF1, WIPF1, YWHAB, YWHAE, YWHAG, YWHAH, YWHAQ, YWHAZ, ZBTB16, ZBTB17, and ZFPM2.
§.§ Pesticide Detection in SBW25
To design biomarkers for pesticide detection, Hasnain et al. collected bulk RNAseq of pseudomonas fluorescens SBW25 every 10 minutes following treatment with the organophosphate malathion, an anti-parasite that can be used to treat crops or head lice. Beyond being an excellent dataset, this experiment was introduced alongside the energy based formulation of sensor selection for the purpose of biomarker detection on LTI models of gene expression <cit.>.
Following the quality control and normalization of the original authors, the gene set was filtered to contain only genes with above 100 TPM, leaving approximately 10% of the genes from the original data. These data were obtained and processed directly from the data and codes associated with <cit.> and accessed from <https://github.com/AqibHasnain/transcriptome-dynamics-dmd-observability>
§.§ Electroencephalogram BCI2000
We studied the BCI2000 dataset, a benchmark dataset containing 64 lead EEG data collected from 109 participants performing different tasks <cit.>. We performed no filtering or preprocessing on this data and used the raw signals as is. These data were obtained from the PhysioNet Database Portal and are accessible at <https://www.physionet.org/content/eegmmidb/1.0.0/>
§.§ FUCCI Microscopy
We obtained microscopy signals from <cit.> where the FUCCI assay and Hoechst stain were applied to track cells and monitor their progression through the cell cycle. The mCherry and TagGFP are FUCCI markers and Hoechst is used to track the nucleus of individual cells over time. From the time series images, the expression of each of these three markers were measured every 5 minutes based on their intensity in the microscopy images.
On the left of <ref>, we show a hundred trajectories of these three markers in their 3D phase portrait. The discrete time sampled data were linearly interpolated. On the right of the figure, each of the three pairs of data is shown during different cell cycle phases. We used codes from <cit.> for cell cycle phase segmentation based upon the FUCCI system. Similar to <ref>, the variability or information content of each of the three signals changes between cell cycle stages. For instance, TagGFP contains more variability in S/G2/M than in G1.
§.§ Reference Databases
We used the KEGG, HumanTF, and PanglaoDB databases as references to identify pathways known in the literature, define transcription factors, and find genes associated with myogenesis in our study.
§.§.§ KEGG
To identify known and biologically meaningful sensor genes, we utilized pathways from KEGG <cit.>. We downloaded and considered all human (hsa) pathways and used their genes as sensor genes to estimate the Proliferation dataset. These data were from the February 2024 version of the database and accessed at <https://www.genome.jp/kegg-bin/show_organism?menu_type=pathway_maps org=hsa>
§.§.§ HumanTF
To characterize genes and biomarkers as Transcription Factors, we utilized the Human Transcription Factors database (version 1.0.1), which contains 1639 genes as TFs <cit.>. The data were accessed at <http://humantfs.ccbr.utoronto.ca/>
§.§.§ PanglaoDB
To select genes for the MyogenicSignal, we used marker genes for different myogenic cells and fibroblasts that were found in the PanglaoDB database <cit.>. These data were from the March 27, 2020 version of the database and can be found at <https://panglaodb.se/markers.html?cell_type=
§ SUPPLEMENTARY FIGURES
[Figure: The distribution of genes across chromosomes is hardwired and contributes to the structure and spatial organization of gene regulatory networks.]
[Figure: The placement of the top two thousand sensor genes across chromosomes is shown for the set difference of the selected sets.]
[Figure: Final figure regarding gene sensor placement on different chromosomes.]
[Figures: sensor-ranking similarity distributions under the ·_3, ·_10, ·_∞, and ·_1 norms.]
[Figure: Network Model and Hi-C. This figure illustrates our simple network model applied to other chromosomes. It is interpreted similarly to <ref>.]
|
http://arxiv.org/abs/2405.09600v1 | 20240515141434 | Aggregate Representation Measure for Predictive Model Reusability | [
"Vishwesh Sangarya",
"Richard Bradford",
"Jung-Eun Kim"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.CY"
] |
SARATR-X: A Foundation Model for Synthetic Aperture Radar Images Target Recognition
Weijie Li, Wei Yang^∗, Yuenan Hou, Li Liu^∗, Yongxiang Liu^∗, Xiang Li
This work was supported by the National Key Research and Development Program of China No. 2021YFB3100800, the National Natural Science Foundation of China under Grant 61871384, 61921001, 62022091, 62201588, and 62376283, the Science and Technology Innovation Program of Hunan Province under Grant 2022RC1092, and the Key Stone Grant JS2023-03 of the National University of Defense Technology.
(^∗Corresponding authors: Li Liu, Wei Yang, and Yongxiang Liu. e-mail: liuli_nudt@nudt.edu.cn, yw850716@sina.com, and lyx_bible@sina.com.)
Weijie Li, Wei Yang, Yongxiang Liu, Li Liu, Xiang Li are with the College of Electronic Science and Technology, National University of Defense Technology, Changsha, 410073, China (e-mail: lwj2150508321@sina.com).
Yuenan Hou is with the Shanghai AI Laboratory, Shanghai, 200000, China.
In this paper, we propose a predictive quantifier to estimate the retraining cost of a trained model under distribution shifts. The proposed Aggregated Representation Measure (ARM) quantifies the change in the model's representation from the old to the new data distribution. It provides, before actually retraining the model, a single concise index of the resources - epochs, energy, and carbon emissions - required for the retraining. This enables reuse of a model at a much lower cost than training a new model from scratch. The experimental results indicate that ARM reasonably predicts retraining costs for varying noise intensities and enables comparisons among multiple model architectures to determine the most cost-effective and sustainable option.
§ INTRODUCTION
As deep neural networks become increasingly prevalent in everyday applications and deployments involving ever larger datasets, the compute requirements keep increasing with larger models, and their energy consumption becomes an issue. Recent research <cit.> has addressed the energy efficiency of diverse neural network methods, as well as the issue of carbon emissions <cit.>. At the same time, there is an ongoing need for deployed neural networks to respond to changes in their environment. When a deep learning model sees distributional shifts in data, it is desirable for it to adapt to the change. To develop models robust to such distributional shifts, existing solutions <cit.> train the model from scratch with larger models, larger training data, and longer training durations. However, in such approaches, the resource requirements are costly.
One of the sustainable solutions to this problem is reusing an existing model to adapt to the new environment. By retraining previously-trained models on the new distributions, energy consumption and carbon emissions will be significantly reduced. Then there are potential questions to consider: (i) How do different models adapt to a certain distributional shift? (ii) How does a given model adapt to different levels of noise or corruption? (iii) Is it possible to predict the behavior of a model before expending the cost of retraining and adapting it?
In order to quantitatively answer those questions, we propose a predictive reusability quantifier, the Aggregate Representation Measure (ARM). ARM works by quantifying the change in a model's representation under new distributional shifts. In particular, ARM quantifies the change in representation for each layer and then aggregates it for the entire model. It provides a single concise value that can predict the retraining effort required to adapt a model to a distributional shift. Using ARM, the energy consumption and carbon emissions can be predicted before expending the retraining costs. We show that ARM requires only one forward pass through the model, and we provide evidence of how it strongly correlates with the retraining measures: training epochs, energy, and carbon emissions. ARM not only helps predict the behavior of a model for different levels of noise but also allows comparisons among different models, thus enabling better decisions on the model type to be deployed.
§ RELATED WORK
Since <cit.>, several techniques, architectures, and training methodologies have emerged to improve model robustness. While certain model architectures and training methods do generate robust models, there is a tradeoff with regard to model size or the size of the training data used, as shown by <cit.>. The use of large models for real-world deployments and the hyperparameter search involved in training these large models, together with the increased dataset size, is a resource-exhaustive process. Several research works <cit.> have shown that there is a non-uniform improvement in robustness to the different distribution shifts; in some cases, improvement on one type of noise or corruption results in decreased performance on another distributional shift. In general, even with uneven, non-uniform gains on certain distributions and decreased improvements on others, the training and augmentation techniques <cit.> are computationally heavy and require training a model from scratch.
Methods using test time adaptation <cit.> exhibit only marginal improvements in model robustness and fail to provide substantial benefits in scenarios with elevated noise levels. If the test time information is insufficient for adapting the model's prediction, these methods fail to provide accurate and confident outputs during inference.
With the pressing need for sustainable development of deep learning networks, several works have focused on monitoring the energy and carbon emissions of neural network training. Several works <cit.> focus on neural network energy consumption and carbon emissions.
<cit.> highlight the need for measuring energy and carbon emissions of neural network training and deployment.
They call attention to the significantly high energy usage and carbon emissions that are a part of neural network training and the hyperparameter search mechanisms involved.
Works such as <cit.> use the change in layer representation to study pathology data and focus their work on individual layers of a model to show that it correlates with accuracy loss on domain shifts.
§ AGGREGATE REPRESENTATION MEASURE
The Aggregated Representation Measure (ARM) makes use of the model's change in representation between the data it was trained on and the new distribution. ARM is calculated per layer of the model and then averaged to give a single scalar value that describes how large the shift in the model's representation is. To capture the representation for each data sample, we perform one forward pass of the entire dataset through the model. During the forward pass, the activation outputs of each filter or neuron are collected. For convolutional layers, to reduce the memory requirements, the activation output is averaged. For each layer l with k filters, the probability function of a given filter f_l,k is obtained by iterating over the dataset of size n. We introduce the filter probability distribution f_l,k in Algorithm <ref>, where F_l,k is a set containing the averaged activation values for the dataset. PF represents the probability distribution function computed on the complete set of activation data. For each filter/neuron in a given layer, the activation outputs for the entire dataset are collected, as shown in Algorithm <ref>. For each data sample I_x, the activation for the sample is summed and averaged, where h and w represent the height and width of the activation.
For each layer l, the layer probability P_l is obtained by:
P_l = 1/n_l∑_k=1^n_l f_l,k
where n_l represents the number of filters/neurons in a given layer l. For each layer, the filter/neuron probability distributions are summed and averaged to produce a layer probability distribution, as shown in (<ref>), where P_l is the averaged layer probability distribution for layer l. This operation is performed for the second, distribution-shifted dataset as well.
The Aggregate Representation Measure, ARM, is obtained as follows:
ARM = 1/L∑_l=1^LWD(P_l, d1, P_l, d2)
WD represents the Wasserstein distance between the two probability distributions. The final result, ARM, is calculated by averaging the Wasserstein distance between each layer's corresponding probability distributions over the entire model. As the layer probability distributions may have small differences, the Wasserstein metric, with its displacement-based measurement, offers a fine-grained and more sensitive measure, making it preferable over measures such as the Jensen-Shannon divergence for capturing subtle distinctions. (<ref>) provides the final aggregated measure for a given model with L layers, where P_l, d1 and P_l, d2 represent the probability distributions for a given layer l over the original data d1 and the new, distribution-shifted data d2, respectively.
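A rough end-to-end sketch of the computation, assuming a PyTorch model whose monitored layers are convolutional; for brevity it pools each layer's per-sample, per-filter averages into one empirical distribution, a simplification of the per-filter averaging in (<ref>):

```python
import numpy as np
import torch
from scipy.stats import wasserstein_distance

@torch.no_grad()
def layer_activation_samples(model, loader, layer_names):
    """One forward pass over the dataset, collecting the spatially averaged
    activation of every filter for every sample in the named conv layers."""
    model.eval()
    store = {name: [] for name in layer_names}
    modules = dict(model.named_modules())
    hooks = [modules[name].register_forward_hook(
                 lambda mod, inp, out, name=name: store[name].append(
                     out.mean(dim=(-2, -1)).flatten().cpu().numpy()))
             for name in layer_names]
    for x, _ in loader:
        model(x)
    for h in hooks:
        h.remove()
    return {name: np.concatenate(vals) for name, vals in store.items()}

def arm(model, loader_d1, loader_d2, layer_names):
    """Mean per-layer Wasserstein distance between activation distributions
    on the original (d1) and distribution-shifted (d2) data."""
    P1 = layer_activation_samples(model, loader_d1, layer_names)
    P2 = layer_activation_samples(model, loader_d2, layer_names)
    return float(np.mean([wasserstein_distance(P1[n], P2[n])
                          for n in layer_names]))
```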
§ RETRAINING MEASURES
Retraining measures quantify the effort and resources required to adapt a model to a new distribution. The retraining measures used are epochs, energy consumption, and carbon emissions, which help quantify the sustainability of adapting a model. To measure the number of epochs required for retraining a model, we set a minimum required accuracy for each dataset and train the model until it reaches that accuracy level. To measure carbon emissions and energy consumption, we use <cit.>, which makes use of the energy consumed during the retraining and the location where the energy is generated to calculate the likely weight of carbon compounds emitted into the atmosphere. For a coarse-grained analysis, ARM is capable of predicting the overall global gradient norm of a model during retraining. <cit.> shows that gradients represent the difficulty of samples. This is useful, as the gradient norm as a retraining measure helps characterize the depth and path of the loss landscape for each model: a lower overall gradient norm results in faster convergence, as the model is able to adapt to the new distribution faster. A lower overall gradient norm also indicates that the model's current parameter space is closer to the new parameter space post retraining. We reserve the gradient norm and standard learning rate experiment analysis for future work.
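For the energy and emissions bookkeeping, one possible wrapper (an assumption on our part; the cited tracking tool may differ, and retrain() is a hypothetical stand-in for the retraining loop) is the codecarbon package:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()     # estimates energy use and location-based CO2
tracker.start()
retrain(model, shifted_loader)   # hypothetical stand-in for the retraining loop
emissions_kg = tracker.stop()    # estimated CO2-equivalent emitted, in kg
```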
§ EXPERIMENTS AND RESULTS
In the experiments, three datasets are used, CIFAR10, CIFAR100, and SVHN, with 3 noise types - Image Blur, Gaussian noise, and Salt-Pepper noise. For each noise type, 7-9 different noise levels are employed. In the charts, Noise 1 represents the lowest noise level (intensity). The highest level of noise is comparable to severity level 4 in <cit.>. The large number of noise intervals provides detailed working evidence of ARM and its correlation to the retraining measures. We explore different model architectures: ResNet, VGG, GoogLeNet, and MobileNetV2.
To reuse an existing model, we train a randomly initialized model on the original data distribution until it reaches the required accuracy for each dataset. All experiment results are an average of three runs.
Since various models are sensitive to different learning rate ranges and schedules, finding the optimal learning rate plan would require computationally expensive hyperparameter search and tuning. The retrieved learning rate schedule may still not be optimal, as it is difficult to verify how the learning rate needs to be adapted to the different loss and gradient landscapes of each model. To provide a fine-grained analysis of the number of epochs a model requires to adapt to a new data distribution, we set the learning rate to be extremely small for all models and experiments, on the order of 1e-4 to 1e-6.
We perform experiments on Gaussian noise using ResNet18, GoogLeNet, and VGG16 to collect data, which exhibit the linear relation between ARM and epochs. We then conduct a prediction on a new model, MobileNetV2, whose data are not used to fit the regression predictor. We retrain MobileNetV2 on the first level of noise and obtain a starting value to anchor the regression. This starting value is used with the regression predictor to predict the epochs needed by MobileNetV2 to adapt to the remaining six levels of noise. The actual and predicted retraining epochs are presented in Fig. <ref> (a), which shows that the predicted values are a fair prediction of a new model's epochs required for retraining and its likely behavior under different noise levels.
We perform an experiment to predict additional and unseen noise levels for each of the 4 models. We use the data collected for the initial two and final two Gaussian noise levels of each model when fitting the regression predictor. Using this predictor, we predict the retraining epochs of each model for the three intermediate noise levels - noise levels 3, 4, and 5. Fig. <ref> (b) shows the true and predicted epochs for each model. In all our experiments, the ARM model consistently over-estimates the retraining cost; we see this as a feature, in that the estimates are conservative rather than falsely optimistic. The predicted values are fairly close to the actual values considering that a single regression predictor was used for all models. With pre-existing or additional data on a model's retraining and ARM values, highly accurate predictors for each individual model can be devised. Another point to pay attention to in Fig. <ref> is the inter-model comparison. For the same level of noise, GoogLeNet and MobileNetV2 adapt much faster compared to ResNet18, and ResNet18 adapts faster than VGG16. This comparison helps predict and compare the adaptability and reusability of different model architectures and trained models.
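A minimal sketch of this predictor (the shared slope is taken here as the mean of the per-model least-squares slopes, one simple choice; the paper's exact regression procedure may differ):

```python
import numpy as np

def shared_slope(arm_by_model, epochs_by_model):
    """Average the least-squares slopes of epochs vs. ARM across models."""
    return float(np.mean([np.polyfit(a, e, 1)[0]
                          for a, e in zip(arm_by_model, epochs_by_model)]))

def predict_epochs(slope, arm_ref, epochs_ref, arm_new):
    """Anchor the intercept with one retraining run (arm_ref, epochs_ref)
    on the new model, then extrapolate to further noise levels."""
    intercept = epochs_ref - slope * arm_ref
    return slope * np.asarray(arm_new) + intercept
```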
Table <ref> shows the Pearson correlation coefficient between ARM and epochs. The high Pearson correlation coefficient and low p-value indicate a strong positive relation between ARM and retraining epochs, suggesting that ARM is a suitable predictive measure to determine retraining costs.
With the main objective being sustainable re-usability, we measure the energy and carbon emissions which have a strong positive correlation with the measure. Fig. <ref> depicts the linear increasing trend of ARM with retraining energy usage and carbon emissions for ResNet18.
Additional experiment results on CIFAR10, CIFAR100, and SVHN are provided in the Appendix.
§ CONCLUSION
We have presented a novel metric to predict the cost of retraining a model to new distributional shifts. Our proposed measure, a predictive quantifier of the reusability of trained models, will help users make informed decisions. We demonstrated the correlation and predictive ability of ARM with epochs, energy, and carbon emissions, which indicates the effectiveness of ARM to predict a model's behavior. ARM enables intra-model comparison to different noise levels and inter-model comparison to select the most adaptable and sustainable model.
This publication is based upon work supported by the National Science Foundation under Grant No. 1945541 (transferred and extended to No. 2302610). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
§ APPENDIX A: MODEL RETRAINING COMPARISON AND AGGREGATE REPRESENTATION MEASURE AS A PREDICTIVE TOOL
We further conduct an analysis to predict the epochs required for a new model based on data from other models. For this experiment, we use the measure-vs-epochs data on 7 levels of Gaussian noise from ResNet18, GoogLeNet, and VGG16. We make use of a common regression slope computed from the 3 models. We then introduce a new model, MobileNetV2, whose data was not used in computing the regression model, and perform one retraining experiment for the lowest (first) noise level. Using the obtained value, we are able to formulate the likely intercept for MobileNetV2 and predict the model's likely behavior for the remaining 6 levels of noise. Fig. <ref> depicts the actual and predicted epochs for MobileNetV2 and the data of the other models. The predicted values are a reasonable estimation of the model behavior and can be used to predict the likely behavior of new models and different noise levels.
A second insight Fig. <ref> provides is the comparison of how different models adapt. As can be seen from the figure, for the same levels of noise, GoogLeNet and MobileNetV2 adapt much faster than ResNet18, and ResNet18 adapts faster than VGG16. This comparison is helpful for predicting and comparing the adaptability and reusability of different model architectures and trained models, allowing deep learning practitioners and organizations to make better decisions when selecting which models can be reused with minimal resource expenditure.
§ APPENDIX B: ALL EXPERIMENT DETAILS - CIFAR10, CIFAR100, SVHN
In this Appendix we provide the Pearson correlation coefficients, associated p-values, and ARM vs. retraining epochs graphs for all models on CIFAR10, CIFAR100, and SVHN.
§.§ CIFAR10 Dataset
This section provides all results for GoogLeNet, ResNet18, MobileNetV2, and VGG16 on the CIFAR10 dataset. Fig. <ref>, Fig. <ref>, and Fig. <ref> illustrate the ARM and retraining epochs for different levels of Gaussian noise, Salt-and-Pepper noise, and Image Blur, respectively. Table <ref> provides the Pearson correlation coefficients and p-values for the 4 models retrained on CIFAR10 for the 3 noise types.
§.§ SVHN Dataset
This section provides all results for GoogLeNet, ResNet18, MobileNetV2, and VGG16 on the SVHN dataset. Fig. <ref>, Fig. <ref>, and Fig. <ref> illustrate the ARM and retraining epoch values for different levels of Gaussian noise, Salt-and-Pepper noise, and Image Blur, respectively. Table <ref> provides the Pearson correlation coefficients and p-values for the 4 models retrained on SVHN for the 3 noise types.
§.§ CIFAR100 Dataset
This section provides all results for GoogLeNet, ResNet18, MobileNetV2, and ResNet50 on the CIFAR100 dataset. Fig. <ref>, Fig. <ref>, and Fig. <ref> illustrate the ARM and retraining epochs for different levels of Gaussian noise, Salt-and-Pepper noise, and Image Blur, respectively. Table <ref> provides the Pearson correlation coefficients and p-values for the 4 models retrained on CIFAR100 for the 3 noise types.
arXiv:2405.10222v1 [cond-mat.mtrl-sci], published 2024-05-16
Kramers nodal line in the charge density wave state of YTe_3 and the influence of twin domains
Shuvam Sarkar, Joydipto Bhattacharya, Pramod Bhakuni, Pampa Sadhukhan, Rajib Batabyal, Christos D. Malliakas, Marco Bianchi, Davide Curcio, Shubhankar Roy, Arnab Pariari, Vasant G. Sathe, Prabhat Mandal, Mercouri G. Kanatzidis, Philip Hofmann, Aparna Chakrabarti, Sudipta Roy Barman
^1UGC-DAE Consortium for Scientific Research, Khandwa Road, Indore 452001, Madhya Pradesh, India
^2Theory and Simulations Laboratory, Raja Ramanna Centre for Advanced Technology, Indore 452013, Madhya Pradesh, India
^3Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, Maharashtra, India
^4Department of Chemistry, Northwestern University, Evanston, 60208, Illinois, USA
^5Department of Physics and Astronomy, Interdisciplinary Nanoscience Center (iNANO), Aarhus University, 8000 Aarhus C, Denmark
^6Vidyasagar Metropolitan College, 39 Sankar Ghosh Lane, Kolkata 700006, India
^7Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700 064, India
^8Materials Science Division, Argonne National Laboratory, Lemont, Illinois 60439, USA
Recent studies have focused on the relationship between charge density wave (CDW) collective electronic ground states and nontrivial topological states. Using angle-resolved photoemission and density functional theory, we establish that YTe_3 is a CDW-induced Kramers nodal line (KNL) metal, a newly proposed topological state of matter. YTe_3 is a non-magnetic quasi-2D chalcogenide with a CDW wave vector (q_CDW) of 0.2907c^*. Scanning tunneling microscopy and low energy electron diffraction reveal two orthogonal CDW domains, each with a unidirectional CDW and similar q_CDW. The effective band structure (EBS) computations, using DFT-calculated folded bands, show excellent agreement with ARPES because a realistic x-ray crystal structure and the twin domains are considered in the calculations. The Fermi surface and ARPES intensity plots show weak shadow bands displaced by q_CDW from the main bands. These are linked to the CDW modulation, as the EBS calculation confirms. Bilayer split main and shadow bands suggest the existence of crossings, according to both theory and experiment. DFT bands including spin-orbit coupling indicate a nodal line along the Σ line arising from multiple band crossings perpendicular to the KNL. Additionally, doubly degenerate bands are found only along the KNL at all energies, with some bands dispersing through the Fermi level.
Kramers nodal line in the charge density wave state of YTe_3 and the influence of twin domains
Shuvam Sarkar^1*, Joydipto Bhattacharya^2,3, Pramod Bhakuni^1, Pampa Sadhukhan^1, Rajib Batabyal^1,
Christos D. Malliakas^4, Marco Bianchi^5, Davide Curcio^5, Shubhankar Roy^6, Arnab Pariari^7, Vasant G. Sathe^1, Prabhat Mandal^7, Mercouri G. Kanatzidis^4,8, Philip Hofmann^5, Aparna Chakrabarti^2,3, Sudipta Roy Barman^1†
May 20, 2024
=================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Understanding of the coupling between collective electronic ground states, such as charge density wave (CDW) or superconductivity, and non-trivial topology has become an exciting research frontier in the field of condensed matter physics <cit.>. The intricate interplay of CDW and non-trivial band topology often gives rise to various exotic topological phenomena, such as an axion insulator state <cit.>, a Kramers nodal line (KNL) metal <cit.>, a quantum spin-Hall insulator <cit.>, fractional Chern insulator states <cit.>, eightfold fermionic quasiparticles <cit.>, and the manipulation of topologically protected states <cit.>, to mention a few.
CDW is typically observed in layered materials exhibiting quasi-one dimensional (quasi-1D) or quasi-two dimensional (quasi-2D) structures. Due to the associated modification of the lattice symmetry, a CDW can drive topological phase transitions <cit.>. For instance, we have recently demonstrated that CDW-induced inversion symmetry breaking gives rise to a KNL <cit.> in the non-magnetic layered rare-earth tritelluride LaTe_3 <cit.>. KNLs are a new type of two-fold degenerate nodal lines that always connect two time reversal invariant momenta (TRIM) of the Brillouin zone (BZ) and are robust under spin-orbit coupling (SOC) <cit.>. Xie et al. have proposed that all non-centrosymmetric achiral crystal symmetries with sizable SOC, when coupled with time reversal (TR) symmetry, should host this KNL state <cit.>. Besides our recent work on LaTe_3 <cit.>, experimental evidence of KNLs is limited to ruthenium silicides <cit.> and SmAlSi <cit.>.
Yttrium tritelluride (YTe_3) is another member of the RTe_3 series (R = rare earth) that is non-magnetic and exhibits an incommensurate CDW state at ambient temperature <cit.>. As in other RTe_3 compounds, it has a structure consisting of R-Te1 corrugated blocks sandwiched between the Te2-Te3 bilayers along the long axis (y) [Fig. <ref>(a)]. The Te bilayers in YTe_3 host the CDW <cit.>. The CDW transition temperature (T_CDW) of 334 K has been determined from resistivity measurements <cit.>. The magnetization measurements conducted on YTe_3 indicate that the diamagnetic susceptibility remains unchanged up to room temperature <cit.>. The non-magnetic nature of YTe_3 indicates the existence of TR symmetry. YTe_3 exhibits an orthorhombic structure in the non-CDW state above T_CDW, characterized by the Cmcm space group <cit.>. However, in contrast to the other members of the RTe_3 series (R: La-Tm), whose detailed x-ray crystallography in the CDW state is reported in the literature <cit.>, the structure of YTe_3 has not been studied.
An angle-resolved photoemission spectroscopy (ARPES) investigation on YTe_3 <cit.> reported a k-dependent (in-plane) variation of the CDW gap, where the gap decreases as k_x increases. Their study suggests a smaller CDW gap, and consequently a smaller gapped region in the Fermi surface (FS), in YTe_3 compared to LaTe_3. An interesting characteristic of YTe_3 is that it exhibits superconductivity with a transition temperature of 3 K upon 8% Pd intercalation, which in turn inhibits the CDW state <cit.>.
The existence of the CDW and the resemblance of its physical properties to those of LaTe_3 motivated us to perform a comprehensive study of YTe_3. As a first step, the structure of YTe_3 in the CDW phase was solved by x-ray crystallography. Two mutually orthogonal incommensurate CDW domains with similar CDW wave vector (q_CDW) values, observed in scanning tunneling microscopy (STM) and low energy electron diffraction (LEED), significantly modify the ARPES intensity plots and the Fermi surface. Effective band structure (EBS) calculations based on density functional theory (DFT), considering the twin domains and using a realistic structure of the CDW state determined by x-ray crystallography, provide excellent agreement with ARPES, which shows faint bilayer split shadow bands that exhibit potential crossings with the main bands. DFT calculations show the formation of a KNL along the Σ line, characterized by doubly degenerate bands along the KNL and crossings perpendicular to it.
§ METHODS
§.§ Experimental:
Single crystals of YTe_3 with a residual resistivity ratio [RRR, ρ(300 K)/ρ(2 K)] of 32 were grown using the tellurium flux method <cit.>. High-purity Y and Te were mixed in a molar ratio of 1:39.
This mixture was sealed under high vacuum in a crucible and heated at 900^∘C for 10 h, and subsequently cooled slowly to 600^∘C in 4 days. Excess Te was separated using a high-temperature centrifuge, resulting in gold-colored, plate-like crystals.
Single-crystal x-ray diffraction data for YTe_3 were collected at 100 K with the use of graphite-monochromatized MoKα radiation (λ = 0.71073 Å) on a STOE IPDS diffractometer. The collection of intensity data as well as cell refinement and data reduction were carried out with the program X-Area. An analytical absorption correction was performed (X-Shape within X-Area) and the modulated structure was refined with JANA2006 <cit.>.
Atomic coordinates of the atoms in the subcell and initial values of their modulation functions were determined by the charge-flipping method <cit.>. The distortion (positional or displacement parameter) of a given atomic parameter x_4 in the subcell was expressed by a periodic modulation function p(x_4) in a form of a Fourier expansion
p(k+x_4) = ∑_n=1^m A_sn sin[2π q_n(k+x_4)] + ∑_n=1^m A_cn cos[2π q_n(k+x_4)],

where A_sn is the sine coefficient of the given Fourier term, A_cn the cosine coefficient, n the index of the modulation wave used for the refinement (m waves in total), and k the lattice translation. Here q_n = ∑_i=1^d α_ni q_i, where the α_ni are integers in the linear combination of the incommensurate modulation vectors q_i. Satellite reflections of one order were observed and used for the refinement. Consequently, one modulation wave for the positional and thermal parameters was used for all atoms. Only the symmetry-allowed Fourier terms were refined.
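For illustration, the displacive modulation defined above can be evaluated numerically as follows; the Fourier coefficients are placeholders consistent with the single modulation wave used in the refinement, not the refined values.

import numpy as np

def modulation(x4, k, A_s, A_c, q):
    """Periodic modulation p(k + x4) = sum_n A_s[n] sin(2*pi*q[n]*(k+x4))
                                     + A_c[n] cos(2*pi*q[n]*(k+x4))."""
    phase = 2 * np.pi * np.outer(q, k + x4)          # shape (m, len(x4))
    return A_s @ np.sin(phase) + A_c @ np.cos(phase)

# One harmonic (m = 1), as in the refinement; coefficient values are illustrative.
A_s, A_c = np.array([0.05]), np.array([0.02])        # displacement amplitudes (angstrom, say)
q = np.array([0.2907])                               # modulation wave vector (units of c*)
x4 = np.linspace(0, 1, 11)
print(modulation(x4, k=0, A_s=A_s, A_c=A_c, q=q))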
The ARPES measurements were conducted at the SGM3 beamline at the ASTRID2 synchrotron facility <cit.>. FS data at the SGM3 beamline were collected with an energy resolution of 15-20 meV at photon energies (hν) of 24 and 28 eV, with an angular resolution of 0.2^∘ (0.008 Å^-1). These measurements were performed at various temperatures ranging from 45-340 K, and photon energy-dependent studies were conducted using different photon energies in the range of 16 eV to 30 eV.
A linearly polarized photon beam in the horizontal plane was incident at an angle of 50^∘ with respect to the surface normal, which was oriented along the analyzer axis. The analyzer slit was vertically oriented, resulting in a vertical detection plane; the experimental geometry is similar to that used in Ref. Sarkar2023.
The STM measurements were conducted under a base pressure of 2×10^-11 mbar employing a variable-temperature STM from Omicron Nanotechnology GmbH in the constant current mode. Mechanically ground Pt-Ir tips from Unisoku were used and cleaned in situ using the voltage pulse method. LEED was performed using a four-grid rear-view optics from OCI Vacuum Microengineering. A third-order 2D polynomial background function (with 10 coefficients) <cit.> was subtracted from the LEED image to extract the weak CDW-related satellite spots. All the measurements were carried out on freshly peeled surfaces under a chamber base pressure of 2×10^-10 mbar.
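A minimal sketch of this background subtraction step is given below: a third-order 2D polynomial in the pixel coordinates (10 coefficients for total degree 3) is fit to the image by linear least squares and subtracted. This is a generic reconstruction under stated assumptions, not the exact routine of the cited software.

import numpy as np

def subtract_poly_background(image, deg=3):
    """Fit a 2D polynomial of total degree `deg` (10 terms for deg=3) by
    linear least squares and subtract it from the LEED image."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # Design matrix with terms x^i * y^j for i + j <= deg.
    terms = [(x**i * y**j).ravel() for i in range(deg + 1)
             for j in range(deg + 1 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    background = (A @ coeffs).reshape(ny, nx)
    return image - background

# Illustrative use on a synthetic image with a smooth background.
rng = np.random.default_rng(0)
img = 5 + 0.01 * np.arange(64)[None, :] + rng.normal(0, 0.1, (64, 64))
cleaned = subtract_poly_background(img)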
§.§ Density functional theory:
DFT calculations have been performed using the Vienna Ab-initio Simulation Package (VASP) <cit.> within the framework of the projector augmented wave method (PAW) <cit.> to obtain the electronic structure of YTe_3. The exchange-correlation functional is treated under the generalized gradient approximation <cit.>. The energy cut-off is set to 500 eV for the expansion of the plane waves. The convergence criterion for energy in the self-consistent-field cycle and the total force tolerance on each atom are taken to be 10^-6 eV and 0.02 eV/Å, respectively. The SOC is employed by a second-variation method as implemented in the VASP code <cit.>.
The calculations have been performed for a seven-fold approximate structure with C2cm space group (SG #40) derived from the experimental atomic positions from the cif file using the PSEUDO program <cit.>. This program displaces the atoms to arrive at the commensurate seven-fold structure with non-centrosymmetric C2cm space group (SG #40). This is discussed further in section <ref>.
EBS has been computed using the PyProcar code, and the experimental energy and momentum broadening were convoluted with the unfolded spectral function for comparison with ARPES <cit.>. All the DFT bands (and consequently the EBS) are rigidly shifted to larger binding energy (E) by 0.1 eV for comparison with the ARPES data. VESTA software has been used for crystal structure visualization <cit.>.
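As an illustration of the broadening step, a sketch is shown below in which an unfolded spectral-weight map is convolved with Gaussians of widths set by the experimental energy and momentum resolutions; the grid spacings and σ values are placeholders, not the resolutions quoted above.

import numpy as np
from scipy.ndimage import gaussian_filter

def broaden_ebs(spectral_weight, dE, dk, sigma_E=0.02, sigma_k=0.01):
    """Convolve an unfolded spectral-weight map A(k, E) with Gaussians that
    mimic the experimental energy (eV) and momentum (1/angstrom) resolutions.
    dE and dk are the grid spacings; sigma values are illustrative."""
    return gaussian_filter(spectral_weight,
                           sigma=(sigma_k / dk, sigma_E / dE))

# spectral_weight: rows = k points, columns = energies (synthetic example).
A = np.zeros((200, 300)); A[100, 150] = 1.0
A_broadened = broaden_ebs(A, dE=0.005, dk=0.002)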
§ RESULTS AND DISCUSSION
§.§ Crystal structure of YTe_3 in the CDW state
X-ray crystallography data show that YTe_3 has an orthorhombic structure. The lattice constants and other crystal data are presented in Table S1 of the Supplementary Material (SM) <cit.>. We find that YTe_3 hosts a unidirectional incommensurate CDW with q_CDW = 0.2907c^*, and the superspace group is determined to be C2cm(00γ), similar to the other members of the RTe_3 series <cit.>. C2cm is the basic space group of the C2cm(00γ) superspace group and γ represents the z component of q_CDW.
An incommensurate structure is generally approximated as a commensurate structure with a large unit cell such that its q_CDW is close to the incommensurate value. The latter becomes an accurate representation of the incommensurate structure if it is based on the atom positions determined by x-ray crystallography and its q_CDW is within the experimental accuracy <cit.>. By utilizing the continued fraction method <cit.> to find a rational fraction that could represent q_CDW, we arrive at a fifty-five-fold structure (1×1×55 supercell) with C2cm space group and q_CDW = 16/55 c^* = 0.2909c^*, which matches the experimental value of 0.2907(4)c^* within its accuracy. It has 440 atoms in the unit cell, with positions almost coinciding with those given by x-ray crystallography.
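The rational approximations quoted here and below follow directly from the continued-fraction convergents of q_CDW, which can be checked with a few lines of Python:

from fractions import Fraction

q_exp = Fraction(2907, 10000)          # experimental q_CDW in units of c*

for dmax in (7, 55):                   # denominators of the two supercells used
    r = q_exp.limit_denominator(dmax)
    print(f"best fraction with denominator <= {dmax}: {r} = {float(r):.4f}")
# -> 2/7 = 0.2857 (seven-fold cell); 16/55 = 0.2909 (fifty-five-fold cell)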
A comparison of the Te2 and Te3 atom positions (green dots) with the experimental positions (open circles) shows no perceptible deviation [Fig. S1(a)
of SM <cit.>]. This is also supported by a small average displacement (u= 0.001 Å) of the atoms in the unit cell from the experimentally determined positions,
as obtained from the PSEUDO program <cit.>.
However, because of the large size of the fifty-five-fold unit cell, DFT calculations turned out to be highly resource intensive. So, a smaller seven-fold unit cell with 56 atoms was derived, importantly with all symmetries of the fifty-five-fold structure preserved, i.e., with the same space group [Fig. <ref>(a)]. The corresponding q_CDW value is 2/7 c^* (= 0.2857c^*), a 1.8% deviation from the experimental value, with the Te atoms (orange dots) showing small deviations from the experimental positions (open circles) [Fig. S1(b) of SM <cit.>]. Figure <ref>(a) shows that YTe_3 is made up of two main structural units: the Te2-Te3 bilayer that hosts the CDW and the Y-Te1 corrugated slab. The Te bilayer, highlighted by blue double-sided arrows, is weakly coupled by van der Waals interaction.
The Te 4d core-level photoemission spectrum of YTe_3 in Fig. <ref>(b) comprises 4d_5/2 and 4d_3/2 peaks separated by the spin-orbit splitting of 1.5 eV. Note that each of these peaks exhibits two components that are separated by 0.6 eV. These components are a signature of the different valency of Te in the Y-Te1 slab and the Te2-Te3 layer, due to transfer of electronic charge from the former to the latter. A nearly similar splitting was observed in the Te 4d <cit.> and Te 3d <cit.> spectra of LaTe_3. This was attributed to the difference in the valency of Te1 compared to Te2 (and Te3), as supported by the DFT calculations.
§.§ Twin domains on the surface
In Fig. <ref>(c), the LEED pattern of the YTe_3 surface in the CDW state at room temperature (the CDW transition temperature T_CDW being 334 K <cit.>) displays sharp main spots labeled A-H. The satellite spots related to the CDW modulation along the k_z direction are highlighted by red circles. Interesting to note are the relatively weaker satellite spots along the k_x direction (blue circles). These are unambiguously visible after performing a background subtraction, as shown in the inset on the right side. The CDW modulation vectors, q_1 along k_z and q_2 along k_x, have been determined from the LEED intensity profiles by measuring the distance between the satellite and the main peaks. In Fig. <ref>(d), the intensity line profiles measured along AC and CE demonstrate that both q_1 and q_2 have a similar value of (0.3±0.01)c^*, which is in agreement with that determined by x-ray crystallography. Note that for RTe_3 with R = Tb-Tm, a coexisting bidirectional CDW has been observed <cit.>, and the magnitudes of q_CDW in the two mutually perpendicular directions are different (e.g., (2/7)c^* and (1/3)a^* for ErTe_3 <cit.>). On the other hand, the q values for the two twin domains are similar here for YTe_3.
These twin domains are directly observed in the STM topography image with atomic resolution, where the domain boundary is indicated by a dashed white line [Fig. <ref>(f)]. Satellite spots related to the CDW are also visible in the Fourier transforms (FT) of each domain, along k_z for domain 1 (top) and along k_x for domain 2 (bottom) [Figs. <ref>(g,h)]. In Fig. <ref>(e), the q values for the two domains from the line profiles measured along PQ [Fig. <ref>(g)] and UV [Fig. <ref>(h)] are q_1 = (0.29 ± 0.03)c^* and q_2 = (0.3 ± 0.05)c^*, which are equal within the error bars and are in excellent agreement with the values obtained from LEED. It was also noted from STM topographies performed at different sample locations that domain 1 is more prevalent than domain 2, a finding corroborated by the lower intensity of the LEED spots corresponding to domain 2 [Fig. <ref>(c)].
One possible reason for the emergence of twin domains in YTe_3 may be a naturally occurring in-plane stacking fault, leading to a swap of the a and c axes <cit.>. Conversely, a recent investigation demonstrated that in-plane tensile stress along the x-direction can reversibly toggle the q_CDW between the two in-plane axes <cit.>. This switching behavior of q_CDW is attributed to the square planar Te sheet with tetragonal symmetry (a = c), which supports the appearance of a CDW along both in-plane directions <cit.>. However, the slight orthorhombicity in RTe_3 (c > a, e.g., a/c = 0.998 for YTe_3) breaks the tetragonal symmetry, favoring the primary q_CDW along c. A minor change in the lattice parameters due to stress can alter the direction of q_CDW. Consequently, a locally strained region, inherent during crystal growth or mechanical exfoliation, could lead to stretching of the lattice along the x-direction, causing a slight enlargement of the lattice constant a relative to c in small regions of the sample and biasing the q_CDW to align along this direction. A recent study on GdTe_3 also suggests that local strain induces the formation of twin domain walls <cit.>. Nevertheless, a more in-depth investigation is needed to fully understand the origin of twin domain formation in YTe_3.
In Fig. <ref>(f), atomic resolution enables direct observation of the CDW in the top Te layer. A zoomed region in the yellow rectangle shows that the average positions of the neighboring Te chains, depicted by the black dashed lines, are not equidistant (orange and green arrows), signifying the breaking of the M_x mirror symmetry and, consequently, of the inversion symmetry as well. The non-centrosymmetry is also demonstrated by the Raman spectrum in Fig. <ref>(i) through the occurrence of the P2 and P4 peaks of the B_1 symmetry mode, which is an irreducible representation of the achiral C_2v point group. These modes were also observed in LaTe_3 <cit.>. The mirror symmetry, however, remains intact in the non-CDW state, which has a centrosymmetric Cmcm space group <cit.>.
§.§ Shadow branches in the twin domain modified Fermi surface
The Fermi surface in the CDW state of YTe_3 measured by ARPES comprises two diamond-shaped sheets centered around Γ and two smaller oval pockets – γ_1 and γ_2 – located near the X and Z points, respectively; these high symmetry points are indicated in red [Fig. <ref>(b)]. The high symmetry points are also shown in the Brillouin zone (BZ) (black lines) in Fig. <ref>(a), which is inscribed within the non-CDW BZ (cyan).
A signature of a gapped region in the FS around the ΓZ direction was observed previously for other RTe_3 compounds <cit.>. Curiously, such a gapped region seems to be absent in Fig. <ref>(b). The FS rather nearly resembles that calculated for the non-CDW state of RTe_3 <cit.>, except for an important difference: the occurrence of a narrow gapped region, highlighted by the red arrow at (k_x, k_z) = (0.2, 0.2) Å^-1, that splits the inner diamond-shaped sheet (α). We henceforth refer to the upper (lower) part as α_1 (α_2), as shown in Fig. <ref>(b). α_2 resembles α_1 rotated by 90^∘. A similar replica of the β_1 (upper part of the outer sheet) branch is named β_2 (lower part). γ_2, observed around Z in the lower right corner, is also a replica of the γ_1 pocket around X (upper left corner).
The appearance of these replicas in the FS rotated by 90^∘ can be explained by the presence of the twin domains (1 and 2), since the ARPES signal with a photon beam spot size of (200×100) μm^2 probes both domains simultaneously. The intensity of the FS branches related to domain 2 is lower; e.g., compare the intensities of the α_2 and α_1 sheets. This indicates that the contribution of domain 1 is greater than that of domain 2, which is consistent with the LEED and STM results discussed in the previous section. The high symmetry points of domain 1 (related to q_1) and domain 2 (related to q_2) are shown in red and blue, respectively, in Fig. <ref>(b).
To examine whether the narrow gapped region (red arrow) mentioned above might be a signature of the CDW gap, we have measured the FS in both the CDW and non-CDW states. Interestingly, in contrast to the FS in the CDW state at 45 K [Fig. <ref>(c)], as the temperature is raised above T_CDW [Fig. <ref>(d)], the α sheet is no longer gapped at 340 K (black dashed oval), indicating CDW melting. Subsequently, as the temperature is lowered back to 45 K, the gap becomes visible again [white arrows in Fig. <ref>(e)]. Thus, the temperature dependent FS measurement establishes that the narrow gapped region in the α sheet corresponds to the CDW gap in YTe_3. Additionally, from a comparison of the ARPES bands and the calculated EBS considering both domains, we show how the combined effect of the CDW gap and the twin domains gives rise to the narrow gapped region in the FS in Discussion A of the SM <cit.>.
Thus, unlike the case of LaTe_3 single crystals (as in Ref. Sarkar2023), the FS of our YTe_3 crystals is significantly modified by the presence of twin domains.
In spite of this, shadow branches related to the CDW modulation are observed, as indicated by the dark yellow arrows in Fig. <ref>(b). These are separated from the main branches by q_CDW (black dashed arrows). The shadow branches are observed for both domains; see, for example, the vertical and horizontal black dashed arrows of length q_CDW.
§.§ ARPES and effective band structure along XΓZ
The ARPES intensity plot of YTe_3 along the XΓZ high-symmetry direction in Fig. <ref>(f) shows that the bands towards the ΓZ (ΓX) direction of domain 1 overlap with those of the ΓX (ΓZ) direction of domain 2. Therefore, the intensity plot appears essentially identical in these two directions, with bands crossing the Fermi level (E_F) in both directions. Additionally, in both directions, some bands are bent away from the E_F [black arrows] due to the hybridization of the main band and the shadow band, resulting in the formation of the CDW gap.
In Fig. <ref>(g), these hybridized bands are shown with enhanced clarity by the black and orange arrows around E = 0.3 eV and 0.5 eV, respectively. The lower limit of the CDW gap, defined from the E_F to the band maximum [Δ], is estimated to be 0.29±0.02 eV. This value is similar in both directions.
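Operationally, this lower limit Δ can be read off from an energy distribution curve as the separation between E_F and the maximum of the back-bent band; a schematic sketch with synthetic data is shown below.

import numpy as np

def gap_lower_limit(energies, edc_intensity, e_fermi=0.0):
    """Estimate the lower limit of the CDW gap as the separation between the
    Fermi level and the maximum of the back-bent band in an energy
    distribution curve (binding energies positive below E_F)."""
    band_max = energies[np.argmax(edc_intensity)]
    return band_max - e_fermi

# Synthetic EDC peaked at 0.29 eV binding energy (illustrative only).
E = np.linspace(0, 1, 500)
edc = np.exp(-((E - 0.29) / 0.05) ** 2)
print(f"Delta ~ {gap_lower_limit(E, edc):.2f} eV")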
In Fig. <ref>(h), the DFT calculated bands of YTe_3 along XΓZ for the seven-fold structure show multiple folded bands along both the ΓZ and ΓX directions (gray dashed curves), which are difficult to compare with the ARPES intensity plots [Figs. <ref>(f,g)]. So, we have determined the EBS from the DFT bands by band unfolding, as shown by the red markers <cit.>. The EBS shows reasonable resemblance to ARPES: the CDW gap along ΓZ is evident in the EBS, as marked by the black arrows in Fig. <ref>(h). Along ΓX, two parabolic bands (related to the γ pocket) centered around X cross the E_F, along with two additional bands (related to the α sheet) that approach the E_F.
The agreement between theory [Fig. <ref>(i)] and experiment [Fig. <ref>(g)] is excellent when the influence of the twin domains is accounted for by superimposing the EBSs calculated along ΓZ and ΓX, where the red and blue bands represent the domain 1 and domain 2 bands, respectively. For k_||>0, the bands involved in the CDW gap formation are from domain 1, while the bands that cross the E_F are from domain 2. The roles are reversed for k_||<0. Figure S2 of SM <cit.> shows the evolution of the CDW gap across the BZ; a good agreement between ARPES and EBS is also apparent here, see Discussion A of SM <cit.>.
§.§ Main and shadow bands towards k_z near the BZ boundary
In this subsection, we investigate the interaction between the ARPES main bands, which are present also in the non-CDW state, and the shadow bands, which appear only in the CDW state. Fig. <ref>(a) shows a stack of k_z-k_x isosurface plots for different E ranging from E = 0 to 0.4 eV. The FS, i.e., the top plot for E = 0, as well as the other isosurface plots at larger E show the shadow branches corresponding to the main β_1 branch, as indicated by yellow arrows. The main and shadow branches appear to cross, and the crossing point shifts to lower k_x as E increases (shown by the black arrows). Notably, the crossing point does not shift in k_z and occurs at k_z = 0.209 Å^-1 (blue dashed arrow). Additionally, the crossings between the main and shadow bands are also identified in the ARPES intensity plots, E(k_z), measured at k_x = 0.67, 0.61, 0.59, and 0.56 Å^-1, respectively [Figs. <ref>(b-e)]. The red horizontal lines on the right axis of the FS in Fig. <ref>(a) indicate these directions. In Fig. <ref>(b), we observe two main bands (inner and outer) of parabolic shape that are centered around the X point (indicated by the red dot on the top horizontal axis). These bands cross the E_F at k_z ≈ ±0.15 and ±0.21 Å^-1. The outer main band, related to the β_1 main branch, disperses down to an energy of ∼1.2 eV, while the inner main band, related to the γ_1 main branch, has its minimum at ∼0.8 eV. With decreasing k_x in Figs. <ref>(c-e), the outer main band spreads out in k_z, while the inner band shrinks in k_z and its bottom moves towards lower E. Additionally, a flat band at E = 0.3 eV [pink arrows in Figs. <ref>(c,d) and (g,h)] along with an "M"-shaped band centered around k_z = 0 [green arrow in Fig. <ref>(h), black dashed curve as a guide to the eye] partly overlap with the main bands.
From the EBS in Figs. <ref>(j-m) [calculated at the same k_x values as in Figs. <ref>(f-i), respectively], where the bands of both domains are overlaid, we find that these additional bands (blue color) are related to domain 2 [pink and green arrows in Fig. <ref>(l)]. Furthermore, the EBS reveals a splitting of the main band, as highlighted by the double black arrows in Fig. <ref>(k). This splitting – referred to as "bilayer splitting" – occurs due to the interaction between the Te bilayers that host the CDW in YTe_3 and has been observed in other RTe_3 members <cit.>. Additionally, the calculated band structure E(k_z) at k_x = 0.59 Å^-1 (gray curves in Fig. S3 of SM <cit.>) illustrates that the bilayer splitting of the outer main band increases with energy. For example, at E = 0 eV, the splitting (Δk_z) is 0.2 Å^-1, while at E = 0.4 eV, it is 0.3 Å^-1. Note that the bilayer splitting is not related to the CDW, as has been discussed in detail for LaTe_3 <cit.>. For YTe_3 also, a comparison of the CDW band structure with that of the non-CDW state (blue dashed) in Fig. S3 of SM <cit.> reveals that the bilayer splitting is comparable in both states.
The ARPES intensity plots are, however, complicated by the presence of twin domains: bands from domain 2 interfere with the bands from domain 1. For example, some of the blue bands from domain 2 can be seen in between the bilayer split main bands and overlapping with the crossing region within the black dashed circles in Figs. <ref>(k-m). Nevertheless, bilayer splitting is observed at higher E, where it is pronounced [two black arrows in Figs. <ref>(c-e) and (g-i)].
The evidence for the shadow bands is obtained in the 2D curvature plots, e.g., in Figs. <ref>(g-i). The shadow bands on both sides of the X point are separated by q_CDW from the corresponding main bands, as shown by the black dashed horizontal arrows. In the raw ARPES intensity plots in Figs. <ref>(b-e), the momentum distribution curves (MDC) at the E_F (gray filled circles) depict the k_z where the shadow bands cross (yellow vertical arrows). The larger intensity peaks in the MDC are related to the main bands (black vertical arrows). As k_x decreases, the k_z separation between the main and the shadow band increases, and their crossing – indicated by the black dashed circles – shifts to higher E values. However, independent of k_x, the crossings occur at the same k_z, at the Γ_2X_2 line [cyan dot at the top of Figs. <ref>(f-i)].
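The peak positions quoted from the MDCs can be extracted by fitting, e.g., two Lorentzians (main and shadow peak) on a constant background; the sketch below uses synthetic data and an illustrative main-shadow separation, not our measured profiles.

import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(k, a1, k1, w1, a2, k2, w2, bg):
    """Main + shadow band peaks in an MDC at E_F, on a constant background."""
    l1 = a1 * w1**2 / ((k - k1)**2 + w1**2)
    l2 = a2 * w2**2 / ((k - k2)**2 + w2**2)
    return l1 + l2 + bg

# Synthetic MDC: strong main peak and a weak shadow peak at an assumed separation.
q_sep = 0.21   # illustrative main-shadow separation (1/angstrom)
k = np.linspace(0.0, 0.6, 300)
mdc = two_lorentzians(k, 1.0, 0.15, 0.02, 0.15, 0.15 + q_sep, 0.02, 0.05)
mdc += np.random.default_rng(1).normal(0, 0.005, k.size)

p0 = (1.0, 0.14, 0.03, 0.1, 0.36, 0.03, 0.0)
popt, _ = curve_fit(two_lorentzians, k, mdc, p0=p0)
print(f"main peak at k = {popt[1]:.3f}, shadow peak at k = {popt[4]:.3f} (1/angstrom)")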
It may be noted that, in good agreement with ARPES, the calculated EBSs in Figs. <ref>(j-m) show that the shadow bands are separated by q_CDW from the main bands, as indicated by the horizontal black dashed arrows. The EBS clearly shows that the shadow bands in YTe_3 are weaker than in LaTe_3 <cit.>, which is consistent with the observation from ARPES. So, this is not related to extraneous factors, but to the difference in the crystal structures of these two compounds. The most notable difference – although their q_CDW are close – is in the amplitude (A) of the CDW modulation, which is a factor of two smaller in YTe_3 (A = 0.07 Å from Fig. S1 of SM <cit.>) compared to LaTe_3 (A = 0.14 Å from supplementary Fig. 2 of Ref. Sarkar2023). Additionally, our EBS calculation for LaTe_3 showed that if A decreases to about one-fifth of the experimental value, the shadow bands are almost completely absent <cit.>. Thus, it is reasonable to conclude that the weakness of the shadow bands in YTe_3 is related to the smaller amplitude of the CDW. In spite of this, the crossings of the shadow bands with the main bands are observed [highlighted by the black dashed circles in Figs. <ref>(j-m)]. These show a similar trend as ARPES, i.e., the crossings move towards larger E with lower k_x.
§.§ Evidence of a Kramers nodal line in YTe_3
Figure <ref>(a) shows the EBS of the crossing region at k_x = 0.59 Å^-1, corresponding to the black dashed circle in Fig. <ref>(l). The EBS shows four crossings (L, R, T and B) between the bilayer split main bands (mu and md) and the shadow bands (su and sd). Since the electronic structure of the structurally similar domains would be the same, to avoid interference from the bands of domain 2 we show the EBS for domain 1 only, with the corresponding folded band structure (black curves) overlaid on it. Note that the additional parabolic band due to band folding is not detected in the unfolded EBS because of its reduced spectral weight and is also not observed in ARPES [Figs. <ref>(d,h)]. Notably, although all four crossings were observed from ARPES in LaTe_3 <cit.>, in the case of YTe_3 the weakness of the shadow bands and the interference of the twin domains make it difficult to identify them.
In Figs. <ref>(b,c), expanded regions near the L and R crossings show that both are gapped by 12 meV. T and B also exhibit a mini gap (4-6 meV). To decipher the influence of SOC, we have calculated the folded band structure near the crossing region without SOC (Fig. S4 of SM <cit.>). Mini-gaps are observed at T and B without SOC, signifying the hybridization of the bands involved in these crossings. In contrast, L and R exhibit gapless crossings without SOC. Thus, nodal lines are formed at L and R that are gapped out with SOC. A similar behavior has been reported for LaTe_3 <cit.>. When SOC is considered, each of the gapped branches of both T and B exhibits a splitting that is expected in a noncentrosymmetric system such as YTe_3.
However, it is interesting to note that the bands cross at k_z = 0.209 Å^-1, as indicated by the green arrows in Figs. <ref>(d,e). k_z = 0.209 Å^-1 is significant because it corresponds to the Γ_2X_2 direction. The origin of the crossings is discussed later. The SOC splitting is more pronounced for the bands at the T crossing [2 meV, Fig. <ref>(d)] compared to the B crossing [0.2 meV, inset of Fig. <ref>(e)].
In Fig. <ref>(f), the EBS along the Γ_2X_2 direction shows the dispersion of the T and B crossing related bands, i.e., these bands trace the crossings along the k_x direction. However, due to the broadening used, we do not resolve both branches. Both the mini-gapped branches of T and B are, however, evident in the folded band structure shown zoomed in Figs. <ref>(g) and (h), respectively, where two doubly degenerate branches cross the E_F. From the E(k_x) ARPES intensity plot and its 2D curvature plot in Figs. <ref>(i,j) towards the Γ_2X_2 direction at k_z = 0.209 Å^-1, the T and B crossing related bands are experimentally observed. These cross the E_F at k_x = 0.62 and 0.66 Å^-1, respectively. It is worth mentioning that the EBS [Fig. <ref>(f)] exhibits excellent agreement with the ARPES bands in Figs. <ref>(i,j).
An analysis of the orbital texture of the EBS for k_x = 0.59 Å^-1 [Fig. <ref>(a)] in Figs. S5(a-c) of SM <cit.> shows that both the main and shadow bands are predominantly of Te p_x-p_z character near the crossing regions, while the Te p_y character becomes significant for E ≥ 0.5 eV, indicating the 2D character of both the main and shadow bands. Further, the orbital textures of the EBS along Γ_2X_2 [Fig. <ref>(f)] presented in Figs. S5(d-f) of SM <cit.> reveal that both the T and B bands exhibit significant contributions from the in-plane Te orbitals (p_x and p_z), showcasing their 2D nature. However, above E = 0.4 eV, there is some p_y contribution to the T band.
The double degeneracy of the bands discussed above is in fact true for all the DFT calculated bands along ΓX, as shown in Fig. <ref>. The zoomed blue rectangular region in Fig. <ref>(b) shows this for the different bands crossing the E_F. The zoomed yellow and red rectangular regions in Figs. <ref>(c,d) show, in contrast, that the degeneracy is lifted by the SOC along other directions, i.e., along ΓZ, whereas it remains intact along ΓX both near the E_F and at larger E, respectively. The seven-fold structure of YTe_3 with the C2cm space group and the little group that is isomorphic to the C_2v point group, coupled with protection from time reversal symmetry, enforces the double degeneracy along Γ_2X_2, i.e., ΓX or the Σ line <cit.>. This is the signature of the Kramers nodal line (KNL) <cit.> that always connects two TRIMs. In the present case, we find that both the upper and lower branches of T and B exhibit quadratic dispersion in the vicinity of the crossings [Figs. <ref>(d,e)]. This type of crossing with quadratic and cubic dispersion has been described as higher-order Dirac points (two-fold degenerate), and the Berry phase around a KNL is quantized as mπ mod 2π <cit.>. The crossings of the CDW-induced shadow bands with the main bands are enforced by the KNL. The crossings of the E(k_z) bands disperse and traverse the E_F as k_x increases [Figs. <ref>(f-j)], and multiple degenerate E(k_x) bands along the KNL cross the E_F [Figs. <ref>(a,b)], establishing that YTe_3 is a KNL metal.
§ CONCLUSIONS
We present a comprehensive investigation of the non-magnetic chalcogenide material YTe_3 in the CDW state, combining results on the electronic band structure from ARPES, DFT performed with an experiment-based structure determined by x-ray crystallography, and surface properties from STM and LEED. We find that the crystal structure of YTe_3 is noncentrosymmetric with the C2cm basic space group and a q_CDW of 0.2907c^*. A commensurate fifty-five-fold structure is derived from the atomic positions determined by x-ray crystallography, which has a q_CDW indistinguishable from the x-ray crystallography value within the experimental accuracy. However, since this unit cell is very large, the DFT calculations have been performed with a seven-fold structure, albeit with the same symmetries.
Evidence of two mutually orthogonal CDW domains, each exhibiting a unidirectional CDW with a similar q_CDW, is obtained from both LEED and STM. The EBS, obtained by unfolding the DFT calculated bands, provides excellent agreement with ARPES when the contributions from both twin domains are considered. The CDW gap is barely visible in the FS of YTe_3 because of the twin domains. Nevertheless, the gap could be identified by comparing with the FS measured at high temperature in the non-CDW state. Thus, twin domains influence the band structure and the Fermi surface of YTe_3.
CDW modulation related weak shadow bands near the BZ boundary, which are shifted from the main bands by q_CDW, have been identified in the ARPES intensity plots and are further corroborated by the EBS calculation. Additionally, the possible existence of multiple crossings between the bilayer split main and shadow bands is portrayed both by experiment and theory. The bands calculated using DFT with SOC show a nodal line along Σ resulting from the crossings between the bilayer split main and shadow bands perpendicular to the KNL. Doubly degenerate bands are found only along the KNL for all energy values, with certain bands crossing the E_F. The KNL exists only in the CDW phase, where shadow bands are present and the inversion symmetry is broken. The symmetries of the C2cm space group and the time reversal symmetry protect the KNL. Thus, our present work establishes that YTe_3 is a KNL metal. The intricate interplay between CDW, non-trivial topology, and twin domains in YTe_3 unveils a rich landscape for further exploration and offers a pathway to deeper insights into exotic topological phases in condensed matter systems.
§ ACKNOWLEDGMENTS
S.S., P.S. and S.R.B. gratefully acknowledge the financial support from Department of Science and Technology, Government of India within the framework of the DST-Synchrotron-Neutron Project to perform experiments at ASTRID2 synchrotron facility. A part of this work was supported by VILLUM FONDEN via the Centre of Excellence for Dirac Materials (Grant No. 11744). H. S. Kunwar is thanked for help in the Raman measurement. The Computer division of Raja Ramanna Centre for Advanced Technology is thanked for installing the DFT codes and providing support throughout. At Argonne, this work was supported by the U.S. Department of Energy, Basic Energy Sciences, Office of Science, Materials Sciences and Engineering Division (synthesis, crystals growth, property characterization).
^*shuvamsarkarhere@gmail.com
^†barmansr@gmail.com
Author contributions
S.S., P.S., and S.R.B. conducted the ARPES measurements with assistance and support from M.B., D.C. and P.H. LEED was carried out by S.S. and P.S., while STM was performed by P.B. and R.B. J.B. did the DFT calculations under the supervision of A.C. S.R., A.P., C.D.M., M.G.K., and P.M. performed crystal growth and property characterization, the latter introduced us to this system. C.D.M. and M.G.K. performed x-ray crystallography and solved the crystal structure. V.G.S. provided the Raman spectroscopy data. S.S. analyzed the ARPES, STM and LEED data, performed the post-analysis of the DFT results with some inputs from J.B., and prepared the figures. The explanation of the results was provided by S.S. and S.R.B. The project was planned and led by S.R.B. who jointly wrote the paper with significant contributions from S.S.
Competing interests
The authors declare no competing interests.
§ REFERENCES
[1] S. Sarkar, J. Bhattacharya, P. Sadhukhan, D. Curcio, R. Dutt, V. K. Singh, M. Bianchi, A. Pariari, S. Roy, P. Mandal, T. Das, P. Hofmann, A. Chakrabarti, and S. R. Barman, Charge density wave induced nodal lines in LaTe_3, Nat. Commun. 14, 3628 (2023).
[2] H. Li, T. T. Zhang, T. Yilmaz, Y. Y. Pai, C. E. Marvinney, A. Said, Q. W. Yin, C. S. Gong, Z. J. Tu, E. Vescovo, C. S. Nelson, R. G. Moore, S. Murakami, H. C. Lei, H. N. Lee, B. J. Lawrie, and H. Miao, Observation of unconventional charge density wave without acoustic phonon anomaly in kagome superconductors AV_3Sb_5 (A = Rb, Cs), Phys. Rev. X 11, 031050 (2021).
[3] W. Shi, B. J. Wieder, H. L. Meyerheim, Y. Sun, Y. Zhang, Y. Li, L. Shen, Y. Qi, L. Yang, J. Jena, P. Werner, K. Koepernik, S. Parkin, Y. Chen, C. Felser, B. A. Bernevig, and Z. Wang, A charge-density-wave topological semimetal, Nat. Phys. 17, 381 (2021).
[4] X. Qian, J. Liu, L. Fu, and J. Li, Quantum spin Hall effect in two-dimensional transition metal dichalcogenides, Science 346, 1344 (2014).
[5] S. Lei, S. M. L. Teicher, A. Topp, K. Cai, J. Lin, G. Cheng, T. H. Salters, F. Rodolakis, J. L. McChesney, S. Lapidus, N. Yao, M. Krivenkov, D. Marchenko, A. Varykhalov, C. R. Ast, R. Car, J. Cano, M. G. Vergniory, N. P. Ong, and L. M. Schoop, Band engineering of Dirac semimetals using charge density waves, Adv. Mater. 33, 2101591 (2021).
[6] B.-J. Yang and H.-Y. Kee, Searching for topological density-wave insulators in multiorbital square-lattice systems, Phys. Rev. B 82, 195126 (2010).
[7] S.-M. Huang, S.-Y. Xu, B. Singh, M.-C. Hsu, C.-H. Hsu, C. Su, A. Bansil, and H. Lin, Aspects of symmetry and topology in the charge density wave phase of 1T-TiSe_2, New J. Phys. 23, 083037 (2021).
[8] H. Polshyn, Y. Zhang, M. A. Kumar, T. Soejima, P. Ledwith, K. Watanabe, T. Taniguchi, A. Vishwanath, M. P. Zaletel, and A. F. Young, Topological charge density waves at half-integer filling of a moiré superlattice, Nat. Phys. 18, 42 (2021).
[9] X. Zhang, Q. Gu, H. Sun, T. Luo, Y. Liu, Y. Chen, Z. Shao, Z. Zhang, S. Li, Y. Sun, Y. Li, X. Li, S. Xue, J. Ge, Y. Xing, R. Comin, Z. Zhu, P. Gao, B. Yan, J. Feng, M. Pan, and J. Wang, Eightfold fermionic excitation in a charge density wave compound, Phys. Rev. B 102, 035125 (2020).
[10] N. Mitsuishi, Y. Sugita, M. S. Bahramy, M. Kamitani, T. Sonobe, M. Sakano, T. Shimojima, H. Takahashi, H. Sakai, K. Horiba, H. Kumigashira, K. Taguchi, K. Miyamoto, T. Okuda, S. Ishiwata, Y. Motome, and K. Ishizaka, Switching of band inversion and topological surface states by charge density wave, Nat. Commun. 11, 2466 (2020).
[11] Y.-M. Xie, X.-J. Gao, X. Y. Xu, C.-P. Zhang, J.-X. Hu, J. Z. Gao, and K. T. Law, Kramers nodal line metals, Nat. Commun. 12, 3064 (2021).
[12] T. Shang, J. Zhao, L.-H. Hu, J. Ma, D. J. Gawryluk, X. Zhu, H. Zhang, Z. Zhen, B. Yu, Y. Xu, Q. Zhan, E. Pomjakushina, M. Shi, and T. Shiroka, Unconventional superconductivity in topological Kramers nodal-line semimetals, Sci. Adv. 8, eabq6589 (2022).
[13] Y. Zhang, Y. Gao, X.-J. Gao, S. Lei, Z. Ni, J. S. Oh, J. Huang, Z. Yue, M. Zonno, S. Gorovikov, M. Hashimoto, D. Lu, J. D. Denlinger, R. J. Birgeneau, J. Kono, L. Wu, K. T. Law, E. Morosan, and M. Yi, Kramers nodal lines and Weyl fermions in SmAlSi, Commun. Phys. 6 (2023), doi:10.1038/s42005-023-01257-2.
[14] N. Ru and I. R. Fisher, Thermodynamic and transport properties of YTe_3, LaTe_3, and CeTe_3, Phys. Rev. B 73, 033101 (2006).
[15] V. Brouet, W. L. Yang, X. J. Zhou, Z. Hussain, R. G. Moore, R. He, D. H. Lu, Z. X. Shen, J. Laverock, S. B. Dugdale, N. Ru, and I. R. Fisher, Angle-resolved photoemission study of the evolution of band structure and charge density wave properties in RTe_3 (R = Y, La, Ce, Sm, Gd, Tb, and Dy), Phys. Rev. B 77, 235104 (2008).
[16] K. Yumigeta, Y. Qin, H. Li, M. Blei, Y. Attarde, C. Kopas, and S. Tongay, Advances in rare-earth tritelluride quantum materials: Structure, properties, and synthesis, Adv. Sci. 8, 2004762 (2021).
[17] C. D. Malliakas and M. G. Kanatzidis, Divergence in the behavior of the charge density wave in RETe_3 (RE = rare-earth element) with temperature and RE element, J. Am. Chem. Soc. 128, 12612 (2006).
[18] N. Ru, Charge density wave formation in rare-earth tritellurides, Ph.D. thesis, Stanford University, California (2008).
[19] J. B. He, P. P. Wang, H. X. Yang, Y. J. Long, L. X. Zhao, C. Ma, M. Yang, D. M. Wang, X. C. Shangguan, M. Q. Xue, P. Zhang, Z. A. Ren, J. Q. Li, W. M. Liu, and G. F. Chen, Superconductivity in Pd-intercalated charge-density-wave rare earth poly-tellurides RETe_n, Supercond. Sci. Technol. 29, 065018 (2016).
[20] A. Pariari, S. Koley, S. Roy, R. Singha, M. S. Laad, A. Taraphder, and P. Mandal, Interplay between charge density wave order and magnetic field in the nonmagnetic rare-earth tritelluride LaTe_3, Phys. Rev. B 104, 155147 (2021).
[21] V. Petříček, M. Dušek, and L. Palatinus, Crystallographic computing system JANA2006: General features, Z. Kristallogr. Cryst. Mater. 229, 345 (2014).
[22] G. Oszlányi and A. Sütő, Ab initio structure solution by charge flipping, Acta Crystallogr. A 60, 134 (2004).
[23] G. Oszlányi and A. Sütő, Ab initio structure solution by charge flipping. II. Use of weak reflections, Acta Crystallogr. A 61, 147 (2005).
[24] S. Hoffmann, C. Søndergaard, C. Schultz, Z. Li, and P. Hofmann, An undulator-based spherical grating monochromator beamline for angle-resolved photoemission spectroscopy, Nucl. Instrum. Methods Phys. Res. A 523, 441 (2004).
[25] Igor Pro Manual, version 9 (2021).
[26] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
[27] G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999).
[28] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
[29] C. Capillas, E. S. Tasci, G. de la Flor, D. Orobengoa, J. M. Perez-Mato, and M. I. Aroyo, A new computer tool at the Bilbao Crystallographic Server to detect and characterize pseudosymmetry, Z. Kristallogr. 226, 186 (2011).
[30] U. Herath, P. Tavadze, X. He, E. Bousquet, S. Singh, F. Muñoz, and A. H. Romero, PyProcar: A Python library for electronic structure pre/post-processing, Comput. Phys. Commun. 251, 107080 (2020).
[31] K. Momma and F. Izumi, VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data, J. Appl. Crystallogr. 44, 1272 (2011).
[32] See Supplemental Material at xx.xx for Figs. S1-S5, Table S1, and Discussion A.
[33] T. Janssen, A. Janner, A. Looijenga-Vos, and P. M. de Wolff, Incommensurate and commensurate modulated structures, in International Tables for Crystallography (International Union of Crystallography, 2006), pp. 907-955.
[34] S. van Smaalen, Incommensurate Crystallography, Vol. 21 (Oxford University Press, 2007).
[35] M. F. Wyman and B. F. Wyman, An essay on continued fractions, Math. Syst. Theory 18, 295 (1985).
[36] S. Sarkar, V. K. Singh, P. Sadhukhan, A. Pariari, S. Roy, P. Mandal, and S. R. Barman, X-ray photoelectron spectroscopy study of a layered tri-chalcogenide system LaTe_3, AIP Conf. Proc. 2220, 100005 (2020).
[37] N. Ru, C. L. Condron, G. Y. Margulis, K. Y. Shin, J. Laverock, S. B. Dugdale, M. F. Toney, and I. R. Fisher, Effect of chemical pressure on the charge density wave transition in rare-earth tritellurides RTe_3, Phys. Rev. B 77, 035114 (2008).
[38] A. Fang, N. Ru, I. R. Fisher, and A. Kapitulnik, STM studies of TbTe_3: Evidence for a fully incommensurate charge density wave, Phys. Rev. Lett. 99, 046401 (2007).
[39] R. G. Moore, V. Brouet, R. He, D. H. Lu, N. Ru, J.-H. Chu, I. R. Fisher, and Z.-X. Shen, Fermi surface evolution across multiple charge density wave transitions in ErTe_3, Phys. Rev. B 81, 073102 (2010).
[40] M. Lavagnini, M. Baldini, A. Sacchetti, D. Di Castro, B. Delley, R. Monnier, J.-H. Chu, N. Ru, I. R. Fisher, P. Postorino, and L. Degiorgi, Evidence for coupling between charge density waves and phonons in two-dimensional rare-earth tritellurides, Phys. Rev. B 78, 201101 (2008).
[41] W. Setyawan and S. Curtarolo, High-throughput electronic band structure calculations: Challenges and tools, Comput. Mater. Sci. 49, 299 (2010).
[42] W. Ku, T. Berlijn, and C.-C. Lee, Unfolding first-principles band structures, Phys. Rev. Lett. 104, 216401 (2010).
[43] P. B. Allen, T. Berlijn, D. A. Casavant, and J. M. Soler, Recovering hidden Bloch character: Unfolding electrons, phonons, and slabs, Phys. Rev. B 87, 085322 (2013).
[44] G.-H. Gweon, J. D. Denlinger, J. A. Clack, J. W. Allen, C. G. Olson, E. DiMasi, M. C. Aronson, B. Foran, and S. Lee, Phys. Rev. Lett. 81, 886 (1998).
author C. G. Olson, author E. DiMasi, author
M. C. Aronson, author
B. Foran, and author
S. Lee, title title Direct observation of complete fermi surface, imperfect nesting, and
gap anisotropy in the high-temperature incommensurate charge-density-wave
compound SmTe_3, https://doi.org/10.1103/PhysRevLett.81.886
journal journal Phys. Rev. Lett. volume 81, pages 886 (year
1998)NoStop
[Chikina et al.(2023)Chikina, Lund, Bianchi, Curcio, Dalgaard, Bremholm, Lei, Singha, Schoop, and Hofmann]Chikina2023
author author A. Chikina, author H. Lund,
author M. Bianchi, author D. Curcio, author
K. J. Dalgaard, author
M. Bremholm, author
S. Lei, author R. Singha, author L. M. Schoop, and author P. Hofmann, title title Charge
density wave generated fermi surfaces in ndte_3, https://doi.org/10.1103/PhysRevB.107.L161103 journal
journal Phys. Rev. B volume 107, pages L161103 (year 2023)NoStop
|
http://arxiv.org/abs/2405.09425v1 | 20240515151846 | Robust Covariance-Based Activity Detection for Massive Access | [
"Jianan Bai",
"Erik G. Larsson"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
Robust Covariance-Based Activity Detection for Massive Access
Jianan Bai and Erik G. Larsson
The authors are with the Department of Electrical Engineering (ISY), Linköping University, 58183 Linköping, Sweden (email: jianan.bai@liu.se, erik.g.larsson@liu.se). This work was supported in part by ELLIIT, the KAW foundation, and the European Union’s Horizon 2020 research and innovation program under grant agreement no. 101013425 (REINDEER).
The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725.
May 20, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The wireless channel is undergoing continuous changes, and the block-fading assumption, despite its popularity in theoretical contexts, never holds true in practical scenarios. This discrepancy is particularly critical for user activity detection in grant-free random access, where joint processing across multiple resource blocks is usually undesirable. In this paper, we propose employing a low-dimensional approximation of the channel to capture variations over time and frequency and robustify activity detection algorithms. This approximation entails projecting channel fading vectors onto their principal directions to minimize the approximation order. Through numerical examples, we demonstrate a substantial performance improvement achieved by the resulting activity detection algorithm.
§ INTRODUCTION
In grant-free random access (GFRA) systems, the handshaking process inherent in grant-based schemes is circumvented in order to diminish communication latency.
Nevertheless, due to the open-loop operation of all users, the base station is required to identify active users prior to decoding their payload information.
The task of activity detection becomes notably challenging when the number of potential users considerably surpasses the channel coherence length, thereby precluding the utilization of mutually orthogonal pilot sequences.
State-of-the-art techniques rely on traffic sporadicity to facilitate accurate activity detection using non-orthogonal pilots. The majority of these methods fall into two primary categories: compressed sensing (CS) and the covariance-based approach. In CS, not only are the identities of active users discerned, but their respective channels can also be recovered. An effective iterative algorithm derived from CS is the approximate message passing (AMP), which has been extensively investigated in studies such as <cit.>. Contrarily, the covariance approach does not directly provide channel estimates, but it can potentially identify a considerably larger number of active users than CS in large, albeit finite, antenna regimes; the scaling law for this approach is formally substantiated in the seminal paper <cit.>.
Predominantly, current schemes restrict to ideal narrowband systems and presume a block-fading channel, an assumption driven by the concept of coherence block and justifiable by the sampling theorem when joint processing or coding across blocks is feasible. Nevertheless, the block-fading assumption may not hold true in the context of activity detection, wherein instantaneous processing is anticipated within a block. It is crucial to acknowledge that the channel continuously experiences significant variations (up to π phase shifts) within a coherence block, as the term is nominally defined in the literature, for example in <cit.>.
Some efforts have been made to address the channel variations in wideband orthogonal frequency division multiplexing (OFDM) systems. In <cit.>, subcarriers are partitioned into sub-blocks, and the channel gain is presumed to change linearly within each sub-block; consequently, an AMP-based detection algorithm is developed. The covariance approach is extended in <cit.> by assuming independent and identically distributed (i.i.d.) channel taps and capitalizing on the discrete Fourier transform (DFT) structure present in the OFDM symbols.
Technical contribution: We present a unified framework that generalizes the approximation models in <cit.> from a dimensionality reduction perspective. We propose that the optimal approximation, with respect to minimizing the approximation order, can be achieved by projecting the channel vectors onto their principal directions, a classical outcome derived from principal component analysis (PCA). Our treatment of this subject naturally leads to an extension of the coherence block concept to a statistical perspective, which we term as a prediction horizon. We employ this result to remarkably enhance the robustness of the covariance-based activity detection algorithm.
Notations: Most of our notations follow standard conventions. Particularly, we use [N] to represent the set {1,⋯,N} for any positive integer N. We denote [x]_i as the i-th entry in a vector x, and [X]_i,j as the (i,j)-th entry in a matrix X. We use D_x for a diagonal matrix with x on its diagonal.
§ SIGNAL MODEL
We consider a single-cell system comprising a base station with M antennas and K potential users, each with a single antenna.
Each user k∈[K] is pre-assigned a unique, length-L pilot sequence ϕ_k=[ϕ_1k,⋯,ϕ_Lk]^T that is normalized to have unit energy per symbol, i.e., ‖ϕ_k‖_2^2=L.
We are only interested in the regime where L≪ K, and therefore, the pilot sequences of different users are mutually non-orthogonal.
The activity of user k is denoted by a binary variable a_k.
The active users will transmit their pilot sequences over a time-frequency block of size T× F (a resource block with T OFDM symbols and F subcarriers). We assume L=TF for simplicity and the time- and frequency indices of the l-th pilot dimension are given by
t_l = l - ⌊(l-1)/F⌋ F and f_l=⌊(l-1)/F⌋ + 1,
respectively, where ⌊·⌋ is the floor function. See Fig. <ref> for an illustration of such blocks.
Assuming that sufficiently long cyclic prefixes are appended to the OFDM symbols, we obtain L parallel discrete memoryless channels in the frequency domain. Denoting the channel coefficients between user k and antenna m over the pilot dimensions by h_km = [h_1km,⋯,h_Lkm]^T, the received pilot signals at antenna m is given by
y_m = ∑_k∈[K] a_k √(β_k)h_kmϕ_k + w_m,
where β_k is the received signal strength (the product of the large-scale fading coefficient and the transmit power) of user k, and w_m∼𝒞𝒩(0,σ^2I) is additive noise.
§ DIMENSIONALITY REDUCTION OF CHANNEL VARIATIONS
In the existing literature, the problem of activity detection has been extensively studied under the block-fading model so that the channel coefficients are assumed to be constant during the pilot transmission. This block-fading assumption is motivated by the notion of coherence block/interval (see, for example, <cit.>), the size of which depends on the delay spread of multipath propagation and the Doppler frequency resulting from mobility. We note that, however, the wireless channel is always continuously changing, and the block-fading assumption can collapse due to large excess delay and/or high mobility.
The question therefore arises:
Can we go beyond the block-fading model when designing activity detection algorithms?
§.§ A Unified Viewpoint: Dimensionality Reduction
This question has been partially approached in previous work <cit.> by either approximating the frequency-selective channel over the subcarriers by a block-wise linear (BWL) model, or leveraging the assumed statistical distribution of the channel impulse response in the delay domain and the DFT structure inherent in the OFDM symbols (hereafter referred to as the DFT-based model).
We note that both of the models in <cit.>, as well as the block-fading model, can be interpreted from the perspective of dimensionality reduction and, mathematically, they can be unified into an abstract model
h_km≈∑_n∈[N]θ_nkmg_n = Gθ_km,
where G=[g_1,⋯,g_N]∈ℂ^L× N is a deterministic matrix, θ_km=[θ_1km,⋯,θ_Nkm]^T∈ℂ^N is a random vector with i.i.d. entries, and N is the approximation order.
The interpretation of (<ref>) is that the channel vectors {h_km} approximately lie in an N-dimensional subspace, N≪ L, with the basis {g_n}, and the stochastic nature of h_km is encapsulated within a significantly reduced representation θ_km.
To see how the abstract model (<ref>) generalizes those existing models, we first note that, as the simplest example, the block-fading model adopts N=1 with g_1=1. The BWL model in <cit.> divides the subcarriers into N/2 sub-blocks, and each sub-block is associated with two basis vectors, both of which have all-zero entries outside the corresponding sub-block; the first vector's non-zero entries are all ones, accounting for the mean value, while the other features equally spaced entries representing linear variations. The DFT-based model in <cit.>, on the other hand, assumes N i.i.d. channel taps in the time domain which results in N basis vectors that are given by corresponding columns in the DFT matrix.
However, to preserve a high approximation accuracy as the channel undergoes more rapid variations, the BWL model necessitates smaller sub-block sizes, which undermines the benefits provided by this model. The DFT-based model fails to exploit the magnitude variation in the power delay profile and the statistical correlation across channel taps due to pulse shaping. As the number of sampled channel taps increases, the algorithm developed based on the DFT-based model may experience escalating complexity and degraded performance. Moreover, these schemes consider only the channel variations across frequency, disregarding the variations over time. This limitation renders these schemes unsuitable for cases requiring multiple (OFDM) symbols due to limited bandwidth.
§.§ Our Approach: A Principal Component Analysis
To prepare our answer to the question about extending the block-fading model, we first generalize the notion of coherence interval, which is nominally defined in a deterministic sense, to a statistical viewpoint. The motivation is that even if the channel varies substantially within a block, the fading coefficients may still exhibit strong statistical correlations that can be effectively exploited. Notably, this type of blocks, which we term as a prediction horizon, can have a size that is significantly larger than the nominal coherence interval. A more formal definition is provided as follows:
A time-frequency block of size L is an ϵ-approximate prediction horizon of order N if there exist a deterministic matrix G∈ℂ^L× N and a random vector θ∈ℂ^N such that the small-scale fading vector h∈ℂ^L can be approximated by Gθ with approximation error
‖h-Gθ‖_2≤ϵ.
For any given block size L and approximation order N, the optimal choice of G that minimizes the approximation error can be obtained by performing a PCA. Specifically, denoting the covariance matrix of h_km as R and its best rank-N approximation (in terms of both spectral- and Frobenius norm) as UD_ρU^H, where ρ=[ρ_1,⋯,ρ_N]^T contains the N largest eigenvalues and U=[u_1,⋯,u_N] comprises the corresponding eigenvectors, the optimal choice is given by
G = UD_ρ^1/2.
By restricting our discussion to Rayleigh fading channels, the corresponding random vector is θ_km∼𝒞𝒩(0,I_N).
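As a concrete illustration, the optimal basis can be computed in a few lines; the sketch below is ours (the function name and the use of NumPy are assumptions, not part of the paper) and simply takes the N principal eigenpairs of R.

import numpy as np

def pca_basis(R, N):
    # Optimal order-N basis G = U D_rho^(1/2): U holds the N principal
    # eigenvectors of the channel covariance R, rho the N largest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(R)        # ascending order for Hermitian R
    idx = np.argsort(eigvals)[::-1][:N]
    rho, U = eigvals[idx], eigvecs[:, idx]
    return U * np.sqrt(np.maximum(rho, 0.0))    # scale each column by sqrt(rho_n)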
To establish a connection with other models from the perspective of PCA, we observe that the block-fading model assumes R=11^T, the BWL model presumes a block-diagonal R with a specific structure in each diagonal block, and the DFT-based model assumes R=FF^H, where F represents the corresponding DFT matrix. In contrast, our approach entails a more efficient exploitation of the covariance information, leading to a more potent dimensionality reduction.
Lastly, we note that the selection of G relies on the channel covariance R, which may not be perfectly known and must be learned from the environment. Fortunately, although the degrees of freedom in a Hermitian matrix generally amount to L(L+1)/2, the underlying physical or statistical model typically results in significantly fewer degrees of freedom in R. For instance, in a wide-sense stationary and uncorrelated scattering (WSSUS) channel, R exhibits a Toeplitz-block-Toeplitz structure. The number of unknowns can be further reduced if the underlying physical model is known (with the DFT-based model serving as an extreme case).
§ WHAT APPROXIMATION ORDER?
Our PCA-based model is designed to accurately capture channel variations within a resource block, utilizing the lowest possible approximation order to enable efficient signal processing for activity detection. We observe that even for a physical channel characterized by a large excess delay and high mobility, a low approximation order is sufficient. In the following section, we substantiate this claim through numerical examples.
§.§ Generation of Channel
We first generate the impulse response of a WSSUS channel by using the improved sum-of-sinusoids method proposed in <cit.>. Since the same procedure will be repeated independently for each user-antenna pair, we ignore the subscript “km” throughout this subsection. For a channel with N_p paths at different delays, we generate each path independently, and the impulse response of the i-th path is given by
q_i(t) = 1/√(N_s)∑_n∈[N_s] e^j(ω_d t cosα_n + ψ_n)
with
α_n = (2π n + ζ_n)/N_s, n∈[N_s],
where N_s is the number of sinusoids, ω_d is the maximum Doppler frequency in radians, and ψ_n and ζ_n are i.i.d. uniformly distributed over [-π,π).
We denote the sampling rate by B which is identical to the system bandwidth, and the impulse response of the pulse shaping filter as p(·).
The discrete-time impulse response of the multipath-fading channel is given by
q_t_lℓ = ∑_i∈[N_p]√(c_i) q_i((t_l-1)/B) p(ℓ-Bτ_i),
where t_l∈[T], the fractional power c_i and the delay τ_i of different paths are determined by a power delay profile defined in, for example, <cit.>. Here, ℓ is the time-lag index, which is an integer ranging from ℓ_min to ℓ_max. The smallest time-lag ℓ_min is a negative integer, e.g., -3, and ℓ_max=⌈ Bτ_max⌉ - ℓ_min with τ_max=max_i∈[N_p]τ_i being the maximum excess delay. For each t_l, we apply the discrete Fourier transform (DFT) to x_t_l=[q_t_l0,⋯,q_t_lℓ_max,q_t_lℓ_min,⋯,q_t_l,-1]^T to obtain the frequency response at f_l∈[F] as
Q_t_lf_l = ∑_i∈[ℓ_max-ℓ_min+1] [x_t_l]_i e^-j2π (i-1) (f_l-1)/(ℓ_max-ℓ_min+1).
These coefficients are re-arranged into the length-L channel vector, where the l-th element, h_l, equals Q_t_lf_l with the invertible mapping between l and (t_l,f_l) defined in (<ref>).
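A compact sketch of this generator is given below; all names are our own, an ideal sinc pulse stands in for the root-raised-cosine filter used in the numerical example, and the printed formulas above are followed literally, so this is an illustration rather than the exact simulator behind the reported results.

import numpy as np

def wssus_channel(T, F, B, omega_d, delays, powers, Ns=20, l_min=-3, seed=0):
    # Sum-of-sinusoids fading per path, sinc pulse shaping, then a DFT across
    # time lags to obtain the frequency response h_l = Q_{t_l, f_l}.
    rng = np.random.default_rng(seed)
    L, Np = T * F, len(delays)
    l_max = int(np.ceil(B * max(delays))) - l_min
    lags = np.arange(l_min, l_max + 1)
    n_lags = lags.size
    psi = rng.uniform(-np.pi, np.pi, (Np, Ns))
    zeta = rng.uniform(-np.pi, np.pi, (Np, Ns))
    alpha = (2 * np.pi * np.arange(1, Ns + 1) + zeta) / Ns
    h = np.zeros(L, dtype=complex)
    for l in range(1, L + 1):
        t_l, f_l = l - ((l - 1) // F) * F, (l - 1) // F + 1
        q = np.exp(1j * (omega_d * (t_l - 1) / B * np.cos(alpha) + psi)).sum(1) / np.sqrt(Ns)
        x = sum(np.sqrt(c) * q[i] * np.sinc(lags - B * tau)
                for i, (c, tau) in enumerate(zip(powers, delays)))
        x = np.r_[x[-l_min:], x[:-l_min]]    # reorder lags to [0..l_max, l_min..-1]
        h[l - 1] = x @ np.exp(-2j * np.pi * np.arange(n_lags) * (f_l - 1) / n_lags)
    return h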
§.§ Numerical Example
We proceed to evaluate the accuracy of the proposed low-dimensional approximation. The time-frequency block size is set to T=F=10, resulting in a channel dimension of L=100. We employ the Hilly Terrain channel model defined in <cit.>, with a maximum excess delay of τ_max≈ 18.02 microseconds and a mobile speed of 120 km/h. The carrier frequency is set to 3.5 GHz, which corresponds to a maximum Doppler frequency of ω_d ≈ 2445.2 rad/s. The subcarrier spacing is set to 5 kHz (aligning with the long preamble sequence in the 5G New Radio PRACH formats <cit.>), resulting in a system bandwidth for pilot transmission of 50 kHz. The number of sinusoids in the channel simulator is N_s=20. For pulse shaping, we utilize a root-raised-cosine (RRC) filter with a rolloff factor of 0.22 and a symbol rate of 1.
We perform the eigenvalue decomposition on the sample covariance of the generated fading coefficients to obtain the matrix G whose columns are the basis vectors {g_n}.
For an arbitrary user-antenna pair, we obtain the least-squares estimate θ̂_km of the effective channel by minimizing ‖h_km - Gθ_km‖_2 over θ_km.
The estimated channel is given by ĥ_km = Gθ̂_km.
An instance of the channel realization and its order-4 approximation on the time-frequency block are depicted in Fig. <ref>. The basis vectors g_n are visualized in Fig. <ref> with proper normalization.
To quantify the approximation accuracy, we define H = [⋯,h_km,⋯] as an L× KM matrix consisting of the fading coefficients of all user-antenna pairs, Ĥ = [⋯,ĥ_km,⋯] as the channels obtained by the low-dimensional approximation, and H̄ = [⋯,h̄_km,⋯] with h̄_km = (1/L∑_l∈[L]h_lkm)1 as the block-fading approximation. We define the metric
κ = ‖H - Ĥ‖_F/‖H - H̄‖_F
as the relative approximation error compared with the block-fading model. The value of κ for different approximation orders N is depicted in Fig. <ref>. One can observe that a relatively low approximation order, 3≤ N ≤ 5, gives a significantly improved approximation accuracy.
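The metric is straightforward to evaluate; the helper below is an illustrative sketch of ours using a plain least-squares fit for the effective channels.

import numpy as np

def relative_error(H, G):
    # kappa: Frobenius error of the order-N fit H_hat = G @ theta, relative
    # to the block-fading fit H_bar (per-column mean), for channels H (L x KM).
    theta, *_ = np.linalg.lstsq(G, H, rcond=None)
    H_hat = G @ theta
    H_bar = np.tile(H.mean(axis=0, keepdims=True), (H.shape[0], 1))
    return np.linalg.norm(H - H_hat) / np.linalg.norm(H - H_bar)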
§ ACTIVITY DETECTION
By using the abstract channel model in (<ref>), the received pilot signals at antenna m can be re-written as
y_m
= ∑_k∈[K] a_k √(β_k)S_kθ_km + w_m,
where S_k = [s_1k,⋯,s_Nk] with s_nk=g_nϕ_k being the effective pilot.
We note that y_m has distribution 𝒞𝒩(0,Σ) with the covariance matrix
Σ = 𝔼[y_my_m^H] = ∑_k∈[K] a_k β_k S_kS_k^H + σ^2I.
Given the observation in (<ref>) that the distribution of the received signals {y_m} is parameterized by the user activities a=[a_1,⋯,a_K]^T, it is reasonable to perform a maximum likelihood (ML) detection. However, the binary constraint a∈{0,1}^K renders the problem combinatorial such that the complexity grows exponentially with K. Following the approach in <cit.>, we define γ=[γ_1,⋯,γ_K]^T with γ_k = a_kβ_k and relax the binary constraint by γ∈Γ, where Γ represents the non-negative orthant {γ_k ≥ 0} when {β_k} are unknown, and the box constraint {0≤γ_k≤β_k} otherwise.
The ML estimation of γ can be formulated by
γ^* = argmin_γ∈Γ f(γ),
where
f(γ) = log|Σ_γ| + tr(Σ_γ^-1Σ̂)
is the negative log-likelihood function (up to some rescaling and removal of constant terms) with
Σ_γ=∑_k∈[K]γ_kS_kS_k^H + σ^2I
being the parameterized covariance matrix, and the sample covariance Σ̂ = 1/M∑_m∈[M]y_my_m^H. We note that the cost in (<ref>) is the log-determinant divergence between Σ̂ and Σ_γ (up to some constant terms), and the ML formulation can also be interpreted from the covariance matching perspective.
We employ a coordinate descent algorithm to solve the ML problem (<ref>).
Specifically, in each iteration of the algorithm, we pick a coordinate (user) k based on some pre-determined schedule and make the update γ←γ + d^*e_k where e_k is the k-th standard basis vector of ℝ^K, and
d^* = argmin_d∈[-γ_k,∞) f(γ + de_k).
(In case that β_k is known, the constraint can be replaced by d∈[-γ_k,β_k-γ_k].) Different from the original work <cit.>, where the change of γ_k results in a rank-1 update in the covariance matrix, we need to deal with a rank-N update according to (<ref>). This problem has been successfully solved in <cit.>; we briefly present the approach in the following.
By applying Sylvester's determinant identity and the Woodbury matrix identity, we obtain
log |Σ_γ + dS_kS_k^H | = log|Σ_γ| + log|I + dΨ_k|
(Σ_γ + dS_kS_k^H)^-1 = Σ_γ^-1 - dΣ_γ^-1S_k(I + dΨ_k)^-1S_k^HΣ_γ^-1
with Ψ_k=S_k^HΣ_γ^-1S_k. By writing the eigenvalue decomposition of Ψ_k as V_kD_λ_kV_k^H with λ_k = [λ_k1,⋯,λ_kN]^T, the cost function in (<ref>) can be rewritten as
f(γ+de_k) = f(γ) + log|I + dD_λ_k| - d tr((I + dD_λ_k)^-1Ξ_k)
= f(γ) + ∑_n∈[N](log(1+dλ_kn) - d[Ξ_k]_n,n/(1+dλ_kn))
with Ξ_k = V_k^HS_k^HΣ_γ^-1Σ̂Σ_γ^-1S_kV_k, and it has the derivative
∂/∂ d f(γ + d e_k) = ∑_n=1^N ( λ_kn/(1 + dλ_kn) - [Ξ_k]_n,n/(1+dλ_kn)^2).
The optimal d^* can be obtained by comparing the cost value for all feasible stationary points and boundary points. The stationary points where the derivative in (<ref>) equals zero can be obtained by real-root isolation of a polynomial with real-valued coefficients of order 2N-1. No explicit formula exists for N≥ 3, while efficient algorithms were developed <cit.>. Alternatively, for large N, a one-dimensional search can be employed to minimize the cost (<ref>) directly.
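Following the one-dimensional-search alternative mentioned above, a prototype of the coordinate update might look as follows; Sigma_inv = Σ_γ^-1 and Sigma_hat = Σ̂ are assumed precomputed, and the grid bounds are ad hoc choices of ours rather than part of the algorithm.

import numpy as np

def cd_update(Sigma_inv, Sigma_hat, S_k, gamma_k, d_max=10.0, n_grid=2001):
    # Minimize f(gamma + d e_k) - f(gamma) over d >= -gamma_k using the
    # reduced N-dimensional form above; grid search replaces root isolation.
    Psi = S_k.conj().T @ Sigma_inv @ S_k
    lam, V = np.linalg.eigh(Psi)
    Xi = V.conj().T @ S_k.conj().T @ Sigma_inv @ Sigma_hat @ Sigma_inv @ S_k @ V
    xi = np.real(np.diag(Xi))
    grid = np.linspace(-gamma_k, d_max, n_grid)
    grid = grid[(grid[:, None] * lam[None, :] > -1).all(axis=1)]  # keep Sigma PD
    vals = [np.sum(np.log1p(d * lam) - d * xi / (1 + d * lam)) for d in grid]
    return grid[int(np.argmin(vals))]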
§.§ Numerical Evaluation
We evaluate the performance of activity detection by simulating a single-cell system where the base station has M=200 antennas. There are K=1000 potential users with K_a=100 active users that are selected uniformly at random during each block. For simplicity, we assume an ideal channel inversion power control such that the signal-to-noise ratio (SNR) equals 0 dB for all users, i.e., β_1=⋯=β_K=σ^2=1. We consider a time-frequency block of size T=10, F=14 with pilot length L=140, and the fading coefficients are generated by using the same set of parameters as in Section <ref>. The pilot sequences are generated independently from 𝒞𝒩(0,I_L) and then normalized to have unit energy per symbol. We run the coordinate descent algorithm for 10 iterations, and within each iteration, the coordinates are selected by traversing a randomly permuted list of user indices.
In addition to the algorithm for the block-fading model in <cit.>, we also use the BWL model in <cit.> and the DFT-based model in <cit.> as baselines. Since the time variations are not considered in these models, we modify their models for a more fair comparison. In the BWL model, we take the first basis vector as an all-one vector representing the mean value, the second- and third vectors accounting for the linear variations for t_l∈{1,⋯,T/2 } and t_l∈{T/2+1,⋯,T} over time, and the fourth- and fifth vectors for the linear variations for t_l∈{1,⋯,F/2 } and f_l∈{F/2+1,⋯,F} over frequency. In the DFT-based model, the first basis vector is also an all-one vector, while we use the second and last columns in the T-point and F-point DFT matrices as the remaining basis vectors. We also incorporate the results obtained by ignoring the time variability in these two models (the two basis vectors accounting for time variations are removed). All the basis vectors are properly normalized so that the corresponding random components have unit variance. We note that in <cit.>, an AMP-based algorithm is designed for the BWL model. Since our main focus is on the channel approximation models instead of comparing different algorithms, we use the adapted covariance approach for this model as well. The detection performance is compared in Fig. <ref>. As one can observe, the PCA-based models significantly robustify the covariance-based activity detection algorithm and outperform the competing models by a considerable margin. The approximation models that ignore the time-variability do not provide an obvious benefit under the highly time-varying environment.
§ CONCLUSION
We generalize the channel approximation models from user activity detection literature into a unified framework following the dimensionality reduction perspective. This naturally leads to an extension of the nominal coherence block concept to a statistical viewpoint which inspires us to exploit the statistical correlation across channel coefficients. Consequently, we propose a PCA-based model to jointly approximate the channel variations over both time and frequency with the lowest possible approximation order. This finding results in an adapted version of the covariance-based activity detection algorithm that is robust under highly varying channel conditions. It is important to note, however, that the accuracy of our PCA-based model depends on the quality of channel covariance information. The acquisition of channel covariance information, while not thoroughly investigated in this paper, requires careful consideration in practice.
|
http://arxiv.org/abs/2405.08649v2 | 20240514142849 | The computational power of discrete chemical reaction networks with bounded executions | [
"David Doty",
"Ben Heckmann"
] | cs.CC | [
"cs.CC"
] |
The computational power of discrete chemical reaction networks with bounded executions
David Doty and Ben Heckmann
14th May 2024
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Chemical reaction networks (CRNs) model systems where molecules interact according to a finite set of reactions such as A + B → C, representing that if a molecule of A and B collide, they disappear and a molecule of C is produced.
CRNs can compute Boolean-valued predicates ϕ:ℕ^d →{0,1} and integer-valued functions f:ℕ^d →ℕ; for instance X_1 + X_2 → Y computes the function min(x_1,x_2).
We study the computational power of execution bounded CRNs,
in which only a finite number of reactions can occur from the initial configuration
(e.g., ruling out reversible reactions such as A ⇌ B).
The power and composability of such CRNs depends crucially on some other modeling choices that do not affect the computational power of CRNs with unbounded executions, namely whether an initial leader is present, and whether (for predicates) all species are required to “vote” for the Boolean output.
If the CRN starts with an initial leader, and can allow only the leader to vote,
then all semilinear predicates and functions can be stably computed in O(n log n) parallel time by execution bounded CRNs.
However,
if no initial leader is allowed, all species vote,
and the CRN is “noncollapsing”
(does not shrink from initially large to final O(1) size configurations),
then execution bounded CRNs are severely limited,
able to compute only eventually constant predicates.
A key tool is to characterize execution bounded CRNs as precisely those with a nonnegative linear potential function that is strictly decreased by every reaction, a result that may be of independent interest.
§ INTRODUCTION
Chemical reaction networks (CRNs) are a fundamental tool for understanding and designing molecular systems. By abstracting chemical reactions into a set of finite, rule-based transformations, CRNs allow us to model the behavior of complex systems. For instance, the CRN with a single reaction 2X → Y, produces one Y every time two X molecules randomly react together, effectively calculating the function f(x) = ⌊ x/2 ⌋ if the initial count of X molecules is interpreted as the input and Y as the output.
A commonly studied special case of CRNs is the population protocol model of distributed computing <cit.>,
in which each reaction has exactly two reactants and two products, e.g., A+B → C+D.
This model assumes idealized conditions where reactions can proceed indefinitely, constrained only by the availability of reactants in the well-mixed solution.
Precisely the semilinear predicates ϕ: ℕ^d →{0,1} and functions f:ℕ^d →ℕ can be computed stably, roughly meaning that the output is correct no matter the order in which reactions happen.
In population protocols or other CRNs with a finite reachable configuration space, this means that the output is correct with probability 1 under a stochastic scheduler that picks the next molecules to react at random.
However, existing constructions to compute semilinear predicates and functions use CRNs with unbounded executions,
meaning that it is possible to execute infinitely many reactions from the initial configuration.
CRNs with bounded executions have several advantages.
With an absolute guarantee on how many reactions will happen before the CRN terminates,
wet-lab implementations need only supply a bounded amount of fuel to power the reactions.
Such CRNs are simpler to reason about:
each reaction brings it “closer” to the answer.
They also lead to a simpler definition of stable computation than is typically employed: an execution bounded CRN stably computes a predicate/function if it gets the correct answer after sufficiently many reactions.
To study this topic, we limit the classical, discrete CRN model to networks that must eventually reach a configuration where no further reactions can occur, regardless of the sequence of reactions executed. By guaranteeing a finite endpoint for CRN computations and later integrating the concept of decreasing potential, we aim to align our models more closely with their implementations in the physical world.
This restriction is nontrivial because the techniques in <cit.> and <cit.> rely on reversible reactions catalyzed by species we expect to be depleted once a computational step has terminated. This trick seems to add computational power to our system by undoing certain reactions as long as a specific species is present. Consider the following CRN computing f(x_1, x_2, x_3) = min(x_1-x_2, x_3). The input values x_i are given as counts of copies of X_i, and the count of Z molecules in the stable output:
X_1 → Y
X_2+Y →∅
Y+X_3 → Z
Z+X_2 → X_2+X_3+Y
Reactions (<ref>) and (<ref>) compute x_1 - x_2, storing the result in the count of Y. Next, reaction (<ref>) can be applied exactly min(y, x_3) times. But since the order of reactions is a stochastic process, we might consume copies of Y in (<ref>), before all of x_2 is subtracted from it. Therefore, we add reaction (<ref>), using X_2 as a catalyst to undo reaction (<ref>) as long as copies of X_2 are present, indicating that the first step of computation has not terminated. A similar technique is used in <cit.>, where semilinear sets are understood as a finite union of linear sets, shown to be computable in parallel by CRNs. A reversible, catalyzed reaction finally converts the output of one of the CRNs to the global output. Among other questions, we explore how the constructions of <cit.> and <cit.> can be modified to provide equal computational power while guaranteeing bounded execution.
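The behavior of this small network can be checked with a toy simulation under a uniform-random scheduler (ours, and not the exact stochastic kinetics defined later); although the network is not execution bounded — reactions (<ref>) and (<ref>) can cycle forever — a random scheduler still halts with probability 1 on this example.

import random

RXNS = [({"X1": 1}, {"Y": 1}),                            # X1 -> Y
        ({"X2": 1, "Y": 1}, {}),                          # X2 + Y -> nothing
        ({"Y": 1, "X3": 1}, {"Z": 1}),                    # Y + X3 -> Z
        ({"Z": 1, "X2": 1}, {"X2": 1, "X3": 1, "Y": 1})]  # undo while X2 remains

def run(c, rng=random.Random(1)):
    # Apply uniformly chosen applicable reactions until none is applicable.
    while True:
        appl = [(r, p) for r, p in RXNS
                if all(c.get(s, 0) >= n for s, n in r.items())]
        if not appl:
            return c
        r, p = rng.choice(appl)
        for s, n in r.items():
            c[s] -= n
        for s, n in p.items():
            c[s] = c.get(s, 0) + n

print(run({"X1": 7, "X2": 2, "X3": 4})["Z"])  # min(7 - 2, 4) = 4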
<ref> defines execution boundedness (<Ref>). Furthermore, we introduce alternative characterizations of the class for use in later proofs, such as the lack of self-covering execution paths.
<ref> and <ref> contain the main positive results of the paper and provide the concrete constructions used to decide semilinear sets and functions using execution bounded CRNs whose initial configurations contain a single leader.
<ref> discusses the limitations of execution bounded CRNs, introducing the concept of a “linear potential function” as a core characterization of these systems.
We demonstrate that entirely execution bounded CRNs that are leaderless and non-collapsing (such as all population protocols)
can only stably decide trivial semilinear predicates: the eventually constant predicates
(<Ref>).
§ PRELIMINARIES
We use established notation from <cit.> and stable computation definitions from <cit.> for (discrete) chemical reaction networks.
§.§ Notation
Let ℕ denote the nonnegative integers. For any finite set Λ, we write ℕ^Λ to mean the set of functions f: Λ→ℕ. Equivalently, ℕ^Λ can be interpreted as the set of vectors indexed by the elements of Λ, and so c⃗∈ℕ^Λ specifies nonnegative integer counts for all elements of Λ.
c⃗(i) denotes the i-th coordinate of c⃗, and if c⃗ is indexed by elements of Λ, then c⃗(Y) denotes the count of species Y ∈Λ.
For two vectors x⃗,y⃗∈ℕ^k,
we write
x⃗≧y⃗ to denote that x⃗(i) ≥y⃗(i) for all 1 ≤ i ≤ k,
x⃗≥y⃗ to denote that x⃗≧y⃗ but x⃗≠y⃗,
and x⃗>y⃗ to denote that x⃗(i) > y⃗(i) for all 1 ≤ i ≤ k.
In the case that y⃗ = 0⃗,
we say that x⃗ is
nonnegative,
semipositive,
and positive,
respectively.
Similarly define ≦,≤,<.
For a matrix or vector x⃗,
define ‖x⃗‖ = ‖x⃗‖_1 = ∑_i |x⃗(i)|,
where i ranges over all the entries of x⃗.
§.§ Chemical Reaction Networks
A chemical reaction network (CRN) is a pair 𝒞=(Λ, R), where Λ is a finite set of chemical species, and R is a finite set of reactions over Λ, where each reaction is a pair (r⃗,p⃗) ∈ℕ^Λ×ℕ^Λ indicating the reactants r⃗ and products p⃗.
A population protocol <cit.> is a CRN in which all reactions (r⃗,p⃗) obey ‖r⃗‖ = ‖p⃗‖ = 2.
We write reactions such as A+2B → A+3C to represent the reaction ({A,2B}, {A,3C}).
A configuration c⃗∈ℕ^Λ of a CRN assigns integer counts to every species S ∈Λ.
When convenient, we use the notation {n_1 S_1, n_2 S_2, …, n_k S_k} to describe a configuration c⃗ with n_i ∈ℕ copies of species S_i, i.e., c⃗(S_i) = n_i,
and any species that is not listed is assumed to have a zero count.
If some configuration c⃗ is understood from context, for a species S, we write # S to denote c⃗(S).
A reaction (r⃗,p⃗) is said to be applicable in configuration c⃗ if r⃗≦c⃗.
If the reaction (r⃗,p⃗) is applicable,
applying it results in configuration c⃗' = c⃗ - r⃗ + p⃗, and we write c⃗→c⃗'.
An execution is a finite or infinite sequence of one or more configurations P=(c⃗_0, c⃗_1, c⃗_2, …) such that, for all i ∈{1, …,|P|-1}, c⃗_i-1→c⃗_i and c⃗_i-1≠c⃗_i.
x⃗⇒_P y⃗ denotes that P is finite, starts at x⃗, and ends at y⃗.
In this case we say y⃗ is reachable from x⃗, written x⃗⇒y⃗.
Let 𝗋𝖾𝖺𝖼𝗁(x⃗) = {y⃗|x⃗⇒y⃗}.
Note that the reachability relation is additive:
if x⃗⇒y⃗, then for all c⃗∈ℕ^Λ,
x⃗+c⃗⇒y⃗+c⃗.
For a CRN 𝒞=(Λ,R) where |Λ|=n and |R|=m,
define the n × m stoichiometric matrix M of 𝒞 as follows.
The species are ordered S_1,…,S_n, and the reactions are ordered (r⃗_1,p⃗_1),…,(r⃗_m,p⃗_m),
and M_ij = p⃗_j(S_i) - r⃗_j(S_i).
In other words, M_ij is the net amount of S_i produced when executing the j'th reaction.
For instance, if the CRN has two reactions S_1 → S_2 + 2S_3 and 3S_2 + S_3 → S_1 + S_2 + S_3,
then
M = [ -1 1; 1 -2; 2 0 ].
Let u⃗∈ℕ^R.
Then the vector Mu⃗∈ℤ^Λ represents the change in species counts that results from applying reactions by amounts described in u⃗.
In the above example, if u⃗ = (2,1),
then Mu⃗ = (-1,0,4),
meaning that executing the first reaction twice (u⃗_1=2)
and the second reaction once (u⃗_2=1) causes S_1 to decrease by 1, S_2 to stay the same, and S_3 to increase by 4.
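The example can be verified directly; the snippet below (ours) builds M and applies u⃗ = (2,1).

import numpy as np

# Columns = reactions S1 -> S2 + 2 S3 and 3 S2 + S3 -> S1 + S2 + S3;
# rows = species S1, S2, S3. M @ u gives the net change from applying u.
M = np.array([[-1, 1],
              [1, -2],
              [2, 0]])
u = np.array([2, 1])  # first reaction twice, second reaction once
print(M @ u)          # [-1  0  4]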
§.§ Stable computation with CRNs
To capture the result of computations done by a CRN, we generalize the definitions to include information about how to interpret the final configuration after letting the CRN run until the result cannot change anymore (characterized below as stable computation). Computation primarily involves two classes of functions: 1. evaluating predicates to determine properties of the input (akin to deciding a set defined by these properties), and 2. executing general functions that map an input configuration to an output, denoted as f: ℕ^k →ℕ.
A chemical reaction decider (CRD) is a tuple 𝒟=(Λ, R, Σ, Υ_Y,Υ_N, s⃗), where (Λ, R) is a CRN, Σ⊆Λ is the set of input species, Υ_Y⊆Λ is the set of yes voters, Υ_N⊆Λ is the set of no voters, and s⃗∈ℕ^Λ∖Σ is the initial context.
If Υ_Y∪Υ_N = Λ, we say the CRD is all-voting.
We define a global output partial function Φ: ℕ^Λ⟶{0,1} as follows. Φ(c⃗) is undefined if either c⃗=0⃗,
or if there exist S_N∈Υ_N and S_Y∈Υ_Y such that c⃗(S_N)>0 and c⃗(S_Y)>0; otherwise, Φ(c⃗)=1 if some yes voter is present and Φ(c⃗)=0 if some no voter is present.
In other words, we require a unanimous vote as our output.
We say c⃗ is stable if, for all c⃗' such that c⃗⇒c⃗',
Φ(c⃗) = Φ(c⃗').
We say a CRD 𝒟 stably decides the predicate ψ: ℕ^Σ→{0,1} if,
for any valid initial configuration i⃗∈ℕ^Λ with i⃗↾Σ=x⃗_0,
for all configurations c⃗∈ℕ^Λ, i⃗⇒c⃗ implies c⃗⇒c⃗' such that c⃗' is stable and Φ(c⃗')=ψ(x⃗_0).
We associate to a predicate ψ the set A = ψ^-1(1) of inputs on which ψ outputs 1,
so we can equivalently say the CRD stably decides the set A.
A chemical reaction computer (CRC) is a tuple 𝒞=(Λ, R, Σ, Y, s⃗), where (Λ, R) is a CRN,
Σ⊂Λ is the set of input species,
Y ∈Λ∖Σ is the output species, and s⃗∈ℕ^Λ∖Σ is the initial context.
A configuration o⃗∈ℕ^Λ is stable if, for every c⃗ such that o⃗⇒c⃗, o⃗(Y) = c⃗(Y), i.e., the output can never change again.
We say that 𝒞 stably computes a function f: ℕ^k →ℕ if for any valid initial configuration i⃗ and any c⃗∈ℕ^Λ, i⃗⇒c⃗ implies
c⃗⇒o⃗ such that o⃗ is stable and f(i⃗↾Σ)=o⃗(Y), where i⃗↾Σ denotes the restriction of i⃗ to Σ.
For a CRD or CRC with initial context s⃗ and input species Σ, we say i⃗ is a valid initial configuration if i⃗ = x⃗ + s⃗, where x⃗(S) = 0 for all S ∈Λ∖Σ;
i.e., i⃗ is the initial context plus only input species.
§.§ Time model
The following model of stochastic chemical kinetics is widely used in quantitative biology and other fields dealing with chemical reactions between species present in small counts <cit.>.
It ascribes probabilities to execution sequences, and also defines the time of reactions, allowing us to study the computational complexity of the CRN computation in <Ref>.
If the volume is defined to be n, the total number of molecules,
then the time model is essentially equivalent to the notion of parallel time studied in population protocols <cit.>.
In this paper, the rate constants of all reactions are 1, and we define the kinetic model with this assumption.
A reaction is unimolecular if it has one reactant and bimolecular if it has two reactants.
We use no higher-order reactions in this paper.
The kinetics of a CRN is described by a continuous-time Markov process as follows.
Given a fixed volume v > 0,
the propensity of a unimolecular reaction α : X →… in configuration c⃗ is ρ(c⃗, α) = c⃗(X).
The propensity of a bimolecular reaction α : X + Y →…, where X ≠ Y, is ρ(c⃗, α) = c⃗(X) c⃗(Y)/v.
The propensity of a bimolecular reaction α : X + X →… is ρ(c⃗, α) = 1/2c⃗(X) (c⃗(X) - 1)/v.
The propensity function determines the evolution of the system as follows.
The time until the next reaction occurs is an exponential random variable with rate ρ(c⃗) = ∑_α∈ Rρ(c⃗,α) (note that ρ(c⃗)=0 if no reactions are applicable to c⃗).
The probability that the next reaction will be a particular α_next is ρ(c⃗,α_next)/ρ(c⃗).
The kinetic model is based on the physical assumption of well-mixedness that is valid in a dilute solution.
Thus, we assume the finite density constraint, which stipulates that a volume required to execute a CRN must be proportional to the maximum molecular count obtained during execution <cit.>.
In other words, the total concentration (molecular count per volume) is bounded.
This realistically constrains the speed of the computation achievable by CRNs.
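A minimal sketch of this kinetic model for the uni- and bimolecular reactions used here (unit rate constants; all names are ours) is:

import numpy as np

def propensity(c, reactants, v):
    # c: dict species -> count; reactants: dict species -> stoichiometry.
    (sp, n), *rest = reactants.items()
    if not rest:            # unimolecular X -> ..., or bimolecular X + X -> ...
        cnt = c.get(sp, 0)
        return cnt if n == 1 else cnt * (cnt - 1) / (2 * v)
    (sp2, _), = rest        # bimolecular X + Y -> ... with X != Y
    return c.get(sp, 0) * c.get(sp2, 0) / v

def gillespie_step(c, rxns, v, rng=np.random.default_rng(0)):
    # Sample (waiting time, reaction index) for the next event, or None
    # if no reaction is applicable (all propensities zero).
    rho = np.array([propensity(c, r, v) for r, _ in rxns])
    if rho.sum() == 0:
        return None
    return rng.exponential(1 / rho.sum()), rng.choice(len(rxns), p=rho / rho.sum())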
For a CRD or CRC stably computing a predicate/function,
the stabilization time is the function t:ℕ→ℝ_≥0 defined for all n ∈ℕ as t(n) = the worst-case expected time to reach from any valid initial configuration of size n to a stable configuration.
§.§ Semilinear sets, predicates, functions
A set L ⊆ℕ^d is linear if there are vectors b⃗,u⃗_1,…,u⃗_k such that
L = {b⃗ + n_1 u⃗_1 + … + n_k u⃗_k | n_1,…,n_k ∈ℕ}.
A set is semilinear if it is a finite union of linear sets.
A predicate ϕ:ℕ^d →{0,1} is semilinear if the set ϕ^-1(1) is semilinear.
A function f: ℕ^d →ℕ is semilinear if its graph
{ (x⃗,y) ∈ℕ^d+1| f(x⃗) = y }
is semilinear.
The following is a famous characterization of the computational power of CRNs <cit.>.
A predicate/function is stably computable by a CRD/CRC if and only if it is semilinear.
T ⊆ℕ^d is a threshold set if there are constants c,w_1,…,w_d ∈ℤ such that
T = {x⃗∈ℕ^d | w_1 x⃗(1) + … + w_d x⃗(d) ≤ c }.
M ⊆ℕ^d is a mod set if there are constants c,m,w_1,…,w_d ∈ℕ such that
M = {x⃗∈ℕ^d | w_1 x⃗(1) + … + w_d x⃗(d) ≡ c (mod m) }.
The following well-known characterization of semilinear sets is useful.
A set is semilinear if and only if it is a Boolean combination (union, intersection, complement) of threshold and mod sets.
§ EXECUTION BOUNDED CHEMICAL REACTION NETWORKS
In this section, we define execution bounded CRNs and state a few alternate characterizations of the definition.
A CRN 𝒞 is execution bounded from configuration x⃗ if all executions P = (x⃗, … ) starting at x⃗ are finite.
A CRD or CRC is execution bounded if it is execution bounded from every valid initial configuration.
𝒞 is entirely execution bounded if it is execution bounded from every configuration.
This is a distinct concept from the notion of “bounded” CRNs studied by Rackoff <cit.> (studied under the equivalent formalism of vector addition systems).
That paper defines a CRN to be bounded from a configuration x⃗ if |𝗋𝖾𝖺𝖼𝗁(x⃗)| is finite (and shows that the decision problem of determining whether this is true is 𝖤𝖷𝖯𝖲𝖯𝖠𝖢𝖤-complete).
We use the term execution bounded to avoid confusion with this concept.
We first observe that being execution bounded from x⃗ implies a slightly stronger condition:
there is a uniform upper bound on the length of all executions from x⃗.[
In other words, this rules out the possibility that, although all executions from x⃗ are finite, there are infinitely many of them P_1,P_2,…, each longer than the previous.
]
A CRN is execution bounded from x⃗ if and only if there is a constant N ∈ℕ such that all executions from x⃗ have length at most N.
Equivalently, there are finitely many executions from x⃗.
We use Kőnig's lemma to show that in the absence of an infinite path, the number of all possible paths must be finite, which directly implies a global bound on the length of all executions. We represent the set of all executions of 𝒞 as a tree where each edge represents a single reaction applied and each node stores the complete execution sequence starting from configuration x⃗. Note that this construction is slightly different from a more straightforward graph with the reachable states as nodes, which would not give us a tree, since the same state can be reached by different executions. Formally, we generate the tree as follows: T_𝒞(x⃗) = (V, E), where V ≜{P ∈(ℕ^Λ)^* | P is a valid execution sequence starting from x⃗} and E ≜{ (P_1, P_2) | P_1 ≼ P_2 and |P_2| = |P_1| + 1}.
In other words, all the executions from x⃗ of length d are the nodes at depth d of this tree.
One can think of the nodes as being labeled by configurations rather than executions (specifically the final configuration of the execution, with the tree rooted at x⃗), but the same configuration can label multiple nodes if it can be reached from x⃗ via different executions.
In this case the children of a configuration are those that are reachable from it by applying a single reaction.
This tree is finitely branching, as we can only choose from a finite number of reactions at any node. By definition of execution bounded, there is no execution sequence of infinite length.
Due to the bijection between paths in T_𝒞(x⃗) and executions possible in 𝒞, there is no infinite path in the tree.
By Kőnig's Lemma, the tree has a finite number of nodes, guaranteeing a single bound N (the depth of the tree) on the length of every execution.
The next lemma characterizes execution boundedness as equivalent to having a finite reachable state space with no cycles.
A CRN is execution bounded from x⃗ if and only if 𝗋𝖾𝖺𝖼𝗁(x⃗) = {y⃗|x⃗⇒y⃗} is finite and, for all y⃗∈𝗋𝖾𝖺𝖼𝗁(x⃗), y⃗⇒y⃗ only by the zero-length execution.
Every configuration reachable from x⃗ is reached through some execution contained in T_𝒞(x⃗) as a node, and there exist only finitely many of them (<Ref>). Multiple unique executions can produce the same configuration, but one execution cannot produce multiple configurations.
Thus, there exists a surjection from the nodes of T_𝒞(x⃗) onto 𝗋𝖾𝖺𝖼𝗁(x⃗), so 𝗋𝖾𝖺𝖼𝗁(x⃗) must also be finite. For the second part of the condition, we prove its contrapositive and assume there exists y⃗∈𝗋𝖾𝖺𝖼𝗁(x⃗) with y⃗⇒_P y⃗ and |P| > 0. Let P = (p⃗_1, p⃗_2, …, p⃗_n). It holds that p⃗_1 = p⃗_n and p⃗_n-1→p⃗_1. We can construct an infinite-length execution P' = (x⃗, …, p⃗_1,…, p⃗_n-1, p⃗_1, …), which must also be valid under the reactions of 𝒞, making 𝒞 execution unbounded from x⃗.
If 𝗋𝖾𝖺𝖼𝗁(x⃗) is finite and contains no such y⃗, then we can construct a finite, directed, acyclic graph G_𝒞(x⃗) = (V, E) where V=𝗋𝖾𝖺𝖼𝗁(x⃗) and E={(x⃗, y⃗) |x⃗⇒_P y⃗ with |P| > 0}. The longest path in the graph has length at most |𝗋𝖾𝖺𝖼𝗁(x⃗)|-1. A bijection exists between paths in G_𝒞(x⃗) and executions possible in 𝒞 starting from x⃗. We set n = |𝗋𝖾𝖺𝖼𝗁(x⃗)|, satisfying that each execution has length at most n, making 𝒞 execution bounded.
The following result is used frequently in impossibility proofs for CRNs and population protocols,
and it will help us prove another characterization of execution bounded CRNs in <Ref>.
(Dickson's Lemma)
For every infinite sequence of nonnegative integer vectors x⃗_1,x⃗_2,…∈ℕ^k,
there are i < j such that
x⃗_i ≦x⃗_j.
We first observe an equivalent characterization of execution bounded that will be useful in the negative results of <Ref>.
An execution P = (c⃗_1,c⃗_2,…) is self-covering if for some i < j,
c⃗_i ≦c⃗_j.
It is strictly self-covering if c⃗_i ≤c⃗_j.
We also refer to these as (strict) self-covering paths.[
Rackoff <cit.> uses the term “self-covering” to mean what we call strictly self-covering here,
and points out that Karp and Miller <cit.> showed that |𝗋𝖾𝖺𝖼𝗁(x⃗)| is infinite if and only if there is a strictly self-covering path from x⃗.
The distinction between these concepts is illustrated by the CRN A ⇌ B.
From any configuration c⃗, 𝗋𝖾𝖺𝖼𝗁(c⃗) is finite
(|𝗋𝖾𝖺𝖼𝗁(c⃗)| = c⃗(A) + c⃗(B) + 1), and there is no strictly self-covering path.
However, from (say) {A},
there is a (nonstrict) self-covering path {A}→{B}→{A}, and by repeating, this CRN has an infinite cycling execution within its finite configuration space 𝗋𝖾𝖺𝖼𝗁({A}) = {{A},{B}}.
]
A CRN is execution bounded from x⃗ if and only if there is no self-covering path from x⃗.
For the forward direction, assume there is a self-covering path from x⃗, which reaches c⃗_i and later c⃗_j ≧c⃗_i.
Then the reactions going from c⃗_i to c⃗_j can be repeated indefinitely (in a cycle if c⃗_i=c⃗_j, and increasing some molecular counts unboundedly if c⃗_i ≤c⃗_j),
so the CRN is not execution bounded from x⃗.
For the reverse direction,
assume the CRN is not execution bounded from x⃗.
Then there is an infinite execution P = (x⃗=c⃗_1,c⃗_2,c⃗_3,…).
By Dickson's Lemma there are i<j such that c⃗_i ≦c⃗_j,
i.e., P is self-covering.
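This characterization yields a simple, if inefficient, decision procedure: search the execution tree and prune wherever the current configuration covers an earlier one. Since every infinite execution has a self-covering prefix (Dickson's Lemma), the pruned tree has no infinite path and is therefore finite by Kőnig's Lemma, so the sketch below (our encoding) always terminates.

def execution_bounded(rxns, x0):
    # rxns: list of (reactants, products) tuples over a fixed species order;
    # x0: initial configuration as a tuple of counts.
    def dfs(path):
        cur = path[-1]
        for r, p in rxns:
            if all(c >= n for c, n in zip(cur, r)):
                nxt = tuple(c - n + m for c, n, m in zip(cur, r, p))
                # self-covering: some earlier configuration is <= nxt
                if any(all(a <= b for a, b in zip(anc, nxt)) for anc in path):
                    return False
                if not dfs(path + [nxt]):
                    return False
        return True
    return dfs([x0])

# A <-> B is not execution bounded: {A} -> {B} -> {A} covers itself.
print(execution_bounded([((1, 0), (0, 1)), ((0, 1), (1, 0))], (1, 0)))  # False
# A -> B is execution bounded from {2A}.
print(execution_bounded([((1, 0), (0, 1))], (2, 0)))                    # True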
§ EXECUTION BOUNDED CRDS STABLY DECIDE ALL SEMILINEAR SETS
In this section, we will show the computational equivalence between execution bounded and execution unbounded CRNs by a construction.
The following is the main result of this section.
Exactly the semilinear sets are stably decidable by execution bounded CRDs.
Furthermore, each can be stably decided with expected stabilization time Θ(n log n).
Since semilinear sets are Boolean combinations of mod and threshold predicates, we prove this theorem by showing that execution bounded CRDs can decide mod and threshold sets individually as well as any Boolean combination in the following lemmas. To ensure execution boundedness in the last step, we require the following property.
Let 𝒟 be a CRD with voting species Υ.
We say 𝒟 is single-voting if for any valid initial configuration i⃗ and any c⃗∈ℕ^Λ s.t. i⃗⇒c⃗,
∑_V ∈Υc⃗(V) = 1,
i.e., exactly one voter is present in every reachable configuration.
Every mod set M={(x_1, …, x_d) |∑_i=1^d w_i x_i ≡ c (mod m)} is stably decidable by an execution bounded, single-voting CRD with stabilization time Θ(nlog n).
We design a CRD with exactly one leader present at all times, cycling through m “states” while consuming the input and accepting on state c.
Let Σ = {X_1, …, X_d} be the set of input species and start with only one L_0 leader, i.e. set the initial context s⃗(L_0)=1 and s⃗(S)=0 for all other species. For each i ∈{1, …, d}, j ∈{0, … ,m-1} add the following reaction:
X_i + L_j → L_(j + w_i mod m).
Let only L_c vote yes and all other species no, i.e., Υ = {L_c}. For any valid initial configuration, 𝒟 reaches a stable configuration which votes yes if and only if the input is in the mod set, and no otherwise.
𝒟 terminates with the correct output value: At any point in time, there is a single leader L_j present (the initial configuration contains a single leader and each reaction produces and consumes one). Every reaction preserves the following invariant for the leader's subscript j: j ≡∑_i=1^d w_i (x_i - x_i') (mod m), where x_i' is the updated count of species X_i in the current configuration. By design of 𝒟, there will be a reaction applicable as long as there are copies of X_i (a leader with any subscript can react with any X_i). After applying this reaction as often as possible, we have reached a stable configuration with L_(∑_i=1^d w_i x_i mod m) as the only species present.
𝒟 is execution bounded: Every reaction reduces the count of molecules by one, so every possible execution contains at most ‖i⃗‖ configurations, where ‖i⃗‖ is the number of all molecules in the starting configuration.
𝒟 is single-voting: Initially, L_0 is present and the only voter. Every valid input contains no voter, and every reaction results in no change to the count of copies of L_i.
stabilizes in Θ(n log n) time: We start with #L = 1, #X = n in volume n. n reactions must occur before terminates. For the first reaction, we have a rate of λ=n · 1/n, for the last (with only the leader and one X present), our rate will be λ = 1 · 1/n. Thus, the expected time for all n reactions to complete is
∑_i=1^nn/i = n ∑_i=1^n1/i = Θ(n log n).
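A quick simulation of this construction (ours, with a uniformly random reaction order standing in for the stochastic scheduler) confirms the final leader subscript:

import random

def mod_crd(w, m, x, rng=random.Random(0)):
    # Single leader L_j; each step fires X_i + L_j -> L_(j + w_i mod m)
    # for a randomly chosen input species still present. Returns final j.
    j, x = 0, list(x)
    while any(x):
        i = rng.choice([k for k, cnt in enumerate(x) if cnt > 0])
        x[i] -= 1
        j = (j + w[i]) % m
    return j

# Decide x_1 + 2*x_2 ≡ 1 (mod 3) on input (4, 3): (4 + 6) mod 3 = 1 -> yes.
print(mod_crd([1, 2], 3, [4, 3]) == 1)  # True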
Every threshold set T={(x_1, …, x_d) |∑_i=1^d w_i x_i ≥ t} is stably decidable by an execution bounded, single-voting CRD with stabilization time Θ(n log n).
We design a CRD which multiplies the input molecules according to their weight and consumes positive and negative units alternatingly using a single leader. Once no more reaction is applicable, the leader's state will indicate whether or not there are positive units left and the threshold is met. Let Σ = {X_1, …, X_d} be the set of input species and Υ = {L_Y} the yes voter. We first add reactions to multiply the input species by their respective weights. For all i ∈{1, …, d}, add the reaction:
X_i → w_i P if w_i>0,
X_i → -w_i N if w_i<0,
X_i → ∅ otherwise.
P and N represent “positive” and “negative” units respectively. Now add reactions to consume P and N alternatingly using a leader until we run out of one species:
L_Y + N → L_N
L_N + P → L_Y
Finally, initialize the CRD with one L_Y and the threshold number t copies of P (or -tN if t is negative), i.e. s⃗(L_Y)=1, s⃗(P)=t if t>0, or s⃗(N)=-t if t<0, and s⃗(S)=0 for all other species. For any valid initial configuration, reaches a stable configuration which votes yes if and only if the weighted sum of inputs is above the threshold, and no otherwise.
𝒟 is single-voting since it starts with a single leader and no reaction changes the total count of L_Y and L_N molecules.
𝒟 stabilizes in Θ(n log n) time: First, all input species will be converted to w_i instances of P or N. We run these reactions until no X_i remain. As they are independent of molecules other than the reactant, these reactions have a rate of λ = i when i input molecules remain, so the expected time until the next reaction is 1/i. The total time for reactions (<ref>) to complete is therefore ∑_i=1^n 1/i = Θ(log n). The time for reactions (<ref>) and (<ref>), on the other hand, is asymptotically dominated by the last reaction, where #L=1 and #B=1 with B ∈{P, N}, so λ = 1·1/n. Let n_P, n_N be the counts of P, N and assume without loss of generality n_P ≥ n_N. We get:
∑_i=0^n_N-1(n/(n_P-i) + n/(n_N-i)) ≤∑_i=0^n_N-1 2n/(n_N-i) =
2n ∑_i=1^n_N 1/i =
Θ(n log n).
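The alternating-consumption phase can be traced deterministically; the sketch below (ours) seeds the initial context as fixed above and reports the final vote.

def threshold_crd(w, t, x):
    # Units after reaction (<ref>): positive P and negative N, plus the seeded
    # threshold units; the leader alternates L_Y + N -> L_N, L_N + P -> L_Y.
    P = sum(wi * xi for wi, xi in zip(w, x) if wi > 0) + max(-t, 0)
    N = sum(-wi * xi for wi, xi in zip(w, x) if wi < 0) + max(t, 0)
    vote = "Y"
    while (vote == "Y" and N > 0) or (vote == "N" and P > 0):
        if vote == "Y":
            N, vote = N - 1, "N"
        else:
            P, vote = P - 1, "Y"
    return vote

print(threshold_crd([1, -2], 3, [10, 2]))  # 10 - 4 >= 3 -> 'Y'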
If sets X_1, X_2 ⊆^d are stably decided by some execution bounded, single-voting CRD, then so are X_1 ∪ X_2, X_1 ∩ X_2, and X_1 with stabilization time O(n log n).
To stably decide X_1, swap the yes and no voters.
For ∪ and ∩, consider a construction where we decide both sets separately and record both of their votes in a new voter species. For this, we allow the set of all voters to be a strict subset of all species. We first add reactions to duplicate our input with reactions of the form
X_i → X_i,1+X_i,2
by two separate CRDs. Subsequently, we add reactions to record the separate votes in one of four new voter species: V_N N, V_N Y, V_Y N, V_Y Y. The first and second CRN determine the first and second subscript respectively. For b ∈{Y, N} and if S_b, T_b are leaders of _1 and _2 respectively, add the reactions:
S_b + V_{b̄ ?} → S_b + V_{b ?}
T_b + V_{? b̄} → T_b + V_{? b}
for each ? ∈ {Y, N}, where b̄ denotes the vote opposite to b.
E.g. if N_1 is the no voter of the first CRD, we would add N_1 + V_YN → N_1 + V_NN and N_1 + V_YY → N_1 + V_NY. We let the yes voters be Υ = {V_NY, V_YN, V_YY} to stably decide X_1 ∪ X_2, or Υ = {V_YY} to stably decide X_1 ∩ X_2.
Reaction (<ref>) will complete in O(log n) time and is clearly execution bounded since the input X_i is finite and not produced in any reaction. Consequently, two separate CRNs run in Θ(n log n) time as shown in <ref> and <ref>. After stabilization of the parallel CRNs, we expect reaction (<ref>) and (<ref>) to happen exactly once. Each molecule involved is a leader and has count 1 in volume n. This leads to a rate of λ=1·1/n, so the expected time for one reaction to happen is O(n). It is important to note that reactions (<ref>) and (<ref>) do not result in unbounded executions due to the unanimous vote in parallel CRDs. In both mod sets and threshold sets, the leader changes its vote a maximum of |i⃗| times, with only ever one leader present at any time. Again, we start with only one V_bb voter present initially and no reaction changes the count of voters, making our construction single-voting.
Since semilinear predicates are exactly Boolean combinations of threshold and mod predicates,
<Ref> imply the following.
Every semilinear set is stably decidable by an execution bounded, single-voting CRD, with stabilization time O(n log n).
We can also prove the same result for all-voting CRDs.
Note, however, that such CRDs cannot be “composed” using the constructions of <Ref>,
which crucially relied on the assumption that the CRDs being used as “subroutines” are single-voting.
Every semilinear set is stably decidable by an execution bounded, all-voting CRD, with stabilization time O(n log n).
By <Ref>,
every semilinear set is stably decided by a single-voting CRD.
We convert this to an all-voting CRD, where every species is required to vote yes or no, by “propagating” the final vote (recorded in the single voter V^0 voting no or V^1 voting yes) back to all other molecules.
A superscript indicates the “global” decision.
The execution boundedness proven in <ref> ensures that the leader propagates the final vote only a finite amount of times.
For each vote b ∈{0,1} and each voter V^b voting b,
and all other species S ∈Λ\{V}, replace species S with two versions S^0 and S^1, and add reactions:
V^b + S^b̄ → V^b + S^b
The original reactions of the CRD must also be replaced with “functionally identical” reactions for the new versions of species. For example, the reaction A+B → C+D becomes
A^0+B^0 → C^0+D^0
A^0+B^1 → C^0+D^0
A^1+B^0 → C^0+D^0
A^1+B^1 → C^1+D^1
In the middle two cases we can pick the superscripts of the products arbitrarily,
whereas in the first and last case, we must choose the product votes to match those of the reactants to ensure stable states remain stable.
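This transformation is mechanical; the following Python sketch (ours, with the vote superscript encoded as a name suffix) generates the four versions of a bimolecular reaction:

def all_voting_versions(reaction):
    """Expand a bimolecular reaction A+B -> C+D into its four vote-annotated
    versions; products inherit the vote only when the reactants' votes agree."""
    (A, B), (C, D) = reaction
    out = []
    for a in (0, 1):
        for b in (0, 1):
            v = 1 if a == b == 1 else 0   # matching votes propagate; mixed votes default to 0
            out.append(((f"{A}^{a}", f"{B}^{b}"), (f"{C}^{v}", f"{D}^{v}")))
    return out

for r in all_voting_versions((("A", "B"), ("C", "D"))):
    print(r)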
A vote change of the V^b leader leads to the propagation of the vote to at most n molecules using reaction (<ref>).
This reaction dominates the runtime, as a single molecule is required to interact with each other molecule. We cannot speed this process up using an epidemic style process as conflicting votes would make the CRN execution unbounded.
The original CRD takes time O(n log n) to converge on a correct output for the single voter V^b.
At that point, a standard coupon collector argument shows that the voter V^b takes expected time O(n log n) to correct the votes of all other species via reaction (<ref>).
§ EXECUTION BOUNDED CRCS STABLY COMPUTE ALL SEMILINEAR FUNCTIONS
In this section we shift focus from computing Boolean-valued predicates ϕ: ℕ^d → {0,1} to integer-valued functions f: ℕ^d → ℕ,
showing that execution bounded CRCs can stably compute the same class of functions (semilinear) as unrestricted CRCs.
Similar to <cit.>, we compute semilinear functions by decomposing them into “affine pieces”, which we will show can be computed by execution bounded CRNs and combined by using semilinear predicates to decide which linear function to apply for a given input.[
While this proof generalizes to multivariate output functions as in <cit.>, to simplify notation we focus on single output functions.
Multi-valued functions f: ℕ^d → ℕ^l can be equivalently thought of as l separate single-output functions f_i: ℕ^d → ℕ,
which can be computed in parallel by independent CRCs.
]
We say a partial function f: ℕ^k ⇀ ℕ is affine if there exist vectors a⃗ ∈ ℚ^k and c⃗ ∈ ℕ^k with x⃗ − c⃗ ≥ 0⃗ for all x⃗ ∈ dom f, and a nonnegative integer b ∈ ℕ, such that
f(x⃗) = a⃗^⊤(x⃗ − c⃗) + b.
This definition of affine function may appear contrived, but the main utility of the definition is that it satisfies <ref>.
For convenience, we can ensure to work only with integer-valued molecule counts by multiplying by 1/d after the dot product, where d may be taken to be the least common multiple of the denominators of the rational coefficients in the original definition, so that each n_i = d · a⃗(i) is an integer:
f(x⃗) = b + ∑_i=1^k a_i (x_i − c_i) = b + 1/d ∑_i=1^k n_i (x_i − c_i).
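As a quick sanity check of this identity, the following Python snippet (our own, with hypothetical argument names) evaluates an affine partial function via the cleared-denominator form, computing d as the least common multiple of the coefficient denominators:

from functools import reduce
from math import gcd

def affine_eval(x, a_num, a_den, c, b):
    """Evaluate f(x) = b + sum_i (a_num[i]/a_den[i]) * (x[i] - c[i]) using the
    cleared-denominator form b + (1/d) * sum_i n_i * (x[i] - c[i])."""
    d = reduce(lambda p, q: p * q // gcd(p, q), a_den, 1)   # lcm of denominators
    n = [num * (d // den) for num, den in zip(a_num, a_den)]
    s = sum(ni * (xi - ci) for ni, xi, ci in zip(n, x, c))
    assert s % d == 0, "f(x) is integer-valued on its domain"
    return b + s // d

# f(x1, x2) = 1 + (1/2)(x1 - 1) - (1/3)(x2 - 0); f(5, 3) = 1 + 2 - 1 = 2
print(affine_eval([5, 3], [1, -1], [2, 3], [1, 0], 1))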
We say that a partial function f̂: ℕ^k ⇀ ℕ^2 is a diff-representation of f if dom f = dom f̂ and, for all x⃗ ∈ dom f, if (y_P, y_C) = f̂(x⃗), then f(x⃗) = y_P − y_C, and y_P = O(f(x⃗)). In other words, f̂ represents f as the difference of its two outputs y_P and y_C, with the larger output y_P possibly being larger than the original function's output, but at most a multiplicative constant larger <cit.>.
Let f: ℕ^k ⇀ ℕ be an affine partial function. Then there is a diff-representation f̂: ℕ^k ⇀ ℕ^2 of f and an execution bounded CRC that monotonically stably computes f̂ in expected time O(n).
Define a CRC C with input species Σ={X_1, …, X_k} and output species Γ={Y^P, Y^C}.
We need to ensure that after stabilizing, the output satisfies y = #Y^P − #Y^C.
To account for the offset b, start with b copies of Y^P.
For the c_i offset, we must reduce the number of X_i by c_i. Since the result will be used in the next reaction, we want to produce a new species X_i' and require X_i' to not be consumed during the computation. We achieve this by adding reactions that let X_i consume itself c_i times (keeping track with a subscript) and converting X_i to X_i' once c_i has been reached.
For each i ∈{1, …, k} and m, p ∈{1, …, c_i}, if m+p ≤ c_i, add the reaction
X_i, m+X_i, p→ X_i, m+p
If m+p>c_i, add the reaction
X_i, m+X_i, p→ X_i, c_i+(m+p-c_i) X_i^'
Runtime: In volume n, the rate of reactions (<ref>) and (<ref>) would be λ ≈ (#X_i)^2/n (each of the #X_i molecules has the chance to react with any of the #X_i − 1 others), so the expected time for the next reaction is n/(#X_i)^2. The expected time for the whole process is ∑_{j=1}^{x_i} n/j^2 = n ∑_{j=1}^{x_i} 1/j^2 = O(n). Further, the reactions are execution bounded since both strictly decrease the number of their reactants, and exactly x_i − 1 reactions will happen.
To account for the n_i / d coefficient, we multiply by n_i, then divide by d using similar reactions as for the subtraction.
To multiply by n_i, add the following reaction for each i ∈{1, …, k}:
X_i' → n_i D_1^P if n_i > 0,
X_i' → (−n_i) D_1^C if n_i < 0.
For each m, p ∈{1, …, d-1}, if m+p ≤ d-1, add the reactions
D_m^P+D_p^P → D_m+p^P
D_m^C+D_p^C → D_m+p^C
If m+p > d−1, add the reactions
D_m^P + D_p^P → D_{m+p−d}^P + Y^P
D_m^C + D_p^C → D_{m+p−d}^C + Y^C
(with the convention that D_0^P and D_0^C denote no molecule, i.e., when m+p = d the only product is Y^P or Y^C).
Reactions (<ref>) complete in expected time O(log n), while (<ref>) and (<ref>) complete in O(n) by a similar analysis as for the first two reactions.
As for execution boundedness, (<ref>) is only applicable once for every X_i^'; all other reactions start with a number of reactants which are a constant factor of X_i' and decrease the count of their reactants by one in each reaction.
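Because the CRC is execution bounded and its stable output does not depend on reaction order, the whole pipeline can be summarized at the level of molecular counts. The following Python sketch (our own summary of the construction, with hypothetical names VP and VC for the total value carried by the positive and negative D units) returns the stable output pair (#Y^P, #Y^C):

def affine_crc_output(x, n, d, c, b):
    """Count-level run of the affine CRC; returns the stable (#Y^P, #Y^C)."""
    # Subtraction reactions: each X_i consumes itself c_i times, leaving x_i - c_i X_i'.
    xp = [xi - ci for xi, ci in zip(x, c)]
    assert all(v >= 0 for v in xp), "input outside the function's domain"
    # Multiplication reactions: X_i' -> n_i D_1^P (or -n_i D_1^C).
    VP = sum(ni * v for ni, v in zip(n, xp) if ni > 0)
    VC = sum(-ni * v for ni, v in zip(n, xp) if ni < 0)
    # Division reactions: every d units of D^P merge into one Y^P; same for D^C.
    return b + VP // d, VC // d

# f(x1, x2) = 1 + (1/2)(x1 - 1) - (1/3)x2, so d = 6 and n = (3, -2):
yP, yC = affine_crc_output([5, 3], [3, -2], 6, [1, 0], 1)
print(yP - yC)   # 2 = f(5, 3), with (yP, yC) a diff-representation of f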
We require the following result due to Chen, Doty, Soloveichik <cit.>, guaranteeing that any semilinear function can be built from affine partial functions.
Let f: ℕ^d → ℕ be a semilinear function. Then there is a finite set {f_1: ℕ^d ⇀ ℕ, …, f_m: ℕ^d ⇀ ℕ} of affine partial functions, where each dom f_i is a linear set, such that, for each x⃗ ∈ ℕ^d, if f_i(x⃗) is defined, then f(x⃗) = f_i(x⃗), and ⋃_{i=1}^m dom f_i = ℕ^d.
We strengthen <ref> to show we may assume each dom f_i is disjoint from the others.
This is needed not only to prove <Ref>,
but to correct the proof of Lemma 4.4 in <cit.>,
which implicitly assumed the domains are disjoint.
Let f: ℕ^d → ℕ be a semilinear function.
Then there is a finite set {f_1: ℕ^d ⇀ ℕ, …, f_m: ℕ^d ⇀ ℕ} of affine partial functions, where each dom f_i is a linear set, and dom f_i ∩ dom f_j = ∅ for all i ≠ j, such that, for each x⃗ ∈ ℕ^d, if f_i(x⃗) is defined, then f(x⃗) = f_i(x⃗), and ⋃_{i=1}^m dom f_i = ℕ^d.
By <cit.>, every semilinear set is a finite union of disjoint fundamental linear sets.
The author defines a linear set L = {b⃗ + n_1 u⃗_1 + … + n_p u⃗_p | n_1, …, n_p ∈ ℕ} as fundamental if u⃗_1, …, u⃗_p span a p-dimensional vector space, i.e., all the vectors are linearly independent.[
This distinction is significant because not all integer-valued linear sets can be represented using solely linearly independent vectors.
An illustrative example is b⃗=0⃗, u⃗_1=(1,1,1), u⃗_2=(2,0,1), u⃗_3=(0,2,1), as discussed in <cit.>.
The vectors u⃗_1, u⃗_2, u⃗_3 are not linearly independent, yet this set cannot be expressed with fewer than three basis vectors.
]
The proof of <Ref> in <cit.> shows that each linear set L_i comprising the semilinear graph of f corresponds to one partial affine function f_i.
The fact that <cit.> lets us assume each L_i is disjoint from the others immediately implies that each dom f_i is disjoint from the others.
The next theorem shows that semilinear functions can be computed by execution bounded CRCs in expected time Θ(n log n).
Let f: ℕ^d → ℕ be a semilinear function.
Then there is an execution bounded CRC that stably computes f with stabilization time O(n log n),
in expectation and with probability at least 1-n^-c.
We employ the same construction as <cit.> with minor alterations: a CRC with input species Σ = {X_1, …, X_d} and output species Γ = {Y}. By <Ref>, we decompose our semilinear function into partial affine functions (with linear, disjoint domains), which can be computed in parallel by <Ref>. Further, we decide which function to use by computing the predicate ϕ_i = “x⃗ ∈ dom f_i” (<Ref>). We interpret each Y̅_i^P and Y̅_i^C as an “inactive” version of the “active” output species Y_i^P and Y_i^C. Let L_i^Y, L_i^N be the yes and no voters respectively, voting on whether x⃗ lies in the domain of the i-th partial function. Now, we convert the function result of the applicable partial affine function to the global output by adding the following reactions for each i ∈ {1, …, m}.
L_i^Y + Y̅_i^P → L_i^Y + Y_i^P + Y
L_i^N + Y_i^P → L_i^N + M_i
M_i + Y → Y̅_i^P
Reaction (<ref>) produces an output copy of species Y and (<ref>) and (<ref>) reverse the first reaction using only bimolecular reactions. Both are catalyzed by the vote of the i-th predicate result. Also add reactions
L_i^Y + Y̅_i^C → L_i^Y + Y_i^C
L_i^N + Y_i^C → L_i^N + Y̅_i^C
and
Y_i^P+Y_i^C → K
K+Y →∅
Reactions (<ref>) and (<ref>) activate and deactivate the “negative” output values and reactions (<ref>) and (<ref>) allow two active partial outputs to cancel out and consume the excess Y in the process. When the input is in the domain of function i, exactly one copy of L_i^Y will be present, otherwise one copy of L_i^N. Since we know that the predicate computation is execution bounded and produces at most one voter, the catalytic reaction will also happen at most as often as the leader changes its vote. Therefore, it is also execution bounded.
§ LIMITATIONS OF EXECUTION BOUNDED CRNS
The main positive results of the paper (<Ref>)
rely on the assumption that valid initial configurations have a single leader (in particular, they are execution bounded only from configurations with a single leader, but not from arbitrary configurations).
<Ref> shows that we may assume the CRD deciding a semilinear set is all-voting.
However,
for the “constructive” results
<Ref>,
which compose the output of a CRD 𝒟 with downstream computation,
using 𝒟 as a “subroutine” to stably compute a more complex set/function,
the constructions crucially use the assumption that 𝒟 is single-voting
(i.e., only the leader of 𝒟 votes)
to argue the resulting composed CRN is execution bounded.
In this section we show these assumptions are necessary,
proving that execution bounded CRNs without those constraints are severely limited in their computational abilities.
We show that entirely execution bounded CRNs (from every configuration) can be characterized by a simpler property of having a “linear potential function” that essentially measures how close the CRN is to reaching a terminal configuration.
We then use this characterization to prove that entirely execution bounded CRNs can stably decide only limited semilinear predicates (eventually constant, <Ref>),
assuming all species vote, and that molecular counts cannot decrease to O(1) in stable configurations (see <Ref>).
§.§ Linear potential functions
We define a linear potential function of a CRN to be a nonnegative linear function of states that each reaction strictly decreases.
A linear potential function Φ: ℕ^Λ → ℝ_{≥0} for a CRN is a nonnegative, linear function of configurations, such that for each reaction (r⃗, p⃗), Φ(p⃗) − Φ(r⃗) < 0.
Note that since Φ(x⃗) = ∑_{S ∈ Λ} c_S x⃗(S) is required to be nonnegative on all configurations x⃗,
it must be nondecreasing in each species,
i.e., all coefficients c_S must be nonnegative
(though some are permitted to be 0).
Intuitively, we can think of Φ as assigning a nonnegative “mass” to each species (the mass of S is c_S), such that each reaction removes a positive amount of mass from the system.
A system of linear inequalities with rational coefficients has a real solution if and only if it has a rational solution.
For any homogeneous system (where all inequalities are comparing to 0), any positive scalar multiple of a solution is also a solution.
By clearing denominators, a system has a rational solution if and only if it has an integer solution.
Thus, one can equivalently define a linear potential function to be a function Φ(x⃗) = ∑_{S ∈ Λ} c_S x⃗(S)
such that each c_S ∈ ℕ, i.e., we may assume Φ: ℕ^Λ → ℕ.
In particular, since Φ is decreased by each reaction, it is decreased by at least 1.
A CRN may or may not have a linear potential function.
Although it is not straightforward to “syntactically check” a CRN to see whether it has a linear potential function,
it is efficiently decidable:
a CRN has a linear potential function if and only if the following system of linear inequalities has a solution
(which can be solved in polynomial time using linear programming techniques;
the variables to solve for are the c_S for each S ∈ Λ),
where the i'th reaction has reactants r⃗_i and products p⃗_i,
and species S ∈ Λ has mass c_S ≥ 0:
(∀ i) ∑_{S ∈ Λ} [ p⃗_i(S) − r⃗_i(S) ] c_S < 0.
For example, for the reactions A+A → B+C and B+B → A,
for each reaction to strictly decrease the potential function Φ(x⃗) = c_A x⃗(A) + c_B x⃗(B) + c_C x⃗(C),
Φ must satisfy
2c_A > c_B + c_C and 2c_B > c_A.
In this case, c_A = 1, c_B = 1, c_C = 0 works.
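Since any positive scaling of a solution is again a solution, the strict inequalities “< 0” can be replaced by “≤ −1”, turning the check into a standard LP feasibility problem. A Python sketch of this check (our own; it assumes scipy is available, though any LP solver would do):

import numpy as np
from scipy.optimize import linprog

def linear_potential(reactions, species):
    """reactions: list of (reactants, products), each a dict species -> count.
    Returns coefficients c_S of a linear potential function, or None if none exists."""
    # One constraint per reaction i: sum_S [p_i(S) - r_i(S)] * c_S <= -1.
    A = np.array([[p.get(S, 0) - r.get(S, 0) for S in species] for r, p in reactions])
    b = -np.ones(len(reactions))
    # Pure feasibility: minimize the zero objective subject to A c <= b, c >= 0.
    res = linprog(np.zeros(len(species)), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * len(species))
    return dict(zip(species, res.x)) if res.success else None

# A+A -> B+C and B+B -> A admit a potential (e.g., c_A = c_B = 1, c_C = 0):
print(linear_potential([({'A': 2}, {'B': 1, 'C': 1}), ({'B': 2}, {'A': 1})],
                       ['A', 'B', 'C']))
# X -> 2X admits none, matching the fact that it is not execution bounded:
print(linear_potential([({'X': 1}, {'X': 2})], ['X']))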
The following is a variant of Farkas' Lemma <cit.>,
one of several similar “Theorems of the Alternative” stating that exactly one of two different linear systems has a solution.
(See <cit.> for a list of such theorems.)
A proof can be found in <cit.>.
Let M be a real matrix.
Exactly one of the following statements is true.
* There is a vector x⃗ ≧ 0⃗ such that M x⃗ < 0⃗.
* There is a vector y⃗ ≥ 0⃗ such that y⃗^⊤ M ≧ 0⃗.
We require the following discrete variant of <Ref>.
The geometric intuition of this version is illustrated in <Ref>.
Let M be a rational matrix.
Exactly one of the following statements is true.
* There is an integer vector u⃗ ≥ 0⃗ such that M u⃗ ≧ 0⃗.
* There is an integer vector v⃗ ≧ 0⃗ such that v⃗^⊤ M < 0⃗.
For convenience when we use <Ref> in proving <Ref>, we swapped the roles of the two vectors in left- vs. right-multiplication with M;
the real-valued version of the statement of <Ref> is equivalent to <Ref> by taking the transpose of M.
To see that we may assume the vectors are integer-valued if is rational-valued,
recall that a system of linear equalities/inequalities with rational coefficients has a solution if and only if it has a rational solution.
Since the system is homogeneous (the matrix-vector product is compared to the zero vector 0⃗),
any multiple of a solution is also a solution.
By clearing denominators,
it has a rational solution if and only if it has an integer solution.
Although we do not need the following fact,
it is worthwhile to observe that,
if M is integer-valued (as in our application),
then the solution u⃗ or v⃗ (whichever exists) in <Ref> has entries that are at most exponential in ‖M‖,
i.e., at most exponential in the sum of absolute values of the entries of M
(see e.g., <cit.>).
So in particular when we consider M having small O(1) size entries,
this means the solution u⃗ or v⃗ has entries that are at most exponential in the number of rows and columns of M.
When M is a stoichiometric matrix, these correspond to the number of species and reactions, respectively, of the CRN.
<Ref> will help us prove the following theorem characterizing CRNs with bounded executions from all configurations.
<Ref> is used in this paper only to prove
<Ref>,
but it may also be of independent interest, since it equates a “global, infinitary, difficult-to-check” condition (bounded executions from all configurations) with a “local, easy-to-check” condition (having a linear potential function).
A CRN has a linear potential function if and only if it is entirely execution bounded.
Let 𝒞 = (Λ, R) be a CRN.
The forward direction is easy:
assuming 𝒞 has potential function Φ,
since each reaction decreases Φ by at least 1
(see <Ref>),
starting from a configuration x⃗,
we can execute at most Φ(x⃗) reactions while keeping Φ nonnegative.
Thus 𝒞 is entirely execution bounded.
To see the reverse direction, assume that 𝒞 is execution bounded from every configuration,
and let M be the stoichiometric matrix of 𝒞 (with one row per species and one column per reaction).
We claim there is no integer vector u⃗ ≥ 0⃗ satisfying M u⃗ ≧ 0⃗;
for the sake of contradiction suppose otherwise.
Interpreting u⃗ as counts of reactions to execute,
for any sufficiently large configuration x⃗,
all reactions in u⃗ can be applied (in arbitrary order),
and the vector M u⃗ describes the resulting change in species counts, reaching configuration y⃗ = x⃗ + M u⃗.
Since M u⃗ ≧ 0⃗,
this path is self-covering, i.e., y⃗ ≧ x⃗.
But since 𝒞 is execution bounded from every configuration,
by <Ref>,
𝒞 has no self-covering path from any configuration,
a contradiction.
This establishes the claim that M u⃗ ≧ 0⃗ has no integer solution u⃗ ≥ 0⃗.
By <Ref>,
there is an integer vector v⃗ ≧ 0⃗ such that v⃗^⊤ M < 0⃗.
Let v⃗ ∈ ℕ^Λ be the coefficients of a linear function Φ: ℕ^Λ → ℕ, i.e., Φ(x⃗) = v⃗ · x⃗.
Then the vector v⃗^⊤ M ∈ ℤ^R represents the amount Φ changes by one unit of each reaction, i.e., (v⃗^⊤ M)(α) is the amount Φ increases when executing reaction α once.
Since v⃗^⊤ M < 0⃗, this means that every reaction strictly decreases Φ, i.e.,
Φ is a linear potential function for 𝒞.
By employing the real-valued version of <Ref>,
the above proof also shows that <Ref> holds for the continuous model of CRNs <cit.>, in which species amounts are modeled as continuous nonnegative real concentrations.
In this case, a continuous CRN would be defined to be execution bounded from a configuration if each reaction can be executed by at most a finite (real-valued) amount from that configuration.
§.§ Impossibility of stably deciding majority and parity
In this section, we prove
<Ref>,
which is a special case of our main negative result, <Ref>.
We give a self-contained proof of
<Ref>
because it is simpler and serves as an intuitive warmup to some of the key ideas used in proving <Ref>,
without the complexities of dealing with arbitrary semilinear sets.
<Ref> shows a limitation on the computational power of entirely execution bounded, all-voting CRNs,
but it requires an additional constraint on the CRN for the result to hold
(and we later give counterexamples showing that this extra hypothesis is provably necessary), described in the following definition.
Let 𝒟 be a CRD.
The output size of 𝒟 is the function s: ℕ → ℕ defined by s(n) = min { ‖o⃗‖ : i⃗ ⇒ o⃗, ‖i⃗‖ = n, i⃗ is a valid initial configuration, and o⃗ is stable },
the size of the smallest stable configuration reachable from any valid initial configuration of size n.
A CRD 𝒟 is non-collapsing if lim_{n→∞} s(n) = ∞.
Put another way, 𝒟 is collapsing if there is a constant c such that, from infinitely many valid initial configurations i⃗,
𝒟 can reach a stable configuration of size at most c.
All population protocols are non-collapsing, since every reaction preserves the configuration size.
No noncollapsing, all-voting, entirely execution bounded CRD can stably decide the majority predicate [X_1 ≥ X_2?]
or the parity predicate [X ≡ 1 (mod 2)?].
Let 𝒟 = (Λ, R, Σ, Υ_Y, Υ_N, s⃗) be a CRD obeying the stated conditions,
and suppose for the sake of contradiction that 𝒟 stably decides the majority predicate (so Σ = {X_1, X_2}).
We consider the sequence of stable configurations y⃗_1, n⃗_1, y⃗_2, n⃗_2, … defined as follows.
Let y⃗_1 be a stable configuration reachable from initial configuration s⃗ + {X_1, X_2};
since the correct answer is yes,
all species present in y⃗_1 vote yes.
Now add a single copy of X_2.
By additivity,
the configuration y⃗_1 + {X_2} is reachable from s⃗ + {X_1, 2X_2},
for which the correct answer in this case is no.
Thus, since 𝒟 stably decides majority,
from y⃗_1 + {X_2},
a stable “no” configuration is reachable;
call this n⃗_1.
Now add a single X_1.
Since the correct answer is yes, from n⃗_1 + {X_1} a stable “yes” configuration is reachable;
call it y⃗_2.
Continuing in this way, we have a sequence of stable configurations
y⃗_1, n⃗_1, y⃗_2, n⃗_2, …
where all species in y⃗_i vote yes and all species in n⃗_i vote no.
Since 𝒟 is noncollapsing,
the sizes of the configurations y⃗_i and n⃗_i increase without bound as i → ∞.
(Possibly ‖y⃗_{i+1}‖ < ‖y⃗_i‖, i.e., the size is not necessarily monotonically increasing, but for all sufficiently large j > i, we have ‖y⃗_j‖ > ‖y⃗_i‖.)
Since all species vote,
for some constant δ > 0,
to get from y⃗_i + {X_2} to n⃗_i,
at least δ‖y⃗_i‖ reactions must occur.
This is because all species in y⃗_i must be removed since they vote yes,
and each reaction removes at most O(1) molecules.
(Concretely, let δ = 1 / max_{(r⃗,p⃗) ∈ R} (‖r⃗‖ − ‖p⃗‖), i.e., 1 over the most net molecules consumed in any reaction.)
Similarly,
to get from n⃗_i + {X_1} to y⃗_{i+1},
at least δ‖n⃗_i‖ reactions must occur.
Since 𝒟 is entirely execution bounded, by <Ref>,
𝒟 has a linear potential function Φ(x⃗) = v⃗ · x⃗,
where v⃗ ≥ 0⃗.
Adding a single X_2 to y⃗_i increases Φ by the constant v⃗(X_2).
Since ‖y⃗_i‖ grows without bound,
the number of reactions to get from y⃗_i + {X_2} to n⃗_i increases without bound as i → ∞,
and since each reaction strictly decreases Φ by at least 1,
the total change in Φ that results from adding X_2 and then going from y⃗_i + {X_2} to n⃗_i is unbounded in i, so unboundedly negative for sufficiently large i
(negative once i is large enough that δ‖y⃗_i‖ ≥ v⃗(X_2) + 2).
Similarly, adding a single X_1 to n⃗_i and going from n⃗_i + {X_1} to y⃗_{i+1},
the resulting total change in Φ is unbounded and (eventually) negative.
Φ starts this process at the constant Φ(s⃗ + {X_1, X_2}).
Before y⃗_i and n⃗_i are large enough that
δ‖y⃗_i‖ ≥ v⃗(X_2) + 2 and
δ‖n⃗_i‖ ≥ v⃗(X_1) + 2
(i.e., large enough that the net change in Φ is negative resulting from adding a single input and going to the next stable configuration),
Φ could increase,
if Φ({X_2})
(resp. Φ({X_1}))
is larger than the net decrease in Φ due to following reactions to get from y⃗_i + {X_2} to n⃗_i
(resp. from n⃗_i + {X_1} to y⃗_{i+1}).
However, since 𝒟 is noncollapsing,
this can only happen for a constant number of i
(so Φ never reaches more than a constant above its initial value Φ(s⃗ + {X_1, X_2})),
after which Φ strictly decreases after each round of this process.
At some point in this process,
𝒟 will not be able to reach all the way to the next y⃗_i or n⃗_i without Φ becoming negative,
a contradiction.
The argument for parity is similar, but instead of alternating adding X_1 then X_2, in each round we always add one more X to flip the correct answer.
<Ref>
is false without the noncollapsing hypothesis.
The following collapsing, leaderless (but all-voting and entirely execution bounded) CRD stably decides majority:
Species X_1,x_1 vote yes, while X_2,x_2 vote no:
X_1+X_2 → x_1+x_2
X_1+x_2 → X_1
X_2+x_1 → X_2
x_1+x_2 → x_1
It has bounded executions from any configuration:
min(#X_1,#X_2) of the first reaction can occur,
and the other reactions decrease molecular count,
so are limited by the total configuration size.
However, it is collapsing since a stable configuration of size 1 is always reachable.
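To see the collapse concretely, here is a small random-order simulation of this CRD (our own sketch, not from the paper); it always terminates because the CRD is entirely execution bounded, stabilizes on the correct majority vote, and typically ends in a configuration of just a few molecules:

import random

def collapsing_majority(n1, n2, seed=0):
    """Random-order run of the collapsing majority CRD; returns (vote, final size)."""
    rng = random.Random(seed)
    c = {'X1': n1, 'X2': n2, 'x1': 0, 'x2': 0}
    rules = [({'X1', 'X2'}, ['x1', 'x2']),   # X1+X2 -> x1+x2
             ({'X1', 'x2'}, ['X1']),         # X1+x2 -> X1
             ({'X2', 'x1'}, ['X2']),         # X2+x1 -> X2
             ({'x1', 'x2'}, ['x1'])]         # x1+x2 -> x1
    while True:
        enabled = [(r, p) for r, p in rules if all(c[s] > 0 for s in r)]
        if not enabled:
            break                            # terminal configuration reached
        r, p = rng.choice(enabled)
        for s in r: c[s] -= 1
        for s in p: c[s] += 1
    vote = 'yes' if c['X1'] + c['x1'] > 0 else 'no'
    return vote, sum(c.values())

print(collapsing_majority(6, 4))   # ('yes', ...) with a tiny final size
print(collapsing_majority(4, 6))   # ('no', ...)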
<Ref>
is similarly false without the all-voting hypothesis; for each of the reactions with one product above, add another non-voting product W.
This converts the CRD to be noncollapsing but not all-voting.
Of course,
the execution bounded hypothesis is also necessary:
the original population protocols paper <cit.> showed that all-voting, noncollapsing, leaderless population protocols can stably decide all semilinear predicates.
The following collapsing, all-voting, leaderless (but entirely execution bounded) CRD stably decides parity.
Let the input species be named X_1.
Species X_1 votes yes, X_0 votes no:
X_1+X_1 → X_0
X_1+X_0 → X_1
X_0+X_0 → X_0
It has bounded executions from any configuration:
exactly # X_1 + # X_0 - 1 reactions can occur since each reduces #X_1 + #X_0 by 1.
Similar to above, by adding the non-voting product W to each reaction above,
this CRD becomes noncollapsing but not all-voting, showing that the all-voting hypothesis is also necessary for stably deciding parity.
§.§ Impossibility of stably deciding not eventually constant predicates
We now present our main negative result, <Ref>,
which generalizes <Ref> to show that such CRNs can stably decide only trivial (eventually constant) predicates.
Let ϕ: ℕ^d → {0, 1} be a predicate. We say ϕ is eventually constant if there is n_0 ∈ ℕ such that ϕ is constant on ℕ_{≥n_0}^d = { x⃗ ∈ ℕ^d | (∀ i ∈ {1, …, d}) x⃗(i) ≥ n_0 }, i.e., either ϕ^{-1}(0) ∩ ℕ_{≥n_0}^d = ∅ or ϕ^{-1}(1) ∩ ℕ_{≥n_0}^d = ∅.
In other words, although ϕ may have an infinite number of each output, “sufficiently far from the boundary of the positive orthant” (where all coordinates exceed n_0), only one output appears.
See <Ref> for a 2D example.
For any set B ⊆ ℕ^d and v⃗ ∈ ℕ^d, write B + v⃗ to denote the set { b⃗ + v⃗ | b⃗ ∈ B },
which is B translated by the vector v⃗.
Let e⃗_i ∈ ℕ^d denote the unit vector in direction i, i.e., e⃗_i(i) = 1 and e⃗_i(j) = 0 for j ≠ i.
We say A ⊆ ℕ^d is periodic if, for some k ∈ ℕ^+ and some finite set F ⊆ {0,1,…,k−1}^d,
A = ⋃_{n_1,…,n_d ∈ ℕ} ( F + ∑_{i=1}^d k · n_i · e⃗_i ).
We say k is the period of A and say that A is k-periodic.
Equivalently, A is k-periodic if, for all x⃗ ∈ ℕ^d and all unit vectors e⃗_i, x⃗ ∈ A ⟺ x⃗ + k·e⃗_i ∈ A.
In other words, A is periodic if it is a union of copies of a finite subset F of the k × k ×…× k hypercube with a corner at the origin,
translated in each direction by every nonnegative integer multiple of the hypercube's width.
See <Ref>.
Note that if A is k-periodic, then it is also k'-periodic for every positive integer multiple k' = i · k of k.
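Membership in a k-periodic set thus reduces to reducing each coordinate modulo k and checking the finite set F. A two-line Python illustration (ours):

def in_periodic_set(x, k, F):
    """x is in the k-periodic set determined by F (a subset of {0,...,k-1}^d)
    iff its componentwise residues modulo k land in F."""
    return tuple(xi % k for xi in x) in F

# The mod set {x : x1 + 2*x2 = 1 (mod 3)} is 3-periodic, with F as below:
F = {(a, b) for a in range(3) for b in range(3) if (a + 2 * b) % 3 == 1}
print(in_periodic_set((4, 0), 3, F))   # True:  4 + 0 = 1 (mod 3)
print(in_periodic_set((5, 0), 3, F))   # False: 5 + 0 = 2 (mod 3)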
Let A ⊆ ℕ^d be a Boolean combination of mod sets.
Then A is periodic.
We prove this by induction on the number of mod sets.
For the base case, let
A = {|·≡ c m } be a single mod set,
where ∈{0,…,m-1}^d and c,m ∈ are constants.
Letting k=m
and
F = A ∩{0,…,m-1}^d
in <Ref> works.
Let ∈^d.
Then for all 1 ≤ i ≤ d,
·≡· ( + m _i) m,
so
·≡ c m · ( + m _i) ≡ c m,
meaning that ∈ A + m _i ∈ A,
so A is k-periodic.
The inductive case amounts to showing that periodic sets are closed under Boolean operations of union, intersection, and complement.
Clearly the complement of any periodic set is also periodic.
Inductively assume that A_1, A_2 ⊆ ℕ^d are periodic;
we argue that A_1 ∪ A_2 is periodic.
Letting k be the least common multiple of their periods, we may assume both A_1 and A_2 are k-periodic with the same period k.
Then for all x⃗ ∈ ℕ^d
and all unit vectors e⃗_i,
x⃗ ∈ A_1 ⟺ x⃗ + k·e⃗_i ∈ A_1
and
x⃗ ∈ A_2 ⟺ x⃗ + k·e⃗_i ∈ A_2.
Thus
x⃗ ∈ A_1 ∪ A_2 ⟺ x⃗ + k·e⃗_i ∈ A_1 ∪ A_2,
so A_1 ∪ A_2 is also k-periodic.
Similar reasoning shows A_1 ∩ A_2 is k-periodic (one can also appeal to DeMorgan's Laws).
Each threshold set T is defined by a hyperplane that partitions ℕ^d into the sets T
(on one side of the hyperplane, including integer points on the hyperplane itself)
and its complement T̄
(on the other side of the hyperplane).
More generally, several threshold sets partition ^d into multiple disjoint subsets we call “regions”.
Furthermore, any predicate that is a Boolean combination of threshold sets has constant output in any region; the next definition formalizes this.
Let A ⊆ ℕ^d be a Boolean combination of threshold sets T_1,…,T_k ⊂ ℕ^d.
A region of A is a convex polytope R ⊂ ℝ_{≥0}^d such that, for all x⃗, y⃗ ∈ R ∩ ℕ^d,
for all 1 ≤ i ≤ k,
x⃗ ∈ T_i ⟺ y⃗ ∈ T_i.
The output of the region R is the value 1 if R ∩ ℕ^d ⊂ A and 0 if R ∩ ℕ^d ∩ A = ∅.
(Note these are the only two possibilities, since no individual threshold set T_i is exited or entered as we move within R.)
A region R is totally unbounded
if, for all c ∈ ℕ,
R ∩ ℕ_{≥c}^d ≠ ∅,
i.e., R contains points that are arbitrarily large on all components.
A region is called partially bounded if it is not totally unbounded.
Put another way, predicates defined by Boolean combinations of threshold sets are defined by (d-1)-dimensional hyperplanes that partition ^d into regions, where in each region, the output of the predicate is all yes, or all no.
In fact this is an exact characterization of Boolean combinations of threshold predicates.
For any set A ⊆ ℝ^d,
the recession cone of A is
rec(A) = { u⃗ ∈ ℝ^d | (∀ x⃗ ∈ A)(∀ λ > 0) x⃗ + λu⃗ ∈ A },
the set of vectors u⃗ such that,
from any point in A, one can move in direction u⃗ forever without leaving A.
A region R defined by threshold sets is totally unbounded if and only if rec(R) ∩ ℝ_{>0}^d ≠ ∅,
i.e., the recession cone of R contains a positive vector.
Let A ⊆ ℕ^d be a Boolean combination of threshold sets that is not eventually constant.
Then there are two adjacent totally unbounded regions R_0, R_1 with opposite outputs, such that the normal vector of the hyperplane H separating R_0 and R_1 has at least one negative component and at least one positive component.
See <Ref> for an example in 2D.
Since A is not eventually constant, it must have two totally unbounded regions R_0 and R_1 with opposite outputs;
assume WLOG that R_i has output i.
Let c ≥ 0 be sufficiently large that all partially bounded regions of A are subsets of B = ℕ^d ∖ ℕ_{≥c}^d.
Now, simply pick any points p⃗_0 ∈ R_0 ∖ B and p⃗_1 ∈ R_1 ∖ B.
There is some path from p⃗_0 to p⃗_1 that follows only unit vectors (i.e., moves only to adjacent points that are distance 1 from the previous point),
such that every intermediate point p⃗' also obeys p⃗' ∉ B.
Then this path never enters a partially bounded region of A,
since they are all subsets of B.
Thus, since the path starts in a region R_0 with output 0
and ends in a region R_1 with output 1,
there must be two adjacent points a⃗, b⃗ on the path,
where a⃗ is in a totally unbounded region with output 0 and b⃗ is in a totally unbounded region with output 1.
Finally, we must show that the normal vector of the hyperplane separating R_0 from R_1 has a negative and a positive entry.
Recall that a threshold set T is defined by T = { x⃗ ∈ ℕ^d | w⃗·x⃗ ≤ a }, where w⃗ = (w_1, …, w_d) ∈ ℤ^d and a ∈ ℤ (<ref>).
Since A is a Boolean combination of threshold sets and R_0, R_1 are adjacent with opposite outputs, there must be some threshold set T such that R_1 ⊆ T but R_0 ∩ T = ∅ (or vice versa, but assume R_1 ⊆ T WLOG, since we could replace T with its complement T̄ in the Boolean combination defining A).
Equivalently, we can think of the regions R_0 and R_1 as being separated by the hyperplane w⃗·x⃗ = a, with normal vector w⃗ and offset a, such that all points y⃗ ∈ R_1 obey w⃗·y⃗ ≤ a,
and all points x⃗ ∈ R_0 obey w⃗·x⃗ > a.
The transition between the regions at points a⃗ and b⃗ involves crossing the hyperplane, where the inequality changes from ≤ a to > a, which defines the boundary between different outputs (0 in R_0 and 1 in R_1). Therefore, the points on the hyperplane w⃗·x⃗ = a necessarily lie exactly at the boundary between these regions.
We show that w⃗ can be neither nonnegative nor nonpositive. Suppose w⃗ ≥ 0⃗ (scale the normal vector by −1 otherwise).
Since R_1 is totally unbounded, it contains points that are arbitrarily large on all components.
More formally, there is a strictly increasing sequence x⃗_1 < x⃗_2 < … such that all x⃗_i ∈ R_1.
Since w⃗ ≥ 0⃗ and w⃗ ≠ 0⃗,
lim_{i→∞} w⃗·x⃗_i = ∞.
This contradicts the previous assumption that
all points y⃗ ∈ R_1
obey
w⃗·y⃗ ≤ a (geometrically, we would cross the hyperplane somewhere and land in R_0).
Symmetric reasoning applies to the case w⃗ ≤ 0⃗.
We conclude that the separating hyperplane must have a normal vector with at least one positive and at least one negative component, establishing the lemma.
The next lemma shows that there exists a vector u⃗ > 0⃗ parallel to the hyperplane separating the two regions. In other words, we can move along H while increasing every component.
Let H be a hyperplane with normal vector w⃗.
Then there is a positive vector u⃗ > 0⃗ with u⃗·w⃗ = 0
if and only if w⃗ has at least one negative component and at least one positive component.
⟹:
If u⃗ > 0⃗ and w⃗ ≥ 0⃗ then u⃗·w⃗ > 0.
Similarly, if u⃗ > 0⃗ and w⃗ ≤ 0⃗ then u⃗·w⃗ < 0.
So to get u⃗·w⃗ = 0, w⃗ must have at least one positive and at least one negative element.
⟸:
We construct u⃗ as follows: let I_+ denote the indices of the positive coordinates of w⃗ and I_− the indices of the negative coordinates. Our goal is to balance out the positive and negative parts of the dot product, given by u⃗·w⃗ = ∑_{i ∈ I_+} u⃗(i)w⃗(i) + ∑_{i ∈ I_−} u⃗(i)w⃗(i). Set u⃗(i) to be the sum of the absolute values of the negative coordinates of w⃗ if i ∈ I_+, and the sum of the positive coordinates of w⃗ if i ∈ I_−:
u⃗(i) =
∑_{j ∈ I_−} |w⃗(j)| if i ∈ I_+,
∑_{j ∈ I_+} w⃗(j) if i ∈ I_−,
1 otherwise (any positive value works here, since coordinates with w⃗(i) = 0 do not affect the dot product).
Substituting into the formula shows the correctness. For brevity, let p := ∑_{i ∈ I_+} w⃗(i) and n := ∑_{i ∈ I_−} |w⃗(i)|, so that u⃗(i) = n for i ∈ I_+ and u⃗(i) = p for i ∈ I_−, as above. Then (the zero coordinates of w⃗ contribute nothing)
u⃗·w⃗ = ∑_{i ∈ I_+} u⃗(i)w⃗(i) + ∑_{i ∈ I_−} u⃗(i)w⃗(i)
= n ∑_{i ∈ I_+} w⃗(i) + p ∑_{i ∈ I_−} w⃗(i)
= np + p(−n)
= 0.
Finally, if u⃗ is not integer-valued (which can happen when w⃗ is rational), scale it by the least common multiple of all coordinate denominators to ensure u⃗ ∈ ℕ^d without altering the (zero) dot product.
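The construction in this proof is directly executable; a short Python version (ours, including the convention of using 1 on the zero coordinates of w⃗):

def positive_parallel_vector(w):
    """Given an integer vector w with at least one positive and one negative entry,
    return a componentwise-positive integer u with dot(u, w) == 0."""
    p = sum(wi for wi in w if wi > 0)     # sum of positive coordinates
    n = sum(-wi for wi in w if wi < 0)    # sum of |negative| coordinates
    u = [n if wi > 0 else p if wi < 0 else 1 for wi in w]
    assert all(ui > 0 for ui in u) and sum(ui * wi for ui, wi in zip(u, w)) == 0
    return u

print(positive_parallel_vector([2, -1, -3, 0]))   # [4, 2, 2, 1]; dot = 8 - 2 - 6 = 0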
Let ϕ: ℕ^d → {0,1} be a semilinear predicate that is not eventually constant.
Then there is an infinite sequence x⃗_0, x⃗_1, … and a constant c,
such that for all j ∈ ℕ,
*
ϕ(x⃗_j) ≠ ϕ(x⃗_{j+1})
(the correct answer swaps for each subsequent point),
*
x⃗_j ≤ x⃗_{j+1}
(inputs are increasing),
and
*
‖x⃗_{j+1} − x⃗_j‖ ≤ c
(adjacent inputs are “close”).
We associate to ϕ the set A = ϕ^{-1}(1) ⊆ ℕ^d, i.e., ϕ(x⃗) = 1 ⟺ x⃗ ∈ A.
Since A is semilinear, it is a Boolean combination of threshold sets T_1,…,T_k and mod sets M_1,…,M_l.
Recall <Ref>, where the threshold sets partition ℕ^d into regions,
where moving within a region does not cross any hyperplanes defining the threshold sets,
thus does not change the Boolean value [x⃗ ∈ T_i?] for any T_i.
Suppose we have m regions R_1,…,R_m.
Then we can rewrite A ∩ R_j as a Boolean combination of mod sets only, intersected with R_j.
We do this by replacing each T_i in the original Boolean expression
with either ℕ^d or ∅, depending on whether R_j ⊆ T_i or R_j ∩ T_i = ∅, respectively.[
For example, if the expression is
T_1 ∪ (M_1 ∩ T_2) ∪ (M_2 ∪ T_3) ∪ (M_3 ∩ M_4),
if the points are in T_2 but not T_1 or T_3,
this becomes
∅ ∪ (M_1 ∩ ℕ^d) ∪ (M_2 ∪ ∅) ∪ (M_3 ∩ M_4) = M_1 ∪ M_2 ∪ (M_3 ∩ M_4).
]
(Note by the definition of region these are the only two possibilities.)
Let M'_j be this Boolean combination of mod sets, such that M'_j ∩ R_j = A ∩ R_j.
By <Ref>,
M'_j is periodic.
Consider a totally unbounded region R_j.
By <Ref>,
rec(R_j) contains a positive vector u⃗.
We have two cases:
for some totally unbounded region R_j, M'_j ∩ R_j is not constant:
This is illustrated in <Ref>,
which shows two subcases.
<Ref>
shows the subcase where,
for some u⃗ ∈ rec(R_j) ∩ ℕ^d and point x⃗_0 ∈ R_j,
defining x⃗_i = x⃗_0 + i·u⃗,
the sequence ϕ(x⃗_0), ϕ(x⃗_1), … is not constant.
Since M'_j is periodic,
the sequence O = (ϕ(x⃗_0), ϕ(x⃗_1), …) is periodic with period p.
So we can find a subsequence y⃗_0, y⃗_1, … obeying all three conditions of the lemma.
In particular, it suffices to choose a point y⃗_0 = x⃗_0 ∈ R_j ∩ A
(resp. x⃗_0 ∈ R_j ∖ A),
let i < p be such that
x⃗_i ∉ A
(resp. x⃗_i ∈ A),
letting y⃗_1 = x⃗_i,
and let y⃗_2 = x⃗_p,
and subsequent elements of the subsequence are the same distances apart
(y⃗_3 = x⃗_{p+i},
y⃗_4 = x⃗_{2p}, …).
<Ref>
shows the subcase where,
for all x⃗_0 ∈ R_j and u⃗ ∈ rec(R_j) ∩ ℕ^d,
defining x⃗_i = x⃗_0 + i·u⃗,
the sequence ϕ(x⃗_0), ϕ(x⃗_1), … is constant.
However,
since M'_j is not constant,
we can still find a sequence y⃗_0, y⃗_1, …, but unlike the previous subcase, it is not a subsequence of points collinear along one vector u⃗.
Since M'_j is periodic and not constant,
and since R_j is totally unbounded,
for every x⃗ ∈ R_j ∩ A,
there is x⃗' ≥ x⃗ such that x⃗' ∈ R_j ∖ A,
i.e., for every point in the region in A, there is a larger point in R_j not in A.
Also, since M'_j is periodic,
there is a constant c independent of x⃗ such that ‖x⃗' − x⃗‖ ≤ c.
By symmetric reasoning, there is x⃗″ ∈ R_j ∩ A such that x⃗″ ≥ x⃗' and ‖x⃗″ − x⃗'‖ ≤ c.
Let y⃗_0 ∈ R_j ∩ A be arbitrary.
For all i ∈ ℕ,
choose y⃗_{i+1} ∈ R_j based on y⃗_i as above,
such that
y⃗_{i+1} ≥ y⃗_i,
‖y⃗_{i+1} − y⃗_i‖ ≤ c,
and
y⃗_{i+1} ∈ A if i is odd and y⃗_{i+1} ∉ A if i is even.
Then the sequence y⃗_0, y⃗_1, … satisfies the lemma.
for all totally unbounded regions R_j, M'_j ∩ R_j is constant:
This implies that the mod sets M_1,…,M_l can be “factored out” of the Boolean expression defining A in terms of the threshold sets T_1,…,T_k and the mod sets M_1,…,M_l,
which will give the same output as A in totally unbounded regions.
Put another way,
A ∩ (R_1 ∪ … ∪ R_u) is a Boolean combination of the threshold sets T_1,…,T_k,
where R_1,…,R_u represent all the totally unbounded regions.
By <Ref>,
two adjacent totally unbounded regions of A have opposite outputs.
See <Ref> for an example of picking the points x⃗_0, x⃗_1, … below.
These adjacent regions are separated by some hyperplane H_j, such that H_j ⊆ A,
but for some unit vector e⃗_i,
(H_j + e⃗_i) ∩ A = ∅,
i.e., all of H_j is contained in A,
but the entire hyperplane adjacent to H_j in direction e⃗_i
consists of points not in A.
Note this is not true for general hyperplanes, e.g., one whose orthogonal vector is (1,1), where both unit vectors e⃗_1 = (1,0) and e⃗_2 = (0,1) would move off the hyperplane, but in the “yes” direction, where the point is still contained in the threshold set.
However, since H_j separates two totally unbounded regions,
some strictly positive vector u⃗ > 0⃗ is parallel to H_j, i.e., obeys u⃗·w⃗ = 0 for H_j's orthogonal vector w⃗.
By <Ref>,
w⃗ has at least one positive coordinate (say i) and at least one negative coordinate (say k),
so that unit vector e⃗_i moves to one side of H_j and e⃗_k moves to the other side.
In this case,
we let u⃗ > 0⃗ be some vector parallel to H_j,
and let x⃗_0 ∈ H_j,
sufficiently large that the vector u⃗, starting at x⃗_0,
does not cross any of the hyperplanes of T_1,…,T_k
(as in <Ref>).
Define the rest of the infinite sequence as
x⃗_1 = x⃗_0 + e⃗_i,
x⃗_2 = x⃗_0 + u⃗,
x⃗_3 = x⃗_0 + u⃗ + e⃗_i,
x⃗_4 = x⃗_0 + 2u⃗,
x⃗_5 = x⃗_0 + 2u⃗ + e⃗_i,
x⃗_6 = x⃗_0 + 3u⃗,
x⃗_7 = x⃗_0 + 3u⃗ + e⃗_i,
⋮
By the arguments given above,
for all odd i,
ϕ(x⃗_i) = 0
and for all even i,
ϕ(x⃗_i) = 1,
satisfying condition (<ref>).
If j is even, then x⃗_{j+1} = x⃗_j + e⃗_i, so clearly x⃗_j ≤ x⃗_{j+1},
satisfying condition (<ref>).
If j is odd,
then x⃗_{j+1} = x⃗_j − e⃗_i + u⃗.
Since u⃗ > 0⃗ is integer-valued,
we have u⃗ − e⃗_i ≥ 0⃗,
so x⃗_{j+1} ≥ x⃗_j,
satisfying condition (<ref>).
Finally,
‖x⃗_{j+1} − x⃗_j‖ ≤ ‖u⃗‖ + 1,
satisfying condition (<ref>).
If a noncollapsing, all-voting, entirely execution bounded CRD stably decides a predicate ϕ,
then ϕ is eventually constant.
This proof is similar to that of <Ref>.
In that proof, we repeatedly add a “constant amount of additional input {X_2} or {X_1}, which flips the output”.
For more general semilinear, but not eventually constant, predicates,
we dig into the structure of the semilinear set to find a sequence of constant-size vectors representing additional inputs that flip the correct answer.
Any predicate that is not eventually constant has infinitely many yes inputs and infinitely many no inputs, but in general they could be increasingly far apart:
e.g., ϕ(x⃗) = 1 if and only if 2^n ≤ ‖x⃗‖ < 2^{n+1} for even n.
For the potential function argument to work,
each subsequent input needs to be at most a constant larger than the previous.
But if ϕ is semilinear (and not eventually constant), then we can show that there is a sequence of increasing inputs x⃗_0 ≤ x⃗_1 ≤ x⃗_2 ≤ …, each a constant distance from the next (‖x⃗_{j+1} − x⃗_j‖ = O(1)),
flipping the output (ϕ(x⃗_j) ≠ ϕ(x⃗_{j+1})).
Roughly, this is true for one of two reasons.
Using <Ref>,
ϕ is a Boolean combination of threshold and mod sets.
Either the mod sets are not combined to be trivially ∅ or ℕ^d,
in which case we can find some vector u⃗ that, followed infinitely far from some starting point x⃗_0 (so x⃗_i = x⃗_0 + i·u⃗),
periodically hits both yes inputs (ϕ(x⃗_j) = 1) and no inputs (ϕ(x⃗_j) = 0).
(See <Ref>.)
Otherwise,
the mod sets can be removed and simplify the Boolean combination to only threshold sets,
in which case the infinite sequence _0,_1,… can be obtained by moving along a threshold hyperplane that separates yes from no inputs.
submission,full(See <Ref>.)
This proof is similar to that of <Ref>,
with the vectors v⃗_i defined below playing the role of the “constant amount of additional input {X_2} or {X_1} that flips the correct answer” in that proof.
Let 𝒟 = (Λ, R, Σ, Υ_Y, Υ_N, s⃗) be a CRD obeying the stated conditions,
and suppose for the sake of contradiction that 𝒟 stably decides a semilinear predicate ϕ that is not eventually constant.
By <Ref>,
there is an infinite sequence x⃗_0, x⃗_1, … such that
* ϕ(x⃗_i) ≠ ϕ(x⃗_{i+1}) (i.e., the correct answer swaps for each subsequent input),
* x⃗_i ≤ x⃗_{i+1}, i.e., the inputs are increasing (on at least one coordinate),
and
* for some constant c, ‖x⃗_{i+1} − x⃗_i‖ ≤ c, i.e., adjacent inputs are “close”.
Assume WLOG that ϕ(x⃗_0) = 0.
For each i ∈ ℕ,
let v⃗_i = x⃗_{i+1} − x⃗_i,
noting by condition (<ref>) that
v⃗_i ≥ 0⃗.
We consider the sequence of stable configurations o⃗_0, o⃗_1, o⃗_2, … defined as follows.
Let o⃗_0 be a stable configuration reachable from the initial configuration with input x⃗_0;
since the correct answer is no,
all species present in o⃗_0 vote no.
Now add v⃗_0 to o⃗_0.
By additivity,
the configuration o⃗_0 + v⃗_0 is reachable from the initial configuration with input x⃗_1 = x⃗_0 + v⃗_0.
Since the correct answer for x⃗_1 is yes,
𝒟 must go from o⃗_0 + v⃗_0 to a stable “yes” configuration; call this o⃗_1.
Now add v⃗_1 to o⃗_1.
Since the correct answer is no, 𝒟 must now reach from o⃗_1 + v⃗_1 a stable “no” configuration;
call it o⃗_2.
By condition (<ref>),
each v⃗_i obeys ‖v⃗_i‖ ≤ c for the constant c above.
Continuing in this way, we have a sequence of stable configurations
o⃗_0, o⃗_1, …
where all species in o⃗_i vote yes for odd i, and all species in o⃗_i vote no for even i.
Since 𝒟 is noncollapsing,
the size of the configurations o⃗_i increases without bound as i → ∞.
(Possibly ‖o⃗_{i+1}‖ < ‖o⃗_i‖, i.e., the size is not necessarily monotonically nondecreasing, but for all sufficiently large j > i, we have ‖o⃗_j‖ > ‖o⃗_i‖.)
Since all species vote,
for some constant δ > 0,
to get from o⃗_i + v⃗_i to o⃗_{i+1},
at least δ‖o⃗_i‖ reactions must occur.
This is because all species in o⃗_i must be removed since they vote the opposite of the voters in o⃗_{i+1},
and each reaction removes at most O(1) molecules.
(Concretely, let δ = 1 / max_{(r⃗,p⃗) ∈ R} (‖r⃗‖ − ‖p⃗‖), i.e., 1 over the most net molecules consumed in any reaction.)
Since 𝒟 is entirely execution bounded, by <Ref>,
𝒟 has a linear potential function Φ(x⃗) = w⃗ · x⃗,
where w⃗ ≥ 0⃗.
Adding v⃗_i to o⃗_i increases Φ by Φ(v⃗_i), which is bounded above by a constant since ‖v⃗_i‖ ≤ c.
Since ‖o⃗_i‖ grows without bound,
the number of reactions to get from o⃗_i + v⃗_i to o⃗_{i+1} increases without bound as i → ∞,
and since each reaction strictly decreases Φ by at least 1,
the total change in Φ that results from adding v⃗_i and then going from o⃗_i + v⃗_i to o⃗_{i+1} is unbounded in i, so unboundedly negative for sufficiently large i
(negative once i is large enough that δ‖o⃗_i‖ ≥ Φ(v⃗_i) + 2).
However, Φ started at the constant Φ(o⃗_0).
Before ‖o⃗_i‖ is large enough that
δ‖o⃗_i‖ ≥ Φ(v⃗_i) + 2
(i.e., large enough that the net change in Φ is negative resulting from adding a single input batch and going to the next stable configuration),
Φ could increase,
if Φ(v⃗_i)
is larger than the net decrease in Φ due to following reactions to get from o⃗_i + v⃗_i to o⃗_{i+1}.
However, since 𝒟 is noncollapsing,
this can only happen for a constant number of i
(so Φ never reaches more than a constant above its initial value Φ(o⃗_0)),
after which point Φ strictly decreases after each round of this process.
At some point in this process,
𝒟 will not be able to reach all the way to the next o⃗_i without Φ becoming negative,
a contradiction.
The statement of <Ref> does not mention the concept of a leader,
but it would typically apply to leaderless CRDs.
A CRD may be execution bounded from configurations with a single leader, but not execution bounded when multiple leaders are present (preventing the use of <Ref>, which requires the CRD to be execution bounded from all configurations).
For example, in <Ref>,
reaction (<ref>) occurs finitely many times if the leader/voter S_Y or S_N has count 1.
However, if S_Y and S_N can be present simultaneously (e.g., if we start with two leaders),
then the reactions
S_Y + V_NN → S_Y + V_YN
and
S_N + V_YN → S_N + V_NN
can flip between V_NN and V_YN infinitely often in an unbounded execution.
If the CRN is leaderless, however, we have the following, which says that if it is execution bounded from valid initial configurations, then it is execution bounded from all configurations.
If a leaderless CRD or CRC is execution bounded,
then it is entirely execution bounded.
Since the CRD is leaderless, the sum of two valid initial configurations is also valid.
Thus if we can produce some species from a valid initial configuration,
we can produce arbitrarily large counts of all species by adding up sufficiently many initial configurations.
This means that for any configuration x⃗,
from some sufficiently large valid initial configuration i⃗,
some y⃗ ≧ x⃗ is reachable from i⃗.
But if the CRD is execution bounded from i⃗,
since i⃗ ⇒ y⃗,
it must also be execution bounded from y⃗,
thus also from x⃗, since by additivity any reactions applicable to x⃗ are also applicable to y⃗.
Let 𝒟 be a leaderless CRD or CRC.
Let x⃗ be any configuration.
We first show that some y⃗ ≧ x⃗ is reachable from a valid initial configuration i⃗.
We may assume without loss of generality that x⃗ only contains species producible from valid initial configurations; otherwise we obtain an equivalent CRN by removing the unproducible species.
Since 𝒟 is leaderless, the sum of two valid initial configurations is also valid.
Then each species S being producible means that
there is a valid initial configuration i⃗_{S,1} such that for some y⃗_{S,1},
i⃗_{S,1} ⇒ y⃗_{S,1}
and y⃗_{S,1}(S) ≥ 1,
i.e., at least one copy of S can be produced.
Let i⃗_{S,k} = k · i⃗_{S,1}.
By additivity,
i⃗_{S,k} ⇒ y⃗_{S,k},
where y⃗_{S,k} = k · y⃗_{S,1},
noting that y⃗_{S,k}(S) ≥ k.
In other words, all species are producible in arbitrarily large counts from some valid initial configuration.
Now we argue that all species can be made simultaneously arbitrarily large in count from some valid initial configuration;
in particular, we can reach a configuration with counts at least x⃗.
Let i⃗ = ∑_{S ∈ Λ} i⃗_{S,x⃗(S)}.
Since each i⃗_{S,x⃗(S)} ⇒ y⃗_{S,x⃗(S)},
by additivity we have
i⃗ ⇒ y⃗, where y⃗ = ∑_{S ∈ Λ} y⃗_{S,x⃗(S)}.
Then for each S ∈ Λ, y⃗(S) ≥ x⃗(S),
so y⃗ ≧ x⃗.
Since all executions from i⃗ are finite,
all executions from y⃗ are finite.
By additivity,
any sequence of reactions applicable to x⃗ is also applicable to y⃗.
Thus all executions from x⃗ ≦ y⃗ must be finite as well, i.e., 𝒟 is entirely execution bounded since x⃗ is an arbitrary configuration.
<Ref> lets us replace “entirely execution bounded” in <Ref> with “leaderless and execution bounded”:
If a noncollapsing, all-voting, leaderless, execution bounded CRD stably decides a predicate ϕ,
then ϕ is eventually constant.
In particular, since the original model of population protocols <cit.> defined them as leaderless and all-voting—and since population protocols are noncollapsing—we have the following.
If an execution bounded population protocol stably decides a predicate ϕ,
then ϕ is eventually constant.
§.§ Feedforward CRNs
We show that another common constraint, feedforwardness, significantly reduces computational power, making it impossible to decide even simple mod and threshold sets.
A CRN is reaction-feedforward if reactions can be ordered r_1, r_2, …, r_n such that, for all k<ℓ, no reactant of r_k appears in r_ℓ (as either reactant or product).
Reaction-feedforward CRNs are significant in the sense that many continuous real-valued CRNs computing numerical-valued functions (where the count of some species Y is interpreted as the output, e.g., 2X → Y computes f(x) = ⌊ x / 2 ⌋) can be computed by reaction-feedforward CRNs <cit.>.[
The definition of feedforward in reference <cit.> is different from the definition given here, being based on an ordering of species rather than reactions.
However, it is straightforward to verify by inspection that the CRNs given for the positive results of <cit.> are reaction-feedforward according to <Ref>.
]
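Whether a CRN is reaction-feedforward can be checked greedily: a reaction may be placed next in the order exactly when its reactants appear in no other remaining reaction, and if any valid order exists, its first element always satisfies this test, so the greedy choice is safe. A small Python sketch of this check (our own, with reactions given as (reactants, products) dicts):

def reaction_feedforward_order(reactions):
    """Return an ordering r_1, ..., r_n such that no reactant of an earlier
    reaction appears in a later one, or None if no such ordering exists."""
    remaining = list(range(len(reactions)))
    order = []
    while remaining:
        for k in remaining:
            reactants_k = set(reactions[k][0])
            # r_k may come next iff its reactants occur in no other remaining reaction.
            if all(reactants_k.isdisjoint(set(reactions[j][0]) | set(reactions[j][1]))
                   for j in remaining if j != k):
                order.append(k)
                remaining.remove(k)
                break
        else:
            return None   # no reaction can safely be placed next
    return order

print(reaction_feedforward_order([({'X': 1}, {'Y': 2}), ({'Y': 2}, {'Z': 1})]))  # [0, 1]
print(reaction_feedforward_order([({'X': 2}, {'Y': 1}),
                                  ({'Y': 1, 'X': 1}, {'X': 1})]))                # None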
Compared to general CRNs, reaction-feedforward CRNs are easy to analyze and prove correct. One reason is that, if a reaction-feedforward CRN can reach a terminal configuration from x⃗ at all, then it is execution bounded from x⃗.
There is a similar definition, called simply feedforward in <cit.>, based on ordering of species rather than reactions.
We use the term species-feedforward to avoid confusion with <Ref>.
We say a reaction (r⃗, p⃗) produces a species S if p⃗(S) > r⃗(S),
and it consumes S if r⃗(S) > p⃗(S).
A CRN is species-feedforward if species can be ordered S_1, S_2, …, S_n such that every reaction producing a species S_ℓ consumes an earlier species S_k where k < ℓ.
Although the term “linear potential function” was not used in <cit.>,
it is shown in <cit.> that species-feedforward CRNs have a linear potential function (assigning weight 1/K^i to species S_i for a suitably large constant K), thus are entirely execution bounded.
The same is not always true of reaction-feedforward CRNs;
for example, X → 2X is reaction-feedforward but not execution bounded.
However, we can use similar techniques to proofs used for so-called noncompetitive CRNs in <cit.> to show “reasonable” reaction-feedforward CRNs are execution bounded.
Suppose in a reaction-feedforward CRN that x⃗ ⇒ y⃗ by execution P, and x⃗ ⇒ z⃗ by execution Q. If any reaction occurs fewer times in P than in Q, then y⃗ is not terminal.
Here we equivalently think of an execution from as a sequence of reactions, since from those and we can deduce the configurations in the execution.
Define #(r_k, P) as the number of times reaction r_k occurs in the execution P.
Let r_k be the first reaction in the reaction-feedforward order such that #(r_k, P) < #(r_k, Q). Assume, for brevity of explanation, that r_k has only one reactant, denoted A;
the argument below, however, is general and applies to any number of reactants in r_k.
By the definition of a reaction-feedforward CRN, the reactions r_{k+1} through r_n do not affect the count of A. Further, reactions r_1 through r_{k-1} can only produce A and not consume it; among reactions r_1 through r_k, only r_k can decrease the count of A.
Let m = #(r_k, P). Let Q' represent the prefix sequence (x⃗, c⃗_1, …, c⃗_p) of Q where the transition c⃗_p → c⃗_{p+1} corresponds to the (m+1)st execution of reaction r_k. The configuration c⃗_p is thus the configuration just before r_k occurs more times in Q than in P.
Note that reactions r_1 through r_{k-1} occur at least as often in P as in Q (i.e., #(r_i, P) ≥ #(r_i, Q) for i = 1 to k−1). Therefore, they occur at least as often in P as in Q', since Q' is a prefix of Q.
Moreover, by our choice of Q', #(r_k, P) = #(r_k, Q'). Since r_k is applicable at c⃗_p, A is present there, and y⃗ contains at least as many copies of A as c⃗_p does, i.e., y⃗(A) > 0. Thus, r_k is applicable at y⃗, so y⃗ is not terminal.
The following corollary implies that any reaction-feedforward CRN that can reach a terminal configuration from x⃗ is execution bounded from x⃗.
In a reaction-feedforward CRN, if there is a terminal configuration y⃗ reachable from initial configuration x⃗, then y⃗ is reached by every sufficiently long execution from x⃗.
Furthermore, all of these executions are permutations of the same number of each reaction type.
In particular, the CRN is execution bounded from x⃗.
Let P be the execution leading from x⃗ to y⃗. Consider any execution Q with |Q| > |P|. By the pigeonhole principle, Q must involve more occurrences of some reaction r than P does.
By <ref>, this would imply that y⃗ is not terminal, which contradicts the premise that y⃗ is terminal.
Therefore, no execution Q can be longer than P.
Consider any execution Q where |Q| = |P|. Then Q must be a permutation of P, as any deviation resulting in more occurrences of some reaction would, by the pigeonhole principle, lead to a contradiction of the terminality of y⃗.
To address the possibility of a shorter terminal execution, consider any execution Q with |Q| < |P|. There must be some reaction r occurring more frequently in P than in Q, and by <Ref>, Q cannot reach a terminal configuration.
As noted,
in the model of continuous CRNs, it is known that
all the functions that can be stably computed (the continuous, piecewise linear functions) can be stably computed by reaction-feedforward CRNs <cit.>.
In contrast, with discrete CRNs computing predicates,
we show that reaction-feedforward CRNs cannot stably decide all semilinear sets by giving two counterexamples, showing that reaction-feedforward CRDs can decide neither “most” mod sets (<ref>) nor “most” threshold sets (<ref>).
Specifically, we chose the parity and majority predicate as our counterexamples,
although the techniques generalize to more complex mod and threshold sets, e.g., [X_1 + 2X_2 ≡ 3 (mod 5)?].
Reaction-feedforward CRDs cannot stably decide the parity predicate [X ≡ 1 (mod 2)?].
We show that in any possible construction, some species must be a reactant of two distinct reactions: after letting the CRN stabilize, introducing another input molecule must be able to flip the output in either direction, and the two flips require at least two distinct reactions sharing a reactant, breaking the reaction-feedforward condition.
Consider the set of even numbers. A simple, non-reaction-feedforward CRD that decides parity is:
Y+X → N
N+X → Y
where X is the input species, Y is a yes voter, and N is a no voter, initialized with 1 Y and n X. In either order of these two reactions, a reactant of the first appears in the second; thus, the CRN is not reaction-feedforward.
To show that no such CRN could decide parity, we show that any construction requires us to have at least one reactant reappear in a later reaction, or even stronger: at least one species must be a reactant of two distinct reactions. Specifically, this is true for the input species X.
To motivate the choice of species, let's consider an even simpler parity computing CRD.
X+X → Y
Y+X → X
where X is both input and votes no, Y votes yes, initialized with 1Y and nX.
Only the input species appears twice as a reactant. Intuitively, this is true for all CRDs because we expect the input to be able to change our answer in either way, reversing the previous one.
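Running this two-reaction CRD confirms the behavior; a count-level Python run (ours; the stable vote is order-independent, so a fixed reaction priority suffices):

def simple_parity(n):
    """Run X+X -> Y, Y+X -> X from {1 Y, n X}; Y votes yes (even), X votes no (odd)."""
    X, Y = n, 1
    while True:
        if X >= 2:
            X, Y = X - 2, Y + 1   # X+X -> Y
        elif X >= 1 and Y >= 1:
            Y -= 1                # Y+X -> X (the X survives as a product)
        else:
            break                 # terminal: no reaction applicable
    return 'even' if Y > 0 else 'odd'

print([simple_parity(n) for n in range(5)])   # ['even', 'odd', 'even', 'odd', 'even']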
Suppose for the sake of contradiction that there is a reaction-feedforward CRD 𝒟 which stably decides whether the initial number n = #X of input X is even.
We withhold two copies of X and let 𝒟 stabilize on the correct output of yes.
Denote by Υ the set of yes voters, and denote the no voters by Ῡ ≜ Λ ∖ Υ. Only species contained in Υ are present in the stable, correct output configuration.
Now, we release one of the remaining copies of X.
We first run the chain reaction (if any) starting from only one X. Let Ω_X := { S | ∃ c⃗: {1X} ⇒ c⃗ and c⃗(S) > 0 } be the set of species
producible from {1X}
(e.g., if there is no reaction X → …, then Ω_X is just {X}).
Without loss of generality, we assume Ω_X ⊆ Υ, that is, all of X's direct products are yes voters (if not, exchange Υ and Ῡ in what follows).
To correct the answer (since n+1 ≢ n (mod 2)), 𝒟 must consume all species currently present and produce at least one copy of a species in Ῡ. It follows that for all S ∈ Ω_X, 𝒟 contains a reaction with S as a reactant. Further, none of these reactions contains a reactant in Ῡ, since no such species are present in the current configuration.
Finally, we release the last remaining copy of X. Again, we produce the set Ω_X from X. To invert the vote again, we must consume all members of Ῡ present and produce at least one member of Υ. The reaction(s) consuming Ῡ must have a member of Ω_X as a reactant, since the configuration was stable without Ω_X. Further, these reactions cannot be any of the ones from before, since they contain a member of Ῡ as reactant.
Since there are at least two reactions sharing a common species as a reactant, the reactions cannot be ordered such that no reactant of the first of these reactions appears in the latter one. This makes 𝒟 non-reaction-feedforward, contradicting our initial assumption.
Reaction-feedforward CRDs cannot stably decide the majority predicate [X_1 ≥ X_2?].
Suppose, for the sake of contradiction, there exists a reaction-feedforward CRD which stably decides the predicate. We let stabilize on input {n X_1, n X_2} (yielding output yes), while withholding two copies of X_2. We release one X_2. Again, we consider the full set of species a single X_2 could produce before reacting with other molecules (denoted Ω_X_2). Without loss of generality, we consider all of them yes voters i.e. Ω_X_2⊆Υ.
The correct output now changes to no, and all yes voters must be consumed by reactions whose reactants are all yes voters; further, these reactions contain all species in Ω_X_2 as reactants.
Once the vote has reversed and stabilized, the configuration contains only species of Υ̅ ≜ Λ∖Υ. We release the last X_2 and let it produce Ω_X_2. Since Ω_X_2⊆Υ, i.e., all elements are yes voters, but the correct vote is still no, all X ∈Ω_X_2 must be consumed again. This time, they must be consumed in reactions involving no voters, which must be distinct from the reactions in the previous step. Thus, every species X ∈Ω_X_2 appears at least twice as a reactant, breaking the reaction-feedforward condition.
§ CONCLUSION
We explored the computational capabilities of execution bounded Chemical Reaction Networks (CRNs), which terminate after a finite number of reactions.
This constraint aligns the model with practical scenarios where fuel supply is limited.
Our findings illustrate that the computational power of these CRNs varies significantly based on structural choices.
Specifically, CRNs with an initial leader and the ability to let only the leader vote can stably compute all semilinear predicates and functions in O(x log x) parallel time.
Without an initial leader, and requiring all species to vote, these networks are limited to computing eventually constant predicates.
This limitation holds considerable weight for decentralized systems modeled by population protocols, which inherently exhibit these traits.
Additionally, we introduced a new characterization of execution bounded networks through a nonnegative linear potential function, providing a novel theoretical tool for analyzing the physical constraints of CRNs.
A key question remains open: Can execution bounded CRNs compute semilinear functions and predicates within polylogarithmic time? Angluin, Aspnes and Eisenstat introduced a fast population protocol that simulates a register machine with high probability. This protocol can perform standard operations like comparison, addition, subtraction, and multiplication and division by constants in O(log^5 n) time <cit.>.
Chen, Doty and Soloveichik applied this construction to CRNs in <cit.>, showing that semilinear functions can be computed by CRNs without error in expected polylogarithmic time in the kinetic model.
Central to their success in both cases is the “phase clock”, which generates a clock signal to indicate the probable completion of an epidemic style chain reaction and orders more recent instructions to overwrite older ones.
This clock is inherently unbounded in its execution, cycling through m stages.
|
http://arxiv.org/abs/2405.09384v1 | 20240515143642 | Probing particle acceleration in Abell 2256: from 16 MHz to gamma rays | [
"E. Osinga",
"R. J. van Weeren",
"G. Brunetti",
"R. Adam",
"K. Rajpurohit",
"A. Botteon",
"J. R. Callingham",
"V. Cuciti",
"F. de Gasperin",
"G. K. Miley",
"H. J. A. Röttgering",
"T. W. Shimwell"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.CO"
] |
Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The Netherlands
David A. Dunlap Department of Astronomy and Astrophysics, University of Toronto, Toronto, ON M5S 3H4, Canada
erik.osinga@utoronto.ca
Istituto Nazionale di Astrofisica, Istituto di Radioastronomia Via P Gobetti 101, 40129 Bologna, Italy
Laboratoire Leprince-Ringuet (LLR), CNRS, École Polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France
Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, Nice, France
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
Dipartimento di Fisica e Astronomia, Università di Bologna, via P. Gobetti 93/2, 40129, Bologna, Italy
ASTRON, Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, Dwingeloo, 7991 PD, The Netherlands and Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The Netherlands
Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029, Hamburg, Germany
Merging galaxy clusters often host spectacular diffuse radio synchrotron sources. These sources can be explained by a non-thermal pool of relativistic electrons that are accelerated by shocks and turbulence in the intracluster medium. The origin of the pool and details of the cosmic ray transport and acceleration mechanisms in clusters are still open questions. Due to the often extremely steep spectral indices of diffuse radio emission, it is best studied at low frequencies. However, the lowest frequency window available to ground-based telescopes (10-30 MHz) has remained largely unexplored, as radio frequency interference and calibration problems related to the ionosphere become severe. Here, we present LOFAR observations from 16 to 168 MHz targeting the famous cluster Abell 2256. In the deepest-ever images at decametre wavelengths, we detect and resolve the radio halo, radio shock and various steep spectrum sources. We measure standard single power-law behaviour for the radio halo and radio shock spectra, with spectral indices of α=-1.56±0.02 from 24 to 1500 MHz and α=-1.00±0.02 from 24 to 3000 MHz, respectively. Additionally, we find significant spectral index and curvature fluctuations across the radio halo, indicating an inhomogeneous emitting volume. In contrast to the straight power-law spectra of the large-scale diffuse sources, the various AGN-related sources that we study often show extreme steepening towards higher frequencies and flattening towards low frequencies. We also discover a new fossil plasma source with a steep spectrum between 23 and 144 MHz, with α=-1.9± 0.1. Finally, by comparing radio and gamma-ray observations, we rule out purely hadronic models for the radio halo origin in Abell 2256, unless the magnetic field strength in the cluster is exceptionally high, which is unsupportable by energetic arguments and inconsistent with the knowledge of other cluster magnetic fields.
Probing particle acceleration in Abell 2256: from 16 MHz to gamma rays
E. Osinga
1,2
R. J. van Weeren1
G. Brunetti3
R. Adam4,5
K. Rajpurohit6,7
A. Botteon3
J. R. Callingham1,8
V. Cuciti7
F. de Gasperin3,9
G. K. Miley1
H. J. A. Röttgering1
T. W. Shimwell1,8
Received 2023-09-18; accepted 2024-05-07
§ INTRODUCTION
Galaxy clusters provide a unique laboratory for studying the physics of particle acceleration in cosmic-scale dilute plasmas from the densest and hottest regions of the cosmic web. In these regions, the intracluster medium (ICM) shines brightly both in thermal bremsstrahlung observable with X-ray telescopes <cit.>, and diffuse synchrotron radio emission due to ultra-relativistic electrons <cit.>. There is significant evidence that both types of emission are driven by the injection of energy through cluster mergers, which heat the ICM and accelerate charged particles through shocks and turbulence <cit.>.
Because of the dynamic nature of the ICM, galaxy clusters host a panoply of interesting radio sources. Jets of active galactic nuclei (AGN) are found to be more bent closer to the centres of clusters <cit.> or possibly re-accelerated by interactions with the ICM <cit.>. On even larger (Mpc) scales, diffuse synchrotron radiation in the form of `radio halos' and `radio shocks' have been widely observed in merging galaxy clusters <cit.>. In this paper, we adopt the classification of the diffuse synchrotron radiation used in <cit.>, where radio halos are found in the centres of clusters with brightness profiles that generally follow the baryonic distribution of the ICM. In contrast, radio shocks are generally found on the outskirts of clusters and are thought to trace Fermi-I acceleration at shocks <cit.>. Additionally, there exists another class of diffuse synchrotron sources that are believed to trace old plasma from AGN that has been re-energised by various processes in the ICM. For example, such diffuse emission could have been re-energised by adiabatic compression or internal turbulence. This class encompasses sources such as gently re-energized tails <cit.> and radio phoenices <cit.>, which can be dubbed `fossil plasma' sources. All classes of diffuse cluster radio emission typically show steep spectra with α<-1, where α denotes the spectral index and the radio flux density follows S_ν∝ν^α, where ν denotes the frequency. This implies that the cluster diffuse emission is brighter, and sometimes easier to detect at high significance, at low frequencies.
There are various open questions related to the details of particle acceleration of different classes of diffuse cluster synchrotron sources. One major problem is that the acceleration seen in both the weak radio shocks in the ICM and the turbulent Fermi-II type acceleration in radio halos is not efficient enough to accelerate particles from the thermal pool <cit.>. A `seed' population of mildly relativistic electrons could alleviate this problem both in radio shocks <cit.> and radio halos <cit.>, although the origin of the seed population need not be the same. Possible scenarios for the origin of the seed population are the injection by AGN <cit.>, multiple weak shocks <cit.>, or secondary products of hadronic proton-proton collisions <cit.>.
The favoured scenario for the origin of radio halos is based on re-acceleration by merger-induced turbulence <cit.>. The role of secondary particles from hadronic interactions in the origin of radio halos is still unclear. A pure hadronic scenario is disfavoured by current radio data and their follow-up <cit.>. However, the only direct limit to the presence of cosmic ray protons (CRp) and to their contribution to radio halos comes from gamma-ray observations. At the moment, the detection of gamma rays from clusters remains elusive, and the only direct constraints on CRp come from the Coma cluster <cit.>.
Abell 2256 is one of the best laboratories for studying particle acceleration mechanisms. This is because of its large angular size and high flux density due to its proximity <cit.>, coupled with the fact that it is undergoing a massive (M_500=6.2×10^14M_⊙; ) and complex merger. The cluster hosts clear, well-characterised examples of all known classes of diffuse cluster radio emission. It also hosts the lowest redshift radio halo with an ultra-steep spectrum at low frequencies (α<-1.5). It has therefore been studied extensively across the electromagnetic spectrum <cit.>.
Until now, neither the ultra-low frequencies (<100 MHz) nor the high-energy gamma-rays have been properly explored.
No good quality data existed on Abell 2256 below 100 MHz due to calibration problems related to the ionosphere, although some ultra-low frequency observations were taken during the early phase of the LOFAR telescope when the calibration and imaging techniques were still in their infancy <cit.>. Those observations, combined with data up to 1.4 GHz, showed that the radio shock had an unusually flat spectrum of α=-0.81±0.03, inconsistent with standard diffusive shock acceleration and that the radio halo showed unexpected flattening towards higher frequencies. These results were however not corroborated by recent higher frequency investigations <cit.>. A more thorough ultra-low frequency study of Abell 2256 is therefore warranted to accurately quantify and characterise the low-frequency emission.
Recent advances in calibration and imaging techniques have made routine LOFAR Low Band Antenna (LBA) observations at ∼ 50 MHz possible <cit.>. In principle, the LOFAR LBA system works down to 10 MHz <cit.>, but no standard data reduction pipeline yet exists for observations in the 10-30 MHz range.
In this paper, we present the deepest radio images made at the lowest radio window available to ground-based telescopes. We study particle acceleration in Abell 2256 by combining those data with higher-frequency data from the literature and gamma-ray upper limits from 13.5 years of Fermi-LAT observations. A flat concordance cosmology with H_0=70 kms^-1Mpc^-1, Ω_m=0.3 and Ω_Λ=0.7 is adopted, which means that at the cluster redshift, 1 arcsecond corresponds to 1.12 kpc.
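The quoted angular scale can be reproduced with astropy; a minimal sketch, where the cluster redshift z ≈ 0.058 is an assumption (it is not restated in this section):

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)
z = 0.058  # assumed redshift of Abell 2256
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(scale)  # ~1.1 kpc / arcsec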
§ DATA
The radio observations used in this work are listed in Table <ref>. Abell 2256 was observed with both the LOFAR LBA and High Band Antenna (HBA) systems for 16 hours each. The observations and calibration process are detailed below, separately for the HBA and LBA. The flux density scale of both systems was verified with bright compact sources in the field using recent higher frequency data from <cit.>, as shown in Appendix <ref>. We found that the LOFAR HBA maps were biased slightly high, and thus corrected those with a scaling factor of 0.83 <cit.>. Throughout the paper, a 10% uncertainty will be assumed on all flux measurements, which is common for LOFAR observations <cit.>. All images are made using Briggs weighting with a value of the robust parameter equal to -0.5 <cit.>.
§.§ LOFAR HBA
The LOFAR HBA (120-168 MHz) observations were taken in the 𝙳𝚄𝙰𝙻_𝙸𝙽𝙽𝙴𝚁 configuration (i.e. the remote station collecting area is matched to the core stations) in two different observing sessions. Observations taken on 2018-05-01 include Abell 2256 at a distance of 1.6 degrees from the pointing centre as part of the LOFAR project with code LC9_008. Additionally, observations from the LOFAR Two-Metre sky survey <cit.> of the field P255+78, include Abell 2256 at a distance of 1.0 degree from the pointing centre.
The total target observation time is 16 hours spread equally over the two observations and both observations were book-ended with 10-minute scans on the calibrator source 3C295.
We separately calibrated both observations using the standard LoTSS DR2 pipeline <cit.>. First, direction-independent effects such as polarisation alignment, Faraday rotation, bandpass and delay terms were corrected in 𝚙𝚛𝚎𝚏𝚊𝚌𝚝𝚘𝚛[<https://git.astron.nl/eosc/prefactor3-cwl>] using the calibrator observations <cit.>. The solutions were applied to the target field after which several cycles of direction-dependent (self-) calibration were done.
After the complete direction-dependent calibrated image was created with the standard LoTSS DR2 pipeline, we extracted a region of 0.5×0.5 around Abell 2256, using the extraction procedure detailed in <cit.>. This optimises the image quality of the main target of interest by removing sources away from the target and performing a direction-independent self-calibration towards the target with the full 16-hour dataset. The resulting image is shown in Figure <ref>. The final science image has a background RMS noise of 90 μJy beam^-1 when imaged at the half-power beamwidth (HPBW) resolution of 6^''×6^''.
§.§ LOFAR LBA
Abell 2256 was observed with the LBA system as part of LOFAR project LC15_026 from 16 to 64 MHz in three separate observing runs, detailed in Table <ref>. We employed a similar observing strategy to the LOFAR LBA sky survey <cit.>, observing a calibrator source (3C380) simultaneously during the entire run. Similar to the LoTSS DR2 pipeline and LoLSS pipelines, we first used 𝚙𝚛𝚎𝚏𝚊𝚌𝚝𝚘𝚛 over the full bandwidth to calculate the direction-independent corrections, which may vary over the time of the observation. These corrections include the polarisation alignment, bandpass and LOFAR beam model.
Afterwards, calibration was performed separately for the frequency range 16-30 MHz and 30-64 MHz.
§.§.§ 30-64 MHz
For the frequency range 30-64 MHz, we used the pipeline employed for LoLSS[<https://github.com/revoltek/LiLF>] <cit.>. This pipeline first solves for direction-independent effects <cit.> in the target field by self-calibration, starting from a model from TGSS ADR1 <cit.> and then direction-dependent effects as described in <cit.>. After successful calibration and imaging of the complete field-of-view, we extracted the target cluster using the method detailed in <cit.>. The final image integrated from 30-64 MHz has a resolution of 19^''×12^'' and an rms noise of 1.4 mJy beam^-1. It is shown in the left panel of Figure <ref>.
§.§.§ 16-30 MHz
For the lower part of the LBA sub-band, from 16 to 30 MHz, no standard pipeline is yet available, although <cit.> recently presented a calibration strategy for the decametre band that is shown to work for a standard LOFAR observation of an arbitrary field with typical observing conditions. We have used a similar method to calibrate the Abell 2256 field, proceeding as follows.
We re-calculated phase calibration solutions in two steps using the calibrator source and solution intervals and smoothness constraints optimized for the frequency range. First, differential Faraday rotation was calibrated by converting the data to a circular basis and taking only the phase difference of the XX and YY correlations. This has the advantage that all scalar phase effects are removed from the data.
Then, scalar phase effects (i.e. ionospheric dispersive delay and clock terms) were taken out by solving for a model of the calibrator source. For both of these calibrations, we constrained the solutions to be smooth by convolving them with a Gaussian kernel that has a width that is linearly proportional with the frequency, to follow the ν^-1 dependence of ionospheric dispersive delays.
The calibrator phase solutions were then applied to the target field, which concluded the data pre-processing. The first direction-independent image was then made by means of self-calibration using a bright calibrator in the target field, that dominates the flux density. We phase shifted to the brightest source in the target field, 3C390.3, and used the same calibration strategy as for the calibrator field, which solves for differential Faraday rotation and residual phase effects, but now in the direction of the target field. We used again the TGSS-ADR1 survey as the starting model.
Finally, for direction-dependent calibration of the target field, we manually extracted ∼ 1^∘× 1^∘ regions around the 13 brightest sources in the field. Those were self-calibrated to correct for ionospheric distortions by calibrating for total electron content (TEC) and phase simultaneously (𝚝𝚎𝚌𝚊𝚗𝚍𝚙𝚑𝚊𝚜𝚎 in DP3; ), again using the TGSS-ADR1 survey as a starting model. The final direction-dependent calibrated image was made by combining the solutions from different directions to a smooth screen.
The full field-of-view of the LBA image is shown in the appendix (Fig. <ref>), where the imaging was done in WSclean using multi-scale clean <cit.> and the image-domain gridder <cit.>. Then, as was similarly done for the higher frequency data, we manually extracted the direction of the target and performed additional rounds of self-calibration to optimise the calibration quality in the direction of Abell 2256 <cit.>. The right panel of Figure <ref> shows the resulting image of Abell 2256. We achieved unprecedentedly low Gaussian noise levels (< 10 mJy beam^-1) in the frequency range 16-30 MHz. This presents not only the deepest-ever image of Abell 2256 at such low frequencies but also of any celestial target.
§.§ Gamma-ray data
For comparison with gamma-ray observations, we have made use of publicly available data from the Fermi Large Area Telescope. The event selection and analysis follow the work presented in <cit.>.
We used 13.5 years of Pass 8 data (P8R3), collected from August 4, 2008, to February 7, 2022. They were extracted within a radius of 10 degrees from the cluster centre. We selected events with energies from 200 MeV to 300 GeV and we applied the 𝙿8𝚁3_𝚂𝙾𝚄𝚁𝙲𝙴_𝚅2 selection (event class 128) and selected 𝙵𝚁𝙾𝙽𝚃+𝙱𝙰𝙲𝙺 converting photons (event type 3). Data from zenith angles greater than 90 degrees were filtered out to remove the Earth limb photons. Time selection and rocking angle cuts were applied following the standard recommendations: 𝙳𝙰𝚃𝙰_𝚀𝚄𝙰𝙻>0 && 𝙻𝙰𝚃_𝙲𝙾𝙽𝙵𝙸𝙶==1, and (𝙰𝙱𝚂(𝚁𝙾𝙲𝙺_𝙰𝙽𝙶𝙻𝙴)<52).
Here, we focus mainly on the gamma-ray spectral constraints, to be combined with radio synchrotron data. In order to extract the cluster SED, we performed a joint likelihood fit of both the background components and the cluster using the fermipy package <cit.>. The data were binned both in energy and space, with 8 energy bins per decade and 0.1x0.1 deg^2 pixels. The region of interest (ROI) width was set to 12 degrees. We model the ROI using the 4FGL-DR2 catalog (𝚐𝚕𝚕_𝚙𝚜𝚌_𝚟20.𝚏𝚒𝚝; ) together with the isotropic diffuse background (𝚒𝚜𝚘_𝙿8𝚁3_𝚂𝙾𝚄𝚁𝙲𝙴_𝚅2_𝚟1.𝚝𝚡𝚝) and the galactic interstellar emission (𝚐𝚕𝚕_𝚒𝚎𝚖_𝚟07.𝚏𝚒𝚝𝚜). The cluster gamma-ray template was modelled using the MINOT package <cit.>. MINOT requires a thermal gas model and a cosmic ray proton (CRp) spatial and spectral distribution to compute gamma-ray templates from hadronic interactions. The thermal model was fixed to the one discussed in Section <ref>. When fitting the sky model to extract the SED, the photon spectral index is allowed to vary within the bins so that the final results are insensitive to the CRp spectrum. Given the fact that Abell 2256 is barely resolved by the Fermi-LAT, the SED constraints are only weakly sensitive to the assumptions made about the CRp spatial distribution with the cluster (see Section <ref> for the modelling). We performed the spectral extraction using different assumptions about the spatial modelling and concluded that the results remained stable. In the end, we obtained, in each energy bin, the likelihood scan for the normalization of the flux that is either used to constrain the cluster CRp normalisation and spectrum independently from other wavelengths, or used jointly with radio data for testing acceleration models (Section <ref>).
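Schematically, this kind of SED extraction can be set up with fermipy along the following lines. This is a sketch, not the authors' actual configuration: the file names are placeholders, the cluster coordinates are approximate values we supply, and in practice the MINOT-generated cluster template would be added to the model:

from fermipy.gtanalysis import GTAnalysis

config = {
    'data':      {'evfile': 'ft1_events.txt', 'scfile': 'ft2_spacecraft.fits'},
    'binning':   {'roiwidth': 12.0, 'binsz': 0.1, 'binsperdec': 8},
    'selection': {'emin': 200, 'emax': 300000, 'zmax': 90,
                  'evclass': 128, 'evtype': 3,
                  'ra': 255.9, 'dec': 78.7},   # approximate Abell 2256 centre
    'gtlike':    {'edisp': True, 'irfs': 'P8R3_SOURCE_V2'},
    'model':     {'src_roiwidth': 12.0,
                  'galdiff': 'gll_iem_v07.fits',
                  'isodiff': 'iso_P8R3_SOURCE_V2_v1.txt',
                  'catalogs': ['gll_psc_v20.fit']},
}

gta = GTAnalysis(config, logging={'verbosity': 3})
gta.setup()      # prepare livetime cube, exposure, and source maps
gta.optimize()   # initial maximum-likelihood fit of the ROI
# the MINOT-generated cluster template would be added here via gta.add_source()
sed = gta.sed('Abell2256')  # per-bin likelihood scans of the flux normalization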
§ RESULTS - RADIO ANALYSIS
The full-resolution LOFAR images are shown in Figures <ref> and <ref>, where no uv baseline filtering is applied. Although the resolution is low in the 16-30 MHz band, we can still clearly distinguish the distinct sources of radio emission in the cluster. The radio halo and radio shock are clearly resolved, and the brightest AGN-related sources can still be separated from the diffuse radio emission.
To emphasise the low surface brightness emission in the cluster, we plot the three frequency bands all convolved to the same resolution of 39^''×24^'' in Figure <ref>. We note that at the matched resolution, the 30-64 MHz and 16-30 MHz images have a similar sensitivity to the HBA system for sources with a spectral index of α=-1.3 and α=-1.7, implying that the HBA image is more sensitive than the two LBA images for sources that have α>-1.3 and α>-1.7, respectively.
We also made two spectral index maps, between 23-46 and 46-144 MHz, at a common resolution of 39^''×24^''.
For these maps, we set the robust parameter to -0.5 and employed an inner uv baseline cut at 100 times the observing wavelength (i.e. 100λ), to ensure short baselines are similarly sampled at all frequencies. Additionally, only pixels with a flux density greater than three times the RMS noise in all three images were used. Figure <ref> shows the spectral index maps, with contours representing the total intensity of the higher-frequency image. The spatial distributions of uncertainties are shown in the Appendix Figure <ref>, calculated from Eq. <ref>
including both the flux scale offset and statistical uncertainty. The median spectral index uncertainty is 0.31 for the lower part of the LBA band and 0.19 for the LBA-HBA map. In the following sections, we present the analysis of the radio halo, radio shock and AGN-related sources separately.
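For reference, the per-pixel spectral index and its uncertainty follow standard relations; a minimal numpy sketch (the function name is ours), combining the statistical and flux-scale terms as in the equation referenced above, with the 10% flux-scale error adopted in this paper:

import numpy as np

def spectral_index_map(S1, S2, nu1, nu2, rms1, rms2, fscale=0.10):
    # S1, S2: surface-brightness maps at frequencies nu1, nu2 (same grid and beam);
    # in practice pixels below 3x the rms in each map are masked first.
    alpha = np.log(S1 / S2) / np.log(nu1 / nu2)
    # combine background rms and flux-scale uncertainty per pixel
    err1 = np.sqrt(rms1**2 + (fscale * S1)**2) / S1
    err2 = np.sqrt(rms2**2 + (fscale * S2)**2) / S2
    alpha_err = np.sqrt(err1**2 + err2**2) / np.abs(np.log(nu1 / nu2))
    return alpha, alpha_err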
§.§ Radio halo
In the low-resolution images shown in Figure <ref>, the halo appears largest at 144 MHz, owing to the high sensitivity of the HBA system. In fact, with the low-resolution image bringing out the low surface-brightness emission, the radio halo is larger than reported in the recent work by <cit.>, with a largest linear size (LLS) of 0.40^∘, corresponding to 1.6 Mpc at the cluster redshift. This is due to the fact that the halo LLS was measured in 20^'' images with a uv cut of 100λ by <cit.>, but these full uv plane lower resolution images show that low surface-brightness emission extends further.
The halo encompasses the radio shock, extending out to about 60% of the R_500=1273 kpc <cit.> radius.
The halo emission appears to become filamentary in the south-east region, which is best visible in the high-resolution 120-168 and 30-64 MHz images in Figures <ref> and <ref>, where the latter has been marked to indicate the location of the possible filament. We note that the filamentary emission is oriented approximately parallel to the source AI, but also underline that its detection remains tentative.
We calculated the integrated spectral index of the radio halo between 23 and 144 MHz using the same regions from the recent higher frequency study from <cit.> for both the subtraction of compact sources and the integration of the halo flux density, as well as the definition of the sub-regions `core' and `wedge' (see Fig <ref>). The resultant integrated spectrum is shown in Figure <ref>, where the inset shows our data together with the higher frequency measurements, where for a correct comparison accounting for the different baseline coverages, we filtered out baselines below 100λ when measuring the halo flux. The resultant halo flux measurements are given in Table <ref>. The integrated spectral index follows a power-law of -1.56 ± 0.02 over almost two orders of magnitude in frequency, from 24 to 1500 MHz. However, the spectral index of the core region is flatter than the overall radio halo, with α=-1.36±0.08, while the spectral index of the `wedge' is slightly steeper at low frequencies.
The integrated spectral index agrees within two standard deviations with the spectrum measured at higher frequencies (α=-1.63±0.03; ), with no evidence for spectral curvature. This differs from the curved spectra observed in the radio halos of other clusters with wide frequency coverage, such as the Coma cluster, MACS J0717.5+3745 or Abell S1063 <cit.>.
We note that for our measurements we have ensured a consistent uv-min in wavelength units for all datasets when imaging. This is highly important, as for example the halo is significantly brighter, by a factor of ∼2, at LOFAR frequencies in images without inner uv cuts, indicating the presence of large-scale emission that is only detected with the shortest LOFAR baselines. Whilst our measurements integrated the pixels within a given region, we note that if we repeat the exercise but using the flux from the best-fit spherical models of the radio halo surface brightness (see Section <ref> and Appendix <ref>) we find consistent results for the spectral index measurements.
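A weighted power-law fit of such an integrated spectrum is conveniently done in log-log space; a minimal sketch (function name ours; the flux densities would come from the table referenced above):

import numpy as np

def fit_powerlaw(freq_mhz, flux_jy, flux_err_jy):
    # linear least squares in log-log space: log S = alpha * log nu + const
    x, y = np.log10(np.asarray(freq_mhz)), np.log10(np.asarray(flux_jy))
    sigma_logS = np.asarray(flux_err_jy) / (np.asarray(flux_jy) * np.log(10.0))
    p, cov = np.polyfit(x, y, 1, w=1.0 / sigma_logS, cov=True)
    return p[0], np.sqrt(cov[0, 0])  # alpha and its 1-sigma uncertainty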
To investigate the spatial distribution of the spectral index, we calculated the standard deviation from the halo region of the spectral index maps of the 23-46 and 46-144 MHz bands. These are std(α_23^46)=0.43 and std(α_46^144)=0.25, respectively, while the median uncertainties in the spectral index map in the radio halo region were found to be 0.36 and 0.21 respectively. However, the spectral index uncertainty varies slightly across the radio halo, so to check whether the observed values are higher than the fluctuations expected from noise, we performed the following Monte Carlo analysis. First, we calculated versions of the spectral index uncertainty maps with the only source of noise being background RMS fluctuations, as a systematic flux scale uncertainty would not contribute to spatial variations. Then, we assumed a constant spectral index map across the halo, and re-sampled every pixel in the constant map from a Gaussian with a standard deviation equal to the uncertainty in that location of the uncertainty map. This process was repeated 100 times, and the standard deviation of the re-sampled maps is then a good estimate of the level of fluctuations caused by noise. Figure <ref> shows the comparison between the observed spectral index fluctuations and the magnitude of the fluctuations expected from noise. The observed standard deviation of the spectral index cannot be accounted for by map noise only.
However, calibration or deconvolution errors might contribute noise as well. Therefore, we repeated this process assuming an additional noise source that is a fraction of the surface brightness, until we reach the observed standard deviation in the spectral index. We find that at least a 13% surface brightness uncertainty would be needed to explain the observed standard deviation of the spectral index. Such large errors seem unlikely, given the fact that the radio halo in Abell 2256 is detected at high significance in all maps (see Fig. <ref>), and LOFAR HBA observations have been shown to reliably recover more than 90% of the flux of radio halos, even up to the scale of the radio halo in Abell 2256 <cit.>. To estimate the level of noise added by calibration or deconvolution issues in the LBA maps, we calculated the flux density of the radio halo per self-calibration round and found that the results were very stable. The radio halo flux density varied on the order of 1% between self-calibration rounds. It is thus unlikely that there are deconvolution errors that are causing additional surface brightness fluctuations larger than 10%. We conclude that the spectral index shows excess scatter over the noise, likely caused by physical fluctuations in the emitting region.
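The Monte Carlo test described above amounts to the following sketch (illustrative; sigma_alpha stands for the per-pixel statistical spectral-index uncertainties within the halo region):

import numpy as np

rng = np.random.default_rng(0)

def expected_scatter_from_noise(sigma_alpha, n_real=100):
    # sigma_alpha: 1D array of per-pixel alpha uncertainties (noise terms only)
    stds = [np.std(rng.normal(0.0, sigma_alpha)) for _ in range(n_real)]
    return np.mean(stds), np.std(stds)

# if the observed std of the alpha map significantly exceeds this value,
# the fluctuations cannot be explained by map noise alone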
Higher frequency data on Abell 2256 has shown that the spectral index also has a radial trend, with steeper spectra towards the outskirts <cit.>. The low-frequency spectral index maps in Figure <ref> do not show a clear radial trend when analysed in a comparable way. To quantify this, we calculated the spectral index by fitting the 23, 46 and 144 MHz flux densities in radial bins using concentric annuli. The best-fit spectral index, with the one-sigma uncertainty, is shown as a function of radius in Figure <ref>. Although there is a hint of steepening in the last radial bin and flattening in the core, the data are consistent with a constant spectral index as a function of radius, given the large error bars.
A superposition of curved spectra could result in a single power-law spectrum when integrated over the entire radio halo, as observed for example in Abell 2744 <cit.>.
To investigate possible spatial variation of the curvature, we computed the spectral curvature map from the spectral index maps as follows
α_23MHz^46MHz-α_46MHz^144MHz.
This spatial distribution of spectral curvature is shown in Figure <ref>, and the corresponding uncertainty map in Figure <ref>. The median uncertainty across the radio halo is 0.39 while the standard deviation of the measured curvature across the radio halo is 0.52. Using the same Monte Carlo test as above, we find that setting a level of 10% noise fluctuations due to calibration or deconvolution errors produces a lower standard deviation in curvature (0.430 ± 0.005) than is observed. Thus, the spectrum locally exhibits a convex or concave shape in different regions of the radio halo, likely due to physical fluctuations in the emitting region. Due to large uncertainties, we cannot say if the spectral curvature variations show a radial dependence, as Figure <ref> is consistent with a straight line.
In summary, the radio analysis of the halo in Abell 2256 reveals that the integrated spectrum of the halo is consistent with a steep power-law. However, we also find evidence of spectral index and curvature variations that do not follow a radial profile, indicating a complex and inhomogeneous environment.
§.§ Radio shock
Abell 2256 hosts one of the clearest examples of filamentary radio emission inside a radio shock <cit.>. Usually, such filaments are observed at gigahertz frequencies only due to the high resolution required but here we present the first case where filamentary radio emission is observed in a radio shock down to at least 23 MHz. In the total intensity images shown in Figures <ref> and <ref> the well-known filaments inside the radio shock (source G and H) are still clearly seen at 144 and 46 MHz, while the 23 MHz image shows only the brightest larger filaments.
The integrated spectrum of the radio shock between 23 and 144 MHz is plotted in Figure <ref>, where we have divided the shock into three sub-regions (shown in Figure <ref>) to allow us to study the spectral steepening from west to east across the radio shock that was noticed at higher frequencies by <cit.> and <cit.>. The radio shock flux measurements are given in Table <ref>. In agreement with these previous studies, Figure <ref> shows that the westernmost region R1 has a steeper spectrum than regions R2 and R3, with R1 showing α=-1.08±0.07 and R2 and R3 showing α=-0.84±0.07 and α=-0.83±0.08, respectively. We note that the uncertainties on the spectral index measurements are dominated by the systematic uncertainty in the flux density scale, and thus have a strong spatial correlation while the statistical uncertainty is on the order of 0.02. The total integrated spectrum, when combined with higher frequency data from the recent study by <cit.>, agrees with a straight power-law with α=-1.00±0.01 from 24 to 3000 MHz.
However, at low frequencies, the contribution of the radio halo flux to the region of the radio shock might become significant and cannot be easily separated in the images. We can estimate the contribution using the spherical halo models that were fit in Appendix <ref>. Assuming the radio halo is spherically symmetric, we find that the halo contributes 10%, 26% and 39% of the total radio shock flux in the 144, 46 and 23 MHz images respectively. However, subtracting this contribution only flattens the spectrum marginally. With the subtraction of the estimated radio halo flux from the radio shock region, we find that the radio shock spectrum between 24 and 3000 MHz still follows a power-law with α=-0.95±0.01.
The spectral index trend across the radio shock can also be seen in the spectral index map at 20^'' between 46 and 144 MHz, shown in Figure <ref>. This spectral index map indicates steepening from the southwest towards the northeast side of the radio shock, where we see preferentially emission with α<-1. In contrast, the west side of the radio shock shows flatter spectrum emission, with α>-1.
§.§ AGN related emission
Abell 2256 also hosts a large number of complex radio sources, that appear to be either directly related to AGN or associated with (revived) fossil AGN plasma <cit.>. Fossil plasma sources typically show very steep spectra that are often curved at high (GHz) frequencies <cit.>. In the case of Abell 2256, there are various (candidate) fossil plasma sources.
First, the sources labelled AG+AH and AI were discovered in <cit.>, where they showed spectral indices at frequencies higher than 140 MHz of α<-1.95 and α<-1.45 respectively. The possibility was raised that both sources are revived fossil plasma sources, although AG+AH might also simply be old AGN emission from the long, tailed radio galaxy. This scenario is supported by the high-resolution radio images of Figures <ref> and <ref>, where AG+AH seems to be connected to the long Mpc-sized tailed radio source C. In the high-resolution HBA image, we also clearly observe for the first time `ribs' coming off the source AG+AH. These are reminiscent of the ribs seen in the radio tail dubbed T3266 in Abell 3266 as observed with the MeerKAT telescope <cit.>.
Second, there is the F complex of sources, also discussed by <cit.> and <cit.>. The F complex of sources is located on the west side of the radio halo, and consists of three components, F1, F2 and F3. The narrow-angle tailed source F3 is clearly associated with a cluster member (<cit.> galaxy 122), situated at the eastern tip of the radio source <cit.>, as shown in Figure <ref>. However, the nature and origin of the other two sources are still unclear. One possibility is that F1 and F2 are also related to the same galaxy as F3, but another possibility is that F1 and F2 consist of fossil radio plasma from previous episodes of AGN activity (possibly from F3) that is compressed somehow by interactions in the ICM <cit.>.
The 23 MHz data shows that F2 and F3 are more extended than previously reported at higher frequencies <cit.>. The radio emission of F3 seems to fade into the wedge arc of the radio halo, indicating a possible connection between the tailed radio source and the halo arc.
Interestingly, we observe no clear spectral index gradient across F1-F3.
Additionally, we detect a new, very steep, region just below the F complex, co-spatial with the radio halo. It is clearly seen as a bright region in the 46 MHz contours shown in Figure <ref> and shows a spectral index of α<-2 in the 46-144 MHz spectral index map (Fig. <ref>). This seems like a fossil plasma source due to the extreme steepness of the spectrum, and could possibly be associated with the F complex as well. We, therefore, label it F4 in this study. The optical overlay, Figure <ref>, shows that the 46 MHz contours seem to originate from the cluster galaxy MCG+13-12-020 at 17h05m39.5s +78d37m34.2s to the south-west, which agrees with the spectrum flattening spatially towards this galaxy, implying a possible optical host.
As lower energy electrons cool less efficiently through synchrotron and inverse Compton radiation, our low-frequency data allows us to probe the aging of the observed emission. By fitting their spectra (see Fig <ref>) with simple synchrotron ageing models, we estimated the ages of AG, AH, AI and F1-F4, adding high-frequency data from the literature where possible. All sources except F1 and F4 show spectral flattening towards lower frequencies, indicating that we do not observe the break frequencies of F1 and F4, which are likely below 23 MHz.
We used synchrofit[https://github.com/synchrofit] <cit.> to fit standard synchrotron models to the various AGN-related sources in Abell 2256. We fit a continuous injection <cit.> model to the curved spectrum sources. The model has three free parameters: the injection index s=1-2α_inj, where α_inj is the radio spectral index upon injection, the break frequency after which the spectrum steepens, and the remnant fraction (i.e. the fraction of time the source is `off'). Following the minimum energy condition as calculated in <cit.> which follows the <cit.> formula, we assume a tangled magnetic field with a strength of 7 μG for
the F complex. Doing the calculation for AG+AH and AI gives lower values of the minimum energy magnetic field strength around ∼3 μG, but we assume 7μG as well to give a conservative age estimate. We note that the maximum age estimate is obtained for B=B_CMB/√(3) <cit.>, which results in 1.8μG at the redshift of Abell 2256. The resulting spectral ages, best-fit injection indices and break frequencies for the AGN-related sources are given in Table <ref>.
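For orientation, such spectral ages follow from the commonly used radiative ageing relation (e.g. in the form adopted by Murgia et al.); a sketch, with the cluster redshift z ≈ 0.058 again an assumption on our part:

import numpy as np

def spectral_age_myr(B_uG, nu_break_ghz, z=0.058):
    # t = 1590 * sqrt(B) / ((B^2 + B_CMB^2) * sqrt(nu_b * (1+z))) Myr,
    # with B and B_CMB in microgauss and the break frequency in GHz
    B_cmb = 3.25 * (1.0 + z)**2  # equivalent CMB field
    return 1590.0 * np.sqrt(B_uG) / ((B_uG**2 + B_cmb**2)
                                     * np.sqrt(nu_break_ghz * (1.0 + z)))

print(spectral_age_myr(7.0, 0.1))  # age for B = 7 uG and a break at 100 MHz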
The straight spectrum of F1 over multiple decades in frequency indicates the source is likely still being energised and we are observing the spectrum above the break frequency. For a simple continuous injection model, the spectrum would consist of two power-laws with α=α_inj-0.5 after the break frequency <cit.>. The best-fit spectral index of F1 was found to be α=-1.36±0.03, implying a radio injection index of α_inj=-0.86±0.03.
For source F4 the simple continuous injection model does not fully work, because it would imply an injection index of α_inj = -1.9 + 0.5 = -1.4 ± 0.1, which is much steeper than typical injection indices (>-1). Thus we are likely observing the exponential steepening of the spectrum of F4, implying relativistic particles are not continuously injected.
§ RADIO - GAMMA-RAY COMPARISON
Nearby clusters such as Abell 2256, whose radio halo exhibits an ultra-steep spectrum, are expected to generate gamma-ray flux in the Fermi-LAT energy band if the halo is generated by secondary particles from hadronic interactions <cit.>. They are therefore ideal candidates to constrain the contribution of secondary electrons from hadronic interactions to the cosmic ray electron population. In this section, we combine our LOFAR data with upper limits from Fermi-LAT data to test a purely hadronic origin of the halo.
§.§ Theoretical framework
To study the contribution of the hadronic interactions to the radio halo in Abell 2256, in this section, we model hadronic interactions of cosmic ray protons with thermal ions. Assuming spherical symmetry, we obtained the thermal properties of Abell 2256 using X-ray data from ACCEPT, ROSAT <cit.>, and the Sunyaev-Zel’dovich data from Planck <cit.>. We fit a gNFW profile <cit.> for the pressure and a simple β-model for the gas density as a function of radius (R). The best-fit parameters for the beta model were found to be n_th(0) = 3×10^-3 cm^-3, r_c = 341 kpc, and β=0.77
in the standard β-model given by
n_th(R) = n_th(0) [ 1 + ( R/r_c)^2 ]^-3β/2
We verified that the fits also closely match the X-ray data from the Archive of Chandra Cluster Entropy Profile Tables <cit.> out to R_500.
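With the best-fit values above, the thermal density profile can be evaluated as follows (a schematic helper, not the authors' fitting code):

import numpy as np

def n_th(R_kpc, n0=3e-3, r_c=341.0, beta=0.77):
    # standard beta-model thermal electron density [cm^-3], R in kpc
    return n0 * (1.0 + (R_kpc / r_c)**2)**(-1.5 * beta)

print(n_th(0.0), n_th(1273.0))  # central density and density at R_500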
For the non-thermal properties, we assumed a power-law distribution of cosmic-ray proton density that follows the thermal plasma distribution n_th as follows:
N_CRp(p_p,R) = C_p [ n_th(R) kT(R) ]^a p_p^-s,
where C_p is a constant, n_th and kT denote the thermal gas density and temperature as functions of radius R, and p_p denotes the momentum of the protons, following a power-law distribution with index s, down to a conservative momentum cut-off p_p,min∼ 0.1 mc (approximately 30 times larger than that of the thermal protons). A power-law momentum distribution for cosmic-ray protons is routinely assumed in the literature as a result of acceleration mechanisms, and agrees with the power-law radio spectrum found for the halo down to low frequencies. The proportionality between the cosmic ray proton energy density and the thermal plasma energy density (W_CRp∝W_th^a) is parameterised by a, which is constrained from the Abell 2256 radio halo intensity profile in the next section (Fig. <ref>).
The collisions between CRp[For CRp with momentum above the threshold p> p_thr; the threshold kinetic energy being 289 MeV <cit.>] and thermal protons create pions (denoted by π) that decay into γ-rays, electrons/positrons and neutrinos <cit.>. The spectrum of secondary electrons is calculated as in <cit.>, assuming stationary conditions. First, the injection spectrum of electrons and positrons is given by numerical integration:
Q_e^±(p_e)= 8 β_μ^' m_π^2 n_th c^2/(m_π^2-m_μ^2)∫_E_mind E_π/E_π∫_p_*d p_p/β̅_μβ_p N_CRp(p_p)
×dσ^±/d E(E_π, E) F_e(E_e, E_π),
where p_e is the electron momentum, E is the proton energy (otherwise E_i is the energy of species i), m_π and m_μ are the pion and muon masses, β_μ^'=0.2714, E_min=2E_e m_π^2/(m_π^2+m_μ^2), p_* = max{p_th, p_π}, β̅_μ=√(1-m_μ^2/Ē_μ^2), Ē_μ=(1/2)E_π(m^2_π-m^2_μ)/(β_μ^' m_π^2)
and dσ^±/dE is the differential inclusive cross-section for the production of neutral and charged pions. This cross-section is calculated by combining four energy ranges, following <cit.> and references therein. Finally, F_e(E_e, E_π) is given in <cit.> below Eq. 36. The resulting steady-state distribution of the secondary electrons is then calculated as
N_e^±(p_e)=1/|dp_e/dt|_rad + |dp_e/dt|_C∫_p Q_e^±(x)dx,
where |dp_e/dt|_i denotes radiative (i=rad) and Coulomb (i=C) losses, from <cit.>. The steady-state distribution is a good assumption for galaxy clusters, as CRp are confined to the cluster for many Gyr <cit.>, and the timescale of p-p collisions is much larger than the relatively short lifetime of synchrotron-emitting (GeV) electrons (∼ 10^8 yr, e.g. ). Thus within a few cooling times, the electron spectrum will reach a steady state balance between injection and cooling.
These electrons will generate a synchrotron emissivity obtained from the following numerical integration:
j_syn(ν) = √(3)e^3/m_e c^2∫_0^π/2 dθsin^2θ∫ N_e^±(p_e) F(ν/ν_c) dp_e,
where e is the elementary charge, m_e the electron mass, c the speed of light, ν_c is the synchrotron critical frequency and F is the synchrotron Kernel <cit.>. The pitch angle θ between the magnetic field and the electron velocity is assumed to be randomly distributed.
Assuming a power-law distribution of CRp as in Eq. <ref>, the spectrum of the synchrotron emissivity can be approximated with a power-law in the form j_syn(ν) ∝ν^α, with α≃ (1-s)/2.[ We note that Coulomb losses may generate a flattening in the spectrum of CRp at lower energies, which may induce a corresponding flattening in the synchrotron spectrum generated by secondary electrons at low frequencies. In the case of galaxy clusters, this effect is expected to be significant only in the cores, where densities and magnetic field strengths are higher. However, the low-frequency data excludes the possibility of flattening down to energies of E ∼ 3-10 GeV and only a small fraction of the gamma rays are produced in the core, thus for simplicity we neglect this effect.]
The gamma-ray intensity from the decay of pions was then computed following <cit.>, with the injection rate of pions given by
Q_π^±,0(E_π)= n_th c ∫_p_* dp_p N_CRp(p_p) β_p dσ^±,0/dE (E_π,E),
where ± and 0 refer to charged and neutral pions, respectively, and d σ^±, 0/d E is the differential inclusive cross section for their production, which is calculated in four different energy ranges as in <cit.>. The decay of neutral pions then generates an emissivity in the gamma-rays at the energy E_γ in the form
j_γ(E_γ)= 2 E_γ∫_E_min^E_CRp,max Q_π^0(E_π)/√(E_π^2 - m_π^2 c^4) dE_π,
where E_min = E_γ + m_π^2c^4/(4E_γ).
The synchrotron and gamma-ray emission can
be obtained through numerical integration of Eqs. <ref>–<ref> and <ref>–<ref>, respectively. As a useful reference (using a power-law approximation for the synchrotron emissivity), at a distance r on the sky plane, the synchrotron and gamma-ray emission are proportional to:
I_syn(r) ∝∫_LOS RdR/√(R^2-r^2) n_th^2(R) kT(R) ℱ(R) B^1-α(R)/(B^2(R)+B^2_CMB),
and
I_γ(r) ∝∫_LOS RdR/√(R^2-r^2) n_th^2(R) kT(R) ℱ(R),
where we defined ℱ(R)= W_CRp(R)/W_th(R) and B refers to the ICM magnetic field strength, with the CMB subscript referring to the cosmic microwave background equivalent magnetic field strength.
The ratio of the synchrotron to gamma-ray luminosity is thus governed by the magnetic field profile as
L_syn/L_γ∝< B(R)^1-α/(B^2(R)+B^2_CMB)>,
where the brackets denote a volume average (e.g. in the next section we integrate up to R_500) weighted for the distribution of CRp <cit.>.
In our calculations, the cluster magnetic field was assumed to follow the commonly used profile where the magnetic field energy density is proportional to the thermal gas energy density, as found for example for the Coma cluster <cit.>
B(R) = B_0 (n_th(R)/n_th(0))^0.5,
where n_th(R) denotes the thermal electron density at radius R. The central magnetic field strength B_0 is not well-constrained for Abell 2256 <cit.>, so was left as a free parameter.
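The dependence of the synchrotron-to-gamma-ray ratio on B_0 can be illustrated numerically. This is a schematic calculation, not the full modelling of this section: we assume isothermal gas, simplify the CRp weighting to n_th^2, and reuse the beta-model parameters quoted above:

import numpy as np
from scipy.integrate import quad

ALPHA = -1.56                 # radio halo spectral index
B_CMB = 3.25 * 1.058**2       # equivalent CMB field [uG]; z ~ 0.058 assumed
R500 = 1273.0                 # kpc

def n_th(R, n0=3e-3, r_c=341.0, beta=0.77):
    return n0 * (1.0 + (R / r_c)**2)**(-1.5 * beta)

def ratio_kernel(R, B0):
    B = B0 * np.sqrt(n_th(R) / n_th(0.0))
    weight = n_th(R)**2 * 4.0 * np.pi * R**2  # simplified CRp weighting
    return weight * B**(1.0 - ALPHA) / (B**2 + B_CMB**2)

def syn_to_gamma(B0):
    num, _ = quad(ratio_kernel, 0.0, R500, args=(B0,))
    den, _ = quad(lambda R: n_th(R)**2 * 4.0 * np.pi * R**2, 0.0, R500)
    return num / den

for B0 in (3.0, 5.0, 10.0, 20.0, 30.0):
    print(B0, syn_to_gamma(B0))  # grows with B0: weaker fields demand more CRp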
In summary, with reasonable assumptions on the magnetic field profile and cosmic ray proton spectrum plus measurements of the cluster thermal density, temperature profile, and synchrotron luminosity, we can estimate the expected gamma-ray luminosity from hadronic interactions in Abell 2256. We additionally assumed spherical symmetry and homogeneous and stationary conditions for simplicity.
The results may be influenced by non-homogeneous conditions within the intra-cluster medium. For instance, in a similar analysis conducted on the Coma cluster, <cit.> demonstrated that for an additional turbulent component of the magnetic field (where <B + δB>_Volume = B and <(B + δB)^2>_Volume = B^2 + δB^2), the radio/gamma-ray ratio changes by less than a factor of 2 compared to that in a homogeneous medium, even in the extreme scenario where δB^2 ∼ B^2. Thus, the main conclusions are not expected to change significantly despite potential variations in intra-cluster medium conditions.
§.§ Gamma-ray upper limits
The LOFAR observations of the radio halo in Abell 2256 constrain the spatial distribution (i.e. a) and number density of cosmic ray protons of the purely hadronic model. We obtained the spatial distribution of CRp from the brightness profile of the radio halo, which should follow Equation <ref>. We modelled the radio surface brightness using an MCMC halo-fitting code <cit.>. We masked the regions where the halo is seen in projection with either AGN or the large radio shock, as shown in Figure <ref>. We assumed a simple spherically symmetrical model commonly used for radio halos where I(r) = I_0 exp(-r/r_e) <cit.>. The resulting fits are shown in Appendix <ref> with the best-fit model parameters given in Table <ref>. We found similar values for the e-folding radius of ∼ 200 kpc at the three different frequencies, which is consistent with the finding in Section <ref> that the spectral index is constant as a function of radius.
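The exponential model can also be fit to azimuthally averaged profiles in a few lines of scipy (illustrative only; the actual fitting here is done with a dedicated MCMC code on the 2D images, and the helper names and starting values are ours):

import numpy as np
from scipy.optimize import curve_fit

def halo_profile(r_kpc, I0, r_e):
    # I(r) = I0 * exp(-r / r_e), the spherically symmetric halo model
    return I0 * np.exp(-r_kpc / r_e)

def fit_halo(r, I, I_err):
    # r, I, I_err: radial bin centres, mean surface brightness, uncertainty
    popt, pcov = curve_fit(halo_profile, r, I, sigma=I_err, p0=[I.max(), 200.0])
    return popt, np.sqrt(np.diag(pcov))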
The observed surface brightness profiles of the radio halo at the three different frequency bands show very similar behaviour as a function of radius, as indicated in Figure <ref> where the normalised profiles are shown for comparison. These profiles are flatter than expected from models that assume a constant CRp density (or a declining CRp density, with positive values of a). Such a tendency was also observed in other radio halos such as the Coma Cluster <cit.>.
Assuming a value of a=-0.5 approximately reproduces the flatness of the observed profile as a function of radius, so we set this as a reference value in the following calculations.
To match the total synchrotron luminosity of the radio halo for B_0=[3, 5, 10, 20, 30] μG, hadronic models require an energy budget of CRp that is equal to [15, 4.9, 1.4, 0.6, 0.4] times the thermal energy density averaged over the cluster volume within R_500, respectively. This energy budget is large because of the combination of the flat radio profile and steep synchrotron spectrum, and improbable given the fact that the integrated CRp energy density is expected to be on the order of a few per cent of the total energy density in clusters <cit.>.
Figure <ref> shows that for all models, the radial profile of the cosmic ray energy density would exceed the thermal energy density within R_500.[We note that assuming p_p,min=0.01 mc would imply a CR energy budget that is about 40 percent higher than that in Fig. <ref>.] Such energy budgets of CRp should result in a detectable gamma-ray luminosity and flux.
To calculate the integrated synchrotron luminosity and gamma-ray luminosity from the hadronic model, we integrated out to R_500=1273 kpc, although this cutoff is not sharp in practice. Therefore, this results in a conservative estimate for the expected gamma-ray radiation from the hadronic model.
We also note that for a=-0.5, although the cosmic ray fraction increases away from the cluster centre, the gamma-ray luminosity still declines as a function of radius for a>-1. We show the expected gamma-ray flux derived from purely hadronic models that match the radio observations in Figure <ref>, where the overlay shows the current observational limits from Fermi-LAT. It is clear that for typical magnetic field values of B_0=1–10 μG, gamma-rays would be detected if the halo was purely hadronic. At a three-sigma confidence level, the purely hadronic model is ruled out for B_0<17 μG.
§ DISCUSSION
The radio halo in Abell 2256 was among the first radio halos to be discovered <cit.>, with deeper follow-up data uncovering its progressively larger extent <cit.>. The most recent estimate from <cit.> shows that the largest linear size of the radio halo is at least 900 kpc.
In this work, we find the radio halo to be significantly larger than these previous estimates, with an observed size at 144 MHz of 1.6 Mpc. This significant increase in observed size can be attributed to the unparalleled sensitivity of LOFAR at low frequencies, particularly to large-scale emission because of the many short baselines. The large size of the radio halo implies that a large fraction of the cluster volume is occupied by relativistic electrons and magnetic fields, which is in line with recent works that have also found that radio halos extend out to large radii when observed with high sensitivities at low frequencies <cit.>. In fact, it is likely that the observed size of the radio halo is still limited by missing short baselines, as data imaged without a 100λ baseline cut show a significantly larger and brighter radio halo (by approximately 20%) than data imaged with the cut applied. This was also found in previous works by injection of large mock radio halos into LOFAR data <cit.>. With the anticipated LOFAR2.0 upgrade to the LBA system, which can probe larger angular scales than the HBA system, observations will become more sensitive allowing the detection of even larger scale emission in nearby clusters.
§.§ Spectral properties of the halo
The integrated spectrum of the radio halo in Abell 2256 is classified as ultra-steep and shows no indication of curvature (Figs. <ref>, <ref>). It is one of the few radio halos that are detected over a large frequency range, with other examples being the Bullet cluster <cit.>, the Toothbrush cluster <cit.>, Abell 2744 <cit.>, Abell S1063 <cit.>, Coma <cit.> and MACS J0717.5+3745 <cit.>. We compiled the properties of these clusters in Table <ref>. It is interesting that the first three of these other radio halos do not show any indication of spectral curvature, with relatively flat spectra α < -1.3 up to GHz frequencies, while the last three halos do show spectral curvature, resulting in ultra-steep spectra (α<-1.5), at frequencies above ∼1 GHz. Abell 2256 thus presents a unique radio halo with an ultra-steep spectrum up to GHz frequencies, without spectral curvature.
Simple homogeneous turbulent re-acceleration models, with constant magnetic field and acceleration rate throughout the volume, have been successful in reproducing the observed counts and redshift distribution of radio halos in statistical samples <cit.>. In such models, ultra-steep spectra are expected above a cut-off frequency that scales with the acceleration efficiency in the ICM, which depends on the energetics (e.g. mass and mass ratio) of the merger <cit.>.
It is therefore interesting that the integrated spectrum of the radio halo in Abell 2256 shows no curvature, while it is ultra-steep.
Variations in the magnetic field, turbulent energy and resulting acceleration efficiency throughout the emitting volume may complicate the apparent spectral behaviour. The superimposition of different regions can stretch the spectrum and generate a quasi-power-law spectrum when integrated over the full halo region. This effect has been observed in simulations <cit.>, although they are limited in resolution and do not capture the full complexity of the dynamics of the ICM and CRs. The observed significant curvature and spectral index variations across the radio halo volume (e.g. Fig. <ref>), which were also observed at higher frequencies <cit.>, point to an inhomogeneous situation in the Abell 2256 halo volume. In such a scenario, the steep spectral slope measured for Abell 2256 implies that a significant fraction of the emission in the halo volume is generated at low frequencies, where the acceleration time is shorter than the cooling time.
The intrinsic 2D scatter of the spectral index can be estimated as σ_2D=√(σ^2_obs - σ^2_rms), where σ^2_obs is the total observed scatter, and σ^2_rms is the scatter expected from the flux density uncertainties. Values of σ_2D of 0.14 and 0.24 are obtained for the scatter measured between 46-144 and 23-46 MHz, respectively. These variations are found to be quite large with respect to the other non-curved radio halo in the Toothbrush cluster <cit.>, as listed in Table <ref>.
However, the variations are of the same magnitude and spatial scale as those observed in MACS J0717.5+3745 <cit.>,
where an inhomogeneous situation was also proposed.
Furthermore, Table <ref> indicates that MACS J0717.5+3745 has the steepest spectrum below the break frequency, implying that the level of inhomogeneity might be correlated with the steepness of the radio spectrum. It is also noteworthy that Abell 2256 is the least massive galaxy cluster in this sample, which implies that it has a smaller turbulent energy budget and will preferentially emit lower frequency radiation. However, this sample of clusters with radio halos detected over a large frequency range is small and the selection is not unbiased, thus additional data are required to draw definite conclusions.
An inhomogeneous turbulent scenario has also been explored in the case of radio bridges, where theoretical models based on second-order Fermi re-acceleration predict that the fraction of the synchrotron emitting volume increases at lower frequencies <cit.>. The spectrum of radio bridges is not well known over large frequency ranges due to their low surface brightness, but the conditions for generating synchrotron emission in the volume (i.e. the acceleration time is smaller or equal to the cooling time) are more likely to be matched at lower emitting frequencies. However, it is still an open question how this process would result in a straight power-law for the integrated spectrum. Thus, explaining the combination of inhomogeneity in the halo volume and the perfect integrated power-law over multiple orders of magnitude in frequency as observed in Abell 2256 requires further theoretical studies.
§.§ Testing a hadronic origin
The radio halo in Abell 2256 is the nearest one in the universe that shows an ultra-steep spectrum below GHz frequencies. It is therefore one of the best candidates to put constraints on hadronic models from the combination of gamma-ray and radio data, as such a steep spectrum requires a large energy budget of cosmic ray protons which should result in observable gamma-ray emission. In Section <ref>, we have shown that secondary models may explain the levels of radio and gamma-ray emission in Abell 2256 only in the case that B_0>17μG. This is significantly higher than typical magnetic field values of B_0<10μG estimated from Faraday rotation measurements in clusters <cit.>. In fact, such strong magnetic fields are also unlikely for energetic reasons, since it would imply a magnetic pressure in the ICM that is ≥ 19% of the thermal pressure, and a total non-thermal pressure (i.e. magnetic + CR) of the same order as the thermal pressure at r=R_500.
This is significantly higher than the non-thermal pressure found observationally (∼6%) from the combination of X-ray and SZ observations <cit.>. Thus, in practice, assuming a hadronic origin of the halo, the combination of our LOFAR and gamma-ray data requires an untenable energy budget due to the combination of steep spectrum and flat radio brightness profile of the radio halo. We conclude that the purely hadronic model cannot explain the radio halo in Abell 2256.
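For scale, the pressure comparison underlying this argument can be reproduced with a back-of-the-envelope calculation; the central density and temperature below are typical cluster values assumed purely for illustration, not measurements of Abell 2256:

import numpy as np

# Magnetic pressure P_B = B^2 / (8 pi) in cgs units, with B in Gauss.
B0 = 17e-6                     # G, minimum central field for the hadronic model
P_B = B0**2 / (8 * np.pi)      # erg cm^-3

# Illustrative central thermal pressure P_th = n_e * k_B * T; the density
# and temperature are assumed typical cluster values (not Abell 2256 data).
k_B = 1.380649e-16             # erg / K
n_e = 5e-3                     # cm^-3 (assumed)
T_K = 7.0 * 1.16045e7          # 7 keV expressed in Kelvin (assumed)
P_th = n_e * k_B * T_K

print(f"P_B  = {P_B:.2e} erg/cm^3")
print(f"P_th = {P_th:.2e} erg/cm^3")
print(f"P_B/P_th = {P_B / P_th:.2f}")  # ~0.2 for these assumed values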
This conclusion is quite robust, because of the conservative assumptions made in Section <ref>. Firstly, we limit the integration of the gamma-ray emission at r=R_500. The required energy budget for the non-thermal components would be even larger with a larger aperture radius. Secondly, similar to the case of the Coma cluster <cit.>, a flatter profile of the magnetic field would help reduce the energy budget of cosmic ray protons, but would not solve the tension because the magnetic field in the outskirts would become a dominant source of pressure.
Additionally, we note that our models for the gamma-ray emission in Abell 2256 did not consider other possible sources of cosmic-ray protons. Shock (re)accelerated electrons that generate the bright radio shock in Abell 2256 may also generate gamma-rays via inverse Compton scattering off the CMB, provided that TeV electrons are accelerated at the shock. Additionally, protons should also be accelerated by the shock front, but the acceleration efficiency of cosmic ray protons at ICM shocks is poorly constrained <cit.>, making it difficult to include this in our models. In any case, this implies that the central magnetic field strength would need to be even higher than B_0=17μG to explain the non-detection of gamma rays, which we have argued cannot be the case due to energetic reasons. In fact, our models are also conservative due to the fact that the radio halo was significantly brighter (factor 2) in images without a 100λ uv cut, but we employed this cut to make a fair comparison between different frequencies. We note, however, that gamma-ray observations do not suffer from resolving out large-scale emission like radio observations do.
In the turbulent re-acceleration scenario, a mildly relativistic `seed' population of electrons is re-accelerated by turbulence, which produces the radio halo <cit.>. The origin of the seed electrons is still unconstrained, and hadronic interactions might produce the seeds for re-acceleration <cit.>. Jointly modelling the seed population from hadronic interactions at a level consistent with the upper limits presented here and the re-acceleration of those seed particles through turbulent magneto-hydrodynamics can address this problem, although such modelling is beyond the scope of the current paper. We can however make a qualitative assessment of this model. In the turbulent scenario, the emission is generated with a ratio of radio to gamma-rays that is typically a factor 3-10 smaller than that in the case of purely hadronic models, thus allowing an energy budget of CRps that is up to one order of magnitude smaller than in the purely hadronic case. Current gamma-ray limits constrain the energy budget of the CRp (and magnetic field) to a level that is several times smaller than that obtained in Section <ref>. If the radio halo is indeed generated by turbulent re-acceleration, <cit.> predicted that a gamma-ray detection would only be possible in the case that B_0<1 μG. The non-detection is thus consistent with typical magnetic field strengths between 1-10 μG that are observed in clusters from Faraday rotation experiments <cit.>. The current Fermi-LAT limits do not rule out re-acceleration of secondary particles for the origin of the halo in Abell 2256, as was also concluded for the Coma cluster <cit.>.
§.§ Diffusive shock acceleration in the radio shock
The diffusive shock acceleration (DSA) of fossil electrons is the most promising model for radio shocks in clusters <cit.>. According to DSA, the integrated spectral index of radio shocks cannot be flatter than α=-1.0. However, this constraint was violated in the radio shock in Abell 2256 with early LOFAR observations at low frequency, where a radio shock spectral index value of -0.85±0.01 was found <cit.>. Observations between 100 MHz and 3 GHz show no such violation, with a recent study by <cit.> finding a spectral index of α=-1.07 ± 0.02 between 144 MHz and 3 GHz.
In this work, we found that the low-frequency spectral index of the relic is α_23^144=-0.87±0.06, which is in line with the previous study by <cit.>, and shows a discrepancy with DSA. However, when combining our data with higher frequency data up to 3 GHz, we obtain a value of α_23^3000=-1.00±0.02, which is consistent with DSA, and slightly flatter than the value found by <cit.>. The flatter spectral index observed between 23 and 144 MHz could be caused by flux scale biases, which are more impactful when the frequency difference of the associated flux measurements is small. As <cit.> noted, the original LOFAR HBA calibration provided fluxes that were too high, but this issue was addressed by re-scaling the flux scale using compact bright sources, and we have adopted this scaling in this study. However, if the HBA flux scale is still too high, this would cause <cit.> to overestimate the steepness of the spectral index above 144 MHz and this study to underestimate the steepness of the spectral index below 144 MHz. Given the possible systematic issues with the LOFAR HBA flux scale, we only draw conclusions from the spectrum evaluated over the entire range of available frequencies, and we see no strong evidence of the spectrum between 24 and 3000 MHz being inconsistent with DSA.
Similar to other radio shocks that are mapped over wide frequency ranges, such as the Toothbrush and Sausage radio shock <cit.>, our findings suggest that there is no deviation from a power-law over multiple orders of magnitude, indicating no inconsistency with the standard DSA scenario in Abell 2256. The radio shock interpretation is also consistent with the X-ray detection of a nearby shock in Abell 2256 by <cit.>.
However, while the DSA interpretation seems to be supported by observations, some problems remain to be understood. In the case of standard DSA, an integrated spectrum with a spectral index close to α=-1 requires a large Mach number <cit.>, which is inconsistent with what has been measured for the Mach number of the X-ray detected shock in Abell 2256 <cit.>.
This might be resolved by considering that the radio shock region consists of an ensemble of shocks, and the radio and X-ray observations trace different parts of this distribution, with projection effects also playing a significant role <cit.>.
§.§ Origin of AGN related sources
The physical interpretation and age estimation of the various smaller ultra-steep spectrum sources in Abell 2256 have been complicated by the inability of previous studies to fit their spectra with simple synchrotron models, due to the strong curvature implying low break frequencies <cit.>. The new ultra-low frequency data show that we can now observe many of the radio source spectra flatten towards lower frequencies (Fig. <ref>).
The question of whether the F-complex should be considered a radio shock was raised by <cit.>, because of its steep spectrum, polarisation and elongated structure. However, unlike the large radio shock of Abell 2256, the spectrum of this source is strongly curved, resembling a typical aged AGN spectrum. As already raised by <cit.>, sources F1 and F2 might both be part of the tail of source F3.
We propose that sources F2 and F1 are related to the Fabricant Galaxy 122 (FG122) at the location of F3. The synchrotron modelling implied that the radiative age of the sources is approximately 200 Myr, which is consistent with the time it would take FG122 to travel the distance between its current location and the location of the F complex, given the typical velocity dispersion in the cluster <cit.>. If the magnetic field strength is lower than our assumed 7 μG <cit.>, then the age estimates would increase further and this picture would remain consistent with observations, unless the magnetic field is significantly weaker than B=1.8μG, in which case inverse Compton losses would quickly dominate. Furthermore, we observe no spectral index gradient across sources F1 to F3 in the low-frequency spectral index map (Fig. <ref>), which is expected in the standard spectral ageing scenario <cit.>, for a constant magnetic field when observing sources below the break frequency.
Interestingly, a new source was detected below the F complex which complicates the scenario once more. We have named this source F4. The spectrum of F4 remains curved below 100 MHz, with a spectral index of α_23^144=-1.9 ± 0.1, indicating that we have not yet found the break frequency of this source, but constrain it to be <23 MHz. Whether the source is physically related to the F1-F3 complex is difficult to say. However, the sudden steepening in the spatial spectrum, with no gradient in the spectral index map between F2 and F4 makes a physical relation unlikely. Multiple cluster members are located in the region co-spatial with F4, so an optical association is difficult to make correctly, given the diffuse morphology of the source. However, the morphology of the radio emission and the spectral index map shown in Figure <ref> indicate a possible host galaxy (MCG+13-12-020). Given the steep spectrum of the source at such low frequencies, it is likely that source F4 is a very old remnant radio galaxy with an age of >400 Myr. The dense and turbulent intracluster medium possibly quenched the expansion of source F4, limiting adiabatic losses and allowing the low-frequency detection of such an old source <cit.>.
The source AG+AH is located approximately 800 kpc from the head of the tailed source C and shows a curved spectrum with α_144^351=-2.05 <cit.>, while we observe α_23^144=-0.91±0.07. In previous LOFAR observations, <cit.> noted that if the break frequency of the spectrum is below 50 MHz, the radiative age of the source would be old enough to link it to source C. However, we observed the break frequency at 113±12 MHz, implying AG+AH can only be related if the fossil plasma is re-accelerated. Processes such as the gentle re-energisation process <cit.> or a shock wave that is also responsible for the radio shock can increase the age of the source substantially <cit.>, allowing a physical relation between the sources. Such processes could also explain the filamentary `ribs' coming off the radio source, which are likely caused by complex interactions of the fossil plasma with the environment <cit.>. Interestingly, like the first ribbed source detected in Abell 3266, AG+AH is also related to an apparently one-sided tail. There are multiple sources now found in clusters that show such one-sided tails with rib-like features, including IC1711 in Abell 1314 <cit.>, and SDSS J105851.01+564308.5 in Abell 1132 <cit.>. These observations may provide insights into the origin of these phenomena.
Finally, source AI was discovered by <cit.>, where it was suggested to be either a radio shock or a radio phoenix. It was recently classified as a radio phoenix based on the morphology, location and curved spectrum by <cit.>. This is corroborated by the ultra-low frequency results here, where the spectrum indeed approaches a typical AGN spectrum with α_23^46=-1.18 which significantly steepens towards higher frequencies.
§ CONCLUSION
We have investigated particle acceleration in Abell 2256 by studying the lowest energy electrons observable by ground-based telescopes. This study presented the first high-quality LOFAR observations down to 16 MHz of Abell 2256, demonstrating the potential for new cluster science with LOFAR ultra-low frequency observations. The radio halo, radio shock, and most prominent fossil plasma sources in Abell 2256 were all detected clearly at 144, 46 and 23 MHz. The ultra-low frequency data paint a consistent picture with respect to what was found at higher frequencies, where both the radio halo and radio shock show straight power-law spectra over multiple orders of magnitude in frequency, while the fossil plasma sources show relatively flat spectra at low frequencies that can curve extremely towards higher frequencies. This dichotomy in spectral shapes, which becomes apparent at low frequencies, could help in the classification of diffuse cluster sources, which is becoming increasingly challenging as cluster radio emission is more ubiquitously detected.
We summarise the main results of this work as follows:
* The combination of low-frequency radio and gamma-ray data places some of the strongest direct constraints on the purely hadronic model for radio halos. The data are only consistent with the purely hadronic model for central magnetic field strengths >17 μG, which are improbably high given non-thermal pressure and magnetic field constraints that exist for comparable clusters.
This is only the second cluster for which such a direct constraint was produced, with the only other cluster being the Coma cluster, where data also disfavours a purely hadronic model.
* The sensitive LOFAR HBA image shows that the radio halo has a largest linear size of 24 arcminutes at 144 MHz, corresponding to a linear size of 1.6 Mpc at the cluster redshift. This is larger than previously measured.
* The integrated radio halo spectrum follows a straight power-law with a spectral index of -1.56±0.02 over a wide frequency range from 24 to 1500 MHz. The core region shows flatter-spectrum emission (α=-1.36) than the overall radio halo, and the wedge arc between the radio shock and the F-complex shows somewhat steeper emission.
* Although the integrated spectrum follows a straight power-law, we found significant spatial variations in the spectral index and curvature across the radio halo on the order of σ(α_2D)=0.2. This implies that the emitting volume is strongly inhomogeneous, which is difficult to reconcile with the perfect power-law of the integrated spectrum by current theories.
* The radio shock spectrum also agrees with a straight power-law, but is significantly flatter than the radio halo, with α=-1.00±0.02 between 24 and 3000 MHz. The spectral index map at low frequencies also shows steepening from the southwest side to the northeast side, indicating the direction of the shock as electrons age in the downstream region.
* Abell 2256 hosts six complex radio sources with mostly curved spectra, of which five were known previously. We have detected a new ultra-steep spectrum source just below the F-complex, which we have named F4. While we see the spectra of the other complex radio sources flatten significantly towards 23 MHz, F4 still shows an ultra-steep spectral index of α_23^144=-1.9 ± 0.1, and we suspect it is unrelated to sources F1-F3 based on the sudden discontinuity in the spectral index map.
* We have modelled the synchrotron emission of these complex radio sources, finding typically curved spectra that agree well with simple ageing models, and finding radiative ages around 200 Myr. These findings are consistent with the interpretation that these are fossil plasma sources.
Most of the understanding about the origin and formation of diffuse radio emission in clusters has been derived from studies of relatively massive galaxy clusters that could be detected at GHz frequencies. However, turbulent re-acceleration models predict that an increasing fraction of halos in lower-mass clusters should have a steep spectrum <cit.>, implying they are missed at high frequencies. To constrain model parameters, a large lever arm is needed for precise spectral index determination. Observations down to about 16 MHz, combined with data at ∼150 MHz, provide a lever arm similar to the one historically obtained by combining 150 and 1500 MHz observations. The successful observations made in the lowest radio window available to ground-based telescopes thus open up exciting possibilities for future research on particle acceleration mechanisms in clusters.
EO and RJvW acknowledge support from the VIDI research programme with project number 639.042.729, which is financed by the Netherlands Organisation for Scientific Research (NWO).
FdG acknowledges the support of the ERC CoG grant number 101086378.
AB acknowledges financial support from the European Union - Next Generation EU.
LOFAR <cit.> is the Low-Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d’Orléans, France; BMBF, MIWF-NRW MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the LOFAR-IT computing infrastructure supported and operated by INAF, and by the Physics Dept. of Turin University (under the agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy.
We thank the staff of the GMRT that made TGSS possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
EO thanks Jonah Wagenveld and Roland Timmerman for supplying useful Python scripts and the Istituto di Radioastronomia at CRN Bologna for the hospitality during the spring of 2022, where helpful discussions took place.
§ FLUX MEASUREMENTS AND DECAMETRE SKY FIELD OF VIEW
To verify the flux density scale of the LOFAR LBA and HBA images in the direction of Abell 2256, we have compared our data with deep upgraded Giant Metrewave Radio Telescope (uGMRT) data at 675 MHz from <cit.>. We have identified eight compact bright sources around Abell 2256 which are visible in the LOFAR 24 MHz, 46 MHz, HBA and uGMRT images. The 24 MHz flux was calculated in a 90-arcsecond resolution map to make sure all flux was captured for point sources which may still suffer from residual ionospheric errors. We decided not to compare to ancillary VLA 1-4 GHz data, as the field of view of those data is too small to make comparisons for many sources around Abell 2256. The results are shown in Figure <ref>, where the HBA flux is corrected by a scaling factor of 0.83, and the LBA flux is not adjusted. Most sources show a curved spectrum with flattening towards lower frequencies, which could either indicate a low flux density scale or a physical effect. We argue this is likely a physical effect, as it was also seen recently in the LoLSS survey <cit.>, where most sources were found to have a curved spectrum between 54 MHz and GHz frequencies. There, it is clear that there is no significant flux scale issue, as the spectra were in line with observations at 38 MHz from the 8C survey. This indicates that at lower frequencies the spectrum physically flattens, an effect that we also observe in our ultra-low frequency observations. Finally, the results show that the fluxes are in line with simple log-space polynomial fits, implying that there is no significant bias in the flux density scale.
For completeness, the flux measurements of the radio halo and radio shock regions defined in Sections <ref> and <ref> are given in Table <ref>. The full field of view of the LBA observations in the 16-30 MHz range is shown in Figure <ref>.
§ UNCERTAINTY MAPS
We show in Figures <ref> and <ref> the uncertainty maps for the spectral index and spectral curvature respectively. The uncertainty on the spectral index was calculated as
Δα = 1/|ln(ν_1/ν_2)| √[(Δ S_1/S_1)^2 + (Δ S_2/S_2)^2],
where ν refers to the frequency of the observation, S to the corresponding observed flux, and Δ S to the uncertainty on the flux (which includes both the absolute flux scale uncertainty and the RMS map noise). The uncertainty on the curvature map was computed from the uncertainties on the spectral index maps using standard error propagation.
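A minimal implementation of this error propagation, using illustrative flux densities rather than our measured values, could look as follows:

import numpy as np

def spectral_index(S1, S2, nu1, nu2, dS1, dS2):
    """Spectral index alpha (S ~ nu^alpha) between two frequencies, with
    the propagated uncertainty from the formula above."""
    alpha = np.log(S1 / S2) / np.log(nu1 / nu2)
    dalpha = (1.0 / abs(np.log(nu1 / nu2))) * np.sqrt(
        (dS1 / S1) ** 2 + (dS2 / S2) ** 2
    )
    return alpha, dalpha

# illustrative flux densities (Jy); the 10% errors stand in for the
# combined flux-scale and map-noise uncertainty
a, da = spectral_index(S1=100.0, S2=6.0, nu1=23e6, nu2=144e6,
                       dS1=10.0, dS2=0.6)
print(f"alpha = {a:.2f} +/- {da:.2f}")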
§ HALO FITTING
Figure <ref> shows the results of the Halo-Flux Density CAlculator <cit.>, a Markov-chain Monte Carlo code that fits a simple surface brightness model,
I(r) = I_0 exp(-r/r_e),
to a radio halo. We have indicated the region used for the fitting, and the regions used to mask the compact AGNs in the leftmost panel. The resulting best-fit parameters are given in Table <ref>.
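For illustration, the sketch below fits the same exponential model to a synthetic one-dimensional profile with a simple least-squares routine; Halo-FDCA itself fits the two-dimensional image with an MCMC, so this is only a toy version of the procedure:

import numpy as np
from scipy.optimize import curve_fit

def halo_profile(r, I0, r_e):
    """Circular exponential surface-brightness model I(r) = I0 * exp(-r/r_e)."""
    return I0 * np.exp(-r / r_e)

# synthetic azimuthally averaged profile standing in for the real halo image
rng = np.random.default_rng(1)
r = np.linspace(0.0, 800.0, 40)                   # projected radius (kpc)
I_obs = halo_profile(r, 10.0, 200.0) + 0.2 * rng.standard_normal(r.size)

(I0_fit, re_fit), _ = curve_fit(halo_profile, r, I_obs, p0=[5.0, 100.0])
print(f"I0 = {I0_fit:.2f} (arb. units), r_e = {re_fit:.1f} kpc")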
|
http://arxiv.org/abs/2405.10098v1 | 20240516135437 | When Large Language Model Meets Optimization | [
"Sen Huang",
"Kaixiang Yang",
"Sheng Qi",
"Rui Wang"
] | cs.NE | [
"cs.NE"
] |
When Large Language Model Meets Optimization
Sen Huang 1
huangsen@scut.edu.cn
Kaixiang Yang 2
yangkx@scut.edu.cn
Sheng Qi 3
qisheng@nudt.edu.cn
Rui Wang 3 (corresponding author)
ruiwangnudt@gmail.com
1 School of Electronic and Information Engineering, South China University of Technology
2 School of Computer Science and Engineering, South China University of Technology
3 College of Systems Engineering, National University of Defense Technology
Optimization algorithms and large language models (LLMs) enhance decision-making in dynamic environments by integrating artificial intelligence with traditional techniques. LLMs, with extensive domain knowledge, facilitate intelligent modeling and strategic decision-making in optimization, while optimization algorithms refine LLM architectures and output quality. This synergy offers novel approaches for advancing general AI, addressing both the computational challenges of complex problems and the application of LLMs in practical scenarios. This review outlines the progress and potential of combining LLMs with optimization algorithms, providing insights for future research directions.
Large Language Model Optimization Algorithm Evolutionary Computation
Received April 30, 2024; accepted Month Date, Year
======================================================
§ INTRODUCTION
Optimization algorithms (OA) is becoming increasingly important as a class of heuristic search algorithms in the broad field of artificial intelligence and machine learning <cit.>. OA draws on the natural mechanisms of biological evolution, including processes such as natural selection, heredity, mutation, and hybridization, for solving complex optimization problems. These algorithms are widely used in many fields due to their global search capability, low dependence on problem structure, and ease of parallelization.
Optimization algorithms, pivotal in diverse fields such as logistics, finance <cit.>, healthcare <cit.>, and artificial intelligence <cit.>, aim to identify the best solution from the available alternatives. They are essential for making decisions efficiently and effectively in an era of rapidly increasing data complexity and volume.
The continuous advancement in optimization techniques has resulted in significant enhancements to algorithmic strategies, each customized to address specific types of problems and operational constraints.
From deterministic methods addressing linear problems to stochastic approaches for global optimization under uncertainty, optimization algorithms hold promise across a broad spectrum of research and practical applications.
With the development of technology, especially when dealing with large-scale, high-dimensional and dynamically changing optimization problems, traditional algorithms often face performance bottlenecks. Evolutionary computing provides effective solutions to these problems with its unique search strategy. In addition, the flexibility and adaptability of evolutionary computation enable it to be combined with a variety of other computational techniques to form hybrid algorithms for further performance enhancement.
In the rapidly evolving field of artificial intelligence, Large Language Models (LLMs) such as GPT (Generative Pre-trained Transformer) <cit.> provide a significant breakthrough with their advanced natural language understanding and generation capabilities. These models have revolutionized applications ranging from automated writing assistants to sophisticated conversational agents.
With their extensive parameters and deep learning capabilities, LLMs have become pivotal in advancing fields such as natural language processing, image recognition, and machine learning, offering robust solutions for complex data-driven challenges.
LLMs have achieved breakthroughs in traditional NLP tasks like text generation and language translation through extensive data training, and they also show promising potential in the emerging fields of algorithm design and optimization.
Traditional optimization algorithm design, dependent on human expertise <cit.>, is both time-consuming and potentially limited by the experts' knowledge. The advent of large-scale language models, however, has transformed this arena. These models learn extensive algorithmic patterns and strategies, enabling them to devise new algorithms and tailor solutions to specific challenges.
Furthermore, constructing and training large language models require significant computational resources and large datasets <cit.>, which escalate research and development costs and constrain the applicability and generalization of LLMs.
Optimization algorithms are crucial in developing LLMs, enabling researchers to efficiently tailor and refine model structures for specific applications.
By enhancing the training process, boosting computational efficiency, and lowering resource consumption, these algorithms facilitate the construction and application of large-scale language models.
These algorithms enhance the models' generalization capabilities and robustness, enabling improved performance amidst real-world complexity and uncertainty.
The goal in designing optimization algorithms for LLMs is to enhance their operational efficiency and reduce resource consumption, without compromising, and possibly improving, model performance.
This review aims to systematically analyze research on developing optimization algorithms with LLMs, and optimizing LLMs with optimization algorithms. It summarizes related research and application scenarios, and explores the diverse aspects of these applications.
Section <ref> provides a comprehensive review of the development of large language models (also known as macromodels), tracing their progression from basic predictive models to sophisticated systems that can comprehend and generate human-like text.
Section <ref> focuses on optimization algorithms, concisely tracing their evolution from basic iterative methods to advanced algorithms essential for efficiently scaling AI models.
Section <ref> examines research that applies large language models to optimization, highlighting innovative methods in which LLMs serve as search operators and assist in designing optimization algorithms.
Section <ref> discusses recent advances in optimization algorithms tailored for refining large-scale models. It highlights how these algorithms can enhance design, boost efficiency, and improve the performance of LLMs.
Section <ref> examines the practical applications of integrating optimization algorithms with LLMs, highlighting real-world implementations and their benefits.
Section <ref> presents the future outlook and research trends, summarizing the insights gained from this exploration and offering an outlook on potential future developments in this exciting intersection of AI research.
In summary, we conducted a comprehensive study on the development and application of optimization algorithms for large models, aiming to provide valuable references and insights for future research.
§ LARGE LANGUAGE MODELS
Language has a crucial role in human cognition, enabling communication and expression from early childhood to adulthood <cit.>. Teaching machines to imitate human-like language skills is a difficult task due to their intrinsic lack of cognitive capacity for understanding and expressing language. Computational linguistics aims to close this divide by using advanced Artificial Intelligence (AI) algorithms to endow machines with reading, writing, and communication skills like those of humans <cit.>.
The rise of Large Language Models (LLMs) undoubtedly represents an important milestone in the evolution of Natural Language Processing (NLP). These models, such as GPT-3 and GPT-4 of the GPT family <cit.>, are built on the Transformer architecture and have up to billions of parameters. They achieve a deep understanding of natural language and generative capabilities by pre-training on massive text datasets. The evolution of LLMs has gone through several notable stages: from the early days of Statistical Language Models (SLMs) and Neural Language Models (NLMs), to Pre-Trained Language Models (PLMs), and ultimately to today's large-scale language models. While PLMs like BERT and GPT-2 have achieved remarkable success in NLP tasks, the emergence of LLMs has revolutionized the game in this field. Not only can they be adapted to a wide range of tasks through large-scale pre-training, but they are also further optimized through fine-tuning, demonstrating a wide range of potential in application scenarios such as chatbots, search engine optimization, and office automation <cit.>.
The excellence of Large Language Models (LLMs) in the field of Natural Language Processing (NLP) is due to several key components of their design, which together give LLMs powerful language understanding and generation capabilities <cit.>. First, “pre-training” is one of the core processes of LLMs. By pre-training on large-scale textual datasets, LLMs are able to learn the basic structures and patterns of language. These datasets typically contain billions of words covering a wide range of topics and language styles, allowing the models to capture the diversity and complexity of the language. Second,“adaptability” is another key characteristic of LLMs. After pre-training, LLMs can be further fine-tuned to adapt to specific downstream tasks, such as text classification, sentiment analysis, or machine translation. This adaptability allows LLMs to optimise their performance for specific tasks, leading to better results in various NLP challenges <cit.>. In terms of “applications”, the broad applicability of LLMs is another reason for their popularity. Not only do they perform well in traditional NLP tasks, but they can also be applied to a wider range of domains, such as the development of chatbots, the optimization of search engines, the construction of content recommendation systems, and the development of automated office tools. Finally, “performance evaluation” is critical to ensure the reliability and effectiveness of LLMs. Through a series of standardised testing and evaluation protocols, researchers are able to quantify the performance of LLMs and ensure that they work consistently across a range of tasks and conditions. Performance evaluation also includes studies of model bias, fairness, and interpretability, which are key factors in improving model quality and trust.
LLMs are becoming a key driver in the field of AI, and their development and application are attracting widespread attention from industry and academia. LLMs represented by ChatGPT and GPT-4 have not only made significant technical progress but have also promoted in-depth conceptual discussions on artificial general intelligence (AGI) <cit.>. OpenAI's technical article proposes that GPT-4 <cit.> may be an early attempt to move towards AGI, all of which indicates the critical position of LLMs in the development of AI <cit.>. In the field of Natural Language Processing (NLP), LLMs are becoming a common tool for solving various linguistic tasks, changing the previous research and application paradigm. The Information Retrieval (IR) field is also feeling the winds of change, with traditional search engines facing the challenge of emerging information access methods such as AI chatbots; New Bing, for example, is an attempt to enhance search results with LLMs <cit.>. In addition, the field of computer vision (CV) is exploring multimodal models that combine vision and language, and the multimodal input support of GPT-4 is a manifestation of this trend. The rise of LLMs heralds the birth of a whole new ecosystem of applications based on these advanced models. Microsoft 365 leverages LLMs (e.g., Copilot) to automate office work, while OpenAI has introduced plug-in functionality in ChatGPT, all of which demonstrates the potential of LLMs to enhance productivity and extend application scenarios <cit.>.
While LLMs have brought about many positive changes, they also present a number of challenges, particularly in terms of the security and accuracy of the generated content. In addition, the training of LLMs requires substantial computational resources, which is a challenge for research institutes as it limits the ability to perform extensive experiments and optimization of the models <cit.>. We summarise the relevant limitations below:
1) Computational resources: LLMs demand substantial computational resources for training and inference, which can pose challenges for implementing optimization algorithms at scale. Ensuring access to adequate computational infrastructure remains a significant hurdle.
2) Data efficiency: While LLMs have demonstrated impressive performance, they often require large amounts of data for effective training. This reliance on extensive datasets can be a bottleneck for optimization algorithms, especially in scenarios where data availability is limited or costly to obtain.
3) Interpretability and explainability: The inherent complexity of LLMs poses challenges for the interpretability and explainability of optimization algorithms. Understanding the decision-making process of these models and interpreting their outputs can be challenging, particularly in critical applications where transparency is essential.
4) Generalization and robustness: Ensuring the generalization and robustness of optimization algorithms trained using LLMs is another key challenge. Over-reliance on specific patterns in the training data may lead to poor generalization performance on unseen data or vulnerability to adversarial attacks.
§ CLASSIC OPTIMIZATION ALGORITHM
§.§ Traditional optimization algorithms for
optimization problems
Optimization algorithms have wide applications in fields such as industry, economics, and management. With the rapid development of artificial intelligence, optimization algorithms play a crucial role in achieving intelligence and automation. Traditional optimization algorithms include deterministic optimization algorithms, approximation algorithms, and heuristic algorithms. Deterministic optimization algorithms include Linear Programming (LP) <cit.>, Integer Programming (IP) <cit.>, Mixed Integer Programming (MIP) <cit.>, Convex Optimization <cit.>, and Adaptive Dynamic Programming (ADP) <cit.>. Deterministic optimization algorithms can guarantee finding the global optimal solution, but they are generally not suitable for large-scale problems. Polynomial-time approximation algorithms can find a good solution within a reasonable time frame, but they do not guarantee finding the global optimal solution; for some problems, optimality guarantees may not exist at all. Heuristic algorithms employ specially designed functions to intelligently explore the solution space <cit.>. They rely on intuitive rules, trial-and-error strategies, and practical insights to approximate solutions within acceptable bounds, and are particularly effective for problems with high-dimensional search spaces or combinatorial complexities where exact solutions are impractical. Heuristic algorithms include the greedy algorithm <cit.>, Tabu Search <cit.>, genetic algorithms <cit.>, differential evolution <cit.>, and cooperative co-evolution <cit.>. Heuristic algorithms have been widely adopted due to their excellent computational performance, but they typically require customization and domain expertise for specific problems; moreover, they may converge to local optima and can have high time complexity.
Recently, the superiority of Estimation of Distribution Algorithms (EDAs) in solving optimization problems has been demonstrated. EDA is a prominent optimization technique that employs probabilistic models to guide the search process. Unlike traditional evolutionary algorithms, which rely on mutation and recombination operators, EDA focuses on building and updating a probabilistic model of promising solutions. This model is then sampled to generate new candidate solutions, effectively balancing exploration and exploitation. Yang et al. <cit.> proposed ACSEDA based on the Gaussian distribution model, which calculates the covariance according to an enlarged number of promising individuals. In contrast to solely relying on solutions from the current generation to estimate the Gaussian model, EDA^2 <cit.> incorporates a strategy where a set number of high-quality solutions from previous generations are retained in an archive. These historical solutions are then utilized to aid in estimating the covariance matrix of the Gaussian model. Dong et al. <cit.> introduced a latent space-based EDA (LS-EDA), which converts the multivariate probabilistic model of Gaussian-based EDA into a principal component latent subspace with reduced dimensionality. This transformation effectively decreases the complexity of EDA while preserving its probability model, ensuring that crucial information is retained. Consequently, LS-EDA enhances performance scalability for large-scale global optimization problems. In recent times, hybrid EDAs have emerged as a prominent research focus. Li et al. <cit.> proposed IDE-EDA, an enhanced version of differential evolution achieved by integrating EDA. Zhang et al. <cit.> introduced a new hybrid evolutionary algorithm for continuous global optimization problems, and Zhou et al. <cit.> proposed a fusion of EDA with both economical and costly local search (LS) methods, aiming to leverage both global statistical insights and individual location-specific information for enhanced optimization performance.
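To make the EDA workflow concrete, the following minimal univariate Gaussian EDA on a toy sphere function illustrates the select-model-sample loop; it is a didactic sketch, not an implementation of ACSEDA, EDA^2, or LS-EDA:

import numpy as np

def gaussian_eda(f, dim=10, pop=100, elite=0.3, iters=200, seed=0):
    """Minimal univariate Gaussian EDA: fit a Gaussian to the best
    solutions, then sample the next population from it (no mutation
    or crossover operators)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    n_elite = int(elite * pop)
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))      # sample candidates
        idx = np.argsort([f(x) for x in X])[:n_elite]   # select promising ones
        mu = X[idx].mean(axis=0)                        # update the model
        sigma = X[idx].std(axis=0) + 1e-12
    return mu, f(mu)

sphere = lambda x: float(np.sum(x**2))
x_best, f_best = gaussian_eda(sphere)
print(f"f(best) = {f_best:.3e}")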
§.§ Reinforcement learning methods for optimization problems
Another famous learning paradigm is reinforcement learning. Reinforcement learning learns and improves its strategy by interacting with the environment, aiming to maximize long-term cumulative rewards. The aforementioned heuristic algorithms typically find optimal or approximate solutions by searching the solution space, guided by heuristic information. However, they do not consider interaction with the environment during the search process. Unlike supervised and unsupervised learning, in reinforcement learning, agents can only learn through trial and error, rather than relying on labeled data or searching for the inherent structure of the data. Furthermore, reinforcement learning can be categorized into classical reinforcement learning methods and deep reinforcement learning methods, which will be introduced below.
According to <cit.>, classic RL methods can be divided into model-based and model-free approaches. Model-free methods can be further categorized into value-based, policy-based, and actor-critic methods. Value-based methods seek the optimal policy by estimating the value function. This approach is suitable for smaller state spaces and discrete action spaces, but is difficult to extend to continuous action spaces and suffers from the "high bias" problem: the error between the estimated value function and the actual value function is difficult to eliminate. Policy-based methods, on the other hand, do not require estimating the value function. Instead, they directly fit the policy function using neural networks and generate the optimal policy by training and updating the policy parameters. This approach is suitable for continuous action spaces but requires sampling a large number of trajectories, and the large variance across trajectories leads to the "high variance" problem. To address the tension between high bias and high variance, actor-critic methods emerged. The actor-critic method constructs an agent that can both output policies and evaluate their quality in real time using the value function. Generally, an actor-critic network consists of two parts: the actor network and the critic network. The actor network is used to generate policies to approximate the policy function, while the critic network is used to evaluate policies to approximate the value function. Representative works of these methods are introduced below.
Q-learning <cit.> is a classic value-based RL algorithm and is currently the most widely used model-free RL algorithm. Q-learning first initializes a Q-function, typically represented as a Q-table. It selects an action for the current state using the ϵ-greedy strategy, performs the selected action, and observes the reward obtained and the next state transitioned to. The Q-function is then updated using the Bellman equation; these steps are repeated, updating the Q-values, until a stopping condition is reached. Finally, the optimal policy is extracted from the learned Q-function. Double Q-learning <cit.> is an improved version of the Q-learning algorithm which alleviates the overestimation problem by using two Q-functions. In each round of interaction with the environment, one of the value function estimators is alternately selected to choose actions, while the other is used for action value estimation. Existing research has demonstrated that the Double Q-learning method achieves higher stability and greater long-term returns.
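The tabular Q-learning loop described above can be sketched as follows; the environment interface (reset() returning a state, step(a) returning next state, reward, and a done flag) is assumed for illustration:

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               lr=0.1, gamma=0.99, eps=0.1, seed=0):
    """Tabular Q-learning: epsilon-greedy action selection plus the update
    Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # behaviour policy: explore with probability eps, else exploit
            a = int(rng.integers(n_actions)) if rng.random() < eps \
                else int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)   # assumed env interface
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += lr * (target - Q[s, a])
            s = s_next
    return Q  # greedy policy: pi(s) = argmax_a Q[s, a]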
The REINFORCE algorithm<cit.>, also known as the Monte Carlo Policy Gradient Reinforcement Learning algorithm, is a classical policy gradient method. The goal of the REINFORCE is to update parameters along the gradient direction to maximize the objective function. On the basis of the classical policy gradient algorithms, Trust Region Policy Optimization (TRPO)<cit.> ensures that the Kullback-Leibler Divergence between the new and the old policy does not exceed a predefined threshold during each policy update. This threshold represents the "trust region" of policy updates, indicating the similarity between the new and old policies, thereby ensuring the stability of policy optimization. Proximal Policy Optimization (PPO)<cit.> is an improvement over TRPO, which is simpler to implement and requires less computation in practical use.
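A compact sketch of the REINFORCE update for a linear-softmax policy over discrete actions is given below, under the same assumed environment interface; a neural-network policy would replace the linear scores:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_episode(theta, env, gamma=0.99, lr=0.01, rng=None):
    """One Monte Carlo policy-gradient update:
    theta <- theta + lr * G_t * grad log pi(a_t | s_t), summed over steps."""
    rng = rng or np.random.default_rng()
    states, actions, rewards = [], [], []
    s, done = env.reset(), False                 # assumed env interface
    while not done:                              # sample one full trajectory
        p = softmax(theta @ s)                   # action probabilities
        a = int(rng.choice(len(p), p=p))
        s_next, r, done = env.step(a)
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
    G = 0.0
    for t in reversed(range(len(rewards))):      # returns, then gradient ascent
        G = rewards[t] + gamma * G
        p = softmax(theta @ states[t])
        grad_logp = -np.outer(p, states[t])      # d log pi / d theta
        grad_logp[actions[t]] += states[t]
        theta = theta + lr * G * grad_logp
    return theta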
Actor-Critic (AC)<cit.> algorithm combines the advantages of both policy-based and value-based methods. It learns both the policy and the value function simultaneously. The actor trains the strategy based on the value function feedback from the critic, while the critic trains the value function using the Time Difference Method (TD) for single step updates. The aforementioned REINFORCE algorithm uses a stochastic policy function, which outputs the probability distribution of actions for a given state, and then selects an action based on the probability distribution. In contrast to the REINFORCE algorithm, Deterministic Policy Gradient (DPG)<cit.> algorithm employs deterministic policy function, which directly outputs a deterministic action for given states and update policy parameters by maximizing expected returns. DPG integrates deterministic policy gradients only in the state space, greatly reducing the need for sampling and enabling the handling of larger action spaces. However, the coupling between policy updates and value estimation in DPG leads to insufficient stability, particularly being highly sensitive to hyperparameters. The difficulty of tuning hyperparameters in actor-critic algorithms and challenges in reproducibility make them hard to apply in practical scenarios. When extended to application fields, the robustness of the algorithm is also one of the most concerned core issues.
The above methods are relatively simple and intuitive, easy to understand and implement, and suitable for smaller-scale problems. With relatively few samples, classical RL methods typically achieve good performance, especially in stable environments and reward settings. More importantly, these methods exhibit strong interpretability, allowing a clear understanding of the agent's behavior and decision-making process within the environment.
Although RL has achieved remarkable success, previous methods have often struggled to handle high-dimensional data such as images and text. This limitation has constrained their ability to deal with complex tasks and environments. Classical RL methods often find it challenging to strike a good balance between exploration and exploitation, leading to susceptibility to local optima, especially in high-dimensional and complex environments where issues of insufficient exploration are more pronounced. The reason for this situation is that RL algorithms, like other algorithms, face challenges such as memory complexity, computational complexity, and sample complexity <cit.>. However, the powerful representation learning and function approximation capabilities of deep learning bring a completely new solution for RL.
Deep learning is a branch of machine learning aimed at using multi-layer neural network models to learn representations and features of data, and solving various tasks through these representations and features. Deep learning models have strong nonlinear function approximation capabilities, allowing them to learn more complex and accurate data representations and features. Deep learning models have strong generalization and representation learning capabilities, enabling them to learn more accurate and effective policies or value functions, thereby allowing RL agents to tackle more complex and high-dimensional tasks and environments, achieving higher performance and accuracy. Deep reinforcement learning (DRL) is the product of integrating both RL and DL. Similarly, DRL methods are also divided into three types: value-based, policy-based, and actor-critic methods. Among them, value-based DRL employs DL to approximate value functions, while policy-based DRL uses DL to approximate policies and solve decision-making policies based on policy gradient rules. The following will introduce representative works in DRL.
In 2013, Mnih et al. from DeepMind combined DL with Q-learning, proposing the groundbreaking Deep Q-network (DQN)<cit.>. DQN is a DRL algorithm based on Q-learning. On one hand, it utilizes deep neural networks as value function estimators, and on the other hand, it introduces experience replay and target networks. The experience replay mechanism breaks the high dependency between sampled samples, while the target network alleviates the instability of neural networks during training. These two mechanisms work together to enable the DQN algorithm to achieve performance close to or even surpassing human levels in most Atari games. Double DQN<cit.>, based on Double Q-learning, is an improvement over DQN. Similar to how DQN extends Q-learning, DDQN addresses the overestimation issue by using two Q-networks. DDQN achieves better stability and algorithm performance compared to DQN. In 2017, Dai et al. combined RL with graph embedding and proposed S2V-DQN<cit.>. They utilized a graph embedding network called structure2vec (S2V) to represent the policy in greedy algorithms and employed multi-step DQN to learn greedy policies parameterized by the graph embedding network. The S2V-DQN algorithm generates high-quality solutions faster, sometimes finding better solutions than commercial solvers within a longer timeframe.
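The distinction between the DQN and Double DQN targets can be written compactly; q_online and q_target below are assumed callables mapping a batch of states to Q-values of shape (batch, n_actions), and the batch layout is illustrative:

import numpy as np

def dqn_targets(batch, q_target, gamma=0.99):
    """Standard DQN target: y = r + gamma * max_a' Q_target(s', a')."""
    s2, r, done = batch["next_states"], batch["rewards"], batch["dones"]
    return r + gamma * (1.0 - done) * q_target(s2).max(axis=1)

def double_dqn_targets(batch, q_online, q_target, gamma=0.99):
    """Double DQN: the online network selects the action, the target network
    evaluates it, counteracting the max-operator overestimation bias."""
    s2, r, done = batch["next_states"], batch["rewards"], batch["dones"]
    a_star = q_online(s2).argmax(axis=1)
    return r + gamma * (1.0 - done) * q_target(s2)[np.arange(len(a_star)), a_star]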
In 2015, inspired by the ideas of DQN, Lillicrap et al. combined neural networks with the DPG algorithm to propose DDPG<cit.>. DDPG employs two different parameterized deep neural networks to represent the value network and the deterministic policy network. The policy network is responsible for updating the policy, while the value network outputs the Q-values for state-action pairs. Similar to DQN, DDPG also utilizes target networks to overcome the instability issues during network updates. Zhang et al. <cit.> considered the feasibility constraints of NP-hard problems, embedding heuristic functions into the transformer architecture, and applying DRL to combinatorial optimization problems with feasibility constraints. Ma et al.<cit.> introduced a Graph Pointer Network (GPN) to solve the classical TSP problem and combined it with a hierarchical RL framework to address the TSP problem with time window constraints. Multiple Traveling Salesman Problems (MTSP) are more complex, and Hu et al.<cit.> designed a network consisting of a shared graph neural network and distributed policies to learn a common policy expression suitable for MTSP. Experimental results demonstrated the effectiveness of this approach on large-scale problems.
In 2016, Mnih et al. <cit.> developed an improved Actor-Critic algorithm called A3C (Asynchronous Advantage Actor-Critic). By utilizing asynchronous gradient descent to optimize the parameters of deep neural networks (DNNs), A3C significantly improved the efficiency of policy optimization. Vinyals et al. <cit.> proposed the Pointer Network model for solving combinatorial optimization problems, which initiated a series of research studies on utilizing DNNs for solving combinatorial optimization problems. This model was inspired by the Seq2Seq model in machine translation. It employs a deep neural network-based encoder to encode the input sequence of the combinatorial optimization problem (such as city coordinates), then utilizes a decoder and attention mechanism to compute the selection probabilities of each node. Finally, it selects nodes in an autoregressive manner until obtaining a complete solution. Due to the supervised nature of the training method proposed by Vinyals et al. <cit.>, the quality of the solutions it obtains will never exceed the quality of the sample solutions. Recognizing this limitation, Bello et al. <cit.> employed an RL approach to train the Pointer Network model. They treated each problem instance as a training sample, used the objective function of the problem as the feedback signal, and trained the model with REINFORCE. They also introduced a critic network as a baseline to reduce training variance. Furthermore, Nazari et al. <cit.> extended the Pointer Network to handle dynamic VRP problems. They replaced the LSTM in the encoder input layer with a simple one-dimensional convolutional layer, reducing the training time by 60% while maintaining optimization effectiveness.
In recent years, the Transformer<cit.> has achieved tremendous success in the field of natural language processing. Its multi-head attention mechanism enables better extraction of deep features from problems. In view of this, several recent studies have drawn inspiration from the Transformer for solving combinatorial optimization problems. Deudon et al. <cit.> improved traditional pointer network models by incorporating ideas from the Transformer. They utilized a similar structure to the Transformer in the encoder, while the decoder employed linear mapping of the decisions from the last three steps to obtain a reference vector, thereby reducing model complexity. The attention calculation method remained the same as in traditional Pointer Network models, and the classic REINFORCE method is still used to train the model. Kool et al.<cit.> proposed a new method capable of solving multiple combinatorial optimization problems using attention mechanisms. The attention calculation method in this model adopted the self-attention computation method from the transformer, with additional computational layers to enhance performance. They further designed a greedy rollout baseline to replace the Critic network, leading to significant improvements in optimization performance.
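At the core of these Transformer-based solvers is scaled dot-product attention, which can be written in a few lines; the node embeddings below are random placeholders standing in for, e.g., encoded city coordinates:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the core Transformer operation:
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# e.g. self-attention over 5 node embeddings of dimension 8 in a TSP encoder
rng = np.random.default_rng(0)
nodes = rng.standard_normal((5, 8))
out = scaled_dot_product_attention(nodes, nodes, nodes)
print(out.shape)  # (5, 8)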
Deep reinforcement learning, as one of the most popular research directions in the field of artificial intelligence, has shown great potential in solving complex tasks and addressing various real-world problems. However, DRL also has its limitations, such as data requirements, sample efficiency, computational resources, interpretability, etc. Despite achieving some success in both research and application domains, DRL fundamentally remains constrained to simulated environments with ideal, highly structured experimental data design. They heavily rely on the design and training of specific models. Therefore, there is a growing interest in the design and optimization of automatic algorithms.
§ LLMS AS OPTIMIZATION
§.§ LLMs as the Black-box Optimization Search Model
There is a strong alignment between Large Language Models (LLMs), which are powerful in generating creative texts, and Evolutionary Algorithms (EAs), capable of discovering diverse solutions to complex real-world problems <cit.>.
With their powerful knowledge storage and generation capabilities, LLMs can support optimization algorithms in problem decomposition, parameter search, and solution generation.
As shown in Fig. <ref>, approaches that use LLMs to enhance optimization algorithms can be classified into two categories: 1) the first uses the large model as the search operator of a black-box optimization model, which makes full use of the knowledge storage and experience of LLMs and thus can effectively reduce manual effort; 2) the second exploits the generative capacity of the large model, feeding its understanding of the optimization problem into the model to generate suitable optimization algorithm configurations, or to generate optimization algorithms for solving specific problems. The use of large models to assist the design of optimization algorithms has attracted preliminary research and widespread attention. How to exploit the advantages of large models in algorithm design and integrate them into the optimization algorithm framework has become the key research question in this field. A detailed classification of the goals of LLM-assisted optimization is shown in Table <ref>.
Yang et al. <cit.> proposed Optimization by PROmpting (OPRO), which uses natural-language descriptions to guide LLMs in searching for solutions to optimization problems. This method is particularly effective in the derivative-free optimization scenarios that are common in real-world applications.
Technically, OPRO includes previously generated solutions and their objective values in the meta-prompt, allowing the LLM to iteratively improve upon them. The framework also balances exploration and exploitation, crucial for effective optimization, by adjusting the sampling temperature of the LLM to encourage both refinement of existing solutions and discovery of new ones.
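To make the loop concrete, the following minimal sketch (ours, not the authors' code) shows the OPRO-style meta-prompt cycle on a toy one-dimensional objective; `llm_complete` is a hypothetical stand-in for any LLM completion API and is stubbed so the example runs end to end.

```python
import random

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with any real completion API."""
    # Stub: propose a random candidate so the sketch runs without an API.
    return str(random.uniform(-10, 10))

def objective(x: float) -> float:
    return -(x - 3.0) ** 2  # maximize: optimum at x = 3

# The meta-prompt carries the trajectory of (solution, score) pairs.
history = []
for step in range(20):
    pairs = "\n".join(f"x = {x:.3f}, f(x) = {s:.3f}" for x, s in history[-10:])
    prompt = (
        "You are optimizing f(x). Previous solutions and scores:\n"
        f"{pairs}\n"
        "Propose a new x likely to score higher. Reply with a number only."
    )
    try:
        x = float(llm_complete(prompt))
    except ValueError:
        continue  # skip unparsable replies
    history.append((x, objective(x)))
    history.sort(key=lambda p: p[1])  # ascending, so the best appear last

print("best:", max(history, key=lambda p: p[1]))
```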
Chen et al. <cit.> explored prompt optimization methods in multi-step tasks to improve task execution efficiency. They constructed a discrete LLM-based prompt optimization framework that automatically provides improvement suggestions by integrating human-designed feedback rules and preference alignment. They note that while LLMs perform well in single-step tasks, real-world multi-step tasks pose new challenges, such as more complex prompt content, difficulty in evaluating the impact of individual steps, and differing human preferences for task execution. To address these issues, the researchers introduced human feedback, leveraging human expertise and combining it with a genetic-algorithm-style framework to optimize prompts.
For the current emerging optimization problem of neural architecture search, Zhang et al. <cit.> designed an automated machine learning (AutoML) system based on large-scale language models (LLMs), AutoML-GPT. AutoML-GPT utilizes a GPT model as a bridge to connect multiple AI models and dynamically uses optimized hyperparameters to train the models. The system automatically generates corresponding prompt paragraphs to search for the optimal model architecture and parameters by dynamically taking inputs from user requests and data cards. It then automatically executes the entire experimental process, from data processing to model architecture, hyperparameter tuning, and prediction of training logs.
AhmadiTeshnizi et al. <cit.> proposed a large language model (LLM)-based agent called OptiMUS. OptiMUS is designed to formulate and solve mixed-integer linear programming (MILP) problems from natural language descriptions. It is capable of developing mathematical models, writing and debugging solver code, developing tests, and checking the validity of the generated solutions.
§.§.§ Heuristic Search Operators
To tackle NP-hard combinatorial optimization problems (COPs) using large language models (LLMs) as hyper-heuristics (LHHs), Ye et al. <cit.> proposed ReEvo, a framework that emulates the reflective design approach of human experts. It leverages the scalable inference capabilities of LLMs, Internet-scale domain knowledge, and powerful evolutionary search strategies. ReEvo generates heuristics through LLMs with minimal human intervention, offering an open-ended heuristic space and the potential for knowledge and competence beyond that of human experts.
The research aims to address the complexity and heterogeneity of COPs by automating the design process of heuristics, which traditionally requires extensive trial and error from domain experts.
ReEvo incorporates a dual-level reflection mechanism, where short-term reflections are used to analyze heuristics' relative performance, and long-term reflections are accumulated to guide their evolution. This reflective process allows ReEvo to adapt and improve heuristics over time, leading to smoother fitness landscapes and more effective search results.
Zhong et al. <cit.> introduced a groundbreaking approach to the design of metaheuristic algorithms by utilizing the large language model ChatGPT-3.5. They proposed an animal-inspired metaheuristic named Zoological Search Optimization (ZSO) for continuous optimization problems. ZSO mimics the collective behaviors of animals through two key search operators, a prey-predator interaction operator and a social flocking operator, which together balance exploration and exploitation effectively.
A similar approach, Language Model Crossover (LMX) <cit.>, utilizes large pre-trained language models to generate new candidate solutions. LMX concatenates the parent solutions into a prompt and feeds that prompt to the LLM, collecting offspring from the output. This approach is simple to implement and generates high-quality offspring across various domains, including binary strings, mathematical expressions, English sentences, image-generation prompts, and Python code.
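A minimal sketch of the LMX idea follows, under the assumption that any text-completion LLM is available behind a hypothetical `llm_complete` call (stubbed below):

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; any completion endpoint works here."""
    return "a newly generated candidate"  # stub

def lmx_crossover(parents: list[str], n_children: int = 4) -> list[str]:
    """Language Model Crossover: list the parents in a prompt and let the
    LLM continue the pattern, treating each completion as an offspring."""
    prompt = "".join(f"Solution: {p}\n" for p in parents) + "Solution:"
    return [llm_complete(prompt).strip() for _ in range(n_children)]

# Example genome: image-generation prompts (any text genome works).
parents = ["a watercolor fox at dawn", "an ink sketch of a fox in fog"]
children = lmx_crossover(parents)
```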
Liu et al. <cit.> explored the potential of LLMs such as GPT-4 to generate novel hybrid swarm intelligence optimization algorithms. GPT-4 identifies and decomposes six swarm algorithms that perform well in continuous optimization: particle swarm optimization (PSO), cuckoo search (CS), the artificial bee colony algorithm (ABC), the grey wolf optimizer (GWO), the self-organizing migration algorithm (SOMA), and the whale optimization algorithm (WOA). Prompts are constructed to guide the LLM to search for improved solutions starting from the current population's parent solutions.
INSTINCT (INSTruction optimization using Neural bandits Coupled with Transformers) <cit.> is a black-box LLM prompt optimization method. It employs a novel neural bandit algorithm, Neural Upper Confidence Bound (NeuralUCB), in place of the Gaussian process model in Bayesian optimization (BO). NeuralUCB uses a neural network (NN) as a surrogate while retaining the theoretical grounding of the exploration-exploitation trade-off in BO. More importantly, NeuralUCB allows the natural coupling of the NN surrogate with hidden representations learned by pre-trained transformers (i.e., open-source LLMs), significantly improving algorithmic performance.
§.§.§ Multi-Objective Optimization
Brahmachary et al. <cit.> introduced an approach to numerical optimization using LLMs called the Language-Model-Based Evolutionary Optimizer (LEO), which leverages the reasoning capabilities of LLMs to perform zero-shot optimization across a variety of scenarios, including multi-objective and high-dimensional problems.
The novelty of LEO lies in its population-based strategy, which incorporates an elitist framework with separate exploration and exploitation pools of solutions. This strategy not only harnesses the optimization capabilities of LLMs but also mitigates the risk of getting stuck in local optima. The method differs from other auto-regressive, evolutionary, or population-based methods in that it uses LLMs to generate new candidate solutions, providing a unique balance between exploration and exploitation. A series of test cases shows that LEO can handle not only single-objective but also multi-objective optimization problems well.
Liu et al. <cit.> leverage the capabilities of large language models (LLMs) to design operators for multi-objective evolutionary algorithms (MOEAs). The research addresses the challenges associated with the manual design of search operators in MOEAs, which often require extensive domain knowledge and can be time-consuming. The authors propose a method for decomposing the multi-objective optimization problem (MOP) into several single-objective subproblems (SOPs). LLMs are employed as search operators for each subproblem through prompt engineering. This allows the LLM to serve as a black-box search operator in a zero-shot manner without problem-specific training.
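The decomposition idea can be sketched as follows; `llm_propose` is a hypothetical placeholder for the prompted LLM operator (whose real prompt would contain the subproblem weight and current best solution), and the two objectives are toy functions of our choosing:

```python
import random

def llm_propose(current_best):
    """Hypothetical LLM search operator, stubbed with a local perturbation."""
    return [v + random.uniform(-0.1, 0.1) for v in current_best]

def f1(x): return sum(v * v for v in x)          # toy objective 1
def f2(x): return sum((v - 1) ** 2 for v in x)   # toy objective 2

def scalar(x, w):                                 # weighted-sum scalarization
    return w[0] * f1(x) + w[1] * f2(x)

# Decompose the 2-objective problem into five scalar subproblems.
weights = [(w, 1 - w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
pop = {w: [random.random() for _ in range(3)] for w in weights}

for gen in range(100):
    for w in weights:
        cand = llm_propose(pop[w])
        if scalar(cand, w) < scalar(pop[w], w):   # keep improvements only
            pop[w] = cand
```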
Bradley et al. <cit.> introduced Quality-Diversity through AI Feedback (QDAIF), which combines evolutionary algorithms and large-scale language models to generate high-quality and diverse candidates. The core idea of QDAIF is to use the language model both to create variants and to evaluate the quality and diversity of candidates. The EA maintains the archive of solutions and, based on the LLM's evaluations, replaces entries with newly generated higher-quality and more diverse solutions, yielding an iterative optimization search. QDAIF finds higher-quality and more diverse solution sets within the search space, a successful application of LLMs to QD problems.
§.§ LLMs as the Generator of Optimization Algorithms
Optimization usually requires designing a suitable scheme for the specified task. Thanks to the problem-understanding and algorithm-analysis capabilities of LLMs, LLM-based design can produce suitable selections and combinations of optimization methods that were out of reach for the optimization methods of the pre-LLM era <cit.>.
§.§.§ Option Generation with the Cognition of LLMs
Zhang et al. <cit.> explored the use of large language models to generate optimal configurations during hyperparameter optimization (HPO). The method does not rely on a predefined search space; it selects which parameters to optimize and specifies bounds for them. Furthermore, the process treats the code of the specified model as hyperparameters to be output by the LLM, which goes beyond the capabilities of existing HPO methods.
Liu et al. <cit.> proposed AgentHPO, a system that leverages the advanced capabilities of LLMs to streamline the HPO process, traditionally a labor-intensive and complex task requiring significant computational resources and expert knowledge. The novelty of AgentHPO lies in its architecture of two specialized agents: the Creator and the Executor. The Creator agent interprets task-specific details provided in natural language and generates initial hyperparameters (HPs), emulating the role of a human expert; it draws on extensive domain knowledge and sophisticated reasoning to propose HP configurations expected to yield optimal model performance.
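A minimal sketch of such an LLM-driven HPO loop follows; the names `llm_complete` and `validate` are ours (not from AgentHPO), and both the LLM call and the training run are stubbed:

```python
import json, random

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with a random but well-formed config."""
    return json.dumps({"lr": round(10 ** random.uniform(-5, -2), 6),
                       "batch_size": random.choice([16, 32, 64])})

def validate(config: dict) -> float:
    """Stand-in for a real training + validation run; returns a score."""
    return -abs(config["lr"] - 3e-4) * 1e3 - abs(config["batch_size"] - 32) / 64

trials = []
for _ in range(10):
    prompt = ("Task: image classification with ResNet-18.\n"
              "Past trials (config -> score):\n"
              + "\n".join(f"{json.dumps(c)} -> {s:.4f}" for c, s in trials)
              + "\nReply with the next config as JSON (keys: lr, batch_size).")
    config = json.loads(llm_complete(prompt))   # parse the LLM's suggestion
    trials.append((config, validate(config)))

best_config, best_score = max(trials, key=lambda t: t[1])
```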
Ma et al. <cit.> explored the effectiveness of LLMs as prompt optimizers. They found that LLM optimizers struggle to accurately identify the root cause of errors during reflection and often fail to generate appropriate prompts for the target model in a single optimization step, even when the output is semantically valid. They therefore proposed a new paradigm of "automated behavioral optimization" designed to optimize the behavior of the target model more controllably and directly.
§.§.§ Algorithm Generation with Chain-of-Thought
Liu et al. <cit.> proposed Algorithm Evolution using Large Language Models (AEL), which automates the generation of efficient algorithms for specific optimization problems. AEL creates and improves algorithms by interacting with LLMs within an evolutionary framework, eliminating the need for model training and significantly reducing the need for domain knowledge and expert skill. On the Traveling Salesman Problem (TSP), the constructive algorithms generated by AEL outperform hand-crafted heuristics and algorithms generated directly by LLMs. They further introduced an improved framework for automated algorithm design that combines evolutionary computation and LLMs <cit.>. By automating algorithm design, combination, and modification, the AEL framework significantly reduces manual effort; the researchers used it to design a Guided Local Search (GLS) algorithm for the TSP.
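The following toy sketch illustrates the evolve-and-evaluate pattern behind AEL-style algorithm evolution on a small TSP instance; `llm_write_heuristic` is a hypothetical stand-in for the prompted LLM, stubbed with a nearest-neighbour rule so the loop runs:

```python
import math, random

def llm_write_heuristic(parents):
    """Hypothetical LLM call returning Python source that defines
    score(dist, tour, j); stubbed with a nearest-neighbour rule."""
    return "def score(dist, tour, j):\n    return -dist"

cities = [(random.random(), random.random()) for _ in range(20)]
def d(a, b): return math.dist(cities[a], cities[b])

def tour_length(score_fn):
    """Greedily build a tour using the evolved scoring rule."""
    tour, left = [0], set(range(1, len(cities)))
    while left:
        nxt = max(left, key=lambda j: score_fn(d(tour[-1], j), tour, j))
        tour.append(nxt)
        left.remove(nxt)
    return sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

population = [llm_write_heuristic([]) for _ in range(4)]
for gen in range(5):
    scored = []
    for src in population:
        ns = {}
        exec(src, ns)                        # materialize the generated code
        scored.append((tour_length(ns["score"]), src))
    scored.sort()                            # shorter tours are fitter
    survivors = [src for _, src in scored[:2]]
    population = survivors + [llm_write_heuristic(survivors) for _ in range(2)]
```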
The PromptBreeder (PB) system <cit.> utilizes the chain-of-thought prompting strategy, which can significantly improve the reasoning ability of LLMs across domains. It is a generalized mechanism for self-referential self-improvement of LLMs. At the heart of the system is self-reference: not only do the task prompts evolve, but the mutation prompts used to generate them also improve over time. PB outperforms existing state-of-the-art prompting strategies, such as Chain-of-Thought and Plan-and-Solve prompting, on several commonly used benchmarks.
Pluhacek et al. <cit.> used an LLM to identify, decompose, and hybridize six swarm algorithms that perform well on continuous optimization problems. Their Enhanced Swarm Exploration and Exploitation Optimizer (ESEEO) combines elements of particle swarm optimization (PSO), cuckoo search (CS), and the artificial bee colony algorithm (ABC) to maintain population diversity and balance exploration and exploitation. Their Limited Evaluation Population Optimizer (LESO), designed to solve expensive optimization problems with a limited number of objective-function evaluations, combines features of PSO, the grey wolf optimizer (GWO), and ABC to explore and exploit effectively within a restricted evaluation budget.
Integrating LLMs into Genetic Programming (GP) approaches remains a challenge. Bradley et al. <cit.> presented an LLM-based algorithm-generation tool that converts natural-language descriptions into implementation code and automatically repairs program errors. The work also presents two uses of LLMs as evolutionary operators: difference (diff) modeling, a language model specialized in predicting code diffs, and LMX crossover, a technique for generating candidate solutions from multiple parents.
Similarly, Hemberg et al. <cit.> proposed an LLM-based GP algorithm, LLM-GP, for generating optimization algorithm code.
Unlike conventional GP algorithms, LLM-GP harnesses the pre-trained pattern-matching and sequence-completion capabilities of the LLM. This unique feature allows genetic operators to be designed and implemented in new ways, paving the way for more efficient and effective algorithm generation.
§ OPTIMIZATION ALGORITHMS OPTIMIZE LARGE LANGUAGE MODELS
Large Language Models (LLMs) have exhibited exceptional proficiency in many natural language processing tasks, encompassing text generation and sentiment analysis. Nevertheless, optimizing their performance and efficiency remains a crucial obstacle, especially in intricate and unpredictable circumstances. Optimization algorithms (OAs) have emerged as a promising means of improving LLMs: evolutionary methods in particular apply the concepts of natural selection to search the parameter and design spaces of LLMs <cit.>, covering tasks such as prompt engineering, model architecture optimization, hyperparameter setting, and multi-task learning. This holistic strategy seeks optimal solutions in problems with extensive search spaces, thereby enhancing the effectiveness of LLMs in fields such as natural language processing, software engineering, and neural architecture search. In this section, we delve into OA-based optimization of LLMs, exploring strategies such as model tuning, prompt tuning, and network architecture search to unlock the full potential of these transformative language models. The framework of optimization algorithms optimizing LLMs is shown in Fig. <ref>.
§.§ Optimize model tuning
§.§.§ Multi-task learning optimization
Multi-task learning (MTL) optimization of large models uses optimization methods such as evolutionary algorithms to optimize model architectures for multiple tasks simultaneously. Through multi-task learning, researchers identify optimal architectures for multiple target tasks at once in large pre-trained models <cit.>. The advantage of this approach is that relevant information can be shared and learned interactively across tasks, improving the generalization ability and efficiency of the model. Table <ref> summarizes the main optimization-based model tuning methods.
The optimization strategy of multi-task learning reduces training costs, enhances model performance and improves adaptability across diverse domains and tasks by concurrently identifying optimal model architectures for multiple objectives using evolutionary algorithms <cit.>. For example, Choong et al. <cit.> proposed the concept of a diverse set of compact machine learning model sets designed to efficiently address multiple target architectures in large pre-trained models through an evolved multi-task learning paradigm. Baumann et al. <cit.> present an evolutionary multi-objective approach designed to optimize prompts in large language models (e.g., ChatGPT), demonstrating its effectiveness in crafting prompts in a sentiment analysis task that effectively captures conflicting emotions. Yang et al. <cit.> treated instruction generation as an evolutionary multi-objective optimization problem, using a large-scale language model to simulate instruction operators in order to improve the quality and diversity of generated instructions.
Meanwhile, Gupta and Bali et al. <cit.> investigated multi-objective multi-task evolutionary algorithms, such as MO-MFEA, for creating task-specific, small-scale models derived from Large Language Models (LLMs) in the context of architecture search research. Treating the LLM as a general base model, these specialized models exhibit enhanced performance or greater compression across different application fields and neural network designs.
§.§.§ Structural pruning optimization
Structural pruning optimizes large language models (LLMs) by selectively removing non-critical coupled structures based on gradient information, effectively reducing model size while preserving functionality and remaining task-agnostic. It is an essential optimization technique for adapting pre-trained LLMs to downstream tasks such as text categorization and sentiment analysis. The structural pruning suggested by Klein et al. <cit.> seeks to uncover numerous subnetworks of an LLM that trade off performance against size, making them easier to deploy in real-world applications; it employs a multi-objective local search algorithm to identify numerous Pareto-optimal subnetworks efficiently, minimizing evaluation cost via weight sharing. Ma et al. <cit.> proposed the LLM-Pruner approach, which selectively removes non-critical coupled structures based on gradient information to achieve efficient compression while preserving functionality and task-agnosticism. Gholami et al. <cit.> demonstrated that weight pruning can serve as an optimization strategy for the Transformer architecture, showing that judicious pruning can significantly reduce model size without sacrificing performance, thus helping bridge the gap between model efficiency and performance.
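As a concrete illustration of coupled-structure pruning, the sketch below removes the least important hidden channels of a Transformer MLP block in PyTorch. It uses a simple magnitude criterion rather than LLM-Pruner's gradient-based importance, so it should be read as an assumption-laden toy, not the published method:

```python
import torch
import torch.nn as nn

def prune_mlp_channels(fc_in: nn.Linear, fc_out: nn.Linear, keep_ratio: float = 0.5):
    """Drop the hidden channels whose coupled weights carry the least L2 mass,
    keeping the in/out projections consistent (the coupled-structure idea,
    here with a magnitude criterion instead of gradients)."""
    importance = fc_in.weight.norm(dim=1) + fc_out.weight.norm(dim=0)
    k = max(1, int(keep_ratio * importance.numel()))
    keep = importance.topk(k).indices.sort().values

    new_in = nn.Linear(fc_in.in_features, k)
    new_in.weight.data = fc_in.weight.data[keep]
    new_in.bias.data = fc_in.bias.data[keep]

    new_out = nn.Linear(k, fc_out.out_features)
    new_out.weight.data = fc_out.weight.data[:, keep]
    new_out.bias.data = fc_out.bias.data.clone()
    return new_in, new_out

# Example: prune a 512 -> 2048 -> 512 MLP block to a quarter of its width.
fc1, fc2 = nn.Linear(512, 2048), nn.Linear(2048, 512)
fc1, fc2 = prune_mlp_channels(fc1, fc2, keep_ratio=0.25)
```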
§.§ Prompt optimization
Prompt tuning, often referred to as prompt optimization, fine-tunes the prompts of large language models (LLMs) without access to the underlying model parameters and gradients, a strategy especially beneficial for closed-source models with limited access. Prompt optimization enhances model generation in few- or zero-shot scenarios by refining the input prompts. Optimization algorithms (OAs), renowned for their adaptability and efficiency when a system's internal workings are unknown, can discover prompts that improve task performance using only the outcomes of LLM inference. Prompt tuning falls into two types: continuous and discrete. Continuous prompt tuning employs continuous optimization algorithms such as CMA-ES <cit.> to improve the quality of prompt embeddings <cit.>, using techniques like partitioning and subspace decomposition to enhance the embedding space. Conversely, discrete prompt tuning uses discrete optimization algorithms to explore the prompt space directly, employing specialized genetic operators to adjust prompts and tame the combinatorial explosion of the search space. Table <ref> summarizes the main prompt optimization methods.
§.§.§ Continuous prompt optimization
Continuous prompt tuning optimizes the performance of LLMs by tuning prompt embeddings: the embedding vectors are iteratively adjusted to maximize model performance on a particular task, improving the quality of generation <cit.>. It typically explores strategies such as stochastic embedding, subspace decomposition, and knowledge distillation to improve embedding quality and search performance. For example, Sun et al. <cit.> presented BBTv2, an improved version of Black-Box Tuning that uses a divide-and-conquer gradient-free algorithm to optimize prompts at different layers of pre-trained models, achieving comparable performance under few-shot settings. Fei et al. <cit.> introduced a gradient-free framework that optimizes continuous textual inversion with an iterative evolutionary strategy, accelerating optimization with minimal performance loss, and compared it with gradient-based methods on various GPU/CPU platforms. Pryzant et al. <cit.> proposed Automatic Prompt Optimization (APO), which uses techniques inspired by numerical gradient descent to automatically improve the prompts of LLMs, obtaining significant performance gains in various NLP tasks and jailbreak detection. Zheng et al. <cit.> proposed black-box prompt optimization with subspace learning (BSL), which enhances the versatility of prompt optimization across tasks and LLMs by identifying common subspaces through meta-learning, ensuring competitiveness across a variety of downstream tasks.
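The following sketch illustrates the common recipe behind these black-box methods: optimize a low-dimensional vector z, map it to the soft-prompt space through a fixed random projection, and query the model only for a loss value. BBT uses CMA-ES; for self-containment we substitute a simple (1+λ) evolution strategy, and `model_loss` stands in for the API call:

```python
import numpy as np

rng = np.random.default_rng(0)
PROMPT_TOKENS, EMB_DIM, SUBSPACE = 20, 768, 64

# Fixed random projection from a small search space to the full soft prompt;
# only z is optimized, A never changes.
A = rng.normal(0, 1 / np.sqrt(SUBSPACE), (PROMPT_TOKENS * EMB_DIM, SUBSPACE))

def model_loss(prompt_emb: np.ndarray) -> float:
    """Stand-in for an API call that returns the task loss for a soft prompt."""
    target = np.full_like(prompt_emb, 0.01)
    return float(((prompt_emb - target) ** 2).mean())

z = np.zeros(SUBSPACE)
best = model_loss(A @ z)
for step in range(200):                      # simple (1+lambda) evolution strategy
    for _ in range(8):
        cand = z + rng.normal(0, 0.1, SUBSPACE)
        loss = model_loss(A @ cand)
        if loss < best:
            z, best = cand, loss
```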
Recently, researchers have used techniques such as knowledge distillation, variational reasoning, and federated learning to improve search efficiency, generalization, and security. For example, Shen et al. <cit.> presented techniques for adapting large pre-trained language models (PLMs) to downstream tasks using only black-box API access, achieving performance competitive with gradient-based methods while also accounting for predictive uncertainty in prompts. Sun et al. <cit.> presented BBT-RGB, a set of techniques for improving the efficiency and performance of black-box tuning of large language models without access to gradients and hidden representations, demonstrating its effectiveness on a variety of natural language understanding tasks. Han et al. <cit.> proposed GDFO, which ensembles gradient descent and derivative-free optimization to optimize task-specific continuous prompts of large pre-trained language models in black-box tuning scenarios, obtaining significant gains over previous state-of-the-art approaches. Sun et al. <cit.> proposed FedBPT, a framework for federated black-box prompt tuning that fine-tunes pre-trained language models efficiently and privately through collaborative prompt optimization without access to model parameters, reducing communication and memory costs while maintaining competitive performance. Chai et al. <cit.> introduced the Clip-Tuning technique to enhance search efficiency and provide more detailed and diverse evaluation feedback during black-box tuning. Unlike BBT, which relies on a single random projection matrix for dimensionality reduction, Clip-Tuning samples dropout subnetworks of the pre-trained model at inference time; these subnetworks act as predictive projections of samples in the original high-dimensional space, and aggregating the rewards from their predictions lets the search converge faster to the optimal solution.
In summary, continuous prompt tuning represents prompts as continuous vectors in an embedding space. The search space is continuous and in principle differentiable, but in black-box scenarios, where the internal structure of the model and its gradients are not accessible, gradient-free optimizers are applied directly in the embedding space. Continuous prompt tuning is therefore well suited to such black-box settings, although it may require more computational resources because of the high-dimensional search and the mathematical operations involved.
§.§.§ Discrete prompt optimization
Discrete prompt optimization searches for optimal prompts for a pre-trained language model, with prompts represented as discrete text sequences. These methods typically use genetic algorithms, particle swarm optimization, or other heuristic search methods to explore the discrete prompt space for the best prompt sequence <cit.>. Unlike continuous prompt tuning, discrete prompt optimization adjusts prompts at the level of text sequences and suits text-based tasks such as generation or classification. Even before the emergence of large language models, researchers investigated optimization methods for enhancing pre-trained language models: GrIPS <cit.> uses a gradient-free, stepwise edit-based search, whereas Genetic Prompt Search (GPS) <cit.> is grounded in the ideas of genetic algorithms. Most of these studies employ evolutionary algorithms (EAs) as the primary search engine, while the language model generates and assesses candidate prompts <cit.>. Within the discrete prompt space, specialized genetic operators fine-tune heuristics and directly identify the most effective prompts, improving the quality of the model's response to a given task.
Typically, these studies employ optimization algorithms as the search framework, with LLMs used to generate and evaluate prompts. Nevertheless, this research mainly concentrates on particular prompt engineering situations and has restricted breadth. To fully harness the potential of discrete optimization in black-box prompt optimization, the combinatorial explosion of discrete search spaces must be tackled. For example, Zhou et al. <cit.> proposed a simple black-box search method called ClaPS, which achieves state-of-the-art performance on a variety of tasks and LLMs while significantly reducing search cost by clustering and pruning the search space to focus on the key prompt tokens that affect LLM predictions. Yu et al. <cit.> proposed a black-box prompt tuning framework for vision-language models that optimizes visual and language prompts in an intrinsic parameter subspace through an evolutionary strategy, enabling task-relevant prompt learning without back-propagation. Pan et al. <cit.> introduced metaheuristic algorithms as a generic prompt-learning method; testing six typical methods, they demonstrated effectiveness in black-box prompt learning and Chain-of-Thought prompt tuning and discovered more understandable prompts, opening up new possibilities for prompt optimization. Lapid et al. <cit.> attacked large language models with a genetic algorithm, revealing the vulnerability of the models to malicious manipulation and providing a diagnostic tool for assessing and enhancing the alignment of language models with human intent. Guo et al. <cit.> presented EvoPrompt, a discrete prompt optimization framework that automatically optimizes LLM prompts using evolutionary algorithms; by linking LLMs and EAs, the method achieved significant improvements over manually designed prompts and existing automatic prompt-generation methods on 31 datasets. Pinna et al. <cit.> improved code generation by large language models using genetic improvement techniques, significantly raising the quality of generated code through user-supplied test cases and demonstrating the potential of combining LLMs with evolutionary techniques.
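An EvoPrompt-style loop can be sketched in a few lines; here `llm_mutate` and `dev_accuracy` are hypothetical stubs for the LLM rewrite operator and the dev-set scorer, respectively:

```python
import random

def llm_mutate(prompt: str) -> str:
    """Hypothetical LLM call, e.g. 'Rewrite this instruction, keep the meaning'."""
    return prompt + " Think step by step."   # stub

def dev_accuracy(prompt: str) -> float:
    """Stand-in for scoring the prompt on a labelled dev set."""
    return random.random()                   # stub

population = ["Classify the sentiment of the review.",
              "Decide if the review is positive or negative."]
for gen in range(10):
    ranked = sorted(population, key=dev_accuracy, reverse=True)
    parents = ranked[:2]                      # elitist selection
    children = [llm_mutate(random.choice(parents)) for _ in range(2)]
    population = parents + children

best_prompt = max(population, key=dev_accuracy)
```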
In general, discrete prompt optimization improves the performance of large language models in a black-box setting by searching for optimal prompt words or phrases in a discrete prompt space with techniques such as genetic algorithms, heuristic search, and cluster-based pruning, without requiring internal gradient information. Its advantages include effective performance gains without access to model internals and adaptability to few-shot or zero-shot tasks. Its disadvantages include a potentially huge search space, a tendency to fall into local optima, sensitivity to hyperparameters, limited generalization, results that are hard to interpret, and a strong dependence on the choice of evaluation metric.
§.§.§ Black-box optimization prompt tuning
Black-box optimization of large models refers to optimizing and tuning large pre-trained models (e.g., large language models) within a black-box optimization framework. Compared with traditional black-box optimization, it is more challenging: the complexity and sheer number of parameters of large pre-trained models make the optimization process slower and more involved. Here, optimization algorithms typically interact with the pre-trained model and adjust its inputs or exposed parameters step by step, without direct access to the model's internal structure or weights.
In recent years, a number of researchers have focused on applying black-box optimization to large-scale language models (LLMs) and vision-language models, proposing a variety of methods to optimize model performance without accessing the model's internal parameters or gradients. Yu et al. <cit.> optimized a vision-language model using a dialogue-feedback-based approach. Guo et al. <cit.> introduced the collaborative black-box tuning (CBBT) technique. Sun et al. <cit.> developed a black-box tuning framework for Language-Models-as-a-Service (LMaaS). Diao et al. <cit.> proposed a black-box discrete prompt learning (BDPL) algorithm. And Yu et al. <cit.> introduced a black-box prompt tuning framework for vision-language models. These studies demonstrate that, in black-box scenarios where model weights cannot be modified directly, external prompt learning and optimization effectively improve model performance in image classification, text-to-image generation, and adaptation to different downstream tasks.
§.§ Self-Tuning optimization
In contrast to initialization under the EvoPrompt framework, which relies on hand-crafted prompts, recent work has proposed automated prompting methods in which the LLM serves as a genetic operator within EAs, automatically creating high-quality prompts for itself and for other models. Table <ref> summarizes the main self-tuning optimization methods. For example, Singh et al. <cit.> applied an interpretable autoprompt (iPrompt) to generate a natural-language string that explains the data. Fernando et al. <cit.> proposed a self-improving mechanism for PromptBreeder that evolves and adapts prompts across domains, outperforming existing strategies on arithmetic, common-sense reasoning, and hate-speech classification tasks. Pryzant et al. <cit.> proposed a simple, non-parametric solution, Automatic Prompt Optimization (APO), which automatically improves prompts using techniques inspired by numerical gradient descent. Li et al. <cit.> proposed SPELL, a black-box evolutionary algorithm that uses a large language model to automatically optimize text-style prompts, demonstrating rapid improvements on a variety of text tasks.
Furthermore, an LLM can serve as a flexible prompt selector for tasks outside the domain it was trained on; self-tuning can operate over a versatile language domain without depending on parameter updates <cit.>. For example, Zhang et al. <cit.> proposed Auto-Instruct, which exploits the generative power of LLMs to automatically improve instruction quality for a variety of tasks, going beyond manually written instructions and existing baselines in many out-of-domain tasks, with significant generalizability to other LLMs.
§.§ Optimize network architecture search
Prompt-based optimization tools improve the quality of model output by optimizing the input format. Another approach, neural architecture search (NAS) for LLMs, focuses on directly optimizing the architecture of the model itself; in the context of LLMs, NAS takes a different form, optimizing the model's architecture rather than tuning its parameters. Table <ref> summarizes the main optimized network architecture search methods.
As the complexity of neural network models increases, manually designing efficient network architectures becomes time-consuming and challenging. NAS eases the burden on researchers by automating the design process, allowing efficient exploration of the vast search space to discover more efficient, more general, and less resource-intensive architectures <cit.>. Early NAS approaches simulated natural selection, with steps of randomly generating an initial population, selection, crossover (also called recombination), and mutation until termination conditions are met <cit.>. With the development of deep learning techniques and the growth of computing power, the NAS field is exploring new optimization strategies for large models, and increased computational resources together with new algorithms have significantly improved the efficiency and effectiveness of NAS. Nasir et al. <cit.> proposed a new NAS algorithm that effectively combines the advantages of LLMs and quality-diversity (QD) algorithms to automate the search for and discovery of high-performance neural architectures. So et al. <cit.> proposed the Evolved Transformer, discovered through evolutionary architecture search, which outperforms the original Transformer on translation tasks, achieving better performance with fewer parameters and maintaining high quality even at smaller sizes.
More sophisticated and effective search strategies have been proposed in recent years to improve the performance of large models. For example, Gao et al. <cit.> proposed AutoBERT-Zero, an automatic method for discovering the backbone of a general-purpose language model using a well-designed search space, an operation-priority evolutionary strategy, and a two-branch weight-sharing training strategy to improve search efficiency and performance. Ganesan et al. <cit.> performed task-independent pre-training of BERT models while generating differently shaped sub-networks by varying the hidden dimensions of the Transformer layers; rather than optimizing for a specific task, this yields a family of differently sized models that can be fine-tuned for various downstream tasks. Yin et al. <cit.> used one-shot neural architecture search (one-shot NAS) to automatically search architectural hyperparameters: a large SuperPLM obtained through one-shot learning serves as a proxy for all potential sub-architectures, an evolutionary algorithm searches for the best architectures on the SuperPLM, and the corresponding sub-models are then extracted and further trained. Javaheripi et al. <cit.> proposed a training-free NAS algorithm for finding Transformer architectures with an optimal balance between task performance (perplexity) and hardware constraints (e.g., peak memory usage and latency). Zhou et al. <cit.> proposed T-Razor, a Transformer architecture search method that uses zero-cost-proxy-guided evolution to improve search efficiency, evaluating and ranking Transformers with metrics such as synaptic diversity and synaptic saliency to efficiently find optimized architectures in the Transformer search space. Klein et al. <cit.> proposed weight-sharing-based neural architecture search as a structural pruning method that seeks the optimal balance between optimization efficiency and generalization performance, compressing large language models to reduce model size and inference latency.
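The evolutionary skeleton shared by many of these NAS systems is compact. In the toy sketch below, `fitness` stands in for a real train-and-evaluate run or a learned performance predictor (such as the LLM-distilled predictors discussed later), and the search space is an illustrative assumption:

```python
import random

SEARCH_SPACE = {"layers": [2, 4, 6, 8], "heads": [2, 4, 8], "ffn_mult": [2, 4]}

def mutate(arch: dict) -> dict:
    """Change one randomly chosen gene of the architecture encoding."""
    child = dict(arch)
    gene = random.choice(list(SEARCH_SPACE))
    child[gene] = random.choice(SEARCH_SPACE[gene])
    return child

def fitness(arch: dict) -> float:
    """Stand-in for train-and-evaluate or a performance predictor;
    here a toy score favouring mid-sized models."""
    size = arch["layers"] * arch["heads"] * arch["ffn_mult"]
    return -abs(size - 64)

population = [{k: random.choice(v) for k, v in SEARCH_SPACE.items()}
              for _ in range(8)]
for gen in range(20):                         # regularized-evolution style loop
    population.sort(key=fitness, reverse=True)
    population = population[:4] + [mutate(random.choice(population[:4]))
                                   for _ in range(4)]

best_arch = max(population, key=fitness)
```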
Overall, the main advantage of NAS for optimising large models is its ability to automate the exploration and discovery of efficient network architectures for specific tasks, significantly improving model performance while reducing manual design and tuning efforts. With intelligent search strategies, NAS helps save computational resources and time. However, this approach also faces challenges, including the large search space, the possibility of falling into local optimal solutions, and the large amount of computational resources required in the initial search and training phases. In addition, the selection and tuning of optimization algorithms require expertise, and the generalization ability of the network architecture obtained from the search still needs further validation. Future research may focus on improving the search efficiency, reducing the computational cost, and enhancing the generalisability and adaptability of the model.
§ APPLICATION OF LLMS-BASED OPTIMIZATION ALGORITHMS
As shown in Fig. <ref>, optimization algorithms are pivotal in various applications, broadly categorized into software programming, neural architecture search, and content generation.
LLM-based optimization algorithms are becoming increasingly important in artificial intelligence, especially in machine learning. They are used for software programming and neural architecture search to help design efficient network architectures. Furthermore, these algorithms are employed as innovative tools in content generation, optimizing the creation process to produce relevant and engaging content. This bifurcation in application highlights the versatility and evolving role of optimization algorithms in addressing both conventional challenges and pioneering technological advancements.
§.§ Assisted Optimization Programming: Software Programming
In the wave of artificial intelligence, optimization algorithms based on LLMs have gradually become an important research area to promote code generation and software development. With many parameters and deep learning capabilities, LLMs have shown powerful capabilities in many fields, such as natural language processing, image recognition, etc. Especially in the process of software development, the application of large models can improve the efficiency of code generation and further enhance the model performance through optimization algorithms. By automatically generating code for training models, non-professionals can also easily train efficient machine learning models, greatly reducing the technical threshold and expanding the audience for machine learning technology. Meanwhile, code integration practices in model development, such as static code integration and dynamic code integration, also play a key role in improving the efficiency and quality of software development.
Weyssow et al. <cit.> explore using Large Language Models (LLMs) for code generation tasks, focusing on Parameter-Efficient Fine-Tuning (PEFT) optimization techniques. The methodology aims to optimize the fine-tuning process of LLMs by updating a small portion of the model's parameters instead of all of them using the PEFT technique to achieve efficient fine-tuning in resource-constrained environments.
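As an illustration of why PEFT is cheap, the sketch below implements the core of one such technique, a LoRA-style low-rank adapter, in plain PyTorch: the base weights are frozen and only the small A and B matrices are trained. This is a minimal sketch of the general idea, not the cited paper's code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update:
    y = W x + scale * B A x, the core of LoRA-style PEFT."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
# Only the adapter parameters (A and B) are updated during fine-tuning.
```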
Cassano et al. <cit.> present a system called MultiPL-E, which is a system for translating code generation benchmark tests from Python to other programming languages.
Further, Pinna et al. <cit.> pointed out that in automatic code generation from problem descriptions, even the most capable LLMs often fail to generate correct code. To enhance LLM-based code generation, they proposed an evolutionary approach that uses Genetic Improvement (GI) to improve LLM-generated code with a collection of user-supplied test cases.
Program synthesis (PS), a form of automation, aims to reduce the time and effort required for software development while improving code quality. Although Genetic Programming (GP) is a competitive approach to program synthesis, it has limitations in evolving syntactically correct and semantically meaningful programs. Tao et al. <cit.> combined Generative Pre-trained Transformers (GPTs) with Grammar-Guided Genetic Programming (G3P) to address the program synthesis problem.
The OpenAI team developed the language model Codex <cit.>, fine-tuned on publicly available code from GitHub, and investigated its ability to write Python code. A specialized production version of Codex powers GitHub Copilot, a programming assistant.
Brownlee et al. <cit.> explored how large language models (LLMs) can serve as mutation operators in Genetic Improvement (GI) to improve the efficiency of the search process. GI is a search-based technique for improving the non-functional attributes (such as execution time) and functional attributes (such as fixing defects) of existing software.
Ji et al. <cit.> specifically presents a research overview of the assessment and interpretation of code generation capabilities based on large language models (LLMs), including two main phases: data collection and analysis. In the data collection phase, the prompts' features are quantified by extracting their linguistic features and performance metrics from the generated code. In the analysis phase, causal diagrams are constructed using causal discovery algorithms and further analyzed to identify principles of hint design.
Codex <cit.> was fine-tuned on publicly available GitHub code to enhance its Python coding capabilities and evaluated on a new test set, HumanEval, which assesses the functional correctness of programs synthesized from docstrings. On this dataset, Codex solved 28.8% of the problems, compared with GPT-3, which solved none, and GPT-J, which solved 11.4%.
CODAMOSA <cit.> integrates a pre-trained Codex with Search-Based Software Testing (SBST) to enhance test case code coverage. SBST generates high-coverage test cases by combining test case generation with mutation for programs under test. However, SBST may face stagnant coverage, meaning it struggles to produce new test cases that increase coverage. When SBST's coverage improvement stagnates, the CODAMOSA algorithm aids in relocating to more advantageous search space areas by using Codex to generate example test cases for functions with lower coverage.
Wu et al. <cit.> highlight the advances in code generation by large-scale language models (LLMs) and the associated security risks, particularly critical vulnerabilities in the generated code. Although some LLM providers have sought to mitigate these issues through human guidance, their efforts have yet to yield robust and reliable code LLMs in practice. The authors introduce DeceptPrompt, an algorithm that generates adversarial natural-language instructions prompting code LLMs to produce functionally correct yet vulnerable code. DeceptPrompt employs a systematic, evolution-based algorithm with a fine-grained loss design; it excels at identifying natural prefixes and suffixes with benign, non-directional semantics while effectively inducing code LLMs to generate vulnerable code. This enables near-worst-case red-team testing of these LLMs in real-world scenarios through natural language.
In summary, in software programming, large language models (LLMs) enhance code generation efficiency via optimization algorithms and reduce the complexity of machine learning technology. This simplification allows non-experts to train efficient models, thereby broadening the reach of machine learning.
§.§ Assisted Optimization Framework: Neural Architecture Search
Neural Architecture Search (NAS), an important technique for the automatic design of neural networks, is undergoing a transformation driven by combining LLMs and optimization algorithms. With their massive parameters and deep learning capabilities, LLMs show unprecedented potential in handling complex tasks. At the same time, the combination of well-designed optimization algorithms can further accelerate the Neural Architecture Search process, improving the search efficiency and the performance of the resulting models. With the continuous application of big models in NAS, we see their great potential in the automatic search and optimization of neural network structures, which not only greatly saves labor costs but also improves the innovation and diversity of model design.
Nasir et al. <cit.> presented LLMatic, a large-model-based neural architecture search (NAS) algorithm that uses two QD archives to search for competitive networks, combining the code-generation capabilities of large language models (LLMs) with the diversity and robustness of quality-diversity (QD) algorithms. LLMatic uses the LLM to generate new architectural variants and applies QD algorithms (in particular MAP-Elites) to discover diverse and robust solutions.
Chen et al. <cit.> found that EvoPrompting, an approach combining evolutionary prompt engineering and soft prompt tuning, consistently discovers diverse and high-performing models. The method uses evolutionary search to create and curate data that improves the in-context prompting examples given to the LM. While focused on neural architecture design, the approach applies equally to LM tasks that rely on in-context learning (ICL) or prompt tuning.
Jawahar et al. <cit.> built new uses for performance predictors (PP) with large language models that predict the performance of specific deep neural network (DNN) architectures on downstream tasks. They propose a hybrid search algorithm (HS-NAS) that uses LLM-Distill-PP in the initial phase of the search and a baseline predictor for the remainder. HS-NAS reduces search time by about 50% with performance comparable to SOTA NAS, sometimes also improving latency, GFLOPs, and model size.
Jawahar et al. <cit.> also introduced LLM-PP, a precise performance predictor developed with few-shot prompting of an LLM, achieving a mean absolute error (MAE) comparable to the state of the art (SOTA). LLM-Distill-PP, a more cost-effective distilled predictor, caters to applications like neural architecture search that require numerous predictions. The accompanying HS-NAS algorithm leverages the strengths of LLM-Distill-PP and the state-of-the-art performance estimator, halving NAS search times and identifying more efficient architectures.
Zheng et al. <cit.> explored the potential of GPT-4 models for the Neural Architecture Search (NAS) task of designing effective neural network architectures. At the same time, they propose an approach called GPT-4 Enhanced Neural archItectUre Search (GENIUS), which leverages the generative power of GPT-4 as a black-box optimizer to navigate the architectural search space quickly, identify promising candidate architectures, and iteratively refine these candidate architectures to improve performance.
EvoPrompting <cit.> employs advanced Language Models (LMs) for code-level Neural Architecture Search (NAS). This approach integrates evolutionary prompt engineering with soft-prompt tuning. It aims to iteratively refine contextual prompts and enhance prompt tuning on LMs, thereby boosting their capacity to generate innovative and diverse solutions for complex reasoning tasks.
Radford et al. <cit.> describe a method to enhance Natural Language Understanding (NLU) using Generative Pre-Training. They show that this approach significantly boosts performance across various NLU tasks by initially pre-training a language model on a vast corpus of unlabeled text, then applying supervised fine-tuning for particular tasks.
Chowdhery et al. <cit.> proposed PaLM (Pathways Language Model), a large-scale language model that has demonstrated excellent performance on a variety of natural language processing (NLP) tasks, as well as strong performance in network structure design and architecture search.
In summary, integrating LLMs with optimization algorithms in neural network architecture search (NAS) enhances search efficiency, fosters innovation, diversifies model designs, and opens new avenues for the automated design of complex neural architectures.
§.§ Assisted Optimization Generation: Content Innovation Generation
Innovative content generation has become a key driver of media, entertainment, the arts, and scientific discovery, and applying large artificial intelligence models combined with optimization algorithms is increasingly important in this process. Beyond improving the innovation and diversity of the content itself, large-model-based optimization algorithms also promote and accelerate scientific and technological innovation.
Xiao et al. <cit.> proposed a pattern-centric text generation framework, PatternGPT, to address the error-prone nature of Large Language Models (LLMs) and the inability to use external knowledge in text generation tasks directly. The framework uses algorithms to search for or generate high-quality patterns based on judgmental criteria. It leverages the pattern extraction capabilities of LLMs to develop a diverse set of structured and formalized patterns, which can help to bring in external knowledge for computation.
Chen et al. <cit.> enhance the performance of Large Language Models (LLMs) in language generation tasks through Model-Adaptive Prompt Optimization (MAPO), a prompt optimization method that can be widely applied to various downstream generation tasks.
Similarly, they proposed a new paradigm for news summarization that uses large language models (LLMs) with evolutionary fine-tuning to improve summary quality <cit.>. The method uses the LLM to extract multiple structured event patterns from news passages, evolves the population of event patterns with a genetic algorithm, and feeds the fittest event patterns back into the LLM to generate news summaries.
PanGu Drug Model <cit.> is a graph-to-sequence asymmetric conditional variational autoencoder designed to improve molecular property representation and performance in drug discovery tasks. The model is inspired by conversions between molecular formulas and structural formulae in the chemistry classroom and can appropriately characterize molecules from both representations.
Liang et al. <cit.> presented a prototype of DrugChat, a system designed to provide ChatGPT-like capabilities for drug compound analysis. By combining graph neural networks (GNNs), large language models (LLMs), and adapters, DrugChat enables users to upload molecular graphs of compounds and ask various questions over multiple rounds of interaction, which the system then answers.
To break the bottleneck of text-to-image generation technology, Berger et al. <cit.> proposed StableYolo, a framework that optimizes the image-generation quality of large models by applying evolutionary computation to the Stable Diffusion model while simultaneously adjusting prompts and model parameters. The core idea of StableYolo is to improve photo-realistic image quality by combining visual evaluation with multi-objective search: the system uses the confidence estimates of the Yolo model as a fitness function and searches for the optimal combination of prompt words and model parameters with a genetic algorithm (GA).
To explore further LLM-related research, including the cognitive abilities of LLMs, behavior and learning in game-theoretic environments, and Big Five personality traits, Suzuki et al. <cit.> proposed a model of the evolution of personality traits based on large language models (LLMs), specifically traits related to cooperative behavior. Grounded in evolutionary game theory, the approach simulates human behavioral choices in game-theoretic situations by providing LLMs with high-level psychological and cognitive trait descriptions, demonstrating how LLMs can enhance the study of human behavioral evolution.
De et al. <cit.> explored the phenomenon of the self-organized formation of scale-free networks in social interactions between large language models (LLMs). Scale-free networks are a typical emergent behavior in complex systems, especially in online social media, where users can follow each other and form social networks with specific structural features.
Lu et al. <cit.> proposed SELF (Self-Evolution with Language Feedback), a novel learning framework that enables large-scale language models (LLMs) to continuously improve themselves through self-feedback and self-improvement. Inspired by the human self-driven learning process, SELF involves a cycle of initial attempts, reflective feedback, and behavioral improvement. The framework also enables smaller LLMs to improve themselves, which can in turn facilitate the development of larger models.
In summary, the proposed systems and frameworks, including DrugChat and SELF, illustrate the development of personalized, intelligent tools for analyzing drug compounds, generating news summaries, and mimicking human behaviors. These tools continuously improve their performance through self-learning and feedback mechanisms, enhancing efficiency and accuracy in related fields.
§ FUTURE OUTLOOK AND RESEARCH TRENDS
In the previous sections, we examined recent advances in large language models (LLMs) and optimization algorithms (OAs). Nonetheless, many challenges and unresolved issues remain between these two fields. This section therefore explores directions for future research, offering scholars the opportunity to push beyond the boundaries of current knowledge, pose new research questions, and reinvigorate the field.
Theoretical Foundations and Methodologies. Experimental studies have confirmed the effectiveness of combining large-scale language models (LLMs) with OAs in solving small-scale problems <cit.>. However, the motivation for their interaction has not yet been clarified. To further promote the performance of algorithms, we need to deeply explore the mechanism of mutual reinforcement between LLMs and OAs in theoretical studies and analyze their complementary advantages and potential problems in practical applications in detail through large-scale empirical studies. In addition, it is crucial to conduct in-depth theoretical analyses of algorithms combining LLMs and OAs, which includes evaluating their convergence, time complexity and space complexity. Also, investigating the impact of algorithmic parameter settings on performance, as well as performance guarantees or theoretical limitations of the algorithms on different problem types, are key steps in advancing the algorithms. Further, exploring optimization theory <cit.>, such as clarifying the definition and characterization of the objective function, dealing with constraints, and analyzing the feasible solution space of a problem, will provide a solid theoretical foundation for the design and application of algorithms to achieve better algorithmic performance in solving more complex problems.
Automated Intelligent Optimization. In the optimization context, large language models (LLMs) show significant potential, especially in enhancing the automation and intelligence of optimization algorithms (OAs). Learning from multimodal data during the pre-training phase allows LLMs to understand and generate cross-modal content <cit.>. This provides new search and mutation strategies for OAs when performing cross-modal operations. This capability of LLMs can facilitate OAs in achieving a more efficient global search in multimodal optimization problems. At the same time, as LLM technology continues to advance, LLMs are expected to improve the performance of OAs in modeling complex evolutionary mechanisms, especially when dealing with optimization problems with large-scale search spaces <cit.>. However, current research has yet to fully explore the potential of LLMs in evolutionary optimization, and challenges remain, such as how to better combine LLMs and OAs and how to handle complex search spaces.
In addition, the pre-training of LLMs on large amounts of textual data embeds them with rich domain knowledge, which provides a robust knowledge base for OAs. LLMs can assist OAs in better integrating domain-specific knowledge into the optimization process, thus improving the efficiency of optimization and the quality of solutions. For example, LLMs can generate high-quality initial solutions, improve problem formulation, and provide solution encodings and definitions of solution spaces. In addition, LLMs can provide guiding principles for algorithm design, enabling OAs to handle complex optimization problems, such as multi-objective, discrete, and dynamic ones, more effectively <cit.>. With the rapid development of LLM technology, LLMs are expected to play an even more critical role in the future of evolutionary optimization, driving the field toward higher levels of automation and intelligence.
Robustness and Prompt Engineering. Utilizing optimization techniques is a crucial way to improve the capabilities of LLMs in engineering applications. A common approach is to use LLMs as optimization operators within evolutionary frameworks to iteratively produce fresh prompts. This technique has consistently shown efficacy and superiority in numerous investigations. Nevertheless, certain obstacles persist. First, close attention must be paid to the initialization of the optimization process, as it has a substantial impact on the outcomes <cit.>. Prompt templates that are generic and customizable are crucial for producing accurate and valid prompts: random initialization may fail to use existing information, and manual seeding may introduce bias. In addition, for problems that involve a significant amount of prior knowledge, the space of possible prompts grows exponentially with the prompt length and the vocabulary size, which can result in overfitting or in becoming trapped in locally optimal solutions. Furthermore, these approaches lack stability and depend strongly on the capabilities of the LLM, rendering them susceptible to stochasticity <cit.>. If the LLM is unable to comprehend and efficiently employ the prompts, the effectiveness of the approach may be undermined.
Further research should strive to tackle these obstacles by creating more robust techniques; a sketch of the basic loop is given below. For initialization, multi-source seeding can be investigated to automatically improve the size and quality of the initial population using LLMs. When dealing with intricate search spaces, it is essential to develop efficient optimization algorithms; this may involve combining more comprehensive sets of optimization operators, exploiting the advantages of different evolutionary algorithms, and utilizing adaptive optimization techniques.
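To make the loop concrete, the following minimal sketch (our own illustration, not from any surveyed work) shows an LLM acting as the variation operator in an elitist evolutionary prompt search; llm_mutate() is a hypothetical placeholder for a real LLM call and fitness() for a task-specific evaluator.

import random

def llm_mutate(prompt: str) -> str:
    # placeholder: in practice, ask an LLM to rewrite the prompt
    return prompt + random.choice([" Be concise.", " Think step by step."])

def fitness(prompt: str) -> float:
    # placeholder objective: in practice, score the prompt on a validation task
    return -abs(len(prompt) - 60)

def evolve_prompts(seeds, generations=20, pop=8):
    population = list(seeds)                       # multi-source seeding
    for _ in range(generations):
        elite = sorted(population, key=fitness, reverse=True)[: pop // 2]
        children = [llm_mutate(p) for p in elite]  # LLM as variation operator
        population = elite + children              # elitist survivor selection
    return max(population, key=fitness)

print(evolve_prompts(["Summarize the text.", "Answer the question."]))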
Generality and Architecture Search.
The combined efforts of Large Language Models (LLMs) and OAs have accelerated progress in code generation, leading to notable improvements in downstream applications such as software engineering and OA design. A commonly used method in this collaboration employs LLMs to create large training datasets and then refines the LLMs using reinforcement learning approaches <cit.>. Nevertheless, this approach struggles with the variety and quantity of training data, which may fail to cover all possible scenarios. An alternative approach combines the strong code generation capabilities of LLMs with the powerful search architecture of OAs to continually improve the code generation process. However, this method has difficulty generating code for sophisticated algorithmic logic that may require the combined work of numerous code snippets. To overcome these obstacles, a modular strategy can be developed that breaks down large tasks into smaller, more manageable sub-tasks, and an interactive interface could be added to allow users to clearly define the task breakdown <cit.>. This would enable LLMs and OAs to generate code for each sub-task in a coordinated manner.
Neural Architecture Search (NAS) is an important application scenario arising from the combination of LLMs and OAs. Although LLMs have shown remarkable effectiveness in other tasks, they were not specifically designed for NAS <cit.>. The performance of current LLM models varies significantly on NAS tasks, and there is a clear difference between LLM-based approaches and conventional NAS methods in terms of application area and ability to generalize <cit.>. To enhance the overall effectiveness of LLMs and OAs in NAS tasks, a comprehensive strategy could be implemented: assessing the effectiveness of various LLM models on NAS tasks, enhancing LLMs' NAS capabilities by incorporating more training data, optimizing the structure of LLMs during the fine-tuning stage, and investigating the use of past search knowledge to speed up future searches and provide clearly defined search spaces for LLMs.
Interdisciplinary Applications and Innovations. The incorporation of Large Language Models (LLMs) with optimization algorithms (OAs) shows potential in several interdisciplinary domains, providing a powerful synergy to stimulate innovation and improve performance on complex tasks.
In the realm of computational creativity and generative design, LLMs are adept at generating creative content, such as artwork, music, and literary pieces. The collaboration with OAs brings methods of variation and selection, which can promote creative diversity and ignite innovation. This collaborative approach can result in the production of unique and groundbreaking artistic and design works, thereby promoting innovation and fostering the growth of creativity. Within the domain of robotics, intelligent and adaptable robot systems can be produced through the collaboration of OAs, which can refine control strategies and action sequences, and LLMs, which can generate instructive dialogues and task-oriented directives <cit.>. These systems are better able to adjust to various tasks and participate in complex interactions with humans, enhancing collaboration between humans and robots and laying the foundation for advanced robotic applications.
Moreover, in the field of drug design, the ability of LLMs to produce new chemical structures, combined with the multi-objective optimization capabilities of OAs, can accelerate the discovery of new drugs. This combined technique can identify drug candidates with higher potential, thereby decreasing the time and expense associated with conventional trial-and-error procedures and promoting progress in pharmaceutical research and development. The combination of LLMs and OAs offers a versatile instrument with the potential to transform various fields by offering inventive solutions and improving efficiency in problem-solving. As research continues to explore the joint capabilities of LLMs and OAs, more significant advancements and innovative applications are expected to arise, revolutionizing industries and expanding the limits of human accomplishment.
|
http://arxiv.org/abs/2405.09010v1 | 20240515003010 | On Low Field Size Constructions of Access-Optimal Convertible Codes | [
"Saransh Chopra",
"Francisco Maturana",
"K. V. Rashmi"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
On Low Field Size Constructions of
Access-Optimal Convertible Codes
This work was funded in part by an NSF CAREER award (CAREER-1943409), a Sloan Fellowship and a VMware Systems Research Award.
Saransh Chopra, Francisco Maturana, and K. V. Rashmi
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA, USA
Email: {saranshc,fmaturan,rvinayak}@andrew.cmu.edu
May 20, 2024
======================================================================================================================================================================================================================================================
Most large-scale storage systems employ erasure coding to provide resilience against disk failures. Recent work has shown that tuning this redundancy to changes in disk failure rates leads to substantial storage savings. This process requires code conversion, wherein data encoded using an [n^I, k^I] initial code has to be transformed into data encoded using an [n^F, k^F] final code, a resource-intensive operation. Convertible codes are a class of codes that enable efficient code conversion while maintaining other desirable properties. In this paper, we focus on the access cost of conversion (total number of code symbols accessed in the conversion process) and on an important subclass of conversions known as the merge regime (combining multiple initial codewords into a single final codeword).
In this setting, explicit constructions are known for systematic access-optimal Maximum Distance Separable (MDS) convertible codes for all parameters in the merge regime. However, the existing construction for a key subset of these parameters, which makes use of Vandermonde parity matrices, requires a large field size making it unsuitable for practical applications. In this paper, we provide (1) sharper bounds on the minimum field size requirement for such codes, and (2) explicit constructions for low field sizes for several parameter ranges. In doing so, we provide a proof of super-regularity of specially designed classes of Vandermonde matrices that could be of independent interest.
§ INTRODUCTION
Erasure codes are used widely in modern large scale distributed storage systems as a means to mitigate data loss in the event of disk failures. In this context, erasure coding involves dividing data into groups of k chunks that are each encoded into stripes of n chunks using an [n,k] erasure code. These encoded chunks are then stored across n distinct storage nodes in the system. The code parameters n and k determine the amount of redundancy added to the system and the degree of durability guaranteed.
There are various classes of codes that are commonly used in real-world systems. For example, systematic codes are those in which the original message symbols are embedded among the code symbols. This is highly desirable in practice as in the event that there are no observed disk failures, there is no decoding process needed to recover the original data. Systematic codes with Vandermonde parity matrices (see <ref>) are even more advantageous as there are known efficient algorithms utilizing Fast Fourier Transform (FFT) for computing the product between vectors and Vandermonde matrices <cit.>, speeding up the encoding process. This attribute is becoming increasingly important given the recent trend to use wider (high k) and longer (high n) erasure codes <cit.>. Additionally, Maximum Distance Separable (MDS) codes are a subset of erasure codes that require the least amount of additional storage in order to meet a specific failure tolerance goal. An [n,k] MDS code can tolerate loss of any n-k out of the n code symbols. In this paper, our interest is on systematic MDS codes with Vandermonde parity matrices.
Recent findings by Kadekodi et al. <cit.> reveal the dynamic variability in disk failure rates over time. Their research highlights the potential for meaningful savings in storage and associated operational expenses through tuning code parameters to observed failure rates. However, the resource overhead associated with the default approach of re-encoding all of the data in order to modify n and k is prohibitively expensive <cit.>.
The code conversion problem introduced in <cit.> formalizes the problem of efficiently transforming data that has been encoded under an [n^I, k^I] initial code C^I to its new representation under an [n^F, k^F] final code C^F. One of the key measures of the cost of conversion is the access cost, which represents the total number of code symbols accessed (read/written) during conversion. Convertible codes <cit.> are a class of codes that enable efficient conversion while maintaining other desirable properties such as being MDS and systematic (more details in <ref>).
Among various types of conversions, the merge regime, where k^F = λ k^I for some integer λ ≥ 2 (i.e., combining multiple initial codewords into a single final codeword), is the most important one. First, the merge regime requires the least resource utilization <cit.> among all types of conversions, and hence is a highly favorable choice for practical systems. Second, constructions for the merge regime are key building blocks for constructions in the general regime, which allows for any set of initial parameters and any set of final parameters <cit.>. In this paper, our focus is on systematic MDS convertible codes in the merge regime.
In <cit.>, the authors established lower bounds on the access cost of conversion and provided constructions of access-optimal convertible codes for all parameters in the merge regime. Let us denote r^I := n^I - k^I and r^F := n^F - k^F (which correspond to the number of parity symbols in the initial and final codes if the codes are systematic). For cases where r^I > r^F (i.e., when the initial configuration has more parities than the final configuration), the authors provide explicit constructions of systematic MDS access-optimal convertible codes over fields of linear size. For cases where r^I < r^F (i.e., when more parities are needed in the final configuration than in the initial), it has been shown <cit.> that the access cost of conversion for MDS erasure codes is lower bounded by that of the default approach to decode and re-encode all of the data. As a consequence, it is not possible to realize any savings with specialized code constructions.
However, in the case where r^I = r^F, the best-known construction requires a minimum field size of p^D for any prime p and some D ∈ Θ((r^F)^3) <cit.>. This field size is far too high for efficient practical implementations. Most current instruction-set architectures are optimized to operate on bytes of data at a time. Utilizing erasure codes defined over larger field sizes can hamper the encoding/decoding speed. Hence most (if not all) practical implementations of storage codes use _256 (which translates each field symbol to a one-byte representation). Thus, the problem of constructing low-field-size access-optimal convertible codes remains open for the case r^I = r^F.
In this paper, we study the setting of systematic MDS access-optimal convertible codes in the merge regime in the case where r^I = r^F. The best known construction of convertible codes in this setting is a systematic code with a very specific choice of super-regular Vandermonde parity matrix with a singular degree of freedom <cit.> (as will be detailed in <ref>). In <ref>, we improve on this construction by allowing more freedom in the choice of scalars of the Vandermonde matrix. We then study the minimum field size q^*(k,r) required for existence of the underlying k × r super-regular Vandermonde parity matrices of such codes. We provide two lower bounds on the minimum field size required, one applicable for codes over general prime power fields (<ref>) and one for codes over fields of characteristic 2 where k > r (<ref>). For fields of characteristic 2, the bound takes the form q^*(k,r) ≥ Ω(2^r). Additionally, we establish an upper bound q^*(k,r) ≤ O(k^r) (<ref>), which in turn results in an improved upper bound q ≤ O((k^F)^{r^F}) on the field size required for the existence of systematic MDS access-optimal convertible codes in the merge regime in the case where r^I = r^F.
Furthermore, in <ref>, we provide the first explicit low-field-size constructions of convertible codes in this setting for several parameter ranges via constructing their corresponding super-regular Vandermonde parity matrices. The proposed construction makes use of field automorphisms in designing the Vandermonde matrices. For any general prime power field _q where q = p^w, we find explicit constructions of k × 3 super-regular Vandermonde matrices for all k such that k < w (<ref>). This, in turn, gives us a construction of systematic MDS access-optimal convertible codes for all parameters in the merge regime such that r^I = r^F ≤ 3 and k^F < w. For any finite field _q where q = 2^w (that is, characteristic 2), we present a stronger result covering a larger range of k by showing that the same proposed construction is super-regular for all k such that k < q (<ref>).
These results are also of independent interest beyond the setting considered in this paper as systematic MDS codes with Vandermonde parity matrices serve as the base codes for bandwidth-optimal convertible codes <cit.> and have also been studied in various other settings <cit.>.
§ BACKGROUND AND RELATED WORK
Let us begin with an overview of important concepts and notation referred to throughout this paper, along with a literature review of previous related work.
§.§ Systematic MDS codes and Vandermonde matrices
An [n,k] linear erasure code with k × n generator matrix G over a finite field is said to be systematic, or in standard form, if G = [I_k|P] where I_k is the k × k identity matrix and P is a k × (n-k) matrix, also known as the parity matrix. Let m be a message and c its corresponding codeword, where m = (m_i)_{i=1}^k and c = (c_i)_{i=1}^n are vectors of message and code symbols, respectively. As m is encoded via the multiplication c = m^T G, it follows that c_i = m_i for all i ≤ k when the code is systematic.
An [n,k] linear erasure code is Maximum Distance Separable (MDS) if and only if every k columns of its generator matrix G are linearly independent; in other words, every k × k submatrix of G is non-singular <cit.>. As a result, data encoded by an [n,k] MDS code can withstand any erasure pattern of n-k out symbols in any codeword and still successfully recover the original data. If is also systematic with parity matrix P, this is equivalent to the property that every square submatrix of P is non-singular <cit.>. Such a matrix is also referred to as super-regular. It is useful to note that any submatrix of a super-regular matrix is also super-regular.
A systematic code with a k × r Vandermonde parity matrix P is one where P is of the form
[ 1 1 … 1; ξ_1 ξ_2 … ξ_r; ξ_1^2 ξ_2^2 … ξ_r^2; ⋮ ⋮ ⋱ ⋮; ξ_1^k-1 ξ_2^k-1 … ξ_r^k-1; ]
for some scalars ξ = (ξ_i)_{i=1}^r with entries in the field. Let us denote the above k × r Vandermonde matrix as V_k(ξ). Such a matrix is not always guaranteed to be super-regular <cit.>, and thus careful selection of the scalars is required to ensure the resulting systematic code is MDS.
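As a concrete illustration (our own, not from the paper), the following sketch builds V_k(ξ) over a prime field _p and tests super-regularity by brute force over all square submatrices. With p = 7 and ξ = (1, 2, 4), both 2 and 4 have multiplicative order 3, so for k = 4 the row with exponent 3 repeats the all-ones row and the test fails; the choice ξ = (1, 2, 3) with k = 3 passes.

from itertools import combinations

def vandermonde(xs, k, p):
    # k x r matrix with entries xs[j]^i mod p, rows indexed by i = 0, ..., k-1
    return [[pow(x, i, p) for x in xs] for i in range(k)]

def det_mod_p(M, p):
    # determinant modulo a prime p via Gaussian elimination
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)          # Fermat inverse of the pivot
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
    return det % p

def is_super_regular(M, p):
    k, r = len(M), len(M[0])
    return all(det_mod_p([[M[i][j] for j in J] for i in I], p) != 0
               for s in range(1, min(k, r) + 1)
               for I in combinations(range(k), s)
               for J in combinations(range(r), s))

print(is_super_regular(vandermonde([1, 2, 4], 4, 7), 7))   # False
print(is_super_regular(vandermonde([1, 2, 3], 3, 7), 7))   # True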
§.§ Convertible Codes <cit.>
Recall that a code conversion is a procedure that converts data from its initial representation under an [n^I, k^I] code C^I to its final representation under an [n^F, k^F] code C^F. In order to capture the potential change in dimension, let M := lcm(k^I, k^F) and consider any message m ∈ _q^M. This is equivalent to λ^I := M/k^I codewords in the initial configuration and λ^F := M/k^F codewords in the final configuration. Let [i] := {1, 2, …, i} and let |S| denote the size of a set S. Let m[S] be the vector formed by projecting m onto the coordinates in the set S, and let C(m) stand for the encoding of m under a code C. Let r^I := n^I - k^I and r^F := n^F - k^F.
An (n^I, k^I; n^F, k^F) convertible code over _q is defined by: (1) a pair of codes (C^I, C^F) over _q such that C^I is an [n^I, k^I] code and C^F is an [n^F, k^F] code; (2) a pair of partitions 𝒫^I := {P_i^I | i ∈ [λ^I]} and 𝒫^F := {P_j^F | j ∈ [λ^F]} of [M = lcm(k^I, k^F)] such that |P_i^I| = k^I for all P_i^I ∈ 𝒫^I and |P_j^F| = k^F for all P_j^F ∈ 𝒫^F; and (3) a conversion procedure which, for any m ∈ _q^M, maps the initial set of codewords {C^I(m[P_i^I]) | P_i^I ∈ 𝒫^I} to the corresponding set of codewords {C^F(m[P_j^F]) | P_j^F ∈ 𝒫^F} under the final code.
Recall that access cost during code conversion refers to the number of code symbols that are read or written during conversion. Access-optimal convertible codes are those which meet the lower bounds on access cost established in <cit.>, which are known to be tight. It is known that any (n^I, k^I; n^F, k^F) convertible code for the merge regime with r := r^I = r^F formed by a pair of systematic codes with Vandermonde parity matrices P^I = V_{k^I}(ξ) and P^F = V_{k^F}(ξ) over the same scalars is access-optimal <cit.>. This is because P^F is column block-constructible from P^I; that is, each new parity of a merged codeword can directly be computed as a linear combination of the parities of the original codewords. If the parity matrices are super-regular, then the resulting convertible code is guaranteed to be MDS as well. The best known construction <cit.> of a systematic MDS access-optimal convertible code for the merge regime where r^I = r^F is formed by a pair of systematic codes with Vandermonde parity matrices over the scalars ξ = (θ^{i-1})_{i=1}^{r}, for any primitive element θ of _q. This construction requires a field size q ≥ p^D where p is any prime and D ∈ Θ(r^3) <cit.>.
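To see why such pairs convert cheaply, observe that evaluating the j-th Vandermonde column against a stacked message (m_1, m_2) splits as p^I_{1,j} + ξ_j^{k^I} p^I_{2,j}: each merged parity is a fixed linear combination of the corresponding initial parities, so no message symbols need to be accessed. The sketch below (our own, over a small prime field with arbitrarily chosen scalars) checks this identity for λ = 2; super-regularity of the scalars is a separate concern and is not verified here.

p = 101                                    # a prime field, chosen for the demo
k, r = 4, 3
xs = [2, 3, 5]                             # arbitrary distinct nonzero scalars
V_init = [[pow(x, i, p) for x in xs] for i in range(k)]
V_final = [[pow(x, i, p) for x in xs] for i in range(2 * k)]

def parities(m, V):
    return [sum(mi * V[i][j] for i, mi in enumerate(m)) % p for j in range(r)]

m1, m2 = [7, 1, 9, 4], [3, 8, 2, 6]        # two initial messages
p1, p2 = parities(m1, V_init), parities(m2, V_init)

direct = parities(m1 + m2, V_final)        # re-encode the merged message
combined = [(p1[j] + pow(xs[j], k, p) * p2[j]) % p for j in range(r)]
assert direct == combined                  # conversion needs only the old parities
print(direct)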
§.§ Additional Notation and Preliminaries
This section presents notation and terminology used in this paper that follows and expands on the notation introduced in <cit.>, and reviews some preliminaries from Galois theory that will be used in the rest of the paper.
For any two sets I,J, let I J denote the symmetric difference of I and J. For any two integers a,b, let a ⊥ b denote that a and b are coprime. Let x denote the vector (x_i)_i = 1^r for some r. Let M_i,j denote the entry in the ith row and jth column of the matrix M, with both indices 1-indexed. Let M_I × J denote the submatrix of M formed by the intersection of the rows indexed by I and the columns indexed by J, with all indices 1-indexed. Let row_i(M) stand for the ith row vector of the matrix M. Let χ_P be the indicator function for whether the proposition P is true.
Let _p denote the prime field of size p, and let us reserve _q for prime power fields of size q = p^w for some prime p and w > 1. The multiplicative group of a field (its nonzero elements under multiplication) is denoted by a superscript ×, e.g., _q^×. Let ord(a) denote the order of an element a ∈ _q^×. Let _q[x_1,…,x_r] denote the ring of polynomials in x_1,…,x_r over _q. Let Aut(_q) denote the group of automorphisms of the field _q. Let S_n denote the group of permutations of [n].
Recall that a field automorphism is a bijective map σ from the field to itself such that for all x, y, σ(x+y) = σ(x) + σ(y) and σ(xy) = σ(x)σ(y); in essence, the map preserves the structure of the field. Note also that by definition σ(0) = 0 and σ(1) = 1, which also gives us σ(-a) = -σ(a), σ(a^{-1}) = σ(a)^{-1}, and ord(a) = ord(σ(a)) for all nonzero a. It is easy to verify that the set of fixed points of an automorphism forms a sub-field, termed the fixed field of the automorphism. It is also a consequence of Galois theory that the fixed field of an automorphism of the field _q, where q = p^w, is always an extension of the base prime field _p <cit.>.
§.§ Related Work
The most directly related works on access-optimal convertible codes <cit.> were already discussed in <ref>. In this section, we will discuss other closely related works. In addition to the access cost, previous works on convertible codes have also studied other costs of conversion such as bandwidth cost <cit.> and locality of repair <cit.>. In this paper, while we focus on the access cost of conversion, the proposed new constructions do enable better constructions of bandwidth-optimal convertible codes as well. This is because access-optimal convertible codes serve as the base codes of the Piggybacking framework <cit.> when constructing convertible codes efficient in bandwidth cost.
There have also been previous efforts to study the fundamental limits of the existence of super-regular Vandermonde matrices. Shparlinski <cit.> provided an upper bound on the total number of singular square submatrices of a Vandermonde matrix by showing that any (q - 1) × m Vandermonde matrix V_{q-1}(ξ_1,…,ξ_m) over the field _q has at most 3(m-1)(q-1)^m T^{-1/(m-1)} singular m × m square submatrices, where T := min_{i ≠ j ∈ [m]} ord(ξ_i/ξ_j); however, this bound has been shown to be not tight upon closer investigation <cit.>. Additionally, Intel's Intelligent Storage Acceleration Library (ISA-L), commonly used to implement erasure coding in practice, has published bounds on the range of parameters [n,k] over _256 for which its code supports generation of super-regular Vandermonde parity matrices, based on a very specific construction <cit.>. There is no proof provided alongside these bounds; they were likely determined by running a code script to test each submatrix for invertibility.
In addition, there has been independent work studying systematic linear MDS codes with various other constructions of super-regular parity matrices. For example, it is known that a Cauchy matrix C, that is, one of the form C_i,j = (a_i + b_j)^-1 for all i,j ∈ [n] given two vectors (a_i)_i = 1^n and (b_j)_j =1^n, is super-regular so long as the a_i's and b_j's are all distinct from each other <cit.>. Additionally, Lacan and Fimes introduced a construction of super-regular matrices formed by taking the product of two Vandermonde matrices <cit.>. To add on, there has been considerable progress in constructing super-regular Toeplitz matrices in the development of convolutional codes <cit.>. Nonetheless, none of these alternatives are suitable for the construction of access-optimal convertible codes.
To our knowledge, in this paper we establish the best known bounds on the field size required for the existence of systematic MDS access-optimal convertible codes for the merge regime where =. This paper is also the first to provide, with proof, explicit constructions of systematic MDS access-optimal convertible codes for the merge regime where = over practically usable field sizes.
§ FUNDAMENTAL LIMITS ON FIELD SIZE
In this section, we study a new construction of systematic MDS access-optimal convertible codes for the merge regime where r^I = r^F that generalizes the construction introduced in <cit.>. The new construction is still based on systematic codes with super-regular Vandermonde parity matrices, but we allow the scalars to take on any distinct nonzero values, rather than being restricted to consecutive powers of a primitive element in the field. By virtue of the parity matrices being Vandermonde matrices, as detailed in <ref>, the new construction of convertible codes remains access-optimal. Thus, a proof of the existence of any k × r super-regular Vandermonde matrix yields (n^I, k^I; n^F, k^F = λ k^I) systematic MDS access-optimal convertible codes for any λ ≥ 2, k^F ≤ k, and r^I = r^F ≤ r. We will establish several bounds (<ref>) on the field sizes for which there exist [n,k] systematic MDS codes with Vandermonde parity matrices. This is done by studying their underlying super-regular Vandermonde matrices.
We start with a result which provides a requirement on the field sizes over which such matrices exist. This result draws upon intuition that an optimal choice of scalars for the Vandermonde matrix would avoid selecting elements with smaller order to avoid repetition along the corresponding columns.
Theorem 1.
Over the field _q, a k× r super-regular Vandermonde matrix can only exist if the following condition holds: for every divisor m
of q-1 where m < k, q ≥ rm + 1.
Provided in <ref>.
For the field _256, for example, this result tells us that [n=90,k=86] and [n=58,k=52] systematic MDS codes with Vandermonde parity matrices do not exist.
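This necessary condition is mechanical to check; the small helper below (our own naming) reproduces the two examples just quoted.

def divisor_condition(q, k, r):
    # Theorem 1: q >= r*m + 1 must hold for every divisor m of q - 1 with m < k
    return all(q >= r * m + 1 for m in range(1, k) if (q - 1) % m == 0)

print(divisor_condition(256, 86, 4))   # False: m = 85 divides 255 and 4*85 + 1 > 256
print(divisor_condition(256, 52, 6))   # False: m = 51 divides 255 and 6*51 + 1 > 256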
The next lemma is a simple consequence of viewing finite prime power fields as vector spaces over their base prime fields.
Lemma 1.
Over the field _q, where q = 2^w, for any r > w, for any S = {ξ_i}_i=1^r ⊆_q, there must exist some nonempty subset I ⊆ [r] such that ∑_i ∈ Iξ_i = 0.
Provided in <ref>.
This lemma stems from the fact that any collection of field elements larger than the field's dimension must be linearly dependent. Over fields of characteristic 2, this simply corresponds to a nonempty subset of elements that add to 0. This will be used later to identify a singular submatrix in a proposed Vandermonde matrix. This in turn, yields a lower bound on the minimum field size required for the existence of super-regular Vandermonde matrices specific to fields of characteristic 2.
Theorem 2.
Over the field _q, where q = 2^w, for any r,k such that k > r, a k× r super-regular Vandermonde matrix with distinct, nonzero scalars can only exist if q ≥ 2^r.
Provided in <ref>.
For example, again considering _256, this bound informs us that [n=19,k=10] systematic MDS codes with Vandermonde parity matrices do not exist.
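In the same spirit, the characteristic-2 bound can be checked directly; the [n=19, k=10] example has r = 9 and fails since 2^9 = 512 > 256 (helper name ours).

def char2_condition(q, k, r):
    # Theorem 2 (characteristic 2): if k > r, then q >= 2**r is required
    return k <= r or q >= 2 ** r

print(char2_condition(256, 10, 9))   # False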
The first result for general fields (<ref>) is a tighter bound for regimes where k ≫ r and there exists m ≈ k such that m < k and m divides q-1 for a proposed field size q; in this case, we get the bound q^*(k,r) ≥ Ω(kr). On the other hand, the lower bound specific to fields of characteristic 2 (<ref>) is more relevant in settings such as storage in unreliable environments, which demand narrow codes with higher storage overhead, or when k ≈ r.
We will next prove the existence of k × r super-regular Vandermonde matrices over all fields of size greater than a threshold in terms of k and r. We first start with a lemma that narrows down the set of square submatrices of a Vandermonde matrix that need to be tested for singularity to establish super-regularity. More specifically, we show that it is sufficient to only consider submatrices formed by a set of rows that includes the first row.
Lemma 2.
Over the field _q, for any r,k,ℓ such that ℓ≤min(r,k), for any k × r Vandermonde matrix V_k(ξ) with (ξ_i)_i=1^r ∈ (_q^×)^r, the submatrix H := V_k(ξ)_I × J defined by I := {α_1,…,α_ℓ}⊆ [k] and J := {β_1,…,β_ℓ}⊆ [r], where α_i < α_j for all i < j, is non-singular if and only if the submatrix H' := V_k(ξ)_I' × J defined by I' := {1,α_2- (α_1 -1),…,α_ℓ - (α_1 -1)}⊆ [k] and J is non-singular.
Provided in <ref>.
We now utilize the Schwartz–Zippel lemma <cit.> in a probabilistic argument for the existence of a super-regular Vandermonde matrix given a sufficiently large field size. This, in effect, establishes an upper bound on the minimum field size required for the existence of super-regular Vandermonde matrices.
Theorem 3.
Over the field _q, for any r,k, if q > 1 + \binom{k}{2}∑_{ℓ=2}^{r}\binom{r}{ℓ}\binom{k-2}{ℓ-2} ∈ O(k^r), then there must exist scalars (ξ_i)_{i=1}^{r} ∈ (_q^×)^r such that the k × r Vandermonde matrix V_k(ξ) is super-regular.
Provided in <ref>.
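The threshold in the theorem is small enough to evaluate directly; a quick sketch (helper name ours):

from math import comb

def thm3_threshold(k, r):
    # any prime power q strictly above this value admits a k x r
    # super-regular Vandermonde matrix, by the theorem above
    return 1 + comb(k, 2) * sum(comb(r, l) * comb(k - 2, l - 2)
                                for l in range(2, r + 1))

print(thm3_threshold(10, 3))   # 496, so e.g. the prime q = 509 suffices for k = 10, r = 3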
Recall that the previously known upper bound <cit.> on the minimum field size q required for the existence of systematic MDS access-optimal convertible codes for the merge regime where r^I = r^F was log q ≤ Θ((r^F)^3). <ref> establishes the improved upper bound of log q ≤ O(r^F log k^F), an order of magnitude smaller.
§ LOW FIELD SIZE CONSTRUCTIONS
In this section, we present several explicit constructions of systematic MDS access-optimal convertible codes in the merge regime (that is, (n^I, k^I; n^F, k^F = λ k^I) convertible codes with λ ≥ 2), with field sizes smaller than existing constructions. Specifically, for general prime power fields _q where q = p^w, we provide explicit constructions of convertible codes in the merge regime for all parameters such that r^I = r^F ≤ 3 and w > k^F. For fields _q of characteristic 2, we present explicit constructions of convertible codes in the merge regime for all parameters such that r^I = r^F ≤ 3 and q > k^F. We do this by providing constructions of k × 3 super-regular Vandermonde matrices for field sizes q > p^k for general prime power fields (<ref>) and q > k for finite fields of characteristic 2 (<ref>).
These matrices serve as the parity matrices for the systematic MDS codes that underlie the aforementioned convertible codes. As every submatrix of a super-regular matrix is also super-regular, a valid parity matrix for three parities gives us one for any fewer than three parities as well.
We start with a lemma that builds on the intuition to choose primitive elements of the finite field for the scalars of the super-regular Vandermonde parity matrix.
Lemma 4.
Over the field _q, for all k < q, given any primitive element θ∈_q, given 2 ≤ e ≤ q-1 such that e, e-1 ⊥ q-1, the k × 3 Vandermonde matrix V_k(1, θ, θ^e) has no singular 2 × 2 square submatrices.
Provided in <ref>.
Next, we introduce the idea of field automorphisms into our construction and choice of scalars, in particular because automorphisms are order-preserving maps. Recall some key properties of field automorphisms from <ref>.
Over the field _q where q = p^w, for all k < q, given any primitive element θ∈_q and nontrivial automorphism σ∈Aut(_q) with fixed field _p, the k × 3 Vandermonde matrix V_k(1, θ, σ(θ)) has no 2 × 2 singular square submatrices.
First, recall that Aut(_q) is a group generated by the Frobenius automorphism, i.e., the map σ: x → x^p, and thus any nontrivial element σ ∈ Aut(_q) is of the form σ(x) = x^{p^e} for some 1 ≤ e < w. It follows that p ≤ p^e < p^w = q, and because q ≡ 0 (mod p), we have q - 1 ≢ 0 (mod p) and clearly p^e ⊥ q - 1. Next, see that if σ has fixed field _p, this can only occur if the polynomial p_1(x) = x^{p^e} - x, and consequently the polynomial p_2(x) = x^{p^e - 1} - 1, have no roots in _q outside of _p. This implies that p^e - 1 ⊥ q - 1, and thus we can apply <ref> to get that this matrix has no 2 × 2 singular submatrices.
For the same construction of Vandermonde matrices as in <ref>, we next consider its 3 × 3 square submatrices and establish the necessary and sufficient conditions under which they are singular. We are able to show a significantly tighter end result for fields of characteristic 2 in particular, but a lot of the arguments used apply to all finite fields as well. Thus, we start with an intermediate result using the shared ideas.
Lemma 5.
Over the field _q where q = p^w, for all k < q, given any primitive element θ∈_q and nontrivial automorphism σ∈Aut(_q) with fixed field _p, the k × 3 Vandermonde matrix V_k(1, θ, σ(θ)) has a 3 × 3 singular square submatrix if and only if ∃ e_1, e_2 ∈ [k-1] and c_1, c_2 ∈_p^× such that e_1 < e_2 and {1, θ, σ(θ)} are all roots of the polynomial f(x) = c_1 + c_2x^e_1 + x^e_2.
Provided in <ref>.
We now arrive at the first of our major results in this section, on explicit constructions of super-regular Vandermonde matrices over arbitrary prime power fields.
Over the field _q where q = p^w, for all k ≤ w, given any primitive element θ∈_q and a non-trivial automorphism σ∈Aut(_q) with fixed field _p, the k × 3 Vandermonde matrix V_k(1, θ, σ(θ)) is super-regular.
First, note that every 1 × 1 submatrix of V_k(1, θ, σ(θ)) is non-singular as every element is a power of a nonzero element of _q. Next, by <ref>, every 2 × 2 submatrix of V_k(1, θ, σ(θ)) is also non-singular. Finally, assume for sake of contradiction that V_k(1, θ, σ(θ)) has a singular 3 × 3 square submatrix. Then by <ref>, ∃ e_1, e_2 ∈ [k-1] and c_1, c_2 ∈_p^× such that e_1 < e_2 and {1, θ, σ(θ)} are all roots of the polynomial f(x) = c_1 + c_2x^e_1 + x^e_2. However, as f ∈_p[x], it must be a multiple of the minimum polynomial of θ in _p[x], which we know is of degree w ≥ k > e_2 = (f) as θ is a generator of _q^×, resulting in a contradiction. Thus, every 3 × 3 square submatrix is also non-singular and V_k(1, θ, σ(θ)) is super-regular, as desired.
Finally, we show an analogous but stronger result for fields of characteristic 2. This is of particular interest as finite fields of characteristic 2 are the most efficient choice for the representation of data in compute nodes and on storage devices.
Over the field _q where q = 2^w, for all k < q, given any primitive element θ∈_q and a non-trivial automorphism σ∈Aut(_q) with fixed field _2, the k × 3 Vandermonde matrix V_k(1, θ, σ(θ)) is super-regular.
First, note that every 1 × 1 submatrix of V_k(1, θ, σ(θ)) is non-singular as every element is a power of a nonzero element of _q. Next, by <ref>, every 2 × 2 submatrix of V_k(1, θ, σ(θ)) is also non-singular. Finally, assume for sake of contradiction that V_k(1, θ, σ(θ)) has a singular 3 × 3 square submatrix. Then by <ref>, ∃ e_1, e_2 ∈ [k-1] and c_1, c_2 ∈_2^× such that {1, θ, σ(θ)} are all roots of the polynomial f(x) = c_1 + c_2x^e_1 + x^e_2. However, this implies c_1 = c_2 = 1, but then f(1) = 1 + 1 + 1 = 1, contradicting the fact that 1 is a root of f. Therefore, every 3 × 3 square submatrix is also non-singular and V_k(1, θ, σ(θ)) is super-regular, as desired.
Using this result and the Frobenius automorphism, which is known to have fixed field _p over any finite extension K / _p <cit.>, we show a family of constructions of super-regular Vandermonde matrices for fields of characteristic 2. We also give results specific to the field _256, which is the most commonly used finite field in practice.
Over the field _q where q = 2^w, for all k < q, given any primitive element θ∈_q, the k × 3 Vandermonde matrix V_k(1, θ, θ^2) is super-regular.
Over the field _256, for all k < 256, given any primitive element θ∈_256, the k × 3 Vandermonde matrices V_k(1, θ, θ^2), V_k(1, θ, θ^8), V_k(1, θ, θ^32), and V_k(1, θ, θ^128) are super-regular.
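The characteristic-2 construction is also cheap to verify computationally. The sketch below (our own) works over GF(2^8) with the polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), under which θ = 0x02 is assumed primitive (the script asserts ord(θ) = 255 before proceeding); in characteristic 2 the small determinants reduce to XORs of products, so no field inversions are needed. For speed we check k = 64 here; by the corollary the check also passes, more slowly, all the way up to k = 255.

from itertools import combinations

Q, POLY = 256, 0x11D                 # GF(2^8) with an (assumed) primitive polynomial

def gmul(a, b):
    # carry-less multiplication modulo POLY
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        if a & Q:
            a ^= POLY
        b >>= 1
    return acc

def gpow(a, e):
    out = 1
    while e:
        if e & 1:
            out = gmul(out, a)
        a = gmul(a, a)
        e >>= 1
    return out

theta = 0x02
assert all(gpow(theta, 255 // f) != 1 for f in (3, 5, 17))   # ord(theta) = 255

def det3(M):
    # 3x3 Leibniz determinant; in characteristic 2 the signs vanish
    return (gmul(gmul(M[0][0], M[1][1]), M[2][2]) ^ gmul(gmul(M[0][1], M[1][2]), M[2][0])
          ^ gmul(gmul(M[0][2], M[1][0]), M[2][1]) ^ gmul(gmul(M[0][0], M[1][2]), M[2][1])
          ^ gmul(gmul(M[0][1], M[1][0]), M[2][2]) ^ gmul(gmul(M[0][2], M[1][1]), M[2][0]))

k = 64
xs = [1, theta, gmul(theta, theta)]                  # scalars (1, theta, theta^2)
V = [[gpow(x, i) for x in xs] for i in range(k)]     # all entries are nonzero powers

ok = all(gmul(V[i][a], V[j][b]) != gmul(V[i][b], V[j][a])    # 2x2 minors
         for i, j in combinations(range(k), 2)
         for a, b in combinations(range(3), 2))
ok = ok and all(det3([V[i], V[j], V[l]]) != 0                # 3x3 minors
                for i, j, l in combinations(range(k), 3))
print(ok)   # True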
§ APPENDIX: PROOFS FOR SECTION 3
Proof of Theorem 1.
Consider the k × r Vandermonde matrix V_k(ξ) for any scalars (ξ_i)_{i=1}^r ∈ (_q^×)^r. Given any divisor m of q-1, as k > m, the ith entry in the (m+1)th row of V_k(ξ) is of the form ξ_i^m. Note that raising this entry to the (q-1)/m power would result in ξ_i^{q-1} = 1, so it follows that all of the entries in this row of V_k(ξ) are roots of the polynomial x^{(q-1)/m} - 1 over _q. This polynomial has exactly (q-1)/m distinct roots in _q, so if r > (q-1)/m, then ∃ i,j ∈ [r] such that i ≠ j and ξ_i^m = ξ_j^m. It follows that V_k(ξ)_{I × J} where I = {1, m+1} and J = {i,j} is of the form
[ 1 1; ξ_i^m ξ_j^m ]
and is singular. Therefore, in order for the matrix to be super-regular, we must have r ≤ (q-1)/m ⇒ q ≥ rm + 1.
Proof of Lemma 1.
As there are 2^r distinct subsets of [r], and q < 2^r, then ∃ I,J ⊆ [r] such that I ≠ J and ∑_i ∈ Iξ_i = ∑_i ∈ Jξ_i. As every element is its own additive inverse in fields of characteristic 2, it follows that 0 = ∑_i ∈ Iξ_i + ∑_i ∈ Jξ_i = ∑_i ∈ I ∖ Jξ_i + ∑_i ∈ J ∖ Iξ_i + ∑_i ∈ I ∩ Jξ_i + ∑_i ∈ I ∩ Jξ_i = ∑_i ∈ I Jξ_i. As I ≠ J, I J must be nonempty, as desired.
Proof of Theorem 2.
Let q < 2^r, and consider the k× r Vandermonde matrix V_k(ξ) for any distinct scalars ξ_i_i=1^r∈ (^×_q)^r and r,k such that k > r. Then, it follows by <ref>, that ∃ I ⊆ [r] nonempty such that ∑_i ∈ Iξ_i = 0, and we must have |I| > 2 as the ξ_i's are nonzero and distinct. Let us define ℓ := |I| and (c_i)_i=1^ℓ+1∈^ℓ+1_q to be the coefficient vector of the polynomial f(x) := ∏_i ∈ I (x - ξ_i) such that f(x) = ∑_i = 1^ℓ + 1 c_ix^i-1, and note by construction c_ℓ = ∑_i ∈ Iξ_i = 0. Now consider the square submatrix H := V_k(ξ)_J × I where J = [ℓ+1] ∖{ℓ}. If we take the linear combination y = c_ℓ + 1row_ℓ(H) + ∑_i=1^ℓ-1 c_irow_i(H), it follows that y = (f(ξ_i))_i ∈ I = 0. As c_ℓ+1 = 1, this is a nontrivial linear combination of the rows of H, and thus H is singular. Therefore, in order for the matrix to be super-regular, we must have q ≥ 2^r.
Proof of Lemma 2.
Observe that H is of the form
[ ξ^α_1 - 1_β_1 ξ^α_1 - 1_β_2 … ξ^α_1 - 1_β_ℓ; ξ^α_2 - 1_β_1 ξ^α_2- 1_β_2 … ξ^α_2 -1 _β_ℓ; ⋮ ⋮ ⋱ ⋮; ξ^α_ℓ - 1_β_1 ξ^α_ℓ -1_β_2 … ξ^α_ℓ -1 _β_ℓ; ]
while H' is of the form
[ 1 1 … 1; ξ^α_2-α_1_β_1 ξ^α_2-α_1_β_2 … ξ^α_2-α_1_β_ℓ; ⋮ ⋮ ⋱ ⋮; ξ^α_ℓ-α_1_β_1 ξ^α_ℓ-α_1_β_2 … ξ^α_ℓ - α_1_β_ℓ; ]
As the ξ_i's are all non-zero, it can be seen that we can get from H' to H by multiplying the ith column by ξ_{β_i}^{α_1 - 1} for all i ∈ [ℓ]. Therefore, det(H) = det(H') ∏_{i=1}^{ℓ} ξ_{β_i}^{α_1 - 1}, so either det(H) = det(H') = 0 or both matrices are non-singular, as desired.
Proof of Theorem 3.
Let us start by considering an arbitrary square submatrix of our proposed k × r Vandermonde matrix V_k(ξ)- that is, let I := {α_1, …, α_ℓ}⊆ [k] and J := {β_1,…, β_ℓ}⊆ [r] for some ℓ≤min(k,r) and let us define H := V_k(ξ)_I × J so that H is an ℓ×ℓ submatrix of V_k(ξ). Observe that
det(H) = ∑_{σ ∈ S_ℓ} ( sgn(σ) ∏_{i=1}^{ℓ} H_{i,σ(i)} )
= ∑_{σ ∈ S_ℓ} ( sgn(σ) ∏_{i=1}^{ℓ} ξ_{β_{σ(i)}}^{α_i - 1} )
where S_ℓ denotes the group of permutations of [ℓ]. See that we can treat the scalars as variables (x_i)_i = 1^r and the overall determinant as a multivariate polynomial
f_H(x) = ∑_{σ ∈ S_ℓ} ( sgn(σ) ∏_{i=1}^{ℓ} x_{β_{σ(i)}}^{α_i - 1} ) ∈ _q[x_1,…,x_r]
We deduce that the degree of the term in this summation corresponding to any arbitrary σ ∈ S_ℓ is ∑_{i=1}^{ℓ} (α_i - 1), and thus this is the total degree of f_H as well. Also, note that because every term in this summation corresponds to a unique permutation of [ℓ] and the α_i's are distinct, the resulting monomial terms are also all unique, so no terms cancel out and f_H is not identically 0 so long as q > deg(f_H). From here, see that for any family ℱ of square submatrices of V_k(ξ), if we define f_ℱ := ∏_{H ∈ ℱ} f_H, then f_ℱ is also not identically 0 so long as q > deg(f_ℱ). Note also that f_ℱ evaluates to 0 if and only if one of the square submatrices in ℱ has determinant 0 and is singular. Moreover, as deg(f_ℱ) = ∑_{(I,J) : V_k(ξ)_{I×J} ∈ ℱ} ∑_{α_i ∈ I} (α_i - 1), we can then apply Schwartz–Zippel to get that the probability that a uniformly randomly drawn vector from (_q^×)^r is a root of f_ℱ is at most
Pr_x[ f_ℱ(x) = 0 ] ≤ ∑_{(I,J) : V_k(ξ)_{I×J} ∈ ℱ} ∑_{α_i ∈ I} (α_i - 1) / (q - 1)
= ∑_{i ∈ [k]} (i - 1) ∑_{(I,J) : V_k(ξ)_{I×J} ∈ ℱ} χ_{i ∈ I} / (q - 1)
= ∑_{i ∈ [k]} (i - 1) |{V_k(ξ)_{I×J} ∈ ℱ : i ∈ I}| / (q - 1)
Now see that by <ref>, it is sufficient to test for super-regularity by only considering ℱ := { V_k(ξ)_{I×J} | 1 ∈ I }. Therefore, it follows that
Pr_x[ f_ℱ(x) = 0 ] ≤ ∑_{i ∈ [k]} (i - 1) |{V_k(ξ)_{I×J} | 1, i ∈ I}| / (q - 1)
= ∑_{i ∈ [k]} (i - 1) ∑_{ℓ=2}^{r} \binom{r}{ℓ}\binom{k-2}{ℓ-2} / (q - 1)
= \binom{k}{2} ∑_{ℓ=2}^{r} \binom{r}{ℓ}\binom{k-2}{ℓ-2} / (q - 1) < 1
if q > 1 + \binom{k}{2}∑_{ℓ=2}^{r}\binom{r}{ℓ}\binom{k-2}{ℓ-2}. If there is a nonzero probability that a uniformly randomly drawn vector x from (_q^×)^r is not a root of any of the determinant polynomials, then there must exist some assignment of scalars (ξ_i)_{i=1}^r such that the k × r Vandermonde matrix V_k(ξ) is super-regular, as desired.
§ APPENDIX: PROOFS FOR SECTION 4
Proof of Lemma 4.
First, see that because e ⊥ q - 1, we must have that θ^e is in fact another primitive element of _q. Thus, we can handle the cases of 2 × 2 submatrices formed by the first and second columns or the first and third columns identically. In both of these cases, the matrix is of the form
[ 1 θ^i; 1 θ^j ]
where q - 1 ≥ k > j > i ≥ 0. It follows that the determinant of this matrix is equal to θ^i - θ^j and thus the matrix is singular if and only if θ^i = θ^j ⟺θ^j - i = 1 ⟺ q - 1 | j - i, a contradiction as j - i < q - 1. Similarly, for the case that the 2 × 2 submatrix is formed by the second and third columns, the matrix is of the form
[ θ^i (θ^e)^i; θ^j (θ^e)^j ]
See that as (θ^i)^-1 and (θ^j)^-1 both exist and are nonzero, we can multiply through both rows by these constants respectively and it would not affect whether the matrix is singular or non-singular. As a result, it is sufficient to consider the matrix
[ 1 (θ^e-1)^i; 1 (θ^e-1)^j ]
and note that as e - 1 ⊥ q - 1, θ^e-1 is again a primitive element, and this matrix is thus non-singular by the same proof as in the previous case.
Proof of Lemma 5.
First, let us consider an arbitrary 3 × 3 square submatrix of the Vandermonde matrix; it must be of the form
H = [ 1 θ^i (σ(θ))^i; 1 θ^j (σ(θ))^j; 1 θ^k (σ(θ))^k ]
By a similar argument as in <ref>, (θ^i)^-1 and (σ(θ)^i)^-1 both exist and are nonzero, as σ(x) = 0 if and only if x = 0. Therefore, we can multiply these values through the second and third columns, respectively, and it would not affect whether the matrix or any square submatrix is singular. Thus it suffices to consider matrices of the form
H' = [ 1 1 1; 1 θ^e_1 (σ(θ))^e_1; 1 θ^e_2 (σ(θ))^e_2 ]
where q - 1 ≥ k > e_2 > e_1 > 0. Next, see that H' is singular if and only if there exists nontrivial (c_i)_i=1^3 ∈_q^3 such that ∑_i= 1^3 c_irow_i(H') = 0; in other words, {1, θ, σ(θ)} are all roots of the polynomial f(x) = c_1 + c_2x^e_1 + c_3x^e_2. See also that if c_i = 0 for any i ∈ [3], then if we let J = [3] ∖{i}, it follows that ∑_j ∈ Jc_jrow_j(H'_[3]×[2]) = 0. This would equate to a singular 2 × 2 square submatrix of V_k(1, θ, σ(θ)), a direct contradiction of <ref>. So we can assume the c_i's are all nonzero. From here, see that {1, θ, σ(θ)} are all roots of the polynomial f(x) = c_1 + c_2x^e_1 + c_3x^e_2 if and only if they are also roots of the polynomial g(x) = c_3^-1f(x), so we can assume without loss of generality that c_3 = 1. In summary, H' is singular if and only if ∃ c_1, c_2 ∈_q^× such that {1, θ, σ(θ)} are all roots of the polynomial f(x) = c_1 + c_2x^e_1 + x^e_2.
Plugging in our three roots into the polynomial, we get the following:
c_1 + c_2 + 1 = 0
c_1 + c_2θ^e_1 + θ^e_2 = 0
c_1 + c_2(σ(θ))^e_1 + (σ(θ))^e_2 = 0
Furthermore, we can plug in both sides of <ref> into σ, giving us the additional equation
0 = σ(c_1) + σ(c_2)(σ(θ))^e_1 + (σ(θ))^e_2
We can combine <ref> and <ref> to get
c_1 - σ(c_1) = (σ(c_2) - c_2)(σ(θ))^e_1
We can then substitute c_2 = -c_1 - 1 from manipulating <ref> to get
c_1 - σ(c_1) = (c_1 + 1 + σ(-c_1 - 1))(σ(θ))^e_1
= (c_1 - σ(c_1))(σ(θ))^e_1
Assume for sake of contradiction that c_1 ∉_p. Then c_1 is not fixed by σ and thus (c_1 - σ(c_1)) ≠ 0. We can then multiply through by (c_1 - σ(c_1))^-1 to get 1 = (σ(θ))^e_1. Note that as σ(θ) is a primitive element of _q, we must have q - 1 | e_1, contradicting the assumption that e_1 < q - 1. Therefore, we must have c_1 ∈_p^×, and as c_2 = - (c_1 + 1) and fields are closed under addition and inverses, c_2 ∈_p^× as well, as desired.
|
http://arxiv.org/abs/2405.08736v1 | 20240514162304 | Polytropic Dynamical Systems with Time Singularity | [
"Oday Hazaimah"
] | math.DS | [
"math.DS",
"34B15, 34B16, 34N05, 65L05, 65L10, 65L11"
] |
Polytropic Dynamical Systems with Time Singularity
Oday Hazaimah
[ Northern Illinois University, odayh982@yahoo.com. https://orcid.org/0009-0000-8984-2500.]
=================================================================================================================
In this paper we consider a class of second-order singular homogeneous differential equations of Lane-Emden type with a time singularity in the drift coefficient. Lane-Emden equations are singular initial value problems that model phenomena in astrophysics, such as stellar structure, and their solutions are polytropes, with applications to isothermal gas spheres. A hybrid method that combines two simple methods, Euler's method and the shooting method, is proposed to approximate the solution of this type of dynamic equation. We adopt the shooting method to reduce the boundary value problem to an initial value problem, then apply Euler's algorithm to the resulting initial value problem to obtain approximations to the solution of the Lane-Emden equation. Finally, numerical examples and simulations are provided to show the validity and efficiency of the proposed technique, and the convergence and error estimation are analyzed.
Keywords: Nonlinear Lane-Emden equation; Euler's method; Shooting method; Polytropes; Dynamics, Singularities.
Mathematics Subject Classification: 34B15, 34B16, 34N05, 65L05, 65L10, 65L11.
§ INTRODUCTION
Laplace's equation and Poisson's equation are important examples of elliptic partial differential equations used broadly in applied mathematics and theoretical physics; see, e.g., <cit.>. For instance, Poisson's equation is used to calculate gravitational fields in potential theory and can be seen as a generalization of Laplace's equation. By removing or reducing dimensions from Poisson's equation, we obtain a second-order nonlinear differential equation called the Lane-Emden-type equation (LE, for short). The Lane-Emden equation (a.k.a. polytropic dynamic equation) is one of the well-studied classical dynamical systems and has many applications in nonlinear mathematical physics and non-Newtonian fluid mechanics (see, for instance, <cit.>). A preliminary study of the LE equations (polytropic and isothermal) was undertaken by the astrophysicists Lane (1870) and Emden (1907); interest in the LE equation derives from its nonlinearity and its singular behavior at the origin. The point x_0 is called an ordinary point (or regular point) of the dynamic equation (<ref>) if the coefficients of x, x' are analytic in an interval about x_0; otherwise, it is called a singular point. In solving singular boundary value problems (BVPs), some numerical techniques are based on the idea of replacing a two-point BVP by two suitable initial value problems <cit.>. In this paper we adopt such an idea (called the shooting method) to study dynamical models that play an essential role in the theory of stellar structure and evolution, thermodynamics, and astrophysics (see, e.g., <cit.>). Equation (<ref>) describes and models the mechanical structure of a spherical body of gas, such as a self-gravitating star, and also appears in the study of stellar dynamics (see, for instance, <cit.> and the references therein).
The solutions to the LE, which are known as polytropes, are functions of density versus the radius expressed by x(t) in (<ref>). The index n determines the order of that solution. Nonlinear singular LE equations can be formulated as
1/t^2d/dt(t^2d x/d t)+x^n=0
or,
x”(t)+2/tx'(t)+[x(t)]^n=0, n≥ 0
subject to
x(0)=1 , x'(0)=0.
The dynamical system model (<ref>) along with initial conditions form a special type of initial value problems (IVP) for which it has several applications in the fields of celestial mechanics, quantum physics and astrophysics <cit.>.
The following figure is a motivating example showing finite solutions of the Lane-Emden equation (<ref>) (equivalently, (<ref>)) for the values n = 0, 1, 2, 3, 4, 5, 6.
< g r a p h i c s >
For the special cases n = 0, 1, 5, exact analytical solutions were obtained by Chandrasekhar <cit.>, while for all other values of n approximate analytical methods have been used, such as the Adomian decomposition method <cit.>, the homotopy analysis method <cit.>, power series expansions <cit.>, variational methods <cit.>, and linearization techniques <cit.>, which provide accurate closed-form solutions around the singularity.
Numerical discretization for equation (<ref>) has been the object of several studies in the last decades (see, e.g., <cit.> and the references therein). In <cit.>, the authors presented numerical method for solving singular IVPs by converting Lane-Emden-type equation (<ref>) to an integral operator form then rewriting the acquired Voltera integral equation in terms of a power series. Ramos <cit.> applied linearization method for the numerical solution of singular initial value problems of linear and nonlinear, homogeneous and nonhomogeneous second-order dynamic equations.
Russell and Shampine in <cit.> discussed the solution of the singular nonlinear BVP for certain dynamical systems in the context of analytical geometry and symmetry as follows
x”(t)+k/tx'(t)+g(t,x)=0, where k=0,1,2,
and with boundary conditions x'(0)=0 (or equivalently x(0) is finite), x(b)=λ, for some scalar λ, and the convergence is uniform over the interval [0,1].
Biles et al. in <cit.>, have considered an initial value problem for Lane-Emden type of the form
x”(t)+p(t)x'(t)+q(t,x(t))=0, t>0 with x(0)=a , x'(0)=0
where a∈ℝ and p(t) may be singular at t=0. They introduced the following definition and theorem, respectively; where the theorem gives the conditions of existence and uniqueness of solution of second-order linear BVPs.
x is a solution of the above equation (<ref>) if and only if there exists some T>0 such that x, x' are absolutely continuous on [0,T].
Suppose in the above equation (<ref>) p is measurable on [0,1], non-negative on (0,1] and ∫_0^1 sp(s) ds is finite, and q is bounded. Specifically, suppose there exist α,β with α<a<β and K>0 such that:
* for each t∈[0,1], q(t,·) ∈ C([α,β]);
* for each x ∈ [α,β], q(·,x) is measurable on [0,1]; and
* sup_(t,x) ∈ [0,1]×[α,β] |q(t,x)|≤ K.
* q is Lipschitz in x on [α,β].
Then equation (<ref>) has a unique solution.
Our paper is organized in the following fashion. In section 2, we provide some necessary notation and essential background. In section 3 we present the second-order dynamical system of Lane-Emden type; the BVP is transformed into an IVP by the shooting method, and Euler's method is then applied to the resulting initial value problem to obtain approximations to the solution of the LE. The convergence results and error estimation are analyzed in section 4. Finally, numerical examples are provided to demonstrate the validity and efficiency of the proposed technique.
§ PRELIMINARIES
In this section we introduce some basic definitions and conventional notations. Let C^1(I) be the space of all continuously differentiable functions defined on an interval I. A set D in the Euclidean space ℝ^n is compact set if and only if it is closed and bounded set. The basic space used throughout this paper is the space of continuous functions C[0,1] on the compact set
[0,1] with the associated norm (distance) function defined by,
x=max_0≤ t≤1|x(t)|.
Define a continuous function f:D→ℝ^n where D is an open subset of ℝ^n+1, and consider the dynamical system
ẋ(t)=f(t,x) , x(t_0)=x_0.
Given (t_0,x_0) ∈ D, a continuous function x(t) in an open interval (a,b) containing t_0 is a solution of the IVP (<ref>) if and only if
x(t)=x_0+ ∫_t_0^tf(s,x(s)) ds
for every t ∈ (a,b).
Conventionally, most dynamic evolution equations of this type (<ref>) arising in applications cannot be solved algebraically or exactly, but they can be investigated qualitatively without knowing the exact solutions. As we know, qualitative approaches are not very accurate; hence, a more accurate approximate solution of the dynamic equation (<ref>) can be obtained by the method of successive approximations. We say f is a differentiable function if its graph f := {(t,x(t)); t ∈ (a,b)} has a slope defined at every point t in the interval (a,b).
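As a sketch of how the integral form (<ref>) drives successive approximations (our own illustration, with the integral evaluated by the trapezoidal rule on a fixed grid):

def picard(f, t0, x0, T, n=200, iters=8):
    # successive approximations x_{m+1}(t) = x0 + integral of f(s, x_m(s)) from t0 to t
    h = (T - t0) / n
    ts = [t0 + i * h for i in range(n + 1)]
    xs = [x0] * (n + 1)                    # zeroth iterate: the constant x0
    for _ in range(iters):
        new, acc = [x0], 0.0
        for i in range(n):
            acc += 0.5 * h * (f(ts[i], xs[i]) + f(ts[i + 1], xs[i + 1]))
            new.append(x0 + acc)
        xs = new
    return ts, xs

# e.g. x' = x, x(0) = 1 converges towards e^t on [0, 1]
ts, xs = picard(lambda t, x: x, 0.0, 1.0, 1.0)
print(xs[-1])   # approximately 2.718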
Let D be a nonempty set. Suppose there is a function f from D to itself, and 0≤ L<1, where L is free of x and y. If for any two points x,y∈ D we have
|f(x)-f(y)|≤ L|x-y| , ∀ x,y∈ D,
then f is called a contraction. The smallest such value of L is called the Lipschitz constant of f, and f is then called a Lipschitz function.
A function f:D⊂^n+1→^n is said to be locally Lipschitz in x if for each compact set contained in D, and each x, y ∈ D, there exists L>0 such that
f(t,x)-f(t,y)≤ Lx-y.
In particular, all C^1 functions are locally Lipschitz. The following two theorems address existence and uniquness of solutions to any IVP.
A sequence x_n(t) of functions in C[a, b] converges uniformly to a function x(t)∈ C[a, b] if and only if lim_n→∞x_n-x=0.
(Picard-Lindelof theorem). If the function f : D →ℝ^n is continuous and locally Lipschitz in x in an open set D ⊂ℝ^n+1 , then for each (t_0 , x_0) ∈ D there exists a unique solution of the initial value problem in some open interval containing t_0.
Assume â(t,x,x') and its partial derivatives satisfy â, ∂â/∂ x, ∂â/∂ x' ∈ C([0,1]×ℝ×ℝ).
If ∂â/∂ x>0 and there exists M>0 such that | ∂â/∂ x'|<M, ∀ (t,x,x') ∈ [0,1] ×ℝ×ℝ,
then the BVP
d^2 x/d t^2=â(t,x,x')
with
x(0)=α , x(1)=β,
has a unique solution x=x(t).
To better understand the theorem we illustrate it by giving an example on the interval [1, 2] instead of [0, 1]: Consider the BVP,
x”(t)+sin x'+e^-tx=0
with x(1)=x(2)=0 and t∈[1,2]. Now apply the theorem to
x”(t)=-sin x'-e^-tx=â(t,x,x').
Since q(t,x(t))= ∂â/∂ x=te^-tx>0, ∀ t>0, and | p(t)=∂â/∂ x'|=|-cos x'| ≤ 1=M, then the condition is satisfied and the BVP has a unique solution.
The reader might ask how this theorem can be applied to the Lane-Emden equation. Theorem 2.2 can be applied by noting that the functions sin x'/x' and e^-tx are continuous on the interval (0,∞), so that the differential equation can be treated as one with continuous coefficients in x and x'.
§ COMPUTATIONAL METHODS FOR DYNAMICAL SYSTEMS
In this section, we start by presenting the methods (shooting, to transform the BVP into an IVP, and Euler's method, for the regular singularity in the drift term) and apply them to the second-order singular dynamical system.
§.§ Shooting method
The shooting method treats the two-point BVP as an IVP. The basic idea is to write the BVP in vector form, begin the solution at one end of the interval, and then "shoot" to the other end with any IVP solver, such as a Runge-Kutta or multistep method in the linear case, and the secant method or Newton's method in the nonlinear case, until the boundary condition at the other end converges to its correct value. To be precise, a second-order ordinary differential equation with its initial conditions must normally be written as a system of first-order equations before it can be solved by standard numerical methods. The next figure illustrates the mechanism of the shooting method.
[Figure: graphical illustration of the shooting method.]
Roughly speaking, we 'shoot' out trajectories in different directions until we find a trajectory that has the desired boundary value.
The drawback of the method is that it is not as robust as BVP-specific techniques such as the finite difference or collocation methods presented in <cit.>, and there is no guarantee of convergence.
The shooting method can be widely used for solving a BVP by reducing it to an associated IVP, and it is valid both for linear BVPs (where it is also called the chasing method) and for nonlinear BVPs <cit.>,
d^2 x/d t^2=â(t,x(t),x'(t)), x(t_0)=x_0, x(t_1)=x_1.
The next theorem provides existence and uniqueness for the BVP's solution.
Define a set D:={(t,x,x')∈ [a,b]×ℝ×ℝ}, and assume f is a continuous function on D. Consider the BVP:
{[ x”(t)=f(t,x,x'); x(a)=α; x(b)=β. ].
Suppose that f_x and f_x' are continuous on the same set D. If
(i) f_x(t,x,x')>0 for all values, and
(ii) There exists M>0 such that
|f_x'(t,x,x')|≤ M , ∀ (t,x,x')∈ D
then the BVP (<ref>) has a unique solution.
A special case of this theorem is the following corollary, i.e., when the right hand side of (<ref>) is linear. For linear Lane-Emden equations, one can use Frobenius method to determine the analytical solutions of (<ref>) near the singularity, see, for instance, <cit.>.
Consider (<ref>) given by
x”(t)=p(t)x'+q(t)x+r(t),
where the time-dependent coefficients p(t), q(t), r(t) are continuous functions on the domain [a,b] and q(t)>0; then the BVP (<ref>) has a unique solution.
We consider two auxiliary problems: (i) equation (<ref>) with initial conditions x(a)=α, x'(a)=0, which has a unique solution x_1(t); (ii) equation (<ref>) with r(t)=0 and x(a)=0, x'(a)=1, which has a unique solution x_2(t).
One can then easily check that the linear combination x̂(t)=x_1(t)+(β -x_1(b))/x_2(b) x_2(t) is the unique solution to (<ref>), and hence to (<ref>), due to the existence and uniqueness guaranteed by the Picard-Lindelof theorem (<ref>).
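To make the linear shooting construction concrete, the following minimal Python sketch (an illustration of ours, not part of the original specification) solves the two auxiliary IVPs with forward Euler and forms the combination x̂(t); the test problem x''=-x with exact solution sin t is our own choice for verification.

```python
import numpy as np

def euler_ivp(f, t0, T, x0, v0, N):
    """Forward Euler for x'' = f(t, x, x') written as a first-order system."""
    h = (T - t0) / N
    t, x, v = t0, x0, v0
    xs = [x0]
    for _ in range(N):
        x, v, t = x + h * v, v + h * f(t, x, v), t + h
        xs.append(x)
    return np.array(xs)

def linear_shooting(p, q, r, a, b, alpha, beta, N=1000):
    """Solve x'' = p(t)x' + q(t)x + r(t), x(a)=alpha, x(b)=beta."""
    f1 = lambda t, x, v: p(t) * v + q(t) * x + r(t)   # full equation
    f2 = lambda t, x, v: p(t) * v + q(t) * x          # homogeneous part
    x1 = euler_ivp(f1, a, b, alpha, 0.0, N)   # x1(a)=alpha, x1'(a)=0
    x2 = euler_ivp(f2, a, b, 0.0, 1.0, N)     # x2(a)=0,     x2'(a)=1
    return x1 + (beta - x1[-1]) / x2[-1] * x2  # enforces x(b)=beta

# Test: x'' = -x with x(0)=0, x(pi/4)=sqrt(2)/2; exact solution is sin(t)
sol = linear_shooting(lambda t: 0.0, lambda t: -1.0, lambda t: 0.0,
                      0.0, np.pi / 4, 0.0, np.sqrt(2) / 2)
print(sol[-1])  # close to sqrt(2)/2, up to the Euler discretization error
```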
§.§ Euler's Method
Euler's method is a numerical approach for solving initial value problems iteratively, as follows. We divide the time interval [t_0,T] into N equal subintervals, each of length h=Δ t=t_n+1-t_n for n≥ 0, start from the initial value x(t_0), and march forward with this step size towards x(T). That is, given the second-order ordinary differential equation (<ref>), we convert it into two first-order dynamic equations (i.e., a dynamical system) and discretize the interval [t_0,T] into subintervals, where y_n denotes the approximation to x(t_n) and v_n the approximation to x'(t_n). Euler's method can then be derived as a two-term truncated Taylor series; for a second-order differential equation it reads:
Forward Euler's Algorithm.
Step 0. (Initialization):
Take
t_0, x_0, v_0∈ℝ, h=(T-t_0)/N, n≥ 0.
Step 1. (Forward step): Given t_n , y_n , v_n define
t_n+1 = t_n + h,
y_n+1 = y_n + h · v_n,
v_n+1 = v_n + h ·â(t_n, y_n, v_n),
Stopping criterion: stop after N steps, i.e., once t_n+1=T.
The local error at every step is proportional to the square of the step size h, and the global error at a given time is proportional to h. Moreover, the order of the global error can be obtained from the order of the local error (i.e., by summing the local errors). We can understand Euler's method by appealing to the idea that the differential equation provides the slope of the solution at every point, while the initial value provides a point on it. Using this information we can approximate the function by its tangent line at the initial point. The tangent line is only a good approximation over a small interval. When moving to a new point, we construct an approximate tangent line using the slope prescribed by the equation and the approximate value of the function at the point of tangency. Repeating this process, we eventually construct a piecewise-linear approximation to the solution of the differential equation. Moreover, this approximation is a discrete function; to make it continuous, we interpolate linearly between each pair of points.
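The following short Python sketch transcribes the forward Euler algorithm above for a generic right-hand side â; the function name and interface are illustrative assumptions of ours.

```python
def forward_euler(a_hat, t0, T, y0, v0, N):
    """Forward Euler for x'' = a_hat(t, x, x'), written as a first-order system.

    y_n approximates x(t_n) and v_n approximates x'(t_n); the tangent-line
    update is applied on each of the N subintervals of length h."""
    h = (T - t0) / N          # Step 0: fixed step size
    t, y, v = t0, y0, v0
    history = [(t, y, v)]
    for _ in range(N):        # Step 1: forward steps; stop after N of them
        y_next = y + h * v                  # y_{n+1} = y_n + h * v_n
        v_next = v + h * a_hat(t, y, v)     # v_{n+1} = v_n + h * a_hat(t_n, y_n, v_n)
        t, y, v = t + h, y_next, v_next
        history.append((t, y, v))
    return history
```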
In the following, we study and analyse the Lane-Emden-type equation with an endpoint singularity in terms of the independent variable which has the form
d^2 x/d t^2=-a(t,x)/1-td x/d t+g(t,x) = â(t,x,x')
where â(t,x(t),x'(t)): [0,1)×ℝ×ℝ→ℝ, and the Lipschitz functions a(t,x), g(t,x)∈ C^1([0,1)×ℝ) for all 0 ≤ t<1.
At t=1, the term -a(t,x)/(1-t) is singular, but symmetry implies the boundary condition x'(1)=0. With this boundary condition, the term -a(t,x)/(1-t) dx/dt remains well defined as t→ 1.
The solution of (<ref>) can be given by the system:
dx' =â(t,x(t),x'(t))dt
dx =x'dt
Define x_t:=x(t), x'_t:=x'(t). By the fundamental theorem of calculus, and provided that all integrals exist (are finite), equation (<ref>) is equivalent to the nonlinear system of integral equations:
x'_t = x'_t_n +∫_t_n^t â(s,x_s,x'_s) ds
x_t = x_t_n +∫_t_n^t x'_s ds .
where
0 = t_0 < t_1 < t_2 < ... < 1 .
Expanding the integrands in (<ref>), we have:
x'_t = x'_t_n+∫_t_n^t [â(t_n, x_t_n, x'_t_n)
+∫_t_n^s[∂â/∂ t(u,x_u,x'_u)
+∂â/∂ x(u,x_u,x'_u)x'_u+∂â/∂ x'(u,x_u,x'_u)â(u,x_u,x'_u)]du]ds
x_t = x_t_n+∫_t_n^t[x'_t_n+∫_t_n^sâ(u,x_u,x'_u)du]ds.
Or in the equivalent form,
x'_t = x'_t_n+â(t_n,x(t_n),x'(t_n))(t-t_n)
+∫_t_n^t ∫_t_n^s (∂â/∂ t+∂â/∂ x x'_u+∂â/∂ x'â)(u,x_u,x'_u) du ds
x_t = x_t_n + x'_t_n(t-t_n) + ∫_t_n^t ∫_t_n^s â(u,x_u,x'_u) du ds
For simplicity we assume
L^(1)_n =∫_t_n^t∫_t_n^s (∂â/∂ t+∂â/∂ xx'_u+∂â/∂ x'â)(u,x_u,x'_u) du ds,
L^(2)_n =∫_t_n^t∫_t_n^s â(u,x_u,x'_u) du ds.
Thus the system becomes,
x'_t_n+1 =x'_t_n+â(t_n, x(t_n), x'(t_n))(h_n+1)+ L_n^(1)
x_t_n+1 =x_t_n+x'_t_n h_n+1+L_n^(2)
where h_n+1=t_n+1-t_n.
In order to estimate the error, we need to find a bound for the integrands in L^(1)_n and L^(2)_n. The double integrals in L^(1) and L^(2) yield the local truncation error if we define the numerical values by:
y'_n+1 =y'_n + â(t_n,y_n,y'_n)h_n+1
y_n+1 =y_n + y'_n h_n+1.
where h_n+1=t_n+1-t_n.
§ DISCRETIZATION AND CONVERGENCE ANALYSIS
Consider a sequence of times
0 = t_0 < t_1 < t_2 < ... < 1 ,
and the corresponding step sizes h_n=t_n - t_n-1.
Define x_n = x(t_n) and x'_n=x'(t_n) where (x(t), x'(t)) is a solution of (5). Writing (8) in the form:
x'_n+1 =x'_n +â(t_n, x_n, x'_n)(h_n+1)+ L_n^(1)
x_n+1 =x_n +x'_n h_n+1+L_n^(2)
Use y_n as defined in (9) and let ϵ_i=x_i-y_i, ϵ'_i=x'_i-y'_i, ∀ i. So we have
ϵ'_n+1 =ϵ'_n +[â(t_n,x_n, x'_n)-â(t_n,y_n,y'_n)](h_n+1)+ L_n^(1)
ϵ_n+1 =ϵ_n +ϵ'_n h_n+1+L_n^(2)
By using the inequality (x+y)^2≤2x^2+2y^2, the error can be estimated as,
(ϵ'_n+1)^2≤(ϵ'_n)^2+2[â(t_n,x_n,x'_n)-â(t_n,y_n,y'_n)]^2(h_n+1)^2 +2(L_n^(1))^2
+2ϵ'_n(â(t_n,x_n,x'_n)-â(t_n,y_n,y'_n))h_n+1+2ϵ'_nL_n^(1)
(ϵ_n+1)^2≤(ϵ_n)^2+2(ϵ'_n)^2(h_n+1)^2+2(L_n^(2))^2+2ϵ_nϵ'_n h_n+1 +2ϵ_n L_n^(2).
Next, we introduce some assumptions on the functions a(t,x(t)), g(t,x(t)) and their partial derivatives for t ∈ [0,1), x ∈ℝ . But before that we remind ourselves of the value of â from section 3,
â(t,x(t),x'(t))=-a(t,x(t))/1-tdx/dt+g(t,x(t)).
Also, for some constants T_1, T_2 ∈ [0,1), the Lipschitz conditions are:
|a(t,x)-a(t,y)| ≤ T_1 |x-y| , |g(t,x)-g(t,y)| ≤ T_2 |x-y|.
Our required bounds explicitly are:
|a(t,x(t))| ≤ C_0 , |g(t,x(t))| ≤ C_3 .
The partial derivatives bounds are:
|∂ a/∂ t (t,x(t))| =|a_1(t,x(t))| ≤ C_1,
|∂ a/∂ x(t,x(t))| =|a_2(t,x(t))| ≤ C_2,
|∂ g/∂ t(t,x(t))| =|g_1(t,x(t))| ≤ C_4,
|∂ g/∂ x(t,x(t))| =|g_2(t,x(t))| ≤ C_5 .
This final bound applies along the path
|x'(t)|≤ A_1.
Taking the difference between the computed and the exact values of â,
|â(t,x,x')-â(t,y,y')|
= |-a(t,x)/1-tx'+g(t,x)+a(t,y)/1-ty'-g(t,y)|
≤|a(t,y)y'-a(t,x)x'/1-t|+|g(t,x)-g(t,y)|.
By adding and subtracting the required terms, we have
|a(t,y)y' - a(t,x)x'|
= |a(t,x)(y'-x')+x'(a(t,y)-a(t,x))+(a(t,y)-a(t,x))(y'-x')|
≤ C_0 |y'-x'|+A_1T_1 |y-x|+T_1 |y-x|.|y'-x'|.
Thus, the difference <ref> becomes,
|â(t_n,x_n,x'_n)-â(t_n,y_n,y'_n)| ≤C_0|ϵ'_n|/1-t+A_1T_1|ϵ_n|/1-t+T_1|ϵ_n| |ϵ'_n|/1-t+T_2|ϵ_n|.
Note that,
∂â/∂ t = â_̂1̂(t,x,x')
=-a_1(t,x)x'/1-t-a(t,x)x'/(1-t)^2+g_1(t,x),
∂â/∂ xx'
=-a_2(t,x)/1-t(x')^2+g_2(t,x)x',
∂â/∂ x'â =a^2(t,x)/(1-t)^2x'-a(t,x)g(t,x)/1-t.
We now apply a very well known result from functional analysis, the Cauchy-Schwarz inequality, twice on L^(1) and L^(2):
(L_N^(1))^2
= (∫_t_n^t_n+1∫_t_n^t(∂â/∂ t+∂â/∂ xx'+∂â/∂ x'â) ds dt)^2
≤ h^2_n+1∫_t_n^t_n+1∫_t_n^t(∂â/∂ t+∂â/∂ xx'+∂â/∂ x'â)^2 ds dt
≤ h^2_n+1∫_t_n^t_n+1∫_t_n^t[3(∂â/∂ t)^2+3(∂â/∂ xx')^2+3(∂â/∂ x'â)^2] ds dt
≤ 3h^2_n+1∫_t_n^t_n+1∫_t_n^t( 3C_1^2A_1^2/(1-t)^2+3C_0^2A_1^2/(1-t)^4+3C_4^2
+2C_2^2A_1^4/(1-t)^2 + 2C_5^2A_1^2 + 2C_0^4A_1^2/(1-t)^4 + 2C_0^2C_3^2/(1-t)^2) ds dt
≤ D_1h_n+1^4/(1-t_n+1)^4.
for some constant D_1, which does not depend on h_n+1 and n.
(L_N^(2))^2
= (∫_t_n^t_n+1∫_t_n^t(-a(t,x)/1-s+g(t,x)) ds dt)^2
≤ h_n+1^2 ∫_t_n^t_n+1∫_t_n^t(-a(t,x)/1-s+g(t,x))^2 ds dt
≤ h_n+1^2 ∫_t_n^t_n+1∫_t_n^t2a^2(t,x)/(1-s)^2 ds dt +h_n+1^2∫_t_n^t_n+1∫_t_n^t2g^2(t,x) ds dt
≤ 2h_n+1^2( ∫_t_n^t_n+1∫_t_n^tC_0^2/(1-s)^2 ds dt +∫_t_n^t_n+1∫_t_n^tC_3^2 ds dt)
= 2h_n+1^2(C_0^2 ∫_t_n^t_n+1-1/1-s dt+C_3^2∫_t_n^t_n+1(t-t_n) dt)
≤2h_n+1^4C_0^2/(1-t_n+1)^2+C_3^2/2h_n+1^4
≤ D_2h_n+1^4/(1-t_n+1)^2.
where D_2 is independent of n and h_n+1.
To avoid the singularity and produce a better estimate for testing the efficiency of the algorithm, we introduce a variable step size by fixing ĥ>0 and then defining the step sizes h_n and node points t_n through
ĥ=h_n+1/(1-t_n+1),
or, equivalently,
t_n+1=t_n+ĥ(1-t_n+1).
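A minimal sketch of this graded mesh follows; solving the implicit relation gives t_{n+1}=(t_n+ĥ)/(1+ĥ), and the stopping tolerance δ on 1-t_n is our own illustrative choice.

```python
def graded_mesh(h_hat, delta=1e-3, t0=0.0):
    """Node points satisfying t_{n+1} = t_n + h_hat * (1 - t_{n+1}),
    i.e. t_{n+1} = (t_n + h_hat) / (1 + h_hat), accumulating toward the
    singularity at t = 1; stop once 1 - t_n drops below delta."""
    ts = [t0]
    while 1.0 - ts[-1] >= delta:
        ts.append((ts[-1] + h_hat) / (1.0 + h_hat))
    return ts

mesh = graded_mesh(0.05)
print(len(mesh), mesh[:4], mesh[-1])  # step sizes shrink as t_n approaches 1
```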
In the process of estimating the global error, we need to use the following two fundamental lemmas:
For all x≥ -1, and any m>0, we have 0≤ (1+x)^m≤ e^mx.
The proof of this result follows by applying Taylor's theorem with f(x)=e^x, x_0=0, and n=1.
If M_1≥-1 and M_2≥0 are real numbers and {a_n}_n=0^N is a sequence with a_0 ≥0 such that
a_n+1≤(1+M_1)a_n+M_2 , ∀ n=0, 1, 2, …, N-1,
then,
a_n+1≤ e^(N+1)M_1(M_2/M_1+a_0)-M_2/M_1, ∀ n=0, 1, 2, … ,N-1.
Fix a positive integer n, then (<ref>) can be written as
a_n+1 ≤ (1+M_1)a_n+M_2
≤ (1+M_1)[(1+M_1)a_n-1+M_2]+M_2
⋮
≤ (1+M_1)^n+1a_0+[1+(1+M_1)+(1+M_1)^2+…+(1+M_1)^n]M_2
≤ (1+M_1)^n+1a_0+[∑_j=0^n(1+M_1)^j]M_2
≤ (1+M_1)^n+1a_0+[1-(1+M_1)^n+1/1-(1+M_1)]M_2 (sum of geometric series)
≤ (1+M_1)^n+1a_0+[(1+M_1)^n+1-1]M_2/M_1
≤ (1+M_1)^n+1(a_0+M_2/M_1)-M_2/M_1.
By Lemma <ref>, equation (<ref>) follows, i.e.,
a_n+1≤ e^(1+N)M_1(a_0+M_2/M_1)-M_2/M_1.
Now if we add the two inequalities in (11) together, we will have
(ϵ'_n+1)^2 + (ϵ_n+1)^2
≤ (ϵ'_n)^2+(ϵ_n)^2+2h_n+1^2(ϵ'_n)^2+2[â(t_n,x_n,x'_n)-â(t_n,y_n,y'_n)]^2 h_n+1^2
+ 2(L_n^(1))^2+2(L_n^(2))^2+2ϵ_nϵ'_n h_n+1 +2ϵ_n L_n^(2)
+ 2ϵ'_n((â(t_n,x_n,x'_n)-â(t_n,y_n,y'_n))h_n+1+L_n^(1))
≤ (ϵ'_n)^2+(ϵ_n)^2+2h_n+1^2(ϵ'_n)^2+8C_0^2(ϵ'_n)^2(h_n+1/1-t_n+1)^2
+ 8A_1^2T_1^2ϵ_n^2(h_n+1/1-t_n+1)^2+8T_1^2ϵ_n^2(ϵ'_n)^2(h_n+1/1-t_n+1)^2+8T_2^2ϵ_n^2h_n+1^2
+ 2D_1(h_n+1/1-t_n+1)^4+2D_2(h_n+1/1-t_n+1)^2h_n+1^2+2ϵ_nϵ'_nh_n+1
+ 2ϵ'_n√(D_1)(h_n+1/1-t_n+1)^2+2ϵ_n√(D_2)(h_n+1/1-t_n+1)h_n+1
+ 2(ϵ'_n)^2C_0(h_n+1/1-t_n+1)+2A_1T_1ϵ_nϵ'_n(h_n+1/1-t_n+1)
+ 2T_1ϵ_n(ϵ'_n)^2(h_n+1/1-t_n+1)+2T_2ϵ_nϵ'_nh_n+1
≤[ K_1 h_n+1^2+K_2 (h_n+1/1-t_n+1)^2+h_n+1+K_3 (h_n+1/1-t_n+1) ] ||ϵ_n||^2
+ 2D_1(h_n+1/1-t_n+1)^4+2D_2(h_n+1/1-t_n+1)^4+ K_4ϵ_n(ϵ'_n)^2(h_n+1/1-t_n+1)
+ 2(h_n+1/1-t_n+1)^2[√(D_1)+√(D_2)]√((ϵ'_n)^2+(ϵ_n)^2)+K_5^2ϵ_n^2(ϵ'_n)^2(h_n+1/1-t_n+1)^2.
Using the definition of the norm ‖ϵ_n‖=√((ϵ'_n)^2+(ϵ_n)^2), system (13) can be simplified as
(ϵ'_n+1)^2+(ϵ_n+1)^2≤ (ϵ'_n)^2+(ϵ_n)^2+m_1(ĥ)[(ϵ'_n)^2+(ϵ_n)^2]+m_2(ĥ)^3
where m_1 and m_2 are constants independent of h_n+1 and t_n+1.
Now we apply Lemma <ref> with a_n=‖ϵ_n‖^2, M_1=m_1(ĥ) and M_2=m_2(ĥ)^3, which also fixes the order of the step size, so that if
ϵ_n+1^2 ≤ϵ_n^2 +M_1 ϵ_n^2 + M_2 = (1+M_1)ϵ_n^2+M_2
then we have
ϵ_n+1^2 ≤ e^NM_1(M_2/M_1+ϵ_0^2) -M_2/M_1 = (e^NM_1-1) M_2/M_1,
where the last equality uses ϵ_0=0.
The following theorem establishes the variable-step-size choice and the uniform convergence of the solutions produced by the method.
Given that the singular boundary value problem (<ref>) satisfies the upper-bound assumptions in (17a)-(17d), the squared error of the successive approximation (<ref>) with variable step sizes (<ref>) is O((ĥ)^2) as ĥ→ 0, uniformly in n for t_n < 1-δ < 1; thus the global pointwise error of the proposed algorithm is of order O(ĥ).
If we take N steps, (<ref>) gives (1-ĥ)^N=δ, and thus
N=lnδ/ln(1-ĥ)≈ -lnδ/ĥ, as ĥ→ 0.
Then by using Lemma <ref> on (<ref>), we have
‖ϵ_n‖^2 ≤[e^Nm_1(ĥ)-1] (m_2/m_1)(ĥ)^2 ≤(D/δ^m_1)(ĥ)^2,
where D and m_1 are constants that do not depend on n, ĥ or δ.
§ SIMULATION AND NUMERICAL EXPERIMENTS
In this section we run the algorithm on some examples to show the validity of the method. We used MATLAB, with the built-in solver ode45 for reference and our own Euler solver implementation for the proposed scheme.
Consider the second-order differential equation (<ref>) with a(t,x)=sin x and g(t,x)=x^5, where the step size is 0.05 and the time interval is [0,1], along with initial conditions x(0)=0, x'(0)=2; i.e.,
d^2 x/d t^2=-sin x/1-td x/d t+x^5
Table 1 compares the two computed solution components x(t) and x'(t) for equation (<ref>) with the above numerical values, and the figures below plot the solution trajectories of the differential equation against time.
The analytical solution to this problem is somewhat lower than our approximation. By shrinking the step size Δ t, we can compute a more accurate estimate.
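As an illustration, the following sketch reproduces the setup of this example with the stated step size; cutting off shortly before t=1 to avoid the singularity is our own choice.

```python
import numpy as np

# Example 1: a(t, x) = sin x, g(t, x) = x**5, step h = 0.05,
# initial conditions x(0) = 0, x'(0) = 2
def a_hat(t, x, v):
    return -np.sin(x) / (1.0 - t) * v + x**5

h, t, x, v = 0.05, 0.0, 0.0, 2.0
print("    t       x(t)      x'(t)")
while t < 1.0 - 1.5 * h:        # stop short of the singularity at t = 1
    x, v = x + h * v, v + h * a_hat(t, x, v)
    t += h
    print(f"{t:6.2f} {x:10.5f} {v:10.5f}")
```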
[Figure: Euler trajectories of x(t) and x'(t) for Example 1.]
Consider equation (<ref>) with a(t,x)=tx and g(t,x)=x^3, where the step size is Δ t=0.1 and the time interval is again [0,1], along with initial conditions x(0)=0, x'(0)=2; i.e.,
d^2 x/d t^2=-tx/1-td x/d t+x^3
[Figure: Euler trajectories for Example 2.]
Consider a constant function a(t,x)=3 in the Lane-Emden equation (<ref>) with g(t,x)=e^tx, where the step size is Δ t=0.05 and the time interval is [0,2], along with initial conditions x(0)=0, x'(0)=2; i.e.,
d^2 x/d t^2=-3/1-td x/d t+e^tx.
[Figure: Euler trajectories for Example 3.]
Consider the second-order dynamic equation (<ref>) with a(t,x)=2t and g(t,x)=tx^2, where the step size is 0.01 (which can be reduced further to decrease the error) and the time interval is [0,1], along with initial conditions x(0)=0, x'(0)=1; i.e.,
d^2 x/d t^2=-2t/1-td x/d t+tx^2.
[Figure: Euler trajectories for Example 4.]
§ CONCLUSION AND EXTENSIONS
In this paper our primary goal was to investigate second-order singular Lane-Emden-type equations, and we have successfully arrived at their solutions by the forward Euler algorithm combined with the shooting method, which reduces the boundary value problem to an initial value problem; the method proved to be accurate and time-saving. The Lane-Emden equations are solved for polytropic indices varying over 1, 2, 3 and 5, with constants, linear functions and periodic functions in the drift term. The numerical solution of the problem for these values of the indices replaces the unsolvable version of the equation and any closed-form solution that we might wish to find. For the case n = 2 the solution is obtained as an infinite power series. Graphical representations of these results give us information about polytropes for different values of the polytropic indices, which may be helpful in the study of the behavior of stellar structures in astrophysics. One good extension of this work is to implement the backward Euler formula for second-order differential equations, where the recursion formula is the same except that the dependent variable is a vector. Another possible modification of the work is to use the reliable Runge-Kutta method, which promises accurate results in deriving the solutions of the Lane-Emden equations.
It is also significant for handling highly nonlinear differential equations with fewer computations and a larger interval of convergence. More globally, finite difference methods may be used to replace the shooting method to treat the boundary value problem. Finally, one may think of adding additive noise to the second-order differential equation (it will then be called a stochastic differential equation), in which case Euler's method is replaced by the Euler-Maruyama algorithm; see, for instance, <cit.>.
§.§ Acknowledgments and Declarations
The author would like to express his gratitude to Professor Randy Hughes of Southern Illinois University Carbondale for suggesting the problem and providing valuable advice along the way of writing this manuscript. The author declares that there is no conflict of interest or competing interests.
Jedara
Ala'yed, O., Saadeh, R., Qazza, A.: Numerical solution for the system of Lane-Emden type equations using cubic B-spline method arising in engineering. AIMS Mathematics. 8, 6, (2023).
Ali
Ali, K.K., Mehanna, M.S., Abdelrahman, M.I., Shaalan, M.A.: Analytical and Numerical solutions for fourth order Lane-Emden-Fowler equation. Partial Differential Equations in Applied Mathematics. 6, (2022).
Asadpour
Asadpour, A., Hosseinzadeh, H., Yazdani, A.: Numerical Solution of the Lane-Emden Equations with Moving Least Squares Method. Applications and Applied Mathematics. 14, 2 (2019).
Barr
Barreira, L., Valls, C.: Ordinary Differential Equations.
American Mathematical Society (2010).
Bataineh
Bataineh, A.S., Noorani, M.S.M, Hashim, I.: Homotopy analysis method for singular IVPs of Emden-Fowler type. Communications in nonlinear science and numerical simulation. 14, 1121-1131 (2009).
Bile
Biles, D.C., Robinson, M.P., Sparker, J.S.: A generalization of the Lane-Emden equation. J. Math. Anal. Appl. 654-666 (2002).
Burd
Burden, R.L., Faires, J.D.: Numerical Analysis. Brooks/Cole Publishing Company. (2011).
Chandra
Chandrasekhar, S.: An Introduction to the Study of Stellar Structure. Dover Publications, New York (1967).
ColliTee
Collings A.G., Tee G.J.: Stability and accuracy of the generalized Euler method for ordinary differential equations, with reference to structural dynamics problems. Engineering Structures. 2, 99-108 (1979).
Datt
Datta, B.K.: Analytic solution to the Lane-Emden equation. Nuovo Cimento. 111, 1385-1388 (1996).
Davis
Davis, H.T.: Introduction to nonlinear differential and integral equations. Dover Publications Inc. New York (1962).
HughesGreg
Gregory, J., Hughes, H.R.: New general methods for numerical stochastic differential equations. Utilitas Mathematica. 63, 53-64 (2003).
He
He, J.H.: Variational approach to the Lane-Emden equation. Applied Mathematics and computation. 143, 539-541 (2003).
Herron
Herron, I.H.: Solving singular boundary value problems for ordinary differential equations. Carrib J. Math Comput. Sci. 1-30 (2013).
HughesLoch
Hughes, H.R., Siriwardena, L.P.: Efficient variable step size approximations for strong solutions of stochastic differential equations with additive noise and time singularity. International Journal of Stochastic Analysis. (2014).
Karimi
Karimi, S.V., Aminataei, A.: On the numerical solution of differential equations of Lane-Emden type. Computers and Mathematics with applications. 59, 2815-2820 (2010).
Kloeden
Kloeden, P.E., Platen, E.: Numerical solution of stochastic differential equations. Applications of Mathematics, Stochastic Modelling and Applied Probability. Springer, New York. 23, (1992).
Othmar
Koch, O., Weinmüller, E.B.: The convergence of shooting methods for singular boundary value problems. Mathematics of Computation. 289-305 (2003).
Maruy
Maruyama, G.: Continuous Markov processes and stochastic equations. Rendiconti del circolo matematico di palermo. 4, 48-90 (1955).
Motsa
Motsa, S.S., Shateye, S.: New analytic solution to the Lane-Emden equation of index 2. Mathematical problems in engineering. (2012).
KouroshAmin
Parand K., Ghaderi, A.: Two efficient computational algorithms to solve the singularly perturbed Lane-Emden problem. arxiv.org/abs/1708.07384. (2017).
PeterOlver
Peter J.O.: Introduction to Partial Differential Equations. Springer, (2014).
Ramos
Ramos, J.I.: Linearization techniques for singular initial-value problems of ordinary differential equations. Applied Mathematics and Computation. 161, 525-542 (2005).
Royden
Royden, H.L.: Real analysis. New York (1988).
Russ
Russell, R.D., Shampine, L.F.: Numerical methods for singular boundary value problems. J. Num. Anal. 13-36 (1975).
sand
Sandile, S.M., Precious, S.I.: A New Algorithem for Solving Singular IVPs of Lane-Emden type. Latest Trends on Applied Mathematics, Simulation, Modelling. 176-180 (2010).
Wazwaz
Wazwaz, A.: Adomian decomposition method for a reliable treatment of the Emden-Fowler equation. Applied Mathematics and computation. 161, 543-560 (2005).
|
http://arxiv.org/abs/2405.09753v1 | 20240516012638 | Stacked Intelligent Metasurfaces for Holographic MIMO Aided Cell-Free Networks | [
"Qingchao Li",
"Mohammed El-Hajjar",
"Chao Xu",
"Jiancheng An",
"Chau Yuen",
"Lajos Hanzo"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
Stacked Intelligent Metasurfaces for Holographic MIMO Aided Cell-Free Networks
Qingchao Li, Graduate Student Member, IEEE,
Mohammed El-Hajjar, Senior Member, IEEE,
Chao Xu, Senior Member, IEEE,
Jiancheng An, Member, IEEE,
Chau Yuen, Fellow, IEEE,
and Lajos Hanzo, Life Fellow, IEEE
The work of Chau Yuen was supported by the Ministry of Education, Singapore, under its Ministry of Education (MOE) Tier 2 (Award number MOE-T2EP50220-0019). The work of Lajos Hanzo was supported by the Engineering and Physical Sciences Research Council projects EP/W016605/1, EP/X01228X/1, EP/Y026721/1 and EP/W032635/1 as well as of the European Research Council's Advanced Fellow Grant QuantCom (Grant No. 789028). (Corresponding author: Lajos Hanzo.)
Qingchao Li, Mohammed El-Hajjar, Chao Xu and Lajos Hanzo are with the School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K. (e-mail: qingchao.li@soton.ac.uk; meh@ecs.soton.ac.uk; cx1g08@ecs.soton.ac.uk; lh@ecs.soton.ac.uk).
Jiancheng An and Chau Yuen are with the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798 (e-mail: jiancheng.an@ntu.edu.sg; chau.yuen@ntu.edu.sg).
May 20, 2024
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Large-scale multiple-input and multiple-output (MIMO) systems are capable of achieving high data rates. However, given the high hardware cost and excessive power consumption of massive MIMO systems, intelligent metasurfaces have been designed as a remedy for realizing efficient holographic MIMO (HMIMO) systems. In this paper, we propose an HMIMO architecture based on stacked intelligent metasurfaces (SIM) for the uplink of cell-free systems, where the SIM is employed at the access points (APs) for improving the spectral- and energy-efficiency. Specifically, we conceive distributed beamforming for SIM-assisted cell-free networks, where both the SIM coefficients and the local receiver combiner vectors of each AP are optimized based on the local channel state information (CSI) for the local detection of each user equipment's (UE) information. Afterwards, the central processing unit (CPU) fuses the local detections gleaned from all APs to detect the aggregate multi-user signal. Specifically, to design the SIM coefficients and the combining vectors of the APs, a low-complexity layer-by-layer iterative optimization algorithm is proposed for maximizing the equivalent gain of the channel spanning from the UEs to the APs. At the CPU, the weight vector used for combining the local detections from all APs is designed based on the minimum mean square error (MMSE) criterion, where the hardware impairments (HWIs) are also taken into consideration based on their statistics. The simulation results show that the SIM-based HMIMO outperforms the conventional single-layer HMIMO in terms of the achievable rate. We demonstrate that the HWIs of the radio frequency (RF) chains at both the APs and the UEs limit the achievable rate in the high signal-to-noise-ratio (SNR) region.
Holographic multiple-input and multiple-output (HMIMO), stacked intelligent metasurface (SIM), cell-free network, hardware impairment (HWI).
§ INTRODUCTION
In the fifth generation (5G) wireless systems, large-scale multiple-input and multiple-output (MIMO) systems have been harnessed for providing significantly increased throughput by employing a large number of antennas at the base station (BS) <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. However, they require numerous active radio frequency (RF) chains, which results in excessive hardware cost and energy consumption. Hence, some authors have focused their attention on the design of low-cost and energy-efficient solutions <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. As an attractive design alternative, holographic MIMO (HMIMO) solutions rely on an intelligent software reconfigurable paradigm in support of improved hardware efficiency and energy efficiency. They achieve this ambitious objective by utilizing a spatially near-continuous aperture and holographic radios having reduced power consumption and fabrication cost <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. The recent progress in channel modeling and efficient channel estimation conceived for HMIMO systems is reported in <cit.>.
Currently, the hardware architectures of HMIMO are mainly based on reconfigurable refractive surfaces (RRS) <cit.>, reconfigurable holographic surfaces (RHS) <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and dynamic metasurface antennas (DMA) <cit.>, <cit.>, <cit.>, <cit.>.
§.§.§ Reconfigurable refractive surfaces
It is infeasible to realize HMIMO schemes relying on a large number of conventional RF chains and active antennas due to the excessive power consumption <cit.>. As a remedy, Zeng et al. <cit.> employed a RRS illuminated by a single RF chain at the BS for creating an energy-efficient HMIMO. A substantial beamforming gain can be achieved by adjusting the coefficient of each RRS element. Both their theoretical analysis and simulation results show that the RRS-aided HMIMO has higher energy efficiency than the conventional MIMO systems using phased arrays.
§.§.§ Reconfigurable holographic surfaces
In <cit.>, <cit.>, <cit.>, Deng et al. proposed a RHS based architecture. Their digital beamformer relying on the state-of-the-art (SoA) zero-forcing (ZF) transmit precoding method and a holographic beamformer are jointly optimized at the BS and the RHS, respectively. The simulation results showed that the RHS-based hybrid beamformer achieves a higher sum-rate than the SoA massive MIMO based hybrid beamformer relying on phase shift arrays. To reduce the configuration complexity, Hu et al. <cit.> proposed a holographic beamforming scheme based on amplitude-controlled RHS elements having limited resolution. They showed that the holographic beamformer associated with 2-bit quantized amplitude resolution achieves a similar sum-rate as that associated with continuous amplitude values. Furthermore, in <cit.> the RHS beamformer is employed for ultra-dense low-Earth-orbit (LEO) satellite communications to compensate for the severe path-loss of satellite communications. The simulation results showed that the RHS provides a more cost-effective solution for pursuing higher data rate than the phased array architecture. The impact of the number of radiation elements on the sum-rate of RHS-based LEO satellite communications is further investigated in <cit.>. Explicitly, the authors theoretically analyzed the minimum number of RHS elements required for the sum-rate of the RHS-aided system to exceed that of the phased array system. Furthermore, Wei et al. <cit.> proposed a low-complexity ZF beamformer based on utilizing Neumann series expansion to replace the matrix inversion operation in multi-user RHS-based MIMO communications. To reduce the channel state information (CSI) estimation overhead, Wu et al. <cit.> minimized the transmit power of the RHS-based holographic MIMO based on a two-time scale beamformer. Specifically, this holographic beamformer was designed based on the statistical CSI, and then the instantaneous CSI of the equivalent channel links was estimated and utilized for designing the digital transmit precoding matrix.
§.§.§ Dynamic metasurface antennas
The dynamic metasurface antenna consists of multiple microstrips, each composed of a multitude of sub-wavelength, frequency-selective resonant metamaterial radiating elements, which can be employed to realize low-cost, power-efficient and compact antenna arrays <cit.>, <cit.>, <cit.>, <cit.>. In the DMA, the information is beamformed by linearly combining the radiation observations from all metamaterial elements in each microstrip. The mathematical model of DMA-based massive MIMO systems was first proposed by Shlezinger et al. in <cit.>, where the fundamental limits of DMA-aided uplink communications were also investigated. To approach these limits, an alternating optimization algorithm was employed for designing practical DMAs for arbitrary multipath channels and frequency selectivity profiles. Furthermore, the achievable sum-rate of DMA-based downlink massive MIMO systems was characterized in <cit.>, and an efficient alternating algorithm was proposed for dynamically configuring the DMA weights to maximize the achievable sum-rate. It was shown that the fundamental limits of DMA-based massive MIMO systems are comparable to those of conventional MIMO systems based on ideal antenna arrays. In <cit.>, You et al. employed DMAs to realize large-scale antenna arrays having reduced physical size, hardware cost, and power consumption. Specifically, the energy efficiency of the DMA-based massive MIMO system was maximized by the Dinkelbach transform, alternating optimization, and deterministic equivalent methods. The simulation results showed that higher energy efficiency can be achieved by the DMA-based massive MIMO architecture than by the conventional fully-digital and hybrid massive MIMO systems. Furthermore, Li et al. <cit.> proposed a power-efficient DMA, which operates at high frequencies and realizes extremely large-scale MIMO (XL-MIMO) schemes. The DMA-based XL-MIMO architecture is composed of the conventional digital beamformer and the DMA-based holographic beamformer. Specifically, in the holographic beamformer, the DMAs can be configured based on three different modes, including continuous-amplitude configurations, binary-amplitude configurations and Lorentzian-constrained phase configurations <cit.>. An efficient successive convex approximation based alternating direction method of multipliers (ADMM) aided algorithm was proposed for optimizing the digital beamformer and the DMA-based holographic beamformer in an alternating manner. The associated simulation results showed that the DMA-based array has lower hardware overhead and power consumption than the conventional hybrid massive MIMO beamformer.
The above holographic MIMO architectures are based on a single-layer metasurface. To further improve both the spatial-domain gain and the beamformer's degrees-of-freedom, An et al. <cit.> proposed an HMIMO system based on stacked intelligent metasurfaces (SIM), which is composed of stacked multi-layer reconfigurable surfaces that carry out advanced signal processing directly in the native electromagnetic (EM) wave regime without a digital beamformer. The gradient descent algorithm is employed for optimizing the phase shifts of the elements in all layers of the metasurfaces to maximize the sum-rate. The simulation results show that the SIM architecture outperforms its single-layer metasurface counterparts. Wave-based beamforming relying on the SIM has the benefit of simplifying the hardware architecture and improving the computational efficiency. Furthermore, an SIM can also be applied for performing interference cancellation in multiple access and for enabling integrated sensing and communications <cit.>.
However, the above contributions assume idealized perfect RF hardware both at the BSs and the UEs, which is impractical. Furthermore, the above HMIMO architectures are tailored for cellular networks, where the cell-edge user equipment (UEs) suffers from a low data-rate both due to the increased BS-UE distance and owing to the inter-cell interference. As a design alternative, the cell-free concept mitigates the signal path loss and the inter-cell interference by deploying a set of distributed access points (APs) for cooperatively serving UEs without cell boundaries <cit.>. Specifically, the ZF method and the minimum mean square error (MMSE) method can be employed for designing the beamformers in the centralized algorithm. To reduce the required overhead of CSI-sharing between APs, a distributed algorithm can be employed based on the maximum ratio transmission (MRT) or the maximum ratio combining (MRC) criteria, albeit at the cost of a performance degradation. Furthermore, in the distributed optimization algorithm of the cell-free system, the cooperation between APs is promising in terms of harnessing parallel computing resources and achieving almost the same data rate as the centralized algorithm <cit.>. To deal with these challenges, in this paper we propose an SIM assisted HMIMO architecture for cell-free networks, while directly considering the signal distortion resulting from the realistic hardware impairments (HWIs) of the RF chains both at the APs and the UEs. Against this background, our contributions are detailed as follows, while Table <ref> explicitly contrasts them to the literature at a glance.
* We conceive an SIM-based HMIMO architecture for the uplink of a cell-free network, where the SIM is employed at the APs to attain spectral- and energy-efficient information transfer. Distributed operation is employed for the SIM-based cell-free network. Specifically, at each AP the hybrid beamformer coefficients and the receiver combining (RC) vectors are jointly optimized to acquire a local estimate of the information arriving from the UEs. Afterwards, the central processing unit (CPU) uses the data detected by all APs to carry out the final detection of each UE's data.
* Since designing the hybrid beamformer coefficient of the SIM coefficients and the RC vectors at each AP is a non-convex problem, we propose a low-complexity layer-by-layer iterative optimization algorithm. Specifically, when the RC vectors are given, the coefficients of the intelligent metasurface are optimized on a layer-by-layer basis. By contrast, when the coefficients of all layers of the intelligent metasurface are given, the RC vectors are designed based on the MRC criterion. The SIM coefficients and the RC vectors of each AP are alternately optimized until reaching convergence.
* For recovering the information gleaned from the UEs, the weight vector of the CPU used for combining the local detections arriving from all APs is designed based on the MMSE criterion harnessed for maximizing the signal-to-noise-ratio (SNR) of the received signal. Furthermore, since having RF hardware impairments at APs and UEs is inevitable, we take them into account in the weight vector design by exploiting their statistics.
* Our numerical results show that the average achievable rate of our SIM-based HMIMO architecture in the cell-free network outperforms the conventional single-layer intelligent surface aided HMIMO. Furthermore, owing to the HWIs at APs and UEs, the achievable rate saturates at high SNRs without reaching its theoretical maximum.
The rest of this paper is organized as follows. In Section <ref>, we present the system model, while the beamformer design is described in Section <ref>. Our simulation results are presented in Section <ref>, while we conclude in Section <ref>.
Notations: Vectors and matrices are denoted by boldface lower and upper case letters, respectively, (·)^T, (·)^† and (·)^H represent the operation of transpose, conjugate and Hermitian transpose, respectively, ⊙ represents the Hadamard product operation, |a| and ∠ a denote the amplitude and angle of the complex scalar a, respectively, ‖𝐚‖ denotes the norm of the vector 𝐚, ℂ^m× n denotes the space of m× n complex-valued matrices, 0_N is the N×1 zero vector, 𝐈_N represents the N× N identity matrix, 𝐃𝐢𝐚𝐠{𝐚} denotes a diagonal matrix having the elements of 𝐚 in order, [𝐚]_n is the nth element of the vector 𝐚, and 𝒞𝒩(μ,Σ) is a circularly symmetric complex Gaussian random vector with mean μ and covariance matrix Σ.
§ SYSTEM MODEL
In this section, we describe our proposed SIM-aided HMIMO architecture, and then present the channel model of the cell-free network considered.
The system model of the SIM-aided HMIMO operating in narrowband cell-free networks is shown in Fig. <ref>[In this paper, we investigate the SIM-based hybrid beamforming design of narrowband cell-free networks. In wideband networks, both spatial-wideband effects and frequency-selective effects should be considered <cit.>. The SIM-based hybrid beamforming design of wideband networks considering the spatial-wideband effect and the frequency-selective effect is set aside for our future work.]. In contrast to the conventional cellular network, where a central AP is deployed in each cell to support the surrounding UEs, the cell-free network deploys multiple distributed APs. We focus our attention on the uplink scenario, where we consider L distributed multi-antenna APs and K single-antenna UEs. AP-l (l=1,2,⋯,L) has M antennas and a stacked intelligent metasurface containing T_l layers, with each layer composed of N reconfigurable passive elements. Specifically, the information is sent from the K UEs to L APs. In each AP, the received RF signals go through the passive beamformer of the multi-layer SIM, and then they are converted to baseband signals via the RF chains. To recover the desired information, we rely on distributed operation in order to reduce the detection complexity, where each AP locally detects the data of the UEs supported by it, and then the CPU fuses the data detected at all APs via the fronthaul links to carry out the final detection of the desired information.
§.§ SIM-Aided Holographic MIMO Architecture
Before describing the SIM-aided holographic MIMO architecture, we briefly review the conventional fully-digital massive MIMO architecture and the hybrid digital-analog massive MIMO architecture. In the conventional fully-digital massive MIMO architecture, the number of RF chains is the same as that of the transmit antennas. To reduce the power consumption of the RF chains, a hybrid digital-analog beamforming architecture is formed, where only a few RF chains are employed, and a phase shift array (PSA) is harnessed between the RF chains and the AP antennas.
The SIM-aided HMIMO architecture of cell-free networks is shown in Fig. <ref>. Firstly, the signals received at each AP go through an SIM-based beamformer. The output signals of the SIM are received by the antennas, and then converted to baseband signals via multiple RF chains. The SIM is constituted by a closed vacuum container holding several layers of stacked reconfigurable metasurfaces, each of which is composed of a large number of passive densely-spaced elements <cit.>, <cit.>. By appropriately configuring the phase shifts of the elements in each layer of the metasurface with the aid of a software controller, such as a field programmable gate array (FPGA) <cit.>, <cit.>, a substantial beamforming gain may be achieved. In practice, the SIM is enclosed in a supporting structure surrounded by wave-absorbing materials, to prevent interference from undesired diffraction, scattering, and environmental noise <cit.>, <cit.>. The AP antennas are M_x× M_y uniform rectangular planar arrays (URPA), while the intelligent surfaces in each layer are N_x× N_y URPAs, satisfying M=M_xM_y and N=N_xN_y. We denote the size of each reconfigurable element as δ_x×δ_y, and the distances between the adjacent antennas are d_x and d_y along the x and y axes, respectively.
Explicitly, we contrast our SIM-aided HMIMO to various MIMO technologies in Table <ref>.
§.§ Channel Model
In this section, we describe the channel between the UEs and the AP antennas. As shown in Fig. <ref>, we have to consider the channel between the SIM and the antennas at the AP as well as the channel between the UEs and the SIM at the AP.
§.§.§ Channel links between the SIM and the antennas at APs
We assume that the antenna array and the SIM layers are parallel to the xoy plane of the Cartesian coordinate system. We denote the Cartesian coordinates of the M antennas at the lth AP as 𝐩_m^(l,0)=(x_m^(l,0),y_m^(l,0),z^(l,0)), m=1,2,⋯,M, and those of the N elements in the SIM layer-t as 𝐩_n^(l,t)=(x_n^(l,t),y_n^(l,t),z^(l,t)), n=1,2,⋯,N, with t=1,2,⋯,T_l. We employ the near-field model to describe the channel response between the antennas and the SIM layers.
As shown in Fig. <ref>, we denote the channel response between AP-l of the SIM layer-1 and the antennas as 𝐀^(l,1)∈ℂ^M× N, with the (m,n)th entry a_m,n^(l,1) representing the response of the nth element in the SIM layer-1 to the mth antenna, while a_m,n^(l,1) is given by
a_m,n^(l,1)=√(Γ_m,n^(l,1))e^-j(2π/λ)‖𝐩_n^(l,1)-𝐩_m^(l,0)‖.
In (<ref>), λ is the carrier wavelength, ‖𝐩_n^(l,1)-𝐩_m^(l,0)‖ is the distance between the nth element in the SIM layer-1 and the mth antenna, while Γ_m,n^(l,1) is the power radiated from the nth element in the SIM layer-1 to the mth antenna, which can be represented as <cit.>
Γ_m,n^(l,1)=∬_𝒟_nκ((z^(l,1)-z^(l,0))/‖𝐩_n^(l,1)-𝐩_m^(l,0)‖)^κ/2/(4π‖𝐩_n^(l,1)-𝐩_m^(l,0)‖^2)dxdy,
where κ is the directional radiation gain of the reconfigurable elements and the integration region is formulated as:
𝒟_n= {(x,y)∈ℝ^2: x_n-δ_x/2≤ x≤ x_n+δ_x/2, y_n-δ_y/2≤ y≤ y_n+δ_y/2}.
Next, we focus our attention on the channel model between the SIM layers. For AP-l we denote the response of the channel spanning from the SIM layer-t to layer-(t-1) as 𝐀^(l,t)∈ℂ^N× N (t=2,3,⋯,T_l) with the (n_2,n_1)th entry representing the response from the n_1th element in the SIM layer-t to the n_2th element in the SIM layer-(t-1), given by
a_n_2,n_1^(l,t)=√(Γ_n_2,n_1^(l,t))e^-j(2π/λ)‖𝐩_n_1^(l,t)-𝐩_n_2^(l,t-1)‖.
In (<ref>), ‖𝐩_n_1^(l,t)-𝐩_n_2^(l,t-1)‖ is the distance between the n_1th element in the SIM layer-t and the n_2th element in the SIM layer-(t-1), while Γ_n_2,n_1^(l,t) is the power radiated from the n_1th element in the SIM layer-t to the n_2th element in the SIM layer-(t-1), which can be represented as <cit.>
Γ_n_2,n_1^(l,t)=∬_𝒟_nκ((z^(l,t)-z^(l,t-1))/‖𝐩_n_1^(l,t)-𝐩_n_2^(l,t-1)‖)^κ/2/(4π‖𝐩_n_1^(l,t)-𝐩_n_2^(l,t-1)‖^2)dxdy.
Furthermore, the phase shifts of the reconfigurable elements in each layer can be adjusted to achieve a beamforming gain. In AP-l, we denote the phase shift of the nth element in layer-t as θ_n^(l,t). Upon considering the signal power attenuation resulting from the EM waves travelling through a SIM, we denote the radiation coefficient of the nth element in layer-t as υ_n^(l,t) satisfying υ_n^(l,t)∈[0,1]. Thus, by defining Υ^(l,t)=𝐃𝐢𝐚𝐠{υ_1^(l,t),υ_2^(l,t),⋯,υ_N^(l,t)} and Θ^(l,t)=𝐃𝐢𝐚𝐠{e^jθ_1^(l,t),e^jθ_2^(l,t),⋯,e^jθ_N^(l,t)}, the response matrix of the SIM's layer-t can be represented as
Ξ^(l,t)=√(Υ^(l,t))Θ^(l,t)=𝐃𝐢𝐚𝐠{√(υ_1^(l,t))e^jθ_1^(l,t),√(υ_2^(l,t))e^jθ_2^(l,t),⋯,√(υ_N^(l,t))e^jθ_N^(l,t)}.
Therefore, the equivalent channel response of the SIM-based beamformer at the lth AP, denoted as 𝐆^(l)∈ℂ^M× N, is[In this paper, we assume that each SIM layer can be configured independently without mutual coupling. In reality, having mutual coupling among the metamaterial elements is inevitable due to their dense packaging. The holographic beamforming design considering the effect of mutual coupling is set aside for our future work.]
𝐆^(l)=𝐀^(l,1)Ξ^(l,1)𝐀^(l,2)Ξ^(l,2)⋯𝐀^(l,T_l)Ξ^(l,T_l)=𝐀^(l,1)√(Υ^(l,1))Θ^(l,1)𝐀^(l,2)√(Υ^(l,2))Θ^(l,2)⋯𝐀^(l,T_l)√(Υ^(l,T_l))Θ^(l,T_l).
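To illustrate how the cascaded response 𝐆^(l) can be assembled numerically, the following NumPy sketch builds the inter-layer matrices 𝐀^(l,t) from the formulas above; the element coordinates, the midpoint-rule approximation of the power integral Γ, the directional gain κ=2 and the random phase shifts are all simplifying assumptions of ours.

```python
import numpy as np

def layer_coords(nx, ny, dx, dy, z):
    """Centered uniform rectangular array of nx*ny elements in the plane z."""
    xs = (np.arange(nx) - (nx - 1) / 2) * dx
    ys = (np.arange(ny) - (ny - 1) / 2) * dy
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return np.stack([X.ravel(), Y.ravel(), np.full(nx * ny, z)], axis=1)

def propagation_matrix(rx, tx, wavelength, area, kappa=2.0):
    """Near-field response from every element of `tx` to every element of `rx`;
    the radiated-power integral is approximated by its midpoint value times
    the element area (a simplifying assumption of this sketch)."""
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    cos_angle = np.abs(rx[0, 2] - tx[0, 2]) / d
    gamma = area * kappa * cos_angle**(kappa / 2) / (4 * np.pi * d**2)
    return np.sqrt(gamma) * np.exp(-1j * 2 * np.pi / wavelength * d)

rng = np.random.default_rng(0)
lam, area = 0.01, 0.005**2                     # 30 GHz carrier, lambda/2 elements
antennas = layer_coords(2, 2, lam / 2, lam / 2, z=0.0)
layers = [layer_coords(4, 4, lam / 2, lam / 2, z=-(t + 1) * 3 * lam)
          for t in range(3)]                   # T_l = 3 metasurface layers

# Equivalent SIM response G = A1 Xi1 A2 Xi2 A3 Xi3, with unit radiation
# coefficients and random phase shifts as placeholders
G = np.eye(antennas.shape[0])
prev = antennas
for layer in layers:
    A = propagation_matrix(prev, layer, lam, area)
    Xi = np.diag(np.exp(1j * 2 * np.pi * rng.random(layer.shape[0])))
    G = G @ A @ Xi
    prev = layer
print(G.shape)   # (M, N): maps the last-layer input to the antenna outputs
```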
§.§.§ Channel links between UEs and APs
We denote the large-scale fading and the small-scale fading between UE-k and the SIM at AP-l as ϱ_k^(l) and 𝐡_k^(l)∈ℂ^N×1, respectively. We assume the knowledge of 𝐡_1^(l),𝐡_2^(l),⋯,𝐡_K^(l) can be attained at the AP-l. In practical metasurface-based holographic MIMO systems, this has to be acquired by CSI acquisition methods, such as the subspace-based channel estimator of <cit.> or the sparse channel estimator of <cit.>. We adopt the millimeter-wave (mmWave) channel model to characterize the propagation environment between the APs and each user[Note that here we consider a mmWave channel model as an example for describing the signal propagation between the APs and the UEs. But indeed, our proposed SIM-based beamforming architecture is also applicable to other channel models.]. Specifically, the mmWave uplink channel of UE-k is assumed to be the superposition of all propagation paths that are scattered in ζ_c clusters and each cluster contributes ζ_p paths, expressed as <cit.>
𝐡_k^(l)=√(1/ζ_cζ_p)∑_c=1^ζ_c∑_p=1^ζ_pα_c,p^(l,k)𝐟(ψ_c,p^(l,k),φ_c,p^(l,k)),
where α_c,p^(l,k) is the complex gain of the pth path in the cth cluster following α_c,p^(l,k)∼𝒞𝒩(0,1), and 𝐟(ψ_c,p^(l,k),φ_c,p^(l,k)) is
𝐟(ψ_c,p^(l,k),φ_c,p^(l,k))=[1,⋯,e^j(2π/λ)(δ_xn_xsinψ_c,p^(l,k)cosφ_c,p^(l,k)+δ_yn_ysinψ_c,p^(l,k)sinφ_c,p^(l,k)),⋯,e^j(2π/λ)(δ_x(N_x-1)sinψ_c,p^(l,k)cosφ_c,p^(l,k)+δ_y(N_y-1)sinψ_c,p^(l,k)sinφ_c,p^(l,k))]^H,
where ψ_c,p^(l,k) and φ_c,p^(l,k) are the elevation and azimuth angles of departure (AoD) from UE-k to the SIM in AP-l for the pth path of the cth cluster, respectively.
Within the cth cluster, the random variables ψ_c,p^(l,k) and φ_c,p^(l,k) have uniformly distributed mean values μ_ψ_c^k and μ_φ_c^k, respectively, and angular spreads (i.e., standard deviations) of σ_ψ_c^k and σ_φ_c^k, respectively.
The signal received by the antennas at AP-l is given by
𝐲^(l)=∑_k=1^K(√(ρ_kε_u_kε_𝐯^(l))𝐪_k^(l)s_k+√(ρ_k(1-ε_u_k)ε_𝐯^(l))𝐪_k^(l)u_k+√(ρ_k(1-ε_𝐯^(l)))𝐪_k^(l)⊙𝐯_k^(l))+𝐰^(l),
where 𝐪_k^(l)=√(ϱ_k^(l))𝐆^(l)𝐡_k^(l) is the equivalent channel spanning from the UE-k to the antennas at AP-l, s_k is the desired information of UE-k, ρ_k denotes the transmit power of UE-k, and 𝐰^(l)∼𝒞𝒩(0_M,σ_w^(l)^2𝐈_M) is the additive noise at AP-l. Furthermore, u_k∼𝒞𝒩(0,1) represents the contamination of the information symbol s_k due to HWIs at UE-k, resulting from the power amplifier non-linearities, amplitude/phase imbalance in the In-phase/Quadrature mixers, phase noise in the local oscillator, sampling jitter and finite-resolution quantization in the analog-to-digital converters. Furthermore, 𝐯_k^(l)∼𝒞𝒩(0_M,𝐈_M) is the distortion of the information symbol s_k due to HWIs of the RF chains at AP-l. Finally, ε_u_k and ε_𝐯^(l) represent the hardware quality factors of UE-k and AP-l satisfying 0≤ε_u_k≤1 and 0≤ε_𝐯^(l)≤1, respectively <cit.>. Explicitly, a hardware quality factor of 1 indicates that the hardware is ideal, while 0 means that the hardware is completely inadequate.
§ BEAMFORMING DESIGN
As shown in the SIM-based cell-free network of Fig. <ref>, we rely on the distributed processing philosophy for reducing the required overhead of CSI sharing between APs. Specifically, for the information transmitted from the UE-k, each AP carries out a local detection based on its received signal, where the SIM coefficient matrices and the RC vectors of each AP are optimized based on the corresponding local CSI. Then, the CPU gathers the locally detected signals from all APs to generate a final estimate of the UE-k information. In the following, we first present the beamformer design in terms of the SIM coefficient matrices and combining vectors at each AP, followed by the optimization of the weight vectors at the CPU for generating the final detection of the UE information.
§.§ SIM Coefficient Matrices and RC Vectors Design at APs
To recover the information transmitted from UE-k, we denote the corresponding RC vector at AP-l as 𝐛_k^(l) satisfying ‖𝐛_k^(l)‖^2=1. Thus, the locally recovered information of s_k at AP-l, denoted as ŝ_k^(l), is given by
ŝ_k^(l)=𝐛_k^(l)H𝐲^(l)=∑_k=1^K(√(ρ_kε_u_kε_𝐯^(l))𝐛_k^(l)H𝐪_k^(l)s_k+√(ρ_k(1-ε_u_k)ε_𝐯^(l))𝐛_k^(l)H𝐪_k^(l)u_k+√(ρ_k(1-ε_𝐯^(l)))𝐛_k^(l)H(𝐪_k^(l)⊙𝐯_k^(l)))+𝐛_k^(l)H𝐰^(l),
and the corresponding signal-to-interference-plus-noise ratio (SINR), denoted as γ_k^(l), can be formulated as
γ_k^(l)=ρ_kε_u_kε_𝐯^(l)|𝐛_k^(l)H𝐪_k^(l)|^2/(ς_k,k+∑_k'=1,k'≠ k^K(ρ_k'ε_u_k'ε_𝐯^(l)|𝐛_k^(l)H𝐪_k'^(l)|^2+ς_k,k')+σ_w^(l)^2).
In (<ref>) ς_k,k' represents the interference resulting from the distortion imposed on UE-k' signal for the recovery of s_k due to the HWIs, given by
ς_k,k'=ρ_k'(1-ε_u_k')ε_𝐯^(l)|𝐛_k^(l)H𝐪_k'^(l)|^2+ρ_k'(1-ε_𝐯^(l))𝐛_k^(l)H((𝐪_k'^(l)𝐪_k'^(l)H)⊙𝐈_M)𝐛_k^(l).
For a specific UE-k, we aim for jointly optimizing the active beamformers 𝐛_1^(l),𝐛_2^(l),⋯,𝐛_K^(l) and the passive SIM-based beamformers Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l) to maximize the SINR of γ_k^(l) with l=1,2,⋯,L. The corresponding optimization problem can be formulated as
(P1) max_{𝐛_1^(l),⋯,𝐛_K^(l),Θ^(l,1),⋯,Θ^(l,T_l)} γ_k^(l), l=1,2,⋯,L
s.t. Θ^(l,t)Θ^(l,t)H=𝐈_N, t=1,2,⋯,T_l,
‖𝐛_k'^(l)‖^2=1, k'=1,2,⋯,K.
Since the L APs jointly support all K UEs, focusing the beamforming of all APs on a specific UE-k to maximize the SINR γ_k^(l) (l=1,2,⋯,L) would come at the cost of disregarding the other K-1 UEs. The main advantage of the cell-free architecture is that of reducing the path-loss from each AP to its nearest UE. Thus, at each AP we aim for maximizing the channel gain between the AP and its nearest user via the SIM-based holographic beamformer, while the digital beamformer is optimized for all K UEs to get their local information estimates. Specifically, we denote the AP set associated with the SIM-based beamforming focused on UE-k as ℒ_k={l:D_k^(l)≤ D_k^(l'),l'=1,2,⋯,L} with D_k^(l) being the distance from UE-k to AP-l. Therefore, problem (P1) can be reformulated as
(P2) max_{𝐛_1^(l),⋯,𝐛_K^(l),Θ^(l,1),⋯,Θ^(l,T_l)} γ_k^(l), l∈ℒ_k
s.t. Θ^(l,t)Θ^(l,t)H=𝐈_N, t=1,2,⋯,T_l,
‖𝐛_k'^(l)‖^2=1, k'=1,2,⋯,K.
Since (P2) is a non-convex problem, we can decouple it into a pair of sub-problems and optimize them iteratively as follows.
§.§.§ Design of RC vectors at APs
Once the SIM-based beamformer Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l) is given, we can estimate the equivalent channels impinging from all K UEs to AP-l, i.e. 𝐪_1^(l),𝐪_2^(l),⋯,𝐪_K^(l). Then, the optimal active beamformer 𝐛_k'^(l) designed for the recovery of the information s_k' at the AP-l can be obtained by the MRC criterion as
𝐛_k'^(l)=𝐪_k'^(l)/‖𝐪_k'^(l)‖, k'=1,2,⋯,K.
§.§.§ Design of SIM coefficient matrices at APs
When the active beamformer 𝐛_k^(l) is given, problem (P2) reduces to optimizing the channel gain ‖𝐆^(l)𝐡_k^(l)‖^2, i.e.,
(P3) max_{Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l)} ‖𝐆^(l)𝐡_k^(l)‖^2, l∈ℒ_k
s.t. Θ^(l,t)Θ^(l,t)H=𝐈_N, t=1,2,⋯,T_l.
Since the sub-problem (P3) is still a non-convex one, we propose a layer-by-layer iterative optimization algorithm. Firstly, we optimize Θ^(l,1) by fixing all the other T_l-1 layers of the SIM. Therefore, the channel gain can be represented as |𝐛_k^(l,1)HΘ^(l,1)𝐡_k^(l,1)| in conjunction with
𝐛_k^(l,1)H=𝐛_k^(l)H𝐀^(l,1)√(Υ^(l,1))
and
𝐡_k^(l,1)=𝐀^(l,2)√(Υ^(l,2))Θ^(l,2)⋯𝐀^(l,T_l)√(Υ^(l,T_l))Θ^(l,T_l)𝐡_k^(l),
and the passive beamformer Θ^(l,1) can be optimized as
Θ^(l,1)=𝐃𝐢𝐚𝐠{e^j(∠𝐛_k^(l,1)-∠𝐡_k^(l,1))}.
Afterwards, we optimize Θ^(l,2) by fixing all the other T_l-1 layers of the SIM, with the channel gain represented as |𝐛_k^(l,2)HΘ^(l,2)𝐡_k^(l,2)| along with
𝐛_k^(l,2)H=𝐛_k^(l)H𝐀^(l,1)√(Υ^(l,1))Θ^(l,1)𝐀^(l,2)√(Υ^(l,2))
and
𝐡_k^(l,2)=𝐀^(l,3)√(Υ^(l,3))Θ^(l,3)⋯𝐀^(l,T_l)√(Υ^(l,T_l))Θ^(l,T_l)𝐡_k^(l),
and the passive beamformer Θ^(l,2) can be optimized as
Θ^(l,2)=𝐃𝐢𝐚𝐠{e^j(∠𝐛_k^(l,2)-∠𝐡_k^(l,2))}.
Then, Θ^(l,3),Θ^(l,4),⋯,Θ^(l,T_l) can be optimized in turn.
The details of the layer-by-layer iterative optimization of the hybrid beamformer are presented in Algorithm <ref>; a compact numerical sketch is given below.
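The sketch mirrors the layer-by-layer co-phasing updates together with the MRC refresh of the combining vector; the interface (lists of propagation matrices A and radiation matrices Ups) and the fixed iteration count are our own illustrative conventions.

```python
import numpy as np

def optimize_sim(A, Ups, h_k, b, iters=20):
    """Layer-by-layer sketch of the SIM phase updates (cf. Algorithm 1).

    A[0]   : M x N response from SIM layer 1 to the AP antennas,
    A[t]   : N x N response from layer t+1 into layer t, for t >= 1,
    Ups[t] : diagonal radiation-coefficient matrix of layer t+1,
    h_k    : channel from the nearest UE to the last SIM layer,
    b      : initial combining vector at the AP (unit norm)."""
    T = len(A)
    Theta = [np.eye(A[t].shape[1], dtype=complex) for t in range(T)]
    for _ in range(iters):
        for t in range(T):
            left = b.conj()                       # row vector b^H ...
            for s in range(t):
                left = left @ A[s] @ np.sqrt(Ups[s]) @ Theta[s]
            left = left @ A[t] @ np.sqrt(Ups[t])  # ... ending just before layer t
            right = h_k                           # cascade from the UE side
            for s in range(T - 1, t, -1):
                right = A[s] @ np.sqrt(Ups[s]) @ Theta[s] @ right
            # Co-phasing: theta_n = angle(b_k^{(l,t)})_n - angle(h_k^{(l,t)})_n
            Theta[t] = np.diag(np.exp(1j * (np.angle(left.conj())
                                            - np.angle(right))))
        # MRC refresh of the combining vector for the updated SIM response
        G = np.eye(A[0].shape[0], dtype=complex)
        for s in range(T):
            G = G @ A[s] @ np.sqrt(Ups[s]) @ Theta[s]
        q = G @ h_k
        b = q / np.linalg.norm(q)
    return Theta, b
```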
§.§ Weight Vector Design at the CPU
The CPU computes its estimate as a linear combination of the local estimates as <cit.>
ŝ_k=∑_l=1^Lη_k^(l)†ŝ_k^(l),
where η_k=[η_k^(1),η_k^(2),⋯,η_k^(L)]^T∈ℂ^L×1 is the weight vector that the CPU assigns to the local estimates of the signal arriving from UE-k, satisfying ‖η_k‖^2=1. Therefore, according to (<ref>), ŝ_k can be written as in (<ref>).
The SINR of ŝ_k, denoted as γ_k, can be derived as
γ_k=ρ_k|η_k^H𝐳_k,k|^2/η_k^H𝐑_kη_k,
where 𝐑_k is given by
𝐑_k=ρ_k(𝐳_u_k,k𝐳_u_k,k^H+(𝐳_𝐯_k,k𝐳_𝐯_k,k^H)⊙𝐈_L)+∑_k'=1,k'≠ k^Kρ_k'(𝐳_k,k'𝐳_k,k'^H+𝐳_u_k,k'𝐳_u_k,k'^H+(𝐳_𝐯_k,k'𝐳_𝐯_k,k'^H)⊙𝐈_L)+𝐖
in conjunction with
𝐳_k,i=[√(ε_u_kε_𝐯^(1))/‖𝐪_k^(1)‖ 𝐪_k^(1)H𝐪_i^(1); ⋯; √(ε_u_kε_𝐯^(L))/‖𝐪_k^(L)‖ 𝐪_k^(L)H𝐪_i^(L)],
𝐳_u_k,i=[√((1-ε_u_k)ε_𝐯^(1))/‖𝐪_k^(1)‖ 𝐪_k^(1)H𝐪_i^(1); ⋯; √((1-ε_u_k)ε_𝐯^(L))/‖𝐪_k^(L)‖ 𝐪_k^(L)H𝐪_i^(L)],
𝐳_𝐯_k,i=[√(1-ε_𝐯^(1))/‖𝐪_k^(1)‖ (𝐪_k^(1)†⊙𝐪_i^(1)); ⋯; √(1-ε_𝐯^(L))/‖𝐪_k^(L)‖ (𝐪_k^(L)†⊙𝐪_i^(L))],
and
𝐖=𝐃𝐢𝐚𝐠{σ_w^(1)^2,σ_w^(2)^2,⋯,σ_w^(L)^2}.
Based on the generalized Rayleigh quotient, the maximum of γ_k in (<ref>) can be attained as follows:
γ_k=ρ_k𝐳_k,k^H𝐑_k^-1𝐳_k,k=ρ_k𝐳_k,k^H(ρ_k(𝐳_u_k,k𝐳_u_k,k^H+(𝐳_𝐯_k,k𝐳_𝐯_k,k^H)⊙𝐈_L)+∑_k'=1,k'≠ k^Kρ_k'(𝐳_k,k'𝐳_k,k'^H+𝐳_u_k,k'𝐳_u_k,k'^H+(𝐳_𝐯_k,k'𝐳_𝐯_k,k'^H)⊙𝐈_L)+𝐖)^-1𝐳_k,k,
by setting
η_k=𝐑_k^-1𝐳_k,k.
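The CPU fusion step can be sketched as follows; the randomly generated statistics in the toy check are assumptions used only to verify that η_k=𝐑_k^-1𝐳_k,k attains the generalized-Rayleigh-quotient maximum.

```python
import numpy as np

def cpu_fusion_weights(z_kk, R_k):
    """Combining weights across APs: eta_k proportional to R_k^{-1} z_{k,k},
    which maximizes gamma_k = rho_k |eta^H z|^2 / (eta^H R eta)."""
    eta = np.linalg.solve(R_k, z_kk)      # avoids forming R_k^{-1} explicitly
    return eta / np.linalg.norm(eta)      # unit-norm constraint on eta_k

# Toy check with L = 3 APs (randomly generated statistics, an assumption)
rng = np.random.default_rng(1)
L = 3
z = rng.standard_normal(L) + 1j * rng.standard_normal(L)
B = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
R = B @ B.conj().T + np.eye(L)            # Hermitian positive definite
eta = cpu_fusion_weights(z, R)
gamma = np.abs(eta.conj() @ z) ** 2 / np.real(eta.conj() @ R @ eta)
print(gamma, np.real(z.conj() @ np.linalg.solve(R, z)))  # equal, up to rho_k
```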
§.§ Computational Complexity
§.§.§ Computational complexity of the hybrid beamformer at each AP
The computational complexity of the proposed layer-by-layer iterative hybrid beamformer at AP-l (l∈ℒ_k) in Algorithm <ref> depends both on the number of iterations in the alternating maximization, which is denoted as τ, and on the computational complexity required in each iteration. In each iteration, the sub-problem of optimizing the combining vectors 𝐛_1^(l),𝐛_2^(l),⋯,𝐛_K^(l) is solved in lines 3 and 4 of Algorithm <ref>, while the sub-problem of optimizing the coefficients of all layers in the SIM, Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l), is evaluated in lines 5 to 9 of Algorithm <ref>. The complexity of additions is neglected, since addition is easily implemented in hardware. Hence, we quantify the computational complexity by counting the number of floating-point multiplication operations that are required. Specifically, the calculation of 𝐪_k^(l) in line 3 of Algorithm <ref> requires
c_1'=(T_l-1)(N^2+2N)+MN+2N+M
floating-point multiplication operations by exploiting the property that Υ^(l,1),Υ^(l,2),⋯,Υ^(l,T_l) and Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l) are diagonal matrices. The calculation of 𝐛_1^(l),𝐛_2^(l),⋯,𝐛_K^(l) in line 4 of Algorithm <ref> requires
c_2'=2KM
floating-point multiplications. Furthermore, the calculation of 𝐛_k^(l,t) and 𝐡_k^(l,t) in line 6 and 7 of Algorithm <ref> requires
c_3,t1'=(t-1)(N^2+2N)+(M+1)N
and
c_3,t2'=(T_l-t)(N^2+2N)
floating-point multiplications, respectively. Line 8 of Algorithm <ref> involves no floating-point multiplications. Thus, the loop spanning lines 5 to 9 of Algorithm <ref> entails
c_3'= ∑_t=1^T_l(c_3,t1'+c_3,t2')
= (T_l^2-T_l)(N^2+2N)+T_l(M+1)N
floating-point multiplications. Therefore, according to (<ref>), (<ref>) and (<ref>), the total number of floating-point multiplications in each iteration is
c'= c_1'+c_2'+c_3' = (T_l^2-1)N^2+(2T_l^2+T_lM+T_l+M)N+(2K+1)M.
Hence for N>M and N>K, the overall computational complexity of our proposed layer-by-layer iterative optimization of the hybrid beamformer at all APs is
𝒪(τ(∑_l=1^LT_l^2)N^2).
This shows that the proposed layer-by-layer iterative optimization algorithm conceived for the SIM-based hybrid beamformer is of polynomial time-complexity with respect to the number of APs, the number of SIM layers at each AP, and the number of reconfigurable elements in each SIM layer.
§.§.§ Computational complexity of the CPU processing
To recover the information s_k, the computational complexity at the CPU includes the calculation of the matrix 𝐑_k defined in (<ref>), of the linear combination vector η_k defined in (<ref>) and of the information recovery ŝ_k defined in (<ref>). Specifically, the calculation of 𝐑_k requires
c_1”=(L^2+2L+3M)K
floating-point multiplication operations according to (<ref>), (<ref>), (<ref>) and (<ref>). The calculation of the linear combining vector η_k requires
c_2”=1/3(L^3-L)+L^2
floating-point multiplication operations by employing the Cholesky decomposition of the Hermitian positive-definite matrix 𝐑_k^-1. Furthermore, the calculation of the information recovery ŝ_k requires
c_3”=L
floating-point multiplication operations according to (<ref>). Therefore, according to (<ref>), (<ref>) and (<ref>), the total number of floating-point multiplications required for recovering the information s_k is
c”= c_1”+c_2”+c_3”
= 1/3L^3+(K+1)L^2+(2K+2/3)L+3MK.
Therefore, the overall computational complexity of recovering the information s_1,s_2,⋯,s_K at the CPU is
𝒪(KL^3)+𝒪(K^2L^2)+𝒪(K^2M).
This shows that the CPU processing conceived for information recovery in the cell-free systems is of polynomial time-complexity with respect to the number of UEs, the number of APs and the number of RF chains at each AP.
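The closed-form counts above are easily evaluated; the following helper (parameter values purely illustrative) makes the dominant scalings explicit.

def flops_ap_iteration(T_l, N, M, K):
    # per-iteration multiplication count c' of the layer-by-layer beamformer at one AP
    return (T_l**2 - 1) * N**2 + (2 * T_l**2 + T_l * M + T_l + M) * N + (2 * K + 1) * M

def flops_cpu_per_ue(L, M, K):
    # multiplication count c'' for recovering one UE's symbol at the CPU
    return L**3 / 3 + (K + 1) * L**2 + (2 * K + 2 / 3) * L + 3 * M * K

print(flops_ap_iteration(T_l=4, N=64, M=4, K=8))  # dominated by the T_l^2 N^2 term
print(flops_cpu_per_ue(L=16, M=4, K=8))  # cf. the O(KL^3)+O(K^2L^2)+O(K^2M) scaling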
§.§ Convergence Analysis
First, in lines 3 and 4 of Algorithm <ref> conceived for the digital beamforming design, the received SINR γ_k is non-decreasing after the digital beamformer 𝐛_1^(l),𝐛_2^(l),⋯,𝐛_K^(l) has been optimized, given the holographic beamformer
Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l), i.e.,
γ_k(𝐛̈^(l,i+1),Θ̈^(l,i))≥γ_k(𝐛̈^(l,i),Θ̈^(l,i)),
where γ_k(𝐛̈^(l,i_1),Θ̈^(l,i_2)) represents the received SINR based on the digital beamformer 𝐛_1^(l),𝐛_2^(l),⋯,𝐛_K^(l) in the i_1th iteration and on the holographic beamformer
Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l) in the i_2th iteration. Secondly, in lines 5 to 9 of Algorithm <ref>, the received SINR γ_k is non-decreasing after the holographic beamformer Θ^(l,1),Θ^(l,2),⋯,Θ^(l,T_l) has been optimized, given the digital beamformer
𝐛_1^(l),𝐛_2^(l),⋯,𝐛_K^(l), i.e.,
γ_k(𝐛̈^(l,i),Θ̈^(l,i+1))≥γ_k(𝐛̈^(l,i),Θ̈^(l,i)).
Therefore, (<ref>) and (<ref>) imply that in each iteration of the proposed layer-by-layer iterative optimization algorithm at each AP, the objective function value of the received SINR γ_k is non-decreasing. Additionally, the objective function value sequence obtained throughout the iteration process is monotonic and it is also bounded, hence the overall algorithm is guaranteed to converge.
§ NUMERICAL AND SIMULATION RESULTS
In this section, the average achievable rate of the SIM-based holographic MIMO of cell-free networks is quantified. A total of L APs are uniformly distributed in the area 𝒮 with Cartesian coordinates {(x,y):-100m≤ x≤100m,-100m≤ y≤100m}. Furthermore, K UEs are distributed uniformly at random in the area 𝒮. We employ a distance-dependent path-loss model for the UE-AP channel, given by ϱ_k^(l)=min{C_0,C_0(d_k^(l))^-β}, where d_k^(l) is the length of the link spanning from AP-l to UE-k, β is the path loss exponent, and C_0 is the path loss at the reference distance of 1 meter <cit.>. Referring to <cit.>, <cit.>, the simulation parameters are those given in Table <ref>, unless otherwise specified.
Fig. <ref> compares the average achievable rate R=1/K∑_k=1^KR_k=1/K∑_k=1^Klog_2(1+γ_k) versus the transmit power ρ for different numbers of APs in cell-free networks, namely L=16 and L=64. Observe from Fig. <ref> that the average achievable rate can be improved by increasing the number of APs, since the average distance between the APs and UEs is thereby reduced. When the hardware is non-ideal, i.e. ε<1, the average achievable rate tends to a constant value as the transmit power ρ increases. Furthermore, as seen in Fig. <ref>, a marked performance degradation occurs when the HWIs are ignored during information recovery, especially as the number of APs increases. Fig. <ref> compares the average achievable rate R versus the transmit power ρ for different numbers of SIM layers at each AP, where the number N of elements in each SIM layer is set to N=16×16, N=8×8 and N=4×4 for T_l = 1, 2 and 4 layers, respectively, ensuring that the same total number of SIM elements is employed. Observe in Fig. <ref> that the average achievable rate improves upon increasing the number of SIM layers. This motivates future research on SIM design to determine how many layers are optimal in terms of maximizing the performance gain for a given total number of SIM elements.
To provide further insights, Fig. <ref> characterizes the average achievable rate of both our SIM-based hybrid beamforming architecture and of the conventional cell-free full-digital beamforming architecture, for hardware quality factors of ε=1, 1-10^-4 and 1-10^-2, respectively. For a perfect hardware quality factor, i.e., ε=1, Fig. <ref> (a) shows that the SIM-based hybrid beamformer attains a higher average achievable rate than the conventional full-digital beamformer, even when the SIM suffers from signal radiation attenuation associated with power radiation coefficients of υ_n^(l,t)=0.5. This is a benefit of the beamforming gain attained by the configuration of the SIM elements. Furthermore, the average achievable rate can be improved upon increasing the number of SIM layers. When the hardware quality is imperfect, i.e., ε<1, Fig. <ref> (b) and Fig. <ref> (c) illustrate that the SIM-based hybrid beamformer outperforms the full-digital beamformer in the low-SNR region, while the achievable rate in the high-SNR region is limited by the hardware quality. Moreover, in the high-SNR region, increasing the number of SIM layers even degrades the achievable rate when the SIM power radiation coefficients satisfy υ_n^(l,t)<1, since the signals are attenuated in each layer of the SIM and this cannot be compensated by the configuration of the SIM elements due to the limitation of the hardware quality. It can be observed that although the SIM can increase the equivalent channel gain, it cannot compensate for the deleterious effects of HWIs in the high-SNR region.
The performance comparison of the average achievable rate R versus the number of antennas M at each AP is presented in Fig. <ref>, for different hardware quality factors ε. It shows that the average achievable rate can be improved upon increasing the number of antennas at each AP, but only at the cost of higher energy consumption. However, the signal distortion resulting from the imperfect hardware quality can be compensated by employing more AP antennas.
Fig. <ref> portrays the average achievable rate versus the inter-layer distance, where we have z=z^(l,0)=z^(l,1)=⋯=z^(l,T_l-1) for l=1,2,⋯,L. The figure shows that there exists an optimal inter-layer distance in terms of the highest average achievable rate. Furthermore, it also shows that the optimal inter-layer distance increases upon employing more intelligent surface elements. Observe that when the number of intelligent surface elements is small, the beamforming gain attained by the SIM is predominantly limited by the path loss between layers, and thus a higher average achievable rate can be attained by reducing the inter-layer distance. By contrast, upon increasing the number of intelligent surface elements, the inter-layer path loss effect becomes negligible and increasing the inter-layer distance improves the signal radiation between adjacent layers, increasing the achievable rate.
To investigate the impact of the signal attenuation caused by the signal travelling through each layer of the SIM on the achievable rate performance, Fig. <ref> presents the average achievable rate R versus the number of SIM layers at each AP, characterized by different power radiation coefficients υ_n^(l,t). It shows that although the average achievable rate degrades as the radiation coefficients υ_n^(l,t) are reduced, this effect can be compensated by increasing the number of SIM layers.
In Fig. <ref> we portray the average achievable rate R versus the number of iterations τ attained by the proposed layer-by-layer iterative optimization algorithm used by the hybrid beamformer. Observe that although the convergence speed is reduced as the number of SIM layers increases, it achieves convergence within 10 iterations.
In the above simulations, we assume that the reconfigurable SIM elements have continuous phase shifts in the range [0,2π). However, in practical hardware implementations, the phase shift of each reconfigurable element is limited to a finite number of discrete values. For simplicity, we assume that the discrete phase shift set is obtained by uniformly quantizing the interval [0,2π) into 2^b levels, with b denoting the number of phase shift control bits. Thus, we have θ_n^(l,t)∈{2π·0/2^b,2π·1/2^b,⋯,2π·(2^b-1)/2^b}. Fig. <ref> characterizes the achievable sum-rate, denoted as R_sum=∑_k=1^KR_k=∑_k=1^Klog_2(1+γ_k), versus the number of phase shift control bits b, for different numbers of UEs K. It shows that 4-bit phase shift quantization can approach the rate of infinite phase shift resolution.
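A minimal sketch of such a uniform quantizer (function name illustrative) reads:

import numpy as np

def quantize_phases(theta, b):
    # map continuous phases in [0, 2*pi) onto the nearest of 2^b uniformly spaced levels
    step = 2 * np.pi / 2**b
    return (np.round((theta % (2 * np.pi)) / step) % 2**b) * step

theta = np.random.default_rng(2).uniform(0, 2 * np.pi, 5)
print(quantize_phases(theta, b=4))  # 4 bits already approach the continuous-phase rate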
§ CONCLUSIONS
We conceived an uplink SIM-based cell-free HMIMO architecture, where the distributed signal processing was employed. Each AP carries out a local detection of the UE information, where the hybrid beamformer and the RC vectors of each distributed AP are optimized based on our low-complexity layer-by-layer iterative optimization algorithm. The CPU recovers the final UE information by fusing the local detections gleaned from all APs, where the RC weight vector used for combining the local detections is designed based on the MMSE criterion, taking into account the HWIs of the RF chains at the APs and of the UEs. The simulation results showed that the achievable rate of the SIM-based cell-free HMIMO network improves upon increasing the number of SIM layers as well as the number of elements in each layer. Furthermore, due to the HWI of the RF chains at the APs and the UEs, the achievable rate saturates in the high-SNR region.
|
http://arxiv.org/abs/2405.09460v1 | 20240515155425 | The OU$^2$ process: Characterising dissipative confinement in noisy traps | [
"Luca Cocconi",
"Henry Alston",
"Jacopo Romano",
"Thibault Bertrand"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"math-ph",
"math.MP"
] |
^1 Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen, Germany
^2 Department of Mathematics, Imperial College London, South Kensington, London SW7 2AZ, United Kingdom
luca.cocconi@ds.mpg.de
May 20, 2024
The Ornstein-Uhlenbeck (OU) process describes the dynamics of Brownian particles in a confining harmonic potential, thereby constituting the paradigmatic model of overdamped, mean-reverting Langevin dynamics. Despite its widespread applicability, this model falls short when describing physical systems where the confining potential is itself subjected to stochastic fluctuations. However, such stochastic fluctuations generically emerge in numerous situations, including in the context of colloidal manipulation by optical tweezers, leading to inherently out-of-equilibrium trapped dynamics. To explore the consequences of stochasticity at this level, we introduce a natural extension of the OU process, in which the stiffness of the harmonic potential is itself subjected to OU-like fluctuations. We call this model the OU^2 process. We examine its statistical, dynamic, and thermodynamic properties through a combination of analytical and numerical methods. Importantly, we show that the probability density for the particle position presents power-law tails, in contrast to the Gaussian decay of the standard OU process. In turn, this causes the trapping behavior, extreme value statistics, first passage statistics, and entropy production of the OU^2 process to differ qualitatively from their standard OU counterpart. Due to the wide applicability of the standard OU process and of the proposed OU^2 generalisation, our study sheds light on the peculiar properties of stochastic dynamics in random potentials and lays the foundation for the refined analysis of the dynamics and thermodynamics of numerous experimental systems.
§ INTRODUCTION
The Ornstein-Uhlenbeck (OU) process is a continuous-time Gaussian stochastic process with linear mean-reverting properties first introduced to describe the fluctuating velocity of a Brownian particle immersed in a fluid <cit.>. It has found over the years countless applications across various subfields of physics (as well as other disciplines), where it plays a similarly paradigmatic role as that of the harmonic oscillator in classical and quantum mechanics. It can be understood as the overdamped limit of any Langevin dynamics exploring the local neighborhood of a differentiable minimum of an arbitrary potential landscape. Its Langevin equation of motion generically reads
d x(t)/dt = - k̅ x(t) + √(2 D_x)ζ(t) .
where, depending on the context, k̅>0 might be interpreted as a friction coefficient or potential stiffness coefficient, while D_x denotes the diffusivity and ζ(t) is a delta-correlated zero mean and unit variance white noise. Key results including steady-state probability density function, formal solution and Green's function for the standard OU process are reviewed for completeness in <ref>. To give but one example of its wide applicability for instance in spatially extended settings, a lattice of elastically coupled OU processes formally defines the Gaussian free field around which the perturbative expansion of non-conserved dynamical field theories is typically constructed <cit.>.
While originally modelling the frictional contribution to Brownian motion, the first term in the right-hand side of Eq. (<ref>) is often interpreted in the case of overdamped dynamics as a restoring force resulting from an effective harmonic potential V(x) = k̅x^2/2 acting on the coordinate x. This is the case, for instance, in many physical models of micro-particle manipulation by optical tweezers <cit.>, where force gradients are established through the inhomogeneous electric field within a highly focused laser beam <cit.>. However, the nature of the potential V(x) might even be more abstract, as exemplified by models of mean reverting portfolios in finance <cit.> or continuous trait evolution in ecology <cit.>.
In all such cases, it is reasonable to expect that the underlying processes governing the potential are themselves subject to some degree of stochasticity, implying that V(x,t) may itself be a stochastic process <cit.>. A case in point is that of optical tweezers controlled by real laser systems, which are characterised by small fluctuations in power output around its mean <cit.>; these fluctuations in power lead in turn to fluctuations in the stiffness of the potential experienced by the dielectric particle.
Inspired by this rather simple idea, we define here a generic model of diffusion in a noisy trap. Namely, we introduce continuous, zero-mean fluctuations in the potential stiffness of the original OU process [Eq. (<ref>)]. More precisely, we model these fluctuations themselves by an OU process: we characterise this second process by an effective stiffness μ and effective diffusivity D_k (see Fig. <ref> for a schematic illustration and example trajectories). Overall, the resulting coupled dynamics of the particle position x(t) and the fluctuations in the confining potential stiffness k(t) read
d x(t)/dt = - [k̅ + k(t)] x(t) + √(2 D_x)ζ_x(t)
d k(t)/dt = -μ k(t) + √(2D_k)ζ_k(t)
where we fix the average stiffness k̅ > 0, k(t) is the zero-mean fluctuating contribution and ⟨ζ_i(t) ζ_j(t') ⟩ = δ_ijδ(t-t'). The particle dynamics reduce to a standard OU process upon setting D_k = 0. We call this generic composite stochastic process the OU^2 process and dedicate the rest of this paper to its extensive characterisation[The OU^2 model, which we introduce here, is not to be confused with the squared-OU models, a term sometimes used in the context of the modelling of interest rates by the Cox-Ingersoll-Ross model <cit.>.]. To the best of our knowledge, this model has been introduced for the first time in two recent works by the authors <cit.> in the context of nonequilibrium thermodynamics of diffusion in fluctuating potentials. Alternative generalisations of the OU process have also been investigated, including models in which the location of the confining potential minimum undergoes stochastic <cit.> or oscillatory <cit.> “sliding” dynamics at fixed stiffness. The OU^2 model is closely related to a number of other models, which we mention briefly below.
Firstly, we note that the OU^2 process is closely related to the problem of Brownian motion in an intermittent harmonic potential <cit.>, where the potential switches stochastically between two states with finite stiffnesses k_1 and k_2 in the manner of a telegraph process. The case where k_1 =0 and k_2>0 has recently received some attention as it represents a realistic implementation of stochastic resetting <cit.>; interestingly, it was shown in this case that any degree of intermittency leads to the establishment of a nonequilibrium stationary probability density for the trapped particle's position displaying a Gaussian bulk which eventually crosses over into exponential tails.
Furthermore, the OU^2 process introduced here constitutes a continuous-time extension of a class of so-called “random difference equations" (see for instance <cit.>), i.e. recurrence relations involving random parameters. For instance, the model introduced in Ref. <cit.> is effectively a time-discretised version of the OU process where the confining potential exhibits a fluctuating stiffness, whose fluctuations are uncorrelated in time. More recently, Morita <cit.> focused on a version of these processes where the random parameter is allowed to have correlations in time. Nevertheless, in that work, the author considers the much simpler case where k(t) is governed by a Poisson jump process, arguing that the case where k(t) is governed by an OU process, which would be precisely the discrete-time analog of the OU^2 process, is particularly challenging.
In the study of transport in inhomogeneous environments, recent models of diffusing diffusivity have been introduced, which allow for stochastic fluctuations in diffusivity of a free Brownian particle. These models display Fickian diffusion (characterised by a linear time dependence of the mean-square displacement) in the presence of non-Gaussian displacement distributions <cit.>.
Moreover, establishing a connection with non-equilibrium thermodynamics, the OU^2 process can be seen as a stochastically breathing harmonic potential. Standard breathing potentials, whereby the stiffness is modulated deterministically in time according to a pre-defined protocol, are often studied in the context of heat engines operating in finite time cycles <cit.>. Interestingly, generic results have been obtained in the slow driving regime for the full distribution of the stochastic work <cit.>.
An alternative inspiration for the OU^2 process may be found in the context of motile active matter, particularly in the canonical active particle model known as the Active Ornstein-Uhlenbeck particle <cit.> (AOUP). Here, out-of-equilibrium self-propulsion is introduced via a forcing term in the Langevin equation of motion, whose statistics are those of a zero-mean OU process. However, one may equivalently interpret this term as a linear potential whose amplitude is modulated stochastically in time. The OU^2 process is thus a natural, non-motile counterpart to the (much more studied) AOUP, offering a minimal example of dissipative trapping and single-particle irreversibility beyond active motility.
Finally, we will see shortly that the so-called random acceleration model can be seen as a special case of the OU^2 process <cit.>.
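Throughout this work, numerical results for the OU^2 process are obtained by direct integration of the coupled Langevin equations (<ref>). A minimal Euler-Maruyama sketch is given below; the parameter values are illustrative and chosen such that D_k/μ^2 < k̅/2, for which the second moment exists (see Sec. <ref>).

import numpy as np

rng = np.random.default_rng(0)
k_bar, D_x, mu, D_k = 1.0, 1.0, 0.1, 2e-4  # illustrative; D_k/mu^2 = 0.02 < k_bar/2
dt, n_steps = 1e-3, 10**6
x, k = 0.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    x += -(k_bar + k) * x * dt + np.sqrt(2 * D_x * dt) * rng.standard_normal()
    k += -mu * k * dt + np.sqrt(2 * D_k * dt) * rng.standard_normal()
    xs[i] = x
print(xs[n_steps // 10:].var())  # compare with the closed-form second moment below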
The paper is organised as follows: we begin in Section <ref> with a preliminary calculation of the marginal probability density function of the position for an OU^2 process with vanishing positional noise, D_x=0, highlighting some non-trivial characteristics of the associated statistics. In Section <ref>, we move away from this limit and derive the conditional and full Green's functions of the full OU^2 process, clarifying the condition for the stability of the dynamics. Section <ref> deals with the moments of the marginal, stationary probability density function for coordinate x in Eq. (<ref>). In particular, we obtain the necessary conditions for the existence of the even moments ⟨ x^2n⟩ for arbitrary n ≥ 1 in the form of an upper bound on k̅μ^2/D_k which decreases monotonically to zero with increasing n, as well as closed form expressions for the second and fourth moments. In Section <ref>, we study two limits of fast stiffness dynamics by means of homogenisation <cit.>, solving analytically the resulting coarse-grained Fokker-Planck equation for the slow dynamics.
In Section <ref>, we draw on known heuristic arguments from extreme value statistics of weakly correlated time series to conjecture the distribution of the maximum of a finite time OU^2 process, verifying our proposed classification via numerical simulations.
Owing to the algebraic nature of the tails of the marginal probability density, we observe an unexpected transition from the Gumbel to the Fréchet universality class.
Section <ref> is dedicated to investigating the impact of stiffness fluctuations on the mean first passage time (mFPT) to a stationary target.
Finally, we re-derive previous results for the steady-state entropy production rate <cit.> in a compact way in Section <ref>, thus offering a simple thermodynamic characterisation of the model. Finally, some remarks and potential directions for future research are discussed in the Conclusion.
§ EXACT SOLUTION IN THE LIMIT OF VANISHING POSITIONAL NOISE
As a preliminary analysis, we consider the limiting case of vanishing positional noise, D_x =0 in Eq. (<ref>), for which a series of exact results can be derived. In this case the dynamics of the particle are constrained to the half-line to which the initial condition belongs. Here, by symmetry, we set x(0) > 0 so that x(t)>0 without loss of generality.
§.§ Probability density function
Let us denote x_0 ≡ x(0) and define z≡ln x such that we can recast the dynamics as
ż(t) = -k̅ - k(t)
or, equivalently,
z̈(t) = μ k(t) - √(2D_k)ζ_k(t) ,
which reduces to the random acceleration process <cit.> in the limit μ→ 0.
As initial condition at t=0, we choose z(0)=z_0 (and correspondingly, x_0 = e^z_0) and assume that the fluctuating stiffness has been evolving from t → -∞ such that at t=0 the particle experiences a value of k randomly drawn from the steady-state distribution. The solution of Eq. (<ref>) is thus written
z(t) = z_0 - k̅t - √(2D_k)∫_0^t dt' ∫_-∞^t' dt” e^-μ(t'-t”)ζ_k(t”) ,
of which the time-dependent mean and variance are computed straightforwardly to be
z̅(t) ≡⟨ z(t)⟩ = z_0 - k̅t,
σ_z^2(t) ≡⟨ z^2(t) ⟩ - ⟨ z(t)⟩^2 = 2D_k/μ^2 t - 2D_k/μ^3(1-e^-μ t) ,
where ⟨⋯⟩ means an average over realisations of the noise ζ_k. It is interesting to note that at long times, i.e. t ≫μ^-1, the variance scales linearly with time as σ_z^2(t) ≃ 2D_k t/μ^2 confirming that the process remains diffusive even in the absence of positional noise. Since (<ref>) is Gaussian, the first two moments are sufficient to determine the time-dependent
moment generating function,
Z(q,t) ≡⟨ e^-iqz(t)⟩ = exp[ -iqz̅(t) - σ_z^2(t)/2q^2 ]
from which the corresponding probability density is easily obtained,
P(z,t) = 1/√(2πσ^2_z(t))exp[-(z-z̅(t))^2/2σ^2_z(t)].
Via a straightforward transformation of probability, we then obtain that the probability density for the original variable x(t) is given by the following log-normal distribution
P(x,t) = dz/dx P(z(x),t)
= 1/x √(2πσ^2_z(t))exp[ - (ln(x) - z̅(t))^2/2σ^2_z(t)],
The exact probability density (<ref>) is shown for different values of t in Fig. <ref>.
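The Gaussian statistics of z(t) derived above are easily verified numerically. The sketch below (illustrative parameters) integrates ż = -k̅ - k(t) with k initialised from its stationary distribution and compares the sample mean and variance of z(t) with Eqs. (<ref>) and (<ref>).

import numpy as np

rng = np.random.default_rng(1)
k_bar, mu, D_k, z0 = 1.0, 0.5, 0.1, 0.0
dt, t_end, n_traj = 1e-3, 5.0, 5000
k = rng.normal(0.0, np.sqrt(D_k / mu), n_traj)  # stationary initial condition for k
z = np.full(n_traj, z0)
for _ in range(int(t_end / dt)):
    z += (-k_bar - k) * dt
    k += -mu * k * dt + np.sqrt(2 * D_k * dt) * rng.standard_normal(n_traj)
var_th = 2 * D_k / mu**2 * t_end - 2 * D_k / mu**3 * (1 - np.exp(-mu * t_end))
print(z.mean(), z0 - k_bar * t_end)  # sample vs exact mean
print(z.var(), var_th)  # sample vs exact variance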
§.§ Growth and trapping
Equipped with the time-dependent probability density (<ref>), we can start to study the main qualitative features of the system. Namely, studying three key statistics of the process — the median, mean and mode of x(t) — each of which offers a different perspective on the dynamics, we discuss the conditions under which the process is said to be: (i) trapped in the sense that the associated statistic reverts back to the center of the potential (here, x=0) or (ii) growing in the sense that the associated statistic grows exponentially in time.
Median —
The median is defined as the value x_M in the support of the distribution such that
∫_0^x_M dx P(x,t) = 1/2 .
For the log-normal distribution (<ref>), an explicit expression for the median can be written and simply gives
x_M(t) = exp[z̅(t)] = x_0 exp(-k̅t)
Interestingly, we note that the behavior of the median changes as k̅ changes sign. Indeed for k̅>0, we can easily see that the median x_M(t) approaches zero exponentially as t increases. Conversely, for k̅<0, we find that x_M(t) grows exponentially with t. Note that the same result holds for every finite percentile of the distribution. In other words, as k̅ changes sign from positive to negative, most of the particles go from being trapped around x=0 to seeing their position grow exponentially in time.
Mean —
Secondly, let us consider the mean of the distribution (<ref>), which can be computed exactly to be
x̅(t)= exp[z̅(t) + σ_z^2/2] = x_0 e^-D_k/μ^3(1-e^-μ t) e^-(k̅ - D_k/μ^2)t t→∞∝exp[-(k̅ -D_k /μ^2)t].
The mean thus decays exponentially with time for k̅ > D_k/μ^2, a more stringent condition compared to that of median trapping.
Mode —
Finally, we turn our attention to the mode x_m, which is defined as the location of the maximum of the probability density, . ∂_x P(x,t) |_x_m=0. For the log-normal distribution in Eq. (<ref>), the mode is given by
x_m(t) = exp[z̅(t) - σ_z^2 ] = x_0 e^2D_k/μ^3(1-e^-μ t) e^-(k̅ +2D_k /μ^2)t t→∞∝exp[-(k̅ +2D_k/μ^2)t].
Here, we notice that the behavior of the mode for the OU^2 process changes from trapped to growing as k̅ + 2D_k/μ^2 changes sign. In other words, for k̅ < -2D_k/μ^2, the distance of the most likely outcome grows exponentially.
Remarkably, these results imply the existence of two non-trivial regimes:
(i) for 0<k̅<D_k/μ^2, the mean position of an ensemble of particles undergoing OU^2 processes without positional noise grows indefinitely even if most of the particles approach zero exponentially; this is due to rare trajectories with exceptionally large displacements.
(ii) for -2D_k/μ^2 < k̅ < 0, most of the particles escape exponentially to infinity, however the most likely outcome remains trapped at the proximity of the origin.
§.§ Optimal trapping and condition for growth
It is interesting to note that, while the parameters μ and D_k might not be accessible to direct experimental control in many physical implementations of this model, one may still be able to control the strength of the coupling between the particle and the potential via a medium-dependent parameter u such that ẋ=-u(k̅+k(t))x. In this case, the mean position becomes x̅(t;u) ∝exp[(-k̅ u +u^2D_k/μ^2)t] for large enough t. For example, in an experiment with optical tweezers, u can be tuned by changing the viscosity or dielectric properties of the colloid, while it may not be possible to improve the properties of the confining potential by increasing its mean k̅ or reducing the noise strength D_k. The exponent is now negative for u<μ^2k̅/D_k. In other words, provided that k̅ is positive we can always induce mean trapping by reducing the coupling of the particles with the potential itself. On the other hand, an excessive reduction of the coupling u results in a weak confinement. The optimal value of u minimising the exponent, hence the magnitude of the mean, and thus providing the best confinement of the latter is u_ opt=μ^2k̅/(2D_k).
Another application of the OU^2 process can be found in finance, where Eq. (<ref>) can be used as a simplified model for the growth of the capital x of a company. In this case, exponential growth rather than trapping is the desired outcome. In this model, the stiffness k represents instead (minus) the return on investment of its operations, which is itself subject to stochastic market fluctuations. While the mean return -k̅ and volatility D_k of investment can be difficult to improve, u can be tuned by simply reinvesting more capital, while u>1 can be obtained by using leverage. The expected capital will grow for u>u_th=μ^2k̅/D_k, which can happen also for companies with negative mean returns.
§ CONDITIONAL AND FULL GREEN'S FUNCTION
We now return to the full model, including noise in the displacement, and calculate its Green's function, first conditional on a particular value of k at the time of perturbation, then averaged over the corresponding steady-state distribution. From Eq. (<ref>), we can write that the formal solution for x is given by
x(t) = √(2D_x)∫_-∞^t dt' ζ_x(t') exp[ - k̅(t-t') - ∫_t'^t dt” k(t”) ] .
From Eq. (<ref>), we identify the conditional Green's function of the process, which describes the typical evolution of a noise-generated perturbation,
𝒢(t;k_0) = ⟨exp[ -k̅t - ∫_0^t dt' k(t') ] ⟩_k_0Θ(t)
where ⟨∙⟩_k_0 denotes an average with respect to the possible realisations of the process k(t) conditioned on the initialisation k(0) = k_0, and the Heaviside theta function Θ(t) ensures causality. The conditional Green's function 𝒢(t;k_0) quantifies the typical temporal evolution of a perturbation generated at t=0 by the noise ζ_x.
Exploiting the relation between the moment and cumulant generating functions and the fact that the cumulants of order 3 and above vanish for the OU process due to it being Gaussian, we write
𝒢(t;k_0)
= e^-k̅texp∑_n=1^∞(-1)^n/n!⟨( ∫_0^t dt' k(t') )^n ⟩_c,k_0
= e^-k̅texp[ -∫_0^t dt' ⟨ k(t')⟩_c,k_0 + 1/2∫_0^t dt' dt” ⟨ k(t') k(t”) ⟩_c,k_0] ,
where ⟨∙⟩_c,k_0 denotes the conditional cumulants. The conditional cumulants of first- and second-order can easily be calculated independently
⟨ k(t')⟩_c,k_0 = k_0 e^-μ t',
⟨ k(t') k(t”) ⟩_c,k_0 = D_k/μ(e^-μ|t'-t”|-e^-μ(t'+t”)).
With this result in hand, we can perform the integral in (<ref>) to obtain the conditional propagator
𝒢(t;k_0) = exp[ -(k̅ - D_k/μ^2)t - k_0/μ(1-e^-μ t) + D_k/2μ^3(4 e^-μ t- e^-2μ t - 3) ] .
Interestingly, the propagator decays to zero at long times only if k̅ > D_k/μ^2, while fluctuations grow exponentially otherwise. This highlights the importance of the competition between the two timescales in the problem, namely τ_x = 1/ k̅ — the typical mean reversion time for the particle position — and τ_k = μ^2/D_k — the typical timescale for the stiffness fluctuations.
Also note that the dependence on the initial condition for the stiffness, k_0, is rather simple. Expanding the exponent to leading order in small times t ≪ 1, we find
𝒢(t;k_0) = exp[-(k̅+k_0)t + 𝒪(t^2)]
indicating that at short times the growth/decay of fluctuations is controlled by the initial condition k_0, such that fluctuations might initially grow exponentially (when k_0 < - k̅), even when they are eventually suppressed on average at long times. In other words, the conditional propagator is not necessarily monotonic.
We might also be interested in a situations where the initial value of the potential stiffness k_0 is unknown. Assuming that the statistics of k(t) have reached steady-state by the time we perturb our system, we can calculate the full propagator by averaging Eq. (<ref>) over k_0 which has a known steady-state Gaussian probability density function, i.e.
𝒢_ full(t) =
exp[ -(k̅ - D_k/μ^2)t + D_k/2μ^3(4 e^-μ t- e^-2μ t - 3) ] ⟨exp[ - k_0/μ(1-e^-μ t) ] ⟩
where ⟨∙⟩ now denotes an expectation with respect to the steady-state probability density function of k_0. Using the fact that the last term in the above is simply the moment generating function of a zero-mean normal distribution with conjugate variable s = -(1-e^-μ t)/μ, we eventually arrive at
𝒢_ full(t)
= exp[ -(k̅ - D_k/μ^2)t + D_k/2μ^3(4 e^-μ t- e^-2μ t - 3) + D_k/2μ^3 (1-e^-μ t)^2 ]
= exp[ -(k̅ - D_k/μ^2)t + D_k/μ^3(e^-μ t- 1) ] .
We note that in the limit where D_k → 0, we recover the Green's function for the standard OU process, c.f. Eq. (<ref>). The long time behaviour is the same as for the conditional case, however expanding again at small times t ≪ 1, we now find 𝒢_ full(t) = exp[-k̅t + 𝒪(t^2) ]. This is to be expected since the average of the exponential converges to the exponential of the average when t → 0.
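The full propagator lends itself to a direct Monte Carlo check: averaging exp[-k̅t - ∫_0^t dt' k(t')] over stationary-initialised realisations of k(t) should reproduce the closed form above. A sketch with illustrative parameters (satisfying k̅ > D_k/μ^2) reads:

import numpy as np

rng = np.random.default_rng(2)
k_bar, mu, D_k, t_end = 1.0, 1.0, 0.3, 2.0
dt, n_traj = 1e-3, 20000
k = rng.normal(0.0, np.sqrt(D_k / mu), n_traj)  # k(0) drawn from the steady state
integral = np.zeros(n_traj)  # running values of int_0^t k dt'
for _ in range(int(t_end / dt)):
    integral += k * dt
    k += -mu * k * dt + np.sqrt(2 * D_k * dt) * rng.standard_normal(n_traj)
mc = np.exp(-k_bar * t_end - integral).mean()
exact = np.exp(-(k_bar - D_k / mu**2) * t_end + D_k / mu**3 * (np.exp(-mu * t_end) - 1))
print(mc, exact)  # Monte Carlo vs closed-form full propagator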
§ POSITIONAL MOMENTS
We now discuss the steady-state moments of the marginal probability density function for the coordinate x. In particular, we derive conditions for the existence of finite moments. Already in the case of a Brownian particle confined in a potential whose stiffness switches stochastically between two finite values k_1 and k_2 following a two-state Markov jump process, it was shown that the condition for existence of a moment of order s is more restrictive than merely ensuring that the stiffness is positive on average, ⟨ k ⟩_t >0. Indeed, the existence of ⟨ |x|^s ⟩ requires that k_1 P(k_1) + k_2 P(k_2) - s k_1 k_2 < 0 with P the stationary probability mass function of the jump process <cit.>. We will see in this section that similar conditions can be derived in the OU^2 case.
Starting from the formal solution, Eq. (<ref>), we first argue by symmetry that all odd moments are expected to vanish, ⟨ x^2n+1(t) ⟩=0 for all n ∈ℕ. The even moments are on the other hand given by
⟨ x^2n(t) ⟩ = (2D_x)^n⟨( ∫_-∞^t dt' ζ_x(t') e^- k̅(t-t')exp[ - ∫_t'^t dt” k(t”) ] )^2n⟩ .
Using the fact that ζ_x(t) and k(t) are uncorrelated stochastic processes, we write
⟨∏_i=1^2nζ_x(t_i')exp[-k̅ (t-t_i') - ∫_t_i'^t dt_i” k(t_i”)] ⟩
=
⟨∏_j=1^2nζ_x(t_j') ⟩⟨∏_i=1^2nexp[-k̅(t-t_i') - ∫_t_i'^t dt_i” k(t_i”)]⟩ .
The first expectation on the right-hand side can be simplified by Wick-Isserlis theorem to
a sum of product of white noise correlators, i.e. Dirac delta functions, with (2n-1)!! = (2n)!/(2^n n!) summands corresponding to all possible pairings 𝒫_2n of the random variables:
⟨∏_j=1^2nζ_x(t_j') ⟩ = ∑_p ∈𝒫_2n∏_{i,j}∈ p⟨ζ_x(t'_i) ζ_x(t'_j) ⟩ .
Since the overall integral is invariant under permutation of the indices, all summands give the same contribution. The expression for the moments thus simplifies to
⟨ x^2n(t) ⟩ = 𝒩_n∫_-∞^t dt_1 ∫_t_1^t dt_2 ⋯∫_t_n-1^t dt_n exp[-2k̅∑_i=1^n (t-t_i)] ⟨exp[- 2 ∑_i=1^n ∫_t_i^t dt_i' k(t_i')]⟩
with 𝒩_n = (2D_x)^n (2n-1)!! n!,
where we have additionally imposed the arbitrary time ordering t_1 < t_2 < ... < t_n in the multiple integrals, compensated by the combinatorial prefactor n!, without loss of generality.
Next, we exploit the identity relating the moment generating function and the exponential of the corresponding cumulant generating function
⟨exp[ - 2 ∑_i=1^n ∫_t_i^t dt_i' k(t_i') ] ⟩
= exp∑_m=1^∞1/m!⟨( - 2 ∑_i=1^n∫_t_i^t dt_i' k(t_i') )^m ⟩_c
= exp⟨ 2 ( ∫_t_1^t dt_1' k(t_1') + ∫_t_2^t dt_2' k(t_2') + ... + ∫_t_n^t dt_n' k(t_n') )^2 ⟩_c
where we have used the fact that cumulants of order m>2 vanish for the equilibrium OU process governing k(t), Eq. (<ref>), while the first order cumulant is zero at steady state.
The right-hand side of (<ref>) can be evaluated as
⟨ 2 ( ∫_t_1^t dt_1' k(t_1') + ∫_t_2^t dt_2' k(t_2') + ... + ∫_t_n^t dt_n' k(t_n') )^2 ⟩_c
= 2 D_k/μ[ ( ∑_i=1^n ∫_t_i^t dt_i' ∫_t_i^t dt_i” e^-μ|t_i'-t_i”|) + 2 ( ∑_i < j^n ∫_t_i^t dt_i' ∫_t_j^t dt_j' e^-μ|t_i'-t_j'|) ]
= 4 D_k/μ^3[ ∑_i=1^n ( μ(t-t_i) - 1 + e^-μ(t-t_i)) + ∑_i < j^n ( e^-μ(t-t_i) + e^-μ(t-t_j) - e^-μ(t_j-t_i) + 2μ(t-t_j) - 1 ) ] .
Here we have used the result for the double integral
∫_t_i^t dt_i' ∫_t_j^t dt_j' e^-μ|t_i'-t_j'| = 1/μ^2( e^-μ(t-t_i) + e^-μ(t-t_j) - e^-μ|t_j-t_i| + 2μ(t- max(t_i,t_j)) - 1) .
We now compute (<ref>) for different values of n.
§.§ Variance (n=1)
First consider the particular case n=1 for which compact expressions can be obtained. In this case the right-hand side of (<ref>) using (<ref>) simplifies to
. ⟨exp[ - 2 ∑_i=1^n∫_t_i^t dt_i' k(t_i') ] ⟩|_n=1 = exp[ 4D_k/μ^3( e^-μ(t-t_1) + μ(t-t_1) -1) ] .
Using this result, we can then rewrite (<ref>) for n=1 as
⟨ x^2 ⟩ = 2D_x ∫_-∞^t dt_1 exp[ - ( 2k̅ - 4D_k/μ^2)(t-t_1) + 4D_k/μ^3( e^-μ(t-t_1) - 1 ) ] .
It is clear by inspection that the second moment exists if and only if D_k/μ^2 < k̅/2.
Note that this is a stricter condition compared to that found in Sec. <ref> for the exponential decay of the Green's function, suggesting the existence of parameter regions for which the steady state exists but not the variance.
We now write the double exponential term in Eq. (<ref>) as a power series,
exp[ 4D_k/μ^3 e^-μ(t-t_1)] = ∑_ℓ=0^∞1/ℓ!( 4D_k/μ^3)^ℓ e^-μℓ(t-t_1) .
Substituting back into (<ref>), swapping integral and sum, performing the simple exponential integral and rearranging terms, we eventually arrive at the expression
⟨ x^2 ⟩ = 2D_x/μ e^-ξ∑_ℓ=0^∞ξ^ℓ/ℓ! (σ - ξ+ℓ)
with ξ = 4D_k/μ^3 and σ = 2k̅/μ, which reduces to ⟨ x^2 ⟩ = D_x/k̅ for D_k=0, as expected of the standard OU process. This analytical result is plotted against numerical simulation in Fig. <ref>, showing good agreement. Formally, the right hand side of Eq. (<ref>) can be written more compactly in terms of the lower incomplete Gamma function γ(a,b), which has the following series expansion <cit.>
γ(a,b) = b^a∑_ℓ=0^∞(-b)^ℓ/ℓ! (a+ℓ) ,
allowing us to reduce Eq. (<ref>) to ⟨ x^2 ⟩ = (2D_x/μ) e^-ξ (-ξ)^ξ-σγ(σ-ξ,-ξ) or, in the original notation,
⟨ x^2 ⟩ = 2 D_x/μ(-4D_k/μ^3)^4/μ(D_k/μ^2 - k̅/2)exp[-4D_k/μ^3] γ(4/μ(k̅/2 - D_k/μ^2),-4D_k/μ^3)
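In practice the series converges rapidly and is easily evaluated by truncation, as in the following sketch (illustrative parameters within the domain of convergence):

from math import exp, factorial

def second_moment(k_bar, D_x, mu, D_k, n_terms=200):
    # series expression for <x^2>; finite only for D_k/mu^2 < k_bar/2
    xi, sigma = 4 * D_k / mu**3, 2 * k_bar / mu
    return 2 * D_x / mu * exp(-xi) * sum(
        xi**l / (factorial(l) * (sigma - xi + l)) for l in range(n_terms))

print(second_moment(1.0, 1.0, 0.1, 2e-4))  # reduces to D_x/k_bar as D_k -> 0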
§.§ Quartic moment (n=2)
In this case the right-hand side of (<ref>) becomes, for n=2 and using (<ref>),
⟨ x^4 ⟩ = 6 (2D_x)^2 ∫_-∞^t dt_1 ∫_t_1^t dt_2 e^-2[k̅(t-t_1)+k̅(t-t_2)]
×exp[ 4D_k/μ^3( 2e^-μ(t-t_1) + 2e^-μ(t-t_2) - e^-μ(t_2-t_1) - 3 + μ(t-t_1) + 3μ(t-t_2) ) ] .
Expanding once again the double exponential as power series we then obtain
⟨ x^4 ⟩ = 6 (2D_x)^2 e^-12D_k/μ^3∑_k,m,ℓ=0^∞( -1/2)^k ( 8D_k/μ^3)^m+ℓ+k/m! k! ℓ!
×∫_-∞^t dt_1 ∫_t_1^t dt_2 exp[ -2(k̅ - 2D_k/μ^2)(t-t_1) - 2(k̅-6D_k/μ^2)(t-t_2) - μ m (t-t_1) - μℓ(t-t_2) - μ k (t_2 - t_1) ] ,
indicating that the quartic moment exists if and only if D_k/μ^2 < k̅/4.
The double integral can now be performed in closed form,
⟨ x^4 ⟩ = 24 D_x^2 e^-12D_k/μ^3∑_k,m,ℓ=0^∞( -1/2)^k ( 8D_k/μ^3)^m+ℓ+k/m! k! ℓ! ( 4D_k/μ^2-2k̅-μ(m+k) )( 16D_k/μ^2-4k̅-μ(m+ℓ) ) ,
giving us the most compact expression for the quartic moment. This is plotted against numerical simulations in Fig. <ref>, showing good agreement in the domain of convergence.
§.§ Higher values of n
For the general case n>2,
we focus on determining the criteria for convergence of the moments. As shown in <ref>, one can generalise the arguments developed above and obtain the following criterion of convergence for the moment of order 2n
D_k/μ^2<k̅/2n
which is in agreement with the results we just obtained for the particular cases n=1 and n=2.
From this criterion of convergence, we can argue for the asymptotic scaling of the marginal probability density function P(x). Indeed, assuming that the asymptotic scaling exponent is a continuous function of D_k/μ^2, we expect that the value of D_k/μ^2 at which the moment of order 2n becomes divergent, namely D_k/μ^2 = k̅/(2n), corresponds to that at which the marginal probability density scales asymptotically as x^-2n-1 at large x. We conclude that, asymptotically,
P(x) ∼ |x|^-1-k̅μ^2/D_k .
This result is in agreement with the closed-form expression for P(x) derived analytically in Sec. <ref> below by means of homogenisation in the fast-slow limit, as well as with numerical simulations of the full model, as shown in Fig. <ref>.
§ FAST STIFFNESS LIMIT
In many applications, such as the optical tweezer example discussed in the introduction, the potential stiffness fluctuations can be reasonably assumed to occur on a comparatively fast timescale.
In this regime, analytical results for the marginal probability density function P(x) can be obtained by enforcing a formal separation of timescales between the slow dynamics of the particle position x(t) and the fast dynamics of the stiffness fluctuations k(t). The elimination of the fast stiffness dynamics can subsequently be carried out following a multiscale approach <cit.>.
In the following, we consider two fast-slow regimes: (i) a naïve adiabatic limit, where the characteristic timescale of the k dynamics is sent to zero at fixed variance σ_k^2 = D_k/μ, and (ii) a nontrivial limit, where the variance σ_k^2 diverges as the inverse of the characteristic timescale, leading to k(t) in Eq. (<ref>) becoming statistically equivalent to a Gaussian white noise.
§.§ Naïve adiabatic limit
First, we consider a naïve adiabatic limit, in which stiffness fluctuations are expected to be irrelevant. Formally, we introduce a real dimensionless coefficient ε and proceed to the rescaling μ→μ̃/ε^2 and D_k →D̃_k/ε^2 in Eq. (<ref>), which governs the dynamics of k(t). We then take the limit ε→ 0, keeping the variance of the stiffness σ_k^2 = D_k/μ = D̃_k/μ̃ constant. In this limit, one finds that the effective dynamics of the slow variable x_ε are obtained by replacing k(t) in Eq. (<ref>) by its mean value, k(t) → 0. Consequently, x_ε is governed by the OU dynamics,
ẋ_ε(t) = - k̅ x_ε(t) + √(2D_x)ζ_x(t),
A rigorous derivation of this result is presented in <ref>. We conclude that in this trivial limit, we do not retain any signature of the stiffness fluctuations.
§.§ White noise limit
Intuitively, we can go beyond this first trivial limit by replacing the Gaussian process k(t) not by its mean value but by a Gaussian white noise with appropriate mean and standard deviation. Formally, this second regime is obtained by performing the alternative rescaling μ→μ̃/ε^2 and D_k →D̃_k/ε^4, before taking the limit ε→ 0 which keeps the ratio D_k/μ^2 constant.
In situations where x(t) denotes the position of an overdamped particle, this regime can be understood physically as a low viscosity, high temperature limit. Indeed, given an effective friction coefficient γ and bath temperature T, we have by the Stokes-Einstein relation that μ∝γ^-1 while D_x ∝γ^-1T. Taking γ = γ̃ε^2 and T = T̃ε^-2 produces the desired rescaling.
Mathematically, this amounts to k(t) in Eq. (<ref>) becoming statistically equivalent to a Gaussian white noise with covariance ⟨ k(t)k(t')⟩ = 2D̃_k/μ̃^2 δ(t-t'). Importantly, the term k(t)x(t) appearing when integrating Eq. (<ref>) should now be treated as a Stratonovich product <cit.>. Taking care of the Stratonovich-to-Itô conversion <cit.>, we find that the Itô form of the resulting Langevin equation in the limit ε→ 0 reads
ẋ_ε(t) = - (k̅ - D_k/μ^2)x_ε(t) + √(2(D_x + D_k x_ε^2(t)/μ^2))ζ_x(t)
where we have used that D̃_k/μ̃^2 = D_k/μ^2 in this case. Interestingly, this shows a clear instability as D_k/μ^2 > k̅ due to a renormalisation of the confining potential stiffness and, unlike the original dynamics, is characterised by multiplicative noise. The Fokker-Planck representation of Eq. (<ref>),
∂_t P(x_ε,t) = ∂_x_ε{∂_x_ε[ ( D_x + D_k x_ε^2/μ^2)P(x_ε,t)] + (k̅ - D_k/μ^2) x_ε P(x_ε,t) } ,
can equivalently be derived by multiscale methods (see <ref>). Interestingly, Eq. (<ref>) can be mapped onto an associated Legendre differential equation, see <ref>. In the rest of this subsection we drop the subscript ε and define the shorthands h ≡ D_k/μ^2 and κ≡k̅-h for the sake of simplicity.
We now proceed to determining the steady state probability density function P(x) associated with the Langevin equation (<ref>) for the slow x dynamics. To do so, we introduce a variable z(x) whose stochastic dynamics do not involve multiplicative noise <cit.>
z(x) = ∫^x dξ( D_x + D_k ξ^2/μ^2)^-1/2 = 1/√(h)tanh^-1(√(h)x/√(D_x+hx^2)).
By Itô's lemma, the dynamics for z take the form
ż(t) = -(κ+h)x/√((D_x+h x^2))+ √(2)ζ_x(t) = -(κ+h)tanh[√(h)z(t)]/√(h) + √(2)ζ_x(t)
where the noise is now additive as we anticipated.
The dynamics for z are exactly those of a passive Brownian particle in a static potential
V(z) = ( κ + h/h)ln[cosh(√(h)z)] ,
whence the steady state probability distribution for z is given by the Boltzmann measure
P_Z(z) = e^-V(z)/𝒵, 𝒵 = √(π/h)Γ[(h+κ)/2h]/Γ[(2h+κ)/2h] .
Finally, we perform a transformation of probability distributions to obtain a simple expression for the probability density of x,
P_X(x) = dz/dx P_Z(z(x)) = √(h/π)Γ(κ/2 h+1) [Γ(h+κ/2 h)]^-1 D_x^h+κ/2 h(D_x+h x^2)^-κ/2 h-1 .
Asymptotically, we again find that
P_X(x) ∼ |x|^-k̅μ^2/D_k-1
It can be checked by direct substitution that P(x)=P_X(x) solves the Fokker-Planck equation (<ref>) at steady state.
Notice also that Eq. (<ref>) is in agreement with the asymptotic scaling of the marginal probability density for the full OU^2 process, Eq. (<ref>). In particular, it is straightforward to check that the existence of moments of order 2n demands D_k/μ^2 < k̅/(2n).
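The closed-form density (<ref>) is straightforward to evaluate, and its normalisation can be checked numerically with a few lines (illustrative parameters satisfying h < k̅):

import numpy as np
from scipy.special import gamma as Gamma

def p_x(x, k_bar, D_x, h):
    # steady-state density in the white-noise limit, with h = D_k/mu^2 and kappa = k_bar - h
    kappa = k_bar - h
    norm = np.sqrt(h / np.pi) * Gamma(kappa / (2 * h) + 1) / Gamma((h + kappa) / (2 * h))
    return norm * D_x**((h + kappa) / (2 * h)) * (D_x + h * x**2)**(-kappa / (2 * h) - 1)

x = np.linspace(-50.0, 50.0, 200001)
p = p_x(x, k_bar=1.0, D_x=1.0, h=0.1)  # tails decay as |x|^(-11) for these values
print((p * (x[1] - x[0])).sum())  # close to 1: normalisation check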
§ STATISTICS OF MAXIMA
Despite their relevance in many domains, such as climate modelling, there is currently no general framework to study the extreme value statistics (EVS) of correlated random variables <cit.>. Amongst the few exceptions is the standard OU process, for which the EVS can be computed exactly and shown to belong to the Gumbel universality class <cit.>. Accordingly, the asymptotic probability density of the maximum X(t) ≡max_τ∈ [0,t] x(τ) is given by
Φ_G(X;m,s) = s^-1 e^-(z+e^-z) , z = X-m/s
where the mean and variance of X(t) can be expressed as
⟨ X⟩ = m + s γ, ⟨ X^2⟩ - ⟨ X⟩^2 = π^2 s^2/6
respectively, where γ=0.5772... is the Euler-Mascheroni constant. It is thus natural to wonder how the EVS of the OU^2 process compare to those of the standard OU process.
While a fully analytical approach is beyond the scope of this work, we can draw on the renormalisation group heuristic introduced in Ref. <cit.> to argue that, as long as correlations decay over a finite time, the EVS for a weakly correlated stochastic process are still expected to converge to one of the three limiting distributions for uncorrelated random variables, namely Gumbel, for exponentially decaying parent distributions, Fréchet, for fat-tailed parent distributions, and Weibull, for parent distributions with compact support <cit.>. Combining this heuristic with the finding of Sec. <ref> that the probability density of x decays algebraically at large x, specifically as P(x) ∼ |x|^-1-α with α≡k̅μ^2/D_k, leads us to conjecture that the EVS for the OU^2 process should converge to a Fréchet distribution with D_k-dependent characteristic exponent. In particular, we consider the Fréchet probability density for X(t)=max_τ∈ [0,t] x(τ) given by
Φ_F(X;m,s) =
0 for z≤ 0
α/s z^-1-α e^-z^-α for z > 0
, z = X-m/s
where the mean and variance of X(t) are expressed as
⟨ X⟩ = m + s Γ( 1 - 1/α), ⟨ X^2⟩ - ⟨ X⟩^2 = s^2 [ Γ( 1 - 2/α) - Γ^2( 1 - 1/α) ]
the latter being defined only for α > 2. We find this conjecture to be in good agreement with numerical simulations of the full model, as shown in Fig. <ref>. As expected, Gumbel EVS are recovered upon setting D_k=0, i.e. in the absence of potential fluctuations.
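Numerically, the maximum statistics are obtained by integrating Eq. (<ref>) over many independent trajectories and recording the largest excursion of each; a vectorised sketch (illustrative parameters, for which α = k̅μ^2/D_k = 20) reads:

import numpy as np

rng = np.random.default_rng(3)
k_bar, D_x, mu, D_k = 1.0, 1.0, 0.1, 5e-4  # alpha = k_bar*mu^2/D_k = 20 here
dt, t_end, n_traj = 1e-3, 20.0, 1000
x = np.zeros(n_traj)
k = rng.normal(0.0, np.sqrt(D_k / mu), n_traj)
m = np.zeros(n_traj)  # running maxima X(t) of each trajectory
for _ in range(int(t_end / dt)):
    x += -(k_bar + k) * x * dt + np.sqrt(2 * D_x * dt) * rng.standard_normal(n_traj)
    k += -mu * k * dt + np.sqrt(2 * D_k * dt) * rng.standard_normal(n_traj)
    np.maximum(m, x, out=m)
print(m.mean(), m.var())  # to be compared with the Frechet mean and variance above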
§ FIRST-PASSAGE TIME STATISTICS
We now focus on the impact of continuous fluctuations of the potential stiffness on the first-passage time statistics of the standard Ornstein-Uhlenbeck process, as characterised in <cit.>. The problem of finding the mean first-passage time to an absorbing boundary can be mapped onto that of the escape of a particle over a fluctuating energy barrier. The rate at which a Brownian particle escapes over an energy barrier is a central problem in statistical physics dating back to Kramers <cit.>, finding applications across disciplines through reaction-rate theory <cit.>.
Calculating the escape rate over a fluctuating energy barrier for a Brownian particle has attracted some attention in the past <cit.>. At low temperatures, zero-mean fluctuations in the energy barrier height lead to so-called resonant activation and to a reduction of the mean first-passage time, effectively aiding the escape process <cit.>; resonant activation has been observed for general confining potentials <cit.>. In general, solving for the first-passage time distribution and its moments for the coupled dynamics of the particle position and potential stiffness constitutes a formidable task. To the best of our knowledge, only approximate results can be derived in the limit where the timescale associated with the potential stiffness fluctuations is negligible compared to that of the particle position dynamics, i.e. in the fast stiffness limit described above.
The analysis of the high temperature case is particularly challenging: even for static harmonic potentials Kramers' theory has been shown to breakdown in this limit <cit.>; the high temperature limit in the case of a fluctuating potential is an open problem.
Here, we instead tackle this problem numerically. Specifically, we numerically integrate Eq. (<ref>) using the Euler-Maruyama method with timestep dt = 10^-4. For all results presented here (see Fig. <ref>), we simulate m = 10^5 realisations of the coupled dynamics in which the particle is initialised at x_0 = 0 and we place an absorbing boundary condition at x_a > 0, fixing D_x = k̅ = 1 with μ=0.1. We have confirmed that all realisations lead to a finite first-passage time to the absorbing boundary condition. We probe a wide range of values for the stiffness fluctuations strength, such that for high enough values of D_k /μ^2, the steady-state distribution for the particle position may not exist. However, for all values of the stiffness fluctuations strength D_k/μ^2 studied here, we have checked that the standard deviation of the first-passage time is finite and, for m sufficiently large, independent of the number of realisations, ensuring the convergence of the mean first-passage time (mFPT).
A first look at Fig. <ref> shows that the mFPT, independently of the target location, is generically a non-trivial, non-monotonic function of the stiffness fluctuation strength, D_k/μ^2.
In particular, we find that the mFPT to the absorbing target can be significantly reduced (more than one order of magnitude) by strong enough stiffness fluctuations D_k/μ^2 for targets which are not too close to the initial conditions, which is consistent with previous studies. Interestingly, we show that stiffness fluctuations can instead increase the mFPT compared to the standard Ornstein-Uhlenbeck case (i.e. the zero stiffness fluctuations limit, D_k/μ^2 → 0) if x_a is small. Said differently, for targets very close to the initial conditions, stiffness fluctuations can be detrimental and a system with constant stiffness k̅ will instead be optimal. Furthermore, we observe that in all cases the coefficient of variation of the first-passage times, σ_τ/τ̅, first strongly increases before reaching a maximum at the same value of D_k / μ^2; at large enough fluctuation strength, the coefficient of variation decays monotonically with fluctuation strength for all target locations. Interestingly, we find that the first-passage times are exponentially distributed at both low and high values of D_k/μ^2 for the most distant absorbing target location.
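For reproducibility, a vectorised sketch of the first-passage estimator is given below; absorbed trajectories are frozen in place, a hard cap on the number of steps is included for safety, and the parameter values are again illustrative.

import numpy as np

rng = np.random.default_rng(4)
k_bar, D_x, mu, D_k, x_a = 1.0, 1.0, 0.1, 1e-3, 2.0
dt, n_traj = 1e-3, 2000
x = np.zeros(n_traj)
k = rng.normal(0.0, np.sqrt(D_k / mu), n_traj)
t = np.zeros(n_traj)
alive = np.ones(n_traj, dtype=bool)  # trajectories not yet absorbed at x_a
for _ in range(10**6):  # hard cap on the number of steps
    if not alive.any():
        break
    dx = -(k_bar + k) * x * dt + np.sqrt(2 * D_x * dt) * rng.standard_normal(n_traj)
    dk = -mu * k * dt + np.sqrt(2 * D_k * dt) * rng.standard_normal(n_traj)
    x = np.where(alive, x + dx, x)
    k = np.where(alive, k + dk, k)
    t = np.where(alive, t + dt, t)
    alive &= x < x_a
print(t.mean(), t.std() / t.mean())  # mFPT and coefficient of variation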
§ ENTROPY PRODUCTION
We now consider the thermodynamic implications of the stiffness fluctuations characterising the OU^2 process. Indeed,
we expect a non-zero rate of entropy production at steady-state <cit.> as the system performs work to change the stiffness.
Furthermore, the existence of steady-state divergence-free probability currents in the (x,k)-plane, as shown on Fig. <ref>, implies a breaking of time-reversal symmetry. It is in this sense that we have previously referred to the OU^2 process as a minimal model of dissipative confinement. As detailed in Ref. <cit.> and <ref>, such dissipation can be written in terms of the second moment of the steady-state marginal probability density function for the coordinate x, which we computed in Sec. <ref>. We now rederive this result using a shortcut and call upon the results of Sec. <ref> to provide a clearer picture of the thermodynamic properties of the OU^2 process.
As a preliminary step, let us introduce the Fokker-Planck formulation <cit.> of the Langevin dynamics (<ref>) and (<ref>),
∂_t P(x,k) = [D_x ∂_x^2 + D_k ∂_k^2]P(x,k) + (k̅+k) ∂_x [xP(x,k)] + μ∂_k [kP(x,k)] .
Multiplying both sites of Eq. (<ref>) by x^2 and subsequently integrating with respect to both x and k, using integration by parts where necessary, gives the remarkably simple relation
⟨ (k(t) + k̅)x^2(t) ⟩=D_x .
Note however that, when carrying out this procedure, one encounters the integral
∫ dx dk x^2 (k+k̅) ∂_x [xP(x,k)], whose k̅ contribution reads
k̅∫ dx dk x^2 ∂_x [xP(x,k)] = k̅[ x^3 ∫ dk P(x,k) ]_x=-∞^+∞ - 2 k̅∫ dx dk x^2 P(x,k) .
For the boundary term on the right-hand side to vanish, it is required that the marginal probability density P(x) ≡∫ dk P(x,k) decays faster than P(x) ∼ x^-3 as x →∞. This is also the condition for the existence of the second moment, such that the validity of Eq. (<ref>) is contingent upon the existence of ⟨ x^2 ⟩.
The mean rate of entropy production, denoted Ṡ_i, is related to the Jarzynski stochastic work <cit.>
W_τ = ∫_0^τ dt ẋ∘ [(k(t) + k̅) x(t)]= 1/2∫_0^τ dt k̇(t) ∘ x^2(t)
via
Ṡ_i = lim_τ→∞1/D_xτ⟨ W_τ⟩ = - μ/2D_x⟨ k x^2 ⟩ = μk̅/2D_x(⟨ x^2 ⟩ - D_x/k̅)
where ∘ denotes a Stratonovich product and we have used (<ref>) to replace ⟨ kx^2⟩ in Eq. (<ref>). Note that ⟨ x^2 ⟩ > D_x/k̅ for all D_k >0 <cit.>, such that the second law of thermodynamics Ṡ_i ≥ 0 is always satisfied.
Now, using Eq. (<ref>) for the variance, we can write the steady-state entropy production rate as
Ṡ_i = μ/2( σ e^-ξ∑_ℓ=0^∞ξ^ℓ/ℓ! (σ - ξ+ℓ) - 1 )
where again we have defined ξ = 4D_k/μ^3 and σ = 2k̅/μ. Expanding to leading order in weak stiffness noise, Ṡ_i = μξ/[2σ(1+σ)] + 𝒪(ξ^2) = D_k/[k̅(μ + 2k̅)]+ 𝒪(ξ^2), which vanishes at D_k=0.
Remarkably, the entropy production rate diverges together with ⟨ x^2 ⟩ as D_k/μ^2 approaches k̅/2 from below.
§ CONCLUSION
We have examined a number of statistical, dynamic and thermodynamic properties of the OU^2 process: a generalised Ornstein-Uhlenbeck (OU) process obtained by allowing for the associated stiffness coefficient to undergo OU-like stochastic fluctuations around a positive mean. To the best of our knowledge, this is the first systematic exploration of such a model, which was originally introduced in two recent works by some of the authors on the thermodynamics of Brownian motion in fluctuating potentials <cit.>. This process finds physical relevance in contexts where effective harmonic confining potentials are generated by a non-ideal external process, e.g. when colloids are manipulated by optical tweezers with realistic laser power stability <cit.>. We also argued that the OU^2 model is closely related to models in stochastic search with resetting <cit.> and active matter <cit.>, and that it constitutes a stochastic counterpart to the breathing harmonic potentials studied in the thermodynamics literature <cit.>.
We started our analysis by considering the limit of vanishing positional noise, for which the time-dependent probability density can be obtained in closed form, Eq. (<ref>). From this, we derived exact expressions for the median (Eq. (<ref>)), mode (Eq. (<ref>)) and mean (Eq. (<ref>)), whose time-dependence was found to transition from exponentially decreasing (trapped regime) to exponentially increasing (growing regime) at different non-trivial critical values of the non-dimensional parameter ℱ≡ D_k/(μ^2 k̅).
Having reintroduced a finite positional noise, we computed the conditional and full Green's function of the process, Eqs. (<ref>) and (<ref>), showing that both grow exponentially with time when ℱ exceeds unity, indicating loss of ergodicity. Starting from the formal solution of the OU^2 dynamics, Eq. (<ref>), we subsequently computed the second and fourth moments of the steady-state positional probability density function, Eqs. (<ref>) and (<ref>), as well as a necessary condition for the existence of moments of order 2n in terms of an n-dependent upper bound on ℱ, Eq. (<ref>). From this condition we inferred an algebraic asymptotic decay of the positional probability density, with scaling exponent η = -1-ℱ^-1. We subsequently considered two limiting regimes of fast stiffness dynamics: while the naïve adiabatic limit produced trivial OU dynamics for the slow degree of freedom, the second limit, obtained by taking μ→μ̃/ε^2 and D_k →D̃_k/ε^4 with ε→ 0, led to non-trivial coarse-grained dynamics for the position involving a renormalised stiffness and multiplicative noise, Eq. (<ref>), for which the steady-state probability density was obtained in closed form, Eq. (<ref>).
Borrowing from extreme value theory heuristics, we then conjectured that in the presence of finite stiffness fluctuations, ℱ>0, the standardised distribution of the running maximum should converge at long times to a Fréchet form with ℱ-dependent exponent, in good agreement with numerical experiments (cf. the standard OU process, whose maximum is Gumbel distributed at long times). Further, the dependence on ℱ of the mean first passage time to a positive target was studied numerically, and we showed that sufficiently strong stiffness fluctuations can aid the particle by reducing drastically the mean first-passage time to that target. A formal analytical treatment of this problem remains an intriguing open question and is left for future studies.
Finally, we presented a compact derivation of the steady-state entropy production rate which calls upon results for Jarzynski's stochastic work, Eq. (<ref>), showing that it depends solely on the second steady-state moment of the positional probability density. Remarkably, the entropy production diverges for ℱ > 1/2. It would be interesting to explore higher order statistics of the stochastic work (<ref>), similarly to what was done in Ref. <cit.> for the stochastically sliding potential and more recently in Ref. <cit.> for an AOU particle confined in a harmonic potential, and to compare any such result to the full distribution of the stochastic work obtained in Ref. <cit.> for a breathing harmonic potential in the slow driving regime.
Taken together, our results point to the OU^2 process as a widely applicable minimal model of dissipative confinement. Its rich phenomenology, emerging from the dynamical establishment of a non-equilibrium steady-state analogous to that of Brownian motion under stochastic resetting, renders it a valuable non-motile counterpart to other minimal models of single-particle out-of-equilibrium dynamics, such as the AOU particle, that have been explored extensively in recent years.
§ KEY RESULTS FOR THE ORNSTEIN-UHLENBECK PROCESS
In this appendix, we recall some useful key results for the original Ornstein-Uhlenbeck process as described by Eq. (<ref>). First, the transition probability P(x, t|x', t') for the process takes the form
P(x, t | x', t') = √(k̅/2 π D_x(1-e^-2k̅ (t-t')))exp[-k̅(x-x'e^-k̅(t-t'))^2/2D_x(1-e^-2k̅(t-t'))],
where t'<t. The steady-state probability density function is thus a Gaussian distribution with zero mean and variance ⟨ x^2⟩ = D_x/k̅ <cit.>. The exponential decay at large x of the steady-state distribution ensures that all even moments ⟨ x^2n⟩ are finite for n∈ℤ_≥ 0, with odd moments vanishing due to the x→ -x symmetry. The associated cumulant generating function then takes the form
K(a) = log⟨ e^a x⟩ = a^2 D_x/2k̅
where cumulants of order 3 and above vanish. The solution x(t) for a given realisation of the noise can be written as
x(t) = √(2D_x)∫_-∞^t dt' ζ(t')exp[-k̅(t-t')],
from which we read off the Green's function for the process 𝒢(t) of the form
𝒢(t)= exp[-k̅t]Θ(t)
where Θ(t) is the Heaviside function.
§ CONVERGENCE CRITERION FOR HIGHER MOMENTS
In this section, we discuss the derivation of the convergence criterion for moments of order n>2. Looking at Eq. (<ref>), we argue that since the function is finite for all (t_1,...,t_n), any divergence of the integral in the right-hand side of (<ref>) must be controlled by the behaviour of the integrand in the regime t_i → -∞. Keeping only leading order terms in this limit, we thus rewrite (<ref>) using (<ref>) as
⟨exp[ - 2 ∑_i=1^n∫_t_i^t dt_i' k(t_i') ] ⟩ ≃exp[ 4D_k/μ^2( ∑_i=1^n (t-t_i) + 2 ∑_i<j (t-t_j)) ] = exp[ 4D_k/μ^2∑_i=1^n (1 + 2(i-1))(t-t_i) ]
where ≃ denotes approximate equality at t_i → -∞. Combining this with (<ref>), we argue that the moment of order 2n converges when the following integral also converges
ℐ_n = ∫ dt_1<...<nexp[ - ∑_i=1^n ( 2k̅ - 4D_k/μ^2(1+2(i-1)) ) (t-t_i) ] ,
where ∫ dt_1<...<n≡∫_-∞^t dt_1…∫_t_n-1^t dt_n. We now define a_i = 2k̅ - 4D_k[1+2(i-1)]/μ^2 and re-write the integral (<ref>) as
ℐ_n = ∫ dt_1<...<n∏_i=1^n e^-a_i(t-t_i) .
We now iteratively apply the following result, valid for generic m,
∫_t_m-1^t dt_m e^-a(t-t_m)= 1-e^-a(t-t_m-1)/a
to evaluate ℐ_n up to a multiplicative constant. We integrate over dt_n and re-arrange to write
ℐ_n ∝∫ dt_1<...<(n-1)∏_i=1^n-1e^-a_i(t-t_i)[1-e^-a_n(t-t_n-1)]
=∫ dt_1<...<(n-1)∏_i=1^n-2e^-a_i(t-t_i)[e^-a_n-1(t-t_n-1)-e^-(a_n-1+a_n)(t-t_n-1)].
Integrating over t_n-1 we subsequently derive
ℐ_n ∝∫ dt_1<...<(n-2)∏_i=1^n-2e^-a_i(t-t_i)[ λ_n-1(1-e^-a_n-1(t-t_n-2)) - 1+ e^-(a_n-1+a_n)(t-t_n-2)]
∝∫ dt_1<...<(n-2)∏_i=1^n-3e^-a_i(t-t_i)[ (λ_n-1-1)e^-a_n-2(t-t_n-2)-λ_n-1e^-(a_n-2+a_n-1)(t-t_n-2)
-e^-(a_n-2+a_n-1+a_n)(t-t_n-2)].
where λ_n-1=(a_n-1+a_n)/a_n-1.
Treating the remaining time variables in the same manner, we conclude that the convergence of the integral ℐ_n is determined by the positivity of the partial sums A_m = ∑_i=1^m a_i for m=1… n. More specifically, the most strict condition is always the positivity of the full sum, m=n, as the sequence {a_i} is monotonically decreasing: a_i+1<a_i. The condition for the convergence of the n-th moment is thus
0<A_n=∑_i=1^n[2k̅ - 4D_k/μ^2(1+2(i-1))]= 2 k̅ n - 4 D_k/μ^2n^2 ⟺ D_k/μ^2<k̅/2n,
in agreement with the results obtained in Section <ref> for the particular cases n=1 and n=2.
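For reference, the criterion can be checked programmatically. The helper below (a hypothetical utility, following the definition of a_i above) tests the positivity of the partial sums A_m directly; the binding condition is always the full sum, equivalent to D_k/μ^2 < k̅/(2n).

```python
def moment_2n_exists(n, kbar, Dk, mu):
    """True iff all partial sums A_m of the a_i are positive (2n-th moment finite)."""
    a = [2*kbar - 4*Dk/mu**2*(1 + 2*(i - 1)) for i in range(1, n + 1)]
    return all(sum(a[:m]) > 0 for m in range(1, n + 1))

# Dk/mu^2 = 0.2: the 2nd (0.2 < 1/2) and 4th (0.2 < 1/4) moments exist,
# while the 6th (0.2 > 1/6) does not.
for n in (1, 2, 3):
    print(n, moment_2n_exists(n, kbar=1.0, Dk=0.2, mu=1.0))
```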
§ HOMOGENISATION PROCEDURE FOR COUPLED DYNAMICS
In this appendix, we first review multiscale methods for the coarse graining of the coupled dynamics of two stochastic variables with multiple timescales and then apply it to the homogeneisation of the OU^2 process.
§.§ General theory
In this section, we follow the treatment of the homogenisation procedure presented in Chapter 11 of Ref. <cit.>. We begin from the most general form for the coupled dynamics of two stochastic processes x(t) and k(t),
ẋ = 1/ε f_0(x,k) + f_1(x,k) + α(x,k) ζ_x(t)
k̇ = 1/ε^2 g (x,k) + 1/εβ(x,k) ζ_k(t)
where ε > 0 is a dimensionless factor that will be taken to zero to enforce a separation of timescales between the “slow” x and “fast” k dynamics. Note that both ẋ and k̇ involve fast contributions to the dynamics, but the dynamics for k are an order of ε faster than those of x.
Combining Eqs. (<ref>) and (<ref>), we construct the backward Kolmogorov equation for v(x, k, t), which takes the form
∂_t v = ℒv = 1/ε^2(g ∂_k v + 1/2 B ∂_k^2 v) + 1/ε(f_0 ∂_xv) + (f_1∂_xv+ 1/2A∂_x^2 v),
where we have defined the diffusivities A(x,k) = α(x,k) α̅(x,k) and B(x,k) = β(x,k) β̅(x,k), with ∙̅ denoting the conjugate transpose. We write the order 𝒪(ε^-2) contribution to the backward Fokker-Planck operator acting on v as
ℒ_-2 v ≡ g ∂_k v + 1/2B∂_k^2 v.
Clearly, ℒ_-2w(x)=0 for any function w(x) that does not depend on k. Additionally, we define ρ^∞(k;x) as the normalised measure obtained by solving the associated steady-state forward Kolmogorov equation for a fixed value of the slow variable x, ℒ^*_-2ρ^∞(k;x) = 0.
For the limit ε→ 0 of this problem to be well-posed, we require that f_0(x, k) satisfies the so-called centering condition, namely that its average with respect to ρ^∞ vanishes
∫ dk f_0(x, k) ρ^∞(k;x) = 0.
We now construct a perturbative solution to Eq. (<ref>) of the form v = v_0 + ε v_1+ ε^2 v_2 + 𝒪(ε^3).
Matching terms of order 𝒪(ε^-2), we conclude that v_0(x) is independent of k. At 𝒪(ε^-1) and relying on the centering condition on f_0 introduced above, we find an expression for v_1(x, k) in terms of v_0(x) and a function Φ(x,k) which solves
-ℒ_-2Φ(x,k) = f_0(x,k)
with ∫ dk Φ(x,k) ρ^∞(k;x) =0.
Finally, at 𝒪(ε^0) we derive a closed equation for ∂_t v_0 from the ergodicity assumption that ∫ dkρ^∞ℒ_-2v_2=0, namely
∂_t v_0 = F(x)∂_x v_0 + 1/2 A(x) A(x)^T ∂_x^2 v_0
where we have defined the vector fields
F(x) = ∫ dk ( f_1(x,k) + (∂_x Φ(x, k)) f_0(x,k) ) ρ^∞(k;x)
and
A(x)A^T(x) = A_1(x) + 1/2 (A_0(x) + A_0^T(x)) ,
with
A_0(x) = 2 ∫ dk f_0(x,k) Φ(x, k) ρ^∞(k;x)
A_1(x) = ∫ dk A(x,k) ρ^∞(k;x) .
The Itô Langevin equation corresponding to the backward Kolmogorov equation (<ref>) for v_0 constitutes our slow variable dynamics in the regime ε≪ 1 and reads
ẋ(t) = F(x(t)) + A(x(t))η_x(t) .
This is the key result that we draw on in Section <ref>, as detailed in the rest of this Appendix.
§.§ Homogenisation for OU^2 Process
We now discuss the treatment of dynamics on multiple timescales in the specific context of the OU^2 process, Eqs. (<ref>) and (<ref>), which we restate for convenience here:
ẋ = - (k̅ + k) x + √(2 D_x)ζ_x
k̇ = -μ k + √(2D_k)ζ_k.
§.§.§ Adiabatic limit —
First, following Sec. <ref>, we consider a naïve separation of timescales between the two dynamics, akin to the adiabatic limit in thermodynamics, obtained by introducing a small dimensionless parameter ε via the rescaling μ→μ̃/ε^2 and D_k →D̃_k/ε^2,
ẋ = - (k̅ + k) x + √(2 D_x)ζ_x
k̇ = -μ̃/ε^2 k + √(2D̃_k/ε^2)ζ_k.
Comparing these equations with Eqs. (<ref>) and (<ref>) and matching terms by their order in ε, we conclude that in this limit f_0(x,k)=0, f_1(x,k) = - (k̅ + k) x and α(x,k) = √(2D_x), while g(x,k) = -μ̃k and β(x,k)= √(2D̃_k). Employing the procedure outlined in the previous section, it is then straightforward to verify that F(x) = -k̅x and A(x) = √(2D_x) leading to
ẋ(t) = - k̅ x(t) + √(2D_x)ζ_x(t).
In other words, no signature of the coupling between x and k survives in the effective dynamics for the slow variable x in the limit ε→ 0.
§.§.§ White noise limit —
For a non-trivial contribution to appear in the slow dynamics, we subsequently consider a second fast-slow regime, which we refer to as the white noise limit in Sec. <ref>. This time, we perform the rescaling μ→μ̃/ε^2 and D_k →D̃_k/ε^4, whereby
ẋ = - (k̅ + k) x + √(2 D_x)ζ_x ,
k̇ = -μ̃/ε^2 k + √(2D̃_k/ε^4)ζ_k .
Now, let χ≡ε k, such that
ẋ = -1/εχ x - k̅x + √(2 D_x)ζ_x ,
χ̇ = -1/ε^2μ̃χ + 1/ε√(2D̃_k)ζ_k .
Comparing once again these equations with Eqs. (<ref>) and (<ref>), we identify f_0(x,χ) = - χ x, f_1(x,χ) = - k̅x and α(x,χ) = √(2D_x), while g(x,χ)= -μ̃χ and β(x,χ)= √(2D̃_k).
Following the procedure outlined above, we find that the effective dynamics for the slow variable x take the form
ẋ(t) = -( k̅ - D_k/μ^2) x(t) + √(2( D_x + D_kx^2(t)/μ^2))ζ_x(t) ,
where we have used that D̃_k/μ̃^2 = D_k/μ^2 in this case. We thus recover the effective slow dynamics (<ref>) studied in the main text.
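The sketch below (illustrative parameters of our choosing; an Itô discretisation is assumed for the multiplicative-noise equation, consistent with the Itô convention stated above) compares the full dynamics with fast stiffness against the coarse-grained equation, keeping h = D_k/μ² fixed while μ is large.

```python
import numpy as np

rng = np.random.default_rng(1)
kbar, Dx, h = 1.0, 1.0, 0.2          # h = D_k/mu^2, held fixed as mu grows
mu = 50.0; Dk = h*mu**2              # fast stiffness: white-noise regime
dt, n_steps, n_traj = 2e-4, 100_000, 500

x_full = np.zeros(n_traj); k = np.zeros(n_traj)
x_eff = np.zeros(n_traj)
acc_full, acc_eff = [], []
for i in range(n_steps):
    # full OU^2 dynamics with fast stiffness
    k += -mu*k*dt + np.sqrt(2*Dk*dt)*rng.standard_normal(n_traj)
    x_full += -(kbar + k)*x_full*dt + np.sqrt(2*Dx*dt)*rng.standard_normal(n_traj)
    # coarse-grained dynamics: renormalised drift and multiplicative noise
    x_eff += -(kbar - h)*x_eff*dt + np.sqrt(2*(Dx + h*x_eff**2)*dt)*rng.standard_normal(n_traj)
    if i > n_steps // 2:
        acc_full.append(np.mean(x_full**2)); acc_eff.append(np.mean(x_eff**2))

# heuristic Ito steady-state estimate for the effective equation: Dx/(kbar - 2h)
print(f"<x^2> full: {np.mean(acc_full):.3f}  effective: {np.mean(acc_eff):.3f}"
      f"  (Ito estimate {Dx/(kbar - 2*h):.3f})")
```

Both estimates should lie close to D_x/(k̅ − 2h), which is the μ→∞ limit of the exact series result.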
§ CONNECTION TO THE ASSOCIATED LEGENDRE EQUATION
Upon enforcing the separation of timescales detailed in Sec. <ref>, the Fokker-Planck equation (<ref>) for the marginal distribution of the slow variable x(t) can be mapped on to an associated Legendre differential equation through a change of variable, which we detail here. Indeed, the marginal steady-state distribution P(x) is solution to
0 = ∂/∂ x{∂/∂ x[ ( D_x + D̃_k x^2/μ̃^2)P(x)] + (k̅ - D̃_k/μ̃^2) x P(x) } .
Introducing h ≡D̃_k/μ̃^2, we define
P(x) = a_0 ( D_x + h x^2 )^-(1/4)(1 + k̅/h)Φ(x) ,
which we substitute into Eq. (<ref>) to obtain
0= ( 1 + h/D_xx^2 ) Φ”(x) + 2h/D_x x Φ'(x) + h + k̅/4D_x( 1 + D_x - k̅x^2/D_x + h x^2) Φ(x) .
Finally, we perform the change of variable s = i x √(h/D_x) and derive the equation for Φ(s) with imaginary argument
0=(1-s^2) Φ”(s) - 2 s Φ'(s) + [ ( k̅/2h - 1/2)( k̅/2h - 1/2 + 1) - ( h+k̅/2h)^2 1/1-s^2]Φ(s),
which is now in the form of an associated Legendre differential equation <cit.>. The formal solution of the original equation, up to a normalisation factor, is thus
P(x) ∝( D_x + h x^2 )^-μ/2[ c_1 P_λ^μ( i√(h/D_x)x) + c_2 Q_λ^μ( i√(h/D_x)x) ]
where
μ = k̅+h/2 and λ = k̅-h/2,
and P_λ^μ and Q_λ^μ denote the associated Legendre functions of the first and second kind, respectively, which are precisely the two linearly independent solutions of the Legendre equation.
§ DERIVATION OF THE ENTROPY PRODUCTION RATE FROM THE GIBBS-SHANNON ENTROPY
In this appendix, we summarise the full derivation of the entropy production rate Ṡ found in Ref. <cit.>. We begin from the Fokker-Planck equation for the joint probability distribution for the particle position and stiffness governed by Eq. (<ref>):
∂_t P(x, k, t) = D_x ∂_x^2 P + (k̅ + k) ∂_x(xP)+D_k ∂_k^2P + μ∂_k(k P) = -∂_x J(x, k, t) - ∂_k 𝒥(x, k, t)
where we have defined the probability currents for the positional and stiffness variables respectively as
J(x, k, t) = -(k̅ + k) x P - D_x∂_x P and 𝒥(x, k, t)=-μ k P - D_k ∂_k P.
Following the standard approach for the thermodynamic treatment of diffusive systems with fluctuating potentials <cit.>, we differentiate the Gibbs-Shannon entropy with respect to time to write
Ṡ(t) = -∫ dx ∫ dk ∂_tP(x, k, t) log(P(x, k, t)/P̅),
where P̅ is an arbitrary constant for dimensional consistency and we work in units such that k_B=1. We identify two contributions to this rate, which become equal and opposite at steady state, that we write as
Ṡ(t) = Ṡ_i(t) + Ṡ_e(t)
where we have defined the internal (or total) entropy production rate
Ṡ_i(t) = ∫ dk ∫ dx 1/P(x, k, t)[J^2(x, k, t)/D_x + 𝒥^2(x, k, t)/D_k]
and the external entropy production (or entropy flow)
Ṡ_e(t) = ∫ dk ∫ dx (k̅ + k)xJ(x, k, t)/D_x+1/D_k∫ dk∫ dx μ k𝒥(x, k, t).
At steady-state, ∂_tP=0 and hence lim_t→∞Ṡ(t) vanishes, which implies the steady-state relation lim_t→∞Ṡ_i(t)=-lim_t→∞Ṡ_e(t). In general, the internal entropy production is the quantity of interest in the thermodynamic characterization of non-equilibrium stochastic processes. In what follows, we evaluate the integrals for the external entropy production at steady-state due to their simpler form, then employ the steady-state relation to evaluate the more classical thermodynamic quantity.
The dynamics for the stiffness are given by the equilibrium Ornstein-Uhlenbeck process, thus at steady-state the current 𝒥(k) = ∫ dx 𝒥(x, k) vanishes. The only contribution to Ṡ_e(t) at steady-state is therefore the first term, which we can re-write as
lim_t→∞Ṡ_e(t) = -⟨ ((k̅ + k)x)^2⟩/D_x + k̅,
where the average ⟨·⟩ is taken over the joint probability distribution P(x, k). Multiplying the Fokker-Planck equation by x^2, we integrate over x to derive an equation for the dynamics of the marginal variance Ξ(k,t)≡∫ dx x^2 P(x,k,t), with P^ tot(k,t)≡∫ dx P(x,k,t) the marginal stiffness distribution:
∂_t Ξ(k,t) = 2D_xP^ tot(k,t) - 2(k̅ +k) Ξ(k,t)+∂_k[D_k∂_kΞ(k,t) + μ kΞ(k,t)].
Integrating this last equation at steady-state with respect to k leads to ⟨ (k̅ + k) x^2⟩ = D_x.
We then multiply (<ref>) by k before again integrating over k to obtain
∫ dk [(k̅+k)^2/D_xΞ(k)] = k̅ + 1/2D_x∫ d k [ D_k∂_kΞ(k) + k∂_k[μ kΞ(k)]] .
Finally, we argue that the second term on the right-hand side of Eq. (<ref>) can be written as
1/2D_x∫ d k [ D_k∂_kΞ(k) + k∂_k[μ kΞ(k)]] = -μ/2D_x[⟨(k̅ + k) x^2⟩ - k̅⟨ x^2⟩],
noticing that the term proportional to D_k vanishes by imposing a sufficiently fast decay of Ξ(k) at k →±∞. Using ⟨ (k̅ + k) x^2 ⟩ = D_x, we conclude that the internal entropy production rate at steady-state can be expressed as
lim_t→∞Ṡ_i =μk̅/2D_x(⟨ x^2⟩ - D_x/k̅).
|
http://arxiv.org/abs/2405.09004v1 | 20240515000408 | Improving Sequential Market Clearing via Value-oriented Renewable Energy Forecasting | [
"Yufan Zhang",
"Honglin Wen",
"Yuexin Bian",
"Yuanyuan Shi"
] | eess.SY | [
"eess.SY",
"cs.LG",
"cs.SY"
] |
Improving Sequential Market Clearing via Value-oriented Renewable Energy Forecasting
Yufan Zhang, Honglin Wen, Yuexin Bian, Yuanyuan Shi
May 15, 2024
====================================================================================
Large penetration of renewable energy sources (RESs) brings huge uncertainty into the electricity markets. While existing deterministic market clearing fails to accommodate the uncertainty, the recently proposed stochastic market clearing struggles to achieve desirable market properties. In this work, we propose a value-oriented forecasting approach, which tactically determines the RESs generation that enters the day-ahead market. With such a forecast, the existing deterministic market clearing framework can be maintained, and the day-ahead and real-time overall operation cost is reduced. At the training phase, the forecast model parameters are estimated to minimize expected day-ahead and real-time overall operation costs, instead of minimizing forecast errors in a statistical sense. Theoretically, we derive the exact form of the loss function for training the forecast model that aligns with such a goal. For market clearing modeled by linear programs, this loss function is a piecewise linear function. Additionally, we derive the analytical gradient of the loss function with respect to the forecast, which inspires an efficient training strategy. A numerical study shows that our forecasts can bring significant overall cost reductions to deterministic market clearing, compared to a quality-oriented forecasting approach.
Keywords: Energy forecasting, Loss function, Forecast value, Market clearing, Decision-focused learning
§ INTRODUCTION
The current short-term electricity markets are organized in a sequence of trading floors, i.e., day-ahead (DA) and real-time (RT) markets <cit.>. A DA market is cleared 12-36 hours before the actual operation. A RT market runs close to the delivery time and addresses any imbalance from the DA schedules. They are initially designed for controllable fossil-fueled generators in the view of traditional power system operation. However, the increasing share of renewable energy sources (RESs) (up to 30% of global electricity generation in 2022 <cit.>) exposes the electricity markets to significant uncertainty and therefore raises concerns to the market operation <cit.>.
A significant challenge with sequential deterministic market clearing arises from its limited ability to address the uncertainty associated with RESs. The separation of DA and RT market clearings means that DA market clearing does not adequately consider the re-dispatch costs incurred due to RES uncertainty <cit.>. This often results in higher overall operation costs. Consequently, stochastic market clearing is proposed, which aims at informing the DA market with the operation cost in the RT market to reduce overall costs <cit.>.
Though economic efficiency can be improved, stochastic market clearing faces a challenge in simultaneously achieving important market properties, namely revenue adequacy and cost recovery <cit.>. Therefore, attempts have been made to uphold desirable market properties in stochastic market clearing. For example, <cit.> ensures cost recovery and revenue adequacy per scenario and in expectation, albeit at the expense of market efficiency.
Alternatively,
there has been a growing body of work focusing on tactically scheduling RESs within the current deterministic operation pipeline to emulate the performance of their stochastic program-based counterparts. The core idea is to retain a deterministic DA market clearing model but to tactically schedule RESs by accounting for the future outcomes in RT markets <cit.>. For example, studies <cit.> maintain the current deterministic clearing framework and strategically determine wind power dispatch by solving a bilevel program on a case-by-case basis during the operational phase. The study <cit.> utilizes these methods to allocate reserves according to predicted deviations in RESs.
While these methods show a decrease in operation costs, they might pose computational challenges in determining the RES schedule.
This motivates the exploration of the following technical question: Is it possible to train a forecast model (a function, not a fixed solution) that maps the context (e.g. features for RES forecast) to an appropriate RES dispatch schedule, allowing it to enter the deterministic DA market and minimize the overall DA and RT operation costs? In this way, the RES schedule can be conveniently issued by the forecast model during the operational phase. Aligning the training objective of a forecast model with the value of the operation criteria poses a challenge, and is encompassed within the realm of value-oriented forecasting <cit.>.
Several research threads have emerged to address this challenge, encompassing integrated optimization, differentiable programming, and the loss function design. In the first thread, forecast model parameters are optimized concurrently with decision variables. This integrated program is readily solved using commercial solvers <cit.>. Another approach, proposed by <cit.>, introduces a bilevel program. In this setup, DA operations form the lower level with the forecast as a parameter, while the upper level optimizes the model parameters alongside RT decisions. Notably, this method requires a linear forecast model, potentially limiting its performance. The second thread accommodates more sophisticated forecast models, such as neural networks, by obtaining the gradient of the optimal decision-making objective with respect to the forecast <cit.>. However, obtaining this gradient involves solving the inverse of Karush-Kuhn-Tucker (KKT) conditions, making it computationally expensive. The third thread, which is the primary focus of our work, centers around the design of loss functions. Existing loss functions, such as pinball loss <cit.> or SPO loss <cit.>, are primarily tailored for single-stage stochastic programs without redispatch decisions (such as those related to flexible units in the RT market). The design of a value-oriented loss function for sequential market clearing remains an open question.
In this study, our focus lies in the analytical formulation of a loss function specifically crafted for training a value-oriented RES forecast model, aligning with the objective of sequential market clearing. We formulate the parameter estimation task as a bilevel program, optimizing the forecast model parameter at the upper level and the operation decisions are determined at the lower level. Specifically, the DA and RT market clearing problems are solved at the lower level, with the RES forecast from the upper level serving as the input. To show the relationship between the operation cost and the RES forecast more clearly, we resort to the lower-level dual problems and replace the upper-level objective with the dual objectives. To obtain the loss function for training forecast models (whose input is the forecast, and output is the overall DA and RT operation cost), we transform the reformulated upper-level objective as an analytical function of the forecast. Given that the DA and RT market clearings are more general compared to the operational problems analyzed in <cit.>, the reformulated upper-level objective integrates not only the forecast but also the primal and dual solutions derived from the lower level. Hence, it is necessary to derive the functions linking the primal and dual solutions with the forecast. Concretely, we derive the function between dual solutions and forecasts by solving lower-level dual problems. The function between primal solutions and forecasts is derived via the active constraints of the lower-level primal problems. By substituting the primal and dual solutions with these functions, the upper-level objective transforms into a function of the forecast, serving as the loss function for training. Our main contributions are,
1) From a market perspective, the proposed approach maintains the deterministic market clearing framework, while minimizing the overall DA and RT operation cost with value-oriented RES forecasts.
2) From a theoretical perspective, we analytically derive a value-oriented loss function that aligns the training objective of a forecast model with the operation value, i.e., minimizing the overall operation cost.
3) From a practitioner perspective, the analytical loss function allows the analytical derivation of the gradient, which is computationally cheap compared to <cit.> and inspires a computationally efficient training approach.
The remaining parts of this paper are organized as follows. The preliminaries regarding the sequential market clearing are given in Section 2. Section 3 formulates a bilevel program for forecast model parameter estimation. Section 4 derives the loss function for value-oriented forecasting and the training process is presented in Section 5. Results are discussed and evaluated in Section 6, followed by the conclusions.
§ PRELIMINARIES
The framework and mathematical formulation of sequential market clearing are introduced in subsection <ref>, and the reformulation is presented in subsection <ref>.
§.§ Sequential Market Clearing
We consider the sequential clearing of DA and RT markets <cit.>, which is illustrated in Fig. <ref>. The DA market is cleared at time t on day d-1, with an advance of k hours in time to the next day d, and covers energy transactions on day d, typically on an hourly basis. Since the RES production is uncertain at the time of the DA market, the energy imbalance with respect to the DA production schedule needs to be settled in the RT market. In line with European practices, we do not incorporate binary decisions regarding unit commitment (UC). However, we note that UC is a requisite consideration in the U.S. DA and RT markets. To analyze the market behavior, the relaxed UC problem, where the binary commitment decisions are substituted with continuous ones, is widely used <cit.>. Our approach remains applicable to the relaxed problem. More details of the two markets can be found in <cit.>.
Concretely, in the DA market, the operator determines the schedules of generators and RES to satisfy inelastic demand. The generation and RES schedule for each time-slot τ,∀τ=1,...,24 on the next day d are denoted as p_d,τ and w_d,τ, respectively. The DA market clearing is formulated as,
min_Ξ_DA ∑_τ=1^Tρ^⊤p_d,τ
s.t. 1^⊤(p_d,τ+w_d,τ)=1^⊤l_d,τ,∀τ=1,...,T
-f̅≤H(p_d,τ+w_d,τ-l_d,τ)≤f̅,∀τ=1,...,T
p ≤p_d,τ ≤p̅,∀τ=1,...,T
r ≤p_d,τ-p_d,τ-1 ≤r̅,∀τ=2,...,T
0 ≤w_d,τ ≤ŷ_d,τ,∀τ=1,...,T,
where Ξ_DA={p_d,τ,w_d,τ}_τ=1^T. ρ is the marginal cost vector of
traditional generators. RES enters the market with zero marginal cost. Each element in the vector p_d,τ represents the power generated by a generator unit, whose marginal cost is in the corresponding element in ρ. In the case where generators submit stepwise marginal cost curves (which is an approximation of the linear marginal cost curve <cit.>), the elements in p_d,τ represent different generation blocks with different costs. Here, we assume there is no power loss on lines, and include a DC representation of the network. The equality constraint (<ref>) enforces the power balance conditions. For simplification, the demand l_d,τ is considered to be known with certainty. But we note that this simplification can be easily removed. In this way, the net demand (which is demand minus RES) is uncertain and required to be forecasted. The inequality constraints (<ref>) restrict the scheduled power flow within the line flow limits. H in (<ref>) is the shift factor matrix mapping the nodal power injection to the power flow on lines <cit.>. (<ref>) and (<ref>) are the output power and ramping limits of the generators. (<ref>) limits the DA schedule of RES up to the forecast ŷ_d,τ representing a single-value estimate of the RES production Y_d,τ. Y_d,τ is a random variable, since the RES production is unknown at the moment of DA market clearing. After solving (<ref>), the optimal solutions are obtained and denoted as {p^*_d,τ,w^*_d,τ}_τ=1^T.
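For illustration, a minimal sketch of this DA clearing LP is given below using cvxpy, with toy data (three generators, an aggregated RES, and no network constraints, i.e., the flow constraint (<ref>) is omitted); the numbers are ours and not the case-study values used later.

```python
import cvxpy as cp
import numpy as np

T, n_g = 24, 3
rho   = np.array([10.0, 15.0, 20.0])                 # marginal costs
p_max = np.array([100.0, 80.0, 60.0])
r_lim = np.array([30.0, 30.0, 30.0])                 # symmetric ramp limits
load  = 150 + 40*np.sin(np.linspace(0, 2*np.pi, T))  # total hourly demand
y_hat = 50 + 20*np.cos(np.linspace(0, 2*np.pi, T))   # RES forecast

p = cp.Variable((T, n_g), nonneg=True)
w = cp.Variable(T, nonneg=True)
cons = [cp.sum(p, axis=1) + w == load,               # power balance
        p <= np.tile(p_max, (T, 1)),                 # output limits
        cp.abs(p[1:] - p[:-1]) <= np.tile(r_lim, (T - 1, 1)),   # ramping
        w <= y_hat]                                  # RES capped by the forecast
da = cp.Problem(cp.Minimize(cp.sum(p @ rho)), cons)
da.solve()
print("DA cost:", round(da.value, 1))
```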
Since the RES production is uncertain in DA, the DA schedules are to be adjusted at each time-slot τ,∀τ=1,...,T in RT on day d, after the RES realization y_d,τ is observed. The RT market deals with the imbalance y_d,τ-w_d,τ^* caused by RES, with a minimized imbalance cost. Additionally, the RT market clearing at time-slot τ,∀τ=2,...,T is influenced not only by the DA clearing outcomes but also by the extent of power adjustments made in the preceding time-slot. This is due to the ramping constraints that interconnect adjacent time slots. Because the market clearing is conducted separately for each day, the RT market clearing at time-slot τ=1 on day d remains unaffected by any adjustments made at time-slot τ=T on the previous day d-1. In the following, we firstly give the mathematical formulation of the RT clearing at τ=1,
min_Ξ_d,τ^RT ρ_+^⊤p_d,τ^+-ρ_-^⊤p^-_d,τ
s.t. 1^⊤(p_d,τ^+-p^-_d,τ-κ_d,τ)=-1^⊤(y_d,τ-w^*_d,τ)
-f̅-H(p_d,τ^*+w^*_d,τ-l_d,τ)≤H(p_d,τ^+-p_d,τ^--κ_d,τ+y_d,τ-w^*_d,τ)≤f̅-H(p_d,τ^*+w^*_d,τ-l_d,τ)
0 ≤p_d,τ^+ ≤p^+
0 ≤p_d,τ^- ≤p^-
p ≤p_d,τ^*+p_d,τ^+-p_d,τ^- ≤p̅
0 ≤κ_d,τ ≤y_d,τ
where Ξ_d,τ^RT={p_d,τ^+,p_d,τ^-,κ_d,τ}. The output power of generators may be increased by an amount p_d,τ^+ with the marginal cost ρ_+ for up-regulation, or decreased by an amount p_d,τ^- with the marginal cost ρ_- for down-regulation. These decisions are driven by the need to settle the RES deviation y_d,τ-w_d,τ^* in (<ref>). (<ref>) is the power flow constraint, whose lower and upper bounds are determined by subtracting the power flow in the DA market from the line capacity. (<ref>) and (<ref>) limit the amount of up-regulation and down-regulation power to p^+,p^-. For inflexible generators that cannot be dispatched in RT, the corresponding elements in the upper bounds will be zero, resulting in zero up- and down-adjustments for those generators. Additionally, the eventual generation power, considering the DA schedule p_d,τ^* and the adjustment, should be within the output power limits, as stated in (<ref>). The inclusion of RES spill accounts for situations where the actual RES generation surpasses the scheduled amount in the DA schedule w_d,τ^*, and the excess cannot be entirely offset by the down-regulation power available from flexible generators. The amount of RES spill κ_d,τ can be at most to its realization y_d,τ, as stated in (<ref>).
The RT clearing at time-slot τ,∀τ=2,...,T is,
min_Ξ_d,τ^RT (<ref>)
s.t. (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>)
r ≤p_d,τ^*+p_d,τ^+-p_d,τ^- -(p_d,τ-1^*+p_d,τ-1^+*-p_d,τ-1^-*)≤r̅
The difference between the eventual generation at time-slot τ-1, denoted by p_d,τ-1^*+p_d,τ-1^+*-p_d,τ-1^-*, and the eventual generation at time-slot τ must satisfy the ramping constraints, as stated in (<ref>).
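Continuing the toy sketch above, the RT settlement at a single time slot can be written as follows (again illustrative data; the flow and ramping constraints are omitted for brevity, and only the reformulated output bounds are kept, with the minimum generation at zero).

```python
rho_up, rho_dn = rho + 5.0, rho - 5.0        # rho_+ > rho > rho_- elementwise
p_star, w_star = p.value[0], w.value[0]      # DA schedule at tau = 1
y_real = y_hat[0] - 15.0                     # RES falls 15 MW short of the forecast

pu = cp.Variable(n_g, nonneg=True)           # up-regulation
pd = cp.Variable(n_g, nonneg=True)           # down-regulation
kappa = cp.Variable(nonneg=True)             # RES spill
cons_rt = [cp.sum(pu - pd) - kappa == -(y_real - w_star),  # imbalance settlement
           pu <= p_max - p_star,             # reformulated output bounds
           pd <= p_star,                     # (p_min = 0 here)
           kappa <= y_real]
rt = cp.Problem(cp.Minimize(rho_up @ pu - rho_dn @ pd), cons_rt)
rt.solve()
print("RT cost:", round(rt.value, 2))
```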
To obtain the unique primal and dual solutions from DA and RT clearing, we require each element in the marginal cost vectors ρ,ρ_+,ρ_- are different. After solving the DA and RT market clearing, the eventual generation of the generators is either p_d,τ^*+p_d,τ^+* when the RES falls short of its scheduled production in RT, or p_d,τ^*-p_d,τ^-* when the RES generates more power than the schedule. Here, we define overall generation cost or the negative social surplus in a day.
We define overall generation cost in a day d as,
∑_τ=1^T ρ^⊤p^*_d,τ+ρ_+^⊤p^+*_d,τ-ρ_-^⊤p^-*_d,τ
= ∑_τ=1^T ρ^⊤ (p^*_d,τ+p^+*_d,τ-p^-*_d,τ)+(ρ_+-ρ)^⊤p^+*_d,τ+(ρ-ρ_-)^⊤p^-*_d,τ
ρ_+-ρ and ρ-ρ_- are the incremental bidding price, which reflects the marginal opportunity loss for up- and down-regulation <cit.>. We require them to be positive. In this way, the overall generation cost is the minimum if the generators can be dispatched to the eventual schedule in DA. Any RT adjustment would bring the extra cost either (ρ_+-ρ)^⊤p^+*_d,τ or (ρ-ρ_-)^⊤p^-*_d,τ. The incremental bidding prices of supplying upward and downward balancing power are usually different. This explains why forecasting the expectation hardly works well in reducing the overall cost, as it overlooks the typical asymmetry affecting the RT cost.
§.§ Mathematical Reformulation
In this subsection, we first convert the RT clearing in (<ref>) and (<ref>) into a mathematically equivalent form, and then give the compact form of DA and RT market clearing.
We reformulate the RT clearing in (<ref>). To show the upper and lower bounds of the power adjustment more clearly, we divide the constraint (<ref>) into two parts. Concretely, when the RES produces less power than the schedule, we have p_d,τ^+ ≥0,p_d,τ^- = 0. Conversely, when the RES produces more power than the schedule, we have p_d,τ^- ≥0,p_d,τ^+ = 0. We divide (<ref>) into the following two constraints by the two cases,
p-p_d,τ^* ≤p_d,τ^+ ≤p̅-p_d,τ^*
p_d,τ^*-p̅≤p_d,τ^- ≤p_d,τ^*-p
Since p≤p_d,τ^* ≤p̅, the left sides of (<ref>) are less than 0. Considering that the power adjustments p_d,τ^+,p_d,τ^- are larger than 0 as stated in (<ref>) and (<ref>), (<ref>) can be further simplified as,
0≤p_d,τ^+ ≤p̅-p_d,τ^*
0≤p_d,τ^- ≤p_d,τ^*-p
The RT market clearing at time τ=1 becomes,
min_Ξ_d,τ^RT (<ref>)
s.t. (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>)
Likewise, (<ref>) can be equivalently reformulated as,
0≤p_d,τ^+ ≤r̅+(p_d,τ-1^*+p_d,τ-1^+*-p_d,τ-1^-*)-p_d,τ^*
0≤p_d,τ^- ≤p_d,τ^*-(p_d,τ-1^*+p_d,τ-1^+*-p_d,τ-1^-*)-r
The RT market clearing at time τ=2,...,T becomes,
min_Ξ_d,τ^RT (<ref>)
s.t. (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>)
Next, we convert the DA market clearing in (<ref>) and the RT market clearing in (<ref>) and (<ref>), which are linear programs, into equivalent compact forms, with the dual variable listed after the colon. Concretely, the compact DA market clearing is,
x_d^*= min_x_d ρ_DA^⊤ x_d
s.t. G_DAx_d≤ψ_DA,d+F_DA^yŷ_d:σ_d.
where x_d=[x_d,τ]_τ=1^T=[p_d,τ;w_d,τ]_τ=1^T is the collection of DA decision variables. Its optimal solution is denoted as x_d^*=[x_d,τ^*]_τ=1^T=[p_d,τ^*;w_d,τ^*]_τ=1^T. The coefficients ρ_DA,G_DA,ψ_DA,d,F_DA^y are constant, whose specific forms are provided in Appendix <ref>. The RES forecasts and the demand are summarized into vectors ŷ_d=[ŷ_d,τ]_τ=1^T and l_d=[l_d,τ]_τ=1^T, respectively.
The value of ψ_DA,d varies from day to day due to its dependence on the demand l_d. Likewise, the RT market clearing in (<ref>) and (<ref>) are converted into the compact forms,
z_d,τ^*= min_z_d,τ ρ_RT^⊤ z_d,τ
s.t. G_RTz_d,τ≤ψ_RT,d,τ+F_RTx_d,τ^*:
ν_d,τ,∀τ=1.
z_d,τ^*= min_z_d,τ ρ_RT^⊤ z_d,τ
s.t. G_RT^'z_d,τ≤ψ_RT,d,τ^'+[F_RT^'x,F_RT^'p,F_RT^'+-]
[x^*_d,τ,p^*_d,τ-1,p^+-*_d,τ-1]^⊤: ζ_d,τ,∀τ=2,...,T.
where z_d,τ=[p_d,τ^+;p_d,τ^-;κ_d,τ] is the collection of RT decision variables. The coefficients ρ_RT,G_RT,ψ_RT,d,τ,F_RT and G^'_RT,ψ^'_RT,d,τ,F^' x_RT,F^' p_RT,F^' +-_RT are constant and provided in the Appendix <ref>. The values of ψ_RT,d,τ,ψ_RT,d,τ^' vary from hour to hour due to the dependence on the RES realization y_d,τ. The parameter p^+-*_d,τ-1=[p^+*_d,τ-1;p^-*_d,τ-1] is the collection of RT power adjustment for up- and down- regulation at previous time τ-1.
§ METHODOLOGY: PARAMETER ESTIMATION
In this section, a bilevel program <cit.> is formulated for estimating the forecast model parameter at the training phase. Let g( · ;Θ) denote the forecast model with the parameter Θ, and s_d,τ denote the context. The RES forecast for the time-slot τ on day d is issued by,
ŷ_d,τ= g(s_d,τ;Θ)
where ŷ_d,τ denotes the forecast at the training phase, with Θ given by any value. Data in the training set {{s_d,τ,y_d,τ}_τ=1^T}_d=1^D is available, which consists of the historical context and RES realizations over D days. An illustration of the bilevel program is shown in Fig. <ref>. The upper level determines the model parameter Θ, while the lower level involves the DA and RT market clearings. The bilevel program is mathematically formulated as,
Θmin 1/D·T∑_d=1^D{ρ_DA^⊤x_d^*+∑_τ=1^Tρ_RT^⊤z_d,τ^*}
s.t.
ŷ_d,τ=g(s_d,τ;Θ),∀τ=1,...,T,∀d=1,...,D
0 ≤ŷ_d,τ ≤y̅_d,τ,∀τ=1,...,T,∀d=1,...,D
(<ref>),∀d=1,...,D
(<ref>),∀τ=1,∀d=1,...,D
(<ref>),∀τ=2,...,T,∀d=1,...,DLower level
where the upper-level objective (<ref>) seeks to minimize the expected overall operation cost of the two markets. This is achieved by leveraging the optimal DA and RT cost functions, which are informed by the decisions obtained from the lower level (<ref>). (<ref>) limits the forecast ŷ_d,τ within y̅_d,τ, which can be RES capacity or its offering quantity. The lower level treats the forecast ŷ_d,τ as an input parameter. As a consequence, both DA and RT decisions are affected by it.
To show the impact of the forecast on the operation cost more clearly, we replace the lower level with the dual problems. The overall operation cost within the upper-level objective is then substituted with the DA and RT dual objectives. These objectives are constructed as a linear combination of the right-hand-side parameters and the associated dual variables, i.e.,
Θmin 1/D·T∑_d=1^D{-σ_d^*⊤(ψ_DA,d+F_DA^yŷ_d)
-ν_d,1^*⊤(ψ_RT,d,1+F_RTx_d,1^*)+
∑_τ=2^T-ζ_d,τ^*⊤(ψ_RT,d,τ^'+F_RT^'xx_d,τ^*+F_RT^'pp_d,τ-1^*+
F_RT^'+-p_d,τ-1^+-*)}
s.t.
ŷ_d,τ=g(s_d,τ;Θ),∀τ=1,...,T,∀d=1,...,D
0 ≤ŷ_d,τ ≤y̅_d,τ,∀τ=1,...,T,∀d=1,...,D
σ_d^*=max_σ_d≥0-σ_d^⊤(ψ_DA,d+F_DA^yŷ_d),∀d=1,...,D
(<ref>),∀d=1,...,D
ν_d,1^*=max_ν_d,1 ≥0-ν_d,1^⊤(ψ_RT,d,1+F_RTx_d,1^*),
∀τ=1,∀d=1,...,D
(<ref>),∀τ=1,∀d=1,...,D
ζ_d,τ^*=max_ζ_d,τ ≥0-ζ_d,τ^⊤(ψ_RT,d,τ^'+F_RT^'xx_d,τ^*+F_RT^'pp_d,τ-1^*+
F_RT^'+-p_d,τ-1^+-*),∀τ=2,...,T,∀d=1,...,D
(<ref>),∀τ=2,...,T-1,∀d=1,...,D
Since RT clearing requires the primal solutions of DA clearing and the previous RT clearing as input parameters, we also include the primal problems in the lower level. The forecast ŷ_d affects the upper-level objective (<ref>) via its impact on the DA and RT dual solutions σ_d^*,ν_d,1^*,ζ_d,τ^*, and their primal solutions x^*_d,τ,p^*_d,τ-1,p^+-*_d,τ-1. If we can obtain the function between them and the forecast ŷ_d directly, the upper-level objective can be rewritten as a function regarding the forecast ŷ_d, and can be used as the loss function for training. In the next section, we will show how to achieve this.
At the operation phase, the forecast ŷ_d,τ under the context s_d,τ can be obtained by the trained model. Subsequently, utilizing this forecast, the operator proceeds to solve the DA market clearing in (<ref>), followed by the RT market clearing in (<ref>) and (<ref>).
§ LOSS FUNCTION DESIGN
We derive the loss function based on the bilevel program in (<ref>). We first analyze how forecasts influence the DA and RT optimal solutions. Based on it, a loss function is derived, and will be used for training a forecast model.
§.§ The Impact of the Forecast on the DA and RT Solutions
This section analyzes the impact of the forecasts on the primal and dual solutions of DA and RT market clearings. Specifically, we derive analytical functions that quantitatively depict the impact of the forecasts on the optimal primal and dual solutions. As a parameter to the DA market clearing (<ref>), forecasts ŷ_d influence DA primal and dual solutions directly. As for the RT market clearing, the impact of the forecast is indirect, as it does not directly
appear as the parameter in (<ref>) or (<ref>). Concretely, for the RT market clearing at time-slot τ=1, the forecast ŷ_d influences x_d,1^*, and then x_d,1^* influences the RT solutions, with the first impact being determined by the DA clearing (<ref>), and the second impact by the RT clearing (<ref>). For the RT clearing at time-slot τ=2,...,T, the impact of the forecast is complex. The forecast ŷ_d affects the DA solutions x_d,τ^*,p_d,τ-1^* through the DA clearing (<ref>). Its influence on the RT solutions p_d,τ-1^+-* at the previous time-slot τ-1 occurs as explained earlier. Then, the influence of the parameters x_d,τ^*,p^*_d,τ-1,p^+-*_d,τ-1 on the RT clearing solutions at time τ is determined by (<ref>). The above process is summarized in Fig. <ref>.
Since the core of the above analysis is understanding the impact of the parameters on the optimization solutions, we use the multiparametric theory for this end. A general linear program (<ref>) is used as an example. We firstly define primal and dual decision policies. Then, the theorem regarding them is presented.
x^*= min_x c^⊤x
s.t. Gx≤ψ+Fω:σ.
(Primal and dual decision policies)
Primal and dual decision policies are functions defined across the polyhedral set Ω, which describe the change in the optimal primal and dual solutions, i.e., x^* and σ^*, as the parameter ω varies in Ω.
<cit.> Consider the linear program (<ref>) and the parameter ω∈Ω. The primal and dual decision policies are a piecewise linear function and a stepwise function, respectively: there exists a polyhedral partition R_1,...,R_N of Ω such that, ∀ω∈ R_i, the primal decision policy is linear and the dual decision policy is a constant function.
Theorem <ref> implies that in a neighborhood of the parameter, the primal decision policy is represented by a linear function, whereas the corresponding dual decision policy remains a constant function. Given a specific value of w, we study the local policies defined in its neighborhood. The local dual decision policy can be obtained easily, as it is a constant function. Its output is the optimal dual solution obtained by solving the dual problem of (<ref>), given the value of ω. Additionally, after solving (<ref>), the active constraints of (<ref>) can be obtained. Let 𝒥 denote the row index set associated with (<ref>), and 𝒥^a denote the row index set of the active constraints. The parameters associated with the active constraints are denoted as G_𝒥^a,ψ_𝒥^a,F_𝒥^a. They are the sub-matrices and sub-vectors of G,ψ,F and are comprised by the rows of G,ψ,F in the row index sets 𝒥^a. We have the following proposition for the local primal decision policy,
The local primal decision policy of (<ref>) is,
x^*=G_𝒥^a^-1(ψ_𝒥^a+F_𝒥^aω)
With the active constraints of (<ref>), we have G_𝒥^ax^*=ψ_𝒥^a+F_𝒥^aω. G_𝒥^a^-1 is the pseudo inverse of G_𝒥^a. Multiplying both sides by G_𝒥^a^-1 from the left yields (<ref>).
(<ref>) calculates the inverse of G_𝒥^a, whose computational complexity depends on the matrix size, i.e., |𝒥^a|. We note that a similar matrix inverse is also involved in <cit.>, with the matrix size of |𝒥|+ι, where ι is the dimension of x. There is |𝒥|+ι > |𝒥^a|. Therefore, the computation burden of <cit.> is larger.
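The construction in Proposition <ref> is easy to reproduce numerically; the toy example below (hypothetical LP data of our choosing) solves (<ref>), reads off the active rows, and builds the local affine policy from their pseudo-inverse.

```python
import numpy as np
from scipy.optimize import linprog

c   = np.array([-1.0, -2.0])
G   = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
psi = np.array([0.0, 0.0, 1.0])
F   = np.array([[0.0], [0.0], [1.0]])
omega = np.array([0.5])

res = linprog(c, A_ub=G, b_ub=psi + F @ omega, bounds=[(None, None)]*2)
active = np.isclose(G @ res.x, psi + F @ omega)      # active index set J^a
Ga_pinv = np.linalg.pinv(G[active])
x_policy = lambda om: Ga_pinv @ (psi[active] + F[active] @ om)
print(res.x, x_policy(omega))                        # coincide at omega
print(x_policy(np.array([0.6])))                     # local affine response
```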
With Proposition <ref>, we present the local primal decision policy for the DA clearing (<ref>), and RT clearing (<ref>) and (<ref>). Let 𝒥^a_DA,d,𝒥^a_RT,d,τ,∀τ=1,...,T denote the row index set of active constraints (<ref>), (<ref>), and (<ref>). The parameters associated with the active constraints are denoted as G_DA,𝒥_DA,d^a,ψ_DA,d,𝒥_DA,d^a,F_DA,𝒥_DA,d^a^y, G_RT,𝒥_RT,d,τ^a,ψ_RT,d,τ,𝒥_RT,d,τ^a,F_RT,𝒥_RT,d,τ^a, and G_RT,𝒥_RT,d,τ^ a^',ψ_RT,d,τ,𝒥_RT,d,τ^ a^',F_RT,𝒥_RT,d,τ^ a^' x,F_RT,𝒥_RT,d,τ^ a^' p,F_RT,𝒥_RT,d,τ^ a^' +-, respectively. We have the following proposition for the local primal decision policies of (<ref>), (<ref>), and (<ref>).
The local primal decision policies of (<ref>),(<ref>) and (<ref>) are,
x_d^*=G_DA,𝒥_DA,d^a^-1(ψ_DA,d,𝒥_DA,d^a+F_DA,𝒥_DA,d^a^yŷ_d)
z_d,τ^*=G_RT,𝒥_RT,d,τ^a^-1(ψ_RT,d,τ,𝒥_RT,d,τ^a+F_RT,𝒥^a_RT,d,τx_d,τ^*),∀τ=1
z_d,τ^*=G_RT,𝒥_RT,d,τ^ a^' -1(ψ^'_RT,d,τ,𝒥_RT,d,τ^ a+F_RT,𝒥_RT,d,τ^ a^' xx_d,τ^*+
F_RT,𝒥_RT,d,τ^ a^' pp_d,τ-1^*+F_RT,𝒥_RT,d,τ^ a^' +-p_d,τ-1^+-*),∀τ=2,...,T
Eq. (<ref>) is a linear function of ŷ_d, whose output is the DA solution x_d^* on the day d. In the following, we show how to rewrite (<ref>) and (<ref>) as a function of the forecast ŷ_d.
In addition, we define an operator Π_𝒥: h(x) ↦h̃(x), where h(x)=Ax+b is a linear function with the parameters A,b, and 𝒥 is a row index subset of A,b. The output h̃(x) of the operator is also a linear function, h̃(x)=A_𝒥x+b_𝒥, where A_𝒥,b_𝒥 are the sub-matrix and sub-vector comprised of the rows of A,b in the index set 𝒥.
Let ℐ_DA,d,τ denote the row index set corresponding to x_d,τ^* within x_d^*. The function that maps ŷ_d to x_d,τ^* is,
f^x_d,τ(ŷ_d):=x_d,τ^*=Π_ℐ_DA,d,τ(G_DA,𝒥_DA,d^a^-1F_DA,𝒥_DA,d^a^yŷ_d+
G_DA,𝒥_DA,d^a^-1ψ_DA,d,𝒥_DA,d^a),∀τ=1,...,T
The coefficients are determined by the coefficients of (<ref>), whose row indexes belong to the set ℐ_DA,d,τ. By substituting (<ref>) into (<ref>), we can obtain the function between RT primal solution z_d,τ^* at time-slot τ=1 and the forecast ŷ_d.
f_d,τ^z(ŷ_d):=z_d,τ^*=G_RT,𝒥_RT,d,τ^a^-1F_RT,𝒥^a_RT,d,τf_d,τ^x(ŷ_d)+
G_RT,𝒥_RT,d,τ^a^-1ψ_RT,d,τ,𝒥_RT,d,τ^a,∀τ=1
To obtain the function between RT primal solution z_d,τ^* at time-slot τ=2,...,T and the forecast ŷ_d, we need to obtain the function between DA solution p_d,τ-1^*, RT solution p_d,τ-1^+-* and the forecast ŷ_d as well. Concretely, since p_d,τ-1^* is a part of x_d,τ-1^*, let ℐ_DA,d,τ-1^p be the row index set corresponds to p_d,τ-1^* within x_d,τ-1^*. With (<ref>), the function between p_d,τ-1^* and the forecast ŷ_d is,
f^p_d,τ-1(ŷ_d):=p_d,τ-1^*=Π_ℐ_DA,d,τ-1^p(f_d,τ-1^x(ŷ_d)),
∀τ=2,...,T
Likewise, since p_d,τ-1^+-* is a part of z_d,τ-1^*, let ℐ_RT,d,τ-1^+- be the row index set corresponds to p_d,τ-1^+-* within z_d,τ-1^*. We can express a function of p_d,τ-1^+-* w.r.t. ŷ_d as,
f_d,τ-1^+-(ŷ_d):=p_d,τ-1^+-*=Π_ℐ_RT,d,τ-1^+-(f_d,τ-1^z(ŷ_d)),
∀τ=2,...,T
Accordingly, by substituting (<ref>), (<ref>), (<ref>) into (<ref>), the function between z_d,τ^* at time-slot τ=2,...,T is,
f^z_d,τ(ŷ_d):=
z_d,τ^*=G_RT,𝒥_RT,d,τ^ a^' -1(ψ^'_RT,d,τ,𝒥_RT,d,τ^ a+F_RT,𝒥_RT,d,τ^ a^' xf_d,τ^x(ŷ_d)+
F_RT,𝒥_RT,d,τ^ a^' pf_d,τ-1^p(ŷ_d)+F_RT,𝒥_RT,d,τ^ a^' +-f_d,τ-1^+-(ŷ_d)),
∀τ=2,...,T
To sum up, the function between the forecast ŷ_d and DA and RT primal solutions, which is defined in the neighborhood of ŷ_d, are summarized in the (<ref>)-(<ref>). The function between the optimal dual solutions and the forecast ŷ_d is a constant function, whose output can be obtained by solving the dual problems of (<ref>),(<ref>), and (<ref>). With these, we are ready to transform the upper-level objective (<ref>) to a function of the forecast ŷ_d. The details are in the next subsection.
§.§ Loss Function Derivation
In this subsection, we derive the loss function in the neighborhood of the forecast ŷ_d. By substituting the functions (<ref>),(<ref>),(<ref>) and dual solutions into the upper-level objective (<ref>), the loss function in the neighborhood of the forecast ŷ_d is,
ℓ_d(ŷ_d):=-σ_d^*⊤(ψ_DA,d+F_DA^yŷ_d)-ν_d,1^*⊤(ψ_RT,d,1+
F_RTf_d,1^x(ŷ_d))+∑_τ=2^T-ζ_d,τ^*⊤(ψ_RT,d,τ^'+F_RT^' xf_d,τ^x(ŷ_d)+
F_RT^' pf_d,τ-1^p(ŷ_d)+F_RT^' +-f_d,τ-1^+-(ŷ_d))
(<ref>),(<ref>),(<ref>)
Since functions in (<ref>) are linear, and the dual decision policies are constant, the loss function ℓ_d(ŷ_d) in the neighborhood of the forecast ŷ_d is linear, and output the overall operation cost (<ref>) given the forecast ŷ_d. Naturally, the derivative of ℓ_d(ŷ_d) w.r.t. the forecast ŷ_d, i.e., ℓ_d(ŷ_d)/ŷ_d, measures the marginal impact of the forecast on the overall cost. It is derived as,
(ℓ_d(ŷ_d)/ŷ_d)^⊤=-σ_d^*⊤F_DA^y-ν_d,1^*⊤F_RT∂ f_d,1^x(ŷ_d)/∂ŷ_d+∑_τ=2^T-ζ_d,τ^*⊤(F_RT^' x∂ f_d,τ^x(ŷ_d)/∂ŷ_d+F_RT^' p∂ f_d,τ-1^p(ŷ_d)/∂ŷ_d+F_RT^' +-∂ f_d,τ-1^+-(ŷ_d)/∂ŷ_d)
According to Theorem <ref>, the loss function defined across the entire space of ŷ_d is expected to be a piecewise linear function. Specifically, each piece is associated with a different collection of active constraint index sets 𝒥^a_DA,d,𝒥^ a_RT,d,τ,∀τ=1,...,T of the DA and RT market clearings. Such index sets then determine the primal decision policies (<ref>)-(<ref>), and the following functions in (<ref>)-(<ref>). It is possible to enumerate all possible active constraint index sets, and derive the corresponding loss function <cit.>. However, implementing such a practice can be computationally expensive, particularly when dealing with large-scale optimization problems. We notice that the derivatives ∂ f_d,τ^x(ŷ_d)/∂ŷ_d, ∂ f_d,τ-1^p(ŷ_d)/∂ŷ_d, ∂ f_d,τ-1^+-(ŷ_d)/∂ŷ_d in (<ref>) associated with the active index sets are constants, as (<ref>),(<ref>),(<ref>) are linear functions.
This suggests that there is no need to recalculate these derivatives when encountering the same active index sets during the training. For that, we propose a solution strategy, where the derivatives are recalculated only when encountering new ones.
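The buffering idea can be implemented with a simple dictionary keyed by the active-set signature; the miniature below (toy matrices, not the market-clearing ones) shows that the Jacobian for a given region is computed only once.

```python
import numpy as np

buffer = {}                                   # active-set signature -> Jacobian

def local_jacobian(active_key, Ga, Fa):
    """d x*/d y for a given active set, computed at most once."""
    if active_key not in buffer:
        print("new active set:", active_key)  # triggers one pseudo-inverse
        buffer[active_key] = np.linalg.pinv(Ga) @ Fa
    return buffer[active_key]

Ga = np.array([[-1.0, 0.0], [1.0, 1.0]])
Fa = np.array([[0.0], [1.0]])
J1 = local_jacobian((0, 2), Ga, Fa)           # computed
J2 = local_jacobian((0, 2), Ga, Fa)           # served from the buffer
print(np.allclose(J1, J2))
```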
§ SOLUTION STRATEGY
We illustrate the training phase of the forecast model based on neural networks (NNs). With the loss function, we use batch optimization to train the NN. Given a batch of data over B days, the parameter estimation with the derived loss function is formulated as,
Θmin 1/B·T∑_d=1^Bℓ_d(ŷ_d)
s.t.
ŷ_d,τ=g(s_d,τ;Θ),∀τ=1,...,T,∀d=1,...,B
0 ≤ŷ_d,τ ≤y̅_d,τ,∀τ=1,...,T,∀d=1,...,B
Unlike the conventional unconstrained program at the training phase, (<ref>) includes the box constraint (<ref>) on the NN output. We design a specific model structure to address this. Specifically, a Sigmoid function, whose output is between 0 and 1, is used as the activation function at the output layer. By multiplying its output with the cap y̅_d,τ of each sample, the box constraint (<ref>) is satisfied. The NN structure is illustrated in Fig. <ref>.
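A sketch of this output design in PyTorch is given below (layer sizes and the feature dimension are illustrative); the Sigmoid output scaled by the per-sample cap enforces (<ref>) by construction.

```python
import torch
import torch.nn as nn

class ValueOrientedForecaster(nn.Module):
    def __init__(self, n_features, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU(),
                                 nn.Linear(n_hidden, 1), nn.Sigmoid())

    def forward(self, s, y_cap):
        # output in [0, y_cap]: the box constraint holds by construction
        return self.net(s).squeeze(-1) * y_cap

model = ValueOrientedForecaster(n_features=4)
s = torch.randn(8, 4)
y_cap = torch.full((8,), 105.0)      # e.g. wind capacity in MW
print(model(s, y_cap))               # forecasts bounded by the cap
```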
During the training, the NN, with the parameter Θ given by any value, outputs the forecast ŷ_d,τ by (<ref>). The primal and dual problems of DA and RT clearing in (<ref>),(<ref>),(<ref>) are solved. The active constraint index sets 𝒥^a_DA,d,𝒥^a_RT,d,τ,∀τ=1,...,T and the dual solutions σ_d^*,ν_d,1^*,ζ_d,τ^*,∀τ=2,...,T are obtained. We check whether they are new ones. If so, we calculate the derivatives ∂ f_d,τ^x(ŷ_d)/∂ŷ_d, ∂ f_d,τ-1^p(ŷ_d)/∂ŷ_d, ∂ f_d,τ-1^+-(ŷ_d)/∂ŷ_d associated with them, and calculate the gradient ℓ_d(ŷ_d)/ŷ_d in (<ref>). Additionally, we store the new active index sets and the associated derivatives in the buffers. If not, the stored elements in the buffers are used for calculating the gradient. The training process is summarized in Fig. <ref>.
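Since the loss and its gradient are computed outside the automatic differentiation graph, they can be plugged into backpropagation through a custom autograd Function; the runnable sketch below uses a toy stand-in for the market-clearing computation of ℓ_d(ŷ_d) and its gradient (<ref>).

```python
import torch

def solve_and_grad(y):
    # toy stand-in: piecewise-linear loss sum|y - 1| with gradient sign(y - 1);
    # in the actual pipeline this is the DA/RT solve plus the analytical gradient
    grad = torch.where(y > 1.0, torch.ones_like(y), -torch.ones_like(y))
    return (y - 1.0).abs().sum(), grad

class MarketLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, y_hat):
        loss, grad = solve_and_grad(y_hat.detach())
        ctx.save_for_backward(grad)
        return loss

    @staticmethod
    def backward(ctx, grad_output):
        (grad,) = ctx.saved_tensors
        return grad_output * grad        # chain the stored dl/dy into autograd

y_hat = torch.tensor([0.5, 2.0], requires_grad=True)
MarketLoss.apply(y_hat).backward()
print(y_hat.grad)                        # tensor([-1., 1.])
```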
§ CASE STUDY
We consider a modified version of the IEEE 9-bus system <cit.>. As shown in Fig. <ref>, the system consists of 3 loads, 2 wind farms, and 3 generators (G_1,G_2,G_3) whose generation needs to be settled in the DA market and can be adjusted for providing up- and down-regulation power in the RT market.
The generators submit the marginal generation cost ρ, the minimum generation power p, the maximum generation power p̅, and the ramping limits r,r̅ in the DA market, which are provided in Table <ref>. The marginal generation costs ρ_+,ρ_- for the power adjustment, the marginal opportunity losses ρ_+-ρ,ρ-ρ_-, and the adjustment limits p^+ and p^-, which the generators submit in the RT market, are provided in Table <ref> as well. The yearly demand consumption data is used, with a valley value of 210 MW, and a peak value of 265 MW. The hourly wind power production in the year of 2012 from GEFCom 2014 is used, whose range is from 0 to 1. The wind data is scaled by multiplying a constant according to the considered wind generation capacity, which will be discussed in the following sections. The demand and wind data can be found in <cit.>.
We use a four-layer ResNet as the forecast model, which has 256 hidden layer units. Its structure is described in Figure <ref>. The input context consists of the numeric weather prediction (i.e., the predicted wind speed and direction at 10m and 100m altitude) of each wind farm in the system.
We use Root Mean Squared Error (RMSE) on the test set for assessing the forecast quality, and the average overall operation cost on the test set, as defined in (<ref>), for evaluating the operation value. Four benchmark models are used for comparison: two quality-oriented approaches, one value-oriented forecasting approach, and a stochastic program. The two quality-oriented forecast models are trained using Mean Squared Error (MSE) and pinball loss (an asymmetric loss function), denoted as Qua-E and Qua-Q, respectively. Specifically, Qua-E provides predictions for expected wind power, while Qua-Q offers quantile predictions. The Light Gradient Boosting Machine (LightGBM), the winning method of GEFCom 2014 <cit.>, is used for issuing quantiles. We consider the value-oriented forecast model trained via OptNet <cit.> as a benchmark, referred to as OptNet. Lastly, the stochastic program, with 50 wind power scenarios obtained by k-nearest-neighbors, is considered and denoted as Sto-OPT. For each sample on the test set, the stochastic program is solved for settling the schedule of generators and wind power in DA. Then, the adjustments in (<ref>) and (<ref>) are performed in RT.
§.§ The Operational Advantage
The capacity of each of the two wind farms is set to 105 MW, which amounts to 79% of the maximum demand. The nominal level of the Qua-Q quantile is chosen as 1/16. Such a nominal level is determined by (ρ^1-ρ_-^1)/(ρ_+^1-ρ_-^1) <cit.>, where ρ^1,ρ_-^1,ρ_+^1 represent the marginal costs of G_1. The results of RMSE and average operation cost, along with training time per NN epoch and test time, are reported in Table <ref>.
Since Sto-OPT does not need training or rely on a point forecast, its RMSE is not reported. Sto-OPT serves as the ideal benchmark <cit.>, which has the least average operation cost. The proposed approach outperforms all other methods in terms of average operation cost on the test set, except for Sto-OPT. However, its test time is much shorter than Sto-OPT, demonstrating computational efficiency. Also,
we observe that the performance of Sto-OPT is heavily influenced by the number of scenarios used. When fewer scenarios, such as 20, are employed, the average operation cost on the test set increases to $84,478, which is even worse than that achieved by the proposed approach.
The proposed approach exhibits a higher RMSE compared to Qua-E. This underscores the point that more accurate forecasts do not always translate to better operational performance. Additionally, since the incremental bidding price ρ_+-ρ is larger than ρ-ρ_-, the marginal opportunity loss of up-regulation is larger than that of down-regulation. Therefore, Qua-Q, which issues quantile forecasts with a low nominal level (1/16), performs better than Qua-E. As for the training time, the proposed approach requires a longer training time than Qua-E due to the more complex computation involved in calculating the gradient during the training process, but it is still acceptable, and much shorter than that of the value-oriented forecasting approach OptNet.
§.§ The Sensitivity Analysis
In this section, we compare the proposed approach against Qua-E under different wind power capacities and adjustment cost ρ^+ for up-regulation.
§.§.§ Performance under Different Wind Power Penetration
Here, different wind power capacities are considered, i.e., 85 MW, 95 MW, and 105 MW per wind farm. Fig. <ref> shows the average operation cost of the proposed approach and Qua-E under different wind power capacities. Under large wind power capacity, the average cost reduction of the proposed approach is more obvious. For instance, such a reduction is 2.4% and 2.9%, respectively, under the wind power capacity of 85 MW and 105 MW. Therefore, the proposed approach has larger operation benefits under large penetration of wind power.
§.§.§ Performance under Different Up-regulation Cost in RT
The performance is further tested under various up-regulation costs, along with the marginal opportunity loss ρ_+-ρ for up-regulation. The marginal opportunity loss ρ-ρ_- for down-regulation is the same as in Table <ref>. The capacity of wind power is set to 105 MW. The average operation cost of the proposed approach and Qua-E under two settings are listed in Table <ref>. When the RT market lacks flexibility for up-regulation, the up-regulation cost is high, and the marginal opportunity loss for up-regulation is much larger than that for down-regulation. Therefore, the proposed approach tends to forecast less power than Qua-E to mitigate the risk of costly up-regulation, and results in a significant cost reduction (8%). When the up-regulation cost is similar to the DA marginal generation cost ρ, the marginal opportunity loss for up-regulation is lower than that for down-regulation. Therefore, the proposed approach tends to forecast more power to mitigate the risk of costly down-regulation. In such a case, since the marginal opportunity losses of up- and down-regulation are very similar, the cost reduction of the proposed approach is less significant. The forecast profiles for 6 days of the wind farm at node 5 under the two settings are given in Fig. <ref>.
To sum up, the operation advantage of the proposed approach is more evident, under large penetration of wind power, and high up-regulation cost.
§ CONCLUSIONS
We propose a value-oriented renewable energy forecasting approach, for minimizing the expected overall operation cost in the existing deterministic market clearing framework. We analytically derive the loss function for value-oriented renewable energy forecasting in sequential market clearing. The loss function is proved to be piecewise linear when the market clearing is modeled by linear programs. Additionally, we provide the analytical gradient of the loss function with respect to the forecast, which leads to an efficient training strategy. In the case study, compared to the quality-oriented forecasting approach trained with MSE, the proposed approach reduces the average operation cost on the test set by 2.9%. Such an advantage is more obvious under large wind power capacity and high up-regulation costs. Under high up-regulation costs, our approach can reduce the cost by up to 8%. Future work will include attempts to derive the loss function for market clearing modeled by other types of optimization programs, such as quadratic optimization or conic optimization.
§ DETAILS OF COEFFICIENTS IN DAY-AHEAD AND REAL-TIME CLEARINGS
We first provide the details of coefficients in DA clearing (<ref>). For that, we turn each constraint in (<ref>) into the compact form. Let H̅=[H,H] denote the horizontal stack of the matrix H, and let E̅=[E,O], Ê=[O,E] be the stacks of an identity matrix and an all-zero matrix. Next, we define the following matrices, which are used for forming the coefficients in (<ref>),
D_1=[ 1^⊤ 0^⊤ … 0^⊤; 0^⊤ 1^⊤ … 0^⊤; ⋮ ⋮ ⋱ ⋮; 0^⊤ 0^⊤ … 1^⊤ ],
D_2=[ H O … O; O H … O; ⋮ ⋮ ⋱ ⋮; O O … H ],
D_3=[ E O … O; O E … O; ⋮ ⋮ ⋱ ⋮; O O … E ],
D_4=[ -E E … O; O -E … O; ⋮ ⋮ ⋱ ⋮; O O … E ],
D_5=[ Ê O … O; O Ê … O; ⋮ ⋮ ⋱ ⋮; O O … Ê ],
d_1=[ 1^⊤l_d,1; ⋮; 1^⊤l_d,T ],
d_2=[ f̅+Hl_d,1; ⋮; f̅+Hl_d,T ],
d_3=[ f̅-Hl_d,1; ⋮; f̅-Hl_d,T ],
d_4=[ p̅; ⋮; p̅ ], d_5=[ -p; ⋮; -p ], d_6=[ r̅; ⋮; r̅ ], d_7=[ -r; ⋮; -r ].
G_DA and ψ_DA,d in (<ref>) are the vertical stacks of the matrices D_1-D_5 and the vectors d_1-d_7, i.e., G_DA=[D_1;-D_1;D_2;-D_2;D_3;-D_3;D_4;-D_4;D_5;-D_5] and ψ_DA,d=[d_1;-d_1;d_2;d_3;d_4;d_5;d_6;d_7;0;0]. The matrix F_DA^y is the vertical stack of all-zero matrices and an identity matrix, i.e., F_DA^y=[O;...;O;E;O]. Let ρ=[ρ;0]. Then, ρ_DA=[ρ;...;ρ].
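For illustration, this block-diagonal and vertical stacking maps directly onto standard linear-algebra utilities. The following minimal Python sketch assembles D_1-D_3 and (part of) G_DA; the dimensions, and the placeholder matrices H and E, are hypothetical, and D_4, D_5 are omitted for brevity:

import numpy as np
from scipy.linalg import block_diag

T, n, m = 3, 4, 5                  # hypothetical horizon, variables, lines
H = np.random.randn(m, n)          # placeholder network matrix
E = np.eye(n)

one = np.ones((1, n))
D1 = block_diag(*[one] * T)        # balance rows, one per period
D2 = block_diag(*[H] * T)          # line-limit rows
D3 = block_diag(*[E] * T)          # bound rows

# G_DA stacks +/- copies of each block, as described above (D4, D5 omitted)
G_partial = np.vstack([D1, -D1, D2, -D2, D3, -D3])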
Next, the details of coefficients in RT clearing (<ref>) are provided. We define the following matrices, which are used for forming the coefficients in (<ref>),
R_1=[ 1^⊤ -1^⊤ -1^⊤ ], R_2=[ H -H -H ],
R_3=[ E O O ], R_4=[ O E O ], R_5=[ O O E ],
r_2=f̅+H(l_d,τ-y_d,τ), r_3=f̅+H(-l_d,τ+y_d,τ),
r_4=p^+, r_5=p^-,
r_6=p-p_d,τ^*, r_7=p_d,τ^*-p, r_8=y_d,τ.
G_RT and ψ_RT,d,τ in (<ref>) are the vertical stack of the matrices of R_1-R_5 and the vectors of r_1-r_8. G_RT=[R_1;-R_1;R_2;-R_2;R_3;-R_3;R_4;-R_4;R_3;-R_3; R_4;-R_4;R_5;-R_5] and ψ_RT,d,τ=[r_1;-r_1;r_2;r_3;r_4;0; r_5;0;r_6;0;r_7;0;r_8;0]. Let I_RT=[0,1^⊤] and H_RT=[H,O]. Then, F_RT=[I_RT;-I_RT;H_RT;-H_RT;O;...;O;-E;O;E;O;O;O]. ρ_RT=[ρ_+;-ρ_-].
Finally, the details of coefficients in RT clearing (<ref>) are provided. G^'_RT=[R_1;-R_1;R_2;-R_2;R_3;-R_3;R_4;-R_4; R_3;-R_3;R_4;-R_4;R_3;-R_3;R_4;-R_4;R_5;-R_5] and ψ^'_RT,d,τ=[r_1;-r_1;r_2;r_3;r_4;0;r_5;0;r_6;0;r_7;0;r̅;0;-r̅;0; r_8;0]. Then, F_RT^' x=[I_RT;-I_RT;H_RT;-H_RT;O;...;O;-E;O; E;O;-E;O;O;O]. F_RT^' p=[O;...;E;O;-E;O;O;O]. Let E_RT=[E,-E]. Then,
F_RT^' +-=[O;...;E_RT;O;-E_RT;O;O;O].
|
http://arxiv.org/abs/2405.09244v1 | 20240515105516 | NeuralCMS: A deep learning approach to study Jupiter's interior | [
"Maayan Ziv",
"Eli Galanti",
"Amir Sheffer",
"Saburo Howard",
"Tristan Guillot",
"Yohai Kaspi"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.IM",
"cs.LG"
] |
Department of Earth and Planetary Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
maayan.ziv@weizmann.ac.il
Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, France
Institut für Astrophysik, Universität Zürich, Winterthurerstr. 190, 8057 Zürich, Switzerland
NASA's Juno mission provided exquisite measurements of Jupiter's gravity field that, together with the Galileo entry probe atmospheric measurements, constrain the interior structure of the giant planet. Inferring the range of its interior structures remains a challenging inverse problem, requiring a computationally intensive search over combinations of various planetary properties, such as the cloud-level temperature, composition, and core features, which demands the computation of ∼10^9 interior models.
We propose an efficient deep neural network (DNN) model to generate high-precision, wide-ranging interior models based on the very accurate but computationally demanding concentric Maclaurin spheroid (CMS) method.
We trained a sharing-based DNN with a large set of CMS results for a four-layer interior model of Jupiter, including a dilute core, to accurately predict the gravity moments and mass, given a combination of interior features. We evaluated the performance of the trained DNN (NeuralCMS) to inspect its predictive limitations.
NeuralCMS shows very good performance in predicting the gravity moments, with errors comparable with the uncertainty due to differential rotation, and a very accurate mass prediction. This allowed us to perform a broad parameter space search by computing only ∼10^4 actual CMS interior models, resulting in a large sample of plausible interior structures, and reducing the computation time by a factor of 10^5. Moreover, we used a DNN explainability algorithm to analyze the impact of the parameters setting the interior model on the predicted observables, providing information on their nonlinear relation.
NeuralCMS: A deep learning approach to study Jupiter's interior
M. Ziv1, E. Galanti1, A. Sheffer1, S. Howard2,3, T. Guillot2,Y. Kaspi1
Received 3 April 2024 / Accepted 6 May 2024
==========================================================================
§ INTRODUCTION
The interior structure of Jupiter holds information on its formation and evolution processes, with the two research fields being highly related to one another <cit.>. The range of plausible interior structures of Jupiter is constrained by the accurately measured gravity field by NASA's Juno mission <cit.> and atmospheric measurements by both Juno <cit.> and the Galileo probe <cit.>. In addition, it is also affected by the surface winds and their internal structure, which significantly contribute to the gravity field <cit.>. Inferring this range requires the exploration of a large parameter space of interior models to identify those consistent with the observations.
Relating the above observables to the physical parameters defining the interior structure of a gas giant can be done by two approaches: the theory of figures (ToF) <cit.>, implemented for example to the seventh order in <cit.> and to the fourth order <cit.> in the CEPAM model <cit.>, which was used by <cit.>; and the more accurate concentric Maclaurin spheroid (CMS) method <cit.>, used by <cit.>, which is more computationally demanding <cit.>. One way to overcome the computational burden of the CMS approach is to correct the ToF results with offsets to the gravity moments to make up for the precision difference <cit.>. However, the offsets are defined for specific parameters and might not represent the entire parameter space.
Previous studies have suggested deep learning approaches to characterize the interior of exoplanets by predicting the distribution of interior features given the planetary mass, radius, and several additional parameters (e.g., the fluid Love number k_2, the effective temperature, the temperature at one bar), thus addressing the inverse problem directly <cit.>. Recently, <cit.> presented an approach allowing inference of both the inverse problem and the forward interior model.
In this work, we present NeuralCMS, a new approach to accelerate the CMS method, by predicting the model results using a deep neural network (DNN), which in practice can quickly compute millions of interior models simultaneously, or a single interior model on the order of milliseconds. Theoretically, DNNs are a suitable choice to regress the CMS results as they can approximate any nonlinear function between an adjustable number of inputs and outputs <cit.>. We used a DNN for the principal task of approximating the detailed forward CMS model constrained by the gravity moments and mass. NeuralCMS can then be used in any search algorithm, such as Monte Carlo methods <cit.>, to assemble a sample of plausible interior structures. We also demonstrate that with the advance in explainable DNN techniques <cit.>, further investigation of the nonlinear relations between interior features and the observables can be made possible.
In Sect. <ref> we describe the numerical and theoretical interior model, followed by a description of the dataset used to train the DNN. In Sect. <ref> we present the DNN architecture, performance, and training specifics. In Sect. <ref> we present the efficiency derived from our approach by performing a simple grid search for plausible Jupiter's interior solutions.
§ JUPITER INTERIOR STRUCTURE MODEL
Our numerical interior model is based on a publicly available CMS model <cit.>. CMS <cit.> is an iterative method to compute the shape and gravity harmonics (J_2n) of a rotating fluid planet. It is constructed of multiple concentric Maclaurin spheroids set by their equatorial radii, and using the hydrostatic equilibrium of the gravitational and rotational potential, it solves the shape for each spheroid assembling the planet (Fig. <ref>). We modeled Jupiter with N=1041 spheroids spaced the same as in <cit.>. We validated our CMS model against the analytic n=1 polytrope solution <cit.> (see Appendix <ref>).
We constructed a four-layer model of Jupiter as shown in Fig. <ref>, similar to <cit.> and <cit.>. The outer layer is mostly composed of hydrogen and helium with their mass fraction X_1 and Y_1, respectively. We set Y_1/(X_1+Y_1)=0.238 to be consistent with the Galileo probe measurements <cit.>. The mass fraction of heavier elements, or metallicity, in this layer, is marked by Z_1, which was constrained in the atmosphere by both Juno and Galileo to be higher than the solar abundance <cit.>. Recent interior models still struggle to reconcile with this important observation <cit.>. The outer envelope is treated as adiabatic, with a constant entropy determined by the temperature at one bar, which was measured by the Galileo probe to be T_1bar=166.1±0.8 K <cit.>, and was recently suggested to reach T_1bar=170.3±3.8 K after reassessing Voyager radio occultations <cit.>.
The boundary between the inner He-rich and the outer He-poor envelopes is set by the pressure P_12 representing a region where immiscibility of He in H occurs, and based on simulations of phase separation of H and He mixtures should occur between ∼0.8 and ∼3 Mbar <cit.>. We set the metallicity of the inner envelope to be Z_2=Z_1. Then we implemented a dilute core by imposing an inward increase in the mass fraction of heavy elements using the same formulation used by <cit.> with two main controlling parameters, Z_ dilute defining the maximum mass fraction of heavy elements in the dilute core, and m_ dilute representing the extent of the dilute core in normalized mass (see Appendix <ref> for more details). The helium mass fraction in the inner envelope and the dilute core regions is forced by requiring the planet's overall helium abundance to be consistent with the protosolar value, Y_proto=0.278± 0.006 <cit.>. Most recent models agree on the presence of a dilute core inside Jupiter although its extent exhibits discrepancies between interior and formation models. Recently, models with a small enough dilute core consistent with Juno were suggested <cit.>. Finally, we allowed the presence of a compact core composed of heavy materials only, and its normalized radius r_core controls it.
It was shown that the choice of the equation of state (EOS) for hydrogen and helium strongly affects the interior model <cit.>. For this work, we did not explore this effect but used only the pure H and He tables from <cit.>, and the nonideal mixing effect tables, accounting for the interactions between H and He, from <cit.>. We used the Sesame water EOS <cit.> for heavy materials. We used the additive volume law combined with the nonideal corrections to compute the density and entropy at a given pressure and temperature <cit.>. The EOS for the compact core is the analytical solution from <cit.>.
Unlike other CMS-based models, we did not restrict the calculated planetary mass to a specific value. As we treat the gravity moments, the mass is compared to the Juno-derived value within its uncertainty. The mass was computed from the observed GM=1.266865341×10^17 m^3 s^-2 <cit.>, and while in our model we did not use the Newtonian constant of gravitation G explicitly, the range of its suggested values results in a noticeable uncertainty in Jupiter's mass. Dividing the measured GM by the extremum values of G collected by CODATA <cit.> gives a mass range between 1.8978 and 1.8988 × 10^27 kg. We used G=6.673848×10^-11 m^3 kg^-1 s^-2 <cit.>, to be consistent with <cit.>, resulting in M_J=1.8983±0.0005×10^27 kg. Also, deviations in the equatorial radius R_eq, stemming from either measurement errors <cit.> or from the dynamical height due to the wind <cit.>, suggest a similar mass uncertainty.
We reduced the characterization of Jupiter's interior to the seven input parameters shown in Fig. <ref> and allowed them to vary, while R_eq=71 492 km <cit.> and Jupiter's rotation rate of 9 hr 55 min 29.7s <cit.> were kept constant. For training, we used previously computed results of over 10^6 CMS interior models, with different setups of 7D input samples: (1) sparsely gridded inputs; (2) randomly sampled inputs, both resulting in a large range in the outputs J_2n and mass; and (3) densely gridded inputs where the outputs are closer to the Juno measurements. The deep learning model was trained to accurately predict a broad range of interior models, to be used in a wide search for plausible models. The training dataset range and distribution are presented in Table <ref> and in Fig. <ref>.
§ A DEEP SHARING-BASED NEURAL NETWORK
In recent years, feedforward neural networks have been a popular machine-learning approach in many research fields. They are capable of deciphering information and relations in multidimensional data <cit.>. They are generally composed of input and output layers connected through several hidden layers, all built from a varying number of neurons. Their training is controlled by minimizing a so-called loss function between the predicted and the true output values, practically by optimizing the weights and biases on the connections between neurons <cit.>. Many machine-learning algorithms were suggested to address multi-target regression problems where the outputs are correlated <cit.>.
For this work, we adopted a sharing-based architecture <cit.>, shown in Fig. <ref>. Our feedforward DNN comprises an input layer fully connected to a shared hidden layer with 1024 neurons. The shared layer is fully connected to five separate private hidden blocks, one for each output. Each block contains two 1024-neuron fully connected layers computing a single output value. The internal hidden layers are activated using the nonlinear Rectified Linear Unit (ReLU) function <cit.>, which is commonly used and easy to optimize. Since the gravity moments and mass are all functions of the density structure and shape of the planet, the correlation between them is designed to be learned by the shared layer. The private layers then act as a single output regressor. We find that using the mass as an output parameter improves the prediction of J_2n, but it is more precisely predicted using an entirely separate DNN that has the same seven input parameters, and four fully connected layers with 1024, 512, 256, and 128 neurons, respectively, each activated with ReLU, predicting only the mass. Appendix <ref> discusses other architectures that were tested.
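A minimal PyTorch sketch of this sharing-based architecture (layer sizes follow the text; weight initialization, input normalization, and the separate mass network are omitted) reads:

import torch
import torch.nn as nn

class SharingDNN(nn.Module):
    def __init__(self, n_in=7, n_out=5, hidden=1024):
        super().__init__()
        # shared layer learning the correlation between outputs
        self.shared = nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU())
        # one private two-layer block per output (J2, J4, J6, J8, mass)
        self.private = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_out)])

    def forward(self, x):
        h = self.shared(x)
        return torch.cat([block(h) for block in self.private], dim=-1)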
We normalized the inputs to have values between zero and one, using the bounds of the training dataset (see Table <ref>). The gravity moments were scaled by 10^6 and taken in positive values. The mass was scaled by 10^-27. We initialized the DNN weights using the Kaiming uniform distribution to prevent activations from potentially harming the training <cit.>. During training, we compared the predicted and true output values using a weighted mean squared error loss function,
L=1/(NW)∑_i=1^N(((M_i^pred-M_i^CMS)/Δ M)^2+∑_n=1^4((J_2n,i^pred-J_2n,i^CMS)/(3σ_2n))^2),
where N is the number of samples, Δ M=0.0005×10^27 kg is the mass uncertainty discussed in Sect. <ref>, 3σ_2n is Juno's 3σ uncertainty for J_2n <cit.>, and W=1/(Δ M)^2+∑_n=1^41/(3σ_2n)^2 is the sum of the weights. This gives a larger weight to the more accurately measured observables. The loss function was minimized using the Adam optimizer <cit.> with a learning rate starting from 0.001 and reduced tenfold at manually chosen epochs. We took 80% of the models from the full dataset for training, leaving the rest to validate the trained DNN. The DNN was trained for 700 epochs using the PyTorch library <cit.>, showing no overfitting (Fig. <ref>f).
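As a sketch, this weighted loss can be written in PyTorch as follows; the Juno 3σ values are left as an input, and the outputs are assumed to be ordered [J_2, J_4, J_6, J_8, M]:

import torch

def weighted_mse(pred, true, three_sigma, dM=0.0005):
    # three_sigma: tensor of Juno 3-sigma uncertainties for J2..J8;
    # dM is the mass uncertainty in units of 1e27 kg (from the text)
    scale = torch.cat([three_sigma, torch.tensor([dM])])
    W = (1.0 / scale**2).sum()
    return (((pred - true) / scale)**2).sum(dim=1).mean() / W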
The performance of our DNN was evaluated by the prediction errors ϵ for each output. Figure <ref> shows the prediction errors as a function of the true output values compared to the uncertainty stemming from measurement errors <cit.> and the wind <cit.>. For all model outputs, except for J_2, the standard deviation of the prediction error ϵ_σ is smaller than the combined uncertainty, and it is mostly comparable to the wind-derived uncertainty. Specifically, for J_2×10^6, ϵ_σ=0.789 is about twice the wind-related uncertainty but a factor of ∼2-7 smaller than the offsets applied to ToF results <cit.>. <cit.> evaluated the uncertainty on J_2×10^6 related to assumptions of the CMS method to be roughly 0.1, lower than ϵ_σ. Deviations in J_2×10^6 from the analytical solution for the n=1 polytrope found in previous studies with a similar number of spheroids are between ∼0.1-3 <cit.>. We note that the ToF offsets and deviations from the polytropic solutions are systematic errors, whereas ϵ_σ are both positive and negative (see Fig. <ref>). Table <ref> compares these sources of error and uncertainty with the DNN's performance.
The relatively small prediction errors allow the DNN to eliminate the vast majority of interior models that deviate from the Juno measurements. Moreover, the prediction errors are independent of the output values, highlighting the DNN's ability to predict a large range of interior models. Due to the very accurate measured J_2 and J_4 compared to the DNN errors, interior models still need to be calculated using CMS, to eliminate models falsely predicted by the DNN to be consistent with the observations. Also, the CMS output contains valuable information we wish to retrieve such as density and composition profiles.
§ PERFORMANCE AND INTERPRETATION OF NEURALCMS
To demonstrate the computational efficiency gained by using NeuralCMS, we performed a simple grid search exploring all possible combinations of an equally spaced grid for each of the seven parameters with m grid points. We initiated the first grid search using only the DNN, with a wide range for all the parameters using the bounds shown in the axes range in Fig. <ref>, the determined Y_proto=0.278± 0.006 <cit.>, and P_12 between 0.8 and 5 Mbar, with m=17 grid points for each parameter, exploring over 4×10^8 interior models. This procedure takes ∼2 hours. The results of this grid search were used to reduce the range of input parameters by eliminating models that do not fall within a wind-effect criterion regarding the Juno measurements within the absolute maximal prediction errors on the validation dataset, ϵ_ max. The wind-effect criterion is a range we allowed models to deviate from the Juno measurements setting a subsequent exploration of the effects of a coupled wind model. The criterion accepts models that are within 2×10^-6 and 10^-6 from the measured J_2 and J_4, respectively, and within the mass uncertainty discussed in Sect. <ref>. These values were added to the prediction errors considered. The range of Z_1 and Z_ dilute was significantly reduced after the first grid search as shown in the left column of Fig. <ref>.
Using NeuralCMS, we encountered the known difficulty of finding solutions that are simultaneously consistent with the Juno gravity measurements, the Galileo-measured T_1bar, and the high measured atmospheric metallicity <cit.>. Moreover, the correlation plot between m_ dilute and Z_ dilute shown in the middle-left panel of Fig. <ref> is similar to the results produced with the same EOS by <cit.> with a slightly different setup (see their Fig. 13). This provides another validation of our model. We note that some of the models accepted here, shown in the left column of Fig. <ref>, will be eliminated by the CMS calculations because we allowed for large prediction errors given the relatively sparse grid; this first pass therefore serves only to roughly narrow the range of the interior parameters.
The second grid search was done with the narrowed parameter range obtained from the initial grid search, with a denser grid of m=20 grid points per parameter, exploring 1.28× 10^9 interior models. Again, we used the wind-effect criterion, now added to the 3σ prediction errors on the validation dataset (ϵ_3σ) to retrieve 10871 possible models to test against actual CMS calculations. From these possible combinations, we find 2927 interior models accepted by the wind-effect criterion according to their CMS results. The initial search was enough to produce a compact distribution of models with respect to the parameters presented in Fig. <ref>, which was further tuned after the second denser grid search. Testing the grid search predictions made by the DNN against the CMS outputs yields a performance similar to that observed on the validation dataset. Importantly, using NeuralCMS, only ∼10^4 actual CMS models need to be computed instead of the impractical computation of ∼10^9 CMS models, to assemble a sample of ∼3000 plausible interior models, all falling within ϵ_3σ.
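Schematically, this DNN-based screening can be written as below; `param_bounds`, `net`, `normalize`, and `within_criterion` are hypothetical stand-ins for the parameter ranges, the trained network, the input scaling, and the wind-effect criterion with prediction-error margins:

import itertools
import torch

def batches(iterable, size):
    it = iter(iterable)
    while chunk := list(itertools.islice(it, size)):
        yield torch.tensor(chunk)

grids = [torch.linspace(lo, hi, 20).tolist() for lo, hi in param_bounds]
kept = []
with torch.no_grad():
    for batch in batches(itertools.product(*grids), 2**20):
        pred = net(normalize(batch))
        kept.append(batch[within_criterion(pred)])
shortlist = torch.cat(kept)   # only these candidates go to full CMS runs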
The DNN can also be used to reveal the contribution of the interior parameters to the predictions. The SHapley Additive exPlanations (SHAP) values are a game theory approach assigning an impact value for each model input representing its contribution to a specific prediction <cit.>. SHAP values provide a locally accurate approximation such that for a single prediction, the SHAP values for all inputs sum to the deviation of the predicted values from their mean in a reference dataset. This provides an interpretation of the magnitude and direction in which each input moves the model predictions. For this work, we used Deep SHAP <cit.>, which linearizes the DNN's nonlinear components to backpropagate the SHAP computation through the network. We took all CMS-accepted models as the reference dataset and calculated SHAP values for 500 random models from these accepted models. As an example, we examined the SHAP values for J_6, an observable that is usually difficult to fit <cit.>. Figure <ref> shows these SHAP values for all 500 models, each corresponding to a point on each row. For example, we marked with black circles a specific model with the highest SHAP value for Z_ dilute (see Table <ref>). The parameters controlling the planet's core (m_ dilute, Z_ dilute, and r_core) have the largest effect on the prediction of J_6. Moreover, the analysis shows that all model parameters have an overall monotonic effect on J_6 (color-coded in Fig. <ref>). The analysis also highlights the interplay among input features. The SHAP values for T_1bar and Z_1 exhibit similar magnitudes but in opposite directions, such that high values of Z_1 (T_1bar) contribute positively (negatively), effectively balancing out each other's contribution to J_6. Conversely, the dilute core parameters yield the same sign contribution to J_6. Similar results are observed for the other gravity harmonics.
§ CONCLUSION
We present NeuralCMS, an efficient deep learning approach to explore the range of plausible interior structures of Jupiter, constrained by the Juno-measured gravity field and mass. This is done by training a DNN to predict the results of the sophisticated and computationally demanding CMS method. We trained a sharing-based DNN using results from over 10^6 CMS calculations, showing good performance compared to the uncertainties associated with the gravity moments and mass.
We show that NeuralCMS can be used to eliminate interior models inconsistent with the measured gravity field and substantially reduce the number of actual CMS runs. We demonstrate the efficiency of NeuralCMS by performing a grid search for model solutions consistent with Juno. Evaluating over 10^9 possible models with NeuralCMS allowed us to identify ∼10^4 solutions on which actual CMS runs were performed, producing a big sample of nearly 3000 plausible interior models, thus reducing the computational time by a factor of 10^5. This would not have been computationally feasible using only the CMS model.
We demonstrate that despite the DNN's complex nature, it is possible to interpret relations between the physical interior parameters and their contribution to the observables using SHAP values. As an example, we show that within their range relevant to the Juno measurements, parameters controlling the planet's core have the largest impact on the predicted J_6 suggesting that high dilute core metallicity is associated with small dilute core extent, and vice versa, thus compensating for each other.
NeuralCMS can be used in any search methodology to detect plausible interior structures without a single CMS computation, acknowledging its prediction errors. It can also be further expanded to allow additional interior parameters (e.g., the equatorial radius and a temperature jump in the He rain region) and to more complex interior structures of Jupiter or other gaseous planets. NeuralCMS is available on GitHub[https://github.com/zivmaaya/NeuralCMS].
This work was supported by the Israeli Space Agency and the Helen Kimmel Center for Planetary Science at the Weizmann Institute.
§ CMS MODEL VALIDATION
To validate our CMS model, we tested it against the analytic Bessel solution for Jupiter's uniformly rotating n=1 polytrope <cit.>. This was done using the same spheroid radii grid as used by <cit.>. Table <ref> shows our model convergence to the analytic Bessel solution with an increasing number of spheroids. Our model exhibits good convergence with N=1024 spheroids.
§ DILUTE CORE FORMULATION AND THE TRAINING DATA RANGE
Jupiter's dilute core is implemented differently in various interior models. For this Letter, we followed the formulation of <cit.>, defining the mass fraction of heavy elements in the inner envelope and the dilute core,
Z=Z_2+(Z_dilute-Z_2)/2[1-erf((m-m_dilute)/δ m_dil)],
where Z_ dilute is the maximum metallicity in the dilute core, m_ dilute represents the extent of the dilute core in normalized mass, and δ m_ dil controls the steepness of the metallicity gradient and was set to be δ m_ dil=0.075. This region is not an adiabat, but we treat each spheroid as an adiabat. We infer the interpolated entropy (S) and temperature (T) profiles, such that for each spheroid i in the dilute core region, T_i+1=T(P_i+1,S_i,Y_i,Z_i) and S_i+1=S(P_i+1,T_i+1,Y_i+1,Z_i+1).
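A short numerical sketch of this profile, using SciPy's error function, is:

import numpy as np
from scipy.special import erf

def Z_profile(m, Z2, Z_dilute, m_dilute, dm_dil=0.075):
    # m: normalized mass coordinate; Z -> Z_dilute toward the center
    # and Z -> Z2 in the inner envelope, as in the equation above
    return Z2 + 0.5 * (Z_dilute - Z2) * (1.0 - erf((m - m_dilute) / dm_dil))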
Table <ref> and Fig. <ref> show the parameter range and distribution of the training and validation datasets. The range of most parameters is larger than observational or theoretical constraints to allow for a broad exploration of the parameter space.
§ EXPLORATION OF DNN ARCHITECTURES
We found our proposed deep learning architecture (Fig. <ref>) to perform best at predicting the gravity moments among the architectures tested. First, we tried a fully connected network architecture <cit.> with varying depth and size (i.e., varying the number of hidden layers, each with a varying number of neurons) predicting all five output parameters together. Second, we tried a similar fully connected architecture to regress each parameter separately, which was successful only for the mass prediction. Lastly, we adopted a sharing-based architecture <cit.>, again tested with different sizes and depths. Our chosen architecture may seem large compared, for example, with the network used by <cit.> for a similar regression task, where three hidden layers with fewer than 100 neurons each were used. These authors compensated for the small network size with a very long training of 4.4×10^6 epochs, which is a few orders of magnitude longer than our training process. Again, we note that no overfitting occurred during the training, supporting the validity of the chosen architecture.
§ QUANTITATIVE DNN PERFORMANCE EVALUATION
In addition to the performance evaluation shown in Fig. <ref>, we present a quantitative comparison of the DNN mean prediction errors with other sources of uncertainty in Table <ref>. The 1σ prediction errors on the validation dataset (ϵ_σ) are comparable to the Juno measurement uncertainty for J_6 and J_8. All prediction errors are comparable to the uncertainty due to the wind, and they are significantly lower than the offsets needed to be applied on results produced by the ToF expansion <cit.>. The maximal absolute prediction errors (ϵ_max) are comparable to the offsets to ToF by <cit.>. In our CMS model, we set the outermost spheroid radius to be the measured R_eq=71 492 km <cit.> at one bar. This underlines the assumption that the higher atmosphere (P < 1 bar) can be neglected. <cit.> evaluated the uncertainty stemming from this assumption, which is lower than ϵ_σ for J_2, but higher for J_4 and J_6. These authors also evaluated the discretization error on J_2×10^6, when using a polytropic EOS, to be of a similar magnitude to the value shown in Table <ref> for neglecting the higher atmosphere.
§ DESCRIPTION OF SHAP VALUES
The SHAP values are a game theory approach that assigns an impact value for each model input representing its contribution to a specific prediction <cit.>. For deep learning models, SHAP is combined with the additive feature attribution DeepLIFT method <cit.>, which practically linearizes the nonlinear components of the DNN, to provide explanations based on a locally accurate approximation:
f(x)≈y̅_ref+∑_i=1^Mϕ_x_i,y,
where x is a specific input; f(x) is the prediction model (the DNN in our case); y̅_ref is the mean of all output predictions y in a reference dataset, used as a baseline value; M is the number of input features; and ϕ_x_i,y is the SHAP value of the input x_i for a predicted output y <cit.>. This means that for each prediction, the SHAP values for all inputs sum to the difference between the predicted value and the mean of all predictions in a reference dataset. For this work, we used Deep SHAP through the DeepExplainer from the SHAP Python library <cit.>, which combines SHAP values computed for smaller components of the DNN into values for the whole network by backpropagating the computation through the network. We took all 2927 models accepted by their CMS results as the reference dataset and calculated SHAP values for 500 random models from these accepted models. More details on the specific model marked by black circles in Fig. <ref> can be found in Table <ref>.
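In code, this usage reduces to a few lines (a sketch only; `net` is the trained DNN and `accepted` the tensor of CMS-accepted input samples):

import shap
import torch

background = accepted                                   # 2927 reference models
sample = accepted[torch.randperm(len(accepted))[:500]]  # models to explain

explainer = shap.DeepExplainer(net, background)
shap_values = explainer.shap_values(sample)             # one array per output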
|
http://arxiv.org/abs/2405.09582v1 | 20240515023406 | AD-Aligning: Emulating Human-like Generalization for Cognitive Domain Adaptation in Deep Learning | [
"Zhuoying Li",
"Bohua Wan",
"Cong Mu",
"Ruzhang Zhao",
"Shushan Qiu",
"Chao Yan"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
AD-Aligning: Emulating Human-like Generalization for Cognitive Domain Adaptation in Deep Learning
1st Zhuoying Li, Department of Computer Science, Johns Hopkins University, Baltimore, US, zli181@jhu.edu
2nd Bohua Wan, Department of Computer Science, Johns Hopkins University, Baltimore, US, bwan2@jhu.edu
3rd Cong Mu, Department of Computer Science, Johns Hopkins University, Baltimore, US, cmu2@jhu.edu
4th Ruzhang Zhao, Department of Computer Science, Johns Hopkins University, Baltimore, US, rzhao@jhu.edu
5th Shushan Qiu, Department of Electrical and Computer Engineering, University of Houston, Houston, US, sqiu3@cougarnet.uh.edu
6th Chao Yan, Department of Electrical and Computer Engineering, Northeastern University, Sunnyvale, USA, yan.chao@northeastern.edu
May 20, 2024
==========================================================================
Domain adaptation is pivotal for enabling deep learning models to generalize across diverse domains, a task complicated by variations in presentation and cognitive nuances. In this paper, we introduce AD-Aligning, a novel approach that combines adversarial training with source-target domain alignment to enhance generalization capabilities. By pretraining with Coral loss and standard loss, AD-Aligning aligns target domain statistics with those of the pretrained encoder, preserving robustness while accommodating domain shifts. Through extensive experiments on diverse datasets and domain shift scenarios, including noise-induced shifts and cognitive domain adaptation tasks, we demonstrate AD-Aligning's superior performance compared to existing methods such as Deep Coral and ADDA. Our findings highlight AD-Aligning's ability to emulate the nuanced cognitive processes inherent in human perception, making it a promising solution for real-world applications requiring adaptable and robust domain adaptation strategies.
Computer Vision, Domain Adaptation, Generalization, Cognition
§ INTRODUCTION
§.§ Background
Generalization across domains is a pivotal aspect of AI cognitive research, particularly in the context of domain adaptation <cit.>. Understanding how models can effectively adapt to different domains has emerged as a highly active area of investigation <cit.>. Researchers have employed various strategies to tackle the challenge of domain shift, where the distribution of data differs between training and testing phases <cit.>. However, much of the existing work primarily focuses on addressing domain shifts within the objective truth—instances where the task remains consistent across domains, albeit with varying environmental conditions <cit.>.
Consider a scenario where a model is trained to identify bears in photos taken during the day but is tested on images captured at night. This exemplifies a common domain shift problem, where the model must generalize its understanding to accommodate changes in lighting conditions. While such challenges are significant, they represent only one facet of domain adaptation <cit.>.
A crucial yet often overlooked aspect is cognitive domain shift, where humans possess the ability to generalize across disparate concepts despite variations in presentation <cit.>. For instance, if individuals are trained to recognize polar bears but are later tested on images of brown bears, they can seamlessly extend their understanding to identify both as types of bears. This cognitive flexibility presents a formidable challenge for artificial intelligence (AI) systems, as they struggle to emulate the nuanced cognitive processes inherent in human perception <cit.>.
In this paper, we address this gap in research by investigating cognitive domain shift and its implications for machine learning models. We argue that understanding and replicating human-like generalization abilities are essential for developing robust AI systems capable of adapting to diverse and evolving environments. Through empirical analysis and theoretical insights, we aim to shed light on the complexities of cognitive domain shift and pave the way for more comprehensive solutions in domain adaptation research <cit.>.
§.§ Related Work
Machine learning (ML) has been a focal point of research and innovation in recent years, with numerous influential papers contributing to its advancement. For instance, Liu et al. introduced a method for influence pathway discovery on social media <cit.>. Additionally, Li et al. proposed a Contextual Hourglass Network for semantic segmentation of high-resolution aerial imagery <cit.>. Furthermore, Li et al. presented a technique for deception detection using bimodal convolutional neural networks, highlighting the importance of leveraging multiple modalities for domain adaptation tasks <cit.>.
Domain adaptation, the process of transferring knowledge from a source domain to a target domain with differing distributions, has garnered significant attention in the machine learning community. Numerous approaches have been proposed to address this challenge, ranging from traditional methods to more recent deep learning-based techniques. Traditional domain adaptation methods often rely on minimizing the discrepancy between source and target domains using techniques such as Maximum Mean Discrepancy (MMD) <cit.> and Kernel Mean Matching (KMM) <cit.>. Deep learning has revolutionized domain adaptation by leveraging the representational power of deep neural networks to learn domain-invariant features. Notable deep learning-based methods include Deep Coral <cit.>, which aligns second-order statistics to bridge domain gaps, Adversarial Discriminative Domain Adaptation (ADDA) <cit.>, which employs adversarial training to learn domain-invariant representations, and Self-supervised agent learning <cit.>, which learns domain-invariant representations through a self-supervised learning framework.
These methods have demonstrated promising results in various domains and have paved the way for further advancements in domain adaptation research. In recent years, various approaches have been proposed to address similar challenge. Recent works have explored novel techniques such as meta-learning <cit.> to enhance the adaptability and robustness of domain adaptation models. Additionally, attention has been given to the challenges of unsupervised, semi-supervised, and multi-source domain adaptation scenarios <cit.>, <cit.>.
§ METHODOLOGY
§.§ Benchmark Models
§.§.§ Deep Coral
The Deep Coral model, introduced by Sun et al. in 2016, is a notable advancement in domain adaptation within deep learning. It tackles domain shift by aligning source and target domain distributions in a deep feature space through correlation alignment. This approach preserves data distribution structures effectively, leading to improved adaptation performance across diverse tasks and domains. Deep Coral's elegant formulation and demonstrated efficacy make it a cornerstone in the development of robust and generalizable deep learning systems.
§.§.§ ADDA
The Adversarial Discriminative Domain Adaptation (ADDA) model, is a leading technique for unsupervised domain adaptation in deep learning. By simultaneously minimizing domain discrepancy and maximizing task-specific discrimination through adversarial training, ADDA learns domain-invariant representations. This approach enables effective adaptation without requiring labeled target domain data, making ADDA particularly valuable when labeled data in the target domain is limited or unavailable. With its innovative framework and demonstrated effectiveness, ADDA has become a widely utilized method for addressing domain shift in various real-world applications.
§.§ AD-Aligning
We propose an unsupervised model that integrates the advantages of the Adversarial Discriminative model with the alignment of source and target domain distributions. In the adaptation phase, we receive source images X_s with corresponding labels Y_s exhibiting a source domain distribution p_s(x,y), alongside target images X_t characterized by a target distribution p_t(x,y), and unseen target images X_ut with an unknown distribution. The primary objective of domain adaptation classification is to obtain a target representation M_t and classification C_t capable of categorizing both X_t and X_ut into K classes.
The model structure is illustrated in Fig. <ref>. During the pre-training phase, source images are fed into an encoder to obtain their representation M_s. The classifier C is then trained using the source representation and labels. Following pre-training, we acquire an encoder capable of encoding the source representation and a classifier C. Subsequently, in the adversarial adaptation phase, only source and target images are provided without labels. The objective in this phase is to obtain a target encoder that generates a target representation M_t similar to the source representation M_s. To achieve this, we freeze the source image encoder and target classifier. The initial status of the target encoder is copied from the source encoder, and a discriminator is employed to work adversarially with the target encoder to train it in an unsupervised manner. Finally, the target encoder and source classification model are used to classify the target images.
In both the pre-training and adversarial training phases, we employ standard supervised loss augmented with coral loss:
l_cls = l_class + ∑^t_i=1λ_il_coral
where t represents the number of coral loss layers in the network, and λ is a weight parameter that balances the contribution of the coral loss. The classification losses (l_class) and coral losses (l_coral) are expressed as follows:
l_class(X_s, Y_s) = -𝔼_(x_s,y_s)∼(X_s,Y_s)∑_k=1^K 1_[k=y_s]log C(M_s(x_s))
l_coral = 1/(4d^2)||Co_s-Co_t||^2_F
where d denotes the dimensionality of the deep layer, Co_s (Co_t) represents the feature covariance matrices, and || .||^2_F denotes the squared matrix Frobenius norm.
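As an illustration, the coral loss above can be sketched in PyTorch as follows (batches of deep-layer features are assumed; this is not necessarily the exact implementation):

import torch

def coral_loss(f_s, f_t):
    # f_s, f_t: (batch, d) feature matrices from source and target
    d = f_s.size(1)
    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)
    # squared Frobenius norm of the covariance difference, scaled by 1/(4 d^2)
    return ((cov(f_s) - cov(f_t)) ** 2).sum() / (4 * d * d)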
In the adversarial adaptation task, a domain discriminator D is introduced to discern whether the representation originates from the source or target domain. The adversarial component also incorporates standard supervised loss, formulated as:
l_advD = -𝔼_X_s[log D(M_s(x_s))] - 𝔼_X_t[log(1-D(M_t(x_t)))]
Given that the target domain lacks labels, minimizing the disparity between source and target representations poses a challenge. During training of the adversarial network, the encoder is trained using the standard loss function with inverted labels <cit.>, yielding the encoder loss:
l_advE = -𝔼_X_t[log D(M_t(x_t))]
AD-Aligning entails unconstrained optimization aimed at minimizing the combined losses l_cls, l_advD, and l_advE.
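The adversarial step can be sketched as follows, assuming a frozen source encoder M_s, a trainable target encoder M_t, and a discriminator D with sigmoid outputs (hypothetical helper code, not the exact implementation):

import torch
import torch.nn.functional as F

def disc_loss(D, M_s, M_t, x_s, x_t):
    src = D(M_s(x_s))            # discriminator should output 1 for source
    tgt = D(M_t(x_t).detach())   # and 0 for target features
    return F.binary_cross_entropy(src, torch.ones_like(src)) + \
           F.binary_cross_entropy(tgt, torch.zeros_like(tgt))

def enc_loss(D, M_t, x_t):
    tgt = D(M_t(x_t))            # inverted labels: encoder tries to look "source"
    return F.binary_cross_entropy(tgt, torch.ones_like(tgt))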
§.§ Data Processing
To conduct our experiments, it is imperative to procure datasets comprising both source and target images exhibiting domain shift. We meticulously curate three pairs of datasets to ensure the requisite conditions for our experiments are met.
§.§.§ Noise Dataset
Noise represents a prevalent manifestation of domain shift, imparting notable alterations to the texture and characteristics of images. Notably, images captured under low-light conditions, such as during nighttime, often exhibit heightened levels of noise compared to those captured in well-lit environments. In our experimental setup, we delineate the original, noise-free image as the source, while the image subjected to noise augmentation serves as the target. This approach effectively simulates domain shifts corresponding to various times of the day. Specifically, our noise dataset encompasses five distinct types of perturbations introduced into the Tiny-16-Class-ImageNet dataset: uniform noise, salt-and-pepper noise, rotation, high-pass, and low-pass filters. A visual depiction of domain shift induced by noise is provided in Figure <ref>.
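For illustration, such perturbations can be generated with a few NumPy/SciPy helpers; the parameter values below are placeholders, not the exact settings used to build the dataset:

import numpy as np
from scipy import ndimage

def uniform_noise(img, amp=0.5):
    return np.clip(img + amp * (np.random.rand(*img.shape) - 0.5), 0, 1)

def salt_and_pepper(img, p=0.05):
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])  # per-pixel mask, all channels
    out[mask < p / 2] = 0.0
    out[mask > 1 - p / 2] = 1.0
    return out

def low_pass(img, sigma=2.0):
    return ndimage.gaussian_filter(img, sigma=(sigma, sigma, 0))

def high_pass(img, sigma=2.0):
    return np.clip(img - low_pass(img, sigma) + 0.5, 0, 1)

def rotate(img, deg=15):
    return ndimage.rotate(img, deg, reshape=False, mode="nearest")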
§.§.§ MNIST-USPS
The MNIST-USPS dataset amalgamates handwritten digit images sourced from two disparate origins: MNIST and USPS. MNIST comprises standardized 28x28 grayscale images depicting digits 0 through 9, while USPS encompasses handwritten digits gleaned from postal mail envelopes. This dataset assumes a pivotal role as an evaluative benchmark for domain adaptation algorithms, owing to the substantial disparities in style, dimensions, and variability between the source and target domains. Such diversity renders it an exemplary testbed for gauging the efficacy of domain adaptation methodologies in real-world contexts.
§.§.§ Generalized ImageNet
The Tiny-16-Class-ImageNet encompasses 16 general classes, with multiple subclasses within each class. Through manual integration of all datasets, we partitioned the dataset into source, target, and unseen-target subsets based on subclasses. For instance, all images depicting brown bears were categorized into the source subset, while half of the images featuring black bears were allocated to the target subset, and the remaining black bear images were assigned to the unseen-target subset. The ground truth for these images uniformly identifies them as belonging to the bear category. This categorization process was informed by human cognition. Thus, we curated the Generalized ImageNet dataset, intending to assess the model's efficacy in Cognitive Domain Adaptation.
§ EXPERIMENTS
§.§ Deep Coral
As a benchmark, we configure Deep Coral <cit.> to align the second-order statistics within the final layer of the backbone network by incorporating a coral loss. Recognized for its efficacy and remarkable versatility, this method provides a robust foundation for comparison. Significantly, we enhance the framework by substituting Deep Coral's backbone with a ResNet-50 model pretrained on the ImageNet dataset, thereby leveraging established representations to enhance performance. We adopt the same SGD hyperparameters as outlined in <cit.>. The λ parameter, controlling the weight of the coral loss, remains consistent with <cit.>, except on the MNIST-USPS dataset, where we set λ = 1-epoch/num_epoch.
§.§ ADDA
To establish a benchmark, we adhere to the methodology of Adversarial Discriminative Domain Adaptation (ADDA) by initially acquiring a discriminative representation using source domain data. Subsequently, we utilize a domain-adversarial loss to train an additional encoder, which maps the target domain to the source domain. For our implementation, we employ ResNet-50 as the backbone for the encoder, complemented by a three-layer Multi-Layer Perceptron (MLP) acting as the discriminator, with a hidden size of 1024. The Adam optimizer is utilized, with parameters b1 = 0.5 and b2 = 0.999. We set the learning rate to 0.0002 and utilize a batch size of 32. During the adaptation stage, updates to the target encoder are performed every four steps.
§.§ AD-Aligning
We employ Coral loss in conjunction with standard loss during the pretraining phase of AD-Aligning to align the second-order statistics of target domains between the classification outputs of the fixed pretrained encoder and the AD-Aligning trained target encoder. The overall architecture is depicted in Fig <ref>. The blue encoder and classifier are pretrained and remain fixed throughout the experiments. Our experimentation reveals that the vanilla Adversarial Discriminative component compromises the pretrained encoder due to the inadequately trained discriminator. To optimize the utilization of the pretrained encoder initialization while ensuring that the target encoder generates similar features for the target and source domains, we utilize the coral loss solely to align the classification output of the AD-Aligning trained encoder with that of the fixed pretrained encoder, gradually reducing the weight of the coral loss over time.
§ DISCUSSION
§.§ Compare ADDA with Deep CORAL on Noise Dataset
We conducted experiments on the Noise Dataset to evaluate the performance of ADDA and Deep CORAL in handling domain shifts induced by five types of noise. The models were trained using the source dataset and one noise target dataset, and their performance was assessed across all five noise domains.
The experimental results, illustrated in Figure <ref>, showcase the improvements made by ADDA and Deep CORAL on the target domain. The classification accuracies for different domains are presented as percentages. Model M0 was solely trained on the source domain, while models M1 to M5 were adapted to one target domain using ADDA. Similarly, models M6 to M10 were adapted to one target domain using Deep CORAL. The red rectangle highlights the target domain on which each model was trained. Notably, the best results for each domain and method are highlighted in bold blue.
In our analysis, Deep CORAL consistently outperforms ADDA across various target domains, with the exception of the High-Pass domain. The discrepancy in performance on the High-Pass domain can be attributed to the significant domain shift present between High-Pass and the other domains. This challenge underscores the limitations of Deep CORAL in handling extreme domain shifts, where features learned in the source domain may not generalize well to the target domain.
Despite this, Deep CORAL demonstrates superior generalizability to previously unseen domains compared to ADDA. This enhanced generalizability can be attributed to the minimal alteration of the encoder by Deep CORAL, particularly as the encoder is pretrained on the ImageNet dataset without the inclusion of added noises. This pretrained initialization likely contributes to the model's ability to capture robust and transferable features, thereby enhancing its performance across diverse domains.
§.§ Performance on Cognitive Domain Adaptation
We conducted tests on more challenging datasets to evaluate the models' performance in cognitive domain adaptation, aiming to assess their ability to generalize across disparate concepts despite variations in presentation. The datasets used for this evaluation were MNIST-USPS and Generalized ImageNet. To increase the difficulty of the task, uniform noise (0.5) was added to the Generalized ImageNet target domain.
MNIST-USPS was chosen to assess the models' basic ability in handling domain shifts related to objective truth, while Generalized ImageNet was used to evaluate their performance in cognitive domain adaptation.
Table <ref> presents the results of different models on MNIST-USPS and Generalized ImageNet. ResNet-50 exhibited limited capability in handling domain shifts, performing slightly better than random guessing but showing poor accuracy in the target domain. ADDA demonstrated good performance on MNIST-USPS but performed poorly on Generalized ImageNet, indicating its effectiveness in objective truth domain shift but limitations in cognitive domain adaptation.
Conversely, Deep Coral showed better performance in cognitive domain adaptation, possibly due to its utilization of correlation alignment, which effectively emulates the nuanced cognitive processes inherent in human perception. However, the reliability of Deep Coral's results is questionable, given its poor performance on ADDA.
Our proposed model, AD-Aligning, outperformed other settings in both MNIST-USPS and Generalized ImageNet tests. The notable improvements in performance validate the effectiveness of our novel modifications and designs, highlighting the potential of AD-Aligning as a robust solution for domain adaptation tasks.
§.§ Performance on Unseen Domain Adaptation
Not all target domains are readily available during the training phase, necessitating an investigation into the model's performance when faced with unseen target domains. To validate this, we utilize the Generalized ImageNet dataset. To intensify the challenge, uniform noise (0.5) is added to the target domain. We deliberately leave ResNest50-ImageNet, a pretrained model, untrained so that it serves as a control model.
The results are presented in Table <ref>. ResNet-50 demonstrates an inability to adapt to the target domain, displaying signs of overfitting on the source domain and performing even worse than the untrained ResNest50-ImageNet. Conversely, Deep Coral exhibits notably superior performance compared to ResNet-50. AD-Aligning's performance on the unseen and untrained target domain significantly outperforms other methods, as evidenced in Table 2. Given the inherent difficulty of adapting to unseen target domains, the achieved accuracy underscores the robustness of the model.
§ CONCLUSION
Our experimental investigation into domain adaptation methods, including Deep Coral, ADDA, and our proposed AD-Aligning approach, has provided valuable insights into their effectiveness across various domains and scenarios. Deep Coral demonstrates superior generalizability to unseen domains compared to ADDA, although it struggles with extreme domain shifts such as in the High-Pass domain. On the other hand, ADDA performs well on objective truth domain shift tasks but exhibits limitations in cognitive domain adaptation. Our novel AD-Aligning method showcases significant improvements over existing approaches, particularly in addressing the challenges of unseen domain adaptation. Its robust performance across diverse datasets underscores its potential as a versatile and effective solution for domain adaptation tasks. Overall, our findings highlight the importance of considering the nuances of domain shifts and the need for adaptable and robust adaptation methods to address real-world challenges effectively.
|
http://arxiv.org/abs/2405.08751v1 | 20240514163521 | From Text to Context: An Entailment Approach for News Stakeholder Classification | [
"Alapan Kuila",
"Sudeshna Sarkar"
] | cs.CL | [
"cs.CL",
"cs.IR"
] |
0009-0006-4168-1190
IIT KHARAGPUR
Kharagpur
India
alapan.cse@gmail.com
0000-0003-3439-4282
IIT KHARAGPUR
Kharagpur
India
sudeshna@cse.iitkgp.ac.in
Navigating the complex landscape of news articles involves understanding the various actors or entities involved, referred to as news stakeholders. These stakeholders, ranging from policymakers to opposition figures, citizens, and more, play pivotal roles in shaping news narratives. Recognizing their stakeholder types, reflecting their roles, political alignments, social standing, and more, is paramount for a nuanced comprehension of news content. Despite existing works focusing on salient entity extraction, coverage variations, and political affiliations through social media data, the automated detection of stakeholder roles within news content remains an underexplored domain. In this paper, we bridge this gap by introducing an effective approach to classify stakeholder types in news articles. Our method involves transforming the stakeholder classification problem into a natural language inference task, utilizing contextual information from news articles and external knowledge to enhance the accuracy of stakeholder type detection. Moreover, our proposed model showcases efficacy in zero-shot settings, further extending its applicability to diverse news contexts.
CCS Concepts: Computing methodologies, Natural language processing
From Text to Context: An Entailment Approach for News Stakeholder Classification
Alapan Kuila and Sudeshna Sarkar
May 20, 2024
================================================================================
§ INTRODUCTION
In the intricate world of news reporting, especially when dealing with various government policies on socio-political and economic fronts, mass media frequently spotlights key players and entities directly or indirectly engaged in these issues. These entities, including policy-makers (like government officials), political opponents (representing opposition political parties), consumers (ordinary citizens), and civic society organizations, take center stage as opinion holders, significantly steering the overall tone and direction of news content. Referred to as Stakeholders in our work, these influential entities hold considerable sway over public discourse. The significance of our task lies in the ongoing competition among stakeholders for increased media coverage, a competition driven by the desire to heighten visibility among news consumers <cit.>. Moreover, research in journalism and digital media has delved into the influence of the political inclinations or social stature of these key stakeholders, providing insights into the inherent ideologies and political leanings of publishers <cit.>. Notably, the politicization of COVID-pandemic-related news, where politicians receive more coverage than scientists and researchers <cit.>, exemplifies the urgency to address the task at hand—grouping these essential entities within news articles into relevant stakeholder classes <cit.>. This involves identifying the stakeholder type of individual entities, thereby facilitating the recognition of actors sharing similar political views or socio-economic backgrounds—a fundamental challenge in the domain of computational journalism <cit.>.
In the domain of recognizing stakeholder classes, understanding the roles of key actors <cit.> in news topics and their involvement in particular issues becomes paramount. Existing research in this field reveals certain research gaps that our work aims to address. Previous studies, particularly in the financial domain <cit.> have focused on identifying salient entities <cit.> but primarily relied on extracting target company names from news articles <cit.>. While some efforts have been made to extract influential entities and determine their political affiliations and roles using various online resources <cit.>, this approach is cumbersome and often relies on incomplete knowledge bases. Notably, each news topic pertaining to a specific government policy involves a distinct set of potential stakeholders <cit.>. While common stakeholders like the government, opposition, and citizens are prevalent across various policies, there exist topic-specific stakeholders depending on the nature of the government policy, such as the Banking sector, private sectors (in Economic policies), and foreign nations and their leaders (in Foreign policies). The heterogeneous nature of the news domain and the domain dependency of stakeholder classes render the task challenging.
Comparable works, such as those focusing on political preference or ideology detection of salient actors, often rely on social media posts and metadata from social media accounts <cit.>. However, the brief nature of social media posts and the specific structural configuration of social network sites facilitate better representation learning of stakeholder entities, aiding in more accurate political perspective detection <cit.>. In contrast, news articles are descriptive and contain textual clues crucial for entity representation learning spread throughout the article. Occasionally, a specific news document may lack sufficient information for effective stakeholder-type classification. Our work aims to bridge these gaps by proposing an efficient approach that considers the heterogeneous nature of news articles and mitigates the challenges posed by the varied domain dependencies of stakeholder classes.
In this paper, we tackle the challenge of classifying stakeholder types for salient entities mentioned in news articles detailing Indian government policies. Recognizing the need for addressing new stakeholder classes within the heterogeneous news domain, we frame the classification problem as an entailment-based natural language inference task <cit.>. Inspired by <cit.>, we develop an entailment-based zero-shot classifier for stakeholder type detection, addressing the scarcity of labeled datasets <cit.> for topic-specific stakeholder classes. Our approach leverages both cross-document context as textual knowledge and excerpts from Wikipedia pages as global information for effective entity feature representation. We construct a weakly supervised textual entailment dataset, incorporating stakeholder entity descriptions and label information. Through training our proposed model[Link to code and dataset: <https://github.com/alapanju/NewsStake>] on the entailment task, it adeptly classifies salient entity mentions in news articles into unseen stakeholder classes with a high degree of success.
§ TASK DEFINITION
The news stakeholder classification task involves labeling prominent entities in news articles based on factors like their roles, social standing, political alignment, and geographic location. Our framework aims to determine each entity's stakeholder class by analyzing its description within news articles, offering insights into their perspectives on various news topics. We denote an entity mention as x = <e, M>, where e represents the entity phrase and M comprises snippets from multiple news articles containing e or its coreference mentions. We enhance M with information from external knowledge sources (e.g., Wikipedia) to create the aggregated entity description M' = w ⊕ M. Our objective is to design a topic-agnostic stakeholder classifier f that uses M' to accurately detect the true stakeholder classes (∈ S) of all prominent entities mentioned in heterogeneous news articles. Formally, f: e × M' → S. In subsequent sections, we outline our approach to formalizing the classification task as an entailment problem, detail the method for extracting cross-document entity representations, describe the model, and analyze the results.
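As a schematic illustration of this formalization, the pipeline can be sketched in Python as follows (a sketch only; all names here are illustrative, and `score` stands for the entailment scorer developed in the next section):

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EntityMention:
    """x = <e, M>: an entity phrase plus news snippets mentioning it."""
    phrase: str          # e, the surface form of the entity
    snippets: List[str]  # M, sentences gathered from multiple articles

def aggregate_description(x: EntityMention, wiki_intro: str) -> str:
    """M' = w (+) M: prepend external knowledge to the news context."""
    return " ".join([wiki_intro] + x.snippets)

def classify(x: EntityMention, wiki_intro: str, labels: List[str],
             score: Callable[[str, str], float]) -> str:
    """f: e x M' -> S, with `score` an entailment scorer over (premise, label)."""
    m_prime = aggregate_description(x, wiki_intro)
    return max(labels, key=lambda s: score(m_prime, s))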
§ OUR APPROACH
§.§ Formalizing Classification as Entailment
Addressing the challenge of a lack of properly labeled datasets for news stakeholder classification, we reframe the problem as a natural language inference (NLI) task. This strategic shift enables us to fine-tune a domain-agnostic zero-shot classifier, circumventing the need for topic-specific labeled datasets. Initially, we annotate entity mentions with stakeholder labels based on their descriptions from multiple news documents and Wikipedia pages. These annotations form the basis of our transformed NLI dataset, where each instance comprises a premise (entity description) and a hypothesis (stakeholder label-embedded prompt) as detailed in Table <ref>. During training, the model learns to predict whether the premise entails the corresponding hypothesis. In testing, the fine-tuned model utilizes entailment scores to classify query entities into unseen classes as illustrated in Figure <ref>, thus effectively addressing the challenge of a lack of labeled data for specific stakeholder classes.
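To make the inference pattern concrete, the sketch below scores label-embedded hypotheses with an off-the-shelf NLI model from the HuggingFace Hub (the premise text and label set are illustrative; the entailment models we actually fine-tune are described in the following sections):

from transformers import pipeline

# Off-the-shelf NLI-based zero-shot classifier, used here purely for illustration.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

premise = ("The entity is a farmer leader and union spokesperson who addressed "
           "protesters against the new agriculture acts at the Delhi border.")
labels = ["Government", "Opposition", "Citizen & Activist", "Media"]

# Each label is slotted into the hypothesis prompt; entailment scores rank classes.
result = classifier(premise, candidate_labels=labels,
                    hypothesis_template="This example is about a stakeholder of type {}.")
print(result["labels"][0])  # the highest-scoring stakeholder type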
§.§ Entity Representation
In this section, we delineate the procedure for generating entity descriptions for the stakeholder entity phrases.
Entity Identification
Initially, we utilize spaCy's [<https://spacy.io/api/entityrecognizer>] entity recognizer to identify all entity mentions within news articles. This tool enables the extraction of entity phrases along with their associated entity types. However, our focus is narrowed to specific entity types with potential stakeholder significance, including Person, Geopolitical-entity, and Organization. Additionally, we impose a constraint that entities must exhibit salience within the document context to qualify as valid stakeholders. To determine saliency, we consider entities referenced multiple times within the document, ensuring their significance and relevance as potential stakeholders.
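A minimal sketch of this step with spaCy (the mention-count threshold of two is an illustrative choice):

import spacy
from collections import Counter

# requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")  # any spaCy pipeline with an NER component
STAKEHOLDER_NER = {"PERSON", "GPE", "ORG"}  # Person, Geopolitical-entity, Organization

def salient_entities(article_text, min_mentions=2):
    """Keep entity phrases of stakeholder-relevant types mentioned repeatedly."""
    doc = nlp(article_text)
    counts = Counter(ent.text for ent in doc.ents if ent.label_ in STAKEHOLDER_NER)
    return [phrase for phrase, n in counts.items() if n >= min_mentions]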
WD-Entity Context
We identify relevant sentences within news articles containing the target entity or its coreference mentions to form the Within Document (WD) entity description. Leveraging the LINGMESS coreference model <cit.>, we ensure comprehensive identification of all entity coreference chains within the document.
CD-Entity Context
To address limited information in single documents, we extend our analysis across multiple documents, resolving cross-document entity references crucial for accurate stakeholder prediction. Using string matching and phonetic measures like Jaro-Winkler similarity[Experimentally, threshold values between 0.8 and 0.9 yield optimal results for effective cross-document entity resolution.], Levenshtein distance, and substring matching, we identify relevant cross-document coreference mentions. Aggregating entity context from individual articles, we form the Cross-Document (CD) entity description, providing a comprehensive stakeholder representation across documents.
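A simple matcher combining the substring and Jaro-Winkler tests might look as follows (a sketch using the jellyfish library; the full rule set above also includes Levenshtein distance and phonetic measures):

import jellyfish  # pip install jellyfish

def same_entity(mention_a, mention_b, threshold=0.85):
    """Heuristic cross-document match: substring containment, or string
    similarity above a threshold (values of 0.8-0.9 work well, see above)."""
    a, b = mention_a.lower().strip(), mention_b.lower().strip()
    if a in b or b in a:  # substring matching
        return True
    return jellyfish.jaro_winkler_similarity(a, b) >= threshold

same_entity("Amit Shah", "Home Minister Amit Shah")  # True via the substring rule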
Background Knowledge as Entity Context
We enhance entity descriptions by integrating external domain knowledge from relevant Wikipedia pages. Initially, we link target entity phrases to corresponding Wikipedia pages and extract introductory sentences for additional information. Leveraging the Wikipedia Python library[<https://pypi.org/project/wikipedia/>], we access and parse Wikipedia data, overcoming challenges in identifying correct pages for some stakeholder phrases. To address this, we manually retrieve India-related Wikipedia pages, enriching our dataset with substantial India-specific textual content. This augmentation significantly enhances the context and depth of our stakeholder descriptions.
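The retrieval step can be sketched as follows (the fallback to manually curated India-related pages is omitted):

import wikipedia  # the Wikipedia Python library referenced above

def wiki_context(entity_phrase, n_sentences=3):
    """Fetch introductory sentences for an entity; return an empty string when
    the page cannot be resolved automatically (such cases are handled manually)."""
    try:
        return wikipedia.summary(entity_phrase, sentences=n_sentences,
                                 auto_suggest=False)
    except (wikipedia.exceptions.DisambiguationError,
            wikipedia.exceptions.PageError):
        return ""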
§ DATA
In this section, we describe the dataset creation procedure and provide an overview of the resulting dataset.
News Domain All our experiments are based on the news articles on five Indian Govt policies:
1) Agriculture Act (2020)[<https://en.wikipedia.org/wiki/2020_Indian_agriculture_acts>], 2) Demonetization[<https://en.wikipedia.org/wiki/2016_Indian_banknote_demonetisation>], 3) Citizenship Amendment Bill (CAB)[<https://en.wikipedia.org/wiki/Citizenship_(Amendment)_Act,_2019>], 4) COVID pandemic management[<https://en.wikipedia.org/wiki/COVID-19_pandemic_in_India>] and 5) Abrogation of Article 370[<https://en.wikipedia.org/wiki/Revocation_of_the_special_status_of_Jammu_and_Kashmir>].
Stakeholder type selection
After consulting with domain experts (PhD scholars) from political science and social science backgrounds, we finalized a set of stakeholder groups for the aforementioned news topics. Given the multi-party system in India, various potential stakeholder groups exist, and we identified some of the prominent ones for our experiment. Certain stakeholders, such as political parties (both ruling and opposition), elected government officials, and bureaucrats, are common across all political news topics. Moreover, the stakeholder type "Citizen & Activist" plays a significant role in societal discourse, with their voices carrying substantial influence. Recognizing the pivotal role of media agencies in political news coverage, they are also considered as potential stakeholders. However, certain stakeholders are relevant only to specific news topics. The complete list of stakeholder groups considered in our experiment is provided in Table <ref>.
Article Collection
We gather topic-oriented news articles from GDELT[<https://blog.gdeltproject.org/>] and EventRegistry[<https://github.com/EventRegistry/event-registry-python>]. GDELT provides a dataset of geo-located events reported in news articles worldwide, while EventRegistry offers access to news data through its API. We extract the actual content from URLs provided by GDELT GKG table[<https://blog.gdeltproject.org/>] using scraping tools. EventRegistry's Python package facilitates filtering articles based on various parameters. After extraction, we use a bag-of-words approach and semi-supervised LDA <cit.> to identify topic-specific news articles.
Annotation Procedure
In the absence of existing annotated data fitting our research scope, we constructed a detailed annotation guide providing thorough descriptions of stakeholder classes and their interrelations. Two domain experts, both PhD scholars in India-specific political news, annotated stakeholder types using entity representations from Section <ref>. Any labeling uncertainties were resolved through discussion, resulting in a customized dataset for stakeholder classification.
Data Statistics
In Table <ref>, we report the statistics of the annotated dataset. We use the news topics Agriculture Act, COVID Control, and CAB for training our model. For evaluation purposes, we use the new topics Demonetization and Article 370. The number of labels, along with the label-specific training, development, and test splits used in our experiments, is reported in Table <ref>.
§ EXPERIMENTS
Our aim is to develop a model that receives an entity description as input and produces entailment scores for each candidate stakeholder label. The ultimate prediction involves selecting the highest-scoring class in a single-label classification setup or the top-K classes surpassing a specified threshold in a multi-label scenario. In our experimental setup, we assign a single stakeholder label to each candidate entity, providing a streamlined approach to stakeholder-type prediction.
§.§ Model Description
For addressing the stakeholder classification problem, we utilize two distinct model architectures: 1) an encoder-only model and 2) an encoder-decoder model. In the encoder-only model, we fine-tune the RoBERTa model <cit.>, comprising 355M parameters. Alternatively, for the encoder-decoder model, we fine-tune the BART model <cit.>, equipped with 400M parameters.
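A sketch of the encoder-only variant (training loop and hyperparameters omitted; the binary entail/not-entail head reflects the task construction described above):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large",
                                                           num_labels=2)

premise = "..."  # aggregated entity description M'
hypothesis = "This example is about a stakeholder of type Citizen & Activist."

# Sentence-pair encoding; logits[0] holds the scores of the two classes,
# whose order is fixed by the label mapping chosen at training time.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
logits = model(**inputs).logits[0]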
§.§ Results and Discussions
The performance of our two models in stakeholder type classification is showcased in Table <ref>. The "Known Labels" column presents the results on the Test_seen dataset, while the "Unknown Labels" column reports the performance on the Test_unseen dataset, representing the model's efficacy in a zero-shot setting. Table <ref> shows that both models exhibit comparable performance when classifying stakeholder labels in supervised settings. However, in zero-shot settings, the RoBERTa model outperforms the BART model by a slight margin. These results form the basis for our discussion on the effectiveness of the proposed models in handling both seen and unseen stakeholder labels.
§.§ Model Robustness
In this section, we explore the influence of hypothesis prompt-templates on model performance in a zero-shot setting. We assess this impact by employing two semantically equivalent hypothesis templates with distinct tokens and evaluating the resulting model performance. Furthermore, we compare our findings with the widely used NLI-based zero-shot classifier, bart-large-mnli from the HuggingFace Hub.
Figure <ref> illustrates that the performance of the bart-large-mnli model exhibits instability. Our proposed model's F1-score also varies by 2% and 5% when using different prompt templates. To address this issue, we employ P-tuning <cit.>, which utilizes trainable continuous prompt embeddings in conjunction with discrete prompts and trains the RoBERTa model. P-tuning enhances the model's robustness against changes in the hypothesis template.
§ CONCLUSION AND FUTURE WORK
In this paper, we propose a novel approach for stakeholder classification in news articles, leveraging natural language inference and zero-shot classifiers. Our method offers valuable insights into news narratives, demonstrating effectiveness in both seen and unseen scenarios. Additionally, we explore methods to design robust and stable zero-shot classifiers. Moving forward, we aim to enhance zero-shot model performance, predict finer stakeholder labels, and uncover news bias through stakeholder coverage analysis.
|
http://arxiv.org/abs/2405.09630v1 | 20240515180103 | Dynamical coupling of Keplerian orbits in a hierarchical four-body system: from the Galactic Centre to compact planetary systems | [
"Myank Singhal",
"Ladislav Šubr",
"Jaroslav Haas"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.EP",
"astro-ph.SR"
] |
This study focuses on the long-term evolution of two bodies in nearby initially coplanar orbits around a central dominant body perturbed by a fourth body on a distant Keplerian orbit. Our previous works that considered this setup enforced circular orbits by adding a spherical potential of extended mass, which dampens Kozai–Lidov oscillations; it led to two qualitatively different modes of the evolution of the nearby orbits. In one scenario, their mutual interaction exceeds the effect of differential precession caused by a perturbing body. This results in a long-term coherent evolution, with nearly coplanar orbits experiencing only small oscillations of inclination. We extend the previous work by (i) considering post-Newtonian corrections to the gravity of the central body, either instead of or in addition to the potential of extended mass, (ii) relaxing the requirement of strictly circular orbits, and (iii) removing the strict requirement of complete Kozai–Lidov damping. Thus, we identify the modes of inter-orbital interaction described for the zero-eccentricity case in the more general situation, which allows for its applicability to a much broader range of astrophysical systems than considered initially. In this work, we scale the systems to the orbits of S-stars; we consider the clockwise disc to represent the perturbing body, with post-Newtonian corrections to the gravity of Sagittarius A* playing the role of damping potential. Considering post-Newtonian corrections, even stellar-mass central bodies in compact planetary systems can allow for the coupled evolution of Keplerian orbits.
black hole physics – Galaxy: centre – celestial mechanics – stars: kinematics and dynamics
§ INTRODUCTION
The study of dynamics in Keplerian potentials is an old yet very progressive area of research. The secular orbital evolution of light (test) particles in the dominating central potential accompanied by a distant perturber is one of the classical problems in celestial mechanics. According to the pioneering works of <cit.> and <cit.>, the orbital solution within this hierarchical three-body setup is often called Kozai–Lidov (K–L) dynamics. Various works have extended its original formulation, which supposed a non-evolving circular orbit of the perturber, e.g., eccentric perturber <cit.>, relativistic effects <cit.>, mass loss and transfer <cit.>.
Considering the four-body setup brings new degrees of freedom and also more variants of the general setup <cit.>. A possible configuration has recently been investigated by <cit.>. Similarly to K-L dynamics, their setup consists of a dominating central body and a massive perturber on a circular orbit. Contrary to K-L dynamics, they considered the orbital evolution of two light, mutually gravitationally interacting bodies inner to the orbit of the massive perturber. In their work, <cit.> focused on the case when the two inner orbits are close to each other in terms of semi-major axes and are initially co-planar (with arbitrary inclination with respect to the orbit of the perturber). An additional assumption, primarily imposed due to limitations of the used calculus, was the non-evolving zero eccentricity of the two inner orbits. <cit.> argue that this assumption is relevant if another non-Keplerian spherically symmetric potential is present within the system, being strong enough to damp the K-L oscillations of the inner bodies enforced by the massive perturber. Within this setup, <cit.> developed a secular theory showing that the two inner orbits periodically exchange their angular momentum such that their inclinations oscillate. If their mutual interaction is strong enough (which depends on their mass and separation), the precession of their orbits is synchronised, i.e., the initial co-planar structure is nearly preserved. In the other case, orbital planes of the inner bodies precess differentially due to the perturbing force of the outer body, which leads to disruption of the co-planar configuration. We refer to the temporal evolution of the specific four-body setup introduced by <cit.> as the VHS mechanism throughout this paper.
A shortcoming of the secular theory of VHS dynamics is the requirement of the spherically symmetric external potential needed to dampen the Kozai–Lidov oscillations, which reduces its applicability in observed astrophysical systems. However, <cit.> introduced a physically realistic setup in which the VHS mechanism is applicable. They studied a system in which the super-massive black hole (SMBH) in the Galaxy's centre, Sagittarius A* (Sgr A*) <cit.>, represents the dominant body, and the additional spherical potential is due to the surrounding nuclear star cluster. They considered the perturbing body to be the circum-nuclear gaseous disc <cit.> and the bodies on inner nearby co-planar orbits to be the observed stars from the young stellar disc that lies within distances of 0.04 – 0.4 pc from the central super-massive black hole <cit.>. <cit.> suggested that the four-body dynamics in the spherically symmetric external potential can explain the specific, near-perpendicular orientation of the stellar disc with respect to the distant perturber.
Our study aims to expand the scope of the VHS dynamics described in <cit.> and to explore its applicability in a broader range of astrophysical systems by relaxing some of the assumptions of the underlying secular theory. Firstly, we develop the idea, suggested in the original work, that the non-Keplerian spherical potential can be omitted if we consider post-Newtonian terms in the gravity of the central body while still working within the secular approach. Secondly, we investigate the evolution of systems with a non-zero eccentricity of the two inner orbits by directly integrating the equations of motion. Finally, we consider a scenario in which the eccentricity of the inner orbits evolves over time, i.e., when the K-L oscillations are not entirely damped.
The paper is structured as follows: In Section <ref>, we provide a detailed description of the four-body setup we are studying. Section <ref> provides a summary of the secular theory developed in <cit.>, along with a discussion of the damping of K-L oscillations due to the effects of general relativity. Section <ref> describes several examples of systems with non-zero eccentricity that were integrated. Finally, we present our conclusions on the generalised VHS dynamics in Section <ref>.
§ SETUP
We study a hierarchical four-body system with a dominant central body, characterized only by its mass, M_∙. The system further consists of a distant perturber of mass M_p on a circular orbit with radius R_p around the central body. The orbit of the perturber defines the reference plane. We can choose any line within this plane to define our reference axis to calculate the longitude of the ascending node, Ω.
Finally, we consider two light particles of masses m and m^' where m, m^'≪ M_p on orbits around the central body with a semi-major axes a and a^', which are much smaller than R_p and a^'<a. These light bodies are in inclined orbits, having inclinations i and i^' with respect to the reference plane. The last important parameters we consider are the longitudes of the ascending nodes of the two bodies, Ω and Ω^'. Initial conditions are set up such that i=i^' and Ω=Ω^'.
As an example, in this work we use the objects observed in the Galactic Centre as an astrophysical system to provide us with realistic values for M_∙, M_p and R_p. We set up the system to correspond to the situation in the vicinity of the Sgr A* black hole, i.e., M_∙ = 4 × 10^6 M_⊙ <cit.>. We consider a distant perturber of mass M_p = 10^4 M_⊙ and semi-major axis R_p = 0.1 pc, aiming to mimic the overall gravitational influence of the observed clockwise young stellar disc (CWD) <cit.>. The two light particles could be representatives of the S-stars that are observed in the Galactic Centre.
§ SECULAR THEORY
In this Section, we follow the mathematical approach used in <cit.> and briefly sketch the main ideas. In particular, we consider a secular approach to describe the long-term evolution of the system described in Sec. <ref>. For this, the mean interaction potential of the system, ℛ, averaged over the fast-changing variables, needs to be specified. As it can be given as a direct sum of the individual terms describing different components of the system, we discuss these separately in the following sections.
§.§ Potential of the distant / outer perturber
The averaged interaction potential between a perturbing body on a circular orbit with radius R_p and a particle on an orbit with semi-major axis a, eccentricity e and inclination i with respect to the orbital plane of the perturber reads <cit.>:
ℛ_p = - (G m M_p / 16 R_p) (a/R_p)^2 [(2 + 3e^2)(3cos^2 i - 1) + 15 e^2 sin^2 i cos 2ω],
where ω is the argument of pericentre of the orbit. Suppose ℛ_p is the only component of the total perturbing potential (i.e., the system is reduced to a three-body setup). In that case, the body on the inner orbit is subject to quadrupole K–L dynamics <cit.>. Depending on the initial conditions, its eccentricity and inclination may undergo large periodic variations that are mutually coupled through the so-called Kozai integral, C ≡ √(1 - e^2) cos i, which, together with the semi-major axis (a) and ℛ_p, is a conserved quantity along the orbit evolution.
The number of known integrals of motion allows for an effective insight into the K–L dynamics through plots of isocontours of ℛ in the e-ω space, which for fixed values of a and C give sets of possible evolutionary tracks (see Figure <ref>). These sets form two qualitatively different topologies: For C > √(3/5), they consist of concentric ovals, which means that the eccentricity oscillates slightly along the evolutionary path and ω rotates within the whole range (0, π) (see right panel of Figure <ref>). If C ≤ √(3/5), the topology qualitatively changes: a separatrix crosses the central point; it divides the diagram into zones with ω librating around the value of π/2 or 3π/2 and the outer rotation zone (left and middle panel of Figure <ref>). The lower the value of C, the larger the eccentricity oscillations. The characteristic time scale for these oscillations is given by <cit.>:
T_K ≡ (M_∙/M_p) R_p^3 / (a √(G M_∙ a)) .
An important result from the isocontour plots is that the zero-eccentricity orbit is stable for C > √(3/5), while it undergoes periodic variations when C is below the limiting value.
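For orientation, the sketch below (in Python) evaluates T_K for the Galactic-Centre-like parameters quoted above, together with the standard quadrupole, test-particle maximum eccentricity implied by a given C; the choice a = 0.01 pc is ours, for illustration only:

import numpy as np

G = 4.30091e-3     # gravitational constant [pc (km/s)^2 / M_sun]
PC_KM = 3.0857e13  # kilometres per parsec
YR_S = 3.1557e7    # seconds per year

def kozai_timescale(M_bh, M_p, R_p, a):
    """T_K = (M_bh/M_p) R_p^3 / (a sqrt(G M_bh a)), returned in years."""
    t = (M_bh / M_p) * R_p**3 / (a * np.sqrt(G * M_bh * a))  # [pc/(km/s)]
    return t * PC_KM / YR_S

def e_max(C):
    """Maximum K-L eccentricity for C <= sqrt(3/5), starting near e = 0."""
    return np.sqrt(1.0 - (5.0 / 3.0) * C**2) if C <= np.sqrt(0.6) else 0.0

print(kozai_timescale(M_bh=4e6, M_p=1e4, R_p=0.1, a=0.01) / 1e6, "Myr")  # ~3 Myr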
Finally, let us note that the longitude of the ascending node, Ω, rotates monotonically in the full range of (0, 2π) for arbitrary initial conditions. The rate of precession depends on the other orbital elements, as well as on the mass and semi-major axis of the perturber. However, the value of Ω does not affect the evolution of the other orbital elements, which is a natural consequence of the axial symmetry of the problem.
§.§ Spherical potential
In our study of four-body systems, we examine two distinct sources of an external spherical potential. The first source is the presence of an extended mass around the central body, while the second source is an approximation of the first-order post-Newtonian corrections to the gravity of M_∙. Although these sources differ, they have very similar effects and impact the evolution of the two bodies in a similar manner.
§.§.§ Extended Mass
<cit.> and
<cit.> considered such an astrophysical context involving an extended mass around the central body, influencing the secular dynamics of the inner orbit(s). In particular, the authors provide an analytic form for the mean potential corresponding to the mass density distribution with power-law profile, ρ_c ∝ r^(β-2),
ℛ_c = -(G m M_c / β R_p) (a/R_p)^β 𝒥(e,β),
where M_c stands for the integral of the extended mass density within the orbit of the perturber (R_p) and
𝒥(e,β) = (1/π) ∫_0^π (1 - e cos u)^(1+β) du = 1 + ∑_{n≥1} a_n e^{2n}.
The coefficients a_n are given by
a_{n+1}/a_n = [1 - (3+β)/(2(n+1))] [1 - (2+β)/(2(n+1))],
with a_1=β (1+β)/4.
From the spherical symmetry of this perturbing potential, it follows that its only manifestation in the orbital evolution is a monotonic (retrograde) rotation of the argument of pericentre, ω.
When combined with the potential of the distant perturber, ℛ_p, the potential of the extended mass generally leads to damping of the Kozai–Lidov oscillations (see <cit.> for a detailed discussion). This damping stabilizes the zero-eccentricity orbit for arbitrary inclination for a suitable choice of system parameters. Note also that in such a situation, the monotonic rotation of the longitude of the ascending node remains the primary manifestation of the influence of the distant perturber. <cit.> showed that for damped K–L oscillations, the rate of change of the longitude of the ascending node is given by
dΩ/dt ≈ -(3/4) (cos i / T_K) (1 + (3/2) e^2) / √(1 - e^2) ≈ constant.
This equation shows that when the K–L oscillations are damped, dΩ/d t depends on the semi-major axis through T_K (see <ref>) and will result in differential precession for different orbits.
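In code, the rate and the resulting differential precession of two orbits read as follows (a direct transcription of the rate above; i in radians, output in radians per the time unit of T_K):

import numpy as np

def node_precession_rate(T_K, incl, e):
    """dOmega/dt for damped K-L oscillations."""
    return -0.75 * (np.cos(incl) / T_K) * (1.0 + 1.5 * e**2) / np.sqrt(1.0 - e**2)

def differential_precession(T_K_outer, T_K_inner, incl, e):
    """Relative nodal drift of two orbits sharing incl and e."""
    return (node_precession_rate(T_K_outer, incl, e)
            - node_precession_rate(T_K_inner, incl, e))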
§.§ Post-Newtonian corrections
It has already been discussed in the literature that relativistic corrections to the gravity of the central body can play a role similar to the spherical potential of the extended mass in secular dynamics <cit.>, enforcing a (prograde) rotation of the argument of the pericentre, ω. A straightforward way to implement this relativistic effect within the framework presented above is to use the approximation given by <cit.>. This approximation mimics the rotation of the argument of pericenter due to the relativistic effect of the central body using an additional spherically symmetric potential,
V_GR = -(G M_∙ h^2)/(c^2 r^3),
where h ≡√(GM_∙ a(1-e^2)) is the specific angular momentum of the test particle and c stands for the speed of light. Formally, this potential is equivalent to spherical mass distribution with density profile ρ∝ r^-5, that is, the form of the averaged potential given by <ref> with β = -3 may be directly used, giving us the mean potential of the first-order post-Newtonian correction,
ℛ_GR = -(G M_∙ m h^2 / c^2 a^3) 𝒥(e,-3).
Note that, in comparison to the general mean potential for the extended mass distribution, <ref> contains an additional dependence on eccentricity through h, and it has one less parameter (M_∙ vs. M_c and R_p). Also note that <ref> diverges as e approaches unity.
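As a quick numerical check of when this damping matters, the time for the standard first-order relativistic pericentre advance to accumulate 2π can be compared with T_K; the sketch below uses SI constants, and the criterion t_GR ≪ T_K should be read as a rough rule of thumb rather than a sharp boundary:

import numpy as np

G_SI, C_SI = 6.674e-11, 2.998e8               # SI units
M_SUN, PC, YR = 1.989e30, 3.0857e16, 3.1557e7

def t_gr(M_bh, a_pc, e):
    """Years for the GR pericentre advance, 6 pi G M / (c^2 a (1 - e^2))
    per orbit, to accumulate a full 2 pi."""
    a = a_pc * PC
    P = 2.0 * np.pi * np.sqrt(a**3 / (G_SI * M_bh * M_SUN))  # orbital period [s]
    dw = 6.0 * np.pi * G_SI * M_bh * M_SUN / (C_SI**2 * a * (1.0 - e**2))
    return (2.0 * np.pi / dw) * P / YR

# e.g. t_gr(4e6, 0.01, 0.3) ~ 8e5 yr, i.e. shorter than T_K ~ 3 Myr at the same a.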
We visualise the damping effect of the post-Newtonian corrections using the isocontours of the perturbing potential in the e-ω space in <ref>. In particular, we show three examples of isocontours of ℛ = ℛ_p + ℛ_GR in the (e,ω) plane for a randomly selected value of C = 0.34; the properties of the central body and perturber are the same as those of Sgr A* (M_∙ = 4×10^6 M_⊙) and the CWD (M_p = 10^4 M_⊙, R_p = 0.1 pc) as described in Section <ref>. We change the value of the semi-major axis of the inner body, i.e., we vary the strength of ℛ_GR with respect to ℛ_p.
In the left panel of <ref>, the topology is very similar to that of the middle panel of <ref>, which means that ℛ_p dominates over ℛ_GR in absolute value for most of the parameter space. The middle panel of <ref> shows a setup with a smaller value of the semi-major axis, leading to a decrease in the absolute value of ℛ_p while, at the same time, it leads to a growth in the absolute value of ℛ_GR, which means that it contributes considerably to ℛ. The topology of the isocontours of ℛ remains the same as in the previous case, but the overall structure changes so that the separatrix does not reach smaller eccentricity values. Further reduction of the semi-major axis, as shown in the right panel of <ref>, leads to ℛ_GR fully dominating over ℛ_p, and hence the isocontours of ℛ form nearly circular shapes as a consequence of ℛ_GR being independent of ω. In this case, the K–L oscillations are strongly damped, and the zero-eccentricity orbit becomes stable and does not evolve.
For the sake of the analytic treatment of the four-body dynamics described in the following sections, the system configuration must be such that the zero-eccentricity orbit is stable. However, due to the non-trivial dependence of ℛ_GR and ℛ_p on the system parameters, this condition must be evaluated from case to case.
In Figure <ref>, we evaluate it for parameters of the system that may correspond to the situation in the vicinity of the Sgr A* black hole, with the semi-major axis of the inner body sampled within the range 0.01 - 0.5 R_p, which falls into the region of the S-stars for the example setup described in Section <ref>. We quantify the damping of K–L oscillations by evaluating the maximum value of eccentricity e_max reached by the system during its evolution when starting from near-zero eccentricity. The K–L oscillations are successfully damped when we obtain smaller values of e_max, as the only source of change in eccentricity in these systems is the K–L dynamics. We see that in this setup, the GR effects damp the K–L oscillations for the entire range of C for a ≲ 0.14 R_p. At the same time, for a ≳ 0.3 R_p the K–L dynamics is less affected; that is, the zero-eccentricity orbit is stable only for C ≳ √(3/5), shown by the white dashed line in <ref>.
§.§ Inter-particle potential
In order to describe the four-body setup, <cit.> evaluated the averaged inter-particle potential for circular orbits,
ℛ_i = -(G m m^'/a) Ψ(α, n·n^'),
where α ≡ a^'/a, and n and n^' are the unit vectors normal to the mean orbital planes of the two stars, which can be parameterized as n = (sin i sin Ω, -sin i cos Ω, cos i)^T and n^' = (sin i^' sin Ω^', -sin i^' cos Ω^', cos i^')^T. We can define the function Ψ as
Ψ(ζ, x) = ∑_l ⩾ 2 [ P_l(0)]^2 ζ^l P_l(x) ,
where P_l(x) are the Legendre polynomials.
We can express the potential energy due to the interaction between the inner circular orbits and the outer perturber as
ℛ_p,0 = -(G m M_p/R_p) Ψ(a/R_p, cos i),
ℛ_p,0^' = -(G m^' M_p/R_p) Ψ(a^'/R_p, cos i^').
§.§ VHS mechanism
The total averaged potential of the four-body setup described in Section <ref> is:
ℛ = ℛ_i + ℛ_p,0 + ℛ_p,0^'
and the classical orbital elements are assumed to evolve according to the Lagrange equations
<cit.>:
d cos i/dt = -(1/(m η a^2)) ∂ℛ/∂Ω, dΩ/dt = -(1/(m η a^2)) ∂ℛ/∂cos i,
d cos i^'/dt = -(1/(m^' η^' a^'^2)) ∂ℛ/∂Ω^', dΩ^'/dt = -(1/(m^' η^' a^'^2)) ∂ℛ/∂cos i^',
Here η and η^' are the mean motion frequencies of the two bodies. Although the average potential due to either the extended mass or the relativistic corrections plays an essential role in damping the K–L oscillations of the circular orbits, we may omit it here as it does not contribute to the target subset of Lagrange equations, <ref> & <ref>.
The set of <ref> & <ref> with the mean perturbing Hamiltonian (<ref>) was first studied by <cit.>, and we refer to their solution in general as the VHS mechanism. These equations may be translated to equations for the normal vectors, n and n^', of the orbital planes <cit.>.
dn^'/d t = ω^'_I (n^'×n) + ω^'_p (n^'×e_z),
dn/d t = ω_I (n×n^') + ω_p (n×e_z),
where
ω^'_I= -η^'α(m/M_∙) Ψ_x(α,n· n^'), ω_I= -η(m^'/M_∙) Ψ_x(α,n· n^')
ω^'_p= -η^'(M_p/M_∙) Ψ_x(a^'/R_p,cosi^'), ω_p= -η(M_p/M_∙) Ψ_x(a/R_p,cosi),
and
Ψ_x (ζ,x) ≡d/dxΨ (ζ,x).
The frequencies ω_I, ω^'_I and ω_p, ω^'_p correspond to the frequencies caused by the mutual interaction of the two bodies and the perturber, respectively.
<cit.> have shown, both by means of analysis of the averaged Hamiltonian and by direct integration of the Lagrange equations, that there exist two qualitatively distinct classes of solutions. On a qualitative level, if the masses of the inner orbits are small enough, or their separation (in terms of semi-major axes) is sufficiently large, or a combination of both, we call the regime of interaction weak. In the opposite case, we call the interaction strong. An explicit formula defining the boundary between the two modes is not known; nevertheless, an estimate for a particular setup can be obtained by comparing the frequencies ω_I and ω^'_I to ω_p and ω^'_p. In the strong mode, ω_I, ω^'_I ≫ ω_p, ω^'_p, which means that the evolution of the orbital planes described by <ref> is governed by the mutual interaction of the inner orbits. On the other hand, if ω_I, ω^'_I ≪ ω_p, ω^'_p, the weak mode of the VHS mechanism takes place, in which the two planes precess differentially due to the gravitational influence of the outer orbit. Note that none of the frequencies ω_I, ω^'_I, ω_p and ω^'_p are constant over time. Hence, which mode is realised cannot be reliably determined from their initial values.
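The series Ψ and its derivative Ψ_x can be evaluated numerically, and the initial frequencies compared, as in the sketch below (the series converges slowly as its first argument approaches unity, so the truncation order may need to be raised for close orbit pairs; the ratio is only indicative since the frequencies evolve in time):

import numpy as np
from scipy.special import eval_legendre

def psi(zeta, x, lmax=60):
    """Psi(zeta, x) = sum_{l>=2} [P_l(0)]^2 zeta^l P_l(x), truncated at lmax;
    odd-l terms vanish because P_l(0) = 0 for odd l."""
    ls = np.arange(2, lmax + 1)
    return float(np.sum(eval_legendre(ls, 0.0)**2 * zeta**ls * eval_legendre(ls, x)))

def psi_x(zeta, x, h=1e-6):
    """d Psi / dx via central differences (adequate away from |x| = 1)."""
    return (psi(zeta, x + h) - psi(zeta, x - h)) / (2.0 * h)

def mode_ratio(eta, eta_pr, m, m_pr, M_bh, M_p, alpha, a_Rp, apr_Rp, nn, ci, ci_pr):
    """min(|omega_I|, |omega_I'|) / max(|omega_p|, |omega_p'|); values well above
    unity hint at the strong mode, values well below it at the weak mode."""
    om_I   = abs(eta * (m_pr / M_bh) * psi_x(alpha, nn))
    om_Ipr = abs(eta_pr * alpha * (m / M_bh) * psi_x(alpha, nn))
    om_p   = abs(eta * (M_p / M_bh) * psi_x(a_Rp, ci))
    om_ppr = abs(eta_pr * (M_p / M_bh) * psi_x(apr_Rp, ci_pr))
    return min(om_I, om_Ipr) / max(om_p, om_ppr)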
§.§.§ Weak mode of the inter-particle interaction
In the weak mode of the VHS mechanism, the two orbits periodically interchange their angular momenta such that their magnitudes stay constant, but mutual inclination changes. The longitudes of their ascending nodes rotate at different rates while still being mutually influenced. This independent rotation of Ω disrupts the original co-planar configuration. At the moments when ΔΩ≡Ω^' - Ω reaches the value of a multiple of 2π, the relative inclination of the two orbits drops to zero, and the planar structure is re-established for the moment.
An example of this solution is shown in <ref> with parameters of the system given in <ref> under the label of model 1. Besides showing the orbital evolution according to the secular approximation, we also plot the evolution of the orbital elements coming from direct integration of the equations of motion. For the latter case, we utilize the integrator of <cit.>, since it allows for integrations of a few-body system with up to the 2.5 post-Newtonian order. Both solutions share the same qualitative properties, with slight differences in the amplitude and period of the oscillations of the inclinations, which indicate the quality of the secular approximation in this particular configuration.
An estimate of the characteristic time-scale, T_char, of the weak mode of the VHS mechanism follows from the facts that (i) the precession of Ω is dominated by the distant perturber, i.e., its rate is nearly constant but different for the two inner bodies, and (ii) the period of the oscillation of inclinations is determined by the time instances when Ω - Ω^' = 2π. To find T_char for which Ω(T_char) - Ω^'(T_char) = 2π, we can apply <ref> independently to both the inner and outer orbits, approximating inclinations and eccentricities with their initial values (i = i^' = I_0 and e = e^' = e_0), which yields:
T_char ≈ (16π √(1-e_0^2)) / (3(3e_0^2 + 2) cos I_0) [1/T_K - 1/T^'_K]^-1.
For the case of e_0 = 0, this simplifies to the formula given by <cit.>. Plugging the initial conditions of model 1 into <ref>, we get T_char ≈ 192 Myr, which is of the same order of magnitude as T_1 = 123.42 Myr.
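A direct transcription of the formula above (I_0 in radians; the output carries the time unit of the supplied Kozai time-scales):

import numpy as np

def t_char_weak(e0, I0, T_K, T_K_prime):
    """Characteristic period of the weak mode; T_K_prime belongs to the
    inner (primed) orbit with a' < a."""
    prefac = 16.0 * np.pi * np.sqrt(1.0 - e0**2) \
             / (3.0 * (3.0 * e0**2 + 2.0) * np.cos(I0))
    return prefac / (1.0 / T_K - 1.0 / T_K_prime)

# For e0 = 0 the prefactor reduces to 8 pi / (3 cos I0), recovering the
# zero-eccentricity formula mentioned above.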
§.§.§ Strong mode of the inter-particle interaction
The strong mode of the VHS mechanism occurs when the masses of the inner orbits are large and/or their orbits are closer to each other. In this case, inter-particle interaction surpasses the differential precession of Ω and Ω^' induced by the distant perturber, and the orbits co-rotate. Similarly to the weak case, the two inner orbits keep the magnitude of total angular momenta constant, yet exchange angular momentum so that their inclinations undergo mirrored oscillations. The amplitude of these oscillations is typically smaller than in the weak mode and, therefore, the two orbits stay nearly coplanar during the whole course of the secular orbit evolution.
<ref> shows a typical example of the strong mode, which occurs in the setup with initial conditions labelled as model 3 in <ref>. Just as in the weak case, we show the results of the orbital evolution according to the secular approximation as well as from direct integration of the equations of motion. The solutions are qualitatively the same, which confirms that the secular theory is suitable for understanding the nature of the VHS mechanism.
As the differential precession of Ω is suppressed in this mode of the VHS mechanism, it cannot be used to define any characteristic time-scale. Instead, the fact that ω_I and ω^'_I dominate in <ref> can be used to estimate the period of the orbital evolution as T_char = 2 / (ω_I + ω^'_I). For model 3 we get T_char ≈ 0.98 Myr, while the numerical integrations give a period of 0.97 Myr.
§ NUMERICAL SOLUTIONS
The secular theory discussed above relies on the assumption of constant zero eccentricity of the two nearby orbits. This section aims to investigate the four-body dynamics that relax this strict constraint. Since no analytic theory is formulated for the non-zero eccentricity case (and is expected to be non-trivial as the dynamics of nearby eccentric orbits is susceptible to chaotic behaviour induced by close encounters), we study our desired setups with sufficient accuracy using direct numerical integrations.
The lack of an analytic theory makes it difficult to define distinct classes of possible evolution. Our strategy is then to perform a set of integrations with different initial conditions and compare the results with the ideal cases (zero eccentricity) for which we have an analytic insight.
Therefore, the set of examples presented below is likely to be incomplete in terms of all the possible outcomes, but it still shows that the two basic modes of the VHS mechanism have identifiable effects in more general setups.
<ref> lists the initial conditions of the four setups we discuss in this section, along with the zero-eccentricity cases discussed in the previous section. A large number of direct integrations with relativistic corrections were conducted using the aforementioned integrator; we selected a subset of the runs to clearly demonstrate the strong and weak modes of the VHS mechanism when we relax certain requirements of the secular theory. These individual cases are discussed separately in the following sections.
§.§ Weak mode with e > 0
Let us start by relaxing the condition of zero eccentricity of the two nearby orbits. Model 2 is then straightforwardly derived from model 1 simply by changing the initial eccentricity values from zero to 0.721. Since the K–L oscillations are damped, the eccentricity of the orbits does not evolve. We can see this in the temporal evolution of selected orbital elements of the two particles for this setup in <ref>. When Ω - Ω^' reaches a multiple of 2π, the relative inclination of the two particles drops to zero. This directly agrees with the angular momentum exchange between the two bodies in the weak mode of the VHS mechanism, as described in Sec. <ref>.
The period of the secular evolution within the weak mode of the VHS mechanism for model 2 is clearly shorter than in the circular case (model 1). This is in accord with the eccentricity dependence of the characteristic time-scale in <ref>. For model 2 it gives T_char ≈ 75 Myr, while the period determined directly from the numerical integrations is T_2 ≈ 47 Myr.
§.§ Strong mode with e > 0
In another example, we consider a system based on model 3, but with an initial eccentricity of 0.77, and we refer to this model as model 4. <ref> shows the evolution of the orbital elements of this model. We observe that the inclinations of the two bodies exhibit mirrored oscillations, while the value of ΔΩ oscillates around zero. These two signatures suggest that the system is influenced by the strong mode of the VHS mechanism, albeit with some qualitative differences compared to the zero-eccentricity case.
Contrary to the previous cases (models 1–3), the orbits undergo non-periodic changes of their semi-major axes, which means that there is a stochastic energy exchange occurring between the two particles. We attribute this to particles on two nearby eccentric orbits occasionally getting so close to each other that the instantaneous two-body scattering noticeably affects their semi-major axes and eccentricities. These scattering events mean that we cannot treat the orbital evolution as secular.
A clear distinction between models 3 and 4 is the evolution of the inclination of the two particles. In model 3, the orbits evolve in accordance with the secular theory of <cit.>, which implies that the inner of the two coplanar orbits is pushed to higher values of inclination while the inclination of the outer orbit decreases. The evolution is more complex in model 4 compared to model 3. In model 4, the value of Δi ≡ i^' - i periodically changes its sign (see Appendix <ref> for further discussion). On the other hand, the (quasi)periodic mirrored oscillations of the inclinations of the two orbits suggest that the angular momentum transfer between them is secular. The magnitude of the change in inclination is also higher in model 4 compared to model 3, but still smaller compared to the inclination oscillations present in the weak mode of the VHS mechanism (models 1 and 2).
Finally, let us focus on the evolution of the longitudes of the ascending nodes Ω and Ω^' of the two particles. If these were test particles, i.e., not interacting with each other, Ω and Ω^' would evolve at different constant rates according to <ref>, which means that ΔΩ would grow monotonically in time, reaching a value of ≈ 28 on the time scale of 20 Myr in the setup of model 4. However, in the bottom panel of <ref>, we see limited oscillations of ΔΩ around zero with maximum amplitude ≈ 10. Small ΔΩ means that the differential precession is suppressed, although not as ideally as in model 3 with zero eccentricity.
Considering the two necessary signatures in the evolution of the orbital elements, i.e., small-amplitude mirrored oscillations of inclinations and suppressed differential precession in terms of ΔΩ, we state that the system described in model 4 undergoes a generalised version of the strong mode of the VHS mechanism with non-zero eccentricity.
§.§ Strong mode on the top of Kozai–Lidov cycles
Now that we have seen examples of systems with nonzero eccentricity showing either the weak or strong mode of VHS mechanism, we now try to relax the requirement of having constant eccentricity by reducing the damping of K–L dynamics. We can do this by increasing the ratio of a/R_p, which strengthens the perturbing potential due to the outer body with respect to the damping potential due to the post-Newtonian corrections. We study model 5 (<ref>) to explore the VHS mechanism with variable eccentricity.
The left panels of <ref> show the evolution of orbital elements for this system, with the eccentricity oscillations of the two particles now sharing a common period and amplitude. Their inclinations have a more complex evolution, but it is straightforward to identify short-term mirrored oscillations around the mean value. The mean value of the inclination oscillates due to the K–L dynamics, which acts on a much longer time scale than the strong mode of the VHS mechanism. In this case, the inclinations evolve according to the secular theory of <cit.> in that the inclination of the inner body is always greater than that of the outer one. Finally, it is the suppressed differential precession of Ω and Ω^' which indicates that we see the two particles moving in the regime where the strong mode of the VHS mechanism is present, i.e., with a mutually locked orientation of their orbital planes while undergoing typical long-term K–L cycles.
Since the particles undergo two independent types of secular evolution at once, we find it beneficial to demonstrate how the orbits would evolve without the VHS mechanism. We can achieve this in the test-particle regime, that is, when the mutual interaction between the two inner bodies is suppressed, as shown in the right panels of <ref>. Both particles undergo independent K–L oscillations in the test-particle regime with different periods and amplitudes. The difference of the longitudes of the ascending nodes, ΔΩ, systematically (though not monotonically) grows over time. We can also see that the periods of the K–L oscillations differ between the left and right panels. This means that the VHS mechanism changes T_K of the two bodies, so that the common period lies between the values of T_K the two bodies would have if they were evolving independently.
Let us also point out the apparently regular nature of this setup, in contrast to the above-discussed model 4 in Section <ref>. This property, however, is not generic, as the system is chaotic; slightly modified initial parameters of the system may lead to a dramatically different evolution of the orbits.
§.§ Transition from the strong to weak mode
It has been demonstrated already in Section <ref> (model 4) that the systems with non-zero eccentricity may be subject to mildly chaotic evolution due to stochastic close encounters between the two inner particles. Model 6 in <ref> is another example of a system where such encounters play an essential role. One notable difference from model 4 is that the initial eccentricity in the current setup is close to zero but not precisely zero. The left panels of <ref> show the temporal evolution of the setup of model 6.
From the beginning, until T ≈ 56 Myr, it shows an evolutionary pattern similar to that of model 4, i.e., the inclinations of the two inner particles undergo mirrored oscillations with Δi periodically changing its sign. At the same time, ΔΩ oscillates around zero value, meaning the two orbits co-rotate and are almost co-planar, i.e., the orbits undergo the strong mode of the VHS mechanism. Also similar to model 4 is the stochastic (though rather subtle) evolution of the semi-major axes and eccentricities.
At T ≈ 56 Myr, another close encounter of the two inner particles leads to a more substantial perturbation of their orbits in semi-major axes and eccentricities. Subsequent evolution shows that this event led to the transition from the strong to the weak mode: the inclinations of the two particles exhibit larger-amplitude mirrored oscillations. At the same time, the longitudes of the ascending nodes precess differentially. At the moments when ΔΩ reaches a natural multiple of 2π, both orbits share the same value of inclination, i.e., they are co-planar for that short period.
Another remarkable feature during the phase of the weak mode is the short-periodic oscillations of the eccentricity and inclination of the outer particle. These are K–L oscillations induced by the outer perturbing body that now become less damped because of a suitable angular momentum and energy change. To confirm the nature of these oscillations, we show the evolution of a system of two test particles in the external potential with initial conditions taken from the state of model 6 shortly after the two-body scattering event at T = 55.5 Myr in the right panels of <ref>. These lighter bodies then have the following orbital parameters: a = 0.0183 pc, a^' = 0.0146 pc, e = 0.21, e^' = 0.11, and Δi = 0.202^∘. The outer particle, which is more influenced by the distant perturber, undergoes coupled regular oscillations of eccentricity and inclination. In contrast, the oscillations of the inner particle are strongly damped due to the stronger effect of the relativistic precession.
§.§ Disc-like structures
Let us demonstrate the VHS mechanism in the evolution of an N-body system. We study a setup inspired by <cit.> but with two significant differences. First, the initial eccentricities of the orbits are uniformly distributed within the range [0,1), while <cit.> initially considered circular orbits. Second, the post-Newtonian corrections to the central body's gravity dampen the K–L oscillations, instead of the extended mass distribution included in <cit.>.
We consider a hypothetical disc of 50 stars orbiting around Sgr A*, a supermassive black hole of mass M_∙ = 4×10^6 M_⊙. The disc is perturbed by a massive perturber of mass M_p = 1×10^4 M_⊙ orbiting on a circular orbit at R_p = 0.1 pc. The masses of the stars in the disc are sampled from a Salpeter distribution function, ξ(m) ∝ m^-2.35, in the mass range 1-15 M_⊙.
For all orbits, the initial values of the argument of pericentre ω and the longitude of the ascending node Ω are set to zero. At the same time, the other orbital elements are sampled uniformly with a ∈ [0.0035, 0.02) pc, e ∈ [0.0, 1.0), i ∈ [65^∘, 75^∘), and the true anomaly ν ∈ [0, 2π). We integrate this setup with the same integration code as used in the previous sections.
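The initial conditions can be drawn as in the following sketch (the random seed is arbitrary; the Salpeter masses are sampled by inverting the cumulative distribution):

import numpy as np

rng = np.random.default_rng(1)
N = 50

def sample_salpeter(n, m_min=1.0, m_max=15.0, alpha=2.35):
    """Inverse-CDF sampling of xi(m) ~ m^-alpha on [m_min, m_max] (in M_sun)."""
    p = 1.0 - alpha
    u = rng.uniform(size=n)
    return (m_min**p + u * (m_max**p - m_min**p))**(1.0 / p)

masses = sample_salpeter(N)
a = rng.uniform(0.0035, 0.02, N)               # semi-major axes [pc]
e = rng.uniform(0.0, 1.0, N)                   # eccentricities
incl = np.radians(rng.uniform(65.0, 75.0, N))  # inclinations [rad]
nu = rng.uniform(0.0, 2.0 * np.pi, N)          # true anomalies
omega = np.zeros(N)                            # arguments of pericentre
Omega = np.zeros(N)                            # longitudes of the ascending node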
<ref> illustrates the temporal evolution of the orbital elements of all 50 stellar orbits. <ref> shows the projection of the normal vectors of the orbital planes of the same orbits at T = 0, 2.5 and 20 Myr. We separate out the stars whose Ω stays within 20^∘ of the median of the whole sample throughout the course of the evolution and mark them in red in both figures. We refer to this as the disc-like structure, as the orbits are co-rotating with each other. The stars depicted in blue are objects whose orbits rotate independently and visually occupy a more spread-out region in <ref>.
Although the current configuration differs from the model presented in <cit.>, the main dynamical effects are qualitatively similar: approximately 2/3 of the orbits, predominantly from the inner region of the disc, maintain the disc-like configuration, characterized by similar values of both i and Ω, throughout the entire course of evolution. The remaining outer orbits precess differentially in terms of Ω, resulting in a scattered structure. However, this structure still exhibits a specific feature, as the inclinations of these orbits are typically smaller than their initial values.
In contrast, the inclination of the coherent structure grows with respect to the initial value, becoming nearly perpendicular to the orbital plane of the outer perturbing body. We interpret this evolution similarly as was done in <cit.>. Specifically, we suggest that the inner orbits mutually interact in the strong VHS mode. Furthermore, the inner and outer parts of the disc initially act as two bodies that mutually interact in the weak VHS mode. After some time, the outer part loses its initial coherency due to the differential precession of the orbits of its individual members, which suppresses the weak mode of the VHS mechanism between the inner and outer parts of the disc.
It is worth noting that the model presented here is scaled so that the coherent structure has spatial dimensions similar to those of the system of S-stars observed in the Galactic Centre. While this paper cannot provide any insight into the role of the VHS mechanism in the dynamical evolution of stars in the Galactic Centre, recent research by <cit.> suggests that coherent disc-like structures can be identified within the S-star cluster. This presents an opportunity to observe the potential effects of the VHS mechanism on the stars in the Galactic Centre.
§ CONCLUSIONS
In this work, we built on the previous study conducted by <cit.> that explored the dynamical evolution of two nearby, Keplerian, and initially co-planar orbits under the influence of a massive, distant perturber. The secular theory proposed in <cit.> assumes constant zero eccentricity of orbits of all bodies (the two inner objects close to the dominant body and the distant perturber). This assumption is only applicable to systems where an additional non-Keplerian spherically symmetric potential is not only present, but is also strong enough to damp K–L oscillations of the two inner bodies caused by the gravity of the distant perturber. The secular theory provides two qualitatively different solutions of orbital evolution of the inner bodies, which we refer to as the weak and strong modes of the VHS mechanism.
[In <cit.>, a similar system with a disk of stars orbiting around a super-massive black hole in the Galactic Centre, embedded in a spherically symmetric stellar cusp and perturbed by gravity of a distant gaseous torus was studied in detail.]
Generally, the weak mode applies when the masses of the bodies on the inner orbits are small, and/or their separation in semi-major axes is large. This mode results in independent rotations of the longitudes of the ascending nodes of the two orbits due to the influence of the distant perturber. Additionally, the two orbits periodically exchange their angular momentum, leading to periodic coupled oscillations of their inclinations. However, when ΔΩ is an integer multiple of 2π, both inner orbits become co-planar again.
For systems with more massive bodies and/or minor separations between the two inner orbits, the strong mode applies. In this mode, the inner orbits have a common rotation rate of Ω, accompanied by oscillations of small-amplitude inclinations.
This paper demonstrates that the qualitative features of the two modes of the VHS mechanism are identifiable in systems where some of the critical assumptions of the secular theory are relaxed. Instead of the external potential of some extended mass distribution, post-Newtonian corrections to the dominant body's gravity can dampen the K–L oscillations. This damping is well understood within the original secular theory of <cit.> with the first-order post-Newtonian approximation given by <cit.>. By relaxing the need for the extended mass to dampen the K–L oscillations, the VHS mechanism applies to a broader range of astrophysical systems, such as compact planetary systems or the innermost regions of galactic nuclei.
We have further studied systems with non-zero eccentricity of the inner orbits. We cannot use the secular theory of <cit.> to study such a setup. Nevertheless, by directly integrating the equations of motion, we have identified key features of both the weak and strong modes of the VHS mechanism. The main difference we found in these setups compared to the zero eccentricity case is within the strong mode. In this mode, the orbital inclinations of the inner particles may swap, meaning that in some setups, they oscillate around the common starting value. Nonetheless, this does not change the general statement that the orbits co-rotate (ΔΩ≈ 0) within this evolutionary mode.
In order to achieve a more general setup, we have partially relaxed the assumption of constant eccentricity, which assumes complete damping of K–L oscillations of the inner orbits due to the gravity of the outer perturber. We have presented examples of systems where we observe only partially damped K–L oscillations of the inner orbits.[It is important to note that we considered post-Newtonian dynamics in all the examples, which means that some level of damping of K–L oscillations due to the relativistic pericentre advance was always present.] The typical features of the VHS mechanism's weak or strong modes are identifiable in these systems.
Finally, we have demonstrated, similarly to <cit.> and <cit.>, that the VHS mechanism applies to more complex systems with a larger set of initially co-planar bodies in a relativistic potential. Recent research by <cit.> suggests that coherent disc-like structures can be identified within the S-star cluster. This opens avenues for observing the possible effects of the VHS mechanism on the stars in the Galactic Centre.
In summary, the VHS mechanism, described analytically in <cit.>, appears to be a robust phenomenon that can even govern the evolution of systems that do not meet the assumptions of the analytic theory. We have demonstrated through several examples that the patterns of the VHS mechanism can be found even in systems where instantaneous close encounters significantly affect the orbital evolution. Specifically, the persistent near co-rotating configuration within the strong mode may have straightforward, observationally detectable consequences for a broad range of astrophysical systems, such as compact planetary systems or stellar structures in the innermost regions of galactic nuclei. However, it is essential to note that the strong mode of the VHS mechanism does not create co-planar and co-rotating structures within our current understanding; instead, it allows for the survival of such existing structures for extended periods. The weak mode may lead to a specific evolution of their orientation, as shown in Section <ref>, which was discussed for a particular setup in <cit.>.
A more general understanding of the VHS mechanism also suggests a potential application to the orbits of the S-star cluster in the Galactic Centre. A consequence of evolving eccentric orbits is the introduction of chaos into these systems, which needs to be better understood. Studying this in more detail can facilitate a deeper understanding of the evolution of disc-like structures under the VHS mechanism. These studies will lead to significant insights into the behaviour of astrophysical systems and contribute to a better understanding of the underlying mechanisms that govern their evolution.
§ ACKNOWLEDGEMENTS
We thank Sai Sasank Chava and Yugantar Prakash for feedback on the manuscript. We thank David Vokrouhlický for his input on using the Rubincam approximation. MS is supported by the Grant Agency of Charles University under the grant number 179123. LŠ and JH acknowledge support from the Grant Agency of the Czech Republic under the grant 20-21855S.
§ DATA AVAILABILITY STATEMENT
The data and tools used to produce the plots in this paper will be shared on reasonable request to the corresponding author.
mnras
§ INCLINATION CROSSING IN ECCENTRIC STRONG MODE
In Section <ref>, we describe a qualitative difference of the strong mode of the VHS mechanism with eccentric orbits in comparison to the circular case. It has been argued in <cit.> that, starting from a co-planar configuration, the inclination of the inner orbit always grows, while that of the outer one decreases. An important piece of their argument is that the precession of the outer orbit due to the distant perturber is always faster, which leads to a positive value of sin(Ω^'-Ω); this quantity implicitly enters <ref> and <ref> through the dependence of di/dt on n· n^'.
We do not have secular equations for the VHS mechanism with eccentric orbits at hand; still, we may assume that the dependence of di/dt and di^'/dt on ΔΩ≡Ω^' - Ω is similar to the circular case. <ref> shows a zoomed-in evolution of model 4 for a short period of time. Indeed, we see that, contrary to the circular case, ΔΩ reaches non-zero (both positive and negative) values at the instances of i = i^'. Depending on the sign of ΔΩ, the inclination of the inner orbits either grows similarly to the circular case (ΔΩ > 0) or decreases. For comparison, we also show a detailed view of the evolution of the orbital elements for a setup similar to model 4, but now with small eccentricities of the two inner orbits, e_0 = e_0^' = 0.08, in <ref>. The oscillatory pattern of ΔΩ is preserved, but now with (i) an amplitude several orders of magnitude smaller, (ii) a near-zero value at the instances of i ≈ i^', and (iii) a positive derivative at those instances. The evolution of i and i^' is then in accord with the analytic argumentation for the zero-eccentricity case.
Let us note that, due to the lack of an analytic secular theory for the non-zero eccentricity case, it is hard to discriminate whether the evolution of ΔΩ in the strong mode of the VHS mechanism is primarily due to non-uniform precession of the orbits in the field of the distant perturber, or whether it is mainly governed by their mutual torques.
|
http://arxiv.org/abs/2405.09299v1 | 20240515124122 | Charm CP violation and searches | [
"Stefan Schacht"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Department of Physics and Astronomy, University of Manchester,
Manchester M13 9PL, United Kingdom

Charm CP violation and searches
Stefan Schacht
May 20, 2024
===============================
Charm CP violation is a unique gate to the physics of up-type quarks and allows searches for physics beyond the Standard Model in new and exciting ways, complementary to kaon and b decays. We review recent advances, focusing on symmetry-based methods and providing an outlook on the next challenges at the intensity frontier of charm physics.
§ INTRODUCTION
Charm CP violation is a unique gate to explore the flavor structure of up-type quarks and probe for physics
beyond the Standard Model (BSM) in this sector. In 2019 LHCb discovered charm CP violation by determining the difference of
CP asymmetries <cit.>
a_CP^dir(D^0→ K^+K^-) - a_CP^dir(D^0→π^+π^-) = (-0.161± 0.028)% ,
where
a_CP^dir(D^0→ f) ≡ ( | A(D^0→ f)|^2 - | A(D̅^0→ f)|^2 ) / ( | A(D^0→ f)|^2 + | A(D̅^0→ f)|^2 ) .
The interpretation of Eq. (<ref>) and the question if it implies BSM physics or not is an on-going challenge
for the theory community.
Direct CP violation is an interference effect of two amplitudes that have a relative weak and strong phase.
In terms of the underlying theory parameters of the two interfering amplitudes, for charm decays we have <cit.>
a_CP^dir ≈ 2 r_CKM r_QCD sin φ_CKM sin δ_QCD .
Here, r_CKM is the magnitude of the ratio of the involved Cabibbo-Kobayashi-Maskawa (CKM) matrix elements of the two amplitudes, r_QCD is the
magnitude of the ratio of the relevant hadronic matrix elements and φ_CKM and δ_QCD are the corresponding relative
weak and strong phases, respectively.
In the Standard Model (SM), the interference effect resulting in a_CP^dir≠ 0 stems from the different ways one can
reach the final state of for example two pions from the initial state of a D^0. This can proceed either directly via the CKM matrix elements V_cd^* V_ud, or by
first decaying to an intermediate state that contains strangeness, for example K^+K^-, through the coupling V_cs^* V_us,
which then annihilates and forms a π^+π^- state:
D^0 ⟶^{V_cd^* V_ud} π^+π^- ,
D^0 ⟶^{V_cs^* V_us} K^+ K^-, … ⟶^{QCD} π^+π^- .
The same happens completely analogously also for K^+K^- final states:
D^0 ⟶^{V_cd^* V_ud} π^+π^-, … ⟶^{QCD} K^+K^- ,
D^0 ⟶^{V_cs^* V_us} K^+ K^- .
The interfering amplitudes have therefore a relative weak phase between the CKM matrix element combinations V_cd^* V_ud and V_cs^* V_us as well as a relative strong phase from the involved non-perturbative QCD effects.
The fact that we know the involved CKM matrix elements from the global CKM fit, i.e., from kaon and b decays, leads to the SM estimate
Δ a_CP^dir ∼ 10^-3 × r_QCD sin δ_QCD .
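The size of this estimate can be reproduced with a back-of-the-envelope evaluation of the CKM part of Eq. (<ref>), 2 r_CKM sin φ_CKM ≈ 2 |V_ub V_cb|/(|V_us V_cs|) sin γ. The following snippet is an illustrative sketch only; the numerical inputs are approximate PDG-like central values that we insert for illustration, not numbers quoted in the text.

import math

# Approximate CKM inputs (illustrative central values, chosen by us)
V_ub, V_cb = 3.7e-3, 4.2e-2   # |V_ub|, |V_cb|
V_us, V_cs = 0.225, 0.973     # |V_us|, |V_cs|
gamma = math.radians(66.0)    # relevant weak phase, approximately the CKM angle gamma

# CKM part of the direct CP asymmetry, 2 r_CKM sin(phi_CKM)
ckm_factor = 2.0 * (V_ub * V_cb) / (V_us * V_cs) * math.sin(gamma)
print(f"2 r_CKM sin(phi_CKM) ~ {ckm_factor:.1e}")   # ~ 1.3e-03

Up to the unknown hadronic factor r_QCD sin δ_QCD, this reproduces the 10^-3 scale quoted above.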
However, we are as of now not able to reliably calculate the underlying hadronic physics from first principles, so we do not know a priori the
size of r_QCD and δ_QCD. The ratio r_QCD is also known as the penguin over tree ratio.
Note however, that this commonly used term denotes more precisely just the ratio of CKM-subleading over CKM-leading amplitudes. The penguin diagrams are actually penguin contractions of tree operators <cit.>, and the CKM-leading amplitudes include also other topologies
than tree diagrams, see, e.g., Ref. <cit.>.
In terms of U-spin matrix elements, r_QCD is given as the ratio of Δ U=0 over Δ U=1 hadronic matrix elements.
U-spin is a SU(2) subgroup of the approximate SU(3)-flavor symmetry of the QCD Lagrangian under
unitary rotations of the light quarks u, d and s. U-spin is the symmetry group which connects d and s quarks.
Another such subgroup is isospin, which connects u and d quarks.
Assuming the SM and δ_QCD = 𝒪(1) from large rescattering effects, Eq. (<ref>) implies <cit.>
r_QCD^Exp = 𝒪(1) .
The key question in order to answer the question whether Eq. (<ref>) implies BSM physics or not is therefore equivalent to the
question for the size of r_QCD in the SM.
However, in hadronic charm decays, it is not clear if methods known from kaon and b physics work.
The key issue is if we can overcome soft QCD in charm decays.
At the relevant scale, the strong coupling α_s is too large for a meaningful perturbative expansion.
Also, in general an expansion in Λ_QCD/m_c does not work. Note that in some cases this might be different, like for the charm hadron lifetimes, where heavy quark methods show promising results <cit.>. For hadronic decays we are however in a different position. For these, we need to work with a smaller toolbox and find new strategies in order to interpret the data.
Eq. (<ref>) is consistent with large 𝒪(1) non-perturbative effects from low energy QCD, which result in <cit.>
r_QCD^SM = 𝒪(1) ,
in agreement with expectations based on enhancements that we also see in the ratio of Δ I=1/2 over Δ I=3/2D→ππ isospin matrix elements <cit.>, see Ref. <cit.> for a recent update.
On the other hand, works based on light cone sum rules (LCSR) predict <cit.>
r_QCD^SM = 𝒪(0.1) .
Strikingly, this prediction is one order of magnitude below the experimental determination in Eq. (<ref>).
Another promising ansatz to treat hadronic charm decays makes use of the available ππ/KK rescattering data, see Refs. <cit.> for detailed discussions.
First ideas on the lattice can be found in Ref. <cit.>, however the amount of intermediate states proves challenging.
In the following, we review some of the recent developments. In Sec. <ref>, we explain the methodology for the extraction of the
penguin over tree ratio from data with theory uncertainties of ∼1%.
In Sec. <ref> we discuss how recent measurements of CP asymmetries of singly-Cabibbo suppressed decays allow to probe
the breaking of the U-spin symmetry in CKM-subleading amplitudes.
In Sec. <ref> we summarize recent insights on sum rules at higher order in the U-spin expansion.
We conclude in Sec. <ref>.
§ PENGUIN OVER TREE EXTRACTION WITH ∼1% THEORY UNCERTAINTY
As mentioned above, the extraction of the penguin over tree ratio Eq. (<ref>) from Δ a_CP^dir depends on the
assumption that the relevant strong phase is 𝒪(1), see, e.g., Ref. <cit.>.
For a precision extraction of r_QCD it would of course be advantageous not to rely on such an assumption.
It is known that the strong phase between the interfering amplitudes can be obtained in principle from measurements of
time-dependent CP violation or quantum-correlated decays <cit.>.
A difficulty in charm decays is the very slow oscillation between the mesons.
Unlike the time-dependence of B decays, in charm decays we aim just for the linear term of the time dependence, Δ Y^f, in
A_CP(f,t) ≈ a_CP^f + Δ Y^f t/τ_D^0 .
Current determinations of Δ Y^f are compatible with zero <cit.>.
The slopes Δ Y^f of different decay channels with final states f have a universal and a non-universal contribution
and the splitting of Δ Y^f between D^0→ K^+K^- and D^0→π^+π^- is
formally suppressed at the second power of U-spin breaking effects <cit.>.
The case of D^0→π^+π^- is especially interesting as here we currently see hints for very large CP violation <cit.>.
In the D→ππ system we have the additional advantage of isospin symmetry relating the observables of D^0→π^+π^-,
D^0→π^0π^0 and D^0→π^+π^0. Isospin breaking effects are expected only at the 1% level.
Assuming the SM, isospin allows the determination of the relative strong phase between penguin and tree amplitudes from direct CP asymmetries and branching ratios only. Corresponding closed-form expressions have been derived recently in Ref. <cit.>.
D→ππ has the same underlying group-theory structure as B→ππ <cit.>, however,
the different approximations that apply to charm and B decays result in different implications for how
the underlying theory parameters relate to direct CP asymmetries and branching ratios.
For example, an important feature of charm decays is that the physics of branching ratios is to a very good approximation only governed
by the CKM-leading amplitude because the CKM-subleading amplitude is strongly suppressed.
This leads to a decoupling of the physics of branching ratios and CP asymmetries.
An interesting feature of D→ππ is that due to the underlying isospin symmetry, knowledge about the penguin-over-tree ratio in one decay channel translates into a prediction about this ratio in the other one, namely <cit.>
r^00/r^+- = √( (1/2) ℬ(D^0→π^+π^-)/ℬ(D^0→π^0π^0) · 𝒫^00/𝒫^+- ) ,
where 𝒫^f are the corresponding phase space factors and r^00 and r^+- are the penguin over tree ratios of the isospin decomposition of D^0→π^0π^0 and D^0→π^+π^- decays, respectively.
With current data it is found <cit.>
r^+- = 5.5^+14.2_-2.7 ,
r^00 = 5.2^+13.3_-2.4 ,
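These central values are in fact consistent with the prediction (<ref>): setting the phase-space ratio 𝒫^00/𝒫^+- ≈ 1 and inserting approximate world-average branching ratios (our illustrative inputs, not values quoted in the text) gives a ratio close to 5.2/5.5 ≈ 0.95:

import math

# Illustrative branching ratios (approximate PDG-like central values, chosen by us)
BR_pipi   = 1.45e-3   # B(D0 -> pi+ pi-)
BR_pi0pi0 = 8.3e-4    # B(D0 -> pi0 pi0)

# r^00 / r^+- with the phase-space ratio P^00/P^+- set to 1
ratio = math.sqrt(0.5 * BR_pipi / BR_pi0pi0)
print(f"r00/r+- ~ {ratio:.2f}")   # ~ 0.93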
i.e., the central values are unexpectedly large. With future data from LHCb Upgrade II and Belle II it will be possible to reduce the uncertainties considerably <cit.>.
Once time-dependent CP violation measurements also become available, the above methodology can be used not only to determine, but to overconstrain, the D→ππ system and thereby probe for BSM physics.
§ TESTING THE U-SPIN SYMMETRY IN CKM-SUBLEADING AMPLITUDES
Recent new measurements by LHCb went beyond just the measurement of the combination of CP asymmetries in Eq. (<ref>) and provide
separate determinations of the involved CP asymmetries <cit.>
a_CP^dir(D^0→ K^+K^-) = (7.7± 5.7)· 10^-4 ,
a_CP^dir(D^0→π^+π^- ) = ( 23.2± 6.1) · 10^-4 ,
implying the first evidence for CP violation in a single decay channel.
The central values of Eqs. (<ref>) and (<ref>) violate the U-spin limit sum rule <cit.>
a_CP^dir(D^0→ K^+K^-) + a_CP^dir(D^0→π^+π^- ) = 0 .
While a violation of Eq. (<ref>) is expected, the amount goes beyond the generic expectation of ∼ 30% SU(3)_F breaking, however
yet with large errors.
With future data, additional sum rules such as <cit.>
a_CP^dir(D_s^+→ K^0π^+) + a_CP^dir(D^+→ K̅^0 K^+) = 0
can be tested in order to study if a consistent picture emerges <cit.>.
Improved versions of the sum rules Eqs. (<ref>) and (<ref>) connect the decays
a_CP^dir(D^0→ K^+K^-) , a_CP^dir(D^0→π^+π^-) , a_CP^dir(D^0→π^0π^0) ,
and
a_CP^dir(D^+→ K_SK^+) , a_CP^dir(D_s^+→ K_Sπ^+) , a_CP^dir(D_s^+→ K^+π^0) ,
respectively, and can be found in Ref. <cit.>. The latter highlight the synergy and complementarity of LHCb <cit.> and Belle/Belle II <cit.>.
Explanations of large U-spin breaking in the CKM-subleading amplitudes also include Z' models <cit.>,
as these generate operators with the flavor content s̅cu̅s and d̅cu̅d with non-universal coefficients.
Ref. <cit.> presents viable models with a leptophobic Z' below 𝒪(20 GeV), featuring a pattern of CP violation in D→ππ that includes a violation of the isospin sum rule <cit.>
a_CP^dir(D^+→π^+π^0) = 0 .
The puzzling violation of Eq. (<ref>) deserves further study once more data becomes available.
§ SOLVING THE PROBLEM OF HIGHER ORDER U-SPIN
The discussion in Sec. <ref> motivates to go to higher order in the U-spin expansion in order to derive sum rules
that hold to higher precision than Eq. (<ref>).
Going to higher order in SU(3)_F breaking is complicated by the proliferation of parameters, visibly already at
next to leading order <cit.>.
In Ref. <cit.> we derived new theorems which enable calculations to arbitrary order in the U-spin expansion on the amplitude level.
This allows one, in particular, to determine a priori up to which order sum rules actually exist. The new method uses symmetry properties of Clebsch-Gordan coefficients and the fact that any given system can be reduced to a doublet-only system out of which the given system is constructed.
Many more opportunities lie ahead in this direction. We expect that this methodology will be especially useful for multi-body decays.
Precision sum rules at high order are likely for these to exist due to the higher combinatorial possibilities in comparison to two-body decays.
The first step regarding three-body decays however will be to probe the ratio of Δ U=0 over Δ U=1 matrix elements of underlying pseudo two-body D→ VP decays in order to see if their order of magnitude agrees with the analogous ratio for D→ PP decays in Eq. (<ref>) <cit.>. This enables an important consistency check of the hierarchies of U-spin matrix elements.
Such endeavors are complementary to model-independent searches for CP violation in multi-body decays for example with the energy test <cit.>. Another frontier are charmed baryon decays, see recently Refs. <cit.>, to which the methodology of Ref. <cit.> also applies.
§ CONCLUSIONS
This is just the beginning of the exploration of charm CP violation.
Until recently we had only a measurement of Δ a_CP^dir. Now, with the evidence of CP violation
in D^0→π^+π^- we have two data points. In order to benefit from symmetry-based methods for the
interpretation of the data, we need more measurements of CP asymmetries of singly-Cabibbo suppressed decays.
Only then will we be able to test the corresponding sum rules.
Time-dependent measurements will be very important to use isospin methods to overconstrain the D→ππ system.
The development of new higher order sum rules will provide opportunities to benefit from the abundant data on
multi-body decays.
The key question of whether we can tell a loop from a tree remains a challenge for charm theory; but no matter what, we will learn something new: about QCD, and maybe also about new physics.
§ ACKNOWLEDGMENTS
S.S. is supported by a Stephen Hawking Fellowship from UKRI under reference EP/T01623X/1 and the STFC research grants ST/T001038/1 and ST/X00077X/1.
§ REFERENCES
|
http://arxiv.org/abs/2405.10161v1 | 20240516145428 | Torus knots and generalized Schröder paths | [
"Marko Stošić",
"Piotr Sułkowski"
] | hep-th | [
"hep-th",
"math.CO",
"math.QA"
] |
We relate invariants of torus knots to the counts of a class of lattice paths, which we call generalized Schröder paths. We determine generating functions of such paths, located in a region determined by a type of a torus knot under consideration, and show that they encode colored HOMFLY-PT polynomials of this knot. The generators of uncolored HOMFLY-PT homology correspond to a basic set of such paths. Invoking the knots-quivers correspondence, we express generating functions of such paths as quiver generating series, and also relate them to quadruply-graded knot homology. Furthermore, we determine corresponding A-polynomials, which provide algebraic equations and recursion relations for generating functions of generalized Schröder paths. The lattice paths of our interest explicitly enumerate BPS states associated to knots via brane constructions.
§ INTRODUCTION
It is expected, both from mathematical and physical perspectives, that polynomial knot invariants should have a counting interpretation. From mathematical viewpoint it should arise in consequence of the structure of knot homologies, while in physics it follows from the relation between knot invariants and counting of BPS states. In this work we find a counting interpretation of polynomial knot invariants of torus knots in terms of counting of lattice paths. These results not only provide an explicit manifestation of a combinatorial character of knot invariants, but also lead to explicit formulae for previously unknown counting functions for various classes of lattice paths, as well as recursion relations and algebraic equations that they satisfy.
We call the lattice paths of our interest generalized Schröder paths. We show that they encode colored HOMFLY-PT polynomials P_r(a,q) of an (m,n) torus knot (also denoted T_m,n), in framing mn, colored by symmetric representations S^r. Generalized Schröder paths are paths in a square lattice composed of three possible steps: horizontal (1,0), vertical (0,1), and diagonal (1,1); the paths start at the origin.
The type (m,n) of the torus knot under consideration determines the region in which the paths lie: we consider paths in the positive quadrant of the plane, below the line y=m/n x of slope m/n. The dependence on a is captured by the number of diagonal steps in a given path, while the dependence on q by the area between the path and the line y=m/n x. In the literature, the paths under the line of unit slope, y=x, made of the above three basic steps, are referred to as Schröder paths. By generalized Schröder paths we mean paths that lie below a line of an arbitrary rational slope m/n, see examples in fig. <ref>, <ref> and <ref>.
In particular, paths under the line of the slope 1/f capture invariants of the unknot in framing f.
Combinatorics of lattice paths is a broad and important research direction, within which various types of lattice paths are studied <cit.>. In particular, Schröder paths are generalizations of Dyck paths. Dyck paths are made of only two elementary steps: horizontal (1,0) and vertical (0,1) (without the diagonal step). The number of Dyck paths from the origin to the point (k,k) in the square lattice is equal to the Catalan number C_k, see fig. <ref> (right). Other famous combinatorial objects are Duchon paths, which are paths consisting of only horizontal and vertical steps that lie below the line y=2/3 x. By generalized Duchon paths we mean paths made of horizontal and vertical steps which lie below a line of arbitrary rational slope, y=m/n x; in other words, these are Schröder paths without diagonal steps. All these types of paths also play a role in our analysis.
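All of these path families can be generated by a simple dynamic programming over lattice points lying weakly below the line ny = mx. The sketch below is our own illustration (function and variable names are ours): it counts paths from (0,0) to (nK, mK) built from the three elementary steps, graded by the number of diagonal steps. With the diagonal step switched off at slope 1 it reproduces the Catalan numbers; with the diagonal step it reproduces the large Schröder numbers 1, 2, 6, 22, 90; at slope 2/3 without diagonal steps it reproduces the Duchon numbers 1, 2, 23, 377.

from collections import defaultdict

def path_counts(m, n, K, allow_diagonal=True):
    """Count paths (0,0) -> (n*K, m*K) built from R=(1,0), U=(0,1) and,
    optionally, D=(1,1), visiting only lattice points with n*y <= m*x.
    Returns a dictionary {number of diagonal steps: number of paths}."""
    steps = [(1, 0, 0), (0, 1, 0)] + ([(1, 1, 1)] if allow_diagonal else [])
    ways = defaultdict(lambda: defaultdict(int))
    ways[(0, 0)][0] = 1
    for x in range(n*K + 1):
        for y in range(m*K + 1):
            if (x, y) == (0, 0) or n*y > m*x:
                continue
            for dx, dy, dd in steps:
                px, py = x - dx, y - dy
                if px >= 0 and py >= 0 and n*py <= m*px:
                    for d, c in ways[(px, py)].items():
                        ways[(x, y)][d + dd] += c
    return dict(ways[(n*K, m*K)])

# Dyck paths under y = x: the Catalan numbers 1, 1, 2, 5, 14
print([path_counts(1, 1, k, allow_diagonal=False).get(0, 0) for k in range(5)])
# Schroeder paths under y = x: the large Schroeder numbers 1, 2, 6, 22, 90
print([sum(path_counts(1, 1, k).values()) for k in range(5)])
# paths under y = 2x/3 without diagonal steps: the Duchon numbers 1, 2, 23, 377
print([path_counts(2, 3, k, allow_diagonal=False).get(0, 0) for k in range(4)])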
While various properties of lattice paths are known, their relations to knot invariants and related physical concepts are rather new and have not been deeply explored. Some initial results in this context are presented in the previous work that we coauthored <cit.>, which shows that generalized Duchon paths (i.e. those that do not involve diagonal steps) encode extremal colored HOMFLY-PT polynomials (i.e. suitably defined a-independent part of full colored HOMFLY-PT polynomials) of torus knots. We also found that Schröder paths (those under the line y=x, of the unit slope) capture colored HOMFLY-PT invariants of the (unframed) unknot. In the current work we generalize these results to full colored HOMFLY-PT polynomials of arbitrary torus knots and show that their a-dependence is simply captured by including diagonal steps in the construction of paths.
Our work also reveals links between combinatorics of lattice paths and quiver representation theory. A prominent role in this relation is played by the knots-quivers correspondence <cit.>, which enables to express colored HOMFLY-PT polynomials of torus knots in terms of quiver generating series with appropriate identification of generating parameters. In this paper we show that another identification of parameters in the quiver generating series (for the same quiver) yields generating functions of corresponding generalized Schröder paths. This relation also implies that certain elementary paths are in one-to-one correspondence with generators of uncolored HOMFLY-PT homology, thereby generalizing our considerations to the realm of such homologies <cit.>. In fact, quivers that arise in this context correspond to unreduced invariants of (m,n) torus knots, in framing mn. We explicitly identify and list these quivers for a few interesting examples. In particular, we determine the full quivers (both reduced and unreduced one, encoding the full a-dependence) for (3,4) torus knot (i.e. 8_19 knot). It follows that the whole information about the counts of a given class of generalized Schröder paths is encoded in a finite set of parameters, i.e. the entries of a quiver matrix C and parameters that determine the specialization of quiver generating parameters x_i.
To sum up, for a quiver corresponding to an (m,n) torus knot, two different identifications of parameters produce either colored HOMFLY-PT polynomials for this knot, or generating series of generalized Schröder paths under the line of the slope m/n. Therefore, the relation between knot polynomials and counts of Schröder paths is not completely straightforward (as it involves these two parameter specializations). However, we also find a direct expression for the counts of Schröder paths in terms of knot polynomials. Amusingly, it involves superpolynomials of colored quadruply-graded knot homology, introduced in <cit.> and denoted 𝒫^Q_r(a,q,t_r,t_c): we find a specialization of parameters of such polynomials (different from the one that produces colored HOMFLY-PT polynomials) that immediately yields counts of Schröder paths.
Finally, it is known that generating functions of lattice paths (in q=1 limit) should satisfy algebraic equations. This statement also follows from their relation to colored knot polynomials, which take form of q-holonomic functions and for this reason satisfy recursion relations, reducing in q=1 limit to algebraic equations. We explicitly determine such algebraic equations, as well as related recursion relations that capture the dependence on q. In the knot theory context such relations are called classical and quantum (generalized) A-polynomials <cit.>, hence we also refer to such objects as A-polynomials (for lattice paths). We provide a number of nontrivial examples of such A-polynomials.
We believe our results deserve further analysis and generalizations. They provide new direct links between knot theory, quiver representation theory and combinatorics, so it would be important to prove them by combining, and relating to each other, the methods of these fields. Furthermore, while in general the counting interpretation of knot invariants follows from considerations in physics, namely the interpretation of certain objects as BPS states and the engineering of brane systems in string theory, it would be instructive to rederive our results from the physical setup more explicitly. The relations to lattice paths that we present could also be generalized in various ways, e.g. to other values of framing, to knots (and links) other than torus knots, to the refined case, to the categorified level, etc. Such relations might involve other combinatorial models than just those of lattice paths. It would also be interesting to relate our results to the relation between quantum A-polynomials and combinatorics on words presented in <cit.>.
The plan of the paper is as follows. In section <ref> we summarize various features of knot homologies and the knots-quivers correspondence. In section <ref> we present in general terms the main results of this paper, i.e. the relations between generating functions of lattice paths, colored polynomials for torus knots, and quiver generating series. In section <ref> we illustrate statements from the previous section in explicit examples related to a few particular torus knots and generalized Schröder paths. In section <ref> we determine the explicit form of A-polynomials for several classes of generalized Schröder paths.
§ KNOT HOMOLOGIES AND KNOTS-QUIVERS CORRESPONDENCE
In this section we recall basic facts and summarize the notation concerning colored knot polynomials associated to various knot homologies, as well as the knots-quivers correspondence.
§.§ Colored superpolynomials of quadruply-graded homology
In this work we relate generating functions of lattice paths to colored HOMFLY-PT polynomials for torus knots P_r(a,q), where r denotes the coloring by the symmetric representation S^r. The polynomials P_r(a,q)=𝒫_r(a,q,-1) are obtained as the t=-1 specialization of colored superpolynomials of HOMFLY-PT homology 𝒫_r(a,q,t) <cit.>. However, it is of advantage to consider yet more general knot polynomials, namely colored superpolynomials of quadruply-graded homologies introduced in <cit.>, denoted 𝒫^Q_r(a,q,t_r,t_c). In the convention that we use here, the specialization of 𝒫^Q_r(a,q,t_r,t_c) with t_r=1 and t_c=t yields the above colored superpolynomials[Another consistent choice often considered in the literature, e.g. in <cit.>, is to define superpolynomials as the t_r=t, t_c=1 specialization of quadruply-graded homology, i.e. 𝒫_r(a,q,t) = 𝒫^Q_r(a,q,t,1).]
𝒫_r(a,q,t) = 𝒫^Q_r(a,q,1,t).
On the other hand, as we will see in what follows, generalized Schröder paths can also be related directly to the specialization 𝒫^Q_r(a,q=1,t_r=1,t_c=q^-1) ≡ 𝒫^Q_r(a,1,1,q^-1). To start with, we summarize explicit expressions for colored superpolynomials of quadruply-graded homology for several knots, which we will take advantage of in what follows.
^Q_r(a,q=1,t_r=1,t_c=q^-1)≡^Q_r(a,1,1,q^-1). To start with, we summarize explict expressions for colored superpolynomials of quadruply-graded homology for several knots, which we will take advantage of in what follows. Invoking the expressions for q-Pochhamer and q-binomial series
(x;q)_k = ∏_j=0^k-1 (1-xq^j), r k _q= (q;q)_r/(q;q)_k (q;q)_r-k,
we have
∙ Trefoil knot (i.e. (2,3) torus knot, T_2,3, or 3_1 knot):
𝒫^Q_r(a,q,t_r,t_c)(3_1)=a^2r q^-2r∑_k=0^r r k_q^2 t_c^2 q^2(r+1)k t_r^2k t_c^2rk∏_i=1^k (1+ a^2 q^2(i-2) t_r t_c^2i-1),
which can be also rewritten (for simplicity, including only the dependence on t_c) as
𝒫^Q_r(a,q,1,t_c)(3_1) = ∑_k=0^r r k_q^2 t_c^2 a^2k q^2(k^2-k) t_c^2k^2×
× (-1)^r-k a^2k-2r q^2r(k-r)+(r-k)(r-k+1) t_c^2r(k-r)+(r-k)(r-k-1)×
×∏_i=1^k (1+ a^-2 q^2(2-i) t_c^1-2i)(1+ a^-2 q^2(1-r-i) t_c^1-2i-2r).
∙ Torus knots (2,2p+1) (i.e. T_2,2p+1 or (2p+1)_1 knots) for arbitrary p:
𝒫^Q_r(a,q,t_r,t_c)(T_2,2p+1) = a^2pr q^-2pr∑_0≤ k_p≤ k_p-1≤⋯≤ k_2≤ k_1≤ r r k_1_q^2 t_c^2 k_1 k_2_q^2 t_c^2⋯ k_p-1 k_p_q^2 t_c^2
× q^2(2r+1)∑_i=1^p k_i -2∑_i=1^p k_i-1k_i t_r^2∑_i=1^p k_i t_c^4r ∑_i=1^p k_i -2∑_i=1^p k_i-1k_i∏_i=1^k_1 (1+ a^2 q^2(i-2) t_r t_c^2i-1)
with the convention k_0=r. The expression for trefoil arises for p=1, for (2,5) torus knot (i.e. 5_1 knot) for p=2, etc.
∙ Figure-eight knot, 4_1:
𝒫^Q_r(a,q,t_r,t_c)(4_1) =∑_k=0^r r k_q^2 t_c^2 a^2k q^2(k^2-k) t_r^2k t_c^2k^2×
×∏_i=1^k (1+ a^-2 q^2(2-i) t_r^-1 t_c^1-2i)(1+ a^-2 q^2(1-r-i) t_r^-3 t_c^1-2i-2r).
∙ Knot 5_2:
𝒫^Q_r(a,q,t_r,t_c)(5_2) =a^2r q^-2r∑_k=0^r ∑_j=0^k r k_q^2 t_c^2 k j_q^2 t_c^2×
× a^2j+2k t_r^3k+2jq^k^2-k+2j^2-2j+2rk t_c^k^2+2j^2+2r k×
×(-a^-2q^2 t_r^-1t_c^-1;q^-2t_c^-2)_k (-a^-2q^-2r t_r^-3t_c^-2r-1;q^-2t_c^-2)_j.
∙ (3,4) torus knot (i.e. 8_19 knot) <cit.>:
𝒫^Q_r(a,q,t_r,t_c)(T_3,4) = a^6r q^4r^2-4r t_r^4r t_c^4r^2∑_j=0^r ∑_j≥ k_1≥ k_2≥ k_3≥ 0 r j_q^2 t_c^2 j k_1_q^2 t_c^2 k_1 k_2_q^2 t_c^2 k_2 k_3_q^2 t_c^2
× q^-2 j + 2k_3-2k_2k_3+2(k_1+k_2+k_3)r+j(k_2+k_3-2r)×
× t_r^2 (-2 j + k_1 + k_2 + k_3 ) t_c^2 ( -k_1 -k_2 -k_2 k_3 +(k_1+ k_2 + k_3) r + j (k_2 + k_3 - 2 r) )×
×(-a^2 q^-2t_r t_c;q^2 t_c^2)_r-j(-a^2 q^2r t_r^3 t_c^1+2 r ;q^2 t_c^2)_r-j×
×(-a^2q^2(r-j-1)t_r t_c^1+2 (r-j) ;q^2 t_c^2)_k_1 .
§.§ Knots-quivers correspondence
A crucial role in this work is also played by the knots-quivers correspondence <cit.>. This correspondence relates symmetrically colored HOMFLY-PT polynomials of a knot K to quiver generating series for the corresponding (symmetric) quiver Q_K. Explicitly, the correspondence states that all the information about all symmetrically-colored HOMFLY-PT polynomials P_r(K)≡ P_r(a,q)(K) of the knot K is contained in the integral symmetric matrix C of size k× k, corresponding to the adjacency matrix of the quiver Q_K with k vertices, as well as two integral vectors 𝐚=(a_1,…,a_k) and 𝐪=(q_1,…,q_k), which encode homological (a,q)-degrees of k generators of uncolored HOMFLY-PT homology. For the reduced HOMFLY-PT polynomials the relationship reads
∑_r≥ 0P_r(a,q)(K)/(q^2;q^2)_rx^r=
P_C(x_1,…,x_k)|_x_i=a^a_iq^q_i-C_i,ix,
where the quiver generating series P_C(x_1,…,x_k) has the form
P_C(x_1,…,x_k)=∑_d_1,…,d_k(-q)^∑_i,jC_i,jd_id_jx_1^d_1 x_2^d_2… x_k^d_k/∏_i=1^k(q^2;q^2)_d_i.
A quiver matrix for the mirror image to a given knot is given by I_k× k-C, where C is a quiver matrix for the original knot. Furthermore, a quiver matrix corresponding to colored HOMFLY-PT polynomials with a framing shifted by f∈ (with respect to polynomials corresponding to the matrix C) takes form
C+f[[ 1 ⋯ 1; ⋮ ⋱ ⋮; 1 ⋯ 1 ]].
In this work we are primarily interested in generating series of unreduced knot polynomials, as they turn out to be directly related to the counting of lattice paths. Unreduced colored HOMFLY-PT polynomials are defined by
P̅_r(K) ≡P̅_r(a,q)(K)=P_r(K)P̅_r(unknot)= P_r(K) a^-rq^r(a^2;q^2)_r/(q^2;q^2)_r,
and their generating series can also be written in the form of quiver generating series for an appropriate quiver matrix C̅
∑_r≥ 0P̅_r(K)x^r=
P_C̅(x_1,…,x_k)|_x_i=a^a_iq^q_i-C̅_i,ix,
where again
P_C̅(x_1,…,x_k)=∑_d_1,…,d_k(-q)^∑_i,jC̅_i,jd_id_j x_1^d_1 x_2^d_2… x_k^d_k/∏_i=1^k(q^2;q^2)_d_i.
As shown in <cit.>, C̅ is simply related to C, as we recall now. First, we rewrite (<ref>) as follows
∑_r≥ 0P_r(K)x^r=∑_d_1,…,d_k(-q)^∑_i,jC_i,jd_id_j a^∑_ia_id_i q^∑_i(q_i-C_ii)d_i (q^2;q^2)_∑_i d_i/∏_i=1^k(q^2;q^2)_d_i x^∑_i d_i.
In terms of unreduced polynomials (<ref>) and using the relation r=∑_i d_i we get
∑_r≥ 0P̅_r(K)x^r = ∑_d_1,…,d_k(-q)^∑_i,jC_i,jd_id_j a^∑_ia_id_i q^∑_i(q_i-C_ii)d_i a^-∑_i d_i q^∑_i d_i (a^2;q^2)_∑_i d_i/∏_i=1^k(q^2;q^2)_d_i x^∑_i d_i.
Using now
(a^2;q^2)_r=(a^2;q^2)_d_1+⋯+d_k=(a^2;q^2)_d_1(a^2q^2d_1;q^2)_d_2⋯(a^2q^2(d_1+⋯+d_k-1);q^2)_d_k
and the q-binomial identity
(y^2;q^2)_d/(q^2;q^2)_d=∑_α+β=d(-1)^α y^2α q^α^2-α 1/(q^2;q^2)_α (q^2;q^2)_β
we get
∑_r≥ 0P̅_r(K)x^r=∑_[ α_1,…,α_k; β_1,…,β_k ](-q)^∑_i,jC_i,j(α_i+β_i)(α_j+β_j) a^∑_i(a_i-1)(α_i+β_i) q^∑_i(q_i-C_ii+1)(α_i+β_i)×
×(-1)^∑_iα_i a^2∑_iα_i q^∑_i(α_i^2-α_i) q^2∑_iα_i(α_1+β_1+⋯+α_i-1+β_i-1) x^∑_iα_i+∑_iβ_i/∏_i(q^2;q^2)_α_i(q^2;q^2)_β_i.
Finally, concatenating variables α and β
(γ_1,…,γ_2k):=(α_1,β_1,α_2,β_2,…,α_k,β_k),
we get
∑_r≥ 0P̅_r(K)x^r=∑_γ_1,…,γ_2k(-q)^∑_i,jC̅_i,jγ_iγ_j a^∑_ia̅_iγ_i q^∑_i(q̅_i-C̅_ii)γ_i x^∑_iγ_i/∏_i=1^2k(q^2;q^2)_γ_i,
where
C̅ is 2k× 2k matrix
4pt
C̅=C⊗[[ 1 1; 1 1 ]]+
[[ 1 0 1 0 1 0 ⋯ 1 0; 0 0 1 0 1 0 ⋯ 1 0; 1 1 1 0 1 0 ⋯ 1 0; 0 0 0 0 1 0 ⋯ 1 0; 1 1 1 1 1 0 ⋯ 1 0; 0 0 0 0 0 0 ⋯ 1 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 1 1 1 1 1 1 ⋯ 1 0; 0 0 0 0 0 0 ⋯ 0 0 ]]
and
𝐚̅ =(a_1+1,a_1-1,a_2+1,a_2-1,…,a_k+1,a_k-1),
𝐪̅ =(q_1+1,q_1+1,q_2+1,q_2+1,…,q_k+1,q_k+1).
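The block structure of C̅ is straightforward to implement. The helper below is a sketch (with our own naming conventions) that builds C̅ from C exactly as in the formula above: it tensors C with the 2×2 all-ones matrix and then adds 1 to all (α,α) entries, to the (α_i,β_j) entries with j<i, and to their transposes, while leaving the (β,β) entries untouched.

def unreduced_quiver(C):
    """Build the 2k x 2k matrix C-bar from a k x k quiver matrix C,
    following the block formula above; nodes are ordered
    (alpha_1, beta_1, alpha_2, beta_2, ...), with even 0-based index = alpha."""
    k = len(C)
    # C tensored with the 2 x 2 all-ones matrix
    Cb = [[C[r // 2][c // 2] for c in range(2*k)] for r in range(2*k)]
    for r in range(2*k):
        for c in range(2*k):
            r_alpha, c_alpha = (r % 2 == 0), (c % 2 == 0)
            i, j = r // 2, c // 2
            if r_alpha and c_alpha:
                Cb[r][c] += 1        # all (alpha, alpha) entries
            elif r_alpha and (not c_alpha) and j < i:
                Cb[r][c] += 1        # (alpha_i, beta_j) entries with j < i
            elif (not r_alpha) and c_alpha and i < j:
                Cb[r][c] += 1        # their transposes
    return Cb

# with C = 0 the result is the pure staircase summand displayed above (here k = 3)
for row in unreduced_quiver([[0]*3 for _ in range(3)]):
    print(row)

Note that the specific matrices listed in the next section additionally involve the mirror and framing operations described earlier, applied to the reduced quiver before this construction.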
§ LATTICE PATHS FROM KNOTS, QUIVERS AND A-POLYNOMIALS
In this section we present the main results of this work, namely explicit relations between generating functions of lattice paths and colored polynomials for torus knots, as well as quiver generating series. Furthermore, we also show that generating functions of lattice paths satisfy algebraic equations that we call A-polynomials, as well as recursion relations captured by operators called quantum A-polynomials (in analogy with A-polynomials for knots <cit.>). Explicit examples and illustrations of these statements will be presented in the following sections.
As already explained in the introduction, lattice paths of our interest are generalized Schröder paths, which consist of elementary steps to the right, upwards, and a diagonal one, in a square lattice. We count paths that start at the origin, are located above the horizontal axis, and below a line of rational slope, y=m/nx. We show that the count of such paths can be obtained from the generating series of the colored HOMFLY-PT polynomials of mn-framed (m,n)-torus knot, so that the powers of variable a in the colored HOMFLY-PT polynomials count the number of diagonal steps in a Schröder path, while powers of q capture the area between a given path and the line y=m/nx. This framework naturally generalizes the results from <cit.> where it was shown that the lattice paths without diagonal steps correspond to the generating series of the bottom rows of the colored HOMFLY-PT polynomials of torus knots, which is essentially a=0 specialization of our current results.
§.§ Generalized Schröder paths from quivers
Our first result relates generalized Schröder paths under the line of rational slope m/n to quiver generating series, for a quiver corresponding to (m,n) torus knot.
Let m and n be mutually prime positive integers, such that m< n. Let K be a left handed (m,n) torus knot with framing f=mn. Let C̅ be the matrix of the quiver Q,
corresponding to the unreduced symmetrically colored HOMFLY-PT polynomials of the
knot K. Let 𝐚=(a_1,a_2,…,a_2k) be a vector corresponding
to the a-degrees of the nodes of the quiver Q. Let a_max:=max_i{a_i}, i=1,…,2k, and a'_i=a_max-a_i, i=1,…,2k. Furthermore, let P_C̅(x_1,…,x_2k) be the generating series for the quiver Q
P_C̅(x_1,…,x_2k)=∑_d_1,…,d_2k(-q)^∑_i,jC̅_i,jd_id_j∏_i=1^2kx_i^d_i/∏_i=1^2k(q^2;q^2)_d_i
and let P̅_C̅(x) be the following one variable series specialization of P_C̅(x_1,…,x_2k)
x_i ↦ (-1)^f(-a)^a'_ix, i=1,…,2k.
Finally, let
y(x,a,q) = P̅_C̅(qx)/P̅_C̅(q^-1x)=∑_l≥ 0N_l(a,q)x^l=∑_i,j,ln_i,j,la^iq^jx^l.
The number n_i,j,l introduced above equals the number of generalized Schröder paths under the line y=m/n x that start at (0,0) and end at (nl,ml), with i/2 diagonal steps and such that the area between the path and the line y=m/n x is equal to j/2. In particular, for each l≥ 0, only finitely many coefficients n_i,j,l are non-zero, i.e. for each l≥ 0, N_l(a,q)∈ℤ[a^± 1,q^± 1] is a (Laurent) polynomial.
In other words, each diagonal step in a given path contributes a factor of a^2 to its weight, and each unit lattice square that contributes to the weight of a path (i.e. a square between the path and the line y=m/n x) contributes a factor of q^2; this is a consequence of our convention that involves a^2 and q^2 in expressions for knot polynomials and quiver generating series.
y(x,a)=lim_q→ 1P̅_C̅(q x)/P̅_C̅(q^-1x)= ∑_l≥ 0N_l(a,1)x^l .
§.§ Generalized Schröder paths from quadruply-graded knot homology
The above proposition <ref> expresses generating series of lattice paths in terms of quiver generating series, for a quiver associated to a torus knot. However, the identification of variables (<ref>) is different from the one in (<ref>) that yields colored HOMFLY-PT polynomials. Nonetheless, we find that expressions for generalized Schröder numbers can be obtained directly from colored invariants of torus knots; to achieve this, however, we need to consider expressions for superpolynomials of quadruply-graded homologies.
To this end, let 𝒫_r(a,q,t) be the superpolynomial of the reduced colored HOMFLY-PT homology of the knot K. It can be obtained as the specialization (<ref>) of the superpolynomial of quadruply-graded homology, 𝒫_r(a,q,t)=𝒫^Q_r(a,q,1,t). Recall that we provided explicit expressions for 𝒫^Q_r(a,q,t_r,t_c) for some knots in section <ref>. For the (m,n) torus knot we also introduce another specialization and the following notation
p'_r(a,t) = a^-(m-1)(n-1)r 𝒫_r(a,q=1,t),
p̅_r(a,q) = p'_r(a,t=q^-1) = a^-(m-1)(n-1)r 𝒫_r(a,q=1,t=q^-1),
p_r(a,q) = q^fr^2 p̅_r(a,q) = a^-(m-1)(n-1)r q^fr^2 𝒫_r(a,q=1,t=q^-1),
where the factor a^-(m-1)(n-1)r simply shifts the a-degrees, so that the terms with the lowest a-degree in 𝒫_r(a,q,t) have a-degree equal to zero. We then consider the following specialization of the generating series of unreduced superpolynomials, shifted by the above factor a^-(m-1)(n-1)r,
P̅(x,a,q) = 1+∑_r≥ 1(-1)^r∏_i=1^r (a^2+q^2i-1)/(q^2;q^2)_r q^fr^2 a^-(m-1)(n-1)r 𝒫_r(a,q=1,t=q^-1) x^r =
=1+∑_r≥ 1(-1)^r∏_i=1^r (a^2+q^2i-1)/(q^2;q^2)_r p_r(a,q) x^r,
and to make contact with the counting of lattice paths corresponding to (m,n) torus knot we also fix the framing as f=mn.
Finally, we take the quotient that governs the growth of p_r(a,q)
y(x,a,q)=P̅(q x,a,q)/P̅(q^-1x,a,q) =
∑_l≥ 0N_l(a,q)x^l=∑_i,j,ln_i,j,la^iq^jx^l .
This y(x,a,q) agrees with (<ref>); in particular the number n_i,j,l in (<ref>) is the same number that appears in (<ref>), i.e. it equals the number of generalized Schröder paths under the line y=m/n x that start at (0,0) and end at (nl,ml), with i/2 diagonal steps and such that the area between the path and the line y=m/n x is equal to j/2.
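As an explicit check of this statement, one can take the trefoil formula from section <ref>, specialize q→1, t_r→1, t_c→q^-1, and expand the quotient above to first order in x. The sympy sketch below is our own transcription (function and symbol names are ours); it reproduces N_1(a,q) = a^4q^2 + 2a^2q^3 + q^4 + a^2q^5 + q^6, the coefficient quoted for the slope 2/3 in the next section.

import sympy as sp

a, q, x = sp.symbols('a q x')

def qpoch(z, Q, k):
    return sp.Mul(*[1 - z*Q**j for j in range(k)])

def qbinom(r, k, Q):
    return qpoch(Q, Q, r) / (qpoch(Q, Q, k) * qpoch(Q, Q, r - k))

def P_trefoil(r, tc):
    """Quadruply-graded trefoil superpolynomial at q = 1, t_r = 1."""
    S = sp.Integer(0)
    for k in range(r + 1):
        term = qbinom(r, k, tc**2) * tc**(2*r*k)
        term *= sp.Mul(*[1 + a**2 * tc**(2*i - 1) for i in range(1, k + 1)])
        S += term
    return a**(2*r) * S

m, n, f = 2, 3, 6     # trefoil = (2,3) torus knot in framing f = mn

def p_r(r):           # a^{-(m-1)(n-1)r} q^{f r^2} P_r(a, q=1, t=q^{-1})
    return sp.simplify(a**(-(m-1)*(n-1)*r) * q**(f*r**2) * P_trefoil(r, 1/q))

def Pbar(arg, order): # truncated generating series of unreduced superpolynomials
    S = sp.Integer(1)
    for r in range(1, order + 1):
        pref = (-1)**r * sp.Mul(*[a**2 + q**(2*i - 1) for i in range(1, r + 1)])
        S += pref * p_r(r) * arg**r / qpoch(q**2, q**2, r)
    return S

order = 1
y = sp.series(Pbar(q*x, order) / Pbar(x/q, order), x, 0, order + 1).removeO()
print(sp.expand(sp.simplify(y.coeff(x, 1))))
# -> a**4*q**2 + 2*a**2*q**3 + a**2*q**5 + q**4 + q**6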
§.§ Basic Schröder paths and homology generators
Let us call Schröder paths under the line of slope m/n from the origin to the point (n,m) the basic paths. One interesting consequence of the above statements is the following:
The number of basic paths weighted by the number of diagonal steps agrees with the number of unreduced generators of uncolored HOMFLY-PT homology of the (m,n) torus knot, weighted by the a-degree.
Indeed, it follows from (<ref>) that to first order in x
y(x,a,q) = 1 + 𝒫_1(a,q=1,t=q^-1) q^f-1 (a^2 + q) x + 𝒪(x^2)
and thus
y(x,a,q=1) = 1 + 𝒫_1(a,q=1,t=1) (a^2 + 1) x + 𝒪(x^2).
The term of order x captures the counting of basic paths, with each diagonal step weighted by a^2. At the same time, this term represents the number of reduced uncolored HOMFLY-PT generators weighted by a (which are captured by 𝒫_1(a,q=1,t=1)) and the analogous term weighted by an extra a^2; these two terms indeed represent all unreduced HOMFLY-PT generators.
In consequence, one could interpret Schröder paths of arbitrary length as being constructed from basic paths, in analogy to the construction of colored HOMFLY-PT homology from the uncolored one <cit.>.
§.§ A-polynomials for lattice paths
A-polynomials are algebraic curves well known in knot theory. The original A-polynomial was defined through the SL(2,ℂ) character variety of the knot complement <cit.>. Later it was realized that it is related to the asymptotic behaviour of the colored Jones polynomial via the famous AJ conjecture <cit.>.
More precisely, in this latter sense, A-polynomials arise in two versions, classical and quantum.
Classical A-polynomials encode asymptotics of colored knot polynomials for large color, while quantum A-polynomials provide recursion relations for colored knot polynomials (for all colors, not only large color). Such A-polynomials were also generalized to colored HOMFLY-PT polynomials, related to augmentation polynomials <cit.>, and further generalized to the refined case <cit.> and to super-A-polynomials <cit.>.
We introduce now analogous objects, which we call quantum and classical A-polynomials for lattice paths and which capture information about generating functions of such paths that follow from expressions of the form (<ref>). First, using similar techniques as for knots <cit.>, we determine quantum A-polynomials A(x̂, ŷ), which are operators that encode difference equations satisfied by (<ref>). Such operators can be determined as follows. Let us write the expression of the form (<ref>) as P̅(x,a,q)≡P̅(x) =∑_r=0P̅_r x^r. The coefficients P̅_r are q-holonomic (i.e. they are given as sums of expressions that involve at most quadratic powers of q, q-Pochhammers, and q-binomials), so from a general theory <cit.> it follows that they satisfy recursion relation of the form
α_0(q^r,q) P̅_r + α_1(q^r,q) P̅_r+1 + … + α_k(q^r,q) P̅_r+k = 0.
In practice, such recursion relations can be found using e.g. qZeil package <cit.>. The relation (<ref>) can be equivalently written as
𝖠(𝗑̂,ŷ)P̅_r = 0, for 𝖠(𝗑̂,ŷ) = ∑_j=0^k α_j(𝗑̂,q) ŷ^j
where 𝗑̂ and ŷ act respectively as multiplication by q^r and ŷP̅_r=P̅_r+1. This recursion relation can be easily turned into a relation for the generating function P̅(x)=∑_rP̅_r x^r <cit.>. Indeed acting with ŷ or 𝗑̂ on the whole generating series
ŷ∑_r P̅_r x^r = ∑_r P̅_r+1 x^r = x^-1∑_r P̅_r x^r, 𝗑̂P̅(x) = ∑_r q^r P̅_r x ^r = P̅(qx),
has the same effect as respectively acting with x̂^-1 or ŷ^1/2, whose action is defined as
x̂P̅(x) = x P̅(x), ŷP̅(x) = P̅(q^2 x), ŷx̂ = q^2 x̂ŷ.
(The power of 1/2 in the redefinition of 𝗑̂ into ŷ^1/2 is a consequence of our convention, in which q-Pochhammers have the argument q^2.) Therefore the quantum A-polynomial, defined as
A(x̂,ŷ) = 𝖠(ŷ^1/2,x̂^-1),
annihilates the generating series
A(x̂,ŷ) P̅(x) = 0.
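As the simplest illustration, consider the unknot, whose unreduced colored polynomials (<ref>) read P̅_r = a^-r q^r (a^2;q^2)_r/(q^2;q^2)_r. The ratio P̅_r+1/P̅_r immediately yields a first-order relation of the form (<ref>); the check below is our own minimal example (the explicit coefficients α_0, α_1 are derived by us, not quoted in the text).

import sympy as sp

a, q = sp.symbols('a q')

def qpoch(z, Q, k):
    return sp.Mul(*[1 - z*Q**j for j in range(k)])

def Pbar_unknot(r):
    # unreduced colored HOMFLY-PT polynomial of the unknot
    return a**(-r) * q**r * qpoch(a**2, q**2, r) / qpoch(q**2, q**2, r)

# alpha_0(x) Pbar_r + alpha_1(x) Pbar_{r+1} = 0 with x = q^r,
# alpha_0 = -(q/a)(1 - a^2 x^2), alpha_1 = 1 - q^2 x^2
for r in range(5):
    xr = q**r
    lhs = -(q/a)*(1 - a**2*xr**2)*Pbar_unknot(r) + (1 - q**2*xr**2)*Pbar_unknot(r + 1)
    assert sp.simplify(lhs) == 0
print("first-order recursion verified for r = 0..4")

Promoting q^r → 𝗑̂ and the unit shift in r to ŷ, and then trading 𝗑̂, ŷ for ŷ^1/2, x̂^-1 as described above, turns this recursion into a quantum A-polynomial annihilating the corresponding generating series.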
Note that, when turning (<ref>) or (<ref>) into A(x̂,ŷ), the term α_i(q^r,q)P̅_r+i=α_i(𝗑̂ ,q)ŷ^i P̅_r is turned into the term α_i(ŷ^1/2,q) x̂^-iP̅(x). It is therefore useful to commute each factor x̂^-i to the left over α_i(ŷ^1/2,q); this introduces additional q^i each time x̂^-i passes over ŷ^1/2, and ultimately we can write A(x̂,ŷ) in the form
A(x̂,ŷ) = ∑_j α̃_j(x̂,q) ŷ^j ≡∑_j x̂^j β_j(ŷ,q)
for appropriate coefficients α̃_j(x̂,q) and β_j(ŷ,q). In the q=1 limit this quantum A-polynomial reduces to the classical A-polynomial
A(x,y) = 0, for A(x,y) = lim_q→ 1A(x̂,ŷ),
and this classical A-polynomial equation is satisfied by y=y(x) defined as q=1 limit of (<ref>) (or equivalently (<ref>)). Such y(x) captures asymptotics of (<ref>) for large r, and equivalently can be found by the saddle point method, which we now briefly review.
To find A(x,y) by the saddle point method, we determine first an equation 𝖠(𝗑,𝗒) that encodes asymptotic expansion of P̅_r for large r. To this end we introduce 𝗑=q^r and z_i=q^k_i, where k_i are summation variables that appear in (<ref>). Then we express P̅_r as
P̅_r ∼∫∏_z_i dz_i exp1/ħ( V(𝗑,z_i) + 𝒪(ħ) ).
The saddle point analysis yields a system of equations
𝗒 = e^𝗑∂ V(𝗑 ,z_i)/∂𝗑,
1 = e^z_i ∂ V(𝗑 ,z_i)/∂ z_i , for each i.
Eliminating all z_i from the above set of equations we get a single relation 𝖠(𝗑,𝗒)=0. Then the A-polynomial equation we are interested in is obtained by the same change of variables as above: 𝗑↦ y^1/2, 𝗒↦ x^-1. In addition, we often rescale such a result by a simple overall factor (e.g. some power of x)
A(x,y) ∼𝖠(y^1/2,x^-1),
so that we obtain a polynomial of the form A(x,y)=1-y+…. This yields the result consistent with (<ref>) that was obtained as q=1 limit of quantum A-polynomial, i.e. either (<ref>) and (<ref>) completely agree, or (<ref>) is one factor in (<ref>) (whose other factors are typically quite simple and are not relevant for our considerations of the classical limit).
Note that V(𝗑)≡ V(𝗑,z_i) that follows from (<ref>) has the following structure
V(𝗑) = V_𝒫(𝗑) + V_uni(𝗑),
where V_𝒫(𝗑) follows from the form of a^-(m-1)(n-1)r𝒫_r(a,1,q^-1), while V_uni(𝗑) is a universal piece arising from all other contributions in the summand in (<ref>)
V_uni(𝗑 ) = i πlog𝗑 +f log^2 𝗑 +2 log a log𝗑 -1/2Li_2(-a^-2𝗑 ^2 )+1/2Li_2(𝗑 ^2).
Also note that the following formulae are useful in determining V(𝗑)
q^r^2 = e^1/ħlog^2 𝗑, a^r = e^1/ħlog𝗑log a,
(𝗑;q^2)_k ∼ e^1/2ħ ( Li_2(𝗑) - Li_2(𝗑z^2) +… )
r k_q^2 ≡(q^2;q^2)_r/(q^2;q^2)_k(q^2;q^2)_r-k∼ e^1/2ħ(-Li_2(𝗑^2) + Li_2(z^2) + Li_2(𝗑^2 z^-2) +…)
where 𝗑=q^r, z=q^k, and … denotes terms with higher powers of ħ.
Furthermore, we find that A-polynomials for lattice paths have the following general properties:
Framing change
To make contact with counting of lattice paths we have to fix framing, which is represented by the factor q^fr^2 in (<ref>), as f=mn. However, we can consider more general classical A-polynomial curves, with arbitrary value of f. In this case, changing the framing by f transforms classical A-polynomial A(x,y,a) into
A_f(x,y,a)=A(x y^f,y,a).
Symmetry
In addition, there is a symmetry property which roughly switches x↔ x^-1 and y↔ y^-1, in a suitable framing. For the A-polynomials above corresponding to lattice paths below y=m/nx, we first decrease framing by m+n,
i.e. we set
A'_m,n(x,y,a)=A_m,n(x y^-m-n,y,a).
Then we have
A'_m,n(x^-1,-a^2 y^-1,a) = x^-ω A'_m,n(x,y,a),
for some integer ω. In this case, the number ω is equal to the number of monomials of the bottom row of the (uncolored) HOMFLY-PT polynomial of the corresponding (m,n) torus knot, i.e.
ω=1/m+nm+nm.
The symmetry property is then the statement that A'_K(x^-1,-a^2 y^-1,a) is equal to A'_K(x,y,a) up to multiplication by a monomial.
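For the torus knots appearing in this paper these counts are ω=2 for T_2,3, ω=3 for T_2,5 and ω=5 for T_3,4, the last matching the 5 bottom-row generators of T_3,4 mentioned in section <ref>. A one-line check (our own illustration):

from math import comb

def omega(m, n):
    # the count (1/(m+n)) * binom(m+n, m) from the formula above
    return comb(m + n, m) // (m + n)

print([omega(*mn) for mn in [(2, 3), (2, 5), (3, 4)]])   # [2, 3, 5]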
Specializations
If we set a=i in A(x,y,a) (i.e. a^2=-1 since a appears only with even powers in A(x,y,a)), then the obtained polynomial is divisible by y-1:
A(x,y,a)|_a^2=-1 = (y-1) P(x,y),
for some polynomial P(x,y) in x and y with integer coefficients. This follows from the fact that setting a^2=-q in the expression for P̅(x,a,q) gives just 1, and therefore in the q=1 limit, and consequently a^2=-1, we get y=y(x)=1, so that y-1 must be a factor.
In addition if we set a=i y in A(x,y,a) (i.e. a^2=-y^2 since a appears only with even powers in A(x,y,a)), then the obtained polynomial is also divisible by y-1:
A(x,y,a)|_a^2=-y^2 = (y-1) Q(x,y),
for some polynomial Q(x,y) in x and y with integer coefficients. This is a consequence of another canceling differential property of 𝒫_r for the corresponding knot, i.e. from setting a^2=-q^2r+1. Moreover, for the same framing f as above in the symmetry property, in this specialization A_f(x,y,a) is also divisible by 1+x. Thus, overall we have
A(x,y,a)|_a^2=-y^2 = (1-y)(1+x y^f) R(x,y),
for some integer f and polynomial R(x,y) in x and y with integral coefficients.
We present explicit form of classical and quantum A-polynomials for various classes of lattice paths in section <ref>.
§ QUIVERS FOR GENERALIZED SCHRÖDER PATHS
In this section we illustrate our main results stated in propositions <ref> and <ref>. Namely, in several explicit and non-trivial examples, we show that generating functions of generalized Schröder paths under a line of rational slope y=m/nx are captured by the same quivers that encode colored invariants of (m,n) torus knots in framing mn, as asserted by proposition <ref>. At the same time, these generating functions, by proposition <ref>, follow directly from expressions related to quadruply-graded homology for such knots.
§.§ Schröder paths for the slope 1/f
To start with, we consider generalized Schröder paths under the line y=x/f. As the first example, which in fact was the motivation for this work, we recall the relation, found in <cit.>, between Schröder paths (under the diagonal line y=x, i.e. for f=1) and quiver generating series corresponding to the suitably framed unknot. In this case (and in the conventions and normalization of the current paper) the full colored HOMFLY-PT polynomials of the unknot in framing f=1 are encoded in the following quiver matrix:
C=[[ 1 1; 1 2 ]]
for which the quiver series takes form
P_C(x_1,x_2)=∑_i,j(-1)^i+j (-q)^ i^2+2 ij+2j^2x_1^ix_2^j/(q^2;q^2)_i(q^2;q^2)_j.
Now let
P_C(qx_1,qx_2)/P_C(q^-1x_1,q^-1x_2)=∑_i,jn̅_i,j(q)x_1^i x_2^j.
Further specialization of the from x_1→ -a^2x, and x_2→ x yields
y(x,a,q) = P_C(-a^2qx, qx)/P_C(-a^2q^-1x,q^-1x)= ∑_k≥ 0 N_k(a,q) x^k = ∑_i,j,kn_i,j,ka^i q^j x^k.
As found in <cit.>, and in agreement with proposition <ref>, the numbers n_i,j,k count Schröder paths below the diagonal y=x, starting from the origin (0,0) and ending at (k,k), with i/2 diagonal steps, and with the area between the line y=x and the path equal to j/2. We list N_k(a,q) for f=1 in the first section of table <ref>. Note that if we keep arbitrary a, we get the weighted count of Schröder paths, where the powers of a count the number of diagonal steps. On the other hand, for a=0 we do not allow diagonal steps, so that the counting reduces to Catalan numbers and their q-deformation.
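The statement above is straightforward to test order by order. The sympy sketch below (our own transcription, with the truncation order chosen by us) expands the ratio of the specialized quiver series and prints the first few N_k(a,q); at a=1, q=1 the coefficients reduce to the large Schröder numbers 1, 2, 6, 22, and at a=0, q=1 to the Catalan numbers 1, 1, 2, 5.

import sympy as sp
from itertools import product

a, q, x = sp.symbols('a q x')

def qpoch(z, Q, k):
    return sp.Mul(*[1 - z*Q**j for j in range(k)])

def P_C(x1, x2, order):
    """Truncated quiver series for C = [[1, 1], [1, 2]]."""
    S = sp.Integer(0)
    for i, j in product(range(order + 1), repeat=2):
        if i + j <= order:
            S += (-1)**(i + j) * (-q)**(i*i + 2*i*j + 2*j*j) * x1**i * x2**j \
                 / (qpoch(q**2, q**2, i) * qpoch(q**2, q**2, j))
    return S

order = 3
y = sp.series(P_C(-a**2*q*x, q*x, order) / P_C(-a**2*x/q, x/q, order),
              x, 0, order + 1).removeO()
for k in range(order + 1):
    Nk = sp.expand(sp.simplify(y.coeff(x, k)))
    print(k, Nk, '| a=q=1:', Nk.subs({a: 1, q: 1}), '| a=0, q=1:', Nk.subs({a: 0, q: 1}))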
Now, we find that the generalization of the above case to arbitrary f, i.e. to generalized Schröder paths under the line y=x/f (which correspond to unreduced colored HOMFLY-PT polynomials of the f-framed unknot), is captured by the following quiver matrix
C=[[ f f; f f+1 ]]
and the corresponding quiver generating series
P_C(x_1,x_2)=∑_i,j(-1)^f(i+j) (-q)^f i^2+2f ij+(f+1)j^2x_1^ix_2^j/(q^2;q^2)_i(q^2;q^2)_j.
Taking the specialization x_1→ -a^2x, and x_2→ x, we get a single variable series
P̅_C(x)=P_C(-a^2x,x),
and the q-weighted paths counts are captured by the following quotient
y(x,a,q) = P̅_C(qx)/P̅_C(q^-1x)=∑_k≥ 0 N_k(a,q) x^k=∑_i,j,kn_i,j,ka^i q^j x^k.
With this notation, we have
The numbers n_i,j,k in (<ref>) count the generalized Schröder paths under the line y=1/f x, starting at (0,0) and ending at (fk,k), with i/2 diagonal steps, and with the area between the boundary line y=1/f x and the path equal to j/2.
We assemble the explicit form of N_k(a,q) for several values of f and k, together with some specializations of a and q, in Table <ref>. Note that the specialization a=0 (i.e. ignoring diagonal steps), for a given f>1, produces so-called q-deformed Fuss-Catalan numbers, and setting further q=1 yields ordinary Fuss-Catalan numbers.
§.§ Slope 2/3
Generating series of generalized Schröder paths under the line y=2/3 x are related to the full colored HOMFLY-PT polynomial of the torus knot T_2,3, i.e. the trefoil, in framing 6. The quiver corresponding to the reduced version of the polynomial for the positive, 0-framed T_2,3 torus knot, and the vector 𝐚, have been obtained in <cit.>:
C =[[ 0 1 1; 1 2 2; 1 2 3 ]], 𝐚=(2,2,4).
It follows that the matrix of the quiver corresponding to the unreduced HOMFLY-PT polynomial for the negative T_2,3 in framing 6 takes form
C̅ =[[ 6 6 4 5 4 5; 6 7 4 5 4 5; 4 4 4 4 3 4; 5 5 4 5 3 4; 4 4 3 3 3 3; 5 5 4 4 3 4 ]]
and the corresponding quiver generating series reads
P_C̅(x_1,x_2,…,x_6)=∑_d_1,d_2,…,d_6 (-1)^2· 3·∑_i d_i (-q)^∑_i,jC̅_i,j d_i d_j∏_i x_i^d_i/∏_i (q^2;q^2)_d_i.
Upon specializations
x_1→ -a^2x, x_2→ x, x_3 → -a^2x, x_4→ x, x_5 → a^4x, x_6 → -a^2x,
we get a single variable series
P̅_C̅(x)=P_C̅(-a^2x,x,-a^2x,x,a^4x,-a^2x)
and then the quotient
y(x,a,q) = P̅_C̅(qx)/P̅_C̅(q^-1x)=∑_k N_k(a,q) x^k=∑_i,j,kn_i,j,ka^i q^j x^k.
With this notation, we have
The numbers n_i,j,k in (<ref>) count the generalized Schröder paths under the line y=2/3 x, starting at (0,0) and ending at (3k,2k), with i/2 diagonal steps, and with the area between the boundary line y=2/3 x and the path equal to j/2.
Explicit form of N_k(a,q) for several values of k is:
1,a^4 q^2 + 2 a^2 q^3 + q^4 + a^2 q^5 + q^6,a^8 q^4 + 4 a^6 q^5 + 6 a^4 q^6 + a^8 q^6 + 4 a^2 q^7 +
6 a^6 q^7 + q^8 + 12 a^4 q^8 + a^8 q^8 + 10 a^2 q^9 + 6 a^6 q^9 +
+3 q^10 + 13 a^4 q^10 + 12 a^2 q^11 + 4 a^6 q^11 + 4 q^12 +
12 a^4 q^12 + 12 a^2 q^13 + 2 a^6 q^13 + 4 q^14 + 8 a^4 q^14 +
10 a^2 q^15 + a^6 q^15 + 4 q^16 +
+ 5 a^4 q^16 + 7 a^2 q^17 + 3 q^18 +
2 a^4 q^18 + 4 a^2 q^19 + 2 q^20 + a^4 q^20 + 2 a^2 q^21 + q^22 +
a^2 q^23 + q^24,a^12 q^6 + 6 a^10 q^7 + 15 a^8 q^8 +
+ 2 a^12 q^8 + 20 a^6 q^9 +
15 a^10 q^9 + 15 a^4 q^10 + 45 a^8 q^10 + 3 a^12 q^10 + 6 a^2 q^11 +
70 a^6 q^11 + 24 a^10 q^11 + q^12 +60 a^4 q^12 + 78 a^8 q^12 +
+ 2 a^12 q^12 + 27 a^2 q^13 + 132 a^6 q^13 + 25 a^10 q^13 + 5 q^14 +
123 a^4 q^14 + 99 a^8 q^14 + 2 a^12 q^14 + 60 a^2 q^15 +187 a^6 q^15 + 24 a^10 q^15 +
+ 12 q^16 + 187 a^4 q^16 + 104 a^8 q^16 + a^12q^16 +
96 a^2 q^17+ 216 a^6 q^17 + 20 a^10 q^17 + 20 q^18 + 234 a^4 q^18+ 100 a^8 q^18 + a^12 q^18 +
+ 128 a^2 q^19 + 224 a^6 q^19+ 16 a^10 q^19 +
28 q^20 + 257 a^4 q^20 + 86 a^8 q^20 + 148 a^2 q^21 + 209 a^6 q^21 +
11 a^10 q^21 + 34 q^22 + 256 a^4 q^22 +
+ 72 a^8 q^22 + 155 a^2 q^23 +
187 a^6 q^23 + 7 a^10 q^23 + 37 q^24 + 239 a^4 q^24 + 54 a^8 q^24+
150 a^2 q^25 + 156 a^6 q^25 +4 a^10 q^25 + 37 q^26 +
+ 214 a^4 q^26 +
40 a^8 q^26 + 141 a^2 q^27 + 126 a^6 q^27 + 2 a^10 q^27 + 36 q^28 +
181 a^4 q^28 + 26 a^8 q^28 + 124 a^2 q^29 +95 a^6 q^29+ a^10 q^29 +
+
33 q^30 + 149 a^4 q^30 + 17 a^8 q^30 + 107 a^2 q^31 + 69 a^6 q^31 +
29 q^32 + 116 a^4 q^32 + 9 a^8 q^32 + 88 a^2 q^33 +47 a^6 q^33 + 25 q^34 +
+ 88 a^4 q^34 + 5 a^8 q^34 + 71 a^2 q^35 + 30 a^6 q^35 +
21 q^36 + 62 a^4 q^36 + 2 a^8 q^36 + 54 a^2 q^37 + 18 a^6 q^37 +
17 q^38 + 43 a^4 q^38+ a^8 q^38 +
+ 40 a^2 q^39 + 10 a^6 q^39 +
13 q^40 + 27 a^4 q^40 + 28 a^2 q^41 + 5 a^6 q^41 + 10 q^42 +
17 a^4 q^42 + 19 a^2 q^43 + 2 a^6 q^43+ 7 q^44 + 9 a^4 q^44 +
+
12 a^2 q^45 + a^6 q^45 + 5 q^46 + 5 a^4 q^46 + 7 a^2 q^47+ 3 q^48 +
2 a^4 q^48 + 4 a^2 q^49 + 2 q^50 + a^4 q^50 + 2 a^2 q^51 + q^52 +
a^2 q^53 + q^54, …
For q=1 this specializes to
1,2+3a^2+a^4, 23+62a^2+59a^4+23a^6+3a^8,
377 + 1468 a^2 + 2285 a^4 + 1804 a^6 + 753 a^8 + 155 a^10 + 12 a^12, …
For a=1 and q=1 we get
1,6,170,6854, …
while for a=0 and q=1 we get, as expected, the numbers of Duchon paths
1,2,23,377,7229,…
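These q=1 specializations can be cross-checked by brute-force enumeration, in the spirit of the dynamic-programming counter sketched in the introduction. A compact self-contained version for the slope 2/3 (our own illustration) reads:

from collections import defaultdict
import sympy as sp

a = sp.symbols('a')

def N_slope_2_3(K):
    """Sum of a^(2 * #diagonal steps) over paths (0,0) -> (3K, 2K)
    weakly below y = 2x/3, with steps R, U, D."""
    ways = defaultdict(lambda: defaultdict(int))
    ways[(0, 0)][0] = 1
    for x in range(3*K + 1):
        for y in range(2*K + 1):
            if (x, y) == (0, 0) or 3*y > 2*x:
                continue
            for dx, dy, dd in [(1, 0, 0), (0, 1, 0), (1, 1, 1)]:
                px, py = x - dx, y - dy
                if px >= 0 and py >= 0 and 3*py <= 2*px:
                    for d, c in ways[(px, py)].items():
                        ways[(x, y)][d + dd] += c
    return sp.expand(sum(c * a**(2*d) for d, c in ways[(3*K, 2*K)].items()))

for K in range(4):
    print(K, N_slope_2_3(K))
# K = 1 gives a**4 + 3*a**2 + 2, and setting a = 0 reproduces 1, 2, 23, 377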
§.§ Slope 2/5
Paths under the line y=2/5 x correspond to the full, unreduced colored HOMFLY-PT polynomials of the 10-framed negative torus knot T_2,5 (i.e. 5_1 knot). The quiver corresponding to reduced version of polynomial for the positive, 0-framed T_2,5 torus knot, and the vector 𝐚, are determined in <cit.>:
C =[[ 0 1 1 3 3; 1 2 2 3 3; 1 2 3 4 4; 3 3 4 4 4; 3 3 4 4 5 ]], 𝐚 = (4,4,6,4,6).
It follows that the quiver corresponding to the unreduced HOMFLY-PT polynomial for the negative T_2,5 in framing 10 takes form
C̅ =[[ 10 10 8 9 8 9 6 7 6 7; 10 11 8 9 8 9 6 7 6 7; 8 8 8 8 7 8 6 7 6 7; 9 9 8 9 7 8 6 7 6 7; 8 8 7 7 7 7 5 6 5 6; 9 9 8 8 7 8 5 6 5 6; 6 6 6 6 5 5 6 6 5 6; 7 7 7 7 6 6 6 7 5 6; 6 6 6 6 5 5 5 5 5 5; 7 7 7 7 6 6 6 6 5 6 ]]
The corresponding quiver generating series is:
P_C̅(x_1,x_2,…,x_10)=∑_d_1,d_2,…,d_10 (-1)^2· 5·∑_i d_i (-q)^∑_i,jC̅_i,j d_i d_j∏_i x_i^d_i/∏_i (q^2;q^2)_d_i.
Upon specializations:
x_2i-1→ -a^2 x_2i, i=1,…,5,
x_2→ x, x_4→ x, x_6 → -a^2x, x_8 → x, x_10→ -a^2x,
we get a single variable series:
P̅_C̅(x)=P_C̅(-a^2x,x,-a^2x,x,a^4x,-a^2x,-a^2x,x,a^4x,-a^2x)
and then the quotient
y(x,a,q) = P̅_C̅(qx)/P̅_C̅(q^-1x)=∑_k N_k(a,q) x^k=∑_i,j,kn_i,j,ka^i q^j x^k.
The numbers n_i,j,k in (<ref>) count the generalized Schröder paths under the line y=2/5 x, starting at (0,0) and ending at (5k,2k), with i/2 diagonal steps, and with the area between the boundary line y=2/5 x and the path equal to j/2.
Explicitly, the coefficients N_k(a,q) for the first several values of k read
1,a^4 q^6+a^4 q^4+a^2 q^9+2 a^2 q^7+2 a^2 q^5+q^10+q^8+q^6,a^8 q^24+a^8 q^22+2 a^8 q^20+3 a^8 q^18+4 a^8 q^16+4 a^8 q^14+4 a^8 q^12+
3 a^8 q^10+a^8 q^8+a^6 q^31+2 a^6 q^29+4 a^6 q^27+7 a^6 q^25+10 a^6 q^23+14 a^6 q^21+18 a^6 q^19+20 a^6 q^17+20 a^6 q^15+18 a^6 q^13+
12 a^6 q^11+4 a^6 q^9+a^4 q^36+2 a^4 q^34+5 a^4 q^32+8 a^4 q^30+13 a^4 q^28+18 a^4 q^26+25 a^4 q^24+31 a^4 q^22+36 a^4 q^20+
37 a^4 q^18+36 a^4 q^16+30 a^4 q^14+18 a^4 q^12+6 a^4 q^10+a^2 q^39+2 a^2 q^37+4 a^2 q^35+7 a^2 q^33+10 a^2 q^31+14 a^2 q^29+19 a^2 q^27+
24 a^2 q^25+28 a^2 q^23+30 a^2 q^21+30 a^2 q^19+28 a^2 q^17+22 a^2 q^15+12 a^2 q^13+4 a^2 q^11+q^40+q^38+2 q^36+3 q^34+4 q^32+5 q^30+
7 q^28+8 q^26+9 q^24+9 q^22+9 q^20+8 q^18+6 q^16+3 q^14+q^12, …
For q=1 this specializes to
1,2 a^4+5 a^2+3, 23 a^8+130 a^6+266 a^4+235 a^2 + 76,
377 a^12+3358 a^10+12109 a^8+22715 a^6+23452 a^4+12668 a^2+2803, …
For a=1 and q=1 we get
1,10,730,77482, …
while for a=0 and q=1
1,3,76,2803,121637,…
§.§ Slope 3/4
Lattice paths under the line y=3/4 x correspond to the full, unreduced HOMFLY-PT polynomial of the 12-framed negative torus knot T_3,4 (i.e. the 8_19 knot). The quiver corresponding to the reduced version of the extremal polynomial (of size 5, corresponding to 5 generators in the bottom row, and encoding only the part of the HOMFLY-PT polynomials that involves extremal powers of a) was found in <cit.>. Taking advantage of the expression (<ref>) we now determine the quiver for the positive, 0-framed T_3,4 torus knot, including the full a-dependence of its colored HOMFLY-PT polynomials:
C=[[ 0 1 2 3 5 1 2 3 4 5 4; 1 2 3 3 5 2 3 3 5 5 5; 2 3 4 4 5 3 4 4 5 5 5; 3 3 4 4 5 4 4 4 6 5 6; 5 5 5 5 6 6 5 6 6 6 6; 1 2 3 4 6 3 3 4 5 6 5; 2 3 4 4 5 3 5 4 6 5 6; 3 3 4 4 6 4 4 5 6 6 6; 4 5 5 6 6 5 6 6 7 7 7; 5 5 5 5 6 6 5 6 7 7 7; 4 5 5 6 6 5 6 6 7 7 8 ]], 𝐚 = (6,6,6,6,6,8,8,8,8,8,10).
It follows that the matrix C̅ of the corresponding quiver for the unreduced HOMFLY-PT polynomial of the negative T_3,4 in framing 12 is of size 22× 22 and reads:
[
[ 12 12 10 11 9 10 8 9 6 7 10 11 9 10 8 9 7 8 6 7 7 8; 12 13 10 11 9 10 8 9 6 7 10 11 9 10 8 9 7 8 6 7 7 8; 10 10 10 10 8 9 8 9 6 7 9 10 8 9 8 9 6 7 6 7 6 7; 11 11 10 11 8 9 8 9 6 7 9 10 8 9 8 9 6 7 6 7 6 7; 9 9 8 8 8 8 7 8 6 7 8 9 7 8 7 8 6 7 6 7 6 7; 10 10 9 9 8 9 7 8 6 7 8 9 7 8 7 8 6 7 6 7 6 7; 8 8 8 8 7 7 8 8 6 7 7 8 7 8 7 8 5 6 6 7 5 6; 9 9 9 9 8 8 8 9 6 7 7 8 7 8 7 8 5 6 6 7 5 6; 6 6 6 6 6 6 6 6 6 6 5 6 6 7 5 6 5 6 5 6 5 6; 7 7 7 7 7 7 7 7 6 7 5 6 6 7 5 6 5 6 5 6 5 6; 10 10 9 9 8 8 7 7 5 5 9 9 8 9 7 8 6 7 5 6 6 7; 11 11 10 10 9 9 8 8 6 6 9 10 8 9 7 8 6 7 5 6 6 7; 9 9 8 8 7 7 7 7 6 6 8 8 7 7 7 8 5 6 6 7 5 6; 10 10 9 9 8 8 8 8 7 7 9 9 7 8 7 8 5 6 6 7 5 6; 8 8 8 8 7 7 7 7 5 5 7 7 7 7 7 7 5 6 5 6 5 6; 9 9 9 9 8 8 8 8 6 6 8 8 8 8 7 8 5 6 5 6 5 6; 7 7 6 6 6 6 5 5 5 5 6 6 5 5 5 5 5 5 4 5 4 5; 8 8 7 7 7 7 6 6 6 6 7 7 6 6 6 6 5 6 4 5 4 5; 6 6 6 6 6 6 6 6 5 5 5 5 6 6 5 5 4 4 5 5 4 5; 7 7 7 7 7 7 7 7 6 6 6 6 7 7 6 6 5 5 5 6 4 5; 7 7 6 6 6 6 5 5 5 5 6 6 5 5 5 5 4 4 4 4 4 4; 8 8 7 7 7 7 6 6 6 6 7 7 6 6 6 6 5 5 5 5 4 5; ]]
The corresponding quiver generating series is
P_C̅(x_1,x_2,…,x_22)=∑_d_1,d_2,…,d_22 (-1)^3· 4·∑_i d_i (-q)^∑_i,jC̅_i,j d_i d_j∏_i x_i^d_i/∏_i (q^2;q^2)_d_i.
By taking the specializations:
x_2i-1→ -a^2 x_2i, i=1,…,11,
x_2,x_4,x_6,x_8,x_10→ x, x_12,x_14,x_16,x_18,x_20→ -a^2x, x_22→ a^4x,
we get a single variable series:
P̅_C̅(x) = P_C̅(-a^2x,x,-a^2x,x,-a^2x,x,-a^2x,x,-a^2x,x,a^4x,-a^2x,a^4x,
-a^2x,a^4x,-a^2x,a^4x,-a^2x,a^4x,-a^2x,-a^6x,a^4x),
and then the quotient gives
y(x,a,q) = P̅_C̅(qx)/P̅_C̅(q^-1x)=∑_k N_k(a,q) x^k=∑_i,j,kn_i,j,ka^i q^j x^k.
The numbers n_i,j,k in (<ref>) count the generalized Schröder paths under the line y=3/4x, starting at (0,0) ending at (4k,3k), with i/2 diagonal steps, and with the area between the boundary line y=3/4x and the path equal to j/2.
For arbitrary a and q, N_k(a,q) for a few values of k read
1,a^6 q^3+a^4 q^8+2 a^4 q^6+3 a^4 q^4+a^2 q^11+2 a^2 q^9+4 a^2 q^7+3 a^2 q^5+q^12+q^10+2 q^8+q^6,
a^12 q^12+a^12 q^10+a^12 q^8+a^12 q^6+
a^10 q^23+2 a^10 q^21+4 a^10 q^19+6 a^10 q^17+9 a^10 q^15+12 a^10 q^13+12 a^10 q^11+10 a^10 q^9+6 a^10 q^7+a^8 q^32+2 a^8 q^30+5 a^8 q^28+
9 a^8 q^26+16 a^8 q^24+24 a^8 q^22+35 a^8 q^20+44 a^8 q^18+53 a^8 q^16+55 a^8 q^14+49 a^8 q^12+35 a^8 q^10+15 a^8 q^8+a^6 q^39+
2 a^6 q^37+5 a^6 q^35+10 a^6 q^33+18 a^6 q^31+29 a^6 q^29+45 a^6 q^27+63 a^6 q^25+84 a^6 q^23+105 a^6 q^21+120 a^6 q^19+
127 a^6 q^17+120 a^6 q^15+96 a^6 q^13+60 a^6 q^11+20 a^6 q^9+a^4 q^44+2 a^4 q^42+5 a^4 q^40+9 a^4 q^38+17 a^4 q^36+27 a^4 q^34+
42 a^4 q^32+59 a^4 q^30+82 a^4 q^28+104 a^4 q^26+128 a^4 q^24+146 a^4 q^22+156 a^4 q^20+153 a^4 q^18+135 a^4 q^16+99 a^4 q^14+
55 a^4 q^12+15 a^4 q^10+a^2 q^47+2 a^2 q^45+4 a^2 q^43+7 a^2 q^41+12 a^2 q^39+19 a^2 q^37+28 a^2 q^35+38 a^2 q^33+51 a^2 q^31+65 a^2 q^29+
78 a^2 q^27+90 a^2 q^25+97 a^2 q^23+98 a^2 q^21+92 a^2 q^19+76 a^2 q^17+52 a^2 q^15+26 a^2 q^13+6 a^2 q^11+q^48+q^46+2 q^44+3 q^42+
5 q^40+7 q^38+10 q^36+12 q^34+16 q^32+19 q^30+22 q^28+24 q^26+25 q^24+24 q^22+22 q^20+17 q^18+11 q^16+5 q^14+q^12, …
Setting q=1 we get
1, a^6+6 a^4+10 a^2+5, 4 a^12+62 a^10+343 a^8+905 a^6+1235 a^4+842 a^2+227, …
For a=1 and q=1 we get
1,22,3618,871510, …
while for a=0
1,5,227,15090,1182187,…
§ A-POLYNOMIALS FOR GENERALIZED SCHRÖDER PATHS
In this section we present explicit results for classical and quantum A-polynomials that encode generating series of generalized Schröder paths. We derive such A-polynomials following general prescriptions presented in section <ref>.
Moreover, anticipating that the relation to combinatorial models generalizes to other knots, and as a prerequisite for future work, we also derive classical A-polynomials for the generating series (<ref>) involving superpolynomials for the 4_1 and 5_2 knots. In this case the resulting series y=y(x) also has integer coefficients – finding combinatorial models that reproduce these results is an interesting challenge.
§.§ A-polynomial for the slope 1/f
Counting lattice paths under the line of slope 1/f corresponds to the framed unknot. Generating functions of such lattice paths are encoded by the series (<ref>), in this case simply with p_r(a,q)=q^{f r^2}, which represents the framing factor. Writing (<ref>) as P̅(x,a,q)=∑_r=0^∞P̅_r x^r, we have
P̅_{r+1} = -q^{(2r+1)f}(a^2 + q^{2r+1})/(1-q^{2r+2}) P̅_r.
Multiplying both sides by (1-q^2r+2) and rewriting the resulting expression as a relation for the generating function using operators (<ref>), produces the quantum A-polynomial
A(x̂,ŷ) = 1 - ŷ + q^f a^2 x̂ŷ^f + q^{f+1}x̂ŷ^{f+1}.
This quantum A-polynomial annihilates (<ref>) with p_r(a,q)=q^{f r^2} (for any chosen framing f)
A(x̂,ŷ) P̅(x,a,q) = 0.
We can also determine the classical A-polynomial by the saddle point method. To this end we identify V(𝗑) in (<ref>) as V(𝗑) = V_uni(𝗑) given in (<ref>). We find that in this case y=y(x), found in section <ref> and obtained as the q=1 limit of (<ref>), or equivalently by proposition <ref>, for any chosen framing f, satisfies the A-polynomial equation A(x,y)=0 with
A(x,y) = 1 - y + a^2xy^f + xy^f+1.
This A(x,y) agrees with the q=1 limit of (<ref>).
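The equation A(x,y)=0 also gives a quick way to generate the series: solving it iteratively in the x-adic sense fixes one more coefficient per pass. A minimal sympy sketch (our own illustration; for f=1 the paths run under the line y=x, so the a=1 coefficients should reproduce the large Schröder numbers and the a=0 coefficients the Catalan numbers):

import sympy as sp

x, a = sp.symbols('x a')
f, order = 1, 6

def trunc(p, n):
    # keep only the coefficients of x^0, ..., x^(n-1)
    p = sp.expand(p)
    return sum(p.coeff(x, k) * x**k for k in range(n))

# iterate y <- 1 + a^2 x y^f + x y^(f+1); each pass fixes one more order
y = sp.Integer(1)
for _ in range(order):
    y = trunc(1 + a**2 * x * y**f + x * y**(f + 1), order)

print([y.coeff(x, k).subs(a, 1) for k in range(order)])  # [1, 2, 6, 22, 90, 394]
print([y.coeff(x, k).subs(a, 0) for k in range(order)])  # [1, 1, 2, 5, 14, 42]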
§.§ A-polynomial for the slope 2/3
Lattice paths for the slope 2/3 correspond to the trefoil in framing f=6. To determine the quantum A-polynomial for such paths, we consider the series (<ref>) with the contribution (<ref>). Writing (<ref>) as P̅(x,a,q)=∑_r P̅_r x^r and using the package <cit.>, we find a recursion relation for the coefficients P̅_r
α_0 P̅_r + α_1 P̅_r+1 + α_2 P̅_r+2 = 0,
where
α_0 = q^{18r+17}( q^{2r+1}+a^2) (q^{4r+7} +a^2)
α_1 = q^{6r+3}(q^{4r+5} + a^2) (a^4 q^6+a^2 q^{2r+9}+a^2 q^{4r+9}+a^2 q^{4r+13}+q^{4r+10}+
q^{4r+12}-q^{6r+14}+q^{8r+16})
α_2 = (1-q^{2r+4}) (q^{4r+3} + a^2).
Turning this expression into a difference operator acting on the generating function P̅(x,a,q) and writing it in terms of (<ref>), we find that the quantum A-polynomial (<ref>) takes the form
A(x̂, ŷ) = β_0 + x̂β_1 + x̂^2 β_2,
where
β_0 = 1 - ŷ + a^-2 q^-5 ŷ^2 - a^-2 q^-5 ŷ^3
β_1 = a^4 q^3 ŷ^3 + a^2 q^4 ŷ^4 - a^-2 q^6 ŷ^8 + a^-2 q^6 ŷ^9 + q^2 (a^2 + q + a^2 q^2 + q^3 + a^2 q^4) (ŷ^5 + a^-2 q ŷ^7)
β_2 = a^2 q^17ŷ^9 + q^18ŷ^10 + q^24ŷ^11 + a^-2 q^25ŷ^12 .
In the classical limit q=1 this expression factorizes
lim_q→ 1A(x̂, ŷ) = (1 + a^-2y^2)( 1 - y +x (a^4 y^3+a^2 y^4+2 y^5+2 a^2 y^5-y^6+y^7 ) + x^2 (a^2 y^9+y^10 )).
We can also determine the classical A-polynomial using the saddle point method, following section <ref>. Again, we consider the series (<ref>) with the contribution (<ref>). Setting z_1 = q^{k_1}, we find
V_3_1(𝗑) = V_uni(𝗑) + 2 log^2 z_1 - 4 log𝗑 log z_1 +
1/2( Li_2(z_1^2) + Li_2(𝗑^2 z_1^-2) + Li_2(-a^2 z_1^-2) - Li_2(𝗑^2) ).
It follows that
𝗒 = 𝗑^{2f}(𝗑^2 + a^2)/z_1^2 (𝗑^2 - z_1^2),
1 = (𝗑^2-z_1^2)(z_1^2 + a^2)/𝗑^4 (z_1^2 - 1).
Eliminating z_1 from these equations and removing an irrelevant overall factor according to (<ref>) we find the A-polynomial, which for f=6 takes the form
A(x,y) = 1 - y +x (a^4 y^3+a^2 y^4+2 y^5+2 a^2 y^5-y^6+y^7) + x^2 (a^2 y^9+y^10).
This is indeed the same as the second factor in (<ref>). The equation A(x,y)=0 is satisfied by the q=1 limit of the generating function of lattice paths y(x,a,q) determined in section <ref>. Newton polygons for the A-polynomial in (<ref>) are shown in fig. <ref>.
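This can be checked directly: plugging the q=1 coefficients of the slope-2/3 series quoted earlier into (<ref>), the coefficients of x^0, x^1 and x^2 must cancel identically in a (higher orders would require more terms of the series). A few lines of sympy suffice for this sanity check (our own illustration; the coefficients below are copied from the q=1 specialization given earlier):

import sympy as sp

x, a = sp.symbols('x a')

# q = 1 coefficients of the slope-2/3 generating series quoted earlier
y = 1 + (2 + 3*a**2 + a**4)*x \
      + (23 + 62*a**2 + 59*a**4 + 23*a**6 + 3*a**8)*x**2

A = 1 - y + x*(a**4*y**3 + a**2*y**4 + 2*y**5 + 2*a**2*y**5 - y**6 + y**7) \
      + x**2*(a**2*y**9 + y**10)

for k in range(3):
    print(sp.expand(A).coeff(x, k))  # prints 0 three times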
§.§ A-polynomial for the slope 2/5
In this case we consider just the saddle point method. For the slope 2/5, the generating series (<ref>) involves the contribution (<ref>) with p=2, and we find that
V_5_1(𝗑) = V_3_1(𝗑) + 2log^2 z_2 -4log z_2 log𝗑 -1/2Li_2(z_1^2) + 1/2Li_2(z_2^2)+ 1/2Li_2(z_1^2z_2^-2),
where z_1=q^{k_1}, z_2=q^{k_2}. The system of equations (<ref>) takes the form
𝗒 = 𝗑^{2f}(𝗑^2+a^2)/z_1^2 z_2^4 (𝗑^2 - z_1^2), 1 = z_2^2(z_1^2+a^2)(𝗑^2-z_1^2)/𝗑^4(z_1^2 - z_2^2), 1 = z_2^2(z_1^2-z_2^2)/𝗑^4(z_2^2-1).
To get the A-polynomial for paths under the line of slope 2/5 we set f=2(2p+1)=10 and find
A(x,y) = 1 - y + x (2 a^4 y^5 + y^6(a^2 - a^4) + y^7(3 + 4 a^2 + a^4) - y^8( 2 + 2 a^2) +
y^9(2 + 2 a^2) - y^10 + y^11) + x^2 (a^8 y^10 + a^6 y^11 + y^12(2 a^4 + 2a^6) +
y^13(2 a^2 + 2 a^4) + y^14(3 + 4 a^2 + a^4) + y^15 (a^2 -1) + 2 y^16) + x^3 (a^2 y^20 + y^21).
Newton polygons for this A-polynomial are shown in fig. <ref>. The above equation is satisfied by the generating function of paths found in section <ref>
y = 1 + x (2 a^4 + 5 a^2 + 3) + x^2 (23 a^8 + 130 a^6 + 266 a^4 + 235 a^2 + 76) +
x^3 (377 a^12 + 3358 a^10 + 12109 a^8 + 22715 a^6 + 23452 a^4 + 12668 a^2 + 2803) + …
§.§ A-polynomial for the slope 2/7
In this case we also consider just the saddle point method. For the slope 2/7, the generating series (<ref>) involves the contribution (<ref>) with p=3, and we find that
V_7_1(𝗑) = V_5_1(𝗑) + 2log^2 z_3 - 4log z_3 log𝗑
-1/2Li_2(z_2^2) + 1/2Li_2(z_3^2)+ 1/2Li_2(z_2^2z_3^-2).
The generating function of paths under the line of the slope 2/7 arises once we set f=14 in the term V_5_1(𝗑). We then find
A(x,y) = 1 - y + x (3 a^4 y^7 + a^2 y^8 - 2 a^4 y^8 + 6 a^2 y^9 + 2 a^4 y^9 -
4 a^2 y^10 - a^4 y^10 + 4 a^2 y^11 +
a^4 y^11 - 2 a^2 y^12 + 2 a^2 y^13 + y^9 (4 + (y-1) y (3 + 2 y^2 + y^4))) +
x^2 (3 a^8 y^14 + 2 a^6 y^15 - a^8 y^15 + 6 a^4 y^16 + 8 a^6 y^16 +
2 a^8 y^16 + 2 a^4 y^17 - a^6 y^17 +
10 a^4 y^18 + 4 a^6 y^18 + a^4 y^19 + 2 a^4 y^20 +
a^2 y^17 (3 + y (12 + y (-2 + y (8 + y)))) +
y^18 (6 + y (-3 + y (6 + y (-2 + 3 y))))) +
x^3 (a^12 y^21 + 3 a^4 y^25 + 4 a^4 y^26 + 2 a^4 y^27 +
a^10 y^22 (1 + 2 y) + a^8 y^23 (2 + y (2 + y)) +
a^2 y^26 (3 + 2 y (3 + y)) + a^6 y^24 (2 + y (4 + y)) +
y^27 (4 + y (-1 + 3 y))) +
x^4 (a^2 y^35 + y^36).
The above equation is satisfied by the generating function of paths
y = 1 + x (4 + 7 a^2 + 3 a^4) +…
Newton polygons for this A-polynomial are shown in fig. <ref>. In fact, from figures <ref>, <ref> and <ref> we can deduce the general pattern of how Newton polygons look for A-polynomials of lattice paths under the lines y=2/(2p+1) x, for all p=0,1,2,…. Namely, including the full a-dependence, i.e. regarding the Newton polygons with black dots, they consist of leftmost and rightmost columns of length 2, and p intermediate columns of length 2p+1, all at linearly increasing heights.
§.§ A-polynomial for the slope 3/4
For the lattice paths under the line y=3/4 x (corresponding to the torus knot T_3,4, i.e. the 8_19 knot), we also determine just the classical A-polynomial by the saddle point method. The potential (<ref>) takes the form
V_8_19(𝗑) = V_uni(𝗑) - 6log^2 𝗑 + 2 ( log^2 z_a + log^2 z_b + log^2 z_c ) +
2(log z_a + log z_b + log z_c)(log z_j - log z_c) + 2(log z_j - log z_c)^2 +
1/2( Li_2(𝗑^2 z_j^-2) - Li_2(𝗑^2) + Li_2(z_j^2 z_c^-2) + Li_2(z_c^2 z_b^-2) + Li_2(z_a^2) + Li_2(z_b^2 z_a^-2) ) +
1/2( -Li_2(-a^2𝗑^-2) + Li_2(-a^2𝗑^-2 z_j^-2) + Li_2(-a^2 z_c^2 z_j^-2) ),
where z_j=q^j, z_a=q^a, z_b=q^b, z_c=q^c. The generating function of lattice paths under the line of slope 3/4 arises for f=12. We find that the A-polynomial takes the form
A(x,y) = 1 - y + x (a^6 y^4 + a^4 y^5 + a^2 y^6 + 2 a^4 y^6 + 5 y^7 + 9 a^2 y^7 + 3 a^4 y^7 -
4 y^8 - 4 a^2 y^8 + y^9 + a^2 y^9 + 3 y^10 + 3 a^2 y^10 - y^12 + y^13) +
x^2 (-a^8 y^10 - a^6 y^11 + 3 a^4 y^12 + 4 a^6 y^12 + 4 a^2 y^13 + 8 a^4 y^13 + 3 a^6 y^13 +
27 a^2 y^14 + 17 a^4 y^14 - 10 a^2 y^15 - 4 a^4 y^15 + 5 a^2 y^16 + 3 a^4 y^16 + 6 a^2 y^17 +
y^14 (10 + y (-6 + y (3 + y (5 + (-1 + y) y))))) +
x^3 (-a^10 y^16 - a^8 y^17 - 5 a^6 y^18 - 6 a^8 y^18 + 3 a^4 y^19 + 5 a^6 y^19 + 3 a^8 y^19 +
6 a^2 y^20 + 10 a^4 y^20 + 4 a^6 y^20 + 27 a^2 y^21 + 17 a^4 y^21 - 8 a^2 y^22 - 3 a^4 y^22 +
4 a^2 y^23 + y^21 (10 + y (-4 + y (3 + y - y^2)))) +
x^4 (a^12 y^22 + a^10 y^23 - 3 a^8 y^25 + a^6 (-3 + y) y^25 + 4 a^2 y^27 + 9 a^2 y^28 - 2 a^2 y^29 +
a^4 y^26 (1 + y) (1 + 3 y) - y^28 (-5 + y - y^2 + y^3)) + x^5 (a^2 y^34 + y^35).
The equation A(x,y)=0 is solved by the generating function of paths found in section <ref>
y(x) = 1 + (a^6 + 6 a^4 + 10 a^2 + 5)x + (227 + 842 a^2 + 1235 a^4 + 905 a^6 + 343 a^8 +
62 a^10 + 4 a^12) x^2 + (15090 + 81812 a^2 + 190339 a^4 + 248089 a^6 + 198317 a^8 +
99975 a^10 + 31434 a^12 + 5857 a^14 + 575 a^16 + 22 a^18) x^3 + …
Newton polygons for this A-polynomial are shown in fig. <ref>. Note that, regarding generic a-dependence (i.e. the polygon with black dots), there is an interesting symmetry: in the second and the fifth column (counting from the left), the third dot (respectively from the top or bottom) is missing. Such a symmetry is not manifest for a=0 (i.e. for the polygon made of red dots) – this indicates that a-dependent path counts and corresponding A-polynomials are indeed more fundamental.
§.§ 4_1 knot
An interesting challenge is to generalize the relation to lattice paths (or possibly more general combinatorial models) beyond torus knots. As the first non-trivial example we find the A-polynomial curve corresponding to the 4_1 knot; we stress that this is not the knot theory A-polynomial (found already e.g. in <cit.>), but the A-polynomial that arises from the parameter identification introduced in section <ref>, which is relevant for the combinatorial models discussed in this paper. Using (<ref>) and analogous specializations as in (<ref>), and keeping f arbitrary, the potential (<ref>) takes the form
V_4_1(𝗑) = V_uni(𝗑) - 2log𝗑log z + 2log a log z + 1/2( -Li_2(𝗑^2) + Li_2(z^2) +
Li_2(𝗑^2 z^-2) - Li_2(-a^-2z^2) + Li_2(-a^-2𝗑^2) - Li_2(-a^-2𝗑^2 z^2) ),
where z=q^k. In this case the system of equations (<ref>) takes the form
𝗒 = 𝗑^{2f} (𝗑^2 z^2 + a^2)/(𝗑^2 - z^2), 1 = (𝗑^2 - z^2)(z^2+a^2)(𝗑^2 z^2+a^2)/a^2 𝗑^2 z^2 (z^2-1).
We then find
A(x,y) = a^2 y^2 (1 - y) + x (a^6 y^f + 4 a^4 y^{2+f} + 4 a^2 y^{3+f} + y^{5+f}) +
x^2 (-a^6 y^{2f} - 4 a^4 y^{2+2f} + 4 a^4 y^{3+2f} + a^2 y^{5+2f}) + x^3 (-a^6 y^{2+3f} - a^4 y^{3+3f}).
The equation A(x,y)=0 is satisfied by
y(x) = 1 + (4 + a^-2 + 4 a^2 + a^4) x + (1 + a^2) (3 + a^10 (-2 + f) + f +
a^8 (-6 + 7 f) + a^2 (13 + 7 f) + a^6 (1 + 17 f) + a^4 (16 + 17 f)) a^-4 x^2 + …
It would be very interesting to find a combinatorial model in which the counting of some objects reproduces (possibly for some specific f) the integer coefficients in the above y(x).
§.§ 5_2 knot
Analogously we find the A-polynomial curve for the 5_2 knot (again we stress that it is different from the knot theory A-polynomial determined in <cit.>). In this case we consider the superpolynomial (<ref>), and the specializations as in (<ref>) yield the potential (<ref>)
V_5_2(𝗑) = V_uni(𝗑) + 2log a log z_j + 2log a log z_k - log^2 z_k - 2log^2 z_j - 2log𝗑log z_k +
1/2( -Li_2(-a^-2z_k^2) + Li_2(-a^-2𝗑^2) - Li_2(-a^-2𝗑^2 z_j^2) ) +
1/2( Li_2(𝗑^-2) - Li_2(z_k^-2) - Li_2(z_k^2𝗑^-2) + Li_2(𝗓_𝗄^-2) - Li_2(z_j^-2) - Li_2(z_j^2 z_k^-2) )
where z_j=q^j, z_k=q^k. In this case the system of equations (<ref>) takes the form
𝗒 = 𝗑^{2f} (𝗑^2 z_j^2 + a^2)/z_k^2(𝗑^2 - z_k^2), 1 = (𝗑^2 - z_k^2)(z_k^2+a^2)/𝗑^4 (z_k^2-z_j^2), 1 = (𝗑^2 z_j^2 + a^2) (z_k^2-z_j^2)/z_j^2 z_k^2(z_j^2-1).
We then find for f=0
A(x,y) = y^7(1-y) + x (a^6 y^2 - a^2 y^4 + 4 a^4 y^4 + 2 a^2 y^5 + 2 y^6 +
5 a^2 y^6 + (1 - y) y^6 +
(y-1) y^7 + y^8) + x^2 (-a^8 - 3 a^2 y^3 + 4 a^2 y^4 + y^5 + a^2 y^5 - 2 (y-1) y^5 +
y^6 - 4 a^2 y^6 + (1 - y) y^7 - a^6 y (1 + 4 y) - a^4 y^2 (1 + y + 6 y^2)) +
x^3 (-2 a^6 - 3 a^2 y^2 + 2 a^2 y^3 - 4 a^2 y^4 + (1 - y) y^4 + (1 - y) y^5 - a^4 y (2 + 5 y)) +
x^4 (-a^4 - a^2 y).
Interestingly, for a=0 this expression factorizes
A(x,y)|_a=0 = y^4 (x - xy + y) (x^2 + x (2 + x) y + y^2 + (x-1) y^3).
Finding a combinatorial model in which the numbers of certain states would be captured by the above A-polynomials is an important challenge too.
§ ACKNOWLEDGEMENTS
The work of P.S. has been supported by the OPUS grant no. 2022/47/B/ST2/03313 “Quantum
geometry and BPS states” funded by the National Science Centre, Poland. The work of M.S. has been supported by the Science Fund of the Republic of Serbia, Project no. 7749891, GWORDS – “Graphical Languages", as well as by Fundação para a Ciência e a Tecnologia (FCT) through CEEC grant with DOI no. 10.54499/2020.02453.CEECIND/CP1587/CT0007.
|
http://arxiv.org/abs/2405.09178v1 | 20240515083548 | Testing and Debugging Quantum Programs: The Road to 2030 | [
"Neilson Carlos Leite Ramalho",
"Higor Amario de Souza",
"Marcos Lordello Chaim"
] | cs.SE | [
"cs.SE",
"quant-ph"
] |
Quantum Computing has existed in the theoretical realm for several decades. Recently, given the latest developments in hardware, quantum computing has re-emerged as a promising technology with the potential to solve problems that a classical computer could take hundreds of years to solve. With the rising interest in the field, there are challenges and opportunities for academics and practitioners in terms of software engineering practices, particularly in testing and debugging quantum programs. This paper presents a roadmap for addressing these challenges, pointing out the existing gaps in the literature and suggesting research directions. We present the current state-of-the-art testing and debugging strategies, including classical techniques applied to quantum programs, the development and implementation of quantum-specific assertions, and the identification and classification of bug patterns unique to quantum computing. Additionally, we introduce a conceptual model to illustrate the main concepts regarding the testing and debugging of quantum programs as well as the relationships between them. Those concepts are then used to identify and discuss the main research challenges in coping with quantum programs through 2030, focusing on the interfaces between classical and quantum computing and on creating testing and debugging techniques that take advantage of the unique quantum computing characteristics.
[500]Software and its engineering Software testing and debugging
[500]Hardware Quantum computation
§ INTRODUCTION
Quantum computing has leaped forward in recent years, gaining attention from both academia and industry. The main reason to develop software and hardware solutions based on quantum phenomena is the need to speed up the processing of complex problems.
A quantum computer is a device that takes advantage of the specific properties described by quantum mechanics to perform computation <cit.>. While quantum computers have been a theoretical concept for years, several companies are now engaged in developing hardware, programming frameworks, and quantum programming languages such as Q# from Microsoft[https://learn.microsoft.com/en-us/azure/quantum/user-guide/], Cirq from Google[https://quantumai.google/cirq], and Qiskit from IBM[https://qiskit.org/]. As hardware development progresses, quantum computing applications are becoming increasingly important and promising, with potential in fields such as molecular simulations, cybersecurity, finance, and logistics. Quantum computing has also been used to accelerate the execution of classical machine learning and to create new quantum machine learning algorithms <cit.>.
Software Engineering practices and techniques must support the development of quantum applications to achieve productivity, quality, and business-oriented solutions. On one hand, software engineers must know the fundamental basis of quantum computing to understand its specificities. On the other hand, specialists in developing quantum applications should comprehend the importance of software engineer practices, methods, and techniques to deliver bug-free applications that can be maintained during their lifecycles.
With the spread of frameworks and programming languages, it is important to ensure that quantum computing applications will work as expected, both as standalone components and as sub-modules of larger hybrid applications with classical computing components. Quantum computing has certain characteristics that pose new challenges for testing and debugging for researchers and practitioners <cit.>. To keep up with advances made by practitioners, researchers are proposing strategies to test and debug quantum applications, adapting existing techniques or creating new approaches based on quantum computing concepts.
This paper presents a roadmap with insights concerning the future of testing and debugging of quantum programs. We will first provide a brief theoretical background about quantum computing. Concepts such as quantum bits (qubits), superposition, and entanglement are explained and illustrated with an example. By doing so, we will be able to discuss the impact of those quantum characteristics for testing and debugging.
In Section <ref>, we present an overview of the state-of-the-art and state-of-the-practice in quantum computing, such as classical testing and debugging techniques applied for quantum programs, quantum assertions, bug patterns, and bug hierarchies in quantum programs.
In Section <ref>, we propose a conceptual model that represents the topics related to testing and debugging quantum applications. This model will be used to discuss concerns and challenges for the future on the road to 2030. Finally, Section <ref> contains our remarks and conclusions.
§ QUANTUM COMPUTING CONCEPTS
To understand the differences between a Classical Program (CP) and a Quantum Program (QP) in terms of testing and debugging, it is useful to explore some of the key characteristics of QC <cit.>:
* Quantum parallelism: Superposition is the quantum principle that allows calculations to consider multiple quantum states simultaneously, thus leveraging parallelism.
* Statistical results in most cases: unlike classical programs, most quantum computing applications are governed by the inherent uncertainty of superposition.
* Exponential scaling: as qubits can assume two values (|0⟩ and |1⟩), the input space of quantum programs scales as 2^N, where N is the number of qubits. With 50 qubits, for example, the number of possibilities increases to 2^50, which cannot be simulated even by current supercomputers.
* Quantum interference: interference occurs when two quantum states are combined so that their amplitudes either amplify (constructive interference for the right answer) or cancel (destructive interference for incorrect answers).
* Asking the right question: This characteristic refers to the mapping of the problem statement such that it can be solved by a quantum computer.
Besides the concepts mentioned above, there are also characteristics such as entanglement and the no-cloning theorem that directly impact the testing and debugging of quantum programs. These concepts will be discussed in detail in the following paragraphs with the support of the running example presented in Listing <ref>.
Listing: Running example with the Bell state circuit
from qiskit import QuantumCircuit
from qiskit_aer import Aer
from qiskit import transpile
from qiskit.visualization import plot_histogram
from matplotlib import pyplot as plt

# creating a circuit with 2 qubits; the classical
# bits for the results are added by measure_all()
circuit = QuantumCircuit(2)

circuit.h(0)
circuit.cx(0,1)
circuit.measure_all()

# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circuit = transpile(circuit,simulator)
# simulates the circuit
result = simulator.run(circuit,shots=100).result()
counts = result.get_counts(circuit)
plot_histogram(counts)
plt.show()
The code is written in Python and is based on Qiskit, an open-source software development kit for working with quantum computers at the level of pulses, circuits, and application modules <cit.>. One of the central aspects of programming using Qiskit is the circuit, which consists of one or more quantum operators. Figure <ref> presents the quantum circuit built by the Qiskit code of Listing <ref>. Quantum circuits are composed of wires and quantum gates, responsible for carrying around and manipulating quantum information <cit.>. The vertical dashed lines are called barriers and are used in this example to divide the circuit into pieces so it is easier to explain the parts individually. The circuit of Figure <ref> is composed of two qubits (q_0 and q_1), a Hadamard, and a CNOT operator, as well as the measurements for each qubit.
Similarly to classical bits, qubits can assume two different measurable states: |0⟩ and |1⟩, which are, to a certain extent, equivalent to the classical binary states 0 and 1. However, since qubits are physically subatomic particles, they have certain quantum mechanical properties such as superposition of states and entanglement. Computations that would normally need to be performed serially on 0 and 1 separately on a classical computer could now be completed in a single operation using a qubit on a quantum computer, making computations faster <cit.>.
Qubits can be in different states, represented in Quantum Mechanics as vectors, usually with the Dirac notation. In this notation, vectors are represented as bra-kets, in which a ket is the column vector and a bra its conjugate transpose. In this way, the vectors representing the states |0⟩ and |1⟩ are defined as:
|0⟩ = [ 1; 0 ], |1⟩ = [ 0; 1 ]
In Listing <ref>, line 9, the constructor QuantumCircuit creates two qubits initialized in the base state |0⟩; the two classical bits that store the results of the measurements on each qubit are added later by measure_all (line 13).
The circuit is composed of q_0 and q_1 and is, at this point, a composite system. A system with multiple qubits is called a composite quantum system <cit.> and is the result of the tensor product of the separate, individual spaces. For instance, a composite system with two qubits consists of a single quantum system with four dimensions. In general terms, a composite quantum system of N qubits consists of a single quantum system with 2^N dimensions.
The composition is represented by the symbol ⊗, which is the tensor product of the state vectors that represent each qubit individually. For two qubits, the computational basis is |00⟩, |01⟩, |10⟩, and |11⟩. The tensor product is defined in the Dirac notation as:
|x⟩⊗|y⟩≡|x⟩|y⟩≡|xy⟩
As stated by quantum mechanics principles, systems are set to a definite state only once they are measured. Before a measurement, the systems are in an indeterminate state. For instance, the superposition of |0⟩ and |1⟩ is a linear combination of these states:
|ψ⟩ = 1/√(2)|0⟩ + 1/√(2)|1⟩
The state presented in Equation <ref> is usually represented as |+⟩, whereas the state |ψ⟩ = 1/√(2)|0⟩ - 1/√(2)|1⟩ is represented by |-⟩.
When quantum states are in a superposition, the probability of a state resulting after the measurement is equal to the modulus squared of the amplitude of that state. This is known as the Born Rule and it was formulated by Max Born in 1926. In Equation <ref>, the probability of the state |0⟩ or |1⟩ being returned after measurement is equal to:
P(|0⟩) = P(|1⟩) = |1/√(2)|^2 = 1/2
Notice that the Born Rule maps state measurements to the concept of probability, i.e., the sum of the squared moduli of the amplitudes of all possible states in the superposition is equal to 1:
|α|^2 + |β|^2 = 1
For the example in Listing <ref>, the qubits are measured in line 13 (measure_all method). Based on the Born Rule, q_0 will have a 50% chance of collapsing to |0⟩ and 50% of collapsing to |1⟩, as follows:
|1/√(2)|^2 + |1/√(2)|^2 = 1
However, since the example circuit is composed of q_0 and q_1, the result is a combined state between these two qubits, which is represented by the tensor product (⊗).
Another interesting characteristic of QC is Entanglement, which can be defined as a physical phenomenon in which multiple qubits are correlated with each other, such that the measurement of one of them automatically determines the correlated states of the others, even if these qubits are separated by great distances. In mathematical terms, entangled qubits represent superposition states that are not separable, i.e., cannot be factored into product states.
In Listing <ref>, the entanglement is achieved with the CNOT (Controlled-NOT) operator (line 12), which acts on two qubits, a control qubit (which is in superposition) and a target qubit, as follows (a linear-algebra sketch of this construction is given after the list):
* if the control qubit is |0⟩, no action is taken on the target qubit.
* However, in case the control qubit is |1⟩, the target qubit is flipped.
* the control qubit remains the same.
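The action of these two gates on the initial state can be reproduced with elementary linear algebra. The short numpy sketch below (our own illustration, independent of Qiskit) builds the state prepared by the code in Listing <ref> just before the measurements; note that we put the control qubit as the first tensor factor (a textbook convention), whereas Qiskit orders qubits the other way around, but for this symmetric Bell state the amplitudes are the same either way.

import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],    # basis order: |00>, |01>, |10>, |11>,
                 [0, 1, 0, 0],    # with the control as the first factor
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(ket0, ket0)            # |00>
state = np.kron(H, np.eye(2)) @ state  # Hadamard on the control qubit
state = CNOT @ state                   # entangles the two qubits

print(state)               # [0.707 0. 0. 0.707] -> (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)  # Born rule: P(00) = P(11) = 0.5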
With the circuit defined, the next step consists of assembling it and submitting it to a back-end for execution. For Listing <ref>, the circuit is submitted on line 19 to the Aer simulator (defined in line 16), which consists of an ideal (noise-free) simulator running in the local computer <cit.>.
Given the probabilistic nature of quantum programs, experiments usually need to be executed multiple times (parameter shots, defined in line 19 in the run method), so the results can be checked against a certain expected probability distribution. Once the circuit is executed, the results are collected in a variable called counts, which contains a map from the returned values to their frequencies.
Lastly, the function plot_histogram will plot the returned results (on the x-axis) and their respective frequencies (on the y-axis).
Because of the differences in devices in terms of quantum architectures and hardware implementation details, there is a process called transpilation, which is responsible for rewriting a given input circuit to match the topology of that specific quantum device and optimizing the circuit <cit.>. In the running example, the transpilation is executed in line 17, by calling the method transpile with the circuit and the target execution environment (Aer simulator).
One key difference of QC applications when compared to their classical counterparts is the inability to clone quantum states. This is known as the No-cloning Theorem. In classical computing, making multiple copies or inspecting the value of variables is a common task. The no-cloning theorem states that it is not possible to create a copy of an arbitrary quantum state <cit.>. Thus, a read operation of an intermediate state in a qubit will make the quantum state collapse to a classical value, which will be the output state. Thus, the no-cloning theorem poses an important limitation to the process of debugging quantum programs.
There are also gates such as R_x, R_y, and R_z that are defined in terms of angles or rotations. These gates accept an angle as a parameter and rotate the target qubit around their respective axis by the specified angle. They are used in variational quantum algorithms, which are the basis of many Quantum Machine Learning (QML) algorithms. QML is currently a very active research area in QC and has emerged as a dominant paradigm for circuit-based quantum programs in the current noisy intermediate-scale quantum (NISQ) era <cit.>.
In QML applications, there is typically a predefined parameterized quantum circuit (or variational circuit) with predefined architectures whose parameters are optimized by classical optimization algorithms. They are composed of different elements such as parameterized gates and entangling blocks. The classical data are coded into quantum states using different techniques and fed into the parameterized circuit. The output is read and sent to the classical optimizer. Some of the techniques for coding the classical data into quantum states are Basis Encoding, Amplitude Encoding, Time-Evolution Encoding, and Hamiltonian Encoding. As for the output, the most common approaches to map the results of the measurements to a class or label in a classification problem are Parity Post Processing <cit.> and Measuring the first qubit.
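As an illustration of the output-mapping step, the following minimal sketch (with hypothetical counts; the helper is ours, not part of any of the frameworks above) implements parity post-processing followed by a majority vote over the shots:

def parity_label(bitstring):
    # parity post-processing: odd number of 1-bits -> class 1, even -> class 0
    return bitstring.count('1') % 2

counts = {'00': 520, '11': 410, '01': 70}  # hypothetical measurement counts

votes = {0: 0, 1: 0}
for bits, n in counts.items():
    votes[parity_label(bits)] += n

predicted = max(votes, key=votes.get)  # majority vote over all shots
print(votes, '-> class', predicted)    # {0: 930, 1: 70} -> class 0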
§ CURRENT TESTING AND DEBUGGING APPROACHES FOR QUANTUM COMPUTING
This section summarizes the main testing strategies applied to QPs, as well as assertions, bug patterns, taxonomies, and benchmarks. Finally, an overview of the main debugging techniques is presented.
§.§ Testing strategies
In terms of testing QPs, there are systematic studies <cit.> that present a broad overview of different techniques, varying from adapting classical approaches to QPs with multiple measurements and statistical analyses to using Hoare logic to determine whether a quantum program is correct. These works also cover topics such as Bug Benchmark frameworks for QPs <cit.>, reusability of quantum algorithms, data structures, and libraries <cit.>, quantum algorithm development <cit.>, formal verification of quantum protocols <cit.>, simulation of QP <cit.>, and other traditional classical methods such as Mutation Testing, Property-based testing, and Fuzz testing. Other authors <cit.> explore the issues with testing and debugging QPs from a different perspective: they study bug types, taxonomies, bug repositories <cit.> and benchmarks <cit.>.
These systematic reviews cover in detail other topics such as Assertions types <cit.>, and the overall challenges associated with the testing of QPs <cit.>. In what follows, we describe the most promising testing and debugging techniques for QPs.
Classical testing strategies developed for QC cover different aspects of a QP. For instance,
Quito (QUantum InpuT Output coverage) <cit.> is a framework to tackle the problem of test oracles for QPs, as well as coverage criteria for the program input, output, and input-output relations. The framework takes into account a Program Specification (PS) and uses statistical analysis of the test results to determine the criteria for passing the tests. Two Test Oracles are defined: Wrong Output Oracle (WOO) and Output Probability Oracle (OPO).
Another classical approach used to tackle the dimensional problem of the input space is equivalence class partitioning, which is a functional testing technique that consists of dividing the input domain into different classes in which the software being tested has supposedly the same behavior <cit.>.
For QPs, Long and Zhao <cit.> have defined two equivalence partitioning criteria, namely:
* Classical-Superposition Partition (CSP) – each input variable of each quantum state type is divided into classical state input and superposition state input.
* Classical-Superposition-Mixed partition (CSMP) – each input variable of each quantum state type is divided into classical state input, superposition state input, and mixed state input.
In terms of general classical testing approaches adapted for the testing of QPs, a few other examples in the literature are detailed in the following subsections.
§.§.§ Combinatorial Testing.
The coverage criterion defined in Quito has scalability issues as the number of qubits increases. Thus, researchers have been working on optimization approaches to limit the input space. An example of such an approach is QuCAT (QUantum CombinAtorial Testing) <cit.>, a framework that applies Combinatorial Testing (CT) to generate tests for QP. The idea is that faulty points in the program can be reached through particular combinations of input values of a given strength, such as pair-wise or 3-wise.
§.§.§ Search-based testing.
A test generation tool for quantum programs, QuSBT (Search-based Testing of Quantum Programs) <cit.>, uses genetic algorithms to generate a test suite for quantum programs with the maximum number of failing tests. The input for the tool consists of the quantum program under test, a list of the input and output qubits, the total number of qubits, and a program specification (PS). The PS maps each input value to its respective probability of occurrence. A statistical test (Pearson's chi-square) is used for checking failures with a probabilistic nature, in which the user can also specify the significance level for the test.
§.§.§ Fuzz Testing.
Wang et al. <cit.> investigated the use of Fuzz Testing in the generation of rare inputs in QP to trigger sensitive branches and thus induce crashes or discover defects. The idea is to use a gray box testing approach to first identify the code measurement operations and the branches produced by these measurements. The procedure continues by producing input matrices that will maximize the probability of these sensitive branches being triggered, thus reaching the code with the possible defect. As measurement operations make qubits collapse to classical values, this approach is more of a hybrid quantum-classical technique, as everything after the measurement is purely classical and does not depend on any quantum characteristic.
§.§.§ Property-Based Testing.
The probabilistic nature of QC programs makes it difficult to assert the value of certain quantum states, especially for test cases in which superposition plays a role. Property-based testing has been studied as an alternative to mitigate the non-deterministic nature of QPs <cit.>, as its main approach consists of generating tests based on general properties of the artifact being tested and not in concrete test cases. As with the classical property-based testing approach, the properties are described as pre-conditions as well as post-conditions and concrete program states become higher-level abstractions. The authors created a property specification language for testing Q# programs and a property-based testing method to generate, execute, and statistically assert the results of concrete test cases. To assert the test results, the authors define five types of assertions: Assert Probability, Assert Entanglement, Assert Equal, Assert Teleported, and Assert Transformed.
§.§.§ Mutation Testing.
Mutation testing plays two important roles in QPs: (1) mutation operators can be used to create faulty versions of QP <cit.>, thus mitigating the lack of quantum bug repositories and quantum benchmark programs, and (2) to assess the quality of test suites for QP. In terms of the assessment of the quality of test suites, Mendiluze et al.<cit.> developed a mutation analysis tool for QPs called Muskit (MUtation testing for QisKIT) based on the Qiskit framework. Muskit has three components: Mutants Generator, Mutants Executor, and a Test Analyzer. The mutant generator component defines three mutation operators: Add Gate, Remove Gate, and Replace Gate. Similarly, Fortunato et al.<cit.> investigated the application of mutation testing in QPs written in Qiskit. The authors created a set of mutation operators to generate mutants based on qubit measurements and quantum gates. These operations were incorporated in a framework called QMutPy, which consists of an extension of the MutPy, a Python-based tool for mutation testing. QMutPy extends the mutation operators already present in MutPy with five quantum operators: Quantum Gate Replacement (QGR), Quantum Gate Deletion (QGD), Quantum Gate Insertion (QGI), Quantum Measurement Insertion (QMI), and Quantum Measurement Deletion (QMD), which derive from the classical mutant operations.
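To give a concrete flavor of such operators at the circuit level, the sketch below implements a deliberately simplified Quantum Gate Replacement; this is our own toy illustration built on Qiskit's circuit data API, not QMutPy's implementation (which mutates Python source code):

from qiskit import QuantumCircuit
from qiskit.circuit.library import XGate

def gate_replacement_mutants(circuit, source='h', replacement=XGate()):
    # toy QGR operator: yield one mutant per occurrence of `source`,
    # with that single occurrence replaced by `replacement`
    for i, target in enumerate(circuit.data):
        if target.operation.name != source:
            continue
        mutant = QuantumCircuit(*circuit.qregs, *circuit.cregs)
        for j, inst in enumerate(circuit.data):
            op = replacement if j == i else inst.operation
            mutant.append(op, inst.qubits, inst.clbits)
        yield mutant

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
for mutant in gate_replacement_mutants(bell):
    print(mutant)  # the Hadamard replaced by an X gate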
§.§.§ Metamorphic Testing.
Previous works <cit.> developed an approach to use metamorphic relations to test quantum programs. In their work, the authors use metamorphic rules, written as quantum functions and based on properties of the QP, to avoid or delay direct qubit measurement. These metamorphic relations are written as functions that can be executed directly in a quantum computer, consisting of what the authors call an "oracle quantum program". Another approach developed by Paltenghi and Pradel <cit.> is MorphQ, a metamorphic testing framework that aims to tackle two challenges related to testing QPs: (1) the lack of quantum programs available for testing; and (2) the oracle problem. MorphQ is equipped with an automatic quantum program generator, which uses both template-based and grammar-based code generation. The resulting programs do not crash during execution, as the generating strategies consider domain-specific constraints of quantum computing. To alleviate the oracle problem, MorphQ implements ten metamorphic transformations, such that the source and the follow-up programs have related outputs with equivalent behaviors.
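One very simple relation of this kind can be sketched as follows (our own illustration, far simpler than MorphQ's actual transformations): inserting a self-inverse gate pair into the source program must leave the output distribution statistically unchanged, so the source and follow-up runs can be compared without knowing the correct output.

from qiskit import QuantumCircuit, transpile
from qiskit_aer import Aer

def run_counts(circuit, shots=2000):
    backend = Aer.get_backend('aer_simulator')
    job = backend.run(transpile(circuit, backend), shots=shots)
    return job.result().get_counts()

source = QuantumCircuit(2)   # source program: the Bell circuit
source.h(0)
source.cx(0, 1)

follow_up = source.copy()    # follow-up program: Z;Z acts as the identity
follow_up.z(0)
follow_up.z(0)

source.measure_all()
follow_up.measure_all()

# metamorphic relation: both distributions must agree up to sampling noise
print(run_counts(source))
print(run_counts(follow_up))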
§.§.§ Other Quantum-based Testing Approaches.
The characteristics of QC described in Section <ref>, notably superposition and the impossibility of cloning quantum states, pose interesting challenges for researchers in Quantum Software Testing. However, other particularities of quantum systems have been leveraged in the development of novel testing approaches. Reversibility and the unitary nature of quantum operations are cornerstone concepts in specific testing techniques, including: (i) partial equivalence checking <cit.>, (ii) fast equivalence checking for quantum circuits <cit.>, (iii) fault testing for reversible circuits <cit.>, (iv) methods for k-CNOT gates (multiple-control Toffoli gates) <cit.>, and (v) design techniques for gates such as Toffoli, Fredkin, and mixed Toffoli-Fredkin that aim to improve circuit testability <cit.>. Other approaches such as Qraft <cit.> are fully based on the reversal of the circuit being tested to predict the program output. The idea of Qraft is to reverse the quantum circuit and execute the full forward as well as its reversed version to deduce the correct program output for all the quantum states of the original (forward) circuit. Similarly, Qdiff <cit.>, a differential testing framework for testing QPs, takes advantage of reversibility and unitary operators to generate logically equivalent variants of the QPs being tested. Finally, Miranskyy's <cit.> approach considers using QCs to speed up dynamic tests of classical programs. Although the focus of his work is not the test of QPs per se, the author shows that, in some cases, it is possible to translate a classical program to a quantum program and then take advantage of QC computational power.
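The basic trick behind the reversibility-based approaches fits in a few lines: since a gate-level circuit C is unitary, running C followed by its inverse must return the all-zero state, which yields a cheap oracle. The sketch below shows only this core idea (our own illustration; Qraft's and Qdiff's actual methodologies are considerably more involved):

from qiskit import QuantumCircuit, transpile
from qiskit_aer import Aer

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)

roundtrip = circuit.compose(circuit.inverse())  # C followed by its inverse
roundtrip.measure_all()

backend = Aer.get_backend('aer_simulator')
counts = backend.run(transpile(roundtrip, backend),
                     shots=100).result().get_counts()
assert counts == {'00': 100}, counts  # ideal simulator: only |00> survives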
§.§ Assertions
Assertions in QPs are an important area of study due to the unique constraints posed by quantum computing (Section <ref>). These include non-deterministic outcomes causing the oracle problem, and the no-cloning theorem. The probabilistic behavior also studied in the testing of classical programs is particularly relevant due to the uncertain nature of quantum states. Thus, for the programmer creating tests for a quantum routine, it is considerably difficult to define an oracle upon which the assertions can be based.
The inability to directly read quantum states without collapsing them to classical values makes it impossible to use assertions in the middle of a QP. Consequently, the approaches that emerged to tackle the challenges associated with assertions in QPs are:
* Measurement-based Assertions – these assertions measure the qubit under test at a certain point during the program execution. As such, the state of the qubit collapses and the subsequent execution of the program is impacted. These assertions assume that the program needs to be executed multiple times to perform statistical tests and determine the possible state of the qubit given a certain significance level (a minimal sketch of this strategy is given after the list). Examples of measurement-based assertions are Statistical Assertions <cit.>, Assertions using Swap-Tests <cit.>, and Projection-based Assertions <cit.>.
* Quantum-based Assertions – these assertions are usually called runtime or dynamic assertions and do not rely on measuring the assessed qubit. As the qubit is not measured, the program state is not impacted as the quantum state does not collapse to a classical value. The main types of quantum-based assertions defined in the literature are Runtime (Dynamic) Assertion checking <cit.>, Assertions for Memberships / Approximate Assertions, and Swap-based assertions <cit.>.
* Other types of assertions found in the literature explore particular characteristics of the circuit being tested. They are: Assertions for Symmetry States <cit.>, Nondestructive discrimination (NDD) assertions <cit.>, and Invariant and Inductive assertions <cit.>.
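A minimal sketch of a measurement-based assertion of the first kind is given below. It assumes the program specification provides the expected output distribution and that every observed outcome appears in that specification; the helper name and significance handling are ours, not taken from any of the cited tools.

from qiskit import QuantumCircuit, transpile
from qiskit_aer import Aer
from scipy.stats import chisquare

def assert_distribution(circuit, expected, shots=1000, alpha=0.01):
    # run the measured circuit many times and apply a chi-square
    # goodness-of-fit test of the counts against the specification
    backend = Aer.get_backend('aer_simulator')
    counts = backend.run(transpile(circuit, backend),
                         shots=shots).result().get_counts()
    observed = [counts.get(outcome, 0) for outcome in expected]
    reference = [p * shots for p in expected.values()]
    _, pvalue = chisquare(observed, reference)
    assert pvalue > alpha, f"distribution deviates from spec (p={pvalue:.4f})"

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
bell.measure_all()
assert_distribution(bell, {'00': 0.5, '11': 0.5})  # passes for the Bell circuit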
§.§ Bug Patterns in Quantum Programs
In terms of improving the quality of quantum programs and reducing the likelihood of producing software with defects, there have been efforts focusing on mainly two strategies: (i) developing techniques to identify and fix bugs, and (ii) studying the most common defect patterns in QP as well as their characteristics. Knowing the most recurrent bugs and how they manifest themselves in real QPs can help researchers in the development of new ways to mitigate these bugs before they occur.
Focusing on debugging QPs, Huang and Martonosi <cit.> surveyed a set of quantum computing (QC) algorithms and conducted small-scale experiments. These experiments were based on the implementation and the gradual debugging of each step of these programs. Through this process, the authors identified a set of bug types and proposed methods for mitigating them. The bug types found by the authors are Incorrect classical input parameters, Incorrect quantum initial values, Incorrect operations and transformations, Incorrect composition of operations using iteration, Incorrect deallocation of qubits, Incorrect composition of operations using mirroring, and Incorrect composition of operations using recursion. Following a similar structure, but centering their work more on static analysis and less on debugging, Zhao et al.<cit.> studied bug patterns in QPs written in Qiskit.
The authors identified eight bug types, classified into four areas: Initialization, Gate operations, Measurements, and Deallocation. The authors further define bug types for each of the areas listed above. They are Unequal Classical Bits and Qubits, Custom Gates not Recognised, Insufficient Initial Qubits, Over Repeated Measurement, Incorrect Operations after Measurement, Unsafely Uncomputation, Inappropriately Modification of Register Size, and Method measure_all.
On a more general work, Luo et al. <cit.> studied 96 real-world bugs and their fixes in four programming languages: Qiskit, Cirq, Q#, and ProjectQ. The bugs analyzed were collected from public repositories on GitHub and questions posted by programmers in Q&A websites Stack Overflow and Stack Exchange. The authors found that more than 80% of the bugs analyzed were related to the quantum-specific parts of the applications. Furthermore, all bugs categorized as having high complexity were found to be associated with quantum computing components in the programs analyzed.
§.§.§ Bug Taxonomies.
Following a similar approach as Luo et al.<cit.>, Paltenghi and Pradel <cit.> ran an empirical study and analyzed 223 bugs from 18 open-source quantum computing projects, showing that 39.9% of all analyzed bugs were related to quantum-specific parts of these platforms such as parts that represent, compile, and optimize quantum programming abstractions.
Moreover, the authors show that many of those bugs do not cause the executing program to crash, but return erroneous results, making it more challenging to identify them. The findings indicate that 9.9% of all bugs analyzed by the authors are related to quantum computing concepts and are placed in components that implement, assemble, and optimize quantum-related routines. In terms of bugs in QML frameworks, Zhao et al. <cit.> investigated 391 real-world bugs collected from 22 open-source repositories of nine popular QML frameworks. They identified that 28% of the bugs found are on quantum-specific parts of the code, underscoring the importance of developing methods to find and prevent them.
§.§.§ Bug Benchmarks.
Benchmarks are widely used in classical software testing to simplify repeated experiments and provide repeatable and objective comparisons <cit.>.
Bug benchmarks are datasets with known bugs, usually including the faulty code, a fixed version, and a test to replicate the problem. For Quantum Software, in terms of bug benchmarks, QBugs <cit.> is a framework that consists of a collection of bugs in quantum software. As quantum algorithms available for testing are scarce, QBugs' authors suggest the creation of an open-source catalog for quantum algorithms, along with a supporting infrastructure that can be used for developers and thus facilitate the execution of controlled empirical experiments.
Following a practical approach, Bugs4Q <cit.> is another bug benchmark that consists of 36 real bugs from Qiskit. These bugs were collected, validated, and made available for the community with their respective test cases for reproducing the erroneous behaviors. The idea of the framework is to keep evolving by adding new bugs with new versions of Qiskit, thus building up a reference bug database with their fixes, unit tests to reproduce buggy behavior, and an interface to access and run experiments.
§.§ Debugging Techniques
Classical debugging methods such as backtracking, cause elimination, and brute force have been explored and suggested as possible approaches for debugging quantum programs <cit.>. However, debugging quantum programs is currently a challenging problem due to the characteristics of QC such as superposition and the inability to clone quantum states. For instance, a typical debugging approach, which consists of adding print statements in the code to display intermediate values for certain variables, cannot be used in QPs due to the collapsing problem.
Although simulators help to observe quantum states for QPs running on classical computers, they are limited to small programs, as the state explosion for programs with a higher number of qubits makes it unmanageable for classical computers. Furthermore, there are challenges in the interpretation of simulation results, even for small quantum programs <cit.>, which present research opportunities in developing scalable visualizations and improving the interpretability of large-scale graphs that developers can use to inspect and better understand the intermediate states of the QPs being debugged <cit.>.
The tooling for developing and debugging quantum algorithms is still limited and has scattered features. Practitioners, for instance, tend to use different tools (such as IBM Composer <cit.> and OpenQASM <cit.>) and alternate between them while debugging quantum algorithms <cit.>.
In terms of debugging strategies for quantum algorithms, programmers may vary from coarse-grained (e.g., for quantum chemistry simulations in which the pair-wise electron interactions do not have inherent physical meanings) to fine-grained as inspecting the inner details of the intermediate subroutines, allowing one to compare intermediate results with the known expected values <cit.>.
Previous research <cit.> adapted debugging with slicing (a classical debugging technique) for QPs. The approach consists of dividing quantum circuits into smaller blocks by adding breakpoints in the form of circuit barriers and executing the blocks separately either in a simulator or a quantum computer. The barriers might have the side effect that some qubits are not used in certain slices, thus allowing the user to add a horizontal slice and separate the unused qubits from the analysis. The vertical slicing is similar to the statistical assertions, as the remaining parts of the circuit need to be simulated to allow the inspection of the intermediate states.
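On a simulator, the effect of such slices can be imitated by re-executing only the prefix of the circuit up to a chosen barrier and inspecting the statevector, which is precisely what the no-cloning theorem forbids on real hardware. A minimal sketch (our own illustration) for the running example:

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

slice1 = QuantumCircuit(2)   # first slice: everything before the CNOT
slice1.h(0)
print(Statevector(slice1))   # amplitude 1/sqrt(2) on |00> and |01>

slice2 = QuantumCircuit(2)   # second slice: up to and including the CNOT
slice2.h(0)
slice2.cx(0, 1)
print(Statevector(slice2))   # amplitude 1/sqrt(2) on |00> and |11>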
The efforts discussed in this section show that several advances have been made in the realm of testing and debugging in quantum computing. Notwithstanding, there are still gaps to tackle on the road to understanding and proposing effective techniques for the improvement of QPs. The next section is dedicated to discussing these challenges and outlining potential research directions in the testing and debugging of QPs.
§ 2030 HORIZON: EMERGING CHALLENGES AND OPPORTUNITIES
Figure <ref> shows a conceptual model representing the main testing and debugging concepts discussed until now and how they are related to each other. We will rely on Figure <ref> as a guide for the discussion throughout this section. We will mainly deal with the concepts highlighted in the figure, indicated in boldface, and with those that stem from them, which are indicated in italic.
Quantum computing is in a stage comparable to the initial days of classical computing, in which programs were created using low-level machine languages <cit.>. This lack of higher abstractions to the circuit-based model as well as the absence of quantum-specific testing techniques (unlike the adaptation of their classical counterparts) and tooling can pose extra challenges in the testing and debugging of quantum programs.
When it comes to testing techniques, the issues with handling the combinatorial explosion of the input states in the test of QPs have been well documented <cit.>. This is especially acute for pure QPs, i.e., those conceptualized to demonstrate theoretical concepts or to function as small example programs. In practice, a QP will not exist as an isolated entity, but as part of a complete solution with classical components. In this hybrid setup, the first step consists of mapping classical data to quantum data. In these approaches, the inputs of a QP are not only discrete but can also assume continuous values that go through a set of steps to be processed by a QC. Thus, mapping input states with expected outputs and developing testing techniques for these areas becomes even more complex.
Following the need to reduce the search space on the input domain, several techniques have been developed such as combinatorial testing, search-based testing, and fuzz testing. These techniques are labor intensive, require significant computational resources and effort, and do not tackle the problem completely, as in most real scenarios the input states are not discrete.
On the testing and debugging front, one of the major factors constraining future techniques is the inability to clone quantum states, which impedes the inspection of qubit states. A simple print statement, or the examination of a qubit at a breakpoint, collapses the qubit to a classical value and thereby affects the subsequent execution of the QP. Dynamic assertions can be seen as a way to overcome this limitation, albeit with restrictions: they are limited to asserting certain characteristics of the quantum state, such as identifying superposition or entanglement. While this approach has practical applications, it shares a limitation with property-based testing: it asserts properties of the quantum state rather than the actual state itself. The use of simulators in the development and debugging of QPs can help circumvent these issues, but simulators are limited to small programs due to state explosion. Consequently, a variety of classical structural testing techniques cannot be directly applied to the testing and debugging of quantum programs. Other research directions include the study of bug patterns, taxonomies, and techniques for debugging quantum programs. Although bug patterns may address known bugs and help reduce their occurrence, other bug types may not fit the existing categories. This highlights the importance of developing effective strategies to identify them and to overcome existing debugging limitations, such as the inability to directly observe intermediate quantum states. A sketch of such a statistical assertion is given below.
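To make the idea of a dynamic, statistics-based assertion concrete, the sketch below checks a superposition property from repeated (simulated) measurements via a chi-square test; it assumes Qiskit, NumPy, and SciPy, and the sampling procedure and significance level are illustrative assumptions rather than a prescribed protocol:

```python
import numpy as np
from scipy.stats import chisquare
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

rng = np.random.default_rng(seed=7)

def assert_uniform_superposition(circuit, shots=4096, alpha=0.01):
    """Statistically assert that measurement outcomes are uniformly distributed."""
    probs = Statevector.from_instruction(circuit).probabilities()
    counts = rng.multinomial(shots, probs)              # simulated measurement shots
    expected = np.full(len(probs), shots / len(probs))  # uniform-distribution hypothesis
    _, p_value = chisquare(counts, expected)
    assert p_value > alpha, f"outcomes deviate from uniform (p = {p_value:.3g})"

qc = QuantumCircuit(1)
qc.h(0)                             # |0> -> (|0> + |1>)/sqrt(2)
assert_uniform_superposition(qc)    # passes for an equal superposition
```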
When it comes to testing QML applications, it is observed that, similarly to other QPs, there are efforts to adapt classical techniques to the quantum space. The similarities between QML algorithms and neural networks make them suitable for sharing similar testing approaches. The differences we observe relate to the encoding of classical values into quantum states, as well as the mapping of the outputs produced by quantum algorithms to classification labels. Although several techniques cover both topics, dedicated testing strategies for these steps are, as far as we know, non-existent. On a broader scope, although some testing strategies exist for pure QC applications as well as for their QML counterparts, not much has been developed in terms of testing the interfaces between the classical and the quantum world.
As illustrated in Figure <ref>, QC applications exist as part of a complete application in which the quantum-specific components will execute a part of the job. Thus, exploring testing alternatives and developing ways to test these interfaces is important. Likewise, understanding bug patterns in hybrid (classical-quantum) programs and adapting classical debugging techniques to quantum programs are ongoing challenges.
In Figure <ref> we summarize the main testing approaches, categorizing the type of program being tested and the type of test created for it. In quadrant A, classical programs are tested and debugged using classical techniques. These are the traditional programming, testing, and debugging strategies without any quantum-related elements. In quadrant B, classical programs are tested and debugged with the support of quantum-centered techniques. In this quadrant, the initiatives focus on using quantum-specific phenomena to speed up classical software testing techniques. For instance, Abreu et al. <cit.> showed that it is possible to take advantage of quantum parallelism to speed up the testing of metamorphic rules. Likewise, there have been initiatives <cit.> focused on utilizing QC, more specifically Quantum Approximate Optimization Algorithms (QAOA), for test case optimization problems. Similarly, other works use quantum annealers, which are specialized quantum computers for solving combinatorial optimization problems, to tackle the test minimization problem <cit.>.
In quadrant C, a quantum program is tested and debugged with the support of classical testing and debugging techniques adapted to the context of QC. Most of the existing techniques described in this paper focus on this quadrant, as they are about adapting classical software techniques to quantum programs. Although this approach can help us understand the complexities of testing QPs, it is important to consider the use of quantum mechanics characteristics such as parallelism and interference in developing the testing and debugging techniques themselves.
Quadrant D, on the other hand, contains the least explored domain so far, which consists of quantum programs being tested and debugged with quantum-centered approaches. In this case, the tests are developed targeted to the quantum realm and leveraging quantum-specific features such as superposition, entanglement, and interference.
The interdisciplinary nature of QC poses an additional challenge to the development of quantum algorithms. On one hand, Computer Scientists have formal education in Software Engineering, Software Testing, and general programming practices, but might lack the necessary understanding of Quantum Physics <cit.> to produce significant contributions to QC. On the other hand, physicists, with a background in quantum mechanics, might lack knowledge of good software development practices <cit.>.
Some authors <cit.> already pointed out the importance of developing quantum-specific paradigms that can abstract away the complexity of working with quantum mechanics concepts.
A shift in testing approaches is expected to happen once high-level frameworks and programming languages gain traction. Platforms such as Classiq[https://www.classiq.io/https://www.classiq.io/] and Silq[https://silq.ethz.ch/https://silq.ethz.ch/] already explore the idea that QPs can be created with higher abstractions than the circuit-based model. The argument is that circuit-based quantum programming is the equivalent of creating classical circuits using logical gates such as NAND, OR, NOT, and so on. Although circuits can work for small, simple examples, they do not scale up to bigger applications. The concept of Quantum Algorithm Design <cit.> emerges as an attempt to create computer-aided design (CAD) for QPs, in which high-level functional models are created by the user and translated to quantum circuits in the background. As these macro-component approaches mature, there will be challenges in testing the interfaces between the components they propose, as well as in testing the individual components themselves. Similarly to what happened with higher-level programming languages, we expect to see new paradigms being created, as well as design, architectural, and integration patterns for computer systems with both classical and quantum components.
§ CONCLUSIONS
In recent years, Quantum Computing has emerged as a promising field due to its capabilities of solving complex problems and the developments in quantum hardware. As we approach 2030, the rising interest in quantum programming languages and frameworks underscores the importance of the study and development of specialized techniques for testing and debugging quantum programs.
In this paper, we presented an overview of the main concepts and techniques for testing and debugging quantum computing applications and illustrated their relations in a conceptual model. We presented the current challenges in the field and proposed a path forward that involves not only adapting existing practices to the quantum realm but also creating higher-level abstractions to the circuit-based programming model, along with new implementations that can leverage quantum computing's unique characteristics. Additionally, we highlighted the importance of developing approaches for testing and debugging hybrid applications, exploring the interfaces between classical and quantum components to develop more reliable and efficient applications.
|
http://arxiv.org/abs/2405.09709v1 | 20240515213616 | Spin polarization in heavy-ion collisions induced by thermal vorticity and thermal shear | [
"M. Buzzegoli"
] | nucl-th | [
"nucl-th"
] |
Spin polarization in heavy-ion collisions induced by thermal vorticity and thermal shear
M. Buzzegoli
May 15, 2024
============================================================================================================================================
§ INTRODUCTION
One of the goals of relativistic heavy-ion collisions is to study the physics of finite temperature QCD.
By measuring the momentum spectrum of particles and comparing with the theoretical predictions of momentum distributions and correlations, we can infer what happens during a collision.
Most notably, in this way it was found that the nuclear system goes through a phase of matter named quark gluon plasma (QGP) that behaves like a nearly perfect fluid.
Before the measurement of spin polarization of Λ particles in 2017 <cit.>, all the studies were based on the observation of momentum.
This paradigm started to change when it was realized <cit.> that off-central collisions should generate a large angular momentum and that particles, due to the spin-orbit coupling, should have their spin aligned along the direction of the total orbital angular momentum.
Later, describing the spin degrees of freedom as close to the local thermal equilibrium (in the same fashion as successfully done for the momenta), the spin polarization was connected to the collective motion of the plasma, and precisely to the thermal vorticity <cit.>.
The observation of global spin polarization of Λs <cit.> is in good agreement with the hydrodynamic predictions, for more detail see for instance the review <cit.>.
More recently, the measurement of the global polarization of Ξ hyperons <cit.> further confirmed the hydrodynamic picture since it showed that spin polarization in heavy-ion collisions, unlike pp collisions, is not sensitive to specific hadron properties.
Over the past decade there has been a significant development of spin physics for heavy-ion collisions,
and spin polarization has also been measured at very high <cit.> and at very low <cit.> energy.
The first measurement of spin polarization as a function of momentum, referred to as local spin polarization, was reported in <cit.> and is in disagreement with the predictions obtained from thermal vorticity. As I will show below, the agreement with the hydrodynamic picture is restored once it is realized that also other collective motions, in particular the shear flow, can induce a spin polarization. I will show how to use quantum statistical field theory to derive the connections between the spin polarization and the hydrodynamic quantities, and I will discuss the features of the shear induced polarization. I will then connect the vorticity induced polarization with the spin-orbit coupling and the related form factor.
§ SPIN POLARIZATION IN QUANTUM STATISTICAL FIELD THEORY
The spin of a massive relativistic particle in quantum mechanics is defined through the Pauli-Lubanski operator
Ŝ^μ = -1/2m ϵ^μνρσĴ_νρP̂_σ ,
where Ĵ^μν is the angular momentum-boost operator and P̂^μ is the energy-momentum operator.
The spin polarization measured in heavy-ion collisions is the average of the Pauli-Lubanski operator for a particle of given momentum p, denoted by S^μ(p),
and it is obtained as the mean value of the operator (<ref>) with the density matrix operator ρ̂, that is
S^μ(p) = tr( ρ̂ Ŝ^μ(p) ) .
It is convenient to relate the average spin vector defined in Eq. (<ref>) with the covariant Wigner functions,
which, for a the Dirac field Ψ, are defined as
W_αβ(x,p)
= ∫ d^4 y/(2π)^4 e^{-i p · y}⟨:
Ψ̅_β(x+y/2) Ψ_α(x-y/2):⟩ ,
where α and β are spinorial indices, : : denotes the normal ordering of creation and destruction operators, and the angle bracket ⟨ ⟩ denotes the trace with the density matrix operator. When the momentum p is future time-like, the Wigner function contains only the contribution from the particle part of the Dirac field, and the following traces
ℱ_+(x,p) = tr[ W(x,p) θ(p^2)θ(p^0)] , 𝒜_+^μ(x,p) = tr[ γ^μγ^5 W(x,p) θ(p^2)θ(p^0)],
are the particle part of the scalar and axial Wigner functions.
For a weakly interacting field, it can be shown <cit.> that the average spin vector can be evaluated from the integrals
of ℱ_+ and 𝒜_+^μ over a space-like hyper-surface Σ as follows
S^μ(p) = 1/2 ∫_Σ dΣ· p 𝒜_+^μ(x,p)/∫_Σ dΣ· p ℱ_+(x,p) .
Predictions for the spin polarization can be obtained from this formula if the statistical ensemble, described by a density matrix ρ̂, is known.
The covariant density matrix at the hypersurface Σ is derived by assuming that the system reached local thermodynamic equilibrium at an initial stage and by maximizing the entropy of the system.
Neglecting dissipative effects, this procedure yields <cit.>
ρ̂_LE = 1/Z exp[ -∫_Σ dΣ_μ(y) ( T̂^μν(y)
β_ν(y) - ζ(y) ĵ^μ(y) ) ],
with T̂^μν the energy-momentum tensor, ĵ^μ a conserved current, β=u/T the inverse four-temperature vector, and ζ=μ/T where μ is the chemical potential.
This operator is needed to obtain the Wigner functions (<ref>) at the point x.
Since the interaction scales are shorter than the macroscopic ones, the Wigner functions (<ref>) can be obtained
by expanding the hydrodynamic fields in the operator (<ref>) around the same point x where they are to be evaluated, that is, setting ζ=0 for simplicity,
β_ν(y) ≃β_ν(x) + ∂_λβ_ν(x) (y-x)^λ.
The resulting density matrix is
ρ̂_LE ≃ 1/Z exp[ - β_ν(x) P̂^ν
+ 1/2 ϖ_μν(x) Ĵ^μν_x - 1/2 ξ_μν(x) Q̂^μν_x
+ ⋯ ] ,
where the antisymmetric derivative of β is the thermal vorticity
ϖ_μν = -1/2( ∂_μβ_ν - ∂_νβ_μ),
and the symmetric one is the thermal shear
ξ_μν=1/2(∂_μβ_ν + ∂_νβ_μ).
Thermal vorticity couples with the conserved angular momentum operator
Ĵ^μν_x = ∫_Σ dΣ_λ[ (y-x)^μ T̂^λν(y) -
(y-x)^ν T̂^λμ(y)],
while the thermal shear couples with a non-conserved, symmetric, quadrupole-like operator
Q̂^μν_x = ∫_Σ_FO dΣ_λ[ (y-x)^μ T̂^λν(y) +
(y-x)^ν T̂^λμ(y)].
The definition of thermal vorticity (<ref>) implies that ϖ is related to the acceleration of the fluid a^μ=u·∂ u^μ, the relativistic angular velocity ω^μ=1/2ϵ^μνρσu_σ∂_ν u_ρ and to the temperature T=1/√(β^2) as follows
ϖ^μν=-1/2[∂^μ(1/T)u^ν-∂^ν(1/T)u^μ]
+ϵ^μνρσω_ρ/Tu_σ + 1/2T(a^μ u^ν - a^ν u^μ) .
While the thermal shear is expressed as
ξ^μν=1/2[∂^μ(1/T)u^ν+∂^ν(1/T)u^μ]
+ 1/2T(a^μ u^ν + a^ν u^μ) + 1/Tσ^μν + 1/3TθΔ^μν ,
where ∇_μ=∂_μ -u_μ(u·∂), Δ^μν=η^μν-u^μ u^ν, θ=∇· u and σ is the shear tensor
σ^μν = 1/2(∇^μ u^ν + ∇^ν u^μ) -1/3Δ^μνθ .
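As a consistency check (a short derivation added here for convenience; the sign of the angular-velocity term depends on the conventions for ϵ^μνρσ), both decompositions follow from expanding the derivative of β_ν = u_ν/T and splitting ∂_μ u_ν with the projector Δ^μν:

```latex
\partial_\mu \beta_\nu
  = \partial_\mu\!\Big(\frac{1}{T}\Big) u_\nu
  + \frac{1}{T}\big( u_\mu a_\nu + \nabla_\mu u_\nu \big),
\qquad
\nabla_\mu u_\nu
  = \sigma_{\mu\nu} + \frac{\theta}{3}\,\Delta_{\mu\nu}
  + \epsilon_{\mu\nu\rho\sigma}\,\omega^{\rho} u^{\sigma}.
```

Antisymmetrizing the first relation as in the definition of ϖ reproduces the temperature-gradient, angular-velocity, and acceleration terms above, while symmetrizing it reproduces ξ.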
Using the statistical operator in Eq. (<ref>) and linear response theory, the average of the spin vector (<ref>)
for a free Dirac field result in <cit.>
S^μ(p) = - ϵ^μρστ p_τ/(8m ∫_Σ dΣ· p n_F) ∫_Σ dΣ· p
n_F (1 - n_F) [ ϖ_ρσ + 2t̂_ρ p^λ/E_p ξ_λσ] ,
where t̂ is the time direction in the laboratory frame and n_F is the Fermi-Dirac phase-space distribution function:
n_F = 1/exp[β· p - μ q]+1,
where q is the charge of the particle.
This recently discovered <cit.> contribution to the spin polarization from thermal shear does not affect the predictions of global spin polarization, which are well reproduced by thermal vorticity alone, but it is the dominant contribution to local spin polarization <cit.>. Quantitative agreement with the data on local spin polarization has been found if hadronization occurs at nearly constant temperature <cit.>, as expected at the energies where the data have been taken.
Further analysis of the impact of thermal shear on spin polarization <cit.> revealed a significant dependence on the initial conditions of the hydrodynamic model.
As the theory for spin polarization does not require any additional parameter compared to the standard model of heavy-ion collisions, the measurements of local spin polarization have the potential to further constraint the initial conditions of the hydrodynamic equations and to probe the properties of the QGP, such as the shear and bulk viscosities.
§ IN MEDIUM FORM FACTORS
As mentioned, the spin polarization can be understood as a result of spin-rotation coupling.
Stating that a medium is rotating is equivalent to saying that there is an effective force causing an acceleration that results in circular motion.
The interaction of a single constituent of the medium with the overall rotation can then be studied as an inertial effect experienced by a rotating observer.
The analysis of the Dirac equation in rotating coordinates revealed <cit.> that a Dirac particle in a medium rotating with angular velocity Ω⃗ has the Hamiltonian
H = H_0 - J⃗·Ω⃗,
where J⃗=L⃗+S⃗ is the total angular momentum of the fermion and H_0 is the Hamiltonian in absence of rotation.
This is analogous to the interaction of a fermion with an external magnetic field, resulting in the Zeeman effect.
It follows that the energy is lower when the spin is aligned along the rotation, hence spin is polarized along the rotation when the system is close to equilibrium.
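As a simple equilibrium estimate (a textbook-style example added for illustration, not taken from the cited works), consider a spin-1/2 particle at rest in a medium rotating with angular velocity Ω at temperature T: the Hamiltonian (<ref>) shifts the two spin states to energies E_0 ∓ Ω/2, and the Boltzmann weights yield the polarization

```latex
P \;=\; 2\langle S_z \rangle
  \;=\; \frac{e^{\Omega/2T} - e^{-\Omega/2T}}{e^{\Omega/2T} + e^{-\Omega/2T}}
  \;=\; \tanh\!\Big(\frac{\Omega}{2T}\Big)
  \;\simeq\; \frac{\Omega}{2T} \qquad (\Omega \ll T),
```

which exhibits the linear dependence on the rotation that underlies the thermal-vorticity contribution discussed above.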
More generally, we can describe the interaction with a weak gravitational field through the Lagrangian
ℒ_int = 1/2(g_μν-η_μν)T^μν,
which reproduces the Hamiltonian (<ref>) if we choose g as the metric of rotating coordinates.
Using scattering theory, the coupling between spin and rotation can be studied by looking at the matrix elements of the energy-momentum tensor,
that can be decomposed in terms of the form factors f_1 and g_Ω as follows
⟨ p', s'| T^μν(0)|p, s⟩ = u̅(p',s')[ f_1(q^2) P^μ P^ν/m+g_Ω(q^2)σ^(μαq_α P^ν)/2m
+ O(q^2)] u(p,s),
where |p, s⟩ is a one particle state with momentum p and spin s, u(p,s) and u̅(p',s') are spinors, q=p'-p and P=(p+p')/2.
The form factor g_Ω is also called the gravitomagnetic moment, and it can be shown <cit.> that it describes exactly the spin-rotation coupling in (<ref>).
It has long been known that, unlike other form factors such as the magnetic moment, g_Ω cannot receive radiative corrections and must always be g_Ω=1; otherwise the theory is not Lorentz covariant and would violate the Einstein Equivalence Principle (EEP) <cit.>.
This restriction is lifted when considering a system in thermal equilibrium. Indeed, the presence of a thermal bath breaks Lorentz invariance, and as a result finite-temperature field theory violates the EEP <cit.>. As a consequence, the matrix elements of an operator in a medium can have additional structure and additional form factors <cit.>. In addition to the four-vectors P and q, matrix elements can also be given in terms of u, which denotes the fluid velocity, and of the vector l^μ=ϵ^μνρσu_ν P_ρ q_σ. The energy-momentum tensor is then decomposed as
⟨p',s'|T̂_μν(0) |p,s⟩ = u̅(p',s'){ I_Pγ(P,q)(P_μγ_ν + P_νγ_μ)
+ I_uγ(P,q)(u_μγ_ν + u_νγ_μ)
+ I_Pl(P,q) l̂(P_μl̂_ν + P_νl̂_μ)
+ I_ul(P,q) l̂(u_μl̂_ν + u_νl̂_μ) } u(p,s) + ⋯,
where l̂^μ=l^μ/√(-l^2) and the in-medium spin-rotation coupling is <cit.>
g_Ω(P,q)=4(I_Pγ(P,q) + I_uγ(P,q)/(P· u) -I_Pl(P,q) -I_ul(P,q)/(P· u)).
Explicit calculations in finite temperature QED <cit.>, revealed that indeed the gravitomagnetic moment receives radiative corrections,
for instance, in the low temperature limit T ≪ m, the 1-loop gravitomagnetic moment is
lim_{q → 0, P^2 → (P·u)^2} g_Ω = 1 - 1/6 e^2 T^2/m^2 .
In <cit.> it was shown that g_Ω is directly connected to the axial vortical effect, that is the generation of an axial current induced by the rotation of the medium.
Here I use the same method to show how spin-rotation coupling is related to the spin polarization (<ref>).
In order to apply the formula (<ref>), I first need to compute the Wigner function in a system in thermal equilibrium with thermal vorticity.
Such a system is described by the density matrix in Eq. (<ref>).
The contribution of thermal vorticity ϖ to the Wigner functions (<ref>), obtained using linear response theory to the density matrix in Eq. (<ref>), is
Δ_ϖ W_ab^+(x,k) = -∫_0^1 dz ∫_Σ dΣ_λ(y) ϖ_ρκ (y-x)^κ ⟨ Ŵ_ab^+(x,k) T̂^λρ(y + i z β(x)) ⟩_c,β,
where the bracket ⟨ ⟩_c,β denotes the connected correlator traced with the density matrix in absence of thermal vorticity
ρ̂_β = 1/Z_β exp[ - β(x)· P̂ - ζ(x) Q̂ ].
To obtain the spin vector (<ref>) we need the axial part of (<ref>), which, according to the definition (<ref>), is
Δ_ϖ𝒜_+^μ(x,k) = -∫ d^4 s/(2π)^4 ∫_0^1 dz ∫_Σ dΣ_λ(y) ϖ_ρκ (y-x)^κ e^{-i k· s}
×⟨ψ̅^+(x+s/2)γ^μγ^5ψ^+(x-s/2) T̂^λρ(y + i z β(x))⟩_c,β .
Expanding the operators inside the correlators in terms of multi-particle states, see <cit.>, the leading contribution of the Wigner function is given in terms of the matrix elements of the axial current
⟨ q',τ| ĵ_A^μ(0)|q,τ⟩ = u̅_τ(q') A^μ(q,q') u_τ(q)
= u̅_τ(q') [ F_A1 γ^μγ^5 + F_A2 (q'-q)^μ/2m γ^5 ] u_τ(q),
and of the energy momentum tensor:
⟨ q',τ| T̂^μν(0)|q,τ⟩ = u̅_τ(q') M^μν((q'+q)/2, q'-q) u_τ(q).
As the system is supposed to be in the hydrodynamic regime, we have a separation of scales between microscopic and macroscopic interactions.
In this approximation, the short-distance interactions are taken into account in the matrix elements (<ref>) and (<ref>) which take place at finite temperature.
The long-distance correlations are taken into account in the thermal expectation values of the creation and annihilation operators
⟨a_τ'^†(q')a_τ(q)⟩ =δ_τ,τ'δ^3(q⃗-q⃗')n_F(β· q-ζ).
One eventually obtains that
Δ_ϖ𝒜_+^μ(x,k) = F_A1(k,0) ϖ_κρ u_λ Δ^κ_ κ' θ(k_0)/2ε_k δ(k^2-m^2)/(2π)^3 n_F(k)(1-n_F(k))
×{∂/∂ p'_κ'[ M^λρ((p'+k)/2, p'-k)
(k̸+m)γ^μγ^5(p̸'+m)]}_{p'=k},
where Δ^κ_κ'=η^κ_κ'-u^κ u_κ'.
Finally, plugging the Eq. (<ref>) in Eq. (<ref>) and tracing gives the axial Wigner function induced by thermal vorticity:
Δ_ϖ𝒜_+^μ(x,k) = - g_Ω(k,0)F_A1(k,0) θ(k_0)δ(k^2-m^2)/(2π)^3 n_F(k)(1-n_F(k))ϵ^μνστϖ_νσk_τ .
The spin polarization of a free field, using g_Ω(k,0)=1, F_A1(k,0)=1, and applying the formula (<ref>), is the same as Eq. (<ref>).
For an interacting field, the form factor g_Ω can receive radiative corrections, for instance like in Eq. (<ref>), and the spin polarization induced by thermal vorticity can also receive radiative corrections.
§ SUMMARY AND OUTLOOK
I showed how spin polarization in a medium arises from the flow of the plasma and discussed how it depends on the properties of such a medium.
I also showed how the spin polarization contains information about the gravitational form factor related to the spin-rotation coupling.
Therefore, measurements of the spin polarization in heavy-ion collisions provide a new tool to probe the properties of QCD matter at finite temperature.
In addition to probing the shear and vortical flow of the fluid and testing the dynamics of spin, several ideas on how to use spin polarization to investigate the QGP have already been proposed.
For instance, spin polarization can reveal local parity violations in QCD <cit.>, it can probe the presence of the critical point <cit.>, and it can be used to estimate the energy loss of a jet traversing the medium <cit.>.
Acknowledgments. M.B. thanks the U.S. Department of Energy for the support through the Grant No. DE-SC0023692.
STAR:2017ckg
STAR collaboration, Global Λ hyperon polarization in
nuclear collisions: evidence for the most vortical fluid,
https://doi.org/10.1038/nature23004Nature 548
(2017) 62 [https://arxiv.org/abs/1701.066571701.06657].
Liang:2004ph
Z.-T. Liang and X.-N. Wang, Globally polarized quark-gluon plasma in
non-central A+A collisions,
https://doi.org/10.1103/PhysRevLett.94.102301Phys. Rev. Lett.
94 (2005) 102301
[https://arxiv.org/abs/nucl-th/0410079nucl-th/0410079].
Voloshin:2004ha
S.A. Voloshin, Polarized secondary particles in unpolarized high energy
hadron-hadron collisions?,
https://arxiv.org/abs/nucl-th/0410089nucl-th/0410089.
Becattini:2007sr
F. Becattini, F. Piccinini and J. Rizzo, Angular momentum conservation
in heavy ion collisions at very high energy,
https://doi.org/10.1103/PhysRevC.77.024906Phys. Rev. C
77 (2008) 024906
[https://arxiv.org/abs/0711.12530711.1253].
Becattini:2007nd
F. Becattini and F. Piccinini, The Ideal relativistic spinning gas:
Polarization and spectra,
https://doi.org/10.1016/j.aop.2008.01.001Annals Phys.
323 (2008) 2452
[https://arxiv.org/abs/0710.56940710.5694].
Becattini:2013fla
F. Becattini, V. Chandra, L. Del Zanna and E. Grossi, Relativistic
distribution function for particles with spin at local thermodynamical
equilibrium, https://doi.org/10.1016/j.aop.2013.07.004Annals
Phys. 338 (2013) 32
[https://arxiv.org/abs/1303.34311303.3431].
Becattini:2020ngo
F. Becattini and M.A. Lisa, Polarization and Vorticity in the
Quark–Gluon Plasma,
https://doi.org/10.1146/annurev-nucl-021920-095245Ann. Rev.
Nucl. Part. Sci. 70 (2020) 395
[https://arxiv.org/abs/2003.036402003.03640].
STAR:2020xbm
STAR collaboration, Global Polarization of Ξ and Ω
Hyperons in Au+Au Collisions at √(s_NN) = 200 GeV,
https://doi.org/10.1103/PhysRevLett.126.162301Phys. Rev. Lett.
126 (2021) 162301
[https://arxiv.org/abs/2012.136012012.13601].
ALICE:2019onw
ALICE collaboration, Global polarization of Λ and Λ̅ hyperons in Pb-Pb collisions at √(s_NN) = 2.76 and 5.02
TeV, https://doi.org/10.1103/PhysRevC.101.044611Phys. Rev. C
101 (2020) 044611
[https://arxiv.org/abs/1909.012811909.01281].
ALICE:2021pzu
ALICE collaboration, Polarization of Λ and Λ̅ Hyperons along the Beam Direction in Pb-Pb Collisions at √(s_NN)=5.02 TeV,
https://doi.org/10.1103/PhysRevLett.128.172005Phys. Rev. Lett.
128 (2022) 172005
[https://arxiv.org/abs/2107.111832107.11183].
STAR:2021beb
STAR collaboration, Global Λ-hyperon polarization in
Au+Au collisions at √(s_NN)=3 GeV,
https://doi.org/10.1103/PhysRevC.104.L061901Phys. Rev. C
104 L061901 [https://arxiv.org/abs/2108.000442108.00044].
Kornas:2020qzi
HADES collaboration, Λ Polarization in Au+Au
Collisions at √(s)_NN = 2.4 GeV Measured with HADES,
https://doi.org/10.1007/978-3-030-53448-6_68Springer Proc.
Phys. 250 (2020) 435.
STAR:2019erd
STAR collaboration, Polarization of Λ
(Λ̅) hyperons along the beam direction in Au+Au collisions at
√(s__NN) = 200 GeV,
https://doi.org/10.1103/PhysRevLett.123.132301Phys. Rev. Lett.
123 (2019) 132301
[https://arxiv.org/abs/1905.119171905.11917].
Becattini:2020sww
F. Becattini, Polarization in relativistic fluids: a quantum field
theoretical derivation,
https://doi.org/10.1007/978-3-030-71427-7_2Lect. Notes Phys.
987 (2021) 15
[https://arxiv.org/abs/2004.040502004.04050].
Becattini:2019dxo
F. Becattini, M. Buzzegoli and E. Grossi, Reworking the Zubarev's
approach to non-equilibrium quantum statistical mechanics,
https://doi.org/10.3390/particles2020014Particles 2 (2019) 197 [https://arxiv.org/abs/1902.010891902.01089].
Becattini:2021suc
F. Becattini, M. Buzzegoli and A. Palermo, Spin-thermal shear coupling
in a relativistic fluid,
https://doi.org/10.1016/j.physletb.2021.136519Phys. Lett. B
820 (2021) 136519
[https://arxiv.org/abs/2103.109172103.10917].
Liu:2021uhn
S.Y.F. Liu and Y. Yin, Spin polarization induced by the hydrodynamic
gradients, https://doi.org/10.1007/JHEP07(2021)188JHEP
07 (2021) 188
[https://arxiv.org/abs/2103.092002103.09200].
Fu:2021pok
B. Fu, S.Y.F. Liu, L. Pang, H. Song and Y. Yin, Shear-Induced Spin
Polarization in Heavy-Ion Collisions,
https://doi.org/10.1103/PhysRevLett.127.142301Phys. Rev. Lett.
127 (2021) 142301
[https://arxiv.org/abs/2103.104032103.10403].
Becattini:2021iol
F. Becattini, M. Buzzegoli, G. Inghirami, I. Karpenko and A. Palermo,
Local Polarization and Isothermal Local Equilibrium in Relativistic
Heavy Ion Collisions,
https://doi.org/10.1103/PhysRevLett.127.272302Phys. Rev. Lett.
127 (2021) 272302
[https://arxiv.org/abs/2103.146212103.14621].
Ryu:2021lnx
S. Ryu, V. Jupic and C. Shen, Probing early-time longitudinal dynamics
with the hyperon's spin polarization in relativistic
heavy-ion collisions,
https://doi.org/10.1103/PhysRevC.104.054908Phys. Rev. C
104 (2021) 054908
[https://arxiv.org/abs/2106.081252106.08125].
Alzhrani:2022dpi
S. Alzhrani, S. Ryu and C. Shen, Λ spin polarization
in event-by-event relativistic heavy-ion collisions,
https://doi.org/10.1103/PhysRevC.106.014905Phys. Rev. C
106 (2022) 014905
[https://arxiv.org/abs/2203.157182203.15718].
Wu:2022mkr
X.-Y. Wu, C. Yi, G.-Y. Qin and S. Pu, Local and global polarization of
hyperons across RHIC-BES energies: The roles of spin
hall effect, initial condition, and baryon diffusion,
https://doi.org/10.1103/PhysRevC.105.064909Phys. Rev. C
105 (2022) 064909
[https://arxiv.org/abs/2204.022182204.02218].
deOliveira:1962apw
C.G. de Oliveira and J. Tiomno, Representations of Dirac equation in
general relativity, https://doi.org/10.1007/BF02816716Nuovo
Cim. 24 (1962) 672.
Hehl:1990nf
F.W. Hehl and W.-T. Ni, Inertial effects of a Dirac particle,
https://doi.org/10.1103/PhysRevD.42.2045Phys. Rev. D
42 (1990) 2045.
Buzzegoli:2021jeh
M. Buzzegoli and D.E. Kharzeev, Anomalous gravitomagnetic moment and
nonuniversality of the axial vortical effect at finite temperature,
https://doi.org/10.1103/PhysRevD.103.116005Phys. Rev. D
103 (2021) 116005
[https://arxiv.org/abs/2102.016762102.01676].
Kobzarev:1962wt
I.Y. Kobzarev and L.B. Okun, Gravitational interaction of fermions,
Zh. Eksp. Teor. Fiz. 43 (1962) 1904.
Cho:1976de
C.F. Cho and N.D. Hari Dass, Equivalence Principle, Stress Tensor and
Long Range Behavior of Gravitational Interactions,
https://doi.org/10.1103/PhysRevD.14.2511Phys. Rev. D
14 (1976) 2511.
Teryaev:2016edw
O.V. Teryaev, Gravitational form factors and nucleon spin structure,
https://doi.org/10.1007/s11467-016-0573-6Front. Phys.
(Beijing) 11 (2016) 111207.
Donoghue:1984zs
J.F. Donoghue, B.R. Holstein and R.W. Robinett, Renormalization of the
Energy Momentum Tensor and the Validity of the Equivalence Principle at
Finite Temperature,
https://doi.org/10.1103/PhysRevD.30.2561Phys. Rev. D
30 (1984) 2561.
Donoghue:1984ga
J.F. Donoghue, B.R. Holstein and R.W. Robinett, The Principle of
Equivalence at Finite Temperature,
https://doi.org/10.1007/BF00760243Gen. Rel. Grav. 17 (1985) 207.
Lin:2023ass
S. Lin and T. Jia-Yuan, Medium correction to gravitational form
factors, https://doi.org/10.7498/aps.72.20222473Acta Phys.
Sin. 72 (2023) 071201
[https://arxiv.org/abs/2302.124502302.12450].
Du:2008zzb
F. Du, L.E. Finch and J. Sandweiss, Observing spontaneous strong CP
violation through hyperon helicity correlations,
https://doi.org/10.1103/PhysRevC.78.044908Phys. Rev. C
78 (2008) 044908.
Becattini:2020xbh
F. Becattini, M. Buzzegoli, A. Palermo and G. Prokhorov, Polarization as
a signature of local parity violation in hot QCD matter,
https://doi.org/10.1016/j.physletb.2021.136706Phys. Lett. B
822 (2021) 136706
[https://arxiv.org/abs/2009.134492009.13449].
Singh:2021yba
S.K. Singh and J.-e. Alam, Suppression of spin polarization as an
indicator of QCD critical point,
https://doi.org/10.1140/epjc/s10052-023-11776-5Eur. Phys. J. C
83 (2023) 585
[https://arxiv.org/abs/2110.156042110.15604].
Serenone:2021zef
W.M. Serenone, J.G.P. Barbon, D.D. Chinellato, M.A. Lisa, C. Shen,
J. Takahashi et al., Λ polarization from
thermalized jet energy,
https://doi.org/10.1016/j.physletb.2021.136500Phys. Lett. B
820 (2021) 136500
[https://arxiv.org/abs/2102.119192102.11919].
Ribeiro:2023waz
V.H. Ribeiro, D. Dobrigkeit Chinellato, M.A. Lisa, W. Matioli Serenone,
C. Shen, J. Takahashi et al., Λ polarization from vortex ring
as medium response for jet thermalization,
https://arxiv.org/abs/2305.024282305.02428.
|
http://arxiv.org/abs/2405.08658v1 | 20240514143535 | Beyond the Black Box: Do More Complex Models Provide Superior XAI Explanations? | [
"Mateusz Cedro",
"Marcin Chlebus"
] | eess.IV | [
"eess.IV",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Beyond the Black Box: Do More Complex Models Provide Superior XAI Explanations?
Mateusz Cedro, Marcin Chlebus
May 14, 2024
==================================================================
The increasing complexity of Artificial Intelligence models poses challenges to interpretability, particularly in the healthcare sector. This study investigates the impact of deep learning model complexity on Explainable AI (XAI) efficacy, utilizing four ResNet architectures (ResNet-18, 34, 50, 101). Through methodical experimentation on 4,369 lung X-ray images of COVID-19-infected and healthy patients, the research evaluates the models' classification performance and the relevance of the corresponding XAI explanations with respect to the ground-truth disease masks. Results indicate that an increase in model complexity is associated with a decrease in classification accuracy and AUC-ROC scores (ResNet-18: 98.4%, 0.997; ResNet-101: 95.9%, 0.988). Notably, in eleven out of twelve statistical tests performed, no statistically significant differences occurred between the XAI quantitative metrics - Relevance Rank Accuracy and the proposed Positive Attribution Ratio - across the trained models. These results suggest that increased model complexity does not consistently lead to higher performance or more relevant explanations of the models' decision-making processes.
§ INTRODUCTION
Deep Neural Networks (DNNs) have garnered substantial success across diverse domains of Artificial Intelligence (AI) applications. Nonetheless, the opacity of their decision-making processes presents considerable challenges, particularly in sensitive sectors such as healthcare where transparency is essential <cit.>. Efforts to reconcile the trade-off between model's accuracy and interpretability have led to the development of methods to trace predictions back to input features, enhancing model transparency <cit.>.
In the medical domain, the demand for interpretable models is heightened due to the life-or-death implications of decisions made <cit.>. Despite the intrinsic complexity of accurate machine learning models, medical experts require comprehensible insights into how specific features influence predictions <cit.>.
Artificial neural networks, particularly DNNs, are designed to emulate the complexity of biological systems, resulting in architectures that are not inherently transparent, thus casting these models as "black-box" devices <cit.>. Consequently, enhancing model explainability is a key factor influencing the adoption of machine learning models in sensitive applications <cit.>.
§.§ Interpretable and Explainable AI
The fields of Interpretable Machine Learning (IML) and Explainable Artificial Intelligence (XAI) have risen in response to the need for transparency in deep learning models, attracting significant attention in machine learning research <cit.>. Interpretability is defined by the ability of a machine learning system to make its processes and decisions understandable to humans. The quality of these explanations is crucial for model evaluation, validation, and debugging, and it's evaluated on the clarity of the model's decision-making process, not merely on prediction accuracy <cit.>.
Interpretability is examined from two perspectives: global and local. Global interpretability provides an overarching understanding of the model's functioning and decision patterns, while local interpretability focuses on the specific rationale behind individual predictions. Both levels of interpretability are essential, serving varied purposes from enhancing scientific understanding to identifying biases and substantiating individual decisions <cit.>.
The challenge of explaining how neural networks arrive at their predictions is central to XAI, where the goal is to map and quantify the influence of each input feature on the final decision. This is particularly valuable in medical settings where healthcare professionals benefit from understanding the model's reasoning <cit.>. Despite their capabilities, many machine learning models remain opaque, acting as "black boxes" without revealing the underlying logic guiding their decisions.
The attribution of a deep network's predictions to its input features has been identified as a fundamental challenge <cit.>. This attribution is represented as a vector, quantifying each input feature's contribution to the network's prediction, thereby clarifying the decision-making process, particularly beneficial for clinical experts in understanding the strengths and limitations of the model <cit.>.
Techniques such as DeepLIFT <cit.>, Layerwise Relevance Propagation <cit.>, LIME <cit.>, and Integrated Gradients <cit.> have been developed to unravel these decisions, breaking down the contributions of individual neurons to input features and thus advancing model interpretability. Moreover, in the GradientShap methodology, SHAP values (SHapley Additive exPlanations), have been adopted to attribute importance to each feature in a prediction, grounded in cooperative game theory, enhancing the interpretability of model features <cit.>.
Further refinements in interpretability methods, such as sensitivity and saliency maps, highlight influential image regions. Methods such as SmoothGrad and NoiseGrad have improved these techniques, reducing visual noise and integrating stochastic elements into models, improving both local and global interpretive clarity <cit.>.
Evaluating feature significance within Explainable AI is essential for model optimization and establishing trust in model predictions <cit.>. Challenges arise from the lack of universally accepted interpretability standards and the complexity involved in selecting and configuring appropriate interpretability methods <cit.>.
Evaluating feature significance often involves analyzing the effects of feature removal on model performance. This method, while effective, can alter the evaluation data distribution and thus potentially compromise the assessment's validity <cit.>. With the growing necessity for transparent AI, objective and reproducible evaluation metrics are increasingly important <cit.>. One of the comprehensive frameworks for assessing the quality of the model explanations is Quantus <cit.>. This framework provides a comprehensive set of tools for accurate assessment of explanations and follows a transparent and impartial validation process for various XAI methodologies.
§.§ AI and XAI in Medicine
The integration of AI into healthcare is a strategic initiative aimed at personalizing patient treatment by harnessing the analytical prowess of AI to process and interpret large-scale clinical datasets <cit.>. Deep learning architectures, capable of sifting through extensive data such as hundreds of thousands of labelled X-ray images, are particularly instrumental in this shift from traditional rule-based diagnostics to a more nuanced, data-driven approach. This transition necessitates a framework within which the complex outputs of these models can be understood and trusted by medical professionals, a need met by the emerging field of Explainable AI <cit.>.
XAI in medicine not only aims to unravel the decision-making processes of deep learning models but also strives to validate the reliability of AI-supported recommendations. The overarching goal is to establish a symbiotic relationship where AI systems are not merely tools for data extrapolation but partners in clinical decision-making, providing transparent and interpretable explanations that foster trust and facilitate informed medical judgments <cit.>.
AI carries transformative economic implications, necessitating a balance between peak performance and operational efficiency <cit.>. The deepening of neural networks, while advancing capabilities, approaches a threshold beyond which additional layers yield minimal performance gains, as identified by <cit.> and <cit.>. Contemporary advancements in complex architectural designs, such as Generative Pre-trained Transformers (GPT), have brought to the fore the significant financial and environmental costs inherent in the training processes. This development necessitates a judicious equilibrium between the advantages conferred by AI and the consumption of resources it entails <cit.>.
In healthcare, the role of AI is especially important as it offers the dual benefits of cost reduction and enhanced patient care. However, the adoption of AI must consider not just technological prowess but also the practicalities of application <cit.>. This balance is essential in ensuring that AI's integration into healthcare remains both efficient and beneficial, providing clear, interpretable outcomes that align with the overarching goals of medical practice.
The COVID-19 pandemic has accelerated the application of deep learning computer vision models in medical diagnostics, as the disease can be identified on X-ray images of infected patients' lungs <cit.>. Studies utilizing Residual Network (ResNet) architectures <cit.> on COVID-19 datasets have yielded promising results, underscoring the potential of deep learning in aiding pandemic response <cit.>. Furthermore, the use of saliency maps in medical image segmentation has provided visual explanations that enhance the interpretability of model predictions, essential for medical diagnostics <cit.>.
§.§ Influence of Model Scale on Performance and XAI Evaluations
In the field of machine learning, there is a common hypothesis that an increase in model capacity should correlate with enhanced training efficacy <cit.>. Nonetheless, this correlation is not absolute, as studies have shown variable performance benefits with the scaling of model complexity, particularly in ResNet architectures <cit.>. While deeper neural networks such as ResNet-50, which is a 50-layer Convolutional Neural Network (CNN), have demonstrated improvements in specific tasks, these models do not universally outperform across all scenarios, with instances where less complex models like ResNet-18, an 18-layer CNN, match or exceed the accuracy of their larger counterparts <cit.>.
The concept of diminishing returns becomes evident as network complexity increases beyond a certain threshold, resulting in marginal performance enhancements that do not justify the additional complexity <cit.>. In particular, ResNet-18 has been noted for its competitive performance against more elaborate models in certain classification tasks, prompting a reevaluation of the efficacy of scaling up network depth <cit.>. These observations underscore the imperative for a strategic approach in model selection that weighs computational efficiency against the specific performance requirements of the given task, thereby optimizing the balance between model architecture size and functional output.
To the best of our knowledge, no previous research has explored the relationship between the complexity of deep learning model architectures and the quality of XAI explanations. Our study is the first to address the problem in the literature by conducting experiments to investigate this relationship. In sectors where transparency is paramount, such as healthcare, understanding how architectural complexities affect both the model performance and the quality of XAI explanations is crucial. By conducting methodical experiments, this study aims to gain in-depth insight into the relationship between the complexity of deep learning models and the greatest possible interpretability, ultimately aiming to increase the accuracy and reliability of XAI explanations. Therefore, this study proposed two hypotheses.
Hypothesis 1: As the model's complexity increases, characterized by a greater number of trainable parameters, it exhibits better classification performance.
Hypothesis 2: As the model's complexity increases, characterized by a greater number of trainable parameters, XAI assessment indicators are anticipated to yield inferior results, indicating an increased challenge in explaining the underlying decision-making process.
§ METHODOLOGY
To answer the underlying question of whether more complex architectures provide better explainability in image classification tasks, the same workflow was employed for all of the trained ResNet models (ResNet-18, ResNet-34, ResNet-50, and ResNet-101). Initially, each ResNet model was trained from scratch, utilizing a consistent subset of randomly assigned images and identical model hyper-parameters to ensure equitable training conditions across all architectures.
After the training phase, a focused exploration into model explainability was undertaken by generating XAI explanations for each trained model, employing the Quantus library <cit.>. Three XAI methodologies were leveraged: Saliency Maps <cit.>, GradientShap <cit.>, and Integrated Gradients <cit.>, each providing distinct perspectives into model decision-making processes.
The derived explanations were then subjected to a quantitative evaluation utilizing two pertinent metrics: Relevance Rank Accuracy <cit.> and the Positive Attribution Ratio proposed in this paper, providing insight into the reliability and interpretability of the explanations produced by each model. With this approach, the experiment provides a clear evaluation of the models' behaviour in the conducted image classification task.
§.§ Data
The dataset used in the following experiment was the COVID-QU-Ex dataset formulated by researchers from Qatar University and the University of Dhaka, which is a collection of the X-ray lung images obtained from various resources <cit.>. The dataset contains three groups of X-rays: COVID-19 pneumonia, other diseases (non-covid), and healthy patients' lungs. For the X-rays from COVID-QU-Ex, corresponding ground-truth masks from the QaTa-COV19 dataset were used. QaTa-COV19 dataset was developed by Qatar University and Tampere University which provides binary segmentation masks of COVID-19 pneumonia <cit.>.
For the following experiment, 4,369 X-ray lung images of different patients and corresponding ground-truth masks were used: 2,913 images labelled as COVID-19-infected and 1,456 as healthy, non-infected patients. The images were randomly split into training, validation, and testing subsets of 70%, 20%, and 10%, respectively.
Before training, all the images were resized to the size of 224x224 pixels, turned into grayscale, transformed into tensors, and normalized. Transformations were done with the use of PyTorch's Torchvision library <cit.>.
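A minimal sketch of the preprocessing pipeline described above, assuming Torchvision's transforms API; the normalization statistics shown are placeholders, not the values used in the experiment:

```python
from torchvision import transforms

# Resize, convert to grayscale, cast to tensor, and normalize each X-ray.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),                        # scales pixel values to [0, 1]
    transforms.Normalize(mean=[0.5], std=[0.5]),  # placeholder statistics
])
```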
§.§ Models
In our experiment four ResNets architectures were explored, each distinguished by its depth: ResNet-18, ResNet-34, ResNet-50, and ResNet-101, where the suffix indicates the respective number of layers in the CNN models <cit.>. Recognized for effectively addressing challenges in training deep networks for image classification tasks, these architectures were selected to probe the relationship between network depth and performance. Figure <ref> shows a building block containing the residual connection that provides an identity input to every other layer, which became a state-of-the-art building block of deep learning architectures. Figure <ref> presents a ResNet-34 architecture in comparison with plain 34-layer deep learning architecture <cit.>.
We did not consider using pre-trained ResNet models because they are primarily trained on the ImageNet <cit.> dataset, which consists of natural images. These differ significantly from medical lung X-ray images and hence would not improve the performance of medical image classification <cit.>. Additionally, we did not use the pre-trained CheXNet <cit.> model, which was trained on over 100,000 frontal-view X-ray images covering 14 diseases, because it is a fixed-size model based on DenseNet-121 <cit.>, and there is no possibility of comparing XAI explanations across CheXNet models, since no pre-trained CheXNet models based on a different number of DenseNet layers exist.
Baseline performance was established using ResNet-18 and ResNet-34, which were chosen for their balance of predictive power and computational efficiency. In contrast, ResNet-50 and ResNet-101 were scrutinized for potential accuracy improvements, despite their increased computational costs. A uniform training and testing process was applied to all models to ensure a fair comparison, and the trade-offs between model size, computational demand, and predictive accuracy were elucidated in the context of our research.
In Table <ref>, the number of trainable parameters for all ResNet models is presented. Each consequent ResNet model has approximately double the number of the trainable parameters of the former model.
§.§ Model Training Setup
In our research all ResNet architectures were trained from scratch for the image classification tasks. From the original dataset, the X-rays labelled as other diseases (non-covid) were excluded, leaving a dataset categorized into two label groups: COVID-19 and healthy. Under the aforementioned approach, all models conducted binary classification tasks.
A concerted approach was employed to ensure coherent training, validation, and testing of all models, with the images being randomly partitioned into subsets comprising 70%, 20%, and 10% of the data. In total, the dataset contained 4,369 images, subdivided into 2,913 COVID-19 and 1,456 healthy images, respectively.
The models were developed using the PyTorch library <cit.> and utilized a Cross-Entropy Loss criterion. This criterion computes the cross-entropy loss between predicted and target class labels, facilitating the models' learning from the logits.
The optimization of the model parameters was undertaken using Stochastic Gradient Descent <cit.> with a learning rate of 0.001 and momentum of 0.9. All models were trained for 50 epochs with a batch size of 64 to gauge their efficacy in distinguishing between the defined label groups under consistent hyper-parameter settings <cit.>. Although each model was trained for 50 epochs, the final evaluation on the test set was conducted using the best-performing model checkpoint, selected based on the lowest validation loss encountered during training.
Ensuring experimental reproducibility and consistency across all training sessions, the random seeds for PyTorch and NumPy were fixed at a value of 42 <cit.>.
All model training sessions and subsequent Explainable AI analyses were conducted utilizing the Nvidia A100 GPU with 40 GB of RAM capacity.
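A minimal sketch of this training configuration, assuming Torchvision model constructors; the single optimization step on a dummy batch stands in for the full loop, and the three-channel input (rather than an adapted single-channel first convolution) is an illustrative simplification:

```python
import torch
from torch import nn, optim
from torchvision.models import resnet18

# Fix the random seed for reproducibility, as in the experiment.
torch.manual_seed(42)

# ResNet-18 trained from scratch for binary classification (COVID-19 vs. healthy).
model = resnet18(weights=None, num_classes=2)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Dummy batch standing in for one training step (batch size 64, 3x224x224 inputs).
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```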
§.§ Gradient-based Techniques
In the field of machine and deep learning, gradients are defined as the rate of change of the output with respect to the input and are acknowledged for their importance in model optimization. Traditionally, practitioners have examined the product of model coefficients with feature values to interpret simpler, usually linear, models. In deep neural networks, gradients are perceived as intrinsic coefficients, signifying the intricate connection between input and output <cit.>. With advancements in research, gradient-based techniques have been introduced in the field of XAI, enabling a more profound interpretation of model behaviour and of a given prediction.
In this section, three gradient-based methods are outlined, specifically Saliency Maps <cit.>, GradientShap <cit.>, and Integrated Gradients <cit.>. In the Saliency Maps method, the derivative of the class score with respect to the input image is calculated, identifying pixels that, when slightly altered, are found to have the most significant influence on the class score. Subsequently, the GradientShap method synthesizes Shapley values and gradients, further enhancing the understanding of model predictions. Lastly, the Integrated Gradients method is presented, wherein the path integration between input and output is detailed, providing a comprehensive attribution explanation.
A comprehensive examination of these gradient-based methodologies is undertaken in this chapter, highlighting their roles in augmenting the interpretability and transparency of deep neural architectures.
X-rays of both healthy and COVID-19-infected lungs, along with their respective ground-truth masks and pixel attribution maps, are presented in Figure <ref> and Figure <ref>.
§.§.§ Saliency Maps
One of the pioneering methodologies in the field of XAI is the saliency map <cit.>, which delineates the significance of specific input components, such as image pixels, with respect to the observed empirical relationships.
Given the inherent nonlinearity of models with complex architectures, straightforward interpretations become elusive <cit.>. In this context, saliency maps serve as an instrumental visualization mechanism, highlighting regions within the image that exhibit strong correlations with distinct tasks. By employing this technique, a transition from the high-dimensional input data space to a substantially reduced vector of projections is facilitated. This process inherently involves extensive weight sharing, reflected in the associations among weights connecting the input and hidden layers of feed-forward neural network architectures such as CNNs. The saliency attributed to an input channel (for instance, pixel i of an image vector) is quantified by the noticeable alteration in the cost function upon its exclusion.
In the research presented in Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps <cit.>, a gradient-based technique was introduced to compute an image-specific class saliency map, tailored to a distinct image and class combination. This method was harnessed using classification ConvNets and was designed to identify and highlight the spatial significance of a specific class within a given image. Essentially, for a given image I_0 and its associated class c, the pixels of I_0 were ranked based on their impact on the class score S_c(I_0). Given the intricate non-linearity of deep ConvNets, the class score S_c(I) was approximated linearly in the vicinity of I_0 using the first-order Taylor expansion. Through this approach, pixels that could be altered minimally to most influence the class score were explained.
The procedure involved first determining the derivative w via back-propagation. Subsequently, the saliency map was extracted by reorganizing the components of vector w. For grey-scale images, the dimensions of w were found to align with the pixel count of I_0, allowing for the map's computation as M_ij = |w_h(i,j)|, where h(i, j) denoted the index of w that corresponded to the pixel situated in the i-th row and j-th column. Notably, this saliency map derivation utilized a classification ConvNet, trained exclusively on image labels, thereby eliminating the need for supplemental annotations, such as bounding boxes or segmentation masks.
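A minimal PyTorch sketch of this gradient-based saliency computation, assuming a trained classifier `model` and a preprocessed input tensor; the reduction over channels is an illustrative choice:

```python
import torch

def saliency_map(model, image, target_class):
    """Compute |dS_c/dI| for one image of shape (C, H, W)."""
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)  # add batch dimension
    score = model(x)[0, target_class]            # class score S_c(I)
    score.backward()                             # back-propagate to the input
    # Max of absolute gradients over channels gives the per-pixel saliency.
    return x.grad.detach().abs().amax(dim=1)[0]  # shape (H, W)
```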
To address the problem of the accuracy-interpretability trade-off, <cit.> proposed an explanation framework named SHAP (SHapley Additive exPlanations). SHAP approaches a model's feature interpretability through concepts from cooperative game theory <cit.>, allocating an importance value to each feature for a specific prediction.
In research on model interpretability, it is commonly observed that a simple model acts naturally as its own best explanation, eliminating the need for additional clarifications <cit.>. However, complex models like deep neural network architectures are not interpretable by nature. Thus, a simpler, interpretable model approximation, the explanation model, is needed. Consider denoting the original model as f and the explanation model as g. Explanation models typically employ simplified inputs, x_0, which correlate to the original inputs via a transformation function, x = h_x(x_0). The objective of local methods is to ensure that g(z_0) closely mirrors f(h_x(z_0)) whenever z_0 is akin to x_0.
§.§.§ Shapley Values
In the field of cooperative game theory, the Shapley value is a fundamental mechanism designed to equitably allocate gains and costs among various participants within a coalition <cit.>. This concept, originally formulated by Lloyd Shapley, becomes indispensable in scenarios where distinct actors contribute unequally yet collaborate towards a shared objective. The central premise of the Shapley value is to guarantee that each participant receives a payoff commensurate with their contribution, ensuring it is not less than what they would achieve independently. To clarify, within a strategic game involving multiple players aiming for a specific outcome, the Shapley value quantifies the average marginal contribution of each player, after considering all feasible combinations.
In a machine learning framework, the traditional players of the cooperative game are analogously represented by the features inherent to the machine learning model, with the model's output serving as a corollary to the game's payoff <cit.>. Shapley values offer a perspective on feature importance within linear models, particularly when multicollinearity is present. The application of this method necessitates the retraining of the model for all feature subsets S ⊆ F, where F denotes the complete set of features. Each feature has assigned an importance value, representing its impact on the model prediction when included. To determine this impact, one model, f_S ∪{i}, incorporates the particular feature, while the other, f_S, excludes it. The predictions of these two models are subsequently contrasted based on the current input: f_S ∪{i}(x_S ∪{i}) - f_S(x_S), wherein x_S symbolizes the values of the input features contained within set S. Given that the ramifications of omitting a feature are influenced by the model's other features, the aforementioned differences are evaluated across all feasible subsets S ⊆ F ∖{i}. Subsequent calculations yield the Shapley values, then formally, the contribution ϕ of model feature i is defined as:
ϕ_i = ∑_S ⊆ F ∖{i}|S|!(|F| - |S| - 1)!/|F|! [ f_S ∪{i}(x_S ∪{i}) - f_S(x_S)]
Conceptually, the Shapley value quantifies the average contribution of a specific feature i, by evaluating the incremental payoff introduced by i across all possible coalitions that exclude feature i.
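This definition can be reproduced by brute-force enumeration over coalitions, as in the sketch below; the toy payoff function is invented purely for illustration, and exact enumeration is tractable only for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(payoff, features):
    """Exact Shapley values by enumerating every coalition S of F excluding i.

    `payoff` maps a frozenset of feature indices to the value f_S(x_S)
    of the model restricted (or retrained) on that subset.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        value = 0.0
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                value += weight * (payoff(S | {i}) - payoff(S))
        phi[i] = value
    return phi

# Invented toy game: additive contributions plus one shared interaction term.
def payoff(S):
    base = {0: 1.0, 1: 2.0, 2: 0.5}
    total = sum(base[f] for f in S)
    if 0 in S and 1 in S:
        total += 1.0  # interaction shared between features 0 and 1
    return total

print(shapley_values(payoff, [0, 1, 2]))  # the contributions sum to payoff of the full set
```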
§.§.§ SHAP Values
SHAP values are proposed as a unified measure of feature importance, representing the Shapley values of a conditional expectation function of the original model <cit.>. These values are attributed to each feature, reflecting the change in the expected model prediction upon conditioning on that particular feature. The transition from the base value E[f (z)] — which would have been predicted in the absence of any known features — to the current output f (x) is explained by SHAP values.
The unique additive feature importance measure that adheres to several properties is provided by SHAP values. These properties encompass:
Local accuracy — ensuring that the explanation model g(x_0) corresponds with the original model f (x) when x = h_x(x_0);
Missingness — where features with x_i = 0 are constrained to have no attributed impact;
Consistency — which mandates that if a model's alteration causes a simplified input's contribution to either increase or remain unchanged irrespective of other inputs, the attribution of that input should not diminish.
Conditional expectations are utilized to define simplified inputs within these values. Inherent in the SHAP value definition is a simplified input mapping, denoted as h_x(z_0) = z_S, where z_S contains missing values for features absent in set S. Owing to most models' inability to process arbitrary patterns of missing input values, f (z_S) is approximated with E[f (z) | z_S]. This definition of SHAP values is structured to closely resonate with the foundational Shapley values <cit.>.
§.§.§ GradientShap
The GradientShap method estimates SHAP values by evaluating gradient expectations, achieved by random sampling from a baseline distribution. By adding white noise to input samples multiple times, it randomly selects a baseline and an intermediate point between the baseline and the input, then calculates the gradient with respect to these random points. The resulting SHAP values mirror the expected values of these gradients multiplied by the difference between inputs and baselines.
GradientShap assumes that the input features are independent and that the explanation model is linear, meaning that the interpretations are modelled using the additive composition of feature effects. However, if the model exhibits non-linearity or the input features lack independence, the sequence in which features are incorporated into the expectation becomes significant. Under these circumstances, SHAP values are derived by averaging the Shapley values across all conceivable sequences. Given these conditions, the SHAP value can be approximated by the expected gradients computed for randomly generated samples, after Gaussian noise has been added to each input across various baselines.
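That estimator can be rendered in a short PyTorch sketch; this is a minimal re-implementation for illustration, and the sample count and noise level used below are our own assumptions rather than values fixed by the study.

```python
import torch

def gradient_shap(model, x, baselines, target, n_samples=50, stdev=0.09):
    """Minimal GradientShap estimate for a single input x.

    SHAP values are approximated as E[grad f(z)] * (x - baseline), with z
    drawn uniformly on the path between a noisy input and a random baseline.
    """
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        baseline = baselines[torch.randint(len(baselines), (1,))]
        noisy_x = x + stdev * torch.randn_like(x)     # white noise on the input
        alpha = torch.rand(1)                         # random point on the path
        point = (baseline + alpha * (noisy_x - baseline)).requires_grad_(True)
        output = model(point)[0, target]
        grad, = torch.autograd.grad(output, point)
        attributions += grad * (noisy_x - baseline)   # rescale by the input-baseline gap
    return attributions / n_samples
```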
§.§.§ Integrated Gradients
The problem addressed in the Axiomatic Attribution for Deep Networks publication <cit.> concerned the issue that many previous gradient-based methods broke at least one of the two axioms that should always be satisfied by feature attribution methods, namely the sensitivity and implementation invariance axioms. To address this problem, the Integrated Gradients method was presented.
The Integrated Gradients approach has emerged as a notable solution in the field of deep neural network interpretation. Rooted in an axiomatic framework inspired by the economics literature, Integrated Gradients seeks to fulfil both the sensitivity and implementation invariance axioms. This ensures that the computed attributions are not just artefacts of the method but genuinely reflect the network's behaviour <cit.>.
An attribution technique adheres to the sensitivity criterion when, for any input and baseline differing in just one feature yet yielding different predictions, the differing feature receives a non-zero attribution. Consequently, if the deep network's function exhibits no mathematical dependence on a particular variable, that variable's attribution is always zero. In practical terms, the absence of sensitivity can lead to gradients predominantly concentrating on irrelevant features.
Within the context of neural networks, two architectures are deemed functionally equivalent when they produce consistent outputs across all given inputs, even if their internal implementations differ considerably. For attribution techniques, it is essential to adhere to the principle of implementation invariance. This principle ensures that the attributions remain consistent for networks that are functionally equivalent, regardless of their distinct structures.
In the Integrated Gradients method, gradients are systematically integrated between a designated baseline, usually a black image, and the actual input image. This technique identifies the presence or absence of distinct features, thereby highlighting the significance of specific pixels or features within the contextual framework. It is commonly called a path-attribution technique <cit.>. Critically, Integrated Gradients is deemed a complete path-attribution approach. This implies that the cumulative relevance scores across all input features equate to the disparity between the prediction derived from the actual image and that of the reference image. In computer vision applications, pixel-wise attributions are presented, highlighting the areas of an image that resulted in the model's decision-making process.
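A direct Riemann-sum implementation of the path integral might look as follows; the 50-step resolution and the all-zero (black) baseline are conventional defaults rather than values fixed by this study.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Integrated Gradients via a Riemann-sum approximation:
    IG(x) = (x - x') * mean over alpha of grad f(x' + alpha * (x - x'))."""
    if baseline is None:
        baseline = torch.zeros_like(x)               # black image by default
    total_grad = torch.zeros_like(x)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        output = model(point)[0, target]
        grad, = torch.autograd.grad(output, point)
        total_grad += grad
    # Completeness: the attributions sum (approximately) to f(x) - f(baseline).
    return (x - baseline) * total_grad / steps
```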
§.§ Evaluation Metrics
In the context of machine learning interpretability, rigorous evaluation of explanatory heatmaps is crucial for computer vision models, especially when discerning model relevance. Therefore, to evaluate all the XAI approaches used in this research study, the Relevance Rank Accuracy metric <cit.> and the Positive Attribution Ratio metric proposed in this paper were used. Since both metrics compute ratios, their values fall within the [0, 1] range, with a higher score signifying a more precise relevance heatmap.
§.§.§ Relevance Rank Accuracy
The Relevance Rank Accuracy is defined to gauge the degree to which the most pronounced relevance points are aligned with the ground truth. First, K is determined, representing the size of the ground truth mask. Then, the top K relevance values are extracted. Afterwards, the number of these values that correspond to locations within the ground truth is counted. This count is subsequently normalized by the dimension of the ground truth mask. Formally, this procedure can be expressed as:
P_top K = { p_1, p_2, …, p_K | R_p_1 > R_p_2 > … > R_p_K}
where P_top K represents the set of pixels, each associated with relevance values R_p_1, R_p_2, …, R_p_K, arranged in descending order up to the K-th pixel. Subsequently, the rank accuracy is determined as:
Rank Accuracy = |P_top K∩GT|/|GT|
where GT represents the set of pixel positions contained within the ground truth mask, and |GT| denotes the total count of pixels within this mask.
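Assuming the relevance heatmap and the binary ground-truth mask arrive as numpy arrays of equal shape, the metric reduces to a top-K set intersection, as in this sketch.

```python
import numpy as np

def relevance_rank_accuracy(relevance, gt_mask):
    """Fraction of the top-K relevance pixels falling inside the mask,
    with K equal to the size of the ground-truth mask."""
    K = int(gt_mask.sum())
    top_k = np.argsort(relevance.ravel())[::-1][:K]   # indices of the K largest values
    inside = gt_mask.ravel()[top_k].astype(bool)
    return inside.sum() / K
```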
§.§.§ Positive Attribution Ratio
The Positive Attribution Ratio derives its foundation from the Relevance Mass Accuracy outlined by <cit.>. Nevertheless, a pivotal distinction exists, since it solely operates on pixels that possess positive attribution. We believe that for the future end-user, it is more important to be informed about the ratio of the number of pixels that have positive attribution localized inside the ground truth mask with respect to all positively attributed pixels on the investigated image.
The Relevance Mass Accuracy is calculated by dividing the aggregated sum of relevance values located within the ground truth mask by the total relevance values across the entire image. Essentially, this metric evaluates the proportion of the explanation method's "mass" attributed to the pixels within the ground truth. Positive Attribution Ratio operates in a similar manner, however, it focuses solely on pixels with positive attributions. As such, the Positive Attribution Ratio indicates the proportion of positive attributions within the ground truth mask R_within with respect to the positive attributions across the entire image R_total. This might be formally represented as:
Positive Attribution Ratio = R_within/R_total
where
R_within = ∑_k : p_k ∈GT, R_p_k > 0 R_p_k
and
R_total = ∑_k : R_p_k > 0 R_p_k
where R_p_k denotes the relevance value corresponding to pixel p_k which has positive relevance attribution, GT encompasses pixel locations present within the ground truth mask, |GT| signifies the count of pixels within this mask, and N stands for the overall pixel count in the image.
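Under the same array conventions, the proposed metric can be computed as follows; returning zero for an image with no positive attribution is our own defensive choice rather than part of the definition.

```python
import numpy as np

def positive_attribution_ratio(relevance, gt_mask):
    """Share of the total positive attribution lying inside the mask."""
    positive = np.clip(relevance, 0.0, None)          # keep only positive attributions
    r_total = positive.sum()
    r_within = positive[gt_mask.astype(bool)].sum()
    return r_within / r_total if r_total > 0 else 0.0
```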
§ EXPERIMENTS AND RESULTS
§.§ Models' Performance
The performance of each of the ResNet models in terms of accuracy, AUC-ROC, and cross-entropy loss on the separate test set is presented in Table <ref>. The ResNet-18 architecture achieved the highest accuracy of 98.4% and an AUC-ROC of 0.997, alongside maintaining the lowest cross-entropy loss of 0.066, misclassifying only 7 out of 437 X-ray images in the hold-out test set. Although all models demonstrated high accuracies and AUC-ROC values exceeding 95.9% and 0.988 respectively, an inverse relationship was noted between model complexity and performance metrics, with ResNet-101 registering the lowest accuracy and AUC-ROC scores in the series. These findings are consistent with the results reported by <cit.>.
This evaluation underscores the need to consider the trade-off between model complexity and predictive performance in the selection of suitable deep learning architectures for image classification.
§.§ Results
The quantitative evaluations of all ResNet architectures, utilizing both the Relevance Rank Accuracy and Positive Attribution Ratio metrics, are presented in Table <ref> and Table <ref> respectively. These evaluations incorporated the aforementioned XAI methodologies: Saliency Maps, GradientShap, and Integrated Gradients. The same interpretative methodologies were uniformly implemented across the four ResNet models and evaluated independently on a test set consisting of 292 X-ray images labelled as the COVID-19 class and 145 X-ray images of the Healthy class.
In the context of the COVID-19 class, clear fluctuations in performance indicators are evident. For the Relevance Rank Accuracy metric, ResNet-18 registered the highest mean score of 0.199 (SD=0.1) when analyzed through the Saliency Maps approach. Conversely, the application of GradientShap and Integrated Gradients methodologies resulted in the highest scores of 0.118 (SD=0.09) and 0.119 (SD=0.09), respectively, which were attributed to the ResNet-101 architecture.
In the evaluation of the Healthy class, the Relevance Rank Accuracy metric revealed varying performance patterns. The ResNet-34 architecture, when interfaced with the Saliency Maps methodology, achieved a mean score of 0.305 (SD=0.05). However, under the GradientShap and Integrated Gradients methodologies, the highest mean scores of 0.249 (SD=0.07) and 0.251 (SD=0.07) were achieved by the ResNet-34 and ResNet-18 architectures, respectively.
Referring to the Positive Attribution Ratio scores within the COVID-19 group, ResNet-18 achieved the highest mean scores of 0.186 (SD=0.12) and 0.120 (SD=0.1) under the Saliency Maps and Integrated Gradients methodologies, respectively. ResNet-101 achieved the highest mean score of 0.12 (SD=0.1) under GradientShap.
For the Healthy class, under the Positive Attribution Ratio metric, ResNet-34 reached the highest mean score of 0.315 (SD=0.07) using the Saliency Maps approach. In contrast, with the GradientShap methodology, ResNet-101 achieved a mean score of 0.263 (SD=0.09), while ResNet-18 reached a mean score of 0.253 (SD=0.08) with the Integrated Gradients approach.
To assess the statistical significance of the differences between the means obtained from each of the ResNet architectures (18, 34, 50, and 101) for both metrics (Relevance Rank Accuracy and Positive Attribution Ratio), a series of statistical analyses was performed. These evaluations spanned each of the three XAI methodologies (Saliency Maps, GradientShap, and Integrated Gradients) and were further separated by the two classes: COVID-19 and Healthy. The analyses for the COVID-19 subgroup were conducted utilizing a set of 292 X-ray images, whereas the Healthy class was assessed based on 145 X-rays. Both subgroups utilized images from the test set.
Upon analyzing the results, several observations emerge. Notably, ResNet-50 did not attain the top performance in either the Relevance Rank Accuracy or the Positive Attribution Ratio metric. Meanwhile, ResNet-18 secured the highest scores in five out of the twelve evaluated instances. ResNet-101 achieved the highest score in four out of the twelve instances, and lastly, ResNet-34 secured the top scores in three of the twelve evaluations.
From the derived observations, it becomes clear that there is no direct correlation between the size or complexity of the ResNet model architecture and the resultant performance metrics like accuracy or AUC-ROC, aligning with the findings reported by <cit.>. In terms of XAI quantitative metrics results, while ResNet-18 often displayed superior results, ResNet-50 did not necessarily follow suit, despite its increased complexity. Conversely, in certain scenarios, both ResNet-101 and ResNet-34 demonstrated superior performances, surpassing the results achieved by ResNet-50. Hence, it is imperative to understand that the choice of model architecture should not be solely based on its size or complexity. The results emphasize the importance of context-specific evaluations and suggest that in the domain of explainable AI for medical imaging, no one-size-fits-all approach is suitable.
Due to the unequal variances among groups, the Kruskal-Wallis test was used as a non-parametric alternative to the one-way ANOVA. This method evaluates the differences in medians across groups while accommodating the potential non-parametric distribution of the data, thus facilitating the detection of differences among the medians of the ResNet models. To clarify these differences, pairwise comparisons among the four ResNet models were performed using the Mann-Whitney U test. Considering the risk of type I errors due to multiple comparisons, the p-values obtained were adjusted using the Bonferroni correction method. With this statistical approach, a clear understanding of performance disparities across distinct model architectures with specific XAI techniques and image classes was achieved.
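With the per-image metric scores collected into one vector per architecture, this testing pipeline maps directly onto scipy; the randomly generated score vectors below are placeholders for the actual measurements.

```python
import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder score vectors: one entry per test image for each architecture.
rng = np.random.default_rng(0)
scores = {name: rng.random(292)
          for name in ["resnet18", "resnet34", "resnet50", "resnet101"]}

# Omnibus Kruskal-Wallis test across the four groups.
_, p_omnibus = kruskal(*scores.values())
print(f"Kruskal-Wallis p = {p_omnibus:.3f}")

# Pairwise Mann-Whitney U tests with Bonferroni correction.
pairs = list(combinations(scores, 2))
for a, b in pairs:
    _, p = mannwhitneyu(scores[a], scores[b])
    p_adjusted = min(p * len(pairs), 1.0)             # Bonferroni adjustment
    print(f"{a} vs {b}: adjusted p = {p_adjusted:.3f}")
```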
The statistical analysis reveals that the sole statistically significant divergence in medians across the ResNet architectures, at a threshold of p < 0.05, is discerned within the Relevance Rank Accuracy metric for the COVID-19 category, and only for the Saliency Maps methodology (p = 0.02). An extended analysis using the Mann-Whitney U test revealed a statistically significant difference between the results for the ResNet-18 and ResNet-101 architectures, with p = 0.03. Furthermore, a marginal approach to significance within the same group and approach was observed between the ResNet-34 and ResNet-101 models, registering a p-value of 0.053.
In contrast, the remaining comparisons failed to evince any statistical discrepancies across the models, irrespective of the metric, category, or XAI technique in question. Notably, within the Healthy category utilizing Saliency Maps, the difference between medians across ResNet models approached statistical significance, with a p-value of 0.06 for the Positive Attribution Ratio metric.
§ DISCUSSION
It is pertinent to note that the efficacy of interpretative methods in XAI hinges on their proper configuration <cit.>. Incorrect settings can substantially diminish their effectiveness, as evidenced by past research <cit.>. Therefore, constructing an empirical framework is crucial for validating the effectiveness and reliability of these methods <cit.>.
In healthcare and finance, users may mistakenly view predictive model outputs as causal, for instance, interpreting high saliency metrics as confirmation of specific health conditions. The capability of adversarial attacks to subtly alter inputs and shift focus from relevant to irrelevant features poses a significant challenge; such manipulations often go undetected as they do not change the diagnostic labels <cit.>. The vulnerability of DNNs to these adversarial attacks is a documented concern, casting doubt on the trustworthiness of their predictive labels <cit.>.
The research by <cit.> delves into the impact of adversarial perturbations on the interpretations provided by neural networks. The interpretation of a neural network is considered vulnerable if there is a possibility to manipulate an image without a perceptual difference, maintaining the initial classification label, while significantly altering the network's interpretation of that image <cit.>.
§ CONCLUSIONS
The influence of architectural complexity on the performance and explainability of ResNet models in medical image classification was investigated in this study. It was found that models with reduced complexity could deliver performance and interpretability comparable to or surpassing that of their more intricate counterparts. Specifically, architectures such as ResNet-18 were shown to provide effective accuracy and interpretability, challenging the prevailing belief that increased complexity ensures enhanced efficacy of the model. This provides grounds for rejecting Hypothesis 1.
Statistical analysis conducted on interpretability metrics on four ResNet models highlighted a lack of consistent correlation between architectural complexity and the quality of XAI explanations. The outcomes of this study necessitate a sensible approach to the selection of deep learning models, especially for applications that demand high precision and transparent explanations, such as those prevalent in healthcare. The results suggest that the additional resources required for more complex architectures, e.g. increased memory usage, higher financial costs, greater environmental impact, and longer training times, may not be justified, given that less complex architectures could achieve similar or superior levels of interpretability. This justifies the rejection of Hypothesis 2.
The study highlights the importance of properly configuring XAI methods to prevent misinterpretation of model predictions and urges for the development of an empirical framework to establish the reliability of these interpretive approaches. The conducted research reinforces the principle of a context-specific selection of neural network architectures underscoring the importance of both performance and interpretability, especially in applications within sensitive domains.
§ FUTURE WORK
In consideration of future explorations within the domain of Explainable Artificial Intelligence and image classification, it is crucial to address the growing interest in the Vision Transformer (ViT) architecture which surpasses the traditional CNN models in a variety of deep learning tasks <cit.>. The inherent capacity of Transformers to facilitate complex, sequential data processing through self-attention mechanisms posits them as a prime candidate for augmenting the interpretability of deep learning models <cit.>.
Future investigations should strive to establish methodological approaches that quantify the effect of Vision Transformer complexity on explanation quality. This research should also extend to examining the capability of Transformers to preserve explainability when processing image datasets.
Additionally, it is crucial to extend the assessment of the explainability of Vision Transformers by leveraging the CheXpert <cit.> dataset, a comprehensive repository of 224,316 chest X-rays across 65,240 patients. The dataset encompasses 14 diverse radiological observations, each accompanied by annotations that mark uncertain diagnoses, providing a robust framework for appraising the interpretability of AI in the field of medical image analysis.
Such research endeavours are expected to contribute significantly to the development of AI systems that are both advanced in their operational capabilities and transparent in their reasoning processes. This balance is essential for building trust and enabling effective Human-AI interaction, propelling the field of XAI forward.
§ REPRODUCIBILITY
The code utilized for replicating the experimental results is accessible at https://github.com/mateuszcedro/Beyond-the-Black-Box.
|
http://arxiv.org/abs/2405.09536v1 | 20240515174559 | Wasserstein Gradient Boosting: A General Framework with Applications to Posterior Regression | [
"Takuo Matsubara"
] | stat.ME | [
"stat.ME",
"cs.LG",
"stat.ML"
] |
Wasserstein Gradient Boosting: A General Framework with Applications to Posterior Regression
Takuo Matsubara
May 20, 2024
===========================================================
Gradient boosting is a sequential ensemble method that fits a new base learner to the gradient of the remaining loss at each step.
We propose a novel family of gradient boosting, Wasserstein gradient boosting, which fits a new base learner to an exactly or approximately available Wasserstein gradient of a loss functional on the space of probability distributions.
Wasserstein gradient boosting returns a set of particles that approximates a target probability distribution assigned at each input.
In probabilistic prediction, a parametric probability distribution is often specified on the space of output variables, and a point estimate of the output-distribution parameter is produced for each input by a model.
Our main application of Wasserstein gradient boosting is a novel distributional estimate of the output-distribution parameter, which approximates the posterior distribution over the output-distribution parameter determined pointwise at each data point.
We empirically demonstrate the superior performance of the probabilistic prediction by Wasserstein gradient boosting in comparison with various existing methods.
§ INTRODUCTION
Gradient boosting is a well-recognised, powerful machine learning method that has achieved considerable success with tabular data <cit.>.
Although gradient boosting has been extensively used for point forecasts and probabilistic classification, a relatively small number of studies have been concerned with the predictive uncertainty of gradient boosting.
Nowadays, predictive uncertainty of machine learning models plays a growing role in real-world production systems <cit.>.
It is vital for safety-critical systems, such as medical diagnoses <cit.> and autonomous driving <cit.>, to assess the potential risk of their actions by taking uncertainty in model predictions into account.
Gradient boosting has already been applied in a diverse range of real-world applications, such as click prediction <cit.>, ranking systems <cit.>, scientific discovery <cit.>, and data competition <cit.>.
There is a pressing need for methodology to harness the power of gradient boosting to probabilistic prediction while incorporating the predictive uncertainty.
A common approach to probabilistic prediction is to specify a parametric output distribution p(y |θ) on the space of outputs y and use a machine learning model that returns a point estimate of the output-distribution parameter θ at each input x.
In recent years, the importance of capturing uncertainty in model predictions has increasingly been emphasised <cit.>.
Several different approaches have been proposed <cit.> to return a distributional estimate (e.g. a set of multiple point estimates) of the output-distribution parameter θ at each input x.
Averaging the output distribution p(y |θ) over the distributional estimate has been demonstrated to confer enhanced predictive accuracy and robustness against adversarial attacks <cit.>.
Furthermore, the dispersion of the distributional estimate has been used as a powerful indicator for out-of-distribution (OOD) detection <cit.>.
In this context, a line of research has trained a model to approximate the posterior distribution p(θ| y_i) of the output distribution determined pointwise at each data point (x_i, y_i) <cit.>.
This setting can be viewed as a regression problem whose target variable is the posterior distribution assigned pointwise at each input variable, called posterior regression in this work.
The existing approaches have been employed in a wide spectrum of engineering and medical applications <cit.>, delivering outstanding accuracy and computational efficiency.
However, the existing approaches are limited to cases where a deep neural network approximates a parameter of a conjugate posterior that is available in closed form.
In general, posterior distributions are known only up to their normalising constants and, therefore, require an approximation typically by particles <cit.>.
Motivated by this challenge, this work formulates a framework of Wasserstein gradient boosting (WGBoost) that returns a set of particles that approximates a target probability distribution assigned at each input.
To the author's knowledge, WGBoost is the first framework that enables to harness gradient boosting for posterior regression and to approximate any general posterior without conjugacy.
<Ref> presents an example of applications of WGBoost detailed in <Ref>.
Our Contributions
Our contributions are summarised as follows.
First, we provide a general formulation of WGBoost in <Ref>.
It is a novel family of gradient boosting that returns a set of particles that approximates a target distribution given at each input.
In contrast to standard gradient boosting that fits a base learner to the gradient of a loss function, WGBoost fits a base learner to an exactly or approximately available Wasserstein gradient of a loss functional over probability distributions.
Second, we develop a concrete algorithm of WGBoost for posterior regression in <Ref>, where the loss functional is given by the Kullback–Leibler (KL) divergence of a target posterior.
Following modern gradient-boosting libraries <cit.> that use second-order gradient boosting (c.f. <Ref>), we establish a second-order WGBoost algorithm built on an approximate Wasserstein gradient and Hessian of the KL divergence.
Finally, we demonstrate the performance of regression and classification with OOD detection on real-world tabular datasets in <Ref>.
§ GENERAL FORMULATION OF WASSERSTEIN GRADIENT BOOSTING
This section presents the general formulation of WGBoost.
<Ref> recaps the notion of Wasserstein gradient flows, a `gradient' system of probability distributions that minimises an objective functional in the Wasserstein space.
<Ref> recaps the notion of gradient boosting, a sequential ensemble method that fits a new base learner to the `gradient' of the remaining loss.
<Ref> combines the above two notions to establish a novel family of gradient boosting, WGBoost, whose output is a set of particles that approximates a target distribution assigned at each input.
Notation and Setting
Let 𝒳 and 𝒴 be input and output spaces, in which a dataset { x_i, y_i }_i=1^D takes values.
Denote by Θ the parameter space of an output distribution p(y |θ) on 𝒴.
Suppose Θ = ^d for some dimension d without loss of generality, since re-parametrisation can be performed otherwise.
Let 𝒫_2 be the 2-Wasserstein space, that is, the set of all probability distributions on Θ with finite second moment, equipped with the Wasserstein metric <cit.>.
We identify any probability distribution on Θ with its density whenever it exists.
Denote by ⊙ and ⊘ elementwise multiplication and division of two vectors in ^d.
Let ∇ be the gradient operator.
Let ∇_d^2 be a second-order gradient operator that takes the second derivative at each coordinate i.e. ∇_d^2 f(θ) = [ ∂^2 f(θ) / ∂θ_1^2 , …, ∂^2 f(θ) / ∂θ_d^2 ]^T∈^d.
§.§ Wasserstein Gradient Flow
In the Euclidean space, a gradient flow of a function f means a curve x_t that solves a differential equation ( d / dt ) x_t = - ∇ f(x_t) from an initial value x_0.
That is the continuous-time limit of gradient descent, which minimises the function f as t →∞.
A Wasserstein gradient flow is a curve of probability distributions μ_t minimising a functional ℱ on the 2-Wasserstein space 𝒫_2.
The Wasserstein gradient flow μ_t solves a partial differential equation, known as the continuity equation:
d/d tμ_t = - ∇·( μ_t ∇_W ℱ(μ_t) ) given μ_0 ∈𝒫_2 ,
where ∇_W ℱ(μ): Θ→Θ denotes the Wasserstein gradient of ℱ at μ <cit.>.
<Ref> recaps the derivation of the Wasserstein gradient and presents the examples of some functionals.
One of the elegant properties of the Wasserstein gradient flow is casting the infinite-dimensional optimisation of the functional as a finite-dimensional particle update <cit.>.
The continuity equation (<ref>) can be reformulated as a dynamical system of a random variable θ_t ∼μ_t, such that
d/d tθ_t = - [ ∇_W ℱ(μ_t) ](θ_t) given θ_0 ∼μ_0 ,
in the sense that the law μ_t of the random variable θ_t is a weak solution of the continuity equation.
Consider the case where the initial measure is set to the empirical distribution π̂_0 of N particles {θ_0^n }_n=1^N.
Discretising the continuous-time system (<ref>) by the Euler method with a small step size ν > 0 yields an iterative update scheme of N particles {θ_m^n }_n=1^N from step m = 0:
[ θ_m+1^1; ⋮; θ_m+1^N ]
=
[ θ_m^1; ⋮; θ_m^N ]
+ ν[ - [ ∇_W ℱ(π̂_m) ]( θ_m^1 ); ⋮; - [ ∇_W ℱ(π̂_m) ]( θ_m^N ) ] ,
where π̂_m denotes the empirical distribution of the particles {θ_m^n }_n=1^N at step m.
In practice, it is common that a chosen functional has a Wasserstein gradient that is not well-defined for discrete distributions.
In this case, the particle update scheme (<ref>) is not directly applicable because it uses the Wasserstein gradient at the empirical distribution π̂_m.
For example, the KL divergence ℱ(μ) = KL(μ|π) of a target distribution π has a Wasserstein gradient [ ∇_W ℱ(μ) ](θ) = - ( ∇logπ(θ) - ∇logμ(θ) ) that is ill-defined for discrete distributions.
One primary approach in such a scenario is to use approximate Wasserstein gradient flows <cit.> that replace the Wasserstein gradient with a certain approximation that is well-defined for discrete distributions.
This work uses the `smoothed' Wasserstein gradient of the KL divergence <cit.> recapped in <Ref>.
§.§ Gradient Boosting
Gradient boosting <cit.> is a sequential ensemble method of M multiple base learners { f_m }_m=1^M, which iteratively constructs a boosting ensemble of m base learners from step m = 0 to M.
Given the current boosting F_m at step m, it trains a new base learner f_m+1 to compute the next boosting:
F_m+1(x) = F_m(x) + ν f_m+1(x)
where ν is a shrinkage hyperparameter called a learning rate.
The initial state of the boosting F_0(x) at step m = 0 is set to a constant that best fits the data.
Although any learning algorithm can be used as a base learner in principle, tree-based algorithms are most used <cit.> .
The fundamental idea of gradient boosting is to train the new base learner f_m+1 to approximate the negative gradient of the remaining error of the current boosting F_m.
Suppose that 𝒴 = ^d and a loss function L measures the remaining error at each data point R_i(F_m(x_i)) := L( F_m(x_i), y_i ).
The new base learner f_m+1 is fitted to the set { x_i, g_i }_i=1^D, where the target variable g_i is specified as
g_i = - ∇ R_i( F_m(x_i) ) ∈^d .
At every input-data point x_i, the boosting scheme (<ref>) approximately updates the output of the current boosting F_m(x_i) in the steepest descent direction of the error R_i(F_m(x_i)).
Such an update scheme can be understood as functional gradient descent <cit.>.
Although <cit.> originally suggested to perform an additional line search to determine a scaling constant for each base learner, it has been reported that the line search can be omitted due to the negligible influence on performance <cit.>.
In modern gradient-boosting libraries e.g. XGBoost <cit.> and LightGBM <cit.>, the standard practice is to use the diagonal (coordinatewise) Newton direction of the remaining error as the target variable of the new base learner.
Let h_i be the diagonal of the Hessian matrix of the error:
h_i := ∇_d^2 R_i( F_m(x_i) ) ∈^d .
The new base learner f_m+1 is fitted to the set { x_i, g_i ⊘ h_i }_i=1^D, where the negative gradient g_i is divided elementwise by the Hessian diagonal h_i.
The target variable g_i ⊘ h_i is the diagonal Newton direction that minimises the second-order Taylor approximation of the remaining error for each coordinate independently.
Combining this second-order gradient boosting and tree-based algorithms has demonstrated exceptional scalability and performance <cit.>.
Although it is possible to use the `full' Newton direction as the target variable, the impracticality of the full Newton direction has been pointed out <cit.>, and the coordinatewise computability of the diagonal Newton direction is suitable for many of popular gradient-boosting tree algorithms <cit.>.
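For concreteness, one such second-order step for the binary logistic loss can be sketched as follows; the loss choice and the off-the-shelf tree regressor are our own illustrative simplifications, whereas production libraries fold the Newton target directly into their split criteria.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def second_order_boost_step(X, y, F_m, learning_rate=0.1):
    """One diagonal second-order boosting step for the logistic loss
    L(F, y) = log(1 + exp(F)) - y * F with binary labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-F_m))                 # current predicted probabilities
    g = p - y                                      # gradient of the loss w.r.t. F_m
    h = p * (1.0 - p)                              # diagonal Hessian of the loss
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, -g / np.maximum(h, 1e-12))         # fit the (negated) direction g ⊘ h
    return F_m + learning_rate * tree.predict(X), tree
```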
§.§ Wasserstein Gradient Boosting
Suppose that we are given a loss functional ℱ_i(μ) over probability distributions μ∈𝒫_2 on Θ at each input-data point x_i.
For example, the loss functional can be specified by some divergence ℱ_i(μ) = D(μ|π_i)—such as the KL divergence—of a target distribution π_i assigned at each x_i.
Given the inputs and loss functionals { x_i, ℱ_i }_i=1^D, our aim is to construct a map from an input to N particles in Θ whose empirical distribution approximately minimises the loss functional ℱ_i.
Our approach is to combine gradient boosting with the Wasserstein gradient, where we sequentially construct a set of N boostings {F_m^n }_n=1^N—each of which consists of m base learners—from step m = 0.
Here, each n-th boosting F_m^n is a map from 𝒳 to Θ at any step m.
Its output F_m^n(x) represents the n-th output particle for each input x.
Given the current set of N boostings {F_m^n}_n=1^N at step m, WGBoost trains a set of N new base learners { f_m+1^n }_n=1^N and computes the next set of N boostings:
[ F_m+1^1(x); ⋮; F_m+1^N(x) ]
=
[ F_m^1(x); ⋮; F_m^N(x) ]
+ ν[ f_m+1^1(x); ⋮; f_m+1^N(x) ]
where ν is a learning rate.
Similarly to standard gradient boosting, the initial state of the set of N boostings {F_0^n}_n=1^N at step m = 0 is set to a given set of constants.
Throughout, let π̂_m(x) denote the empirical distribution of the N output particles {F_m^n(x) }_n=1^N at step m for each input x.
For better presentation, let 𝒢_i(μ) denote the Wasserstein gradient of each loss functional ℱ_i(μ).
The fundamental idea of WGBoost is to train the n-th new learner f_m+1^n to approximate the negative Wasserstein gradient - 𝒢_i(π̂_m(x_i)) evaluated at the n-th boosting output F_m^n(x_i) for each x_i, so that
[ f_m+1^1(x_i); ⋮; f_m+1^N(x_i) ]≈[ - [ 𝒢_i( π̂_m(x_i) ) ]( F_m^1(x_i) ); ⋮; - [ 𝒢_i( π̂_m(x_i) ) ]( F_m^N(x_i) ) ] .
At every input-data point x_i, the boosting scheme (<ref>) approximates the particle update scheme (<ref>) for the output particles {F_m^n(x_i) }_n=1^N under the Wasserstein gradient 𝒢_i(π̂_m(x_i)) = ∇_W ℱ_i(π̂_m(x_i)), by which each boosting output is updated in the direction that decreases the loss functional.
As stated in <Ref>, some loss functionals have a Wasserstein gradient that is not well-defined for empirical distributions.
Any suitable approximation of the Wasserstein gradient can be used as 𝒢_i(μ), which results in an approximate WGBoost, as in approximate Wasserstein gradient flows.
The general procedure of exact or approximate WGBoost is summarised in Algorithm <ref>.
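A schematic rendering of this procedure is sketched below; the function and variable names are ours, wgrad stands in for whatever exact or approximate Wasserstein gradient 𝒢_i is chosen, and initialisation and prediction details are omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def wgboost_fit(X, wgrad, theta0, M=100, nu=0.1, max_depth=3):
    """Sketch of the generic WGBoost loop.

    X       : (D, p) array of inputs.
    wgrad   : wgrad(i, particles) returns the (N, d) target directions,
              i.e. the negative Wasserstein gradient at every particle for x_i.
    theta0  : (N, d) initial particle constants shared across inputs.
    """
    D, (N, d) = X.shape[0], theta0.shape
    F = np.tile(theta0[None, :, :], (D, 1, 1))     # current outputs F_m^n(x_i)
    ensembles = []
    for m in range(M):
        targets = np.stack([wgrad(i, F[i]) for i in range(D)])   # (D, N, d)
        step_trees = []
        for n in range(N):                          # one learner per particle...
            trees_n = []
            for j in range(d):                      # ...and per coordinate
                tree = DecisionTreeRegressor(max_depth=max_depth)
                tree.fit(X, targets[:, n, j])
                F[:, n, j] += nu * tree.predict(X)
                trees_n.append(tree)
            step_trees.append(trees_n)
        ensembles.append(step_trees)
    return ensembles
```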
Similarly to standard gradient boosting, various loss functionals may lead to a different WGBoost.
<Ref> presents examples of Wasserstein gradients of common divergences.
<Ref> illustrates the output of WGBoost using a simple target distribution π_i(θ) = 𝒩(θ|sin(x_i), 0.5) when the approximate Wasserstein gradient of the KL divergence presented later in (<ref>) is applied as 𝒢_i in Algorithm <ref>.
Stochastic gradient boosting <cit.> uses only a randomly sampled subset of data to fit a new base learner at each step m to reduce the computational cost.
The same subsampling approach can be applied for WGBoost whenever the dataset is large.
If a loss functional ℱ_i admits an exactly or approximately available Wasserstein `Hessian', the associated Newton direction may also be computable <cit.>.
Implementing a second-order WGBoost algorithm is immediately possible by plugging such a Newton direction into 𝒢_i( μ ) in Algorithm <ref>.
The default WGBoost algorithm for posterior regression is built on a diagonal approximate Wasserstein Newton direction of the KL divergence, aligning with the standard practice in modern gradient-boosting libraries to use the diagonal Newton direction.
§ DEFAULT IMPLEMENTATION FOR POSTERIOR REGRESSION UNDER KL DIVERGENCE
This section provides the default setting to implement a concrete WGBoost algorithm for posterior regression, where each loss functional ℱ_i(μ) is specified by the KL divergence of a target posterior.
<Ref> recaps the definition of the target posterior in posterior regression, followed by the default choice of the prior discussed in <Ref>.
<Ref> recaps a widely-used approximate Wasserstein gradient of the KL divergence based on kernel smoothing <cit.>.
A further advantage of the kernel smoothing approach is that the approximate Wasserstein Hessian is available.
<Ref> establishes a second-order WGBoost algorithm, similarly to modern gradient-boosting libraries.
§.§ Target Posterior
Suppose that a parametric output distribution p(y |θ) is specified on the output space as is often done in most probabilistic prediction methods.
Suppose further that a prior distribution p_i(θ) of the output-distribution parameter θ is specified at each data point (x_i, y_i) in the dataset.
At each data point (x_i, y_i), the likelihood p(y_i |θ) and the prior p_i(θ) determine the posterior distribution:
π_i(θ) ∝ p(y_i |θ) p_i(θ) .
In posterior regression, the target variable is the posterior distribution π_i assigned pointwise at each input-data point x_i.
We apply the framework of WGBoost to construct a map from an input to a set of particles that approximates each target posterior π_i under the loss functional ℱ_i(μ) = KL(μ|π_i).
The constructed map can produce a distributional estimate of the output-distribution parameter θ for a new input x.
Recall that π̂_M(x) denotes the empirical distribution of the output particles of WGBoost at the final step M for each input x.
Based on the output particles of WGBoost, a predictive distribution p(y | x) is defined for each input x via the Bayesian model averaging:
p(y | x) = 𝔼_θ∼π̂_M(x)[ p( y |θ) ] .
A point prediction ŷ can also be defined for each input x via the Bayes action:
ŷ = argmin_y ∈𝒴 𝔼_θ∼π̂_M(x)[ U(y, θ) ] ,
which is the minimiser of the average risk of a given utility U: 𝒴×Θ→ℝ.
For example, if the utility is a quadratic function U(y, θ) = (y - θ)^2, the Bayes action is the mean value of π̂_M(x).
In general, the explicit form of the posterior π_i is known only up to the normalising constant.
The WGBoost algorithm for posterior regression, provided in <Ref>, requires no normalising constant of the posterior π_i.
It depends only on the log-gradient of the posterior ∇logπ_i(θ) = ∇π_i(θ) / π_i(θ) and the diagonal of the log-Hessian of the posterior, in which the normalising constant cancels.
Hence, knowing the form of the likelihood p(y_i |θ) and prior p_i(θ) suffices.
§.§ Choice of Prior
In posterior regression, the prior p_i(θ) of the output-distribution parameter θ is specified at each data point (x_i, y_i).
The approach to specifying the prior may differ depending on whether past data are available.
When past data are available, they can be utilised in any possible way to elicit a reasonable prior for future data.
When no past data are available, we recommend the use of a noninformative prior; see <cit.> for the introduction of noninformative priors that have been developed as a sensible choice of prior in the absence of past data.
Precisely, in order to avoid numerical errors, if a noninformative prior is improper (nonintegrable) as is often the case, we recommend the use of a proper probability distribution that approximates the noninformative prior sufficiently well.
[Normal Location-Scale]
A normal location-scale distribution 𝒩(y | m, σ) of a scalar output y ∈ has the mean and scale parameters m ∈ and σ∈ (0, ∞).
A typical noninformative prior of m and σ is given by 1 and 1 / σ respectively, both of which are improper.
At every data point (x_i, y_i), we use a normal prior 𝒩(m | 0, σ_0) over m and an inverse gamma prior IG(σ|α_0, β_0) over σ, with the hyperparameters σ_0 = 10 and α_0 = β_0 = 0.01, which approximate the non-informative priors.
[Categorical]
A categorical distribution 𝒞(y | q) of a k-class label y ∈{ 1, …, k } has a class probability parameter q = ( q_1, …, q_k ) in the k-dimensional simplex Δ_k.
It corresponds to the Bernoulli distribution if k = 2.
A typical noninformative prior of q is given by 1 / ( q_1 ×…× q_k ).
At every data point (x_i, y_i), we use the logistic normal prior—a multivariate generalisation of the logit normal distribution <cit.>—over q with the mean 0 and identity covariance matrix scaled by 10.
In <Ref>, Θ = ^d is supposed for some dimension d, as any parameter that lies in a subset of the Euclidean space (e.g. σ) can be reparametrised as one in the Euclidean space (e.g. logσ).
<Ref> details the reparametrisation used for the experiment.
When a dataset has scalar outputs of a low or high order of magnitude, we also recommend standardising the outputs to adjust the magnitude.
§.§ Approximate Wasserstein Gradient of KL Divergence
The loss functional ℱ_i(μ) considered for posterior regression is the KL divergence KL(μ|π_i).
A computational challenge of the KL divergence is that the associated Wasserstein gradient [ 𝒢_i^KL(μ) ]( θ ) := - ( ∇logπ_i(θ) - ∇logμ(θ) ) is not well-defined for empirical distributions.
A particularly successful approach to finding a well-defined approximation of the Wasserstein gradient—which originates in <cit.> and has been applied in wide contexts <cit.>—is to smooth the original Wasserstein gradient through a kernel integral operator ∫_Θ [ 𝒢_i^KL(μ) ](θ^*) k(θ, θ^*) d μ(θ^*) <cit.>.
By integration-by-parts (see <cit.>), the smoothed Wasserstein gradient, denoted 𝒢^*_i(μ), falls into the following form that is well-defined for any probability distribution μ:
[ 𝒢^*_i(μ) ]( θ ) := - 𝔼_θ^* ∼μ[ ∇logπ_i(θ^*) k(θ, θ^*) + ∇ k(θ, θ^*) ] ∈^d ,
where ∇ k(θ, θ^*) denotes the gradient of k with respect to the first argument θ.
An approximate Wasserstein gradient flow based on the smoothed Wasserstein gradient 𝒢^*_i(μ) is called the Stein variational gradient descent <cit.> or kernelised Wasserstein gradient flow <cit.>.
In most cases, the kernel k is set to the Gaussian kernel k(θ, θ^*) = exp( - θ - θ^* ^2 / h ) with the scale hyperparameter h > 0.
We use the Gaussian kernel with the scale hyperparameter h = 0.1 throughout this work.
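Evaluated at an empirical measure of N particles, the smoothed gradient amounts to two kernel sums; the numpy sketch below assumes a user-supplied score function returning ∇logπ_i row-wise and uses the Gaussian kernel with h = 0.1 as in the text.

```python
import numpy as np

def smoothed_wgrad(particles, score, h=0.1):
    """Smoothed Wasserstein gradient of KL(mu | pi_i) at an empirical measure.

    particles : (N, d) array; score(particles) returns grad log pi_i row-wise.
    Returns the (N, d) values of the smoothed gradient at every particle;
    the base learners are fitted to the negative of this quantity.
    """
    diff = particles[:, None, :] - particles[None, :, :]       # theta - theta*
    K = np.exp(-(diff ** 2).sum(-1) / h)                       # Gaussian kernel
    gradK = -2.0 / h * diff * K[:, :, None]                    # kernel gradient
    S = score(particles)
    # Monte Carlo expectation over theta* drawn from the empirical measure.
    return -(K @ S + gradK.sum(axis=1)) / len(particles)
```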
Another common approach to approximating the Wasserstein gradient flow of the KL divergence is the Langevin diffusion approach <cit.>.
The discretised algorithm, called the unadjusted Langevin algorithm <cit.>, is a stochastic particle update scheme that adds a Gaussian noise at every iteration.
However, several known challenges, such as asymptotic bias and slow convergence, often necessitate an ad-hoc adjustment of the algorithm <cit.>.
<Ref> discusses a variant of WGBoost built on the Langevin algorithm, although it is not considered the default implementation.
§.§ Default Second-Order Implementation of WGBoost
Following the standard practice in modern gradient-boosting libraries <cit.> to use the diagonal Newton direction, we further consider a diagonal (coordinatewise) approximate Wasserstein Newton direction of the KL divergence.
In a similar manner to the smoothed Wasserstein gradient (<ref>), the approximate Wasserstein Hessian of each KL divergence KL(μ|π_i) can be obtained by the kernel smoothing.
The diagonal of the approximate Wasserstein Hessian, denoted H^*_i(μ), is defined by
[ H^*_i(μ) ]( θ ) := 𝔼_θ^* ∼μ[ - ∇_d^2 logπ_i(θ^*) k(θ, θ^*)^2 + ∇ k(θ, θ^*) ⊙∇ k(θ, θ^*) ] ∈^d .
The diagonal approximate Wasserstein Newton direction of each KL divergence is then defined by - [ 𝒢_i^*(μ) ]( · ) ⊘ [ H_i^*(μ) ]( · ).
<Ref> provides the derivation based on <cit.>, who derived the Newton direction of the KL divergence in the context of nonparametric variational inference.
The second-order WGBoost algorithm is established by plugging it into 𝒢_i(μ) in Algorithm <ref>, i.e.
[ 𝒢_i(μ) ]( · ) = [ 𝒢_i^*(μ) ]( · ) ⊘ [ H_i^*(μ) ]( · ) .
Algorithm <ref> under the setting (<ref>) is considered our default WGBoost algorithm for posterior regression.
We refer to the algorithm as second-order KL approximate WGBoost (SKA-WGBoost).
For full clarity, the explicit pseudocode is provided in <Ref>.
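As an indication of how this direction can be computed, the numpy sketch below combines the smoothed gradient with the diagonal Hessian; hess_diag is assumed to return the coordinatewise second derivatives of logπ_i row-wise, and the small eps floor on the division is our own safeguard rather than part of the definition.

```python
import numpy as np

def diagonal_newton_direction(particles, score, hess_diag, h=0.1, eps=1e-8):
    """Diagonal approximate Wasserstein Newton descent direction for KL(mu | pi_i),
    i.e. the target that SKA-WGBoost fits with its base learners."""
    diff = particles[:, None, :] - particles[None, :, :]
    K = np.exp(-(diff ** 2).sum(-1) / h)
    gradK = -2.0 / h * diff * K[:, :, None]
    S, Hd = score(particles), hess_diag(particles)
    N = len(particles)
    V = -(K @ S + gradK.sum(axis=1)) / N                       # smoothed gradient
    H = ((K ** 2) @ (-Hd) + (gradK ** 2).sum(axis=1)) / N      # diagonal Hessian
    return -V / np.maximum(H, eps)                             # coordinatewise Newton step
```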
Although it is possible to use the full Newton direction with no diagonal approximation, the inverse and product of ( N × d ) × (N × d) matrices are required at every computation of the direction (c.f. <Ref>), which can be a computational bottleneck if the size of data or output particles is large.
The diagonal Newton direction has a clear computational benefit in that only elementwise division is involved.
The computational complexity is the same as that for the smoothed Wasserstein gradient, scaling linearly to both the particle number N and the parameter dimension d.
Hence, there is essentially no reason not to use the diagonal Newton direction instead of the smoothed Wasserstein gradient.
<Ref> presents a simulation study to compare different WGBoost algorithms.
§ APPLICATIONS WITH REAL-WORLD TABULAR DATA
We empirically demonstrate the performance of the default WGBoost algorithm, SKA-WGBoost, through three applications using real-world tabular data.
The first application illustrates the output of the WGBoost algorithm through a simple conditional density estimation.
The second application benchmarks the regression performance on nine real-world datasets <cit.>.
The third application examines the classification and OOD detection performance on the real-world datasets used in <cit.>.
Throughout, in SKA-WGBoost, we set the number of output particles N to 10 and set each base learner f to the decision tree regressor <cit.> with maximum depth 1 for the first application and 3 for the rest.
<Ref> contains further details, including a choice of the initial constant {ϑ_0^n }_n=1^N.
The source code is available in <https://github.com/takuomatsubara/WGBoost>.
§.§ Illustrative Conditional Density Estimation
This section illustrates the output of the WGBoost algorithm by estimating a conditional density p(y | x) from one-dimensional scalar inputs and outputs { x_i, y_i }_i=1^D.
The normal output distribution 𝒩(y | m, σ) and the prior p_i(m, σ) in <Ref> were used, where the output of the WGBoost algorithm is a set of 10 particles { ( m^n, σ^n ) }_n=1^10 of the mean and scale parameters (m, σ) for each input x.
We set the number of base learners M to 500 and the learning rate ν to 0.1.
The conditional density is estimated using the predictive distribution (<ref>) by the WGBoost algorithm.
We used two real-world datasets, bone mineral density <cit.> and old faithful geyser <cit.>.
<Ref> depicts the result for the former dataset, demonstrating that the WGBoost algorithm captures the heterogeneity of the conditional density on each input well.
<Ref> presents the result for the latter dataset.
§.§ Probabilistic Regression Benchmark
This section examines the regression performance of the WGBoost algorithm using a widely-used benchmark protocol that originated in <cit.> and has been used in subsequent work <cit.>.
We used nine real-world tabular datasets from the University of California Irvine machine learning repository <cit.>, each with one-dimensional scalar outputs.
As in <Ref>, the normal output distribution 𝒩(y | m, σ) and the prior p_i(m, σ) in <Ref> were used.
We randomly held out 10% of each dataset as a test set to measure the negative log likelihood (NLL) of the predictive distribution (<ref>) by the WGBoost algorithm and the root mean squared error (RMSE) of the point prediction produced by taking the mean value of the predictive distribution.
We chose the number of base learners M using an early-stopping approach, following <cit.>, where we held out 20% of the training set as a validation set to choose the number 1 ≤ M ≤ 4000 achieving the least validation error.
Once the number M was chosen, the WGBoost algorithm was trained again using the entire training set.
We set the learning rate ν to 0.1 for all the nine datasets.
We repeated this procedure 20 times for each dataset, except the protein dataset for which we repeated five times.
We compared the performance of the WGBoost algorithm with five other methods: Monte Carlo Dropout (MCDropout) <cit.>, Deep Ensemble (DEnsemble) <cit.>, Concrete Dropout (CDropout) <cit.>, Natural Gradient Boosting (NGBoost) <cit.>, and Deep Evidential Regression (DEvidential) <cit.>.
<Ref> briefly describes each algorithm and provides further details on the experiment.
<Ref> summarises the NLLs and RMSEs of the six algorithms.
The WGBoost algorithm achieves the best score or a score sufficiently close to the best score for the majority of the nine datasets.
§.§ Classification and Out-of-Distribution Detection
This section examines the classification and OOD detection performance of the WGBoost algorithm on two real-world tabular datasets, segment and sensorless, following the protocol used in <cit.>.
The categorical output distribution 𝒞(y | q) and the prior p_i(q) in <Ref> were used, where the output of the WGBoost algorithm is a set of 10 particles { q^n }_n=1^10 of the class probability parameter q in the simplex Δ^k for each input x.
We set the number of base learners M to 4000 and the learning rate ν to 0.4.
The dispersion of the output particles of the WGBoost algorithm was used for OOD detection <cit.>.
If a test input was an in-distribution sample from the same distribution as the data, we expected the output particles to concentrate on some small region in Δ^k indicating a high probability of the correct class.
If a test input was an OOD sample, we expected the output particles to disperse over Δ^k because the model ought to be less certain about the correct class.
The segment and sensorless datasets have 7 and 11 classes in total.
For the segment dataset, the data subset that belongs to the last class was kept as the OOD samples.
For the sensorless dataset, the data subset that belongs to the last two classes was kept as the OOD samples.
For each dataset, 20% of the non-OOD samples was held out as a test set to measure the classification accuracy.
Several approaches can define the OOD score of each input <cit.>.
We focused on an approach that uses the variance of the output particles as the OOD score.
For the WGBoost algorithm, we employed the inverse of the maximum norm of the variance as the OOD score.
Given the OOD score, we measured the OOD detection performance by the area under the precision recall curve (PR-AUC), viewing non-OOD test data as the positive class and OOD data as the negative class.
We repeated this procedure five times.
We compared the WGBoost algorithm with four other methods: MCDropout, DEnsemble, and Distributional Distillation (DDistillation) <cit.>, and Posterior Network (PNetwork) <cit.>.
<Ref> briefly describes each algorithm and provides further details on the experiment.
<Ref> summarises the classification and OOD detection performance of the five algorithms.
The WGBoost algorithm demonstrates a high classification and OOD detection accuracy simultaneously.
Although PNetwork has the best OOD detection performance for the sensorless dataset, the performance of the WGBoost algorithm also exceeds 80%, in contrast to MCDropout, DEnsemble, and DDistillation.
§ DISCUSSION
We proposed a general framework of WGBoost.
We further established a second-order WGBoost algorithm for posterior regression, aligning with the standard practice in modern gradient-boosting libraries.
We empirically demonstrated that the probabilistic forecast by WGBoost leads to better predictive accuracy and OOD detection performance.
This work offers exciting avenues for future research.
Important directions for future study include investigating the convergence properties, evaluating the robustness to misspecified output distributions, and exploring alternatives to the KL divergence.
One limitation of WGBoost may arise when data are not tabular, as in the case of standard gradient boosting.
These questions require careful examination and are critical for future work.
|
http://arxiv.org/abs/2405.09050v1 | 20240515025600 | 3D Shape Augmentation with Content-Aware Shape Resizing | [
"Mingxiang Chen",
"Jian Zhang",
"Boli Zhou",
"Yang Song"
] | cs.CV | [
"cs.CV"
] |
3D Shape Augmentation with Content-Aware Shape Resizing
Mingxiang Chen, Jian Zhang, Boli Zhou, Yang Song
Ant Group
Xihu District, Hangzhou, China
{mingxiang.cmx, zj134362, zhouboli.zbl, zhaoshan.sy}@antgroup.com
May 20, 2024
====================================================================================================================================================================================
[Figure: Given an input model (rendered in red in the middle), our approach has the capability to generate a variety of augmented 3D shapes characterized by intricate structures and precise details.]
Recent advancements in deep learning for 3D models have propelled breakthroughs in generation, detection, and scene understanding. However, the effectiveness of these algorithms hinges on large training datasets. We address the challenge by introducing Efficient 3D Seam Carving (E3SC), a novel 3D model augmentation method based on seam carving, which progressively deforms only part of the input model while ensuring the overall semantics are unchanged. Experiments show that our approach is capable of producing diverse and high-quality augmented 3D shapes across various types and styles of input models, achieving considerable improvements over previous methods. Quantitative evaluations demonstrate that our method effectively enhances the novelty and quality of shapes generated by other subsequent 3D generation algorithms.
§ INTRODUCTION
In recent years, there has been a rapid advancement in deep learning technologies associated with three-dimensional models. The corresponding algorithms have achieved significant breakthroughs in various tasks, including the generation, recognition, detection, and understanding of three-dimensional models and scenes. Despite the notable superiority of deep learning algorithms over traditional machine learning methods, their effectiveness hinges on the availability of large datasets for training. To address this challenge, the adoption of data augmentation has become widespread, especially in domains where substantial datasets are not readily accessible— a circumstance frequently encountered in the context of many 3D tasks. The primary objective of data augmentation is to generate supplementary data for training, and its efficacy in enhancing performance has been empirically validated.
A traditional and straightforward way of 3D shape augmentation is axis scaling, yet it is not content-aware, and the scaled 3D shape may be distorted. Alternative methods have also indicated limited effectiveness in addressing the issue of non-logical deformation. Though widely used in 2D image generation, detection, recognition, and many other tasks, the augmentation of 3D models has been relatively underexplored. On the other hand, the significance of varied and high-quality 3D content is growing within multiple industries, encompassing gaming, robotics, architecture, and social platforms. Constricting training datasets to a narrow scope of 3D models poses limitations on the algorithm's potential in various respects. Consequently, the development of an augmentation technique capable of generating diverse and high-quality 3D models is imperative.
In this paper, we present a novel yet straightforward augmentation method that produces diverse variations when given a 3D model as input. Our approach leverages the content-aware 2D image resizing technique based on seam carving, ensuring precise 3D seam prediction and enhanced computational efficiency. Additionally, we mitigate the issue of diversity by introducing 'anchor points' into our approach.
Overall, our contributions are summarized as follows:
* We propose Efficient 3D Seam Carving, a content-aware 3D model augmentation algorithm, that evaluates which parts of the model can be deformed along which specific directions, thereby appropriately implementing different deformation strategies for various regions of an input model.
* We introduce beam search and anchor point selection techniques to ensure that our method computes efficiently and outputs various 3D shapes.
* We show compelling results and compare our method with previous methods. The results show that our method produces high-quality and diverse augmented 3D shapes among varied types of input models.
The paper is organized as follows. First, we introduce related works in Section <ref>. The details of our method are explained in Section <ref>. The settings of experiments and their results are discussed in Section <ref>. The conclusion is presented in Section <ref>.
§ RELATED WORKS
Content-aware 2D image retargeting.
Content-Aware Image Retargeting (CAIR) techniques are important for displaying images or videos on various devices with diverse aspect ratios <cit.>. Compared to naive resizing methods such as uniform scaling and fixed-window cropping, CAIR methods preserve crucial regions of the input image while minimizing the occurrence of artifacts and distortion. Generally speaking, the CAIR methods can be classified into 4 categories <cit.>: 1) discrete methods <cit.>, 2) continuous methods <cit.>, 3) multi-operator methods <cit.>, and 4) deep learning based methods <cit.>. Numerous methodologies have been proposed within each category, each with its own set of advantages and limitations. Among them, the Seam Carving (SC) <cit.> algorithm is a set of discrete methods upon which many CAIR techniques are based. The SC method first computes the energy of each pixel based on a pre-defined energy function, typically indicating the pixel's significance. Subsequently, the algorithm identifies a low-energy seam and, based on the target dimensions, determines whether the seam is to be removed or inserted.
Image and 3D shape augmentation.
Data augmentation is a technique commonly used in computer vision and machine learning to artificially increase the size of a training dataset and avoid overfitting by applying various transformations to existing data. Image augmentation proves to be necessary for numerous 2D image processing tasks <cit.>, including but not limited to segmentation <cit.>, detection <cit.>, unsupervised learning <cit.>, and various other applications. The 2D image augmentation techniques can be grouped into three main categories <cit.>: model-free approaches, model-based approaches, and optimizing policy-based approaches. The model-free approach leverages traditional image processing techniques such as cropping, rotation, flipping, and scaling, whereas the model-based approach capitalizes on image generation models <cit.> to produce synthetic images. The optimizing policy-based method seeks to balance and find the most advantageous combination of both <cit.>. Moreover, image augmentation techniques may vary depending on the target domain, for example, medical images <cit.>, agricultural images <cit.>, and satellite images <cit.>.
As for the 3D shapes, to the best of our knowledge, random scaling <cit.> is probably the most widely adopted augmentation approach. Other methods include piecewise warping <cit.>, shape uniting <cit.>, spectral augmentation <cit.>, and planar decimation <cit.> (as documented in Blender's manual [<https://docs.blender.org/manual/en/latest/modeling/modifiers/generate/decimate.html>]). These methodologies lack content-awareness. Consequently, the execution of major augmentation, such as a large scaling factor, may yield results with noticeable artifacts and distortions (except for planar decimation). Conversely, when minor augmentation is applied, the augmented model may differ imperceptibly from the original model, thereby potentially being detrimental to the algorithm's generalizability. In the context of 2D images, cropping serves as a straightforward remedy for the aforementioned distortion issues. However, in the case of 3D models, integrity is important, and thus, cropping is generally not considered a viable augmentation technique.
§ METHOD
§.§ Overview
A 3D shape can be represented as a discrete occupancy function o: z ∈𝒵↦{0,1} defined on a 3D grid 𝒵. o(z) records the occupancy value: o(z)=1 if the center of the grid cell is inside the 3D shape and o(z)=0 otherwise. Let 𝐆_𝐨 be an N_i × N_j × N_k grid containing the occupancy values of a watertight 3D shape. A spatial seam is defined to be
𝐬^𝐳 = {s^z_i,j} = {(i,j,z(i,j))}_i,j,
where i,j,k are coordinate values on the x, y, and z axes, respectively. i,j,k ∈ℤ, 1 ≤ i ≤ N_i, 1 ≤ j ≤ N_j, 1 ≤ k ≤ N_k, and ∀ i,j, |z(i,j)-z(i-1,j)| ≤ 1, |z(i,j)-z(i,j-1)| ≤ 1. Here, z is a mapping z:[1,...,N_i] × [1,...,N_j] ↦ [1,...,N_k] giving the coordinate values on the z axis. That is, a spatial seam is a connected surface that splits the 3D shape in two, running from top to bottom and from left to right, and containing exactly one voxel for each coordinate pair (i,j). In this example, the x-axis and y-axis are associated with independent variables, while the z-axis is associated with the dependent variable. Spatial seams that cut the 3D shape from different directions (from top to bottom, from front to back) are defined in similar ways.
Similar to the occupancy function, a discrete signed distance function (SDF) g: z ∈𝒵↦ℝ is defined on a 3D grid 𝒵, where the value records the signed distance from the center of the grid cell to the surface of the 3D shape. Specifically, we can restrict the values of the signed distance within a defined range, referred to as the Truncated Signed Distance Function (TSDF). The grids containing the SDF and TSDF values are denoted as 𝐆_𝐬 and 𝐆_𝐭, respectively. The definition of the spatial seam in the signed distance grid is analogous to that in the occupancy grid.
Given an energy function e, the cost of a spatial seam is
E(s) = ∑_i,j e(𝐆(s^z_i,j)),
where we define the energy function as:
e_z(𝐆) = |∂/∂ z𝐆|,
and the variable z in Eq. (<ref>) changes with the direction of the spatial seam; we call it the cutting axis. Unlike the energy function designed for 2D images <cit.>, which is commonly the sum of the absolute first-order derivatives of the image along the horizontal and vertical axes, we focus on capturing the gradient information of the model exclusively along a specific axis. The rationale is that when the gradient along a specific axis of a 3D shape reaches zero, it commonly signifies the potential for scale variation along that particular axis. To illustrate, take a handleless cup where the gradients along the y-axis (the height direction) are zero in the central segment. Elongating the central segment along the y-axis yields a slightly taller or shorter cup while preserving a common shape. In contrast, stretching the cup along the x-axis or z-axis would result in an elliptical cup, a less frequent occurrence.
An alternative energy function takes the first-order derivatives along all three axes into account:
e_3(𝐆) = |∂/∂ x𝐆| + |∂/∂ y𝐆| + |∂/∂ z𝐆|.
In Section <ref>, a comparative analysis will be conducted between the aforementioned energy function and an alternative energy function considering the first-order derivatives along all three axes. In short, aside from the higher computational cost of using <ref>, the results of the two are similar. Our augmentation algorithm follows the structure outlined in Algorithm <ref>. Notably, the augmentation of a 3D shape involves adjustments along its three axes, where the model is rotated to designate its x-axis, y-axis, and z-axis successively as the cutting axis. Furthermore, a maximum scaling factor denoted as S_max ensures that the dimension N of the model along a specific axis remains within the range [N × (1-S_max), N × (1+S_max)] after the augmentation.
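To make the procedure concrete, the following Python sketch shows the directional energy of Eq. (<ref>) and the outer per-axis loop of the algorithm outline above; the helper names (find_seam, apply_seam) and the random size-budget policy are illustrative placeholders rather than the paper's exact implementation.

import numpy as np

def energy_z(grid: np.ndarray) -> np.ndarray:
    # directional energy e_z = |dG/dz| for an occupancy or (T)SDF volume
    e = np.zeros(grid.shape, dtype=np.float64)
    e[:, :, :-1] = np.abs(np.diff(grid.astype(np.float64), axis=2))
    e[:, :, -1] = e[:, :, -2]                      # replicate at the border
    return e

def augment(grid, s_max=0.25, find_seam=None, apply_seam=None):
    # outer loop: each axis in turn becomes the cutting axis, and seams are
    # inserted/removed within the [N(1-s_max), N(1+s_max)] size budget
    rng = np.random.default_rng()
    for axis in range(3):
        vol = np.moveaxis(grid, axis, 2)           # make this axis the cutting axis
        n = vol.shape[2]
        budget = rng.integers(-int(s_max * n), int(s_max * n) + 1)
        for _ in range(abs(budget)):
            seam = find_seam(energy_z(vol))        # beam search (next subsection)
            vol = apply_seam(vol, seam, insert=(budget > 0))
        grid = np.moveaxis(vol, 2, axis)
    return grid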
§.§ Beam Search
Computing the globally optimal energy or minimal cut on a graph built from a 2D image or 3D video volume is not feasible <cit.>. Hence, the beam search algorithm is used to improve efficiency. This effective heuristic search explores multiple candidate hypotheses simultaneously. First, an energy map is initialized using Eq. (<ref>). Then, a 2D energy map is obtained by reducing the original map along the x or y axis, which we call the reducing axis:
e_x, 2D(j,k) = ∑_i e_z(i,j,k),
e_y, 2D(i,k) = ∑_j e_z(i,j,k).
The remaining axis, which is neither the cutting axis nor the reducing axis, is referred to as the main axis. Given an initial anchor point on the 2D energy map, the path is extended to its three possible neighbors above or below it (along the main axis). These extensions are then scored by the sum of their energies. However, cumulatively including all three neighbors of the current cell into the existing path would cause an exponential increase in the number of plausible paths. Hence, pruning is applied to retain only the top-n candidate extensions, effectively reducing the search space. The algorithm iteratively repeats the expansion, scoring, and pruning steps until the candidate paths connect the pixels from the top to the bottom of the 2D energy map. Finally, the lowest-cost path is selected as the anchor path. The procedure is then repeated on the 3D energy map, with the only distinction that the initial anchor, previously a single point, is replaced by the anchor path obtained from the 2D energy map.
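A minimal sketch of the 2D stage of this search is given below; for brevity the path is grown downward only from the anchor, whereas the actual method extends it both above and below, and all variable names are illustrative.

import numpy as np

def beam_search_2d(energy2d: np.ndarray, anchor: tuple, beam_size: int = 4):
    # grow a low-energy path on the 2D energy map, one row at a time
    n_rows, n_cols = energy2d.shape
    r0, c0 = anchor
    beams = [([c0], float(energy2d[r0, c0]))]      # (column trace, cost)
    for r in range(r0 + 1, n_rows):
        candidates = []
        for cols, cost in beams:
            for dc in (-1, 0, 1):                  # three neighbours in the next row
                c = cols[-1] + dc
                if 0 <= c < n_cols:
                    candidates.append((cols + [c], cost + float(energy2d[r, c])))
        candidates.sort(key=lambda x: x[1])        # score ...
        beams = candidates[:beam_size]             # ... and prune to top-n
    cols, cost = beams[0]                          # lowest-cost anchor path
    return cols, cost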
§.§ Diversity
Beyond considerations of computational complexity, an additional rationale for not using global optimal energy lies in our expectation for each augmentation to yield distinct 3D shapes. As depicted in Fig. <ref>, the selection of anchor points and the orientation of spatial seams exert a substantial influence on the trajectory of the seam, thereby ensuring diversity in the resulting models after each augmentation.
Note that for a relatively regular 3D shape, the partial derivatives of its voxels along a particular axis are mostly zero or small values. Therefore, when selecting an anchor point from a 2D energy map, a naive approach is to randomly choose a pixel as the anchor. However, as the number of iterative steps increases, the outcomes of augmentations may manifest only marginal variations. To address this, the possible anchor locations are first partitioned into several clusters using mini-batch k-means <cit.>. Subsequently, the top k/3 plausible clusters are selected, ranked by their average cost across s simulations on the 2D energy map, with random points within these clusters designated as anchors. At the beginning of each augmentation, m clusters are selected from the top k/3 clusters. For each subsequent step within the augmentation, a cluster is chosen from the selected subset, and a random point within the chosen cluster is assigned as the anchor. In our method, the possible anchors are defined as the cells that are both occupied (in other words, inside the 3D shape) and whose energy on e_z is less than a threshold ϵ, which equals 1 × 10^-3 in the experiments below.
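The anchor-selection step can be sketched as follows; since the paper ranks clusters by their average cost over s simulated 2D searches, which is omitted here, the sketch substitutes the mean cell energy as a stand-in score, and all parameter defaults are illustrative.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

def select_anchor_clusters(e_z, occ, k=30, eps=1e-3, n_keep=None):
    # candidate anchors: occupied cells whose energy is below the threshold
    coords = np.argwhere((occ > 0) & (e_z < eps))
    km = MiniBatchKMeans(n_clusters=k, n_init=3).fit(coords)
    n_keep = n_keep or max(1, k // 3)
    # stand-in ranking: mean energy of each cluster's member cells
    scores = [e_z[tuple(coords[km.labels_ == i].T)].mean() for i in range(k)]
    kept = np.argsort(scores)[:n_keep]
    return [coords[km.labels_ == i] for i in kept]

def sample_anchor(clusters, rng):
    # pick a random cell from a randomly chosen kept cluster
    cells = clusters[rng.integers(len(clusters))]
    return tuple(cells[rng.integers(len(cells))])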
In contrast to seam carving in images, for an occupancy or TSDF grid the beam search yields numerous candidate paths with equal cumulative gradients, because non-zero gradients occur only on or near the model's surface during the search. To let beam search expand the search space as much as possible, and thereby find solutions closer to the optimum, it is designed to keep solutions with larger mutual distances when multiple paths have equal costs. More specifically, suppose we have 12 paths and the beam size equals 4: 1) if exactly 4 paths have a cumulative gradient less than a specific threshold T, retain those 4 paths; 2) if a set A contains three paths whose cumulative gradients are each less than T, a set B contains three paths whose cumulative gradients each equal T, and the remaining six paths have cumulative gradients greater than T, then move the path in B with the smallest total distance to all paths in B into A, and retain the paths in A; 3) if, in the second case, the difference between the size of A and the beam size is greater than 1, first move the path in B with the largest total distance to all paths in B into a candidate set C, and then repeatedly move the path in B that is not yet in C and has the largest minimum distance to all paths in C into C, until the sizes of A and C sum to the beam size; finally, retain the paths in A and C. The distance between two paths p_1 and p_2 is defined as:
D(p_1, p_2) = ∑_x |p_1[x] - p_2[x]|.
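The tie-breaking rule can be condensed into a farthest-point selection among equal-cost paths, as in the sketch below; this compact form is our reading of the three cases above, not a line-by-line transcription.

def path_distance(p1, p2):
    # D(p1, p2) = sum_x |p1[x] - p2[x]| between two column traces
    return sum(abs(a - b) for a, b in zip(p1, p2))

def diverse_prune(paths, costs, beam_size):
    # keep cheaper paths first; among tied costs, prefer mutually distant ones
    order = sorted(range(len(paths)), key=lambda i: costs[i])
    kept, pool = [order[0]], order[1:]
    while len(kept) < beam_size and pool:
        best = min(costs[i] for i in pool)
        tied = [i for i in pool if costs[i] == best]
        pick = max(tied, key=lambda i: min(path_distance(paths[i], paths[j])
                                           for j in kept))
        kept.append(pick)
        pool.remove(pick)
    return [paths[i] for i in kept]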
§.§ Symmetry Check
Numerous methodologies are available for evaluating the symmetry of an input 3D shape. One approach involves examining the chamfer distance between sampled surface points from each half and comparing it with a predetermined threshold. In our proposed method, we quantify symmetry using the elementwise mismatch rate between two halves. Taking inspiration from the F_1 score, which appropriately balances precision and recall, we employ the harmonic mean of mismatch rates for both occupied and unoccupied cells:
m_x(i,j,k) = 𝐆_𝐨(i,j,k) ⊕𝐆_𝐨(N_i-i,j,k)
rate_o,x = ∑_i,j,k m_x(i,j,k) ×𝐆_𝐨(i,j,k)/∑_i,j,k𝐆_𝐨(i,j,k)
rate_u,x = ∑_i,j,k m_x(i,j,k) × (1 - 𝐆_𝐨(i,j,k))/∑_i,j,k (1 - 𝐆_𝐨(i,j,k))
rate_x = 2 ×rate_o,x×rate_u,x/(rate_o,x + rate_u,x)
where ⊕ stands for the XOR operation, and rate_x represents the mismatch rate of the input 3D shape along the x-axis. The mismatch rates along the y-axis and z-axis are defined analogously, so we omit them here.
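The check reduces to a few array operations, as sketched below; the reflection index differs from the 1-based formula above only by the usual 0-based offset.

import numpy as np

def mismatch_rate_x(occ: np.ndarray) -> float:
    # harmonic mean of the occupied / unoccupied mismatch rates along x
    m = np.logical_xor(occ, occ[::-1, :, :])       # element-wise mismatch map
    rate_o = (m & (occ == 1)).sum() / max(occ.sum(), 1)
    rate_u = (m & (occ == 0)).sum() / max((1 - occ).sum(), 1)
    if rate_o + rate_u == 0:
        return 0.0                                 # perfectly symmetric
    return 2 * rate_o * rate_u / (rate_o + rate_u)

# a shape is treated as x-symmetric when mismatch_rate_x(occ) < T_s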
A 3D shape is considered symmetrical along a specific axis when the mismatch rate is below a predefined threshold, denoted as T_s. If the condition is satisfied, the seam search produces two results through mirror transformation (as long as the path is not itself symmetrical along that axis), and the output is determined by selecting the seam with the lower total energy. If the input model is symmetrical along multiple axes, an exhaustive exploration of all feasible combinations of mirror transformations is conducted, and the optimal seam is selected as the final output.
§.§ Performance Trade-offs
Finding a near-optimal seam is hard, particularly when dealing with high-resolution grids. In our experiments, augmenting a 3D shape with a resolution under 128 may require a few seconds, whereas resolutions exceeding 512 can extend the process to several minutes. One plausible optimization is to run only the 2D beam search and stack the identified path along the reducing axis to form the resulting seam. Although this brings a significant speed-up, it is accompanied by a substantial decline in seam quality. Hence, we introduce an additional filtering mechanism: if the average energy of the seam surpasses a threshold T_c, the seam is discarded. In our experiments, the algorithm gives satisfactory results when:
e_avg = ∑_i,j,k e(i,j,k)/(N_i × N_j × N_k)
T_c =
e_avg × 0.25, if e_avg > 4 × 10^-4
1 × 10^-4, otherwise
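As a small illustration, the filter amounts to the following check, with the thresholds taken from the equations above:

def seam_threshold(energy_grid):
    # adaptive cut-off T_c for seams produced by the 2D-search shortcut
    e_avg = energy_grid.mean()
    return e_avg * 0.25 if e_avg > 4e-4 else 1e-4

def accept_seam(seam_energies, energy_grid):
    # keep the seam only if its average energy stays below T_c
    return sum(seam_energies) / len(seam_energies) < seam_threshold(energy_grid)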
§ EXPERIMENTS
Dataset.
We employ ShapeNetV2 <cit.> as the evaluation set for our augmentation method. In our experiments, we use Open3D to calculate the TSDF grid and the occupancy grid, which integrates multiple depth images to derive the TSDF representation. Unless specified otherwise, the models in the following experiments are represented by occupancy grids.
Qualitative demonstration.
The augmented outcomes by our method are compared with those obtained through axis scaling <cit.>, piecewise linear warping <cit.>, and spectral augmentation technique <cit.>. In alignment with <cit.>, we independently scale each axis by uniformly sampling scaling factors within the range [0.75, 1.25]. For the piecewise linear warping, we first normalize the vertices of the models to constrain the coordinate values to the interval [-1, 1]. Subsequently, we define a piecewise linear warping function by partitioning the interval [-1, 1] into 6 even sub-intervals. The warping factors are sampled from a log-normal distribution with a standard deviation of 0.25. Consistent with the previous research, the warping factors exhibit symmetry with respect to the x and z-axes. We adopt identical configurations as outlined in <cit.> during the implementation of spectral augmentation.
As shown in Fig. <ref>, we compare the augmented results using Eq. <ref> with an alternative function taking the first-order derivatives along all three axes into account using Eq. <ref>. In most cases, the results are similar. One difference is that if the anchor cell is not an occupied cell (which cannot happen in our method), the seam paths sometimes tend to pass through the tips of the 3D model, where the sum of the energy is low, if Eq. <ref> is used. This can result in a 3D shape deviating from common expectations, for example, the flattened tips of the planes' noses.
In Fig. <ref> and Fig. <ref>, we present illustrative instances of augmented 3D shapes spanning various genres and styles. We utilize occupancy grids and TSDF grids to represent the models, respectively. The outcomes demonstrate the effectiveness of our approach in augmenting a variety of 3D shapes that align with common expectations.
Quantitative Evaluation.
Intuitively, 3D model augmentation should enhance the quality and diversity of results generated by shape-generation algorithms. We employ Neural Wavelet-domain Diffusion <cit.> as the baseline model and compare the outcomes with and without our augmentation method during training. It is noteworthy that, apart from the airplane class models, which contain more than 4000 samples, the number of models in other categories is relatively limited. Specifically, the categories of bookshelf, bed, tower, and camera contain 452, 233, 132, and 113 models, respectively. Prior to training, we augment each model 8 times.
In line with the previous research <cit.>, we uniformly sample 2,048 points on each generated shape and evaluate the shapes using three evaluation metrics: 1) minimum matching distance (MMD), which assesses the fidelity of the generated shapes; 2) coverage (COV), indicating the extent to which the generated shapes encompass the given 3D repository; and 3) 1-NN classifier accuracy (1-NNA), measuring the effectiveness of a classifier in distinguishing the generated shapes from those in the repository. Generally, a low MMD, a high COV, and a 1-NNA close to 50% signify satisfactory generation quality. For the airplane models, we generated 2,000 models for evaluation purposes. For other categories, we generated an equal number of models as the training set for evaluation.
From Table <ref>, it is evident that our augmentation method significantly enhances the performance of shape generation algorithms based on occupancy grid, SDF grid, or TSDF grid, particularly when the training data is insufficient.
Human preference study.
Given that one of the primary goals of our approach is to enlarge the 3D shape dataset, the augmented data holds potential for various downstream applications, such as 3D shape generation, classification, detection, and so on. Consequently, it is imperative for the augmented models to exhibit both logical coherence and diversity. Otherwise, the training set for downstream tasks may be contaminated, yielding unsatisfactory outcomes. Evaluating the quality of 3D shapes is a complex task. We focus on quantitative metrics derived from human preference evaluations, considering two key dimensions: diversity and visual quality.
Using models sourced from ShapeNetV2 <cit.> as our initial seed, we apply various methods to generate eight augmented 3D shapes for each model. A questionnaire is then presented, displaying eight 3D shapes augmented by our method compared to eight by a baseline. Both sets share identical rendering configurations and originate from the same source model. Subsequently, we pose two binary choice questions to assess preferences: 'Among the two groups of 3D models, which of them do you think makes more sense to you? In other words, which one do you think exhibits a higher quality?' and 'Which group do you think is more diverse? (Please exclude models that you consider to be of low quality when evaluating.)'. Our study involves 150 participants, each compares 10 pairs of results (2 models from each category).
As shown in Table <ref>, our proposed method shows competitive performance against axis scaling in terms of model quality and is preferred over the other baselines on the remaining dimensions. It utilizes gradients to evaluate which parts of the model can be deformed and in which directions. This capability is likely the key factor contributing to its higher human preference.
Limitations. Although the approach yields augmented 3D shapes of high quality, we observed that the generated shapes may exhibit artifacts when applied to pixel-style 3D shapes. As illustrated in Fig. <ref>, a genuine jet plane does not feature square engines or a square body; these introduce irregular gradients, and our method struggles to accurately estimate the appropriate scaling direction for such components. In some other instances, our approach generates augmented desks and chairs with uncommon shapes. For instance, we observe tables with unusually chunky tabletops and chairs with very thick backrests.
§ CONCLUSION
In this paper, we introduce a novel 3D model augmentation method based on 3D seam carving. Our approach produces diverse and high-quality results across various types of models in the form of both occupancy grids and SDF grids. The proposed method may be useful for many tasks related to 3D models such as model generation, detection, scene segmentation, 3D model recognition, and so on. We expect that this method could offer more pronounced assistance for training deep learning models with smaller datasets.
|
http://arxiv.org/abs/2405.09585v1 | 20240515073106 | An Embarrassingly Simple Approach to Enhance Transformer Performance in Genomic Selection for Crop Breeding | [
"Renqi Chen",
"Wenwei Han",
"Haohao Zhang",
"Haoyang Su",
"Zhefan Wang",
"Xiaolei Liu",
"Hao Jiang",
"Wanli Ouyang",
"Nanqing Dong"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
An Embarrassingly Simple Approach to Enhance Transformer Performance in Genomic Selection for Crop Breeding
May 20, 2024
===========================================================================================================
Genomic selection (GS), as a critical crop breeding strategy, plays a key role in enhancing food production and addressing the global hunger crisis. The predominant approaches in GS currently revolve around employing statistical methods for prediction. However, statistical methods often come with two main limitations: strong statistical priors and linear assumptions. A recent trend is to capture the non-linear relationships between markers by deep learning. However, as crop datasets commonly consist of long sequences with limited samples, the robustness of deep learning models, especially Transformers, remains a challenge. In this work, to unleash the unexplored potential of the attention mechanism for the task of interest, we propose a simple yet effective Transformer-based framework that enables end-to-end training of the whole sequence. Via experiments on the rice3k and wheat3k datasets, we show that, with simple tricks such as k-mer tokenization and random masking, Transformer can achieve overall superior performance against seminal methods on GS tasks of interest.
§ INTRODUCTION
Grain production security serves as the cornerstone of human existence, exerting a pivotal role in shaping the health, stability, and prosperity of global society. Addressing global hunger not only aligns with the United Nations Sustainable Development Goals (SDGs) <cit.> of Zero Hunger, but also adheres to the Leaving No One Behind Principle (LNOB) <cit.>. However, according to the report by the Food and Agriculture Organization of the United Nations in 2023 <cit.>, around 735 million people worldwide were suffering from hunger. Therefore, utilizing technological advancements to boost grain production is vital for eliminating hunger by 2030. Crop breeding stands as a fundamental approach to enhancing grain production. Continuously improving crop varieties to increase yield and resilience effectively boosts grain production, thereby meeting the escalating demand for food.
Genomic selection (GS), proposed by <cit.>, is regarded as a promising breeding paradigm <cit.>. GS usually involves predicting the phenotypes of polygenic traits in plants or crops by using high-density markers covering the entire genome. By implementing early screening of candidate populations, the genetic process can be accelerated, reducing generation intervals and significantly shortening breeding cycles. Unlike phenotype selection <cit.> and marker-assisted selection <cit.>, GS can utilize single nucleotide polymorphism (SNP) data obtained from organisms. SNP sequences are loci sequences with two or more alleles extracted from DNA sequences, which are the most widely adopted genetic markers in GS.[Stripping away segments that exhibit the same patterns from DNA sequences to obtain SNP sequences is advantageous for analyzing genotype-phenotype associations, as phenotypic differences often arise from distinct segments between two DNA sequences.]
A mainstream solution to GS is to mine the statistical knowledge of SNP data. GBLUP <cit.> and SSBLUP <cit.> utilize kinship matrices to weight and aggregate the phenotypic values of different individuals, thereby obtaining their estimated breeding values (EBVs), but may not fully explore and utilize genomic information. Another category of statistical methods involves estimating marker effects on a reference population and then predicting marker effects on a candidate population, followed by linear aggregation to obtain EBVs for the candidates <cit.>. These methods typically demand substantial computational resources and lack parallelization capabilities. Their practicality in time-sensitive breeding scenarios is constrained. In addition, two common limitations of statistical methods are the strong statistical priors (e.g., Gaussian distribution) and the linear relationship assumption between markers. Though these two assumptions simplify the statistical computation, statistical models suffer a setback when the underlying data distribution differs.
In order to capture the non-linear relationships between markers, efforts have been made by applying deep learning to process SNP sequences, especially with convolutional neural networks (CNN) <cit.> and Transformers <cit.>. However, two major data challenges threaten the robustness of deep learning models. First, the number of samples for each crop species is limited. This can easily cause overfitting for deep learning models, while statistical models are commonly deemed robust solutions. Second, SNP data are long sequences. Besides, plant genomes, unlike those of animals, demonstrate a higher frequency of long insertions and deletions attributed to the activity of transposable elements <cit.>, leading to a greater abundance of SNP data among different plant genomes. The locality of convolution introduces a strong inductive bias for local dependencies, and thus cannot catch the long-range interactions between markers. On the contrary, while Transformers can better tackle long-range dependencies in sequences, attention's quadratic complexity becomes the major bottleneck for long sequences <cit.>.
To mitigate the effect of long sequences, an intuitive way is to statistically pre-process the data before building deep learning models. However, the limitations of statistical methods still remain. For example, DNNGP utilizes principal component analysis (PCA), a statistical dimensionality reduction technique, and condenses the sequence to only several hundred dimensions for CNN modeling <cit.>. However, the viability of this hinges on fulfilling the assumption of PCA which states that there exist linear correlations among variables in the dataset, but this might not be true for the majority of SNP datasets. Thus, the linear dimensionality reduction techniques may result in information loss regarding loci that influence phenotypes. Another statistical way to shorten the sequence is traditional feature selection, i.e., selecting important features out of hundreds of thousands of SNPs <cit.>. For example, a promising method is to employ genome-wide association studies (GWAS) to identify major effect loci within gene sequences <cit.>. However, these methods heavily rely on the accuracy of GWAS and are not conducive to modeling and analyzing quantitative traits.
To address the aforementioned challenges of statistical and deep learning methods, we propose a simple yet effective Transformer-based method that supports end-to-end training. The proposed method leverages simple tricks such as k-mer tokenization, and random masking to achieve robust predictive performance from genotypes to phenotypes on crop breeding datasets. It is worth mentioning that, though similar natural language processing (NLP) techniques have been investigated on DNA data due to the similarities between DNA sequences and text sequences <cit.>, there is no such application on SNP data yet. To the best of our knowledge, we are the first to successfully adapt these simple NLP techniques to address the GS problem.
Compared with statistical methods, our method does not require rigid statistical priors or linear assumptions, and thus can better capture the non-linear relationships. Supported by GPU, our method has a significantly shorter inference time. Compared with existing deep learning methods, our method can better leverage the attention mechanism <cit.> to assist contextual understanding.
We summarize the differences between our method and seminal methods in Tab. <ref>.
We extensively evaluate the robustness of our method on two crop datasets, rice3k <cit.> and wheat3k. rice3k is a public dataset on rice with SNP data. On the rice3k dataset, we outperform the current best-performing method (a hybrid method of GWAS and Transformer) by over 1.05% on average in the accuracy metric. wheat3k is a private cereal grain dataset to be released, which contains 3032 SNP sequences following a setup similar to rice3k. Our method also consistently outperforms the seminal baselines on wheat3k. Our main contributions are summarized as follows.
* We propose an end-to-end Transformer-based framework for genomic selection that enables capturing non-linear relationships between genotypes and phenotypes.
* We show that in genotype-to-phenotype prediction tasks, using k-mer tokenization and random masking can effectively reduce the data complexity while enhancing model prediction performance.
* We conduct extensive experiments on the rice3k and wheat3k datasets, where our method achieves the state-of-the-art performance on both datasets against mainstream methods.
§ RELATED WORK
§.§ Analysis-based Genomic Selection
Currently, analysis-based genomic selection methods are mainly classified into two families: best linear unbiased prediction (BLUP) methods and Bayesian methods <cit.>. BLUP-based methods evaluate random effects using a linear model. GBLUP <cit.> constructs a pedigree relationship matrix using genomic information, incorporating it as a random effect into the model to estimate genetic values or predict trait values of individuals. RRBLUP <cit.> builds upon GBLUP by incorporating the concept of ridge regression. SSBLUP <cit.> integrates both the genomic relationship matrix and pedigree relationship matrix, along with phenotypic data, into a unified mixed model. For Bayesian methods, such as <cit.>, the marker effects are assumed to follow different Gaussian distributions. For example, BayesA <cit.> assumes that each marker has its own distribution and variance. Though Bayesian methods can reveal the relationship between genotype and phenotype to some extent, they are constrained by strong statistical priors and linear assumptions. In addition to the two families of methods, statistical learning methods <cit.> may also improve prediction accuracy on specific datasets. However, this improvement might not generalize, especially when the number of samples is significantly smaller than the number of feature dimensions.
§.§ Deep Learning-based Genomic Selection
Fueled by the recent success of deep learning in vision and language tasks, researchers have attempted to integrate deep learning into the Genome Selection (GS) field. DeepGS <cit.> proposes a genome-wide selection framework based on CNN, while DualCNN <cit.> utilizes a dual-stream CNN to predict quantitative traits. ResGS <cit.> uses ResNet <cit.> to extract gene sequence information without using pooling layers. DNNGP <cit.>, a seminal work, performs PCA to reduce the high dimensionality of gene data to extract effective information. However, the limited convolutional receptive field hinders the model's ability to capture linkage disequilibrium loci in long SNP sequences.
Transformer <cit.> has exhibited excellent performance in modeling global interactions within sequences. GPformer <cit.> applies Informer <cit.>, a model from the long-term time series prediction field, to aggregate periodic information and capture region-specific features in SNP sequences. However, the computational complexity of Transformer also has a quadratic relationship with the sequence length. The current Transformer-based methods <cit.> are unable to handle the whole genetic SNP sequences for most crop species. There are hybrid methods <cit.> that use GWAS to pre-process the raw SNP sequence, but the results heavily depend on the results of GWAS and exhibit unstable performance across species. Instead, we propose to leverage tokenization and optimization acceleration techniques to achieve full attention computation on long SNP sequences, thereby mining long-range interactions and obtaining more accurate and robust predictive models.
§.§ Sequence Representation
A straightforward method to handle SNP data is to directly model loci using additive encoding, which is widely adopted by seminal methods <cit.>. However, representing genotypes solely by the number of non-reference alleles is overly rigid.
An emerging research topic is to leverage language models to model DNA data. However, unlike human language, where each word carries rich meaning, DNA's lexical units consist of only four basic nucleotide bases (A, T, C, G) with relatively ambiguous meanings, making them lower-level abstractions. For example, DNABERT <cit.> and DNAGPT <cit.> have shown that encoding DNA sequences into k-mer patterns can improve the performance on downstream tasks without losing information. DNABERT-2 <cit.> uses random masking for unsupervised pre-training on DNA sequences. However, as there are fundamental differences between DNA and SNP sequences, it is still unclear whether these NLP techniques work on SNP data. In this work, we present the first empirical study.
§ PROBLEM FORMULATION
We formalize the genomic selection problem as a mapping learning task. Given a genotype-phenotype pair (x, y), the final goal is to learn a mapping (or function) f: x ↦ y.
In this work, the genotype data x is the raw SNP sequence. Let S={s_1,s_2,…,s_N_l} denote the SNP sequence, where N_l is the sequence length. We have s_i∈{A, T, C, G, R, Y, S, W, K, M, N}, where different letters represent different combinations of alleles <cit.>. In this work, the definition of each letter is presented in Tab. <ref>.
The phenotype data y represents the corresponding phenotype values, which can either be discrete or continuous, depending on the task of interest (i.e., classification or regression).
Let N_s denote the number of samples for the species of interest in a crop dataset. In contrast to common machine learning tasks, we have N_s ≪ N_l. This poses a non-trivial challenge for overfitting-prone methods such as Transformer, and thus Transformer cannot be directly used.
§ METHODOLOGY
The focus of our framework is to mitigate the limitations of Transformer and capture the contextual information and language patterns of the SNP sequence. To this end, we first pre-process the raw SNP sequence by an initial mapping that does not change its length. After that, we implement k-mer tokenization and random masking to convert the SNP sequence to a token ID sequence. Then, embedding vectors are generated by the embedding layer and positional encoding <cit.>. Finally, the embedding vectors are mapped to phenotype predictions by a neural network. The overall architecture is elaborated in Fig. <ref>.
§.§ Pre-processed SNP Sequence
In the candidate list of the sequence, {A, T, C, G} forms the majority and {R, Y, S, W, K, M, N} the minority. To avoid excessively large k-mer vocabularies contributed by the minority and to improve the computational efficiency of the subsequent self-attention, we generate the pre-processed SNP sequence S^p by the mapping rule:
s_i^p = s_i, if s_i∈{A, T, C, G}
N, if s_i∉{A, T, C, G}
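A one-line sketch of this mapping (assuming IUPAC-style genotype letters, with the four homozygous calls kept and everything else collapsed to N):

MAJOR = set("ATCG")

def preprocess(snp: str) -> str:
    # length-preserving pre-processing: collapse minority letters to 'N'
    return "".join(s if s in MAJOR else "N" for s in snp)

# e.g. preprocess("ATRCWG") -> "ATNCNG"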
§.§ Sequence Tokenizer
Considering the biological significance of SNP sequences and to mitigate the attention cost on long sequences, we choose non-overlapping k-mers as the tokenization technique for the pre-processed SNP sequence S^p. Let T= {…,t_i,t_i+1,t_i+2,…} denote the token ID sequence produced by the k-mer operation. Below is an illustration of a 4-mer operation with a simulated sequence and token IDs.
T = mer(S^p, k=4)
={…,(s_1^p s_2^p s_3^p s_4^p),(s_5^p s_6^p s_7^p s_8^p),(s_9^p s_10^p s_11^p s_12^p),…}
={…,39,502,346,…}∈ℝ^1×⌊N_l/k⌋
mer(·, k) is a function that treats each substring of length k as a whole entity and converts it to a unique ID. Although k-mer tokenization shortens the SNP sequence to 1/k of the original length N_l, the problem of N_s ≪ N_l still exists.
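A minimal tokenizer sketch; the base-5 positional encoding over an assumed symbol order realizes the 5^k-entry vocabulary (one extra ID is reserved for the mask):

ALPHABET = {c: i for i, c in enumerate("ATCGN")}   # assumed symbol order

def kmer_tokenize(seq: str, k: int = 6) -> list:
    # non-overlapping k-mers, each mapped to an ID in [0, 5^k)
    ids = []
    for i in range(0, len(seq) - k + 1, k):        # floor(N_l / k) tokens
        tid = 0
        for c in seq[i:i + k]:
            tid = tid * 5 + ALPHABET[c]
        ids.append(tid)
    return ids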
We introduce randomness by random masking on token ID sequence to reduce the risk of overfitting:
T^m= Mask(T)={…,t_i^m,t_i+1^m,t_i+2^m,…}.
Mask(·) is a function that converts each original token ID to mask_id (a separately defined mask token) with a pre-defined probability p. For the i^th position in T^m, we have
t_i^m=Mask(t_i)=
mask_id, with probability p
t_i, with probability 1-p
The final embedding vector is the sum of the embedded masked token ID sequence and the positional encoding.
E_0 = Embed(T^m)+ E_pos
Embed(·) denotes the embedding layer, where the vocabulary size is 5^k+1. The “5^k” represents the number of all possible combinations and “1” denotes the specific Mask ID for unknown cases. E_pos∈ℝ^1×⌊N_l/k⌋× D_w is the sinusoidal positional encoding <cit.>, where D_w denotes embedding width.
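A PyTorch sketch of this stage, combining the independent Bernoulli masking, the embedding lookup, and the sinusoidal positional encoding; the toy sequence length is illustrative.

import math, torch, torch.nn as nn

def random_mask(tokens, mask_id, p=0.15):
    # replace each token ID by mask_id independently with probability p
    mask = torch.rand(tokens.shape) < p
    return torch.where(mask, torch.full_like(tokens, mask_id), tokens)

def sinusoidal_pe(n_pos, d):
    # standard sinusoidal positional encoding of shape (n_pos, d)
    pos = torch.arange(n_pos).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, d, 2).float() * (-math.log(10000.0) / d))
    pe = torch.zeros(n_pos, d)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

k, d_w = 6, 32
embed = nn.Embedding(5 ** k + 1, d_w)              # 5^k k-mers + one mask ID
tokens = torch.randint(0, 5 ** k, (1, 1000))       # toy token ID sequence
e0 = embed(random_mask(tokens, mask_id=5 ** k)) + sinusoidal_pe(1000, d_w)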
§.§ Phenotype Learning
We use Transformer encoder to learn the contextual information of encoded SNP sequences, which consists of L layers of multi-head self-attention (MSA) <cit.> and multi-layer perceptron (MLP) blocks.
E'_l=MSA(LN(E_l-1))+E_l-1
E_l=MLP(LN(E'_l))+E'_l
LN(·) is the layer normalization <cit.>.
After computing the attention, a linear projector LP(·) is used to map the output to a lower-dimensional space, thus lowering the computational complexity in the regressor:
E_P = LP(E_L)
Finally, the phenotype prediction y_p can be output as:
y_p = MLP(Flatten(E_P)),
where the projected vector is flattened and fed into an MLP layer to predict the phenotype.
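Putting the pieces together, a compact PyTorch sketch of the learning module follows; the head's hidden width, the number of attention heads, and the omission of the positional encoding (shown earlier) are our simplifications.

import torch.nn as nn

class SNPTransformer(nn.Module):
    def __init__(self, vocab, d_w=32, n_layers=3, n_heads=4,
                 seq_len=1000, d_proj=8, n_out=1):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_w)
        layer = nn.TransformerEncoderLayer(d_model=d_w, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.proj = nn.Linear(d_w, d_proj)                 # LP(.)
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(seq_len * d_proj, 64),
                                  nn.ReLU(), nn.Linear(64, n_out))

    def forward(self, tok):                                # tok: (B, seq_len)
        e = self.encoder(self.embed(tok))                  # L encoder blocks
        return self.head(self.proj(e))                     # (B, n_out)

For variety recognition, n_out equals the number of classes and the model is trained with nn.CrossEntropyLoss; for trait prediction, n_out = 1 with nn.MSELoss, matching the two losses below.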
There are two types of learning tasks in predicting phenotype from genotype information: variety recognition and trait prediction, corresponding to classification and regression tasks, respectively.
For the classification task, we use the cross entropy (CE) as the loss function:
ℒ_cls = CE(y_p, y_l),
where y_l denotes the label.
For the regression task, we define the loss function by Mean Squared Error (MSE):
ℒ_reg = MSE(y_p, y_l).
§ EXPERIMENTS
§.§ Setup
Datasets
We use two crop datasets: rice3k <cit.> and wheat3k, representing the two most significant staple crops worldwide. Each dataset includes raw SNP sequences and multiple traits of interest (phenotype).
rice3k is a public dataset that comprises genomic data and the corresponding phenotype data for a total of 3024 samples collected from 89 countries. For genomic data, the SNP sequence for each sample has a length of 404388. rice3k encompasses multiple quality traits, i.e., discrete variables. We select six valuable and well-structured phenotypes for experimentation, including APCO_REV_REPRO, CUST_REPRO, LPCO_REV_POST, LSEN, PEX_REPRO, and PTH. The definitions of these phenotypes are described in Tab. <ref>.
wheat3k is a private dataset to be released. wheat3k is collected following the setup of rice3k to provide a comprehensive study on wheat. It comprises genomic data and the corresponding phenotype data for a total of 3032 samples. Each SNP sequence has a length of 201740. In contrast to rice3k, wheat3k focuses on quantitative traits, i.e., continuous variables. We select six valuable and well-structured phenotypes for experimentation, including COLD, FHB, LODGING, STERILSPIKE, TKW, and YIELD. The definitions of these phenotypes are described in Tab. <ref>.
Implementation
We implement the proposed framework using PyTorch and conduct experiments on an NVIDIA 3090 GPU. There are a total of 80 epochs with early stopping. The dimension of the sequence embedding after the embedding layer is set to 32 (i.e., D_w=32). Note that the vocabulary size of the embedding layer is 5^k+1, with 1 additional ID for unknown words. Following the settings in DNABERT <cit.>, we use 6-mer (i.e., k=6) and the default random masking ratio is 15% (i.e., p=0.15). We use 3 Transformer encoder blocks in the learning module and an Adam optimizer <cit.> with an initial learning rate of 0.0001 and weight decay of 0.01. All experiments are repeated with five-fold cross-validation, following the standard practice in GS. The reported numbers are the average metrics and standard deviations. Following <cit.>, each phenotype of interest is treated as an independent task for training.
Evaluation Metrics
Following the setups of <cit.>, we adopt accuracy (ACC) as the evaluation metric for the classification task (rice3k dataset) and the Pearson correlation coefficient (PCC) to evaluate the regression performance (wheat3k dataset) of our model. PCC is defined as
PCC = ∑_i=1^n (y_p,i - y̅_p)(y_l,i - y̅_l)/√(∑_i=1^n (y_p,i - y̅_p)^2 ∑_i=1^n (y_l,i - y̅_l)^2).
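A direct NumPy implementation of this metric:

import numpy as np

def pcc(y_pred, y_true):
    # Pearson correlation coefficient between predictions and labels
    yp, yl = y_pred - y_pred.mean(), y_true - y_true.mean()
    return float((yp * yl).sum() / np.sqrt((yp ** 2).sum() * (yl ** 2).sum()))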
Baselines
To demonstrate the robustness of our method, we select the following seminal methods for comparison: GBLUP <cit.>, RidgeClassifier <cit.>, BayesA <cit.>, DNNGP <cit.>, and DLGWAS <cit.>.
GBLUP is a representative BLUP-based method and performs well across various regression tasks. To suit the classification tasks of rice3k, we replace GBLUP with RidgeClassifier, which shares similar statistical assumptions. BayesA is a robust Bayesian method with simple assumptions. We conduct 1500 iterations to ensure convergence. DNNGP and DLGWAS are two state-of-the-art deep learning methods. We use the source code of DNNGP[<https://github.com/AIBreeding/DNNGP>]. We pre-process SNP sequences using PCA and retain 3000 principal components to preserve 95% variance information. We implement DLGWAS as an integration of GWAS and Transformer. GWAS uses the mixed linear model <cit.> to analyze the correlation of SNP loci, retaining the top 10% of loci based on effect size. After locus screening, the relative order of loci retained in the original sequence is preserved for Transformer modeling.
The model configuration information is kept as consistent as possible for all baselines.
§.§ Results
Performance on Classification Tasks
Tab. <ref> presents the classification results of different models on the rice3k dataset. On the phenotypes of APCO_REV_REPRO, CUST_REPRO, LSEN, PEX_REPRO, and PTH, our model achieves the best performance compared to the existing models in the ACC metric. On LPCO_REV_POST, we also obtain competitive results. Specifically, we surpass the current best-performing method DLGWAS by over 1.05% across these 6 traits on average. The robustness and advancement of our simple approach to enhancing Transformer are evidenced by the overall elevation in mean values and reduction in standard deviation. These outcomes position our method as a pioneering benchmark in the field.
Performance on Regression Tasks
We further analyze each baseline's capacity on the wheat3k dataset, as tabulated in Tab. <ref>. Similar to the outstanding performance on the rice3k dataset, our model also achieves the best performance compared to existing methods, where the higher PCC metric and lower standard deviation clearly prove its superiority. On the YIELD task, which is important as it signifies wheat yield, our method is 1.63% superior to the current state-of-the-art DLGWAS, which holds potential for the application of genomic selection in crop breeding.
§.§ Ablation Studies
To investigate the contribution of each key component of our proposed method, we conduct a series of ablation experiments on the wheat3k dataset.
Analysis of Pre-Processing
We conduct ablation experiments to gain deeper insight into the key components of the tokenization. Note that we choose the model without the pre-processing module as the baseline when studying the impact of each component in Tab. <ref>. We find that k-mer tokenization plays a more significant role than random masking, enhancing the performance of the baseline model by 1.24%, 8.42%, and 3.81% in PCC on the three sub-tasks of the wheat3k dataset. This suggests that using k-mer in pre-processing can help the attention mechanism better capture contextual information. Another finding is that when utilizing k-mer as the tokenizer, random masking yields greater improvements than applying random masking to the baseline alone. The difference lies in whether the smallest unit of masking is 1 bit or k bits, which relates to the influence of masking on context. This demonstrates the importance of context extraction on k-mers. Compared with DLGWAS, the Transformer without k-mer and random masking shows inferior performance, and our model enhances the performance. This demonstrates the effectiveness of the key components.
Analysis of k-mer Tokenization
We study the influence of k in k-mer tokenizer in Fig. <ref>.
Note that when k=5, our model obtains the best results on the FHB and YIELD traits, but the best result on STERILSPIKE occurs at k=6. We hypothesize that the optimal value of k might be task-dependent.
Analysis on Random Masking
We conduct experiments on the random masking proportion to study the effect of randomness in the tokenization. In Tab. <ref>, the PCC value on the three sub-tasks improves as the random masking proportion increases, reaching its peak when the proportion equals 30%. By randomly masking tokens, the data diversity is enhanced, thereby promoting the model's generalization ability and alleviating the overfitting issues arising from data scarcity, especially in our problem formulation (N_s ≪ N_l). When the masking ratio becomes too high, e.g., 45%, it can lead to diminishing returns and worsen the effectiveness of the model, because excessively masking a large portion of tokens may disrupt the coherence of the input sequence and make it harder for the model to learn meaningful representations.
Choice of Tokenizer
We compare our method with seminal tokenization algorithms to demonstrate the effectiveness of k-mer and random masking, as shown in Tab. <ref>. The character-level tokenizer treats each letter in the SNP sequence as an independent feature, where we use {A, T, C, G, N} plus one additional ID for unknown words as the vocabulary. Byte-Pair Encoding builds a sub-word vocabulary of size 8000. We also implement two learnable tokenizers: one MLP layer and one Transformer encoder, where the input is the SNP sequence and the embedding width of the encoding output is 20. Compared with the second-best tokenizer (Learnable Transformer), our method achieves 1.20%, 2.22%, and 1.08% PCC improvements on FHB, STERILSPIKE, and YIELD, respectively, providing a better SNP sequence representation for the learning module.
Computational Cost
Lastly, to comprehensively assess a model, the computational cost should be considered. Taking wheat3k as an example, the original sequence length is 201740. As tabulated in Tab. <ref>, we use parameter count (Parameters), GPU memory consumption (Memory), and inference time (Time) to measure all models. GBLUP shows fast inference as a lightweight linear model, but it sacrifices degrees of freedom, leading to lower accuracy in Tab. <ref>. After PCA, DNNGP only retains 3000 principal components for the CNN, thus incurring low computational costs. However, DNNGP reports the lowest performance on both datasets. For the vanilla Transformer without tokenization, the original SNP sequence of length 201740 is input into the attention mechanism. For DLGWAS and Ours (6-mer), the inputs to both Transformer encoders consist of 33623 dimensions. For Ours (5-mer), this value is 40348.
Compared with the statistical method BayesA, our method provides a quicker inference time, and compared with vanilla Transformer, our method efficiently reduces GPU memory cost.
§ CONCLUSION
In this study, we present a simple yet effective approach to enhance Transformer's performance in the realm of genomic selection for crop breeding. Specifically, we propose the pre-processing module composed of k-mer tokenizer and random masking to assist contextual understanding of SNP sequence. Experiments on the rice3k and wheat3k datasets demonstrate promising performance on genotype-to-phenotype prediction. The empirical findings of this work not only suggest the potential of DL to handle long sequences but also pose a new research direction on end-to-end genomic selection.
Meanwhile, we shall notice that though tokenization plays an important role in genomic selection, a comprehensive understanding is still needed in future work.
§ ACKNOWLEDGEMENTS
This work is supported by Shanghai Artificial Intelligence Laboratory. This work is also supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No.XDA0450203.
|
http://arxiv.org/abs/2405.10124v1 | 20240516142254 | Smoothing Linear Codes by Rényi Divergence and Applications to Security Reduction | [
"Hao Yan",
"Cong Ling"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Smoothing Linear Codes by Rényi Divergence and Applications to Security Reduction
Hao Yan, Cong Ling
Imperial College London
May 20, 2024
=================================================================================
The concept of the smoothing parameter plays a crucial role in both lattice-based and code-based cryptography, primarily due to its effectiveness in achieving nearly uniform distributions through the addition of noise. Recent research by Pathegama and Barg has determined the optimal smoothing bound for random codes under Rényi Divergence for any order α∈ (1, ∞) <cit.>. Considering the inherent complexity of encoding/decoding algorithms in random codes, our research introduces enhanced structural elements into these coding schemes. Specifically, this paper presents a novel derivation of the smoothing bound for random linear codes, maintaining the same order of Rényi Divergence and achieving optimality for any α∈ (1,∞). We extend this framework under KL Divergence by transitioning from random linear codes to random self-dual codes, and subsequently to random quasi-cyclic codes, incorporating progressively more structures. As an application, we derive an average-case to average-case reduction from the Learning Parity with Noise (LPN) problem to the average-case decoding problem. This reduction aligns with the parameter regime in <cit.>, but uniquely employs Rényi divergence and directly considers Bernoulli noise, instead of combining ball noise and Bernoulli noise.
§ INTRODUCTION
Smoothing Parameter.
In the context of lattices or codes, smoothing refers to the phenomenon where adding sufficiently large noise causes the final distribution to approximate uniformity over the entire Euclidean or Hamming space. The smoothing parameter is defined as the threshold noise level associated with a specific approximation error, ϵ. Conceptually, it quantifies the degree of "smoothness" required in the noise distribution to ensure the discrete structure of the lattice Λ or code C becomes indiscernible. This parameter is pivotal in transforming lattice or code decoding problems into cryptographic security proofs. Specifically, in lattice-based cryptography, the security of problems such as the Short Integer Solution (SIS) and Learning With Errors (LWE) is reduced to the difficulty of lattice problems involving the smoothing parameter, as measured by Statistical Distance (SD) <cit.><cit.><cit.><cit.>, establishing a foundation for post-quantum cryptography. Smoothing parameters can be also used in other lattice based problems such as lattice isomorphism problem <cit.>. Additionally, Rényi divergence serves as another metric for evaluating approximation error within cryptographic contexts <cit.><cit.>.
The smoothing parameter, a critical tool in cryptography, has also been optimized by researchers in the information theory community under the concept of channel resolvability. Channel resolvability addresses the problem of determining the amount of information required to simulate a given channel and its output. Initially proposed by Han and Verdú <cit.>, this concept used SD and Kullback-Leibler (KL) divergence to measure approximation error. Hayashi later extended the solution to the KL divergence framework <cit.><cit.>, while Yu and Tan generalized it to the Rényi divergence for parameters α∈ [0, 2] ∪{∞} <cit.>. For α∈ (2,∞), Pathegama and Barg addressed the problem using random codes, and even proposed that Reed-Muller code can achieve the optimal bound under Bernouli noise, albeit limited to uniform distribution targets <cit.>. Yu provided a more comprehensive solution <cit.>, presenting an achievable coding scheme through random codes, constant composition codes and typical sets.
Code-Based Cryptography.
Coding theory is an important subject aimed at correcting errors; linear codes were originally used in digital communication and data transmission, which involve encoding and decoding processes. Decoding messages is computationally difficult, and to reduce the complexity of decoding algorithms and improve coding efficiency, various code structures and decoding algorithms have been proposed.
The first code-based cryptosystem was proposed by McEliece <cit.>, where a binary Goppa code is used for encryption. The security of McEliece's cryptosystem relies on the hardness of decoding codewords and of distinguishing random matrices from permuted generator matrices. Following this work, various codes have been used to build cryptosystems, among which the MDPC-code-based scheme BIKE <cit.> and the binary-Goppa-code-based scheme Classic McEliece <cit.> became fourth-round candidates in the NIST call for PQC standardization; these are the only two schemes following McEliece's framework.
There are also numerous works aiming to improve the efficiency of McEliece's scheme, leading to the development of alternative frameworks. In 2003, Alekhnovich proposed a new framework with a security proof based solely on the decoding problem <cit.>. Alekhnovich's cryptosystem can be regarded as based on the code version of LWE, i.e., Learning Parity with Noise (LPN).
Similar to R-LWE, which improves efficiency through ring structures, the Ring-LPN problem was proposed to improve the efficiency of LPN and was used for an authentication protocol <cit.>. Several years later, the HQC scheme was proposed <cit.>, representing a specific version of Module-LPN. The security of the HQC scheme is grounded in the Quasi-Cyclic Syndrome Decoding problem. Notably, the HQC scheme stands out as the sole candidate within the non-McEliece framework in the fourth round of the Post-Quantum Cryptography (PQC) standardization process. Moreover, Quang Dao and Aayush Jain proposed a new variant of LPN named Dense-Sparse LPN based on the density of matrices <cit.>. Roughly, the assumption states that (𝐓𝐌, 𝐬𝐓𝐌 + 𝐞) is indistinguishable from (𝐓𝐌, 𝐮), for a random (dense) matrix 𝐓, a random sparse matrix 𝐌, and a sparse noise vector 𝐞 drawn from the Bernoulli distribution with inverse-polynomial noise probability. In addition to the existing frameworks centered around syndromes and decoding, a novel approach has emerged, grounded in the complexity of determining the linear isometry, an equivalence transformation that preserves the metric, between two codes, which is similar to the Lattice Isomorphism Problem (LIP) in the lattice setting. The Code Equivalence problem was first studied in coding theory, but it was not until 2020 that the first cryptographic scheme exclusively relying on the hardness of the Linear Code Equivalence problem (LEP) was introduced in <cit.>, named the Linear Equivalence Signature Scheme (LESS).
Although code-based cryptographic schemes rely on hard problems, many of them lack complete security reduction proofs from worst case to average case and from search problems to decision problems. In contrast, most problems in lattice-based cryptography have been so addressed. Given the shared characteristics of codes and lattices, smoothing techniques have been applied to address the LPN problem. The first worst-case to average-case reduction for codes was established in <cit.> by smoothing a code with random-walk noise, which is similar to LWE's classical reduction in <cit.>, albeit with a requirement for balanced codes. This reduction has been further optimized in subsequent works <cit.> and <cit.>, where the authors proposed an average-case to average-case reduction for the LPN problem as well. However, the need for balanced codes persists, because terms involving codewords of both low and high Hamming weights cannot be eliminated as n approaches infinity when estimating the smoothing bounds. As for the code equivalence problem, no worst-case to average-case reduction is known to date.
Main Contributions.
In this paper we derive the smoothing bound for random linear codes in Rényi divergence for all Rényi parameters α∈ (1, ∞). To introduce more structure into the random code, we reduce random linear codes to a class of random self-dual codes and further to a class of random quasi-cyclic codes. As an application of the smoothing bound in code-based cryptography, we derive an average-case to average-case reduction from LPN to the average-case decoding problem, which has the same parameter regime as <cit.>, but our reduction utilizes Rényi divergence and considers Bernoulli noise directly instead of combining ball noise and Bernoulli noise together.
In Section <ref>, some notation and preliminaries of coding theory and cryptography are introduced. The random linear code smoothing bound for Rényi parameters α∈(1,∞) is given in Section <ref>. Sections <ref> and <ref> give the smoothing bounds for random self-dual codes and random quasi-cyclic codes. An application of the smoothing bound is given in Section <ref>.
As we were finalizing this paper, we became aware of the independent and concurrent work by Pathegama and Barg <cit.>. It is important to distinguish our paper from <cit.> in the following ways: (a) While <cit.> addresses the problem of hash functions, our focus is on code-based cryptography; (b) Our results are more general, as we are capable of handling all real-valued parameters α∈ (1, ∞), whereas in <cit.>, α is restricted to natural numbers α∈ℕ; (c) We study self-dual codes, particularly quasi-cyclic codes, which are widely used in the practice of code-based cryptography. This aspect also distinguishes our work from <cit.> which is restricted to random linear codes.
§ PRELIMINARY
We first give some basic notation used in this paper. For a finite set 𝒳 of outcomes, a probability distribution P assigns a probability P(x) to each outcome x ∈𝒳 such that 0 ≤ P(x) ≤ 1 for all x ∈𝒳 and ∑_x ∈𝒳 P(x) = 1. The probability mass function (PMF) P(x) specifies the probability that a discrete random variable X takes the value x. A finite field, also known as a Galois field, is a field with a finite number of elements. The number of elements in a finite field is called its order, and it is always a power of a prime number. The finite field with q elements is denoted by 𝔽_q. For instance, 𝔽_2 is the finite field with two elements, typically {0, 1}, with addition and multiplication defined modulo 2, which can also be written as 𝐙_2. We denote a channel by W(·|·), where W(y|x) is the probability of output y given input x.
§.§ Rényi Entropy and Divergence
Rényi entropy and Rényi divergence are fundamental concepts in information theory that generalize the classical Shannon entropy and Kullback-Leibler (KL) divergence. These measures incorporate a parameter α that allows for a family of entropy and divergence measures, each with different sensitivity to the probability distributions' differences. Rényi entropy of order α, where α > 0 and α≠ 1, for a discrete probability distribution P over a finite set 𝒳, is defined as:
H_α(P) = 1/1 - αlog( ∑_x ∈𝒳 P(x)^α)
Here, 𝒳 denotes the set of possible outcomes, and P(x) represents the probability of the outcome x under the distribution P. As α→ 1, Rényi entropy converges to the Shannon entropy:
H_α(P) → H(P) = - ∑_x ∈𝒳 P(x) log P(x)
Rényi entropy provides a spectrum of entropy measures.
Rényi divergence measures the difference between two probability distributions P and Q. Given two discrete probability distributions P and Q over a finite set 𝒳, the Rényi divergence of order α, where α > 0 and α≠ 1, is defined as:
D_α(P Q) = 1/α - 1log( ∑_x ∈𝒳 P(x)^α Q(x)^1-α)
As α→ 1, Rényi divergence converges to the KL divergence:
D_α(P Q) → D_KL(P Q) = ∑_x ∈𝒳 P(x) logP(x)/Q(x)
For α > 0 and α≠ 1, D_α(P Q) ≥ 0, with equality if and only if P = Q. Additionally, for α≤β, D_α(P Q) ≤ D_β(P Q). This property makes Rényi divergence a useful tool for controlling the trade-off between robustness and sensitivity in various applications. The choice of α influences how the divergence measures the difference between P and Q; for α > 1, the divergence is more sensitive to the regions where P(x) is larger than Q(x), while for 0 < α < 1, it is more sensitive to the regions where P(x) is smaller than Q(x).
By adjusting the parameter α, researchers and practitioners can tailor the Rényi divergence to meet the specific requirements of their applications, making it a versatile and powerful tool in information theory and beyond. If we want to consider q-ary code, the base of log can be replaced with q.
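As a concrete numerical companion to these definitions (an illustration, not part of the formal development), the following Python sketch computes Rényi entropy and divergence directly from the formulas above; the function names and the α→1 limiting cases are our own illustrative choices.

```python
import numpy as np

def renyi_entropy(p, alpha, base=2.0):
    """H_alpha(P) = 1/(1 - alpha) * log(sum_x P(x)^alpha); Shannon entropy as alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # outcomes with P(x) = 0 contribute nothing
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p)) / np.log(base)
    return np.log(np.sum(p ** alpha)) / ((1.0 - alpha) * np.log(base))

def renyi_divergence(p, q, alpha, base=2.0):
    """D_alpha(P||Q) = 1/(alpha - 1) * log(sum_x P(x)^alpha Q(x)^(1-alpha));
    assumes Q(x) > 0 wherever P(x) > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    if np.isclose(alpha, 1.0):  # KL limit
        return np.sum(p[mask] * np.log(p[mask] / q[mask])) / np.log(base)
    s = np.sum(p[mask] ** alpha * q[mask] ** (1.0 - alpha))
    return np.log(s) / ((alpha - 1.0) * np.log(base))

# sanity checks: monotonicity in alpha, and D_alpha(P||P) = 0
p = np.array([0.5, 0.3, 0.2]); q = np.array([0.25, 0.25, 0.5])
assert renyi_divergence(p, q, 1.5) <= renyi_divergence(p, q, 2.0)
assert abs(renyi_divergence(p, p, 2.0)) < 1e-12
```

For q-ary codes one would simply set base=q, matching the remark above.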
§.§ Linear, Self-Dual, and Quasi-Cyclic Codes
Since we are interested in the smoothing of linear, self-dual, and quasi-cyclic codes, we first introduce these code families.
A linear code C is a subspace of the vector space 𝔽_q^n, where 𝔽_q is a finite field with q elements. For binary codes, q = 2, thus 𝔽_2. The code C is characterized by its dimension k and length n, and is typically denoted as an [n, k] code.
Linear codes possess several key properties. Firstly, as a subspace, they ensure that for any codewords 𝐜_1, 𝐜_2 ∈ C and any scalars a, b ∈𝔽_q, the linear combination a𝐜_1 + b𝐜_2 is also in C. This closure under addition and scalar multiplication is fundamental to their structure. The dimension k of the code represents the number of linearly independent codewords, signifying the amount of information that can be encoded, with C having q^k distinct codewords.
A generator matrix G for an [n, k] linear code is a k × n matrix whose rows form a basis for C. Any codeword 𝐜∈ C can be expressed as 𝐜 = 𝐮G, where 𝐮∈𝔽_q^k is an information vector. Encoding a message vector 𝐮 into a codeword 𝐜 involves multiplying 𝐮 by the generator matrix G. Conversely, a parity-check matrix H for an [n, k] linear code is an (n-k) × n matrix that defines the dual code C^⊥. A vector 𝐯∈𝔽_q^n is a valid codeword of C if and only if it satisfies the parity-check equation H𝐯^T = 0. The rate R of a linear code is defined as R=k/n.
A linear code C is self-dual if C = C^⊥, where C^⊥ is the dual code of C, defined as the set of all vectors orthogonal to every codeword in C.
In this paper we only consider binary self-dual codes, in which case the dimension is k = n/2, since dim C + dim C^⊥ = n and C = C^⊥. The generator matrix G and the parity-check matrix H of a self-dual code are related by G = H. This implies that all codewords in a self-dual code are orthogonal to each other with respect to the standard dot product, i.e., ⟨𝐮, 𝐯⟩ = 0 for all 𝐮, 𝐯∈ C.
(n=2t, k=t) Quasi-Cyclic Code
A systematic quasi-cyclic (2t, t) code consists of the codewords
[l(x), l(x)a(x) mod x^t + 1]
where l(x), a(x)∈𝔽_2[x]/(x^t+1).
Quasi-cyclic codes can be seen as generalizations of cyclic codes and often possess similar algebraic properties, making them easier to analyze and decode. The generator matrix of a quasi-cyclic code has a block circulant structure, which can be leveraged for efficient implementation of encoding and decoding algorithms. Quasi-cyclic codes are widely used in communication systems due to their balance of structure and performance, offering good error correction capabilities with relatively simple implementation.
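To make the block-circulant structure concrete, here is a small Python sketch (our illustration, not taken from any referenced implementation) that builds the systematic generator matrix [I | A] of a (2t, t) quasi-cyclic code, where A is the circulant matrix of a(x):

```python
import numpy as np

def circulant(a):
    """t x t circulant over F_2: row i holds the coefficients of a(x) * x^i mod (x^t + 1)."""
    return np.array([np.roll(a, i) for i in range(len(a))]) % 2

def qc_generator(a):
    """Generator [I | A] of the systematic (2t, t) quasi-cyclic code with
    codewords (l(x), l(x) a(x) mod x^t + 1)."""
    t = len(a)
    return np.concatenate([np.eye(t, dtype=int), circulant(a)], axis=1)

# example with t = 5, a(x) = 1 + x + x^3 (odd weight), l(x) = 1 + x
a = np.array([1, 1, 0, 1, 0])   # coefficients stored low degree to high degree
l = np.array([1, 1, 0, 0, 0])
codeword = l @ qc_generator(a) % 2
# first half equals l (systematic part); second half is l(x) a(x) mod x^5 + 1
print(codeword)
```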
§.§ Smoothing Parameter And Learning Parity With Noise
In code-based cryptography, researchers are interested in the LPN problem and its hardness connections with other problems.
The (decisional) LPN problem with secret length n and noise rate μ∈ (0, 1/2), denoted by LPN_n,μ, asks to distinguish (𝐚, ⟨𝐚 ,𝐬⟩ + e mod 2) and (𝐚, u), where 𝐚←𝐙_2^n, 𝐬←𝐙_2^n, u ←𝐙_2, and e ← Ber(μ); here Ber(μ) denotes the Bernoulli distribution with parameter μ and u is uniform over the binary field.
(Linear Code) Average case Decoding Problem - aDP(n, k, t)
* Input: (𝐆,𝐲≐𝐱𝐆+𝐞) where 𝐆 is a random generator matrix of a binary [n, k] linear code, 𝐱∈𝔽_2^k, 𝐆∈𝔽_2^k× n and 𝐞∈𝔽_2^n with Hamming weight t.
* Output: 𝐱𝐆
LPN is an average-case problem, and to obtain an average-case to average-case reduction, i.e., a reduction from LPN to aDP(n,k,t), the smoothing technique plays the main role. As noted in <cit.><cit.>, we are given a linear code with generator 𝐆∈𝔽_2^k× n, a codeword 𝐜=𝐦𝐆+𝐞, and noise weight wt(𝐞)=t. We sample a vector 𝐫∈𝔽_2^n from a noise distribution such that P(⟨𝐫,𝐞⟩=1)=p, and take its inner product with the codeword 𝐜. Thus
⟨𝐫,𝐜⟩=⟨𝐫𝐆^𝐓,𝐦⟩+⟨𝐫,𝐞⟩
which can be fed into the LPN oracle, with the error measured by the statistical distance Δ((𝐫𝐆^T, ⟨𝐫, 𝐞⟩), (U_𝔽_2^k, Ber_p) ). When 𝐆 is random over all [n,k] linear codes, we achieve an average-case reduction. In this paper, Rényi divergence is considered instead of statistical distance.
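The identity above is easy to verify mechanically. The following Python sketch (our illustration; the parameter values are arbitrary) checks that (𝐫𝐆^T, ⟨𝐫, 𝐜⟩) indeed has the form of an LPN sample with secret 𝐦 and noise bit ⟨𝐫, 𝐞⟩:

```python
import numpy as np
rng = np.random.default_rng(0)

n, k, t = 32, 12, 5
G = rng.integers(0, 2, size=(k, n))                  # random generator matrix
m = rng.integers(0, 2, size=k)                       # secret message
e = np.zeros(n, dtype=int)
e[rng.choice(n, size=t, replace=False)] = 1          # error of Hamming weight t
c = (m @ G + e) % 2                                  # noisy codeword

r = rng.integers(0, 2, size=n)                       # smoothing vector
lhs = int(r @ c % 2)                                 # <r, c>
rhs = int((r @ G.T @ m + r @ e) % 2)                 # <r G^T, m> + <r, e>
assert lhs == rhs
```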
§.§ Inequalities
To help analyze the smoothing bound, some inequalities will be required, especially the rearrangement inequality; see <cit.>.
Let a_1 ≤ a_2 ≤…≤ a_n and b_1 ≤ b_2 ≤…≤ b_n be two sequences of real numbers. Then their rearrangement yields the maximum value when the sequences are ordered in the same way, that is,
a_1b_1 + a_2b_2 + … + a_nb_n ≥ a_1b_σ(1) + a_2b_σ(2) + … + a_nb_σ(n)
for any permutation σ of {1, 2, …, n}.
For non-negative numbers x_1, ..., x_n, let σ_j be any permutation of 1, 2, …, n for each j. Then the following inequality holds:
∑_i=1^n∏_j x_σ_j(i)^p_j≤∑_i=1^n x_i^p.
Here the p_j's satisfy ∑_j p_j=p and are positive rationals over a common denominator q no smaller than 1, i.e., p_j∈ℤ/q, q≥ 1 for each j.
First assume q=1, so that the p_j's are all integers. Notice that by the AM-GM inequality, we have
∑_j p_j x_σ_j(i)^p = ∑_jx_σ_j(i)^p + ⋯ + x_σ_j(i)^p_p_j
≥(∑_j p_j)(∏_j x_σ_j(i)^pp_j)^1/∑_j p_j = p ∏_j x_σ_j(i)^p_j.
Thus the lemma can be derived as follows:
∑_i=1^n ∏_j x_σ_j(i)^p_j ≤1/p∑_i=1^n ∑_j p_j x_σ_j(i)^p
= 1/p∑_j p_j ∑_i=1^n x_σ_j(i)^p
= 1/p∑_j p_j ∑_i=1^n x_i^p = ∑_i=1^n x_i^p.
The proof for the case where q>1 can also be established using a similar procedure, with a slight modification by replacing each x_i with x_i^1/q.
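As a quick sanity check of the lemma (our own numerical illustration), one can sample non-negative numbers, exponents p_j over a common denominator with ∑_j p_j = p, and arbitrary permutations σ_j, and verify the inequality:

```python
import numpy as np
rng = np.random.default_rng(1)

n, p = 20, 4.0
x = rng.random(n)                                   # non-negative numbers
p_js = [1.5, 1.0, 1.5]                              # elements of Z/2 summing to p = 4
sigmas = [rng.permutation(n) for _ in p_js]         # arbitrary permutations

lhs = sum(np.prod([x[s[i]] ** pj for s, pj in zip(sigmas, p_js)]) for i in range(n))
rhs = np.sum(x ** p)
assert lhs <= rhs + 1e-12   # sum_i prod_j x_{sigma_j(i)}^{p_j} <= sum_i x_i^p
```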
§ RANDOM LINEAR CODE SMOOTHING
Linear codes form a balanced set. Denote by ℬ the set of all [n,k] linear codes. Then we have |ℬ|=[ n; k ]_q where the Gaussian binomial coefficient is
[ n; k ]_q
=(1-q^n)(1-q^{n-1})⋯ (1-q^{n-k+1})/(1-q)(1-q^2)⋯ (1-q^k).
Each non-zero vector belongs to [ n-1; k-1 ]_q linear codes in ℬ. Thus it is easy to prove the following averaging lemma for linear codes.
For the balanced set ℬ containing all linear codes with encoders F, and any function f(·), the following identity holds:
1/|ℬ|∑_F∈ℬ∑_𝐚∈𝔽_q^k/{0} f(F(𝐚)) = q^k - 1/q^n - 1∑_𝐜∈𝔽_q^n/{0} f(𝐜).
or equivalently,
𝔼_F∼ℬ∑_𝐚∈𝔽_q^k/{0} f(F(𝐚)) = q^k - 1/q^n - 1∑_𝐜∈𝔽_q^n/{0} f(𝐜).
§.§ Rényi Divergence for α∈ (1,2)
Combining the lemma with the techniques in <cit.>, we obtain the following theorem.
Let W(·|·) denote the transition probability of the noisy channel W. Then, for α∈ (1,2),
when W is an additive noise channel and the rate R satisfies
R ≥ 1 - H_α(W)/n+ε,
where ε>0, then
𝔼_F∼ℬ D_α(U_F+N||U_𝔽_q^n)→ 0
as n→∞.
Here we use W_F(𝐚)(𝐲) to represent the probability W(𝐲 | F(𝐚)) with the output 𝐲 and input F(𝐚), where F is a linear encoder contained in ℬ and 𝐚 is the message vector. Here H_α(W) is the Rényi entropy of order α, i.e.
H_α(W)≐1/α-1log∑_𝐜∈𝔽_q^n W(𝐜) ^α.
U_𝔽_q^n is a uniform r.v. over the Hamming space, U_F is a uniform r.v. over the codewords of encoder F.
Define the affine encoder Λ: 𝐚→ F(𝐚) + G, where F is a linear encoder in ℬ and G is an independent r.v. in 𝔽_q^n. The affine encoder has the property that for 𝐚≠𝐚', Λ(𝐚')=F(𝐚'-𝐚) + Λ(𝐚). Note that here F(𝐚'-𝐚) is independent of the r.v. Λ(𝐚).
q^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_q^n)=∑_𝐲∈𝔽_q^n (1/q^k∑_𝐚∈𝔽_q^k W_Λ(𝐚)(𝐲) )^α/(1/q^n)^α-1=q^n(α-1)/q^kα∑_y∈𝔽_q^n ( ∑_𝐚∈𝔽_q^k W_Λ(𝐚)(𝐲) )^α
𝔼_Λ q^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_q^n)
=q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n𝔼_Λ ( ∑_𝐚∈𝔽_q^k W_Λ(𝐚)(𝐲) )^α
=q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n𝔼_Λ∑_𝐚∈𝔽_q^k W_Λ(𝐚)(y) ( ∑_𝐚'∈𝔽_q^k W_Λ(𝐚')(y) )^α-1
By using Jensen's inequality for 𝔼[X^α-1] ≤ (𝔼[X])^α-1, 1 < α < 2, we derive ≤q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n∑_𝐚∈𝔽_q^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)+𝔼_F|Λ(𝐚)∑_𝐚'∈𝔽_q^k,𝐚'≠𝐚 W_Λ(𝐚')(𝐲) )^α-1
=q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n∑_𝐚∈𝔽_q^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) +𝔼_F|Λ(𝐚)∑_𝐚'∈𝔽_q^k,𝐚'≠𝐚 W_Λ(𝐚)+F(𝐚'-𝐚)(𝐲) )^α-1
By using the averaging lemma, let 𝐱=𝐚'-𝐚, f(F(𝐱))=W_Λ(𝐚)+F(𝐚'-𝐚)(𝐲) =q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n∑_𝐚∈𝔽_q^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) + q^k-1/q^n-1∑_𝐜∈𝔽_q^n,𝐜≠0 W_Λ(𝐚)+𝐜(𝐲) )^α-1
≤q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n∑_𝐚∈𝔽_q^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) + q^k-1/q^n-1 )^α-1
By using (x+y)^α-1≤ x^α-1+y^α-1 and q^k-1/q^n-1< q^k/q^n ≤q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n∑_𝐚∈𝔽_q^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲)^α + q^n(α-1)/q^kα∑_𝐲∈𝔽_q^n∑_𝐚∈𝔽_q^k𝔼_Λ(𝐚)q^k(α-1)/q^n(α-1)W_Λ(𝐚)(𝐲)
= q^(α-1) n(1-R-H_α(W)/n) + 1
The limit goes to 1 since 1-R-H_α(W)/n<0. Since the channel W is regular <cit.>, we can remove the r.v. G, and the final result is proven.
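Although the theorem is asymptotic, its effect can be observed exactly at toy sizes by brute force. The sketch below (our illustration; all parameters are arbitrary small values chosen so that full enumeration is feasible) draws a random binary [n, k] code, smooths it with i.i.d. Bernoulli noise, and evaluates D_α to the uniform distribution, contrasting a rate above 1 − H_α(W)/n with one below it:

```python
import numpy as np
from itertools import product
rng = np.random.default_rng(2)

def smoothed_divergence(n, k, mu, alpha):
    """Exact D_alpha(U_C + N || Uniform) in bits for a random binary [n, k] code
    and i.i.d. Bernoulli(mu) noise, by full enumeration (small n only)."""
    G = rng.integers(0, 2, size=(k, n))
    code = np.array(list(product([0, 1], repeat=k))) @ G % 2      # all 2^k codewords
    ys = np.array(list(product([0, 1], repeat=n)))                # all 2^n outputs
    dist = np.zeros(2 ** n)
    for c in code:
        w = ((ys + c) % 2).sum(axis=1)                            # Hamming weight of y - c
        dist += mu ** w * (1 - mu) ** (n - w)
    dist /= 2 ** k
    u = 1.0 / 2 ** n
    return np.log2(np.sum(dist ** alpha * u ** (1 - alpha))) / (alpha - 1)

# per-symbol H_2(Ber(0.25)) is about 0.678 bits, so 1 - H_2(W)/n is about 0.322
print(smoothed_divergence(n=12, k=9, mu=0.25, alpha=2.0))   # rate 0.75 > 0.322: small
print(smoothed_divergence(n=12, k=2, mu=0.25, alpha=2.0))   # rate ~0.17 < 0.322: large
```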
§.§ Rényi Divergence for α∈ℕ
The theorem above can be extended to orders α∈ (2, +∞). To begin with, let us first extend the linear-code averaging lemma to a more general case.
For the balanced set ℬ containing all linear codes with encoders F, an integer r, real numbers α_1, α_2, ..., α_r and any function f(·)≥ 0, the following inequality holds:
1/|ℬ|∑_F∈ℬ∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^rf^α_i(F(𝐚_i)) ≤∑_j (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rf^α_i(𝐜_i).
or equivalently,
𝔼_F∼ℬ∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^rf^α_i(F(𝐚_i)) ≤∑_j (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rf^α_i(𝐜_i).
Given a set of vectors {𝐜_1,𝐜_2,...,𝐜_r} of rank r-j, the span of this set forms a subspace of dimension r-j. Consequently, the number of linear codes containing this subspace is given by the Gaussian binomial [ n-r+j; k-r+j ]_q. The probability of a random linear code containing the subspace is
P([ {𝐜_1,𝐜_2,...,𝐜_r}⊆ F; rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j ]) = [ n-r+j; k-r+j ]_q/[ n; k ]_q≤(q^k - 1/q^n - 1)^r-j.
𝔼_F∼ℬ∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^rf^α_i(F(𝐚_i))
= 𝔼_F∼ℬ∑_j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rf^α_i(𝐜_i) 1{[ {𝐜_1,𝐜_2,...,𝐜_r}⊆ F; rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j ]}
= ∑_j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rf^α_i(𝐜_i) 𝔼_F∼ℬ1{[ {𝐜_1,𝐜_2,...,𝐜_r}⊆ F; rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j ]}
= ∑_j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rf^α_i(𝐜_i) P([ {𝐜_1,𝐜_2,...,𝐜_r}⊆ F; rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j ])
≤∑_j (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rf^α_i(𝐜_i).
Suppose W is an additive noise channel with W(𝐲|𝐱)=W(𝐲-𝐱), and the rate R satisfies
R ≥ 1 - H_α(W)/n+ε,
where ε>0 and H_α(W)≐1/α-1log∑_𝐱∈𝔽_q^n W(𝐱) ^α.
For 0<j<r and fixed r, denote by {𝐜_1,𝐜_2,...,𝐜_r} an arbitrary set of non-zero codewords from 𝔽_q^n with rank r-j.
Then for any subset {𝐜_i_j+1,𝐜_i_j+2,...,𝐜_i_r} with full rank r-j, 1≤ i_j+1, i_j+2, ..., i_r≤ r, and sufficient large n,
∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j
rank{𝐜_i_j+1,𝐜_i_j+2,...,𝐜_i_r}=r-j∏_i=1^rW_𝐜_i(𝐲)^α_i≤
O(q^-ε n)
where α_i∈ℤ/q with some q>1 and ∑_i=1^rα_i=α.
Furthermore, by summing over all subsets {𝐜_i_j+1,𝐜_i_j+2,...,𝐜_i_r} with rank r-j, the following inequality is derived:
∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲)^α_i≤
O(q^-ε n)
Denote
S={𝐜_1,𝐜_2,...,𝐜_r}, rank(S)=r-j. Without loss of generality, assume 𝐜_j+1,𝐜_j+2,...,𝐜_r form a basis of S, and 𝐜_1,...,𝐜_j∈ span{𝐜_j+1,𝐜_j+2,...,𝐜_r}. Consequently, 𝐜_1,...,𝐜_j can be expressed as linear combinations of the basis as follows,
𝐜_1 = a_1,j+1𝐜_j+1 + a_1,j+2𝐜_j+2 + … + a_1,r𝐜_r
𝐜_2 = a_2,j+1𝐜_j+1 + a_2,j+2𝐜_j+2 + … + a_2,r𝐜_r
⋮
𝐜_j = a_j,j+1𝐜_j+1 + a_j,j+2𝐜_j+2 + … + a_j,r𝐜_r
,
or equivalently in matrix form as follows,
[ 𝐜_1; 𝐜_2; ⋮; 𝐜_j ]
=
𝐀[ 𝐜_j+1; 𝐜_j+2; ⋮; 𝐜_r ].
Notice that in Hamming space the coefficients of the basis take values in {0, …, q-1}.
To better handle the relations among the different 𝐜_i's, we first partition the set S into r-j disjoint subsets, labeled C^𝐀_j+1, C^𝐀_j+2, ..., C^𝐀_r, collectively exhausting S. The partitioning algorithm assigns c_i to the set C^𝐀_i for all j+1 ≤ i ≤ r. For 1 ≤ i ≤ j, each c_i is allocated to the subset C^𝐀_r', where r' is the highest index for which a_i,r' is non-zero. Thus, by using Lemma <ref> for each subset C_r'^𝐀, we derive
∑_𝐜_r'∈𝔽_q^n∏_𝐜_i ∈ C_r'^𝐀 W_𝐜_i(𝐲)^α_i = ∑_𝐜_r'∈𝔽_q^n∏_𝐜_i ∈ C_r'^𝐀 W(𝐲 - 𝐜_i)^α_i
≤∑_𝐜_r'∈𝔽_q^n W(𝐲 - 𝐜_r')^α_r'^𝐀
= q^( 1 - α_r'^𝐀) H_α_r'^𝐀(W).
Here α_r'^𝐀 represents the summation of the exponents α_i corresponding to every 𝐜_i in the set C_r'^𝐀. Note that the coefficients of 𝐜_1, ...,𝐜_j with respect to the basis are fixed given 𝐀.
∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j
rank{𝐜_j+1,𝐜_j+2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲)^α_i
= ∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_j+1,𝐜_j+2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_j+1,𝐜_j+2,...,𝐜_r}=r-j ∑_𝐀∏_𝐜_i_j+1∈ C^𝐀_j+1W_𝐜_i_j+1(𝐲)^α_i_j+1⋯∏_𝐜_i_r∈ C^𝐀_r W_𝐜_i_r(𝐲)^α_i_r
≤∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_j+1,𝐜_j+2,...,𝐜_r}⊆𝔽_q^n ∑_𝐀∏_𝐜_i_j+1∈ C^𝐀_j+1W_𝐜_i_j+1(𝐲)^α_i_j+1⋯∏_𝐜_i_r∈ C^𝐀_r W_𝐜_i_r(𝐲)^α_i_r
= ∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_𝐀∑_𝐜_j+1∈𝔽_q^n ∏_𝐜_i_j+1∈ C^𝐀_j+1W_𝐜_i_j+1(𝐲)^α_i_j+1⋯∑_𝐜_r∈𝔽_q^n ∏_𝐜_i_r∈ C^𝐀_r W_𝐜_i_r(𝐲)^α_i_r
by using inequality (<ref>) on C^𝐀_r ≤∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_𝐀∑_𝐜_j+1∈𝔽_q^n ∏_𝐜_i_j+1∈ C^𝐀_j+1W_𝐜_i_j+1(𝐲)^α_i_j+1
⋯∑_𝐜_r-1∈𝔽_q^n ∏_𝐜_i_r-1∈ C^𝐀_r-1 W_𝐜_i_r-1(𝐲)^α_i_r-1· q^( 1 - α_r^𝐀) H_α_r^𝐀(W)
by using inequality (<ref>) again on C^𝐀_r-1 ≤∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_𝐀∑_𝐜_j+1∈𝔽_q^n ∏_𝐜_i_j+1∈ C^𝐀_j+1W_𝐜_i_j+1(𝐲)^α_i_j+1
⋯∑_𝐜_r-2∈𝔽_q^n ∏_𝐜_i_r-2∈ C^𝐀_r-2 W_𝐜_i_r-2(𝐲)^α_i_r-2· q^( 1 - α_r-1^𝐀) H_α_r-1^𝐀(W)· q^( 1 - α_r^𝐀) H_α_r^𝐀(W)
by using inequality (<ref>) on C^𝐀_r-2, C^𝐀_r-3, ..., C^𝐀_j+1 separately step by step ≤ ...≤∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_𝐀q^( 1 - α_j+1^𝐀) H_α_j+1^𝐀(W)⋯ q^( 1 - α_r-1^𝐀) H_α_r-1^𝐀(W)· q^( 1 - α_r^𝐀) H_α_r^𝐀(W)
≤∑_𝐀∏_i=j+1^r q^-n( 1 - α_i^𝐀)( 1-R-1/nH_α_i^𝐀(W))
≤ O(q^-ε n).
The last step is due to the fact that summation over 𝐀 is a finite sum for 𝐀∈𝔽_q^j×(r-j).
The proof for any other basis of rank r-j follows the same procedure. Thus, by summing over all bases, we get
∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲)^α_i
=∑_1≤ i_j+1,i_j+2, ...,i_r≤ r∑_𝐲∈𝔽_q^nq^n(α-1)/q^kα(q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j
rank{𝐜_i_j+1,𝐜_i_j+2,...,𝐜_i_r}=r-j∏_i=1^rW_𝐜_i(𝐲)^α_i
≤∑_1≤ i_j+1,i_j+2, ...,i_r≤ r
O(q^-ε n)
=O(q^-ε n)
With the lemmas above, Theorem <ref> can be extended to the case α∈ℕ.
Suppose α∈ℕ. When W is an additive noise channel with W(𝐲|𝐱)=W(𝐲-𝐱) and the rate R satisfies
R ≥ 1 - H_α(W)/n+ε,
where ε>0 and H_α(W)≐1/α-1log∑_𝐱∈𝔽_q^n W(𝐱) ^α, then
𝔼_F∼ℬ D_α(U_F+N||U_𝔽_q^n)→ 0
as n→∞. More specifically,
𝔼_F∼ℬq^(α-1) D_α(U_F(𝐚)+N||U_𝔽_q^n)≤ O(q^-ε n)+1.
Notice that it suffices to prove
q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ ( ∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α≤ O(q^-ε n)+1.
Indeed, proceeding by induction on α, it can be computed as
𝔼_F∼ℬq^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_q^n)
= q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ ( W_0(𝐲) + ∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α
=q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ∑_α_1=0^ααα_1 W_0(𝐲)^α_1(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α-α_1
=q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α
+∑_α_1=1^ααα_1q^n(α-1)/q^kα∑_y∈𝔽_q^n W_0(𝐲)^α_1𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α-α_1
by induction on all α-α_1 ≤q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α +∑_α_1=1^ααα_1q^n(α_1-1)/q^kα_1∑_y∈𝔽_q^n W_0(𝐲)^α_1(O(q^-ε n)+1 )
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α +∑_α_1=1^ααα_11/q^k q^n(α_1-1)(1-R-1/nH_α_1(W))(O(q^-ε n)+1 )
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α +O(q^-k),
which mainly relies on the dominant term q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α as n goes to infinity.
Thus
q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ(∑_𝐚∈𝔽_q^k/{0} W_F(𝐚)(𝐲) )^α
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n (∑_r=1^α∑_α_1+...+α_r=
α𝔼_F∼ℬ∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i )
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n (∑_r=1^α∑_α_1+...+α_r=
α∑_j (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲) ^α_i )
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n (∑_r=1^α∑_α_1+...+α_r=
α∑_j>0 (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲) ^α_i )
+ q^n(α-1)/q^kα∑_y∈𝔽_q^n (∑_r=1^α∑_α_1+...+α_r=
α(q^k - 1/q^n - 1)^r∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r∏_i=1^rW_𝐜_i(𝐲) ^α_i )
Here the last step is obtained by splitting the sum into the part where j>0 and the part where j=0. The first part can be estimated as no larger than ∑_r=1^α∑_α_1+...+α_r=
αO(q^-ε n) by Lemma <ref>, and the second part can be computed as follows,
q^n(α-1)/q^kα∑_y∈𝔽_q^n (∑_r=1^α∑_α_1+...+α_r=
α(q^k - 1/q^n - 1)^r∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r∏_i=1^rW_𝐜_i(𝐲) ^α_i )
≤∑_r=1^α∑_α_1+...+α_r=
αq^n(α-1)/q^kα∑_y∈𝔽_q^n (q^k - 1/q^n - 1)^r∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n ∏_i=1^rW_𝐜_i(𝐲) ^α_i
≤∑_r=1^α∑_α_1+...+α_r=
αq^n(α-1)/q^kα∑_y∈𝔽_q^n (q^k - 1/q^n - 1)^r∑_𝐜_1 ∈𝔽_q^n W_𝐜_1(𝐲)^α_1∑_𝐜_2 ∈𝔽_q^n W_𝐜_2(𝐲)^α_2 ...∑_𝐜_r ∈𝔽_q^n W_𝐜_r(𝐲)^α_r
= ∑_r=1^α∑_α_1+...+α_r=
αq^n(α-1)/q^kα∑_y∈𝔽_q^n (q^k - 1/q^n - 1)^r∏_i=1^rq^(1-α_i)H_α_i(W)
≤∑_r=1^α∑_α_1+...+α_r= α∏_i=1^rq^n(1-α_i)(1-R-1/nH_α_i(W))
≤∑_r=1^α -1∑_α_1+...+α_r= α∏_i=1^rq^n(1-α_i)(1-R-1/nH_α_i(W)) + 1
≤∑_r=1^α -1∑_α_1+...+α_r= αO(q^-ε n) + 1,
where the last step is due to the fact that there exists some α_i>1 when r<α.
By substituting the estimation of first and second parts above into formula (<ref>), it can be obtained
𝔼_F∼ℬq^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_q^n)
≤∑_r=1^α -1∑_α_1+...+α_r= αO(q^-ε n) +∑_r=1^α -1∑_α_1+...+α_r= αO(q^-ε n)+1
=O(q^-ε n)+1.
The last step follows by noting that the sum is finite. Thus the proof is complete.
Take α=4 and q=2 for example.
𝔼_F∼ℬ2^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_2^n)
= 2^3n/2^4k∑_y∈𝔽_2^n𝔼_F∼ℬ ( ∑_𝐚∈𝔽_2^k/{0} W_F(𝐚)(𝐲) )^4
= 2^3n/2^4k∑_y∈𝔽_2^n𝔼_F∼ℬ ( ∑_𝐚∈𝔽_2^k/{0} W_F(𝐚)(𝐲)^4 )_(i) + 2^3n/2^4k∑_y∈𝔽_2^n𝔼_F∼ℬ ( ∑_{𝐚_1,𝐚_2}∈𝔽_2^k/{0} W_F(𝐚_1)(𝐲)^3W_F(𝐚_2)(𝐲) )_(ii)
+2^3n/2^4k∑_y∈𝔽_2^n𝔼_F∼ℬ ( ∑_{𝐚_1,𝐚_2}∈𝔽_2^k/{0} W_F(𝐚_1)(𝐲)^2W_F(𝐚_2)(𝐲)^2 )_(iii)
+ 2^3n/2^4k∑_y∈𝔽_2^n𝔼_F∼ℬ ( ∑_{𝐚_1,𝐚_2,𝐚_3}∈𝔽_2^k/{0} W_F(𝐚_1)(𝐲)^2W_F(𝐚_2)(𝐲)W_F(𝐚_3)(𝐲) )_(iv)
+ 2^3n/2^4k∑_y∈𝔽_2^n𝔼_F∼ℬ ( ∑_{𝐚_1,𝐚_2,𝐚_3,𝐚_4}∈𝔽_2^k/{0} W_F(𝐚_1)(𝐲)W_F(𝐚_2)(𝐲)W_F(𝐚_3)(𝐲)W_F(𝐚_4)(y) )_(v)
≤2^n(α-1)/2^kα∑_y∈𝔽_2^n (∑_r=1^α∑_α_1+...+α_r=
α𝔼_F∼ℬ∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_2^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i )
It is easy to prove that
(i)≤ O(2^3n (1-R-1/nH_4(W))),
(ii)≤ O(2^2n (1-R-1/nH_3(W))),
(iii)≤ O(2^n (1-R-1/nH_2(W))· 2^n (1-R-1/nH_2(W))).
And
(iv)≤
2^3n/2^4k∑_y∈𝔽_2^n ( 2^k-1/2^n-1 )^3 (∑_{𝐜_1,𝐜_2,𝐜_3}⊆𝔽_2^n / {0}
rank{𝐜_1,𝐜_2,𝐜_3}=3 W_𝐜_1(𝐲)^2W_𝐜_2(𝐲)W_𝐜_3(𝐲) )
+ 2^3n/2^4k∑_y∈𝔽_2^n ( 2^k-1/2^n-1 )^2 (∑_{𝐜_1,𝐜_2}⊆𝔽_2^n / {0}
rank{𝐜_1,𝐜_2,𝐜_3}=2
𝐜_3=𝐜_2+𝐜_1 W_𝐜_1(𝐲)^2W_𝐜_2(𝐲)W_𝐜_3(𝐲) )
≤ O(2^n (1-R-1/nH_2(W))) + O(2^n (1-R-1/nH_2(W))· 2^n (1-R-1/nH_2(W))) + O(2^2n (1-R-1/nH_3(W))).
Similar to (iv), it can be also derived that
(v)≤ O(2^n (1-R-1/nH_2(W))) + O(2^n (1-R-1/nH_2(W))· 2^n (1-R-1/nH_2(W))) + O(2^2n (1-R-1/nH_3(W))) + 1
This concludes the proof.
§.§ Exponents Analysis
We need to analyze the following dominant exponent:
min_∑α_i=α
α_i∈ℕ∑ (1-α_i)(1-R-1/nH_α_i(W) ).
Here the α_i's form an integer partition of α with α_i∈ℕ.
Denote f(x)=(1-x)( 1 - R - 1/nH_x(W) ), which is concave due to the facts that:
f(x)=(1-x)( 1 - R - 1/nH_x(W) )=(1-x)(1-R)-1/nlog_2∑_ip_i^x
f'(x) = -(1-R) - 1/nln2∑_ip_i^xln p_i/∑_ip_i^x
f”(x) = - 1/(n ln 2) · [ (∑_i p_i^x ln^2 p_i)(∑_i p_i^x) - (∑_i p_i^x ln p_i)^2 ] / (∑_i p_i^x)^2 ≤ 0 by the Cauchy-Schwarz inequality.
Since f'(-∞)>0 and f'(+∞)<0, f(x) first increases and then decreases.
Thus
min_∑α_i=α
α_i∈ℕ∑ (1-α_i)(1-R-1/nH_α_i(W) ) = min{R+1/nH_2(W)-1, (1-α)(1-R-1/nH_α(W) )}.
Which of the two terms attains the minimum cannot be determined in general; it depends on the parameters.
For example, take p=[0.45, 0.55], α=50, R=0.9; then we have
R+H_2(p)-1=0.583< (1-α)(1-R-1/nH_α(W) )=24.99.
Take p=[0.2, 0.8], α=3, R=0.7; then we have
R+H_2(p)-1=0.085 > (1-α)(1-R-1/nH_α(W) )=0.053.
But if we use the statistical distance, we have
2Δ^2≤ D_1≤ D_2≤ ...≤ D_α→ 0,
so the resulting exponent becomes min{R+1/nH_2(W)-1, max_α∈ℕ (1-α)(1-R-1/nH_α(W) )}.
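For readers who wish to explore this trade-off numerically, the following sketch (our own illustration; the printed values depend on the per-symbol normalization H_α(W)/n = H_α(p) that we assume here and may therefore differ from the numbers quoted above under other conventions) evaluates the two candidate branches of the minimum for given (p, α, R):

```python
import numpy as np

def H_alpha(p, alpha):
    """Per-symbol Renyi entropy (bits) of a memoryless noise distribution p."""
    p = np.asarray(p, dtype=float)
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def branches(p, alpha, R):
    b1 = R + H_alpha(p, 2.0) - 1.0                       # branch with an exponent alpha_i = 2
    b2 = (1.0 - alpha) * (1.0 - R - H_alpha(p, alpha))   # branch with a single alpha_i = alpha
    return b1, b2

for p, alpha, R in [([0.45, 0.55], 50, 0.9), ([0.2, 0.8], 3, 0.7)]:
    b1, b2 = branches(p, alpha, R)
    print(f"p={p}, alpha={alpha}, R={R}: branches ({b1:.3f}, {b2:.3f}), min = {min(b1, b2):.3f}")
```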
§.§ Rényi Divergence for α>2
Combining the techniques from Theorems <ref> and <ref>, we can derive smoothing for random linear codes in the regime α>2.
Suppose α>2. When W is an additive noise channel with W(𝐲|𝐱)=W(𝐲-𝐱) and the rate R satisfies
R ≥ 1 - H_α(W)/n+ε,
where ε>0 and H_α(W)≐1/α-1log∑_𝐱∈𝔽_q^n W(𝐱) ^α, then
𝔼_F∼ℬ D_α(U_F+N||U_𝔽_q^n)→ 0
as n→∞. More specifically,
𝔼_F∼ℬq^(α-1) D_α(U_F(𝐚)+N||U_𝔽_q^n)≤ O(q^-ε n)+1.
Denote by ⌊α⌋ the largest integer no larger than α, and by {α} the fractional part of α, i.e., α = ⌊α⌋ + {α}.
𝔼_F∼ℬq^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_q^n)
= q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ ( ∑_𝐚∈𝔽_q^k W_F(𝐚)(𝐲) )^α
= q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ ( ∑_𝐚∈𝔽_q^k W_F(𝐚)(𝐲) )^⌊α⌋ ( ∑_𝐚∈𝔽_q^k W_F(𝐚)(𝐲) )^{α}
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i
· ( ∑_𝐚∈span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) + ∑_𝐚∉span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) )^{α}
By using (∑ x_i)^{α}≤∑ x_i^{α} since {α}≤ 1 ≤q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i
·( ∑_𝐚∈span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲)^{α} + (∑_𝐚∉span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) )^{α} )
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i∑_𝐚∈span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲)^{α}_(i)
+ q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i (∑_𝐚∉span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) )^{α}_(ii).
Now let's compute (i) and (ii) separately.
In (i), 𝐚 can be expressed as a linear combination of {𝐚_1,𝐚_2,...,𝐚_r}, and thus there exists some 𝐚_j whose corresponding coefficient is non-zero. Thus, by the Rearrangement Inequality, it can be derived that
∑_𝐚_jW_F(𝐚_j)(𝐲)^α_jW_F(𝐚)(𝐲) ^{α}≤∑_𝐚_jW_F(𝐚_j)(𝐲)^α_j+ {α}.
And (i) can be computed as
(i) ≤∑_r=1^⌊α⌋∑_α_1+...+α_r
=⌊α⌋∑_𝐚q^n(α-1)/q^kα∑_y∈𝔽_q^n𝔼_F∼ℬ∑_{𝐚_1,𝐚_2,...,𝐚_r}
⊆𝔽_q^k / {0}∏_i=1^r W_F(𝐚_i)(𝐲) ^α_i'
≤∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_𝐚 O(q^-ε n)
= O(q^-ε n).
In the first line the α_i''s satisfy ∑_i=1^rα_i'=α, and the second line is obtained by techniques similar to those in the proof of Theorem <ref>.
As for (ii), an averaging lemma over linear subspaces of (span{𝐚_1, …, 𝐚_r})^⊥ needs to be considered for the random linear code encoder F conditioned on F(𝐚_1), ...,F(𝐚_r). It can be easily derived that the ensemble of all linear codes containing the subspace span{𝐚_1, …, 𝐚_r} induces a uniform distribution over all subspaces of (span{𝐚_1, …, 𝐚_r})^⊥. Thus it can be derived that
𝔼_F|F(𝐚_1), ...,F(𝐚_r)∼ℬ' (∑_𝐚∉span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) )^{α}
≤ (𝔼_F|F(𝐚_1), ...,F(𝐚_r)∼ℬ'∑_𝐚∉span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) )^{α}
= (q^k-r-1/q^n-r-1∑_𝐜∉span{F(𝐚_1),F(𝐚_2),...,F(𝐚_r)} W_𝐜(𝐲) )^{α}
≤ (q^k-r-1/q^n-r-1)^{α}≤q^k{α}/q^n{α}.
With the subspaces averaging lemma,
(ii)
≤q^n(α-1)/q^kα∑_y∈𝔽_q^n∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_j (q^k - 1/q^n - 1)^r-j·
∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲) ^α_i𝔼_F|𝐜_1, ...,𝐜_r∼ℬ' (∑_𝐚∉span{𝐚_1,𝐚_2,...,𝐚_r} W_F(𝐚)(𝐲) )^{α}
by using the subspace averaging lemma ≤q^n(α-1)/q^kα∑_y∈𝔽_q^n∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_j (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲) ^α_iq^k{α}/q^n{α}
=q^n(⌊α⌋-1)/q^k⌊α⌋∑_y∈𝔽_q^n∑_r=1^⌊α⌋∑_α_1+...+α_r=
⌊α⌋∑_j (q^k - 1/q^n - 1)^r-j∑_{𝐜_1,𝐜_2,...,𝐜_r}⊆𝔽_q^n / {0}
rank{𝐜_1,𝐜_2,...,𝐜_r}=r-j∏_i=1^rW_𝐜_i(𝐲) ^α_i
By Theorem <ref>, we derive ≤ O(q^-ε n)+1.
§ RANDOM SELF-DUAL CODE SMOOTHING
Note that a self-dual code has rate R=0.5 and always contains the all-one vector 1. Another property is that self-dual codes have been proved to be almost balanced in <cit.>.
Let ℬ denote the set of (n=2t, k=t) self-dual codes in which the weight of every codeword is divisible by 4.
The number of codes in ℬ is
(2^{t-2} + 1)(2^{t-3} + 1) ⋯ (2 + 1)·2.
Recall that they all contain the vectors 0, 1.
Let 𝐯 be a vector other than 0, 1 with w(𝐯) ≡ 0 (mod 4). The number of codes in ℬ which contain 𝐯 is
(2^{t-3} + 1)(2^{t-4} + 1) ⋯ (2 + 1)·2.
Thus the averaging lemma for this almost balanced set of self-dual codes becomes as follows.
For the balanced set ℬ containing encoders of (n=2t, k=t) self-dual codes in which the weight of every codeword is divisible by 4, and any function f(·), the following identity holds:
1/|ℬ|∑_F∈ℬ∑_𝐚∈𝔽_2^k∖{0, F^{-1}(1)} f(F(𝐚)) = 1/(2^{t-2} + 1)∑_w(𝐯) ≡ 0 (mod 4)
𝐯≠ 0, 𝐯≠ 1 f(𝐯).
Or equivalently,
𝔼_F∼ℬ∑_𝐚∈𝔽_2^k∖{0, F^{-1}(1)} f(F(𝐚)) = 1/(2^{t-2} + 1)∑_w(𝐯) ≡ 0 (mod 4)
𝐯≠ 0, 𝐯≠ 1 f(𝐯).
When W is an additive noise channel and the rate R satisfies
R=0.5 > 1 - H(W)/n,
then
𝔼_F∼ℬ D(U_F+N||U_𝔽_2^n)→ 0
as n→∞.
Define the affine encoder Λ: 𝐚→ F(𝐚) + G, where F is a linear encoder in ℬ and G is an independent r.v. in 𝔽_2^n. The affine encoder has the property that for 𝐚≠𝐚', Λ(𝐚')=F(𝐚'-𝐚) + Λ(𝐚). Note that here F(𝐚'-𝐚) is independent of the r.v. Λ(𝐚).
2^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_2^n)=∑_𝐲∈𝔽_2^n (1/2^k∑_𝐚∈𝔽_2^k W_Λ(𝐚)(𝐲) )^α/(1/2^n)^α-1=2^n(α-1)/2^kα∑_y∈𝔽_2^n ( ∑_𝐚∈𝔽_2^k W_Λ(𝐚)(𝐲) )^α
𝔼_Λ 2^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_2^n)
=2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n𝔼_Λ ( ∑_𝐚∈𝔽_2^k W_Λ(𝐚)(𝐲) )^α
=2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n𝔼_Λ∑_𝐚∈𝔽_2^k W_Λ(𝐚)(y) ( ∑_𝐚'∈𝔽_2^k W_Λ(𝐚')(y) )^α-1
By using Jensen's inequality for 𝔼[X^α-1] ≤ (𝔼[X])^α-1, 1 < α < 2, we derive ≤2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲)+W_Λ(𝐚)+1(𝐲)+𝔼_F|Λ(𝐚)∑_𝐚'∈𝔽_2^k
F(𝐚'-𝐚)≠0, 1 W_Λ(𝐚')(𝐲) )^α-1
=2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲)+W_Λ(𝐚)+1(𝐲) +𝔼_F|Λ(𝐚)∑_𝐚'∈𝔽_2^k
F(𝐚'-𝐚)≠0, 1 W_Λ(𝐚)+F(𝐚'-𝐚)(𝐲) )^α-1
By using the averaging lemma, let 𝐱=𝐚'-𝐚, f(F(𝐱))=W_Λ(𝐚)+F(𝐚'-𝐚)(𝐲) =2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) + W_Λ(𝐚)+1(𝐲) + 1/2^t-2 + 1∑_w(𝐜) ≡ 0 4
𝐜≠ 0, 𝐜≠ 1 W_Λ(𝐚)+𝐜(𝐲) )^α-1
≤2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) + W_Λ(𝐚)+1(𝐲) + 1/2^t-2 + 1 )^α-1
By using (x+y+z)^α-1≤ x^α-1+y^α-1+z^α-1 ≤2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲)^α + 2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲)W_Λ(𝐚)+1(𝐲)^α-1
+2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚) (1/2^t-2 + 1)^α-1W_Λ(𝐚)(𝐲)
= 2^{2(α-1)t(1-R-H_α(W)/n)} + 2^{2(α-1)t(1-R-H'_α(W)/n)} + (2^{2t}/(2^t(2^{t-2}+1)))^{α-1}
Here
H(W)=lim_α→ 1H_α(W),H'(W)=lim_α→ 1H'_α(W),
and
H'_α(W)=1/1-αlog1/2^n∑_𝐲∈𝔽_2^n, 𝐱∈𝔽_2^n W(𝐲|𝐱) W(𝐲|𝐱+1) ^α-1.
The limit goes to 1 when 1-R-H_α(W)/n<0 if we first let t→∞ and then α→ 1. Notice that 1-R-H'_α(W)/n<0 due to the rearrangement inequality, which gives H_α(W)≤ H'_α(W).
Since the channel W is regular <cit.>, we can remove the r.v. G, and the final result is proven.
§ RANDOM QUASI CYCLIC CODE SMOOTHING
Denote by ℬ the set of (2t, t) quasi-cyclic codes with a(x) of odd weight. It has been proven in <cit.> that
|ℬ|=2^{t-1}-1. Moreover, when t ≡ ±3 (mod 8) and t is a prime for which 2 is primitive, each non-zero vector, except the all-one vector, belongs to exactly one code in ℬ.
For the balanced set ℬ containing encoders of (n=2t, k=t) quasi-cyclic codes in which a(x) has odd weight, where t ≡ ±3 (mod 8) and t is a prime for which 2 is primitive, and any function f(·), the following identity holds:
1/|ℬ|∑_F∈ℬ∑_𝐚∈𝔽_2^k∖{0, F^{-1}(1)} f(F(𝐚)) = 1/(2^{t-1} - 1)∑_w(𝐯) ≡ 0 (mod 2)
𝐯≠ 0, 𝐯≠ 1 f(𝐯)
Or equivalently,
𝔼_F∼ℬ∑_𝐚∈𝔽_2^k∖{0, F^{-1}(1)} f(F(𝐚)) = 1/(2^{t-1} - 1)∑_w(𝐯) ≡ 0 (mod 2)
𝐯≠ 0, 𝐯≠ 1 f(𝐯).
Under the same conditions, we can prove the smoothing of quasi-cyclic codes for the KL divergence.
When W is an additive noise channel and the rate R satisfies
R=0.5 > 1 - H(W)/n,
then
𝔼_F∼ℬ D(U_F+N||U_𝔽_2^n)→ 0
as n→∞.
If we further require of W that ∑_w(𝐜) ≡ 0 (mod 2) W_𝐜(𝐲) = ∑_w(𝐜) ≡ 1 (mod 2) W_𝐜(𝐲) = 1/2, then for α∈ (1,2), as n→∞,
𝔼_F∼ℬ D_α(U_F+N||U_𝔽_2^n)→ 0.
Define the affine encoder Λ: 𝐚→ F(𝐚) + G, where F is a linear encoder in ℬ and G is an independent r.v. in 𝔽_2^n. The affine encoder has the property that for 𝐚≠𝐚', Λ(𝐚')=F(𝐚'-𝐚) + Λ(𝐚). Note that here F(𝐚'-𝐚) is independent of the r.v. Λ(𝐚).
2^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_2^n)=∑_𝐲∈𝔽_2^n (1/2^k∑_𝐚∈𝔽_2^k W_Λ(𝐚)(𝐲) )^α/(1/2^n)^α-1=2^n(α-1)/2^kα∑_y∈𝔽_2^n ( ∑_𝐚∈𝔽_2^k W_Λ(𝐚)(𝐲) )^α
𝔼_Λ 2^(α-1) D_α(U_Λ(𝐚)+N||U_𝔽_2^n)
=2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n𝔼_Λ ( ∑_𝐚∈𝔽_2^k W_Λ(𝐚)(𝐲) )^α
=2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n𝔼_Λ∑_𝐚∈𝔽_2^k W_Λ(𝐚)(y) ( ∑_𝐚'∈𝔽_2^k W_Λ(𝐚')(y) )^α-1
By using Jensen's inequality for 𝔼[X^α-1] ≤ (𝔼[X])^α-1, 1 < α < 2, we derive ≤2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲)+W_Λ(𝐚)+1(𝐲)+𝔼_F|Λ(𝐚)∑_𝐚'∈𝔽_2^k
F(𝐚'-𝐚)≠0, 1 W_Λ(𝐚')(𝐲) )^α-1
=2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲)+W_Λ(𝐚)+1(𝐲) +𝔼_F|Λ(𝐚)∑_𝐚'∈𝔽_2^k
F(𝐚'-𝐚)≠0, 1 W_Λ(𝐚)+F(𝐚'-𝐚)(𝐲) )^α-1
By using the averaging lemma, let 𝐱=𝐚'-𝐚, f(F(𝐱))=W_Λ(𝐚)+F(𝐚'-𝐚)(𝐲) =2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) + W_Λ(𝐚)+1(𝐲) + 1/2^t-1 - 1∑_w(𝐜) ≡ 0 2
𝐜≠ 0, 𝐜≠ 1 W_Λ(𝐚)+𝐜(𝐲) )^α-1
≤2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲) ( W_Λ(𝐚)(𝐲) + W_Λ(𝐚)+1(𝐲) + 1/2^t-1 - 1∑_w(𝐜) ≡ 0 2 W_Λ(𝐚)+𝐜(𝐲) )^α-1
By using (x+y+z)^α-1≤ x^α-1+y^α-1+z^α-1 ≤2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲)^α + 2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚)W_Λ(𝐚)(𝐲)W_Λ(𝐚)+1(𝐲)^α-1
+2^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k𝔼_Λ(𝐚) (1/2^t-1 - 1)^α-1W_Λ(𝐚)(𝐲)( ∑_w(𝐜) ≡ 0 2 W_Λ(𝐚)+𝐜(𝐲) )^α-1
= 2^2(α-1) t(1-R-H_α(W)/n) + 2^2(α-1)t(1-R-H'_α(W)/n)
+ (1/2^t-1 - 1)^α-12^n(α-1)/2^kα∑_𝐲∈𝔽_2^n∑_𝐚∈𝔽_2^k1/2^n[ ( ∑_w(𝐜) ≡ 0 2 W_𝐜(𝐲) )^α + ( ∑_w(𝐜) ≡ 1 2 W_𝐜(𝐲) )^α]
If we let ∑_w(𝐜) ≡ 0 (mod 2) W_𝐜(𝐲) < 1 and ∑_w(𝐜) ≡ 1 (mod 2) W_𝐜(𝐲) < 1, this can be simplified as
2^{2(α-1)t(1-R-H_α(W)/n)} + 2^{2(α-1)t(1-R-H'_α(W)/n)} + 2·(2^{2t}/(2^t(2^{t-1}-1)))^{α-1}.
The limit goes to 1 when 1-R-H_α(W)/n<0 if we first let t→∞ and then α→ 1. Here 1-R-H'_α(W)/n<0 is satisfied by the rearrangement inequality.
If we have the additional requirement that ∑_w(𝐜) ≡ 0 (mod 2) W_𝐜(𝐲) = ∑_w(𝐜) ≡ 1 (mod 2) W_𝐜(𝐲) = 1/2, then for α∈ (1,2) the limit goes to 1.
By removing the r.v. G, the final result is proven.
§ AVERAGE TO AVERAGE CASE REDUCTION
The random code smoothing bound can be applied to reductions between LPN problems with different parameters <cit.>.
Let ε, η∈ (0, 1), C > 0 and n, k, t ∈ℕ be such that (for some constant) k/n = o(1) and
2 ln2 1 + η/1 - ε1/log_2(n/k)k/nt = C log_2(n)
Suppose that there exists an algorithm A which solves aDP(n_a, k, (1-ε)n_a/2(1 - 1/n^C(1+o(1)))) with success probability ε. Then there exists an algorithm which solves aDP(n, k, t) with probability bigger than Ω( ε/√(n_a)) - n_a2^- Ω(n).
The original proof in <cit.> did not use Bernoulli noise in the reduction directly, but instead used a reduction via ball noise. Here we show how to use Bernoulli noise directly to obtain the same reduction results.
𝔼_𝐆_k× n∼Λ 2^(α-1) D_α(𝐫𝐆^⊤ , ⟨𝐫, 𝐭⟩||U_𝔽_2^k,Ber_p)
= 𝔼_𝐆_k× n∼Λ 2^(α-1) D_α(𝐫+U_𝐜^⊥ , ⟨𝐫, 𝐭⟩||U_𝔽_2^n,Ber_p)
= 𝔼_𝐆_ (n-k)× n∼Λ 2^(α-1) D_α(𝐫+U_𝐜 , ⟨𝐫, 𝐭⟩||U_𝔽_2^n,Ber_p)
= ∑_y ∈𝔽_2^n𝔼_𝐆P(⟨𝐫, 𝐭⟩ = 1)^α P(𝐫 + U_C = 𝐲 | ⟨𝐫, 𝐭⟩ = 1)^α/(1/2^n)^α-1p^α-1 + ∑_y ∈𝔽_2^n𝔼_𝐆P(⟨𝐫, 𝐭⟩ = 0)^α P(𝐫 + U_C = 𝐲 | ⟨𝐫, 𝐭⟩ = 0)^α/(1/2^n)^α-1(1-p)^α-1
Let P(⟨𝐫, 𝐭⟩=1)=p and 𝐫∼ Ber_r^⊗ n, i.e. p=(1 - (1 - 2r)^|𝐭|)/2 = ∑_y ∈𝔽_2^n p ·𝔼_𝐆_ (n-k)× n∼Λ P(𝐫 + U_C = 𝐲 | ⟨𝐫, 𝐭⟩ = 1)^α/(1/2^n)^α-1 + ∑_y ∈𝔽_2^n(1-p)·𝔼_𝐆_ (n-k)× n∼Λ P(𝐫 + U_C = 𝐲 | ⟨𝐫, 𝐭⟩ = 0)^α/(1/2^n)^α-1
Notice that value of ⟨𝐫, 𝐭⟩ is a requirement on 𝐫, denote them as 𝐫_0, 𝐫_1 separately = ∑_y ∈𝔽_2^n p ·𝔼_𝐆_ (n-k)× n∼Λ P(𝐫_0 + U_C = 𝐲 )^α/(1/2^n)^α-1 + ∑_y ∈𝔽_2^n(1-p)·𝔼_𝐆_ (n-k)× n∼Λ P(𝐫_1 + U_C = 𝐲 )^α/(1/2^n)^α-1
≤ p· ( 2^(α-1) n(1-n-k/n-H_α( 𝐫_0)/n) + 1) + (1-p)· ( 2^(α-1) n(1-n-k/n-H_α( 𝐫_1)/n) + 1)
= p· 2^(α-1) n(1-n-k/n-H_α( 𝐫_0)/n) + (1-p)· 2^(α-1) n(1-n-k/n-H_α( 𝐫_1)/n) + 1.
The limit goes to 1 since 1-(n-k)/n-H_α(𝐫_0)/n<0 and 1-(n-k)/n-H_α(𝐫_1)/n<0.
Here we only compute H_α( 𝐫_0)/n and H_α( 𝐫_1)/n in the case when α=1, i.e. the entropy rate of 𝐫_0 and 𝐫_1.
H( 𝐫_0)/n=H( 𝐫_1)/n=(1-1/n)h(r)
where h(r)=-rlog r- (1-r)log (1-r).
Denote the n-dimensional random variable 𝐫_1 = (X_1, X_2, ..., X_n), where each X_i∼ Ber_r. Since ⟨𝐫_1, 𝐭⟩ = 1, 𝐫_1 should have an odd number of 1's in the positions where 𝐭 has 1. Thus, by reordering the positions, we get
𝐫_1 = (X_1', X_2', ..., X_n-|𝐭|', Y_1, Y_2, ..., Y_|𝐭|-1, Y_|𝐭|) = (X_1', X_2', ..., X_n-|𝐭|', Y_1, Y_2, ..., Y_|𝐭|-1, 1-Y_1- Y_2- ...- Y_|𝐭|-1).
Thus H( 𝐫_1) = ∑_i=1^n-|𝐭|H(X_i')+∑_i=1^|𝐭|-1H(Y_i)=(n-1)h(r).
Similarly by reordering we will get
𝐫_0 = (X_1', X_2', ..., X_n-|𝐭|', Y_1, Y_2, ..., Y_|𝐭|-1, Y_|𝐭|) = (X_1', X_2', ..., X_n-|𝐭|', Y_1, Y_2, ..., Y_|𝐭|-1, Y_1+ Y_2+ ...+ Y_|𝐭|-1).
And H( 𝐫_0) = ∑_i=1^n-|𝐭|H(X_i')+∑_i=1^|𝐭|-1H(Y_i)=(n-1)h(r).
Thus the optimal bound is k/n=h(r) ⇒ r=h^-1(k/n)=(k/n)/(-log_2(k/n))·(1+o(1)), which yields
1/2-1/2(1-2r)^|t| = 1/2-(1/2)^1+|t|log_2(1-2r)
= 1/2-(1/2)^1+|t|log_2( 1+2k/n/log_2k/n(1+o(1)))
= 1/2- (1/2)^1+|t|2/ln2k/n/log_2k/n(1+o(1))
=1/2 - 1/2n^C(1+o(1)),
which completes the reduction proof.
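The final parameter calculation can be reproduced with a few lines of code. The sketch below (our illustration; the concrete k, n, |𝐭| values are hypothetical) inverts the binary entropy function by bisection to obtain the noise rate r = h^{-1}(k/n) and then evaluates the bias p of ⟨𝐫, 𝐭⟩:

```python
import numpy as np

def h(r):
    """Binary entropy in bits."""
    return -r * np.log2(r) - (1 - r) * np.log2(1 - r)

def h_inv(y, tol=1e-12):
    """Inverse of h on (0, 1/2] by bisection, so that h(h_inv(y)) = y."""
    lo, hi = 1e-15, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) < y else (lo, mid)
    return (lo + hi) / 2

k, n, wt = 64, 4096, 300           # hypothetical parameters with k/n = o(1)
r = h_inv(k / n)                   # noise rate of each Ber_r coordinate of r
p = 0.5 - 0.5 * (1 - 2 * r) ** wt  # piling-up bias of <r, t> for a weight-wt t
print(r, p)                        # p approaches 1/2 - 1/(2 n^C) as in the reduction
```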
§ CONCLUSION
This paper delves into the critical role of the smoothing parameter and the smoothing bound, a key concept that involves adding sufficient noise to a discrete structure, such as a code, to make its distribution approximately uniform over the Hamming space. We explore the optimization of this parameter using Rényi divergence, deriving the smoothing bound for random linear codes across all Rényi parameters α∈ (1, ∞). We further refine this analysis by reducing random linear codes to classes of random self-dual and quasi-cyclic codes, enhancing their structural properties. An application of our results demonstrates an average-case to average-case reduction from LPN to the average-case decoding problem, utilizing Rényi divergence and focusing on Bernoulli noise.
|
http://arxiv.org/abs/2405.09444v1 | 20240515153935 | Desk-AId: Humanitarian Aid Desk Assessment with Geospatial AI for Predicting Landmine Areas | [
"Flavio Cirillo",
"Gürkan Solmaz",
"Yi-Hsuan Peng",
"Christian Bizer",
"Martin Jebens"
] | cs.CY | [
"cs.CY",
"cs.AI"
] |
RESEARCH ARTICLE
Desk-AId: Humanitarian Aid Desk Assessment with Geospatial AI for Predicting Landmine Areas
Flavio Cirillo^a (contact author; email: flavio.cirillo@neclab.eu),
Gurkan Solmaz^a,
Yi-Hsuan Peng^b,
Christian Bizer^b and
Martin Jebens^c
aNEC Laboratories Europe, Heidelberg, Germany; bUniversity of Mannheim, Mannheim, Germany; cInternational Committee of the Red Cross
May 20, 2024
==========================================================================================================================================================================================================================================================================================
The process of clearing areas, namely demining, starts by assessing and prioritizing potential hazardous areas (i.e., desk assessment) to go under thorough investigation of experts, who confirm the risk and proceed with the mines clearance operations.
This paper presents Desk-AId that supports the desk assessment phase by estimating landmine risks using geospatial data and socioeconomic information. Desk-AId uses a Geospatial AI approach specialized to landmines. The approach includes mixed data sampling strategies and context-enrichment by historical conflicts and key multi-domain facilities (e.g., buildings, roads, health sites). The proposed system addresses the issue of having only ground-truth for confirmed hazardous areas by implementing a new hard-negative data sampling strategy, where negative points are sampled in the vicinity of hazardous areas.
Experiments validate Desk-AId in two domains for landmine risk assessment: 1) country-wide, and 2) uncharted study areas. The proposed approach increases the estimation accuracy up to 92% for different classification models such as Random Forest (RF), Feedforward Neural Networks (FNN), and Graph Neural Networks (GNN).
mine detection, risk assessment, machine learning, geographical artificial intelligence
§ INTRODUCTION
Landmines are a blight in post-conflict regions. Since landmines are cheap to produce, easy to deploy, maintenance-free, and highly durable, massive amounts were excessively deployed during recent civil conflicts.
As of October 2021, at least 60 countries and other areas remain contaminated by antipersonnel mines <cit.>.
Uncleared landmines claimed more than 7000 casualties in 2020 alone, and the numbers, unfortunately, have been more or less steady year-on-year for the past 20 years <cit.>.
Furthermore, in the post-conflict period, unexploded landmines result in not only direct victims, degradation of land and contamination of natural resources, but also socio-economic underdevelopment among the affected populations <cit.>. For example, lands that are marked as hazardous cannot be used for agriculture, transportation, and communication infrastructure.
International response to the landmine problem is referred to as humanitarian mine action (HMA) <cit.>.
The purpose of the mine action is to reduce the impacts of Explosive Remnants of War (ERW)
[ERW denotes all explosive contamination from war, such as landmines and unexploded ordnance (UXO) <cit.>. In this article, the terms ERW, hazard, and landmine are used interchangeably.]
on local populations and to return cleared land to local communities for land rehabilitation.
Many global non-governmental organizations (NGOs), including International Committee of the Red Cross (ICRC), United Nations Mine Action Service (UNMAS), and the Geneva International Centre for Humanitarian Demining (GICHD), conduct demining operations that positively impact local economies and communities <cit.>.
Nevertheless, a significant challenge in the demining operations is the mismatch between the area's size and the available resources.
The cost of removing a landmine can be up to 300 times the cost of producing it <cit.>. Further, for every hour spent deploying a mine, over 1000 hours need to be spent to demine it <cit.>. A team of 10 persons can manually clear at most 500 square meters per day <cit.>.
How to effectively plan the deployment of the limited demining resources remains a persistent problem for the demining experts <cit.>.
Fig. <ref> shows the process of demining, which starts with the desk assessment. A human expert analyzes data such as reports, historical records and maps with the aid of visualization and data tools. As a result, sub-regions are assessed as potentially dangerous. Furthermore, during this step the sub-regions are prioritized under different criteria (e.g., uncertainty of suspected areas, socio-economic importance).
The risk assessment and prioritization are essential, since a false negative can impact human life, while a false positive affects socio-economic factors (and thus human quality of life).
Once potential hazardous areas are identified, a non-technical survey looks for hints to mark an area as suspected hazardous area (SHA) such as conducting interviews with local communities to gather information about the history of the area and potential mine incidents, or using technological tools such as drones <cit.>.
A technical survey involves the deployment of experts to the SHA to find actual proof of mine contamination using tools such as metal detectors and ground-penetrating radar. The adoption of recent technologies such as deep learning has also demonstrated good results in speeding up this process <cit.>.
Only after the mines have been located, with precise boundaries of Confirmed Hazardous Areas (CHAs), are the mines removed and the area cleared.
Each of the described steps operates at a different scale, from coarse to fine granularity, with the effort per km^2 ranging from low to very high. Therefore, it is very important to complete each step with the highest possible accuracy.
In this article, we focus on the desk assessment, aiming at enhancing the instruments for the human domain expert to assess the landmine presence risks of sub-regions. Having an accurate risk assessments before the deployment of (technical or non-technical) people on the field is crucial to speed up the demining process of technical survey and reduce the costs.
While the investigation of new technologies (e.g., GIS and remote sensing) is not recent, automated landmine risk prediction systems still need to be explored. There are few studies <cit.> that apply ML to the task of assessing the risk of landmine presence.
In this work, we aim to build a pipeline for automatic landmine risk assessment in post-conflict regions. We use confirmed landmine contamination data across a whole country as the starting dataset. We combine it with geographical, socio-economic, and remnants-of-war information to build a meaningful feature set. The selected attributes have the characteristic of being easily computed from abundant data (e.g., building locations and waterways), so as to build a system easily applicable to different regions.
The proposed approach does not target the precise localization of landmines but rather the identification of potentially hazardous areas. The smallest granularity considered in the proposed approach is 50 meters, which is acceptable even for areas with a high erosion factor after many decades, as in the case of the Skallingen peninsula coastline (Denmark), contaminated by mines deployed during the Second World War and cleared only in 2012 <cit.>.
Further, we address the uncertainty of negative points by sampling points in the vicinity of the CHAs, which are the positive-point areas. The idea is that areas close to the CHAs are more likely to have been surveyed by technical and non-technical experts. We test different combinations of distances to the CHAs for the negative sampling.
We extensively test our sampling approach in multiple scenarios: i) training and testing for whole-country risk assessment, ii) testing in areas in the vicinity of CHAs, and iii) testing in completely uncharted areas.
We also explore different ML approaches, both tabular-data-based and graph-data-based.
Our objectives are two-fold: to increase the geographic granularity for the detection of hazardous areas, and to predict risk in unexplored areas.
This work selects Afghanistan as a case study due to it being one of the countries that has suffered the most from landmines and the related ERW.
The main contributions of this paper are highlighted as follows:
* Building a dataset from geographical and socio-economic data. We develop a generic landmine detection pipeline to build a mine contamination dataset across the whole country's land, as well as handling features among the geographical, socio-economic, and remnants-of-war sectors. We also provide insight on the role played by features of different types and their relationships with each other.
* Design an intelligent balanced data sampling method to address the uncertainty issue in negative sampling. We address the challenge of having ground truth only for positive points by proposing a strategy to build a dataset composed of positive points and negative points geographically close to the positive points. We leverage the idea that areas close to the ground-truth positive areas are more likely to have been surveyed by domain experts compared to remote locations. We test granularities of 50, 500 and 5000 meters and combinations of them.
* Explore Graph Neural Networks to exploit the nature of the geographical data. We implement a location-based graph construction methodology for modeling neighboring geographical locations. The implemented Graph Neural Networks (GNN) outperform other commonly-used algorithms such as Feedforward Neural Networks (FNN).
* Extensive experimentation of our pipeline in whole country and uncharted area scenarios. We first test our approach assessing risk for the whole country of Afghanistan. Then, we split the dataset geographically by removing controlled areas for which we have ground-truth CHAs points from the training dataset. These controlled areas are used for testing as uncharted areas. We select two different study areas with different characteristics: an urban area (around the capital Kabul) and an extended remote area in the center of Afghanistan.
We also test the impact of different size of feature set. For every experiment we test various classes of ML model.
* Visualization of the results to support decision-making. Desk-AId is meant to augment already existing desk assessment tools. Thus, we serialize the results into importable files and show them in QGIS [QGIS, <https://qgis.org>], a free and open-source geographic system, as an example visualization to support the domain expert. QGIS is often adopted as a visualization tool for landmine desk assessment purposes.
The final target of this work is to create a viable system that is easy to re-use and quickly applicable to a new region with different characteristics. The aim is to operationalize the desk assessment of landmine hazardous areas with minimal effort on the domain expert speeding up the demining operation in multiple countries.
§ RELATED WORK
In the following, we first introduce an emerging new research topic, namely Geospatial AI or GeoAI, that produced relevant studies to this paper. We, then, examine relevant research questions in agriculture mining. Finally we discuss current investigations that work on the landmine detection and prediction problems.
§.§ Geospatial AI
Geospatial Artificial Intelligence (shortly called as “GeoAI”) is an emerging field that leverages high-performance computing to analyze large amounts of spatial data using AI techniques such as ML, deep learning, and data mining. It combines aspects of spatial science, requiring specific technologies, such as GIS, with AI to extract meaningful insights from big data <cit.>. Constant expansion of big spatial data is one of the reasons to drive GeoAI. Two prominent examples are remote sensing and volunteered geographic information (VGI), which encapsulates user-generated content with a location component. In recent years, VGI exploded with the advent and continued expansion of social media and smartphones <cit.>. The OpenStreetMap (OSM) <cit.>, that we use in this work, demonstrates the benefit of VGI: everyone can use a phone to access and annotate the map attributes.
Similar to this work, Lin et al. <cit.> apply the RF model and mine OSM spatial big data to select the most important geographic features (e.g., land use and roads) for their task, PM_2.5 concentration prediction. Zhu et al. <cit.> demonstrates the promising use of graph convolutional neural networks (GCNNs) <cit.> in geographic knowledge prediction tasks. Their case study is designed as a node classification task in the Beijing metropolitan area for predicting the unobserved place characteristics (e.g., dining, residence) based on the observed properties as nodes and specific place connections as edges using GCNNs.
They compare the result of different edges inside the graph, namely no connection, the connection between spatially adjacent places, and spatial interaction, which they incorporate a taxi traffic record between locations. Since the edge type of spatial interaction displays the best overall accuracy, they conclude that performance can be improved by using more relevant place connections and more information explanatory characteristics. Even though the geographic data at this work does not have an existing graph structure, we consider connecting adjacent places and comparing the GCNNs result with Feedforwards Neural Networks which treat each location as an independent individual. Zhang et al. <cit.> study the problem of anomaly detection of data sources in geographical regions (i.e., urban areas) using spatio-temporal data. Similar techniques can complement our work through their application in the humanitarian datasets where data source anomalies may exist.
§.§ Agriculture Mining
Due to the high reliance on geographic data for prediction models and enormous economic benefits, the abundant technique used in the agriculture mining domain is relevant to the ones we apply in this work.
Agriculture mining, or smart farming, is the research field that tackles the challenges of agricultural production in terms of productivity, environmental impact, food security, and sustainability <cit.>. One of the concepts, namely precision agriculture, is the generation of spatial variability maps that employ precise localization of point measurements in the field. This is analogous in the mine action where the technical survey aims to reduce the size of the mine-contaminated area.
Schuster et al. <cit.> explore the use of a clustering algorithm (k-means) to identify management zones of cotton, with the dependent variable being cotton yield and the independent variable including multi-spectral imaging of the crop and physical characteristics of the field, e.g., slope, soil. The research does not, however, consider more advanced algorithms. Harshath et al. <cit.> demonstrates an encouraging use of more advanced technologies like deep neural networks (DNN), random forest (RF), and linear discriminant analysis on classifying the land as farming/non-farming using geospatial information such as soil type, strength, climate, and type of crop.
To tackle the crop yield prediction problem, Fan et al. <cit.> proposes a combination of GNN and recurrent neural networks (GNN-RNN) approach incorporating geographic and temporal information. Similar to our work, they compare machine learning techniques trained on geographic factors and predict nationwide. They posit that GNN can boost the prediction power of a county's crop yield by combining the features from neighboring counties. Their result shows that the graph-based models (GNN and GNN-RNN) outperform competing baselines such as long short-term memory (LSTM) and convolutional neural network (CNN), illustrating the importance of geographic context in graphs.
§.§ Landmine Detection
The research that focuses on landmine detection problems with machine learning can be categorized into two groups according to different input sources. The first group of methods reads remote sensing data such as satellite images, hyperspectral images, or the normalized difference vegetation index (NDVI). Several studies have demonstrated the usefulness of image data <cit.>. Still, according to Makki et al. <cit.>, the detection performance suffers from a trade-off between computational complexity and detection performance. Furthermore, different types of remote sensing produce varying advantages in different environments. Therefore, the benefit of using remote sensing is highly dependent on the use case <cit.>.
Also, it is challenging to directly correlate landmine risk with the environmental factors that impact them from the remote images. As a result, another approach focuses on gathering ecological factors and using them as inputs to train models directly.
While early works such as <cit.> mainly focus on spatial statistics analysis in combination with GIS usage, <cit.> and <cit.> could be considered highly relevant for our work because of their similarities in implementing ML techniques on landmine risk prediction problems. They also use explanatory variables from mainly open source data, including land cover (water channel and buildings), remnants of war indicators (control area, conflict area, medical facility, roads, and border), and topography (elevation and slope). Schultz et al. <cit.> studies the presence of landmines in different scales of areas however without exploring granularity aspects that are important for geographical and political considerations.
Rafique et al. <cit.> analyzes the information of confirmed hazardous areas and build a model to assess the risk of areas in the immediate vicinity of the hazardous areas. The prediction is progressively extended to farther areas once the suspect is confirmed or discarded. However, this approach slows down the desk assessment step. The conservative approach of Rafique et al. <cit.> is motivated by the natural ambiguity in the data since while a CHAs is surely hazardous, areas outside the CHAs are often uncharted and therefore might be either hazardous or safe.
This makes the ML models to reach good performance but limits the use case to predicting only in reduced areas. There exist a recent follow-up research which applies the Desk-Aid approach for the mines in Colombia <cit.>. The study also focuses on the interpretable models and domain expert usage of the system.
§ DESK-AID PIPELINE OVERVIEW
Desk-AId is a data processing pipeline covering multiple steps, from data acquisition to the AI-based risk assessment that supports landmine desk assessment. Fig. <ref> shows the overall pipeline with details of the different approaches and techniques adopted in this article. First, geospatial source data is acquired from multiple sources; more information on the data is reported in Table <ref>. Then, we apply different sampling techniques for positive points (i.e., within hazardous areas) and for negative points. In the first case, we have a ground-truth dataset of polygons representing confirmed hazardous areas, so we sample points in the form of (latitude, longitude) within the polygons. In the second case, we have no confirmed clear areas; therefore, we sample geographical points within the national borders, excluding the hazardous polygons. This sampling generates noisy data, since it is not certain whether a sampled point is actually negative. We adopt different techniques to mitigate this noise, presented later in section <ref>.
The pipeline then uses the locations of the samples to query the geospatial data sources and generate features with different algorithms. In this article, we use state-of-the-art algorithms implemented in the QGIS framework. After this step, we have locations and features for negative and positive samples generated with different approaches. We blend these samples to compile a balanced dataset for training a machine learning model. As ML models, we experiment with classifiers and with more sophisticated neural networks. In particular, we model a graph using geographic distance and train a graph neural network (GNN) (more details in section <ref>).
The trained models are applied to any geographic point (generated either by the sampling for ML testing, or selected by the domain expert for the desk assessment) to infer a risk assessment that can be visualized in a geographic system.
§ BUILDING THE DATASET
We build the dataset addressing two challenges: i) using openly available data to maximize solution replicability, and ii) dealing with the lack of ground truth for negative points.
The data considered for the ground truth is a collection of CHA polygons and a set of geographical and socio-economic datasets. All the dataset files are in the shapefile (shp) and Geographic Tagged Image File Format (tiff) formats. We use the open-source software adopted as a platform by the domain experts, Quantum Geographic Information System (QGIS), to process the datasets, calculate and generate features, and output the aggregated data as CSV files.
§.§ Source data to features
We build the dataset starting from the hazard polygons reported by Afghan mine action NGOs and authorities before 2020. The hazard data was originally collected by numerous NGOs and authorities over decades and entered into the Information Management System for Mine Action (IMSMA) system[IMSMA Wiki, <https://mwiki.gichd.org/IM/Main_Page>]. From this data, we select the relevant hazard types, such as landmines and explosive remnants of war (ERW).
The data consists of 12,098 polygons spread throughout Afghanistan (see the red areas in Fig. <ref>). The hazard polygon dataset also contains other variables that we do not take into consideration, since we cannot reproduce them for the negative points.
We then collect geographical data from multiple sources covering different types of information within the national border of Afghanistan. For the selection of features, we aim for the most abundant data in order to easily apply the system to another country or region. We include datasets reporting building presence as polygons, financial infrastructure, education facilities, health facilities, road lines, and waterways. From this information, given the coordinates of a data point, we calculate the distance to the closest polygon or point (depending on the dataset). We use the QGIS package 'Distance to nearest hub (points)'. The algorithm computes the distance between the origin sample points and their closest destination. Distance calculations are based on the center of each feature, and the resulting layer contains the origin point with an additional field indicating the identifier of the nearest destination feature (the categorical feature here) and the distance to it. For example, we input the sample points as the source layer and indicate education facilities as the destination hubs layer; the algorithm then outputs the distance between each sample point and its nearest education facility.
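As a hedged illustration of this step outside of QGIS, the nearest-hub distance can be reproduced with a KD-tree; the sketch below assumes both layers have already been projected to a metric CRS and exported as plain coordinate arrays, and the file names are placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical exports in a projected CRS (meters)
sample_xy = np.loadtxt("sample_points.csv", delimiter=",")     # (N, 2) sampled points
facility_xy = np.loadtxt("edu_facilities.csv", delimiter=",")  # (M, 2) facility centers

tree = cKDTree(facility_xy)
dist, idx = tree.query(sample_xy, k=1)  # distance to, and id of, the nearest facility
```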
Depending on the meaningfulness of the original dataset (we sometimes face a high portion of null data), we also report as a feature the type of entity, such as school or university for education facilities, or clinic or hospital for health facilities. The number of categories per feature varies: 4 for education facilities, 8 for air traffic facilities, 6 plus 1 unknown category for health facilities, and 10 for controlled-area authorities.
Calculating the distance of a sample point to a line is similar to calculating the distance to the nearest point. However, a line differs from a point in that it has no naturally defined center, from which the QGIS package computes distances; indeed, the distance could not be correctly calculated when setting a line layer as the destination. A workaround is to first convert a line into multiple points and then use the 'Distance to nearest hub (points)' package. We use the built-in algorithm "Extract Vertices", which takes a line as input and generates a point layer with the vertices of the input line. The road lines are converted to 4,649,404 points, and the waterway lines to 968,161 points. Each resulting point layer then serves as a destination against which the points of interest calculate the nearest distance.
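A minimal sketch of this vertex extraction with geopandas/shapely is shown below; it assumes the layer contains simple LineString geometries (the path is a placeholder), and the resulting vertex array can feed the KD-tree query sketched earlier:

```python
import numpy as np
import geopandas as gpd

roads = gpd.read_file("roads.shp")  # hypothetical path to the road-line layer
# Flatten every LineString into its vertices, mimicking QGIS "Extract Vertices"
vertices = np.vstack([np.asarray(geom.coords) for geom in roads.geometry])
```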
We also include the Georeferenced Event Dataset from the Uppsala Conflict Data Program (UCDP) <cit.>. This dataset reports events of organized violence with at least one casualty. We use it to calculate, for each data point, the distance to a conflict and the estimated deaths.
Finally, we also take into consideration population density and topographical information such as elevation.
Deriving population density and elevation for points of interest fundamentally differs from generating features from polygons, points, or lines, since the sources are raster data with continuous values across the whole country. To extract the value at a point of interest, we use the QGIS plugin "Point sampling tool"[Point sampling tool, <https://github.com/borysiasty/pointsamplingtool>]. This tool samples the raster values of the raster cells at a given geographical point. Thus, we can specify the sample points and have the algorithm create a file containing the population density and elevation values at each sample point's location.
The hill slope (in percent) is calculated from the elevation layer using the 'slope' package from GDAL[GDAL documentation, <https://gdal.org/programs/gdaldem.html>] in QGIS; the output is also a 30-meter grid raster layer, like the elevation. Since it is raster data, we again use the point sampling tool to extract the value at the location of the points of interest.
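The same point-sampling of raster layers can be sketched with rasterio, assuming the query coordinates are expressed in the raster's CRS; the path and variable names below are placeholders:

```python
import rasterio

points = [(x, y) for x, y in sample_coords]  # coordinates in the raster CRS
with rasterio.open("population_density.tif") as src:
    density = [val[0] for val in src.sample(points)]  # band-1 value under each point
```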
§.§ Dataset for ML training
We build a balanced dataset for ML training by sampling points within polygons with landmine presence (reported in the hazard dataset, see Table <ref>) and within absence regions, respectively.
We sample two points per hazard polygon, regardless of the polygon's size, to avoid losing the information of small polygons.
After this step, we have the locations of the positive data points and can start mapping points to the explanatory variables calculated from the geographical layers (Table <ref>).
Then, we collect the same number of data points for the hazard-absence class, the negative points. The first approach we follow is to randomly sample points <cit.> throughout Afghanistan that do not lie within the hazard polygons. As seen in Fig. <ref>, the data is very sparse and some areas are very far from any recorded hazard. That does not mean there are no hazards in those locations: we are only sure of the hazard presence for areas covered by the given dataset, while we have no reliable information for the other geographic areas.
As a second approach for negative sampling, we exploit the concept of hard negative mining. We define a buffer zone around the hazard polygons using a heuristic distance and select points within that area. In this way, we ensure that the negative samples have higher similarity to the positive samples.
We build the negative samples using three buffer-zone distances, namely 50 meters, 500 meters, and 5,000 meters. The numbers are chosen heuristically from the observation that the minimum distances from features to sample points (e.g., Distance to Building) are roughly 50 meters; the three distances let us experiment with the effect of buffer size. Fig. <ref> illustrates an example of hard negative points generated in the buffer zones of hazard polygons. Sampled points in a buffer zone that fall inside another hazard polygon are filtered out, so as to guarantee an equal number of negative samples.
We use the QGIS "Buffer" tool to draw the buffer zones, convert the polygons to lines, and place points along the buffer line using the QGIS "QChainage" plugin[QGIS plugin "QChainage", <https://github.com/mach0/qchainage>].
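A possible sketch of this hard negative generation with shapely is given below; the buffer distance, point count, and filtering against other hazard polygons follow the description above, while the geometry variables are assumed to be already loaded:

```python
import numpy as np
from shapely.geometry import Polygon

def hard_negatives(hazard: Polygon, buffer_m: float, n_points: int, all_hazards):
    """Equidistant candidate negatives on the buffer ring of one hazard polygon,
    dropping candidates that fall inside any other hazard polygon."""
    ring = hazard.buffer(buffer_m).exterior          # buffer boundary, like QGIS "Buffer"
    steps = np.linspace(0.0, ring.length, n_points, endpoint=False)
    pts = [ring.interpolate(s) for s in steps]       # QChainage-like equidistant points
    return [(p.x, p.y) for p in pts
            if not any(h.contains(p) for h in all_hazards)]
```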
§.§ Study Areas
In addition to the desk assessment of the whole country, we also explore the performance of our approach when applying an ML model to data points lying in unseen areas. For this purpose, we select two study areas with different characteristics; both are shown in Fig. <ref>.
Study area 1 (SA1) is located closer to the center of the country and has a very low population density (around 8.5 persons/km^2). It covers 360 square kilometers. One obvious distinction of SA1 is its variety of elevation, which ranges from 1,140 to 3,578 meters. The slope ranges from zero to five percent.
Study area 2 (SA2) is close to the capital of Afghanistan, Kabul (34.31 N, 69.12 E). It covers 44.5 square kilometers. The terrain in this region is mainly flat, with an elevation of around 2,500 meters. Because the area is near the capital, it has a relatively high population density (between 17.5 and 53.7 persons/km^2) and short distances to community facilities such as roads, buildings, financial and educational facilities, airports, and health sites.
Because of the terrain and size of the study areas, the features of SA1 show a wider variety than those of SA2.
Fig. <ref> illustrates a detailed comparison of the numeric features in both study areas.
§ MACHINE LEARNING MODELS
We test the generated datasets with classifiers and with neural networks. We use two feature sets to examine the effect of adding attributes: one set of seven features (marked with a star in Table <ref>) and an expanded set of eighteen features.
The neural network models always use all eighteen attributes, since they are more robust to an abundant feature set.
We first implement a feedforward artificial neural network (FNN) and choose the standard AdamW optimizer <cit.>. An early stopping callback from the Keras package is set to avoid overfitting; since the dataset is balanced in this case, we choose the validation accuracy (val_accuracy) as the monitored metric and set the patience to 50.
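A minimal Keras sketch of this setup is shown below, assuming a recent TensorFlow where AdamW is available under tf.keras.optimizers; the layer sizes and epoch budget are our assumptions, not values from the article:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(18,)),
    tf.keras.layers.Dense(64, activation="relu"),   # hidden sizes are assumptions
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.AdamW(),
              loss="binary_crossentropy", metrics=["accuracy"])

early = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=50,
                                         restore_best_weights=True)
model.fit(X_train, y_train, validation_split=0.2, epochs=500, callbacks=[early])
```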
Further, we explore Graph Convolutional Neural Networks (GCNNs), as they exploit the graph structure to gather node information from neighborhoods in a convolutional manner. The main idea of GCNNs is to generate a node v's representation by aggregating its own features x_v and its neighbors' features x_u, where u ∈ N(v). They have proven well-suited for modeling a graph of interconnected geographic locations <cit.>.
Before implementing GCNNs, we need to build a graph. Considering the characteristics of GCNNs and the available data, we define a location-based graph structure as follows:
we assume a set of locations (points on the map) where each location has characteristics 𝐗 that can be represented as a feature vector [x_1, x_2, ...], with x_i denoting the value of the ith dimension of 𝐗. E refers to the connections between location points. Considering the complexity and the purpose of this work, we use the QGIS package 'Distance Matrix' to identify the five nearest neighbors of each point and calculate the distance to each of them. Then, a location-based graph G = (V,E) is constructed to connect the location points. Each point on the map is formalized as a node v_i ∈ V in G, and the point features 𝐗 are encoded as node attributes x_k ∈𝐗 on every v_k ∈ V.
The place connections are represented as edges E, where e_i j=(v_i, v_j) ∈ E denotes an edge pointing from v_i to v_j. As stated before, the neighborhood of a node v is defined as N(v)={u ∈ V |(v, u) ∈ E}. Here, we have edge attributes 𝐗^e, where 𝐗^e∈𝐑^m × c is an edge feature matrix with 𝐱_v, u^e∈𝐑^c representing the feature vector of an edge, i.e., the distance between the two location points v_i and v_j. Once the graph is defined, it can be fed to the GCNNs, which generate node v's class (i.e., presence of a landmine) by aggregating v's own features x_v and its neighbors' features x_u, where u ∈ N(v).
To load the graph data into the model, we aggregate the graph information into a tuple (see the sketch after this list):
* node features x_k: a two-dimensional array where each row corresponds to a node and each column corresponds to a feature of that node.
* edges e_i j: a two-dimensional array with two rows and as many columns as there are edges. The first row holds the starting node of each edge and the second row the ending node. We take the links between the five nearest neighbor points.
* edge weights 𝐱_v, u^e: a one-dimensional array with length equal to the number of edges. It quantifies the relationships between nodes in the graph; the weight corresponds to the distance between the two location points.
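Under the assumption that the point coordinates and features are already available as NumPy arrays, this tuple can be sketched with a 5-nearest-neighbor query:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# coords: (N, 2) point locations; feats: (N, F) node feature matrix (assumed given)
knn = NearestNeighbors(n_neighbors=6).fit(coords)  # self-match + 5 nearest neighbors
dist, idx = knn.kneighbors(coords)

src = np.repeat(np.arange(len(coords)), 5)
dst = idx[:, 1:].ravel()              # column 0 is the self-match; drop it
edges = np.stack([src, dst])          # (2, E) edge array
edge_weights = dist[:, 1:].ravel()    # geographic distance per edge
graph = (feats, edges, edge_weights)
```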
We implement the GCNN node classification model following the approach of You et al. <cit.>. First, we apply the FNN module implemented above to preprocess the node features and generate the initial node representations.
Next, we implement two graph convolution layers built from the graph information to compute the node embeddings; too many graph convolutional layers may cause oversmoothing <cit.>. Finally, the FNN module is applied again to generate the final node embeddings, which are fed into a sigmoid layer to predict the node class for our binary classification problem.
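A hedged sketch of such a model with PyTorch Geometric is given below; the hidden size is an assumption, and note that GCNConv interprets larger edge weights as stronger connections, so raw distances may need to be inverted (e.g., 1/d) before being passed in:

```python
import torch
from torch_geometric.nn import GCNConv  # assuming PyTorch Geometric is installed

class LandmineGCN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.pre = torch.nn.Sequential(torch.nn.Linear(in_dim, hidden), torch.nn.ReLU())
        self.conv1 = GCNConv(hidden, hidden)  # only two layers, to limit oversmoothing
        self.conv2 = GCNConv(hidden, hidden)
        self.post = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, edge_weight=None):
        h = self.pre(x)                                      # initial node representations
        h = torch.relu(self.conv1(h, edge_index, edge_weight))
        h = torch.relu(self.conv2(h, edge_index, edge_weight))
        return torch.sigmoid(self.post(h)).squeeze(-1)       # landmine probability per node
```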
§ EXPERIMENTAL RESULT ANALYSIS
§.§ Evaluating the approaches for negative sampling
We implement our baseline pipeline with the random sampling approach <cit.>, where positive samples (i.e., hazard presence) come from the records and negative samples are randomly sampled over the whole geographic area; in our case, we sample throughout the whole of Afghanistan.
We train three classifiers (Logistic Regression, Random Forest, and XGBoost), splitting the data into 75% for training and 25% for testing with a balanced dataset between the two classes. Random Forest reaches the best performance, with an F1 score macro-averaged over the two classes equal to 0.9. We macro-average the F1 because desk assessment for mine detection requires both correct classification of hazards (for human safety reasons) and no misclassification of non-hazardous areas (for socio-economic reasons).
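The sketch below reproduces this evaluation protocol with scikit-learn and xgboost; hyperparameters are left at their defaults, which is our assumption rather than the article's exact configuration, and X, y denote the feature matrix and labels:

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.metrics import f1_score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier()),
                  ("XGB", XGBClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, f1_score(y_te, clf.predict(X_te), average="macro"))
```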
Fig. <ref> visualizes the results of the random sampling approach on a map. The demo area is in the province of Hilmand (31.36 N, 63.95 E), in the southwest of Afghanistan. The distance between positive and negative points ranges from 4,100 to 28,300 meters; in such scenarios, it is relatively easy for the ML model to distinguish hazardous areas from clear areas.
For the desk assessment application, it is also important to reduce the size of the flagged sub-regions, so as to reduce the effort (and cost) of the subsequent steps in the demining operations. Therefore, we tested the performance of this model on points within 50, 500, and 5,000 meters of the recorded hazards. Fig. <ref> shows the results of the model trained with the random sampling approach on the different testing datasets. The performance when testing on points closer to the hazardous areas drops to between 0.49 and 0.57.
We then train the model using the hard negative sampling method with the three different buffer areas (50 m, 500 m, and 5,000 m) and a balanced mix of them. Fig. <ref> shows the performance of the random and hard negative training approaches on the different testing datasets. The hard negative approach outperforms the random approach when tested on hard negatives. Nevertheless, when tested on the random dataset, the hard negative approaches do not perform well. This is because the hard negative points are geographically tied to the positive points, which are limited to the particular geographical areas where the confirmed hazardous areas (CHAs) are recorded; the models therefore do not learn about locations for which we have no records, such as the vast empty areas depicted in Fig. <ref>. Thus, we tested a hybrid approach that mixes hard negatives of all three buffer sizes with random negative points representing the rest of the country. The last set of bars in Fig. <ref> shows that this approach is the best trade-off across all testing scenarios.
§.§ Testing the Study Areas
We also test the different trained models on the two study areas. Note that the landmine contamination in both study areas is highly imbalanced: SA1 has 9% and SA2 only 6% landmine presence. Fig. <ref> and Fig. <ref> show the results. For each experiment, we remove from the training dataset the points that lie within the study area under focus and test on the points within that area. The hard negative and hybrid approaches reach the best results.
Fig. <ref> and Fig. <ref> show the Receiver Operating Characteristic (ROC) curve and the area under the ROC curve (AUC). The ROC curve is a graphical representation of the trade-off between the true positive rate (sensitivity) and the false positive rate (1 - specificity) across different threshold values. For the landmine risk assessment task, this translates into the model's ability to correctly flag truly hazardous points against its tendency to incorrectly flag safe points as hazardous.
The AUC measures the area under the ROC curve. Fig. <ref> shows that RF outperforms the other models, with the highest AUC scores for the 500 m (0.640) and 5,000 m (0.636) hard negative samples. XGB performs second best for the largest buffer (AUC = 0.622).
To understand the difference between the two study areas, we plot the distributions of the numeric features in both areas as box plots in Figure <ref>. We observe that most of the features in SA2 cover a wider range of values in the data distribution. The two study areas were deliberately selected in regions with different characteristics, as described in the Study Areas section.
This characteristic of the data gives RF a high potential to distinguish the testing data points, as it was trained on the whole country's land and has therefore covered a wide range of feature distributions.
The high feature variability in SA2 can also explain the poorer performance of XGB and LR. The box plots in Figure <ref> visualize outliers as data points located outside the whiskers; there are outliers in features such as Distance to Road, Distance to Water, Distance to Conflict Area, Slope, and Elevation in SA2.
XGB is known to be more sensitive than other tree-based models, such as RF, because gradient boosting is easily impacted by outliers. When learning from the whole country and taking a wide range of features into training, it runs a high risk of overfitting to the outliers. The same holds for LR, where outliers can significantly influence the decision boundary. On the contrary, RF averages multiple decision trees, reducing the impact of outliers.
§.§ Adding features to the dataset
To improve the performance, we include all 18 features for training the model.
Fig. <ref> shows the correlation matrix for the hard sampling approach configured with the 500-meter buffer.
We notice that hardly any feature has a high correlation with the target label HazardType, which indicates landmine presence. Nevertheless, some features show noticeable relations with each other.
The effect of adding input features to the ML model can be compared in Fig. <ref>. For all the experiments, we trained and tested again with the three classifiers, noting that Random Forest again performs best.
We observe that adding features is beneficial in almost all the scenarios, especially for the hard negatives with the smallest buffer. Where it is not, an explanation might be that adding new features can lead to overfitting, reducing the classifier's ability to generalize. This phenomenon is noticeable when testing on the study area 2 scenario, as depicted in Fig. <ref>.
The feature importances for the 500 m hard negative samples, reported in Table <ref> and Table <ref>, also validate that some of the added features, such as Distance to Control Area and Distance to Conflict Area, are relevant for the model. Looking deeper into the feature importances, we observe that the reduced feature set shows a similar ranking except for Population Density. For the expanded feature set, the top important features mostly overlap with those of the reduced set, adding Distance to Control Area and Distance to Conflict Area to the list.
We encounter similar behaviour with the other tree-based model, XGB. The Logistic Regression (LR) model, however, does not show improvement.
This can be explained by calculating the Variance Inflation Factor (VIF) <cit.> (see Table <ref>).
A common rule of thumb is that a VIF score greater than 5 or 10 indicates high multicollinearity.
The features added in the expanded feature set, such as Distance to Health Facility, Estimated Death, Distance to Airport, and Authority, may thus result in unstable coefficient estimates, making it difficult to determine the true effect of each feature on the outcome.
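A short sketch of the VIF computation with statsmodels, assuming the 18 features are held in a pandas DataFrame named features_df (a hypothetical name):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(features_df)  # add an intercept so the VIFs are well-defined
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif[vif > 5])  # features above the rule-of-thumb multicollinearity threshold
```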
§.§ Neural Network
We experiment with three neural networks on the 18-feature dataset: a feedforward neural network (FNN), a graph neural network (GNN), and a GNN with edge weights (the distance to the neighboring points).
The FNN treats each point as an independent sample, while the GNN's two graph convolutional layers aggregate the features of the neighbors.
Fig. <ref> compares the results of the three neural network models with Random Forest. We observe that the GNN (simply connecting the neighbors) performs better than the FNN on the 500 m and 5,000 m hard negative samples. Merely considering the neighbors' connections (without the distance to them) does not help on the 50 m hard negative samples. This can be seen by comparing the plotted graphs in Fig. <ref>[Fig. <ref> only shows a set of 3,000 data points in the graph for simpler visualization]: more nodes (data points) of different classes are connected in the 5,000 m hard negative samples, because larger buffers have a higher chance of linking to other landmine contamination areas.
Another significant result from Fig. <ref> is that the GNN with weights outperforms the other models on every dataset, especially when the samples are close (50 m or less). This implies that taking the distances to the five nearest neighbors and their features into account helps to predict the landmine presence probability of the point in question.
§.§ Error Analysis of the Study Area
In this section, we further investigate the models' prediction ability by plotting the landmine risk maps of both study areas. Desk-AId is meant to augment existing landmine desk assessment tools without disrupting established workflows in the field; thus, we use QGIS, an open-source GIS tool already adopted by NGOs, for visualizing the results. On the QGIS platform, the predicted probability can be compared with the actual landmine distribution. We generated the scaled heat maps in Figures <ref>-<ref>, where the yellow polygons mark the landmine contamination. We choose RF (weighted) and XGB trained on mixed samples because their AUC scores are above 0.5 in both study areas, meaning they have discriminative prediction ability. Note that, to facilitate interpretation, the landmine risk maps are classified into five risk levels, colored in the heatmap as very low (blue), low (green), medium (yellow), high (orange), and very high (red), based on the following probability cut-off values: 0.1, 0.2, 0.4, and 0.6 (see Fig. <ref>). The thresholds are set empirically; the same approach is used in <cit.> and <cit.>, which specify that the thresholds have to be evaluated and set by the mine action experts for an operational system. The percentage of points in each predicted probability interval for the two study areas, together with the intervals' corresponding colors on the risk map, is summarized in Fig. <ref>.
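The mapping from predicted probabilities to the five color-coded risk levels can be sketched with a single binning step; the cut-off values below are the empirical thresholds quoted above:

```python
import numpy as np

CUTOFFS = [0.1, 0.2, 0.4, 0.6]
LABELS = ["very low", "low", "medium", "high", "very high"]

def risk_levels(probs):
    # probs: array of predicted landmine probabilities in [0, 1]
    return [LABELS[k] for k in np.digitize(probs, CUTOFFS)]
```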
Comparing the risk map of SA1 in Fig. <ref>-<ref> with the statistics in Fig. <ref> shows that the discrimination achieved by the RF model is more effective within the study area, while XGB tends to produce more false positives. From the summarized table, XGB predicts 71% of the observations as high or very high risk, with a landmine presence probability above 0.4 (orange in the risk map). This marks almost the entire region as high landmine risk and could result in a rapid depletion of the available mine action resources. For RF, the parts detected as high risk mainly correlate with the landmine contamination areas. Nevertheless, neither model performs well in the southeast region.
Figures <ref> and <ref> compare the performance of the two models in SA2. As discussed previously, RF performs better in this region: it detects the contamination regions as high risk (orange on the map). XGB, on the other hand, still suffers from a high portion of false positives, and it does not detect the contamination area in the southeast region.
The results from the study areas imply that, in general, RF is suitable for building a model that generalizes from a large region, such as the whole country, and subsequently predicts outcomes for specific study areas. Furthermore, RF delivers superior performance when the variability of the features is extensive. On the other hand, LR and XGB could be helpful when the use case is, for example, validated inside the study region: if the demining operation is partly finished in the study region, LR and XGB can be validated on the cleared part of the area, rather than cross-validated on the whole country, to avoid overfitting. This leaves an opportunity for future investigation.
§ DISCUSSION
Application for Demining Use Cases: The Desk-AId system we developed has proven to respond satisfactorily to different scenarios. The final target is a system that is easily replicable in different regions/countries with minimal effort.
For this purpose, we focus on features calculated from geographic and socio-economic information that is easy to find, such as building locations, waterways, national/regional borders, and roads; the features used for training are available for virtually every geographic location. Further, we overcome the lack of ground truth for negative data points, a common problem for this kind of dataset. We find that mixing the hard negative sampling approach for areas close to known hazards with the random sampling approach for the rest of the country produces the best results. In addition, even when the F1 scores are not high, as in the case of testing a simple classifier trained on the small feature set in uncharted areas (see Fig. <ref>-<ref>), the visualization of the risk prediction as continuous values matches the actual separation of hazardous from non-hazardous areas quite well (see Fig. <ref>-<ref>).
Field Trials: The easy applicability of Desk-AId allowed us to conduct a successful proof-of-concept project with the open datasets from Afghanistan. It received various positive feedback from the anonymized humanitarian domain expert and their international humanitarian organization, mainly due to the following factors:
* The innovative use of AI at very large scale, providing initial results without high costs
* The usefulness of Desk-AId results in the desk assessment phase by the domain experts who are experienced in field testing and demining
* High potential for improving efficiency through prioritization and helping the decision-making for landmine clearance
* Potential reduction in demining time and the substantial costs coming with it (estimated as high as tens of billions of dollars)
* Complementing existing innovations such as using drones for pinpointing surface-level mines
* Having a structured method and pipeline that are agnostic to the specific regions/areas or datasets
* Building the technology on top of existing open-source components and standards that are accepted and used by the humanitarian community
In addition, we are already testing the Desk-AId pipeline in Cambodia and working toward a field trial deployment in collaboration with the Cambodian Mine Action Centre (CMAC). Further, we are in discussion with several NGOs and mine action agencies for additional trials in a few other post-conflict countries.
Press Releases and Follow-up Research from the Community: Following the proof-of-concept study, several press releases in 2023 explained the approach and the technology behind Desk-AId[Press Release-a, <http://www.nec.com/en/press/202303/global_20230329_01.html>][Press Release-b, <https://www.nec.com/en/global/corporateblog/202309/01.html>][Press Release-c, <https://www.nec.com/en/global/sdgs/innovators/project/article13.html>]. The released technology received global news coverage and raised attention in the humanitarian mine action community. As a result, several groups started applying the Desk-AId approach and initiated follow-up research activities <cit.>. The follow-up studies from relevant academia and organizations such as UNMAS <cit.> contribute to the non-technical survey problem and to the solution of the global demining problem.
Technical Aspects of Desk-AId: The granularity of the areas in the proposed system is a minimum of 50 meters. Predictions at a minimum granularity of 50 meters are reasonably unaffected by dynamic changes of the soil, even in regions with very high erosion in the decades since landmine emplacement (as in the case of the coastline of the Skallingen peninsula <cit.>, where mines placed during the Second World War were cleared in 2012). Once the desk assessment is performed by our Desk-AId system, other techniques for non-technical and technical surveys (see Fig. <ref>) may involve different instruments to collect new dedicated measurements, such as airborne sensors for remote sensing <cit.>.
We also explore the use of an additional set of features, even though this can lead to higher computational complexity. For the simpler classifier models, this has mixed effects across the different buffer-zone sizes; however, the additional features have a positive effect when applying a trained classification model to an uncharted area. The results are more promising and robust when we train and test with a graph neural network (such as a GCNN) that takes into consideration the relation of points to each other (in our case, the 5 nearest neighbor points). The best performances are obtained when introducing weights on the graph links based on the distance between the points.
Additional Techniques: We also considered the application of other geospatial AI techniques such as compressive sensing <cit.>, which is meant to reconstruct a signal sampled at a much lower rate than the Nyquist-Shannon sampling theorem requires. An issue is that such techniques essentially solve a convex optimization problem <cit.> that may require vast computational resources; thus, they are not best suited for the democratization of AI <cit.>. Some state-of-the-art work tries to address this issue for spatial analytics <cit.>. Another possible technique to explore is sparse coding <cit.>; this technique is also quite expensive in terms of computational cost, since it again translates into a convex optimization problem.
We find that none of the selected data features has a strong correlation with the presence of a hazard in the area, as shown in the feature correlation matrix of Fig. <ref>. Nevertheless, we implemented models that are capable of computing a risk assessment that is useful for the domain experts (see Fig. <ref>-<ref>). A possible future work is to include explainability measures in the risk assessment prediction, to provide as much information as possible to the domain experts during the desk assessment phase.
Ethical Use and Risks: The problem addressed in this article also requires a Dual-Use Research of Concern (DURC) discussion. It is undeniable that high-accuracy predictions in automatic landmine risk assessment can greatly improve the outcome of the desk assessment phase: a better-informed plan and prioritization for fast clearance of landmines fosters human safety and socio-economic growth. Nevertheless, such systems might be used maliciously for different purposes, such as military operations, and currently there is no procedure in place to prevent vicious usage of this type of technology. Our approach is to continuously engage with several NGOs that have been active in demining operations and humanitarian action for decades. Further, we often participate in demining workshops (such as GICHD innovation conferences, workshops, and webinars[https://www.gichd.org/the-gichd/events-training/]) to be transparent about the targeted application and to receive feedback from landmine domain experts on ambiguous usages.
§ CONCLUSION
The deployment of landmines involves considerable aspects and typically reflects complex underlying motives or contingencies. Demining operations start with a desk risk assessment of potentially hazardous areas; reaching good performance at this stage is crucial to spend time and resources efficiently in the later stages.
In this work, we develop Desk-AId as a tool for automatically assessing landmine risk in various regions by exploiting the recorded contamination across the whole country and considering features from the geographical and socio-economic domains as well as historical reports. We explore different approaches to the problem of covering vast areas (country-wide in our case) and of applying a trained ML model to an unseen area. We propose a hard negative sampling strategy that selects informative negative points and compare it with a random sampling strategy where points are selected over the whole available land.
Moreover, we analyze not only the correlations and multicollinearity between the features but also the roles they play in predicting landmine contamination.
To this end, two HMA tasks are addressed with the generic models. The first distinguishes the vicinity of contaminated areas by comparing well-established models with state-of-the-art ones that use the location-based graph structures defined in this work to build the GCNN models, which show superior results by aggregating neighboring information. The second task successfully detects landmine risk in previously unseen regions, for which a heat map is generated from the predicted probabilities. The size of the flagged hazardous area is significantly reduced by the RF model's predictions, which is therefore highly practical in HMA usage for non-technical mine action experts. Besides the qualitative assessment, each experiment is evaluated quantitatively with the two attribute sets and the distinct negative sampling strategies.
For future work, we plan to explore a wider range of open-source data collection and to apply the Desk-AId pipeline in other countries, helping to plan humanitarian operations and to solve the global demining problem.
§ DATA AVAILABILITY STATEMENT (DAS)
The data that support the findings of this study are available in [repository name] at [URL/DOI], reference number [reference number]. These data were derived from the following resources available in the public domain: [list resources and URLs]
http://arxiv.org/abs/2405.09678v1 | 20240515193932 | How Stellar Stream Torsion may reveal aspherical Dark Matter Haloes | ["Adriana Bariego-Quintana"] | astro-ph.GA | ["astro-ph.GA", "gr-qc"] |
Corresponding author: adriana.bariego@gmail.com
IFIC-Univ. Valencia, c/ Catedrático José Beltrán, 2, E-46980 Paterna, Valencia, Spain.
Flat rotation curves v(r) naturally follow from elongated (prolate) Dark Matter distributions, as shown by our earlier competitive fits to the SPARC database. To probe that distortion of the DM halo, one needs observables not contained in the galactic plane.
Stellar streams are caused by tidal stretching of massive substructures such as satellite dwarf galaxies, and they would lie on a plane should the DM-halo gravitational field be spherically symmetric. If the field does not display such spherical symmetry, stellar trajectories, and hence stellar streams, should torsion out of the plane.
This is where the torsion of the stream can be of use: it is a local observable that measures the deviation from planarity of a curve; thus, it quantifies how noncentral the gravitational potential is.
We have performed small simulations confirming that a galactic central force indeed produces negligible torsion, and have quantified the torsion for prolate haloes instead. Examining observational data, we select several streams at large distances from the galactic center as the most promising for this study, and extract their differential torsion by means of helicoidal fits.
We see that their torsion is much larger than expected for a central spherical bulge alone, pointing to an elongated Milky Way halo.
How Stellar Stream Torsion may reveal aspherical Dark Matter Haloes
Adriana Bariego-Quintana
May 20, 2024
=====================================================================
§ INTRODUCTION
There is substantial observational evidence for a Dark Matter (DM) component at different scales in the Universe; from rotation curves to the large-scale structure, we identify a non-negligible quantity of mass that is missing in ordinary observations. In this work we focus on the observation of DM at galactic scale and, in particular, on a topic related to the rotation curve problem. Orbital equilibrium outside a spherical source predicts a decrease of the velocity with which a star orbits a galaxy as it moves away from the center, v(r) ∝ r^-1/2. On the contrary, empirical rotation curves seem to flatten out for many galaxies, v(r)∝ C <cit.>.
There has been a great deal of effort to solve this problem. An alternative approach to DM is to modify the basic laws of mechanics, as in Modified Newtonian Dynamics (MOND); however, these models run into problems at larger cosmological scales. If DM is accepted, an alternative to an isothermal spherical distribution is to modify the shape of the DM gravity source.
In the 3-dimensional cosmos we live in, we would expect rotation-curve velocities to decrease with distance from the galactic center, v^2∝ 1/r; in a 2-dimensional cosmos, by contrast, we would see constant velocities, v^2∝ C. Current observations show that the velocities in rotation curves are constant, v^2∝ C. The question is how to achieve that dimensional reduction while living in a 3-dimensional cosmos.
A spherical DM distribution has to be fine-tuned to an isothermal density profile ρ(r)∼ r^-2 to explain the flatness of the rotation curves, whereas the extreme case of a cylindrical halo of linear mass density λ naturally explains the flatness, v = √(2Gλ) <cit.>. Since the rotation curve is measured out to a finite radius, the source does not need to be infinitely cylindrical: an elongated DM halo is sufficient <cit.>. An additional issue concerns distinguishing spherical haloes with an isothermal profile from elongated haloes with arbitrary profiles using galactic rotation curves, which are observables confined to the galactic plane. This differentiation is challenging due to insufficient data points outside the galactic disk. Therefore, including out-of-plane observables could be advantageous, and for this purpose stellar streams emerge as promising candidates.
§ N-BODY SIMULATIONS OF STELLAR STREAMS AND FITTING
We simulate clusters of N=100 to 1000 point-like stars initially located at randomly distributed positions over a sphere of radius R_0=2 kpc, centered at a distance of r_0 = 10 to 30 kpc from the galactic center. The simulated stars are given random masses in the range m_⋆ = (1,20) M_⊙ and share a common initial velocity in the y direction, v_⋆,y∼ 220 km/s, to simulate the motion around the gravitational source. Some stars in the cluster receive an additional random initial velocity of order Δv = √(G m_cluster/(2r_0)) to add some dispersion to the cloud of point-like particles.
After declaring the initial conditions of the particles in the cluster (positions and velocities), we let the cluster evolve around a gravitational source from an initial time t=0 My to different final times.
§.§ Implementation of the N-body simulations
Let us first assume we have N particles, i=1,2,...,N, each of which feels the gravitational attraction of all the others following Newton's law of universal gravitation,
a_i = G∑_j≠ i m_j ( r_j- r_i)/| r_j- r_i|^3,
taking G=4.53·10^-12 kpc^3My^-2 M_⊙^-1 as the gravitational constant, m_j being the mass of each of the remaining j≠ i stars and r_j- r_i the separation vector between the point-like stars.
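The pairwise term can be evaluated in vectorized form; the following NumPy sketch assumes positions in kpc and masses in M_⊙, consistent with the units above:

```python
import numpy as np

G = 4.53e-12  # kpc^3 My^-2 Msun^-1, as quoted in the text

def cluster_acceleration(pos, mass):
    """Mutual gravitational acceleration of N point stars.
    pos: (N, 3) positions [kpc]; mass: (N,) masses [Msun]."""
    diff = pos[None, :, :] - pos[:, None, :]   # r_j - r_i for every pair, (N, N, 3)
    dist3 = np.linalg.norm(diff, axis=-1) ** 3
    np.fill_diagonal(dist3, np.inf)            # exclude the self-interaction j = i
    return G * np.einsum("j,ijk->ik", mass, diff / dist3[..., None])
```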
The next step is to let this cluster of particles evolve under the influence of an external force generated by a second gravitational source, at whose symmetry center we locate our coordinate origin. In this work we consider different kinds of gravitational sources:
Spherical gravitational source: this can represent either a spherical galactic bulge or a spherical DM halo. The only difference between the two possibilities in this context is the total mass of the central source: dark matter sources are thought to have masses in the range M∈ (10^10, 10^12) M_⊙, whereas galactic sources are found to be slightly less massive, M∈ (10^9, 10^11) M_⊙. The gravitational acceleration felt by star i in the stream due to this spherical source is a_i = - GM r_i/| r_i|^3.
Cylindrical gravitational source, to simulate a DM cylinder toy model. We consider a cylinder of linear mass density λ=M/L, with a value obtained from galactic rotation curves, v_rot = √(2Gλ). Only the in-plane components of the acceleration felt by the stars in the stream are nonzero: a_i = - 2Gλ/(x^2+y^2) (x,y,0).
After specifying the initial conditions of the particles, we evolve them in time under their mutual binding forces and the external forces created by the galactic/DM gravitational sources, following Euler's method. It updates the position and velocity of the particles with a time step Δ t = t_f / N_it, where t_f is the final time, N_it the number of iterations, and f denotes the acceleration: v_i+1 = v_i + Δ t ·f( r_i + (Δ t/2) v_i) and r_i+1 = r_i + Δ t ·v_i.
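A sketch of this integration loop, reusing G and the cluster term defined in the previous sketch together with a cylindrical external field, is shown below; the linear density value is a placeholder, and the midpoint evaluation mirrors the update rule quoted above:

```python
import numpy as np

def cylinder_acc(pos, lam=1.0e10):  # placeholder linear mass density [Msun/kpc]
    s2 = pos[:, 0] ** 2 + pos[:, 1] ** 2
    acc = np.zeros_like(pos)
    acc[:, :2] = -2.0 * G * lam * pos[:, :2] / s2[:, None]  # a = -2G*lambda (x,y,0)/(x^2+y^2)
    return acc

def evolve(pos, vel, mass, t_f, n_it, external_acc=cylinder_acc):
    dt = t_f / n_it
    for _ in range(n_it):
        r_mid = pos + 0.5 * dt * vel                      # midpoint position
        acc = cluster_acceleration(r_mid, mass) + external_acc(r_mid)
        vel = vel + dt * acc
        pos = pos + dt * vel
    return pos, vel
```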
§.§ Fitting of the N-body streams
Once the N-body simulations of the streams have been run to the chosen final times t_f, we fit the stream numerical data to parametric curves in Cartesian coordinates with the simple functional form r(t) = r_0 + (A ·cos(ω· t +ϕ), B ·sin(ω· t + ϕ), C· t), where {A, B, C, ω, ϕ, r_0 } is the set of fitting parameters: A and B describe the elliptic projection on the XY plane starting with an angular shift ϕ, C describes the advance of the helix in the Z direction, and ω is the angular velocity in the XY plane.
We give the fitted parameters some freedom to take values within physically reasonable ranges: A,B ∈ (0, 50) kpc, C ∈ (0, z_max) kpc, ω∈ (0, 4π) rad/Gy, ϕ∈ (0, 2π) rad, and r_0 ∈ (-50, 50) kpc. To fit the parameters we follow a squared-distance minimization strategy, minimizing the sum over all the stars in the cloud, χ^2 (A, B, C, ω, ϕ, r_0 ) = ∑^N_i=1 ( r_i- r(t_i))^2. In this equation we compare r_i, the "observed" positions of the point stars (simulation or data), with r(t_i), the "theoretical" positions of these point stars at a given time, calculated for the different gravitational source shapes.
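This squared-distance minimization can be sketched with SciPy as below, under the assumption that each star has an assigned curve parameter t_i; z_max and the initial guess are placeholder values:

```python
import numpy as np
from scipy.optimize import minimize

def helix(t, p):
    A, B, C, w, phi = p[:5]
    r0 = p[5:8]
    return r0 + np.stack([A * np.cos(w * t + phi),
                          B * np.sin(w * t + phi),
                          C * t], axis=-1)

def chi2(p, r_obs, t_obs):                  # r_obs: (N, 3) star positions, t_obs: (N,)
    return np.sum((r_obs - helix(t_obs, p)) ** 2)

z_max = 30.0                                # placeholder upper bound [kpc]
bounds = [(0, 50), (0, 50), (0, z_max), (0, 4 * np.pi), (0, 2 * np.pi)] + [(-50, 50)] * 3
p_init = np.array([10.0, 10.0, 1.0, np.pi, 0.0, 0.0, 0.0, 0.0])  # hypothetical guess
res = minimize(chi2, x0=p_init, args=(r_obs, t_obs), bounds=bounds, method="L-BFGS-B")
```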
§ TORSION: A NEW MATHEMATICAL TOOL TO STUDY DM
In this work we represent stellar streams as parametric curves r(t) wrapping around galaxies and deploy an invariant property of curves, the torsion τ, to probe a possible nonsphericity of the DM halo surrounding a galaxy.
In differential geometry, a curve r(t) can be parametrized by an arbitrary variable t that can be traded for the arclength s(t) = ∫ | r'(t)| dt <cit.>. At each point P of the curve there is a trihedron formed by the tangent vector T=d r/ds, the normal vector N along dT/ds, and the binormal B, perpendicular to both.
Given a curve r(t), the torsion measures how sharply it twists out of the osculating plane, instantaneously defined by the velocity and the normal acceleration. The torsion is defined through the variation of the binormal vector B=T×N, following τ = -d B/ds·N; for a curve described by an arbitrary parameter t, it reads τ=( r'× r”)· r”'/| r'× r”|^2.
When a curve lies on a plane, the plane is defined by the tangent and normal vectors; the binormal vector then experiences no variation, giving null torsion, τ = 0. When the curve twists, the situation is different, because the binormal vector varies. For example, in the ideal case of circular helicoidal motion the torsion is a constant, τ=C.
We will consider different shapes for gravitational sources and obtain the expected torsion using Euler's Method.
Planar orbits around a spherical source.
The orbit of a massive test body around a central-field, spherically shaped gravitational source is planar and has null torsion (unless there are additional external forces). A cluster of test bodies moving around a spherical galaxy halo will lose dust grains forming a kind of contrail, but its shape through space will be a planar curve.
We can quantify the torsion as a function of time for the simulation in Fig. (<ref>). There we apply an analytical helicoidal fit to the resulting stream, with χ^2/N_gdl=3.83, which yields a value of τ= 1.5 · 10^-4 kpc^-1. This must be considered our level of numerical noise due to the simulation and fitting procedures, and it provides a floor under which actual torsion cannot be measured in a realistic system.
Helicoidal orbits around a cylindrical source.
For the prior spherical field, adding an initial vertical velocity to the cluster would still result in a planar, albeit tilted, orbit. Simulating a cylindrical source instead, an initial vertical velocity produces a helicoidal motion wrapping around the z axis, whereas a purely azimuthal one yields a planar curve.
The extracted value of the torsion in the simulation of Fig. (<ref>) is τ=1.0· 10^-2 kpc^-1, from a helicoidal fit of the stream with χ^2/N_gdl=0.95.
We can also consider a rather ellipsoidal shape, the cylinder being its extreme toy-model limit, as well as combinations of different kinds of sources, such as a spherical galaxy with an ellipsoidal DM halo contribution. This leads to stellar streams with a non-zero, non-constant torsion; see Fig. (<ref>) for different times.
Galactic background torsion.
The galaxy itself, due to its internal aspherical structure, can produce a non-negligible stellar-stream torsion. We wish to have a reference for the minimum torsion that could be considered "normal" in order of magnitude, so that if extensive studies of stellar streams show torsions exceeding that level, the hypothesis of a spherical halo can be rejected. Considering a galaxy composed of a disk- plus a sphere-shaped matter distribution, we find a background value of order τ∼ O(10^-4) kpc^-1 (see <cit.> for the full computation).
Torsion has dimensions of inverse length, so from the defining equation we expect the torsion τ of stellar streams to scale inversely with their distance from the galactic center. In a galaxy such as the Milky Way, the torsion of galactic streams should have a characteristic scale of (10 kpc)^-1.
§ REAL CASE SCENARIO: THE MILKY WAY STREAMS
Since we are interested in analyzing stellar streams wrapping around galaxies to extract information on the shape of the dark matter haloes surrounding them, we can best begin with our own, the Milky Way (MW). The MW stellar streams have been investigated for some time and remain an active field of research. We will use them to infer the shape of the DM halo of our galaxy, employing galstreams, a tool developed to fit some of the best-known streams in the MW <cit.>.
This database includes many streams, but we only consider those that are more sensitive to a large part of the halo, namely stellar streams at distances beyond 30 kpc: Cetus-Palca, Cetus, Elqui, Eridanus, Jet, Pal15, Palca, Sagittarius, Willka-Yaku, Orphan-Chenab, and Styx. We must discard the last two because they may be influenced by gravity sources outside the MW, such as the Large Magellanic Cloud <cit.>.
We fit the streams of the database using a helicoidal parametrization with an arbitrary parameter t taking values in the interval t ∈ (0, 1).
The torsions for each of the streams are compiled in Table 1 of <cit.>. The local torsion along many of the streams takes significant values, above the expected O(10^-4) "noise" found in the simulations when the streams evolved around spherical sources, as well as the O(10^-3) floor from including the galactic plane. Higher torsion values are found for streams such as Cetus, Willka-Yaku, Cetus-Palca, and Sagittarius than for the rest.
We expect their respective torsions to show an inverse relation with their distance from the galactic center. We selected streams at 30 kpc or more, meaning that torsion values of τ∼ 0.01 kpc^-1 should be considered different from zero, and Table 1 of <cit.> shows several such streams. Moreover, all those with τ > 0.001 kpc^-1 could carry interesting information about the DM distribution, posing a serious discrepancy with the hypothesis of a purely spherical DM halo.
§ CONCLUSION
The problem of galactic rotation curves suggests the presence of substantial amounts of DM surrounding galaxies, yet the precise arrangement of these sources remains unknown. While spherical DM distributions around galaxies require precise adjustments to account for the flatness of rotation curves, a cylindrical or prolate DM source explains this flattening naturally, without fine-tuning. However, observations within the galactic plane are unable to distinguish spherical haloes with isothermal profiles from cylindrical or elongated gravitational sources with arbitrary density profiles. Yet information beyond the galactic plane could yield new discriminants.
Stellar streams can be characterized by their torsion. Around a central potential, orbits are contained within a plane and are thus expected to be torsionless, see Fig. <ref>. Conversely, test masses around cylindrical sources are expected to follow helicoidal orbits with nonzero torsion if given both initial vertical and azimuthal velocities. Another natural geometry is an ellipsoid-shaped halo, which deviates from perfect cylindrical symmetry yet exhibits elongation. The expected orbits of stream components would result from a combination of orbits around central potentials and cylinders, as seen in Fig. (<ref>). Torsion therefore serves as an observable specifically customized to assess the prolateness of a DM halo.
From our evaluation of the torsion data from the Milky Way streams, it becomes evident that it is non-negligible in certain considered streams. We do not dare favor one or another interpretation of the DM halo shape in the view of current data. However, we do observe streams exhibiting significant torsion, which is promising and indicates that further investigation could potentially shed light on the shape of the halo.
http://arxiv.org/abs/2405.09940v1 | 20240516094340 | Robust Singing Voice Transcription Serves Synthesis | ["Ruiqi Li", "Yu Zhang", "Yongqi Wang", "Zhiqing Hong", "Rongjie Huang", "Zhou Zhao"] | eess.AS | ["eess.AS", "cs.SD"] |
Robust Singing Voice Transcription Serves Synthesis
Ruiqi Li, Yu Zhang, Yongqi Wang, Zhiqing Hong, Rongjie Huang, Zhou Zhao
=====================================================================
Note-level Automatic Singing Voice Transcription (AST) converts singing recordings into note sequences, facilitating the automatic annotation of singing datasets for Singing Voice Synthesis (SVS) applications.
Current AST methods, however, struggle with accuracy and robustness when used for practical annotation.
This paper presents ROSVOT, the first robust AST model that serves SVS, incorporating a multi-scale framework that effectively captures coarse-grained note information and ensures fine-grained frame-level segmentation, coupled with an attention-based pitch decoder for reliable pitch prediction.
We also established a comprehensive annotation-and-training pipeline for SVS to test the model in real-world settings.
Experimental findings reveal that ROSVOT achieves state-of-the-art transcription accuracy with either clean or noisy inputs.
Moreover, when trained on enlarged, automatically annotated datasets, the SVS model outperforms its baseline, affirming the capability for practical application.
Audio samples are available at https://rosvot.github.iohttps://rosvot.github.io.
§ INTRODUCTION
Note-level automatic singing voice transcription (AST) refers to converting a singing voice recording into a sequence of note events, including note pitches, onsets, and offsets <cit.>. As part of the music information retrieval (MIR) task, AST is widely used in professional music production and post-production tuning.
With the recent advancements of singing voice synthesis (SVS) <cit.>, there is a growing demand for annotated data, while AST methods just demonstrate the potential for automatic annotation.
Note transcription from singing voices is particularly more difficult than from musical instruments, as the pitch component of the human voice is highly dynamic. When singing, people articulate words, leading to unstable pitches and blurry note boundaries; for instance, if a word starts with a voiceless consonant, the pitch onset may be slightly delayed.
Also, singing techniques like vibrato and appoggiatura further complicate boundary localization.
An AST task is mainly decomposed into two steps: note segmentation and pitch estimation. The first step predicts the boundaries, i.e., the onset and offset of each note, and has been implemented as a classification <cit.> or object detection <cit.> task. For pitch estimation, previous works primarily adopt weighted median or average operations on F0 values.
Despite previous accomplishments, there is no AST model that, to our knowledge, achieves a complete annotation pipeline for training an SVS model.
Applying AST approaches to automated annotation for SVS tasks still faces several challenges:
* Insufficient accuracy. Despite numerous efforts to improve accuracy, the performance is still insufficient for automatic annotation. Currently, AST results serve merely as a preliminary guide, necessitating additional manual refinement for actual application <cit.>.
* Asynchronization between notes and texts. SVS models often require text-note synchronized annotation.
Currently, transcribing singing voices without the supervision of word/phoneme boundaries requires additional post-processing for alignment, introducing cumulative errors.
* Inadequate robustness. Web crawling is a popular method for data collection <cit.>,
but the quality varies.
AST methods are vulnerable to noise as sound artifacts tend to disrupt boundary localization and pitch perception.
In this paper, we present ROSVOT, a RObust Singing VOice Transcription model that ultimately serves SVS. The note boundary prediction is formulated as one-dimensional semantic segmentation, and an attention-based decoder is employed for pitch prediction.
To achieve both coarse-grained semantic modeling and fine-grained frame-level segmentation, we devise a multi-scale architecture by integrating Conformer <cit.> and U-Net <cit.>.
Moreover, the model incorporates word boundaries to guide the segmentation process. We randomly mix the input waveforms with MUSAN <cit.> noise to simulate a noisy environment, forming a bottleneck and bolstering denoising capabilities.
To demonstrate the potential of ROSVOT in practical annotation applications, we conduct extensive experiments on a comprehensive annotation-and-training pipeline on an SVS task, simulating real-world scenarios. We choose and slightly modify RMSSinger <cit.>, one of the state-of-the-art SVS models, to be the singing acoustic model. Experiments show that the SVS model trained with pure transcribed annotations achieves 91% of the pitch accuracy compared to manually annotated data, without loss of overall quality.
We also explore the generalization performance on cross-lingual tasks, where we use ROSVOT trained with Mandarin corpora to annotate an English corpus, which is then used to train an SVS model.
Our contributions are summarized as follows:
* We propose ROSVOT, the first robust AST model that serves SVS, which achieves state-of-the-art transcription accuracy under either clean or noisy environments.
* We construct a comprehensive annotation-and-training pipeline to investigate the effect of automatically transcribed annotations on SVS tasks.
* The proposed multi-scale model outperforms the previous best published method by 17% relative improvement on pitch transcription, and by 23% with noisy inputs.
* By incorporating automatically annotated large-scale datasets, we demonstrate ROSVOT's capability of practical application and the opportunity to alleviate data scarcity in SVS.
* We explore the cross-lingual generalization capabilities of ROSVOT.
§ RELATED WORKS
§.§ Automatic Singing Voice Transcription
AST is useful not only in automatic music transcription (AMT) <cit.>, but is also a promising task for audio language models <cit.> and speech-singing interaction modeling <cit.>. TONY <cit.> predicts note events by applying hidden Markov models (HMM) on extracted pitch contours. VOCANO <cit.> treats note boundary prediction as a hierarchical classification task and leverages a hand-crafted signal representation for feature engineering.
MusicYOLO <cit.> adopts object detection methods from image processing to localize the onset and offset positions.
Considering the linguistic characteristic of singing voices, <cit.> introduces extra phonetic posteriorgram (PPG) information to improve accuracy.
However, a PPG extractor requires an extra training process and makes the AST model difficult to generalize across languages.
<cit.> attempts to achieve robust transcription through self-supervised pre-training and multimodal injection.
§.§ Singing Voice Synthesis
Recently, there has been notable progress in the field of SVS. HifiSinger <cit.> and WeSinger <cit.> employ GAN-based networks for high-quality synthesis. <cit.> introduces a shallow diffusion mechanism to address over-smoothness issues in the general Text-to-Speech (TTS) field. Taking inspiration from VITS <cit.>, VISinger <cit.> constructs an end-to-end architecture. To achieve singer generalization, NaturalSpeech 2 <cit.> and StyleSinger <cit.> utilize a reference voice clip for timbre and style extraction. To bridge the gap between realistic musical scores and MIDI annotations, RMSSinger <cit.> proposes word-level modeling with a diffusion-based pitch modeling approach. Open-source singing voice corpora also boost the development of SVS <cit.>. However, the quantity of annotated singing voice corpora is still small compared to speech, while note annotations of some corpora are even unavailable.
§ METHOD
§.§ Problem Formulation
In the note segmentation step, the model predicts onset/offset states at each timestep t, where t ∈ [1, T] and T is the temporal length of the spectrogram. Without loss of generality, we introduce silence notes to connect all notes in the sequence end-to-end, replacing the onset/offset tuples with a single note boundary sequence y_bd = [y_bd^1, y_bd^2, ..., y_bd^T], where y_bd^t = 1 if a boundary occurs at timestep t and 0 otherwise. A silence note has a pitch value of 0.
Notice that ∑y_bd = len(p) - 1, where p = [p^1, p^2, ..., p^L] is the pitch value sequence, L is the total number of notes, and len(·) computes lengths of sequences. Therefore, the first step can be treated as semantic segmentation, predicting a binary-label sequence. The second step is to predict the pitch sequence p.
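To make this formulation concrete, the following sketch (our illustration, not code from the paper) converts note events into the frame-level boundary sequence y_bd and the pitch sequence p, inserting silence notes so that the identity ∑y_bd = len(p) - 1 holds:

import numpy as np

def notes_to_frame_labels(notes, num_frames):
    # notes: sorted, non-overlapping (onset_frame, offset_frame, midi_pitch)
    # tuples; gaps between notes become silence notes with pitch 0
    y_bd = np.zeros(num_frames, dtype=np.int64)
    pitches, cursor = [], 0
    for onset, offset, pitch in notes:
        if onset > cursor:          # leading gap -> silence note
            pitches.append(0)
        if onset > 0:
            y_bd[onset] = 1         # boundary opening this note
        pitches.append(pitch)
        if offset < num_frames:
            y_bd[offset] = 1        # boundary closing this note
        cursor = offset
    if cursor < num_frames:         # trailing silence note
        pitches.append(0)
    assert y_bd.sum() == len(pitches) - 1   # sanity check from the text
    return y_bd, pitches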
§.§ Overview
As shown in <ref>, a common data collection pipeline for SVS consists of two stages: a) phoneme/word annotation and b) note annotation, where the former can be achieved by utilizing automatic speech recognition (ASR) approaches and forced alignment tools, such as MFA <cit.>. The second stage, however, is far from reaching a fully automatic level. Arduous manual annotation hinders large-scale data collection. A high-precision and robust annotator is required.
Note segmentation is a multi-scale classification task, in that the note events are coarse-grained while the predicted boundary sequence y_bd is fine-grained.
Therefore, we construct a multi-scale model, combining a U-Net backbone and a downsampled Conformer, as illustrated in <ref>.
The model takes Mel-spectrograms, F0 contours, and word boundaries as inputs.
To improve robustness, we train the model under noisy environments and apply various data augmentation operations.
For pitch prediction, we adopt an attention-based method to obtain dynamic temporal weights and perform weighted averages. The note segmentation part and the pitch prediction part are trained jointly to acquire optimal results.
§.§ Data Augmentation
§.§.§ Label Smoothing
The exact temporal positions of note onset and offset are difficult to demarcate on a microscopic scale, because transitions between notes are continuous and smooth. Therefore, label smoothing is a popular strategy in AST tasks <cit.>. Also, soft labels carry more information than hard labels, such as the desired confidence for the model. Specifically, we apply temporal convolution operation between the label sequence y_bd and a Gaussian filter 𝒢[n]:
𝒢[n] = {[ 1/√(2π)σe^-n^2/2σ^2, if |n| ≤⌊W_𝒢/2⌋; 0, otherwise ].
y_bd = y_bd * ( 𝒢[n]/max(𝒢[n]))
where the filter 𝒢[n] is normalized before convolution, so the peak of each soft label remains 1. W_𝒢 indicates the window length of the filter.
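As an illustration, a minimal numpy sketch of this smoothing follows; σ here is an assumed value, since the text fixes only the 80 ms window (15 frames at a 128-sample hop and 24 kHz):

import numpy as np

def smooth_boundary_labels(y_bd, win_frames=15, sigma=3.0):
    half = win_frames // 2
    n = np.arange(-half, half + 1)
    g = np.exp(-n ** 2 / (2 * sigma ** 2))
    g /= g.max()                        # peak of each soft label stays 1
    soft = np.convolve(y_bd.astype(float), g, mode="same")
    return np.clip(soft, 0.0, 1.0)      # cap overlapping soft labels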
§.§.§ Noise
We mix realistic noise signals with waveforms before extracting spectrograms. MUSAN noise corpus is utilized to randomly incorporate the interference. MUSAN corpus consists of a variety of noises, such as babble, music, noise, and speech. The intensity of incorporated noise is randomly adjusted according to a signal-to-noise ratio (SNR) interval of [6, 20]. The noise signal η is repeated or chunked to meet the length of each training sample. In the training stage, we conduct noise mixing followed by on-the-fly extraction of Mel-spectrograms:
y = y + η×RMS(y)/(10^(SNR/20) RMS(η))
X = ℱ(y)
where ℱ(·) is Mel-spectrogram extraction operation, RMS(·) is root-mean-square operation, and X is the resulting spectrogram.
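A sketch of this on-the-fly mixing (the mel frontend melspectrogram is a placeholder for whatever extractor is used):

import numpy as np

def mix_noise(y, noise, snr_db):
    # repeat or chunk the noise clip to match the sample length,
    # then scale it so the mixture reaches the requested SNR
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
    reps = int(np.ceil(len(y) / len(noise)))
    eta = np.tile(noise, reps)[: len(y)]
    target_rms = rms(y) / (10 ** (snr_db / 20))
    return y + eta * (target_rms / rms(eta))

# usage: snr = np.random.uniform(6, 20)
#        X = melspectrogram(mix_noise(y, eta, snr))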
In addition to spectrograms, we also add noise to F0 contours and label sequences. Since the model takes F0 contours as input, a clean F0 contour can leak information.
We simply add Gaussian noise to logarithmic F0 contours and soft labels to improve robustness.
§.§ Word Boundary Condition
To regulate segmentation results and better suit practical annotation, we incorporate word boundary conditions. The word boundary sequence y_wbd has the same form as the note boundaries y_bd, including silence or "NONE" words. The regulation is necessary because, in practical annotation, the word sequence and note sequence need to be temporally synchronized, as shown in <ref>. In other words, the presence of a word boundary at timestep t implies the existence of a note boundary at t, but the reverse may not hold, because melisma (singing multiple notes on a single syllable) is a commonly used technique. Without regulation, additional post-processing is required to synchronize the word and note sequences.
Since in practice, the note annotation stage follows the phoneme annotation stage, word boundaries should already be obtained through forced alignment tools like MFA. We directly encode word boundaries as an additional condition to ensure word-note synchronization. However, to provide note-only support, we train an extra word boundary extractor E_W to deal with scenarios like vocal tuning in music industries, where word alignment is unavailable. More details are listed in <ref>.
§.§ Multi-scale Architecture
The semantic information of note events is coarse-grained and high-level, while the segmentation result y_bd is fine-grained and frame-level. To tackle this problem, we design a multi-scale model, incorporating multiple feature encoders and a pitch decoder, illustrated in <ref>.
For precise segmentation, high-resolution results are essential to prevent rounding errors. Hence, we employ a U-Net architecture for its ability to downsample representations while ensuring detailed reconstruction. To capture the high-level features associated with note events, we utilize a Conformer network, one of the most popular ASR models. The U-Net architecture envelops the Conformer, directing its focus towards the downsampled features and easing the computational load of processing long sequences. Through the integration of skip connections, our model achieves refined frame-level accuracy by fusing features across multiple scales.
The U-Net backbone's encoder and decoder each comprise K downsampling and upsampling layers, respectively.
The downsampling rate is set to 2, and the channel dimension remains the same as input to alleviate overfitting.
The intermediate part of the backbone is replaced by a 2-layer Conformer block with relative position encoding <cit.>. The detailed architecture is listed in <ref>.
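A minimal PyTorch sketch of this multi-scale design follows. It is our simplification, not the released implementation: a vanilla TransformerEncoder stands in for the Conformer, and skip fusion uses addition instead of the concatenation used in the full model.

import torch
import torch.nn as nn

class UNetBackbone(nn.Module):
    def __init__(self, channels=256, K=4):
        super().__init__()
        conv = lambda: nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1), nn.GELU())
        # K rate-2 downsampling layers; channel width stays constant
        self.down = nn.ModuleList(
            nn.Sequential(conv(), nn.AvgPool1d(2)) for _ in range(K))
        self.mid = nn.TransformerEncoder(      # Conformer stand-in
            nn.TransformerEncoderLayer(channels, nhead=4, batch_first=True),
            num_layers=2)
        self.up = nn.ModuleList(
            nn.ConvTranspose1d(channels, channels, 4, stride=2, padding=1)
            for _ in range(K))
        self.fuse = nn.ModuleList(conv() for _ in range(K))

    def forward(self, x):                      # x: (B, C, T), T % 2**K == 0
        skips = []
        for d in self.down:
            skips.append(x)
            x = d(x)
        x = self.mid(x.transpose(1, 2)).transpose(1, 2)
        for u, f in zip(self.up, self.fuse):
            x = f(u(x) + skips.pop())          # additive skip fusion
        return x                               # frame-level feature Z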
§.§ Decoders and Objectives
§.§.§ Note Segmentation
We adopt a note boundary decoder, denoted as D_B, to transform the output feature Z from the U-Net backbone into logits ŷ_bd, where Z ∈ℝ^T × C and C is the channel dimension. D_B is implemented by a single matrix W_B∈ℝ^C × 1. A binary cross-entropy (BCE) loss ℒ_B is applied to train note segmentation. The details of loss functions are listed in <ref>, similarly hereinafter.
It is worth mentioning that in the note segmentation task, there is a significant imbalance between positive and negative samples, with a ratio of approximately 1:500[Statistically, there are approximately 2.42 note boundaries per second in our datasets.].
Also, the inclusion of word boundary conditions results in varying classification difficulties, with some boundaries being inherently easier to classify than others.
To tackle this imbalance problem, we employ a focal loss <cit.>, ℒ_FC, to focus more on hard samples.
§.§.§ Pitch Prediction
For pitch value prediction D_P, we leverage an attention-based weighted average operation to aggregate the fine-grained features, instead of simply applying a weighted median or average. Given the output feature Z ∈ℝ^T × C, we obtain an attention weight matrix S through a projection matrix W_A∈ℝ^C × H: S = σ(ZW_A),
where S ∈ℝ^T × H and H denotes the number of attention heads. Then we perform an outer product operation between each vector of Z and S along the time dimension to obtain a pre-weighted representation: Z_1^t = Z^t⊗ S^t and Z_1 ∈ℝ^T × C × H,
which is further averaged along the head dimension to acquire the weighted representation Q ∈ℝ^T × C. In addition, we compute the averaged weights s∈ℝ^T by averaging along the head dimension.
Subsequently, we use the note boundary sequence y_bd to segment Q along the time axis, resulting in a group sequence G = [G^1, G^2, ..., G^L] with length of L, number of notes. Each group (or segment) G^i contains l_i consecutive vectors: G^i = [Q^j+1, Q^j+2, ..., Q^j+l_i], where ∑_i=1^L l_i = T, i ∈ [1, L], j ∈ [1, T], and y_bd^j = 1. We also do the same for the averaged weights s: G_s^i = [s^j+1, s^j+2, ..., s^j+l_i]. For each group, we compute a weighted average z^i:
z^i = ∑ G^i /∑ G_s^i = ∑_k=1^l_i Q^j+k/∑_k=1^l_i s^j+k
Finally, we multiply z with a matrix W_O to compute the logits: p̂ = z W_O, where z∈ℝ^L × C and W_O∈ℝ^C × P. P is the number of pitch categories. A cross-entropy (CE) loss, ℒ_P, is utilized.
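The pooling step can be sketched as follows (our illustration; boundary_idx holds the frame positions where y_bd = 1, and W_O is the output projection):

import torch

def pitch_logits(Q, s, boundary_idx, W_O):
    # Q: (T, C) weighted representation; s: (T,) head-averaged weights;
    # boundary_idx: frame positions where y_bd == 1; W_O: (C, P)
    groups = torch.tensor_split(Q, boundary_idx)          # G^1..G^L
    group_w = torch.tensor_split(s, boundary_idx)         # G_s^1..G_s^L
    z = torch.stack([g.sum(0) / (w.sum() + 1e-8)          # Eq. above
                     for g, w in zip(groups, group_w)])
    return z @ W_O                                        # (L, P) logits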
§.§ Training and Inference Pipeline
In the training stage, we use ground-truth (GT) note boundaries to segment the intermediate features and optimize the pitch decoder. The overall loss ℒ = λ_Bℒ_B + λ_FCℒ_FC + λ_Pℒ_P is controlled by balancing parameters λ_B, λ_FC, and λ_P.
In the inference stage, we first compute the boundary probability σ(ŷ_bd) and use a threshold μ to decide the boundary state. That is, a note boundary exists at timestep t if σ(ŷ_bd) > μ; otherwise, it does not. The predicted results then undergo post-processing to clean up boundaries with excessively small spacing between them. Finally, we segment the intermediate feature Z and decode pitches.
It is worth mentioning that μ can control the granularity of generated notes. In other words, a lower μ may result in more fine-grained and subdivided pitches, while a higher one ignores small fluctuations. This is because a lower μ allows more boundaries.
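A sketch of this decoding step (the minimum spacing is our assumed cleanup rule; the paper does not specify its exact post-processing):

import numpy as np

def decode_boundaries(logits, mu=0.8, min_gap=8):
    prob = 1.0 / (1.0 + np.exp(-logits))        # sigmoid probabilities
    kept = []
    for t in np.flatnonzero(prob > mu):
        if not kept or t - kept[-1] >= min_gap:
            kept.append(t)
        elif prob[t] > prob[kept[-1]]:          # keep the stronger frame
            kept[-1] = t
    return kept                                 # boundary frame indices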
§.§ Singing Voice Synthesis System
Once we complete the inference and automatically annotate a dataset, the new datasets are used to train an SVS system to further investigate the practical performance.
We choose RMSSinger as the singing acoustic model and a pre-trained HiFi-GAN <cit.> model as the vocoder. RMSSinger is originally proposed for word-level realistic music score inputs, denoted as 𝒮. To suit our settings, we drop the word-level attention module and directly use the fine-grained MIDI input. The alignment between MIDI notes and phonemes and other settings are reproduced according to <cit.>.
§ EXPERIMENTS
In this section, we begin by showcasing experiments on AST tasks, followed by simulations and comparisons of a comprehensive annotation-and-training pipeline for an SVS task. We also investigated the model's performance in low-resource scenarios; however, due to space limitations, this part is included in <ref>.
§.§ Experimental Setup
Data We utilize two Mandarin datasets. The first is M4Singer <cit.>, a multi-singer and multi-style singing voice corpus, which is approximately 26.5 hours after pre-processing. Secondly, we collect and annotate a high-quality song corpus, denoted as 𝒟_1. 𝒟_1 is composed of songs sung by 12 professional singers, with a total length of 20.9 hours.
For training AST models, these two datasets are used jointly, with two 3% subsets used as the validation and the testing sets. Two 1% subsets of MUSAN noise corpus are also isolated. The details of data collection are listed in <ref>.
Implementation and Training The sample rate of waveforms is 24 kHz. F0 contours are extracted through a pre-trained RMVPE <cit.> estimator, where each F0 value is quantized into 256 categories. The length of softened boundaries is set to 80 ms. The U-Net backbone is constructed with 4 down- and up-sampling layers, giving an overall downsampling rate of 16×. For inference, the boundary threshold μ is set to 0.8.
More details are listed in <ref>.
Evaluation We utilize the library <cit.> for performance evaluation.
Specifically, we apply the metrics proposed in <cit.>: COn (correct onset), COff (correct offset), COnP (correct onset and pitch), COnPOff (correct onset, pitch, and offset).
An average overlap ratio (AOR) is also calculated for correctly transcribed note duration, and a raw pitch accuracy (RPA) for overall perception performance.
Given GT and corresponding predicted notes, their overlap ratio (OR) is defined as the ratio between the duration of the time segment in which the two notes overlap and the time segment spanned by the two notes combined. The AOR is given by the mean OR computed over all matching GT and predicted notes.
For RPA, we transform the GT and predicted note events into frame-level sequences and compute matching scores.
We remove silence notes and designate the boundaries that enclose each note as onset and offset for the evaluation of ROSVOT. This step is unnecessary for other baselines. The onset tolerance is set to 50 ms, and the offset tolerance is the larger value between 50 ms and 20% of note duration. The pitch tolerance is set to 50 cents. All numbers demonstrated are multiplied by 100.
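These tolerances follow the standard note-transcription protocol; for instance, the open-source mir_eval library implements the same scoring. A sketch assuming that library and notes given as (onset_sec, offset_sec, midi_pitch) tuples:

import numpy as np
import mir_eval

def note_scores(ref_notes, est_notes):
    to_arrays = lambda ns: (
        np.array([[on, off] for on, off, _ in ns]),
        np.array([440.0 * 2 ** ((p - 69) / 12) for _, _, p in ns]))  # Hz
    ref_i, ref_p = to_arrays(ref_notes)
    est_i, est_p = to_arrays(est_notes)
    # COnPOff-style scoring: 50 ms onsets, max(50 ms, 20% duration)
    # offsets, 50-cent pitch tolerance
    p, r, f, aor = mir_eval.transcription.precision_recall_f1_overlap(
        ref_i, ref_p, est_i, est_p, onset_tolerance=0.05,
        pitch_tolerance=50.0, offset_ratio=0.2, offset_min_tolerance=0.05)
    return f, aor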
Baselines We compare ROSVOT, denoted as ℳ, with multiple baselines: 1) TONY <cit.>, an automatic transcription tool with visualization; 2) VOCANO <cit.>, retrained on the joint datasets; 3) MusicYOLO, retrained; 4) <cit.>, reproduced and retrained. We also compare the results of several variants of ℳ:
1) ℳ (conformer), where the U-Net is dropped and the backbone is the Conformer alone;
2) ℳ (conv), where the middle Conformer blocks are replaced by 8-layer convolution blocks;
3) ℳ (w/o wbd), canceling word boundary condition;
4) ℳ (w/o noise), which is identical to ℳ but trained without noisy environment;
5) ℳ (w/ E_W), meaning that the GT word boundaries are not available and need to be extracted from the extractor E_W.
§.§ Main Results
We run two sets of experiments under clean and noisy environments, respectively. The noisy environment is produced by mixing MUSAN noises with a probability of 0.8 and an SNR range of [6, 20]. The main results are listed in <ref>. For the sake of brevity, only major scores are listed here, and the complete scores are listed in <ref>.
From the results, we can see that 1) the proposed multi-scale model achieves better performance for both boundary detection and pitch prediction by a large margin, even without noises; 2) training under a noisy environment significantly improves robustness, while the performance of the baselines is severely degraded when facing noisy inputs; 3) the involvement of noise in the training stage also improves inference performance on clean waveforms; this may be because the noise mixing operation forms a bottleneck that forces the model to focus on note-related information.
§.§ Ablation Study
To demonstrate the effectiveness of several designs in the proposed method, we conduct ablation studies and compare the results of different hyperparameters. From <ref> we can see that dropping the U-Net backbone or replacing the Conformer with convolution blocks decreases the performance. In particular, the performance of ℳ (conformer) significantly deteriorates when dealing with noisy inputs, suggesting that the downsampling layers contribute to a denoising effect. This is also validated in the results of ℳ (w/o noise), indicating that even though it is trained with clean samples, it still exhibits a certain level of robustness. For a fair comparison, we test ℳ (w/ E_W) to demonstrate the performance in note-only scenarios. The results indicate that despite the accumulated errors introduced by the word boundary extractor E_W, the performance does not decline significantly.
We record the comparison results of different overall downsampling rates of the U-Net backbone in <ref>, where only F1 scores are listed. The best-performing rate corresponds to a downsampled frame length that aligns remarkably well with the length of the soft labels, both being about 80 ms. We choose the rate of 16 in the final architecture ℳ as it achieves better overall performance.
For pitch prediction, we compare the results between the proposed attention-based method and the weighted median method used in previous works. We drop the pitch decoder D_P and apply a weighted median algorithm on the F0 contours according to <cit.>. The F1 and AOR scores of this algorithm with clean inputs are 70.5 and 92.6, while the scores with noisy inputs are 63.2 and 86.6. The results indicate that a simple weighted median is insufficient in dealing with fluctuated pitches in singing voices, which are full of expressive techniques like portamentos. Also, its performance is largely dependent on the F0 extractor.
§.§ Towards Automatic Annotation
The experimental results indicate that ROSVOT achieves superior performance, but what practical significance does it hold? In this section, we establish a comprehensive SVS pipeline, using ROSVOT as the automatic annotator.
§.§.§ Implementation and Pipeline
Data. We re-align and re-annotate the OpenSinger corpus <cit.>, which consists of 84.8 hours of singing voices recorded by 93 singers. We also perform cross-lingual generalization by annotating an English corpus 𝒟_2, which has a length of 6 hours.
For future reference, we use the term pseudo-annotations for the automatically generated transcriptions. Details are listed in <ref>.
Evaluation. For objective evaluation, we also apply the RPA score to measure the reconstructed F0 contours. The RPA scores for GT Mel are computed between F0s from GT vocoder generations and GT waveforms, while the others are between GT and generations. For subjective evaluation, we conducted crowdsourced mean opinion score (MOS) listening tests. Specifically, we score MOS-P and MOS-Q corresponding to pitch reconstruction and overall quality. The metrics are rated from 1 to 5 and reported with 95% confidence intervals. For a more intuitive demonstration, we record Comparative Mean Opinion Scores (CMOS) and discuss the results in <ref>.
§.§.§ SVS Results
Firstly, we investigate the effect of training with pseudo-annotations at different ratios.
We only utilize M4Singer to train ROSVOT ℳ, which is then used to generate the pseudo annotations. Pseudo annotations at different ratios are mixed into 𝒟_1 to form the training set. For inference, we reserve two 1% segments from each of the real and pseudo groups for validation and testing. The results are listed in <ref>, rows 2-7. From the results, we can see that the pitch accuracy for real annotation inputs decreases when more pseudo annotations are mixed in, but the accuracy for pseudo inputs increases. This suggests a minor discrepancy between the distributions of real and pseudo annotations. However, the performance degradation is not significant: mixing in 99% pseudo annotations incurs only a 6% drop in performance. The MOS-Q scores share a similar pattern, but they involve a comprehensive evaluation that also considers audio quality and more. A decrease in pitch accuracy does not necessarily lead to an overall decline in quality.
We further investigate the performance as data size increases. While the AST model remains the same, we train 𝒮 only using M4Singer as the baseline. Next, we gradually mix 𝒟_1 and OpenSinger to expand the data size. To consume the largest datasets in the last row, we construct a large version of RMSSinger with 320-dimensional channels and a 6-layer decoder[The dictionary of the text encoder is also merged with English phonemes for the following cross-lingual experiments], denoted as 𝒮(large). The results are listed in <ref>, rows 8-11. A slight reduction in pitch accuracy can be observed when integrating diverse datasets, which may result from the inherent differences in dataset characteristics and annotation styles. However, the overall quality slightly improves, as the model has been exposed to a sufficient variety of pronunciation styles and singing patterns. This indicates that ROSVOT provides an opportunity for SVS models to scale up, which could be beneficial for large-scale singing voice pre-training or zero-shot SVS.
§.§.§ Cross-lingual Generalization
We further investigate the zero-shot cross-lingual capability of ROSVOT, and explore the feasibility of generalizing a Mandarin SVS model to English. We use the same ROSVOT model combined with E_W trained with M4Singer and test it on TONAS <cit.>, a flamenco a cappella singing corpus. The quantitative results are listed in <ref> in <ref>. Performance is degraded, and we believe it may be due to the flamenco singing style (which has rich techniques like appoggiatura) and the unseen language, since the model is trained with Mandarin pop songs.
For SVS, we finetune the pre-trained model 𝒮(large) on the English corpus 𝒟_2, a 6-hour dataset transcribed automatically using MFA and the AST model above. Note that since the word duration information is obtained using MFA, we drop E_W and directly generate word-note synchronized transcriptions. We finetune both stages 1 and 2 of RMSSinger for 100k steps. The evaluation results are listed in <ref>.
Although the performance is not superior, the results still demonstrate certain cross-lingual capabilities. There are vast differences in pronunciation rules and phonetic characteristics between English and Mandarin, since they belong to two distinct language families. If a model possesses a certain capability to transfer from Mandarin to English, it essentially demonstrates a degree of general cross-linguistic ability. We will leave elaborate validations and evaluations of general cross-lingual capabilities for future work.
§ CONCLUSION
In this paper, we introduce ROSVOT, the first robust AST model that ultimately serves SVS. We leverage a multi-scale architecture to achieve a balance between coarse-grained note modeling and fine-grained segmentation. An attention-based decoder with dynamic weight is devised for pitch regression. Additionally, we establish a comprehensive pipeline for SVS training. Experimental results reveal that our model achieves the best performance under either clean or noisy environments. Annotating and incorporating larger datasets improves the SVS model's performance, indicating the capability of practical annotation of ROSVOT.
§ LIMITATIONS AND POTENTIAL RISKS
The proposed method acknowledges two primary limitations. First, the cross-lingual capability is only tested on a small-scale English dataset, necessitating extensional experiments for a comprehensive evaluation of generalization performance. Second, due to space constraints, only one SVS model is examined as the baseline. Additional verifications involving different SVS models are required to fully demonstrate practical performance. Future work will involve testing automatically annotated transcriptions on a more diverse set of SVS models.
The misuse of the proposed model for singing voice synthesis could potentially lead to copyright-related issues. To address this concern, appropriate constraints will be implemented to mitigate any illegal or unauthorized usage.
§ WORD BOUNDARY CONDITION
Word boundary conditions are introduced to regulate segmentation results. It seems similar to <cit.>, but a word boundary sequence forms a much narrower information bottleneck without introducing unnecessary information. In practice, we embed the word boundary sequences to inform the model of boundary conditions. Also, we use the word boundary sequence as a reference to regulate the predicted note boundaries. Specifically, we remove the note boundaries that are too close to the reference word boundaries, where the threshold is 40 ms.
This regulation is only for automatic annotation. For a note-only application, word-note synchronization is not necessary. In this scenario, we build a word boundary extractor E_W to provide weak linguistic supervision. The extractor shares the same architecture as the note segmentation part of ROSVOT. The multi-scale architecture also functions well in localizing frame-level word boundaries. Specifically, we use an MFA-aligned AISHELL-3 Mandarin corpus to pre-train E_W, followed by fine-tuning it with M4Singer and 𝒟_1.
§ ARCHITECTURE AND IMPLEMENTATION DETAILS
§.§ Hyperparameters
For hyperparameters, we sample waveforms with a sample rate of 24000 Hz. Mel-spectrograms are computed with a window size of 512, and a hop size of 128. The number of Mel bins is set to 80. To form a bottleneck and alleviate overfitting, we only use the first 30 bins (low-frequency part) as input. MUSAN noises are added to the waveforms with a probability of 0.8 and an SNR range of [6, 20]. Gaussian noise is added to the logarithmic F0 contours with a random standard deviation range of [0, 0.04], and is added to the softened boundary labels with [0, 0.002]. F0 contours are extracted through a pre-trained RMVPE <cit.> estimator, where each F0 value is quantized into 256 categories. We set P, the number of pitch categories, to 120, where each pitch number is the exact MIDI number. The length of softened boundaries is set to 80 ms, indicating a 15-frame window W_𝒢. The temperature parameters T_1 and T_2 are set to 0.2 and 0.01. To balance the various objectives, we set λ_B, λ_FC, and λ_P to 1.0, 3.0, and 1.0. For inference, the boundary threshold μ is set to 0.8. The hyperparameters α and γ in the boundary decoder are set to 1/(2.42×128/24000) and 5.0, where the 2.42 in the former indicates the number of note boundaries in one second, and 128 and 24000 indicate the hop size and the audio sample rate.
We train the AST model for 60k steps using 2 NVIDIA 2080Ti GPUs with a batch size of 60k max frames. An AdamW optimizer is used with β_1 = 0.9, β_2 = 0.98, ϵ = 10^-8. The learning rate is set to 10^-5 with a decay rate of 0.998 and a decay step of 500 steps.
§.§ Architecture
For model architecture, we apply three encoders E_M, E_B, E_P to encode Mel-spectrograms, word boundaries, and F0 contours. The encoders consist of a linear projection or an embedding layer, followed by residual convolution blocks.
The U-Net backbone's encoder and decoder each comprise K downsampling and upsampling layers, respectively, where K=4 in our case, with 16 × downsampling rate.
A downsampling layer consists of a residual convolution block and an average pooling layer with a downsampling rate of 2, resulting in an overall downsampling rate of 2^K. For an upsampling layer, the input feature is first upsampled through a transposed convolution layer, and is then concatenated with the corresponding skipped feature before a final convolution block. The downsampling rate is set to 2, and the channel dimension remains the same as the input to alleviate overfitting.
The intermediate part of the backbone is replaced by a 2-layer Conformer block with relative position encoding.
The Conformer network is 2-layer with a kernel size of 9 and a head size of 4. The head dimension in the pitch decoder is 4. The overall channel dimension is 256. The overall architecture is listed in <ref>.
As for the N-layer residual convolution blocks mentioned many times in the main text, the configuration is illustrated in <ref>.
§ DATA
We recruit 12 professional singers (8 female, 4 male) to record 𝒟_1 and 8 singers (5 female, 3 male) for 𝒟_2. Each singer was compensated at an hourly rate of $600. Singers were informed that the recordings were for scientific research use. For 𝒟_1, we have hired music experts to manually annotate the note and word information. Each annotator was compensated at an hourly rate of $20. Participants were informed that the data would be used for scientific research. For 𝒟_2, we automatically annotate the words and notes through an ASR model <cit.>, MFA, and the proposed AST model. The length of 𝒟_1 is 20.9 hours and 𝒟_2 is 6 hours. We use all the datasets under license CC BY-NC-SA 4.0.
§ OBJECTIVES
The binary cross-entropy (BCE) loss applied to train the note segmentation stage:
ℒ_B = 1/T∑ BCE(y_bd, ŷ_bd)
= - 1/T∑_t=1^T ( y_bd^t ln(σ(ŷ_bd^t / T_1))
+ (1 - y_bd^t) ln(1 - σ(ŷ_bd^t / T_1)) )
where T_1 is the temperature hyperparameter, and σ(·) stands for sigmoid function.
To tackle the imbalance problem, we employ a focal loss <cit.>, ℒ_FC, to focus more on hard samples:
p_t = y_bdσ(ŷ_bd) + (1 - y_bd) (1 - σ(ŷ_bd))
α_t = αy_bd + (1 - α) (1 - y_bd)
ℒ_FC = 1/T∑α_t (1 - p_t)^γBCE(y_bd, ŷ_bd)
where α is a hyperparameter controlling weight of positive samples, and γ controls balance between easy and hard samples.
The cross-entropy (CE) loss used to train the pitch prediction stage:
ℒ_P = - 1/L∑_i=1^L∑_c=1^P p_c^i ln( exp (p̂_c^i / T_2)/∑_k=1^Pexp (p̂_k^i / T_2))
where T_2 is the temperature hyperparameter.
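A compact PyTorch sketch of these objectives (α here is a generic positive-class weight for illustration; the text above ties the actual value to the boundary rate and frame rate):

import torch
import torch.nn.functional as F

def boundary_losses(logits, y_soft, T1=0.2, alpha=0.95, gamma=5.0):
    p = torch.sigmoid(logits / T1)              # temperature-scaled probs
    bce = F.binary_cross_entropy(p, y_soft, reduction="none")
    p_t = y_soft * p + (1 - y_soft) * (1 - p)
    a_t = alpha * y_soft + (1 - alpha) * (1 - y_soft)
    focal = a_t * (1 - p_t) ** gamma * bce      # focal re-weighting
    return bce.mean(), focal.mean()

def pitch_ce(logits, pitch_ids, T2=0.01):
    return F.cross_entropy(logits / T2, pitch_ids)

# L = 1.0 * L_B + 3.0 * L_FC + 1.0 * L_P  (lambda values from the text)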
§ DETAILS OF EVALUATION
For each SVS experiment task, 20 samples are randomly selected from our test set for subjective evaluation. Professional listeners, totaling 20 individuals, are engaged to assess the performance. In MOS-Q evaluations, the focus is on overall synthesis quality, encompassing clarity and naturalness. For MOS-P, listeners are exposed to GT samples and instructed to concentrate on pitch reconstruction, disregarding audio quality. In both MOS-Q and MOS-P evaluations, participants rate various singing voice samples on a Likert scale from 1 to 5. It is crucial to highlight that all participants were remunerated for their time and effort, compensated at a rate of $10 per hour, resulting in a total expenditure of approximately $300 on participant compensation. Participants were duly informed that the data were for scientific research use.
§ EXTENSIONAL EXPERIMENTS
§.§ Additional AST Results
The additional experimental results are listed in <ref> and <ref>, where the former is under a clean environment and the latter is noisy.
§.§ Additional Comparisons
We perform additional out-of-domain (OOD) tests on ROSVOT and compare with <cit.>, which focuses more on multimodality, but still has noticeable audio-only AST capability. We directly run OOD tests on datasets MIR-ST500 <cit.> and TONAS <cit.>, where the former is used by <cit.> as an in-domain (ID) set and the latter also an OOD set. The ROSVOT model in this test is still trained with only M4Singer. The results are listed in <ref> (the results of <cit.> are copied from their original paper).
From the results, we can see that ROSVOT outperforms <cit.> in an OOD setting (i.e., on TONAS) in terms of COnPOff and COnP, but falls short in COn. We believe this is because <cit.> incorporated English data in training, which has a relatively closer pattern to flamenco singing than M4Singer. ROSVOT also underperforms facing MIR-ST500. There are three possible reasons:
* Since each song in MIR-ST500 spans several minutes and needs to be source-separated first, it contains non-negligible un-voiced sections. We do not know whether <cit.> segments the voiced parts according to some rules. Hence, we transcribe the whole song and compute the scores at once, resulting in a smaller proportion of positive samples of on/offsets due to longer silences, hereby influencing the performance. Another noteworthy point is that ROSVOT is able to load a 5-minute song into a 2080ti GPU and transcribe it at once, demonstrating considerable efficiency.
* The source-separation result influences the performance. The separator recognizes harmonies as vocals, thus resulting in polyphonic singing voices. We focus on solo vocal note transcription and harmonies are not considered noise in our settings. ROSVOT tends to transcribe vocal harmonies when the main vocal is silent, degrading the accuracy.
* MIR-ST500 is an ID test set to the model of <cit.>, so it should have an advantage.
In conclusion, we believe that ROSVOT demonstrates comparable or superior performance. Considering the total number of parameters, ROSVOT shows considerable efficiency and the capability to process long sequences.
§.§ Additional SVS Results
In <ref>, the quality of test samples could be very similar; without a specific evaluation objective, evaluators may struggle to determine differences in quality. Therefore, we conduct comparative evaluations and record CMOS scores of two typical data combinations compared to 100% real 𝒟_1. We believe CMOS scores, rating from -2 to 2, allow the disregard of unrelated factors. The results are listed in <ref>.
The results demonstrate a trend that, as the ratio of pseudo annotations in the training set increases, the quality of samples generated from real annotation inputs decreases, while that from pseudo annotations increases. This may indicate that there are certain biases in the annotations within datasets, and that the AST model learns its own bias, creating a domain gap in annotation styles across different datasets.
§ LOW-RESOURCE SCENARIOS
Considering the scarcity of annotated singing voice datasets, we investigate the performance of the proposed method under low-resource scenarios. We use M4Singer as the training set and test the model on 𝒟_1. Firstly, we gradually decrease the amount of training data to see the performance degradation. After that, we incorporate features extracted from a pre-trained self-supervised learning (SSL) framework to enhance the performance.
Specifically, we modify the model architecture by introducing a latent feature encoder E_S, transforming the additional SSL representations into 256-dimensional features, and performing a fusion by element-wise addition. This fusion can be illustrated as <ref>. E_S comprises two convolution layers and a convolution block, where the former reduces the dimension of the input features to the model channel dimension. The output of E_S is directly added to the output of the U-Net's encoder to perform the fusion.
We choose XLSR-53 <cit.>, a wav2vec 2.0 <cit.> model pre-trained on 56k hours of speech in 53 languages, to be the SSL feature extractor. We believe that the knowledge of a pre-trained self-supervised model alleviates data scarcity. Even in the simulated low-resource environment, we can still access singing voice corpora, only without annotations. Therefore, we use all the training data mentioned before to fine-tune the XLSR-53 model with a batch size of 1200k tokens for 20k steps. In this way, we incorporate self-supervised learning to cope with the low-resource problem.
According to <cit.>, features from the second layer of a 12-layer wav2vec 2.0 model are the most related to audio features like pitch and unvoiced ratio; accordingly, we extract features from the 4th layer of the 24-layer XLSR-53 to be the input feature, which has a dimension of 1024. Before feeding the features to the model, we add Gaussian noise with a standard deviation of 0.05 to perform data augmentation. The SSL-augmented model is denoted as ℳ(ssl).
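The fusion described in <ref> can be sketched as follows (our illustration; we assume the SSL features are resampled to the encoder's frame rate before fusion):

import torch.nn as nn

class SSLEncoder(nn.Module):
    # E_S: maps 1024-dim XLSR-53 features to the model width and adds
    # them element-wise to the U-Net encoder output
    def __init__(self, ssl_dim=1024, channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(ssl_dim, channels, 1), nn.GELU(),
            nn.Conv1d(channels, channels, 3, padding=1))

    def forward(self, ssl_feat, enc_out):    # (B, 1024, T'), (B, C, T')
        return enc_out + self.net(ssl_feat)  # element-wise fusion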
The results are listed in <ref>. From the results, we can see that there is no significant improvement after involving SSL features, if enough training data is utilized. However, when decreasing the training data, the original model ℳ exhibits a decline in performance, while ℳ(ssl) experiences a comparatively smaller decrease.
|
http://arxiv.org/abs/2405.10180v1 | 20240516152308 | Model-independent Reconstruction of UV Luminosity Function and Reionization Epoch | [
"Debabrata Adak",
"Dhiraj Kumar Hazra",
"Sourav Mitra",
"Aditi Krishak"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Model-independent Reconstruction of UV Luminosity Function and Reionization Epoch
Debabrata Adak, Dhiraj Kumar Hazra, Sourav Mitra, Aditi Krishak
May 16, 2024
======================================================================================================================================================================
§ INTRODUCTION
The process of cosmic reionization is an outstanding problem in extragalactic astronomy. During redshifts z ∼ 6-20 almost all of the hydrogen in the universe became ionized <cit.>. A similar process followed for helium at later times, with helium reionization completed at around z∼ 2.5 - 3.5 <cit.>. High-redshift (z ≳ 6) star-forming galaxies are often considered the dominant contributors of ionizing photons, since the abundance of quasars dramatically declines at z ∼ 6 <cit.>. A variety of theoretical and observational studies have shown that Active Galactic Nuclei contribute only minimally (≲ 1%) to the total ionisation budget at z≥ 6 <cit.>. Most studies suggest that star-forming galaxies inside low-mass halos (M_h ≲ 10^9.5 M_⊙) are sufficient to complete the reionization process <cit.>. Therefore, their time-dependent abundance and the redshift evolution of the UV luminosity density (ρ_ UV) derived from the rest-frame UV luminosity function (LF) of galaxies are of significant interest for understanding reionization history (see, <cit.>).
Significant advancement has been made in determining the behaviour of LFs at redshifts z ∼ 4-10 using Hubble Frontier Fields (HFF, <cit.>) survey data up to magnitude ≃- 15 <cit.>. More recently, the same has been studied in <cit.> and <cit.> with galaxy candidates down to magnitude ≃-22. However, along with studying the faint end of the UV LF, it is also important to investigate its bright-end shape. While the conventional Schechter function <cit.> seems to be a good description of the LFs of ionizing candidates at the faint end, an exponential decline in their number densities has been reported at the bright end <cit.>. This is thought to be caused by heating from active galactic nuclei (AGN, <cit.>), inefficient cooling of gas inside high-mass dark matter halos at low redshifts <cit.>, and similar mechanisms. Recent works of <cit.> and <cit.> found that the double power-law (DPL, <cit.>) and the lensed Schechter function <cit.> provide better fits to UV luminosity data including the brightest galaxy candidates observed in the early data from the James Webb Space Telescope (JWST, <cit.>) and the Great Optically Luminous Dropout Research data of Subaru HSC (GOLDRUSH) (corrected for AGN contribution), respectively. <cit.> and <cit.> independently established that LFs are more consistent with the DPL at z ∼ 9-10. More recent studies in <cit.> and <cit.> also reported an excess of number density compared to the Schechter best fit at z ∼ 4-7.
In this paper, we address the constraints on reionization history in a two-step process. First, we reconstruct the profiles of the LFs using Gaussian process regression (GP), a non-parametric regression method, at redshifts z ∼ 2-8, and the Schechter function model at redshifts z ∼ 2-12, using the HFF data, early JWST data and HSC data. We compare these model-independent LFs with the conventional Schechter function model of LFs. We derive the UV luminosity densities by integrating the LFs and investigate the redshift evolution of the UV luminosity densities derived from both profiles. Finally, we use these luminosity densities to constrain the history of reionization, jointly fitting two other observational data sets: the CMB power spectrum data of temperature and polarization anisotropy from the Planck observation <cit.> and neutral hydrogen fraction data from galaxy, quasar and gamma-ray burst observations.
The paper is organised as follows. In <ref> we briefly discuss the reionization process, different functions used and details of Gaussian process regression. In <ref> we describe the ancillary data sets used in this work. In <ref> we present our results for the UV luminosity functions obtained using two different methods, their comparison, derived UV luminosity densities and the corresponding constraints on reionization history. Finally in <ref> we summarise the work.
§ METHODOLOGY
§.§ Cosmic Reionization
The process of reionization of the intergalactic medium (IGM) is a balance between ionization of hydrogen and helium atoms by cosmic photons and recombination of free electrons and protons to form neutral hydrogen and helium. Analytical and numerical modelling of this process traces a long history <cit.> (see the references therein). The process is studied through the redshift evolution of the volume filling factor of ionized hydrogen, Q_ HII, which is governed by the ionization histories of both hydrogen and helium. The time evolution of Q_ HII is obtained by solving the ordinary differential equation (e.g. <cit.>)
Q̇_ HII = ṅ_ion/⟨ n_H⟩ - Q_ HII/t_rec,
where ⟨ n_H⟩ = X_pΩ_bρ_c/m_H is the mean comoving number density of hydrogen and depends on the primordial mass-fraction of hydrogen X_p, critical density ρ_c, baryon density Ω_b and m_H is the mass of atomic hydrogen. t_rec denotes the average recombination time in the IGM,
t_rec = [C_ HIIα_B(T)(1 + Y_p/4X_p)⟨ n_H⟩ (1 + z)^3]^-1,
where α_B(T) is the recombination coefficient for hydrogen (we assume the IGM temperature T to be 20,000 K) and Y_p = 1 - X_p is the primordial helium abundance. The clumping factor C_ HII accounts for the inhomogeneity of the IGM, and is not very well constrained from observations. Recent simulations suggest a possible range of C_ HII value from 1 to 6 <cit.>. In this work we use the fixed value of C_ HII = 3 <cit.> for simplicity.
The comoving production rate of available ionizing photons in the IGM at some redshift is
ṅ_ion = ∫_-∞^M_ trunc f_esc(M)ξ_ion(M)Φ(M)L(M)dM
=⟨ f_escξ_ion⟩ρ_ UV.
This depends on the intrinsic production rate of Lyman continuum (LyC) photons supplied by the stellar populations of galaxies, parameterized by a numerical factor ξ_ion counting the ionizing photons per unit UV luminosity, the escape fraction f_esc, and the total luminosity density ρ_ UV from star-forming galaxies with a truncation absolute magnitude M_ trunc. f_esc is a crucial parameter and is not well constrained from direct observations of LyC photons <cit.>, which are mainly limited to z∼ 2 – 4.5 because a dramatic increase in the opacity of the IGM at high redshifts makes direct observation of LyC photons difficult <cit.>. Moreover, the trend of f_esc with halo mass is not well understood from theoretical modelling <cit.>. Pre-JWST literature suggests that a low ionisation efficiency of around log_10ξ_ion = 25.2 Hz erg^-1, together with f_esc≈ 0.2 <cit.>, is sufficient to give a reionization history consistent with CMB data. However, new JWST data suggest a higher ξ_ion <cit.> that increases at redshifts higher than z∼ 9 <cit.>. It is apparent that ξ_ion and f_esc are completely degenerate parameters. A recent data-driven model-independent study <cit.> found that a constant value of f_esc for z ≥ 6 is permitted. Therefore, we take f_esc as a constant factor and, instead of considering ξ_ion and f_esc to be independent parameters, we consider the magnitude-averaged value of ⟨ f_escξ_ion⟩ as a single parameter in this work.
Luminosity density ρ_ UV is obtained from the luminosity function ϕ(M_ UV) via integration,
ρ_ UV = ∫_-∞^M_ truncΦ(M)L(M)dM,
where L(M) is the luminosity. One can set the truncation magnitude at M_ trunc = -10 corresponding to the predicted range of minimum halo-mass that can host star-forming galaxies <cit.>.
However, due to insufficient data at larger magnitudes (M_ UV> -17) at some redshifts, our free-form LFs obtained using GP would be biased toward the mean function beyond the range of the training data. Therefore, we restrict ourselves to M_ trunc = -17.
Accurate estimation of ρ_ UV from UV galaxies requires a careful analysis of the LF profile that can well-describe the number densities of star-forming galaxy samples down to observed limits. The best model of LF in literature is often assumed to be the Schechter function <cit.>,
Φ(M) = 0.4 ln10 ϕ^* [10^0.4(M^*-M)]^1+α exp[-10^0.4(M^*-M)]
parameterized by ϕ^* (Mpc^-3mag^-1), M^* and α. We find the posterior distributions of Schechter function parameters in redshifts z∼ 2-12 using the UV luminosity data sets described in <ref>. We also fit same data sets using the non-parametric method discussed in <ref>.
We then obtain ρ_ UV corresponding to each of these best-fit LFs using <ref>.
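As an illustration, a minimal sketch of the Schechter LF and the ρ_ UV integral follows; the AB-magnitude zero point of 51.6 and the bright-end cutoff at M = -25 (standing in for the -∞ limit) are our assumptions:

import numpy as np
from scipy.integrate import quad

def schechter_phi(M, phi_star, M_star, alpha):
    # Schechter UV LF in mag^-1 Mpc^-3 (Eq. above)
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

def rho_uv(phi_star, M_star, alpha, M_trunc=-17.0, M_bright=-25.0):
    # luminosity per absolute AB magnitude, in erg s^-1 Hz^-1
    L = lambda M: 10.0 ** (-0.4 * (M - 51.6))
    integrand = lambda M: schechter_phi(M, phi_star, M_star, alpha) * L(M)
    val, _ = quad(integrand, M_bright, M_trunc)
    return val            # erg s^-1 Hz^-1 Mpc^-3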
In order to constrain the reionization process, we need to adopt a parametric form <cit.> or a free-form <cit.> of the ρ_ UV evolution to constrain ṅ_ion. The model-independent reconstruction of the reionization process in <cit.> rules out the single power-law <cit.> form, which is unable to replicate the decline at z∼ 8 and therefore yields an incorrect value of the Thomson scattering optical depth. Therefore, in our analysis we use the logarithmic double power law <cit.> to describe ρ_ UV,
ρ_ UV = 2ρ_ UV,z=z_ tilt/10^a(z-z_ tilt) + 10^b(z-z_ tilt),
where ρ_ UV,z=z_ tilt is the normalization factor at z_ tilt∼ 8, and a and b are the slopes.
Once the evolution of Q_ HII from Eq. <ref> is determined, we compute the reionization optical depth at redshift z using
τ_ re = ∫_0^zc(1+z^')^2/H(z^')Q_ HII(z^')⟨ n_H⟩σ_T(1 + ηY_p/4X_p)dz^',
where c is the speed of light, H(z) is the Hubble parameter, and σ_T is the Thomson scattering cross section. Here we assume that helium is singly ionized (η = 1) at z > 4 and doubly ionized (η = 2) at z ≤ 4 <cit.>.
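To make the pipeline concrete, here is a sketch that integrates the Q_ HII equation and the optical depth. The cosmological parameters and case-B coefficient are assumed Planck-like placeholder values, and the double power-law parameters are illustrative, not our posteriors:

import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

h, Om, Ob, Xp = 0.674, 0.315, 0.049, 0.76
Yp = 1.0 - Xp
H0 = h * 3.2408e-18                             # s^-1
n_H = Xp * Ob * 1.878e-29 * h**2 / 1.6726e-24   # comoving H density, cm^-3
alpha_B, C_HII = 1.6e-13, 3.0                   # case-B at T ~ 2e4 K; clumping
sigma_T, c_cm, MPC = 6.652e-25, 2.998e10, 3.0857e24

def Hz(z):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def ndot_ion(z, log_fx=24.5, rho8=10**25.5, a=0.02, b=0.6):
    # double power-law rho_UV (Eq. above); placeholder parameters
    rho = 2 * rho8 / (10 ** (a * (z - 8.0)) + 10 ** (b * (z - 8.0)))
    return 10 ** log_fx * rho / MPC ** 3        # s^-1 cm^-3

def dQ_dz(z, Q):
    t_rec = 1.0 / (C_HII * alpha_B * (1 + Yp / (4 * Xp))
                   * n_H * (1 + z) ** 3)
    dt_dz = -1.0 / ((1 + z) * Hz(z))
    return (ndot_ion(z) / n_H - Q / t_rec) * dt_dz

sol = solve_ivp(dQ_dz, (25.0, 0.0), [1e-4], dense_output=True)
z = np.linspace(0.0, 25.0, 500)
Q = np.clip(sol.sol(z)[0], 0.0, 1.0)            # cap at full ionization
eta = np.where(z <= 4.0, 2.0, 1.0)              # He doubly ionized at z <= 4
tau = cumulative_trapezoid(c_cm * (1 + z) ** 2 / Hz(z) * Q * n_H
                           * sigma_T * (1 + eta * Yp / (4 * Xp)),
                           z, initial=0.0)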
§.§ Gaussian Process Regression
Gaussian process regression is a non-parametric Bayesian regression method. This method has been extensively used in cosmological data analysis <cit.>. A Gaussian process is a collection of random variables such that any finite subset of these random variables has a multivariate Gaussian distribution <cit.>. A GP is described by its mean and covariance functions, defined as μ(x) = 𝔼[f(x)], and
k(x,x') = 𝔼[(f(x)-μ(x))(f(x')-μ(x'))], respectively, for a real process f(x). In particular, here f(x) defines the luminosity-magnitude relation guided by data for a given mean function. The covariance function gives the covariance between two random variables and characterizes the covariance matrix having elements C_i,j = k(x_i,x_j). Given a finite set of training points x={x_i}, a function f(x) evaluated at each x_i is a Gaussian random variable and the vector f={f_i} has a multivariate Gaussian distribution given as f∼𝒩(μ(x), C(x,x)).
The choice of the covariance function is important. We choose the Radial Basis Function (RBF) kernel as the covariance function for our analysis, defined as k(x_i,x_j)= σ_ℓexp(- (x_i-x_j)^2/2ℓ^2), where σ_ℓ and ℓ are the kernel hyperparameters. σ_ℓ is the amplitude parameter, which can be thought of as an offset that decides the tilt of the reconstructed function f(x) from the given mean function, and ℓ describes the characteristic correlation length.
Data points act as training points to optimize the hyperparameters and provide posterior prediction along with uncertainty on the predictions for the given test points.
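For illustration, a minimal sketch of such a GP fit over LF residuals about a supplied mean function (here a best-fit Schechter model) follows; note that scikit-learn optimizes the kernel hyperparameters by maximizing the marginal likelihood, whereas in our analysis they are sampled jointly with the Schechter parameters:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_lf_gp(M_uv, log_phi, log_phi_err, mean_fn):
    # GP over residuals: data minus the mean function (best-fit Schechter)
    kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
    gp = GaussianProcessRegressor(kernel=kernel,
                                  alpha=log_phi_err ** 2)  # data variance
    gp.fit(M_uv[:, None], log_phi - mean_fn(M_uv))
    predict = lambda M: gp.predict(M[:, None]) + mean_fn(M)
    return gp, predict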
Although GP is the formalism used here to follow the actual trends of the data defining the luminosity-magnitude relation, there are practical technicalities regarding the choice of the mean function that require special care. In principle, one can start with a zero mean function or any random function. In this study, however, we are particularly interested in checking whether the Schechter function is a correct description of the luminosity-magnitude relation. To test that, we allow the three Schechter function parameters to vary along with the GP hyperparameters, ℓ and σ_ℓ. In this way, the hyperparameter posteriors, marginalized over the Schechter function parameters, indicate in a conservative way whether the Schechter function is a correct model for the observational data.
§ DATA SETS
We use rest frame UV luminosity function data for redshifts z∼ 2, 3, 4, 5, 6, 7 derived in <cit.> using HUDF, HFF, and CANDELS fields and HUDF data compiled by <cit.>. For z∼4-7 we add data from the Hyper Suprime-Cam (HSC) Subaru Strategic Program (SSP) survey <cit.>. These data sets are corrected for active galactic nucleus (AGN) contribution and are mostly at the brighter end of luminosity. For redshifts 8, 9 and 10 we use luminosity function data from <cit.>, <cit.>, <cit.> and <cit.>. We also use data at redshifts 9 and 12 from JWST <cit.>.
We use neutral hydrogen fraction data to constrain reionization from observations of Lyα-emitting galaxies <cit.>, high-redshift quasar spectra <cit.>, gamma ray bursts <cit.>, dark fraction in the spectra of bright quasars <cit.> and ionized near-zones around high-redshift quasars <cit.>.
From Planck CMB, we use Planck binned TTTEEE likelihood with low multipole temperature and polarization likelihoods and the lensing likelihood as discussed in Planck baseline <cit.>.
§ RESULTS
§.§ Relation between magnitude and Galaxy UV LF
To characterise the galaxy UV LF, we use both the Schechter function (<ref>) and GP to fit the luminosity data. We use data over the largest possible range of magnitude, -25<M_ UV < -14, wherever available. We use CosmoMC <cit.> as a generic sampler to explore the parameter space.
For fitting the Schechter function we keep all three parameters M^*, ϕ^* and α free. However, we notice that for z>8 the data are not able to constrain all the parameters. Therefore, at z ≥ 9, we keep the slope α fixed at -2.35 following <cit.> and determine the other two Schechter parameters at z∼ 9–12. We find the estimated parameters are consistent with previous results of <cit.>. For z ∼ 10 and 12, we fix the prior on M^∗_UV as the 68% range obtained in the z ∼ 9 analysis. In <ref> we present posterior distributions of the Schechter function parameters. We fit the free-form LF using GP with the same data sets, keeping the GP hyperparameters (ℓ and σ_ℓ) free along with the three Schechter parameters. We avoid GP fitting at z∼ 9 – 12 due to the unavailability of low-magnitude, high signal-to-noise data where deviation is expected; at these redshifts, not even all of the Schechter parameters can be optimised. The analysis of ten redshifts is divided into three plots.
Random samples of luminosity function from the chains for Schechter function are plotted in blue lines in <ref> and <ref>. Modified functions from the Gaussian process are plotted in orange lines in the same figures.
Deviations from the Schechter function below magnitude M_ UV=-21 can be noticed. A similar deviation is also reported in <cit.>. It is important to note that most of the previous studies are based on HFF and HUDF data <cit.> and other ground-based observations <cit.> that are limited to magnitudes larger than -23. According to those studies, the Schechter function is the best description of the magnitude-luminosity relation. In our study, we combine those previous data sets with ancillary luminosity data sets of brighter galaxies (M_ UV≲ -23) from HSC data <cit.>. At magnitudes fainter than this, the AGN contamination of the galaxy luminosity is negligible <cit.>, and therefore the previous results are consistent with our results up to certain magnitudes. The added HSC data of brighter galaxies are corrected for contribution from AGNs <cit.>.
We find a maximal excess in the bright-end shape of the galaxy UV LF, instead of the expected exponential drop of the Schechter function, at all redshifts where low-magnitude (M_ UV≲ -21) data are available. At certain redshifts, owing to high signal-to-noise ratios, the deviation is detected at high statistical significance, while for other redshifts we obtain a similar deviation with lower significance. Importantly, at z=3,4 the hyperparameter posteriors plotted in the upper left panel of <ref> indicate that the Schechter function (used as a mean function in this analysis) is ruled out by the data at high significance. Data from z=2 and 7 also prefer a modification over the Schechter function at around ∼95% confidence level (C.L.). In order to understand the source of this deviation, we reanalyse the z=4 data with two different data cuts (M_ UV>-23 and M_ UV>-21). The results are presented in the lower right panel of <ref>. When the brightest part of the observation, -24<M_ UV<-23, is not used, the significance decreases, with the GP hyperparameters ruling out the Schechter function at 3σ. When we use more conservative cuts by masking data between -24<M_ UV<-21, we find a further drop in significance. These tests reveal that the modifications to the Schechter function are driven by the luminosity observations of the lowest-magnitude (brightest) objects.
§.§ Evolution of UV Luminosity Density
We compute the UV luminosity density ρ_ UV following <ref>, integrating down to M_ UV = -17. The samples of LFs from the Schechter and GP analyses are used to estimate the posterior distribution of the derived parameter ρ_ UV. The means and 68% bounds are provided in <ref>. The changes in luminosity density relative to the Schechter function model are not noticeable in the GP results. Though the Schechter function is ruled out by the data at certain redshifts, the required modifications are located at the brightest end of the function, and the brightest end does not contribute significantly to the integral of the UV luminosity density because the number density drops steeply with increasing brightness. In <ref>, we plot the redshift evolution of the luminosity density obtained from the GP (and Schechter fit) and a logarithmic double power-law fit to the data (for z ≥ 6). We show only the 1σ and 2σ spread of the logarithmic double power-law fit to the luminosity density obtained from the Schechter fit. We notice outliers in the data with respect to the model around redshifts 9 and 10. Recent JWST data suggest an enhanced population of star-forming galaxies above redshift z∼9 <cit.>. Compared to <cit.>, the new data may hint towards a modification of the luminosity density evolution model. However, note that the JWST data cannot constrain all of the Schechter function parameters owing to the low signal-to-noise of the detections; in particular, the slope remains unconstrained (see <ref>). This may bias the estimation of the luminosity density, and therefore we do not explore possible deviations from the double power-law model with these data. We expect to revisit this issue when new data from JWST become available.
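As an illustration of this integral (our sketch, reusing schechter_mag from the sketch above; the bright-end limit of -25 and the AB zero-point of 51.6 are our assumptions for demonstration), the luminosity density can be evaluated as:

from scipy.integrate import quad

def rho_uv(M_star, phi_star, alpha, M_faint=-17.0, M_bright=-25.0):
    # rho_UV = int phi(M) L_nu(M) dM  [erg s^-1 Hz^-1 Mpc^-3];
    # AB absolute magnitude to specific luminosity: L_nu = 10**(0.4 * (51.6 - M))
    integrand = lambda M: (schechter_mag(M, M_star, phi_star, alpha)
                           * 10.0 ** (0.4 * (51.6 - M)))
    value, _ = quad(integrand, M_bright, M_faint)
    return value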
§.§ Constraints on Reionization
We obtain the constraint on the redshift evolution of the IGM neutral hydrogen fraction from a joint fit of the Planck data, the neutral fraction data, and the UV luminosity density data derived in <ref>. We treat the four parameters of the logarithmic double power-law form of ρ_UV and log_10⟨ f_escξ_ion⟩ as free parameters. For C_ HII we use a fixed value of 3.
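The evolution equation referenced above is the standard ionization balance dQ_HII/dt = ṅ_ion/⟨n_H⟩ − Q_HII/t_rec with ṅ_ion = ⟨f_esc ξ_ion⟩ ρ_UV. The sketch below (our illustration only; the cosmological parameters, recombination coefficient, and the constant example ρ_UV are assumed values for demonstration) integrates it in redshift:

import numpy as np
from scipy.integrate import solve_ivp

N_H = 1.9e-7            # comoving hydrogen density [cm^-3] (assumed)
ALPHA_B = 2.5e-13       # case-B recombination coefficient at ~2e4 K [cm^3 s^-1]
C_HII = 3.0             # clumping factor, fixed as in the text
CHI = 1.08              # electron boost from singly ionized helium
MPC3_TO_CM3 = 2.938e73  # cm^3 per Mpc^3

def hubble(z, h=0.674, om=0.315):
    # H(z) [s^-1] for a flat LCDM cosmology (assumed parameters)
    return h * 3.241e-18 * np.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)

def dQ_dz(z, Q, log_fesc_xi, rho_uv_fn):
    # dQ/dt = n_ion_dot/<n_H> - Q/t_rec, converted with dt/dz = -1/[(1+z) H(z)]
    n_ion_dot = 10.0 ** log_fesc_xi * rho_uv_fn(z) / MPC3_TO_CM3 / N_H
    t_rec = 1.0 / (C_HII * ALPHA_B * CHI * N_H * (1.0 + z) ** 3)
    return [-(n_ion_dot - Q[0] / t_rec) / ((1.0 + z) * hubble(z))]

# Example: constant rho_UV = 1e26 erg s^-1 Hz^-1 Mpc^-3, log10<f_esc xi_ion> = 24.5;
# Q is clipped to [0, 1] a posteriori in this simple sketch.
sol = solve_ivp(dQ_dz, (20.0, 4.0), [0.0], args=(24.5, lambda z: 1e26),
                dense_output=True, rtol=1e-6)
z_grid = np.linspace(4.0, 20.0, 50)
x_HI = 1.0 - np.clip(sol.sol(z_grid)[0], 0.0, 1.0)   # neutral hydrogen fraction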
In <ref> we compare the evolution of the neutral hydrogen fraction across redshifts z∼4–15, solving <ref> using the ρ_ UV data sets obtained from both the parametric and non-parametric methods. The blue dashed line and magenta solid line are the best fits obtained with ρ_ UV from the GP and Schechter fits, respectively. The grey and cyan lines indicate random samples obtained from these fits, respectively. We also overplot the neutral fraction data set used in our analysis. Both ρ_ UV data sets provide similar constraints on the reionization history. This is not surprising: as discussed above, the derived luminosity densities follow similar redshift evolution owing to the similar behaviour of the luminosity functions at the faint end. This result suggests that the reionization process is mainly driven by fainter galaxies. The excess dropout galaxies at the bright end of the magnitude range have a limited contribution to the reionization process due to their lower number density or possibly a low escape fraction. Earlier publications <cit.> reached similar conclusions using model-based reionization studies. As shown in <ref>, we find a much tighter bound on the neutral hydrogen fraction than previous works <cit.>. We quote the midpoint (50%) redshift of reionization z_ re in <ref>, which is lower than the Planck reported value <cit.>. Our constraints on the duration of reionization (the redshift difference between 10% and 90% reionization) are Δ z∼ 1.627^+0.059_-0.071 and 1.627^+0.060_-0.070 at the 68% confidence level using the Schechter function and GP based ρ_ UV data sets, respectively. This is consistent with the upper bound of Δz<2.8 reported in <cit.> from the kinetic Sunyaev–Zel'dovich effect, and implies a sharper reionization history. Finally, our constraints on the reionization optical depth are τ_ re = 0.0492^+0.0008_-0.0006 and 0.0494^+0.0007_-0.0006 from the Schechter and GP analyses. These values are also listed in Table <ref>. However, we would like to highlight that the uncertainties in the reionization histories quoted in this subsection are underestimated, since we have fixed both the clumping factor and the escape fraction in our analyses. Therefore, these summary statistics on the optical depth and the duration of reionization should be used with caution in any subsequent analysis. The exercise in this subsection has been performed to show that the constraints on cosmology remain unaltered even though the Schechter function is ruled out by the observational data. The flattening/increase in the luminosity density data at high redshifts indicated by the JWST data does not impact the optical depth constraints because (1) we have not used the very high redshift observations (z∼ 14, 16) from JWST, and (2) the double power-law form used for the luminosity density does not capture its increasing trend at high redshifts.
§ SUMMARY
In this paper, we investigate the UV luminosity functions between redshifts z∼2–12 over a wide range of magnitude, -25≲ M_ UV≲ -14, based on HST, HSC and JWST data sets. We test whether the conventional Schechter function is a valid theoretical model for such a wide range of magnitudes and redshifts, and what the implications of any modified function are for the reionization history.
We fit the commonly used Schechter function model at all redshifts z∼2–12 and perform a free-form reconstruction at redshifts z∼2–8 using a Gaussian process on the same data sets. We find that although the Schechter function is a very good description of the dropout galaxies at the fainter end of the magnitude range (M_ UV≳ -21), its exponential tail is inconsistent with the brighter dropout galaxies at almost all redshifts where low-magnitude data are available. The Gaussian process regression allows a free-form reconstruction of the UV LFs and can therefore describe the excess LF at the bright end, which is not possible with the Schechter function. We obtain the UV luminosity densities by integrating both LF forms down to the magnitude M_ UV = -17. Since the Schechter function is consistent with the data at the faint end, and the faint end dominates the luminosity density integral because most galaxies reside there, we find similar luminosity densities with both methods.
The reionization history is therefore found to be similar in both cases. This implies that the brighter dropout galaxies have an insignificant contribution to the reionization process (as supported by earlier publications). However, integrating down to fainter sources, M_ UV > -15, could reveal certain differences, which we do not explore, taking M_ UV = -17 as the conservative choice.
The data at redshifts 9, 10 and 12 do not have a high signal-to-noise ratio and therefore cannot constrain all the Schechter function parameters. Testing a modification to the Schechter function at these redshifts is thus beyond the scope of this paper with the available data. More observational data from JWST NIRSpec will be required for further studies at higher redshifts. Furthermore, we have not incorporated redshift dependencies of the clumping factor and escape fraction into our simplistic reionization model, and we neglect the contribution of quasars as reionization sources; incorporating these could improve the accuracy of our findings. It would be intriguing to include all these modifications in our model and reassess the current analysis when new data from JWST and other sources, with significant improvements in signal-to-noise ratio, become available. We defer these extensions to future investigations.
§ ACKNOWLEDGEMENT
All the computations in this paper were performed using the HPC Nandadevi and Kamet (<https://hpc.imsc.res.in>) at the Institute of Mathematical Sciences, Chennai, India. The authors would like to thank Yuichi Harikane for providing the UV luminosity data sets used in this work. DKH would like to thank Daniela Paoletti for several important discussions. DKH acknowledges support from the Indo-French Centre for the Promotion of Advanced Research – CEFIPRA grant no. 6704-4 and support through the India-Italy “RELIC - Reconstructing Early and Late events In Cosmology" mobility program.
|
http://arxiv.org/abs/2405.09365v1 | 20240515141744 | SARATR-X: A Foundation Model for Synthetic Aperture Radar Images Target Recognition | [
"Weijie L",
"Wei Yang",
"Yuenan Hou",
"Li Liu",
"Yongxiang Liu",
"Xiang Li"
] | cs.CV | [
"cs.CV"
] |
In preparation for submission
Synthetic aperture radar (SAR) is essential in actively acquiring information for Earth observation. SAR Automatic Target Recognition (ATR) focuses on detecting and classifying various target categories under different image conditions. Current deep learning-based SAR ATR methods are typically designed for specific datasets and applications, and the varying target characteristics, scene background information, and sensor parameters across ATR datasets challenge their generalization. This paper aims to achieve general SAR ATR based on a foundation model with Self-Supervised Learning (SSL). Our motivation is to break through the limitations of specific datasets and conditions and to obtain universal perceptual capabilities across targets, scenes, and sensors. A foundation model named SARATR-X is proposed along four aspects: pre-training dataset, model backbone, SSL, and evaluation tasks. First, we integrated 14 datasets with various target categories and imaging conditions as a pre-training dataset. Second, different model backbones were discussed to find the most suitable approach for remote sensing images. Third, we applied two-stage training and SAR gradient features to ensure the diversity and scalability of SARATR-X. Finally, SARATR-X achieved competitive and superior performance on 5 datasets with 8 task settings, which shows that a foundation model can achieve universal SAR ATR. We believe it is time to embrace foundation models for SAR image interpretation in the era of increasing big data.
Synthetic Aperture Radar (SAR), Target Recognition, Foundation Model, Self-Supervised Learning (SSL), Deep Learning, Masked Image Modeling (MIM)
SARATR-X: A Foundation Model for Synthetic Aperture Radar Images Target Recognition
Weijie Li, Wei Yang^∗, Yuenan Hou, Li Liu^∗, Yongxiang Liu^∗, Xiang Li
This work was supported by the National Key Research and Development Program of China No. 2021YFB3100800, the National Natural Science Foundation of China under Grant 61871384, 61921001, 62022091, 62201588, and 62376283, the Science and Technology Innovation Program of Hunan Province under Grant 2022RC1092, and the Key Stone Grant JS2023-03 of the National University of Defense Technology.
(^∗Corresponding authors: Li Liu, Wei Yang, and Yongxiang Liu. e-mail: liuli_nudt@nudt.edu.cn, yw850716@sina.com, and lyx_bible@sina.com.)
Weijie Li, Wei Yang, Yongxiang Liu, Li Liu, Xiang Li are with the College of Electronic Science and Technology, National University of Defense Technology, Changsha, 410073, China (e-mail: lwj2150508321@sina.com).
Yuenan Hou is with the Shanghai AI Laboratory, Shanghai, 200000, China.
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Based on electromagnetic scattering in microwave frequency bands, Synthetic Aperture Radar (SAR) <cit.> is essential in actively acquiring information for Earth observation, with stable imaging capacity under various weather and lighting conditions. SAR Automatic Target Recognition (ATR) aims to automatically localize and classify objects of interest in SAR images, e.g., detection and classification. It has been investigated extensively for various civilian and military applications <cit.>. In the past decade, deep learning has reinvigorated SAR ATR with impressive breakthroughs <cit.>. However, as shown in Figs. <ref> and <ref>, due to the extensive acquisition conditions as well as the high costs of collection and annotation, a plethora of specialized datasets and algorithms have been designed for different SAR ATR applications. For example, nearly a dozen new target datasets and dozens of attention modules have been developed for SAR target recognition in the last few years. However, the differing target characteristics, scene background information and sensor parameters across datasets challenge their generalization. In contrast to specialized algorithms for particular applications, we aim to investigate a general SAR target recognition method.
A foundation model <cit.> pre-trained in a task-agnostic manner (generally via self-supervised learning) on extensive data can be flexibly adapted to a wide range of downstream tasks under various conditions. Self-supervised learning (SSL) <cit.> can mitigate label inefficiency by exploring supervision directly from the data itself, thereby reducing reliance on expensive expert labeling and efficiently scaling data and models. Hence, SSL has the capability to leverage extensive unlabeled SAR data to construct foundation models, which can achieve universal recognition with a few labeled target samples across different categories and conditions. As shown in Table <ref>, foundation models are thriving in remote sensing, with superior performance across a diverse range of modalities and tasks. In particular, multi-modality foundation models have recently appeared, indicating this research topic's great potential. However, a foundation model that can effectively accomplish the various SAR ATR tasks is still lacking: FG-MAE <cit.> and SkySense <cit.> mainly focus on SAR scene classification and segmentation, while our previous SAR-JEPA <cit.> and MSFA <cit.> explored the object classification and detection tasks separately. It thus remains necessary to explore an analogous ATR foundation model for SAR image interpretation, which lies at the intersection of SAR technologies and frontier AI technologies (in particular, big models).
Thanks to the development of SAR sensors and the enormous efforts of researchers, numerous SAR target datasets have emerged over the past five years, as shown in Fig. <ref>. Although image annotation remains a challenging and expensive endeavor, SAR sensors have produced vast amounts of data at a significantly accelerated pace compared to the previous ten years. Meanwhile, the investigation of SSL and foundation models for SAR ATR is advancing with the increasing datasets. As shown in Table <ref>, existing research <cit.> has discussed different factors of a foundation model, and these studies are progressively advancing towards the goal of general SAR ATR. For example, BIDFC <cit.> performs much better than supervised methods in few-shot classification on the MSTAR vehicle dataset, SAR-JEPA <cit.> demonstrates that SSL can effectively improve classification accuracy for different targets across datasets, and MSFA <cit.> shows improved ship detection results. Inspired by their insightful work, we formally propose a SAR ATR foundation model, SARATR-X, to achieve a breakthrough in universal SAR target recognition.
In this work, we lay the groundwork for developing SARATR-X, a foundation model specifically tailored for SAR ATR tasks. It aims to learn generalizable representations from unlabeled SAR images, offering a basis for efficient adaptation across various downstream ATR tasks. The four parts for creating SARATR-X include pre-training datasets, model backbones, self-supervised learning, and evaluation tasks.
Pre-training datasets need to contain diverse target categories and imaging conditions to accommodate various downstream tasks. However, SAR ATR lacks a large-scale dataset like ImageNet <cit.>, and the most commonly used MSTAR dataset in Table <ref> only includes fine-grained vehicle categories, which is unsuitable for larger-scale pre-training. Therefore, many SSL methods combine different datasets; in particular, SARDet-100K <cit.> incorporates 9 SAR target detection datasets. With the increasing number of open-source datasets for SAR ATR, we integrate the vast majority of them for pre-training. As shown in Fig. <ref> and Table <ref>, we use 14 classification and detection datasets with different target categories and imaging conditions to explore the foundation model's potential.
Model backbones aim to achieve better spatial representation of remote sensing images, especially small target signatures in large imagery. As shown in Table <ref>, researchers have used two architectures (Transformer and convolutional neural network). The Transformer offers better spatial resolution without downsampling, while the Convolutional Neural Network (CNN) is more efficient thanks to its convolution kernels. MSFA discussed the effectiveness of different architectures, and the results showed that the Swin Transformer backbone performs better. Therefore, we compare different model backbones to find the most suitable approach for the properties of remote sensing images. Our choice is HiViT <cit.>, which combines the advantages of the Swin Transformer while remaining able to drop patches in MIM.
Self-supervised learning faces the challenge of SAR image quality. SAR images contain speckle noise due to coherent imaging, and their visual features are not as distinct and rich as those of RGB images. As shown in Table <ref>, contrastive learning <cit.> uses data augmentation and pre-processing to reduce noise, while MIM <cit.> has moved from pixel-level to feature-level guide signals. Therefore, we consider the core problem of SAR SSL to be the construction of high-quality guide signals. For example, PGIL <cit.> leveraged a sub-frequency feature of SAR complex images to learn physics information, and our SAR-JEPA <cit.> applied multi-scale gradient ratios to handle multiplicative speckle noise and capture target shape. Furthermore, multi-stage training <cit.> from ImageNet to SAR diminishes the interference of noise with model diversity, as shown in Fig. <ref>. Therefore, we apply two-stage training from ImageNet to SAR and use multi-scale gradient features as high-quality guide signals for SAR MIM.
Evaluation tasks need to comprehensively evaluate the performance of the foundation model under different tasks and settings. Benefiting from the open-source target datasets, we first construct a fine-grained classification dataset with various categories to evaluate the effectiveness of the proposed improvements. In the end, we perform a comprehensive comparison between the proposed SARATR-X and existing SOTA methods in public classification and detection tasks.
Our SARATR-X has achieved competitive and superior performance on 5 datasets with 8 task settings, as shown in Fig. <ref>. Its code and weights can be found at <https://github.com/waterdisappear/SARATR-X>. We hope this work will advance the development of foundation models and general SAR target recognition. The main contributions are summarized as follows:
∙
We propose a vision of general SAR target recognition, aiming to find universal solutions for various SAR ATR tasks and applications in an era of SAR imagery with abundant unlabeled data and scarce labeled samples.
∙
We systematically investigate a foundation model solution for general SAR ATR and present the first SAR ATR foundation model, named SARATR-X, which achieves new performance breakthroughs on a wide range of SAR target recognition datasets and settings.
∙
We hope this promising work will stimulate research interest in SAR foundation models, thereby advancing SAR image interpretation with frontier AI technologies.
The remainder of this paper is organized as follows. Sec. <ref> introduces related work in remote sensing and SAR ATR. Sec. <ref> introduces the proposed foundation model SARATR-X. Secs. <ref> and <ref> conduct extensive experiments to demonstrate the superiority of the proposed method. Sec. <ref> concludes the paper and discusses future work.
§ RELATED WORK
As shown in Tables <ref> and <ref>, visual foundation models are booming in remote sensing, and many models have been proposed for various modalities and tasks. Our study focuses on the foundation model for SAR ATR, i.e., SAR image-based target classification and object detection. In the following, we introduce recent developments in remote sensing foundation models.
§.§ Foundation models in remote sensing
Remote sensing foundation models <cit.> have received widespread attention over the last three years. Researchers have made many breakthroughs, achieving effective learning on various modalities and tasks. In terms of pre-training datasets, researchers use existing large-scale datasets or collect large numbers of samples from different sources. As for the model backbone, researchers have improved attention mechanisms, positional encodings, and other aspects to enhance the perception of complex spatial information. MIM has been used to learn spatial-temporal contextual information, while contrastive learning is applied to multi-modal learning.
SatMAE <cit.> proposed a novel masking strategy and temporal and spectral positional encoding for multi-spectral and temporal images. With a new dataset fMoW Sentinel with 13 frequency bands, SatMAE achieved new performance on scene classification and semantic segmentation tasks.
RVSA <cit.> improved the pre-training ViT backbone with a rotated varied-size window attention method for arbitrarily oriented objects. This work shows the importance of learning the complex spatial contextual relationships of targets in remote sensing images.
RingMo <cit.> used a patch incomplete mask strategy for dense and small objects. Based on their self-built 2 million images, RingMo has proven effective in many tasks.
RingMo-Sense <cit.> offered a three-branch network and masking strategy to model the spatio-temporal interaction for temporal images.
CMID <cit.> combined contrastive learning and masked image modeling to learn global semantic information and local spatial information.
GFM <cit.> focused on the differences between natural and remote sensing images and employed a multi-objective continual pretraining approach to leverage both knowledge.
DiffusionSat <cit.> was the first remote sensing generative model to employ geographic information embedding in stable diffusion.
Scale-MAE <cit.> reconstructed images at different frequencies with improved positional encodings of ViT.
FG-MAE <cit.> employed different hand-designed features to replace the original pixels in MIM and improve the feature quality, and SMLFR <cit.> used a low-pass filter to eliminate high-frequency information from the image pixels.
Multi-modal remote sensing foundation models have also been developed, such as SkySense <cit.> and OFA-Net <cit.>. SkySense <cit.> proposed a multi-granularity contrastive learning method to learn representations of different modalities. Besides, a GEO-context prototype was applied to embed geographical contextual information. OFA-Net <cit.> applied a shared Transformer backbone to multiple modalities. In addition, there are vision-language models <cit.>, such as EarthGPT <cit.>, SkyEyeGPT <cit.> and LHRS-Bot <cit.>, which connect large language models to different remote sensing image modalities. However, due to the difficulty of annotating SAR images, the collection of public datasets used by EarthGPT contains only 10,554 SAR ship images, far fewer than its 84,838 infrared images and 907,945 optical images.
Therefore, there is relatively little research on SAR foundation models, owing to the scarcity and fragmentation of high-resolution SAR target datasets shown in Fig. <ref>. We aim to explore a visual foundation model for SAR target recognition to stimulate research enthusiasm in this direction.
§.§ Foundation models in SAR
Researchers have explored SAR foundation models from different aspects, as shown in Table <ref>. Inspired by these previous studies, we systematically investigate how to construct a SAR ATR foundation model.
Early SSL was often used as a regularization loss for classification tasks.
RotANet <cit.> predicted the rotational pattern of MSTAR vehicle targets to capture azimuthal features for the classification task.
UACL <cit.> combined data augmentation and adversarial samples in contrastive learning to improve the model's robustness to various adversarial attacks.
PGIL <cit.> used contrastive learning between a sub-frequency feature of the SAR complex images and the deep feature of the amplitude images to inject physical knowledge into the classification task. Recently, SSL has been used in model pre-training and fine-tuning frameworks.
BIDFC <cit.> proposed weakly contrastive learning for pre-training on the fine-grained vehicle dataset MSTAR and used Gaussian-noise data augmentation to simulate SAR image noise.
TSCL <cit.> applied the pre-processing of SAR images before data augmentation in contrastive learning.
FG-MAE <cit.> discussed different hand-crafted features for multi-spectral and SAR images and used the HOG feature for SAR. Our previous studies, SAR-JEPA <cit.> and MSFA <cit.>, focused on target classification and object detection, respectively. SAR-JEPA <cit.> applied local reconstruction and multi-scale gradient features to better capture target spatial signatures. MSFA <cit.> proposed a multi-stage pre-training framework with filter augmentation to leverage large-scale RGB and SAR data for detection.
Our insight. These studies have demonstrated that SSL can achieve performance improvements across various categories <cit.> and tasks <cit.>, and can even be comparable to specially designed supervised methods <cit.>. This inspired us to conduct systematic research on foundation models to achieve general SAR target recognition, especially in this era of big data. First, we must extend the pre-training dataset to cover different tasks and scenarios based on existing research. Second, a suitable model backbone needs to be identified for the small-target characteristics of remote sensing images. Third, SSL needs high-quality guide signals from SAR images under noise interference. Finally, we need to evaluate the performance of foundation models comprehensively.
§ APPROACH
We aim to construct a foundation model for general ATR from large-scale SAR images via SSL. As described above, the growing number of SAR datasets and SSL studies inspired us to build a foundation model for SAR ATR. Our approach revolves around pre-training datasets, model backbones, SSL methods, and evaluation tasks to provide a systematic benchmark for foundation models in SAR ATR.
§.§ Creating a Diverse Pre-training Dataset
Existing work mainly used MSTAR <cit.> as the pre-training dataset. While MSTAR is a high-quality vehicle target slice dataset, its commonly used subset contains only a few thousand samples. Moreover, this dataset suffers from background bias due to its single imaging scene <cit.>. By comparison, the pre-training set ImageNet-1K in computer vision contains 1.4 million images with different categories and scenes. Using an insufficient SAR target pre-training dataset would underestimate the potential of this research area. Since diverse targets, scenes, and sensor conditions constitute a huge data sampling space in real-world situations, constructing a large pre-training dataset for the foundation model is central to unifying various SAR ATR tasks.
The growing number of SAR target datasets in Fig. <ref> is a primary motivation for achieving this goal. Although SAR images are expensive and no single dataset contains all popular target categories and imaging conditions, collecting target samples from various open-source datasets can still yield a pre-training dataset with diverse categories, scenes, and sensors. Therefore, we build a new pre-training dataset containing 186,600 SAR target samples from the 14 open-source SAR target datasets in Table <ref>. We construct the dataset to include, as far as possible, diverse target categories (terrestrial and maritime targets such as vehicles, ships, aircraft, oil tanks, bridges, etc.), scenes (typical scenes such as cities, harbors, airports, oceans, etc.), and sensors (satellite, airborne, and simulation platforms of different resolutions and bands).
§.§ Choosing a Scalable Model
We consider two model backbones for SAR target recognition. First, we apply the Vision Transformer (ViT) <cit.>, which is commonly used in SSL and has good scalability in model parameters. Second, we experiment with ConvNeXt-V2 <cit.>, which has the same scalability as ViT while maintaining the efficiency of convolutional neural networks. In addition to the scalability of the model parameters, the image properties need to be considered for remote sensing. A SAR target usually has a small foreground and a dynamic context range. Our previous study, MSFA <cit.>, found that the Swin Transformer outperforms ViT thanks to its hierarchical structure, but it is unsuitable for dropping patches in MIM to save computing resources. Therefore, we also consider a variant of ViT, the Hierarchical Vision Transformer (HiViT) <cit.>, which improves the input spatial resolution while retaining ViT's compatibility with MIM.
§.§ Pre-Training with two steps
We use MIM as the pretext task and masked autoencoders (MAE) <cit.> to save computational resources by dropping masked patches. MIM can help the foundation model achieve SAR image interpretation by learning the contextual relationships around objects. A key issue arises when applying MIM: SAR is a coherent imaging modality, and the resulting speckle noise can interfere with the pretext task. Therefore, SARATR-X uses two pre-training steps to build the foundation model, as shown in Fig. <ref>.
The first step performs MIM on ImageNet to obtain better initialization weights. We simplify the multi-stage pre-training of MSFA, which performs SSL on ImageNet with the backbone, detection pre-training on DOTA with the whole framework, and detection fine-tuning on SAR images. SARATR-X uses the ImageNet pre-training weights as initialization for the SAR pre-training step. This enhances the diversity of attention during the SAR pre-training step, as shown in Fig. <ref>; in contrast, random initialization leads to the attention converging to the same pattern in SAR pre-training with MAE. Besides, ImageNet pre-trained backbone weights are available open source, which greatly reduces pre-training time. We refer to this use of ImageNet pre-trained weights as SSL-ImageNet & SAR.
The second step performs MIM on SAR images. As mentioned above, SAR image noise is a tricky problem, and FG-MAE, SAR-JEPA, and MSFA have discussed many features, such as Canny edges <cit.>, HOG <cit.>, Haar-like features <cit.>, SAR-HOG <cit.>, and SAR-SIFT <cit.>. We could use different feature combinations to get the best results, but here we adopt the simplest gradient feature, following our previous SAR-JEPA, to avoid getting bogged down in complex feature selection. We use the Multi-scale Gradient Feature (MGF) <cit.> to suppress the speckle noise and extract the target shape.
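Before detailing the target feature, the following sketch (PyTorch; a minimal illustration of the objective, not our released implementation) shows the masked-feature-prediction loss of this second step: the encoder sees only visible patches, and the loss is computed on masked patches against the hand-crafted feature target, here the MGF defined below. The encoder and decoder callables and their signatures are placeholders for the HiViT backbone and a light prediction head.

import torch
import torch.nn.functional as F

def patchify(x, p=16):
    # (B, C, H, W) -> (B, N, p*p*C) non-overlapping patches
    B, C, H, W = x.shape
    x = x.unfold(2, p, p).unfold(3, p, p)                 # B, C, H/p, W/p, p, p
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

def mim_feature_loss(encoder, decoder, images, target_fn, mask_ratio=0.75):
    # target_fn is the guide-signal extractor (e.g., the MGF below)
    target = patchify(target_fn(images))                  # (B, N, D)
    B, N, _ = target.shape
    mask = torch.rand(B, N, device=images.device) < mask_ratio
    pred = decoder(encoder(images, visible=~mask), mask)  # predict all patches
    return F.mse_loss(pred[mask], target[mask])           # loss on masked patches only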
Multi-scale gradient feature.
The classical differential gradient is not a constant false alarm rate operator under the multiplicative speckle noise of SAR images, meaning that speckle noise causes false gradient responses in strongly scattering target regions. Previous studies <cit.> have shown that computing ratios is suitable for multiplicative noise. Hence, MGF uses the gradient by ratio <cit.> to obtain the gradient features G_m.
R_i = M_1(i)/M_2(i),
G_H = log(R_1),
G_V = log(R_3),
G_m = √(G_H^2+G_V^2),
where R_i denotes the average ratio along direction i, and M_1(i) and M_2(i) are the area averages on opposite sides of the current pixel along direction i; i = 1 is the horizontal direction and i = 3 the vertical direction. The area averages can be computed from the input image with four fixed convolution kernels. Eqs. <ref> and <ref> then take logarithms of the ratios to obtain the horizontal gradient G_H and the vertical gradient G_V <cit.>.
MGF = concat(G_m1, G_m2, G_m3)
Due to the dynamic range required for various targets in remote sensing <cit.>, MGF is constructed with convolutional kernels of different sizes. We set the kernel scale r to 9, 13, and 17 to obtain G_m1, G_m2, and G_m3; the full convolutional kernel is a square of odd side length 2r+1.
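A minimal NumPy sketch of this feature extractor follows (our illustration: the opposite side windows are approximated by shifted box filters, with wrap-around edge handling from np.roll and a small eps guarding the ratio; the exact kernels above differ in these details):

import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(img, r, eps=1e-6):
    # Local means from an r x r box filter; rolling by ~r/2 pixels yields
    # the side-window means M_1 and M_2 around each pixel (Eq. 1).
    box = uniform_filter(img.astype(np.float64), size=r, mode="reflect")
    half = (r + 1) // 2
    g_h = np.log((np.roll(box, -half, axis=1) + eps)
                 / (np.roll(box, half, axis=1) + eps))    # Eq. 2: G_H
    g_v = np.log((np.roll(box, -half, axis=0) + eps)
                 / (np.roll(box, half, axis=0) + eps))    # Eq. 3: G_V
    return np.sqrt(g_h ** 2 + g_v ** 2)                   # Eq. 4: G_m

def mgf(img, scales=(9, 13, 17)):
    # Eq. 5: stack the three single-scale gradient maps as channels
    return np.stack([gradient_by_ratio(img, r) for r in scales], axis=0)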
§.§ Evaluating with Recognition Tasks
We merge the fine-grained classification datasets of vehicles, ships, and aircraft into a new SAR classification dataset named SAR-VSA. This dataset is used to compare the performance of SSL models in a few-shot setting in Sec. <ref>. We then report SARATR-X results on existing classification and detection dataset settings against other methods in Sec. <ref>.
§ EXPERIMENTS OF SARATR-X
First, we perform SSL on the pre-training dataset without label information. Then, we fine-tune the pre-trained model on SAR-VSA with a few-shot classification task and a linear probing setting to analyze the improvements of SARATR-X. Finally, we discuss the scalability of the proposed approach.
We perform pre-training on 8 NVIDIA RTX3090 GPUs. The SAR pre-training dataset consists of fourteen SAR datasets. The few-shot SAR classification dataset, SAR-VSA, contains 25 fine-grained targets from three SAR datasets. With small numbers of labeled samples, it is difficult to ensure training convergence when fine-tuning all model parameters. Therefore, we use linear probing <cit.>, which includes a batch normalization layer, to absorb differences in the data's statistical properties and to reduce the number of fine-tuned parameters. Detailed settings are in Appendix <ref>.
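A sketch of this linear-probing head (PyTorch; the affine-free BatchNorm before the linear layer follows the MAE linear-probing recipe, and the remaining details are placeholders):

import torch.nn as nn

def build_linear_probe(backbone, feat_dim, num_classes):
    # Freeze the pre-trained backbone; train only a BatchNorm + linear head.
    # The affine-free BN absorbs the shift in feature statistics between
    # the pre-training data and the SAR few-shot data.
    for p in backbone.parameters():
        p.requires_grad = False
    return nn.Sequential(
        nn.BatchNorm1d(feat_dim, affine=False, eps=1e-6),
        nn.Linear(feat_dim, num_classes),
    )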
§.§ Comparison of Model Backbones
Table <ref> compares different model backbones for SAR ATR: ConvNeXt-V2 <cit.>, ViT <cit.>, and HiViT <cit.>. ConvNeXt-V2 and ViT represent the two main architectures, CNN and Transformer, while HiViT combines ViT and the Swin Transformer <cit.>.
From the results, we can see that ViT outperforms ConvNeXt-V2. On the one hand, ViT is more flexible than ConvNeXt-V2 in learning the contextual information in SAR images; on the other hand, the repeated downsampling in ConvNeXt-V2 loses small-target information, whereas ViT maintains the same spatial resolution across layers. HiViT, a vision Transformer with ViT's flexibility and a hierarchical representation, performs better than both ViT and ConvNeXt-V2 in our self-supervised experiments. In particular, HiViT uses small (4 × 4) input patches, which capture small-target features well. Fig. <ref> demonstrates that HiViT has a more varied attention-distance distribution than ViT, suiting the small-target information in remote sensing.
§.§ Strategy of two-step pre-training
Here, we discuss the two-step pre-training strategy that makes full use of the available model weights and SAR datasets. Table <ref> considers four pre-training settings: SL-ImageNet, SSL-ImageNet, SSL-SAR, and SSL-ImageNet & SAR. SL-ImageNet is pre-training on ImageNet with supervised learning[We used open-source weights from GitHub to conduct the experiments. The supervised weights of ConvNeXt-V2 and HiViT were obtained by supervised fine-tuning after SSL.]; SSL-ImageNet is pre-training on ImageNet with SSL from scratch; SSL-SAR is pre-training on our SAR pre-training dataset from scratch; SSL-ImageNet & SAR pre-trains the model on the SAR dataset starting from the SSL-ImageNet weights.
We notice that the additional supervised information introduced by SL-ImageNet does not necessarily improve SAR ATR performance; e.g., the linear probing performance of SL-ImageNet for HiViT is lower than that of SSL-ImageNet. SSL-SAR achieves better results than SSL-ImageNet using less data (12%), reflecting the large differences in target features between the two image types. Nevertheless, ImageNet pre-trained weights can provide a good initialization for low-level features such as shape and texture in SSL with visible-spectrum remote sensing <cit.> and medical images <cit.>. Our experiments also confirm this conclusion: using SSL-ImageNet as the initialization improves the pre-training performance on SAR images in Table <ref> and the attention diversity in Fig. <ref>. Therefore, our SARATR-X uses the SSL-ImageNet & SAR setting to complement the richness of the pre-training.
§.§ Design of target signal for SAR images
After discussing the backbone and strategy, we focus on the target features for SSL methods with SAR images. Due to the unique multiplicative speckle noise of SAR images, a key point for MIM is designing high-quality guide signals. As shown in Table <ref>, we consider six target features (pixel value <cit.>, low-pass filtered pixels <cit.>, the HOG feature <cit.>, deep features <cit.>, SAR-HOG <cit.>, and gradient by ratio <cit.>). All SSL methods use the SSL-SAR setting and the base model version.
First, we consider whether the existing ViT-based methods are suitable for SAR images. PixMIM <cit.> applies a low-pass filter to remove the high-frequency components and drive the model to focus on shape information. However, PixMIM does not outperform MAE, because the SAR noise is multiplicative and the filter parameters require a trade-off between target and noise. FG-MAE <cit.> uses HOG to capture features for SSL on SAR scene-level tasks, but we find that HOG does not ensure high-quality SAR target features: target regions usually have strong scattering values, and the speckle noise causes strong false points in the gradient computation within these regions. Besides, I-JEPA <cit.> proposes deep networks as target feature encoders to capture deep semantic features, but this leads to training that overfits the noise and fails to learn effective feature representations.
Therefore, we choose SAR-specific features as target features to enhance HiViT. SAR-HOG changes the gradient calculation of the HOG feature, using the gradient by ratio to handle speckle noise, and it performs better than pixel values and HOG. Inspired by PixMIM, we prefer to use the target shape (i.e., gradient features) directly as the target feature[SAR-HOG uses the same multi-scale setting to illustrate that simple gradient features can effectively represent target shape as an SSL guide signal]. Besides, multiple scales improve the feature representation for the various small targets in remote sensing. We discuss the kernel settings for computing gradients in Fig. <ref>. The scale affects feature quality: a smaller scale is finer for small-target edge extraction, while a larger scale is more suitable for large targets and noise suppression. Therefore, combining features at different scales improves over any single scale for the various target sizes in images.
§.§ Analysis
As stated above, SARATR-X's key points are summarized as follows: the HiViT architecture avoids the loss of small-target information; SSL-ImageNet & SAR uses ImageNet pre-trained weights to provide a good initialization for diverse perceptual capabilities; and MGF ensures high-quality target features while suppressing speckle noise in SSL with SAR images. Benefiting from the above insights, our SARATR-X can learn high-quality target features from noisy SAR remote sensing images in Table <ref>. Next, we analyze the diversity and scalability of SARATR-X.
Visualization.
Research <cit.> has shown that supervised pre-training and contrastive learning only model global information in the higher layers, while MIM can model both local and global information. However, we observe that this phenomenon is related not only to the methods but also to the data properties. Fig. <ref> (a) shows that ViT with MAE focuses on global information due to the large SAR image scenes, which differs from MIM's usual modeling properties. HiViT instead exhibits varied attention ranges thanks to its high spatial resolution and hierarchical structure in Fig. <ref> (b). In addition, using ImageNet weights as initialization alleviates this problem in Fig. <ref> (c). As for the target feature, Fig. <ref> (d) shows that the HOG feature amplifies the noise interference, which harms feature diversity. MGF effectively extracts the shape information of the target, making the model focus on diverse edge information in the lower layers. However, since this approach removes texture while preserving edges, the higher layers no longer need to attend to texture details, which diminishes the attention range in Fig. <ref> (e). Therefore, we combine two-step training with MGF in Fig. <ref> (f).
Scaling experiment.
Although MIM is a good learner that scales with data and model resources <cit.>, a question arises as to whether our method can preserve this scalability when dealing with noisy data such as SAR images. Fig. <ref> presents the scaling experiment from three perspectives: dataset size, parameters, and training epochs.
Despite our pre-training set comprising 186,660 images, which is smaller than ImageNet-1K, we observe a significant improvement in downstream task performance with increasing data and parameters. This indicates that the foundation model can fully unleash its potential on SAR images when high-quality features are extracted as guiding signals. However, as in <cit.>, when the pre-training set contains on the order of 100,000 images, the model tends to overfit during extended training epochs, and SAR image noise and low resolution further aggravate this overfitting. Nevertheless, SARATR-X outperforms our previous study, SAR-JEPA, which overfitted at 400 epochs with 94,776 SAR images. How to ensure high-quality feature representation when scaling up SAR foundation models requires continued investigation.
§ LEVERAGING SARATR-X FOR RECOGNITION
We have discussed the different aspects of SARATR-X, but many specific datasets and specialized models exist for SAR ATR. We thus compare our SARATR-X with other state-of-the-art methods, such as supervised learning (CS^nNet <cit.> and PD <cit.>), semi-supervised learning (EUAPS <cit.>), and self-supervised learning (MSFA <cit.> and BIDFC <cit.>). We focus on SAR recognition tasks, including image classification and detection. More detailed settings[We removed downstream tasks' test sets from the pre-training set samples.] are in Appendix <ref>.
Classification task.
In Table <ref> we study the performance of SARATR-X on the MSTAR <cit.> dataset under Standard Operating Conditions (SOC) and Extended Operating Conditions (EOC). We first notice that the SSL methods (BIDFC and ours) and the semi-supervised method (EUAPS), which exploit additional unlabeled samples, significantly outperform other methods with few labeled samples. Our results surpass the previous best by large margins, demonstrating the value of foundation models in an era of rapidly growing SAR data. In particular, our SARATR-X is robust to the EOC setting of imaging condition variations, indicating that the foundation model learns stable features and relationships from the diverse imaging conditions of a large number of target samples.
Detection task.
As illustrated in Table <ref>, we report box AP for SAR target detection with horizontal bounding boxes for multi-category detection (SARDet-100K and OGSOD), ship detection (SSDD), and aircraft detection (SAR-Aircraft). SARATR-X outperforms our previous MSFA by 0.8 points on SARDet-100K. MSFA has a more complex training process and target features: it uses multi-stage training between RGB and SAR images with three different target features and an additional detection pre-training step. SARATR-X is simpler yet more effective for SAR images. More significantly, SARATR-X outperforms or matches many specifically designed detection methods on various datasets in Table <ref>. Of course, our study is only a preliminary exploration of SSL for SAR. More effective target features could be realized in a data-knowledge dual-driven manner by further mining SAR imaging mechanisms and properties. Moreover, with larger datasets and more parameters, the path of foundation models will hopefully lead to general SAR target recognition, a goal that requires the joint effort of the research community.
§ CONCLUSION AND FUTURE PERSPECTIVES
In this paper, we proposed SARATR-X for SAR ATR and systematically investigated a foundation model framework. First, a pre-training dataset was built from 14 open-source datasets, covering various targets, scenes, and sensors. Then, the foundation model's pre-training backbone, SSL methods, and downstream tasks were discussed in detail. Importantly, SARATR-X demonstrated superior performance on different target recognition datasets, demonstrating the foundation model's potential in this field.
We believe that research on SAR foundation models, such as SARATR-X, has the potential for generalized feature representations in SAR images and accelerates progress towards all-day, all-weather target recognition in Earth observation. However, research on foundation models requires large amounts of data, which is a real problem for SAR images: SAR images are expensive, requiring specific imaging equipment and algorithms, and privacy and security concerns prevent much of the data from being opened. Therefore, we are particularly grateful to the publishers of open-source SAR target datasets. By making SARATR-X publicly available, we aim to accelerate progress on foundation models for SAR target recognition by enabling researchers to use our dataset and code to design better methods or explore downstream applications.
Although this work systematically investigated a foundation model framework, several limitations and challenges require exploration in future work. The SAR images are derived from open-source SAR datasets, and the targets are mainly vehicles, ships, aircraft, oil tanks, etc. Collecting target samples from the growing volume of unlabeled SAR imagery could further expand the amount of data and the range of downstream applications. In addition, incorporating expert knowledge via text for multimodal interaction, describing the relationships between targets and scenes, could further enhance the representation capability of the foundation model. Given the visual variability of SAR images, textual descriptions are an effective tool for improving interpretability and comprehensibility.
In conclusion, we have verified SARATR-X's ability to adapt to diverse SAR target datasets, showing high performance and generalizability in classification and detection. By taking full advantage of the rapid growth of SAR imagery, the SSL-based foundation model opens the door to generalized feature representations and SAR target recognition. We believe it is time to embrace foundation models for SAR image interpretation.
§ IMPLEMENTATION DETAILS OF SECTION <REF> EXPERIMENTS OF SARATR-X
Here are the details of the dataset and training settings.
§.§ Pre-traing dataset setting
As shown in Fig. <ref>, we collect data from open-source datasets based on our previous research <cit.>. Our pre-training dataset now contains 14 open-source SAR target datasets. Here are brief descriptions of each dataset's targets, scenes, and sensors.
AIR-SARShip <cit.> is a ship detection dataset based on the Chinese C-band Gaofen-3 satellite. AIR-SARShip-1.0 and AIR-SARShip-2.0 include 318 VV-polarised images with 1 and 3 m resolutions. This dataset includes harbors, islands, and different conditions of sea surfaces and covers thousands of ships.
HRSID <cit.> is a high-resolution dataset for ship detection and instance segmentation based on the European C-band Sentinel-1B, German X-band TerraSAR-X and TanDEM-X satellites. HRSID consists of 5,604 cropped SAR images with 0.5 to 3 m resolutions. The scene is a busy area of maritime transport, such as harbors and estuarine cities, and the annotation targets are ships of different sizes.
Sandia MiniSAR <cit.> is a 0.1 m resolution dataset based on a Ku-band airborne platform released by Sandia National Laboratories. The dataset contains scenes and targets such as aircraft on tarmacs, buildings in urban areas, and vehicles in desert areas but lacks official annotations.
MSAR <cit.> is a multi-class target detection dataset based on the Chinese C-band HISEA-1 satellite in large-scale scenes. MSAR comprises 28,449 image slices with quad polarization and 1 m resolution. Scenes covered include airports, harbors, nearshore, islands, distant seas, and urban areas. The labeled target categories include aircraft, oil tanks, bridges, and ships.
MSTAR <cit.> is the most commonly used target classification dataset released by the Defense Advanced Research Projects Agency, USA. Its sensor is an X-band radar with HH polarization mode and 0.3 m resolution. It contains ten categories of military vehicles with various imaging angles, target variants, and other conditions but with single grass scenes.
OGSOD <cit.> is a city object detection dataset collected from the Chinese C-band Gaofen-3 satellite with VV and VH polarization modes, and its resolution is 3 m. This dataset also contains optical images from Google Earth with 10 m resolution. It is annotated with static objects, including bridges, harbors, and oil tanks in urban areas.
OpenSARShip <cit.> is a ship slices dataset based on the European C-band Sentinel-1 satellite. Its resolution is 2.3 m to 17.4 m with VV and VH polarization. The dataset contains many ship slices from 10 busy ports. It has a diverse range of ship types but a significant category imbalance.
SADD <cit.> is an aircraft detection dataset collected from the German X-band TerraSAR-X satellite. Its resolution is 0.5 m to 3 m with HH polarization. The dataset contains densely parked aircraft of different sizes on airport tarmacs and runways. It has a large number of small-sized planes as well as the airport perimeter area.
SAMPLE <cit.> is a synthetic and measured paired fine-grained vehicle dataset released by the Air Force Research Laboratory, USA. This dataset is simulated in X-band and 0.3 m resolution. The public version provides 5,380 images of ten categories of vehicle targets at partial imaging angles.
SAR-AIRcraft <cit.> is an aircraft detection dataset based on the Chinese C-band Gaofen-3 satellite with 1 m resolution and single polarization. The dataset collects seven types of aircraft of different sizes from three civil airports. It can support fine-grained aircraft detection and classification studies.
SARSim <cit.> is a fine-grained vehicle dataset created by Terma A/S, Denmark. The simulation system used for this dataset can generate X-band SAR images with resolutions ranging from 0.1m to 0.3m from CAD models. SARSim provides 21,168 vehicle samples in 7 categories (truck, car, motorbike, bus, tank, bulldozer, and pickup) and 3 scenes (grass, roads, and a mean of the two) with 7 imaging depression angles.
SAR-Ship <cit.> is a ship target detection dataset in complex scenes based on Chinese Gaofen-3 and European Sentinel-1 satellites. The public version of this dataset contains 39,729 images from two satellites in different imaging modes and resolutions. The dataset provides ship targets of various sizes in complex ocean scenes such as nearshore, distant seas, harbors, and islands.
SIVED <cit.> is a vehicle detection dataset with rotatable bounding boxes. It consists of vehicle slices from the MSTAR dataset <cit.> and vehicles in urban areas from the Sandia MiniSAR and FARAD datasets <cit.>; scenes include car parks, buildings, trees, roads, and others.
SSDD <cit.> is a commonly used SAR ship detection dataset. It is constructed from the Canadian RadarSat-2, German TerraSAR-X, and European Sentinel-1 satellites and contains inshore and offshore scenarios from China and India. The dataset covers various ship sizes under different oceanic conditions with diverse clutter and noise interference.
§.§ Classification dataset for performance Test
We select three target classification datasets, comprising 25 fine-grained targets including vehicles, ships, aircraft, and others, to evaluate the comprehensive performance of SSL and the foundation model for SAR target recognition. The new SAR classification dataset is named SAR-Target in Table <ref>.
MSTAR <cit.> is the most commonly used SAR vehicle dataset. It has many experimental setting variants; following <cit.>, we adopt the most commonly used ten-class classification setting, with classes such as infantry vehicle, patrol car, personnel carrier, main battle tank, and truck.
FUSAR-Ship <cit.> contains 15 primary ship categories and many non-ship targets based on the Gaofen-3 satellite, in scenes such as sea, land, coast, river, and island. Based on the experimental setting of <cit.>, we use ten ocean target types, such as four fine-grained ship classes, bridges, and ocean scene slices.
SAR-ACD <cit.> contains five types of aircraft based on the Gaofen-3 satellite at three civil airports. Since the released dataset does not separate training and test data, we randomly select a subset of samples as the training set and use the others as the test set. Fine-grained recognition of aircraft targets is a more challenging task because the smooth surfaces of aircraft yield less distinctive SAR image features.
§.§ Hyperparameter settings
Here are the detailed settings of our pre-training and downstream tasks, as shown in Tables <ref> and <ref>.
Pre-training.
Our default pre-training setting is given in Table <ref>; the other hyperparameters of each method use the default settings from their papers and codes. Pre-training is performed on 8 NVIDIA RTX3090 GPUs with 200 epochs and a batch size of 800. Compared to the training settings of MAE, we add ColorJitter (contrast=0.5) to increase data richness, and we adjust the batch size and epochs to the 8 GPUs. It is worth noting that although MAE uses normalized pixel values to enhance the feature representation on visible-spectrum images, we find that normalized pixel values cannot be used here: due to SAR image noise, they prevent the training loss from decreasing properly.
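For concreteness, the augmentation pipeline can be written as follows (a sketch; apart from the ColorJitter term stated above, the crop and flip settings are the MAE defaults and are assumptions here):

from torchvision import transforms

pretrain_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0),
                                 interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(contrast=0.5),   # added relative to the MAE recipe
    transforms.ToTensor(),
])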
Classification setting.
All models use the same training settings in the downstream classification tasks; Table <ref> gives the default setting. Our few-shot learning setting is based on the Dassl toolbox <cit.>, and results are averaged over 10 random experiments. Since we focus on the small-sample case of the downstream classification task, we use the linear probing method of MAE to fine-tune models and avoid overfitting.
Partial fine-tuning. Fig. <ref> shows why we chose linear probing for few-shot evaluation: HiViT overfits when many blocks are fine-tuned. Experimental results show that our method consistently obtains better representations than MAE, and overfitting occurs later.
§ IMPLEMENTATION DETAILS OF SECTION <REF> LEVERAGING SARATR-X FOR RECOGNITION
§.§ Dataset description
We choose MSTAR, the most commonly used dataset in SAR target classification. SOC uses similar imaging conditions: the training set's depression angle is 17°, while the test set's is 15°. The ten target categories are BMP2, BRDM2, BTR60, BTR70, T62, T72, 2S1, D7, ZIL131, and ZSU234, as shown in Table <ref>. The EOC setting varies the imaging conditions to test robustness. Existing methods have saturated many MSTAR experimental settings <cit.>, but few-sample cases and depression angle variations remain challenging. We use the SARDet-100K and OGSOD datasets, which have many test samples and categories, to fully evaluate detection performance. The OGSOD comparison results are taken from the original article's <cit.> single-modal approach using only SAR images. SSDD and SAR-Aircraft cover the ship and aircraft categories.
§.§ Hyperparameter settings
Based on our scaling experiments, we use HiViT-B pre-trained for 600 epochs under SSL-ImageNet & SAR as the foundation model for the classification and detection tasks.
The classification setting follows Table <ref>. The only difference is that we use partial fine-tuning for better performance, with the last 6 blocks also fine-tuned.
The detection setting follows the default setting in HiViT, with the learning rate adjusted to 5e-4. We use the same settings when fine-tuning on each dataset; see our GitHub configuration, based on the mmdetection <cit.> framework, for details.
§.§ Detailed results
We provide detailed classification and detection results in Tables <ref> and <ref>. Although the proposed method outperforms existing methods in mAP, there is still scope for improvement in some refined metrics.
|
http://arxiv.org/abs/2405.09860v1 | 20240516073834 | Optimal Switching Networks for Paired-Egress Bell State Analyzer Pools | [
"Marii Koyama",
"Claire Yun",
"Amin Taherkhani",
"Naphan Benchasattabuse",
"Bernard Ousmane Sane",
"Michal Hajdušek",
"Shota Nagayama",
"Rodney Van Meter"
] | quant-ph | [
"quant-ph",
"cs.NI"
] |
Optimal Switching Networks for Paired-Egress Bell State Analyzer Pools
This work was supported by JST [Moonshot R&D Program] Grant Numbers [JPMJMS226C] and [JPMJMS2061].
Marii Koyama5,
Claire Yun 6,
Amin Taherkhani1,
Naphan Benchasattabuse13,
Bernard Ousmane Sane13,
Michal Hajdušek13,
Shota Nagayama41,
Rodney Van Meter53
1Graduate School of Media and Governance, Keio University Shonan Fujisawa Campus, Kanagawa, Japan
3Quantum Computing Center, Keio University, Kanagawa, Japan
4mercari R4D, Mercari, Inc., Tokyo, Japan
5Faculty of Environment and Information Studies, Keio University Shonan Fujisawa Campus, Kanagawa, Japan
6Department of Information Science, College of Agriculture and Life Science, Cornell University, New York, United States
{mia, cly29, amin, whit3z, bernard, michal, rdv}@sfc.wide.ad.jp, {shota}@qitf.org
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
To scale quantum computers to useful levels, we must build networks of quantum computational nodes that can share entanglement for use in distributed forms of quantum algorithms. In one proposed architecture, node-to-node entanglement is created when nodes emit photons entangled with stationary memories, with the photons routed through a switched interconnect to a shared pool of Bell state analyzers (BSAs). Designs that optimize switching circuits will reduce loss and crosstalk, raising entanglement rates and fidelity. We present optimal designs for switched interconnects constrained to planar layouts, appropriate for silicon waveguides and Mach-Zehnder interferometer (MZI) 2 × 2 switch points. The architectures for the optimal designs are scalable and algorithmically structured to pair arbitrary inputs in a rearrangeable, non-blocking way. For pairing N inputs, N(N - 2)/4 switches are required, less than half the number of switches required for full permutation switching networks. An efficient routing algorithm is also presented for each architecture. These designs can also be employed in reverse for entanglement generation using a shared pool of entangled photon-pair sources.
Quantum Network, Fault-Tolerant Quantum Computing, Interconnect Networks, Switched-BSA, Planar Architecture, Photonic Chip, Photonic Switch, Heralded Entanglement Generation
§ INTRODUCTION
The importance of switching of signals has been understood since the earliest days of telecommunications.
The mid-twentieth century saw advances in both the practice and theory of switching for telephone networks, resulting in multi-stage designs such as Clos, Beneš, omega and butterfly networks for electrical signals, coupling small switch units together via discrete wires or coaxial cables <cit.>.
Inspired by these designs, and facing the need to scale up systems, computer architects have built multicomputers, systems with many independent processors and memory units connected via interconnection networks <cit.>.
Multicomputer designs for quantum computers, in which a number of independent quantum computers with separate quantum registers and control systems are coupled via an interconnect network, are widely seen as a necessary architectural approach to achieving scalable, fault-tolerant systems <cit.>.
For multicomputer quantum processing of transferring quantum state (teledata) or performing teleportation of quantum gates (telegate), the ability to generate Bell pairs and deliver them to arbitrary quantum processing units is essential <cit.>.
In addition to the architectural challenges of solving a large scale quantum algorithm using a quantum multicomputer, the cooperative nature of some distributed quantum algorithms over a structured quantum network requires an efficient interconnect between quantum nodes in the network.
To achieve scalability, the unrealistic, abstract model with direct interfaces between every pair of quantum computers or nodes in the network must be replaced with a realistic switch interconnect architecture <cit.> to have reconfigurable paths between arbitrary nodes.
Switching interconnects are also indispensable components of a number of quantum network testbed designs that are planned to be deployed in the near future <cit.>, paving the way to the eventual quantum Internet <cit.>.
The primary service of quantum networks is the distribution of entangled states, usually entangled pairs of qubits.
This departure from the packet-forwarding or circuit-switching nature of classical networks presents a unique set of challenges for the design of quantum switching networks, with the goal of distributing pairs of entangled photons to the terminals.
One approach to quantum switching interconnects assumes the availability of a shared pool of entangled photon-pair sources (EPPS Pool) as pictured in the left panel of Fig. <ref>.
In this architecture, pairs of entangled photons are generated at the EPPS nodes and routed by the optical switch to the appropriate end nodes.
This demonstrates the fundamental difference between classical and quantum switching interconnects.
In the EPPS Pool architecture, initially neighboring inputs (entangled photon pairs) must be switched to an arbitrary pair of outputs.
A non-planar optimal solution to this problem was proposed by Drost et al. <cit.> for cases up to 10×10 quantum switches.
This solution was found via exhaustive search but did not provide a good recursive design that would lead to optimal and scalable quantum switching networks.
Optimal planar and scalable designs for full permutation N× N switching networks are over-designed for the problem of pair matching as they require the ability to route all N! input-output combinations.
The ability to route photons to arbitrary BSAs, including arbitrary choice of the BSA ports, is not required for successful execution of entanglement swapping between desired pairs of photons.
We consider the inverse problem to the EPPS Pool architecture, shown in the right panel of Fig. <ref>.
Photons originating from the quantum network are inputs into the quantum switch, which routes the desired pairs of photons to be incident onto the same Bell State Analyzer (BSA).
The pair of photons then undergoes a Bell-state measurement, leading to entanglement between the respective end nodes.
The unique aspect of this BSA Pool architecture is that which BSA is used to perform entanglement swapping is irrelevant.
We propose three recursive designs for a planar N× N quantum switch composed of a number of 2×2 switch points.
We obtain a lower bound for the number of switch points required for the quantum switch to be rearrangeably non-blocking, and demonstrate that all three of our designs saturate this lower bound.
For each design, we present an efficient routing algorithm.
We further analyze the depth of the three designs, and the average number of switch points that a photon traverses in order to better understand their loss properties.
Finally, we compare our planar designs with existing planar <cit.> and non-planar solutions for quantum <cit.> as well as classical switches <cit.>, and demonstrate favourable scaling properties of our designs.
§ PRELIMINARIES
We begin by describing the basics of optical switching networks and how quantum networks differ from classical ones.
We then proceed to summarize fabrication factors that place constraints on our design.
Finally, we discuss the assumptions used in this work before ending this section with the problem statement.
§.§ Classical and quantum optical switching networks
An optical switching network has a number of important characteristics that should be considered while designing a switching configuration:
* Size: the number of input and output ports
* Blocking/non-blocking: whether the network can handle all possible input/output combinations
* Switching time: the reconfiguration time for the network
* Propagation delay: the time needed for photons to cross the network
* Insertion loss: the probability of losing photons when crossing the physical interface from the channel to the switching element, typically involving a fiber/air boundary or a fiber/chip interface (generally reported in dB)
* Switching loss: the probability of losing photons within the switching element (generally reported in dB)
* Crosstalk: the leakage of signal to undesired transmission paths (generally reported in dB)
* Redundancy: whether or not connectivity is degraded if a switch point or link fails
* Physical dimensions: especially important when considering integration into the system
* Cost: often reported as per-port cost for large configurations
The above list is largely common between classical and quantum switching networks. In a classical network, within reason, loss can be compensated for by increasing input signal power.
In the systems we are designing (Fig. <ref>), the networks instead carry single photons.
Loss becomes the most critical metric and increases as the photons pass through channels, interfaces and switch points.
Polarization may be critical in quantum networks (depending on choice of qubit representation), and it can be changed by each component. If the network itself can be built with enough stability that its effect on polarization can be characterized at infrequent intervals, network operation will be more efficient <cit.>. Moreover, in physical design, self-calibration can also be achieved by adding auxiliary optical components to the photonic core <cit.>.
Most of the above characteristics depend on the choice of physical fabrication technology, but from the architectural design point of view, the number of input ports, total number of switches, and circuit depth indirectly affect propagation delay, insertion loss, switching loss, crosstalk, physical dimensions and cost.
§.§ Fabrication Considerations
For optical systems, we can build switching systems based on photonic integrated circuit (PIC) technology <cit.>, or via free-space propagation of light, e.g. using micro electro-mechanical system (MEMS) switches <cit.>.
PICs have relatively high loss per centimeter of waveguide, but have the advantage of fewer fiber/air or fiber/chip interfaces that must be crossed compared to designs that use discrete components for each switch point.
In this paper, we focus on integrated circuits.
Photolithographic fabrication for waveguides <cit.> or photonic crystals <cit.> results in planar layouts.
Two waveguides can be run close to each other, resulting in 2× 2 switch units based on Mach-Zehnder interferometric techniques, allowing two photons to be routed straight through or swapped under programmatic control (the bar and cross states, respectively).
Grids of these basic units take sets of inputs from one edge of a chip and route them to a set of outputs at the opposite edge of the chip.
Promising platforms for fabrication of integrated photonic circuit such as silicon on insulator <cit.>, silica on silicon <cit.> and stoichiometric silicon nitride (Si_3N_4) <cit.> have different fabrication complexity, photon loss and index contrast with different supported wavelengths.
These layouts are generally kept planar due to the difficulty of routing a waveguide off the substrate surface without significant loss <cit.>.
Therefore, we focus exclusively on planar switch designs in this work.
§.§ Assumptions
Following the discussion in Sec. <ref> and Sec. <ref>, we now outline the assumptions used in the rest of the paper.
The switch waveguides carry individual photons generated from quantum memories or entangled photon pair sources.
BSAs are located outside the switch, with each BSA being attached to two neighboring output ports of the switch.
Photons are to be matched in pairs and routed to any available BSA (paired egress).
Use of the switch occurs in independent rounds, between which the switch may be reconfigured.
The design must be planar and non-blocking for all possible choices of input pairings.
Switch points are based on 2 × 2 switching elements compatible with basic building blocks in photonic integrated circuits.
Photons are guided single-pass only, from one side of the switch to another, and without any recirculation.
Issues of achieving indistinguishability between the input photons, such as polarization maintenance, spectral properties, and time of arrival of the photons at the BSA, are out of scope for this work.
Based on the above assumptions, we can define the problem to be addressed.
§.§ Problem statement
Consider an N× N switching network with N input ports, with photons X_0,X_1,… X_N-1 where photon X_i comes in at the i-th port, and N output ports coupled to N/2 BSAs.
A paired egress request is represented as a tuple (X_i,X_j), where 0 ≤ i < j < N, and we call the set of all required pairings PL_N/2={(X_i,X_j)}, where i and j each appear exactly once.
We want to find a scalable, planar and rearrangeably non-blocking topology for a switching network composed of base 2× 2 switch points, and a corresponding efficient routing algorithm capable of handling an arbitrary pair list PL_N/2.
The routing algorithm must provide the state of every switch point in the network, denoted by SW^l_i ∈ {bar, cross}, where SW^l_i represents the switch at layer l between lines i and i+1,
and a permuted list of photons where the two photons at index 2j and 2j+1 of the list go into the same BSA (the BSA_j) after exiting the switch.
The concept of a layer will be made more precise when we introduce our designs.
§ OPTIMAL NUMBER OF SWITCH POINTS
In order to show that our proposed switching networks are optimal, we consider the minimum number of switch points required to pair all N input photons.
It is known that a classical planar N× N switching network requires at least N(N-1)/2 switch points to achieve all input-output permutations <cit.>.
We adapt the techniques used in <cit.> for the case of paired-egress switching networks.
In order to obtain the minimum number of switch points for the network to be rearrangeably non-blocking, we consider the worst-case scenario, in which the two most distant input photons are always required to be paired together.
The list of pairings is therefore given by (X_0,X_N-1), (X_1,X_N-2), (X_2,X_N-3), and so on.
Bringing the input photons X_0 and X_N-1 together means that they must be swapped past all the other N-2 input photons, requiring N-2 switch points.
This is true regardless of which BSA is assigned to perform entanglement swapping on the input photons X_0 and X_N-1.
A sample path and its required swaps are shown in the lower portion of the right panel of Fig. <ref>.
Bringing the next pair of photons, (X_1,X_N-2), together requires N-4 swaps.
Continuing with this logic, the minimum number of swaps, and therefore switch points, is given by
∑_k=1^N/2-1 (N -2k) = N(N-2)/4.
We observe that the minimum number of switch points for a planar N× N rearrangeably non-blocking network with paired egresses is less than half of that obtained in <cit.> for a classical switching network.
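As a sanity check, the closed form of the sum above can be verified numerically; the following sketch assumes an even number of ports N.

    # Quick check that the worst-case swap count matches the closed form N(N-2)/4.
    for N in range(4, 33, 2):  # even numbers of ports
        assert sum(N - 2 * k for k in range(1, N // 2)) == N * (N - 2) // 4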
§ TRIANGULAR DESIGN
We begin with the simplest planar configuration for a rearrangeable non-blocking switching network with the minimum number of switch points. From this point forward in the paper, all designs are assumed to be paired egress, so we dispense with the qualifier.
§.§ Architecture
The smallest non-trivial switching network is a 4×4 switch, shown in Fig. <ref>(a), and requires at least 2 switch points.
By changing the state of the switch points, it is possible to achieve all possible input photon pairings.
For example, leaving both switch points in the bar state results in entanglement swapping between input photon pairs (0,1) and (2,3).
Setting both switch points to the cross state instead routes the input photons in such a way that entanglement swapping is performed on the pairs (1,2) and (0,3).
This 4×4 switch forms the basic building block for the triangular switch architecture, as depicted in Fig. <ref>(b).
For N input photons, the switch is composed of N/2-1 layers, where layer k contains SW^k=2k switch points.
The total number of switch points is therefore given by
SW_triangular = ∑_k=1^N/2-1 SW^k = N(N-2)/4.
We see that the triangular architecture is optimal in terms of the number of switch points.
The switch is constructed recursively by adding all switch points within a layer in a cascaded fashion.
For layer k, the switch points are arranged in the following way,
SW^k_0 → SW^k_1 →…→ SW^k_2k-1,
where SW^l_i represents the switch at layer l between lines i and i+1, as shown in Fig. <ref>(b) for the case of N = 12.
§.§ Routing
Showing that a switch design is rearrangeably non-blocking amounts to demonstrating that given a pair list PL_N/2, it is always possible to find a configuration of all switch points that permutes the input photon list such that all photon pairs are adjacent to each other.
The routing algorithm strongly depends on the design of the switch.
For the triangular design, it is relatively straightforward and is presented in Algorithm <ref>.
The input to the routing algorithm is given by the input photon list (X_0,…,X_N-1), and the pair list PL_N/2.
The main strategy, similar to the bubble sort algorithm, is to start with photon X_N-1 since it is not incident onto any switch points and cannot be routed.
We find its partner X_j, such that (X_j,X_N-1) ∈ PL_N/2, and configure the switch points in layer N/2-1 in order for the pair to meet on lines (N-2, N-1).
This is achieved by setting all switch points SW^N/2-1_k, for j ≤ k < N-2, to the cross state.
All other switch points in the layer, SW^N/2-1_k where 0 ≤ k < j, are set to the bar state.
The effect of this configuration leaves the ordering of photons X_0,…,X_j-1 unaffected, cascades the photon X_j down to line N-2, and shifts all remaining photons up by one unit.
This process is next repeated for the next layer with input size N-2, and the new list of photons (X_0,…,X_j-1, X_j+1,…, X_N-2, X_j, X_N-1) until the input size becomes 2, reaching the trivial case.
For each layer, we traverse the photon list of length 2l+2 once and assign the switch states of 2l switches.
Therefore, the time complexity of this routing algorithm is O(N^2).
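The following Python sketch implements this bubble-sort-like procedure; the photon labels, pair representation, and encoding of the bar and cross states are illustrative choices rather than those of a reference implementation.

    BAR, CROSS = 0, 1  # illustrative encoding of the two switch states

    def route_triangular(photons, pairs):
        """Sketch of the triangular routing: compute all switch states.

        photons: list of N distinct labels; pairs: list of (a, b) tuples
        covering every photon exactly once.
        """
        N = len(photons)
        partner = {}
        for a, b in pairs:
            partner[a], partner[b] = b, a
        order = list(photons)
        switches = {}  # switches[l][i] holds the state of SW^l_i
        for l in range(N // 2 - 1, 0, -1):
            width = 2 * l + 2              # layer l acts on lines 0 .. 2l+1
            last = order[width - 1]        # photon on the lowest active line
            j = order.index(partner[last])
            layer = [BAR] * (2 * l)
            for k in range(j, 2 * l):
                layer[k] = CROSS           # cascade the partner down to line 2l
            switches[l] = layer
            # apply the permutation: lines j+1..2l shift up, partner lands on 2l
            order = (order[:j] + order[j + 1:width - 1]
                     + [order[j], last] + order[width:])
        return switches, order

For N = 4 with pairs (0,3) and (1,2), the sketch sets both switch points to cross and returns the permuted list (1, 2, 0, 3), matching the example above.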
§ CHEVRON DESIGN
Our second switch design redistributes the switch points in a more uniform manner.
This change leads to a more complicated, yet still efficient, routing algorithm.
§.§ Architecture
The chevron design is pictured in Fig. <ref> for the case of N=12.
Similar to the triangular design, the chevron design consists of N/2-1 layers with layer k consisting of SW^k=2k switch points, resulting in the optimal number of total switch points N(N-2)/4.
However, the switch points are laid out in a chevron, rather than a single diagonal arrangement.
For layer l, with l being even, half of the switch points in that layer are placed in the upper half of the chevron, with the rest of the switch points being placed in the bottom half.
More precisely, the switch points in the upper half of a layer are placed as follows,
SW^l_N/2-l-1→ SW^l_N/2-l→…→ SW^l_N/2-2.
The switch points in the bottom half of the layer mirror the above arrangement,
SW^l_N/2+l-1→ SW^l_N/2+l-2→…→ SW^l_N/2.
For odd layers, the placement of the switch points is similar with the exception that the switch point SW^l_N/2 is removed and a new switch point SW^l_N/2-1 is placed at the tip of the chevron, as shown in Fig. <ref>.
§.§ Routing
Algorithm <ref> illustrates the procedure for determining switch status and the permuted list in the routing process.
The routing algorithm for the chevron operates on the principle that if all photon pairs entering the last layer are adjacent to each other, except for two adjacent photons requiring pairing with the top- and bottom-most photons, the last chevron layer can handle their pairing.
Alternatively, if all photons are already paired except for the top- and bottom-most photons, the algorithm can address this scenario as well.
This approach is valid as we recursively go from layer N/2-1 to N/2-2, reducing the size of the switch from N × N to (N-2) × (N-2) by virtually pairing the photons that need to be matched with the top- and bottom-most photons, until reaching the trivial 2 × 2 input.
This guarantees that at every recursion, all the photons will have a pairing within the considered range, either true or virtual pairings.
Determining where the pairs meet and which switch states to set at layer l involves examining two different scenarios.
First, consider the case where the top- and bottom-most photons form a pair, while the other photon pairs (either true or virtual) are already adjacent.
In this scenario, all switch points in layer l are set to cross, causing the two outer photons to meet at the middle, on lines (N/2, N/2+1) or (N/2-1, N/2) when l is odd or even, respectively.
However, if the top- and bottom-most photons need to pair with two virtual pairings from the previous layer (on lines i and i+1), and the virtual pair is oriented incorrectly, the switch point SW_i^l is set to cross.
Consequently, the two outer photons adjust to meet their partners, and if the pair resides in a different half (e.g., the top-most's partner is in the bottom half), its partner also moves to meet the outer qubit at the middle.
This results in only one switch point, aside from possibly SW_i^l, being set to bar (either SW_i^l or SW_i+1^l, depending on where the pair must meet), while the remaining switch points are set to cross.
All the possible scenarios and how the photons are moved are depicted in Fig. <ref>.
The runtime of the routing algorithm for the chevron architecture is O(N^2).
In each recursion, we go through the photon list only once to find partners of the top- and the bottom-most photons and all the switch states in the layer can then be decided just by knowing where the top- and bottom-most photons need to be moved to.
Routing algorithm for chevron design
Input: Photons: indexable list (X_0, X_1, X_2, …, X_N-1),
PL: set of photons to be paired {(X_i, X_j) | 0 ≤ i, j ≤ N-1},
SW: set of switch states (initially null set),
left: indicating the start switch line considered in the recursion
Output: permuted list: π((X_0, X_1, …, X_N-1)),
Set of switch states { SW_k^l }
§ BRICKWORK DESIGN
Our final design rearranges the switch points further into a brickwork pattern as shown in Fig. <ref>.
§.§ Architecture
This time, an N× N switch consists of N/2 layers.
There are three types of layers; odd, even, and the last layer.
Unlike previous designs, the number of switch points inside a layer is independent of the layer number k.
Odd layers contain N/2-1 switch points, while even layers contain N/2 switch points.
The brickwork design is given by the following switch point placement,
SW^k_1 → SW^k_3→…→ SW^k_N-3; for odd k,
SW^k_0 → SW^k_2→…→ SW^k_N-2; for even k.
The exception to this rule is the final layer that contains ⌊ N/4 ⌋ switch points that are placed according to (<ref>).
The total number of switch points is therefore given by
SW = ∑_odd k SW^k + ∑_even k SW^k = N(N-2)/4,
which shows that the brickwork design is also optimal in terms of the number of switch points.
The brickwork design is similar to a full mesh structure of Spanke and Beneš in <cit.>, with one difference.
The full mesh architecture requires N layers to perform N! full permutations, while in the brickwork design N/2 layers are sufficient for pairing an arbitrary input set.
§.§ Routing
The routing algorithm for pairing all the input photons in the brickwork design is shown in Algorithm <ref>.
The underlying concept shares similarities with the Triangular design, but the process of removing switches is more intricate.
First, we identify the partner photon X_i of photon X_N-1 and progressively shift it downward to meet with X_N-1.
If the lowest line X_i can reach is line j, where j ≠ N-2, we also shift photon X_N-1 up to line j+1.
This method ensures that, upon removing all utilized switch points and wire segments traversed by X_i and X_N-1, the resulting structure maintains a brickwork architecture, potentially with additional switch points in the last and second-to-last layers, provided that X_i is shifted using the earliest switch points it encounters while X_N-1 is shifted as late as possible.
These extra switch points can be configured to the bar state, allowing for further recursion of the same routing algorithm until we reach the trivial photon pairings.
Inputs: Photons: indexable list (X_0, X_1, X_2, …, X_N-1),
PL: set of tuple photons to be paired {(X_i,X_j) | 0 ≤ i,j ≤ N - 1},
SW: set of switch states (initially null set)
Outputs: Permuted list: π((X_0, X_1, …, X_N-1)),
Set of switch states SW^l_k
A sample routing round for pairing two photons (X_11 with X_2) in a 12 × 12 brickwork switch is shown in Fig. <ref>.
The distance between X_2 and X_11 exceeds N/2 + 1, making it impossible for the pair to meet on lines (10, 11); both photons must therefore move to pair up on the closest lines to X_11, which are (8, 9).
Applying a move-as-soon-as-possible routing policy for X_2 and a move-as-late-as-possible routing policy for X_11 results in the desired permuted list (X_0,X_1,X_3,..,X_8,X_2,X_11,X_9,X_10).
We have now assigned nine switch states, leaving the remaining switch points in a brickwork design of smaller size, with SW_0^6 being an extra switch. Therefore, as shown in the top-left side of Fig. <ref>, we set it to the bar state.
These two move policies guarantee that pairing the last unpaired photon with its partner leaves the rest of the switch in the brickwork design, thus allowing the smaller brickwork to be routed in the same way.
Similar to routing algorithms in triangular and chevron designs, we traverse the photon list once per iteration, thus the runtime of the routing algorithm in brickwork design is also O(N^2).
§ DISCUSSION
Having demonstrated that our three proposed planar designs are optimal and rearrangeable non-blocking, we now turn to a more detailed analysis of their depth properties.
The depth is directly related to the expected losses of the switching network.
We are not aware of any work considering planar designs for paired-egress outputs, making direct comparison impossible.
To place our work into the larger context of other switching network designs, we consider existing planar and non-planar designs, as well as classical permutation networks and shared EPPS pool networks.
§.§ Switching network depth
We first focus on the maximum depth, which quantifies the maximum number of switch points that a photon has to traverse.
For the Triangular design in Sec. <ref>, the maximum depth is N-2.
For the Chevron design in Sec. <ref>, the maximum depth is N-2 when N/2 is even, N-3 when N/2 is odd.
The Brickwork design in Sec. <ref> reduces the maximum depth to N/2.
The maximum depth of the Triangular and Chevron designs is close to the optimal design for classical planar N× N permutation networks, shown to be N-1 in <cit.>.
The brickwork design on the other hand requires far fewer switch point traversals in the worst case scenario, reducing the overall loss in the switch.
In an ideal pairing mechanism, an arbitrary pair of input photons should traverse the same number of switch points in the same states.
This is generally not true in real configurations, leading to an imbalance between the number of switch points that each photon of a pair passes through.
In order to quantify this imbalance, we introduce the depth difference Δ, defined intuitively as the difference between the maximum and the minimum depths of the switch.
It is straightforward to see from Fig. <ref> that the minimum depth for the Triangular design is 0.
Therefore Δ_triangular = N-2, which is also the maximum depth.
For the Chevron design, the maximum depth depends on the parity of N/2.
When N/2 is odd, a photon traverses at most N-3 switch points, while its pairing partner needs to pass through at least N/2-3 switch points, giving Δ_chevron=N/2.
The same depth difference is obtained for the case when N/2 is even. In this case, the maximum and minimum depths are given by N-2 and N/2-2, respectively.
In the Brickwork design, the maximum depth is N/2, while the minimum depth is ⌈ N/4 - 1 ⌉, resulting in Δ_brickwork = ⌊ N/4 + 1 ⌋.
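These closed forms are summarized in the following sketch, assuming even N as required by the designs.

    def depth_stats(N, design):
        """Maximum depth and depth difference, from the closed forms above."""
        if design == "triangular":
            return N - 2, N - 2
        if design == "chevron":
            dmax = N - 2 if (N // 2) % 2 == 0 else N - 3
            return dmax, N // 2
        if design == "brickwork":
            return N // 2, N // 4 + 1
        raise ValueError(design)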
Out of the three proposed switches, the Brickwork design is the most balanced thanks to its lowest depth difference.
This means that the photons traversing the switch experience comparable losses.
The opposite is true for the Triangular design.
Whether this is an undesirable effect ultimately depends on the network traffic.
For quantum networks with fairly uniform traffic patterns, the Triangular design results in an uneven distribution of end-to-end Bell pair generation rates, leading to a decrease in the quality of service for certain connections.
On the other hand, we can envision scenarios where this imbalance may be a welcome feature.
The input ports with few switch points and their respective BSAs can be reserved for high-demand connections.
For example, the optical switch may be a component of a quantum gateway connecting two independent networks.
The low-loss input ports can be used for internetwork connections, while the more lossy input ports can be used for entanglement distribution within a single network.
§.§ Comparison with other designs
In this section, we compare the designs in terms of the number of switches with other related designs.
Fig. <ref> illustrates the maximum depth for our proposed designs and those analyzed in <cit.>.
Given that our switches are designed for paired-egress BSA pools, it is not surprising that their maximum depth is always lower than that of the optimal full permutation switch of <cit.>.
Fig. <ref> shows the minimum number of required switch points in related planar and non-planar designs for up to 16 input ports.
The focus of this paper has been planar designs suitable for chip fabrication, but of course non-planar designs can simply be “flattened” into planar form.
In this case, we have to consider the fixed crosspoints, which can be viewed as planar switch points with permanent states.
Drost et al.'s work proposing designs for small-scale non-planar switching networks is the work most comparable to our own, but does not address planar designs and does not include a scalable design <cit.>. Fig. <ref> includes data on planarized Drost designs, counting switches and non-switching crosspoints up to 10 ports, the largest design they found. More detailed follow-on design work needs to consider the non-planar design and analyze the photon loss based on the number of interfaces required for coupling and decoupling of input photons toward 2 × 2 switches.
Table <ref> compares our proposed planar designs for paired egress to other scalable designs. To the best of our knowledge, no scalable structure for input pairing has been found before this work. Therefore we compare to some important full permutation switch networks. As the table shows, the key result of our paper is the >50% reduction in switch points compared to Spanke and Beneš's optimal planar design for full permutation switching networks <cit.>.
§ CONCLUSION
We have shown that for an N× N switching network with paired-egress BSA Pools, the lower bound on the number of switching points in a planar architecture is N(N-2)/4.
We proposed three rearrangeable non-blocking designs that saturate this lower bound, along with their corresponding efficient routing algorithms.
Due to their recursive construction, our designs can be scaled to arbitrary size.
Our switch designs can be reversed in a straightforward manner.
The BSAs can be replaced with EPPS nodes, and the routing algorithms then distribute the entangled photon pairs to the desired outputs.
Therefore, our solution is directly applicable to the shared EPPS Pools switching problem considered in <cit.>.
The shared BSA Pool architecture was recently used as an integral component in a proposal for distribution of remote entanglement between neutral ytterbium atoms coupled to optical cavities <cit.>.
This demonstrates the relevance of optical switches with paired-egress BSA Pools, and the role they are expected to play in distributed quantum computing and quantum networking.
The performance of distributed systems crucially depends on how effectively we can harness the power of connected computers. This effectiveness largely relies on the performance of their network systems and architecture. Ideally, the network should always be capable of processing communication requests from computers; any delay waiting for network responses leads to decreased overall performance. Our rearrangeable non-blocking design ensures that the inputs and outputs of switches operate without stalling, achieving multiplexed parallel communication processing among different input-output pairs. Therefore, our optical switch designs prevent contention for communication from being a system-wide bottleneck. Thereby our work is essential for large-scale distributed quantum computers and the quantum Internet.
§ ACKNOWLEDGMENT
The authors would like to thank Takao Tomono, Rikizo Ikuta, and Alto Osada for their help and fruitful discussions.
|
http://arxiv.org/abs/2405.10119v1 | 20240516141544 | Applications of Quantum Machine Learning for Quantitative Finance | [
"Piotr Mironowicz",
"Akshata Shenoy H.",
"Antonio Mandarino",
"A. Ege Yilmaz",
"Thomas Ankenbrand"
] | quant-ph | [
"quant-ph"
] |
piotr.mironowicz@gmail.com
0000-0003-4122-5372
Department of Algorithms and Systems Modelling, Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology
Narutowicza 11/12, Gdańsk, 80-233
Poland
Department of Physics, Stockholm University
Roslagstullsbacken 21, Stockholm, 114 21
Sweden
International Centre for Theory of Quantum Technologies, University of Gdańsk
Jana Bażyńskiego 1A, Gdańsk, 80-309
Poland
International Centre for Theory of Quantum Technologies, University of Gdańsk
Jana Bażyńskiego 1A, Gdańsk, 80-309
Poland
0000-0003-3745-5204
antonio.mandarino.work@gmail.com
International Centre for Theory of Quantum Technologies, University of Gdańsk
Jana Bażyńskiego 1A, Gdańsk, 80-309
Poland
Department of Physics Aldo Pontremoli, University of Milan
Via Celoria 16, 20133 Milan
Italy
Hochschule Luzern, Institut für Finanzdienstleistungen Zug IFZ
Suurstoffi 1, 6343 Rotkreuz
Switzerland
Hochschule Luzern, Institut für Finanzdienstleistungen Zug IFZ
Suurstoffi 1, 6343 Rotkreuz
Switzerland
Machine learning and quantum machine learning (QML) have gained significant importance, as they offer powerful tools for tackling complex computational problems across various domains.
This work gives an extensive overview of QML's uses in quantitative finance, an important discipline in the financial industry.
We examine the connection between quantum computing and machine learning in financial applications, spanning a range of use cases including fraud detection, underwriting, Value-at-Risk, stock market prediction, portfolio optimization, and option pricing, by surveying the corpus of literature concerning various financial subdomains.
[500]General and reference Surveys and overviews
[500]Hardware Quantum computation
[500]Applied computing Physics
[500]Computing methodologies Machine learning
[500]Applied computing Economics
Applications of Quantum Machine Learning for Quantitative Finance
Thomas Ankenbrand
May 20, 2024
=================================================================
§ INTRODUCTION
In recent years, the convergence of quantum computing (QC) and machine learning (ML) has sparked significant interest across various domains, with finance being no exception. Quantum machine learning (QML) is a branch of ML that harnesses the principles of quantum mechanics to process and analyze data more efficiently. By leveraging the unique properties of quantum systems, such as superposition and entanglement, QML techniques have the potential to improve traditional methodologies in finance. These techniques promise to enhance predictive capabilities in areas such as portfolio optimization, market prediction, trading, pricing, and risk management. As quantum computers continue to evolve and become more accessible, the integration of QML into finance applications is expected to grow.
This review paper delves into this burgeoning field of QML in finance, offering insights into its diverse technologies, applications, and potential implications. The paper is structured into several sections, each focusing on different facets of this intersection. Firstly, in sec. <ref>, we explore various quantitative finance use cases where QML techniques can be applied, including portfolio optimization, market prediction and trading, pricing, and risk management. Subsequently, we delve into the underlying algorithms and ML methodologies that form the basis for these applications.
Following this, the paper delves into the realm of QC and algorithms, providing a foundational understanding of QC principles and covering essential topics such as quantum gates, qubits, quantum circuits, and quantum algorithms.
The crucial part of the review is the comprehensive literature review given in sec. <ref>, which examines the existing body of research on various aspects of QML in finance. Within this part, we explore specific applications, including portfolio optimization in sec. <ref>, market prediction and trading strategies in sec. <ref>, pricing in sec. <ref>, risk management in sec. <ref>, shedding light on the current state of the art and potential future directions. By providing a holistic overview of the field, this review aims to stimulate further research and facilitate advancements at the intersection of QC, ML, and quantitative finance.
While our review concentrates on QML, there exist other reviews concerning QC in finance more broadly <cit.>.
§ QUANTITATIVE FINANCE USE CASES
This section provides an overview of some prominent applications in quantitative finance, including portfolio optimization in <ref>, market prediction and trading in <ref>, pricing in <ref>, and risk management in <ref>. Each of these areas plays a vital role in finance, and can at best benefit from the use of quantum computing.
§.§ Portfolio Optimization
The science of building an investment portfolio to maximize returns while lowering risk is known as portfolio optimization. To achieve the best possible balance between risk and reward, a complex process of carefully choosing a wide variety of assets, including stocks, bonds, and other financial instruments, is involved. A key tenet of this strategy is diversification, which distributes assets across a variety of (uncorrelated or low correlated) asset classes and industries, thereby lowering the portfolio's total risk. <cit.>
The Sharpe Ratio, developed by William F. Sharpe <cit.>, is a commonly used metric for assessing the risk-adjusted return of an investment or portfolio. It is computed by dividing the excess return of the investment (or portfolio) relative to the risk-free rate by the standard deviation of the investment's returns. The formula for the Sharpe Ratio is as follows:
r_Sharpe≡R_p - R_f/σ_p,
where R_p is the expected portfolio return, R_f is the risk-free rate, and σ_p is the standard deviation of portfolio returns. A higher Sharpe Ratio indicates a better risk-adjusted return, as it shows how much excess return an investor receives for the extra volatility taken on compared to a risk-free asset. Thus, the Sharpe Ratio plays a crucial role in helping investors make informed decisions about their investments <cit.>.
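A minimal sketch of this computation from a series of periodic portfolio returns (annualization and other reporting conventions are omitted for brevity):

    import numpy as np

    def sharpe_ratio(portfolio_returns, risk_free_rate):
        """Sample Sharpe ratio: mean excess return over return volatility."""
        r = np.asarray(portfolio_returns)
        return (r.mean() - risk_free_rate) / r.std(ddof=1)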
An all-encompassing approach to portfolio optimization ought to take crucial elements into account such as transaction costs, liquidity, and constraints set by the investor. For example, the ease with which an asset can be purchased or sold without significantly affecting its price is referred to as liquidity. Transaction costs comprise a range of expenses related to the purchase and sale of shares, such as taxes and brokerage fees. Furthermore, restrictions might be applied, limiting particular investment kinds or setting particular risk levels based on investor preferences. There are several approaches to achieving an optimal portfolio, each with special characteristics, including risk parity, factor-based models, and mean-variance optimization.
The so-called mean-variance optimization was pioneered by Harry Markowitz in the 1950s <cit.>. This approach entails estimating the expected return and risk (volatility) of each asset based on historical data. Subsequently, the investor constructs what is termed as an efficient frontier, encompassing all possible combinations of assets that offer the highest expected return for a given level of risk. To be more precise, this frontier comprises portfolios that meet the unique condition of having no other portfolio with a higher expected return given the same standard deviation of return. The optimal portfolio, in turn, finds its place along this efficient frontier, and its identification relies on determining the combination of assets that delivers the highest return for the investor's specific level of risk tolerance <cit.>. Implementing the Markowitz mean-variance theory entails estimating the means and covariances of asset returns. Typically, these estimations involve using a sample mean vector and a sample covariance matrix <cit.>.
There are two main types of portfolio optimization problems: unconstrained and constrained <cit.>. The main difference between unconstrained and constrained portfolio optimization problems is the presence of constraints on certain weights parameters in the latter. An unconstrained portfolio optimization problem is a mathematical problem in which the goal is to find the optimal portfolio weights that will maximize or minimize a certain objective function, such as the expected return or the variance of the portfolio, without any constraints on the weights. This means that the weights can take on any value, including negative values, usually meaning a short position on that asset, as long as they result in the optimal portfolio <cit.>. The constraints in constrained portfolio optimization problem can take many forms, such as limiting the maximum weight of any individual asset, requiring that the weights sum to 1 (to ensure that the portfolio is fully invested), or requiring that the portfolio meet certain regulatory requirements <cit.>. Constrained portfolio optimization is used more often in practice since it allows for realistic and practical investment scenarios. For example, it may not be realistic to allow for negative weights and to invest more than 100% of the portfolio in a single asset.
An example of portfolio optimization of n assets is the following problem:
minimize q · x^⊤ Q x - μ^⊤ x
over x ∈{ 0,1 }^n
subject to 1^⊤ x = B,
where q > 0 reflects the risk appetite of the decision maker, Q is an n by n real matrix specifying covariances between asset returns, μ is a vector with the expected return for each of the assets, and B is the number of assets to be selected that can be interpreted as a budget. The pivotal operation entails computing the inverse of the covariance matrix and solving the associated quadratic optimization problem.
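For small n, the binary problem above can be solved by exhaustive search; the sketch below is illustrative only, since realistic instances call for quadratic programming or quantum-inspired solvers.

    import itertools
    import numpy as np

    def best_portfolio(mu, Q, q, B):
        """Exhaustively minimize q * x^T Q x - mu^T x over x in {0,1}^n, sum(x) = B."""
        n = len(mu)
        best_x, best_val = None, np.inf
        for picks in itertools.combinations(range(n), B):
            x = np.zeros(n)
            x[list(picks)] = 1.0
            val = q * x @ Q @ x - mu @ x
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val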
Factor-based models present another popular approach to portfolio construction. These models are utilized by investors to estimate the riskiness and relationships between securities in a portfolio. These models take into account various factors, including value, size, momentum, and quality. By incorporating these factors into the optimization process, investors aim to capture additional explanations of return beyond those explained by traditional asset classes.
An alternative approach to portfolio optimization is provided by risk parity <cit.>, whose main objective is to distribute risk equally among diverse asset classes.
Online portfolio optimization is a dynamic approach to investment management that takes into account new information as it becomes available. This is in contrast to traditional portfolio optimization, which assumes that all relevant information is known at the time the portfolio is created. Online portfolio optimization algorithms can adjust the portfolio weights in real-time, allowing for more efficient and effective risk management <cit.>.
Portfolio rebalancing is the process of realigning the weightings of a portfolio of investments. It involves periodically buying or selling assets in the portfolio to maintain a desired asset allocation.
§.§ Market Prediction and Trading
Market prediction refers to the process of forecasting future trends and movements in financial markets. It involves analyzing various factors such as historical data, economic indicators, market sentiment, and other relevant information to make predictions about the direction of prices, asset values, and overall market conditions. Market prediction approaches include fundamental analysis, technical analysis, quantitative modeling, sentiment analysis, and microstructure analysis <cit.>.
Fundamental analysis involves examining the underlying factors that influence the value of an asset or market, considering economic indicators, industry trends, company financials, and other relevant information to assess the intrinsic value of an asset <cit.>. Technical analysis focuses on studying historical price patterns and market trends to predict future price movements, with the aid of tools such as statistical models, indicators, and charts, used to identify patterns that suggest potential selling or buying opportunities <cit.>. Quantitative modeling involves using mathematical models and statistical techniques to analyze datasets and identify patterns or relationships between variables, sometimes with the support of AI methods <cit.>. Sentiment analysis is a technique that aims to gauge market sentiment or investor emotions by analyzing news articles, social media posts, and other sources of information, in order to assess whether investors are optimistic or pessimistic about the market <cit.>.
Microstructure analysis delves into the intricate details of how orders are executed, prices are formed, and liquidity is provided in financial markets <cit.>. By studying the dynamics of order flow, market impact, and price discovery processes, quantitative analysts can gain insights into market behavior and optimize their trading strategies accordingly. Understanding market microstructure is crucial for developing effective trading algorithms that can navigate the complexities of modern financial markets <cit.>.
Algorithmic trading involves the use of computer algorithms to execute trading strategies at a speed and frequency that is often impossible for human traders <cit.>. This approach relies on quantitative models to identify trading opportunities based on various factors such as price movements, volume, and market data. Algorithms can be designed to execute trades automatically based on predefined criteria, allowing for rapid decision-making and execution in the financial markets <cit.>.
Market prediction is inherently uncertain and subject to various risks and limitations, as financial markets are influenced by a multitude of factors, including economic conditions, geopolitical events, investor behavior, and unforeseen events, which can lead to unexpected market movements that deviate from predictions. To improve the accuracy of stock market prediction, neural networks (NNs) have been employed <cit.>.
§.§ Pricing
Options are financial derivative contracts that give the right, without imposing an obligation, to either purchase, known as a call option, or sell, known as a put option, an underlying asset at an agreed-upon price called the strike, within a specified timeframe known as the exercise window <cit.>. There are several types of options based on their exercise characteristics and settlement terms. The major types include:
* American options, which can be exercised at any time before expiration, making them more flexible than European options;
* European options, which can only be exercised at expiration, which simplifies their pricing compared to American options;
* Bermudan options, which can be exercised at a set (always discretely spaced) number of times, making them intermediate between American and European options;
* Asian options, or average value options, which have a payoff based on the average price of the underlying asset over a specific period rather than just the spot price at expiration;
* Barrier options, which come into existence or cease to exist when the underlying asset’s price reaches a predetermined barrier level <cit.>.
Optimal stopping theory deals with determining the optimal moment to "stop" or take an action to maximize the expected reward <cit.>. For instance, American options can be seen as a super-martingale hedging problem for the seller and a stochastic optimal stopping problem for the buyer <cit.>.
Because the characteristics that define an option are stochastic, it is often difficult to determine its fair value; for this reason, option pricing often requires the use of numerical methods. Because of its adaptability and efficient handling of stochastic parameters, Monte Carlo simulation is one of the most popular techniques <cit.>. Monte Carlo simulation is used to simulate possible future outcomes by generating random samples and is particularly useful for complex option pricing problems with multiple sources of uncertainty <cit.>. Nonetheless, for complex options Monte Carlo methods usually require significant computational resources to provide accurate option price estimates <cit.>. Scientific research fields in option pricing concentrate on refining current models, creating new pricing techniques, investigating alternative market dynamics hypotheses, and advancing options trading risk management strategies. The goal of stochastic volatility models <cit.> is to increase option pricing accuracy <cit.> and better represent the dynamics of volatility in financial markets. Jump diffusion models incorporate sudden jumps in asset prices into the modeling framework to account for extreme market events that impact option prices <cit.>.
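As a concrete illustration, a plain Monte Carlo estimator for a European call under geometric Brownian motion might look as follows; this is a minimal sketch in which variance-reduction techniques and path-dependent payoffs are omitted.

    import numpy as np

    def mc_european_call(S0, K, T, r, sigma, n_paths=100_000, seed=1):
        """Risk-neutral Monte Carlo price of a European call under GBM."""
        rng = np.random.default_rng(seed)
        Z = rng.standard_normal(n_paths)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
        payoff = np.maximum(ST - K, 0.0)  # call payoff max(S_T - K, 0)
        return np.exp(-r * T) * payoff.mean()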
One of the most well-known methods for option pricing is the Black-Scholes model. By taking into account variables such as the price of the underlying asset, the strike price, the time to expiration, the risk-free rate, and volatility, it offers an estimate of the value of European-style options <cit.>.
The model assumes that the stock price follows a lognormal distribution and stock returns a normal distribution; there are no transaction costs or taxes; the risk-free interest rate is constant; the option can only be exercised at expiration; and there are no arbitrage opportunities or dividends <cit.>.
Let C(S_t,t) and P(S_t,t) denote the call and put option prices at time t given the stock price S_t, let K be the strike price of the option, T the time of maturity of the option, r the risk-free interest rate, and N(x) the cumulative distribution function of the standard normal distribution.
The model introduces the following formulae:
C(S_t,t) = S_t N(d_1) - Ke^-r(T-t) N(d_2),
P(S_t,t) = Ke^-r(T-t) N(-d_2) - S_t N(-d_1),
where
d_1 = ln(S_t/K) + (r + σ^2/2)(T - t)/σ√(T - t),
d_2 = d_1 - σ√(T - t).
The former equation (<ref>) is for a European call option, and the latter (<ref>) is for a European put option.
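These formulas translate directly into code; the sketch below uses the standard library's normal CDF.

    from math import exp, log, sqrt
    from statistics import NormalDist

    def black_scholes(S, K, T, t, r, sigma):
        """European call and put prices from the formulas above."""
        tau = T - t
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
        d2 = d1 - sigma * sqrt(tau)
        N = NormalDist().cdf
        call = S * N(d1) - K * exp(-r * tau) * N(d2)
        put = K * exp(-r * tau) * N(-d2) - S * N(-d1)
        return call, put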
A more generic numerical method for the valuation of options is the binomial options pricing model. It is a lattice-based model utilizing discretized time to represent the fluctuating price of the underlying financial instrument. This approach is useful in situations where the closed-form Black-Scholes formula is not feasible. It can be applied to both American and Bermudan options, since it models the instrument's price over a period of time rather than at a single point. Although it requires more computing power than the Black-Scholes model, it is often more accurate. It was formalized in 1979 by Cox, Ross, and Rubinstein (CRR) and, separately, by Rendleman and Bartter <cit.>.
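A compact sketch of the CRR lattice follows; the choice of an American put and the step count are illustrative assumptions.

    import numpy as np

    def crr_american_put(S0, K, T, r, sigma, steps=200):
        """CRR binomial tree for an American put (backward induction sketch)."""
        dt = T / steps
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
        disc = np.exp(-r * dt)
        # terminal stock prices (highest to lowest) and payoffs
        ST = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
        V = np.maximum(K - ST, 0.0)
        for n in range(steps - 1, -1, -1):
            ST = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
            # early exercise vs. discounted continuation value
            V = np.maximum(K - ST, disc * (p * V[:-1] + (1 - p) * V[1:]))
        return V[0]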
The Heath-Jarrow-Morton (HJM) framework is a mathematical model to price interest rate derivatives <cit.>. HJM and its variants are described by the following stochastic differential equation:
df(t,T) = α(t,T) dt + σ(t,T) dW(t)
where T denotes maturity, i.e. the time at which the final payment is due on a financial instrument. df(t,T) represents the instantaneous forward interest rate of a zero-coupon bond, i.e. a bond that does not make periodic interest payments but profits the investor through the difference between the purchase price and the face value, with maturity T. The parameters α(t,T) and σ(t,T) describe the drift, i.e. the systematic tendency of the stochastic process, and the diffusion, i.e. random fluctuations over time, respectively. dW(t) denotes a Brownian motion under the risk-neutral assumption <cit.>.
§.§ Risk Management
Value-at-Risk (VaR) is a widely used measure in risk management to quantify the potential loss on an asset or portfolio over a specific time horizon and with a certain confidence level. Calculating VaR requires selecting a time horizon (such as one day or one week) and a confidence level α (such as 95% or 99%). The potential loss that could be exceeded with the selected confidence level over the given time horizon is then estimated using past data or statistical models <cit.>. If the probability of suffering a loss smaller than VaR is at least p, and hence the probability of suffering a loss larger than VaR is at most 1-p, we speak of the p VaR. A loss that is greater than the VaR threshold is called a VaR breach <cit.>.
Let us consider a portfolio with K assets, where 𝕃≡ (L_1, ⋯ ,L_K) with L_i ∈ℝ_+ denote possible losses associated with the relevant asset. The total loss is denoted as ℒ≡∑_i ∈ [K] L_i, and its expected value is 𝔼[ℒ] = ∑_i ∈ [K]𝔼[L_i]. Let α∈ [0,1] be the confidence level. VaR is then defined in the following way:
VaR_α[ℒ] ≡inf_p ∈ℝ_+{ℙ[ℒ≤ p] ≥α}.
Conditional Value at Risk (CVaR), also known as expected shortfall, is the anticipated loss for losses exceeding VaR. CVaR is particularly responsive to extreme events occurring in the tail of the loss distribution.
The economic capital requirement (ECR) <cit.> is a critical risk metric representing the capital needed to maintain solvency at a given confidence level. It is defined as the difference between VaR and the expected loss:
ECR_α[ℒ] ≡VaR_α[ℒ] - 𝔼[ℒ].
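The three risk measures just defined can be estimated by historical simulation; the sketch below uses a synthetic lognormal loss sample as a stand-in for real portfolio loss data.

```python
# A sketch of historical-simulation estimates of VaR, CVaR and ECR at
# confidence level alpha, using synthetic losses in place of real data.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)  # stand-in loss data

alpha = 0.99
var = np.quantile(losses, alpha)        # VaR_alpha: alpha-quantile of losses
cvar = losses[losses > var].mean()      # CVaR: mean loss beyond VaR
ecr = var - losses.mean()               # ECR = VaR - expected loss

print(f"VaR_{alpha}: {var:.3f}, CVaR: {cvar:.3f}, ECR: {ecr:.3f}")
```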
A pertinent issue is estimating the probability that debtors will repay their loans, a crucial quantitative matter for banks. Financial institutions typically aim to gauge the creditworthiness of debtors by categorizing them into classes known as credit ratings. These institutions have the option to develop their own credit rating model or rely on credit ratings provided by major rating agencies. Borrowers are commonly divided into two primary categories based on their creditworthiness: investment-grade borrowers with low credit risk and sub-investment-grade borrowers with higher credit risk. When a borrower's rating declines from investment to sub-investment grade, they are referred to as fallen angels <cit.>.
Underwriting is a critical process in the insurance industry where an insurer assesses the risk associated with insuring a particular individual or entity and determines the terms and conditions of the insurance policy, by analyzing various factors such as health history, lifestyle choices, occupation, and more to determine the likelihood of a claim being made <cit.>.
The Local Outlier Factor (LOF) algorithm is a widely studied unsupervised anomaly detection (AD) method, frequently employed due to its effectiveness. It operates through three key steps: determining the k-distance neighborhood of each data point x; computing the local reachability density of x; and calculating the local outlier factor of x to determine its abnormality <cit.>. It should be noted, however, that the LOF algorithm can become computationally expensive, especially when dealing with large datasets.
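As a usage sketch (with synthetic data in place of real transactions), scikit-learn's LocalOutlierFactor implements the three steps above:

```python
# A minimal LOF usage sketch on synthetic data; points for which
# fit_predict returns -1 are flagged as outliers.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_inliers = rng.normal(0.0, 1.0, size=(200, 2))
X_outliers = rng.uniform(-6.0, 6.0, size=(10, 2))
X = np.vstack([X_inliers, X_outliers])

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(X)              # -1 for anomalies, 1 for inliers
scores = -lof.negative_outlier_factor_   # larger score = more anomalous
print(f"flagged {np.sum(labels == -1)} points as outliers")
```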
Fraud Detection (FD), a particular case of AD is a critical area of focus to safeguard financial markets, institutions, and investors from fraudulent activities <cit.>. Utilizing advanced data analytics and ML algorithms is crucial for detecting anomalies and patterns indicative of fraudulent activities in financial transactions <cit.>. Understanding the behavioral patterns of individuals or entities involved in financial transactions is essential for fraud detection <cit.>. Network analysis involves examining the relationships and connections between different entities in the financial system to uncover potential fraud schemes <cit.>.
We refer to <cit.> for a review of ML methods in fraud detection.
§ MACHINE LEARNING
Machine Learning (ML) is an approach to developing algorithms and statistical models that learn from data without explicit programming <cit.>. Such algorithms are data-driven and hence adaptable to new inputs, improving their performance over time. This makes them invaluable when the existence of a suitable algorithm is uncertain but abundant data is available. In the financial sector, ML has made a significant impact, from algorithmic trading to fraud detection, though its influence has worked both ways <cit.>. The general steps involved in training an ML algorithm (fig.(<ref>)) are described as follows.
* Formulation of the problem to be solved and collection of data.
* Preprocessing the data <cit.> for managing the missing values, normalization, dimension reduction using feature selection or feature extraction techniques. In the case of supervised learning, the processed dataset is typically split into training and testing sets.
* A suitable model is selected and trained to predict the target outcomes while optimizing its parameters and performance. The accuracy of the model is assessed with suitable metrics, the model is evaluated on unseen data, and further tuning of the hyperparameters is carried out.
* The model is implemented in real-world scenarios with regular maintenance to ensure it is up-to-date.
Generally, a dataset comprises multiple data points, each characterized by one or more attributes known as features <cit.>. The total number of features is the dimension of the data. High-dimensional datasets increase the computational complexity of training a model, a phenomenon known as the “Curse of Dimensionality”. This raises the risk of overfitting and makes the training process inefficient. As a result, dimension reduction techniques, namely feature selection (FS) and feature extraction (FE), emerge as key pre-processing steps for addressing these issues <cit.>.
FS reduces the dimensionality of a dataset by identifying and retaining the most relevant features that contribute to the learning process. Data is not merely discarded but methodically selected to expedite training and enhance the interpretability of a model. Effective FS relies on three criteria <cit.>: Relevance, the importance of a feature in making accurate predictions; Redundancy, the elimination of features that contribute no additional information; and Diversity, which ensures that the selected features provide a comprehensive representation of the unprocessed data needed to address the task. However, if a small subset is selected from a very diverse collection of features, the risk of information loss persists. FS techniques comprise: (i) the computationally light Filter methods, which rely on the statistical relation between a feature and the target outcome without any dependency on the ML algorithm, e.g. correlation coefficient scores and the Chi-Square test; (ii) the computationally expensive Wrapper methods, which create a subset of features based on the ML model and achieve the best FS with respect to it, e.g. forward selection, backward elimination and recursive feature selection; and (iii) Embedded methods, which integrate FS into the training process of the model itself and perform better than the other two, e.g. LASSO and RIDGE regression.
While FS chooses a subset of features from the original data, FE transforms these features into new ones while retaining all the relevant information. Often creating a subspace of lower dimension, the process removes correlations between features <cit.>. FE methods include: (i) Principal component analysis, where new variables known as principal components are created as linear combinations of the original feature set, with each principal component uncorrelated with the others; its steps include covariance matrix calculation, eigenvalue and eigenvector computation, and sorting. (ii) Linear discriminant analysis finds a linear combination of features that maximizes the separation between different classes or groups in the dataset, which is useful in classification tasks; its steps include within-class and between-class scatter matrix calculation and eigenvalue and eigenvector computation. (iii) t-distributed Stochastic Neighbor Embedding is a non-linear technique primarily used in exploratory data analysis; it maps high-dimensional datasets to a lower-dimensional space, preserving the similarity relationships between data points, via probability distribution calculation, similarity calculation and gradient-descent optimization.
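As a brief illustration of FE, the sketch below projects synthetic correlated data onto its first two principal components with scikit-learn; the data and dimensions are illustrative.

```python
# A short feature-extraction sketch: projecting synthetic 10-dimensional
# data onto its first two principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
X[:, 0] = 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)  # induce correlation

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)          # 500 x 2 matrix of components
print("explained variance ratio:", pca.explained_variance_ratio_)
```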
Finally, we discuss the generalization of a model: its ability to make accurate predictions on unseen data that was not used during training but originates from the same distribution <cit.>. A generalized model should ideally recognize irregularities and complexities present in such data without overfitting; patterns and structures identified in the training dataset must be applied accurately to this unseen data. A simpler model is preferred over a complex one when both perform equally well. Accuracy, precision and recall are metrics commonly used to analyze the performance of a generalized model. In general, the complexity of the model, the quality of the training dataset and the regularization techniques employed all affect the generalization process.
ML can be broadly classified into supervised learning, unsupervised learning and reinforcement learning, based on the algorithms employed for learning. The algorithms are further characterized in terms of their interpretability and explainability, which help us understand why some models outperform others <cit.>. Interpretability refers to the cause-and-effect relationships that give rise to a particular model and its performance; it determines a model's predictive capability in the event that its input parameters change, owing to its implementation in realistic scenarios. The higher the interpretability, the better the understanding of how a model arrives at its predictions. This is especially critical for financial transactions, where the stakes associated with decisions are greater. For instance, a model's incorrect decision to block a legitimate credit card transaction could have serious consequences in case of an emergency. When a model lacks interpretability, it becomes necessary to employ additional methods to make its decisions comprehensible, leading us to the concept of model explainability. Here, the focus is on elucidating the model's behavior, even if the inner workings driving its predictive powers are not entirely clear. As ML continues to evolve, balancing the performance of models with a clear understanding of their decision-making capability is necessary.
§.§ Supervised Learning
Supervised learning (SL) algorithms generate a function that maps inputs to desired outputs using labelled datasets, whose labels form a finite set of classes that serves as the basis for model training. An SL model learns from the labelled dataset to associate the correct output with the corresponding input via mathematical analysis <cit.>. SL can be divided into classification and regression algorithms. A classification model maps its inputs to predetermined discrete outputs; the most common example is classifying emails into `spam' or `not spam' classes. A regression model maps an input to a continuous or real-valued output; for instance, the value of a house may vary depending on its construction year, area and location. We now briefly discuss some SL algorithms.
Linear Regression (LR) predicts outcomes using a set of optimal coefficients. The key assumptions are that the relationship between the input and output variables is linear (linearity), that both variables are error-free, and that the input variables are independent of each other (absence of multi-collinearity) and normalized (normalization). For correlated inputs LR tends to overfit, while normalization ensures better predictions. A general LR model can be expressed as ℳ_β_0β_1(X) = β_0+β_1 X, where X is the input variable, β_1 is a d-dimensional vector of coefficients and β_0 is a real number known as the bias coefficient or intercept. The model ℳ_β_0β_1(X) is used to predict the output variable Y (Y⟵ℳ_β_0β_1(X)); different coefficients (β_0,β_1) and (β'_0,β'_1) lead to two different models ℳ_β_0β_1(X) and ℳ_β'_0β'_1(X) predicting Y and Y' respectively. Such models are very useful in predicting the financial and economic performance of companies and banks <cit.>, as well as stock market movements and trading <cit.>. Though an LR model is simple and easy to interpret, it is not always suitable for classification problems since it predicts continuous-valued outcomes. A more appropriate approach in that case is logistic regression, where the output is binary.
Logistic Regression is useful for categorical output variables. An input x is assigned to a particular class y or not, corresponding to P(y=1|x) (positive decision) or P(y=0|x) (negative decision), based on the sigmoid function σ(x)= 1/(1+e^-x). Here, the relationship between the input and the output is not linear. The choice between linear and logistic regression depends on the nature of the target outcome. Logistic regression is well suited for binary classification in high-risk scenarios <cit.> where precise financial decisions must be made.
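A minimal sketch of binary classification with logistic regression on a toy dataset follows; the dataset is synthetic, and predict_proba returns P(y=1|x) through the sigmoid of the linear score.

```python
# A sketch of binary classification with logistic regression on
# synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]   # P(y=1|x) for each test point
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```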
Decision Trees are models with a tree-like structure for performing decision tasks <cit.>. They consist of nodes, branches and leaves. The base node, known as the root, has no incoming edges. Internal nodes have outgoing edges and represent a test on the features; each branch indicates an outcome of this test, while the leaves hold a class or a continuous value. Starting from the root of the tree, data is split into different nodes according to certain criteria until the final outcome is reached. Decision trees can be of categorical or regression type. They are easy to understand, do not require data normalization, and are very useful in business decision-making tasks <cit.>. Major disadvantages of these algorithms are the risk of overfitting and model instability under small variations in the input data.
Support Vector Machines (SVMs) are helpful in linear and nonlinear classification problems <cit.>. In this formalism, an optimal hyperplane that clearly separates two classes of data is constructed using support vectors, i.e. the data points closest to the hyperplane. A hyperplane is mathematically represented as β_0+β_1 · X=0, where β_0 is the bias and β_1 is a p-dimensional vector; the data points are {(x_i,y_i)}^n_i=1, with n the sample size and y_i=± 1 depending on the class to which each data point belongs. For instance, in the binary classification problem <cit.>, the aim is to construct a hyperplane that divides the dataset into two classes: all points with y_i=1 (-1) belong to the first (second) class and lie on one (the other) side of the hyperplane. Although SVMs are designed for linear classification, they can also handle non-linear cases with the help of kernel functions, which map non-linear datasets into a higher-dimensional linear space that can then be separated into classes by a hyperplane. Commonly used kernel functions are polynomial functions, radial basis functions and sigmoid functions. SVMs are suitable for small and medium-sized data-mining datasets and have been extensively applied to financial forecasting.
Support Vector Regression (SVR) is an extension of SVMs used for predicting continuous values rather than for classification tasks.
Suppose {(x_i,y_i)}^n_i=1 is the available training data set <cit.>; the basic idea of SVR is to find a function f(x) that predicts the target values with no more than ϵ deviation from the actual values. Instead of a best separating hyperplane, one looks for a best fit within an ϵ-threshold. Represented as a tubular region around the regression line, the ϵ-region encompasses the error points that are not penalized; any point lying outside this region is penalized. In addition to respecting the ϵ-threshold, the function f(x) should be as flat as possible, and the trade-off between ϵ-deviations and flatness is captured by the parameter C. As with SVMs, the input dataset can be transformed into a higher-dimensional space using common kernel functions. The most common use cases of this formalism in finance include stock market exchange and currency conversion <cit.>.
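The following sketch fits an RBF-kernel SVR to noisy synthetic data; the values of C and epsilon, which control the flatness/deviation trade-off and the width of the insensitive band, are illustrative.

```python
# A sketch of support vector regression with an RBF kernel on synthetic data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0.0, 5.0, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)  # noisy target

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
y_pred = model.predict(X)
print(f"fit RMSE: {np.sqrt(np.mean((y - y_pred) ** 2)):.3f}")
```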
Neural Networks (NNs) perform computations using artificial neurons, i.e. many interconnected nodes that mimic the human brain and its functioning. An NN comprises an input layer for receiving the input data, an output layer for producing the targeted output, and, depending on the complexity of the problem, several intermediate hidden layers for computation and FE <cit.>. Neurons in each layer process the inputs they receive and pass their outputs to the next layer for further processing. Associated with each neuron is an activation function (also known as the transfer function), which defines the learning ability of the neuron: it indicates whether the neuron is to be activated for predicting the targeted outcomes. Typical examples of activation functions are the sigmoid and the rectified linear unit. The importance of each neuron is determined by its weight, bias and activation function. Upon obtaining the final output from the network in a single run, the network re-calculates these parameters for each neuron via a feedback process to minimize error propagation and predict accurate results; a stable NN is obtained after iterating over several forward and backward propagations. NNs are broadly classified into convolutional NNs, recurrent NNs and deep NNs. Though primarily employed for pattern recognition, speech recognition and natural language processing, these networks can also be used for classification problems similar to logistic regression; for instance, a sigmoid function can be employed to find a separation between classes via hyperplanes (the perceptron algorithm). They find application in option trading and financial forecasting.
Finally, with respect to classification tasks, we point out some metrics useful for visualizing the performance of a model <cit.>. The receiver operating characteristic (ROC) curve measures the performance of a classification model over various threshold settings: it shows the true positive rate plotted against the false positive rate at each threshold. The true positive rate, also known as sensitivity or recall, is the ratio of true positives to the sum of true positives and false negatives, and quantifies the model's ability to correctly identify positive instances; the false positive rate is the ratio of false positives to the sum of false positives and true negatives. The area under the ROC curve (AUC) summarizes the performance across all threshold settings: AUC varies from 0 to 1, indicating that the model's predictions are completely incorrect or completely correct, respectively. It is worth noting that AUC captures the predictive capability of a model irrespective of the classification threshold. Furthermore, it is noteworthy to mention radial basis functions for SVMs <cit.> and NNs <cit.>: these are radially symmetric real-valued functions of the distance of an input from a chosen fixed point, and are especially useful for introducing non-linearity into regression and classification models.
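As a short usage sketch, the ROC curve and AUC for synthetic classifier scores can be computed with scikit-learn as follows; the labels and scores are illustrative.

```python
# A sketch of ROC/AUC evaluation for a binary classifier's scores;
# roc_curve returns the (FPR, TPR) pairs traced out over all thresholds.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=500)
# Noisy scores that are informative about the label
y_score = y_true + rng.normal(scale=0.8, size=500)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```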
§.§ Unsupervised Learning
Unsupervised learning (USL) is applicable in ML for uncategorized datasets where the data inputs do not have corresponding labelled outputs. USL algorithms discover the inherent structure in the dataset by identifying latent variables and patterns without explicit feedback or supervision, in contrast to SL. USL algorithms can be classified into clustering and Association algorithms.
Clustering algorithm is a fundamental USL technique where data points are organized into distinct groups (clusters) based on their similarities and differences <cit.>. Clustering algorithms are versatile and can be categorized as follows: Exclusive Clustering where each data point belongs to exactly one cluster. E.g. K-Means clustering assigns data points to the nearest cluster center. In Overlapping Clustering a data point can belong to one or multiple clusters. E.g. Fuzzy C-Means where data points have degrees of belonging to various clusters. Hierarchical Clustering builds a hierarchy of clusters, either agglomeratively (bottom-up) or divisively (top-down) continuously merging and reorganizing them during the training process. E.g. Nested structures in datasets. Probabilistic Clustering assigns data points to clusters based on their probability of belonging assuming different probability distributions. E.g. Gaussian mixture models.
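As a minimal example of exclusive clustering, the sketch below runs k-means on synthetic blobs with scikit-learn; the number of clusters is assumed known here.

```python
# A minimal exclusive-clustering sketch: k-means assigns each point to the
# nearest of three learned cluster centers.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster centers:\n", km.cluster_centers_)
print("first ten labels:", km.labels_[:10])
```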
Association Algorithms find correlations, patterns or relationships between data points within large datasets. Itemsets, i.e. combinations of one or more data points that appear in the dataset and meet a minimum support threshold, are identified; support refers to how often a data point occurs in the dataset. Association rules (If-Then statements) are then built based on how often these itemsets occur. Market basket analysis <cit.>, where patterns in customer purchases are identified and analyzed, is an apt example. Commonly known association algorithms are the Apriori algorithm, which employs a bottom-up approach to mining frequent itemsets, and FP-Growth, which uses a tree-like structure. These algorithms are important for credit risk assessment, where correlations between various financial behaviors and credit risk are identified.
A significant aspect of USL is generative modeling, which focuses on understanding and capturing the internal structure of data. Generative models learn the joint probability distribution of the input data, as opposed to discriminative models that focus only on the conditional probability distribution. In the context of finance <cit.>, models like Generative Adversarial Networks (GANs), consisting of a generator and a discriminator, produce new, artificial data samples that mimic real data. This is invaluable in realistic financial scenarios for augmenting datasets where real data is scarce or sensitive. Similarly, Variational Autoencoders are powerful tools for feature learning and dimensionality reduction: they can reconstruct input data after compressing it into a lower-dimensional subspace, which is useful for identifying key features in complex financial datasets <cit.>. Other considerations in USL are anomaly detection, dimensionality reduction and data preprocessing, as mentioned before.
§.§ Reinforcement Learning
Consider a context in which a learner is expected to make decisions based on observations in order to solve a specific problem; such learners are called agents in RL. Just as the decisions of the agent depend on observations, or states, coming from an environment, the observations in turn depend on the decisions, or actions, of the agent. For the vast majority of problems of practical interest, a priori knowledge of the complete specification of the environment is not available <cit.>, which makes classical optimization methods and supervised learning techniques inapplicable. Instead, this type of information can be revealed by interacting with the environment and developing strategies, or policies, based on the responses from the environment. These responses comprise the state of the environment and a reward signal based on the actions taken by the agent. The process of taking an action based on an observation and receiving a response as a result is called a transition (see Figure <ref>). RL tasks can be episodic, where transitions lead to a terminal state at a certain point and the task is restarted, or continuous, where this is not the case.
Some reinforcement learning algorithms are mentioned briefly here. Q-Learning is an approach aimed at developing an optimal action policy for any Markov decision process. It is an off-policy method: the behavior policy may take random, exploratory actions, while the learned target policy acts greedily with respect to the estimated values. The algorithm uses a Q-function Q(s,a) to estimate the value of an action a taken in a particular state s; this helps the agent take further actions that maximize the cumulative reward, sometimes at the cost of sacrificing immediate gains. Q(s,a) is updated at each step based on the actions taken, the rewards assigned and the learning rate <cit.>, in a process known as value iteration. The agent develops an optimal policy by balancing exploration, i.e. taking random actions to discover the environment, and exploitation, i.e. choosing actions currently believed to maximize reward. Q-Learning is simple to implement and works in a model-free setting, though it cannot handle large, complex or continuous state spaces; in such cases Deep Q-Learning, which combines Q-Learning with deep learning, is preferred <cit.>. The state-action-reward-state-action (SARSA) algorithm is another important technique; it is on-policy and hence, in contrast to Q-Learning, evaluates the actions actually taken by the behavior policy. A policy gradient algorithm instead learns the policy that maps each state directly to a particular action, a process known as policy iteration. Noteworthy hybrid algorithms that use both value-based and policy-based approaches are Soft Actor-Critic, Twin Delayed Deep Deterministic Policy Gradients and Deep Deterministic Policy Gradients <cit.>.
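A sketch of the tabular Q-learning update is given below; `env` is a hypothetical environment interface (reset and step returning next state, reward and a done flag) introduced purely for illustration.

```python
# A sketch of the tabular Q-learning update for a toy episodic task;
# `env` is a hypothetical interface: env.reset() -> state,
# env.step(action) -> (next_state, reward, done).
import numpy as np

def q_learning_episode(env, Q, alpha=0.1, gamma=0.99, epsilon=0.1, rng=None):
    rng = rng or np.random.default_rng()
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy: explore with probability epsilon, else exploit
        if rng.random() < epsilon:
            action = int(rng.integers(Q.shape[1]))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = env.step(action)
        # Q-learning update: off-policy bootstrap from the greedy next value
        Q[state, action] += alpha * (
            reward + gamma * np.max(Q[next_state]) - Q[state, action]
        )
        state = next_state
    return Q
```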
§ QUANTUM COMPUTING AND ALGORITHMS
We will now cover the basics of QC and QML. Interested readers seeking a comprehensive introduction to QC can refer to the seminal work <cit.> and a more recent one <cit.>, and for QML to <cit.>.
§.§ Basics of Quantum Computing Methods
Classical computers employ logic gates to execute classical algorithms; analogously, a circuit-based quantum computer utilizes quantum gates to run quantum algorithms on quantum bits, known as qubits.
The mathematical description of a qubit is a vector in a two-dimensional Hilbert space ℋ≃ℂ^2. Dirac notation is the standard way to denote a qubit; namely, a general state can be expressed as:
|ψ_1⟩ = α|0⟩ + β|1⟩,
in which we have introduced the complex amplitudes α and β, which fulfill the normalization condition |α|^2 + |β|^2 = 1.
The two orthonormal states |0⟩ and |1⟩ form the computational basis for one qubit; conventionally, they are the eigenstates of the third Pauli matrix, such that σ_3 |k⟩ = (-1)^k |k⟩.
In particular, the identity and the set of the three Pauli matrices
σ_0 = 𝕀 = [ 1 0; 0 1 ], σ_1 = σ_x = [ 0 1; 1 0 ], σ_2 = σ_y = [ 0 -i; i 0 ], σ_3 = σ_z = [ 1 0; 0 -1 ],
constitute a basis in which any observable acting on a qubit can be expanded.
The linear combination of states in Eq.(<ref>) is known as a superposition of the two basis states, and the squared moduli of the amplitudes give the probabilities of detecting the corresponding states. Namely, there is a probability |α|^2 (|β|^2) that the state |ψ_1⟩ collapses to |0⟩ (|1⟩) after a projective measurement.
Similarly, we introduce a register of N qubits
|ψ_N⟩ = ∑_i_1, ..., i_N = 0, 1α_i_1, ..., i_N|i_1, ..., i_N⟩,
for which the normalization condition is ∑_i_1, ..., i_N = 0, 1 |α_i_1, ..., i_N|^2=1.
Any logic operation on such a register is performed via a quantum gate, formally expressed as a unitary matrix U such that U^† U = U U^† = 𝕀; up to a global phase, it therefore belongs to the 𝐒𝐔(2^N) Lie group.
Through the set of operators defined in (<ref>), one can define the most relevant class of single and two-qubit parametric operations, respectively:
R_α(θ) = e^- i θ/2σ_α and C_αα(θ) = e^- i θ/2σ_α⊗σ_α,
where ⊗ denotes the tensor product <cit.>.
They are generally known as rotation and controlled gates and are used to build variational, or parametrized, quantum circuits (PQCs), thanks to the possibility of adjusting the angles θ defining each gate.
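As a small numerical illustration, the sketch below builds the rotation R_y(θ) as a numpy matrix, applies it to |0⟩, and reads off the Born-rule measurement probabilities; the angle is illustrative.

```python
# A numpy sketch of a parametric rotation gate acting on |0>: R_y(theta)
# prepares a superposition whose outcome probabilities depend on theta.
import numpy as np

def ry(theta):
    # R_y(theta) = exp(-i theta/2 sigma_y), which is real-valued
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

ket0 = np.array([1.0, 0.0])
psi = ry(np.pi / 3) @ ket0            # cos(pi/6)|0> + sin(pi/6)|1>
probs = np.abs(psi) ** 2              # Born-rule outcome probabilities
print(f"P(0) = {probs[0]:.3f}, P(1) = {probs[1]:.3f}")
```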
Despite significant advancements in designing quantum gates to operate on qubits, a considerable gap still persists between the theoretical results achieved in quantum algorithms and their practical implementation on quantum processing units. In fact, at the present stage, computing systems and simulators remain distant from showcasing any form of quantum advantage on problems arising outside purely academic interests that could impact our daily existence. This gap can be attributed in part to the current era of noisy intermediate-scale quantum (NISQ) technologies <cit.>. Quantum processing units are constrained by a limited number of qubits and significant levels of noise, including decoherence processes that spoil the quantum resources required for the desired speed-up <cit.>.
Techniques to mitigate errors and eventually correct them, such as surface codes and the engineering of logical qubits, have been proposed <cit.>. Here the main approach is to map a set of physical qubits into a logical qubit, often utilizing specialized network-like circuits for quantum processors composed of logical qubits. However, since this mapping is not unique, questions arise about the optimal embedding and about the scalability of such a procedure across different computing architectures.
§.§ Basic building blocks of hybrid algorithms
A hybrid quantum-classical algorithm combines elements of both quantum and classical computation to solve computational problems more efficiently than classical methods alone. These algorithms leverage the strengths of quantum computing, such as superposition and entanglement, alongside classical techniques to achieve better performance. A Quantum Processing Unit (QPU) is a hardware device designed to perform quantum computations. It consists of qubits, the fundamental units of quantum information, and is used to execute quantum algorithms. The Quantum Approximate Optimization Algorithm (QAOA) <cit.> is a quantum algorithm designed to solve combinatorial optimization problems by preparing a quantum state that encodes the solution to the problem and then measuring it to obtain an approximate solution. The Variational Quantum Eigensolver (VQE) <cit.> is a quantum algorithm used to find the ground state energy of a given Hamiltonian by optimizing the parameters of a parameterized quantum circuit. A Parameterized Quantum Circuit (PQC) is a quantum circuit with adjustable parameters that can be optimized to solve specific computational tasks. Finally, a Quantum Support Vector Machine (QSVM) <cit.> is a QML algorithm used for classification tasks, leveraging QC principles to perform efficient classification of data points into different classes.
Quantum annealing <cit.> (QA) is a computational technique that leverages quantum mechanical principles to solve optimization problems. In QA, a quantum system is initialized in a simple, known state and gradually evolved towards a low-energy state that represents the solution to the optimization problem. Forward annealing refers to the process of gradually reducing the quantum fluctuations from an initial high value to a low value, allowing the system to explore different configurations and settle into the ground state. Reverse annealing <cit.>, on the other hand, starts from a candidate classical solution and partially re-introduces quantum fluctuations to explore nearby configurations before annealing back to the desired state. Simulated quantum annealing (SQA) <cit.>, also known as quantum-inspired annealing, mimics the behavior of QA using classical computing resources, often by simulating the behavior of a quantum system undergoing annealing. While not as powerful as true QA, the simulated variant can still solve optimization problems more efficiently than classical methods in certain cases.
The quantum phase estimation algorithm is a technique for estimating the phase that corresponds to an eigenvalue of a given unitary operator <cit.>.
Grover provided an algorithm for unstructured search in <cit.>.
A quantum algorithm for minimum search, which locates the index y in a table T of size N such that T[y] is minimal with probability at least 1-1/2^c and time complexity O(c √(N)), was given in <cit.>.
The Grover searching technique was expanded upon by Quantum Amplitude Amplification and Estimation (QAE), first presented in <cit.>. This approach considers a Boolean function χ:X →{0,1}, where x is referred to as "good" if χ(x)=1 and "bad" otherwise. Consider a quantum algorithm 𝒜, 𝒜|0⟩ = ∑_x∈ Xα_x |x⟩, and let a represent the probability that a good element is obtained if 𝒜|0⟩ is measured. Then, assuming that algorithm 𝒜 makes no measurements, amplitude amplification is a procedure that enables a good x to be located. Note that 𝒜 in Grover's searching algorithm is limited to generating an equal superposition of all members of X, and requires that there be a known unique x such that χ(x)=1. QAE functions whether or not the value of a is known in advance. If the value of a is known, a "good" x can be identified after a number of applications of 𝒜 and its inverse proportional to 1/√(a), even in the worst case. The quadratic speedup can also be achieved for a wide range of search problems for which effective classical heuristics exist. The value of a can be estimated by combining ideas from Grover's and Shor's quantum algorithms in the amplitude estimation process. Applying QAE to the problem of approximate counting allows one to estimate the number of x∈ X such that χ(x)=1.
Several ideas have been put forth on how to construct quantum modular exponentiation, multipliers, and adders utilizing a set of basic quantum gates. Reversible versions of well-known classical implementations were the first circuits that were proposed <cit.>. Further significant developments include <cit.>, see <cit.> for an overview.
A quantum algorithm, called Harrow–Hassidim–Lloyd (HHL), for solving systems of linear equations efficiently was demonstrated in <cit.>. Given a linear system A x = b with unknown x, a matrix A with condition number κ, and a matrix M, HHL computes the expectation value x^† M x. For small κ it was shown that any classical algorithm requires exponentially more time than HHL. Later, the HHL method was used in the work <cit.>, which efficiently determined the quality of a least-squares fit over an exponentially large data set. In many instances, the algorithm could also efficiently identify a concise function that approximated the data to be fitted and bounded the approximation error. In cases where the input data consisted of pure quantum states, the algorithm was employed to provide an efficient parametric estimation of the quantum state and could therefore be utilized as an alternative to full quantum-state tomography, particularly when a fault-tolerant QPU was available.
In <cit.> it was demonstrated that multiple copies of a quantum system, characterized by a density matrix ρ, could be employed to construct the unitary transformation exp(-i ρ t). Consequently, this enabled the performance of quantum principal component analysis on an unknown low-rank density matrix. This approach allowed for the determination in a quantum form of eigenvectors associated with the largest eigenvalues, accomplishing this task in exponentially less time compared to any preexisting algorithm.
A quantum algorithm for systems of linear ordinary differential equations with constant coefficients, including possibly inhomogeneous instances, is presented by the authors of <cit.>. This technique delivers an exponential improvement over previous quantum methods by producing a quantum state proportional to the solution at a predetermined final time and attaining polynomial complexity in the logarithm of the inverse error. The authors simulate the evolution according to the propagator using a Taylor series and encode the simulation into a sparse, well-conditioned linear system by leveraging the HHL approach. Their method provides improved numerical stability without requiring extra hypotheses by avoiding the drawbacks of finite difference techniques. Consequently, they present a quantum algorithm for linear differential equations with complexity poly(log(1/ϵ)), marking a substantial exponential improvement over existing methods <cit.>, whose overall complexity remains poly(1/ϵ) due to the inherent error introduced by the multistep method.
The Feynman-Kac formula establishes a connection between solutions of certain partial differential equations (PDEs) and Markov processes and provides a way to solve certain types of the former by relating them to the behavior of the latter. An algorithm based on variational quantum imaginary time evolution for solving the Feynman-Kac partial differential equation resulting from a multidimensional system of stochastic differential equations was proposed in <cit.>. The correspondence between the Feynman-Kac PDEs and the Wick-rotated Schrödinger equation was utilized for this purpose.
Exact inference on Bayesian networks is widely acknowledged as a computationally challenging task, characterized by its #P-hard complexity; practitioners therefore usually resort to approximate inference techniques when dealing with such networks. These techniques are employed to draw samples from the distribution over the query variables, given the provided evidence variable values (e). Through a quantum adaptation of rejection sampling, a substantial enhancement in efficiency is achieved: for a Bayesian network containing n variables, each with at most m parents per node, a single unbiased sample is obtained with classical resources in time O(n m P^-1(e)), whereas quantum technologies allow for a square-root speedup to O(n 2^m P^-1/2(e)) time per sample <cit.>.
In the paper <cit.> an algorithm for QPU-based prediction was presented, centered around a linear regression model with least-squares optimization. The scheme concentrated on the machine-learning task of predicting the output for a new input based on provided data point examples; it was adapted to handle non-sparse data matrices representable through low-rank approximations, and substantial enhancements were made to reduce its dependency on the condition number. The prediction's outcome could be obtained through a single-qubit measurement or harnessed for further quantum information processing tasks. To this end the quantum principal component analysis of <cit.>, as discussed above, was employed. Another quantum algorithm for fitting a linear regression model to a given data set through the least-squares approach, returning the optimal parameters in classical form, was presented in <cit.>. The algorithm, once executed, fully determined the fitted model, allowing for cost-effective predictions on new data, and was able to operate on data sets with non-sparse design matrices. Its runtime was characterized by a polynomial dependence on the logarithm of the size of the data set, the number of adjustable parameters d, the condition number of the design matrix κ, and the desired precision in the output. It was also established that the polynomial dependencies on d and κ were essential, indicating that significant improvements to the algorithm were unattainable. Furthermore, a complementary quantum algorithm was introduced to estimate the quality of the least-squares fit without explicitly computing its parameters.
A Determinantal Point Process (DPP) is a type of stochastic process characterized by a probability distribution expressed as the determinant of a certain function. DPPs occur in quantum physics and random matrix theory, and provide effective algorithms for tasks such as sampling, marginalization, conditioning, and other inference operations, making them important for ML <cit.>. Their quantum versions were discussed in <cit.>, where novel QML algorithms based on quantum subspace states, offering advancements in quantum linear algebra, are introduced. The work considers three algorithms. The first facilitates quantum determinant sampling, achieving a significant speedup compared to classical methods. The second focuses on quantum singular value estimation for compound matrices, potentially yielding exponential improvements in efficiency. The third reduces the circuit depth of quantum topological data analysis, enhancing computational efficiency.
Recently, orthogonal NNs have emerged as a novel NN architecture that enforces orthogonality on the weight matrices. The characteristic of orthogonality in the trained model weights is utilized to prevent redundancy in the acquired features <cit.>. The paper <cit.> introduced two related novel quantum methods for NNs aimed at enhancing performance in ML applications. The first method, called quantum orthogonal NN, utilizes a quantum pyramidal circuit to implement orthogonal matrix multiplication, with efficient training algorithms for both classical and quantum hardware. The second method, quantum-assisted NNs, employs QPU for inner product estimation during inference and training of classical NNs. Extensive experiments on medical image classification tasks demonstrate similar accuracy levels between quantum and classical NNs, suggesting the potential usefulness of quantum methods in visual tasks as quantum hardware advances.
Several recent papers have explored the application of QC principles to sentiment analysis in natural language processing. The work <cit.> proposed a method based on the Lambeq toolkit, while <cit.> introduced a quantum-inspired fully complex-valued NN, and <cit.> presented a quantum-like implicit sentiment analysis approach using sememes knowledge. Liu et al. (2023) <cit.> surveyed various quantum-cognitively inspired sentiment analysis models. Additionally, Sharma et al. (2022) <cit.> conducted a comparative study between classical and QML models for sentiment analysis. However, applications of quantum sentiment analysis to finance must await further hardware and algorithmic advances before becoming effective.
A model known as the hybrid classical-quantum autoencoder was put forth in <cit.>: it places a PQC in the bottleneck of a classical autoencoder (AE), and including the PQC improved performance in terms of F1 score, recall, and precision. The benefits of QC in unsupervised AD are illustrated in <cit.>. Using QAE and minimum search, the k-distance neighborhood of each data point is identified; using a quantum multiply-adder, the local reachability density of every data point is computed in parallel; and Grover's search for anomalous data points together with amplitude estimation is used in parallel to determine each data point's local outlier factor. The method achieves polynomial speedup in the number of data points and exponential speedup in the dimension compared to its classical equivalent. The work <cit.> presents a semisupervised AD method based on SVR with a quantum-kernel reconstruction loss (QSVR). This model is evaluated against a classical autoencoder, a quantum autoencoder, and an SVR with an RBF kernel; QSVR outperforms all of these models, earning the greatest mean AUC over all data sets. The models are thoroughly benchmarked on ten real-world AD data sets and one toy data set, with QSVR performing better than the quantum autoencoder on 9 out of 11 data sets. QML for FD in phishing URLs was analyzed in <cit.>.
Adiabatic Quantum Algorithms (AQAs) are a class of quantum algorithms that use adiabatic evolution, a gradual change in the system's Hamiltonian, to find the ground state of a problem. The basic idea is to start with a simple Hamiltonian that can be easily prepared and solved, and then gradually change it into the problem Hamiltonian, which encodes the solution. If the change is slow enough, the system remains in the ground state throughout the evolution, and the final state is the solution to the problem <cit.>.
Counterdiabatic (CD) driving is a technique used to speed up AQAs. The concept of CD driving was first introduced in physical chemistry in 2003 <cit.> and reintroduced in <cit.>. The key idea behind CD driving is to introduce additional terms in the Hamiltonian of the system that counteract the unwanted transitions. By carefully designing these counterdiabatic terms, it is possible to drive the system along a specific path that enhances the evolution towards the desired final state <cit.>. CD driving has been applied e.g. in QA <cit.>.
A review of the basic results for quantum error mitigation techniques in hybrid quantum-classical algorithms is given in <cit.>. The issue of implementing QAOA in near-term devices is discussed in <cit.>.
§.§ Quantum Machine Learning
Quantum machine learning (QML) has emerged as a novel paradigm at the intersection of quantum computing and machine intelligence, aiming to achieve computational speed-ups within the current NISQ era using both quantum and classical computational systems and algorithms <cit.>. At the present stage, the broader field of quantum machine intelligence <cit.>, which aims to develop a learning theory in the quantum domain, has focused predominantly on the subfield of QML. In this area, researchers operate at the frontier between the available quantum hardware and classical ML, even though the first milestones in QML assumed the architecture of a fault-tolerant quantum computer and a quantum random-access memory <cit.>. A comprehensive review of all the relevant contributions in the field remains a Sisyphean endeavor due to the escalating number of publications and patents; this escalation, however, signals a growing scientific and industrial interest.
Therefore, in Section <ref> we briefly review the algorithms that are the most promising for application in quantitative finance, in our opinion.
Several attempts have been made to translate learning models resembling NNs into the quantum domain, with the primary objective of obtaining a cost function of the form
ℒ(𝐱) = ϕ(W 𝐱 + b)
that can be linked to the unitary transformations at the core of a quantum circuit, thereby harnessing the benefits of quantum information processing.
However, the concept of quantum neural networks (QNNs) has recently come to encompass an entire class of QML algorithms based on parametrized quantum circuits that are subsequently optimized, or trained, with the aid of a classical processing unit. In particular, the link with the NN terminology of classical computer science stems from the fact that, in the quantum case, the hidden layers of the network are composed of the sequential action of a given ensemble of parametric gates.
The main building blocks of a QNN are a preparation routine, or feature map <cit.>, in which the data are encoded in a quantum state, followed by the variational model proper, containing the gates that are eventually optimized to fulfill the required learning task. The procedure then minimizes a loss function computed from specific measurement outcomes performed on the state.
In formulae, if we denote by 𝐱 the data we want to encode in our state, then the unitary describing the feature map will be U_𝐱, while the parametric circuit described by the unitary V(θ) will contain a set of trainable weights θ that are classically optimized. Therefore, the final N-qubit state on which the loss function is computed reads:
V(θ) U_𝐱|0⟩^⊗ N.
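As a toy single-qubit illustration of this construction, the sketch below takes both the feature map U_x and the variational layer V(θ) to be R_y rotations and evaluates a loss built from the expectation value of σ_z; the encoding and the loss are illustrative choices, not a prescription from the cited works.

```python
# A toy numpy sketch of the state V(theta) U_x |0> for a single qubit:
# both the feature map U_x and the variational layer V(theta) are R_y
# rotations, and the loss is a squared error on the sigma_z expectation.
import numpy as np

def ry(angle):
    return np.array([[np.cos(angle / 2), -np.sin(angle / 2)],
                     [np.sin(angle / 2),  np.cos(angle / 2)]])

sigma_z = np.diag([1.0, -1.0])
ket0 = np.array([1.0, 0.0])

def loss(theta, x, target):
    psi = ry(theta) @ ry(x) @ ket0        # V(theta) U_x |0>
    expectation = psi.conj() @ sigma_z @ psi
    return (expectation - target) ** 2     # squared error against a label

print(f"loss at theta=0.5, x=1.0, target=+1: {loss(0.5, 1.0, 1.0):.4f}")
```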
Although the field is developing rapidly, we refer the interested reader to <cit.>, where a detailed description of the state of the art, together with the major challenges facing quantum machine learning algorithms, is thoroughly discussed.
§ LITERATURE REVIEW
The body of literature concerning quantum computing in finance is vast and multifaceted. Notably, several authors, including <cit.>, have presented thorough summaries of the uses of quantum computing in finance. Our literature review, on the other hand, focuses on the use of QML in the finance domain.
While a majority of the sources analyzed entail theoretical explorations employing quantum algorithms for optimization objectives, thus potentially enhancing the speed of various machine learning techniques, only a small subset of these sources present tangible hardware or simulator implementations. Moreover, not all sources contextualize their algorithms within financial applications. Nonetheless, our review encompasses both practical financial implementations and theoretical advancements in quantum machine learning, recognizing the potential applicability of theoretical progress to established finance use cases.
Recently, Wang et al. (2022) <cit.> have proposed a Quantum Finance Software Development Kit (QFSDK) to address the complexities of implementing quantum finance calculations, particularly for educational purposes. This kit, developed in Python, offers a tool for students to gain hands-on experience with quantum finance concepts and applications, thereby enhancing their understanding of this field.
§.§ Portfolio Optimization and Quantum Machine Learning
As discussed in sec. <ref> portfolio optimization is a method that involves selecting the best combination of investments to achieve a specific goal, such as maximizing returns or minimizing risk.
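To make the optimization problem concrete, the following toy sketch casts binary (all-or-nothing) mean-variance selection as a QUBO and solves it by brute-force enumeration over a handful of assets; the returns μ, covariance Σ and risk aversion q are illustrative stand-ins for real market data, and a quantum annealer would minimize the same objective over the binary vector x.

```python
# A toy sketch of binary mean-variance portfolio selection as a QUBO,
# solved by brute-force enumeration; mu, Sigma and q are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n = 6
mu = rng.uniform(0.02, 0.12, size=n)      # expected returns (illustrative)
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n                        # a valid covariance matrix
q = 0.5                                    # risk aversion

best_x, best_val = None, np.inf
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    val = q * x @ Sigma @ x - mu @ x       # QUBO objective: risk - return
    if val < best_val:
        best_x, best_val = x, val
print(f"selected assets: {best_x}, objective: {best_val:.4f}")
```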
One of the first QML approaches to portfolio optimization based on quantum annealers was demonstrated in <cit.>. The authors addressed a multi-period portfolio optimization problem utilizing D-Wave Systems' quantum annealer, offering a formulation of the problem and discussing various integer encoding schemes. Their numerical examples showcased high success rates, with the formulation accommodating transaction costs and bypassing the need to invert a covariance matrix. Furthermore, they highlighted the challenges posed by discrete multi-period portfolio optimization and provided insights into potential enhancements for future scalability. For the study <cit.>, 63 securities listed on the Abu Dhabi Securities Exchange were considered, utilizing the weekly closing value of each over a period of one year. The covariance matrix and expected values were read into the classical CPU with MATLAB, along with the budgets and other parameters. The approach tested whether the adoption of the D-Wave QPU allowed a meaningful increase in computational performance for solving the Markowitz portfolio problem. D-Wave's quantum optimizer was used to find the optimal allocation of funds, with the D-Wave Solver API (SAPI) embedding routine used to program the dense Ising model into the unique hardware connectivity graph of the D-Wave processor, called Chimera.
The authors of <cit.> endeavored to address the limitation of the work <cit.>, which focused on portfolios of insufficient size to assess the scalability of the chosen approach with respect to problem size. They generated parametrized samples of portfolio optimization problems based on real financial data statistics. The samples were linked to quadratic binary optimization forms programmable in the analog D-Wave Quantum Annealer 2000Q. The performance was compared with a genetic algorithm approach with the following results. After investigating various options to optimize quantum computation, they discovered that seeding the quantum annealer with a solution candidate found by a greedy local search and then employing a reverse annealing protocol yielded the best results in terms of expected time-to-solution as a function of the number of variables for the most challenging instance set; this approach was called the optimized reverse annealing protocol. The authors found the method to be more than 100 times faster on average than the corresponding forward quantum annealing.
In the work <cit.>, by leveraging quantum access to historical return records, the algorithm determines the optimal risk-return tradeoff curve, facilitating the sampling of the optimal portfolio. If the pertinent data is stored in quantum random access memory, then by leveraging HHL, quantum-walk Hamiltonian simulation methods, and the quantum state exponentiation method <cit.>, the matrix pseudo-inverse and quadratic optimization problem can be resolved, potentially achieving a runtime polynomial in log(N) instead of the classical complexity, which is polynomial in N. Consequently, this approach enables the determination of the risk-return curve and the unveiling of the quantum state associated with the optimal portfolio. To be more specific, this method concerns the unconstrained portfolio optimization problem; the calculations were performed on a QPU emulator. A work that appeared soon afterwards <cit.> dealt with the constrained portfolio optimization problem using a quantum interior point method for second-order cone programs <cit.>, generalized to SDP <cit.>. The method was evaluated on a QPU emulator with Gaussian noise, using a dataset containing historical data about the stocks of the S&P-500 companies from the years 2007-2016.
Next, the work <cit.> applied to the discrete portfolio optimization problem an approach different from quantum annealing, namely the gate model of quantum computing. The authors evaluate a portfolio rebalancing use case on an idealized simulator of a gate-model QPU, considering characteristics such as trading in discrete lots, non-linear trading costs, and investment constraints. They design a novel problem encoding and hard-constraint mixers for the Quantum Alternating Operator Ansatz <cit.> and compare it to the QAOA. Experimental findings indicate that this application is feasible on NISQ hardware, as it can identify portfolios with adjusted returns within 5% of optimal, and with optimal risk, for a small eight-stock portfolio.
Now let us focus on recent developments in QA approaches to portfolio optimization. The study <cit.> benchmarks, on portfolio optimization, the QA controls available in the programmable quantum annealer D-Wave 2000Q, including controls for mapping the logical problem onto hardware and for scheduling the annealing process. The authors explore how these controls influence computational performance and error mechanisms by tuning the quantum dynamics, evaluating both forward and reverse annealing methods and identifying control variations that optimize performance, e.g. in terms of probability of success. In <cit.> portfolio optimization on D-Wave is compared with conventional commercial solvers, demonstrating that the QA approach shows promising performance, coming close to that of existing solvers for problems of similar size. In <cit.> a quantum-inspired integer simulated annealing method for portfolio optimization in the presence of discretized convex and non-convex cost functions is presented, though not executed on quantum hardware. The paper <cit.> evaluates a workflow combining classical preprocessing with modified QUBO models on various annealing platforms, including D-Wave, using real-world stock data. The outcomes from QA show promise, although they fell short of the performance achieved with simulated annealing and digital annealing; this discrepancy may be attributed to factors such as inherent noise, lack of error correction, or scaling issues. In the work <cit.> portfolio optimization using digitized-counterdiabatic quantum computing is explored. This concept is applied to discrete mean-variance portfolio optimization, demonstrating improved success probabilities compared to variational quantum algorithms like QAOA and DC-QAOA. The study highlights the potential of digitized-counterdiabatic quantum algorithms for finance applications in the NISQ era.
The series of papers by Cohen et al. <cit.> explores the application of classical and quantum algorithms to portfolio optimization based on the Sharpe ratio, a simplified Chicago Quantum Ratio (CQR), and then a new Chicago Quantum Net Score (CQNS), using U.S. equities. In the first paper <cit.>, the authors investigate portfolio optimization of 40 stocks using the D-Wave Quantum Annealer, exploring various problem formulations based on risk-versus-return metrics. The second paper <cit.> extends this investigation to 60 stocks. Finally, in the third paper <cit.>, the authors analyze 3,171 U.S. common stocks to create efficient portfolios, incorporating classical solvers and QA techniques. Collectively, these papers demonstrate the potential of both classical and quantum methods for selecting attractive portfolios in the financial domain.
In <cit.> a dynamic portfolio optimization with a minimal holding period is proposed and was performed on D-Wave 2000Q. The algorithm efficiently samples near-optimal portfolios at each trading step and post-selects to meet the minimal holding constraint. Results indicate that the method produces investment trajectories much closer to the efficient frontier than typical portfolios and can easily adapt to different risk profiles. The work <cit.> demonstrates how to obtain the best investment portfolio with a given target risk and implement individual investment bands (i.e. minimum and maximum possible investments) for each asset to impose diversification and avoid corner solutions. The study utilizes D-Wave Hybrid and its Advantage QPU to find optimal portfolios for assets from S&P100 and S&P500, showing how practical daily constraints in quantitative finance can be implemented with real data under realistic market conditions. More complex indexes like the Nasdaq Composite can be analyzed with the aid of clustering algorithms. In <cit.>, the problem of dynamic portfolio optimization over a period of time is addressed using classical solvers, D-Wave Hybrid QA, VQEs on IBM-Q, and a quantum-inspired optimizer based on Tensor Networks. The comparison is taken on real data from daily prices over 8 years of 52 assets. Results indicate that D-Wave Hybrid and Tensor Networks can handle the largest systems effectively.
The paper <cit.> proposes a novel QUBO formulation for portfolio optimization, incorporating both the Sharpe ratio and a diversification term. It modifies the all-or-nothing selection of each asset, as in <cit.>, by introducing into portfolio construction a linear combination of investments across all assets with arbitrarily high precision. Furthermore, a diversification term is introduced to promote investments across multiple sectors, enhancing portfolio resilience. Results obtained using classical QUBO solvers and the D-Wave Leap hybrid classical-quantum solver demonstrate the effectiveness of the proposed approach in maximizing returns while minimizing sector-specific risks. Optimal outcomes are achieved through D-Wave Leap Hybrid in one of the considered scenarios and through a classical solver in the other. The study <cit.> introduces a system, Q4FuturePOP, designed for portfolio optimization with future asset values, executed on the D-Wave Advantage System 6.2 with 5610 qubits and 40134 couplers spread over a Pegasus topology for QUBO. Unlike traditional approaches using historical data, Q4FuturePOP utilizes future asset predictions for formulating the optimization problem. Through preliminary evaluations, it demonstrates promising performance, surpassing expert solutions from financial advisors like Welzia Management in some instances.
Now, we move to other QML implementation approaches. The study <cit.> compares classical ML, specifically restricted Boltzmann machines (RBMs), with variants of quantum circuit Born machines (QCBMs) implemented on ion-trap QPUs, for a probabilistic version of the portfolio optimization problem. It utilizes time-series pricing data from asset subsets of the S&P 500 stock market index to assess the performance of both models. The quantum models demonstrated superior performance compared to RBMs when considering the same number of parameters. The effectiveness of certain HHL enhancements is empirically demonstrated through application to small portfolio optimization problems, executed end-to-end on the Quantinuum System Model H1-2 trapped-ion QPU, in the work <cit.>. In <cit.>, a quantum version of an existing classical online portfolio optimization algorithm <cit.>, leveraging quantum state preparation, inner product estimation, and multi-sampling techniques, is introduced. The quantum algorithm employs quantum maximum finding and exhibits a quadratic speedup in time complexity relative to the number of assets in the portfolio n, while the transaction cost remains constant in n, making it particularly suitable for practical applications with a large number of assets. The authors systematically detail the transition from the classical to the quantum algorithm, starting from an extended version of the classical algorithm, incorporating a sampling procedure to render the transaction cost independent of n, and ultimately employing quantum inner product estimation and quantum multi-sampling techniques to devise the quantum online portfolio optimization algorithm.
In <cit.> a VQE in solving portfolio optimization problems using simulators and real QPUs with over 100 qubits provided by IBM is analyzed. Comparisons are drawn with three classical algorithms by backtesting. Findings indicate that quantum algorithms exhibit competitiveness with classical counterparts, with the advantage of efficiently handling a large number of assets on future larger QPUs. The paper <cit.> employs VQE to tackle portfolio optimization efficiently and defines optimal hyperparameters for VQE execution on actual IBM QPUs. By converting the problem into QUBO, with constraints integrated into the objective function, the study identifies key hyperparameters like ansatzes and optimization methods. Through experiments on simulators and real QPUs, the research demonstrates a strong correlation between solution quality and quantum hardware size. It concludes that with proper hyperparameters and sufficiently sized quantum hardware, VQE can produce solutions close to exact ones, even without error-mitigation techniques, suggesting a promising avenue for future optimization endeavors.
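Gate-model solvers such as VQE and QAOA consume the same QUBO objectives after converting them to a diagonal Ising Hamiltonian over Pauli-Z operators. The following sketch shows the standard change of variables x_i = (1 - z_i)/2; it is a generic mapping, not specific to any of the cited implementations.

```python
import numpy as np

def qubo_to_ising(Q):
    """Map the QUBO energy x^T Q x (x in {0,1}^n) to the Ising form
    h . z + sum_{i<j} J_ij z_i z_j + offset (z in {-1,+1}^n), which
    directly defines a diagonal Pauli-Z Hamiltonian for VQE/QAOA."""
    S = Q + Q.T                       # symmetrised pair couplings
    np.fill_diagonal(S, 0.0)
    diag = np.diag(Q).astype(float)
    J = np.triu(S, 1) / 4.0
    h = -diag / 2.0 - S.sum(axis=1) / 4.0
    offset = diag.sum() / 2.0 + np.triu(S, 1).sum() / 4.0
    return h, J, offset

# sanity check: both forms give identical energies on a random instance
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 4))
h, J, off = qubo_to_ising(Q)
for bits in range(2 ** 4):
    x = np.array([(bits >> i) & 1 for i in range(4)])
    z = 1 - 2 * x
    assert np.isclose(x @ Q @ x, h @ z + z @ J @ z + off)
```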
§.§ Market Prediction and Trading using Quantum Machine Learning
A significant challenge in the financial industry revolves around trading and hedging portfolios of derivatives. Recently, an innovative approach has been introduced to address this issue without relying on frictionless and complete market assumptions <cit.>. In this approach, trading decisions within hedging strategies are modeled as NNs in a reinforcement learning framework. Expanding on this work, <cit.> adapts the problem studied by <cit.> to a quantum-native setup. In this quantum framework, market states are encoded into quantum states, and policies and value functions are represented using Quantum Neural Networks (QNNs). The implementation is carried out on the 20-qubit trapped-ion quantum processor, Quantinuum H1, with training conducted through gradient descent. The effectiveness of this quantum approach is compared against the Black-Scholes delta hedge model, and the results show that the QNN policies outperform the traditional model significantly.
The research conducted in <cit.> delves into metrics utilized for encoding combinatorial search as a binary quadratic model tailored for feature selection. The primary objective is to enhance the model's overall performance, particularly in terms of generalization, model fit, and accuracy when applied to the regression task of price prediction. Through the utilization of quantum-assisted routines with QUBO using the D-Wave Advantage 1.1 sampler, the authors observed notable enhancements in the quality of predictive model outputs, coupled with a reduction in input dimensionality for the learning algorithm across synthetic and real-world datasets.
In <cit.> the so-called quantum Elman NN (QENN) was investigated. The Elman NN (ENN) itself is a type of recurrent NN developed by Jeffrey Elman in 1990 <cit.> to process sequential data and has been widely used in various fields, including time series prediction. The ENN consists of an input layer, a hidden layer, a context layer, and an output layer. The input layer receives the classical input data, which is then processed through the hidden layer, which in the quantum case can be a quantum register. The context layer, which can also be a quantum register, stores the previous state of the hidden layer and provides feedback to the network, allowing it to remember past information and learn temporal dependencies in the data. Finally, the output layer produces the network's output, possibly in quantum form, based on the processed information.
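For reference, the classical recurrence that the QENN quantises can be written in a few lines; the sketch below uses random, untrained weights and serves only to illustrate the context-layer feedback described above.

```python
import numpy as np

class ElmanRNN:
    """Minimal classical Elman network; the context layer stores the
    previous hidden state and is fed back alongside the input."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_ch = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, x):
        h = np.tanh(self.W_xh @ x + self.W_ch @ self.context)
        self.context = h               # remembered for the next time step
        return self.W_hy @ h           # read-out, e.g. next closing price

net = ElmanRNN(n_in=1, n_hidden=8, n_out=1)
prices = np.sin(np.linspace(0.0, 6.0, 50))            # toy price series
preds = [net.step(np.array([p]))[0] for p in prices]  # one-step-ahead forecasts
```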
The learning rates in <cit.> are tuned using a quantum version of genetic algorithms. The method was applied for the prediction of closing prices on the Nasdaq, BSE Sensex, HSI, SSE, Russell 2000, and TAIEX stock markets. It was shown that QENN can attain the quality of prediction of ENN with the much smaller size of the hidden and context layers.
The study <cit.> unveils a novel hybrid deep QNN designed for financial forecasting tasks. Central to the approach is an encoder module that converts partitioned financial time series into a sequence of density matrices, followed by the utilization of a deep quantum network to forecast the density matrix at a future time step. The research demonstrates that the maximum price attained by security at a later time can be extracted from the output density matrix. Through extensive experimentation involving 24 securities, the system showcases remarkable accuracy and efficiency across both regression and extrapolation scenarios.
The paper <cit.> examined the quantum analogs of classical data preprocessing and forecasting utilizing autoregressive integrated moving average (ARIMA) time series models, employing straightforward quantum operators with minimal quantum gate requirements. The study <cit.> also explored time series forecasting using quantum technologies. They investigated the effectiveness of PQCs employed as QNNs for predicting time series signals through simulated quantum forward propagation. Their findings suggest that QNNs can proficiently model time series data while offering the notable advantage of faster training compared to classical machine learning models when executed on QPUs. Another study <cit.> focusing on time series forecasting introduces two classical-quantum hybrid architectures employing QNNs and hybrid quantum neural networks. These architectures integrate quantum variational circuits with specialized encoding schemes, with optimization executed by a classical computer. The experiments were performed on a QPU emulator. Performance validation across four representative forecasting problems (e.g. the USD-to-EUR currency exchange rate) demonstrates competitive performance, despite a comparable number of trainable parameters relative to classical solutions.
The study <cit.> demonstrates two applications within Itaú Unibanco, Latin America's largest bank. Quantum algorithms for determinantal point processes (DPPs) were applied to enhance Random Forest models for churn prediction. These QML algorithms improved precision by nearly 6% compared to baseline models. So-called quantum compound NN architectures, which utilize quantum orthogonal NNs, were designed for credit risk assessment. The experiments were conducted on the 16-qubit IBM platform Guadalupe. These NN architectures demonstrated superior accuracy and generalization compared to classical fully-connected NNs while requiring fewer parameters.
In <cit.> QUBO on D-Wave was employed for feature selection, and Principal Component Analysis (PCA) was used for dimensionality reduction. The task of stock price prediction was transformed into a classification problem, and the QSVM was trained to predict price movements. The performance of QSVM was compared with classical models, and their accuracy was analyzed using datasets formulated with QA and PCA. The study focused on predicting stock prices and binary classification for four companies: Apple, Visa, Johnson and Johnson, and Honeywell, using real-time stock data. Various Quantum Computing techniques were compared with their classical counterparts in terms of prediction model accuracy and F-score. QA showcased superior efficacy in extracting the most pertinent features from financial data compared to PCA. However, QSVM did not demonstrate a notable advantage over classical SVM with the provided datasets.
§.§ Pricing and Quantum Machine Learning
The Heath-Jarrow-Morton (HJM) model, widely employed in finance for valuing interest rate derivatives <cit.>, encounters a notable challenge due to its extensive degrees of freedom when describing the evolution of the yield curve. One potential strategy to tackle this challenge involves the application of principal component analysis for factor selection. The use of quantum Principal Component Analysis (qPCA) can effectively reduce the number of noisy factors as shown in <cit.>, facilitating the determination of fair prices for interest rate derivatives. The estimation of principal components for 2 × 2 and 3 × 3 cross-correlation matrices, based on historical data for two and three time-maturing forward rates, is executed using the 5-qubit IBMQX2 quantum processor. The results indicate that the algorithm can provide reasonable approximations for the 2 × 2 case, although the quantum processor faces limitations related to gate fidelities, connectivity, and the number of qubits.
Simultaneously, experimental outcomes with simulators suggest that improved results could be achievable with the availability of a lower-level programming interface. Such an interface would enable the customization of quantum algorithm optimization to align with chip constraints, offering a promising avenue for refinement.
In classical machine learning, Generative Adversarial Networks (GANs) excel at generative modeling, an unsupervised learning approach involving a generator and a discriminator engaged in a competitive training process. The introduction of quantum systems, replacing the generator, discriminator, or both, extends this framework into the domain of quantum computing. An exemplary application is demonstrated in the work <cit.>, where quantum-classical hybrid GANs are employed to learn and transfer approximations of probability distributions from classical data to gate-based QPUs. This is an efficient, approximate data loading scheme that requires significantly fewer gates than existing methods. Specifically, a log-normal distribution is learned that models the spot price of an underlying asset for a European call option. Finally, QAE is used to estimate the expected payoff of the option, given the efficient, approximate data loading by the quantum GANs (qGANs). The training and loading are run on an actual QPU, the IBM Q Boeblingen chip with 20 qubits, with the gradient-based optimization of the qGAN parameters taking place on a classical computer.
The earliest quantum attempts at derivative pricing were presented in two papers by Chen <cit.> and Chen <cit.> in 2001. In <cit.>, a quantum adaptation of select areas of arbitrage theory, asset pricing, and optional decomposition within financial markets was presented, operating on finite-dimensional quantum probability spaces. The work resolved certain paradoxes in classical models and re-derived option pricing formulas. In <cit.>, a quantum model for binomial markets was proposed, preventing arbitrage opportunities and revisiting option pricing formulas from a quantum perspective. In <cit.>, two formulations for finding optimal arbitrage opportunities as a QUBO, solved using D-Wave, were presented.
An early quantum algorithm for pricing of financial derivatives was presented in <cit.>. The relevant probability distributions were prepared in quantum superposition, and the payoff functions were implemented via quantum circuits, with the price of financial derivatives extracted via quantum measurements. QAE on an emulator was applied to achieve a quadratic quantum speedup in the number of steps required to obtain an estimate for the price with high confidence. This study combined QAE and the quantum algorithm for Monte Carlo, with the pricing of financial derivatives, and provided a foundation for further research at the intersection of quantum computing and finance.
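The classical baseline that these QAE-based pricers accelerate is plain Monte Carlo under Black-Scholes dynamics, sketched below; its standard error decays as O(1/sqrt(N)) in the number of paths, whereas QAE reaches O(1/N) in the number of oracle queries, which is the quadratic speedup referred to above. All contract parameters here are illustrative.

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Classical Monte Carlo price of a European call under Black-Scholes
    dynamics; returns the discounted mean payoff and its standard error."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

price, stderr = mc_european_call(100, 105, 0.02, 0.2, 1.0, 200_000)
print(price, stderr)   # stderr shrinks as 1/sqrt(n_paths)
```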
The paper <cit.> introduced a hybrid quantum-classical algorithm, inspired by quantum chemistry, for pricing European and Asian options in the Black-Scholes model. By leveraging the equivalence between the pricing PDE and the Schroedinger equation in imaginary time, the algorithm transforms the Black-Scholes PDE into the Heat Equation and represents the option price as a wave function. This wave function is then solved using a hybrid quantum and classical algorithm, incorporating McLachlan's invariance principle <cit.> to build a quantum circuit of imaginary time evolution. Despite requiring only a few qubits on an emulator, the shallow quantum circuit accurately represented European and Asian call option prices, indicating a promising potential for applying quantum computing techniques in quantitative finance.
The paper <cit.> introduced a quantum algorithm for European option pricing in finance, employing a unary representation of the asset value. The algorithm first generates the amplitude distribution corresponding to the asset value at maturity using a low-depth circuit; then it computes the expected return with simple controlled gates; and finally employs the QAE. A comparison of unary and binary option pricing algorithms executed on Qiskit emulator and using error maps indicated that unary representation could offer a significant advantage in practice for near-term devices.
In <cit.> pricing of various types of options, including vanilla options, multi-asset options, and path-dependent options such as barrier options, was examined using the gate-based IBM Q Tokyo quantum device showing a quadratic speed-up compared to traditional Monte Carlo simulations. Complex features in exotic options, such as path dependency with barriers and averages, were addressed. The results rely on QAE, and its variant without phase estimation <cit.> was demonstrated to reduce the number of gates required for measuring option prices. An effective error mitigation scheme was employed to reduce errors arising from noisy two-qubit gates.
In <cit.>, a quantum least squares Monte Carlo (LSM) approach is introduced that leverages quantum access to a stochastic process, utilizes quantum circuits for computing optimal stopping times, and employs quantum techniques for Monte Carlo simulations. A nearly quadratic speedup in runtime compared to traditional LSM methods is demonstrated, and examples of its application to American option pricing are given. In <cit.> LSM was applied to Bermudan option pricing. This method approximates the continuation value, a crucial component of Bermudan option pricing, through Chebyshev interpolation. It utilizes values at interpolation nodes estimated by QAE and demonstrates a quadratic speed-up compared to classical LSM. In <cit.> the preparation of an initial state representing the option price, followed by its evolution using existing time simulation algorithms in Wick’s imaginary time-space were employed for pricing options. Due to its utilization of a hybrid variational algorithm, the proposed method was deemed relevant for NISQ QPUs. The method was numerically verified for European options and has potential extensions to path-dependent options like Asian options.
In <cit.>, the authors focus on derivative pricing through the solution of the Black-Scholes partial differential equation using the finite difference method (FDM), a suitable approach for certain types of derivatives but challenged by the so-called curse of dimensionality. They introduce a quantum algorithm for FDM-based pricing of multi-asset derivatives, demonstrating an exponential speedup in dimensionality compared to classical algorithms. Leveraging quantum differential equations solving algorithms <cit.>, the proposed approach addresses the issue of extracting derivative prices from the output state of the quantum algorithm, outlining the calculation process and estimating its complexity. The work optionally used also QAE for the calculation of the so-called nodal continuation values important for Bermudan option pricing. In the subsequent study <cit.>, the same authors make their algorithm feasible to run on a small QPU but by avoiding certain efficiency bottlenecks of embedding derivative price in the amplitude, they use variational quantum simulation to solve the Black-Scholes equation and compute the derivative price from the inner product between the solution and a probability distribution. They employed a QPU emulator. In <cit.>, a quantum Monte Carlo algorithm was proposed to solve high-dimensional Black-Scholes PDEs with correlation and general continuous and piece-wise affine payoff functions. The approach involves uploading the multivariate log-normal distribution and the rotated form of the payoff function; subsequently, QAE is applied. Error and complexity analyses show that the computational complexity grows only polynomially in the space dimension of the Black-Scholes PDE and the reciprocal of the accuracy level, indicating that the algorithm is not afflicted by the curse of dimensionality. The results were verified on a QPU emulator qfinance using Qiskit.
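For a single asset, the FDM pricing that these quantum algorithms aim to scale can be sketched with an explicit scheme marching backwards from the terminal payoff; each extra asset multiplies the grid size, which is the curse of dimensionality mentioned above. Grid sizes and contract parameters below are illustrative, and the explicit scheme needs a small time step for stability.

```python
import numpy as np

def bs_explicit_fdm(s_max=300.0, k=105.0, r=0.02, sigma=0.2, t=1.0,
                    n_s=150, n_t=20_000):
    """Explicit finite-difference solver for the one-asset Black-Scholes
    PDE (European call), marching backwards from the terminal payoff."""
    ds, dt = s_max / n_s, t / n_t
    s = np.linspace(0.0, s_max, n_s + 1)
    v = np.maximum(s - k, 0.0)                       # payoff at maturity
    for m in range(n_t):
        d2 = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds**2  # V_SS
        d1 = (v[2:] - v[:-2]) / (2 * ds)             # V_S
        v[1:-1] += dt * (0.5 * sigma**2 * s[1:-1]**2 * d2
                         + r * s[1:-1] * d1 - r * v[1:-1])
        tau = (m + 1) * dt                           # time to maturity
        v[0], v[-1] = 0.0, s_max - k * np.exp(-r * tau)
    return s, v

s_grid, call = bs_explicit_fdm()
print(np.interp(100.0, s_grid, call))                # price at S0 = 100
```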
In <cit.>, a strategy to optimize the PQC for pricing a specific type of derivative, known as a Target Accrual Redemption Forward, combining elements of an option and a forward contract, was introduced. This strategy was based on an energy-based method proposed in <cit.>. It combined pre-trained variational circuits with fault-tolerant quantum computing to reduce resource requirements. The target cost function was defined as the energy of the associated quantum harmonic oscillator problem, whose ground state is Gaussian <cit.>. The circuits required to encode these states for different choices of the register size n were pre-trained and applicable to any derivative pricing problem, thereby eliminating the need to include training costs in overall resource estimations. The numerical study demonstrated that the variationally prepared state approached the target exponentially fast in the number of gate operations. In the study <cit.>, QAE is the primary source of quantum speedup in a model in which the volatility of the underlying asset price depends on the price and time. The paper explores two variants of the state preparation step of QAE: amplitude encoding, where the probability distribution of the derivative's payoff is encoded into probabilistic amplitudes, and the pseudo-random number (PRN) type, where sequences of PRNs simulate asset price evolution akin to classical Monte Carlo simulation.
Other notable quantum approaches to option pricing, which we do not classify as QML, include <cit.>. A so-called quantum first fundamental theorem of asset pricing, stating the equivalence between no-arbitrage and the existence of a risk-free density operator under which all assets are martingales, was given in <cit.>.
§.§ Risk Management and Quantum Machine Learning
In a pioneering work <cit.>, a quantum algorithm has been presented to analyze risk more efficiently compared to traditional Monte Carlo simulations on classical computers. QAE was utilized to price securities and assess VaR and Conditional VaR on a gate-based QPU. The implementation of this algorithm and the trade-off between convergence rate and circuit depth were demonstrated, indicating a near quadratic speed-up compared to Monte Carlo methods as circuit depths increase gradually. Two toy models were employed to demonstrate the algorithm's efficacy. The first model utilized real hardware, the IBM Q Experience, to price Treasury bills, representing short-term debt obligations, under the risk of potential interest rate increases. The second model simulated the algorithm to showcase how a QPU can evaluate financial risk for a two-asset portfolio comprising government debt with varying maturity dates. Both models confirmed the improved convergence rate over Monte Carlo methods.
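The quantities targeted by these QAE algorithms are, classically, simple functionals of a simulated loss distribution; a minimal Monte Carlo baseline for VaR and conditional VaR, with a stand-in Gaussian loss model, is sketched below.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk and conditional VaR (expected shortfall)
    at level alpha from simulated losses."""
    losses = np.sort(losses)
    idx = int(np.ceil(alpha * len(losses))) - 1
    return losses[idx], losses[idx:].mean()

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, size=100_000)   # stand-in portfolio loss model
print(var_cvar(losses))                       # roughly (1.64, 2.06) for N(0,1)
```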
In paper <cit.>, a QA algorithm in QUBO form for a dynamic asset allocation problem with an expected shortfall constraint is presented. The algorithm, which is dynamic and allows the risk target to emerge from the market volatility, is formulated in a manner suitable for implementation by a quantum annealer like D-Wave.
The work <cit.> introduces a quantum algorithm based on QAE for VaR estimation and evaluates it on a Qiskit QPU emulator. The algorithm was designed for the efficient estimation of credit risk, surpassing the capabilities of classical Monte Carlo simulations. Specifically, the algorithm focuses on estimating the economic capital requirement (ECR). The study implements this algorithm for a realistic loss distribution, providing a comprehensive analysis of its scalability to practical problem sizes. It offers insights into the total number of required qubits, the anticipated circuit depth, and the expected runtime under reasonable assumptions regarding future fault-tolerant quantum hardware. The conclusions highlight a quadratic speedup achieved by the quantum algorithm in estimating economic capital requirements, supported by a simulation that considers realistic problem sizes. The scalability and expected runtime are thoroughly examined, with the suggestion that the results extend to more intricate uncertainty models or alternative objectives, such as conditional value at risk, with minimal additional computational overhead. The development of a quantum circuit for the Gaussian conditional independence model is detailed, with the acknowledgment that diverse credit risk models would necessitate their own dedicated quantum circuits.
In <cit.>, variational quantum-classical Wasserstein GANs were presented to address the problems of sampling-efficiency limits and GAN training instabilities. The model maintained the structure of the classical discriminative model but substituted a QNN for the Wasserstein GAN generator, so that there was no need to prepare high-dimensional classical data in a quantum circuit. In terms of F1 score, the effectiveness on a credit card fraud dataset was comparable to the traditional method. The work examined, with the TensorFlow Quantum emulator, how layer depth and width, sampling noise, and the initialization strategy for QNN design parameters affected convergence and performance.
Feature selection stands as a challenging and crucial task within machine learning, involving the identification of a subset of pertinent features in a dataset from the original set. This process differs from feature extraction, which generates new features from the original set, capturing essential information in a lower-dimensional space. Utilizing QNNs, a quantum algorithm for feature selection is introduced in <cit.>. Specifically, QNNs are trained to generate feature subsets that optimize the performance of a predictive model. While any arbitrary classifier and scoring function could be used, the study opts for logistic regression as the classifier and log-loss as the performance score.
The efficacy of the QNN feature selection method is assessed using a publicly available real-world credit risk dataset containing 1000 data points that evaluate a customer's creditworthiness based on 20 attributes. The optimizer for training the QNN parameters is a version of the gradient-free method called simultaneous perturbation stochastic approximation. The feature selection algorithm is implemented on superconducting quantum hardware ibmq montreal with 27 qubits, demonstrating results that compete with state-of-the-art classical methods and, in certain experiments, surpass them.
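The objective optimized by such a feature-selection routine can be made explicit with a small classical sketch: score a candidate binary feature mask by the validation log-loss of a logistic regression, which is what the QNN-generated subsets are evaluated against. The synthetic data below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

def subset_score(mask, X, y):
    """Validation log-loss of a logistic regression restricted to the
    features selected by the boolean mask; this is the score a feature
    selection routine, quantum or classical, would minimise."""
    if mask.sum() == 0:
        return np.inf
    X_tr, X_va, y_tr, y_va = train_test_split(
        X[:, mask], y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return log_loss(y_va, clf.predict_proba(X_va))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                         # 20 attributes
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)
mask = np.zeros(20, dtype=bool); mask[[0, 3]] = True   # candidate subset
print(subset_score(mask, X, y))
```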
In <cit.>, it was found that QPUs were able to solve certain tasks related to foreign exchange reserves management, including risk measurement using the quantum Monte Carlo method and Markowitz-like portfolio optimization employing the HHL algorithm and QAOA. However, due to current hardware limitations, QAOA could only be applied to a task with five binary variables, and only a few of the demonstrations were successful. The application of the quantum Monte Carlo method to risk measurement generated only partially correct results, and the use of the HHL algorithm in portfolio optimization failed. Running the algorithms discussed above on a QPU simulator and on the IBM Lima superconducting QPU confirmed the correctness of the implementations. The work also provided a very concise yet comprehensive overview of general quantum computation with a tutorial.
After discussing risk assessment methods, we move to the topic of fraud detection (FD) with QML. Early promising developments for FD with QML were discussed in <cit.>, with the conclusion that the QPUs available at the time were not sufficiently stable and error-tolerant for practical problems.
In <cit.>, different single-qubit architectures proposed in <cit.> were analyzed and a novel implementation was presented using Qiskit. The former trained a single qubit based on the concept of data re-uploading, allowing the encoding of mathematical functions in the degrees of freedom of a series of gates applied to a single-qubit state. The authors trained a QNN on real data with the Qiskit QPU emulator and benchmarked it against a NN algorithm, particularly using the Kaggle credit card FD dataset with 284807 transactions, out of which 492 are fraudulent <cit.>. Different accuracies associated with various layer formulations, the use of an initial data loading layer, and different numbers of layers for a specific problem were demonstrated.
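A single-qubit data re-uploading circuit is small enough to simulate directly; the sketch below alternates trainable rotations with data-encoding rotations and reads out class probabilities from the final state. The layer parameters are illustrative, and a real classifier would train them against a loss.

```python
import numpy as np

def ry(a):
    """Single-qubit rotation about the Y axis."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def reupload_probs(x, layers):
    """Data re-uploading on one qubit: each layer applies a trainable
    rotation followed by a data-encoding rotation, so the feature x
    enters the circuit repeatedly."""
    psi = np.array([1.0, 0.0])          # |0>
    for w, b in layers:
        psi = ry(b * x) @ ry(w) @ psi   # trainable rotation, then re-upload x
    return np.abs(psi) ** 2             # (P(class 0), P(class 1))

layers = [(0.3, 1.0), (-0.7, 1.0), (0.2, 1.0)]   # 3 layers, toy parameters
print(reupload_probs(0.8, layers))
```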
In <cit.>, more complex QNN models were used on that dataset. The results established classical benchmarks based on supervised and unsupervised ML methods, with average precision chosen as a robust metric for detecting anomalous data. Quantum kernels of different types were employed for performing AD, and it was observed that the method could challenge equivalent classical protocols as the number of features, equal to the number of qubits for data embedding, increased. Simulations with registers up to 20 qubits showed that quantum kernels with re-uploading demonstrated better average precision, with the advantage increasing with system size. At 20 qubits, the quantum-classical separation of average precision was equal to 15%.
In <cit.> QML was employed for fraud identification in digital transactional payments using the dataset <cit.>. The implementation of the QNN was emulated using Python, and it was observed that the classical neural network required about 4.7 times more time and achieved lower accuracy (95.37%) compared to the QNN.
In <cit.>, a QNN was introduced to learn directly from raw images to train a normality model. It was demonstrated that a quantum-classical hybrid solution, executed on the QPU emulator of Rigetti's Forest SDK, can outperform its classical counterpart, even when they have the same number of learnable parameters. Other works concerning AD not directly related to finance include <cit.>.
A hybrid system integrating quantum and classical machine learning algorithms for the detection of phishing attacks within financial transaction networks based on the Ethereum blockchain is proposed by <cit.>. Data is accessed through the Etherscan block explorer, a tool for the open-source public blockchain platform Ethereum. Phishing account labels are derived from public reports on phishing activities, resulting in a dataset of 3 million nodes. Among these, 1165 nodes (0.039%) are identified as phishing, creating a high class-imbalance scenario in the classification task. QNNs and QSVMs are employed for this purpose, with extensive testing of QNNs involving various parametrization schemes and QSVMs implemented on both annealers and gate-based devices.
Optimal configurations for the models are determined through simulators, and the study conducts exhaustive experimentation using these optimized models on IBM's 5- and 27-qubit chips, as well as a D-Wave annealer with 5617 qubits. Surprisingly, the results do not indicate a performance improvement with an increased number of qubits. In the optimization of QNNs, the classical optimizer chosen is the gradient-free algorithm known as constrained optimization by linear approximation. The study reveals that stacking and bagging, techniques that capitalize on the complementary strengths of quantum and classical models, lead to improved results.
The findings highlight that gate-based QSVMs consistently yield lower false positives, resulting in higher precision compared to other classical and quantum models. This characteristic is particularly valuable in the context of AD problems.
In <cit.>, an application of a QSVM algorithm, utilizing the IBM Safer Payments and IBM Quantum Computers via the Qiskit software stack and real card payment data, for a classification problem in the financial payment industry was presented. A novel method for searching for the best features was explored using the QSVM’s feature map characteristics. The results were compared with classical solutions using fraud-specific key performance indicators: Accuracy, Recall, and False Positive Rate, extracted from analyses based on human expertise (rule decisions), and classical machine learning algorithms. The QSVM provided a complementary exploration of the feature space, leading to improved accuracy of the mixed quantum-classical method for FD, despite the use of a drastically reduced data set to fit the current state of Quantum Hardware.
The paper <cit.> proposed a detection system implemented with an SVM supplemented with D-Wave QA. Twelve machine learning methods were further examined to assess their detection performance, and the QSVM was contrasted with them on two datasets: a highly imbalanced bank loan dataset <cit.> (time series) and a moderately imbalanced Israeli credit card transactions dataset <cit.> (non-time series). With the former dataset, the QSVM was found to perform better than the others in terms of speed and accuracy; however, with the latter dataset, its detection accuracy was comparable to that of the others. It was demonstrated for both datasets that feature selection greatly increased the detection speed while only slightly increasing the accuracy.
A comparison of four QML models on Qiskit QPU emulator was done in <cit.>, and QSVM performed the best, with F1 scores of 0.98 for both fraud and non-fraud classes. Promising outcomes were also shown by other quantum models.
In <cit.>, a framework was established for QML fairness verification. Adopting the fairness notion which asserts that any two similar individuals must be treated similarly to ensure unbiased treatment, the study explored how quantum noise could potentially enhance fairness. An algorithm was formulated based on Tensor Networks and implemented using Google's TensorFlow Quantum to ascertain whether a (noisy) QML model adhered to fairness principles. Experimental results, including income prediction and credit scoring based on real-world data, validated the utility and effectiveness of the algorithm, particularly for a class of random (noisy) quantum decision models characterized by 27 qubits (resulting in a 2^27-dimensional state space). Subsequently, in <cit.>, the authors extended their work by defining a formal framework for detecting violations of differential privacy.
The work <cit.> involved a hybrid, quantum multiple kernel learning (QMKL) approach aimed at enhancing classification quality compared to a single kernel method. Robustness testing of QMKL was conducted across various financially relevant datasets utilizing both fidelity and projected quantum kernel techniques. Application of the QMKL encompassed multiple financially related datasets, including HSBC Digital Payment data. Both fidelity quantum kernel <cit.> and the more recent projected quantum kernel <cit.> techniques underwent testing in simulation and practical demonstration on quantum hardware ibm_auckland. Hardware implementation was optimized using an error mitigation pipeline comprising randomized compiling to mitigate coherent errors and pulse-efficient transpilation to reduce temporal overhead for cross-resonance gates during two-qubit unitary rotations. This pulse transpilation strategy facilitated scaling of the feature space up to 20 qubits on hardware.
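The fidelity quantum kernel at the heart of these QSVM and QMKL pipelines can be illustrated with a simple angle-encoding feature map. For clarity, the sketch below uses a non-entangling product-state map, which is classically easy to simulate; entangling feature maps are what make such kernels genuinely quantum. The toy labels are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Angle-encoding feature map: |phi(x)> = Ry(x_1)|0> (x) ... (x) Ry(x_d)|0>."""
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return state

def fidelity_kernel(XA, XB):
    """K(x, x') = |<phi(x)|phi(x')>|^2, evaluated by statevector overlap."""
    SA = np.array([feature_state(x) for x in XA])
    SB = np.array([feature_state(x) for x in XB])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, np.pi, size=(60, 4))
y = (np.sin(X).sum(axis=1) > 2.4).astype(int)         # toy binary labels
svm = SVC(kernel="precomputed").fit(fidelity_kernel(X, X), y)
print(svm.score(fidelity_kernel(X, X), y))
```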
In <cit.>, the results from <cit.> were reassessed and utilized as benchmarks, on a Qiskit emulator, to assess the efficacy of two linear time complexity methods based on data size: randomized measurements for quantum kernel measurement <cit.> and an ensemble method termed variable subsampling <cit.>. The dataset from <cit.> was employed for training purposes. It was revealed that while attainable improvements in average precision and F1 score over the classical kernel were observable, they were not deemed very significant. Models utilizing variable subsampling with the inversion test demonstrated stability, whereas those employing the randomized measurement method exhibited high variance. Variable subsampling notably manifested considerable enhancements in training and testing times, suggesting potential performance elevation opportunities through alternate hyperparameters.
In <cit.>, a QML method for fallen-angels forecasting with a quantum-enhanced classifier based on the QBoost algorithm <cit.> was proposed. This solution was implemented on a neutral atom QPU with up to 60 qubits using a real-life dataset. The proposed classifier, trained on the QPU, achieved competitive performance with 27.9% precision compared to the state-of-the-art Random Forest benchmark, which achieved 28% precision for the same recall of approximately 83%. However, the proposed approach outperformed its classical counterpart in terms of interpretability, employing only 50 learners compared to 1200 for the Random Forest, while maintaining comparable runtimes.
In <cit.>, an approach for detecting financial fraud using quantum GNNs was proposed. A benchmark of classical and quantum GNNs on a real-world financial fraud detection dataset showed that the latter outperformed the former, achieving an AUC of 0.85.
§ CONCLUSIONS
We have provided a comprehensive overview of the applications of QML in finance. Through an exploration of various use cases including portfolio optimization, market prediction, pricing, and risk management, the potential of QML to improve financial analysis and decision-making has been highlighted. By examining the synergy between quantum computing and machine learning, insights into the future of quantitative finance have been elucidated. Despite the promising advancements and competitive performance demonstrated by QML algorithms, challenges such as scalability, hardware limitations, and algorithmic complexity remain. Addressing these challenges will be crucial for realizing the full potential of QML in finance. Overall, this review underscores the importance of continued research and development in QML for advancing quantitative finance and unlocking new opportunities in the financial industry.
Quantum technologies offer promising applications in portfolio optimization, leveraging quantum computing's potential to efficiently solve complex optimization problems. Techniques such as QA and VQEs have been explored to address portfolio optimization challenges. QA algorithms have been employed to find optimal portfolios by minimizing risk while maximizing returns, while VQE algorithms provide a quantum approach to computing the eigenvalues of portfolio matrices. These quantum approaches aim to enhance the efficiency and accuracy of portfolio optimization, potentially outperforming classical optimization methods as quantum computing hardware continues to advance. We also note that in certain cases quantitative finance inspires quantum methods; e.g., in <cit.> the Conditional Value-at-Risk concept was utilized to enhance the efficiency of general quantum optimization techniques.
§ ACKNOWLEDGEMENTS
This work was initiated when PM was partially supported, and ASH and AM were fully supported, by the Foundation for Polish Science (IRAP project, ICTQT, contract No. 2018/MAB/5, co-financed by the EU within the Smart Growth Operational Programme). PM also acknowledges support from the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT), and from NCBiR QUANTERA/2/2020 (www.quantera.eu), an ERA-Net cofund in Quantum Technologies, under the project eDICT. The work of EY and TA was carried out as part of the IFZ FinTech program with financial support from various industry partners and the Lucerne University of Applied Sciences and Arts.
|
http://arxiv.org/abs/2405.08886v1 | 20240514180519 | The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks | ["Ziquan Liu", "Yufei Cui", "Yan Yan", "Yi Xu", "Xiangyang Ji", "Xue Liu", "Antoni B. Chan"] | cs.LG | ["cs.LG", "stat.ML"] |
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks

Ziquan Liu (Queen Mary University of London), Yufei Cui (McGill University, Mila), Yan Yan (Washington State University), Yi Xu (Dalian University of Technology), Xiangyang Ji (Tsinghua University), Xue Liu (McGill University, Mila), Antoni B. Chan (City University of Hong Kong)

Correspondence: Ziquan Liu (ziquan.liu@qmul.ac.uk)
In safety-critical applications such as medical imaging and autonomous driving, where decisions have profound implications for patient health and road safety, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making.
With extensive research focused on enhancing adversarial robustness through various forms of adversarial training (AT), a notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models. To address this gap, this study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks within the adversarial defense community. It is first unveiled that existing CP methods do not produce informative prediction sets under the commonly used l_∞-norm bounded attack if the model is not adversarially trained, which underpins the importance of adversarial training for CP. Our paper next demonstrates that the prediction set size (PSS) of CP using adversarially trained models with AT variants is often worse than using standard AT, inspiring us to research into CP-efficient AT for improved PSS. We propose to optimize a Beta-weighting loss with an entropy minimization regularizer during AT to improve CP-efficiency, where the Beta-weighting loss is shown to be an upper bound of PSS at the population level by our theoretical analysis. Moreover, our empirical study on four image classification datasets across three popular AT baselines validates the effectiveness of the proposed Uncertainty-Reducing AT (AT-UR).
§ INTRODUCTION
The research into adversarial defense has been focused on improving adversarial training with various strategies, such as logit-level supervision <cit.> and loss re-weighting <cit.>. However, the predictive uncertainty of an adversarially trained model is a crucial dimension of the model in safety-critical applications such as healthcare <cit.>, and is not sufficiently understood. Existing works focus on calibration uncertainty <cit.>, without investigating practical uncertainty quantification of a model, e.g., prediction sets in image classification <cit.>.
On the other hand, the research into conformal prediction (CP) has been extended to non-i.i.d. (identically independently distributed) settings, including distribution shifts <cit.> and toy adversarial noise <cit.>. However, there is little research work on the performance of CP under standard adversarial attacks in the adversarial defense community, such as PGD-based attacks <cit.> with l_∞-norm bounded perturbations. For example, <cit.> and <cit.> only consider l_2-norm bounded adversarial perturbations with a small attack budget, e.g., ϵ=0.125 for the CIFAR dataset <cit.>. In contrast, the common l_2-norm bounded attack budget in the adversarial defense community reaches ϵ=0.5 on CIFAR <cit.>. In other words, existing research on adversarially robust conformal prediction is not practical enough to be used under standard adversarial attacks.
In this context, our paper is among the first research papers to explore the uncertainty of deep learning models within the framework of CP in the presence of a standard adversary. We first present an empirical result that shows the failure of three popular CP methods on non-robust models under standard adversarial attacks, indicating the necessity of using adversarial training (AT) during the training stage. Next, we show the CP performance of three popular AT methods, finding that advanced AT methods like TRADES <cit.> and MART <cit.> substantially increase the PSS in CP even though they improve the Top-1 robust accuracy. This key observation inspires us to develop uncertainty-reducing AT (AT-UR) to learn an adversarially robust model with improved CP-efficiency <cit.>, meaning that CP uses a smaller PSS to satisfy the coverage. The proposed AT-UR consists of two training techniques, Beta weighting and entropy minimization, based on our observation about the two major factors that affect PSS: True Class Probability Ranking (TCPR) and prediction entropy, both defined in Sec. <ref>. Our theoretical analysis of the Beta-weighting loss reveals that the proposed weighted loss is an upper bound for the PSS at the population level. The proposed AT-UR is demonstrated to be effective at reducing the PSS of models on multiple image classification datasets. In summary, the major contributions of this paper are as follows.
* We test several CP methods under commonly used adversarial attacks in the adversarial defense community. It turns out that for models not adversarially trained, CP cannot generate informative prediction sets. Thus, adversarial training is necessary for CP to work under adversarial attacks.
* We test the performance of adversarially trained models with CP and demonstrate that improved AT often learns a more uncertain model and leads to less efficient CP with increased PSS.
* We propose uncertainty-reducing AT (AT-UR) to learn a CP-efficient and adversarially robust model by minimizing the entropy of predictive distributions and a weighted loss where the weight is a Beta density function of TCPR.
* Our main theorem shows that at the population-level, the Beta-weighting loss is an upper bound for the targeted PSS, so minimizing the weighted loss leads to reduced PSS in theory. This theoretical result corroborates our hypothesis that optimizing the promising samples with high weights leads to reduced PSS.
* Our empirical study demonstrates that the proposed AT-UR learns adversarially robust models with substantially improved CP-efficiency on four image classification datasets across three AT methods, validating our major theoretical result.
The paper is structured as follows. Section 2 discusses related works and Section 3 introduces mathematical notations and two key concepts in this paper. Section 4 shows the pitfalls of three CP methods under standard attacks when the model is not robustly trained and the low CP-efficiency of two improved AT methods and motivates us to develop the AT-UR introduced in Section 5. Our major empirical results are shown in Section 6 and we conclude the paper in Section 7. Our code is available at <https://github.com/ziquanliu/ICML2024-AT-UR>.
§ RELATED WORKS
Adversarial Robustness. The most effective approach to defending against adversarial attacks is adversarial training (AT) <cit.>. There is a sequence of works following the vanilla version of AT based on projected gradient descent (PGD), including regularization <cit.>, logit-level supervision <cit.> and loss re-weighting <cit.>. Existing methods on regularization focus on improving Top-1 robust accuracy by training the model with certain properties like linearization <cit.> and large margins <cit.>. In contrast, our work focuses on the PSS, i.e., the efficiency of CP, in adversarially trained models by regularizing the model to have low prediction entropy. The entropy minimization regularization also entails logit-level supervision as in <cit.>. In comparison, our proposed approach, AT-EM, enhances CP efficiency, whereas TRADES (Zhang et al., 2019) impedes CP-efficiency. The most related work is <cit.> which also studies CP under adversarial attacks. However, there are two fundamental differences: 1) <cit.> only considers a small attack budget under l_2-norm bounded attacks, while our work investigates CP under common adversarial attacks in adversarial defense literature with l_∞-norm bounded attacks; 2) Our paper shows that AT is essential for CP to work under strong adversarial attacks and proposes novel AT methods to learn a CP-efficient and adversarially-robust model, while <cit.> only considers the post-training stage. Our experiment validates that <cit.> fails when there are strong adversarial attacks (Fig. <ref>).
Uncertainty Quantification. Uncertainty quantification aims to provide an uncertainty measure for a machine learning system's decisions. Within this domain, Bayesian methods stand out as a principled approach, treating model parameters as random variables with distinct probability distributions. This is exemplified in Bayesian Neural Networks (BNNs), which place priors on network weights and biases, updating these with posterior distributions as data is observed <cit.>. However, the large scale of modern neural networks introduces challenges for Bayesian methods, making prior and posterior selection, and approximate inference daunting tasks <cit.>. This can sometimes compromise the optimal uncertainty quantification in BNNs. In contrast, the frequentist approach offers a more direct route to uncertainty estimation. It views model parameters as fixed yet unknown, deriving uncertainty through methods like conformal prediction <cit.>. While Bayesian methods integrate prior beliefs with data, their computational demands in large networks can be overwhelming, positioning the straightforward frequentist methods as a viable alternative for efficient uncertainty quantification. Thus, our paper investigates the uncertainty of adversarially trained models via CP. Note that our work is fundamentally different from existing research on uncertainty calibration for AT <cit.>, as our focus is to produce a valid prediction set while uncertainty calibration aims to align accuracy and uncertainty. Finally, <cit.> proposes to train a model with uniform conformity scores on a calibration set in standard training, while our work proposes CP-aware adversarial training to reduce PSS.
§ PRELIMINARY
Before diving into the details of our analysis and the proposed method, we first introduce our mathematical notations, adversarial training and conformal prediction.
Notations. Denote a training set with m samples by 𝒟_tr = {(x_i, y_i)}_i=1^m.
Suppose each data sample (x_i, y_i) ∈𝒳×𝒴 is drawn from an underlying distribution 𝒫 defined on the space 𝒳×𝒴, where x_i and y_i are the feature and label, respectively.
Particularly, we consider the classification problem and assume that there are K classes, i.e., 𝒴 = {1,..., K} (we denote [K] = {1, ..., K} for simplicity).
Let f_θ : 𝒳→Δ_p^K denote a predictive model from a hypothesis class ℱ that generates a K-dimensional probability simplex: Δ_p^K = { v ∈ [0, 1]^K : ∑_k=1^K v_k = 1 }. θ is the model parameter we optimize during training. A loss function ℓ : Δ_p^K ×𝒴→ℝ is used to measure the difference between the prediction made by f_θ(x) and the ground-truth label y.
To measure the performance of f_θ at the population level over 𝒫, the true risk is typically defined as R(f_θ) = 𝔼_(x, y) ∼𝒫 [ f_θ(x) ≠ y ].
Unfortunately, R(f_θ) cannot be realized in practice, since the underlying 𝒫 is unreachable.
Instead, the empirical risk R̂(f_θ) = 1/m∑_i=1^m [ f_θ(x_i) ≠ y_i ] is usually used to estimate R(f_θ), where [·] is the indicator function.
The estimation error of R̂(f_θ) with respect to R(f_θ) is usually referred to as the generalization error bound and can be bounded at the standard rate O(1/√(m)).
To enable the minimization of the empirical risk, a loss function ℓ is used as the surrogate of [·], leading to the classical learning paradigm of empirical risk minimization (ERM): min_f_θ∈ℱL̂(f_θ) = 1/m∑_i=1^m ℓ(f_θ(x_i), y_i). In this work, we use the standard cross-entropy loss as the loss function, where j indexes the j-th element of a vector,
ℓ(f_θ(x_i), y_i)=-∑_j=1^Ky_ijlog(f_θ(x_i)_j).
Adversarial training. Write the loss for sample (x_i, y_i) in adversarial training as ℓ(f_θ(x̃_i), y_i), where x̃_i = x_i + δ_i and δ_i is generated by an adversarial attack, e.g., the PGD attack <cit.>. Vanilla adversarial training minimizes the loss with uniform weights over a mini-batch of B samples, i.e.,
∇ f_θ = ∇1/B∑_i=1^Bℓ(f_θ(x̃_i), y_i),
where ∇ f_θ is the gradient with respect to θ in this mini-batch optimization step.
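A minimal PyTorch sketch of this procedure is given below: an l_∞-bounded PGD inner loop generates x̃_i, and one uniform-weight mini-batch update follows, assuming `model` is a classifier mapping images in [0,1] to logits. The attack hyperparameters match the common setting quoted later (ϵ=8/255).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """l_inf-bounded PGD: ascend the cross-entropy within an eps-ball of x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back to the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def at_step(model, optimizer, x, y):
    """One vanilla AT mini-batch update with uniform sample weights."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```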
Conformal prediction (CP). CP is a distribution-free uncertainty quantification method and can be used in a wide range of tasks including both regression and classification <cit.>. This paper focuses on the image classification task, where CP outputs a prediction set instead of the Top-1 predicted class as in a standard image classification model, and satisfies a coverage guarantee. Mathematically, CP maps an input sample x to a prediction set 𝒞(x), which is a subset of [K]={1,⋯,K}, with the following coverage guarantee,
P(y∈𝒞(x))≥ 1-α,
where 1-α is a pre-defined confidence level such as 90%, meaning that the prediction set will contain the ground-truth label with 90% confidence for future data. This paper mainly considers split conformal prediction, an efficient CP approach applicable to any pre-trained black-box classifier <cit.>, as it does not need to re-train the classifier with different train-calibration-test splits.
The prediction set of CP is produced by a calibrate-then-test procedure. In the context of a classification task, we define a prediction set function 𝒮(x,u;π,τ), where u is a random variable sampled from Uniform[0,1] independently of all other variables, π is shorthand for the predictive distribution f_θ(x), and τ is a threshold parameter that controls the size of the prediction set. An increase in the value of τ leads to an expansion of the prediction set 𝒮(x,u;π,τ). We give one example <cit.> of this function in Appendix <ref>. The calibration step computes the smallest threshold parameter τ̂_cal that achieves an empirical coverage of (1-α)(n_c+1)/n_c on the calibration set with n_c samples. For a test sample x^*, the prediction set is the output of the function 𝒮(x^*,u;π^*,τ̂_cal).
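A minimal numpy sketch of this calibrate-then-test procedure, using a deterministic simplification of the APS score (the randomization variable u is dropped for clarity, so the sets are slightly conservative), is given below. The stand-in Dirichlet classifier outputs are illustrative.

```python
import numpy as np

def aps_scores(probs, labels):
    """Deterministic APS conformity score: the probability mass of all
    classes ranked at or above the true class."""
    order = np.argsort(-probs, axis=1)
    cum = np.cumsum(np.take_along_axis(probs, order, axis=1), axis=1)
    ranks = np.argmax(order == labels[:, None], axis=1)
    return cum[np.arange(len(labels)), ranks]

def calibrate_tau(cal_probs, cal_labels, alpha=0.1):
    """Smallest threshold giving (1-alpha)(n+1)/n empirical coverage."""
    n = len(cal_labels)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(aps_scores(cal_probs, cal_labels), level)

def predict_set(test_prob, tau):
    """Smallest head of the sorted class list whose mass reaches tau."""
    order = np.argsort(-test_prob)
    cum = np.cumsum(test_prob[order])
    k = min(int(np.searchsorted(cum, tau)) + 1, len(order))
    return order[:k]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)   # stand-in classifier outputs
cal_labels = rng.integers(0, 10, size=500)
tau = calibrate_tau(cal_probs, cal_labels)
print(predict_set(rng.dirichlet(np.ones(10)), tau))
```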
§ NECESSITATE AT FOR ROBUST AND EFFICIENT COVERAGE.
The pitfalls of CP under strong adversarial attacks. We test the performance of three conformal prediction methods, i.e., APS (Adaptive Prediction Sets) <cit.>, RAPS (Regularized Adaptive Prediction Sets) <cit.>, and RSCP (Randomly Smoothed Conformal Prediction) <cit.>, under standard adversarial attacks. Specifically, for APS and RAPS, we use PGD100 adversarial attacks with an l_∞-norm bound and attack budget ϵ=8/255=0.0314. For RSCP, we adopt PGD20 with an l_2-norm bound, in accordance with the original paper's settings, but with a larger attack budget of ϵ=0.5, as in RobustBench <cit.>. If not specified otherwise, we use the PGD100 adversarial attack with l_∞ norm and ϵ=8/255=0.0314 to generate adversarial examples throughout this paper.
Fig. <ref> shows the coverage and PSS of three CP methods on CIFAR10 and CIFAR100 when models are trained in a standard way, i.e., without adversarial training. Although all CP methods have good coverages, their prediction set sizes are close to the number of classes in both datasets as the classifier is completely broken under strong adversarial attacks. In contrast, when the same models are applied to standard images, the PSS are 1.03 and 2.39 for CIFAR10/CIFAR100. This result reveals that adversarial training is indispensable if one wants to use CP to get reasonable uncertainty quantification for their model in an adversarial environment. Therefore, in next section, we test AT and two improved AT methods to investigate the performance of CP for adversarially trained models.
Improved AT Compromises Conformal Prediction's Efficiency. We test three popular adversarial training methods, i.e., AT <cit.>, TRADES <cit.> and MART <cit.>, using APS as the conformal prediction method under a commonly used adversarial attack, AutoAttack with l_∞-norm and ϵ=8/255=0.0314. See more detailed experimental settings in Sec. <ref>. Tab. <ref> shows their coverage and PSS, as well as clean and robust accuracy on four datasets. The results demonstrate that while the two enhanced adversarial training methods, TRADES and MART, effectively improve the Top-1 accuracy in the presence of adversarial attacks, they often lead to an increase in the size of the prediction set, consequently yielding a less CP-efficient model. In other words, the improvement in Top-1 accuracy does not necessarily lead to less uncertainty. Therefore, to design a new AT method that learns an adversarially robust model with efficient CP, a deep investigation into the PSS is necessary. In the following section, we identify two major factors that play an important role in controlling the PSS through our empirical study.
§ UNCERTAINTY-REDUCING ADVERSARIAL TRAINING
This section investigates two factors highly correlated with PSS and introduces our uncertainty-reducing adversarial training method.
§.§ Entropy Minimization for CP-Efficiency
The PSS is closely related to the entropy of prediction distribution, as both quantities reflect the prediction uncertainty of a model. A more uniform categorical distribution has higher uncertainty, which is reflected in its higher entropy. Fig. <ref> visualizes the kernel density estimation (KDE) <cit.> of entropy values calculated with adversarial test samples on three datasets. It is evident that TRADES and MART learn models with predictive distributions that have higher entropy values than AT, thus increasing the PSS comparatively.
To decrease the PSS of AT, we add an entropy minimization term to the loss function,
ℓ_EM(f_θ(x_i), y_i)=-∑_j^Ky_ijlog(f_θ(x_i)_j)+λ_EM H(f_θ(x_i)),
where the regularization is the entropy function H(f_θ(x_i))=-∑_j^Kf_θ(x_i)_jlog(f_θ(x_i)_j). We set λ_EM=0.3 in all of our experiments based on a hyperparameter search experiment on CIFAR100, where λ_EM∈{0.1,0.3,1.0,3.0}. The AT scheme with entropy minimization (EM) is denoted as AT-EM. This entropy term is the same as the entropy minimization in semi-supervised learning <cit.>. However, note that our work is the first to use entropy minimization in adversarial training for improving CP-efficiency. Fig. <ref> also shows the KDE of entropy values on adversarial test sets using AT-EM. The reduction in predictive entropy effectively leads to a substantial decrease in the PSS of AT-EM.
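For concreteness, a minimal PyTorch sketch of this objective is given below; the function name and interface are illustrative rather than the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def at_em_loss(logits: torch.Tensor, targets: torch.Tensor,
               lambda_em: float = 0.3) -> torch.Tensor:
    """Cross-entropy on (adversarial) examples plus an entropy penalty."""
    ce = F.cross_entropy(logits, targets)
    log_probs = F.log_softmax(logits, dim=1)                    # numerically stable
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()  # H(f_theta(x))
    return ce + lambda_em * entropy
```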
The second factor that affects the PSS is the distribution of the True Class Probability Ranking (TCPR) on the test dataset. The TCPR is defined as the ranking of a sample x's ground-truth class probability within the whole predictive distribution. Formally, we sort π in descending order into π̂,
π̂= {π_(1),⋯,π_(K)},
where π_(j)≥π_(j+1),∀ j=1,⋯,K-1, and (j) is the sorted index. TCPR is the index j in π̂ corresponding to the ground-truth label y, i.e., Sort(y)=j.
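As an illustration, the TCPR can be computed from the logits in a vectorized way; the sketch below uses 1-based ranks, whereas, as noted later, the implementation starts the index from 0:

```python
import torch

def tcpr(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """True Class Probability Ranking: the position (1 = top) of the
    ground-truth class in the descending sort of the prediction."""
    order = logits.argsort(dim=1, descending=True)  # class indices, best first
    ranks = order.argsort(dim=1)                    # 0-based rank of each class
    return ranks.gather(1, targets.unsqueeze(1)).squeeze(1) + 1
```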
The TCPR matters to the PSS because we observe that a model with higher robust accuracy does not necessarily have a smaller PSS, as shown in Tab. <ref>. This discovery indicates that improving Top-1 accuracy, i.e., the percentage of samples with TCPR=1, is not enough to learn a CP-efficient model. In particular, the model capacity might not be strong enough to fit all the adversarial training data or achieve 100% adversarial training accuracy, as a result of a strong adversary and high task complexity, e.g., a large number of classes. For instance, on CIFAR100, the robust accuracy on training data of a pre-trained ResNet50 is only around 45% after 60 epochs of fine-tuning.
Motivated by this observation, we propose to use a Beta distribution density function (Fig. <ref>) to weight the loss samples so that the TCPR distribution shifts towards the lower TCPR region. This design embodies our intuition that the training should focus on samples with promising TCPR's, whose TCPR's are neither 1 nor too large, because TCPR=1 means the sample is correctly classified and a large TCPR means the sample is an outlier and probably hopeless to learn. Those samples with promising TCPR's are important to control PSS as they are the majority of the dataset and thus largely affect the averaged PSS, see Fig. <ref> for the percentage of promising samples throughout AT training on CIFAR100.
With the previous intuition, we propose an importance weighting scheme based on the Beta distribution density function of TCPR to learn a CP-efficient model. Let the TCPR of sample x̃_i be r_i∈[K] and the normalized TCPR be r̂_i∈(0,1]. Note that in our implementation we use the index starting from 0 instead of 1, so r̂_i∈[0,1) in practice. We use the Beta distribution density function, e.g., Fig. <ref>, to give an importance weight to sample x̃_i. We use the Beta distribution density up-shifted by 1
p̃_Β(z; a, b) = 1 + p_Β(z; a, b) = 1 + [Γ(a+b) / (Γ(a) Γ(b))] · z^a-1 · (1-z)^b-1,
where Γ(a) is the Gamma function. We use the add-1 Beta function p̃_Beta for stable optimization and better performance based on our pilot study. To enforce the model to focus on samples with promising TCPR's, we use the Beta distribution with a=1.1 and b∈{3.0,4.0,5.0}. When a=1.1 and b=5.0, we have the Beta weighting function shown in Fig. <ref>. The objective function of Beta-weighting AT is
ℓ_Beta(f_θ(x_i), y_i)=-p̃_Β(r̂_i; a, b)·∑_j^Ky_ijlog(f_θ(x_i)_j)
We name this Beta-distribution-based importance weighting scheme in AT as AT-Beta.
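A hedged PyTorch sketch of this weighting scheme is shown below; it uses torch.distributions.Beta for the density and 0-based ranks as in the implementation note above, but the function itself is our own illustration:

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def at_beta_loss(logits: torch.Tensor, targets: torch.Tensor,
                 a: float = 1.1, b: float = 5.0) -> torch.Tensor:
    """Per-sample cross-entropy weighted by the add-1 Beta density of the
    normalized TCPR."""
    K = logits.size(1)
    ranks = logits.argsort(dim=1, descending=True).argsort(dim=1)
    r_hat = ranks.gather(1, targets.unsqueeze(1)).squeeze(1).float() / K
    beta = Beta(torch.tensor(a), torch.tensor(b))
    # Add-1 (up-shifted) density; the clamp keeps log_prob finite at r_hat = 0.
    w = 1.0 + beta.log_prob(r_hat.clamp(min=1e-6)).exp()
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (w.detach() * ce).mean()
```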
In summary, the proposed AT-UR comprises two methods, AT-Beta and AT-EM, together with their combination, i.e.,
ℓ_Beta-EM(f_θ(x_i), y_i) = -p̃_Β(r̂_i; a, b)·∑_j^K y_ij log(f_θ(x_i)_j) + λ_EM H(f_θ(x_i)),
denoted as AT-Beta-EM. We test the three variants of AT-UR in our experiments and observe that different image classification tasks favor different variants.
§.§ Theoretical Analysis on Beta Weighting
The previous subsection introduced the intuition behind the proposed AT-UR. This section presents a theoretical analysis of Beta weighting, establishing a connection between Beta weighting and the PSS. We drop the subscript θ from f_θ to lighten the notation. The full proofs are deferred to Appendix <ref>.
Importance Weighting (IW) Algorithm.
IW assigns an importance weight ω(x, y) to each training sample (x, y) such that ω(x, y) is directly determined by the TCPR r̂.
Analogous to the empirical risk R(f), we define the IW empirical risk of f with weights ω(x, y) as
R_ω(f) = (1/m) ∑_i=1^m ω(x_i, y_i) · ℓ(f(x_i), y_i).
It is worth noting that setting ω(x_i, y_i) = 1 for all samples reduces R_ω(f) to R(f).
§.§ Beta Weighting for CP-Efficiency
We design a group-wise IW approach that groups data into K disjoint subsets according to their TCPR's, and assigns the same weight to a group of data.
For a sample (x, y), the importance weight is ω(x, y) = p̃_Β(r̂(x, y); a, b). The following theorem proves that the expectation of ℓ_Beta is an upper bound on the expected PSS, which indicates that optimizing ℓ_Beta is theoretically beneficial for reducing the PSS and thus improving CP-efficiency.
(Learning bound for the expected size of CP prediction sets)
Let L_Β(f) := ∑_k=1^K σ_k · 𝔼[ ℓ(f(X), Y) | r_f(X,Y) = k ], where σ_k ∼ p_Β(k/(K+1); a, b) with a=1.1, b=5.
We have the following inequality
𝔼_X[ |𝒮_f(X)| ] ≤ L_Β(f),
where |𝒮_f(X)| is the cardinality of the prediction set 𝒮_f(X) of a classifier f with input X, and r_f(X,Y) is the TCPR of (X,Y) under f.
Remark. This theorem corroborates our intuition from the previous subsection that assigning high importance to samples with moderate TCPR may improve CP-efficiency. To the best of our knowledge, this is one of the first results to build a connection between importance weighting and the PSS in conformal prediction. The next section presents our empirical results on various datasets, which further confirm the effectiveness of the proposed AT-UR. See Appendix <ref> for the full proof. Note that this bound also holds for other (a,b) pairs with a different constant.
(Generalization error bound of IW empirical risk, Theorem 1 in <cit.>)
Let M = sup_(x,y)∈𝒳×𝒴 ω(x, y) denote the infinity norm of ω on the domain.
For a given f ∈ ℱ and δ > 0, with probability at least 1-δ, the following bound holds:
R(f) - R_ω(f) ≤ 2M log(1/δ) / (3m) + √( 2 d_2(𝒫 || 𝒫/ω) log(1/δ) / m ),
where d_2(𝒫 || 𝒬) = ∫_x 𝒫(x) · 𝒫(x)/𝒬(x) dx is the base-2 exponential of the Rényi divergence of order 2 between distributions 𝒫 and 𝒬, and m is the number of training samples.
(Beta weighting preserves the generalization error bound.)
Suppose ℙ_(x,y)∼𝒫[ r̂(x, y) = k ] = k^-c/∑_k'=1^K (k')^-c is a polynomially decaying function with c = max{ K^-α, (b ln(a) + 1)/ln(K) + 2 - α } for α ≥ 0.
Then Beta weighting improves the generalization error bound compared with ERM.
Remark. Theorem <ref> shows that the Beta weighting approach guarantees an improved generalization error bound, which helps ensure desirable predictive accuracy. Meanwhile, the Beta-based IW strategy focuses on penalizing data samples whose PSS is moderately large (e.g., 10-20 labels included out of 100+ class labels; see experiments).
§ EXPERIMENT
We first give the details of our experimental setting and then present the main empirical result.
§.§ Experimental Setting
Model. We use an adversarially pre-trained ResNet50 <cit.> with l_∞ norm and an attack budget ϵ_pt=4/255 in all experiments of this paper. The reason is that, besides testing on CIFAR10/100, we also test on more challenging datasets such as Caltech256 and CUB200, on which an adversarially pre-trained model is shown to be much more robust than randomly initialized weights <cit.>.
Dataset. Four datasets are used to evaluate our method, i.e., CIFAR10, CIFAR100 <cit.>, Caltech-256 <cit.> and Caltech-UCSD Birds-200-2011 (CUB200) <cit.>. CIFAR10 and CIFAR100 contain low-resolution images of 10 and 100 classes, where the training and validation sets have 50,000 and 10,000 images respectively. Caltech-256 has 30,607 high-resolution images and 257 classes, which are split into a training and a validation set using a 9:1 ratio. CUB200 also contains high-resolution bird images for fine-grained image classification, with 200 classes, 5,994 training images and 5,794 validation images.
Training and Adversarial Attack. In all adversarial training in this paper, we generate adversarial perturbations using the PGD attack with 10 steps, stepsize λ=2/255, and attack budget ϵ=8/255. The batch size is set to 128 and the number of training epochs is 60. We multiply the learning rate by 0.1 at the 30th and 50th epochs. We use the strong AutoAttack <cit.> with ϵ=8/255 in Tab. <ref>, and the PGD attack with 100 steps for all other results in this paper. The stepsize and attack budget in PGD100 are the same as in adversarial training, i.e., λ=2/255 and ϵ=8/255. See more training details in Appendix <ref>.
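For reference, a minimal sketch of the l_∞ PGD inner loop with these parameters (random start and signed gradient steps are standard choices; the paper's exact implementation may differ):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, n_steps=10):
    """Craft l_inf-bounded adversarial examples for adversarial training."""
    delta = torch.empty_like(x).uniform_(-eps, eps)       # random start
    for _ in range(n_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():                             # projected ascent step
            delta = torch.clamp(delta + step * grad.sign(), -eps, eps)
    return torch.clamp(x + delta, 0, 1).detach()
```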
Conformal Prediction Setting. We fix the training set in our experiments and randomly split the original test set into calibration and test sets with a ratio of 1:4 for conformal prediction. For each AT method, we repeat the training for three trials with three different seeds and repeat the calibration-test split five times, which produces 15 trials for our evaluation. The mean and standard deviation of coverage and PSS over the 15 trials are reported. If not specified, we use APS <cit.> as the CP method, as its performance is more stable than that of RAPS, as shown in Fig. <ref>. The target coverage is set to 90% following the existing CP literature <cit.>. We use the same adversarial attack setting as in <cit.>, i.e., both calibration and test samples are attacked by the same adversary. We discuss the limitations of this setting in the conclusion section.
Baselines. We use AT <cit.>, Fair-AT (FAT) <cit.> and TRADES <cit.> as baselines and test the performance of the two proposed uncertainty-reducing methods combined with each of them. AT and TRADES are the most popular adversarial training methods, and FAT reduces the robustness variance among classes, which could reduce the PSS, as validated by our experiments. Note that we only report the performance of CP, i.e., coverage and PSS, in the main paper, as the main target of our paper is to improve CP-efficiency.
§.§ Experimental Results
Efficacy of AT-UR in reducing PSS. The coverage and PSS of all tested methods under the AutoAttack are shown in Tab. <ref>. The proposed AT-UR methods effectively reduce the PSS when combined with the three AT baselines on four datasets, validating our intuition on the connection between the two factors, i.e., predictive entropy and TCPR, and PSS. More importantly, the result is also consistent with our finding in Theorem <ref>. There are two phenomena worth noting. First, the Beta weighting generally works better than EM when using AT and FAT, with Beta+EM potentially improving the CP-efficiency in some cases. Second, when using TRADES, EM is more promising than Beta weighting (e.g., EM is better than the other two on three out of four datasets). Thus, we recommend that for AT and FAT, using Beta or Beta-EM is the first choice if one needs to train an adversarially robust and also CP-efficient model, while for TRADES, it is more reasonable to first try EM.
Note that although the Top-1 accuracy of our method (Appendix <ref>) is decreased compared to baselines, the main target of our method is to improve CP efficiency as we use the conformal prediction instead of the Top-1 prediction. Tab. <ref> shows the normalized PSS result to mitigate the influence of different K's on the comparison. The coverage and PSS on clean images are reported in the Appendix <ref>. Our AT-UR is effective at improving the CP-efficiency on clean images as well.
Coverage-PSS curve visualization. To visualize the effect of AT-UR more comprehensively, we plot the CP curve by adjusting the threshold τ̂_cal to get different points on the curve of coverage versus PSS. Fig. <ref> shows the CP curve of AT, AT-Beta and AT-EM on three datasets. It demonstrates that AT-UR achieves a reduced PSS compared to the AT baseline, not only at 90% coverage, but also over a wide range of coverage values.
§.§ Detailed Empirical Analysis
(a) Sensitivity to hyperparameters and performance under different attack budgets. We use different Beta-weighting hyperparameters on Caltech256. The performance is stable within the range b ∈ {3.0, 4.0, 5.0}, as shown in Tab. <ref>. In addition to ϵ=8.0, we test attack budgets ϵ=4.0, 12.0, and 16.0 and report the results on Caltech256 in Tab. <ref>. The results show that, across attack budgets, our method is consistently better than the AT baseline.
(b) Comparison with Uncertainty-Aware training <cit.>. We train three models with vanilla AT, Conformal AT from <cit.>, and our AT-Beta on CIFAR100. The experiment follows the setting of Conformal Training (see Appendix <ref> for details). AT, Conformal AT, and AT-Beta achieve averaged coverage and PSS of (89.82, 33.43), (90.36, 35.32), and (89.78, 30.18), respectively, demonstrating the effectiveness of our Beta-weighting scheme over Conformal AT in the adversarial environment.
(c) Does focal loss improve CP-efficiency? We consider using a power function r̂_i^η, as in focal loss <cit.>, to generate loss weights and test the CP performance of the resulting AT-Focal. We set η=0.5 based on a hyperparameter search over {0.1,0.5,1.0,2.0}. AT-Focal forces the model to focus on hard samples, contrary to our AT-Beta, which focuses on promising samples. The averaged coverage and PSS of AT-Focal on CIFAR100 and Caltech256 are (90.50, 27.24) and (91.38, 48.35), respectively, which is far worse than the AT baselines of (90.45, 23.79) and (91.35, 43.20). This result corroborates that focusing on promising samples, rather than hard samples, is crucial for improving CP-efficiency.
(d) What is the difference between label smoothing and AT-EM? The formulation of AT-EM resembles that of label smoothing <cit.> if we combine the log terms in (<ref>). However, label smoothing and AT-EM push the model in two different directions: the former increases the prediction entropy (by smoothing the label probabilities toward uniform), while the latter decreases it. We validate this argument on Caltech256 and find that label smoothing makes the CP-efficiency much worse than the AT baseline, with an averaged coverage and PSS of (90.22, 46.39), compared to (91.35, 43.20) for AT.
(e) Is AT-UR robust to uncertainty-aware adversarial attacks? The PGD attack and AutoAttack are both designed to reduce Top-1 accuracy rather than CP-efficiency. We therefore design an uncertainty-aware adversarial attack that maximizes the entropy of the predictive distribution and use it to test our AT-UR. Tab. <ref> shows the results of our method combined with the three AT methods on CIFAR100 under this uncertainty-aware attack at inference time (using the same trained models as in the Top-1 attack evaluation). The attacker is PGD100 and all other settings are the same as in Sec. <ref>. Our method remains competitive under this uncertainty-aware adversarial attack.
§ CONCLUSION
This paper first studies the pitfalls of CP under adversarial attacks and thus underscores the importance of AT when using CP in an adversarial environment. Then we unveil the compromised CP-efficiency of popular AT methods and propose to design uncertainty-reducing AT for CP-efficiency based on our empirical observation on two factors affecting the PSS. Our theoretical results establish the connection between PSS and Beta weighting. Our experiment validates the effectiveness of the proposed AT-UR on four datasets when combined with three AT baselines. A common limitation shared by this study and <cit.> is the assumption that the adversarial attack is known, enabling the calibration set to be targeted by the same adversary as the test set. In future research, we will alleviate this constraint by exploring CP within an adversary-agnostic context. We will also explore the robustness of conformal prediction in large language models.
§ IMPACT STATEMENT
This paper, by investigating and improving CP-efficiency for deep learning models under adversarial attacks, makes an important contribution to the reliability and safety of artificial intelligence (AI) systems. The theoretical and empirical results in this paper have significant societal implications, particularly in high-stakes applications such as self-driving cars and medical diagnosis, promoting secure and reliable AI-driven advancements.
§ ACKNOWLEDGEMENT
This work was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 11215820).
§ ADAPTIVE PREDICTION SETS <CIT.>
We introduce one example of a prediction set function, namely the APS conformal prediction used in our experiments. Assume we have the predictive distribution π(x)=f_θ(x) and order this probability vector in descending order, π_(1)(x) ≥ π_(2)(x) ≥ … ≥ π_(K)(x). We first define the following generalized conditional quantile function,
Q(x; π, τ) = min{ k ∈{1,…,K} : π_(1)(x) + π_(2)(x) + … + π_(k)(x) ≥τ},
which returns the smallest number of top-ranked classes whose cumulative probability reaches the generalized quantile τ∈[0,1]. The prediction set function can then be defined as
𝒮(x, u; π, τ) =
 the indices y of the Q(x; π, τ)-1 largest π_y(x), if u ≤ U(x; π, τ),
 the indices y of the Q(x; π, τ) largest π_y(x), otherwise,
where
U(x; π, τ) = 1/π_(Q(x ; π, τ))(x)[∑_k=1^Q(x ; π, τ)π_(k)(x) - τ].
It has input x, u ∈ [0,1], π, and τ and can be seen as a generalized inverse of Equation <ref>.
On the calibration set, we compute a generalized inverse quantile conformity score with the following function,
E(x,y,u;π) = min{τ∈ [0,1] : y ∈(x, u ; π, τ) },
which is the smallest quantile ensuring that the ground-truth class is contained in the prediction set 𝒮(x, u; π, τ). With the conformity scores {E_i}_i=1^n_c on the calibration set, we take the ⌈(1-α)(1+n_c)⌉-th smallest value in the score set as τ̂_cal. During inference, the prediction set for a novel test sample x^* is generated as 𝒮(x^*, u; π^*, τ̂_cal).
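A compact numpy sketch of the APS score and set construction implied by these definitions is given below (our own illustration; for a true class of rank j it uses the standard closed form E = ∑_{k=1}^{j} π_(k) − u·π_(j)):

```python
import numpy as np

def aps_score(probs, y, u):
    """Smallest tau at which class y enters the randomized APS set."""
    order = np.argsort(-probs)                  # classes, most probable first
    cum = np.cumsum(probs[order])
    j = int(np.where(order == y)[0][0])         # 0-based rank of the true class
    return cum[j] - u * probs[order[j]]

def aps_set(probs, u, tau):
    """Randomized APS prediction set S(x, u; pi, tau); may be empty."""
    order = np.argsort(-probs)
    cum = np.cumsum(probs[order])
    q = int(np.searchsorted(cum, tau))          # Q(x; pi, tau) - 1, 0-based
    if q < len(probs) and u <= (cum[q] - tau) / probs[order[q]]:
        q -= 1                                  # drop the boundary class
    return order[:q + 1]
```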
§ MORE EXPERIMENTAL DETAILS
APS Setting. We use the default setting of APS specified in the official code of <cit.>: we first use temperature scaling <cit.> to calibrate the predictive distribution and then compute the generalized inverse quantile conformity score to perform calibration and conformal prediction.
Hyperparameter and Baseline Setting. As mentioned in the main paper, we use a=1.1 and search b from the discrete set {2.0,3.0,4.0,5.0} in Beta distribution since the parameter combinations perform well in our pilot study and satisfy the goal of focusing on promising samples. The learning rate and weight decay of AT, FAT and TRADES are determined by grid search from {1e-4,3e-4,1e-3,3e-3,1e-2} and {1e-3,1e-4,1e-5} respectively. We compute the class weight for FAT using the output of a softmax function with error rate of each class as input. The temperature in the softmax function is set as 1.0. For TRADES, we follow the default setting β=6.0 for the KL divergence term <cit.>. Our AT-UR method also determines the learning rate and weight decay using the grid search with the same mentioned grid. For TRADES, we weight both the cross-entropy loss and KL divergence loss with the Beta density function based on TCPR.
CP Curve. The CP curve in Fig. <ref> is obtained by sweeping the threshold value; for instance, using numpy's <cit.> np.linspace(0.9, 1.1, 200) × τ̂_cal generates 200 different (coverage, PSS) points.
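In code, this sweep might look as follows, where eval_fn is a placeholder for whatever routine returns the empirical coverage and average PSS at a given threshold:

```python
import numpy as np

def cp_curve(tau_cal, eval_fn, n_points=200):
    """Sweep thresholds around tau_hat_cal to trace (coverage, PSS) pairs."""
    taus = np.linspace(0.9, 1.1, n_points) * tau_cal
    return [eval_fn(tau) for tau in taus]
```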
Comparison with Conformal AT <cit.>. We use the experimental setting of the original paper: a randomly initialized ResNet50 is trained with SGDM, batch size 128, learning rate 0.1, and weight decay 0.0005 for 120 epochs, where the learning rate is divided by 10 at the 100th epoch. 45,000 original training samples are used for training and the remaining 5,000 samples serve as a held-out set for computing the conformal loss. We use the same attack parameters as in our other experiments during both training and inference. In the conformal inference based on APS scores, we split the original test set with a ratio of 1:1 into a calibration and a test set and only test the final-epoch model. We run three trials for each approach and report the average coverage and PSS in the main paper. The experimental results show that the effectiveness of our AT-Beta generalizes to training randomly initialized models.
§ MORE EXPERIMENTAL RESULTS
Note that this paper uses CP as the inference method to achieve a coverage guarantee, which is orthogonal to Top-1 inference. Thus, Top-1 accuracy is not a directly relevant metric in the context of CP inference. Nevertheless, we show the Top-1 accuracy of the tested methods in Tab. <ref>. Using AT-UR generally worsens the Top-1 accuracy, especially for TRADES. However, note that TRADES-Beta-EM can improve the Top-1 robust accuracy over TRADES-Beta on CIFAR10 and over TRADES-EM on Caltech256. This result again confirms the observation that Top-1 accuracy is not necessarily correlated with CP-efficiency. When comparing the results of PGD100 and AA, the robust accuracy under AA drops while the prediction set size is reduced (CP-efficiency is improved), indicating that a stronger attack can lead to a smaller PSS. To reduce the effect of the number of classes K on the PSS, Tab. <ref> shows the PSS normalized by K under the PGD100 attack.
Fig. <ref> and Fig. <ref> show the CP curves of FAT and TRADES when combined with EM and Beta on three datasets. They demonstrate that the CP-efficiency is also improved when using FAT and TRADES, as in the experiments using AT. In most cases (5 out of 6), AT-UR (either EM or Beta) attains a lower PSS than the corresponding baseline over a large range of coverage values.
Fig. <ref> shows the percentage of samples with TCPR=1, 1<TCPR<20, and TCPR≥20 during AT on CIFAR100, demonstrating that promising samples constitute the majority during most of training, especially in the first 30 epochs.
We include the Coverage and PSS on clean images in Tab. <ref> as a reference. It shows that our AT-UR improves the CP-efficiency even on clean images across various datasets and adversarial training methods.
§ PROOF OF THEOREM
(Theorem <ref> restated: Beta weighting preserves the generalization error bound.)
Suppose ℙ_(x,y)∼𝒫[ r̂(x, y) = k ] = k^-c/∑_k'=1^K (k')^-c is a polynomially decaying function with c = max{ K^-α, (b ln(a) + 1)/ln(K) + 2 - α } for α ≥ 0.
Then Beta weighting improves the generalization error bound compared with ERM.
(of Theorem <ref>)
The key idea in proving Theorem <ref> is to show d_2(𝒫 || 𝒫/ω) ≤ d_2(𝒫 || 𝒫) = 1 (recall that d_2 is the base-2 exponential of the Rényi divergence of order 2, as in Lemma <ref>), which implies that Beta weighting gives a tighter generalization error bound than ERM.
First, we derive the following equivalent formulations:
d_2(𝒫 || 𝒫/ω)
= ∫_(x,y) 𝒫(x, y) · ω(x, y) d(x, y)
= ∫_(x,y) 𝒫(x, y) · p_Β(r̂(x, y)/K; a, b) d(x, y)
= ∫_(x,y) 𝒫(x, y) ( ∑_k=1^K 𝕀[r̂(x, y) = k] ) · p_Β(r̂(x, y)/K; a, b) d(x, y)
= ∑_k=1^K ∫_(x,y) 𝒫(x, y) · 𝕀[r̂(x, y) = k] · p_Β(k/K; a, b) d(x, y)
= ∑_k=1^K ℙ_(x,y)∼𝒫[ r̂(x, y) = k ]_(= p_k) · p_Β(k/K; a, b).
Suppose p_k = k^-c/∑_k'=1^K (k')^-c a polynomially decaying function of k for c ≥ 0.
p_k · p_Β(k/K)
= k^-c/∑_k'=1^K (k')^-c·Γ(a+b) /Γ(a) Γ(b) · ( k/K )^a-1· ( 1 - k/K )^b-1
= K^-c/ K^-c· k^-c/∑_k'=1^K (k')^-c· (a+b-1)! / (a-1)! (b-1)! · ( k/K )^a-1· ( 1 - k/K )^b-1
= K^-c/∑_k'=1^K (k')^-c· k^-c/ K^-c· (a-c+b-1)! ·∏_i=a-c+b^a+b-1 i / (a-c-1)! (b-1)! ·∏_i=a-c^a-1 i · ( k/K )^a-1· ( 1 - k/K )^b-1
= K^-c/∑_k'=1^K (k')^-c·Γ(a-c+b) /Γ(a-c) Γ(b) ·∏_i=1^c a+b-c-1+i / a-c-1+i · ( k/K )^a-c-1· ( 1 - k/K )^b-1
= K^-c/∑_k'=1^K (k')^-c_ = A·∏_i=1^c ( 1 + b / a-c-1+i )
_ = B ·Γ(a-c+b) /Γ(a-c) Γ(b) · ( k/K )^a-c-1· ( 1 - k/K )^b-1_ = p_Β(k/K; a-c, b)
where term A can be bounded as
K^-c / ∑_k=1^K k^-c ≤ K^-c (c-1) / (1 - (K+1)^-(c-1)) ≤ c K^-c, for c > 0,
and
term B can be bounded as follows
B
= ∏_i=1^c ( 1 + b/a-c-1+i )
=
exp( log( ∏_i=1^c ( 1 + b/a-c-1+i ) ))
= exp( ∑_i=1^c log( 1 + b/a-c-1+i ) )
≤exp( ∑_i=1^c b/a-c-1+i )
≤ exp( ∑_i=1^a-1b/i )
≤exp( b ( ln(a) + 1 ) ) .
Then, combining term A and B together:
K^-2· K^-c+2·exp( b ln(a) + 1 ) · c
≤
K^-2
⇔ exp( ln( K^c-2 / c ) ) ≥exp( b ln(a) + 1 )
(a)⇐ exp( ln( K^c-2+α ) ) ≥exp( b ln(a) + 1 )
⇔
c-2 + α≥b ln(a) + 1 /ln(K)
⇔
c ≥b ln(a) + 1 /ln(K) + 2 - α,
where the development (a) is due to c = max{ K^-α, b ln(a) + 1 /ln(K) + 2 - α} for α≥ 0.
As a result, we have
p_k · p_Β(k/K)
=
K^-2· p_Β(k/K; a-c; b)
⇒ ∑_k=1^K p_k · p_Β(k/K; a-c; b)
≤∑_k=1^K p_Β(k/K; a-c; b) / K^2
≤
1 .
(1)
∑_k=1^K k^-c≥∫_1^K+1 k^-c dk
=
k^-(c-1)/ -(c-1) |_k=1^K+1
=
(K+1)^-(c-1)/ -(c-1) - 1^-(c-1)/ -(c-1)
=
(K+1)^-(c-1) - 1 / - (c-1) ,
where the inequality is due to the left Riemann sum for the monotonically decreasing function k^-c with c > 0.
Γ(z) = ∫_0^∞ t^z-1 exp(-t) dt = exp(-γ_0 z)/z ∏_i=1^∞ (1+z/i)^-1 · exp(z/i),
where the second equality is Weierstrass's definition of the Gamma function and γ_0 is the Euler–Mascheroni constant.
Γ(a+b) /Γ(a) Γ(b)
=
Γ(a+b+c) /Γ(a) Γ(b+c) ·Γ(a+b) Γ(b+c)/Γ(a+b+c) Γ(b)_ = A
A
= Γ(a+b) Γ(b+c)/Γ(a+b+c) Γ(b)
= (a+b+c) exp(γ (a+b+c)) b exp(γ(b)) / (a+b) exp(γ (a+b)) (b+c) exp(γ(b+c)) ∏_i=1^∞ (1+a+b/i) (1+b+c/i) / (1+a+b+c/i) (1+b/i) exp( - 0 / i )
= b(a+b+c) /(a+b)(b+c)_≤ 1 ∏_i=1^∞ (i+a+b) (i+b+c) / (i+a+b+c) (i+b) _ = B
B
= ∏_i=1^∞ (i+a+b) (i+b+c) / (i+a+b+c) (i+b)
=
exp( ∑_i=1^∞ln( (i+a+b) (i+b+c) / (i+a+b+c) (i+b) ) )
= exp( ∑_i=1^∞ln( (1+a/i+b) (1-a/i+a+b+c) ) )
= exp( ∑_i=1^∞ln( 1 + ac/(i+b)(i+a+b+c) ) )
≤ exp( ∑_i=1^∞ln( 1 + ac/i^2 ) )
≤exp( ln( 4 · ( 1 + ac )^ 2( ⌈√(ac)⌉ - 1 ) ) )
=
4 · ( 1 + ac )^ 2( ⌈√(ac)⌉ - 1 )
,
where the first inequality is due to b, c ≥ 0, and the second inequality is due to partial sum 2 below.
Note that we need ac ≤ 2 so that 4 · (1 + ac)^2(⌈√(ac)⌉ - 1) ≤ 36; otherwise this factor grows rapidly (e.g., to 1024, 62500, and beyond).
Partial sum 1:
∑_i=⌈√(a)⌉ +1^n ln(1+a/i^2)
=
- ∑_i=⌈√(a)⌉ +1^n ln(i^2/i^2+a)
=
- ∑_i=⌈√(a)⌉ +1^n ln(i^2/i^2+a)
=
- ∑_i=⌈√(a)⌉ +1^n ln( 1 - a/i^2+a)
≤
- ∑_i=⌈√(a)⌉ +1^n ln( 1 - a/i^2)
=
- ∑_i=⌈√(a)⌉ +1^n ln( i^2 - a/i^2)
=
- ∑_i=⌈√(a)⌉ +1^n ln( (i + √(a)) /i· (i-√(a)) /i )
=
- ∑_i=⌈√(a)⌉ +1^n ( ln( i + √(a)/i ) - ln( i/ i-√(a) ) )
=
- ∑_i=⌈√(a)⌉ +1^n ln(i+√(a)/i)
+ ∑_i=⌈√(a)⌉ +1^n ln(i/i-√(a))
=
- ∑_i=⌈√(a)⌉ +1^n-⌈√(a)⌉ln(i+√(a)/i)
- ∑_i=n-⌈√(a)⌉+1^nln(i+√(a)/i)
+ ∑_i=2⌈√(a)⌉ +1^n ln(i/i-√(a))
+ ∑_i=⌈√(a)⌉ +1^2⌈√(a)⌉ln(i/i-√(a))
=
- ∑_i=⌈√(a)⌉ +1^n-⌈√(a)⌉ln(i+√(a)/i)
+ ∑_i=⌈√(a)⌉ +1^n-⌈√(a)⌉ln(i+⌈√(a)⌉/i+⌈√(a)⌉-√(a))
- ∑_i=n-⌈√(a)⌉+1^nln(i+√(a)/i)
+ ∑_i=⌈√(a)⌉ +1^2⌈√(a)⌉ln(i/i-√(a))
≤ ∑_i=⌈√(a)⌉ +1^2⌈√(a)⌉ln(i/i-√(a))
=
∑_i=1^⌈√(a)⌉ln(i+⌈√(a)⌉/i+⌈√(a)⌉-√(a))
,
where the first inequality is due to 0 ≤ a and the last inequality is due to i+√(a)/i+⌈√(a)⌉-√(a)≤i+√(a)/i.
Partial sum 2:
∑_i=1^⌈√(a)⌉ln(1+a/i^2)
+ ∑_i=1^⌈√(a)⌉ln(i+⌈√(a)⌉/i+⌈√(a)⌉-√(a))
= ∑_i=1^⌈√(a)⌉ln( (1+a/i^2) · ( 1 + √(a)/i+⌈√(a)⌉-√(a) ) )
≤ ∑_i=1^⌈√(a)⌉ln( (1+a/i^2) · ( 1 + √(a)/i ) )
= ln( (1+a/( ⌈√(a)⌉ )^2) · ( 1 + √(a)/⌈√(a)⌉ ) )
+ ∑_i=1^⌈√(a)⌉-1ln( (1+a/i^2) · ( 1 + √(a)/i ) )
≤ ln( 4 )
+ ∑_i=1^⌈√(a)⌉-1ln( (1+a/i^2)^2 )
≤ ln(4)
+ ( ⌈√(a)⌉-1 ) ·ln( (1+a)^2 )
=
ln( 4 · ( 1 + a )^ 2( ⌈√(a)⌉ - 1 ) )
p_k · p_Β(k/K; a, b)
= k^-c/∑_k'=1^K (k')^-c·Γ(a+b) /Γ(a) Γ(b) · ( k/K )^a-1· ( 1 - k/K )^b-1
≤
k^-c+1· c-1 / 1 - (K+1)^-c+1_≤ 1 ·1/K·
(k/K)^a-2· (1-k/K)^b-1·Γ(a+b-1) /Γ(a-1) Γ(b) _ = p_Β(k/K; a-1, b) ·Γ(a-1) Γ(a+b) /Γ(a) Γ(a+b-1)
≤ 1/K · p_Β(k/K; a-1, b) · ( 1 + b/(a(a-1)) ),
where the last inequality is due to technical lemmas below.
Lemma.
k^-c· (k/K)^a-1
=
k^-c· (k/K)^a-2· k/K
=
k^-c+1· (k/K)^a-2 / K
Lemma.
If c ≥ 1:
k^-c+1 (c-1) / 1 - (K+1)^-c+1≤
k^-c+1 (c-1)
≤
1
If c < 1:
k^-c+1 (c-1) / 1 - (K+1)^-c+1
=
k^-c+1 (1-c) / (K+1)^-c+1 - 1 ≤ 1
Lemma.
Γ(a-1) Γ(a+b) /Γ(a) Γ(a+b-1)
= a ( a+b-1 ) exp( γ a ) exp( γ (a+b-1) ) / (a-1) (a+b) exp( γ(a-1) ) exp( γ(a+b) ) ·
∏_i=1^∞ ( 1 + a-1/i ) · ( 1 + a+b/i ) / ( 1 + a+b-1/i ) · ( 1 + a+b-1 /i) ·exp(- a-1/i·exp(- a+b /i) ) /exp( - a/i ) ·exp(- a+b-1/i )
= a (a+b-1) / ( a-1 ) ( a+b ) ·∏_i=1^∞ ( i + a - 1 ) · ( i + a + b ) / ( i + a ) · ( i + a + b - 1 )
≤
1 + b / a(a-1)
Now we can assemble the generalization error bound. For p_k ∼ k^-c:
∑_k=1^K p_k · p_Β(k/K; a, b)
≤∑_k=1^K p_Β(k/K; a-1, b) / K · ( 1 + b/a(a-1) )
≤
2 + 2b / a(a-1)
The final inequality requires a finer argument for the gap between the sum and the integral, which we provide below:
∑_k=1^K p_Β(k/K; a, b) / K
=
∑_k=1^⌊ ( a-1 ) K /a+b-1⌋ p_Β(k/K; a, b) / K
+ ∑_k=⌈ ( a-1 ) K /a+b-1⌉^K p_Β(k/K; a, b) / K
≤ ∫_1/K^⌈ ( a-1 ) K /a+b-1⌉ / K p_Β(z; a, b) dz
+ ∫_⌊ ( a-1 ) K /a+b-1⌋ / K ^1 p_Β(k/K; a, b) dz
≤
2 ∫_ 0 ^ 1 p_Β(z; a, b) dz
=
2 .
This completes the proof of the generalization error bound.
For exponential p_k = exp(-k) /∑_k'=1^K exp(-k'):
Lemma.
exp(-k)
=
1/exp(k)≤1/1+k≤1/k
=
k^-1 ,
which means that it reduces to poly type for c=1, so we can re-use the techniques developed for poly c=1.
Lemma. The additional constant when transferring from exp type to poly c=1:
∑_k=1^K exp(-k)
≥exp(-1)
For Gaussian-like p_k = exp(-k^2) /∑_k'=1^K exp(-(k')^2):
Lemma.
exp(-k^2)
=
1/exp(k^2)≤1/1+k^2≤1/k^2
=
k^-2 ,
which means that it reduces to poly type for c=2, so we can re-use the techniques developed for poly c=2.
Lemma. The additional constant when transferring from exp type to poly c=2:
∑_k=1^K exp(-k^2)
≥exp(-1^2)
On the other hand, for cost-sensitive-type weighting in Beta distribution:
( 1 - λ ) R(f) + λ(f)
=
( 1 - λ ) ∑_k=1^K { r_f(x, y) = k }·[ [ h(x) ≠ y ] | r_f(x, y) = k ] + λ(f)
+ λ∑_k=1^K { r_f(x, y) = k }· k
= ∑_k=1^K { r_f(x, y) = k }· ( (1-λ) [ [ h(x) ≠ y ] | r_f(x, y) = k ] + λ k )
≤ ∑_k=1^K { r_f(x, y) = k }· ( (1-λ) [ ℓ(f(x), y)) | r_f(x, y) = k ] + λ k )
≤ ∑_k=1^K { r_f(x, y) = k }· ( (1-λ) [ ℓ(f(x), y)) | r_f(x, y) = k ] + λσ_k [ ℓ(f(x), y) | r_f(x, y) = k ] )
= ∑_k=1^K
{ r_f(x, y) = k }_ = p_k^·( (1 - λ + λσ_k) [ ℓ(f(x), y) | r_f(x, y) = k ] )
,
where the first inequality is due to surrogate loss,
the second inequality is due to the assumption k ≤σ_k ·[ ℓ(f(x), y) | r_f(x, y) = k ].
We now show that the weights { p_k · k / (-log(f(x)_k)) }_k=1^K are Beta-like.
Assume p_k = exp(-k) / ∑_k'=1^K exp(-k') and f(x)_k ≤ M - (k/K)^a. Then:
p_k · k / - log(f(x)_k) ≤exp(-k) · k / - ( f(x)_k - 1 ) ≤exp( -k + ln(k) ) / 1 - M + (k/K)^a ≤( K-k/K)^b
· K / 1 - M ·( k / K )^a
,
where the first inequality is due to 1+ln(x) ≤ x,
the second inequality is due to two lemmas.
Therefore, we show that the above rank-minimization can be regarded as a cost-sensitive learning problem with weights following Beta distribution up to a costant, which can be merged to λ in practice.
Lemma:
( K/K-k)^b
≤exp(k - ln(k))
Lemma:
1/b+c≤ d c /b⇔
b
≤d c^2/1-dc
Lemma:
1 - M
≤ K (k/K)^2a/ 1 - K (k/K)^a⇔1/1-M + (k/K)^a≤K(k/K)^a/1-M
A lower bound in terms of the Beta PDF is not needed, since we only minimize the upper bound of the cost-sensitive objective.
Alternatively, we show that { p_k · k / (-log(f(x)_k)) }_k=1^K are Beta-like under a polynomial decay.
Assume p_k = k^-c/∑_k'=1^K (k')^-c and f(x)_k ≤ M - (k/K)^α ≤ 1 - (k/K)^β, for β < 1. Then:
p_k · k / - log(f(x)_k) ≤ k^-c· k / - ( f(x)_k - 1 ) ·∑_k'=1^K (k')^-c≤ k^-c· k / 1 - M + (k/K)^α·c-1/1-(K+1)^-c+1
≤
( k/K )^-β· k^-c· k ·c-1/1-(K+1)^-c+1
≤
( k/K )^1-β· ( 1 - k / K + 1 )^b-1· K ·c-1/1-(K+1)^-c+1∼
p_Β(k/K; 2-β, b)
,
where the first inequality is due to 1+ln(x) ≤ x,
the second inequality is due to two lemmas.
Therefore, we show that the above rank-minimization can be regarded as a cost-sensitive learning problem with weights following Beta distribution up to a constant, which can be merged to λ in practice.
Lemma.
M - (k/K)^α≤
1 - (k/K)^β, 0 < β < 1.
k^-c≤
( 1 - k/(K+1) )^b-1
(of Lemma <ref>)
k^-c≤ ( 1 - k/K+1 )^b-1⇔
Define the Beta-weighted loss function as L_Β(f) = p_Β(k/K+1; a, b) ·[ ℓ(f(x), y) | r_f(x, y) = k ].
Define p_k = { r_f(x, y) = k }.
Define ℓ̅_k(f) = [ ℓ(f(x), y) | r_f(x, y) = k ].
Define σ_k = k ·{ r_f(x, y) = k }/ℓ̅_k(f).
(Learning bound for the expected size of CP prediction sets)
Let L_Β(f) := ∑_k=1^K σ_k · 𝔼[ ℓ(f(X), Y) | r_f(X,Y) = k ], where σ_k ∼ p_Β(k/(K+1); a, b) with a = 1.1, b = 5. Then
𝔼_X[ |𝒮_f(X)| ] ≤ L_Β(f).
(of Theorem <ref>)
Before the proof, we first present two key lemmas (Lemma <ref> and Lemma <ref>) below. Note that our theoretical analysis only uses the original Beta function p_Beta instead of the up-shifted version, which does not affect the conclusion since the original Beta weighting can be regarded as a regularization term. Thus, this theorem shows that the Beta weighting term controls the prediction set size, while the ERM term minimizes the generalization error. As noted in the main paper, we use rankings starting from 0 in our implementation while the theoretical analysis assumes rankings starting from 1; nevertheless, we can shift the index from 1 to 0 to obtain the same theoretical result.
The proof for Lemma <ref> and Lemma <ref> can be found in Section <ref> and Section <ref>, respectively.
(Expected size of CP prediction upper bounded by the partial average rank)
Let K^* = max{ k ∈ [K] : ℙ_XY[ ∑_l=1^k f(X)_(l) ≤ τ_1-α | r_f(X, Y) ≥ k ] ≥ 1 - α }. Then
𝔼_X[ |𝒮_f(X)| ] ≤ ∑_k=1^K^* k · ℙ[ r_f(X, Y) = k ]
(Partial average rank upper bounded by L_Β)
∑_k=1^K^* k · ℙ[ r_f(X, Y) = k ] ≤ ∑_k=1^K σ_k · 𝔼[ ℓ(f(X), Y) | r_f(X, Y) = k ],
where σ_k = (3/5) · γ · ξ · p_Β(k/(K+1); a, b), γ is a positive constant satisfying ℓ̄_k ≥ k/γ for all k ∈ [K^*], and ξ is a positive constant satisfying p_k ≤ ξ · (1 - k/(K+1))^b-1 for all k ∈ [K^*].
Now we can start proving Theorem <ref>.
By inequality (<ref>) from Lemma <ref> and inequality (<ref>) from Lemma <ref>, we have
𝔼_X[ |𝒮_f(X)| ] ≤ ∑_k=1^K^* k · ℙ[ r_f(X, Y) = k ] ≤ ∑_k=1^K σ_k · 𝔼[ ℓ(f(X), Y) | r_f(X, Y) = k ] = L_Β(f),
where σ_k = (3/5) · γ · ξ · p_Β(k/(K+1); a, b).
This completes the proof of Theorem <ref>.
§.§ Proof for Lemma <ref>
(of Lemma <ref>)
We first introduce the notation. f(X)_(l) is the l-th sorted predictive probability in descending order; V(X,y) is the cumulative sum of f(X)_(l), i.e., V(X,y)=∑_l=1^y f(X)_(l); r_f(X,Y) is the TCPR of input (X,Y) under the classifier f; τ_1-α is the 1-α quantile of the conformity score at the population level; and α is the confidence level for conformal prediction.
Now we start with the expected size of the prediction set of the CP method:
𝔼_X[ |𝒮_f(X)| ]
= 𝔼_X[ ∑_y=1^K 𝕀[ V(X, y) ≤ τ_1-α ] ]
= ∑_y=1^K 𝔼_X[ 𝕀[ V(X, y) ≤ τ_1-α ] · 𝔼_Y[ 𝕀[ r_f(X, Y) < r_f(X, y) ] + 𝕀[ r_f(X, Y) ≥ r_f(X, y) ] ] ]
= ∑_y=1^K 𝔼_XY[ 𝕀[ V(X, y) ≤ τ_1-α ] · 𝕀[ r_f(X, Y) < r_f(X, y) ] ] (=: A)
+ ∑_y=1^K 𝔼_XY[ 𝕀[ V(X, y) ≤ τ_1-α ] · 𝕀[ r_f(X, Y) ≥ r_f(X, y) ] ] (=: B).
Below we upper bound the two terms A and B, respectively.
For A, we have
A
=
∑_y=1^K _XY [ V(X, y) ≤τ_1-α, r_f(X, Y) < r_f(X, y) ]
= ∑_y=1^K _XY [ r_f(X, Y) < r_f(X, y) ] ·_XY [ V(X, y) ≤τ_1-α | r_f(X, Y) < r_f(X, y) ]
(a) ≤ ∑_y=1^K _XY [ r_f(X, Y) < r_f(X, y) ] ·_XY [ V(X, Y) ≤τ_1-α | r_f(X, Y) ≤ r_f(X, y) ]
+ ∑_y=1^K _XY [ r_f(X, Y) ≥ r_f(X, y) ] ·( 1 - α + 1/n)
- ∑_y=1^K _XY [ r_f(X, Y) ≥ r_f(X, y) ] ·( 1 - α + 1/n)
(b) ≤ ∑_y=1^K _XY( [ r_f(X, Y) < r_f(X, y) ] + [ r_f(X, Y) ≥ r_f(X, y) ] ) · ( 1 - α )
- ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] · ( 1 - α )
=
K ( 1 - α ) - ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] · ( 1 - α )
,
where the above inequality (a) is due to [ V(X, y) ≤τ_1-α | r_f(X, Y) < r_f(X, y) ] ≤[ V(X, Y) ≤τ_1-α | r_f(X, Y) < r_f(X, y) ], and
the inequality (b) is due to [ V(X, Y) ≤τ_1-α | r_f(X, Y) < r_f(X, y) ] ≤[ V(X, Y) ≤τ_1-α ] ≤ 1 - α + 1/n, the latter is due to the upper bound in <cit.>.
It is also worth highlighting that the second term in the last line above can be re-written as the average rank:
∑_y=1^K [ r_f(X, Y) ≥ r_f(X, y) ]
=
∑_k=1^K k ·[ r_f(X,Y) = k ]
.
Now we turn to term B and upper bound it as follows:
B - ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] · ( 1 - α )
= ∑_y=1^K _XY[ [ V(X, y) ≤τ_1-α] ·[ r_f(X, Y) ≥ r_f(X, y) ] ]
- ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] · ( 1 - α )
= ∑_y=1^K _XY[ V(X, y) ≤τ_1-α , r_f(X, Y) ≥ r_f(X, y) ]
- ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] · ( 1 - α )
= ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] ·_XY[ V(X, y) ≤τ_1-α| r_f(X, Y) ≥ r_f(X, y) ]
- ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] · ( 1 - α )
= ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] ·( _XY[ V(X, y) ≤τ_1-α| r_f(X, Y) ≥ r_f(X, y) ] - ( 1 - α ) )
= ∑_y=1^K _XY[ r_f(X, Y) ≥ r_f(X, y) ] ·( _XY[ ∑_l=1^r_f(X,y) f(X)_(l)≤τ_1-α| r_f(X, Y) ≥ r_f(X, y) ] - ( 1 - α ) )
(a) = ∑_k=1^K _XY[ r_f(X, Y) ≥ k ] ·(
_XY[ ∑_l=1^k f(X)_(l)≤τ_1-α| r_f(X, Y) ≥ k ]
_ =: H(k)
- ( 1 - α ) )
= ∑_k=1^K^*_XY[ r_f(X, Y) ≥ k ] ·( _XY[ ∑_l=1^k f(X)_(l)≤τ_1-α| r_f(X, Y) ≥ k ] - ( 1 - α ) )
+ ∑_k=K^*+1^K _XY[ r_f(X, Y) ≥ k ] ·( _XY[ ∑_l=1^k f(X)_(l)≤τ_1-α| r_f(X, Y) ≥ k ] - ( 1 - α ) )
(b) ≤ ∑_k=1^K^*_XY[ r_f(X, Y) ≥ k ] ·( _XY[ ∑_l=1^k f(X)_(l)≤τ_1-α| r_f(X, Y) ≥ k ] - ( 1 - α ) )
,
where the equality (a) is due to k = r_f(X, y), and the inequality (b) is due to the definition of K^* = max{ k ∈ [K] : ℙ_XY[ ∑_l=1^k f(X)_(l) ≤ τ_1-α | r_f(X, Y) ≥ k ] ≥ 1 - α } together with the assumption that H(k) = ℙ_XY[ ∑_l=1^k f(X)_(l) ≤ τ_1-α | r_f(X, Y) ≥ k ] is monotonically decreasing in k.
We plot the empirical estimate of H(k) on the test set of CIFAR100 (adversarially attacked by PGD100) using an AT-trained model in Fig. <ref>, which validates our assumption that this function is monotonically decreasing.
Combining the above two inequalities (<ref>) and (<ref>), we have
𝔼_X[ |𝒮_f(X)| ]
≤ K(1-α) + ∑_k=1^K^* ℙ_XY[ r_f(X, Y) ≥ k ] · ( ℙ_XY[ ∑_l=1^k f(X)_(l) ≤ τ_1-α | r_f(X, Y) ≥ k ] - (1-α) )
≤ K(1-α) + ∑_k=1^K^* ℙ_XY[ r_f(X, Y) ≥ k ]
= K(1-α) + ∑_k=1^K^* k · ℙ_XY[ r_f(X, Y) = k ].
After dropping the constant (since the training does not optimize the constant), this completes the proof for Lemma <ref>.
§.§ Proof for Lemma <ref>
(of Lemma <ref>)
The proof for Lemma <ref> needs the following technical lemma, which is proved in Section <ref>.
(Upper bound of Gamma functions)
Let a = 1.1, b = 5.
Then we have the following inequality
Γ(a+b) /Γ(a) Γ(b) ≤
3 / 10.
We use p_k to denote ℙ[ r_f(X, Y) = k ] and ℓ̄_k to denote 𝔼[ ℓ(f(X), Y) | r_f(X, Y) = k ].
k · p_k /ℓ̅_k (a) ≤ k · p_k / k / γ (b) ≤γ·ξ· (1 - k/K+1)^b-1 (c) ≤γ·ξ· 2 /(K+1)^a-1· (1 - k/K+1)^b-1
(d) ≤
2 γ·ξ·k^a-1/(K+1)^a-1· (1 - k/K+1)^b-1·Γ(a) Γ(b) /Γ(a+b) ·Γ(a+b) /Γ(a) Γ(b)
=
2 γ·ξ· p_Β(k/(K+1); a, b) ·Γ(a+b) /Γ(a) Γ(b)
(e) ≤
3 / 5 ·γ·ξ· p_Β(k/(K+1); a, b)
=
σ_k
,
where the above inequality (a) is due to the assumption ℓ̅_k ≥ k / γ,
the inequality (b) is due to the assumption p_k ≤ξ· ( 1 - k/K+1 )^b-1,
the inequality (c) is due to the assumption K ≤ 2^10 and a-1 = 1/10,
the inequality (d) is due to 1 ≤ k,
the inequality (e) is due to Lemma <ref>.
We plot the curve of l̅_k versus k/γ (γ=10) and p_k versus ξ(1-k/K+1)^b-1 (ξ=0.5, b=5) with an adversarially trained model on CIFAR100 in Fig. <ref>, indicating that the two assumptions are valid in practice. This proves the inequality
∑_k=1^K^* k · ℙ[ r_f(X, Y) = k ] ≤ ∑_k=1^K^* σ_k · 𝔼[ ℓ(f(X), Y) | r_f(X, Y) = k ].
This completes the proof of Lemma <ref>.
§.§ Proof for Lemma <ref>
(of Lemma <ref>)
We use Weierstrass's definition of the Gamma function:
Γ(z) = exp(-γ_0 z)/z ∏_i=1^∞ (1 + z/i)^-1 · exp(z/i),
where γ_0 is the Euler–Mascheroni constant.
Now we start the proof:
Γ(a+b) /Γ(a) Γ(b)
= exp(-γ_0 a) / a ·exp(-γ_0 b) / b · a+b /exp(-γ_0 (a+b)) ·
∏_i=1^∞ (1+a/i)^-1· (1+b/i)^-1· (1+(a+b)/i) ·exp( a/i + b/i - (a+b)/i )
= a+b / ab ·∏_i=1^∞ 1 + (a+b)/i / ( 1+a/i )( 1+b/i ) (a) ≤ 6 / 5 ·∏_i=1^∞ ( i + a + b ) i / (i+a) (i+b)
= 6 / 5 ·∏_i=1^∞( 1 - ab / i^2 + (a+b)i + ab )
(b) ≤ 6 / 5 ·∏_i=1^∞exp( - ab / i^2 + (a+b)i + ab )
= 6 / 5 ·exp( -ab ·∑_i=1^∞ 1 / i^2 + (a+b)i + ab )
(c) = 6 / 5 ·exp( -ab ·∑_i=1^∞ 1 / i^2 + 61/10 · i + 11/2 )
(d) < 6 / 5 ·exp( - 3 ab / 10 )
(e) < 6 / 5 ·exp( - 3 / 2 )
<
3/10
,
where the inequality (a) is due to 1/a < 1, 1/b = 1/5,
the inequality (b) is due to 1+x ≤exp(x),
the equality (c) is due to a = 1.1, b = 5,
the inequality (d) is due to
∑_i=1^∞ 1 / i^2 + 61/10 · i + 11/2
>
∑_i=1^100 1 / i^2 + 61/10 · i + 11/2
>
3/10,
and the inequality (e) is due to a > 1, b = 5.
This completes the proof of Lemma <ref>.
|
http://arxiv.org/abs/2405.09111v1 | 20240515055720 | CarDreamer: Open-Source Learning Platform for World Model based Autonomous Driving | ["Dechen Gao", "Shuangyu Cai", "Hanchu Zhou", "Hang Wang", "Iman Soltani", "Junshan Zhang"] | cs.RO | ["cs.RO", "cs.AI"] |
To safely navigate intricate real-world scenarios, autonomous vehicles (AVs) must be able to adapt to diverse road conditions and anticipate future events. World model based reinforcement learning (RL) has emerged as a promising approach by learning and predicting the complex dynamics of various environments. Nevertheless, to the best of our knowledge, there does not exist an accessible platform for training and testing such algorithms in sophisticated driving environments.
To fill this void, we introduce CarDreamer, the first open-source learning platform designed specifically for developing and evaluating world model based autonomous driving algorithms. It comprises three key components:
1) World model (WM) backbone: CarDreamer has integrated some state-of-the-art world models, which simplifies the reproduction of RL algorithms. The backbone is decoupled from the rest and communicates using the standard Gym interface, so that users can easily integrate and test their own algorithms.
2) Built-in tasks: CarDreamer offers a comprehensive set of highly configurable driving tasks which are compatible with Gym interfaces and are equipped with empirically optimized reward functions.
3) Task development suite: CarDreamer integrates a flexible task development suite to streamline the creation of driving tasks. This suite enables easy definition of traffic flows and vehicle routes, along with automatic collection of multi-modal observation data. A visualization server allows users to trace real-time agent driving videos and performance metrics through a browser.
Furthermore, we conduct extensive experiments using built-in tasks to evaluate the performance and potential of WMs in autonomous driving.
Thanks to the richness and flexibility of CarDreamer, we also systematically study the impact of observation modality, observability, and sharing of vehicle intentions on AV safety and efficiency. All code and documents are accessible on our GitHub page <https://github.com/ucd-dare/CarDreamer>.
§ INTRODUCTION
Autonomous vehicles (AVs) are expected to play a central role in future mobility systems, with many promising benefits such as safety and efficiency <cit.>. Recent years have witnessed great achievements in the development of AVs. In the U.S. alone, millions of miles have been driven on public roads by AVs <cit.>. However, achieving robust AVs that are capable of navigating complex and diverse real-world scenarios remains a challenging frontier <cit.>. For instance, as calculated by the US Department of Transportation's Federal Highway Administration, AVs experience a crash rate about twice that of conventional vehicles per million miles traveled <cit.>.
The reliability of AVs directly hinges upon the generalization capability of autonomous systems in unforeseen scenarios. World models (WMs), which excel at generalization, offer a promising solution through their ability to learn the complex dynamics of environments and anticipate future scenarios. In particular, WMs learn a compact latent representation that encodes the key elements and dynamics of the environment. This learned representation facilitates better generalization, allowing the WM to make predictions in scenarios beyond its training samples. Internally, WMs incorporate components that mimic human-like perception and decision-making, such as a vision model and a memory model <cit.>. Indeed, humans excel at handling rare or unseen events with proper actions thanks to their internal world models <cit.>. By emulating cognitive processes akin to human intelligence, WM based reinforcement learning (RL) has demonstrated state-of-the-art performance in domains such as Atari games and Minecraft <cit.>. However, the application of WMs to autonomous driving remains an exciting open field <cit.>, partially due to the lack of easy-to-use platforms to train and test such RL algorithms. Developing a learning platform for WM-based autonomous driving can be extremely beneficial for research in this domain.
Thus motivated, we introduce CarDreamer, the first open-source learning platform designed specifically for WM based autonomous driving. CarDreamer aims to facilitate the rapid development and evaluation of algorithms, enabling users to test their algorithms on provided tasks or quickly implement customized tasks through a comprehensive development suite. CarDreamer's three key contributions include:
* Integrated WM algorithms for reproduction. CarDreamer has integrated state-of-the-art WMs, including DreamerV2, DreamerV3, and Planning2Explore, significantly reducing the time required to reproduce the performance of existing algorithms. These algorithms are decoupled from the rest of CarDreamer and communicate through the unified Gym interface. This enables straightforward integration and testing of new algorithms without additional adaptation effort, as long as they support the Gym interface.
* Highly configurable built-in tasks with optimized reward. CarDreamer provides a comprehensive set of driving tasks, such as lane changing and overtaking. These tasks allow extensive customization in terms of difficulty, observability, observation modalities, and communication of vehicle intentions. They expose the same Gym interface for convenient use and the reward functions are meticulously designed to optimize training efficiency.
* Task Development Suite and Visualization Server. This suite not only simplifies the creation of customized driving tasks via API-driven traffic spawning and control but also includes a modular observer for easy multi-modal data collection and configuration. A visualization server enables the real-time display of agent driving videos and statistics on a web browser, which accelerates reward engineering and algorithm development by providing immediate performance insights.
In addition to introducing the CarDreamer platform, we present comprehensive experiments that evaluate the overall performance and potential of WMs in autonomous driving, highlighting their predictive accuracy on multi-modal observation inputs. Furthermore, comparisons across different levels of observability and intention sharing demonstrate that communication can markedly enhance both traffic safety and efficiency. To the best of our knowledge, these results represent the first experimental manifestation of WMs' efficacy in autonomous driving tasks with communication of vehicle intentions.
§ RELATED WORK
World Models in Reinforcement Learning. RL usually suffers from low sample efficiency <cit.>, which significantly hinders its practicality, especially for tasks where interacting with the environment is costly and time-consuming. To remedy this issue, model-based RL leverages a world model that explicitly learns environment dynamics to "imagine" future trajectories, allowing agents to interact with the world model instead of the actual environment <cit.>. As high-dimensional observations evolving under intricate dynamics can be intractable, prior works typically learn dynamics in a latent space, often modeling them with a Recurrent State-Space Model (RSSM) <cit.>. Dreamer is a series of works that leverage the RSSM <cit.>, train agents inside the world model's imagination, and have demonstrated promising sample efficiency and generalization ability on conventional RL benchmarks. ISO-Dream <cit.> isolates controllable and non-controllable sources of dynamics changes so that agents can differentiate between changes that are dependent on and independent of their actions. Planning2Explore <cit.> promotes exploration by directing it towards states of higher uncertainty, facilitating the learning of more robust dynamics and enabling quick adaptation to new tasks in a zero- or few-shot manner. LEXA <cit.> utilizes a learned world model to train separate explorer and achiever policies for forward-looking exploration and goal achievement.
World Models for Autonomous Driving. In the field of autonomous driving, there have been mainly two branches of world model studies <cit.>: 1) leveraging world models as neural driving simulators to synthesize realistic driving videos, and 2) utilizing world models in simulation to train and evaluate agent policies. In the first branch, GAIA-1 <cit.> utilizes a world model to generate realistic driving scenarios given videos, texts, and actions as inputs. DriveDreamer <cit.> generates driving scenarios along with actions given prior information such as high-definition maps and 3D bounding boxes. ADriver-1 <cit.> eliminates the need for extensive prior information and achieves infinite driving inside its world model by providing both scenario generation and action prediction. DriveDreamer-2 <cit.> builds upon large language models to make prompting more user-friendly for generating diverse traffic conditions in driving video generation. In the second branch, MILE <cit.> conducts imitation learning based on a Dreamer-style world model; it learns from offline expert data using road map and camera inputs, and predicts the transition of future Bird-Eye Views (BEVs) as an auxiliary task. SEM2 <cit.> conducts reinforcement learning with a Dreamer-style world model and decodes camera and LiDAR representations into semantic BEV masks. Think2Drive <cit.> trains DreamerV3 with BEV inputs and tests it on 39 CARLA benchmarks. Our platform aims at facilitating this branch of research, providing tailored benchmarks and tools for WM based RL algorithms in autonomous driving.
Simulators. Collecting data for autonomous driving in the real world is costly and time-consuming. To this end, various simulators, such as CARLA <cit.>, SUMO <cit.>, and Flow <cit.>, have been developed. CARLA is distinguished by its realistic environmental modeling and image rendering capabilities. These simulators are generally designed for general traffic simulation rather than for RL applications. Our platform is specifically tailored for WM-based RL, offering RL rewards and interfaces, and automating the collection of training data.
§ BACKGROUND
In this section, we give a brief introduction to the two cornerstones on which CarDreamer builds: CARLA <cit.>, a high-fidelity and flexible simulator, and gym <cit.>, a standard interface for RL training and evaluation.
CARLA. CARLA is an open-source simulator that aims at simulating real-world traffic scenarios. CARLA is based on Unreal Engine, which provides realistic physics and high-quality rendering. CARLA provides digital assets including maps, buildings, vehicles, and various landmarks. It supports various sensors such as RGB cameras, LiDAR, and RADAR. Users can create vehicles or pedestrians and take full control of these actors. It is indeed a very general tool, but the main drawback for its application in RL also comes from this generality. As stated in <ref>, obtaining the BEV involves a cumbersome process, impeding its fast deployment for training RL algorithms.
Gym. Gym is a standard interface defined by OpenAI to formalize the communication between agents and environments. Two functions, reset() and step(action), constitute the core of this interface. The former initializes the environment to its start state. The latter takes an action input from the agent, simulates the evolution of the environment, and returns observation data, a reward signal, a terminal indicator, and some extra information. In this way, an RL algorithm can be easily tested on various environments with minor adaptation, as long as both support the gym interface. There have been extensive efforts in developing diverse Gym benchmarks, such as Atari games and the DMC suite. However, for WM based RL algorithms for autonomous driving in CARLA, CarDreamer is the first platform that provides diverse urban driving tasks through the gym interface to facilitate training and evaluation.
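As an illustration, a gym-style interaction loop with a CarDreamer task could look like the following; the task name is a hypothetical placeholder and the random policy stands in for a world-model-based agent:

```python
import gym

env = gym.make("CarDreamer-RightTurnHard-v0")   # hypothetical task name

def policy(obs):
    return env.action_space.sample()            # stand-in for a WM-based policy

obs = env.reset()                               # initialize the CARLA scenario
done = False
while not done:
    obs, reward, done, info = env.step(policy(obs))
```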
§ CARDREAMER ARCHITECTURE AND IMPLEMENTATION
§.§ Overview of CarDreamer
As depicted in <Ref>, CarDreamer comprises three principal components: built-in tasks, task development suite, and world model backbone. The task development suite facilitates a variety of API functionalities, including vehicle spawning, traffic flow control, and route planning within CARLA. An observer module automates the collection of multi-modal observation data, such as sensor data and BEVs, managed by independent and customizable data handlers. This data serves dual purposes: it is utilized by the task and a training visualization server. The visualization server displays real-time driving videos and environment feedback via an HTTP server and integrates seamlessly with the world model algorithm through the gym interface. Upon receiving an action as the agent's response, the observer collects data from data handlers at the subsequent frame, thus continuing this operational cycle. We will now explore each module in detail.
§.§ Built-In Tasks
We have meticulously crafted a wide array of realistic tasks, ranging from simple skills such as lane following and left turning to more complex challenges like random roaming across mixed road conditions that include crossroads, roundabouts, and varying traffic flows. These tasks are highly configurable, offering numerous options that present fundamental questions in autonomous driving.
Observability & Intention Sharing: Partial observability presents a significant challenge in RL, where incomplete state information can exponentially increase the complexity of the input space by encompassing all historical steps <cit.>. To address the lack of tools tailored to these challenges in autonomous driving, we offer three observability settings in CarDreamer: 1) Field-of-View (FOV) includes only the vehicles within the camera's FOV. 2) Shared-FOV (SFOV) enables a vehicle to communicate with and collect FOV data from other vehicles within its own FOV. 3) Full Observability (FULL) assumes complete environment and background traffic information. Furthermore, users control whether vehicles share their intentions and with whom. These configurations align with the fundamental questions of "what information to communicate" and "whom to communicate with" <cit.>.
Observation Modality: Users can configure the observation space to include various modalities, from sensor data such as RGB cameras and LiDAR to synthetic data such as BEVs. This flexibility supports the development of end-to-end models that are capable of making decisions directly from multi-modal raw sensor data <cit.> or planning with BEV perception <cit.>.
Difficulty: Difficulty settings primarily affect the density of traffic, posing significant collision avoidance challenges. As safety-critical events for AVs are rare <cit.>, it is inherently difficult to validate the robustness of AVs given the infrequent nature of such events <cit.>. CarDreamer is specifically designed to enable a comprehensive evaluation of safety and efficiency in scenarios that mimic these infrequent but critical events.
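As a sketch of how such a task configuration might be expressed (the key names below are illustrative, not the platform's exact schema):

```python
# Hypothetical task configuration covering the customization axes above.
task_config = {
    "observability": "SFOV",        # one of "FOV", "SFOV", "FULL"
    "share_intention": True,        # broadcast planned routes to nearby vehicles
    "modalities": ["camera", "lidar", "birdeye_view"],
    "difficulty": "hard",           # controls background traffic density
}
```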
Reward function. Each task within CarDreamer is equipped with an optimized reward function, which has been experimentally shown to enable DreamerV3 to successfully navigate through waypoints within just 10,000 training steps (see <Ref> for details). Notably, our empirical findings indicate that rewarding the agent based on its speed or incremental position changes leads to superior performance compared to rewarding absolute position. This is because when rewarded solely for position, the agent can exploit the reward function by making a small initial movement and then remaining stationary, as any further movement risks incurring collision penalties. In practice, we do observe such sub-optimal behavior, where the learned policy converges to a local optimum to avoid collision by remaining stationary. Conversely, rewarding speed forces the agent to maintain continuous motion to accumulate rewards, mitigating the risk of premature convergence to undesirable stationary policies.
Our reward design carefully addresses crucial requirements for driving tasks, such as trajectory smoothness, which are often overlooked in conventional RL algorithms. Typically, these algorithms include an entropy term in their loss function or value estimation to encourage exploration and prevent premature convergence. However, in autonomous driving contexts, this entropy term can incentivize vehicles to follow a zigzag trajectory, as such erratic motion generates higher entropy rewards compared to smoother paths, even though both trajectories might achieve similar progress towards the goal. To counteract this effect, we introduce a penalty term specifically designed to discourage motion perpendicular to the goal direction. As a result, we have developed a reward function that effectively balances goal progression and trajectory smoothness, structured as follows
r = α v_parallel - β v_perp - γ𝕀_collision.
Here, v_parallel and v_perp represent the speed parallel and perpendicular to the goal direction, respectively. 𝕀_collision is the indicator for collision. α, β, γ are scaling factors. For tasks like waypoint following, additional reward terms are included for reaching each waypoint.
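A minimal sketch of this reward computation is given below; the default scaling factors are illustrative placeholders, not the values used in our experiments:

```python
import numpy as np

def driving_reward(velocity, goal_dir, collided, alpha=1.0, beta=0.5, gamma=10.0):
    """Reward of the equation above: speed toward the goal, minus speed
    perpendicular to it (discouraging zigzag motion), minus a collision penalty."""
    goal_dir = goal_dir / np.linalg.norm(goal_dir)
    v_parallel = float(np.dot(velocity, goal_dir))                    # progress term
    v_perp = float(np.linalg.norm(velocity - v_parallel * goal_dir))  # smoothness term
    return alpha * v_parallel - beta * v_perp - gamma * float(collided)

# Example: moving mostly toward the goal, slight lateral drift, no collision.
print(driving_reward(np.array([3.0, 0.5]), np.array([1.0, 0.0]), collided=False))
```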
Interface and Usage. All built-in tasks in CarDreamer utilize a unified gym interface, allowing straightforward training and testing of RL algorithms without additional adaptations. Beyond direct usage, CarDreamer supports a variety of algorithms, including those for curriculum learning, which can leverage the progression from simpler to more complex tasks; or continual learning, which aims at addressing catastrophic forgetting when learning a new task. Additionally, for imitation learning, CarDreamer simplifies the collection of observational data in the simulator. Although initially designed for WM-based RL algorithms, the gym interface enables diverse applications across various algorithmic strategies.
§.§ Task Development Suite
For users requiring customized tasks, CarDreamer offers a highly modular Task Development Suite. This suite is adaptable to various levels of customization to satisfy diverse user requirements.
The initial module, World Manager, caters to basic needs such as varying driving scenarios with different maps, routes, spawning locations, or background traffic flows. The World Manager is responsible for managing `actors', a term borrowed from CARLA <cit.>, which encompasses all entities including vehicles, pedestrians, traffic lights, and sensors. It provides API calls to spawn various actors, particularly vehicles at different locations with either a default or a customized blueprint. These vehicles can be controlled by the user or by an autopilot, a simple rule-based autonomous driving algorithm. Upon reset, it transparently destroys the spawned actors and releases their resources.
The second module, the Observer, automates the collection of observation data across various modalities. While it allows users to easily access pre-defined observation modalities without manual interaction, it also supports extensive customization for data specifications. This is achieved through a series of data handlers, each delivering data for a particular modality, such as an RGB camera handler and a BEV handler. Each data handler is highly modular and independently manages the entire lifecycle of a specific type of data. Users can enhance the observer by registering a new data handler tailored to their own requirements.
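The handler/observer pattern can be sketched as follows; the class, method, and attribute names are illustrative stand-ins, not CarDreamer's actual API:

```python
class ToyVehicle:
    """Stand-in for a CARLA actor; only exposes what the sketch needs."""
    speed = 5.0

class SpeedHandler:
    """Illustrative data handler owning one modality's lifecycle."""
    key = "ego_speed"
    def reset(self, vehicle):
        self.vehicle = vehicle               # acquire per-episode resources
    def get_observation(self):
        return self.vehicle.speed            # produce this modality's data

class Observer:
    """Minimal observer aggregating the outputs of registered handlers."""
    def __init__(self):
        self.handlers = {}
    def register(self, handler):             # extension point for custom modalities
        self.handlers[handler.key] = handler
    def observe(self):
        return {k: h.get_observation() for k, h in self.handlers.items()}

observer = Observer()
handler = SpeedHandler()
handler.reset(ToyVehicle())
observer.register(handler)
print(observer.observe())                    # -> {'ego_speed': 5.0}
```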
The third module comprises Route Planners that accommodate diverse needs for task routes. CarDreamer includes several planners: a random planner for exploratory roaming across the entire map, a fixed path planner that creates waypoints connecting user-defined locations, and a fixed ending planner that generates routes using the classical A* algorithm from the current position to a designated endpoint. For additional customization, a base class is available for users to develop their own planners by overriding the init_route() and extend_route() methods, which define the initialization and extension of routes per time step, respectively.
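A hedged sketch of such a custom planner is shown below; `BasePlanner` here is a stand-in mirroring the description above (the verbatim CarDreamer base class may differ):

```python
import math

class BasePlanner:
    """Stand-in for the planner base class described above."""
    def __init__(self):
        self.waypoints = []
        self.init_route()
    def init_route(self):
        raise NotImplementedError
    def extend_route(self):
        raise NotImplementedError

class CircularPlanner(BasePlanner):
    """Toy planner that lays waypoints along a circle of radius r."""
    def __init__(self, r=20.0, step=0.1):
        self.r, self.step, self.angle = r, step, 0.0
        super().__init__()

    def init_route(self):
        # Initialization: start the route at angle 0 on the circle.
        self.waypoints = [(self.r, 0.0)]

    def extend_route(self):
        # Per-time-step extension: advance along the circle, append a waypoint.
        self.angle += self.step
        self.waypoints.append((self.r * math.cos(self.angle),
                               self.r * math.sin(self.angle)))

planner = CircularPlanner()
for _ in range(3):
    planner.extend_route()
print(len(planner.waypoints))  # -> 4
```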
Additionally, the suite features a visualization server that seamlessly integrates the output from the Observer and other statistical data from environment feedback, displaying via an HTTP server. This automation facilitates rapid feedback, enhancing the process of reward engineering and algorithm development without extra coding efforts.
§.§ World Model Backbone
The World Model Backbone in CarDreamer seamlessly integrates state-of-the-art approaches such as DreamerV2 <cit.>, DreamerV3 <cit.>, and Planning2Explore <cit.>, facilitating rapid reproduction of these models. This backbone architecture is strategically designed to decouple the world model implementation from task-specific components, thereby enhancing modularity and extensibility. Communication between these components is efficiently managed through the standard gym interface, which allows for extensive customization.
This decoupling enables users to easily adapt or replace the default world models with their own implementations, supporting rapid prototyping, benchmarking, and comparative analysis against established baselines. CarDreamer thus provides a comprehensive testbed for world model-based algorithms, fostering an ecosystem conducive to accelerated research and development within this field. The platform encourages users to explore innovative architectures, loss functions, and training strategies, all within a consistent and standardized evaluation framework characterized by diverse driving tasks and performance metrics.
§ CARDREAMER TASK EXPERIMENTS
This section showcases the versatility and capabilities of CarDreamer through a comprehensive set of experiments across a wide range of settings. We use DreamerV3 <cit.> as the model backbone. <Ref> focuses on task training and evaluation, where we evaluate the performance of WMs in diverse driving tasks within CarDreamer. In <Ref>, we assess the prediction accuracy of WMs to accurately imagine future states in different observation modality settings.
Furthermore, <Ref> systematically evaluates the significant impact of observability and intention sharing on traffic safety and efficiency.
§.§ World Model Training & Evaluation
We use a small DreamerV3 model with only 18M parameters (see <Ref>) as the model backbone. The small DreamerV3 has a CNN multiplier of 32, 512 GRU and MLP units, and only two MLP layers within its RSSM <cit.>. The memory overhead is around 10 GB, which allows us to train on a single NVIDIA 4090 GPU alongside the running CARLA simulator.
We train the agent on each task. The reward curves over training steps are shown in <Ref>. Simpler tasks with less traffic, such as “right turn simple” and “lane merge”, typically converge within 50k steps (about 1 hour), whereas tasks involving denser, aggressive traffic flows, which require collision avoidance, take approximately 150k-200k steps to converge (about 3 to 4 hours).
In our evaluation, we employ several metrics to rigorously assess the performance of autonomous driving agents executed within the CarDreamer tasks, detailed in <Ref>. These metrics include:
* Success Rate: This metric measures the percentage of episodes in which the ego vehicle successfully completes the task by reaching a destination point or traveling a predetermined distance without incident and without leaving the lane.
* Average Distance (m): Represents the average distance traveled by the ego vehicle across all episodes before the episode terminates, either through task completion or due to a failure such as a collision or timeout.
* Collision Rate (%): Calculates the percentage of episodes where the ego vehicle is involved in a collision.
* Average Speed (m/s): Measures the average speed maintained by the ego vehicle throughout the task. This metric is indicative of how efficiently the vehicle navigates the environment, balancing speed with safety.
* Waypoint Distance: This metric quantifies the average divergence from the desired route waypoints. It assesses the vehicle's ability to adhere to the planned path, reflecting its navigation accuracy and precision in following the given trajectory.
It is worth noting that several tasks, such as “right turn” and “left turn”, are notably challenging in environments with background traffic, where traffic flows aggressively and disregards traffic rules and signs. This behavior increases the potential for collisions with the ego vehicle. Consequently, the AV must accurately predict the future maneuvers of other vehicles to successfully complete the task.
§.§ Predictions in Different Observation Modalities
WM's imagination capability allows it to effectively predict future scenarios and manage potential events. To evaluate the WM's imagination performance with observations of different modalities, we conduct the experiments on the “right turn hard” task. We choose three different modalities: BEV, camera, and LiDAR. For each one, the WM is required to imagine the observations in a few future steps given the start state and a series of actions.
The results, illustrated in Fig. <ref>, compare the ground-truth images with the imagined ones across the three modalities. The first row displays the ground-truth observation images, the second row the WM's imagined outcomes, and the third row the differences between them. We selected frames within an imagination horizon of up to 64 time steps.
The findings demonstrate the WM's proficiency in accurately predicting the future despite the different modalities. In the BEV experiment (a), the WM precisely predicted the positions and trajectories of vehicles moving straight and making right turns, as well as the rotation and translation of the BEV with respect to the ego vehicle. Similarly, in camera and LiDAR settings, WM successfully predicts a vehicle driving in front of the ego vehicle.
§.§ Benefits of V2V Communication
A distinctive feature of CarDreamer is its ability to facilitate easy customization of the level to which vehicles communicate. Vehicles can share FOV views, leading to different observability. Moreover, they can even share intentions (represented by vehicles' planned waypoints) for better planning. We utilize this feature to evaluate the impact of communication. An agent is trained and tested on the “right turn hard” task under different settings, i.e., different observability and whether it has access to others' intentions. The “right turn hard” task is particularly suitable for testing observability and intention communication due to the dense traffic and frequent potential for collisions from vehicles outside the FOV.
The reward curves are shown in <Ref> and some performance metrics are shown in <Ref>. Note that successful behavior in making the right turn is approximately indicated by rewards exceeding 250 under our reward functions. The results show that limited observability or lack of intention sharing impedes the agent from completing the task. The evenly sampled images during one episode (shown in <Ref>) provide a good explanation: the agent adopts a conservative and sub-optimal policy where it stops at the crossroad to avoid collision. For example, in the first three rows of <Ref>, the agent stops moving before merging into the car flow. In contrast, complete information enables the ego vehicle to successfully execute the right turn.
§ CONCLUSION
We introduced CarDreamer, an open-source learning platform tailored for the development and evaluation of WM based RL algorithms in autonomous driving. CarDreamer offers a comprehensive set of built-in tasks, a flexible task development suite, and an integrated world model backbone, all aimed at facilitating rapid prototyping of driving tasks and algorithm testing within this specialized domain. With its modular design and diverse task configurations, CarDreamer establishes itself as a flexible and challenging testbed for assessing the performance of WM based autonomous driving systems. The experiments we conduct using our platform give a comprehensive evaluation of the performance of DreamerV3 across different driving tasks. We highlight its predictive accuracy across different observation modalities and the significant impact of communication on performance.
Looking to the future, a promising avenue for further development involves the integration of curriculum learning <cit.> and continual learning <cit.> strategies. These approaches aim to systematically enhance the learning process by gradually increasing task complexity or continuously integrating new knowledge without forgetting previously acquired information. Furthermore, exploring advanced techniques such as transfer learning <cit.> and meta-learning <cit.> could significantly improve the platform’s capabilities for few-shot adaptation to new environments. This would further augment CarDreamer’s utility in developing more generalized and robust autonomous driving approaches.
|
http://arxiv.org/abs/2405.08698v1 | 20240514153756 | Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises | [
"Yue Xia",
"Christoph Hofmeister",
"Maximilian Egger",
"Rawad Bitar"
] | cs.IT | [
"cs.IT",
"cs.CR",
"cs.DC",
"cs.LG",
"math.IT"
] |
Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises
This project has received funding from the German Research Foundation (DFG) under Grant Agreement No. WA 3907/7-1.
Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar
School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
{yue1.xia, christoph.hofmeister, maximilian.egger, rawad.bitar}@tum.de
Received 05 March 2024 / Accepted 02 May 2024
==============================================================================================================================================================================================================================================
Federated learning (FL) shows great promise in large scale machine learning, but brings new risks in terms of privacy and security. We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and private from other users. The scheme builds on the preexisting non-private FLTrust scheme, which tolerates malicious users through trust scores (TS) that attenuate or amplify the users' gradients. The trust scores are based on the ReLU function, which we approximate by a polynomial. The distributed and privacy-preserving computation in ByITFL is designed using a combination of Lagrange coded computing, verifiable secret sharing and re-randomization steps.
ByITFL is the first Byzantine resilient scheme for FL with full information-theoretic privacy.
Byzantine-Resilience, Federated Learning, Information-Theoretic Privacy, Secure Aggregation
§ INTRODUCTION
Federated learning (FL), first proposed in <cit.>, is an emerging machine learning paradigm
that allows users to train a model under the coordination of a central entity called federator while keeping their private data local. The training is iterative.
Per iteration, the federator sends the current global model to the users, who update it based on their local training data, and return the result is sent back to the federator.
Using an aggregation rule, the federator combines the users' updates into the new global model. The process repeats until the model attains certain converge criteria.
FL addresses the privacy concerns of traditional centralized machine learning by transmitting users' local model updates instead of their private data directly. However, the local model updates, i.e. gradients or weights, may contain sensitive information about the users' data. Upon receiving the local updates, the federator can perform gradient inversion attacks <cit.> <cit.> to reconstruct the users' private training data. Private aggregation protocols <cit.> <cit.> are introduced to ensure that the federator obtains an aggregate of the model updates without revealing any additional information about individual updates.
Beyond privacy, FL gives rise to security concerns caused by users
sending corrupt updates. We call malicious users with arbitrary behavior Byzantine. Using simple linear aggregation as in FedAvg <cit.>, even a single Byzantine worker can force arbitrary aggregation results <cit.>, leading the model to sub-optimal solutions or divergence. The primary countermeasure is removing outliers in the users' local updates.
Many Byzantine-resilient aggregations have been proposed <cit.>, in which the federator removes outliers by observing the individual updates, thereby compromising the users' privacy.
This highlights an inherent tension between Byzantine-resilience and privacy. While the former requires access to the individual local updates to prune outliers, the latter requires concealing the individual local updates. BREA <cit.> and ByzSecAgg <cit.> address this problem by utilizing secret sharing schemes to make Krum <cit.>, a distance-based Byzantine-resilient aggregation rule, privacy-preserving. Both schemes are computationally private and leak the pairwise distances between local updates to the federator.
Moreover, the encoding polynomial of the pairwise distances in BREA is not completely random; hence, the federator may obtain extra knowledge (cf. Section <ref> for more details). ByzSecAgg <cit.> alleviates this problem by additional re-randomization. Other Byzantine-resilient secure aggregation schemes <cit.> are either based on clustering and compromise privacy, or require two federators.
Currently, there is no Byzantine-resilient scheme without privacy compromises in the literature.
For this reason, we propose ByITFL (pronounced “byte FL”), a Byzantine-resilient and information-theoretically (IT) private secure aggregation scheme. For Byzantine-resilience we build on FLTrust <cit.>. The federator collects a small root dataset to compute a federator model update and computes a trust score (TS) for each user based on the relative direction of the user's and federator's model updates.
FLTrust uses the rectified linear unit function ReLU(x)=max(0,x) in computing the TSs. We approximate ReLU(x) by a polynomial of degree k to enable an IT private protocol. Each user's model update is embedded into a finite field through a stochastic quantizer, partitioned into m sub-vectors, secret shared with all users by Lagrange Coded Computing (LCC) <cit.>, and verified against corruptions using an IT verifiable secret sharing (ITVSS) scheme <cit.>, <cit.>.
Re-randomization <cit.> is required before reconstructing the aggregation result to perfectly hide the local model updates.
From the perspective of LCC, Byzantine users can be seen as errors and dropouts as erasures in a Reed-Solomon code, cf. <cit.>.
The federator decodes the aggregate and updates the global model.
With k the degree of the approximation polynomial and m the number of sub-vectors, the proposed scheme is resilient against any b Byzantine users, IT private against the federator and against any t colluding users, and robust against any e dropouts, as long as the number of users n satisfies n ≥ 2b+(k+1)·(m+t-1)+e+1.
§ SYSTEM MODEL AND PRELIMINARIES
We use [n] to denote the set of positive integers {1,⋯,n} and use ⌊ x ⌋ for the largest integer less than or equal to x. All vectors are denoted in bold type and scalars are denoted in normal type. I(X;Y) denotes the mutual information between random variables X and Y. H(X) denotes the entropy of X.
§.§ System Model
We consider FL with a semi-honest federator and n users, including b Byzantine users, t colluding users and e dropout users, as illustrated in Fig. <ref>. Each user i holds a private local dataset D_i; in addition, we require the federator to collect a small dataset D_0, called the root dataset.
The federator possesses a d-dimensional global model w ∈ ℝ^d and coordinates the training process. The goal is to train the global model using the private data held by the users to find the optimal global model w^* as the solution to the optimization problem w^* = argmin_w F(w),
where ∇ f(ξ, w) is an unbiased estimator of the true gradient ∇ F(w), i.e., ∇ F(w)=𝔼_ξ∼𝒟[∇ f(ξ,w)]. This is done iteratively. Specifically, in each global iteration, the federator broadcasts the current global model w^(g) to all users. Each user i, for i ∈ [n], initializes its local model to the current global model, i.e., w^(0)_i=w^(g), and updates it for one or more local iterations based on its local dataset D_i

w^(l+1)_i = w^(l)_i-η_u ·∇ f(D_i; w_i^(l)),

where η_u is the local learning rate and l is the local iteration. Upon finishing all local training iterations, the users send the local model update

g_i=w_i-w^(g)

to the federator. Meanwhile, the federator trains on the small root dataset to obtain g_0, which is assumed to be public. Upon receiving the local updates, the federator aggregates them according to some aggregation rule AGG, i.e., g = AGG(g_0, g_1, ⋯, g_n),
and computes the global model for the next iteration [
Although we present our scheme in the simple gradient descent setting, it does not depend on the exact update rule and is applicable to, e.g., momentum and higher order methods and adaptive learning rate schedules. Similarly, it is compatible with additional privacy mechanisms based on adding noise to updates, like differential privacy <cit.>.]

w^(g+1) = w^(g)-η· g,

where η is the global learning rate.
§.§ Threat Model and Defense Goals
We focus on curious entities with unlimited computing resources, requiring perfect IT privacy. We consider b Byzantine attackers. They may arbitrarily deviate from the protocol
and have access to all users' datasets.
ByITFL should be resilient even when up to b such users collaboratively misbehave.
Our scheme guarantees the privacy of the honest users' local model updates in each iteration. Up to t users may collude and share information with each other to guess the private data of honest users. The federator is honest-but-curious; it honestly conducts the protocol but tries to infer as much sensitive information as possible.
Therefore, for each training iteration, the privacy constraint for ByITFL guarantees that, once knowing the current global model, which is required for the learning task, and the datasets of the colluding users, which may give information about what the datasets of the honest users look like, no group of up to t colluding users can learn any additional information about the local model updates of the other honest users:

I(g_[n]∖𝒯; g_𝒯, M_𝒯 | D_𝒯, w^(g)) = 0,

and, knowing the current global model and the root dataset, the federator should not gain any information about the local model updates of the honest users beyond the aggregation:

I(g_[n]∖𝒯; g_0, M_f | D_0, w^(g), g) = 0,

where 𝒯 is the set of colluding users, |𝒯|=t, M_𝒯 denotes the messages received by the colluding parties and M_f denotes the intermediate messages received by the federator.
We consider the possibility of a subset of up to e users experiencing delays or dropping out during the protocol. The protocol should be IT private against the honest-but-curious federator and against any collusion of up to t users, robust against b Byzantine users, and at the same time be able to tolerate up to e users staying silent during the execution.
§ BYITFL
We present ByITFL, which leverages the ability of FLTrust <cit.> to provide resilience against Byzantine attacks. Assuming the federator holds a small training dataset and performs local training to obtain a federator model update, ByITFL approximates the ReLU function used to compute the trust scores in FLTrust by a polynomial and uses LCC to provide IT privacy against eavesdroppers. ByITFL consists of the following five main steps, which we will detail in the sequel:
* Users normalize and quantize their local model updates. The federator model update is treated accordingly.
* The normalized updates are partitioned into smaller sub-vectors and secret shared using LCC and ITVSS.
* Users validate the normalization based on the secret shares from other users.
* Users compute a secret representation of the aggregation result by evaluating the target polynomials.
* The federator receives shares of the aggregation from the users to reconstruct the secure aggregation by decoding an error correcting code
and updates the global model.
§.§ Normalization and Quantization
To defend against Byzantine attacks performed on the magnitude, the federator and all users first normalize their model updates g_i to unit vectors

y_i = g_i/‖g_i‖ , ∀ i ∈{0,1,⋯,n},

so that the impact of extremely large/small local updates, more likely originating from Byzantine users, can be eliminated.
Since the training process is performed in the real domain and LCC (like every IT private secret sharing) works over finite fields, it is essential to transfer the normalized model updates y_i ∈ℝ^d to vectors in a prime field y̅_i ∈𝔽_p^d, where p is a large prime. Therefore, the users apply an element-wise stochastic quantizer Q_q(x) with 2q+1 quantization intervals as in <cit.>, <cit.>. The relation between p and q is explained later.
Note that the stochastic rounding is unbiased, i.e., 𝔼_Q[Q_q(x)]=x. Letting ϕ(x) = (x+p) mod p map integers to values in 𝔽_p, the quantization is defined to be

y̅_i := ϕ(q· Q_q(y_i)) .
§.§ Sharing of the Normalized Model Updates
The federator and the users first partition their normalized model updates y̅_i into smaller sub-vectors

y̅_i = [y̅_i^(1), y̅_i^(2), ⋯, y̅_i^(m)]^T, ∀ i ∈{0,1,⋯,n},

where each sub-vector is of size d/m and m ∈ [⌊(n-e+1)/2⌋-b-t].
Since the federator model update is assumed public, the federator broadcasts the sub-vectors of y̅_0 to the users. Each user i uses LCC <cit.> to secret share y̅_i with all users by the degree-(m+t-1) encoding polynomial

u_i(z) = ∑_j ∈ [m] y̅_i^(j)·∏_l ∈ [m+t]∖{j} (z-β_l)/(β_j-β_l)
+ ∑_j ∈ [t] r_i^(j)·∏_l ∈ [m+t]∖{m+j} (z-β_l)/(β_m+j-β_l), ∀ i ∈ [n],

where β_1, ⋯, β_m+t are m+t distinct elements from 𝔽_p and the r_i^(j)'s are chosen independently and uniformly at random from 𝔽_p. Note that u_i(β_1)=y̅_i^(1), ⋯, u_i(β_m)=y̅_i^(m), and the finite field size should be large enough to avoid any wrap-around, as described in Subsection <ref>. Secret shares are computed by evaluating u_i(z) at n distinct values {α_l}_l ∈ [n], selected from 𝔽_p such that {α_l}_l ∈ [n]∩{β_l}_l ∈ [m+t]=∅. Hence, each user j receives a secret share of y̅_i from every other user i, i.e., y̅_i,j=u_i(α_j), which is a vector of size d/m, for i,j ∈ [n].
Note that we leverage the ITVSS protocol from <cit.> to prevent Byzantine users from misbehaving in the secret sharing phase.
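The sharing step can be illustrated with the following minimal sketch, which implements only the Lagrange encoding above and omits the ITVSS verification layer; all names and parameters are illustrative:

```python
import random

def lcc_shares(subvectors, t, alphas, betas, p):
    """Minimal Lagrange-coded sharing over F_p (ITVSS verification omitted).
    subvectors: list of m equal-length lists of field elements (the secret).
    Returns one share (a list of field elements) per evaluation point alpha."""
    m, dim = len(subvectors), len(subvectors[0])
    # Append t uniformly random masking vectors for t-privacy.
    data = subvectors + [[random.randrange(p) for _ in range(dim)]
                         for _ in range(t)]

    def coeff(j, z):
        # Lagrange basis polynomial through the beta points, evaluated at z mod p.
        num = den = 1
        for l in range(m + t):
            if l != j:
                num = num * (z - betas[l]) % p
                den = den * (betas[j] - betas[l]) % p
        return num * pow(den, -1, p) % p      # modular inverse (Python >= 3.8)

    return [[sum(coeff(j, a) * data[j][k] for j in range(m + t)) % p
             for k in range(dim)]
            for a in alphas]
```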
§.§ Validation of Normalization
Malicious users may misbehave during normalization.
Thus, upon receiving the secret shares, each user i verifies correct normalization by locally computing the squared ℓ2-norm ‖y̅_j,i‖_2^2 of the secret share it holds for each j ∈ [n] and sending the computed shares to the federator.
This is possible due to LCC and a re-randomization step before sending computations to the federator, which will be detailed in the next subsection. Upon receiving the computation results, the federator utilizes error-correction decoding of the underlying Reed-Solomon code, cf. <cit.>, to reconstruct ‖y̅_i‖_2^2 for each y̅_i and checks whether it is within a certain interval, i.e.,

| ‖y̅_i‖_2^2 - ϕ(q· Q_q(1))^2 | < ε· q^2,

where ε is a predefined threshold that can be set empirically.
Note that the error correction requires the total number of users to satisfy n ≥ 2b+2(m+t-1)+e+1. The interval accounts for the accuracy loss due to quantization. If any user does not pass the normalization check, the federator marks them as Byzantine and excludes them from future computations.
§.§ Users Secure Computation
In FLTrust <cit.>, the federator assigns to each user a trust score
TS_i = ReLU(cos(θ_i)), ∀ i ∈ [n],

where θ_i is the angle between the federator's and the user's model update. The federator aggregates the local model updates by averaging the normalized updates weighted by their trust scores.
Making FLTrust IT private is not straightforward, which is why we propose to approximate the ReLU by a degree-k polynomial function h(x)=h_0+h_1x+ ⋯ + h_kx^k.
Therefore, the trust score for each user becomes

TS_i ≈ h(cos(θ_i))=h(⟨y̅_0, y̅_i⟩), ∀ i ∈ [n],

and the aggregation result is

g = ‖g_0‖/(∑_i∈ [n] TS_i) ·∑_i∈ [n](TS_i ·y̅_i)
= ‖g_0‖·ν/ρ,

where ρ = ∑_i∈ [n] h(⟨y̅_0, y̅_i⟩) and ν = ∑_i∈ [n](h(⟨y̅_0, y̅_i⟩) ·y̅_i).
Since the federator possesses g_0, for computing the aggregation in a privacy-preserving manner the federator only needs to obtain the value of ν/ρ without learning the individual users' private information beyond this quotient. The colluding users should learn nothing about the other honest users during the computation.
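As a cleartext reference of this aggregation rule (no quantization or secret sharing), the following sketch may help; the coefficient vector h_coeffs of the ReLU surrogate is an assumed input:

```python
import numpy as np

def fltrust_poly_aggregate(g0, updates, h_coeffs):
    """Cleartext reference: trust scores from a polynomial ReLU surrogate h,
    applied to normalized updates, then the weighted average scaled by ||g0||."""
    y0 = g0 / np.linalg.norm(g0)
    ys = [g / np.linalg.norm(g) for g in updates]
    h = np.polynomial.Polynomial(h_coeffs)          # h(x) = h_0 + ... + h_k x^k
    ts = [h(float(np.dot(y0, y))) for y in ys]      # approximate trust scores
    rho = sum(ts)                                   # denominator
    nu = sum(t * y for t, y in zip(ts, ys))         # numerator vector
    return np.linalg.norm(g0) / rho * nu

# Toy run with h approximating ReLU crudely as 0.5 + 0.5 x (for illustration only).
g0 = np.array([1.0, 0.0])
print(fltrust_poly_aggregate(g0, [np.array([0.9, 0.1]), np.array([-1.0, 0.0])],
                             h_coeffs=[0.5, 0.5]))
```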
Privacy Against Colluding Users:
Both ρ and ν are polynomial functions of the model updates y̅_0 and y̅_i for i ∈ [n], where y̅_0 is public and the y̅_i's are secret shared among the users using LCC. It is worth mentioning that LCC allows the computation of an arbitrary polynomial f of degree deg(f) over its secret. Suppose user i holds a secret s_i; the user partitions it and shares it among the users via a degree-(m+t-1) encoding polynomial u_i(z). Each user j, holding its secret share s_i,j=u_i(α_j), is able to compute f(s_i,j)=f(u_i(α_j)) locally, which is an evaluation of the resulting polynomial f(u_i(z)) at the point α_j. Given at least (m+t-1)·deg(f)+1 evaluations from the users, the resulting polynomial f(u_i(z)) can be interpolated. The desired computation is obtained by evaluating the polynomial f(u_i(z)) at the points {β_l}_l ∈ [m], i.e., f(s_i)=[f(s_i^(1)),⋯, f(s_i^(m))]^T=[f(u_i(β_1)),⋯, f(u_i(β_m))]^T.
Hence, it is possible to perform the polynomial computations of ρ and ν on the secret shares, such that each user obtains an evaluation point of ρ and ν. This guarantees that any set of up to t users is not able to learn anything from the shares.
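To illustrate this property, the following sketch (reusing lcc_shares from above, with toy parameters) lets every user square its share locally and recovers the squared secret by interpolation; it is a plaintext demonstration, not the full protocol:

```python
def lagrange_interpolate(points, z, p):
    """Interpolate the unique degree-(len(points)-1) polynomial through
    `points` = [(x, y), ...] over F_p and evaluate it at z."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for l, (xl, _) in enumerate(points):
            if l != j:
                num = num * (z - xl) % p
                den = den * (xj - xl) % p
        total = (total + yj * num * pow(den, -1, p)) % p
    return total

# Share a scalar s = 42 with m = 1, t = 1; every user squares its share.
p, betas, alphas = 2**13 - 1, [1, 2], [3, 4, 5, 6, 7]
shares = lcc_shares([[42]], t=1, alphas=alphas, betas=betas, p=p)
squared = [(a, sh[0] ** 2 % p) for a, sh in zip(alphas, shares)]
# f(u(z)) has degree (m+t-1)*deg(f) = 2, so any 3 evaluations suffice.
print(lagrange_interpolate(squared[:3], z=betas[0], p=p))  # -> 42**2 = 1764
```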
Privacy Against the Federator:
Privacy against the federator is not yet guaranteed: the reconstructions of the computation results, i.e., ‖y̅_i‖_2^2, ρ and ν, cannot perfectly hide the secret values y̅_1, ⋯, y̅_n from the federator. LCC is a linear secret sharing scheme that is additively, but not multiplicatively, homomorphic. Given two secrets a and b shared among n users with encoding polynomials u_a(z) and u_b(z), each user i, holding the secret shares a_i=u_a(α_i) and b_i=u_b(α_i), is able to locally compute the sum of the shares a_i+b_i=u_a(α_i)+u_b(α_i)=u_a+b(α_i), which perfectly hides the secrets.
This property does not hold for multiplication of shares <cit.>. The product of a_i and b_i results in a secret share of u_a(z)· u_b(z), whose evaluation at β_1 is indeed a · b,
but which is not a completely random polynomial perfectly hiding the secret, i.e., u_a(z)· u_b(z) ≠ u_a· b(z).
The federator can therefore learn additional information about a and b beyond a· b.
We follow the re-randomization step from <cit.>, which involves sub-sharing the users' secret shares using ITVSS <cit.>, and linearly combining the sub-shares to construct re-randomized secret shares.
The users now own re-randomized shares of ρ and ν. It remains to ensure that the federator obtains the quotient ν/ρ without gaining any additional information about ρ and ν.
Therefore, each user i
1) chooses an independent value z_i uniformly at random from 𝔽_p and secret shares it by LCC and ITVSS among all users,
2) adds the shares of the z_j's from all other users j and obtains a share of[The case z=0 can be avoided by minor changes, omitted for brevity.] z = ∑_j∈[n] z_j.
Each user multiplies the re-randomized shares of ρ and ν by its share of z, performs another re-randomization and sends the resulting shares of ρ· z and ν· z to the federator.
§.§ Secure Aggregation
The federator receives the secret shares of ρ· z and ν· z, where the degree of the encoding polynomial for
ρ· z is k·(m+t-1) and for ν· z is (k+1)·(m+t-1). With a sufficient number of users sending evaluations to the federator, the federator is able to leverage the error-correction property of Reed-Solomon codes <cit.> to decode the values of ρ· z and ν· z. Therefore, we require the total number of users in the system to satisfy n ≥ 2b+(k+1)·(m+t-1)+e+1.
Upon decoding the correct values, the federator computes (ν· z)/(ρ· z) = ν/ρ, converts the result from the finite field back to the real domain by de-quantizing with Q_q(x)^-1 and demapping with ϕ^-1,
and computes the global model for the next iteration. To ensure the correctness of the result, none of the computations should cause a wrap-around in the finite field. Each entry of the normalized quantized update is in the range -q to q; hence the dot product is in the range -dq^2 to dq^2. Scaling the update entries by this value and summing over all users results in the range -ndq^3 to ndq^3. Accounting for the 0 value, we thus require p ≥ 2ndq^3+1.
§ THEORETICAL ANALYSIS
We analyze the properties of our proposed scheme in theory, beginning with the privacy guarantee achieved by ByITFL.
ByITFL with n ≥ 2b+(k+1)·(m+t-1)+e+1 guarantees IT privacy of the honest users' local model updates against any t colluding users and against the federator, according to (<ref>) and (<ref>).
We first prove privacy against any t colluding users according to (<ref>):

I(g_[n]∖𝒯; g_𝒯, M_𝒯 | D_𝒯, w^(g))
= H(g_[n]∖𝒯 | D_𝒯, w^(g)) - H(g_[n]∖𝒯 | g_𝒯, M_𝒯, D_𝒯, w^(g))
= H(g_[n]∖𝒯, D_𝒯, w^(g)) - H(D_𝒯, w^(g))
- H(g_[n]∖𝒯, g_𝒯, M_𝒯, D_𝒯, w^(g)) + H(g_𝒯, M_𝒯, D_𝒯, w^(g))
= H(g_[n]∖𝒯, D_𝒯, w^(g)) - H(D_𝒯, w^(g))
- H(g_[n]∖𝒯, M_𝒯, D_𝒯, w^(g)) + H(M_𝒯, D_𝒯, w^(g)),

where the last equation follows because g_𝒯 is a deterministic function of D_𝒯 and w^(g). We then consider the exchanged messages M_𝒯, which include the shares of the normalized local model updates y̅_i and the sub-shares from the re-randomization step in Step C and Step D of the scheme, i.e., when computing the squared ℓ2-norms ‖y̅_i‖_2^2 to check the correctness of the normalization and the two sums ρ and ν in the aggregation. We can leverage the privacy guarantees of LCC <cit.>, the re-randomization step <cit.>, <cit.> and ITVSS <cit.>. When the number of users satisfies n ≥ 2b+(k+1)·(m+t-1)+e+1, the exchanged messages M_𝒯 observed by the colluding users are completely random and independent of g_[n]∖𝒯, D_𝒯 and w^(g), i.e.,

H(g_[n]∖𝒯, M_𝒯, D_𝒯, w^(g))
= H(g_[n]∖𝒯, D_𝒯, w^(g)) + H(M_𝒯),
H(M_𝒯, D_𝒯, w^(g))
= H(D_𝒯, w^(g)) + H(M_𝒯).

By substituting the above into (<ref>) we prove (<ref>), showing that ByITFL is IT private against any t colluding users.
For privacy against the honest-but-curious federator, we need to prove (<ref>). We have

I(g_[n]∖𝒯; g_0, M_f | D_0, w^(g), g)
= H(g_0, M_f | D_0, w^(g), g) - H(g_0, M_f | g_[n]∖𝒯, D_0, w^(g), g)
= H(g_0 | D_0, w^(g), g) + H(M_f | g_0, D_0, w^(g), g)
- H(g_0 | g_[n]∖𝒯, D_0, w^(g), g) - H(M_f | g_0, g_[n]∖𝒯, D_0, w^(g), g)
= H(M_f | g_0, D_0, w^(g), g) - H(M_f | g_0, g_[n]∖𝒯, D_0, w^(g), g),

where the last equation holds because g_0 is a deterministic function of D_0 and w^(g). With regard to the exchanged messages M_f, we need to consider: 1) the computed shares of ‖y̅_i‖_2^2, ρ· z and ν· z sent by the users, which are completely random, i.e., uniformly distributed and independent of g_0, g_[n]∖𝒯, D_0, w^(g), g, by leveraging the privacy guarantee of the re-randomization step <cit.>, <cit.>, and 2) the reconstructed values of ‖y̅_i‖_2^2 and ν/ρ. Since ‖y̅_i‖_2^2 lies within a certain range for all possible model updates (ideally equal to one), the value of ‖y̅_i‖_2^2 is also independent of g_0, g_[n]∖𝒯, D_0, w^(g), g. Regarding ν/ρ, we have H(ν/ρ | g, g_0)=0 since g = ‖g_0‖·ν/ρ. We denote the computed shares as c, and have

H(M_f | g_0, D_0, w^(g), g)
= H(c, ‖y̅_i‖_2^2, ν/ρ | g_0, D_0, w^(g), g)
= H(c, ‖y̅_i‖_2^2 | g_0, D_0, w^(g), g)
+ H(ν/ρ | c, ‖y̅_i‖_2^2, g_0, D_0, w^(g), g)
= H(c, ‖y̅_i‖_2^2 | g_0, D_0, w^(g), g)
= H(c, ‖y̅_i‖_2^2),

and H(M_f | g_0, g_[n]∖𝒯, D_0, w^(g), g) = H(c, ‖y̅_i‖_2^2) for the same reason. This concludes the proof of IT privacy against the federator, and hence the proof of Theorem 1.
ByITFL requires each user to communicate O(d/m · n^3 + n^4) and the federator O(d/m · n + n^2) scalars. The computation cost is O((d/m · n^3 + n^4)log^2 n loglog n) and O((d/m · n + n^2)log^2 n loglog n), respectively.
For each user sharing a single scalar, the computation complexity of the encoding in LCC is O(n log^2 n loglog n) <cit.> and that of ITVSS <cit.> is O(n^2 log^2 n). Since the re-randomization step <cit.> involves sub-sharing each secret share using VSS and a linear combination of the sub-shares, its computation complexity is O(n^2 log^2 n loglog n + n^3 log^2 n). The communication cost for the ITVSS is O(n^2), and O(n^3) for the re-randomization. With respect to the federator, the computation cost for decoding a single scalar using error-correction decoding is O(n log^2 n loglog n). The remaining proof follows counting arguments, omitted here for brevity.
In Table <ref>, we compare the communication and computation complexity of ByITFL with respect to n, d and the partitioning parameter m to the previous solutions BREA and ByzSecAgg. In ByITFL, the large communication and computation complexity for the users stems from the ITVSS scheme and the re-randomization. This is the cost of achieving IT privacy, while BREA and ByzSecAgg rely on computational privacy. By partitioning the model updates into sub-vectors, the complexities can be reduced.
§ EXPERIMENTS
We numerically evaluate the performance of ByITFL using the approximated ReLU function, and demonstrate its convergence and Byzantine-resilience compared to FedAvg and FLTrust.
We consider 10-class image classification on MNIST with uniform data distribution across n=40 users and a three-layer neural network.
50% of the users are Byzantine (b=20) and perform either a trim attack, an untargeted local model poisoning attack presented in <cit.>, or a label flipping attack with the same setting as <cit.>.
We set q=1024 and k=6. As in <cit.>, the size of the root dataset D_0 is 100.
We set ε=0.02 for the normalization check.
The ReLU function is approximated on (-q^2, q^2)=(-1024^2, 1024^2), the range of values taken by the cosine similarity after quantization. Fig. <ref> shows the ReLU approximation for different polynomial degrees k ∈{2,4,6,8}.
Fig. <ref> shows the convergence of ByITFL compared to FedAvg and FLTrust under the trim and label-flipping attacks. Notice that FedAvg cannot defend against Byzantine attacks, while ByITFL has comparable performance to FLTrust.
§ CONCLUSION
We proposed ByITFL, a Byzantine-resilient secure aggregation scheme for FL that guarantees IT privacy. ByITFL requires the federator to hold a root dataset for reference, used to scale the users' local model updates during the aggregation. To enable IT privacy, we used a suitable approximation of ReLU to compute trust scores and secret sharing techniques to ensure user privacy. We analyzed the achieved privacy and complexity of the algorithm. Through experiments, we demonstrated convergence in the presence of Byzantine users. Extending the established methods to wireless settings such as in <cit.> is an interesting direction. Further, we consider the investigation of targeted attacks tailored to ByITFL as important future work.
|
http://arxiv.org/abs/2405.10264v1 | 20240516171539 | Architectures and random properties of symplectic quantum circuits | [
"Diego García-Martín",
"Paolo Braccia",
"M. Cerezo"
] | quant-ph | [
"quant-ph",
"cs.LG"
] |
Information Sciences, Los Alamos National Laboratory, 87545 NM, USA
Theoretical Division, Los Alamos National Laboratory, 87545 NM, USA
Information Sciences, Los Alamos National Laboratory, 87545 NM, USA
Parametrized and random unitary (or orthogonal) n-qubit circuits play a central role in quantum information. As such, one could naturally assume that circuits implementing symplectic transformation would attract similar attention. However, this is not the case, as 𝕊ℙ(d/2)—the group of d× d unitary symplectic matrices—has thus far been overlooked. In this work, we aim at starting to right this wrong. We begin by presenting a universal set of generators 𝒢 for the symplectic algebra i𝔰𝔭(d/2), consisting of one- and two-qubit Pauli operators acting on neighboring sites in a one-dimensional lattice. Here, we uncover two critical differences between such set, and equivalent ones for unitary and orthogonal circuits. Namely, we find that the operators in 𝒢 cannot generate arbitrary local symplectic unitaries and that they are not translationally invariant. We then review the Schur-Weyl duality between the symplectic group and the Brauer algebra, and use tools from Weingarten calculus to prove that Pauli measurements at the output of Haar random symplectic circuits can
converge to Gaussian processes. As a by-product, such analysis provides us with concentration bounds for Pauli measurements in circuits that form t-designs over 𝕊ℙ(d/2). To finish, we present tensor-network tools to analyze shallow random symplectic circuits, and we use these to numerically show that computational-basis measurements anti-concentrate at logarithmic depth.
Architectures and random properties of symplectic quantum circuits
M. Cerezo
==================================================================
§ INTRODUCTION
The underlying mathematical structures behind the circuits implemented in the standard gate model of quantum computation are those of unitaries and groups. For instance, given an available set of implementable gates one can wonder what kind of interesting evolutions are available by their composition. Here, one can study specific combinations of gates (creating a single unitary to solve a given problem), random combinations (e.g., average properties as a function of the number of gates taken), or properties of all possible combinations (what is the emerging group structure).
The connection between quantum computing and group theory has led to the discovery of universal gate sets capable of approximating any evolution in 𝕌(d), the unitary group of dimension d <cit.>. Moreover, researchers have also studied architectures that can only implement unitaries from a subgroup of 𝕌(d), such as circuits composed of gates from the Clifford group <cit.>, or from some representation of a Lie group, like matchgate circuits <cit.>, group-equivariant circuits <cit.> or circuits with translationally-invariant generators <cit.>. The analysis of such architectures has led to insightful results on their classical simulability <cit.>, their use in quantum machine learning <cit.>, and on how imposing locality in the generating gates can lead to failures to achieve (subgroup) universality <cit.>.
In the previous context, the study of random quantum circuits has been particularly active <cit.>. These circuits exhibit the appealing feature of being analytically tractable, e.g., via Weingarten calculus <cit.>, providing a test-bed for quantum advantage in sampling problems <cit.> and for probing quantum many-body dynamics and the emergence of quantum chaos <cit.>.
For instance, the convergence of random circuits to t-designs over 𝕌(d) and the appearance of the anti-concentration phenomenon
have been the subject of numerous works <cit.>. Crucially,
the study of random circuits has been mainly focused on the unitary group, with significantly less attention being paid to circuits sampled from Lie subgroups of 𝕌(d) (with some notable recent exceptions <cit.>).
In this work, we contribute to the body of knowledge of circuits that belong to subgroups of 𝕌(d) by studying quantum circuits implementing transformations from the compact symplectic group (d/2) (see Fig. <ref>). This Lie group consists of all the d× d unitary symplectic matrices, which are unitaries that preserve a non-degenerate anti-symmetric bilinear matrix Ω. Despite its importance in random matrix theory <cit.>, and classical <cit.> and quantum <cit.> dynamics, this group has been mostly neglected in the recent literature.
We begin by discussing how the non-uniqueness of Ω is a salient and important feature of (d/2) that is not present when studying circuits that implement evolutions from the unitary or orthogonal groups. This up-to-congruence freedom can be exploited to show that when the canonical form of Ω is used, we can find a set of generators for the Lie algebra i𝔰𝔭(d/2) consisting of one- and two-qubit Paulis acting on neighboring sites in a one-dimensional lattice. This set of generators leads to quantum circuit architectures that implement symplectic transformations and that are universal in (d/2). Remarkably, these circuits cannot be built from translationally-invariant local generators <cit.>. In fact, circuits built from locally-symplectic quantum gates do not necessarily produce globally-symplectic transformations, but instead span the entire special unitary group 𝕊𝕌(d).
After identifying how to produce symplectic evolutions, we review the Schur-Weyl duality between the symplectic group and the Brauer algebra, showing that it can be used along with the Weingarten calculus <cit.> (which we present via tensor notation) to compute average properties of symplectic random circuits. In particular, we prove that the outputs of Haar random symplectic circuits can converge in distribution to Gaussian Processes (GPs) when the measurement operator is traceless and involutory. The fact that the outcomes of Haar random symplectic circuits form GPs allows us to provide concentration bounds and show that Pauli expectation values concentrate exponentially in the Hilbert space dimension. That is, doubly exponentially in the number of qubits. Furthermore, we give concentration bounds for random circuits that form t-designs over (d/2). Finally, following the results in Ref. <cit.>, we present tensor-network-based tools capable of analyzing average properties of shallow symplectic random circuits. Notably, we use these to numerically show that computational-basis measurements appear to anti-concentrate at logarithmic depth, indicating that these circuits may be used in quantum supremacy experiments <cit.>.
§ PRELIMINARIES
In this section, we introduce some basic concepts that will be used throughout this work.
We begin by recalling that the standard representation of the compact symplectic group 𝕊ℙ(d/2):=𝕊ℙ(d;ℂ)∩𝕊𝕌(d) consists of all d× d unitary matrices (with d an even number), such that any S∈𝕊ℙ(d/2) satisfies the relation
S^TΩ S = Ω ,
where Ω is a non-degenerate anti-symmetric bilinear form. In other words, 𝕊ℙ(d/2) is the group of unitary matrices that preserve the product x⃗^T Ω y⃗ for vectors x⃗,y⃗∈ℂ^d. Then, we recall that the Lie algebra associated with 𝕊ℙ(d/2) is the symplectic Lie algebra, denoted as 𝔰𝔭(d/2), whose elements are d× d anti-Hermitian matrices, such that any M∈𝔰𝔭(d/2) satisfies
M^TΩ=-Ω M .
Moreover, the symplectic Lie algebra has dimension d(d+1)/2, so any orthogonal basis for it contains d(d+1)/2 elements.
Here, we remark that Ω in Eqs. (<ref>) and (<ref>) is not uniquely defined. Typically, one uses the Darboux basis—or canonical form– in which Ω takes the form
Ω=[ 0 𝕀_d/2; - 𝕀_d/2 0 ] ,
with 𝕀_d/2 being the d/2 × d/2 identity matrix. In this work we will assume that Ω is given by Eq. (<ref>), as any other non-degenerate anti-symmetric bilinear form Ω' can always be mapped to Ω by a change of basis Q Ω' Q^T=Ω, with a d× d orthogonal matrix Q (i.e., such that Q^T=Q^-1). To finish, we recall that Ω has the elementary properties
Ω^2=-𝕀_d , and ΩΩ^T=Ω^TΩ=𝕀_d .
§ PAULI OPERATOR BASIS FOR THE SYMPLECTIC LIE ALGEBRA
Let us now focus on the case when d=2^n so that the symplectic unitaries act on the Hilbert space ℋ=(ℂ^2)^⊗ n of n qubits. With this choice one can verify that
Ω=i Y⊗𝕀^⊗ n-1 ,
with Y the Pauli matrix and 𝕀 the 2× 2 identity. Here, we ask the following question: What is a natural choice for the basis elements of the standard representation of the symplectic Lie algebra 𝔰𝔭(d/2)? As we prove in Appendix <ref>, the following proposition holds.
A basis for the standard representation of the 𝔰𝔭(d/2) algebra is
B_𝔰𝔭(d/2)≡ i{{X,Y,Z}⊗ P_s } ∪ i{𝕀⊗ P_a} ,
where P_s and P_a belong to the sets of arbitrary symmetric and anti-symmetric Pauli strings on n-1 qubits, respectively, and 𝕀,X,Y,Z are the usual 2× 2 Pauli matrices.
We recall that {P_s} and {P_a} are composed of all Paulis acting on n-1 qubits with an even or odd number of Y's, respectively. It is interesting to note that Eqs. (<ref>) and (<ref>) reveal that the first qubit plays a privileged role. As we will see below, this asymmetry will translate into the structure of symplectic quantum circuits. In particular, it will be responsible for the lack of translational invariance in the generators of the circuit.
§ QUANTUM CIRCUITS FOR SYMPLECTIC UNITARIES
The fact that the matrices in 𝕊ℙ(d/2) are unitary implies that they can be implemented by quantum circuits. While some architectures for such symplectic unitaries have been found <cit.>, they do not make use of the canonical form of Ω and are composed of non-local gates obtained by either correlating parameters <cit.> or by using non-local generators <cit.>.
Our first contribution is to show that by taking Ω as in Eq. (<ref>), we can find a set of local generators for which circuits of the form

U= ∏_l e^ i θ_l H_l ,

where θ_l are real-valued parameters and H_l∈𝒢, are universal and can therefore produce any unitary in 𝕊ℙ(d/2). In particular, the following theorem, whose proof can be found in Appendix <ref>, holds.
The set of unitaries of the form in Eq. (<ref>), with generators taken from

𝒢 = {Y_i}_i=1^n∪{X_iY_i+1, Y_iX_i+1}_i=2^n-1∪{X_1}∪{Z_1 Z_2} ,

is universal in 𝕊ℙ(d/2), as

span_ℝ⟨ i𝒢⟩_ Lie= 𝔰𝔭(d/2) .

Here, ⟨ i𝒢⟩_ Lie is the Lie closure of i𝒢, i.e., the set of operators obtained by the nested commutation of the elements in i𝒢.
In Eq. (<ref>), X_i, Y_i and Z_i denote the Pauli operators acting on the i-th qubit.
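As a numerical sanity check of this generator set (not part of the proof), the following NumPy/SciPy sketch builds a random circuit from 𝒢 for n=3 qubits, with sites 0-indexed, and verifies that the resulting unitary satisfies S^TΩS=Ω:

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(ops):
    return reduce(np.kron, ops)

n, d = 3, 2 ** 3
Omega = kron([1j * Y] + [I2] * (n - 1))        # canonical Omega = iY (x) I^(n-1)

def pauli_on(paulis, sites):
    ops = [I2] * n
    for P, s in zip(paulis, sites):
        ops[s] = P
    return kron(ops)

# Generator set G of the theorem for n = 3 (theorem's i=2..n-1 -> sites 1..n-2).
G = ([pauli_on([Y], [i]) for i in range(n)]
     + [pauli_on([X, Y], [i, i + 1]) for i in range(1, n - 1)]
     + [pauli_on([Y, X], [i, i + 1]) for i in range(1, n - 1)]
     + [pauli_on([X], [0]), pauli_on([Z, Z], [0, 1])])

# A circuit of exponentials of iG with random angles is symplectic.
rng = np.random.default_rng(0)
S = np.eye(d, dtype=complex)
for H in G:
    S = S @ expm(1j * rng.uniform(0, 2 * np.pi) * H)
print(np.allclose(S.T @ Omega @ S, Omega))     # -> True
```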
Let us now discuss the implications of Theorem <ref>. First, we note that the quantum circuits obtained from the set of generators in Eq. (<ref>) can be implemented with one- and two-qubit gates acting on nearest neighbors on a one-dimensional chain of qubits with open boundary conditions (see Fig. <ref>). Moreover, each gate has an independent parameter. These features render the circuits readily implementable with the topologies and connectivities available in near-term quantum hardware.
A second important implication of Theorem <ref> is that symplectic circuits are not translationally invariant in the sense that the local generators {H_l} are not the same on each pair of adjacent qubits. This is in stark contrast with the unitary and orthogonal groups 𝕌(d) and 𝕆(d), as these can be constructed from translationally invariant generators <cit.>. As mentioned in the previous section, the lack of translational invariance for the symplectic group can be traced back to the asymmetric structure of the Ω=iY⊗𝕀^⊗ n-1 matrix.
In fact, we can see that to construct quantum circuits that implement symplectic transformations in 𝕊ℙ(d/2), one can choose local generators from the special orthogonal algebra 𝔰𝔬(4) acting on the last n-1 qubits, such as {Y⊗𝕀, 𝕀⊗ Y, X⊗ Y, Y⊗ X}. The reason is that i𝕀⊗ P_a belongs to B_𝔰𝔭(d/2) for all anti-symmetric Paulis P_a according to Eq. (<ref>), and this set is a basis for 𝔰𝔬(d/2).
Then, in order to generate 𝔰𝔬(d/2) on the last n-1 qubits, it suffices to employ the local generators of 𝔰𝔬(4) on each pair of nearest neighbors in those n-1 qubits <cit.>. Following this reasoning, we now need a different set of generators acting on the first pair of qubits. It can be shown that adding operators from the 𝔰𝔭(2) algebra such as {X⊗𝕀, Y⊗𝕀, Z⊗ Z} acting on the first pair of qubits completely generates 𝔰𝔭(d/2), and nothing else (see Appendix <ref>).
To finish, we note that we have defined symplectic transformations with respect to the canonical form of the Ω matrix in Eq. (<ref>). If we were to choose a different Ω, there always exist an orthogonal change of basis that would take as back to the canonical form, as explained in Sec. <ref>. This would then correspond to global unitaries acting at the beginning and end of the circuit.
§ CIRCUITS WITH LOCAL SYMPLECTIC GATES ARE NOT SYMPLECTIC
In the previous section we have shown that one can generate globally symplectic unitaries in 𝕊ℙ(d/2) by implementing locally symplectic unitaries on the first two qubits, plus orthogonal unitaries acting on the second through last qubits. This raises the question as to what happens if we construct a circuit where all gates are locally symplectic (including those acting on the second through last qubits). For example, we can consider circuits such as those in Fig. <ref>, where local gates from 𝕊ℙ(2) are implemented on neighboring qubits on a one-dimensional connectivity.
Following Proposition <ref>, one possible choice of generators for the local gates that produce universal 𝕊ℙ(2) circuits is {X⊗𝕀, Y⊗𝕀, 𝕀⊗ Y, X⊗ X}, since these suffice to generate all the basis elements of B_𝔰𝔭(2) via (nested) commutation. Given that this set of generators is translationally invariant, it falls under the general classification of Ref. <cit.>.
In particular, it is known that they produce unitary universal circuits (up to a global phase), that is, their Lie closure leads to 𝔰𝔲(d). For the sake of completeness, we formalize this claim in the following proposition, proved in Appendix <ref>.
The set of unitaries of the form in Eq. (<ref>), with generators taken from

𝒢_L = ⋃_i=1^n-1{X_i ,Y_i, Y_i+1, X_iX_i+1} ,

is universal in 𝕊𝕌(d), as

span_ℝ⟨ i𝒢_L⟩_ Lie= 𝔰𝔲(d) .
Ultimately, the expressive power of locally-symplectic circuits stems from the simple observation that the compact symplectic group is not amenable to the tensor product structure of the Hilbert space of qubits, in contrast to the orthogonal and unitary groups. More precisely, let U,O be unitary and orthogonal matrices, respectively. Then, 𝕀_s ⊗ U is unitary for any Hilbert space partition index s, and analogously for O. However, if S is a symplectic matrix, then 𝕀_s ⊗ S is not symplectic in general. Indeed, taking Ω from Eq. (<ref>), we have that (𝕀⊗ S)^T Ω (𝕀⊗ S)=iY⊗ S^T S, which is not equal to Ω unless S is also orthogonal. This implies that local symplectic generators that do not belong to the special orthogonal Lie algebra (e.g., X⊗𝕀 and X⊗ X) on the last n-1 qubits are no longer in the symplectic algebra when tensored with identities on the rest of the qubits. Hence, it is clear that quantum circuits with locally-symplectic gates will be able to generate non-symplectic transformations.
§ SYMPLECTIC WEINGARTEN CALCULUS
Now that we know how to construct quantum circuits that implement symplectic transformations, we turn to study their average properties. In particular, if we assume that the circuits that we implement sample unitaries according to the Haar measure over 𝕊ℙ(d/2), either exactly or approximately, we can leverage the tools from the symplectic Weingarten calculus <cit.>. For ease of notation, we will also use diagrammatic tensor notation to simplify computations. We refer the reader to Refs. <cit.> for an in-depth treatment of Weingarten calculus on the unitary and orthogonal groups from a quantum information perspective.
The goal of Weingarten calculus is to compute integrals of polynomials in the entries of matrices (and their complex conjugates) over the left-and-right-invariant Haar measure on a compact matrix Lie group G. This can be shown to be equivalent to computing matrix entries of the following operator,
^(t)_G[ X] =∫_G dμ(U) U^⊗ t X (U^†)^⊗ t .
Here, ^(t)_G[ X] is called the t-th fold twirl of X over G, X belongs to the set of bounded operators (^⊗ t) acting on ^⊗ t, and dμ(U) is the Haar measure on G. It is straightforward to show that the twirl is an orthogonal projector onto the t-th order commutant of the tensor representation of G, that is, the vector subspace of all matrices that commute with U^⊗ t for all U∈ G.
Hence, we can write
^(t)_G[ X] =∑_μ,ν W^-1_μν[P_ν X] P_μ ,
where the {P_μ} operators are a basis that spans (note that they need not be orthonormal, nor Hermitian), and W is the Gram matrix of the aforementioned basis with respect to the Hilbert-Schmidt inner product, i.e., it is the matrix whose entries are W_μν=[P_μ^† P_ν].
In summary, in order to compute , one needs to find a set of operators spanning , compute the corresponding Gram matrix, and invert it (or in some cases perform the pseudo-inverse).
Perhaps the main ingredient necessary for using Eq. (<ref>), is the knowledge of a basis for . While in some cases such basis might not be readily available, when G is the standard representation of a unitary, orthogonal or symplectic group, one can use
the Schur-Weyl duality to obtain such basis. In particular, when G is the unitary group, then is found to be spanned by a representation of the symmetric group S_t <cit.>, whereas if G is the orthogonal or the symplectic group, its commutant is spanned by some representation of the Brauer algebra 𝔅_t(δ) <cit.> (with δ=d for the orthogonal group, and δ=-d for the symplectic group).
We recall that the Brauer algebra 𝔅_t(δ) consists of all possible pairings of a set of size 2t. That is, given a set of 2t items, the elements of the Brauer algebra correspond to all possible ways of splitting them into pairs. This has two important implications. First, we can see that all the permutations in S_t are also in 𝔅_t(δ), as these corresponds to the pairings that can only connect the first t items to the remainder ones. Second, a straightforward calculation reveals that there are D_t=(2t)!/2^t t!=(2t-1)!! elements in the Brauer algebra. Here we also note that every element σ∈𝔅_t(δ) can be completely specified by t disjoint pairs, as
σ={{λ_1, σ(λ_1)}∪…∪{λ_t, σ(λ_t)}} .
In Fig. <ref>, we diagrammatically represent all the elements of σ∈𝔅_t(-d) for t=1,2 (as well as some for t=3) using tensor representation.
Additionally, a Brauer algebra 𝔅_t(δ) depends on a parameter δ and has the structure of a ℤ[δ]-algebra. This implies that when we multiply two elements in 𝔅_t(δ), we do not necessarily obtain an element from 𝔅_t(δ) but rather an element in 𝔅_t(δ) times an integer power of δ. Diagrammatically, this means that when we connect (multiply) two diagrams, closed loops can appear. Then, the power to which the factor δ is raised is equal to the number of closed loops formed.
While the previous determines how the abstract Brauer algebra is defined, we still need to specify how its elements are represented and how they act on ^⊗ t. In particular, we here consider the representation F_d:𝔅_t(-d)→(^⊗ t) such that
F_d(σ) = ∑_i_1,…,i_2t=1^d∏_γ=1^t Ω_σ(λ_γ)^h(λ_γ,σ(λ_γ))|i_t+1,i_t+2,…,i_2t⟩⟨i_1,i_2,…,i_t| Ω_σ(λ_γ)^h(λ_γ,σ(λ_γ))δ_i_λ_γ, i_σ(λ_γ) ,
where h(λ_γ,σ(λ_γ))=1 if both λ_γ,σ(λ_γ)≤ t or both λ_γ,σ(λ_γ)> t, and zero otherwise, and where Ω_σ(λ_γ) indicates that the Ω matrix acts on the σ(λ_γ)-th copy of the Hilbert space.
Equipped with the previous knowledge, let us consider specific values of t. In each case, we will present the basis elements of 𝔅_t(-d) as well as explicitly compute the formula for the twirl in Eq. (<ref>). First, we consider the case when t=1. As shown in Fig. <ref>, 𝔅_1(-d) contains a single element {{1,2}} whose representation is given by
F_d({{1,2}})=∑_i_1=1^d|i_1⟩⟨i_1|:=𝕀_d ,
which indeed confirms that the representation of 𝕊ℙ(d/2) is irreducible (the only element in the t=1 commutant is the identity). As such, we find
W=[ d ] ,
and thus
^(1)_𝕊ℙ(d/2)[X]=[X]/d 𝕀_d .
Then, when t=2, 𝔅_2(-d) contains three elements given by {{1,3},{2,4}}, {{1,4},{2,3}}, and {{1,2},{3,4}}, whose representations are
F_d({{1,3},{2,4}}) =∑_i_1,i_2=1^d|i_1i_2⟩⟨i_1i_2|:=𝕀_d⊗𝕀_d ,
F_d({{1,4},{2,3}}) =∑_i_1,i_2=1^d|i_1i_2⟩⟨i_2i_1|:= SWAP ,
F_d({{1,2},{3,4}}) =∑_i_1,i_2=1^d𝕀_d⊗Ω|i_1i_1⟩⟨i_2i_2|𝕀_d⊗Ω:=Π_s .
Recalling that the maximally-entangled Bell state |Φ^+⟩ between the two copies of the Hilbert space is |Φ^+⟩=1/√(d)∑_i=1^d|i,i⟩, we find that Π_s= d (𝕀_d ⊗Ω) |Φ^+⟩⟨Φ^+|(𝕀_d⊗Ω). The identification of Π_s with the Bell state shows that Π_s satisfies an analogue of the so-called ricochet property <cit.>,
(A⊗ B)Π_s =d (𝕀_d⊗ BΩ A^T )|Φ^+⟩⟨Φ^+| 𝕀_d ⊗Ω
=d (AΩ^T B^T⊗𝕀_d)|Φ^+⟩⟨Φ^+| 𝕀_d ⊗Ω ,
which allows one to readily verify that Π_s belongs to the second-order commutant of 𝕊ℙ(d/2).
In this t=2 case, the Gram matrix is
W= [ d^2 d -d; d d^2 d; -d d d^2 ] ,
leading to the formula for the two-fold twirl,
^(2)_𝕊ℙ(d/2)[X] =(d-1)[X] -[X SWAP] +[X Π_s]/d(d+1)(d-2) 𝕀_d⊗𝕀_d
+-[X] + (d-1)[X SWAP] -[X Π_s]/d(d+1)(d-2) SWAP
+[X] -[X SWAP] + (d-1) [X Π_s]/d(d+1)(d-2) Π_s .
We refer to Fig. <ref> for a visualization in tensor notation of how the elements of the Gram matrix (<ref>) are computed.
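To build confidence in this expression, here is a small self-contained Monte Carlo check of ours (not from the original text). The `haar_sp` sampler below is an illustrative implementation of a quaternionic Gram–Schmidt — a structured variant of the quaternionic-QR recipe cited later in the text — assuming the convention Ω=iY⊗𝕀_{d/2}; the averaged twirl at d=4 is then compared entrywise against the t=2 Weingarten formula above.

```python
import numpy as np

def haar_sp(d, rng):
    """Illustrative Haar sampler on Sp(d/2) in its complex d x d representation.
    Columns come in quaternionic pairs: column N+k equals Omega^T conj(q_k)."""
    N = d // 2
    Om = np.block([[np.zeros((N, N)), np.eye(N)],
                   [-np.eye(N), np.zeros((N, N))]])      # Omega = iY (x) I_N
    qs, ps = [], []
    for _ in range(N):
        g = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        if qs:
            Qm = np.column_stack(qs + ps)
            g -= Qm @ (Qm.conj().T @ g)
            g -= Qm @ (Qm.conj().T @ g)                  # repeated for stability
        g /= np.linalg.norm(g)
        qs.append(g)
        ps.append(Om.T @ g.conj())
    return np.column_stack(qs + ps)

d = 4
rng = np.random.default_rng(1)
Om = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])

Id2 = np.eye(d * d)
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0
phi = np.eye(d).reshape(-1) / np.sqrt(d)                 # |Phi+>
Pi_s = d * np.kron(np.eye(d), Om) @ np.outer(phi, phi) @ np.kron(np.eye(d), Om)

X = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
tX, tS, tP = np.trace(X), np.trace(X @ SWAP), np.trace(X @ Pi_s)
den = d * (d + 1) * (d - 2)
pred = (((d - 1) * tX - tS + tP) * Id2 + (-tX + (d - 1) * tS - tP) * SWAP
        + (tX - tS + (d - 1) * tP) * Pi_s) / den         # exact two-fold twirl

acc, M = np.zeros_like(pred), 20000
for _ in range(M):
    U = haar_sp(d, rng)
    U2 = np.kron(U, U)
    acc += U2 @ X @ U2.conj().T
print(np.max(np.abs(acc / M - pred)))                    # ~1e-2, shrinking as 1/sqrt(M)
```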
Given that the dimension of 𝔅_t(-d) is D_t=(2t-1)!!, keeping track of all the elements in the commutant of 𝕊ℙ(d/2) quickly becomes intractable as t grows. However, we can derive asymptotic formulas that can be used to perform calculations in the large d limit. Here, the key is to realize that the Gram matrix is given by
W = d^t(𝕀_D_t+1/d B) ,
with B a matrix whose entries are in 𝒪(1). We refer the reader to Appendix <ref> for additional details on why W takes this form.
Equation (<ref>) allows us to write
W^-1=1/d^t(𝕀_D_t+C) ,
where the matrix entries of C are in 𝒪(1/d). To see this, suppose we write B in a basis such that it is diagonal. Then, in this basis W^-1 is diagonal with entries [W^-1]_μμ=1/(d^t(1+λ_μ/d))=(1/d^t)(1-λ_μ/(d+λ_μ)), so let us write it as W^-1=1/d^t(𝕀_D_t+D). Since all the matrix entries of B are in 𝒪(1), so are its eigenvalues, i.e., λ_μ∈𝒪(1) ∀μ, which implies that the entries of D are at most 𝒪(1/d). Finally, C and D are related by a unitary change of basis, and therefore the matrix entries of C are suppressed as 𝒪(1/d).
Once we have found W^-1, all that is left is to evaluate Eq. (<ref>), which leads to
^(t)_𝕊ℙ(d/2)[X] =1/d^t∑_σ∈𝔅_t(-d)[X F_d(σ^T)] F_d(σ)
+1/d^t∑_σ,π∈𝔅_t(-d)c_π,σ [XF_d(σ^T)] F_d(π) ,
where the c_π,σ are the matrix entries of C in Eq. (<ref>), and thus are upper bounded as 𝒪(1/d). Moreover, we here recall that given some σ∈𝔅_t(-d), we define its transpose as σ^T={{λ_1+t, σ(λ_1)+t}∪…∪{λ_t+t , σ(λ_t)+t}}, where the sums are taken mod 2t.
Note that if we fix t and take the limit d→∞, the second sum in Eq. (<ref>) gets asymptotically suppressed with the Hilbert space dimension d.
§ GAUSSIAN PROCESSES FROM RANDOM SYMPLECTIC CIRCUITS
Recently it has been shown that Pauli measurement outcomes at the output of Haar random circuits sampled from 𝕌(d) or 𝕆(d) <cit.> (as well as the outputs of some shallow quantum neural networks <cit.>) converge in distribution to Gaussian Processes (GPs) under certain assumptions. In this section we will show that the asymptotic Weingarten tools previously presented can be used to prove that such phenomenon will also occur for Haar random symplectic quantum circuits.
In particular, we will consider a setting where we are given a set 𝒟={ρ_1,…,ρ_m} of real-valued n-qubit quantum states on a d-dimensional Hilbert space (i.e., ρ_j=ρ_j^T ∀ j). We then take the m states from 𝒟 and send them through a unitary U which is sampled according to the Haar measure over 𝕊ℙ(d/2). At the output of the circuit we measure the expectation value of a Pauli operator O taken from i𝔰𝔭(d/2) [Our results also hold if instead of a Pauli O from i𝔰𝔭(d/2) we take O'=S O S^†, with S an arbitrary matrix from (d/2).]. This leads to a set of quantities of the form
C(ρ_j)=[Uρ_j U O] ,
which we collect in a length-m vector
𝒞=(C(ρ_1),…, C(ρ_m) ) .
We will say that 𝒞 forms a GP iff it follows a multivariate Gaussian distribution, which we denote as (μ⃗,Σ⃗).[Alternatively, we can also say that 𝒞 forms a GP iff every linear combination of its entries follows a univariate Gaussian distribution.] We recall that a multivariate Gaussian (μ⃗,Σ⃗) is completely determined by its m-dimensional mean vector μ⃗=(𝔼[C(ρ_1)],…,𝔼[C(ρ_m)]), and its m× m dimensional covariance matrix with entries Σ⃗_jj'= Cov[C(ρ_j),C(ρ_j')], as all higher moments can be computed from μ⃗ and Σ⃗ alone via Wick's theorem <cit.>. Hence, in what follows we will determine conditions for which 𝒞 forms a GP, and report only its mean and its covariance matrix entries.
First, we can show that the following theorem holds (see Appendix <ref> for a proof).
Let 𝒞 be a vector of expectation values of the Hermitian operator O over a set of states from 𝒟, as in Eq. (<ref>). If [ρ_j ρ_j']∈Ω(1/(log(d))) and |[Ωρ_j Ωρ_j']|∈ o(1/(log(d))) ∀ j,j', then in the large d-limit 𝒞 forms a GP with mean vector μ⃗=0⃗ and covariance matrix
Σ⃗_j,j' = [ρ_jρ_j']/d .
Interestingly, we see here that the states for which Theorem <ref> holds and 𝒞 forms a GP are such that their inner products are at most polynomially vanishing with n. However, when we conjugate them by Ω = iY⊗𝕀_d/2, effectively leading to rotated states ρ̃_j= Y_1 ρ_j Y_1 (up to a global sign), the inner products between the ρ̃_j and the original ρ_j' vanish strictly faster than polynomially with n. We use precisely this condition to create a set 𝒟 of such states in Fig. <ref>, where we show that the distribution indeed converges to a multivariate Gaussian with positive correlation. In particular, we there consider a system of n=24 qubits and sample 10^4 independent unitaries from 𝕊ℙ(d/2).[Sampling from 𝕊ℙ(d/2) can be achieved by initializing a random d/2×d/2 quaternionic matrix, mapping it to its complex d× d representation and performing a QR decomposition of the latter, as explained in <cit.>.]
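As a down-scaled stand-in for that experiment (our own sketch at n=6 rather than n=24; `haar_sp` is the illustrative quaternionic Gram–Schmidt sampler from the earlier sketch, repeated so the snippet is self-contained), the code below gathers samples of C(ρ)=[UρU^† O] for ρ=|0⟩⟨0| and O=X⊗Z^⊗(n-1)∈ i𝔰𝔭(d/2). From the Pauli decomposition of |0⟩⟨0| one finds _𝔤[ρ^2]=1/2, so the exact finite-d covariance derived in Appendix <ref> predicts mean 0 and variance 2_𝔤[ρ^2]/(d+1)=1/(d+1); a vanishing excess kurtosis is a quick Gaussianity diagnostic.

```python
import numpy as np

def haar_sp(d, rng):
    # illustrative quaternionic Gram-Schmidt sampler on Sp(d/2), as sketched above
    N = d // 2
    Om = np.block([[np.zeros((N, N)), np.eye(N)],
                   [-np.eye(N), np.zeros((N, N))]])
    qs, ps = [], []
    for _ in range(N):
        g = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        if qs:
            Qm = np.column_stack(qs + ps)
            g -= Qm @ (Qm.conj().T @ g)
            g -= Qm @ (Qm.conj().T @ g)
        g /= np.linalg.norm(g)
        qs.append(g)
        ps.append(Om.T @ g.conj())
    return np.column_stack(qs + ps)

n, d = 6, 64
rng = np.random.default_rng(2)
Xp = np.array([[0, 1], [1, 0]]); Zp = np.diag([1.0, -1.0])
O = Xp
for _ in range(n - 1):
    O = np.kron(O, Zp)                     # O = X (x) Z^(n-1), a Pauli in i sp(d/2)

s = []
for _ in range(3000):
    v = haar_sp(d, rng)[:, 0]              # U|0...0>
    s.append(np.real(v.conj() @ O @ v))    # C(rho) for rho = |0><0|
s = np.array(s)
print(s.mean())                            # ~ 0
print(s.var(), 1.0 / (d + 1))              # ~ 1/(d+1) = 2 Tr_g[rho^2]/(d+1)
print(((s - s.mean())**4).mean() / s.var()**2 - 3.0)   # excess kurtosis ~ 0
```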
Next, we are also able to prove convergence to a GP in a different regime. Namely, when the overlaps between ρ_j (as well as between its transformed version ρ̃_j) and ρ_j' remain at most polynomially vanishing. This result is stated in the next theorem, whose proof we present in Appendix <ref>.
Let 𝒞 be a vector of expectation values of the Hermitian operator O over a set of states from 𝒟, as in Eq. (<ref>). If |[ρ_j ρ_j']+[Ωρ_j Ωρ_j']|∈Ω(1/(log(d))) ∀ j,j', then in the large d-limit 𝒞 forms a GP with mean vector μ⃗=0⃗ and covariance matrix
Σ⃗_j,j' = 2 _𝔤[ρ_jρ_j']/d .
Here, we defined _𝔤[ρ_jρ_j'] as follows. Given that any quantum state can be written as ρ=1/d∑_k c_k P_k, where the sum runs over all d^2 Pauli matrices (including the identity) and -1≤ c_k≤ 1 ∀ k, it follows that
ρ= 1/d(∑_k \ P_k∈ i𝔰𝔭(d/2) c_k P_k +∑_k \ P_k∉ i𝔰𝔭(d/2) c_k P_k ) ,
where we separated ρ into its algebra and out-of-the-algebra components, i.e., ρ=ρ_𝔤 + ρ_𝔤^⟂. Then, _𝔤[ρ_jρ_j'] is simply the Hilbert-Schmidt inner product between the algebra components of ρ_j and ρ_j'. For instance,
_𝔤[ρ_jρ_j]= (1/d)∑_k \ P_k∈ i𝔰𝔭(d/2) c_k^2 .
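The following short snippet (an illustration we add, not from the original) computes _𝔤[ρ^2] for a random real-valued state at n=3 by testing Pauli membership in i𝔰𝔭(d/2) through the defining relation P^TΩ=-Ω P, and cross-checks the identity [Ωρ Ωρ^T]=2_𝔤[ρ^2]-[ρ^2] used in Appendix <ref>.

```python
import numpy as np
from functools import reduce
from itertools import product

n, d = 3, 8
P1 = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
      'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.diag([1.0, -1.0])}
Om = 1j * np.kron(P1['Y'], np.eye(d // 2))          # Omega = iY (x) I_{d/2}

def in_sp(P):                                       # P in i sp(d/2) iff P^T Om = -Om P
    return np.allclose(P.T @ Om, -Om @ P)

rng = np.random.default_rng(3)
A = rng.standard_normal((d, d))
rho = A @ A.T / np.trace(A @ A.T)                   # random real (rho = rho^T) state

tr_g = 0.0
for string in product('IXYZ', repeat=n):
    P = reduce(np.kron, [P1[c] for c in string])
    if in_sp(P):
        c = np.real(np.trace(rho @ P))              # c_k in rho = (1/d) sum_k c_k P_k
        tr_g += c * c / d                           # Tr_g[rho^2] = (1/d) sum_alg c_k^2
print(tr_g)
lhs = np.real(np.trace(Om @ rho @ Om @ rho.T))
print(lhs, 2 * tr_g - np.trace(rho @ rho))          # the two values agree
```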
Note that a crucial difference between Eqs. (<ref>) and (<ref>) is that in the former all covariances must be positive (i.e., we have a positively correlated GP), whereas in the latter the covariances can be negative. Finally, we prove in Appendix <ref> that there exist states for which symplectic quantum circuits form uncorrelated GPs.
Let 𝒞 be a vector of expectation values of the Hermitian operator O over a set of states from 𝒟, as in Eq. (<ref>). If [ρ_j^2]+[Ωρ_j Ωρ_j]∈Ω(1/(log(d))) ∀ j and [ρ_j ρ_j']=-[Ωρ_j Ωρ_j'] ∀ j≠ j', then in the large d-limit 𝒞 forms a GP with mean vector μ⃗=0⃗ and diagonal covariance matrix
Σ⃗_j,j' = 2 _𝔤[ρ_j^2]/d if j=j', and Σ⃗_j,j' = 0 if j≠ j'.
§ CONCENTRATION OF MEASURE IN SYMPLECTIC CIRCUITS
In this section, we show that we can leverage the knowledge of the exact output distribution of random symplectic quantum circuits to characterize the concentration of measure phenomenon in these circuits. In particular, we provide concentration bounds for circuits that are sampled from the Haar measure on the symplectic group, and also for circuits that form t-designs over (d/2). In the case of Haar random symplectic circuits we compute tail probabilities to obtain the desired bound. For symplectic t-designs, we use an extension of Chebyshev's inequality to arbitrary moments.
Our first result is:
Let C(ρ_j) be the expectation value of a Haar random symplectic quantum circuit as in Eq. (<ref>). If the conditions under which Theorems <ref>, <ref> and <ref> hold are satisfied, then
Pr(|C(ρ_j)|≥ c)∈𝒪(√(_𝔤[ρ_j^2])/(c√(d)) e^{-dc^2 / (4 _𝔤[ρ_j^2])}) .
This corollary, proven in Appendix <ref>, shows that Haar random symplectic processes concentrate exponentially in the Hilbert space dimension. That is, doubly-exponentially in the number of qubits. This feature is analogous to that encountered in random unitary and orthogonal quantum circuits <cit.>. Intuitively, we can understand this result from the fact that the probability density function of a Gaussian distribution decreases exponentially with 1/ σ^2, and here σ^2 is itself exponentially decreasing with the number of qubits (see Eq. (<ref>)). We also remark that the smaller the component of ρ_j in the algebra, i.e., the smaller _𝔤[ρ_j^2] is, the more concentrated C(ρ_j) becomes, in agreement with the results in Refs. <cit.>.
This extreme concentration of measure comes at the expense of the exponential (in the number of qubits) time or depth that is required to obtain a truly Haar random circuit <cit.>. In practice, however, one often encounters circuits that are not fully random but that are sufficiently so to reproduce the first t moments of the Haar random distribution. These are called t-designs. Therefore, if U forms a t-design over 𝕊ℙ(d/2), we can provide tight concentration bounds for the circuit outputs (see Appendix <ref> for a proof).
Let C(ρ_j) be the expectation value of a quantum circuit that forms a t-design over (d/2) as in Eq. (<ref>). If the conditions under which Theorems <ref>, <ref> and <ref> hold are satisfied, then
Pr(|C(ρ_j)|≥ c)∈𝒪((2⌊ t/2⌋-1)!! (2 _𝔤[ρ_j^2]/(d c^2))^⌊ t/2⌋) .
This result implies that symplectic 2-designs will concentrate as 𝒪(1/d), 4-designs as 𝒪(1/d^2), 6-designs as 𝒪(1/d^3), etc.
Note that for t=2 the bound in Eq. (<ref>) is analogous to the known concentration result for unitary 2-designs (i.e., barren plateaus <cit.>).
§ ANTI-CONCENTRATION IN SYMPLECTIC CIRCUITS
Let us now study the emergence of anti-concentration in symplectic quantum circuits that form 2-designs over the (d/2) group. Anti-concentration roughly refers to the property that the output probabilities after measuring in the computational basis are not concentrated in a small subset of bit-strings <cit.>. More precisely, we say that quantum circuits sampled from a measure μ (e.g., the Haar measure) on a set of unitaries exhibit anti-concentration when there exist constants α,β>0 such that
Pr_U∼μ(|⟨x| U |0⟩^⊗ n|^2 ≥α/d) > β ,
for all computational-basis states |x⟩, with x∈{0,1}^n. That is, the probability that for a unitary U sampled from μ, all bit-string probabilities are at most a non-zero constant factor away from the uniform distribution, is lower-bounded by a positive constant.
Anti-concentration has been shown to be a very important property, as it is a necessary condition for the hardness of classical simulation in random circuit sampling <cit.>. Here, we prove that Haar random symplectic circuits and circuit ensembles that form symplectic 2-designs anti-concentrate, as stated in the following theorem.
Let μ be the Haar measure on (d/2), or a measure giving rise to a 2-design over (d/2). Then,
Pr_U∼μ(|⟨x| U |0⟩^⊗ n|^2 ≥α/d) ≥(1-α)^2/2 ,
for 0≤α≤ 1.
A detailed proof of this theorem can be found in Appendix <ref>. We note that the anti-concentration result can also be understood from the so-called collision probability <cit.>, defined as
z = 𝔼_U∼μ[∑_x ∈{0,1}^⊗ n p_U(x)^2]=2^n𝔼_U∼μ[p_U(x)^2] ,
where p_U(x) = |⟨x| U |0⟩^⊗ n|^2, which can be found to be equal to
z_H = 2/d + 1 ,
when μ is the Haar measure over (d/2). Hence, we can see that the outcome probabilities are indeed, on average, at most a non-zero constant factor away from the uniform distribution (for which z=1/d).
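This prediction is easy to test numerically; the short sketch below (ours, reusing the illustrative `haar_sp` sampler introduced earlier, repeated for self-containedness) estimates the collision probability for n=4 qubits and reproduces z_H=2/(d+1).

```python
import numpy as np

def haar_sp(d, rng):
    # illustrative quaternionic Gram-Schmidt sampler on Sp(d/2)
    N = d // 2
    Om = np.block([[np.zeros((N, N)), np.eye(N)],
                   [-np.eye(N), np.zeros((N, N))]])
    qs, ps = [], []
    for _ in range(N):
        g = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        if qs:
            Qm = np.column_stack(qs + ps)
            g -= Qm @ (Qm.conj().T @ g)
            g -= Qm @ (Qm.conj().T @ g)
        g /= np.linalg.norm(g)
        qs.append(g)
        ps.append(Om.T @ g.conj())
    return np.column_stack(qs + ps)

d = 16                                       # n = 4 qubits
rng = np.random.default_rng(4)
M, z = 3000, 0.0
for _ in range(M):
    p = np.abs(haar_sp(d, rng)[:, 0])**2     # p_U(x) = |<x|U|0...0>|^2
    z += np.sum(p**2)
print(z / M, 2.0 / (d + 1))                  # ~ z_H = 2/(d+1)
```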
§ SHALLOW LOCALLY RANDOM SYMPLECTIC CIRCUITS
In the previous sections we have discussed tools to work with random quantum circuits that are Haar random, or that form a t-design over 𝕊ℙ(d/2). However, one might be interested in studying the properties of shallow random circuits sampled, according to some measure dU, from some set of unitaries 𝕌⊆𝕊ℙ(d/2). Here, one needs to evaluate t-th order twirls such as
^(t)_𝕌[ X] =∫_𝕌 dU U^⊗ t X (U^†)^⊗ t ,
or concomitantly, we need to compute the t-th moment operator
^(t)_𝕌 =∫_𝕌 dU U^⊗ t⊗ (U^*)^⊗ t .
Now, given that 𝕌 need not be a group, one cannot directly leverage the Weingarten calculus to evaluate these quantities. However, the analysis of the twirl ^(t)_𝕌 and the moment operator ^(t)_𝕌 can become tractable again under the assumption that the circuit is composed of gates that are sampled according to the Haar measure from some local group. In particular, let us consider a circuit which takes the form
U = ∏_l U_l ,
where we assume that U_l acts non-trivially on k_l (not necessarily neighboring) qubits whose indexes we denote as I_l, and where we omitted the parameter dependency for ease of notation. For instance, if U_1 is a three-qubit gate acting on the first, second, and third qubits, we would have k_1=3 and I_1={1,2,3}. Then, it is standard to assume that each U_l is independently sampled from a local group G_l⊆𝕌(2^k_l) according to its associated Haar measure dμ_l(U_l). In this scenario, we can see that if we write
^(t)_𝕌=∏_l^(t)_G_l ,
where
^(t)_G_l=∫_G_l dμ_l(U_l) U_l^⊗ t⊗ (U_l^*)^⊗ t ,
then each individual t-th moment operator ^(t)_G_l associated to each local gate can be evaluated via the Weingarten calculus as a projector onto the t-th fold commutant of G_l.
The previous idea of studying circuits composed of random local gates has been explored in Refs. <cit.>, and it has been shown that the task of computing the moment operator can be mapped to that of analyzing a Markov chain-like process obtained from the product of the non-orthogonal projectors ^(t)_G_l. Importantly, it is worth highlighting the fact that most of the previous references work with random quantum circuits composed of gates sampled from 𝕌(4) (with the notable exception of <cit.>). Thus, little to no attention has been paid to local random circuits leading to sets of unitaries that belong to the symplectic group. As such, the question of which local groups G_l one can choose so that one still obtains (globally) symplectic unitaries in 𝕌 has not yet been addressed.
While a priori one could be tempted to choose all G_l as 𝕊ℙ(2^k_l-1), this would lead to non-symplectic unitaries (as we have already seen that global symplectic unitaries cannot be constructed from locally symplectic gates, see Sec. <ref>). Instead, referring to Proposition <ref> we find that the most natural choice is
G_l=𝕆(2^k_l) if 1∉ I_l , and G_l=𝕊ℙ(2^k_l-1) if 1∈ I_l .
That is, if the gate U_l acts non-trivially on the first qubit, then it must be sampled from a symplectic local group 𝕊ℙ(2^k_l-1), while if it does not act on the first qubit, then it must be sampled from an orthogonal local group 𝕆(2^k_l). In fact, it follows directly from the proof of Theorem <ref> that such circuits will produce unitaries in 𝕊ℙ(d/2).
Given Eq. (<ref>), one can analyze features of locally random circuits such as how fast will their properties converge to those of a t-design over 𝕊ℙ(d/2). As an example, let us study the depth at which the probability outcomes in the computational basis anti-concentrate, for a circuit composed of two-qubit Haar random gates acting in a brick-layered fashion on neighboring qubits (see Fig. <ref>). Here, we need to evaluate the second moment operators ^(2)_𝕊ℙ(2) and ^(2)_𝕆(4), which respectively project onto their commutants spanned by {𝕀_4⊗𝕀_4,SWAP,Π_s} and {𝕀_4⊗𝕀_4,SWAP,Π}, with Π=(𝕀_d⊗Ω)Π_s(𝕀_d⊗Ω). From here, we can study the action of ^(2)_𝕊ℙ(2) and ^(2)_𝕆(4) simply by studying how they project their local commutants onto each other. In particular, we can leverage the recently developed tensor-network formalism of Ref. <cit.> to numerically investigate the behavior of the collision probability z defined in Eq. (<ref>).
We will analyze how z changes as a function of the number of layers n_L in the circuit (where a layer is defined as in Fig. <ref>) for different qubit numbers, as this scaling can be used to diagnose the depth at which the architecture at hand anti-concentrates. In Fig. <ref>, we show how z approaches z_H with increasing circuit depth for a system of n=22 qubits. There we also present the depth n_L^* for which the difference | z_H-z| becomes smaller than ϵ/d, for some small constant ϵ. In particular, we deem the condition | z_H-z|<ϵ/d as the emergence of anti-concentration <cit.>. Our numerical results show that anti-concentration happens at logarithmic depth, i.e., n_L^*∈𝒪(log (n)), which is the same scaling observed for quantum circuits composed of random unitary and orthogonal local gates <cit.>.
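We do not reproduce the tensor-network computation here, but the following brute-force sketch of ours (down-scaled to n=6 qubits, rather than the n=22 of Fig. <ref>) illustrates the same behavior: it estimates z for a brick-layered circuit with 𝕊ℙ(2) gates on the pair containing qubit 1 and Haar-random 𝕆(4) gates elsewhere, and also confirms that every composed circuit is globally symplectic, as guaranteed by the construction of Eq. (<ref>).

```python
import numpy as np

def haar_sp(d, rng):
    # illustrative quaternionic Gram-Schmidt sampler on Sp(d/2)
    N = d // 2
    Om = np.block([[np.zeros((N, N)), np.eye(N)],
                   [-np.eye(N), np.zeros((N, N))]])
    qs, ps = [], []
    for _ in range(N):
        g = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        if qs:
            Qm = np.column_stack(qs + ps)
            g -= Qm @ (Qm.conj().T @ g)
            g -= Qm @ (Qm.conj().T @ g)
        g /= np.linalg.norm(g)
        qs.append(g)
        ps.append(Om.T @ g.conj())
    return np.column_stack(qs + ps)

def haar_o(d, rng):
    """Haar-random real orthogonal matrix (QR with sign correction)."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

n, d = 6, 64
rng = np.random.default_rng(5)
Y = np.array([[0, -1j], [1j, 0]])
Om = 1j * np.kron(Y, np.eye(d // 2))

def layer(parity):
    if parity == 0:      # gates on (1,2), (3,4), (5,6); only the first is symplectic
        L = haar_sp(4, rng)
        for _ in range(2):
            L = np.kron(L, haar_o(4, rng))
        return L
    # gates on (2,3), (4,5); qubits 1 and 6 idle
    return np.kron(np.eye(2),
                   np.kron(haar_o(4, rng), np.kron(haar_o(4, rng), np.eye(2))))

for depth in [1, 2, 4, 8, 12]:
    M, z = 300, 0.0
    for _ in range(M):
        U = np.eye(d)
        for l in range(depth):
            U = layer(l % 2) @ U
        p = np.abs(U[:, 0])**2
        z += np.sum(p**2)
    print(depth, z / M)                  # decays towards z_H = 2/(d+1) ~ 0.031

U = layer(1) @ layer(0)
print(np.allclose(U.T @ Om @ U, Om))     # True: the brickwork is symplectic
```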
§ CONCLUSIONS
In this work, we have addressed the study of quantum circuits implementing symplectic unitary transformations. In particular, we have introduced a simple universal architecture for symplectic unitaries that can be readily implemented on near-term quantum hardware, as it only requires one- and two-qubit gates acting on nearest neighbors in a one-dimensional lattice. Furthermore, we have derived properties of random symplectic circuits, both in the deep and shallow regimes, including a proof that the circuits' outputs can converge to Gaussian processes (e.g., when measuring a Pauli) or exhibit the anti-concentration phenomenon (when performing computational-basis measurements).
Interestingly, our work reveals some key differences between circuits that implement unitary or orthogonal evolutions, and those that implement symplectic ones. For instance, we have shown that the structure of the symplectic Lie algebra and its associated Lie group assigns a privileged role to a single qubit in the system, thus breaking typical qubit-exchange symmetries appearing when working with 𝕌(2^n) or 𝕆(2^n). This small, albeit important, difference means that care must be taken when constructing symplectic circuits, as translationally invariant sets of generators are not available. It also leads to potentially counter-intuitive results, such as circuits composed of locally symplectic gates being able to produce non-symplectic unitaries.
Looking forward, we expect that our constructions will encourage the community to explore the simulation of physical processes described via symplectic unitaries, as these can now be compiled to qubit architectures and therefore implemented in most currently-available quantum hardware. Indeed, we hope that our work will spark interest in quantum circuits that produce symplectic evolutions, and that compelling applications will be discovered soon.
§ ACKNOWLEDGMENTS
We thank Martin Larocca, Bojko N. Bakalov, Nahuel L. Diaz, and Alexander F. Kemper for insightful conversations. D.G.M., P.B. and M.C. were supported by Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory (LANL) under project numbers 20230527ECR and 20230049DR. M.C. was also initially supported by LANL's ASC Beyond Moore’s Law project.
§ PROOF OF PROPOSITION 1
In this appendix, we provide the proof of Proposition <ref>, which we recall for convenience.
A basis for the standard representation of the 𝔰𝔭(d/2) algebra is
B_𝔰𝔭(d/2)≡ i{{X,Y,Z}⊗ P_s } ∪ i{𝕀⊗ P_a} ,
where P_s and P_a belong to the sets of arbitrary symmetric and anti-symmetric Pauli strings on n-1 qubits, respectively, and 𝕀,X,Y,Z are the usual 2× 2 Pauli matrices.
We first recall that any matrix M∈𝔰𝔭(d/2) satisfies M^TΩ=-Ω M. Then, we note that Ω= iY⊗𝕀_d/2. Hence, we are looking for Pauli strings P satisfying P^T (Y⊗𝕀_d/2)=-(Y⊗𝕀_d/2) P. We know that P^T=P if and only if the number of Y's in P is even, and P^T=-P otherwise. Therefore, the Pauli matrices belonging to i𝔰𝔭(d/2) have to anti-commute with (Y⊗𝕀_d/2) when P^T=P, and commute with it when P^T=-P. Given that any Pauli commutes with 𝕀_d/2, it follows that all Pauli strings in i𝔰𝔭(d/2) have the form {X,Y,Z}⊗ P_s or 𝕀⊗ P_a,
with P_s a symmetric and P_a an anti-symmetric Pauli string. Since M^TΩ=-Ω M is a linear equation, all (real) linear combinations of such Paulis also belong to i𝔰𝔭(d/2). Finally, the dimension of B_𝔰𝔭(d/2) can be checked to be d(d+1)/2, which is precisely dim(𝔰𝔭(d/2)), as follows. The set
{P_s} ({P_a}) is composed of all Paulis acting on n-1 qubits with an even (odd) number of Y's, and it is an orthogonal basis for the space of symmetric (anti-symmetric) matrices. Therefore, the dimensions of {P_s} and {P_a} are N_1=2^n-1(2^n-1+1)/2 and N_2=2^n-1(2^n-1-1)/2, respectively.
As such, there are 3N_1+N_2=2^n-1(2^n+1)=d(d+1)/2 elements in B_𝔰𝔭(d/2), which is precisely the dimension of 𝔰𝔭(d/2).
We then conclude that B_𝔰𝔭(d/2) is a basis of 𝔰𝔭(d/2).
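This dimension count can also be verified numerically with a short enumeration of our own: for n=2 one tests the defining relation P^TΩ=-Ω P on all 16 Pauli strings and finds exactly d(d+1)/2=10 members, matching the characterization {X,Y,Z}⊗ P_s together with 𝕀⊗ P_a.

```python
import numpy as np
from functools import reduce
from itertools import product

n, d = 2, 4
P1 = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
      'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.diag([1.0, -1.0])}
Om = 1j * np.kron(P1['Y'], np.eye(d // 2))        # Omega = iY (x) I_2

members = []
for s in product('IXYZ', repeat=n):
    P = reduce(np.kron, [P1[c] for c in s])
    if np.allclose(P.T @ Om, -Om @ P):            # defining relation of i sp(d/2)
        members.append(''.join(s))
print(len(members), d * (d + 1) // 2)             # 10 10
print(sorted(members))    # IY plus {X,Y,Z} (x) {I,X,Z}, as the proposition states
```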
As a sanity check, we can show that the operators in Eq. (<ref>) are indeed closed under commutation, so that they form a Lie algebra. For this purpose, we can make use of the properties of the commutator of symmetric and anti-symmetric matrices. In particular, let P_a, P_a',P_a” be anti-symmetric Pauli strings and P_s, P'_s,P_s” symmetric Pauli strings. Then, we have that ([P_a,P_a'])^T = (P_a P_a' - P_a'P_a)^T= P_a' P_a - P_a P_a'= -[P_a,P_a']. Similarly, we find that ([P_s,P_s'])^T= -[P_s,P_s'] and ([P_a,P_s])^T=[P_a,P_s]. That is, the commutator of two anti-symmetric or symmetric Paulis is anti-symmetric, whereas the commutator of a symmetric and an anti-symmetric one is symmetric (this is the reason why symmetric matrices do not form a Lie algebra under commutation). Therefore, the commutator of non-commuting matrices of the form [i𝕀⊗ P_a,i𝕀⊗ P_a'] gives a matrix of the same form i𝕀⊗ P_a”, up to real constant factors that can be ignored. The non-zero commutators [i𝕀⊗ P_a, i{X,Y,Z}⊗ P_s ] have the form i{X,Y,Z}⊗ P_s'. Besides, the commutator [iX⊗ P_s, iX⊗ P_s'] returns either zero or a matrix of the form {𝕀⊗ P_a} (the same happens if we replace X by Y or Z in the first qubit).
The commutator [iX⊗ P_s, iZ⊗ P_s'] gives iY⊗ P_s” or zero, which is true because we are commuting two symmetric matrices, and hence we must obtain an anti-symmetric one. And finally, [iX⊗ P_s, iY⊗ P_s'] results in iZ⊗ P_s” or zero, and analogously under the exchange X↔ Z on the first qubit. Therefore, the set of operators in Proposition <ref> is closed under commutation.
§ PROOF OF THEOREM 1
We here prove Theorem <ref>, which reads as
The set of unitaries of the form in Eq. (<ref>), with generators taken from
𝒢 = {Y_i}_i=1^n∪{X_iY_i+1, Y_iX_i+1}_i=2^n-1∪{X_1}∪{Z_1 Z_2} ,
is universal in 𝕊ℙ(d/2), as
span_ℝ⟨ i 𝒢⟩_ Lie= 𝔰𝔭(d/2) .
Here ⟨ i𝒢⟩_ Lie is the Lie closure of i𝒢, i.e., the set of operators obtained by the nested commutation of the elements in i𝒢.
We begin by showing that the generators {Y_i}_i=2^n∪{X_iY_i+1, Y_iX_i+1}_i=2^n-1 produce the full special orthogonal algebra 𝔰𝔬(d/2) on the last n-1 qubits. This algebra is the span (over the real numbers) of all the real anti-symmetric matrices. In other words, it is the span of all the Pauli strings with an odd number of Y's (times i). Clearly, the generators Y⊗𝕀, 𝕀⊗ Y, Y⊗ X and X⊗ Y generate the full 𝔰𝔬(4) algebra, as can be checked by direct calculation. From here, we proceed by induction. That is, we show that if we have the full 𝔰𝔬(2^k) algebra for some k, we obtain the 𝔰𝔬(2^k+1) algebra by adding the operators 𝕀_2^k⊗ Y, 𝕀_2^k-1⊗ Y⊗ X and 𝕀_2^k-1⊗ X⊗ Y, and taking commutators (see also <cit.>). In particular, we notice that any anti-symmetric Pauli string with support on k+1 sites takes the form P_s⊗ Y or P_a ⊗{𝕀,X,Z}, where P_s and P_a are arbitrary symmetric and anti-symmetric Pauli strings on k sites, respectively. Since we already have the full 𝔰𝔬(2^k) algebra, we have all Pauli strings of the form P_a ⊗𝕀, where the support of P_a on the k-th site can be any of {𝕀,X_k,Y_k,Z_k}. Commuting 𝕀_2^k-1⊗ X⊗ Y with all Pauli strings P_a ⊗𝕀 such that the support of P_a on the k-th site is Y_k, produces all operators of the form P_s⊗ Y such that the support of P_s on the k-th site is Z_k. Further commuting the latter with Y_k gives all the operators P_s⊗ Y such that the support of P_s on the k-th site is X_k. Likewise, commuting 𝕀_2^k-1⊗ X⊗ Y with all Pauli strings P_a ⊗𝕀 such that the support of P_a on the k-th site is Z_k, produces all operators of the form P_s⊗ Y such that the support of P_s on the k-th site is Y_k. We now compute the commutators of 𝕀_2^k-1⊗ Y⊗ X with operators P_s⊗ Y such that the support of P_s on the k-th site is Y_k. This gives us all operators P_a⊗ Z such that the support of P_a on the k-th site is 𝕀, which upon commutation with Y_k gives us all operators P_a⊗ X such that the support of P_a on the k-th site is 𝕀. Now, we know that the 𝔰𝔬(2^k) algebra contains all the anti-symmetric Paulis (times i). Hence, commuting operators from 𝔰𝔬(2^k) with P_a⊗{X,Z} such that the support of P_a on the k-th site is 𝕀, we can generate all operators of the form P_a⊗{X,Z}. The last step is to obtain the operators P_s⊗ Y such that the support of P_s on the k-th site is 𝕀. We achieve this via commutation of 𝕀_2^k-1⊗ Y⊗ X with operators of the form P_a⊗ Z such that the support of P_a on the k-th site is Y.
We are left with the task of showing that adding the operators X_1 and Z_1 Z_2 indeed generates the 𝔰𝔭(d/2) algebra, i.e., all the operators in Eq. (<ref>). Those of the form i{𝕀⊗ P_a} are already generated by the orthogonal operators in the last n-1 qubits. Then, starting with the commutators of Z_1 Z_2 with the operators 𝕀⊗ P_a, we get all operators Z⊗ P_s such that the support of P_s on the second qubit is X_2 or Y_2. Further commuting those with X_1, Y_1 and Y_2 we obtain all operators of the form {X,Y,Z}⊗ P_s such that the support of P_s on the second qubit is not 𝕀. To generate the remaining algebra operators, we commute Z_1Z_2 with the operators {X,Y}⊗ P_s with support Z_2 on the second qubit, and then all the resulting operators with X_1. This way, we have generated all the operators in Eq. (<ref>). Since the algebra is closed under commutation, and all our operators belong to it, this concludes the proof.
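The theorem can also be checked numerically for small n by computing the Lie closure by brute force. In the sketch below (ours), for n=2 the generating set reduces to {Y_1, Y_2, X_1, Z_1Z_2}, and iterated commutators indeed span a real space of dimension d(d+1)/2=10=dim 𝔰𝔭(2).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
kron = lambda *ops: reduce(np.kron, ops)

# i*G for n = 2 (the family {X_i Y_{i+1}, Y_i X_{i+1}} from i=2 to n-1 is empty here)
gens = [1j * kron(Y, I2), 1j * kron(I2, Y), 1j * kron(X, I2), 1j * kron(Z, Z)]

basis, mats = [], []                         # orthonormalized *real* span
def add(M, tol=1e-9):
    v = np.concatenate([M.real.ravel(), M.imag.ravel()])
    for b in basis:
        v = v - (b @ v) * b
    if np.linalg.norm(v) > tol:
        basis.append(v / np.linalg.norm(v))
        mats.append(M)
        return True
    return False

for g in gens:
    add(g)
grew = True
while grew:                                  # close under nested commutators
    grew = False
    for i in range(len(mats)):
        for j in range(i):
            if add(mats[i] @ mats[j] - mats[j] @ mats[i]):
                grew = True
print(len(basis))                            # 10 = dim sp(2) = d(d+1)/2 for d = 4
```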
§ PROOF OF PROPOSITION 2
Here, we present the proof of Proposition <ref>, that we restate for convenience.
The set of unitaries of the form in Eq. (<ref>), with generators taken from
𝒢_L = ⋃_i=1^n-1{X_i ,Y_i, Y_i+1, X_iX_i+1} ,
is universal in 𝕊𝕌(d), as
span_ℝ⟨ i 𝒢_L⟩_ Lie= 𝔰𝔲(d) .
The first step is to notice that we can generate via commutators the full special unitary algebra 𝔰𝔲(d/2) in the first n-1 qubits. This is true because we have the single-qubit operators X,Y,Z acting on all of the latter, and also the two-qubit operators X⊗ X on all qubit pairs {(q,q+1)}_q=1^n-2. Commuting X⊗ X with the single-qubit Paulis, we obtain all the two-qubit nearest-neighbor Pauli operators. From here, one can generate the entire 𝔰𝔲(d/2) algebra as detailed in <cit.>. The last step is computing the commutators of operators in 𝔰𝔲(d/2) with the generators acting on the last two qubits. Here, we note that commuting with Y_n-1 just transforms X_n-1 into Z_n-1 (up to a constant factor), and vice versa, and hence it does not generate any new linearly independent operator, since we already have the entire 𝔰𝔲(d/2) algebra. The same is true for commutators with X_n-1. Furthermore, all operators in 𝔰𝔲(d/2) have trivial support on the last qubit and therefore commute with Y_n.
We then turn to the commutations with the two-qubit operators acting on the last pair of qubits, which are {{X,Y,Z}⊗{X,Z}} according to Eq. (<ref>). Commuting these with operators in 𝔰𝔲(d/2) produces all operators of the form P⊗{X_n, Z_n}, where P≠𝕀_2^n-1 has support on the first n-1 qubits and {X_n, Z_n} are the X,Z Pauli matrices acting on the n-th qubit. Further commuting the latter, we can obtain all three-qubit operators of the form P_n-2⊗ P_n-1'⊗ Y, where P_n-2 and P_n-1' are arbitrary Pauli operators (different from the identity) acting on the qubits n-2 and n-1. Finally, commuting e.g., Y_n-2⊗ Y_n-1⊗ Y_n with Y_n-2⊗ Y_n-1⊗ X_n, we get Z_n, from which we can generate all single-qubit and two-qubit operators acting on the last two qubits. Since we know that these are universal for quantum computation, we have generated the full 𝔰𝔲(d) algebra.
§ ASYMPTOTIC WEINGARTEN CALCULUS
We now discuss the reasons why the Gram matrix W in Eq. (<ref>) takes that form, namely
W = d^t(𝕀_D_t+1/d B) ,
with the entries of B∈(1).
This follows from the fact that [F_d(σ^T) F_d(σ)]=d^t ∀σ∈𝔅_t(-d) and |[F_d(σ^T)F_d(π)]|≤ d^t-1 when π≠σ.
Diagrammatically, σ and σ^T are specular images of each other ∀σ∈𝔅_t(-d). For the case of permutations in S_t, it is then clear that [F_d(σ^T)F_d(σ)]= [F_d(σ^-1)F_d(σ)]=[𝕀_d^⊗ t]=d^t, as σ^T=σ^-1 and F_d(σ^-1)F_d(σ)=F_d(σ^-1σ)=𝕀_d^⊗ t.
For the rest of the elements in 𝔅_t(-d), which do not have an inverse, we notice that σ and σ^T are such that for every pair λ_γ, σ(λ_γ)≤ t there exists a pair λ_γ', σ^T(λ_γ') > t such that λ_γ'=λ_γ+t and σ^T(λ_γ')=σ(λ_γ)+t. A similar result holds for every pair λ_γ, σ(λ_γ)> t (this is a consequence of σ and σ^T being specular images). Hence, the number of (-d) factors (or equivalently, closed loops) that appear in F_d(σ^T)F_d( σ) is even, and we have [F_d(σ^T) F_d(σ)]=d^t.
All other entries of the Gram matrix, [F_d(σ^T)F_d(π)] where π≠σ, are the same as those for the representation of the Brauer algebra 𝔅_t(d) from the orthogonal Schur-Weyl duality, up to a minus sign in some entries. Therefore, it follows that |[F_d(σ^T)F_d(π)]|≤ d^t-1 (see e.g., Supplemental Proposition 13 in Ref. <cit.>), and so we find Eq. (<ref>).
§ PROOF OF THEOREM 2
In this Appendix we provide the proof of Theorem <ref>, which states that the outputs of random symplectic quantum circuits converge in distribution to a Gaussian process under certain conditions.
Let 𝒞 be a vector of expectation values of the Hermitian operator O over a set of states from 𝒟, as in Eq. (<ref>). If [ρ_j ρ_j']∈Ω(1/(log(d))) and |[Ωρ_j Ωρ_j']|∈ o(1/(log d)) ∀ j,j', then in the large d-limit 𝒞 forms a GP with mean vector μ⃗=0⃗ and covariance matrix
Σ⃗_j,j' = [ρ_jρ_j']/d .
Our goal is to compute the moments of 𝒞, that is, quantities of the form
𝔼_(d/2)[C(ρ_j_1)⋯ C(ρ_j_t)] .
Using the linearity of the trace, and the fact that [A] [B] = [A⊗ B], we find that
∫_(d/2) dμ(U) [Uρ_j_1U^† O ] ⋯[Uρ_j_tU^† O ] =
[∫_(d/2) dμ(U) U^⊗ tΛ (U^†)^⊗ t O^⊗ t] = [^(t)_(d/2)[Λ] O^⊗ t] ,
where we defined Λ≡ρ_j_1⊗⋯⊗ρ_j_t. We first exactly compute the first and second moments. For t=1, we have
𝔼_𝕊ℙ(d/2)[C(ρ_j)] = [^(1)_𝕊ℙ(d/2)[ρ_j] O] = [𝕀_d/d O] = 0 ,
where we used Eq. (<ref>) and the property that O is traceless. The second-order twirl is given by Eq. (<ref>), i.e.,
^(2)_𝕊ℙ(d/2)[Λ] =(d-1)[Λ] -[Λ SWAP] +[ΛΠ_s]/d(d+1)(d-2) 𝕀_d⊗𝕀_d
+-[Λ] + (d-1)[Λ SWAP] -[ΛΠ_s]/d(d+1)(d-2) SWAP
+[Λ] -[Λ SWAP] + (d-1) [ΛΠ_s]/d(d+1)(d-2) Π_s .
Using Eqs. (<ref>) and (<ref>), the covariance matrix entries are found to be
Σ⃗_j, j'^𝕊ℙ =1/(d+1)([ρ_j ρ_j'] +[Ωρ_j Ωρ_j'^T] ) .
Let us evaluate [Ω P_j Ω P_j'^T] for Pauli operators P_j,P_j', which gives
[iY⊗𝕀_d/2 P_j iY⊗𝕀_d/2 P_j'^T] = d δ_jj' if P_j∈ i𝔰𝔭(d/2), and -d δ_jj' otherwise.
Since every quantum state can be written as
ρ =1/d∑ c_k P_k ,
where the c_k are real coefficients such that -1≤ c_k ≤ 1 ∀ k, it follows that
[Ωρ_j Ωρ_j^T ] = 1/d(∑_k\ P_k∈ i𝔰𝔭(d/2) c_k^2 - ∑_k\ P_k∉ i𝔰𝔭(d/2) c_k^2 )
= 2 _𝔤[ρ_j^2] - [ρ_j^2] ,
where we defined _𝔤[·] as in Eq. (<ref>). Analogously,
[Ωρ_j Ωρ_j'^T ] = 2 _𝔤[ρ_j ρ_j'] - [ρ_j ρ_j'] .
Hence, we arrive at the following expression for the covariance matrix entries,
Σ⃗_j, j'^𝕊ℙ=2 _𝔤[ρ_jρ_j']/(d+1) .
Using that [ρ_j ρ_j']∈Ω(1/(log(d))) and |[Ωρ_j Ωρ_j']|∈ o(1/(log d)) ∀ j,j', we can approximate this covariance in the large-d limit as in Eq. (<ref>).
We remark here that we could have substituted any Pauli P_j by O'=S P_j S^†, where S∈𝕊ℙ(d/2), in Eq. (<ref>), and that equation would still hold. This follows from the symplectic condition S^T Ω S = Ω and the fact that the transpose of a symplectic matrix is symplectic. This implies that our GP results will also be valid when we replace O by O'.
To compute higher moments we will use the asymptotic Weingarten calculus for the symplectic group explained in Sec. <ref>. In particular, Eq. (<ref>) gives
[^(t)_𝕊ℙ(d/2)[Λ] O^⊗ t]
=1/d^t∑_σ∈𝔅_t(-d)[F_d(σ^T)Λ][F_d(σ) O^⊗ t]
+∑_σ,π∈𝔅_t(-d)c_π,σ/d^t [F_d(σ^T)Λ][F_d(π) O^⊗ t].
Let us first focus on the factors of the form [F_d(σ) O^⊗ t]. It is straightforward to show that [F_d(σ) O^⊗ t]=0 whenever σ contains a cycle of odd length (this is a consequence of O being traceless), and that |[F_d(σ) O^⊗ t]|=d^|σ|, where |σ| is the number of cycles in σ, otherwise. We refer to Supplemental Proposition 6 in Ref. <cit.> for a detailed derivation of the previous when σ is a permutation.
When σ is not a permutation, we simply note that for all σ
[F_d(σ) O^⊗ t] = ±∏_α=1^|σ|[O^|c_α|Ω^2b_α]=± d^|σ| ,
where the product runs over the cycles in σ, |c_α| is the length of the cycle c_α, and b_α is the number of pairs (i,c_α(i)) in c_α such that both i and c_α(i)≤ t (i.e., the number of pairs (i,c_α(i)) that are in the first column). Here, we used the fact that Ω either commutes or anti-commutes with O, together with O^T=± O and Ω^T = -Ω=Ω^-1.
It is clear then that |[F_d(σ) O^⊗ t]|=d^|σ| is maximized whenever σ consists of a product of disjoint length-two cycles. Otherwise it is at least a factor of 1/d smaller. Moreover, if σ is a product of disjoint length-two cycles, then [F_d(σ) O^⊗ t]=d^t/2 since [Ω OΩ O^T]=d when O is a Pauli in i𝔰𝔭(d/2).
We then need to study the terms of the form [F_d(σ^T)Λ]. When σ is a permutation, it holds that |[F_d(σ^T)Λ]| ≤ 1 (see Supplemental Proposition 7 in <cit.>). Moreover, this is also true for the non-permutation elements in 𝔅_t(-d). To see this, it suffices to notice that when one expands |[F_d(σ^T)Λ]| there will appear quantities of the form ⟨ψ_i|Ω|ψ_i'⟩ in addition to those of the form ⟨ψ_i|ψ_i'⟩ in the products. Since Ω|ψ_i'⟩ is just another quantum state, the result follows.
This implies that [F_d(σ^T)Λ] cannot be large so as to compensate the factors (1/d) by which |[F_d(σ) O^⊗ t]| are suppressed when σ is not the disjoint product of length-two cycles. They could however be very small in principle, and that is why we require [ρ_j ρ_j']∈Ω(1/(log(d))) ∀ j,j', which implies that [F_d(σ^T)Λ]∈Ω(1/(log(d))) when σ^T is a disjoint product of length-two cycles.
We now recall that the permutations such that σ=σ^T are known as involutions and must consist of a product of disjoint transpositions plus fixed points. More generally, we have that for an element σ∈𝔅_t(-d) to satisfy σ=σ^T it must consist of a product of disjoint length-two cycles and fixed points. We denote as T_t the set of permutations σ∈𝕊_t that are a product of disjoint length-two cycles. Therefore, using (<ref>) and the fact that |[Ωρ_j Ωρ_j']|∈ o(1/(log d)) ∀ j,j' (this condition allows us to only retain the contribution from the elements in T_t, instead of all the σ∈𝔅_t(-d) that are the product of disjoint length-two cycles), we arrive at
[^(t)_𝕊ℙ(d/2)[Λ] O^⊗ t]
=1/d^{t/2}∑_σ∈ T_t[F_d(σ^T)Λ] + 𝒪(1/d^{t/2+1}) .
Or just retaining the leading-order terms,
[^(t)_𝕊ℙ(d/2)[Λ] O^⊗ t]≈1/d^{t/2}∑_σ∈ T_t∏_(j,j')∈σ[ρ_j ρ_j'] ,
where the product runs over all the cycles (j,j') in σ.
The last step is comparing Eq. (<ref>) with the moments of a multivariate Gaussian, which are given by Wick's or Isserlis' theorem <cit.>. This theorem states that, for random variables X_1,X_2,…, X_t that form a GP, the t-th order moment is 𝔼[X_1X_2⋯ X_t]=0 if t is odd, and
𝔼[X_1X_2⋯ X_t]=
∑_σ∈ T_t∏_(j,j')∈σ Cov [X_j,X_j'] ,
if t is even.
Clearly, Eq. (<ref>) matches Eq. (<ref>) by identifying Cov [X_j,X_j']=[ρ_j ρ_j']/d as in Eq. (<ref>).
Finally, it can be proven that these moments uniquely determine the distribution of 𝒞 using Carleman's condition (see Ref. <cit.>). Hence, 𝒞 forms a GP.
§ PROOF OF THEOREM 3
In this appendix we prove Theorem <ref>, which is a more general statement about the convergence to GPs of random symplectic quantum circuits than that of Theorem <ref>.
Let 𝒞 be a vector of expectation values of the Hermitian operator O over a set of states from 𝒟, as in Eq. (<ref>). If |[ρ_j ρ_j']+[Ωρ_j Ωρ_j']|∈Ω(1/(log(d))) ∀ j,j', then in the large d-limit 𝒞 forms a GP with mean vector μ⃗=0⃗ and covariance matrix
Σ⃗_j,j' = 2 _𝔤[ρ_jρ_j']/d .
The proof of this result is completely analogous to that of Theorem <ref>. The only difference is that we now need to retain all the contributions coming from the set of elements of the Brauer algebra consisting of disjoint products of length-two cycles (that we denote 𝔗_t), and not just those arising from the elements in T_t. This is so because of the condition |[ρ_j ρ_j']+[Ωρ_j Ωρ_j']|∈Ω(1/(log(d))) ∀ j,j'.
Hence, instead of Eq. (<ref>), we find
[^(t)_𝕊ℙ(d/2)[Λ] O^⊗ t]
=1/d^{t/2}∑_σ∈𝔗_t[F_d(σ^T)Λ] + 𝒪(1/d^{t/2+1}) .
Now, we can group the contributions in the previous equation as follows. Let us consider a permutation σ∈ T_t. For each pair (j,j')∈σ, we can substitute the transposition by a two-length cycle that is not a permutation, obtaining an element in 𝔗_t. This amounts to replacing a factor [ρ_jρ_j'] by a factor [Ωρ_j Ωρ_j'] in [F_d(σ^T)Λ].
We can do this for every possible pair (j,j'), and thus, we have that
∑_σ∈𝔗_t[F_d(σ^T)Λ] = ∑_σ∈ T_t∏_(j,j')∈σ([ρ_j ρ_j']+[Ωρ_j Ωρ_j']) .
Therefore, using (<ref>) we arrive at
[^(t)_𝕊ℙ(d/2)[Λ] O^⊗ t] ≈1/d^{t/2}∑_σ∈ T_t∏_(j,j')∈σ 2 _𝔤[ρ_jρ_j'] .
Again, comparing Eqs. (<ref>) and (<ref>) it is clear that they match by identifying Cov [X_j,X_j']=2_[ρ_j ρ_j']/d as in Eq. (<ref>) (note that we approximated d+1≈ d for large d, see Eq. (<ref>)). We conclude that 𝒞 forms a GP.
§ PROOF OF THEOREM 4
We here present the proof of Theorem <ref>, which shows that random symplectic quantum circuits can form uncorrelated GPs.
Let 𝒞 be a vector of expectation values of the Hermitian operator O over a set of states from 𝒟, as in Eq. (<ref>). If [ρ_j^2]+[Ωρ_j Ωρ_j]∈Ω(1/(log(d))) ∀ j and [ρ_j ρ_j']=-[Ωρ_j Ωρ_j'] ∀ j≠ j', then in the large d-limit 𝒞 forms a GP with mean vector μ⃗=0⃗ and diagonal covariance matrix
Σ⃗_j,j' = 2 _𝔤[ρ_j^2]/d if j=j', and Σ⃗_j,j' = 0 if j≠ j'.
To prove this result we will use a somewhat different strategy than that employed in the proofs of Theorems <ref> and <ref>. Namely, we will leverage the fact that if 𝒞 is a GP with zero mean (i.e., μ⃗=0⃗), then any linear combination of its entries, 𝒮=∑_j a_j C(ρ_j), where the a_j are constants, follows a Gaussian distribution 𝒩(0,σ^2) with σ^2=∑_j,j' a_j a_j' Cov[C(ρ_j), C(ρ_j')].
Let us then compute the moments of 𝒮. For the first moment, we have
𝔼_𝕊ℙ(d/2)[𝒮] = ∑_j a_j 𝔼_𝕊ℙ(d/2)[C(ρ_j)]=0 .
For the second moment,
𝔼_𝕊ℙ(d/2)[𝒮^2] = ∑_j,j' a_j a_j' 𝔼_𝕊ℙ(d/2)[C(ρ_j) C(ρ_j')]
= ∑_j a_j^2 𝔼_𝕊ℙ(d/2)[C(ρ_j)^2]
= ∑_j a_j^2 2 _𝔤[ρ_j^2]/d ,
where we used Eq. (<ref>) together with the condition that [ρ_j ρ_j']=-[Ωρ_j Ωρ_j'] ∀ j≠ j'.
It is clear that Eq. (<ref>) matches σ^2=∑_j,j' a_j a_j' Cov[C(ρ_j), C(ρ_j')], when the covariance matrix entries are given as in Eq. (<ref>).
Then, we find that every higher-order moment takes the form
𝔼_𝕊ℙ(d/2)[𝒮^t] = ∑_t_1+…+t_m=t\binom{t}{t_1,…, t_m}𝔼_𝕊ℙ(d/2)[a_1^t_1C(ρ_1)^t_1⋯ a_m^t_m C(ρ_m)^t_m] ,
where all the t_j are non-negative. When t is odd, it follows that 𝔼_𝕊ℙ(d/2)[a_1^t_1C(ρ_1)^t_1⋯ a_m^t_m C(ρ_m)^t_m]=0 from the fact that [F_d(σ) O^⊗ t]=0 for all σ∈𝔅_t(-d) when t is odd, as explained in Appendix <ref>. This matches the odd moments of a Gaussian distribution. When t is even, we begin by proving that given two sets of values {t_1,…,t_m} and {t'_1,…,t'_m}
𝔼_(d/2)[C(ρ_1)^t_1⋯ C(ρ_m)^t_m]/𝔼_(d/2)[C(ρ_1)^t'_1⋯ C(ρ_m)^t'_m]∈ o(1) ,
whenever there exists an odd number in {t_1,…,t_m} but all numbers in {t'_1,…,t'_m} are even. To see that this is case, we simply note that if there exists an odd number in {t_1,…,t_m}, then there is at least one length-two cycle in any element in 𝔗_t that connects two different states ρ_j,ρ_j' (with j≠ j') in [F_d(σ^T)Λ]. For each such pair (j,j')∈σ^T, we can either have a transposition or a two-length cycle that is not a permutation (which we recall produce a factor [ρ_jρ_j'] or [Ωρ_j Ωρ_j'], respectively).
Hence, since [ρ_j ρ_j']=-[Ωρ_j Ωρ_j'] ∀ j≠ j' we find that the sum of all contributions to Eq. (<ref>) coming from elements in 𝔗_t is zero, and so we need to take into account elements σ∈𝔅_t(-d) that are not the product of disjoint length-two cycles in order to compute the largest non-zero contributions to 𝔼_𝕊ℙ(d/2)[C(ρ_1)^t_1⋯ C(ρ_m)^t_m]. However, we already know from Appendices <ref> and <ref> that these are suppressed as 𝒪(1/d) compared to the values 𝔼_𝕊ℙ(d/2)[C(ρ_1)^t'_1⋯ C(ρ_m)^t'_m] when all the numbers in {t'_1,…,t'_m} are even. Thus, Eq. (<ref>) holds.
Following these considerations, we find that the leading-order contributions give us (for even t)
𝔼_𝕊ℙ(d/2)[𝒮^t] ≈∑_t_1+…+t_m=t, t_j even\binom{t}{t_1,…, t_m}𝔼_𝕊ℙ(d/2)[a_1^t_1C(ρ_1)^t_1⋯ a_m^t_m C(ρ_m)^t_m]
= ∑_t_1+…+t_m=t, t_j even\binom{t}{t_1,…, t_m}∏_j=1^m (t_j-1)!! a_j^t_j(2 _𝔤[ρ_j^2]/d)^t_j/2
=∑_t_1+…+t_m=t, t_j even t!/∏_j=1^m t_j!!∏_j=1^m a_j^t_j(2 _𝔤[ρ_j^2]/d)^t_j/2 ,
where we used that t_j!=t_j!!(t_j-1)!!.
We need to show that 𝔼_(d/2)[^t] matches the even moments of a variable X following a zero-mean Gaussian distribution, which are given by
𝔼[X^t] = (t-1)!! σ^t .
To do this, we begin by computing
σ^t = (∑_j,j' a_j a_j' Cov[C(ρ_j), C(ρ_j')])^t/2
= (∑_j a_j^2 2 _𝔤[ρ_j^2]/d)^t/2
= ∑_k_1+…+k_m=t/2\binom{t/2}{k_1,…, k_m}∏_j=1^m a_j^2k_j(2 _𝔤[ρ_j^2]/d)^k_j ,
where we used Eq. (<ref>). Now, since the sum in Eq. (<ref>) is restricted to even values t_1,…,t_m, we can re-write it by re-labeling t_j=2k_j, obtaining
𝔼_𝕊ℙ(d/2)[𝒮^t] ≈∑_2 k_1+…+2k_m=t t!/∏_j=1^m (2k_j)!!∏_j=1^m a_j^2k_j(2 _𝔤[ρ_j^2]/d)^k_j .
Since t_j=2k_j is a bijective function, there are exactly the same number of terms in the sums in Eq. (<ref>) and (<ref>). To show that Eq. (<ref>) is indeed equal to Eq. (<ref>) for all possible values of the constants a_j, we need to prove that
(t-1)!! \binom{t/2}{k_1,…, k_m} = t!/∏_j=1^m (2k_j)!! ,
which follows from the properties of the double factorial for even and odd numbers. Namely, (2k_j)!!=2^k_j k_j! and (t-1)!!=t!/2^t/2(t/2)!, which imply
t!/(2^{t/2}(t/2)!) × (t/2)!/∏_j=1^m k_j! = t!/(2^{t/2}∏_j=1^m k_j!) .
Hence, 𝒞 forms a GP.
§ PROOF OF COROLLARY 1
Let us now prove Corollary <ref>, which reads as follows.
Let C(ρ_j) be the expectation value of a Haar random symplectic quantum circuit as in Eq. (<ref>). If the conditions under which Theorems <ref>, <ref> and <ref> hold are satisfied, then
Pr(|C(ρ_j)|≥ c)∈𝒪(√(_𝔤[ρ_j^2])/(c√(d)) e^{-dc^2 / (4 _𝔤[ρ_j^2])}) .
Since the marginal distributions of a GP are Gaussian, C(ρ_j) follows a Gaussian distribution 𝒩(0,σ^2) with σ^2=2 _𝔤[ρ_j^2]/d (see Eq. (<ref>)). Thus, we have that
Pr(|C(ρ_j)|≥ c) =√(d)/√(π _𝔤[ρ_j^2])∫_c^∞ dx e^{-x^2 d/(4 _𝔤[ρ_j^2])}
= Erfc[c√(d)/(2√(_𝔤[ρ_j^2]))] ,
where Erfc is the complementary error function. Using the fact that for large x, Erfc[x]≤e^-x^2/x√(π), we find Eq. (<ref>).
§ PROOF OF COROLLARY 2
In this section we present the proof for Corollary <ref>, which reads
Let C(ρ_j) be the expectation value of a quantum circuit that forms a t-design over (d/2) as in Eq. (<ref>). If the conditions under which Theorems <ref>, <ref> and <ref> hold are satisfied, then
Pr(|C(ρ_j)|≥ c)∈𝒪((2⌊ t/2⌋-1)!! (2 _𝔤[ρ_j^2]/(d c^2))^⌊ t/2⌋) .
Here we can use the generalization of Chebyshev's inequality to higher-order moments,
Pr(|X- 𝔼[X]|≥ c)≤𝔼[|X-𝔼 [X]|^t]/c^t ,
for c>0 and for t≥ 2. Since 𝔼_(d/2)[C(ρ_j)] = 0, this inequality simplifies to
Pr(|C(ρ_j)|≥ c)≤𝔼_𝕊ℙ(d/2)[|C(ρ_j)|^t]/c^t .
If an ensemble of quantum circuits forms a t-design over 𝕊ℙ(d/2), then we can readily evaluate 𝔼[C(ρ_j)^t], as this quantity matches the first t moments of a 𝒩(0,σ^2) distribution with σ^2=2 _𝔤[ρ_j^2]/d. In particular, we know that since the odd moments are zero, we only need to take into account the largest even moment that the t-design matches. Therefore, using
𝔼[C(ρ_j)^2⌊ t/2⌋] =(2⌊ t/2⌋-1)!! 𝔼[C(ρ_j)^2]^⌊ t/2⌋
=(2⌊ t/2⌋-1)!! (2 _𝔤[ρ_j^2]/d)^⌊ t/2⌋ ,
and plugging it into Eq. (<ref>), we arrive at (<ref>).
§ PROOF OF THEOREM 5
In this appendix we prove that sufficiently-random symplectic quantum circuits anti-concentrate, as stated in Theorem <ref>. We recall this result for convenience.
Let μ be the Haar measure on (d/2), or a measure giving rise to a 2-design over (d/2). Then,
Pr_U∼μ(|⟨x| U |0⟩^⊗ n|^2 ≥α/d) ≥(1-α)^2/2 ,
for 0≤α≤1.
In order to show anti-concentration, we will use Paley–Zygmund inequality <cit.>, which states that for a random variable 𝒵≥0 with finite variance, it holds that
Pr(𝒵> c 𝔼[𝒵]) ≥ (1-c)^2𝔼[𝒵]^2/𝔼[𝒵^2] ,
where 0≤ c≤ 1. In our case, 𝒵≡[Uρ_0 U^†Π_x] with ρ_0 =( |0⟩⟨0|)^⊗ n and Π_x = |x⟩⟨x|. Hence, we need to compute
𝔼_(d/2)[[Uρ_0 U^†Π_x]] = [Π_x∫_(d/2) dμ(U) Uρ_0 U^†]
= [Π_x𝕀_d/d] = 1/d
where we used Eq. (<ref>). We also need to compute
𝔼_(d/2)[[Uρ_0 U^†Π_x]^2]
= [Π_x^⊗ 2∫_(d/2) dμ(U) U^⊗ 2ρ_0^⊗ 2 (U^†)^⊗ 2]
= [Π_x^⊗ 2(𝕀_d⊗𝕀_d+ SWAP)/(d(d+1))]
= 2/d(d+1)=z_H/d ,
where we used Eq. (<ref>), together with [ρ_0^⊗ 2]=1, [ρ_0^⊗ 2 SWAP] =[ρ_0^2]=1, and
[ρ_0^⊗ 2Π_s]
= [ (𝕀_d ⊗Ωρ_0 Ωρ_0^T) Π]
= [ (𝕀_d ⊗Ωρ_0 Ωρ_0) Π]
= 1/d^2[ (𝕀_d ⊗Ω(∑_P_i∈{𝕀,Z}^⊗ n P_i) Ω(∑_P_i'∈{𝕀,Z}^⊗ n P_i'))Π]
= 1/d^2[ (𝕀_d ⊗∑_P_i∈{𝕀,Z}^⊗ nΩ P_iΩ P_i)Π]
=0 ,
where the cross terms with P_i≠ P_i' drop because [(𝕀_d⊗ A)Π]=[A] and [Ω P_iΩ P_i']∝δ_ii', and where the remaining sum vanishes since Ω P_iΩ P_i=±𝕀_d with as many positive as negative signs over {𝕀,Z}^⊗ n.
Hence, we arrive at
Pr_U∼μ(|⟨x| U |0⟩^⊗ n|^2>α/d) ≥ (1-α)^2 d(d+1)/(2d^2) ,
which implies
Pr_U∼μ(|⟨x| U |0⟩^⊗ n|^2≥α/d) ≥(1-α)^2/2 .
§ TENSOR NETWORK-BASED CALCULATION OF THE MOMENTS OF SHALLOW RANDOM 𝕊ℙ(2) CIRCUITS
Let us recall that in <cit.> the authors present a toolbox to compute expectation values of circuits composed of local random gates via tensor networks. As mentioned in the main text, the key idea developed therein is to map the evaluation of ^(t)_𝕌 in Eq. (<ref>) to a Markov-chain like process where the ^(t)_G_l only act on their local commutants. As such, in order to use this formalism, we need a tensor representation for the superoperators _𝕊ℙ(2)^(2)=∫_𝕊ℙ(2) dμ(V) V^⊗ 2⊗ (V^*)^⊗ 2 and _𝕊𝕆(4)^(2)=∫_𝕊𝕆(4) dμ(V) V^⊗ 2⊗ (V^*)^⊗ 2.
We have shown in the main text that a basis for the second-order commutant of 𝕊ℙ(2) is {𝕀_d⊗𝕀_d, SWAP, Π_s}, where we recall that these operators act on two copies of the two qubits targeted by a 𝕊ℙ(2) gate. We will refer to these two two-qubit systems as A and B.
With this in mind, we note that
SWAP = SWAP_A ⊗ SWAP_B = (𝕀_A + S_A)/2⊗(𝕀_B + S_B)/2 ,
where S_J=X_J_1⊗ X_J_2+Y_J_1⊗ Y_J_2+Z_J_1⊗ Z_J_2.
In the same spirit one finds
Π_s = -(𝕀_A + S_A)/2⊗(𝕀_B + B_B)/2 ,
where B_J=X_J_1⊗ X_J_2-Y_J_1⊗ Y_J_2+Z_J_1⊗ Z_J_2.
Due to these factorization properties, we can see that _𝕊ℙ(2)^(2) projects onto a basis that can be decomposed as a tensor product of the form {𝕀, S}_A ⊗{𝕀, S, B}_B. Crucially, we remark that the asymmetry of this basis arises from the fact that there is a preferred qubit in our construction of symplectic circuits.
Then, we recall that it was shown in <cit.> that _𝕊𝕆(4)^(2) will project on a local tensor product basis of the form {𝕀, S, B}_A ⊗{𝕀, S, B}_B, meaning that we can describe the full action of ^(2)_𝕌 as acting on a vector space of dimension 2× 3^n-1 (which again reveals the special role played by the first qubit). The only thing that remains is to describe the action of _𝕊ℙ(2)^(2) and _𝕊𝕆(4)^(2) in this space. Given that the description of _𝕊𝕆(4)^(2) in the aforementioned basis was already presented in <cit.>, we now only derive that of _𝕊ℙ(2)^(2). In particular, all that we need to do is to compute how this operator acts on all basis states in {𝕀, S}⊗{𝕀, S, B}, which we can compute via the twirl map of Eq. (<ref>). A direct calculation reveals that _𝕊ℙ(2)^(2) acts on this reduced space as a matrix τ given by
τ=[ 1 0 0 0 0 0; 0 1/4 3/20 1/10 1/4 -3/20; 0 3/20 1/4 -1/10 3/20 -1/4; 0 3/20 -3/20 3/10 3/20 3/20; 0 3/5 0 3/5 3/5 0; 0 0 -3/5 3/5 0 3/5 ] .
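As a consistency check (our own sketch, not part of the original derivation), one can apply the exact two-fold 𝕊ℙ(2) twirl of Eq. (<ref>) to the six product operators and confirm that they are mapped into their own span, reading off a 6×6 coefficient matrix in the ordering (𝕀𝕀, 𝕀S, 𝕀B, S𝕀, SS, SB); note that this matrix can only be compared with the τ displayed above up to the basis-ordering and normalization conventions of the tensor-network formalism, which we have not fixed here.

```python
import numpy as np
from functools import reduce

d = 4
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
k4 = lambda *ops: reduce(np.kron, ops)   # slots: (q1,copy1),(q2,copy1),(q1,copy2),(q2,copy2)
Om = 1j * np.kron(Y, I2)

S_A = sum(k4(P, I2, P, I2) for P in (X, Y, Z))
S_B = sum(k4(I2, P, I2, P) for P in (X, Y, Z))
B_B = k4(I2, X, I2, X) - k4(I2, Y, I2, Y) + k4(I2, Z, I2, Z)
I16 = np.eye(16)
basis = [A @ B for A in (I16, S_A) for B in (I16, S_B, B_B)]

SWAP = np.zeros((16, 16))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0
phi = np.eye(d).reshape(-1) / np.sqrt(d)
Pi_s = d * np.kron(np.eye(d), Om) @ np.outer(phi, phi) @ np.kron(np.eye(d), Om)

def twirl2(M):                                   # exact t = 2 Weingarten twirl at d = 4
    tX, tS, tP = np.trace(M), np.trace(M @ SWAP), np.trace(M @ Pi_s)
    den = d * (d + 1) * (d - 2)
    return (((d - 1) * tX - tS + tP) * I16 + (-tX + (d - 1) * tS - tP) * SWAP
            + (tX - tS + (d - 1) * tP) * Pi_s) / den

V = np.column_stack([b.ravel() for b in basis])  # 256 x 6
rows, res = [], 0.0
for b in basis:
    t = twirl2(b).ravel()
    c, *_ = np.linalg.lstsq(V, t, rcond=None)
    rows.append(np.real_if_close(np.round(c, 6)))
    res = max(res, np.linalg.norm(V @ c - t))
print(np.array(rows))                            # 6 x 6 coefficient matrix
print(res)                                       # ~1e-14: the span is twirl-invariant
```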
|
http://arxiv.org/abs/2405.09134v1 | 20240515065742 | Contractibility of the Rips complexes of Integer lattices via local domination | [
"Žiga Virk"
] | math.MG | [
"math.MG",
"math.AT",
"math.CO"
] |
May 20, 2024
We prove that for each positive integer n, the Rips complexes of the n-dimensional integer lattice in the d_1 metric (i.e., the Manhattan metric, also called the natural word metric in the Cayley graph) are contractible at scales above n^2(2n-1)^2, with the bounds arising from Jung's constants. We introduce a new concept of locally dominated vertices in a simplicial complex, upon which our proof strategy is based. This allows us to deduce the contractibility of the Rips complexes from a local geometric condition called local crushing. In the case of the integer lattices in dimension n and a fixed scale r, this condition entails the comparison of finitely many distances to conclude that the corresponding Rips complex is contractible. In particular, we are able to verify that for n=1,2,3, the Rips complex of the n-dimensional integer lattice at scale greater than or equal to n is contractible. We conjecture that the same proof strategy can be used to extend this result to all dimensions n.
§ INTRODUCTION
Given a metric space X and r ≥ 0, the (closed) Rips complex (X, r) is the abstract simplicial complex with the vertex set X, for which a finite σ⊆ X is a simplex iff diam(σ) ≤ r. Here, diam(σ) = max_x,y∈σ d(x,y).
The Rips complexes, sometimes referred to as Vietoris-Rips complexes, were originally introduced by Vietoris <cit.>, <cit.> (following <cit.> we reserve the name “Vietoris complex” for Vietoris' cover-based construction of a simplicial complex). The construction was later rediscovered by Rips in the context of geometric group theory and used by Gromov <cit.>. Rips proved that the Rips complexes of hyperbolic groups are contractible for large scales r <cit.>, when considering a group equipped with the word metric, i.e., as a Cayley graph. This turns out to be an important aspect in terms of group actions and has far reaching ramifications in geometric group theory. There are many variations and generalizations of this result. Some of these include <cit.> (from the computational perspective) and <cit.>, which also incorporate Bestvina-Brady Morse theory. The motivating question for our work has been posed by an author of the latter two papers, Matthew Zaremsky: Are the Rips complexes of the free finitely generated Abelian groups (integer lattices in d_1 metric) contractible for large scales? Our main result confirms that this is indeed the case, and thus represents an extension of Rips' result to the “highly” non-hyperbolic free finitely generated Abelian groups.
While the aspect of geometric group theory is one of the main motivations to study the above question, it is not the only one. The second context in which our results play a prominent role is that of the reconstruction results, i.e., the question of whether the Rips complex of a space (or a nearby sample in, say, Gromov-Hausdorff metric) at a reasonably small scale attains the homotopy type of the space itself. This question was first studied by Hausmann in <cit.>, where he used the Rips complexes to define a new cohomology theory. He proved that the reconstruction result holds for a large class of Riemannian manifolds. A simpler proof of his reconstruction result using Vietoris complexes and Dowker duality was recently given in <cit.>. Motivated by the followup question of Hausmann, Latschev later proved the reconstruction result for a Gromov-Hausdorff nearby sample for a large class of Riemannian manifolds. His result was later generalized and extended, sometimes within the setting of computational geometry, in <cit.>, with the results of <cit.> also fitting into this setting despite the geometric group theoretic framework. Unfortunately, none of the tools used in these papers turned out to be applicable in our setting of integer lattices. By scaling the integer lattice to a sufficiently dense set, our main result is a specific variant of a reconstruction result.
The third context in which our result can be positioned is that of persistent homology <cit.>, within which the Rips complexes play a fundamental role in applications <cit.>. One of its fundamental properties is stability, a version of which also holds for the Rips complexes: If metric spaces are δ-close in the Gromov-Hausdorff distance, then the persistence diagrams (obtained via the Rips filtrations) are at distance at most 2δ, <cit.>. It has led, for example, to a counterexample of Hausmann's conjecture from <cit.> on the monotonicity of the connectivity of Rips complexes, see <cit.>. The stability theorem implies that the persistent homology of the Rips complexes of integer lattices is close to that of the Euclidean space in d_1 metric (which has trivial persistent homology), but not necessarily the same. Our main result implies that the persistent homology (and the homotopy type of Rips complexes) of integer lattices is more than stable: it is rigid at large scales, in that at any sufficiently large scale it reconstructs the homology (and homotopy type) of the Rips complex of the Euclidean space at that scale (which is homotopically trivial). In contrast to the reconstruction results, which are a specific type of rigidity result holding for reasonably small scales, this property holds for sufficiently large scales. Outside the realm of the reconstruction results, the rigidity of persistent homology (or the homotopy type) has only been observed in a few settings: for flag complexes of circular graphs, including the homotopy type of Rips complexes of S^1 and certain ellipses <cit.>, for 1-dimensional persistence of geodesic spaces <cit.>, and for connectivity of the Rips complexes of spheres above small and medium scales <cit.>.
Main results. Our main results state that the Rips complexes of the integer lattices in the d_1 metric are contractible at large scales. For dimensions n=1, 2, 3 we obtain the optimal contractibility bound r ≥ n (see Section <ref> for a discussion and a conjecture on the optimality). For larger dimensions, we obtain a bound arising from Jung's constants.
* Theorems <ref> and <ref>: Fix n∈{1,2,3}. Then for any r ≥ n, VR(ℤ^n, r) is contractible, when using the d_1 (Manhattan) metric.
* Theorem <ref>: Fix a positive integer n. Then for any r ≥ n^2 (2n-1)^2, VR(ℤ^n, r) is contractible, when using the d_1 (Manhattan) metric.
A comment on other d_p metrics. For a definition of the d_p metrics for p∈ [1,∞] see the beginning of Section <ref>; for example, d_2 is the Euclidean metric. In <cit.>, Matt Zaremsky proved that VR((ℤ^n, d_2), t) is contractible for all t > √(n)/(2 - √(3)) using Bestvina-Brady discrete Morse theory. He has since informed us that a similar argument also seems to work for the d_{1+ε} metrics for ε > 0, although the argument had not been written down at the time of writing. For this reason, the present paper focuses on the d_1 metric, and only comments in Section <ref> on why the same arguments hold in the other d_p metrics.
Main tool.
As our main tool we introduce a new concept: local domination of a vertex in a simplicial complex, see Definition <ref>. Local domination is a generalization of classical domination and allows us to remove a locally dominated (but not necessarily dominated) vertex from a simplicial complex without changing the homotopy type. As a novel tool in combinatorial topology it is of independent interest. While domination suffices to prove our main results in dimensions 1 and 2, higher-dimensional lattices require the use of local domination. The metric counterpart of local domination, which allows for a transition to Rips complexes, is called a local crushing. It is a generalization of Hausmann's crushing <cit.>.
The structure of the paper is as follows.
In Section <ref> we provide preliminaries.
In Section <ref> we demonstrate how domination is used to prove our first main result about contractibility in dimensions 1 and 2, see Theorem <ref>. We then discuss why the approach fails in higher dimensions (Remark <ref>) and introduce local domination (Definition <ref> and Theorem <ref>), along with its metric counterpart aimed at Rips complexes, local crushing (Definition <ref> and Theorem <ref>).
In Section <ref> we introduce a variant of the local crushing suited to our context, the local Euclidean crushing property (Definition <ref>, Theorem <ref>).
In Section <ref> we prove our main results on the contractibility of the Rips complexes of lattices in dimension 3 (Theorem <ref>) and higher (Theorem <ref>) using the local Euclidean crushing property.
In Section <ref> we discuss why our approach works for all d_p metrics. We conjecture that our proof strategy can be used to prove optimal contractibility bounds, and comment on why VR(ℤ^n, r) is not contractible for r < n.
§ PRELIMINARIES
Let n be a non-negative integer. The point (0, 0, …, 0) ∈ ℝ^n will be denoted by 0^n. Throughout the paper, the distance d on ℝ^n will always be the d_1 distance (also called the Manhattan distance) unless stated otherwise:
d_1( (a_1, a_2, …, a_n), (b_1, b_2, …, b_n) ) = ∑_i=1^n |a_i - b_i|.
The one point space is denoted by ∙. The notation X ≃∙ means X is contractible.
An (abstract) simplicial complex is a subset-closed collection of non-empty finite subsets of its vertex set. The relation of being a subcomplex is denoted by ≤. The n-th skeleton of K will be denoted by K^(n) ≤ K. The vertex set of K is V(K) = K^(0). Given a vertex a ∈ V(K), its star is St_K(a) = {σ∈ K | a∈σ} and its link is lk_K(a) = {σ∈ K | a∉σ, σ∪{a}∈ K} ≤ St_K(a). A simplicial complex K is a flag complex if the following condition holds: σ ⊆ V(K) is a simplex in K iff each pair of vertices contained in σ is a simplex in K. Given A ⊆ V(K), the induced subcomplex is defined as Ind_K(A) = {σ∈ K | σ⊆ A} ≤ K. Given a subcomplex K' ≤ K, the induced subcomplex is Ind_K(K') = Ind_K(V(K')) ≥ K'. Given a metric space (X, d) and a scale r > 0, the Rips complex VR(X, r) is the simplicial complex whose simplices are the non-empty finite subsets of X of diameter at most r. Clearly, each Rips complex is a flag complex.
Let K be a simplicial complex and a, b ∈ V(K). The vertex a is dominated by the vertex b ≠ a if ∀σ∈ K: a ∈ σ ⟹ σ∪{b} ∈ K.
§ LOCAL DOMINATION AND LOCAL CRUSHING
Let K be a simplicial complex and a ∈ V(K). The vertex a is locally dominated in K if there exists a simplex L_a ∈ K not containing a, such that the following condition holds: if σ∈ K with a∈σ, then ∃ b_σ∈ L_a: σ∪{b_σ}∈ K. In particular, a locally dominated vertex is not isolated.
Local domination is a combinatorially-local variant of the concept of domination. If L_a can be chosen to be a vertex in Definition <ref>, the corresponding local domination is the standard domination, see Figure <ref>.
The following results motivate local domination. It turns out that domination is sufficient to prove contractibility of Rips complexes of lattices in dimensions 1 and 2 (Theorem <ref>), but falls short in higher dimensions (Remark <ref>).
Let n ∈{1,2}. Then for each r ≥ n, VR(ℤ^n, r) ≃ ∙.
By the Whitehead theorem it suffices to show that all the homotopy groups are trivial. As any homotopy class is contained in the Rips complex on finitely many points, it suffices to show that for each positive integer M, VR(Z_M^n, r) ≃ ∙, where Z_M = {0, 1, 2, …, M}.
n=1: First, note that 0 is dominated by 1 in VR(Z_M, r) as r ≥ 1. Removing 0 thus preserves the homotopy type. We proceed by induction (see Figure <ref>), removing the left-most point until we reach the one-vertex simplicial complex VR({M}, r).
n=2: The proof is similar to the case n=1 in the sense that we inductively remove dominated points from Z^2_M in a certain order. Starting in the lowest row, we remove the vertices from left to right (see Figure <ref>). Assume that v is the leftmost vertex in the lowest row of the (potentially reduced) Z^2_M, and let the y-coordinate of v be below M:
* If v is not the last vertex in the row, then v is dominated by the vertex v + (1,1), and can thus be removed. Observe that the d_1 distance is the same as the induced path distance on the grid indicated in Figure <ref>: at any scale, the shortest path α from a point w ≠ v to v passes either through v + (1,0) or through v + (0,1) just before reaching v, and so we obtain a path of the same length from w to v + (1,1) by diverting the last segment of α towards v + (1,1).
* If a vertex v is the last vertex in the row, then v is dominated by the vertex v + (0,1), and can thus be removed.
When the reduction reaches Z_M ×{M} we refer to the case n=1 to conclude the contractibility.
The assumption r ≥ 2 was used in two places: first, to ensure that the pairs of vertices (v, v + (1,1)) and (v, v + (0,1)) form edges; second, to invoke the case n=1, where r ≥ 1 is required.
The vertex 0^3 ∈ Z = {0,1,2,3}^3 is not dominated in VR(Z, 3). While each of the three vertices (3,0,0), (0,3,0), (0,0,3) forms an edge with 0^3 (i.e., is at distance at most 3 from it), it is easy to see that there is no vertex in Z, other than 0^3, that is at distance at most 3 from all three of them. Thus domination cannot be used, and we turn our attention to the more general local domination.
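The non-domination claim here is small enough to verify by brute force. The following sketch (plain Python; the helper names are ours) uses the fact that in a flag complex, such as a Rips complex, a vertex a is dominated by b exactly when b is a neighbour of a and of every other neighbour of a:

```python
from itertools import product

def d1(p, q):
    return sum(abs(x - y) for x, y in zip(p, q))

Z = list(product(range(4), repeat=3))    # the grid {0,1,2,3}^3
r, a = 3, (0, 0, 0)
nbrs = [v for v in Z if v != a and d1(v, a) <= r]

# In a flag complex, a is dominated by b iff b is a neighbour of a
# and of every other neighbour of a.
dominators = [b for b in nbrs if all(v == b or d1(v, b) <= r for v in nbrs)]
print(dominators)    # [] -- no vertex dominates 0^3 at scale 3
```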
The following theorem establishes the crucial property that removing a locally dominated vertex (and, as a special case, a dominated vertex) is a homotopy equivalence.
Let K be a simplicial complex and a ∈ V(K) a (non-isolated) locally dominated vertex in K. If K is a finite flag complex, then Ind_K(V(K) ∖ {a}) ≃ K.
In the proof of Theorem <ref> we will use the following well known lemma, see for example <cit.>, or recent uses in <cit.>.
Let K be a flag simplicial complex and a ∈ V(K). If lk_K(a) ≃ ∙, then K ≃ Ind_K(V(K) ∖ {a}).
According to Lemma <ref> it suffices to show that lk_K(a) ≃ ∙. Choose a simplex L_a according to Definition <ref>. Define a cover 𝒰 = {M_σ}_σ∈lk_K(a) of lk_K(a) by the subcomplexes M_σ = Ind_lk_K(a)(σ ∪ L_a). Observe that the nerve 𝒩(𝒰) is a full simplex, as the intersection of 𝒰 is non-trivial (it contains at least L_a). In particular, 𝒩(𝒰) ≃ ∙. By the nerve theorem it remains to prove that 𝒰 is a good cover of lk_K(a).
Let σ_1, σ_2, …, σ_ℓ be a collection of simplices in lk_K(a). We intend to prove that M = ⋂_i=1^ℓ M_σ_i ≃ ∙. Observe that M = Ind_lk_K(a)(L_a ∪ ⋂_i=1^ℓ σ_i). Now ⋂_i=1^ℓ σ_i ∈ lk_K(a) and by Definition <ref> there exists b ∈ L_a such that (⋂_i=1^ℓ σ_i) ∪ {b} ∈ lk_K(a). This implies that every vertex of ⋂_i=1^ℓ σ_i is connected to b. As M is a flag complex and b ∈ L_a, this implies that M is a cone with apex b (every vertex of M is connected to b) and is thus contractible. As a result, 𝒰 is a good cover and the theorem follows.
We now translate the local domination concept into a metric setting amenable to Rips complexes.
Let (X, d) be a finite metric space, x ∈ X, and r > 0. The point x is locally crushable (at scale r) in X if there exists a subset L_x ⊂ X not containing x and of diameter diam(L_x) ≤ r, such that for each A ⊂ X satisfying x ∈ A and diam(A) ≤ r, there exists b ∈ L_x with diam(A ∪ {b}) ≤ r.
The concept of a crushing, as a method of transforming the underlying set of a Rips complex while preserving the homotopy type of the complex, was first used by Hausmann <cit.>. Crushing and its discrete variant have since been used in <cit.>. Definition <ref> provides a combinatorially-local version of crushing.
The following theorem is a direct consequence of Theorem <ref> and the definition of Rips complexes.
Let X be a finite metric space, x ∈ X, and r > 0. If the point x is locally crushable in X at scale r, then VR(X, r) ≃ VR(X ∖ {x}, r).
§ LOCAL EUCLIDEAN CRUSHING
In this section we prove a variant of the local crushing property for compact subsets of a Euclidean space, see Definition <ref>. While our intended application is to finite subsets (i.e., the simplices of Rips complexes), the use of Jung's constant allows us to phrase it for compact subsets. Throughout this section we fix a positive integer n.
Given a subset A ⊂ ℝ^n, let Box(A) denote the smallest box of the form ∏_i=1^n [a_i, b_i] ⊂ ℝ^n containing A. In particular, each a_i is the minimal value of the i-th coordinate attained on A.
Given a positive integer n and ρ > 0, we say that the local Euclidean crushing property LEC(n, ρ) holds if for each compact τ ⊂ ℝ^n-1 × [0, ∞) with 0^n ∈ τ and diam(τ) ≤ 2n-1, there exists c_τ ∈ ([-1,1]^n-1 × [0,1]) ∩ Box(τ) such that the following holds: ∀ x ∈ τ: d(x, c_τ) ≤ (2n-1) - ρ.
Observe that if LEC(n, ρ) holds, then LEC(n, ρ') also holds for all positive ρ' < ρ.
Here we provide a few comments on Definition <ref>.
In a forthcoming argument, Definition <ref> will be used inductively along the lexicographical order. To that purpose, we have phrased it in terms of subsets of ℝ^n-1 × [0, ∞) (where all lexicographical successors of 0^n lie) instead of ℝ^n.
The definition essentially phrases a local crushing condition for compact subsets of ℝ^n-1 × [0, ∞) with redundancy ρ. The redundancy will later be required to pass to the lattices. Without loss of generality, the “anchor” point is taken to be 0^n. The role of the subset L_x from Definition <ref>, onto which the local crushing occurs, is assigned to the box [-1,1]^n-1 × [0,1], one of the simplest sets whose diameter is easily determined. Its diameter 2n-1 is thus used as the diameter bound.
The proof of the variant of the property using ℝ^n instead of ℝ^n-1 × [0, ∞), and L_x = [-1,1]^n instead of [-1,1]^n-1 × [0,1], is identical to the given proof.
We now explain how the constant ρ is related to Jung's constant.
Given x ∈ ℝ^n and r > 0, define the closed ball B(x, r) = {y ∈ ℝ^n | d_1(x, y) ≤ r}.
For a compact subset A ⊂ ℝ^n, define the smallest enclosing radius rad(A) as the minimal radius of a ball in ℝ^n containing A, keeping in mind that we are using the d_1 metric. By compactness, the minimum rad(A) exists. The center C(A) of such a ball may not be unique, though. For A' = {(1,0), (0,1)}, any point on the closed line segment from (0,0) to (1,1) is the center of such a ball. Therefore, a center may not be contained in the convex hull of A. However, each center is in Box(A). This is easy to see: if a point C' has, for example, the first coordinate smaller than the first coordinates of all points of A, then increasing its first coordinate to the minimal first coordinate of A decreases the distances from C' to each point of A. Performing such a modification for each coordinate, we obtain a center C(A) within Box(A).
Given a positive integer n, the Jung constant is defined as
J(n) = sup{rad(A) | A ⊂ ℝ^n compact, diam(A) ≤ 1}.
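For illustration, rad(A') and its (non-unique) centers can be recovered by a brute-force search; the following sketch (plain Python; we work in tenths so the arithmetic stays integral) finds rad(A') = 1 with the centers filling the diagonal segment, so rad(A')/diam(A') = 1/2, consistent with the bound J(2) ≤ 2/3 stated below.

```python
from itertools import product

def d1(p, q):
    return sum(abs(x - y) for x, y in zip(p, q))

# A' = {(1,0), (0,1)}, scaled by 10 so that all arithmetic stays integral
A = [(10, 0), (0, 10)]
grid = list(product(range(11), repeat=2))            # candidate centers in [0,1]^2, in tenths
rad10 = min(max(d1(c, a) for a in A) for c in grid)
centers = [c for c in grid if max(d1(c, a) for a in A) == rad10]
print(rad10 / 10)                  # 1.0 = rad(A'); diam(A') = 2, so rad/diam = 1/2
print(centers[0], centers[-1])     # (0, 0) (10, 10): centers fill the diagonal segment
```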
There is a long history of results on the constants J(n), starting with <cit.>, who proved that J(n, d_2) = √(n/(2(n+1))) holds for the Euclidean metric d_2. For a review of the topic see <cit.> or <cit.>. For our argument it will be crucial to have J(n) < 1 (which holds by (1) of Theorem <ref>), while explicit upper bounds will provide eventual bounds on the scales at which the Rips complexes of integer lattices are contractible. The following proposition states some of the bounds on the Jung constant that we will be using.
The following are some of the bounds on the Jung constant in the d_1 metric:
* The bound J(n) ≤ n/(n+1) was proved by <cit.>; see also a short proof using Helly's theorem in <cit.>.
* J(n) = n/(n+1) iff there exists a Hadamard matrix of order n+1. This result was proved by <cit.>; see also <cit.> and <cit.> for some related results.
Fix a positive integer n and κ ∈ [J(n), 1). Then LEC(n, ρ) holds for ρ = (1-κ)/((2n-1)κ).
Choose a compact τ ⊂ ℝ^n-1 × [0, ∞) with 0^n ∈ τ and diam(τ) ≤ 2n-1. There exists c'_τ ∈ Box(τ) such that ∀ x ∈ τ: d(x, c'_τ) ≤ (2n-1)κ. As 0^n ∈ Box(τ), so is the convex combination
c_τ = (1/((2n-1)κ)) · c'_τ + (1 - 1/((2n-1)κ)) · 0^n.
Choose any y ∈ τ. Then by the triangle inequality,
d(y, c_τ) ≤ (1/((2n-1)κ)) · d(y, c'_τ) + (1 - 1/((2n-1)κ)) · d(y, 0^n)
≤ 1 + ((2n-1) - 1/κ) = (2n-1)(1/(2n-1) + 1 - 1/((2n-1)κ))
= (2n-1)(1 - (1-κ)/((2n-1)κ)) = (2n-1) - (2n-1)ρ ≤ (2n-1) - ρ,
and similarly d(0^n, c_τ) ≤ 1, which implies c_τ ∈ [-1,1]^n-1 × [0,1]. Thus the lemma holds.
For the purposes of the inductive proof of Theorem <ref> we will require the LEC(n, ρ(n)) property with a parameter ρ(n) that is monotone in n. Combining with (1) of Theorem <ref>, we thus phrase the following.
Fix a positive integer n. Then LEC(n, ρ(n)) holds for ρ(n) = 1/(n(2n-1)).
Apply κ = n/(n+1) from (1) of Theorem <ref> to Lemma <ref>.
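The computation behind this proof is easy to verify mechanically; a quick check in exact rational arithmetic (plain Python) confirms that κ = n/(n+1) yields the redundancy ρ(n) = 1/(n(2n-1)):

```python
from fractions import Fraction

for n in range(1, 10):
    kappa = Fraction(n, n + 1)                      # the bound from (1) of the theorem
    rho = (1 - kappa) / ((2 * n - 1) * kappa)       # redundancy from the lemma
    assert rho == Fraction(1, n * (2 * n - 1))
print("ok")
```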
§ CONTRACTIBILITY OF INTEGER LATTICES
In Subsection <ref> we use the LEC property inductively to conclude the contractibility of Rips complexes of integer lattices at large scales. In Subsection <ref> we provide ad-hoc local crushings in dimension 3, which yield smaller scales of contractibility for Rips complexes of ℤ^3.
In both of these arguments we will be using the lexicographical order on ℤ^n defined as follows: (x_1, x_2, …, x_n) ≺ (y_1, y_2, …, y_n) iff the largest index i at which the two vectors differ satisfies x_i < y_i. When x_i, y_i ∈ {0, 1, …, 9}, the equivalent condition is that the corresponding n-digit numbers satisfy x_n x_{n-1} … x_2 x_1 < y_n y_{n-1} … y_2 y_1. In comparison to the standard lexicographical order we have reversed the indexing, so that the last coordinate is the dominant one. Also, for a positive number w, let (w·ℤ)^n denote the integer lattice scaled by w, i.e., {w·z | z ∈ ℤ^n}. For each i, let π_i : ℝ^n → ℝ denote the projection to the i-th coordinate.
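A one-line key function realizes this reversed lexicographical order; the sketch below (plain Python) sorts a few points of ℤ^3 accordingly:

```python
def lex_key(z):
    # reversed indexing: the last coordinate is the most significant
    return tuple(reversed(z))

pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, 1, 0)]
print(sorted(pts, key=lex_key))
# [(1, 0, 0), (0, 1, 0), (2, 1, 0), (0, 0, 1)]
```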
§.§ Contractibility of integer lattices at large scales
For each r ≥ n^2(2n-1)^2 we have VR((ℤ^n, d_1), r) ≃ ∙.
Define ρ = 1/(n(2n-1)) and observe that LEC(n', ρ) holds for all n' ∈ {1, 2, …, n} by Corollary <ref>.
Choose any real m ≥ n/ρ. We claim that VR((1/m · ℤ)^n, 2n-1) is contractible. Deferring the proof of the claim for a moment, we observe that the claim, combined with scaling (1/m · ℤ)^n by m, implies that VR(ℤ^n, (2n-1)m) is contractible for all m ≥ n/ρ. This means that the theorem holds for all scales r = (2n-1)m ≥ n^2(2n-1)^2.
It thus remains to prove the claim that VR((1/m · ℤ)^n, 2n-1) is contractible. By the Whitehead theorem it suffices to show that for each sufficiently large M > 0, the Rips complex VR(X, 2n-1) is contractible for X = (1/m · ℤ)^n ∩ [-M, M]^n, as each element of a homotopy group of VR((1/m · ℤ)^n, 2n-1) is contained in some VR(X, 2n-1) for a large enough M.
The proof that VR(X, 2n-1) is contractible proceeds by induction. Let x_1, x_2, …, x_ℓ denote the points of X = X_0 arranged in the lexicographical order and define X_i = X ∖ {x_1, x_2, …, x_i}. We will prove that for each i ∈ {0, 1, 2, …, ℓ-2}, the point x_{i+1} is locally crushable in X_i at scale 2n-1. By Theorem <ref>, this will imply that VR(X, 2n-1) ≃ VR(X_{ℓ-1}, 2n-1), which is contractible as X_{ℓ-1} is a single point.
There are two different types of inductive steps to prove that x_{i+1} is locally crushable in X_i at scale 2n-1.
* The first type is used when the last coordinate of x_{i+1} (namely, π_n(x_{i+1})) is not maximal within X_i, i.e., π_n(x_{i+1}) < max_{z ∈ X_i} π_n(z). By the definition of the lexicographical order, X_i contains the entire subgrid of X above x_{i+1}, namely the non-empty subset
X'_i = (1/m · ℤ)^n ∩ ([-M, M]^{n-1} × [π_n(x_{i+1}) + 1/m, M]).
We will now prove that x_{i+1} is locally crushable in X_i with the subset L_{x_{i+1}} (see Definition <ref>) being
L_{x_{i+1}} = {z ∈ X_i | |π_j(z) - π_j(x_{i+1})| ≤ 1 for all j ∈ {1, 2, …, n}, and π_n(z) > π_n(x_{i+1})}.
Observe that L_{x_{i+1}} is contained in the box [-1,1]^{n-1} × [0,1] translated by x_{i+1}. We refer to this translated box as B_i.
Following the definition of local crushing, choose a simplex τ ∈ VR(X_i, 2n-1) containing x_{i+1}. By the LEC property there exists c_τ ∈ B_i ∩ Box(τ) with d(y, c_τ) ≤ 2n-1 - ρ for all y ∈ τ. Next, we change each coordinate of c_τ by at most 1/m to reach a point c'_τ of L_{x_{i+1}}:
* For j < n we snap the j-th coordinate of c_τ (namely, π_j(c_τ)) to the closest value of (1/m)·ℤ in the direction of π_j(x_{i+1}). (If π_j(c_τ) = π_j(x_{i+1}), no change occurs.)
* We snap the n-th coordinate to the closest value of (1/m)·ℤ in the direction of π_n(x_{i+1}). In case the new value is π_n(x_{i+1}), we have π_n(c_τ) ∈ [π_n(x_{i+1}), π_n(x_{i+1}) + 1/m) and we thus snap to π_n(x_{i+1}) + 1/m instead.
The resulting point is denoted by c'_τ. We first argue that c'_τ ∈ X_i. By the LEC property, c_τ ∈ Box(X). By the structure of X (the cubical grid), the above modification c_τ → c'_τ, snapping coordinates towards x_{i+1} ∈ X (or the last coordinate to π_n(x_{i+1}) + 1/m), thus results in c'_τ being in X. Since π_n(c'_τ) > π_n(x_{i+1}), and since X_i contains all the points of X with n-th coordinate above π_n(x_{i+1}) due to the lexicographical order (i.e., it contains X'_i), we have c'_τ ∈ X_i.
Next we argue that the resulting point c'_τ is in L_{x_{i+1}}:
* For j < n: as |π_j(c_τ) - π_j(x_{i+1})| ≤ 1 by the LEC property, so is |π_j(c'_τ) - π_j(x_{i+1})| ≤ 1, as the j-th coordinate was moved (if moved at all) towards π_j(x_{i+1}).
* In the same spirit we observe that π_n(c'_τ) - π_n(x_{i+1}) ∈ (0, 1]. In particular, this means that c'_τ ≠ x_{i+1}.
Thus c'_τ is in L_{x_{i+1}}.
Changing each coordinate by at most 1/m in the transition c_τ → c'_τ changes distances by at most n/m. Thus for all y ∈ τ:
d(y, c'_τ) ≤ d(y, c_τ) + d(c_τ, c'_τ) ≤ (2n-1 - ρ) + n/m ≤ 2n-1
by the choice of m. The local crushing condition thus holds, with c'_τ being the point corresponding to τ according to Definition <ref>.
* The second type of inductive step on the index i uses another induction, on the dimension n. Let i be the first index for which π_n(X_i) is a single point. This means that
X_i = ((1/m · ℤ)^{n-1} ∩ [-M, M]^{n-1}) × {π_n(x_{i+1})}.
By the statement of this theorem for n-1, we may inductively assume that a sequence of local crushings induces the homotopy equivalence VR(X_i, 2n-1) ≃ VR(X_{ℓ-1}, 2n-1), because the relevant parameters have been chosen so that they hold for the case n-1 as well:
* We have chosen ρ so that LEC(n', ρ) holds for all n' ∈ {1, 2, …, n}.
* We have chosen r ≥ n^2 (2n-1)^2 and thus r ≥ (n-1)^2 (2(n-1)-1)^2 also holds.
* We have chosen m ≥ n/ρ and thus m ≥ (n-1)/ρ also holds.
Thus X can be transformed into X_{ℓ-1} by a sequence of local crushings, and hence VR(X, 2n-1) ≃ VR(X_{ℓ-1}, 2n-1) ≃ ∙.
§.§ Contractibility of integer lattices at small scales
The overall strategy of the next result is the same as that of Theorem <ref> (see Figures <ref> and <ref>), with the incorporation of local crushings. It is also an adaptation of the strategy of Theorem <ref>, using explicit local domination steps instead of the ones implied by the LEC property.
For each r ≥ 3, VR(ℤ^3, r) ≃ ∙.
As in the proofs of Theorems <ref> and <ref>, it suffices to show that for arbitrarily large M > 0 and Z = {0, 1, …, M}^3, we have VR(Z, r) ≃ ∙. Again, we will proceed by induction, removing points from Z in the lexicographical order. Assume z_1, z_2, …, z_ℓ are the points of Z ordered lexicographically, and define Z_i = Z ∖ {z_1, z_2, …, z_i}. We will prove that for each i ∈ {0, 1, 2, …, ℓ-2}, the point z_{i+1} is locally crushable in Z_i, yielding VR(Z, r) ≃ VR(Z_{ℓ-1}, r) ≃ ∙.
Fix i. Translating by -z_{i+1} we recenter Z_i so that z_{i+1} corresponds to 0^3. Thus, there exist integers a_1, a_2 ≤ 0 and b_1, b_2, b_3 ≥ 0 so that Z_i consists of those points in
{a_1, a_1 +1, …, b_1}×{a_2, a_2 +1, …, b_2}×{0, 1, …, b_3}
which appear lexicographically at or after 0^3. We proceed by case analysis:
* If i is the first index at which b_3 = 0, the grid Z_i is two-dimensional, and thus there is a sequence of local crushings inducing VR(Z_i, r) ≃ ∙ by Theorem <ref>.
* If b_3 ≠ 0 but b_1 = b_2 = 0, then all the points of Z_i except for 0^3 have last coordinate at least 1, hence 0^3 is dominated in VR(Z_i, r) by (0,0,1). The technical reason is that ∀ w ∈ Z_i, w ≠ 0^3: d(w, (0,0,1)) = d(w, 0^3) - 1.
* Similarly, if b_2, b_3 ≠ 0 but b_1 = 0, then 0^3 is dominated in VR(Z_i, r) by (0,1,1). The technical reason is that ∀ w ∈ Z_i: d(w, (0,1,1)) ≤ d(w, 0^3). This is easy to see because any point in Z_i other than 0^3 has either the second or the third coordinate non-zero and is thus at distance at most r-1 from either (0,1,0) or (0,0,1). The same argument was used in the proof of Theorem <ref>, case n=2, item (1).
Analogously, if b_1, b_3 ≠ 0 but b_2 = 0, then 0^3 is dominated in VR(Z_i, r) by (1,0,1).
* Assume b_1, b_2, b_3 ≠ 0. We will prove that the local crushing condition holds for 0^3 with L_{0^3} = {(0,0,1), (1,1,0)}, which is of diameter 3. Assume τ ∈ VR(Z_i, r) contains 0^3. If τ ∪ {(0,0,1)} is not a simplex, then there exists (t_1, t_2, 0) ∈ τ with |t_1| + t_2 = r (note that t_2 ≥ 0 due to the lexicographical order). Take any (s_1, s_2, s_3) ∈ τ different from 0^3. As d((t_1, t_2, 0), (s_1, s_2, s_3)) ≤ r, we cannot have both s_1 and s_2 equal to 0. In particular, either s_2 > 0 (in this case s_1 may be negative) or s_1 > 0. This means that (s_1, s_2, s_3) is at distance at most r-1 from (0,1,0) or from (1,0,0), respectively. In both cases we can conclude d((s_1, s_2, s_3), (1,1,0)) ≤ r as in (3) above. This implies τ ∪ {(1,1,0)} ∈ VR(Z_i, r) and thus concludes the proof of local crushing.
§ CONCLUSION
We conclude with two comments on our results.
First, the proof of Theorem <ref> actually holds for any d_p metric. For finite p ≥ 1, d_p is defined as
d_p( (a_1, a_2, …, a_n), (b_1, b_2, …, b_n) ) = (∑_i=1^n |a_i - b_i|^p)^1/p,
while
d_∞( (a_1, a_2, …, a_n), (b_1, b_2, …, b_n) ) = max_i=1, 2, …, n |a_i - b_i|.
There are only a few places in the proof of Theorem <ref> that actually use specifics of d_1. We next discuss them and comment on why the same argument holds for any d_p metric with p ∈ [1,∞].
* The proof makes use of L_{x_{i+1}}, which is contained in the box [-1,1]^{n-1} × [0,1]. The diameter of this box is 2n-1 in d_1 (see Remark <ref>) and strictly less in the other d_p metrics. Hence the estimates of the proof hold for any other d_p metric.
* Another property that is used is the fact that the center of the minimal ball in (ℝ^n, d_1) containing A lies in Box(A). The argument given just before Definition <ref> implies that the same holds in any d_p for p ∈ [1,∞). A special case is d_∞: there a center is not unique, but it is easy to see (again, by the same argument) that it can be chosen in Box(A).
* All the other metric estimates in the proof hold for any d_p as they only use the triangle inequality.
* The bound of (1) of Theorem <ref> holds for any d_p metric by the same references (<cit.>, <cit.>), and consequently so does the auxiliary Lemma <ref>.
The smaller diameter of the mentioned box in (1), and tighter bounds on the Jung constant in (4), can be used to deduce better bounds in the main result for d_p metrics.
The second comment is related to the following question: Is the Rips complex VR((ℤ^n, d_1), r) contractible for each r ≥ n? We conjecture that the same proof strategy as in the proof of Theorem <ref> should work. Unfortunately, we have failed to notice a general pattern in the case analysis of the said proof that would extend to higher dimensions. However, it is clear that for each fixed dimension n, such a case analysis only needs to consider finitely many cases, hence specific low dimensions may be amenable to computational verification. Concerning the lower bound for the scale parameter, it is easy to see that for r < n the complex VR((ℤ^n, d_1), r) is not contractible: according to <cit.>, the Rips complex VR(({0,1}^n, d_1), r) has non-trivial homology, and since there is a contraction (1-Lipschitz retraction) ℤ^n → {0,1}^n, the inclusion {0,1}^n ↪ ℤ^n induces an injection on the homology of Rips complexes at each scale r, see <cit.>.
§ ACKNOWLEDGMENTS
The author would like to thank Henry Adams, Arseniy Akopyan, and Matthew Zaremsky for fruitful discussions on the subject. The author was supported by the Slovenian Research Agency grants No. J1-4001 and P1-0292.
99
AA
M. Adamaszek, H. Adams.
The Vietoris-Rips complexes of a circle.
Pacific Journal of Mathematics 290 (2017), 1–40.
Ad5
M. Adamaszek, H. Adams, and S. Reddy:
On Vietoris-Rips complexes of ellipses,
Journal of Topology and Analysis 11 (2019), 661-690.
ABV
H. Adams, J. Bush, and Ž. Virk,
The connectivity of Vietoris-Rips complexes of spheres,
in preparation.
AVCubes
H. Adams and Ž. Virk,
Lower bounds on the homology of Vietoris-Rips complexes of hypercube graphs,
Bull. Malays. Math. Sci. Soc. 47, 72 (2024).
Alimov
A. R. Alimov and I. G. Tsar'kov,
Chebyshev centres, Jung constants, and their applications,
Russ. Math. Surv. 74 (2019), 775–849.
Amir
D. Amir,
On Jung's constant and related constants in normed linear spaces, Pacific J. Math. 118(1): 1–15 (1985).
AttD. Attali, A. Lieutier, and D. Salinas. Vietoris-Rips complexes also provide topologically correct reconstructions of sampled shapes. In Proceedings of the 27th annual ACM symposium on Computational geometry, SoCG '11, pages 491–500, New York, NY, USA, 2011. ACM.
Bauer
U. Bauer,
Ripser: efficient computation of Vietoris-Rips persistence barcodes,
Journal of Applied and Computational Topology, 5:391–423, 2021.
BauerRoll
U. Bauer and F. Roll, Gromov Hyperbolicity, Geodesic Defect, and Apparent Pairs in Vietoris-Rips Filtrations, In: 38th International Symposium on Computational Geometry (SoCG 2022). Vol. 224. Leibniz International Proceedings in Informatics (LIPIcs). 2022.
Bjor
A. Björner,
Topological Methods, in: Handbook of Combinatorics (ed. R. Graham, M. Grötschel, L. Lovász), Chapter 34, 1819–1872. North-Holland, Amsterdam (1995).
Bohn
F. Bohnenblust,
Convex regions and projections in Minkowski spaces,
Ann. Math.,39 (1938), 301–308.
Bridson
M.R. Bridson and A. Haefliger, Metric spaces of non-positive curvature, volume 319 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999.
CombM. Cencelj, J. Dydak, A. Vavpetič, and Ž. Virk:
A combinatorial approach to coarse geometry,
Topology and its Applications 159(2012), 646–658.
Cha2 F. Chazal, V. de Silva, and S. Oudot,
Persistence stability for geometric complexes, Geom. Dedicata (2014) 173: 193.
MC
M. Čufar,
Ripserer.jl: flexible and efficient persistent homology computation in Julia, Journal of Open Source Software, 5(54), 2614 (2020).
Dol
V.L. Dol'nikov,
Jung constant in ℓ^n_1,
Mathematical Notes of the Academy of Sciences of the USSR 42, 787–791 (1987).
Ziqin
Z. Feng,
Homotopy types of Vietoris-Rips complexes of Hypercube Graphs, arXiv:2305.07084.
Anu
S. Goyal, S. Shukla, and A. Singh,
Matching complexes of 3 × n grid graphs,
Electronic Journal of Combinatorics, vol. 28, no. 4, Article no. P4.16, 2021.
EH
H. Edelsbrunner and J.L. Harer,
Computational Topology. An Introduction,
Amer. Math. Soc., Providence, Rhode Island, 2010.
GR
M. Gromov,
Hyperbolic groups in Essays in group theory, S.M. Gersten (ed.), MSRI Publ. 8 75–263 (1987).
Haus J.-C. Hausmann. On the Vietoris-Rips complexes and a cohomology theory for metric spaces. Annals of Mathematics Studies, 138:175–188, 1995.
Ivanov
V.I. Ivanov and S.A. Pichugov,
Jung constants of the ℓ^n_p-spaces, Mathematical Notes of the Academy of Sciences of the USSR 48, 997–1004 (1990).
Jung
H.W.E. Jung,
Über die kleinste Kugel, die eine räumliche Figur einschließt, J. Reine Angew. Math. 123 (1901), 241–257.
LatJ. Latschev. Vietoris-Rips complexes of metric spaces near a closed Riemannian manifold. Archiv der Mathematik, 77(6):522–528, 2001.
Lef
S. Lefschetz,
Algebraic Topology,
AMS Coll. Publ. 27 (1942).
Lem
B. Lemež and Ž. Virk,
Reconstruction Properties of Selective Rips Complexes,
Glasnik Matematicki 57(2022), vol. 2, 73-88.
Sush1
S. Majhi, Vietoris-Rips complexes of metric spaces near a metric graph. J Appl. and Comput. Topology 7, 741–770 (2023).
Sush2
S. Majhi, Demystifying Latschev's Theorem: Manifold Reconstruction from Noisy Data, Discrete Comput Geom (2024).
Mun
J.R. Munkres,
Topology,
second edition, Prentice Hall, 2000.
Ro1
J. Roe, Coarse cohomology and index theory for complete Riemannian manifolds, Memoirs Amer.
Math. Soc. No. 497, 1993.
Rolle
A. Rolle,
The Degree-Rips Complexes of an Annulus with Outliers, in 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 58:1–58:14, Schloss Dagstuhl–Leibniz–Zentrum für Informatik (2022)
Varhatis
M. N. Vrahatis,
Towards the mathematical foundation of the minimum enclosing ball and related problems, arXiv:2402.06629.
Zar2
M. Varisco and M. Zaremsky,
Equivariant Morse theory on Vietoris-Rips complexes and universal spaces for proper actions,
Bull. Lond. Math. Soc. Vol. 53 (2021), No. 6, 1724–1739.
Viet
L. Vietoris,
Über den höheren Zusammenhang kompakter Räume und eine Klasse von zusammenhangstreuen
Abbildungen. Math. Ann. 97, 454–472 (1927).
ZV1
Ž. Virk,
Approximations of 1-Dimensional Intrinsic Persistence of Geodesic Spaces and Their Stability,
Revista Matemática Complutense
32 (2019), 195–213.
ZVCounterex
Ž. Virk:
A Counter-example to Hausmann's Conjecture, Found Comput Math 22(2022), 469–475.
ZV2
Ž. Virk,
Footprints of geodesics in persistent homology,
Mediterranean Journal of Mathematics 19 (2022).
ZV3
Ž. Virk,
Rips complexes as nerves and a Functorial Dowker-Nerve Diagram,
Mediterranean Journal of Mathematics 18 (2021).
ZVContractions
Ž. Virk,
Contractions in persistence and metric graphs,
Bull. Malays. Math. Sci. Soc. 45 (2022), 2003–2016.
Zar1
M. Zaremsky,
Bestvina-Brady discrete Morse theory and Vietoris-Rips complexes,
Amer. J. Math. Vol. 144 (2022), No. 5, 1177–1200.
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

Linshan Hou^1, Ruili Feng^2, Zhongyun Hua^1, Wei Luo^3, Leo Yu Zhang^4, Yiming Li^5

^1Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China
^2University of Science and Technology of China, Hefei, China
^3School of Information Technology, Deakin University, Australia
^4School of Information Technology, Griffith University, Gold Coast, Australia
^5Nanyang Technological University, Singapore

Correspondence: Zhongyun Hua <huazhongyun@hit.edu.cn>; Yiming Li <liyiming.tech@gmail.com>
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training. This paper proposes a simple yet effective input-level backdoor detection (dubbed IBD-PSC) as a `firewall' to filter out malicious testing images. Our method is motivated by an intriguing phenomenon, i.e., parameter-oriented scaling consistency (PSC), where the prediction confidences of poisoned samples are significantly more consistent than those of benign ones when amplifying model parameters. In particular, we provide a theoretical analysis that underpins the PSC phenomenon. We also design an adaptive method to select the BN layers to scale up for effective detection. Extensive experiments are conducted on benchmark datasets, verifying the effectiveness and efficiency of our IBD-PSC method and its resistance to adaptive attacks.
§ INTRODUCTION
Backdoor attacks are an emerging training-phase threat to deep neural networks (DNNs) <cit.>. A backdoored model behaves normally on benign samples while misclassifying malicious samples containing adversary-specified patterns (i.e., triggers). This attack could happen whenever the training stage is not fully controlled.
It poses a significant threat to the lifecycle and supply chain of DNNs.
Currently, there are five representative defense strategies to alleviate backdoor threats, including (1) data purification <cit.>, (2) poison suppression <cit.>, (3) model-level backdoor detection <cit.>, (4) model-level backdoor mitigation <cit.>, and (5) input-level backdoor detection (IBD) <cit.>. In general, the first four strategies typically demand substantial computational resources since they usually require model training. However, these resources are unavailable for many researchers and developers, especially those using third-party models. In contrast, the last one is less resource-intensive and is, therefore, our main focus. It aims to detect and prevent malicious inputs and can serve as the firewall of deployed models.
To the best of our knowledge, SCALE-UP <cit.> currently stands as the most advanced IBD. It observes that the predictions of poisoned samples (i.e., those containing triggers) are more robust to pixel-level amplification than those of benign samples, and it provides theoretical foundations for this phenomenon. Exploiting this intriguing phenomenon, SCALE-UP directly enlarges all pixel values of the suspicious input sample with varying amplification intensities and assesses its prediction consistency for detection. However, SCALE-UP encounters some intrinsic limitations due to the restriction of pixel values (i.e., bounded in [0, 255]). For example, as shown in <ref>(a), benign samples containing black and white pixels maintain their initial predictions during the amplification process. This stability is due to their extreme pixel values (0 or 255), which remain unaffected by amplification. Conversely, in poisoned samples, amplification often turns higher pixel values into the maximum (i.e., 255). This leads to large blank areas in the scaled poisoned images, masking the triggers and thus changing their predictions. Recognizing that prediction results arise from the co-effects of pixel and parameter values, as shown in <ref>(b), and that parameter values are not bounded, an intriguing question arises:
Shall the model's parameters expose backdoors with more grace than the humble pixel's tale?
Fortunately, the answer is yes! In this paper, we reveal that the prediction confidences of poisoned samples exhibit parameter-oriented scaling consistency (PSC). Specifically, we scale up the learned parameters of the batch normalization (BN) layers, which are widely exploited in advanced DNN structures. We demonstrate that the prediction confidences of poisoned samples are significantly more consistent than those of benign ones as the number of amplified BN layers increases. In particular, we show that this intriguing phenomenon is not accidental: we prove that we can always find a scaling factor for BN parameters that exposes latent backdoors for all attacked models (under some classical assumptions in learning theory). The scaled model misclassifies benign samples while maintaining the predictions of poisoned samples, leading to the PSC phenomenon.
Motivated by this finding, we propose a simple yet effective IBD method, dubbed IBD-PSC, to identify and filter malicious testing samples. Specifically, for each suspicious testing image, IBD-PSC measures its PSC value, defined as the average confidence on the label predicted by the original model, computed over a range of parameter-scaled versions of that model. The larger the PSC value, the more likely the suspicious sample is poisoned. In particular, we start from the last layer of the deployed model and scale up different numbers of BN layers to obtain the scaled models. This is motivated by the previous findings <cit.> that trigger patterns often manifest as complicated features learned by the deeper layers of models, especially for those attacks with elaborate designs <cit.>. To effectively determine the optimal number of layers for amplification, we design an adaptive algorithm that evaluates the scaling impact on the model's performance on benign samples.
In conclusion, our main contributions are four-fold. (1) We disclose an intriguing phenomenon, i.e., parameter-oriented scaling consistency (PSC), where the prediction confidences of poisoned samples are more consistent than those of benign ones when scaling up BN parameters. (2) We provide theoretical insights to elucidate the PSC phenomenon. (3) We design a simple yet effective method (i.e., IBD-PSC) to filter out poisoned testing images based on our findings. (4) We conduct extensive experiments on benchmark datasets, verifying the effectiveness of our method against 13 representative attacks and its resistance to potential adaptive attacks.
§ RELATED WORK
§.§ Backdoor Attacks
In general, existing backdoor attacks can be categorized into three types based on the adversaries' capabilities: (1) poison-only attacks, (2) training-controlled attacks, and (3) model-controlled attacks. These attacks could happen whenever the training stage is not fully controlled.
Poison-only Backdoor Attacks. In these attacks, the adversaries can only manipulate the training dataset. Gu <cit.> proposed the first poison-only attack (i.e., BadNets). BadNets poisoned a few training samples by patching a predefined trigger, e.g., a 3×3 white square, onto the bottom-right corner of these samples. It then altered the labels of the modified samples to an adversary-specified target label. Models trained on such poisoned training sets learn a relation between the trigger and the target label. Subsequent studies developed stealthier attack methods, including invisible and clean-label attacks. The former methods <cit.> typically used imperceptible triggers to bypass manual detection, while the latter ones <cit.> maintained the ground-truth labels of poisoned samples. Besides, there are also physical attacks <cit.> that adopt physical objects or spatial transformations as triggers, and adaptive attack methods <cit.> that are specifically designed to evade defenses.
Training-controlled Backdoor Attacks. In these attacks, adversaries can modify both the training dataset and the training process. One line of work aims to circumvent existing defenses and human detection. For instance, the adversaries may introduce a `noise mode' <cit.> or incorporate well-designed regularization terms into the training loss <cit.>. Another line of work focuses on augmenting the effectiveness of attacks. For instance, Wang <cit.> exploited learning algorithms beyond supervised learning to ensure the correct injection of subtle triggers. Besides, Li <cit.> and Zhang <cit.> introduced spatial transformations to poisoned samples to hide the triggers more robustly, extending the threat of backdoor attacks to real-world physical scenarios.
Model-controlled Backdoor Attacks. In model-controlled backdoor attacks, adversaries modify model architectures or parameters directly to inject backdoors. For example, Tang <cit.> implanted hidden backdoors by inserting an additional malicious module into the benign victim model. Qi <cit.> proposed to maliciously modify the parameters of a narrow subnet in the benign model instead of inserting an additional module. This approach was more stealthy and was highly effective in both digital and physical scenarios.
§.§ Backdoor Defenses
Based on the stage of the model lifecycle where the defense occurs, existing defenses can be divided into five main categories: (1) data purification <cit.>, (2) poison suppression <cit.>, (3) model-level backdoor detection <cit.>, (4) model-level backdoor mitigation <cit.>, and (5) input-level backdoor detection (IBD) <cit.>. Specifically, data purification intends to filter out all poisoned samples in a given (third-party) dataset; it usually needs to train a model before identifying the influence of each training sample. Poison suppression aims to hinder the model's learning of the poisoned samples by modifying its training process to prevent backdoor creation. Model-level detection usually trains a meta-classifier or approximates trigger generation to determine whether a suspicious model contains hidden backdoors. IBD detects and prevents malicious inputs and acts as a `firewall' for deployed models. In general, the first four strategies demand substantial computational resources since they typically necessitate model training or fine-tuning. However, these resources are unavailable for many researchers and developers, especially those using third-party models. This paper primarily focuses on IBD, which is more computation-friendly.
Previous IBD methods <cit.> are effective under certain (implicit) assumptions concerning the backdoor triggers. For example, STRIP <cit.> posited that trigger features play a dominant role, so that the predictions of poisoned samples are not affected even when benign features are overlaid. These assumptions can be easily circumvented by adaptive backdoor attacks <cit.>. To the best of our knowledge, the most advanced IBD method is SCALE-UP <cit.>. It amplifies all pixel values of an input sample with varying intensities and treats the input as poisoned if the predictions are consistent. However, SCALE-UP inherits some potential limitations due to pixel value constraints (bounded in [0, 255]). For example, these constraints may alter the predictions of poisoned samples, as amplification can push higher pixel values to the maximum value of 255, causing triggers (e.g., a white square) to disappear. How to design effective yet efficient IBD methods is still a critical open question.
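To make this saturation issue concrete, the following toy sketch (plain Python/NumPy; the bright background and the white-square trigger values are hypothetical choices of ours) shows how pixel-level amplification with clipping erases the contrast between a saturated trigger and its surroundings once the background saturates too:

```python
import numpy as np

img = np.full((8, 8), 0.6)      # a bright benign background (pixels in [0, 1])
img[:3, :3] = 1.0               # hypothetical white-square trigger in the corner

for factor in (1.0, 1.5, 2.0):
    scaled = np.clip(img * factor, 0.0, 1.0)
    distinct = np.unique(scaled).size   # 1 => trigger indistinguishable from background
    print(factor, distinct)
# 1.0 -> 2 distinct values, 1.5 -> 2, 2.0 -> 1: at x2 the whole image saturates
```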
§ PARAMETER-ORIENTED SCALING CONSISTENCY
As demonstrated in <cit.>, the predictions of poisoned samples are significantly more consistent than benign ones when amplifying all pixel values. Motivated by the fact that model predictions result from the co-effects of samples and model parameters, in this section, we explore whether a similar intriguing phenomenon still exists if we scale up model parameters instead of pixel values.
For simplicity, we mainly focus on the learnable parameters of BN layers since they are used to transform features and are widely exploited in almost all advanced DNNs. Before illustrating our key observation and its theoretical support, we first briefly review the mechanism of BN.
Batch Normalization. Let ϕ(·; γ, β) denote the BN function. For a given batch of feature maps a, the BN operation transforms it into a' = ϕ(a; γ, β), where ϕ(a; γ, β) = γ·(a - μ_a)/√(σ_a^2 + ϵ) + β, ϵ is a small constant, and μ_a and σ_a are the mean and standard deviation of a, respectively. γ and β are learnable parameters, designed to scale and shift the normalized features, and are learned during the training process.
We obtain a parameter-amplified model by performing scalar multiplication on the learned parameters of the BN layers with a fixed magnification factor (e.g., 1.5). The scaling process starts from the final BN layer and progressively moves backward to the preceding layers.
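A minimal sketch of this amplification step is given below (PyTorch; the helper name amplify_bn is ours, and it assumes the BN layers are standard nn.BatchNorm2d modules with affine parameters):

```python
import copy
import torch
import torch.nn as nn

def amplify_bn(model, k, omega=1.5):
    """Return a copy of `model` whose last k BN layers have gamma and beta scaled by omega."""
    scaled = copy.deepcopy(model)
    bn_layers = [m for m in scaled.modules() if isinstance(m, nn.BatchNorm2d)]
    with torch.no_grad():
        for bn in bn_layers[-k:]:        # start from the final BN layer and move backward
            bn.weight.mul_(omega)        # gamma
            bn.bias.mul_(omega)          # beta
    return scaled
```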
The PSC is defined as the average prediction confidence (probability) of the originally predicted label over a series of parameter-amplified variants of the deployed model. In particular, the originally predicted label is the label predicted by the unmodified model.
Settings. In this section, we adopt BadNets <cit.>, WaNet <cit.>, and BATT <cit.> on CIFAR-10 <cit.> as examples for our analyses. They are representative of (1) patch-based attacks, (2) sample-specific attacks, and (3) physical attacks, respectively. We exploit a standard ResNet-18 <cit.> as our model structure; it contains twenty BN layers. For all attacks, we set the poisoning rate to 10%.
Specifically, for each benign and poisoned image, we scale up the BN parameters (i.e., γ and β) by a factor of ω = 1.5, starting from the last layer and gradually moving forward to include more layers. Similar to <cit.>, we also calculate the average confidence, defined as the average probability of samples on the label predicted by the original unamplified model. In this paper, confidence refers to the predicted probability assigned to an input sample for a specified label. For instance, if an image of a cat is predicted as the cat label with a probability of 0.9, then the confidence of the input under the cat label is 0.9. More details are in our <ref>.
Results.
As shown in <ref>, under the benign model the average prediction confidences of the poisoned and benign samples decrease at almost the same rate as the number of amplified BN layers increases. In contrast, as shown in <ref>-<ref>, under all three attacked models the average prediction confidence of the poisoned samples remains nearly unchanged, whereas that of the benign samples decreases during the parameter-amplified process. In other words, benign and poisoned samples exhibit different BN-amplified prediction behaviors under attacked models. We call this intriguing phenomenon (of poisoned samples) parameter-oriented scaling consistency (PSC).
To verify that the PSC phenomenon is not accidental, we provide the following theoretical and empirical analyses.
Let F = FC ∘ f_L ∘ … ∘ f_1 be a backdoored DNN with L hidden layers, where FC denotes the fully-connected layers. Let x be an input, let z = f_l ∘ ⋯ ∘ f_1(x) be its batch-normalized feature after the l-th layer (1 ≤ l ≤ L), and let t represent the attacker-specified target class. Assume that z follows a mixture of Gaussian distributions. Then the following two statements hold: (1) amplifying the γ and β parameters of the l-th BN layer can make ‖ẑ‖_2 (where ẑ is the amplified version of z) arbitrarily large, and (2) there exists a positive constant M, independent of z, such that whenever ‖ẑ‖_2 > M, we have arg max FC ∘ f_L ∘ … ∘ f_{l+1}(ẑ) = t, even when arg max FC ∘ f_L ∘ … ∘ f_{l+1}(z) ≠ t.
<ref> indicates that sufficiently large feature norms induce decreasing confidence in the originally predicted class when the inputs are benign samples (under certain classical assumptions in learning theory). Poisoned samples, instead, keep their predictions.
Its proof is in <ref>.
In practice, we find that amplifying only a single BN layer may require an unreasonably large amplification factor and is unstable across different attacks, and even across BN layers. A detailed exploration of these observations is presented in <ref>. Fortunately, as shown in <ref>, amplifying multiple BN layers with a small factor (e.g., 1.5) can also significantly increase the feature norm in the last pre-FC layer and is more stable and robust across different settings. As such, we amplify multiple layers throughout this work.
§ THE PROPOSED METHOD
§.§ Preliminaries
Threat Model. This work focuses on input-level backdoor detection under the white-box setting with limited computational capacities. Defenders have full access to the suspicious model downloaded from a third party, but they lack the resources to remove potential backdoors (via backdoor mitigation). Similar to prior works <cit.>, we assume that defenders have access to a limited number of local benign samples.
Defenders' Goals. An ideal IBD solution aims to precisely identify and eliminate all poisoned input samples while preserving the inference efficiency of the deployed model. Consequently, defenders have two main goals: (1) Effectiveness: The defense should accurately identify whether a given suspicious image is malicious. (2) Efficiency: The defense must operate in real-time and integrate seamlessly as a plug-and-play module, ensuring minimal impact on the model's inference time.
The Overview of DNNs. Consider a DNN model ℱ: 𝒳 → [0,1]^C consisting of L hidden layers, where 𝒳 is the input space and C is the number of classes. We can specify it as
ℱ= FC∘ f_L ∘ f_L-1∘⋯∘ f_2 ∘ f_1,
where FC denotes the fully-connected layers and f_i represents the i-th hidden layer, consisting of a convolutional layer, a batch normalization layer, and an activation layer.
The Main Pipeline of Backdoor Attack. Let 𝒟 = {(x_i, y_i)}_{i=1}^N denote a training set consisting of N i.i.d. samples, where for each sample (x, y), x ∈ 𝒳 = [0,1]^{d_c × d_w × d_h} and y ∈ 𝒴 = {1, 2, ⋯, C}. An adversary creates a poisoned training set 𝒟̂ by injecting a pre-defined trigger into a subset 𝒟_s of benign samples. The trigger g is procured through a designated trigger-generating function, symbolized as g = τ(x), where τ: 𝒳 → 𝒳. The generated poisoned samples are represented as 𝒟_p = {(x + g, t) | g = τ(x), (x, y) ∈ 𝒟_s}, where t is the adversary-specified target label. The final poisoned training set 𝒟̂ is formed by combining 𝒟_p with the remaining benign samples 𝒟_b, i.e., 𝒟̂ = 𝒟_p ∪ 𝒟_b. The poisoning rate is ρ = |𝒟_p| / |𝒟̂|. A backdoor will be created in DNNs trained on the poisoned dataset 𝒟̂.
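For concreteness, a minimal sketch of this poisoning pipeline with a BadNets-style patch trigger is given below (Python; it assumes the samples are PyTorch CHW tensors in [0,1], and the patch size, trigger value, and target label are illustrative choices of ours):

```python
import random

def poison_dataset(dataset, target_label=0, rho=0.1, patch=3):
    """BadNets-style poisoning: stamp a white patch and relabel a rho-fraction of samples."""
    idx = set(random.sample(range(len(dataset)), int(rho * len(dataset))))
    poisoned = []
    for i, (x, y) in enumerate(dataset):       # x: CHW float tensor with values in [0, 1]
        if i in idx:
            x = x.clone()
            x[:, -patch:, -patch:] = 1.0       # trigger in the bottom-right corner
            y = target_label                   # relabel to the target class t
        poisoned.append((x, y))
    return poisoned
```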
§.§ The Overview of IBD-PSC
As demonstrated in <ref>, the prediction confidences of poisoned samples exhibit greater consistency than those of benign ones when scaling up the BN parameters of attacked DNNs. As such, we can detect whether a suspicious image is malicious by examining its parameter-oriented scaling consistency (PSC), a method we refer to as IBD-PSC.
In general, as shown in <ref>, our IBD-PSC has two main stages: (1) model amplification and (2) input detection. In the first stage, we amplify the BN parameters of different layers in the original model to obtain a series of parameter-amplified models. In the second stage, we calculate the PSC value of the suspicious image based on the obtained models and the original one. A larger PSC value indicates a higher likelihood that the suspicious image is malicious. The technical details are as follows.
§.§ Model Amplification
Overview. In this stage, we intend to obtain n different parameter-amplified versions of the original model by scaling up the parameters (i.e., γ and β) of its different BN layers. In particular, we amplify the later parts of the original model. This is motivated by the previous findings that trigger patterns often manifest as complicated features learned by the deeper (convolutional) layers of DNNs, especially for those attacks with elaborate designs <cit.>. This finding is also consistent with our observations in <ref>. Specifically, let k denote the number of final BN layers that we scale up in the first parameter-amplified model. For the i-th amplified model, we scale up the parameters in the last (k + i - 1) BN layers with the same scaling factor ω. Let ℱ denote the original model; its parameter-amplified version containing k amplified BN layers with scaling factor ω (i.e., ℱ̂_k^ω) can be defined as
ℱ̂_k^ω = FC ∘ f̂_L^ω ∘ f̂_{L-1}^ω ∘ ⋯ ∘ f̂_{L-k+1}^ω ∘ f_{L-k} ∘ ⋯ ∘ f_2 ∘ f_1,
where f̂_i^ω represents the i-th hidden layer whose BN layer undergoes the amplification process: it scales the original BN layer's parameters γ and β by a scaling factor ω, i.e., γ̂ = ω·γ and β̂ = ω·β. We also conduct ablation studies in <ref> to assess the impact of amplifying BN layers in a forward sequential manner and the impact of amplifying all BN layers, respectively.
We exploit n parameter-amplified models instead of a single one (with many amplified BN layers) to balance the performance on benign and poisoned samples. In practice, n is a defender-assigned hyper-parameter; more details and its impact are included in <ref>. Accordingly, the last remaining question for model amplification is selecting a suitable starting point k. The technical details are as follows.
Layer Selection. To optimally determine the number of amplified BN layers, we design an adaptive algorithm to dynamically select a suitable k. Motivated by our PSC phenomenon (see <ref>), we intend to find the point where the prediction accuracy on benign samples begins to decline significantly. Specifically, we incrementally increase k from 1 to L and monitor the error rate η. Let 𝒟_r denote the set of remaining benign samples. We can then compute the error rate η as the proportion of samples within 𝒟_r that are misclassified by the parameter-amplified model ℱ̂_k^ω, i.e.,
η = (1/|𝒟_r|) ∑_{(x,y)∈𝒟_r} 𝕀(arg max(ℱ̂_k^ω(x)) ≠ y),
where 𝕀 denotes the indicator function. Once η exceeds a predefined threshold ξ (e.g., 60%), the BN layers from the (L-k+1)-th to the L-th layer are determined as the target layers for amplification. The details of the adaptive algorithm are outlined in <ref>.
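A sketch of this adaptive selection, assuming the hypothetical amplify_bn helper above and a small batch of benign images with labels, could look as follows:

```python
import torch

@torch.no_grad()
def select_k(model, benign_x, benign_y, omega=1.5, xi=0.6, num_bn=20):
    """Return the smallest k whose amplified model misclassifies more than xi of benign data."""
    for k in range(1, num_bn + 1):
        scaled = amplify_bn(model, k, omega).eval()
        eta = (scaled(benign_x).argmax(dim=1) != benign_y).float().mean().item()
        if eta > xi:
            return k
    return num_bn
```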
§.§ Input Detection
Once we obtain the n parameter-amplified versions {ℱ̂_{k+i-1}^ω}_{i=1}^n of the original model ℱ with the starting amplified point k, for each suspicious image x, our IBD-PSC can examine it by calculating its PSC value based on their predictions. Specifically, we define the PSC value as the average confidence generated over the parameter-amplified models on the label predicted by the original model, i.e.,
PSC(x) = (1/n) ∑_{i=k}^{k+n-1} ℱ̂_i^ω(x)_{y'},
where y' = arg max(ℱ(x)).
After obtaining the PSC value, IBD-PSC assesses whether the input sample is malicious by comparing it to a predefined threshold T: if PSC(x) > T, the input is marked as a poisoned image.
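Putting the pieces together, a minimal sketch of the scoring and detection rule, again relying on the hypothetical amplify_bn helper above and assuming the network outputs logits, is:

```python
import torch

@torch.no_grad()
def psc_score(model, x, k, n=5, omega=1.5):
    """Average confidence of n amplified models on the label predicted by the original model."""
    y_prime = model.eval()(x).argmax(dim=1)           # originally predicted labels
    confs = []
    for i in range(k, k + n):
        probs = torch.softmax(amplify_bn(model, i, omega).eval()(x), dim=1)
        confs.append(probs.gather(1, y_prime.unsqueeze(1)))
    return torch.cat(confs, dim=1).mean(dim=1)        # one PSC value per input

def is_poisoned(model, x, k, T=0.9):
    return psc_score(model, x, k) > T                 # True => flagged as poisoned
```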
§ EXPERIMENTS
§.§ Experiment Settings
Datasets and Models. We follow the settings of existing backdoor defenses and conduct experiments on CIFAR-10 <cit.>, GTSRB <cit.>, and a subset of the ImageNet dataset with 200 classes (dubbed `SubImageNet-200') <cit.>, using the ResNet-18 architecture <cit.>.
More detailed settings are presented in <ref>.
Attack Baselines. We evaluate the effectiveness of IBD-PSC against thirteen representative backdoor attacks, including 1) BadNets <cit.>, 2) Blend <cit.>, 3) LC <cit.>, 4) ISSBA <cit.>, 5) TaCT <cit.>, 6) NARCISSUS <cit.>, 7) Adap-Patch <cit.>, 8) BATT <cit.>, 9) PhysicalBA <cit.>, 10) IAD <cit.>, 11) WaNet <cit.>, 12) BPP <cit.>, and 13) SRA <cit.>. The first eight attacks are representative of poison-only attacks, while the last one is a model-controlled attack. The remaining four are training-controlled attacks. More details about the attack baselines are in the <ref>.
Defense Settings.
We compare our defense with classical and advanced input-level backdoor defenses, including STRIP <cit.>, TeCo <cit.>, and SCALE-UP <cit.>. We implement these defenses using their official codes with default settings. Our IBD-PSC assumes defenders have access to only 100 benign samples, with default settings of ω = 1.5, n = 5, ξ = 60%, and T = 0.9. More details can be found in <ref>.
Evaluation Metrics.
We employ two common metrics in our evaluation: 1) the area under the receiver operating characteristic curve (AUROC), which measures the overall performance of detection methods across different thresholds, and 2) the F1 score, which measures both detection precision and recall.
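Given PSC scores and ground-truth poisoned/benign flags, both metrics can be computed with scikit-learn; in the toy sketch below, the scores and labels are made up purely for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# toy inputs: 1 marks a poisoned sample, scores are PSC values
labels = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.35, 0.62, 0.97, 0.99, 0.41, 0.93])

print(roc_auc_score(labels, scores))          # threshold-free AUROC
print(f1_score(labels, scores > 0.9))         # F1 at the detection threshold T = 0.9
```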
§.§ Main Results
As shown in <ref>-<ref>, IBD-PSC consistently achieves promising performance across various datasets. For instance, it achieves AUROC and F1 scores approaching 1.0, indicating its effectiveness in various attack scenarios. The results also demonstrate that IBD-PSC achieves a substantial improvement in detection performance compared to the baseline defenses. We also provide the ROC curves of defenses against four representative attacks in <ref>.
In contrast, all baseline defenses fail in some cases (marked in red), especially under attacks involving subtle alterations across multiple pixels (e.g., Blend, WaNet) or physical attacks. This failure is primarily caused by their implicit assumptions about backdoors, such as sample-agnostic triggers and robustness against image preprocessing. We also provide results with the PreActResNet18 <cit.> and MobileNet <cit.> architectures in <ref>. Additionally, for more experimental results under other attack baselines, please refer to our Appendix <ref>.
We also calculated the inference time of all methods under identical and ideal conditions to evaluate efficiency. For example, we assume that defenders load all required models and images simultaneously (with higher memory requirements compared to vanilla model inference). This comparison is relatively fair and reasonable since the defenses differ greatly in their mechanisms and requirements. Detailed settings can be found in <ref>. As shown in <ref>, the efficiency of IBD-PSC is on par with or even better than all baseline defenses. Even compared to no defense, the extra time is negligible, although IBD-PSC may increase storage and computational consumption.
§.§ Ablation Study
Impact of Scaling Factor ω. IBD-PSC generates scaled models by amplifying the learnable parameter values of the selected BN layers with a fixed scalar ω. We hereby explore its effect on our method. Specifically, we vary ω from 1 to 2 and calculate the AUROC and F1 scores of IBD-PSC against three representative backdoor attacks (i.e., BadNets, WaNet, and BATT) on CIFAR-10. As shown in <ref>, in the initial phase, increasing ω significantly improves both the AUROC and F1 scores against different backdoor attacks. Furthermore, both scores converge to and stabilize at approximately one for ω values of 1.5 or higher, i.e., the scaling factor has a relatively minor influence once it is sufficiently large.
Impact on Benign Samples within the Target Class. We evaluate the False Positive Rate (FPR) to assess the impact of our defense on benign samples within the target class. The results in Table <ref> demonstrate that although our defense (labeled `Ours-C') incurs some false positives, it performs significantly better than both the variant with label consistency (dubbed `Ours-L') and SCALE-UP. This is primarily because the amplification process may also reduce the prediction confidence of benign samples from the target class, potentially due to `short-cut' classes that are easier to predict. For this reason, we use consistency of confidence rather than consistency of the predicted label (as used in SCALE-UP) for detection. We plan to explore ways to further mitigate this issue in future work.
Additionally, we conduct comprehensive ablation studies, as detailed in <ref>, to evaluate the robustness of our defense. The results in <ref> indicate that our defense is not sensitive to hyperparameter selection, consistently achieving stable and promising performance (greater than 0.9) with hyperparameters close to our default settings (i.e., n=5, T=0.9, ξ=0.6, and ω=1.5). The results presented in <ref> show that our defense proves effective across various target label selections, with both AUROC and F1 scores consistently nearing 1. Moreover, as shown in <ref>, our defense remains effective even with a minimal dataset of benign samples (as few as 25 samples). We also validate that our defense maintains minimal impact on the regular functionality of benign models in <ref>. Finally, we highlight the necessity of parameter amplification in <ref> by demonstrating that reducing the parameters of the BN layers is insufficient for detecting poisoned samples.
§.§ Resistance to Potential Adaptive Attacks
We initially assess the performance of IBD-PSC against attacks with low poisoning rates. This is because a small poisoning rate ρ can prevent models from over-fitting triggers, thus weakening the association between triggers and target labels, as demonstrated in previous studies <cit.>. Specifically, we conduct attacks (BadNets, WaNet, and BATT) on the CIFAR-10 dataset with ρ ranging from 0.02 to 0.1, ensuring the attack success rates exceed 80%. The results in <ref> consistently demonstrate the effectiveness of IBD-PSC, with AUROC and F1 scores consistently above 0.98 and 0.95, respectively. We also emphasize that our defense consistently outperforms baseline defenses across a range of poisoning rates and remains highly effective against attacks with small poisoning rates on the SubImageNet-200 dataset. For detailed discussions, please refer to <ref>.
We further evaluate the robustness of IBD-PSC against potential adaptive attacks in the worst-case scenario where adversaries possess complete knowledge of our defense. This knowledge enables attackers to tailor adaptive attacks specifically designed to counteract the effects of parameter amplification. We explore one form of adaptive attack that maintains accurate predictions on benign samples even when the model parameters are amplified. Additionally, we design another form of adaptive attack that reduces the confidence with which parameter-amplified models classify poisoned samples into the target class, which is presented in <ref>. Typically, a vanilla backdoored model functions normally on benign samples but yields adversary-specified predictions when exposed to poisoned samples. The loss function for training such backdoored models is defined as follows:
ℒ_bd = ∑_i=1^|𝒟_b| ℒ(ℱ(x_i), y_i) + ∑_j=1^|𝒟_p| ℒ(ℱ(x_j), y_t),
where ℒ(·) denotes the cross-entropy loss function.
We introduce an adaptive loss term specifically designed to ensure that benign samples are correctly predicted under model parameter amplification. This loss ℒ_ada is defined as:
ℒ_ada = ∑_i=1^|𝒟_b| ℒ(ℱ̂_k^ω(x_i; θ̂), y_i).
Subsequently, we integrate this adaptive loss ℒ_ada with the vanilla loss ℒ_bd to formulate the overall loss function as ℒ = αℒ_bd + (1-α) ℒ_ada, where α is a weighting factor. We then optimize the original model's parameters by minimizing ℒ during the training phase.
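For concreteness, below is a sketch of one training step under this adaptive objective. Here `amplify(model, omega, k)` stands for a hypothetical, gradient-preserving helper that returns the parameter-amplified copy ℱ̂_k^ω; the batch shapes and defaults are our assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_step(model, amplify, x_benign, y_benign, x_poison,
                  target=0, alpha=0.5, omega=1.5, k=10):
    """One step of the adaptive objective L = alpha*L_bd + (1-alpha)*L_ada.

    `amplify(model, omega, k)` is a hypothetical helper returning a
    differentiable copy of `model` with its last k BN layers scaled by omega."""
    y_target = torch.full((len(x_poison),), target,
                          dtype=torch.long, device=x_poison.device)
    # vanilla backdoor loss: benign -> true labels, poisoned -> target class
    loss_bd = F.cross_entropy(model(x_benign), y_benign) \
            + F.cross_entropy(model(x_poison), y_target)
    # adaptive term: keep benign samples correct under parameter amplification
    loss_ada = F.cross_entropy(amplify(model, omega, k)(x_benign), y_benign)
    return alpha * loss_bd + (1 - alpha) * loss_ada
```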
Similar to previous experiments, we also employ the three representative backdoor attacks to develop adaptive attacks on the CIFAR-10 dataset. Further details can be found in <ref>. <ref> demonstrates the sustained robustness of our IBD-PSC across all cases. The effectiveness primarily originates from our adaptive layer selection strategy, which dynamically identifies BN layers for amplification, regardless of whether it is a vanilla or an adaptive backdoored model. The layers selected during the inference stage typically differ from those used in the training phase, enabling the IBD-PSC to effectively detect poisoned samples.
§.§ A Closer Look at the Effectiveness of Our Method
To gain deeper insights, we delve into the mechanisms of both SCALE-UP and our IBD-PSC. We utilize t-SNE <cit.> for visualizing the features of benign and poisoned samples in the last hidden layer. We adopt the representative BadNets attack method on the CIFAR-10 dataset as an example for our discussions. More results about other attack methods can be found in <ref>. The results in <ref> demonstrate that both SCALE-UP and our IBD-PSC induce more significant shifts in the feature space for benign samples compared to the poisoned samples. These larger shifts result in changes in the predictions for benign samples. These results provide clear evidence of the effectiveness of the two defense methods. Furthermore, in contrast to SCALE-UP, our IBD-PSC method induces more significant shifts in benign samples. This disparity in shift magnitude may stem from the constrained pixel value range of [0, 255], potentially mitigating the impact of amplification. However, the values of model parameters do not have such bounded constraints. Consequently, the larger shifts contribute to a more distinct separation between benign and poisoned samples, significantly augmenting the effectiveness of IBD-PSC.
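A sketch of the visualization pipeline used here is given below, assuming the last-hidden-layer activations of benign and poisoned samples have already been extracted into arrays (the `feats_*` names are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_shift(feats_benign, feats_poison, title):
    """Project last-hidden-layer features to 2-D and plot benign vs. poisoned."""
    feats = np.concatenate([feats_benign, feats_poison])
    emb = TSNE(n_components=2, perplexity=30).fit_transform(feats)
    n = len(feats_benign)
    plt.scatter(emb[:n, 0], emb[:n, 1], s=5, label="benign")
    plt.scatter(emb[n:, 0], emb[n:, 1], s=5, label="poisoned")
    plt.title(title); plt.legend(); plt.show()
```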
§ CONCLUSION
In this paper, we proposed a simple yet effective method (dubbed IBD-PSC) for determining whether a suspicious image is poisoned. IBD-PSC was inspired by our discovery of an intriguing phenomenon, named parameter-oriented scaled consistency (PSC), which manifests as a significant uniformity of prediction confidences for poisoned samples, in contrast to benign ones, when the parameters of selected BN layers are amplified. We provided theoretical and empirical foundations to support this phenomenon. To enhance detection performance, we also designed an adaptive algorithm to dynamically select the number of BN layers for amplification. We evaluated thirteen backdoor attack methods on benchmark datasets to comprehensively verify the effectiveness of IBD-PSC, and demonstrated that it is highly efficient and resistant to potential adaptive attacks.
§ STATEMENT OF THE BROADER IMPACT
Backdoor attacks pose severe threats to DNNs since developers often rely on untrustworthy external training resources (e.g., datasets and model backbones). This paper proposes a simple yet effective input-level backdoor detection method to identify and filter malicious testing samples. It generally raises no ethical issues since it does not expose new vulnerabilities within DNNs and is purely defensive. However, we note that our work can only filter out poisoned input images; it cannot repair potential backdoors in the deployed model, nor can it recover trigger patterns or the ground-truth class of poisoned samples. People should not be too optimistic about eliminating backdoor threats. Moreover, adversaries may design more advanced backdoor attacks against our defense, although we have demonstrated that this is challenging. People should use only trusted training resources and models to eliminate and prevent backdoor attacks at the source.
[Bai et al.(2021)Bai, Wu, Zhang, Li, Li, and Xia]bai2021targeted
Bai, J., Wu, B., Zhang, Y., Li, Y., Li, Z., and Xia, S.-T.
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits.
In ICLR, 2021.
[Chen et al.(2018)Chen, Carvalho, Baracaldo, Ludwig, Edwards, Lee, Molloy, and Srivastava]chen2018detecting
Chen, B., Carvalho, W., Baracaldo, N., Ludwig, H., Edwards, B., Lee, T., Molloy, I., and Srivastava, B.
Detecting backdoor attacks on deep neural networks by activation clustering.
In CEUR Workshop, 2018.
[Chen et al.(2022)Chen, Wu, and Wang]chen2022effective
Chen, W., Wu, B., and Wang, H.
Effective backdoor defense by exploiting sensitivity of poisoned samples.
In NeurIPS, 2022.
[Chen et al.(2017)Chen, Liu, Li, Lu, and Song]chen2017targeted
Chen, X., Liu, C., Li, B., Lu, K., and Song, D.
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning.
arXiv, 2017.
[Chou et al.(2020)Chou, Tramèr, Pellegrino, and Boneh]chou2018sentinet
Chou, E., Tramèr, F., Pellegrino, G., and Boneh, D.
SentiNet: Detecting physical attacks against deep learning systems.
In IEEE S&P Workshop, 2020.
[Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei]deng2009imagenet
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L.
ImageNet: A large-scale hierarchical image database.
In CVPR, 2009.
[Doan et al.(2021)Doan, Lao, and Li]doan2021backdoor
Doan, K., Lao, Y., and Li, P.
Backdoor Attack with Imperceptible Input and Latent Modification.
In NeurIPS, 2021.
[Gao et al.(2021)Gao, Kim, Doan, Zhang, Zhang, Nepal, Ranasinghe, and Kim]gao2021design
Gao, Y., Kim, Y., Doan, B. G., Zhang, Z., Zhang, G., Nepal, S., Ranasinghe, D. C., and Kim, H.
Design and Evaluation of a Multi-Domain Trojan Detection Method on Deep Neural Networks.
TDSC, 2021.
[Gong et al.(2023)Gong, Wang, Chen, Xue, Wang, and Shen]gong2023kaleidoscope
Gong, X., Wang, Z., Chen, Y., Xue, M., Wang, Q., and Shen, C.
Kaleidoscope: Physical Backdoor Attacks against Deep Neural Networks with RGB Filters.
TDSC, 2023.
[Gu et al.(2017)Gu, Dolan-Gavitt, and Garg]gu2017badnets
Gu, T., Dolan-Gavitt, B., and Garg, S.
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain.
IEEE Access, 2017.
[Guo et al.(2024)Guo, Lu, Bao, Pang, Yan, Du, and Li]guo2024gaussian
Guo, H., Lu, C., Bao, F., Pang, T., Yan, S., Du, C., and Li, C.
Gaussian mixture solvers for diffusion models.
In NeurIPS, 2024.
[Guo et al.(2023a)Guo, Li, Wang, and Liu]guo2023policycleanse
Guo, J., Li, A., Wang, L., and Liu, C.
PolicyCleanse: Backdoor Detection and Mitigation in Reinforcement Learning.
In ICCV, 2023a.
[Guo et al.(2023b)Guo, Li, Chen, Guo, Sun, and Liu]guo2023scale
Guo, J., Li, Y., Chen, X., Guo, H., Sun, L., and Liu, C.
SCALE-UP: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency.
In ICLR, 2023b.
[Hayase et al.(2021)Hayase, Kong, Somani, and Oh]hayase2021spectre
Hayase, J., Kong, W., Somani, R., and Oh, S.
Spectre: Defending against backdoor attacks using robust statistics.
In ICML, 2021.
[He et al.(2016a)He, Zhang, Ren, and Sun]he2016deep
He, K., Zhang, X., Ren, S., and Sun, J.
Deep Residual Learning for Image Recognition.
In CVPR, 2016a.
[He et al.(2016b)He, Zhang, Ren, and Sun]he2016identity
He, K., Zhang, X., Ren, S., and Sun, J.
Identity Mappings in Deep Residual Networks.
In ICCV, 2016b.
[Huang et al.(2023)Huang, Ma, Erfani, and Bailey]huang2023distilling
Huang, H., Ma, X., Erfani, S., and Bailey, J.
Distilling cognitive backdoor patterns within an image.
In ICLR, 2023.
[Huang et al.(2022)Huang, Li, Wu, Qin, and Ren]huang2022backdoor
Huang, K., Li, Y., Wu, B., Qin, Z., and Ren, K.
Backdoor Defense via Decoupling the Training Process.
In ICLR, 2022.
[Jebreel et al.(2023)Jebreel, Domingo-Ferrer, and Li]jebreel2023defending
Jebreel, N. M., Domingo-Ferrer, J., and Li, Y.
Defending Against Backdoor Attacks by Layer-wise Feature Analysis.
In SIGKDD, 2023.
[Krizhevsky et al.(2009)Krizhevsky, Hinton, et al.]krizhevsky2009learning
Krizhevsky, A., Hinton, G., et al.
Learning Multiple Layers of Features from Tiny Images.
Technical report, 2009.
[Li et al.(2020)Li, Xue, Zhao, Zhu, and Zhang]li2020invisible
Li, S., Xue, M., Zhao, B. Z. H., Zhu, H., and Zhang, X.
Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization.
TDSC, 2020.
[Li et al.(2021a)Li, Li, Wu, Li, He, and Lyu]li2021invisible
Li, Y., Li, Y., Wu, B., Li, L., He, R., and Lyu, S.
Invisible Backdoor Attack with Sample-Specific Triggers.
In ICCV, 2021a.
[Li et al.(2021b)Li, Lyu, Koren, Lyu, Li, and Ma]li2021anti
Li, Y., Lyu, X., Koren, N., Lyu, L., Li, B., and Ma, X.
Anti-Backdoor Learning: Training Clean Models on Poisoned Data.
In NeurIPS, 2021b.
[Li et al.(2021c)Li, Zhai, Jiang, Li, and Xia]li2021backdoor
Li, Y., Zhai, T., Jiang, Y., Li, Z., and Xia, S.-T.
Backdoor Attack in the Physical World.
In ICLR Workshop, 2021c.
[Li et al.(2022)Li, Jiang, Li, and Xia]li2022backdoor
Li, Y., Jiang, Y., Li, Z., and Xia, S.-T.
Backdoor Learning: A Survey.
TNNLS, 2022.
[Li et al.(2023)Li, Mengxi, Yang, Yong, and Shu-Tao]backdoorbox
Li, Y., Mengxi, Y., Yang, B., Yong, J., and Shu-Tao, X.
BackdoorBox: A Python Toolbox for Backdoor Learning.
In ICLR Workshop, 2023.
[Liu et al.(2018)Liu, Dolan-Gavitt, and Garg]liu2018fine
Liu, K., Dolan-Gavitt, B., and Garg, S.
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks.
In RAID, 2018.
[Liu et al.(2023)Liu, Li, Wang, Hu, Ye, Jin, Wu, and Xiao]liu2023detecting
Liu, X., Li, M., Wang, H., Hu, S., Ye, D., Jin, H., Wu, L., and Xiao, C.
Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency.
In CVPR, 2023.
[Loureiro et al.(2021)Loureiro, Sicuro, Gerbelot, Pacco, Krzakala, and Zdeborová]loureiro2021learning
Loureiro, B., Sicuro, G., Gerbelot, C., Pacco, A., Krzakala, F., and Zdeborová, L.
Learning gaussian mixtures with generalized linear models: Precise asymptotics in high-dimensions.
In NeurIPS, 2021.
[Ma et al.(2022)Ma, Wang, Sun, Xue, Wen, and Xiang]ma2022beatrix
Ma, W., Wang, D., Sun, R., Xue, M., Wen, S., and Xiang, Y.
The" beatrix”resurrections: Robust backdoor detection via gram matrices.
In NDSS, 2022.
[Nguyen & Tran(2020)Nguyen and Tran]nguyen2020input
Nguyen, T. A. and Tran, A.
Input-Aware Dynamic Backdoor Attack.
In NeurIPS, 2020.
[Nguyen & Tran(2021)Nguyen and Tran]nguyen2021wanet
Nguyen, T. A. and Tran, A. T.
WaNet – Imperceptible Warping-based Backdoor Attack.
In ICLR, 2021.
[Pal et al.(2024)Pal, Yao, Wang, Shen, and Liu]pal2024backdoor
Pal, S., Yao, Y., Wang, R., Shen, B., and Liu, S.
Backdoor secrets unveiled: Identifying backdoor data with optimized scaled prediction consistency.
In ICLR, 2024.
[Pan et al.(2023)Pan, Zeng, Lyu, Lin, and Jia]pan2023asset
Pan, M., Zeng, Y., Lyu, L., Lin, X., and Jia, R.
ASSET: Robust backdoor data detection across a multiplicity of deep learning paradigms.
In USENIX Security, 2023.
[Papyan et al.(2020)Papyan, Han, and Donoho]papyan2020prevalence
Papyan, V., Han, X., and Donoho, D. L.
Prevalence of neural collapse during the terminal phase of deep learning training.
PNAS, 2020.
[Peri et al.(2020)Peri, Gupta, Huang, Fowl, Zhu, Feizi, Goldstein, and Dickerson]peri2020deep
Peri, N., Gupta, N., Huang, W. R., Fowl, L., Zhu, C., Feizi, S., Goldstein, T., and Dickerson, J. P.
Deep k-nn defense against clean-label data poisoning attacks.
In ECCV, 2020.
[Qi et al.(2022)Qi, Xie, Pan, Zhu, Yang, and Bu]qi2022towards
Qi, X., Xie, T., Pan, R., Zhu, J., Yang, Y., and Bu, K.
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks.
In CVPR, 2022.
[Qi et al.(2023)Qi, Xie, Li, Mahloujifar, and Mittal]qi2023revisiting
Qi, X., Xie, T., Li, Y., Mahloujifar, S., and Mittal, P.
Revisiting the Assumption of Latent Separability for Backdoor Defenses.
In ICLR, 2023.
[Stallkamp et al.(2012)Stallkamp, Schlipsing, Salmen, and Igel]stallkamp2012man
Stallkamp, J., Schlipsing, M., Salmen, J., and Igel, C.
Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition.
Neural Networks, 2012.
[Tang et al.(2021)Tang, Wang, Tang, and Zhang]tang2021demon
Tang, D., Wang, X., Tang, H., and Zhang, K.
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection.
In USENIX Security, 2021.
[Tang et al.(2020)Tang, Du, Liu, Yang, and Hu]tang2020embarrassingly
Tang, R., Du, M., Liu, N., Yang, F., and Hu, X.
An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks.
In SIGKDD, 2020.
[Tang et al.(2023)Tang, Yuan, Li, Liu, Chen, and Hu]tang2023setting
Tang, R., Yuan, J., Li, Y., Liu, Z., Chen, R., and Hu, X.
Setting the Trap: Capturing and Defeating Backdoor Threats in PLMs through Honeypots.
In NeurIPS, 2023.
[Tishby & Zaslavsky(2015)Tishby and Zaslavsky]tishby2015deep
Tishby, N. and Zaslavsky, N.
Deep Learning and the Information Bottleneck Principle.
In ITW, 2015.
[Tran et al.(2018)Tran, Li, and Madry]tran2018spectral
Tran, B., Li, J., and Madry, A.
Spectral Signatures in Backdoor Attacks.
In NeurIPS, 2018.
[Turner et al.(2019)Turner, Tsipras, and Madry]turner2019label
Turner, A., Tsipras, D., and Madry, A.
Label-Consistent Backdoor Attacks.
arXiv, 2019.
[Van der Maaten & Hinton(2008)Van der Maaten and Hinton]van2008visualizing
Van der Maaten, L. and Hinton, G.
Visualizing data using t-SNE.
JMLR, 2008.
[Wang et al.(2019)Wang, Yao, Shan, Li, Viswanath, Zheng, and Zhao]wang2019neural
Wang, B., Yao, Y., Shan, S., Li, H., Viswanath, B., Zheng, H., and Zhao, B. Y.
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks.
In IEEE S&P, 2019.
[Wang et al.(2024)Wang, Xiang, Miller, and Kesidis]wang2024mm
Wang, H., Xiang, Z., Miller, D. J., and Kesidis, G.
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic.
In IEEE S&P, 2024.
[Wang et al.(2022a)Wang, Ding, Zhai, and Ma]wang2022training
Wang, Z., Ding, H., Zhai, J., and Ma, S.
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training.
In NeurIPS, 2022a.
[Wang et al.(2022b)Wang, Zhai, and Ma]Wangbpp
Wang, Z., Zhai, J., and Ma, S.
BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning.
In CVPR, 2022b.
[Wenger et al.(2021)Wenger, Passananti, Bhagoji, Yao, Zheng, and Zhao]wenger2021backdoor
Wenger, E., Passananti, J., Bhagoji, A. N., Yao, Y., Zheng, H., and Zhao, B. Y.
Backdoor Attacks Against Deep Learning Systems in the Physical World.
In CVPR, 2021.
[Xia et al.(2022)Xia, Niu, Li, and Li]xia2022enhancing
Xia, P., Niu, H., Li, Z., and Li, B.
Enhancing backdoor attacks with multi-level mmd regularization.
TDSC, 2022.
[Xiang et al.(2023)Xiang, Xiong, and Li]xiang2023umd
Xiang, Z., Xiong, Z., and Li, B.
Umd: Unsupervised model detection for x2x backdoor attacks.
In ICML, 2023.
[Xu et al.(2023)Xu, Li, Jiang, and Xia]xu2023batt
Xu, T., Li, Y., Jiang, Y., and Xia, S.-T.
Batt: Backdoor attack with transformation-based triggers.
In ICASSP, 2023.
[Zeng et al.(2021)Zeng, Park, Mao, and Jia]zeng2021rethinking
Zeng, Y., Park, W., Mao, Z. M., and Jia, R.
Rethinking the backdoor attacks' triggers: A frequency perspective.
In ICCV, 2021.
[Zeng et al.(2022)Zeng, Chen, Park, Mao, Jin, and Jia]zeng2022adversarial
Zeng, Y., Chen, S., Park, W., Mao, Z. M., Jin, M., and Jia, R.
Adversarial Unlearning of Backdoors via Implicit Hypergradient.
In ICLR, 2022.
[Zeng et al.(2023)Zeng, Pan, Just, Lyu, Qiu, and Jia]zeng2023narcissus
Zeng, Y., Pan, M., Just, H. A., Lyu, L., Qiu, M., and Jia, R.
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information.
In CCS, 2023.
[Zhang et al.(2022)Zhang, Dongdong, Huang, Liao, Zhang, Feng, Hua, and Yu]zhang2022poison
Zhang, J., Dongdong, C., Huang, Q., Liao, J., Zhang, W., Feng, H., Hua, G., and Yu, N.
Poison ink: Robust and invisible backdoor attack.
TIP, 2022.
[Zoran & Weiss(2012)Zoran and Weiss]zoran2012natural
Zoran, D. and Weiss, Y.
Natural images, gaussian mixtures and dead leaves.
In NeurIPS, 2012.
§ APPENDIX
§ THE OMITTED PROOF OF THEOREM 3.1
Theorem 3.1. Let ℱ = FC∘f_L∘…∘f_1 be a backdoored DNN with L hidden layers, where FC denotes the fully-connected layers. Let x be an input, v = f_l∘⋯∘f_1(x) be its batch-normalized feature after the l-th layer (1≤ l≤ L), and t represent the attacker-specified target class. Assume that v follows a mixture of Gaussian distributions. Then the following two statements hold: (1) amplifying the γ and β parameters of the l-th BN layer can make ‖v'‖_2 (where v' is the amplified version of v) arbitrarily large, and (2) there exists a positive constant M, independent of v, such that whenever ‖v'‖_2 > M, arg max FC∘f_L∘…∘f_l+1(v') = t, even when arg max FC∘f_L∘…∘f_l+1(v) ≠ t.
Proof of Theorem 3.1:
For simplicity, let 𝒢 denote the benign model and ℱ denote the backdoored model. We look at the l-th (pre-batch-norm) feature layer such that
u = g_l(x), v = ϕ(u; γ_Benign, β_Benign), 𝒢(x) = FC∘f_L∘⋯∘f_l+1(v),
ũ = f_l(x), ṽ = ϕ(ũ; γ, β), ℱ(x) = FC∘f_L∘⋯∘f_l+1(ṽ).
We assume all features follow a mixture of Gaussians. This assumption is commonly used in deep learning theory papers <cit.> as it simplifies the analysis and provides a tractable framework for modeling complex data distributions. So u and ũ follow:
u ∼ 1/C ∑_c=1^C Z_c exp(−‖u−μ_c‖_2^2 / 2σ_c^2),
u^c | c ∼ Z_c exp(−‖u^c−μ_c‖_2^2 / 2σ_c^2), c∈[C],
v^c | c ∼ B_c exp(−‖v^c−(β_Benign−γ_Benign(μ_𝒟−μ_c)/√(σ_𝒟^2+ϵ))‖_2^2 / (2γ_Benign^2σ_c^2/σ_𝒟^2)) = 𝒩(v^c; β_Benign−γ_Benign(μ_𝒟−μ_c)/√(σ_𝒟^2+ϵ), γ_Benign σ_c/σ_𝒟), c∈[C],
and
ũ ∼ 1/C ∑_c=1^C Z_c exp(−‖ũ−μ̃_c‖_2^2 / 2σ̃_c^2),
ũ^c | c ∼ Z_c exp(−‖ũ^c−μ̃_c‖_2^2 / 2σ̃_c^2), c∈[C],
ṽ^c | c ∼ B_c exp(−‖ṽ^c−(β−γ(μ̃_𝒟−μ̃_c)/√(σ̃_𝒟^2+ϵ))‖_2^2 / (2γ^2σ̃_c^2/σ̃_𝒟^2)) = 𝒩(ṽ^c; β−γ(μ̃_𝒟−μ̃_c)/√(σ̃_𝒟^2+ϵ), γσ̃_c/σ̃_𝒟), c∈[C],
where
μ_c = 𝔼_u∼p(u|𝒢(x)=c)[u], μ̃_c = 𝔼_ũ∼p(ũ|ℱ(x)=c)[ũ],
σ_c^2 = Var_u∼p(u|𝒢(x)=c)(u), σ̃_c^2 = Var_ũ∼p(ũ|ℱ(x)=c)(ũ), c∈[C],
μ_𝒟 = 𝔼_u∼p(u)[u], μ̃_𝒟 = 𝔼_ũ∼p(ũ)[ũ], σ_𝒟^2 = Var_u∼p(u)(u), σ̃_𝒟^2 = Var_ũ∼p(ũ)(ũ).
For a sufficiently trained network, it is well known that, with neural collapse <cit.>, the μ_c and σ_c, c∈[C], form a simplex and are uniformly distributed. Below we identify a key characteristic of the backdoored model.
§.§ Characterize the Backdoored Model
We assume that the trigger 𝐭, which fools the backdoored model into recognizing x+𝐭 as the attacked target class t instead of its true class c, is very small, i.e., ‖𝐭‖_2 ≪ 1. In this paper, we assume that all images have been normalized, i.e., x ∈ [0, 1]. Accordingly, ‖𝐭‖_2 ≪ 1 holds in practice since triggers are either very sparse (e.g., BadNets) or have a small overall magnitude (e.g., WaNet).
So the feature distribution of the poisoned input x' = x + 𝐭 may be approximated by
ũ(x') ≈ f_l(x+𝐭) = f_l(x) + ∇f_l(x)^⊤𝐭 = ũ + ∇ũ^⊤𝐭,
ṽ(x') = γ(ũ(x')−μ̃_𝒟)/√(σ̃_𝒟^2+ϵ) + β ≈ ṽ + γ∇ũ^⊤𝐭/√(σ̃_𝒟^2+ϵ) ≡ ṽ + Δ^⊤𝐭.
As ṽ(x') should be recognized as category t, the conditional probability of ṽ(x') being sampled from ṽ|c should be smaller than that from ṽ|t for all c∈[C], c≠t, and we thus get, ∀x, 𝐭:
B_t exp(−‖ṽ(x')−(β−γ(μ̃_𝒟−μ̃_t)/√(σ̃_𝒟^2+ϵ))‖_2^2 / (2γ^2σ̃_t^2/σ̃_𝒟^2)) > B_c exp(−‖ṽ(x')−(β−γ(μ̃_𝒟−μ̃_c)/√(σ̃_𝒟^2+ϵ))‖_2^2 / (2γ^2σ̃_c^2/σ̃_𝒟^2)), ∀c∈[C], c≠t,
⇔ log(B_t/B_c) + σ̃_𝒟^2/(2γ^2σ̃_t^2σ̃_c^2) (σ̃_t^2‖ṽ(x')−(β−γ(μ̃_𝒟−μ̃_c)/√(σ̃_𝒟^2+ϵ))‖_2^2 − σ̃_c^2‖ṽ(x')−(β−γ(μ̃_𝒟−μ̃_t)/√(σ̃_𝒟^2+ϵ))‖_2^2) > 0,
∀ ṽ(x') = ṽ + Δ^⊤𝐭.
Note that this is actually a quadratic form (of the shape ax^2+bx+c > 0, ∀x) in ṽ(x'). To make the above inequality hold for all (or at least most) ṽ(x') in the feature space, the quadratic coefficient (σ̃_t^2−σ̃_c^2) must be positive, so we must have
σ̃_t > σ̃_c, ∀c∈[C], c≠t.
So we can confirm a key characteristic of the backdoored model: the variance of the attacked target class t is larger than that of any other class.
§.§ Parameter-oriented Scaling Consistency of Backdoored Models
After obtaining the above characteristic of the backdoored model, we can then prove the parameter-oriented scaling consistency of it.
Let
Γ_c = β − γ(μ̃_𝒟−μ̃_c)/√(σ̃_𝒟^2+ϵ), c = 1, ⋯, C.
Considering the above mixture-of-Gaussians model, a sample ṽ will be classified into class t if and only if
B_t exp(−‖ṽ−Γ_t‖_2^2 / (2γ^2σ̃_t^2/σ̃_𝒟^2)) > B_c exp(−‖ṽ−Γ_c‖_2^2 / (2γ^2σ̃_c^2/σ̃_𝒟^2)), ∀c∈[C], c≠t,
⇔ log(B_t/B_c) + σ̃_𝒟^2/(2γ^2σ̃_t^2σ̃_c^2) (σ̃_t^2‖ṽ−Γ_c‖_2^2 − σ̃_c^2‖ṽ−Γ_t‖_2^2) ≥ 0.
The above holds if
log(B_t/B_c) + σ̃_𝒟^2/(2γ^2σ̃_t^2σ̃_c^2) ((σ̃_t^2−σ̃_c^2)‖ṽ‖_2^2 − 2‖σ̃_t^2Γ_c−σ̃_c^2Γ_t‖_2‖ṽ‖_2 + ‖σ̃_t^2Γ_c−σ̃_c^2Γ_t‖_2^2/(σ̃_t^2−σ̃_c^2))
+ σ̃_𝒟^2/(2γ^2σ̃_t^2σ̃_c^2) (σ̃_t^2‖Γ_c‖_2^2 − σ̃_c^2‖Γ_t‖_2^2 − ‖σ̃_t^2Γ_c−σ̃_c^2Γ_t‖_2^2/(σ̃_t^2−σ̃_c^2)) ≥ 0
⇔ ‖ṽ‖_2 ≥ max{ 1/√(σ̃_t^2−σ̃_c^2) √(−σ̃_t^2‖Γ_c‖_2^2 + σ̃_c^2‖Γ_t‖_2^2 + ‖σ̃_t^2Γ_c−σ̃_c^2Γ_t‖_2^2/(σ̃_t^2−σ̃_c^2) − log(B_t/B_c)·2γ^2σ̃_t^2σ̃_c^2/σ̃_𝒟^2)
+ ‖σ̃_t^2Γ_c−σ̃_c^2Γ_t‖_2/(σ̃_t^2−σ̃_c^2), 0}.
So, just like <ref>, the above is also a quadratic form in ‖ṽ‖_2 with positive quadratic coefficient (σ̃_t^2−σ̃_c^2) > 0. So when ‖ṽ‖_2 is large enough (<ref>), ṽ is always more likely to be identified as category t than as any other class.
Note that scaling the inference-time parameters β_Inf, γ_Inf does not influence the values of β, γ in <ref>. The β, γ in <ref> describe the underlying feature distributions (which are assumed to be mixtures of Gaussians); they do not change once training is finished.
As a result, when we scale the BatchNorm parameters β_Inf, γ_Inf, we obtain a ṽ_Scale whose norm grows proportionally to γ_Inf and linearly with β_Inf. When β_Inf, γ_Inf are large enough, the scaled feature ṽ_Scale makes <ref> always positive.
The above proof can be intuitively understood as follows: if we sample from a mixture of Gaussian distribution, then all remote points will be sampled from the Gaussian with the largest variance.
§ DETAILED CONFIGURATIONS OF THE EMPIRICAL STUDY IN <REF>
In this section, we adopt BadNets <cit.>, WaNet <cit.>, and BATT <cit.> as examples for our analysis. These attacks epitomize static, dynamic, and physical backdoor attacks, respectively. Our experiments are conducted on the CIFAR-10 dataset <cit.>, using the ResNet18 model <cit.>. For each attack, we set the poisoning rate (ρ) to 0.1, achieving attack success rates over 99%. In particular, we implement the backdoor attacks using their official codes with default settings. Specifically, the backdoor trigger for BadNets is represented as a 3× 3 grid in black-and-white and is added to the lower-right corner of the poisoned images. For WaNet, the trigger is applied to the original images through elastic image warping transformation. In the case of BATT, the poisoned samples are obtained by rotating the original images by sixteen degrees.
These attacks are implemented using the BackdoorBox toolkit <cit.>[<https://github.com/THUYimingLi/BackdoorBox>].
Regarding the scaling procedure, we adopt a layer-wise weight-scaling operation to generate the parameter-amplified models. We scale up the BN parameters (i.e., γ and β) by ω=1.5 times, starting from the last layer and gradually moving forward to more layers. For example, in a 20-layer model, the first iteration involves scaling the weights of the 20th layer, the next iteration extends the scaling to the 20th and 19th layers, and so on. We then calculate the average confidence of 2,000 testing samples for each parameter-scaled model. The average confidence is defined as the average probability assigned to the label predicted by the original, unamplified model.
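A minimal sketch of this amplification step for a standard torchvision-style model is given below. Treating BatchNorm's learnable `weight` and `bias` attributes as γ and β follows PyTorch conventions; the deep-copy strategy is our assumption.

```python
import copy
import torch
import torch.nn as nn

def amplify_bn(model, omega=1.5, k=5):
    """Return a copy of `model` whose last k BatchNorm layers have their
    learnable parameters (gamma = `weight`, beta = `bias`) scaled by omega."""
    amp = copy.deepcopy(model)
    bn_layers = [m for m in amp.modules() if isinstance(m, nn.BatchNorm2d)]
    with torch.no_grad():
        for bn in bn_layers[-k:]:          # start from the last BN layer
            bn.weight.mul_(omega)
            bn.bias.mul_(omega)
    return amp

# A profile of progressively amplified models is then, e.g.:
# profile = [amplify_bn(model, omega=1.5, k=k) for k in range(1, 6)]
```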
§ DETAILED EXPLORATION OF AMPLIFYING A SINGLE BN LAYERS IN <REF>
As described in <ref>, we find that amplifying only a single BN layer may require an unreasonably large amplification factor and, due to the nonlinearity of neural network layers, often leads to unstable defense performance across different attacks. To further explain this phenomenon, we conduct an empirical investigation of the percentage of benign samples predicted as the target class when amplifying the learnable parameters of individual BN layers with scale S. The results are displayed in Table <ref>, and we have three primary observations:
(1) The amplification factor for achieving effective defense varies considerably from layer to layer. (2) Some attacks (e.g., WaNet and BATT) require an unreasonably large amplification factor to achieve a substantial misclassification rate. (3) Amplifying only a single BN layer may not be adequate to misclassify the majority of benign samples in some cases. For instance, amplifying the first BN layer alone cannot misclassify benign samples from the Ada-patch attack into the intended target class.
To address this, we spread the amplification across multiple consecutive BN layers, using a small factor (e.g., 1.5) on each layer.
Instead of controlling the layer-wise amplification factor, we vary the number of amplified layers to achieve different levels of accumulated amplification.
This relation is demonstrated in <ref> (see <ref> for the density plot), where we see amplifying more layers induces higher last-layer activations, and increases the room to differentiate poisoned samples from the benign ones.
§ WHY SCALE THE LATER LAYERS?
Our defense relies on building a profile of how the target model behavior changes under progressive modifications to the model.
Motivated by a widely accepted hypothesis (e.g., <cit.>) that layers situated towards the later stages exert a more direct influence on the ultimate model output,
we designed our defense by
amplifying the model parameters in stages, starting from the last hidden layer and progressively moving backward through the preceding layers.
Here, we examine the alternative of a forward model scaling approach,
which
scales
model parameters starting from the initial layers of the model and progresses forward to the latter layers.
The results in <ref> demonstrate that while this defense strategy proves to be effective against most backdoor attacks, such as BadNets, and ISSBA, it exhibits poor performance against others like Blend, BATT, and LC attacks. This discrepancy may be attributed to the fact that in those attacks, the trigger features closely resemble benign features in the model's shallow layers, making it challenging for the amplification operation to sufficiently separate these two types of features.
§ WHY NOT AMPLIFYING ALL BN LAYERS?
In our defense, we amplify the later parts of the original model. It is motivated by the previous findings that trigger patterns often manifest as complicated features learned by the deeper (convolutional) layers of DNNs, especially for those attacks with elaborate designs <cit.>. It is also consistent with our observations in <ref>.
We investigate the performance of our defense by amplifying all BN layers within a model. As shown in <ref>, amplifying all layers leads to defense failure against Blend, LC, and WaNet attacks. In particular, its F1 score drops to 0, suggesting that amplifying all layers in the defense fails to detect any poisoned samples.
§ DETAILED SETTINGS FOR EXPERIMENTAL DATASETS AND CONFIGURATIONS
In line with the existing backdoor defense methods <cit.>, we select the most commonly used benchmark datasets and model architectures for our experiments. The datasets and models used are outlined in <ref>.
CIFAR-10 is a benchmark dataset consisting of 3 × 32 × 32 color images representing ten different object categories <cit.>. The training set comprises 50,000 images, while the test set contains 10,000 images, with an equal distribution across the ten classes.
GTSRB is a benchmark dataset consisting of images of German traffic signs, categorized into 43 classes <cit.>. The training set consists of 39,209 images, while the test set contains 12,630 images. Given the considerable variation in image sizes within this dataset, we resize all images to a uniform size of 3 × 32 × 32 for our experiments, ensuring consistency and convenience in handling.
SubImageNet-200. We adopt a subset of the ImageNet benchmark dataset <cit.> by randomly selecting 200 categories from the most common categories in the original ImageNet. Specifically, the subset includes 100,000 images from the original ImageNet for training (500 images per class) and 10,000 images for testing (50 images per class). For simplicity, all images are resized to a uniform dimension of 3 × 224 × 224.
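A sketch of how such a subset could be assembled with torchvision is given below; the directory layout and sampling details are our assumptions rather than the exact construction script.

```python
import random
import torch
from torchvision import datasets, transforms

resize = transforms.Compose([transforms.Resize((224, 224)),
                             transforms.ToTensor()])
# Hypothetical directory layout; any ImageFolder-style ImageNet copy works.
full_train = datasets.ImageFolder("imagenet/train", transform=resize)
classes = set(random.sample(full_train.classes, 200))  # 200 random categories

def subset_indices(dataset, classes, per_class):
    """Keep at most `per_class` samples for each selected class."""
    keep, counts = [], {c: 0 for c in classes}
    for idx, (_, label) in enumerate(dataset.samples):
        name = dataset.classes[label]
        if name in counts and counts[name] < per_class:
            keep.append(idx)
            counts[name] += 1
    return keep

train_set = torch.utils.data.Subset(full_train,
                                    subset_indices(full_train, classes, 500))
```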
§ DETAILS OF TRAINING BACKDOORED MODELS
§.§ Backdoor Attacks
In <ref>, we assess the effectiveness of our defense against thirteen backdoor attacks. These attacks are categorized into three types: 1) poisoning-only attacks, 2) training-controlled, 3) and model-controlled attacks.
* Poison-only Backdoor Attacks: For the most commonly studied poisoning-only attacks, we consider various forms. This includes classic static attacks like (1) BadNet <cit.> and (2) Blend <cit.>, sample-specific attack such as (3) ISSBA <cit.>, clean-label attacks represented by (4) Label-Consistent (LC) <cit.> and (5) NARCISSUS <cit.>.
In addition, we also consider adaptive attacks like (6) TaCT <cit.> and (7) Adap-Patch <cit.>, which are designed to slip past existing defenses.
* Training-controlled Backdoor Attacks: The training-controlled attacks include the (8) Dynamic <cit.>, (9) WaNet <cit.>, (10) BPP <cit.> attacks, and physical backdoor attacks, including (11) PhysicalBA <cit.> and (12) BATT <cit.>.
* Model-controlled Backdoor Attacks: we assess attacks involving direct modification of model parameters, such as (13) subnet replacement attack (SRA) <cit.>.
The poisoning rate ρ for data-poisoning-based backdoor attacks is set to 0.1. The target class label is set to 0. In particular, the BATT attack consists of two attack modes, utilizing spatial rotation and translation transformations as triggers, respectively. In our study, we specifically employ spatial rotation as our triggers. The examples of both triggers and the corresponding poisoned samples are depicted in <ref>.
§.§ Additional Details of Training Backdoored Models
We adopt the standard training pipeline for developing backdoored models. This involves an SGD optimizer with a momentum of 0.9 and a weight decay of 10^-4. The initial learning rate is set to 0.1 and is reduced to 10% of its previous value at the 50th and 75th epochs. Training comprises 200 epochs with a batch size of 128. For data augmentation on the CIFAR-10 dataset, we utilize RandomHorizontalFlip with a probability of 0.5 and RandomCrop32 (randomly cropping images to a size of 3 × 32 × 32). For the GTSRB dataset, we employ RandomRotation15, where images are randomly rotated within a range of [-15, 15] degrees. For the SubImageNet-200 dataset, we apply RandomCrop224, RandomHorizontalFlip, and RandomRotation20 to enhance the accuracy of the backdoored model on benign samples.
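These augmentations map naturally onto torchvision transforms; the sketch below is our reading of the pipeline, and the crop paddings are assumptions not stated above.

```python
from torchvision import transforms

cifar10_aug = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),    # padding is our assumption
    transforms.ToTensor(),
])
gtsrb_aug = transforms.Compose([
    transforms.RandomRotation(15),           # rotate within [-15, 15] degrees
    transforms.ToTensor(),
])
subimagenet_aug = transforms.Compose([
    transforms.RandomCrop(224, padding=16),  # padding is our assumption
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ToTensor(),
])
```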
All experiments are performed on a server with the Ubuntu 16.04.6 LTS operating system, a 3.20GHz CPU, 2 NVIDIA's GeForce GTX3090 GPUs with 62G RAM, and an 8TB hard disk.
§.§ Effectiveness of the Backdoored Attacks
Following the settings in existing backdoor attacks, we use two metrics to measure the effectiveness of the backdoor attacks: attack success rate (ASR) and benign accuracy (BA). ASR indicates the success rate of classifying the poisoned samples into the corresponding target classes. BA measures the accuracy of a backdoored model on the benign testing dataset.
BA and ASR for different backdoor attacks are included in <ref> and <ref>.
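Both metrics reduce to a single accuracy routine evaluated on different test loaders; a sketch:

```python
import torch

@torch.no_grad()
def accuracy(model, loader, target=None):
    """Returns BA when `target` is None; ASR when `target` is the attack class."""
    correct = total = 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        ref = torch.full_like(y, target) if target is not None else y
        correct += (pred == ref).sum().item()
        total += y.numel()
    return correct / total

# ba  = accuracy(model, benign_test_loader)              # benign accuracy
# asr = accuracy(model, poisoned_test_loader, target=0)  # attack success rate
```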
§ IMPLEMENTATION OF THE BASELINE DEFENSES
(1) STRIP: We implement STRIP following their official open-sourced codes[<https://github.com/garrisongys/STRIP>]. STRIP detects backdoor attacks by observing the prediction behaviors of an input sample when superimposing benign features on it.
(2) TeCo: We implement TeCo following their official open-sourced codes[<https://github.com/CGCL-codes/TeCo>].
(3) SCALE-UP: We implement SCALE-UP (data-limited) following the most commonly used open-sourced toolbox codes[<https://github.com/vtu81/backdoor-toolbox>].
§ ROC CURVE COMPARISON WITH BASELINE DEFENSES
In addition to the AUROC and F1 score metrics, we also visually compare the ROC curves of the competing defense methods against attacks. ROC curves for the CIFAR-10 experiments can be found in <ref>.
§ GENERALIZABILITY TO OTHER MODEL ARCHITECTURES
We evaluate the effectiveness of our defense on additional model architectures including PreActResNet18 <cit.>, and MobileNet <cit.>.
The defense performance is presented in <ref>.
As shown, most of the average AUROC and F1 scores on both architectures are above 0.96, with a few slightly lower scores (still above 0.93). This result indicates that our defense has general applicability across different model architectures.
§ PERFORMANCE OF OUR IBD-PSC AGAINST ADDITIONAL BACKDOOR SCENARIOS
<ref> presents the performance (AUROC and F1 scores) of our IBD-PSC against some other types of backdoor attacks, including clean-label attacks (LC <cit.>, NARCISSUS <cit.>), a source-specific attack (TaCT <cit.>), a training-controlled attack (BPP <cit.>), a model-controlled attack (SRA <cit.>), and an adaptive attack (Adap-Patch <cit.>). The results demonstrate that IBD-PSC consistently outperforms the other defense strategies across almost all types of backdoor attacks, achieving the highest average AUROC and F1 scores (marked in bold). This comprehensive evaluation affirms the robustness of IBD-PSC across a broad and evolving range of backdoor attacks.
§ SETTINGS FOR THE INFERENCE TIME COMPARISON
The inference time is critical for this task (i.e., detecting poisoned testing images) because the detection is usually deployed as a `firewall' for online inference.
In the case of STRIP, TeCo, SCALE-UP, and our defense, defenders utilize the target model's prediction for defense purposes. This means that both input identification and prediction can be carried out simultaneously.
As noted in <ref>, we calculate the inference time of all defense methods under identical and idealized conditions, assuming that defenders load all required models and images simultaneously (with higher memory requirements than vanilla model inference). This comparison is relatively fair and reasonable since different defenses differ greatly in their mechanisms and requirements. More precisely, before inference, we engage in preparatory steps such as selecting the BN layers to be amplified and preparing the parameter-amplified models. These models are subsequently deployed across different machines, enabling simultaneous processing of input samples. While this approach requires additional storage space to accommodate the various model versions, it considerably accelerates the detection process. For SCALE-UP, we calculate the inference time needed to obtain predictions for the multiple augmented images associated with a given input. This is achieved by concurrently feeding all the images into the deployed model as a single batch instead of predicting them individually.
§ ABLATION STUDIES
§.§ Ablation Studies on the Threshold T
In our defense, we assess whether an input sample is malicious by comparing its PSC value to a predefined threshold T.
Following the other experiments, we conduct an ablation study of T on three representative attacks: BadNets, WaNet, and BATT on the CIFAR-10 dataset, by adjusting T from 0.5 to 0.9. The results are shown in <ref>. As we can see, a wide range of values of T can lead to a high F1 score. In our experiment, we set T to 0.9.
§.§ Ablation Studies on the Hyperparameter ξ.
In our defense, we design an adaptive algorithm to dynamically select a suitable number of the BN layers to be amplified. The algorithm uses a predefined hyperparameter error rate threshold ξ. Here we empirically show that our defense is insensitive to changes in ξ.
Again, this is demonstrated on three representative attacks: BadNets, WaNet, and BATT, with varying values of ξ from 10% to 90%. <ref> shows that the defense performance against BadNets and WaNet attacks exhibits remarkable resilience to variations in ξ.
While the BATT attack manifests a more pronounced response to changes in ξ, with the F1 score fluctuating, the metric recovers once the error rate reaches approximately 60%. This observation indicates that the overall influence of the error rate on defense efficacy remains limited. Consequently, we advocate an error rate of around 60%, as it strikes a judicious balance, ensuring adequate detection accuracy without unduly compromising the defense against the assessed backdoor threats.
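For completeness, the sketch below shows one plausible reading of the adaptive layer-selection rule governed by ξ: amplify progressively more BN layers until the fraction of benign samples whose prediction flips exceeds ξ. It reuses the hypothetical `amplify_bn` helper sketched earlier and is not the exact Algorithm <ref>.

```python
import torch

@torch.no_grad()
def select_start_depth(model, x_benign, omega=1.5, xi=0.6, n=5, n_bn=20):
    """Choose the starting depth k: amplify the last k BN layers and stop once
    the fraction of benign samples whose prediction flips exceeds xi."""
    base = model(x_benign).argmax(dim=1)
    for k in range(1, n_bn - n + 2):
        flipped = (amplify_bn(model, omega, k)(x_benign).argmax(dim=1) != base)
        if flipped.float().mean().item() > xi:
            return k           # the profile then uses k, k+1, ..., k+n-1
    return n_bn - n + 1        # fallback if the threshold is never reached
```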
§.§ Ablation Studies on the Number of Amplified Models n
In our defense, we build a profile of n progressively amplified models,
to capture the model's dynamic response to such interventions.
In practice, the number of amplifications n is a defender-assigned hyper-parameter.
As illustrated in <ref>, the detection performance under the BadNets attack exhibits consistency across various values of n, suggesting a relative insensitivity to the number of amplifications. In contrast, for the WaNet and BATT attacks, there is an improvement in detection effectiveness as n increases, which plateaus when n reaches five. This stabilization suggests an optimal defense performance, and thus we establish n = 5 as the optimal value for our defense, ensuring stable detection performance.
§.§ Robustness to the Target Class
We further evaluate the robustness of our defense to the changes of the target class. We select three attacks, including the patch-based, dynamic, and physical backdoor attacks mentioned above, and apply them to target each of the ten labels of CIFAR-10. We display the AUROC and F1 scores of our defense against these backdoored models in <ref>. As shown, our defense demonstrates consistent performance against different attacks and target labels. Specifically, the AUROC and F1 scores are consistently close to 1, with the average AUROC and F1 scores of each attack all exceeding 0.96 and 0.94, respectively. This indicates that our defense maintains strong performance against different types of attacks and target labels. Additionally, the standard deviations of AUROC and F1 scores across different cases are generally below 0.02.
§.§ Ablation Studies on the Size of Local Benign Samples 𝒟_r
Following similar studies in the literature <cit.>, we assume that defenders possess a small benign dataset 𝒟_r to calibrate
different model parameters.
By default, 𝒟_r contains merely 100 samples.
We evaluate the robustness of our defense to the change in the size of 𝒟_r. The results using ResNet18 against six attacks on the CIFAR-10 dataset are shown in <ref>. It is evident that for most attacks, including BadNets, IAD, WaNet, BATT, and NARCISSUS, the detection performance remains consistently high and relatively stable across varying sizes of 𝒟_r. For the SRA attack, the F1 score increases and stabilizes when the size of 𝒟_r reaches 100. Overall, this demonstrates that our defense is effective with as few as 100 benign samples.
§.§ False Positive Rates on Benign Models
In this study, we investigate a scenario in which a defender obtains a third-party DNN model but cannot determine whether the model is compromised with backdoors. To ensure security, it is common to deploy an input-level backdoor detection system, similar to a network firewall, to filter potentially poisoned samples. In such a context, evaluating the impact of the deployed defenses on benign models is crucial.
To achieve this objective, we train five benign models using different random seeds. Subsequently, we conduct tests on these models to calculate the false positive rate of our defense, which represents the proportion of benign samples incorrectly identified as backdoor samples. These benign samples are incorrectly rejected by the defense during inference.
<ref> presents the false positive rates of our defense on different benign models trained on various datasets. We can observe variations in the false positive rate among different models, but overall, it remains relatively low (below 3% on the CIFAR-10 and 1% on the SubImageNet-200 dataset, and around 6% on the GTSRB dataset).
We attribute the higher false positive rate on GTSRB to the relative simplicity of image features in this dataset, making the models more prone to overfitting. Consequently, when amplifying model weights, some benign samples may not decrease their prediction confidence due to overfitting. This overfitting phenomenon on GTSRB has also been reported in the SCALE-UP defense <cit.>. However, in the real world, datasets are often more similar to the ImageNet dataset, characterized by its comprehensive and rich feature information. Our defense performs best on the SubImageNet-200 dataset, achieving an error rate of less than 1%.
In summary, these results indicate that our defense strategy effectively identifies poisoned samples while maintaining a minimal impact on the regular functionality of benign models.
§.§ Is Shrinking as Effective as Amplifying?
In this study, we focus on detecting poisoned samples by amplifying model parameters with a scaling factor greater than one. To complement this approach, we conducted ablation experiments in this section involving shrinking model parameters using a scaling factor smaller than one.
According to <ref>, sufficiently large feature norms induce a decrease of confidence in the originally predicted class when the inputs are benign samples (and certain classical assumptions in learning theory are adopted); poisoned samples, instead, remain unaffected. Therefore, by inversely reducing the parameter values, we expect to observe a degradation in detection performance. The experiments are conducted across a range of reduction factors (0.1 to 0.9) against the BadNets, WaNet, and BATT attacks on the CIFAR-10 dataset using the ResNet18 model. The results displayed in <ref> clearly indicate a reduction in detection performance, as evidenced by lowered AUROC values and F1 scores approaching zero. This trend remains consistent across the various attack methods examined.
The reduction in detection performance with decreased parameter values reveals the effectiveness of parameter amplification as a defensive strategy, offering a reason to adopt this approach in safeguarding against backdoor threats.
We also examine the L2 Norm for the last-hidden-layer activations under reduced model parameters. We set the parameter reduction factor to 0.9 and reduced the values of parameters of the model's last four and eight hidden layers, respectively. The L2 norm is calculated on both benign models and backdoored models under BadNet, WaNet, and BATT attacks. As shown in <ref>, a greater reduction in parameters led to a smaller L2 norm in both benign and backdoored models.
This observation provides empirical justification for the necessity of parameter amplification, thereby reinforcing our insights for our proposed defense.
§ ROBUSTNESS AGAINST ADAPTIVE ATTACKS
§.§ Existing Attacks with Small Poisoning Rates
We conduct additional experiments under small poisoning rates (i.e., 0.5% and 1%). Specifically, following the suggestion from the backdoor-toolbox, we remove samples that contain trigger patterns but still cannot be correctly predicted as the target label by the attacked DNNs. As shown in <ref>, our defense remains highly effective in these cases, although its performance is slightly lower than that of TeCo, which requires significantly more inference time. In contrast, both STRIP and SCALE-UP fail.
Furthermore, we evaluate the effectiveness of our defense against backdoor attacks with small poisoning rates on the SubImageNet-200 dataset. As demonstrated in <ref>, our defense is still highly effective in defending against attacks with different poisoning rates on the SubImageNet-200 dataset (instead of only effective on CIFAR-10).
Additionally, we compared the performance of our defense with baseline defenses under small poisoning rates. As shown in <ref>, our defense outperforms the baseline defenses in most scenarios across different poisoning rates.
§.§ Adaptive Attacks in the Worst-case Scenario
In addition to the existing attacks,
we further consider the worst-case scenario where potential adaptive attacks are tailored for our defense.
Design 1. A natural assumption is that the adversary would design an adaptive loss ℒ_ada = ∑_i=1^|𝒟_b| ℒ(ℱ̂_k^ω(x_i; θ̂), y_i) to ensure that benign samples are correctly predicted under model parameter amplification, hence breaking our consistency assumption. This adaptive loss is then integrated into the overall loss function as ℒ = αℒ_bd + (1-α)ℒ_ada, where α represents the weighting factor. The adversary would aim to find an α value that best balances the ASRs and BAs.
<ref> presents the performance (BA, ASR) of the adaptive attacks under various α settings. As evident from the results, all three attacks (BadNets, WaNet, BATT) employed in the experiments consistently exhibit high ASRs and BAs across different values of α on the CIFAR-10 dataset, underscoring the effectiveness of the adaptive attacks.
On the other hand, we have shown in <ref> that such adaptive attacks can still be effectively defended by our method. We conducted a further investigation and observed that the adaptive loss indeed induces a model ℱ' substantially different from the non-adaptive version ℱ. However, our defense, particularly Algorithm <ref>, can readjust to the modified backdoored model (note that the model ℱ is an input to Algorithm <ref>). In particular, we observed that on a non-adaptive backdoored model ℱ, Algorithm <ref> returns k=10, whereas on the adaptive model ℱ', it returns k=15. In other words, our algorithm learns to exploit the earlier layers not touched by the adaptive attack. This ability to counter adaptive attacks is a key advantage of our method compared with the input-based SCALE-UP method.
Design 2. In contrast to Design 1, which ensures the correct prediction of benign samples under model-parameter amplification, we can also design an adaptive attack that decreases the confidence with which parameter-amplified models classify poisoned samples into the target class. Inspired by label smoothing, we design another adaptive loss term ℒ'_ada, defined as:
ℒ'_ada = ∑_j=1^|𝒟_p| ℒ(ℱ̂_k^ω(x_j; θ̂), ŷ),
where ŷ represents the label-smoothed form of the target class t:
ŷ_c = 1 - ζ if c = t,
ζ/(C-1) otherwise.
Here, ζ is set to 0.2, specifically chosen to lower the confidence with which poisoned samples are classified into the target class. The term C denotes the total number of classes, |𝒟_p| represents the number of poisoned samples in the training set, and x_j denotes a poisoned sample.
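Constructing the smoothed target ŷ is straightforward; a sketch (the cross-entropy against a soft target in the trailing comment is one standard way to use it):

```python
import torch
import torch.nn.functional as F

def smoothed_target(t, num_classes, zeta=0.2):
    """Label-smoothed target: 1 - zeta on class t, zeta/(C-1) elsewhere."""
    y = torch.full((num_classes,), zeta / (num_classes - 1))
    y[t] = 1.0 - zeta
    return y

# Cross-entropy against the soft target for a batch of poisoned logits:
# loss = -(smoothed_target(t, C) * F.log_softmax(logits, dim=-1)).sum(-1).mean()
```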
We then integrate this adaptive loss term ℒ'_ada with the vanilla backdoor loss ℒ_bd to formulate the overall loss function as ℒ = α'ℒ_bd + (1-α')ℒ'_ada, where α' is a weighting factor. We evaluate the robustness of our defense under the same settings as those used in <ref>. As shown in <ref>, decreasing the confidence of poisoned samples significantly reduces the BA, and thus the attack can be easily noticed. Furthermore, our defense remains effective even under this new adaptive attack. As analyzed in the previous discussion of adaptive attacks, the effectiveness of IBD-PSC largely stems from our adaptive layer selection strategy. This strategy dynamically identifies BN layers for amplification, regardless of whether the model is a vanilla or an adaptively backdoored model, ensuring the robustness of our defense mechanism across various scenarios.
§ HOW MODEL AMPLIFICATION CHANGES THE LATENT REPRESENTATION
In this section, we provide a comprehensive set of t-SNE visualizations for all the attacks considered in our study. These visualizations show how the hidden layer features of benign and poisoned samples change under the modifications by SCALE-UP and our defense strategy. As indicated in <ref> and <ref>, the amplification of pixel values by SCALE-UP results in a limited change within the feature space. In contrast, our defense achieves a more pronounced shift by modifying the model parameters, providing a more discernible differentiation between benign and poisoned samples.
This is intuitively why our method achieves better performance in backdoor attack detection.
§ COMPARING OUR DEFENSE WITH MASK-AWARE SPC <CIT.>
Pal <cit.> introduced a mask-aware scaled prediction consistency (MSPC) technique to identify and filter out poisoned samples within the training set. It presents intriguing observations, similar to ours, regarding SCALE-UP. However, it is important to clarify that the findings related to SCALE-UP constitute only a minor component of our research scope. Our study, while similar to the SCALE-UP framework, diverges significantly by exploring the concept from a different angle: parameter scaling rather than input scaling. This distinction underlines a substantial difference between our approach and MSPC. Nevertheless, we highlight several critical differences regarding the application scenario of our work, which differs from the training-set purification setting that MSPC focuses on.
Our method requires fewer assumptions about potential adversaries. We explore scenarios where the user employs a third-party model and requires real-time detection of poisoned samples during the inference phase. This detection setup is akin to a firewall and aligns with the setting proposed in SCALE-UP. Importantly, unlike MSPC, we do not limit adversaries to poison-only attack methods.
Differences in Detection Focus. Our defense is implemented during the inference stage, necessitating the capability for real-time detection. In contrast, training set purification methods are relatively more relaxed regarding detection time since they operate during the data collection phase.
§ TRAINING SET PURIFICATION
We also endeavor to apply our defense within the application scenario of identifying and filtering poisoned samples within the training set.
§.§ Related Work
Training set purification aims to filter out potential poisoned samples from a contaminated training set, thereby ensuring that the model trained on the purified dataset is free from backdoors. Many existing studies assume that backdoored models develop abnormal latent representations for poisoned samples, significantly different from those of benign samples, allowing poisoned samples to be identified. Chen <cit.> first observed that samples within the target class form two separate clusters in the feature space of the penultimate layer. They employ cluster analysis techniques, such as K-means, to segregate the two clusters; samples from the smaller cluster are classified as poisoned, based on the assumption that the number of poisoned samples is significantly lower than that of benign samples. Subsequent works generally utilize different cluster analysis methods, such as singular value decomposition (SVD) <cit.>, Gram matrices <cit.>, K-Nearest-Neighbors <cit.>, and feature decomposition <cit.>, to detect poisoned samples. Another line of work proposes using differentiating characteristics, such as the faster speed of model fitting <cit.>, the presence of high-frequency artifacts <cit.>, and the sensitivity of poisoned samples to transformations <cit.>, to identify poisoned samples.
More recently, Huang <cit.> hypothesized that poisoned samples require less input information to be predicted correctly. They introduced the cognitive pattern signature technique, which distills a minimal pattern (given by a mask) that an input sample needs to retain its original prediction, revealing that poisoned samples typically exhibit a significantly smaller L1 norm of the cognitive pattern than benign samples. Pan <cit.> proposed a proactive training set purification method called ASSET, which induces a maximal loss difference between poisoned and benign samples by optimizing opposite objectives on the base and poisoned sets. Pal <cit.> presented intriguing observations about the limitations of SCALE-UP <cit.>, based on which they proposed a masked scaled prediction consistency (SPC) technique. This method selectively amplifies specific pixels in input samples, thereby more effectively exposing the prediction invariance of poisoned data to an input scaling factor.
§.§ Identifying and Filtering Potentially Poisoned Samples within Training set
Following the methodology outlined in <cit.>, we first train a model on a potentially compromised training set and subsequently apply our detection method to identify and filter potentially poisoned samples within that training set.
The detection performance is presented in <ref>, which demonstrates the effectiveness of our method in filtering training set samples across various attacks, achieving a 100% TPR and nearly 100% AUROC while maintaining an FPR close to 0%. Note that we reproduce MSPC using its open-source code with the default settings. However, it performs relatively poorly in defending against WaNet compared to the results reported in its original paper. We speculate that this is probably because we evaluated WaNet in its noise mode, whereas MSPC was tested on the vanilla WaNet (as mentioned in their Appendix E). After removing suspected poisoned samples from the training set, we retrain the model on this purified training set to evaluate both its BA and ASR. We conduct experiments on the CIFAR-10 dataset against three representative attacks, and the results are presented in <ref>. As can be seen, the ASR scores of these retrained models are below 0.5%, rendering these backdoor attacks ineffective.
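A minimal sketch of this purify-then-retrain pipeline is given below; the helper names `detector` and `train_model` are placeholders for illustration and are not part of our released code:

import numpy as np

def purify_training_set(images, labels, detector, threshold=0.5):
    # `detector(x)` is assumed to return a suspiciousness score in [0, 1],
    # larger meaning more likely to carry a trigger; `images`/`labels`
    # are assumed to be numpy arrays so boolean indexing applies.
    scores = np.array([detector(x) for x in images])
    keep = scores < threshold
    return images[keep], labels[keep]

# Usage sketch: purify, retrain on the cleaned set, then re-measure the
# benign accuracy (BA) and attack success rate (ASR) of the new model.
# clean_x, clean_y = purify_training_set(train_x, train_y, detector)
# model = train_model(clean_x, clean_y)   # `train_model` is hypothetical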
§ REPRODUCIBILITY STATEMENT
We have provided detailed descriptions encompassing the datasets utilized, the training and evaluation settings, and the computational resources involved. To facilitate replication of our experimental results, the corresponding code and model checkpoints are provided in the supplementary materials.
§ DISCUSSIONS ABOUT THE ADOPTED DATA
In this paper, all the samples we used are from publicly available datasets, including CIFAR-10, GTSRB, and ImageNet. It's worth noting that our defense method is implemented by modifying the pre-trained model parameters, without making any alterations to the input samples themselves. Therefore, our study doesn't raise any concerns regarding the privacy of human-related images within the dataset.
§ POTENTIAL LIMITATIONS AND FUTURE DIRECTIONS
In this section, we analyze the potential limitations and future directions of this work.
Firstly, our defense requires more memory and inference time than standard model inference without any defense. Specifically, let M_s and M_d denote the memory (for loading models) required by standard model inference and by our defense, respectively, and let T_s and T_d denote the corresponding inference times. Assuming we adopt n (e.g., n=5) parameter-amplified models for our defense, we have M_d · T_d = n × M_s · T_s. Accordingly, users may need more GPUs to load all or some amplified models simultaneously to ensure efficiency, or require more prediction time by loading those models one by one when memory is limited. In particular, the storage cost of our defense is similar to that without defense, since we can easily obtain amplified models from the standard one and therefore only need to save one model copy (e.g., the vanilla model). We will explore how to reduce these costs in our future works.
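As an illustration of the two deployment modes behind this cost equation, the following sketch builds n parameter-amplified copies and scores prediction consistency. Scaling only the BatchNorm affine parameters here is an illustrative simplification; the exact layers and amplification rule follow the main text:

import copy
import torch
import torch.nn as nn

def amplify_bn(model, gamma):
    # Return a deep copy of `model` whose BatchNorm affine parameters are
    # scaled by `gamma` (one simple way to build an amplified copy).
    amp = copy.deepcopy(model).eval()
    for m in amp.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.data *= gamma
            m.bias.data *= gamma
    return amp

@torch.no_grad()
def consistency_score(model, x, gammas=(1.5, 2.0, 2.5, 3.0, 3.5)):
    # Fraction of amplified copies that keep the vanilla prediction.
    # Loading all copies at once costs ~n x memory; looping over them,
    # as below, costs ~n x time instead (M_d * T_d = n * M_s * T_s).
    base = model.eval()(x).argmax(dim=1)
    agree = torch.zeros_like(base, dtype=torch.float)
    for g in gammas:
        agree += (amplify_bn(model, g)(x).argmax(dim=1) == base).float()
    return agree / len(gammas)   # high consistency flags a poisoned input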
Secondly, our IBD-PSC requires a few local benign samples, although their number could be small (e.g., 25, as shown in Figure <ref>). We will explore how to extend our method to `data-free' scenarios in our future works.
Thirdly, our method can only detect whether a suspicious testing image is malicious. Currently, our defense cannot recover the correct label of malicious samples or their trigger patterns. As such, the users can only mark and refuse to predict those samples. We will explore how to incorporate those additional functionalities in our future works.
Fourthly, our work currently focuses only on image classification tasks. We will explore its performance on other modalities (e.g., text and audio) and tasks (e.g., detection and tracking) in our future works.
|
http://arxiv.org/abs/2405.09121v1 | 20240515063501 | Dirac Fermions and Topological Phases in Magnetic Topological Insulator Films | [
"Kai-Zhi Bai",
"Bo Fu",
"Shun-Qing Shen"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics, The University of Hong Kong, Pokfulam Road,
Hong Kong, China
School of Sciences, Great Bay University, Dongguan, China
sshen@hku.hk
Department of Physics, The University of Hong Kong, Pokfulam Road,
Hong Kong, China
Quantum Science Center of Guangdong-Hong Kong-Macau Greater Bay Area, China
We develop a Dirac fermion theory for topological phases in magnetic topological insulator films.
The theory is based on exact solutions of the energies and the wave functions for an effective model of the three-dimensional topological insulator (TI) film.
It is found that the TI film consists of a pair of massless or massive Dirac fermions for the surface states, and a series of massive Dirac fermions for the bulk states.
The massive Dirac fermion always carries zero or integer quantum Hall conductance when the valence band is fully occupied while the massless Dirac fermion carries a one-half quantum Hall conductance when the chemical potential is located around the Dirac point for a finite range.
The magnetic exchange interaction in the magnetic layers in the film can be used to manipulate either the masses or chirality of the Dirac fermions and gives rise to distinct topological phases, which cover the known topological insulating phases, such as quantum anomalous Hall effect, quantum spin Hall effect and axion effect, and also the novel topological metallic phases, such as half quantized Hall effect, half quantum mirror Hall effect, and metallic quantum anomalous Hall effect.
Dirac Fermions and Topological Phases in Magnetic Topological Insulator Films
Shun-Qing Shen
May 20, 2024
=============================================================================
§ INTRODUCTION
Topological phases, bridging abstract topological classification<cit.> to practical electronic phases of matter,
have gained increasing interest and redefined the way people understand and investigate physics in condensed matter systems<cit.>.
In contrast to phases described by the Landau-Ginzburg theory and the spontaneous-symmetry-breaking scheme<cit.>,
topological phases possess no local order parameter, only globally defined topological invariants<cit.>.
These invariants, such as Chern numbers and the ℤ_2 invariant, exhibit robustness against continuous deformations that preserve certain preconditions imposed on the specified topological class,
like the global gap of an insulator<cit.>, and symmetry constraints imposed either on the total system<cit.> or on the Fermi surface of a metal<cit.>.
Within the vast topological phase landscape, the three-dimensional topological insulator (3D TI)<cit.>
stands out as a unique state of matter, protected by the time-reversal symmetry and characterized by a strong ℤ_2 index.
As a result of the celebrated bulk-boundary correspondence<cit.>, the surface of a 3D TI hosts a single gapless Dirac fermion,
whose low energy dispersion is necessarily governed by the massless Dirac equation in 2D, exhibiting spin-momentum locking<cit.>.
Nevertheless, the very existence of such a gapless Dirac fermion is constrained by the no-go Nielsen-Ninomiya theorem<cit.>,
and it turns out that the high energy states of this fermionic band gain a bulk-like mass<cit.> to reconcile the contradiction.
The sign of this restored mass is defined as the chirality<cit.> for a regulated 2D gapless Dirac fermion, and it is responsible for the half-quantization of its Hall conductance.
The emergence of the high energy mass term due to lattice regularization essentially both breaks parity symmetry<cit.> explicitly and evades locality<cit.>.
The gapless behavior of the surface Dirac fermion can be altered through the finite-size effect.
When the topological insulator is exfoliated into a film, two gapless Dirac fermions emerge at the top and bottom surfaces.
However, as the thickness of the film is further reduced to the ultra-thin limit, the surface states of the two Dirac bands become gapped by quantum confinement<cit.>.
The thickness-dependent mass gap exhibits an exponentially decaying and oscillating pattern<cit.>, revealing multiple topological phase transitions.
This phenomenon provides a pathway to realize the 2D quantum spin Hall effect<cit.> with an ultra-thin TI film.
The occurrence of spontaneous magnetization can alter the topological property of the TI film.
Typically, a pair of gapless Dirac fermions emerge at two surfaces of a TI film, each carrying half-quantized Hall conductance with opposite signs under mirror symmetry, leading to the half quantum mirror Hall effect<cit.>.
The effect shares a similar quantized non-local transport signature with the quantum spin Hall effect<cit.>, while being intrinsically a metallic phase.
Further gapping out the surface states by an out-of-plane magnetism<cit.> gives rise to various topologically distinct phases.
Within the scheme of magnetic topological insulators, two such phases have been discovered as the Chern insulator<cit.>, aka quantum anomalous Hall effect (QAHE) that is characterized by Chern invariant and quantized Hall plateau, and the axion insulator<cit.>, signatured by zero Hall plateau and non-vanishing longitudinal conductance.
A semi-magnetic topological insulator, on the other hand, hosts the half-quantized quantum anomalous Hall effect (half QAHE)<cit.>, with a half-quantized Hall conductance and an unusual bulk-boundary correspondence, signalled by the absence of edge states together with a power-law decay of current from the boundary into the bulk.
In addition, if the magnetization is pushed away from the surfaces and towards the middle of the film with sufficient strength, the metallic quantized anomalous Hall effect (metallic QAHE)<cit.> can occur, which also exhibits integer Hall conductance but lacks chiral edge states.
Remarkably, the physics underlying the topological phases in the (magnetic) topological insulator films can be all attributed to the topological properties of the emergent two-dimensional Dirac fermions in the system.
While certain phases, like QAHE and half QAHE, can be well explained by focusing on the interplay between surface Dirac fermions and magnetism, there exist other phases that essentially involve higher bulk bands, notably the metallic QAHE.
These higher bulk bands are identified as a series of massive Dirac fermions, revealing that both gapless and gapped Dirac fermions in the topological insulator film interact with spontaneous magnetism to generate various topological phases.
The topological index, or the quantized Hall conductance in each phase, is always given by some gapped or gapless Dirac fermion(s), described by a modified Dirac equation.
In this paper, we will provide a unified framework to discuss and review how emergent Dirac fermions exist and generate various topological phases in magnetic topological insulator films, thus naturally partitioning the paper into two main parts.
The first part of the paper will focus on establishing the existence of Dirac fermions in magnetic topological insulator films. This discussion will heavily rely on a newly defined basis derived from an exact solution in 1D.
We will thoroughly investigate the Hall conductivity carried by different types of Dirac fermions within this framework, setting the stage for the subsequent discussion of topological phases.
In the second part we will delve into the characterization and analysis of topological phases in magnetic topological insulator films.
These phases will be classified into weak- and strong-magnetism regimes, providing a comprehensive understanding of how different magnetic strengths influence the emergence of various topological states.
In the remainder of this introduction we will give an overview of the main results of this paper following the line.
The TI film is equivalent to a set of Dirac fermions: a pair of massless Dirac fermions for bands that contain the surface states,
and a series of massive Dirac fermions consisting of purely bulk states, classified by their momentum-dependent mass terms m_n(k).
This scenario holds with both its continuum and lattice model versions, and is made clear and exact
through an introduced unitary transformation in the whole k-space, based on an exact solution in one dimension perpendicular to the film plane.
The finite-size effect is briefly discussed here.
The Hall conductivity carried by a massive or gapless Dirac fermion is discussed generally, with additional symmetry constraints imposed on the Fermi surface for the latter one, for both continuum and lattice models.
A direct deduction leads to the result that the Hall conductivity associated with the gapless and gapped Dirac fermions in the TI film are ± e^2/2h and 0, respectively, leading to a half quantum mirror Hall effect by 1/2 - 1/2, serving as a metallic partner to the insulating quantum spin Hall effect.
A brief proof for the half-quantization of metallic band structure with considered symmetry constraints over the Fermi surface is also presented.
Additionally, a field theoretical deduction for the half quantization, and a discussion on handling the Hall conductivity of a gapless Dirac fermion are provided.
The introduced magnetism, characterized by out-of-plane polarization, manifests as two equivalent matrix Higgs fields that collectively couple the Dirac fermions in a TI film, generating and altering their masses.
Treated at the mean-field level, the exchange interaction stands as an out-of-plane Zeeman field in TI film, which transforms via the unitary transformation into two momentum-dependent matrix fields 𝐈_S/A(k).
The two fields directly couple different species of Dirac fermions and alter their masses, serving as mass-generating Higgs fields,
whose non-vanishing expectation values arise concurrently with the spontaneous establishment of the ferromagnetic order.
Depending on the field strength, generally two regimes as weak and strong magnetism are classified.
In addition, the forms of other kinds of spin and orbital fields under unitary transformation are discussed.
In the weak Zeeman field regime, the topological phases are characterized by focusing on n = 1 matrix elements affecting the two gapless Dirac fermions near the surface.
This framework clarifies the underlying physics behind the Chern insulator, axion insulator, and half QAHE, with symmetric, anti-symmetric, or unilateral distribution of Zeeman fields at the surface of the TI film, respectively.
The resulting Hall conductance exhibits quantized nature: 1 + 0, 0 + 0, and 1/2 + 0 in units of e^2/h.
Additionally, the mirror layer Chern number in the Chern insulator with symmetrically distributed magnetism is examined, revealing (1/4)–(1/2)–(1/4) partition for the non-trivial band and (c/4)–(-c/2)–(c/4) with c ≈ 1 for the trivial band.
In the strong Zeeman field regime, the discussion is based on the effective mass picture, involving the gapped series of Dirac fermions through matrix Higgs fields couplings.
Another metallic topological phase, the metallic QAHE, is identified where the magnetism is centralized in the middle of the TI film.
Despite remaining gapless and lacking chiral edge states, its Hall conductance is quantized into an integer over e^2/h.
Additionally, higher Chern insulators resulting from sub-band inversion at high-symmetry points are presented under a uniform Zeeman field.
Furthermore, the paper discusses topological phases characterized by cooperation between magnetism in the middle and surface, based on the framework of gapping out surface states in the metallic QAHE.
The plan of the remainder of this paper is as follows.
Beginning with the exact solution of the model Hamiltonian for a topological insulator film in Section <ref>,
we demonstrate that a TI film comprises a pair of gapless Dirac fermions, containing low-energy surface states, and a series of gapped massive bulk Dirac fermions.
Section <ref> offers a comprehensive discussion on the Hall conductivity, a critical indicator revealing the presence of topological phases, carried by different species of Dirac fermions.
Moving on to the inclusion of magnetism in Section <ref>, we unveil the role of magnetism as matrix Higgs fields, responsible for generating masses of the Dirac fermions in a TI film.
This section also briefly explores other spin and orbital fields possible within the framework.
In Section <ref>, based on the weak magnetism approximation, we identify topological phases processable under the lowest four-band model framework, which stresses surface states with magnetism:
half quantum mirror Hall effect, quantum anomalous Hall effect, half-quantized anomalous Hall effect, and axion insulator.
We introduce the mirror layer Chern number and illustrate the Hall conductivity distribution in symmetrically magnetized TI film.
The Chern and axion insulator phases in interlayer anti-ferromagnetic material MnBi_2Te_4 are also discussed under the frame.
In Section <ref>, we delve into topological phases within relatively strong magnetism regimes, such as high Chern number insulators and the metallic quantized anomalous Hall effect, where bulk Dirac fermions come into play.
The paper concludes in Section <ref> with a summary and a discussion of future prospects.
§ MASSLESS AND MASSIVE DIRAC FERMIONS IN A TOPOLOGICAL INSULATOR FILM
In this section, by solving the minimal continuum and lattice models of the topological insulator, we show that from the physical aspect, a topological insulator film is composed of a pair of gapless Dirac fermions,
whose low energy parts near Dirac point are composed of massless surface states inside the bulk gap while the high energy parts away from the Dirac point evolve into bulk states gradually,
together with a series of gapped massive Dirac fermions consisting of purely bulk states.
Quantitatively, we write
H_c(k) =⊕_n[λ_∥k·σ+m_n(k)τ_zσ_z],
H_l(k) =⊕_n=1^L_z[λ_∥sin(k_xa)σ_x + λ_∥sin(k_ya)σ_y + m_n(k)τ_zσ_z],
for the continuum and lattice model, respectively.
Here we adopt a homogeneous in-plane parameter set, with a and λ_∥ the in-plane lattice constant and Fermi velocity, and k = (k_x,k_y) the in-plane wavevector.
Notice that the direct sum in the continuum model runs over infinitely many Dirac fermions, while in the lattice model there are only 2L_z species, with L_z the number of layers along the open z-direction of the film.
For the individual Dirac cone of main interest, with a single Dirac point at k = 0, the topological property is revealed through a general discussion of the nature of its Hall conductivity quantization, as illustrated schematically in Fig. <ref>.
In particular, in the strong TI film with a single Dirac cone at Γ, i.e., the k = 0 point, the gapless pair of Dirac fermions carry the half-quantized Hall conductivities ± e^2/2h, while the gapped series are all trivial.
§.§ The continuum model
In this subsection, the exact solution of the confined 3D modified Dirac
equation, which is the continuum model describing the topological
insulator film, is presented. A detailed study can be found in Appendix
<ref>.
The continuum model Hamiltonian for the 3D TI reads<cit.>
H_TI(k,k_z) =λ_∥(k·σ)τ_x+λ_⊥k_zσ_zτ_x+(m_0(k)-t_⊥k_z^2)σ_0τ_z
=H_1d(k,k_z)+H_∥(k),
where H_∥(k)=λ_∥(k·σ)τ_x,
m_0(k)=m_0-t_∥k^2. This Hamiltonian is
isotropic only in x-y plane. Substituting k_z⟼-i∂_z
leads to the real-z-space description for the 1D part as H_1d(k,z)=⊕_s=±h(s),
where
h(s)=-isλ_⊥∂_zτ_x+(m_0(k)+t_⊥∂_z^2)τ_z.
Solving the eigen-problem h(s)ϕ=Eϕ leads to specifically
symmetrized chiral-partner basis<cit.>
φ^n(s) =C[ -isλ_⊥f_+^n; t_⊥η^nf_-^n ], E=m_n,
χ^n(s) =C[ t_⊥η^nf_-^n; isλ_⊥f_+^n ], E=-m_n,
where the dependence on (k,z) is inherited inside even/odd
parity functions f_±^n(k,z) and real factor η^n(k),
whose definition can be found in Appendix <ref>. The k-dependent
eigenvalue of h(s) is represented by ± m_n(k), n=1,2,⋯,
as a mass term, which can be solved in a closed manner through equations
m_n =m_0(k)-t_⊥[ξ_1^2g(ξ_1)-ξ_2^2g(ξ_2)]/[g(ξ_1)-g(ξ_2)],
ξ_α =√(-F/D+(-1)^α-1√(R)/D), α=1,2,
where
g(ξ)=tan(ξ L/2)/ξ
D=2t_⊥^2
F=-2m_0(k)t_⊥+λ_⊥^2
R=F^2-2D(m_0^2(k)-m_n^2)
.
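For concreteness, this closed set of equations can be solved numerically by bracketing the roots of the residual m_n - RHS(m_n). The sketch below uses illustrative parameter values (not fits to any material); complex arithmetic covers evanescent ξ, and a crude magnitude guard filters spurious sign changes at the poles of the tangent:

import numpy as np
from scipy.optimize import brentq

m0, t_par, t_perp, lam_perp, L = 1.0, 1.0, 1.0, 1.0, 20.0  # illustrative

def residual(mn, k):
    # Zero when mn solves the closed set of equations above.
    m0k = m0 - t_par * k**2
    D = 2.0 * t_perp**2
    F = -2.0 * m0k * t_perp + lam_perp**2
    R = F**2 - 2.0 * D * (m0k**2 - mn**2)
    # xi may be imaginary (evanescent waves); complex arithmetic turns
    # tan into tanh automatically in that case.
    sqrtR = np.sqrt(complex(R))
    xi1 = np.sqrt((-F + sqrtR) / D + 0j)
    xi2 = np.sqrt((-F - sqrtR) / D + 0j)
    g = lambda xi: np.tan(xi * L / 2.0) / xi
    rhs = m0k - t_perp * (xi1**2 * g(xi1) - xi2**2 * g(xi2)) / (g(xi1) - g(xi2))
    return float((mn - rhs).real)

# Coarse scan plus bisection for the lowest branches at k = 0; the
# magnitude guard crudely filters spurious sign flips at tan poles.
ms = np.linspace(0.01, 3.0, 3000)
vals = np.array([residual(m, 0.0) for m in ms])
roots = [brentq(residual, ms[i], ms[i + 1], args=(0.0,))
         for i in range(len(ms) - 1)
         if vals[i] * vals[i + 1] < 0 and abs(vals[i]) < 10]
print(roots[:5])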
Projecting the TI film Hamiltonian onto the eigenstates of H_1d amounts to
performing an infinite-dimensional local unitary transformation in k-space, which yields
a Hamiltonian equivalent to the TI film one (see Appendix <ref>):
H(k)=⊕_nλ_∥τ_0(k·σ)+m_n(k)τ_zσ_z,
as Eq. (<ref>), where the projection basis is organized as
Φ_1^n =[ φ^n(+); 0 ],Φ_2^n =[ 0; χ^n(-) ],
Φ_3^n =[ χ^n(+); 0 ],Φ_4^n =[ 0; φ^n(-) ].
We emphasize that although the spin is still represented by σ in the transformed Hamiltonian, the degree of freedom τ appearing here carries a different meaning from that in the original TI film Hamiltonian.
Notice that Φ_1,4 (Φ_2,3) are z-parity even (odd)
states, while Φ_1,2 (Φ_3,4) are z-mirror even (odd)
states, which means that under the projection, the unitary matrices related to two operators are transformed into (see Appendix <ref> for detail)
P_z = τ_zσ_z,
M_z = τ_z.
Meanwhile, the local unitary matrix in k-space that transforms the continuum model Hamiltonian under the original representation is formally written as
U^c(k,z) = ({{Φ(k,z)}_i}^n),
where the double brackets mean that the index i = 1,2,3,4 is arranged inside each n = 1,2,⋯.
We see that U^c is topologically trivial in (k_x,k_y) space: it consists of an arrangement of the eigenstates Φ^n_i, which are solved from the separated 1D Hamiltonian and admit a well-defined
global representation under a single gauge choice in the (k_x,k_y) plane.
Our solution reveals that the 3D topological insulator film is composed
of multiple effectively 2D Dirac fermions, which differ only by their mass terms, as represented in Fig. <ref>.
Notice that for the continuum model there is in fact an infinite number of m_n, a basic property of bound states in a quantum well, and we present only the several lowest branches of the solutions.
Also notice that from the solved m_n, the mass terms show sign jumping behavior at high energy (large k).
Comparing the mass configurations in the continuum model with the general classification in Fig. <ref> reveals that while all n ≥ 2 masses correspond to trivial massive Dirac bands in the bulk,
the lowest states with n=1 do not: in the present case
they form two gapless Dirac cones whose low-energy parts
are localized z-mirror-symmetrically at the top and bottom surfaces. In particular, the analytic
expression for m_1(k), when the film is thick enough,
can be written as<cit.> (also see Appendix <ref>, and here t_⊥>0 is assumed without losing generality)
m_1(k)=Θ(-m_0(k))m_0(k).
The Heaviside Theta function appearing here simply encodes the physics that, in the low-energy zone near the Dirac point, the surface Dirac cone is massless and preserves both time-reversal and parity symmetry, while in the high-energy part away from the Dirac point the non-vanishing mass term signals that the surface Dirac cone has merged into the bulk states, breaking both time-reversal and parity symmetry explicitly.
The appearance of such a non-vanishing high-energy mass term is analogous to the regulator<cit.> introduced in quantum field theory. In this sense, one should not worry about
the nonanalytic behavior of the Theta function near m_0(k)=0,
as it can always be replaced by its mollifier<cit.>.
For completeness, we note that an ultra-thin TI film bears a small gap m_1(0) that oscillates and decays exponentially with film thickness<cit.>, which to lowest order reads (for the derivation, see Appendix <ref>)
m_1(0) ≈ -4 m_0/√(4 γ - 1)sin(u √( 4γ - 1)L) e^-u L,
with γ = m_0 t_⊥/λ_⊥^2,u = λ_⊥/2t_⊥.
The numerical result is shown in Fig. <ref>, with close correspondence between the lowest-order approximate gap and that obtained from solving the set of non-linear equations, especially for relatively large L.
The exponentially decaying tendency is best revealed by the logarithmic absolute value of the mass gap at k = 0, whose center decreases linearly with thickness,
while the oscillating nature is revealed by the dips, which extend to minus infinity at the strict gap-closing points; the mass gap reverses its sign across each dip, as shown directly by the m_1(0)-L diagram and the amplified inset.
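The lowest-order closed form for m_1(0) is also easy to evaluate directly; a minimal sketch with illustrative parameters (requiring 4γ > 1 for the oscillatory regime):

import numpy as np

def thin_film_gap(L, m0=1.0, t_perp=1.0, lam_perp=1.0):
    # Lowest-order thin-film mass gap m_1(0): oscillates in sign and
    # decays exponentially with thickness L (parameters illustrative).
    gamma = m0 * t_perp / lam_perp**2
    u = lam_perp / (2.0 * t_perp)
    root = np.sqrt(4.0 * gamma - 1.0)
    return -4.0 * m0 / root * np.sin(u * root * L) * np.exp(-u * L)

for L in (5, 10, 20, 40):
    print(L, thin_film_gap(L))   # sign changes mark the Z2 transitions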
Since m_1(∞) = m_0(∞) < 0 is certain, the oscillating behavior of m_1(0) with thickness L can drive m_1(k) back and forth between the configurations shown in Fig. <ref>(b) and (c),
i.e., between a trivial band and a band with unit Chern number.
For an ultra-thin film, which carries two copies ± m_1(k) related by τ_z in Eq. (<ref>),
the ℤ_2 topological index then jumps between ℤ_2 = 0 and ℤ_2 = 1, i.e., between a band insulator and a quantum spin Hall insulator<cit.>.
We will not discuss this phenomenon further, except for giving an explicit ℤ_2(L_z) oscillation diagram below in the lattice model subsection, shown in Fig. <ref>.
We emphasize that the exponentially decaying gap will not affect the physically observable topological phases, insulating or metallic, for a TI film of sufficient thickness.
The solution of the continuum model motivates us to proceed to
the lattice model of the TI film below.
§.§ The lattice model
In this subsection, we ask and deal with the same question as above, but in the more realistic lattice model.
Details are presented in Appendix <ref>.
The Hamiltonian of a 3D TI with nearest-neighbour
hopping on cubic lattice is<cit.>
ℋ_TI=∑_lΨ_l^†ℳ_0Ψ_l+∑_l,μ(Ψ_l^†𝒯_μΨ_l+μ+h.c.),
where energy and hopping matrices are ℳ_0=(m_0-2∑_μt_μ)β,
𝒯_μ=t_μβ-iλ_μ/2α_μ,
with l and μ denoting site locations and three spatial directions
while {β,α_μ} denoting Dirac matrices under standard
Dirac representation β=σ_0τ_z, α_μ=σ_μτ_x,
where the Pauli matrices σ_μ and τ_μ represent two different
degrees of freedom; for instance, one could choose them
to represent spin and pseudo-spin (e.g., orbital) degrees of freedom.
Ψ_l represents the vectorized fermionic operator at site l.
Notice that when adopting a full Fourier transformation upon all three
spatial dimensions, i.e., an infinite bulk system, the Hamiltonian
is transformed into the standard modified Dirac equation<cit.>
on the lattice, ℋ_TI=∑_kΨ_k^†H(k)Ψ_k,
where
H(k)=∑_μλ_μsin(k_μa_μ)α_μ+[m_0-4∑_μt_μsin^2(k_μa_μ/2)]β,
whose continuum model is just an anisotropic version of Eq. (<ref>).
This model avoids the fermion-doubling problem<cit.> by introducing Wilson terms<cit.> that break chiral symmetry explicitly for k ≠ 0.
Consider such a film with L_z sites along the z direction.
The Fourier transformation in x-y plane gives
ℋ_Film =∑_l_z,k(Ψ_l_z,k^†ℳ_0(k)Ψ_l_z,k+Ψ_l_z,k^†𝒯_zΨ_l_z+1,k+h.c.)
+∑_l_z,kΨ_l_z,k^†H_∥Ψ_l_z,k,
with
H_∥=λ_∥[sin(k_xa)σ_xτ_x+sin(k_yb)σ_yτ_x],
and ℳ_0(k)=M_0(k)σ_0τ_z=[m_0(k)-2t_⊥]σ_0τ_z,
where
m_0(k)=m_0-4t_∥(sin^2k_xa/2+sin^2k_yb/2).
Note that we have set t_x=t_y=t_∥, t_z=t_⊥,
λ_x=λ_y=λ_∥, λ_z=λ_⊥,a=b.
The solution of the lattice model closely parallels the continuum
one<cit.>; the details can be found in Appendix <ref>.
Separating the Hamiltonian at fixed k as
ℋ_Film(k)=ℋ_1d(k)+ℋ_S(k),
where
ℋ_1d(k) =∑_l_z(Ψ_l_z,k^†ℳ_0(k)Ψ_l_z,k+Ψ_l_z,k^†𝒯_zΨ_l_z+1,k+h.c.),
ℋ_S(k) =∑_l_zΨ_l_z,k^†H_∥Ψ_l_z,k.
The eigenvalues of ℋ_1d can be obtained with a set of
simultaneous equations below,
m_n =M+2t_⊥[cosξ_1g(ξ_1)-cosξ_2g(ξ_2)]/[g(ξ_1)-g(ξ_2)],
cosξ_α =[-Mt_⊥+(-1)^α-1√(M^2t_⊥^2-(t_⊥^2-λ_⊥^2/4)(M^2+λ_⊥^2-m_n^2))]/[2(t_⊥^2-λ_⊥^2/4)],
where
M=M_0(k),
g(ξ)=tan(ξ(L_z+1))/2sinξ,
and the sign of ξ_α is fixed by
sinξ_α=√(1-cos^2ξ_α), α=1,2.
Now, different from the continuum model,
the set of equations gives L_z solutions m_n(k), n=1,2,⋯,L_z, including
one surface state and L_z-1 purely bulk trivial states
for a suitable choice of parameters.
This is essentially because the Dirac equation is now placed on a lattice, so the number of solutions is constrained by the finite lattice constant.
And the other set of L_z
masses are just the chiral partners with eigenvalues -m_n(k).
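As a cross-check on the transcendental equations, one may bypass them entirely and diagonalize ℋ_1d(k) numerically at fixed k; a minimal sketch with illustrative parameters:

import numpy as np

s0, sz = np.eye(2, dtype=complex), np.diag([1.0, -1.0]).astype(complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
tz = np.diag([1.0, -1.0]).astype(complex)

def masses_1d(M0k, t_perp=1.0, lam_perp=1.0, Lz=20):
    # Build H_1d(k) with onsite M_0(k) sigma_0 tau_z and hopping
    # T_z = t_perp beta - (i lam_perp / 2) alpha_z, where beta = sigma_0 tau_z
    # and alpha_z = sigma_z tau_x; eigenvalues come in pairs +/- m_n(k).
    onsite = M0k * np.kron(s0, tz)
    hop = t_perp * np.kron(s0, tz) - 0.5j * lam_perp * np.kron(sz, tx)
    H = np.zeros((4 * Lz, 4 * Lz), dtype=complex)
    for l in range(Lz):
        H[4*l:4*l+4, 4*l:4*l+4] = onsite
        if l + 1 < Lz:
            H[4*l:4*l+4, 4*(l+1):4*(l+1)+4] = hop
            H[4*(l+1):4*(l+1)+4, 4*l:4*l+4] = hop.conj().T
    ev = np.linalg.eigvalsh(H)
    return ev[ev >= 0][:5]          # the lowest few m_n at this momentum

# Example at k = 0 with m_0 = 1 and t_perp = 1, so M_0 = m_0(0) - 2 t_perp:
print(masses_1d(M0k=1.0 - 2.0))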
The projection basis shares the same form as the continuum model eigenstates, with only a re-defined factor η (for details, refer to Appendix <ref> or <cit.>).
And the projection of the TI film model offers an equivalent description
as
H(k)=⊕_n=1^L_z[λ_∥(sin(k_xa)σ_x+sin(k_yb)σ_y)+m_n(k)τ_zσ_z],
as Eq. (<ref>), where 2L_z Dirac fermions H=⊕_n,χh_n,χ(k)
emerge as
h_n,χ(k)=λ_∥(sin(k_xa)σ_x+sin(k_yb)σ_y)+χ m_n(k)σ_z,
with χ=± labelling the mirror eigenvalue<cit.>.
An example of m_n(k) with L_z = 80 is presented in Fig. <ref>.
Among these Dirac fermions, two of them, with ± m_1(k), are gapless Dirac cones whose low-energy states are localized at the top and bottom surfaces, merging into the bulk at high energies away from the Dirac point,
and the remaining fermions are all gapped.
Notice that the same arguments as in the continuum model, regarding the projection as a topologically trivial local unitary transformation
and the Heaviside-Theta form of the lowest solution (see below),
apply here as well.
Generally, the lowest mass reads (t_⊥ > 0 assumed)
m_1(k)= Θ[|M_0(k)| - 2 t_⊥](2 t_⊥ - |M_0(k)|),
in analogy with Eq. (<ref>), reflecting the non-trivial condition on ℋ_1d, a chiral-symmetric 1D lattice Hamiltonian of a form similar to the Su-Schrieffer-Heeger model<cit.>.
As one can see, in going from the continuum model to the lattice model, the base manifold hosting the momentum k changes from a 2-sphere S^2 to a 2-torus T^2. This splits the single infinity point k = +∞ of the continuum Dirac operator k·σ into three additional Dirac points, X = (π,0), Y = (0,π) and M = (π,π), of the lattice Dirac operator sin(k_x)σ_x + sin(k_y)σ_y (unit lattice constant), apart from Γ = (0,0), as a consequence of the periodicity-driven fermion doubling<cit.>.
The topological property of the film system is therefore altered by terms on the `boundary' of the Brillouin zone and is reflected exactly in the changed form of m_1(k).
Such a change is known to generate weak topological phases<cit.> outside the classification hierarchy, and for completeness we give a brief discussion here.
§.§.§ Strong topological insulator
Considering h_1,χ, one finds that Dirac points can exist only at the four high-symmetry points Γ,X,Y,M;
the above general formula then hosts gapless Dirac fermions with linear dispersion only within three parameter regimes,
0<m_0<4t_⊥
4t_∥<m_0<4t_∥+4t_⊥
8t_∥<m_0<8t_∥+4t_⊥,
inside which m_1(k) = 0 can be realized for high symmetry points.
Here we assume without losing generality that 0 < t_⊥ < t_∥.
Now the first and the third case generate two strong topological insulators (STI), with the single Dirac point located at Γ and M, respectively.
For these two cases, a simplified mass term can be written separately as
m_1(k) = Θ(-m_0(k))m_0(k),
m_1(k) = Θ[m_0(k) - 4 t_⊥](4 t_⊥-m_0(k)).
To illustrate the topological phases of interest
in this paper, discussed in the following sections, we choose parameters such that the first case 0<m_0<4t_⊥ < 4 t_∥
is satisfied, so that it is adequate to write m_1(k)=Θ(-m_0(k))m_0(k),
which vanishes for m_0(k) > 0 with k near the Γ point and becomes negative for m_0(k) < 0 away from the Γ point.
§.§.§ Weak topological insulator
The second case is also of some interest, but we only sketch its basic property.
Here, as one can easily verify, two gapless Dirac cones form at X and Y, connected through high-energy bulk states.
Such a phase is recognized as the weak topological insulator (WTI) in the literature<cit.> and does not belong to the usual ten-fold way classification<cit.>.
Especially notice that such a phase can be recognized as a transition phase between two strong topological insulator phases stated above, accompanied by the Lifshitz transition of Fermi surface<cit.>.
§.§.§ Oscillating ℤ_2 invariant
As discussed in the continuum model case, in ultra-thin film limit, the strong TI thin film with single Dirac cone at Γ (k = 0) point will show oscillating behavior between a quantum spin Hall insulator and an ordinary insulator.
The topological index of this kind is computed explicitly in Fig. <ref>, with ℤ_2 = (-1)^ν, ν = 0,1, where the latter corresponds to a non-trivial 2D quantum spin Hall insulator.
The mass oscillation and the index oscillation match perfectly: the ℤ_2 = -1 (ν = 1) zones correspond to m_1(0) > 0, and so do their sign transitions (recall that m_1(π,π) < 0 together with m_1(0) > 0 gives a nontrivial mass configuration, as discussed below).
Notice that, restricted to the lowest n = 1 block of Eq. (<ref>), nothing in Eq. (<ref>) forces L_z to be an integer, so we may continue the n = 1 block from integer L_z to positive real values.
This is what permits the calculation above.
Again we emphasize that we will consider thick-enough strong TI film for topological phases hereafter, and the exponentially decaying finite size effect is physically negligible.
§ THE QUANTUM HALL CONDUCTIVITY OF DIRAC FERMIONS
As stated, in both the continuum and the lattice model, the strong topological insulator film is composed of two gapless Dirac fermions and countably many gapped Dirac fermions.
We have also claimed that all of the massive fermions are trivial, while saying nothing yet about the massless two.
In this section we complete this basic picture.
The discussion is restricted to effectively two dimensions and the zero-temperature limit.
§.§ In the continuum model
Our starting point is the continuum model of the two-band Dirac fermion that appeared above,
h^C_DF = λk·σ + m(k) σ_z,
with k = (k_x,k_y) and σ = (σ_x,σ_y).
Notice that the mass depends on k = |k| and possesses a topologically trivial infinity behavior.
Its Hall conductivity can be obtained from a deformed Kubo formula<cit.>
when the chemical potential μ lies in the valence band,
σ_H = -e^2/h1/4 π∫d^2 k Θ(μ + d) (∂_k_xd×∂_k_yd) ·d/d^3,
where d(k) = (λ k_x, λ k_y, m(k)), d = |d|, and to reveal possible topological property, we have used the Heaviside Theta function with Θ(x > 0) = 1 and zero otherwise, as the zero-temperature Fermi-Dirac distribution.
The Hall conductivity can then be evaluated easily by defining
cosθ = m/(λ^2 k^2 + m^2)^1/2,
and notice that
σ_H/e^2/h = 1/2∫_k_F^+∞dk^2 ∂cosθ/∂ k^2,
which finally leads to
σ_H = e^2/2 h[sgn(m(+∞)) - m(k_F)/d(k_F)],
with k_F the Fermi vector determined by μ = d(k_F), and sgn(x) the sign function.
From this equation, three topological phases are readily classified.
Notice that we assume a path-connected Fermi surface.
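The closed formula above is straightforward to evaluate numerically; the sketch below uses the regulated surface mass m(k) = Θ(-m_0(k))m_0(k) discussed below, assumes a monotonic d(k) so that bisection locates k_F, and takes illustrative parameters:

import numpy as np

def hall_continuum(mu, lam=1.0, m0=1.0, t_perp=1.0):
    # sigma_H in units of e^2/h for the regulated cone with
    # m(k) = Theta(-m0(k)) m0(k), m0(k) = m0 - t_perp k^2.
    m = lambda k: min(m0 - t_perp * k**2, 0.0)
    d = lambda k: np.hypot(lam * k, m(k))
    k_cut = 50.0                     # stands in for k -> infinity
    lo, hi = 0.0, k_cut
    for _ in range(60):              # bisection for mu = d(k_F)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if d(mid) < mu else (lo, mid)
    kF = 0.5 * (lo + hi)
    sgn_inf = np.sign(m0 - t_perp * k_cut**2)   # high-energy mass sign
    return 0.5 * (sgn_inf - m(kF) / d(kF))

print(hall_continuum(mu=0.3))   # inside the parity invariant regime: -1/2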
§.§.§ Gapless/Metallic case
The first case corresponds to a metallic phase with finite k_F, and if m(k_F) = 0 which leaves a perfect linearized dispersion near the Fermi surface, we obtain a half-quantized Hall conductance as
σ_H(μ | d(k_F) = λ k_F) = e^2/2 hsgn(m(+∞)),
where the half-quantization is completely determined by the high-energy mass sign which may be recognized as the chirality assigned to the low-energy massless Dirac fermion near the Fermi surface.
In our equivalent model, such a case exists for the n = 1 bands
h_1,χ = λ_∥(k·σ)+ χΘ(-m_0(k))m_0(k) σ_z, χ = ±.
Since m_0(k) = m_0 - t_⊥ k^2, then by assuming m_0 > 0, t_⊥ > 0, we have
σ^1,χ_H (k_F < k_c) = -χe^2/2 h,
with k_c = √(m_0/t_⊥) identified.
For each gapless Dirac fermion, the exact half-quantization<cit.> originates deeply in the parity `anomaly'<cit.>, which manifests itself as an explicit symmetry-breaking term at high energy for a 2D Dirac fermion that is massless at low energy.
To be clearer, the 2D parity symmetry is in fact an in-plane mirror symmetry<cit.>, say about x, which maps (k_x,k_y) ℳ_x⟶ (k_x,-k_y);
in our model, the projected spin degrees of freedom make the corresponding unitary transformation U_M_x = σ_x, and the imposed parity symmetry U^†_M_x h(k) U_M_x = h(ℳ_x k) holds only for k < k_c, which defines a parity invariant regime (PIR) inside which the parity symmetry is respected.
The parity invariant regime is the low-energy zone around the Dirac point with small k; for larger k > k_c, the high-energy zone, the non-vanishing mass term breaks the 2D parity symmetry explicitly, as a consequence of regulating the effective low-energy theory<cit.>.
§.§.§ Insulating case
The remaining two phases are insulating, with k_F = 0, when the chemical potential lies inside the global insulating gap; then simply
σ_H(|μ| < d_min) = e^2/2 h[sgn(m(+∞)) - sgn(m(0)) ],
for a Dirac cone, where d_min = min(d(k)) denotes the bound of the global gap.
Clearly σ_H/(e^2/h) = 0, ± 1, signalling trivial or non-trivial phases depending on the relative signs of the low- and high-energy masses, with the ± 1 cases identified as the Chern insulator, or equivalently, the quantum anomalous Hall effect.
In our equivalent model, one sees from Fig. <ref> that all n ≥ 2 masses keep the same sign, and the corresponding Dirac cones are all trivial.
We thus come back to the statement that in a TI film there are two gapless Dirac fermions with opposite half-quantized Hall conductances, while all other bands form pairs of trivial massive Dirac fermions.
The quantized nature of the Hall conductance in an insulating system, σ_H = - C e^2/h, goes back to the famous TKNN theorem<cit.>, with its robustness against continuous non-gap-closing perturbations rooted in the topological nature of C as the Chern invariant<cit.>.
§.§ In the lattice model
Now we turn to the lattice model with a starting Dirac Hamiltonian defined on the lattice
h^L_DF = λ(sin(k_x)σ_x+sin(k_y)σ_y)+ m(k)σ_z.
First we notice that when m ≡ 0, the remaining part is a naive lattice realization of a single Weyl fermion, which is strongly constrained by the Nielsen-Ninomiya theorem<cit.>:
four connected Dirac points appear at Γ,X,Y,M.
Any non-vanishing m(k) serves as a lattice regularization of the theory, the only difference being which Dirac points it gaps.
Essentially, this is where the difference with the continuum model appears: in the latter case there is only a single gapless Dirac cone, infinity is usually treated by one-point compactification, and the k-space is topologically equivalent to a sphere S^2,
while on the lattice the Brillouin zone is a torus T^2 and can carry non-trivial structure on its periodic boundary.
This non-trivial structure is exactly reflected in the four Dirac points of the naive lattice realization of the Dirac operator k ·σ.
With an analogous formulation, we write
σ_H = e^2/2 h[ S_X + S_Y - S_Γ - S_M ],
with S_k the analogue of the m(k)/d(k) that appeared in the continuum model.
S_k vanishes when the chemical potential lies in the metallic states around k and a certain symmetry constraint is imposed over a finite regime around that point, such as the parity symmetry requiring m(ℳ_xk) = -m(k); essentially,
the imposed symmetry must ensure that the net Berry curvature contributed from the regime (constrained also by the chemical potential) is zero wherever the Fermi level is placed inside it.
On the other hand, S_k = sgn(m(k)) when the Dirac point at k is gapped and the Fermi level lies inside the gap.
The formula is further classified into two cases under additional conditions.
§.§.§ Gapless/Metallic case
The first case corresponds to the existence of gapless Dirac fermion(s) inside a parity invariant regime.
Take as an example a single gapless Dirac fermion at the Γ point; letting the Fermi level lie in the symmetry constrained regime (SCR), we obtain
σ_H(k_F ⊆SCR) = e^2/2 h[ sgn(m(X)) + sgn(m(Y)) - sgn(m(M)) ],
which is always half-quantized.
Notice that k_F = {k | d(k) = μ} is now a set, representing Fermi surface wavevectors.
Also notice that, unlike the continuum model where the regulator arises only at infinity, here on the square lattice
a single gapless Dirac fermion has three regulators.
If moreover sgn(m(X))= sgn(m(Y)) =sgn(m(M)), which makes the boundary of the Brillouin zone trivial, we get
σ_H(k_F ⊆SCR) = e^2/2 hsgn(m(M)).
In our equivalent model on lattice, the lowest two cones
h_1,χ(k)=λ_∥(sin(k_xa)σ_x+sin(k_yb)σ_y)+χ m_1(k)σ_z,
satisfy the condition, with m_1(k) = Θ(-m_0(k))m_0(k) identified.
Under our parameter choice, it is easy to verify that sgn(m_1(X))= sgn(m_1(Y)) =sgn(m_1(M)) < 0, so we write
σ_H^1,χ(m_1(k_F)>0) = -χe^2/2 h,
inside the symmetry constrained regime which is now the parity invariant regime defined by m_0(k) > 0.
§.§.§ Insulating case
The second case corresponds to a globally gapped Dirac band.
Requiring the chemical potential to lie inside the gap, the Chern number reads
C = 1/2[ sgn(m(Γ)) + sgn(m(M)) - sgn(m(X)) - sgn(m(Y)) ],
which takes values 0, ± 1, ± 2.
This formula has two common versions, which we present in the following.
The first version is the most familiar one, valid for a trivial Brillouin zone boundary with sgn(m(X))= sgn(m(Y)) =sgn(m(M)), and
C = 1/2[ sgn(m(Γ)) - sgn(m(M))].
The mass term generating this formula is usually written as
m(k)=m_0-4t(sin^2k_x/2+sin^2k_y/2),
with a relatively small |m_0| compared to |t|, and correspondingly, we have
C = 1/2[sgn(m_0) + sgn(t)],
which is non-trivial with unit Chern number when m_0 t > 0.
And when we relax the value of m_0, a better formula for this mass term is
C = -sgn(m(X))/2[ sgn(m(Γ)) - sgn(m(M))].
In our equivalent lattice model, with our parameter choice of a strong topological insulator with homogeneous in-plane parameters, Eq. (<ref>) suffices to describe
all n ≥ 2 massive Dirac fermions; since Fig. <ref> shows that none of the m_n ≥ 2(k) change sign between Γ and M, they are all evidently trivial.
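The four-point mass formula is trivial to evaluate; a minimal sketch reproducing C = 1 for the familiar mass term with m_0 t > 0 (toy values):

import numpy as np

def chern_from_masses(m):
    # Chern number from the masses at the four Dirac points of the
    # lattice Dirac operator (Eq. above); `m` maps (kx, ky) to the mass.
    G, X, Y, M = (0.0, 0.0), (np.pi, 0.0), (0.0, np.pi), (np.pi, np.pi)
    s = lambda p: np.sign(m(p))
    return 0.5 * (s(G) + s(M) - s(X) - s(Y))

m0, t = 0.5, 1.0
m = lambda k: m0 - 4 * t * (np.sin(k[0] / 2)**2 + np.sin(k[1] / 2)**2)
print(chern_from_masses(m))   # -> 1.0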
§.§ A glance at the proof of half-quantization
The proof<cit.> of the half-quantization for a general 2D band structure proceeds as follows, with parity or time-reversal symmetry required at the Fermi surface.
Without loss of generality we consider a connected Fermi surface.
Identifying infinity as a single point compactifies the k-space;
the existence of the Fermi surface then cuts the curvature integral into two parts with three boundaries, on which the Stokes theorem applies:
-2 πσ_H/e^2/h = ∮_FSd k ·(A^M) + ∮_FSd k ·(A^L) + ∮_FSd k ·(Ã^L),
where A^M denotes the non-Abelian Berry connection (with the convention 𝒜 = i⟨u | d | u⟩) formed by the metallic bands crossed by the Fermi surface with parity or time-reversal symmetry, while A^L denotes the connection of the bands at lower energy, on the boundary formed by k_F.
Essentially, the last two terms are phase integrals around one mutual boundary with opposite orientations, which contribute an integer value<cit.> 2πC.
For the first term, requiring the 2D parity (i.e., mirror) symmetry at the Fermi surface yields a local unitary transformation U^M_k relating states at parity-symmetric points, which leads to
A^M_μ (k) = i (U^M_k)^†∂_k_μ U^M_k + (U^M_k)^† A_ν^M(ℳk) U^M_k J_νμ,
whereJ_νμ = ∂(ℳk)_ν/∂k_μis the Jacobian matrix with(J) = -1.
And similarly, requiring time reversal at Fermi surface leads to
A^M_μ (k) = i (U^T_k)^†∂_k_μ U^T_k - (U_k^T)^† A^M_μ(-k) U_k^T,
whereU_k^Tis the unitary matrix relating time reversal points satisfying thatU^T_k = -(U_-k^T)^T.
Performing Berry phase loop integral of both sides leads to, for both symmetry restricted cases,
∮_FSd k ·(A^M) = i/2∮_FSdk ·(U_k^†∇_k U_k) = π N.
Combining three terms gives
σ_H = -e^2/h( C + N/2),
with bothCandNintegers.
The proof here is easily generalized to the lattice model, by simply replacing the base manifold with a torus, and to the case where the Fermi surface consists of several separately connected components,
with the curvature integral cut into more parts determined by the position of the Fermi surface in k-space.
When bands related toCandNare fully separated, the former can be recognized as the Chern number contributed from these fully occupied bands, while the latter reduces to a quantized Fermi surface loop integral over metallic bands<cit.>.
We would like to emphasize that, even though it reduces to an accumulation of low-energy (i.e., Fermi surface) quantities, the N index in our analysis has to be determined by the properties of the deep Fermi sea, i.e., the high-energy regime.
This is because the application of the Stokes theorem, which turns the Fermi sea volume integral over Berry curvature into Fermi surface line integral over Berry phase,
requires a self-consistent gauge choice of the vector field.
This gauge choice must not contain any singularities in the integrated volume, in order to ensure the existence of a non-singular gauge field throughout the volume.
§.§ View from field theory
The gapless Dirac fermion in a strong topological insulator
film can be written as ℋ_0(k) = λ(sin k_x σ_x + sin k_y σ_y) + m(k)σ_z, with m(k) = Θ(-m_0(k))m_0(k) identified,
constructed on a lattice with a finite 2D Brillouin zone. The
time-ordered Green function is 𝒢_0(k)=[ω-d·σ(1-iη)]^-1, where k_μ=(ω,k)_μ, d(k)=(λsin k_x, λsin k_y, m(k)), and η is an infinitesimal quantity. In order to study the linear
electromagnetic response of the film system, we include the electromagnetic
fields 𝒜, which couple to the current through the
interaction term ℋ_gauge=j·𝒜.
The electric current density operator in momentum space is given
by j=∇_k𝒢_0^-1(k).
With the electromagnetic fields, the action reads (e = ħ= 1)
S=∫_kψ_k^†𝒢_0^-1(k)ψ_k+∫_k∫_q𝒜^μ(q)ψ_k+q/2^†∂_k_μ𝒢_0^-1(k)ψ_k-q/2,
where ∫_k=∫dω/2π∫_BZd^2k/(2π)^2 and the momentum k integral is performed over the
whole 2D Brillouin zone. By integrating out the fermions in the action,
the effective action for the gauge fields S_eff[𝒜] can be obtained by expanding to quadratic order,
𝒮_eff =1/2∫d^3q/(2π)^3𝒜^μ(-q)Π_μν(q)𝒜^ν(q).
where μ,ν run over the space-time indices (0,1,2), with the
vacuum polarization operator defined as
iΠ_μν(q) =∫d^3k/(2π)^3Tr[∂_k_μ𝒢_0^-1(k)𝒢_0(k+q/2)∂_k_ν𝒢_0^-1(k)𝒢_0(k-q/2)],
There is no divergence in Π_μν, as the momentum integral
is performed over a finite Brillouin zone due to the lattice regularization.
The antisymmetric terms Π_μν^A(q) can be evaluated as
follows,
Π_μν^A=1/2πϵ_μνζq^ζC,
with Chern number in the case following definition that
C=∫_BZd^2k/4πd̂·∂_k_xd̂×∂_k_yd̂,
where ϵ_μνζ is the Levi-Civita symbol and d̂=d/|d|.
Finally we obtain the Chern-Simons theory for 𝒜_μ: S_eff [𝒜]=C/2π∫ d^3x ϵ^μνς𝒜_μ∂_ς𝒜_ν.
For the lattice Hamiltonian ℋ_0(k), we
have C=-sgn(m(π,π))/2, which is a half-integer
with its sign determined by the sign of m(π,π).
Restoring physical units, the Chern-Simons
term corresponds to a half quantum Hall effect,
⟨ j^ν⟩=δ S_eff/δ𝒜_ν=sgn(m(π,π))/2e^2/hϵ^μνς∂_ς𝒜_μ.
Notice that for the DC linear response, this result is exact.
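The Chern integral defining C can also be checked by brute-force integration over the Brillouin zone. The sketch below uses a half-step grid offset to avoid the vanishing d at Γ and finite differences for the derivatives, and should return approximately +1/2 for this mass configuration (cf. C = -sgn(m(π,π))/2); parameters are illustrative:

import numpy as np

def chern_integral(dvec, N=400):
    # C = (1/4pi) int d^2k dhat . (d_kx dhat x d_ky dhat) on an N x N grid.
    ks = -np.pi + (np.arange(N) + 0.5) * 2 * np.pi / N   # avoids Gamma
    KX, KY = np.meshgrid(ks, ks, indexing="ij")
    d = dvec(KX, KY)                         # shape (3, N, N)
    dhat = d / np.linalg.norm(d, axis=0)
    dx = np.gradient(dhat, ks, axis=1)
    dy = np.gradient(dhat, ks, axis=2)
    berry = np.einsum("imn,imn->mn", dhat, np.cross(dx, dy, axis=0))
    return berry.sum() * (2 * np.pi / N) ** 2 / (4 * np.pi)

# Regulated gapless cone, m(k) = Theta(-m0(k)) m0(k):
m0k = lambda kx, ky: 1.0 - 4.0 * (np.sin(kx / 2)**2 + np.sin(ky / 2)**2)
dvec = lambda kx, ky: np.array([np.sin(kx), np.sin(ky),
                                np.minimum(m0k(kx, ky), 0.0)])
print(chern_integral(dvec))   # approx +1/2, i.e. -sgn(m(pi,pi))/2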
We now focus on the low-energy effective model of the lattice four-band
Hamiltonian, obtained by neglecting the higher-energy states (∝ m(k)),
which can be expressed as ℋ_0^low(k)=λ(k_xσ_1+k_yσ_2).
There is a linear ultraviolet divergence in Π_μν(q), which
should be regularized by the Pauli-Villars method in a gauge-invariant
way. In the Pauli-Villars regularization approach, we need to introduce
a second Dirac field with mass Mσ_3. In the limit M→∞,
the regulator field decouples from the theory, which removes the divergence
in Π_μν, leaving a finite contribution to the crossed
polarization tensor, Π_μν=sgn(M)/4πϵ_μνζq^ζ.
This also induces a Chern-Simons term and corresponds to a half-quantum
Hall effect.
A comparison of the mass configurations and band dispersions of the two methods is shown in Fig. <ref>.
The advantage of our lattice realization of a single gapless Dirac fermion lies in its physical reality, as it appears naturally in a topological insulator film,
and in the conciseness of expressing the topological property with a single analytic mass term.
The price, however, is an explicit symmetry-breaking term in the high-energy zone, and the Theta-function form (or its mollifier) introduces long-range hopping in real space.
§.§ Unexchangeable limits
In the usual context of quantum field theory, a massive (2+1)-D Dirac fermion carries a half-quantized Hall conductivity when the chemical potential lies inside the gap, even if the mass is infinitesimally small<cit.>, in which case one in fact has a Dirac point.
This picture relies on the limit sequence in which one first takes μ→0 and then the mass m →0;
if the sequence is inverted, i.e., one stays at finite chemical potential μ and first takes m →0, which gives zero Hall conductivity, one obtains a constant zero Hall plateau when subsequently pushing μ→0.
And in this sense one realizes that a gapless Dirac point is singular, and different approaches to reach it will lead to different and even contradictory pictures.
The same thing happens in our model.
Consider now a gapless Dirac fermion perturbed by a small constant mass term,
h = λ_∥(k·σ)+ [δ m + Θ(-m_0(k))m_0(k)] σ_z,
where for simplicity we work in the continuum model.
Given m_0(k) = m_0 - b k^2 with m_0 b > 0, by Eq. (<ref>) we have
σ_H = -e^2/2 h[sgn(b) + δ m/√(λ_∥^2 k^2 + δ m^2)],
where a small μ near the Dirac point is assumed.
Now the two different limits for the Hall conductivity of the gapless Dirac cone in the case read
lim_δ m → 0lim_μ→ 0σ_H = -e^2/2 h[sgn(b) + sgn(δ m) ],
lim_μ→ 0lim_δ m → 0σ_H = -e^2/2 hsgn(b),
i.e., first pushing the chemical potential to zero and then pushing δm to zero leads to an undefined limit that depends on the direction (positive or negative) from which δm approaches zero,
while an infinitesimal mass gap does not affect the half-quantization of the gapless Dirac cone under subsequent Fermi-level tuning, not only for μ→0 but for all possible Fermi wavevectors lying inside the parity invariant regime<cit.> defined by m_0(k) > 0.
The corresponding schematic diagram illustrating the sequential limit taking processes upon evaluating the Hall conductivity of a regulated gapless Dirac fermion is presented in Fig. <ref>.
In reality, which limit the measured Hall conductance takes depends on the specific situation of the system;
for the Dirac point emerging in a purely magnetic TI, the second perspective may be deemed more realistic.
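The order-of-limits discussion above can be made concrete with a few numbers; a minimal numerical sketch of the σ_H formula above, with illustrative parameters:

import numpy as np

def sigma_h(mu, dm, lam=1.0, b=1.0):
    # sigma_H in units of e^2/h from the formula above; kF = mu/lam for
    # small mu inside the parity invariant regime (m_0 b > 0 assumed).
    kF = mu / lam
    return -0.5 * (np.sign(b) + dm / np.hypot(lam * kF, dm))

# mu -> 0 first, then dm -> 0: the limit depends on the sign of dm
print(sigma_h(mu=1e-12, dm=+1e-6), sigma_h(mu=1e-12, dm=-1e-6))  # -1.0, 0.0
# dm -> 0 first, then mu -> 0: the half-quantized value survives
print(sigma_h(mu=1e-3, dm=1e-12))                                # approx -0.5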
§ MAGNETIC AND ORBITAL FIELDS IN TOPOLOGICAL INSULATOR FILMS
In this section we consider more ingredients, such as exchange interaction, gate-voltage and orbital orders, to play their roles in the topological insulator film at the mean-field level.
We identify the mean field as V(k,l_z)σ_μτ_ν, depending on a single in-plane wavevector and the out-of-plane position, and transform the field into the Dirac fermion representation.
For instance, an induced z-Zeeman field V_z(l_z)σ_zτ_0 with only z-dependence, and the intrinsic spin-orbit coupling H_∥(k) depending only on k, are two special cases of this formulation.
For our purposes we mainly consider the magnetic exchange interaction, approximated at the mean-field level as an effective Zeeman field<cit.> along the z direction;
the transformation of other spin- and orbital-related fields is discussed and summarized later.
§.§ Magnetism polarized along z direction
The mean z-Zeeman field is assumed to be uniform within each layer while varying with l_z, that is<cit.>,
𝒱_Z(k) = ∑_l_z,kΨ^†_l_z,k V_Z(l_z) Ψ_l_z,k,
where
V_Z(l_z) ≡ V_z(l_z)σ_z τ_0,
which acts on the spin-z component.
For several schematic examples with different Zeeman configurations, see Fig. <ref>.
Its equivalent action by projection, ⟨Φ^n_m | V_Z | Φ^n'_m'⟩ (m,m'=1,2,3,4; n,n'= 1,⋯,L_z), reads
V(k) = (𝐈_S(k) τ_0 - 𝐈_A(k) τ_y)σ_z.
In the expression, two projected Hermitian matrices 𝐈_S/A(k) have been defined with elements
I_S^nn' = |C_n C_n'| ∑_l_z V_z(l_z) [λ_⊥^2(f_+^n)^* f_+^n' + t_⊥^2 η^n η^n' (f_-^n)^* f_-^n' ]= (I_S^n'n)^*,
iI_A^nn' = i|C_n C_n'| ∑_l_z V_z(l_z) λ_⊥ t_⊥ [η^n'(f_+^n)^* f_-^n' + η^n (f_-^n)^* f_+^n'] = -i (I_A^n'n)^*,
where n,n' = 1,⋯,L_z.
Notice that I_S/A is non-vanishing only when the symmetric/antisymmetric component of V_z is non-zero.
Our formula then shows that the Zeeman field in a TI film is divided into two classes by the discrete parity or mirror symmetry, with S (A) labelling the part that respects (breaks) this symmetry.
Bringing the transformed Zeeman term into the Dirac fermion representation, we obtain
H^V = ⊕_n=1^L_z[λ_∥(sin(k_xa)σ_x+sin(k_yb)σ_y)+m_n(k)τ_zσ_z] + (𝐈_S(k) τ_0 - 𝐈_A(k) τ_y)σ_z.
Under the local unitary transformation, the Zeeman field in TI film undergoes a transformation into the 𝐈 matrices, which act as generalized Higgs fields in matrix form, generating mass through the Yukawa-like couplings among Dirac fermions in the film<cit.>.
This phenomenon occurs precisely due to the fact that the projected Zeeman terms still act on spin-z component, similar to how masses affect the system.
The emergence of a non-vanishing Higgs expectation value is closely associated with the establishment of the magnetic order in the system, either by intrinsic spontaneous magnetization or a proximate magnetic field.
A closer look then classifies this action into three aspects.
Firstly, the intra-Dirac cone elements I_S^nn tell how the Zeeman field directly modifies the mass term m_n, and due to the trace invariance under unitary transformation,
such a direct modification is significant in understanding the impact of the Zeeman field on the overall mass generation process.
Secondly, the intra-block inter-Dirac cone elements I_A^nn couple the two mirror-symmetric Dirac fermions with the same n-label together, forcing them to recombine into two new Dirac fermions that break the mirror symmetry.
Finally, the general inter-block elements I_S/A^nn'(n ≠ n') couple Dirac cones with different n-labels.
Nevertheless, since the winding part of the Dirac fermions in our equivalent TI film model (see Eq. (<ref>)) is the identity in the subspace spanned by n and τ,
the total effect of the projected Zeeman term is to modify the mass terms, i.e.,
𝐌(k)σ_z = [⊕_n=1^L_z m_n τ_z + 𝐈_S τ_0 - 𝐈_A τ_y ](k) σ_z,
and further diagonalization of this total mass part will give another set of 2 L_z mass terms without affecting the winding part, i.e.,
𝐌(k) diagonalization⟶⊕_n=1^2 L_zm̃_n (k),
and accordingly, we can write down the Dirac fermion Hamiltonian under Zeeman field as
H̃(k)=⊕_n=1^2 L_z[λ_∥(sin(k_xa)σ_x+sin(k_yb)σ_y)+ m̃_n(k)σ_z],
which describes the 2 L_z Dirac fermions in a magnetic topological insulator film.
Notice that the Zeeman term alters the mass of Dirac fermions thus their topology, which is the origin of the fruitful magnetic topological phases in the system.
The formula and discussion above are general and apply to any z-varying Zeeman configuration.
We now discuss the main cases separately.
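Before specializing, the diagonalization step of the total mass matrix can be sketched numerically for toy inputs; the m_n vector and the I_S/A overlap matrices here are illustrative numbers, not computed from the wavefunctions:

import numpy as np

def effective_masses(m_n, I_S, I_A):
    # Diagonalize M(k) = diag_n(m_n) (x) tau_z + I_S (x) tau_0 - I_A (x) tau_y
    # at one momentum; m_n has length Lz, I_S and I_A are Lz x Lz Hermitian
    # overlap matrices, and the 2 Lz eigenvalues are the m~_n(k).
    t0 = np.eye(2, dtype=complex)
    ty = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    tz = np.diag([1.0, -1.0]).astype(complex)
    M = np.kron(np.diag(m_n), tz) + np.kron(I_S, t0) - np.kron(I_A, ty)
    return np.linalg.eigvalsh(M)

# Toy example with two cones (Lz = 2) and a symmetric coupling only:
print(effective_masses(m_n=[0.0, 0.8],
                       I_S=np.array([[0.3, 0.05], [0.05, 0.2]]),
                       I_A=np.zeros((2, 2))))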
§.§.§ Uniform field strength
In this case V_z(l_z) ≡ V for all l_z, and it is easy to check that
I_S^nn' = V δ_nn',
I_A^nn' = 0,
which gives an exact projection without further diagonalization:
H^V(k) = ⊕_n = 1^ L_z[λ_∥ (sin (k_x a)σ_x + sin (k_y b)σ_y) + (m_n(k) τ_z + V τ_0) σ_z] = h^V_n,χ(k),
where each sub-block
h^V_n,χ(k) = λ_∥ (sin (k_x a)σ_x + sin (k_y b)σ_y) + (χ m_n(k) + V) σ_z,
describes a Dirac fermion of TI film modified by a uniform Zeeman splitting V.
This formula provides a clear physical picture of the formation of higher Chern numbers in a TI film, with multiple sub-band inversions<cit.>
generated by the direct Higgs coupling V, as we shall illustrate in a later section.
§.§.§ Weak Zeeman field
When a weak Zeeman field, whose strength is small compared with the major parameters of the topological insulator, especially the bulk gap m_0, is applied to the topological insulator film, its effective Hamiltonian can be obtained by keeping only the n = n' = 1 elements of the projected matrix as a cut-off approximation.
The reason we can do this lies in the distribution of the basis wavefunctions along the z-direction.
Fig. <ref> presents the n = 1 basis wavefunction distribution for the strong topological insulator with a single Dirac cone at Γ,
together with the n = 2 distribution as a representative of the higher states: the surface state and the higher states have little overlap in the low-energy zone (near the Dirac cone, in our case the parity-invariant regime<cit.> around the Γ point, i.e., the small-k area),
so the overlap integrals I_S/A^1,n ≥ 2 approach zero in this regime.
This tells us that the low-energy behavior of the system under a weak Zeeman field is dominated by the I_S/A^1,1 terms alone.
Turning to the high-energy part, the effective Hamiltonian for n = 1 is dominated by the non-vanishing mass term m_1(k), since the Zeeman integrals are all perturbative quantities in this case.
Moreover, since the n ≥ 2 bands are naturally gapped with minimal gap m_0, a weak Zeeman field has no prominent influence on them.
Based on the picture above, it suffices that we only consider n = 1 block with m_1(k) and preserve I_S/A^1,1 as influence (mass-)source at low energy.
This procedure is equivalent to a cut-off approximation.
Notice that since the low-energy surface states distribute mainly over the two surfaces, the Zeeman field in these two zones plays the major role.
Now we drop the n = 1 index and write
I_S(k) = ⟨Φ_1(k) | V_Z | Φ_1(k)⟩,
iI_A(k) = ⟨Φ_1(k) | V_Z | Φ_3(k)⟩,
which vary with the wavevector k; then, utilizing the basis solutions above, we have
I_S = |C|^2 ∑_l_z V_S(l_z) [λ_⊥^2 |f_+|^2 + t_⊥^2 η^2 |f_-|^2 ],
iI_A = i|C|^2 ∑_l_z V_A(l_z) λ_⊥ t_⊥ 2η[(f_+)^* f_-],
where the (anti-)symmetric projections of the Zeeman field along z are defined as
V_S/A(l_z) = V_z(l_z) ± V_z(-l_z)/2.
Note that I_S/A are real.
The effective Hamiltonian for Zeeman term then reads
V_EFF(k) = (I_S(k) τ_0 - I_A(k) τ_y)σ_z.
Adding this term to the lowest four-band model leads to
H_EFF = λ_∥(sin(k_x a)σ_x + sin(k_y a)σ_y) + m(k)τ_z σ_z + I_S(k) τ_0σ_z - I_A(k) τ_yσ_z,
where m(k) = Θ(-m_0(k))m_0(k) for thick-enough film, while I_S/A(k) are z-Zeeman-related integrals dependent on k.
This effective Hamiltonian serves as the starting point for analysing magnetic phases of a topological insulator film in the weak Zeeman regime,
and the Zeeman distribution should be confined mainly to the top and bottom surfaces to make the best use of it.
Effective mass treatment Diagonalizing the mass part in the weak Zeeman field case is much less complex than in Eq. (<ref>), and is analytically accessible.
A careful look at Eq. (<ref>) shows that the latter three terms can all be treated as mass terms, since by τ-space diagonalization
U_M^†[mτ_z + I_S τ_0 - I_A τ_y]U_M = [ m̃_+ 0; 0 m̃_- ],
where the defined unitary matrix reads
U_M = 1/√(2)[ isgn(I_A)√(1+m/M) √(1-m/M); √(1-m/M) isgn(I_A)√(1+m/M) ],
with M(k) = √(m^2(k) + I_A^2(k)), we can write H̃_EFF = ⊕_χ = ±H̃_χ with
H̃_χ = λ_∥(sin(k_x a)σ_x + sin(k_y a)σ_y) + m̃_χ(k) σ_z,
where the effective mass is defined as
m̃_χ (k) ≡ I_S (k) + χ√(m^2 (k) + I_A^2 (k)).
This equation minimally illustrates the mass generation brought about by the matrix-form Higgs field, here reduced to merely two components I_S/A(k).
The ultimate effect of the Zeeman field on the system is thus reduced to a correction of the Dirac mass term, which is responsible for the possible
non-trivial topology of the system.
The treatment here relies on the sign invariance of I_A inside the parity-invariant regime, which ensures the global gauge consistency of the transformation.
Notice that the gap is now determined by
Δ_χ = 2|m̃_χ(0)| = 2|I_S(0) + χ |I_A(0)| |,
which is non-zero (gapped) as long as |I_S(0)| ≠ |I_A(0)|.
The χ-Chern number, according to Eq. (<ref>), for each gapped surface state is written as
C_χ = 1/2[sgn(m̃_χ(0)) - sgn(m̃_χ(π,π))],
which, using the fact that m(0) = 0 and that the Zeeman field is added perturbatively so that m(k) dominates at (π,π), yields
C_χ = 1/2[ sgn(I_S(0) + χ |I_A(0)|) - χ]
= -χΘ(-|I_A(0)| - χ I_S(0)).
This formula works in the chosen parameter regime 0 < m_0 < 4t_⊥ within weak Zeeman treatment.
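This compact criterion is easy to tabulate; the following two-line transcription (with hypothetical I_S(0), I_A(0) values) anticipates the symmetric, asymmetric and fully antisymmetric cases treated below.

```python
def chern_weak(I_S0, I_A0, chi):
    """C_chi = -chi * Theta(-|I_A(0)| - chi*I_S(0)); Theta(x) = 1 for x > 0,
    else 0, consistent with the sgn(0) = 0 convention used later in the text."""
    return -chi if (-abs(I_A0) - chi * I_S0) > 0 else 0

# symmetric surface magnetization, I_S(0) > 0: the chi = -1 cone is non-trivial
assert (chern_weak(0.3, 0.0, -1), chern_weak(0.3, 0.0, +1)) == (1, 0)
# fully antisymmetric configuration (the axion case below): both cones trivial
assert (chern_weak(0.0, 0.3, -1), chern_weak(0.0, 0.3, +1)) == (0, 0)
```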
§.§.§ Strong Zeeman field
For a general strong Zeeman field, whose strength is comparable to the system parameters (mainly the bulk gap m_0) or even larger, with an arbitrary configuration along the z direction, both the uniform and the weak criteria fail.
In this case we generally have to adopt the most general formula, Eq. (<ref>), whose topological property
is revealed after a further diagonalization of the mass terms given by Eq. (<ref>), which
turns the total Hamiltonian again into a direct sum of Dirac fermions as in Eq. (<ref>).
Then based on our discussion in <ref>, the Hall conductivity of each single Dirac fermion is determined, from which we can analyse the topological property of the system.
§.§ Other fields
In this subsection, we present further examples of spin and orbital fields beyond the z-Zeeman field discussed above; the results are listed in Table <ref>.
The symbols introduced here apply only within this subsection.
The list of results reveals the power of our general procedure and is enlightening for discovering more topological phases driven by diverse physical origins.
For a given field V(k,l_z)σ_μτ_ν, the transformation follows similarly by organizing the projected elements
∑_l_z V(k,l_z)⟨Φ^n_m (k,l_z)|σ_μτ_ν|Φ^n'_m'(k,l_z)⟩ (m,m' = 1,2,3,4; n,n' = 1,⋯,L_z)
aligned with the sequence of the basis.
The form of the field after the transformation will always be two L_z × L_z matrix fields distinguished by z-parity symmetry labels, S counting for the symmetric distribution and A for the opposite, each attached with new 4 × 4 Dirac matrices.
To express the matrix quantities 𝐈, 𝐆, 𝐎 in Table <ref>, we introduce the momentum-dependent matrix-form acting functional ℱ_k over the V field, which generates projected matrix components like
ℱ^nn'_k[V] = ∑_l_z V(k,l_z) F^nn'_V(k,l_z) = (ℱ^n' n_k[V])^*,
where the summation kernel F^nn'_V(k,l_z) depends on which Dirac matrix the untransformed field carries.
However, in practice, we find that the non-vanishing components in the transformed field matrix are only generated by four kinds of summation kernels,
F^nn'_S+(k,l_z) = |C_n C_n'|[λ_⊥^2(f_+^n)^* f_+^n' + t_⊥^2 η^n η^n' (f_-^n)^* f_-^n' ],
F^nn'_A+(k,l_z) = |C_n C_n'|λ_⊥ t_⊥ [η^n'(f_+^n)^* f_-^n' + η^n (f_-^n)^* f_+^n'],
F^nn'_S-(k,l_z) = |C_n C_n'|[- λ_⊥^2(f_+^n)^* f_+^n' + t_⊥^2 η^n η^n' (f_-^n)^* f_-^n' ],
F^nn'_A-(k,l_z) = |C_n C_n'| (-i) λ_⊥ t_⊥ [η^n'(f_+^n)^* f_-^n' - η^n (f_-^n)^* f_+^n'],
differing by symmetry requirement and an internal sign.
In the table, the symmetry labels of the transformed field and of the summation kernel correspond to each other.
The table can be made longer once one considers more kinds of Dirac matrices.
The procedure above is general and powerful, while easy to understand.
Despite the ease of the transformation, the
non-trivial part is to endow the attached fields with physical meaning, both before and after the transformation.
For instance, the spin-orbital coupling retains its meaning after the transformation, being block-diagonal in the Dirac fermion representation;
the z-Zeeman field, as discussed above, is transformed into a matrix-form Higgs field, which stands as the effective mass generator.
Spin-orbital duality Interestingly, we see that the y-orbital order is transformed to attach the same Dirac matrices as the transformed z-Zeeman field, but with the symmetry indices of the matrix quantities exchanged.
This relation tells us that, as long as some topological phase is discovered with a z-Zeeman field Z_z = Z_z,S + Z_z,A, another phase with the same topological index can immediately be identified with a y-orbital order satisfying O_y,A = Z_z,S, O_y,S = Z_z,A.
For instance, we show the dual phases formed by σ_z and τ_y orders in Table <ref>: the Chern insulator, aka the quantum anomalous Hall effect (QAHE), the axion insulator, the half QAHE and the metallic QAHE,
several typical phases of magnetic topological insulators that we discuss below.
Note that for the metallic QAHE<cit.>, which requires relatively strong magnetism in the middle of a topological insulator film, the corresponding τ_y orbital-order-induced metallic QAHE requires a higher threshold for the
antisymmetric field strength O_y,A^m, due to its odd-function nature, which forces O_y,A^m(L_z/2) = 0.
Following the effective mass treatment above, we can furthermore construct a quantitative model unifying the two orders.
There are now five mass terms in total, which read
𝐌(k) = [⊕_n=1^L_z m_n τ_z + (𝐈^z_S + 𝐎_A^y )τ_0 - (𝐈_A^z + 𝐎_S^y ) τ_y ](k),
and a similar diagonalization leads to the effective masses
𝐌(k) diagonalization⟶⊕_n=1^2 L_zm̃_n (k),
without affecting the spin-orbital coupling field.
On the other hand, in the context of weak interaction, we keep only the n = n' = 1 components and write
down the mass terms for the n = 1 block as
[m τ_z + (I^z_S + O^y_A )τ_0 - (I^z_A + O^y_S )τ_y](k),
with the n = 1 label dropped.
Here merely the substitution I_S → I^z_S + O^y_A, I_A → I^z_A + O^y_S has occurred compared with Eq. (<ref>),
and a similar diagonalization leads to two effective masses for the surface Dirac bands as
m̃_χ(k) = (I^z_S + O^y_A )(k) + χ√(m^2 (k) + (I^z_A + O^y_S )^2 (k)),
from which the synergistic and competing relations between the σ_z and τ_y orders are shown more explicitly.
§ TOPOLOGICAL PHASES WITH WEAK FIELD
Depending on the mean strength of the magnetic exchange interaction, our exploration divides into two main branches, weak and strong Zeeman fields.
The division follows simply from the criterion of whether the phase can be described within the n = 1 frame, or equivalently, whether Eq. (<ref>) from the weak Zeeman field approximation is applicable.
If it is, we identify the phase as lying inside the weak interaction regime, as we discuss here.
From here on, every topological insulator means a strong TI with a single Dirac point at Γ.
§.§ Half quantum mirror Hall effect: a non-magnetic film with mirror symmetry
The topological insulator film itself, without any external ingredients or interactions, is already interesting enough and exhibits a novel topological phase<cit.>,
namely, the half quantum mirror Hall effect shown in Fig. <ref>, which is deeply related to the mirror symmetry of the system and reveals measurable parity anomaly physics.
A general film Hamiltonian reads ℋ = ∑_l_z,l_z',k Ψ_l_z,k^† H(l_z,l_z',k) Ψ_l_z',k, and the out-of-film-plane mirror symmetry ℳ_z emerges as a combination of inversion and C_2z rotation that reads ℳ_z Ψ_l_z,k ℳ_z^-1 = U_z Ψ_-l_z,k, where U_z is a unitary matrix.
Requiring this symmetry of the system Hamiltonian leads to the condition U_z^† H(l_z,l_z',k) U_z = H(-l_z,-l_z',k).
It is then possible to write down the mirror operator in the basis {Ψ_k,l_z} as M_z = C_2z P, with U_z as its off-diagonal elements, and the Hamiltonian
can be projected into decoupled mirror-labelled parts as
H_χ = P^M_z_χ H, P^M_z_χ = 1 + iχ M_z/2,
with χ labelling the eigenvalue of the mirror operator. Each H_χ is yet again a complete system whose non-trivial property
is revealed by the (zero-temperature; temperature is ignored below) mirror Hall conductivity
σ_H^χ = e^2/h/π[∑_E_n^χ < μ < E_m^χ∫d^2 k v̅_x^mn,χv̅_y^nm,χ/(E_n^χ - E_m^χ)^2],
where v̅_i^mn,χ = ⟨n^χ | ∂_k_i H^χ | m^χ⟩
is the matrix element of the mirror velocity operator evaluated between eigenstates of the mirror-projected Hamiltonian.
Clearly, this is just the usual Kubo formula<cit.> evaluated over the projected Hamiltonian H_χ, and thanks to the imposed mirror symmetry, two parts with mirror label χ = ± do not
communicate with each other and are totally decoupled.
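To make the formula concrete, the sketch below evaluates the zero-temperature interband Kubo formula numerically for a generic Bloch Hamiltonian, returning σ_H in units of e^2/h. It is a minimal illustration only: the finite-difference velocities, the k-grid shifted off Γ (to dodge the singular Dirac point of the gapless branch), and the test model with a = b = λ_∥ = 1 and hypothetical m_0, t values are implementation assumptions, not part of the formalism above.

```python
import numpy as np

def hall_conductivity(h, mu=0.0, N=120, dk=1e-5):
    """sigma_xy in e^2/h: (1/2pi) * int d^2k sum_{E_n<mu<E_m}
    (-2) Im[v_x^{nm} v_y^{mn}] / (E_m - E_n)^2."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False) + np.pi / N
    total = 0.0
    for kx in ks:
        for ky in ks:
            E, U = np.linalg.eigh(h(kx, ky))
            vx = U.conj().T @ ((h(kx + dk, ky) - h(kx - dk, ky)) / (2 * dk)) @ U
            vy = U.conj().T @ ((h(kx, ky + dk) - h(kx, ky - dk)) / (2 * dk)) @ U
            for n in np.where(E < mu)[0]:
                for m in np.where(E > mu)[0]:
                    total -= 2 * np.imag(vx[n, m] * vy[m, n]) / (E[m] - E[n]) ** 2
    return total * (2 * np.pi / N) ** 2 / (2 * np.pi)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def h_surf(kx, ky, chi, m0=0.7, t=1.0):
    # regularized surface Dirac fermion: m(k) = Theta(-m0(k)) m0(k)
    m = min(m0 - 4 * t * (np.sin(kx / 2) ** 2 + np.sin(ky / 2) ** 2), 0.0)
    return np.sin(kx) * sx + np.sin(ky) * sy + chi * m * sz

for chi in (+1, -1):
    print(chi, hall_conductivity(lambda kx, ky: h_surf(kx, ky, chi), mu=0.05))
# the two mirror sectors give approximately half-quantized values of opposite
# sign, cancelling in the charge channel but adding in the mirror channel,
# cf. sigma_H^chi = -chi e^2/2h in the text
```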
The gapless pair of Dirac fermions in a topological insulator film causes the half quantum mirror Hall effect.
Here, in the concrete model, the off-diagonal elements of the mirror operator read U_z = -iσ_z τ_z, which is projected into τ_z in the multi-Dirac-fermion representation (see Appendix <ref>), indicated by χ = ± as its eigenvalue in the effective Hamiltonian.
The gapless n = 1 Dirac fermions in a strong topological insulator film read
H_n=1 = H_surf,+⊕ H_surf,-,
where each block with mirror label reads
H_surf,χ= λ_∥ (sin (k_x a)σ_x + sin (k_y b)σ_y) + χ m(k) σ_z,
with m(k) = Θ(-m_0(k))m_0(k) identified.
To show the nature of the half quantum mirror Hall effect, we calculate the Hall conductivity of H_χ obtained from the mirror-projected TI film Hamiltonian, and of the split Dirac fermion H_surf,χ.
The results are shown in Fig. <ref>, where the half-quantized transverse conductivity is shown for each χ part with inverse sign, indicating quantum-spin-Hall-like physics<cit.>, while the topological origin of the half-quantized mirror Hall conductivity is bound up with the metallic gapless Dirac fermions<cit.>.
Their massless low-energy parts exist mirror-(anti-)symmetrically at both the top and bottom surfaces of the TI film as a result of the bulk-boundary correspondence of the 3D strong topological insulator<cit.>, corresponding to states with mass ±Θ(-m_0(k))m_0(k) at m_0(k) > 0.
Here, the symmetry statement is traced back to our basis, which is chosen to distribute along z either mirror symmetrically or antisymmetrically (see Appendix <ref>).
As a complete band, the surface Dirac cone does not end at finite wavevector, but gradually emerges into the bulk with regulated non-zero mass term represented by Θ(-m_0(k))m_0(k) at m_0(k) < 0,
and it is this non-vanishing high-energy part that ultimately gives rise to the half-quantized Hall conductivity, as discussed in <ref>, which by Eq. (<ref>) finally reads σ_H^χ = -χ e^2/2h
when the Fermi surface satisfies m_0(k_F) > 0.
The physically observable effect generated by the phase is embedded in the mirror Hall conductivity<cit.>, which is defined as
σ_H^Mirror = ∑_χχσ_H^χ,
and equals the quantum unit -e^2/h in this case.
The quantity reveals that, although, owing to the opposite Hall conductivities, the charge current driven by a transverse electric field vanishes as σ_H = ∑_χσ_H^χ = 0, the `mirror' current does not, similar to the quantum spin Hall effect.
Nevertheless, a better way of looking at the half quantum mirror Hall effect may start from treating it as an intrinsic `spin' Hall effect in a metal, where the effect shows quantization of the transverse `spin' Hall conductivity with a topological origin deeply related to the parity anomaly;
replacing `spin' with mirror leads to the observation that, in different mirror sectors, the mirror current and the charge current are either parallel or anti-parallel with the same quantized magnitude.
Such a narrative also lies in the lineage of induced dissipationless mirror current and dissipative longitudinal current, as both are generated by the metallic gapless Dirac fermions.
To detect the mirror current, non-local electrical transport signals<cit.> are needed, while to reveal the quantized nature, one needs to perform a series of measurements to fully separate the dissipationless and dissipative currents<cit.>, by changing the sample width and noting the scale invariance of the Hall conductance.
§.§ Quantum anomalous Hall effect: Chern Insulators
The Chern insulator is identified as an insulating phase which hosts the quantum Hall effect<cit.> with quantized Hall conductance, without the need to apply an external magnetic field to form Landau levels<cit.>.
The key ingredient lies in the breaking of time-reversal symmetry, which makes a non-vanishing Hall conductivity possible, as studied extensively in the anomalous Hall effect<cit.>.
The quantization, on the other hand, is determined by the Berry phase flux integral over the Brillouin zone, an integer known as the first Chern number<cit.>.
An insulator with non-zero Chern number is known to host gapless chiral edge modes<cit.> that circulate around the system dissipationlessly without backscattering<cit.>.
Essentially, the number of these modes equals the Chern invariant, a physical realization of the index theorem through the bulk-boundary correspondence<cit.>.
It is usually argued that to realize a Chern insulator in a realistic material, relatively strong spin-orbital coupling together with internal magnetism is needed<cit.>.
With confined geometry, the topological insulator film is predicted<cit.> to host the quantum anomalous Hall effect (QAHE)
with proper magnetism, either by a magnetic doping approach<cit.> or by establishing intrinsic magnetic order<cit.>.
In this sense, three typical cases realizing the Chern insulating phase are presented in Fig. <ref>: a uniform Zeeman field (to be consistent with the discussion here, the Zeeman strength is still chosen weak, while the uniformly strong case is left to the higher Chern number discussion later on),
a symmetric top and bottom surface Zeeman configuration, and an asymmetric configuration that does not break the overall polarization, by which we mean that the symmetric ingredient of the configuration overwhelms the asymmetric one.
The common feature these realizations share is the parallel polarization of the top and bottom surface magnetism perpendicular to the TI film plane, effectively Zeeman field directions that both point up or both point down.
The verification of the three cases is carried out by numerical calculations with both the TI film and the weak Zeeman effective four-band models, as revealed in Fig. <ref>, Fig. <ref> and Fig. <ref>, respectively.
Besides the bands in (a), which all show the Zeeman-gapped feature with perfect correspondence between the two methods, the Hall conductivity in the (c) panels captures the essence of a Chern insulator, with an integer Chern number quantifying the quantized Hall plateau magnitude.
Moreover, the calculated I_S/A in (b) and the Hall conductivity in (d) for H̃_χ reveal more of the physics behind the phenomenon.
Below, based on the symmetric or asymmetric Zeeman configurations, we further divide the discussion into two classes.
§.§.§ Symmetric magnetic structure
In this class,
I_S ≠ 0, I_A = 0,
and the first two cases satisfy this condition.
In cases I and II, the symmetric Zeeman distribution leads to the vanishing of I_A, and the effective mass, according to Eq. (<ref>), is written as
m̃_χ(k) = I_S(k) + χ |m(k)|, I_S > 0,
it is thus clear that under these circumstances the χ = - branch contains a mass sign change from the Dirac point Γ = (0,0) to the high-energy point M = (π,π), and is topologically non-trivial with unit Chern number given by Eq. (<ref>), while the χ = + mass remains positive and leads to a trivially gapped surface band.
This explains the χ-dependent Hall conductivity for the first two cases.
§.§.§ Asymmetric magnetic structure
In this case,
I_S ≠ 0, I_A ≠ 0, |I_S| > |I_A|,
i.e., an imbalance between the top and bottom Zeeman strengths appears, while their directions remain parallel so that the symmetric component overwhelms, as reflected by case III.
Now we observe in Fig. <ref>(d) that the χ = - branch is non-trivial with a unit quantized Hall plateau, while the χ = + branch is trivial with a broader zero-Hall plateau; this means that the non-trivial χ = - band has a smaller gap than the χ = + band, as revealed in Fig. <ref>(a).
Lifting this to a principle, we claim that for a Chern insulator film the surface band with the smaller magnetic gap is the non-trivial one.
To gain insight into the phenomenon, notice that in this case both I_S and I_A are non-vanishing, but generally I_S > |I_A| > 0, since the Zeeman configuration is closer to the symmetric case, i.e., V_S > |V_A| > 0 near the two surfaces.
The above observation leads to
m̃_χ(0) = I_S(0) + χ |I_A(0)| > 0,
m̃_χ(M) = I_S(M) + χ√(m^2(M) + I_A^2(M)) ∼ χ |m(M)|,
and since non-trivial topology requires mass inversion, we conclude that m̃_- is non-trivial with unit Chern number while χ = + is trivial;
clearly, the gap Δ = 2|m̃(0)| tells us that Δ_- < Δ_+.
Pictures and discussions above complete the case study for the Chern insulator phase here.
Notice that in the typical cases given above, it is always the χ = - band that has the -e^2/h Hall conductivity while the χ = + band is trivial with zero Hall contribution,
i.e., it is a 1 + 0 combination, with the sign of the Hall conductivity determined by the polarization direction of the Zeeman field, as we illustrate further below.
We now generalize the above picture of the Chern insulator phase in a TI film to an arbitrary weak Zeeman configuration that varies layer by layer.
According to Eq. (<ref>), the non-trivial condition is satisfied whenever |I_S| > |I_A|, i.e., the symmetric Zeeman distribution overwhelms the asymmetric configuration,
and especially there exists a χ for which it holds that
-χ I_S(0) > |I_A(0)|,
and correspondingly we have
C_χ = -χ, C_χ̅ = 0,
with χ̅ = -χ identified.
This tells us that while one of the two gapped surface Dirac fermions becomes topologically non-trivial, carrying a non-vanishing unit Chern index, the other gapped cone is driven into a topologically trivial band.
In total, the system then owns a unit Chern number and quantized Hall conductivity.
Meanwhile, by the definition of I_S in Eq. (<ref>), one deduces that when I_S(0) > 0, corresponding to a general z-up V_S configuration, it is χ = - that satisfies the condition, and vice versa, which allows us to write
C_- = 1, C_+ = 0, for I_S(0) > 0,
C_+ = -1, C_- = 0, for I_S(0) < 0,
with I_S(0) contributed mainly from the surfaces.
Given the gapless nature of the surface states, there is indeed no threshold on the Zeeman strength for realizing the Chern insulator, as long as Eq. (<ref>) is satisfied.
We have seen that for a topological-insulator-based Chern insulator there is always one trivially gapped Dirac cone and one with unit Chern number, and a natural question emerges: which cone is the non-trivial one?
In the symmetric case, the gaps of the two Dirac fermions are the same, and we have to rely on the χ-labelled mirror symmetry together with the magnetization direction to decide which cone is non-trivial.
However, for the slightly asymmetric case, a quick answer can be given: the one with the smaller gap.
To see why, we can consider the gap equation Eq. (<ref>) which can be re-written as
Δ_χ = 2|(-χ I_S(0)) - |I_A(0)||,
we find that for the asymmetric Chern insulator case -χ I_S(0) > |I_A(0)| ≥ 0, and it always holds that
Δ_χ < Δ_χ̅,
then, combined with Eq. (<ref>), we arrive at the conclusion that it is always the cone with the smaller gap that becomes topologically non-trivial, carrying unit Chern number, while the cone with the larger Zeeman gap is simply trivial.
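A quick numerical sanity check of this statement, with hypothetical co-polarized but imbalanced surface values I_S(0) = 0.3, I_A(0) = 0.1:

```python
I_S0, I_A0 = 0.3, 0.1
gap = {chi: 2 * abs(I_S0 + chi * abs(I_A0)) for chi in (+1, -1)}
# chi = -1 satisfies -chi*I_S(0) > |I_A(0)|, hence C_- = 1 by the formula
# above, and it indeed carries the smaller gap:
assert gap[-1] < gap[+1]   # 0.4 < 0.8
```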
§.§.§ Mirror layer Chern number
Notice that there exists a fully symmetric case where V_A = 0, and in this special case a quantity proposed as the mirror layer Chern number can be defined.
Again, the mirror-symmetric Hamiltonian including the Zeeman term can be projected into decoupled mirror-labelled parts,
H_χ = P^M_z_χ H, P^M_z_χ = 1 + iχ M_z/2,
as in Eq. (<ref>), with M_z the represented mirror operator, whose off-diagonal elements are recognized to be U_z, which relates quantities at ± l_z.
Due to the film geometry, it is natural to introduce the so-called layer Hall conductivity<cit.> by considering layer-dependent eigenstates
σ_H(l) = e^2/h/π∑_E_n < μ < E_m∑_l'∫d^2 k v̅_x^nm(l) v̅_y^mn(l')/(E_n - E_m)^2,
where in the usual case the matrix element of the velocity operator is v̅_i^mn(l) = ⟨m(l) | ∂_k_i H | n(l)⟩ with only diagonal elements, which, however, fails for the mirror-projected Hamiltonian.
The key observation lies in the fact that under projection ∂_k_i H_χ contains not only diagonal elements but also an off-diagonal part, which induces
an additional non-local transition contribution from exactly mirror-symmetrized layers.
Working the effect out, one obtains the mirror layer Hall conductivity
σ^χ_H(l) = e^2/h/π[∑_E_n^χ < μ < E_m^χ∑_l'∫d^2 k v̅_χ,k_x^nm(l) v̅_χ,k_y^mn(l') /(E_n^χ - E_m^χ)^2],
with
v̅_χ,k_i^nm(l) = 1/2⟨n^χ(l)|(v_k_i(l)|m^χ(l)⟩ + U_z v_k_i(-l)|m^χ(- l)⟩),
where the velocity operator that appears is defined through the original Hamiltonian and is assumed to contain only diagonal elements, v_k_i(l) = (∂_k_iH)(l).
Now we turn to our special case.
As stated for the half quantum mirror Hall effect, the bare Hamiltonian without external fields possesses mirror symmetry, and imposing the same symmetry constraint on the Zeeman field distribution leads to the restriction V_z(l_z) = V_z(-l_z), equivalent to the requirement V_A(l_z) = 0.
Thus, a Chern insulator generated by a TI film with a symmetric Zeeman field retains mirror symmetry, and the corresponding σ_H^χ(l_z) can be carried out, as can its layer-cumulated version σ_H,c^χ(l_z) = ∑_l = -(L_z - 1)/2^l_z σ_H^χ(l), as presented in Fig. <ref>.
The off-diagonal elements of the mirror operator read U_z = -iσ_z τ_z for the TI film.
The layer-dependent Hall conductivity offers new insight into the phenomenon.
Treating the system as a whole, its layer-resolved Hall conductivity, as presented in Fig. <ref>(c), becomes non-zero mainly near the top and bottom surfaces, where time-reversal symmetry is broken explicitly by the Zeeman field.
The cumulated Hall conductivity gains approximately half a quantum of Hall conductivity near each of the two surfaces.
On the other hand, as shown in Fig. <ref>(a), (b), when we split the system by mirror symmetry, the layer-resolved mirror Hall conductivity shows a similar top and bottom distribution to the whole system, but with only half the amplitude owing to the mirror splitting,
while the Hall conductivity distribution around the mirror plane shows opposite-sign peaks inherited from the time-reversal-unbroken bulk property, as in the half quantum mirror Hall effect.
Once the Hall conductivity contributions are added layer by layer, we immediately see the tri-section configuration:
for the non-trivial C_- = 1 part, there exist two Hall plateaus separating the surface and the bulk; following the top-middle-bottom section cut, we see a contribution rather close to (-1/4)—(-1/2)—(-1/4) from each section. For the trivial C_+ = 0 part, the section separation is less apparent, and we only roughly write (-c/4)—(c/2)—(-c/4), with c approximately one, to represent the observed distribution.
§.§ Axion insulator: an antisymmetric magnetic structure
In the special 3 + 1 space-time dimension, Maxwell electrodynamics is allowed to be decorated with an extra θ term, which generates the axion electrodynamics<cit.> of
a space-time-dependent θ axion field coupled to the ordinary electromagnetic field.
On a practical level, based on the picture of the surface Hall effect<cit.> and the analogous mathematical structure of the Hall current and the magnetization current,
the topological field theory<cit.> was generalized and proposed, where a θ term is introduced to describe the magnetoelectric effect<cit.>
in a topological insulator medium, with the axion field forced to take the magnitude π<cit.> by symmetry and topological requirements.
Realistically, an antiferromagnetic TI represents an example of the axion insulator<cit.>.
The axion field, proportional to the space-time volume integral of the field product E·B, or equivalently the Chern-Simons form<cit.>, is odd under time reversal/inversion.
In a system with such a symmetry, the θ field matters only through its absolute value and is defined only modulo 2π, which is essential for its π magnitude<cit.>.
The antiferromagnetic TI certainly breaks these two symmetries; however, as a 3D system, its θ quantization is protected by an effective time-reversal symmetry combining time reversal and translation<cit.>.
The magnetic configuration in a TI film closest to the proposed axion insulator is the one in Fig. <ref>, which shows a zero Hall plateau accompanied by a non-vanishing longitudinal conductance as an experimental signature<cit.>.
Here, based on the effective mass picture, we show that the two gapped surface Dirac cones are both trivial once the high-energy parts are involved.
Now the fully antisymmetric magnetic configuration leads to I_S = 0 for all k, and the only remaining Zeeman quantity is I_A, as shown in Fig. <ref>(b);
upon the weak Zeeman approximation, the two effective masses then become, according to Eq. (<ref>),
m̃_χ (k) = χ√(m^2 (k) + I_A^2 (k)),
which show no sign reversal over the whole Brillouin zone for either χ and are thus trivial.
Numerical results for the Hall conductances related to the two masses are shown in Fig. <ref>(d), where the two Hall conductances cancel each other exactly at any chemical potential.
In particular, the zero plateaus, corresponding to a chemical potential lying inside the Zeeman gap, reveal that both bands are trivial with zero Chern number.
We can also generalize this case.
Generally, for the axion insulator we need |I_S(0)| < |I_A(0)|, i.e., the asymmetric Zeeman distribution overwhelms the symmetric configuration at the surfaces; then from Eq. (<ref>) we have
C_+ = C_- = 0,
which in fact leads to a trivially insulating phase from the viewpoint of the effective 2D model.
The phase is termed the axion insulator (AI) phase, since the totally asymmetric magnetic polarization leads to, if one switches to a surface-state representation, a sign difference between the low-energy masses of the top and bottom surface states, which gives rise to
non-vanishing Berry curvature at low energy and thus a surface Hall contribution, with opposite signs for the two surfaces.
However, the Chern number, as we have shown, is zero for each complete surface band, which reveals an overall cancellation of the transverse transport signals to linear order, and the Hall conductivity contributed by the gapped surface states is not protected to be half-quantized.
Furthermore, counting on the zero Chern number of each individual band, the absence of chiral edge states for an x-y-opened TI film stands firm, and the measured non-vanishing longitudinal conductance has to be induced by the side-surface states of the topological insulator;
the signal becomes non-zero only when the chemical potential is fine-tuned to avoid falling in the finite-size gap ∼ λ_∥^2/L_z^2 of the side surface.
§.§ MnBi_2Te_4 film: even and odd number of magnetic layers
The first intrinsic antiferromagnetic topological insulator<cit.>, MnBi_2Te_4 (Te-Bi-Te-Mn-Te-Bi-Te)<cit.>, is composed of septuple
layers (SLs), with out-of-plane intralayer ferromagnetism and interlayer antiferromagnetism, known as the A-type AFM state.
It is predicted and shown that with an odd or even number of SLs, the material exhibits the quantum anomalous Hall effect<cit.> or the axion insulating phase<cit.>, respectively.
Here, based on the lowest four-band model and the Chern and axion insulator pictures discussed above, we can explain these two phenomena in a simple and elegant way.
The combination of the layer-number-parity-determined (anti-)symmetric Zeeman distribution and the localized nature of the surface states leads to two qualitatively distinct physical pictures.
As revealed in the schematic diagram Fig. <ref>, when the layer number L_z is odd, the Zeeman distribution is symmetric with parallel polarization of the outermost top and bottom Zeeman field directions, and vice versa.
Based on the symmetry analysis, two cases are identified.
§.§.§ Odd layer: Chern insulator
In this case,
I_S > 0, I_A = 0, L_z mod 2 = 1,
with the maximum of I_S centered around Γ as shown in Fig. <ref>(a); its sign is controlled by the Zeeman field direction of the outermost layers, given that the low-energy states around Γ are localized near the two surfaces. I_S almost vanishes at large k, since the high-energy states emerge into the bulk and distribute diffusively, which leads to a cancellation of the I_S integral owing to the interlayer antiferromagnetism.
The discussion above classifies odd-SL MnBi_2Te_4 films into the Chern insulator phase, as now m̃_χ = I_S + χ|m| following Eq. (<ref>), with sgn(m̃_χ(Γ)) = sgn(I_S) > 0, sgn(m̃_χ(M)) = χ,
so m̃_- changes sign between Γ and M, which gives rise to a unit Chern number, while m̃_+ is trivially gapped.
In total, odd-layer MnBi_2Te_4 stands as a Chern insulator with a unit Hall plateau, as revealed in Fig. <ref>(c). The relatively narrow quantized Hall plateau of the quantum anomalous Hall phase is due to the second-outermost-layer Zeeman field, whose polarization is inverted relative to the outermost field by the interlayer antiferromagnetism;
it thus weakens the I_S integral at the Γ point, whose amplitude sets the band gap, which measures the width of the quantized plateau as the chemical potential shifts.
§.§.§ Even layer: axion insulator
In this case,
I_S = 0, I_A > 0, L_z mod 2 = 0,
with the maximum of I_A centered around Γ as shown in Fig. <ref>(b),
which classifies even-SL MnBi_2Te_4 films into the axion insulator phase, as now m̃_χ = χ√(m^2 + I_A^2) following Eq. (<ref>), with sgn(m̃_χ(Γ)) = sgn(m̃_χ(M)) = χ,
and both bands are trivial since neither changes sign.
In total, even-layer MnBi_2Te_4 shows the zero Hall plateau revealed in Fig. <ref>(d).
§.§ Half-quantized anomalous Hall effect: a semi-magnetic film
From a model point of view, one may search for a phase at the domain wall separating the axion insulator, |I_A(0)| > |I_S(0)|, and the Chern insulator, |I_A(0)| < |I_S(0)|; this leads to the celebrated half-quantized anomalous Hall phase<cit.>, with the condition |I_S| = |I_A| inside the parity-invariant regime.
Configurationally, this corresponds to a semi-magnetic TI with a Zeeman field applied on only one side, as illustrated in Fig. <ref>.
The corresponding numerical results are presented in Fig. <ref>.
Another motivation for searching for such a phase lies deep in the lattice realization of a single Dirac fermion, which serves as a basis for lattice gauge theory<cit.>.
The Nielsen-Ninomiya theorem<cit.>, however, imposes strong constraints on this realization.
Numerous approaches have been proposed, like the Wilson fermion<cit.>, the SLAC fermion<cit.>, the Tan fermion<cit.>, etc.
These realizations either break one or more conditions required by the fermion-doubling theorem, such as symmetry or locality, or evade physical requirements like the existence of the first-order derivative of the wavefunction and a finite bandwidth on the lattice.
In this context, by introducing magnetism to gap out the surface states of one Dirac cone, the remaining gapless Dirac cone, as depicted in Fig. <ref>(a), essentially serves as one lattice realization of a single Dirac fermion.
As stated, the gapless Dirac cone on the lattice has to violate one or more conditions required by the fermion-doubling theorem, and here it is the 2D parity symmetry together with the locality that are broken.
To avoid the doubling caused by the periodicity of the Brillouin zone, the mass term of this gapless Dirac fermion has to contain a non-vanishing bulk-like high-energy part, as captured by Eq. (<ref>), which breaks the parity symmetry explicitly, while the vanishing low-energy mass preserves the symmetry.
Such a low-energy symmetry-preserving but high-energy symmetry-breaking term shares similarities with the `quantum anomaly'<cit.> of field theory, specifically the parity anomaly in this case.
However, the gapless Dirac fermion appearing here manifests itself as a regularized, complete condensed matter system with explicit symmetry breaking at high energy, which should be distinguished from the spontaneous symmetry breaking case within the frame of the quantum anomaly.
The locality principle is violated by the massless-to-massive transition.
The gapless Dirac fermion, identified as the band with gapless surface states, contributes a half-quantized Hall conductance.
From Fig. <ref>(d), the χ = + band is trivial with a zero-Hall plateau inside the Zeeman gap, i.e., the Zeeman-gapped band is trivial, while the χ = - band contains a relatively large Hall plateau quantized to -e^2/2h, corresponding to the Hall conductance contributed by the high-energy part of the gapless Dirac band<cit.>.
To explain this behavior, it is important to note that we now have I ≡ I_S = I_A > 0 around the Dirac point, as revealed in Fig. <ref>(b) (valid in the parity-invariant regime bounded by k_c), and the effective masses become, according to Eq. (<ref>),
m̃_χ = I(k) + χ√(m^2(k) + I^2 (k)),
from which we see that m̃_+ > 0 holds for any k and is trivial, while
m̃_- =
0, k < k_c,
I - √(m^2 + I^2) ∼ -|m(k)|, k > k_c,
which is non-trivial and offers a half-quantized Hall conductance within the regime k < k_c, as read from Eq. (<ref>).
To realize this phase in general, we need |I_S(k)| = |I_A(k)| for k < k_c.
In this situation, one specifies the χ satisfying
- χ I_S(k < k_c) = |I_A(k < k_c)|,
which gives, according to Eq. (<ref>), the gaps Δ_χ = 0 and Δ_χ̅ = 4|I_A(0)|, i.e., one gapless band plus one gapped band.
For the gapped band, the Chern number description still works and gives
C_χ̅ = -χ̅Θ(-2|I_A(0)|) = 0,
while for the gapless band we cannot in principle use the Chern number to define its topology, since it describes a metallic phase with a non-vanishing Fermi surface.
Nevertheless, the effective masses now have the property
m̃_χ(k < k_c) = 0,
m̃_χ̅(k < k_c) = 2 χ̅ |I_A(k)|,
and combined with the high-energy condition m̃_χ(π,π) ∼ χ|m(π,π)|,
one obtains
σ_H^χ = χ e^2/2h,
σ_H^χ̅ = 0,
for |μ| < 2|I_A(0)|,
in line with Eq. (<ref>), i.e., the gapless Dirac cone provides a half-quantized Hall conductance, accompanied by a trivial gapped cone.
This phenomenon is known as the half-quantized anomalous Hall effect<cit.>.
It is important to note that the chemical potential should lie within the gap of the trivial χ̅ band to avoid non-quantized contributions from it.
Additionally, the weak Zeeman field presumption ensures that the Zeeman gap, smaller than the bulk gap, does not exceed the energy limit of the parity-invariant regime.
The metallic nature of the non-trivial gapless Dirac fermion indicates that the system lies inside a metallic topological phase.
Notice that the non-trivial gapless band requirement, Eq. (<ref>), gives χ = -sgn(I) = -sgn(V) with I = I_S(0) and V = V_S^top, and we can write down the asymptotic Hamiltonian for this band as
H_half ∼ λ_∥ (sin (k_x a)σ_x + sin (k_y b)σ_y) + sgn(V) m(k) σ_z,
accounting for the fact that m(k) ≤ 0.
This effective Hamiltonian yields the half-quantized Hall conductance -sgn(V) e^2/2h, which does not depend on whether the magnetism sits at the top or the bottom of the TI film, but only on its polarization direction.
Under an external magnetic field, such a single gapless Dirac fermion steps into the quantum Hall regime<cit.> and exhibits quantized Hall conductance whenever an integer number of Landau levels is fully filled<cit.>.
In particular, the `anomaly' contribution manifests itself to compensate the half quantization contributed by the lowest Landau level, so as to keep the Chern invariant of this gapped Landau-level system an integer.
§.§ Phase diagram
To appreciate the phases mentioned above, especially the phase transitions among them, we return to the effective model and assume that the immersion depth of the top and bottom Zeeman fields, if present, is rather longer than the characteristic exponential decay length of the surface states while much smaller than the film thickness, with uniform strength for the top or bottom field.
Then we can adopt the substitution
I_S/A → V_S/A = V_S/A^top.
The effective model then reads
H̃_χ = λ_∥(sin(k_x a)σ_x + sin(k_y a)σ_y) + m̃_χ(k) σ_z,
m̃_χ (k) = V_S + χ√(m^2 (k) + V_A^2),
m(k) = Θ(-m_0(k))m_0(k),
m_0(k) = m_0 - 4t_∥(sin^2k_x a/2 + sin^2 k_y b/2),
from which one reads the Hall conductance (in the Zeeman gap or the parity-invariant regime) as
σ_H^χ = e^2/h1/2[χ - sgn(V_S + χ |V_A|)].
Now let us introduce the top Zeeman strength V_z^top = V_0 and the bottom Zeeman strength V_z^bottom = x V_0, described by the collaboration
between V_z^top and a phenomenological parameter x characterizing their relative strength.
Then accordingly we have
V_S = V_0 (1 + x)/2,
V_A = V_0 (1 - x)/2,
which gives the Hall conductance
σ_H^χ = e^2/h 1/2[χ - sgn(V_0) sgn((1 + x)/2 + χ sgn(V_0) |1 - x|/2)],
whose dependence on the parameters (x,V_0) is presented in Fig. <ref> as a phase diagram emphasizing the role played by the relative strength x.
Notice that we have defined sgn(0) = 0 here, corresponding to the realistic physical phenomenon at V_0 = 0.
From the diagram, except for the V_0 = 0 line, which represents a pure topological insulator film with the half quantum mirror Hall effect, it is always the x ≥ 0 side that gives rise to phases with non-vanishing Hall conductance, belonging to either the Chern insulator or the half-quantized anomalous Hall metal phase,
while the x < 0 side, termed the axion insulator phase, always contains two trivially gapped Dirac cones/fermions.
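The whole (x, V_0) diagram can be tabulated directly from the conductance formula above; the following minimal sketch (field values illustrative only) reproduces the four phases just listed:

```python
import numpy as np

def sigma_chi(x, V0, chi):
    """sigma_H^chi in units of e^2/h, with the sgn(0) = 0 convention."""
    VS, VA = V0 * (1 + x) / 2, V0 * (1 - x) / 2
    return 0.5 * (chi - np.sign(VS + chi * abs(VA)))

def sigma_total(x, V0):
    return sum(sigma_chi(x, V0, chi) for chi in (+1, -1))

print(sigma_total( 1.0, 0.5))   # -1.0: Chern insulator (parallel surfaces)
print(sigma_total(-1.0, 0.5))   #  0.0: axion insulator (antiparallel surfaces)
print(sigma_total( 0.0, 0.5))   # -0.5: half-quantized anomalous Hall (one side)
print(sigma_total( 1.0, 0.0))   #  0.0: half quantum mirror Hall line (V_0 = 0)
```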
Focusing on the phase transitions, we observe that a phase characterized by an anomalous half-quantized index always emerges at the integer-index phase transitions.
This phenomenon echoes the transitions observed in integer quantum Hall systems<cit.>, where the renormalization group flow diagram exhibits a generic fixed point with half-quantized Hall conductance and finite longitudinal conductance, suggesting a phase transition in 2D from a field-theoretical point of view.
However, the physics here should differ, as the robustness of the gapless surface state is protected by the bulk and the corresponding surface time-reversal symmetry, an intrinsic feature of 3D strong topological insulators<cit.>.
Put differently, the additional dimension of our system exhibits robust topological/geometric effects,
making it plausible that the phases characterized by half-integers here are more likely symmetry-protected metallic topological phases, with this protection occurring only in a finite regime of the whole Brillouin zone.
In particular, the half QAHE here is protected by a parity-invariant regime, and differs from a critical quantum Hall transition phase without protection from any non-conformal symmetries.
In the phase diagram we draw, the half quantum mirror Hall line is crossed when changing between two Chern insulator phases with opposite Chern numbers, since such a transition relies on reversing the Zeeman polarization direction and thus crosses V_0 = 0, where the half quantum mirror Hall effect occurs.
A similar thing happens for the transition between axion insulator phases differing by Zeeman direction.
On the other hand, the lines representing the half QAHE are crossed when stepping between the Chern insulator and axion insulator phases, with the sign of the Hall conductance determined by the Zeeman direction.
§ TOPOLOGICAL PHASES WITH STRONG FIELD
A more extensive and complex regime exists beyond the weak Zeeman field approximation, where the criterion tells us that
the topological phases appearing here cannot simply be described within the n = 1 frame.
In this scenario, we step into the strong interaction regime, where the appearance of n ≥ 2 cones is unavoidable.
Surprisingly, the inter-Dirac-cone interaction can sometimes play the ultimate role in deciding the topological property of the system.
It is in such situations that our effective mass picture from Eq. (<ref>) and Eq. (<ref>)
serves as the ultimate criterion for the topological property of the system.
§.§ Metallic quantized anomalous Hall effect: a film with a magnetic
sandwich structure
Another novel metallic topological phase bearing a pair of gapless Dirac fermions has recently been proposed<cit.>, which shows a quantized Hall conductivity of one unit originating from two metallic bands, each contributing one half quantum.
To further enhance our understanding of magnetic topological phases, the key findings related to this phase are summarized below.
The schematic diagram is shown in Fig. <ref>.
We set the total layer number L_z = 22, which is even, and the z-symmetric site positions read
l_z = ±1/2,⋯,±(L_z - 1)/2.
Accordingly, the z-symmetric Zeeman field in the magnetically doped layers at the middle of the TI film is set as
V_z(l_z) = α t_⊥ for l_z = ± 1/2, ⋯, ± (m_z - 1)/2, and 0 otherwise,
with magnetic layer number m_z = 6.
Since V_S(l_z) = V_z(l_z) and V_A(l_z) = 0 by z-symmetry, the projection contains only the 𝐈_S term, proportional to α.
We then bring α to the front explicitly as
𝐈_S(α,k) τ_0 σ_z ↦ α 𝐈_S(k) τ_0 σ_z,
where 𝐈_S(α = 1,k) ↦ 𝐈_S(k) is a re-definition.
The metallic feature and the quantized Hall conductivity are revealed in Fig. <ref>.
The band structure of the film is shown in the presence of strong enough magnetism (α=0.9), with a pair of massless Dirac fermions.
The pairing nature is reflected by the double degeneracy of the band dispersion near the Γ point, as labelled by the red and yellow lines.
The unbroken surface-state picture is possible due to the localized nature of the surface states inside the bulk gap, which are not affected by the far-away magnetism in the middle of the film.
Meanwhile, a quantized Hall conductivity is observed when the chemical potential lies inside both the bulk and the magnetic gap.
As we shall see, the quantization essentially comes from the two gapless Dirac fermions, each sharing a half-quantized Hall conductivity with the same sign,
based on which we further identify that the effect is not only superficially metallic but originates from such metallic bands.
It is in this sense that we term this new phase the `metallic quantized anomalous Hall effect' (metallic QAHE), indicating that it differs from
the conventional QAHE, aka the Chern insulator, which is an insulating phase.
Attributed to the mass-exchange mechanism within the effective mass picture presented in Section <ref>, such a topological phase transition with increasing middle Zeeman strength α can be explained.
Absorbing the α-dependent Zeeman term into the one-dimensional Hamiltonian separated from the TI film leads to an α-dependent 1D Hamiltonian H_1d(α), with H_1d(α = 0) returning to the 1D Hamiltonian extracted from the TI film and solved exactly before.
Projecting H_1d(α) onto the solutions of H_1d(0) leads to (⊕_n=1^L_z m_n τ_z + α 𝐈_S(𝐤) τ_0) σ_z,
and further diagonalizing this provides a bijection which maps the
projected Hamiltonian into the mass term ⊕_n = 1,χ = ±^L_z m̃_n,χ(𝐤,α) σ_z.
Notice that both σ_z and τ_z are good quantum numbers here, as spin and mirror indices (χ = ±), respectively.
Confining to the subspace with σ_z = +, we can then track
the evolution and interaction of the mass terms m̃_n,χ between the n=1 and n=2 blocks with increasing α for given χ.
As shown in Fig. <ref>, m̃_n,+ (n = 1,2) maintain their shapes with increasing α, while m̃_n,- (n = 1,2) effectively exchange their high-energy parts through the low-energy mass exchange, which leads to a high-energy mass sign change of the gapless Dirac cone and alters its Hall conductivity from e^2/2h to -e^2/2h when the Fermi surface lies inside the parity-invariant regime.
Combined with the unaltered -e^2/2h from m̃_1,+, a topological phase transition then happens in total, driving the system from zero to quantized Hall conductivity, with the Hall contribution coming from two metallic bands, which makes the system a metallic topological phase.
We can identify
α_c ≈ 0.74
in this case for the 0 → -1 plateau transition.
Notice that although we have explicitly exploited the z-mirror symmetry to separate our effective masses into two groups,
this symmetry consideration is not necessary here, and the metallic QAHE is not protected by the symmetry.
For example, from Eq. (<ref>) and Eq. (<ref>) we see clearly that a general Zeeman field configuration still generates 2L_z independent Dirac masses,
and if we place a strong enough Zeeman field in the middle of the film, deviating from the symmetric case, we still see the effect with a unit Hall plateau.
The key difference between our metallic QAHE and the conventional QAHE, or equivalently the Chern insulator, lies in the unconventional bulk-boundary correspondence.
As discussed in <cit.>, the half-quantized Hall conductivity bears no chiral edge states; the corresponding surface physics lies instead in the existence of a chiral current, which is really a bulk-state contribution and decays algebraically along the metallic surface, starting from the middle magnetic zone where time-reversal symmetry is broken most severely.
§.§.§ A qualitative model with n = 1,2
A qualitative understanding of the phenomenon can be obtained within a cut-off approximation based on the n = 1,2 blocks.
In the mass-exchange picture above, we used the fully diagonalized m̃_1,2 to illustrate the underlying physics, while the picture with only n = 1 involved, based on the weak Zeeman field approximation, breaks down.
This is essentially because the weak-field approximation relies heavily on the effect the magnetism has upon the surface states, which is not the case here, since the magnetism in the middle does not directly affect the surface states;
any physical effects must be conducted through the bulk states, whose wavefunctions have maximal overlap with the magnetic region.
Here, the metallic QAHE is just the first non-trivial case of this kind, where the inter-n-block interaction conducted through magnetism is decisive, and luckily we
have found a way to directly observe the overall effect by a second diagonalization, yielding the effective masses m̃_n.
While the process and the results are straightforward and conclusive, it would be more satisfying if a simplified model existed that grasps the core of the physics, even qualitatively.
Interestingly, a model incorporating the n = 1,2 blocks plays exactly this role.
For simplicity, we consider the symmetric Zeeman field in the middle; preserving n = 1,2, the mass terms read
M(α) = [ m_1 0; 0 m_2 ]τ_z + α[ I_S^11 I_S^12; I_S^21 I_S^22 ]τ_0,
with k-dependence in the m_n and I_S terms.
The Hamiltonian for n = 1,2 reads
H^n = 1,2(k) = λ_∥ρ_0 τ_0 (sin (k_x a)σ_x + sin (k_y b)σ_y) + M σ_z,
with ρ another pseudo-spin degree of freedom for the two blocks.
Following the effective mass treatment, we further block-diagonalize H^1,2 into 2 × 2 sub-blocks.
Notice that the projected mirror operator τ_z in M again serves as a good quantum number due to the chosen symmetric Zeeman distribution;
then a split M = ⊕_χ M_χ (χ = ±) can be made, and likewise for the Hamiltonian, H^1,2 = ⊕_χ H^1,2_χ,
where
H_χ^1,2 = λ_∥ρ_0 (sin (k_x a)σ_x + sin (k_y b)σ_y) + α Re(I_S^12)(k)ρ_xσ_z - α Im(I_S^12)(k)ρ_yσ_z + E_χ(k)ρ_0σ_z + Δ_χ(k) ρ_zσ_z,
with
E_χ = [χ (m_1 + m_2) + α(I_S^11 + I_S^22)]/2
Δ_χ = [χ (m_1 - m_2) + α(I_S^11 - I_S^22)]/2.
Clearly, diagonalization in ρ-space is accessible without altering the linear part, which leads to
H̃_χζ^1,2 = λ_∥ (sin (k_x a)σ_x + sin (k_y b)σ_y) + m̃_χ,ζσ_z,
where
m̃_χ,ζ(k) = E_χ(k) + ζΛ_χ(k), χ,ζ = ±,
with Λ_χ = √(Δ_χ^2 + α^2 |I_S^12|^2) defined.
This is reached by a unitary transformation U_χ = U_2^χ U_1^χ for each χ, where U_2^χ = e^iρ_x θ_2^χ, U_1^χ = e^iρ_y θ_1^χ, with the definitions tan2θ_1^χ = α Re(I_S^12)/Δ_χ, tan2θ_2^χ = α Im(I_S^12)/δ_χ, δ_χ = √(α^2 Re(I_S^12)^2 + Δ_χ^2).
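The mass-exchange transition can be read off directly from this effective mass at the Γ point. The sketch below evaluates m̃_χ,ζ(0) and the resulting Hall plateau, using the Hall conductivity formula derived just below and the hypothetical Γ-point inputs m_1(0) = 0, I_S^11(0) = I_S^12(0) = 0, m_2(0) ≈ m_0 = 0.7 and I_S^22(0) ≈ t_⊥ = 1; it anticipates the case analysis that follows.

```python
import numpy as np

def m_eff0(m2, I22, alpha, chi, zeta):
    """m~_{chi,zeta}(0) = E_chi(0) + zeta*Lambda_chi(0), with m_1(0) = 0 and
    I_S^11(0) = I_S^12(0) = 0 so that Lambda_chi(0) = |E_chi(0)|."""
    E = (chi * m2 + alpha * I22) / 2
    return E + zeta * abs(E)

m2, I22 = 0.7, 1.0            # m_2(0) ~ m_0, I_S^22(0) ~ t_perp (assumed)
for alpha in (0.5, 0.9):      # below / above alpha_c ~ m_0 / t_perp
    # sigma^{chi,zeta} = (1/2)[zeta - sgn(m~_{chi,zeta}(0))] in e^2/h,
    # using m~_{chi,zeta}(M) ~ zeta*m_2(M) at high energy
    sigma = sum(0.5 * (zeta - np.sign(m_eff0(m2, I22, alpha, chi, zeta)))
                for chi in (+1, -1) for zeta in (+1, -1))
    print(alpha, sigma)       # 0.5 -> 0.0 ; 0.9 -> -1.0 (the 0 -> -1 plateau)
```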
Now we choose the case α > 0, so that α I_S^nn > 0, to illustrate the physics.
A topological phase transition happens when α I_S^22 > m_2(0) > 0 (for m_2(0) > 0 see Fig. <ref>), with the help of I_S^12.
We now identify the Hall conductivity for each sub-block as
σ^χζ_H = e^2/2 h[sgn(m̃_χ,ζ(M)) - sgn(m̃_χ,ζ(k^χ,ζ_F)) ],
with m̃_χ,ζ(k^χ,ζ_F) recognized as m̃_χ,ζ at the Fermi surface of the band; for an insulating band with the Fermi level inside the gap, it is m̃_χ,ζ(0).
For unification and simplicity, we always assume the Fermi level to lie inside the insulating gap, or inside the parity-invariant regime of a gapless band near the Γ point, so that we can always take k_F = 0; those worried about the singular gapless Dirac point in the metallic case can always take the unambiguous second limit of Section <ref>.
Then by treating
m_1(0) = 0, m_2(0) > 0,
-m_1(M) ≈ m_2(M) ≫ α |I_S(M)| > 0,
I_S^11(0) = I_S^12(0) = 0, I_S^22(0) > 0,
I_A = 0,
where the quantities I_S/A can be read from Fig. <ref>, we can write
m̃_χ,ζ(0) = [χ m_2(0) + α I_S^22(0)]/2 + ζ|χ m_2(0) + α I_S^22(0)|/2,
m̃_χ,ζ(M) ≈ ζ m_2(M).
Clearly, m̃_χ,ζ(M) are almost unchanged, since the Zeeman field is not that strong here, and the Hall conductivity formula reduces to
σ^χζ_H = e^2/2 h[ζ - sgn(m̃_χ,ζ(0)) ].
For m̃_χ,ζ(0), two cases should be distinguished.
When α I_S^22(0) < m_2(0),
m̃_++(0) = m_2(0) + α I_S^22(0) > 0
m̃_+-(0) = 0
m̃_-+(0) = 0
m̃_–(0) = -m_2(0) + α I_S^22(0) < 0,
and we obtain
σ^++_H = 0
σ^+-_H = -e^2/2h
σ^-+_H = e^2/2h
σ^–_H = 0,
with total Hall conductivity zero.
Interestingly, in this case the symmetric magnetism in the middle does not even quantitatively change the half quantum mirror Hall phase.
On the other hand, for α I_S^22(0) > m_2(0),
m̃_++(0) = m_2(0) + α I_S^22(0) > 0
m̃_+-(0) = 0
m̃_-+(0) = -m_2(0) + α I_S^22(0) > 0
m̃_–(0) = 0,
and we obtain
σ^++_H = 0
σ^+-_H = -e^2/2h
σ^-+_H = 0
σ^–_H = -e^2/2h,
with a total Hall conductivity of one unit of e^2/h.
This unit is fundamentally different from the C = 1 Chern insulator case, since here 1 = 1/2 + 1/2,
with non-vanishing contributions coming from two metallic bands describing gapless Dirac fermions.
It is recognized that the phase transition happens only within the χ = - sub-blocks, where the ζ = ± Dirac fermions exchange their low-energy masses when crossing the qualitative transition point I_S^22(0) = m_2(0);
treating approximately I_S^22 ≈ α t_⊥, m_2(0) ≈ m_0, we see the qualitative critical point
α_c^quali = m_0/t_⊥ ≈ 0.7,
which is close to the numerical result. I_S^12, as the inter-n Dirac fermion coupling, plays an important role here.
Without this term, the n = 1 and n = 2 Dirac fermions would be totally decoupled in Eq. (<ref>),
making the mass exchange between the ζ-Dirac fermions with χ = - impossible.
With this term, which serves as an avoided-crossing source between the ζ-Dirac fermion masses and attains its maximum just after the surface-to-bulk transition of the n = 1 gapless Dirac fermions,
we see that the crossing behavior of m̃_-,ζ at Δ_-(k_cross) = 0 is prohibited by a non-zero I_S^12(k_cross), and the two bands are forced to exchange masses before and after k_cross.
This is possible since Δ_-(k) = 0 requires I_S^11(k) > I_S^22(k), which can happen only when the n = 1 surface states emerge into the bulk at k > k_c, where I_S^12(k) is also non-zero.
§.§.§ Stronger field in the middle
Encouraged by the series of mass-exchange diagrams, a natural question to ask is what happens when we increase the Zeeman strength in the middle further.
A first answer is that we meet a system with higher Hall conductance.
For instance, after increasing the Zeeman field strength to α = 1.2, we see from Fig. <ref>(a) that the Hall conductivity of the system becomes -2e^2/h.
For the reason behind this, we again look at the effective masses presented in Fig. <ref>(b), where a pair of gapless Dirac cones and one non-trivial gapped Dirac cone with a mass sign reversal emerge;
essentially, from Eq. (<ref>) and Eq. (<ref>), they contribute synergistically to the Hall conductivity, i.e., 1/2 + 1/2 + 1 = 2 units of -e^2/h.
A careful trace of the effective mass evolution with increasing α reveals that this time the n = 3 band of χ = + closes and reopens its gap, during which an avoided crossing happens and forces it to exchange its low-energy mass with the n = 1 band of χ = +,
which leads to the result above.
§.§ Higher Chern Number Insulator
Based on magnetic TI films, several proposals to realize higher Chern numbers have been made<cit.>,
among which one theoretical proposal<cit.> utilizes one-by-one sub-band inversion to illustrate the increasing Chern number.
Here the physics behind this is brought out more rigorously with a similar picture.
We first present an example, shown in Fig. <ref>(a), of the Chern numbers of a uniformly magnetized TI film with total layer number L_z = 8.
The algorithm follows <cit.>.
With increasing uniform Zeeman strength V, the Chern number passes through three stages:
for a relatively weak Zeeman field, the Chern number plateau increases step by step from 0 to 8, as revealed by the red dots;
for a Zeeman field of medium strength, the Chern number plateau drops from 8 to -8 in steps of 2, illustrated by the blue dots;
finally, for a relatively strong Zeeman field, the Chern number plateau again increases from -8 to 0 one by one, shown by the purple dots.
Notice that under our parameter choice we have m_1(π,0) ∼ 2 eV and m_1(π,π) ∼ 4.3 eV.
The Hamiltonian Eq. (<ref>) is now best suited to describe the phenomenon, since the uniform Zeeman field makes it exact to preserve the diagonal blocks.
However, due to the largely adjustable magnitude of the Zeeman field, Eq. (<ref>) becomes inapplicable here,
and a more general formula following Eq. (<ref>) reads<cit.>
C_χ = -sgn(m̃_χ(X))/2[sgn(m̃_χ(Γ)) - sgn(m̃_χ(M))],
i.e., it accounts for the mass-sign-change-induced topological phase transition at X = (π,0).
In this case, the χ-Chern number for each n = 1,⋯,L_z is written as
C^n_χ = - sgn(V + χ m_n(X))/2[sgn(V + χ m_n(Γ)) - sgn(V + χ m_n(M))].
In our case, |m_n(Γ)| < |m_n(X)| < |m_n(M)|, and admittedly all bulk bands n ≥ 2 are trivial, meaning that m_n(Γ/X/M) share the same sign. Focusing on one band and increasing V from zero,
we see that when V just crosses |m_n(Γ)|, the band with χ m_n < 0 increases its Chern number from zero to one; increasing V further beyond m_n(X), the corresponding Chern number reverses its sign from 1 to -1;
and finally, when V goes beyond the bandwidth |m_n(M)|, the band returns to its trivial phase with zero Chern number.
Notice that under our assumption V > 0, the band with χ̅ m_n > 0 is always trivial.
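The three-stage staircase follows directly from this per-band formula; in the sketch below the mass ladders m_n(Γ/X/M) are hypothetical positive stand-ins obeying the hierarchy quoted above (with the ∼2 eV and ∼4.3 eV scales of the text), not the actual film eigenvalues.

```python
import numpy as np

def total_chern(V, mG, mX, mM):
    """Sum over n and chi of
    C^n_chi = -sgn(V + chi*m_n(X))/2 * [sgn(V + chi*m_n(G)) - sgn(V + chi*m_n(M))]."""
    return sum(-np.sign(V + chi * x) / 2
               * (np.sign(V + chi * g) - np.sign(V + chi * m))
               for g, x, m in zip(mG, mX, mM) for chi in (+1, -1))

Lz = 8
mG = np.linspace(0.1, 0.8, Lz)   # |m_n(Gamma)| ladder (hypothetical)
mX = np.linspace(2.0, 2.7, Lz)   # |m_n(X)| ~ 2 eV scale
mM = np.linspace(4.3, 5.0, Lz)   # |m_n(M)| ~ 4.3 eV scale
for V in (1.0, 3.0, 6.0):        # the Gamma-, X- and M-inversion regions
    print(V, total_chern(V, mG, mX, mM))   # 8.0, -8.0, 0.0
```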
It is now clear that the sub-band mass-inversion atΓ, XandMpoints are responsible for the change of Chern numbers, or equivalently the anomalous Hall plateaus with quantum units conductance revealed in Fig. <ref>(a).
As presented in Fig. <ref>(b), the masses m_n(k) now share the property that max[m_n(Γ)] < min[m_l(X)] and max[m_n(X)] < min[m_l(M)], as revealed by the green guide lines.
The Chern number evolution can then be divided into three regions of increasing Zeeman field V, labelled in Fig. <ref>(a): the Γ-mass-inversion region, the X(Y)-mass-inversion region and the M-mass-inversion region, with no crossing between distinct regions.
The physics in each region is exactly L_z = 8 copies of the single-band process illustrated above: with increasing V, the Chern number increases one-by-one in the Γ-region each time the Zeeman field V crosses some |m_n(Γ)| and renders the band non-trivial, until it reaches its maximum C_max = L_z = 8;
it then decreases two-by-two in the X-region once V exceeds some |m_n(X)|, where a topological phase transition occurs with both sides non-trivial, until it bottoms out at C_min = L_z - 2L_z = -8; finally, the Chern number returns to zero step-by-step in the M-region once V exceeds some bandwidth |m_n(M)| and renders the corresponding band trivial again.
The inverse process happens for an opposite Zeeman field, with the Chern number reversing its sign.
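As a quick numerical cross-check of this staircase, the minimal sketch below evaluates C^n_χ from the formula above and sums over bands and chiralities. The mass values m_n(Γ), m_n(X), m_n(M) are hypothetical, chosen only to respect the hierarchy stated above; they are not fitted to the model.

```python
import numpy as np

# Hypothetical sub-band masses (eV) for n = 1..8, ordered so that
# max m_n(Gamma) < min m_n(X) and max m_n(X) < min m_n(M).
m_G = np.linspace(0.1, 0.9, 8)
m_X = np.linspace(2.0, 2.8, 8)
m_M = np.linspace(4.3, 5.1, 8)

def chern_total(V):
    """Sum C^n_chi over all sub-bands n and both chiralities chi = +/-."""
    C = 0.0
    for chi in (+1.0, -1.0):
        mG, mX, mM = chi * m_G, chi * m_X, chi * m_M
        C += np.sum(-np.sign(V + mX) / 2 * (np.sign(V + mG) - np.sign(V + mM)))
    return C

# Sample fields in the Gamma-, X- and M-mass-inversion regions.
for V in (0.0, 0.5, 1.0, 2.2, 2.5, 3.0, 4.5, 4.8, 5.5):
    print(f"V = {V:3.1f} eV  ->  C = {chern_total(V):+3.0f}")
```

The printed sequence reproduces the 0 → 8 → -8 → 0 staircase: steps of +1 while V sweeps across the |m_n(Γ)|, drops of -2 across the |m_n(X)|, and steps of +1 back to zero across the |m_n(M)|.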
§.§ Cooperation between middle and surfaces
Similar to the approach of gapping out the surface(s) of a topological insulator film,
we can gap out the surface states in the metallic QAHE with surface magnetism polarized along the z direction.
In this sense we explore the cooperation between magnetism in the middle and at the surface(s).
The surface magnetism is chosen to be weak compared to the smallest gap in the metallic QAHE, and it can thus again be treated as a perturbation.
This is simply because gapping out the gapless surface requires no threshold on the surface magnetic strength.
Based on such a picture, the underlying physics comes from perturbing two gapless Dirac fermions with the same high-energy mass signs in the metallic QAHE,
whose simplified model Hamiltonian reads H_MQAHE = h ⊕ h, with the single Dirac cone Hamiltonian
h(k) = λ_∥ (sin(k_x a)σ_x + sin(k_y b)σ_y) + sgn(V^mid) m̃(k) σ_z,
with m̃(k) = Θ(-m_0(k)) m_0(k) identified.
Considering that in the metallic QAHE the middle Zeeman field does not affect the gapless surface states, the projection of the top and bottom Zeeman fields onto the mirror-symmetric surface states can still be written as I_S(k)τ_0 σ_z - I_A(k) τ_y σ_z.
By approximation, we identify I_S ≡ V_S^top, I_A ≡ V_A^top, so that
the phenomenological mass terms read
sgn(V^mid)m̃(k) τ_0 + V_S^topτ_0 + V_A^topτ_y,
which can be diagonalized without affecting the linear term as
m̃_ζ(k) = sgn(V^mid)m̃(k) + V_S^top + ζ V_A^top,
with ζ = ±.
Attributing to Eq. (<ref>), we have for a gapped Dirac cone with V_S^top + ζ V_A^top ≠ 0,
C^ζ = 1/2[ sgn( V_S^top + ζ V_A^top) + sgn(V^mid) ],
while for a gapless Dirac cone with V_S^top + ζ V_A^top = 0, according to Eq. (<ref>) we have
N^ζ = sgn(V^mid),
and the corresponding Hall conductivity reads σ_H^ζ = -C e^2/h or σ_H^ζ = -N e^2/2h depending on the gapped or gapless nature,
which serves as the starting point for analyzing the phases below.
For instance,
adding a gap-opening z-Zeeman field at both the top and bottom surfaces, parallel to the magnetic polarization of the metallic QAHE system, leads to a C = 2 state, composed of a pair of non-trivial gapped Dirac fermions each carrying a unit Chern number, as represented in Fig. <ref>.
Such a C = 2 state has been observed<cit.> in a similar magnetic structure, with an alternative explanation based on the assumption that the magnetic layers dividing the topological insulator film do not host side surface states; the magnetic insulator-topological insulator multilayer structure then decomposes into individual C = 1 insulators, each of which can be explained by the discussion of the Chern insulator in the weak-Zeeman-field section.
Here instead we assume that the magnetism does not alter the bulk gap m_0 very much, so that the side surface states extend throughout the magnetized zone.
The calculated Hall conductivity for one configuration following this assumption is shown in Fig. <ref>(a), where a C = 2 plateau appears inside the top/bottom Zeeman gap of the surface states.
The system is thus identified as a Chern insulator by the gapped band structure shown in Fig. <ref>(c).
For simplicity, we have chosen a symmetric surface Zeeman distribution with V_A^top = 0.
Now, since V_S^top > 0 and V^mid > 0, the mass sign changes at Γ and M for both surface states, as revealed by the mass configurations in Fig. <ref>(b), and by Eq. (<ref>)
C^+ = C^- = 1,
which leads in total to a C = 2 state.
Now let us switch off V^top, which makes V_S^top = -V_A^top > 0; accordingly we have N^+ = 1, C^- = 1, which corresponds to a system with Hall conductivity 3e^2/2h.
Next we re-add V^top = -V^bottom < 0, which leads to V_S^top = 0, V_A^top > 0, and we see C^+ = 1, C^- = 0, which makes the system a Chern insulator again with unit Chern number.
We then reverse the sign of V^bottom, so that V_S^top < 0, V_A^top = 0, which makes the system trivial with C^+ = C^- = 0.
Finally, we switch off V^top again, and now V_S^top = -V_A^top < 0; accordingly we have N^+ = 1, C^- = 0, which leaves a half-quantized Hall conductivity in the system.
In total, we see that five additional topologically distinct phases exist upon tuning the surface magnetism of the metallic QAHE, with Hall conductivities quantized to 2, 3/2, 1, 1/2 and 0 in quantum units, respectively.
The topological properties of these additional phases can easily be verified by calculating their Hall conductivities, or read off from their effective mass pictures.
The signs of the Hall conductivities are inverted once we reverse the magnetism at both surfaces and in the middle.
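The bookkeeping of these five phases condenses into a few lines of code. The sketch below follows the C^ζ and N^ζ rules above; the field values are placeholders whose only role is to carry the sign patterns of the configurations just described.

```python
import numpy as np

def sigma_H(V_S_top, V_A_top, V_mid):
    """Total Hall conductivity (units of e^2/h) of the two surface Dirac
    cones zeta = +/- on top of the metallic QAHE background."""
    total = 0.0
    for zeta in (+1.0, -1.0):
        m_top = V_S_top + zeta * V_A_top
        if m_top != 0.0:                 # gapped cone: contributes C^zeta
            total += 0.5 * (np.sign(m_top) + np.sign(V_mid))
        else:                            # gapless cone: contributes N^zeta / 2
            total += 0.5 * np.sign(V_mid)
    return total

# The five configurations walked through above, all with V_mid > 0.
cases = [(1.0, 0.0), (0.5, -0.5), (0.0, 1.0), (-1.0, 0.0), (-0.5, 0.5)]
for VS, VA in cases:
    print(f"V_S = {VS:+.1f}, V_A = {VA:+.1f} -> sigma_H = {sigma_H(VS, VA, 1.0):+.2f}")
```

Running it returns +2, +3/2, +1, 0 and +1/2, matching the phase sequence above; flipping V^mid together with the surface fields flips all the signs.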
§ DISCUSSION AND CONCLUSION
It is quite remarkable and surprising that so many topologically distinct phases already emerge in such a relatively simple model of a magnetic topological insulator film.
At the core of the physics, however, the descriptive and predictive power of this framework deserves appraisal.
Although infinitely many explanations of the phenomena admittedly exist, at bottom a few principles, such as symmetry, topology, emergence and conciseness, have almost
fixed the formalism we are willing to adopt in addressing the problem.
For the questions we focus on, particularly the Hall conductance of the different species of Dirac fermions in the system, the properties of a few points in the spectrum already suffice to determine the result uniquely.
To endow these points with physical meaning, we take them to represent low energy and anomaly.
The invariance of the laws of physics then suggests that, once we have grasped these key ingredients,
the complexities of the more intricate components will naturally fall into place.
Below we summarize the key points of our paper and extend them with further discussion.
The local unitary transformation in k-space introduced here, based on the exact solution, unveils the existence of a pair of gapless Dirac fermions and a series of massive gapped Dirac fermions in a 3D topological insulator film when it is viewed effectively as a 2D system.
This comprehensive understanding of the constituents of the TI film is paramount in our discussion.
The Hall conductivities associated with the gapless and gapped Dirac fermions in the TI film are ±e^2/2h and 0, respectively.
This results in a half-quantized topological phase, serving as a metallic partner to the insulating quantum spin Hall effect, namely the half quantum mirror Hall effect in the TI film itself.
The pairing feature of the gapless Dirac fermions in the half quantum mirror Hall effect is summarized in Table <ref>.
It is noteworthy that their existence here is not a consequence of the Nielsen-Ninomiya theorem, since they are two separable fermions over the whole Brillouin zone; rather, it is the mirror symmetry along the open direction of the TI film that requires the doubling into symmetric and antisymmetric partners.
The mass term of the gapless Dirac fermion in our study is a regularized one that can be directly expressed on a lattice. However, this regularization comes at the cost of introducing an explicit parity-symmetry-breaking term away from the Dirac point. As a result, the gapless Dirac fermion remains massless at low energy but becomes massive at high energy.
In this article, a Heaviside Theta function is utilized to capture this mass term, which, when Fourier transformed into real space, exhibits long-range algebraic decay of first power modulated by a sinusoidal function.
Specifically, it contains a hopping term proportional to ∼ sin(Δl)/Δl, with Δl the distance between sites.
Not accidentally, a similar hopping term with the same algebraic decay has been used as one way to construct a single gapless Dirac fermion on a lattice, known as the SLAC fermion<cit.>.
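The sin(Δl)/Δl tail quoted here is easy to verify numerically: the lattice Fourier transform of a Heaviside window in k-space (the schematic ingredient named above; the cutoff k_c below is an arbitrary illustrative value) reproduces the first-power algebraic decay.

```python
import numpy as np

# Real-space hopping generated by a Heaviside window Theta(k_c - |k|) in the
# Brillouin zone, compared against the analytic tail sin(k_c*dl)/(pi*dl).
N, kc = 8192, 1.0
k = 2 * np.pi * np.arange(N) / N - np.pi          # k in [-pi, pi)
window = (np.abs(k) < kc).astype(float)

h = np.fft.ifft(np.fft.ifftshift(window)).real    # hopping amplitude vs distance dl
dl = np.arange(1, 50)
print(np.max(np.abs(h[dl] - np.sin(kc * dl) / (np.pi * dl))))  # deviation ~ discretization level
```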
However, it is important to note that in our theory, the phenomenological evasion of locality by the gapless Dirac fermion, residing in effectively 2D space, is a consequence of the bulk property of the 3D TI, where locality is preserved.
This phenomenon underscores the concept of bulk-boundary correspondence
and suggests that a seemingly unphysical theory in lower dimensions can be attributed to a projection from a higher-dimensional theory.
It is noteworthy that the procedure employed here differs from a dimensional reduction and is not an effective field theory, because the Dirac fermions naturally form a complete set on the lattice.
Rather, a closer analogy is with quasicrystals containing aperiodic order, which can arise from projections of higher-dimensional periodic lattices<cit.>.
Essentially, both the gapless Dirac fermion comprising the surface states of a 3D TI, and the quasicrystal obtained from tilings, are physically realizable systems.
The formalism introduced here, which transforms a spatially confined (n+1)D Dirac Hamiltonian into nD Dirac fermions through a local unitary matrix constructed from solutions of the decomposed 1D Hamiltonian along the confined direction, can be generalized to arbitrary dimensions with the aid of the Clifford algebra.
In particular, starting from a 4D modified Dirac equation, a unitary transformation yields a pair of gapless Dirac fermions effectively in 3D space.
This extension holds the potential to enhance our comprehension of the chiral anomaly in such systems<cit.>.
What is more, given that the high-energy components of the two Dirac fermions explicitly break the chiral symmetry, they are not obliged to be paired, as they violate the conditions stipulated by the Nielsen-Ninomiya theorem.
As a result, we can anticipate that when the confined 4D Hamiltonian becomes `semi-magnetic', a single gapless Dirac fermion will be observed, similar to that in the half QAHE.
The introduced magnetism, initially presented as an out-of-plane Zeeman field at the mean-field level, is carried by the unitary transformation into two momentum-dependent matrix Higgs fields 𝐈_S/A(k),
which obtain non-vanishing values along with the spontaneous symmetry breaking that establishes intra-plane ferromagnetic order.
The two fields play a pivotal role in generating mass for the Dirac fermions through Yukawa-like couplings.
The nature of the magnetic structure, influencing the distribution and strength of the Zeeman field along the open direction, leads to the classification of several topologically distinct phases, including the
Chern insulator, the axion insulator, the half-quantized anomalous Hall effect and the metallic quantized anomalous Hall effect.
A summary of their main features is presented in Table <ref>.
Essentially, 𝐈_S predominates in the Chern insulator and metallic QAHE phases, 𝐈_A takes precedence in the axion insulator, while a collaborative effort between 𝐈_S and 𝐈_A is necessary to achieve the half QAHE.
In the presence of a uniform Zeeman field, the mass of each Dirac fermion in the TI film is directly modified by the field.
By tuning the strength of the magnetism, sub-band inversion happens step-by-step for each Dirac fermion, whose Chern character changes correspondingly.
Summing these mass-modified Dirac fermions together gives a Chern insulator whose Hall conductance jumps among integers in [-L_z, L_z] in quantum units e^2/h, with L_z the total layer number.
For a Zeeman field that is relatively weak compared with the bulk gap, it becomes feasible to focus solely on the n = 1 matrix elements that act on the two gapless Dirac fermions.
In this scenario, only the fields near the two surfaces maximally tune the topological property of the TI film by influencing the surface states.
This approximation, referred to as the weak Zeeman field condition, clearly elucidates the physics underlying the Chern insulator, the axion insulator and the half QAHE, with Hall conductances of 1 + 0, 0 + 0 and 1/2 + 0 in quantum units.
Under a general strong Zeeman field, the gapped series of Dirac fermions has to be involved, and the n ≠ 1 Higgs components can then play a crucial role.
The most general description proceeds by a further diagonalization over the mass terms m_n and the Higgs fields 𝐈_S/A, a procedure that yields effective masses m̃_n for the Dirac fermions, which determine the topological property of the system.
As discussed, the avoided crossing between m̃_1 and m̃_2 leads to the formation of two gapless Dirac fermions with the same chirality (high-energy mass sign), which bear a doublet of half-quantized Hall conductivities and lead to the metallic QAHE.
Interestingly, in this case another cut-off over the n = 1,2 blocks can be made, since the applied Zeeman field should not alter the n ≥ 3 states dramatically.
When 𝐈_A = 0, the mirror symmetry is respected by the system, allowing the total Hamiltonian to be separated by the projection operators of the mirror symmetry.
This separation provides valuable insights, such as the application of the mirror layer Chern number in a Chern insulator with a unit Chern number.
It is certainly reasonable but lamentable that we cannot exhaustively list all relevant topological phases in magnetic topological insulators in the article.
The sheer multitude of possible magnetic distributions makes it impractical to cover every potential scenario.
However, our work lays down a unified framework that enables the depiction of both discovered and yet-to-be-discovered topological phases in a uniform and consistent manner,
grounded in the conceptualization of the grouped Dirac fermions and the associated mass generation mechanism.
We believe that the diversity and variety of different magnetic configurations can lead to even richer topological phases within our framework.
Furthermore, as elaborated in Section <ref>, our exploration is not confined solely to topological phases induced by magnetism, especially a Zeeman field in the TI film.
One illustrative example, as highlighted earlier, involves the duality between the z-Zeeman field σ_z and a special orbital order τ_y.
This duality has the potential to generate all topological phases discussed in the paper,
with the symmetric and antisymmetric distributions exchanged for the time-reversal-breaking τ_y field.
This approach extends beyond the quantum anomalous Hall effect (QAHE) induced by the commonly studied ferromagnetism (or layer-by-layer antiferromagnetism, as observed in materials like MnBi_2Te_4).
Moreover, leveraging the superconducting effect, we can include the superconducting pairing field in the framework across all pairing symmetries.
This inclusion opens avenues for exploring and determining the possibilities and conditions necessary for realizing topological superconductors<cit.> within the solid framework we have established.
An additional intriguing aspect to consider pertains to the symmetries of the system.
The modified Dirac equation model we employed for the topological insulator film encapsulates fruitful symmetries: the standard time-reversal, particle-hole and chiral symmetries, together with the inversion symmetry in each dimension and the 1D mirror symmetry along each direction.
Some of these symmetries play crucial roles in determining our solutions and the topological phases of the system.
For instance, in solving the separated 1D Hamiltonian, the utilization of the one-dimensional parity and chiral symmetries is essential;
the z-mirror symmetry becomes decisive for the manifestation of the half quantum mirror Hall effect, contributing a quantized mirror Hall conductance;
and although it is not a protecting symmetry of the metallic quantized anomalous Hall effect, the continued presence of the same mirror symmetry lets us sort the effective masses into two groups by their mirror labels and clarifies the mass exchange mechanism.
It may prove worthwhile to contemplate a starting point Hamiltonian with lower symmetry or introduce additional symmetry-breaking fields to assess the stability of these effects.
For instance, the half quantum mirror Hall effect is clearly a metallic twin partner of the quantum spin Hall effect, and it should also admit a general ℤ_2 classification scheme depending solely on time-reversal symmetry.
Consequently, it is worthwhile to give a unified expression for this invariant.
Moreover, as we have shortly discussed, the half-quantization of the gapless Dirac fermion is protected by the parity invariant regime around the Dirac point,
and indeed, this 2D parity symmetry coexists with the time reversal in our model, which warrants further discussion regarding their individual impacts on half-quantization.
This exploration can be extended to encompass broader symmetries and other kinds of metallic topological phase classes, providing a comprehensive understanding.
Besides, the exploration of disorder and interaction effects in metallic phases presents a rich avenue for investigation.
As previously discussed, metallic topological phases inherently grapple with disorder effects on their metallic side, wherein mechanisms like skew-scattering and side-jump alter the transverse transport behavior<cit.>.
The stability of these phases against disorder, addressed through parameter renormalization, poses a significant question, akin to considerations in their insulating counterparts<cit.>.
Moreover, while the adiabatic criterion justifiably establishes a connection between a gapped interacting phase and a non-interacting one by preserving the gap, it
remains elusive whether something similar can be said of these metallic phases.
Clarifying how such a linkage can be articulated in the context of metallic phases remains an ongoing challenge.
In short, the interplay between magnetism and topology in 3D TI films is investigated within a unified framework, exploiting Dirac fermion physics and the mass generation mechanism.
This work was supported by the National Key R&D Program of China under Grant No. 2019YFA0308603 and the Research Grants Council, University
Grants Committee, Hong Kong under Grant Nos. C7012-21G and 17301823.
§ DERIVATION OF EQ. (<REF>)
We start from solving
h(s) = -isλ_⊥∂_z τ_x + (m_0(k)+ t_⊥∂_z^2) τ_z,
with s defined by the eigenvalue of σ_z.
All parameters are real, with m_0(k) = m_0 - t_∥ k^2 > 0 the criterion for the region where surface states emerge.
For consistency with the lattice model in <ref>, one in fact needs to substitute the parameters as
λ_⊥→ cλ_⊥, λ_∥→ a λ_∥, t_⊥→ c^2 t_⊥, t_∥→ a^2 t_∥.
However, we will not write this explicitly, for simplicity. Also, to keep the discussion concise, we shall omit s in the wavefunctions below.
The eigenproblem of h(s) is a second-order differential equation, which allows us to take the trial function ϕ = ϕ_ξ e^{iξ z}. Using ∂_zϕ = iξϕ, ∂_z^2ϕ = -ξ^2 ϕ, one has the equation below:
[ m_0(k) - t_⊥ξ^2 sλ_⊥ξ; sλ_⊥ξ -m_0(k) + t_⊥ξ^2 ]ϕ = Eϕ,
which readily leads to
E^2 - (m_0(k)- t_⊥ξ^2)^2 - λ_⊥^2ξ^2 = 0,
and gives
ξ^p_α = pξ_α = p √(-F/D + (-1)^{α-1}√(R)/D), p = ±, α = 1,2,
where
D = 2t_⊥^2, F = -2m_0(k)t_⊥ + λ_⊥^2, R = F^2 - 2D(m_0^2(k) - E^2).
For each ξ^p_α, one has
ϕ_α p = [ sλ_⊥ p ξ_α; E - m_0(k)+ t_⊥ξ^2_α ],
and the general solution would be
Φ = ∑_{α p} C_{α p}ϕ_{α p} e^{ipξ_α z}.
Now considering a finite size along the z direction, with the top and bottom surfaces located at ±L/2, respectively, one has the boundary condition
Φ(±L/2) = 0,
applying which one gets four linear equations for the coefficients,
ℙ (C_1+ , C_1- , C_2+ , C_2-)^T = 0,
and the requirement det(ℙ) = 0 leads to the two transcendental equations
m_1 ξ_2/(m_2 ξ_1) = tan(ξ_2 L/2)/tan(ξ_1 L/2),
m_1 ξ_2/(m_2 ξ_1) = tan(ξ_1 L/2)/tan(ξ_2 L/2),
which give two energies varying with k, designated E_+ and E_-, respectively. To be explicit,
E_+ = m_0(k) - t_⊥ [ξ_1^2 g^+(ξ_1) - ξ_2^2 g^+(ξ_2)]/[g^+(ξ_1) - g^+(ξ_2)], g^+(ξ) = tan(ξ L/2)/ξ,
E_- = m_0(k) - t_⊥ [ξ_1^2 g^-(ξ_1) - ξ_2^2 g^-(ξ_2)]/[g^-(ξ_1) - g^-(ξ_2)], g^-(ξ) = 1/[ξ tan(ξ L/2)].
The usual route would now be to insert E_± into the expressions for the ξ's, together with the coefficient equations, and solve them. That, however, is not only tricky but also lacking in physical insight, so we change our perspective.
Notice that under the parity operation z ↔ -z, τ_x ↔ -τ_x and h(s) ↔ h(s); thus both h(s) and H_1d have parity symmetry, and the general solution should contain the two factors below, compatible with the boundary condition:
f_+(z) = cos(ξ_1 z)/cos(ξ_1 L/2) - cos(ξ_2 z)/cos(ξ_2 L/2),
f_-(z) = sin(ξ_1 z)/sin(ξ_1 L/2) - sin(ξ_2 z)/sin(ξ_2 L/2),
where the subscripts refer to even or odd parity.
Now we can assume that for energy E, h(s) has the solution
ϕ = c̃ f_+ + d̃ f_- = [ c̃_1 f_+ + d̃_1 f_-; c̃_2 f_+ + d̃_2 f_- ],
and the two lines of the eigen-equation h(s)ϕ = E ϕ give,
for the first line,
d̃_2 = i t_⊥η_1 c̃_1/(sλ_⊥),
c̃_2 = -i t_⊥η_2 d̃_1/(sλ_⊥),
which leads to
ϕ^+_1 = C^+_1[ -i sλ_⊥ f_+; t_⊥η_1 f_- ], E = E_+,
ϕ^-_1 = C^-_1 [ i sλ_⊥ f_-; t_⊥η_2 f_+ ], E = E_-;
and for the second line,
d̃_1 = -it_⊥η_1c̃_2/sλ_⊥,
c̃_1 = it_⊥η_2 d̃_2/sλ_⊥,
which leads to
ϕ^+_2 = C^+_2[ t_⊥η_1f_-; isλ_⊥ f_+ ], E = -E_+,
ϕ^-_2 = C^-_2 [ t_⊥η_2f_+; -isλ_⊥ f_- ], E = -E_-,
by defining two coefficients
η_1 = (ξ_1^2 - ξ_2^2)/[ξ_1 cot(ξ_1 L/2) - ξ_2 cot(ξ_2 L/2)],
η_2 = (ξ_1^2 - ξ_2^2)/[ξ_1 tan(ξ_1 L/2) - ξ_2 tan(ξ_2 L/2)],
with C the norm; the upper and lower indices represent E_± and the line index, respectively.
Clearly, C^ι_1 = C^ι_2 is identified, and ϕ^ι_1 = -iτ_y ϕ^ι_2, as they are chiral partners (ι = ±).
The above seems to give four solutions; mathematical consistency, however, requires that the equations from different lines for
the same set of coefficients hold simultaneously, i.e.,
(<ref>)⇔(<ref>) and (<ref>)⇔(<ref>), which gives us two relations:
1 = |[i t_⊥η_1/(s λ_⊥)]·[i t_⊥η_2/(sλ_⊥)]| ⟹ |η_1 η_2| = λ_⊥^2/t_⊥^2,
E_+ = -E_-,
and the latter is also a physical consequence of the Dirac equation. We therefore have only two independent solutions for each h(s) sub-block, say Eq. (<ref>) and Eq. (<ref>).
Formally combining the simultaneously standing equations from the different lines again leads to
E^2 - (m_0(k) - t_⊥ξ^2)^2 - λ_⊥^2ξ^2 = 0.
We thus see that the guessed solution not only satisfies the boundary condition,
but also satisfies all the E-ξ equations, so it is indeed our solution.
Notice that, by Eq. (<ref>), for a given energy the ξ_α are either both complex or both not, where complex means that both the real and imaginary parts of ξ are non-vanishing, as determined by the sign of R.
This information, combined with the properties of the trigonometric/hyperbolic functions, leads to the conclusion that
the quadratic form f_+^* f_- and η (at a given (k,z,E)) are always real.
Essentially, f_± are either real or purely imaginary.
Now, we restore s explicitly and extract
φ(s) = ϕ^s,+_1, χ(s) = ϕ^s,+_2
as two solutions of h(s) for the basis construction.
Then by defining
m = E_+ = m_0(k) - t_⊥ [ξ_1^2 g(ξ_1) - ξ_2^2 g(ξ_2)]/[g(ξ_1) - g(ξ_2)],
g(ξ) = tan(ξ L/2)/ξ,
η = (ξ_1^2 - ξ_2^2)/[ξ_1 cot(ξ_1 L/2) - ξ_2 cot(ξ_2 L/2)],
C = C^+_1 = C^+_2,
one obtains the four projection basis states in a definite sequence as
Φ_1 = [ φ(+); 0 ] = C[ -i λ_⊥ f_+; t_⊥η f_-; 0; 0 ],
Φ_2 = [ 0; χ(-) ] = C [ 0; 0; t_⊥η f_-; -iλ_⊥ f_+ ],
Φ_3 = [ χ(+); 0 ] = C[ t_⊥η f_-; iλ_⊥ f_+; 0; 0 ],
Φ_4 = [ 0; φ(-) ] = C[ 0; 0; i λ_⊥ f_+; t_⊥η f_- ],
with energy(m,-m,-m,m), respectively.
Notice that Φ_3,4 are the chiral partners of Φ_1,2 under -iτ_y, respectively.
To obtain m, a closed set of equations needs to be solved:
m = m_0(k) - t_⊥ [ξ_1^2 g^+(ξ_1) - ξ_2^2 g^+(ξ_2)]/[g^+(ξ_1) - g^+(ξ_2)],
ξ_α = √(-F/D + (-1)^{α-1}√(R)/D), α = 1,2,
where
g^+(ξ) = tan(ξ L/2)/ξ
D = 2t_⊥^2
F = -2m_0(k)t_⊥+ λ_⊥^2
R = F^2 - 2D(m_0^2(k) - m^2)
.
Basically, there are three variables (m, ξ_1, ξ_2) and three equations, so they can be determined exactly.
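For concreteness, this closed system can be solved numerically by a simple fixed-point iteration m → RHS(m), since ξ_1,2 are explicit functions of m. Below is a minimal sketch with illustrative parameter values in natural units (m_0 = 0.5, t_⊥ = λ_⊥ = 1, L = 20, i.e. γ = 0.5; these numbers are ours, not the paper's):

```python
import numpy as np

m0, t, lam, L = 0.5, 1.0, 1.0, 20.0

def xi(m, alpha):
    D = 2 * t**2
    F = -2 * m0 * t + lam**2
    R = F**2 - 2 * D * (m0**2 - m**2)
    return np.sqrt(-F / D + (-1)**(alpha - 1) * np.sqrt(R + 0j) / D)

def g(x):                                # g^+(xi) = tan(xi L/2)/xi, complex xi
    return np.tan(x * L / 2) / x

m = 0.0
for _ in range(100):                     # iterate m -> RHS(m) to convergence
    x1, x2 = xi(m, 1), xi(m, 2)
    m_new = (m0 - t * (x1**2 * g(x1) - x2**2 * g(x2)) / (g(x1) - g(x2))).real
    if abs(m_new - m) < 1e-15:
        break
    m = m_new

u, gam = lam / (2 * t), m0 * t / lam**2  # compare with the finite-size formula
m_ref = -4 * m0 / np.sqrt(4*gam - 1) * np.sin(u*np.sqrt(4*gam - 1)*L) * np.exp(-u*L)
print(m, m_ref)                          # both ~ 5e-5: exponentially small mass
```

Both numbers agree with the finite-size correction derived in the later subsection, and vanish in the L → ∞ limit as required by m_1(k) = Θ(-m_0(k)) m_0(k).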
§.§ Symmetry analysis of solutions
Firstly, as we have stated, the chiral symmetry τ_y is respected in Eq. (<ref>) since {h(s),τ_y} = 0, and this symmetry is
reflected in our solutions by φ(s) = -iτ_y χ(s) with opposite energies.
Meanwhile, we have relied on the 1D parity symmetry, a reflection along the z direction, or simply the z-parity 𝒫_z, which acts on the basis as
Φ(z) 𝒫_z⟶τ_z Φ(-z),
with τ_z the unitary matrix implementing the transformation of the internal degrees of freedom.
Now, since f_±(z) = ± f_±(-z), we identify that Φ_1,4 (Φ_2,3) are even (odd) under z-parity,
and correspondingly, in the representation of Φ, the unitary matrix related to z-parity is written as τ_z σ_z.
There exists in fact a hidden symmetry in the model, namely the mirror symmetry about the x-y plane.
Effectively, it also brings z to -z as an inversion, but with an additional operation that rotates the spin angular momentum by a π phase; i.e., such a z-mirror symmetry ℳ_z is a combination of 𝒫_z and a 𝒞_2z rotation, and it acts on the basis as
Φ(z) ℳ_z⟶σ_z τ_z Φ(-z),
classifying Φ_1,2 (Φ_3,4) into z-mirror even (odd) states.
Accordingly, in the Φ representation this operator has the form τ_z σ_0.
Then, combined with the spin index s = ± appearing in φ(s), χ(s), we can further assign Φ_i to be Φ_χ,s, with χ, s labelling the mirror and spin-z indices as
Φ_++ = Φ_1, Φ_+- = Φ_2,
Φ_-+ = Φ_3, Φ_– = Φ_4.
A single h(s) does not possess time-reversal symmetry, since under 𝒯 = iσ_y 𝒦, h(+) ↔ h(-); it is H_1d that owns this symmetry.
Given that time reversal leaves the energy unchanged, one finds Φ_4 = e^{iθ}𝒯 Φ_1, Φ_2 = e^{iθ}𝒯 Φ_3, where θ = 0 or π depending on k, E.
The essential point in avoiding the subtle f_±^* is to notice that they are either both real or both imaginary, as stated above, while η is always real.
Also notice that we did not write k explicitly, since H_1d(k) = H_1d(-k).
The combination of the time-reversal and chiral symmetries gives rise to a particle-hole symmetry, which, implemented on the basis, reads φ(s) = e^{i θ}φ^*(s̅) = e^{i θ}[-iτ_y χ(s̅)]^*, with s̅ = -s identified.
A similar analysis applies to the lattice model, and the projected 𝒫_z, ℳ_z share the same matrix forms as above.
§.§ Equivalent block Hamiltonian
The projection procedure works under the given basis representation of H_TI(k), formally H = ⟨Φ| H_TI(k) |Φ⟩, with
(H)^n n'_ij = ∫ dz (Φ^n_i(z))^† H_TI(k,z) Φ^n'_j(z),
where the integral runs from -L/2 to L/2. Clearly, projection onto H_1d gives diag(m,-m,-m,m), so we only need to deal with the H_∥(k) = λ_∥(k·σ) τ_x = λ_∥ (k_xσ_x + k_y σ_y) τ_x term.
Since H_∥(k) is purely off-diagonal, it is easy to conclude that
⟨Φ^n_i|H_∥|Φ^n'_i⟩ = 0, i = 1,2,3,4,
⟨Φ^n_1|H_∥|Φ^n'_3⟩ = 0 = ⟨Φ^n_2|H_∥|Φ^n'_4⟩.
Then only four terms need consideration, by hermiticity, among which
⟨Φ^n_1|H_∥|Φ^n'_4⟩ = λ_∥ k_- |C^n C^n'| ∫ dz iλ_⊥ t_⊥ [η^n (f^n_+)^* f^n'_- + η^n'(f_-^n)^* f_+^n'] = 0,
⟨Φ^n_2|H_∥|Φ^n'_3⟩ = λ_∥ k_- |C^n C^n'| ∫ dz iλ_⊥ t_⊥ [η^n (f^n_-)^* f^n'_+ + η^n'(f_+^n)^* f_-^n'] = 0,
as f_- f_+ is odd in z.
Here k_± = k_x ± i k_y is defined.
Then, the only remaining terms are
⟨Φ^n_1|H_∥|Φ^n'_2⟩ = ∫ dz λ_∥ k_- φ^†(λ_⊥)τ_x χ(-λ_⊥) = λ_∥ k_- δ_n n',
⟨Φ^n_3|H_∥|Φ^n'_4⟩ = ∫ dz λ_∥ k_- φ^†(λ_⊥)τ_x χ(-λ_⊥) = λ_∥ k_- δ_n n',
where the normalization condition is used.
And finally we arrive at the block Hamiltonian
H(k) = ⊕_n λ_∥τ_0 (k·σ) + m_n(k) τ_z σ_z,
as Eq. (<ref>).
Here, notice that the spin degree of freedom is fully preserved as σ, while the newly defined τ carries a different meaning from the original one.
To make the transformation more formal, we define the transformation matrix
U^c(k,z) = ({{Φ}_i}^n)(k,z),
where the double brackets mean that we arrange the i = 1,2,3,4 index inside each n = 1,2,⋯,
or, written more explicitly,
U^c = (Φ^1,Φ^2,⋯), Φ^n = (Φ_1^n,Φ_2^n,Φ_3^n,Φ_4^n).
This transformation then brings the Hamiltonian of the boundary-constrained topological insulator film, H_TI(k,-i∂_z), into the direct-sum form of Dirac fermions via
H(k) = ∫ dz (U^c)^†(k,z) H_TI(k,-i∂_z) U^c(k,z).
§.§ Analytic expression for mass term
The proof has been given separately<cit.>; here we repeat it.
The analytic expression for the effective mass m(k) is obtained in the L → ∞ case as a thick-film limit;
however, noticing that the finite-size correction to m(k) decays exponentially with thickness<cit.>, our proof is suitable even for a thin film.
The closed E-ξ equations are
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2,
ξ_1^2 ξ_2^2 = [m_0(k)^2 - E^2]/t_⊥^2,
E = m_0(k) - t_⊥ [ξ_1^2 g^+(ξ_1) - ξ_2^2 g^+(ξ_2)]/[g^+(ξ_1)-g^+(ξ_2)],
where g^+(ξ) = tan(ξL/2)/ξ.
We shall assume λ_⊥ > 0, t_⊥ > 0 in the following discussion, without loss of generality; m_0(k) controls the form of the expressions.
The classification of tan(ξL/2) leads to
lim_{L → +∞}tan(ξ L/2) =
i, Im(ξ) > 0
N.A., Im(ξ) = 0
-i, Im(ξ) < 0
.
And three basic cases are distinguished:
Im(ξ_1) > 0 > Im(ξ_2)
Im(ξ_1,2) > 0
Im(ξ_1) = 0, Im(ξ_2) > 0
,
while the other cases can be obtained similarly.
Case I. (Im(ξ_1) > 0 > Im(ξ_2))
Now tan(ξ_1 L/2) = i = -tan(ξ_2 L/2) (in the L→+∞ limit), and
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2
ξ_1^2 ξ_2^2 = [m_0(k)^2 - E^2]/t_⊥^2
E = m_0(k) - t_⊥ξ_1 ξ_2
,
where the second and third equations lead to
m_0(k)^2 - E^2 = (m_0(k) - E)^2,
which offers two possible solutions, E = 0 or E = m_0(k).
I. (E = 0) This leads to
ξ_1 ξ_2 = m_0(k)/t_⊥
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2.
Requiring Im(ξ_1) > 0 > Im(ξ_2) then gives
ξ_1 + ξ_2 =
2u √(4γ - 1), γ > 1/4
2u i √(1 - 4 γ), γ < 1/4
ξ_1 - ξ_2 = 2u i
,
γ = m_0(k) t_⊥/λ_⊥^2
u = λ_⊥/(2t_⊥),
which offers:
∙ γ > 1/4:
ξ_1 = u(√(4γ - 1) + i)
ξ_2 = u(√(4γ - 1) - i)
;
∙ γ < 1/4:
ξ_1 = iu(√(1 - 4γ) + 1)
ξ_2 = iu(√(1 - 4γ) - 1)
.
The latter case stands only when γ > 0, as required by Im(ξ_2) < 0.
II. (E = m_0(k)) This leads to
ξ_1 ξ_2 = 0
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2,
and one of the ξ_α = 0 is unavoidable, which violates the precondition and is abandoned; i.e., E = m_0(k) is not a solution in this case.
Case II. (Im(ξ_1,2) > 0)
Now tan(ξ_1 L/2) = i = tan(ξ_2 L/2) (in the L→+∞ limit), and
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2
ξ_1^2 ξ_2^2 = [m_0(k)^2 - E^2]/t_⊥^2
E = m_0(k) + t_⊥ξ_1 ξ_2
,
then the second and third equations above lead to
m_0(k)^2 - E^2 = (m_0(k) - E)^2,
which again gives two possible solutions, E = 0 or E = m_0(k).
I. (E = 0) This condition leads to
ξ_1 ξ_2 = -m_0(k)/t_⊥
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2.
The requirement Im(ξ_1,2) > 0 then gives
ξ_1 + ξ_2 = 2u i
ξ_1 - ξ_2 =
2u √(4γ - 1), γ > 1/4
2u i √(1 - 4 γ), γ < 1/4
,
which offers:
∙ γ > 1/4:
ξ_1 = u(√(4γ - 1) + i)
ξ_2 = u(- √(4γ - 1) + i)
;
∙ γ < 1/4:
ξ_1 = iu(√(1 - 4γ) + 1)
ξ_2 = iu(-√(1 - 4γ) + 1)
.
The latter case stands only when γ > 0, as required by Im(ξ_2) > 0.
II. (E = m_0(k)) This leads to
ξ_1 ξ_2 = 0
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2,
and again one of the ξ_α = 0 is unavoidable, so one concludes that E = m_0(k) is not a solution in this case.
Case III. (Im(ξ_1) = 0, Im(ξ_2) > 0)
Guessing E = m_0(k), we have
ξ_1 ξ_2 = 0
ξ_1^2 + ξ_2^2 = [2m_0(k) t_⊥ - λ_⊥^2]/t_⊥^2,
which gives
(ξ_1 + ξ_2)^2 = 4u^2(2γ - 1)
(ξ_1 - ξ_2)^2 = 4u^2(2γ - 1)
,
and choosing
ξ_1 = 0
ξ_2 = 2ui√(1 - 2γ),
fulfills the requirement.
Notice that γ < 1/2 is assumed, which does not spoil the self-consistency of the solution.
Meanwhile, since ξ_1 = 0 leads to the degenerate eigenvalues ±ξ_1, one should in general assume another solution of the form
(A + B z) e^{i ξ_1 z}ϕ|_{ξ_1 = 0}, E = m_0(k),
which, however, only yields B = 0 while A remains arbitrary, and thus carries no additional information.
Recalling the definition γ = m_0(k) t_⊥/λ_⊥^2, the discussion above naturally leads to the conclusion that the lowest eigenenergy of H_1d reads
E =
0, m_0(k) > 0
m_0(k), m_0(k) < 0
,
or, redefining the lowest E(k) as m_1(k), we write
m_1(k) = Θ(-m_0(k)) m_0(k),
the result mentioned in Eq. (<ref>).
§.§ Finite-size correction to mass term
We can in fact retain the lowest-order correction to see the finite-size gap when L is not that large. For ξ_1 and ξ_2, one obtains the lowest-order correction to tan(ξL/2) by treating β^{±L} (β = e^{iξ}) as a small quantity, depending on the sign of Im(ξ):
tan(ξ L/2) ≈
i(1 - 2β^L), Im(ξ) > 0
-i(1 - 2β^{-L}), Im(ξ) < 0
.
Also notice that the original E-ξ equation,
E^2 - (m_0(k) - t_⊥ξ^2)^2 - λ_⊥^2ξ^2 = 0,
can be further split (with E = 0 as the zeroth order) into
t_⊥ξ^2 ± i λ_⊥ξ - m_0(k) = 0,
from which one solves
ξ = [s_1 iλ_⊥ + s_2 √(4m_0(k)t_⊥ - λ_⊥^2)]/(2 t_⊥) = u(i s_1 + s_2√(4γ - 1)),
where s_1, s_2 = ± without restriction.
Notice that in an actual calculation one needs to specify which branch ξ_1,2 lies in, but this choice does not affect the final result as long as the chosen ξ_1,2 satisfy the zeroth-order solution.
Now again we have the two cases below.
∙ γ > 1/4: we choose
ξ_1 = ξ_2^*
Im(ξ_1) > 0 > Im(ξ_2)
Re(ξ_1) = Re(ξ_2) > 0
,
as the main branch condition; then
tan(ξ_1 L/2) ≈ i(1 - 2β_1^L)
tan(ξ_2 L/2) ≈ -i(1 - 2β_2^{-L})
,
and
E(k) ≈ (m_0(k) - t_⊥ξ_1 ξ_2) + 2t_⊥ξ_1 ξ_2 [(ξ_1 - ξ_2)/(ξ_1 + ξ_2)] (e^{i ξ_1 L} - e^{-i ξ_2 L}).
Notice that the first term in brackets is of zeroth order, E ≈ 0. It is now time to utilize the four solutions in Eq. (<ref>).
By the main branch condition above, we accordingly choose
ξ_1 = u (√(4 γ -1) + i)
ξ_2 = u (√(4 γ -1) - i)
,
considering that γ > 1/4 in this zone.
Afterwards, one obtains
E(k) ≈ -4 m_0(k)/√(4 γ - 1) sin(u√(4 γ - 1) L) e^{-u L}.
The low-energy surface-state mass thus shows both exponentially decaying and oscillating behavior.
∙ 0 < γ < 1/4: we choose
Im(ξ_1) > 0
Im(ξ_2) > 0
,
as the main branch condition; then
tan (ξ_1 L/2) ≈ i(1 - 2β_1^L)
tan (ξ_2 L/2) ≈ i(1 - 2β_2^L)
,
E(k) ≈ (m_0(k) + t_⊥ξ_1 ξ_2) - 2t_⊥ξ_1 ξ_2 [(ξ_1 + ξ_2)/(ξ_1 - ξ_2)] (e^{i ξ_1 L} - e^{i ξ_2 L}),
where the first term in brackets is again the zeroth-order energy, approaching zero.
Again, utilizing the four solutions in Eq. (<ref>) with the main branch condition above, we choose
ξ_1 = iu (1 + √(1 - 4γ))
ξ_2 = iu (1 - √(1 - 4γ))
,
considering that 0 < γ < 1/4 in this zone.
Again, one obtains
E(k) ≈ -4 m_0(k)/√(1 - 4γ) sinh(u √(1 - 4γ)L) e^{-u L}.
Since sin(i x) = i sinh(x),
and since γ = m_0(k) t_⊥/λ_⊥^2, we may set γ(k_c) = 0 and obtain a unified expression for the lowest-order mass correction,
E(k < k_c) = -4 m_0(k)/√(4 γ - 1) sin(u √(4γ - 1)L) e^{-u L}.
As a comment, however, in numerical calculations E in the zone 0 < γ < 1/4 is suppressed to zero in a much slower manner,
which is caused by the exponential cancellation between sinh and exp.
Nevertheless, since √(1 - 4γ) < 1 in this region, we conclude that the exponential growth
is always slower than the decay, which finally pushes the state to zero energy for L → +∞.
§ DERIVATION OF EQ. (<REF>)
To obtain an effective model, we start from solving ℋ_1d and notice that [ℋ_1d(k),σ_z] = 0, from which we can let
ℋ_1d(k) ζ_s ⊗ |ϕ^s(k)⟩ = ζ_s ⊗ℋ^s_1d(k)|ϕ^s(k)⟩,
where ℋ^s_1d(k) is the split Hamiltonian acting on only one subspace, and by definition
σ_z ζ_s = s ζ_s, s = ±.
Under the basis {Ψ_l_z,k}_l_z, ℋ^s_1d(k) in its matrix form is denoted H^s_1d(k),
with solutions defined by its eigenvalue equation
H^s_1d(k) ϕ^s(k) = E^s(k) ϕ^s(k), ϕ^s(k) = ⊕_l_zϕ^s_l_z(k).
To keep the discussion concise, we shall omit s, k and let M ≡ M_0(k) below in this section.
Eq. (<ref>) can be written in the recurrence form
(t_⊥τ_z + iλ_⊥/2 sτ_x) ϕ_{l_z - 1} + M τ_z ϕ_{l_z} + (t_⊥τ_z - iλ_⊥/2 sτ_x)ϕ_{l_z + 1} = E ϕ_{l_z},
observing which we can set the trial function ϕ_{l_z} = e^{i ξ l_z}ϕ = β^{l_z}ϕ, where β = e^{iξ}. The equation accordingly reduces to
[(t_⊥τ_z + iλ_⊥/2 sτ_x) β^{-1} + (M τ_z - E) + (t_⊥τ_z - iλ_⊥/2 sτ_x)β ] ϕ = 0,
which firstly leads to
E^2 = (M + 2t_⊥cosξ)^2 + λ_⊥^2 sin^2 ξ,
requiring a non-trivial ϕ. From Eq. (<ref>) one solves
cosξ^p_α = [-M t_⊥ + (-1)^{α - 1}√(M^2 t_⊥^2 - (t_⊥^2 - λ_⊥^2/4)(M^2 + λ_⊥^2 - E^2))]/[2(t_⊥^2 - λ_⊥^2/4)],
sinξ^p_α = p √(1 - cos^2 ξ_α), p = ±, α = 1,2,
which tells us that
β^p_α = e^{iξ^p_α} = cosξ_α + ip √(1 - cos^2 ξ_α).
One thing to notice here is that the sign change of sinξ^p_α is caused by the sign change of ξ, rather than by a phase shift like ξ → ξ + π, since the latter would also change
the sign of cosξ, which is not our solution.
To make maximum use of the symmetry, we consider the canonical boundary condition in which the center of the 1D chain sits at z = 0; then, denoting l = L_z + 1, we have
ϕ^s(±l/2) = 0,
and it is essential to notice that the sites l_z = ±(L_z + 1)/2 are two fictitious points where the constraints are imposed, while the true lattice stops at l_z = ±(L_z - 1)/2, as we only have L_z sites.
Moreover, to keep the expressions uniform regardless of the parity of L_z, l_z is enumerated as follows:
l_z = 0, ±1, ±2, ⋯, ±(L_z - 1)/2, for L_z odd,
l_z = ±1/2, ±3/2, ⋯, ±(L_z - 1)/2, for L_z even,
which conforms to the mirror symmetry about z = 0. Afterwards,
enlightened by the idea of symmetric trial functions, we also build several functions from β^p_α adapted to the symmetric setting above. Denote
E(β,l_z) = (β^{l_z} + β^{-l_z})/(β^{(L_z + 1)/2} + β^{-(L_z + 1)/2}) = cos(ξ l_z)/cos(ξ l/2),
O(β,l_z) = (β^{l_z} - β^{-l_z})/(β^{(L_z + 1)/2} - β^{-(L_z + 1)/2}) = sin(ξ l_z)/sin(ξ l/2),
where `E' and `O', namely even and odd, denote the parity of the two functions in z; one should not confuse E here with the energy. From these we establish two sets of factors respecting the boundary condition, with even or odd parity:
f_+(l_z) = ∑_α(-1)^{α - 1} E(β^p_α,l_z)
f_-(l_z) = ∑_α(-1)^{α - 1} O(β^p_α,l_z)
,
where the summation is over α but not over p, since p only changes the sign of ξ and thus does not influence the value of E or O.
Before proceeding, let us note some special properties of these functions. Let
a = β + 1/β = 2cosξ
b = β - 1/β = 2isinξ,
which act as lattice differential operators and lead to the relations
f_+(l_z ± 1) = ∑_α(-1)^{α - 1}[a_α E(β_α,l_z) ± i b_α tan(ξ_α l/2) O(β_α,l_z)]/2 ≡ g_±,
f_-(l_z ± 1) = ∑_α(-1)^{α - 1}[a_α O(β_α,l_z) ∓ i b_α cot(ξ_α l/2) E(β_α,l_z)]/2 ≡ h_±.
One can again see that the iteration relation is independent of p, as expected.
Now we are able to come back and solve the chain problem. Let
ϕ_{l_z} = c f_+(l_z) + d f_-(l_z)
be the guessed general solution compatible with the boundary condition.
Bringing this trial solution into Eq. (<ref>) and requiring vanishing coefficients of E(β_α,l_z) and O(β_α,l_z), one obtains, after reorganization,
(M - E + t_⊥ a_α)c_1 - (λ_⊥/2)s d_2 b_α cot(ξ_α l/2) = 0
-(λ_⊥/2)s c_1 b_α tan(ξ_α l/2) + (M + E + t_⊥ a_α)d_2 = 0
,
(M - E + t_⊥ a_α) d_1 + (λ_⊥/2)s c_2 b_α tan(ξ_α l/2) = 0
(λ_⊥/2)s d_1 b_α cot(ξ_α l/2) + (M + E + t_⊥ a_α)c_2 = 0
,
for the different α.
Requiring simultaneous validity with respect to α leads to four solutions in pairs,
d_2 = [i t_⊥η_1/(s λ_⊥)] c_1, E = E_+
c_1 = [i t_⊥η_2/(s λ_⊥)] d_2, E = -E_-
,
c_2 = -[i t_⊥η_2/(s λ_⊥)] d_1, E = E_-
d_1 = -[i t_⊥η_1/(s λ_⊥)] c_2, E = -E_+
,
where the formal expressions for the energies are
E_± = M + 2t_⊥ [cosξ_1 g^±(ξ_1) - cosξ_2 g^±(ξ_2)]/[g^±(ξ_1) - g^±(ξ_2)],
with the two functions
g^±(ξ) = tan^{±1}(ξ(L_z + 1)/2)/sinξ
and the two dimensionless factors
η_1 = -2(cosξ_1 - cosξ_2)/[sinξ_1 cot(ξ_1 l/2) - sinξ_2 cot(ξ_2 l/2)],
η_2 = -2(cosξ_1 - cosξ_2)/[sinξ_1 tan(ξ_1 l/2) - sinξ_2 tan(ξ_2 l/2)]
having been introduced.
From the above discussion we seemingly have four solutions; mathematical consistency, however, requires that
the equations in Eq. (<ref>) within the same brace hold simultaneously, which gives us two relations:
1 = |[i t_⊥η_1/(s λ_⊥)]·[i t_⊥η_2/(s λ_⊥)]| ⟹ |η_1 η_2| = λ_⊥^2/t_⊥^2,
m ≡ E_+ = -E_-,
the latter of which is also a physical consequence of the Dirac equation. This reduces our four solutions to two independent ones for each s.
The above discussion is equivalent to requiring the simultaneous validity of the equations in the left brace of Eq. (<ref>),
E^2 = (M + 2t_⊥cosξ_α)^2 + λ_⊥^2 sin^2 ξ_α,
which is independent of α and matches the result of Eq. (<ref>).
Similar arguments apply here as in the continuum model.
Accounting for the complexity of ξ_1,2 restricted by Eq. (<ref>), together with the properties of the trigonometric/hyperbolic functions, leads to the conclusion that
the quadratic form f_+^* f_- and η (at a given (k,z,E)) are always real.
Essentially, f_± are either real or purely imaginary.
In short, all the energy states m are obtained by solving the simultaneous equations
m = M + 2 t_⊥ [cosξ_1 g(ξ_1) - cosξ_2 g(ξ_2)]/[g(ξ_1) - g(ξ_2)],
cosξ_α = [-M t_⊥ + (-1)^{α - 1}√(M^2 t_⊥^2 - (t_⊥^2 - λ_⊥^2/4)(M^2 + λ_⊥^2 - m^2))]/[2(t_⊥^2 - λ_⊥^2/4)],
where
M = M_0(k) = m_0 - 4 t_∥(sin^2(k_x a/2) + sin^2(k_y b/2)) - 2 t_⊥,
g(ξ) = tan(ξ(L_z + 1)/2)/sinξ,
and the sign of ξ is fixed by p = + so that
sinξ_α = √(1 - cos^2ξ_α), α = 1,2.
Basically, there are three variables, ξ_1, ξ_2 and m, together with the three equations above, so in a sense this is an exact system of equations, albeit a non-linear transcendental one.
From this set of equations, one expects L_z solutions m_n(k), n = 1,2,⋯,L_z, including one surface state and L_z - 1 purely trivial bulk states, within a suitable choice of parameters.
The other set of L_z solutions are just the chiral partners with -m_n(k).
Notice that these 2L_z solutions compose the eigenvalues of one H_1d^s; then,
counting s = ±, there are in fact 4L_z solutions in total, as expected from the matrix form of H_1d.
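These transcendental equations can be cross-checked by directly diagonalizing the chain: building the 2L_z × 2L_z matrix H^s_1d from the recurrence relation above and reading off its eigenvalues. A minimal sketch follows; the parameter values are illustrative (with M chosen in the inverted window |M| < 2t_⊥, λ_⊥ ≠ 0, where the 1D chain hosts a near-zero end mode), not taken from the paper.

```python
import numpy as np

M, t_perp, lam_perp, Lz, s = -1.5, 1.0, 1.0, 20, +1

tz = np.diag([1.0, -1.0])
tx = np.array([[0.0, 1.0], [1.0, 0.0]])
T = t_perp * tz - 1j * s * (lam_perp / 2) * tx   # hopping block l_z -> l_z + 1

H = np.zeros((2 * Lz, 2 * Lz), dtype=complex)
for l in range(Lz):
    H[2*l:2*l+2, 2*l:2*l+2] = M * tz             # on-site block M tau_z
    if l < Lz - 1:
        H[2*l:2*l+2, 2*(l+1):2*(l+1)+2] = T
        H[2*(l+1):2*(l+1)+2, 2*l:2*l+2] = T.conj().T

E = np.sort(np.linalg.eigvalsh(H))
m_n = E[E > 0]                                   # the Lz masses m_n(k)
print("m_1 (near-zero surface mass):", m_n[0])
print("chiral-symmetric spectrum:", np.allclose(E, -E[::-1]))
```

The positive eigenvalues reproduce the L_z solutions m_n(k), with one exponentially small surface mass and L_z - 1 bulk masses, while the ±m_n pairing confirms the chiral structure used above.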
It now remains to construct the basis for the projection. We first suppress the index of m, since the form of the wavefunction solution is universal whatever n takes. Then, counting s, we have in total four independent solutions for each m, as follows:
φ(s) = [ c_1 f_+; d_2 f_- ] = C [ -isλ_⊥ f_+; t_⊥η f_- ], E = m
χ(s) = [ d_1 f_-; c_2 f_+ ] = C [ t_⊥η f_-; isλ_⊥ f_+ ], E = -m
,
where we have suppressed the lower index of η_1, and the norm C is the same for the φ and χ states.
Then, restoring the n-indices, we have 4L_z basis states in a definite sequence,
Φ^n_1 = ζ_+ ⊗φ(+) = [ φ^n(+); 0 ], Φ^n_2 = [ 0; χ^n(-) ],
Φ^n_3 = [ χ^n(+); 0 ], Φ^n_4 =[ 0; φ^n(-) ],
with energies (m_n(k),-m_n(k),-m_n(k),m_n(k)), respectively.
The (k,l_z) dependence of these basis states is inherited from the functions f^n_±(k,l_z) and the factor η^n(k).
The basis here admits the same symmetry analysis as in the continuum model, while here the parity and mirror symmetries can be written down explicitly in off-diagonal matrix form,
with σ_0τ_z and -iσ_zτ_z as the off-diagonal elements, respectively.
In particular, by combining the mirror and spin-z indices, we assign Φ_i^n = Φ^n_χ,s with
Φ^n_++ = Φ^n_1, Φ^n_+- = Φ^n_2,
Φ^n_-+ = Φ^n_3, Φ^n_– = Φ^n_4.
Now we turn to the projection, which is formally
⟨Φ | H_Film | Φ⟩ = ⟨Φ | H_1d | Φ⟩ + ⟨Φ | H_∥ | Φ⟩,
where the first part, by the definition of the eigenvalue equation, is just ⊕_n diag(m_n, -m_n, -m_n, m_n) = ⊕_n m_n(k) τ_z σ_z; for the second part, since H_∥ = λ_∥ (sin(k_x a) σ_x τ_x + sin(k_y b)σ_y τ_x) is purely off-diagonal, it is easy to conclude that
⟨Φ^n_i|H_∥|Φ^n'_i⟩ = 0, i = 1,2,3,4,
⟨Φ^n_1|H_∥|Φ^n'_3⟩ = 0 = ⟨Φ^n_2|H_∥|Φ^n'_4⟩.
Then only four terms need consideration, by hermiticity, among which
⟨Φ^n_1|H_∥|Φ^n'_4⟩ = λ_∥ (sin(k_x a) - i sin(k_y b)) ∑_l_z |C|^2 iλ_⊥ t_⊥ [η^n (f_+^n)^* f_-^n' + η^n' (f_-^n)^* f_+^n'] = 0,
⟨Φ^n_2|H_∥|Φ^n'_3⟩ = λ_∥ (sin(k_x a) - i sin(k_y b)) ∑_l_z |C|^2 iλ_⊥ t_⊥ [η^n (f_-^n)^* f_+^n' + η^n' (f_+^n)^* f_-^n'] = 0,
as f_- f_+ is odd in z. Then, the only remaining terms are
⟨Φ^n_1 | H_∥ | Φ^n'_2⟩ = λ_∥ (sin(k_x a) - i sin(k_y b)) δ_n n' = ⟨Φ^n_3 | H_∥ | Φ^n'_4⟩,
where the normalization condition is used.
Finally we arrive at the equivalent Hamiltonian
H(k) = ⊕_{n = 1}^{L_z}[λ_∥ (sin(k_x a)σ_x + sin(k_y b)σ_y) + m_n(k) τ_z σ_z] = ⊕_{n,χ} h_{n,χ}(k),
where the unspecified degrees of freedom all carry identity matrices. Herewith we have arrived at Eq. (<ref>) in the main text.
Also notice that H is exactly equivalent to the original H_Film, since, counting all n,
the projection we performed is just a unitary basis transformation, where the unitary matrix is composed of the solutions of H_1d.
The projection here is also a unitary transformation, which takes a simpler form than in the continuum model.
Since the original Hamiltonian reads
ℋ_Film(k) = ∑_{l_z,l_z'}Ψ_{l_z}^† H_Film(k,l_z,l_z') Ψ_{l_z'},
then, defining Ψ = ⊕_{l_z}Ψ_{l_z}, we identify the unitary transformation as
ℋ_Film(k) = (Ψ^† U^l) [ (U^l)^† H_Film(k) U^l ] ( (U^l)^†Ψ),
where
U^l = (Φ^1,Φ^2,⋯,Φ^{L_z}), Φ^n = (Φ_1^n,Φ_2^n,Φ_3^n,Φ_4^n),
and we recognize Φ_i^n = ⊕_{l_z}Φ_i^n(l_z) here, so that U^l is a 4L_z × 4L_z unitary matrix.
Here again U^l is trivial in k-space.
The core transformation of the matrix form of the Hamiltonian gives
H(k) = (U^l(k))^† H_Film(k) U^l(k),
while the inverse transformation (U^l)^†Ψ assigns composite fermionic operators to the new basis.
Essentially, the transformation to each h_{n,χ} is done by
h_{n,χ} = (U^l_{n,χ})^† H_Film U^l_{n,χ},
where
U_{n,χ}^l = Φ^n_χ = (Φ^n_{χ,s = +}, Φ^n_{χ,s = -})
is a 2L_z × 2 matrix.
|
http://arxiv.org/abs/2405.10318v1 | 20240516175936 | Gauge theory of giant phonon magnetic moment in doped Dirac semimetals | [
"Wenqin Chen",
"Xiao-Wei Zhang",
"Ying Su",
"Ting Cao",
"Di Xiao",
"Shi-Zeng Lin"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
Department of Physics, University of Washington, Seattle, Washington 98195, USA
Theoretical Division, T-4 and CNLS, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
Center for Integrated Nanotechnologies (CINT), Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
dixiao@uw.edu
Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
Department of Physics, University of Washington, Seattle, Washington 98195, USA
szl@lanl.gov
Theoretical Division, T-4 and CNLS, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Center for Integrated Nanotechnologies (CINT), Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
We present a quantum theory for phonon magnetic moment in doped Dirac semimetals. Our theory is based on an emergent gauge field approach to the electron-phonon coupling, applicable for both gapless and gapped systems. We find that the magnetic moment is directly proportional to the electrical Hall conductivity through the phonon Hall viscosity. Our theory is combined with the first-principles calculations, allowing us to quantitatively implement it to realistic materials. Magnetic moments are found to be on the order of Bohr magneton for certain phonon modes in graphene and Cd_3 As_2. Our results provide practical guidance for the dynamic generation of large magnetization in the topological quantum materials.
Gauge theory of giant phonon magnetic moment in doped Dirac semimetals
Shi-Zeng Lin
May 20, 2024
======================================================================
Introduction.— Circularly polarized phonons, the collective excitations of ionic circular motion <cit.>, have recently attracted significant interest due to their contributions to various phenomena such as the Einstein-de Haas effect <cit.>, the thermal Hall effect <cit.>, and phonon-induced effective magnetic fields <cit.>. These phonons carry an orbital magnetic moment, classically understood as an ionic loop current of the Born effective charge <cit.>. The magnitude is predicted to be on the order of the nuclear magneton μ_N. Phonon magnetic moments have been observed in experiments via the phonon Zeeman effect across several materials <cit.>. Surprisingly, the measured moments can be up to a few Bohr magnetons μ_B, orders of magnitude larger than the classical prediction, indicating the necessity of quantum theories capturing the contributions from electronic degrees of freedom.
Towards this goal, a quantum theory based on electron-phonon coupling has recently been developed from the adiabatic pumping of electronic current <cit.>. This theory is suitable for insulating materials. However, it diverges when the band gap closes due to the breakdown of the adiabatic approximation, and therefore is unable to handle the metallic phase of materials <cit.>. On the other hand, a phenomenological model of the phonons coupled to the cyclotron motion of carriers has been used for Dirac semimetals <cit.>, however, the microscopic mechanism is still unclear. In fact, a theory that quantitatively accounts for the gapless systems remains absent.
In this Letter, we propose a quantum theory for the phonon magnetic moment in doped Dirac semimetals. Our theory is based on an emergent gauge theory approach to the electron-phonon coupling. The mechanism of time-reversal symmetry (TRS) breaking is formulated in a topological Chern-Simons term, which appears as a phonon Hall viscosity modifying the phonon dynamics. We find that the phonon magnetic moment is directly linked to the Hall conductivity through the phonon Hall viscosity. Our results provide a theoretical framework to calculate the phonon magnetic moment from the basic properties of crystal structure and electronic transport. We then apply our theory to realistic materials such as graphene and Cd_3 As_2 by establishing a first-principles method for the computation of phonon-induced emergent gauge fields. Giant phonon magnetic moments on the order of μ_B are found for certain phonon modes. Our theory serves as practical guidance for the dynamic generation of large magnetization in materials.
Electron-phonon coupling and emergent gauge fields in Dirac semimetals.— We start from a general model for Dirac semimetals with electron-phonon (e-ph) coupling, described by the Hamiltonian ℋ = ℋ_D + ℋ_e-ph. Here we consider Dirac semimetals with two valleys (K_χ, χ = ±) located away from the Γ point, which are related by time-reversal or inversion. The low-energy Hamiltonian at one valley is written as ℋ_D =ħ∑_j v_j k_j γ^j - ε_F where v_j is the Fermi velocity, γ^j the Dirac matrices, and ε_F the Fermi energy <cit.>. The e-ph coupling Hamiltonian generally has the form <cit.>
ℋ_e-ph^ν = ∑_q∑_αβ g^ν_αβ(k,q)Q^ν_q c^†_α,k+q/2 c_β,k-q/2,
where g^ν_αβ(k,q) is the e-ph coupling matrix element, ν labels phonon modes, q is the phonon wavevector, and α,β are indices for the electronic basis. The phonon displacement operator is defined in terms of the bosonic operators as Q^ν_q = [ħ/(2 m_I ω_q^ν)]^1/2 (b^ν_q+b^ν†_-q) where m_I is the ionic mass and ω_q^ν is the mode frequency. Next we project ℋ^ν_e-ph into the basis of massless Dirac fermions at the K_χ valley. The lowest-order coupling, for which g^ν_αβ(q) is independent of k, can lead to the emergence of U(1) gauge fields if g^ν_αβ(q) is compatible with the little group symmetries at the Dirac point K_χ <cit.>. The emergent gauge field interacts with Dirac fermions in the form of minimal coupling as described by an effective Hamiltonian,
ℋ_eff = ∑_j v_j (ħ k_j -e A_j - e χ a_j^ν) γ^j - ε_F,
where A is the electromagnetic (EM) gauge field and a^ν is the emergent gauge field induced by the phonon mode ν. We note that the emergent gauge fields at the two valleys (χ=±) have opposite signs, as required by the TRS. In a more general context, the emergent gauge field must transform in the same way as k under the little group at K_χ for the minimal coupling to be allowed. It is worth mentioning that there can be an additional scalar field induced by the phonons. It affects ε_F as an electrostatic pseudopotential and thus will not be our primary focus here.
Phonon Hall viscosity.— We move on to discuss the topological quantum field theory of the emergent gauge field. Integrating out the fermionic degrees of freedom in the system defined by Eq. (<ref>), the quantum corrections from the Dirac fermions give rise to a Chern-Simons (CS) term in the effective action, if the gauge fields are (2+1)-dimensional <cit.>. For the EM gauge field A, the CS term is S_CS[A]=σ_xy/2∫d^3x ϵ^ijk A_i ∂_j A_k. It describes a Hall conductivity response that reads J_i ≡ -δ S_CS[A]/δ A_j = σ_xyϵ_ij E_j. Since the emergent gauge field a^ν couples to the Dirac fermions in the same way as A, we write a CS term of the same form associated with a^ν,
S_CS[a^ν] = σ_xy/2∫d^3xϵ^ijk a^ν_i ∂_j a_k^ν.
The valley index χ drops out because S_CS[a^ν] is quadratic in a^ν, which indicates that the contributions from the electrons at the two valleys add up. We identify S_CS[a^ν] as the effective term describing the phonon Hall viscosity <cit.>. This response is universal in topologically nontrivial systems since it is directly linked to the Hall conductivity σ_xy. The coefficient of the phonon Hall viscosity, η_H, is the antisymmetric part of the general viscosity tensor η_ijkl <cit.>. It is dissipationless and exists only when TRS is broken.
As we show next, the phonon Hall viscosity will modify the phonon dynamics. For the sake of simplicity, here we restrict ourselves to a model of phonons defined by the Lagrangian ℒ_ph = (ρ_I/2)[(Q̇^ν_x)^2 + (Q̇^ν_y)^2 - (ω_0^νQ^ν_x)^2 - (ω_0^νQ^ν_y)^2], where ρ_I is the ionic mass density, Q^ν_x,y are the linearly polarized phonon displacements, and ω_0^ν is the Γ-point(q=0) frequency. This model describes doubly degenerate optical modes in the long-wavelength limit, but our theory applies generally to the modes that have the emergent gauge field description. We further assume C_4z rotation symmetry and thus the emergent gauge field is simply a^ν = (g^ν/ev_F) Q^ν. To switch on the phonon Hall viscosity η_H, an out-of-plane magnetic field B is applied to break the TRS. As a result, there is an additional phonon viscosity term in the phonon Lagrangian,
ℒ_η = η_H (Q^ν_y Q̇^ν_x - Q^ν_x Q̇^ν_y),
where η_H = σ_xy(g^ν)^2/(2e^2 v_F^2). We solve the equations of motion in the circularly polarized basis {Q_l/r^ν = (Q_x^ν± i Q_y^ν)/√(2)}. The frequencies of the left-handed and right-handed polarized modes are ω^ν_l/r = [(ω^ν_0)^2 + (η_H/ρ_I)^2]^1/2±η_H/ρ_I, respectively. We find a splitting of the phonon frequencies given by δω^ν = 2η_H/ρ_I that is proportional to η_H.
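This splitting can be verified by diagonalizing the first-order form of the Euler-Lagrange equations of ℒ_ph + ℒ_η. A minimal sketch follows, in unit-free form with hypothetical values ρ_I = 1, ω_0^ν = 1, η_H = 0.1 (chosen only for illustration):

```python
import numpy as np

# EOMs from L_ph + L_eta:  Qx'' = -w0^2 Qx - (2 eta/rho) Qy',
#                          Qy'' = -w0^2 Qy + (2 eta/rho) Qx'.
rho, w0, eta = 1.0, 1.0, 0.1

A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-w0**2, 0.0, 0.0, -2 * eta / rho],
              [0.0, -w0**2, 2 * eta / rho, 0.0]])

freqs = np.sort(np.abs(np.linalg.eigvals(A).imag))
print("numerical :", freqs)          # two pairs, at w_r and w_l
root = np.sqrt(w0**2 + (eta / rho)**2)
print("analytic  :", root - eta / rho, root + eta / rho)
```

The numerical eigenfrequencies land on ω_r = [(ω_0^ν)^2+(η_H/ρ_I)^2]^{1/2} - η_H/ρ_I and ω_l = [(ω_0^ν)^2+(η_H/ρ_I)^2]^{1/2} + η_H/ρ_I, with splitting δω^ν = 2η_H/ρ_I.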
To obtain the magnetic field dependence of phonon frequencies, we now turn to compute the Hall conductivity. In Dirac semimetals, the Dirac-cone dispersion transforms into Landau levels (LLs) in a magnetic field. For a finite concentration n_F of carriers (electrons or holes), the Fermi level ε_F is between the nth and (n+1)th LL. Under relatively weak field, the LLs are filled to a large index (|n|≫1) such that the Hall conductivity
response of the Dirac fermions is semiclassical. Hence we can use the Drude theory to calculate the Hall conductivity, σ_xy = σ_0ω_cτ_tr/(1+ω_c^2τ_tr^2), where σ_0 is the dc conductivity, ω_c is the cyclotron frequency, and τ_tr is the transport lifetime. The field dependence enters through the cyclotron frequency given by ω_c = eB/m^*_c (m^*_c is the cyclotron effective mass). As a result, we obtain the phonon frequencies as functions of B,
ω^ν_l/r = ω^ν_0[√(1+(ξμ_tr B/(1+μ_tr^2 B^2))^2) ± ξμ_tr B/(1+μ_tr^2 B^2)],
where ξ = σ_0(g^ν)^2/(2e^2 v_F^2ρ_Iω_0^ν) and the carrier mobility μ_tr = eτ_tr / m_c^*. In the quantum limit at large magnetic fields and low doping, one needs to go beyond the semiclassical approximation for σ_xy and calculate σ_xy quantum mechanically, i.e. using the Kubo formula [We consider the region where the phonon mode is not resonant with the excitations between the Landau levels in the quantum limit.].
Giant phonon magnetic moment.— The most important finding in Eq. (<ref>) is that the frequencies of left-handed and right-handed polarized phonons split linearly with B when the field is weak (B≪μ_tr^-1). This linear field dependence is identified as the phonon Zeeman effect <cit.>. It is due to the magnetic moment of phonons interacting with the applied external magnetic field via the Zeeman coupling of the form ħω^ν_l/r = ħω^ν_0±μ_ph·B. We have
μ_ph = [(g^ν)^2/(v_F^2 ρ_I B)] (ħ/2e^2) σ_xy,
where σ_xy(B→0)=σ_0 μ_trB. This is our main result: the phonon magnetic moment μ_ph is directly linked to the electrical Hall conductivity σ_xy through the phonon Hall viscosity η_H. Finally, we express μ_ph in the unit of Bohr magneton as μ_ph = σ_0 τ_tr (e v_F)^-2(g^ν)^2ρ_I^-1(m_e/m_c^*)μ_B in the limit of ω_c τ≪ 1.
Our theory is also applicable to gapped systems with broken TRS. In (2+1) dimensions, this can be shown by adding a mass term m_χv_F^2σ^z to Eq. (<ref>). Following the same derivation, we find a splitting of phonon energies,
ħδω^ν = [(g^ν)^2/(v_F^2ρ_I)] (ħ/e^2) σ_xy.
Here we set ε_F inside the gap, and σ_xy is the anomalous Hall conductance induced by the electronic Berry curvature of a filled band, with σ_xy = (e^2/4πħ)(m_+ / |m_+| - m_- / |m_-|) <cit.>.
Having established the general theory framework, we now apply it to concrete examples of Dirac semimetals. In general, the e-ph coupling matrix element g^ν_αβ is computed based on density functional perturbation theory (DFPT) <cit.>. However, since we are interested in the e-ph coupling in the form of Eq. (<ref>), we adopt the frozen-phonon approach. We compare the Dirac cone shifted by the phonons with its equilibrium position in k-space. The displacement, denoted K_χ - eχa^ν/ħ, provides a measure of the emergent gauge field. Below, the electronic band structure calculations are performed using the Quantum ESPRESSO package <cit.>. The phonon spectra and eigenvectors are calculated using DFPT for graphene, and the finite-displacement approach for Cd_3As_2, respectively. Details of the numerical calculations can be found in Ref. <cit.>.
We first focus on monolayer graphene, a material that hosts 2D massless Dirac fermions <cit.>. Figure 2(a) shows the calculated phonon spectrum of graphene. At the Γ point, there are two pairs of doubly-degenerate in-plane modes corresponding to the irreducible representations E_1u,E_2g of the D_6h point group. We consider the Raman-active E_2g pair, i.e., the G band <cit.>, consisting of the longitudinal optical (LO) and transverse optical (TO) modes. As shown in Fig. 2(b), they can be used to construct left-handed and right-handed circular modes. The corresponding emergent gauge fields are a^E_2g_x = (g^TO/ev_F)Q^TO and a^E_2g_y = -(g^LO/ev_F)Q^LO <cit.>.
Next we perform the first-principles simulation of a^E_2g. Our calculations cover all three optical modes. While the out-of-plane mode does not couple to electrons due to the σ_h symmetry, we find the LO and TO modes indeed induce gauge fields. In Fig. 2(c) and (d), electronic energy contours are plotted at the K_+ valley in equilibrium and in the presence of the TO phonon mode. We see that the Dirac cone is shifted from its equilibrium position. From the shift distance 0.05 Å^-1 and the phonon displacement amplitude 0.03 Å, the e-ph coupling matrix element g^TO is 9.7 eV·Å^-1. To establish the validity of our method, we recalculate g^TO directly using the DFPT <cit.>. The result g^TO_(DFPT) = 9.4 eV·Å^-1 is in good agreement with our gauge field calculation, justifying the emergent gauge approach.
We adopt the following parameters for graphene <cit.>: v_F ≈ 10^6 m·s^-1, ρ_I ≈ 7.63×10^-7 kg·m^-2, μ_tr≈10^4 cm^2·V^-1·s^-1, n_F≈10^11 cm^-2. The field dependence of ħω^E_2g_l/r is plotted in Fig. 2(e) according to Eq. (<ref>). At 0.5 Tesla, the splitting is 0.5 meV. The phonon magnetic moment given in Eq. (<ref>) is calculated to be μ_ph = 10.6 μ_B. Notably, a splitting of the E_2g phonons in graphene under strong magnetic fields when the phonons are in resonance with the magnetoexcitons has been reported in the literature <cit.>. Our theory provides a non-resonance mechanism for phonon splitting based on the phonon Hall viscosity under weak magnetic fields.
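As a quick numerical cross-check, the quoted splitting and moment can be reproduced from these parameters alone; in the weak-field limit the cyclotron mass cancels out of the moment. The sketch below is purely illustrative, and the small difference from the quoted 10.6 μ_B is within the rounding of the input parameters:

    import numpy as np

    hbar, e, m_e = 1.0546e-34, 1.6022e-19, 9.109e-31  # SI units

    v_F   = 1.0e6            # m/s
    rho_I = 7.63e-7          # kg/m^2 (2D mass density of graphene)
    mu_tr = 1.0              # m^2 V^-1 s^-1  (= 10^4 cm^2 V^-1 s^-1)
    n_F   = 1.0e15           # m^-2           (= 10^11 cm^-2)
    g_eph = 9.7 * e / 1e-10  # 9.7 eV/Angstrom -> J/m

    sigma_0 = n_F * e * mu_tr  # 2D Drude dc conductivity

    # left-right splitting at B = 0.5 T from the frequency formula above:
    B = 0.5
    dE = hbar * sigma_0 * g_eph**2 * mu_tr * B / (
        e**2 * v_F**2 * rho_I * (1.0 + (mu_tr * B)**2))
    print(dE / e * 1e3)  # ~0.5 meV

    # weak-field phonon magnetic moment in units of mu_B; m_c* drops out:
    print(n_F * mu_tr**2 * g_eph**2 * m_e / (e**2 * v_F**2 * rho_I))  # ~11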
Finally, we turn to Cd_3As_2, an archetypal 3D Dirac semimetal. The electronic bands in Fig. 3(a) are calculated from the experimentally determined crystal structure with the space group I4_1/acd <cit.>. Two 3D massless Dirac cones are located at momenta K_χ = (0,0,χ k_z0). Figure 3(b) shows the phonon spectrum of Cd_3As_2 in the THz frequency range. We focus on the Γ-point in-plane optical phonons that are doubly degenerate. Depending on whether the modes are symmetric or antisymmetric under inversion, they belong to the irreducible representations E_g, E_u of the point group D_4h, respectively. As showcase examples, we choose the Γ_8 branch with E_u and the Γ_11 branch with E_g in our first-principles calculations. We find the infrared-active E_u mode does not induce a gauge field, forbidden by the inversion symmetry <cit.>. For the Raman-active E_g mode, an emergent gauge field is allowed as plotted in the Fig. 3(d).
For an intuitive understanding, we develop a phonon-modulated tight-binding model to derive the emergent gauge field in Cd_3 As_2.
The low-energy electronic properties near the Fermi level can be described by an effective tetragonal lattice with Cd-5s and As-4p orbitals on each site <cit.>. This model can capture the e-ph coupling of the acoustic modes <cit.>. To account for the optical modes, we formally double the unit cell and thus fold the Brillouin zone along z-axis. The ion displacement associated with the optical modes are shown in Fig. 3(e). Because of the relative rotation of s and p orbitals between neighboring sites, the inter-orbital hopping integrals along z become nonzero. We find the modification to Hamiltonian from the E_u mode does not contribute a gauge field, consistent with the first-principles calculations <cit.>. For the E_g mode, we obtain an effective e-ph coupling Hamiltonian in the basis of {c^†_s↑,k, c^†_p↑,k,c^†_s↓,k, c^†_p↓,k}|0⟩,
δℋ_latt^E_g = (tβ/l_z) sin(k_z l_z) (Q^E_g_x σ^z τ^x + Q^E_g_y τ^y),
where t is the hopping integral, β is the Grüneisen parameter, l_z is the bond-length along z, and σ_j,τ_j are the Pauli matrices in spin and orbital space, respectively. Expanding at the valley K_χ and comparing with the effective total Hamiltonian (<ref>), we obtain the emergent gauge field: a^E_g = (tβ k_z0/e v_F)Q^E_g.
Due to the Zeeman interaction, the external magnetic field splits the Dirac points into Weyl nodes in Cd_3As_2 <cit.>. The Weyl nodes are monopole sources of the Berry curvature that open a channel for intrinsic Hall conductivity. Hence our calculations of Hall conductivity need to include a Zeeman contribution given by σ_xy^Zeeman = (e^2/4π^2ħ)κ_z, where κ_z is the distance between the Weyl nodes <cit.>. We adopt the following parameters for Cd_3As_2 <cit.>: v_F ≈ 1.5×10^6 m·s^-1, ρ_I ≈ 3.03 g·cm^-3, n_F ≈ 1.2×10^19 cm^-3, μ_tr≈ 3.2×10^5 cm^2·V^-1·s^-1. The e-ph coupling matrix element g^E_g from our first-principles calculations is 79.5 meV·Å^-1. The phonon magnetic moment is μ_ph=1.04 μ_B. Interestingly, the order of magnitude of the calculated magnetic moment agrees well with the experiment <cit.>, even though we considered Raman-active modes while the experiment measured the infrared-active mode.
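The same semiclassical estimate carries over to the 3D case with the parameters above. A sketch retaining only the Drude part of σ_xy (the Zeeman-induced Weyl contribution of this paragraph would add to it):

    e, m_e = 1.6022e-19, 9.109e-31   # SI units
    v_F, rho_I = 1.5e6, 3.03e3        # m/s, kg/m^3
    n_F, mu_tr = 1.2e25, 32.0         # m^-3, m^2 V^-1 s^-1
    g_eph = 79.5e-3 * e / 1e-10       # 79.5 meV/Angstrom -> J/m

    mu_ph = n_F * mu_tr**2 * g_eph**2 * m_e / (e**2 * v_F**2 * rho_I)
    print(mu_ph)                      # ~1.0, consistent with the quoted 1.04 mu_B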
Conclusion and outlook.— We have established a theoretical framework to calculate the phonon magnetic moment in doped Dirac semimetals by treating the phonon as an emergent gauge field. We find that the phonon magnetic moment is directly proportional to the Hall conductivity, indicating that a significant enhancement can be achieved with high carrier concentration and carrier mobility. Our theory is combined with first-principles calculations, allowing us to apply it quantitatively to realistic materials. Magnetic moments are found to be on the order of the Bohr magneton for the optical modes in graphene and Cd_3As_2.
These modes are Raman active, and their magnetic moments can be measured through the phonon Zeeman splitting under magnetic fields using Raman spectroscopy <cit.>. Our results also pave the way for subsequent extensions to infrared-active modes. For future experimental investigations, our theory offers tangible directions in the search for large phonon magnetic moments in topological quantum materials.
Acknowledgments.— We thank Alexander Balatsky and Prashant Padmanabhan for the helpful discussion. The work at UW was supported by DOE Award No. DE-SC0012509. The work at LANL was carried out under the auspices of the U.S. DOE NNSA under contract No. 89233218CNA000001 through the LDRD Program, and was supported by the Center for Nonlinear Studies at LANL, and was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. DOE Office of Science, under user proposals #2018BU0010 and #2018BU0083.
[1] L. Zhang and Q. Niu, Chiral phonons at high-symmetry points in monolayer hexagonal lattices, Phys. Rev. Lett. 115, 115502 (2015).
[2] H. Zhu, J. Yi, M.-Y. Li, J. Xiao, L. Zhang, C.-W. Yang, R. A. Kaindl, L.-J. Li, Y. Wang, and X. Zhang, Observation of chiral phonons, Science 359, 579 (2018).
[3] K. Ishito, H. Mao, Y. Kousaka, Y. Togawa, S. Iwasaki, T. Zhang, S. Murakami, J.-i. Kishine, and T. Satoh, Truly chiral phonons in α-HgS, Nature Physics 19, 35 (2023).
[4] H. Ueda, M. García-Fernández, S. Agrestini, C. P. Romao, J. van den Brink, N. A. Spaldin, K.-J. Zhou, and U. Staub, Chiral phonons in quartz probed by X-rays, Nature 618, 946 (2023).
[5] L. Zhang and Q. Niu, Angular momentum of phonons and the Einstein–de Haas effect, Phys. Rev. Lett. 112, 085503 (2014).
[6] C. Dornes, Y. Acremann, M. Savoini, M. Kubli, M. J. Neugebauer, E. Abreu, L. Huber, G. Lantz, C. A. Vaz, H. Lemke, et al., The ultrafast Einstein–de Haas effect, Nature 565, 209 (2019).
[7] G. Grissonnanche, A. Legros, S. Badoux, E. Lefrançois, V. Zatko, M. Lizaire, F. Laliberté, A. Gourgout, J.-S. Zhou, S. Pyon, et al., Giant thermal Hall conductivity in the pseudogap phase of cuprate superconductors, Nature 571, 376 (2019).
[8] X. Li, B. Fauqué, Z. Zhu, and K. Behnia, Phonon thermal Hall effect in strontium titanate, Phys. Rev. Lett. 124, 105901 (2020).
[9] G. Grissonnanche, S. Thériault, A. Gourgout, M.-E. Boulanger, E. Lefrançois, A. Ataei, F. Laliberté, M. Dion, J.-S. Zhou, S. Pyon, et al., Chiral phonons in the pseudogap phase of cuprates, Nature Physics 16, 1108 (2020).
[10] S. Park and B.-J. Yang, Phonon angular momentum Hall effect, Nano Letters 20, 7694 (2020).
[11] M.-E. Boulanger, G. Grissonnanche, S. Badoux, A. Allaire, É. Lefrançois, A. Legros, A. Gourgout, M. Dion, C. Wang, X. Chen, et al., Thermal Hall conductivity in the cuprate Mott insulators Nd2CuO4 and Sr2CuO2Cl2, Nature Communications 11, 5325 (2020).
[12] L. Chen, M.-E. Boulanger, Z.-C. Wang, F. Tafti, and L. Taillefer, Large phonon thermal Hall conductivity in the antiferromagnetic insulator Cu3TeO6, Proceedings of the National Academy of Sciences 119, e2208016119 (2022).
[13] T. Saito, K. Misaki, H. Ishizuka, and N. Nagaosa, Berry phase of phonons and thermal Hall effect in nonmagnetic insulators, Phys. Rev. Lett. 123, 255901 (2019).
[14] B. Flebus and A. H. MacDonald, Phonon Hall viscosity of ionic crystals, Phys. Rev. Lett. 131, 236301 (2023).
[15] T. F. Nova, A. Cartella, A. Cantaluppi, M. Först, D. Bossini, R. V. Mikhaylovskiy, A. V. Kimel, R. Merlin, and A. Cavalleri, An effective magnetic field from optically driven phonons, Nature Physics 13, 132 (2017).
[16] J. Luo, T. Lin, J. Zhang, X. Chen, E. R. Blackert, R. Xu, B. I. Yakobson, and H. Zhu, Large effective magnetic fields from chiral phonons in rare-earth halides, Science 382, 698 (2023).
[17] M. Basini, M. Pancaldi, B. Wehinger, M. Udina, V. Unikandanunni, T. Tadano, M. C. Hoffmann, A. V. Balatsky, and S. Bonetti, Terahertz electric-field-driven dynamical multiferroicity in SrTiO3, Nature 628, 534 (2024).
[18] D. M. Juraschek, P. Narang, and N. A. Spaldin, Phono-magnetic analogs to opto-magnetic effects, Phys. Rev. Res. 2, 043035 (2020).
[19] D. M. Juraschek, T. Neuman, and P. Narang, Giant effective magnetic fields from optically driven chiral phonons in 4f paramagnets, Phys. Rev. Res. 4, 013129 (2022).
[20] D. Shin, H. Hübener, U. De Giovannini, H. Jin, A. Rubio, and N. Park, Phonon-driven spin-Floquet magneto-valleytronics in MoS2, Nature Communications 9, 638 (2018).
[21] Y. Ren, M. Rudner, and D. Xiao, Light-driven spontaneous phonon chirality and magnetization in paramagnets, Phys. Rev. Lett. 132, 096702 (2024).
[22] R. M. Geilhufe, V. Juričić, S. Bonetti, J.-X. Zhu, and A. V. Balatsky, Dynamically induced magnetism in KTaO_3, Phys. Rev. Res. 3, L022011 (2021).
[23] T. Kahana, D. A. B. Lopez, and D. M. Juraschek, Light-induced weak ferromagnetism through nonlinear magnonic rectification, arXiv:2305.18656 [cond-mat.mtrl-sci] (2023).
[24] D. M. Juraschek, M. Fechner, A. V. Balatsky, and N. A. Spaldin, Dynamical multiferroicity, Phys. Rev. Mater. 1, 014401 (2017).
[25] D. M. Juraschek and N. A. Spaldin, Orbital magnetic moments of phonons, Phys. Rev. Mater. 3, 064405 (2019).
[26] B. Cheng, T. Schumann, Y. Wang, X. Zhang, D. Barbalas, S. Stemmer, and N. P. Armitage, A large effective phonon magnetic moment in a Dirac semimetal, Nano Letters 20, 5991 (2020).
[27] A. Baydin, F. G. G. Hernandez, M. Rodriguez-Vega, A. K. Okazaki, F. Tay, G. T. Noe, I. Katayama, J. Takeda, H. Nojiri, P. H. O. Rappl, E. Abramof, G. A. Fiete, and J. Kono, Magnetic control of soft chiral phonons in PbTe, Phys. Rev. Lett. 128, 075901 (2022).
[28] F. G. G. Hernandez, A. Baydin, S. Chaudhary, F. Tay, I. Katayama, J. Takeda, H. Nojiri, A. K. Okazaki, P. H. O. Rappl, E. Abramof, M. Rodriguez-Vega, G. A. Fiete, and J. Kono, Observation of interplay between phonon chirality and electronic band topology, Science Advances 9, eadj4074 (2023).
[29] L. Dong and Q. Niu, Geometrodynamics of electrons in a crystal under position and time-dependent deformation, Phys. Rev. B 98, 115162 (2018).
[30] L. Trifunovic, S. Ono, and H. Watanabe, Geometric orbital magnetization in adiabatic processes, Phys. Rev. B 100, 054408 (2019).
[31] Y. Ren, C. Xiao, D. Saparov, and Q. Niu, Phonon magnetic moment from electronic topological magnetization, Phys. Rev. Lett. 127, 186403 (2021).
[32] X.-W. Zhang, Y. Ren, C. Wang, T. Cao, and D. Xiao, Gate-tunable phonon magnetic moment in bilayer graphene, Phys. Rev. Lett. 130, 226302 (2023).
[33] N. P. Armitage, E. J. Mele, and A. Vishwanath, Weyl and Dirac semimetals in three-dimensional solids, Rev. Mod. Phys. 90, 015001 (2018).
[34] G. D. Mahan, Many-Particle Physics (Springer New York, NY, 2000).
[35] F. Giustino, Electron-phonon interactions from first principles, Rev. Mod. Phys. 89, 015003 (2017).
[36] H. Suzuura and T. Ando, Phonons and electron-phonon scattering in carbon nanotubes, Phys. Rev. B 65, 235412 (2002).
[37] J. L. Mañes, Symmetry-based approach to electron-phonon interactions in graphene, Phys. Rev. B 76, 045430 (2007).
[38] M. Vozmediano, M. Katsnelson, and F. Guinea, Gauge fields in graphene, Physics Reports 496, 109 (2010).
[39] A. Cortijo, Y. Ferreirós, K. Landsteiner, and M. A. H. Vozmediano, Elastic gauge fields in Weyl semimetals, Phys. Rev. Lett. 115, 177202 (2015).
[40] L.-H. Hu, J. Yu, I. Garate, and C.-X. Liu, Phonon helicity induced by electronic Berry curvature in Dirac materials, Phys. Rev. Lett. 127, 125901 (2021).
[41] D. Tong, Gauge Theory, lecture notes (2018), http://www.damtp.cam.ac.uk/user/tong/gaugetheory.html.
[42] M. Barkeshli, S. B. Chung, and X.-L. Qi, Dissipationless phonon Hall viscosity, Phys. Rev. B 85, 245107 (2012).
[43] J. E. Avron, R. Seiler, and P. G. Zograf, Viscosity of quantum Hall fluids, Phys. Rev. Lett. 75, 697 (1995).
[44] Note: We consider the region where the phonon mode is not resonant with the excitations between the Landau levels in the quantum limit.
[45] F. D. M. Haldane, Model for a quantum Hall effect without Landau levels: Condensed-matter realization of the "parity anomaly", Phys. Rev. Lett. 61, 2015 (1988).
[46] S. Baroni, S. De Gironcoli, A. Dal Corso, and P. Giannozzi, Phonons and related crystal properties from density-functional perturbation theory, Rev. Mod. Phys. 73, 515 (2001).
[47] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, et al., QUANTUM ESPRESSO: A modular and open-source software project for quantum simulations of materials, Journal of Physics: Condensed Matter 21, 395502 (2009).
[48] See supplemental materials at [url] for details of the electron-phonon coupling, the phonon-modulated tight-binding model for Cd_3As_2, the Hall conductivity of Cd_3As_2, and the first-principles calculations.
[49] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, The electronic properties of graphene, Rev. Mod. Phys. 81, 109 (2009).
[50] L. Malard, M. Pimenta, G. Dresselhaus, and M. Dresselhaus, Raman spectroscopy in graphene, Physics Reports 473, 51 (2009).
[51] P. Zhao, C. H. Sharma, R. Liang, C. Glasenapp, L. Mourokh, V. M. Kovalev, P. Huber, M. Prada, L. Tiemann, and R. H. Blick, Acoustically induced giant synthetic Hall voltages in graphene, Phys. Rev. Lett. 128, 256601 (2022).
[52] C. Faugeras, M. Amado, P. Kossacki, M. Orlita, M. Kühne, A. A. L. Nicolet, Y. I. Latyshev, and M. Potemski, Magneto-Raman scattering of graphene on graphite: Electronic and phonon excitations, Phys. Rev. Lett. 107, 036807 (2011).
[53] S. Rémi, B. B. Goldberg, and A. K. Swan, Charge tuning of nonresonant magnetoexciton phonon interactions in graphene, Phys. Rev. Lett. 112, 056803 (2014).
[54] M. O. Goerbig, J.-N. Fuchs, K. Kechedzhi, and V. I. Fal'ko, Filling-factor-dependent magnetophonon resonance in graphene, Phys. Rev. Lett. 99, 087402 (2007).
[55] M. O. Goerbig, Electronic properties of graphene in a strong magnetic field, Rev. Mod. Phys. 83, 1193 (2011).
[56] M. N. Ali, Q. Gibson, S. Jeon, B. B. Zhou, A. Yazdani, and R. J. Cava, The crystal and electronic structures of Cd_3As_2, the three-dimensional electronic analogue of graphene, Inorganic Chemistry 53, 4062 (2014).
[57] Z. Wang, H. Weng, Q. Wu, X. Dai, and Z. Fang, Three-dimensional Dirac semimetal and quantum transport in Cd_3As_2, Phys. Rev. B 88, 125427 (2013).
[58] H. Shapourian, T. L. Hughes, and S. Ryu, Viscoelastic response of topological tight-binding models in two and three dimensions, Phys. Rev. B 92, 165131 (2015).
[59] D. I. Pikulin, A. Chen, and M. Franz, Chiral anomaly from strain-induced gauge fields in Dirac and Weyl semimetals, Phys. Rev. X 6, 041021 (2016).
[60] Z. Wang, Y. Sun, X.-Q. Chen, C. Franchini, G. Xu, H. Weng, X. Dai, and Z. Fang, Dirac semimetal and topological phase transitions in A_3Bi (A = Na, K, Rb), Phys. Rev. B 85, 195320 (2012).
[61] J. Cano, B. Bradlyn, Z. Wang, M. Hirschberger, N. P. Ong, and B. A. Bernevig, Chiral anomaly factory: Creating Weyl fermions with a magnetic field, Phys. Rev. B 95, 161306 (2017).
[62] T. Liang, Q. Gibson, M. N. Ali, M. Liu, R. Cava, and N. Ong, Ultrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd_3As_2, Nature Materials 14, 280 (2015).
[63] G. Schaack, Observation of circularly polarized phonon states in an external magnetic field, Journal of Physics C: Solid State Physics 9, L297 (1976).
[64] G. Schaack, Magnetic field dependent splitting of doubly degenerate phonon states in anhydrous cerium-trichloride, Zeitschrift für Physik B Condensed Matter 26, 49 (1977).
|
http://arxiv.org/abs/2405.09889v1 | 20240516081458 | Flow and Equation of State of nuclear matter at $\mathbf{E_{\mathrm{kin}}}$/A=0.25-1.5 GeV with the SMASH transport approach | [
"Lucia Anna Tarasovičová",
"Justin Mohs",
"Anton Andronic",
"Hannah Elfner",
"Karl-Heinz Kampert"
] | nucl-th | [
"nucl-th"
] |
L. A. Tarasovičová (ORCID: 0000-0001-5086-8658)
Pavol Jozef Šafárik University, Šrobárova 2, 04011 Košice, Slovakia

J. Mohs (ORCID: 0000-0001-8437-0946)
Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
Institute for Theoretical Physics, Goethe University, Frankfurt am Main, Germany

A. Andronic (ORCID: 0000-0002-2372-6117)
Institut für Kernphysik, Universität Münster, Germany

H. Elfner (ORCID: 0000-0002-6213-3613)
GSI Helmholtzzentrum für Schwerionenforschung, Planckstr. 1, 64291 Darmstadt, Germany
Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
Institute for Theoretical Physics, Goethe University, Frankfurt am Main, Germany
Helmholtz Research Academy Hesse for FAIR (HFHF), GSI Helmholtz Center, Campus Frankfurt, Max-von-Laue-Straße 12, 60438 Frankfurt am Main, Germany

K.-H. Kampert (ORCID: 0000-0002-2805-0195)
University Wuppertal, Department of Physics, 42117 Wuppertal, Germany
We present a comparison of directed and elliptic flow data by the FOPI collaboration in Au–Au, Xe–CsI, and Ni–Ni collisions at beam kinetic energies from 0.25 to 1.5 GeV per nucleon to simulations using the SMASH hadronic transport model.
The Equation of State is parameterized as a function of nuclear density and momentum dependent potentials are newly introduced in SMASH.
With a statistical analysis, we show that the collective flow data at lower energies is in best agreement with a soft momentum dependent potential, while the elliptic flow at higher energies requires a harder momentum dependent Equation of State.
§ INTRODUCTION
The Equation of State (EoS) of nuclear matter at densities a few times the normal nuclear matter density has recently attracted increased attention because it influences the properties of neutron stars and neutron star mergers, with the latter now being probed by gravitational wave interferometers, see e.g.<cit.>.
Independent constraints on the EoS are provided by laboratory experiments on heavy-ion collisions performed at beam kinetic energies in the range E_kin/A ∼ 0.1 to a few GeV per nucleon in the laboratory frame <cit.>.
Through comparisons of the measured collective flow data with transport model calculations, a range of constraints has been achieved over the last decades, see e.g. <cit.>.
Further information about the EoS from heavy-ion collisions was extracted using the production of strange mesons below threshold <cit.>.
Moreover, it has recently been shown that by combining data from astrophysical multi-messenger observations and flow measurements in heavy-ion collisions within a Bayesian analysis of thousands of nuclear theory motivated EoS versions <cit.>, further important constraints on the EoS can be achieved.
This can start a new multidisciplinary field of science.
Nevertheless, the uncertainties in the EoS in the range of densities ρ_B/ρ_0 ≃ 1-3 (with ρ_0 being the normal nuclear matter density) remain large up to this day <cit.> and are to some extent model dependent.
As a simple parametrization of the EoS may not be able to describe consistently the flow data across a broad range of beam energies, Bayesian approaches were recently successfully employed, leading to indications of a softening of the EoS at densities ρ_B/ρ_0 ≃ 3-4 <cit.>.
The variability in the transport model approaches and also the constraints on symmetric matter EoS from flow (and other observables) in heavy-ion collisions (see e.g. <cit.>) led to the Transport Model Evaluation Project (TMEP) <cit.>.
The SMASH transport model <cit.>, which we use in the present paper, is part of the TMEP.
In this paper, we compare model calculations with FOPI data on the directed and
elliptic flow coefficients v_1 <cit.> and v_2 <cit.>, respectively, spanning beam energies from E_kin/A = 0.25 to 1.5 GeV.
SMASH has already been employed for the description of HADES data at 1.2 GeV per nucleon <cit.> and, in the multi-GeV range, for the description of the STAR BES data <cit.>, but its application to beam energies down to E_kin/A = 0.25 GeV is studied here for the first time.
§ MODEL DESCRIPTION
The transport model SMASH <cit.> is a Boltzmann-Uehling-Uhlenbeck-type model with open-source code[Wergieluk, A. (2024) ‘smash-transport/smash: SMASH-3.1’. Zenodo. doi: 10.5281/zenodo.10707746.], used over a wide range of collision energies, either standalone <cit.> or as an afterburner for hydrodynamic calculations <cit.>.
A large set of 208 stable hadrons and resonances is included in the model, and Pauli blocking is taken into account.
We refer the reader to <cit.> for further details.
As nuclear potentials can be related to the EoS, which plays a major role for the description of a heavy-ion collision in the energy range considered here, we will first describe the ones employed in this work.
§.§ Potentials
We incorporate a Skyrme and a symmetry potential of the form
U_Skyrme = A (ρ_B/ρ_0) + B( ρ_B/ρ_0)^τ
U_symmetry = ± 2S_potρ_I3/ρ_0 ,
where ρ_B denotes the baryon density, ρ_0 the saturation density, and A, B, and τ are parameters as given in Table <ref>.
The sign in the symmetry potential depends on the sign of the isospin of the considered particle. ρ_I3 denotes the density of the relative isospin projection I_3/I and S_pot is a constant which is fixed to 18 MeV as agreed in the code comparison effort <cit.>.
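For concreteness, a minimal sketch of the two potentials above is given below. The numerical values of A, B, and τ belong to Table <ref>, which is not reproduced in this text; the defaults used here are placeholders for a typical soft parametrization, not the published values.

    def u_skyrme(rho_b, rho_0=0.168, A=-209.2, B=156.4, tau=1.35):
        """Skyrme potential (MeV); densities in fm^-3. A, B, tau are placeholders."""
        x = rho_b / rho_0
        return A * x + B * x**tau

    def u_symmetry(rho_i3, isospin_sign, rho_0=0.168, S_pot=18.0):
        """Symmetry potential (MeV); sign set by the particle's isospin projection."""
        return isospin_sign * 2.0 * S_pot * rho_i3 / rho_0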
We further add a momentum-dependent term to the potential for which we use the form suggested in Ref. <cit.>
U_MD(ρ,𝐩) = (2C/ρ_0) g ∫ d^3p'/(2πħ)^3 f(𝐫, 𝐩')/[1+((𝐩-𝐩')/(ħΛ))^2] .
Here, 𝐩 denotes the momentum three-vector of the particle of interest, g is the degeneracy factor and C and Λ are parameters, see Table <ref>.
In order to simplify the integral, we follow the implementation in GiBUU <cit.> and apply a cold nuclear matter approximation for the distribution function f(𝐫,𝐩) = Θ(p_F(ρ_B(𝐫))-|𝐩|) so that the integral can be solved analytically.
Consistent with this approximation, the degeneracy factor is g=4 due to spin and isospin of nucleons.
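With the step-function distribution, the momentum integral above reduces to a known closed form (the Welke-type expression also used in GiBUU). A sketch follows; the values of C and Λ are placeholders standing in for the Table <ref> parameters.

    import numpy as np

    HBARC = 197.327  # MeV fm

    def u_md(rho_b, p, C=-75.0, lam=400.0, rho_0=0.168, g=4):
        """Momentum-dependent potential (MeV) at baryon density rho_b (fm^-3)
        and momentum p (MeV/c), valid for p > 0. C (MeV) and lam (MeV/c) are
        placeholder values."""
        p_f = HBARC * (6.0 * np.pi**2 * rho_b / g)**(1.0 / 3.0)  # Fermi momentum
        x, y = p / lam, p_f / lam
        # closed form of  int d^3p' Theta(p_F - |p'|) / (1 + ((p - p')/Lambda)^2):
        integral = np.pi * lam**3 * (
            (y**2 + 1.0 - x**2) / (2.0 * x)
            * np.log(((x + y)**2 + 1.0) / ((x - y)**2 + 1.0))
            + 2.0 * y
            - 2.0 * (np.arctan(x + y) - np.arctan(x - y)))
        return (2.0 * C / rho_0) * g / (2.0 * np.pi * HBARC)**3 * integral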
As the potentials are not written in a covariant form, one has to fix the frame in which they are evaluated for a relativistic description of the system.
We therefore calculate the single particle potentials using the above equations only in the local rest frame (LRF) and find the energy in the calculation frame making use of the invariance of p_μ p^μ and solving
E_calc^2 - c^2 p_calc^2 = E_LRF^2- c^2 p_LRF^2
for E_calc, which is the energy in the calculation frame for a given momentum 𝐩_calc in that frame.
Note that since 𝐩_LRF is obtained by boosting the four-momentum from the calculation frame to the local rest frame and E_LRF = √(m^2 c^4 + c^2 p_LRF^2)+U(ρ,p), the equation needs to be solved numerically.
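The corresponding root finding can be sketched as follows (one spatial dimension for brevity, natural units with c = 1; the bracket passed to the solver is assumed to contain the root):

    import numpy as np
    from scipy.optimize import brentq

    def e_calc(p_calc, v_lrf, m, rho, U):
        """Energy in the calculation frame for momentum p_calc, given the
        local-rest-frame velocity v_lrf and a single-particle potential U(rho, p)."""
        gamma = 1.0 / np.sqrt(1.0 - v_lrf**2)

        def residual(E):
            p_lrf = gamma * (p_calc - v_lrf * E)   # boost momentum to the LRF
            E_lrf = np.sqrt(m**2 + p_lrf**2) + U(rho, abs(p_lrf))
            return (E**2 - p_calc**2) - (E_lrf**2 - p_lrf**2)

        return brentq(residual, 0.5 * m, 10.0 * (m + abs(p_calc)))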
The gradient of the energy in the calculation frame is obtained using finite difference with a lattice spacing of 1 fm and the momenta are updated after each time step of 0.1 fm/c according to
𝐩̇_calc = -∇ E_calc .
The parameters of the potentials A, B, τ, C and Λ are given in Table <ref> and are fixed for a given incompressibility κ to reproduce nuclear ground state properties and the optical potential <cit.>.
For the evaluation of the nuclear potentials, the density needs to be calculated.
A Lorentz-contracted Gaussian smearing kernel is applied in order to obtain a smooth density profile (see Ref. <cit.> for details on the smearing kernel).
As described in the following section, we present calculations with coalescence and with dynamic light nuclei formation via scatterings.
The calculations with coalescence use only one test particle to keep the coalescence simple. In this case, we obtain good statistics for the density calculation using 300 parallel ensembles.
For the dynamic formation of light nuclei, we use 25 test particles.
The collision time evolution of the average density in the central cell of the participant zone at E_kin/A = 0.4 GeV is shown in Fig. <ref> (left) for two impact parameter ranges of Au–Au collisions. The impact parameter range shown for Ni–Ni collisions corresponds to the more central Au–Au class. For both systems,
soft and hard EoS parametrizations are compared. As expected, a soft EoS leads to larger average densities than the hard EoS. While differences between the two EoS parametrizations are significant, the dependence on the system size is rather weak. The duration of the dense phase is shorter in Ni–Ni compared to Au–Au following the expectation.
In Fig. <ref> (right), the evolution of the directed flow coefficient v_1 (see below) with time for two centralities in Au–Au collisions is shown.
The directed flow is larger for the stiffer EoS.
It mainly builds up between 10 and 30 fm/c and continues to rise at later times when the density is already significantly smaller in the central cell.
We observe that the directed flow builds up earlier and more rapidly with a stiff EoS, which can be associated to a stronger bounce off compared with the case of soft EoS. Our simulations for the results shown further on extend to 100 fm/c.
§.§ Light nuclei formation
Another important aspect of modeling heavy-ion collisions in the considered energy range is the formation of light nuclei, as a large fraction of nucleons is bound <cit.>.
Light nuclei exhibit a stronger sensitivity to collective flow (see Ref. <cit.> and references therein).
The mechanisms of light nuclei formation are interesting per se and are currently investigated over a broad range of collision energies in SMASH <cit.>, within the PHSD model <cit.>, or in kinetic approaches <cit.>.
In this work, we mainly apply a coalescence model where light nuclei are identified in the final state of the calculation <cit.>.
In order to decide whether a pair of nucleons or nuclei coalesce, we boost to their two-particle center-of-mass (c.m.s.) frame, find the latest time where one of the two has taken part in a collision, and determine the positions of the candidates at that time.
If the distance of the two in the c.m.s. frame is smaller than a threshold of 3 fm and the momentum difference, also in the c.m.s. frame, is below the threshold of 300 MeV/c, coalescence is possible.
In this setup, we calculate the densities for the potentials using 300 parallel ensembles and the light nuclei are identified in each ensemble separately.
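A sketch of this pairwise test is shown below (units of fm and GeV, c = 1). For brevity, the relative distance is evaluated after free-streaming both candidates to the later of the two last-collision times rather than after fully transforming the positions to the c.m.s.; a complete implementation would also boost the coordinates.

    import numpy as np

    def coalesce(p1, x1, t1, p2, x2, t2, dr_max=3.0, dp_max=0.3):
        """p = four-momentum (E, px, py, pz); x = position at the last collision
        time t. Returns True if the pair passes the coalescence criterion."""
        P = p1 + p2
        beta = P[1:] / P[0]                      # velocity of the pair c.m.s.
        b2 = beta @ beta                         # assumed nonzero
        gam = 1.0 / np.sqrt(1.0 - b2)

        def boost(p):                            # standard Lorentz boost
            bp = beta @ p[1:]
            return np.concatenate(
                ([gam * (p[0] - bp)],
                 p[1:] + ((gam - 1.0) * bp / b2 - gam * p[0]) * beta))

        q1, q2 = boost(p1), boost(p2)
        t = max(t1, t2)                          # later of the last collisions
        r1 = x1 + p1[1:] / p1[0] * (t - t1)      # free streaming to a common time
        r2 = x2 + p2[1:] / p2[0] * (t - t2)
        return (np.linalg.norm(r1 - r2) < dr_max and
                np.linalg.norm(q1[1:] - q2[1:]) < dp_max)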
We also performed our calculations treating all light nuclei with mass number A≤ 3 as active degrees of freedom.
In this setting, deuterons are mainly produced from a proton and a neutron in the reaction p+n+N↔ d+N, where N is a nucleon acting as a catalyst.
A=3 nuclei are produced in a similar way in 4↔ 2 reactions from their three constituents and a catalyst.
The 3↔ 2 and the 4↔ 2 interactions are performed using the stochastic collision criterion <cit.>.
As the stochastic collision criterion requires a sufficiently large number of test particles, we represent each particle with 25 test particles.
This method is not available for producing larger nuclei, which are important close to the projectile and target rapidities. Therefore, we focus mainly on the results from coalescence but present elliptic flow calculations at midrapidity using the stochastic production of light nuclei.
§ ANALYSIS DETAILS
§.§ Observables and kinematic variables
Pressure gradients in the initial stage of the collisions lead to collective expansion of the compressed and heated matter.
At energies discussed in this work, the presence of the spectator part of the colliding nuclei plays an important role in the final particle anisotropies observed in non-central collisions.
The resulting spatial (early) anisotropy translates into the azimuthal anisotropy of the final-state particles, which are usually quantified by the coefficients in a Fourier expansion of the azimuthal distribution of these particles
dN/ dφ∝ 1 + 2 ∑_n=1^∞ v_ncos (nφ),
where φ is measured with respect to the reaction plane.
The Fourier coefficients and are measures of the directed and elliptical flow, respectively.
These can be calculated from the azimuthal distribution as follows
v_1 = ⟨cosφ⟩, v_2 = ⟨cos (2φ) ⟩.
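Since the reaction plane is known exactly in the simulation (it can be taken as the x–z plane), these estimators reduce to simple event averages; a minimal sketch:

    import numpy as np

    def flow_coefficients(px, py):
        """v1 and v2 from the azimuthal angles relative to the reaction plane."""
        phi = np.arctan2(py, px)
        return np.mean(np.cos(phi)), np.mean(np.cos(2.0 * phi))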
Following published data, the results are presented as a function of c.m.s. transverse momentum per nucleon and rapidity, both normalized to the projectile values in c.m.s.
These are defined as p_T^(0) = (p_T/A)/(p_P^c.m./A_P) and y^(0) = (y/y_P)^c.m.s., respectively.
The subscript P denotes characteristics of the projectile.
In the following, these quantities are called transverse momentum and rapidity for simplicity, although they are normalized quantities.
The impact of the detector acceptance loss for very forward angles (θ_ lab<1.2^∘) was tested and found negligible.
Thus, this geometrical cut is not applied in the simulations.
§.§ Centrality selection
The default way for the collision centrality selection in the SMASH simulations is a direct setting of the impact parameter b.
Since b cannot be measured directly in experiment, the collisions in the experimental data are divided based on the charged-particle multiplicity and on the variable E_rat, defined as
E_ rat = ∑_i E_⊥,i/∑_i E_||,i ,
where the sums run over the transverse and longitudinal c.m.s. kinetic energy components of all the particles detected in an event.
First, the multiplicity distribution is divided into classes, based on percentiles of the inelastic (geometric) cross section.
Then, based on the correlation between E_rat and the multiplicity, the collisions with the highest multiplicities and highest E_rat are selected as the most central collisions.
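A sketch of the E_rat estimator defined above is given below; we read the transverse and longitudinal kinetic-energy components as the sin²θ and cos²θ projections of each particle's c.m.s. kinetic energy, which is our assumption about the exact decomposition.

    import numpy as np

    def e_rat(e_kin, theta):
        """e_kin: c.m.s. kinetic energies of all detected particles in an event;
        theta: their c.m.s. polar angles. Decomposition is an assumption."""
        return np.sum(e_kin * np.sin(theta)**2) / np.sum(e_kin * np.cos(theta)**2)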
For the analysis performed here, the actual ranges of b used for different centrality classes are taken from the assignments performed in Ref. <cit.>.
In order to check that centrality selections based on the impact parameter lead to the same centrality ranges as the experimental selection, the E_rat-based method is applied to the simulations and compared with the selection based on the impact parameter.
The result of this check is shown in Fig. <ref> for semi-central Au–Au collisions at = 0.4 GeV.
As can be seen, the two methods give practically identical results, both as a function of rapidity and of transverse momentum.
In the appendix, similar figures are provided also for the other two studied collision systems, Xe–CsI and Ni–Ni, showing consistency of the two methods in these cases, too.
Based on this, the selection on the impact parameter is used in the following analysis.
§.§ Particle selection
Particle selection in the model is performed as for the data, namely Z=1 particles.
In SMASH, deuterons and tritons are produced with an afterburner based on the coalescence model as described in Sec. <ref>.
In the case of the study of v_2 at midrapidity, the stochastic collision criterion is used as well, as described in Sec. <ref>.
In this case, the simulations are marked in the legends of the figures with an additional "s", e.g. HPs stands for Hard EoS with momentum dependent potentials and the stochastic collision criterion.
The impact of spectator nuclei, which can significantly influence directed flow at forward rapidities, is suppressed by selecting only particles which interacted at least once during the system evolution.
This is not possible in case of the stochastic collision criterion, where also the particles in the spectator part of the colliding nuclei interact.
As a consequence, the stochastic collision criterion is used only for the studies of v_2 at midrapidity, where the impact of the spectators is negligible.
§ RESULTS
To study the EoS and the impact of different potentials, we start by investigating the directed flow of charged particles as a function of rapidity and transverse momentum.
Figure <ref> compares v_1 for Au–Au collisions at E_kin/A = 0.4 GeV with a broad set of model settings.
It is found that the momentum dependent potentials lead to a higher directed flow for transverse-momentum-integrated values (l.h.s.). The trend is different when studying v_1 as a function of p_T^(0) in a selected bin of rapidities (r.h.s.): at high transverse momenta, the momentum dependent potentials lead to larger v_1 values, while the trend is reversed at low p_T^(0). Similar observations were reported in <cit.>.
The mean transverse momentum also increases in the case of momentum dependent potentials, as visible in the right panel of Fig. <ref>.
For other collision systems and energies discussed below, only Hard EoS and Soft EoS, both with momentum dependent potentials, are studied as bracketing cases.
Figures <ref> and <ref> present the results of the directed flow coefficient for Ni–Ni and Xe–CsI collisions, respectively.
Like in Au-Au collisions, the soft EoS with momentum dependence (SP) reproduces the overall data better than the hard EoS with momentum dependence (HP).
However, for all studied collision systems, one can notice a poor description of the directed flow at low transverse momenta, where the SP EoS yields a significantly weaker flow than seen in the experimental data.
The description of the directed flow as a function of rapidity is in good agreement with the measurements for the soft momentum dependent EoS. This is due to the fact that the directed flow close to the mean transverse momentum agrees well with the data. At smaller momenta, there might be ambiguities in the spectator selection that affect the results.
The elliptic flow coefficient v_2 as a function of p_T^(0) in Au–Au collisions at midrapidity is shown in Fig. <ref>.
Model calculations at E_kin/A = 0.4 and 1.0 GeV are compared to FOPI data <cit.>.
For the collision energy of 0.4 GeV per nucleon (left panel of Fig. <ref>), the complete set of different EoS versions is shown, including, for some cases, calculations with the stochastic collision criterion.
The impact of the momentum dependent potential is illustrated and appears significant both for the medium and hard EoS.
Using a constant potential overshoots the measured elliptic flow coefficients very strongly in both cases, even predicting positive values.
When switching on the momentum dependence of the potentials, the sign of v_2 is reversed, which brings the coefficients much closer to the data.
This effect is similar for both the hard and medium EoS.
The soft EoS (with momentum dependence) seems to describe the data on average the best, but the trends seen in the data are not fully reproduced by the model.
A milder dependence on p_T^(0) is exhibited by the model starting around p_T^(0) = 0.7, which roughly coincides with the mean value.
At a collision energy of E_kin/A = 1 GeV, shown in the right panel of Fig. <ref>, the hard EoS fits the data better, but we notice again that the p_T^(0) dependence of the data is not reproduced well.
Data-model comparisons for other collision energies between 0.4 and 1.5 GeV per nucleon can be found in the Appendix.
The p_T^(0)-integrated elliptic flow at midrapidity in Au–Au collisions as a function of beam energy is presented in Fig. <ref> for two centrality classes.
As this measurement represents Z=1 particles at mid-rapidity, it is not affected by spectators.
Thus, the predictions with and without the stochastic criterion, which are shown as well, largely overlap with one another for both centrality classes and different EoS in agreement with expectations.
At the lowest collision energy considered here, E_kin/A = 0.25 GeV, the model strongly overestimates the elliptic flow magnitude for both EoS parametrizations, suggesting that the model reaches its limits below E_kin/A ≃ 0.4 GeV.
At those low beam energies, it is important to correct the cross-sections for the part already treated in potentials by employing medium modified cross-sections, which was not done for the present work.
Generally, both EoS parametrizations, with and without stochastic criterion, predict an energy dependence of the elliptic flow parameter that is significantly stronger than what is found in data.
Moreover, the hard EoS overestimates the elliptic flow magnitude at all energies and approaches the data only above E_kin/A ≃ 1.2 GeV.
The soft EoS, on the other hand, describes the data fairly well for collision energies between 0.4 and 0.8 GeV per nucleon, and slightly underestimates the data at higher energies.
Due to limitations of the model, we disregard the data points at E_kin/A = 0.25 GeV in the χ^2/n.d.f. calculations presented below.
The only studied observable for which the calculation with the stochastic collision criterion differs significantly from the geometric one is the mean transverse momentum of Z=1 particles at midrapidity.
As seen in Fig. <ref>, this difference is significant only at low collision energies, while for E_kin/A > 0.6 GeV all four sets of simulations lead to rather comparable values of the mean p_T.
In this energy range, however, the model overestimates the measured value for both centrality classes.
In order to quantify better which version of the EoS (potentials) describes the experimental data best, the χ^2 per number of degrees of freedom (n.d.f.) is calculated, with the values summarised in Fig. <ref>.
In the left panel, the χ^2/n.d.f. is shown for all 289 available data points for the standard geometric collision criterion for the interactions during transport, and is compared with the χ^2/n.d.f. for a sub-sample of 155 data points containing only the results for all collision systems at E_kin/A = 0.4 GeV.
In both cases, the soft EoS is characterized by a lower χ^2/n.d.f.
By taking into account only the E_kin/A = 0.4 GeV data, the χ^2/n.d.f. for the hard EoS increases by a factor of 1.65, while that for the soft EoS increases only by a factor of 1.38, reflecting the large deviations of the HP EoS from the data seen in Fig. <ref>.
This can also be seen in the right panel of Fig. <ref>, where the χ^2/n.d.f. is calculated separately for all the v_2 data (133 data points), for low collision energies (E_kin/A = 0.4, 0.6, and 0.8 GeV, representing a total of 69 data points), and for high collision energies (1.0, 1.2, and 1.5 GeV per nucleon; 63 data points).
The χ^2/n.d.f. for the soft EoS is rather similar in all three cases, but that for the hard EoS is smallest for the higher collision energies.
It is also significantly smaller than the one for soft EoS at higher energies, suggesting a transition from soft to hard EoS as a function of energy in the collision energy range explored here.
The BUU transport model of Danielewicz <cit.> predicted a similar trend <cit.>, while a study within the IQMD model <cit.> found a preference for a soft EoS throughout this energy range.
An onset of a softening of the EoS is implied by elliptic flow data for E_kin/A ≃ 2 GeV <cit.>. The conclusions about the stiffness of the EoS in the energy regime from E_kin/A = 1 GeV upwards also depend on the number of resonances employed in the transport approach (see e.g. <cit.>).
§ CONCLUSIONS
We have performed an exploratory study of the description of directed and elliptic flow in the SMASH transport model at collision energies spanning E_kin/A = 0.25 to 1.5 GeV. While the model clearly shows its limitations at E_kin/A = 0.25 GeV, it describes the data well for E_kin/A ≥ 0.4 GeV.
Clearly, the momentum dependent potentials are important and the sensitivity to the EoS is significant.
While overall a soft EoS is preferred by the data, the elliptic flow data are better described by a hard EoS at the higher collision energies explored here (E_kin/A = 1.0-1.5 GeV).
Our results are in general consistent with earlier findings on the EoS in other transport approaches.
Further quantitative studies are needed in order to understand systematic uncertainties in transport approaches in general and in SMASH in particular.
The relevance of better constraints on the EoS from heavy-ion collisions (together with further constraints on the symmetry energy) for neutron stars and their collisions will certainly motivate such studies.
|
http://arxiv.org/abs/2405.09466v1 | 20240515160104 | A geometric formulation to measure global and genuine entanglement in three-qubit systems | [
"Salvio Luna-Hernandez",
"Marco Enriquez",
"Oscar Rosas-Ortiz"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2405.09057v1 | 20240515030821 | Response Matching for generating materials and molecules | [
"Bingqing Cheng"
] | cs.LG | [
"cs.LG",
"cond-mat.mtrl-sci",
"physics.comp-ph"
] |
APS/123-QED
bingqingcheng@berkeley.edu
Department of Chemistry, University of California, Berkeley, CA, USA
The Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria
Machine learning has recently emerged as a powerful tool for generating new molecular and material structures. The success of state-of-the-art models stems from their ability to incorporate physical symmetries, such as translation, rotation, and periodicity. Here, we present a novel generative method called Response Matching (RM), which leverages the fact that each stable material or molecule exists at the minimum of its potential energy surface. Consequently, any perturbation induces a response in energy and stress, driving the structure back to equilibrium. Matching to such response is closely related to score matching in diffusion models.
By employing the combination of a machine learning interatomic potential and random structure search as the denoising model, RM exploits the locality of atomic interactions, and inherently respects permutation, translation, rotation, and periodic invariances.
RM is the first model to handle both molecules and bulk materials under the same framework.
We demonstrate the efficiency and generalization of RM across three systems: a small organic molecular dataset, stable crystals from the Materials Project, and one-shot learning on a single diamond configuration.
Response Matching for generating materials and molecules
Bingqing Cheng
May 20, 2024
========================================================
§ INTRODUCTION
The exploration of new materials and molecules is crucial for technological advancements. Previous approaches rely on human intuition to propose and synthesize new molecules or high-throughput computational screening <cit.>. While these methods have led to some of the most important discoveries in history, they are often time-consuming and expensive, restricting the chemical space explored. This has encouraged the use of machine learning to generate new molecular and material structures by training on datasets of equilibrium atomistic structures <cit.>.
A crucial aspect of recent generative models for molecules is the encoding of geometric symmetries, such as translations and rotations. For materials, it's also essential to account for periodic boundary conditions (PBC) in the structures <cit.>. Moreover, efficiently account for different element types is vital, since materials can contain over a hundred elements from the periodic table.
However, current generative models of atomistic structures do not incorporate a key inductive bias: the locality of atomic interactions. According to the locality assumption, the energy and forces acting on an atom depend solely on its neighboring atoms within a specific cutoff radius. This assumption is crucial for the success of most state-of-the-art machine learning interatomic potentials (MLIPs) <cit.>.
Another physical bias not currently exploited is that atoms cannot get too close to each other due to strong repulsive forces at short interatomic distances.
Another key insight is that every stable material or molecule resides at the minimum of the potential energy surface (PES) of the system.
As such, when slight perturbations are introduced to the atomic positions of the stable structure, the resulting forces will guide the structure back to its equilibrium state during relaxation.
This insight was implicitly used in Crystal Diffusion Variational AutoEncoder (CDVAE) <cit.>.
The idea of “forces” can be further generalized to any response property of the system to external noise, including the response to lattice deformation.
Here, we introduce a novel generative method for materials and molecules, Response Matching (RM), which leverages the locality of atomic interactions, atomic repulsion, and the PES minimum, while naturally incorporating permutation, translation, rotation, and periodic invariances. We also highlight how RM is closely related to the Denoising Diffusion Probabilistic Model (DDPM) <cit.>. Finally, we demonstrate the RM model across three systems: a small organic molecular dataset, stable crystals from the Materials Project <cit.>, and a single diamond configuration training datum.
§ RELATED WORK
§.§ Generative models of materials and molecules
Earlier works on generative models of small molecules include the autoregressive G-SchNet <cit.> and equivariant normalizing flows (ENF) <cit.> have been employed to generate three-dimensional equilibrium structures of small organic molecules, while variational autoencoders have been used for material generation <cit.>. More recently, diffusion models have been utilized for generating molecules <cit.>. Additionally, these models can be conditioned on specific chemical or biological properties through guidance mechanisms <cit.>.
For generating materials, the PBC
and the lattice vectors of the cell need to be considered in addition to the atomic coordinates.
Available methods include directly using latent representations <cit.>, or combining them with a variational autoencoder to generate stable three-dimensional structures, as in CDVAE <cit.>.
MatterGen performs joint diffusion on atom type, coordinates, and lattice <cit.>.
A double diffusion model on both lattice and coordinates also pre-selects space groups <cit.>.
DiffCSP is a diffusion model for crystal structure prediction purposes that utilizes fractional coordinates <cit.>.
§.§ Machine learning interatomic potentials
Machine learning interatomic potentials (MLIPs) enable precise and comprehensive exploration of material and molecular properties at scale, by learning from quantum-mechanical calculations and then predicting the energy and forces of atomic configurations speedily <cit.>.
Most MLIPs exploit the nearsightedness of atomic interactions, expressing the total potential energy of the system as the sum of the atomic energies for each atom, i.e.:
E = ∑_i E_i.
The forces can readily be computed by taking the derivatives of the total energy with respect to atomic coordinates,
and the stress can be computed from the virial.
There are many MLIP methods available, e.g., Behler-Parrinello neural network potentals <cit.>, GAP <cit.>, Moment Tensor Potentials (MTPs) <cit.>, Atomic Cluster Expansion (ACE) <cit.>, NequIP <cit.>, MACE <cit.>, to name a few.
In this work, we use Cartesian Atomic Cluster Expansion
(CACE) <cit.> without using message passing layers, due to its efficiency and alchemical learning capabilities, which are important for learning PES of materials with diverse elements.
In CACE, an atom is treated as a node on a graph, and edges connect atoms within a cutoff radius r_cut.
Each chemical element is embedded using a learnable vector θ with dimension N_embedding.
The type of edge that connects two atoms, i and j, is encoded using the tensor product of the embedding vectors of the two nodes,
T=θ_i ⊗θ_j.
The length of the edge, r_ji, is described using a radial basis R.
The angular component of the edge, 𝐫̂_ji, is encoded using an angular basis L.
The edge basis combines all this information:
χ_cn𝐥(i, j) = T_c(θ_i, θ_j) R_n(r_ji) L_𝐥(𝐫̂_ji).
The atom-centered representation is made by
summing over all the edges of a node,
A_i, cn𝐥 = ∑_j∈𝒩(i) χ_cn𝐥(i, j).
The orientation-dependent A features are symmetrized to get the rotationally-invariant B features of different body orders ν, e.g.
for ν = 2 <cit.>,
B_i, cnl^(2) = ∑_𝐥 𝒞(𝐥) A_i, cn𝐥^2.
Then, a multilayer perceptron (MLP) maps these invariant features to the target of the atomic energy of each atom i,
E_i = MLP(B_i).
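To make this concrete, below is a minimal PyTorch sketch of a local potential of this form: invariant neighbor features built from element embeddings and a radial basis, an MLP mapping features to atomic energies, and forces from automatic differentiation. It keeps only the two-body radial part (no angular basis L or higher body orders), so it illustrates the E = ∑_i E_i structure rather than the actual CACE implementation; all names and hyperparameter values are placeholders.

import torch

def gaussian_rbf(r, n_rbf=8, r_cut=4.5):
    # Gaussians on a radial grid, smoothly cut off at r_cut.
    centers = torch.linspace(0.5, r_cut, n_rbf)
    envelope = 0.5 * (torch.cos(torch.pi * r / r_cut) + 1.0) * (r < r_cut)
    return torch.exp(-4.0 * (r.unsqueeze(-1) - centers) ** 2) * envelope.unsqueeze(-1)

class ToyLocalPotential(torch.nn.Module):
    """E = sum_i E_i with E_i = MLP(invariant features of atom i's neighborhood).

    Radial (two-body) features only; the real CACE also builds angular
    tensors and higher body orders (the B features above)."""
    def __init__(self, n_elements=5, n_embed=4, n_rbf=8):
        super().__init__()
        self.embed = torch.nn.Embedding(n_elements, n_embed)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(n_embed * n_embed * n_rbf, 32),
            torch.nn.SiLU(),
            torch.nn.Linear(32, 1),
        )
        self.n_rbf = n_rbf

    def forward(self, pos, species, r_cut=4.5):
        N = len(pos)
        diff = pos.unsqueeze(1) - pos.unsqueeze(0)            # (N, N, 3)
        d = torch.sqrt((diff ** 2).sum(-1) + torch.eye(N))    # pad diagonal to avoid 0-grad NaN
        mask = (d < r_cut) & ~torch.eye(N, dtype=torch.bool)
        rbf = gaussian_rbf(d, self.n_rbf, r_cut) * mask.unsqueeze(-1)
        theta = self.embed(species)                           # (N, n_embed)
        # Edge type T = theta_i (x) theta_j combined with the radial basis,
        # summed over neighbors j -> rotation-invariant atom features A_i.
        T = torch.einsum('ic,jd->ijcd', theta, theta)
        A = torch.einsum('ijcd,ijn->icdn', T, rbf).reshape(N, -1)
        return self.mlp(A).squeeze(-1).sum()                  # total energy E

pos = torch.randn(6, 3, requires_grad=True)
species = torch.randint(0, 5, (6,))
E = ToyLocalPotential()(pos, species)
forces = -torch.autograd.grad(E, pos)[0]                      # F = -dE/dR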
§.§ DDPM
Our method is closely related to the foundational work on DDPMs <cit.>, which involves a noise model and a denoising neural network.
The noise model corrupts a data point 𝐱 to a sampled log signal-to-noise ratio, λ, as follows:
𝐱_λ = α_λ𝐱 + σ_λϵ,
where ϵ is a random noise, α_λ^2 = 1/(1+e^-λ), and σ^2_λ = 1-α_λ^2.
The denoising model learns to predict the clean input 𝐱 from 𝐱_λ, or equivalently, the added noise.
The denoising neural network with parameters θ is trained on the score matching objective over multiple noise scales:
L_λ = ‖ϵ^θ (𝐱_λ) - ϵ‖^2
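For reference, a minimal sketch of this noising step and objective, with the noise predictor left as an arbitrary callable (illustrative pseudocode for the two equations above, not a specific published implementation):

import torch

def noise(x, lam):
    """Corrupt clean data x at log signal-to-noise ratio lam:
    x_lam = alpha*x + sigma*eps, with alpha^2 = 1/(1 + e^-lam)."""
    alpha2 = torch.sigmoid(torch.as_tensor(lam, dtype=x.dtype))
    alpha, sigma = alpha2.sqrt(), (1.0 - alpha2).sqrt()
    eps = torch.randn_like(x)
    return alpha * x + sigma * eps, eps

def score_matching_loss(eps_model, x, lam):
    """L_lam = ||eps_theta(x_lam) - eps||^2."""
    x_lam, eps = noise(x, lam)
    return ((eps_model(x_lam) - eps) ** 2).sum()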
§.§ Crystal structure prediction
Crystal structure prediction (CSP) is a computational method that combines quantum mechanical calculations, such as density functional theory (DFT),
with optimization algorithms to search for local minima on the PES.
Notable examples include USPEX that uses the evolutionary algorithm to search for stable structures <cit.>, and random structure search (RSS) that starts with random atomic positions and then performs PES minimization <cit.>.
As DFT is computationally expensive, recent studies have started to use MLIPs as the surrogate PES, e.g. <cit.>.
§ METHODS
Just like DDPM, Response Matching also includes a noise step and a denoising step. Noise is directly applied to the Cartesian coordinates of atomic structures. Consider the coordinates of an equilibrium structure as R_0 = (r_1, ..., r_N), where r_i denotes the position of atom i. Random atomic displacements are added to this initial structure, generating a sequence of increasingly amorphous structures, 𝐑_λ. These noisy displacements can be determined all at once based on the chosen value of λ, rather than being added step by step. The displacement for each atom i is represented as Δr_i, λ, and for periodic atomic structures, this displacement follows the minimal image convention in PBC. For simplicity, we later omit λ in the subscripts.
As each equilibrium structure lies at the minimum of the PES, the forces on atoms are nearly zero.
When displacements are applied to these atomic positions, the atomic forces deviate from zero and tend to pull the atoms back to their original coordinates.
The harmonic approximation, a simple yet widely used physical assumption in quantum mechanical calculations and atomistic modeling, states:
the force on the atom i is
F_i^H = - k Δr_i.
This force is fictitious: the force constant k is not a physical value but rather a hyperparameter of the RM model. This approximation effectively attaches a harmonic spring between the current position of atom i and its equilibrium position.
We incorporate another physical inductive bias: atoms cannot approach too closely due to strong repulsive forces at short distances.
To account for this effect, we apply a short-range repulsive pairwise potential between atom pairs i and j, such as:
g_c(r_ji) = m (1 - r_ji^2/r_c^2)^n if r_ji < r_c, and g_c(r_ji) = 0 if r_ji ≥ r_c,
where r_ji is the scalar distance between the pair, r_c is typically a fraction of an Angstrom, and m and n are hyperparameters dictating the strength of the repulsion.
The resulting repulsive force on the atom i is
F_i^R = - ∑_j∈𝒩(i) dg_c(r_ji)/dr_i,
summed over all the atoms j within the distance r_c from the atom i.
Combining the harmonic and repulsive forces, the total fictitious force on the atom i is F_i = F_i^H + F_i^R.
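A possible NumPy implementation of these fictitious force targets is sketched below; k, r_c, m, and n are RM hyperparameters, and the O(N^2) double loop is written for clarity rather than speed.

import numpy as np

def pseudo_forces(pos_noisy, displacement, k=1.0, r_c=0.8, m=10.0, n=2):
    """Fictitious RM force targets: harmonic restoring term plus the
    short-range repulsion g_c; k, r_c, m, n are hyperparameters."""
    F = -k * displacement                      # harmonic part, F_H = -k * dr
    N = len(pos_noisy)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = pos_noisy[i] - pos_noisy[j]
            r = np.linalg.norm(rij)
            if r < r_c:
                # F_R = -dg_c/dr_i, with g_c = m * (1 - r^2/r_c^2)^n
                dg_dr = m * n * (1.0 - r**2 / r_c**2) ** (n - 1) * (-2.0 * r / r_c**2)
                F[i] -= dg_dr * rij / r        # points away from j: repulsive
    return F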
For periodic systems, we also add distortions to their periodic cell to create elastic strain γ.
The associated stress is approximated using the stress-strain relationship for isotropic elastic materials:
σ = 𝐂γ,
where the components of 𝐂 include normal and shear moduli that are treated as hyperparameters.
For molecules without periodicity, the lattice strain step is skipped.
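The lattice part can be sketched in the same spirit. Here the stiffness 𝐂 is reduced to a single normal and a single shear modulus applied element-wise, a deliberate simplification of σ = 𝐂γ with hyperparameter values:

import numpy as np

def strain_and_stress(cell, max_strain=0.1, c_normal=1.0, c_shear=0.5):
    """Random symmetric strain applied to the cell, with the pseudo stress
    from a diagonal isotropic stiffness (moduli are hyperparameters)."""
    g = np.random.uniform(-max_strain, max_strain, (3, 3))
    gamma = 0.5 * (g + g.T)                    # symmetric strain tensor
    cell_noisy = cell @ (np.eye(3) + gamma)
    C = np.where(np.eye(3, dtype=bool), c_normal, c_shear)
    sigma = C * gamma                          # element-wise sigma = C * gamma
    return cell_noisy, sigma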
The general idea of RM is to use a denoising model with parameters θ to fit to the fictitious response properties, i.e., forces and stresses.
The corresponding objective function is:
L_λ = ∑_i=1^N ‖𝐅^θ_i - 𝐅_i‖^2 + β ‖σ^θ - σ‖^2,
where β balances the relative weight between the force loss and the stress loss.
Compared with the objective in DDPM (Eqn. (<ref>)), one can see that the mathematical framework is identical, with RM employing a physics-inspired response to the noise instead of the noise itself.
Using MLIPs for this denoising model is advantageous because they inherently incorporate the translational, rotational, and permutational symmetries of atomic systems.
After the MLIP is trained using the objective in Eqn. (<ref>), it is then used for the denoising step in RM.
As the denoising model has a pseudo potential energy surface E, rather than performing denoising with a fixed schedule, one can directly search for local minima on E.
This is effectively a random structure search process <cit.>:
For each RSS run, we first chose a reasonable cell shape at random, and added atoms of the chosen elements and composition into the simulation cell at random positions, while keeping the initial density of the cell close to the typical density range of the system.
We set a lower bound on the interatomic
distance for each pair of atomic species, but otherwise imposed no additional constraints on the initial structures.
We then relax both the atomic positions and simulation cell using the FIRE (Fast Inertial Relaxation Engine) optimizer <cit.>, which continues until the pseudo forces on the atoms become negligible. FIRE dynamically adjusts step sizes for faster convergence <cit.>, making it a common choice in atomistic simulations due to its efficiency.
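A sketch of this denoising-by-relaxation step using ASE is given below; trained_rm_potential stands in for the MLIP trained on the pseudo forces and stress, and the cell filter class is named FrechetCellFilter in recent ASE (UnitCellFilter in older versions).

import numpy as np
from ase import Atoms
from ase.optimize import FIRE
from ase.filters import FrechetCellFilter     # UnitCellFilter in older ASE

def random_structure(symbols, vol_per_atom=10.0, d_min=0.7, max_tries=1000):
    """Random periodic start: cubic cell at a target density; positions are
    re-drawn until all pairs respect the minimum distance d_min (Angstrom)."""
    n = len(symbols)
    a = (n * vol_per_atom) ** (1 / 3)
    for _ in range(max_tries):
        atoms = Atoms(symbols, positions=np.random.rand(n, 3) * a,
                      cell=[a, a, a], pbc=True)
        d = atoms.get_all_distances(mic=True)
        if d[np.triu_indices(n, k=1)].min() > d_min:
            return atoms
    raise RuntimeError("could not place atoms without clashes")

atoms = random_structure(["C"] * 8, vol_per_atom=5.0)
atoms.calc = trained_rm_potential             # MLIP trained on pseudo forces/stress
opt = FIRE(FrechetCellFilter(atoms), logfile=None)
opt.run(fmax=0.02, steps=1000)                # stop when pseudo forces are negligible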
Alternatively, one can use other methods such as Langevin dynamics, simulated annealing <cit.>,
evolutionary algorithm <cit.>,
particle-swarm optimization and Bayesian optimization <cit.>.
The choice of the optimization method will likely influence the probability of finding the global minimum on the PES versus other local minima.
For the purpose of finding new materials and molecules, it may not always be advantageous to
maximize this probability, as the local minima can also correspond to synthesizable structures.
§ EXAMPLES
§.§ Molecule generation: QM7b
QM7b <cit.> is curated based on GDB-13 <cit.> (a database of nearly 1 billion stable and synthetically accessible organic molecules),
and it is composed of 7,211 molecules of up to 23 atoms (with up to 7 heavy atoms drawn from C, N, O, S, and Cl).
During training, random displacements with a maximum magnitude of 1.6 Å were added to all the atoms in the original molecules, and a CACE potential was employed to learn these pseudo forces. We used r_cut = 4.5 Å, l_max = 3, ν_max = 3, and N_embedding = 3. The model has 16,490 trainable parameters.
Training takes roughly one day on a laptop.
Notably, as training is solely based on the pseudo forces, atomization energy or other molecular properties were not included. However, as seen in Fig. <ref>a, E_at is highly correlated with E, with a Pearson correlation coefficient of R = 0.98. This high correlation primarily results from the additivity of atomic energies (refer to Eqn. (<ref>)), an important bias in the MLIP. Furthermore, the MLIP captures subtle energy differences between distinct atomic environments.
We compared the actual atomization energy per atom with the pseudo energy per atom for molecules with the most common compositions in QM7b. Each composition appears approximately 200-300 times in the training set. Fig. <ref>a illustrates that E_at per atom is significantly correlated with E per atom across these compositions. The MLIP identifies these energy differences because the training structures are PES minima.
To generate new molecules, we randomly placed atoms of specific compositions in an orthorhombic box without periodic conditions, with the sole constraint that the minimum interatomic distance be at least 0.7 Å. We conducted two sets of generation tasks: the first set used compositions well-represented in the training set, while the second set included compositions that were out-of-distribution (containing 8-9 heavy atoms).
Generating a single structure by geometry relaxation generally takes less than 10 seconds on a laptop.
Currently, the geometry relaxation is performed in serial, and it may be dramatically accelerated by relaxing many molecules together using one, sparsely connected, atomic graph.
Fig. <ref> displays selected configurations generated using the denoising model.
We assess model performance by evaluating the chemical feasibility of the generated molecules, determining whether the model can learn chemical rules from data. Finding a rigorous and unbiased evaluation metric is challenging, and previous studies have used various criteria to assess the feasibility of a molecule represented in three-dimensional coordinates.
For instance, a stability metric <cit.> checks whether all atoms in a molecule have correct valence based on specific bond distances. A common validity measure <cit.> assesses whether a molecule can be sanitized by RDKit <cit.> using default settings.
According to these metrics, 4.9% of generated molecules were considered stable and 40.2% valid using ENF trained on around 14,000 QM9 molecules <cit.>. Training with 100k QM9 molecules, the E(3) Equivariant Diffusion Model (EDM) achieved 82% stability and 91.9% validity, while the MiDi diffusion model, which generates both 2D molecular graphs and their corresponding 3D coordinates <cit.>, reached 84% stability and 97.9% validity.
Furthermore, the Geometric Latent Diffusion Model (GEOLDM) <cit.> attained 89.4% stability and 93.8% validity, and the Geometry-Complete Diffusion Model (GCDM) <cit.> achieved 85.7% stability and 94.8% validity.
Here we use a set of comprehensive and stringent criteria offered by
PoseBusters <cit.>:
RDKit’s chemical sanitisation check (equivalent to the aforementioned validity),
all atoms connected,
bond lengths,
bond angles,
internal steric clash,
aromatic ring flatness,
double bond flatness, and the calculated energy of the input molecule based on a forcefield.
The energy calculation step also includes a valency check for each atom, akin to the stability check.
Fig. <ref> illustrates the percentage of molecules with given compositions that pass these chemical feasibility checks. For comparison, the first row displays statistics for the QM7b set. Each criterion provides different insights into the chemical structure, and the pass percentages vary but are correlated. The feasibility percentages are highly dependent on specific compositions.
Overall, the feasibility rate is quite high. The sanitization (validity) rate ranges from 80% to 100%, which is similar to rates reported in previous studies trained on the QM9 dataset.
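As an illustration, the sanitization (validity) check can be implemented for a molecule given as 3D coordinates roughly as follows; perceiving bonds from coordinates with rdDetermineBonds is one plausible route and requires a reasonably recent RDKit.

from rdkit import Chem
from rdkit.Chem import rdDetermineBonds

def passes_sanitization(xyz_block, charge=0):
    """Validity-style check: infer bonds from 3D coordinates, then run
    RDKit's default sanitization."""
    mol = Chem.MolFromXYZBlock(xyz_block)
    if mol is None:
        return False
    try:
        rdDetermineBonds.DetermineBonds(mol, charge=charge)
        Chem.SanitizeMol(mol)
        return True
    except Exception:   # valence, kekulization, or bond-perception failures
        return False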
§.§ Material generation: Materials Project structures
We used the MP-20 structures as in Ref. <cit.>, which include almost all experimentally stable materials from the Materials Project (MP) <cit.> with unit cells containing at most 20 atoms. The training set contains 27,136 structures.
During the training, cell distortion of up to 0.1 was applied, and random displacements with a maximum magnitude of 0.8 Å were added to all atoms.
A CACE potential was used to learn these pseudo forces and stresses.
We used r_cut=4 Å, l_max=2, ν_max=2, and N_embedding = 4 for aggressive alchemical compression.
The model has 16,312 trainable parameters.
Training took two days on an A10 card.
The alchemical learning capacity of CACE not only enhances learning efficiency, but is crucial to developing a model that is applicable throughout the periodic table. The learnable embedding θ for each element type encodes its chemical information and can be visualized to provide insights into data-driven similarities.
Given that N_embedding=4 in this case, we performed a principal component analysis (PCA) and plotted the first two principal component axes in Fig. <ref>. The elements are color-coded on the basis of their chemical groups, and the PCA map reveals that elements in the same group tend to cluster together. Elements within a group often have similar appearances and behaviors because they possess the same number of electrons in their outermost shell. This demonstrates that the element embedding scheme effectively captures the nature of the periodic table in a data-driven manner.
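This analysis amounts to a two-component PCA of the embedding matrix; a sketch is shown below, where model.embed.weight and element_symbols are hypothetical accessors for the trained model and its element list.

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# theta: learned element embeddings, shape (n_elements, N_embedding=4)
theta = model.embed.weight.detach().cpu().numpy()   # hypothetical accessor
pcs = PCA(n_components=2).fit_transform(theta)

plt.scatter(pcs[:, 0], pcs[:, 1])
for i, symbol in enumerate(element_symbols):        # e.g. ["H", "Li", "Be", ...]
    plt.annotate(symbol, pcs[i])
plt.xlabel("PC1"); plt.ylabel("PC2")
plt.savefig("element_embedding_pca.png")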
To generate new crystals,
we randomly placed atoms of specific compositions in a box with periodic conditions,
with an initial molar volume estimated from a linear regression of the total volume against the chemical composition of the training structures.
We then performed FIRE optimization both for the cell and atomic positions.
We test our model on the same selected set as used in DiffCSP <cit.>, which contains 10 binary and 5 ternary compounds in the MP-20 test set.
For each of these compositions, we generated 30 structures and compared them to the ground truth using StructureMatcher from pymatgen <cit.>.
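A minimal sketch of this comparison with pymatgen, with hypothetical file and variable names:

from pymatgen.core import Structure
from pymatgen.analysis.structure_matcher import StructureMatcher

ground_truth = Structure.from_file("KZnF3_ground_truth.cif")
matcher = StructureMatcher()                  # default tolerances
n_hits = sum(matcher.fit(ground_truth, s) for s in generated_structures)
print(f"{n_hits}/30 generated structures match the ground truth")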
For eight of these compounds (Ag_6O_2, Bi_2F_8, Co_2Sb_2, Co_4B_2, Cr_4Si_4, KZnF_3, Sr_2O_4, YMg_3), we found the ground truth structures.
This match rate (8/15) is similar to that of USPEX <cit.>, a CSP method based on DFT,
although worse than the 11/15 match rate using DiffCSP <cit.>.
To demonstrate how the RM model can help scientific applications,
we used it for the
Li-S battery system, which is an attractive candidate for numerous energy storage applications <cit.>.
We searched for a number of stoichiometries Li_xS_(1-x). As sulfur is highly polymorphic and has complex structures, we used the known pure S structures from MP.
To organize the generated structures, we plotted the pseudo convex hull using the pseudo excess energy, E_ex = E - x E_Li - (1-x) E_S, in Fig. <ref>.
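For concreteness, the excess energy and the lower hull over the (x, E_ex) points can be computed along these lines (a sketch, not the exact analysis script):

import numpy as np
from scipy.spatial import ConvexHull

def excess_energy(E, x, E_Li, E_S):
    """Pseudo excess energy per atom for Li_x S_(1-x):
    E_ex = E - x*E_Li - (1-x)*E_S."""
    return E - x * E_Li - (1.0 - x) * E_S

def lower_hull_indices(points):
    """Indices of vertices on the lower convex hull of (x, E_ex) points."""
    hull = ConvexHull(points)
    verts = set()
    for simplex, eq in zip(hull.simplices, hull.equations):
        if eq[1] < 0:                          # facet normal points down in energy
            verts.update(simplex.tolist())
    return sorted(verts, key=lambda i: points[i][0])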
In reality, Li_2S should be the stable phase on the energy convex hull <cit.> rather than LiS, but the overall trend in stability for LiS compounds is captured.
It is worth cautioning that such results are not trustworthy, especially because energies did not enter the training of the model, and one always has to check against quantum mechanical calculations.
Nevertheless, the trained RM model can offer a quick and perhaps insightful first step in exploring a system.
Several structures in the MP <cit.> are found,
and they typically have lower pseudo energies, which suggests that one can use E for additional screening.
Among the seven matched MP structures, only the materials with IDs mp-10173 (Li, P6_3/mmc), mp-1125 (Li_2S, Pnma), and mp-1153 (Li_2S, Fm3̅m) are in the training set.
§.§ One-shot learning: A single diamond structure
To show data efficiency and generalization of the RM,
we trained on a single data point of a cubic diamond structure.
In the noising stage,
uniform random displacement with a maximum magnitude of 0.8 Å, and lattice strain up to 0.1 were added to the original structure with a molar volume of 4.4 Å^3.
For the CACE potential, we used r_cut=4.5 Å, l_max=3, ν_max=3, and N_embedding = 1.
The model only has 2,137 trainable parameters.
Training takes less than an hour on a laptop.
During the relaxation stage, 2-12 carbon atoms were randomly placed in a simulation box with the molar volume set to be between 3.8 Å^3 and 5.6 Å^3.
Generating one structure takes a few seconds depending on the system size.
We plot the pseudo-energy E against molar volume for the obtained structures in Fig. <ref>. The cubic diamond structures (blue dots in Fig. <ref>) consistently have the lowest E at all molar volumes compared to the other structures found. The lowest-energy cubic diamond has a molar volume of 4.4 Å^3, which matches the training configuration exactly.
Hexagonal diamonds (red dots in Fig. <ref>) and diamonds with stacking faults (purple dots in Fig. <ref>) are also frequently found. Notably, a set of graphite structures (in the inset of Fig. <ref>) with varying volumes were identified. Graphite structures differ significantly from diamonds, consisting of stacked layers of carbon atoms in a hexagonal lattice. Within each layer, carbon atoms form strong covalent bonds with three neighboring atoms in a trigonal planar arrangement.
The hexagonal diamonds, diamonds with stacking faults, and graphite structures are all local minima on the pseudo energy surface, while the cubic diamond is the true global minimum.
This example thus demonstrates that tracing the local minima can generate out-of-distribution yet physically meaningful structures.
§ DISCUSSION
In summary, Response Matching (RM) is a novel generative method that has a noising step equivalent to other diffusion models, and a denoising model that is effectively crystal structure prediction using a machine learning interatomic potential. Just as DDPMs <cit.> are trained to predict added noise, the MLIP in RM is trained to predict the response of the system to the added noise.
These responses, in the form of pseudo forces and stress, are simply proportional to the noise.
The benefits of using MLIP for denoising include:
(i) Exploiting the locality of atomic interactions while naturally respecting permutation, translation, rotation, and periodic invariances.
(ii) Allowing RM to simultaneously handle both molecules and bulk materials with and without periodic boundary conditions.
(iii) Enabling advanced optimization methods such as FIRE <cit.>, simulated annealing <cit.>, particle-swarm optimization and Bayesian optimization <cit.> during denoising rather than adhering to a fixed schedule.
(iv) The pseudo energies from the RM model are not directly trained but are empirically correlated with the real energies of the systems,
which may offer physical insights and an additional way of screening the generated structures.
Moreover, given that MLIPs are well-developed, their advances are directly transferable to generative models of materials and molecules.
§ LIMITATIONS
The current RM model can be extended in several ways.
(i) Most crystals fall into a limited set of space groups. In crystal structure prediction, a common technique is to select a space group based on popularity and snap to it during relaxation <cit.>.
This approach can also be applied to RM, which could enhance model efficiency by reducing the search space to realistic structures and facilitating the generation of experimentally relevant crystals.
(ii) One can incorporate another term, ‖y^θ - y‖^2, in the objective function in Eqn. (<ref>), with y being specific properties such as bandgap, solubility, or thermal stability.
This can enable RM to generate molecules and materials conditioned on specific properties, which is crucial for designing functional materials with targeted properties.
(iii) Rather than fixing composition, alchemical swaps of elements using Monte Carlo moves or another generative model can dynamically determine atom types during denoising.
(iv) One can further synergize RM and MLIPs, e.g.
training foundation models using both data from quantum mechanical calculations and unlabelled structural data.
These enhancements would further expand the applicability and flexibility of RM in the generation of materials and molecular structures.
10
jain2013commentary
Anubhav Jain, Shyue Ping Ong, Geoffroy Hautier, Wei Chen, William Davidson Richards, Stephen Dacek, Shreyas Cholia, Dan Gunter, David Skinner, Gerbrand Ceder, et al.
Commentary: The materials project: A materials genome approach to accelerating materials innovation.
APL materials, 1(1), 2013.
bilodeau2022generative
Camille Bilodeau, Wengong Jin, Tommi Jaakkola, Regina Barzilay, and Klavs F Jensen.
Generative models for molecular discovery: Recent advances and challenges.
Wiley Interdisciplinary Reviews: Computational Molecular Science, 12(5):e1608, 2022.
anstine2023generative
Dylan M Anstine and Olexandr Isayev.
Generative models as an emerging paradigm in the chemical sciences.
Journal of the American Chemical Society, 145(16):8736–8750, 2023.
zeni2023mattergen
Claudio Zeni, Robert Pinsler, Daniel Zügner, Andrew Fowler, Matthew Horton, Xiang Fu, Sasha Shysheya, Jonathan Crabbé, Lixin Sun, Jake Smith, et al.
Mattergen: a generative model for inorganic materials design.
arXiv preprint arXiv:2312.03687, 2023.
xie2021crystal
Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, and Tommi S Jaakkola.
Crystal diffusion variational autoencoder for periodic material generation.
In International Conference on Learning Representations, 2021.
behler2007generalized
Jörg Behler and Michele Parrinello.
Generalized neural-network representation of high-dimensional potential-energy surfaces.
Physical review letters, 98(14):146401, 2007.
bartok2010gaussian
Albert P Bartók, Mike C Payne, Risi Kondor, and Gábor Csányi.
Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons.
Physical review letters, 104(13):136403, 2010.
shapeev2016moment
Alexander V Shapeev.
Moment tensor potentials: A class of systematically improvable interatomic potentials.
Multiscale Modeling & Simulation, 14(3):1153–1173, 2016.
drautz2019atomic
Ralf Drautz.
Atomic cluster expansion for accurate and transferable interatomic potentials.
Physical Review B, 99(1):014104, 2019.
batzner20223
Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky.
E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials.
Nature communications, 13(1):2453, 2022.
batatia2022mace
Ilyes Batatia, David P Kovacs, Gregor Simm, Christoph Ortner, and Gábor Csányi.
Mace: Higher order equivariant message passing neural networks for fast and accurate force fields.
Advances in Neural Information Processing Systems, 35:11423–11436, 2022.
cheng2024cartesian
Bingqing Cheng.
Cartesian atomic cluster expansion for machine learning interatomic potentials.
arXiv preprint arXiv:2402.07472, 2024.
ho2020denoising
Jonathan Ho, Ajay Jain, and Pieter Abbeel.
Denoising diffusion probabilistic models.
Advances in neural information processing systems, 33:6840–6851, 2020.
nichol2021improved
Alexander Quinn Nichol and Prafulla Dhariwal.
Improved denoising diffusion probabilistic models.
In International conference on machine learning, pages 8162–8171. PMLR, 2021.
gebauer2019symmetry
Niklas Gebauer, Michael Gastegger, and Kristof Schütt.
Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules.
Advances in neural information processing systems, 32, 2019.
garcia2021n
Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, and Max Welling.
E (n) equivariant normalizing flows.
Advances in Neural Information Processing Systems, 34:4181–4192, 2021.
hoogeboom2022equivariant
Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling.
Equivariant diffusion for molecule generation in 3d.
In International conference on machine learning, pages 8867–8887. PMLR, 2022.
wu2022diffusion
Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, and Qiang Liu.
Diffusion-based molecule generation with informative prior bridges.
Advances in Neural Information Processing Systems, 35:36533–36545, 2022.
xu2023geometric
Minkai Xu, Alexander S Powers, Ron O Dror, Stefano Ermon, and Jure Leskovec.
Geometric latent diffusion models for 3d molecule generation.
In International Conference on Machine Learning, pages 38592–38610. PMLR, 2023.
peng2023moldiff
Xingang Peng, Jiaqi Guan, Qiang Liu, and Jianzhu Ma.
Moldiff: addressing the atom-bond inconsistency problem in 3d molecule diffusion generation.
arXiv preprint arXiv:2305.07508, 2023.
huang2023mdm
Lei Huang, Hengtong Zhang, Tingyang Xu, and Ka-Chun Wong.
Mdm: Molecular diffusion model for 3d molecule generation.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5105–5112, 2023.
morehead2023geometry
Alex Morehead and Jianlin Cheng.
Geometry-complete diffusion for 3d molecule generation and optimization.
ArXiv, 2023.
ragoza2022generating
Matthew Ragoza, Tomohide Masuda, and David Ryan Koes.
Generating 3d molecules conditional on receptor binding sites with deep generative models.
Chemical science, 13(9):2701–2713, 2022.
gebauer2022inverse
Niklas WA Gebauer, Michael Gastegger, Stefaan SP Hessmann, Klaus-Robert Müller, and Kristof T Schütt.
Inverse design of 3d molecular structures with conditional generative neural networks.
Nature communications, 13(1):973, 2022.
corso2022diffdock
Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola.
Diffdock: Diffusion steps, twists, and turns for molecular docking.
arXiv preprint arXiv:2210.01776, 2022.
pickard2024
Chris J Pickard.
Personal communication, 2024.
sultanov2023data
Arsen Sultanov, Jean-Claude Crivello, Tabea Rebafka, and Nataliya Sokolovska.
Data-driven score-based models for generating stable structures with adaptive crystal cells.
Journal of Chemical Information and Modeling, 63(22):6986–6997, 2023.
jiao2024crystal
Rui Jiao, Wenbing Huang, Peijia Lin, Jiaqi Han, Pin Chen, Yutong Lu, and Yang Liu.
Crystal structure prediction by joint equivariant diffusion.
Advances in Neural Information Processing Systems, 36, 2024.
keith2021combining
John A Keith, Valentin Vassilev-Galindo, Bingqing Cheng, Stefan Chmiela, Michael Gastegger, Klaus-Robert Müller, and Alexandre Tkatchenko.
Combining machine learning and computational chemistry for predictive insights into chemical systems.
Chemical reviews, 121(16):9816–9872, 2021.
unke2021machine
Oliver T Unke, Stefan Chmiela, Huziel E Sauceda, Michael Gastegger, Igor Poltavsky, Kristof T Schütt, Alexandre Tkatchenko, and Klaus-Robert Müller.
Machine learning force fields.
Chemical Reviews, 121(16):10142–10186, 2021.
Zhang2021
Yaolong Zhang, Junfan Xia, and Bin Jiang.
Physically motivated recursively embedded atom neural networks: Incorporating local completeness and nonlocality.
Physical Review Letters, 127(15), October 2021.
oganov2006crystal
Artem R Oganov and Colin W Glass.
Crystal structure prediction using ab initio evolutionary techniques: Principles and applications.
The Journal of chemical physics, 124(24), 2006.
Pickard2011
Chris J. Pickard and R. J. Needs.
Ab initio random structure searching.
23(5):053201, 2011.
cheng2022crystal
Guanjian Cheng, Xin-Gao Gong, and Wan-Jian Yin.
Crystal structure prediction by combining graph network and optimization algorithm.
Nature communications, 13(1):1492, 2022.
salzbrenner2023developments
Pascal T Salzbrenner, Se Hun Joo, Lewis J Conway, Peter IC Cooke, Bonan Zhu, Milosz P Matraszek, William C Witt, and Chris J Pickard.
Developments and further applications of ephemeral data derived potentials.
The Journal of Chemical Physics, 159(14), 2023.
merchant2023scaling
Amil Merchant, Simon Batzner, Samuel S Schoenholz, Muratahan Aykol, Gowoon Cheon, and Ekin Dogus Cubuk.
Scaling deep learning for materials discovery.
Nature, 624(7990):80–85, 2023.
bitzek2006structural
Erik Bitzek, Pekka Koskinen, Franz Gähler, Michael Moseler, and Peter Gumbsch.
Structural relaxation made simple.
Physical review letters, 97(17):170201, 2006.
bertsimas1993simulated
Dimitris Bertsimas and John Tsitsiklis.
Simulated annealing.
Statistical science, 8(1):10–15, 1993.
qm7b
Grégoire Montavon, Matthias Rupp, Vivekanand Gobre, Alvaro Vazquez-Mayagoitia, Katja Hansen, Alexandre Tkatchenko, Klaus-Robert Müller, and O Anatole von Lilienfeld.
Machine learning of molecular electronic properties in chemical compound space.
New Journal of Physics, 15(9):095003, 2013.
gdb
L. C. Blum and J.-L. Reymond.
970 million druglike small molecules for virtual screening in the chemical universe database GDB-13.
J. Am. Chem. Soc., 131:8732, 2009.
buttenschoen2024posebusters
Martin Buttenschoen, Garrett M Morris, and Charlotte M Deane.
Posebusters: Ai-based docking methods fail to generate physically valid poses or generalise to novel sequences.
Chemical Science, 2024.
satorras2021n
Vıctor Garcia Satorras, Emiel Hoogeboom, and Max Welling.
E (n) equivariant graph neural networks.
In International conference on machine learning, pages 9323–9332. PMLR, 2021.
simonovsky2018graphvae
Martin Simonovsky and Nikos Komodakis.
Graphvae: Towards generation of small graphs using variational autoencoders.
In Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I 27, pages 412–422. Springer, 2018.
bento2020open
A Patrícia Bento, Anne Hersey, Eloy Félix, Greg Landrum, Anna Gaulton, Francis Atkinson, Louisa J Bellis, Marleen De Veij, and Andrew R Leach.
An open source chemical structure curation pipeline using rdkit.
Journal of Cheminformatics, 12:1–16, 2020.
vignac2023midi
Clement Vignac, Nagham Osman, Laura Toni, and Pascal Frossard.
Midi: Mixed graph and 3d denoising diffusion for molecule generation.
In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 560–576. Springer, 2023.
ong2013python
Shyue Ping Ong, William Davidson Richards, Anubhav Jain, Geoffroy Hautier, Michael Kocher, Shreyas Cholia, Dan Gunter, Vincent L Chevrier, Kristin A Persson, and Gerbrand Ceder.
Python materials genomics (pymatgen): A robust, open-source python library for materials analysis.
Computational Materials Science, 68:314–319, 2013.
see2014ab
Kimberly A See, Michal Leskes, John M Griffin, Sylvia Britto, Peter D Matthews, Alexandra Emly, Anton Van der Ven, Dominic S Wright, Andrew J Morris, Clare P Grey, and Ram Seshadri.
Ab initio structure search and in situ 7li nmr studies of discharge products in the li–s battery system.
Journal of the American Chemical Society, 136(46):16368–16377, 2014.
|
http://arxiv.org/abs/2405.09507v1 | 20240515165835 | QueryNER: Segmentation of E-commerce Queries | [
"Chester Palen-Michel",
"Lizzie Liang",
"Zhe Wu",
"Constantine Lignos"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
§ INTRODUCTION
An important challenge in e-commerce query understanding is returning relevant results for null and low recall queries.
These queries return few or no results due to vocabulary mismatch or queries containing too many terms that over-constrain the retrieval process.
A common approach to recover from null and low recall queries is to rewrite terms with similar words or to remove terms from the query to relax the constraints.
In applying these recovery methods, queries are often treated as unstructured sequences of tokens <cit.> rather than natural groupings of tokens.
Past work in natural language processing (NLP) has treated grouping tokens as a shallow parsing or chunking task <cit.>.
By chunking a query, we can find the boundaries between spans of tokens and identify the purpose of each span.
This allows us to weight spans rather than tokens, drop spans as a recovery approach, and potentially better cluster similar chunks and link them to a knowledge graph.
Chunking is often framed as a sequence labeling task, and while there has been sequence labeling work in e-commerce, it has largely focused on aspect-value extraction.
Aspect-value extraction identifies portions of a string of text (values) for more narrowly defined aspects like brand or color (e.g. papenmeier2021dataset).
While aspect-value extraction does identify some natural groupings of tokens, the goal is often only to identify spans that are values for predefined aspects.
If there is no aspect defined for a span of text, it will not be identified.
Aspect-value extraction approaches tend to have either few aspect types with many tokens not included as part of a span, or they have large complex aspect ontologies with thousands of aspects <cit.>.
While there is work on e-commerce aspect-value extraction, there have been few datasets released publicly for research.
Most of the broader sequence labeling datasets that are publicly available to NLP researchers focus on the task of Named Entity Recognition (NER).
NER datasets typically include general entity types like persons, organizations, and locations (e.g. tjong-kim-sang-2002-introduction,sang2003introduction,hovy-etal-2006-ontonotes) but not the entity types associated with spans for e-commerce.
E-commerce data presents additional challenges compared to other sequence labeling datasets since it can be more noisy and unstructured <cit.>.
We present QueryNER, a publicly available dataset,[<https://github.com/bltlab/query-ner>] manually annotated for e-commerce query segmentation.
The task in QueryNER is not to extract aspects, but rather to segment the user’s query into meaningful chunks.
Unlike the e-commerce task of aspect extraction, which tends to focus on fine-grained types that are often specific to particular categories of items, the tag types of QueryNER aim to be broadly applicable to queries for any product category.
This difference in approach leads QueryNER to have nearly all tokens included in some form of span, with the exception of a few special characters, some prepositions and conjunctions.
In Table <ref>, an example query is shown with the entity spans identified following the QueryNER schema compared with a hypothetical aspect-value extraction.
The type ontology is intended to be a small number of entity types and general purpose enough that it can be used for a broad range of e-commerce product categories.
Annotators also do not necessarily need to become domain experts in the products involved in the annotation process or familiarize themselves with thousands of aspects.
As seen in Table <ref>, Speaker cover, made of velvet, made to order, 1 pair and high-end are all natural chunks of the query.
For example, made of velvet clearly refers to the material while made to order is a more general description of the product and may very well not be covered under certain aspect ontologies.
Contributions:
Our contributions are the following.
(1) We define a type ontology and annotation guidelines that are broadly applicable to e-commerce segmentation.
(2) We release QueryNER, a new manually annotated dataset and open benchmark for this task.
(3) We report baseline results from models trained on the QueryNER dataset.
(4) We discuss the results of an experiment showing promising directions for using QueryNER as part of a null and low query recovery strategy by dropping spans rather than individual tokens.
(5) We conduct experiments showing benefit of data augmentation for query segmentation.
§ RELATED WORK
Sequence labeling is a well established task in NLP with tasks like NER (e.g. tjong-kim-sang-2002-introduction,sang2003introduction,hovy-etal-2006-ontonotes) and chunking (e.g. ) framed as labeling each token with a label indicating whether it is part of a span or not and what type of span.
The labeling most typically uses BIO labels where B marks the beginning of a span, I marks inside a span and O, outside the span.
Other label encodings have been used such as BIOES <cit.>.
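For illustration, converting entity spans to BIO labels can be done as in the short sketch below; the example query and spans are hypothetical but follow the QueryNER conventions.

def spans_to_bio(tokens, spans):
    """Convert (start, end, type) token spans to BIO labels.
    End indices are exclusive; spans are assumed non-overlapping."""
    labels = ["O"] * len(tokens)
    for start, end, etype in spans:
        labels[start] = f"B-{etype}"
        for i in range(start + 1, end):
            labels[i] = f"I-{etype}"
    return labels

tokens = ["speaker", "cover", "made", "of", "velvet"]
spans = [(0, 2, "core_product_type"), (2, 5, "material")]
print(spans_to_bio(tokens, spans))
# ['B-core_product_type', 'I-core_product_type',
#  'B-material', 'I-material', 'I-material']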
Prior work in aspect-value extraction has largely framed the task as sequence tagging as well.
<cit.> experimented with embedding representations for aspect-value extraction.
<cit.> made use of a knowledge graph and entity linking to reformulate queries and use aspect-value extraction for entity span information in a rephrasing model.
However, neither of these works release a public dataset.
Since there is a lack of public aspect-value extraction datasets, there has been work with alternative approaches to create training datasets such as distant supervision or iterative bootstrapping approaches.
<cit.> examine a bootstrapping method using positive unlabeled learning. They point out that while there are NER datasets for PER, ORG, LOC, there are not many publicly available NER datasets for e-commerce related tasks.
<cit.> work with product titles and bootstrap from a seed list of product attributes.
<cit.> use distant supervision to automatically create training data for a limited number of categories.
They use a question-answering approach for aspect-value extraction.
Some e-commerce sequence labeling data has been released, but it has some drawbacks.
papenmeier2021dataset release an e-commerce dataset for attribute-value extraction, but it only covers queries about laptops and jackets.
Due to the nature of e-commerce work, even datasets with general purpose queries or product titles are often not publicly available.
reddy2023shopping
created a dataset of Amazon e-commerce queries and matching product titles including judgments for relevance using ESCI labels (exact match, substitute, complement, irrelevant).
§ DATASET CREATION
QueryNER uses a subset of the Amazon Shopping Queries Dataset reddy2023shopping as the underlying data.
We release our dataset as token offsets that can be mapped to the Shopping Queries Dataset.
QueryNER consists of an ontology of
17 types.
One main difference in the guidelines given to annotators was to mark the fullest extent of a span possible.
This was intended to include words like size in the span [size 12] rather than size [12].
QueryNER follows the CoNLL tradition of using BIO format.
§.§ Ontology and Annotation Guidelines
The following are descriptions and examples for each entity type in QueryNER:
core_product_type:
The main thing being sold.
Generic ways of describing a product.
These are not official product names but common objects.
Examples: teapot, tennis shoes, figurine, lounge pants, dish soap
product_name:
The specific name of a product or model name.
Examples: F150, air jordan 7, sorento
product_number:
The number for a product.
It can be e-commerce product number or companies product number.
Editions of an item that are numbered can also be marked as product number as well as trading card or comic numbers.
Examples: BQ4422-001, 7101, DCC-3200P1
modifier:
Modifier is used for spans that clarify the type of product.
This can describe certain features a project has like “2 in 1" or “high performance".
Modifier can also be used for constraining the type of a product.
Modifier can also be used as a catch all for “type" of a product that does not fit in other predefined categories.
For example, “for sale" is not quite a condition nor a price.
Similarly “fast shipping", “trusted seller" may not fit other categories, but are still meaningful chunks.
creator:
The company or person who creates or produces the product.
It could also be the designer name associated with the product or brand name.
Examples: ford, disney, jim shore, honda, Hot wheels, dc comics
condition:
The condition of the product.
This describes whether the product is new or old and can go into more detail about things such as whether a product includes its original tags.
Examples: new, used, mint condition
UoM (Unit of Measurement):
Any way of measuring size or other unit of measurement.
This can include everything from clothing sizes, to lengths and widths, car engine sizes, battery capacity, amount of memory in a computer, lens sizes for cameras.
This includes time expressions that are units of measurement such as 30 minutes or 4 hours of battery life.
department:
Category of the population the item was made for.
Examples: Mens, womens, kids, jr., wmns
material:
The material or physical entity that makes up the item.
Examples: denim, canvas, plastic, metal, cotton, felt
time:
An expression of the date or time associated with the product that is not a unit of measurement.
For example, 30 mins or 4 hours for battery life should be labeled UoM.
Time spans such as 1920-1924 are marked as a single span [1920-1924].
content:
Names of characters, titles of tv or movies, sayings or phrases that appear on or within the product itself.
Many mugs, t-shirts, figurines, or comic books have some form of content or characters associated with them.
color:
The color, pattern, appearance related to the surface appearance or 2-dimensional design of the product.
Examples: unc blue, light gray, wolf gray, floral, cherry
shape:
The shape, form, or positioning or 3-dimensional design of the product.
Includes design descriptions for things like clothing, accessories, or automotive that refer to 3-dimensional descriptions of the item.
Examples: fit, slim, low, long sleeve, flat, rear, front, rectangular, orb
quantity:
The number of the product being sold. Includes for example “lot of 4".
Examples: 2 cds, lot of 4, multi-lot, package of 6, 3-box break
occasion:
The purpose or intended use of an item.
Typically an event, holiday, season, or occasion. “hiking boots” is its own product, and “hiking” in this case should NOT be marked as an occasion.
Examples: sport, athletic, wedding, winter, halloween, bridal, birthday
origin:
The origin of a product.
This is likely the location it comes from but could also be a specific event where the item was created such as a convention.
States or provinces can also be an origin tag.
price:
The price of a product.
Also includes words expressing relative price like “expensive", “cheap", “lowest price", or “good deal".
A full copy of the annotation guidelines is included in Appendix <ref>.
§.§ Annotation Process
We began with preliminary annotation experiments on internal data in multiple categories in order to refine the annotation guidelines and type ontology.
We then conducted annotation on a subset of the public Shopping Queries Dataset.
Three annotators were assigned to the test data in order to assess agreement and ensure quality.
One annotator was assigned to the training and development portions of the data.
Additional quality checks were conducted which included flagging queries without a core product type or multiple core product types for further review.
We originally selected ten thousand queries for annotation.
Some were thrown out due to being outside the target language.
Some were removed for profanity that had not been identified in the original filtering and query selection.
The test set was further adjudicated to resolve conflicts and review annotation.
The adjudication process generally accepted annotations where more than one annotator was in agreement unless it appeared to be a clear violation of the annotation guidelines.
The adjudication was conducted by a single adjudicator who was involved in the creation of the annotation guidelines.
§.§ Agreement
We computed inter-annotator agreement using Fleiss' Kappa across all three annotators and also Cohen's Kappa between pairs of annotators.
Agreement measures are given in Table <ref>.
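These statistics can be computed with standard libraries; the sketch below assumes agreement is measured over per-token label codes, which is one reasonable choice.

import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# labels: one category code per token per annotator, shape (n_tokens, 3);
# token_label_codes is assumed to be prepared upstream.
labels = np.asarray(token_label_codes)

k_12 = cohen_kappa_score(labels[:, 0], labels[:, 1])   # pairwise Cohen's kappa

counts, _ = aggregate_raters(labels)                   # n_tokens x n_categories
k_fleiss = fleiss_kappa(counts)                        # across all three annotators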
Agreement was relatively high in the initial internal annotation, but was lower when annotating the publicly released Amazon Shopping Queries Dataset.
Cohen's kappa values for the public dataset were more consistent across annotator pairs than the internal annotation where it appeared annotator 2 tended to conflict more with annotators 1 and 3.
While the domain of both the internal and public data sets are e-commerce, there could be slight differences in the types of queries which lead to the difference in agreement scores.
For example, the internal data included more automotive parts and accessories than the Shopping Queries Dataset.
The internal trial annotation also was a smaller amount of data and consisted of only 400 queries with three annotators per query, while the subset of the Amazon Shopping Queries dataset that was annotated for QueryNER included 1,000 queries with three annotators per query for the test set.
§.§ Dataset
The final QueryNER dataset contains close to 1,000 queries in the test set and over 7,000 queries in the training set.
The average query length is 3.63 tokens, and the distribution of query lengths is shown in Figure <ref>.
The average length of an entity span is 1.60 tokens.
We also present the count of entities of different lengths in Figure <ref>.
Entity lengths are shown on a log scale to show that while the vast majority of entities are one or two tokens long, there are smaller quantities that do have longer lengths.
Table <ref> shows the number of queries entities and tokens in each of the train, development, and test splits.
Table <ref> shows the balance of entity types in the corpus.
Unsurprisingly, core_product_type is the most frequent type since most queries have a main product.
The next most frequent types are and .
§ EXPERIMENTS
We conducted three sets of experiments.
We set baseline results on the QueryNER dataset.
We examined the effect of dropping spans identified by QueryNER compared with token dropping as a recovery for null and low recall queries.
Finally we experimented with simple data augmentation techniques to probe the robustness of our models.
<cit.> highlighted some sequence labeling evaluation issues from invalid BIO label sequences.
We use what they refer to as “conlleval" repair for all score reporting in this work.
For the following experiments, unless otherwise stated, we use the Hugging Face transformers library to implement an encoder with a token classification head, with hyper-parameters of a batch size of 16, 20 epochs of training, a learning rate of 5.0e-5, and a warmup ratio of 0.1.
All experiments are run using 10 different random seeds.
We report the average precision, recall, and F1 score with standard deviations.
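A sketch of this setup with the stated hyper-parameters is shown below; label_list, train_ds, and dev_ds are assumed to be the BIO label set and tokenized datasets prepared upstream.

from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"   # or the checkpoint with continued pre-training
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(label_list))   # BIO tags over the 17 types

args = TrainingArguments(
    output_dir="queryner-bert",
    per_device_train_batch_size=16,
    num_train_epochs=20,
    learning_rate=5e-5,
    warmup_ratio=0.1,
    seed=seed,                                # one of the 10 random seeds
)
Trainer(model=model, args=args,
        train_dataset=train_ds, eval_dataset=dev_ds).train()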
§.§ Baseline Tagging Experiments
We establish baseline performance on the QueryNER dataset.
We train a sequence labeling model using BERT <cit.>, XLM-R <cit.>, and also a BERT model with further pre-training using masked language modeling on the rest of the Amazon ESCI queries not already annotated in QueryNER.
Further pre-training has been shown to increase scores when adapting a model to a specific domain or task <cit.>.
The baseline scores are reported in Table <ref>.
The baseline scores demonstrate the challenge of segmenting e-commerce queries which have less context and a freer word order than typical English sentences.
For comparison, performance on the English CoNLL NER task has F1 scores reported in the nineties (e.g. ).
Using a Wilcoxon rank-sum test, the difference between BERT and BERT with continued pretraining on the rest of the Amazon ESCI dataset's queries has a p-value of 0.0008 and is statistically significant despite being a fairly small difference in F1 score.
§.§ Token vs Entity Dropping
For this experiment, we apply the segmentation model trained on data annotated with the QueryNER ontology to query reformulation for recovering null and low recall queries.
We consider 5,471 null and low recall queries and segment them with a model trained on an internal version of the QueryNER dataset.
We then create two variants for each query: one by randomly dropping two tokens, and one by dropping a random entity while preserving any entity with the type core_product_type.
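The two variants can be sketched as follows, where entities is assumed to be the tagger's output as (span, type) pairs:

import random

def drop_two_tokens(tokens):
    """Variant 1: remove two random tokens from the query."""
    if len(tokens) <= 2:
        return list(tokens)
    keep = set(random.sample(range(len(tokens)), len(tokens) - 2))
    return [t for i, t in enumerate(tokens) if i in keep]

def drop_one_entity(entities):
    """Variant 2: remove one random entity span, never the core product type.
    entities: list of (span_tokens, entity_type) pairs from the tagger."""
    droppable = [i for i, (_, etype) in enumerate(entities)
                 if etype != "core_product_type"]
    if not droppable:
        return list(entities)
    drop = random.choice(droppable)
    return [e for i, e in enumerate(entities) if i != drop]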
We run the original query and the variants through our internal information retrieval system to get measures of recall and relevance.
Recall is just the number of items returned.
Relevance is a model-based measure on a scale of 1 to 5, where 5 is most relevant.
Relevance is an aggregate of the top 60 items returned.
We compute the proportional delta between the original query and the variant from token or entity dropping and show the count of queries binned by their gain in recall (shown in Figure <ref>) and relevance (shown in Figure <ref>).
Since the average number of tokens in an entity is 1.6 tokens and so differs from the random dropping of two tokens, these experiments are not directly comparable.
However, overall the mean decrease in relevance over all the queries was lower for dropping an entity and keeping the core product type than for randomly dropping two tokens while maintaining similar gains in recall.
While random dropping of two tokens does appear to have more cases where the gain in recall is substantial, it is possible many of these query rewrites are lower in relevance and relaxing the constraints too much or had very low recall in the original query.
A proportional gain of thousands of items may suggest that the query has been relaxed too much, though we note that there are few of these cases.
Note the right most peak in the distribution for the relevance experiment in Figure <ref> is on a log scale and so there are very few that have such a large relevance increase.
To have that much of a relevance increase the original query would have to have returned very few relevant items.
§.§ Data Augmentation
Given the short nature of e-commerce queries and the lack of context available to the model, we hypothesize that the models trained on the QueryNER dataset may not be robust to unseen or noisy data.
The model may memorize positional information (for example, creators tend to come at the beginning) or may memorize specific tokens as being a certain entity type.
<cit.> applied transformations to create adversarial examples to make a more challenging test dataset.
They then showed how training with augmented data could lead to a more robust model.
We similarly apply a series of transformations to the QueryNER test set to create a more challenging test set of queries.
We assess the best performing model from the baseline experiments, BERT plus continued pre-training on the rest of the Shopping Queries Dataset, on this transformed test set.
We then train individual models using the concatenation of transformed versions of the training data and the original training data.
We compare how these models trained on augmented data perform on both the challenge transformed test sets and also the original QueryNER dataset.
We apply five transformations to the QueryNER train and test sets.
Examples of the transformations are shown in Table <ref>.
Shuffled:
Shuffled is simply a random shuffling of the entity spans of the test set.
Butterfingers:
The butterfingers transformation replaces a small number of characters as if someone has made a typographical mistake.
We use the implementation by <cit.> in their NL-Augmenter package; a minimal sketch of this transformation appears after this list.
Color:
We replace just color spans with other color spans from tagging the rest of the Shopping Queries Dataset and using these entities for replacement.
We limit to only colors with two tokens or less to avoid mislabeled color spans.
Mention replacement:
We create a set of entity mentions from tagging the rest of the Shopping Queries Dataset.
We replace entity spans for entities that co-occur with a particular core product type or a particular creator.
For example, we replace the entities around “shoe" with other entity spans that have co-occurred with “shoe" in other contexts.
Numeric:
Replaces number words and digits using the implementation by <cit.>.
All Transformations:
Applies all transformations in the order of mention replacement, shuffle, butterfinger, numeric, color swap.
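Below is a minimal stand-in for the butterfingers transformation referenced above; the actual NL-Augmenter implementation uses a full keyboard layout and its own sampling scheme, so this is only an illustration.

import random

QWERTY_NEIGHBORS = {                  # a small illustrative subset of the layout
    "a": "qwsz", "e": "wsdr", "o": "iklp", "s": "awedxz", "t": "rfgy",
}

def butterfinger(text, prob=0.05, rng=random):
    """Swap a few characters for QWERTY neighbors, mimicking typing slips."""
    out = []
    for ch in text:
        neighbors = QWERTY_NEIGHBORS.get(ch.lower())
        if neighbors and rng.random() < prob:
            out.append(rng.choice(neighbors))
        else:
            out.append(ch)
    return "".join(out)

print(butterfinger("adidas samba shoes size 12"))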
The results from the augmentation experiments are shown in Table <ref>.
For all experiments we used the BERT model with continued pre-training on the Shopping Queries Dataset as the base model.
When evaluating the model trained on the original training data on the transformed challenge test sets, we did see a severe drop in performance for butterfinger and all transformations test sets.
Modest drops in performance occurred for numeric, color, and shuffled test sets.
The mention replacement test set appeared to become easier rather than more difficult.
Since the core product and creator from the original dataset were preserved, perhaps these served as sufficient cues to make the dataset less challenging.
As expected, performance improves on the transformed test sets when training a model on both the transformed training data concatenated with the original training data.
However, even with incorporating transformed data into the training, the butterfinger transformation test set still appears to be fairly challenging.
The performance of all the models trained on the combination of original and transformed datasets all perform slightly worse than the baseline models trained only on the original training data.
It is notable, however, that despite this roughly two-point F1 degradation in performance on the original test set, there is notable gain on the challenge transformed test sets for the butterfinger transformation, the shuffled transformation, and all transformations applied at once.
These experiments demonstrate that if the goal is to produce a robust segmentation model that performs well even under cases with spelling mistakes and free word order, training on a combination of augmented data can help, but at the cost of a slight drop in performance on clean less noisy data.
§ DISCUSSION
The results of our annotation efforts show that the task presented by QueryNER is challenging.
We were able to achieve relatively high agreement despite very little contextual information for the annotators.
F1 scores on baseline experiments provide room for further improvement, and we showed through augmentation experiments that simple data augmentation strategies can make the models trained on QueryNER more robust to noise.
§.§ Limitations
There were a number of challenges and limitations in defining the QueryNER ontology and creating the dataset.
For example, it can be hard to decide when to treat a token as a modifier of a core product type versus including it in a single, longer core product type chunk.
We favored using the longer span, but there are cases of greater ambiguity with longer noun-noun compounds such as “dog cone collar”.
It can be difficult for annotators to decide which of the possible chunkings is correct.
Many of the annotator disagreements come from slight span mismatches despite guidelines urging annotators to prefer longer more complete spans.
There are also a number of ambiguous tokens that can be difficult to differentiate between types.
Consider “mazda”, which can be a creator as the make of a car, but also appears as part of a product name in the phrase “mazda 3”.
Another limitation is that the dataset is only in English, though a small internal experiment showed the promise of multilingual transfer to Spanish, with performance similar to that on the English data.
The dataset was created with a specific set of business use cases in mind.
The underlying shopping queries come from the Shopping Queries Dataset, so the dataset is only representative of the types of queries included within it.
When applying the ontology we created based on internal data to the public Amazon Shopping Queries dataset, there were differences in the distribution of entity types from the public data.
The internal data had more years and dates and more frequently discussed the condition of an item, so the time and condition entity types were more prominent in the internally annotated data.
While the ontology appeared to cover the public shopping queries dataset and was tested internally on multiple shopping categories, other subdivisions of entity types may be useful for other use cases or specific categories.
We would expect the models trained on the QueryNER dataset to be somewhat limited by the types of queries available within this dataset.
§.§ Future Work
There are a number of promising directions for future work.
We have done some internal annotation of product titles showing broad applicability of the ontology; a potential direction for future work could be to conduct more annotation of product titles on a public dataset.
Another potential direction of future work could be to add relations between chunks of queries or product titles, clustering the chunks, or linking them to a knowledge graph.
The Shopping Queries Dataset includes Spanish and Japanese as well.
We experimented internally with transfer learning on a small set of 100 queries, but another promising direction would be to annotate queries from other languages.
§ CONCLUSION
We defined a type ontology and annotation guidelines that are broadly applicable to e-commerce segmentation and
released QueryNER, a new manually annotated dataset and open benchmark for query segmentation.
Our baseline models showed that the task of e-commerce query segmentation is challenging due to lack of context from short strings of text.
We showed one promising direction for using QueryNER as part of a null and low query recovery strategy by dropping spans rather than individual tokens.
Experiments with data augmentation showed how baseline models are not robust to transformations and noise, especially to permutations at the character level within a word.
We showed that using artificially augmented training data can help the model to be more robust to this type of noise, but at a slight cost of performance when measuring on the original test set.
§ ETHICS AND BROADER IMPACT
We have attempted to design an ontology that is broadly applicable in e-commerce queries and product titles and have tested using it with a range of different product categories.
The annotation is inevitably the product of the biases and opinions of the designers of the ontology and the annotators.
We have made efforts to report agreement measures and will release the original annotations of each annotator before adjudication for transparency.
The annotation effort was salaried and contract work with no specific hourly wage, but annotators were paid at least a living wage. Crowdsourcing was not used for this work.
We believe the impact of QueryNER will be positive since there are few if any publicly available chunking or NER datasets in the e-commerce domain.
§ ACKNOWLEDGMENTS
This work was supported by the grant “Improving Relevance and Recovery by Extracting Latent Query Structure” by eBay to Brandeis University.
§ QUERYNER ANNOTATION GUIDELINES
§.§ Overview
The goal of this task is to divide a user’s query into meaningful chunks and assign a broad type to each span.
As an example:
“High-end speaker cover for B&W 805d 1 pair made of velvet suede made to order”
“Speaker cover”, “made of velvet”, “made to order”, “1 pair” and “high-end” are all natural chunks of this query. The goal of this task is to better understand a query by breaking it up into meaningful pieces and assigning meaningful types to the spans. For example, “made of velvet” clearly refers to the material, while “made to order” is a more general description of the product.
§.§ Motivation
The goal of these annotation guidelines is to provide a type ontology for sequence labeling of e-commerce queries and product titles into meaningful chunks. While this task is similar to extraction of e-commerce aspects, there are some notable differences.
Unlike the e-commerce task of aspect extraction, which tends to focus on fine-grained types that are often specific to particular categories of items, the tag types of QueryNER aim to be broadly applicable to queries for any product category. The goal is not to extract aspects, but rather to segment the user's query into meaningful chunks.
The type ontology is also meant to be small and general purpose enough that annotators do not necessarily need to become domain experts in the products involved in the annotation process.
§.§ Steps
* First familiarize yourself with the tag types and procedures in this document.
* Read the full string of text first.
* Try to identify the main entity that is being described. For example, “sneakers”. This will often be core_product_type but may be other types if there is no generic product mentioned. If it is unclear what the main entity being sold is, it is best to do a search to try to find the item or similar items. This can also help identify unfamiliar brand names or item specific terminology.
* Mark the spans that are clearest or easiest to identify first. It is not required to go from left to right. This can help narrow down the more difficult decisions.
* However, you may need to reconsider these decisions after doing a web search as some things that seem like obvious design descriptions or demographics may actually be part of an unfamiliar brand name.
§.§ Rules & Tips
* Assign a tag to every word in the query.
* Pay attention to context. For example “gold” would be tagged as Material in a fine jewelry item, but as Color for a pair of sandals.
No Tag
* Use “No Tag” for words or punctuation that are clearly not part of any span.
* Using “No Tag” for this task should be very rare and should be avoided as much as possible.
* Tag special characters when they are part of a chunk:
The “&” in "Abercrombie & Fitch" should be tagged as "Brand Name" because it is part of the trademarked brand name.
* The “&” in “blue & green” should get “No Tag” because it serves as punctuation only since “blue” and “green” can serve as their own chunks.
* Do not tag prepositions when they are just joining two core_products. For example, [Earbuds] with [case]. You should include the prepositions when they are a core part of the span of a modifier: [Earbuds with bluetooth] [with mic]. Do not tag prepositions with “No tag” when they are part of a chunk of the query.
Obscure tag
Use the “Obscure” tag for words that you cannot decipher.
Words and titles not in the Native Language (except for EN) should be tagged as “Obscure” unless:
* The word is commonly used in the native language. For instance: "attaché" and "Art Nouveau".
* The word is part of a brand name or a product name. For instance, the French brand name "Petit Bateau".
* If the majority of words are not in the native language (except for EN), all words should be marked as “Obscure”.
* Tag misspellings and abbreviations whenever possible. If you cannot understand the meaning of the word or special character, use “Obscure”.
* If a word has a gender disagreement or the word order is not fluent, tag it normally.
For example, “iluminado retículo” instead of “retículo iluminado” or “objetivo” instead of “objetiva”.
Spans
Tag the full span of tokens for each chunk
Always attempt to mark the most complete span possible.
For each meaningful chunk of a query, the entire span should be marked. For example, for the text “hiking boots made in Italy size 12”, “hiking boots” should be marked core_product_type rather than two separate spans. “made in Italy” should be marked “Origin”. Note the entire span is marked and not just “Italy”. “Size 12” is also marked in its entirety as UoM rather than just marking the span “12”.
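To illustrate how such span annotations expand into the token-level labels a sequence tagger consumes, here is a small sketch assuming a BIO encoding; the exact label scheme of the released dataset may differ.

```python
def spans_to_bio(spans):
    """Expand (tokens, tag) chunks into token-level BIO labels."""
    tokens, labels = [], []
    for toks, tag in spans:
        for i, tok in enumerate(toks):
            tokens.append(tok)
            labels.append(("B-" if i == 0 else "I-") + tag)
    return tokens, labels

# The "hiking boots made in Italy size 12" example from the guidelines above.
spans = [(["hiking", "boots"], "core_product_type"),
         (["made", "in", "Italy"], "origin"),
         (["size", "12"], "UoM")]
for tok, lab in zip(*spans_to_bio(spans)):
    print(f"{tok:>8}  {lab}")
```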
How to tell whether to divide a chunk further?
If removing a portion of a chunk fundamentally changes the meaning, it should be kept as a single chunk. For example, [wedding dress] vs. [dress] is a different product. Another test is to reword the query to separate the two spans: [dress] [to wear at a wedding] is a different product than an actual wedding dress, since it could be a bridesmaid's dress or a guest's dress. As another example, [ausdom] [headphone pads] would mark “headphone pads” as a single span, since just “pads” could include brake pads, pads for furniture, or other types of pads.
Separate distinct chunks
It is possible for tags of the same type to appear consecutively. These should be divided into separate chunks. For example, “looney tunes bugs bunny daffy duck coffee mug” should be tagged as
“looney tunes bugs bunny daffy duck coffee mug”.
Where “looney tunes”, “bugs bunny”, and “daffy duck” are separate chunks tagged as “content” and “coffee mug” is tagged as core_product_type.
§.§ Comparisons of Types
§.§.§ Core_product_type vs Modifier
Examples like “hiking boots”, “battery tray”, or “wedding dress” are tagged as a single core_product_type since the multi-word span describes a unique product. For example, with “wedding dress” vs ”dress” there is a greater change in meaning, where for “comfortable boots”, comfortable should just be marked as a modifier as it is not part of the core meaning of the product.
Here [two way] is a modifier while [car speaker] is a core_product_type.
[Two way] [car speaker]
§.§.§ Creator vs Product_name
Product_name is not to be used for brand names or names of companies, which are marked creator.
Kia [creator] sorento [product_name]
§.§.§ Product_name vs Product_number
Product names and product numbers can occur in the same query. Product numbers differ in that they are more of an identifying number and not a known product line or specific name, while product names may contain numbers, but typically have other non-number components like “Campus 80s” in the example below or “f20”. FW7619, however, is an alphanumeric identifier for the product’s style.
Adidas[creator] Campus 80s[product_name] By Alex Nash[creator] FW7619[product_number]
Even product names, models or lines of products that contain numbers are marked product_name and not product_number. product_number is used for numbers that are not names but identifiers or codes used by either eBay or another retailer. “frame” is marked as core_product_type because it is the main item being sold and is a generic mention and not a specific name of a facemask frame.
Resmed [creator] f20 [product_name] frame [core_product_type]
The product_number tag is used for the number of a product. This can be an eBay product number or a company's product number. Note that it cannot be a model number that is used as the name of a product: in “Ford F150”, F150 is the product's name.
§.§.§ Condition vs Modifier
Condition refers to the quality, newness, or availability of the product. More general descriptions of how the product will be made are left to the modifier label. For example, High-end[condition]
vs made to order[modifier].
High-end[condition] speaker cover for B&W 805d 1 pair made of velvet suede made to order[modifier]
If you encounter an acronym you aren't familiar with, it is often possible to find it explained by doing a web search.
eBay maintains a list of acronyms:
<https://www.pages.ebay.com/pr/es-co/help/account/acronyms.html>
However you may find some that aren’t included in that list. It is usually possible to find the meaning with a web search.
This website also has a list of e-commerce acronyms:
<https://resellingrevealed.com/ebay-abbreviations-acronyms/>
§.§.§ Department vs UoM
For departments next to sizes, still mark them as separate spans even though together they may indicate a set of sizes based on gender or age group. So “men's size 12” should still be marked as department and UoM.
§.§.§ Occasion vs Core_product_type
“hiking boots” is its own product, and “hiking” in this case should NOT be marked as an occasion. Only mark occasion when the span is not a typical phrase used to describe a product, for example, “plates for [wedding]”.
§.§.§ Time vs UoM
Time that is used as a measurement of how long something lasts should be marked as a unit of measurement (UoM), but should be marked as time when not used as a measurement. Time periods, such as 1912-1915, should still be marked as time despite being a duration.
[4 hours] laptop battery
“4 hours” is marked as UoM since it is a measurement of the battery life.
Battery from [1910]
“1910” is marked as time since it is not a measurement of the battery.
§.§.§ Time vs Demographic
In this case, teens refers to the time “1910s” rather than a demographic.
Teens[time] 1920s[time]
§.§.§ Content vs Creator
Items that are signed are considered content as well as the name of the person signing if included. Note that the name from a signature differs from the name of the creator. In cases where the signer is also the creator, mark as creator.
§.§.§ Quantity vs Condition
Note just because a number is referenced does not mean it is a quantity. In the example below, 4 unopened is marked as Condition since it refers to the condition of a portion of the items.
[Lot of 6] plastic canvas kits Christmas Needlecraft Shop [4 unopened]
§.§.§ Origin vs UoM
Be careful that this is not part of the size. Clothing often specifies the country for the size, for example “USA size 12”. Since this whole span is a size it should be marked as a unit of measurement [UoM].
§.§.§ Origin vs Content
Locations may also show up as content on items like clothing or posters. In these cases, they should be marked content.
§.§.§ Origin vs Creator
Origin locations may also appear in product names or names of companies. The whole span of text “Nintendo of America” would be marked as creator rather than origin. “Milwaukee” is a brand name of tools and should be marked creator.
Tag
Definition
Examples
core_product_type The main thing being sold. Generic ways of describing a product. These are not official product names but common objects. teapot, tennis shoes, figurine, lounge pants, dish soap, comic book
product_name The specific name of a product or model name F150, air jordan 7, sorento
product_number The number for a product. Can be an eBay product number or a company's product number. Editions of an item that are numbered can also be marked as product number as well as trading card or comic numbers. BQ4422-001, 7101, airforce 1 cw2290-111 size 12. In this example, cw2290-111 is used as a style code. This particular item also has an eBay product id of 6045644602, which would also be marked as product_number if it was included in the text of the query.
modifier Modifier is used for spans that clarify the type of product. This can describe certain features a project has like “2 in 1” or “high performance”. Modifier can also be used for constraining the type of a product. Modifier can also be used as a catch all for “type” of a product that doesn’t fit in other predefined categories 4 in 1 system, performance, essential, 1 spray, easy, garage.
For “car battery charger” “car” modifies the “battery charger” which would be tagged core_product_type.
For example, “sport utility” when describing a vehicle as in “kia sportage sport utility”.
However, if the text is selling a “2011 sport utility vehicle” or “grey SUV”, “SUV” and “sport utility vehicle” would just be marked as core_product_type because they refer to the item itself rather than specifying the type of item being sold.
Vintage wooden sculpture Nite owl
design statue [signed]
[Hi temp] masking plugs
condition The condition of the product. This describes whether the product is new or old and can go into more detail about things such as whether a product includes its original tags.
There are a number of acronyms commonly used by sellers (and less frequently by buyers).
Also use this tag for descriptions of what the product doesn’t come with or how it was packaged. For example, “no key”, “loose”, “no box”, “in original box”. new, used, mint condition, vintage, near mint, no box
NOS = New Old stock
OEM = Original Equipment
Manufacturer
EUC = excellent used condition
nm = near mint
NIB = New in box
Vtg = Vintage
UoM (Unit of Measurement) Any way of measuring size or other unit of measurement. This can include everything from clothing sizes, to lengths and widths, car engine sizes, battery capacity, amount of memory in a computer, lens sizes for cameras. This includes time expressions that are units of measurement such as 30 minutes or 4 hours of battery life. 16oz, L, Medium, 5 inches, 30 mins
[4 hours] of battery life
Tesla model x [12 volt] battery
2020 Shirt FR0820 Men’s [size S] new
department Category of the population the item was made for. Mens, womens, kids, jr., wmns
Elf clown elk hat for [adults] [kids] gift
material The material or physical entity that makes up the item. denim, canvas, plastic, metal, cotton,
felt
motorcycle jacket black [leather]
size xl
VANS hi top black [canvas]
skateboarding
shoes
EASECASE Custom made
[genuine leather] case
time An expression of the date or time associated with the product that is not a unit of measurement. For example, 30 mins or 4 hours for battery life should be labeled UoM. There are time abbreviations to be aware of.
MCM = Mid Century Modern 1999, 2012-13, ‘67, autumn,
[Mid century] hat ladies
content Names of characters, titles of tv or movies, sayings or phrases that appear on or within the product itself.
Many mugs, t-shirts, figurines, or comic books have some form of content or characters associated with them. Often multiple consecutive chunks are found for content.
Meaningful chunks or phrases should still be split. Star wars, knights of the old republic
big trouble in little china, big bang
theory, linus, lucy, far east tour
[Boston Marathon 2020]
[Rise Up N Run] Shirt
creator The company or person who creates or produces the product. It could also be the designer name associated with the product or brand name. Ford, Disney, Jim Shore, Honda, Hot wheels, dc comics, polaroid
color Description of the color, pattern, appearance related to the surface appearance or 2-dimensional design of the product.
Color can include colors described by words that can also be flavors or foods in other contexts. unc blue, light gray, wolf gray, floral
SPENDOR AUdio SP2 Loudspeaker
[Cherry]
Color also includes patterns:
The emily & meritt [pirate stripe]
sheet set
shape Description of the shape, form, or positioning or 3-dimensional design of
the product.
Includes design descriptions for things like clothing, accessories, or automotive that refer to 3-dimensional descriptions of the item. fit, slim, low, long sleeve, flat, rear, front, rectangular, orb, cylindrical
quantity The number of the product being sold. Includes for example “lot of 4”.
Box break: Multiple people split the cost of a box and share the contents cds, lot of 4, multi-lot, package of 6, 3-box break, Snap-On Tools Hammer [a set of 4] great condition
occasion The purpose or intended use of an item. Typically an event, holiday, season, or occasion. `hiking boots` is its own product and `hiking` in this case should NOT be marked as a purpose. Only mark occasion when the span is not a typical phrase to describe a product, for example, “plates for wedding” would be an occasion, but “wedding dress” would just be a core_product_type. sport, athletic, wedding, winter, halloween, bridal, birthday, [Christmas] Santa Claus figurine nutmeg shaker, Japanese [kitchen] chefs knife moritaka hamono aogami super kurochi 240mm, Elf clown elk hat for adults kids [gift]
origin The origin of a product. This is likely the location it comes from but could also be a specific event where the item was created such as a convention. States or provinces can also be an origin tag. For example: Amish style kitchen table [made in Ohio] Made in the USA, USA, Japan, Germany, Comic-con
price The price of a product. Also includes words expressing relative price like “expensive”, “cheap”, or “good deal”. $12.99, cheap, affordable, expensive
no_tag Used for punctuation or words that are not part of a chunk. For this task this tag should be used very rarely since the goal is to separate each query into chunks. Each word is expected to be part of some chunk. ‘, “, &, –
obscure Indecipherable text or text in a language outside the target. Please make the best effort when possible to identify spans that may be product numbers or model names or numbers Asdfjalksjdf, other languages
Entity tags with their descriptions and examples.
Scientific Computing and Imaging Institute, University of Utah, UT, USA; Kahlert School of Computing, University of Utah, UT, USA
{jadie, iyerkrithika, shireen}@sci.utah.edu
Weakly Supervised Bayesian Shape Modeling from Unsegmented Medical Images
Jadie Adams1,2 Krithika Iyer1,2 Shireen Y. Elhabian1,2
May 20, 2024
=========================================================================
Anatomical shape analysis plays a pivotal role in clinical research and hypothesis testing, where the relationship between form and function is paramount. Correspondence-based statistical shape modeling (SSM) facilitates population-level morphometrics but requires a cumbersome, potentially bias-inducing construction pipeline. Traditional construction pipelines require manual and computationally expensive steps, hindering their widespread use. Furthermore, such methods utilize templates or assumptions (linearity) that can bias or limit the expressivity of the variation captured by the constructed SSM. Recent advancements in deep learning have streamlined this process in inference by providing SSM prediction directly from unsegmented medical images. However, the proposed approaches are fully supervised and require utilizing a traditional SSM construction pipeline to create training data, thus inheriting the associated burdens and limitations.
To address these challenges, we introduce a weakly supervised deep learning approach to predict SSM from images using point cloud supervision. Specifically, we propose reducing the supervision associated with the state-of-the-art fully Bayesian variational information bottleneck DeepSSM (BVIB-DeepSSM) model. BVIB-DeepSSM is an effective, principled framework for predicting probabilistic anatomical shapes from images with quantification of both aleatoric and epistemic uncertainties. Whereas the original BVIB-DeepSSM method requires strong supervision in the form of ground truth correspondence points, the proposed approach utilizes weak supervision via point cloud surface representations, which are more readily obtainable. Furthermore, the proposed approach learns correspondence in a completely data-driven manner without prior assumptions about the expected variability in shape cohort. Our experiments demonstrate that this approach yields similar accuracy and uncertainty estimation to the fully supervised scenario while substantially enhancing the feasibility of model training for SSM construction.
§ INTRODUCTION
Statistical shape modeling (SSM) has emerged as a useful tool in medical imaging and computational anatomy, offering valuable insights into the variability of anatomical structures, such as organs or bones,
across a given population. SSM provides a population-level statistical representation of morphology, enabling wide-ranging applications in clinical research, including disease diagnosis <cit.>, treatment planning <cit.>, surgical simulation <cit.>, and outcome prediction <cit.>.
In SSM, shapes are either represented explicitly via landmark or correspondence points, or implicitly via deformation fields (coordinate transformations in relation to a predefined or learnable atlas) <cit.>.
The point distribution model (PDM) is a widely adopted explicit shape representation consisting of dense sets of correspondence points defined on the surface of the anatomical shapes in semantically consistent locations across the population.
Traditionally, PDMs were automatically defined on preprocessed shape cohorts (segmented from medical images) via pairwise mapping to a predefined or learned atlas/template (e.g., <cit.>) or via groupwise optimization (e.g., <cit.>).
Such SSM construction pipelines require time-consuming and expert-driven steps such as segmentation, shape registration, and optimization parameter tuning or atlas construction. Furthermore, each time a new shape is added, the pipeline must be rerun as optimization is performed across the entire cohort simultaneously.
Deep learning approaches, such as DeepSSM <cit.>, offer an alternative to traditional pipelines by leveraging trained neural networks to directly infer PDMs from unsegmented volumetric images with minimal preprocessing. In inference, this alleviated the need for segmentation and reoptimization given a new scan.
However, integrating deep learning-based solutions into clinical practice necessitates understanding the uncertainty associated with model predictions. Therefore, Bayesian deep learning frameworks have been proposed to provide probabilistic PDM predictions capable of quantifying the two primary forms of uncertainty: aleatoric (data-dependent) and epistemic (model-dependent) <cit.>.
A notable approach is BVIB-DeepSSM <cit.>, a probabilistic formulation of DeepSSM <cit.> that utilizes a fully Bayesian extension of the variational information bottleneck (VIB) framework <cit.>.
BVIB-DeepSSM provides PDM prediction from unsegmented images with estimates of aleatoric and epistemic uncertainty that correlate with prediction error, ensuring reliable prediction without compromising accuracy.
Even though deep learning approaches mitigate the overhead associated with SSM construction during inference, they still depend on traditional SSM techniques to construct image/PDM pairs to supervise network training.
This reliance not only slows down the training preparation process but also means that the network inherits any limiting assumptions made during the construction of training PDMs. Such biases or assumptions can arise from various sources, such as atlas selection in pairwise surface matching approaches or in the definition of optimization objectives.
For instance, the current state-of-the-art (SOTA) groupwise optimization PDM construction method, known as particle-based shape modeling (PSM) <cit.>, imposes a linearity assumption. This assumption restricts the ability of PSM to accurately represent complex, nonlinear shape variations. Training networks on PSM-constructed PDMs could similarly bias network predictions.
We propose leveraging weak supervision from point clouds in BVIB-DeepSSM training to overcome these limitations. Point cloud shape representations consist of sets of unordered, nonuniform points that sample the surface of the shape. Recently, there has been growing interest in learning SSM from point clouds due to their ease of acquisition compared to the complete, noise-free surface representations (such as meshes or binary volumes) required by traditional SSM construction methods <cit.>.
This work proposes training BVIB-DeepSSM using image/point cloud pairs instead of image/PDM pairs. This approach significantly reduces the required supervision and enables training the model on readily available segmentation datasets.
Our contributions are summarized as follows:
* We provide a framework to predict SSM directly from images with reduced supervision by utilizing point cloud shape representations in training rather than ground truth PDMs.
* We introduce formulations of the VIB and fully Bayesian VIB objectives that utilize permutation-invariant Chamfer distance.
* We provide comprehensive experiments that demonstrate that the proposed approach improves the feasibility of predicting SSM from images without sacrificing accuracy or uncertainty calibration.
§ RELATED WORK
Traditional PDM construction methods utilize metrics such as entropy <cit.> or minimum description length <cit.>, or employ parametric representations <cit.>.
PSM <cit.> represents the SOTA optimization-based technique for group-wise SSM construction <cit.>. However, PSM assumes linear correlations, leading to a bias in the captured population variation.
DeepSSM <cit.> was the pioneering deep learning approach to predict PDMs directly from raw, unsegmented images. DeepSSM utilizes PDM supervision, where training labels are constructed via the full PSM pipeline (including segmentation, preprocessing and alignment, and PDM optimization).
Uncertain-DeepSSM <cit.> adapted the DeepSSM network to be Bayesian, providing aleatoric and epistemic uncertainties.
DeepSSM, Uncertain-DeepSSM, and other formulations <cit.> rely on a supervised low-dimensional encoding (shape descriptors), precomputed using principal component analysis (PCA). PCA supervision enforces a linear relationship between the latent and the output spaces and restricts the learning task to strictly SSM prediction. Additionally, PCA does not scale in the case of large sets of high-dimensional shape data.
In contrast, VIB-DeepSSM <cit.> introduced a variational information bottleneck (VIB) architecture <cit.> to learn a low-dimensional latent encoding tailored to the PDM estimation task, leading to improved generalization and more accurate estimation of aleatoric uncertainty.
However, the VIB framework is only half Bayesian <cit.>; thus, VIB-DeepSSM lacks the capability to quantify epistemic uncertainty.
BVIB-DeepSSM <cit.> extended the VIB-DeepSSM framework to be fully Bayesian, enabling the prediction of probabilistic shapes from images with quantification for both forms of uncertainty. This SOTA model is the basis of the proposed approach.
Recent work has explored unsupervised estimation of SSM from various shape representations <cit.>.
One study demonstrated that networks designed for point cloud competition perform reasonably well at the task of anatomical PDM generation <cit.>. These networks typically have an encoder-decoder architecture with a bottleneck <cit.>.
The decoder provides a continuous mapping from the learned latent space to output space, resulting in consistently ordered output point clouds, providing correspondence as a by-product.
Mesh2SSM <cit.> explicitly predicts PDMs in an unsupervised manner from mesh shape representations, utilizing complete surface information.
Point2SSM <cit.> is a self-supervised technique proposed to predict anatomical SSM from point clouds. By employing Chamfer distance reconstruction loss, Point2SSM encourages the predicted PDMs to accurately sample the entire point clouds.
Recently, SCorP <cit.> proposed leveraging a shape prior learned from surface meshes to predict PDMs from unsegmented images within a student/teacher framework.
While closely related to the proposed method, SCorP lacks uncertainty quantification and necessitates complete mesh surface representations.
The proposed method requires only image/point cloud pairs to supervise network training, providing PDM prediction and granular uncertainty estimates in inference.
§ BACKGROUND
§.§ Notation
Let X, Y, and Z denote random variables and let x, y, and z denote realizations of those respective random variables.
Given an unsegmented volumetric image of an anatomy, denoted x ∈ ℝ^H × W × D, the goal is to predict a PDM denoted y ∈ ℝ^3M. Each PDM is a set of M ordered correspondence points, where a 3D vector of coordinates defines each correspondence point.
Training the network requires a set of paired data, denoted 𝒟 = {𝒳, 𝒴}.
Here 𝒳 = {x_n}_n=1^N is a set of N unsegmented volumetric images.
In previously proposed fully supervised settings, 𝒴 = {y_n}_n=1^N, where y_n ∈ ℝ^3M denotes a ground truth PDM, constructed via a traditional pipeline, comprised of M ordered correspondence points.
In the proposed weakly supervised setting, 𝒴 = {c_n}_n=1^N, where c_n denotes a point cloud shape representation, meaning an unordered set of points on the surface of shape n.
The VIB framework utilizes a learned stochastic latent encoding: 𝒵 = {z_n}_n=1^N, where z_n ∈ ℝ^L and L ≪ 3M.
§.§ Variational Information Bottleneck
In information bottleneck (IB) theory, a stochastic encoding is learned to capture the minimal sufficient statistics required of input to predict the output <cit.>.
The encoding and model parameters are estimated by maximizing the IB objective:
argmax_θ I(Z, Y; θ) - β I(Z, X; θ)
where I denotes mutual information and β is a Lagrangian multiplier.
The first term in Eq. <ref> encourages Z to be maximally expressive of Y, encouraging predictive accuracy. The second term encourages Z to be maximally compressive of X, affecting the model complexity.
In the deep variational information bottleneck (VIB) <cit.> approach, the IB model is parameterized via a neural network with weights θ = {θ_e, θ_d}, and a latent distribution is learned by minimizing the IB objective (Eq. <ref>). Direct calculation of mutual information is intractable in this context, and thus VIB employs variational inference to derive a theoretical lower bound on the IB objective:
ℒ_VIB = 𝔼_z ∼ q(z|x, θ_e)[- log p(y|z, θ_d)] + β KL[q(z|x, θ_e) ‖ p(z)]
The first term, the negative log-likelihood (NLL), encourages z to be predictive of y. The second term, the Kullback–Leibler (KL) divergence, encourages z to be compressive of x. The β hyper-parameter controls the tradeoff.
§.§ BVIB-DeepSSM
VIB-DeepSSM <cit.> employs the VIB <cit.> approach to learn the latent encoding in the context of the task: predicting the PDM y from the image x.
The VIB-DeepSSM architecture (Fig. <ref>.A) is comprised of an encoder and a decoder. The encoder, f_e, comprised of 3D convolutional and densely connected layers parameterized by θ_e, maps the input image to a Gaussian latent distribution: 𝒩(z | μ_z, Σ_z).
Posterior samples z_ϵ are acquired from this predicted latent distribution using the reparameterization trick to enable gradient calculation.
The decoder, f_d, parameterized by θ_d, maps the latent encoding to the predicted output ŷ.
VIB-DeepSSM allows for capturing aleatoric uncertainty as the variance of the p(y|x) distribution.
This variance is computed by sampling multiple latent encodings from 𝒩(z | μ_z, Σ_z) and passing them through the decoder to get a sampled distribution of predictions. A Gaussian distribution is estimated from these samples, denoted 𝒩(y | μ_y, Σ_y). The estimated Σ_y captures the aleatoric or data-dependent uncertainty.
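The sampling procedure can be summarized in a short PyTorch-style sketch. The `encoder`/`decoder` callables and the log-variance parameterization are hypothetical stand-ins for the trained f_e and f_d, not the released implementation.

```python
import torch

def predictive_distribution(encoder, decoder, image, n_samples=30):
    """Estimate the predicted PDM mean and aleatoric variance by decoding
    several latent samples z ~ N(mu_z, Sigma_z)."""
    mu_z, log_var_z = encoder(image)          # Gaussian latent parameters
    samples = []
    for _ in range(n_samples):
        eps = torch.randn_like(mu_z)          # reparameterization trick
        z = mu_z + torch.exp(0.5 * log_var_z) * eps
        samples.append(decoder(z))            # one sampled PDM, shape (3M,)
    y = torch.stack(samples)                  # (n_samples, 3M)
    return y.mean(dim=0), y.var(dim=0)        # mu_y and aleatoric Sigma_y
```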
However, this approach does not quantify epistemic uncertainty because VIB is only half-Bayesian <cit.>.
BVIB-DeepSSM <cit.> derived the fully Bayesian VIB formulation by applying an additional PAC-Bound with respect to the network parameters.
In the VIB-DeepSSM model, parameters θ = {θ_e, θ_d} are fit via maximum likelihood estimation. BVIB-DeepSSM utilizes variational inference to approximate the posterior p(θ|𝒟). The BVIB-DeepSSM objective results in two intractable posteriors, p(z|x, y) and p(θ|𝒟); the former is approximated via q(z|x, θ_e) as in Eq. <ref> and the latter is approximated by q(θ). The two KL divergence terms are minimized via a joint evidence lower bound, resulting in the objective:
ℒ_BVIB = 𝔼[- log p(y|z, θ_d)] + β KL[q(z|x, θ_e) ‖ p(z)] + KL[q(θ) ‖ p(θ)]
In BVIB-DeepSSM (Fig. <ref>.B), epistemic uncertainty is captured as the variance in predictions made with various network weights sampled from the learned distribution p(θ|𝒟).
Many approaches have been proposed for the computationally challenging task of estimating a distribution over model parameters for epistemic uncertainty quantification. Among these approaches, concrete dropout and ensembling have been shown to be most effective <cit.> and are utilized in the BVIB-DeepSSM formulation <cit.>.
Concrete dropout employs Monte Carlo dropout sampling as a practical approach for approximate variational inference <cit.>. This approach utilizes a continuous relaxation of the Bernoulli distribution (concrete distribution) to parameterize the learned distribution over weights. Epistemic uncertainty is captured by the spread of predictions resulting from inference with various sampled dropout masks. Concrete dropout automatically optimizes the dropout probabilities at each layer alongside the network weights.
Batch ensemble <cit.> compromises between a single network and a full deep ensemble, achieving a balance between performance, computation time, and memory usage.
BVIB-DeepSSM additionally proposed combining concrete dropout and batch ensemble to acquire a multimodal approximate posterior on weights for increased flexibility and expressiveness <cit.>.
§ METHODS
We propose using point cloud shape representations to weakly supervise BVIB-DeepSSM.
Recent work has shown that bottleneck network architectures with fixed decoders supervised by point cloud-based loss can learn correspondence <cit.>. In such networks, the bottleneck captures a population-specific shape prior. Directly decoding the latent shape feature representation results in a consistent ordering of the output point clouds across samples, providing PDMs.
We propose to leverage this effect in BVIB-DeepSSM, allowing for the replacement of ground truth PDMs with unordered point clouds in the training data. This advancement requires updating the BVIB-DeepSSM formulation in two crucial ways: first, the objective must be altered for point cloud supervision, and second, consideration must be given to the epistemic uncertainty quantification approach.
§.§ Proposed Weakly Supervised Loss
Reducing the supervision requires updating the first term in the VIB and BVIB-DeepSSM objectives (Eq. <ref> and <ref>), the NLL term.
In the PDM-supervised setting, the NLL term is expressed as:
- log p(y|x, θ) = ‖y - μ_y‖_2^2 / (2Σ_y) + (1/2) log Σ_y
where μ_y and Σ_y are the mean and variance of the predicted distribution, estimated using various posterior samples z_ϵ.
Replacing y (an ordered PDM) with an unordered point cloud c requires replacing the L2 norm in Eq. <ref> with a permutation-invariant distance metric.
Chamfer distance is most commonly used for this purpose. The Chamfer distance from point cloud 𝒫 to point cloud 𝒬 is defined as:
CD(𝒫 → 𝒬) = 1/|𝒫| ∑_{p ∈ 𝒫} min_{q ∈ 𝒬} ‖p - q‖_2^2
Typically, the bidirectional Chamfer distance is used:
CD(𝒫, 𝒬) = CD(𝒫 → 𝒬) + CD(𝒬 → 𝒫)
Note the number of points in 𝒫, denoted |𝒫|, is not required to match |𝒬|.
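For reference, the Chamfer distances above can be computed in a few lines of NumPy; this brute-force pairwise version is for clarity only, as practical implementations use KD-trees or batched GPU kernels.

```python
import numpy as np

def chamfer(P, Q, bidirectional=True):
    """Chamfer distance between point sets P (n, 3) and Q (m, 3): the mean
    squared distance from each point to its nearest point in the other set."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # (n, m) pairwise
    cd = d2.min(axis=1).mean()                            # CD(P -> Q)
    if bidirectional:
        cd += d2.min(axis=0).mean()                       # + CD(Q -> P)
    return cd

P = np.random.rand(128, 3)    # e.g., predicted correspondence points
Q = np.random.rand(5000, 3)   # surface point cloud; |P| need not equal |Q|
print(chamfer(P, Q))
```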
We propose utilizing the single directional distance CD(μ_y → c) as a replacement for the L2 norm in Eq. <ref>, as this is a commensurate metric that is calculated point-wise, in a permutation-invariant manner.
The resulting updated NLL term is expressed as:
- log p(c | x, θ) ≈ CD(μ_y → c) / (2Σ_y) + (1/2) log Σ_y
This update enables permutation-invariant, point-wise error estimation. However, the single-directional CD does not ensure that the predicted PDM will sample the entire surface well. For instance, if all points in μ_y converge to a single point in c, CD(μ_y → c) would be minimized. To prevent this behavior, we include CD(c → μ_y) as a regularization term. This term encourages the predicted points to be well spread across the surface, so that each point in c has a close neighbor in μ_y. The resulting updated VIB-DeepSSM objective is expressed as:
ℒ_Proposed VIB = - log p(c|x, θ) + β KL[q(z|x, θ_e) ‖ p(z)] + α CD(c → μ_y)
where log p(c|x, θ) is computed via Eq. <ref> and CD(c → μ_y) is the regularization term computed via Eq. <ref>, weighted by the hyperparameter α.
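A compact PyTorch sketch of this objective follows. Tensor shapes, the scalar treatment of Σ_y, and the helper names are illustrative assumptions rather than the exact training code.

```python
import torch

def chamfer_oneway(P, Q):
    """CD(P -> Q): mean squared distance from each point of P (..., n, 3)
    to its nearest neighbor in Q (..., m, 3)."""
    d2 = torch.cdist(P, Q) ** 2
    return d2.min(dim=-1).values.mean(dim=-1)

def proposed_vib_loss(mu_y, log_var_y, cloud, mu_z, log_var_z, beta, alpha):
    """Chamfer-based NLL + KL(q(z|x) || N(0, I)) + coverage regularizer."""
    nll = (chamfer_oneway(mu_y, cloud) / (2 * log_var_y.exp().mean())
           + 0.5 * log_var_y.mean())
    kl = -0.5 * (1 + log_var_z - mu_z.pow(2) - log_var_z.exp()).sum(-1).mean()
    reg = chamfer_oneway(cloud, mu_y).mean()   # coverage of the point cloud
    return nll.mean() + beta * kl + alpha * reg
```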
§.§ Weakly Supervised Epistemic Uncertainty Quantification
In addition to updating the learning objective, the reduction in supervision requires adapting the approach to epistemic uncertainty quantification. While ensembling proved to be an effective frequentist approximation for estimating a distribution over model parameters in the original BVIB-DeepSSM formulation, it is not appropriate in the weakly supervised setting. This is because point cloud supervision does not enforce one particular point ordering in correspondence prediction as PDM supervision does. Rather, network-specific correspondence is induced by two factors: the Chamfer distance reconstruction loss and the consistent, continuous mapping from the latent space to the output space provided by the decoder. Thus, while a given network provides correspondence across predictions, there is no mechanism to enforce correspondence consistency across different networks or ensemble members. Each member would learn a unique output point ordering, rendering the ensemble averaging effect meaningless. Thus, in the weakly supervised context, a true Bayesian approximation method must be used to learn a distribution over weights within a single network.
Additionally, introducing stochasticity to the decoder would be detrimental to PDM prediction, as correspondence is induced by the established continuous mapping from the latent to output space. Thus, we propose adapting the concrete dropout-based BVIB-DeepSSM model to estimate epistemic uncertainty from the encoder alone and utilize a fully deterministic decoder. Here, predictive distributions are acquired by decoding various samples with various encoder dropout masks.
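A rough sketch of this inference procedure is given below, reusing the `predictive_distribution` helper from the earlier sketch. Keeping the encoder in train mode so that (concrete) dropout stays stochastic is one common Monte Carlo dropout idiom; the module names are assumptions.

```python
import torch

@torch.no_grad()
def epistemic_uncertainty(encoder, decoder, image, n_masks=30, n_z=10):
    """Spread of mean predictions across sampled encoder dropout masks,
    with the decoder kept deterministic to preserve correspondence."""
    encoder.train()   # dropout stays stochastic in the encoder only
    decoder.eval()    # deterministic decoder
    means = []
    for _ in range(n_masks):
        mu_y, _ = predictive_distribution(encoder, decoder, image,
                                          n_samples=n_z)
        means.append(mu_y)
    means = torch.stack(means)                  # (n_masks, 3M)
    return means.mean(dim=0), means.var(dim=0)  # prediction, epistemic var.
```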
Thus, the proposed BVIB-DeepSSM objective is expressed as:
ℒ_Proposed = - log p(c|x, θ) + β KL[q(z|x, θ_e) ‖ p(z)] + KL[q(θ) ‖ p(θ)] + α CD(c → μ_y)
where, as in Eq. <ref>, log p(c|x, θ) is computed via Eq. <ref>.
An overview of VIB-DeepSSM, BVIB-DeepSSM, and the proposed weakly supervised BVIB-DeepSSM approach are provided in Fig. <ref>.
§ EXPERIMENTS
§.§ Datasets
We utilize two challenging datasets to evaluate the proposed method: the left atrium and the liver.
The left atrium dataset includes 1,096 shapes derived from cardiac late gadolinium enhancement MRI images of different atrial fibrillation patients. The images were manually segmented at the University of Utah Division of Cardiovascular Medicine with spatial resolution 0.65 × 0.65 × 2.5 mm^3, and the endocardium wall was used to cut off pulmonary veins. This dataset includes substantial morphological diversity in overall size, the size of the left atrium appendage, and the quantity and arrangement of pulmonary veins. Following BVIB-DeepSSM, we hold out outlier cases in the test set, selected via thresholding on an outlier degree computed on images and meshes <cit.>. The resulting test set includes 40 shape outliers, 78 image outliers, and 92 randomly selected inlier test samples.
The liver dataset contains 834 CT scans and corresponding quality-controlled segmentations from the open-source AbdomenCT-1K dataset <cit.>. These images vary significantly in intensity, quality, and resolution, providing a challenging test case. We randomly split the liver data 80%/10%/10% into training, validation, and test sets.
§.§ Experimental Setup
We compare the proposed weakly supervised adaptations of VIB-DeepSSM and BVIB-DeepSSM with the original formulations.
We utilize the PSM construction method implemented in the ShapeWorks software suite <cit.> to create PDMs for training fully supervised methods. Additionally, we utilize ShapeWorks to process images and segmentations, including cropping around the region of interest and downsampling to manage memory usage.
We generate surface meshes with 5000 vertices from the segmentations. The vertices serve as point clouds for the proposed weak supervision.
We employ image augmentation in training all models in the form of additive Gaussian noise with random variance between 0 and 1% of the full signal.
Following the BVIB-DeepSSM strategy, burn-in is used to convert the loss from deterministic (L2 or CD) to probabilistic (Eqs <ref> and <ref>) <cit.>.
This burn-in counteracts the accuracy reduction that occurs when NLL-based loss is used with a gradient-based optimizer <cit.>.
The concrete dropout implementation of BVIB-DeepSSM is used with initial dropout probabilities of 0.1.
All models were trained until the validation error (either L2 or CD, depending on supervision) had not decreased in 50 epochs.
The training was done on Tesla V100 GPU with Xavier initialization <cit.>, Adam optimization <cit.>.
Full model parameters and training and evaluation code are provided at <https://github.com/jadie1/Weakly-Supervised-BVIB-DeepSSM/>.
§.§ Evaluation Metrics
There are three factors to consider when evaluating probabilistic PDM prediction accuracy.
The first is surface sampling accuracy, which assesses how well the points are constrained to capture the complete shape surfaces.
The second is the assessment of how well the population-level statistics are captured through predicted correspondences.
The third is the calibration of the uncertainty estimates. This section describes the metrics used to assess these three factors.
§.§.§ Surface Sampling:
A small Chamfer distance between the point cloud and predicted PDM, CD(,) (Eq. (<ref>)), indicates the output points accurately capture the complete shape. Point-to-surface distance (P2S) assesses how well points are constrained to the surface. P2S quantifies the distance of the predicted points to a complete ground truth surface shape representation (mesh).
§.§.§ Correspondence/SSM Metrics:
Principal component analysis (PCA) is used in SSM analysis to understand the modes of variation in the population and to evaluate how effectively population-level statistics are captured <cit.>. Three key metrics help measure this: compactness, generalization, and specificity.
Compactness (Comp.): Good correspondence leads to a more compact SSM, meaning the training data distribution can be represented using a minimal number of parameters. Strong correspondence allows a larger proportion of explained population variance to be captured with fewer PCA modes. A larger area under the cumulative variance plot indicates better correspondence.
Generalization (Gen. CD): A precise SSM should generalize effectively from training subjects to new, unseen subjects. The generalization metric measures the Chamfer distance (CD) between estimated correspondences from test point clouds and their reconstructions from training SSM-based PCA embeddings using varying numbers of components. A smaller CD indicates better generalization.
Specificity (Spec. CD): Specificity assesses whether the predicted SSM produces valid instances of the shape class. It is calculated as the average Chamfer distance between training examples and generated samples from the training SSM-based PCA embeddings using different numbers of components. A smaller CD suggests the SSM is more specific.
Recent work also utilizes mapping error (ME) to estimate correspondence accuracy <cit.>.
ME quantifies how consistent output point neighborhoods are across the population <cit.>. A lower ME indicates consistent neighborhoods, implying better correspondence.
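As an example of how one of these metrics is computed, the compactness curve reduces to the cumulative explained variance ratio of a PCA fit to the predicted training PDMs. A minimal scikit-learn sketch (with a random stand-in for real PDMs) follows.

```python
import numpy as np
from sklearn.decomposition import PCA

def compactness(pdms):
    """Cumulative explained variance of a PDM matrix (N, M, 3); a larger
    area under this curve indicates a more compact SSM."""
    pca = PCA().fit(pdms.reshape(len(pdms), -1))
    return np.cumsum(pca.explained_variance_ratio_)

train_pdms = np.random.rand(200, 1024, 3)   # stand-in for predicted PDMs
print(compactness(train_pdms)[:5])          # variance captured by 5 modes
```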
§.§.§ Uncertainty Calibration
We expect well-calibrated uncertainty estimates to correlate with the error. Thus, a higher Pearson r coefficient between predicted uncertainty and P2S error suggests better calibration. Furthermore, accurate uncertainty estimation is useful in out-of-distribution (OOD) detection.
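The calibration metric itself is a one-liner with SciPy; the array names below are assumptions for flattened per-point values over the test set.

```python
import numpy as np
from scipy.stats import pearsonr

def calibration_r(p2s_error, uncertainty):
    """Pearson r between per-point predicted uncertainty and P2S error;
    values closer to 1 indicate better-calibrated uncertainty."""
    r, _ = pearsonr(np.ravel(uncertainty), np.ravel(p2s_error))
    return r
```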
§.§ Results
Fig. <ref> provides the surface sampling and correspondence evaluation metrics across both test sets. The proposed weakly supervised models provide similar accuracy across all metrics while significantly reducing the supervision requirement. Hence, we are not sacrificing accuracy, but we democratize building networks that provide PDMs directly from unsegmented images by requiring only the level of supervision needed to train segmentation networks.
Fig. <ref> displays the modes of variation resulting from the predicted SSM on the test sets for the BVIB-DeepSSM model. The mean shape and primary and secondary modes of variation resulting from PDM supervision and the proposed PC supervision are very similar. In the left atrium case, the primary mode captures the length of the left atrium appendage, and the secondary mode captures the volume or sphericity. The primary and secondary modes of variation in the liver dataset capture the size and curvature.
The results demonstrate that the proposed weak supervision does not lead to less accurate PDM prediction. The predicted points sample the true surface to the same degree and offer a similar correspondence accuracy despite the absence of ground truth correspondence supervision. Additionally, the similarity in the captured modes of variation suggests that PDM predictions made with weak supervision could be equally useful in downstream clinical tasks as the predictions made with full PDM supervision.
Fig. <ref> displays the point-wise correlation between the predicted uncertainty values and P2S distance error across the test set. We would expect the uncertainty estimates to be higher for predicted correspondence points that are further from the true shape surface. The r correlation coefficients and resulting average uncertainty heatmaps are very similar, indicating that reducing the supervision does not significantly impact the uncertainty calibration. These results demonstrate the effectiveness of estimating the NLL term with CD rather than L2 Euclidean distance (Eq. <ref>). The spatial correlation between the P2S error and uncertainty heatmaps demonstrates the utility of these probabilistic frameworks in aiding in assessing prediction reliability.
Uncertainty quantification is also useful in detecting out-of-distribution (OOD) samples. The left atrium dataset is comprised of three subsets: image outliers, shape outliers, and randomly selected inlier examples. This partitioning was performed by thresholding on a precomputed outlier degree <cit.> as shown in Fig. <ref>. The error and uncertainty estimation distributions across subsets are shown in Fig. <ref>. The predicted uncertainty is slightly higher for the outlier test sets, especially for the extreme image outliers. The full and weakly supervised models provide similar patterns in error and uncertainty across the left atrium test sets.
Fig. <ref> shows the distribution of image outlier degrees across the randomly selected liver test set. Three outlier cases are identified in this histogram. These three cases are also clearly identifiable in the P2S error vs prediction uncertainty scatter plots in Fig. <ref>. Here, prediction uncertainty is the total aleatoric for VIB models and the sum of the total aleatoric and epistemic estimates for BVIB. The outlier cases are clearly identifiable, given the prediction uncertainty resulting from both full and weak supervision, suggesting the proposed weakly supervised approach is not detrimental to OOD detection.
§ CONCLUSION
We proposed an alternative training approach to BVIB-DeepSSM with reduced supervision. The proposed framework matches PDM-supervision accuracy while significantly streamlining the training pipeline.
In future work, the point cloud shape representations could also be leveraged to learn a more expressive prior p(z).
In <cit.>, it is proven that learning the variational autoencoder (VAE) latent prior is necessary for reaching the extremum of the VAE objective. This proof can be directly applied to show that learning p(z) is necessary for reaching the extremum of the VIB objective.
Future work could explore utilizing a point cloud autoencoder to learn p(z) in a shape-informed manner.
Overall, this work improves the feasibility of SSM construction from images, making SSM more accessible as a tool for clinical research.
§ ACKNOWLEDGEMENTS
This work was supported by the National Institutes of Health under grant numbers NIBIB-U24EB029011, NIAMS-R01AR076120, NHLBI-R01HL135568, and NIBIB-R01EB016701. We thank the University of Utah Division of Cardiovascular Medicine for providing left atrium MRI scans and segmentations from the Atrial Fibrillation projects and the ShapeWorks team.
splncs04
sachin.bharadwaj@nyu.edu
Department of Mechanical and Aerospace Engineering, New York University, New York 11201 USA
katepalli.sreenivasan@nyu.edu
Department of Mechanical and Aerospace Engineering, New York University, New York 11201 USA
Courant Institute of Mathematical Sciences, New York University, New York, NY 10012
Department of Physics, New York University, New York, NY 10012
Many claims of computational advantages have been made for quantum computing over classical, but they have not been demonstrated for practical problems. Here, we present algorithms for solving time-dependent PDEs governing fluid flow problems. We build on an idea based on linear combination of unitaries to simulate non-unitary, non-Hermitian quantum systems, and generate hybrid quantum-classical algorithms that efficiently perform iterative matrix-vector multiplication and matrix inversion operations. These algorithms lead to low-depth quantum circuits that protect quantum advantage, with the best-case asymptotic complexities that are near-optimal. We demonstrate the performance of the algorithms by conducting: (a) ideal state-vector simulations using an in-house, high performance, quantum simulator called QFlowS; (b) experiments on a real quantum device (IBM Cairo); and (c) noisy simulations using Qiskit Aer. We also provide device specifications such as error-rates (noise) and state sampling (measurement) to accurately perform convergent flow simulations on noisy devices.
Compact quantum algorithms that can potentially maintain quantum advantage for solving time-dependent differential equations
Katepalli R. Sreenivasan
May 20, 2024
============================================================================================================================
§ INTRODUCTION
Simulating nonlinear phenomena is quite arduous. In the case of hydrodynamic turbulence, for instance, the combination of a wide range of scales and the need for fine computational resolution <cit.> makes it quite challenging <cit.>. Similar challenges are encountered in other problems such as glassy and molecular dynamics, protein folding and chemical reactions. Simulating such phenomena requires a paradigm shift in computing—and a strong candidate for that is Quantum Computing (QC). QC is successful in surpassing its classical counterparts in some cases <cit.>, but it is yet to demonstrate its might in solving practical problems. See, e.g., <cit.> for a discussion of problems involving fluid dynamics. A general bottleneck is that most classical systems are nonlinear, whereas quantum algorithms that consist of quantum gates and circuits obey laws of quantum mechanics, which are linear and unitary. For the numerical tools used in this work, reconciling this mismatch makes linearization of the problem inevitable <cit.>, leading to high dimensional linear problems. It is thus necessary to create efficient quantum algorithms to solve high dimensional linear system of equations. Even without linearization, solving nonlinear PDEs would still demand, at the least, an efficient way to iteratively operate general non-unitary and non-hermitian matrices. The current work focuses precisely on developing these tools.
To solve flow problems (and PDEs in general) using QC, different quantum algorithms have already been proposed—e.g., Quantum Linear System Algorithms (QLSA) <cit.> and Variational Quantum Algorithms (VQA) <cit.>. These algorithms solve the governing PDEs from a continuum scale approach. There have also been meso-scale approaches based on Lattice Boltzmann methods <cit.>. A somewhat newer approach called Schrödingerization <cit.> uses analog qubits (continuous variables) to map classical PDEs into equivalent quantum systems, and integrates the corresponding Schrödinger equation. Although the present work uses the continuum approach, the same tools can be applied also to the discrete systems and others involving iterative matrix-vector multiplications and inversions. The examples include data analysis, machine learning, image processing, optimization, and so forth.
In this work we propose a set of hybrid, quantum linear systems algorithms based on Linear Combination of Unitaries (LCU) for solving the fluid equations. The current approach can provide provable guarantees on gate and time complexities. We present six Time Marching Compact Quantum Circuit (TMCQC) algorithms to solve the time-dependent PDEs by explicit and implicit time marching schemes. We make use of a certain concept of LCU, which has been used earlier in a different context of non-hermitian open quantum systems <cit.> and quantum chemistry applications. We transform this tool into hybrid quantum-classical algorithms that use a time-marching approach. These algorithms require only two, or at most four, controlled unitaries for the LCU decomposition, leading to low-depth quantum circuits. We show that, of the six TMCQCs proposed, the gate complexity (of basic two-qubit gates) is at best 𝒪(sϵ,log(N_gτ),polylog(ϵ/ε_U,1/ε_N),log^-3((κ-1)^-1)). Here, N_g is the grid size, τ is the number of time steps, s,κ are the sparsity and condition numbers of the matrix operators, and ϵ,ε_U,ε_N determine the accuracy of different approximations of the algorithm, as discussed in later sections.
The above complexity scaling is near-optimal in all parameters except s (which, however, still contributes as a small constant prefactor owing to the sparse, tri-diagonal matrices used here). At the worst, the complexity could be exponential in both τ and s. As one would expect, an LCU tool based on the time marching approach would generally tend to quickly diminish the success probability of the algorithm, thus blowing up the sampling sizes required to recover the solution accurately. However, using tools such as the Richardson extrapolation, we show that the query complexity of the algorithm can also be kept to a minimum, such that it contributes only a constant pre-factor to the overall complexity. The qubit complexity scales as 𝒪(log(N_gτ)) at best and 𝒪(log(N_gτ (log(ε^-1_N))/log((κ-1)^-1))) at the worst, which is optimal in both cases. Note that the complexities mentioned here are without any quantum amplitude amplification. The algorithms also avoid the need for expensive phase estimation methods, trotterization schemes, bit-arithmetic and classical optimization subroutines, making the quantum circuits compact. Furthermore, we also propose a few end-to-end strategies, using which the overall circuit simulations can conserve any available quantum advantage, apart from making them amenable to current and near-term hardware.
We use these algorithms to simulate a linear advection-diffusion problem and use its analytical and classical solutions as reference for estimating the accuracy of the solutions obtained. The simple and well-understood nature of the chosen problem makes it an ideal candidate for assessing the performance of the quantum algorithm. However, the tools described here can be extended to a more general class of flow problems. The performance of the current algorithms is studied via (a) statevector simulations using QFlowS—an in-house, high-performance, quantum simulator <cit.>, (b) experiments with a real quantum device (IBM Cairo), and (c) noisy simulations with the IBM Qiskit Aer platform to study noise effects by separating error contributions due to the algorithm itself from finite sampling of the final quantum states. The results of these three exercises indicate that our algorithms can produce accurate results that capture the flow physics both qualitatively and quantitatively. We use insights from these results to prescribe parameters for circuit designs to perform accurate, forward-time simulations of flow problems. We also estimate the resource and specification requirements of near-term devices that would be necessary to perform convergent flow simulations. We further highlight that, although we solve a simplified flow problem, these algorithms are quite general and can be used to solve nonlinear problems as well, by appropriately linearizing the problem into any one of the TMCQC formats, thus broadening their appeal. These results suggest that the proposed algorithms can be simulated on near-term machines at full scale, while retaining a potential quantum advantage.
In Section <ref> we describe the governing PDEs that will be used to assess the performance of the algorithms. Section <ref> introduces the specific linear combination of unitaries for constructing the time marching quantum circuits; they are discussed in detail in Section <ref>. In Section <ref>, we present end-to-end strategies in view of current and near-term devices, followed in Section <ref> by numerical results on simulators and a real quantum device. Finally, we summarize the discussion and outline our conclusions in Section <ref>.
We remark in passing that even though variational algorithms could be quite efficient in practice, it is hard to prove results rigorously owing to their strong dependence on the underlying optimization algorithm <cit.>.
§ GOVERNING EQUATIONS
The PDEs considered here reflect the momentum and mass conservation (assuming no body forces or source terms) and are of the form
∂𝐮/∂t + 𝐂·∇𝐮 = (1/Re)∇^2𝐮 - ∇p,
∇·u = 0,
where 𝐮 = (u,v,w) is the velocity, 𝐂 is the advection velocity, p is the pressure, and Re = UL/ν is the Reynolds number, with U a characteristic velocity, ν the kinematic viscosity and L a characteristic length.
When 𝐂=𝐮, it represents the full nonlinear Navier-Stokes equations, while if 𝐂=constant≠ 0 one has the linear advection-diffusion equation; 𝐂=0 represents the well-known linear Poiseuille/Couette flow equation.
Here, we particularly consider the one-dimensional, linear, advection-diffusion problem as the running example for all discussions, given by
∂ u/∂ t + C∂ u/∂ y = D ∂^2 u/∂ y^2 ,
where the velocity varies only along y (wall-normal direction).
We set C=U (the constant advection velocity) and set the diffusion coefficient D to unity. The initial condition is a delta function centered at the domain midpoint, u(y,0) = δ(y - L/2), and the system is subject to the periodic boundary condition u(0,t) = u(L,t). Such a setting also admits an analytical solution <cit.>, making it an ideal test case to evaluate the accuracy of the quantum solutions.
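As a point of reference for the quantum algorithms below, a minimal classical discretization of this problem can be sketched as follows (Python/NumPy; the grid size, time step and number of steps are illustrative choices, not the settings used in the paper):

```python
import numpy as np

Ng, L, C, D = 64, 1.0, 1.0, 1.0   # illustrative parameters
dx = L / Ng
dt = 0.25 * dx**2 / D             # respects the explicit stability bound alpha <= 0.5

u = np.zeros(Ng)
u[Ng // 2] = 1.0 / dx             # discrete delta function at y = L/2

for _ in range(200):              # explicit, second-order central-difference marching
    up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbours
    u = u + dt * (-C * (up - um) / (2 * dx) + D * (up - 2 * u + um) / dx**2)
```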
The algorithmic framework developed here is also agnostic to the linearity or otherwise of the PDEs. For instance, if we were to begin by considering the nonlinear case (C=𝐮(x,t)), a preceding linearization such as a Homotopy Analysis and Carlemann or Koopman method <cit.> can be applied to first obtain an approximate, higher dimensional linear system of equations. Such a system can then be solved using the proposed methods.
§ LINEAR COMBINATION OF UNITARIES
The PDEs are first discretized in space and time, using an appropriate finite difference scheme, to obtain a linear system of equations to be solved by the quantum algorithm. A detailed account of this numerical setup is presented in Section <ref> of the Appendix. Computing numerical solutions of the approximate system of equations translates to iterative operations of the form Mb or M^-1b. Here, M and b represent a general finite difference matrix and the instantaneous velocity field, respectively, both constructed classically with an underlying numerical method. These are just the operations that we now intend a quantum computer to perform efficiently.
To do this, we design a hybrid quantum-classical workflow as depicted in figure <ref>.
(a) The first step is to encode the discretized velocity field, 𝐮(x,t), as the amplitudes of an n_g=log_2(N_g) qubit quantum state, |Ψ⟩_u=1/𝒩∑_j=0^N_g-1u_j|j⟩, where 𝒩 is an appropriate normalization constant such that || |Ψ⟩_u ||=1. The specific structure of such a vector (for a general b) would depend on the numerical scheme. This step is performed by a Quantum State Preparation algorithm, a quantum circuit that efficiently encodes the input values onto qubits using a set of two-qubit gates. To achieve quantum advantage it is important that this step is done efficiently, with a complexity less than 𝒪(N_g). It is, in fact, possible to do so if the input states meet certain sparsity requirements, or have a specific functional form <cit.> (the state preparation operator is denoted R from here on). Here, since we use a delta function as our initial condition, such a state can be easily prepared with a single NOT gate, thus lowering the onus on state preparation.
(b) The next step is to apply the finite difference operator M on the prepared state as M|Ψ⟩ (or iteratively as M^τ|Ψ⟩ to march τ time steps). This is a non-trivial task, since M is neither unitary nor hermitian in general.
To make this possible, we invoke the concept of a linear combination of unitary operators. With this, an arbitrary matrix may be approximated by a weighted sum of unitary operators, which themselves can be broken down into fundamental quantum gates. For this, one could use different algorithms proposed earlier <cit.>. In this work, however, we redesign a strategy that was previously proposed in the context of simulating non-unitary and non-hermitian quantum systems <cit.> into a PDE solver algorithm. This redesigned algorithm requires only 2, or at most 4, controlled unitaries for the linear unitary decomposition, reducing the circuit depths drastically, making the overall algorithm more amenable to use on near-term devices.
(c) The solutions can then be measured for storage and post-processing classically; however, this would diminish the quantum advantage, since measuring the state is an 𝒪(N) operation. Instead, one could also perform quantum post processing to compute linear or nonlinear observables of the flow field, with the quantum computer outputting a single real value that can measured efficiently. We discuss these in more detail in Section <ref>. The following discussion will describe the proposed decomposition in terms of a linear combination of unitaries.
(1) Four unitaries: A non-unitary matrix M can be decomposed into symmetric and anti-symmetric matrices, S and A, respectively, as
S = (1/2)(M+M^†) and A = (1/2)(M-M^†),
where M = S+A. Each of these matrices can be further decomposed exactly as
S = lim_ϵ→ 0i/2ϵ(e^-iϵ S -e^iϵ S)
A = lim_ϵ→ 01/2ϵ(e^ϵ A-e^-ϵ A),
where ϵ is an expansion parameter. It can be easily shown that e^±iϵS and e^±ϵA are both unitary matrices. We shall now define these operators as U_0 = ie^-iϵ S, U_1 = -ie^iϵ S, U_2 = e^ϵ A and U_3 = -e^-ϵ A, and express M exactly as a weighted sum of purely unitary operators given by
M = ∑_{k=0}^{3}β_k U_k = lim_{ϵ→0} (1/2ϵ)(U_0+U_1+U_2+U_3). However, in practice, we approximate M by choosing a small enough ϵ, resulting in a decomposition with just four unitaries given by
M̃ = 1/2ϵ(U_0+U_1+U_2+U_3).
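As a quick numerical sanity check of this four-unitary decomposition (a sketch with NumPy/SciPy; the test matrix and ϵ are arbitrary illustrative choices), one can verify that the reconstruction error is 𝒪(ϵ^2):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))           # arbitrary real, non-symmetric test matrix
S, A = 0.5 * (M + M.T), 0.5 * (M - M.T)   # symmetric and anti-symmetric parts

eps = 1e-2
U0, U1 = 1j * expm(-1j * eps * S), -1j * expm(1j * eps * S)
U2, U3 = expm(eps * A), -expm(-eps * A)   # all four matrices are unitary

M_tilde = (U0 + U1 + U2 + U3) / (2 * eps)
print(np.linalg.norm(M_tilde - M))        # O(eps^2) reconstruction error
```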
Since the goal is to be able to simulate our algorithms on NISQ and near-term devices, it is desired to have a reduction in circuit depth. We now show that we can go a step further to reduce this decomposition to only two unitaries.
(2) Two unitaries: We can attain this reduction by trading in one extra qubit. For this, consider a simple hermitian dilation of the original matrix M given by
M̂ = [ 0 M; M^† 0 ].
Since M_ij∈ℝ, the matrix M̂ is symmetric, with zero anti-symmetric part. This leaves us with a symmetric, Hermitian M̂ that can be treated as a Hamiltonian and decomposed into just two unitaries as
M̂ = lim_{ϵ→0} (i/2ϵ)(e^-iϵM̂ - e^iϵM̂) ≈ (1/2ϵ)(Û_0+Û_1), with Û_0 = ie^-iϵM̂ and Û_1 = -ie^iϵM̂ for small but finite ϵ.
However, to accommodate this reduction, the size of the input vector would need to be doubled, b̂=[0,b], where one half of the dilated vector is padded with zeros. This doubling requires only one additional qubit. From here on, let K denote the number of unitaries in the decomposition (either two or four). We now examine how to implement the decomposition as a quantum circuit.
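Before turning to the circuit, the two-unitary route can be checked numerically in the same way (a sketch; the test matrix and ϵ are again illustrative):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M, eps = rng.standard_normal((8, 8)), 1e-2
Z = np.zeros_like(M)
M_hat = np.block([[Z, M], [M.T, Z]])      # Hermitian dilation of M

U0 = 1j * expm(-1j * eps * M_hat)         # the only two unitaries needed
U1 = -1j * expm(1j * eps * M_hat)
M_hat_approx = (U0 + U1) / (2 * eps)

b_hat = np.concatenate([np.zeros(8), np.ones(8)])   # dilated input [0, b]
print(np.linalg.norm(M_hat_approx @ b_hat - M_hat @ b_hat))   # O(eps^2)
```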
The basic circuit requires two quantum registers, both set to 0 initially: (1) |Ψ⟩_u – to store the state vector that is to be operated on, and (2) |Ψ⟩_a – a register with a total of n_a=log_2(K) ancillary qubits (here, n_a∈{1,2}). |Ψ⟩_u is prepared using the operator R which, in our case, is simply a NOT gate to prepare the initial delta function. |Ψ⟩_a, on the other hand, is prepared by an operator V into a superposition state proportional to
|Ψ⟩_a = V|0⟩^⊗ n_a = √(1/β)∑_k√(β_k)|k⟩,
where β = ∑_kβ_k. Since all |β_k| are equal, the ancillary register can be prepared as a uniform superposition state by simply applying Hadamard gates on each qubit of the register. This preparation step is also efficient, since it requires only an 𝒪(1) depth operation. Now, using register |Ψ⟩_a as the control qubits, we apply the LCU unitaries as a series of uniformly controlled operations U_k on |Ψ⟩_u which is represented by the operator, W=∑_k|k⟩⟨ k|⊗ U_k. Then, |Ψ⟩_a is reset to 0 by applying V^†. Finally the ancillary register is measured in the computational basis, yielding a state proportional to
|Ψ⟩ = V|0⟩^⊗ n_a⊗ R|0⟩^⊗ n_g↦ (V^†⊗𝕀) W(|Ψ⟩_a⊗ |Ψ⟩_u)
= |0⟩^n_a(∑_kβ_kU_k)|Ψ⟩_u + |Ψ⟩_⊥
= 1/√(β)|0⟩^n_aM|Ψ⟩_u + |Ψ⟩_⊥.
Here, we drop the normalization constants for brevity; |Ψ⟩_⊥ corresponds to an orthogonal subspace that stores the unwanted remainder of the operation in eq. (<ref>). The post-selected solution subspace is then re-scaled classically by 𝒪(|||Ψ⟩_u||/ϵ) = 𝒪(1/ϵ) to obtain the actual solution. However, this procedure also shows that the circuit for linearly combining the unitaries applies the operator M only probabilistically, and the solution subspace of the quantum state is prepared with a small but finite success probability p_succ. Since we are interested in applying such a decomposition for τ time steps, p_succ would decay as a function f(2^-K,ϵ^-1,τ,|||Ψ⟩_τ||^-1). The smaller the p_succ, the larger the number of repeated circuit simulations (query complexity or shots) required to measure or sample the solution subspace accurately. However, we design the time marching quantum circuits such that the overall query complexity is still kept near optimal, so that the required number of shots does not blow up exponentially. For instance, we use a Richardson extrapolation strategy (outlined in Section <ref>), with which we can do simulations even when ϵ∼𝒪(1).
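A minimal Qiskit rendering of this one-ancilla, two-unitary circuit is sketched below. The 4×4 M̂ is a toy Hermitian dilation chosen only for illustration, and the UnitaryGate-based construction stands in for the transpilation or Hamiltonian-simulation routines discussed next; it is not the production decomposition.

```python
import numpy as np
from scipy.linalg import expm
from qiskit import QuantumCircuit
from qiskit.circuit.library import UnitaryGate
from qiskit.quantum_info import Statevector

eps = 0.1
M_hat = np.array([[0, 0, 1.0, 0.2],   # toy Hermitian dilation of M = [[1, 0.2], [0.2, 1]]
                  [0, 0, 0.2, 1.0],
                  [1.0, 0.2, 0, 0],
                  [0.2, 1.0, 0, 0]])

U0 = UnitaryGate(1j * expm(-1j * eps * M_hat))
U1 = UnitaryGate(-1j * expm(1j * eps * M_hat))

qc = QuantumCircuit(3)                # qubits 0,1: state register; qubit 2: ancilla
qc.x(0)                               # delta-function-like input at index 1
qc.h(2)                               # V: uniform superposition on the ancilla
qc.append(U0.control(1, ctrl_state=0), [2, 0, 1])   # apply U0 when ancilla = |0>
qc.append(U1.control(1, ctrl_state=1), [2, 0, 1])   # apply U1 when ancilla = |1>
qc.h(2)                               # V^dagger

psi = Statevector(qc)
sol = psi.data[:4] / eps              # post-select ancilla = |0>, rescale by 1/eps
print(np.round(sol.real, 3))          # ~ M_hat @ input = (0, 0, 0.2, 1)
```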
Finally, to translate these operations outlined above into quantum circuits that work on simulators or real hardware, one would need a further decomposition of the above operators into fundamental 1- and 2-qubit gates. This can be done in one of the following ways.
(a) Direct Approach – This involves decomposing the controlled multi-qubit unitaries into standard two-level unitaries, which in turn require about 𝒪(N^2) 2-qubit gates <cit.>. Such a decomposition results in deep circuits that might be amenable only on future fault-tolerant devices. Although the chances of achieving a quantum advantage from such a decomposition are bleak to none at present, one could still attempt to use circuit parallelization strategies, proposed in Section <ref>, to ameliorate the large circuit depths. Additionally, since the current decomposition requires only two controlled-unitaries, this reduces the overall circuit depth further, compared to previously proposed unitary methods.
(b) QISKIT Transpile – IBM's quantum simulators offer an optimized transpile functionality (a culmination of various underlying algorithms) that decomposes multi-qubit unitaries into single and two qubit gates, picked from a fixed basis of gates available. These transpilations offer multiple levels of optimization with which the circuit depths can be further shortened. This also allows one to design circuits by considering a specific qubit topology on a quantum processor, to extract the best performance. The transpilation feature is quite robust if not the most efficient. We use this feature to implement our linear unitary circuits to conduct real IBM hardware experiments and perform noisy simulations with Qiskit Aer, the results of which are described in Section <ref>.
(c) Hamiltonian Simulation – To analyze theoretically the asymptotic gate complexity scaling of the proposed algorithms (in terms of 1- and 2-qubit gates), we assume access to a black-box with simulations based on Hamiltonian algorithm (described in <cit.> (see Lemma 10) and <cit.> (see Sec. 2.1, Lemma 8 and proof of Theorem 3 in Sec. 3.1)), to efficiently implement the controlled unitaries. In our case, this translates to implementing a Hamiltonian simulation of unitaries of the form e^± iϵŜ (for K=2) and its powers. For an s-sparse matrix M̂, of size N× N, scaled such that ||·||≤ 1, the gate complexity to implement controlled unitaries of the form e^± iϵŜ, up to an admissible error of ε_U, would require
G_U = 𝒪((sϵ+1) (log N + log^2.5(ϵ/ε_U))log(ϵ/ε_U))
one- and two-qubit gates <cit.>.
From here on, G_U represents the above expression. We remark here that the sparsity is s=d+2, where d is the order of the finite central-difference scheme used. Even if we pick an extremely accurate scheme with the order d=8, say, the sparsity would at most be s=10, thus contributing only a small pre-factor to the above asymptotic complexity.
§ TIME MARCHING COMPACT QUANTUM CIRCUITS
As already mentioned, a time marching numerical scheme can be translated into one of the following operations: 𝐮(𝐱,t+1)=A𝐮(𝐱,t) or =A^-1𝐮(𝐱,t), which are called explicit and implicit schemes, respectively (see Section <ref> of the Appendix). For the present discussion, the general matrix M used before will be replaced by A, which now represents a specific numerical setup. We wish to use the framework of linear combination of unitaries, developed in the previous section, to perform efficient time marching simulations, using the above two matrix operations. We now introduce six methods of constructing quantum circuits to perform such a simulation. We shall refer to these as Time Marching Compact Quantum Circuits (TMCQC1-6) and outline their designs. In the discussion immediately below, we make an implicit assumption—which we discuss in more detail towards the end of this section—that the query complexity required to reconstruct the final solution contributes a constant pre-factor to the overall complexity of each method, thus making comparisons between gate complexities and classical time complexities more meaningful.
§.§ TMCQC1 - Explicit Expansion Circuit
With explicit time marching (see Section <ref> of the Appendix), integrating the system for τ time steps requires the application of matrix-vector multiplication operations of the form (𝐀_E)^τ. In TMCQC1, such an operator is approximated either by two or four unitaries by naively computing the multinomial expansion given by
(we drop the prefactors of 1/(2ϵ) for brevity),
(𝐀_E)^τ = (U_0+U_1+U_2+U_3)^τ or = (Û_0+Û_1)^τ
= ∑_{|v|=τ} \binom{τ}{v} U^v ≡ ∑_{k=0}^{N_T-1} Ũ_k,
where v=(v_1,⋯,v_K), \binom{τ}{v} denotes the multinomial coefficient, and U^v = U_0^{v_1} U_1^{v_2} ⋯ U_{K-1}^{v_K}.
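For the two-unitary case this expansion is particularly simple: Û_0 and Û_1 are both functions of the same matrix M̂ and therefore commute, so the multinomial sum reduces to an ordinary binomial one, (Û_0+Û_1)^τ = ∑_{j=0}^{τ} \binom{τ}{j} Û_0^{j} Û_1^{τ-j}, with N_T = τ+1 = 𝒪(τ) distinct terms, consistent with the count below.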
Gate complexity - This multinomial expansion with K monomials has a total of N_T = \binom{τ+K-1}{K-1} terms. When K=2 or 4, N_T = 𝒪(τ) or 𝒪(τ^3), respectively. For the rest of the section, we shall use K=2 for clarity. Now each term in the above expansion is in turn a product of at most 𝒪(τ) unitaries. Therefore the overall expansion has at most 𝒪(τ^2) unitaries in total. We shall refer to this as the LCU depth. From the previous section we know that each of these unitaries can be implemented with a gate complexity of G_U at best, thus giving a total complexity of 𝒪(G_Uτ^K)≡𝒪(log(N_g),polylog(ϵ/ε_U),sϵ,τ^2). Now, comparing this with the classical complexity of explicit time marching via simple matrix-vector multiplications, given by 𝒪(N_gsτ) <cit.>, we may infer the following. The quantum algorithm is exponentially better in N_g, but worse in τ by 𝒪(τ). This implies that, to compensate for the τ scaling, one would need to set the CFL stability criterion such that N_g≫τ; this ensures that the advantage gained by the N_g scaling outperforms (or at least matches the loss from) the poorer τ scaling. In any case, it suggests that only short time horizons can be simulated this way. Further, if we had access to (say) 𝒪(τ) parallel circuits, we could simulate the unitaries in parallel (as discussed later in Sec. <ref>) to lower the gate complexity, thus ameliorating the τ scaling.
Qubit complexity - This approach would require n_a = log_2(τ) ancillary qubits, n_g = log_2(N_g) qubits to store the velocity field, giving a total of n = log_2(N_gτ) qubits.
§.§ TMCQC2 - Explicit Serial Circuit
The previous approach can be improved by trading in a few extra qubits, while still keeping the total qubit complexity logarithmic in the problem size. An alternative way to implement the same operation as earlier is to concatenate the LCU decomposition of A_E serially, τ times. However, to keep track of the time steps, we would need an additional register, which we shall refer to as clock register, comprising n_c=log_2(τ) qubits.
The quantum circuit for this is shown in figure <ref>(a) (for K=4). A step-by-step description of this circuit operation is outlined in Section <ref> of the Appendix.
Gate complexity - It is easy to see that a serial circuit such as this would lead to an LCU depth of 𝒪(τ), with a total gate complexity of 𝒪(G_Uτ) ≡𝒪(log(N_g),polylog(ϵ/ε_U),sϵ,τ). Comparing this with the classical scaling of 𝒪(N_gsτ), one is led to conclude that the quantum algorithm is exponentially better in N_g and comparable in τ and s, thus offering a potential advantage for any τ≥ 1.
Qubit complexity - It is clear that the number of qubits required in this case is n=n_g+n_a+n_c=log_2(N_gτ)+2.
§.§ TMCQC3 - Implicit Expansion Circuit
We now consider an implicit time marching scheme, which requires one to perform matrix inversion operations. Since we are now faced with an inversion problem, decomposing by linear combination of unitaries is more involved. Some approaches proposed earlier include approximating the inverse function f(x) = 1/x in terms of either Fourier or Chebyshev series <cit.>. The terms in these series are then implemented as unitary gates. Alternatively, we explore here a simpler yet efficient approach by truncating a Neumann series expansion to approximate a matrix inverse operation of the form 𝐀_I=(𝐈 - α𝐀)^-1. The series approximation up to P terms can be written as (𝐈 - α𝐀)^-1 ≈ ∑_{p=0}^{P}(α𝐀)^p ≡ 𝐀̃_I, where 𝐀̃_I represents the approximate inverse operator.
The accuracy of this approximation is given by the truncation error ε_N, which clearly depends on the number of terms retained in the series. A detailed account of this method is presented in Section <ref> of the Appendix, where Lemma 1 proves this result: Given an α such that the spectral radius ρ(α 𝐀)<1 (to ensure convergence), and if P_min is the number of terms for a truncation error of ε_N, the error is bounded from above as
ε_N≤𝒪(||𝐀||^P_min) = 𝒪((κ-1)^P_min).
From this, it can be easily shown that the number of terms required is
P_min = 𝒪( ⌈log(1/ε_N)/log(1/||𝐀||)⌉) =𝒪( ⌈log(1/ε_N)/log(1/(κ-1))⌉).
From Proposition 2 of the Appendix, it is also clear that the error in the velocity solution obtained from such a truncated approximation is similarly bounded. Each term of the truncated series is now further decomposed, using the linear combination unitary method. The final set of unitaries is obtained by computing the multinomial expansion of the new truncated series, while marching forward τ time steps.
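The truncation behaviour is easy to verify classically; the following sketch (NumPy, with an illustrative matrix and α) compares the P-term partial sum against the exact inverse and shows the geometric error decay of Lemma 1:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((16, 16))
A /= 2 * np.linalg.norm(A, 2)             # scale so the series converges
alpha = 0.5                                # spectral radius of alpha*A is well below 1

exact = np.linalg.inv(np.eye(16) - alpha * A)
approx, term = np.zeros((16, 16)), np.eye(16)
for p in range(12):                        # P = 12 terms of the Neumann series
    approx += term
    term = term @ (alpha * A)
    print(p, np.linalg.norm(approx - exact, 2))   # error decays like ||alpha*A||^(p+1)
```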
Gate complexity - To compute the gate complexity, first consider a pth order term in the truncated series. If this term is written in terms of the unitaries decomposition, it will produce \binom{p+K-1}{K-1} terms, which is 𝒪(p^{K-1}). Recall that each of these newly produced terms is in fact a product of at most p unitaries, thus making the depth 𝒪(p^K). Next, we sum the series up to P_min terms, giving a total of (for K=2) 𝒪(P_min^3).
Finally, since we need to perform time marching, the series obtained above is raised to the exponent τ, expanding which gives a total of N_T = \binom{τ+P_min^3-1}{P_min^3-1} = 𝒪(τ^{P_min^3}) terms. Therefore the overall gate complexity is 𝒪(G_U τ^{P_min^3}). The algorithm thus scales as a high power of τ (exponential in P_min^3), and is clearly worse (in terms of τ) than the corresponding classical complexity, 𝒪(N_g s τ κ log(1/ε)). Again, although the algorithm is still logarithmic in N_g, it offers the poorest complexity scaling compared to all other TMCQCs presented here.
Qubit complexity - We note that this algorithm requires n_g=log_2(N_g) qubits for storing the grid, n_a=P^3_minlog_2(τ) ancilla qubits for the LCU decomposition, thus giving a total qubit complexity of 𝒪(P^3_minlog(N_gτ)).
Although we will show that the current method TMCQC3 offers the poorest complexity scaling among the algorithms discussed, the benefit of the Truncated Neumann Series approach will be reaped maximally by the algorithms to be described next.
§.§ TMCQC4 - Implicit Serial Circuit
We can improve the previous algorithm by using an additional clock register. The set of unitaries obtained from the truncated Neumann series is now concatenated serially τ times to perform time marching, with the clock register tracking the time step count.
Gate complexity - The truncated series has a total of 𝒪(P^3_min) terms, so the total depth can be estimated to be 𝒪(P^3_minτ). Thus it can be seen readily that the total gate complexity is 𝒪(sϵ,log(N_gτ),polylog(ϵ/ε_U,1/ε_N),log^-3((κ-1)^-1)).
Comparing this with the classical complexity of 𝒪(N_g s τ κ log(1/ε)), one finds that the algorithm (without any parallelization) has a comparable scaling in τ, while it is exponentially better than classical in N_g and κ.
Qubit complexity - It can be seen easily that this approach has the total qubit complexity of 𝒪(log(N_gτ P_min)).
§.§ TMCQC5,6 - Explicit & Implicit One-shot Circuits
Now two final quantum circuit designs are described; as we shall show, they offer the most efficient complexity scaling so far. In contrast to previous methods, these approaches construct a single, large matrix inversion problem (of size N_gτ× N_gτ), solving which provides at once the solutions for all time steps, hence the term one-shot. (This should not be confused with the number of shots required to sample the wavefunctions.) This method can be constructed with both explicit (𝐀_EOũ = 𝐛_EO) or implicit (𝐀_IOũ = 𝐛_IO) schemes; details of these constructions are outlined in the Appendix. In order to approximate the inverse, we again use the truncated Neumann series as in TMCQC3,4. Since in this case the solution of the matrix inversion includes the solutions at all time steps, there is no need for any further action. The terms of the truncated series can now be readily written in terms of the unitaries decomposition. Furthermore, since the method avoids an iterative/serial style time marching, the diminution of success probability is also less severe than in previous methods.
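One common way to realize such a one-shot explicit system is a block lower-bidiagonal matrix enforcing u_t = A_E u_{t-1} for all time steps simultaneously; the sketch below is an illustrative construction of this kind and may differ in detail from the one in the Appendix.

```python
import numpy as np

def one_shot_system(A_E, u0, tau):
    """Assemble an illustrative (tau+1)Ng x (tau+1)Ng one-shot explicit system.

    Block row t enforces u_t - A_E u_{t-1} = 0; the first block row fixes u_0.
    NOTE: an assumed construction, possibly differing from the paper's Appendix.
    """
    Ng = A_E.shape[0]
    n = (tau + 1) * Ng
    A_EO = np.eye(n)
    for t in range(1, tau + 1):
        A_EO[t * Ng:(t + 1) * Ng, (t - 1) * Ng:t * Ng] = -A_E
    b_EO = np.zeros(n)
    b_EO[:Ng] = u0
    return A_EO, b_EO

# solving A_EO u = b_EO returns the stacked history [u_0, u_1, ..., u_tau]
```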
Gate complexity - This is given by simply computing the terms obtained by just applying the unitaries decomposition to each term of the truncated series. The depth is therefore just 𝒪(P^3_min), which yields a total gate complexity of 𝒪(G_UP^3_min) ≡𝒪(sϵ,log(N_gτ),polylog(ϵ/ε_U,1/ε_N),log^-3((κ-1)^-1)). This scaling is nearly optimal in every parameter except s, which, as already noted earlier, is at most s≤10. Thus the method is exponentially better than its classical counterpart in terms of all system parameters.
Qubit complexity - The input vector is now a larger dimensional state of size N_gτ, which requires n_g=log_2(N_gτ) qubits. Further, to apply the unitaries decomposition, we require an ancilla register of size n_a=3log_2(P_min) thus having a total qubit complexity of 𝒪(log_2(τ N_gP_min)).
The complexities of all circuit designs discussed above are summarized in Table <ref>.
§.§ Query complexity and success probability
The gate complexities outlined above correspond to a single application of the quantum circuit. However, the solution thus prepared has a small, yet finite success probability p_succ. This implies that one would need to query the circuit repeatedly (in other words, repeat the experiment for N_s shots) to reconstruct the solution state by repeated sampling. The overall time complexity of the simulation would therefore be the product of the gate and query complexities. To compute this, we need to estimate N_s; for this, we first consider eq. (<ref>), which represents the fundamental action of the linear combination of unitaries. Before rescaling the solutions, simply applying the unitaries circuit prepares a state proportional to 2ϵ A|Ψ⟩.
From Proposition 1 (see Appendix), without loss of generality, the matrix A can always be scaled by a small constant δ≲𝒪(1/ϵ), and the solutions can be simply scaled back after measurements. Therefore the solution subspace produced from eq. (<ref>) can be said to be proportional to (2ϵδ/√(β))A|Ψ⟩.
This implies that a single application of the linear unitaries operation produces the desired state with the success probability
p_succ∼(2ϵδ|| A|Ψ⟩||/√(β))^2.
The above value is for a single linear combination of unitaries block. In essence, however, each TMCQC provides a different unitary block-encoding of the corresponding A to perform time marching. The p_succ of the solution at t=τΔ t thus varies for each TMCQC. The specific dependence arises from the difference in the number of times the unitaries oracle is queried and the corresponding depth, which we shall represent from now on as G_L. Comparing all the TMCQCs, we can easily identify that the decay in p_succ is most severe in TMCQC2 and TMCQC4, since the unitaries are applied in series. Therefore, for τ such applications, the RHS of eq. (<ref>) will be raised to the power τ; for generality, we represent this exponent by G_L. A single application of the two-unitary oracle involves preparing a superposition state by Hadamard gates acting on ancilla qubits. This implies that β = 2. Since the p_succ for every time step is independent, the probability tends to decay exponentially <cit.>, compounding over each of the τ steps. Apart from this, the overall success probability also depends on the ratio of the norms of the initial and final time solution states. Thus, we can rewrite p_succ as
p_succ = (2ϵδ|| A||/2)^2G_L∏_k=1^G_L|| |Ψ⟩_k ||^2/|| |Ψ⟩_k-1 ||^2
= (ϵδ|| A||)^2G_L|| |Ψ⟩_τ ||^2/|| |Ψ⟩_0 ||^2 .
This implies that it would require at least
N_s = (1/ϵδ|| A||)^2G_L|| |Ψ⟩_0 ||^2/|| |Ψ⟩_τ ||^2≈𝒪(|| |Ψ⟩_0 ||^2/|| |Ψ⟩_τ ||^2)
number of shots to recover the solution, up to a constant pre-factor η^{2G_L} = (1/ϵδ|| A||)^{2G_L}. Since we can readily choose ϵ and δ such that ϵδ|| A||≳ 1 (see Proposition 1 of the Appendix and its ensuing discussion), η∼𝒪(1) is not an unfavorably large constant; this takes the overall complexity closer to optimal.
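A toy numerical illustration of why η ≲ 1 matters (a sketch; the parameter values are arbitrary):

```python
# growth of the shot-count prefactor eta**(2*G_L) for a few illustrative values
for eta in (0.95, 1.00, 1.05):       # eta = 1/(eps * delta * ||A||)
    for G_L in (10, 100):            # LCU depth, e.g. the number of serial steps
        print(f"eta={eta}, G_L={G_L}: prefactor = {eta**(2*G_L):.3e}")
```

For η = 1.05 and G_L = 100 the prefactor already exceeds 10^4, whereas η ≤ 1 keeps it benign.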
We should also note that the complexity discussed above can be further improved quadratically by applying a Grover-like amplitude amplification, as shown in <cit.>. However, we continue our discussion without it, to emphasize that the corresponding results are still efficient in some strict sense, even before applying amplitude amplification. This optimistic nature of the discussion can however be quickly diminished when we begin to consider the effects of noise. In practice (as shown later in Section <ref>), to simulate these circuits on currently available IBMQ machines, we need at least N_s=2^15 shots to be able to accurately recover the solution. Nevertheless, the query complexity presented here relates to the asymptotic complexity scaling, which suggests that, when reasonably large problem sizes are solved with the proposed methods, the required shot count will not significantly diminish any available quantum advantage.
In summary, it is ideal to ensure that η≲ 1, otherwise it would lead to a large pre-factor 𝒪(2^log(τ)/log(κ-1)) at best, or 𝒪(2^τ^2) at worst. While ensuring the query complexity to be controllable, the following three situations need special attention.
* Stability criteria – Flow problems solved by implicit schemes (TMCQC3,4,6) are not constrained by any stability criteria, which therefore allows more flexibility in choosing the parameters. However, explicit schemes as in TMCQC1,2,5 have to satisfy a stability criterion. For the specific case of the advection-diffusion problem being discussed, this is equivalent to α = DΔ t/(Δ x)^2≤ 0.5 (a short numerical check of this bound is sketched after this list). This criterion requires the system parameters U,N_g,Δ t to be chosen to satisfy δ|| A || < 1 and δϵ|| A ||≳ 1. For the second-order, central-difference, explicit scheme, || A|| = 1. In this case, just ensuring δϵ≳ 1 is sufficient.
* Noise – As already noted, it is important to choose a large enough ϵ to make the solution distinguishable from noise. At the same time, to maintain the accuracy of the solution even for large ϵ values, we can employ Richardson extrapolation to approach the limit ϵ→ 0 in a deferred manner. The accuracy of the extrapolated solutions can be 𝒪(ϵ^4) or better. Viewed alternatively, we can lower the shot count N_s significantly for a given accuracy.
* Steady state and decaying flows – The above two considerations alone are insufficient to maintain a small N_s. We also need to ensure that the steady state limit
lim_τ→∞|| |Ψ⟩_0 ||/|| |Ψ⟩_τ ||,
does not diverge. If the time marching operator is norm-preserving (or the flow is statistically steady), then trivially || |Ψ⟩_τ || = || |Ψ⟩_0 ||. The alternative scenario of a norm-increasing flow is better, where || |Ψ⟩_τ ||≥|| |Ψ⟩_0 ||, corresponding to flows that experience a constantly increasing net influx of energy. Both cases are possible in practice if we consider a constant forcing term in our governing PDEs. However, the more restrictive case is the viscous, dissipative flow whose norm decays with time; in this case the steady state of the flow corresponds to vanishingly small velocity fields, due to constant dissipation of energy from the system. The advection-diffusion problem considered in this work is not as severe, as can be seen from the analytical solution <cit.>. The steady state corresponds to a constant, uniform velocity field with a small, yet non-zero norm (in this work, this ratio is 1/2). For flows whose steady-state norms are identically zero, the query complexity grows as 𝒪(|| |Ψ⟩_τ ||^-1). In such cases, there could be two alternatives (which should be studied in more detail): (1) The problem of the growing pre-factor η can be addressed by choosing a smaller ϵ such that η^G_L∼𝒪(|| |Ψ⟩_τ ||^-1). (2) When this choice is not possible, one is limited to solving the problem for only short time horizons, over which the norm does not decay too steeply.
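As referenced in the stability item above, a short numerical check of the explicit bound (a sketch with illustrative numbers):

```python
# explicit-scheme stability: alpha = D * dt / dx**2 <= 0.5 (illustrative values)
D, L, Ng = 1.0, 1.0, 64
dx = L / Ng
dt_max = 0.5 * dx**2 / D
print(f"dx = {dx:.4f}, largest stable dt = {dt_max:.3e}")
```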
The scenario described so far treats the growth in N_s as being most severe. If, however, we consider TMCQC5,6, there is no compounding decay in terms of ϵ. The only contributing factor would be the normalization stemming from the linear decomposition in terms of unitaries. However, we show that the LCU depth is 𝒪((log(1/ε_N)/log(1/(κ-1)))^3). For large problem sizes, the corresponding pre-factor in the query complexity would not be critical. In any case, from the discussion so far, it is clear that the overall time complexity of the algorithms can be pushed closer to optimal.
§ END-TO-END ALGORITHM AND NEAR-TERM STRATEGIES
A hybrid quantum-classical algorithm requires one to consider: (A) a suitable quantum state preparation, (B) an optimal circuit design for the quantum solver, (C) efficient solution read-out and post processing, and (D) effects of noise and decoherence on real quantum hardware. These factors tend to diminish expected quantum advantage. In this section we outline strategies that could potentially render the overall algorithm to be an end-to-end type, as well as one that can be simulated on noisy near-term quantum devices, while retaining some quantum advantage. A schematic of these methods and typical circuit designs are given in figure <ref>(b).
(1) Quantum State Preparation - The first step of a hybrid QCFD algorithm is to encode the initial velocity field into qubit states. In general, for arbitrary states of size N, the complexity of state preparation would scale linearly with N. This scaling compromises the quantum speed-up that might be achieved from the PDE solver algorithm. However, specific examples might not need such exponential circuit depths. For an input state that has an integrable, convex functional form, the initial state can be prepared with sub-exponential circuit depths as shown in <cit.>. For sparse states, one can prepare the initial state more efficiently with sub-exponential circuit depths <cit.>. Certain cases might, however, require one to trade additional ancillary qubits for shorter circuit depths. Further, since the algorithms proposed here require only a single copy of the initial state for every circuit execution, we can begin with a simple state that is easy to prepare. In fact, most problems in fluid dynamics offer great flexibility in choosing initial conditions, which could be a uniform flow (constant function) that would require a state preparation circuit of just a single layer of Hadamard gates, or a delta function (the case considered here), preparing which requires a single NOT gate. Many times, random initial conditions are admissible or necessary. More recent efforts demonstrate efficient ways to encode polynomial functions as well <cit.>.
(2) Circuit Parallelization and Reset gates - Circuit depths that can be simulated on current and near-term hardware are rather limited. The limitation arises from the finite, short coherence life-spans of the qubits. Longer circuit executions tend to run into errors based on decoherence. In order to make an attempt at fitting the TMCQCs on such hardware, we propose here a few strategies to parallelize the circuit execution, to both reduce the effective circuit depths and the execution times.
(A) Shot parallelization - Generally the shot count available on real quantum hardware (here, IBMQ) is limited to about ∼ 2^15 to 2^19 shots. If we consider a simulation requiring a large shot-count of about (say) N_s=2^20 shots or higher, such an execution would not be possible with current hardware.
A simple workaround is to execute multiple quantum circuits in parallel. For the current example, naively, we can perform two parallel circuit executions with 2^19 shots each. When possible, we could go even further by executing several low shot-count circuits in parallel, with which we can increase the total shot count while improving the sampling accuracy of the quantum state. To fit our simulations using only the available shot count, we could instead use the Richardson extrapolation to lower the required N_s. This is discussed below in this section.
(B) LCU parallelization -
Let us consider a single time step simulation using either two or four unitary block encoding.
* Naive Parallel unitaries - We execute each unitary U_i (without any controls) in parallel on 2 (or 4) separate circuits, starting with the same initial state. This requires 3n_g qubits in total. When the resulting parallel (partial) solution states have purely real amplitudes, we can perform a straightforward measurement in the computational basis of each partial solution. Following this, these partial solutions can be combined classically to output the final solution. However, if any of the partial solutions have complex amplitudes, a direct extraction of solutions requires expensive tomography to reconstruct the state. Two possibilities exist. In the first case, we combine those parallel unitaries as a single circuit such that the grouping produces real amplitudes for the partial states. From here, we can proceed with measurement as earlier. This new grouping would, of course, increase the depth of the circuit, which is now at some intermediate level of parallelization.
Secondly, if we are interested only in computing the expectation value of a certain observable, instead of the full state, it can be done while still retaining the full parallelization. The expectation value of an operator U_i can be computed accurately by using a Hadamard Test circuit. A direct Hadamard test outputs Re(⟨ψ| U_i|ψ⟩). For the imaginary part Im(⟨ψ| U_i|ψ⟩), a simple modification is done by adding the S^† gate as shown in figure <ref>(b); a minimal sketch of this circuit is given after this list.
If we are given access to m=G_L parallel circuits, every parallel circuit would then be significantly shallower, apart from improving the overall complexity itself. This reduction in circuit depths makes it amenable to near-term quantum devices. For example, when using the Qiskit transpile command to solve an N_g=4 system with a total of 4 qubits as a single circuit, the transpiled depth is ∼1300. If we implement it as 4 parallel circuits by eliminating all control operations and qubits, the depth is 10, about 130 times shallower. In fact, even if we include a full arbitrary state preparation step, which is at most a circuit of 𝒪(10) depth, the reduction in depth obtained from parallelization can easily compensate for an expensive state preparation. Since these shallower circuits are more accurate, with lower effects of noise and decoherence, the total shots required to extract the partial solutions would also drop. We explore this strategy by implementing it on a real IBM quantum device, and present results later in Section <ref>.
* Fanout Parallel unitaries - A more robust parallelization is possible by invoking the idea of fanout quantum circuits <cit.>, though it is hard to realize on near-term devices. Here, the circuit is parallelized within a single entangled quantum circuit (not separate circuits as earlier) at the cost of extra ancilla qubits <cit.>. In such strategies, the state of a single ancilla register prepared with the target state is "basis-type copied" (fanout) to additional ancilla registers by applying CNOT gates as |ψ⟩|0⟩⋯|0⟩→|ψ⟩|ψ⟩⋯|ψ⟩. The unitary operators are then applied in parallel on the copies of the target state |ψ⟩. Especially when the Hamiltonian being simulated has certain properties <cit.>, or when the set of unitaries can be partitioned into Pauli operators, such fanout schemes can be used, with the aid of Clifford circuits <cit.>, to parallelize the overall algorithm.
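As referenced above, a minimal Qiskit sketch of the Hadamard test (the single-qubit U_i is a toy stand-in):

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import UnitaryGate
from qiskit.quantum_info import Statevector

U = UnitaryGate(np.diag([1, 1j]))    # toy single-qubit unitary U_i
qc = QuantumCircuit(2)               # qubit 0: |psi> (here |0>); qubit 1: ancilla
qc.h(1)
# qc.sdg(1)                          # uncomment to obtain Im<psi|U|psi> instead
qc.append(U.control(1), [1, 0])      # controlled-U_i
qc.h(1)

p0, p1 = Statevector(qc).probabilities([1])
print(p0 - p1)                       # Re<psi|U|psi>; equals 1 here since <0|U|0> = 1
```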
(C) Reset gates - Apart from parallelizing the circuits to lower the circuit depth, one can also apply reset gates, now available on IBMQ devices, to lower the width (ancilla qubit complexity) of the quantum circuit.
It is common to have multiple ancillae for controlled unitary rotations <cit.>. Instead of having separate control qubits, after the application of every controlled operation on the target state, the ancilla register can be reset to the |0⟩ state and reused to control the next unitary on the target state, as shown in figure <ref>(b). Such operations can improve the qubit complexity, but care needs to be taken while rescaling the final solution, by accounting for any re-normalized coefficients of the states that were reset, as well as for the breaking of any important entanglement in the circuit. Another advantage of such resets is that a qubit can be reset before it reaches its coherence-time limit; it then begins a fresh coherence span for the next controlled operation, thus lowering potential errors due to decoherence.
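In Qiskit, such a reset-and-reuse pattern is straightforward (a minimal sketch with placeholder controlled operations):

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)    # qubits 0,1: targets; qubit 2: shared ancilla
qc.h(2)
qc.cx(2, 0)               # first controlled operation uses the ancilla
qc.reset(2)               # return the ancilla to |0> within its coherence span
qc.h(2)
qc.cx(2, 1)               # the same ancilla is reused for the next operation
```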
(3) Richardson Extrapolation –
The success probability of the algorithms outlined so far can be enhanced via Richardson extrapolation <cit.>, which offers an elegant way to reduce the required number of shots. Conversely, it can be used to improve the accuracy for a fixed number of total available shots with which to sample the solution. This tool allows us to simulate the unitaries decomposition even for ϵ∼𝒪(1), which is crucial for controlling the query complexity of the overall algorithm, as discussed in the previous section. The concept of the extrapolation is as follows: given an operator U(Γ,ϵ), its output at ϵ→ 0 can be estimated through extrapolation, as shown in <cit.>. From this we can write
U(Γ,0) = (U(Γ,ϵ_1) - γ^2 U(Γ,ϵ_2))/(1-γ^2),
where ϵ_1>ϵ_2 and γ = ϵ_1/ϵ_2 sets the order of extrapolation; the error of the extrapolation scales as 𝒪(ϵ^4). Γ here represents, collectively, any arbitrary set of parameters on which the operator may depend. Higher powers of γ lead to higher-order, more accurate extrapolations. As we shall demonstrate later through simulations, even for ϵ_1 and ϵ_2 close to unity, the solution can be computed accurately through extrapolation. This method can also serve as an aid to possible amplitude amplification procedures <cit.>. The extrapolation procedure can be viewed alternatively as follows. Given a fixed number of shots N_s, we can extrapolate the solutions that were obtained with ϵ_1,2 and N_s shots as
|𝐮⟩_0 = (|𝐮⟩_ϵ_1 - γ^2|𝐮⟩_ϵ_2)/(1-γ^2),
to produce an extrapolated solution of higher accuracy. The procedure can be repeated at higher orders as well, giving more accurate extrapolations. If we compute the gradient of the above expression with respect to γ, we note that it is largest near γ = 1. Therefore γ should be a number close to, but greater than, 1; that is, ϵ_1,2 should be close to each other with ϵ_1>ϵ_2, so that the effect of extrapolation is amplified. Applying this tool in our TMCQC simulations has the benefits of lowering the required N_s for a given accuracy, improving the query complexity, and allowing large-ϵ simulations whose solutions remain distinguishable from the noise on real hardware devices.
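A concrete illustration of the extrapolation (a sketch; the quadratic leading-error model mirrors the 𝒪(ϵ^2) error of the two-unitary decomposition):

```python
def u_of_eps(eps):
    # stand-in for a scalar solution entry with leading O(eps^2) LCU error
    return 1.0 + 0.3 * eps**2 + 0.05 * eps**4

e1, e2 = 1.0, 0.8          # both O(1), with gamma = e1/e2 > 1
gamma = e1 / e2
u0 = (u_of_eps(e1) - gamma**2 * u_of_eps(e2)) / (1 - gamma**2)
print(u0 - 1.0)            # residual is O(eps^4); the O(eps^2) term cancels exactly
```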
(4) Quantum Post Processing - Finally, once the quantum solver stage has prepared the solution state, we can either choose to measure the entire field, which is an 𝒪(N_g) operation that could compromise the available quantum advantage, or compute important functions or expectation values of flow observables, with the quantum computer outputting a single real value (or a few). Measuring this single value (or a few values) protects the quantum advantage by avoiding the tomography of the entire state. We call this Quantum Post Processing (QPP). Such meaningful functions of the field could be either the mean flow field ⟨𝐮⟩ = ∫_0^L u(x,t)dx or the mean gradient field ⟨∂𝐮/∂𝐱⟩ (and its higher orders). The results for the mean flow are shown in figure <ref>(b). The mean gradient can be computed by first using a circuit with linear unitaries to approximate the numerical gradient, followed by a string of Hadamard gates applied on all qubits of the target register to compute the sum, which can then be divided by N_g classically. The more important functions of the field, however, tend to be nonlinear, for instance, the mean viscous dissipation rate given by ε=ν⟨(∂ u/∂ x)^2⟩, where ν is the viscosity. Such nonlinear functions can be computed using the QPP algorithm described in <cit.>. This method also avoids the need for expensive bit-arithmetic circuits to perform nonlinear transformations, although it requires a nominal amount of classical pre-computation, done hybridly, to prepare the quantum circuits. The QPP for computing nonlinear observables consists, briefly, of re-encoding the amplitude-encoded values as bit-encoded values using a fixed number of qubits (n_QPP) that sets the accuracy, as
u_i| i⟩→| u_i⟩.
The target values are now represented by an n_QPP-bit binary approximation. At this point, instead of using bit-arithmetic, since n_QPP is known a priori, and thus also the corresponding binary basis, we can use this knowledge to generate controlled rotation gates corresponding to each basis state. Therefore, using these bit-encoded states as control qubits, we can apply controlled rotations on an ancillary register (initially set to 0), where the rotation angles are classically precomputed as R_y(θ=arccos(f(u_i))), where f(u) is the nonlinear function to be applied on the field or on a coarser subspace of it. This produces a state proportional to
| u_i⟩|0⟩→ f(u_i)| i⟩.
Such a QPP algorithm produces a single and real measurable, which not only offers insight into the flow field but also lowers the measurement complexity, thus protecting to some degree the available quantum advantage. The limitations of querying for a single value (or a few values) are obvious, though many experiments of the pre-digital era did just that.
§ NUMERICAL RESULTS
In this section, we implement the quantum algorithms on simulators as well as a real quantum device. All the TMCQC algorithms proposed in this work share a common feature, namely the two-unitary or four-unitary decomposition. The circuit designs differ only in the total number of unitaries involved in marching τ time steps, as well as in the number of ancillae qubits required. We explore a considerably large parameter space that affects the performance of the algorithms, while also maintaining brevity and completeness of the discussion. To achieve this, we present results from implementing the TMCQC2 algorithm. Although this algorithm does not enjoy the best asymptotic complexity, it is still an ideal representative of the general circuit structure of all other TMCQCs. Another reason for this choice is that the current capacity of the simulators used here, such as QFlowS and Qiskit, limits the maximum size of circuits that can be simulated. The algorithm chosen conforms to these restrictions while giving extra room to explore a wider range of parameters. We evaluate the correctness and performance of the algorithm and disentangle the interplay between noise, sampling accuracy and the accuracy of the algorithm itself, to estimate the relevant quantities needed for time marching simulations on near-term quantum devices.
§.§ QFlowS Simulation
The algorithm must be able to produce physically meaningful and accurate results in the absence of noise or decoherence errors. To this end, we implement a full gate-level simulation of the present flow problem (eq. (<ref>)) on QFlowS <cit.>. QFlowS is an in-house, high-performance quantum simulator, with which we perform ideal state-vector simulations, without noise or decoherence. Building noise models into this simulator is an ongoing effort.
From figure <ref>(a,b), we observe that, qualitatively, the quantum solution captures the analytical solution well in both space and time. The initial condition of the flow is a delta function, as shown by the bright yellow spot in figure <ref>(a,b). In time, the bright peak slowly diffuses while it also slowly advects in the direction x>0. In figure <ref>(c), we compare this solution with the classical numerics, since both of them are plagued by the same errors due to the finite difference approximation. The two solutions agree well. This observation is then bolstered quantitatively by computing the mean-squared error (MSE), as shown in figure <ref>(d) for different values of ϵ. It can be clearly seen that the MSE converges with time, and the accuracy improves for decreasing values of ϵ, as expected. The more gradual decay in MSE for larger ϵ can be attributed to two reasons. First, the inaccurate unitary representation of M leads to faster (numerical) diffusion, which leads to the faster decay of the solution to zero, as shown by the MSE asymptoting with τ. Second, the finite resolution in x and t leads to inaccuracies at every step, which accrue quickly, impeding the convergence of the MSE. This issue, however, plagues even the smaller ϵ cases. Thus, in practice, for error convergence in serial time marching algorithms (TMCQC1-4), it is crucial to have large enough resolution to keep in check the errors accruing from large-ϵ approximations, device noise and inaccurate quantum state tomography. Apart from better resolution, simulating with larger values of ϵ is important to lift the solution above the noise level of the hardware and to lower the query complexity. To do this, we now invoke the Richardson extrapolation to lower the shot count required to sample the quantum solution or, conversely, to improve the accuracy of the solution given a fixed number of shots. We first present results using the latter interpretation; the former is discussed later in this section. We extrapolate the solutions computed at larger ϵ values (ϵ∼𝒪(1)) to estimate the solution at ϵ→ 0 in a deferred manner, as described in the previous section. The solutions computed this way are shown in figure <ref>(a). Here, we also capture the effect of increasing grid resolution (up to 128 grid points, with 11 total qubits). The extrapolated quantum solutions show excellent agreement with the exact solutions, and the accuracy improves with increasing grid resolution, as summarized in Table <ref>(c). Furthermore, we show in figure <ref>(b) the applicability and effectiveness of the extrapolation technique extended to even the quantum post processing results, where one measures a function of the solution instead of the entire field itself.
§.§ IBM Quantum Device Experiment
Here we evaluate the implementation of the unitaries decomposition on the IBM quantum platform, which offers quantum processors with up to 127 qubits. We use the IBMQ Cairo device with 27 qubits, whose circuit topology is shown in figure <ref>(a) and whose specifications (at the time the experiment was performed) are summarized in figure <ref>(b). Since the devices are generally calibrated almost every two hours, the exact experimental results might differ at different times. Apart from the gate and measurement errors, one also needs to account for the times T1 and T2, which indicate the lifetime of the qubits, beyond which decoherence sets in, leading to another source of error. To avoid this, the depth of the circuit needs to be sufficiently shallow for the execution time per shot to satisfy T_s < min{T_1,T_2}, whose values are provided in figure <ref>(b). Considering these error rates and coherence times, the circuit sizes feasible on current devices (within a reasonable error threshold) are rather limited. For this reason, we implement a parallelized, single time step circuit (for N_g=4 and N_g=8), by decomposing it into basic gates with Qiskit transpile. We show that our unitaries can indeed be simulated with reasonable accuracy on a real quantum device, bolstering the viability of the current algorithm to be scaled up, given adequate resources on near-term devices. This ability also opens a promising avenue to actually perform multi-step, time-marching simulations of PDEs on real quantum devices. We have not used any error correction codes or sophisticated qubit topology optimization. The accuracy of the current results even without these additional features shows that further improvement is possible. These tools become mandatory when simulating larger problem sizes.
To implement our algorithm, we employ the unitaries parallelization and the shot parallelization techniques described earlier, to make each parallel circuit as shallow as possible and to minimize noise and decoherence effects. To decompose the unitaries in each parallel circuit, we use Qiskit transpilation at two-level optimization, generating a circuit consisting of the basis gates available on IBMQ Cairo, listed in figure <ref>(b). It is worth noting that, apart from gate-count reduction, circuits can be optimized further by specifying coupling maps (although this is not used here). This allows one to specify a custom qubit topology by picking qubits and interconnects with the least instantaneous errors. Further, we observe that the circuit parallelization used here results in a depth reduction sufficient to compensate for the overhead due to repeated quantum state preparation. Since each unitary is applied in parallel, it removes the need for control qubits and controlled gates, making the circuits more efficient. In fact, the depth reduction (including state preparation) is by a factor of ∼130, a reduction of nearly 99%. This situation greatly benefits the one-shot algorithms TMCQC5,6. The depths of the methods proposed here appear to be gradually approaching those of variational quantum algorithms <cit.>. The transpiled circuit of these unitaries for the N_g=4 case is shown in figure <ref>(c). The depths for the N_g=4,8 circuits are 10 and 95, respectively (excluding the measurement operators). For the number of qubits being used and the gate count, these depths are close to the simulation limits on these devices, with acceptable error bounds.
At this stage, it is important to note that, for the two-unitary strategy, we have a decomposition that results from a parent Hermitian matrix. By applying these two unitaries in parallel to our instantaneous velocity field, we obtain solution quantum states whose amplitudes are purely real. These parallel solutions can be measured simply in the computational basis and added classically to obtain the final solution in a hybrid form. In this case, we need an additional qubit to accommodate the dilated Hermitian matrix. Proceeding to a fully parallel four-unitary scheme, whose operators are half the size of those in the two-unitary case, leads to much shallower parallel circuits. However, the parallel solutions from U_0 and U_1 have a finite imaginary part (unlike the solutions from U_2 and U_3), although their partial sum is purely real. If we wish to measure these results separately, a full state tomography would be required only for these two components, while the remaining two components can be directly measured as before.
Instead, if we are only interested in computing the expectation value of an observable O, we can do so by keeping the complex amplitudes intact and simply performing a Hadamard test, which computes Re(⟨ψ| O|ψ⟩). It can be modified easily by adding an additional conjugate phase gate S^†, as shown in figure <ref>(b), to also obtain Im(⟨ψ| O|ψ⟩). Here, for the fully parallel four-unitary case, we present some results from computing || U_i𝐮|| for each unitary (as shown in figure <ref>) and compare them with classical computations of the same quantity, for the N_g=4,8 cases. To obtain the actual velocity field solution with purely real amplitudes, we also perform an intermediate level of parallelization for N_g=4, by combining unitaries into two parallel circuits. For this, with an extra ancilla qubit, we apply controlled U_0,1 in one circuit and controlled U_2,3 in another, in parallel, and then combine the results classically. This circuit, of course, requires one less ancilla compared to a full four-unitary circuit. We also compare this with the parallel two-unitary case. Both yield solutions with comparable accuracy. Of the two, the fully parallel four-unitary case suffers far less from noise and decoherence, owing to its short-depth circuits, suggesting that accurate computation of expectation values of operators is more reachable on near-term devices.
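The following sketch shows the basic Hadamard-test construction referred to above (assuming Qiskit ≥ 1.0 for the `UnitaryGate` and `prepare_state` APIs); the two-qubit unitary and input state are arbitrary placeholders chosen only to make the snippet runnable, and a statevector is used in place of hardware sampling.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import UnitaryGate
from qiskit.quantum_info import Statevector

def hadamard_test(U: np.ndarray, psi: np.ndarray, imaginary: bool = False) -> float:
    """Return Re<psi|U|psi>, or Im<psi|U|psi> when the extra S^dagger is added."""
    n = int(np.log2(len(psi)))
    qc = QuantumCircuit(n + 1)
    qc.prepare_state(psi, list(range(1, n + 1)))   # data qubits hold |psi>
    qc.h(0)
    if imaginary:
        qc.sdg(0)                                  # conjugate phase gate S^dagger
    qc.append(UnitaryGate(U).control(1), [0] + list(range(1, n + 1)))
    qc.h(0)
    p0, p1 = Statevector(qc).probabilities([0])    # ancilla outcome statistics
    return p0 - p1                                 # equals Re (or Im) <psi|U|psi>

U = np.kron(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]))   # placeholder unitary
psi = np.full(4, 0.5)                                        # uniform 2-qubit state
print(hadamard_test(U, psi), hadamard_test(U, psi, imaginary=True))
```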
Further, to ensure that our results are not dominated by errors due to qubit decoherence, one requires T_s < min{T_1,T_2}. We examine this by studying the timing performance of these experiments on IBM Cairo, summarized in figure <ref>(d). The measurements show that the naive execution time per shot is T_s ≈ (1/2) min{T_1,T_2}. It has to be noted that T_s is computed by naively dividing the overall execution time by the N_s shots, which does not reflect the actual time taken per shot (T_s-act). After every shot, there is a finite delay before the next shot is executed, apart from other per-shot overheads <cit.>. By accounting for these overheads, our estimate of T_s-act shows that min{T_1,T_2}≈ 50 T_s-act, and therefore the circuit sizes are well within the decoherence limits. We then compute the mean time required to simulate the same problem classically, in MATLAB (with an Intel(R) Core(TM) i9-9900 CPU @ 3.10GHz processor). The results indicate that the timings are rather comparable, with the classical algorithm being ≈ 3× faster than T_s-act. However, it remains to be seen how these numbers would scale for larger problems when compared with high-performance classical algorithms.
Figures <ref>(a,b,d,e) show that the parallel solutions of U_i𝐮, computed on IBM Cairo, capture the classical solutions well. However, the error increases with resolution, as a result of the inevitable increase in circuit depth and the associated errors. The overall error, however, is comparable to the highest accuracy one can expect on that specific quantum device (for these circuit sizes), suggesting that our circuits extract the maximum available accuracy of the machine. The error contours shown in figures <ref>(c,f) suggest that high-amplitude regions of the wavefunction are recovered more accurately than near-zero regions. This is expected, since the near-zero values are more prone to noise. Proceeding further, using an intermediate level of parallelization with the four-unitary scheme, we show in figure <ref> the results of marching two time steps by reconstructing the full solution. The reconstruction clearly captures the corresponding classical results well, with errors comparable to those expected on the quantum device. The partial sum of parallel circuits used here has a depth of about 153, and the CNOT error rate of the device is ≈ 9×10^-3 (experiment performed on March 30, 2024). Considering the measured MSE, the performance shows, as expected, that our circuits fully extract the available accuracy of these machines. Again, the near-zero values cannot be captured accurately, given the overwhelming effects of noise. To overcome this difficulty, instead of a delta function, one could choose an initial condition of the form (say) u(x,t=0)=0.5sin(π x) + 1, which also satisfies the boundary conditions; the shifted initial condition avoids near-zero solutions. In any case, the results shown here form a stricter evaluation of the performance.
It is important to reiterate that the algorithms proposed in this work do not require measurements between time steps; however, the results shown above (from real device experiments) do involve measurements at each time step, given the currently feasible circuit sizes. Even though such measurements compromise the quantum advantage and add errors from the intermediate measurements, we proceed this way solely to assess the algorithm's ability to implement the proposed decomposition accurately. The accuracy, besides being constrained by the device's limitations, also depends dynamically on the instantaneous magnitude of the solutions, since larger magnitudes can be more easily distinguished from the underlying noise. The time-marching performance discussed here and in the next section can thus be considered a very strict lower bound on performance, where the accumulating errors are at their maximum. Nevertheless, the errors at every time step, when compared to the instantaneous classical solutions, are still within admissible values considering the specifications of the machine, thus reflecting the viability of the proposed algorithms.
Now, if we consider scaling up the problem, we would have to replace these naively transpiled circuits with the transpiled Hamiltonian simulation circuits. At the same time, invoking the TMCQC5,6 designs would yield the maximum quantum advantage, as shown theoretically here. In this case, a total of 𝒪(P^3_min) unitaries would need to be parallelized. The reduction in the number of CNOTs due to the removal of controls on unitaries (and of ancilla qubit complexity) by parallelization performs even better at those scales. While we await quantum devices with better error-correction subroutines <cit.>, and given that the proposed TMCQCs asymptotically require exponentially fewer resources, a decent possibility exists of achieving quantum advantage in the near term when these methods are combined with the end-to-end strategies described earlier, along with suitable error correction subroutines <cit.>.
To disentangle the interplay among noise, ϵ, and finite sampling of quantum states, and to study their effect on the accuracy of solutions, we perform noisy simulations on Qiskit Aer for a range of these parameters; our observations are outlined in the following section. For this, we implement circuits that include all controlled operations, without any parallelization. These circuits are thus much deeper, yielding conservative estimates of the error rates and ϵ needed to achieve convergent simulations. In practice, with careful optimization, parallelization, and basic error correction, these estimates can be relaxed significantly (by one to two orders of magnitude).
§.§ Qiskit Aer - Noisy Simulations
The influence of noise on the accuracy of results needs to be assessed not only to gauge our algorithms' viability on near-term devices, but also to hone the algorithmic design for maximum accuracy. We use Qiskit Aer to build a custom noise model into our algorithm, allowing us to simulate noisy circuits. In this work we use the bit-flip noise model to set the error (failure) rates of measurement operations (p_meas), reset operations (p_res), and gate actions (p_gate). Here, we set all the error rates to be equal (=p_noise) and vary them together. The error rate can be thought of as the probability with which a certain operation/gate fails. The error rate on current devices (at least those that were available to us) is at best p_noise=10^-3. Further, it has been shown in <cit.> that if these error rates can be reduced to threshold values in [10^-5, 10^-4], there exist error-correction algorithms with which one may perform simulations of arbitrarily long circuits. In fact, more recent progress <cit.> has shown that, using carefully designed error-correction codes, these error rates can be further reduced to about p_noise∈[10^-12, 10^-7]. These algorithms make use of logical qubits that are, in turn, composed of several qubits, resilient as an aggregate to errors due to noise and decoherence. Therefore, in this work we study the effect of noise by varying error rates over p_noise∈[10^-10, 10^-3]. We consider the N_g=4,16 cases with the same initial and boundary conditions as before.
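A minimal sketch of such a bit-flip noise model in Qiskit Aer is shown below, following the standard construction; the specific gate list is an assumption matching Cairo-class basis gates, and the resulting model would be passed to `AerSimulator(noise_model=...)` when running the circuits.

```python
from qiskit_aer.noise import NoiseModel, pauli_error

def bit_flip_noise(p_noise: float) -> NoiseModel:
    """Equal failure rates p_meas = p_res = p_gate = p_noise, as in our study."""
    flip = pauli_error([("X", p_noise), ("I", 1.0 - p_noise)])
    nm = NoiseModel()
    nm.add_all_qubit_quantum_error(flip, ["reset"])            # p_res
    nm.add_all_qubit_quantum_error(flip, ["measure"])          # p_meas
    nm.add_all_qubit_quantum_error(flip, ["id", "x", "sx"])    # 1-qubit gates
    nm.add_all_qubit_quantum_error(flip.tensor(flip), ["cx"])  # 2-qubit gates
    return nm

noise_models = {p: bit_flip_noise(p) for p in (1e-3, 1e-5, 1e-7, 1e-10)}
```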
We recall that the TMCQCs rely on two- and four-unitary decompositions to approximate the time marching matrix. This approximation becomes exact in the limit ϵ→ 0. However, choosing small values of ϵ diminishes not only the probability of the post-selected solution subspace, but also the magnitude of the solution amplitudes themselves, which are scaled down by ϵ. Distinguishing these small magnitudes becomes particularly hard in the presence of noise and of finite sampling errors due to the limited shot count (N_s) on current devices. The effects of noise vary even if one provides a large enough N_s. Let us consider the following two extreme scenarios.
High noise - With p_noise=10^-4 (close to the threshold <cit.>) and for N_g=4, we can see from the histogram in figure <ref>(a) that the shot counts of the solution subspace amplitudes become progressively lower with decreasing ϵ. We can also observe that the counts of the small amplitude values seem to saturate rather than go to 0. This is a typical signature of a finite-noise simulation, where even the near-zero values are lifted to larger, finite noise amplitudes. Such errors due to noise can accumulate rather quickly in the time marching solutions, so that long-time accuracy is not feasible. From this point of view, noise appears to have an adverse impact on the accuracy of simulations.
Zero noise - For an ideal device with zero noise, it is obvious that the shot count of solutions, especially the ones with small amplitudes, would tend continually towards zero with decreasing ϵ. In fact, for a small enough ϵ, the shot count would be exactly zero for several time steps (leading to divergence) and would therefore require an extremely large N_s to even obtain a single sample. This works against our desire to obtain quantum advantage. In this case, the divergence of the error with time steps is essentially due to velocity solutions that remain zero at some grid points for several time steps. From this perspective, it appears that noise can, in fact, be used to advantage, since it amplifies near-zero signals (but these near-zero values would be lost in the noise). Therefore, even though the solutions might have higher errors for the initial few time steps, they could stabilize subsequently when the magnitudes become large enough.
It is now important to remind ourselves that, on current and near-term devices, it is impossible to escape the effects of noise. However, the previous discussion motivates the need to find a middle ground, with finite noise and with ϵ and N_s chosen such that the effective solution possesses maximum accuracy. A simple step to address this issue would be to begin with a shifted initial condition, as described earlier (in Section <ref>.B), which makes the solutions avoid near-zero values to begin with. Here, however, we still use delta functions to provide a strict analysis. Further, we can invoke the idea of Richardson extrapolation introduced earlier (in Section <ref>) to perform simulations with larger ϵ values and approximate solutions at smaller ϵ. This tool also lowers the errors in the solution or, alternatively, lowers the N_s required for a desired accuracy. In fact, this tool can even be extended to perform error mitigation, as shown in <cit.>. Now, to understand the interplay between these quantities, we perform our analysis in two steps. First, we fix N_s=2^24, which is large enough to avoid significant undersampling errors, and study the connection between p_noise and ϵ, together with Richardson extrapolation. Second, we fix the error rates and study the connection between N_s and ϵ.
For the first step, the error phase diagram is shown in figure <ref>(b), demarcated into four quadrants. The Northwest (NW) quadrant is the least favorable, since small ϵ values are coupled with large p_noise, leading to solutions completely overwhelmed by noise and thus to the least accuracy (maximum error). In the Northeast (NE) quadrant, ϵ values are larger and, therefore, even though p_noise is large, the solutions are somewhat more expressive in the presence of noise, leading to slightly better accuracy. The solutions here are still erroneous, since they correspond to an inaccurate unitaries decomposition with large ϵ values. Next, the Southeast (SE) quadrant shows an improvement over the previous case, since p_noise is now small. Finally, the Southwest (SW) quadrant offers the highest accuracy (dark-blue zone), with small ϵ and p_noise, both of which are favorable. The objective is then to grow the expanse of this dark-blue zone across the rest of the phase diagram. Eastward, with larger ϵ values, we can invoke the Richardson extrapolation. By simply performing extrapolations with ϵ values in the range [0.5,1.5], we see that the accuracy can be improved in the SE quadrant, as shown in the right panel of figure <ref>(b). Expanding northward would require error mitigation schemes. In fact, to estimate solutions in the zero-noise limit, we can invoke another layer of Richardson extrapolation, as shown in <cit.>. Exploration in that direction forms an important part of our future work.
We can see the effect of the extrapolation itself more clearly by plotting the extrapolated solutions (with (ϵ_1,ϵ_2)=(1.5,1)) of the velocity field for varying error rates, as shown in figure <ref>(c).
It can be clearly seen that the accuracy progressively improves with lower error rates and begins to converge at about p_noise≈10^-7. Also, the solution at p_noise=10^-3 is comparable to the solutions obtained from experiments on a real device (IBM Cairo), as detailed in the previous section. To quantify the error and magnify the trends, we plot the MSE as a function of p_noise. In the high-noise region, smaller ϵ values clearly lead to larger errors, and this trend reverses in the low-noise region. As p_noise becomes very small, the errors for each ϵ asymptote to the corresponding no-noise limits. We also compute the errors in solutions obtained via Richardson extrapolation (shown as red and yellow stars). As we lower p_noise, the effect of extrapolation becomes more pronounced. At the threshold of approximately p_noise=10^-5, the improvement in accuracy after extrapolation is already ≈ 98% (a factor of ≈ 70), indicating the effectiveness of the extrapolation on near-term devices.
So far we have studied a single application of the unitaries decomposition. To successfully implement a time-marching simulation, it is also important to examine the time evolution of errors; to ensure convergence, the overall error must be kept in check at every time step. For this, we first increase the grid resolution to N_g=16 to lower the grid-resolution errors. We then use Richardson extrapolation with (ϵ_1,ϵ_2)=(1,0.5) to further improve accuracy, and perform forward time marching simulations up to τ=32 time steps, for varying levels of p_noise. The time evolution of the flow field, shown in figures <ref>(a,b), demonstrates that the extrapolated solutions capture the flow physics accurately. To examine this more closely, we plot the error convergence of the extrapolated solutions in the inset of figure <ref>(b). From this, it can be seen that the error at each time step for p_noise=10^-6 is still large, leading to non-convergent behavior. However, for p_noise<10^-6, the error is under control and the solution begins to converge. It is useful to remind ourselves that the circuits simulated here are fully controlled circuits, without any parallelization or optimization. The circuit depths are ≈ 36963, with a total of about ≈ 21270 CNOT gates, and these numbers can be reduced substantially by the methods described in Sec. <ref>. The error convergence can then occur at p_noise well above 10^-6, closer to the theoretical threshold <cit.>. Furthermore, increasing the grid resolution can also significantly improve the error at every time step, and therefore reduce the onus on the p_noise values required for convergence. In any case, the results presented here can be viewed as strict estimates of the error rates needed for convergence. These estimates suggest that the algorithm presented here has potential for actual implementation on near-term quantum devices possessing logical qubits and error-correction algorithms. Recent results from <cit.> are already in the same ballpark.
Finally, we fix the error rates at p_noise = 10^-8 and vary N_s to study the effect of finite sampling of qubit states. We again plot the error convergence of the extrapolated solutions as a function of time, now with varying N_s, as shown in figure <ref>(c). Although the overall error evolution might seem to converge, the errors converge to lower values with increasing N_s, as expected. To quantify this better and assess the advantage of the Richardson extrapolation, we plot the decay of the MSE with N_s, with and without extrapolation at ϵ_1=1 and ϵ_2=0.5, all for τ=32. These errors are accumulated over all previous time steps and thus also include the effect of time marching. We see that the MSE of the extrapolated solutions has a power-law decay with N_s of the form ∼ 0.0185 N_s^-0.5513≈ 1/√(N_s), which is the expected statistical sampling scaling. This behavior shows that the extrapolated solutions, even after accumulating errors from several time steps, still follow the ≈ 1/√(N_s) scaling, unlike the un-extrapolated ones. This suggests that the extrapolated solutions have overcome the errors from the inaccurate decomposition (due to large ϵ values), and only the errors from under-sampling persist. For the solutions with no extrapolation, the MSE decays as ∼ 0.0008 N_s^-0.095, as shown in figure <ref>(d). These trends show the advantage of extrapolation; for a fixed accuracy, it reduces the required number of shots by orders of magnitude.
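The exponents quoted above follow from a least-squares fit in log-log space; the short sketch below illustrates the procedure on synthetic MSE values (placeholders, not our measured data).

```python
import numpy as np

rng = np.random.default_rng(1)
Ns = np.array([2.0**k for k in range(12, 25, 2)])
# Synthetic MSE values mimicking the ~ Ns^(-1/2) decay, with small scatter:
mse = 0.0185 * Ns**-0.55 * np.exp(0.05 * rng.standard_normal(Ns.size))

slope, log_c = np.polyfit(np.log(Ns), np.log(mse), 1)
print(f"MSE ~ {np.exp(log_c):.4f} * Ns^({slope:.3f})")   # slope close to -1/2
```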
In summary, we provide estimates of p_noise, ϵ and N_s, in conjunction with extrapolation techniques, to perform accurate and convergent time marching simulations with the proposed quantum algorithms. Such estimates are scarce in the literature for quantum algorithms solving PDEs in a time marching fashion. The current estimates also suggest that the algorithms introduced here have a promising potential to be implemented on near-term devices <cit.> with improved accuracy.
§.§ Applications
(1) Nonlinearity – We have so far demonstrated the operation and performance of the proposed algorithms on a specific example of a linear advection-diffusion problem. However, we emphasize that these algorithms are agnostic to the nonlinearity of the governing PDEs: as long as the PDEs can be reduced to a set of iterative matrix inversion or matrix-vector multiplication operations, they can be solved similarly by casting them into any of the TMCQC forms. Such efforts would generally involve employing a linearization technique such as the Carleman <cit.>, Koopman <cit.> or Homotopy <cit.> methods, which embed a finite-dimensional nonlinear problem as an infinite-dimensional linear problem. This generates a linear system of equations of large dimension that can be solved using the TMCQCs presented in this work. The degree of nonlinearity that can be handled would strongly depend on the linearization used, and the complexity of the overall algorithm remains to be explored.
(2) Other QCFD approaches – In the past, other approaches have been proposed to simulate fluid flow problems with QC. Some promising directions include Quantum Lattice Boltzmann algorithms <cit.>, Schrödingerization algorithms <cit.> and Variational Quantum Linear Solvers <cit.>. Most of these algorithms can benefit from an efficient way to perform iterative matrix-vector multiplication and inversion operations. The algorithms proposed here can form a basis for all such operations, making this a versatile tool.
(3) Beyond fluid dynamics – As mentioned already, these algorithms fundamentally offer an elegant and efficient way to apply iterative matrix-vector multiplications or inversions as quantum circuits. Therefore, the scope of the tool proposed here extends well beyond solving fluid flow problems. Such operations are ubiquitous in machine learning, quantum sensing, interior point methods, image processing, and so on, making the proposed algorithm versatile. Exploration of these aspects is a worthy enterprise, given the potential and viability of the algorithms on near-term devices and the complexity guarantees.
§ CONCLUSIONS AND OUTLOOK
In summary, we have introduced a set of Time Marching Compact Quantum Circuit algorithms that require shallow circuit depths and a minimal number of unitaries to simulate unsteady PDEs governing fluid flow problems. The algorithms can be used to solve linear and nonlinear problems, provided they can be cast into any one of the TMCQC formats. We show that these algorithms have, at best, a gate complexity of 𝒪(sϵ,log(N_gτ),polylog(ϵ/ε_U,1/ε_N),log^-3((κ-1)^-1)). Further, by appropriately initializing the numerical setup for the flow problem, we show that the query complexity can also be minimized, contributing at most a constant prefactor to the overall time complexity. Therefore, the overall asymptotic time complexity has a near-optimal scaling (logarithmic to polylogarithmic) in N_g, τ, ε_U, ε_N and κ, but exponential in the sparsity s. Since the matrices used in this work are sparse, tri-diagonal matrices, the sparsity contributes only a constant prefactor. At worst, the complexity of the algorithms is still near-optimal in all system parameters, but exponential in τ and s.
We have proposed a set of end-to-end strategies and quantum subroutines, including quantum state preparation, circuit parallelization, Richardson extrapolation, and quantum post-processing, with which the overall algorithm can conserve the available quantum advantage. The proposed TMCQCs are then simulated on QFlowS, an in-house, high-performance quantum simulator. The results from these ideal (no-noise) simulations show that the proposed algorithm can accurately capture the flow physics of the chosen problem, qualitatively and quantitatively.
We have then tested the unitaries decomposition by transpilation on a real quantum device (IBM Cairo), after carefully designing and parallelizing the circuits so that they conform to the quantum volume constraints of the device. Again, the results from these experiments (subject to both noise and decoherence) show that the algorithm captures the flow physics quite well, reaching the maximum accuracy attainable on the device for the given circuit sizes. The parallelization techniques were effective in lowering circuit depths and gate counts. The results bolster the possibility of implementing our algorithm on real devices. Further, when the simulation of even larger circuits becomes possible, the current transpiled circuits can be replaced by the improved Hamiltonian simulation circuits to attain the asymptotic complexities derived here.
With currently available devices and near-term devices on the horizon as reference, it is also important to determine what specifications would be necessary in practice to carry out a full-scale simulation. To do this, and to better understand the interplay between noise, state sampling and ϵ, together with their collective effect on time marching simulations, we have performed a comprehensive set of noisy simulations using Qiskit Aer. Our analysis suggests that, with Richardson extrapolation, we can carry out accurate simulations even with ϵ∼𝒪(1), but error-convergent results are possible only for p_noise < 10^-6. Our analysis thus provides an in-depth view of these effects, as well as estimates of the required cut-offs on each of these quantities, for designing algorithms that offer error-convergent time marching simulations. We also highlight the role of extrapolation in reducing the errors even for large-ϵ simulations, or in lowering the shot counts by orders of magnitude to achieve a given accuracy. The extrapolation itself provides up to ∼ 98% improvement in performance. Taking the cue from our experimental results, as well as the recent progress made with logical qubits and quantum error-correction codes <cit.>, our estimates present a decent possibility for the proposed algorithms to be implemented at full scale on near-term quantum devices. Considering the methodology presented in this work, along with its projected asymptotic complexities, these algorithms have the potential to demonstrate quantum advantage on near-term devices despite their numerous limitations. Of course, when more powerful devices become available, further effort will be essential in carefully designing these algorithms in tandem with error-correction subroutines to achieve quantum advantage. Nevertheless, the analysis presented here indicates that such an undertaking stands a fair chance.
In essence, the importance and applicability of the tools presented in this work extend beyond solving fluid dynamical PDEs. Fundamentally, the algorithms offer a way of performing iterative matrix-vector multiplication and inversion operations on exponentially large data dimensions, while also accounting for the limitations of current and near-term devices. Apart from its utility in the previously proposed QCFD approaches, such a tool is needed in most numerical methods and therefore caters to a huge range of applications, including machine learning, image processing, optimization and quantum sensing, to name a few. As noted earlier, the proposed algorithms can be used to solve nonlinear PDEs as well, once they are appropriately cast into a linear setting via linearization techniques, highlighting their versatility in principle. However, exploring these possibilities and different applications in greater detail, along with experiments on other kinds of quantum computers, such as those with photonic or ion-trap qubits, is an important part of future work.
Acknowledgements We thank Balu Nadiga, Stephan Eidenbenz, Dhawal Buaria, András Gilyén, Srinivasan Arunachalam, Philipp Pfeffer, Julia Ingelmann, Jörg Schumacher, Akash Rodhiya and Wael Itani for helpful discussions. We wish to acknowledge New York University's Greene supercomputing facility, on which part of these simulations were performed, and the IBM Quantum resource access provided by Los Alamos National Laboratory through the Oak Ridge OLCF allocation, for which we are grateful to Stephan Eidenbenz and Balu Nadiga. The views and results of this paper are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.
Data Availability All the data is included in the manuscript.
Code Availability The code will be made available upon reasonable request to the authors.
§ NUMERICAL SETUP
We outline here the numerical discretization of the governing equations using finite differences, and the different ways of setting up time marching equations that can be simulated using the quantum algorithms proposed in this work. We consider the 1D linear advection-diffusion flow (C=U) under periodic boundary conditions, described in the main body of the paper, and further discuss the implications of using Dirichlet boundary conditions.
§.§ Spatial discretization
We review the discretization of the spatial dimension of the flow domain using central finite difference schemes. We consider a domain of length L, discretized into N_g equidistant grid points with spacing Δ x=h=L/N_g (x_i = x_0+iΔ x), incurring a discretization error ∼𝒪(Δ x^2). The velocity field at each point at time t is given by 𝐮=[u(0,t), u(Δ x,t),⋯,u((N_g-1)Δ x,t)]. To approximate the spatial derivatives in eq.(<ref>), we employ the well-known 2nd-order central difference scheme
D∂^2 u/∂ x^2 - U∂ u/∂ x≈ Du_i+1- 2u_i + u_i-1/(Δ x)^2 - U u_i+1 - u_i-1/2Δ x.
§.§ Temporal discretization
To integrate in time, the temporal domain t ∈ [0,T] is discretized into τ = T/Δ t time steps (t_j=t_0+jΔ t) using two different schemes, both admitting an error ∼𝒪(Δ t).
(1) Forward Euler or Explicit method: Applying this discretization to the time derivative together with eq.(<ref>) gives
u_i^j+1-u_i^j/Δ t = Du^j_i+1- 2u^j_i + u^j_i-1/(Δ x)^2 - U u^j_i+1 - u^j_i-1/2Δ x,
which rearranges to
u_i^j+1 = (α - χ) u_i+1^j + ( 1- 2 α) u_i^j +( α + χ) u_i-1^j,
where α = DΔ t/(Δ x)^2 and χ = UΔ t/(2 Δ x) are the stability and convective parameters, respectively. For the problem under discussion, α≤α_cfl = 1/2 is required to ensure stability of the solution (von Neumann stability criterion). This can now be cast into a matrix equation of the form
𝐮^j+1 = (𝐈 - 𝐀_ex)𝐮^j = 𝐀_E𝐮^j.
(a) Under periodic boundary conditions the matrix takes the form
𝐀_E = [ 1-2α α-χ 0 ⋯ 0 α+χ; α+χ 1-2α α-χ 0; 0 α+χ 1-2α α-χ 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 α+χ 1-2α α-χ; α-χ 0 ⋯ 0 α+χ 1-2α; ].
(b) Under Dirichlet boundary conditions, we only solve for velocity at the interior points of the domain and not at the boundaries (since this is already given). Then the matrix would be given by
𝐀_E = [ 1-2α α-χ 0 ⋯ 0 0; α+χ 1-2α α-χ 0; 0 α+χ 1-2α α-χ 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 α+χ 1-2α α-χ; 0 0 ⋯ 0 α+χ 1-2α; ].
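For reference, a minimal NumPy sketch of this construction (and of the classical forward-Euler marching it enables) is given below; the parameter values are arbitrary choices satisfying the CFL criterion.

```python
import numpy as np

def explicit_matrix(Ng: int, alpha: float, chi: float,
                    periodic: bool = True) -> np.ndarray:
    """Assemble A_E for the 2nd-order central scheme (forward Euler in time)."""
    A = np.zeros((Ng, Ng))
    for i in range(Ng):
        A[i, i] = 1.0 - 2.0 * alpha
        A[i, (i + 1) % Ng] = alpha - chi   # coefficient of u_{i+1}
        A[i, (i - 1) % Ng] = alpha + chi   # coefficient of u_{i-1}
    if not periodic:                        # Dirichlet: drop wrap-around couplings
        A[0, Ng - 1] = 0.0
        A[Ng - 1, 0] = 0.0
    return A

Ng, alpha, chi = 8, 0.25, 0.1               # alpha <= 1/2 satisfies the CFL bound
A_E = explicit_matrix(Ng, alpha, chi)
u = np.zeros(Ng); u[Ng // 2] = 1.0          # delta-function initial condition
for _ in range(4):                          # classical forward time marching
    u = A_E @ u
```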
(2) Backward Euler or Implicit method: In this method, the discretized governing equation would take the form
u_i^j+1-u_i^j/Δ t = Du^j+1_i+1- 2u^j+1_i + u^j+1_i-1/(Δ x)^2 - U u^j+1_i+1 - u^j+1_i-1/2Δ x,
which rearranges to
u_i^j = (- α + χ) u_i+1^j+1 + ( 1 + 2 α) u_i^j+1 +( -α - χ) u_i-1^j+1.
The above equation can be written as matrix equation of the form
𝐮^j = (𝐈 - 𝐀_im)𝐮^j+1 = 𝐀_I𝐮^j+1.
Therefore 𝐮^j+1 = 𝐀^-1_I𝐮^j. This scheme is known to be unconditionally stable for any size of Δ t and is not subject to any stability criteria as in the explicit case. However, to obtain the solution at every time step requires inverting the matrix 𝐀_I. In this work we employ the truncated Neumann series to approximate this inverse (described later in Section <ref> of the Appendix), which requires the condition ||𝐀_im|| < 1, for the series to converge. Therefore, α and χ need to be chosen accordingly.
(a) Under periodic boundary conditions, the matrix takes the form
𝐀_I = [ 1+2α -α + χ 0 ⋯ 0 -α-χ; -α-χ 1+2α -α+χ 0; 0 -α-χ 1+2α -α+χ 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 -α-χ 1+2α -α+χ; -α+χ 0 ⋯ 0 -α-χ 1+2α; ].
(b) Under Dirichlet boundary conditions the matrix is given by
𝐀_I = [ 1+2α -α + χ 0 ⋯ 0 0; -α-χ 1+2α -α+χ 0; 0 -α-χ 1+2α -α+χ 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 -α-χ 1+2α -α+χ; 0 0 ⋯ 0 -α-χ 1+2α; ].
With this discretization, the following sections describe specific aspects of different quantum algorithms proposed for time marching.
NOTE – The case of χ=0 corresponds to the Poiseuille (pipe) flow <cit.>, where the matrices 𝐀_E, 𝐀_I are symmetric for Periodic as well as Dirichlet boundary conditions.
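A corresponding classical sketch of the implicit scheme is shown below; it mirrors the explicit sketch above, with each step obtained by solving the linear system rather than by a matrix-vector product.

```python
import numpy as np

def implicit_matrix(Ng: int, alpha: float, chi: float,
                    periodic: bool = True) -> np.ndarray:
    """Assemble A_I for the backward-Euler scheme, so that u^j = A_I u^{j+1}."""
    A = np.zeros((Ng, Ng))
    for i in range(Ng):
        A[i, i] = 1.0 + 2.0 * alpha
        A[i, (i + 1) % Ng] = -alpha + chi
        A[i, (i - 1) % Ng] = -alpha - chi
    if not periodic:                        # Dirichlet: drop wrap-around couplings
        A[0, Ng - 1] = 0.0
        A[Ng - 1, 0] = 0.0
    return A

Ng, alpha, chi = 8, 0.25, 0.1
A_I = implicit_matrix(Ng, alpha, chi)
u = np.zeros(Ng); u[Ng // 2] = 1.0
for _ in range(4):                          # unconditionally stable marching
    u = np.linalg.solve(A_I, u)             # u^{j+1} = A_I^{-1} u^j
```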
§.§ Explicit method - TMCQCs
(a) Explicit expansion method: The explicit forward time integration up to τ time steps translates to the application of the operator (A_E)^τ. A given matrix is first decomposed into a linear combination of unitaries, and the sum is raised to the power τ and expanded, yielding a new sum of unitaries with appropriate coefficients.
(b) Explicit iterative method: To advance by τ time steps using the iterative method, one iteratively invokes the unitaries circuit to apply the original decomposition τ times. To extend the single-step operation outlined in Section <ref> of the main text to τ iterations, we introduce an additional ancillary register q_a0 with n_t=log(τ) countdown qubits. Let us consider a general non-unitary, non-Hermitian matrix M, and consider for clarity the case τ=2 shown in figure <ref>.
Including the additional ancillary qubit (n_t=1), the first application of the LCU operators as before (marked by the red dashed line (τ=1) in figure <ref>(a)) leaves us with a state proportional to
|1⟩1/√(β)|0⟩^n_aM|Ψ⟩_u + |0⟩|Ψ⟩_⊥,
where M = U_0+U_1+U_2+U_3 and
|Ψ⟩_⊥ = 1/√(β)( (U_0-U_1+U_2-U_3)_A|01⟩ + (U_0+U_1-U_2-U_3)_B|10⟩ + (U_0-U_1-U_2+U_3)_C|11⟩),
with the subscripts A, B and C labeling the operator combinations reused below.
The countdown qubits are initialized to the state |τ-1⟩ and, after every time step, are decremented by one in binary using bit flips, finally reaching |0⟩ and thereby having counted τ time steps. Now we use the ancillary qubit q_a0 to tag the subspace of the wave function, associating it with the time-step counter, and apply a controlled NOT gate conditioned on q_a1 and q_a2 (requiring both to be 0), which yields the state
|1⟩|0⟩^n_aM|Ψ⟩_u + |0⟩|Ψ⟩_⊥.
We then reapply the unitaries circuit as in the first iteration, but this time all the operations are additionally controlled by q_a0 (required to be 1, marking the solution subspace), as shown by the red dashed line (τ=2) in figure <ref>(a). This gives
|Ψ⟩ = |1⟩|00⟩1/√(β')M^2| b⟩ + |1⟩|Ψ⟩_⊥_1 + |0⟩|Ψ⟩_⊥_2,
where |Ψ⟩_⊥_1 = 1/√(β')(|01⟩ A+|10⟩ B +|11⟩ C )M| b⟩ and |Ψ⟩_⊥_2 = 1/√(β)(|01⟩ A+|10⟩ B +|11⟩ C )| b⟩.
Finally we apply a NOT gate on q_a0 giving the state
|Ψ⟩ = |0⟩|00⟩1/√(β')M^2| b⟩ + |1⟩|Ψ⟩_⊥_1 + |1⟩|Ψ⟩_⊥_2,
and then measure the first three qubits in the computational basis. When the measurement of these qubits yields |000⟩, we are left with a state proportional to M^2| b⟩.
(c) Explicit one-shot method: Alternatively, a matrix inversion problem of the form
𝐀_EOũ = 𝐛_EO
can be set up to solve for the velocity field at all time steps together. 𝐀_EO has a double-banded structure, as written in eq.(<ref>): (A_EO)_ij = 𝐈 for i=j; (A_EO)_ij = -(𝐈 - 𝐀_ex) = -𝐀_E for j=i-1 with i≤τ; and (A_EO)_ij = -𝐈 for j=i-1 with i>τ. Further, (b_EO)_i = u_in for i=0, (b_EO)_i = 𝐟Δ t for 0<i≤τ, and (b_EO)_i = 0 for i>τ. Here, f=0 identically. Following this, the inverse of the matrix 𝐀_EO is approximated either as a truncated Neumann series (see later sections for details on error bounds) or via a truncated Fourier series approach, as outlined in <cit.>, which also provides error bounds on the truncation. Here, we consider the former approach. First, we rewrite the inverse as (I-(I-𝐀_EO))^-1.
Then, under the constraint of ||I-𝐀_EO||<1, the matrix inverse is given by
𝐀^-1_EO≈∑_p=0^P(I-𝐀_EO)^p = ∑_p=0^P(𝐀̃_EO)^p.
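As a classical reference for this construction, the sketch below assembles the block bidiagonal system and solves it directly; the stand-in A_E should be replaced by the explicit-scheme matrix (e.g., `explicit_matrix` from the sketch above), and the interpretation of the sub-diagonal blocks follows our reading of the structure stated in the text.

```python
import numpy as np

def one_shot_system(A_E: np.ndarray, u_in: np.ndarray, tau: int, padding: int = 0):
    """Assemble A_EO and b_EO so that A_EO @ u_tilde = b_EO stacks u^0,...,u^tau."""
    Ng = A_E.shape[0]
    nb = tau + 1 + padding                 # extra identity-coupled blocks (i > tau)
    M = np.eye(nb * Ng)
    for i in range(1, nb):
        blk = A_E if i <= tau else np.eye(Ng)
        M[i * Ng:(i + 1) * Ng, (i - 1) * Ng:i * Ng] = -blk
    b = np.zeros(nb * Ng)
    b[:Ng] = u_in                          # forcing term f = 0 identically here
    return M, b

Ng, tau = 8, 4
A_E = np.eye(Ng)                           # stand-in; use explicit_matrix(...) above
u0 = np.zeros(Ng); u0[Ng // 2] = 1.0
M, b = one_shot_system(A_E, u0, tau)
u_all = np.linalg.solve(M, b)              # block i of u_all holds u^i
```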
§.§ Implicit method - TMCQCs
(a) Implicit iterative method: This method again requires one simply to multiply the initial state by the matrix 𝐀_NI, τ times, iteratively or serially, giving
u^τ = (𝐀_NI)^τu^0,
where the above matrix 𝐀_NI is a truncated Neumann series approximation to the inverse of 𝐀_I, given by
𝐀_NI = 𝐀^-1_I≈∑_p=0^P𝐀_im^p
computed up to P terms. This approximation is convergent only when α and χ are chosen such that ||𝐀_im||<1.
(b) Implicit expansion method: This method proceeds exactly as the explicit expansion method, except that the matrix 𝐀_𝐍𝐈 is used to compute the expansion.
(c) Implicit one-shot method: One can also alternatively set up a matrix inversion problem of the form
𝐀_IOũ = 𝐛_IO,
to solve for the velocity field at all time steps together. The matrix has the form shown in eq.(<ref>): (A_IO)_ij = (𝐈 - 𝐀_im) = 𝐀_I for i=j with 0<i≤τ, and (A_IO)_ij = 𝐈 for i=j otherwise; (A_IO)_ij = -𝐈 for j=i-1. Further, (b_IO)_i = u_in for i=0, (b_IO)_i = -𝐟Δ t for 0<i≤τ, and (b_IO)_i = 0 for i>τ. Here again, f=0 identically. Further, the inverse of the matrix 𝐀_IO is again given by the truncated Neumann approximation as
𝐀^-1_IO≈∑_p=0^P(I-𝐀_IO)^p = 𝐀̃_IO
§ TRUNCATED NEUMANN SERIES
The matrix inversion problem of the kind (𝐈-𝐌)^-1, under the condition ρ(𝐌)<1 (where ρ is the spectral radius), can be rewritten using the Neumann power series as follows:
(𝐈-𝐌)^-1 = ∑_p=0^∞𝐌^p.
To approximate the inverse using such a summation, we compute a truncated sum of eq.(<ref>) up to p=P_min-1 terms. Then it can be shown that, for Laplacian-type operators outlined earlier, the truncation error bound ε_N is given by Lemma 1.
Lemma 1 Consider a central finite difference matrix 𝐌∈ℝ^N× N, corresponding to a flow field discretized into N grid points, integrated up to time T (with τ time steps), with α = DΔ t/(Δ x)^2 and χ = UΔ t/(2Δ x) chosen such that ||𝐌||<1. If the inverse (𝐈-𝐌)^-1 is approximated by the truncated Neumann series ∑_p=0^P_min-1𝐌^p, the truncation error ε_N = || (𝐈-𝐌)^-1 - ∑_p=0^P_min-1𝐌^p|| is bounded from above by
ε_N≤𝒪( ||𝐌||^P_min) = 𝒪((κ-1)^P_min),
where κ = ||𝐈-𝐌||·|| (𝐈-𝐌)^-1||≤ 2 is the condition number, and the number of terms P_min needed to admit a specified ε_N is accordingly bounded by
P_min = 𝒪(⌈( log(1/ε_N)/log(1/(κ-1)))⌉).
Proof - Starting with the Neumann series representation in eq.(<ref>), consider a truncated series 𝐑_P_min computed up to P_min terms
𝐑_P_min = 𝐈+𝐌+𝐌^2+⋯ +𝐌^P_min-1.
The truncation error would therefore be given by
ε_N = || (𝐈-𝐌)^-1 - 𝐑_P_min|| = ||∑_p=0^∞𝐌^p - ∑_p=0^P_min-1𝐌^p||
= ||𝐌^P_min+𝐌^P_min+1 + ⋯||
= ||𝐌^P_min(∑_p=0^∞𝐌^p) ||
= ||𝐌^P_min(𝐈-𝐌)^-1||
≤||𝐌^P_min||·|| (𝐈-𝐌)^-1||.
Considering the first factor in eq.(<ref>), the submultiplicativity of the matrix norm gives
||𝐌^P_min||≤||𝐌||^P_min.
For the second factor in eq.(<ref>), noting that ||𝐌||≤||𝐌||_∞ and invoking the Varah bound <cit.>, we obtain
|| (𝐈-𝐌)^-1||_∞≤1/Γ(α,χ), where
Γ = min_i{| (𝐈-𝐌)_i,i| - ∑_i≠ j| (𝐈-𝐌)_i,j|}.
Now, for a 2nd-order scheme, Γ(α,χ) depends on the choice of boundary conditions and on the magnitudes of α and χ. For both boundary conditions, Γ = 1 when χ≤α, while for χ > α, Γ=1-2(χ-α). The case χ=0 corresponds to the Poiseuille flow problem, which has a symmetric finite difference matrix for both boundary conditions.
In this work we consider the case χ≤α, for the following reason. First, we note that Pe = UL/D is the Péclet number, the ratio of the advective to diffusive time scales. In other words, Pe→ 0 corresponds to a flow that is mainly diffusive, while for Pe→∞ the flow is dominated mainly by advection. Rewriting the inequality between χ and α, we note that χ>α and χ≤α correspond to Pe>2N and Pe≤ 2N, respectively. Since the former entails only advective processes, we choose the latter to allow both diffusive and advective processes together. Now, since Γ=1 in this case, from eq.(<ref>) and eq.(<ref>) we get
ε_N≤||𝐌||^P_min.
To compute the bound for ||𝐌||_∞, we proceed as follows. The resource requirement for a matrix inversion depends on the condition number κ = ||𝐈-𝐌||·|| (𝐈-𝐌)^-1||. So let us consider the inequality
|| 𝐈-𝐌|| ≤|| 𝐈-𝐌||_∞,
κ/|| (𝐈-𝐌)^-1|| ≤||𝐈||_∞ + ||𝐌||_∞≤ 2.
The above equations follow again from eq.(<ref>) when Γ=1.
Now we note that
0 ≤||𝐌||≤ 1 .
This is a necessary condition for inversion, and therefore the eigenvalues are bounded as -1 ≤λ≤ 1. Since κ≥ 1 always, we get from eq.(<ref>):
1 ≤κ≤ 2.
By subtracting equation (<ref>) from (<ref>) we note
||𝐌||≤κ -1,
and thus prove that
ε_N≤𝒪((κ-1)^P_min).
By rearranging the above equation, we show that
P_min =𝒪( ⌈log(1/ε_N)/log(1/ ||𝐌||)⌉)
=𝒪(⌈( log(1/ε_N)/log(1/(κ-1)))⌉).
The bound clearly shows that both higher accuracy (smaller ε_N) and larger κ require a larger P_min to estimate the matrix inverse.
As a confirmation of this result, we numerically compute the dependence of ε_N on κ for varying cut-off values P_min. The power-law fits to these relations are shown in figure <ref>. The fits confirm the theoretical result, eq.(<ref>).
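A quick numerical check of the truncation bound can be carried out along the following lines; the random test matrix is an arbitrary stand-in for the finite-difference operators above, rescaled so that ||𝐌|| = 0.5.

```python
import numpy as np

def neumann_inverse(M: np.ndarray, P_min: int) -> np.ndarray:
    """Truncated Neumann series sum_{p=0}^{P_min-1} M^p ~ (I - M)^{-1}."""
    S = np.zeros_like(M)
    Mp = np.eye(M.shape[0])
    for _ in range(P_min):
        S += Mp
        Mp = Mp @ M
    return S

rng = np.random.default_rng(0)
M = rng.standard_normal((16, 16))
M *= 0.5 / np.linalg.norm(M, 2)            # rescale so that ||M|| = 0.5 < 1
exact = np.linalg.inv(np.eye(16) - M)
for P in (4, 8, 16):
    eps_N = np.linalg.norm(exact - neumann_inverse(M, P), 2)
    print(P, eps_N, 0.5**P)                # eps_N decays like ||M||^P_min
```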
Proposition 1 A suitable combination of α and χ exists such that for both explicit and implicit time schemes, the corresponding finite difference matrix operators 𝐌 (or (𝐈-𝐌)) can be re-scaled by a positive real number δ≲𝒪(1/ϵ) such that Q = ϵδ||𝐌||≳ 1, while still ensuring ||δ𝐌|| = δ||𝐌||<1 and α<α_cfl for implicit and explicit schemes, respectively.
In Proposition 1, the effect of such a scaling in a quantum algorithm is that the resulting solutions simply scale similarly. These can be easily rescaled back classically, to obtain the original scale solution.
A few clarifications at this point motivate this scaling. To accurately approximate the matrix through a linear combination of unitaries, one requires small ϵ. However, such small ϵ would require a large N_s to extract the solution. To ameliorate this, we use the Richardson extrapolation (see Sections <ref> & <ref>) to produce accurate solutions even with larger ϵ values, as discussed further in Lemma 2. The method essentially offers a deferred approach to the limit ϵ→ 0. It is also necessary to distinguish the solution from the noise on real quantum devices. The error from a simple second-order extrapolation would generally be of the order 𝒪(ϵ^2). In Section <ref>, for instance, we present results from extrapolation with ϵ values from 0.5 up to 1.5, which demonstrate high accuracy. For an ϵ that gives reasonably accurate solutions, we also need to make sure that the corresponding N_s is a small number (see eq.(<ref>)). To do this we pick a δ to scale A such that δϵ||A||≳ 1. The δ should also ensure that ||δ A||<1 for the implicit case, while the stability criterion α<α_cfl is satisfied for the explicit case, consistent with Proposition 1.
Lemma 2 The time marching operators, as approximated by a second-order Richardson-extrapolated block encoding of the linear combination of unitaries for (a) TMCQC1,2, (b) TMCQC3,4 and (c) TMCQC5,6, admit an error ε = |𝐗 - 𝐘|_max of at most (a) 𝒪(τϵ^4), (b) 𝒪(τ P^3_minϵ^4) and (c) 𝒪(P^2_minϵ^4), respectively.
Proof – The above follows directly from the error of the unitaries block-encoding, the error of Hamiltonian simulation, and the query complexity associated with implementing these steps in each TMCQC method. Without loss of generality, we can set the error due to the unitary decomposition ε_LCU = ε_U = ε (the Ham-Sim error). We first note that the two- and four-unitary decompositions in eqs.(<ref>,<ref>) contribute an error of order ε = 𝒪(ϵ^2). Further applying the second-order Richardson extrapolation approximates the solution with an error 𝒪(ϵ^4). The overall error for each case is computed as follows:
(a) TMCQC1,2 – This method involves applying the unitaries encoding τ times iteratively for τ time steps. Therefore the cumulative error can be easily seen to be 𝒪(τϵ^4).
(b) TMCQC3,4 – In this case the LCU is applied τ P^3_min times, yielding an overall error of 𝒪(τ P^3_minϵ^4).
(c) TMCQC5,6 – For these one-shot methods, the final matrix needs to be inverted and approximated as a truncated Neumann series with P_min terms. Every p-th term in the series, given by 𝐌^p, admits an error of pϵ^4. This error is then summed up to P_min terms, giving an overall error of 𝒪(P^2_minϵ^4).
From the above and eq.(<ref>), we can connect the overall block-encoding error with the truncated Neumann series error as follows for each case:
(a) ε = 𝒪(τϵ^4) – (TMCQC1,2)
(b) ε = 𝒪(τϵ^4⌈( log(1/ε_N)/log(1/(κ-1)))⌉^3) – (TMCQC3,4)
(c) ε = 𝒪(ϵ^4⌈( log(1/ε_N)/log(1/(κ-1)))⌉^2) – (TMCQC5,6)
We therefore note that one can always choose an appropriately small ϵ such that ε≤ 0.5, which is used as the premise for Proposition 2 (below) to show that, when these approximated operators act on the quantum states, the corresponding error in the solution is also bounded similarly. Further, given the ϵ^4 factor, ϵ need not be extremely small. In other words, even if one chooses a relatively large ϵ to ensure that the effect of noise does not dominate the solution, ε < 0.5 can still be satisfied.
Proposition 2 (see Proposition 9 of <cit.>)
Given a Hermitian operator 𝐉 such that ||𝐉^-1||≤ 1 and an approximation 𝐉̅ of 𝐉 such that ||𝐉-𝐉̅||≤ε < 0.5, the corresponding solution states |𝐮⟩ = 𝐉| b⟩/||𝐉| b⟩|| and |𝐮̅⟩ = 𝐉̅| b⟩/||𝐉̅| b⟩|| satisfy || |𝐮⟩-|𝐮̅⟩ ||≤ 4ε.
|
http://arxiv.org/abs/2405.09892v1 | 20240516081619 | Balancing Similarity and Complementarity for Federated Learning | [ "Kunda Yan", "Sen Cui", "Abudukelimu Wuerkaixi", "Jingfeng Zhang", "Bo Han", "Gang Niu", "Masashi Sugiyama", "Changshui Zhang" ] | cs.LG | [ "cs.LG", "cs.DC" ] |
Balancing Similarity and Complementarity for Federated Learning

Kunda Yan^1*, Sen Cui^1*, Abudukelimu Wuerkaixi^1, Jingfeng Zhang^2,3, Bo Han^4,3, Gang Niu^3, Masashi Sugiyama^3,5†, Changshui Zhang^1†

^1 Institute for Artificial Intelligence, Tsinghua University (THUAI), Beijing National Research Center for Information Science and Technology (BNRist), Department of Automation, Tsinghua University, Beijing, P.R. China
^2 The University of Auckland
^3 RIKEN
^4 Hong Kong Baptist University
^5 The University of Tokyo

* Equal contribution. † Correspondence: Masashi Sugiyama (sugi@k.u-tokyo.ac.jp), Changshui Zhang (zcs@mail.tsinghua.edu.cn).
In mobile and IoT systems, Federated Learning (FL) is increasingly important for effectively using data while maintaining user privacy. One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data, arising from numerous clients and diverse data sources. This requires strategic cooperation, often with clients having similar characteristics. However, we are interested in a fundamental question: does achieving optimal cooperation necessarily entail cooperating with the most similar clients? Typically, significant model performance improvements are often realized not by partnering with the most similar models, but through leveraging complementary data. Our theoretical and empirical analyses suggest that optimal cooperation is achieved by enhancing complementarity in feature distribution while restricting the disparity in the correlation between features and targets. Accordingly, we introduce a novel framework, FedSaC, which balances similarity and complementarity in FL cooperation. Our framework aims to approximate an optimal cooperation network for each client by optimizing a weighted sum of model similarity and feature complementarity. The strength of FedSaC lies in its adaptability to various levels of data heterogeneity and multimodal scenarios. Our comprehensive unimodal and multimodal experiments demonstrate that FedSaC markedly surpasses other state-of-the-art FL methods.
§ INTRODUCTION
Federated Learning (FL) <cit.>, emerging as a pivotal paradigm in machine learning, is increasingly acclaimed for facilitating collaborative training across diverse clients while ensuring data confidentiality. However, FL still encounters several challenges, chiefly statistical heterogeneity: the occurrence of non-i.i.d. data across diverse local clients, as explored in prior research <cit.>.
In real-world scenarios with data from heterogeneous user bases, models often face performance decline due to local data distribution variances <cit.>.
In the context of multimodal learning, statistical heterogeneity is notably pronounced <cit.>. Variations in dimensionality, quality, and reliability among diverse data sources exacerbate heterogeneity within each client's modalities and magnify distribution discrepancies between clients. Such significant heterogeneity complicates achieving consistent and efficient learning in the FL framework <cit.>.
In response to this challenge, a promising direction involves the identification of optimal collaborators predicated on model similarity metrics <cit.>. For example, IFCA <cit.> clusters cooperative clients based on the similarity of their model parameters, whereas CFL <cit.> employs gradient similarity for the same purpose. pFedGraph <cit.> constructs a cooperation graph guided by an intuitive notion that clients with greater similarity should collaborate more intensively.
These methods collectively emphasize the importance of model similarity in strategic collaboration.
However, we raise a fundamental question: does achieving optimal cooperation necessarily entail cooperating with the most similar clients? Theoretically, similarity-oriented collaboration is conservative, and could potentially result in unproductive cooperation. For example, for two completely identical clients or models, cooperation between them would yield no information gain for either party, despite their maximal similarity.
Interestingly, the fundamental precondition for model enhancement through collaboration is complementarity, not similarity.
Inspired by this intuition, we design experiments to explore the underlying mechanism. As an illustration in FL, we exemplify cooperation between two clients through model parameter aggregation. During the cooperation, we incrementally enhance the disparity in their data distributions to promote complementarity. Our investigation explores the changes in average accuracy and model similarity as data complementarity increases. As depicted in Figure <ref>, cooperation between the two clients with the highest model similarity does not yield the maximum gains, while cooperation between clients whose data exhibit moderate complementarity is more advantageous, even if their models are not the most similar. Additionally, excessive data complementarity might indicate significant discrepancies in data distributions, rendering the cooperation less effective. The experimental results demonstrate the indispensability of complementarity in FL cooperation. Therefore, an intriguing question emerges: how can we deduce the cooperation gain network among clients by simultaneously considering similarity and complementarity, thereby facilitating more effective cooperative model learning?
We present an answer to this question grounded in a thorough analysis of statistical heterogeneity. Briefly, suppose we use p_i(x, y) = p_i(x) p_i(y | x) to denote the joint distribution of the feature x and label y in the i^th client. Holding one of the factors fixed, it is observed that a varied p(y|x) signals the presence of a concept shift among clients. A substantial concept shift can detrimentally affect model learning. On the other hand, the limited nature of data within each client makes it challenging to precisely characterize the true local distribution. Hence, a varied p(x), which indicates the presence of a covariate shift, could be beneficial, potentially providing more information gain. The experiments in Figure <ref> also show that a moderate covariate shift can introduce complementarity in model learning, leading to enhanced performance. Consequently, we argue that allowing moderate variations in the marginal distribution p(x), while ensuring consistency in the conditional distribution p(y|x), presents a more effective basis for cooperation than the singular model similarity metrics used in previous research.
Building on the above analysis, we propose a novel cooperation framework balancing Similarity and Complementarity, named FedSaC. Specifically, we introduce a cooperation network in which each node signifies a client and the edges reflect cooperation strength. This network is dynamically optimized, balancing model similarity with feature complementarity. The edge weights encode this balance, ensuring that clients not only collaborate with similar models but also leverage complementary feature insights. We apply the cooperation network within an FL framework, dividing it into server-side and client-side processes, thereby achieving personalized interactive cooperation under privacy-protection constraints. Leveraging this refined approach, FedSaC adeptly accommodates various levels of data heterogeneity and multimodal scenarios, and effectively identifies the optimal collaborators for each client.
Our experiments validate the efficacy of FedSaC, demonstrating its ability to consider both similarity and complementarity in cooperation while maintaining a balance between them. Thanks to this property, FedSaC outperforms 12 unimodal and 4 multimodal baselines across various benchmark datasets. Consequently, we conclude that complementarity is indeed beneficial in FL cooperation, rather than solely focusing on similarity.
We summarize our contributions as follows:
* We challenge a widely accepted notion that model similarity can be a robust metric for determining the potential benefits of cooperative model learning. We argue that achieving optimal cooperation necessitates a dual consideration of similarity and complementarity.
* We propose a novel collaboration framework, FedSaC, which infers the cooperation network by optimizing a constrained objective that quantifies a balance of similarity and complementarity between local clients.
* We demonstrate through extensive experiments that FedSaC exhibits superior performance in addressing data heterogeneity in FL, surpassing other state-of-the-art FL methods in both unimodal and multimodal scenarios. Code is accessible at <https://anonymous.4open.science/r/FedSaC-CE22>
§ RELATED WORK
§.§ Federated Learning and Statistical Heterogeneity
Federated learning <cit.> has become a key focus in the machine learning field owing to its practical applications, but it also presents several challenges, including communication efficiency <cit.>, privacy concerns <cit.>, and statistical heterogeneity <cit.>, which have been the topic of multiple research efforts <cit.>. Recently, a wealth of work has been proposed to handle statistical heterogeneity. For example, <cit.> seek a balanced model performance distribution by maximizing the model performance on any arbitrary target distribution. <cit.> develop MOON, which corrects local training by maximizing the similarity between local and global models. Some clustering-based FL methods <cit.> also attempt to utilize model similarity to cluster similar clients in order to mitigate the impact of statistical heterogeneity. A fundamental question thus arises: does a high degree of model similarity invariably lead to more effective collaboration?
§.§ Personalized Federated Learning
A global model (e.g., FedAvg <cit.>) could harm certain clients when there are severe distribution discrepancies <cit.>, and this stimulates the study of personalized federated learning <cit.>. One line of work focuses on a better balance between global and local training. For example, several studies <cit.> propose to stabilize local training by regulating the deviation from the global model over the parameter space. Another line of research aims to achieve more fine-grained cooperation via collaborative learning with similar clients. For example, <cit.> propose to cluster the collaborative clients according to their model parameter similarity and learn a personalized model for each cluster. <cit.> specify whom each client should collaborate with, and at what intensity, according to model similarity. While the current trend in personalized federated learning heavily relies on similarity metrics, we suggest that a balanced focus on both similarity and complementarity can more accurately optimize collaboration benefits.
§.§ Federated Multimodal Learning
Considering the diverse data modalities in real life, a few studies investigate the task of federated multimodal learning, i.e., collaboratively learning models on distributed sources containing multimodal data <cit.>. In particular, Xiong et al. propose a co-attention mechanism <cit.> to fuse different modalities. <cit.> design a regularization technique that restricts the global-local discrepancy via contrastive learning. <cit.> address the challenge of multimodal clients with unlabeled local data using a semi-supervised framework. Our approach optimizes a weighted sum of model similarity and feature complementarity for automatic weight allocation to clients. Given the higher statistical heterogeneity of multimodal data compared to unimodal data, global models as currently developed struggle with conflicting client dependencies. Therefore, we focus on developing personalized models for multimodal tasks to address data heterogeneity effectively.
§ PROBLEM SETUP
The problem to be solved in this paper is formally defined in this section. Specifically, we introduce the objective of federated learning, and through analyzing the statistical heterogeneity, demonstrate the feasibility and significance of balancing similarity and complementarity in FL.
§.§ Notations
Suppose there are N clients in a federated network, each client owns a private dataset D^i with n^i data samples, where i = 1,…,N. We define the relative size of each dataset D^i as p^i = n^i/∑_jn^j. The dataset D^i = {X^i, Y^i} consists of the input space X^i and output space Y^i. A data point is denoted by {x, y}, with x signifying either a unimodal or a multimodal feature. The input space and the output space are shared across all clients.
Federated Learning. In FL scenario, each client collaboratively refines a predictive model using local data and collective knowledge to optimally predict label y.
FedAvg <cit.>, as an exemplar method introduced by McMahan et al., learns a global model θ^g for all clients by minimizing the empirical risk over the samples from all clients, i.e.,
min _θ^g ∈Θ ∑_i=1^N p^i ℒ_i(θ^g; D^i),
where Θ is the hypothesis space and ℒ_i denotes the loss objective of each client. From Eq. <ref>, FedAvg presumes that data from different clients are i.i.d. samples from a shared joint distribution p(x, y), indicating statistical homogeneity across clients.
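For concreteness, the following sketch illustrates this size-weighted aggregation in Python; the PyTorch state-dict representation and the function name are our assumptions for illustration, not part of any released FedAvg implementation.

import torch

def fedavg_aggregate(client_states, client_sizes):
    """Average client parameter dicts weighted by relative dataset sizes p^i."""
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]  # p^i = n^i / sum_j n^j
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(w * s[key].float()
                                for w, s in zip(weights, client_states))
    return global_state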
§.§ Statistical Heterogeneity
In practical scenarios, the i.i.d. assumption underlying FedAvg is largely unrealistic.
There could be noticeable distinctive traits in local datasets across different clients stemming from diverse environments and contexts in which clients gather data <cit.>. Existing research reveals that such statistical heterogeneity may result in under-performance of global models <cit.>.
In the given context, the concept of personalized federated learning is introduced as a potential solution to mitigate the statistical heterogeneity issues by facilitating selective cooperation <cit.>. Existing works assume that clients derive more benefit from collaborating with peers who possess similar characteristics, thereby implying a diminished level of cooperation where dissimilarities exist. This allows each client to utilize information more akin to their local distribution. However, we pose a fundamental question: is cooperation with similar peers truly optimal?
We endeavor to delve deeply into statistical heterogeneity to provide an unexpected answer. Suppose we use p(x, y) to denote the joint distribution of features and labels, the nature of statistical heterogeneity lies in the disparate joint distributions across various clients, i.e., p(x^k_1, y^k_1) ≠ p(x^k_2, y^k_2), where k_1 ≠ k_2. The joint distribution p(x, y) can be decomposed as p(x, y) = p(x)p(y|x), thus allowing statistical heterogeneity to be represented as
p(x^k_1)p(y^k_1 | x^k_1) ≠ p(x^k_2)p(y^k_2 | x^k_2),
where k_1 ≠ k_2. Within this formulation, we recall the definitions of two distribution shifts.
(1) Covariate shift <cit.>: The distribution of input features p(x) exhibits disparities among different clients, while the conditional distribution p(y|x) is shared.
(2) Concept shift <cit.>: The relationship between input features and output labels p(y|x) changes among different clients, even if the distribution of input features p(x) remains constant.
§.§ Optimal Cooperation
To effectively mitigate the challenges of statistical heterogeneity in federated learning, previous research has predominantly focused on employing a similarity metric to facilitate client cooperation. This approach, as highlighted in studies such as <cit.>, emphasizes maximizing similarity to address concept shift, which is indeed a crucial aspect of aligning learning models across diverse clients.
Maximizing similarity is justifiable for addressing concept shift. Specifically, when two clients exhibit similar conditional distributions p(y|x), it signifies a shared correlation or mapping between input features and output labels. Such a correlation is instrumental in fostering a more coherent alignment and synthesis of the learned models or knowledge, thereby enhancing the overall effectiveness of the federated learning process. However, in the case of covariate shift, where there is a variation in input features across clients, this strategy may not yield the same level of effectiveness.
Moderate covariate shift is beneficial for cooperation in federated learning, as exhibited in Figure <ref>. In the context of federated learning, local clients, limited by their specific data subsets, often present an incomplete representation of the broader data distribution. Therefore, clients are inclined to engage in cooperation to surmount the limitations posed by their individual data paucity.
When there is minimal covariate shift between two clients, overlapping input features can limit the model's capacity to absorb diverse information. This constraint impedes the detection of underlying data patterns, hindering collaborative efforts. Thus, we suggest that a client should collaborate with peers whose input features exhibit moderate variations. Such strategic collaborations harness complementary data, enhancing the model's predictive accuracy and generalization potential.
Given the aforementioned analysis, it becomes apparent that reliance on similarity metrics as a collaborative criterion is suboptimal in the presence of covariate shift. It would be more judicious to permit moderate variations in p(x) while maintaining similarity in the conditional distribution p(y|x).
§ OUR METHOD: BALANCING SIMILARITY AND COMPLEMENTARITY
In this section, we detail the construction of a cooperation network, designed to identify optimal collaborators for each client within FL framework. Section <ref> introduces our cooperation network, and Section <ref> discusses the global optimization process, balancing similarity and complementarity. In Section <ref>, we provide specific methods for computing similarity and complementarity under privacy protection, and further decompose the global optimization into server-side and client-side to fit FL architecture.
§.§ Cooperation Network
Contrary to prior personalized FL works, we argue that merely targeting similar clients is not always optimal, as diverse feature distributions can yield more insights for robust generalization. Hence, our goal of cooperation among federated clients is to achieve a balance: seeking data with a similar conditional distribution while ensuring a complementary marginal distribution.
To measure this balance, we introduce two metrics: similarity, targeting minimal concept shifts within similar conditional distributions, and complementarity, addressing moderate covariate shifts for diverse marginal distributions. Informed by this rationale as discussed in Section <ref>, we advocate that clients should collaborate with others who share similar p(y|x) but different p(x). Such a balanced collaboration ensures that clients not only access shared knowledge but also harness complementary insights from different data angles, ultimately boosting learning outcomes.
From a global perspective, the cooperation strength of a client is influenced by other clients. Inspired by <cit.>, we construct a cooperation network shown in Figure <ref> which balances similarity and complementarity among clients while collaborating. This network comprises N nodes, each representing a client. The adjacency matrix of the network is denoted as W∈ℝ^N× N, where the element W_ij indicates the cooperation strength between the i^th and j^th clients in federated learning. We establish a global objective encompassing both similarity and complementarity to determine the optimal weights in the adjacency matrix, which identifies the optimal collaborators for each client.
§.§ Optimization with Similarity and Complementarity
In practice, client data distributions are inaccessible. To bypass the privacy constraint, we utilize local models as surrogates for estimating data distributions.
We treat each client's local data as samples drawn from its marginal distribution p(x).
For the conditional distribution p(y|x), intuitively, the personalized model parameters, after local training, can capture the mapping from the marginal distribution p(x) to the label distribution p(y).
Hence, we consider the local-trained parameters as approximate surrogates for the conditional distribution p(y|x).
We present a global optimization equation, as articulated in Equation <ref>, which aims to refine the local personalized model parameters {θ^i} for each client and the network adjacency matrix W. The term ℒ_i(∑_j=1^NW_ijθ^j; D^i) denotes the empirical risk on the local dataset of the i^th client, following the weighted aggregation
of model parameters across multiple clients. 𝒞 denotes the complementarity of marginal distributions between two clients, while
𝒮 denotes the similarity in model parameters between them. The hyperparameters α and β are introduced to adjust the prominence of complementarity and similarity, ensuring a balanced emphasis on both during optimization.
min_{θ^i}, W ∑_i=1^N p^i(ℒ_i(∑_j=1^NW_ijθ^j; D^i) + α∑_j=1^N𝒞(W_ij; D^i; D^j) - β∑_j=1^N𝒮(W_ij; θ^i; θ^j))
s.t. ∑_j=1^NW_ij=1,∀i; W_ij≥ 0,∀i, j
The optimization equation minimizes the empirical risk on local data while balancing the similarity and complementarity among clients. The two constraints ensure the normalization and non-negativity of each client's cooperation weight. Compared to previous methods, our cooperation network approach flexibly determines the cooperation strength among clients. By considering variations in marginal distributions across different clients, it more effectively captures distributional differences, leading to enhanced model performance.
§.§ FedSaC: Balancing Similarity and Complementarity
In practical FL architectures, each client is restricted to its local dataset and model. Model aggregation, as well as the computation of complementarity and similarity, require coordination with a central server. Given this structure, we initially introduce the metric of similarity and complementarity under privacy constraints. Subsequently, we partition the global optimization equation into two stages, optimizing separately at the server side and the client side.
The aforementioned process is illustrated in Figure <ref>.
§.§.§ The Metric of Similarity and Complementarity
Similarity Metric. Following conventional practices, we use model parameters as proxies and adopt the cosine distance between the local models of the i^th and j^th clients as our similarity metric, denoted as,
𝒮(W_ij; θ^i; θ^j) = W_ij (θ^i·θ^j)/(‖θ^i‖·‖θ^j‖).
Complementarity Metric. In light of the privacy principles inherent to FL, we use an indirect method to capture data complementarity. For a given local dataset D^i at client i, the local model θ^i extracts the feature matrix X^i. Applying singular value decomposition (SVD) on X^i yields:
X^i = U^iΣ^i(V^i)^T,
where U^i contains the singular vectors of X^i, capturing the direction in the feature space. For our purposes, we consider the first k columns of U^i, denoted U^i_k, as the representative subspace for client i.
To gauge the complementarity between clients i and j, we utilize the principal angles between their respective subspaces. The l^th principal angle cosϕ_l between the two is given by:
cos ϕ_l = max_{u ∈ U^i_k, v ∈ U^j_k} u^T v,
where l=1,…, k. These angles offer a quantifiable measure of the complementarity between the two datasets. A small principal angle suggests a high similarity between the subspaces, while an angle close to π/2 implies that the subspaces are nearly orthogonal, indicating significant divergence in their feature spaces.
By averaging these angles, we obtain complementarity as:
𝒞(W_ij; D^i; D^j) = W_ij cos((1/k)·∑_l ϕ_l).
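Both metrics can be computed from local artifacts alone. The NumPy sketch below is our illustration of these definitions; storing the SVD basis in the d-dimensional feature space (so that subspaces from clients with different sample counts remain comparable) and all function names are implementation assumptions. Note that 𝒮 and 𝒞 each carry the multiplicative weight W_ij, so the functions below return only the per-pair coefficients that enter the optimization.

import numpy as np

def similarity(theta_i, theta_j):
    """Cosine similarity between flattened model parameter vectors."""
    return float(theta_i @ theta_j /
                 (np.linalg.norm(theta_i) * np.linalg.norm(theta_j)))

def subspace(X, k=3):
    """Top-k feature-space directions of a client's feature matrix X (m x d).
    SVD of X^T puts the left singular vectors in R^d, giving all clients a
    common ambient space regardless of their sample count m."""
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    return U[:, :k]  # d x k orthonormal basis

def complementarity(Uk_i, Uk_j):
    """Cosine of the mean principal angle between two k-dim subspaces."""
    sigma = np.linalg.svd(Uk_i.T @ Uk_j, compute_uv=False)  # cos(phi_l) values
    phi = np.arccos(np.clip(sigma, -1.0, 1.0))
    return float(np.cos(phi.mean()))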
§.§.§ FedSaC in FL architecture
Server Side.
On the server side, we compute the similarity and complementarity based on the model parameters {θ^i} and the subspace representations {U^i_k} received from the local clients. Subsequently, we derive the adjacency matrix W through the optimization equations. In the FL scenario, as the empirical loss of local clients is elusive, we utilize the relative dataset size p^i as a surrogate measure. Clients with larger datasets are considered more reliable collaborators and should thus be assigned greater cooperative weight. Given these considerations, the optimization equation on the server side is defined as:
min_W_i* ∑_j=1^N((W_ij-p^j)^2 + α𝒞(W_ij;D^i; D^j) - β𝒮(W_ij;θ^i; θ^j))
s.t. ∑_j=1^NW_ij=1,∀i; W_ij≥ 0,∀i, j
Using the cooperation network W, we derive the aggregated model θ̃^i = ∑_jW_ijθ^j for each client.
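Because each row W_i* enters the server-side objective independently, completing the square shows that the problem reduces to a Euclidean projection onto the probability simplex around the center c_j = p^j - (α𝒞_ij - β𝒮_ij)/2, where 𝒞_ij and 𝒮_ij denote the per-pair metric coefficients. The sketch below reflects this rewriting, which is our own derivation rather than the released solver; function names are hypothetical.

import numpy as np

def project_simplex(c):
    """Euclidean projection of c onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(c)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(c) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(c - tau, 0.0)

def cooperation_row(p, C_i, S_i, alpha, beta):
    """Optimal weights W_i* given sizes p and metric rows C_i, S_i."""
    center = p - (alpha * C_i - beta * S_i) / 2.0
    return project_simplex(center)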
Client Side.
On each client side, our objective is to minimize the local empirical risk while preventing overfitting of the aggregated model on the local dataset. We replace the current local model with the aggregated model received from the server θ^i ←θ̃^i, and further refine this local model. For the i^th client, the optimization equation is defined as:
min_θ^i ℒ_i(θ^i; D^i) - λcos(θ^i, θ̃^i),
where cos(θ^i, θ̃^i) ensures that the locally optimized model does not deviate excessively from the aggregated model, and λ represents the regularization hyperparameter. The optimized model then serves as the current local model for participation in the subsequent optimization round.
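A minimal PyTorch sketch of this client-side objective follows; the cross-entropy task loss and the flattening of parameters into a single vector are our assumptions about the concrete implementation.

import torch
import torch.nn.functional as F

def client_loss(model, aggregated_params, x, y, lam=1.0):
    """Local empirical risk minus lam * cos(theta^i, tilde-theta^i)."""
    task_loss = F.cross_entropy(model(x), y)
    theta = torch.cat([p.reshape(-1) for p in model.parameters()])
    theta_tilde = torch.cat([p.reshape(-1) for p in aggregated_params]).detach()
    cos = F.cosine_similarity(theta, theta_tilde, dim=0)
    return task_loss - lam * cos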
§ EXPERIMENTS
§.§ Unimodal Experiments Setup
Datasets and Data Heterogeneity. Following the predominant experimental setup in personalized federated learning, we evaluate our proposed FedSaC on two image classification datasets: CIFAR-10 and CIFAR-100 <cit.>. For each dataset, we implement four partitions with different heterogeneity levels across K clients <cit.>. 1) Homogeneous partition, where each client is assigned data samples under a uniform probability schema. 2) Dirichlet partition <cit.>, where the allocation ratio of data samples from each category is instantiated from Dir_K(α). Notably, we define heterogeneity levels with α=0.1 (high) and α=0.5 (low). 3) Pathological partition, where each client is assigned data exclusively from 2 categories for the 10-class dataset and 20 categories for the 100-class dataset.
Baselines. We compare our FedSaC with 12 representative FL approaches, including: 1) Local: Local training without information sharing. 2) FedAvg <cit.> and 3) FedProx <cit.>: Popular FL methods where local updates are centrally aggregated. 4) CFL <cit.>: Clustered FL for client group learning. 5) pFedMe <cit.>: Using regularized loss functions to decouple local and global models. 6) Ditto <cit.>: Enhanced robustness and fairness by regularized optimization. 7) FedAMP <cit.>: Pairwise collaboration between similar clients in FL. 8) FedRep <cit.>: Shared data representation with local client heads. 9) pFedHN <cit.>: Hypernetworks generate unique client models in personalized FL. 10) FedRoD <cit.>: Decoupled framework balancing generic and personalized predictors. 11) kNN-Per <cit.>: Personalization via global embeddings and local kNN interpolation. 12) pFedGraph <cit.>: Adaptive collaboration via a learned graph.
§.§ Unimodal Experimental Results
Our experimental evaluations, conducted across various levels of heterogeneity on the CIFAR-10 and CIFAR-100 datasets, conclusively demonstrate the superior performance of our proposed model, FedSaC. Our analysis of the results presented in Table <ref> leads to two primary insights:
Superiority Across Heterogeneity Levels. FedSaC consistently surpasses baseline models in various heterogeneity settings, highlighting its superior performance. Experiments show that our personalized method notably outperforms conventional techniques like FedAvg, especially in situations with statistical heterogeneity. Furthermore, FedSaC demonstrates significant or comparable enhancements over pFedGraph, a method based on similarity metrics. This comparison emphasizes the efficacy of our balanced approach to similarity and complementarity, a crucial aspect in federated learning for handling diverse data distributions.
Enhanced Performance under Strong Complementarity. FedSaC excels in scenarios with significant complementarity, such as those involving Dirichlet partitioning. These settings often feature imbalanced data distributions, posing challenges for local models with limited data categories. FedSaC's effective balance of similarity and complementarity addresses these challenges, enhancing data representation. In Dirichlet partitioning, it consistently surpasses pFedGraph, which relies on similarity metrics, improving accuracy by 3% to 8%. This alignment of empirical results with our theoretical framework confirms the effectiveness and versatility of FedSaC in various federated learning contexts, especially with high data heterogeneity.
§.§ Multimodal Experiments Setup
Datasets and Baselines. In our multimodal experiments, we employ the CUB200-2011 <cit.> multimodal dataset, which encompasses two modalities—images and text—to undertake the task of classifying 200 bird species.
For multimodal baselines, we not only compare with local training but also extend unimodal methods FedAvg and pFedGraph to the multimodal context, executing tasks separately within each modality. Additionally, we incorporate the multimodal federated learning method FedIoT <cit.> for comparison. This method conducts unsupervised training on local clients and supervised aggregation on the server.
Multimodal Setup. We extend our unimodal method to multimodal experimentation. Unlike unimodal scenarios, the multimodal approach leverages inter-client complementarity to enhance personalized model performance and utilizes inter-modality complementarity to contribute additional information to the model. Therefore, we introduce a strategy for fusing complementary multimodal information. The specific setup details are presented in Appendix <ref>.
§.§ Multimodal Experimental Results
The multimodal experimental results, as depicted in Table <ref>, demonstrate that our method surpasses all baselines. It significantly outperforms FedIoT, a method tailored for multimodal federated learning, which validates the efficacy of FedSaC in handling complex multimodal data. Particularly in scenarios modeled by Dirichlet distributions, our method demonstrates a distinct advantage over other baselines, reflecting a trend consistent with our unimodal experiment outcomes. Notably, we observe a more pronounced improvement in the visual modality post-cooperation, suggesting that visual data may provide richer information that enhances the robustness of the method's cooperative framework.
§.§ Visualization
In our visualization, Figure <ref> presents core matrices and cooperation networks. Figure <ref> shows local data's cosine similarity, while Figures <ref> and <ref> display the model similarity and feature complementarity matrices, respectively. The comparison of Figures <ref> and <ref> demonstrates a complementary pattern, affirming our metric's effectiveness in capturing local data relationships under privacy constraints. Figures <ref> to <ref> depict cooperation networks under three collaboration scenarios: focusing on similarity, complementarity, and a balance of both. It is observed that in the similarity-based network, clients predominantly maintain their own models, hindering cooperative effectiveness and information gain. In the complementarity network, clients almost completely abandon their initial states, which is disadvantageous for training. The balanced approach allows for probabilistic exploration while filtering out clients with excessively high heterogeneity, as indicated by the darker areas that also show inconsistency in the local data matrix.
The visualization underscores our method's role in boosting FL collaboration efficiency.
§.§ More Discussion
Large-Scale Clients. In our experiments with a smaller scale of client data, we enhanced cooperation efficiency in large-scale client collaborations (e.g., with 50 or 100 clients) by randomly selecting a subset of clients in each iteration. The feasibility of this approach is demonstrated in Appendix <ref>.
Communication Overhead. Despite the additional steps introduced for optimal cooperation, Appendix <ref> analyzes and confirms that the extra computational cost is minimal compared to local training, and thus acceptable.
§ CONCLUSION
In this study, we investigate the complex dynamics of federated learning, mitigating the significant challenge of statistical heterogeneity. We shift the focus from model similarity to a balance between similarity and feature complementarity. Our framework, FedSaC, effectively constructs a cooperation network by optimizing this balance. Extensive experiments show FedSaC's superiority over current FL methods in various scenarios. This research challenges conventional approaches and contributes to developing more robust learning models for complex federated settings.
§ IMPACT STATEMENT
This study presents the FedSaC framework, offering a strategic approach to address statistical heterogeneity in Federated Learning. Academically, it introduces a novel perspective to FL, encouraging future research to explore the interplay of complementarity and similarity in model cooperation. Practically, this framework can be flexibly applied across various industries, facilitating more efficient and privacy-preserving data analysis models. Ethically, FedSaC aligns with the increasing demand for ethical data use and user privacy in technological advancements. Future work will further investigate the significance of balancing similarity and complementarity in multimodal architectures.
§ DISCUSSIONS ABOUT FEDSAC
§.§ FedSaC Optimization
Global Optimization. In our research, the overarching optimization equation is presented as Equation 3, which is fundamentally grounded in the optimization objective of FedAvg <cit.>. The equation is expressed as follows:
min_θ^g ∑_i=1^N p^i ℒ_i(θ^g; D^i),
This equation is restructured to align with our targeted optimization goals. The global model θ_g can be represented as a weighted aggregation of local models, with the weights corresponding to the relative sizes of each client's dataset. The revised formulation is presented thusly:
min_{θ^i}, W ∑_i=1^Np^i ℒ_i(∑_j=1^NW_ijθ^j; D^i)
s.t. W_ij=p^j, ∀i, j; ∑_j=1^NW_ij=1,∀i; W_ij≥ 0,∀i, j
Further to this, we introduce two additional regularization terms to balance similarity and complementarity. The 𝒞 objective reduces the cooperation intensity between clients with similar datasets, while the 𝒮 objective increases it for clients with similar model parameters.
Server-Client Optimization. Within the federated learning framework, Equation 3 poses practical challenges, as clients should not directly receive models from other clients. Consequently, transferring models to a centralized server becomes essential. At the server side, we aim to estimate the first term of Equation 3, namely the empirical loss of local models. In line with our optimization objectives, which are aligned with FedAvg, we adopt a FedAvg-inspired approach. Here, we approximate the empirical loss using the relative sizes of the datasets, operating under the premise that clients with larger local datasets are more suitable for collaboration. This concept has been validated for its rationality <cit.> in federated learning scenarios.
§.§ The Metric of Similarity and Complementarity
Similarity Metric. In the realm of federated learning, utilizing model parameters to gauge client similarity is a prevalent approach <cit.>. Aligning with the approach in <cit.>, we utilize cosine distance of model parameters for similarity assessment.
Complementarity Metric. Considering the privacy concerns in federated learning, direct computation of distances using local datasets is not feasible. Instead, we draw upon the principle angle method, a technique that measures distances between subspaces <cit.>. This adapted approach relies on limited, non-sensitive information to determine the degree of similarity between clients.
The principal angle method offers a geometric perspective for measuring the distance between subspaces. Specifically, when dealing with two subspaces, V and W, the method determines the angles θ_1 ≤ θ_2 ≤ ⋯ ≤ θ_k between them. Here, k represents the number of dimensions in the smaller of the two subspaces. The calculation of the i^th principal angle is as described in Equation 6. This implies that the cosine of the smallest principal angle corresponds to the largest singular value of the matrix product V^TW.
cos θ_i = ⟨x_i, y_i⟩/(‖x_i‖·‖y_i‖) = max_{x_i ∈ V, y_i ∈ W} ⟨x_i, y_i⟩/(‖x_i‖·‖y_i‖).
The principal angle method provides a clear geometric perspective on how similar or different two subspaces are. By measuring the angles between the subspaces, it offers a more intuitive understanding of their relationship.
In our method, we harness the principal angle method to effectively represent the local data distributions of clients as model output features. This is achieved through SVD, where we select the leading k principal component vectors to represent each client's data in a subspace. This approach maps varied client datasets to a common feature space and ensures privacy by using only a few principal components, which are insufficient to reconstruct the original data. The technique aligns data from different clients effectively while maintaining privacy in federated learning.
§ EXPERIMENTS AND IMPLEMENTATION DETAILS
§.§ Unimodal Implementation Details
Basic Setup. Adhering to the training setting presented in <cit.>, we partition the dataset across 10 local clients. Each client utilizes a simple CNN classifier consisting of 2 convolutional layers, 2 subsequent fully-connected layers, and a final classification layer. Notably, the representation dimension prior to the classification layer is set at 84, which is utilized for extracting the representative subspace to compute the complementarity. In the FL training phase, we execute 50 communication rounds. Each round consists of local training iterations that vary depending on the dataset: 200 iterations for CIFAR-10 and 400 iterations for CIFAR-100. Training employs the SGD optimizer with an initial learning rate of 0.01 and a batch size of 64.
Hyperparameter Setup. We have set key hyperparameters for optimal performance. We use the three leading singular vectors (k = 3) for our representative subspace. The regularization hyperparameter λ is set at 1. Additionally, the hyperparameters α and β control the degree of complementarity and similarity in our optimization equation. In experiments, we consider two scenarios based on client dataset characteristics. For datasets with complementarity, α=0.9 and β=1.4 balance similarity and complementarity for enhanced performance. In contrast, for datasets lacking complementarity, such as in the Pathological partition, we reduce complementarity by setting α=0.5 and β=1.6. The first setting is generally applied unless low complementarity among clients is known, in which case the second setting is used. To facilitate convergence, we use the initial settings for the first 70% of communication rounds, then set α=0 in the remaining rounds.
§.§ Multimodal Implementation Setup
In our setup, we distribute the CUB200-2011 dataset among clients, with an equal split between image and text modalities. Each client possesses distinct feature extraction networks and uniform classification networks to fulfill the classification task. Within the same modality, we employ the unimodal method to facilitate cooperation among clients. For cross-modality cooperation, structural differences in feature extraction layers necessitate restricting collaborative efforts to the classification layer. Given the inherent complementarity between different modalities, we focus on the similarity within the classification layers during cooperation. The cooperation weights for the classification layers are derived by excluding the complementarity term 𝒞 from the optimization <ref>. These weights are used to aggregate the classification layers at the server side, enabling the effective fusion of cross-modal information.
§.§ Multimodal Implementation Details
We allocate the CUB200-2011 dataset across 8 clients, with 4 handling image data and the others processing text data. For image modality clients, we employ a CNN architecture with four convolutional layers and a single classification layer. Text clients, on the other hand, utilize a TextCNN network consisting of five convolutional layers and a classification layer. The representation dimension is set at 256. The training involves 30 communication rounds, each comprising 200 iterations. We employ the Adam optimizer with an initial learning rate of 0.001. Throughout the training, we adopt a balanced setting for similarity and complementarity, with α=0.7 and β=1.2, keeping the rest of the setup consistent with the unimodal .
§.§ DataSets
In our experiments, CIFAR-10, CIFAR-100 <cit.> and CUB200-2011 <cit.> are all public dataset.
CIFAR-10 and CIFAR-100. The CIFAR-10 and CIFAR-100 datasets are key benchmarks in machine learning, each containing 60,000 32x32 color images. CIFAR-10 is categorized into 10 classes with 6,000 images per class, suitable for basic image recognition. CIFAR-100, offering a finer classification challenge, divides the same number of images across 100 classes, with 600 images per class. Both datasets, split into 50,000 training and 10,000 test images, are extensively used for evaluating image classification algorithms.
CUB200-2011. The CUB200-2011 dataset is specifically tailored for fine-grained visual categorization tasks, focusing on bird species identification. It consists of 11,788 images of 200 bird species, with both training and testing sets. Each species comes with a set of images that offer varying poses and backgrounds, providing a comprehensive dataset for advanced image recognition tasks. CUB200-2011 is particularly useful for research in areas requiring detailed visual discrimination, such as in distinguishing between closely related species.
Heterogeneity Partition. In our study, we employ the CIFAR-10 dataset and select two clients to illustrate the level of heterogeneity under four distinct partitioning schemes, as shown in <ref>.
§.§ Baselines
FedAvg <cit.> streamlines the training of deep networks from decentralized data in federated learning. It enables multiple clients to collaboratively train a shared model while maintaining data privacy and reducing communication overhead. Suitable for scenarios where central data collection is impractical due to privacy concerns, like in IoT and healthcare applications.
FedProx <cit.> specifically tackles system and statistical heterogeneity in federated networks. It introduces a proximal term to the optimization objective, enhancing stability and accuracy in networks with devices of varying capabilities. This modification leads to more robust convergence and improved accuracy in heterogeneous settings.
CFL <cit.> is designed for large-scale peer-to-peer networks, optimizing federated learning by aggregating local model updates in a hierarchical manner. It ensures communication efficiency and data privacy through secure and authenticated encryption techniques. CFL stands out for its significant improvement in communication and computational efficiency, while robustly maintaining data integrity and privacy.
pFedMe <cit.> introduces a personalized federated learning algorithm using Moreau envelopes as clients' regularized loss functions, allowing for the decoupling of personalized model optimization from global model learning. pFedMe is effective in handling statistical diversity among clients, leading to state-of-the-art convergence rates and superior empirical performance compared to traditional FedAvg and Per-FedAvg algorithms.
Ditto <cit.> is a framework that enhances federated learning by simultaneously achieving fairness and robustness through personalization. It addresses the challenges of statistical heterogeneity in networks, using a simple yet scalable technique that improves accuracy, fairness, and robustness. Ditto is particularly effective against training-time data and model poisoning attacks and reduces performance disparities across devices.
FedAMP <cit.> This method employs federated attentive message passing to facilitate collaborations among clients with similar non-iid data, establishing convergence for both convex and non-convex models. FedAMP emphasizes pairwise collaborations between clients with similar data, overcoming the bottleneck of one global model trying to fit all clients in personalized cross-silo federated learning scenarios.
FedRep <cit.> utilizes a shared data representation across clients while allowing unique local heads for each client. This approach harnesses local updates concerning low-dimensional parameters, enabling efficient learning in heterogeneous data environments. By focusing on linear convergence and sample complexity, FedRep demonstrates improved performance over alternative personalized federated learning methods, especially in federated settings with non-iid data.
pFedHN <cit.> introduces a personalized federated learning approach using hypernetworks. This method trains a central hypernetwork to generate unique personal models for each client, effectively sharing parameters across clients. It excels in handling data disparities among clients, reducing communication costs, and generalizing better to new clients with varying distributions and computational resources.
FedRoD <cit.> simultaneously addresses generic and personalized learning objectives. It employs a two-loss, two-predictor system, decoupling the tasks of generic model training and personalized adaptation. The framework uses a class-balanced loss for the generic predictor and an empirical risk-based approach for the personalized predictor, facilitating robustness to non-identical class distributions and enabling zero-shot adaptation and effective fine-tuning for new clients.
kNN-Per <cit.> introduces local memorization using k-nearest neighbors in federated learning, enhancing the model's ability to personalize based on individual device data. This method stands out in its use of local data patterns to inform the federated learning process.
pFedGraph <cit.> proposes the construction of inferred collaboration graphs among clients in federated learning. It dynamically computes these graphs based on the volume of data and model similarity at each client. This method strategically identifies similar clients for cooperation, effectively mitigating issues arising from data heterogeneity.
FedIoT <cit.> proposes a multimodal federated learning framework for IoT data, utilizing autoencoders to process multimodal data from clients. It introduces a multimodal FedAvg algorithm to aggregate local models from diverse data sources, enhancing classification performance in semi-supervised scenarios with unimodal and multimodal clients.
§.§ Computing Resources
Part of the experiments is conducted on a local server with Ubuntu 16.04 system.
It has two physical CPU chips which are Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz with 20 cpu cores. The other experiments are conducted on a remote server. It has 8 GPUs which are GeForce RTX 3090.
§ PRIVACY DISCUSSION
Our exhibits similar data privacy preservation compared with baselines, as it does not share any private data of the clients. During communication, only model parameters are allowed to be shared. Similar to baselines, the sharing of model parameters is intended to maintain data privacy. The representative subspaces are derived from local data feature statistics generated by the model, a method that does not reveal any privacy details of the original dataset. Our approach is also compatible with protective strategies like differential privacy <cit.>. Specifically, for representative subspaces, we primarily rely on calculating their principal angles. Therefore, we could apply methods such as random cropping and adding minor noise to ensure that the original data cannot be reconstructed.
§ SUPPLEMENTARY EXPERIMENTS
§.§ Hyperparameters Experiments
The experimental analysis focused on evaluating the influence of hyperparameters, as illustrated in Figures <ref> and <ref>.
Hyperparameter α. In Figure <ref>, the similarity hyperparameter, denoted as β, is fixed at 1.4, while the complementarity hyperparameter, α, is varied from 0.6 to 1.2. The results indicate that in data partitions characterized by complementarity, a moderate increase in α enhances accuracy. However, in partitions with high heterogeneity, the influence of α on the outcomes exhibits fluctuations. Notably, the experimental results consistently outperform the baseline, irrespective of the variations in α.
Hyperparameter β. Figure <ref> presents the outcomes with α set at 0.9, examining the impact of changes in
β ranging from 0.2 to 1.6. It is observed that an optimal level of similarity substantially benefits the experimental results, which uniformly exceed the baseline performance.
Hyperparameter λ. We employed the CIFAR100 dataset to assess the impact of the hyperparameter λ, associated with regularization constraints in local training, as demonstrated in Table <ref>. The results indicate that setting λ to either 0.01 or 0.1 yields favorable outcomes with minimal fluctuation.
Hyperparameter k. The influence of the subspace dimensionality, represented by k, on the experimental outcomes was examined, as detailed in Table <ref>. The findings suggest that k=3 is an appropriate choice for obtaining the representative subspace.
§.§ Experiments in Large-Scale Client Cooperation
In our experiments, we primarily focused on scenarios with a limited number of clients, specifically 8-10 clients. In situations involving a large number of clients, while the computational time overhead may not significantly impact our performance – a topic we will delve into in the following section – the cooperation among numerous clients could affect the convergence and stability of the collaboration. Therefore, for cooperation with a large client base, we incorporated an additional process to control the number of collaborators.
Specifically, in large-scale client cooperation, we randomly select k clients (k=10 in application) for collaboration before each iteration. This approach ensures convergence while enhancing cooperation efficiency. Table <ref> presents the results in cooperation networks with 50 and 100 clients, incorporating this step. The results confirm the effectiveness of such collaborative efforts.
§ COMPUTATIONAL COST AND COMPLEXITY ANALYSIS
In comparison to classical federated learning methods, our approach incurs additional time overhead to provide informative feedback for client collaboration, aiming to obtain more suitable personalized models. The extra time expenditure primarily stems from three aspects: computing the similarity metric 𝒮, computing the complementarity metric 𝒞, and solving the optimization equation. We will analyze the computational complexity and demonstrate that this overhead is acceptable.
The computational complexity of the similarity measure 𝒮 is proportional to the model parameters, approximating the cost of one inference time. For the complementarity measure 𝒞, the feature matrix X is first inferred. In cases of large local sample volumes, random sampling can approximate the local data distribution for effective computation of complementarity. Assuming random sampling of m samples with feature dimension d, the resulting feature matrix X ∈ℝ^m × d is processed. The complexity of extracting the representative subspace using SVD is O(md^2). In practical computations, this step's overhead is almost negligible compared to the duration of model training.
The subsequent step involves solving the optimization equation. Notably, each row of the adjacency matrix W in Equation <ref> is independent, allowing for the independent computation of cooperation weights for each client with others. We simplify the optimization equation as follows:
min_W_i* ∑_j=1^N(W_ij^2 - 2p^jW_ij + (p^j)^2 + αW_ij cos((1/k)·∑_l ϕ_l) - βW_ij (θ^i·θ^j)/(‖θ^i‖·‖θ^j‖))
s.t. ∑_j=1^NW_ij=1,∀i; W_ij≥ 0,∀i, j
It can be deduced that:
min_W_i* ∑_j=1^N(W_ij^2 + (α cos((1/k)·∑_l ϕ_l) - β (θ^i·θ^j)/(‖θ^i‖·‖θ^j‖) - 2p^j)W_ij)
s.t. ∑_j=1^NW_ij=1,∀i; W_ij≥ 0,∀i, j
It is evident that the objective of the optimization equation is equivalent to ∑_j=1^N(W_ij^2 + ϕ_ijW_ij), where ϕ_ij collects the linear coefficients; this is a convex quadratic function. The feasible set, defined by the inequality constraints W_ij ≥ 0, ∀i, j, and the affine equality constraint ∑_j=1^NW_ij=1, ∀i, is a convex set. Therefore, this optimization problem is a convex optimization problem, solvable using convex optimization solvers. Such solvers can rapidly find the unique optimal solution.
We tested the runtime of each additional phase in our experiments on our platform and compared it with the training duration of a single client in one local training round, given ten clients, as shown in Table <ref>. The results indicate that the time required for the similarity metric and for solving the optimization equation is negligible compared to the local training duration. Although the complementarity metric phase, which involves an inference process, does take some time, it is still significantly less than the local training duration. Therefore, the additional cost of cooperation is acceptable. As a result, FedSaC does not introduce substantial additional computational and communication costs, making its computational overhead comparable to existing baselines.
§ CONVERGENCE ANALYSIS
The introduction of complementarity in our approach does not lead to convergence issues. As depicted in Figure <ref>, we illustrate the accuracy progression over communication rounds on the CIFAR-100 dataset under a Diri(low) partition. It is observed that the accuracy of the FedSaC method steadily rises and gradually converges. Unlike local training, which may lead to overfitting and a subsequent decline in accuracy due to excessive training, our method effectively circumvents the overfitting problem. In contrast to other baselines that converge prematurely and potentially get trapped in local optima, our approach consistently explores better solutions, achieving optimal performance before ultimately converging.
http://arxiv.org/abs/2405.09965v1 | 20240516102103 | Leveraging Large Language Models for Automated Web-Form-Test Generation: An Empirical Study | ["Tao Li", "Chenhui Cui", "Lei Ma", "Dave Towey", "Yujie Xie", "Rubing Huang"] | cs.SE | ["cs.SE"]
Leveraging Large Language Models for Automated Web-Form-Test Generation: An Empirical Study
Tao Li, Chenhui Cui, Lei Ma, Dave Towey, Yujie Xie, Rubing Huang
Tao Li, Chenhui Cui, Yujie Xie, and Rubing Huang are with the School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China. E-mail: {3220007015, 3230002105, 2230004390}@student.must.edu.mo, rbhuang@must.edu.mo.
Lei Ma is with The University of Tokyo, Tokyo 113-8654, Japan, and also with the University of Alberta, Edmonton, AB T6G 2R3, Canada. E-mail: ma.lei@acm.org.
Dave Towey is with the School of Computer Science, University of Nottingham Ningbo China, Ningbo, Zhejiang 315100, China. E-mail: dave.towey@nottingham.edu.cn.
May 20, 2024
The testing of web forms is an essential activity for ensuring the quality of web applications, which mainly involves evaluating the interactions between users and forms.
Automated test-case generation remains a challenge for web-form testing:
Due to the complex, multi-level structure of web pages, it can be difficult to automatically capture their inherent contextual information for inclusion in the tests.
Large Language Models (LLMs) have great potential for contextual text generation.
OpenAI's GPT LLMs have been receiving a lot of attention in software testing; however, they may not be applicable in practice because of information-security concerns.
To the best of our knowledge, no comparative study examining different LLMs has yet been reported for web-form-test generation.
To address this gap in the literature, we conducted a comprehensive empirical study investigating the effectiveness of 11 LLMs on 146 web forms from 30 open-source Java web applications.
According to the experimental results, different LLMs can achieve different testing effectiveness.
Notably, the GPT-4, GLM-4, and Baichuan2 LLMs can generate better web-form tests than the others.
Compared with GPT-4, the other LLMs find it difficult to generate appropriate tests for web forms, resulting in successfully-submitted rates (SSRs, measured by the proportions of the LLM-generated web-form tests that can be successfully inserted into the web forms and submitted) that are 9.10% to 74.15% lower.
Nevertheless, some LLMs (such as GLM-3, GLM-4, Baichuan2, and Spark-3.5) achieve higher SSRs than GPT-3.5, indicating a better ability to generate appropriate tests for web forms.
Our findings also show that, for all LLMs, when the designed prompts included complete and clear contextual information about the web forms, the LLMs generated more effective web-form tests.
Finally, we offer some insights for using LLMs to guide automated web-form testing.
Automated Web-Form Testing,
Large Language Models (LLMs),
Web-Form-Test Generation,
Java Web Applications,
Empirical Study.
§ INTRODUCTION
IN the swiftly evolving digital era, web applications have become a cornerstone of daily interactions.
By March 2024, the Internet Archive had saved more than 866 billion web pages <cit.>.
A web application consists of various HyperText Markup Language (HTML) elements, such as links, buttons, and sliders.
Among them, web forms, as key elements within the Document Object Model (DOM), are a fundamental interface component.
They not only serve as an interaction bridge between users and web applications <cit.> but also play an important role in improving the user experience and data collection efficiency <cit.>.
Web-form testing has been widely used to ensure the quality of web forms <cit.>.
It aims at simulating user inputs and evaluating user interactions <cit.>.
However, there may be challenges to automatically generating web-form tests, due to the properties of web forms:
(1) Web forms generally have complex structures with various basic and customized components <cit.>.
The basic components, such as tags, elements, attributes, and placeholders, play a significant role in constructing web forms.
In addition, the developers may introduce customized components (for example, control logic code) that could increase the complexity of the web-form structures; and
(2) Web forms also provide diverse contextual information for user interaction <cit.>.
For example, web forms can offer a drop-down list or a set of radio boxes for users to make a single selection from multiple options.
Consequently, web-form testing is crucial to ensure accurate interaction and evaluation of contextual information in complex web structures.
Recently, Large Language Models (LLMs) <cit.> have significantly enhanced Natural Language Processing (NLP) technologies <cit.>, leading to a groundbreaking era of Artificial Intelligence Generated Content (AIGC) <cit.>.
Previous studies have shown that LLMs have the potential to improve software engineering <cit.>.
Due to their reasoning and text-input generation capabilities <cit.>,
LLMs can also use extracted contextual information from complex web-form structures to improve the web-form-test generation process <cit.>.
Among the available LLMs, the GPT series <cit.>, produced by OpenAI, has received a lot of attention in web-form testing <cit.>.
However, due to information security concerns <cit.>, it may not be possible to apply GPT LLMs in practical software projects.
For example, when generating texts, GPT LLMs may leak sensitive user information, such as personal identity information.
Many recent studies have only analyzed a single or very few LLMs <cit.>.
For example, Alian et al. <cit.> proposed a testing method that only compared two kinds of LLMs to guide web-form testing (GPT-4 <cit.> and LLaMa2 <cit.>).
To the best of our knowledge, there is no comprehensive research available for testers to fully understand the different performances of various LLMs.
Motivated by these facts, we conducted a comprehensive empirical study to extensively analyze the effectiveness of various LLMs in generating web-form tests.
Our Work:
(1) We accessed 11 LLMs via publicly available APIs.
(2) We designed three types of prompts based on the HTML content of web forms for LLMs to use: Raw HTML for Task Prompt (RH-P); LLM-Processed HTML for Task Prompt (LH-P); and Parser-Processed HTML for Task Prompt (PH-P).
(3) We generated 14,454 web-form tests and executed them on 30 Java web open-source projects with 146 web forms from GitHub to evaluate the capabilities of the LLMs.
Key Findings:
(1) Different LLMs achieve different successfully-submitted results.
GPT-4, GLM-4, and Baichuan2 LLMs achieve better successfully-submitted rates (SSRs) — the proportions of the LLMs-generated web-form tests that can be successfully inserted into the web forms and submitted — indicating better web-form test generation (effectiveness) than the others.
In addition, the model size significantly affects the generation performance, such as with the LLaMa2 series of LLMs.
(2) Compared with GPT-4, the other LLMs have difficulty generating appropriate tests for web forms, resulting in a significant decrease in SSRs.
Specifically, the SSR performance of the other LLMs was between 9.10% and 74.15% lower.
Compared with GPT-3.5, some LLMs (such as GLM-3, GLM-4, Baichuan2, and Spark-3.5) were more suitable for generating appropriate web-form tests and achieving higher SSRs.
(3) Different contextual information and prompt constructions have different effects on the effectiveness of the LLMs.
The prompts constructed from parser-processed HTML (PH-P) are generally better than the other two types (RH-P and LH-P).
Practical Implications:
(1) To extract contextual information from web forms, it is necessary to fully understand their properties and ensure accurate parsing of the HTML content.
(2) To prune the HTML content of web forms and reduce complexity, it is necessary to remove redundant components (e.g., the customized components introduced by developers) and avoid requiring complete and complex web-form contextual information.
However, the pruned components should not affect the key contextual information of the web-form (e.g., IDs, which are critical for inserting the generated tests into the corresponding web forms).
(3) To select a specific LLM for generating tests for web forms, if there are no practical testing constraints (such as data privacy issues <cit.>), GPT-4 would be preferable due to its high quality and effective web-form tests.
Otherwise, some alternative LLMs should be selected, such as the LLaMa2 series.
Contributions: This study offers the following significant contributions:
* To the best of our knowledge, this is the first empirical study on multiple LLMs (11 LLMs) that investigates the potential for LLMs to generate automated web-form tests.
* We prune the HTML to reduce the complexity of the web-form structure.
We propose three context construction approaches to extract contextual information from web forms for prompt construction.
* We select 146 web forms from 30 open-source Java web applications to deeply evaluate the effectiveness of the different LLMs.
* We summarize three key findings from the experimental results, and provide two practical implications for future research on web-form testing.
The rest of this paper is organized as follows:
Section <ref> introduces some preliminaries, including the HTML parsing process, web-form testing, and LLMs.
Section <ref> describes the approach to conduct this empirical study.
Section <ref> explains the design of our experiments.
Section <ref> presents and analyzes the experimental results.
Section <ref> outlines some related work.
Section <ref> concludes this paper and discusses some future work.
§ PRELIMINARIES
This section provides a brief overview of the basic concepts of HTML parsing, web-form testing, and LLMs.
§.§ HTML Parsing Process
Listing <ref> shows a basic example HTML structure, which is made up of various components, such as:
a title in the head (lines <ref> to <ref>);
content in the body (lines <ref> to <ref>); and
image files in the body (line <ref>).
These components provide instructions to the browser for how to display information <cit.>.
[caption=The basic HTML structure., label=lst:html-script, captionpos=b]
<!DOCTYPE HTML>
<html>
<head>
    <title>This is a title.</title>
</head>
<body>
    <h1>This is a body.</h1>
    <img src="ddt.gif" alt="error"/>
</body>
</html>
The process of parsing an HTML structure is illustrated in Figure <ref>.
It involves the following steps:
(1) The resource loader is initially activated to load the webpage corresponding to the URL.
(2) This loader uses the network module to initiate requests and handle responses.
(3) Data can be obtained from web pages or resources through synchronous and asynchronous methods.
(4) The web page is then handed over to an HTML interpreter and transformed into a series of words or tokens.
(5) Based on these words, the interpreter constructs nodes, forming a DOM tree <cit.>.
(6) If a node is written in JavaScript, then the JavaScript engine is called to interpret and execute it.
(7) JavaScript code may modify the structure of the DOM tree.
(8) If a node uses other resources (such as images, Cascading Style Sheets (CSS), videos, etc.), then the resource loader is called to load them.
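As a toy illustration of steps (4) and (5), the Python sketch below tokenizes HTML with the standard-library parser and assembles the tokens into a simple DOM-like tree; it is a didactic stand-in for a browser's HTML interpreter, not a faithful engine reimplementation, and all class and key names are our own.

from html.parser import HTMLParser

class MiniDOM(HTMLParser):
    """Build a nested dict tree from start tags, end tags, and text data."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "#document", "children": []}
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "attrs": dict(attrs), "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

    def handle_data(self, data):
        if data.strip():
            self.stack[-1]["children"].append({"tag": "#text", "text": data.strip()})

parser = MiniDOM()
parser.feed("<html><body><h1>This is a body.</h1></body></html>")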
§.§ Web-Form Testing
Web-form testing ensures the interactive functionality, usability, and compatibility of web forms in a web application.
Nowadays, automated testing methods are widely used <cit.>.
For example,
Selenium <cit.> is a popular automated testing framework for web-form testing.
The Selenium web driver, the core of Selenium <cit.>, allows testers to automatically control the behavior of web applications on real browsers through automated test scripts <cit.>.
It uses the native automation support from web browsers <cit.> to enable an end-to-end test execution <cit.>.
Figure <ref> outlines the main steps in web-form testing using Selenium:
In Step (1), a customized automated test script is developed, which designs the execution logic of the testing task and the initiation request for the Selenium web driver.
In Step (2), the web driver sends an establish connection command to the browser where the web application with the Application Under Test (AUT) has been installed.
After Steps (1) and (2), the pipeline between the test script and the web application is established.
Then, testers can use this pipeline to automatically execute test scripts to test the web-form.
In Step (3), the UI operations (e.g., click a button) for the web-form are defined in the test script and sent to the browser.
In Step (4), the web-form tests are executed within the AUT on the browser with a series of actions.
After executing the web-form tests, in Step (5), the browser returns the test results to the testers.
Finally, in Step (6), the testing environment is reset to the original state.
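A minimal Selenium sketch of this flow (using the Python bindings) might look as follows; the URL and the element IDs ("username" and "submit") are hypothetical placeholders for a concrete AUT, and a matching browser driver is assumed to be installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                   # Steps (1)-(2): launch and connect
try:
    driver.get("http://localhost:8080/form")  # load the AUT page (assumed URL)
    driver.find_element(By.ID, "username").send_keys("alice")  # Steps (3)-(4)
    driver.find_element(By.ID, "submit").click()               # submit the form
    result = driver.page_source               # Step (5): inspect the response
finally:
    driver.quit()                             # Step (6): reset the environment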
Web-form testing could be complex due to two properties of web forms:
(1) Web forms can comprise various components, including basic and customized components.
For example, tags are the basic components that specify the input content type;
elements establish the layout of the web-form; and
attributes specify the functionality and characteristics of the elements.
In addition, the customized components introduced by developers can enhance the diversity of the web-form structures, and can include some control logic.
(2) Web forms also provide diverse contextual information for users to interact with the web applications:
For example, web forms may provide different approaches for users to select from alternative options, such as using a selector list or a group of radio boxes.
§.§ LLMs
LLMs are complex deep neural networks trained on various datasets, including books, code, articles, and websites.
This training enables the model to discern and replicate the intricate patterns and relationships inherent in the language it learns.
As a result, LLMs can produce coherent content ranging from grammatically accurate text to syntactically correct code snippets <cit.>.
LLMs <cit.> are advanced technologies in deep learning and NLP, with models including GPT <cit.>, GLM <cit.>, and LLaMa2 <cit.>.
These models learn complex features (such as language structure, grammar, and semantics) by training on large-scale text data.
With this training, LLMs can complete more complex and diverse NLP tasks, such as text generation <cit.>, translation <cit.>, and AI assistance (e.g., contributing to a range of software engineering tasks, including specification generation and the translation of legacy code) <cit.>.
LLMs are used in four key stages of the software engineering lifecycle:
(1) software requirements and design (e.g., software specifications generation <cit.>, GUI layouts <cit.>);
(2) software development (e.g., code generation <cit.>, code summary <cit.>);
(3) software testing (e.g., unit test generation <cit.>, GUI testing <cit.>); and
(4) software maintenance (e.g., code review <cit.>, bug report detection <cit.>).
LLMs can solve complex software engineering problems and promote better software engineering development.
Currently, most generative LLMs use the Transformer architecture, which is composed of two main components: the encoder and the decoder <cit.>.
Here, we take LLaMa2(7B) as an example to illustrate the basic structure of an LLM.
Figure <ref> in the appendix file shows the structure of LLaMa2(7B) along with the input text (prompt).
LLaMa2(7B) only uses the decoder part of the Transformer: it is a decoder-only architecture that stacks 32 decoder layers.
§ APPROACH
This study focuses on the automated web-form-test generation task for web-form testing.
Figure <ref> illustrates the framework of this empirical study, which includes the following five steps:
(1) HTML pruning;
(2) context construction;
(3) prompt design;
(4) LLM communication; and
(5) web-form-test insertion.
§.§ HTML Pruning
This section outlines the method of HTML pruning, a critical technique for clarifying the contextual information of web forms.
On the one hand, although HTML has a significant tree-like DOM structure <cit.>, it contains various types of components, which can make it difficult to filter out the key ones.
On the other hand, with the development of web forms, developers generally use essential basic components, such as name and ID.
Additionally, some customized components can be added to make the code logic more transparent, such as data-ID and user-role.
However, these customized components may make the HTML trimming process more complex.
Therefore, designing a method for pruning HTML is becoming increasingly challenging.
After analysis of the web-form structure, some key components (e.g., ID, name, type) are selected while pruning the remaining components.
In other words, the HTML contents of the web forms are simplified while maintaining semantic integrity, which makes the whole HTML clear and the web-form readable.
Meanwhile, selected key components enable us to successfully fill generated web-form tests into web forms, and also ensure the smooth execution of test scripts.
Algorithm <ref> provides the pseudocode for HTML pruning, i.e., a pruning function that takes a web driver 𝒟 and a filter tag set 𝒮 as its inputs.
In the initialization stage, a web-form ℱ is generated by searching 𝒟 for elements with the keyword “form".
For each element χ in ℱ, the algorithm checks each attribute α to identify its potential tag τ.
If the tag τ does not belong to the tag set 𝒮, the attribute α will be removed from the element χ; otherwise, α will be saved. After that, the updated element χ is appended to the pruned HTML ℋ.
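The Java sketch below is a minimal rendering of Algorithm <ref>; it assumes the page source has already been retrieved from the web driver 𝒟 and uses the Jsoup parser in place of live driver element searches, so it is an illustration rather than the exact implementation used in the study.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Attribute;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class HtmlPruner {
    // Keep only the attributes whose tag belongs to the filter set S.
    public static List<Element> prune(String pageSource, Set<String> keepTags) {
        Document doc = Jsoup.parse(pageSource);
        List<Element> prunedHtml = new ArrayList<>();           // the pruned HTML H
        for (Element chi : doc.select("form *")) {              // elements of the web-form F
            // Copy the attribute list to avoid modifying it while iterating.
            for (Attribute alpha : new ArrayList<>(chi.attributes().asList())) {
                if (!keepTags.contains(alpha.getKey())) {
                    chi.removeAttr(alpha.getKey());             // tag not in S: remove attribute
                }
            }
            prunedHtml.add(chi);                                // append the updated element
        }
        return prunedHtml;
    }
}

For the key components selected in this study, keepTags could be instantiated as Set.of("id", "name", "type").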
§.§ Context Construction
This section offers an overview of the method of context construction, which involves transforming the contextual information of a web form into task prompts to guide the LLM to generate web-form tests.
LLMs have strong logical understanding and the ability to generate high-quality content, both of which depend heavily on the quality of the prompt <cit.>.
In this study, we focused on the application of LLMs to web-form-test generation.
To build prompts for LLMs, the contextual information needs to be constructed from the HTML content of the web-form (which includes various components such as placeholders and attributes).
Using the web-form structure, we propose three methods of context construction in this study.
§.§.§ Context from Raw HTML
The first method constructs context by directly using the raw HTML
—
this was motivated by the fact that the HTML layout effectively represents the relationships among the various components.
Due to the limited amount of context that LLMs can handle <cit.>, it is necessary to “simplify" the raw HTML content by removing some CSS and custom components originally defined by developers
—
via the pruning function over 𝒟 and 𝒮 (as shown in Algorithm <ref>).
Then, the pruned HTML is reorganized as a string, which is considered as the contextual information.
§.§.§ Context from LLM-Processed HTML
The second method of context construction uses the LLM to pre-process HTML as the targeted format.
This is because LLMs can be very good at parsing HTML content.
We provide the LLM with web-form HTML content, and then ask it to parse the content according to the specific format.
More specifically, LLMs parse the contextual information from the HTML content (which includes accurately assembled elements of the web-form, such as the input tag name, tag ID, and tag type).
In general, LLMs convert web-form HTML into the JavaScript Object Notation (JSON) format, i.e., a list of JSON structured results.
Then, the context is constructed by traversing the web-form structure JSON, extracting the corresponding JSON information, and concatenating it into the web-form structure context content.
Algorithm <ref> provides the pseudocode for getting context from the LLM-Processed HTML.
At the initialization stage, s is an empty string, and the pruned HTML ℋ is generated by searching 𝒟 for elements with the keyword “form".
Then, ℋ is transformed into a JSON structured objects list Ω by a specific LLM ℒ.
For each element χ∈Ω, the algorithm captures the contextual information in three steps:
it retrieves the name of the JSON object χ (e.g., “hint text”);
it obtains the corresponding value of χ (e.g., “Please enter your name.”); and
it concatenates the obtained name and value into natural-language sentences (e.g., “The hint text is `Please enter your name.”').
After that, all reconstructed HTML information is reformed into a parsed context string as the context.
§.§.§ Context from Parser-Processed HTML
The third method for context construction is similar to the second, with the only difference being that they use different HTML-parsing functions (the parsing step in Algorithm <ref>):
Instead of an LLM as the HTML parser, a Java-based HTML parser (Jsoup[<https://jsoup.org/>.]) converts the web-form HTML into a list of JSON structured results. The context is also constructed as a string.
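The Java sketch below illustrates this parser-based pipeline; the selected attributes (name, type, and placeholder) stand in for the key components, and a plain map replaces a full JSON library, so the exact fields are illustrative assumptions.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;
import java.util.LinkedHashMap;
import java.util.Map;

public class ContextBuilder {
    // Parse the pruned form HTML and concatenate name/value pairs into sentences.
    public static String build(String formHtml) {
        StringBuilder context = new StringBuilder();
        for (Element field : Jsoup.parse(formHtml).select("input, select, textarea")) {
            Map<String, String> obj = new LinkedHashMap<>();   // one JSON-like object per field
            obj.put("tag name", field.attr("name"));
            obj.put("tag type", field.attr("type"));
            obj.put("hint text", field.attr("placeholder"));
            for (Map.Entry<String, String> entry : obj.entrySet()) {
                if (!entry.getValue().isEmpty()) {             // skip absent components
                    context.append("The ").append(entry.getKey())
                           .append(" is '").append(entry.getValue()).append("'. ");
                }
            }
        }
        return context.toString();
    }
}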
§.§ Prompt Design
This section provides a detailed introduction to the process of prompt design.
The constructed prompts are directly used to guide the web-form-test generation for LLMs.
After collecting and constructing the context (as discussed in Section <ref>), the next stage is to design the prompts for the LLMs.
This includes five steps that produce the following five prompt types:
(1) global prompt;
(2) task prompt;
(3) note prompt;
(4) instruction prompt; and
(5) objective prompt.
Figure <ref> presents the basic framework of the prompt structure, where the “%s” are replaced by the extracted information when testing the AUTs.
§.§.§ Global Prompt
The global prompt contains the most basic information, such as the name of the web application, the title of the web-form, and the number of elements within the form.
§.§.§ Task Prompt
The task prompt includes the core contextual information for the web application.
Based on the three types of context (Section <ref>), three types of task prompts are designed: Raw HTML for Task Prompt (RH-P); LLM-Processed HTML for Task Prompt (LH-P); and Parser-Processed HTML for Task Prompt (PH-P).
Two categories of natural language sentences are designed for the LLMs to analyze:
Category 1 asks the LLMs to analyze the structure of the HTML code; and
Category 2 asks the LLMs to analyze the contextual information in natural language expressions.
As shown in Figure <ref>,
RH-P uses Category 1, while LH-P and PH-P use Category 2 (as they are based on the contextual information in the natural language expressions).
§.§.§ Note Prompt
The note prompt attempts to restrict the outputs from the LLMs, guiding them to return a brief output rather than detailed information, such as a step-by-step analysis.
§.§.§ Instruction Prompt
The instruction prompt guides the LLMs to return results in the specified format:
The generated results must be surrounded by a pair of triple quotation marks.
§.§.§ Objective Prompt
The objective prompt asks the LLMs to strictly follow the requirements and return the results.
After collecting the above five sub-prompts, we successively concatenate them into a string as the final prompt for the LLMs.
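A minimal Java sketch of this concatenation is given below; the placeholder sentences paraphrase the five sub-prompts for illustration and are not the exact wording shown in Figure <ref>.

public class PromptBuilder {
    public static String build(String appName, String formTitle,
                               int elementCount, String taskContext) {
        String global = String.format(
                "You are testing the web application '%s'. The web-form '%s' contains %d elements. ",
                appName, formTitle, elementCount);
        String task = "Analyze the following contextual information and generate a suitable "
                + "input value for every form element: " + taskContext + " ";
        String note = "Return only the generated values, without a step-by-step analysis. ";
        String instruction = "Surround the generated results with a pair of triple quotation marks. ";
        String objective = "Strictly follow the above requirements and return the results.";
        return global + task + note + instruction + objective;  // successive concatenation
    }
}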
§.§ LLM Communication
This section elaborates on the communication with the LLM, which is a critical step for the LLMs to generate web-form tests.
Once the communication connection with the LLM is established, the LLM can be used to generate the web-form tests based on the extracted contextual information.
First, we standardize the encapsulation of various LLM APIs, which leads to better management, easier communication, and more convenient collection of experimental data.
We can communicate with a specific LLM by sending the name of the LLM to this encapsulated API.
Second, when running the AUT, we use three types of task prompts to construct the complete prompts (Section <ref>) to guide the LLMs to generate web-form tests:
The test script automatically sends the prompts as a message.
As shown in Figure <ref>, the contextual information is concatenated into three types of prompts (RH-P, LH-P, and PH-P) to communicate with a specific LLM and guide that LLM to generate web-form tests, respectively.
After receiving the response from the LLM, we extract the web-form tests from the response text.
Finally, we convert the extracted web-form tests into a set of key-value pairs for web-form-test insertion: The key stores the selectors of each component (i.e., a series of approaches for detecting the positions of the components in an HTML document such as ID selector, CSS selector, etc.); and the value stores the corresponding generated web-form tests.
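The Java sketch below illustrates this extraction step; the assumption that each generated pair appears on its own "selector: value" line inside the triple quotes is ours, made purely for illustration.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResponseParser {
    // Extract the text between triple quotes and split it into key-value pairs.
    public static Map<String, String> parse(String response) {
        Map<String, String> tests = new LinkedHashMap<>();
        Matcher m = Pattern.compile("\"\"\"(.*?)\"\"\"", Pattern.DOTALL).matcher(response);
        if (m.find()) {
            for (String line : m.group(1).split("\\R")) {       // one pair per line
                int sep = line.indexOf(':');
                if (sep > 0) {
                    tests.put(line.substring(0, sep).trim(),    // key: component selector
                              line.substring(sep + 1).trim());  // value: generated web-form test
                }
            }
        }
        return tests;
    }
}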
§.§ Web-Form-Test Insertion
This section explains the insertion process of the web-form tests generated by the LLMs.
This step is the key to achieving automation of the entire web-form testing process.
After extracting all the LLM-generated web-form tests, they are stored in a set with the corresponding selectors for all the web-form components.
If the selector detects the web-form component, the corresponding generated web-form-test is inserted.
As shown in Figure <ref>, a function that takes a component selector and a generated test value implements the component selection and web-form-test insertion process.
Once the web-form tests are inserted into the corresponding components,
some UI operations (e.g., click a button) are performed to submit the inserted values to the web application's server.
As illustrated in Figure <ref>, the “Sign in” button is automatically clicked to submit the inserted values.
Finally, a monitor is set to wait for whether the submission triggers a response from the application server:
If a response is caught, then it is considered a successful submission; otherwise, it has failed.
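A minimal Java sketch of the insertion and monitoring steps is shown below, using the Selenium 4 API; the submit-button ID and the URL-change success condition are illustrative assumptions rather than the study's actual monitor.

import java.time.Duration;
import java.util.Map;
import org.openqa.selenium.By;
import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class FormSubmitter {
    public static boolean insertAndSubmit(WebDriver driver, Map<String, String> tests) {
        for (Map.Entry<String, String> entry : tests.entrySet()) {
            driver.findElement(By.cssSelector(entry.getKey()))
                  .sendKeys(entry.getValue());                  // insert the generated value
        }
        driver.findElement(By.id("submit")).click();            // perform the UI operation
        try {                                                   // monitor the server response
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.not(ExpectedConditions.urlContains("/form")));
            return true;                                        // response caught: success
        } catch (TimeoutException e) {
            return false;                                       // no response: failure
        }
    }
}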
§ STUDY DESIGN
This section introduces the details of our study design, including
the research questions,
the selection of LLMs,
the evaluation setup, and the experiment environment.
§.§ Research Questions
Our research aimed to evaluate the effectiveness of LLMs in generating tests for web forms, guided by the following three research questions:
* RQ1: How effective are web-form tests generated by different LLMs?
* RQ1.1: How well do different types of prompts guide the web-form-test generation?
* RQ1.2: How well do different LLMs generate web-form tests?
* RQ2: What is the quality of the generated web-form tests?
* RQ2.1: Why are some generated web-form tests not submitted?
* RQ2.2: What is the quality of the generated web-form tests, from the perspective of software testers?
* RQ3: What insights/advice can be offered to testers using LLMs for web-form testing?
* RQ3.1: What strategies can be used to design appropriate prompts for testing web forms?
* RQ3.2: What are the criteria for choosing the best LLM for a specific testing scenario?
§.§ LLM Selection
After an in-depth investigation and analysis of the current mainstream and widely-used LLMs <cit.>, we selected 11.
These state-of-the-art LLMs were chosen based on the number of parameters <cit.>, the ease of API access and integration <cit.>, and the availability of commercial open-source options <cit.>.
Table <ref> lists some relevant information for these 11 LLMs, including the model name and version, the owning company or organization, the number of parameters, the source website, and the year of its release.
Interestingly, some information about these LLMs was not fully disclosed, such as the number of parameters.
However, when these models were released, their capabilities were often vigorously promoted, which makes the empirical research in this article more important and meaningful.
Due to the space limitations, more details about the subject LLMs can be found in the appendix file.
§.§ Evaluation Setup
We designed different experimental stages to answer each research question.
For RQ1, we identified 300 Java web applications from GitHub using keywords such as “Java web”, “Jobs”, and “Books”.
These keywords were selected from an online statistical resource[<http://5000best.com/websites/>.] that lists the top 5000 famous websites.
We then cloned these selected web applications, and excluded those without web forms and those that could not run properly in the experimental environment.
Finally, 30 Java web applications, with 146 web forms, were selected for the experiment.
To evaluate the effectiveness of the web-form tests, a successfully-submitted rate (SSR)[This metric was originally referred to as the “form passing rate” <cit.>.] was used to measure the proportion of LLM-generated web-form tests that can be successfully inserted into the web forms and submitted.
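Formally, this definition can be written as

\mathrm{SSR} = \frac{N_{\mathrm{submitted}}}{N_{\mathrm{generated}}} \times 100\%,

where N_{\mathrm{submitted}} is the number of generated web-form tests that were successfully inserted and submitted, and N_{\mathrm{generated}} is the total number of generated web-form tests.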
We ran each web-form three times under 11 LLMs and three types of prompts:
This was a total of 146 × 11 × 3 × 3 = 14,454 generated web-form tests.
For RQ1.1 and RQ1.2, we evaluated the SSR from the perspectives of the three types of prompts and 11 LLMs.
For RQ2.1, we conducted a case study to analyze why some generated web-form tests could not be successfully submitted.
For RQ2.2, we designed an online questionnaire[<https://github.com/abelli1024/web-form-testing-empirical-study>.] to collect user evaluations of LLM-generated web-form tests.
We invited 20 testers with software testing experience from famous Internet enterprises and research institutions.
For each web-form, we presented the screenshot and the corresponding web-form-test text generated by the 11 LLMs and the three types of prompts.
We randomly selected 15 web forms from the 146 web forms for each tester.
We used Kendall’s W <cit.> value to measure the agreement among different testers for responses to the questionnaire statements (which used a 5-point Likert scale:
strongly disagree “1”;
disagree “2”;
neutral “3”;
agree “4”; and
strongly agree “5”).
A higher Kendall's W value (close to 1.0) indicates a higher level of agreement among the testers' evaluation results.
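For reference, the standard (tie-free) definition of Kendall's W for m raters and n rated items is

W = \frac{12 \sum_{i=1}^{n} (R_i - \bar{R})^2}{m^2 (n^3 - n)},

where R_i is the sum of the ranks assigned to item i and \bar{R} is the mean of the R_i; when tied ranks occur, as is common with Likert-scale responses, a tie-correction term is applied to the denominator.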
We also collected data on the number of testers who use LLMs in their testing processes.
Additionally, we identified the main concerns for users who currently use or intend to use LLMs for testing.
For RQ3.1 and RQ3.2, based on the findings from other research questions, we provide some advice for constructing prompts for LLMs, and for selecting a specific LLM for a particular testing scenario, respectively.
§.§ Experiment Environment
All experiments were conducted on a MacBook Pro laptop with an Apple M3 Max processor and 64GB of RAM.
The test script was developed in Java.
The version of Google Chrome was 122.0.6261.112 (official version) (arm64), and the version of the Chrome web driver was 122.0.6261.128 (r1250580).
§ RESULTS
This section introduces the experimental results, and answers the research questions.
§.§ RQ1: How effective are web-form tests generated by different LLMs?
This section discusses the effectiveness of the LLM-generated web-form tests.
We provide an analysis from the perspective of the three prompt types, and from the perspective of the 11 LLMs.
§.§.§ Answer to RQ1.1
Table <ref> and Figure <ref> present the SSR results of the 438 test tasks for the three types of prompts (RH-P, LH-P, and PH-P).
Based on these results, we have the following observations:
* The average SSR results for RH-P, LH-P, and PH-P are 60.21%, 50.27%, and 70.63%, respectively, indicating that PH-P can guide LLMs to generate better web-form tests than RH-P or LH-P.
* The RH-P SSR ranges from 0.00% to 98.86%, the LH-P SSR ranges from 0.00% to 97.03%, and the PH-P SSR ranges from 0.00% to 99.54%.
For all three types of prompts, GPT-4 always has the highest SSR results, indicating the best performance.
GLM-4V always has an SSR of 0.00%, and may not be suitable for generating appropriate web-form tests.
Apart from GLM-4V, LLaMa2(7B) performed worst with the RH-P and LH-P types, with SSRs of 34.47% and 0.23%, respectively; and
LLaMa2(13B) performed worst with the PH-P type, with an SSR of 40.41%.
* For RH-P (Figure <ref>), the SSR results of six LLMs (GPT-4, GLM-4, Baichuan2, LLaMa2(70B), GLM-3, and Spark-3.5) are greater than the average (60.21%), while the other five LLMs (Spark-3, GPT-3.5, LLaMa2(7B), LLaMa2(13B), and GLM-4V) have lower results.
* For LH-P (Figure <ref>), the SSR results of six LLMs (GPT-4, GLM-4, Baichuan2, Spark-3.5, GPT-3.5, and GLM-3) are greater than the average (50.27%), while the other five LLMs (Spark-3, LLaMa2(7B), LLaMa2(13B), LLaMa2(70B), and GLM-4V) have lower results.
* For PH-P (Figure <ref>), the SSR results of eight LLMs (GPT-4, Baichuan2, Spark-3.5, GLM-4, GPT-3.5, GLM-3, LLaMa2(70B), and Spark-3) are greater than the average (70.63%), while the other three LLMs (LLaMa2(7B), LLaMa2(13B), and GLM-4V) have lower results.
* Only PH-P achieved a higher average SSR (70.63%) than the overall average result (60.37%); RH-P (60.21%) and LH-P (50.27%) were both lower than the overall average.
We next discuss the different performance of these three types of prompts.
* With RH-P, we directly feed the raw pruned HTML of the web forms into the LLMs.
This enables the LLMs to use the contextual information from the HTML to generate the web-form tests.
However, we found that some LLMs cannot generate effective web-form tests based on the HTML context.
For example, hint texts in the raw HTML context are descriptions displayed in the web forms that guide users towards what should be entered.
However, some LLMs (such as LLaMa2(13B), LLaMa2(70B), and Baichuan2) only returned the hint texts (e.g., “Please enter the user name”), without generating any web-form tests.
* With LH-P and PH-P, we used two approaches to process the pruned HTML: with the LLMs, and with the automated-testing tool.
We found that the automated-testing tool could perform better for web-form-test generation.
One of the reasons is that, with LH-P, the LLM cannot reliably process the HTML into JSON objects that conform to the expectations:
Some key contextual information used for guiding the web-form-test generation may be missed when getting the JSON objects, resulting in the LLMs not successfully generating the web-form tests.
In contrast, the automated testing tool could process the pruned HTML without missing any key contextual information.
Therefore, compared to the automated testing tool, LLMs appear less suitable for parsing the HTML into JSON objects.
Summary of Answers to RQ1.1:
Among the three types of prompts, PH-P performed better than RH-P and LH-P.
Furthermore, the LLMs were found to be less suitable for parsing the HTML of web forms compared to the automated-testing tools.
§.§.§ Answer to RQ1.2
Table <ref> and Figure <ref> also present the SSR results of the 438 tests from the perspective of the 11 LLMs.
Tables <ref> and <ref> compare the SSR results for GPT-3.5 and GPT-4 with other LLMs:
Δ# denotes the difference in the number of successfully submitted tests for an LLM compared to that of GPT-3.5 or GPT-4; and
Δ% denotes the change in the SSR.
Based on the results, we have the following observations:
* The average SSR result across the 11 LLMs is 60.37%, ranging from 0.00% to 98.48%.
Among all LLMs, GPT-4 always has the highest average SSR (98.48%), indicating the best ability to generate appropriate tests for web forms.
GLM-4V always has the lowest results (0%).
The performance of the other LLMs ranked according to SSR is:
GLM-4 (89.50%),
Baichuan2 (89.04%),
Spark-3.5 (83.18%),
GLM-3 (74.66%),
GPT-3.5 (65.60%),
Spark-3 (55.78%),
LLaMa2(70B) (54.11%),
LLaMa2(13B) (28.08%), and
LLaMa2(7B) (25.65%).
* Compared with GPT-3.5, GPT-4, GLM-3, GLM-4, Baichuan2, and Spark-3.5 achieved better performance, with SSR improvement rates of 60.9%, 44.56%, 43.08%, 30.31%, and 21.59%, respectively.
GLM-4V, LLaMa2(70B), LLaMa2(13B), LLaMa2(7B), and Spark-3 may not be suitable for generating suitable web-form tests:
Their SSR performances represented degradation of 100.00%, 57.58%, 51.74%, 7.99%, and 13.16%, respectively.
* Compared with GPT-4, all other LLMs were less effective, with decreases in SSRs of 9.10% to 100.00%.
The GPT-3.5, GLM-3, GLM-4, Baichun2, and Spark-3.5 SSR decreases were less than the average reduction (42.65%); while the reductions for GLM-4V, LLaMa2(70B), LLaMa2(13B), LLaMa2(7B), and Spark-3 were greater than the average.
Summary of Answers to RQ1.2:
Some LLMs (such as GPT-4, GLM-4, and Baichuan2) can generate relatively effective web-form tests.
Among the 11 LLMs, GPT-4 always has the best effectiveness;
and GLM-4V always performs worst (with SSRs of 0.00%).
Compared with GPT-4, the remaining LLMs have difficulty generating appropriate tests for web forms, with SSR reductions ranging from 9.10% to 74.15%.
Nevertheless, some LLMs (such as GLM-3, GLM-4, Baichuan2, and Spark-3.5) achieve higher SSRs than GPT-3.5, indicating better effectiveness.
§.§ RQ2: What is the quality of the generated web-form tests?
This section discusses the quality of the web-form tests generated by the different LLMs.
The section includes an analysis of the reasons why some tests cannot be submitted successfully.
We also include an analysis of the test quality, from a tester’s perspective.
§.§.§ Answer to RQ2.1
Figure <ref> presents three web-form instances.
Tables <ref> to <ref> show the web-form test information generated with these three web forms by the 11 LLMs.
A check mark (✓) indicates that the context of the web-form tests was consistent with the web-form, which can lead to successful submission;
a cross mark (✗) indicates that the generated web-form tests neither satisfied the context nor were successfully submitted through the web-form.
We categorized the reasons why the LLM-generated web-form tests could not be submitted successfully as follows:
* Reason 1: Some LLMs (such as Baichuan2, Spark, and GLM-4V) were unable to generate the correct web-form test content based on the provided web-form contextual information.
For example, in Figure (<ref>), when we log in to the web-form, we need to provide three inputs: email, password, and whether or not to remember the password.
Some LLMs (such as GLM-3) generated web-form tests with incorrect formatting, such as an email not conforming to regular email formatting.
This may be because some LLMs may not be able to correctly parse the contextual information.
* Reason 2: Some LLMs (such as GLM-4, GLM-4V, and LLaMa2(7B)) were unable to generate web-form tests in the specific format restricted in the designed prompts.
For example, in Figure (<ref>), we constructed restrictive prompts for generating web-form tests (Section <ref>).
However, these LLMs did not return the correct information, which prevented the correct parsing of the information, especially with LH-P.
When using LH-P, the LLMs first parsed the HTML of a given web-form into a list of JSON structures (Algorithm <ref>).
If the LLMs could not return the correct parsed JSON structures, the subsequent testing process would fail.
* Reason 3: Connection problems between the testing environment and the LLM API, or issues with the reasoning process of the LLMs, may also have caused failure or interruption of the web-form-test generation process.
In Figure (<ref>), for example, Spark-3 encountered a request timeout when generating the restaurant reservation information based on the content of the web-form.
Therefore, it was unable to parse and submit the web-form-test information.
Summary of Answers to RQ2.1:
(1) Different LLMs are constrained by training datasets, parameter sizes, and other factors, resulting in significant differences in their effectiveness when generating web-form tests.
(2) Generating web-form tests that are consistent with the web-form context will greatly increase the SSRs.
§.§.§ Answer to RQ2.2
Figure <ref> shows the quality scores given by 20 testers for the different LLM-generated web-form tests, using the three types of prompts.
Table <ref> presents the average quality scores of the 20 testers.
Furthermore, the Kendall's W value for the 20 testers was 0.94 (close to 1.0), indicating a strong agreement among evaluators.
Based on these results, we have the following observations:
From the perspective of the three types of prompts (RH-P, LH-P, and PH-P):
* The average scores for RH-P, LH-P, and PH-P were 2.40, 2.26, and 2.76, respectively.
PH-P achieved a better score than the overall average (2.47), while RH-P and LH-P scored lower than the overall average.
* For the RH-P prompt, only GPT-4 and GLM-4 achieved average scores greater than 3.00 (3.62 and 3.21, respectively), indicating that only these two LLMs achieved a “Neutral” performance from the perspective of the testers.
Testers rejected (sometimes strongly) the quality of the web-form tests generated by other LLMs using RH-P.
* For the LH-P prompt, GPT-4, GLM-4, Baichuan2, and Spark-3.5 achieved a “Neutral” performance according to testers.
They rejected (sometimes strongly) the quality of the web-form tests generated by the other LLMs using LH-P.
* For the PH-P prompt, GPT-4, GLM-3, GLM-4, Baichuan2, and Spark-3.5 achieved a “Neutral” performance, according to the testers.
The average score across the 11 LLMs was 2.47.
Five of them (GPT-4, GLM-3, GLM-4, Baichuan2, and Spark-3.5) achieved scores better than the average.
GPT-4 always scored better than other LLMs, and GLM-4V always scored the worst.
Summary of Answers to RQ2.2:
The main findings of the quality evaluation from the 20 testers are consistent with the findings in the effectiveness evaluation (Section <ref>).
GPT-4 received the highest endorsements of the testers.
Other LLMs received poorer evaluations.
In general, there is still room for improving the quality of the LLM-generated web-form tests to meet the expectations of the testers.
§.§ RQ3: What insights/advice can be offered to testers using LLMs for web-form testing?
This section provides some insights and advice for using LLMs to support the web-form-test generation from two perspectives: prompt design and LLM selection.
§.§.§ Answer to RQ3.1
Based on the experimental results for the three types of prompts (RH-P, LH-P, and PH-P), we found that the following strategies should be followed:
* Insight 1: Ensure accurate extraction of web-form context for prompt guidance.
We can extract the contextual information from the HTML of the web-form and use it as part of the prompt to guide LLMs to generate web-form tests.
However, we need to accurately parse the web-form and accurately extract the contextual information.
For example, we found that using PH-P to construct prompts was 20.36% more successful than using LH-P.
* Insight 2: Simplify web-form HTML for simpler, clearer prompts.
We found that the HTML structure of web forms can be complex.
It is necessary to simplify this when constructing prompts, and to avoid directly using raw HTML to build prompts (which may make the content of the prompts too complex).
For example, we found that using PH-P to construct prompts was 9.94% more successful than using RH-P.
* Insight 3: Simple and clear task requirements should be set for LLMs to help them achieve the expected results more effectively.
The PH-P method extracts web-form contextual information using pruned web-form HTML content, while
RH-P directly uses the pruned HTML (which is more complex) to construct contextual information:
PH-P provides LLMs with simpler contextual information.
We found that simpler and clearer prompts achieve more effective web-form-test generation.
Summary of Answers to RQ3.1:
We offer the following advice for designing and constructing prompts when testing web forms:
(1) Simplify the HTML content in the web forms;
(2) Accurately extract the contextual information from the web forms; and
(3) Set simple and clear task requirements (simplify the prompt structures).
§.§.§ Answer to RQ3.2
According to our study <ref>, over 50% (11 out of 20) of the testers have started using LLMs to guide quality assurance work, which indicates that LLMs are playing an increasingly important role in testing.
80% (16 out of 20) of the testers were concerned about privacy and security issues when using LLMs for the web-form quality assurance.
Based on the importance of LLMs, the security concerns of testers, and the main findings of our research, we offer the following guidance for selecting LLMs for testing:
* Insight 4: To generate effective web-form tests, GPT-4 is highly recommended.
According to our experimental results (Sections <ref>), GPT-4 can generate the most effective web-form tests.
* Insight 5: If facing testing constraints, the alternative LLMs (not GPT series) should be selected.
Actual web-form testing tasks can involve combining real user data into prompt content.
This helps the LLMs better understand the contextual information of the web forms.
However, this may also lead to private data leakage:
GPT LLMs should, therefore, not be directly used <cit.>.
Alternative LLMs, such as the LLaMa2 series LLMs, should be considered.
In this study, we compared the effectiveness of GPT-3.5 and GPT-4 with other LLMs, which should help testers to more conveniently choose an appropriate LLM based on the actual situation.
For example, the GLM-4 and Baichuan2 effectiveness are only about 10% less than GPT-4.
Summary of Answers to RQ3.2:
We offer the following advice for choosing LLMs:
(1) If data privacy is not an issue when testing, GPT-4 is a good choice for ensuring the effectiveness of the generated web-form tests.
(2) If testers are concerned about data privacy and security, then alternative, open-source LLMs, such as the LLaMa2 series, should be considered.
§.§ Threats to Validity
This section discusses potential threats to the validity of our study.
* The first threat is related to the representativeness of our experimental subject selection.
To mitigate this threat, when selecting the AUTs on GitHub, we filtered through multiple keywords, such as “Java Web”, “Jobs”, and “Books”.
This enabled the selection of different types of Java web applications.
* The second threat involves our communication with the LLMs.
As is known, if LLMs are repeatedly interacted with using a fixed communication role, they may exhibit biases when generating web-form tests <cit.>.
To avoid such a potential threat, in this study we maintain random access to APIs (i.e., using a custom default role defined by the LLM), and use task guidance based on direct prompts to communicate with the LLMs.
* The third threat involves our research on web-form contextual information.
We designed three types of prompts to better extract contextual information from the web forms.
This helped us to build multiple prompts to guide the LLMs' generation of web-form tests, and to evaluate their effectiveness.
§ RELATED WORK
This section presents some related work, including web-form testing, string data generation, and LLMs for software testing.
§.§ Web-Form Testing
Web-form testing is critical to software quality assurance, impacting both the user experience and the application stability.
As web applications become increasingly complex, researchers have worked to enhance automation, efficiency, and accuracy in web-form testing.
Rothermel et al. <cit.> proposed an automated framework for identifying and validating form elements and their interaction behavior.
Using the consistency between visual elements and back-end logic, their method significantly improved testing coverage and efficiency, setting new standards for form-based visual-program testing.
Ricca et al. <cit.> introduced a UML model to assess static site structures and guide white-box testing.
Applied to real-world scenarios, their approach improved verification and validation, using automatic test-case generation to ensure comprehensive testing and to simplify regression checks.
They emphasized the importance of thorough testing to ensure the quality and performance of web applications, offering a detailed perspective on web-form testing.
Furche et al. <cit.> used ontology tools to automate the understanding of form structure and content:
They optimized the integration and retrieval process of form data, and demonstrated the efficiency and potential advantages of semantic technology for handling complex web forms.
Santiago et al. <cit.> integrated machine learning and constraint-solving techniques to predict the effective input of form fields.
They were able to generate test cases that comply with logical constraints, demonstrating the practical application of machine learning technology in enhancing test automation and improving accuracy.
Cruz-Benito et al. <cit.> explored the adaptability of web forms based on user-feature detection.
They studied the impact of user behavior and preferences on form design through A/B testing and machine learning techniques, and proposed strategies to improve user satisfaction and interaction efficiency through customized experiences.
Lukanov et al. <cit.> used the functional Near Infrared Spectroscopy (fNIRS) measure to study the impact of web-form layout on users' psychological burden.
This provided the scientific basis for understanding how different design schemes affect users' cognitive burden, emphasizing the importance of optimizing the user experience in form design.
Alian et al. <cit.> improved the accuracy and efficiency of automatic filling and validation processes by conducting an in-depth analysis of the semantics of form elements.
They highlighted the critical role of semantic analysis in improving the quality of web-form testing.
§.§ LLMs for Software Testing
With the growing application of LLMs in software engineering, their innovative applications and challenges in the software testing process have become an important research topic.
Wang et al. <cit.> summarized how LLMs are transforming software testing, providing insights into their future roles and challenges based on current practices and research.
Yu et al. <cit.> explored LLMs' roles in improving the generation and migration of automated test scripts, underlining their flexibility and efficiency in complex scenarios, and the potential for enhanced script maintainability and adaptability.
Zhang et al. <cit.> evaluated the effectiveness of LLM-generated security tests.
Through experimental comparisons, they demonstrated LLMs' ability to identify potential security vulnerabilities and generate corresponding test cases, highlighting their potential for automating security testing.
Feldt et al. <cit.> investigated the use of conversational LLMs to develop agents that autonomously conduct testing tasks, introducing innovative approaches to software testing through natural language understanding.
Alshraideh et al. <cit.> proposed a method for generating test data through search-based techniques that incorporate program-specific search operators, showing how LLMs can optimize the search process to improve the efficiency and quality of test-data generation.
Gu et al. <cit.> introduced a method based on LLMs for generating testing code for the Go language compiler.
They used LLMs' ability to understand language structure to generate high-quality code, thus improving the automation level of compiler testing.
Kang et al. <cit.> conducted an experimental study on LLMs' capability to reproduce generic software defects, revealing the ability of LLMs to learn from a few examples, and offering new approaches for software defect diagnosis and repair.
Schäfer et al. <cit.> conducted an empirical evaluation of the effectiveness of using LLMs for automated unit-test generation, highlighting the potential for LLMs to generate high-quality test cases while also noting their limitations in dealing with specific testing challenges.
Fatima et al. <cit.> proposed a black-box approach based on LLMs for predicting flaky tests.
By analyzing the historical data of test executions, this predictor can effectively identify potentially flaky tests, helping to improve the stability and reliability of testing.
To the best of our knowledge, there has been no empirical research assessing the effectiveness of LLMs in generating web-form tests.
Additionally, there is limited understanding of how testers can make optimal use of LLMs for this purpose.
This article aims to address these gaps in the literature.
§ CONCLUSIONS AND FUTURE WORK
This paper has reported on an empirical study to investigate the effectiveness of 11 state-of-the-art LLMs in web-form-test generation.
We evaluated these LLMs using 146 web forms from 30 open-source Java web applications.
Based on our experimental results, we have the following conclusions:
(1) Some LLMs (such as GPT-4, GLM-4, and Baichuan2) can generate relatively efficient and high-quality tests for web forms.
However, under the same conditions, LLMs such as GLM-4V, LLaMa2(7B), and LLaMa2(13B) did not perform well on the same testing task, indicating that LLMs can still be optimized and improved for automated web-form-test generation.
(2) Some LLMs (such as GLM-3, GLM-4, Baichuan2, and Spark-3.5) may be more suitable for generating appropriate tests for web forms than GPT-3.5, delivering a higher SSR than GPT-3.5.
(3) A comparison of the experimental results for the three different prompt methods (RH-P, LH-P, and PH-P) revealed that clear and concise web-form contextual content could better guide the LLMs to generate appropriate content.
If key contextual information from the HTML elements is missing, the effectiveness may be reduced by around 10% to 20%.
(4) Regardless of the prompt, GLM-4V performs poorly.
An analysis of this found that GLM-4V may not be suitable for generating web-form tests, which indicates that the selection of models should depend on their specific areas of expertise.
Our future research will include the following two directions:
* Based on the experimental results, the average SSR of the selected LLMs is 60.37%, which means that there is room for improvement.
Some approaches can be adopted to improve the LLM effectiveness for generating web-form tests, such as optimizing the design and construction of prompts, and fine-tuning or retraining the LLMs.
* Based on the workflow of our empirical study with various LLMs, an automated web-form-test generation tool can be designed and developed.
Furthermore, the automated generation tool can provide testers with intelligently recommended options (guided by different LLMs) for the testing process.
§ ACKNOWLEDGMENTS
The authors would like to thank the testers from the companies and research institutions who participated in our study questionnaires.
We would also like to thank Yinming Huang from Macau University of Science and Technology for their valuable help in the experiments of this study.
|
http://arxiv.org/abs/2405.09144v1 | 20240515071706 | Evaluation scheme for children-centered language interaction competence of AI-driven robots | [
"Siqi Xie",
"Jiantao Li"
] | cs.HC | [
"cs.HC"
] |
Both authors contributed equally to this research.
sqxie@mails.ccnu.edu.cn
Central China Normal University
152 Luoyu Rd.
Wuhan
Hubei
China
430079
0009-0007-7948-4560
[1]
lijt0909@163.com
Beijing Language and Culture University
15 Xueyuan Rd.
Beijing
Beijing
China
100083
This article explores an evaluation method for the language communication proficiency of AI-driven robots engaging in interactive communication with children. The utilization of AI-driven robots in children’s everyday communication is swiftly advancing, underscoring the importance of evaluating these robots’ language communication skills. Based on interviews with 11 Chinese families and a thematic analysis of comment text from shopping websites (such as Taobao, Tmall, and JD), a framework is introduced in the article to assess five key dimensions of child-robot language communication: interactivity, specificity, development, sociality, and safety. We draw on the concept of “children’s agency”, viewing children as active participants in shaping society and cultural life alongside adults. Therefore, this article places particular emphasis on collecting data related to children. Whether through survey interviews or direct interactive experiments, we treat children as independent subjects of data collection. The study involved empirical research following the mentioned framework, capturing interaction videos in natural conversation settings among children from 6 families. Analysis was performed on quantitative data obtained from video recordings, alongside questionnaires and interviews carried out with parents acting as participants or observers. We found that the presence or absence of parents during children’s interactions with robots can impact the evaluation of robots’ language communication abilities. Ultimately, this article proposes an enhanced comprehensive evaluation framework incorporating insights from parents and children, supported by empirical evidence and inter-rater consistency assessments, showcasing the scheme’s efficacy.
[500]Human-centered computing
[500]Human-centered computing Human computer interaction (HCI)
[500]Human-centered computing HCI design and evaluation methods
Evaluation scheme for children-centered language interaction competence of AI-driven robots
Jiantao Li
===========================================================================================
§ INTRODUCTION
The advent of artificial intelligence (AI) has ushered in a new era of technological advancement, significantly impacting various sectors including education and domestic environments. In recent years, the incorporation of AI-driven robots in child-oriented settings has witnessed substantial growth. These robots are no longer passive tools but active participants in fostering learning, social interaction, and emotional support for children <cit.>. However, the effectiveness of these interactions largely hinges on the robot's ability to understand, process, and respond to children's language cues appropriately. Given children's unique linguistic characteristics and developmental stages, standard language interaction benchmarks used for adults may not be suitable.
Therefore, there is a pressing need for a dedicated evaluation scheme that comprehensively assesses a robot's language interaction competence specifically in the context of child users <cit.>. To establish a comprehensive and adaptive evaluation scheme for children-centered language interaction competence, a mixed-methods approach was selected, which combines quantitative and qualitative methods to analyze and interpret the data, including descriptive statistics, content analysis, and thematic analysis. Firstly, we implemented a preliminary study to establish the evaluation framework, interviewing 11 families and analyzing around 30 thousand words of comment text from shopping websites. Then six children between 3 and 6 years old were selected to interact with AI-driven robots, and these children and their parents were consulted to develop the evaluation scheme based on a questionnaire and semi-structured interviews.
§ BACKGROUND AND RELATED WORK
To evaluate the language interaction competence of robots, most research has been carried out on evaluating the effects of robot-assisted language learning <cit.>, emotional interaction <cit.>, or engagement degree <cit.>, based on an efficiency-oriented approach. The components can be deduced from what kinds of functions or characteristics can improve the efficiency of HRI. For example, students evaluate robots in robot-assisted language learning tasks from different aspects, including understanding ability, speaking speed, sound quality, and English proficiency <cit.>; a conceptual framework consisting of three main components (goal orientation, embodiment, and multimodality) was proposed to comprehensively understand different chatbot types and their possibilities for educational use in language learning <cit.>; Dautenhahn has proposed some dimensions for social robots <cit.>, but these dimensions apply to human beings in general: they are not specifically oriented to children and are not typically focused on social language skills.
In conclusion, these studies can’t comprehensively answer the following questions: What dimensions and components can affect the evaluation of child-directed language interaction competence of robots? How do family members affect child-robot interaction and the evaluation process? Despite the growing achievement, the research gaps are evident: recent research lacks theoretical frameworks to support the above questions and there is inadequate empirical research concerning the above questions.
§ PRELIMINARY STUDY
The preliminary study aims to establish an outline of the evaluation framework, and we use the thematic analysis method to consult 11 Chinese families' views about children's language intelligence products. We collect data of semi-structured interviews and consumer feedback on online shopping platforms (such as Tmall[https://www.tmall.com] and JD[https://www.jd.com/]) to obtain information on the usage demands, experiences, and evaluations of children's robots (mainly in terms of interactive language).
Following established open coding methods <cit.>, the data obtained in this study were analyzed by two researchers using thematic analysis. Firstly, the interviews were transcribed into text using Feishu Minutes software[https://www.feishu.cn/product/minutes], and manual verification was conducted to enhance the understanding of the entire interview content. After transcription, the data were read multiple times to develop initial coding awareness. Then, initial coding was conducted by two coders independently analyzing the interview data, comparing similarities and differences, and using Nvivo for inductive and theory-driven thematic analysis. This approach allowed for the discovery of facts based on the interview data and provided specific investigations into the interactive language of children's robots. Next, themes were identified by comparing and categorizing the initial codes, grouping conceptually similar codes, and assigning different codes to different themes. Only the initial codes agreed upon by both coders were confirmed as final codes. Finally, the themes were examined to ensure the accuracy of the coding and the relevance to the respective theme. The specific theme codes can be found in Fig. <ref>.
Furthermore, a saturation test was conducted using 411 feedback evaluations from shopping platforms. The analysis results showed that during the coding analysis process, the identified codes and themes were repeatedly present, and no new codes or themes emerged. Therefore, through the saturation test, it can be concluded that the extracted codes and themes in this study are reliable.
§ EMPIRICAL RESEARCH
Only when observing the real interactive context can different evaluation dimensions be tested, in this case, a comprehensive evaluation framework of robots' children-centered language ability will be proposed. Our research's objective is to explore whether the five dimensions of the evaluation framework can comprehensively assess the language interaction competence of robots.
§.§ Methods
The research is centered on preschool-aged children due to their higher likelihood of being cared for at home compared to older children. A total of 6 kindergarteners between 3 and 6 years old and their parents from China participated in this study. The study involved parental participants from two age ranges: 20-30 years and 30-40 years, with three parents in each group. They also had a range of education levels from bachelor to doctoral degrees. We used a robot called Alpha Egg GPT, which is LLM-equipped for managing the dialogue flow (i.e., the robot was embedded in the API of iFlytek's Alpha-egg large cognitive model for children). Parents were requested to complete an online survey, providing demographic information about their family and their previous encounters with robots. Subsequently, the researcher arranged experiments with the parents and their child, either at their residence or within a laboratory setting. The engagement between the individual child and the robots extended for approximately 30 minutes, encompassing a series of prescribed procedures, as shown in Table <ref>:
Throughout our study, we meticulously recorded all interaction sessions between children and robots. These video recordings serve as a valuable resource, capturing the nuances of behavior, communication, and emotional responses. By analyzing these videos, we gain insights into how children engage with robots. Non-verbal cues, gestures, and expressions provide valuable context that might not be apparent through other data collection methods. In addition to video recordings, we conducted post-interviews with both parents and children. These interviews provide subjective perspectives and rich contextual information. Following the practical session of each family, a brief 10-minute interview was conducted, primarily aimed at gathering insights and experiences from both parents and children regarding their feelings in the respective session. After observing and receiving participants' feedback, we developed a comprehensive evaluation scheme based on 5 dimensions extracted from our preliminary study. More details will be provided in the following section. We transcribed the videos with Baidu Netdisk[https://pan.baidu.com] and noted the timestamps of positive and negative feedback from children. Then, we asked parents and children to complete a questionnaire based on the developed evaluation scheme. This questionnaire reports the degree of satisfaction, which is a quantitative measure used to assess the success of child-robot interactions.
§.§ Developed scheme
The preliminary research involved the implementation of an extensive assessment framework. We established a detailed evaluation scheme based on empirical research. The evaluation system, outlined in Table <ref>, encompasses a total of five dimensions and 16 indicators.
§.§ Result and Findings
To evaluate the efficiency of the developed evaluation scheme, parents completed a questionnaire and rated the different dimensions on a 5-point scale. They were asked to assess it based on the practical experience and the video recording of the interaction. In the end, we calculated the mean score, SD, and internal reliability of each dimension; Table <ref> shows the result. The Cronbach α score serves as an essential metric for assessing the reliability of the developed evaluation scheme.
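For reference, the internal reliability reported here is Cronbach's α, whose standard definition for a dimension with k indicators is

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X} \right),

where \sigma^2_{Y_i} is the variance of the scores on indicator i and \sigma^2_X is the variance of the total score for that dimension.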
The study found that the complex communication style of robots, which included the use of sophisticated vocabulary and intricate grammatical structures, proved to be a hurdle for the children involved. Simplifying language, using more common words and shorter sentences, is crucial to enhance the effectiveness of robot-child interactions. Secondly, parental involvement plays a crucial role in fostering a child's willingness to engage and participate actively, and supportive environments contribute to a more positive and immersive experience for the child. Thirdly, the robots exhibited significant limitations when it came to discerning individual voice characteristics amidst a cacophony of multiple speakers, underscoring a pressing need for the enhancement of multi-party voice recognition capabilities.
§ DISCUSSION
Balance of Safety and Creativity
The delicate equilibrium between safety and creativity lies at the heart of designing effective language interaction competence evaluation schemes for AI-driven robots. While fostering creativity is essential for engaging children and promoting cognitive development, ensuring safety remains paramount. Striking this balance necessitates thoughtful consideration of the following aspects:
1. Content Filters: Implementing robust content filters to shield children from harmful or inappropriate material while still allowing for imaginative and educational interactions.
2. Dynamic Adaptation: Developing adaptable algorithms that adjust their responses based on context, age, and individual preferences, thereby fostering creativity without compromising safety.
3. Human Oversight: Regular human monitoring and intervention to address unforeseen situations and maintain a safe environment.
Considering the Role of AI-Driven Robots
Recognizing the multifaceted roles AI-driven robots play in children's lives is crucial. Beyond mere communication tools, they serve as companions, educators, and playmates.
1. Educational Enhancement: Leveraging robots to enhance language learning, cognitive skills, and social development.
2. Emotional Support: Acknowledging the potential for robots to provide emotional support and companionship, especially in contexts where human interaction is limited.
3. Ethical Responsibility: Ensuring that robots uphold ethical standards and align with societal values, considering their influential role in shaping children's perceptions and behaviors.
Family Members' Participation
Involving family members—particularly parents and guardians—in the evaluation process is pivotal. Their active participation contributes valuable insights and fosters a holistic understanding of child-robot interactions.
1. Parental Perception: Recognizing that parents' perspectives significantly impact the evaluation of robots' language communication abilities.
2. Collaborative Assessment: Encouraging joint assessments by parents and children, as their combined viewpoints provide a comprehensive picture.
3. Feedback Loop: Establishing a continuous feedback loop between families, researchers, and developers to refine and improve language interaction competence.
In summary, thoughtful consideration of safety, the multifaceted role of AI-driven robots, and family involvement will shape effective evaluation schemes, ensuring that child-robot language interactions are not only proficient but also enriching and secure.
§ CONCLUSION
We present a conceptual model aimed at gauging the linguistic capabilities of AI-powered robots engaging with children. Our scheme scrutinizes five facets of communication between children and robots: responsiveness, precision, progression, sociability, and security. Information was gathered from six family units via video documentation, surveys, and personal discussions. Our results underscore the necessity for a thorough assessment protocol integrating perspectives from both parental figures and children.
§ ACKNOWLEDGMENTS
We express our great gratitude to all the participating families, whose time and efforts were essential to the success of this study. Our special gratitude is extended to participant Zhenda Zhang and his son Dudu, for their remarkable dedication and invaluable contribution to this research. Furthermore, we would like to acknowledge the support provided by the National Social Science Foundation of China (Major Program) (23&ZD320) and the Fundamental Research Funds for the Central Universities (No. 30106230479).
|
http://arxiv.org/abs/2405.08909v1 | 20240514190233 | ADA-Track: End-to-End Multi-Camera 3D Multi-Object Tracking with Alternating Detection and Association | [
"Shuxiao Ding",
"Lukas Schneider",
"Marius Cordts",
"Juergen Gall"
] | cs.CV | [
"cs.CV"
] |
ADA-Track: End-to-End Multi-Camera 3D Multi-Object Tracking with Alternating Detection and Association
Shuxiao Ding, Lukas Schneider, Marius Cordts, Juergen Gall
May 14, 2024
======================================================================================================
Many query-based approaches for 3D Multi-Object Tracking (MOT) adopt the tracking-by-attention paradigm, utilizing track queries for identity-consistent detection and object queries for identity-agnostic track spawning.
Tracking-by-attention, however, entangles detection and tracking queries in one embedding for both the detection and tracking task, which is sub-optimal.
Other approaches resemble the tracking-by-detection paradigm, detecting objects using decoupled track and detection queries followed by a subsequent association.
These methods, however, do not leverage synergies between the detection and association task.
Combining the strengths of both paradigms, we introduce ADA-Track, a novel end-to-end framework for 3D MOT from multi-view cameras.
We introduce a learnable data association module based on edge-augmented cross-attention, leveraging appearance and geometric features.
Furthermore, we integrate this association module into the decoder layer of a DETR-based 3D detector, enabling simultaneous DETR-like query-to-image cross-attention for detection and query-to-query cross-attention for data association.
By stacking these decoder layers, queries are refined for the detection and association task alternately, effectively harnessing the task dependencies.
We evaluate our method on the nuScenes dataset and demonstrate the advantage of our approach compared to the two previous paradigms.
Code is available at <https://github.com/dsx0511/ADA-Track>.
§ INTRODUCTION
Accurate and consistent 3D Multi-Object Tracking (MOT) is critical for ensuring the reliability and safety of autonomous driving.
Recently, vision-centric perception solely relying on multi-view cameras has garnered significant attention in the autonomous driving community, thanks to lower cost of sensors and the advancements of transformers for computer vision.
Within this domain, two predominant approaches have emerged:
one transforms multi-view features into an intermediate dense Bird's-Eye View (BEV) representation <cit.>,
while the other leverages object queries <cit.> that directly interact with the multi-view images <cit.> to construct an object-centric representation.
Owing to the advantages in modeling object motion, the latter has been extended to query-based MOT in many works <cit.>.
Among the query-based MOT approaches, the majority adopts the tracking-by-attention (TBA) paradigm <cit.>.
As illustrated in subfig:tba, TBA utilizes track queries (colored squares) to consistently detect the same identity across frames and introduces object queries (white squares) to initialize tracks for newly appearing objects in each frame.
However, this highly entangled design is sub-optimal for balancing the detection and tracking performance.
Firstly, each track query, consisting of a single embedding, is tasked with accomplishing both detection and tracking, while the two tasks share the same network architecture.
Furthermore, track queries for identity-aware tracking and object queries for identity-agnostic detection are also processed by identical network weights.
We argue that such an approach is sub-optimal for extracting task-specific information from the single query representation.
Secondly, data association is implicitly addressed using self-attention between all queries.
Although it effectively integrates information of query relations into query refinement, a notable drawback arises during inference.
The network only outputs one confidence score for each object, but it is unclear whether it represents a detection or an association confidence.
This requires sophisticated manually tuned post-processing.
Other query-based MOT approaches <cit.> use decoupled detection and track queries to solve detection and tracking tasks independently.
Both query types will be associated explicitly in a heuristic or learnable module, as shown in subfig:tbd.
However, this still inherits the decoupled design of the tracking-by-detection (TBD) paradigm and struggles to optimize and harmonize both tasks effectively.
In this paper, we argue that detection and tracking pose a chicken-and-egg problem: accurate detection enables robust initialization and straightforward association to tracks, while well-established tracks incorporate temporal context to mitigate potential detection errors.
Our method elegantly addresses this challenge by leveraging synergies in both tasks while decoupling them.
We propose ADA-Track, a novel query-based end-to-end multi-camera 3D MOT framework that conducts object detection and explicit association in an alternating manner, as shown in subfig:ours.
We propagate track queries across frames representing a unique object instance, while generating decoupled detection queries that detect all objects in each frame.
Inspired by <cit.>, we propose a learned data association module based on an edge-augmented cross-attention <cit.>.
In this module, edge features between track and detection queries represent association information. These features are incorporated into attention calculations, updated layer-by-layer, and further used to output affinity scores.
Different from <cit.>, we include appearance features in the nodes and geometric features in the edges, resulting in a fully differentiable appearance-geometric reasoning.
We then integrate the learned association module into each transformer decoder layer of a query-based multi-camera 3D detector, DETR3D <cit.>.
In this way, the decoder layer sequentially conducts a query-to-image cross-attention to refine query representations for object detection and a query-to-query cross-attention to refine query and edge representations for data association.
By stacking the decoder layers, iteratively refined query and edge features provide useful information to each other, resulting in a harmonized optimization of the detection and tracking task.
We evaluate our method on the nuScenes dataset <cit.> and compare our proposed alternating detection and association paradigm with approaches based on the other two paradigms.
While achieving state-of-the-art performance, our proposed paradigm can easily be combined with various query-based 3D detectors.
§ RELATED WORK
Multi-camera 3D detection
Current research on multi-camera 3D object detection falls into two major categories.
The first category transforms multi-view image features into a dense Bird's-Eye View (BEV) representation using CNNs <cit.> or transformers <cit.>.
Although it has been demonstrated that temporal fusion in BEV effectively boosts the detection performance <cit.> or supports downstream tasks <cit.>, such a structured representation may struggle to effectively model object motion.
The second category contains works that follow DETR <cit.>, where sparse object queries interact with multi-view images <cit.>.
This sparse query-based representation facilitates effective object-centric temporal fusion by interacting queries with multi-frame sensor data <cit.> or propagating queries across frames <cit.>.
Consequently, end-to-end detection and tracking methods built on top of query-based detectors have emerged as a popular choice.
Tracking-by-attention
Proposed concurrently by TrackFormer <cit.> and MOTR <cit.>,
tracking-by-attention (TBA) leverages track queries to detect objects and simultaneously maintain their consistent identities across frames.
TBA regards MOT as a multi-frame set prediction problem, which relies on the self-attention for implicit association and a bipartite matching to force identity-consistent target assignment, similar to a learned duplicate removal <cit.>.
MUTR3D <cit.> extends the tracking-by-attention paradigm to multi-camera 3D MOT based on a DETR3D <cit.> detector, which additionally updates the 3D reference point of each query besides query feature propagation.
STAR-Track <cit.> improves MUTR3D by proposing a Latent Motion Model (LMM) to update the query appearance feature based on geometric motion prediction.
PF-Track <cit.> proposes a joint tracking and prediction framework, utilizing a memory bank of queries to refine queries and predict future locations over an extended horizon for occlusion handling.
Despite the fully differentiable design, TBA processes the same query representation using shared network weights for both detection and tracking tasks, which inevitably affects the balance of both tasks.
In this work, we address this problem by introducing decoupled task-dependent queries with a differentiable association module,
while an alternating optimization strongly couples both tasks in a more reasonable manner.
Tracking by detection
Many tracking-by-detection (TBD) approaches use a standalone algorithm for data association that can be combined with arbitrary object detectors.
SORT <cit.>, for instance, uses a Kalman Filter as the motion model and associates the objects using Hungarian Matching <cit.>.
Subsequent works <cit.> improve SORT and achieve competitive performance in many 2D MOT benchmarks <cit.>.
Similar pipelines for 3D MOT <cit.> have also demonstrated promising performance.
Besides model-based approaches, learning-based methods usually formulate the association problem using a graph structure and solve it using GNNs <cit.> or transformers <cit.>.
TBD in a joint detection and tracking framework has also become a popular choice combined with query-based detectors <cit.>.
These approaches usually decouple detection and track queries, process them independently, and associate them based on IoU <cit.>, box center <cit.>, pixel-wise distribution <cit.> or a learned metric <cit.>.
Although these works are end-to-end trainable, the association module is still separated from the upstream detector, and detection and tracking are processed sequentially, limiting the effective utilization of task dependencies.
In our work, we address this problem by stacking detection and association modules in an alternating fashion.
In doing so, we utilize the synergies between both tasks.
§ APPROACH
An overview of ADA-Track is shown in fig:overview.
For each frame t, feature maps F_c^t are extracted from multi-view images I_c^t using a CNN for each camera c.
A set of track queries Q_T^t (depicted as colored squares in fig:overview) are propagated from the previous frame in order to consistently detect the same identity across frames.
A fixed number of N_D detection queries Q_D^t (white squares in fig:overview) is randomly initialized and responsible to detect all objects in the current frame.
Following recent works, we assign a 3D reference to each of those queries <cit.>.
The transformer decoder layer first conducts a self-attention between queries and an image-to-query cross-attention (DETR3D <cit.> or PETR <cit.>) to refine queries for the object detection task.
Subsequently, a query-to-query edge-augmented cross-attention integrates both query types and edge features and refines them for the data association task.
The overall transformer decoder layer is repeated L_d times to alternately refine query and edge features for both detection and association task.
Finally, a track update module associates the track and detection queries and generates track queries Q_T^t+1 for the next frame.
§.§ Joint detection and association decoder layer
Next, we discuss the decoding process for a single frame t and omit the notation of the frame index t for simplicity.
Query-to-query self-attention
Existing approaches with decoupled queries <cit.> process track and detection queries independently and conduct self-attention only within the same query type.
In contrast, our approach seeks a joint optimization of the representations of both query types.
We concatenate both Q_T and Q_D and apply self-attention among all queries regardless of their type.
This self-attention enables detection queries to leverage track queries as prior information, leading to a more targeted interaction with image features in the subsequent layer.
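A minimal PyTorch sketch of this step (our illustration with assumed sizes N_T=12, N_D=300 and d_k=256; not the reference implementation) could look as follows:

import torch
import torch.nn as nn

d_k = 256
self_attn = nn.MultiheadAttention(embed_dim=d_k, num_heads=8, batch_first=True)

q_track = torch.randn(1, 12, d_k)   # N_T propagated track queries
q_det = torch.randn(1, 300, d_k)    # N_D newly initialized detection queries

q_all = torch.cat([q_track, q_det], dim=1)      # one joint set, no type mask
q_all, _ = self_attn(q_all, q_all, q_all)       # attention across both types
q_track, q_det = q_all.split([12, 300], dim=1)  # split back into the two sets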
Image-to-query cross-attention
Next, both query types Q_T and Q_D attend to the multi-view image features F_c with cross-attention.
Our approach is compatible with various sparse query-based 3D detectors; thus, the cross-attention can be implemented in multiple fashions, e.g., DETR3D <cit.> or PETR <cit.>.
The interaction between queries and images refines the query features to form an object-centric representation.
Subsequently, the network predicts bounding boxes and category confidences using MLPs for each query. This results in track boxes B_T^(l) from track queries and detection boxes B_D^(l) from detection queries, where l denotes the layer index of the decoder layer.
Each box b_i=[c_i, s_i, θ_i, v_i] ∈ℝ^9 is parameterized by 3D box center c_i ∈ℝ^3, 3D box size s_i ∈ℝ^3, yaw angle θ_i ∈ℝ and BEV velocity v_i ∈ℝ^2.
Both sets of boxes are used to calculate auxiliary box losses as well as to build the position encoding for the edge features that we will discuss next.
Query-to-query edge-augmented cross-attention
Our association module requires a differentiable and lightweight architecture that can be integrated into each decoder layer.
To this end, we opt for a learned association module that was recently proposed by 3DMOTFormer <cit.>.
The module is based on an Edge-Augmented Graph Transformer <cit.> to learn the affinity between tracks and detections.
Different from <cit.>, the query positions change across decoder layers when refining them iteratively in the joint detection and tracking framework.
Therefore, we opt for a fully-connected graph instead of a distance truncated graph to ensure the same graph structure in different layers, enabling the edge features to iterate over layers.
Our learned association leverages both appearance and geometric features.
Formally, for each decoder layer l, we use track Q_T^(l)∈ℝ^N_T × d_k and detection queries Q_D^(l)∈ℝ^N_D × d_k as node features to provide appearance information obtained from the image-to-query cross-attention, where d_k is the number of channels.
An MLP embeds aggregated pair-wise box differences to produce a relative positional encoding E_pos^(l)∈ℝ^N_D × N_T × d_k for edge features, E_pos^(l) = MLP(B_diff^(l)), where B_diff^(l) = { b_diff,ij^(l)}∈ℝ^N_T × N_D × 9.
These box position features are defined for each pair {i,j} of track b_T,i^(l)∈ℝ^9 and detection boxes b_D,j^(l)∈ℝ^9 by calculating their absolute difference b_diff,ij^(l) = |b_T,i^(l) - b_D,j^(l)|.
The position encoding E_pos^(l) is added to the edge features E^(l)∈ℝ^N_D × N_T × d_k as part of the input to the edge-augmented cross-attention, E^(l) E_pos^(l) + E^(l).
As the initial edge features E^(0) are zero-initialized for each frame, the input edge features are equal to the edge position encoding for the first layer l=1, E^(1) = E_pos^(1).
As shown in the right part of fig:overview, we treat track queries Q_T^(l) as source set (key and value) and detection queries Q_D^(l) as target set (query) in the edge-augmented cross-attention.
The edge-augmented attention
A^(l) = softmax( (Q_D^(l) W_Q^(l))(Q_T^(l) W_K^(l))^T/√(d_k) + E^(l) W_E1^(l))
takes both dot-product and edge features into consideration, where { W_Q^(l), W_K^(l)}∈ℝ^d_k × d_k and W_E1^(l)∈ℝ^d_k × 1 are learnable weights.
The feature representation of the targets, the detection queries, as well as the edge features are updated using the attention A^(l)∈ℝ^N_D × N_T × 1,
Q_D^(l+1) = Q_D^(l) + Â^(l)(Q_T^(l) W_V^(l)), E^(l+1) = E^(l) + A^(l) W_E2^(l),
where W_V^(l)∈ℝ^d_k × d_k and W_E2^(l)∈ℝ^1 × d_k are learnable weights and Â^(l)∈ℝ^N_D × N_T is A^(l) with squeezed third dimension.
The update of Q_D^(l) enables a feature integration of tracking and association information,
resulting in a better query-to-image interaction for the next layer l+1.
As there are no existing tracks in the first frame, we skip the edge-augmented cross-attention for t=1, and hence the decoder layer becomes identical to that of the underlying object detector, i.e., DETR3D <cit.> or PETR <cit.>.
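Putting the above together, the following single-head PyTorch sketch illustrates the attention and update equations of this module; the shapes and the weight names W_Q, W_K, W_V, W_E1, W_E2 follow the text, while everything else (single head, no normalization or feed-forward sublayers) is a simplifying assumption of ours:

import torch
import torch.nn as nn

class EdgeAugmentedCrossAttention(nn.Module):
    def __init__(self, d_k=256):
        super().__init__()
        self.d_k = d_k
        self.w_q = nn.Linear(d_k, d_k, bias=False)   # W_Q
        self.w_k = nn.Linear(d_k, d_k, bias=False)   # W_K
        self.w_v = nn.Linear(d_k, d_k, bias=False)   # W_V
        self.w_e1 = nn.Linear(d_k, 1, bias=False)    # W_E1: edge -> attention bias
        self.w_e2 = nn.Linear(1, d_k, bias=False)    # W_E2: attention -> edge update

    def forward(self, q_det, q_track, edges):
        # q_det: (N_D, d_k), q_track: (N_T, d_k), edges: (N_D, N_T, d_k),
        # where edges already contain the relative positional encoding E_pos.
        logits = self.w_q(q_det) @ self.w_k(q_track).T / self.d_k ** 0.5
        attn = torch.softmax(logits + self.w_e1(edges).squeeze(-1), dim=-1)  # A
        q_det = q_det + attn @ self.w_v(q_track)       # detection query update
        edges = edges + self.w_e2(attn.unsqueeze(-1))  # edge feature update
        return q_det, edges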
§.§ Track Update
After all L_d layers of the decoder, we obtain the final track and detection queries Q_T^(L_d) and Q_D^(L_d) as well as the edge features E^(L_d).
The track update module associates both query types and propagates feature embeddings of track queries Q_T^t+1 and their corresponding reference points C_T^t+1 to t+1.
Data association
We use different association schemes for training and inference.
During training, we first match tracks and detections to ground-truth objects (details in sec:train).
If a track and a detection query are matched to the same ground-truth identity, both queries are considered as a matched pair.
All track queries that are unmatched with a ground-truth are terminated, while all detection queries that are matched to the newly appearing ground-truth objects spawn a new track.
During inference, we apply an MLP followed by a sigmoid function on the final edge features E^(L_d) to estimate the affinity scores S between all track-detection pairs, S=sigmoid(MLP(E^(L_d))) ∈ℝ^N_D × N_T.
Then, the score matrix S is used as matching costs of a Hungarian Algorithm <cit.> to obtain a one-to-one matching.
We heuristically keep the unmatched tracks for T_d=5 frames and mark them as temporarily inactive tracks before termination.
An unmatched detection box initializes a new track if its confidence is higher than τ_new=0.4.
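These inference heuristics can be sketched as follows (our paraphrase with assumed numpy layouts; the paper uses the affinities S as Hungarian matching costs, realized here by negating the score matrix):

import numpy as np
from scipy.optimize import linear_sum_assignment

T_D, TAU_NEW = 5, 0.4  # miss tolerance and track-spawn threshold from the text

def associate(scores, det_conf, track_misses):
    # scores: (N_D, N_T) affinities S; det_conf: (N_D,) detection confidences;
    # track_misses: (N_T,) frames since each track was last matched.
    det_idx, trk_idx = linear_sum_assignment(-scores)  # max-affinity 1-to-1 match
    track_misses += 1
    track_misses[trk_idx] = 0                          # matched tracks stay active
    alive = track_misses <= T_D                        # terminate stale tracks
    matched = set(det_idx.tolist())
    newborn = [j for j in range(len(det_conf))
               if j not in matched and det_conf[j] > TAU_NEW]
    return list(zip(det_idx, trk_idx)), alive, newborn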
Feature and box update
Given a matched pair {i,j}, the track instance i is assigned with a track query q_T,i^(L_d) and a predicted track box b_T,i.
The same holds for the detection instance j with detection query q_D,j^(L_d) and detection box b_D,j.
Hence, we need to determine the resulting query and box for this associated pair.
Similar to <cit.>, we empirically choose the detection query q_D,j^(L_d) and detection box b_D,j to represent the associated pair {i,j}, which corresponds to an update of its associated track i at frame t: q̂_T,i^t = q_D,j^(L_d) and b̂_T,i^t = b_D,j.
We will analyse this choice in sec:ablation.
For unmatched detection or track queries, we directly use their respective features and boxes.
Eventually, the track queries Q̂_T^t = {q̂_T,i^t} and their corresponding boxes B̂_T^t = {b̂_T,i^t} determine the final output of frame t.
Query Propagation
We directly use the updated query features Q̂_T^t for the next frame, Q_T^t+1 = Q̂_T^t.
We also propagate their 3D reference point after applying a simple motion update following MUTR3D <cit.>.
Concretely, the 3D reference point (box center) ĉ_T,i^t and the BEV velocity v̂_T,i^t are extracted from the box parameter of b̂_T,i^t.
We then predict the reference point for the next frame t+1 with a constant velocity assumption: c_T,i^t+1 = ĉ_T,i^t + v̂_T,i^tΔ t,
where Δ t is the time difference between both frames.
We transform the predicted reference points c_T^t+1 to the new vehicle coordinate system with ego-motion compensation.
Together, the track queries combined with the predicted reference points {Q_T^t+1, C_T^t+1} serve as the input for frame t+1.
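The propagation of the reference points can be sketched as follows (our illustration; ego_rot and ego_trans are assumed inputs encoding the vehicle-frame transform from t to t+1):

import numpy as np

def propagate_reference_points(centers, velocities, dt, ego_rot, ego_trans):
    # centers: (N_T, 3) box centers, velocities: (N_T, 2) BEV velocities
    pred = centers.copy()
    pred[:, :2] += velocities * dt        # c^{t+1} = c^t + v^t * dt
    return pred @ ego_rot.T + ego_trans   # ego-motion compensation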
§.§ Training
Target assignment
Tracking-by-attention approaches use identity-guided matching for the track queries, while
the Hungarian Algorithm <cit.> matches the remaining ground-truth boxes and the object queries.
In this work, track and detection queries detect objects independently and are explicitly associated with each other, which requires isolated matching for both query types.
To achieve that, we match all ground-truth identities with detection queries using the Hungarian Algorithm, in addition to the same identity-guided matching for track queries.
fig:target_assign illustrates this difference in target assignment.
These matching results assign the targets for the box losses.
In addition, we apply an explicit loss function for the association module, where we regard it as a binary classification problem.
If a ground-truth identity is matched with both a track query and a detection query, the edge between these two queries should be classified as positive.
In all other cases, i.e., one of the two queries is matched with ∅ or the two queries are matched to different ground truths, the association target for this detection-track pair should be negative.
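Given the ground-truth identity assigned to each query (with -1 marking unmatched queries), these binary targets can be built as in this sketch (our illustration):

import numpy as np

def association_targets(det_gt_ids, track_gt_ids):
    # det_gt_ids: (N_D,), track_gt_ids: (N_T,) ground-truth ids per query
    same_id = det_gt_ids[:, None] == track_gt_ids[None, :]
    valid = (det_gt_ids[:, None] >= 0) & (track_gt_ids[None, :] >= 0)
    return (same_id & valid).astype(np.float32)  # (N_D, N_T) edge targets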
Loss function
For the bounding box loss in both query types, we follow <cit.> and use the Focal Loss <cit.> as the classification loss ℒ_cls and ℓ_1-loss as the regression loss ℒ_reg.
The track and detection queries contribute to separate loss terms.
This results in ℒ_cls,T and ℒ_reg,T for the track and ℒ_cls,D and ℒ_reg,D for the detection part.
For association, we use Focal Loss with α=0.5 and γ=1.0, denoted as ℒ_asso.
We include auxiliary losses after each intermediate decoder layer for all the previously mentioned loss terms.
Given a training sequence with T frames, the losses for the detection queries are calculated for all T frames, whereas the losses for track queries and association are calculated from the second frame onwards.
The overall loss for the whole training sequence can be formulated as
ℒ = ∑_t=1^T (λ_clsℒ_cls,D^t + λ_regℒ_reg,D^t) +
∑_t=2^T (λ_clsℒ_cls,T^t + λ_regℒ_reg,T^t + λ_assoℒ_asso^t).
We use λ_cls=2.0, λ_reg=0.25 following existing works <cit.> and λ_asso=10.0. The impact of the loss weight λ_asso is further analyzed in the supplementary.
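For completeness, the binary focal loss used for ℒ_asso with α=0.5 and γ=1.0 can be sketched as follows (standard sigmoid focal loss; the mean reduction over all edges is an assumption of ours):

import torch
import torch.nn.functional as F

def association_focal_loss(logits, targets, alpha=0.5, gamma=1.0):
    # logits: raw MLP outputs on the edge features; targets: binary edge labels
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)       # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()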
§ EXPERIMENTS
§.§ Experiment Setup
Dataset
We evaluate our approach on the nuScenes <cit.> dataset.
NuScenes is a large-scale dataset for autonomous driving with 700, 150, and 150 sequences for training, validation, and testing.
Each sequence is 20 seconds in length.
The sensor equipment contains LiDAR, RADAR, six cameras covering 360° Field of View as well as IMU and GPS.
Metrics
The primary metrics for 3D MOT on nuScenes are AMOTA (average multi-object tracking accuracy) and AMOTP (average multi-object tracking precision) <cit.>.
AMOTA is an average of the MOTAR over multiple recalls, where MOTAR is the recall-normalized MOTA at the corresponding recall r.
The calculation of MOTA takes IDS (identity switch), FP (false positive) and FN (false negative) into consideration.
AMOTP are the averaged position errors of all TPs (true positive) over all recalls.
NuScenes also uses secondary metrics from CLEAR MOT <cit.>, including MOTA, MOTP, IDS, FP, FN, etc.
All the values of the CLEAR MOT metrics are reported at the recall R, where the highest MOTA is reached, R = argmax_r MOTA_r.
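For reference, these quantities combine roughly as follows (our paraphrase of the nuScenes definition, with P the number of ground-truth positives and n = 40 recall steps):
MOTAR_r = max( 0 , 1 - ( IDS_r + FP_r + FN_r - (1-r) P )/( r P ) ) ,
AMOTA = 1/(n-1) ∑_r ∈{ 1/(n-1) , 2/(n-1) , … , 1 } MOTAR_r .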
Implementation details
To compare with most multi-camera MOT works <cit.>, we validate our approach using two query-based detectors: DETR3D <cit.> and PETR <cit.>.
For DETR3D experiments, input images are full-resolution 1600 × 900 and we use ResNet-101 <cit.> as the backbone with an FPN <cit.>.
For PETR experiments, we crop the images to 1600 × 640 and use VoVNetV2 <cit.> with FPN <cit.> as image feature extractor.
We initialize the model using the corresponding single-frame detector checkpoints pre-trained for 24 epochs.
We then train our tracker for 24 epochs on sampled mini-sequences which contain T=3 frames.
For all DETR3D experiments, we freeze the weights of the image backbone and FPN following STAR-Track <cit.>.
For PETR, we only freeze the image backbone following PF-Track <cit.>.
All models are trained using a cosine-annealing schedule with an initial learning rate of 2e^-4 and an AdamW <cit.> optimizer with a weight decay of 1e^-2.
All DETR3D experiments are trained on four V100 GPUs, while all PETR experiments use eight A100 GPUs.
Each GPU holds one batch element.
§.§ Paradigm Comparison
Baselines
To validate the superiority of our proposed alternating detection and association paradigm, we first compare our method with query-based tracking-by-attention and tracking-by-detection baselines that are also without bells and whistles.
We select MUTR3D <cit.> as the tracking-by-attention baseline by reproducing it on DETR3D and PETR.
For the tracking-by-detection baseline, we modify our architecture to follow subfig:tbd.
To that end, we first use standard DETR3D or PETR decoders to process track and detection queries independently. Both sets of queries are then fed into the association module by stacking our proposed query-to-query edge-augmented cross-attentions (sec:decoder).
This architecture shares the implementation details with ADA-Track as described in sec:exp_setup, e.g., batch size, training epochs, or weight freezing.
We denote both baselines as TBA-Baseline and TBD-Baseline.
Results
tab:val shows the comparison on the nuScenes validation split, where the baselines are shown in the top part of each detector group.
For both detectors, the TBD-Baselines achieve significantly higher AMOTA than the TBA-Baselines.
On top, ADA-Track outperforms the TBD-Baselines by 2.8%P AMOTA for DETR3D and 2.7%P for PETR, respectively, while achieving a considerably higher recall.
Considering the secondary metrics, we observe that the TBA-Baseline produces fewer IDS than the TBD-Baseline and ADA-Track, the two methods with explicit association.
However, the TBA-Baseline achieves a low IDS by tracking only the easy cases, resulting in a substantially higher FN and lower MOTA.
Compared to the TBD-Baseline, ADA-Track again improves MOTA through considerably more TPs and fewer FNs.
These observations show the importance of decoupled queries with explicit association as in the TBD-Baseline and in ADA-Track, which produce distinguishable query representations for both tasks and thus improve their performance.
ADA-Track further improves over the TBD-Baseline by fully utilizing the task inter-dependency via alternating detection and association.
§.§ Comparison with Existing Works
We compare ADA-Track with existing works based on TBA or TBD in the remaining part of each detector group in tab:val.
Due to an implementation issue, MUTR3D <cit.> reported a lower performance than our TBA-Baseline, which is a reproduction of MUTR3D with fewer training epochs and fixed backbone during training.
DQ-Track <cit.> also uses decoupled queries and a sophisticated learned association module following TBD.
ADA-Track outperforms it by 1.2%P AMOTA, which again highlights the effectiveness of our alternating detection and association design.
The training of STAR-Track <cit.> requires initialization from a pre-trained MUTR3D checkpoint, which results in a total of 48 training epochs.
For a fair comparison, we additionally train our model for 48 epochs, denoted as ADA-Track-long, which outperforms STAR-Track by 1.3%P AMOTA.
Comparing methods based on the PETR detector, ADA-Track achieves on-par performance (0.479 AMOTA) with PF-Track <cit.>.
However, PF-Track <cit.> is a joint tracking and prediction method, utilizing a track extension module to replace low-confidence detections with predicted trajectories when outputting tracking results, which additionally requires supervision from future frames.
Our ADA-Track is a pure tracking method but still achieves the same AMOTA.
Compared to PF-Track without track extension, ADA-Track achieves a considerably higher AMOTA by 2.6%P.
We compare ADA-Track with end-to-end methods on the test split in tab:test, where we train ADA-Track based on DETR3D <cit.> with a VoVNetV2-99 <cit.> backbone on both the training and validation set.
Compared to query-based methods using DETR3D or PETR in the second part of tab:test, ADA-Track achieves 0.456 AMOTA and 1.237 AMOTP, outperforming the recent state-of-the-art methods STAR-Track <cit.> and PF-Track <cit.> by 1.7%P and 2.2%P AMOTA, respectively.
Compared to other non-query-based methods, ADA-Track also achieves the best performance, improving over CC-3DT <cit.>, which uses a stronger BEVFormer <cit.> detector, by 4.6%P AMOTA.
In addition, we highlight that ADA-Track can be combined with many components proposed in existing tracking-by-attention works.
For instance, the Past and Future Reasoning in <cit.> or the Latent Motion Model in <cit.> improve the query embedding or the motion update during query propagation, while we use the simple approach as in MUTR3D. These extensions can be seamlessly integrated into our framework and further improvements are expected.
§.§ Ablation Study
Number of training frames
tab:frames shows the impact of varying the training sample length T. We observe that an increasing sample length increases the AMOTA and many secondary metrics.
A particularly substantial improvement occurs when increasing from a sample length of 2 to 3 frames, resulting in a notable 3.2%P increase in AMOTA.
The reason is that the autoregressive training scheme becomes active when T ≥ 3, allowing association results to propagate into subsequent frames.
This enables optimization through gradients across multiple frames to optimize the whole sequence, significantly boosting robustness during inference.
A further increase in training frames from 3 to 4 yields marginal improvements but a considerable computational and memory overhead during training.
Thus, we use T=3 as the default setting.
Joint optimization
One of the main contributions of this work is the introduction of an alternating detection and association paradigm, where both tasks iteratively inform each other layer-by-layer for joint optimization.
To assess its effectiveness, we combine the predictions of the bounding boxes and the association scores from different decoder layers.
The first part of tab:layer_output illustrates using bounding boxes from the last layer alongside association scores from decoder layers 1 to 5.
A notable increase in AMOTA is observed when using association scores from the second instead of the first layer.
Using association scores from higher layers leads to gradually increased AMOTA.
We also observe a considerable reduction in IDS.
Together, this shows the iterative improvements of the association.
The second part of tab:layer_output presents results using boxes from various layers combined with association scores from the last layer.
We can observe a similar tendency in AMOTA increase with higher layers as in the first part, where the most significant increase occurs from the first to the second layer.
In addition, using box predictions in later layers results in a notable improvement in AMOTP, confirming the iterative optimization of localization precision.
Overall, combining outputs from distinct layers yields gradual performance increases when outputs from higher layers are used.
This observation shows that the relevance between box and association predictions is not only constrained within the same layer but across different layers, which validates the iterative optimization through stacked layers of both tasks.
Feature update weight
We analyze the impact of the feature update as discussed in sec:track_update.
Here we generalize the feature update to a weighted average, q_T,i = w_T q_T,i^(L_d) + (1 - w_T) q_D,j^(L_d), where w_T is the update rate.
As shown in tab:feat_weight, substituting track query features with detection query features (w_T = 0) yields the optimal performance, surpassing other values by 0.7% to 1.5% in AMOTA.
This can be attributed to the fact that detection queries inherently incorporate track query features within their representation through edge-augmented cross-attention.
Thus, an additional merging between detection and track query features is unnecessary.
Box update
Both track and detection queries predict bounding boxes, resulting in two candidates for each instance when two queries are associated.
We use the box from the detection side to represent this associated pair and validate this choice in tab:box_update.
As shown in the first row (Exp. A), selecting the track box results in an AMOTA decrease of 1.3%P compared to our approach (Exp. C), which shows the better quality of detection boxes.
One could argue that detection queries are further refined in the edge-augmented cross-attention but track queries are not.
In the second row of tab:box_update, we additionally reverse the edge-augmented cross-attention by using tracks as queries and detections as keys.
This leads to an additional AMOTA decrease of 2.2%P.
This finding also verifies the necessity of the decoupled queries for decoupled tasks, where using detection queries to locate objects and associate them to track queries is more effective than relying on track queries to locate objects.
§ CONCLUSION
We presented a novel query-based multi-camera 3D multi-object tracking approach, termed ADA-Track.
We observed that decoupling the detection and association task while simultaneously leveraging the synergies between these tasks is the key to accomplishing high-quality tracking.
In line with this finding, we proposed a paradigm that conducts detection and association in an alternating manner.
In addition, we proposed a learned association module based on edge-augmented cross-attention which can be seamlessly integrated into any query-based decoder.
Extensive experiments show the effectiveness of our approach compared to other paradigms, while achieving state-of-the-art performance on the nuScenes tracking benchmark.
Acknowledgement
This work is a result of the joint research project STADT:up (Förderkennzeichen 19A22006O).
The project is supported by the German Federal Ministry for Economic Affairs and Climate Action (BMWK), based on a decision of the German Bundestag.
The author is solely responsible for the content of this publication.
Juergen Gall has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) GA 1927/5-2 (FOR 2535 Anticipating Human Behavior) and the ERC Consolidator Grant FORHUE (101044724).
§ MODEL DETAILS
We provide the details of the model architectures as well as some design choices for the experiments with the DETR3D <cit.> and PETR <cit.> detector.
DETR3D
For all DETR3D-based <cit.> experiments, we use ResNet-101 <cit.> as the backbone.
A FPN <cit.> image neck is attached to the ResNet-101 which outputs multi-scale feature maps {C_2, C_3, C_4, C_5} with downsampling rates {1/4, 1/8, 1/16, 1/32} for the detection head.
The detection head consists of 6 transformer decoder layers.
Each decoder layer has a query-to-query self-attention, a DETR3D-based query-to-image cross-attention, and an edge-augmented cross-attention.
The box regression and classification heads with two-layer MLPs are attached to the output of the DETR3D-based query-to-image cross-attention.
All decoder layers have an embedding dimension of 256 and a feed-forward dimension of 512.
Following DETR3D <cit.>, the query embeddings and query position encodings of the detection queries are randomly initialized, and the initial reference points are estimated from their initial position encoding using a linear layer.
PETR
For all PETR-based <cit.> experiments, we utilize VoVNetV2-99 <cit.> as backbone and a FPN <cit.> image neck.
The FPN feature map C_5 is upsampled and fused with C_4, producing the final single-scale feature map with the downsampling rate 1/16 for the detection head.
The architecture of the detection head is the same as in the DETR3D-based experiments, except that the query-to-image cross-attention is based on PETR <cit.> with 3D position encodings.
The embedding dimension is 256 and the feed-forward dimension is 2048.
In contrast to <cit.>, PETR <cit.> generates query position encodings using the uniformly initialized reference points while initializing detection queries.
In the query propagation phase, we adhere to this setting to update the query position encodings using the updated reference point positions at each timestamp.
We found this design choice to be essential for ensuring the effectiveness of the PETR-based .
§ ADDITIONAL EXPERIMENTS
We provide additional experiments of ADA-Track to validate our system design.
All experiments are evaluated on the nuScenes validation set with the DETR3D detector.
§.§ IDF1 Analysis
Approaches with an explicit association module typically exhibit a relatively higher IDS than TBA-based methods, as illustrated in our analysis in Tables 1 and 2.
However, this discrepancy might stem from the evaluation protocol of nuScenes tracking benchmark <cit.> which computes IDS for each category at the recall where the highest MOTA is achieved.
Nevertheless, different methods do not necessarily achieve the best IDS while achieving the highest MOTA due to trade-offs between FP, FN, and IDS.
To verify the association consistency of ADA-Track compared to the TBA-Baseline, we show a supplementary comparison of IDF1 across all categories in tab:idf1.
In comparison to the TBA-Baseline, we observe an 8.3%P higher average IDF1 and consistently higher IDF1 for all categories, especially for large objects such as buses and fast-moving objects such as motorcycles.
Moreover, the IDF1 improvement of ADA-Track is particularly noteworthy for categories with lower occurrences (except for cars and pedestrians), showing its ability to handle class imbalance.
The analysis based on IDF1 underscores the efficiency of ADA-Track in associating tracks consistently.
§.§ Complexity and runtime analysis
ADA-Track and the TBD-Baseline require additional Edge-Augmented Cross-Attention modules compared to the TBA-Baseline.
We compare the number of parameters, FLOPs, and runtime of ADA-Track with the TBA-Baseline and TBD-Baseline in tab:complexity.
As shown in tab:complexity, ADA-Track only adds about 6.7% parameters, 2.9% FLOPs, and 4.1% inference time compared to TBA, and even less compared to TBD.
Therefore, the additional computational overhead of ADA-Track is minimal.
Despite this slight increase in complexity, the performance gain of ADA-Track is much more significant, e.g., 17.6% higher AMOTA than the TBA-Baseline with DETR3D (see Table 1).
§.§ Ablation Studies
Appearance and geometry cues for association
We investigate the role of appearance and geometric cues in the learnable association module based on edge-augmented cross-attention.
As shown in tab:edge_feat, using only the center position (first row) instead of the complete box parameters (fourth row) leads to 1.2%P decrease in AMOTA, which underscores the significance of leveraging the entire box information for robust data association.
If geometric features are excluded (second row), the zero-initialized edge features are refined exclusively through appearance-based query features layer-by-layer, resulting in a substantial performance drop across all metrics when compared to the use of geometric-based edge features.
This observation highlights the usage of geometric features in enhancing the model's ability to distinguish between object instances.
Even without geometric features, using relative positional encodings derived from query feature differences (third row) yields a notable AMOTA increase of 1.7%P compared to scenarios without relative positional encoding (second row).
This finding shows the importance of the exact edge feature encoding in the model architecture of edge-augmented cross-attention.
Edge feature refinement
tab:edge_iter illustrates the effectiveness of the iterative refinement of edge features over decoder layers.
In the case where edge features remain independent within each decoder layer (first row), there is a notable decrease in AMOTA by 3.2%P when compared to scenarios where edge feature refinement occurs across layers (second row).
This experiment shows the potential for iterative optimization of data association across decoder layers, aligning with the fundamental design of our architecture.
In addition, since the edge features also participate in the query feature update in the edge-augmented cross-attention, the refinement of the edge features itself also contributes to the iterative refinement of query representations, which also improves the overall performance.
Masked self-attention
The self-attention layer in ADA-Track facilitates temporal modeling between queries. As shown in tab:block_attn, when we use only the attention from track to detection queries in the self-attention layer (third row), we observe no changes in AMOTA compared to our default setting (fourth row). The other metrics are even slightly better, indicating that this setting is slightly preferable compared to the default setting that we used in all other experiments. Conversely, using only the attention from detection to track queries (second row) results in a noticeable drop of 1.4%P in AMOTA. The results are similar when attention is not computed across different query types (first row).
These results show that self-attention is important for enhancing detection queries, enabling detection queries to incorporate information from past tracks and frames.
Association module
tab:asso_module compares Edge-Augmented Cross-Attention with alternative association modules.
We replace the Edge-Augmented Cross-Attention with association networks utilizing the difference or concatenation of detection and track query features (node features).
In both cases, we use an MLP and sigmoid to obtain the association scores S as before.
Using only the difference or concatenation of node features results in a significant performance drop, highlighting the necessity of using explicit edge features and the effectiveness of Edge-Augmented Cross-Attention in data association.
Robustness against appearance change
One of the biggest challenges of MOT in autonomous driving is ego-motion, where the camera mounting points move with the ego-vehicle, leading to appearance changes of observed objects due to varying observation angles.
We evaluate the sequences where the average speed of the ego-vehicle is ≥ 5m/s, indicating significant appearance changes in observed objects due to ego-motion.
As shown in tab:appearance, ADA-Track achieves similar AMOTA for sequences with ego-motion (≥ 5m/s) compared to all scenes (≥ 0m/s). Some secondary metrics are even better for the sequences with ego-motion.
Number of queries
tab:num_query shows the comparison of varying the number of detection queries N_D, where AMOTA increases until N_D=300.
Using fewer detection queries typically causes missed detections and lower detection performance, inevitably affecting the tracking performance.
However, continuing to increase the number of detection queries does not yield further improvements.
This is attributed to the risk of introducing an imbalance in the classification for the data association task, potentially reducing the association performance.
As a result, we opt for N_D=300 as the default setting.
Association loss weight
We evaluate the weight of the association loss λ_asso in tab:asso_loss.
Using λ_asso=5 or λ_asso=10 yields the same AMOTA of 0.378.
Lower or higher association loss weights result in performance drops with different ranges, which can be attributed to the imbalance of the multi-task training.
We choose λ_asso=10 as default.
Focusing parameter in association loss
We use the focal loss as the association loss ℒ_asso with a focusing parameter of γ=1.0.
This choice is validated in tab:asso_loss, where lowering or raising the focusing parameter γ results in a notable decrease of AMOTA, ranging from 1.2%P to 2.0%P.
Therefore γ=1.0 is a reasonable choice for effectively controlling the class imbalance in the data association task.
Hyperparameters during inference
During inference, we use two hyperparameters: the number of frames until unmatched tracks are kept and the score threshold τ_new to spawn new tracks.
We evaluate both hyperparameters in tab:miss_tolerance and tab:score_thresh.
As shown in tab:miss_tolerance, low values of T_d cause a significant performance drop caused by the insufficient handling of occluded objects.
The AMOTA peaks at T_d=5 and a higher value again leads to a decrease of AMOTA, which might keep too many tracks and cause a higher class imbalance in the data association.
As for the score threshold τ_new, when setting τ_new≤ 0.3, the tracker initializes excessive noisy detections, which results in a significant performance drop.
We choose τ_new=0.4 as the default value.
|
http://arxiv.org/abs/2405.09199v1 | 20240515091324 | Investigating the whirling heat current density in the Guyer--Krumhansl equation | [
"Mátyás Szücs",
"Carmelo Filippo Munafo",
"Róbert Kovács"
] | physics.app-ph | [
"physics.app-ph"
] |
Investigating the whirling heat current density in the Guyer–Krumhansl equation
Mátyás Szücs, Carmelo Filippo Munafo, Róbert Kovács
May 15, 2024
===============================================================================
Among the numerous heat conduction models, the Guyer–Krumhansl equation has a special role. Besides its various application possibilities in nanotechnology, cryotechnology, and even in case of modeling heterogeneous materials, it poses additional mathematical challenges compared to the Fourier or Cattaneo (a.k.a. Maxwell–Cattaneo–Vernotte) equations. Furthermore, the Guyer–Krumhansl equation is the first heat conduction model, which includes the curl of the heat flux density in the evolution equation. In the present paper, we place our focus on the consequences of the existence of such whirling heat current density by solving the two-dimensional Guyer–Krumhansl equation with a space and time-dependent heat pulse boundary condition. The discretization poses further challenges in regard to the boundary condition for which we propose a particular extrapolation method. Furthermore, with the help of the Helmholtz decomposition, we show the analogy with the linearized acoustics of Newtonian fluids, which reveals how the heat flux density plays the role of the velocity field. Our solutions also reveal an unexpected temperature evolution caused by the whirling heat flux density, namely, the temperature can locally be decreased for a short time in a case when the curl of the heat flux density dominates the heat conduction process.
§ INTRODUCTION
Several heat conduction models and approaches have been developed and tested in previous decades, such as the Cattaneo <cit.>, Guyer–Krumhansl (GK) <cit.>, two-temperature <cit.>, and Jeffreys equations <cit.>. From an engineering point of view, the GK and Jeffreys equations could be viable alternatives to the Fourier equation <cit.>. In some particular situations, the two-temperature model can also be useful, however, with strict restrictions on the composition, time scales and measurements <cit.>. Both models contain Fourier's law as a particular case; furthermore, they can model second sound (similarly to the Cattaneo equation) and provide a useful, effective description for heterogeneous materials <cit.> using a continuum approach. Therefore, the continuum background of the GK equation holds broad potential for more and more advanced practical applications, including foam-based heat exchangers and thermal storage technologies <cit.>.
The GK equation, however, is a notably more complicated model than the Jeffreys equation, especially in two or three spatial dimensions.
The reason lies in their background. The Jeffreys equation doubles Fourier's law and does not introduce second-order tensors along with further complications (couplings, isotropic representation, boundary conditions, etc.). On the contrary, the GK equation originates from phonon hydrodynamics <cit.>, and therefore, it is a special fluid model, thus allowing the curl of the heat flux. This phenomenon is particularly interesting for superfluids <cit.>, and also makes us revisit our expectations about how it can appear in the temperature history, even for solids in their effective description. In our work, we place the emphasis on the transient evolution which highlights particular temperature decrease effects occurring locally due to the heat current vorticity in agreement with the findings of <cit.>. This is apparent only in a transient setting, the time evolution of the heat current vorticity induces the local temperature decrease relative to the initial state. Therefore, these observations remain hidden in the work of Beardo et al. <cit.> since they applied the GK equation on a stationary problem.
The phonon hydrodynamic description is mainly characteristic of Rational Extended Thermodynamics, constraining the coefficients and possible nonlinearities (state dependence) <cit.>. However, it is possible to derive the GK equation in a continuum framework using the internal variable approach. We wish to introduce an additional interesting aspect compared to the work of Fülöp and Ván <cit.> in regard to the constitutive properties of the internal variable <cit.>. The continuum background offers more flexibility but keeps the structure of the GK equation. Consequently, as the model becomes free from the limitations of phonon hydrodynamics, the coefficients are merely restricted by the second law of thermodynamics. Thus, the GK equation can be used in the effective description of heterogeneous materials <cit.> and can still inherit the coefficients from phonon hydrodynamics. However, we wish to emphasize that we do not want to delve into the general details of two-dimensional materials and the related phonon hydrodynamic properties. In this respect, we want to refer to the recent work of Shang et al. <cit.>, in which they also conduct an investigation similar to ours, but they slightly modify the GK equation based on particular phonon hydrodynamic assumptions characteristic of two-dimensional materials at the nanoscale. Since phonon hydrodynamics strictly restricts the applicable parameters, the effects of the whirling heat current density remain hidden there; the flexibility afforded by the present continuum approach, however, allows further insights to be found.
In the present paper, we show the continuum derivation of the GK equation together with its complete isotropic representation – which, to the best of our knowledge, was first derived by Ván <cit.> –, highlighting the differences compared to phonon hydrodynamics. Longitudinal and transversal modes are also discussed, highlighting the role of the rotational part of the heat flux field and providing further insights into the structure of the GK equation. Additionally, we solve a two-dimensional problem with a space- and time-dependent heat pulse boundary condition, using a staggered scheme. Our particular interest is to show the consequences of space-dependent boundaries and how they affect the numerical realization. We evaluate the findings in relation to the curl of the heat flux, the necessary boundary conditions, and how such a complex model can relate to the Fourier equation.
§ THE GUYER–KRUMHANSL EQUATION WITHIN THE FRAME OF INTERNAL VARIABLES
Considering a rigid, homogeneous, and isotropic heat conductor at rest w.r.t. a given reference frame, the balance of internal energy with neglected mechanical interactions and volumetric heat sources reads as
ρ ∂_t e = - ∇· q ,
where ρ, e and q are the (mass) density – which is constant due to the assumption of rigidity –, the (mass) specific internal energy and the heat current density, respectively, ∂_t is the partial time derivative and ∇ is the nabla operator. Let φ denote a physical quantity of arbitrary tensorial order; its components w.r.t. the corresponding basis built from Cartesian base vectors e_j , j = 1,2,3 are denoted as φ_i_1… i_N , i_1 , … , i_N = 1,2,3. In accordance with the usual notation and usage of the nabla operator in continuum mechanics, right gradient, right divergence and right curl are defined as
grad_R φ := φ⊗∇ = ∂_j φ_i_1… i_N e_i_1⊗…⊗e_i_N⊗e_j ,
div_R φ := φ·∇ = ∂_j φ_i_1… i_N-1 j e_i_1⊗…⊗e_i_N-1 ,
curl_R φ := φ×∇ = ϵ_i_N j k ∂_j φ_i_1… i_N e_i_1⊗…⊗e_i_N-1⊗e_k ,
where Einstein summation is applied and ϵ_ijk is the Levi-Civita permutation symbol. Especially, for an arbitrary scalar field a
a ⊗∇ = ∇⊗ a =: ∇ a
while divergence and curl are not interpreted. For an arbitrary vector field v
v⊗∇ = ( ∇⊗v)^ T ,
v·∇ = ∇·v ,
v×∇ = - ∇×v ,
where ^ T denotes the transpose of a second order tensor.
Assuming that the thermodynamical state space is spanned by the specific internal energy and an internal variable denoted by ξ, the Gibbs relation for the specific entropy function s reads as
ds = 1/T de - w · dξ ,
the partial derivatives of entropy w.r.t. the variables e and ξ are
∂ s/∂ e = 1/T , ∂ s/∂ξ = - w ,
where T denotes the temperature and w is the entropy conjugate of the internal variable.
Let us assume that the entropy current density J is also generalized through a second-order tensor, called the Nyíri multiplier <cit.> and denoted by B,
J = ( 1/T I + B ) q ,
where I is the identity tensor. Therefore, the entropy production rate density σ is calculated through the balance of entropy
0 ≤ σ = ρ ∂_t s + ∇· J eq:Gibbs-eq:Nyiri= 1/T ρ ∂_t e - ρ w ·∂_t ξ + [ ( 1/T I + B ) ·∇ ] · q + ( 1/T I + B ) : ( q ⊗∇ )
eq:bal-e= - 1/T ∇· q - ρ w ·∂_t ξ + [ ( 1/T I + B ) ·∇ ] · q + 1/T ∇· q + B : ( q ⊗∇ )
= - ρ w ·∂_t ξ + [ ( 1/T I + B ) ·∇ ] · q + B : ( q ⊗∇ ) .
According to classical irreversible thermodynamics <cit.>, the entropy production rate density is a quadratic expression of thermodynamical fluxes and forces, among which functional relations are prescribed, i.e., the fluxes depend on the forces. However, differentiating between fluxes and forces in eq:ent-prod is complicated; these correspondences are rather formal. Nevertheless, the proven Onsagerian method works, and we choose and group the quantities according to Table <ref>.
Assuming isotropic material, due to Curie's principle the different tensorial orders and characters do not couple, hence positive semi-definiteness of eq:ent-prod is ensured via the linear Onsagerian equations
∂_t ξ = - l_11 ρ w + l_12 ( 1/T I + B ) ·∇ ,
q = - l_21 ρ w + l_22 ( 1/T I + B ) ·∇ ,
B = 𝕃 : ( q ⊗∇ )
with the criteria on the scalar coefficients
l_11 ≥ 0 , l_22 ≥ 0 , l_11 l_22 - l_12 l_21 ≥ 0
and the fourth-order constitutive tensor 𝕃 with appropriate conditions. Since isotropy of the material is assumed, eq:Ons-3 can be given in the isotropic decomposition as
B = ( L^s - L^d )/3 ( ∇· q ) I + L^d/2 ( q ⊗∇ + ∇⊗ q ) + L^A/2 ϵ : ( q ×∇ ) ,
with the Levi–Civita tensor ϵ and the scalar coefficients
L^ s ≥ 0 , L^ d ≥ 0 , L^ A ≥ 0 ,
which map the spherical, symmetric traceless (deviatoric), and antisymmetric parts of the gradient of heat current density, respectively. In general, coefficients l_11, l_12, l_21, l_22, L^ s, L^ d and L^ A are state-dependent parameters, however, for simplicity, we treat them as constants. Equations eq:Ons-1 and eq:Ons-2 can be reformulated as
∂_t ξ = - l_11 ρ w + l^S ( 1/T I + B ) ·∇ + l^A ( 1/T I + B ) ·∇ ,
q = - l^S ρ w + l^A ρ w + l_22 ( 1/T I + B ) ·∇
with l^S = 1/2 ( l_12 + l_21 ) and l^A = 1/2 ( l_12 - l_21 ), then the entropy production rate density can be given as
0 ≤ σ =
[ - ρ w , ( 1/T I + B ) ·∇ ] ·[ l_11 l^S ; l^S l_22 ][ - ρ w ; ( 1/T I + B ) ·∇ ] + B : ( q ⊗∇ ) ,
therefore, the terms in eq:Ons-11 and eq:Ons-22 with coefficients l^ A do not increase entropy. Let us now assume that l^ A = -1, l^ S = 0 and l_22 = 0, then one obtains for eq:Ons-11 and eq:Ons-22
∂_t ξ = - ( 1/T I + B ) ·∇ - l_11 ρ w ,
q = - ρ w .
According to the latter equation, the entropy conjugate of the internal variable, - ρ w, is identified with the heat current density. Furthermore, assuming the linear equation of state
w = m ξ
with a constant m – which is positive to ensure concavity of specific entropy –, the internal variable can be eliminated, hence eq:Ons-111 and eq:Ons-222 reduce to the single equation
- ∂_t q/( ρ m ) = - ( 1/T I + B ) ·∇ + l_11 q .
Finally, substituting eq:Ons-3-iso into eq:almost-GK, we obtain the Guyer–Krumhansl equation
τ ∂_t q + q = - λ∇ T + η_1 Δ q + η_2 ∇( ∇· q )
with the coefficients defined as
λ := 1/( l_11 T^2 ) ≥ 0 , τ := 1/( ρ m l_11 ) ≥ 0 , η_1 := ( L^d + L^A )/( 2 l_11 ) ≥ 0 , η_2 := ( 2 L^s + L^d - 3 L^A )/( 6 l_11 ) .
The balance of internal energy eq:bal-e, the Guyer–Krumhansl equation together with the (thermostatic caloric) equation of state
e = c T ,
where c is the specific heat capacity of the material, form a closed system of equations which, with appropriate initial and boundary conditions, can be solved.
We must observe that η_1 and η_2 are linearly independent coefficients, restricted merely by the second law of thermodynamics. Therefore, they are free from the restrictions of the phonon hydrodynamic background: η_1 is not identical to the square of the mean free path, and the ratio of η_2 to η_1 is not necessarily 2 either. The present continuum background provides a notably more flexible adjustment of the coefficients, either by means of an experiment such as <cit.> or by inheriting the phonon hydrodynamic approach <cit.>.
Furthermore, we wish to call attention to the coefficients (<ref>), whose functional relationships become apparent from these relations. It is clear that if λ = λ(T) holds (e.g., a linear or exponential temperature dependence), then l_11 = l_11(T) can be immediately given, and that T-dependence is inherited by all the other coefficients. The different parameters can thus be adjusted to the necessary T-dependence. Additionally, if such nonlinearities are required, then equations eq:Ons-3-iso, eq:Ons-11 and eq:Ons-22 [or, instead of the latter ones, eq:Ons-111 and eq:Ons-222] must be the starting point in order to take the additional contributions into account correctly. For instance, due to an assumed temperature dependence of the coefficients L^s, L^d and L^A, a direct multiplicative coupling between the temperature gradient and the gradient of the heat current density emerges.
Comparing our derivation to the work of Fülöp and Ván <cit.>, we have introduced two notable modifications here.
First, we distinguished between the internal variable and its entropy conjugate, revealing the need for an additional equation of state. In other words, the particular form of the specific entropy
s(e, ξ) = ŝ(e) - (m/2) ξ · ξ
with m > 0 constant immediately and implicitly imposes equation (<ref>); hence, in this linear case with constant coefficients, the two derivations are equivalent. When the equation of state eq:const-eq-xi is assumed to be linear, the internal variable itself can be identified with the heat current density, as done, for example, in <cit.>. Furthermore, the distinction between the internal variable and its conjugate enables compatibility with the GENERIC (General Equation for the Non-Equilibrium Reversible–Irreversible Coupling) framework <cit.>. Finally, we note that if the equation of state eq:const-eq-xi is beyond a linear relationship, then the elimination of the internal variable and its conjugate can be complicated or impossible; in this case, deeper knowledge (microscopic or mesoscopic interpretation) of these variables is required.
Second, we interpret the current multiplier B as a separate, relaxed state variable in accordance with <cit.>; hence a direct interpretation of Müller's K-vector as K = B · q is obtained <cit.>.
§ LONGITUDINAL AND TRANSVERSAL HEAT PROPAGATION
Via the vector Laplacian identity
Δq = ∇( ∇·q ) - ∇×∇×q
the Guyer–Krumhansl constitutive equation eq:GK can also be given as
τ q̇ + q = - λ ∇T + ( η_1 + η_2 ) ∇( ∇·q ) - η_1 ∇×∇×q ,
therefore, theoretically, transversal heat propagation can also be observed in a GK-type heat conductor. Via the Helmholtz decomposition, the heat current density can be uniquely (up to a space-independent but arbitrary time-dependent vector function) given as a sum of irrotational (curl-free, q^*) and solenoidal (divergence-free, q^∘) vector fields,
q = q^* + q^∘ , with ∇·q = ∇·q^* and ∇×q = ∇×q^∘ .
Curl-free and divergence-free components of the vector field are usually referred to as longitudinal and transversal components, respectively. Since ∇ T is curl-free, the governing equations [together with eq:eos] of a GK-type heat conductor can be reformulated as
c Ṫ = - ∇·q^* ,
τ q̇^* + q^* = - λ ∇T + ( η_1 + η_2 ) Δq^* ,
τ q̇^∘ + q^∘ = η_1 Δq^∘ ,
This decomposition reveals the time evolutions of the longitudinal and transversal components of the heat current density. Here, the evolution of q^∘ is decoupled from the evolutions of q^* and T; thus q^∘ can only be introduced through a spatially dependent boundary condition or by a particular initial condition. A non-homogeneous temperature field alone cannot induce a non-zero q^∘. Since boundary conditions are prescribed on the heat current density q itself, it is inevitable to introduce disturbances into both parts of q, but the unique separation of these on the boundary can be complicated.
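To make the split concrete, the following is a minimal numpy sketch of the Helmholtz projection of a sampled two-dimensional field q, assuming a periodic grid for simplicity (the function name and grid are illustrative choices; the actual problem below has q-boundaries instead of periodic ones, where, as noted above, the separation is more delicate):

import numpy as np

def helmholtz_split(qx, qy, dx, dy):
    """Split q = q* + q^o into a curl-free (q*) and a divergence-free (q^o) part."""
    ny, nx = qx.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                          # the mean mode is both curl- and divergence-free
    Qx, Qy = np.fft.fft2(qx), np.fft.fft2(qy)
    kq = KX * Qx + KY * Qy                  # k . q_hat
    qx_l = np.fft.ifft2(KX * kq / k2).real  # longitudinal projection k (k . q_hat)/|k|^2
    qy_l = np.fft.ifft2(KY * kq / k2).real
    return qx_l, qy_l, qx - qx_l, qy - qy_l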
Later on, we refer to η_1 + η_2 as the longitudinal GK coefficient and to η_1 as the transversal GK coefficient. Based on our previous one-dimensional studies of the GK equation <cit.>, the longitudinal GK coefficient (usually denoted by κ^2 in one spatial dimension) has to be positive semi-definite; hence the seemingly indefinite η_2 parameter in eq:coeff is constrained by
η_2 ≥ - η_1 .
An additional interesting property is related to the Fourier resonance condition, under which the temperature history given by the GK equation is identical to the Fourier one. Namely, substituting the gradient of eq:gkT into the partial time derivative of eq:gkq and taking advantage of the commutation of ∇ and ∂_t, one obtains
τ ( q̈^* - [ ( η_1 + η_2 )/τ ] Δq̇^* ) + ( q̇^* - ( λ/c ) Δq^* ) = 0 ,
which is the sum of a Fourier heat conduction equation and the partial time derivative of a slightly modified Fourier heat conduction equation. If the additional time scale becomes identical to the one given by the thermal diffusivity,
( η_1 + η_2 )/τ = λ/c ,
Fourier resonance occurs. The resonance condition eq:Fres is the same as that obtained in the one-dimensional case <cit.>; however, in the three-dimensional setting, Fourier resonance appears only in the longitudinal direction, and the transversal contribution may distort this behaviour. When
[ ( η_1 + η_2 )/τ ] / [ λ/c ] > 1 ,
then over-diffusive solutions are obtained, while in the opposite, under-damped case, attenuated wave-like propagation of the temperature field is observable. From an engineering point of view, Fourier resonance also seems to be a natural requirement, since the GK equation contains the Fourier equation as a particular case, and thus it is highly advantageous in practice if the GK equation can reproduce the simpler Fourier solution by a particular alignment of coefficients, without any modification of the model.
Let us recall here the governing equations of the linearized acoustics of Newtonian fluids, which, applying the Helmholtz decomposition to the velocity field v, read
ρ̇ = - ρ̄ ∇·v^* ,
ρ̄ v̇^* = - ā_s^2 ∇ρ + ( η̄_Vol + (4/3) η̄_Sh ) Δv^* ,
ρ̄ v̇^∘ = η̄_Sh Δv^∘
with the density difference ρ measured from the density ρ̄ of the unperturbed state, the isentropic speed of sound ā_s, and the volume and shear viscosities η̄_Vol and η̄_Sh, respectively (equations (3.100) and (3.109) in <cit.>). Via the curl of the velocity, the vorticity ω = ∇×v = ∇×v^∘ is defined; hence eq:ac-3 can also be written for ω instead of v^∘. Therefore, the Helmholtz decomposition highlights that, in the linear approximation of acoustics, the evolution of ω is decoupled from ρ and v^*, with the result that sound is a longitudinal wave [eq:ac-1 and eq:ac-2] and vorticity cannot be introduced through the acoustic fields ρ and v^* (nor through the pressure field). We are dealing with something very similar in the case of the GK equations eq:gkT, eq:gkq and eq:gk0.
§ A STAGGERED GRID FINITE DIFFERENCE METHOD DEMONSTRATED ON THE HEAT PULSE EXPERIMENT IN THREE SPATIAL DIMENSIONS
We aim to numerically model the heat pulse experiment in which a single short pulse thermally excites the sample. We take into account both the spatial and time dependence of the pulse, serving as an outstanding example to demonstrate the role of boundary conditions due to the appearance of in-plane derivatives.
In order to ease the discretization of the second-order derivatives of q, it is advantageous to introduce the gradient of the heat current density as an auxiliary variable,
Q := q ⊗ ∇ ;
hence, a system of first-order equations has to be solved. Additionally, Q feels natural to introduce in the case of the GK equation, as it strongly resembles the current multiplier B. After that reformulation, we obtain
τ q̇ + q = - λ ∇T + η_1 Q · ∇ + η_2 ∇ tr Q ,
for which it is crucial to correctly discretize Q on the boundary.
We wish to solve the GK equation in a Cartesian coordinate system for a rectangular domain of size X × Y × Z. We assume that the heat pulse excites the entire Z direction uniformly, so we can reduce the problem to two dimensions in X and Y. Furthermore, we consider symmetry at y = 0, so that - Y/2 ≤ y ≤ Y/2, and thus we deal with only half of the rectangle. Consequently, the boundary conditions are
q_x ( t , x = 0 , y ) =
Q_P,Z / ( τ_P Y_P ) [ 1 - cos( 2πt/τ_P ) ] [ 1 + cos( 2πy/Y_P ) ] if 0 ≤ t ≤ τ_P and 0 ≤ y ≤ Y_P/2 ≤ Y/2 ,
0 otherwise ,
q_x ( t , x = X , y ) = 0 ,
q_y ( t , x , y = 0 ) = 0 ,
q_y ( t , x , y = + Y/2 ) = 0 .
where Q_ P,Z is the Z-length specific amount of heat introduced during the heat pulse, measured in J/m.
The initial condition describes equilibrium with a homogeneous temperature distribution T(t=0,x,y) = T_0; hence q(t=0,x,y) = 0 and Q(t=0,x,y) = 0.
We wish to transform the GK equation to a non-dimensional one using the following characteristic scales. We choose X for the length scale and X^2/α for the time scale using the thermal diffusivity α := λ/ c (this leads to the usual Fourier number).
Therefore, we obtain the following non-dimensional variables
t̂ := t/( X^2/α ) , x̂ := x/X , ŷ := y/X ,
and derivatives
∂/∂t̂ = ( X^2/α ) ∂/∂t , ∂/∂x̂ = X ∂_x , ∂/∂ŷ = X ∂_y .
The non-dimensional fields read
q̂_i := q_i / [ α Q_P,Z / ( X^3 R_Y ) ] , Q̂_ij := Q_ij / [ α Q_P,Z / ( X^4 R_Y ) ] , T̂ := ( T - T_0 )/( T_max - T_0 ) = c X^2 R_Y ( T - T_0 ) / Q_P,Z
in which i, j = x, y, R_Y = Y/X and T_ max is calculated via integrating eq:bal-e [together with eq:eos] on the whole sample volume in time from the initial homogeneous temperature state T_0 to the final homogeneous temperature state T_ max.
Let us summarize the complete system of non-dimensional equations,
∂T̂/∂t̂ = - ( ∂q̂_x/∂x̂ + ∂q̂_y/∂ŷ ) ,
τ̂ ∂q̂_x/∂t̂ + q̂_x = - ∂T̂/∂x̂ + ( η̂_1 + η̂_2 ) ∂Q̂_xx/∂x̂ + η̂_1 ∂Q̂_xy/∂ŷ + η̂_2 ∂Q̂_yy/∂x̂ ,
τ̂ ∂q̂_y/∂t̂ + q̂_y = - ∂T̂/∂ŷ + ( η̂_1 + η̂_2 ) ∂Q̂_yy/∂ŷ + η̂_1 ∂Q̂_yx/∂x̂ + η̂_2 ∂Q̂_xx/∂ŷ ,
Q̂_xx = ∂q̂_x/∂x̂ ,
Q̂_xy = ∂q̂_x/∂ŷ ,
Q̂_yx = ∂q̂_y/∂x̂ ,
Q̂_yy = ∂q̂_y/∂ŷ ,
where
τ̂ = τ/( X^2/α ) , η̂_1 = η_1/X^2 , η̂_2 = η_2/X^2 ,
and the non-dimensional heat pulse boundary condition reads
q̂_x ( t̂ , x̂ = 0 , ŷ ) =
R_Y / ( τ̂_P R_Y,P ) [ 1 - cos( 2πt̂/τ̂_P ) ] [ 1 + cos( 2πŷ/R_Y,P ) ] if 0 ≤ t̂ ≤ τ̂_P and 0 ≤ ŷ ≤ R_Y,P/2 ≤ R_Y/2 ,
0 otherwise ,
where R_Y,P = Y_P/X.
Spatial discretization is realized via a staggered scheme <cit.>, which is depicted in Figure <ref>, and the structure of the governing equations eq:nondim-1–eq:nondim-n restricts how each field can be represented on the discrete lattice with directionally equidistant grid points of spacings Δx̂ and Δŷ; hence x̂_m = m Δx̂, m = 1, …, M, ŷ_n = n Δŷ, n = 1, …, N, where M = 1/Δx̂ and N = R_Y/(2Δŷ). The investigated time interval is also discretized through equidistant time steps Δt̂, t̂^j = j Δt̂, j = 1, …, J. Therefore, the approximated value of a function f at the discrete time and space coordinates ( t̂^j, x̂_m, ŷ_n ) is denoted by f^j_m,n. Temperature, as a state variable characterizing one discrete cell homogeneously, is placed in the middle of the cell, while the heat current density characterizes fluxes through the boundaries of the cell; therefore, the corresponding normal components of the heat current density are placed on the boundaries of the cell, in line with the discrete temperature values. The discretization of Q follows directly from the discrete values of the heat current density and equations eq:nondim-2–eq:nondim-n; consequently, its diagonal elements are also in the middle of the cell, but its off-diagonal elements are placed in the corners of the cell. Since we apply q-boundaries, only complete cells are used to discretize the entire spatial domain.
Furthermore, let us note that, according to Figure <ref>, one needs to prescribe Q_xy and Q_yx on the boundaries, in accordance with the q-boundaries. On each side, one of these off-diagonal quantities can be determined analytically and represented on the discrete lattice. For instance, for a given q_x(t,x=0,y), Q_xy can be determined immediately; however, Q_yx must be extrapolated from the bulk nodes. This procedure holds for all four boundaries. For the extrapolation, we use quadratic Lagrange polynomials in order to preserve the sign given by the three bulk points next to each boundary; a minimal sketch is given below. This extrapolation is schematically demonstrated in Figure <ref> for one setting, in which the m-th value of Q_yx is calculated based on the bulk points m+1, m+2, and m+3. We want to emphasize that any direct definition of the off-diagonal components of Q on the boundaries can significantly distort the physical content of the solution, and most probably a different problem is solved than expected in that case. With our method utilizing Lagrange polynomials, we avoid the definition of any additional, unnecessary boundary conditions or the introduction of virtual nodes.
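As an aside, the extrapolation step itself is a fixed stencil; a minimal sketch for a uniform grid (the function name is ours) reads:

def extrapolate_corner(f1, f2, f3):
    """Quadratic Lagrange extrapolation of a boundary corner value from the
    three nearest bulk values f1, f2, f3 on a uniform grid; the Lagrange
    polynomial through (1, f1), (2, f2), (3, f3) evaluated at 0 gives
    f(0) = 3 f1 - 3 f2 + f3."""
    return 3.0 * f1 - 3.0 * f2 + f3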
For the time derivatives, we choose the simplest forward time-stepping method, the explicit Euler method, which is adequate for our aim of investigating the solutions of the two-dimensional GK equation. Summarizing the numerical scheme built in accordance with Figure <ref>:
T̂^j+1_m+1/2,n+1/2 = T̂^j_m+1/2,n+1/2 + Δt̂ [ ( (q̂_x)^j_m,n+1/2 - (q̂_x)^j_m+1,n+1/2 )/Δx̂ + ( (q̂_y)^j_m+1/2,n - (q̂_y)^j_m+1/2,n+1 )/Δŷ ] ,
(q̂_x)^j+1_m,n+1/2 = ( 1 - Δt̂/τ̂ ) (q̂_x)^j_m,n+1/2 + ( Δt̂/τ̂ ) [ ( T̂^j_m-1/2,n+1/2 - T̂^j_m+1/2,n+1/2 )/Δx̂
+ ( η̂_1 + η̂_2 ) ( (Q̂_xx)^j_m+1/2,n+1/2 - (Q̂_xx)^j_m-1/2,n+1/2 )/Δx̂ + η̂_1 ( (Q̂_xy)^j_m,n+1 - (Q̂_xy)^j_m,n )/Δŷ
+ η̂_2 ( (Q̂_yy)^j_m+1/2,n+1/2 - (Q̂_yy)^j_m-1/2,n+1/2 )/Δx̂ ] ,
(q̂_y)^j+1_m+1/2,n = ( 1 - Δt̂/τ̂ ) (q̂_y)^j_m+1/2,n + ( Δt̂/τ̂ ) [ ( T̂^j_m+1/2,n-1/2 - T̂^j_m+1/2,n+1/2 )/Δŷ
+ ( η̂_1 + η̂_2 ) ( (Q̂_yy)^j_m+1/2,n+1/2 - (Q̂_yy)^j_m+1/2,n-1/2 )/Δŷ + η̂_1 ( (Q̂_yx)^j_m+1,n - (Q̂_yx)^j_m,n )/Δx̂
+ η̂_2 ( (Q̂_xx)^j_m+1/2,n+1/2 - (Q̂_xx)^j_m+1/2,n-1/2 )/Δŷ ] ,
(Q̂_xx)^j_m+1/2,n+1/2 = ( (q̂_x)^j_m+1,n+1/2 - (q̂_x)^j_m,n+1/2 )/Δx̂ ,
(Q̂_xy)^j_m,n = ( (q̂_x)^j_m,n+1/2 - (q̂_x)^j_m,n-1/2 )/Δŷ ,
(Q̂_yx)^j_m,n = ( (q̂_y)^j_m+1/2,n - (q̂_y)^j_m-1/2,n )/Δx̂ ,
(Q̂_yy)^j_m+1/2,n+1/2 = ( (q̂_y)^j_m+1/2,n+1 - (q̂_y)^j_m+1/2,n )/Δŷ .
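To make the update structure concrete, the following is a minimal numpy sketch of one explicit Euler step of this staggered scheme; the grid sizes and parameter values are placeholder assumptions, and the pulsed boundary condition for q̂_x as well as the Lagrange extrapolation of the boundary corner values of Q̂ must be applied separately before each step:

import numpy as np

M, N = 50, 25                          # number of cells in x and y (assumed)
dx, dy, dt = 1.0 / M, 0.5 / N, 1.0e-5
tau, eta1, eta2 = 0.05, 0.025, 0.025   # non-dimensional GK parameters (assumed)

T  = np.zeros((M, N))                  # T at cell centers (m+1/2, n+1/2)
qx = np.zeros((M + 1, N))              # q_x on vertical cell faces (m, n+1/2)
qy = np.zeros((M, N + 1))              # q_y on horizontal cell faces (m+1/2, n)

def step(T, qx, qy):
    # discrete gradient of q: diagonal entries at cell centers,
    # off-diagonal entries at the interior cell corners
    Qxx = (qx[1:, :] - qx[:-1, :]) / dx        # shape (M, N)
    Qyy = (qy[:, 1:] - qy[:, :-1]) / dy        # shape (M, N)
    Qxy = (qx[:, 1:] - qx[:, :-1]) / dy        # shape (M+1, N-1)
    Qyx = (qy[1:, :] - qy[:-1, :]) / dx        # shape (M-1, N+1)

    T_new = T - dt * (Qxx + Qyy)               # balance of internal energy

    qx_new, qy_new = qx.copy(), qy.copy()      # boundary faces stay prescribed
    qx_new[1:-1, 1:-1] = (1 - dt / tau) * qx[1:-1, 1:-1] + (dt / tau) * (
        (T[:-1, 1:-1] - T[1:, 1:-1]) / dx
        + (eta1 + eta2) * (Qxx[1:, 1:-1] - Qxx[:-1, 1:-1]) / dx
        + eta1 * (Qxy[1:-1, 1:] - Qxy[1:-1, :-1]) / dy
        + eta2 * (Qyy[1:, 1:-1] - Qyy[:-1, 1:-1]) / dx)
    qy_new[1:-1, 1:-1] = (1 - dt / tau) * qy[1:-1, 1:-1] + (dt / tau) * (
        (T[1:-1, :-1] - T[1:-1, 1:]) / dy
        + (eta1 + eta2) * (Qyy[1:-1, 1:] - Qyy[1:-1, :-1]) / dy
        + eta1 * (Qyx[1:, 1:-1] - Qyx[:-1, 1:-1]) / dx
        + eta2 * (Qxx[1:-1, 1:] - Qxx[1:-1, :-1]) / dy)
    return T_new, qx_new, qy_new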
§ NUMERICAL RESULTS
§.§ 2D Fourier solutions
Here, let us start with the Fourier heat equation and present the plots we use to visualize the temperature and heat flux density histories in time and space. We wish to emphasize that we have a separate code for the Fourier equation, in which we solve only the Fourier equation and not a simplification of the GK equation; hence there are no difficulties with the boundary conditions in this case, and no Q is used. Figure <ref> shows the temperature histories at specified spatial points. The first column shows the front face, each subsequent column increases the spatial position by 1/4, and the last one is related to the rear face of the sample. Additionally, the first row shows the top side, and the last one presents the temperature history on the symmetry axis. We use the same plot concept for the GK equation as well. Furthermore, it is insightful to check the vector plot of the heat flux field, in which we can observe the effect of the space-dependent excitation and compare it immediately with the corresponding temperature distribution; see Figure <ref>. Also, Figure <ref> shows that it is straightforward to reproduce the well-known 1D solutions by applying a spatially homogeneous boundary condition. The curl of the heat flux field is identically zero. For all the subsequent calculations, we fix R_Y,P = 0.4 with fixed spatial resolution (Δx̂ = Δŷ = 0.02), and τ̂_P = 0.01.
§.§ 2D GK solutions
§.§.§ Demonstrating the Fourier resonance
First, we show that the GK equation reproduces the Fourier solutions when η̂_1 = 0 and η̂_2 = τ̂ = 0.05; Figure <ref> shows the difference between the temperature fields. The observed errors are practically zero. This solution also supports our handling of Q on the boundary and the extrapolation method, as it reproduces the Fourier solution in a particular parameter setting. In this parameter setting, the ∇×∇×q term vanishes in Eq. (<ref>), and thus the Laplacian of q remains. Notable differences emerge when we keep τ̂ = 0.05 but set η̂_1 = η̂_2 = 0.025: despite η̂_1 + η̂_2 = τ̂, the Fourier resonance is violated by the transversal contribution, and the solution significantly differs from the Fourier one; see Figure <ref> for details.
Let us turn our attention to the more interesting and intriguing solutions when η̂_1≠0, and thus the curl of the heat flux field becomes meaningful.
§.§.§ Vorticity-free solution
We call a solution over-diffusive if η̂_1 + η̂_2 > τ̂. Let us start with the case where we keep η̂_1 = 0 in order to avoid the effects of a nonzero curl of the heat flux field. It is worth comparing the characteristics of the temperature history in the middle to that on the rear side. In the middle, two distinct time scales are apparent, in contrast to the rear side (see Figure <ref>). In heat pulse experiments on heterogeneous materials, two such distinct time scales are also visible, and this numerical solution reflects the size dependence of the observation <cit.>. This effect might not be apparent for thicker samples, and it also depends on the material properties. Figure <ref> shows the 2D vector plot of the heat flux, in which the over-diffusive behavior remains hidden; the temperature contours (isothermal lines) are slightly distorted compared to the Fourier case. Furthermore, since η̂_1 = 0, the curl of the heat flux field is expected to be zero, and this is reflected by Figure <ref>, too.
§.§.§ Solutions with strong vorticity
We turn our attention to the reverse case, in which we keep η̂_2 = 0 and investigate the effects of the parameter η̂_1; let it be η̂_1 = 0.075. Figure <ref> presents the temperature history at the given spatial points. Comparing it to the previous case, we observe a notably different behavior. First, near the front face, the temperature decreases due to the significant curl effects. We emphasize that this is not identical to reaching a negative absolute temperature: the apparent negative temperature is relative to the initial temperature, in contrast to the observations of Zhukovsky <cit.>.
However, the temperature field can exhibit unusual evolution in such a particular parameter setting since the GK equation is based on a hydrodynamic analogy, and the curl of the heat flux field can naturally appear. Here, we particularly strengthened this effect to make it easily observable. This is only meaningful in a two- or three-dimensional setting. This temperature-decreasing effect disappears soon and is not observable for any other spatial domains. This is also depicted in Figure <ref>. Furthermore, contrary to the previous situations, the curl of the heat flux field becomes significantly larger (Figure <ref>).
§.§.§ Deviating from the phonon hydrodynamic ratio
We wish to recall that the ratio η̂_2/η̂_1 = 2 is fixed in the phonon hydrodynamic approach, but this does not necessarily hold in a continuum framework. In order to make this difference apparent, we provide solutions with respect to η̂_2/η̂_1. Figure <ref> shows the rear-side temperature history for three situations with η̂_2/η̂_1 = {1.5, 2, 2.5} and fixed η̂_1 = 0.05. This does not show any remarkable properties compared to the one-dimensional case, for which only the effect of η̂_1 + η̂_2 is observable, and increasing the over-diffusion makes the temperature signal propagation faster. However, if we consider the front-side temperature history in the middle (Figure <ref>), it becomes more visible how the ratio η̂_2/η̂_1 modifies the solution. Decreasing η̂_2/η̂_1 amplifies the temperature-decreasing effect near the heat pulse, since η̂_1, the rotational part, becomes more dominant.
§.§.§ How does the curl of heat current density behave on the boundary?
Previously, we introduced the auxiliary field Q in order to ease the discretization of the second-order spatial derivatives and to make it easier to realize the boundary conditions properly. Since the diagonal elements of Q are inside the spatial domain, they are not directly related to the boundary conditions. However, the off-diagonals are not independent of the q-boundary. We introduced an extrapolation from the bulk in order to avoid defining incompatible boundary data for the unknown off-diagonals, thus avoiding the introduction of any artificial distortion. Now let us depict the difference between the Q_xy and Q_yx components in two cases. In fact, this difference is the only component of the curl of the in-plane heat current density. In the first case, η̂_1 = 0 is considered; hence the rotational term is zero (η̂_2 = τ̂ = 0.05). In the second case, η̂_1 = η̂_2 = 0.05, and notable differences are expected. Figure <ref> presents their difference, highlighting that η̂_1 indeed introduces significant changes in the evolution of the off-diagonals of Q, especially near the boundaries, but also notably affecting the bulk behavior. Figure <ref> shows the time evolution in agreement with Eq. (<ref>), presenting an exponential decay in time.
§ DISCUSSION AND SUMMARY
In the present paper, we revisited the continuum thermodynamic background of the Guyer–Krumhansl equation. By distinguishing between the internal variable and its entropy conjugate, we revealed the need for a further constitutive relationship, which connects these two together. This additional possibility remains hidden when a particular extension of the entropy density is assumed, especially when the internal variable is immediately identified with the heat flux density. Although we do not investigate its outcomes in greater detail here, we note that it poses further potential and could have non-trivial consequences.
Furthermore, we applied a staggered discretization approach to numerically solve a two-dimensional setting including a space- and time-dependent heat flux boundary condition. The discretization is eased by introducing a second-order tensor Q as an auxiliary quantity, which is also helpful in the proper realization of the boundary conditions. We discussed that Q is not independent of the given boundary condition; however, not all components of Q can be found immediately. In order to avoid artificial and unnecessary assumptions, we proposed using a quadratic Lagrange extrapolation based on the bulk points to update the unknown components on the boundary. That approach successfully reproduced the solutions of the Fourier equation under the resonance condition (η̂_1 = 0, with vanishing rotational term).
Additionally, the continuum background of the GK equation allowed us to adjust the parameters in a range that is beyond the validity of phonon hydrodynamics. In a case when the rotational terms dominate the time evolution of the heat flux density, we could observe that the temperature can decrease significantly, even below the initial temperature. However, we want to strongly emphasize that this phenomenon is not equivalent to obtaining a negative temperature, and it occurs only locally, for a brief period. This is a characteristic outcome of the whirling heat current density.
Our analysis also revealed that when the GK equation is used as an effective description of heterogeneous materials, η̂_1 = 0 is a necessary choice in order to keep the possibility of Fourier resonance and to avoid solutions that are characteristic only of the hydrodynamic domain, i.e., of low-temperature or nanoscale problems. However, the continuum background always offers the possibility to inherit the coefficients from phonon hydrodynamics, and therefore our numerical approach and findings can be useful even in these situations.
§ ACKNOWLEDGEMENT
The authors express their gratitude to Péter Ván for his useful suggestions. The research of C.F.M. has been carried out under the auspices
of GNFM (National Group of Mathematical-Physics) of INdAM (National Institute of
Advanced Mathematics), through the grant ‘Progetto Giovani’ CUPE53C22001930001 for
financial support. Project no. TKP-6-6/PALY-2021 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme. The research was funded by the Sustainable Development and Technologies National Programme of the Hungarian Academy of Sciences (FFT NP FTA). This work was partially supported in part by the Hungarian Scientific Research Fund under Grant agreement FK 134277.
10
Cattaneo58
C. Cattaneo.
Sur une forme de l'équation de la chaleur éliminant le paradoxe d'une
propagation instantanée.
Comptes Rendus Hebdomadaires Des Seances De L'Academie Des
Sciences, 247(4):431–433, 1958.
GK66
R. A. Guyer and J. A. Krumhansl.
Thermal Conductivity, Second Sound, and Phonon Hydrodynamic
Phenomena in Nonmetallic Crystals.
Physical Review, 148:778–788, 1966.
Sobolev94
S. L. Sobolev.
Heat conduction equation for systems with an inhomogeneous internal
structure.
Journal of Engineering Physics and Thermophysics,
66(4):436–440, 1994.
Sobolev97
S. L. Sobolev.
Local non-equilibrium transport models.
Physics-Uspekhi, 40(10):1043–1053, 1997.
Sobolev16
S. L. Sobolev.
Nonlocal two-temperature model: Application to heat transport in
metals irradiated by ultrashort laser pulses.
International Journal of Heat and Mass Transfer, 94:138–144,
2016.
JosPre89
D. D. Joseph and L. Preziosi.
Heat waves.
Reviews of Modern Physics, 61(1):41–86, 1989.
FehKov24
A. Fehér and R. Kovács.
On the dynamic thermal conductivity and diffusivity observed in heat
pulse experiments.
Journal of Non-Equilibrium Thermodynamics, 49:161–170, 2024.
KovFehSob22
R. Kovács, A. Fehér, and S. Sobolev.
On the two-temperature description of heterogeneous materials.
International Journal of Heat and Mass Transfer, 194:123021,
2022.
ChenEtal21
X. Chen, X. Li, X. Xia, C. Sun, and R. Liu.
Thermal storage analysis of a foam-filled PCM heat exchanger
subjected to fluctuating flow conditions.
Energy, 216:119259, 2021.
NematEtal22
A. NematpourKeshteli, M. Iasiello, G. Langella, and N. Bianco.
Enhancing PCMs thermal conductivity: A comparison among porous
metal foams, nanoparticles and finned surfaces in triplex tube heat
exchangers.
Applied Thermal Engineering, 212:118623, 2022.
ZhangEtal22
S. Zhang, L. Pu, S. Mancin, Z. Ma, and L. Xu.
Experimental study on heat transfer characteristics of metal
foam/paraffin composite PCMs in large cavities: Effects of material types
and heating configurations.
Applied Energy, 325:119790, 2022.
MulRug98
I. Müller and T. Ruggeri.
Rational Extended Thermodynamics.
Springer, 1998.
DreStr93a
W. Dreyer and H. Struchtrup.
Heat pulse experiments revisited.
Continuum Mechanics and Thermodynamics, 5:3–50, 1993.
MongEtal18
M. S. Mongiovi, D. Jou, and M. Sciacca.
Non-equilibrium thermodynamics, heat transport and thermal waves in
laminar and turbulent superfluid helium.
Physics Reports, 2018.
SykEtal21
M. Sýkora, M. Pavelka, M. La Mantia, D. Jou, and M. Grmela.
On the relations between large-scale models of superfluid helium-4.
Physics of Fluids, 33(12), 2021.
JouEtal11
D. Jou, M. S. Mongiovì, and M. Sciacca.
Hydrodynamic equations of anisotropic, polarized and inhomogeneous
superfluid vortex tangles.
Physica D: Nonlinear Phenomena, 240(3):249–258, 2011.
SykEtal23
M. Sýkora, M. Pavelka, L. Restuccia, and D. Jou.
Multiscale heat transport with inertia and thermal vortices.
Physica Scripta, 98(10):105234, 2023.
JordiEtal24
J. Tur-Prats, M. Gutiérrez-Pérez, J. Bafaluy, J. Camacho, F. X. Alvarez, and A. Beardo.
Microscopic origin of heat vorticity in quasi-ballistic phonon transport.
International Journal of Heat and Mass Transfer, 226:125464, 2024.
VanFul12
P. Ván and T. Fülöp.
Universality in heat conduction theory – weakly nonlocal
thermodynamics.
Annalen der Physik (Berlin), 524(8):470–478, 2012.
BerVan17b
A. Berezovski and P. Ván.
Internal Variables in Thermoelasticity.
Springer, 2017.
ShangEtal20
M.-Y. Shang, C. Zhang, Z. Guo, and J.-T. Lü.
Heat vortex in hydrodynamic phonon transport of two-dimensional
materials.
Scientific Reports, 10(1):8272, 2020.
Van2001
P. Ván.
Weakly nonlocal irreversible thermodynamics – the Guyer-Krumhansl and the Cahn-Hilliard equations.
Physics Letters A, 290: 88–92, 2001.
Nyiri1991
B. Nyíri.
On the entropy current.
Journal of Non-Equilibrium Thermodynamics, 16(2): 179–186, 1991.
deGroot1963
S. R. de Groot and P. Mazur.
Non-equilibrium thermodynamics.
Dover Publications, Amsterdam, 1962.
Gyarmati1970
I. Gyarmati.
Non-Equilibrium Thermodynamics—Field Theory and Variational Principles.,
Springer-Verlag, Berlin, Heidelberg, New York, 1970.
KovVan2015
R. Kovács and P. Ván.
Generalized heat conduction in heat pulse experiments.
International Journal of Heat and Mass Transfer, 83:613–620, 2015.
SzucsEtal2022
M. Szücs, M. Pavelka, R. Kovács, T. Fülöp, P. Ván, and M. Grmela.
A case study of non-Fourier heat conduction using internal variables and GENERIC.
Journal of Non-Equilibrium Thermodynamics, 47:31–60, 2022.
Muller1968
I. Müller.
A Thermodynamic Theory of Mixtures of Fluids.
Archive for Rational Mechanics and Analysis, 28:1–39, 1968.
Muller1971
I. Müller.
Die Kältefunktion, eine universelle Funktion in der Thermodynamik viskoser wärmeleitender Flüssigkeiten.
Archive for Rational Mechanics and Analysis, 40:1–36, 1971.
Rossing2007
T. D. Rossing (ed.).
Springer Handbook of Acoustics.
Spinger New York, NY, 2007.
FulEtal20
T. Fülöp, R. Kovács, M. Szücs, and M. Fawaier.
Thermodynamical extension of a symplectic numerical scheme with half
space and time shifts demonstrated on rheological waves in solids.
Entropy, 22:155, 2020.
FehEtal21
A. Fehér, N. Lukács, L. Somlai, T. Fodor, M. Szücs, T. Fülöp, P. Ván, and
R. Kovács.
Size effects and beyond-Fourier heat conduction in room-temperature
experiments.
Journal of Non-Equilibrium Thermodynamics, 46:403–411, 2021.
Zhukov16
K. Zhukovsky.
Violation of the maximum principle and negative solutions for pulse propagation in Guyer–Krumhansl model.
International Journal of Heat and Mass Transfer, 98:523–529, 2016.
|
http://arxiv.org/abs/2405.09825v1 | 20240516055053 | A Sample of Compact Object Candidates in Single-lined Spectroscopic Binaries from LAMOST Medium Resolution Survey | [
"Hao-Bin Liu",
"Wei-Min Gu",
"Zhi-Xiang Zhang",
"Tuan Yi",
"Jin-Zhong Liu",
"Mouyuan Sun"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA",
"astro-ph.HE"
] |
Compact object candidates from LAMOST-MRS
Liu et al.
0000-0002-2912-095X]Hao-Bin Liu
Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, People's Republic of China
0000-0003-3137-1851]Wei-Min Gu
Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, People's Republic of China
0000-0002-2419-6875]Zhi-Xiang Zhang
Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, People's Republic of China
0000-0002-5839-6744]Tuan Yi
Department of Astronomy, School of Physics, Peking University, Beijing 100871, People's Republic of China
0000-0002-7420-6744]jin-zhong Liu
Xinjiang Observatory, Chinese Academy of Sciences, 150 Science 1–street Urumqi, Xinjiang 830011, People's Republic of China
0000-0002-0771-2153]Mouyuan Sun
Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, People's Republic of China
Wei-Min Gu
guwm@xmu.edu.cn
The stellar spectra from the LAMOST Medium Resolution Survey can be used to search for compact objects in binaries. The LAMOST DR10 catalog includes > 980,000 targets with multiple medium-resolution spectra. We select targets with large or rapid radial velocity variations and obtain an input-sample of 1822 sources. We use light curves and spectra to identify and exclude eclipsing binaries and double-lined spectroscopic binaries in the input-sample. We finally derive a catalog of 89 candidates with well-folded radial velocities, all of which are single-lined spectroscopic binaries, indicating an unseen companion residing in each system. The mass function of each system can be well constrained from the radial velocity curve. In our sample, 26 sources have mass functions higher than 0.1 M_⊙, among which 18 sources have ellipsoidal-type light curves. In our opinion, compact objects are likely present in all these 26 binaries, which are worth follow-up identification.
§ INTRODUCTION
Compact objects (black holes, neutron stars and white dwarfs) represent the final products of stellar evolution. The search for compact objects has significant implications for understanding stellar physics and the interstellar medium.
Notably, astrometric observations from Gaia have been used to identify compact objects. For instance, <cit.> and <cit.> reported two binaries containing stellar black holes (Gaia BH1 and Gaia BH2) from Gaia Data Release 3, whose companions are respectively a G-type star and a red giant.
Recently, <cit.> discovered a black hole of 32.70 ± 0.82 M_⊙ in the wide binary system Gaia BH3.
Doppler spectroscopy has also yielded interesting discoveries. <cit.> claimed a black hole of 3.8 to 6.9 M_⊙ in MWC 656, which was challenged by <cit.>, who derived a much lower mass of 0.94 ± 0.34 M_⊙ for the unseen companion. <cit.> reported that the binary LB-1 consists of a B-type star and a black hole of 68^+11_-13 M_⊙, albeit this system is also likely to host two luminous stars <cit.>. <cit.> reported the detection of a black hole of 11.1^+2.1_-2.4 M_⊙ in NGC 1850, which was later shown to be a stripped-star binary <cit.>. In addition, several black hole candidates have also been reported in quiescent binary systems <cit.>.
Long-term spectroscopic monitoring has also yielded valuable results in dynamically identifying compact binaries containing a neutron star or a white dwarf <cit.>. Increasing attention has been given to the search for compact objects through time-domain surveys, particularly spectroscopic surveys, which frequently generate potential candidates. With comprehensive advantages in terms of data potential, such time-domain spectroscopic surveys hold great promise for identifying compact object candidates.
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), a large spectroscopic survey covering the northern sky, offers tens of millions of spectra <cit.>. Data Release 10 (DR10) represents a significant milestone in the project, marking the accumulation of a large volume of observational data and the completion of data processing and analysis up to that release. DR10 provides multiple spectral observations of individual targets in the Low Resolution Survey (LRS) and the Medium Resolution Survey (MRS). The MRS observations are conducted in about 50% of the nights of the LAMOST second stage (since 2018) and have a resolution of R ∼ 7500 at 5163 Å (blue band) and 6593 Å (red band), which allows for accurate measurement of radial velocities and improved identification of spectral components. Based on the DR10-MRS data, single-lined spectroscopic binary stars can be easily classified, and compact objects can potentially be identified in them.
Previous research on LAMOST data has made significant progress, with several compact binary candidates discovered, whose unseen companions are neutron stars and high-mass white dwarfs <cit.>. With the accumulation of data from LAMOST and ongoing observations, there are more compact object candidates yet to be identified.
Especially in DR10-MRS, which includes a decade of LAMOST spectral data, there is an excellent opportunity to search for compact binary candidates. Moreover, the DR10-MRS data have relatively high spectral resolution and multiple single-exposure data from the same day, which not only meet the precision requirements for radial velocity measurements but also provide supplementary data for the dynamical analysis of short-period binaries. We collect the DR10-MRS data as completely as possible and try to search for compact object candidates in it. The benefit of this search strategy lies in the larger sample coverage and observational data space, which yield more compact object candidates.
The crux, however, is to overcome the abundance of false data and spurious samples. Previous work relies on specially designed selection methods or complex parameter spaces to circumvent most of the false data. In this work, we shift the focus to radial velocity curves, making everything less cumbersome. We introduce how we generate the input-sample from the raw data of the public DR10-MRS catalog and how our pipeline works in Section <ref>. We try to fit the radial velocity curve for every target in the input-sample, and then exclude double-lined spectroscopic binaries and eclipsing binaries. As a result, we obtain a sample of 89 compact object binary candidates. Our results are displayed in Section <ref>. We summarize and discuss our results in Section <ref>. Details and techniques are provided in the Appendix.
§ CANDIDATE SELECTION
We download the complete LAMOST MRS General Catalog, named dr10_v1.0_MRS_catalogue, from the LAMOST DR10 data access, and remove invalid data from it. In total, we obtain a catalog containing 1,134,865 sources, including 984,719 with repeated observations.
In this catalog, 130,332 sources have more than 3 medium-resolution observational nights, and 85,062 sources have more than 5 nights. On each observational night, LAMOST commonly conducts 3 medium-resolution exposures for each source. This provides abundant time-domain spectral observation data for a large number of targets, offering a substantial data foundation for dynamical analysis.
In this work, we are dedicated to comprehensively searching for compact object candidates in binaries among sources in DR10. We aim to select candidates from those sources with repeated observations, and the targets should satisfy the following conditions:
* Requirement: Radial velocity data should be well-folded.
* Exclusion: Systems consisting of two main-sequence stars.
We aim to filter spectroscopic binaries through the variation in radial velocity and to exclude those consisting of two main-sequence stars, including eclipsing binaries and double-lined spectroscopic binaries. As for the first condition, starting from the spectra, we tend to select targets with well-folded radial velocity data, which exhibit relatively high acceptability in terms of their orbital parameters. As for the second condition, we adopt the approach described in <cit.> for the exclusion: visually inspecting eclipsing binaries by folding the photometric data, and excluding double-lined spectroscopic binaries based on the Cross-Correlation Function (CCF) peaks and profiles.
Here, we qualitatively describe our processing workflow as follows. For each target, we collect photometric data and search for its photometric period P_ph.
We consider P_ph as a potential orbital period, no matter which specific pattern is exhibited in the light curve. Then we search for the period of the radial velocity data, based on P_ph or without a reference period, and fold the radial velocities to fit the radial velocity curve. We select possible candidates among the targets with smaller fitting residuals.
After we have selected targets whose radial velocities can be well-folded, we then rule out those targets with inconsistencies between spectroscopy and photometry.
For this investigation, we do not narrow down the input-sample size by individual inspection. Instead, we directly fold the radial velocities, fit velocity curves, and utilize the fitting residuals to select targets.
We select targets with well-folded radial velocities before verifying them as single-lined spectroscopic binaries, which alleviates much of the manual inspection of photometric types and spectroscopic data.
In this way, we reduce artificial involvement, with the imperative visual inspections positioned as the final step. This strategic approach is devised to contend with the large volume of sources within the DR10 catalog.
§.§ Input-sample Reduction
We remove targets without spectroscopic-binary characteristics and reduce the catalog size down to an input-sample. We notice that, in the DR10 catalog, there is a high proportion of targets without obvious radial velocity variation. It is hard to reliably complete a periodic analysis for such targets if the observational frequency is not sufficiently high. Thus, we prioritize the selection of targets with significant radial velocity variations to enhance the reliability of the periodic analysis, especially for targets with relatively few observations.
We obtained the input-sample by screening the DR10-MRS catalog: we calculate the maximum radial velocity variation for each target from the parameters given in DR10-MRS. In DR10-MRS, each target generally gets no fewer than three exposures on the same day.
The catalog provides eight radial velocity measurements and their errors (which are also given in the LAMOST MRS Parameter catalog), corresponding to radial velocities estimated from the B- and R-band spectra and by different methods. The catalog also provides the local modified Julian minute (lmjm), the signal-to-noise ratio (S/N), etc., for every single exposure. To mitigate the effect caused by measurement errors, we calculate the weighted average radial velocity for observation day i based on all available exposures:
V̄_r,i = ∑_j=1^n_i ( V_r,i,j · S_i,j ) / ∑_j=1^n_i S_i,j ,
where V_r,i,j and S_i,j are the radial velocity and the signal-to-noise ratio (S/N) of exposure j on
observation day i.
We exclude ineffective data and exposures of poor quality with S/N ≤ 10. After eliminating the ineffective data, we count the number of effective exposures. For each target, we denote the number of observation days as N_d, the number of exposures on observation day i as n_i (i = 1, 2, ...), and the total exposure number as N_exp (i.e., N_exp = ∑_i n_i).
Typically, a target has no fewer than three exposures on the same day (n_i ≥ 3).
However, after the data reduction by S/N, some observation days have only one exposure remaining (i.e., n_i = 1), and this remaining measurement is of little value, so we no longer count days with only a single exposure as effective observation days. Thus, we count the effective observation days N_eff as those with at least two exposures (0 ≤ N_eff ≤ N_d).
We apply two constructed statistics, N_eff and ΔV_r, as filtering criteria to select targets with enough exposures and large radial velocity variations:
* Criterion 1: N_eff ≥ 5
* Criterion 2: ΔV_r = V̄_r,max - V̄_r,min > 100 km s^-1
Furthermore, we notice that some targets still exhibit significant radial velocity variations within a short time (∼ hours), but the lack of observations results in failure to meet Criterion 2. These targets still hold the potential to include compact objects. Hence, we supplement a set of filtering criteria to encompass short-period targets. We calculate the weighted average rate of radial velocity change based on the radial velocities (V_r), observation times (t), and signal-to-noise ratios (S/N) of all available exposures on day i, and apply the following filter to select such targets:
dv/dt|_i =
Mean( ΔV_r/Δt )|_i = ∑_j=2^n_i [ ( V_r,i,j - V_r,i,j-1 )/( t_i,j - t_i,j-1 ) · S_i,j ] / ∑_j=2^n_i S_i,j ,
where V_r,i,j, t_i,j and S_i,j are the radial velocity, observation time, and signal-to-noise ratio of exposure j on observation day i. We also need to exclude false samples selected due to bad data points, specifically abrupt changes deviating significantly from the mean dv/dt. To achieve that, we utilize two statistical parameters, the mean and the variance, to construct the criterion:
Var( ΔV_r/Δt ) < 0.1 · [ Mean( ΔV_r/Δt ) ]^2 .
We only consider observation days with at least three exposures (n_i ≥ 3) to prevent accidental errors caused by an insufficient number of values. The details of the mean and variance filter are described in Appendix <ref>.
Thus, we supplement the input-sample with targets selected by the two statistical criteria:
* Criterion 1': (dv/dt)_max > 60 km· s^-1· h^-1
* Criterion 2': Var(ΔV_r/Δt) / [Mean(ΔV_r/Δt)]^2 < 0.1
Adding the targets selected by Criteria 1' and 2' into the input-sample, we obtain an input-sample with a size of 1822. Every target in the input-sample has either a large enough ΔV_r or a large enough dv/dt. A minimal sketch of these selection statistics is given below. We then search the photometric and spectroscopic data to fit radial velocities in the following sections.
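For illustration, the following Python sketch implements the per-day statistics and the criteria above; the exact reading of Criterion 1 (effective observation days), the S/N weighting inside the variance, and the array names are our assumptions rather than the pipeline's exact implementation:

import numpy as np

def daily_stats(t_hours, rv, snr):
    """Statistics of the exposures (arrays) of one observation day."""
    good = snr > 10                            # drop poor-quality exposures
    t, v, w = t_hours[good], rv[good], snr[good]
    if len(v) < 2:                             # a single exposure is not an effective day
        return None
    vbar = np.sum(v * w) / np.sum(w)           # S/N-weighted daily mean RV
    rate = None
    if len(v) >= 3:                            # dv/dt is evaluated only when n_i >= 3
        r = np.diff(v) / np.diff(t)            # consecutive-exposure rates (km/s/h)
        wr = w[1:]
        mean_r = np.sum(r * wr) / np.sum(wr)
        var_r = np.sum(wr * (r - mean_r)**2) / np.sum(wr)  # weighted variance (our reading)
        rate = (mean_r, var_r)
    return vbar, rate

def passes_selection(days):
    """days: list of (t_hours, rv, snr) array tuples, one per observation day."""
    vbars = []
    for d in days:
        out = daily_stats(*d)
        if out is None:
            continue
        vbar, rate = out
        vbars.append(vbar)
        if rate is not None:
            mean_r, var_r = rate
            if abs(mean_r) > 60.0 and var_r < 0.1 * mean_r**2:
                return True                    # Criteria 1' and 2'
    # Criteria 1 and 2: enough effective days and a large RV variation
    return len(vbars) >= 5 and (max(vbars) - min(vbars)) > 100.0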
§.§ Photometric Data
Photometric data can provide helpful information for variability estimation and period analysis of close binaries. Such systems undergoing orbital motion typically manifest periodic effects in their light curves, such as eclipses and ellipsoidal modulations <cit.>.
We make efforts to collect photometric data from several sky surveys, including TESS [<https://tess.mit.edu/science-area/getting-started-with-tess/>](Transiting Exoplanet Survey Satellite), ZTF[<https://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-scan?projshort=ZTF mission=irsa>] (Zwicky Transient Facility), and ASAS-SN[<https://asas-sn.osu.edu/variables>] (All-Sky Automated Survey for Supernovae), to facilitate accurate period analysis. For each target in our sample, we collect photometric data as follows:
* We use the cutout module <cit.> in the package <cit.> to download TESS Full-Frame Images (FFIs) with a cutout size of 10 pixels from every available sector of the TESS mission. We utilize a threshold-mask function to generate target masks and background masks in the FFIs, with thresholds set at 15 and 0.001, respectively. We extract the corresponding fluxes from each, completing the subtraction of the background noise and instrument noise.
* We use the package <cit.> to download light curves from the ASAS-SN database. We employ a cross-matching function to match targets with the ASAS-SN database, utilizing a cross-matching radius of 2". We proceed to download the data if there are 20 or more flux observations.
* We download light curves from ZTF DR19 through the ZTF Lightcurve Queries[<https://irsa.ipac.caltech.edu/docs/program_interface/ztf_lightcurve_api.html>] in IRSA (Infrared Science Archive <cit.>). We use the IRSA API to obtain the available g-band and i-band light curves within 5 arcsec of a target position.
Following flux correction for data cleaning and time calibration across different time systems, we utilize the Lomb-Scargle algorithm <cit.> to search for possible periods with which to fold the light curves.
If the folded light curve from the photometric data exhibits a clean and complete periodic profile, we accept the derived period value. Several examples of periodic light curves are displayed in Appendix <ref>.
Photometric data with a large time span provide accurate periods, such as those from ASAS-SN and/or ZTF. However, ASAS-SN and ZTF have limited sky coverage for the input-sample, and thus many targets cannot be searched for periods using these two surveys. TESS provides light curve data for nearly every target, but it has a limiting magnitude of only about 15 in the TESS band, and the time span of each sector is less than 30 days. So there is a risk associated with using the TESS period to fold the radial velocities, particularly for targets with insufficient spectroscopic observations. To obtain more accurate periods, we attempt to combine data from all available TESS sectors for each target, so that the time span is as large as possible to cover the spectroscopic observations.
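A minimal sketch of the period search with astropy's Lomb-Scargle implementation is given below; the frequency-grid limits and the function name are illustrative choices, not survey-specific settings:

import numpy as np
from astropy.timeseries import LombScargle

def photometric_period(t_days, flux, flux_err, pmin=0.1, pmax=100.0):
    """Return the best Lomb-Scargle period and the phase-folded abscissa."""
    freq = np.linspace(1.0 / pmax, 1.0 / pmin, 200_000)
    power = LombScargle(t_days, flux, flux_err).power(freq)
    p_ph = 1.0 / freq[np.argmax(power)]
    phase = (t_days / p_ph) % 1.0     # fold the light curve for visual inspection
    return p_ph, phase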
§.§ Radial Velocity Measurement
We have obtained the photometric period P_ph from the photometric analysis. We consider it as a possible orbital period and use it to fold the radial velocity data in order to obtain the radial velocity curve. DR10-MRS provides radial velocity measurements for both the red and blue arms of each single exposure. We have generated the input-sample using the measurements given in the DR10-MRS catalog.
In fact, there are still numerous imprecise (or incorrect) radial velocity measurements provided in this catalog, possibly due to poor template matching, especially for spectroscopic binaries with multiple components in the observed spectra.
Considering that we do not focus on the barycentric velocity (center-of-mass radial velocity) in this step, we can improve reliability and measurement accuracy by measuring relative radial velocities.
We pick the spectrum with the highest signal-to-noise ratio as the template and measure the radial velocity of each spectrum from the CCF between it and the template.
In this process, we use the package <cit.> to remove cosmic rays and normalize the spectra. We use the CCF to measure the relative radial velocities of all spectra for each target in the input-sample. As a theoretical template is not necessary, we obtain more accurate radial velocities for a large number of targets with lower time consumption.
Simultaneously, we plot the spectra shifted according to the radial velocity measurements for a final visual inspection. We compare the shifted spectra with the template to verify whether the profiles of the absorption lines have changed or overlap correctly after the phase calibration. Multiple components in the spectrum can be easily identified, as the absorption line profiles often become broader (or narrower) and the CCFs show double-peaked structures. These phenomena suggest that the system could be a double-lined spectroscopic binary, as additional components are present in the spectra.
A shifted spectrum that clearly coincides with the template indicates that the measurements are reliable. The radial velocity measurements are ultimately adopted if they pass the verification through the shifted spectra and CCF profiles for the final candidates. In Appendix <ref>, we provide examples illustrating the double-peaked structure of the CCF for double-lined spectroscopic binary spectra.
It is worth noting that the Radial Velocity Zero-Point (RVZP) of MRS may vary over time and with different fibers <cit.>. The RVZP correction usually introduces an average uncertainty of around 5 km s^-1 to the radial velocity measurements <cit.>. Our selected sources have a typical radial velocity semi-amplitude of ∼100 km s^-1, which is significantly larger than 5 km s^-1. Thus, we do not take the RVZP into account in the radial velocity measurements.
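A minimal sketch of the relative radial velocity measurement via the CCF against the highest-S/N template follows; the velocity grid, normalization, and function name are illustrative assumptions rather than the exact pipeline settings:

import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def relative_rv(wave, flux, wave_t, flux_t, vmax=600.0, dv=1.0):
    """Relative RV of one epoch (wave, flux) against the template
    (wave_t, flux_t); both should be normalized, cosmic-ray-cleaned spectra.
    Positive velocity means the epoch is redshifted w.r.t. the template."""
    shifts = np.arange(-vmax, vmax + dv, dv)
    loglam_t = np.log(wave_t)
    f_t = flux_t - np.mean(flux_t)
    ccf = np.empty(len(shifts))
    for k, v in enumerate(shifts):
        # de-shift the epoch by the trial velocity, resample onto the template grid
        f = np.interp(loglam_t, np.log(wave / (1.0 + v / C_KMS)), flux)
        f -= np.mean(f)
        ccf[k] = np.sum(f * f_t) / np.sqrt(np.sum(f * f) * np.sum(f_t * f_t))
    return shifts[np.argmax(ccf)], ccf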
§.§ Radial Velocity Curve Fits
We fit folded data for the radial velocity curve based on a sinusoidal function:
V_r(t) = V_0 + K sin( 2πt/P_orb + ϕ_0 ) ,
where P_orb is the orbital period, K is the semi-amplitude of the radial velocity curve, V_0 is the offset radial velocity generated by the template selection, and ϕ_0 is the initial phase of the folding starting point.
This model assumes that the orbit is circular, which is justified,
since ellipsoidal binaries undergo tidal interactions that both circularize the orbit and synchronize the rotational and orbital period <cit.>.
Ideally, the time span of photometric data acquisition should cover the spectroscopic observations. In LAMOST DR10-MRS, the time span of individual target spectroscopic observations could be relatively large (10^3 days). Therefore, the best time span for photometric data should also be greater than this length.
The data of ZTF and ASAS-SN are best suited for searching for a precise period because they have thousands of days of observational data. If the photometric period of a target can be obtained from ZTF or ASAS-SN, this period is directly regarded as the preferred period for folding the radial velocities. However, if the target is relatively faint, resulting in poor data in the ground-based photometric surveys, and the photometric period can only be obtained from TESS, then this period cannot be directly used to fold the radial velocities. In this case, we search for the orbital period using the radial velocity data within a small neighborhood of the reference period in order to obtain a reliable radial velocity curve. Several examples of radial velocity curves are displayed in Appendix <ref>. We show the radial velocity curves for targets with mass functions of at least 0.1 M_⊙ in Appendix <ref>.
Still, for some targets there is no essential relation between the photometry and the orbit, and P_ph and P_orb arise from different physics. For targets that cannot be folded with the photometric period P_ph, we directly utilize the radial velocity data for the period search. We employ the Lomb-Scargle algorithm to search for potential periodic signals within the range of 0.1 days to 100 days. We fold the radial velocities with the period corresponding to the highest Lomb-Scargle power to obtain the radial velocity curve.
In cases where no photometric period is available, or where the photometric period is obtained only from TESS, the quality of the radial velocity curve depends more on the number of radial velocity observations. If there are too few radial velocity data points, the fitting of the radial velocity curve may have multiple solutions, leading to over- or under-fitting. Hence, we give higher priority to candidates with more radial velocity observations. A minimal sketch of the circular-orbit fit is given below.
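The following sketch fits the sinusoidal model above to the folded data at a fixed period, assuming scipy; the initial guesses and the goodness-of-fit statistic are illustrative choices:

import numpy as np
from scipy.optimize import curve_fit

def fit_rv_curve(t, rv, rv_err, p_orb):
    """Fit V_r(t) = V_0 + K sin(2 pi t / P_orb + phi_0) at fixed P_orb."""
    def model(t, v0, k, phi0):
        return v0 + k * np.sin(2.0 * np.pi * t / p_orb + phi0)
    p0 = [np.mean(rv), 0.5 * (np.max(rv) - np.min(rv)), 0.0]
    popt, _ = curve_fit(model, t, rv, p0=p0, sigma=rv_err, absolute_sigma=True)
    v0, k, phi0 = popt
    resid = rv - model(t, *popt)
    red_chi2 = np.sum((resid / rv_err)**2) / (len(t) - 3)  # reduced chi^2 of the fit
    return v0, np.abs(k), phi0, red_chi2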
§ RESULTS
The above procedure produces possible radial velocity curves for every target in our input-sample. After the period search and folding, over 1500 targets did not show well-converged radial velocity curve profiles, meaning that they lack periodic variations in their radial velocity observations, possibly due to insufficient exposures or an incorrectly adopted period. We conducted a manual review of approximately 300 targets with complete radial velocity curves, excluding those exhibiting variability attributed to various types of eclipses (as shown in Figure <ref> in Appendix <ref>) and those with double-lined spectra. We perform the spectroscopic exclusion by proofreading the shifted spectra and the double-peaked structures in the CCF. We provide an explanation of the types of CCF structure in Appendix <ref>.
We select the candidates from the input-sample and, through manual examination of both the photometric and spectroscopic data, eliminate the potential double-lined spectroscopic binaries, leaving us with 89 candidates. These targets all possess a complete folded velocity curve with a sufficient number of radial velocity points. 64 targets also have light curves that are synchronized with the orbital phase, indicating orbital modulation in the photometric variation. For the other targets, either no light curve is available or the light curve does not clearly converge at any period. For these targets, the orbital period is primarily determined through the period search in the radial velocity data.
Fitting the radial velocity curves yields the orbital parameters of the targets, including the radial velocity semi-amplitude K and the orbital period P_orb. Then, the mass of the invisible object can be constrained by the mass function, expressed as <cit.>:
f(M_2) = M_2^3 sin^3 i / ( M_1 + M_2 )^2 = P_orb K_1^3 / ( 2πG ) ,
where M_1 is the mass of the optically visible star, M_2 is the mass of the unseen companion, K_1 is the semi-amplitude of the radial velocity, P_orb is the orbital period, i is the inclination, and G is the gravitational constant. We calculate the mass function for our sample using the parameters obtained from the radial velocity curve fitting. The distribution of all targets in the mass function space is shown in Figure <ref> and listed in Appendix <ref>.
In Figure <ref>, we present the distributions of the radial velocity semi-amplitudes and orbital periods of all 89 candidates in our sample. The orbital periods of our candidates range from 0.1 days to approximately 10 days, and the radial velocity semi-amplitudes span from about 30 to approximately 200 km s^-1. Nearly all of the sources fall above the green reference curve, indicating that their mass functions are greater than 0.01 M_⊙. 69 targets reside in the region above the orange reference curve, which corresponds to a mass function greater than 0.05 M_⊙. Among these targets, targets No. 1-26 in the sample have mass functions exceeding 0.1 M_⊙.
In our results, three of the included targets have published identifications (J0419, J2354 and J1729), which are marked with colored asterisks in the figures. Target No. 1 (J0419) contains an extremely low mass pre-white dwarf and a compact object, with M_1 = 0.176 ± 0.014 M_⊙ and M_2 = 1.09 ± 0.05 M_⊙ <cit.>. Target No. 2 (J2354) contains a K-type star and a neutron star candidate, with M_1 = 0.70 ± 0.05 M_⊙ and M_2 = 1.26 ± 0.03 M_⊙ <cit.>. Target No. 18 (J1729) contains a K-type star and a white dwarf, with M_1 = 0.81 ± 0.07 M_⊙ and M_2 ≥ 0.63 M_⊙ <cit.>.
The remaining targets currently await individual analysis.
In addition, compared to the three identified candidates, the other candidates in our sample can exhibit relatively longer orbital periods. Leveraging the tens of observations provided by LAMOST and employing period analysis, we also obtained targets with relatively small radial velocity variations. As shown in Table <ref>, except for Target No. 18 (J1729), which has only one day of effective observational data, the targets have multiple days of LAMOST observations. Half of the targets have more than ten days of spectroscopic observations, with dozens of radial velocities. We examined the spectra of the candidates to ensure accurate radial velocity measurements. The well-folded radial velocity data provide reliable orbital parameters. We have ruled out double-lined spectroscopic binaries among the candidates in our sample, and our candidates are worth further identification, especially those resembling the green asterisk marker (J1729) in Figure <ref>.
We collect the parallax and the stellar parameters, including effective temperature T_eff, surface gravity log g, metallicity [Fe/H], and radius, from Gaia DR3 <cit.>. We use stellar evolution models to evaluate the mass of the visible stars by utilizing the python package isochrones[<https://isochrones.readthedocs.io/en/latest/>]. We use the Gaia DR3 parameters and photometry as inputs to derive the isochrone-interpolated mass with MESA Isochrones & Stellar Tracks (MIST) isochrones <cit.>. The isochrone mass of the visible star is shown in Table <ref>. Combined with the mass function equation, the lower limit of the invisible object's mass M^min_2 is calculated under the assumption that i=90^∘. We find that the lower limit M^min_2 is above or close to half of the visible stellar mass M_1 for the first thirty or so targets in our sample. A normal main-sequence star at such a mass would contribute non-negligible optical flux and manifest double-lined spectroscopic features; thus, these single-lined spectroscopic binaries could conceal a compact object. The binary masses for targets No.1-26 are presented in Figure <ref> and listed in Table <ref>. Parameters for all candidates are presented in Appendix <ref> and Appendix <ref>.
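To make the inclination dependence of the companion mass explicit, the mass function can be inverted numerically for M_2 at a given M_1 and i. A minimal sketch follows (the bracketing interval for the root finder is our assumption; the example values are those of Target No.3):

```python
import numpy as np
from scipy.optimize import brentq

def m2_from_fm(f_m, m1, incl_deg=90.0):
    """Solve f(M_2) = (M_2 sin i)^3 / (M_1 + M_2)^2 for M_2 (masses in Msun)."""
    sini = np.sin(np.radians(incl_deg))
    residual = lambda m2: (m2 * sini)**3 / (m1 + m2)**2 - f_m
    return brentq(residual, 1e-4, 100.0)   # assumed search bracket

# Target No.3 (J1344): M_1 = 1.10 Msun, f(M_2) = 0.291 Msun
for i in (90, 80, 70, 65, 60):
    print(i, f"{m2_from_fm(0.291, 1.10, i):.2f}")   # ~1.13 ... 1.41 Msun
```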
§ CONCLUSIONS AND DISCUSSION
In this paper, we have constructed a catalog of 89 compact object candidates in single-lined spectroscopic binaries using the LAMOST DR10-MRS observations.
All candidates have well-folded radial velocity data. Overall, the results are very encouraging: a large number of candidate compact objects are discovered in a single pipeline execution, among which our existing identifications are also included (i.e., target No.1 J0419, target No.2 J2354 and target No.18 J1729). For 64 out of 89 objects, their phase folded light curves are consistent with the radial velocity variations; hence, their orbital parameters (e.g., orbital period and semi-amplitude) are more reliable. Our main results are as follows:
* Initial target selection: We utilize data from the DR10-MRS catalog to establish selection criteria to pre-select targets, particularly introducing variability rates to selectively focus on the shorter-period targets.
By imposing constraints on the variance of the radial velocity change rate, we additionally obtain a larger number of targets with reliable radial velocity measurements, expanding the sample pool for subsequent verification.
Combining the criteria on radial velocity variation, we create an initial sample of 1822 targets, all of which have the potential to possess large mass functions because of large or rapid radial velocity variations.
* Compact object candidates: We ultimately select 89 single-lined spectroscopic binaries with well-folded radial velocity data, which probably contain compact object candidates. The mass functions of 26 candidates are larger than 0.1 M_⊙. Among them, 18 targets also exhibit ellipsoidal-type light curves. These candidates fulfill the criterion wherein the lower limit of the unseen object's mass M^min_2 is above or approximately half of the visible stellar mass M_1. This estimation strongly supports the presence of compact objects within these candidates.
* Optimization of selection procedures: We directly fit radial velocity curves for the 1822 targets, selecting those with well-converged curves. We deferred manual inspection (of both photometry and spectroscopy) until after the source selection stage, which saved us from manually inspecting over a thousand targets and greatly reduced the time wasted on erroneous measurements in both the photometric and spectroscopic data. To achieve this goal, we use the workflow described in Section 2 for handling photometric and spectral data, generating reliable radial velocity curves for potential targets.
To further constrain the mass determination of the unseen object, precise inclination measurement is imperative. Ellipsoidal modulation has been detected in part of our candidates, allowing for inclination analysis by fitting the light curves. However, a majority of the reported candidates lack observable ellipsoidal light curves because the binary components are not close enough for tidal deformation to be evident. Besides, there are challenges in accurately constraining the rotational velocity of the visible stars, limiting our ability to determine the inclination by measuring v sin i. While this paper focused on compiling the compact object binary catalog, it is important to highlight these potential research directions for the sampled candidates, which will be pursued in future work.
§ ACKNOWLEDGMENTS
We thank the anonymous referee for their constructive suggestions to improve the paper.
This work was supported by the National Key R&D Program of China under grants 2023YFA1607901 and 2021YFA1600401, the National Natural Science Foundation of China under grants 11925301, 12033006, 12221003, 12263003 and 12322303, the Natural Science Foundation of Fujian Province of China under grants 2022J06002, and the fellowship of China National Postdoctoral Program for Innovation Talents under grant BX20230020. J.Z.L was supported by the Tianshan Talent Training Program through the grant 2023TSYCCX0101.
This paper uses the data from the LAMOST survey.
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences.
Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
This paper includes data collected by the TESS mission obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations can be accessed via <cit.>.
ZTF is a public-private partnership, with equal support from the ZTF Partnership and from the U.S. National Science Foundation through the Mid-Scale Innovations Program (MSIP).
Astropy <cit.>,
Lightkurve <cit.>,
Matplotlib <cit.>,
NumPy <cit.>,
Pandas <cit.>,
ASAS-SN sky patrol <cit.>,
Laspec <cit.>,
Isochrones <cit.>
§ OBSERVATIONAL RESULTS FOR TARGETS 1 - 26
Mass functions for targets 1 - 26

Number  Designation  Source ID  T_obs  ΔRV_max  P_orb  LC  K  f(M_2)
                                (1)    (2)      (3)    (4) (5) (6)
                                (days) (km s^-1) (days)     (km s^-1) (M_⊙)
1 J041920.07+072545.4 410015 31 418.7 0.61 ELL 208.6 0.560
2 J235456.76+335625.7 1129096 5 426.5 0.48 ELL 210.5 0.455
3 J134417.15+520307.2 818528 5 130.55 3.21 96.3 0.291
4 J102740.93+384018.9 697652 11 126.44 9.97 ELL 63.9 0.263
5 J122924.56+473128.5 771613 7 185.76 2.23 ELL 94.5 0.190
6 J060826.18+211334.1 400747 6 141.74 3.87 78.5 0.189
7 J122302.07+482910.8 767594 10 189.98 1.96 ELL 94.2 0.166
8 J060030.98+290855.0 387387 11 447.04 0.15 ELL 219.2 0.162
9 J085304.59+132032.3 616838 17 249.3 0.79 ELL 126.2 0.161
10 J060649.24+213233.3 397914 12 157.38 2.77 ELL 79.3 0.140
11 J060907.13+222550.6 401948 11 128.91 2.52 81.5 0.138
12 J092323.52+421819.5 651359 15 134.79 3.64 ELL 70.5 0.129
13 J230854.08+355132.4 1095399 9 207.93 0.95 ELL 109.9 0.128
14 J090332.95+425639.8 630705 35 210.12 8.41 ELL 53.2 0.128
15 J044826.19+223906.2 296802 7 154.56 2.47 ELL 79.2 0.124
16 J063350.35+220056.0 447835 7 235.13 0.69 120.5 0.123
17 J155751.89+435021.4 884025 6 175.58 1.87 86.7 0.123
18 J172900.17+652952.8 905386 1 94.8 0.6 ELL 93.9 0.123
19 J075855.51+384540.7 550320 15 177.7 1.35 ELL 93.8 0.113
20 J103613.90+071115.6 703271 10 113.53 5.68 57.3 0.108
21 J061921.71+224432.3 422288 10 163.02 2.56 ELL 74.6 0.107
22 J061125.78+115043.5 406053 6 114.71 1.93 81.3 0.105
23 J011726.82+452013.2 64557 7 163.61 2.08 79.2 0.104
24 J060537.69+222546.9 395839 13 136.43 3.25 ELL 67.5 0.101
25 J202309.48+401916.5 1010372 7 134.48 1.18 ELL 94.6 0.101
26 J022924.20+581721.3 135083 6 109.53 5.07 ELL 58.2 0.101
Column(1) The effective observation days in LAMOST DR10-MRS. Column(2) The maximum difference of the effective mean radial velocity in LAMOST DR10-MRS. Column(3) The orbital period obtained from the radial velocity curve fitting. Column(4) The type of light curve; "ELL" stands for ellipsoidal modulation. Column(5) The semi-amplitude of the radial velocity obtained from the radial velocity curve fitting. Column(6) The mass function of the unseen companion calculated from Column(3) and Column(5).
Mass measurements for targets 1 - 26

Number  Designation  Source ID  M_1   M_2^90  M_2^80  M_2^70  M_2^65  M_2^60
                                (1)   (2)     (3)     (4)     (5)     (6)
                                (M_⊙) (M_⊙)   (M_⊙)   (M_⊙)   (M_⊙)   (M_⊙)
1 J041920.07+072545.4 410015
2 J235456.76+335625.7 1129096
3 J134417.15+520307.2 818528 1.10 1.13 1.16 1.24 1.31 1.41
4 J102740.93+384018.9 697652 0.88 0.96 0.99 1.06 1.12 1.21
5 J122924.56+473128.5 771613 1.18 0.95 0.97 1.04 1.10 1.17
6 J060826.18+211334.1 400747 1.01 0.87 0.89 0.96 1.01 1.08
7 J122302.07+482910.8 767594 0.79 0.72 0.74 0.79 0.84 0.90
8 J060030.98+290855.0 387387
9 J085304.59+132032.3 616838 0.85 0.74 0.76 0.81 0.86 0.92
10 J060649.24+213233.3 397914
11 J060907.13+222550.6 401948 1.18 0.82 0.84 0.90 0.94 1.01
12 J092323.52+421819.5 651359 0.94 0.70 0.72 0.77 0.81 0.86
13 J090332.95+425639.8 630705 1.16 0.79 0.80 0.86 0.90 0.96
14 J230854.08+355132.4 1095399 0.71 0.61 0.62 0.66 0.70 0.75
15 J044826.19+223906.2 296802
16 J063350.35+220056.0 447835 1.83 0.99 1.01 1.08 1.13 1.20
17 J155751.89+435021.4 884025 0.93 0.68 0.70 0.74 0.78 0.84
18 J172900.17+652952.8 905386
19 J075855.51+384540.7 550320 1.33 0.80 0.82 0.87 0.92 0.97
20 J103613.90+071115.6 703271 1.09 0.70 0.72 0.76 0.80 0.86
21 J061921.71+224432.3 422288 0.91 0.63 0.65 0.69 0.73 0.77
22 J061125.78+115043.5 406053 2.57 1.13 1.15 1.22 1.28 1.36
23 J011726.82+452013.2 64557 1.46 0.81 0.83 0.88 0.93 0.99
24 J022924.20+581721.3 135083 1.89 0.93 0.95 1.01 1.05 1.12
25 J202309.48+401916.5 1010372 1.84 0.92 0.93 0.99 1.04 1.10
26 J060537.69+222546.9 395839 1.07 0.68 0.69 0.74 0.77 0.82
Column(1) The mass of the visible star from Gaia. Column(2-6) The mass of the unseen companion estimated from Column(1) and inclinations of 90°, 80°, 70°, 65° and 60°, respectively.
§ CRITERION VARIANCE
Observations conducted within the same day, with intervals much shorter than the orbital period, should showcase uniform variations in radial velocity. However, during our screening process, we noticed abrupt changes in the radial velocity data for most targets across multiple observations on the same day. To address this issue, we devised a statistical metric based on the rate of change in radial velocity to filter out potentially erroneous measurements. For an ideal target, the variance in the rate of change of radial velocity should be sufficiently low. Hence, we employed the variance in the radial velocity change rate as a filtering criterion. We present a case with ample observation epochs to validate the effectiveness of this approach.
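A minimal sketch of this statistic (grouping exposures by the integer part of the observation time is our simplification) could look as follows:

```python
import numpy as np
import pandas as pd

def rv_rate_variance(times_days, rvs_kms):
    """Variance of the radial velocity change rate within each observing day.

    A clean target should show smooth intra-day RV drifts, hence a low
    variance of dRV/dt; spurious measurements inflate this statistic.
    """
    df = pd.DataFrame({"t": times_days, "rv": rvs_kms}).sort_values("t")
    df["day"] = df["t"].astype(int)        # simplistic same-day grouping
    rates = []
    for _, grp in df.groupby("day"):
        if len(grp) > 1:
            rates.extend(np.diff(grp["rv"]) / np.diff(grp["t"]))
    return np.var(rates) if rates else np.nan
```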
§ RADIAL VELOCITY CURVES
Here we show the radial velocity curves and light curves of two example targets from our sample, J060649.24+213233.3 (left sub-figure) and J063552.55+184233.7 (right sub-figure).
§ LIGHT CURVES
Here we show some folded TESS light curves. We collect photometric data from all available sectors of TESS, shown in different colors and with flux offsets in each figure.
§ LAMOST SPECTROSCOPY
We use the cross-correlation function (CCF) to measure the relative radial velocities of the LAMOST spectra. The template is the spectrum with the highest signal-to-noise ratio for each target. To verify that the systems are single-lined spectroscopic binaries, we checked whether there are additional peaks in the CCF profile that would indicate the presence of extra components in the spectrum. We measure all available spectra for each target to compute the CCF. The spectra we check are chosen to be mostly near the quadrature phases, where, if there are two components, they would be easily detected in velocity space. As shown in the following figure, it is evident from the CCF that J061400 is likely to be a double-lined spectroscopic binary, while J075855 is a single-lined spectroscopic binary.
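For illustration, a heavily simplified CCF on a common log-wavelength grid (ignoring continuum normalization, masking, and the wrap-around of the circular shift, which a production pipeline must handle) can be sketched as:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def ccf_rv(wave, flux, template_flux, max_shift_kms=500.0, step_kms=2.0):
    """Cross-correlate a spectrum against a template; the CCF peak location
    gives the relative radial velocity in km/s."""
    loggrid = np.linspace(np.log(wave[0]), np.log(wave[-1]), wave.size)
    f = np.interp(loggrid, np.log(wave), flux)
    t = np.interp(loggrid, np.log(wave), template_flux)
    f = (f - f.mean()) / f.std()
    t = (t - t.mean()) / t.std()
    dln = loggrid[1] - loggrid[0]          # one pixel corresponds to c*dln km/s
    shifts = np.arange(-max_shift_kms, max_shift_kms + step_kms, step_kms)
    ccf = [np.mean(f * np.roll(t, int(round(v / (C_KMS * dln))))) for v in shifts]
    return shifts, np.asarray(ccf)
```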
§ THE MASS DISTRIBUTIONS
§ FULL SELECTED SAMPLES
Characteristics of 89 selected candidates

Number  Designation  Source ID  T_obs  ΔRV_max  P_orb  LC  K_1  f(M_2)  M_1  M_2^min
                                (1)    (2)      (3)    (4) (5)  (6)     (7)  (8)
                                (days) (km s^-1) (days)    (km s^-1) (M_⊙) (M_⊙) (M_⊙)
1 J041920.07+072545.4 410015 31 418.7 0.61 ELL 208.6 0.56
2 J235456.76+335625.7 1129096 5 426.5 0.48 ELL 210.5 0.455
3 J134417.15+520307.2 818528 5 130.55 3.21 95.04 0.291 1.10 1.13
4 J102740.93+384018.9 697652 11 126.44 2.66 ELL 67.87 0.263 0.88 0.96
5 J122924.56+473128.5 771613 7 185.76 2.23 ELL 93.9 0.19 1.18 0.95
6 J060826.18+211334.1 400747 6 141.74 3.87 78.42 0.189 1.01 0.87
7 J122302.07+482910.8 767594 10 189.98 1.57 ELL 102.96 0.166 0.79 0.72
8 J060030.98+290855.0 387387 11 447.04 0.15 ELL 210.61 0.162
9 J085304.59+132032.3 616838 17 249.3 0.79 ELL 128.69 0.161 0.85 0.74
10 J060649.24+213233.3 397914 12 157.38 2.77 ELL 78.87 0.14
11 J060907.13+222550.6 401948 11 128.91 2.52 82.49 0.138 1.18 0.82
12 J092323.52+421819.5 651359 15 134.79 3.64 ELL 68.54 0.129 0.94 0.70
13 J090332.95+425639.8 630705 35 210.12 0.89 ELL 59.07 0.128 1.16 0.79
14 J230854.08+355132.4 1095399 9 207.93 0.95 ELL 110.92 0.128 0.71 0.61
15 J044826.19+223906.2 296802 7 154.56 2.47 ELL 78.59 0.124
16 J063350.35+220056.0 447835 7 235.13 0.69 123.88 0.123 1.83 0.99
17 J155751.89+435021.4 884025 6 175.58 1.87 86.54 0.123 0.93 0.68
18 J172900.17+652952.8 905386 1 94.8 0.6 ELL 93.87 0.123
19 J075855.51+384540.7 550320 15 177.7 1.35 ELL 92.97 0.113 1.33 0.80
20 J103613.90+071115.6 703271 10 113.53 1.5 61.41 0.108 1.09 0.70
21 J061921.71+224432.3 422288 10 163.02 2.56 ELL 67.76 0.107 0.91 0.63
22 J061125.78+115043.5 406053 6 114.71 1.93 82.93 0.105 2.57 1.13
23 J011726.82+452013.2 64557 7 163.61 2.08 78.89 0.104 1.46 0.81
24 J022924.20+581721.3 135083 6 109.53 5.07 ELL 53.93 0.101 1.89 0.93
25 J202309.48+401916.5 1010372 7 134.48 1.18 ELL 94.04 0.101 1.84 0.92
26 J060537.69+222546.9 395839 13 136.43 3.25 ELL 67.55 0.101 1.07 0.68
27 J085553.20+181128.2 621071 15 133.92 3.26 ELL 66.68 0.097 0.81 0.57
28 J033013.01+591627.5 193088 7 172.03 0.77 ELL 120.59 0.094 1.04 0.64
29 J035557.04+235457.0 223483 13 135.03 2.72 ELL 69.98 0.094 0.89 0.59
30 J155244.61+461437.5 881769 13 117.01 4.11 ELL 60.9 0.093 0.95 0.61
31 J154452.52+505346.9 878260 10 236.82 0.46 ELL 120.03 0.092 0.60 0.48
32 J064126.68+234312.8 463302 18 182.78 1.24 87.94 0.091
33 J050047.77+493317.9 314676 9 120.79 3.93 ELL 60.4 0.091 1.02 0.63
34 J084919.55+181050.1 611176 6 126.28 3.34 ELL 67.4 0.09 1.18 0.68
35 J104756.16+402735.3 712440 15 153.03 2.08 ELL 76.23 0.09 0.89 0.58
36 J091434.79+425744.2 642865 20 117.69 4.28 61.71 0.09 0.99 0.62
37 J034813.39+250555.6 214231 17 130 3.93 ELL 61.74 0.089 1.48 0.77
38 J064717.33+243633.7 473755 16 128.46 2.91 ELL 68.91 0.088 1.82 0.86
39 J112713.81+003407.1 737276 7 109.38 4.44 ELL 56.75 0.083 1.00 0.60
40 J044422.50+483630.6 291282 17 123.93 3.22 ELL 61.74 0.082
41 J063552.55+184233.7 451888 6 149.44 1.96 ELL 74.73 0.082 2.53 1.01
42 J032320.58+521217.3 185497 14 133.14 2.74 ELL 69.05 0.078 4.27 1.35
43 J104734.61+075525.0 712171 16 116.39 4.1 56.02 0.075 0.85 0.52
44 J145742.06+385246.4 852175 16 110.86 4.22 ELL 54.2 0.074 1.22 0.63
45 J112022.64+030629.2 732821 6 117.04 3.05 ELL 65.16 0.074 1.39 0.68
46 J105637.88+104446.8 718646 9 162.64 1.37 ELL 80.28 0.074 0.91 0.54
47 J050627.51+443221.7 320573 6 124.89 2.91 ELL 62.93 0.072 1.64 0.74
48 J230328.86+365245.5 1092454 7 108.04 3.96 ELL 56.62 0.07
49 J061213.85+213955.7 407534 16 114.8 4.59 52.38 0.068 2.11 0.84
50 J001323.45+574834.3 15183 9 118.05 2.82 60.96 0.068
51 J011110.49+092102.5 57176 13 107.71 4.26 ELL 54.44 0.067 0.95 0.53
52 J035912.12+570228.8 227541 9 137.63 1.38 ELL 77.48 0.066 1.96 0.79
53 J010421.68+041336.5 48956 8 196.25 0.57 104.86 0.065
54 J064044.85+235615.5 461798 24 117.5 3.15 ELL 58.94 0.065 1.62 0.71
55 J142953.75+460353.8 841276 9 232.55 0.59 95.67 0.065 0.78 0.47
56 J080602.82+410506.1 558589 10 132.27 1.72 60.96 0.065 1.05 0.55
57 J023639.27+553554.8 143047 6 138.94 0.89 ELL 86.7 0.062
58 J030357.53+545157.6 167876 17 118.98 2.64 63.53 0.062 1.32 0.61
59 J011256.97+451943.3 59447 7 115.02 2.41 ELL 63.14 0.062
60 J034121.29+573315.0 205866 6 125.71 1.1 ELL 65.06 0.06 1.04 0.53
61 J010641.31+025621.7 51463 9 177.95 0.87 ELL 84.44 0.059 1.07 0.53
62 J193028.19+440007.2 1004002 15 118.77 2.57 ELL 60.96 0.057
63 J225115.59+345157.4 1085828 9 138.71 1.74 ELL 67.72 0.054 0.96 0.48
64 J231017.65+334202.6 1096283 15 102.14 3.61 63.64 0.053 1.14 0.53
65 J125038.89+535228.9 783510 7 100.42 3.94 ELL 51.24 0.052 0.95 0.47
66 J064742.91+244600.4 474426 11 137.94 1.54 ELL 69.31 0.052 1.26 0.56
67 J060429.52+344052.7 393673 7 101.57 2.22 64.99 0.051 1.61 0.64
68 J092306.86+431939.7 651103 34 165.83 1.31 ELL 72.04 0.051
69 J085125.29+120256.4 614307 16 123.86 2.82 ELL 61.4 0.051 1.14 0.52
70 J225434.80+344144.2 1087587 12 156.5 1.13 ELL 75.89 0.049 1.61 0.63
71 J064109.34+233700.5 462696 20 144.73 1.48 ELL 69.19 0.047 1.61 0.61
72 J203510.62+422022.8 1013286 7 113.57 2.45 57.17 0.046
73 J034544.41+241313.1 211191 14 101.42 3.31 ELL 51.63 0.046 0.93 0.44
74 J043432.74+272850.3 280833 8 113.24 2.31 ELL 61.27 0.045 1.08 0.48
75 J074928.34+392312.9 539298 20 101.39 3.04 ELL 51.53 0.042 0.88 0.41
76 J085056.28+131737.6 613574 5 115.44 0.68 ELL 81.8 0.039 1.29 0.50
77 J064601.31+235035.7 471648 19 127.44 1.25 64.52 0.035 0.84 0.37
78 J203932.62+405357.6 1014996 7 180.12 2.26 ELL 52.36 0.033
79 J041640.72+485628.8 252081 11 106.75 1.97 ELL 53.33 0.03
80 J084000.87+183844.4 599891 9 105.69 1.29 ELL 59.68 0.027 1.09 0.39
81 J080832.08+423555.1 561935 19 121.27 1.17 ELL 60.88 0.025 1.07 0.37
82 J222623.64+302804.0 1074150 13 120.21 2.1 ELL 47.67 0.024 1.27 0.41
83 J031618.78+324339.4 178887 8 107.94 1.54 52.91 0.023 1.17 0.38
84 J083756.41+123453.1 597848 20 116.57 1.44 ELL 52.9 0.02 1.10 0.35
85 J222256.75+305648.8 1071843 13 107.11 2.37 ELL 39.97 0.015 1.36 0.35
86 J064220.56+240502.9 465138 19 106.66 2 ELL 41.57 0.014 1.37 0.35
87 J090436.48+434449.0 631750 16 96.78 1.22 46.16 0.012 0.89 0.25
88 J103218.28+052318.7 700588 7 127.07 2.98 34.37 0.01 0.92 0.24
89 J055058.33+285141.1 376806 7 106.92 2.22 49.79 0.005
Column(1) The effective observation days in LAMOST DR10-MRS. Column(2) The maximum difference of the effective mean radial velocity in LAMOST DR10-MRS. Column(3) The orbital period obtained from the radial velocity curve fitting. Column(4) The type of light curve; ELL stands for ellipsoidal modulation and a blank denotes no observable periodic change. Column(5) The semi-amplitude of the radial velocity obtained from the radial velocity curve fitting. Column(6) The mass function of the unseen companion calculated from Column(3) and Column(5). Column(7) The mass of the visible star measured with isochrones. Column(8) The lower mass limit of the unseen companion estimated with an inclination of i = 90^∘.
§ MASSIVE CANDIDATES
To assess the reliability of the orbital fits and their uncertainties, here we show the radial velocity curves of candidates with mass functions larger than 0.1 M_⊙ in Figure <ref> and Figure <ref>.
|
http://arxiv.org/abs/2405.09017v1 | 20240515005440 | A Japanese-Chinese Parallel Corpus Using Crowdsourcing for Web Mining | [
"Masaaki Nagata",
"Makoto Morishita",
"Katsuki Chousa",
"Norihito Yasuda"
] | cs.CL | [
"cs.CL"
] |
A Japanese-Chinese Parallel Corpus Using Crowdsourcing for Web Mining
Masaaki Nagata, Makoto Morishita, Katsuki Chousa, Norihito Yasuda
May 20, 2024
======================================================================
Using crowdsourcing, we collected more than 10,000 URL pairs
(parallel top page pairs) of bilingual websites that contain
parallel documents and created a Japanese-Chinese parallel corpus of
4.6M sentence pairs from these websites. We used a Japanese-Chinese
bilingual dictionary of 160K word pairs for document and sentence
alignment. We then used high-quality 1.2M Japanese-Chinese sentence
pairs to train a parallel corpus filter based on statistical
language models and word translation probabilities. We compared the
translation accuracy of the model trained on these 4.6M sentence
pairs with that of the model trained on Japanese-Chinese sentence
pairs from CCMatrix (12.4M) <cit.>, a
parallel corpus from global web mining. Although our corpus is only
one-third the size of CCMatrix, we found that the accuracy of the
two models was comparable and confirmed that it is feasible to use
crowdsourcing for web mining of parallel data.[Work in
progress]
§ INTRODUCTION
Parallel data is vital in machine translation for traditional
encoder-decoders and recent large language models. From an analysis of
Palm's training data, <cit.> showed that
large language models could translate because their training data
contain parallel data incidentally.
Relatively small large language models of around 10B parameters have
poor translation accuracy. However, <cit.> proposed
ALMA, a method of fine-tuning an LLM on a large amount of monolingual
data, followed by fine-tuning on a small amount of high-quality
bilingual data. They achieved translation accuracy comparable to GPT-3
using relatively small LLMs. <cit.> showed that
continuous pre-training of large language models on large amounts of
parallel data before fine-tuning on high-quality bilingual data
improves translation accuracy over ALMA.
This paper discusses a method for collecting Japanese and Chinese
parallel sentence pairs from the web. Translation between Japanese and
Chinese is considered one of the most important non-English language
pairs in terms of the number of speakers and the scale of the economy.
We specifically report on the effectiveness of crowdsourcing in
collecting URLs of websites containing parallel data.
<cit.> proposed a method of collecting
parallel URLs (parallel page pairs) using crowd workers for domain
adaptation of machine translation. We collected parallel top page URL
pairs of bilingual websites by specifying only language pairs to the
crowd workers, with no particular restriction on the target domain.
The experiment results show crowdsourced websites can collect more
parallel sentence pairs with less crawling than automatically
collected bilingual websites using Common Crawl. We also show that
the translation accuracy achieved using the Japanese-Chinese parallel
corpus created using crowdsourcing is comparable to that achieved
using Japanese-Chinese pairs of CCMatrix
<cit.>, a parallel corpus created by global
web mining. It is worth noting that our corpus only contains
one-third of the data present in CCMatrix.
We release a 4.6M Japanese-Chinese parallel corpus created using
crowdsourcing as JParaCrawl Chinese v2.0 for research purposes only.
[<https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/>]
§ RELATED WORK
§.§ Web mining for parallel data
Research on collecting bilingual data by mining the web began around
2000 <cit.>. We can
broadly divide current research into hierarchical mining (local
mining) and global mining.
In hierarchical mining (or local mining), based on the web's
hierarchical structure, we first search for websites that include
parallel documents, then search for parallel document pairs within a
website, and then search for parallel sentence pairs within parallel
document pairs. In global mining, we consider the web a flat, massive
set of sentences. We use similarity based on a multilingual sentence
embedding model to find a sentence's translation among all sentences
in the web in different languages. ParaCrawl
<cit.> is a prime example of the former, and
CCMatrix <cit.> is a prime example of the
latter.
Most previous parallel corpora are created by local mining. Like
EuroParl <cit.> and OpenSubtitles
<cit.>, we first identify a
website that contains parallel documents, extract bilingual document
pairs based primarily on metadata, and then perform sentence
alignment.
The first successful example of global mining is WikiMatrix
<cit.>, which collected bilingual text
pairs from Wikipedia in various languages using
LASER[<https://github.com/facebookresearch/LASER>], a
multilingual sentence embedding model. CCMatrix
<cit.> applies global mining to CCNet
<cit.>, a monolingual corpus of various
languages extracted from Common Crawl. In the No Language Left Behind
(NLLB) project <cit.>, they extended CCMatrix to over 200
languages.
Performing global mining for the entire web requires enormous
computational resources. We consider local mining a more realistic
approach to collecting bilingual data for specific language pairs, as
in the case of JParaCrawl
<cit.>,
which collected parallel data between Japanese and English.
In local mining, many previous works on document and sentence
alignment exist, but little research has been done on how to find
websites that contain parallel data. ParaCrawl
<cit.> used the Common Crawl archives to
collect websites that contain bilingual text. They applied a language
detector on each site's web pages and looked for sites that contained
approximately the same amount of text in the language pair to be
collected. CCAligned <cit.> analyzed
the Common Crawl archives using language-identifiable strings in URLs
as clues and collected parallel URL pairs.
<cit.> used crowd workers to collect
parallel URLs for domain adaptation of machine translation.
§.§ Japanese-Chinese parallel corpora
The most widely used Japanese-Chinese parallel corpus for research is
Japanese-Chinese ASPEC (Asian Scientific Paper Excerpt Corpus), which
has 0.68 million sentence pairs consisting of abstracts of Japanese
scientific papers and their manual translation into Chinese
<cit.>. The JPO-NICT Chinese-Japanese
parallel
corpus[<https://alaginrc.nict.go.jp/jpo-outline.html>],
which has about 130 million Japanese-Chinese patent sentence pairs,
extracted from patent applications in Japan and China based on patent
families. ASPEC and JPO-NICT are parallel corpora for specific fields
and unsuitable for general translation between Japanese and Chinese.
WCC-JC 3.0 <cit.> is
a Japanese-Chinese parallel corpus of approximately 3M sentence pairs
collected from the web, including movie and TV subtitles, lyrics, and
news articles.
It is available for research purposes by sending an email request to
the authors.
Japanese-Chinese bilingual data collected from Wikipedia include
LinguaTools-WikiTitles v2014 (1.7M sentence pairs), which is contained
in OPUS <cit.>, a collection of open parallel
corpora, WikiMatrix (1.3M sentence pairs)
<cit.>, and Wikipedia Chinese-Japanese
Parallel Corpus (0.13M sentence pairs)
<cit.>. OpenSubtitles
v2018
<cit.>,
collected from movie subtitles, contains 1.1M Japanese-Chinese
sentence pairs.
The largest publicly available Japanese-Chinese parallel data
collected from the web is CCMatrix <cit.>,
with approximately 12M sentence pairs. JParaCrawl Chinese v1.0
<cit.> contains 83K Japanese-Chinese
sentence pairs. The Asian Language Treebank
<cit.> translates English Wikinews into
Japanese, Chinese, and other Asian languages and contains
approximately 20,000 sentences divided into train, dev, and test sets.
§ METHODOLOGY
§.§ Parallel website mining
Our procedure for collecting parallel data is the same as ParaCrawl
<cit.> and follows the pipeline of
Bitextor[<https://github.com/bitextor/bitextor>], which
consists of web crawling, document alignment, sentence alignment, and
parallel corpus filtering.
In ParaCrawl, they determine which websites to crawl by analyzing the
Common Crawl archives. They first apply CLD2
[<https://github.com/CLD2Owners/cld2>] to each web page
to identify its language and extract websites containing approximately
the same target language pair texts. They then crawl the extracted
websites with Heritrix
[<https://github.com/internetarchive/heritrix3>].
In this study, we analyzed 12 sets of Common Crawl archives (104TB in
total) published from September 2021 to June 2023 using the language
detector CLD2 and enumerated about 40,000 websites that contain
roughly equal amounts of Japanese and Chinese text in order of total
text volume in a website.
We used Extractor[<https://github.com/paracrawl/extractor>] from the ParaCrawl project for this procedure.
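As an illustration of this selection criterion, a per-site language-balance check with the CLD2 Python bindings might look as follows (the balance ratio and its use as a selection score are our simplifications of what the Extractor tool does):

```python
import pycld2 as cld2

def ja_zh_balance(pages):
    """Rough ja/zh byte balance over the extracted plain text of one site;
    values near 1.0 indicate roughly equal Japanese and Chinese volumes."""
    counts = {"ja": 0, "zh": 0}
    for text in pages:
        is_reliable, text_bytes, details = cld2.detect(text)
        if not is_reliable:
            continue
        for _name, code, percent, _score in details:
            if code in counts:
                counts[code] += text_bytes * percent // 100
    ja, zh = counts["ja"], counts["zh"]
    return min(ja, zh) / max(ja, zh) if max(ja, zh) else 0.0
```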
<cit.> proposed a method of collecting
parallel URLs (parallel web pages) using crowd workers for domain
adaptation of machine translation. We asked crowd workers to collect
websites containing parallel pages, specifying only the language
pairs, and report the pair of URLs of the Japanese and Chinese top
pages for each website.
For both parallel top page URL pairs collected using crowdsourcing and
bilingual website URLs obtained from Common Crawl, we used Heritrix to
crawl each website for up to 48 hours, crawling Word and PDF files as
well as HTML.
§.§ Document and sentence alignment
For document and sentence alignment in Bitextor, we can use either
machine translation or a bilingual dictionary to determine semantic
equivalence. This study used bilingual dictionary-based document and
sentence alignment to create a parallel corpus with minimum external
language resources required.
The bilingual dictionary-based document alignment in Bitextor
calculates the similarity of documents from features obtained from the
bilingual dictionary and the structure of HTML. The bilingual
dictionary-based sentence alignment uses Hunalign
<cit.>[<http://mokk.bme.hu/resources/hunalign/>].
We used mecab[<https://taku910.github.io/mecab/>] for
Japanese word segmentation and
jieba[<https://github.com/fxsjy/jieba>] for Chinese word
segmentation. We used the EDR Japanese-Chinese bilingual Dictionary
(533,957 entries)
<cit.>[<https://www2.nict.go.jp/ipp/EDR/JPN/J_HotNews.html>]
as our bilingual dictionary.
To reduce the computation of document and sentence alignment, we
applied word segmenters to Japanese and Chinese headwords in the EDR
Japanese-Chinese Bilingual Dictionary to obtain 157,900 one-to-one
alignment word pairs. We added correspondences between Japanese Kanji
and simplified Chinese characters to the bilingual dictionary,
resulting in approximately 160,000 entries.
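For reference, the word segmentation that feeds the dictionary matching and Hunalign can be reproduced with the two segmenters named above; a minimal sketch:

```python
import MeCab   # Japanese morphological analyzer
import jieba   # Chinese word segmenter

tagger = MeCab.Tagger("-Owakati")   # wakati-gaki: space-separated output

def segment_ja(sentence):
    return tagger.parse(sentence).split()

def segment_zh(sentence):
    return jieba.lcut(sentence)

print(segment_ja("音声認識の研究を行う。"))   # illustrative input
print(segment_zh("我们进行语音识别研究。"))   # illustrative input
```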
§.§ Parallel corpus filtering
The ParaCrawl project has two parallel corpus filters, Bicleaner and
Bicleaner AI. Bicleaner
<cit.>
extracts features using word translation probabilities and statistical
language models and trains a random-forest classifier to classify
whether a sentence pair is parallel. Bicleaner AI
<cit.> uses a pre-trained
multilingual language model to train a binary classifier. Both
methods require high-quality parallel data to train the classifier.
In this study, we used
Bicleaner[<https://github.com/bitextor/bicleaner>] to
minimize the use of external resources and improve computational
efficiency. We used mecab and pkuseg
<cit.>[<https://github.com/lancopku/pkuseg-python>]
for Japanese and Chinese word segmentation to compute word frequency
and obtain word alignment from high-quality parallel sentence pairs.
We used AWESOME-align <cit.> for word alignment
to compute word translation probabilities.
To train the parallel corpus filter, we used an in-house
Japanese-Chinese parallel corpus (1.2M sentence pairs) consisting of
travel conversations, dictionary examples, literary works, and
newspaper articles. Among the in-house Japanese-Chinese parallel
corpus, the Basic Travel Expression Corpus (BTEC, about 0.5M sentence
pairs) <cit.> is the largest, followed by
the dictionary example sentences (about 260,000 sentence pairs).
We used the Japanese-Chinese Bicleaner model to compute scores for
bilingual sentence pairs and extract sentence pairs with a threshold
value of 0.5 or higher. We further calculated each sentence's vector
using the multilingual sentence embedding model LaBSE
<cit.> and filtered out sentence pairs with a
cosine similarity below a threshold of 0.7.
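The LaBSE-based second filtering stage can be sketched as follows (we load LaBSE through the sentence-transformers package, which is one common way to obtain the embeddings; batching and GPU placement are omitted):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def labse_filter(ja_sents, zh_sents, threshold=0.7):
    """Keep sentence pairs whose LaBSE cosine similarity is >= threshold."""
    ja_emb = model.encode(ja_sents, normalize_embeddings=True)
    zh_emb = model.encode(zh_sents, normalize_embeddings=True)
    sims = np.sum(ja_emb * zh_emb, axis=1)   # cosine, since vectors are unit-norm
    return [(j, z, s) for j, z, s in zip(ja_sents, zh_sents, sims)
            if s >= threshold]
```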
For URLs obtained from Common Crawl and URL pairs obtained from
crowdsourcing, Table <ref> shows the number of
sites we successfully crawled, the number of sites that yielded any
parallel sentence pairs, and the total number of parallel sentence
pairs obtained. The percentage of successful parallel sentence pair
extraction was considerably higher for websites obtained from
crowdsourcing, at 74.5 percent, compared to 27.2 percent for those
obtained from Common Crawl. The total number of parallel sentence
pairs obtained for websites from Common Crawl and those from
crowdsourcing is 2.8M and 4.6M sentence pairs, respectively,
indicating that crowdsourcing can collect more sentence pairs with
less crawling than analyzing Common Crawl.
§ TRANSLATION EXPERIMENTS
§.§ Datasets
We examined the accuracy of Japanese-to-Chinese and
Chinese-to-Japanese translations to assess the quality of parallel
sentence pairs collected using crowdsourcing.
Table <ref> shows the Japanese-Chinese parallel
datasets used in the translation experiments.
We used CCMatrix <cit.>,
WikiTitles <cit.>,
WikiMatrix <cit.>, and
OpenSubtitles2018 <cit.> for
comparison because they have more than one million sentence pairs and
are readily available.
We combined WikiTitles, WikiMatrix, and OpenSubtitles2018 into one
(wt-wm-os), and trained three models from ccmatrix, wt-wm-os, and
crowdsourcing.
We further trained one model from all five corpora.
For the development set, we used
news-commentary-v18 (1,677 sentence
pairs)[<https://data.statmt.org/news-commentary/v18.1/>],
the dev set of Asian Language Treebank Parallel Corpus (1,000 sentence
pairs)[<https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/index.html>],
and the dev set of FLORES-200 (997 sentence
pairs)[<https://github.com/facebookresearch/flores>].
For the test sets, we used the test set of Asian Language Treebank
(1,000 sentence pairs), the test set of ASPEC-JC (2107 sentence pairs),
the devtest of FLORES-200 (1012 sentence pairs), and NTREX-128 (1997
sentence pairs) <cit.>. We also used as our
test set 1,000 sentences randomly sampled from our in-house
Japanese-Chinese parallel corpus (bitext_cj) and our in-house Chinese
translations of news (495 sentences) and question answering (497
sentences) from the WMT2023 Japanese-English test set (wmt2023j). The
source language of these test sets is Japanese for aspec-jc and
wmt2023j, a mixture of Japanese and Chinese for bitext_cj, and
English for the others.
§.§ Experiment condition
We used fairseq <cit.> as the translation
software and transformer big <cit.> as the
translation model.
Table <ref> shows the hyper parameters of Transformer.
We used sentencepiece <cit.> to
tokenize training, development, and test data.
The vocabulary size is 32K for both Japanese and Chinese.
We evaluated translation accuracy using sacreBLEU
<cit.> and COMET
(wmt22-comet-da) <cit.>.
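For BLEU, language-appropriate tokenization matters for Chinese and Japanese outputs; a minimal sacreBLEU sketch (the tokenizer choices "zh" and "ja-mecab" are our assumption, as the paper does not state the exact signatures used) is:

```python
import sacrebleu

hyps = ["这是一个测试。"]   # illustrative system outputs
refs = ["这是一个测试。"]   # illustrative reference translations

bleu_zh = sacrebleu.corpus_bleu(hyps, [refs], tokenize="zh")  # for Ja -> Zh outputs
print(bleu_zh.score)
# For Zh -> Ja outputs, one would pass tokenize="ja-mecab" instead.
```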
§.§ Translation accuracy
Table <ref> shows the translation accuracy from Japanese to
Chinese, and Table <ref> shows that from Chinese to Japanese.
Among the three translation models, ccmatrix, wt-wm-os, and
crowdsourcing, ccmatrix and crowdsourcing have about the same translation
accuracy, while wt-wm-os is less accurate.
Between ccmatrix and crowdsourcing, crowdsourcing has higher
translation accuracy from Japanese to Chinese, and ccmatrix has higher
accuracy from Chinese to Japanese.
Creating a single translation model from all bilingual data yields a
higher translation accuracy than these three models.
§ DISCUSSION
Crowdsourcing (4.6M) has only one-third of the sentence pairs of CCMatrix
(12.4M), but the translation accuracy is about the same. This
indicates that the parallel sentence pairs obtained using crowdsourcing
have higher quality than those of CCMatrix.
For Japanese-to-Chinese translation, the higher accuracy of our
parallel corpus collected using crowdsourcing compared to CCMatrix is
probably due to the fact that the crowdsourcing was done in Japan by
Japanese crowd workers. Many of the websites collected by Japanese
crowd workers are Japanese websites that include pages translated into
Chinese. More diversity may be needed when translating Chinese into
Japanese.
This study evaluated the translation accuracy of parallel sentence
pairs collected using crowdsourcing (4.6M). However, we expect that
adding parallel sentence pairs collected using Common Crawl (2.8M)
will increase diversity and improve translation accuracy from Chinese
to Japanese. Another issue is that evaluating Chinese-to-Japanese
translation accuracy could be more reliable if we had a test set whose
source sentences originated from Chinese and whose reference sentences
are direct manual translations from Chinese to Japanese.
§ CONCLUSION
This paper describes an attempt to create a Japanese-Chinese parallel
corpus from the web by collecting URL pairs of parallel websites
through crowdsourcing. We collected 4.6M sentence pairs and showed
that we could achieve the same level of translation accuracy as the
CCMatrix (12.4M) with one-third of the data.
In the future, we will create an adult filter to filter the parallel
sentence pairs (2.8M) collected using Common Crawl and add these to
make a Japanese-Chinese parallel corpus with more diverse content. We
will also train a machine translation model using the sentence pairs
created with bilingual dictionary-based document and sentence
alignment to perform machine translation-based document and sentence
alignment, which could improve the quality of parallel sentence pairs.
|
http://arxiv.org/abs/2405.09053v1 | 20240515030118 | Deep Learning-Based CSI Feedback for XL-MIMO Systems in the Near-Field Domain | [
"Zhangjie Peng",
"Ruijing Liu",
"Zhaotian Li",
"Cunhua Pan",
"Jiangzhou Wang"
] | eess.SP | [
"eess.SP"
] |
Deep Learning-Based CSI Feedback for XL-MIMO Systems in the Near-Field Domain
Zhangjie Peng, Ruijing Liu, Zhaotian Li,
Cunhua Pan, Senior Member, IEEE,
and Jiangzhou Wang, Fellow, IEEE
Z. Peng, R. Liu and Z. Li are with the College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 200234, China. (e-mail: pengzhangjie@shnu.edu.cn; 1000511821@smail.shnu.edu.cn; 1000511820@smail.shnu.edu.cn;).
C. Pan is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China. (e-mail: cpan@seu.edu.cn).
J. Wang is with the School of Engineering, University of Kent, CT2 7NT Canterbury, U.K. (e-mail: j.z.wang@kent.ac.uk).
May 20, 2024
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper, we consider an extremely large-scale massive multiple-input-multiple-output (XL-MIMO) system. As the scale of antenna arrays increases, the range of near-field communications also expands. In this case, the signals no longer exhibit planar wave characteristics but spherical wave characteristics in the near-field channel, which makes the channel state information (CSI) highly complex. Additionally, the increase in the antenna array scale also significantly increases the size of the CSI matrix. Therefore, CSI feedback in the near-field channel becomes highly challenging. To solve this issue, we propose a deep-learning (DL)-based ExtendNLNet that can compress the CSI and further reduce the overhead of CSI feedback. In addition, we introduce the Non-Local block to obtain a larger area of CSI features. Simulation results show that the proposed ExtendNLNet can significantly improve the CSI recovery quality compared to other DL-based methods.
Deep Learning, XL-MIMO, near-field domain, CSI feedback.
§ INTRODUCTION
As one of the key technologies for 6G, extremely large-scale massive multiple-input-multiple-output (XL-MIMO) systems have a larger number of antennas than massive MIMO systems and can achieve high spectral efficiency and high energy efficiency <cit.>. However, the Rayleigh distance increases as the number of antennas increases, leading to an expansion of the range of near-field communications <cit.>. Therefore, the research on near-field communications will become essential in studying XL-MIMO systems.
For near-field communications, the authors of <cit.> considered the performance analysis of XL-MIMO systems in the near-field domain, including the signal-to-noise ratio (SNR) scaling laws and achievable data rate. In <cit.>, the authors proposed an algorithm to achieve low-complexity near-field channel estimation for XL-MIMO systems in the near-field domain.
In <cit.>, the authors used beamspace modulation to improve the capacity by exploiting the increased degrees of freedom in the near-field domain of XL-MIMO systems.
In <cit.>, the authors proposed an efficient beam alignment (BA) algorithm for XL-MIMO systems in the near-field domain, which can achieve low BA error rate and overhead.
As a new direction of machine learning, deep learning (DL) has been widely applied in the communication field to improve system performance <cit.>.
The authors of <cit.> proposed a DL-based network to estimate the multi-user high-dimensional channel in XL-MIMO systems for both near-field and far-field users. The authors of <cit.> proposed a DL-based beamforming training scheme for XL-MIMO systems, and used supplementary codewords to enhance the beamforming training. Similarly, the authors of <cit.> also proposed a DL based beamforming training scheme for XL-MIMO systems in the near-field domain, considered the near-field codebook, and improved the performance of beam training.
Different from time division duplex (TDD) systems, the uplink and downlink channels in frequency division duplexing (FDD) systems lack channel reciprocity. Therefore, channel estimation must be completed at the user, and the user needs to feed back the channel state information (CSI) to the base station (BS) in FDD systems <cit.>. Since the increase in the antenna array scale also significantly increases the size of the CSI matrix, feeding back the CSI matrix directly would occupy a significant portion of channel bandwidth resources. Therefore, the CSI needs to be compressed and restored during the feedback process. The authors of <cit.> proposed a codebook-based method to compress CSI in massive MIMO systems. In <cit.>, a method based on compressive sensing was proposed to compress CSI in massive MIMO systems.
The authors of <cit.> proposed a multiple-rate compressive sensing neural network framework to compress and quantize the CSI.
However, these methods are not satisfactory in terms of the accuracy of the CSI decompression, and CSI feedback for XL-MIMO systems in the near-field domain has not yet been studied.
In this paper, we consider the CSI feedback for XL-MIMO systems in the near-field domain, and the contributions are summarized as follows: 1) A network called ExtendNLNet is proposed for XL-MIMO systems to compress and decompress the CSI of the near-field channel; 2) We introduce the Non-Local block to obtain a larger scale of CSI features; 3) Simulation results demonstrate that the performance advantage of the proposed ExtendNLNet grows significantly as the compression ratio increases.
§ SYSTEM MODEL
§.§ Signal Model
As shown in Fig. <ref>, we consider an XL-MIMO system in the near-field domain. The BS is equipped with N_1 antennas and the user in the near-field domain is equipped with N_2 antennas. The numbers of radio frequency (RF) chains of the BS and the user are denoted by N_ b and N_ u, respectively. The channel from the BS to the user is denoted by H∈ℂ^N_2× N_1. Therefore, the signal received by the user can be expressed as
y=WHQs+n,
where y∈ℂ^N_ u× 1 denotes the received pilot signal, W∈ℂ^N_ u× N_2 denotes the combining matrix, Q∈ℂ^N_1× N_ b denotes the hybrid precoding matrix, s∈ℂ^N_ b× 1 denotes the transmitted signal, and n∼𝒞𝒩 (0,δ^2𝐈_N_ u) denotes the noise received by the user.
§.§ Channel Model
Considering that each BS-user antenna pair experiences a different transmission path in the near-field domain of XL-MIMO systems, we model the path component for each BS-user antenna pair separately under the assumption of geometric free-space propagation <cit.>. Defining 𝐇(n_2,n_1) as the channel between the n_1-th antenna at the BS and the n_2-th antenna at the user, it can be expressed as
𝐇(n_2,n_1)=1/r_n_2,n_1 e^-j2π/λ r_n_2,n_1,
where n_1∈{1,2,...,N_1} and n_2∈{1,2,...,N_2}. r_n_2,n_1 represents the transmission distance from the n_1-th antenna at the BS to the n_2-th antenna at the user. Additionally, 1/r_n_2,n_1 represents the normalized free-space path loss of each BS-user antenna pair, and r_n_2,n_1 can be expressed as
r_n_2,n_1=√((rcosθ -d_2sinϕ )^2+(rsinθ+d_2cosϕ-d_1 )^2)
=√( r^2+ d_1^2+ d_2^2+ 2(rd_2sin(θ+ϕ )- rd_1sinθ- d_1d_2cosϕ )),
where r is the distance between the first antenna of the user and the first antenna of the BS, ϕ represents the relative angle between the BS and the user, and θ represents the transmission angle of the signal. Additionally, d_1 and d_2 are respectively expressed as
d_1=n_1d,
d_2=n_2d,
where d is the antenna spacing. Therefore, we can obtain the channel model as
𝐇(r,θ, ϕ)=[ 1/r_n_2,n_1 e^-j2πr_n_2,n_1/λ]_N_2× N_1.
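For reference, the channel model above can be generated numerically as follows (the geometry values in the example call are purely illustrative; we assume half-wavelength antenna spacing):

```python
import numpy as np

def near_field_channel(N1, N2, r, theta, phi, d, lam):
    """Near-field channel H (N2 x N1): spherical-wave phase and 1/r path
    loss per BS-user antenna pair, following the equations above."""
    d1 = np.arange(1, N1 + 1) * d            # d_1 = n_1 * d
    d2 = np.arange(1, N2 + 1) * d            # d_2 = n_2 * d
    D1, D2 = np.meshgrid(d1, d2)             # shapes (N2, N1)
    r_pair = np.sqrt(r**2 + D1**2 + D2**2
                     + 2 * (r * D2 * np.sin(theta + phi)
                            - r * D1 * np.sin(theta)
                            - D1 * D2 * np.cos(phi)))
    return np.exp(-2j * np.pi * r_pair / lam) / r_pair

# Illustrative values only: 10 m link, half-wavelength spacing at lam = 0.125 m
H = near_field_channel(N1=1024, N2=1, r=10.0, theta=0.3, phi=0.1,
                       d=0.0625, lam=0.125)
```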
§ CSI FEEDBACK PROCESS AND EXTENDNLNET
§.§ CSI feedback process
After the user estimates the complete CSI matrix 𝐇, it needs to be split into real and imaginary parts because neural networks are not suitable for processing complex numbers. This split operation is performed before feeding the matrix into the neural network and the resulting split matrix is denoted as 𝐇_in. As the original CSI matrix 𝐇 is of size N_2 × N_1, the split matrix 𝐇_in will then have a size of L = 2N_2 N_1. After splitting, each element in the matrix 𝐇_in is normalized within [0,1]. Then, the encoder part at the user will compress the L-sized matrix 𝐇_in into a K-dimensional feature vector s̃ based on the given compression ratio (C R). The C R can be expressed as
CR=L/K.
The compression process can be expressed as
s̃=f_en ( H_in,θ _en ),
where f_en(·) is the function used for the compression process, and θ_en represents the parameters of the encoder.
Once the BS receives the feature vector s̃, the decoder at the BS decompresses the K-dimensional feature vector into an L-sized CSI matrix H_out, and the decompression process can be expressed as
H_out=f_de ( s̃,θ _de ),
where f_de ( · ) is the decompression function, and θ _de represents the parameters of the decoder.
In addition, we use the Mean Square Error (MSE) as the loss function. By substituting (<ref>) and (<ref>) into the loss function, the optimized expression of the neural network can be obtained as
( θ̂_en,θ̂_de )=min_θ _en,θ _deH_out-H_in_2^2,
where θ̂_en and θ̂_de represent the optimal parameters of the f_en(·) and f_de(·), respectively.
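To make the feedback pipeline concrete, a heavily simplified PyTorch sketch of the encoder-decoder pair (omitting the Non-Local and Refine-net blocks described in the next subsection, with illustrative sizes L = 2048 and K = 128, i.e., CR = 16) might look as follows:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """User-side encoder: conv feature extraction + dense compression."""
    def __init__(self, L=2048, K=128):           # CR = L / K = 16
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 2, 3, padding=1), nn.BatchNorm2d(2), nn.LeakyReLU(0.3))
        self.fc = nn.Linear(L, K)

    def forward(self, h):                        # h: (B, 2, 32, 32)
        return self.fc(self.conv(h).flatten(1))  # codeword: (B, K)

class Decoder(nn.Module):
    """BS-side decoder: dense expansion + conv reconstruction."""
    def __init__(self, L=2048, K=128):
        super().__init__()
        self.fc = nn.Linear(K, L)
        self.out = nn.Sequential(nn.Conv2d(2, 2, 3, padding=1), nn.Sigmoid())

    def forward(self, s):
        return self.out(self.fc(s).view(-1, 2, 32, 32))

enc, dec = Encoder(), Decoder()
loss_fn = nn.MSELoss()   # the MSE objective described above
```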
§.§ Architecture of ExtendNLNet
As shown in Fig. <ref>, we show the architecture of our proposed ExtendNLNet. Specifically, after the CSI matrix is input into the encoder, it first undergoes convolution process with a 3× 3 convolution layer. Then, the convolution result is batch normalized, and we use a LeakyReLU <cit.> layer to perform the non-linear activation process. After the activation process, the data is input into the Non-Local block for further feature learning and information extraction. The specific structure and design principles of the Non-Local block will be discussed in details. For the output from the Non-Local block, the CSI matrix is reshaped into an L× 1 vector and then compressed.
After the codeword is received by the decoder at the BS, the decoder first executes the reverse process of the encoder. Specifically, the codeword is remapped into an L × 1 vector by a fully connected layer, and then reshaped into two 32 × 32 matrices. These two matrices serve as the initial results for the real part and imaginary part of the CSI matrix. Next, the initial results are input into the Non-Local block for feature learning, and the CSI matrix is preliminarily recovered. Then, the results from the Non-Local block are input into the Refine-net block <cit.>, which continuously refines the reconstruction to make the initial results more consistent with the input matrix H_in.
In ExtendNLNet, the three convolutional layers of the Refine-net block do not all use 3 × 3 sized convolutional layer.
A single convolutional kernel size may not adapt well to changes in the sparsity of the CSI matrix, which may degrade the performance of the neural network. Meanwhile, convolutional layers of different sizes have different receptive fields, which can adapt to different sparsity levels. Specifically, convolutional layers with smaller kernels can better extract finer features, suitable for information-dense areas, while larger kernels are better for relatively sparse parts.
Therefore, the first convolutional layer is of size 3 × 3, and the second convolutional layer is of size 5 × 5. They respectively expand the CSI matrix to 8 and 16 feature maps. The size of the third convolutional layer is 9 × 9, which restores the number of feature maps to 2.
Moreover, batch normalization is applied after each convolutional layer, followed by LeakyReLU activation. In addition, the data from the initial input of the Refine-net block is added back to avoid the problem of vanishing gradients.
After passing through two Refine-net blocks, the data undergoes another 3 × 3 sized convolutional layer, activated by a Sigmoid layer, and output in the end. During the training of the neural network, the network computes the input and output results into the loss function, aiming to iteratively update its parameters to achieve lower values of the loss function.
§.§ Non-Local Block
Existing research on DL-based CSI feedback architectures typically uses convolutional or recurrent operations to extract features, which can only handle a small area of the CSI matrix at a time. These operations need to be repeated to cover a larger area of the CSI matrix, which leads to optimization difficulties and computational inefficiency. Therefore, we use the Non-Local <cit.> block to extract CSI matrix features over a large range. The Non-Local block is designed specifically for sequential data and can directly capture the feature relationships between any two positions in the CSI image. The calculation is formulated as
u(𝐚_i)=∑_𝐛̃∈Ω w(𝐚_i,𝐛̃) v(𝐛̃),
where w(𝐚_i, 𝐛̃) is the normalized weight and i represents the index of the position on the feature map. The 𝐛̃ that is more similar to the output result u(𝐚_i) will have larger weight in the calculation. Similar to (<ref>), the general Non-Local process can be expressed as
𝐲_i=1/C(𝐱)∑_∀ j f(𝐱_i,𝐱_j) g(𝐱_j),
where C(𝐱) is the normalization factor. 𝐱 and 𝐲 represent the input and output feature maps, respectively. j represents the index of all possible positions on 𝐱. The function g is used to compute the embedded feature representation of the input feature map at position j. The function f is used to compute the correlation between indices i and j. In this paper, we use the embedded Gaussian as function f, and can be expressed as
f( 𝐱_i,𝐱_j )=e^θ̃(𝐱_i)^Tϕ̃(𝐱_j),
where θ̃ and ϕ̃ represent embedding spaces.
The architecture of the Non-Local block is illustrated in Fig. <ref>. The overall operation of the entire block can be expressed as
𝐳_i=NL(𝐲_i)+𝐱_i,
where NL(·) represents the Non-Local block operation, 𝐲_i is given in (<ref>) and + 𝐱_i represents the residual connection. To reduce the size of the feature maps in the Non-Local block, convolution operations with a stride of 2 and a kernel size of 3 × 3 are added as downsampling operations. In addition, this reduces the size of the multiplied matrices from the original 1024 × 1024 to 256 × 256. Simultaneously, the number of channels is increased to 16 to compensate for the loss in neural network performance caused by downsampling. Before adding with the residual connection, a transposed 3 × 3 sized convolution and a stride of 2 is used as an upsampling operation to restore the shape of the feature maps.
As shown in (<ref>), the Non-Local operation calculates the correlation f(𝐱_i, 𝐱_j) at all positions, enabling it to directly extract and propagate correlated features between position i and position j in one operation, which can be regarded as an operation that provide a global view for feature extraction. Specifically, the global convolution covering the entire feature map with its own self-correlation matrix as the kernel. It effectively captures the correlation between two distant positions, making the Non-Local block more suitable for holistic feature extraction. Consequently, optimal results can be achieved without introducing too many parameters.
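A minimal PyTorch sketch of this block, consistent with the description above (the 1 × 1 embedding convolutions for θ̃, φ̃ and g follow the original Non-Local design and are our assumption; the stride-2 down/up-sampling and the resulting 256 × 256 attention matrix are as stated in the text), is:

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian Non-Local block with stride-2 down/up-sampling."""
    def __init__(self, ch=2, inter=16):
        super().__init__()
        self.down = nn.Conv2d(ch, inter, 3, stride=2, padding=1)
        self.theta = nn.Conv2d(inter, inter, 1)
        self.phi = nn.Conv2d(inter, inter, 1)
        self.g = nn.Conv2d(inter, inter, 1)
        self.up = nn.ConvTranspose2d(inter, ch, 3, stride=2,
                                     padding=1, output_padding=1)

    def forward(self, x):                              # x: (B, ch, 32, 32)
        z = self.down(x)                               # (B, inter, 16, 16)
        B, C, H, W = z.shape
        q = self.theta(z).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.phi(z).flatten(2)                     # (B, C, HW)
        v = self.g(z).flatten(2).transpose(1, 2)       # (B, HW, C)
        attn = torch.softmax(q @ k, dim=-1)            # (B, 256, 256)
        y = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
        return self.up(y) + x                          # residual connection
```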
§ SIMULATION RESULTS
To provide numerical results, the network complexity analysis is shown in Table <ref>, and we use the number of parameters to express the complexity of the networks. As shown in Table <ref>, by comparing the number of parameters between CsiNet, Attention-CSiNet and ExtendNLNet under the same C R, the complexity of the ExtendNLNet has not increased much compared with the other methods.
To compare the performance of different networks, we introduce cosine similarity ρ and normalized mean square error (NMSE) as the performance indicators<cit.>. ρ can be expressed as
ρ =𝔼{1/N_c∑_n=1^N_c|h_out ( n )^H·h_in ( n )|/h_out ( n )_2h_in ( n )_2},
where N_c denotes the number of columns of the matrix, h_out ( n ) denotes the column vector of the output matrix, and h_in ( n ) denotes the column vector of the input matrix. The NMSE can be expressed as
NMSE=log (𝔼{H_in-H_out_F^2/H_in_F^2} ).
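Both metrics can be computed directly from the input and reconstructed CSI matrices; a short sketch (we report the NMSE in dB, assuming the conventional 10 log_10 scaling) is:

```python
import numpy as np

def nmse_db(H_in, H_out):
    """Normalized mean square error between input and reconstruction, in dB."""
    err = np.linalg.norm(H_in - H_out)**2 / np.linalg.norm(H_in)**2
    return 10.0 * np.log10(err)

def rho(H_in, H_out):
    """Average per-column cosine similarity between reconstruction and input."""
    num = np.abs(np.sum(np.conj(H_out) * H_in, axis=0))
    den = np.linalg.norm(H_out, axis=0) * np.linalg.norm(H_in, axis=0)
    return np.mean(num / den)
```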
The numbers of antennas at the BS and the user are set to N_1=1024 and N_2=1, respectively. In the training process of the neural networks, the number of epochs is set to 100, the batch size is set to 250, and the learning rate is set to 0.001. The numbers of samples in the training set, test set, and validation set are 25,000, 5,000, and 5,000, respectively. To facilitate comparison of network performance, we compare the performance of CsiNet, Attention-CsiNet, and ExtendNLNet. The optimal performance of each network is summarized in Table <ref>.
As shown in the Table <ref>, as the C R increases, the performance advantage of ExtendNLNet becomes more prominent. This trend is consistent with the increase of parameters shown in Table <ref>. Compared to CsiNet, the parameter increment of ExtendNLNet remains relatively consistent across different C R, while the parameters of CsiNet decrease significantly as the C R increases. Meanwhile, ExtendNLNet experiences a larger increase in parameters when the C R is high.
Specifically, when the C R is 16, the performance improvement of ExtendNLNet compared to CsiNet and Attention-CsiNet is 3.75 % and 11.09%, respectively. When the C R is 32, the performance improvement of ExtendNLNet compared to CsiNet and Attention-CsiNet is 23.57% and 22.93%, respectively. When the C R reaches 64, the performance improvement of ExtendNLNet compared to CsiNet and Attention-CsiNet is 41.88% and 37.34%, respectively. In contrast, the performance improvement of Attention-CsiNet compared to CsiNet is minimal, and even shows a slight decline when the C R is 16. This indicates that for the near-field channels of XL-MIMO communication systems, achieving good CSI matrix recovery requires paying more attention to the global structural characteristics of the CSI matrix, rather than being limited to local features in small blocks. Overemphasizing local features may lead to performance degradation because of being trapped in local optima. The introduction of larger convolutional layers in the Non-Local block and Refine-net block aims to capture large-scale features. Meanwhile, the small convolutions with kernel sizes of 5× 5 and 3× 3 in the neural network ensure that the performance of the neural network remains at a high level even at low C R, and enhancing the robustness and versatility of the neural network.
From Fig. <ref>, Fig. <ref>, Fig. <ref> and Fig. <ref>, we show how the performance of the different neural networks evolves during the training process under different C R, which shows that the performance advantage of the proposed ExtendNLNet grows significantly as the compression ratio increases.
§ CONCLUSIONS
In this paper, we proposed a DL-based ExtendNLNet for XL-MIMO systems in the near-field domain, which can compress the CSI and further reduce the overhead of CSI feedback. In addition, we introduced the Non-Local block and convolutions with larger kernels to expand the feature extraction field of the proposed ExtendNLNet, which can obtain a larger area of CSI features. Meanwhile, some convolutional layers with smaller kernels are retained to maintain the capability of the proposed ExtendNLNet for local feature extraction. Simulation results demonstrated that the performance advantage of the proposed ExtendNLNet grows significantly as the compression ratio increases.
IEEEtran
|
http://arxiv.org/abs/2405.09895v1 | 20240516083144 | Measuring the Fitness-for-Purpose of Requirements: An initial Model of Activities and Attributes | [
"Julian Frattini",
"Jannik Fischbach",
"Davide Fucci",
"Michael Unterkalmsteiner",
"Daniel Mendez"
] | cs.SE | [
"cs.SE"
] |
Measuring the Fitness-for-Purpose of Requirements: An initial Model of Activities and Attributes
1st Julian Frattini,
3rd Davide Fucci,
4th Michael Unterkalmsteiner,
5th Daniel Mendez*
Blekinge Institute of Technology
Karlskrona, Sweden
{firstname}.{lastname}@bth.se
2nd Jannik Fischbach
Netlight Consulting GmbH and *fortiss GmbH
Munich, Germany
jannik.fischbach@netlight.com
May 20, 2024
=================================================================================================================================================================================================================================================================================================================================================================================================
Requirements engineering aims to fulfill a purpose, i.e., inform subsequent software development activities about stakeholders' needs and constraints that must be met by the system under development.
The quality of requirements artifacts and processes is determined by how fit for this purpose they are, i.e., how they impact activities affected by them.
However, research on requirements quality lacks a comprehensive overview of these activities and how to measure them.
In this paper, we specify the research endeavor addressing this gap and propose an initial model of requirements-affected activities and their attributes.
We construct a model from three distinct data sources, including both literature and empirical data.
The results yield an initial model containing 24 activities and 16 attributes quantifying these activities.
Our long-term goal is to develop evidence-based decision support on how to optimize the fitness for purpose of the RE phase to best support the subsequent, affected software development process.
We do so by measuring the effect that requirements artifacts and processes have on the attributes of these activities.
With the contribution at hand, we invite the research community to critically discuss our research roadmap and support the further evolution of the model.
requirements engineering, requirements quality, literature review, interview study, activity
§ INTRODUCTION
Requirements engineering (RE) is a means to an end and aims to fulfill a purpose, i.e., to inform subsequent activities of the software development life cycle about the needs and constraints of relevant stakeholders <cit.>.
Therefore, requirements artifacts and processes must be fit for purpose.
This fitness for purpose is determined by the attributes of the software development activities that are affected by requirements artifacts or processes <cit.>.
For example, a requirements specification is considered fit for purpose when implementing (activity) its implied features works correctly, completely, and quickly (attributes), among other attributes.
In that sense, we should judge the quality of requirements (and RE) based on the extent to which they are fit for purpose, i.e., how they impact the attributes of requirements-affected activities <cit.>.
Still, research on requirements quality is dominated by studies aiming to determine the quality of a requirements specification solely based on normative metrics <cit.>.
Recent endeavors to nuance requirements quality research with this activity-based perspective are promising <cit.>, but have so far not seen adoption in practice <cit.>.
One reason for this is the lack of an overview of software development activities that are affected by requirements engineering as well as their measurable attributes.
This gap was acknowledged in previous requirements quality research <cit.> and is one milestone on requirements quality research roadmaps <cit.>.
The overview of the activities that are potentially affected by RE would offer guidance on which activities determine the fitness for purpose of RE processes and artifacts.
Furthermore, an overview of the activities' attributes would offer guidance on how to measure their performance.
Consequently, we formulate the following research questions:
* RQ1: Which software development activities are affected by requirements artifacts?
* RQ2: By which attributes are requirements-affected activities evaluated?
This paper initializes the endeavor to create and maintain an overview of requirements-affected activities and attributes answering the research questions.
As the first step, we inductively construct an initial model from three distinct data sources (<Ref>).
The model contains 24 activities like implementing, testing, and estimating effort, and characterizes them with 16 attributes including duration, completeness, and correctness (<Ref>).
The paper further describes how to apply the model in research and practice and how future research will advance the endeavor (<Ref>).
We disclose all material, data, and source code[Archived at <https://zenodo.org/doi/10.5281/zenodo.10869626>] to facilitate this community endeavor.
§ BACKGROUND AND RELATED WORK
§.§ Requirements Use in SE
We consider as an activity any SE-relevant process performed by a (human or software) agent that uses one or more input artifacts and produces one or more output artifacts <cit.>.
<Ref> visualizes a simplified overview of SE activities, the artifacts they use as an input and produce as an output, and their scope.
For example, the implementing activity receives several input artifacts like a requirements specification and system architecture to produce output artifacts like source code.
We consider an activity requirements-affected if at least one of its input artifacts is a requirements artifact (yellow activities in <Ref>).
The aforementioned implementing activity is requirements-affected because it considers a requirements specification as an input.
In the simplified example in <Ref>, the requirements elicitation and the deployment activity are not requirements-affected.
It is, however, possible that the requirements elicitation activity may be affected by requirements artifacts of previous projects and sprints or that explicit deployment requirements exist.
§.§ Requirements Quality
Since requirements artifacts are used as input to requirements-affected activities, the artifacts' quality affects the quality of these activities and their output <cit.>.
For example, a vague requirements specification may lead to incorrect or missing features and reduced customer acceptance <cit.>.
These quality defects are more expensive to fix the later they are addressed <cit.>:
Revising a vague requirements specification is less expensive than redeveloping a faulty system built on it.
Therefore, organizations aim to detect and remove requirements quality defects as early as possible <cit.>.
However, requirements quality research focuses predominantly on normative quality factors <cit.> that do not consider an impact on affected activities <cit.>.
For example, the use of passive voice is often advised against in literature <cit.> despite a lack of empirical evidence for its negative consequences <cit.>.
This fosters skepticism of organizations to adopt requirements quality research <cit.>.
To address this issue, Femmer et al. proposed the perspective of activity-based requirements quality <cit.>.
This perspective entails that requirements are only as good as they support the activities in which they are used <cit.>, i.e., requirements quality depends on the performance of requirements-affected activities.
Specifying requirements quality as fitness-for-purpose to support affected activities necessitates requirements quality research to understand requirements-affected activities, i.e., it requires identifying and measuring activities affected by a requirements artifact <cit.>.
Without a systematic elicitation of requirements-affected activities prior to investigating the quality of a requirements artifact, researchers risk drawing incomplete conclusions.
For example, Ricca et al. investigate the effect of screen mock-ups on requirements comprehension <cit.> and conclude that providing screen mock-ups improves the understandability of requirements.
Femmer et al. confirm this effect but contrast that they simultaneously have a negative effect on requirements maintainability <cit.>.
Systematic studies on activity-based requirements quality agree that an overview of requirements-affected activities and their attributes is necessary to advance relevant requirements quality research <cit.>.
§.§ Related work
Requirements engineering literature contains several studies about the impact of requirements quality on subsequent software development activities.
For example, Kamata et al. <cit.> and Zowghi et al. <cit.> empirically investigated the impact of requirements quality on project success measured in time and cost overrun.
Similarly, Knauss et al. studied the impact of requirements quality on project success measured by customer satisfaction <cit.>.
These studies generalize the affected activities and summarize their effect on the overall project outcome.
Studies focusing on more specific activities include Chari et al. investigating the impact of requirements defects on injected software defects <cit.>, and Femmer et al. relating the use of passive voice to the domain modeling activity <cit.>.
On the other hand, some studies expand the scope of affected activities.
Damian et al. conducted a longitudinal case study observing a full project development lifespan and measured the tradeoffs of a revised RE process on several activities like communication, effort estimations, and implementation <cit.>.
Mendez et al. conducted a large-scale, global survey of perceived problems in RE and their effects on activities, including designing, implementing, and organizing <cit.>.
Research on traceability between software development artifacts constitutes another closely related domain.
Several secondary studies have summarized traceability research and identified artifacts that are commonly connected <cit.>.
Although requirements artifacts are prominent targets of trace links, they are typically connected to other artifact types, not the activities that produce them <cit.>.
These artifact types can be used to infer the producing activities, though the inferred activities typically remain on a very high level <cit.>.
Furthermore, this limitation excludes by design all activities that do not necessarily produce artifacts or do so only rarely, such as informal reviewing, modifying existing artifacts, assessing feasibility, or estimating effort.
In summary, none of these previously mentioned primary studies systematize the affected activities and their attributes but rather select the studied impact based on the availability of data or anecdotal hypotheses, and traceability research exhibits significant limitations regarding the identification of these activities.
Only two studies known to the authors attempt to explicate the affected activities.
Femmer et al. elicited the activities affected by specific requirements artifacts at a case company and determined the qualitative impact of requirements defects on them <cit.>.
In a similar study, Frattini investigated requirements quality factors relevant to a case company and their impact on subsequent activities <cit.>.
Both studies prototype a model of requirements-affected activities for the specific context but acknowledge the need for a more systematic and comprehensive overview.
§ GOAL AND EARLY METHOD
One goal of activity-based requirements quality research is to create and maintain a comprehensive model of requirements-affected activities and their attributes exhibiting the following properties <cit.>:
* Applicability: The model can represent all requirements-affected activities and attributes in any given SE context.
* Suitability: The model can be used to evaluate relevant activities by means of their attributes.
* Extensibility: The model can be extended with new activities or attributes.
* Usability: The model can be accessed and comprehended by software engineers.
In this study, we contribute the first version of this model.
Since we are not aware of any systematic prior work collecting requirements-affected activities and their attributes <cit.>, we surveyed different data sources for textual descriptions of SE activities that use requirements artifacts as input.
From these textual mentions, we inductively construct a model of requirements-affected activities and their attributes by employing thematic synthesis as proposed by Cruzes and Dybå <cit.>.
§.§ Data Collection
To ensure the property of applicability as mentioned above, we collected data from three distinct sources described in the following three subsections:
a systematic review of experimentation literature (<Ref>), an interview study (<Ref>), and a literature study on software process models (<Ref>).
§.§.§ Systematic Literature Review
The first source of textual descriptions of requirements-affected activities and their attributes that we considered were scientific studies reporting controlled experiments in which the experimental task involves human subjects and considers requirements as an input artifact.
These experimental tasks simulate requirements-affected SE activities performed by practitioners.
The dependent variables in these experiments are eligible attributes describing the performance of the activity.
We adopted the systematic literature survey method employed by Sjøberg et al. <cit.>.
Database selection.
To ensure that our database search for eligible primary studies targets publications relevant to SE we pre-selected eligible journals and conferences (from hereon out collectively called venues) from the CORE ranking[<https://www.core.edu.au/>] whose field of research is software engineering.
To ensure a high quality of the primary studies, we constrained the venues to those of rank A* or A.
A few select venues of lower rank that are particularly relevant to the topic constituted an exception.
These included the Requirements Engineering Journal, the Journal of Software: Evolution and Process, the International Working Conference on Requirements Engineering: Foundation for Software Quality, the International Conference on Product-Focused Software Process Improvement, and the Euromicro Conference on Software Engineering and Advanced Applications, which all have a CORE rank of B.
Additionally, we removed all venues that host computer science rather than SE topics.
This task was performed by three authors in conjunction to ensure reliability.
The final database selection contained 35 venues (10 journals and 25 conferences).
Database search.
We performed a keyword-based database search for each included venue with the keywords experiment* as well as requirement* (or the synonyms srs or specification*).
These keywords limited the retrieved primary studies to those (1) describing an experiment and (2) involving requirements at least to some degree.
We executed the database search via Scopus[<https://www.scopus.com/search/form.uri?display=advanced>] and in four cases, where Scopus did not index publications of that venue, via the ACM Digital Library.[<https://dl.acm.org/>]
The search string per venue consisted of the two sets of keywords as well as a limitation to the venue via its title.
For example, the search string for the ACM Computing Surveys journal in Scopus combined the two keyword sets with a restriction of the source title to that journal.
The search per venue returned between 1 (e.g., from the European Conference on Object-Oriented Programming) and 175 (from the Journal of Systems and Software) primary studies for a total of 1446 studies.
Inclusion.
Next, we performed an inclusion phase to ensure the following properties of primary studies expressed by the two inclusion (I1 and I2) and four exclusion criteria (E1-E4):
* I1: The primary study presents an experiment with human subjects as one of its core contributions.
* I2: The experimental task uses a requirements specification as an input.
* E1: The experimental task is a requirements review.
* E2: The study is not written in English.
* E3: The publication is not available via the university's access program.
* E4: The study is a duplicate of or extended by an already included study.
I1 ensures that eligible primary studies present a proper experiment (regardless of whether it is controlled or quasi) that involves human subjects.
Otherwise, the experimental task would not simulate an SE activity, the concept of interest.
This excludes, for example, experiments in which machine learning algorithms of different configurations are compared on a classification task.
I2 ensures that the activity is requirements-affected.
E1 explicitly excludes requirements review tasks, i.e., requirements defect detection and removal activities.
The purpose of identifying requirements-affected activities is to optimize the affecting requirements in a way that improves their impact on the activities.
This optimization process is the requirements review.
Hence, we excluded these studies to avoid a circular impact, i.e., suggesting to optimize requirements for the reviewing activity, which is exactly this optimization.
E2 and E3 exclude inaccessible studies, and E4 removes content duplicates.
Primary studies were considered for the next data analysis step when they met both inclusion criteria and none of the exclusion criteria.
The first author conducted the inclusion step based on the title, abstract, and keywords.
Out of 1446 primary studies, 145 (10.3%) were included.
To ensure the reliability of this subjective process, the second author independently performed the inclusion task on 75 (i.e., 5.2%) randomly selected studies.
We calculate the inter-rater agreement using Bennett's S-Score <cit.>, which is robust against uneven marginal distributions <cit.>.
The inter-rater agreement yields a value of 92%, which we deem sufficient to instill confidence in this subjective task.
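For transparency, the S-score for two raters and q categories is S = (q/(q-1))(p_o - 1/q), where p_o is the observed proportion of agreement. A minimal sketch (the function name and inputs are ours):

```python
def bennett_s(labels_a, labels_b, n_categories: int) -> float:
    """Bennett et al.'s S-score for two raters.

    Chance agreement is fixed at 1/q for q categories, which is what
    makes S robust against uneven marginal distributions.
    """
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    q = n_categories
    return (q / (q - 1)) * (p_o - 1 / q)

# For the binary include/exclude decision (q = 2), S = 2 * p_o - 1, so an
# S-score of 92% corresponds to agreement on 96% of the sampled studies.
```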
Data Extraction.
The first author reviewed all 145 included primary studies and extracted, for each human-subject experiment in each study, (1) the description of the experimental task and (2) all dependent variables measured to evaluate the performance of the task.
The description of the experimental task constituted the source of requirements-affected activities, and the dependent variables were the source of their attributes.
While reviewing the full text of the studies, 22 studies turned out not to meet all inclusion criteria, contrary to what their title, abstract, and keywords had suggested.
We excluded these 22 studies from further processing.
Additionally, we excluded extractions where the attribute description did not imply a valuation.
Because our goal was to identify attributes that quantify the performance of their respective activity, eligible attributes must be valuating—i.e., values of that attribute must imply a degree of performance.
While attributes do not necessarily have to be measured on an interval scale (i.e., it is not important to associate an interval of the attribute, like a certain number of minutes for the attribute duration, with a specific level of quality), they have to be measured at least on an ordinal scale, i.e., the sign of the difference matters (more minutes of duration are bad, fewer minutes are good).
For example, if the dependent variable of an experiment investigating the activity of estimating effort is the estimated amount of hours <cit.>, then this data point (i.e., the pair of activity and attribute) was excluded, as a higher or lower value of that attribute does not automatically make it good or bad due to the lack of a ground truth.
If, instead, the dependent variable was precision, i.e., how close the estimated amount of hours is to actual implementation time, then the data point would be included as a higher value of precision (i.e., an estimation that is closer to the actual time) is better.
This process eliminated 12 descriptions of non-valuating attributes.
To assess the validity of this process, the third author independently repeated the task on a sample of 12 data points, which consisted of 6 random samples from each of the two classes (valuation vs. no valuation), and we measured the inter-rater agreement using Cohen's Kappa <cit.> since the classes have an even marginal distribution <cit.>.
The first overlap achieved a Cohen's Kappa of only 33.3%, which emphasized the complexity of the task.
The two authors reconvened, discussed the differences, reformulated the exclusion criteria, and repeated the labeling.
The second overlap achieved a score of 83.3%, which represents a sufficient reliability of the step.
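For comparison with the S-score above, Cohen's kappa estimates chance agreement from the raters' marginal label distributions, which is why it suits the evenly distributed classes here. A minimal sketch (names are ours):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b) -> float:
    """Cohen's kappa for two raters labeling the same items."""
    n = len(labels_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from the product of the raters' marginals
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```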
The extraction produced 142 descriptions of experimental tasks and 355 descriptions of dependent variables.
Several experimental tasks were evaluated via multiple dependent variables, which is why the 355 resulting data points contain repeated descriptions of experimental tasks.
§.§.§ Interview Study
The second source of textual descriptions of requirements-affected activities and their attributes that we consider were reports from industry practitioners about the usage of requirements specifications in subsequent SE activities.
To this end, we evaluated the transcripts of a previously conducted interview study <cit.>.
Interview Participants.
The first author conducted the interview study in a large, globally distributed software development organization that specifies requirements using both free-form and constrained natural language (use cases) prior to each development cycle.
A contact at the organization provided a sample of eight software engineers directly responsible for processing requirements specifications and developing solution specifications based on them.
These eight engineers represent the majority of personnel in their role in the team that was involved in the study.
The interview participants had an average of 3.5 years of experience in their role, 7.5 years with the organization, and 15.3 years as software engineers.
Interview.
The original purpose of the interview was to identify which quality defects practitioners perceive in the requirements specifications that they process <cit.>.
Because the elicitation of quality defects entailed mentioning what kind of subsequent activity is affected by this defect, the generated data served to identify requirements-affected activities and their attributes.
For example, stating that vague requirements lead to a delay of the testing phase contains the requirements-affected testing activity and its attribute duration.
To guide the semi-structured interview, we developed a protocol.
The protocol contained, among demographic questions, one prompt per type of requirements quality.
The types of requirements quality were derived from Montgomery et al. <cit.> and covered, among others, ambiguity, completeness, and traceability.
Data Extraction.
All eight one-hour-long interviews were recorded, automatically transcribed using a speech-to-text conversion tool,[<https://www.descript.com/>] and verified by the first author.
Then, the first author extracted from the transcripts each mention of an activity affected by a requirements quality defect and how this effect was measured.
The extraction produced 55 descriptions of affected activities but no descriptions of how this effect was measured on them.
§.§.§ Literature Study
The third source of textual descriptions of requirements-affected activities and their attributes that we consider were descriptions of software process models.
Software process literature describes processes and products of the SE life cycle and, hence, contains information about which activities are affected by requirements.
Since software process literature is fairly mature <cit.>, we have access to reliable summaries of process models.
Literature.
We selected the book “Software Process Definition and Management” by Münch et al. <cit.> as a reliable summary of software process literature.
The first author reviewed the descriptions of all seven lifecycle models, which cover the waterfall model <cit.>, iterative enhancement <cit.>, prototyping, the spiral model <cit.>, the incremental commitment spiral model <cit.>, Unified Process <cit.>, and Cleanroom Development <cit.>.
The first author extracted all textual mentions of requirements-affected activities and their attributes as prescribed by the lifecycle model.
This extraction produced 21 textual descriptions of activities and one explicit description of an attribute.
§.§ Data Analysis
Coding.
The data collection phase over the three sources culminated in a table containing 218 textual descriptions of requirements-affected activities and 356 textual descriptions of their attributes.
In the absence of a prior theory or model of requirements-affected activities, we resorted to an inductive coding process <cit.>.
The first and third authors jointly established the level of granularity of the codes that were applied to the textual descriptions and documented this process in a guideline.
The first author then performed the coding process independently and, upon completion, verified the assigned codes with the third author.
For each pair of textual descriptions of an activity and attribute, we coded four concepts:
* Activity: the requirements-affected activity
* Activity attribute: a property evaluating an activity
* Artifact: an output artifact produced by the activity
* Artifact attribute: a property evaluating an artifact
The distinction of artifacts from activities was necessary since some activities were not evaluated directly but rather by the artifacts they produced.
For example, duration is an attribute of the implementing activity, but several studies additionally evaluate that activity by measuring the coupling (artifact attribute) of the resulting source code (artifact).
Consolidation.
The inductive coding process produced 24 unique codes for activities, 16 for activity attributes, 21 for artifacts, and 26 for artifact attributes.
The first and third authors then created an abstraction hierarchy of identified activities and artifacts based on the guide to the software engineering body of knowledge <cit.>.
For example, both the planning and the estimating effort activities are sub-types of the more abstract managing activity <cit.>.
We decided to merge the activities interpreting and understanding with comprehending as none of the data sources sufficiently distinguished between them.
Future studies differentiating them properly are necessary.
Once the hierarchy emerged, we associated each activity and artifact with the respective attributes that our data sources reported to characterize them.
Whenever all activities or artifacts of a hierarchical group shared an attribute, we moved it to the higher-level activity or artifact for conciseness.
Additionally, we made educated assumptions about the transferability of some attributes.
For example, even though our data did not contain an instance of duration being evaluated on every activity, it is safe to assume that every activity can be characterized and evaluated by its duration.
This step introduces slight subjectivity but improves the applicability of the model.
§.§ Data Availability
To achieve the goals of usability and extensibility of the resulting model, we disseminate it via GitHub.[Available at <https://github.com/JulianFrattini/gere-r3a>]
The repository contains a reference to all considered data sources, guidelines and protocols for the data extraction, and a specification of the current model of requirements-affected activities and their attributes.
More importantly, it contains guidelines on how to contribute new or revise existing activities and attributes.
Using the version control system of GitHub[<https://docs.github.com/en/get-started/using-git/about-git>] we will foster a collaborative evolution of the model.
§ RESULTS
§.§ Requirements-affected Activities and their Attributes
<Ref> visualizes the initial model of requirements-affected activities and their attributes.
The model is structured like a UML class diagram and makes use of the inheritance relationship.
An activity, represented as a UML class, that inherits from another activity also exhibits its attributes.
For brevity, artifacts are excluded from the visualization.
The replication package contains an extended model that includes the artifacts.
The root of the inheritance tree is the abstract activity processing, which represents every executable activity.
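To make the inheritance semantics concrete, the following sketch shows one hypothetical way to encode the hierarchy; it is our illustration, not the encoding used in the replication package, and the attribute placement merely follows the examples discussed in this paper (duration applies to every activity, while implementing is additionally evaluated by correctness and completeness):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Activity:
    """An activity node; sub-activities inherit all parent attributes."""
    name: str
    attributes: set = field(default_factory=set)
    parent: Optional["Activity"] = None

    def all_attributes(self) -> set:
        inherited = self.parent.all_attributes() if self.parent else set()
        return inherited | self.attributes

# The abstract root carries attributes shared by every activity.
processing = Activity("processing", {"duration"})
implementing = Activity("implementing", {"correctness", "completeness"},
                        parent=processing)

assert "duration" in implementing.all_attributes()
```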
The model contains several activities that are commonly considered in research as requirements-affected activities, like modeling, prioritizing, implementing, and testing.
Another prominent spot is taken by the merged activity comprehending, which dominates the distribution of activities among both experimental literature and interview statements.
This correlates to the prominence of ambiguity among the attributes of requirements quality in empirical research <cit.> and is supported by the fact that this activity precedes every other activity <cit.>.
The model, furthermore, contains several less commonly investigated activities.
For example, Murakami et al. investigate the activity of code review in which subjects are provided with a requirements specification <cit.>.
Consolidating larger sets of requirements to identify a semantically equivalent subset <cit.> is another rare example.
The model also contains activities that did not appear in experimental studies but were reported by interview participants or prescribed by software process literature.
The activity of prototyping is such an example that was both mentioned during the interviews and as part of lifecycle models.
Furthermore, the following activities were all named by interview participants but not considered in the experimentation literature:
coordinating internal stakeholders based on a requirement, reusing artifacts like source code based on a new requirement, and estimating feasibility of a requirement.
The attributes recorded in the model also show a varying distribution of prevalence.
The most commonly encountered attributes of an activity are duration, correctness, and completeness.
These represent both simple-to-measure and critical properties of most activities.
Additionally, we observed several attributes related to the effect that the activity has on the executing agent, for example, how certain an agent feels when executing the activity, how easy, enjoyable, motivating, and useful they perceive it to be, and how learnable the activity was.
Rarely mentioned attributes include how robust an activity is against errors and how biased an activity becomes given some controlled stimulation.
§.§ Implications
§.§.§ Implications for Research
The results contain multiple implications for requirements engineering and, specifically, requirements quality research.
Firstly, the distribution of activities and attributes among the three data sources hints at potential research gaps.
For example, the above-mentioned activities of prototyping, coordinating, reusing, and estimating have not appeared in the sample of primary studies.
Secondly, the model provides guidance for comprehensive measurements of the software development life cycle with respect to the impact of requirements artifacts and processes.
As determined by Femmer et al., only a holistic view of all requirements-affected activities will reliably determine the impact of any treatment in requirements artifacts or processes <cit.>.
This affects all comparative studies in requirements engineering, i.e., all controlled and quasi-experiments aiming to evaluate the impact of a quality defect or the benefit of a new method.
Only by measuring this impact on all requirements-affected activities in terms of their attributes and summarizing the total benefit or drawback, a holistic decision on the benefit or harm of any treatment can be made.
While we certainly do not suggest that any comparative study from here on out must necessarily consider all 24 activities simultaneously, the model of requirements-affected activities provides at least a framework that allows integrating the results of multiple studies investigating the effect of the same treatment on different activities into one overall conclusion.
§.§.§ Implications for Practice
The resulting initial model of requirements-affected activities and their attributes may serve practitioners as an overview of activities to measure when attempting to understand the fitness for purpose of their requirements.
The model emphasizes the diversity of activities that may be affected by requirements but also the diversity of metrics by which they can be evaluated.
While attributes like completeness, correctness, and duration are likely to be covered in key performance indicators of organizations, attributes like usefulness, ease of use, and learnability may often be neglected.
Further practical use of the model for quantitative comparisons requires future work and will be discussed in <Ref>.
§.§ Limitations
This study exhibits the following limitations.
Firstly, the data extraction phase was only performed by one researcher.
This introduces the possible risk that relevant information from the bodies of text was missing from the textual descriptions that were later coded.
Secondly, the interview study was not performed with the research questions stated in <Ref> in mind.
Instead, the main theme of the interview study was centered around the broader scope of requirements quality <cit.>.
However, confirming previous studies that proposed that requirements quality inevitably depends on requirements-affected activities <cit.>, the responses of interview participants naturally contained information that contributed to answering our research questions.
Hence, we deem the interview data as an eligible data source for this study.
Thirdly, every step of the study where we depart from purely summarizing and reporting data and instead interpret it introduces researchers' bias.
This is particularly evident in the conscious merging of the understanding, interpreting, and comprehending activity but also in the assumption about the transferability of several activities' attributes.
This step was necessary to elevate the model beyond a systematic summary toward an evaluation framework as demanded in previous research roadmaps <cit.>.
We documented all interpretative steps and disclosed them in our replication package to allow other researchers to scrutinize these decisions.
Finally, we address the threat to external validity.
Full generalizability was out of the scope of the goals of this study, but we, nevertheless, briefly discuss all threats to external validity in order to justify the research plan as presented in <Ref>.
One threat to the generalizability stems from the sampling of the literature survey, which only considers a specific set of SE-relevant venues and categorically excludes workshops.
Additionally, the literature review is limited to experiments and excludes other methods like case studies.
Another threat stems from the sample of interview participants, which represent only one team of only one company.
§ RESEARCH PLAN
§.§ Model Extension
The limitations mentioned in <Ref> necessitate the extension of the model to achieve goals 1 (applicability) and 2 (suitability) stated in <Ref>.
Both the applicability and the suitability are inhibited by the potential incompleteness of the model.
Hence, we plan to repeat the early method presented in <Ref>.
Two immediately planned extensions are (1) repeating the systematic literature survey on workshop papers and (2) replicating the interview study in different companies and teams.
Because of the extensive documentation of data collection methods for both empirical data (i.e., interview transcripts) and meta-research (i.e., primary studies), as well as the data analysis protocol, we anticipate that the model extension can be distributed well within our network of researchers interested in requirements-affected activities.
§.§ Model Maintenance
Goals 3 (extensibility) and 4 (usability) stated in <Ref> are fulfilled by the design of the chosen dissemination strategy.
The authors of this study will maintain the GitHub repository containing the current content and structure of the model.
§.§ Model Validation
The most significant step of future work is to validate whether the model achieves the four goals stated in <Ref>.
Validating applicability.
To test whether the model can represent all requirements-affected activities and attributes in any given SE context, we plan to conduct multiple case studies in different company contexts.
Once the model is deemed sufficiently extensive, we trace requirements artifacts in each case company to every instance of reuse.
The process of tracing requirements artifacts to activities using these artifacts as input shall happen both directly, i.e., by interviewing involved stakeholders, and indirectly, i.e., by observing which stakeholder accesses the artifact and then following up on the purpose.
The latter accounts for requirements-affected activities that stakeholders are not actively aware of, i.e., in case they unconsciously retrieve information to execute an activity without considering that this makes the activity requirements-affected.
We consider the model to achieve goal 1 if we do not encounter any requirements-affected activity that has no semantic equivalent in the model.
Validating suitability.
To test whether the model can be used to evaluate relevant activities by means of their attributes, we plan to conduct an empirical study involving all surveyed case companies.
Given the already detected requirements-affected activities, we evaluate these via the attributes associated with the activities in our model to quantify their performance.
We aim to produce two types of empirical investigations from this data.
Firstly, we aim to survey the activities and generate an overview of attribute values for all affected activities.
This overview provides an absolute comparison of the activities and answers questions like “Which activity phase takes the longest time” or “Which development activity is perceived as the least enjoyable?”.
Secondly, we aim to conduct quasi-experiments at the case companies investigating whether certain properties of requirements artifacts or processes have an impact.
For example, the subject of the experiments could be the comparison between two types of template systems for requirements specification <cit.> or the avoidance of specific linguistic structures like passive voice <cit.>.
The subject of the experiments will be aligned with current questions and endeavors of the case companies to optimize their requirements engineering artifacts or process in an evidence-based manner.
The results of the quasi-experiments will be measured in terms of differences in attribute values of all affected activities.
This overview will provide companies with a summary of the effect that the proposed change has on all affected activities.
We consider the model to achieve goal 2 if the results generated by the surveys and quasi-experiments are accepted by the respective case companies.
Validating extensibility.
To test whether the model can be extended with new activities or attributes, we aim to involve additional researchers in the model extension presented in <Ref>.
By distributing the task beyond the authors of this study, we determine how easily other researchers can extend the model.
We consider the model to achieve goal 3 if external researchers extend the model successfully.
Validating usability.
To test whether the model can be accessed and comprehended by software engineers, we plan to facilitate external replications of the validation of goals 1 and 2.
This not only validates whether the model achieves goal 4 but also extends the empirical evidence about the impact of requirements on affected activities in different company contexts.
We consider the model to achieve goal 4 if external researchers successfully replicate the empirical studies.
§ CONCLUSION
Requirements artifacts and processes fulfill a specific purpose in the software development lifecycle, that is, to inform subsequent activities about the needs and constraints imposed by stakeholders on the system under development <cit.>.
How fit requirements artifacts and processes are to fulfill their purpose, i.e., how well they benefit these requirements-affected activities, can be effectively determined when (1) all affected activities are known and (2) the performance of these activities can be evaluated.
The need for a systematic overview of (1) requirements-affected activities as well as (2) the attributes which quantify their performance has been well recognized in requirements quality literature <cit.> and evoked the call for a comprehensive model <cit.>.
We answer this call by proposing an initial model of requirements-affected activities and their attributes systematically derived from three distinct data sources.
The model aims to support both researchers by guiding empirical studies concerning the impact of requirements artifacts and processes but also practitioners by offering an overview of attributes that may serve as key performance indicators of their requirements-affected activities.
We envision that this model will be extended and evolved by the requirements engineering community to provide an applicable and suitable model for the task.
We will actively maintain the presented resources to enable and foster this community endeavor.
§ ACKNOWLEDGMENT
This work was supported by the KKS foundation through the S.E.R.T. Research Profile project at Blekinge Institute of Technology.
We further thank Parisa Yousefi and Charlotte Ljungman from Ericsson Karlskrona for facilitating the interview study.
IEEEtran
|
http://arxiv.org/abs/2405.10078v1 | 20240516132001 | Spurious reconstruction from brain activity | [
"Ken Shirakawa",
"Yoshihiro Nagano",
"Misato Tanaka",
"Shuntaro C. Aoki",
"Kei Majima",
"Yusuke Muraki",
"Yukiyasu Kamitani"
] | q-bio.NC | [
"q-bio.NC"
] |
The rapid advances in brain decoding, particularly visual image reconstruction, have sparked discussions about the potential societal implications and ethical considerations surrounding neurotechnology. As these methods aim to recover perceived images from brain activity and achieve prediction over diverse images beyond training samples (zero-shot prediction), it is crucial to critically assess their capabilities and limitations to prevent misguided public expectations and inform future regulations. Our case study of recent text-guided reconstruction methods, which leverage a large-scale dataset (the Natural Scene Dataset, NSD) and text-to-image diffusion models, reveals significant limitations in their generalizability. We found a notable decrease in performance when applying these methods to a different dataset, which was designed to prevent category overlaps between training and test sets. UMAP visualization of the text features with NSD images showed a limited diversity of distinct semantic and visual clusters, with substantial overlap between training and test sets. Formal analysis and simulations demonstrated that clustered training samples can lead to “output dimension collapse,” restricting the output feature dimensions predictable from brain activity. Diversifying the training set to ensure a broader feature distribution improved generalizability beyond the trained clusters. However, text features alone are insufficient for a complete mapping to the visual space, even if perfectly predicted from brain activity. We argue that recent photo-like reconstructions may primarily be a blend of classification into trained categories and the generation of convincing yet inauthentic images through text-to-image diffusion (hallucination). To achieve genuine zero-shot prediction, diverse datasets and compositional representations spanning the image space are essential. As neurotechnology advances, engaging in interdisciplinary discussions involving neuroscientists, ethicists, policymakers, and the public is crucial to ensure responsible development and application of these techniques. These discussions should be grounded in a clear understanding of the current capabilities and limitations of the technology, as well as a careful consideration of the potential ethical and societal impacts.
§ INTRODUCTION
Brain decoding has been widely used in the neuroscience field, revealing specific contents of the mind <cit.>. As brain decoding is sometimes referred to as “mind-reading” in popular media <cit.>, it has attracted significant attention beyond the scientific community due to its potential for real-world applications in medicine and industry. Such neurotechnology has also started to affect future ethical discussions and legal regulations <cit.>. To prevent misguiding public expectations and policies, scientists need to carefully assess the current status of brain decoding techniques and clarify the possibilities and limitations.
One of the major challenges in brain decoding is the limited amount of brain data we can collect. The current brain measurement devices are costly, yielding far less brain data than the amounts typically used in image or text processing within the field of computer science and AI <cit.>. Although we have gradually increased the amount of brain data per subject <cit.>, it remains impractical to collect brain data covering the full range of cognitive states and perceptual experiences. Consequently, classification-based decoding approaches, primarily developed in the early stages of this field, are insufficient to uncover the neural code in general or natural conditions since decodable information is confined to the same stimuli or predefined categories used in a training phase.
To overcome this limitation, several decoding methods have been proposed to enable the prediction of novel contents, not encountered during the training phase, from brain activity. <cit.> proposed a general visual decoding approach via a statistical encoding model that predicted fMRI voxel values from image features. It successfully identified novel test images from a set of 1000 candidates. <cit.> utilized co-occurrence rates of specific verb sets for nouns to predict fMRI brain activity during the perception of line drawing images, demonstrating the ability to predict unseen nouns. <cit.> constructed a color-tuning model and predicted brain activity while the subjects were presented with color stimuli. As their training stimuli cover most of the color space, their methods can successfully identify novel colors not included in the training dataset. <cit.> utilized deep neural network (DNN) features to decode brain activity measured while subjects perceived natural images. They showed successful prediction of novel object categories not encountered during the training phase.
In the field of machine learning, “zero-shot” prediction refers to the ability of a model to accurately predict or classify novel contents that were not encountered during the training phase <cit.>. This technique has found widespread application across various domains, including image classification <cit.>, image generation <cit.>, and natural language processing <cit.>. The concept of zero-shot prediction can be considered analogous to brain decoding techniques that aim to interpret brain activity patterns associated with previously unseen stimuli or experiences. Both approaches seek to generalize knowledge gained from a limited set of training data to novel situations, enabling the interpretation of new information without explicit prior exposure. To achieve effective zero-shot prediction, the model often utilizes a compositional representation of the output <cit.>. Compositional representation refers to the ability to understand and generate novel combinations of previously learned concepts or features. By learning the underlying structure and relationships between different elements, the model can generalize its knowledge to new, unseen instances.
Visual image reconstruction is another prominent example of zero-shot prediction in brain decoding. This task aims to recover perceived novel images that were not encountered during the training phase, effectively reconstructing visual experiences from brain activity patterns <cit.>. As our perceptual visual experiences cannot be fully covered by limited brain data, reconstruction methods require strong generalizability. <cit.> conducted a study that demonstrated the reconstruction of perceived arbitrary 10x10 binary-contrast images from brain activity. They built multiple modular decoders to predict the local contrasts of each location and combined their predictions. This approach leverages the compositional representation of the visual field, which is organized retinotopically in the early visual cortex. Incorporating cortical organization into the model's architecture can significantly improve its ability to perform zero-shot prediction and reconstruct novel visual experiences from brain activity. Although the training stimuli were only 400 random images, it was possible to reconstruct an arbitrary image from a set of possible 2^100 instances, including geometric shapes such as crosses and alphabets. Similarly, <cit.> replaced local decoders with DNN feature decoders. Although their training stimuli were 1200 natural images, they demonstrated reconstructing novel images, including artificial images, which were not part of the training set. These successes suggest that the proposed reconstruction models capture rich and comprehensive information about the general aspects of the neural code, beyond merely the information defined by the training data <cit.>. Developing reliable reconstruction methods also enables further analysis of subjective visual experiences, such as visual imagery <cit.>, attention <cit.>, and illusion <cit.>. Enabling the decoding of novel brain states that were never encountered during the training phase can be a promising approach to neural mind-reading <cit.>.
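The modular logic of the binary-contrast study can be caricatured in a few lines. This deliberately simplified sketch fits one plain logistic-regression decoder per pixel, whereas the original work combined local decoders at multiple scales, so all names and shapes here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_modular_decoders(voxels: np.ndarray, images: np.ndarray):
    """Fit one binary contrast decoder per pixel of a 10x10 stimulus.

    voxels: (n_trials, n_voxels) fMRI patterns
    images: (n_trials, 100) flattened binary-contrast stimuli
    """
    return [LogisticRegression(max_iter=1000).fit(voxels, images[:, i])
            for i in range(images.shape[1])]

def reconstruct(decoders, voxel_pattern: np.ndarray) -> np.ndarray:
    # Each local decoder predicts only its own patch; combining the
    # predictions compositionally covers all 2**100 possible images even
    # though training used only a few hundred random patterns.
    probs = [d.predict_proba(voxel_pattern[None, :])[0, 1] for d in decoders]
    return np.asarray(probs).reshape(10, 10)
```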
Visual image reconstruction pipelines can be divided into three main components: translator, latent features, and generator (Figure <ref>). The translator serves to convert a brain activity pattern into a latent feature space via linear regression <cit.> or nonlinear transformation <cit.>. Latent features serve as surrogate representations of perceived visual images. While the local contrasts in <cit.> can be seen as a primitive form of latent features, recent studies often use DNN features, such as the intermediate output of recognition models <cit.>. The generator visualizes translated latent features into images. Some studies have used pretrained image generative models for the generator module <cit.>. Image optimization can also be regarded as a generator <cit.>. Direct mapping from brain activity to images using DNNs can also be considered to implicitly contain these components <cit.>.
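As a concrete, minimal instance of the translator component, a ridge regression from voxel space to a latent feature space might look as follows; the data are random placeholders and the regularization strength is arbitrary:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.standard_normal((1200, 2000))  # trials x voxels (placeholder)
Y_train = rng.standard_normal((1200, 1000))  # trials x latent feature dims

# Translator: a regularized linear map from brain space to feature space,
# one of the simplest and most common choices for this component.
translator = Ridge(alpha=100.0).fit(X_train, Y_train)

# Test phase: features predicted from unseen brain activity would then be
# handed to a generator (image optimization or a pretrained generative model).
Y_pred = translator.predict(rng.standard_normal((50, 2000)))
```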
Recent advances in generative AI, particularly in text-to-image generation, have naturally given rise to expectations that these techniques could provide a shortcut for visual image reconstruction by leveraging semantic representations. In addition, there has been a growing trend towards collecting neural datasets using a wide range of diverse visual and semantic content. This shift aims to capture a more comprehensive and ecologically valid representation of the human experience <cit.>. Researchers have started to collect large-scale datasets, such as the Natural Scene Dataset (NSD) <cit.> and the THINGS-fMRI dataset <cit.>, which incorporate a broader range of fMRI data induced by diverse visual stimuli with text or category annotations. Notably, recent studies leveraging the latest generative AI techniques with semantic representation and large-scale datasets have reported photo-like reconstructions from brain activity <cit.>. They utilized contrastive language-image pretraining (CLIP) features <cit.> as latent features and text-to-image diffusion models <cit.> as generators.
While these recent approaches show promising results, it remains uncertain whether these methods truly achieve zero-shot reconstruction due to several factors. The complex model architectures employed in these studies, along with the use of a large-scale dataset, make it challenging to interpret and understand the underlying mechanisms driving the reconstruction process. To fully assess the zero-shot prediction capabilities of these approaches, it is essential to rigorously test their generalizability across different datasets and to provide detailed analyses of the individual model components. This includes evaluating the performance of the translators, latent features, and generators used in these methods. Furthermore, the characterization of the diversity of stimuli in the datasets and the latent representations has not been thoroughly explored. It is unclear whether the recently proposed datasets, such as the NSD, are optimally designed to capture the full range of human visual experiences and to support the development of truly generalizable prediction models.
In the following, we begin with a case study that critically tests the methods proposed by <cit.> and <cit.>, which originally used the NSD. Our findings reveal several issues, including the failure of replication with another dataset specifically designed to avoid overlap between training and test sets <cit.>, and the questionable post-hoc selection procedures of <cit.> that can produce convincing reconstructions even with random brain data. We identify the lack of diversity represented by the limited number of clusters in the NSD dataset as a potential factor contributing to these issues. We also demonstrate the failure of zero-shot prediction in the feature space and the inability to recover a stimulus from its latent features, suggesting a misspecification of the model. Based on these findings, we conclude that the apparent photo-like reconstructions are essentially a result of classification to clusters shared between training and test sets, combined with hallucinations by the generative model.
In the formal analysis and simulation section, we aim to uncover the general factors underlying the issues and observations highlighted in the case study. We first describe the phenomenon of “output dimension collapse” in the translation from brain activity to the latent feature space. We show that the regression model trained on the clustered outputs becomes specialized to the training examples and its output collapses into a subspace of the training dataset. Our simulations with clustered data demonstrate that out-of-sample prediction can be achieved with an increasing number of training clusters, suggesting that zero-shot prediction is possible if stimulus diversity and a compositional representation are available. We also explore the preservation of image information at hierarchical layers of DNNs and discuss the caveats associated with evaluating reconstructions using identification metrics alone. Finally, we provide speculation on how we are fooled by the seemingly realistic reconstructions generated by AI models.
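The core of this phenomenon follows from linear algebra alone: least-squares predictions are always linear combinations of the training target vectors, as the following self-contained demonstration with random data shows:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))    # training brain patterns
Y = rng.standard_normal((100, 200))   # training latent feature targets

# Ordinary least squares: W = pinv(X) @ Y, so any prediction x @ W equals
# (x @ pinv(X)) @ Y, a linear combination of the rows of Y. The model can
# never leave the subspace spanned by the training targets.
W = np.linalg.pinv(X) @ Y
y_new = rng.standard_normal(50) @ W

coeffs, *_ = np.linalg.lstsq(Y.T, y_new, rcond=None)
print(np.allclose(Y.T @ coeffs, y_new))  # True: y_new lies in the row span of Y
```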
We conclude with recommendations for critically testing reconstruction models. Our findings highlight the importance of rigorous evaluation, the use of diverse datasets, and the need for careful specification and characterization of model components to advance the field of visual image reconstruction and develop truly generalizable prediction capabilities.
§ RESULTS
§.§ Case study
We investigated two types of recent generative AI-based reconstruction methods, StableDiffusionReconstruction <cit.> and Brain-diffuser <cit.>, as well as their validation dataset, the NSD <cit.>. Both reconstruction methods utilize CLIP features <cit.> to effectively apply recent text-to-image diffusion models in visual image reconstruction analysis. CLIP text features are obtained from the average of five text annotations corresponding to the stimulus image. These text annotations are used only during training, to learn the mapping from brain activity to CLIP text features. In the test phase, the methods directly predict CLIP text features from brain activity during image perception. Henceforth, these two reconstruction methods will be referred to together as text-guided reconstruction methods.
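As a concrete illustration, the regression target for the text pathway can be computed along the following lines with the openai/CLIP package — a sketch in which the model variant and caption strings are placeholders, not the exact configuration of the original studies:

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-L/14", device=device)  # model choice is illustrative

# Five crowd-sourced captions for one stimulus image (placeholders).
captions = [
    "a giraffe standing in a grassy field",
    "a tall giraffe near some trees",
    "a giraffe grazing on the savanna",
    "one giraffe walking in the wild",
    "a giraffe in its natural habitat",
]

with torch.no_grad():
    tokens = clip.tokenize(captions).to(device)   # (5, 77) token ids
    feats = model.encode_text(tokens).float()     # (5, 768) text features

# The averaged CLIP text feature serves as the regression target.
target = feats.mean(dim=0)
```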
The StableDiffusionReconstruction method <cit.> uses the components of the Stable Diffusion model <cit.>, the VAE features <cit.> and CLIP text features as latent features. Similarly, the Brain-diffuser method <cit.> utilizes the component of another type of diffusion model <cit.>, CLIP text features, and CLIP vision features as well as VDVAE features <cit.> as latent features. Both methods first generate initial images from translated VAE/VDVAE features. These initial images are then passed through the image-to-image pipeline of the diffusion model conditioned on the translated CLIP features, producing the final reconstructed images. They validated the reconstruction performance using the NSD dataset, preparing training and test data based on the data split provided by the NSD study. Thanks to the authors of these studies who made efforts to make their datasets and scripts publicly available, we were able to conduct our replication analysis effectively. We compared the reconstructed results of these two text-guided reconstruction methods with those from a previous image reconstruction method, iCNN <cit.>. For more information on datasets and reconstruction methods, see Material and Methods (“Datasets” or “Reconstruction methods”) or the original studies <cit.>.
§.§.§ Observations: Failed replication and convincing reconstruction from random data
We first confirmed that the findings of the text-guided reconstruction methods, particularly those of the Brain-diffuser study <cit.>, could be reproduced on the NSD dataset (Figure <ref>a). The reconstructed images generated by the Brain-diffuser method effectively captured most of the layout and semantics of the test images. Although the reconstructed images produced by the StableDiffusionReconstruction method <cit.> showed slightly degraded performance compared to the original paper, they still successfully captured the semantics of the test images. Notably, the iCNN method <cit.> also performed well on the NSD dataset, with reconstructed images capturing the dominant structures of the objects within the images, consistent with the findings reported in the original study.
To further investigate the generalizability of the text-guided reconstruction methods, we attempted to replicate their performance using a different dataset, Deeprecon, which was originally collected for the study by <cit.>. The Deeprecon dataset was specifically designed to avoid overlap between training and test sets, making it an ideal benchmark for evaluating the zero-shot prediction capabilities of reconstruction methods. However, the original Deeprecon dataset lacked the text annotations required by the text-guided reconstruction methods. To facilitate a fair comparison, we collected five text annotations for each training stimulus in the Deeprecon dataset through crowd-sourcing and used them to generate CLIP text features.
Despite the inclusion of these text annotations, the text-guided reconstruction methods failed to achieve the same level of performance on the Deeprecon dataset as they did on the NSD dataset. The reconstructed images produced by the text-guided methods exhibited photo-like appearances but suffered from largely degraded quality compared to their performance on the NSD dataset (Figure <ref>b). Notably, the text-guided reconstruction methods generated photo-like reconstructions even for simple geometric shapes present in the Deeprecon dataset, which deviated significantly from the original stimuli. In contrast, the iCNN method <cit.> consistently provided faithful reconstructions for both the NSD and Deeprecon datasets, despite its simplicity compared to the text-guided methods.
Upon further investigation, we noted a questionable post-hoc image selection procedure. In the StableDiffusionReconstruction study <cit.>, the reconstruction results were presented according to the following procedure: “We generated five images for each test image and selected the generated images with highest PSM” (PSM stands for perceptual similarity metrics), as illustrated in Figure <ref>a. This selection might lead readers or peer reviewers, particularly those not specialized in the brain decoding field, to overestimate the effectiveness of the methods and potentially lead to a distorted understanding of the actual reconstruction performance. Note that the Brain-diffuser study <cit.> did not execute such a procedure, and in their follow-up report <cit.>, they adopted a fairer image presentation procedure: “we generated five images with different stochastic noise and selected three images randomly.”
To examine the impact of this post-hoc selection procedure, we conducted an experiment using random brain data (Figure <ref>b). Instead of feeding the test brain data into pretrained translators, we shuffled the activities within each voxel of the NSD test set independently to create random brain data. Surprisingly, when the random brain data were input into the VAE feature translator, which contributes to producing initial images, plausible images were obtained by generating five images and selecting the best one. Even more strikingly, when the random brain data were input into both the VAE and CLIP text feature translators, we still obtained convincing results by simply generating images five times and conducting the aforementioned selection. These results should be inexplicable if the reconstructions genuinely derived from neural information, since the artificially created brain data completely lack any information about the original visual stimuli.
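The random brain data were created by a permutation of the following form — a minimal sketch assuming a (trials × voxels) array:

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_within_voxels(X_test: np.ndarray) -> np.ndarray:
    """Destroy stimulus information by permuting the trials of each voxel
    (column) independently, while preserving each voxel's marginal statistics."""
    X_rand = X_test.copy()
    for v in range(X_rand.shape[1]):
        X_rand[:, v] = rng.permutation(X_rand[:, v])
    return X_rand

# The shuffled data carry no trial-specific stimulus information, yet feeding
# them to the pretrained translators can still yield convincing images after
# post-hoc selection.
```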
These observations raise perplexing questions about the performance and generalizability of recent text-guided reconstruction methods. The significant deterioration in performance when simply changing the dataset from NSD to Deeprecon highlights the need for a deeper understanding of the factors contributing to the success of these methods on the NSD dataset. Moreover, the ability to obtain plausible reconstructions from random brain data by merely generating multiple images and selecting the best ones suggests that there may be fundamental issues with both the evaluation dataset and the components of the reconstruction methods themselves. In the following sections, we will conduct a thorough investigation into the potential problems associated with the NSD dataset and each component of the text-guided reconstruction pipeline.
§.§.§ Lack of diversity in the stimulus set
First, we examined the characteristics and limitations of the NSD dataset itself. To characterize the diversity of stimuli in the datasets and their latent representations, we focused on the CLIP features, which are used as latent features in recent reconstruction methods. We employed uniform manifold approximation and projection (UMAP) <cit.> to visualize the CLIP text features of the NSD stimuli (see Materials and Methods “UMAP visualization”). The visualization revealed approximately 40 distinct clusters, with considerable overlap between the training and test sets (Figure <ref>a). Interestingly, we were able to describe the stimulus images in each cluster using a single semantic word, such as airplane, giraffe, or tennis. Despite the NSD containing around 30,000 brain samples per subject, the diversity of the presented stimuli was limited to only around 40 semantic concepts. In contrast, the Deeprecon dataset, which was specifically designed to differentiate object categories between training and test data, exhibited less overlap between the two sets (Supplementary Figure <ref>).
To further investigate the similarity between the training and test stimuli, we conducted an analysis using a state-of-the-art perceptual similarity metric called DreamSim <cit.> (Figure <ref>b). For each test image in the NSD, we used DreamSim to identify the most similar training images based on their perceptual similarity scores. We observed that the training images extracted using DreamSim were highly similar to the test images in the NSD, not only in terms of semantic concepts but also in terms of overall layout and visual composition. This finding suggests that the NSD test images are heavily biased towards the distribution of the training data, with a significant overlap in the visual and semantic features present in both sets. Such a strong bias raises concerns about the true reconstruction performance of methods evaluated on this dataset, as the impressive results may be largely attributable to the methods' ability to memorize and reproduce the characteristics of the training data rather than generalizing to novel stimuli.
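This retrieval analysis can be sketched as follows, assuming the public dreamsim package interface, in which the model returns a perceptual distance for an image pair; the helper function is hypothetical:

```python
import torch
from dreamsim import dreamsim  # perceptual similarity model

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = dreamsim(pretrained=True, device=device)

def nearest_training_images(test_img, train_imgs, k=5):
    """Return indices of the k training images perceptually closest to test_img."""
    x = preprocess(test_img).to(device)
    dists = []
    with torch.no_grad():
        for img in train_imgs:
            y = preprocess(img).to(device)
            dists.append(model(x, y).item())  # smaller distance = more similar
    order = sorted(range(len(dists)), key=lambda i: dists[i])
    return order[:k]
```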
In contrast, when we applied the same DreamSim-based analysis to the Deeprecon dataset, we found that the extracted training images were substantially different from the test images. This observation indicates a clearer separation between the training and test distributions in the Deeprecon dataset, with the test images containing novel visual and semantic features that are not well-represented in the training set. The distinct differences between the NSD and Deeprecon datasets in terms of stimulus similarity highlight the importance of carefully designing evaluation benchmarks that can effectively assess the generalization capabilities of visual image reconstruction methods.
§.§.§ Failed zero-shot prediction in the feature space
Given the significant semantic and structural similarities between the NSD test images and the training images, it is crucial to assess whether the predictions made from brain activity truly demonstrate generalization beyond the training set (i.e., zero-shot prediction). To evaluate the zero-shot prediction capability of the translators used in the reconstruction methods, we conducted an N+1 way identification analysis, where N represents all of the training samples and 1 refers to the test sample (Figure <ref>a). We calculated the percentage of translated features that successfully identified the corresponding test sample in the NSD dataset.
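Concretely, the N+1-way identification can be computed as in this minimal sketch, assuming Pearson correlation as the similarity measure and numpy arrays for the features:

```python
import numpy as np

def n_plus_1_identification(pred, y_test, Y_train):
    """pred, y_test: (D,) predicted and true features of one test sample.
    Y_train: (N, D) features of all training samples.
    Returns 1 if the prediction matches the true test feature better than
    every one of the N training features, else 0."""
    candidates = np.vstack([y_test[None, :], Y_train])       # (N+1, D); row 0 is true
    Cz = candidates - candidates.mean(axis=1, keepdims=True)
    Cz /= candidates.std(axis=1, keepdims=True) + 1e-12
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    corrs = Cz @ p / len(p)                                  # Pearson r per candidate
    return int(np.argmax(corrs) == 0)

# The reported accuracy is the mean of this indicator over all test samples.
```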
The performance of the CLIP features was nearly 0% (Figure <ref>b). This extremely low performance suggests that the CLIP feature translator fails to achieve zero-shot prediction, even within the distribution covered by the training set. In other words, the CLIP features translated from brain activity are unable to accurately identify novel test samples, indicating a lack of generalizability beyond the training data. In contrast, the VGG19 features <cit.> employed in the iCNN method demonstrated moderate identification performance for the latent features at intermediate layers. This finding suggests that the VGG19 features possess a certain degree of generalizability, enabling them to identify test samples that were not encountered during training.
Furthermore, we investigated whether the CLIP feature translator enables the prediction of novel semantic concepts not present in the training set. We redesigned the dataset split to ensure that no semantic concepts were shared between the training and test sets, as in a previous zero-shot prediction study <cit.>. We first applied k-means clustering to the UMAP embedding space of the NSD's CLIP text features (Supplementary Figure <ref>a). We set the number of clusters to 40 based on visual inspection of the UMAP results. Based on these clustering results, we performed a hold-out analysis: when predicting samples within a cluster (e.g., the ski cluster), we excluded samples from that cluster in the training set (Figure <ref>a; hold-out split condition). As a control, we also prepared a naive data-split condition in which the training sample size was the same as in the hold-out split condition but overlapping semantic concepts were allowed. When we visualized the properties of the predicted features in the hold-out analysis by transforming them into the previously learned UMAP embedding space (Figure <ref>a), we observed that the predicted features tended to diverge largely from their original clusters and move into other clusters (Figure <ref>b).
To quantitatively assess the performance of the feature translator, we employed two metrics: cluster identification accuracy and pairwise sample identification accuracy. Cluster identification accuracy focuses on evaluating the translator's ability to generate features that correctly identify the semantic cluster to which a test sample belongs (Figure <ref>c; left). In this analysis, we compare the predicted features of each test sample to the average features of the training samples within each semantic cluster. The accuracy is then calculated as the percentage of predicted features that successfully identify the original semantic cluster of their corresponding test samples in the feature space. Pairwise sample identification accuracy is a commonly used metric in the evaluation of feature prediction and reconstruction performance <cit.>. This analysis assesses the translator's ability to generate features that are more similar to the actual test features than to the features of other samples in the test set (Figure <ref>c; right). For each test sample, we compare its predicted features with two sets of features: the actual test features and the features of another randomly selected sample from the test set. The accuracy is then calculated as the average winning rate of the predicted features against all other test samples, indicating the proportion of cases where the predicted features are more similar to the actual test features than to the features of other samples.
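For reference, a minimal sketch of the pairwise sample identification metric is given below, again assuming correlation-based similarity; the array shapes are assumptions:

```python
import numpy as np

def pairwise_identification(preds, trues):
    """preds, trues: (M, D) arrays of predicted and true test features.
    Returns the mean winning rate of each prediction against all distractors."""
    def corr_to_all(a, B):
        a = (a - a.mean()) / (a.std() + 1e-12)
        Bz = (B - B.mean(axis=1, keepdims=True)) / (B.std(axis=1, keepdims=True) + 1e-12)
        return Bz @ a / len(a)                    # Pearson r with every row of B
    rates = []
    for i in range(len(preds)):
        r = corr_to_all(preds[i], trues)          # similarity to every true feature
        rates.append((r[i] > np.delete(r, i)).mean())
    return float(np.mean(rates))

# Example with random features: accuracy hovers around the 0.5 chance level.
rng = np.random.default_rng(0)
print(pairwise_identification(rng.normal(size=(50, 512)), rng.normal(size=(50, 512))))
```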
Figure <ref>d presents a comparison of the identification performance of the feature translator in each semantic cluster between the hold-out and naive split conditions, as measured by two evaluation metrics: cluster identification accuracy and pairwise sample identification accuracy. The cluster identification accuracy in the hold-out split condition exhibited a substantial drop compared to the naive split condition across all semantic clusters. Notably, the cluster identification accuracy was frequently found to be 0% in the hold-out split condition, exposing a severe limitation of the translator in predicting novel semantic concepts absent from the training set (see also Supplementary Figure <ref>b). This finding strongly suggests that the CLIP feature translator functions primarily as a “classifier,” heavily relying on the semantic features learned during the training phase.
Intriguingly, despite the feature translator's complete failure to identify semantic concepts in the hold-out split condition, the pairwise sample identification accuracy often surpassed the chance level across the semantic clusters (Figure <ref>d). The insensitivity of the pairwise sample identification accuracy to the translator's inability to capture semantic concepts can be attributed to its emphasis on comparing the relative similarity between the predicted features and the actual test features, without taking into account the broader categorical context. This can lead to an inflated pairwise sample identification accuracy, potentially misrepresenting the true reconstruction performance and the translator's ability to generalize to novel clusters. This observation raises concerns about the adequacy of using identification accuracy as the sole metric for evaluating reconstruction performance, a practice that has been widely adopted in many studies <cit.>.
§.§.§ Failed recovery of a stimulus from its latent features
Finally, we conducted a rigorous evaluation of the generator component, which typically consists of diffusion models in generative AI-based reconstruction methods. To ensure that a visual image reconstruction method has the potential to faithfully reproduce an individual's perceived visual experiences, it is crucial that the method can recover the original images with a high degree of perceptual similarity when the neural translation from brain activity to latent features is perfect. However, it has been unclear whether generative AI-based reconstruction methods meet this fundamental requirement. To address this question, we performed a recovery check analysis by reconstructing images using the true latent features of target images. Instead of using latent features translated from brain activity, we directly input the latent features derived from the target images into the generator. While generative AI-based reconstruction methods generated images that were semantically similar to the target, the resulting images were not perceptually similar to the original (Figure <ref>a). In contrast, the iCNN method yielded results that closely resembled the actual target images, demonstrating its superior ability to capture and replicate the original visual content.
To further investigate the recovery performance, we conducted a recovery check on each latent feature of the Brain-diffuser method (Figure <ref>b). Interestingly, reconstructions from VDVAE features, which are used for generating initial images in the Brain-diffuser, exhibited a high degree of similarity to the target images. However, the images generated by CLIP features through the diffusion models showed significant deviations from the original targets. These findings suggest that text-guided reconstruction methods may not be well-suited for visual image reconstruction tasks, as they fail to faithfully recover the original visual images. Instead, they tend to create images based on their latent features, such as CLIP features, which can lead to a phenomenon known as “hallucination” in the field of generative AIs. Hallucination refers to an output that appears plausible but is actually incorrect or misleading, raising concerns about the reliability and accuracy of the model <cit.>. These methods seem to prioritize generating semantically similar images rather than faithfully reconstructing the visual content perceived by the individual.
These findings may provide an explanation for why the text-guided reconstruction methods performed well only on the NSD dataset. The results of the case study demonstrated that the text-guided reconstruction methods struggle to reconstruct (Figure <ref>) or identify (Figure <ref>) test samples that lie beyond the distribution of the training set. Such limitations suggest that these methods lack true generalization capabilities and are unable to accurately reconstruct novel visual experiences that differ significantly from the examples they were trained on. Moreover, even when the test samples belonged to the same distribution as the training set, the translators of the text-guided reconstruction methods had difficulty correctly identifying those test samples (Figure <ref>). This observation indicates that the translators may not have learned a sufficiently robust and generalizable mapping between brain activity patterns and the corresponding latent features, further limiting their ability to faithfully reconstruct the perceived visual experiences.
The case study revealed that the NSD test stimuli are highly similar to the training set (Figure <ref>), with a significant overlap in their visual and semantic features. Given this similarity, the impressive reconstruction results achieved by the recent text-guided reconstruction methods on the NSD dataset should not be interpreted as evidence of zero-shot reconstruction capabilities. Instead, a more plausible interpretation is that these methods primarily function as a combination of “classification” and “hallucination.”
In this context, the translator component of the text-guided reconstruction methods functions primarily as a classifier, predicting categorical information present in the training data rather than capturing the fine-grained details of the visual experience. This limitation may explain why convincing reconstructions can be obtained even from random brain data through post-hoc selection. Due to the limited variety in the outputs generated by the reconstruction models, repeated trials can eventually produce images that are semantically and visually similar to the target stimulus. The apparent plausibility and semantic consistency of these generated images can be attributed to the capabilities of the diffusion model, which learns to generate realistic-looking images based on the semantic information. While these images may seem convincing at first glance, they do not accurately reflect the specific visual experience of the individual. This phenomenon of hallucination raises serious concerns about the reliability and validity of the text-guided reconstruction methods when evaluated on the NSD dataset.
§.§ Formal analysis and simulation
In the above case study, we have identified several issues with these text-guided reconstruction methods and the dataset, especially the cluster structure of CLIP latent features, the lack of diversity in NSD, and the misspecification of the latent representation for image reconstruction, which results in the inability of diffusion models to faithfully recover the original images from their latent features. However, it is crucial to recognize that the findings of the case study are not merely specific to CLIP, NSD, or diffusion models. Instead, these issues likely reflect more fundamental problems that can arise in the development and evaluation of brain decoding and visual image reconstruction methods. Thus in this section, we extend the problems identified in the case study into formal analyses and simulations in generalized settings, aiming to provide a more comprehensive understanding of the factors that contribute to the limitations of current reconstruction methods and explore strategies for mitigating these issues.
§.§.§ Output dimension collapse
First, we revisit the properties of the translator and highlight the issues that arise when the prediction targets have insufficient diversity. Let us consider the general situation of predicting a feature 𝐲∈ℝ^D from a brain activity pattern 𝐱∈ℝ^D using a linear regression model. For the training set, we consider a brain activity matrix X_tr∈ℝ^N × D and a feature value matrix Y_tr∈ℝ^N × D, where X_tr consists of N samples of D-dimensional brain activity vectors 𝐱 and Y_tr consists of N feature value vectors 𝐲. We then train a linear (ridge) regression model on this training data. Given a regularization parameter λ, the weight matrix of the ridge regression model is analytically derived as W = (X_tr^⊤ X_tr + λ I)^-1 X_tr^⊤ Y_tr, where I is the D × D identity matrix.
The predicted feature value 𝐲̂_te for the test brain activity data 𝐱_te can be represented as:
𝐲̂_te = W^⊤𝐱_te
= Y_tr^⊤ X_tr (X_tr^⊤ X_tr + λ I)^-1 𝐱_te
= Y_tr^⊤ 𝐦 = ∑_i=1^N m_i 𝐲_tr^(i),
where 𝐦 = X_tr (X_tr^⊤ X_tr + λ I)^-1 𝐱_te ∈ℝ^N and 𝐲_tr^(i) is the i-th training feature vector. This transformation indicates that the predicted value is always represented as a linear combination of the feature vectors in the training set. This property is not limited to ridge regression but holds generally for ordinary (ridgeless) least-squares regression and related linear models.
Next, we consider a scenario where the diversity of the target features is small. This situation can arise when the feature space exhibits a clustered structure and the training data lacks sufficient diversity, as observed in the case study with the CLIP text features and the NSD dataset. When the training feature values have limited diversity, the predicted values from brain activity, which are represented as linear combinations of these features, also become constrained. Consequently, the prediction from brain data to features effectively becomes a projection onto a low-dimensional subspace formed by the training data.
To illustrate this phenomenon, we conducted a simple simulation to examine the distribution of predicted values from a linear regression model trained on clustered features (Figure <ref>). We generated clustered features by sampling from a Gaussian mixture distribution in a high-dimensional space. The corresponding brain activity data were generated from a multivariate Gaussian distribution. We then trained a linear regression model to predict the clustered feature values from the randomly generated brain data.
The simulation results clearly demonstrate the impact of clustered features on the predicted values. The trained model projects arbitrary brain data onto the subspace defined by the training features, resulting in predicted values that are confined to the vicinity of the training clusters. This observation highlights the limitation of training linear regression models with clustered features: the trained model's predictions are inherently constrained by the diversity and structure of the training data.
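A minimal sketch of this phenomenon is given below; the dimensions, cluster count, noise levels, and regularization strength are illustrative, not the exact values used for the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, N = 100, 5, 2000                       # feature dim, training clusters, samples
sigma_r = 0.01                               # within-cluster noise (illustrative)

centers = rng.normal(0, 1.0, size=(C, D))    # training cluster centers
Y_tr = centers[rng.integers(0, C, N)] + rng.normal(0, sigma_r, (N, D))

A = rng.normal(0, 1 / np.sqrt(D), (D, D))    # ground-truth mapping to "brain" space
X_tr = Y_tr @ A + rng.normal(0, 0.05, (N, D))

lam = 10.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(D), X_tr.T @ Y_tr)

# A test feature from a brand-new direction, with its corresponding brain data.
y_new = rng.normal(0, 1.0, size=D)
x_new = y_new @ A + rng.normal(0, 0.05, size=D)
y_hat = x_new @ W

# Compare how much of each vector lies outside the training-center subspace.
Q, _ = np.linalg.qr(centers.T)               # orthonormal basis of the C-dim subspace
def frac_outside(v):
    return np.linalg.norm(v - Q @ (Q.T @ v)) / np.linalg.norm(v)

print(f"true new feature outside training subspace: {frac_outside(y_new):.2f}")
print(f"prediction outside training subspace:       {frac_outside(y_hat):.2f}")
# The prediction stays close to the subspace spanned by the training clusters,
# even though the true feature lies largely outside it.
```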
This phenomenon, which we refer to as “output dimension collapse,” has important implications for the generalization capability of linear regression models in the context of brain decoding and visual image reconstruction. When the training data lacks diversity and forms distinct clusters in the feature space, the translator overly adapts to the subspace formed by the training data, regardless of the potential of the latent feature space. Consequently, the translator's outputs become similar to patterns in the training set, irrespective of the inputs, severely limiting the model's ability to predict novel or out-of-distribution samples. Note that output dimension collapse is not inherently caused by the limitations of linear regression models but rather stems from a lack of diversity in the outputs of the training data. This phenomenon can occur regardless of the type of regression model employed, including nonlinear models. In fact, nonlinear models may be even more susceptible to output dimension collapse, as the increased flexibility and complexity of these models can potentially exacerbate the issue.
Output dimension collapse may explain why plausible reconstructions were obtained even from random brain data by merely generating images several times in the case study. The NSD's lack of semantic diversity causes the translator to adapt only to the feature patterns of the training set, restricting its outputs to the subspace formed by the training data. As a result, convincing images could be found even from random data through questionable post-hoc selection.
§.§.§ Simulation with clustered features: What makes prediction compositional
The case study revealed that the NSD exhibits limited diversity (Figure <ref>) and poses difficulties for zero-shot prediction (Figure <ref> and <ref>). These observations suggest that the translator of CLIP features suffers from output dimension collapse due to the lack of semantic diversity in NSD. To explore potential strategies for mitigating output dimension collapse and achieving flexible predictions, we conducted simulation analyses using clustered data to assess generalization performance beyond the training set.
Our simulation involved teacher-student learning with Gaussian mixture models (see Materials and Methods: “Simulation with clustered data”). To imitate the situation of feature translation from brain activity, we first generated feature data and assumed a scenario where observation noise affects the brain data. To simulate a dataset with cluster structure and to control diversity effectively, the training feature 𝐲∈ℝ^D was generated from a D-dimensional mixture of Gaussians (Figure <ref>a). The brain activity data 𝐱∈ℝ^D were generated by multiplying 𝐲 by a teacher weight matrix and adding observation noise.
We trained a ridge regression model on large training data samples and obtained the student weight. To simulate the situation where the trained model encounters clusters that are not available at the training phase, we generated two types of test samples, in-distribution and out-of-distribution. In-distribution test samples were generated using the same probability distribution as the training data, while out-of-distribution (OOD) samples were generated from a different distribution. For these two types of predicted features, we calculated the cluster identification accuracy (Figure <ref>c), using C+1 cluster centers, which included C centers from the training set and one additional center for the OOD test set.
We first examined zero-shot prediction performance as the number of clusters in the training data was increased, while keeping the feature dimension D and the cluster variance ratio fixed (Figure <ref>b). While the cluster identification performance for in-distribution samples was perfect, that for OOD samples showed different patterns depending on the number of clusters in the training data. When the number of training clusters was low, cluster identification accuracy was 0%. This indicates that the translator behaved like a classifier, unable to generalize beyond the training set, as observed with the NSD (see also Figure <ref>d). On the other hand, as the number of clusters in the training data increased, it became possible to identify the novel clusters, achieving the same performance as for in-distribution test samples. This observation indicates the importance of the diversity of the training dataset. We also emphasize that a large number of training samples does not by itself address the problem. All of these results were obtained with a sufficiently large number of training samples, and the number of training clusters was varied while the number of training samples was kept fixed. We also observed qualitatively similar results when increasing the data diversity by controlling the cluster variance ratio (Supplementary Figure <ref>a).
Next, we investigated how diverse the training data need to be to ensure sufficient generalization. There are two possible scenarios for diversifying training samples: densely sampling the entire space so that no gaps remain, or sampling broadly enough to cover every dimension of the feature space (Figure <ref>c). The former requires a number of samples exponential in the dimension, whereas the latter requires only a linear number. We sought to determine which of these two scenarios holds by varying the dimension of the feature space and identifying the number of clusters required for generalization (see also Supplementary Figure <ref>b). Here, we defined the number of clusters required for generalization as the point at which the prediction accuracy for OOD samples exceeds 50%. The relationship between the dimension of the feature space and the number of clusters necessary for generalization appears to be linear (Figure <ref>d). This finding suggests that achieving generalization does not require an exponentially large diversity that fills the entire feature space. Instead, it suffices to have a number of clusters that cover the effective dimensions of the feature space. Although obtaining a large amount of brain data is hard work, it is important for a dataset to contain sufficiently diverse stimuli covering the effective dimensions of the target feature space to achieve zero-shot prediction.
We also confirmed this phenomenon with a simple and transparent example (D=2, Figure <ref>e). The training data cover sufficient axes in the feature space, enabling the prediction of locations not present in the training set. Based on this low-dimensional intuition, we argue that successful zero-shot prediction requires training data that lead to representations capable of serving as a basis spanning the feature space. Leveraging such bases effectively enables the model to predict novel samples by predicting each basis and combining them. Such a compositional representation, spanning the target feature space, is crucial for zero-shot prediction <cit.> and for reconstructing arbitrary visual images from limited brain data.
§.§.§ Preserved image information across hierarchical DNN layers
To enable the reconstruction of arbitrary visual images, compositional latent features that can be appropriately mapped into the image space are essential. Although it is often assumed that the hierarchical feature representations in DNNs, especially convolutional neural networks (CNNs), discard pixel-level information through their hierarchical processing with progressively expanding receptive fields, this is not necessarily the case.
For example, the latent features of auto-encoder models <cit.> can represent images in a low-dimensional space while preserving their reversibility, which is reasonable considering that the output is trained to match the input. Furthermore, <cit.> showed that input images can be reconstructed with reasonable accuracy even from relatively high-level layers of a DNN designed for an object recognition task. It has also been argued that large receptive field sizes do not necessarily impair neural coding capacity as long as the number and density of units remain constant <cit.>. These results challenge the notion that higher-level layers in DNNs discard all pixel-level information and highlight the potential for utilizing intermediate representations in visual image reconstruction. To further illustrate this point, we performed a recovery check on each intermediate layer of the VGG19 network used in the iCNN method (Figure <ref>; see also Figure <ref>). Given the DNN features of a target image, we optimized input pixel values to make the image's latent features similar to the targets (see Materials and Methods “Recovery check of a single layer by iCNN”). We observed that input images can be recovered with reasonable accuracy from relatively high-level layers (around the 11th of the 19 layers in total). Furthermore, by introducing image generator networks to add constraints on image statistics, reasonable recovery can be achieved from even higher layers. By utilizing a weak image prior <cit.>, which contains only information about the structure of images without any prior information on natural images, input images can be recovered from the 13th layer. When using an image generator that has learned natural image information <cit.>, input images can be recovered even from the 15th layer.
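The single-layer recovery check amounts to the following optimization — a PyTorch sketch assuming a torchvision-pretrained VGG19; the layer index, optimizer settings, and the omission of any image prior are simplifications of the actual iCNN pipeline:

```python
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights="IMAGENET1K_V1").features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYER = 20  # index into vgg.features; an intermediate layer, chosen for illustration

def features(x, layer=LAYER):
    """Forward x through vgg.features up to (and including) the given layer."""
    for i, m in enumerate(vgg):
        x = m(x)
        if i == layer:
            break
    return x

# A random tensor stands in for a preprocessed target stimulus image.
target_img = torch.rand(1, 3, 224, 224, device=device)
with torch.no_grad():
    target_feat = features(target_img)

# Optimize pixel values so the image's features match the target features.
x = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = ((features(x) - target_feat) ** 2).mean()
    loss.backward()
    opt.step()
# x now approximates the target to the extent that the chosen layer
# preserves pixel-level information.
```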
These findings underscore the significance of intermediate DNN representations that can effectively recover the input image in visual image reconstruction studies. Conversely, utilizing high-level image features or features from other modalities, such as text annotations, for which the corresponding images are challenging to recover, is not a rational choice for visual image reconstruction tasks. Although recent text-to-image models and decoded text features can easily generate images, the outputs should not be interpreted as reconstruction results. Instead, it is more reasonable to interpret them as visualizations of decoded semantic information.
§.§.§ Caveat with evaluation by identification
Pairwise identification has been a standard metric for evaluating latent feature decoding <cit.> or reconstruction performance <cit.>. However, our analysis revealed that even when the translator failed to accurately identify the semantic clusters to which the test samples belong, pairwise identification performance still surpassed chance level (Figure <ref>d). Here we critically examine this metric and demonstrate that significant results can be easily obtained when the target or predicted features exhibit certain structures.
Pairwise identification is calculated as the accuracy with which the predicted features (reconstructed image or its features) can correctly identify the true ones, in pairs consisting of a true sample and one of the remaining samples in the test set. If the candidate feature belongs to the same category as the true feature, we can expect the identification to be difficult. Conversely, if the candidate feature belongs to a different category from the true feature, identification would become easier.
Here we assume, for simplicity, that the test set comprises k categories and that test samples are equally distributed across the categories. We model the situation mentioned above by setting the expected identification accuracy to chance level (i.e., 0.5) when a candidate belongs to the same category as the true feature. In contrast, the expected identification accuracy when a candidate belongs to a different category from the true feature is set to q, ranging between 0.5 and 1 to mimic the ease of identification across categories. Assuming the number of test samples is large enough, the pairwise identification accuracy becomes
Acc = (1/k) · 0.5 + (1 − 1/k) · q
(see Materials and Methods “Expected identification accuracy in imprecise reconstructions” for the derivation). Figure <ref> shows the expected pairwise identification accuracy as a function of the number of categories in the test set, alongside the corresponding similarity structure of the test set. Even when within-category identification fails completely and only between-category identification succeeds, pairwise identification accuracy can still reach a value as high as 75% with just two categories. This finding highlights a potential limitation of the pairwise identification metric, as judging reconstruction performance solely on its basis can be misleading.
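The figure's trend can be verified directly from the formula — a small sketch in which the chosen values of k and q are illustrative:

```python
def expected_pairwise_acc(k: int, q: float) -> float:
    """Expected pairwise identification accuracy with k equally sized
    categories, chance-level (0.5) identification within a category,
    and accuracy q between categories."""
    return (1.0 / k) * 0.5 + (1.0 - 1.0 / k) * q

print(expected_pairwise_acc(k=2, q=1.0))   # 0.75, despite total within-category failure
print(expected_pairwise_acc(k=40, q=1.0))  # ~0.99 once many categories are present
```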
§.§.§ How are we fooled by hallucinations of generative AIs
Generative AIs have recently made remarkable progress, with models now capable of producing high-resolution, hyper-realistic images from text input <cit.> or generating text of a quality indistinguishable from human-written content <cit.>. However, due to the complex internal structure of these models and the vast amounts of data they are trained on, we are often fooled by the outputs of generative AIs. For instance, when searching for an unfamiliar topic using a large language model (LLM) in daily life, we may not realize that the model is fabricating false concepts. Viewers may have believed that a generative AI was responding in real time during a promotional demonstration, even though the footage had been edited <cit.>. We might also believe that these models are unbiased or can represent all possible viewpoints, even though they inherently contain biases from their training data and developers <cit.>. Given that the generative AI-based reconstruction methods exhibit photo-like appearances but poor generalizability (Figure <ref>), could similar issues be occurring in visual image reconstruction studies as well?
The goal of visual image reconstruction is to generate images from brain activity that precisely mirror visual experience. However, there appears to be a prevalent focus among the general public, reviewers, and even researchers on achieving outputs that are as photorealistic as possible, rather than emphasizing the accuracy of these reconstructions. This shift in focus raises questions about the extent to which these photorealistic reconstructions truly represent the actual visual experiences.
Traditionally, we have held two beliefs: 1) generating photo-like images from brain activity is challenging, and 2) if the reconstruction pipeline effectively captures the brain representation under natural image perception, the model’s output should also appear naturalistic or photo-like. Based on these beliefs, we tend to consider photo-like reconstructions as an indication of accurately reflecting the actual visual experience.
This heuristic can be formalized as follows:
Assumption 1: P[R] ≪ P[R̅] ≈ 1,
Assumption 2: P[R | T] ≈ 1,
Conclusion: P[T | R] is high,
where T represents the event that the model's output truthfully reflects the visual image, R represents the event that the model's output has a realistic appearance, and R̅ is the complementary event of R. P[T | R] is the probability with which we should conclude that the model's output actually reflects the subject's visual experience, based solely on the fact that the output is photo-like. In fact, this heuristic is reasonable when the above two assumptions are true and P[T] is not extremely low.
However, recent developments in generative AIs, such as diffusion models, have made it easy to produce convincing photo-like outputs, subverting the first assumption, i.e., now P[R] ≫ P[R̅]. Consequently, it becomes invalid to infer that the visual images are accurately reflected in the generator outputs solely because they appear photo-like. Rather, as shown in Figure <ref>, the probability P[T | R] may become smaller as the generative AIs produce more convincing outputs (see also Figure <ref>). This perspective emphasizes the need for careful evaluation of reconstruction performance, considering the possibility of hallucinations by generators. While pursuing photo-like reconstructions to improve reconstruction fidelity is undoubtedly important, it would be counterproductive to obsess over naturalistic appearance to the point of neglecting the original goal of reconstructing perceived visual images.
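The effect can be made explicit with Bayes' rule. The following toy calculation uses illustrative probability values, not estimates from any experiment:

```python
def posterior_truthful_given_realistic(p_T, p_R_given_T, p_R_given_notT):
    """P[T | R] via Bayes' rule."""
    p_R = p_R_given_T * p_T + p_R_given_notT * (1 - p_T)
    return p_R_given_T * p_T / p_R

p_T = 0.3  # prior that an output truthfully reflects the percept (illustrative)

# Classical regime: realistic outputs were hard to obtain unless truthful.
print(posterior_truthful_given_realistic(p_T, 0.9, 0.05))  # ~0.885

# Generative-AI regime: hallucinations are also photo-like.
print(posterior_truthful_given_realistic(p_T, 0.9, 0.9))   # 0.3 — back to the prior
```

When hallucinations are as photo-like as truthful outputs, a realistic appearance carries essentially no evidence, and the posterior collapses to the prior.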
§ DISCUSSION
In this study, we critically examined recent generative AI-based visual image reconstruction methods to assess their true capabilities and limitations. Our primary goals were to (1) investigate the performance of these methods on different datasets, (2) identify potential issues and pitfalls in their methodology and evaluation, and (3) provide insights and recommendations for future research in this field. We conducted a case study focusing on text-guided reconstruction methods and their validation on the Natural Scene Dataset (NSD). Our findings revealed several concerns, including the failure to replicate the reconstruction performance on a different dataset, the use of problematic post-hoc image selection procedures, the lack of diversity and limited number of clusters in the NSD stimulus set, the failure of zero-shot prediction by the translator component, and the inability to accurately recover original stimuli by the generator component. Formal analysis and simulations further demonstrated the phenomenon of output dimension collapse, the importance of compositional representations for achieving zero-shot prediction, and the potential pitfalls of relying solely on identification metrics to evaluate reconstruction performance. Moreover, we highlighted that photo-like appearance does not necessarily imply accurate reflection of the perceived visual images. Based on these findings, we argue that the reconstructions from the recent text-guided reconstruction methods are, in large part, the result of a combination of classification and hallucination. Our study emphasizes the need for more rigorous evaluation and careful interpretation of results in visual image reconstruction research, particularly when using generative AI-based methods.
While our study critically examined the limitations of recent text-guided, diffusion-based methods for visual image reconstruction, it is important to acknowledge that these approaches provide new and potentially promising directions for brain decoding research. Although we emphasized the importance of achieving zero-shot prediction, it is crucial to recognize that most brain decoding studies focus on classification tasks, which, while not zero-shot, have provided insights into neural representations <cit.>. Moreover, the visualization of semantic contents (e.g., the supplementary movies in <cit.>), can have significant utility in visual communication, even if it does not constitute zero-shot prediction. It is also worth noting that the individual components of these text-guided, diffusion-based methods are already being utilized in various brain decoding applications. For instance, text latent features derived from deep neural networks and large language models (LLMs) have shown promise in analyzing semantic information from brain activity <cit.>. Furthermore, diffusion models are not limited to text-to-image generation; they can be trained to generate images from visual latent features, as demonstrated by <cit.>, enabling the successful reconstruction of subjective experiences. By leveraging these components and exploring their potential synergies, researchers can continue to push the boundaries of brain decoding and visual image reconstruction, while being mindful of the challenges and limitations highlighted in our study.
The recent trend of collecting and sharing large-scale visual neural datasets, such as those by <cit.> and <cit.>, is a welcome development in the field of neuroscience. These datasets provide valuable resources for researchers to investigate brain function and advance our understanding of visual processing. The NSD is a particularly notable example, as it was created with the goal of extensively sampling brain responses to a wide range of natural visual stimuli <cit.>. The NSD has been widely utilized in various studies <cit.>, demonstrating its value to the research community. While our results suggest that the semantic and visual diversity of the NSD stimuli may not be as high as initially thought, and there is substantial overlap between the training and test sets provided by the NSD authors, this does not diminish the overall importance and usefulness of the dataset. However, to fully leverage the NSD and other publicly available large-scale datasets for developing generalizable and zero-shot prediction models, it is crucial to carefully consider the data split between training and test sets. When aiming for generalizable predictions, such as in visual image reconstruction, researchers should verify whether there are significantly similar stimuli included in both the training and test sets. If necessary, redesigning the training and test split can ensure that the models are tested on their ability to predict novel stimuli. Moreover, recent advancements in functional alignment and inter-site neural code conversion methods <cit.> hold promise for combining datasets from different sources, enabling truly larger-scale data analysis in neuroscience. These techniques allow researchers to align brain activity patterns across individuals and measurement sites even when stimuli are not shared across datasets. By leveraging these methods, researchers can pool data from various sources, increasing the sample size and diversity of the combined dataset, mitigating the limitations of individual datasets, and enhancing the development of generalizable and zero-shot prediction models.
Investigating neural responses to natural stimuli is a highly valuable approach to understanding brain function and representation <cit.>. As our brains have developed while being exposed to natural scenes, it is crucial to use natural stimuli, especially in model training. However, we should not forget that we are also capable of perceiving non-natural stimuli such as artificial images. We would like to emphasize that there are potential pitfalls in relying too heavily on evaluations based solely on natural stimuli. With the increasing scale of neural data and the growing complexity of analysis pipelines, there is a risk that the learned mappings may produce unexpected shortcuts, just as we have demonstrated that the generative AI-based reconstruction methods exploited the semantic and visual overlap between training and test sets. Drawing inspiration from comparative and developmental psychology, where researchers often employ simple stimuli and tasks to measure cognitive abilities in infants for better experimental control and precise inferences <cit.>, the evaluation of visual image reconstruction should not be limited to complex natural stimuli alone. While natural stimuli are essential for ensuring ecological validity and understanding how the brain processes real-world information, it is equally important to assess performance in controllable and transparent settings.
We discussed output dimension collapse and related issues with clustered data found in natural image features. It is important to note that these issues can arise in various situations. Several studies have used brain data in which the stimuli shared category information between the training and test sets. <cit.> collected a dataset of EEG signals recorded during natural image perception. Their visual stimuli consisted of 2,000 images selected from 40 object categories in ImageNet (50 images per category), and the test set contains the same categories as the training set. <cit.> attempted to develop music reconstruction methods from fMRI activity patterns. Their music stimuli consisted of 540 music pieces selected from 10 music genres, and the test set contains the same genres as the training set <cit.>. <cit.> attempted to reconstruct perceived texture images from EEG signals. Their texture stimuli consisted of 191 images of 21 natural textures, and they performed a reconstruction analysis in a leave-one-out manner. It should be carefully examined whether these studies may suffer from output dimension collapse, merely decoding category information from brain activity and generating stimuli based on this classified category information, similar to our interpretation of the text-guided reconstruction methods. Note that for other issues in the data from <cit.>, see <cit.>. Our inspection of the data from <cit.> revealed that many frames in the test movie stimuli were almost identical to those in the training set (Supplementary Figure <ref>). This is presumably because temporally adjacent video frames were split between the training and test stimuli. While our preliminary analysis of their motion energy features did not exhibit peculiar clustering (note also that their model was an encoding model whose output is brain activity), caution should be exercised if the dataset is used for models that extract other features of the videos.
One of the remaining challenges in visual image reconstruction is the development of metrics for evaluating the quality and accuracy of the reconstructed images. The first and most critical step in assessing reconstruction results is to confirm a qualitative similarity between the reconstructed images and the perceived images through visual inspection across a diverse range of test sets. Following this, quantitative metrics should be employed for a more objective, high-throughput evaluation. However, as our analysis has suggested, it can be misleading to evaluate reconstruction by heavily relying on identification performance based on the relative similarity among alternatives <cit.>. Even in cases where the reconstructed images only capture superficial information, such as categories or overall brightness, identification metrics can still be high. While identification performance can provide a useful benchmark, it should not be the sole metric for evaluating reconstruction quality. It is crucial to develop more appropriate similarity metrics that can accurately measure the perceptual similarity between the reconstructed and original images. One promising approach is to leverage image quality assessment (IQA) techniques from the computer vision field <cit.>. These techniques are designed to quantify the perceptual quality of images and can be adapted to the specific requirements of visual image reconstruction.
Visual image reconstruction methods have gained attention not only from neuroscientists but also from the general public and policymakers, sparking discussions about their potential applications and risks <cit.>. These stakeholders often contemplate the possibilities of seamless information communication through the brain, such as in brain-machine interfaces (BMIs), or the dangers of unauthorized access to private information from brain activity. This interest may stem from the perception that brain activity data can be obtained easily and reliably in real time. However, current technology and analysis methods fall short of these expectations. For instance, most reconstruction methods analyze previously acquired brain data offline. Additionally, the brain data used for reconstructing images are often averaged over multiple presentations of the test image, with only a few studies demonstrating single-trial reconstruction results <cit.>. Further, it has been argued that subject cooperation is essential for reliably training and testing decoding models <cit.>. Public expectations are thus set too high, and it is challenging to meet these demands quickly. It is essential to explicitly state these limitations to avoid disappointment and prevent governments or companies from making misguided decisions. Key limitations include the lack of real-time analysis, reliance on averaged brain data, the necessity of subject cooperation, and the need for further model fine-tuning using the test brain dataset <cit.>. It is crucial to avoid making overly optimistic claims about the ability to reconstruct arbitrary images, as the applicability can be highly dependent on specific training data and conditions.
§ MATERIALS AND METHODS
§.§ Datasets
We utilized two datasets: the Natural Scene Dataset (NSD) <cit.> and the Deeprecon dataset <cit.>. Both datasets comprise visual stimuli and corresponding fMRI activity collected while subjects perceived the stimuli. In the NSD dataset, eight subjects were presented with MSCOCO images <cit.>, yielding 30,000 brain activity samples per subject, which is three times the amount provided by the Deeprecon dataset. The Deeprecon dataset includes fMRI activity data from subjects presented with both ImageNet images <cit.> and artificial images. It contains roughly 8,000 brain samples per subject. Since this dataset was designed for evaluating reconstruction performance, the test data were carefully selected. The natural test images were selected from ImageNet categories different from those used in training. The artificial images were used only as test data to assess the generalization performance of the proposed reconstruction methods. In both datasets, we adopted the train/test split used in previous studies and utilized data from the first subject (S1 in NSD, Subject 1 in Deeprecon). Text-guided reconstruction methods require text annotations of images. For the NSD, text annotations accompanying the MSCOCO database were used. For the Deeprecon dataset, we collected captions for each experimental stimulus via crowd workers on Amazon Mechanical Turk, yielding five captions per image. The captions of the training stimuli are publicly available at the GitHub repository (https://github.com/KamitaniLab/GOD_stimuli_annotations).
§.§ Reconstruction methods
We utilized three image reconstruction methods: StableDiffusionReconstruction <cit.>, Brain-diffuser <cit.>, and iCNN <cit.>. Each method employs two common steps: first, translating brain activity patterns into latent features of the stimuli, and second, generating images from these latent features using an image generator (see also Figure <ref>). In the StableDiffusionReconstruction method, the latent features are the VAE <cit.> features calculated from stimulus images and CLIP text features <cit.> from the image annotations. The generator is Stable Diffusion <cit.>. Low-resolution images are first generated from the translated VAE features, and those images are further fed into the Stable Diffusion model with translated text features to generate images. The generated images are regarded as the reconstructed images from brain activity. In the Brain-diffuser method, the latent features are the VDVAE <cit.> and CLIP vision features from stimulus images and CLIP text features from the image annotations. The generator is Versatile Diffusion <cit.>. Similar to StableDiffusionReconstruction, low-resolution images are first generated from the translated VDVAE features, and these images are further used as input to the Versatile Diffusion model with the translated vision and text features. The generated images are regarded as the reconstructed images from brain activity. In the iCNN method, the latent features are the intermediate layer outputs of VGG19 <cit.> computed from stimulus images. As a generator, they used a pretrained image generator <cit.> and solved an optimization problem to minimize the discrepancy between the VGG19 features calculated from the generated images and the translated VGG19 features. The optimized images are regarded as the reconstructed images.
§.§ UMAP visualization
To investigate dataset diversity, we employed Uniform Manifold Approximation and Projection (UMAP), a non-linear dimensionality reduction technique <cit.>, to learn a projection from the latent feature space to a lower-dimensional UMAP embedding space. We used both the training and test CLIP text features for learning the UMAP projection; these features were combined and standardized beforehand. The UMAP hyperparameters follow https://umap-learn.readthedocs.io/en/latest/clustering.html#umap-enhanced-clusteringthe official UMAP guide for clustering usage, with cosine distance as the distance metric. The learned UMAP was also used to project the features predicted from brain activity (Figure <ref>). After standardizing the predicted features using the same mean and standard deviation parameters used in UMAP projection learning, we projected the decoded features into the UMAP embedding space.
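A minimal sketch of this procedure is given below (illustrative only, not the analysis script actually used): the feature arrays are random placeholders standing in for the CLIP text features, and the hyperparameter values are indicative of the clustering guide's recommendations.

```python
import numpy as np
import umap  # umap-learn package

# Placeholders standing in for precomputed CLIP text features.
train_feats = np.random.randn(800, 512)
test_feats = np.random.randn(50, 512)
decoded_feats = np.random.randn(50, 512)   # features predicted from brain data

# Learn the projection on jointly standardized train + test features.
feats = np.vstack([train_feats, test_feats])
mu, sd = feats.mean(axis=0), feats.std(axis=0)
reducer = umap.UMAP(n_neighbors=30, min_dist=0.0, metric="cosine")
embedding = reducer.fit_transform((feats - mu) / sd)

# Project the decoded features with the same standardization parameters.
decoded_embedding = reducer.transform((decoded_feats - mu) / sd)
```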
§.§ Simulation with clustered data
We conducted a simulation analysis to examine generalization performance beyond the training data (Figure <ref>). This analysis involves a teacher-student learning task using Gaussian mixture distributions: we generated input-target pairs from a teacher model and then examined the prediction performance of student regression models. To imitate feature translation from brain activity, we first generated latent feature data and assumed a scenario in which observation noise affects the brain data.
The training sample of latent feature data 𝐲∈ℝ^D was generated from a D-dimensional space using Gaussian mixture distributions, formulated as:
p_tr(𝐲) = 1/C∑_c=1^C𝒩(μ_c^ tr,σ_r^2I), μ_c^ tr∼𝒩(0, σ_s^2 I),
where C is the number of clusters in the training set, σ_r^2 is the scalar variance of the Gaussian distribution corresponding to each cluster, and σ_s^2 is the scalar variance of the distribution of the cluster centers (μ_c^tr). Brain activity data, 𝐱∈ℝ^D, were generated using teacher weights A̅∈ℝ^D× D with observation noise ξ as 𝐱 = A̅^⊤ 𝐲 + ξ, where ξ∼𝒩(0,σ_n^2 I) and σ^2_n is the scalar variance of the observation noise. Using N training samples, a ridge regression model was trained. With hyperparameter λ, the weights W of the ridge regression model can be calculated analytically:
W =(X_tr^⊤X_tr + λ I)^-1X_tr^⊤ Y_tr.
For testing the model performance, we prepared two types of test samples: in-distribution test samples and out-of-distribution test samples. In-distribution test samples were generated using the same parameters as p_tr(𝐲). Out-of-distribution (OOD) test samples were generated in two steps. First, C_ood different cluster centers μ_c^ood were obtained by sampling from a Gaussian distribution: μ_c^ood∼𝒩(0, σ_s^2 I). Then OOD test samples were generated from these new cluster centers:
p_ ood( 𝐲) = 1/C_ ood∑_c=1^C_ ood𝒩(μ_c^ ood,σ_r^2I).
To evaluate the model's zero-shot performance, we performed a (C+1)-way cluster identification analysis over the C training clusters and the one cluster to which each OOD test sample belongs. Similar to Figure <ref>c, we calculated the proportion of predicted features from test brain data that were correctly classified as belonging to their original cluster centers rather than the other candidate clusters. The similarity between predicted features and each cluster center was computed using the correlation coefficient, with the highest similarity determining the cluster assignment; the chance level is 1/(C+1). We evaluated the cluster identification performance for each of the OOD clusters and report the median performance. Note that, for in-distribution test samples, we also conducted a (C+1)-way identification analysis on the C training clusters and one selected OOD cluster for fair comparison.
In the simulation analysis, we mainly focused on the cluster identification performance as a function of the dimension D, the number of training clusters C, or the distance among cluster centers, for a large sample size N = 500000. The other parameters were fixed as follows: σ_r^2 = 10 / D, σ_s^2 = 100 / D, A̅_ij∼𝒩(0, D^-1/2), σ_n^2 = 0.1, λ = 1.0, and C_ood = 32. We parameterized the scale of the variances (σ_r^2 and σ_s^2) and the teacher weights A̅ using the dimension D so that the order of magnitude of the output is invariant to the dimension.
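The data-generating process and the closed-form ridge student can be reproduced with the following self-contained numpy sketch (an illustrative reimplementation; the sample size is reduced from N = 500000 for speed):

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, N, lam = 16, 8, 5000, 1.0
sigma_r2, sigma_s2, sigma_n2 = 10 / D, 100 / D, 0.1

# Teacher: cluster centers, latent features y, and noisy brain-like data x.
mu_tr = rng.normal(0.0, np.sqrt(sigma_s2), size=(C, D))
labels = rng.integers(C, size=N)
Y = mu_tr[labels] + rng.normal(0.0, np.sqrt(sigma_r2), size=(N, D))
A = rng.normal(0.0, np.sqrt(D ** -0.5), size=(D, D))  # A_ij ~ N(0, D^{-1/2})
X = Y @ A + rng.normal(0.0, np.sqrt(sigma_n2), size=(N, D))  # x = A^T y + xi

# Student: ridge regression from x back to y, solved analytically.
W = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Y)
Y_pred = X @ W
```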
§.§ Recovery check of a single layer by iCNN
We performed a recovery check analysis using a single layer from the iCNN method in Figure <ref>. Briefly, the iCNN method generates an image by optimizing pixel values to make the image's latent features similar to the target latent features <cit.>. In the pixel optimization condition (the left columns of each recovery image in Figure <ref>), we directly optimized the pixel values to minimize the mean squared loss between the latent features of the image and the target latent features, together with a total-variation (TV) loss on the pixel values <cit.>.
Additionally, the iCNN method can incorporate image generator networks (middle and right columns of each recovery image in Figure <ref>) to add constraints on image statistics. Instead of optimizing the pixel values, we optimized the parameters related to the generator networks to minimize the mean squared loss between the latent features obtained through the generator networks and the target latent features. As a weak image prior, we used Deep image prior (DIP) <cit.>. DIP utilizes a hierarchical U-Net architecture as an inherent prior for image tasks, capturing the statistical regularities of images without relying on a specific dataset. This model works effectively by optimizing a randomly initialized neural network that can be used as an image prior in various inverse problems such as denoising, super-resolution, and inpainting tasks. In our analysis, DIP started with a U-Net initialized with random noise. Subsequently, the latent features and parameters of DIP were optimized to minimize the difference between the network's output and target DNN features. For the pretrained image prior, we used the same generator model as in <cit.> <cit.>, optimizing the latent features of the pretrained networks.
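For concreteness, the pixel optimization condition can be sketched in PyTorch as follows (a hedged sketch, not the original implementation); `feat_fn` is a hypothetical stand-in for the chosen VGG19 feature extractor:

```python
import torch

def pixel_recon(target_feat, feat_fn, steps=500, lam_tv=1e-3, lr=0.05):
    """Optimize pixel values so feat_fn(img) matches target_feat,
    plus a total-variation penalty on the pixels."""
    img = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        tv = (img[..., 1:, :] - img[..., :-1, :]).abs().mean() \
           + (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
        loss = ((feat_fn(img) - target_feat) ** 2).mean() + lam_tv * tv
        loss.backward()
        opt.step()
    return img.detach()
```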
§.§ Expected identification accuracy in imprecise reconstructions
A pairwise identification accuracy is a metric defined on three types of samples: the test sample, the predicted sample, and a candidate sample selected from the test set, denoted 𝐲, 𝐲̂, 𝐲_-∈𝒴, respectively. We define a function S: 𝒴×𝒴×𝒴→ℝ that takes this triplet as input and outputs whether the predicted sample is closer to the test sample than to the candidate sample:

S(𝐲̂, 𝐲, 𝐲_-) = 1 if sim(𝐲̂, 𝐲) ≥ sim(𝐲̂, 𝐲_-), and 0 otherwise,
where sim(·, ·) is an arbitrary function that evaluates the similarity between two samples. The pairwise identification accuracy Acc over n test samples is defined as

Acc = 1/n(n-1)∑_i^n∑_j≠ i^n S(𝐲̂^(i), 𝐲^(i), 𝐲^(j)).
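In code, the metric can be sketched as follows (an illustrative numpy implementation, taking the remaining test samples as the candidates and Pearson correlation as sim):

```python
import numpy as np

def pairwise_identification(Y_true, Y_pred):
    """Fraction of ordered pairs (i, j), j != i, for which prediction i is
    at least as similar to its own target as to the distractor target j."""
    sim = lambda a, b: np.corrcoef(a, b)[0, 1]
    n = len(Y_true)
    wins = sum(
        sim(Y_pred[i], Y_true[i]) >= sim(Y_pred[i], Y_true[j])
        for i in range(n) for j in range(n) if j != i
    )
    return wins / (n * (n - 1))
```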
Now we consider a scenario where the translator only decodes semantic information (e.g., category) and cannot decode information about its precise visual appearance. Suppose the test set contains a categorical structure like NSD stimuli, we model such a scenario as
𝔼_p(𝐲, 𝐲̂, 𝐲_-| (𝐲, 𝐲̂, 𝐲_-) ∈ Z) [S(𝐲̂, 𝐲, 𝐲_-)] = 0.5,
𝔼_p(𝐲, 𝐲̂, 𝐲_-| (𝐲, 𝐲̂, 𝐲_-) ∈Z̅) [S(𝐲̂, 𝐲, 𝐲_-)] = q where q ∈ [0.5, 1].
Z is the set of triplets in which the test sample and the candidate sample belong to the same category, and Z̅ is its complementary set. 𝔼_p(𝐲, 𝐲̂, 𝐲_-|·) [S(𝐲̂, 𝐲, 𝐲_-)] represents the pairwise identification accuracy in conditional expectation form. If the candidate sample belongs to the same category as the test sample, pairwise identification is challenging because the translator's prediction is poor. On the other hand, if the candidate sample belongs to a different category than the test sample, the test sample is easily identified from the semantic information alone.
Here, we assume for simplicity that the test set contains k categories in total and that the samples are equally distributed across categories. If we have a sufficiently large number of test samples, the above identification accuracy can be approximated as
Acc = 1/n(n-1)∑_(𝐲, 𝐲̂, 𝐲_-) ∈ Z S(𝐲̂, 𝐲, 𝐲_-) + 1/n(n-1)∑_(𝐲, 𝐲̂, 𝐲_-) ∈Z̅ S(𝐲̂, 𝐲, 𝐲_-)
= |Z|/n(n-1)(1/|Z|∑_(𝐲, 𝐲̂, 𝐲_-) ∈ Z S(𝐲̂, 𝐲, 𝐲_-)) + |Z̅|/n(n-1)(1/|Z̅|∑_(𝐲, 𝐲̂, 𝐲_-) ∈Z̅ S(𝐲̂, 𝐲, 𝐲_-))
n →∞⟶ 1/k·𝔼_p(𝐲̂, 𝐲, 𝐲_-| (𝐲, 𝐲̂, 𝐲_-) ∈ Z) [S] + (1 - 1/k) ·𝔼_p(𝐲̂, 𝐲, 𝐲_-| (𝐲, 𝐲̂, 𝐲_-) ∈Z̅) [S]
= 1/k· 0.5 + (1-1/k) · q,
where |Z| = n(n/k-1), and |Z̅| = n(n-n/k).
The large-sample assumption is used in the third step.
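Plugging hypothetical values into this limiting expression shows how high pairwise accuracies can arise from category information alone (k and q below are illustrative, not estimates from our data):

```python
k, q = 80, 0.95  # hypothetical: 80 categories, near-perfect across-category wins
acc = (1 / k) * 0.5 + (1 - 1 / k) * q
print(f"expected pairwise accuracy: {acc:.3f}")  # ~0.944, with no
# within-category (visual-detail) information decoded at all
```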
§ ACKNOWLEDGEMENTS
We thank our laboratory team, especially Eizaburo Doi, Hideki Izumi, and Matthias Mildenberger, for their invaluable comments and suggestions on the manuscript. This work was supported by the Japan Society for the Promotion of Science (JSPS: KAKENHI grants JP20H05954, JP20H05705 to Y.K, JP21K17821 to Y.N, and 22KJ1801 to K.S.), Japan Science and Technology Agency (JST: CREST grants JPMJCR18A5, and JPMJCR22P3 to Y.K.), and New Energy and Industrial Technology Development Organization (NEDO: commissioned project, JPNP20006 to Y.K.).
§ SUPPLEMENTARY FIGURES
|
http://arxiv.org/abs/2405.08999v1 | 20240514234702 | Robust Approximate Sampling via Stochastic Gradient Barker Dynamics | [
"Lorenzo Mauri",
"Giacomo Zanella"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
[
Robust Approximate Sampling via Stochastic Gradient Barker Dynamics
Lorenzo Mauri Giacomo Zanella
Department of Statistical Science
Duke University Department of Decision Sciences and BIDSA
Bocconi University
]
Stochastic Gradient (SG) Markov Chain Monte Carlo algorithms (MCMC) are popular algorithms for Bayesian sampling in the presence of large datasets.
However, they come with few theoretical guarantees, and assessing their empirical performance is non-trivial.
In this context, it is crucial to develop algorithms that are robust to the choice of hyperparameters and to gradient heterogeneity since, in practice,
both the choice of step-size and the behaviour of the target gradients induce hard-to-control biases in the invariant distribution.
In this work we introduce the stochastic gradient Barker dynamics (SGBD) algorithm,
extending the recently developed Barker MCMC scheme, a robust alternative to Langevin-based sampling algorithms, to the stochastic gradient framework.
We characterize the impact of stochastic gradients on the Barker transition mechanism and develop a bias-corrected version
that, under suitable assumptions, eliminates the error due to the gradient noise in the proposal.
We illustrate the performance on a number of high-dimensional examples, showing that SGBD is more robust to hyperparameter tuning and to irregular behavior of the target gradients compared to the popular stochastic gradient Langevin dynamics algorithm.
§ INTRODUCTION
Approximating posterior distributions arising from probabilistic models is a challenging computational task, especially in the context of large datasets.
Standard gradient-based MCMC algorithms <cit.>
require evaluations of the exact target density and its gradient at each iteration, which can be computationally impractical.
Inspired by stochastic optimization <cit.>, stochastic gradient MCMC (SG-MCMC) algorithms replace the exact target gradient with a computationally cheaper estimate, such as one obtained from a randomly sampled subset of the original data.
Since the influential work of <cit.>, which introduced the stochastic gradient Langevin dynamics (SGLD) algorithm, SG-MCMC methods have gained considerable popularity among practitioners seeking to perform approximate Bayesian inferences with large datasets. We refer to <cit.> for an overview of SG-MCMC.
Most SG-MCMC methods <cit.> converge to the true posterior distribution if the step-size is appropriately decreased to zero <cit.>. However, this strategy deteriorates mixing, increasing computational cost.
Practitioners usually
keep the step-size fixed, which leads to non-negligible and hard-to-diagnose bias in the invariant distribution <cit.>, especially if the step-size is chosen too large or the target distribution is irregular.
Also, adaptive tuning is harder in the stochastic gradient setting relative to standard MCMC <cit.>, which makes
robust methods even more
appealing in this context.
Motivated by these considerations, we develop the stochastic gradient version of the Barker proposal scheme developed in <cit.>, which has been shown to enjoy improved robustness to target heterogeneity and hyperparameter tuning relative to classical gradient-based MCMC schemes.
The paper is structured as follows.
Section <ref> sets up notation and provides background on the Barker proposal.
Section <ref> introduces and analyzes the stochastic gradient Barker dynamics (SGBD) algorithm.
In particular: Section <ref> analyzes the bias induced by the direct use of stochastic gradients;
Sections <ref>-<ref> propose a bias-correction methodology and identify the maximum level of noise that it can tolerate; Section <ref> shows how to minimize bias for higher levels of noise.
Section <ref> numerically compares SGBD to SGLD.
Therein, SGBD displays greater robustness to the choice of hyperparameters and to irregular posterior distributions, and in most cases exhibits either comparable or better out-of-sample predictive performance.
Section <ref> discusses future research directions.
§ BACKGROUND
We consider the task of approximate sampling from a target probability distribution of the form π(θ) ∝exp(g(θ)), where θ∈ℝ^d and g:ℝ^d→ℝ. In a classical Bayesian setting with conditionally independent data, we have g(θ) = log(p(θ)) + ∑_i=1^Nlog(p(y_i| x_i, θ)), where p(θ) is the prior distribution of the parameter θ, and p(y_i| x_i, θ) is the likelihood component of the i-th data point y_i with covariates x_i. Hence, the gradient of g(θ)
can be written as the sum of N data points components, ∂_j g(θ) = ∑_i=1^N ∂_j g_i(θ),
where ∂_j g_i(θ) = 1/N∂_j log( p(θ)) + ∂_j log(p(y_i| x_i, θ)), and ∂_j stands for the partial derivative with respect to the j^th component of θ, i.e. ∂_j g(·) = ∂/∂θ_j g(·) for j=1, …, d.
§.§ The Barker Proposal
The Barker proposal <cit.> is a first-order approximation of
a locally-balanced jump process <cit.>.
The latter are continuous-time π-invariant jump processes with generator Lf(θ)=∫(f(θ+w)-f(θ))J(θ,θ+w)dw, defined by the intensity function

J(θ,θ+w) = h(π(θ+w)/π(θ)) ∏_j=1^dμ_σ(w_j), θ,w∈ℝ^d.
Above h:(0,∞)→(0,∞) can be any function satisfying h(t)=th(1/t) and μ_σ(z)=σ^-1μ(z/σ)
any probability density function (PDF) on ℝ with scale parameter σ>0
and a symmetric reference distribution μ.
Taking h as the Barker function, h(t)=2t(1+t)^-1, and approximating π with its first-order log-Taylor expansion, π(θ+w)/π(θ)≈exp(∑_j=1^d∂_j g(θ)w_j), leads to the Barker proposal, whose PDF is

Q_B(θ,θ+w) = ∏_j=1^d 2p(∂_jg(θ),w_j)μ_σ(w_j), θ,w∈ℝ^d,

p(δ,z) = (1 + exp(-z δ))^-1, δ,z∈ℝ.
Since p(δ,z)+p(δ,-z)=1, Q_B is a product of skew-symmetric distributions <cit.>,
which
provides full tractability as well as a straightforward algorithm to sample from Q_B (namely lines 2-7 in Algorithm <ref>).
The gradient ∂_j g(θ^(t-1)) enters into Q_B as a degree of skewness, as opposed to a linear shift of the mean as for classical Euler-Maruyama based schemes such as the Unadjusted Langevin Algorithm (ULA; see e.g ).
It follows that the gradients only influence the direction of the increment under Q_B and not its size, since the distribution of |w_j| is independent of ∂_j g(θ).
This decoupling of gradients and increments size leads to an increased robustness to sub-optimal hyperparameter tuning and target heterogeneity, see e.g. <cit.> for more details and some formal results.
On the other hand, being a first-order approximation to a π-invariant process, Q_B shares the favourable high-dimensional scaling properties of classical gradient-based MCMC, such as a scaling of order d^-1/3 as d diverges <cit.>.
Algorithm <ref> describes the resulting unadjusted Barker proposal algorithm.
The key step is the flipping operation at line 5, where the algorithm flips the sign of the proposed increment with probability 1- p(∂_j g(θ), w^(t)_j).
This operation skews the proposal distribution towards the target π, since the increment will be +w^(t)_j with high probability if ∂_j g(θ)w^(t)_j is high and -w^(t)_j otherwise.
Note that Algorithm <ref> has essentially the same computational cost as standard ULA, where θ^(t)_j = θ^(t-1)_j + σ^2/2∂_j g(θ^(t-1))+w^(t)_j and w^(t)_j∼ N(0,σ^2) for j=1,…,d.
Both schemes require O(N) operations at each iteration, with the computation of the gradient representing the major computational bottleneck with large datasets, as in other gradient-based schemes.
Previous work has considered the Metropolis-adjusted version of Algorithm <ref>. Here we consider the unadjusted one, in order to maintain the computational savings induced by (<ref>) when moving to the stochastic gradient context. While there are works combining MH schemes with mini-batching <cit.>, stochastic-gradient versions of unadjusted schemes are much more common and widely used.
In our experiments below we take μ_σ to be the bimodal distribution 0.5𝒩(-σ,(0.1σ)^2)+0.5𝒩(σ,(0.1σ)^2), as recommended in <cit.>.
Note that in algorithmic implementations one can simply take μ_σ=N(σ,(0.1σ)^2), since the resulting algorithm is equivalent by symmetry (though for the equality (<ref>) to be correct one needs the symmetric version of μ_σ).
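To make the transition mechanism concrete, here is a minimal numpy sketch of one iteration of Algorithm <ref> (an illustrative reimplementation, not the authors' code), using the equivalent one-sided choice μ_σ = N(σ,(0.1σ)^2); `grad_log_pi` is a user-supplied function returning ∇ g(θ):

```python
import numpy as np

def barker_step(theta, grad_log_pi, sigma, rng):
    """One unadjusted Barker update: draw increment sizes, then flip each
    coordinate's sign with probability 1 - p(grad_j, w_j)."""
    g = grad_log_pi(theta)                        # exact gradient of log-target
    w = rng.normal(sigma, 0.1 * sigma, size=theta.shape)   # draw from mu_sigma
    p = 1.0 / (1.0 + np.exp(-g * w))              # p(delta, z) = (1+e^{-z*delta})^{-1}
    b = np.where(rng.uniform(size=theta.shape) < p, 1.0, -1.0)
    return theta + b * w
```

For example, calling `theta = barker_step(theta, grad, 0.1, np.random.default_rng(0))` inside a loop produces a chain approximately targeting π.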
§ THE STOCHASTIC GRADIENT BARKER PROPOSAL (SGBD)
In this section we propose and analyze the stochastic-gradient Barker Proposal (SGBD) algorithm. At each iteration, we replace the true gradient with the minibatch estimate
∂̂_j g(θ) = N/n∑_i ∈𝒮_n∂_j g_i(θ) j=1,…,d,
where 𝒮_n is a subset of {1,…,N} of size n ≪ N sampled uniformly at random, with or without replacement.
The vanilla version of SGBD (v-SGBD) consists in substituting the gradient in Algorithm <ref> with the estimate in (<ref>), leading to Algorithm <ref>.
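In sketch form, v-SGBD only changes the gradient that drives the flipping step; in the snippet below, `grad_i(theta, i)` is a hypothetical user-supplied function returning the per-datum gradient ∂ g_i(θ):

```python
import numpy as np

def minibatch_grad(theta, grad_i, N, n, rng):
    """Unbiased minibatch estimate (N/n) * sum_{i in S_n} grad_i(theta, i)."""
    idx = rng.choice(N, size=n, replace=False)    # subsample S_n
    return (N / n) * sum(grad_i(theta, i) for i in idx)

def v_sgbd_step(theta, grad_i, N, n, sigma, rng):
    """Vanilla SGBD: Barker flipping driven by the noisy gradient."""
    g_hat = minibatch_grad(theta, grad_i, N, n, rng)
    w = rng.normal(sigma, 0.1 * sigma, size=theta.shape)
    p = 1.0 / (1.0 + np.exp(-g_hat * w))
    b = np.where(rng.uniform(size=theta.shape) < p, 1.0, -1.0)
    return theta + b * w
```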
§.§ Bias of vanilla SGBD
Lines 2, 5 and 6 of Algorithm <ref> are equivalent to setting b^(t)_j=1 with probability 𝔼[p(∂̂_j g(θ^(t-1)), w^(t)_j)], where the expectation is taken with respect to the subsampling mechanism.
Thus, denoting current location and proposed increment as θ∈ℝ^d and
z∈ℝ for notational simplicity, Algorithm <ref> effectively replaces the flipping probability, i.e. Pr(b=1)=p(∂_j g(θ), z) in Algorithm <ref>, with 𝔼[p(∂̂_j g(θ), z)].
While (<ref>) ensures that 𝔼[∂̂_j g(θ)] =∂_j g(θ),
the non-linearity of p implies that 𝔼[p(∂̂_j g(θ), z)] ≠ p(∂_j g(θ), z) in general.
The bias of p(∂̂_j g(θ), z)
implies that the gradient noise does not balance out and, similarly to other SG-MCMC methods, Algorithm <ref> introduces additional error in the stationary distribution of Algorithm <ref>.
In the next section we analyse this bias
and develop a strategy to reduce it.
We implicitly assume that the approximation error
inherent to Algorithm <ref>, induced by the first-order approximation of the jump process, is of smaller order relative to the one induced by stochastic gradients. The same holds true for most SG-MCMC schemes <cit.>.
First, we identify the direction of the bias.
We make the following symmetry assumption on the stochastic gradient noise, which we denote as η_θ := ∂̂_j g(θ) - ∂_j g(θ), suppressing the dependence on j for brevity.
[Symmetry of η_θ]
η_θd= - η_θ,
where d= denotes equality in distribution.
Under Condition <ref> we have
|p(∂_j g(θ), z) - 0.5| ≥|𝔼[ p(∂̂_j g(θ), z)]- 0.5|.
Proposition <ref>, as well as results below, holds for every θ∈ℝ^d and z∈ℝ.
Proofs of all theoretical results are provided in the supplement.
Proposition <ref> shows that, under symmetric noise, the expectation of p(∂̂_j g(θ), z) is always shrunk towards 0.5 relative to its target value p(∂_j g(θ), z).
The practical implication of (<ref>) is an inflation of the variance of the stationary distribution, as Algorithm <ref> moves less frequently towards a local mode of the distribution relative to Algorithm <ref>. This is analogous to what happens with other SG-MCMC algorithms: for instance, when the step-size is held fixed, the stochastic gradient noise increases the variance of the invariant distribution of SGLD when no correction is taken into account <cit.>.
§.§ Corrected SGBD
In this section we quantify the bias of p(∂̂_j g(θ), z) and derive a corrected estimator for Pr(b=1)=p(∂_j g(θ), z).
To do so, we assume the stochastic gradient noise to be normally distributed. This is a common requirement in the SG-MCMC theory literature, which is typically justified by assuming the mini-batch size to be sufficiently large and applying a central limit theorem <cit.>.
[Normality of η_θ]
η_θ∼𝒩(0, τ_θ^2),
for some τ_θ>0 that can depend on θ (and on j).
Under Condition <ref>, we obtain the following tractable approximation to the expectation of p(∂̂_j g(θ), z).
Under Condition <ref>, we have
|𝔼[p(∂̂_j g(θ), z)] - p(c_z,τ_θ∂_j g(θ), z ) | < 0.019,
where c_z,τ_θ := 1.702/√(1.702^2 + z^2 τ_θ^2).
Proposition <ref> approximates Algorithm <ref>
by Algorithm <ref>
with target gradients shrunk by the multiplicative factor c_z,τ_θ<1.
This supports the idea that stochastic gradients have the effect of tempering the stationary distribution by a power smaller than 1, and suggests the strategy of multiplying them by a factor larger than 1 to counterbalance the effect induced by the noise. In particular, multiplying ∂̂_j g(θ) by α>1 inflates its expectation by α and its variance by α^2. It turns out that the value α=1.702(1.702^2-τ_θ^2z^2)^-1/2 (despite not depending on ∂_j g(θ)) makes the expectation of the resulting (corrected) estimator approximately equal to p with the correct partial derivative ∂_j g(θ). This is formalized in Corollary <ref>.
Following Remark <ref>, we define the
corrected estimator of p(∂_j g(θ), z) as p̃(∂̂_j g(θ), z), where for any δ,z∈ℝ
p̃(δ, z) :=
p(1.702/√(1.702^2-τ_θ^2z^2)δ, z) if |z| < 1.702/τ_θ,
1(δ z >0) otherwise,
with 1(A) denoting the indicator function of the event A.
When the value of τ_θ is not too large, p̃(∂̂_j g(θ), z) is an approximately unbiased estimator of p(∂_j g(θ), z), as stated in the following corollary.
Assume Condition <ref> and τ_θ < max{1.702/|z|,τ̅(∂_j g(θ),z)}, where
τ̅(δ,z) =| δ/ Φ^-1((1+exp(-zδ))^-1)|
and Φ denotes the standard Normal CDF.
Then
|𝔼[p̃(∂̂_j g(θ), z) ]- p(∂_j g(θ), z) | < 0.019.
Replacing the naive estimate p(∂̂_j g(θ), z) used in v-SGBD with p̃(∂̂_j g(θ), z) leads to
what we refer to as corrected SGBD (c-SGBD).
Note, however, that the corrected estimator requires knowledge of the variance of the gradient noise, τ_θ. In practical applications, τ_θ must be estimated.
To do that we adopt a simple online sample variance estimator, leading to the version of c-SGBD described in Algorithm <ref>.
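A vectorized sketch of the corrected estimator is given below (illustrative); the per-coordinate noise standard deviation `tau` must be supplied, e.g. from the online sample-variance estimator mentioned above:

```python
import numpy as np

def p_tilde(g_hat, z, tau):
    """Corrected flipping probability: inflate the noisy gradient by
    1.702/sqrt(1.702^2 - tau^2 z^2) while tau*|z| < 1.702, falling back to
    the extreme estimator 1(g_hat * z > 0) otherwise."""
    safe = 1.702**2 - tau**2 * z**2
    scale = 1.702 / np.sqrt(np.maximum(safe, 1e-12))  # guard the square root
    corrected = 1.0 / (1.0 + np.exp(-z * scale * g_hat))
    extreme = np.where(g_hat * z > 0, 1.0, 0.0)
    return np.where(safe > 0, corrected, extreme)
```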
While Condition <ref> is not satisfied in most practical scenarios, it allows us to quantify the bias and to devise estimators that can reduce it when normality holds approximately. See for example Figure <ref>.
The results in Proposition <ref> and Corollary <ref> are based on the bound
max_x |F(x) - Φ(x/1.702)| < 0.0095, see e.g. <cit.>. Here F(·) is the CDF of the logistic distribution and Φ(·) is the one of the standard Normal distribution.
§.§ Noise tolerance of SGBD
There is a maximum amount of noise
that can be tolerated while still being able to estimate
p(∂_j g(θ), z)
from ∂̂_j g(θ).
In particular, even under Condition <ref>, if τ_θ is too large it is not possible to have
unbiased estimators of p(∂_j g(θ), z), by which we mean functions p̂(δ̂, z;τ_θ) taking values in [0,1] such that
𝔼[p̂(δ̂, z;τ_θ)]= p(δ,z)
for all δ∈ℝ
with expectation taken under δ̂∼ N(δ,τ_θ^2).
Assume Condition <ref> and τ_θ > τ^*, where
τ^* = inf_δ∈ℝτ̅(δ,z) = 4ϕ(0)/|z|,
with ϕ denoting the standard Normal PDF.
Then there exist no unbiased estimator of p(∂_j g(θ), z).
Note that [0,1]-valued unbiased estimators are what is needed in order to implement a stochastic gradient scheme that introduce no further bias in Algorithm <ref>.
Thus, Proposition <ref> identifies a noise level, τ^*, beyond which it is not possible to implement SGBD without introducing further bias due to stochastic gradients.
The value τ^* = 4ϕ(0)/|z| ≈ 1.596/|z| is related to, though slightly smaller than, the upper bound on τ_θ we required in Corollary <ref> to guarantee approximate unbiasedness of p̃(∂̂_j g(θ), z).
Intuitively, since only approximate unbiasedness is required in Corollary <ref>, one can afford slightly larger values of τ_θ therein.
We can re-interpret Proposition <ref> in terms of upper bound to the algorithmic step-size: given a noise level τ_θ, the largest increment one can propose in Algorithm <ref> without introducing further bias due to stochastic gradients is |z|≤ 1.596/τ_θ (or |z|≤ 1.702/τ_θ if a small controllable bias is allowed as in Corollary <ref>).
These results could also be used to devise adaptive versions of SGBD where σ is tuned on-the-fly so that |z|≤ 1.702/τ_θ occurs with high-probability, in a spirit similar to e.g. <cit.>.
We leave such extensions to future work.
Figure <ref> numerically illustrates these phenomena.
There we plot p(∂_j g(θ), z), 𝔼[p(∂̂_j g(θ), z)] and 𝔼[p̃(∂̂_j g(θ), z)] as a function of z when π(θ) is the posterior distribution in a high-dimensional logistic regression model with real data and randomly chosen values of θ and j.
See the supplement for more details on the model and generation of θ and j.
We observe the value of 𝔼[p(∂̂_j g(θ), z)]
being close to p(∂_j g(θ), z) when |z| is small,
while as |z| increases the shrinkage effect discussed in Proposition <ref> becomes evident.
The corrected estimator p̃ successfully reduces the bias up until the tolerance level |z|≈ 1.702/τ_θ, after which the signal in the stochastic gradient is too weak to successfully estimate the true value of p (Corollary <ref> and Proposition <ref>).
We refer to the supplement for a comparison between the tolerance level of SGBD and SGLD.
§.§ Extreme SGBD
Sections <ref> and <ref> show that,
under Condition <ref>, one can implement a stochastic-gradient version of the unadjusted Barker proposal without introducing significant bias.
Doing that requires
τ_θ<max{1.702/|z|, τ̅(∂_j g(θ), z)}.
This can be achieved either by reducing the stepsize σ (which reduces |z|) or by increasing the minibatch size n (which reduces τ_θ).
However, in many settings, users prefer to run SG-MCMC schemes with larger stepsize and smaller mini-batch size to speed-up convergence and reduce computational cost, even if this introduces non-negligible bias.
In this section we thus focus on the case of larger values of τ_θ
and identify the optimal estimator of p(∂_j g(θ), z) in such settings, which turns out to be
p̅(δ, z) := 1(δ z >0) ,
i.e. p̅(δ, z) equals 1 when δ and z have the same sign and 0 otherwise.
We refer to p̅ as extreme estimator.
Note that p̅ coincides with the corrected-estimator, p̃, whenever τ_θ≥ 1.702/|z|.
We will prove optimality of p̅ within the following class of estimators.
An estimator p̂(δ̂, z)=p̂(δ̂, z;τ_θ) of p(∂_j g(θ), z) is said to be symmetric if p̂(δ̂, z) + p̂(-δ̂, z) = 1 for all (δ̂,z)∈ℝ^2.
Condition (<ref>) requires p̂ to have a symmetric behaviour about 0.5 when the sign of the stochastic gradient is flipped, i.e. p̂(δ̂, z) - 0.5 = 0.5 - p̂(-δ̂, z).
The latter is a natural requirement for an estimator of p(∂_j g(θ), z) to be of practical interest,
otherwise the resulting algorithm would be unjustifiably biased towards the left or right side of the current value of θ_j.
We make the following assumption on stochastic gradients, which is strictly weaker than Condition <ref>.
[Unimodality of noise distribution]
The random variable η_θ admits a density function f_θ(x) with respect to the Lebesgue measure and f_θ(x)
is non-decreasing for x≤ 0 and non-increasing otherwise.
Relative to Condition <ref>, Conditions <ref> and <ref> accomodate for more general scenarios, such as heavier-tailed distributions of η_θ.
Under these conditions, any symmetric estimator induces more shrinkage towards 0.5 than p̅ as we now show.
Under Conditions <ref> and <ref>, we have
|𝔼[p̅(∂̂_j g(θ), z)] - 0.5| ≥|𝔼[p̂(∂̂_j g(θ), z)] - 0.5|.
for any symmetric estimator p̂, with strict inequality when p̂≠p̅ and ∂_j g(θ)z ≠ 0.
Since p̂ is always biased towards 0.5 for large τ_θ, Proposition <ref> implies that in such cases p̅ achieves minimal bias.
We formally state this under Condition <ref>, quantifying how large τ_θ needs to be.
Assume Condition <ref>, ∂_j g(θ)z ≠ 0, and τ_θ>τ̅(∂_j g(θ),z), with τ̅ defined in (<ref>). Then
|p(∂_j g(θ), z) - 𝔼[p̅(∂̂_j g(θ), z)]| < |p(∂_j g(θ), z) - 𝔼[p̂(∂̂_j g(θ), z)]|
Corollary <ref> supports the use of p̅ for large values of τ_θ.
Also, Corollaries <ref> and <ref> combined imply that the corrected estimator p̃(∂̂_j g(θ), z) is optimal, in the sense of being approximately unbiased when τ_θ≤τ̅(∂_j g(θ),z) and achieving minimal bias when τ_θ > τ̅(∂_j g(θ),z).
In our simulations, especially when we are interested in predictive accuracy, we also study the performance of the algorithm that always employs the extreme estimator, irrespective of the value of z and τ_θ.
We refer to such algorithm as extreme SGBD (e-SGBD), and provide pseudo-code for it in the supplement.
Effectively, e-SGBD sets b=1 when z and ∂̂_jg(θ^(t-1)) have the same sign and b=-1 otherwise. Hence, it always moves each coordinate in the direction of its component of the stochastic gradient, in a way that is similar to stochastic optimization methods such as AdaGrad <cit.>.
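Since b^(t)_j w^(t)_j = sign(∂̂_j g(θ^(t-1))) |w^(t)_j| in this case, one e-SGBD update can be sketched in a few lines (illustrative only):

```python
import numpy as np

def e_sgbd_step(theta, g_hat, sigma, rng):
    """Extreme SGBD: each coordinate moves by |w_j| in the direction of its
    stochastic gradient component."""
    w = rng.normal(sigma, 0.1 * sigma, size=theta.shape)
    return theta + np.sign(g_hat) * np.abs(w)
```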
§ EXPERIMENTS
In this section we study the performances of the three SGBD versions (vanilla, corrected and extreme), applying them to sampling tasks arising from various models and comparing them to SGLD.
To help comparability, we consider three versions of SGLD: a vanilla one (v-SGLD) corresponding to the stochastic gradient version of ULA; a corrected one (c-SGLD; see Algorithm 4 in the supplement), where the standard deviation of the artificial noise is adjusted to correct for the stochastic gradient noise; and an extreme one (e-SGLD), where the maximum correction is applied by adding no artificial noise (which simply corresponds to the stochastic gradient descent algorithm <cit.>).
We also tested the modified variant of SGLD proposed in <cit.>, obtaining comparable results to c-SGLD.
Full details on the SGLD variants, as well as additional simulations for this section can be found in the supplement.
Overall, the results support the fact that the increased robustness of the Barker scheme to target heterogeneity and hyperparameters tuning <cit.> is relevant also in the stochastic gradient context.
§.§ Skewed target distributions
First, we study how skewness in the target distribution affects algorithmic performances.
Skewness naturally generates heterogeneity in the magnitude of the gradient ∂_j g(θ) in different regions of the state space, thus being an interesting test case for SG-MCMC schemes.
In particular, we consider the family of skew-normal target distributions, i.e. π_α(θ) = 2 ϕ(θ) Φ(αθ) where θ∈ℝ, α>0 and ϕ and Φ are the standard normal PDF and CDF,
for different values of the skewness parameter α.
We take η_θ∼ N(0,var(π_α)) and consider two values for the step-size σ, namely σ_1=0.1× sd(π_α) and σ_2=0.5× sd(π_α), where sd(π_α) and var(π_α) denote the standard deviation and variance of π_α.
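For reference, the score of the skew-normal target π_α(θ) = 2ϕ(θ)Φ(αθ) is ∇ log π_α(θ) = -θ + αϕ(αθ)/Φ(αθ); a one-function sketch of the gradient fed to the samplers:

```python
from scipy.stats import norm

def grad_log_skewnorm(theta, alpha):
    """Score of pi_alpha(theta) = 2 * phi(theta) * Phi(alpha * theta)."""
    return -theta + alpha * norm.pdf(alpha * theta) / norm.cdf(alpha * theta)
```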
Figure <ref> displays the results for v-SGBD and v-SGLD.
The results obtained with corrected variants, as well as different noise levels for η_θ, led to similar conclusions and are reported in the supplement.
Figure <ref> reports the bias on the posterior mean estimates as a function of the skewness parameter α, while Figure <ref> displays stationary distribution for two values of α.
The results suggest that SGBD is more robust to skewness relative to
SGLD, which can suffer from high bias in the invariant distribution as α increases.
Also, the two algorithms exhibit a different robustness to hyperparameter tuning, with SGBD displaying more stable performance when the step-size is increased.
§.§ Scale Heterogeneity
Next, we study the performance of SGBD on a binary regression task with scale heterogeneity across parameters.
We consider a Bayesian logistic regression model of the form
y_i| x_i, θ∼Bern((1+e^(-θ^⊤x_i))^-1) i = 1, …, N ,
where x_i ∈ℝ^d and y_i ∈{0, 1}. We use SG-MCMC to target the posterior distribution π(θ)=p(θ| x,y) resulting from (<ref>) and a standard normal prior on θ, i.e. θ∼𝒩_d(0,𝐈_d).
We consider the https://archive.ics.uci.edu/ml/datasets/Sepsis+survival+minimal+clinical+recordsSepsis dataset from the UCI repository, which contains N=110204 instances and d=4 covariates.
Note that this example is low-dimensional, while in the supplement a high-dimensional binary regression data-set is considered.
The Sepsis dataset leads to an ill-conditioned posterior distribution, where
the posterior standard deviation of the first coordinate is much smaller than the others, as shown in Figure <ref>.
While similar issues related to ill-conditioning could be in principle solved by preconditioning, finding good posterior preconditioners before running MCMC is not always easy and in practice SG-MCMC schemes are often used without adaptive preconditioning, also
due to the difficulty in tuning them <cit.>.
In this sense, robustness to scale heterogeneity is a desirable feature for SG-MCMC schemes.
We compare v-SGBD and v-SGLD, with a mini-batch size of n=⌊ 0.01 N⌋ and T=2× 10^5 iterations.
Figure <ref> reports the resulting traceplots in the stationary phase. Black lines correspond to the true posterior mean plus and minus two standard deviations, computed with full-batch MCMC using Stan <cit.>.
Due to ill-conditioning, SGLD struggles to explore the posterior: tuning the step-size to match the first coordinate (Figure <ref> top row) leads to very poor mixing in the other coordinates; while doubling the step-size with respect to that (Figure <ref> bottom row) dramatically inflates the variance of the first component and leads to biased inferences.
Instead SGBD, while still being affected by ill-conditioning, is remarkably more robust to scale heterogeneity (e.g. it simultaneously achieves good mixing and moderate bias in the stationary distribution) as well as to the choice of hyperparameters (e.g. doubling the step-size from top to bottom only leads to a moderate inflation of the marginal stationary distribution in the first component).
The specific values for the hyperparameters used in Figure <ref> are σ=0.00075, 0.0015 for SGBD and σ = 0.0002, 0.0004 for SGLD.
See also the supplement for plots of marginal density estimates.
In the supplement we fit model (<ref>) to the Arrhythmia data-set, which contains 100 features and 362 training data points. In this example, SGLD is more accurate for small step-sizes and low mixing, while SGBD appears to be more robust to hyperparameter tuning. In particular,
it enjoys a more favourable mixing-accuracy trade off when the step size is chosen increasingly large. In addition, SGBD obtains a better prediction accuracy on the hold-out set for all hyperparameter configurations considered.
§.§ Bayesian Matrix Factorization
We consider a Bayesian matrix factorization model <cit.> for recommendation. We have U users rating M items. The (potentially sparse) matrix of ratings 𝐑=(R_ij)_ij, where R_ij corresponds to the rating of user i to item j, is modeled as
R_ij |𝐔, 𝐕, α, I_ij=1 ∼𝒩(𝐔_i^⊤𝐕_j, α^-1)
𝐔_i |μ_𝐔, Λ_𝐔∼𝒩_p(μ_𝐔, Λ_𝐔^-1)
i = 1, …, U
𝐕_j |μ_𝐕, Λ_𝐕∼𝒩_p(μ_𝐕, Λ_𝐕^-1)
j = 1, …, M
where 𝐔∈ℝ^U × p, 𝐕∈ℝ^M × p and I_ij=1 if user i has rated item j.
We adopt the hyperprior
μ_𝐙|μ_0, Λ_𝐙∼𝒩_p(μ_0, Λ_𝐙^-1)
where Λ_𝐙 = diag(λ_𝐙,1, …, λ_𝐙,p), λ_𝐙,j∼Γ(a_0, b_0) for j=1, …, p and 𝐙 =(𝐔, 𝐕).
We consider the https://grouplens.org/datasets/movielens/100kMovieLens dataset containing 10^5 ratings (taking values in {1,2,3,4,5}) of 1000 users on 1700 movies. Taking p=20 and applying a 80%-20% train-test split, we obtain a 54080-dimensional target posterior distribution p(𝐔, 𝐕, μ_𝐔, μ_𝐕, Λ_𝐔, Λ_𝐕|𝐑). We set α=3, μ_0=0, a_0=1 and b_0=5 and use a mini batch size of n=N/100=800.
Figure <ref> compares v-SGBD, e-SGBD, v-SGLD and e-SGLD in terms of predictive performances, measured in root mean squared errors (rMSE; see supplement for explicit definition).
We consider both predictions obtained using single MCMC samples at a given iteration, as well as
ones obtained using MCMC ergodic averages.
In general, SGBD outperforms SGLD in all variants. e-SGBD converges faster and single samples have a better predictive accuracy. However v-SGBD ergodic average estimates have the best overall predictive accuracy. For all schemes, the step-size maximising predictive performances was selected over a grid.
§.§ Independent Component Analysis
We consider an independent component analysis <cit.> model, where the likelihood of each datapoint x_i=(x_ij)_j=1,…,p∈ℝ^p for i=1,…,N is
p(x_i| W) ∝ | W| ∏_k=1^pcosh(0.5∑_j=1^pw_kj x_ij)^-2
and each entry of W=(w_ij)_i,j=1,…,p is assigned a standard normal prior, i.e. w_ijiid∼ N(0,1).
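As an illustration, the log-posterior (up to an additive constant) implied by this likelihood and prior, on which the stochastic gradients are based, can be sketched as:

```python
import numpy as np

def ica_log_post(W, X):
    """log p(W | X) up to a constant: N*log|det W|
    - 2 * sum_i sum_k log cosh(0.5 * w_k^T x_i) - 0.5 * ||W||_F^2 (prior)."""
    S = X @ W.T                                   # S[i, k] = w_k^T x_i
    _, logabsdet = np.linalg.slogdet(W)
    return (X.shape[0] * logabsdet
            - 2.0 * np.sum(np.log(np.cosh(0.5 * S)))
            - 0.5 * np.sum(W**2))
```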
We apply the model to the http://research.ics.aalto.fi/ica/eegmeg/MEG_data.htmlMEG data collected by the Brain Research Unit, from the Helsinki University of Technology, which contains N=17730 data points of dimensionality 100. To perform our experiments, we extract the first 10 channels (i.e. p=10) and use SG-MCMC to sample from the resulting 100-dimensional posterior distribution p(W|(x_i)_i=1,…,n).
We perform a 80%-20% train-test split and compare log-likelihood on the unseen test set.
We run the samplers for T=4× 10^4 iterations choosing a batch-size of n=100.
We test the performance on the unseen test set both by computing the log-likelihood produced by each sample, as well as using the one produced by the ergodic averages of W, see the supplement for explicit definitions.
Results are reported in Figure <ref>.
In this example optimal performances were obtained with very small stepsizes, both for SGLD and SGBD,
and in such setting the difference between the two algorithms is more limited.
Overall, the c-SGBD estimates produce higher levels of log-likelihood on unseen data.
§ CONCLUSION
In this paper, we extended the Barker proposal to the stochastic gradient setting, leading to the SGBD algorithm.
We studied the bias induced by stochastic gradient noise in the Barker proposal and developed strategies to remove it for small noise levels, or to minimize it for larger ones.
We then compared SGBD to SGLD numerically on simulated and real datasets. Results suggest that, while the two algorithms have similar performances for small step-sizes, SGBD is more robust to hyperparameters choice (thus potentially allowing for larger values to be used in practice) and to heterogeneity in the target gradients (arising from e.g. skewness or ill-conditioning).
Overall, SGBD represents a valid alternative to SGLD with minimal algorithmic change and appealing robustness.
There are many directions for future research, including: adding momentum to improve efficiency (similarly to, e.g., SGHMC <cit.>); developing adaptive variants that optimally tunes the step-size across iterations; characterizing more explicitly how the invariant distribution is affected by the choice of the step-size; directly studying impact on predictive accuracy;
further exploring the connection with optimization schemes as well as the use of SGBD for optimization purposes.
§.§.§ Acknowledgements
GZ acknowledges support from the European Research Council (ERC),
through StG “PrSc-HDBayLe” grant ID 101076564. LM was partially supported by the National Institutes of Health, grant ID R01ES035625.
Supplementary Materials
§ ALGORITHMS
In this section we provide the pseudocode for the e-SGBD, v-SGLD and c-SGLD algorithms described in Section <ref> of the paper.
§ PROOFS AND ADDITIONAL LEMMAS
§.§ Proof of Proposition <ref>
We first state two auxiliary lemmas.
Let (δ, z) ∈ℝ^2 and η≠ 0.
Then
1/2(p(δ - η, z) + p(δ+ η, z) )≤ p(δ, z) if and only if z δ≥ 0, with equality only if zδ = 0.
We have
2 p(δ, z) - (p(δ - η, z) + p(δ + η, z)) = 2(1 + e^-zδ)^-1 - ((1 + e^-zδ + zη)^-1 + (1 + e^-zδ - zη)^-1) .
Define c = e^-zδ.
Expanding the right hand side of (<ref>), we obtain
2(1 + c)^-1 - ((1 + c e^+ zη)^-1 + (1 + c e^- zη)^-1)
= c (1-c) ((e^-z η + e^z η) -2)/((1 + c)(1+ c e^- z η)(1+ c e^+ z η))
Let (δ, z) ∈ℝ^2 and η≠ 0.
Then
1/2(p(δ - η, z) + p(δ + η, z)) ≥ 0.5 if and only if z δ≥ 0, with equality only if zδ = 0.
Define c:= e^-zδ and consider the following:
1/2(p(δ+η, z) + p(δ-η, z) ) = 1/2((1 +c e^-zη)^-1 + (1+c e^zη)^-1)
= (2 + c (e^- z η + e^+z η))/(2(1+ c e^- z η)(1+ c e^+ z η))
= (2 + c (e^- z η + e^+z η))/(2 + 2 c (e^- z η + e^+z η) + 2c^2)
The result follows by noting that c ^2 ≤ 1 if and only if zδ≥ 0, with strict inequality if zδ>0, and (e^-z η + e^z η) -2 ≥ 0 by Jensen's inequality with strict inequality if z ≠ 0.
We can now prove Proposition <ref>.
Consider first the case z∂_j g(θ)>0 and τ_θ>0.
Denoting the distribution of η_θ by P_η_θ, we have
𝔼[p(∂̂_j g(θ), z)] (I)=∫_0^+∞( p(∂_j g(θ)+η_θ, z) + p(∂_j g(θ)-η_θ, z) ) dP_η_θ(η_θ)
= ∫_0^+∞((1+ e^- z (∂_j g(θ) +η_θ))^-1 + (1+ e^- z (∂_j g(θ) -η_θ))^-1) dP_η_θ(η_θ)
(II)≤∫_0^+∞ 2(1+ e^- z ∂_j g(θ))^-1 dP_η_θ(η_θ)
(III)=∫_-∞^+∞(1+ e^- z ∂_j g(θ))^-1 dP_η_θ(η_θ)
= p(∂_j g(θ), z),
where (I) and (III) follow from the symmetry assumption in Condition <ref>, and (II) follows from Lemma <ref>.
Moreover, z∂_j g(θ)>0 implies p(∂_j g(θ), z)>0.5 and 1/2(p(∂_j g(θ)+η_θ, z) + p(∂_j g(θ)-η_θ, z) )> 0.5 for all values η_θ≠ 0 by Lemma <ref>.
It follows, again using the symmetry assumption in Condition <ref>, that
0.5 < 𝔼[p(∂̂_j g(θ), z)] < p(∂_j g(θ), z).
Using the same argument, it is easy to show that the reverse inequalities hold if z∂_j g(θ)<0, leading to |p(∂_j g(θ), z) - 0.5| > |𝔼[p(∂̂_j g(θ), z)] - 0.5| as desired.
Finally, if z∂_j g(θ)=0 or τ_θ = 0, we have 𝔼[ p (∂_j ĝ(θ), z)]=p(∂_j g(θ), z)=0.5.
§.§ Proof of Proposition <ref>
Let F(x)=(1+exp(-x))^-1 be the CDF of the logistic distribution and Φ the one of the standard Normal distribution.
Then we have |F(x) - Φ(x/1.702) |< 0.0095 for all x ∈ℝ, see e.g. <cit.>.
By definition of p, this implies
|p(δ,z) - Φ(zδ/1.702) | < 0.0095 δ,z ∈ℝ
and, as a result,
|𝔼[p(∂̂_j g(θ), z) - Φ(z ∂̂_j g(θ)/1.702)]| < 0.0095.
Hence, |𝔼[p(∂̂_j g(θ), z)] - Φ(z∂_j g(θ)/√(1.702^2+z^2τ_θ^2))| < 0.0095, noting that
𝔼[Φ(z ∂̂_j g(θ)/1.702)] = Φ(z ∂_j g(θ)/√(1.702^2+z^2τ_θ^2)) ,
which follows by ∂̂_j g(θ) ∼𝒩(∂_j g(θ), τ_θ^2) and standard properties of the normal distribution.
The result in Equation (<ref>) follows using the bound (<ref>) with an application of the triangle inequality.
§.§ Proof of Corollary <ref>
Consider first the case when τ_θ < 1.702/|z|.
Then, replacing z with zα in Proposition <ref>, we obtain
|𝔼[p(∂̂_j g, α z)] - p(1.702 α/√(1.702^2+z^2τ_θ^2α^2)∂_j g(θ), z)| < 0.019.
Taking α = 1.702/√(1.702^2-τ_θ^2z^2), we have p(∂̂_j g, α z) = p̃(∂̂_j g, z) and the second term in the absolute value on the left hand side of (<ref>) simplifies to p(∂_j g(θ), z) leading to the desired result.
Consider now the case when τ̅(∂_j g(θ), z) ≥ 1.702/|z| and τ_θ∈[1.702/|z|, τ̅(∂_j g(θ), z) ]. In this case, we have p̃(∂̂_j g(θ), z) = 1(∂̂_j g(θ)z>0).
We first consider the case when z∂_j g(θ)>0.
Under Condition <ref>, we have
𝔼[p̃(∂̂_j g, z)] = Φ(|∂_jg(θ)|/τ_θ).
The right hand side of (<ref>) is a strictly decreasing function of τ_θ, hence for τ_θ∈[1.702/|z|, τ̅(∂_j g(θ), z)] it can be lower and upper bounded as
Φ(|∂_jg(θ)|/τ̅(∂_j g(θ), z) ) ≤Φ(|∂_jg(θ)|/τ_θ) ≤Φ(∂_jg(θ)z/1.702).
Define D(x) := Φ(x/1.702) - (1 + e^-x)^-1 and expand the upper bound in (<ref>) as
Φ(∂_jg(θ)z/1.702) = (1 + e^-∂_jg(θ)z)^-1 + D(∂_jg(θ)z) ≤ p(∂_jg(θ), z) + 0.0095,
where the inequality follows from the definition of p and |D(x)| < 0.0095 for all x ∈ℝ <cit.>.
Moreover, note that, when z∂_jg(θ)>0, the lower bound in (<ref>) is equal to p(∂_jg(θ), z) by the definition of τ̅(∂_j g(θ), z), i.e.
Φ(|∂_jg(θ)|/τ̅(∂_j g(θ), z) ) = p(∂_jg(θ), z).
Thus, combining all of the above, we get
p(∂_jg(θ), z) ≤𝔼[p̃(∂̂_j g(θ), z)] ≤ p(∂_jg(θ), z) + 0.0095.
When ∂_jg(θ)z≤ 0, it is easy to show that the reverse inequalities hold, leading to the following bound:
p(∂_jg(θ), z) -0.0095 ≤𝔼[p̃(∂̂_j g, z)] ≤ p(∂_jg(θ), z).
§.§ Proof of Proposition <ref>
To prove the statement, consider the expectation of the extreme-estimator defined in Section <ref> of the paper, p̅,
𝔼[p̅(∂̂_jg(θ), z)] = Φ(τ_θ^-1|∂_j g(θ)|sgn(z∂_j g(θ)) ).
Note that the definition of τ̅(∂_j g(θ), z) in (<ref>) implies that
Φ(|∂_j g(θ)|sgn(z∂_j g(θ)) /τ̅(∂_j g(θ), z) ) = p(∂_jg(θ), z).
Consider the case when τ_θ > τ^*. Since τ̅( δ, z) is a continuous function of δ, there exists δ̅∈ℝ such that τ^* < τ̅(δ̅, z) < τ_θ.
Notice that the right hand side of (<ref>) is decreasing in τ_θ if z∂_j g(θ) >0 and increasing otherwise.
Hence, for the gradient value δ̅ and noise standard deviation τ_θ > τ̅(δ̅, z), we obtain
|𝔼[p̅(∂̂_jg(θ), z)] - 0.5| < |p(δ̅, z) - 0.5|.
By Proposition <ref>, we know that, for any other symmetric estimator p̂ satisfying (<ref>), we have
|𝔼[p̂(∂̂_jg(θ), z)] - 0.5| ≤ |𝔼[p̅(∂̂_jg(θ), z)] - 0.5|.
Hence, no unbiased symmetric estimator exists for τ_θ > τ^*.
Finally, note that the absence of any unbiased symmetric estimator implies the absence of any unbiased estimator, since any unbiased estimator needs to be symmetric.
Indeed, assume by contradiction that p̂(δ̂, z) is unbiased for p(δ, z) when δ̂∼𝒩(δ, τ_θ^2) but is not symmetric, i.e. p̂(δ̂, z) + p̂(-δ̂, z) ≠ 1 for some δ̂∈ℝ. Consider z fixed and define h(δ̂) = p̂(δ̂, z) + p̂(-δ̂, z) - 1. Unbiasedness implies 𝔼[p̂(δ̂, z) + p̂(-δ̂, z)] = p(δ, z) + p(-δ, z) = 1, or equivalently
𝔼[h(δ̂)] = 0 with δ̂∼𝒩(δ, τ_θ^2), for all δ∈ℝ. Since δ̂ is a complete sufficient statistic for δ under δ̂∼𝒩(δ, τ_θ^2), the latter condition implies h(δ̂) ≡ 0, yielding a contradiction.
§.§ Proof of Proposition <ref>
The case z=0 is trivial.
We prove the result when z>0, the case z<0 is analogous.
For any symmetric estimator p̂ satisfying (<ref>), we have
𝔼[p̂(∂̂_j g(θ), z)] = ∫_-∞^+∞p̂(∂_j g(θ) + η_θ, z) f_θ(η_θ) dη_θ
(I)=∫_-∞^+∞p̂(ε, z) f_θ(ε - ∂_j g(θ) ) dε
=
∫_0^+∞p̂(ε, z) f_θ(ε - ∂_j g(θ) ) dε
+
∫_-∞^0p̂(ε, z) f_θ(ε - ∂_j g(θ) ) dε
(II)=∫_0^+∞p̂(ε, z) f_θ(ε - ∂_j g(θ) ) dε
+
∫_0^+∞p̂(-ϵ, z) f_θ(-ϵ - ∂_j g(θ) ) dϵ
(III)=∫_0^+∞p̂(ε, z) f_θ(ε - ∂_j g(θ) ) dε
+
∫_0^+∞p̂(-ϵ, z) f_θ(ϵ + ∂_j g(θ) ) dϵ
where (I) and (II) follow by changes of variables (ε := η_θ + ∂_j g(θ) and ϵ := -ε, respectively), and (III) follows by Conditions 1 and 3, which imply f_θ(x) = f_θ(-x) for all x ∈ℝ.
The expected value of the extreme-estimator p̅ is given by:
𝔼[p̅(∂̂_j g(θ),z)] = ∫_-∞^+∞p̅(∂_j g(θ) + η_θ, z)f_θ(η_θ)dη_θ
=∫_-∂_j g(θ)^+∞ 1 f_θ(η_θ)dη_θ
(I)=∫_0^+∞
1 f_θ(ϵ -∂_j g(θ))dϵ
(II)=∫_0^+∞p̂(ϵ,z)
f_θ(ϵ -∂_j g(θ)) dϵ
+
∫_0^+∞p̂(-ϵ,z)f_θ(ϵ -∂_j g(θ)) dϵ
where (I) follows from the change of variable ϵ:= η_θ + ∂_j g(θ) and (II) follows from (<ref>).
Thus
𝔼[p̅(∂̂_j g(θ),z)] - 𝔼[p̂(∂̂_j g(θ), z)] = ∫_0^+∞p̂(-ϵ,z) (f_θ(ϵ -∂_j g(θ))-f_θ(ϵ + ∂_j g(θ) )) dϵ.
Moreover, note that by Conditions 1 and 3, if ∂_j g(θ) >0 we have f_θ(ϵ + ∂_j g(θ) )< f_θ(ϵ - ∂_j g(θ) ), while if ∂_j g(θ) <0 we have f_θ(ϵ + ∂_j g(θ) )> f_θ(ϵ - ∂_j g(θ) ), for every ϵ >0.
Hence, the right hand side of equation (<ref>) is greater than 0 if ∂_j g(θ) >0 and smaller than 0 otherwise, with equality holding if and only if p̂(-ϵ, z) = 0 for all ϵ > 0, which holds if and only if p̂ = p̅, excluding the trivial estimator p̂≡ 1.
§.§ Proof of Corollary <ref>
By Proposition <ref>, we have 𝔼[p̅(∂̂_j g(θ), z)]> 𝔼[p̂(∂̂_j g(θ), z)] if ∂_j g(θ) z>0, while the reverse inequality holds if ∂_j g(θ) z<0.
Moreover, by (<ref>), when τ_θ>τ̅(∂_j g(θ),z) the expectation of p̅(∂̂_j g(θ), z) is shrunk towards 0.5 compared to p(∂_j g(θ), z). Thus, for ∂_j g(θ)z>0, we have
0.5 < 𝔼[p̂(∂̂_j g(θ), z)] < 𝔼[p̅(∂̂_j g(θ), z) ] < p(∂_j g(θ), z),
while the reverse inequalities hold when ∂_j g(θ)z<0, proving the result.
§ CONNECTION BETWEEN SGBD AND SGLD NOISE TOLERANCE
One can compare the noise tolerance of SGBD with the one of SGLD. Consider the recursion of SGLD
θ^(t+1) = θ^(t) + σ^2/2∇̂g(θ^(t)) + z, z ∼𝒩(0, σ^2).
Under Condition <ref>, this recursion is equivalent to
θ^(t+1) = θ^(t) + σ^2/2∇ g(θ^(t)) + z̃, z̃∼𝒩(0, σ^2 +(σ^2/2)^2 τ_θ^2).
When τ_θ≤ 2σ^-1, it is possible to correct the recursion exactly by reducing the variance of the artificial noise: namely, if z ∼𝒩(0,σ^2 - (σ^2/2)^2 τ_θ^2 ), we obtain back the exact unadjusted Langevin proposal, i.e. θ^(t+1)∼𝒩( θ^(t) + σ^2/2∇ g(θ^(t)) , σ^2).
On the contrary, if τ_θ > 2σ^-1, simple variance arguments show that there is no distribution for the noise z (assuming it to be independent from η_θ) such that the resulting proposal coincides with the exact unadjusted Langevin one.
In this sense, the noise tolerance of SGLD under Condition <ref> is τ_SGLD^* = 2σ^-1.
Contrary to that of SGBD, this value depends directly on the hyperparameter σ and not on the sampled increment z.
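A sketch of the resulting corrected SGLD update follows (illustrative; for τ_θ > 2σ^-1 no exact correction exists and the update below degenerates to plain stochastic gradient descent):

```python
import numpy as np

def c_sgld_step(theta, g_hat, sigma, tau, rng):
    """Corrected SGLD: shrink the injected Gaussian noise so the total
    proposal variance matches the exact unadjusted Langevin proposal."""
    var = sigma**2 - (sigma**2 / 2.0) ** 2 * tau**2
    noise_sd = np.sqrt(max(var, 0.0))             # zero noise when tau > 2/sigma
    return theta + 0.5 * sigma**2 * g_hat + noise_sd * rng.normal(size=theta.shape)
```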
However, there are similarities with the one of SGBD identified in Section <ref> of the paper.
In particular,
the size of the proposed increment under μ_σ is of order σ (consider e.g. the choice μ_σ = 0.5N(-σ,(0.1σ)^2)+0.5N(σ,(0.1σ)^2)), meaning that we can roughly interpret the noise tolerance of SGBD under Condition <ref> as being τ^* ≈ 1.596σ^-1.
This value is similar to the one of SGLD and exhibit the same dependance with respect to σ, despite the constant in front being slightly smaller.
This rough analysis suggests that the amount of Gaussian noise the two algorithms can tolerate while being able to exactly recover the original proposal is similar.
On the other hand, one can expect that, due to the non-linear use of gradients in SGBD, the two algorithms will exhibit significantly different behavior in the case of noise η_θ with larger variance and/or an heavier-tail distribution, with SGBD providing a more and stable robust behaviour (e.g. in the sense of resulting in a proposal closer to the original exact unadjusted scheme).
We illustrate this phenomenon numerically in this section, leaving a more detailed theoretical analysis of this behaviour to future work.
We consider a standard Normal target, i.e. π(θ) = ϕ(θ) and add to the true gradient symmetric noise with Laplace or Cauchy distribution and scale parameter τ_θ.
Compared to the example presented in Section <ref>, this setting allows to focus on the robustness to heavy-tailed gradient noise rather than skewness in the target distribution.
We consider two values for the step-size σ, namely σ_1 = 0.1 and σ_2 = 0.5, and run v-SGBD and v-SGLD for T=2× 10^5 iterations, discarding the first half as burn-in.
Figure <ref> shows the increase in the bias of the 95^th quantile of the invariant distribution relative to π, when η_θ∼𝐋𝐚𝐩𝐥𝐚𝐜𝐞(0, τ_θ) as τ_θ increases, and Figure <ref> reports the density estimates when τ_θ = e^1.5 - 1. With Laplace distributed noise, v-SGLD and v-SGBD with a small step-size behave similarly. However, v-SGBD is more robust than v-SGLD when the step-size is increased and exhibits a smaller bias.
Figure <ref> shows the same plots, where the noise is Cauchy distributed, i.e. η_θ∼𝐂𝐚𝐮𝐜𝐡𝐲(0, τ_θ). In this scenario, v-SGBD is dramatically more robust than v-SGLD.
§ ADDITIONAL EXPERIMENTS
We report the additional simulations details and results. All experiments were run on a laptop with 11th Gen Intel(R) Core(TM) i7-1165G7 2.80 GHz using R version 4.3.1.
§.§ Empirical Simulation Of The Correction for p
We present additional details for Figure <ref> in the paper.
Figure <ref> reports the same plot for three additional coordinates.
We take the last sample from the v-SGBD chain used to produce Figure <ref> in the paper; we then repeatedly subsample a mini-batch, store the gradient for each coordinate, estimate its standard deviation, and compute p̂ and p̃.
Figure <ref> reports the resulting Monte Carlo average of p̂ and p̃ as the value of the proposed increment varies.
The figure was produced with the following values: (a) ∂_j g(θ) = -25.15, τ_θ = 22.72, (b) ∂_j g(θ) = 40.39, τ_θ = 25.63, (c) ∂_j g(θ) = -23.21, τ_θ = 22.88
(d) ∂_j g(θ) = -13.03, τ_θ = 20.27.
§.§ Toy Example: Skew-Normal target with isotropic Gaussian Noise
Figure <ref> reports the results obtained also using corrected variants of the algorithms and with a different value for the standard deviation of η_θ for the experiment presented in Section <ref> of the paper.
With a gradient noise standard deviation equal to the one of the target, we observe little difference between vanilla and corrected variants of the algorithms (Figure <ref>).
When the standard deviation is large, i.e. τ_θ = 10× sd(π_α), corrected variants achieve a lower bias than their vanilla counterparts with a large step-size (Figure <ref>). Under all configurations, SGBD displays increased robustness to skewness and to tuning with respect to SGLD.
Next, we perform a sensitivity analysis to the step-size choice. Figure <ref> reports the density estimates of the samples obtained via v-SGBD with the step-size respectively equal to 0.05× sd(π_α), 0.1× sd(π_α), 0.5× sd(π_α), and 0.75× sd(π_α), for α=20, where sd(π_α) denotes the standard deviation of the target distribution. v-SGBD appears to be very robust to the step-size choice, and only slightly inflates the scale of the target distribution under the configuration with the largest step-size.
§.§ Binary Regression With Scale Heterogeneity
Figure <ref> shows the density estimates for all coordinates of the posterior samples obtained in the experiment in Section <ref>, and Figure <ref> shows the corresponding trace plots.
Figure <ref> reports the density estimates for all coordinates of the posterior samples from a second experiment where we tuned the step-sizes via MAMBA <cit.>. MAMBA uses a multi-armed bandits algorithm to minimize the Finite Set Stein Discrepancy <cit.> between true posterior and its Monte
Carlo approximation.
MAMBA selects a step-size equal to 3.24 × 10^-4 and 1.36 × 10^-3 for v-SGLD and v-SGBD respectively. In general, the chosen step-sizes match the largest scale of the coordinates but are too large for the first one. We note that SGBD outperforms SGLD in particular for the first coordinate where SGLD remarkably inflates its variance.
Figure <ref> reports the density estimates for all coordinates of the posterior samples from a third experiment where we selected a different step-size for each coordinate. In particular, we are interested in the case where a diagonal preconditioner is applied, and we set σ_1,j=0.1× sd(π_j) and σ_2,j=0.2× sd(π_j), for j=1, …, 4, where sd(π_j) denotes the standard deviation of the posterior distribution of θ_j. The aim of this experiment is to study the performance of the algorithms with a correct tuning of the step-size across coordinates. We note, however, that this is in general not easy to do in practice, as it requires access to posterior quantities which are unknown a priori and usually estimated with MCMC.
In this scenario, clearly both algorithms perform better than in the previous one and sample accurately with small step-sizes (dotted lines in Figure <ref>). With a larger step-size (dashed lines in Figure <ref>), the samplers moderately inflate the variance of the marginals distribution with SGBD performing slightly better.
§.§ Binary Regression With High-Dimensional Predictors
This section studies the performance of SGBD on ill-conditioned high dimensional logistic regression task using model (<ref>).
We apply the model to the https://archive.ics.uci.edu/ml/datasets/arrhythmiaArrhythmia dataset from the UCI repository. The dataset contains 452 instances and 279 covariates, from which we retain the first 100. A random 80-20% train-test split is applied to the dataset, and we run the samplers for T=10^5 iterations, discarding the first half as burn-in, using a mini-batch size of n=34.
We are interested in how hyperparameter tuning affects the sampling accuracy of the algorithm.
In particular, we study the trade-off between mixing and sampling accuracy, since increasing the step size of the algorithms produces better mixing but less accurate chains, as no MH step is used.
Figure <ref> shows how sampling accuracy decreases as mixing increases when the step sizes vary.
Mixing is measured with the median effective sample size (ESS) across the parameters and sampling accuracy with the mean standardized 1^st and 2^nd order bias,
which are defined as follows:
Bias(𝔼 [θ_j |{y_i}_i=1^n]) = |θ̅_j - 𝔼 [θ_j |{y_i}_i=1^n]|/(𝕍 [θ_j |{y_i}_i=1^n])^1/2
Bias(𝕍 [θ_j |{y_i }_i=1^n]) = |τ_θ_j^2 - 𝕍 [θ_j |{y_i }_i=1^n]|/(𝕍 [θ_j |{y_i }_i=1^n])^1/2 j=1, … d,
where θ̅_j = ∑_t=1^T θ_j^(t)/T and τ^2_θ_j = ∑_t=1^T(θ_j^(t) - θ̅_j)^2/T-1.
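For concreteness, the two diagnostics can be computed with a few lines of NumPy. This is a minimal sketch, assuming samples is a (T, d) array of post-burn-in draws and that reference posterior moments post_mean and post_var (in practice estimated from a long exact-MCMC run) are available; all names are ours:

import numpy as np

def standardized_biases(samples, post_mean, post_var):
    # samples: (T, d) post-burn-in draws; post_mean, post_var: (d,) reference moments
    theta_bar = samples.mean(axis=0)                           # per-coordinate sample mean
    tau2 = samples.var(axis=0, ddof=1)                         # per-coordinate sample variance
    bias1 = np.abs(theta_bar - post_mean) / np.sqrt(post_var)  # standardized 1st-order bias
    bias2 = np.abs(tau2 - post_var) / post_var                 # standardized 2nd-order bias
    return bias1.mean(), bias2.mean()                          # mean across coordinates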
The results in Figure <ref> suggest that SGLD is more accurate for small step-sizes and low mixing, while SGBD appears to be more robust to hyperparameter tuning. In particular, it enjoys a more favourable mixing-accuracy trade-off when the step size is chosen increasingly large.
Figure <ref> reports the predictive performance in terms of log-loss on the held-out test set, with hyperparameters chosen such that the median ESS of the samples is roughly equal to 1000: σ=0.25, 0.22, 0.07 for v-SGBD, c-SGBD and e-SGBD, and σ=0.14, 0.11, 0.09 for v-SGLD, c-SGLD and e-SGLD. The log-loss at iteration t is defined as
l(t) = -1/|𝒯|∑_i ∈𝒯[ y_i log(p̂(𝐱_i, θ)^(t)) + (1- y_i) log(1 - p̂(𝐱_i, θ)^(t)) ],
where 𝒯 is the test set, |𝒯| denotes its size, and p̂(𝐱_i, θ)^(t) = 1/t∑_k=1^t(1+e^-𝐱_i^⊤θ^(k))^-1 is the ergodic average of the estimated probability of Y_i = 1 given the predictors and the samples of the parameter θ.
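The running quantity l(t) can be computed in vectorized form. A sketch under our own naming conventions, where X_test, y_test and the (T, d) array thetas of parameter draws are assumed inputs:

import numpy as np

def running_log_loss(X_test, y_test, thetas):
    # per-draw success probabilities, shape (m, T)
    probs = 1.0 / (1.0 + np.exp(-X_test @ thetas.T))
    # ergodic average of the probabilities over the first t draws, for every t
    p_hat = np.cumsum(probs, axis=1) / np.arange(1, thetas.shape[0] + 1)
    eps = 1e-12  # numerical guard against log(0)
    terms = (y_test[:, None] * np.log(p_hat + eps)
             + (1.0 - y_test)[:, None] * np.log(1.0 - p_hat + eps))
    return -terms.mean(axis=0)  # entry t-1 equals l(t)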
SGBD outperforms SGLD in terms of predictive accuracy.
Figure <ref> reports how the log-loss computed using all the samples from each chain varies for each hyperparameter configuration.
In general, SGBD achieves better predictive accuracy than SGLD across the different step-size configurations.
§.§ Additional details for the numerics in Sections <ref> and <ref>
This section reports additional details about the experiments in Section <ref> and <ref> of the paper.
In Section <ref>, we run the algorithms with the following hyperparameter values: σ=0.022 and σ=0.005 for v-SGLD and e-SGLD, and σ=0.011 and σ=0.0105 for v-SGBD and e-SGBD.
The rMSE reported in Figure <ref> is computed after clipping the ratings predicted values at 1 and 5.
In particular, the sample rMSE (s-rMSE) at iteration t is computed as
s-rMSE(t)= √(1/|𝒯|∑_(i, j) ∈𝒯(R_ij - R̂_ij^(t))^2), R̂_ij^(t) =
1 if R̃_ij^(t) <1,
5 if R̃_ij^(t) >5,
R̃_ij^(t) otherwise,
where R̃_ij^(t) = 𝐔_i^(t)𝐕_j^(t), 𝒯 is the test set and |𝒯| denotes its size.
The rMSE of the ergodic average of the predictions (e-rMSE) at iteration t is computed as
e-rMSE(t)= √(1/|𝒯|∑_(i, j) ∈𝒯(R_ij - R̅̂̅_ij^(t))^2),
where R̅̂̅_ij^(t) = 1/t∑_k=1^t R̂_ij^(k) is the ergodic average of the predictions for R_ij.
Similar results were obtained without clipping the predictions.
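Both error curves are straightforward to reproduce. A sketch, assuming R_true holds the test ratings and R_tilde_chain the raw predictions 𝐔_i^(t)𝐕_j^(t) along the chain (array and function names are ours):

import numpy as np

def s_rmse(R_true, R_tilde):
    R_hat = np.clip(R_tilde, 1.0, 5.0)   # clip predictions to the rating range [1, 5]
    return np.sqrt(np.mean((R_true - R_hat) ** 2))

def e_rmse(R_true, R_tilde_chain):
    # R_tilde_chain: (T, n_test); running (ergodic) average of the clipped predictions
    R_hat = np.clip(R_tilde_chain, 1.0, 5.0)
    R_bar = np.cumsum(R_hat, axis=0) / np.arange(1, R_hat.shape[0] + 1)[:, None]
    return np.sqrt(np.mean((R_true[None, :] - R_bar) ** 2, axis=1))  # entry t-1 = e-rMSE(t)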
In Figure <ref> of Section <ref>, the sample mean log-likelihood (s-ℒ) at iteration t is computed as
s-ℒ(t) = 1/|𝒯|∑_i ∈𝒯log(p(x_i | W^(t)))
and the mean log-likelihood of the ergodic average of W
(e-ℒ) at iteration t is given by
e-ℒ(t)= 1/|𝒯|∑_i ∈𝒯log(p(x_i | W̅^(t))),
where W̅^(t) = 1/t∑_k=1^t W^(k), 𝒯 is the test set and |𝒯| denotes its size.
We chose the step-sizes by optimizing the predictive performance of the ergodic average of the samples of W over a grid, obtaining σ=0.0110 for v-SGBD, σ=0.0084 for c-SGBD and e-SGBD, σ=0.0070 for v-SGLD and c-SGLD, and σ=0.0063 for e-SGLD.
|
http://arxiv.org/abs/2405.08951v1 | 20240514203750 | Applications of Fast Magnetic Reconnection Models to the Atmospheres of the Sun and Protoplanetary Disks | [
"Fulvia Pucci",
"Alkendra Singh",
"Uma Gorti",
"Marco Velli",
"Neal Turner",
"Disha Varshney",
"Maria Elena Innocenti"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.EP",
"physics.plasm-ph",
"physics.space-ph"
] |
fulvia.e.pucci@jpl.nasa.gov
1Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91109, USA
2Plasma Astrophysics Research Laboratory, Department of Physics, Institute of Science, BHU, Varanasi 221005, India
3Astronomical Observatory, Graduate School of Science, Kyoto University , Yamashina, Kyoto 607-8471, Japan
4Department of Earth, Planetary, and Space Sciences, UCLA, 595 Charles Young Drive East, Los Angeles, California 90095, USA
5NASA Ames Research Center, MS 245-1, PO Box 1, Moffett Field, California 94035, USA
6SETI Institute, 339 Bernardo Ave Suite 200, Mountain View, California 94043, USA
7Institut für Theoretische Physik, Ruhr-Universität Bochum, Bochum, Germany
Partially-ionized plasmas consist of charged and neutral particles whose mutual collisions modify magnetic reconnection compared with the fully-ionized case. The collisions alter the rate and locations of the magnetic dissipation heating and the distribution of energies among the particles accelerated into the non-thermal tail. We examine the collisional regimes for the onset of fast reconnection in two environments: the partially-ionized layers of the solar atmosphere and the protoplanetary disks that are the birthplaces for planets around young stars. In both these environments, magnetic nulls readily develop into resistive current sheets in the regime where the charged and neutral particles are fully coupled by collisions, but the current sheets quickly break down under the ideal tearing instability. The current sheets collapse repeatedly, forming magnetic islands at successively smaller scales, till they enter a collisionally-decoupled regime where the magnetic energy is rapidly turned into heat and charged-particle kinetic energy. Small-scale, decoupled fast reconnection in the solar atmosphere may lead to preferential heating and energization of ions and electrons that escape into the corona. In protoplanetary disks such reconnection causes localized heating in the atmospheric layers that produce much of the infrared atomic and molecular line emission observed with the Spitzer and James Webb Space Telescopes.
§ INTRODUCTION
Magnetic reconnection plays a key role in the solar atmosphere, from the photosphere and chromosphere to filaments and prominences, as well as in the interstellar medium with its star-forming molecular clouds, and the planet-forming protoplanetary disks that orbit young stars (e.g. and references therein), among many other environments. Reconnection in stars and accretion disks contributes to the development of their hot coronae, the dynamo regeneration of their magnetic fields, and the launching of their supersonic winds (e.g. and references therein). In many of these environments, the plasma is only partially ionized. The substantial neutral fraction alters the rates of both reconnection <cit.> and the instabilities that can disrupt the current sheets (CSs) where reconnection takes place <cit.>. Thus the partially-ionized nature of the plasma must be part of attempts to understand these systems.
Through the solar photosphere and chromosphere the plasma density varies sharply with height and the ionization degree of hydrogen varies from about χ∼ 10^-4 in the photosphere to χ∼1 at the top of the chromosphere (). A wide variety of dynamical events which characterize these regions are attributed, at least in part, to reconnection:
chromospheric jets (),
Ellerman Bombs (), and
type II white light flares (e.g., ).
Recently, improvements in the angular resolution of solar telescopes have allowed many small-scale events in the low solar atmosphere to be observed <cit.>.
High-temperature compact bright points, possible signatures of magnetic reconnection, have UV counterparts frequently observed with the Interface Region Imaging Spectrograph (IRIS; ).
Protoplanetary disks (PPDs) consist of gas and dust orbiting young stars. They form as a consequence of angular momentum conservation during the collapse under self-gravity of dense cores within interstellar molecular gas clouds. They therefore are a universal step in forming stars <cit.> and provide the raw materials from which planets form <cit.>.
The ionization in PPDs' surface layers comes mainly from photons emitted by the central star in the X-ray <cit.> and UV bands <cit.>. The disk midplane is optically-thick to these photons, but remains ionized at least weakly, thanks to cosmic rays <cit.> and the decay of radioactive isotopes <cit.>. The ionization fraction χ ranges from <10^-14 near the midplane <cit.> to >10^-4 in the disk atmosphere, where far-UV photons ionize most of the carbon atoms <cit.>. Midplane temperatures are high enough for thermal ionization in the fraction of an au nearest the star <cit.>.
Magnetic fields threading the disks play fundamental roles in the evolution and dispersal of the planet-forming material. The fields readily slip through the gas in the weakly-ionized parts of the disk interior and are better-coupled to the gas in the more-strongly-ionized surface layers.
Reconnection can affect many aspects of the planet-forming environment, including the saturation amplitude of the turbulence driven by magneto-rotational instability or MRI <cit.>, the formation of gas rings that may determine planets' initial locations <cit.> and the localized heating needed to form the chondrules common in primitive meteorites <cit.>. Magnetic forces also launch winds from the disk atmosphere near the plasma β=1 surface where the dynamics transitions from gas-dominated to magnetically-dominated <cit.>. Flows near this surface, dragging the magnetic field footpoints at the wind source can generate CSs leading to magnetic reconnection. While the processes listed above may be understood through modeling they remain difficult to observe directly, though there is near-term potential for chemical and thermodynamic signatures of reconnection to be detected through molecular near- and mid-infrared spectroscopy with the James Webb Space Telescope (JWST).
The idea that magnetic reconnection operates similarly across the solar atmosphere, protoplanetary disks, and other astrophysical contexts, comes from the fact that any CS forming in these environments is subject to the tearing instability <cit.>, leading to magnetic reconnection. Reconnection follows a longer period over which flows in the plasma store energy locally, in macroscopic, coherent magnetic structures. Once reconnection begins, the stored magnetic energy is quickly converted into plasma heating and particle acceleration. The conditions for triggering such fast reconnection can be understood using the ideal tearing model (IT, ). Under this model, the CS first thins gradually as reconnection proceeds slowly. Then, once the CS aspect ratio (length over thickness) reaches a critical threshold, further evolution is dominated by a fast tearing mode. This model explains numerical MHD results from <cit.> and plasma kinetic results from <cit.>. The variety of scales involved and the universality of the process suggest that, in the partially ionized case <cit.>, there are three regimes of reconnection: (1) coupled ion and neutral dynamics, (2) weakly coupled and (3) decoupled dynamics. These regimes are separated by the thickness of the CS being larger or smaller than characteristic scales, defined by the parameters of the system and, in particular, by collision frequencies of ions with neutrals. In <cit.> the onset of fast magnetic reconnection generated by the tearing instability is discussed for each regime.
In this work we apply the model to reconnection in the lower solar atmosphere and protoplanetary disks.
In Sec. <ref> we summarize the processes involved in the onset of fast magnetic reconnection.
In Sec. <ref> we generalize to partially-ionized plasmas.
In Sec. <ref> we present analytical calculations of the transitions between the reconnection regimes due to recursive processes.
In Sec. <ref> we introduce the spatial and temporal scales involved in the reconnection process in the solar chromosphere and PPDs.
In Sec. <ref> we apply the model to the solar atmosphere, examining the length scales at which magnetic reconnection can be observed in the lower, partially-ionized layers and comparing them with the observable range of scales.
In Sec. <ref> we carry out similar calculations for PPD atmospheres, contrast the reconnection regimes with the magnetic diffusion regimes commonly considered in this context,
and finally estimate the local heating resulting from the dissipation of the magnetic fields.
A summary and conclusions are in Sec. <ref>.
§ ONSET OF FAST MAGNETIC RECONNECTION, A SUMMARY.
In this section, we summarize the onset of the tearing mode instability, leading to fast magnetic reconnection in a resistive, fully-ionized plasma. For simplicity we describe the process starting from a 2D Harris CS <cit.>, though the analysis is also valid when there is an out of plane component of the magnetic field, i.e. a guide field, that may also vary so as to make the equilibrium force-free – see e.g. Eq. (5) and Eq. (6) in . The scale of the variation of the equilibrium magnetic field determines the initial CS thickness a.
The linear stability (for incompressible fluctuations) does not depend on the presence or absence of a magnetic field in the direction orthogonal to the reconnection plane (z or k̂) and whether the equilibrium is force-free or pressure balanced <cit.>.
At large Lundquist numbers (low-collision regimes) two regions define the solution structure of the perturbed magnetohydrodynamic (MHD) equations: a boundary layer of thickness 2δ around the center (y = 0) of the CS, with δ defined so that it separates the resistive inner region |y|<δ from the outer region, where both the magnetic diffusivity η_O due to resistivity and the growth rate of the tearing instability may be neglected.
From here on, barred quantities are normalized to the CS thickness a. τ̅_A = a/v_A is the Alfvén crossing time using the Alfvén speed v_A, and k̅ = ka, with k = 2π n/ℒ where n is the mode number, so that the minimum wavenumber available in the system is k_min = 2π/ℒ, for some system lengthscale ℒ. The Lundquist number is S̅ = τ̅_R/τ̅_A = a v_A/η_O, where τ̅_R = a^2/η_O is the Ohmic diffusion time over the thickness a.
The dependence of the maximum growth rate of the tearing instability γ_ max on the Lundquist number (Ohmic magnetic diffusivity) can be expressed as
γ_maxτ̅_A ∼S̅^-1/2 , δ_max/a∼S̅^-1/4 , k_max a∼S̅^-1/4,
where the subscript “max” indicates the maximum growth rate available, corresponding to the wavevector k_ max (see e.g. for the Harris CS). The expression for γ_ max in Eq. (<ref>) suggests that when the Lundquist number is large, as in most astrophysical plasmas,
the growth rate becomes negligible and the process inefficient.
We also notice that the system has an intrinsic length scale L, limiting the length of the CS (e.g. the length of a magnetic loop in the solar corona, the pressure scale height in a disk atmosphere etc..).
This suggested the relation for the “ideal” tearing instability <cit.>, i.e. for an instability where the growth rate survives independently of the Lundquist number in the ideal limit S→∞. Rescaling the dispersion relation to the CS length rather than the thickness, the maximum growth rate of the tearing instability (corresponding to the fastest growing mode) reads:
γτ_A ∼ S^-1/2(a/L)^-3/2.
where we dropped the subscript “max” to simplify the notation.
For an inverse aspect ratio varying as a/L∼ S^-α, any α<1/3 leads to growth rates diverging in the ideal limit, while any α>1/3 leads to growth rates tending to zero as the Lundquist number grows <cit.>.
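The contrast between the two scalings is easy to see numerically. In this illustrative sketch (values chosen purely for demonstration), the classic fixed-aspect-ratio growth rate vanishes as S grows, while the rescaled rate at a/L ∼ S^-1/3 stays of order unity:

import numpy as np

for S in np.logspace(6, 14, 5):
    gamma_classic = S ** -0.5                 # tearing on a sheet with a/L ~ 1
    a_over_L = S ** (-1.0 / 3.0)              # "ideal" tearing aspect ratio
    gamma_IT = S ** -0.5 * a_over_L ** -1.5   # gamma*tau_A ~ S^-1/2 (a/L)^-3/2
    print(f"S={S:.0e}  classic={gamma_classic:.1e}  IT={gamma_IT:.2f}")

The printed IT growth rate equals unity for every S, which is precisely the "ideal" property: the instability grows on the Alfvén time independently of the Lundquist number.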
This result is general: additional effects such as viscosity <cit.> and Hall current <cit.> result in different scalings for the critical aspect ratio at which fast reconnection is triggered. It also extends to situations where non-collisional terms in Ohm's law break the frozen-in conditions <cit.>.
§ RECONNECTION IN PARTIALLY IONIZED PLASMAS.
In partially-ionized plasmas, the electron-neutral, ion-neutral and electron-ion collisions cause an Ohmic-type diffusion of the magnetic field. With three different species undergoing collisions, such as electrons, ions, and neutrals, a single-fluid description leads to an appropriately modified magnetic induction equation <cit.>:
∂ B/∂ t = ∇× ( v× B) - ∇× [ η_O(∇× B) + η_H(∇× B)×b̂ + η_AD (∇× B)_⊥ ],
where b̂= B/| B|, and η_H, and η_AD are respectively the Hall, and ambipolar diffusivities (AD).
The velocity is here the velocity of the neutral fluid, and the coefficients are calculated neglecting electron pressure and other forces associated with the ionized components that are assumed to be negligible.
If the only charge carriers are ions and electrons the Ohmic diffusivity becomes:
η_O = c^2/ω^2_pe(ν_ei+ν_en) + c^2/ω^2_piν_in,
where ω_pe is the electron plasma frequency, the electron-ion, ion-neutral and electron-neutral collision frequencies are respectively ν_ei,in,en and c is the speed of light. In the lower solar atmosphere helium and hydrogen are by far the most abundant species <cit.> and collisions between neutrals and ions are negligible so the last term in Eq. (<ref>) can be dropped.
For protoplanetary disks, usually the electron ion-collisions are negligible and, under this assumption, in <cit.>, a prescription for the calculation of non-ideal coefficients is provided as a function of the local magnetic field, ions and neutrals collision frequencies, masses and abundances. <cit.> describes three regimes in which each of the terms in Eq. (<ref>) dominates the diffusion process in protoplanetary disks, where the main charge carriers may vary from dust grains near the midplane, to electrons and H^+ ions in the upper atmospheric layers (see Sec. <ref>).
The ambipolar diffusivity can be written as (see e.g. ):
η_AD = | B|^2/μ_0( ρ_n/ρ)^2(ρ_iν_in+ρ_eν_en)^-1,
where ρ=ρ_i+ρ_n is the total mass density, ρ_i, ρ_n and ρ_e are respectively the ion, neutral and electron mass density; μ_0 is the vacuum permeability. Assuming quasi neutrality (n_e∼ n_i), this expression is the same as in <cit.>, where ρ∼ρ_n.
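In Gaussian cgs units (where the SI factor 1/μ_0 becomes 1/4π), the two coefficients above can be evaluated directly from densities and collision frequencies. A sketch with inputs to be supplied from an atmosphere model; this is our own illustrative translation of the formulas, not a calibrated implementation:

import numpy as np

c, e = 2.998e10, 4.803e-10           # speed of light [cm/s], elementary charge [esu]
m_e, m_p = 9.109e-28, 1.673e-24      # electron and proton masses [g]

def eta_ohmic(n_e, n_i, nu_ei, nu_en, nu_in):
    w_pe2 = 4.0 * np.pi * n_e * e**2 / m_e   # electron plasma frequency squared
    w_pi2 = 4.0 * np.pi * n_i * e**2 / m_p   # ion plasma frequency squared (protons)
    return c**2 * (nu_ei + nu_en) / w_pe2 + c**2 * nu_in / w_pi2   # [cm^2/s]

def eta_ambipolar(B, rho_i, rho_n, rho_e, nu_in, nu_en):
    rho = rho_i + rho_n                      # total mass density
    return (B**2 / (4.0 * np.pi)) * (rho_n / rho)**2 / (rho_i * nu_in + rho_e * nu_en)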
Usually diffusion processes are slow compared to the advection time, and the local heating they provide depends on the actual value of the coefficients. In the IT scenario, magnetic reconnection supported by Ohmic diffusivity can occur on a timescale comparable to that of magnetic field advection, τ_rec∼τ_A.
The Hall effect can modify the reconnection scenario
when the scale of the equilibrium magnetic field gradient a, generating the CS, is a ∼ d_i=c/ω_pi, where d_i is the ion inertial length and ω_pi the ion plasma frequency <cit.>. In particular for modifications of the IT model, see <cit.> for the linear theory, where the critical aspect ratio depends on the ion inertial length while reconnection still proceeds at the fastest speed in the system, and <cit.>
for nonlinear simulations.
In the linear analysis of reconnecting instabilities the effect of non-ideal terms is assessed by their relative weight compared to the first advective term on the right hand side of Eq. (<ref>).
For the AD to become relevant specifically for reconnection dynamics,
a≤ v^2_Ai/(v_A ν_in).
Note that AD does not violate the flux conservation associated with electron and ion motion. It appears in Ohm's law only because the advective term is written for the frame of the neutral fluid. The assumption is that collisions partly couple neutrals to ions so the neutrals participate indirectly in the reconnection process.
In the discussion that follows, we assume instability sets in on an initial state where ions and neutrals move with the same velocity.
We include AD and its effect on the linear instability.
However AD may also play a role in the formation of favorable initial conditions with CSs <cit.>. AD-driven CS steepening may be inhibited by guide fields <cit.>, though this does not affect the stability directly, in the sense that for a given CS thickness the presence or absence of a guide field does not change the sheet's stability.
The relative importance of the different non-ideal effects in the partially ionized environments of the solar atmosphere and protoplanetary accretion disks are discussed in detail below.
§.§ Reconnection: the tearing mode equations.
Recently <cit.> extended the instability calculations presented by
<cit.> to the full resistive tearing mode dispersion relation, and from there obtained the trigger conditions for IT in the partially ionized case. In the latter work, there is no drift of the ions with respect to neutrals in the equilibrium. The current carriers in the starting equilibrium CS, considered to be force-free, are therefore the electrons only. In the linearized equation for incompressible perturbations
the ions and neutrals are collisionally coupled, but the equations are then solved in terms of the ion flow. Electron-neutral collisions are not taken into account in Ohm's law, again written in terms of the perturbed ion flow, though we consider the electron-neutral collisions in our calculation of the diffusive coefficients here. Compared to Eq. (<ref>), the <cit.> model neglects the Hall effect, while AD does not appear explicitly because of the different choice of variables in Ohm's law (ion flow vs neutral flow), though the corresponding couplings, namely ion-neutral momentum transfers, are included via the modified ion momentum equation. For the different forms of Ohm's law we refer to
<cit.>.
The Hall effect was considered by <cit.>, from which we derive where the Hall effect plays a role in our calculations and discuss its effects on the triggering of fast magnetic reconnection.
In <cit.> the linearized momentum and induction equations are written for the ion velocity and vector potential (respectively ϕ, ψ) as (primes denote derivatives with respect to the a-scaled variable y/a):
(γτ̅_Ai)^2 (1 + ν_in/(γ+ν_ni)) (ϕ”- k̅^2ϕ) = -F(ψ”-k̅^2ψ) + F”ψ,
γτ̅_Ai ψ = k̅Fϕ + 1/S̅ (ψ”-k̅^2ψ),
where S̅ is defined using the Alfvén speed calculated with the ion mass density.
We want to stress again that barred quantities will be normalized to the CS thickness, so that, for example τ̅_Ai=a/v_Ai is the Alfvén time calculated with the ion density, γ is the tearing growth rate associated with a mode with wave vector k̅=ka along the equilibrium magnetic field.
We calculate the collision frequencies assuming binary elastic (energy- and momentum-conserving) collisions between ions and neutrals, so that ν_ni = (n_i m_i)/(n_n m_n) ν_in⇒ν_ni < ν_in at most heights in the solar atmosphere (see Tab. 1 in ) and in protoplanetary disks. Note that the opposite limit ν_ni≫ν_in leads to the standard tearing of a completely ionized plasma.
Following <cit.> we may redefine a starred Alfvén time and Lundquist number
τ̅_A^* := τ̅_Ai (1 + ν_in/(γ+ν_ni))^1/2 = τ̅_Ai f_M^1/2,
S̅^* := S̅τ̅_Ai/τ̅_A^* = S̅ f_M^-1/2.
Inserting τ̅_A^* into Eqs. (<ref>) and substituting S̅τ̅_Ai with S̅^*τ̅_A^* and γτ̅_Ai with γτ̅^*_A,
the tearing mode equations regain their standard form, so that all the properties of the dispersion relation discussed previously now apply to the starred quantities.
We are going to analyze the maximum growth rate of the tearing instability Eq. (<ref>), because the fastest growing mode is the most relevant in the context of triggering fast magnetic reconnection in natural plasmas causing an efficient energy conversion.
In particular, from Eq. (<ref>) we have that γτ̅_A^* follows the same scaling with S̅^* as in the standard tearing theory:
γτ̅^*_A ∼ (S̅^*)^-1/2⇒γτ̅_Ai∼S̅^-1/2(τ̅_Ai/τ̅^*_A)^1/2.
When the growth rate is negligible compared to both collision frequencies, the factor f_M^1/2 becomes
f_M^1/2=(1+ν_in/ν_ni)^1/2=(1+ρ_n/ρ_i)^1/2=(ρ/ρ_i)^1/2.
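Because f_M depends on γ itself, the rescaled growth rate γτ̅_Ai ∼ S̅^-1/2 f_M^-1/4 (which follows from the starred rescaling above) is an implicit relation. A simple fixed-point iteration, our own illustrative construction, solves it:

def f_M(gamma, nu_in, nu_ni):
    # coupling factor f_M = 1 + nu_in / (gamma + nu_ni)
    return 1.0 + nu_in / (gamma + nu_ni)

def growth_rate(S_bar, tau_Ai, nu_in, nu_ni, n_iter=100):
    gamma = S_bar ** -0.5 / tau_Ai   # fully-ionized starting guess
    for _ in range(n_iter):          # iterate gamma * tau_Ai = S_bar^-1/2 f_M^-1/4
        gamma = S_bar ** -0.5 * f_M(gamma, nu_in, nu_ni) ** -0.25 / tau_Ai
    return gamma

In the two limits, the iteration reproduces the expected behavior: for γ ≫ ν_in it returns the fully ionized rate, while for γ ≪ ν_ni it converges to the coupled-regime rate reduced by the factor (ρ/ρ_i)^-1/4.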
§.§ Reconnection regimes.
With the two ion-neutral collision frequencies, two intrinsic length scales are introduced into the resistive MHD equations,
a_c1,c2 defined as <cit.>
a_c1,c2 = (η_O v_A,Ai/ν_ni,in^2)^1/3.
If the thickness of the CS satisfies a≫ a_c1, the onset of fast reconnection occurs in a regime where the neutral and ion dynamics are coupled. For a_c1≥ a ≥ a_c2 the onset of fast reconnection occurs in the intermediate regime. Finally, when a<a_c2, the fast reconnection dynamics involves the ions only, with the neutrals essentially unaffected. In Fig. <ref> (left two columns) we summarize the regimes (divided by colors) and the transition scales (labelled in red and orange), showing that going from the coupled to the decoupled regime corresponds to progressively thinner CSs. Ultimately, the plasma parameters and dynamics determine the regime in which the onset of fast reconnection occurs.
In the right three columns of Fig. <ref>, we summarize the results for the onset of fast magnetic reconnection from <cit.>. In particular, we report the critical CS thickness a_c for each regime, the values of the growth rate at the transitions between regimes (red and orange colors) and the timescale over which the instability proceeds in each regime once the critical thickness is reached.
The latter depends on the ion-neutral collision frequencies and, in particular, in the fully decoupled regime has a weak dependence on ν_inτ_Ai expressed via the exponent ζ
(non-barred quantities are normalized with L, so that S=L v_Ai/η_O).
The growth time of the tearing instability, and so the timescale on which reconnection proceeds, in the fully coupled regime will be of the order of τ_A, where the Alfvén speed is calculated with the total (ion + neutral) density.
In particular when a_c≪ a_c1,c2 the time-scale becomes the shortest ion-only Alfvén time,
τ_Ai=τ̅_Ai a/L.
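The regime bookkeeping above reduces to two comparisons. A minimal sketch (function and variable names are ours), using the pairing of v_A with ν_ni for a_c1 and of v_Ai with ν_in for a_c2:

def regime(a, eta_O, v_A, v_Ai, nu_ni, nu_in):
    a_c1 = (eta_O * v_A / nu_ni**2) ** (1.0 / 3.0)    # coupled/intermediate boundary
    a_c2 = (eta_O * v_Ai / nu_in**2) ** (1.0 / 3.0)   # intermediate/decoupled boundary
    if a > a_c1:
        return "coupled"
    return "intermediate" if a > a_c2 else "decoupled"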
To compare the trigger conditions found in <cit.> with those arising when including explicit ambipolar, Hall and Pedersen terms in Ohm's law <cit.> and differential equilibrium flows between neutrals and ions, a linear study of the tearing instability should be carried out within a three fluid framework, where the dynamics of the positive and negative charge carriers and of the neutrals are solved for the IT mode. However, the main AD effects on magnetic reconnection, i.e. the consequences of ion - neutral drift on reconnecting mode growth rates are captured by the critical scales of <cit.>.
§ ENERGY CASCADE TOWARDS SMALLER SCALES
On the basis of the above discussion, one may outline the behavior of the tearing instability in a simple CS that is slowly thinning. We shall assume the tearing mode develops on the global, ion-neutral coupled, Alfvénic time-scale, since this often proves to be the case for the environments discussed in this work. Modeling work has shown that a recursive reconnection regime may appear <cit.>, where CSs form at successively smaller scales. This suggests a transition to the intermediate and then fully decoupled regime as the sheets' thicknesses decrease, accelerating the nonlinear evolution of the tearing mode, and so of the energy conversion.
The natural thinning of the CS results from nonlinear processes driven by the self-attraction of parallel currents and the repulsion of oppositely directed currents. Considering a simple 2D magnetic configuration as in <cit.>, the reconnecting magnetic field generates an inflow-outflow pattern at the x-point. These flows tend to collapse the x-point into a new sheet that is stretched by the outflow but being kept at the initial thickness by the inflow. Once the sheet lengthens sufficiently, it itself is unstable to reconnection on a faster timescale, breaking the initially macroscopic CS into ever smaller reconnecting regions. Each then becomes further unstable accelerating the process until a turbulent regime, with heating and acceleration, is obtained (Fig. 2 in , Fig.2 in , Fig. ).
As discussed by <cit.>, different recursive reconnection models predict different numbers of plasmoids at the n-th stage of the reconnection cascade and so different number of CSs separating plasmoids (regions of close magnetic field lines, see the figures and references mentioned above). The process continues down to smaller scales until energy is dissipated, by relevant microscopic processes; for the fully ionized corona, for example, these are collective kinetic effects. In partially-ionized plasmas, if a reconnection event is initiated at scales at which ions and neutrals are coupled, see the blue area in the diagram in Fig. <ref>, as sheets keep shrinking, they will become unstable to secondary IT instability in the intermediate or fully decoupled regimes.
Information on plasmoid scaling and energy transfer to the small scales can be translated into spectral features <cit.>, and not only can help distinguish one model prediction from another, but might also result in an observable feature of the reconnection layer.
§.§ Transition from the coupled to the Intermediate regime.
Understanding the energy cascade to smaller scales requires finding a relation between the properties of the n-th recursive reconnection stage and those of the (n-1)-th parent reconnecting process. As discussed in <cit.>, we do not make any assumption as in e.g. <cit.>. We follow instead <cit.>, where it is empirically observed that the thickness of the n-th CS is related to the length of the (n-1)-th CS by a_n/L_n-1∼ S^-1/2, because the upstream magnetic field is seen to be swept into the reforming x-points and remains of the same intensity as the background at each level (see e.g. ). For the sake of simplicity, let us consider the second step of the recursive collapse; then
a_1/L_0∼ S^-1/2,
where we labelled with subscript 0 the initial macroscopic unstable CS length, with subscript 1 the parameters of the secondary reconnecting sheets, and assumed we are in the fully coupled ion-neutral regime. Later in this work we will show this hypothesis is satisfied for a relatively large range of CS lengths in the lower solar atmosphere as well as in protoplanetary disks. In the partially ionized case, the δ corresponding to the maximum growth rate is given by δ/ a ∼S̅^*-1/4=(τ_D/τ̅_A^*)^-1/4=S^-1/4 f_M^-1/8, where again S is the Lundquist number calculated with the Alfvén speed based on the ion mass density only. As discussed in <cit.>, since f_M is invariant for the IT rescaling, the solution is the same as for the classic IT with corrections depending on the regime, δ / L =S^-1/2 f_M^-1/8. Based on the assumptions above, Eq. (<ref>) becomes
a_1/L_0∼ S^-1/2(ρ_n/ρ_i)^-1/8
This means that if the CS thickness at the n=1 recursive reconnection stage satisfies a_1<a_c1, the triggering condition for the tearing instability at this stage will be the one described in Fig. <ref> for the intermediate regime, i.e. a_c∼ LS^-1/3(ν_inτ_Ai)^1/3. This occurs when
a_1 < a_c1.
The inequality above gives
τ_A/τ_ni < (S ρ/ρ_i)^1/4
where τ_ni∼ 1/ν_ni is the neutral-ion collision time.
As we will show in the next sections, in a partially ionized plasma with a significant neutral population, Eq. (<ref>) is often satisfied and secondary reconnection will occur in the intermediate or decoupled regime, even if the dynamics ultimately depend on the thickness and so length of the CS and the local Lundquist number.
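The criterion of Eq. (<ref>) is a one-line check. In the sketch below, the numbers in the example call are illustrative lower-chromosphere values anticipating Sec. <ref>, not measurements:

def secondary_in_intermediate(S, rho_over_rho_i, tau_A, tau_ni):
    # Eq.: tau_A / tau_ni < (S * rho / rho_i)^(1/4)
    return tau_A / tau_ni < (S * rho_over_rho_i) ** 0.25

print(secondary_in_intermediate(S=1e5, rho_over_rho_i=1e3, tau_A=1.0, tau_ni=0.1))
# -> True, since 10 < (1e8)^(1/4) = 100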
§ ADVECTION SCALES IN SOLAR AND DISK ATMOSPHERES.
In this section we address the characteristic length scales and timescales of the environments in question, in order to calculate the magnetic Reynolds numbers necessary to determine the critical thicknesses for fast magnetic reconnection.
The reason we focus on the Reynolds number is the following. Once a CS has formed and become unstable to fast magnetic reconnection thanks to resistivity and neutral collisions breaking flux-freezing, the sheet's critical thickness is the key parameter for understanding the role of other non-ideal effects in the reconnection process. The critical thickness, estimated as shown in Fig. <ref>, can then be compared to the lengths calculated in Sec. <ref> to find whether the Hall effect and AD alter reconnection's onset. AD is included in our model and affects the reconnection regimes in which a degree of coupling between ions and neutrals is present. The Hall effect has scales defined by the ion inertial length d_i and affects reconnection at kinetic scales, where collisions between neutrals and ions are generally negligible. In this paper we will discuss in detail when corrections due to the Hall effect and AD are relevant, even if they are not expected to significantly change the conclusions. The picture could be further refined using three-fluid calculations to obtain corrections to the CS thickness explicitly proportional to η_AD. However, such corrections would be small since the neutrals' role in the ion and electron dynamics is already included here.
We adopt the following definition for the magnetic Reynolds number:
R_m=L v/η_O.
where L is the characteristic macroscopic lengthscale of the system and v is the convective speed found in the first term on the right-hand side of Eq. (<ref>). The Lundquist number S is obtained by replacing v with v_A.
Eq. (<ref>) for fully ionized plasmas and Eq. (<ref>) for partially ionized plasmas tell us the efficiency of reconnection due to the tearing instability depends on the magnetic Reynolds number.
For the model in <cit.> this dependence is reflected in the critical scale at which fast magnetic reconnection, and so efficient magnetic energy conversion, can be achieved. In order to discuss the reconnection dynamics it is then important to understand how to estimate the magnetic Reynolds numbers. In this section we will discuss the (temporal and spatial) scales involved in convecting and diffusing the magnetic field in the solar chromosphere and photosphere, as well as in protoplanetary disks. A similar discussion holds for any other plasma environment.
§.§ Magnetic Reynolds number in the solar atmosphere.
The solar photosphere and chromosphere are gravitationally stratified. This means the characteristic length scale L of reconnection in these layers is limited to at most the smaller of the pressure scale height H_p and the typical transverse convective scale, defined by granulation at ∼1000 km. Since H_p is the smaller of the two, L < H_p.
In the solar corona in contrast, the gravitational scale height is much greater than the transverse and magnetic field scales, the plasma β (≃ c_ s^2/v_A^2 ) is very small, and gravity can often be neglected.
In the photosphere and chromosphere then, the pressure scale height defines the maximum vertical coherent scale achievable so that the appropriate magnetic Reynolds number becomes (c.f. )
R_ m=H_p c_s/η_O,
where c_s is the speed of sound. Indeed in these atmospheric layers the plasma β may be larger than unity. This is especially important near the boundary of an isolated flux tube, where the Alfvén velocity is comparable to the sound speed. We can estimate the pressure scale height in the solar atmosphere using the C7 model in <cit.>. Once the Ohmic magnetic resistivity η_O has been calculated (see e.g. ) we can, for example, evaluate R_m∼ 10^4-10^5 in the solar photosphere at the solar temperature minimum <cit.>.
To compare with the usual definition of the Lundquist number in Eq. (<ref>), we parameterize
L v_A = H_p c_s L/(H_pβ^1/2)≃ 10 H_p c_s.
We then conclude that in order to discuss the onset of fast magnetic energy conversion through reconnection, we should consider how the choice of the parameters affects the critical threshold for the onset of the tearing instability, ultimately leading to the reconnection process.
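The estimate above is easily reproduced. In this sketch the temperature, mean molecular weight, and Ohmic diffusivity near the temperature minimum are rough stand-ins for the C7 tabulated values, quoted only to fix orders of magnitude:

import numpy as np

k_B, m_H, g_sun = 1.381e-16, 1.673e-24, 2.74e4   # cgs: Boltzmann const., H mass, solar gravity

def solar_Rm(T, mu, eta_O):
    H_p = k_B * T / (mu * m_H * g_sun)               # pressure scale height [cm]
    c_s = np.sqrt(5.0 / 3.0 * k_B * T / (mu * m_H))  # adiabatic sound speed [cm/s]
    return H_p * c_s / eta_O

print(f"R_m ~ {solar_Rm(T=4400.0, mu=1.3, eta_O=1e8):.0e}")   # of order 10^4-10^5

With these inputs H_p ≈ 100 km and c_s ≈ 7 km/s, consistent with the values quoted in the text.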
§.§ Magnetic Reynolds number in protoplanetary disk atmospheres.
In protoplanetary disks the magnetic fields' evolution depends on dynamical processes that may include
magneto-rotational instability (MRI) leading to channel flows and turbulence <cit.>,
magnetically driven winds <cit.>,
photoevaporative winds <cit.>,
vertical shear instability <cit.>,
magnetic buoyancy and Parker instability <cit.>, and
magnetic reconnection <cit.>.
Depending on the physical and chemical state of the environment, each of these processes can be of interest for local as well as global disk evolution, accretion of disk material onto the star, and the dispersal of the disk back into interstellar space. The dimensionless Reynolds and Elsasser numbers define how the fields evolve on large scales, but are insufficient for understanding how magnetic reconnection proceeds once a CS forms. In this paper we focus on reconnection, so the magnetic Reynolds number defined above is a key parameter governing the CS thickness and so the role of the other non ideal effects.
Most of the definitions of magnetic Reynolds numbers in protoplanetary disks have been provided in the context of the MRI and its saturation in MHD simulations (see the discussion in ). <cit.> defined R_m=v^2_A/(Ωη_O)=1, where Ω is the angular velocity of the disk, below which MRI-driven turbulence relaxes into channel flows. The latter define a new macroscopic lengthscale in the system, which can be disrupted by magnetic reconnection events. In the case of a poloidal field with zero net vertical flux, <cit.> argued that the MRI can be sustained when the effective magnetic Reynolds number R_m=c^2_s/(Ωη_O)≥ 10^4. <cit.> defined R_m=L^2Ω/η_O>10^3, where L is the size of the simulation domain in the radial direction, above which the magnetic turbulence induced by the MRI can be sustained, and energy is stored in the turbulent magnetic field, defining a new length at the eddy scale.
In general the definition of a magnetic Reynolds/Lundquist number depends on the dynamics and, in a protoplanetary disk, its calculation is difficult because of uncertainties on magnetic field detection <cit.>. Recent measurements using the Zeeman effect provide estimates on the upper limits of magnetic fields of |B|∼ 10^-2G <cit.>, significantly smaller than the values we see in the solar atmosphere. In this work we will vary the magnetic field in a range B=10^-4-10^2 G to cover a sufficiently large parameter space. The disk atmosphere is vertically stratified so that a scale height can be defined as H_p:=c_s/Ω where c_s is the sound speed and Ω the orbital frequency.
The rotational speed of the atmospheric layer provides a torque, storing energy in the magnetic field lines, so we can think about the rotational speed as the main driver for the so-called accumulation or “build-up” phase (for magnetic field concentration and amplification in PPDs see ).
Reconnection is particularly important in the regions where the plasma parameter β≤1, since most of the energy is stored in the magnetic field, which dominates the dynamics.
We will then adopt the Alfvén speed as the main driver velocity for magnetic field convection in the regions of interest, so the Lundquist number for the disk will be estimated here as S=H_p v_A/η_O. The latter will be calculated following <cit.>, see the discussion on the model for PPDs adopted in this work, Sec. <ref>.
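The disk Lundquist number then follows from Keplerian quantities. A sketch with placeholder inputs (in practice T, ρ, B, and η_O at each height would come from the thermochemical model described in the next section):

import numpy as np

G, M_sun, au = 6.674e-8, 1.989e33, 1.496e13   # cgs
k_B, m_H = 1.381e-16, 1.673e-24

def disk_lundquist(T, rho, B, eta_O, R_au=2.5, mu=2.34, M_star=1.0):
    Omega = np.sqrt(G * M_star * M_sun / (R_au * au) ** 3)   # Keplerian frequency
    c_s = np.sqrt(k_B * T / (mu * m_H))                      # isothermal sound speed
    H_p = c_s / Omega                                        # disk pressure scale height
    v_A = B / np.sqrt(4.0 * np.pi * rho)                     # Alfven speed (Gaussian units)
    return H_p * v_A / eta_O

# e.g. disk_lundquist(T=200.0, rho=1e-13, B=0.03, eta_O=1e12)   # placeholder inputs only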
§ PARTIALLY IONIZED RECONNECTION IN THE SOLAR ATMOSPHERE
In this section, we will discuss the regimes in which fast magnetic reconnection can be triggered in the solar photosphere and chromosphere.
§.§ Lower solar atmospheric structure.
The model we adopt for the solar atmosphere is the one labeled C7 in <cit.>, where the quantities are functions only of height in the solar atmosphere. The number densities of ions and neutrals are listed in Tab. <ref> assuming quasi-neutrality n_i∼ n_e.
As the most abundant species in the solar atmosphere are hydrogen and helium, the neutral and ionized helium abundances n_ HeI and n_ HeII are also provided. The ionized helium abundance is orders of magnitude lower than the electron abundance (and so positive ion abundance, under quasi-neutrality), though helium contributes noticeably to the neutral population. Henceforth we will adopt hydrogen as the main neutral species.
The interspecies collision frequencies ν_ei, ν_en, ν_in are calculated using <cit.>, for which the collision cross-sections are obtained from <cit.>.
In Tab. <ref> we also list the temperature at each height. The temperature profile plotted in Fig. <ref> shows a minimum about 500 km above the solar surface.
Magnetic field at a given height is evaluated using the relation <cit.>
B=B_s(ρ/ρ_s)^α,
where B_s is the magnetic field at the solar surface, defined as the surface with optical depth equal to unity (bottom of the photosphere). We choose α = 0.3 so the magnetic field weakens with height. We experiment with α over the range 0.3 to 0.6 <cit.> with only minor impacts on the results. We consider two different field strengths at the solar surface, B_s=1200 G and B_s=2200 G <cit.>.
In Fig. <ref> we show also the total density, from which we derive the Alfvén speed profile. The pressure scale height is
H_p=ℛ T/μ g,
where ℛ is the gas constant, μ the mean mass per mole of atmospheric plasma, and g the local acceleration due to gravity (Sec. <ref>).
We now have all the quantities needed to calculate the critical scales in Eq. (<ref>).
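A small sketch of these stratification inputs (Python; names are ours, and the second function anticipates the coupled-regime critical scale quoted in the next subsection):

def B_height(rho, rho_s, B_s=1200.0, alpha=0.3):
    # B = B_s (rho / rho_s)^alpha, weakening with height; B_s = 1200 or 2200 G
    return B_s * (rho / rho_s) ** alpha

def a_c_coupled(H_p, S, rho_n_over_rho_i):
    # critical thickness of the longest sheet, a_c ~ H_p S^(-1/3) (rho_n/rho_i)^(1/6)
    return H_p * S ** (-1.0 / 3.0) * rho_n_over_rho_i ** (1.0 / 6.0)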
§.§ Critical scales for the onset of fast magnetic reconnection in the lower solar atmosphere.
In Tab. <ref> we listed the resulting Ohmic diffusivity and, for comparison, the Hall and AD coefficients. The Hall coefficient is the same order of magnitude as the Ohmic magnetic resistivity. The AD depends on the magnetic field and increases with height in the atmosphere.
While these quantities tell us which diffusion process is dominant, as discussed before, diffusion is slow compared to advection and fast reconnection. We will come back to this later in this section.
The magnetic Reynolds numbers, necessary to calculate the critical CS thickness for the IT mode, inherit the contribution of the ion-neutral collisions through the diffusivity.
In Fig. <ref> we show the results for the transition between different reconnection regimes (color coded), as a function of height in the solar atmosphere. The coupled regime (blue area) extends down to CSs with thickness of a ≪ 1 km for all the heights, both in the photospheric as well as in the chromospheric layer.
The Lundquist numbers and the magnetic Reynolds numbers turned out to be of the same order of magnitude, and so we used S for the results shown in Fig. <ref>.
The critical length scale a/H_p∼ S^-1/3(ρ_n / ρ_i)^1/6 associated with the longest available CS, i.e. limited by L<H_p at each height, is shown in Fig. <ref> (dashed black line). This means there is a range of CS thicknesses in the solar atmosphere a≲ 100 km for which reconnection begins with ion-neutral coupled dynamics. When the magnetic field dominates the dynamics (β<1), larger CSs are likely to form.
Scales down to a∼ 100 km could be resolved with DKIST or SOLAR-C <cit.>, while smaller-scale features may be hidden, since the photon mean free path is ∼ 100 km at the photosphere <cit.>.
As explained in Sec. <ref>, we expect primary CSs to break into secondary CSs that are unstable to IT in the intermediate or decoupled regime (see also ).
§.§ Hall and Ambipolar diffusion effects.
In Fig. <ref> the dotted and barred areas indicate where the Hall and AD contributions to magnetic reconnection might not be negligible. For the AD to be important, a degree of coupling between neutrals and ions is necessary, so ambipolar effects are not relevant in the decoupled regime. The upper edge of the barred area is calculated using Eq. (<ref>).
If and only if there is a neutral point, AD steepens the current profile in the forming current sheet (), once the dynamics of reconnection are taken into account.
The upper limit of the dotted area is defined by the local ion inertial length d_i, for H^+ ions. The Hall effect on its own does not break the frozen-in condition, as field lines remain frozen to the electron flow, but the Hall term destabilizes reconnection and affects tearing and the IT instability criterion (). This means that in the dotted region in Fig. <ref>, the critical length-scale at which reconnection becomes efficient will also depend on the ion inertial length, and the CS will become unstable at slightly smaller aspect ratio than the simple resistive case, with corrections provided in <cit.>.
Still, even if both the AD and Hall diffusion are relevant to the reconnection process, the energy conversion will proceed at a pace determined by the regime in which the reconnection event is happening, i.e. coupled, intermediate or decoupled, see Fig. <ref>.
§.§ Fractal reconnection scenario.
As discussed in Sec. <ref>, secondary reconnection is likely to occur in the intermediate or the decoupled regime in partially-ionized plasmas. In the lower solar chromosphere we have from Tab. <ref> that ρ/ρ_i ∼ n_n/n_e=10^3, since the plasma is quasi-neutral and m_i∼ m_n. The density ratio falls to ρ/ρ_i ∼ 1 in the fully-ionized upper solar atmosphere. The Lundquist number S ranges from S=10^5-10^6 (Tab. <ref>). Choosing S=10^5 and ρ/ρ_i=10^3, for Eq. (<ref>) to be satisfied we need τ_A≤ 100 τ_ni. In the coupled regime (blue region of Fig. <ref>), as a_c approaches a_c1 we have γ^-1∼τ_A∼τ_ni. In the first equality here we use the fact that the CS has already developed into fast IT in the primary reconnection stage. Then Eq. (<ref>) is satisfied and secondary reconnection begins in the intermediate regime.
That reconnection proceeds faster at smaller scales, where the ions and neutrals decouple, leads to runaway acceleration of the ions that could produce tails in the ion energy distribution like those needed for a significant contribution to coronal heating from velocity filtration <cit.>.
§ RECONNECTION IN PROTOPLANETARY DISK ATMOSPHERES.
Reconnection has not been clearly identified in PPDs. Even the global angular scales are unresolved, since the nearest PPDs are more distant than the Sun by seven orders of magnitude. In this section we discuss where reconnection is likely to occur in PPDs, how reconnection events can be initiated in the disk context, and how the resulting magnetic energy dissipation may impact the disk and its atmosphere.
We apply the reconnection picture described above in sections <ref> and <ref> to a model disk orbiting a star of M = M_⊙ that is about 10^6 yr old, an age at which such disks are typically accreting onto their central stars. We focus on conditions near 2.5 au from the star, since (1) the disks' central few au is the launching point for winds observed through optical line emission <cit.> and (2) the disk atmosphere in this same distance range is the source of a multitude of mid-infrared molecular emission lines that carry information on the excitation conditions <cit.>.
At 2.5 au, the dipole component of the star's magnetic field is millions of times weaker than at the stellar photosphere and the higher-multipole components are weaker still. Measured field strengths at young near-Solar-mass stars' photospheres are in the kilogauss range <cit.>, corresponding to fields at au distances less than a milligauss. Comparing this to the disk fields swept in with the gas during the collapse of the parent molecular cloud core <cit.> and the gauss-range fields inferred for the asteroid belt's location in the early Solar system <cit.>, it seems likely that the stellar fields outside 1 au are negligible compared with the fields intrinsic to the disk.
§.§ Turbulence as a driver for magnetic reconnection.
Though MRI is not thought to be active at the midplane of protoplanetary disks (),
it can be triggered in any layer where both of the Elsasser numbers exceed unity and the magnetic pressure is less than the gas pressure ().
The nature of the magnetic dynamics and in particular the presence of MRI-driven turbulence depend on the heating and cooling processes and chemical reactions included, since these govern the non-ideal coefficients in the induction equation <cit.>.
The static disk model we employ has more species and more heating and cooling processes than can typically be treated in dynamical calculations. The coupling of chemistry to thermodynamics leads to a disk with a hot, extended, ionized atmosphere. As a result, we shall see below that the criteria for MRI are met in a layer spanning heights above the midplane from 0.35 to 0.75 au.
If MRI is present some of the magnetic fields generated in the MRI turbulent region will rise into the atmosphere due to their buoyancy () and through the Parker instability ().
Reconnection can develop in turbulent environments driven by the MRI <cit.>, and MRI-driven currents and reconnection events can cause enough ionization to sustain MRI turbulence itself <cit.>. Recent work on MHD turbulence shows that eddies develop strong anisotropies in the inertial range and that magnetic reconnection in the ideal regime can even break the cascade leading to dissipation <cit.>. The energy released potentially contributes to the heating of protostellar winds and jets <cit.>.
The role and importance of reconnection may not be readily observable in disk simulations because limited spatial resolution leads to underestimating Lundquist and magnetic Reynolds numbers in certain regions, and also because time-averaging magnetic fields and assuming axisymmetry can hide the perturbations driving reconnection.
§.§ Magnetic field tangling and CSs in the protoplanetary disk atmosphere.
The magnetic topology in protoplanetary disks is driven by the field's coupling to flows arising from differential rotation and turbulence. CSs may form as natural boundaries separating flux volumes rooted in different disk regions. For example, if the disk atmosphere has a beta transition at which gas and field are coupled, then any flows in the gas will move atmospheric magnetic field footpoints. In a system where a form of line-tying exists, differential rotation of footpoints leads naturally to the formation of CSs separating field lines rooted at different radii, or in regions separating open and closed field lines in the magnetically linked star-disk system <cit.>.
One large-scale CS is that formed by the hourglass magnetic configuration <cit.> of the disk’s magneto-centrifugal wind when acted upon by the differential orbital rotation. This leads naturally to toroidal fields with opposite signs in the upper and lower hemispheres. CSs can also form at the boundary between the so-called “dead zone” and the wind launching region, where magnetic field is wound up by differential rotation. In diffusive MHD calculations including those by <cit.> and <cit.>, this CS is offset from the midplane and occurs in the atmosphere on either the upper or lower face of the disk.
§.§ Disk chemistry and radiation.
We assess reconnection in protoplanetary disks in the context of a thermochemical disk model <cit.> having mass 0.01M_⊙ and orbiting a 1M_⊙ star whose optical, UV, and X-ray spectra are those of the well-studied young star TW Hya.
As detailed in <cit.>, we determine the disk's density structure by solving for vertical hydrostatic balance under a surface density constraint set by the disk mass.
Gas temperature is found by balancing heating by dust collisions, UV, and X-rays with cooling by line and continuum emission from various ions, atoms, molecules, and grains, whose abundances are determined by solving a chemical network. The network includes ∼ 6000 gas-phase reactions, photo-reactions (UV and X-rays), cosmic ray ionization, and grain surface reactions among ∼ 800 species. The gas density, temperature, and chemistry, and the dust size distribution are all coupled. The grain size distribution at each height is determined by balancing fragmentation, coagulation, and settling. The cooling is computed using non-LTE radiative transfer in the main spectral lines treating collisional and radiative processes. The dust's thermal radiation is also included.
The density structure, heating and cooling, and chemistry are computed iteratively to mutual consistency. Fig. <ref> shows the resulting variation with height of the total (ions plus neutrals) density and temperature.
Also plotted there is the pressure scale height H_p, used below as an estimate of the largest coherent scale for CS formation L=H_p as discussed in Sec. <ref>. The main charge carriers' abundances versus height are shown in Fig. <ref>.
The model disk extends from the midplane up to heights above the base of the layer where the star's X-ray and FUV photons can heat the gas to near 10^4 K
while it remains neutral. The temperature at these heights is too low for the hot gas to readily escape, since the virial temperature at our fiducial distance of 2.5 au from the star is ≳ 20,000 K <cit.>. Photoevaporative escape from our fiducial radius is possible at larger heights where the gas is fully ionized, but for the range of heights z ≲ 1 au we are concerned with here, the disk structure does not significantly depart from hydrostatic equilibrium.
The thermal structure implies the base of the photoevaporative wind lies at a height 1.14 au and the launching speed is just 16% of the thermal speed, by the methods in <cit.> and <cit.>. This slow outflow will have only slight effects on the disk structure in the region of interest here (we focus on z ∼ 0.6 au where β=1). Examining how reconnection operates in this hydrostatic model is thus sufficient.
§.§ Modeling dust grains.
Dust grains' surfaces catalyze chemical reactions including the recombination of free charges. Dust also enables photoelectric heating and continuum radiation cooling. Dust thus can have major effects on the induction equation's non-ideal coefficients <cit.>. Our model disk has a ratio of dust to gas surface density Σ_ dust/Σ_ gas=0.01, near the value in the interstellar medium. The silicate and carbonaceous grains' size distribution when integrated through the disk thickness corresponds to equilibrium between collisional aggregation and fragmentation, with number density n_d(a)∝ a^-3.5 over the range 0.005 μ m<a<1 cm. Each grain size's vertical distribution within the gas column is determined by balancing settling with stirring by weak turbulence, so that the bigger grains are concentrated near the midplane as observed <cit.>.
The grain temperature is size- and composition-dependent and the solids are released into the vapor phase from grains whose temperature exceeds the sublimation threshold of ∼ 1500K.
§.§ Non-ideal coefficients and Elsasser numbers.
Both the AD and the reconnection heating rate depend on the magnetic field strength. We estimate the field in our model disk annulus as follows. Protoplanetary disks deliver material to the surfaces of their central young stars at rates typically around Ṁ=10^-8 Solar masses per year at age one million years <cit.>. If this accretion flow is driven by the torque from a magneto-centrifugal wind, the magnetic field threading the disk cannot be weaker than B > (2ṀΩ/(√(3)R))^1/2 (<cit.>, Eq. 7). For the disk annulus located R=2.5 au from the Solar-mass star, this amounts to a lower limit on the field strength B > 30 milligauss.
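The quoted lower bound is reproduced by a direct evaluation in cgs units (constants rounded; the 30 mG figure is insensitive to these roundings):

import numpy as np

G, M_sun, au, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7   # cgs

def B_min(Mdot_msun_yr=1e-8, R_au=2.5, M_star=1.0):
    Mdot = Mdot_msun_yr * M_sun / yr                # accretion rate [g/s]
    R = R_au * au
    Omega = np.sqrt(G * M_star * M_sun / R**3)      # Keplerian frequency at R
    return np.sqrt(2.0 * Mdot * Omega / (np.sqrt(3.0) * R))   # minimum field [G]

print(f"B_min ~ {1e3 * B_min():.0f} mG")   # ~30 mG at 2.5 au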
While measurements of magnetization in primitive meteorites suggest fields up to about 500 milligauss at 1 to 3 au from the young Sun in the solar system's first million years <cit.>, we give our model disk annulus a field with the minimum strength of 30 milligauss to obtain a lower bound on the reconnection heating rate.
Empirical evidence on protoplanetary disks' magnetic field geometries is so limited that a wide range of possibilities remains open. We therefore simply assume the total field is uniform in height to make exploration of the parameter space straightforward. Fig. <ref> shows that the 30-milligauss field yields a ratio of gas to magnetic pressure β∼ 10^4 at the midplane. This ratio falls with height, being less than unity above z=0.6 au. The two other magnetic field properties needed below are the magnitude of the vertical field component, which governs whether magneto-rotational turbulence is present, and the magnitude of whichever field component is involved in reconnection. Again seeking the simplest reasonable choice since the evidence is too limited to support greater complexity, we assume both the field's vertical component and its reconnecting component are one-third of the total field, or 10 milligauss.
We can now determine the coefficients in the induction equation's non-ideal terms at each height in the model disk annulus using Eqs. (25)–(31) in <cit.>. Included are the contributions to the electric current from all the charged gas-phase and grain species tracked in the thermochemistry code; inter-species collision frequencies from <cit.>; and the magnetic field described above.
The resulting non-ideal coefficients yield the Ohmic and ambipolar Elsasser numbers Λ_O/AD=v^2_Az/(Ωη_O/AD) that govern the magnetic field dynamics away from current sheets in the disk context <cit.>. These two dimensionless numbers are plotted versus height in Fig. <ref>. The ambipolar Elsasser number is the smaller of the two and less than unity, hence the limiting factor for MRI, only in a thin layer around height 0.3 au. Above about 0.35 au both Elsasser numbers exceed unity, a necessary condition for MRI to operate. A further requirement is plasma β greater than unity, which holds below 0.66 au. The layer in the model disk atmosphere between 0.35 and 0.66 au thus meets the conditions for the onset of turbulence driven by the MRI.
We have checked that these results depend weakly on the choice of field strength. For example, if the field were twice the minimum capable of driving the typical accretion flow, the two Elsasser numbers would both exceed unity above 0.34 au and the plasma beta exceed unity below 0.58 au, so the layer of the disk atmosphere in between would remain subject to MRI. The stronger field would enable reconnection heating rates several times greater than we discuss below.
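The layer bounds quoted above follow from three inequalities evaluated height by height. A compact sketch of the criterion as used here (inputs per height would come from the disk model; names are ours):

def mri_capable(v_Az, Omega, eta_O, eta_AD, beta):
    # Elsasser numbers Lambda = v_Az^2 / (Omega * eta) for the Ohmic and ambipolar terms
    lam_O = v_Az**2 / (Omega * eta_O)
    lam_AD = v_Az**2 / (Omega * eta_AD)
    # necessary conditions: both Elsasser numbers > 1 and gas pressure dominant (beta > 1)
    return lam_O > 1.0 and lam_AD > 1.0 and beta > 1.0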
§.§ Critical CS thickness and regimes for the onset of fast reconnection.
Next we discuss the dominant effects in reconnection dynamics, which are determined not by the Elsasser numbers but by the Alfvén speed and diffusion coefficients as described in Sec. <ref>. We calculate for the model protoplanetary disk the critical CS thicknesses a_c1 and a_c2 at which reconnection transitions from the fully-coupled down to the intermediate regime and from the intermediate down to the decoupled regime following Fig. <ref>. These two scales vary with height z in the disk atmosphere as shown in Fig. <ref> by the red and the orange line, respectively. They divide the fully-coupled regime in blue from the intermediate regime in yellow and the decoupled regime in green.
A black dashed line near the top of Fig. <ref> shows the thickness of the longest CS available at each height in the disk. This CS has length L∼ H_p and thickness given by the coupled regime critical reconnection scale a_c in the blue row of Fig. <ref>. The black dashed line remains in the blue zone even high in the disk atmosphere above 1 au, where the stellar X-rays make ionized hydrogen the main charge carrier (Fig. <ref>). Thus even here, magnetic reconnection can be triggered in the fully-coupled regime on the longest CSs.
The scale height H_p increases with z since the sound speed rises alongside the temperature (Fig. <ref>). We note that CSs even longer than H_p could form above the β=1 surface, where the total pressure is mostly magnetic. The magnetic pressure dominates above some height not just in the simplified magnetic geometry we consider here, but also in more detailed MHD treatments of protoplanetary disks' magnetically-launched winds <cit.>.
§.§ The role of AD and Hall effect in magnetic reconnection.
The barred and dotted overlays on Fig. <ref> indicate where the AD and Hall effect, respectively, can alter the critical thresholds for fast magnetic reconnection.
AD is relevant when there is a degree of coupling between neutrals and ions, i.e. in the coupled and intermediate regimes (blue and yellow in Fig. <ref>). In the decoupled regime (green), AD does not affect reconnection since neutrals are not involved in the reconnection process. We determine the upper boundary of the barred area in Fig. <ref> using Eq. (<ref>). Reconnection on the longest available current sheet, shown by the dashed black line, is unaffected by AD at the heights plotted. AD does however act on shorter CSs throughout the disk atmosphere. The effects of AD on the trigger conditions for IT were treated by <cit.> and <cit.> neglecting the electron-neutral collisions in Ohm’s law (Sec. <ref>). Extending such modeling to all three fluids – ions, electrons, and neutrals – would not change the maximum reconnection speed but might slightly modify a_c.
For the Hall effect, the upper boundary of the dotted area in Fig. <ref> is the ion inertial length d_i, calculated at each height using the locally most-abundant charge carrier from Fig. <ref>. The Hall effect alters the onset of fast tearing instability as discussed in <cit.>, <cit.>, <cit.>, and Sec. <ref>. The Hall effect is destabilizing, making the CS unstable to fast tearing at slightly larger thicknesses, as quantified in <cit.>.
§.§ Comparing disk diffusion regimes.
Three non-ideal terms occur in the magnetic induction equation <ref>. Of these, only the Ohmic diffusivity is essential to the triggering of fast magnetic reconnection through the tearing instability. The AD and Hall effect change the geometry of fast-reconnecting CSs. Though Ohmic dissipation produces heating even in current sheets not undergoing tearing instability, it is much slower there in transferring energy from the magnetic field to the gas and depends on the local collision frequencies.
In contrast, all three non-ideal terms can be important for the magnetic field's global evolution in protoplanetary disks, affecting for example the existence and behavior of MRI turbulence <cit.> and the Parker magnetic buoyancy instability <cit.>. Which non-ideal term dominates versus height in the disk atmosphere and strength of the magnetic field was explored by <cit.> for a model disk with temperature independent of height, but treating some of the same ionization and recombination reactions we consider here. There it was often true that the Ohmic diffusivity dominated near the midplane, while AD dominated for strong magnetic fields and for the atmosphere's upper layers. Our calculation with the thermodynamics coupled to the chemistry yields a disk atmosphere much hotter than the midplane but nevertheless qualitatively similar diffusion regimes, as shown in the left panel of Fig. <ref>. There is no combination of field strength and height for which the Hall term dominates in this model disk annulus.
§.§ Fractal and recursive reconnection.
As discussed in Sec. <ref>, while the primary trigger occurs in the coupled regime, the secondary CS reconnection will most probably lead to regimes in which ions and neutrals are not fully coupled.
Assuming the CS length to be H_p (the longest CS achievable in the system), we estimated that the primary reconnection onset occurs in the fully-coupled regime. In Fig. <ref> (left) we superimpose bars where the secondary onset occurs in the intermediate regime. Nothing prevents CSs shorter than L=H_p from forming, so the primary reconnection trigger could also occur in the intermediate or decoupled regime. Indeed, the length of the CS – and hence the thickness it must reach to achieve fast reconnection – is set by the local dynamics and by the CS thinning.
§.§ Consequences for disk heating.
In this section we explore whether magnetic reconnection results in plasma heating and particle acceleration capable of affecting the protoplanetary disk's temperature, chemical composition, and dynamics.
We focus on the layer where the magnetic and gas pressures are comparable. This layer is important if MRI is present since MHD modeling of small disk patches indicates the β=1 surface separates a turbulent interior from a magnetized disk corona. Fields generated beneath are buoyant and rise into the corona where they can undergo reconnection <cit.>. The β∼ 1 layer is important too if MRI is absent, since magnetocentrifugal winds are typically launched around this height <cit.>. Reconnection heating near the launching point could change the launching speed and thus the rates at which the winds remove the disk's mass and angular momentum, with consequences for disk lifetimes.
To see whether reconnection heating is significant, we compare its rate with the other heating processes at work in the model protoplanetary disk. The volumetric reconnection heating rate Q≃ B^2/(8π)×(v_A/L) has units erg cm^-3 s^-1. We assume the fast reconnection is triggered in the fully-coupled regime, which is a reasonable assumption for the longer CSs with L∼ H_p (Fig. <ref>).
Q is shown as a function of magnetic field strength in the left panel of Fig. <ref> by color variations along the curve marking where the plasma parameter β=1. The strength of the reconnecting field is taken to be one-third of the total field, just as for our fiducial total field strength of 30 milligauss (Sec. <ref>). The right panel of Fig. <ref> is specific to the fiducial 30 milligauss total field strength and compares Q against the main heating processes in the thermochemical disk model. Above about z=0.5 au, the reconnection heating (solid black curve) exceeds all other heating processes. Reconnection thus has the potential to raise the temperature of the gas in and around CSs.
An important question that cannot be addressed without modeling the magnetic dynamics in detail is when, for how long and where reconnection happens in such protoplanetary disk contexts. Even MHD simulations that do not resolve the CSs may yield information on the distribution of Q over space and time, useful for evaluating the contribution to heating the disk atmosphere. However, even without such modeling we can obtain an upper limit to the time-averaged reconnection heating rate per unit disk area. This upper limit is the rate at which the accretion flow converts the gravitational potential energy of the inward-spiraling disk gas into other forms, such as magnetic energy.
The power released by steady-state accretion per unit disk area is D ∼ (3/4π) ṀΩ^2 <cit.>, which amounts to 400 erg cm^-2 s^-1 using a typical accretion rate onto T Tauri stars of Ṁ = 10^-8M_⊙ yr^-1 and the orbital frequency Ω for our location 2.5 au from the star. By comparison the height-integral of Q from Fig. <ref> right panel is 17 erg cm^-2 s^-1, more than an order of magnitude below the available accretion power.
Thus reconnection can heat the disk atmosphere at rates exceeding all other known heating processes while consuming a power well within that available locally from the accretion flow.
§.§ Reconnection Onset in Global MHD simulations.
To quantify reconnection heating in global models of MRI turbulence and magnetocentrifugal wind launching, it will be necessary to spatially resolve the onset of fast reconnection. We explore whether this is feasible for reconnection taking place in the layers where β>1 and starting with CS thickness a>a_c1, so that the reconnection is in the fully-coupled regime.
From the discussion in Sec. <ref>, the critical thickness to trigger fast reconnection a_c ∼ LS^-1/3(ρ_i / ρ_n)^1/6∼ 0.008 au, where S∼ 10^10 using the Ohmic diffusivity from z=1 au in the thermochemical model disk annulus. In two-dimensional MHD simulations, grids of 1,000 cells are feasible over a domain extending from the midplane up to z=2 au to span the disk layers where reconnection is taking place. The vertical grid spacing Δ z∼ 0.002 au is then small enough to identify CSs capable of beginning fast reconnection. An MHD calculation on such a grid could estimate the places and times where reconnection produces heating at the rates Q discussed above in Sec. <ref>.
§ CONCLUSIONS
We have investigated the onset of fast magnetic reconnection in the partially-ionized plasmas found in the lower atmospheric layers of the sun and in the atmospheres of protoplanetary disks. We combined an analytic picture of the onset of reconnection via the tearing instability with models of the plasma's composition and thermodynamics in the solar and disk atmospheres. The results indicate that reconnection may contribute to heating these environments. The main conclusions are:
* Even when reconnection begins with the ions, electrons, and neutrals all well-coupled by collisions, the tearing instability breaks the initial current sheet into secondary sheets at successively smaller spatial scales, where magnetic dissipation occurs in the intermediate or the completely decoupled regime. As tearing proceeds recursively, the ions fully decouple from the neutrals, raising the effective Alfvén speed and accelerating the conversion of magnetic energy into heat.
* In the solar photosphere and chromosphere, current sheets can have lengths up to the pressure scale height. This provides a wide range of scales over which reconnection begins in the fully-coupled regime, so that neutrals are involved in the reconnection dynamics. These initial current sheets have thicknesses a=10-100 km, the upper end of the range being near the resolution limit of DKIST <cit.>. Since this is also about the photon mean free path at the photosphere <cit.>, much thinner current sheets would be visible only indirectly through their heating and brightening effects.
* The Hall effect and ambipolar diffusion alter the critical scales for fast tearing instability's onset. Corrections due to the Hall effect <cit.> and ambipolar diffusion <cit.> have been determined separately, but not together, though the Hall term likely is important only in the decoupled regime. Ambipolar diffusion can steepen current sheets in the presence of neutral sheets with no guide field <cit.>, but not more generally, so that the results of <cit.> should hold, but we defer analyses of the linear stability of AD steepened sheets to future work.
* In protoplanetary disks, current sheets' length is limited by the density scale height when plasma β>1. In this case there is a wide range of scales over which reconnection is triggered in the fully-coupled regime, so neutrals are also accelerated by the process.
* Where the disk is well-magnetized with β<1, the largest coherent scales available can be governed by disk winds if these are not disrupted by other instabilities. At higher plasma β where MRI can develop, large-scale magnetic structures may also be created via channel flows. Channel flows' breakup by magnetic reconnection can drive time-dependent disk winds <cit.> that could move the disk surface up and down. We have shown that in such large-scale reconnecting current sheets, the neutrals are accelerated directly by the reconnection. The whole plasma can thus be lifted up as part of the reconnection ejecta.
* Our work also has implications for the idea that ambipolar diffusion steepening of protoplanetary disks' midplane current sheet induces rapid accretion, pinching the field into a reversing configuration, removing magnetic flux, and yielding dense rings <cit.>. These processes may be greatly accelerated if the current sheet thins enough that reconnection takes place in the decoupled regime.
* The layer in protoplanetary disk atmospheres where the plasma β parameter reaches unity and magnetic winds are launched is susceptible, if current sheets form, to reconnection at rates sufficient to increase the local sound speed and so the wind launching speed.
* In our model disk atmosphere the power deposited locally by magnetic reconnection is sufficient to raise temperatures above those established by the stellar X-ray and UV heating, even for the minimum field strength B≈ 10 milligauss consistent with magnetically-driven accretion. Stronger fields would provide even more magnetic energy for conversion to heat. Such heating would alter the atmosphere's chemical composition and radiative emissions. A related scenario was proposed by <cit.> with reconnection in a turbulent environment heating the parts of the protosolar disk near the young Sun to produce the glassy, spherical chondrules found in many primitive meteorites.
Further investigation of the magnetic dissipation rates and timescales and the pathways taken by the deposited energy should include modeling that treats the multi-species nature of the plasmas. Multi-fluid treatment of the partially ionized solar plasma has been demonstrated by <cit.>.
Through multifluid, multispecies modeling of the solar atmosphere with hydrogen and helium, <cit.> showed that the two species' differing dynamics drove chemical reactions involving helium and enriched the helium abundance in the solar wind and coronal mass ejections. They also showed the heating rates for each species at the CS location. Here we would like to point out that an important consequence of the fractal reconnection scenario is the eventual decoupling of the ions from the neutrals, populating energetic tails in the electrons and ions, whose effects at large Knudsen numbers <cit.> might be fundamental to the subsequent coronal expansion into the solar wind. Recent observations of ubiquitous lower coronal jetting seem to provide indirect evidence for the role of such processes <cit.>.
In the context of PPDs, a multifluid treatment is relevant at intermediate heights above the disk midplane, where species heavier than hydrogen act as the positive charge carriers, as shown by the chemical model adopted in this paper.
We would like to thank Prof. Kazunari Shibata for fundamental discussions and insights on the physics of the solar atmosphere and magnetic reconnection. F.P. would like to thank Dr. Yasuhiro Hasegawa for insights on wind launching driven by magnetic reconnection.
F.P.'s research was supported by an appointment to the NASA Postdoctoral Program at the Jet Propulsion Laboratory, administered by Oak Ridge Associated Universities under contract with NASA.
K.A.P.S. gratefully acknowledges the UGC Faculty Recharge Program of the Ministry of Human Resource Development (MHRD), Govt. of India and University Grants Commission (UGC), New Delhi, the incentive grant of the Institute of Eminence (IoE) Program, BHU, and the Visiting Associateship Program of IUCAA, Pune.
M.V. was supported by the NASA Parker Solar Probe Observatory Scientist Grant No. NNX15AF34G.
M.E.I. acknowledges support from the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) within the Collaborative Research Center SFB1491.
This work was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA and with the support of NASA's Exoplanets Research Program through grant 17-XRP17_2-0081 to N.J.T.
aasjournal
|
http://arxiv.org/abs/2405.08712v1 | 20240514155631 | Advancing Electron Injection Dynamics and Mitigation Approaches in the Electron-Ion Collider Swap-out Injection Scheme | [
"Derong Xu",
"Ferdinand Willeke",
"Michael M. Blaskiewicz",
"Yun Luo",
"Christoph Montag"
] | physics.acc-ph | [
"physics.acc-ph"
] |
Advancing electron injection dynamics and mitigation approaches in the Electron-Ion Collider's swap-out injection scheme
Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Derong Xu (dxu@bnl.gov), Ferdinand Willeke, Michael M. Blaskiewicz, Yun Luo, Christoph Montag
Brookhaven National Laboratory, Upton, NY, USA
14th May 2024
The Electron-Ion Collider (EIC) will use a swap-out injection scheme for the Electron Storage Ring (ESR)
to overcome limitations in polarization lifetime. However, the pursuit of the highest luminosity with the
required 28 nC electron bunches encounters stability challenges in the Rapid Cycling
Synchrotron (RCS). One method is to inject multiple RCS bunches into the same ESR bucket.
In this paper we perform simulation studies investigating proton emittance growth and
electron emittance blowup in this injection scheme.
Mitigation strategies are explored.
These findings promise enhanced EIC stability and performance, shaping potential future
operational improvements.
§ INTRODUCTION
The EIC, to be constructed at Brookhaven National Laboratory (BNL),
is designed to facilitate collisions between polarized high-energy electron
beams and hadron beams <cit.>.
The highest luminosity of 10^34 cm^-2s^-1
will be achieved by colliding 10 GeV electrons and 275 GeV protons.
The corresponding beam parameters are shown in Table <ref>.
The physics program calls for the simultaneous storage of electron bunches with both
spin helicities. A full energy polarized electron injector is needed, so that the electron bunches are
injected into the Electron Storage Ring (ESR) with high transverse polarization and the desired
spin direction.
The Rapid Cycling Synchrotron (RCS) will serve as the electron accumulator, ramping electrons
from 400 MeV up to 18 GeV. It will be located within the same existing tunnel.
The RCS features a 96-fold lattice periodicity design to avoid spin imperfection and
intrinsic resonances, ensuring the maintenance of electron polarization.
Due to the short polarization lifetime, the ESR injection scheme adopts a full bunch swap,
necessitating that the RCS provide 28 nC per bunch,
as indicated in Table <ref>. However,
electron bunches with 28 nC exhibit instability at the lower energy of
400 MeV. There is ongoing consideration regarding the development
of a dedicated booster to mitigate this issue <cit.>.
Nonetheless, achieving 28 nC per bunch is beyond the state of
the art for such boosters <cit.>.
Another approach involves injecting multiple electron bunches into one and the same ESR bucket,
where synchrotron radiation damping will eventually merge the injected and stored bunches.
According to Liouville's theorem, the injected bunch cannot occupy the same phase-space volume as
the stored beam without impacting the latter. Therefore, a separation between the injected and
stored beams is necessary. A previous study has demonstrated that proton emittance growth occurs
when injection errors are included <cit.>. Additionally, the electron emittance is likely to blow up
significantly due to the large tune spread resulting from the electromagnetic kick from the
opposing proton beam.
In this paper, we will study the feasibility of this injection scheme in the presence of beam-beam
interaction. The beam parameters used in our simulations are
presented in Table <ref>.
§ BETATRON VS SYNCHROTRON INJECTION
According to the method used at the injection point to separate the injected and
stored bunches, we adopt the terms “betatron injection”
and “synchrotron injection” as described in <cit.>.
In betatron injection, the injected bunch is positioned at the same location in the
longitudinal phase space and undergoes betatron oscillation due to its initial
transverse offset from the design orbit. Conversely, in synchrotron injection,
the injected bunch differs in energy from the stored beam, with the two beams
separated by a dispersion at the injection point.
This leads to the centroid of the injected bunch performing synchrotron oscillation.
Figure <ref> displays the strong-strong simulation results for betatron and synchrotron injection.
The old electron bunch is tracked for 20,000 turns before being kicked out. Subsequently,
four electron bunches are injected into the same ESR bucket over a duration of 80 turns.
Thanks to synchrotron radiation damping, these four newly injected bunches will merge into a single bunch.
At 10 GeV, each electron bunch should be replaced every 20 minutes. The lower two plots in
Figure <ref> illustrate the evolution of the proton bunch’s emittance.
When compared to the blue reference curve without electron replacement,
the increase in proton emittance per electron
bunch replacement is less than 1% in the horizontal plane and 5% in the vertical plane.
Consequently, the increase in proton emittance due to electron replacement remains below 20%
per hour, which is acceptable in comparison to the intra-beam scattering (IBS) lifetime of 2 hours.
However, as indicated by the upper two plots in Figure <ref>, the electron emittance
experiences significant change. Specifically, in betatron injection with an initial transverse
offset of 7σ_x, the horizontal emittance increases up to 25 times,
while in the synchrotron injection scheme with an initial momentum offset of 7σ_δ,
the vertical emittance rises up to 6 times. The substantial increase in horizontal emittance
conflicts with the small dynamic aperture and leads to particle loss.
Additionally, the enlarged vertical emittance could enhance intrinsic resonances,
potentially leading to depolarization. Therefore, it is necessary to develop strategies to
mitigate this electron emittance blowup.
§ TWO KICKER SCHEME IN BETATRON INJECTION
The horizontal emittance blowup in betatron injection can be mitigated by reducing the initial offset.
Figure <ref> illustrates the maximum horizontal emittance observed during the strong-strong
simulation. During the simulation, the initial offset, expressed in terms of σ_x,
varies while all other conditions remain consistent with those depicted in Fig. <ref>.
To minimize the initial offset relative to the design orbit while maintaining sufficient separation
between the injected and stored bunches, the introduction of a second kicker is proposed.
This concept is illustrated in Fig. <ref>. As the electron bunch loses polarization,
the old electron bunch is kicked out. Consequently, when the first bunch is injected, the ESR bucket
is empty, allowing the bunch to be injected directly on orbit. For subsequent injections,
the stored bunch is shifted to a position with a negative offset at the injection point,
while the newly injected
bunch is positioned with a positive offset. This arrangement ensures that both bunches maintain
a small offset relative to the design orbit and achieve adequate separation to accommodate
the septum magnet.
Figure <ref> displays the simulation results for the two-kicker scheme,
specifically focusing on the electron's horizontal emittance.
The duration between two injections is 20,000 turns, which is sufficiently long to merge
the stored and injected beams.
In this setup,
both the stored and injected bunches are directed to positions of ±3.5σ_x at the
interaction point (IP). As a result, the maximum horizontal emittance is reduced to seven times
the design value. Further examination of the phase space output reveals that only 0.05%
of macro-particles exceed the 10σ_x dynamic aperture.
Further reduction of the maximum horizontal emittance is indeed achievable by optimizing the
phase advance between the IP and the injection point, as well as by utilizing a pulse quadrupole.
§ VERTICAL EMITTANCE BLOWUP IN SYNCHROTRON INJECTION
The electron emittance blowup in synchrotron injection can be attributed to synchro-betatron resonance.
This phenomenon occurs when an electron bunch is injected with an off-momentum deviation.
The momentum offset
translates into a longitudinal offset due to longitudinal oscillation. As the bunch goes through
the crab cavity, the significant longitudinal offset induces a nonlinear crab kick,
resulting in a horizontal offset at the IP. This horizontal displacement is
modulated by the longitudinal motion, which in turn excites higher-order
synchro-betatron resonances via beam-beam interaction
<cit.>.
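A schematic sketch of this nonlinearity argument follows: it compares the residual deviation of a sinusoidal crab kick from a linear kick at 200 MHz and 400 MHz, and for a two-harmonic combination; the second-harmonic amplitude of -1/8, which cancels the cubic term, is a textbook choice rather than the actual ESR design value.

import numpy as np

c = 2.998e8                        # speed of light, m/s
z = np.linspace(-0.02, 0.02, 201)  # m, positions along the bunch

def crab_kick(z, f, a2=0.0):
    """Normalized kick sin(kz) + a2*sin(2kz) and its small-z linear part."""
    k = 2.0 * np.pi * f / c
    return np.sin(k * z) + a2 * np.sin(2.0 * k * z), (1.0 + 2.0 * a2) * k * z

for f in (200e6, 400e6):
    y, lin = crab_kick(z, f)
    print(f"{f/1e6:.0f} MHz single harmonic: residual {np.abs(y - lin).max():.1e}")
y, lin = crab_kick(z, 200e6, a2=-1.0 / 8.0)
print(f"200+400 MHz combination: residual {np.abs(y - lin).max():.1e}")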
Figure <ref> illustrates the vertical emittance evolution through a weak-strong simulation.
The vertical emittance blowup is significantly mitigated by selecting a lower frequency for the crab
cavity. Utilizing a combination of 200 MHz and 400 MHz frequencies for the crab
cavity results in a more linear crab cavity kick, which further reduces the emittance blowup. This
clearly demonstrates that the emittance blowup is caused by synchro-betatron resonance.
Reducing the longitudinal action can effectively decrease the strength of synchro-betatron resonance.
Figure <ref> shows a comparison of different electron bunch lengths through the weak-strong
simulation. The term “bunch length” here refers to the equilibrium length in the absence of beam-beam
interaction, which is proportional to the bucket width. When the electron bunch length is reduced
to 0.7 cm, which is the current design value, the vertical emittance blowup is eliminated.
§ CONCLUSION
One possible method to accumulate a high electron charge is by injecting multiple electron
bunches into the same ESR bucket.
This paper investigates betatron and synchrotron injection schemes using both
weak-strong and strong-strong simulations. A two-kicker scheme is proposed to mitigate
the horizontal emittance blowup in the betatron injection scheme.
The vertical emittance blowup in the synchrotron injection scheme can be
alleviated by reducing the electron bunch length. Given that a bunch length of
0.7 cm is included in the latest baseline design of the ESR, synchrotron injection
emerges as a viable method for merging multiple electron bunches into a single bunch
within the same bucket.
|
http://arxiv.org/abs/2405.08674v1 | 20240514145557 | Expensive Multi-Objective Bayesian Optimization Based on Diffusion Models | [
"Bingdong Li",
"Zixiang Di",
"Yongfan Lu",
"Hong Qian",
"Feng Wang",
"Peng Yang",
"Ke Tang",
"Aimin Zhou"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Expensive Multi-Objective Bayesian Optimization Based on Diffusion Models

Bingdong Li (East China Normal University), Zixiang Di (East China Normal University), Yongfan Lu (East China Normal University), Hong Qian (East China Normal University), Feng Wang (Wuhan University), Peng Yang (Southern University of Science and Technology), Ke Tang (Southern University of Science and Technology), Aimin Zhou (East China Normal University)

Correspondence: Feng Wang (fengwang@whu.edu.cn), Aimin Zhou (amzhou@cs.ecnu.edu.cn)
Multi-objective Bayesian optimization (MOBO)
has shown promising performance on various
expensive multi-objective optimization problems (EMOPs).
However,
effectively modeling complex distributions of
the Pareto optimal solutions is difficult
with limited function evaluations.
Existing Pareto set learning algorithms may exhibit considerable instability in such expensive scenarios,
leading to significant deviations between the obtained solution set and the Pareto set (PS).
In this paper, we propose a novel
Composite Diffusion Model based Pareto Set Learning
algorithm, namely CDM-PSL,
for expensive MOBO.
CDM-PSL includes both
unconditional and conditional diffusion model
for generating high-quality samples.
Besides,
we introduce an information entropy based weighting method to balance different objectives of EMOPs.
This method is integrated with the guiding strategy, ensuring that all the objectives are appropriately balanced and given due consideration during the optimization process.
Extensive experimental results on both synthetic benchmarks and real-world problems demonstrate that our proposed algorithm attains superior performance compared with various state-of-the-art MOBO algorithms.
§ INTRODUCTION
Expensive multi-objective optimization problems are commonly seen in various fields, such as neural architecture search <cit.>, antenna structure design <cit.>,
and clinical drug trials <cit.>.
Handling EMOPs involves optimizing multiple (often conflicting) objectives simultaneously with a limited number of function evaluations due to time and financial constraints.
To meet these challenges,
multi-objective Bayesian optimization (MOBO) <cit.>, an extension of single-objective Bayesian Optimization (BO) <cit.> for expensive multi-objective optimization problems, has emerged as a promising paradigm. BO itself is recognized as an exceedingly effective strategy for global optimization, particularly noted for its success in addressing black-box optimization issues <cit.>.
The core principle of BO involves creating probabilistic surrogate models that closely represent the black-box functions.
These models are utilized in conjunction with acquisition functions to seek out globally optimal solutions.
MOBO represents a fusion of Bayesian optimization with multi-objective optimization.
A widely adopted MOBO approach is the random scalarization technique, which effectively translates a multi-objective optimization problem into several single-objective optimization problems.
Another noteworthy strategy in MOBO involves the use of sophisticated acquisition functions, such as the expected hypervolume improvement (EHVI) <cit.> and predictive entropy search (PES) <cit.>.
Among them, Pareto set learning (PSL) based methods
(e.g.
<cit.>)
which aims to modeling the Pareto set via
machine learning techniques,
have shown promising performance.
However, effectively capturing and modeling complex distributions of limited samples is difficult when faced with expensive multi-objective Bayesian optimization problems (EMOBOPs).
Existing PSL algorithms may exhibit considerable instability in expensive scenarios.
This instability can lead to significant deviations between the obtained solution set and the true Pareto set (PS).
In other words, the quality of the resulting solution set is highly influenced by the performance of the PSL model,
which is largely limited on EMOBOPs.
Diffusion model (DM), inspired by the natural diffusion of gases, is a kind of popular deep generative models.
The characteristics of DMs, including their distribution coverage, stationary training objective, and effortless scalability <cit.>, have empowered them to not only outperform Generative Adversarial Networks (GANs) in image synthesis tasks <cit.> but also achieve great success in diverse fields such as computer vision <cit.>, natural language processing <cit.>, and waveform signal processing <cit.>.
These features of DM show promise for its application in Pareto set learning in EMOBOPs.
In this paper, we propose a novel
Composite Diffusion Model based Pareto Set Learning
algorithm, namely CDM-PSL
for expensive
MOBO.
We introduce the Diffusion Model into Pareto Set Learning, where DM operates by simulating the transition of data from an ordered state to disordered noise. This process progressively learns and reveals the inherent distribution within the data. This methodology is particularly effective in scenarios with limited sample sizes, as it can efficiently extract substantial information from each individual sample, thereby amplifying the learning impact <cit.>.
This enables the effective modeling of complex distributions of high-quality samples.
Building upon this foundation, to generate samples of superior quality, we designed a guided sampling process. This approach ultimately led to the realization of
conditional sample generation.
The major contributions of this paper are summarized as follows:
1) We introduce a
composite diffusion model based Pareto set learning method for offspring generation
for expensive MOBO,
which includes both
unconditional and conditional
sample generation.
2) We devise a guided sampling process to improve the quality of solutions generated by the diffusion model, resulting in a conditional diffusion model;
3) We introduce an information entropy based weighting method to balance the importance of different objectives of multi-objective problems. This method is integrated with the guiding strategy, ensuring that
all the objectives are appropriately balanced and given due consideration
during the optimization process;
4) We have conducted extensive experiments on both synthetic benchmarks and real-world problems, clearly demonstrating that CDM-PSL obtains superior performance compared with various state-of-the-art MOBO algorithms.
§ PRELIMINARIES
§.§ Expensive Multi-objective Optimization
A multi-objective optimization problem (MOP) can be universally expressed in mathematical terms as follows:
minimize f(x)=(f_1(x), f_2(x),…, f_M(x))^T
subject to x∈Ω
where x = (x_1,x_2,…,x_d) represents the decision vector,
f (·): Ω→Λ denotes a black-box objective function, encompassing M (M ≥ 2) objectives,
Ω symbolizes the non-empty decision space,
and Λ is the objective space. An MOP is considered expensive when the evaluation of f(x) involves either time-intensive computations or high-cost experimental procedures. In such contexts, the primary aim in optimizing an MOP is to approximate the Pareto Front (PF) effectively within a limited evaluation budget.
Considering two solutions x and y ∈ Ω,
x is said to dominate y (expressed as x≺y) if and only if the following conditions are met:
1) ∀ i ∈{1,2,...,M},
f_i(x) ≤ f_i(y);
2) ∃ j ∈{1,2,...,M},
f_j(x) < f_j(y).
This definition encapsulates the essential criterion for determining
the quality of solutions in terms of meeting multiple objectives simultaneously <cit.>.
A solution x^∗∈Ω is Pareto optimal if there exists no other solution x∈Ω that can dominate it. This implies that within the feasible region Ω, x^∗ is considered to be Pareto optimal if no alternative solution offers better outcomes across all objectives without being worse in at least one of them.
The Pareto Set (PS) refers to the collection of all the Pareto optimal solutions:
PS={x∈Ω | ∀y∈Ω,y⊀x}.
The corresponding set of objective vectors of the PS is the Pareto Front (PF).
§.§ Bayesian Optimization
Bayesian Optimization (BO) is a powerful method for the efficient global optimization of expensive black-box functions <cit.>. By leveraging probabilistic surrogate models to approximate the black-box functions, in conjunction with acquisition functions, it seeks to locate global optimal solutions with as few evaluations of the actual objective function as possible. BO has been widely used in a variety of fields, including hyperparameter tuning <cit.>, A/B testing <cit.>, combinatorial optimization <cit.>, among others.
§.§ Diffusion Models
Diffusion models are a specialized form of probabilistic generative models that operate by learning to reverse a forward process that gradually increases noise in the training data <cit.>.
They have demonstrated remarkable performance on a wide variety of tasks, such as image generation <cit.>, voice synthesis <cit.>, video generation <cit.> and inpainting <cit.>.
Training a diffusion model involves two processes: the forward diffusion process and the backward denoising process.
§.§.§ Forward Process
In the forward phase, Gaussian noise is added to the input data step by step until a pure Gaussian noise is produced, which is a Markovian process. Given an initial data distribution 𝐱_0∼ q(𝐱), the noised x_1,x_2…,x_T can be obtained from the following equation:
q(x_t|x_t-1)=𝒩(x_t;√(1-β_t)· x_t-1,β_t·𝐈),∀ t∈{1,…,T},
where T is the number of diffusion steps and the step sizes are controlled by a variance schedule {β_t∈(0,1)}_t=1^t. Moreover, the properties of this recursive formula make it possible to obtain q(x_t) directly from x_0 by the following equation:
q(x_t|x_0)=𝒩(x_t;√(β̂_t)· x_0,(1-β̂_t)·𝐈), ∀ t∈{1,…,T},
where β̂_t=∏_i=1^tα_i and α_t=1-β_t. Thus, x_t can be sampled from q(x_t|x_0) as follows:
x_t=√(β̂_t)· x_0+√((1-β̂_t))· z_t,
where z_t∼𝒩(0,𝐈).
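A minimal PyTorch sketch of this one-shot forward sampling is shown below; the step count and noise-level endpoints follow the settings reported elsewhere in this paper, while the linear spacing of the schedule is our assumption.

import torch

T = 25                                           # diffusion steps
beta = torch.linspace(1e-5, 5e-2, T)             # noise schedule (assumed linear)
alpha_bar = torch.cumprod(1.0 - beta, dim=0)     # beta_hat_t for each step

def q_sample(x0, t, noise=None):
    """Draw x_t ~ q(x_t | x_0) in one shot; t is a 0-indexed step tensor."""
    noise = torch.randn_like(x0) if noise is None else noise
    ab = alpha_bar[t].unsqueeze(-1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise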
§.§.§ Reverse Process
The reverse process recreates the true sample from a Gaussian noise input x_T∼𝒩(0,𝐈) by the following equation:
q(x_t-1|x_t) = 𝒩(x_t-1;μ(x_t,t),Σ(x_t,t))
However, q(x_t-1|x_t) cannot easily be evaluated because the reverse process lacks a complete dataset, and therefore we need to train a neural network p_θ(x_t-1|x_t)=𝒩(x_t-1;μ_θ(x_t,t),Σ_θ(x_t,t)) to approximate these conditional probabilities. Specifically, the model takes the noisy data x_t and the corresponding embedding at time step t as input, and is trained to predict the mean μ_θ(x_t,t) and the covariance Σ_θ(x_t,t). Based on this, Ho <cit.> proposed to fix the covariance Σ_θ(x_t,t) to a constant value and to reformulate the mean μ_θ(x_t,t) as a function of the noise, as follows:
μ_θ=1/√(α_t)·(x_t-(1-α_t)/√(1-β̂_t)· z_θ(x_t,t)).
This enables the model to
predict the noise of the data rather than directly predicting the mean and the covariance.
§.§ Conditional Generative Models
In the field of generative models, Conditional Generative Models have garnered significant interest <cit.>. A prominent approach in Generative Adversarial Networks (GANs) involves incorporating a classification component within the Discriminator network. The strategy aims to facilitate the learning of classifier conditions, a method explored in various studies <cit.>. The training of these classifiers occurs concurrently with the GAN training process.
Similarly, Deep Generative Models (DGMs) have been adapted to include conditional capabilities. This is achieved by integrating a classification head into diffusion models, as demonstrated in recent work <cit.>. There are instances where DGMs are conditioned on images, enabling style transfer across different instances, a concept explored by Preechakul <cit.>. The research by Dhariwal and Nichol <cit.> presents a novel method allowing for the control of samplers/generators in DGMs without necessitating retraining. This idea is further expanded by Liu et al. <cit.>, who generalize this concept to accommodate various modalities. In the realm of guidance without classifiers, Ho and Salimans <cit.> have shown that it's possible to achieve guidance properties in generative models without relying on a classifier.
§.§ Diffusion Model Based Optimization Algorithms
Krishnamoorthy proposed a black-box optimization algorithm named DDOM <cit.>, based on the diffusion model. This algorithm converts a single-objective optimization problem into a continuous diffusion process, leveraging the inverse process of the diffusion model to efficiently address complex problems. Subsequently, Yan and Jin's EmoDM <cit.> extended the application of the diffusion model to multi-objective optimization. By learning the noise distribution in the previous evolutionary search task, a set of non-dominated solutions can be generated for the new multi-objective optimization problem without further evolutionary search. Fang's DMO <cit.> further demonstrates the diffusion model's efficacy by applying it to create a gasoline hybrid scheduling scheme, highlighting its capability in solving practical multi-objective optimization challenges. Building upon these contributions, this paper's CDM-PSL advances the field by optimizing the balance between solution convergence and diversity through the integration of conditional and unconditional diffusion models. Moreover, CDM-PSL incorporates gradient information, weighted by information entropy, into the process of generating solutions, significantly enhancing convergence performance during the early-stage iterations. The combination of these strategies makes CDM-PSL have competitive performance in solving expensive multi-objective Bayesian optimization problem.
§ OUR METHOD
§.§ Overview
We present a composite diffusion model based Pareto set learning method for EMOBO, denoted as CDM-PSL (Algorithm <ref> and Figure <ref>). CDM-PSL contains three components to generate offspring: data extraction, diffusion model training and conditional generation. The process begins by initializing a set of samples X_0 ⊂𝒳, where 𝒳⊂ℛ^d, drawn using Latin Hypercube Sampling (LHS) <cit.>. Acquisition and batch selection are then applied in an iterative manner.
§.§ Data Extraction
To prepare training data for Pareto set learning, we propose a data extraction strategy summarized in Algorithm <ref>. Central to this strategy is the application of shift-based density estimation (SDE) <cit.> for calculating fitness values, which is mathematically represented as:
Fitness(p) = min_q∈Y_k \p√(∑_i=1^M(
max{ 0,f_i(q)-f_i(p)
})^2)
In this formula, p and q are solutions within the set Y_k, and f_i(p) indicates the i-th objective value of solution p. The SDE methodology assesses the quality of samples based on their convergence and diversity characteristics. From X_k, a total of T candidate solutions (for instance, |X_k|/3) that demonstrate superior SDE values are identified as Pareto optimal samples. This selection process is essential for ensuring the quality and relevance of the data used in Pareto set learning.
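For concreteness, a NumPy sketch of this fitness computation follows; only the equation above is implemented, and the vectorization details are ours.

import numpy as np

def sde_fitness(F):
    """F: (N, M) objective matrix; returns the SDE fitness of each solution."""
    N = F.shape[0]
    fit = np.empty(N)
    for i in range(N):
        shifted = np.maximum(0.0, F - F[i])   # max{0, f_i(q) - f_i(p)} per objective
        dist = np.sqrt((shifted ** 2).sum(axis=1))
        dist[i] = np.inf                      # exclude q = p itself
        fit[i] = dist.min()
    return fit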
§.§ Diffusion Model Training
The DM training process comprises two major steps: the diffusion process and noise prediction.
§.§.§ Diffusion Process
Given set of samples X_k^* and a specified step t, the diffusion process involves gradually introducing Gaussian noise ϵ∼𝒩(0, I) to X_k^* over t steps:
X_k,t^*=√(1-β_t) X_k,t-1^* +√(β_t)ϵ
In this equation, X_k,t^* denotes the data at step t, and β_t ∈ [1e-5, 5e-2] represents the noise level at step t. The DM learning process for PSL operates as a Markov chain. This stepwise approach simplifies the learning task compared to direct Pareto set learning and effectively captures the distribution characteristics of optimal samples.
§.§.§ Noise Prediction
The noise prediction phase involves reconstructing the samples X_k,t^*, which have undergone the diffusion process, back to their original state, X_k^*. This reconstruction is achieved through a model ℳ that predicts the noise added at each step, thereby reversing the diffusion process. This process follows the equation <ref>:
X̃_k,t-1^*=1/√(1-β_t)(X_k,t^* -β_t/√(1-∑_s=1^t β_s)ϵ_θ(X_k,t^*, t)).
In this equation, X̃_k,t-1^* represents the data after reconstruction, and θ denotes the parameters of the model. The term ϵ_θ(X_k,t^*, t) is the predicted noise by model ℳ at step t.
The loss function ℒ for model ℳ is defined as:
ℒ=1/ℋ∑_i=1^ℋ(ϵ-ϵ_θ(x_k,i,t^*, t))^2.
Here ℋ is the total number of optimal samples, each x_k,i,t^* is an instance from X_k,t^*, and ϵ_θ(x_k,i,t^*, t) is the noise predicted by the model. The training of ℳ, using Equation <ref>, aims to minimize the loss ℒ, thereby enhancing the accuracy of noise prediction.
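A sketch of one training step is given below, written with the loss in its mean-squared form; the model is assumed to be any network accepting (x_t, t), and the schedule tensors are passed in explicitly.

import torch

def train_step(model, optimizer, x0, beta, alpha_bar):
    """One optimization step on the noise-prediction loss (MSE form)."""
    T = beta.shape[0]
    t = torch.randint(0, T, (x0.shape[0],))           # random step per sample
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise  # forward jump to step t
    loss = torch.mean((noise - model(x_t, t)) ** 2)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()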
The entire DM training process, comprising both the diffusion process and noise prediction, plays a crucial role in effective Pareto Set Learning for EMOBOPs. This method presents a novel approach to learning high-quality solutions, striking a balancing between exploration and exploitation in the search space.
§.§ Conditional Generation
In order to further enhancing the quality of generated samples, we propose a Guided Denoise Process. This process leverages gradients to guide the denoising procedure of DM, thereby achieving the realization of a Conditional Diffusion Model. The subsequent elucidation will provide a detailed explanation through two components: the guided denoise process and the weighted gradient.
§.§.§ Guided Denoise Process
For a given step t and sample x_t, the denoising process with guidance can be implemented through equation <ref>:
X_t-1=1/√(α_t)(X_t-(1-α_t)/√(1-α̅_t)ϵ_θ(X_t,t))+σ_t^2ĝ+σ_tz
where α_t represents 1-β_t, ϵ_θ(X_t, t) signifies the predicted noise by the trained model, σ_t is the standard deviation of the t-th step, and ĝ denotes the weighted gradients used to guide the denoising process.
To procure the gradients essential for guiding the model in sample generation, we establish separate Gaussian Process (GP) models for each objective, as proposed by Balandat <cit.>. These models are utilized to compute the objective values for all generated samples, thus obtaining gradients g for each objective, facilitating the realization of conditional generation.
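A sketch of one guided reverse step follows; the variance choice σ_t = √(β_t) and the convention that ĝ is a descent direction (i.e., the negative of the surrogate gradient for minimization) are our assumptions.

import torch

def p_sample_guided(model, x_t, t, g_hat, beta, alpha_bar):
    """One guided reverse step; beta and alpha_bar are 1-D schedule tensors."""
    beta_t, ab_t = beta[t], alpha_bar[t]
    alpha_t = 1.0 - beta_t
    t_emb = torch.full((x_t.shape[0],), t, dtype=torch.long)
    eps = model(x_t, t_emb)                              # predicted noise
    mean = (x_t - (1.0 - alpha_t) / (1.0 - ab_t).sqrt() * eps) / alpha_t.sqrt()
    sigma_t = beta_t.sqrt()                              # a common variance choice
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigma_t**2 * g_hat + sigma_t * z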
§.§.§ Weighted Gradients
In addressing EMOPs, employing weighted gradients is essential. This requires the determination of appropriate weights for each objective. Therefore, we propose a weighting methodology grounded in information entropy, facilitating the derivation of these weighted gradients ĝ.
In order to obtain weights based on information entropy, it is necessary to first normalize the objective values. Let y_ij represent the j-th objective value of the i-th individual. The normalized objective value ỹ_ij is computed using Equation <ref> as follows:
ỹ_ij=y_ij-min(y_j)/max(y_j)-min(y_j)
Subsequent to this, for each objective j(j=1,2,…,M), the probability matrix P_ij is calculated using Equation <ref>, where k=1,2,…,N and N represents the number of individuals in the population.
P_ij=ỹ_ij/∑_k=1^Nỹ_kj
Subsequently, for each objective j(j=1,2,…,M), the information entropy E_j is computed. The method of calculation is as outlined in Equation <ref>:
E_j=-1/ln(N)∑_i=1^N(P_ij×ln(P_ij+η))
Herein, i=1,2,…,N and j=1,2,…,M, where N signifies the total number of samples, and M represents the quantity of objectives. To avoid the occurrence of ln(0), a small positive number, η, is introduced. The computation of information entropy relies on Shannon entropy <cit.>, expressed as H(X)=-∑(P(x)×log(P(x))). Additionally, the coefficient 1/ln(N) is employed to guarantee that the values of information entropy are confined within the range of 0 to 1.
Finally, for each objective j=1,2,…,M, the weight W_j is computed using Equation <ref>, where k=1,2,…,M, and M denotes the total number of objectives.
W_j=1-E_j/∑_k=1^M(1-E_k)
By employing the calculated weights W, we can subsequently derive the entropy weighted gradients (EWG) ĝ. The entropy weighting method, serving as an adaptive weight allocation approach, assigns weights based on the information entropy of the objectives. This method reduces subjectivity in weighting gradients for different objective values and ensures that the algorithm places more emphasis on objectives with rich information content, thereby achieving better performance on multi-objective optimization problems. Furthermore, weight allocation based on information entropy gives the entropy weighting method broad applicability, making it suitable for a wide range of fields and various types of multi-objective optimization problems. In the context of guiding the denoising process, the weighted gradients also make the guidance more effective for multi-objective problems, thereby improving the conditional generation.
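The weight computation of the equations above reduces to a few array operations, sketched below; the guard against zero-range objectives is ours.

import numpy as np

def entropy_weights(Y, eta=1e-12):
    """Y: (N, M) objective matrix; returns M weights that sum to one."""
    N = Y.shape[0]
    span = Y.max(axis=0) - Y.min(axis=0)
    Yn = (Y - Y.min(axis=0)) / np.where(span == 0.0, 1.0, span)  # normalize
    P = Yn / (Yn.sum(axis=0) + eta)                              # probability matrix
    E = -(P * np.log(P + eta)).sum(axis=0) / np.log(N)           # per-objective entropy
    return (1.0 - E) / (1.0 - E).sum()                           # weights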
§.§ Selection Strategy
§.§.§ Batch Selection
After obtaining the solutions sampled by CDM-PSL, we employ the batch selection strategy of PSL-MOBO <cit.> to select a small subset X_k^B={x_b|b=1,…,B}. Specifically, this strategy uses the Hypervolume (HV) indicator <cit.> as the selection criterion. The HV indicator is defined as follows:
HV(S) = Λ({q ∈ℝ^M| ∃ p ∈ S:p ≤ q and q ≤ r })
where S signifies a solution set, r is the reference point, and Λ(·) denotes the Lebesgue measure.
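For reference, the HV values used in this selection can be computed with an off-the-shelf implementation such as pymoo, as sketched below for minimization problems.

import numpy as np
from pymoo.indicators.hv import HV

def hypervolume(F, ref_point):
    """F: (N, M) array of objective vectors (minimization); ref_point is r."""
    return HV(ref_point=np.asarray(ref_point, dtype=float))(F)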
§.§.§ Operator Selection
We devise a method to switch operators at the end of each iteration round, based on the growth rate of the HV indicator.
In Algorithm 1, F_CDM is a flag indicating whether CDM is currently being used to generate offspring.
After an iteration window, such as three evaluation rounds, if the HV indicator's growth rate falls below a predefined threshold (5% by default), we switch the operator for offspring generation (between CDM-PSL and another optimizer such as the Genetic Algorithm (GA)). This strategy is designed to prevent the algorithm from becoming trapped in local optima specific to the current operator.
§ EXPERIMENTS STUDY
§.§ Experimental Settings
Instances and Baselines
To comprehensively validate the performance of CDM-PSL, experiments were conducted on 9 benchmark problems
(2- and 3-objective
ZDT1-3 <cit.> and DTLZ2-7 <cit.>)
and 7 real-world problems <cit.>.
Moreover,
we have compared CDM-PSL with 9 state-of-the-art and classical algorithms, including NSGA-II <cit.>, MOEA/D-EGO <cit.>, TSEMO <cit.>, USeMO-EI <cit.>, DGEMO <cit.>, PSL-MOBO <cit.>, qNparEGO <cit.>, qEHVI <cit.> and qNEHVI <cit.>.
Parameter Settings
For fair comparison, the population size N was initialized to 100 for all the compared algorithms.
Bayesian optimization algorithms were executed for 20 batches, each with a batch size of 5, across all algorithms.
Each method was randomly run 10 times.
For CDM-PSL, the hyperparameter t was set to 25,
the batch size m was 1024,
and the learning rate γ was 0.001, with training spanning 4000 epochs.
The configurations for other methods were aligned with those in their original publications (See the appendix).
Evaluation Metrics
The hypervolume (HV) indicator, as defined in Equation <ref>, was employed to assess the quality of the solutions obtained.
Higher HV values are indicative of better performance.
§.§ Experimental Results
We conducted a series of experiments on a variety of widely recognized synthetic multi-objective benchmarks, including ZDT1-3 <cit.> and DTLZ2-7 <cit.>. The problems selected for the experiments featured 2 and 3 objectives, with the number of decision variables set at 20. We particularly highlight the results for a specific instance where d=20, and further details are available in the supplementary materials. Additionally, the comprehensive performance
of CDM-PSL was thoroughly evaluated through the application of seven real-world problems.
Figure <ref> shows a comparison of the hypervolume (HV) indicator relative to function evaluations (FE).
CDM-PSL
demonstrates outstanding performance across most synthetic benchmarks, excelling in both convergence speed and final values.
Additionally, CDM-PSL exhibits ideal performance in real-world problems (RE).
These findings decisively affirm the effectiveness and superiority of the CDM-PSL approach.
§.§ Ablation Study
To verify the effectiveness of each component in CDM-PSL, ablation study results are presented in Figure <ref>.
Entropy Weight
CDM-PSL w/o Weight represents the CDM-PSL variant using mean weighted gradients to guide
the sampling process instead of entropy weighted gradients (EWG).
CDM-PSL demonstrates superior convergence performance compared to CDM-PSL w/o Weight
on all six tested problems and attains better final values on ZDT3, DTLZ3 and DTLZ6.
This indicates that
EWG
can gauge the significance of objectives
more accurately,
thereby offering more effective guidance for the sampling process.
Conditional Generation
CDM-PSL w/o Condition refers to the variant from which the conditional generation component has been omitted. In a similar vein, CDM-PSL consistently and significantly outperforms CDM-PSL w/o Condition across all the tested problems. This clearly validates the critical role and effectiveness of the conditional generation component in the CDM-PSL.
Switching Strategy
CDM-PSL w/o Switch is a variation of CDM-PSL that lacks the switching strategy. The results of the ablation experiments on ZDT3 and DTLZ3 demonstrate that the inclusion of this strategy prevents the PSL model from falling into local optima, achieved through alternating between various optimization operators. The incorporation of the switching strategy in CDM-PSL results in enhanced convergence performance.
Diffusion Model
CDM-PSL w/o DM refers to the variant that employs the Genetic Algorithm (GA) instead of the DM-based PSL model as its optimizer, utilizing Simulated Binary Crossover (SBX) to generate new solutions. It is evident that this version's performance is markedly inferior compared to that of the standard CDM-PSL. This disparity underscores the effectiveness of DM based Pareto set learning.
§.§ Validity of CDM-PSL
Figure <ref> displays the Pareto fronts approximated by CDM-PSL and by MOBO without CDM-PSL, based on the posterior mean. Clearly, CDM-PSL more effectively captures the essential features of the true PF, surpassing MOBO without CDM-PSL, in both synthetic benchmarks and real-world problems. For instance, MOBO without CDM-PSL struggles to approach the true PF within a limited number of evaluations, yet CDM-PSL can effectively capture nearly all characteristics of ZDT1. On the three-objective DTLZ6 problem, CDM-PSL approximates the true PF faster within a limited number of evaluations. Additionally, our methodology demonstrates commendable exploitation capabilities on complex problems, such as the rocket injector design (RE7) <cit.>, which is characterized by a complex PF. This complexity arises from Pareto optimal solutions being distributed across multiple regions.
§.§ Parameter Sensitivity Study
The main parameter of CDM-PSL, the step count t, is studied in this subsection. Figure <ref> depicts the influence of varying t values on CDM-PSL's performance. The default value of t is set at 25 in CDM-PSL. A smaller t yields suboptimal experimental outcomes; although near-excellent final values can be achieved on certain problems, there is a noticeable decline in convergence performance, attributed to inadequate learning of the distribution of superior solutions. On the other hand, a larger t, such as 50, results in less stable convergence performance compared to t=25, primarily due to an increase in learning error. Furthermore, an increased step count t also leads to longer training durations. Consequently, balancing convergence performance and time efficiency, t=25 is established as the default step size for CDM-PSL.
§ CONCLUSION AND FUTURE WORK
In this paper, we introduce a composite diffusion model based Pareto set learning method, termed CDM-PSL, for addressing EMOBOPs.
CDM-PSL uses both unconditional and conditional diffusion models for generating high-quality samples.
Besides, the quality of the solutions generated by the Pareto Set learning model is significantly enhanced by employing entropy weighted gradients to guide the sampling process. Extensive experimental evaluations on 9 benchmark problems and 7 real-world problems verify the efficiency of CDM-PSL.
The relatively high computational overhead makes CDM-PSL challenging to apply in very high-dimensional problems. The Monte Carlo tree approach has proven effective in feature extraction for such complex scenarios <cit.>. Consequently, there is an intention to incorporate Monte Carlo trees into existing algorithms for future research to address higher-dimensional multi-objective Bayesian optimization problems.
§ IMPACT STATEMENTS
This paper presents work whose goal is to advance the field of Multi-Objective Bayesian Optimization (MOBO).
There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
icml2024
§ EXPERIMENTAL DETAILS
§.§ Parameter Settings
First, we provide more details about training the DM. The DM has a straightforward architecture, incorporating two linear layers, each containing 128 hidden units, and it uses the ReLU activation function. We train the DM for 4000 epochs with a batch size of 1024, using the Adam optimizer with a learning rate of 0.0001.
Moreover, the hyperparameter for the DM’s step t is configured at 25, and the noise level is defined within the range from 1e-5 to 0.5e-1.
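For concreteness, the following is a minimal PyTorch sketch of a denoiser consistent with the stated settings (two 128-unit hidden layers with ReLU, Adam at lr = 0.0001, t = 25 steps, noise levels in [1e-5, 0.5e-1]). The scalar time feature and the noise-prediction (ε-prediction) training objective are our assumptions, since the paper does not spell out these details.

```python
import torch
import torch.nn as nn

class DenoiserMLP(nn.Module):
    """Noise-prediction network: two 128-unit hidden layers with ReLU."""
    def __init__(self, d, hidden=128):
        super().__init__()
        # Input: noisy solution x_t concatenated with a scalar step index t.
        self.net = nn.Sequential(
            nn.Linear(d + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, x_t, t):
        return self.net(torch.cat([x_t, t.view(-1, 1).float()], dim=1))

d = 20                                        # decision-space dimension (example)
model = DenoiserMLP(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # stated learning rate

betas = torch.linspace(1e-5, 0.5e-1, 25)      # stated noise levels over t = 25 steps
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def train_step(x0):
    """One DDPM-style update on a batch x0 of good solutions (batch size 1024)."""
    t = torch.randint(0, len(betas), (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps      # forward diffusion
    loss = ((model(x_t, t) - eps) ** 2).mean()        # predict the added noise
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```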
The maximum number of function evaluations (FEs) is 200, where 100 FEs are used to initialize the population and 100 FEs are used to evaluate new offspring after batch selection. The batch size of all the algorithms is 5, so the number of iterations is 100 / 5 = 20.
In CDM-PSL, the initial populations are all obtained from Latin Hypercube Sampling (LHS) <cit.>. To ensure fairness and to allow the comparison algorithms to perform as they originally did, their relevant settings are kept unchanged from the original papers.
§.§ Synthetic Benchmark Problems
In this subsection, we detail each synthetic benchmark problem, outlining the dimensions of the decision space 𝒳∈ℝ^d and the objective space f(𝒳) ∈ℝ^M, where f(·) represents the black-box function. Additionally, we discuss the reference points r utilized for computing the hypervolume indicator. Our experiments encompass 9 synthetic benchmark problems, with 2 and 3 objectives and 10, 20, and 50 decision variables. Further details are available in Table <ref>, and the characteristics of each problem are listed in Table <ref>. For the reference point r∈ℝ^M, we adopt a vector comprising the maximum objective values from the initial solution set x_1,...,x_N (Eq. <ref>).
r = (max_1 ≤ i ≤ Nf_1(x_i),...,max_1 ≤ i ≤ Nf_M(x_i))
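This reference point is straightforward to compute; below is a direct NumPy transcription of the equation above, where F stacks the objective vectors of the N initial solutions row-wise (the variable names are ours).

```python
import numpy as np

def reference_point(F):
    """Component-wise maximum of the initial objective values.

    F : array of shape (N, M) with F[i, j] = f_j(x_i).
    """
    return F.max(axis=0)

# Example with N = 4 initial solutions of an M = 2 objective problem.
F = np.array([[0.9, 2.1], [1.4, 1.0], [0.3, 3.2], [1.1, 0.7]])
r = reference_point(F)   # -> array([1.4, 3.2])
```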
§.§ Real-world Application Problems
Our experiments include 7 real-world application problems <cit.>, alongside several synthetic benchmark problems. These problems were originally proposed across various fields for distinct applications. Below are introductions to each of these real-world problems:
Four Bar Truss Design (RE1). This task aims to optimize the design of a four-bar truss, focusing on minimizing structural volume (f_1) and joint displacement (f_2). It considers the lengths of the four bars (x_1, x_2, x_3, and x_4) as variables. For more information, please see <cit.>.
Pressure Vessel Design (RE2). The goal here is to design a cylindrical pressure vessel to minimize total costs (f_1), including materials, forming, and welding, and to avoid violations of three design constraints (f_2). The decision variables are the thickness of the shell (x_1), the thickness of the pressure vessel head (x_2), the inner radius (x_3), and the length of the cylindrical section (x_4). Additional details are in <cit.>.
Disk Brake Design (RE3). This problem focuses on designing a disc brake to minimize mass (f_1), stopping time (f_2), and violations of four design constraints (f_3), with four decision variables: inner radius (x_1), outer radius (x_2), engaging force (x_3), and the number of friction surfaces (x_4). Further details are available in <cit.>.
Gear Train Design (RE4). The objective is to design a gear train to minimize the deviation from the required gear ratio (f_1), the maximum size of the gears (f_2), and violations of design constraints (f_3), considering the number of teeth in each of the four gears (x_1, x_2, x_3, and x_4). More information can be found in <cit.>.
Rocket Injector Design (RE5). The design goal for the rocket injector is to minimize the maximum temperature on the injector face (f_1), the distance from the inlet (f_2), and the temperature at the post tip (f_3), with decision variables including hydrogen flow angle (x_1), hydrogen area (x_2), oxygen area (x_3), and oxidizer post tip thickness (x_4). Detailed information is in <cit.>.
Reinforced Concrete Beam Design (RE6). This design problem for a reinforced concrete beam aims to minimize the total cost of concrete and reinforcing steel (f_1) and the sum of two constraint violations (f_2), with variables for the reinforcement area (x_1), beam width (x_2), and beam depth (x_3). Details are available in <cit.>.
Welded Beam Design (RE7). The challenge is to design a welded beam to minimize the cost (f_1), end deflection (f_2), and violations of four constraints (f_3), with variables adjusting the beam's size (x_1, x_2, x_3, and x_4). More information can be found in <cit.>.
To maintain consistency in evaluation, the same reference point was applied when evaluating all the algorithms; more details are shown in the supplementary material. Details on the real-world application problems, including problem information and reference points, are presented in Table <ref>.
§.§ Hardware Settings
The experiments in this paper are conducted on a server with the Ubuntu 22.04 LTS operating system, a 1.5GHz AMD EPYC 7742 CPU (64 CPU cores), 320 GB RAM, and two NVIDIA RTX 4090 GPUs.
§ RELATED WORKS
§.§ Comparing DM-Based Methods with Normalizing Flows
Normalizing flows (NF) transform data into a prior distribution through bijective functions, which limits their ability to model complex data distributions in both practical and theoretical contexts <cit.>. In contrast, DM is more effective at capturing complex data distributions, and the CDM-PSL proposed in this paper also demonstrates this experimentally. Moreover, the random noise introduced by DM naturally generates a greater diversity of candidate solutions.
Specifically, DM enhances its expressive power by introducing random noise in the forward and backward processes, compared with NF. Considering the case of generating candidate solutions, this property of DM enables the exploration of more solutions in the space, thus enriching the diversity of solutions.
From the perspective of the motivation behind our proposed method, the performance of PSL methods might deteriorate significantly due to the complexity of distributions of Pareto optimal solutions and the limitation of function evaluations in the face of EMOBOPs. DM can model complex distributions of solutions step by step, and generate more diverse solutions by introducing random noise. This gives DM an advantage over NF for application on the PSL of EMOBOPs.
§.§ Comparing CDM-PSL with DDOM
In this section, we explain the novelty and advantages of CDM-PSL over DDOM. Krishnamoorthy et al. proposed an offline black-box optimization algorithm named DDOM <cit.>, based on the diffusion model. DDOM trains a generative model on an offline dataset and proves its effectiveness on single-objective optimization problems. The novelty and advantages of CDM-PSL over DDOM are as follows:
Model composition: The main difference between CDM-PSL and DDOM is that CDM consists of two generative models, CG and UG. CG can minimize all the objective values as much as possible in a limited number of function evaluations, and UG can generate solutions with greater diversity.
Training data: CDM-PSL trains the DM on an online dataset, while DDOM is trained on an offline dataset. This means CDM-PSL can receive and learn from new data instantly, allowing it to quickly adapt to changes in the data distribution.
Inference process: CDM-PSL introduces gradient information, weighted by information entropy, into the process of generating solutions, significantly enhancing convergence performance under limited FEs.
Research problems: CDM-PSL is targeted at solving multi-objective optimization problems, while DDOM mainly solves single-objective optimization problems.
The combination of our proposed components gives CDM-PSL competitive performance in solving EMOPs.
§ ADDITIONAL EXPERIMENTS
§.§ The Advantages of Using Diffusion Model
Effectively modeling the complicated distribution of the PS is quite challenging for EMOPs with limited function evaluations. Existing Pareto set learning algorithms may exhibit considerable instability in expensive scenarios <cit.>. DM is meticulously engineered to represent complex datasets through a highly flexible family of probability distributions. It has demonstrated promising outcomes in several domains, notably in image restoration and data synthesis <cit.>. Thus, DM offers a promising way for Pareto set learning in expensive multi-objective Bayesian optimization.
Specifically, the proposed CDM-PSL is a composite of CG with a conditional DM and UG with an unconditional DM, combining CG's convergence capability with UG's diversity potential. Figure <ref> illustrates the offspring produced by SBX versus those generated by CG and UG on DTLZ7 and ZDT1 at FE = 25. The closer to (0, 0, 0), the better. As can be seen from the figure, both CG and UG can generate solutions with lower objective values.
* CG with conditional DM can generate offspring solutions closer to the PF than SBX and UG by introducing performance gradient information in the denoising process, i.e., it has stronger convergence performance.
* UG with unconditional DM does not have as strong convergence performance as CG, but since it does not rely on the gradient of the aggregated objective values to guide the denoising process, the solutions generated by UG will have stronger diversity.
§.§ The Advantages of Entropy Weighting Method
The entropy weighting method, serving as an adaptive weight allocation approach, assigns weights based on the information entropy of objectives. This method reduces subjectivity in weighting gradients for different objective values, and ensures that the algorithm places more emphasis on objectives with rich information content, thereby achieving better performance in multi-objective optimization problems.
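The exact formulas are not printed in this appendix, so the sketch below shows one standard entropy-weight construction that we assume; a switch covers both sign conventions, since CDM-PSL emphasises objectives with higher information entropy, whereas the classical entropy-weight method emphasises lower-entropy ones.

```python
import numpy as np

def entropy_weights(Y, higher_entropy_more_weight=True):
    """Per-objective weights from the entropy of observed objective values.

    Y : array (N, M) of objective values of the current population.
    """
    P = Y - Y.min(axis=0) + 1e-12                 # shift columns to positive
    P = P / P.sum(axis=0)                         # column-wise probabilities
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(Y))  # entropies in [0, 1]
    w = e if higher_entropy_more_weight else 1.0 - e
    return w / w.sum()

# The aggregated gradient guiding CG's denoising is then sum_j w[j] * grad_j.
```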
To show the advantages of the entropy weighting method, we visualize the offspring to compare the convergence performance of SBX and CG under three different weighting methods: entropy weighting, mean weighting, and random weighting. Figure <ref> illustrates the objective values of all newly generated solutions at FE = 25. As can be seen from the figure, CG with weighted gradients can generate solutions with lower objective values across multiple objectives. Moreover, CG with the entropy-weighted gradient has better convergence performance than the others. This is because CG (entropy weight) pays close attention to objectives that are relatively more important (with higher information entropy) during the generation of offspring; the offspring it produces exhibit stronger convergence than those generated with CG (mean weight) and CG (random weight).
§.§ The Advantages of CDM Over a Gradient-Based Method With Multiple Restarts
In this section, we demonstrate through experiments the advantages of our proposed CDM-PSL compared to a gradient-based method (GM) with multiple restarts. The main advantage of CDM over gradient-based methods is that CDM generates new offspring based on population-wide information, while general optimization methods generate new offspring from individual solutions and ignore the connections between solutions; this makes CDM more advantageous in maintaining population diversity <cit.>.
Moreover, gradient-based methods are more likely to get stuck in local optima or saddle points and fail to explore the global optimum, and they are sensitive to the choice of the initial point and the step size <cit.>. While their tendency to fall into local optima can be mitigated to some extent by restarts, this leads to significant time overhead when dealing with high-dimensional problems. In contrast, DM can model complex distributions of solutions and generate more diverse solutions by introducing random noise.
To compare the performance of CDM and GM through experiments, we implemented a GM with multiple restarts, with the following specific settings (a sketch is given after the list):
* The objective value gradient of the current solution is obtained in the same manner as in CG.
* The weighting of the gradient adopts the entropy weighting method, which is the most effective in CG.
* SDE is used to select good solutions for the next generation of offspring through gradient methods.
* Data augmentation is performed on the selected solutions (starting points for the gradient method), including adding Gaussian noise/random perturbations, etc.
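A minimal NumPy sketch of this baseline follows; the step size, iteration count, and perturbation scale are illustrative assumptions, and grad_fn stands for the entropy-weighted aggregated gradient obtained from the surrogate model, as in CG.

```python
import numpy as np

def gm_with_restarts(grad_fn, starts, lr=0.01, steps=50, noise=0.05, seed=0):
    """Gradient descent on the aggregated objective from perturbed restarts.

    grad_fn : callable returning the entropy-weighted gradient at a point x.
    starts  : (K, d) array of selected solutions used as restart points.
    """
    rng = np.random.default_rng(seed)
    out = []
    for x0 in starts:
        # Data augmentation: Gaussian perturbation of the starting point.
        x = x0 + noise * rng.standard_normal(x0.shape)
        for _ in range(steps):
            x = x - lr * grad_fn(x)   # descend the aggregated objective
        out.append(x)
    return np.stack(out)
```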
The experimental results are shown in Figure <ref> (d=20, FE=25), which validate the advantages of CDM over GM with multiple restarts by visualizing the offspring. GM generates solutions that are more likely to fall into local optima, while CDM maintains better diversity.
Figure <ref> further demonstrates, by comparing the HV values of both, that CDM generates solutions with better convergence and diversity than GM.
§.§ Results on Synthetic Benchmark Problems with Various Decision Space Scales
In this section, we broaden our experimental scope to encompass problems with 10-dimensional decision spaces, maintaining the same experimental parameters as before, including a batch size of 5, 20 iterations, and an initial sample size of 100. The outcomes, illustrated in Figures <ref> and <ref>, also demonstrate that, overall, CDM-PSL is superior to the compared methods.
Specifically, when the dimensionality of decision variables is set to 10, CDM-PSL exhibits highly competitive performance in almost all problems compared to the benchmark methods. Furthermore, when the dimensionality of decision variables reaches 50, our proposed method achieves notably superior performance on DTLZ2, DTLZ4, DTLZ5, and DTLZ6. However, the performance of CDM-PSL shows some degradation on ZDT3, DTLZ3, and DTLZ7. As indicated by the characteristics of benchmark problems in Table 2, these problems all possess a multimodal landscape feature. This phenomenon occurs because our proposed Pareto set learning approach generates new offspring based on the distribution of the current population. In these three types of problems, the Pareto optimal set is distributed across multiple regions, and with the increase in decision space dimensionality, the Pareto optimal sets of these multimodal landscape problems become more complex. As a result, our method might struggle with adequate exploration and exploitation on these types of problems at higher dimensions. In contrast, CDM-PSL excels in capturing unimodal Pareto sets (such as DTLZ2, DTLZ4, DTLZ5, and DTLZ6), particularly in high-dimensional decision spaces, showing a clear advantage over the methods compared.
§.§ Results on Synthetic Benchmark Problems with Various Initial Population Sizes
In this section, we examine the performance of CDM-PSL under varying initial population sizes. We selected qEHVI <cit.>, qNEHVI <cit.>, qLogEHVI <cit.>, and qLogNEHVI <cit.> as comparison algorithms. The initial population size for these algorithms was set according to their default settings, specifically 2(d+1), where d represents the dimension of the solution. Figure <ref> presents the results of these five algorithms on different problems. From the figure, it is evident that even with a smaller initial population size, CDM-PSL maintains good convergence performance in the early stages of the algorithm. Furthermore, compared to the other algorithms, CDM-PSL achieves higher HV values in the final evaluation.
§.§ Comparison of Log HV Differences
In this section, we include the plots of the log HV difference for all the test problems (d=20). Figure <ref> shows the log HV difference for CDM-PSL (our method) alongside PSL-MOBO and DGEMO, two of the most competitive algorithms examined in our experiments. This comparison effectively highlights what fraction of each problem was actually solved and showcases the superior performance of CDM-PSL.
§.§ Ablation Study with Limited FEs
To analyze the convergence performance of our proposed method, we compare CDM-PSL with three variants: CDM-PSL without entropy weight (CDM-PSL w/o Weight), CDM-PSL without conditional generation (CDM-PSL w/o Condition), CDM-PSL without diffusion model (CDM-PSL w/o DM). As can be seen from the Table <ref>, the convergence performance of CDM-PSL is impaired when any of the components is ablated. Notably, incorporating the diffusion model to generate offspring significantly improves the model's performance with a minimal number of function evaluations (specifically, 25), enabling it to achieve higher HV values within a limited set of evaluations.
§.§ Pareto Fronts Obtained by CDM-PSL on Real-World Problems
In this section, we provide more figures of the final solutions obtained by CDM-PSL on the real-world problems. As can be seen from Figure <ref>, the proposed CDM-PSL can effectively model the distribution of the PS, even though the Pareto fronts of the real-world test problems RE1-RE7 are complicated.
§.§ Parameter Analysis
§.§.§ Performance with Different Learning Rates
In this section, we compare the performance of CDM-PSL under different learning-rate settings to assess the sensitivity of our proposed method to this parameter. Figure <ref> illustrates the impact of varying learning rates on the performance of CDM-PSL, with lr=0.001 being the default setting of our proposed method. The results depicted in Figure <ref> indicate that there are no significant differences in the performance of CDM-PSL across different learning rates.
§.§.§ Study of CG and UG ratios in CDM-PSL
In this section, we explore the impact of varying ratios between conditional generation and unconditional generation in CDM-PSL. Figure <ref> displays the HV values for CDM-PSL with different CG and UG ratios across multiple problems. Here, "C1-U10" signifies a CG to UG ratio of 1 to 10, which is the default setting for CDM-PSL; "C3-U8" indicates a ratio of 3 to 8; and so forth. From the figure, it is observable that increasing the proportion of CG from the default configuration marginally improves the algorithm's early convergence performance, as evidenced by a slightly elevated HV value up to the 25th function evaluation. However, with a sufficient number of function evaluations, the DM is able to learn the distribution of good solutions more accurately. Consequently, the optimal final HV value is achievable with a minimal CG ratio and a predominantly higher UG ratio. Therefore, we opt for a CG to UG ratio of 1 to 10, which significantly reduces the algorithm's time overhead without compromising its final performance.
§.§ Time overhead of CDM-PSL
We choose ZDT1 and DTLZ2 to discuss the time overhead of CDM-PSL; the numbers of decision variables are 10 and 20, respectively, the maximum number of function evaluations is set to 120, and the parameters for training the CDM are consistent with the default settings mentioned in the manuscript. Table <ref> lists the time overhead of each part of CDM-PSL, including training, CG, UG, and others.
As we can see from the table, the time overhead of training DM is a relatively modest fraction of the total time overhead of CDM-PSL. CG takes longer to generate new candidate solutions compared to UG. This significant difference is attributed to CG's reliance on a surrogate model to calculate the objective values during the denoising process, which is essential for obtaining the gradient information needed to guide the denoising. However, the additional time overhead imposed by CDM is acceptable compared to the rest of the algorithm.
In the far right column of the table, we present the time overhead associated with one of the latest SOTA algorithms, PSL-MOBO. A comparison between the total time overhead of our method and that of PSL-MOBO indicates that the additional time overhead incurred by CDM is within acceptable limits.
§ MORE EXPLANATIONS ABOUT THE MOTIVATION OF THE PROPOSED METHOD
§.§ Motivation of using DM in PSL-based MOBO
When addressing EMOPs, the performance of PSL methods might deteriorate significantly due to the complexity of the distributions of Pareto optimal solutions and the limited number of function evaluations. This poses a significant challenge to existing PSL algorithms <cit.>. Unlike existing methods that directly learn the distribution of the PS in a single step, complex distributions of solutions can be learned step by step with the aid of DM. The success of DM in various domains (e.g., computer vision) shows promise for its application to PSL in EMOPs.
§.§ Detailed motivation
Using DM for MOBO is not trivial, because one should take into account the following two aspects: how to speed up the searching process and how to balance convergence and diversity. With this in mind, we propose a PSL method with composite diffusion models. Specifically, the proposed CDM-PSL has the following features:
* CG with conditional DM is aimed at better convergence since it introduces gradient information to guide the searching process.
* UG with unconditional DM is designed for generating diverse solutions with low CPU-time cost.
* The entropy weighting method, serving as an objective aggregation approach for offering performance gradient information, is introduced to assign weights based on the information entropy of objectives. This method reduces subjectivity in weighting gradients for different objective values, and ensures that the algorithm places more emphasis on objectives with rich information content. Figure <ref> shows that CG with entropy weighting method can generate better solutions than CG with mean weighting or random weighting.
§ LIMITATIONS OF CDM-PSL
In this section, we discuss some limitations of CDM-PSL. One limitation of our approach is that CG introduces additional time overhead. As shown in Table <ref>, conditional generation increases the time overhead of generating candidate solutions to some extent, which limits the method when applied to very high-dimensional problems. Another limitation of our work is that our experiments were not conducted on real-world problems with very high dimensionality. While we have conducted experiments on seven real-world engineering design problems, RE1-RE7, which provide some evidence of the superiority of CDM-PSL on real-world problems, these test cases do not represent the complexity and scale of extremely high-dimensional industrial problems. We will explore the algorithm's performance on very large-scale problems, such as real industrial design, in future work.
|
http://arxiv.org/abs/2405.09828v1 | 20240516060508 | PillarNeXt: Improving the 3D detector by introducing Voxel2Pillar feature encoding and extracting multi-scale features | [
"Xusheng Li",
"Chengliang Wang",
"Shumao Wang",
"Zhuo Zeng",
"Ji Liu"
] | cs.CV | [
"cs.CV"
] |
PillarNeXt: Improving the 3D detector by introducing Voxel2Pillar feature encoding and extracting multi-scale features
Xusheng Li,
Chengliang Wang,
Shumao Wang,
Zhuo Zeng,
and Ji Liu
Xusheng Li, Chengliang Wang, Zhuo Zeng, and Ji Liu are with Chongqing University, Chongqing, China (E-mail: lixusheng@cqu.edu.cn; wangcl@cqu.edu.cn; zengz@cqu.edu.cn; liujiboy@cqu.edu.cn).
Shumao Wang is with Zhejiang University, Zhejiang, China (E-mail: maomao123@zju.edu.cn).
May 20, 2024
===================================================================================================================================================================================================================================================================================================================================================================
Multi-line LiDAR is widely used in autonomous vehicles, so point cloud-based 3D detectors are essential for autonomous driving. Extracting rich multi-scale features is crucial for point cloud-based 3D detectors in autonomous driving due to significant differences in the size of different types of objects. However, due to the real-time requirements, large-size convolution kernels are rarely used to extract large-scale features in the backbone. Current 3D detectors commonly use feature pyramid networks to obtain large-scale features; however, some objects containing fewer points are further lost during downsampling, resulting in degraded performance. Since pillar-based schemes require much less computation than voxel-based schemes, they are more suitable for constructing real-time 3D detectors. Hence, we propose PillarNeXt, a pillar-based scheme. We redesigned the feature encoding, the backbone, and the neck of the 3D detector. We propose Voxel2Pillar feature encoding, which uses a sparse convolution constructor to construct pillars with richer point cloud features, especially height features. Moreover, additional learnable parameters are added, which enable the initial pillars to achieve higher representational capability. We extract multi-scale and large-scale features in the proposed fully sparse backbone, which does not utilize large-size convolutional kernels; the backbone consists of the proposed multi-scale feature extraction modules. The neck consists of the proposed sparse ConvNeXt, whose simple structure significantly improves the performance. The effectiveness of the proposed PillarNeXt is validated on the Waymo Open Dataset, where object detection accuracy for vehicles, pedestrians, and cyclists is improved; we also verify the effectiveness of each proposed module in detail.
point clouds, 3D detector, multi-scale features, pillar feature encoding
§ INTRODUCTION
Point cloud-based 3D detectors are crucial in autonomous driving <cit.>.
Due to the long detection distance of LiDAR, point clouds are sparse and massive <cit.>.
Current works <cit.> organize the unstructured point clouds into voxels <cit.> or pillars <cit.> and perform feature encoding <cit.>, e.g., Voxel R-CNN <cit.> and PointPillar <cit.>.
Since the point clouds are sparse, the voxels/pillars only exist at a few locations <cit.>.
Thus, processing the voxels/pillars by traditional convolution consumes additional computing resources <cit.>, contrary to the real-time requirements of autonomous driving.
Current works reduce the computation cost with sparse convolution <cit.>, which only convolves at the locations where the voxels/pillars are present.
A frame of point clouds can be organized into voxels/pillars. However, the number of voxels is much larger than that of pillars, and voxel-based detectors need to utilize 3D sparse convolution <cit.>, which consumes much more computation than 2D sparse convolution <cit.>.
The pillar-based detectors that require only 2D sparse convolution are more compatible with the real-time requirements of autonomous driving <cit.>.
When constructing pillars, first, a simple multilayer perceptron (MLP) is used to expand the features for each point in a pillar <cit.>; then, the maximum value is taken in each dimension <cit.>.
Compared to the voxel-based feature encoding scheme, the pillar-based encoding result carries less feature information: it ignores a large amount of point cloud height information and lacks the ability to represent point clouds of different heights <cit.>.
The 3D detection targets, such as vehicles, pedestrians, and cyclists, vary significantly in size, with vehicles being particularly large <cit.>.
Therefore, extracting large-scale and multi-scale features is essential for accurate 3D object detection <cit.>.
Some image-based object detection <cit.> and semantic segmentation <cit.> tasks increase the convolution kernel size to construct large-scale features.
However, the computational cost increases rapidly as the size of the convolution kernel enlarges <cit.>.
Due to the real-time requirements of autonomous driving, the large-size convolution kernel is seldom used in the backbone of 3D object detection <cit.>.
However, constructing large-scale features is necessary to accurately detect large-size objects such as vehicles <cit.>.
The pillar-based detectors usually extract multi-scale features by performing stage-by-stage downsampling in the backbone <cit.>.
The layer at the tail of the backbone has a large receptive field and can construct long-range features for large-size objects <cit.>.
For example, VoxelNeXt <cit.> utilizes six layers of step-by-step downsampling in the backbone, and the last layer obtains a large receptive field to construct long-range features for large-size objects.
Moreover, the features output from the last three backbone layers are combined to obtain multi-scale features, which improves the detection accuracy of multi-scale objects.
However, some objects with fewer points are further lost during the downsampling, resulting in an inability to adequately construct large-scale features of these objects at the tail of the backbone.
Therefore, the construction of multi-scale features should be performed at the early layer of the backbone.
We propose the PillarNeXt to extract multi-scale and large-scale features at each layer of the fully sparse backbone.
We redesigned the feature encoding, the backbone, and the neck.
First, we propose the Voxel2Pillar feature encoding method, which combines the advantages of voxel-based and pillar-based encoding methods. It uses the proposed sparse convolution constructor to construct pillars from voxels with richer features.
Second, in the design of the fully sparse backbone, we avoid expanding the receptive field by increasing the size of the convolutional kernel, thus ensuring real-time 3D object detection.
The backbone adopts step-by-step downsampling; each layer uses the proposed multi-scale feature extraction module.
Each multi-scale feature extraction module contains a dense large-scale feature extraction block and several large-scale feature extraction blocks to achieve fine-grained and coarse-grained large-scale feature extraction, respectively.
The backbone network has achieved the extraction of multi-scale features throughout the entire process, improving the accuracy of object detection while ensuring real-time performance.
Finally, we propose sparse the ConvNeXt to extract multi-scale features in the neck. Significant performance improvements are obtained with a slight modification.
Our contributions are summarized as follows:
1. We propose the PillarNeXt, which redesigns the feature encoding, backbone, and neck. The network significantly improves the accuracy of 3D object detection.
2. We propose the Voxel2Pillar feature encoding, which combines the advantages of the voxel-based and the pillar-based feature encoding. The proposed sparse convolution constructor constructs pillars from voxels that contain richer features.
3. A fully sparse backbone is proposed to extract multi-scale and large-scale features at each layer.
4. We use the proposed sparse ConvNeXt to extract multi-scale features in the neck, and the slight modification brings significant performance improvement.
§ RELATED WORK
In this section, we first review recent 3D detectors and pillar-based detectors. Then, sparse convolution and multi-scale feature extraction methods are reviewed.
§.§ 3D detectors
PV-RCNN <cit.> deeply combines a 3D voxel convolutional neural network with PointNet-based set abstraction to learn more discriminative point cloud features.
It exploits the efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive field of the PointNet-based network.
In contrast to traditional pooling operations, RoI-grid feature points encode richer contextual information to estimate object confidence and location accurately.
Voxel R-CNN <cit.> proposes that coarse-grained voxels can also provide sufficient detection accuracy.
A simple but effective voxel-based framework is designed using voxel features in a two-stage method.
The proposed method substantially increases the speed of object detection while maintaining accuracy.
CenterPoint <cit.> uses points to represent, detect, and track 3D objects.
It uses a keypoint detector to detect the object's center, which is regressed to additional properties, including 3D size, 3D orientation, and velocity.
In the second stage, it uses additional point features on the object to improve these estimates.
CenterPoint improves previous state-of-the-art techniques by 10-20% when running at 13 Frames Per Second (FPS).
Part A^2 <cit.> is a 3D detector for point clouds.
It uses the intra-object part information to learn distinctive 3D features and improves the performance of 3D object detection via RoI-aware pooling and sparse convolution.
§.§ Pillar-based 3D detectors
Real-time and high-performance 3D object detection is crucial for autonomous driving, and pillar-based detectors consume less computational resources by using only 2D convolution.
PointPillar <cit.> is a widely deployed detector for 3D object detection that balances speed and accuracy.
The pillars are converted to pseudo-images, and feature extraction is performed using a 2D detection network. The detection speed of the PointPillar reaches 62Hz.
PillarNet <cit.> is a real-time, and high-performance pillar-based detector using only 2D convolution.
The proposed pillar network consists of an encoding network for pillar feature learning, a neck network for spatial semantic feature fusion, and the usual detection head.
The detection performance is improved thanks to the designed orientation-decoupled IoU regression loss and IoU-aware prediction branch.
Sparse voxel features need to be densified and processed by a dense prediction head, requiring additional computational costs. VoxelNeXt <cit.> performs fully sparse 3D object detection, predicting objects directly based on sparse voxel features without sparse-to-dense conversion or NMS post-processing.
The detector balances speed and accuracy, demonstrating that the fully sparse voxel-based representation works well for 3D object detection and tracking.
A pillar-based version of the VoxelNext is also provided, and the speed and accuracy achieved are competitive.
§.§ Sparse Convolution
Sparse convolution is widely used in point cloud tasks because it saves a lot of computational costs by performing convolution computations only at non-empty locations <cit.>.
Sparse convolution consists of spatially sparse convolution <cit.> and submanifold sparse convolution <cit.>.
Spatially sparse convolution <cit.> reduces the computation by considering only the positions of non-empty elements in the data.
The receptive field of the convolution kernel is restricted to the non-null elements in the data, and the convolution operation is performed only at these locations.
This makes the convolution operation more efficient and suitable for processing sparse data.
However, the use of spatially sparse convolutions leads to an increase in the sparsity of the data.
Submanifold sparse convolution <cit.> also focuses only on the non-null elements in the data and convolves only when a non-null element appears in the center of the convolution kernel, and thus does not lead to an increase in sparsity in the convolution results.
§.§ Multi-scale features
Multi-scale feature extraction is essential in image-based object detection tasks.
However, due to the real-time requirements of the task, large-size convolutional kernels are generally not used in the backbone.
Feature pyramid networks <cit.> are frequently used in image-based detectors to extract object features of different sizes.
Numerous recent works <cit.> have used feature pyramid networks and their variants to address the problem of small and multi-scale objects in object detection.
DeepLab <cit.> introduces dilated convolutions in image-based semantic segmentation, effectively expanding the receptive field to obtain more context information without increasing the parameters and computation cost.
Mittal et al. <cit.> used dilated convolutions to study the contextual information of small-sized objects, addressing the detection challenges caused by large differences in object scales, small sizes, and occlusions.
TridentNet <cit.> investigates the effect of the receptive field on scale variation in object detection and constructs a multi-branch structure.
Each branch uses dilated convolutions with different scales to improve object detection accuracy.
§ PILLARNEXT
In point cloud-based 3D object detection tasks, extracting multi-scale features is essential because vehicles, pedestrians, and cyclists vary significantly in size <cit.>.
Existing 3D detectors typically use feature pyramid networks to construct large-scale features at the tail of the backbone <cit.>.
However, some long-range objects, containing fewer points, become even sparser in the backbone.
We aim to construct a high-speed pillar-based 3D object detector.
The network extracts multi-scale features at each layer of the backbone, enabling fast and accurate 3D object detection.
This section proposes the PillarNeXt. Sec.<ref>, Sec.<ref>, and Sec.<ref> present the Voxel2Pillar feature encoding, the fully sparse backbone, and the neck, respectively. Finally, the structure of PillarNeXt is presented in Sec.<ref>.
§.§ Voxel2Pillar feature encoding
Voxel-based and pillar-based feature encoding methods divide the raw point clouds in the grids and then encode the points in each grid <cit.>.
Voxel-based feature encoding divides the raw point clouds on the horizontal and vertical dimensions and requires 3D convolution for voxel feature extraction <cit.>.
Pillar-based feature encoding methods only divide the raw point clouds on the horizontal dimension and only require 2D convolutions for pillar feature extraction <cit.>.
The number of pillars is considerably smaller than the number of voxels. The 2D convolution kernel with fewer parameters is computationally faster than the 3D convolution <cit.>.
Therefore, pillar-based feature encoding methods <cit.> are more favorable for constructing high-speed 3D detectors.
Pillar-based feature encoding methods <cit.> divide the raw point clouds into grids according to the pillar size. Then, the points in each grid are encoded. It can be formulated as:
Pm_(i,j) = ReLU(BatchNorm(MLP(Pi_(i,j))))
Feature_(i,j) = max(Pm_(i,j))
In Equation 1, Pi_(i,j) represents a point cloud set, i and j represent the coordinates of the point cloud set.
MLP() is a shallow MLP.
BatchNorm() and ReLU() are the batch normalization and the ReLU activation function, respectively.
Pm_(i,j) is the feature extension result of the point cloud set.
Equation <ref> represents obtaining the maximum values of each dimension of Pm_(i,j).
Feature_(i,j) is the feature encoding result called pillar.
Our proposed feature encoding method can be formulated as:
Feature_(i,j) = [max(Pm_(i,j)), min(Pm_(i,j)), mean(Pm_(i,j))]
Equation 3 represents obtaining the maximum, minimum, and average values of each dimension of the feature extension result Pm_(i,j) and concatenating them to obtain the enriched feature encoding result Feature_(i,j).
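As a concrete illustration of Equations 1-3, the PyTorch fragment below pools the MLP-expanded point features of a single pillar; the shapes and names are ours.

```python
import torch

def encode_pillar(pm):
    """Eq. 3 for one pillar: concatenated max/min/mean over its points.

    pm : (P, C) tensor of expanded point features Pm_(i,j) for the P points
         falling into pillar (i, j).
    """
    return torch.cat([pm.max(dim=0).values,
                      pm.min(dim=0).values,
                      pm.mean(dim=0)], dim=0)

pm = torch.relu(torch.randn(17, 64))   # e.g. 17 points, C = 64 after the MLP
feature_ij = encode_pillar(pm)         # shape (192,) = 3C
```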
The proposed method encodes raw point clouds in pillars with richer features than the widely used Pointpillar's feature encoding block <cit.>.
However, pillar-based feature encoding has apparent drawbacks compared to voxel-based methods, such as coarse feature granularity and lack of expressiveness in vertical dimensions <cit.>.
Therefore, we propose Voxel2Pillar feature encoding.
The approach combines the advantages of both encoding methods so that the encoded pillars contain richer features and are more expressive in the vertical dimension.
The proposed Voxel2Pillar feature encoding method is shown in Fig.<ref>.
It is formulated as:
Vm_(i,j,k) = ReLU(BatchNorm(MLP(Vi_(i,j,k))))
Vf_(i,j,k) = [max(Vm_(i,j,k)), min(Vm_(i,j,k)), mean(Vm_(i,j,k))]
Pillar_(i,j) = ReLU(BatchNorm(Spconv_(N,1,1)(Vf_(i,j,:))))
Firstly, the raw point clouds are divided into voxels. Then a shallow MLP is used to expand the features of the points in each voxel.
Equation 5 represents obtaining the maximum, minimum, and mean values of each dimension of the voxel feature set Vm_(i,j,k) and concatenating them to obtain the new feature vector Vf_(i,j,k).
Equation 6 represents the proposed sparse convolution constructor.
It consists of a 1×1× N spatially sparse convolution, a Batch Norm, and a ReLU activation function; N is the maximum number of voxels in the vertical dimension.
The sparse convolution constructor merges the voxels in the vertical dimension to obtain pillar Pillar_(i,j).
Since the point clouds are sparse, not many computations are performed despite the large size of the convolution kernel.
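A minimal dense stand-in for this constructor is sketched below: the 1×1×N spatially sparse convolution of Eq. 6 is replaced by an ordinary Conv3d with kernel (N, 1, 1) purely for illustration, and the channel counts and grid size are example values.

```python
import torch
import torch.nn as nn

class Voxel2Pillar(nn.Module):
    """Dense stand-in for the sparse convolution constructor (Eq. 6)."""
    def __init__(self, c_in, c_out, n_vertical):
        super().__init__()
        # Merge the N stacked voxels of each column into a single pillar.
        self.conv = nn.Conv3d(c_in, c_out, kernel_size=(n_vertical, 1, 1))
        self.bn = nn.BatchNorm3d(c_out)

    def forward(self, vf):
        # vf: (B, C, N, H, W) voxel features Vf along the vertical axis N.
        out = torch.relu(self.bn(self.conv(vf)))   # -> (B, C_out, 1, H, W)
        return out.squeeze(2)                      # pillar map (B, C_out, H, W)

# 30 vertical voxels, e.g. a 6 m vertical range at 0.2 m voxel height.
v2p = Voxel2Pillar(c_in=96, c_out=128, n_vertical=30)
pillars = v2p(torch.randn(2, 96, 30, 64, 64))      # small example grid
```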
§.§ Backbone
§.§.§ Large-scale feature extraction residual block
Extracting multi-scale and large-scale features of objects at the early backbone layer is conducive to improving the detection accuracy of objects of different sizes <cit.>.
Large-scale features of the object are usually constructed by expanding the receptive field <cit.>.
However, enlarging the convolution kernel size increases the number of parameters and brings significant computational costs, contrary to the real-time requirement of the autonomous driving task <cit.>.
The dilated convolution <cit.> expands the coverage of the convolution kernel by inserting spaces between the kernel elements and obtains a large receptive field without additional computational cost.
Therefore, introducing dilated convolution in the feature extraction block effectively expands the receptive field to learn large-scale object features without significantly increasing the computational cost.
The proposed large-scale feature extraction residual block (LSFE-Res-Block) is shown in Fig.<ref>.
It consists of three branches: the main feature extraction branch, the residual branch, and the large-scale feature extraction branch.
The main feature extraction branch consists of two 3×3 submanifold sparse convolutions <cit.> to extract the fine-grained features of point clouds.
The residual branch is utilized to prevent network performance degradation <cit.>.
In particular, instead of adding the dilated convolution directly on the main branch, a separate branch with dilated convolution is utilized to extract large-scale features, because dilated convolution may cause local information loss, and features acquired beyond the relevant range may be uncorrelated.
Therefore, the main branch extracts the fine-grained features, and the large-scale feature extraction branch helps establish long-range feature relationships for large-size objects.
As shown in Fig. <ref>, the large-scale feature extraction branch utilizes a 3×3 submanifold sparse dilation convolution <cit.>, m denoting the dilation rate.
The dilation rate gradually increases to incrementally enlarge the receptive field and realize the multi-scale features construction.
Submanifold sparse convolution <cit.> focuses on non-null elements in the data and convolves when a non-null element appears in the center of the convolution kernel.
The block utilizes submanifold sparse convolution, which does not lead to an increase in the sparsity of the convolution result at each branch.
Eventually, all branch outputs are aggregated to construct features with fine-grained features and coarse-grained large-scale features.
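The three-branch layout can be summarised with the dense stand-in below; the paper uses submanifold sparse 2D convolutions, and the exact normalisation/activation placement is our assumption.

```python
import torch
import torch.nn as nn

class LSFEResBlock(nn.Module):
    """Dense stand-in for the LSFE-Res-Block with dilation rate m."""
    def __init__(self, c, m):
        super().__init__()
        # Main branch: two 3x3 convolutions for fine-grained features.
        self.main = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU(),
            nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c),
        )
        # Large-scale branch: one 3x3 dilated convolution (dilation rate m).
        self.large = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=m, dilation=m), nn.BatchNorm2d(c),
        )

    def forward(self, x):
        # Identity (residual) + fine-grained + coarse large-scale branches.
        return torch.relu(x + self.main(x) + self.large(x))

block = LSFEResBlock(c=64, m=2)   # m grows block by block within a module
y = block(torch.randn(2, 64, 128, 128))
```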
§.§.§ Dense large-scale feature extraction residual block
The LiDAR has a long detection range, and the features of some distant objects tend to be sparse.
Although coarse-grained large-scale features are extracted using sparse dilation convolution, the increase in the dilation rate causes a decrease in the ability to construct features for small-sized objects at long distances <cit.>.
Therefore, fine-grained construction of long-range features is necessary for accurate 3D object detection.
To address the above problems, we propose a dense large-scale feature extraction residual block (D-LSFE-Res-Block).
As shown in Fig.<ref>, the block contains three branches.
The residual branch is utilized to prevent network performance degradation <cit.>.
The fine-grained feature extraction branch consists of a 3×3 submanifold sparse convolution to extract fine-grained features.
The dense large-scale feature extraction branch consists of a 1×9 and a 9×1 submanifold sparse convolution.
The dense large-scale feature extraction branch is computationally equivalent to two 3×3 submanifold sparse convolutions, but the features constructed are farther, which constructs dense large-scale features.
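A dense stand-in for this branch is sketched below: the 1×9 and 9×1 pair uses 9 + 9 = 18 weights per channel pair, matching the cost of two 3×3 convolutions while covering a 9×9 neighbourhood contiguously.

```python
import torch
import torch.nn as nn

c = 64
dense_large = nn.Sequential(               # dense large-scale branch
    nn.Conv2d(c, c, kernel_size=(1, 9), padding=(0, 4)),
    nn.Conv2d(c, c, kernel_size=(9, 1), padding=(4, 0)),
)
fine = nn.Conv2d(c, c, kernel_size=3, padding=1)   # fine-grained branch

x = torch.randn(2, c, 128, 128)
out = torch.relu(x + fine(x) + dense_large(x))     # plus the residual branch
```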
§.§.§ Multi-scale feature extraction module
We propose a multi-scale feature extraction module (MSFE-Module) to extract multi-scale features of objects, as shown in Fig. <ref>.
The module consists of an optional downsampling block, a D-LSFE-Res-Block, and several LSFE-Res-Blocks.
The downsampling block downsamples the input using a spatially sparse convolution <cit.>, thus retaining more complete features.
All other blocks consist of submanifold sparse convolutions to avoid the sparsity of the features increasing as the number of blocks increases.
In Fig.<ref>, m denotes the dilation rate of the sparse dilation convolution in each LSFE-Res-Block.
m increases gradually as the number of LSFE-Res-Blocks increases, thus gradually increasing the scale of the extracted features.
The second block utilizes the D-LSFE-Res-Block, which extracts large-scale features densely and prevents the loss of objects with fewer point clouds.
Hence, the multi-scale feature extraction module can simultaneously extract dense, coarse-grained large-scale, and fine-grained large-scale features.
§.§ Neck
ConvNeXt <cit.> is often applied in object classification and detection tasks to capture multi-scale features in images.
In the design of the neck of PillarNeXt, we borrowed the structure of ConvNeXt. The proposed neck comprises a sparse convolution module and a sparse ConvNeXt module.
Sparse ConvNeXt:
As shown in Fig. <ref>, the proposed sparse ConvNeXt has a similar structure to ConvNeXt, but a 5 × 5 submanifold sparse convolution kernel <cit.> replaces the large-scale convolution kernel.
Due to the high sparsity of the point clouds, the use of submanifold sparse convolution here reduces ineffective convolution.
In particular, we use a large-scale convolution kernel in the neck because the feature map at the neck is smaller and does not significantly increase the computation cost.
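The module below is a dense stand-in for the sparse ConvNeXt block, assuming the standard ConvNeXt layout (depthwise convolution, normalisation, inverted-bottleneck MLP, residual) with the 5×5 kernel named above; the normalisation choice and expansion ratio are our assumptions.

```python
import torch
import torch.nn as nn

class SparseConvNeXtBlock(nn.Module):
    """Dense stand-in for the sparse ConvNeXt module in the neck."""
    def __init__(self, c, expansion=4):
        super().__init__()
        self.dw = nn.Conv2d(c, c, kernel_size=5, padding=2, groups=c)  # 5x5 depthwise
        self.norm = nn.BatchNorm2d(c)
        self.pw1 = nn.Conv2d(c, expansion * c, kernel_size=1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(expansion * c, c, kernel_size=1)

    def forward(self, x):
        y = self.norm(self.dw(x))
        y = self.pw2(self.act(self.pw1(y)))
        return x + y                        # residual connection

neck_block = SparseConvNeXtBlock(c=128)
z = neck_block(torch.randn(2, 128, 64, 64))
```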
The proposed neck is shown in Fig. <ref>.
The neck starts with a 3× 3 spatially sparse convolution module, Batch Norm, and ReLU activation function.
Then several sparse ConvNeXt modules are used to extract multi-scale features from the feature map in the neck.
§.§ Network
As shown in Fig.<ref>, the proposed PillarNeXt is similar to most 3D object detection networks, including feature encoding, backbone, neck, and detection head.
The feature encoding utilizes the proposed Voxel2Pillar feature encoding.
The backbone is the feature pyramid network, including six MSFE-Modules to extract multi-scale and large-scale features.
We follow the VoxelNeXt <cit.> to fuse the feature output from the last three backbone layers to obtain multi-scale features.
We used the proposed neck in the network.
The detection head, also proposed by VoxelNeXt <cit.>, directly predicts the object's class, size, and yaw from the sparse pillars.
The proposed PillarNeXt has a similar structure to VoxelNeXt-2D <cit.> but uses the proposed feature encoding, backbone, and neck. Therefore, the effectiveness of the proposed module can be verified by comparison with VoxelNeXt-2D.
§ EXPERIMENTS
In this section, we first present the details of the implementation of the experiments.
Then, the effectiveness of the proposed PillarNeXt is verified.
Finally, the efficacy of the proposed module is verified through ablation study.
Sec.<ref> presents the experimental setting, followed by the experimental analysis in Sec.<ref>. Finally, the ablation study is presented in Sec.<ref>.
§.§ Experimental setting
Datasets.
Waymo Open Dataset <cit.> is a large-scale 3D object detection dataset in autonomous driving.
The dataset is captured by high-resolution sensors (e.g., LiDAR, cameras, and radar) and contains rich annotation information (e.g., vehicles, pedestrians, and cyclists).
Moreover, the dataset is rich in scenarios, including urban roads, highways, suburbs, etc., which helps train the model to be more robust.
It contains 798 training sequences (160k frames) and 202 validation sequences (40k frames).
It is categorized into two difficulty levels based on the number of points inside the object: LEVEL 1 means more than five points inside the object, and LEVEL 2 means at least one point but less than five points inside the object.
Waymo's official evaluation metrics are the mean average precision (mAP) and the mean average precision weighted by heading (mAPH) <cit.>.
The mAPH is a variant of mAP that represents an increased weight on object direction prediction.
Training and inference details.
Both the proposed and the compared detectors utilize the official code and default profile provided by OpenPCDet<cit.>.
All detectors are trained and tested in the same hardware environment with an A100 GPU.
The setup of the proposed detector is consistent with the VoxelNeXt-2D <cit.>.
The horizontal and vertical space detection ranges of PillarNeXt are [-75.5,75.2] and [-2,4], the size of the voxel is [0.1,0.1,0.2], and the size of the pillar is [0.1,0.1,6].
§.§ Experimental analysis
The experimental results are shown in Table <ref>.
The proposed PillarNeXt achieved the highest object detection accuracy in the vehicle detection task.
Compared to voxel-based methods, the proposed PillarNeXt does not require 3D sparse convolution and has a lower computational cost but achieves higher object detection accuracy.
Compared to VoxelNeXt-2D, vehicle detection accuracy for both difficulty levels has been significantly improved.
The mAP and mAPH of LEVEL 1 have been improved by 3.26 and 3.29, respectively; the mAP and mAPH of LEVEL 2 increased by 3.42 and 3.42, respectively.
In the LEVEL 1 and LEVEL 2 pedestrian detection tasks, the mAP and mAPH of the proposed PillarNeXt are lower only than those of the voxel-based VoxelNeXt-large <cit.>.
Compared to VoxelNeXt-2D <cit.>, pedestrian detection accuracy for both difficulty levels has been significantly improved.
The mAP and mAPH of LEVEL 1 have been improved by 2.78 and 4.49, respectively; the mAP and mAPH of LEVEL 2 increased by 3.04 and 4.45, respectively.
The proposed PillarNeXt improves mAPH more than mAP compared to VoxelNeXt-2D.
The mAPH increases the weight of object direction detection. Therefore, the proposed PillarNeXt improves the accuracy of pedestrian direction detection more significantly.
In the LEVEL 1 and LEVEL 2 cyclist detection tasks, the mAP and mAPH of the proposed PillarNeXt are lower only than those of the voxel-based VoxelNeXt-large <cit.>.
However, the proposed PillarNeXt outperforms all the Pillar-based detectors.
In addition, compared to VoxelNeXt-2D, the accuracy of cyclist detection for both difficulty levels has been significantly improved.
The mAP and mAPH of LEVEL 1 have been improved by 3.63 and 3.75, respectively; the mAP and mAPH of LEVEL 2 increased by 3.50 and 3.64, respectively.
From the above analysis, the proposed PillarNeXt achieved the highest object detection accuracy in the vehicle detection task.
The pedestrian and cyclist detection accuracy of the proposed PillarNeXt is slightly lower than VoxelNeXt-large but higher than all the pillar-based detectors.
However, the computational cost of the pillar-based detectors is much lower than that of the voxel-based detectors.
Therefore, the proposed PillarNeXt can achieve more competitive detection accuracy with limited computing power.
§.§ Ablation study
Table <ref> and Fig. <ref> show the results of the ablation study on the proposed module. We verified the effectiveness of the proposed feature encoding, backbone, and neck separately.
§.§.§ Feature encode
In Table <ref>, compared to Programme B, Programme C uses the proposed voxel feature encoding module, as shown in Equation <ref>. Compared to Programme B, the average detection accuracy of vehicles, pedestrians, and cyclists has been improved by 0.63, 0.62, and 0.83, respectively.
The improvement for cyclists exceeds 0.8.
The proposed voxel feature encoding method shows a more significant improvement in cyclists' detection accuracy.
It shows that the proposed voxel feature encoding constructs richer features, which is conducive to obtaining higher object detection accuracy.
Compared to Programme B, Programmes D and E use the proposed voxel feature encoding module and the Voxel2Pillar module.
The voxel sizes of Programmes D and E are 0.3 and 0.2, respectively.
In Programme D, the average detection accuracy of vehicles, pedestrians, and cyclists has been improved by 1.31, 1.52, and 0.46, respectively.
In Programme E, the average detection accuracy of vehicles, pedestrians, and cyclists has been improved by 1.55, 1.81, and 0.85, respectively.
It can be seen that as the voxel size decreases, the object detection accuracy improves further. However, the size of Voxel2Pillar's sparse convolution kernel also increases.
§.§.§ Backbone
In Table <ref>, Programme A indicates that the MSFE-Module utilized in the backbone contains only LSFE-Res-Block.
In the LEVEL 1 and LEVEL 2 vehicle detection tasks, the mAP and mAPH achieved boosts between 0.55 and 0.64.
The mAP and mAPH boosted between 0.57 and 0.63 in the cyclist object detection task.
However, in the pedestrian detection task, the mAP decreased by 0.18 in LEVEL 1, while the remaining metrics improved by at most 0.23.
The above analysis shows that multi-scale features are constructed using dilated sparse convolutional branches with different dilation rates in each backbone layer, which improves the detection of large-scale objects.
However, the large granularity of the multi-scale features constructed by dilated convolution degrades the detection performance for easy, small-size pedestrians, even though the module is beneficial for detecting difficult pedestrians.
Programme B represents the addition of LSFE-Res-Block and D-LSFE-Res-Block to the MSFE-Module.
Compared to Programme A, all metrics are further improved.
In both LEVEL 1 and LEVEL 2 vehicle and cyclist detection tasks, an improvement of about 1 is achieved compared to VoxelNeXt-2D <cit.>.
In addition, the drop in the mAP for pedestrian detection in LEVEL 1 is compensated, with an improvement of 0.53.
Other pedestrian detection metrics are further improved.
It shows that the multi-scale feature extraction in the backbone should combine large-scale feature extraction and fine-grained large-scale feature extraction.
The simultaneous construction of coarse-grained large-scale and fine-grained large-scale features helps construct long-range features for large objects such as vehicles.
It prevents the degradation of detection performance for small-sized objects due to large feature granularity.
§.§.§ Neck
Compared to Programme E, Programmes F and G use the proposed neck.
The number of sparse ConvNeXt modules used in Programme F and G is 3 and 1, respectively.
In the detection task of vehicles, there is no significant difference in the detection performance between Programme F and G.
However, compared to Programme F, Programme G has an average improvement of 0.35 and 0.42 in detection accuracy for pedestrians and cyclists, respectively.
This indicates that a small number of sparse ConvNeXt modules can improve object detection accuracy. However, as the number of modules increases, the detection accuracy of small objects will decrease slightly. The excessive use of large-scale convolution kernels in the neck would reduce the ability to construct features for small objects.
Table <ref> shows the ablation study on the size of sparse convolution kernels in sparse ConvNeXt.
As shown for Programme B, the highest detection accuracy improvement is achieved when the sparse convolution kernel size is 5.
When the kernel size is 7, the detection accuracy for small objects such as cyclists and pedestrians decreases significantly because the constructed features span too large a distance.
When the kernel size is 3, compared to Programme C, the accuracy for all object categories drops noticeably, because small convolution kernels cannot fully construct the long-range features of objects.
§ CONCLUSION
Object sizes vary greatly across categories, so extracting multi-scale and large-scale features is essential for accurate 3D object detection.
We propose PillarNeXt, a pillar-based detector that achieves competitive object detection accuracy while preserving the real-time performance of 3D detectors.
We have redesigned the feature encoding module, backbone, and neck.
Our proposed Voxel2Pillar feature encoding module constructs pillars that contain richer features.
The proposed fully sparse backbone network can extract multi-scale features at each layer without increasing the convolutional kernel size.
In the neck, we used the proposed sparse ConvNeXt.
We validate the proposed PillarNeXt and modules on the Waymo Open Dataset.
The results show that the accuracy of 3D object detection has been significantly improved.
|
http://arxiv.org/abs/2405.10005v1 | 20240516114551 | Probing the role of self-gravity in clouds impacted by AGN-driven winds | [
"Ankush Mandal",
"Dipanjan Mukherjee",
"Christoph Federrath",
"Geoffrey V. Bicknell",
"Nicole P. H. Nesvadba",
"Andrea Mignone"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
Probing the role of self-gravity in clouds impacted by AGN-driven winds
Ankush Mandal, Dipanjan Mukherjee, Christoph Federrath, Geoffrey V. Bicknell, Nicole P. H. Nesvadba, Andrea Mignone
May 16, 2024
========================================================================
The impact of winds and jet-inflated bubbles driven by active galactic nuclei (AGN) are believed to significantly affect the host galaxy's interstellar medium (ISM) and regulate star formation. To explore this scenario, we perform a suite of hydrodynamic simulations to model the interaction between turbulent star-forming clouds and highly pressurised AGN-driven outflows, focusing on the effects of self-gravity.
Our results demonstrate that the cloudlets fragmented by the wind can become gravitationally bound, significantly increasing their survival time. While external pressurisation leads to a global collapse of the clouds in cases of weaker winds (L_w = 10^42-10^43 erg s^-1), higher-power winds (10^44-10^45 erg s^-1) disperse the gas and cause localised collapse of the cloudlets. We also demonstrate that a kinetic energy-dominated wind is more efficient in accelerating and dispersing the gas than a thermal wind with the same power. The interaction can give rise to multi-phase outflows with velocities ranging from a few 100 to several 1000 km s^-1. The mass outflow rates are tightly correlated with the wind power, which we explain by an ablation-based mass-loss model. Moreover, the velocity dispersion and the virial parameter of the cloud material can increase by up to one order of magnitude through the effect of the wind. Even though the wind can suppress or quench star formation for about 1 Myr during the initial interaction, a substantial number of gravitationally bound dense cloudlets manage to shield themselves from the wind's influence and subsequently undergo rapid gravitational collapse, leading to an enhanced star formation rate (SFR).
ISM – self-gravity – AGN – outflows – hydrodynamics – star formation
§ INTRODUCTION
The feedback from active galactic nuclei (AGNs) on the overall evolution of their host galaxies is thought to be a dominant mechanism in galaxy evolution theory <cit.>.
It is postulated that the large-scale outflow in the form of `jets' from the AGN heats up the intra-cluster medium and stops the cooling flow towards the centre of the cluster, therefore regulating the star-forming fuel <cit.>.
Indeed, in recent cosmological simulations, it is necessary to include various models of feedback from the AGN by injecting thermal or kinetic energy <cit.>, in order to regulate star formation in massive galaxies and reproduce various observed scaling relations, including the luminosity functions and the M_ BH-σ relation <cit.>.
While modern cosmological simulations successfully replicate the statistical characteristics and the redshift evolution of galaxies, they lack the ability to predict the feedback's impact on individual host galaxies due to challenges in accurately modelling the multi-phase interstellar medium (ISM) and associated small-scale physics, i.e., these processes are included as sub-grid recipes in cosmological simulations.
There is increasing observational evidence that wind and young radio jets originating from the central AGN significantly affect the host galaxy's ISM by driving multiphase outflows, which expel gas from the central region, driving turbulence, and potentially diminishing the star-forming fuel <cit.>. This is also demonstrated by dedicated hydrodynamic simulations <cit.>.
These phenomena have a direct impact on the star formation activity inside the host as demonstrated by several observational studies where it has been found that some galaxies hosting radio-loud AGN show a lower star formation rate (SFR) compared to main-sequence galaxies, which follow the standard Kennicutt-Schmidt relation <cit.> between gas-mass and SFR surface density <cit.>.
Conversely, the over-pressurized winds/jet can cause significant compression of the ISM and may trigger collapse to rapidly form stars <cit.>. Observational evidence also supports this hypothesis, where compact radio jets or quasar winds are found to enhance star formation activity <cit.>.
Advancements in observational techniques and improved modelling of star formation physics within hydrodynamic simulations are beginning to shed light on the distinction between 'negative' and 'positive' feedback from AGN. Recent observations indicate the coexistence of both types of feedback within a single system <cit.>. Indeed, recent hydrodynamical simulations of jet-ISM interactions have revealed that while jet-inflated bubbles globally reduce star formation by enhancing turbulence, they can cause local regions of enhanced SFR due to the compression near the nuclear region <cit.>.
Thus, how AGN-driven outflows affect star formation is a complex competition between various phenomena on different scales.
However, a complete understanding of star formation as well as the survivability of the dense gas subjected to powerful AGN outflows remains elusive without the effect of the self-gravity of the gas.
With typical densities around 100 cm^-3 <cit.>, star-forming giant molecular clouds (GMCs) have a freefall timescale of a few Myr, which is comparable to or shorter than the typical duration of AGN episodes, lasting between 10-100 Myr <cit.>.
Moreover, the presence of self-gravity can increase/prolong the survival time of the clouds, when faced with strong outflows from AGN, by making them dense and compact, effectively shielding them from erosion caused by the outflow and/or AGN radiation.
Conversely, the fragmentation induced by self-gravity can give rise to numerous smaller cloudlets that may be susceptible to evaporation or entrainment by the hot wind/jet cocoon, leading to the formation of multiphase outflows, which may regulate the available fuel to form stars.
Therefore, the significance of self-gravity at the cloud level can influence how AGN-driven winds/jet cocoons impact the host galaxy on larger scales.
Thus, well-resolved simulations modelling the interaction between AGN-driven winds and individual star-forming clouds may offer a supplementary perspective to both observations and global-scale simulations.
After all, the ultimate fate of the clouds depends on small-scale processes.
Additionally, the results from these small-scale studies are important for building better sub-resolution prescriptions of different mechanisms in global (galaxy and cosmological) simulations.
In this study, we revisit the classical `cloud-crushing' problem <cit.> with the help of a suite of three-dimensional (3D), self-gravitational hydrodynamics simulations in the context of the interaction between AGN-driven winds/jet cocoons and star-forming clouds. There have been extensive studies of the effects of external shocks or winds on clouds in various different contexts, with a primary emphasis on supersonic winds/shocks from galactic winds <cit.>.
However, wind-driven bubbles or jet cocoons are known to be highly pressurized during the energy-driven phase <cit.>, and thus can be subsonic depending on the density of the wind, while also exhibiting extreme velocities of up to tens of thousands of km s^-1.
Nonetheless, only a limited number of studies have taken into account the parameters of shocks/winds (e.g., density, velocity, and pressure) which can reach extremes comparable to those generated by AGN-jet cocoons or quasar winds <cit.>.
<cit.> examined the influence of a radio jet cocoon on uniform spherical and elliptical clouds using 2D simulations.
They identified three significant phases in the evolution of these interactions <cit.>: (i) the initial impact between the blast wave and the cloud, (ii) compression induced by the thermal and ram pressure of the wind, and (iii) fragmentation of the cloud.
Additionally, their work revealed that in the presence of radiative cooling, the growth rate of the Kelvin-Helmholtz instability was highly suppressed and the mixed gas fraction was considerably reduced (less than 1% of the original cloud mass), implying a slow evaporation process and a prolonged lifetime of the cloudlets.
<cit.> reached a similar conclusion and further demonstrated that when dealing with a fractal cloud, the fragmentation induced by the wind is more pronounced compared to a uniform cloud structure.
The fragmented cloudlets are compressed to high densities, which can then cool very efficiently and survive for a much longer period.
While they discussed the potential role of self-gravity in this scenario, self-gravity was not included explicitly in the simulations.
Considering the effect of external pressurisation on a spherical cloud that may result from AGN winds or jet cocoons, <cit.> demonstrated that the pressure confinement triggers the collapse of the cloud, leading to an enhanced SFR.
Using adaptive mesh refinement (AMR), self-gravitating simulations of the interaction between more realistic AGN-driven winds and Bonnor-Ebert spheres (resembling star-forming cores), <cit.> also arrived at a similar conclusion.
Additionally, they identified a threshold ram pressure of the wind, above which the cloud will be destroyed before significant amounts of star formation can take place.
Another recent study by <cit.> concluded that, if the cloud is initially Jeans' unstable, the interaction will eventually enhance the collapse of the cloud in the presence of self-gravity.
While these studies have individually delved into various significant aspects and distinct physical processes, there is a lack of a comprehensive global perspective that incorporates all the crucial parameters and physics.
This study seeks to extend previous research on the interaction between AGN-driven outflows with more realistic fractal star-forming interstellar clouds, including radiative cooling and self-gravity.
We explore a wide parameter space, systematically varying parameters including the wind power, the average cloud density, the internal fractal density distribution within the cloud, and whether the wind is primarily dominated by kinetic or thermal energy.
This approach enables us to investigate diverse facets of the cloud's evolution.
The paper is organised as follows.
In Sec. <ref>, we describe the simulation method and the choice of initial conditions.
In Sec. <ref>, we present the main results of this study. We discuss the implications of this work in Sec. <ref>.
Finally, in Sec. <ref>, we summarize and conclude.
§ METHOD
§.§ Simulation code
The numerical simulations presented in this study are performed using the grid-based code Pluto v4.4 <cit.> in 3D (x,y,z) Cartesian geometry. We use the HLLC Riemann solver <cit.> along with a piecewise parabolic reconstruction scheme <cit.> for solving the self-gravitating hydrodynamic (HD) equations:
∂ρ/∂t + ∇·(ρv) = 0,
∂(ρv)/∂t + ∇·[ρvv + pI] = ρg,
∂E_t/∂t + ∇·[(E_t + p)v] = ρv·g - (ρ/μ m_H)^2 Λ(T),
∂(ρC)/∂t + ∇·(ρCv) = 0,
∇^2Φ = 4πGρ,
where ρ is the mass density, v is the velocity, p is the thermal pressure, C is a Lagrangian scalar used to track gas in different components (i.e., cloud and wind), g=-∇Φ (where Φ is the gravitational potential) is the acceleration due to gravity, and E_t is the total energy density given by,
E_t = ρ e + ρv^2/2.
The above equations are closed by an ideal gas equation of state (EOS):
p = (γ - 1)ρ e,
where we consider γ=5/3 throughout. The energy conservation equation (Eq. <ref>) also includes the cooling term (Λ) in order to account for the radiative losses, discussed in Sec. <ref>.
Time evolution of Eq. (<ref>)-(<ref>) is performed using a 3^ rd-order Runge-Kutta time-stepping scheme.
For simulations involving gravity, we incorporate the self-gravity module[The self-gravity patch for the PLUTO code is publicly available at <https://bitbucket.org/mankush/pluto-4.4-self-gravity-patch>] developed by <cit.>, which employs a Runge-Kutta-Legendre-based Poisson solver coupled to a V-cycle multigrid algorithm to solve the Poisson equation for the gravitational potential.
Details of the numerical implementation of the self-gravity module are presented in <cit.>.
We also solve for the potential in the non-self-gravitating runs but do not couple the gravitational acceleration terms to the hydrodynamics (Eq. <ref> and <ref>). In this way, we calculate the gravitational potential-related quantities for these simulations and compare them with the corresponding self-gravitating runs.
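For intuition about the Poisson step, the following is a minimal FFT-based toy solver for ∇²Φ = 4πGρ on a periodic cube. This is only a conceptual stand-in: the production solver is a Runge-Kutta-Legendre V-cycle multigrid with isolated (multipole) boundary conditions, whereas the FFT approach below assumes periodic boundaries.

```python
import numpy as np

G = 6.674e-8   # gravitational constant [cgs]

def poisson_fft_periodic(rho, dx):
    """Solve grad^2 Phi = 4 pi G rho on a periodic cube via FFT; in
    Fourier space the Laplacian becomes -k^2, so Phi_hat = -4 pi G
    rho_hat / k^2."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                              # avoid divide-by-zero at k = 0
    rho_hat = np.fft.fftn(rho - rho.mean())        # mean must vanish on a periodic box
    phi_hat = -4.0 * np.pi * G * rho_hat / k2
    phi_hat[0, 0, 0] = 0.0                         # fix the zero mode (zero-mean potential)
    return np.real(np.fft.ifftn(phi_hat))

# uniform-sphere test: the potential minimum should sit at the sphere's centre
x = np.linspace(-1.0, 1.0, 64)
r2 = sum(c**2 for c in np.meshgrid(x, x, x, indexing="ij"))
phi = poisson_fft_periodic(np.where(r2 < 0.25, 1e-22, 0.0), dx=x[1] - x[0])
print(np.unravel_index(np.argmin(phi), phi.shape))  # approximately (32, 32, 32)
```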
§.§ Computational domain
In our simulations, we employ a uniform Cartesian domain (xyz) with a physical range of -50 pc≤ x≤ 150 pc, -50 pc≤ y≤ 50 pc, and -50 pc≤ z≤ 50 pc.
The domain is discretized into a grid with 1024×512×512 cells, resulting in a computational cell size (resolution) of 0.195 pc.
In order to study the effect of the numerical resolution, we also perform two lower-resolution simulations with a grid size of 512×256×256 and 256×128× 128, respectively, and present the results in Appendix <ref>.
The fractal clouds (see Sec. <ref>) are initially positioned at the origin (0,0,0) of the domain.
The clouds possess a core radius of 25 pc and an envelope of width 5 pc. Therefore, the core radius of the cloud is resolved with ∼ 128 cells, this being adequate to capture the overall evolution <cit.>.
The wind is launched from the y-z boundary at x=-50 pc in the positive x-direction.
The details of the cloud setup are described in Sec. <ref> and the wind parameters and their injection in Sec. <ref> and Sec. <ref> respectively.
§.§ Cloud initialisation
Several theoretical and observational studies have shown that the density structures in the turbulent molecular cloud are well described by a log-normal distribution function <cit.>.
Hence, we model the cloud in our simulation such that the probability distribution function (PDF) in terms of the logarithmic density s=ln(ρ/ρ_0) (where ρ_0 is the mean density of the cloud) is given by a lognormal distribution:
P(s) = 1/√(2πσ_s^2)exp[-(s-s_0)^2/2σ_s^2].
Here s_0 and σ_s are the mean and dispersion of the logarithmic density fluctuation, which from normalisation constraints (∫ e^s P(s) ds = 1) are related by s_0=-σ_s^2/2 <cit.>.
The log-normal density field of the cloud is constructed using the pyFC library[<https://www2.ccs.tsukuba.ac.jp/Astro/Members/ayw/code/pyFC>], which generates a periodic random scalar field in 3D Cartesian space from a given PDF and power-law spectrum, D(k)∝ k^-β, in Fourier space, where k is the dimensionless wavenumber.
The two-point fractal distribution is characterised by the slope of the power-law (β), the Nyquist limit k_max and a lower cutoff wavenumber, k_min, which corresponds to the largest spatially-correlated scale (λ_max ≈ L/k_min) for a positive value of β.
For a given value of k_min, the largest size of the perturbations or `cloudlets' is r_cloudlet ≈ L/(2k_min), where L is the size of the periodic box <cit.>.
In this study, we have set the value of β to 1.66 for all the clouds, which falls in the range of cloud density spectral indices for supersonic turbulence <cit.>. The value of k_min is primarily set to 3 (λ_max ≈ 20 pc) for most of the clouds while varying the wind properties to investigate their effects.
Nevertheless, we also explore different values of k_min = 1, 3, 6, and 10, while keeping the wind parameters identical. This allows us to examine the impact of variations in the density distribution within the cloud.
In addition to the fractal density distribution of the cloud, we also initialise a Gaussian random field for each component of the velocity with zero mean and 1-D velocity dispersion of σ_v.
For a particular cloud setup, the values of k_min and β for the velocity field are the same as for the log-normal density field.
We choose the value of σ_v for most of the simulations (except for the cloud with lower mean density) such that the 3D velocity dispersion (σ_v,3D = √3 σ_v) of the cloud is ∼ 8 km s^-1, which is typical of the observed velocity dispersion of GMCs in the Milky Way and nearby galaxies on this scale <cit.>.
We set the mean number density of the fractal cloud to 200 cm^-3, which gives us an initial virial parameter (α_vir = 2E_kin/|E_grav|) of ∼ 0.9, again typical for star-forming clouds <cit.>.
The standard deviation of the density PDF in Eq. (<ref>) of our clouds is calculated using the well-established relation:
σ_s ≈ [ln(1 + b^2 ℳ^2)]^1/2,
which connects the standard deviation of the log-density (σ_s) and the turbulent Mach number (ℳ = σ_v,3D/c_s,rms, where c_s,rms is the root-mean-square sound speed) <cit.>.
The parameter b is the driving parameter of turbulence, which reflects the ratio of energy in compressive and solenoidal modes of the driving and varies between 1/3 (purely solenoidal) and 1 (purely compressive).
Here we set b=0.4 for a mixed mode of turbulence driving <cit.>, as often observed in different environments <cit.>.
The resulting value of σ_s then serves as the input parameter in the pyFC routine which generates the fractal density field.
For the cloud density, we create a 310^3-sized log-normal data cube with a mean of 1, which corresponds to a physical domain of (x,y,z) ∈ [-30 pc, 30 pc][for this choice, the largest correlated density structure for a given value of k_min is λ_max ≈ (60/k_min) pc], following the method described above.
The density cube is then multiplied by the desired mean density of the cloud and a spherical volume of radius 30 pc is extracted from the cube.
The sphere is then tapered with a radially decreasing function in order to ensure a smooth transition between the cloud's edge and the ambient medium:
ρ_c(r) = ρ_a + ρ_cube(r)/cosh[(r/r_core)^8],
where ρ_a, ρ_cube and ρ_c are the ambient density, the original density values in the fractal data cube and the final density of the cloud material used for the simulations, respectively.
Here, r_core = 25 pc is the core radius of the cloud, within which the density matches that of the density cube, surrounded by a 5 pc envelope of radially decreasing density.
Finally, the turbulent cloud is placed at the origin (0,0,0) of the computational domain using a tri-linear interpolation scheme.
We also perform the same procedure for each velocity component where a 310^3-sized Gaussian random data cube is generated for each component of the velocity field and mapped into the computational grid in a similar way to the density initialization.
The remaining portion of the computational domain is initialised with a static ambient medium with a density of 0.1 cm^-3 and a temperature of 10^6 K, while the cloud is initially set to be in pressure equilibrium with this ambient medium.
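A simplified numpy sketch of this initialisation pipeline (a Gaussian random field with a k^-β spectrum cut below k_min, exponentiated into a lognormal cube with σ_s from the b-Mach relation above, scaled to the mean density, and tapered with the cosh profile) is given below. pyFC applies an iterative spectrum correction that is omitted here, so this stand-in reproduces the target spectrum only approximately, and r_core is expressed in cells.

```python
import numpy as np

def lognormal_cloud(n=128, beta=1.66, kmin=3, b=0.4, mach=8.0,
                    n_mean=200.0, n_ambient=0.1, r_core=52.0, seed=42):
    """Sketch of the cloud initialisation: a Gaussian random field with
    D(k) ~ k^-beta (cut below kmin) is exponentiated into a lognormal
    density cube, scaled to the mean density, and tapered with the
    cosh[(r/r_core)^8] profile."""
    rng = np.random.default_rng(seed)
    sigma_s = np.sqrt(np.log(1.0 + b**2 * mach**2))        # sigma_s(b, Mach)
    k1d = np.fft.fftfreq(n) * n                            # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    amp = np.zeros_like(kmag)
    sel = kmag >= kmin
    amp[sel] = kmag[sel]**(-0.5 * beta)                    # amplitude ~ k^(-beta/2)
    noise = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))
    s = np.real(np.fft.ifftn(amp * noise))                 # Gaussian log-density field
    s = s * (sigma_s / s.std()) - 0.5 * sigma_s**2         # set dispersion; s0 = -sigma_s^2/2
    rho = n_mean * np.exp(s)                               # lognormal density cube
    ax = np.arange(n) - 0.5 * n
    r = np.sqrt(sum(c**2 for c in np.meshgrid(ax, ax, ax, indexing="ij")))
    return n_ambient + rho / np.cosh((r / r_core)**8)      # taper to the ambient value

cloud = lognormal_cloud()
print(f"mean density inside the core: {cloud[32:96, 32:96, 32:96].mean():.1f} cm^-3")
```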
§.§ Wind parameters
In this study, our main objective is to examine how AGN-driven “fast” winds or jet-inflated cocoons affect star-forming complexes within host galaxies.
Therefore, we focus on parameters typical of such scenarios, particularly considering the pressure and velocity ranges associated with these AGN-driven processes.
Numerous observational studies have demonstrated that the velocities of ionized winds <cit.> and broad absorption line (BAL) winds <cit.> driven by the central AGN on kiloparsec scales range from a few hundred to several thousand km s^-1 <cit.>.
In this work, we consider the cloud to be located ∼ 1 kpc away from the AGN, and the velocities of the injected winds at this distance are in the range of 400 to 4000 km s^-1, with a total (kinetic + thermal) wind power (L_w) ranging from 10^42 to 10^45 erg s^-1. From theoretical and numerical investigations, these types of winds are found to be very hot and highly pressurised during the energy-conserving phase, with pressures ranging from 10^-10 to 10^-7 dyne cm^-2 depending on the wind power <cit.>.
Within these parameter ranges the thermal energy of the winds dominates the total energy budget, making them subsonic.
Interestingly, the self-similar solution describing an expanding bubble propelled by a central source, as proposed by <cit.>, aligns with these pressure values.
For an AGN-driven bubble with a given injected power (L_w), expanding in a spatially homogeneous ambient medium of density ρ_a, the pressure (P_w) at a distance R_w from the central source is expressed as
P_w = (7/25)(125/154π)^2/3 ρ_a^1/3 L_w^2/3 R_w^-4/3
    ≈ 5.5×10^-10 (n_a/0.1 cm^-3)^1/3 (L_w/10^43 erg s^-1)^2/3 (R_w/kpc)^-4/3 dyne cm^-2.
In this study, we set the pressure of the wind from the above equation (Eq. <ref>) for R_w ∼ 1 kpc, which yields values in the range ∼ 10^-10-10^-8 dyne cm^-2. The pressure is kept constant in time at the injection region, unlike the true <cit.> wind solution. We discuss the implications of this choice in later paragraphs.
Moreover, the velocity of the bubble's forward shock at R_w = 1 kpc, as given by <cit.>,
v_w = (243/3850π)^1/5 ρ_a^-1/5 L_w^1/5 t^-2/5
    ≈ 850 (n_a/0.1 cm^-3)^-1/3 (L_w/10^43 erg s^-1)^1/3 (R_w/kpc)^-2/3 km s^-1,
also predicts values within the considered velocity range of ∼ 400-4000 km s^-1. In Table <ref>, we list the wind parameters for each simulation.
We also consider a wind of power 10^45 erg s^-1 whose velocity is set to 15,000 km s^-1 and whose pressure is such that it is supersonic. The total energy, in this case, is dominated by the kinetic energy (see Table <ref>).
This kind of extreme outflow parameter can appear in the scenario where a cloud directly lies along the path of an AGN jet.
Additionally, this choice allows us to explore the effect of thermal vs. kinetic (subsonic vs. supersonic) winds of the same power in an otherwise identical cloud setup.
In order to completely specify the wind state, the wind density is calculated from energy conservation, i.e., the total energy flux of the thermal and kinetic components is equal to the injected power of the wind,
(1/2 ρ_w v_w^2 + P_w/(γ - 1)) v_w 4π R_w^2 = L_w.
Therefore, for the given values of wind power (L_w), velocity (v_w) and pressure (P_w), the density (n_w) of the wind is given by
n_w ≈ 3×10^-2 (v_w/10^3 km s^-1)^-2 [5.57 (L_w/10^43 erg s^-1)(v_w/10^3 km s^-1)^-1 (R_w/kpc)^-2 - (P_w/10^-10 dyne cm^-2)] cm^-3,
which is typically n_w ≈ 0.01 cm^-3 for the wind powers with the corresponding velocities and pressures considered in this study. Therefore, we set the density of the wind to n_w = 0.01 cm^-3 for all the simulations. With these wind parameters, the mass outflow rate through a surface of area 4π R_w^2 at R_w = 1 kpc lies within the range ∼ 0.7-7 M_⊙ yr^-1, which is typical of the mass outflow rates of ionized winds found in observations <cit.>.
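The three relations above can be evaluated with a few lines of Python; the sketch below implements them in cgs units, assuming μ = 0.6 for the mean molecular weight (the exact value used in the paper may differ slightly, so the numbers are indicative).

```python
import numpy as np

KPC, KMS = 3.086e21, 1.0e5                      # cm, cm/s
MU, M_H = 0.6, 1.673e-24                        # mu = 0.6 is an assumption

def wind_state(L_w, n_a=0.1, R_w_kpc=1.0, gamma=5.0 / 3.0):
    """Evaluate the bubble pressure, the forward-shock velocity scaling,
    and the wind density from energy-flux conservation (the relations
    above), all in cgs. Returns (P_w, v_w [km/s], n_w [cm^-3])."""
    R = R_w_kpc * KPC
    rho_a = n_a * MU * M_H
    P_w = (7.0 / 25.0) * (125.0 / (154.0 * np.pi))**(2.0 / 3.0) \
        * rho_a**(1.0 / 3.0) * L_w**(2.0 / 3.0) * R**(-4.0 / 3.0)
    v_w = 850.0 * KMS * (n_a / 0.1)**(-1.0 / 3.0) \
        * (L_w / 1e43)**(1.0 / 3.0) * R_w_kpc**(-2.0 / 3.0)
    # (0.5 rho_w v_w^2 + P_w/(gamma-1)) v_w 4 pi R^2 = L_w  =>  solve for rho_w
    rho_w = (2.0 / v_w**2) * (L_w / (4.0 * np.pi * R**2 * v_w)
                              - P_w / (gamma - 1.0))
    return P_w, v_w / KMS, rho_w / (MU * M_H)

for L in (1e42, 1e43, 1e44, 1e45):
    P, v, n = wind_state(L)
    print(f"L_w = {L:.0e} erg/s: P_w = {P:.2e} dyn/cm^2, "
          f"v_w = {v:.0f} km/s, n_w = {n:.3f} cm^-3")
```

For L_w = 10^43 erg s^-1 this reproduces P_w ≈ 5.5×10^-10 dyne cm^-2 and v_w ≈ 850 km s^-1, consistent with the scalings quoted above.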
Although we set the pressure and velocity of the winds with different powers assuming that the cloud is located at R_w = 1 kpc from the AGN, from Eq. (<ref>) and (<ref>), it becomes apparent that similar values of P_w and v_w can be achieved for a wind of different power when considering an alternative R_w value.
For instance, P_w and v_w are very similar for winds with L_w = 10^44 erg s^-1 at R_w = 1 kpc and L_w = 10^42 erg s^-1, but at R_w = 0.1 kpc.
Therefore, the wind parameters considered in this study not only reflect the strength of the wind at a particular location (∼ 1 kpc) for varying wind powers, but can also be mapped to different locations relative to the central AGN for a particular wind power.
It is essential to emphasise that the <cit.> solution is only used to set the reference values of the wind parameters, which are kept constant throughout the simulation.
In reality, the radius of the bubble (R_ w) increases with time as it expands. Therefore, the wind solutions are time dependent. Ideally, one should consider a self-consistent evolution of the bubble with time for the wind injection. Additionally, the bubble solution of an AGN-driven wind solution consists of several distinct internal structures, i.e., the forward shock, shocked ambient medium, shocked wind and wind material <cit.>, which are not generally easy to implement in local box simulations like ours. Moreover, as our main focus is to investigate the effect of a steady wind whose parameters are similar to AGN-driven outflows on kpc scales on star-forming complexes, we adopt the simplistic wind injection model. Importantly, the main purpose of this study is to investigate the impact of the wind on clouds with and without self-gravity. Thus, while details of the wind modelling are simplified, we can still make a meaningful comparison, as clouds in the simulation with and without self-gravity face the exact same wind.
§.§ Cooling
In order to account for the energy losses due to radiative cooling (see Eq. <ref>), we use the non-equilibrium cooling function calculated using Mappings V code <cit.>.
This code utilizes a comprehensive database of atomic data to self-consistently compute the optically thin cooling rate for various gas phases, including cold neutral, warm neutral, partially ionized, and fully ionized gas in the temperature range 10^2-10^9 K.
For temperatures exceeding ∼ 10^9 K, the cooling function is extended by assuming bremsstrahlung emission <cit.>.
We also impose a lower temperature threshold of T_ floor = 100 K, below which the cooling is turned off and the temperature of any cell falling below T_ floor is enforced to stay at T_ floor.
No additional heating terms, such as the galactic UV background or ionizing photons (UV, soft and hard X-ray) from the AGN, are included.
In our simulations, the gas that can cool below 10^4 K is sufficiently dense (n_c > 500 cm^-3).
In this density range, the high-density cores can be safely assumed to be self-shielded from the external UV and soft X-ray photons <cit.>.
Conversely, due to the very small photo-absorption cross-section of hard X-ray photons (E≳ 20 keV, primarily due to K-shell ionization of Fe and Ni ions) <cit.>, the cloudlets remain optically thin, resulting in negligible heating.
Through explicit radiative-transfer calculations using Cloudy <cit.> with the AGN spectral energy distribution (SED) used in <cit.>, we confirm that ionization within the dense region (n_c > 500 cm^-3) by an external radiation field (AGN and UV background) is insignificant, even for an AGN with a bolometric luminosity of L_bol = 10^45 erg s^-1, where radiation can only penetrate to a depth of ≲ 0.5 pc from the illuminated surface.
This depth is even smaller for lower luminosities or higher densities.
Therefore, the impact of ionising radiation can be safely disregarded.
We create a table of the cooling rate Λ as a function of temperature in the range of 10^2-10^10 K assuming solar metallicities <cit.>. The values are then interpolated to every computational cell at runtime and are treated as a source term in Eq. (<ref>).
The equation is solved using a fractional step formalism, where the hydrodynamic evolution and source step are solved separately through operator splitting.
The energy losses from radiative cooling are computed by integrating the internal energy equation
∂(ρe)/∂t = - (ρ/μ m_H)^2 Λ(T),
using an adaptively-chosen, explicit or semi-implicit Embedded Runge-Kutta method, depending on the “stiffness” of the equation <cit.>.
We note that we do not account for the explicit density dependence of the cooling function (Λ), which becomes significant in gas with T ≲ 10^4 K and n ≳ 10^4 cm^-3 in the case of non-equilibrium cooling.
In our simulations, however, the initial temperature of gas with n > 10^3 cm^-3 is already below the floor temperature of 100 K, where radiative cooling is turned off.
Additionally, the shock-heated gas in our simulations rarely reaches the density and temperature ranges where the density dependence of the cooling curve would be significant.
Therefore, the simplification of the cooling curve as a function of temperature only does not affect our results.
Moreover, our simulations do not include molecular cooling, which might be a dominant mechanism below T = 1000 K, and completely dominates the cooling process below 100 K <cit.>.
Therefore, for more accurate modelling of gas cooling and star formation, one must include the low-temperature processes of the cold molecular phase <cit.>.
Nonetheless, as we impose a temperature floor of 100 K, the temperature does not fall below this range. Hence, the absence of molecular cooling does not affect the result significantly.
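Schematically, the cooling source step amounts to integrating the internal energy equation per cell with the 100 K floor; the sketch below uses simple explicit subcycling and a toy power-law Λ(T) as a placeholder for the tabulated Mappings V curve (the actual implementation uses an adaptively chosen explicit or semi-implicit embedded Runge-Kutta integrator, so this is an assumption-laden simplification).

```python
import numpy as np

K_B = 1.381e-16                      # Boltzmann constant [erg/K]

def lam_toy(T):
    """Toy power-law cooling function [erg cm^3 s^-1]; a placeholder for
    the tabulated Mappings V curve interpolated at runtime."""
    return 2e-23 * np.sqrt(T / 1e6)

def cool_cell(n, T, dt, T_floor=100.0, cfl=0.1):
    """Integrate d(3/2 n k_B T)/dt = -n^2 Lambda(T) over one hydro step
    by explicit subcycling, each substep limited to a fraction of
    t_cool, with the 100 K floor below which cooling is switched off."""
    t = 0.0
    while t < dt and T > T_floor:
        t_cool = 1.5 * K_B * T / (n * lam_toy(T))   # = 3 k_B T / (2 n Lambda)
        step = min(cfl * t_cool, dt - t)
        T = max(T * (1.0 - step / t_cool), T_floor)
        t += step
    return T

# a 200 cm^-3, 3000 K cell over a ~1 kyr hydro step cools to the floor
print(cool_cell(n=200.0, T=3.0e3, dt=3.15e10))
```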
§.§ Wind injection and boundary conditions
We approximate the AGN-driven spherically symmetric wind at a distance of 1 kpc from the central AGN as a planar wind on the scale of the cloud, propagating in the positive x-direction.
The initial position of the forward shock is set at x = -40 pc and the domain within -50 pc ≤ x ≤ -40 pc is initialised with the wind state.
The wind is constantly injected from the left boundary along the x-direction using an inflow boundary condition, where we populate the fluid quantities (pressure, velocity and density) in the ghost cells with the wind properties as listed in Table <ref>.
However, one complication of subsonic inflows is that only two out of three characteristic waves enter the domain, while the third leaves.
This implies that only two out of three primitive variables (ideally density-pressure or density-velocity pairs for well-posed conditions, e.g., see Sec. 19.3 of <cit.>) can be specified physically at the boundary, with the remaining variable set numerically from the interior solution <cit.>.
Yet, a difficulty arises due to the finite size of the computational domain: the outgoing wave, which should freely exit the domain, experiences numerical reflection at the boundary, thereby reducing the accuracy of the interior solution <cit.>.
Therefore, the value of the third primitive variable should be chosen to allow the wave to exit the domain with minimal reflection, a condition commonly referred to as the non-reflecting boundary condition (NRBC), which itself is a broad area of research <cit.>.
However, in our simulations, we do not include such a treatment to minimize boundary reflections.
As a result, the subsonic winds injected at the ghost-cell layers, do not emerge with the same parameters (density, velocity, pressure, Mach, and power) as tabulated in Table <ref>.
Fig. <ref> depicts the time evolution of the wind density (top-left), velocity (top-right), pressure (bottom-left), and power (bottom-right) for the subsonic cases, as indicated in the legend.
The values are calculated by taking the average over a yz slice at i = 1, i.e., the first interior cell from the x-left boundary. We observe that all wind parameters experience an initial transient phase when the wind starts to progress through the stationary ambient medium, sweeping up the material.
However, after t ≳ 0.4 Myr, the parameters saturate to constant values, which differ from the injected values at the ghost zone.
In Table <ref>, we tabulate the injected values of the wind parameters and their saturated values.
Nonetheless, the deviations of the parameters from the intended values are similar for all the powers.
The wind velocities exhibit the largest deviation, reaching ≲ 75% above the intended values. Conversely, pressure values decrease by ≲ 40%, thereby increasing the Mach numbers of the winds (see Table <ref>), which nevertheless remain subsonic.
However, as the winds are dominated by thermal energy, the deviation of the wind power from the intended values is smaller, an increase of ≲ 30%.
Nevertheless, since our main objective in this study is to investigate the interaction between a cloud and winds possessing parameters akin to AGN-driven winds at specific powers, the qualitative outcomes among winds of varying powers, as well as the impact of self-gravity, remain unaffected.
Moreover, the saturated values of the wind parameters, i.e., density, velocity, pressure and power (see Table <ref>) fall well within the desired AGN wind parameter space, as laid out in Sec. <ref>.
Furthermore, implementing non-reflecting boundary conditions at the inflow boundary does not ensure that the wind will emerge with the intended power, as one of the three primitive variables remains unconstrained and is set numerically.
Indeed, as demonstrated in Appendix <ref>, even though the physically specified primitive variables (density and velocity) remain constant, the pressure and power of the wind deviate significantly (∼ 90%) from the intended value, exceeding that of the simulation using the wind injection method employed in this study.
Except for the inflow boundary, all other boundaries of the computational domain are set to diode boundary conditions – the modification of the outflow boundary condition that prevents inflow into the domain so that gas can only leave the computational domain.
We employ isolated boundary conditions for the gravitational potential (Φ) on all sides of the domain, which is calculated using a multipole expansion of the density distribution up to order l=4 <cit.>.
§.§ Simulations and naming convention
In this study, we conduct a total of 13 three-dimensional (3D) simulations (including 10 simulations with self-gravity) covering a large parameter space of both the wind and cloud.
An additional two simulations (with and without self-gravity) are performed without wind in order to ascertain the effects of winds.
For all these simulations, we initialize the cloud with an initial virial parameter, α_ vir, 0 = 0.9, and using this value, we determine the initial mean cloud density and velocity dispersion.
In our fiducial simulations, the mean density of the cloud (n_c) is set to 200 cm^-3 with k_min = 3 and an initial velocity dispersion of 8 km s^-1 to achieve α_vir,0 = 0.9.
Therefore, the mass of the standard cloud in our simulation is M_c = 2.873×10^5 M_⊙, which is typical of the average mass of GMCs in the Milky Way and nearby galaxies <cit.>.
To investigate the effect of a similar power wind on a cloud with lower density,
we consider one simulation with a lower mean density of 20 cm^-3 and a velocity dispersion of 2.5 km s^-1.
The fiducial wind in our simulation is thermal energy-dominated.
The wind parameters and the minimum wavenumber of the cloud are varied as discussed in Sec. <ref> and <ref>.
The initial conditions and parameters, along with the names of all the simulations, are listed in Table <ref>.
It is useful to clarify the naming convention of the simulations.
For example, in GC43_k3, `G' implies self-gravity is present and `C' stands for cooling, which is common in all the simulations.
The number `43' represents the total power of the wind, 10^43 erg s^-1. The label `k3' represents the minimum wavenumber of the cloud; in this case k_min = 3.
We explicitly add the label `_low' to `GC43_k3' for the case of the lower mean density cloud with a wind power of 10^43 erg s^-1.
Similarly, `_kinetic' is added to `GC45_k3' to indicate that the wind of power 10^45 erg s^-1 is kinetic energy dominated.
If not specified, the default minimum wavenumber of the cloud is k_min = 3.
§.§ Relevant timescales
There are various dynamical timescales involved in the problem that we consider in this study; their values for the different simulations are tabulated in Tab. <ref>, and a simple numerical evaluation for the fiducial parameters is sketched after the list below.
* The shock-passing time (t_sp), i.e., the approximate time the initial shock takes to sweep over the whole cloud <cit.>:
t_sp ≈ 2R_c/v_w,
where R_c is the radius of the cloud and v_w is the velocity of the wind.
* The cloud-crushing time (t_cc), the typical time in which the shock will compress the cloud <cit.>:
t_cc ≈ R_c/v_st ≈ χ^1/2 R_c/v_w,
where v_st is the velocity of the transmitted shock into the cloud and χ = ρ_c/ρ_w is the density contrast between the cloud and wind.
* The freefall timescale:
t_ff = √(3π/(32Gρ_c)).
* The cooling timescale <cit.>:
t_cool = 3k_B n_c⟨T_c⟩/(2n_c^2 Λ(T)),
where k_B is the Boltzmann constant, and n_c and ⟨T_c⟩ are the average initial number density and temperature of the cloud. For the fiducial simulations with an initial mean number density of n_c = 200 cm^-3 and a mean temperature of ⟨T_c⟩ ≈ 3×10^3 K, as estimated from the pressure equilibrium condition with the ambient medium, the typical cooling timescale is of the order of 3×10^2 yr, significantly shorter than the other relevant timescales (see Tab. <ref>). However, it is essential to recognize that this estimate relies on average temperature and density values, whereas the fractal nature of the clouds in the simulations introduces a wide range of temperatures due to the inhomogeneous density distribution. Therefore, the cooling timescale can exhibit significant variations across different regions within the cloud.
* The drag timescale (t_drag), by which the cloud is accelerated to a velocity similar to that of the wind <cit.>:
t_drag ≈ χ^1/2 t_cc.
* The time (t_KH) for the Kelvin-Helmholtz (KH) instability to grow for χ ≫ 1 <cit.>:
t_KH ∼ χ^1/2/(k_KH v_rel) ≈ (4/3) r_cloudlet χ^1/2/v_w,
where k_KH and v_rel are the wavenumber of the KH perturbations and the relative velocity between the post-shock background and the cloud. For χ ≫ 1, the relative velocity is approximately equal to the post-shock velocity of the background, i.e., v_rel ≈ 3v_w/4. We set k_KH ∼ 1/r_cloudlet, because even though the instabilities at the smallest wavelengths grow more rapidly, the most detrimental wavelengths are those approximately equal to the cloudlet radius <cit.>.
§ RESULTS
§.§ Morphological evolution of the cloud
§.§.§ General features
The evolution of a cloud impacted by a highly pressurized wind can be divided into three phases.
Initially, as the wind comes into contact with the cloud, it initiates an internal shock propagating from the cloud's surface toward its centre.
The force exerted by the wind's ram pressure causes compression within the cloud, leading to a rise in the average density of the cloud material.
The log-normal density distribution of the clouds results in many high-density fragments (cloudlets) and low-density channels inside the fractal clouds.
This aids the wind to propagate deeper into the clouds, resulting in a stronger interaction with the cloud and deeper penetration when compared to cases with a uniform spherical cloud, as demonstrated in Fig. <ref> <cit.>.
Subsequently, as the wind continues to flow downstream wrapping around the cloud, a shear layer forms at the wind-cloud interface, leading to the onset of Kelvin-Helmholtz (KH) instability.
As a result, the outer layer of the cloud gets stripped, mixed with the wind, and funnelled downstream, where it condenses (middle-left panel of Fig. <ref>), giving rise to a series of cold, dense cloudlets that form a trailing tail behind the main cloud.
During this phase, if no cooling mechanism is present, the energy transport from the wind to the cloud in the form of thermal energy should increase the temperature of the cloud material, which would result in an expansion of the cloud, making the cloud more prone to destruction <cit.>.
However, with the presence of radiative cooling, the compressed cloudlets cool efficiently, leading to a cessation of internal pressure support, contraction, and the formation of denser cloudlets.
This contraction enhances the density contrast (χ) between the cloudlets and the surrounding hot shear flow, consequently reducing the growth rate of KH instabilities (Eq. <ref>).
Therefore, in the presence of radiative cooling, the cloudlets are relatively protected from instabilities induced by the shear flow compared to a non-radiative scenario <cit.>.
While the presence of radiative cooling reduces the growth rate of instabilities, it does not provide complete protection.
Therefore, the wind-induced shear flow along the cloudlet surface still causes the entrainment of cloud material, albeit to a lesser extent compared to a non-radiative scenario <cit.>.
Additionally, the direct momentum transfer from the wind accelerates the cloudlets.
Combined with the shredded cloudlets from the main cloud due to the KH instability, these fragments persistently contribute to the elongation of the cloud, eventually leaving the computational domain (bottom-left panel of Fig. <ref>).
Although the general evolutionary stages of a cloud impacted by a highly pressurized wind are similar for all cases, the corresponding time scales and the overall evolution can depend on several factors which are discussed in subsequent sections.
§.§.§ Gravity vs. no-gravity
Here, we examine the effect of self-gravity on the morphological evolution of the cloud by comparing it with a simulation that shares identical initial conditions and wind strength but lacks self-gravity.
In Fig. <ref>, we show the column density map in the xy plane of the simulations both without (C42_k3; left panels) and with (GC42_k3; right panels) self-gravity at three different times.
The power of the wind is 10^42 erg s^-1.
During the initial compression phase, the mean density of the cloud in both cases (with and without self-gravity) increases.
After the initial shock-induced fragmentation phase, in the absence of self-gravity, the pressure-confined fragments undergo the highest attainable compression due to the transmitted shock into the cloudlets (middle-left panel of Fig. <ref>).
Once this compression limit is reached, the density of the clumps cannot increase any further.
Hence, the cloud as a whole commences disintegration, resulting in cloud expansion (bottom-left panel).
We point out that the expansion is not a result of adiabatic heating in the absence of radiative cooling, as outlined in various studies <cit.>.
Instead, the expansion occurs because the low-density channels within the fractal cloud facilitate the infiltration of high-velocity wind material, which induces turbulence and vorticity within the cloud <cit.>, leading to the transfer of momentum from the wind to the cloud material.
This momentum transfer is what ultimately causes the cloud to expand.
In contrast, when self-gravity is present, the shock-compressed cloudlets become gravitationally bound and start to accrete material from the surroundings (middle-right panel of Fig. <ref>), thereby attaining significantly higher density and mass compared to the case without self-gravity.
Additionally, when acting in an inhomogeneous medium, self-gravity causes additional local fragmentation of the cloud, which creates many low-density channels for the wind to propagate inside the cloud, resulting in a stronger interaction.
The increased average cloud density produced by compression deepens the gravitational potential, resulting in a more compact and tightly bound cloud structure.
As a result, the interplay between the gravitational pull and the strength of the wind-cloud interaction becomes the pivotal factor in deciding whether the cloud will experience a runaway collapse or will be disintegrated by the force of the wind.
For a low-power wind, e.g., 10^42 erg s^-1 as shown in Fig. <ref>, the momentum transfer from the wind to the cloud is significantly lower, and therefore less gas is pushed out of the potential well of the cloud (which becomes deeper with time as more gas is accreted) compared to a high-power wind.
Therefore, the wind triggers a runaway collapse of the cloud in the presence of self-gravity as can be seen from the bottom-right panel of Fig. <ref>.
In contrast, in the simulation without self-gravity, the cloud expands (bottom-left panel of Fig. <ref>) and eventually will be elongated and destroyed by the wind.
§.§.§ Effect of varying wind power
Here we show the morphological differences of the cloud in self-gravitating simulations with different wind velocities and pressures, which serve as proxies for the power of the AGN-driven wind. Fig. <ref> illustrates the column density map in the xy plane of simulations with wind power 10^42 (GC42_k3), 10^43 (GC43_k3), 10^44 (GC44_k3) and 10^45 erg s^-1 (GC45_k3) at t = 0.85 t_cc.
The corresponding physical time is also shown in each panel.
For the lower power cases (GC42_k3 and GC43_k3), the initial freefall time is shorter than the growth timescale of the KH instability due to the lower wind velocity (t_KH ∝ 1/v_w), and the freefall time can be further reduced by the dramatic increase of density through compression.
In this scenario, the cloud as a whole undergoes runaway gravitational collapse before any instability has a chance to act.
This is clearly visible in the top-left panel, where the cloud in GC42_k3 has collapsed by this time, becoming very compact and highly dense.
Nonetheless, due to the continuous ablation, the loosely bound outer layer of the cloud has been gradually stripped away by the wind, giving rise to a tail in the direction of the wind.
In GC43_k3 (top-right), the cloud is undergoing collapse but at a somewhat lower rate than in the 10^42 erg s^-1 case due to the stronger wind and comparatively quicker growth of shear instabilities (such as the KH instability).
Thus, even though the cloud's central region is experiencing collapse, the wind's influence extends over a larger section of the cloud, resulting in a more extended structure compared to the 10^42 erg s^-1 case.
The cloud disruption timescales in GC44_k3 (bottom-left) and GC45_k3 (bottom-right) are much shorter than the free-fall time due to increased velocity and ram pressure (see Tab. <ref>).
Therefore, a high-power wind induces much stronger turbulence and percolation of gas inside the inhomogeneous cloud, preventing the runaway collapse globally, compared to a low-power wind.
Thus, in this scenario, the cloud is ablated and disrupted significantly before the influence of gravity becomes important.
However, even in the case of a strong wind, the small cloudlets formed by the fragmentation become gravitationally bound and shielded from the external wind, by forming a high-density post-shock outer layer both due to compression and radiative cooling, preventing the wind material from infiltrating these cloudlets.
Therefore, in the absence of any opposing mechanism, very high-density cloudlets collapse locally, while the wind ablates comparatively low-density material, shaping the cloud into an elongated and extended structure of cloudlets.
However, it is important to note that the density contrast between the cloud and wind material in these simulations is very high (χ ∼ 2×10^4), which results in a very long drag timescale (t_drag ∼ √χ t_cc), the time by which the cloudlets are expected to attain a velocity similar to that of the wind.
Therefore, even in scenarios of high power, where the cloud's overall collapse is impeded by the wind, a considerable portion of the fragmented cloudlets persists and remains gravitationally bound to the central potential well due to the inability to attain escape velocity, owing to the large drag timescale.
§.§.§ Dependence on cloud fractal wavenumber
The maximum size of individual cloudlets inside the cloud (≈ L/(2k_min)) is parameterized by the minimum wavenumber k_min.
Therefore, a higher value of k_min results in a larger number of small cloudlets as well as low-density inter-cloudlet channels.
As discussed earlier, the interaction between the wind and a fractal cloud is influenced by the extent to which the wind material can permeate the medium between individual clumps separated by low-density cloud material, in addition to its impact on the fractal surfaces of the cloud.
Thus, one can anticipate that in the case of a higher value of with a greater abundance of low-density channels, it is easier for the wind to percolate through these channels into the cloudlets and mix with cloud material, thereby transferring energy and momentum.
Additionally, due to the reduced size of the cloudlets, KH instabilities grow faster (t_KH ∝ 1/k_KH) in a cloud with lower λ_max (i.e., higher k_min).
Consequently, it is expected that a cloud with a higher k_min value would undergo more pronounced disruption.
Hence, in order to investigate the impact of the cloud fractal wavenumber, we simulate the wind-cloud interaction for clouds with varying values of k_min (= 1, 3, 6, 10), while keeping the mass and mean density of the clouds fixed.
The simulations are initiated with self-gravity and a wind of power 10^43 erg s^-1.
In Fig. <ref>, we show the number density slice[Here we show density slices instead of the column density, as presented earlier, to better demonstrate the percolation of the wind within the clouds for different values of the k_min parameter.] through the z=0 plane for these four cases (row-wise) at 0 Myr (left) and 1.27 Myr (right).
If we first consider the k_min = 1 case (`k1', first row), the cloud contains approximately one big cloudlet with increasing density towards its centre, surrounded by low-density outer layers, which are rapidly eroded by the wind through ablation.
Nevertheless, owing to its larger size and the added shielding from compression and cooling, the shock is unable to penetrate deep into the clump, at least not at early times compared to the other cases.
Furthermore, external over-pressurization triggers the gravitational collapse of the clump to form a compact, gravitationally bound structure.
In the k_min = 3 case (`k3', second row), the cloud is initially composed of many small, dense clumps separated by wide inter-clump channels.
These channels help the high-velocity wind material to percolate into the cloud, transferring momentum.
This results in more fragmentation of the cloud, preventing the global collapse, unlike in the k_min = 1 case.
In clouds with smaller-scale density perturbations, as in the k_min = 6 (`k6') and 10 (`k10') cases (third and fourth rows), the number of cloudlets is much higher, and the inter-cloudlet channels are consequently narrower.
During the initial interaction, the swept-up material by the wind creates a dense layer near the interaction area, blocking the entrance of these narrow channels. This prevents the wind from dispersing the cloudlets efficiently, resulting in the accumulation near the centre.
Thus, external over-pressurization triggers the global collapse of the cloud in these cases similar to the `k1' case.
However, a striking difference between the `k1' case and the k_min = 6, 10 cases is evident.
In the former, cloudlets form through fragmentation and stripping of the large clump.
In contrast, in the 'k6' and 'k10' cases, cloudlets were initially seeded by the fractal generator itself at the initialisation of the simulation and they remain organized in a more spherically symmetric manner at later times.
§.§.§ Dependence on the mean cloud density
In order to understand the influence of the density contrast between the cloud and wind, we consider two simulations including self-gravity with an identical wind power of 10^43 erg s^-1, but two different values of the initial mean cloud density, namely, n_c = 200 cm^-3 (χ = 2×10^4, GC43_k3) and 20 cm^-3 (χ = 2×10^3, GC43_k3_low).
Fig. <ref> shows the column density map in the xy plane of the simulations with n_c = 200 cm^-3 (left column) and 20 cm^-3 (right column) at different times (row-wise).
We observed that the cloud in GC43_k3 undergoes gravitational collapse due to the compression from the wind.
However, in the case of n_c = 20 cm^-3, the value of χ is one order of magnitude lower, resulting in a much shorter cloud-crushing time (∼ 1 Myr) and drag time (∼ 44 Myr).
Hence, the wind rapidly disintegrates the cloud in this case, giving rise to a filamentary structure before the gravitational force has a chance to significantly impact the evolution.
Additionally, due to the relatively shorter drag time, the fragmented cloudlets attain a sufficiently high velocity to effectively overcome the gravitational potential and get dispersed.
§.§.§ Thermal wind vs. kinetic wind
Until now, the energy budget of the wind – specifically, whether the energy carried by the wind is primarily thermally dominated (subsonic) or kinetically dominated (supersonic) – has received limited attention in the context of the traditional `cloud-crushing' problem.
Most of the previous studies focus on the impact of a supersonic wind on the cloud, which generally holds true for galactic or starburst-driven winds.
However, as indicated by various theoretical and numerical investigations <cit.>, the progression of AGN-driven winds can encompass various evolutionary stages—such as pressure-dominated and kinetic-energy dominated phases—contingent upon diverse parameters, including the black hole mass, the launch velocity of the wind at the accretion scale, ambient density profile, and various cooling mechanisms, among others <cit.>.
Thus, it is useful to investigate how different kinds of wind, despite having the same power, affect the evolution of the cloud, as pressure and momentum couple differently to the hydrodynamics.
Thus, to examine the effect of a thermal vs. a kinetic wind on the evolution of the cloud, we consider two different simulations with the same initial cloud configuration and wind power of 10^45 erg s^-1, but varying wind properties.
One simulation involves a thermal wind (GC45_k3), characterized by a Mach number (ℳ_w) of 0.28 (as used in the previous simulation comparisons in this study so far), while the other initialises a kinetic wind (GC45_k3_kinetic, see Tab. <ref>) with a higher velocity (15,000 km s^-1) but lower pressure (10^-10 dyne cm^-2) such that the total power is 10^45 erg s^-1, resulting in ℳ_w = 12.16.
Fig. <ref> displays the column density map in the xy plane of the simulation with the thermal wind (left column) and the kinetic wind (right column) at different times (row-wise).
Evidently, the cloud impacted by the kinetic wind undergoes a higher fraction of ablation across all stages of its evolution, compared to the thermal wind's effect.
We observe a significantly larger amount of gas in the low-density tail in the kinetic wind case, which has been stripped from the original cloud by the wind.
Although the two winds possess the same power, their effects differ because cloud material is primarily entrained and accelerated by direct momentum transfer from the wind rather than by PdV work <cit.>.
With the thermal wind containing less kinetic energy and consequently, lower ram pressure compared to the kinetic energy-dominated wind, the mixing and acceleration of the cloudlets are significantly lower for the thermal wind case.
In contrast, due to higher ram pressure, the direct momentum transfer is much higher for the case of the kinetic wind, leading to increased turbulence and vorticity, and causing the cloud to expand kinetically.
As a result, the kinetic wind, despite possessing the same power, is expected to exhibit more destructive behaviour than a thermal wind with equivalent power.
§.§ Cloud dynamics
In this section, our main emphasis lies in examining how various initial conditions influence the dynamical evolution of the cloud, i.e., mass loss, gas turbulence, cloud elongation, acceleration, etc.
In order to extract a particular quantity (Ψ) for the cloud material from the whole simulation domain, we use the definition of the mass-weighted volume average of Ψ by <cit.>,
⟨Ψ⟩ = ∫Ψρ C dV/∫ρ C dV,
where ρ is the density and C is the cloud tracer (a passive scalar that traces cloud material, i.e., if a cell contains only cloud material then C=1; if the cell contains no cloud material C=0, such that any mixture of cloud and non-cloud material in a cell can be represented by C).
To exclude significant fractions of wind and hot mixed material from the cloud, we impose a threshold on the temperature, i.e., only cells with T < 10^4 K are used in Eq. (<ref>).
This gives us a reliable estimate of a particular quantity of cold gas embedded in a hot wind.
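In practice, the tracer-weighted average above with the temperature cut reduces to a masked weighted sum over the snapshot arrays; a numpy sketch (with mock data standing in for an actual snapshot, so all array values below are placeholders) could look as follows:

```python
import numpy as np

def cloud_average(quantity, rho, tracer, temp, dV, T_max=1.0e4):
    """Mass-weighted volume average of `quantity` over cloud material
    (the tracer-weighted integral above), keeping only cells colder
    than T_max to exclude hot wind-mixed gas."""
    mask = temp < T_max
    w = (rho * tracer)[mask] * dV
    return np.sum(quantity[mask] * w) / np.sum(w)

# mock snapshot arrays standing in for a (Nx, Ny, Nz) simulation output
rng = np.random.default_rng(1)
shape = (64, 32, 32)
rho = rng.lognormal(np.log(200.0), 1.0, size=shape)   # density
C   = rng.uniform(0.0, 1.0, size=shape)               # cloud tracer
T   = rng.uniform(1e2, 1e6, size=shape)               # temperature [K]
vx  = rng.normal(0.0, 8.0, size=shape)                # velocity [km/s]
print(f"<v_x>_cloud = {cloud_average(vx, rho, C, T, dV=0.195**3):.2f} km/s")
```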
One common convention in all the line plots presented in this section is that the greyed-out parts of the lines (if they exist) correspond to the regime where the Jeans length of the highest density cell is not resolved by four resolution elements (e.g., see Fig. <ref>).
This occurs exclusively in the self-gravitating simulations, where a few cells can reach very high densities whose Jeans length cannot be resolved by at least 4 computational cells, which is necessary to avoid artificial fragmentation <cit.>.
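The resolvability criterion used for the greyed-out segments can be checked per cell as below; this sketch assumes μ = 0.6 and an adiabatic sound speed, which may differ in detail from the code's internal check.

```python
import numpy as np

PC = 3.086e18
G, MU, M_H, K_B = 6.674e-8, 0.6, 1.673e-24, 1.381e-16   # cgs; mu assumed

def jeans_resolved(n, T, dx_pc=0.195, n_cells=4):
    """Truelove-style check: is the local Jeans length resolved by at
    least n_cells computational cells?"""
    rho = n * MU * M_H
    c_s2 = (5.0 / 3.0) * K_B * T / (MU * M_H)   # adiabatic sound speed squared
    lam_J = np.sqrt(np.pi * c_s2 / (G * rho))
    return lam_J >= n_cells * dx_pc * PC, lam_J / PC

ok, lam = jeans_resolved(n=1.0e5, T=100.0)
print(f"Jeans length = {lam:.2f} pc, resolved: {ok}")
```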
§.§.§ Density PDF
In this section, we investigate the influence of self-gravity and wind power on the distribution of cloud density.
The left panel of Fig. <ref> illustrates the number density distribution at t = 0.9 t_cc = 3.48 Myr in simulations with self-gravity (solid line) and without (dashed line), considering a wind power of 10^42 erg s^-1.
The grey dashed-dotted line represents the initial probability distribution function (PDF) for the cloud number density.
Notably, we observe an increase in the low-density tail of the PDFs due to the stripping and mixing of the cloud material with the wind.
This behaviour remains consistent regardless of the presence of self-gravity.
On the other hand, the wind significantly compresses a substantial portion of the intermediate-density gas (∼ 10-100 cm^-3) to higher densities.
Interestingly, the highest-density regions, which are surrounded by comparatively lower-density gas, exhibit minimal influence from the wind, as the high-density tail of the PDF in the simulation without self-gravity at 3.48 Myr (blue dashed line) coincides with the initial PDF.
However, in the presence of self-gravity, the densities of these self-gravitating cores increase by accreting lower-density gas from the surroundings, forming an extended, high-density power-law (PL) tail (solid line), which is a prominent feature in turbulent clouds when self-gravity becomes important <cit.>.
The right panel of Fig. <ref> depicts the density PDF of the self-gravitating cloud at t = 0.26 t_cc = 1 Myr, for simulations with different wind power.
This time approximately corresponds to the stopping point of the highest power simulation (GC45), where due to increased pressure, the simulation necessitates substantially lower time steps, rendering computations beyond this time prohibitively resource-intensive.
Nonetheless, due to the shorter cloud-crushing timescales (see Tab. <ref>), all the major evolutionary stages of the clouds are well captured even for the relatively short duration of the simulations.
Notably, as the wind power increases, the compression timescale by the wind decreases, resulting in more rapid growth of the high-density tail of the PDF.
However, at this early stage of the simulations, the influence of self-gravity on the cloud's evolution has not yet given rise to the emergence of the PL tail in the density PDF (see Fig. <ref> where we show the density PDF at the same cloud-crushing time, which corresponds to different absolute times for different simulations).
Nonetheless, the mean of the PDF shifts to higher values as the wind power increases due to elevated compression ratios.
Interestingly, for the higher power winds (GC44 and GC45), a secondary peak in the low-density regime of the PDF can be observed (at n ∼ 10^-2 and 1 cm^-3 for GC44 and GC45, respectively), owing to the presence of a significant portion of hot, diffuse and mixed cloud material which has been stripped off from the cloud by the wind.
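As an illustration of how these distributions are obtained, a minimal sketch of the mass-weighted density PDF construction follows; the binning choice is arbitrary and the array names are placeholders:

```python
import numpy as np

def density_pdf(n, rho, C, dV, nbins=100):
    """Mass-weighted PDF of the cloud number density n, normalised to unit
    area in log10(n); each cell is weighted by its cloud mass rho*C*dV."""
    mass = rho * C * dV
    log_n = np.log10(n)
    hist, edges = np.histogram(log_n, bins=nbins, weights=mass)
    pdf = hist / (hist.sum() * np.diff(edges))  # probability density per dex
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, pdf
```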
§.§.§ Evolution of the cold gas mass
A commonly employed method for understanding cloud survival is to calculate the evolution of the cloud mass with n ≥ n_c/3, where n_c is the mean number density of the cloud <cit.>.
However, as we are interested in the evolution of the cloud gas fraction (which is essential for potential star formation of the cloud), this particular definition might encompass certain portions of mixed and shock-heated dense gas.
Thus, we define the cold gas mass as
M_ cold = ∫_T<10^4 Kρ C dV.
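In discretized form this is a single masked sum; a sketch consistent with the cloud_average helper above (array names again illustrative):

```python
import numpy as np

def cold_gas_mass(rho, C, dV, T, T_max=1.0e4):
    """Cold cloud mass: sum of rho*C*dV over all cells with T < T_max."""
    mask = T < T_max
    return np.sum(rho[mask] * C[mask] * dV[mask])
```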
In Fig. <ref>, we show the evolution of the cold gas mass for simulations with different wind power.
The dashed (solid) lines correspond to simulations without (with) self-gravity.
For comparison, we also present the results of simulations without a wind (red lines).
A general pattern for all the simulations is an initial rapid decline followed by subsequent growth over a brief time span and finally a gradual decline.
During the initial interaction, the wind deposits a significant amount of thermal energy into the cloud, raising the overall temperature of the cloud and thereby reducing the amount of cold gas.
The fall-off becomes more pronounced with increasing wind power, as a larger amount of energy is transported into the cloud.
It is important to note that this decrease in cold gas mass is not due to cloud ablation, which is insignificant during this phase.
After the initial interaction, the shock-compressed gas cools efficiently: the excess internal energy is removed via radiative losses, the mean temperature drops, and the cold gas mass increases again, producing the growth phase.
Nonetheless, the cold cloud mass reached after this initial impact decreases with increasing wind power.
Subsequent to the growth phase, the cold gas mass in all the simulations decreases, primarily due to ablation, mixing and removal of the gas from the computational domain by the wind.
As the wind continues to strip the cloud material due to various instabilities and cooling-driven pressure gradients <cit.>, the cloud material gets mixed and heated by the wind.
The effectiveness of mixing escalates in tandem with increasing wind power, leading to a quicker decline in the amount of surviving cold gas under the influence of the hot wind.
However, in the simulations without a wind (red lines), the late time decrease in the cold gas mass is due to the escape of a fraction of mass through the computational boundary, which is stirred by the initial random velocity field.
This is more prominent in the case without self-gravity (red dashed line) due to the absence of attractive gravitational forces.
Interestingly, the simulations with self-gravity (solid lines) retain more cold gas compared to the cases without self-gravity (dashed lines) at the same wind power.
The presence of self-gravity renders the shock-compressed clouds gravitationally bound and more compact, thus, reducing wind's ability to ablate the cloud and induce subsequent mixing.
Furthermore, the compact cloud provides less surface area for the wind to interact with, compared to that of an expanded cloud in the absence of self-gravity.
This combination of factors contributes to a higher fraction of cold gas in simulations with self-gravity.
However, the difference between the ensuing evolution in simulations with and without self-gravity becomes less pronounced for high-power winds (≳ 10^44 erg s^-1), because the cloud-crushing timescale is then short compared to the freefall time of the cloud.
§.§.§ Gas turbulence
Quantifying the turbulence generated inside a cloud impacted by the wind is important because turbulence can regulate star formation <cit.>.
As the wind progresses through the fractal cloud, various shear layers are created inside the inter-cloudlet medium, depending on the local density, which leads to the generation of vorticity (ω = ∇ × v).
Additionally, when acting on an inhomogeneous medium, the self-gravitational force causes internal motion between the clumps, creating vorticity inside the cloud <cit.>, which can further be amplified by the wind.
In order to quantify the turbulence inside the cloud, we define the 3D mass-weighted velocity dispersion (σ_v) as
σ_v = √(∑_i=1^3σ_v_i^2),
with
σ_v_i = √(⟨ v_i^2 ⟩ - ⟨ v_i ⟩ ^2),
where σ_v_i is the 1D velocity dispersion along each axis.
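A possible implementation of Eqs. (<ref>) and (<ref>), assuming v is a sequence of the three velocity-component arrays and reusing the masked, mass-weighted averaging sketched earlier, is:

```python
import numpy as np

def velocity_dispersion_3d(v, rho, C, dV, T, T_max=1.0e4):
    """3D mass-weighted velocity dispersion of the cold cloud material,
    sigma_v = sqrt(sum_i (<v_i^2> - <v_i>^2))."""
    mask = T < T_max
    w = rho[mask] * C[mask] * dV[mask]
    wsum = np.sum(w)
    sigma2 = 0.0
    for vi in v:                               # x, y, z velocity components
        mean = np.sum(vi[mask] * w) / wsum
        mean_sq = np.sum(vi[mask]**2 * w) / wsum
        sigma2 += mean_sq - mean**2
    return np.sqrt(sigma2)
```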
The left panel of Fig. <ref> shows the evolution of σ_v for different simulations with and without self-gravity as a function of time (see Fig. <ref> in Appendix <ref> for the evolutionary trends in terms of the cloud-crushing time).
Irrespective of the wind power, the velocity dispersion in all the wind simulations increases due to energy transfer from the wind to the cloud material.
Furthermore, the magnitude of the enhancement increases with increasing wind power.
In all cases, there is an initial enhancement of the velocity dispersion during the compression phase.
Subsequently, in the simulations without self-gravity, when the wind disperses the cloud, it becomes comparatively easier for the wind to traverse through the inter-cloudlet channels.
Therefore, the velocity dispersion settles to an approximately constant value, depending on the strength of the wind.
In the cases where self-gravity is included, the evolution deviates when gravity starts to dominate.
In these instances, the fragmented cloudlets are pulled to the central gravitational potential well, towards the core, which induces additional motions between the cloudlets, resulting in a higher velocity dispersion compared to the cases without self-gravity.
It is worth mentioning that, as the wind power increases, the disparity in velocity dispersion evolution between simulations with and without self-gravity diminishes, as a stronger wind disperses the cloud before gravity can significantly influence its evolution.
To thoroughly gauge the wind's influence on star formation, it is not sufficient to consider the velocity dispersion alone.
The crucial factor lies in the balance between turbulent kinetic energy (E_ kin) and gravitational binding energy (E_ grav), which determines whether a gas cloud experiences runaway collapse or maintains stability through turbulent support.
Thus, we consider the virial parameter () of the cloud, defined as the ratio between the kinetic and gravitational energy <cit.>,
α_ vir = 2 E_ kin/|E_ grav|,
where E_ kin and E_ grav are defined as
E_kin = 1/2 M_cold σ_v^2,
E_grav = ∫_T<10^4 K Φ ρ C dV,
where Φ is the gravitational potential.
In the simulations without self-gravity, we also solve for the gravitational potential at runtime, but do not couple it to the hydrodynamics.
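In practice, the virial parameter can then be evaluated directly from the cell data; a sketch building on the velocity_dispersion_3d helper above, with phi denoting the gravitational potential array, is:

```python
import numpy as np

def virial_parameter(v, phi, rho, C, dV, T, T_max=1.0e4):
    """alpha_vir = 2 E_kin / |E_grav| of the cold cloud material, with
    E_kin = 0.5 * M_cold * sigma_v^2 and E_grav = int(phi rho C dV)."""
    mask = T < T_max
    w = rho[mask] * C[mask] * dV[mask]         # cloud mass per cell
    sigma_v = velocity_dispersion_3d(v, rho, C, dV, T, T_max)
    e_kin = 0.5 * np.sum(w) * sigma_v**2
    e_grav = np.sum(phi[mask] * w)             # negative for bound material
    return 2.0 * e_kin / abs(e_grav)
```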
The right-hand panel of Fig. <ref> shows the evolution of the virial parameter with time.
In most of the simulations, except for the cases with wind power 10^42 erg s^-1, the virial parameter initially increases beyond unity because of the infusion of kinetic energy from the wind, which initially surpasses the overall gravitational energy.
For the highest power simulation (GC45), this increase in α_vir is more than one order of magnitude (∼ 20) above the initial value of ∼ 0.9.
Nevertheless, following the compression phase, there is a rise in the average cloud density, intensifying the gravitational potential well, ultimately leading to a reduction in the value of α_vir.
In the simulations without self-gravity, the subsequent evolution becomes nearly steady-state.
This suggests that the kinetic and gravitational energy maintain a delicate balance, remaining relatively constant over time.
In the presence of self-gravity, the highly dense collapsing cores give rise to a very deep gravitational potential, leading to the domination of gravitational binding energy over the kinetic energy.
This results in a very low value of α_vir, below 1, which is particularly prominent in low-power wind cases.
It is important to acknowledge that due to the absence of a sink particle algorithm <cit.> in our simulations, we cannot remove gas from a cell whose local Jeans length becomes unresolved by the computational grid <cit.>.
Therefore, in scenarios where a region experiences runaway collapse, the density within the specific cell at the bottom of the gravitational potential well becomes exceptionally high as it accumulates matter.
This situation results in a violation of the <cit.> criterion, which suggests that the Jeans length of any region should be resolved by at least 4 grid cells to prevent artificial fragmentation.
Nonetheless, even if we do not resolve the small-scale structures, the qualitative behaviour in the virial parameter will not change significantly as the potential well created by this very high-density region will still be very deep, even in higher-resolution simulations.
§.§.§ Cloud elongation
Here, we discuss the role of self-gravity on the compression and subsequent elongation of the cloud, impacted by the wind, from a quantitative point of view.
We calculate the effective length (𝒳) of the cloud along the x-direction (i.e., along the direction of the wind) as follows <cit.>:
𝒳 ∝ √(⟨ x^2 ⟩ - ⟨ x ⟩^2),
where Eq. (<ref>) is used to calculate the average quantities.
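Combined with the cloud_average helper sketched earlier, this reduces to a few lines (x being the array of cell x-coordinates):

```python
import numpy as np

def effective_length_x(x, rho, C, dV, T, T_max=1.0e4):
    """Effective cloud extent along the wind (x) direction, proportional to
    the mass-weighted standard deviation of the x coordinate."""
    mean_x = cloud_average(x, rho, C, dV, T, T_max)
    mean_x2 = cloud_average(x**2, rho, C, dV, T, T_max)
    return np.sqrt(mean_x2 - mean_x**2)
```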
Fig. <ref> shows the evolution of the cloud length (normalized to the initial value), parallel to the wind (x-direction), for different simulations.
The solid and dashed lines correspond to simulations with and without self-gravity, respectively.
We see that in all scenarios, the cloud length initially decreases as a result of the compression.
Subsequently, the cloud undergoes elongation due to the combined effects of shock- and turbulence-induced expansion.
In the absence of self-gravity, this elongation is much more prominent for the lower power winds, as depicted by the blue dashed line (C42) in Fig. <ref>.
In contrast, due to the global collapse of the clouds in the presence of self-gravity in cases with lower wind power, the clouds are more compact, leading to a smaller longitudinal width, as can be seen from the solid blue line (GC42).
Nevertheless, as the wind power increases, the distinction between these two scenarios becomes less pronounced, as the wind initiates the stripping of the cloud before gravity has a significant influence.
§.§.§ Cloud acceleration
Fig. <ref> illustrates the evolution of the x-component of the cloud's centre of mass velocity across various simulations. It is evident that higher-powered winds quickly accelerate the clouds due to a greater net momentum transfer from the wind to the cloud.
Notably, despite a similar initial evolution, clouds in simulations incorporating self-gravity at the same wind power begin to decelerate (solid lines) as gravitational forces toward the central potential well counteract the wind-induced acceleration.
It is essential to note that, within the chosen simulation parameters, acceleration timescales significantly exceed the duration of the simulations themselves.
As a result, the net velocity achieved by the cloud's centre of mass remains significantly lower than the flow velocity.
Nevertheless, the discernible impact of self-gravity on the cloud's acceleration is clearly evident.
In Fig. <ref>, we depict the cloud's net centre of mass velocity for GC45_k3_thermal (solid line) and GC45_k3_kinetic (dashed-dot line) to analyze the impact of wind energy composition on cloud acceleration. As anticipated, the cloud exposed to the kinetic wind demonstrates significantly higher velocities compared to the thermal wind of equal power. This difference arises from the more efficient direct momentum transfer in the kinetic wind, contrasting with the thermal wind where injected internal energy dissipates rapidly due to strong radiative loss, diminishing momentum transfer efficiency.
§.§ Multi-phase outflow
While the findings presented in Sec. <ref> provide insights into the acceleration of the “cloud as a whole” by the wind, it becomes increasingly evident that the definition of the cloud as a single object diminishes as the wind fragments the initial cloud within a spherical zone into numerous smaller pieces.
This effect is particularly prominent for a fractal cloud, which is the case in this study.
The fragmented and stripped cloud material with different densities and temperatures is entrained in the hot wind, resulting in a multiphase outflow that can extend up to a few kpc from the centre of a galaxy, with velocities from a few 100 to several 1000 km s^-1.
These kinds of ionized <cit.>, neutral atomic <cit.>, and molecular <cit.> outflows in galaxies, resulting from AGN, have been observed and characterized by a plethora of observational studies <cit.>.
Moreover, numerical simulations of the interaction between relativistic jets and interstellar clouds have demonstrated that atomic gas can form in-situ inside the post-shock material which cools rapidly, at least for low-power jets <cit.>.
Thus, from a theoretical point of view, it is important to investigate and quantify how AGN-driven winds with different powers lead to such multi-phase outflows.
§.§.§ Velocity distribution
Fig. <ref> presents the mass-weighted 2D histogram of velocity vs. number density for the cloud material, for four self-gravitating simulations with different wind power at t = 0.85 t_cc, which corresponds to different physical times for different powers, as indicated in the legends.
The contours in each panel represent the initial distribution of the cloud material in the density-velocity plane.
The impact of the wind on the cloud can be readily seen from the diagram, particularly in the high-power cases (GC44, GC45), where a significant portion of the cloud material has been compressed to higher densities and accelerated to higher velocities.
The velocities of the dense gas with number densities of ∼ 10-100 cm^-3 (right side of the vertical line) are enhanced up to ≲ 400 km s^-1; this gas constitutes a considerable fraction of the cloud's mass.
A notable amount of diffuse gas with densities of ∼ 0.01-1 cm^-3 is accelerated to velocities of several 1000 km s^-1.
However, the velocities of the very dense gas (n ≳ 10^3 cm^-3) are only mildly affected by the wind, as gas with larger column densities is more resistant to acceleration through direct momentum transfer.
§.§.§ Multiphase structure
To analyze the gas phases of the outflowing material, we have constructed phase diagrams representing gas temperature versus number density in Figure <ref>, where the colourbar in the top and bottom rows show the total mass and the mass-weighted mean velocity of the cloud material, respectively.
From the top row, it is evident that most of the cloud material is in the cold phase (T ≲ 10^3 K) for the winds with lower powers, whereas there exists a notable amount of moderately dense gas (10 ≲ n ≲ 100 cm^-3) in warm (10^3 K ≲ T ≲ 10^4 K) and hot (10^4 K ≲ T ≲ 10^7 K) phases in the higher-power cases. Cloud material in these phases experiences acceleration up to 400 km s^-1. Interestingly, comparatively diffuse gas with densities of ∼ 0.1-10 cm^-3, which notably contributes to the ionized outflows, exists mostly in the hot phase, with velocities up to several 1000 km s^-1. On the other hand, gas in the dense and cold phase exhibits very low velocities, less than 100 km s^-1, even in the highest power case. This suggests that accelerating dense, cold interstellar gas to high velocities (∼ 1000 km s^-1) via AGN-driven winds is exceedingly challenging.
In the simulations featuring lower-power winds (GC42 and GC43) shown in Fig.<ref>, the presence of high-velocity gas is notably limited, due to the low ram pressure of the wind, which primarily accelerates the cloud.
§.§.§ Outflow rate scaling with wind power
In order to quantify the outflow driven by the winds and its dependence on the wind power, we calculate the total mass-outflow rates (Ṁ_OF) of the cloud material as a function of time for self-gravitating simulations with different wind power, including the simulation with the kinetic wind (GC45_kinetic), as illustrated in the left panel of Fig. <ref>.
The outflow rates are calculated across the yz plane at x=75 pc.
In the global picture, this corresponds to the outflow rate through an area of A = 100 × 100 pc^2 from a single cloud situated at a distance ∼ 1.075 kpc from the AGN as per our configuration.
Notably, the mass outflow initiates after the compression phase when the ablation and stripping of the cloud start to dominate the evolution and progressively approaches a near-steady state value over time.
The amount of outflow increases with increasing wind power, as expected.
Furthermore, during the steady-state phase, the outflow rate induced by a kinetic wind (dashed-dotted magenta) surpasses that of a thermal wind (solid magenta) with equivalent power due to the higher momentum transfer efficiency.
It is important to note that the mass-outflow rates, calculated at x = 75 pc, are entirely contributed by the stripped and entrained materials from the cloud.
To establish the relationship between the mass outflow rate and wind power, we determine the average mass outflow rate (⟨Ṁ_OF⟩) in each simulation after Ṁ_OF reaches a near-steady state, as indicated by the dotted lines (red for thermal and magenta for kinetic wind) in the left panel, which is plotted against the wind power in the right panel of Fig. <ref>.
The open squares correspond to values for self-gravitating simulations with different powers, as indicated in the legend.
Additionally, the grey cross marks represent values obtained from non-self-gravitating simulations with corresponding wind powers.
From the right panel of Fig. <ref>, it is evident that winds with power in the range 10^42-10^45 erg s^-1 cause mass-outflow rates of ∼ 10^-4-10^-2 M_⊙ yr^-1 from a single cloud through an area of A = 100 × 100 pc^2.
If we consider multiple clouds distributed around the AGN globally, the global mass-outflow rate (Ṁ_OF,glob) at a distance R_OF from the AGN can be calculated as,
Ṁ_OF,glob ≈ f_V × (4π R_OF^2/A)(Ω/4π)⟨Ṁ_OF⟩,
where f_V is the volume filling factor of the dense clouds, typically of the order of 0.01-0.1 <cit.>, and Ω is the solid angle covered by the outflowing region. For a spherical outflowing region, the maximum outflow rate corresponds to Ω = 4π and f_V = 1, so that Eq. (<ref>) becomes Ṁ_OF,glob ≈ ⟨Ṁ_OF⟩ × 4π R_OF^2/A.
Therefore, assuming a simple spherical arrangement of the clouds, the global mass outflow rates at a distance of ∼ 1.1 kpc from the AGN are of the order of 0.1-10 M_⊙ yr^-1 for the wind powers considered in this study.
These values are similar to what has been found in observations <cit.>.
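As a numerical illustration of Eq. (<ref>) (the function and argument names below are ours, not part of the simulation code):

```python
import numpy as np

def global_outflow_rate(mdot_cloud, r_of_kpc=1.1, f_v=1.0, omega_frac=1.0,
                        area_pc2=100.0 * 100.0):
    """Scale a per-cloud outflow rate (in Msun/yr through area A) to a global
    rate at radius R_OF: f_V * (4 pi R_OF^2 / A) * (Omega/4pi) * <Mdot_OF>."""
    r_of_pc = 1.0e3 * r_of_kpc
    return f_v * (4.0 * np.pi * r_of_pc**2 / area_pc2) * omega_frac * mdot_cloud

# A single-cloud rate of 1e-3 Msun/yr at R_OF = 1.1 kpc with f_V = 1 gives
print(global_outflow_rate(1.0e-3))   # ~1.5 Msun/yr, within the quoted range
```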
A noteworthy feature in Figure <ref> is the tight correlation between the mass-outflow rate and the wind power.
As the power of the wind increases, there is a corresponding increase in the mass-outflow rate, a trend that is in agreement with our intuitive expectations.
Fitting a simple power law (Ṁ_OF ∝ P_w^κ) to the results from the self-gravitating simulations (indicated by the red dashed line) yields a power-law exponent of κ ∼ 0.52.
However, from the left panel, it is important to note the outflow rates for low-power winds (GC42 and GC43) have not reached a steady state by the end of the simulation and are expected to rise further before becoming constant.
Therefore, the calculated average mass outflow rates for the lower power winds presented in the right panel serve as lower bounds.
Thus, the value of κ∼ 0.52 represents the upper bound of the power-law exponent estimated in this study.
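The exponent itself follows from a straightforward log-log fit; schematically (the rates below are placeholders inserted only to make the snippet runnable, not the measured simulation values):

```python
import numpy as np

log_p = np.array([42.0, 43.0, 44.0, 45.0])             # log10(P_w / erg s^-1)
log_mdot = np.log10([1.2e-4, 4.0e-4, 1.5e-3, 5.5e-3])  # log10 <Mdot_OF>, placeholder
kappa, norm = np.polyfit(log_p, log_mdot, 1)           # slope of the fit is kappa
print(f"kappa = {kappa:.2f}")                          # ~0.56 for these values
```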
We present a simple complementary picture, where the mass outflow, driven by the AGN, is due to the stripped-off material from the embedded clouds inside the flow, rather than the swept-up shell model <cit.>.
We assume that the mass loss due to the ablation of the cloud by the hot wind is driven by the pressure gradient at the interface <cit.>.
For a purely hydrostatic ablation, the ablation rate (Ṁ_abl) of one cloud embedded in a hot wind of Mach number ℳ_w is given as <cit.>,
Ṁ_abl = α min(1, ℳ_w^4/3)(M_c c_s,c)^2/3(ρ_w v_w)^1/3,
where M_ c and c_ s,c are the mass and internal isothermal sound speed of the cloud and α is a constant factor of the order of unity for a spherical cloud.
Keeping the cloud configuration identical, the mass ablation rate depends on the wind velocity as
Ṁ_abl ∝ v_w^5/3.
Thus, by combining Eq. (<ref>) and Eq. (<ref>), we can show that
Ṁ_abl ∝ P_w^5/9,
where P_w is the power of the wind.
Therefore, if we assume the ablated mass is the source of the majority of the mass-loading in the outflow, then the outflow rate depends on the wind power as Ṁ_OF ∝ P_w^5/9 ∝ P_w^0.55, which is close to the value we estimate from the simulations.
It is worth noting that similar correlations between the outflow rates and the bolometric luminosity of the AGN have been consistently reported in numerous observational investigations, spanning both the ionized and cold molecular phases.
For instance, studies focusing on ionized outflows have documented a power-law exponent of about κ∼ 1.29 <cit.>, whereas the exponent for cold molecular outflows varies between κ∼ 0.68-0.76 in different instances <cit.>.
While our analysis does not differentiate between different phases of the outflow, the scaling relation we estimate for the overall mass-outflow rate, encompassing the hot, warm, and cold phases, as a function of wind power is fairly close to the scaling relations for the molecular outflows.
However, it is crucial to emphasize that despite the strong correlation between the average mass outflow rate (⟨Ṁ_OF⟩) and wind power found in this study, several sources of scatter contribute to this correlation.
For instance, a cloud impacted by a kinetic wind tends to show a higher mass outflow rate than in the case of a thermal wind with similar power, as illustrated by the magenta squares in the right panel (filled for kinetic and open for thermal).
Additionally, factors such as the morphology (i.e., whether the cloud is porous or compact) of the cloud and whether self-gravity plays a dominant role in the cloud's evolution at the impact stage can introduce considerable scatter to the correlation.
Moreover, on a global scale, the geometric distribution (e.g., spherical, disc-like, etc.) of clouds around the driving source, as represented by the f_V factor in Eq. (<ref>), can also cause significant scatter in the global mass outflow rate.
Furthermore, comparing outflow rates for different systems with the same power but at different evolutionary stages can also introduce scatter, as the temporal evolution of Ṁ_OF can vary significantly (left panel of Fig. <ref>).
Therefore, acknowledging these factors while interpreting the correlation between the mass outflow rate and wind power is crucial. However, it is noteworthy that our simplified estimates do present qualitative similarities to the observed results, reinforcing the idea that an ablation-based model of mass loss can likely explain the observed correlations.
§.§ Star formation rate
Outflows from AGNs are believed to be one of the major drivers behind the star formation activity in galaxies.
The local input of energy and momentum from these outflows into the star-forming regions can induce turbulence, which may have a dual role.
On the one hand, the induced turbulence increases the stability of the clouds against gravitational forces preventing global collapse, which can reduce the star formation inside such clouds.
On the other hand, it can promote over-densities via shock compression <cit.>, which may result in an enhanced star formation.
Therefore, the effect of these AGN-driven winds on the star formation activity is determined by a complex interplay between different physical processes acting on the cloud scale.
We estimate the star formation rate in the simulations using a semi-analytical model of turbulence-regulated star formation <cit.>, which takes into account local physical quantities such as the velocity dispersion, virial parameter, local sound speed, turbulence driving mode, and magnetic field (if present).
In this framework, the star formation rate (SFR) is calculated by integrating the density PDF from a critical density. The SFR of a cloud of mass M_ c is calculated as,
SFR = SFR_ff M_c/t_ff(ρ_0),
where t_ff(ρ_0) is the freefall time at the mean density (ρ_0) of the cloud. The quantity SFR_ff, defined as the star formation rate per freefall time, is given by <cit.>,
SFR_ff = ϵ/ϕ_t ∫_s_crit^∞ t_ff(ρ_0)/t_ff(ρ) ρ/ρ_0 p(s) ds,
where s=ln(ρ/ρ_0) is the logarithmic density contrast, and p(s) is the density PDF expressed in terms of s.
On the GMC scale, the parameter ϵ is interpreted as the fraction of the global mass of the whole cloud that eventually turns into stars, and is typically 1-2% <cit.>.
Therefore, we set ϵ=0.015 in the calculation of .
The ϕ_t parameter is a numerical factor of the order of unity to account for uncertainties in the integral and is set to ϕ_t = 2.04 as calibrated in <cit.>.
The critical logarithmic density (s_ crit) of collapse, used as the lower limit of the integral of Eq. (<ref>), is estimated by comparing the Jeans length to the sonic length (where the turbulent velocity dispersion is of the order of the local sound speed) and is given by <cit.>,
s_crit = ln[(π^2/5)ϕ_x^2 α_vir ℳ^2],
where ϕ_x is a numerical factor of the order of unity to account for slight differences in the exact equality between the Jeans length and the sonic scale <cit.> and set to 0.19 in this study <cit.>.
For the detailed theoretical background of the turbulence-regulated star formation model, we refer interested readers to <cit.>, and references therein.
In order to estimate the SFR in our simulations, we use Eq. (<ref>) and Eq. (<ref>) to calculate the Mach number (ℳ) and virial parameter (α_vir), which give us the value of s_crit via Eq. (<ref>).
We then construct the density PDF p(s) from the simulation data, which is subsequently integrated according to Eq. (<ref>) to obtain .
Finally, we calculate the SFR from Eq. (<ref>), where Eq. (<ref>) is used to estimate the cloud mass.
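The whole procedure fits in a few lines; below is a minimal sketch of Eqs. (<ref>), (<ref>) and (<ref>) for a tabulated PDF p(s), with illustrative array and function names:

```python
import numpy as np

def sfr_ff(s, p_s, mach, alpha_vir, eps=0.015, phi_t=2.04, phi_x=0.19):
    """Star formation rate per freefall time: integrate the density PDF above
    s_crit; since t_ff ~ rho^(-1/2), the weight
    t_ff(rho_0)/t_ff(rho) * rho/rho_0 equals exp(1.5 s)."""
    s_crit = np.log((np.pi**2 / 5.0) * phi_x**2 * alpha_vir * mach**2)
    integrand = np.where(s >= s_crit, np.exp(1.5 * s) * p_s, 0.0)
    return (eps / phi_t) * np.trapz(integrand, s)

def sfr(m_cloud, t_ff0, s, p_s, mach, alpha_vir):
    """SFR = SFR_ff * M_c / t_ff(rho_0)."""
    return sfr_ff(s, p_s, mach, alpha_vir) * m_cloud / t_ff0
```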
§.§.§ Dependence on wind power
Fig. <ref> shows the time evolution of SFR_ff (left panel) and the SFR (right panel) for different simulations, as denoted in the legend.
During the initial stages, as the wind begins to interact with the cloud, it injects a substantial amount of kinetic and thermal energy into it, leading to an increase in the velocity dispersion and virial parameter, as depicted in Fig. <ref>.
Thus, the turbulent motions of the gas impede the gravitational collapse of many regions, resulting in a decreased SFR compared to the simulation without a wind.
This behaviour is consistent among all the wind simulations; however, the degree of initial suppression in SFR_ff increases with rising wind power.
In fact, for the most powerful wind (magenta), the star formation is completely quenched.
This is because higher wind power leads to a greater transfer of energy into the cloud, resulting in more pronounced turbulent motion within the gas.
Subsequent to this phase, as the wind fragments the cloud, it becomes easier for the wind to pass through the inter-cloudlet channels due to the smaller column depth.
Hence, the strength of the interaction between the wind and cloud reduces.
Additionally, the dense cloudlets embedded within the hot wind are more resilient to propagation of the transmitted shocks into the cores, thus, weakening the effect of the wind within the dense cores.
Furthermore, the cloudlets that have been shock-compressed achieve significantly higher densities, resulting in a deeper gravitational potential.
Therefore, the gravitational energy becomes dominant over the kinetic support, leading to an increase in SFR_ff after the initial suppression phase.
For simulations without self-gravity, except for the C45 case, the rebound reaches a level comparable to that in simulations without the wind, as depicted by the dashed lines. In the C45 case, the cloud is destroyed and entrained by the wind before SFR_ff reaches that value.
However, in the presence of self-gravity, the shock compression causes the cloudlets to attain a much higher density.
Additionally, the cloudlets merge due to the attractive gravitational force, forming a massive and highly dense central core, which undergoes runaway collapse.
As a consequence, SFR_ff experiences a rapid increase following the initial decline (solid lines), to the extent that it surpasses the value in the simulation without wind (red solid line).
It is important to note that the stronger the wind, the earlier and the more rapidly this increase in SFR_ff occurs, as the higher-power winds quickly compress the cloud, effectively reducing the free-fall time.
Therefore, in all the simulations with self-gravity, the wind triggers the collapse of the cloud after the initial suppression.
While the morphological features in Fig. <ref> show that a high-power wind significantly disrupts the cloud, we note that a substantial portion of cloud material still undergoes compression and experiences rapid collapse.
It is crucial to emphasize that our simulations do not include self-consistent modelling of star formation using sink particles, which we aim to explore in a forthcoming study.
Therefore the reported SFR values in this context should be interpreted as a qualitative trend between various wind parameters.
§.§.§ Dependence on cloud properties
The results discussed in the earlier sections focused on the impact of winds with different powers on the same initial cloud structures and density.
However, clouds with different densities and morphology can react differently to the wind.
Therefore, to investigate this, in this section, we present the results for the wind-cloud interaction for a fixed value of wind power (10^43 erg s^-1) but varying cloud properties, viz. different cloud wavenumbers (GC43_k{1,3,6,10}) and mean density (GC43_k3_low).
The top-left panel of Fig. <ref> shows the evolution of the cold gas fraction with time.
Notably, the evolution appears similar for high-density clouds.
Initially, there is a decrease in the cold gas fraction attributed to energy injection from the wind, followed by a subsequent increase due to the efficient cooling of the compressed gas.
However, for the low-density cloud (GC43_k3_low), the initial cold gas fraction is lower compared to the other cases.
This is because, initially, the clouds in our simulations are in pressure equilibrium with the ambient medium, as described in Sec. <ref>.
Due to the lower mean density, the equilibrium condition leads to a higher mean initial temperature (∼ 2700 K) of the low-density cloud, resulting in a lower fraction of cold gas mass compared to the high-density clouds.
However, as the evolution progresses, the gas rapidly cools, and the cold gas fraction increases.
Nonetheless, as the wind impacts the cloud, the fraction decreases, and the subsequent evolution closely resembles that of the high-density cases.
The major difference lies in the fact that the cold gas fraction remains low and decreases rapidly due to higher degrees of ablation and mixing.
However, until the end of the simulation (∼ 3 Myr), a significant amount of cold gas survives.
The top-right and bottom-left panels of Fig. <ref> illustrate the time evolution of velocity dispersion and virial parameter, respectively.
All the values are normalized to the initial values in order to examine the effect of the wind on the evolution of the parameters with respect to the initial values.
The qualitative evolution of both parameters is similar in all the simulations to what we observe in Fig. <ref>.
Interestingly, the evolution for k_min = 1, 6 and 10 is quite similar, apart from some initial deviation due to the different initializations.
As discussed in Sec. <ref>, while in the k_min = 1 case the single large clump experiences rapid global collapse, for the k_min = 6, 10 cases, the narrow inter-cloudlet channels prevent the wind from penetrating and dispersing the cloud significantly, eventually leading to global collapse.
Despite the different reasons, the ultimate fates of the clouds in these cases are similar.
In contrast, for the k_min = 3 case, the inter-cloudlet channels are wide enough for the wind to penetrate into the cloud, which directly compresses the cloudlets locally but prevents the global collapse for a longer period.
For the cloud with a lower mean density (GC43_k3_low), the velocity dispersion increases by more than an order of magnitude as the wind induces stronger turbulent motion due to a lower density.
The virial parameter also increases by two orders of magnitude in this case by the impact of wind.
However, here we observe a second increase of the virial parameter at t ∼ 0.25 t_cc.
This is primarily a result of the increase in turbulent velocity together with the decrease in mean density due to the disintegration of the cloud, which contributes to the decrease in gravitational binding energy.
The bottom-right panel of Fig. <ref> shows the evolution of the normalized star formation rate per freefall time (SFR_ff) for all these simulations.
Similar to Fig. <ref>, the initial increase of the virial parameter reduces SFR_ff in all simulations.
However, the subsequent rebound of SFR_ff depends on the cloud properties.
For the case with k_min = 1, one big clump (see top panel of Fig. <ref>) rapidly collapses via compression, leading to an early growth of SFR_ff.
For the k_min = 6 and 10 cases, the initial small-scale velocity field of the cloud, characterized by the same velocity dispersion, causes an enhanced amount of inter-cloud motion.
This motion persists for a longer period before decaying, keeping the star formation rate relatively low during this time.
However, as the wind compresses the cloud globally, the majority of the cloudlets merge, effectively forming a large, compact clump, which undergoes rapid collapse globally.
As a result, the star formation rate increases rapidly after the initial suppression.
In the case of the cloud with a lower mean density, we observe a complete cessation of star formation for approximately 0.1 t_cc (∼ 1.3 Myr) due to the initial rise of the virial parameter by two orders of magnitude.
However, as some of the dense clump gets compressed by the wind, we observe some intermittent star formation activity after the initial quenching.
Nonetheless, the SFR values during these intervals remain more than one order of magnitude lower than the initial value until the end of the simulation.
§ DISCUSSION
§.§ Comparison to previous studies
There have been a few studies concerning the effect of highly pressurized winds possibly originating from AGN on the interstellar clouds <cit.>.
The results obtained in the present work add to the previous studies of AGN-driven wind-cloud interaction by including additional physics such as self-gravity and more realistic cloud morphologies (fractal structures) and carrying out simulations over a wide range of parameters for both winds and clouds.
Some of the key findings from our study align with the results from previous works in the context of shock/wind-cloud interaction.
Our results reveal the emergence of dense filaments due to ram pressure stripping and KH instabilities acting upon the interface between the cloud and the wind, similar to the results from previous studies <cit.>.
However, it is worth noting that in our simulations, these filaments display a more clumpy structure as opposed to the extended, continuous filaments observed in earlier studies.
This difference arises from our incorporation of a fractal medium characterized by the presence of dense cores and a diffuse inter-core medium.
This configuration facilitates the infiltration of the wind into the cloud by clearing out low-density channels and enveloping the cores, effectively breaking the cloud into numerous cloudlets.
Moreover, as the KH instability grows faster at smaller wavelengths, the fragmentation of the cloud is much higher compared to a uniform or smooth cloud <cit.>.
Therefore these fragmented cloudlets undergo acceleration to shape the clumpy filaments observed in our simulations.
In this way, the morphology of the filaments in our study resembles more the findings by <cit.>, where a similar fractal cloud configuration is considered.
Another interesting phenomenon to be compared with previous studies is the lifetime of the dense clumps that are generated in the process of fragmentation.
<cit.> concluded that radiative cooling plays a significant role in prolonging the survival of dense clumps entrained within a hot wind.
This cooling process leads to the creation of a dense protective layer around the clump, enhancing the cloud's lifetime.
In such scenarios, these clouds can survive for more than 1 Myr, which aligns with our results.
In all simulations performed in this study, the majority of the fragmented clumps survive until the end of the simulations (1.2-5.5 Myr, depending on the wind power).
Additionally, as demonstrated previously in Sec. <ref> and <ref>, self-gravity further increases the density of the shock-compressed cloudlets, and since the growth rate of instabilities is inversely proportional to the density contrast (see Eq. <ref>), the growth of instabilities is diminished, therefore, further prolonging the lifetime of the cloudlets. Additionally, the increased density contrast due to self-gravity reduces the velocity of the transmitted shock (v_ ts∼ v_ w/χ^1/2) into the cloudlets, consequently reducing the overall heating of the cores.
The effect of self-gravity has been considered in a few previous studies <cit.> in the context of the cloud wind/shock interaction.
All of these studies have demonstrated that when self-gravity is taken into account, the compression resulting from the wind contributes to an increased rate of fragmentation and eventual gravitational collapse <cit.>, which agrees with our findings.
Furthermore, <cit.> has shown that under specific conditions, a sufficiently high ram pressure in the wind can lead to the complete destruction of the cloud.
However, our study diverges from this result as we do not observe any instance of a cloud being entirely destroyed by the wind, even when subjected to the most powerful winds in our simulations.
Moreover, despite the substantial disruption of the cloud by the wind in the simulation with an initial lower mean density (GC43_k3_low), pockets of dense material persist, fostering intermittent star formation activity (see bottom panel of Fig. <ref>).
This discrepancy can be attributed to two major factors.
Firstly, in order to obtain that large value of ram pressure of the wind, <cit.> consider quite higher values of the wind density.
Therefore, the momentum imparted by the wind into the cloud is orders of magnitude higher than in our highest-wind-power case. As a result, the cloud in their study undergoes rapid acceleration, and the combination of Rayleigh-Taylor instabilities and ram pressure stripping leads to the cloud's disintegration.
Secondly, we consider a fractal cloud in our simulations, which has many dense gravitationally-bound cores at the initiation of the simulations.
These cores are further compressed by the wind, rendering them self-shielded and resistant to ablation, which contributes to the cloud's survival, despite the wind's powerful effects.
§.§ Implication for AGN feedback
The impact of AGN feedback on star formation activity remains a complex and elusive process in current astrophysical research.
While numerous studies support the idea of negative feedback, attributed to the turbulence and thermal energy enhancement induced by AGN winds or the jet-inflated cocoon, there are also proponents of positive feedback.
In this scenario, the over-pressurized wind associated with AGN activity has the effect of compressing and fragmenting star-forming clouds.
This compression leads to an increased star formation rate (SFR) by triggering the collapse of gas clouds into new stars.
While there is little theoretical consensus on negative feedback, a few studies have shown that AGN activity can indeed suppress the star formation rate globally inside the host galaxy through the overall induction of turbulence and injection of thermal energy <cit.>.
Furthermore, <cit.> have argued that the gas within AGN-driven outflows can become unbound, effectively escaping the host galaxy's potential and thereby reducing the available star-forming fuel.
This long-term process may contribute to a negative feedback mechanism.
Numerous theoretical studies have presented scenarios in which the triggering of star formation by AGN activity can be a viable mechanism <cit.>.
For instance, <cit.> have shown that the compression resulting from the high-pressure bubble inflated by an AGN jet can enhance the star formation rate (SFR) in a disc galaxy by a factor of 2-3.
Similarly, <cit.> arrived at a similar conclusion, demonstrating that external pressurization within the ISM can confine and compress star-forming regions and accelerate the onset of gravitational collapse and subsequent star formation.
Moreover, the study by <cit.> revealed that the star formation efficiency depends on the ram pressure of the wind.
They identified a critical threshold value of the ram pressure, below which the clouds experience rapid collapse, leading to an enhancement in SFR.
However, above this threshold, ram pressure becomes strong enough that the clouds are ablated before gravitational forces can significantly influence their evolution.
The result we present in this study also leans toward a positive feedback scenario.
While there exists a short period of suppressed/quenched star formation during the initial interaction between the wind and the cloud, the compression due to the over-pressurized wind dominates in the long term.
The radiative shocks compress many massive cloudlets to significantly higher densities, and the presence of self-gravity intensifies this process.
As a result, the shock propagating into these cores decelerates swiftly as the gas densities progressively increase, and the cloudlets effectively become self-shielded from the wind, such that the cloudlets can survive for a long time in the hot wind.
Without the support from internal turbulence inside these cores, which is crucial for their stability, they eventually collapse to form stars, which will stop once all the gas in the core is consumed.
Therefore, from our present results, positive feedback by the AGN is inevitable even for high-power winds.
Indeed, some observational studies support this scenario.
For instance, <cit.> detected a substantial amount of star formation (∼ 15 M_⊙ yr^-1) inside a galactic outflow, with a significant young (∼ 10 Myr) stellar population and high stellar velocities (≲ 100 km s^-1).
This indicates that the stars have been formed inside the outflows, which have been triggered by the compression induced by the out-flowing gas.
A recent investigation by <cit.> identified a young (1-10 Myr) stellar population inside young compact steep spectrum (CSS) radio galaxies.
Notably, the dynamical age of the radio sources in these galaxies corresponds well to the stellar ages, suggesting that star formation was triggered by jet activity.
There are various other observational studies where enhanced star formation by AGN activity (jets or winds) has been reported <cit.>, in agreement with our findings.
Moreover, as outlined in Sec. <ref>, the wind velocity and pressure experienced by the cloud at a distance of 1 kpc for different powers can be mapped to various distances from the AGN for a specific wind power.
For example, the wind parameters corresponding to powers of 10^42, 10^43, 10^44 and 10^45 erg s^-1 at 1 kpc (refer to Tab. <ref>) can be replicated by considering a wind with a power of 10^43 erg s^-1 at distances of 3.2, 1, 0.3 and 0.1 kpc from the AGN.
Therefore, our findings hold relevance for comprehending the impact of AGN-driven winds on a galactic-scale environment.
Importantly, even though a lower power wind (e.g., 10^42, 10^43 erg s^-1) may not appear to significantly affect the cloud at a distance of 1 kpc according to our study, the closer environment of the AGN on a scale of hundreds of parsecs will likely be strongly influenced by the wind, akin to the outcomes observed in higher power simulations (e.g., 10^44, 10^45 erg s^-1).
Similarly, for a higher-power wind, the outskirts of the galaxy will be mildly affected, mirroring the outcomes of simulations with lower-power winds presented in this study.
In any case, the presence of self-gravity is likely to trigger star formation inside the clouds on a wide length scale while impacted by AGN-driven winds, albeit at different timescales, as observed in simulations with different powers (Fig. <ref>).
The reported suppression of star formation by AGN activity, as observed in several studies <cit.>, continues to await theoretical confirmation.
Our study, which incorporates comprehensive modelling of the interaction between a star-forming cloud and an AGN-driven wind, does not reveal any significant long-term suppression of star formation activity over the duration of an AGN lifetime (typically a few Myr).
Based on our results, it appears that the only plausible means of reducing the star formation would be to employ mechanisms capable of disrupting the dense cores from the inside.
Stellar feedback in the form of winds, jets, radiation pressure, and photo-ionization appears to be a viable mechanism <cit.>, at least on the scale of a few parsecs, which corresponds to the typical size of the dense cloudlets we have identified in our study.
Indeed, previous studies on star-cluster formation in clouds of size ∼ 1-10 have demonstrated that stellar feedback can slow down the star formation rate <cit.>.
Therefore, the underlying concept is that following the initial burst of star formation induced by wind-driven compression, the feedback from young and massive stars acts to disrupt the cloud, which in turn makes wind material entrainment more favourable.
Therefore, in tandem with stellar feedback, the AGN wind could potentially contribute to the destruction of dense cores, breaking them into even smaller structures that are no longer prone to gravitational collapse due to their reduced size.
This presents an intriguing scenario that we plan to investigate in future studies.
Additionally, the wind parameters examined in this study represent fast and hot AGN-driven outflows. However, the impact of slow and relatively cold winds, which are typically found at greater distances from the AGN, is not considered here and should be investigated in future.
§ CONCLUSION
In this study, we have performed a series of three-dimensional hydrodynamical simulations of the interaction between AGN-driven winds and star-forming interstellar clouds, including radiative cooling and realistic cloud morphology such as fractal geometry.
We have conducted two sets of simulations with and without self-gravity to examine the effect of self-gravity on the evolution of the clouds.
We consider a large range of parameter space for investigating various aspects of the evolution process, including the power of the wind, the mean density of the cloud, the fractal density distribution of the cloud, and whether the wind is dominated by kinetic or thermal energy.
In the following, we summarize the main results of this study:
* Interaction of the wind with the fractal cloud:
When the wind interacts with the fractal cloud consisting of dense cores separated by low-density channels, it rapidly erases the low-density areas, resulting in the formation of numerous dense cores. Subsequently, these dense cores undergo compression due to radiative shocks.
* Effect of self-gravity:
While the cloudlets get compressed by the wind irrespective of the presence of self-gravity with the same wind power, the cloudlets formed in the self-gravitating simulations attain much higher densities and become gravitationally bound, compared to the cloudlets in the simulations without self-gravity. In the absence of self-gravity, after attaining the maximum possible compression, the clouds start to disintegrate and expand as a result of the momentum transfer from the wind to the cloud material.
* Dependence on wind power:
The amount of cloud material that is retained and accreted by the gravitationally bound cloudlets depends on the strength of the wind. For lower-power winds, the momentum transfer is reduced, and as a result, a significant portion of the cloud material does not gain enough acceleration to overcome the gravitational potential of the cloud. Consequently, this material falls back into the potential well, forming a central, massive clump that undergoes rapid gravitational collapse. Conversely, for the higher-power winds, although some initially high-density clumps collapse under self-gravity, a significant number of relatively low-density clumps are accelerated and dispersed by the wind prior to self-gravity becoming a significant factor.
* Effect of cloud morphology:
The size of the cloudlets (determined by the minimum wavenumber k_min used to generate the fractal density distribution) inside the cloud significantly affects the evolution when impacted by a wind with the same power. For the k_min = 1 case, the whole cloud roughly contains a single large dense clump and is therefore large enough to prevent the shock from penetrating into its core before it undergoes global gravitational collapse due to external pressurization. In contrast, for k_min = 6 and k_min = 10, the cloud is characterized by numerous dense cores separated by narrow channels. However, when the initial shock sweeps across the cloud, it effectively blocks these channels with swept-up material, preventing the wind material from infiltrating the cloud. As a result, in the absence of significant dispersal of the cloudlets, the cloudlets accumulate near the centre under the gravitational force, ultimately experiencing global collapse. Thus, there exists a narrow range of k_min for which the inter-clump channels are wide enough for the wind to penetrate into the cloud and provide stability against global collapse by transferring energy and momentum, which is the case for k_min = 3.
* Relative effect of thermal vs. kinetic wind:
The total energy partition of the wind in the thermal and kinetic components has a major effect on the evolution of the cloud. A thermal-energy-dominated wind primarily affects the cloud material by increasing its internal energy, which in turn is converted into kinetic energy of the gas. However, in the presence of strong radiative cooling, the internal energy of the gas quickly dissipates, therefore the effective momentum transfer from the wind to the cloud is reduced for a thermal wind. A kinetic-energy-dominated wind directly transfers momentum to the cloud material, resulting in a higher level of expansion and acceleration.
* Evolution of the velocity dispersion:
The velocity dispersion inside the cloud in all the wind simulations increases due to energy transfer from the wind to the cloud material. This effect is more pronounced with stronger winds. In cases without self-gravity, the velocity dispersion stabilizes at a roughly constant value as the cloud disperses. With self-gravity, the velocity dispersion starts to increase again when gravity becomes dominant as it pulls fragmented cloudlets towards the core, inducing additional motions between the cloudlets.
* Effect on the virial parameter:
In all the simulations, an initial increase in the virial parameter occurs as the wind imparts kinetic energy, initially surpassing the gravitational energy. As the cloud compresses and the gas density rises, the gravitational potential deepens, reducing . Simulations without self-gravity reach a steady state with balanced kinetic and gravitational energy. On the other hand, with self-gravity, the cloud becomes gravitationally bound and undergoes collapse, and drops significantly, especially in cases with low-power winds, due to the dominant gravitational binding energy.
* Generation of multiphase outflow:
The ablation of the cloud material by the wind can give rise to multi-phase outflows with velocities from a few 100 to several 1000 km s^-1 over a huge range of temperatures (10^2-10^7 K), consisting of cold, warm and hot gas. The calculated mass-outflow rates correlate tightly with the wind power (Ṁ_OF ∝ P_w^κ). We find a power-law exponent of κ ≈ 0.52.
* Impact on the star formation rate:
In the presence of self-gravity, which is very important within the environment we are interested in, our results favour a positive feedback scenario triggered by the AGN-driven winds, at least within the parameter space we consider in this study. Even though the wind can suppress or quench the star formation for about 1 Myr during the initial interaction, a substantial number of shock-compressed, dense cloudlets manage to shield themselves from the wind's influence and subsequently undergo rapid gravitational collapse. This process ultimately leads to an increased star formation rate.
§ ACKNOWLEDGEMENTS
We thank the referee, Dr. Tiago Costa, for his thorough and insightful comments, which have improved the quality of the paper significantly. A. Mandal would like to thank Moun Meenakshi and Kavita Kumari for useful discussions about the effect of the AGN radiation field on interstellar gas. A. Mandal further thanks Moun Meenakshi for the help with the radiative transfer calculation using cloudy.
C. F. acknowledges funding provided by the Australian Research Council (Discovery Project DP230102280), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD). We gratefully acknowledge high-performance computing resources provided by the Australian National Computational Infrastructure (grant n72 and ek9) and the Pawsey Supercomputing Centre (project pawsey0810) in the framework of the National Computational Merit Allocation Scheme and the ANU Merit Allocation Scheme, the Pegasus[<http://hpc.iucaa.in>] high-performance computing facilities of IUCAA, and the Leibniz Rechenzentrum and the Gauss Centre for Supercomputing (grants pr32lo, pr48pi and GCS Large-scale project 10391). D. Mukherjee and N. Nesvadba acknowledge support from the IFCPAR/CEFIPRA (project no. 6504-2) for collaborative research. This work has received funding from the European High Performance Computing Joint Undertaking (JU) and Belgium, Czech Republic, France, Germany, Greece, Italy, Norway, and Spain under grant agreement No 101093441.
This work is supported by "Italian Research Center on High Performance Computing Big Data and Quantum Computing (ICSC)", project funded by European Union - NextGenerationEU - and National Recovery and Resilience Plan (NRRP) - Mission 4 Component 2. Spoke 3, Astrophysics and Cosmos Observations.
§ DATA AVAILABILITY
Data related to this work will be shared upon reasonable request to the corresponding author.
§ NON-REFLECTING BOUNDARY CONDITION FOR SUBSONIC INFLOW
In this section, we present a simplified approximation of the non-reflecting boundary conditions for subsonic inflow for one-dimensional flow.
§.§ Characteristic form of the conservation equations
The one-dimensional (1D) conservation equations in primitive variables are given as
∂ρ/∂t + u ∂ρ/∂x + ρ ∂u/∂x = 0,
∂u/∂t + u ∂u/∂x + (1/ρ) ∂p/∂x = 0,
∂p/∂t + ρ a^2 ∂u/∂x + u ∂p/∂x = 0,
where a is the sound speed, defined as a = √(γ p/ρ). Eqs. (<ref>)-(<ref>) can be written in terms of the primitive variable vector 𝐐 as
∂𝐐/∂t + 𝐀(𝐐) ∂𝐐/∂x = 0,
where 𝐐 and 𝐀(𝐐) are defined as
𝐐 =
[ ρ; u; p ],
𝐀(𝐐) =
[ u ρ 0; 0 u 1/ρ; 0 ρ a^2 u ].
Applying a similarity transformation to 𝐀, one obtains
𝐀 = 𝐒Λ𝐒^-1,
where 𝐒 = [𝐊^1,𝐊^2,𝐊^3] is the matrix consisting of the right eigenvectors of 𝐀 such that 𝐀𝐊^(i) = λ_i𝐊^(i), where λ_i are the eigenvalues corresponding to 𝐊^(i). Λ is a diagonal matrix with its diagonal elements being λ_i.
Substituting Eq. (<ref>) into Eq. (<ref>), we obtain
∂𝐐/∂t + 𝐒Λ𝐒^-1 ∂𝐐/∂x = 0.
Multiplying Eq. (<ref>) with 𝐒^-1 from the left, we obtain
𝐒^-1 ∂𝐐/∂t + Λ 𝐒^-1 ∂𝐐/∂x = 0.
If we approximate 𝐀(𝐐) to be locally constant, so is 𝐒^-1. Therefore, defining
𝐖 = 𝐒^-1𝐐,
Eq. (<ref>) reduces to
∂𝐖/∂t + Λ ∂𝐖/∂x = 0,
which is the characteristic form of the conservation equations, and 𝐖 = [w^1, w^2, w^3]^T is the characteristic variable vector.
In component form, Eq. (<ref>) can be written as
∂w^i/∂t + λ_i ∂w^i/∂x = 0,
which is a set of wave equations with characteristic velocities λ_i.
In order to compute the characteristic variables, first we find the eigenvalues (λ_i) and right eigenvectors (𝐊^i) of 𝐀, which are given by
λ_1 = u-a,
λ_2 = u,
λ_3 = u+a,
and the corresponding eigenvectors are given as
𝐊^1 = α_1
[ 1; -aρ; a^2 ],
𝐊^2 = α_2
[ 1; 0; 0 ],
𝐊^3 = α_3
[ 1; aρ; a^2 ],
where α_i are scale factors. Choosing α_1 = ρ/(2a), α_2 = 1, and α_3 = ρ/(2a), the matrix 𝐒 and its inverse are calculated as
𝐒 =
[ ρ/(2a) 1 ρ/(2a); -1/2 0 1/2; ρ a/2 0 ρ a/2 ],
𝐒^-1 =
[ 0 -1 1/(ρ a); 1 0 -1/a^2; 0 1 1/(ρ a) ].
Therefore, from Eqs. (<ref>), (<ref>) and (<ref>), the form of characteristic variables (𝐖=[w^1,w^2,w^3]^T) for the 1D Euler equations are given as
w^1 = p/(ρ a) - u, corresponding to λ_1 = u-a,
w^2 = ρ - p/a^2, corresponding to λ_2 = u,
w^3 = p/(ρ a) + u, corresponding to λ_3 = u+a.
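For concreteness, the decomposition above can be checked numerically; the short Python sketch below (with an arbitrary, purely illustrative test state) verifies that 𝐒^-1 inverts 𝐒, that 𝐒Λ𝐒^-1 reproduces 𝐀(𝐐), and that 𝐒^-1𝐐 gives the characteristic variables listed above.

import numpy as np

gamma = 5.0 / 3.0                        # adiabatic index (illustrative)
rho, u, p = 1.4, 0.3, 2.1                # arbitrary test state
a = np.sqrt(gamma * p / rho)             # sound speed

A = np.array([[u,   rho,        0.0],
              [0.0, u,          1.0 / rho],
              [0.0, rho * a**2, u]])
S = np.array([[rho / (2 * a), 1.0, rho / (2 * a)],
              [-0.5,          0.0, 0.5],
              [rho * a / 2,   0.0, rho * a / 2]])
S_inv = np.array([[0.0, -1.0, 1.0 / (rho * a)],
                  [1.0,  0.0, -1.0 / a**2],
                  [0.0,  1.0, 1.0 / (rho * a)]])
Lam = np.diag([u - a, u, u + a])

assert np.allclose(S @ S_inv, np.eye(3))    # S_inv is the inverse of S
assert np.allclose(S @ Lam @ S_inv, A)      # A = S Lambda S^{-1}

w1, w2, w3 = S_inv @ np.array([rho, u, p])  # characteristic variables W = S^{-1} Q
assert np.isclose(w1, p / (rho * a) - u)
assert np.isclose(w2, rho - p / a**2)
assert np.isclose(w3, p / (rho * a) + u)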
§.§ Boundary conditions
The primary idea behind the non-reflecting boundary condition is that any outgoing wave (w^ out) leaving the computational domain through the boundary should do so without being reflected back into the interior region; that is, the wave amplitude associated with the outgoing wave is constant at the boundary, i.e.,
w^ out = const, or Δ w^ out = 0, at the boundary,
and the boundary condition associated with the outgoing characteristic variables should be extrapolated from the interior solution. The other waves that are incoming into the computational domain require physical boundary conditions.
For a subsonic inflow boundary (let us assume x-beg), one wave (w^1) is leaving the domain (λ_1 = u - a < 0), and the other two (w^2, w^3) are incoming, which means only two of the three primitive variables (ideally (ρ, u) or (ρ, p) for well-posed conditions, e.g., see Sec. 19.3 of ) can be specified physically at the boundary. The remaining variable has to be set numerically from the interior solution in such a way that Δ w^1 = 0 at the boundary. Say we want to specify the values of ρ_ B and u_ B physically; then the pressure value (p_ B) at the boundary is calculated by setting
Δ w^1 = 0,
w^1_ B -w^1_ I = 0,
p_ B = γρ_ B(w^1_ I + u_ B)^2,
where w^1_ I is the value of w^1 at the first active zone (i=1) of the computational domain, which is to be calculated from the interior solution. Therefore, for target density and velocity values of ρ_ t and u_ t, the ghost cell values of the primitive variables are given by
ρ_ B = ρ_ t,
u_ B = u_ t,
p_ B = γρ_ t(w^1_ I + u_ t)^2.
Eqs. (<ref>)-(<ref>) ensure that Δ w^1= 0 at the boundary, and the wave-reflection is minimised.
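In practice, in a finite-volume code these relations are applied at every step to fill the ghost cells from the prescribed inflow state and the interior value of w^1. The following minimal Python sketch illustrates the procedure; the function name, the test values and the adiabatic index are illustrative assumptions, not part of our setup.

import numpy as np

GAMMA = 5.0 / 3.0                          # adiabatic index (illustrative)

def nrbc_inflow_ghost(rho_t, u_t, w1_interior, gamma=GAMMA):
    # Ghost-cell state for a subsonic inflow boundary enforcing
    # Delta w^1 = 0: density and velocity are prescribed physically,
    # the pressure follows from the outgoing wave amplitude.
    rho_B, u_B = rho_t, u_t
    p_B = gamma * rho_B * (w1_interior + u_B)**2
    return rho_B, u_B, p_B

# interior state at the first active cell (arbitrary test values)
rho_i, u_i, p_i = 1.0, 0.2, 1.5
a_i = np.sqrt(GAMMA * p_i / rho_i)
w1_I = p_i / (rho_i * a_i) - u_i           # w^1 = p/(rho a) - u
print(nrbc_inflow_ghost(rho_t=1.2, u_t=0.3, w1_interior=w1_I))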
§.§ Test and comparison
In order to demonstrate how the wind emerges in the interior solution with and without NRBC, we perform one simulation with NRBC implemented at the subsonic inflow boundary with the same setup as described in Sec. <ref>. The wind and cloud initialisations are the same as in GC45_k3 in Table <ref>, with half the resolution of the fiducial runs. We compare the evolution of the wind parameters with a simulation that uses the same initial condition and resolution, but using the method of wind injection adopted in this paper (Sec. <ref>). In the simulation with NRBC, we specify the density and velocity values at the ghost zone, while the pressure of the wind is calculated using Eq. (<ref>).
In Fig. <ref>, we present the time evolution of the wind density (top-left), velocity (top-right), pressure (bottom-left), and power (bottom-right), at the first active cell in simulations with (red lines) and without (blue lines) NRBC. The lower section of each panel illustrates the fractional deviation of each parameter from its intended value.
As observed, the evolution of the physically specified wind parameters—namely density and velocity—remains consistent throughout the simulations with NRBC. However, the pressure of the injected wind shows a significant deviation compared to the simulation without NRBC. Consequently, the deviation of wind power from its intended value is substantial (∼ 90%).
On the other hand, although all wind parameters in the simulation without NRBC deviate from their intended values (blue lines), the cumulative effect results in more stable wind power (with a variation of ∼ 30%) compared to the scenario with NRBC.
Moreover, we have performed additional simulations using the saturated values of the density and velocity for the GC45 case (see Tab. <ref>) with the NRBC. We confirmed that, although the quantitative results differ from the result presented in this study during the transient phase, they closely resemble the results (e.g. the diagnostics presented in Fig. <ref>, <ref>, <ref>) at the saturation phase (t≳ 0.4). Therefore, the qualitative results presented in this study are robust.
§ DENSITY PDF OF THE CLOUD MATERIAL AT THE SAME CLOUD-CRUSHING TIME FOR DIFFERENT POWER
Fig. <ref> depicts the density PDF of the self-gravitating cloud at t = 0.4 t_ cc for simulations with different wind power. This time corresponds to an absolute time of 3.46, 1.38, 0.78 and 0.34 Myr for GC42, GC43, GC44 and GC45, respectively.
We notice a distinctly contrasting trend in the high-density tail of the PDF compared to what is depicted in Fig. <ref>.
In this case, the cloud in the lowest-power simulation (GC42, blue) has the highest fraction of cloud material in the power-law tail. As wind power increases, this fraction as well as the maximum cloud density decreases, due to an increasingly smaller absolute time for self-gravity to influence the evolution. Notably, in the highest-power simulation (GC45), there is no evident signature of the power-law tail at 0.4 t_ cc. Since 0.4 t_ cc = 0.34 Myr ≈ 0.1 t_ ff in this instance, the evolution is at a very early stage in terms of the impact of self-gravity.
§ TURBULENT PROPERTY EVOLUTION IN TERMS OF CLOUD-CRUSHING TIME
Here we examine how the turbulent properties of the clouds in the simulations with different wind power evolve with time in units of the cloud-crushing timescale.
Fig. <ref> shows the evolution of the velocity dispersion (left) and the virial parameter (right) of the cloud material as a function of cloud-crushing time for the fiducial cloud setup with different wind powers.
It is evident that in the absence of self-gravity (dashed lines), the evolutionary trends are similar for all powers: the velocity dispersion as well as the virial parameter peaks at ∼ 0.2 t_ cc, although the magnitudes differ.
However, in the self-gravity runs, the freefall time (t_ ff) is the more relevant timescale to compare to. As the freefall time of the cloud in the lower-power case (GC42) is shorter than t_ cc, the cloud collapses early, at around ∼ 0.5 t_ cc. With increasing power, t_ cc becomes comparable to or even shorter than t_ ff.
Therefore, the collapse of the clouds gets delayed in terms of t_ cc with increasing wind power.
§ EFFECT OF NUMERICAL RESOLUTION
In order to quantify the numerical resolution dependence of the results presented in this study, we perform two simulations with the same wind and cloud initialisation as in GC43_k3, but with decreasing resolution. The widths of a cell in the considered simulations are Δ x = 0.19, 0.39 and 0.78 and the initial cloud radius (r_ c) is resolved by 128 (high), 64 (medium) and 32 (low) cells, respectively.
Fig. <ref> illustrates the time evolution of the cold gas mass (top-left), velocity dispersion (top-right), virial parameter (bottom-left), and star formation rate per freefall time (bottom-right), for simulations with r_ c/Δ x = 128 (solid), 64 (dashed-dotted), and 32 (dotted).
Across all panels, there is a clear dependence on resolution regarding the evolution of these quantities.
While the evolution in the high (solid) and medium (dashed-dotted) resolution simulations closely align, the low (dotted) resolution one exhibits more significant deviations from the medium resolution case.
This systematic dependence on resolution in the scenario of shock-cloud interaction directly stems from how well the instabilities acting on the cloud surface are resolved <cit.>.
As resolution increases, we can resolve perturbations with smaller wavelengths (λ), i.e., higher wavenumber (k).
Since the growth rates for the Kelvin-Helmholtz and Rayleigh-Taylor instabilities are directly proportional to the wavenumber of the perturbation, the clouds in higher-resolution simulations are more susceptible to instabilities and get ablated faster and mixed with the wind.
This systematic variation affects all derived quantities.
For instance, the amount of cold gas in the top-left panel of Fig. <ref> is consistently lower in the higher-resolution simulation than in the lower-resolution case, due to the increased amount of ablation and mixing. This systematic variation is reflected in all the other panels as well.
However, we find that the differences between the results for r_ c/Δ x = 128 and 64 are much smaller than those between the r_ c/Δ x = 64 and 32 pair, implying that the resolution of the fiducial simulations (r_ c/Δ x=128) presented in this paper is adequate, as found by various previous studies <cit.>.
|
http://arxiv.org/abs/2405.09540v1 | 20240515175332 | Singular parabolic operators in the half-space with boundary degeneracy: Dirichlet and oblique derivative boundary conditions | [
"Luigi Negro"
] | math.AP | [
"math.AP",
"35K67, 35B45, 47D07, 35J70, 35J75"
] |
Singular parabolic operators in the half-space with boundary degeneracy: Dirichlet and oblique derivative boundary conditions
L. Negro Dipartimento di Matematica e Fisica “Ennio De
Giorgi”, Università del Salento, C.P.193, 73100, Lecce, Italy. email: luigi.negro@unisalento.it
May 20, 2024
===========================================================================================================================================================
We study elliptic and parabolic problems governed by the singular elliptic operators
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2 D_yy+y^α_1+α_2/2-1(d,∇_x)+cy^α_2-1D_y-by^α_2-2
in the half-space ^N+1_+={(x,y): x ∈^N, y>0}, under Dirichlet or oblique derivative boundary conditions. In the special case α_1=α_2=α the operator ℒ takes the form
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2,
where v=(d,c)∈^N+1, b∈ and
A=(
[ Q q^t; q γ ]) is an elliptic matrix.
We prove elliptic and
parabolic L^p-estimates and solvability for the associated problems. In the language of semigroup
theory, we prove that ℒ generates an analytic semigroup, characterize its domain as a weighted
Sobolev space and show that it has maximal regularity.
Mathematics subject classification (2020): 35K67, 35B45, 47D07, 35J70, 35J75.
Keywords: degenerate elliptic operators, boundary degeneracy, vector-valued harmonic analysis, maximal regularity.
§ INTRODUCTION
In this paper we study solvability and regularity of elliptic and parabolic problems associated to the degenerate operators
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2 D_yy+y^α_1+α_2/2-1(d,∇_x)+cy^α_2-1D_y-by^α_2-2
and D_t- ℒ in the half-space ^N+1_+={(x,y): x ∈^N, y>0} or in (0, ∞) ×^N+1_+ and under Dirichlet or oblique derivative boundary conditions at y=0.
Here v=(d,c)∈^N+1 with d=0 if c =0, b∈ and
A=(
[ Q q^t; q γ ]) is
a constant real elliptic matrix. The real numbers α_1, α_2 satisfy α_2<2 and α_2-α_1<2 but are not assumed to be nonnegative. In the special case α_1=α_2=α the operator ℒ takes the form
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2
whose coefficients are singular for α<0 and degenerate for α>0 at y=0.
This paper is the companion of <cit.>, in which the same type of operators are considered but with d=0, b=0, and with Neumann boundary condition.
We write
B_y to denote the 1-dimensional Bessel operator D_yy+c/yD_y and L_y=D_yy+c/yD_y-b/y^2; note that B_y is nothing but L_y when b=0. With this notation the special cases where
ℒ=y^α_1Δ_x+y^α_2B_y, ℒ=y^α_1Δ_x+y^α_2L_y
has been already studied in <cit.>. The main novelty here consists in the presence of the mixed derivatives 2y^α_1+α_2/2q·∇_xD_y and of the x-derivative y^α_1+α_2/2-1(d,∇_x) in the operator ℒ which is a crucial step for treating degenerate operators in domains, through a localization procedure. Surprisingly enough, the case α_1=α_2 implies all other cases by the change of variables described in Section <ref>. However this modifies the underlying measure and the procedure works if one is able to deal with the complete scale of L^p_m spaces, where L^p_m=L^p(^N+1_+; y^m dxdy).
The interest in this class of singular operators has grown in the last decade as they appear extensively in the literature in
both pure and applied problems. The operators in (<ref>) are strongly connected with nonlocal operators as they play a major role in the investigation of the fractional powers of the Laplacian and of the Heat operator
through the “extension procedure" of Caffarelli and Silvestre, see <cit.> and <cit.> for a more general setting.
We also refer the reader to the introductions of <cit.> for some references
to related problems in probability, mathematical
finance and biology, porous media equations and in degenerate viscous Hamilton-Jacobi equations. This type of singular operators are also connected to the theory of geometric PDEs with edge singularities <cit.> and to the analysis of the regularity of the ratio of solutions to elliptic PDEs <cit.>.
Our main results are Theorems <ref>, <ref>, <ref> and <ref>, where, in the language of semigroup theory, we prove that ℒ generates an analytic semigroup on L^p_m, characterize its domain as a weighted Sobolev space and show that it has maximal parabolic regularity. For the reader's convenience we collect, in Section <ref>, the main hypotheses we assume together with the main results of the paper in the case α_1=α_2 as in (<ref>), referring to Section <ref> for their extension to general α_1, α_2 as in (<ref>).
We prove both elliptic and parabolic estimates which, in the case α_1=α_2=α, b=0 and oblique derivative boundary condition, read as
y^αD^2 u_L^p_m
+ y^α-1v·∇ u_L^p_m≤ Cℒ u_L^p_m,
and
D_t u_L^p_m+ℒu_L^p_m≤ C (D_t-ℒ) u_L^p_m,
where the L^p norms are taken over _+^N+1 and on (0, ∞) ×_+^N+1 respectively. Both the elliptic and parabolic estimates above share the name “maximal regularity" even though this term is often restricted to the parabolic case. Throughout the paper we keep this convention: in the statements of our results, maximal regularity refers to the validity of the parabolic estimates (<ref>), while the elliptic bounds as in (<ref>) are expressed through the precise description of the domain of ℒ.
Let us explain the meaning of the restrictions α_2<2, α_2-α_1<2,
considering first the case where α_1=α_2=α, so that the only requirement is α<2.
It turns out that when α≥ 2 the problem is easily treated in the strip ^N× [0,1] in the case of the Lebesgue measure, see <cit.>, and all difficulties are due to the strong diffusion at infinity. The case α≥ 2 in the strip ^N × [1, ∞[ therefore requires new investigation, even though the 1-dimensional case is easily treated by the change of variables of Section <ref>.
When α_1 ≠α_2, the further restriction α_2-α_1<2 comes from the change of variables of Section <ref>, see Section <ref>.
Let us briefly describe the previous literature on these operators. In <cit.> we considered the simplest case of Δ_x+B_y, making extensive use of the commutative structure of the operator. The non-commutative case of y^α_1Δ_x+y^α_2B_y was later addressed in <cit.>.
Another source of non-commutativity comes from the presence of mixed derivatives. In <cit.> we treated the operator
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2 D_yy+c y^α_2-1D_y
under Neumann boundary conditions. The methods used in these papers rely on tools from vector-valued harmonic analysis, Fourier multipliers and the structure theory of Banach spaces. We also refer the reader to <cit.> and to <cit.> for related results with different methods, but without the powers y^α_1, y^α_2 (α_1=α_2=0) and with variable coefficients.
This paper is devoted to completing the picture in this direction, by adding the x-derivative y^α_1+α_2/2-1(d,∇_x) and the potential term by^α_2-2 and by studying ℒ under Dirichlet or oblique derivative boundary conditions at y=0.
Here we consider only constant matrices Q and constant q, γ. The general case where Q, q, γ are bounded and uniformly continuous is however straightforward and allows one to treat operators in smooth domains, whose degeneracy in the top order coefficients behaves like a power of the distance from the boundary. We shall treat these topics in a forthcoming paper.
We also point out that our results seem to be new in the case of oblique derivative boundary conditions (see Theorems <ref> and <ref>) when α_1≠ 0 or α_2≠ 0 (i.e. when the powers y^α_1, y^α_2 appear in the operator), while for Dirichlet boundary conditions (Theorems <ref> and <ref>) we improve the results in <cit.>, which are valid in the special case α_1=α_2∈ (0,2), v=0, b=0 but with variable coefficients.
The paper is organized as follows. In Section <ref> we collect the hypotheses we assume on ℒ and present the main results of the paper: here, in order to improve readability, we restrict ourselves to the case α_1=α_2 as in (<ref>), referring to Section <ref> for the general case (<ref>).
In Section <ref>, we exploit some elementary changes of variables, in a functional analytic setting, to reduce our operators to simpler cases.
In Section <ref> we collect the results we need concerning anisotropic weighted Sobolev spaces: the main novelty is Section <ref> where we introduce and study the Sobolev space W^2,p_v(α_1,α_2,m) having oblique derivative boundary condition.
This careful study is essential for characterizing the domain of ℒ when the x-derivative y^α_1+α_2/2-1(d,∇_x) is present and when α_1≠α_2.
In Sections <ref> and <ref>, which are the core of the paper, we prove generation results, maximal regularity and domain characterization for the operator ℒ, under respectively oblique derivative and Dirichlet boundary conditions, both in the case α_1=α_2.
Finally, in Section <ref>, we extend the results to general α_1, α_2.
Notation. For N ≥ 0, ^N+1_+={(x,y): x ∈^N, y>0}. For m ∈ we consider the measure y^m dx dy in ^N+1_+ and we write L^p_m(_+^N+1) for L^p(_+^N+1; y^m dx dy) and often only L^p_m when ^N+1_+ is understood.
^+={λ∈: λ >0 } and, for |θ| ≤π, we denote by Σ_θ the open sector {λ∈: λ≠ 0, |Arg (λ)| <θ}.
We denote by α^+ and α^- the positive and negative part of a real number, that is α^+=max{α,0}, α^-=-min{α,0}.
We write often (x,y) or x· y to denote the inner product of ^N and, for A,B∈^N,N symmetric, (AB)=∑_i,ja_ijb_i,j. Moreover, if ω∈^N, we write also ω⊗ω∈^N,N to denote the matrix (ω_iω_j)_i;j=1,… N; with this notation one has (ω⊗ω A)=(Aω,ω).
We use B for the one-dimensional Bessel operator D_yy +c/y D_y and L for D_yy+c/yD_y-b/y^2. Here c, b ∈ and both operators are defined on the half-line (0, ∞).
§ THE MAIN RESULTS AND ASSUMPTIONS
We consider first, for b,c∈, the 1d operators
L=D_yy+c/yD_y-b/y^2, B=D_yy+c/yD_y
on the half line _+=]0, ∞[. Note that B (which stands for Bessel) is nothing but L when b=0. Often we write L_y, B_y to indicate that they act with respect to the y variable.
The equation Lu=0 has solutions y^-s_1, y^-s_2 where s_1,s_2 are the roots of the indicial equation f(s)=-s^2+(c-1)s+b=0
s_1 := (c-1)/2 - √(D),
s_2 := (c-1)/2 + √(D)
where
D := b + ((c-1)/2)^2.
The above numbers are real if and only if D ≥ 0. When D<0 the equation u-Lu=f cannot have positive distributional solutions for certain positive f, see <cit.>.
When b=0, then √(D)=|c-1|/2 and s_1=0, s_2=c-1 for c ≥ 1 and s_1=c-1, s_2=0 for c<1.
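These statements are elementary to check symbolically; the following sympy sketch verifies that L(y^-s) is proportional to the indicial polynomial f(s) and that s_1,2 are its roots (a computation, of course, not a proof).

import sympy as sp

y = sp.symbols('y', positive=True)
s, b, c = sp.symbols('s b c', real=True)

u = y**(-s)
Lu = sp.diff(u, y, 2) + (c / y) * sp.diff(u, y) - (b / y**2) * u

# L(y^{-s}) = -f(s) * y^{-s-2}, with f(s) = -s^2 + (c-1)s + b
f = -s**2 + (c - 1) * s + b
poly = sp.expand(Lu * y**(s + 2))
assert sp.expand(poly + f) == 0

D = b + ((c - 1) / 2)**2
for root in ((c - 1) / 2 - sp.sqrt(D), (c - 1) / 2 + sp.sqrt(D)):
    assert sp.simplify(f.subs(s, root)) == 0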
We now introduce a (N+1)-d generalization of the operators (<ref>). For reader's convenience we list below the main hypothesis and notation which we assume throughout the whole manuscript.
Let v=(d,c)∈^N+1 and let A=(a_ij)∈^N+1,N+1 be a symmetric and positive definite (N+1)× (N+1) matrix; we write A as
A:= (
[ Q q^t; q γ ])
where Q∈^N× N, q=(q_1, …, q_N)∈^N and γ=a_N+1, N+1>0. Let α_1,α_2∈ such that
α_2<2, α_2-α_1<2.
For m ∈ we consider the measure y^m dx dy in ^N+1_+ and we write L^p_m(_+^N+1) for L^p(_+^N+1; y^m dx dy) and often only L^p_m when ^N+1_+ is understood.
We consider the (N+1)-d degenerate operator
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2 D_yy+y^α_1+α_2/2-1(d,∇_x)+cy^α_2-1D_y-by^α_2-2
in the space L^p_m=L^p_m(^N+1_+). Note that ℒ can be written equivalently as
ℒ =y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x)
where L_y is the operator defined in (<ref>) with parameters b/γ, c/γ.
Note that in the special case α_1=α_2=α, where the only requirement is α<2, the operator ℒ takes the form
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2.
We always assume that the coefficients of the operator L_y in (<ref>) satisfy D≥ 0.
We study unique solvability of the problems
λ u-ℒ u=f, D_t v -ℒ v=g
in the spaces L^p_m(^N+1_+) spaces under
Dirichlet or oblique derivative boundary conditions at y=0, and initial conditions in the parabolic case, together with the regularity of u,v. In the language of semigroup theory, we prove that ℒ generates an analytic semigroup on L^p_m, characterize its domain and show that it has maximal regularity, which means that both D_t v and ℒ v have the same regularity as g. To improve readability we recall the following definition.
An analytic semigroup (e^t𝒜)_t ≥0 on a Banach space X with generator 𝒜 has
maximal regularity of type L^q (1<q<∞)
if for each f∈ L^q([0,T];X), the following parabolic problem associated with 𝒜
D_tu(t)-𝒜u(t)=f(t),  t>0,
u(0)=0
has a unique solution u∈ W^1,q([0,T];X)∩ L^q([0,T];D(𝒜)). This means that the mild solution of (<ref>), given by the variation of parameters formula
t↦ u(t)=∫_0^te^(t-s)𝒜f(s) ds,
is indeed a strong solution and has the best regularity one can expect.
It is known that the property above does not depend on 1<q<∞ and T>0.
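As a purely illustrative, finite-dimensional analogue of the variation of parameters formula (<ref>) (where the generator is a matrix, not ℒ), the following Python sketch computes the mild solution by quadrature and cross-checks it against a direct time integration.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 0.5],                 # stable matrix in the role of the generator
              [0.0, -2.0]])
f = lambda t: np.array([np.sin(t), 1.0])

def mild_solution(t, n=2000):
    # u(t) = int_0^t e^{(t-s)A} f(s) ds, midpoint quadrature
    s = (np.arange(n) + 0.5) * t / n
    ds = t / n
    return sum(expm((t - si) * A) @ f(si) * ds for si in s)

T = 1.5
sol = solve_ivp(lambda t, u: A @ u + f(t), (0.0, T), np.zeros(2),
                rtol=1e-10, atol=1e-12)
print(mild_solution(T), sol.y[:, -1])      # the two agree to quadrature accuracy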
A characterization of maximal regularity is available in UMD Banach spaces, through the ℛ-boundedness of the resolvent in a suitable sector ω+Σ_ϕ, with ω∈ and ϕ>π/2: this approach is widely described in <cit.> and in the new books <cit.>, <cit.>.
Our main results are the following. We refer to Section <ref> for the definition of the weighted Sobolev spaces involved,
W^2,p_𝒩(α_1,α_2,m), W^2,p_v(α_1,α_2,m),
which carry Neumann and oblique derivative boundary conditions, respectively (see also Definition <ref>).
For simplicity, here, we state the results only in the special case α_1=α_2=α, where
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2,
referring to Section <ref> for the general case having possibly different weights y^α_1, y^α_2, in front of the x and y derivatives.
We start by considering b=0 and endow ℒ with the Neumann or oblique derivative boundary conditions
lim_y→ 0 D_y u=0 (if v=0), lim_y→ 0y^c/γ v ·∇ u=0 (if c≠ 0).
We define accordingly (see Propositions <ref>, <ref>, <ref>)
W^2,p(α,α,m)= {u∈ W^2,p_loc(^N+1_+): y^α D^2u, y^α/2∇ u, u∈ L^p_m}
and
W^2,p_v(α, α,m) ={u ∈ W^2,p(α, α,m): y^α-1v·∇ u ∈ L^p_m}, (c≠ 0);
W^2,p_v(α, α,m):=W^2,p_𝒩(α, α,m) ={u ∈ W^2,p(α, α,m): y^α-1D_yu ∈ L^p_m}, (v= 0),
(note that W^2,p_(0,c)(α, α,m)=W^2,p_𝒩(α, α,m)).
(Theorems <ref> and <ref>)
Let v=(d,c)∈^N+1 with d=0 if c=0, and let α∈ such that α<2 and
α^- <m+1/p<c/γ+1-α.
Then the operator
ℒ =y^α(AD^2)+y^α-1(v,∇)
generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover
D(ℒ)=W^2,p_v(α,α,m)
and the set 𝒞_v defined in (<ref>) is a core for ℒ.
We then add the potential term -by^α-2 and endow ℒ with the Dirichlet boundary condition (see Corollary <ref>)
lim_y→ 0y^s_2 u=0 (if D>0), lim_y→ 0y^s_2 u∈ (if D=0),
where D and s_1,2 are defined in (<ref>), (<ref>).
(Theorems <ref> and <ref>) Let α∈ such that α<2 and
s_1+ α^-<m+1/p<s_2+2-α.
Then, under Assumptions <ref> and <ref>, the operator
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2
generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover,
D(ℒ)
=y^-s_1W^2,p_w(α,α,m-s_1p), w=v-2s_1(q,γ).
The maximal regularity of ℒ stated in the theorems above immediately implies the following result, which we state, for simplicity, only in the case of oblique boundary conditions. The proof follows directly from the above theorems, Definition <ref> and standard semigroup theory.
Let v=(d,c)∈^N+1 with d=0 if c=0, and let α∈ such that α<2 and
α^- <m+1/p<c/γ+1-α.
Let us consider the operator
ℒ=y^α(AD^2)+y^α-1(v,∇)
endowed with domain W^2,p_v(α,α,m):=W^2,p_m,v. Then for each 1<q<∞, T>0 and u_0 ∈ W^2,p_m, v, f∈ L^q([0,T];L^p_m) the problem
∂/∂ t u(t,x,y)-ℒu(t,x,y)=f(t,x,y), t>0,
u(0,x,y)=u_0(x,y)
admits a unique solution u∈ W^1,q([0,T];L^p_m)∩ L^q([0,T];W^2,p_m,v).
§ DEGENERATE OPERATORS AND SIMILARITY TRANSFORMATIONS
In this section we consider the operator ℒ defined in Definition <ref> namely
ℒ =y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2 D_yy+y^α_1+α_2/2-1(d,∇_x)+cy^α_2-1D_y-by^α_2-2
which we often shorten by writing
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x)
where L_y=D_yy+c/γ/yD_y-b/γ/y^2 is the operator defined in (<ref>) with parameters b/γ, c/γ.
We investigate how this operator can be transformed by means of changes of variables and multiplications.
For k,β∈, β≠ -1 let
T_k,β u(x,y) :=|β+1|^1/py^ku(x,y^β+1), (x,y)∈^N+1_+.
Observe that
T_k,β ^-1=T_-k/β+1,-β/β+1 .
Let 1≤ p≤∞, k,β∈, β≠ -1. The following properties hold.
(i) For every m∈, T_k,β maps isometrically L^p_m̃ onto L^p_m where
m̃=m+kp-β/β+1.
(ii) For every u∈ W^2,1_loc(^N+1_+) one has
1. y^α T_k,β u=T_k,β (y^α/β+1u), for any α∈;
2. D_x_ix_j(T_k,β u)=T_k,β(D_x_ix_j u), D_x_i(T_k,β u)=T_k,β(D_x_i u);
3. D_y T_k,β u=T_k,β (ky^-1/β+1u+(β+1)y^β/β+1D_yu),
D_yy (T_k,β u)=T_k,β ((β+1)^2y^2β/β+1D_yyu+(β+1)(2k+β)y^β-1/β+1D_y u+k(k-1)y^-2/β+1u).
4. D_xy T_k,β u=T_k,β (ky^-1/β+1D_xu+(β+1)y^β/β+1D_xyu)
Proof. The proof of (i) follows after observing that the Jacobian of the map (x,y)↦ (x,y^β+1) is |1+β|y^β. To prove (ii), one can easily observe that any x-derivative commutes with T_k,β. Then we compute
D_y T_k,β u(x,y)= |β+1|^1/py^k(ku(x,y^β+1)/y+(β+1)y^β D_y u(x,y^β+1))
= T_k,β (ky^-1/β+1u+(β+1)y^β/β+1D_yu)
and similarly
D_yy T_k,β u(x,y)= T_k,β ((β+1)^2y^2β/β+1D_yyu+(β+1)(2k+β)y^β-1/β+1D_y u+k(k-1)y^-2/β+1u).
Let
T_k,β be the isometry above defined and let
ℒ =y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2D_yy+y^α_1+α_2/2-1(d,∇_x)+c y^α_2-1D_y-by^α_2-2.
The following properties hold.
(i) For every u∈ W^2,1_loc(^N+1_+) one has
T_k,β ^-1(ℒ)T_k,β u=ℒ̃ u
where
ℒ̃=y^α̃_1(QD^2_x)+2y^α̃_1+α̃_2/2q̃·∇_xD_y+y^α̃_2γ̃D_yy+y^α̃_1+α̃_2/2-1(d̃,∇_x)+y^α̃_2-1c̃D_y-b̃y^α̃_2-2
is the operator defined as in Definition <ref> but with parameters given by
α̃_1=α_1/β+1, α̃_2=α_2+2β/β+1
and
q̃=(β+1)q, γ̃=(β+1)^2γ, d̃=2kq+d,
c̃=(β+1)(c+(2k+β)γ), b̃=b-k(c+(k-1)γ).
(ii) In particular choosing β=α_1-α_2/2 and setting α̃=2α_1/α_1-α_2+2 one has α̃_1=α̃_2=α̃ and, for every u∈ W^2,1_loc(^N+1_+),
T_k,β ^-1(ℒ)T_k,β u
=y^α̃(ÃD^2u)+y^α̃-1(ṽ,∇ u )-b̃y^α-2u.
where Ã= (
[ Q q̃^t; q̃ γ̃ ]) and ṽ=(d̃, c̃).
Proof. The proof follows after a tedious but straightforward computation using Proposition <ref>.
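For the interested reader, the computation can be checked symbolically in the one-dimensional case N=0 by testing the identity on the powers u=y^σ with σ generic, which determine the coefficients of any operator of this form. A minimal sympy sketch, with sample rational values of k and β to ease the exponent arithmetic, reads as follows.

import sympy as sp

y = sp.symbols('y', positive=True)
sigma, a2, gam, c, b = sp.symbols('sigma alpha2 gamma c b', real=True)
k, beta = sp.Rational(1, 3), sp.Rational(1, 2)     # sample values (beta != -1)

def L(f, gam, c, b, a2):
    return (gam * y**a2 * sp.diff(f, y, 2)
            + c * y**(a2 - 1) * sp.diff(f, y)
            - b * y**(a2 - 2) * f)

u = y**sigma
Tu = y**k * u.subs(y, y**(beta + 1))               # T_{k,beta} u (the constant cancels)
lhs = y**(-k / (beta + 1)) * L(Tu, gam, c, b, a2).subs(y, y**(1 / (beta + 1)))

a2t = (a2 + 2 * beta) / (beta + 1)
gamt = (beta + 1)**2 * gam
ct = (beta + 1) * (c + (2 * k + beta) * gam)
bt = b - k * (c + (k - 1) * gam)
rhs = L(u, gamt, ct, bt, a2t)

assert sp.simplify(sp.powsimp(lhs - rhs, force=True)) == 0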
If in the above Proposition we write both operator ℒ, ℒ̃ in the compact form (<ref>), we have the following result.
Let
T_k,β be the isometry above defined and let
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x)
where L_y=D_yy+c/γ/yD_y-b/γ/y^2.
Then
(i) For every u∈ W^2,1_loc(^N+1_+) one has
T_k,β ^-1(ℒ)T_k,β u =y^α̃_1(QD^2_xu)+2y^α̃_1+α̃_2/2(q̃,∇_xD_yu)+y^α̃_2γ̃L̃_yu+y^α̃_1+α̃_2/2-1(d̃,∇_x u)
where L̃_y=D_yy+c̃/γ̃/yD_y-b̃/γ̃/y^2.
(ii) The discriminant D̃ and the parameters s̃_1,2 of L̃_y defined as in (<ref>), (<ref>) are related to those of L_y by
D̃ =D/(β+1)^2,
and
s̃_1,2=s_1,2+k/β+1 (β+1>0), s̃_1,2 =s_2,1+k/β+1 (β+1<0).
Proof. The first claim is simply a reformulation of (i) of Proposition <ref>. (ii) then follows directly by (<ref>) and Definitions (<ref>), (<ref>).
We define now for ω∈^N and β≠ -1 the following isometry of L^p_m
S_β,ω u(x,y) :=u(x+ω y^β+1,y), (x,y)∈_+^N+1.
Let ω∈^N and β≠ -1. Then for every m∈, S_β,ω is an isometry of L^p_m and for every u∈ W^2,1_loc(^N+1_+) one has
1. y^α S_β,ω u=S_β,ω(y^αu), for any α∈;
2. D_x_ix_j(S_β,ωu)=S_β,ω(D_x_ix_j u), D_x_i(S_β,ωu)=S_β,ω(D_x_i u);
3. D_y S_β,ωu=S_β,ω((β+1)y^β (∇_x u,ω)+D_yu),
D_yy (S_β,ω u)=S_β,ω((β+1)^2y^2β (D^2_x u·ω,ω)+2(β+1)y^β (∇_x D_y u, ω)
+β (β+1)y^β-1(∇_x u,ω)+ D_yyu);
4. ∇_xD_yS_β,ωu=S_β,ω((β+1)y^β D^2_x u·ω+∇_xD_yu).
Proof. The proof follows after a straightforward computation.
Let
S_β,ω be the isometry above defined and let ℒ the operator defined in (<ref>) with L_y=D_yy+c/γ/yD_y-b/γ/y^2. Then for every u∈ W^2,1_loc(^N+1_+) one has
S_β,ω^-1(y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x))S_β,ωu
=y^α_1(QD^2_xu)+2(β+1)y^α_1+α_2+2β/2(q⊗ω D^2_xu)+γ (β+1)^2y^α_2+2β(ω⊗ω D^2_xu)
+2y^α_1+α_2/2(q,∇_xD_yu)+2γ (β+1)y^α_2+β(ω,∇_xD_y u)
+γ y^α_2D_yyu+cy^α_2-1D_yu-by^α_2-2u
+(c+γβ)(β+1)y^α_2+β-1(ω, ∇_xu)+y^α_1+α_2/2-1(d,∇_xu).
Proof. Using Proposition <ref>, one has
* S_β,ω^-1(y^α_1(QD^2_xu))S_β,ωu=y^α_1(QD^2_xu);
* S_β,ω^-1(2y^α_1+α_2/2(q,∇_xD_y))S_β,ωu=2(β+1)y^α_1+α_2+2β/2(q⊗ω D^2_xu)+2y^α_1+α_2/2(q,∇_xD_yu);
* S_β,ω^-1(y^α_2γ D_yy)S_β,ωu=γ (β+1)^2y^α_2+2β(ω⊗ω D^2_xu)+2γ (β+1)y^α_2+β(ω,∇_xD_y u)
+γ y^α_2D_yyu+γβ(β+1)y^α_2+β-1(ω,∇_x u);
* S_β,ω^-1(cy^α_2-1D_y)S_β,ωu=c(β+1)y^α_2+β-1(ω, ∇_xu)+cy^α_2-1D_yu;
* S_β,ω^-1(y^α_1+α_2/2-1(d,∇_x))S_β,ωu=y^α_1+α_2/2-1(d,∇_xu).
The required claim then follows after a straightforward computation.
When α_2-α_1≠ 2 we can specialize the previous relation by choosing β=α_1-α_2/2.
Let ω∈^N and let S_β,ω be the isometry defined in (<ref>) with β=α_1-α_2/2≠ -1. Let ℒ the operator defined in (<ref>) with L_y=D_yy+c/γ/yD_y-b/γ/y^2. Then one has
S_β,ω^-1 (y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x))S_β,ωu
=y^α_1(Q̃D^2_xu)+2y^α_1+α_2/2(q̃,∇_xD_yu)+γ y^α_2 L_yu+y^α_1+α_2/2-1(d̃,∇_xu)
where
Q̃=Q+2(β+1) q⊗ω+γ(β+1)^2ω⊗ω, q̃=q+γ(β+1)ω,
d̃=d+(c+γβ)(β+1)ω
Proof. The proof follows by specializing Proposition <ref> to β=α_1-α_2/2≠ -1.
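Again the statement can be checked symbolically; since the coefficients arise from a chain-rule computation, it suffices (by linearity and density of tensor products) to test the identity on products F(x)G(y). A minimal sympy sketch for N=1, with sample rational weights α_1, α_2, reads as follows.

import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)
Q, q, gam, c, b, d, w = sp.symbols('Q q gamma c b d omega', real=True)
a1, a2 = sp.Rational(1, 3), sp.Rational(1, 5)      # sample weights: a2 < 2, a2 - a1 < 2
beta = (a1 - a2) / 2                               # beta_alpha

F, G = sp.Function('F'), sp.Function('G')

def Lop(f, Q, q, d):
    Ly = sp.diff(f, y, 2) + (c / gam) / y * sp.diff(f, y) - (b / gam) / y**2 * f
    return (y**a1 * Q * sp.diff(f, x, 2)
            + 2 * y**((a1 + a2) / 2) * q * sp.diff(f, x, y)
            + gam * y**a2 * Ly
            + y**((a1 + a2) / 2 - 1) * d * sp.diff(f, x))

Su = F(x + w * y**(beta + 1)) * G(y)               # S_{beta,omega}(F G)
lhs = Lop(Su, Q, q, d).subs(x, x - w * y**(beta + 1)).doit()

Qt = Q + 2 * (beta + 1) * q * w + gam * (beta + 1)**2 * w**2
qt = q + gam * (beta + 1) * w
dt = d + (c + gam * beta) * (beta + 1) * w
rhs = Lop(F(x) * G(y), Qt, qt, dt)

assert sp.simplify(sp.expand(lhs - rhs)) == 0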
§ ANISOTROPIC WEIGHTED SOBOLEV SPACES
Let p>1, m, α_1,α_2 ∈ such that
α_2<2, α_2-α_1<2, α_1^- <m+1/p.
In order to describe the domain of the operator
ℒ =y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x)
we collect in this section the results we need about suitable anisotropic weighted Sobolev spaces.
The main novelty is Section <ref> where we introduce the Sobolev space W^2,p_v(α_1,α_2,m) having oblique derivative boundary condition: here, in order to improve readability and although not essential, we treat separately
the cases α_1=α_2 and α_1≠α_2.
Besides this we also briefly recall the main properties of the spaces W^2,p_𝒩(α_1,α_2,m) and W^2,p_ℛ(α_1,α_2,m), which carry Neumann and Dirichlet boundary conditions respectively, referring to <cit.> for further details and all the corresponding proofs (also outside the above range of parameters (<ref>)).
We also clarify, in Propositions <ref> and <ref>, the relation between the three spaces W^2,p_𝒩(α_1,α_2,m), W^2,p_v(α_1,α_2,m) and W^2,p_ℛ(α_1,α_2,m).
§.§ The space W^2,p_𝒩(α_1,α_2,m)
We start by defining the Sobolev space
W^2,p(α_1,α_2,m)def= {u∈ W^2,p_loc(^N+1_+): u, y^α_1 D_x_ix_ju, y^α_1/2 D_x_iu, .
.y^α_2D_yyu, y^α_2/2D_yu, y^α_1+α_2/2 D_y∇_x u∈ L^p_m}
which is a Banach space equipped with the norm
u_W^2,p(α_1,α_2,m)def= u_L^p_m+∑_i,j=1^ny^α_1 D_x_ix_ju_L^p_m+∑_i=1^ny^α_1/2 D_x_iu_L^p_m
+y^α_2D_yyu_L^p_m+y^α_2/2D_yu_L^p_m+y^α_1+α_2/2 D_y∇_x u_L^p_m.
Next we add different boundary conditions for y=0.
We add a Neumann boundary condition for y=0 in the form y^α_2-1D_yu∈ L^p_m and set
W^2,p_𝒩(α_1,α_2,m)def={u ∈ W^2,p(α_1,α_2,m): y^α_2-1D_yu ∈ L^p_m}
with the norm
u_W^2,p_𝒩(α_1,α_2,m)def=u_W^2,p(α_1,α_2,m)+y^α_2-1D_yu_ L^p_m.
We remark that, in the range of parameters (<ref>), the condition on the mixed derivatives in the definition of W^2,p_𝒩(α_1,α_2,m) can be discarded, without any loss of generality, since by
<cit.> and <cit.>
one has for every u ∈ W^2,p_𝒩(α_1,α_2,m)
y^α_1+α_2/2 D_y∇_x u _ L^p_m≤ C [u_L^p_m+y^α_1 D^2_xu_L^p_m+y^α_1/2∇_xu_L^p_m.
. +y^α_2D_yyu_L^p_m+y^α_2/2D_yu_L^p_m+y^α_2-1D_yu_ L^p_m].
With obvious changes we consider also the analogous Sobolev spaces W^2,p(α,m) and W^2,p_ N(α, m) on _+.
For example we have
W^2,p_𝒩(α,m)={u∈ W^2,p_loc(_+): u, y^αD_yyu, y^α/2D_yu, y^α-1D_yu∈ L^p_m}.
All the results of this section will be valid also in _+ changing (when it appears) the condition α_1^- <m+1/p to 0<m+1/p.
The next result clarifies in which sense the condition y^α_2-1D_y u ∈ L^p_m is a Neumann boundary condition.
<cit.> The following assertions hold.
(i) If m+1/p >1-α_2, then W^2,p_𝒩(α_1, α_2, m)=W^2,p(α_1, α_2, m).
(ii) If m+1/p <1-α_2, then
W^2,p_𝒩(α_1, α_2, m)={u ∈ W^2,p(α_1, α_2, m): lim_y → 0D_yu(x,y)=0 for a.e. x ∈^N }.
In both cases (i) and (ii), the norm of W^2,p_𝒩(α_1, α_2, m) is equivalent to that of W^2,p(α_1, α_2, m).
The next results show the density of smooth functions in W^2,p_𝒩(α_1,α_2,m). Let
𝒞:={u ∈ C_c^∞(^N×[0, ∞)), D_y u(x,y)=0 for y ≤δ and some δ>0},
its one dimensional version
𝒟={u ∈ C_c^∞ ([0, ∞)), D_y u(y)=0 for y ≤δ and some δ>0}
and finally (finite sums below)
C_c^∞ (^N)⊗𝒟={u(x,y)=∑_i u_i(x)v_i(y), u_i ∈ C_c^∞ (^N), v_i ∈ D }⊂𝒞.
<cit.>
C_c^∞ (^N)⊗𝒟
is dense in W^2,p_𝒩(α_1,α_2,m).
Note that the condition (m+1)/p>α_1^-, or m+1>0 and (m+1)/p+α_1>0, is necessary for the inclusion 𝒞⊂ W^2,p_𝒩(α_1,α_2,m).
We provide an equivalent description of W^2,p_𝒩(α_1, α_2, m), adapted to the degenerate operator B_y=D_yy+cy^-1D_y. In the first formulation we show that the Neumann boundary condition in the integral form y^α_2-1D_yu∈ L^p_m is actually equivalent to the trace condition lim_y→ 0y^c D_yu=0.
<cit.>
Let c∈ and m+1/p<c+1-α_2. Then setting B_y=D_yy+cy^-1D_y one has
W^2,p_𝒩(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, y^α_2B_y u ∈ L^p_m and lim_y→ 0y^c D_yu=0}
and the norms u_W^2,p_𝒩(α_1,α_2,m) and
u_L^p_m+y^α_1Δ_x u_L^p_m+y^α_2B_y u_L^p_m
are equivalent on W^2,p_𝒩(α_1, α_2, m).
Finally, when 0<m+1/p≤ c-1 then
W^2,p_𝒩(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, y^α_2B_y u ∈ L^p_m}.
The following equivalent description of W^2,p_𝒩(α_1, α_2, m) involves a Dirichlet, rather than Neumann, boundary condition, in a certain range of parameters.
<cit.>
Let c≥ 1 and m+1/p<c+1-α_2. The following properties hold.
(i) If c>1 then
W^2,p_𝒩(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, y^α_2B_y ∈ L^p_m and lim_y→ 0y^c-1 u=0}.
(ii) If c=1 then
W^2,p_𝒩(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, y^α_2B_y u ∈ L^p_m and lim_y→ 0 u(x,y)∈}.
§.§ The space W^2,p_v(α_1,α_2,m)
Let v=(d,c)∈^N+1, with d∈^N and c ≠ 0. We impose now a weighted oblique derivative boundary condition (y^α_1-α_2/2 d ·∇_x u+cD_yu)(x,0)=0 in the integral form (see Proposition <ref>)
y^α_1+α_2/2-1 d ·∇_x u+cy^α_2-1D_yu∈ L^p_m.
For the reader's convenience, although not strictly necessary, we treat separately
the simpler case α_1=α_2 and the case α_1≠α_2, where some complications occur due to the different weights y^α_1, y^α_2 which appear in the x and y directions.
§.§.§ The case α_1=α_2: the space W^2,p_v(α,α,m)
We start by the case α_1=α_2:=α where the condition above reads as
y^α-1v·∇ u=y^α-1( d ·∇_x u+cD_yu)∈ L^p_m.
We define accordingly
W^2,p_v(α, α,m)def={u ∈ W^2,p(α, α,m): y^α-1v·∇ u ∈ L^p_m}
with the norm
u_W^2,p_v(α,α,m)def=u_W^2,p(α,α,m)+ y^α-1v·∇ u_ L^p_m.
In particular, when d=0, one has W^2,p_𝒩(α,α,m)=W^2,p_(0,c)(α,α,m). This justifies the following definition.
To shorten some statements we also write
W^2,p_(0,0)(α,α,m)def=W^2,p_𝒩(α,α,m).
With this notation W^2,p_v(α,α,m) is well defined for any v=(d,c)∈^N+1 such that d=0 if c=0. Note that
W^2,p_v(α,α,m)=W^2,p_𝒩(α,α,m) when c=0 or d=0.
Under this identification, we also define
𝒞_v
:={u ∈ C_c^∞(^N×[0, ∞)), ( v·∇ u)(x,y)=0 for y ≤δ and some δ>0}
with the convention that 𝒞_v:=𝒞 when v=(0,0) and 𝒞 is the set defined in (<ref>).
In what follows we clarify the relation between the spaces W^2,p_𝒩(α,α,m) and W^2,p_v(α,α,m). The next Proposition shows that W^2,p_v(α, α,m) is related to W^2,p_𝒩(α,α,m) by means of the isometry S_0,ω of L^p_m defined in (<ref>) with β=0 and ω=-d/c, namely
S_0,-d/c u(x,y) :=u(x-d/c y,y), (x,y)∈_+^N+1.
One has
S_0,-d/c (W^2,p_𝒩(α, α,m)) =W^2,p_v(α, α,m)
In particular the set 𝒞_v defined in (<ref>) is dense in W^2,p_v(α, α,m).
Proof. Let u∈ W^2,1_loc(^N+1_+) and let us set ũ=S_0,-d/cu. Then by Proposition <ref> one has
1. y^αD_x_ix_jũ=S_0,-d/c(y^αD_x_ix_j u), y^α/2D_x_iũ=S_0,-d/c(y^α/2D_x_i u),
y^α-1D_x_iũ=S_0,-d/c(y^α-1D_x_i u);
2. y^α/2D_y ũ=S_0,-d/c(-1/cy^α/2 (∇_x u,d)+y^α/2D_yu),
y^α-1D_y ũ=S_0,-d/c(-1/cy^α-1 (∇_x u,d)+y^α-1D_yu);
3. y^α_2D_yyũ=S_0,-d/c(1/c^2y^α (D^2_x u· d,d)-1/c2y^α (∇_x D_y u, d)+ y^αD_yyu)
4. y^α∇_xD_yũ=S_0,-d/c(-1/cy^α D^2_x u· d+y^α∇_xD_yu).
In particular
y^α-1 d ·∇_x ũ+cy^α-1D_yũ =S_0,-d/c(cy^α-1 D_yu).
The above relations show that ũ∈ W^2,p_v(α, α,m) if and only if u∈ W^2,p_𝒩(α, α,m). This proves the required claim. The last claim follows by the density of 𝒞 in W^2,p_𝒩(α, α,m), since 𝒞_v= S_0,-d/c(𝒞).
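The key identity (<ref>) behind this proof is elementary and can be checked on tensor products, which span the cores used above; a short sympy sketch:

import sympy as sp

x, y, d, c = sp.symbols('x y d c', real=True)
F, G = sp.Function('F'), sp.Function('G')

ut = F(x - (d / c) * y) * G(y)                     # S_{0,-d/c}(F G)
oblique = d * sp.diff(ut, x) + c * sp.diff(ut, y)
# the x-derivative terms cancel, leaving the shear of c*D_y u
target = c * F(x - (d / c) * y) * sp.diff(G(y), y)
assert sp.expand(oblique - target) == 0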
As in Propositions <ref> and <ref> we can provide an equivalent description of W^2,p_v(α, α, m), adapted to the degenerate operator D_yy+y^-1v·∇. In the first formulation we show that the oblique boundary condition in the integral form y^α-1v·∇ u∈ L^p_m is actually equivalent to the trace condition lim_y→ 0y^c v·∇ u=0.
Let m+1/p<c+1-α. Then one has
W^2,p_v(α, α, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu∈ L^p_m .
. y^αD_yyu+y^α-1v·∇ u ∈ L^p_m and lim_y→ 0y^c v·∇ u=0}
and the norms u_W^2,p_v(α,α,m) and
u_L^p_m+y^αΔ_x u_L^p_m+y^αD_yyu+y^α-1v·∇ u_L^p_m
are equivalent on W^2,p_ v(α_1, α_2, m).
Finally, when 0<m+1/p≤ c-1 then
W^2,p_v(α, α, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu, y^αD_yyu+y^α-1v·∇ u ∈ L^p_m}.
Proof.
Let ũ∈ W^2,p_v(α, α,m). By Proposition <ref>, ũ=S_0,-d/cu where u∈ W^2,p_𝒩(α, α,m). We observe that by Proposition <ref>, u is characterized by
u, y^αΔ_xu, y^αD_yyu+cy^α-1D_yu ∈ L^p_m and lim_y→ 0y^c D_yu=0.
We remark preliminarily that the Calderon-Zygmund inequality (see e.g. <cit.>) yields
∫_^N |D_x_i x_ju(x,y)|^p dx≤ C∫_^N |Δ_xu(x,y)|^p dx.
Multiplying by y^pα+m and integrating over _+
we obtain
∑_i,j=1^ny^α D_x_ix_ju_L^p_m≤ Cy^αΔ_x u_L^p_m
and the same relation obviously holds for ũ. Moreover by Proposition <ref> again we also obtain
y^α∇_xD_yu_L^p_m≤ C[u_L^p_m+y^αΔ_x u_L^p_m+y^αD_yyu+cy^α-1D_yu_L^p_m].
By Proposition <ref> and Corollary <ref> with β=0 and ω=-d/c we have
1. y^αD_x_ix_jũ=S_0,-d/c(y^αD_x_ix_j u);
2. y^α-1 v ·∇ũ=S_0,-d/c(cy^α-1 D_yu);
3. y^αD_yyũ+y^α-1v·∇ũ=S_0,-d/c(1/c^2y^α(D^2_xu· d,d)-2/cy^α(d,∇_xD_yu)+y^αD_yyu+cy^α-1D_yu).
The above relations and (<ref>), (<ref>), then shows that the requirements in (<ref>) are equivalent to
ũ, y^αΔ_xũ, y^αD_yyũ+y^α-1v·∇ũ∈ L^p_m and lim_y→ 0y^c v·∇ũ=0
which proves the first required claim. The claim for 0<m+1/p≤ c-1 follows similarly.
The following proposition involves a Dirichlet, rather than an oblique, boundary condition, in a certain range of parameters.
Let c≥ 1 and m+1/p<c+1-α. The following properties hold.
(i) If c>1 then
W^2,p_v(α, α, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu∈ L^p_m .
. y^αD_yyu+y^α-1v·∇ u ∈ L^p_m and lim_y→ 0y^c-1 u=0}
(ii) If c=1 then
W^2,p_v(α, α, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu∈ L^p_m .
. y^αD_yyu+y^α-1v·∇ u ∈ L^p_m and lim_y→ 0y^c-1 u∈}
Proof.
The proof follows as in Proposition <ref> by using Proposition <ref> in place of Proposition <ref>.
§.§.§ The general case: the space W^2,p_v(α_1,α_2,m)
We now extend the previous definition also to the case α_1≠α_2. In contrast to the case α_1=α_2, a distortion correction defined by the coefficient
β_αdef=(α_1-α_2)/2
appears in the definition of W^2,p_v(α_1,α_2,m). This is due to the possibly different weights y^α_1, y^α_2 which appear in the x and y directions. It obviously does not appear in the case α_1=α_2, where β_α=0, and it also disappears when d=0 (see Remark <ref>). This correction is essential for the validity of equalities (<ref>) and (<ref>) and for characterizing, in Section <ref>, the domain of the degenerate operator ℒ defined in (<ref>).
We start by defining
F^2,p(α_1, α_2,m)= {u∈ W^2,p_loc(^N+1_+): u, y^α_1 D_x_ix_ju, y^α_1/2 D_x_iu, y^α_1+α_2/2 D_y∇_x u, .
.y^α_2(D_yyu+β_αD_yu/y), y^α_2/2D_yu∈ L^p_m}
with the norm
u_F^2,p(α_1,α_2,m)=u_L^p_m +∑_i,j=1^ny^α_1 D_x_ix_ju_L^p_m+∑_i=1^ny^α_1/2 D_x_iu_L^p_m+y^α_1+α_2/2 D_y∇_x u_L^p_m
+y^α_2(D_yyu+β_αD_yu/y)_L^p_m+y^α_2/2D_yu_L^p_m.
Next we impose the weighted oblique derivative boundary condition
(y^β_α d ·∇_x u+cD_yu)(x,0)=0
(see Proposition <ref>) in the integral form
y^α_1+α_2/2-1 d ·∇_x u+cy^α_2-1D_yu= y^α_2-1( y^β_α d ·∇_x u+cy^α_2-1D_yu)∈ L^p_m.
Accordingly we define
W^2,p_v(α_1,α_2,m)def={u ∈ F^2,p(α_1, α_2,m): y^α_1+α_2/2-1 d ·∇_x u+cy^α_2-1D_yu ∈ L^p_m}
with the norm
u_W^2,p_v(α_1,α_2,m)def=u_F^2,p(α_1,α_2,m)+y^α_1+α_2/2-1 d ·∇_x u+cy^α_2-1D_yu_ L^p_m.
(i) The difference between the spaces W^2,p(α_1,α_2,m) and F^2,p(α_1,α_2,m) lies in the requirement y^α_2(D_yyu+β_αD_yu/y)∈ L^p_m which, in the definitions of F^2,p(α_1,α_2,m) and of W^2,p_v(α_1,α_2,m), cannot be split into
y^α_2D_yyu, y^α_2-1D_yu∈ L^p_m
as in the definition of W^2,p_𝒩(α_1,α_2,m).
(ii) Both the requirements
y^α_2(D_yyu+β_αD_yu/y)∈ L^p_m, y^α_1+α_2/2-1 d ·∇_x u+cy^α_2-1D_yu∈ L^p_m
in the definition of W^2,p_v(α_1,α_2,m) are essential for the validity of Propositions <ref> and <ref> (see Remark <ref>).
(iii) The hypothesis y^α_2(D_yyu+β_αD_yu/y)∈ L^p_m is equivalent to
y^α_2D_yyu-β_α/cy^α_1+α_2/2-1 d ·∇_x u∈ L^p_m.
This follows by combining linearly the two conditions in (<ref>). In particular, when d=0, one has in any case
W^2,p_(0,c)(α_1,α_2,m)=W^2,p_𝒩(α_1,α_2,m), for any c∈.
(iv) The boundary condition (<ref>) is defined up to a normalization constant i.e.
W^2,p_μ v(α_1,α_2,m)=W^2,p_v(α_1,α_2,m), ∀μ∈∖{0}.
Note that in the special case α_1=α_2 the above definition is consistent with the one given in Subsection <ref>. Moreover Remark <ref> (iii) justifies the following definition.
To shorten some statements we also write
W^2,p_(0,0)(α_1,α_2,m)def=W^2,p_𝒩(α_1,α_2,m).
With this notation W^2,p_v(α_1,α_2,m) is well defined for any v=(d,c)∈^N+1 such that d=0 if c=0. Note that
W^2,p_v(α_1,α_2,m)=W^2,p_𝒩(α_1,α_2,m), when c=0 or d=0.
Under this identification, we also define
𝒞_v
:={u ∈ C_c^∞(^N×[0, ∞)), (y^β_α d ·∇_x u+cD_yu)(x,y)=0 for y ≤δ and some δ>0}
with the convention that 𝒞_v:=𝒞 when v=(0,0) and 𝒞 is the set defined in (<ref>).
As before, we clarify the relation between the spaces W^2,p_𝒩(α_1,α_2,m) and W^2,p_v(α_1,α_2,m) by generalizing Proposition <ref>: W^2,p_v(α_1, α_2,m) is related to W^2,p_𝒩(α_1,α_2,m) by means of the isometry S_β_α,ω of L^p_m defined in (<ref>) with
β_α=α_1-α_2/2≠ -1, ω=-d/c(β_α+1),
namely
S_β_α,ω u(x,y) :=u(x-ω y^β_α+1,y), (x,y)∈_+^N+1.
Let β_α=α_1-α_2/2≠ -1 and ω=-d/c(β_α+1). Then one has
S_β_α,ω (W^2,p_𝒩(α_1, α_2,m)) =W^2,p_v(α_1, α_2,m)
In particular the set 𝒞_v defined in (<ref>) is dense in W^2,p_v(α_1, α_2,m).
Proof. Let u∈ W^2,1_loc(^N+1_+) and let us set ũ=S_β_α,ωu. Then by Proposition <ref> one has
1. y^α_1D_x_ix_jũ=S_β_α,ω(y^α_2D_x_ix_j u), y^α_1/2D_x_iũ=S_β_α,ω(y^α_1/2D_x_i u);
2. y^α_2/2D_y ũ=S_β_α,ω((β_α+1)y^α_1/2 (∇_x u,ω)+y^α_2/2D_yu),
y^α_2-1D_y ũ=S_β_α,ω((β_α+1)y^α_1+α_2/2-1 (∇_x u,ω)+y^α_2-1D_yu);
3. y^α_2D_yyũ=S_β_α,ω((β_α+1)^2y^α_1 (D^2_x u·ω,ω)+2(β_α+1)y^α_1+α_2/2 (∇_x D_y u, ω)
+β_α (β_α+1)y^α_1+α_2/2-1(∇_x u,ω)+ y^α_2D_yyu);
4. y^α_1+α_2/2∇_xD_yũ=S_β_α,ω((β_α+1)y^α_1 D^2_x u·ω+y^α_1+α_2/2∇_xD_yu);
5.y^α_1+α_2/2-1 d ·∇_x ũ+cy^α_2-1D_yũ =S_β_α,ω(cy^α_2-1 D_yu).
The above relations show that ũ∈ W^2,p_v(α_1, α_2,m) if and only if u∈ W^2,p_𝒩(α_1, α_2,m). This proves the required claim. The last claim follows by the density of 𝒞 in W^2,p_𝒩(α_1, α_2,m), since 𝒞_v= S_β,ω(𝒞).
Recalling Remark <ref> (iv), equality (<ref>) can be written equivalently as
S_β_α,ω (W^2,p_𝒩(α_1, α_2,m)) =W^2,p_v(α_1, α_2,m), v=(-ω(β_α+1),1)
which is valid for any ω∈^N.
As in Propositions <ref> and <ref> we provide an equivalent description of W^2,p_v(α_1, α_2, m), adapted to the degenerate operator
𝒜: =y^α_2D_yy+y^α_1+α_2/2-1d̃·∇_x + μ y^α_2-1D_y, d̃=μ+β_α/cd.
Let μ∈ such that m+1/p<μ+1-α_2 and 𝒜 be the operator in (<ref>).
Then one has
W^2,p_v(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, 𝒜u ∈ L^p_m .
. and lim_y→ 0y^μ(y^β_α d ·∇_x u+cD_yu)=0}
and the norms u_W^2,p_v(α_1,α_2,m) and
u_L^p_m+y^α_1Δ_x u_L^p_m+𝒜u_L^p_m
are equivalent on W^2,p_v(α_1, α_2, m).
Finally, when 0<m+1/p≤μ-1 then
W^2,p_v(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, 𝒜 u ∈ L^p_m}.
Proof. Let us suppose, preliminarily, μ=c. The proof, in this case, follows as in Proposition <ref> by using the isometry S_β_α,ω in place of S_0,-d/c. The claim for a general μ∈ follows, recalling Remark <ref> (iv), by writing
W^2,p_v(α_1, α_2, m)= W^2,p_(μ/c d,μ)(α_1, α_2, m)
and by using the previous step with (μ/c d,μ) in place of v.
Let μ≥ 1, m+1/p<μ+1-α_2 and 𝒜 be the operator in (<ref>). The following properties hold.
(i) If μ>1 then
W^2,p_v(α_1, α_2, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, 𝒜u ∈ L^p_m and lim_y→ 0y^μ-1 u=0}
(ii) If μ =1 then
W^2,p_v(α, α, m)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, 𝒜 u ∈ L^p_m and lim_y→ 0y^μ-1 u∈}
.
Proof. The proof follows as in Propositions <ref> and <ref>.
The next Proposition shows that in a certain range of parameters the spaces W^2,p_𝒩(α_1, α_2, m) and W^2,p_v(α_1, α_2, m) coincide.
If m+1/p>1-α_1+α_2/2 then W^2,p_𝒩(α_1, α_2, m)= W^2,p_v(α_1, α_2, m) and one has
y^α_1+α_2/2-1∇_xu_L^p_m≤ C y^α_1+α_2/2 D_y∇_x u_L^p_m, ∀ u∈ W^2,p_𝒩(α_1, α_2, m).
Proof. Let u∈ C_c^∞(^N×[0, ∞)). By applying <cit.> to ∇_x u we get
y^α_1+α_2/2-1∇_xu_L^p_m≤ C y^α_1+α_2/2 D_y∇_x u_L^p_m.
Since by Theorem <ref> and Proposition <ref>, the set C_c^∞(^N×[0, ∞)) is dense in both W^2,p_𝒩(α_1, α_2, m) and W^2,p_v(α_1, α_2, m), the above inequality extends to both spaces. This shows in particular that for function u belonging to W^2,p_𝒩(α_1, α_2, m) or W^2,p_v(α_1, α_2, m), so that in particular y^α_1+α_2/2 D_y∇_x u∈ L^p_m, one has
y^α_1+α_2/2-1 d ·∇_x u+cy^α_2-1D_yu ∈ L^p_m ⟺ y^α_2-1D_yu ∈ L^p_m.
This, recalling (iii) in Remark <ref>, proves the required claim.
We end the section by showing how the spaces W^2,p_𝒩(α_1,α_2,m), W^2,p_v(α_1,α_2,m), introduced so far, transform under the action of the map (<ref>) with k=0, β≠ -1, namely
T_0,βu(x,y) :=|β+1|^1/pu(x,y^β+1), (x,y)∈^N+1_+.
Observe that
T_0,β^-1=T_0,-β/β+1 .
Let 1≤ p≤∞, β∈, β≠ -1 and m∈. The following properties hold.
(i) T_0,β maps isometrically L^p_m̃ onto L^p_m where m̃=m-β/β+1.
(ii) Setting α̃_1=α_1/β+1, α̃_2=α_2+2β/β+1 one has
W^2,p_𝒩(α_1,α_2,m) =T_0,β(W^2,p_𝒩(α̃_1,α̃_2,m̃)),
W^2,p_v(α_1,α_2,m) =T_0,β(W^2,p_ṽ(α̃_1,α̃_2,m̃)), ṽ=(d, c(β+1)).
In particular choosing β=β_α=α_1-α_2/2 and setting α̃=2α_1/α_1-α_2+2 and m=2m-α_1+α_2/α_1-α_2+2 one has
W^2,p_v(α_1,α_2,m) =T_0,β_α(W^2,p_ṽ(α̃,α̃,m̃)), ṽ=(d, c(β_α+1)).
Proof. (i) and the first equality in (ii) follow by a straightforward application of Proposition <ref> with k=0 as in <cit.>.
To prove the second equality in (ii) we set ṽ=(d,c̃):=(d,c(β+1)) and
γ=α_1-α_2/2, ω=-d/c(γ+1), γ̃=α̃_1-α̃_2/2, ω̃=-d/c̃(γ̃+1).
and we observe preliminarily that, by construction, one has
γ̃+1=γ+1/β+1, ω̃=ω.
Then, recalling the definitions of S_γ,ω and T_0,β, a straightforward calculation yields
S_γ,ω∘ T_β=T_β∘ S_γ̃,ω̃.
Indeed the latter equality follows by the previous relations on the coefficients after observing that
S_γ,ω(T_β u)(x,y)=(T_β u)(x+ω y^γ+1,y)=|β+1|^1/pu(x+ω y^γ+1,y^β+1)
and
T_β(S_γ̃,ω̃ u)(x,y)=|β+1|^1/p(S_γ̃,ω̃u)(x,y^β+1)=|β+1|^1/pu(x+ω̃y^(γ̃+1)(β+1),y^β+1).
Then using the first equality in (ii) and Proposition <ref>, one has
W^2,p_v(α_1,α_2,m) =S_γ,ω (W^2,p_𝒩(α_1, α_2,m))=S_γ,ω (T_0,β(W^2,p_𝒩(α̃_1,α̃_2,m̃)))
=T_0,β(S_γ̃,ω̃ (W^2,p_𝒩(α̃_1, α̃_2,m))) =T_0,β(W^2,p_ṽ(α̃_1,α̃_2,m̃)).
This proves the required claim.
It is essential to deal with W^2,p_𝒩(α_1,α_2,m), W^2,p_v(α_1,α_2,m): in general the map T_0,β does not transform W^2,p(α̃_1,α̃_2,m̃) into W^2,p(α_1,α_2,m) since by (ii)-3 of Proposition <ref> one has
y^α_2D_yy (T_β u)=(β+1)T_β ((β+1)y^α̃_2D_yyu+β y^α̃_2-1D_y u).
§.§ The space W^2,p_ℛ(α_1,α_2,m)
We consider also an integral version of the Dirichlet boundary condition, namely a weighted summability requirement for y^-2u and introduce
W^2,p_ℛ(α_1, α_2, m)def={u ∈ W^2,p(α_1, α_2, m): y^α_2-2u ∈ L^p_m}
with the norm
u_W^2,p_ℛ(α_1, α_2, m)def=u_W^2,p(α_1, α_2, m)+y^α_2-2u_L^p_m.
The symbol ℛ stands for "Rellich", since Rellich inequalities concern the summability of y^-2u.
The following properties hold.
(i) For any u ∈ W^2,p_ℛ(α_1, α_2, m) one has
y^α_2-1D_yu_L^p_m ≤ C( y^α_2D_yyu_L^p_m+y^α_2-2u_L^p_m),
y^α_1+α_2/2-1∇_x u_L^p_m ≤ C( y^α_1D_xxu_L^p_m+y^α_2-2u_L^p_m)
In particular W^2,p_ℛ(α_1, α_2, m)⊆ W^2,p_𝒩(α_1, α_2, m)∩ W^2,p_v(α_1, α_2, m).
(ii) C_c^∞ (^N+1_+) is dense in W^2,p_ℛ(α_1, α_2, m).
(iii) If m+1/p>2-α_2, then
W^2,p_ℛ(α_1, α_2, m) = W^2,p_𝒩(α_1, α_2, m)= W^2,p_v(α_1, α_2, m)=W^2,p(α_1, α_2, m)
with equivalence of the corresponding norms.
Proof. The proof follows by <cit.>. We need only to prove the second inequality in (i). With this aim let u ∈ W^2,p_ℛ(α_1, α_2, m). We use the classic interpolative inequality
∇_x u(·,y)_L^p(^N)≤εΔ_x u(·,y)_L^p(^N)+C(N,p)/ε u(·,y)_L^p(^N).
Multiplying the above inequality by y^α_1+α_2/2-1 and choosing ε=y^1-α_2-α_1/2 we get
y^α_1+α_2/2-1∇_x u(·,y)_L^p(^N)≤y^α_1Δ_x u(·,y)_L^p(^N)+C(N,p) y^α_2-2 u(·,y)_L^p(^N).
The required estimate then follows after raising to the power p and integrating in y. We remark that in (iii), under the range of parameters (<ref>), we have m+1/p>2-α_2>1-α_1+α_2/2 and then, by Proposition <ref>, W^2,p_𝒩(α_1, α_2, m)= W^2,p_v(α_1, α_2, m).
Finally, we investigate the action of the isometry T_k,β defined in (<ref>). We start with the case k=0.
Let T_0,β the map defined in (<ref>). Then
W^2,p_ℛ(α_1,α_2,m)=T_0,β(W^2,p_ℛ(α̃_1,α̃_2,m̃))
where m̃=m-β/β+1, α̃_1=α_1/β+1, α̃_2=α_2+2β/β+1. In particular choosing β=β_α=α_1-α_2/2 one has
W^2,p_ℛ(α_1,α_2,m)=T_0,β_α(W^2,p_ℛ(α̃,α̃,m̃)), m=2m-α_1+α_2/α_1-α_2+2, α̃=2α_1/α_1-α_2+2.
Proof.
The claim follows from Proposition <ref> since by Proposition <ref> one has
W^2,p_ℛ(α_1,α_2,m)=W^2,p_𝒩(α_1,α_2,m)∩{u∈ L^p_m: y^α_2-2u∈ L^p_m}
and noticing that y^α_2-2u∈ L^p_m if and only if y^α̃_2-2u∈ L^p_m̃.
We consider now the multiplication operator T_k,0:u↦ y^ku.
<cit.>
Let α_2-α_1<2 and m+1/p>2-α_2. For every k∈,
T_k,0: W^2,p_𝒩(α_1, α_2, m) → W^2,p_ℛ(α_1, α_2, m-kp)
is an isomorphism (we shall write y^k W^2,p_𝒩(α_1, α_2, m)= W^2,p_ℛ(α_1, α_2, m-kp)).
Finally we deal with the isometry of L^p_m, S_β_α,ω defined in (<ref>) with β_α=α_1-α_2/2≠ -1 and ω=-d/c(β_α+1).
Let β_α=α_1-α_2/2≠ -1 and ω=-d/c(β_α+1). Then one has
S_β_α,ω (W^2,p_ℛ(α_1, α_2,m)) =W^2,p_ℛ(α_1, α_2,m)
Proof. The proof follows as in Proposition <ref> using Proposition <ref>.
§ THE OPERATOR WITH OBLIQUE BOUNDARY CONDITIONS
In this section we study parabolic problems related to the operator
ℒ =y^α(AD^2)+y^α-1(v,∇)
defined in (<ref>) in the case b=0. Here v=(d,c)∈^N+1 and, under the hypotheses in Assumption <ref>, we always assume α<2 and the diffusion matrix
A= (
[ Q q^t; q γ ])
to be symmetric and positive definite. We endow ℒ with Neumann and oblique boundary conditions in the sense specified below.
We briefly recall the weighted Sobolev spaces, which are introduced and analysed in detail, together with their boundary conditions, in Sections <ref> and <ref>.
We set, at first,
W^2,p(α,α,m)= {u∈ W^2,p_loc(^N+1_+): y^α D^2u, y^α/2∇ u, u∈ L^p_m}.
We start by considering, preliminarily, the case d=0 and we add, accordingly, a Neumann boundary condition
lim_y→ 0y^c/γD_yu=0
in integral form (see Proposition <ref>) by defining
W^2,p_𝒩(α,α,m) ={u ∈ W^2,p(α,α,m): y^α-1D_yu ∈ L^p_m}.
The following result characterizes the generation properties, the maximal regularity and the domain of ℒ in the special case d=0. It has been proved by the author in <cit.> by constructing, using tools from vector-valued harmonic analysis and Fourier multipliers, a resolvent (λ-ℒ)^-1 of ℒ for λ in a suitable sector ω+Σ_ϕ, with ω∈ and by proving that the family λ (λ-ℒ)^-1 is ℛ-bounded on ℬ(L^p_m). We refer the reader to <cit.> for the proof and any further details.
<cit.>
Let α∈ such that α<2 and
α^- <m+1/p<c/γ+1-α.
Then the operator
ℒ =y^α(AD^2)+y^α-1 c D_y
endowed with domain W^2,p_𝒩(α,α,m) generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover the set C_c^∞ (^N)⊗𝒟 defined in (<ref>) is a core for ℒ.
Finally, the estimate
y^α D_x_i x_j u_L^p_m +y^α D_yy u_L^p_m+y^α D_x_iy u_L^p_m+y^α-1 D_y u_L^p_m≤ Cℒu_L^p_m
holds for every u ∈ W^2,p_𝒩(α,α,m)
When c≠ 0 and d∈, we can impose an oblique derivative boundary condition
lim_y→ 0y^c/γ v ·∇ u=0
in integral form (see Proposition <ref>) by defining
W^2,p_v(α, α,m)def={u ∈ W^2,p(α, α,m): y^α-1v·∇ u ∈ L^p_m}.
Recalling Definition <ref>, to shorten the notation we also write W^2,p_(0,0)(α, α,m)=W^2,p_𝒩(α, α,m).
We transform ℒ into a similar operator with d=0 and Neumann boundary condition.
Indeed, we use the map S_0,ω of Section <ref> defined in (<ref>) with β=0 and ω=-d/c, namely
S_0,-d/c u(x,y) :=u(x-d/c y,y), (x,y)∈_+^N+1.
We recall that, by Proposition <ref> and
Corollary <ref>, S_0,-d/c is an isometry of L^p_m and for every
u∈ W^2,1_loc(^N+1_+) one has
S_0,-d/c^-1 (y^α(AD^2)+y^α-1(v,∇))S_0,-d/cu=y^α(ÃD^2u)+y^α-1c D_yu
where
Ã= (
[ Q̃ q̃^t; q̃ γ ]), Q̃=Q-2/c q⊗ d +γ/c^2d⊗ d, q̃=q-γ/c d.
We can then deduce the following result.
Let v=(d,c)∈^N+1 with d=0 if c=0, and let α∈ such that α<2 and
α^- <m+1/p<c/γ+1-α.
Then the operator
ℒ =y^α(AD^2)+y^α-1(v,∇)
generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover
D(ℒ)=W^2,p_v(α,α,m)
and the set 𝒞_v defined in (<ref>) is a core for ℒ.
Proof.
The claim for c=0 or d=0 is just Theorem <ref>. Let us suppose c≠ 0. According to the discussion above the isometry S_0,-d/c of L^p_m
transforms ℒ into
ℒ̃=y^α(ÃD^2)+y^α-1c D_y,
The statement on generation and maximal regularity is therefore a translation to ℒ and in L^p_m of the results of Theorem <ref> for ℒ̃.
Also, using Proposition <ref>, one has
D(ℒ)=S_0,-d/c(D(ℒ̃))=S_0,-d/c(W^2,p_𝒩(α,α,m))=W^2,p_v(α,α,m).
Under the assumptions of the previous theorem, the estimate
y^α D_x_i x_j u_L^p_m +y^α D_yy u_L^p_m+y^α D_x_iy u_L^p_m+y^α-1 v·∇ u_L^p_m≤ Cℒu_L^p_m
holds for every u ∈ W^2,p_v(α,α,m) (if c=0 replace y^α-1 v·∇ u with y^α-1 D_yu).
Proof.
By Theorem <ref> the above inequality holds if u_L^p_m is added to the right hand side.
Applying it to u_λ (x,y)=u(λ x, λ y), λ >0 we obtain
y^α D_x_i x_j u_L^p_m+y^α D_x_i y u_L^p_m +y^α D_yy u_L^p_m+y^α-1v·∇ u_L^p_m≤ C(ℒ u_L^p_m+λ^α -2 u_L^p_m)
and the proof follows letting λ→∞.
The following corollary highlights the role of the Neumann and of the oblique derivative boundary conditions.
Under the assumptions of the previous theorem one has
D(ℒ)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu, y^αγ D_yyu+y^α-1v·∇ u ∈ L^p_m and lim_y→ 0y^c/γ v·∇ u =0}
(when v=0, replace lim_y→ 0y^c/γ v·∇ u=0 with lim_y→ 0 D_y u=0).
Proof. The proof follows from Theorem <ref> and Propositions <ref> and <ref>.
§ THE OPERATOR WITH DIRICHLET BOUNDARY CONDITIONS
In this section we add a potential term to the operator of Section <ref> and study the operator
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2
with v=(d,c)∈^N+1 and b∈. We endow ℒ with Dirichlet boundary conditions, in the sense specified below.
We always assume the hypotheses in Assumptions <ref> and <ref>; we also recall that, as in Definition <ref>, ℒ can be written equivalently as
ℒ =y^α(QD^2_x)+2y^α(q,∇_xD_y)+y^αγ L_y+y^α-1(d,∇_x)
where A= (
[ Q q^t; q γ ]) and
L_y=D_yy+c/γ/yD_y-b/γ/y^2
is the operator defined in (<ref>) with parameters b/γ, c/γ.
The equation L_yu=0 has solutions y^-s_1, y^-s_2 where s_1,s_2 are the roots of the indicial equation f(s)=-s^2+(c/γ-1)s+b/γ=0 given by
s_1 := (c/γ-1)/2 - √(D),
s_2 := (c/γ-1)/2 + √(D)
where
D := b/γ + ((c/γ-1)/2)^2
is supposed to be nonnegative.
When b=0, then √(D)=|c/γ-1|/2 and s_1=0, s_2=c/γ-1 for c/γ≥ 1 and s_1=c/γ-1, s_2=0 for c/γ<1.
All the results of this section will be valid, with obvious changes, also in _+ for the 1d operators y^α L_y changing (when it appears in the various conditions on the parameters) α^- to 0 (see also Remark <ref>). We also refer to <cit.> for the analogous results concerning the Nd version of L_y.
A multiplication operator transforms ℒ into an operator of the form y^α(AD^2)+y^α-1(v,∇) and allows us to transfer the results of Section <ref> to this situation.
Indeed, we use the map defined in (<ref>) of Section <ref>
T_k,0 u(x,y) :=y^ku(x,y), (x,y)∈^N+1_+
for a suitable choice of k and with β=0.
We recall that, by Propositions <ref>, <ref> and Corollary <ref>, T_k,0 maps isometrically L^p_m̃ onto L^p_m where
m̃=m+kp and for every
u∈ W^2,1_loc(^N+1_+) one has
T_k,0 ^-1(ℒ)T_k,0 u=ℒ̃u
where
ℒ̃=y^α(AD^2)+y^α-1(d̃,∇_x)+y^α-1c̃D_y-b̃y^α-2,
d̃=2kq+d, c̃=c+2kγ, b̃=b-k(c+(k-1)γ).
Equivalently we can write
ℒ̃ =y^α(QD^2_xu)+2y^α(q,∇_xD_yu)+y^αγL̃_yu+y^α-1(2kq+d,∇_xu)
where L̃_y=D_yy+(c̃/γ) y^-1D_y-(b̃/γ) y^-2.
Moreover the discriminant D̃ and the parameters s̃_1,2 of L̃_y are given by
D̃ =D, s̃_1,2=s_1,2+k.
Choosing k=-s_1 and recalling the definition of s_1, we get
b̃=0, c̃=γ(c/γ-2s_1)=γ(1+2√(D))
and therefore
T_-s_1,0^-1(y^α(AD^2)+y^α-1(v,∇)-by^α-2)T_-s_1,0=y^α(AD^2)+y^α-1(w,∇)
where w:=(d̃, c̃)=(d-2s_1q, c-2s_1γ). Moreover from (<ref>) one has
w=v-2s_1(q,γ)=(d-2s_1q, γ(1+2√(D))).
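The vanishing of b̃ is precisely the indicial equation: substituting k=-s_1 into b̃=b-k(c+(k-1)γ) gives
b̃=b+s_1c-s_1^2γ-s_1γ=γ(b/γ+(c/γ-1)s_1-s_1^2)=γ f(s_1)=0,
while c̃=c+2kγ=c-2s_1γ=γ(c/γ-2s_1)=γ(1+2√(D)), using s_1=(c/γ-1)/2-√(D).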
We can now derive the following result.
Let α∈ such that α<2 and
s_1+ α^-<m+1/p<s_2+2-α.
Then, under Assumptions <ref> and <ref>, the operator
ℒ =y^α(AD^2)+y^α-1(v,∇)-by^α-2
generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover,
D(ℒ)
=y^-s_1W^2,p_w(α,α,m-s_1p), w=v-2s_1(q,γ).
Proof. According to the discussion above the map
T_-s_1,0:L^p_m-s_1p→ L^p_m
transforms ℒ into
ℒ̃=y^α(AD^2)+y^α-1(w,∇), w=(-2s_1q+d, c-2s_1γ):=(d̃,c̃).
We observe now that, recalling (<ref>), the hypothesis s_1+ α^-<m+1/p<s_2+2-α is equivalent to α^-<m-ps_1+1/p<c̃/γ+1-α. Moreover Assumption <ref> implies, recalling (<ref>), that c̃=γ(1+2√(D))≥γ>0.
The statement on generation and maximal regularity is therefore a translation to ℒ and in L^p_m of the results of Theorem <ref> for y^α(AD^2)+y^α-1(w,∇) in L^p_m-s_1p.
Also one has
D(ℒ)=T_-s_1,0(D(ℒ̃))=y^-s_1W^2,p_w(α,α,m-s_1p).
(i) Recalling Definition (<ref>) and by a straightforward calculation, equality (<ref>) says that u ∈ y^-s_1W^2,p_w(α,α,m-s_1p) if and only if all functions
u, y^αD_x_i x_ju, y^α(∇_xD_yu+s_1∇_xu/y), y^α/2∇_xu, y^α/2 (D_y u+s_1u/y ),
y^α(D_yyu+2s_1D_yu/y-(s_1-s_1^2)u/y^2),
y^α( w·∇ u/y+s_1c̃u/y^2)
belong to L^p_m ( c̃=w· e_N+1=c-2s_1γ). However, in the range of parameters of Theorem <ref>, one cannot deduce, in general, that y^α-1w·∇ u and y^αD_yyu belong to L^p_m, as one can check on functions like y^-s_1u(x-d̃/c̃y), u∈ C_c^∞(^N), near y=0. This is however possible in the special case of Corollary <ref>.
(ii) Unlike Theorem <ref>, the above remark shows that one cannot in general estimate any singular terms which compose ℒ as in (<ref>).
Nevertheless, the estimate involving the x-derivatives
y^αD_x_ix_j u_L^p_m≤ C ℒ u_L^p_m
and, by difference, also ℒ u-y^α(QD^2_x u)_L^p_m≤ Cℒ u_L^p_m,
always hold for every u ∈ D(ℒ). This follows
since, by Corollary <ref>, the similar statement holds for ℒ̃ in L^p_m-s_1p and, by Propositions <ref> and <ref>,
T_-s_1,0^-1( y^α D_x_ix_j) T_-s_1,0=y^αD_x_ix_j.
The following corollary explains why we use the term Dirichlet boundary conditions.
Let α∈ such that α<2 and s_1+ α^-<m+1/p<s_2+2-α and let 𝒜 be the operator
𝒜: =γ y^αD_yy+y^α-1 v_q ·∇-by^α-2, v_q:=(d-2s_1q,c).
(i) If D>0 then
D(ℒ)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu, 𝒜u ∈ L^p_m and lim_y→ 0y^s_2 u=0}
(ii) If D=0 then s_1=s_2 and
D(ℒ)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu, 𝒜u∈ L^p_m and lim_y→ 0y^s_2 u∈}
In both cases the graph norm and
u_L^p_m+y^αΔ_x u_L^p_m+𝒜u_L^p_m
are equivalent on D(ℒ).
Proof. Let us prove claim (i). By Theorem <ref> and Remark <ref> (iv) we have
D(ℒ)
=T_-s_1,0(W^2,p_w(α,α,m-s_1p))=T_-s_1,0(W^2,p_w/γ(α,α,m-s_1p))
where
w=(d,c)-2s_1(q,γ):=(d̃,c̃).
We apply Proposition <ref> with c̃/γ=c/γ-2s_1=1+2√(D)> 1 in place of c thus obtaining
W^2,p_w(α, α, m-s_1p)= {v ∈ W^2,p_loc(^N+1_+): v, y^αΔ_xv, 𝒜̃v ∈ L^p_m-s_1p and lim_y→ 0y^c̃/γ-1 v=0}
where
𝒜̃ :=y^αD_yy+y^α-1(w/γ)·∇.
The required claim then follows from the previous equalities after noticing that, by Proposition <ref>,
T_-s_1,0^-1( y^αΔ_x) T_-s_1,0=y^αΔ_x, T_-s_1,0^-1(𝒜)T_-s_1,0=γ𝒜̃.
and that for any u∈ y^-s_1W^2,p_w(α,α,m-s_1p), setting v=y^s_1u, one has, recalling (<ref>),
y^c̃/γ -1v=y^c̃/γ-1+s_1u=y^2√(D)+s_1u=y^s_2 u.
Claim (ii) follows similarly.
In the following corollary, in continuity with Remark <ref> (ii), we show that in a certain range of parameters one can improve the elliptic regularity of the operator ℒ by estimating, in addition to y^αD_x_ix_ju, further terms which compose ℒ. Specifically if m+1/p>s_1+1-α
we can estimate
y^αγ L_yu, y^α D_x_iy u, y^α-1∇_x u
whereas in the smaller range m+1/p>s_1+2-α, we reach the best elliptic regularity where, recalling the definition of W^2,p_ℛ(α,α,m) in Section <ref>, D(ℒ) consists of the functions for which every single term of ℒ belongs to L^p_m.
Let α∈ such that α<2.
(i) If both the condition s_1+ α^-<m+1/p<s_2+2-α and m+1/p>s_1+1-α hold then D(ℒ)
=y^-s_1W^2,p_𝒩(α,α,m-s_1p) and
y^α D_x_i x_j u_L^p_m + y^αγ L_yu_L^p_m+y^α D_x_iy u_L^p_m+y^α-1∇_x u_L^p_m≤ Cℒu_L^p_m
where y^αγ L_y=y^αγ D_yy +cy^α-1D_y-by^α-2.
(ii) If s_1+2-α<m+1/p<s_2+2-α then D(ℒ)=W^2,p_ℛ(α,α,m).
Proof. Following the notation of the proof of Theorem <ref>, we consider the operator ℒ̃ of (<ref>) on L^p_m-s_1p.
Claim (i) then follows since by Proposition <ref>,
W^2,p_w(α,α,m-s_1p)=W^2,p_𝒩(α,α,m-s_1p). Moreover if u∈ D(ℒ)=y^-s_1W^2,p_𝒩(α,α,m-s_1p), then v=y^s_1u satisfies by Proposition <ref> and by (<ref>) of Theorem <ref>
y^α-1∇_x v_L^p_m-s_1p≤ Cℒ̃v_L^p_m-s_1p
which under the isometry T_-s_1,0:L^p_m-s_1p→ L^p_m translates, recalling that T_-s_1,0^-1( y^α∇_x) T_-s_1,0=y^α∇_x and T_-s_1,0^-1( ℒ) T_-s_1,0=ℒ̃, into
y^α-1∇_x u_L^p_m≤ Cℒu_L^p_m.
Then by difference, using the equivalent norm of Corollary <ref>, the last estimate proves that the required inequality holds if u_L^p_m is added to the right hand side. This term can, however, be eliminated by performing the same scaling argument used in the proof of Theorem <ref>.
To prove (ii) we observe preliminarily that s_1+ 2-α>s_1+α^-, since α<2. By Theorem <ref> and Proposition <ref>
D(ℒ)=y^-s_1( W^2,p_w(α,α,m-s_1p))=W^2,p_ℛ(α,α,m)
under the assumption m-ps_1+1/p>2-α which is equivalent to s_1+ 2-α<m+1/p.
Observe that the condition m+1/p>s_1+1-α in the previous corollary is necessary for the integrability of the mixed derivatives of functions like y^-s_1u(x), u∈ C_c^∞(^N), near y=0.
The above results apply, when b=0, also to the operator
ℒ =y^α(AD^2)+y^α-1(v,∇).
When c>γ, so that s_1=0, s_2=c/γ-1 > 0, by (<ref>) the operator ℒ coincides with the one of Theorems <ref> and <ref> since D(ℒ)=W^2,p_v(α,α,m). Moreover by Proposition <ref> and Corollary <ref> one has
D(ℒ) ={u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu, y^αγ D_yyu+y^α-1v·∇ u ∈ L^p_m and lim_y→ 0y^c/γ-1 u=0}.
Therefore, recalling Corollary <ref> the oblique derivative and the Dirichlet boundary conditions are, in the range c>γ, equivalent since they lead to the same operator.
On the other hand, when c<γ, so that s_1=c/γ-1 ≠ 0, s_2=0, we can construct a realization of ℒ different from that of Theorems <ref> and <ref>.
Let c<γ and c/γ-1+ α-<m+1/p<2-α. Then ℒ=y^α(AD^2)+y^α-1(v,∇) with domain
D(ℒ)= {u ∈ W^2,p_loc(^N+1_+): u, y^αΔ_xu, y^αγ D_yyu+y^α-1v_q·∇ u ∈ L^p_m and lim_y→ 0 u=0},
where v_q:=(d-2s_1q,c)∈^N+1,
generates a bounded analytic semigroup in L^p_m which has maximal regularity.
Proof. This follows from Corollary <ref> (i), since s_1=c/γ-1 and s_2=0.
Note that the generation interval c/γ-1+ α^-<m+1/p<2-α under Dirichlet boundary conditions, is larger than α^-<m+1/p<c/γ+1-α given by Theorems <ref> and <ref> for Neumann and oblique boundary conditions.
Let us explain what happens in Theorem <ref> if we choose -k in (<ref>) as the second root s_2 instead of s_1. Proceeding similarly, one proves an identical result under the condition
s_2+α^-< m+1/p <s_1+2-α.
However this requires the assumption s_2<s_1+2-α which is not always satisfied. When (<ref>) holds this procedure leads to a different operator. For further details about different realizations of ℒ and about uniqueness questions we refer also to <cit.>.
§ CONSEQUENCES FOR MORE GENERAL OPERATORS
In this section we deduce generation and domain properties in L^p_m for the more general operators
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2D_yy+y^α_1+α_2/2-1(d,∇_x)+c y^α_2-1D_y-by^α_2-2
where possibly different powers y^α_1, y^α_2 appear in front respectively of the x and y derivatives. Here α_1,α_2∈ such that α_2<2, α_2-α_1<2. We keep the Assumptions <ref> and <ref> and we recall that ℒ can be written equivalently in the compact form
ℒ =y^α_1(QD^2_x)+2y^α_1+α_2/2(q,∇_xD_y)+γ y^α_2 L_y+y^α_1+α_2/2-1(d,∇_x)
where
L_y=D_yy+(c/γ) y^-1 D_y-(b/γ) y^-2
is the operator defined in (<ref>).
In contrast to the case α_1=α_2, some complications arise due to the different weights y^α_1, y^α_2 which appear in the x and y directions. This reflects into a distortion correction which depends on the coefficient
β_α:=(α_1-α_2)/2
which appears into the characterization of the domain of ℒ (see the definition of W^2,p_v(α_1,α_2,m) in Section <ref>). This complication does not appears, obviously, when α_1=α_2, where β_α=0, but also when d=0 (see Remark <ref>).
The isometry of Section <ref> transforms ℒ into a similar operator with α_1=α_2 and allows us to transfer the results of Sections <ref> and <ref> to this situation.
Indeed, we use the map defined in (<ref>) with k=0 and β=β_α namely
T_0,β_αu(x,y) :=|β_α+1|^1/pu(x,y^β_α+1), (x,y)∈^N+1_+, β_α=(α_1-α_2)/2.
We recall that, by Propositions <ref>, <ref> and Corollary <ref>, T_0,β_α maps isometrically L^p_m̃ onto L^p_m where
m̃=(m-β_α)/(β_α+1) and
transforms ℒ into
ℒ̃:= T_0,β_α^-1 ℒ T_0,β_α =y^α(ÃD^2)+y^α-1(ṽ,∇)- by^α-2
where
α=α_1/β_α+1, Ã:= (
[ Q q̃^t; q̃ γ̃ ]), q̃=(β_α+1)q, γ̃=(β_α+1)^2γ,
ṽ:=( d, c̃), c̃=(c+β_αγ)(β_α+1).
As in Definition <ref>, we write ℒ̃ equivalently as
ℒ̃ =y^α(QD^2_x)+2y^α(q̃,∇_xD_y)+γ̃y^αL̃_y+y^α-1(d,∇_x)
where
L̃_y=D_yy+(c̃/γ̃) y^-1D_y-(b/γ̃) y^-2.
By (<ref>), (<ref>) and observing that, by the assumption on α_1, α_2, β_α+1=α_1-α_2+2/2>0, the discriminant D̃ and the parameters s̃_1,2 of L̃_y defined as in (<ref>), (<ref>) are related to those of L_y by
D̃ =D/(β_α+1)^2, s̃_1,2=s_1,2/(β_α+1).
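These relations can be verified directly on the coefficients of L̃_y: by (<ref>) one has
c̃/γ̃=(c+β_αγ)(β_α+1)/((β_α+1)^2γ)=(c/γ+β_α)/(β_α+1), b/γ̃=(b/γ)/(β_α+1)^2,
hence c̃/γ̃-1=(c/γ-1)/(β_α+1) and
D̃=b/γ̃+((c̃/γ̃-1)/2)^2=(b/γ+((c/γ-1)/2)^2)/(β_α+1)^2=D/(β_α+1)^2,
from which s̃_1,2=(c̃/γ̃-1)/2∓√(D̃)=s_1,2/(β_α+1).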
The generation properties, the domain description and the maximal regularity for the operator in (<ref>) can be then deduced by the same properties for the operator studied in the previous sections when α_1=α_2.
We start by the case b=0 with Neumann and oblique boundary condition. We recall that the Sobolev spaces W^2,p_v(α_1,α_2,m) and W^2,p_𝒩(α_1,α_2,m), as well as their boundary conditions, are introduced and analysed in details in Sections <ref> and <ref> to which we refer.
Let v=(d,c)∈^N+1 with d=0 if c +β_αγ=0, and let α_1,α_2∈ such that α_2<2, α_2-α_1<2. If
α_1^- <m+1/p<c/γ+1-α_2
then the operator
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2D_yy+y^α_1+α_2/2-1(d,∇_x)+c y^α_2-1D_y
endowed with domain
W^2,p_w(α_1,α_2,m), w=(d,c+β_αγ)
generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover the set 𝒞_w defined in (<ref>) is a core for ℒ.
Proof. According to the discussion above the isometry
T_0,β_α:L^p_m̃→ L^p_m, β_α=(α_1-α_2)/2, m̃=(m-β_α)/(β_α+1)
transforms ℒ into
ℒ̃=y^α(ÃD^2)+y^α-1(ṽ,∇)
where Ã, ṽ are defined in (<ref>) and ℒ̃ acts on L^p_m̃.
Observe now that the assumptions on the parameters translates into α<2 and α^- < m̃ +1/p < c̃/γ̃+1-α. The statement on generation and maximal regularity is therefore a translation to ℒ and in L^p_m of the results of Theorem <ref> for ℒ̃ in L^p_m̃.
Concerning the domain, we have
D(ℒ)=T_0,β_α( W^2,p_ṽ (α, α, m̃))
which, by Proposition <ref>, coincides with W^2,p_ w(α_1,α_2,m).
As in the proof of Theorem <ref>, when c +β_αγ≠ 0 we can transform ℒ into a similar operator with d=0 and Neumann boundary condition.
Indeed, we use the the isometry (<ref>) with ω=-d/(c+γβ_α)(β_α+1), namely
S_β_α,ω u(x,y) :=u(x-ω y^β_α+1,y), (x,y)∈_+^N+1.
Then by Corollary <ref> and by Proposition <ref> one has
S_β_α,ω^-1(ℒ)S_β_α,ω =ℒ̃, D(ℒ̃)=W^2,p_𝒩 (α_1, α_2, m )
where
ℒ̃=y^α_1(Q̃D^2_x)+2y^α_1+α_2/2(q̃,∇_xD_y)+γ y^α_2D_yy+c y^α_2-1D_y
and Q̃=Q-2/(c+γβ_α) q⊗ d+γ/(c+γβ_α)^2 d⊗ d, q̃=q-γ/(c+γβ_α) d.
We point out, without stating them explicitly, that analogous results as those in Corollaries <ref>, <ref> apply also to this case.
We finally add the potential term and study ℒ under Dirichlet boundary conditions.
Let α_2<2, α_2-α_1<2 and
s_1+ α_1^-<m+1/p<s_2+2-α_2.
Then the operator
ℒ=y^α_1(QD^2_x)+2y^α_1+α_2/2q·∇_xD_y+γ y^α_2D_yy+y^α_1+α_2/2-1(d,∇_x)+c y^α_2-1D_y-by^α_2-2
generates a bounded analytic semigroup in L^p_m which has maximal regularity. Moreover,
D(ℒ)
=y^-s_1W^2,p_w(α_1,α_2,m-s_1p), w=(d,c+β_αγ)-2s_1(q,γ).
Proof. As in the proof of the previous theorem, the isometry
T_0,β_α:L^p_m̃→ L^p_m, β_α=(α_1-α_2)/2, m̃=(m-β_α)/(β_α+1)
transforms ℒ into
ℒ̃ =y^α(ÃD^2)+y^α-1(ṽ,∇)- by^α-2
=y^α(QD^2_x)+2y^α(q̃,∇_xD_y)+γ̃y^αL̃_y+y^α-1(d,∇_x)
where Ã, ṽ are defined in (<ref>) and ℒ̃ acts on L^p_m̃. Moreover the parameters s̃_1,2 of L̃_y satisfies
s̃_1,2=s_1,2/(β_α+1).
Observe now that the hypotheses on the parameters translates into α<2 and
s̃_̃1̃+ α_1^-<m̃+1/p<s̃_̃2̃+2-α_2.
The statement on generation and maximal regularity is therefore a translation to ℒ and in L^p_m of the results of Theorem <ref> for ℒ̃ in L^p_m̃.
Concerning the domain we have
D(ℒ) =T_0,β_α(D(ℒ̃))=T_0,β_α(y^-s̃_̃1̃ W^2,p_w̃ (α, α, m̃-s̃_̃1̃p ))
where
w̃=ṽ-2s̃_̃1̃(q̃,γ̃)=(d,(c+β_αγ)(β_α+1))-2s_1(q,γ(β_α+1)).
Then recalling (<ref>) and using property (ii)-1 of Proposition <ref> and (ii) of Proposition <ref> we get
D(ℒ) =y^-s_1T_0,β_α( W^2,p_w̃ (α, α, m̃-s̃_̃1̃p ))=y^-s_1W^2,p_ w(α_1,α_2,m-s_1p).
As in Corollary <ref> we can characterized D(ℒ) through a Dirichlet boundary condition.
Let α_2<2, α_2-α_1<2 such that
s_1+ α_1^-<m+1/p<s_2+2-α_2.
Let 𝒜 be the operator
𝒜: =γ y^α_2D_yy+y^α_1+α_2/2-1 (d-2s_1q) ·∇_x + c y^α_2-1D_y-by^α_2-2.
(i) If D>0 then
D(ℒ)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, 𝒜u ∈ L^p_m and lim_y→ 0y^s_2 u=0}
(ii) If D=0 then s_1=s_2 and
D(ℒ)= {u ∈ W^2,p_loc(^N+1_+): u, y^α_1Δ_xu, 𝒜u∈ L^p_m and lim_y→ 0y^s_2 u∈}
In both cases the graph norm and
u_L^p_m+y^αΔ_x u_L^p_m+𝒜u_L^p_m
are equivalent on D(ℒ).
Proof. Let us prove claim (i). By Theorem <ref> we have
D(ℒ)
=T_-s_1,0(W^2,p_w(α_1,α_2,m-s_1p))
where
w=(d,c+β_αγ)-2s_1(q,γ):=(w_x,w_N+1).
Then we proceed as in the proof of Corollary <ref>: we apply Proposition <ref> with μ=c/γ-2s_1=1+2√(D)> 1, obtaining
W^2,p_w(α_1, α_2, m-s_1p)= {v ∈ W^2,p_loc(^N+1_+): v, y^α_1Δ_xv, 𝒜̃v ∈ L^p_m-s_1p and lim_y→ 0y^μ-1 v=0}
where
𝒜̃ :=y^α_2D_yy+y^α_1+α_2/2-1d̃·∇_x + μ y^α_2-1D_y, d̃=((μ+β_α)/w_N+1) w_x=(d-2s_1q)/γ.
The required claim then follows from the previous equalities after noticing that, by Proposition <ref> and Corollary <ref>,
T_-s_1,0^-1( y^α_1Δ_x) T_-s_1,0=y^α_1Δ_x, T_-s_1,0^-1(𝒜)T_-s_1,0=γ𝒜̃.
and that for any u∈ y^-s_1W^2,p_w(α_1,α_2,m-s_1p), setting v=y^s_1u, one has, recalling (<ref>),
y^μ -1v=y^μ-1+s_1u=y^s_2 u.
Claim (ii) follows similarly.
As in Corollary <ref>, in some range of parameters one has an improvement in the elliptic regularity of the operator.
Let α_2<2, α_2-α_1<2 and
s_1+ α_1^-<m+1/p<s_2+2-α_2.
(i) If both the condition s_1+ α_1^-<m+1/p<s_2+2-α_2 and m+1/p>s_1+1-(α_1+α_2)/2 hold, then D(ℒ)
=y^-s_1W^2,p_𝒩(α_1,α_2,m-s_1p) and
y^α_1 D_x_i x_j u_L^p_m + y^α_2γ L_yu_L^p_m+y^α_1+α_2/2 D_x_iy u_L^p_m+y^α_1+α_2/2-1∇_x u_L^p_m≤ Cℒu_L^p_m
where y^α_2γ L_y=y^α_2γ D_yy +cy^α_2-1D_y-by^α_2-2.
(ii) If s_1+2-α_2<m+1/p<s_2+2-α_2 then D(ℒ)=W^2,p_ℛ(α_1,α_2,m).
Proof. Identical to the proof of Corollary <ref>. For claim (ii) we also observe that s_1+ 2-α_2>s_1+α_1^-, since α_2<2, α_2-α_1<2.
When b=0, we remark, without stating it explicitly, that the results of Remark <ref> and Corollary <ref> also apply in this case. In particular if c>γ (so that s_1=0, s_2=c/γ-1), the operators of Theorems <ref> and <ref> coincide, showing that in this case the Dirichlet and the oblique derivative boundary conditions are equivalent. On the other hand if c<γ (so that s_1=c/γ-1 ≠ 0, s_2=0) we can construct a realization of ℒ different from that of Theorem <ref>.
|
http://arxiv.org/abs/2405.09071v1 | 20240515034935 | Data-driven discovery of drag-inducing elements on a rough surface through convolutional neural networks | [
"Heesoo Shin",
"Seyed Morteza Habibi Khorasani",
"Zhaoyu Shi",
"Jiasheng Yang",
"Sangseung Lee",
"Shervin Bagheri"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Data-driven discovery of drag-inducing elements on a rough surface through convolutional neural networks
Heesoo Shin, Seyed Morteza Habibi Khorasani, Zhaoyu Shi, Jiasheng Yang, Sangseung Lee, Shervin Bagheri
May 20, 2024
==============================================================================================
Understanding the influence of surface roughness on drag forces remains a significant challenge in fluid dynamics. This paper presents a convolutional neural network (CNN) that predicts drag solely by the topography of rough surfaces and is capable of discovering spatial patterns linked to drag-inducing structures. A CNN model was developed to analyze spatial information from the topography of a rough surface and predict the roughness function, Δ U^+, obtained from direct numerical simulation. This model enables the prediction of drag from rough surface data alone, which was not possible with previous methods owing to the large number of surface-derived parameters. Additionally, the retention of spatial information by the model enables the creation of a feature map that accentuates critical areas for drag prediction on rough surfaces. By interpreting the feature maps, we show that the developed CNN model is able to discover spatial patterns associated with drag distributions across rough surfaces, even without a direct training on drag distribution data. The analysis of the feature map indicates that, even without flow field information, the CNN model extracts the importance of the flow-directional slope and height of roughness elements as key factors in inducing pressure drag. This study demonstrates that CNN-based drag prediction is grounded in physical principles of fluid dynamics, underscoring the utility of CNNs in both predicting and understanding drag on rough surfaces.
§ INTRODUCTION
The interaction between surface roughness and fluid flow is critical, particularly in scenarios involving turbulent flows. In these flows, the roughness elements of the surface can influence the smallest eddies near the wall, often resulting in increased drag. As heightened drag impedes the optimal functioning of various systems, including turbines, vehicles, and pipelines, the precise prediction of turbulent drag on rough surfaces is crucial.
While the increased drag from roughness can be determined from direct numerical simulations (DNSs) or experiments (e.g. towing tanks), these methods are not sustainable for drag prediction <cit.>. Indeed, capturing the full nonlinear interaction between irregular roughness structures and turbulent flows may not be necessary for an accurate and reliable drag prediction.
The majority of prior studies, including those by <cit.>, <cit.>, <cit.>, and <cit.>, developed empirical relations that establish a correlation between drag and statistical surface parameters, such as skewness, effective slope and mean roughness height. Although these models provide a good fit to the empirical data for which they were developed, they often poorly predict the drag of surfaces of different roughness types.
The topographic data of rough surfaces, typically represented by a two-dimensional height map, could not be directly used in the empirical models owing to their large size. Instead, the surfaces had to be parameterized by statistical means, which does not capture all the spatial details of the surface topography.
In addition, statistical parameterization complicates the identification of structural patterns on rough surfaces relevant to drag and reduces the physical interpretability of the predictive model.
A second aspect is related to the modelling approach itself, since the accuracy of predictions depends significantly on the capacity of the models.
Recent developments have used artificial neural networks (ANNs), which are adept at managing complex, nonlinear problems. Studies by <cit.> and <cit.> have demonstrated the potential of fully connected networks (FCNs) in predicting drag on rough surfaces. Nonetheless, these models still depend on the statistical parameters of surface topography, which limits their ability to use the actual rough surface as an input. Consequently, this study aims to overcome these obstacles by directly employing surface topography for drag prediction using convolutional neural networks (CNNs), focusing not on predictive accuracy but on the methodology itself. Given their ability to recognize spatial patterns, CNNs have been applied in various fluid engineering research areas <cit.>.
In this study, we developed a CNN model, trained on a DNS dataset of turbulent flows over both isotropic and anisotropic rough surfaces, to predict the roughness function (Δ U^+). This function represents the difference in the mean velocity profiles between smooth- and rough-wall turbulence within the log layer <cit.>. We observed that the prediction of the roughness function is based on the physical mechanism by which drag is induced on rough surfaces. Although our CNN model was trained to predict the scalar quantity Δ U^+ without information about turbulent flow, it can produce feature maps that closely resemble the drag force distribution obtained through DNS. These three-dimensional matrix outputs, resulting from nonlinear operations on rough surface inputs, illustrate the correlation between surface topography and drag. By comparing these outputs with drag-force distribution maps obtained from DNS and examining different aspects of rough surface topography, we have ascertained that the CNN model predicts Δ U^+ by focusing on specific topographical features of surfaces that significantly influence the pressure drag.
The remainder of this paper is structured as follows: Section <ref> describes the dataset of rough surfaces acquired through DNS, and Section <ref> outlines the architecture of our CNN model. Section <ref> presents the training outcomes and analyzes the ability of the CNN to identify spatial patterns associated with drag distributions, along with its limitations. Section <ref> provides concluding observations and directions for future studies.
§ ROUGH SURFACE DATASET
This section describes the methodologies employed to create rough surface topographies and their corresponding DNS datasets, which are essential for training CNNs. It consists of two subsections: (1) the creation of rough surfaces, and (2) the computational specifics of DNS.
§.§ Generation of rough surface topographies
The rough surfaces created in this study are classified based on the topographic metrics of skewness (skw) and effective slope for both streamwise (ES_x) and spanwise (ES_z) orientations.
The definitions of these topographic indicators are as follows:
skw = 1/A_t∫_x,z (k - k_avg)^3 dA/k_rms^3,
ES_x = 1/A_t∫_x,z|∂ k/∂ x| dA,
ES_z = 1/A_t∫_x,z|∂ k/∂ z| dA.
Here, x, y, and z represent the streamwise, wall-normal, and spanwise directions, respectively. Moreover, k is the distribution of roughness height and A_t denotes the total roughness plan area. Finally, k_avg and k_rms are defined as A^-1_t∫_x,z k dA and √(A^-1_t∫_x,z (k - k_avg)^2 dA), respectively.
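For concreteness, these statistics are straightforward to evaluate on a discrete height map; the following minimal Python sketch (our own illustration, assuming a uniform grid with spacings dx, dz and periodic central differences, neither of which is prescribed above) replaces the integrals by plain averages over grid points.

import numpy as np

def surface_stats(k, dx, dz):
    # k[z, x]: discrete roughness height map on a uniform, periodic grid
    k_avg = k.mean()
    k_rms = np.sqrt(((k - k_avg) ** 2).mean())
    skw = ((k - k_avg) ** 3).mean() / k_rms ** 3
    # central differences with periodic wrap-around in x and z
    dkdx = (np.roll(k, -1, axis=1) - np.roll(k, 1, axis=1)) / (2.0 * dx)
    dkdz = (np.roll(k, -1, axis=0) - np.roll(k, 1, axis=0)) / (2.0 * dz)
    es_x = np.abs(dkdx).mean()
    es_z = np.abs(dkdz).mean()
    return skw, es_x, es_z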
First, isotropic rough surfaces were generated featuring approximately equal values of ES_x and ES_z. Three categories of isotropic surfaces were generated: (i) Gaussian (zero-skw), (ii) positive-skw, and (iii) negative-skw rough surfaces. These surfaces are distinguished based on their skw value, which reflects the asymmetry in the distribution of k. Gaussian rough surfaces (S_Gauss) have a Gaussian distribution of k, leading to an evenly balanced distribution of peaks and valleys and a skw value of zero. Positive-skw rough surfaces (S_pos) consist of planes and peaks, resulting in positive skw values. In contrast, negative-skw rough surfaces (S_neg) are characterized by planes and pits and yield negative skw values. In figure <ref>, ES_x is the same as ES_z for all surfaces, with the S_Gauss sample indicating that skw equals zero, whereas the S_pos and S_neg samples differ. These isotropic surfaces were created using the Fourier-filtering algorithm and the code developed by <cit.>.
Second, we generated anisotropic rough surfaces, which – unlike isotropic surfaces – exhibit directionality, leading to differences between ES_x and ES_z.
To create these surfaces, we employed the multiscale anisotropic rough surface algorithm developed by <cit.>. We produced (i) ES_x-dominant anisotropic rough surfaces (S_ES_x) and (ii) ES_z-dominant anisotropic rough surfaces (S_ES_z). In figure <ref>, the S_ES_x sample shows larger ES_x values compared with ES_z, leading to wave-like patterns in the streamwise direction. Conversely, the S_ES_z sample has larger ES_z values than ES_x, resulting in wave-like patterns in the spanwise direction. The skewness (skw) of both surfaces is zero.
Each surface type consists of 135 surfaces, which were doubled through the augmentation method described in Appendix <ref>, yielding 270 surfaces. In addition, we added 162 hydrodynamically smooth surfaces, as detailed in Appendix <ref>. The surfaces were subsequently divided into datasets: 60% for training, 20% for validation, and 20% for testing, as detailed in Appendix <ref>.
§.§ Direct numerical simulations
We developed a dataset of rough-wall turbulence in both the transitionally and fully rough regimes by solving the incompressible Navier-Stokes equations, as follows:
∇·𝐮 = 0
∂𝐮/∂ t + ∇· (𝐮𝐮) = -1/ρ∇ p + ν∇^2 𝐮 - 1/ρP_x 𝐞_𝐱 + 𝐟_𝐈𝐁𝐌
where 𝐮 = (u,v,w)^⊺ is the velocity vector, P_x is the mean pressure gradient, added as a constant source term to drive the flow in the channel, and 𝐞_𝐱 is the streamwise unit vector. p denotes the pressure fluctuations, ρ the density (set to 1 in this study), ν the kinematic viscosity, and 𝐟_𝐈𝐁𝐌 the body-force term introduced by the immersed boundary method (IBM) to enforce the no-slip and no-penetration conditions on the rough surfaces <cit.>.
For solving these equations, we used the open-source GPU-accelerated solver CaNS <cit.>. This solver is spatially second-order accurate and employs a fast Poisson solver, and temporally integrates the Navier-Stokes equations using a three-step Runge–Kutta scheme as part of a fractional-step algorithm <cit.>. We adopted a minimal channel approach to minimize computational expenses while ensuring accurate results for Δ U^+ <cit.>.
The simulations in the minimal-channel rough-wall DNS were conducted at a friction Reynolds number Re_τ=u_τδ/ν=500, where δ represents the channel half-height, and u_τ represents the friction velocity. Periodic boundary conditions were used along the x- and z-directions, with a Dirichlet boundary condition along the y-direction.
The domain dimensions L_x, L_z, and L_y were set to 2.4δ, 2.0δ, and 0.8δ, respectively.
The numbers of grid points in the x- (N_x) and z- (N_z) directions were fixed at 302 and 102, respectively, with the grid spacings in the x- and z-directions in the viscous scale (Δ x^+ and Δ z^+) being 4.192 and 4.137, respectively. The superscript + indicates normalization by the viscous scale δ_ν = ν/u_τ. In the y-direction, the grid was stretched using the hyperbolic tangent function with a minimum y^+ value ≈ 0.5. These grid sizes are confirmed by the grid convergence tests described in Appendix <ref> to ensure sufficient accuracy.
The DNS results were used to obtain Δ U^+, following the methodology described by <cit.>.
The logarithmic mean velocity profile, U^+, is expressed as:
U^+_R = 1/κln(y^+) + A - Δ U^+ = U^+_S - Δ U^+,
where κ≈ 0.4 is the von Kármán constant and A ≈ 5.0 is the log-law intercept for a smooth surface. The subscripts S and R denote smooth and rough surface quantities, respectively. The function Δ U^+ depends on both the roughness topography and the roughness size, k^+. The latter is defined as:
k^+ = k/δ_ν = k u_τ/ν.
We determine Δ U^+=U^+_S - U^+_R at a designated reference point of y^+ = 200, as depicted in figure <ref>, where Δ U^+ attains a constant value.
A positive Δ U^+ indicates a momentum loss attributable to surface roughness, whereas a negative value indicates a momentum gain. Thus, Δ U^+ serves as an indicator for drag resulting from surface roughness. Accordingly, our CNN model was trained to predict Δ U^+.
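Operationally, Δ U^+ can be read off from the two mean velocity profiles by interpolating both to the reference height; a minimal sketch (ours, with hypothetical array names, following the sign convention of equation (<ref>)):

import numpy as np

def roughness_function(y_plus, u_plus_smooth, u_plus_rough, y_ref=200.0):
    # interpolate both mean velocity profiles to y+ = y_ref in the log layer
    u_s = np.interp(y_ref, y_plus, u_plus_smooth)
    u_r = np.interp(y_ref, y_plus, u_plus_rough)
    # positive value indicates a momentum loss attributable to roughness
    return u_s - u_r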
Using DNS, we investigated the relationship between the topographical characteristics of rough surfaces and Δ U^+ across different surface types.
Figure <ref>(a) shows the distribution of each surface as a function of ES_x and ES_z. It is clear that the isotropic surfaces (S_Gauss, S_pos, S_neg) discussed earlier are distributed on the line ES_x = ES_z.
Figure <ref>(b) shows the distribution of Δ U^+ relative to the ES_x values of the surfaces, revealing a positive correlation between an increase in ES_x and a corresponding rise in Δ U^+. A notable exception is S_neg.
The mechanism of drag generation on the S_neg surface is distinct from that on other surfaces; consequently, we analyze the case of S_neg separately in section <ref>.
Figure <ref>(c) shows that ES_z is positively correlated with Δ U^+ for S_pos, S_Gauss, and S_ES_x, mirroring the trend observed between ES_x and Δ U^+. However, variations in ES_z do not significantly affect Δ U^+.
This is attributed to the fact that an increase in ES_z typically results in an increase in ES_x for S_Gauss, S_pos, and S_ES_x, but not for S_ES_z. In the case of S_neg, an increase in ES_z exerts a negligible influence.
Thus, these findings underscore that an enhancement in ES_x generally leads to an increase in Δ U^+. This relationship arises because ES_x equals twice the frontal solidity (λ_f), a measure reflecting the area exposed to pressure drag <cit.>.
Finally, figure <ref>(d) illustrates the distribution of Δ U^+ relative to the skw values of the surfaces. Notably, S_neg shows significantly lower Δ U^+, ranging between 1 and 2, in contrast to the typical range of 3–8 observed for most surfaces.
Our aim is not only to predict Δ U^+ but also to demonstrate that our model trained to predict Δ U^+ learns the dominant drag-inducing mechanisms. We extracted drag maps from DNS to provide a detailed visual representation of drag force distribution on the rough surface. Figure <ref> shows an example of a DNS-derived drag map, f_x, where f_x is the streamwise component of the wall-integrated mean IBM force,
(f_x, f_y, f_z)^⊺ = -1/T∫_0^H ∫_0^T 𝐟_IBM(x,y,z,t) dt dy.
The negative sign indicates that the forces act in the opposite direction of the flow.
An overall increase in the magnitude of f_x signifies a loss of streamwise momentum, correlating with an increase in Δ U^+. From figure <ref>, we note that the regions where the roughness contributes significantly to drag are elongated in the spanwise direction.
These DNS drag maps provide a means for both quantitative and qualitative evaluations in comparison with the feature maps produced by the CNN model. This comparison enhances our understanding of the effectiveness of the model and the physical phenomena it encapsulates. To assess the ability of the CNN to accurately reflect the physics across different rough surface types, we obtained DNS drag maps for three samples from each rough surface category, which were not used in training the model (S_Gauss,i, S_pos,i, S_neg,i, S_ES_x,i, and S_ES_z,i, where i=1,2,3).
§ CNN ARCHITECTURE
In contrast to FCNs used in previous studies <cit.>, CNNs can process high-dimensional data for training without omitting the spatial information contained within, owing to convolutional operations. In this study, we developed a deep neural network based on the CNN framework to preserve the spatial information of rough surfaces and directly utilize it as input.
Figure <ref> shows the detailed CNN model architecture used in this study. Regarding the structure of the model, this section will focus on two architectural elements introduced specifically to deal with rough surfaces: periodic boundary conditions and a parallel structure. For additional information on other structural features of the model, refer to Appendix <ref>. Moreover, the CNN model was trained using training and validation datasets, and its hyperparameters were finetuned through Bayesian optimization, as detailed in Appendix <ref>.
Because traditional zero-padding introduces zero values at the edges of the surface data, in conflict with the boundary conditions of the DNS, we used periodic boundary padding to preserve dimensionality during convolution operations and to mimic the periodic boundary conditions of the DNS along the x- and z-directions. This technique expands the input feature map along the x- and z-axes, thus maintaining the cyclic nature of the boundaries in alignment with the DNS. Figure <ref> shows a comparison between this padding approach and the traditional zero-padding method using an example.
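A minimal sketch of such padding (our own illustration; the paper does not specify the framework, so PyTorch's circular padding mode is assumed here):

import torch
import torch.nn.functional as F

def periodic_pad(x: torch.Tensor, pad: int) -> torch.Tensor:
    # x: (batch, channels, Nz, Nx) feature map; wrap the opposite edges
    # around in both spatial directions, mimicking the periodic boundary
    # conditions of the DNS in x and z
    return F.pad(x, (pad, pad, pad, pad), mode="circular")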
Given that rough surfaces consist of roughness elements of various scales, these elements need to be considered in predicting drag. In this context, the parallel structure of our CNN is designed to detect roughness elements at various scales. Applying the inception module introduced by <cit.>, our CNN model utilizes a range of kernel sizes from 3 × 3 to 11 × 11, corresponding to the grid sizes. This diversity facilitates the detection of surface features at multiple scales. As depicted in figure <ref>, toward the end of the CNN, feature maps from different kernels within the parallel structure were merged. This was followed by a convolution with a 1 × 1 kernel employing a single filter, a process that combines the features while maintaining the original input dimensions. Subsequently, a comprehensive CNN feature map was produced. This map was subject to global average pooling (GAP), generating a scalar value representative of Δ U^+.
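The ingredients above can be condensed into the following PyTorch sketch (ours; only the parallel kernels of sizes 3 × 3 to 11 × 11, periodic padding, the single-filter 1 × 1 merge, and GAP are taken from the text, while branch depth, channel counts, and activations are illustrative assumptions):

import torch
import torch.nn as nn

class ParallelRoughnessCNN(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # one convolutional branch per kernel size, each with circular padding
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, channels, k, padding=k // 2, padding_mode="circular"),
                nn.ReLU(),
            )
            for k in (3, 5, 7, 9, 11)
        ])
        # single-filter 1x1 convolution merges the branches into one feature map
        self.merge = nn.Conv2d(channels * 5, 1, kernel_size=1)

    def forward(self, k_map: torch.Tensor):
        # k_map: (batch, 1, Nz, Nx) height map of the rough surface
        feats = torch.cat([branch(k_map) for branch in self.branches], dim=1)
        feature_map = self.merge(feats)
        delta_u_plus = feature_map.mean(dim=(2, 3))  # global average pooling
        return delta_u_plus, feature_map

Returning the pre-GAP map alongside the scalar is what enables the feature-map analysis below: by construction, the predicted Δ U^+ is the spatial average of the interpretable map.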
§ EVALUATION OF PREDICTION PERFORMANCE AND PHYSICS LEARNABILITY
Next, we evaluate the trained CNN model from two perspectives: its accuracy in predicting Δ U^+, and its ability to capture the mechanisms of drag induced on rough surfaces. The latter entails generating feature maps that can be gauged against the drag force distributions of the DNS drag maps. As the topographical, CNN feature, and DNS drag maps depict distributions of k^+, Δ U^+, and f_x, respectively, we standardized each map using the following equation:
m̃ = m - μ/σ,
where m̃ represents the standardized form of a given map (m), which is a two-dimensional matrix, μ is the mean of m, and σ is the standard deviation of m.
§.§ Δ U^+ prediction performance of the CNN model
The evaluation of the predictive accuracy of the model for Δ U^+ is quantified using the mean absolute error (MAE) and the coefficient of determination (R^2). The MAE is defined as follows:
MAE = 1/N∑_i=1^N| y_i - ỹ_i |,
where N represents the number of samples in the test dataset, y_i is the actual Δ U^+, and ỹ_i is the Δ U^+ predicted by the CNN model. The R^2 metric is defined as
R^2 = 1 - ∑_i=1^N(y_i - ỹ_i)^2/∑_i=1^N(y_i - y)^2,
where y is the average of the actual Δ U^+.
Lower MAE values indicate an improved predictive accuracy. An R^2 value close to 1 signifies high precision, whereas a value close to 0 indicates lower reliability.
The average prediction accuracy across all the surface types was 0.108 in terms of the MAE and 0.996 in terms of R^2. Additionally, the prediction accuracy for each type of surface is detailed in table <ref>. In this table, we used the mean absolute percentage error (MAPE) to indicate the accuracy for each surface type. The MAPE is defined as
MAPE (%) = 1/N∑_i=1^N| y_i - ỹ_i |/y_i× 100.
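All three error measures reduce to a few array operations; a small sketch (ours) over arrays of actual and predicted Δ U^+ values:

import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_true - y_pred
    mae = np.abs(err).mean()
    r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    mape = 100.0 * np.abs(err / y_true).mean()
    return mae, r2, mape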
According to the table <ref>, all the surface types demonstrated a comparable prediction accuracy.
However, S_neg showed larger errors compared with the other types of surfaces. The reasons for the larger errors specifically in predicting drag on S_neg are discussed in Section <ref>. The ability of the CNN model to capture drag generation factors for various surface types and generate feature maps resembling their drag distribution is validated and analyzed in Sections <ref> to <ref>.
§.§ Assessment of physics learnability through feature map analysis
This section evaluates the ability of the CNN model to capture the main physics of drag inducement across four different surface types (S_Gauss, S_pos, S_ES_x, and S_ES_z). These surface types demonstrate comparable predictive performance (table <ref>). As DNS drag maps originate from solutions to the Navier–Stokes equations, a CNN feature map that closely resembles the DNS drag map suggests that the CNN model effectively captures the mechanisms of drag generation on rough surfaces. Therefore, we evaluated the similarity between the CNN feature maps and the DNS drag maps.
To evaluate the similarity between the CNN feature maps and the DNS drag maps, or between the CNN feature maps and the topographical maps, we calculated the root mean squared error (RMSE) and structural similarity index measure (SSIM). The RMSE is defined as
RMSE = √(1/N_x N_z∑_i=1^N_x∑_j=1^N_z (a_i,j - b_i,j)^2),
where a_i,j and b_i,j represent arbitrary maps.
These could be any two among a CNN map, a DNS drag map, or a topographical map.
The SSIM was originally a method for comparing the similarity between two images, devised by <cit.>. The method evaluates the similarity based on three components of the image: luminance (l), contrast (c), and structure (s). In this study, these components are interpreted as follows: (i) l represents regions of higher or lower map values, (ii) c denotes areas with significant variations in map values, and (iii) s evaluates the spatial arrangement of map values, corresponding to the organization of patterns across the map. The SSIM is defined as:
l(a_i,j, b_i,j) = (2μ_a_i,jμ_b_i,j + c_1)/(μ_a_i,j^2 + μ_b_i,j^2 + c_1),
c(a_i,j, b_i,j) = (2σ_a_i,jσ_b_i,j + c_2)/(σ_a_i,j^2 + σ_b_i,j^2 + c_2),
s(a_i,j, b_i,j) = (σ_a_i,j b_i,j + c_3)/(σ_a_i,jσ_b_i,j + c_3),
where
μ_a_i,j = 1/N_x N_z∑_i=1^N_x∑_j=1^N_za_i,j,
μ_b_i,j = 1/N_x N_z∑_i=1^N_x∑_j=1^N_zb_i,j,
σ_a_i,j^2 = 1/N_x N_z∑_i=1^N_x∑_j=1^N_z (a_i,j - μ_a_i,j)^2,
σ_b_i,j^2 = 1/N_x N_z∑_i=1^N_x∑_j=1^N_z (b_i,j - μ_b_i,j)^2,
σ_a_i,j b_i,j = 1/N_x N_z(N_x N_z-1)∑_i=1^N_x∑_j=1^N_z (a_i,j - μ_a_i,j)(b_i,j - μ_b_i,j).
Here, c_1=(k_1 L)^2, c_2=(k_2 L)^2, and c_3 = c_2 / 2 with k_1 = 0.01 and k_2 = 0.03, and L is defined as the difference between the maximum and minimum values among a_i,j and b_i,j.
The overall SSIM index was computed as the product of these three components:
SSIM(a_i,j, b_i,j) = l(a_i,j, b_i,j)^α· c(a_i,j, b_i,j)^β· s(a_i,j, b_i,j)^γ,
where the weights α, β, and γ are all 1.
A SSIM value close to 1 indicates a high degree of similarity between the maps. This measure is crucial in evaluating how effectively the CNN model has captured and replicated the drag-inducing physics of the rough surfaces in the DNS.
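Since the three components are evaluated over the whole map rather than over sliding windows, the index reduces to a handful of global moments; a compact sketch (ours, using the plain-mean covariance in place of the normalization of equation (<ref>)):

import numpy as np

def global_ssim(a, b, k1=0.01, k2=0.03):
    # L: joint dynamic range of the two (standardized) maps
    L = max(a.max(), b.max()) - min(a.min(), b.min())
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    c3 = c2 / 2.0
    mu_a, mu_b = a.mean(), b.mean()
    sig_a, sig_b = a.std(), b.std()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    l = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)
    c = (2 * sig_a * sig_b + c2) / (sig_a ** 2 + sig_b ** 2 + c2)
    s = (cov + c3) / (sig_a * sig_b + c3)
    return l * c * s  # alpha = beta = gamma = 1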
Figure <ref> displays sample topographical, DNS drag, and CNN feature maps from S_Gauss, S_pos, S_ES_x, and S_ES_z. The RMSE and SSIM values between these maps are listed in table <ref>. According to the table, the SSIM between the CNN feature maps and the DNS drag maps is higher than that between the CNN feature maps and the topographical maps, consistent with the lower RMSE. This indicates that the CNN feature maps resemble the DNS drag maps more closely than they do the topographical maps. Building on this assessment, we investigated the high-intensity patterns in the CNN feature maps and DNS drag maps. Figure <ref> reveals elongated, spanwise high-intensity patterns in the CNN feature maps, similar to those in the DNS drag maps. The pattern is also similar to the area distribution of roughness elements facing the flow in the corresponding topographical map, particularly visible in figure <ref>(b). This provides significant evidence of the ability of the CNN to predict drag, considering that the mechanism of drag induction on rough surfaces and the effective slope in the x-direction predominantly influence the pressure drag. Additionally, it demonstrates that the CNN model learned to determine the flow direction without being provided with any flow-related information.
To analyze these patterns further, we visualized the DNS drag maps and CNN feature maps in three dimensions (see figure <ref>).
The DNS drag maps in figure <ref> reveal a distinct concentration of f̃_̃x̃ on positive slopes when viewed in the direction of the flow. This is particularly evident for roughness elements with positive slopes and heights exceeding the mean surface level, where the pressure drag is more pronounced than the viscous drag <cit.>. Conversely, the drag distribution observed in the opposite direction shows lower concentrations, attributable to the reduced presence of f̃_̃x̃. Similarly, the CNN feature maps in figure <ref> effectively reflect this distribution, focusing on the wall-normal structures similar to the DNS drag maps. The model highlights the force disparity between the flow and counterflow directions. This alignment underscores the capability of our model to capture the topographical features critical to predicting Δ U^+, demonstrating its ability to recognize spatial patterns in surface structures that predominantly induce pressure drag, even without information about the drag distribution or turbulent flow.
However, there are notable discrepancies in the drag force distributions on the planes of the CNN feature maps compared with those of the DNS drag maps. Generally, the plane areas of the CNN feature maps of figure <ref> display lower values compared with those of the DNS drag maps. These divergences highlight the limitations of the CNN model in accurately predicting drag distributions for components beyond pressure drag.
Additionally, the CNN model struggles to capture areas that are sheltered behind other roughness peaks and experience negligible local drag force, referred to as shadowed areas <cit.>. Figure <ref> shows the line graphs of the DNS drag map, CNN feature map, and topographical map at z^+ = 23.5, where the highest peak in the topographical map for S_pos,2 is located. Boxes (a), (b), and (c) in the figure illustrate a reduction in local drag in the shadowed areas in the DNS drag map, whereas this reduction is either absent or less pronounced in the CNN feature map. This suggests that the model has not completely accounted for the physics associated with the shadowed areas.
In addition, we analyzed the three maps (topographical, CNN feature, and DNS drag maps) of each surface sample in the wavenumber domain. This analysis aims to compare the scale of the dominant spatial patterns in each map. First, we calculated the spanwise-averaged premultiplied power spectral density (PSD), denoted by k_x^+ Φ. We assessed the similarity of k_x^+ Φ between the topographical, CNN feature, and DNS drag maps.
The similarity between two different k_x^+ Φ (i.e. k_x^+ Φ_1 and k_x^+ Φ_2) is quantified using the Euclidean distance (ED), calculated as follows:
ED = √(∑^N_x_i=0 [k_x^+ Φ_1, i - k_x^+ Φ_2, i ]^2).
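Both quantities can be sketched as below (ours; the spectral normalization is not stated in the text, so a plain row-wise periodogram averaged over z is assumed):

import numpy as np

def premultiplied_psd_x(m, dx):
    # spanwise-averaged premultiplied streamwise spectrum k_x * Phi of a map m[z, x]
    nz, nx = m.shape
    kx = 2.0 * np.pi * np.fft.rfftfreq(nx, d=dx)  # angular wavenumbers
    phi = (np.abs(np.fft.rfft(m, axis=1)) ** 2 / nx).mean(axis=0)
    return kx, kx * phi

def euclidean_distance(psd1, psd2):
    # ED between two premultiplied spectra sampled on the same kx grid
    return np.sqrt(((psd1 - psd2) ** 2).sum())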
Figure <ref> shows the k_x^+ Φ lines of the topographical maps, DNS drag maps, and CNN feature maps, and table <ref> presents the calculated ED between the CNN feature map and both the DNS drag map and the topographical map for all the sampled surfaces.
For the CNN feature map, k_x^+ Φ closely resembles that of the DNS drag map, indicating that the two maps share similar dominant spatial patterns.
Furthermore, we analyzed the similarities in the peaks of k_x^+ Φ. For example, in S_Gauss,1, the wavenumbers of the three primary peaks of k_x^+ Φ, identified by their highest values, are as follows: 0.084, 0.157, and 0.099 for the DNS drag map; 0.147, 0.199, and 0.094 for the CNN feature map; and 0.052, 0.037, and 0.079 for the topographical map.
Subsequently, the λ_x^+ values are 75.0, 40.0, and 63.158 for the DNS drag map; 42.857, 31.579, and 66.667 for the CNN feature map; and 120, 171.429, and 80.0 for the topographical map.
This analysis suggests a closer alignment in peak distribution between the DNS drag map and the CNN feature map compared with that between the CNN feature map and the topographical map. Thus, the critical patterns in both the CNN feature map and the DNS drag map exhibit similar scales.
Figure <ref> shows the five predominant λ_x^+ values in the k_x^+ Φ of the CNN feature map, which closely aligns with the scale of the drag force pattern identified in the DNS drag map. In the topographical map, these λ_x^+ values closely represent the distances between adjacent peaks and the size of streamwise peaks, both of which are crucial in influencing the pressure drag.
Additionally, we analyzed the two-dimensional premultiplied PSD (k_x^+ k_z^+ Φ), which are shown in figure <ref>. The similarities of the patterns in the samples' k_x^+ k_z^+ Φ were evaluated using the SSIM and RMSE, as presented in table <ref>. This table indicates that, for all the samples, the SSIM between the k_x^+ k_z^+ Φ of the DNS drag map and that of the CNN feature map is higher than that between the k_x^+ k_z^+ Φ of the CNN feature map and that of the topographical map, except for S_pos,3. In terms of the RMSE, the values are generally lower between the CNN feature maps and the DNS drag maps, except for S_pos,1, S_pos,2, and S_ES_x,2. Both the RMSE and SSIM metrics corroborate the findings from the k_x^+ Φ analysis, demonstrating a resemblance between the CNN feature maps and the DNS drag maps. Figure <ref> not only confirms the congruence of the CNN feature maps with the DNS drag maps in terms of a high-intensity distribution but also highlights significant similarities in this distribution with the topographical maps. For instance, the marked boxes in S_ES_x,1 of figure <ref> show common distribution patterns between the DNS drag and the CNN feature maps, and between the topographical and the CNN feature maps, also capturing distributions common to all the maps. This underscores the capability of the CNN model to extract dominant spatial features in surface topography and identify the essential scales of spatial patterns for predicting Δ U^+ over rough surfaces. Therefore, we will extract the parts that affect drag from the topographical map to determine which elements of the topographical map our CNN focuses on intensively to predict drag.
§.§ Topographical-characteristics-based analysis
We conducted an analysis using topographical characteristics maps, distinct from CNN feature maps, to identify specific topographical characteristics captured by the CNN model for predicting Δ U^+. These maps, comprising a range of topographical elements, were combined with distinct weights and subsequently visualized. This combination aimed to create a “composite map" resembling the CNN feature map, thereby elucidating the topographical characteristics essential to the predictive accuracy of the model.
The foundation of this analysis lies in the topography-derived maps, which encompass both the roughness height and the surface gradient map. The composite map comprises six base maps: T^t, T^b, T^m, G_x^t, G_x^m, and G_x^b. Here, T denotes the topographical map, and G_x, representing the gradient of T, is calculated as follows:
G_x = ∂ T/∂ x ≈ (T_fw - T_bk)/(2Δ x),
where T_fw and T_bk denote the forward and backward positions by one grid point on the input map, respectively. The superscripts (t, b, and m) for T and G_x differentiate the maps based on specific thresholds. For example, T^t represents the values in the top 25% of all T values, T^b identifies those in the bottom 25%, and T^m includes the values between the bottom 25% and the top 25%.
We combined these base maps to create a composite map. This process involved merging two base maps, resulting in nine additional combination maps: T^t G_x^t, T^t G_x^b, T^t G_x^m, T^m G_x^t, T^m G_x^b, T^m G_x^m, T^b G_x^t, T^b G_x^b, and T^b G_x^m. For instance, T^t G_x^t represents the integration of the top 25% of T values with the top 25% of gradient values from G_x. These maps were standardized using equation <ref>.
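The fifteen base maps can be assembled as in the sketch below (our own reading of the construction: percentile thresholds at 25% and 75%, zeros outside each band, and pointwise products for the combined maps, details that are not spelled out above):

import numpy as np

def base_maps(T, dx):
    # streamwise gradient by periodic central differences, as in the equation above
    Gx = (np.roll(T, -1, axis=1) - np.roll(T, 1, axis=1)) / (2.0 * dx)
    lo_T, hi_T = np.percentile(T, [25, 75])
    lo_G, hi_G = np.percentile(Gx, [25, 75])
    maps = {
        "T_t": np.where(T >= hi_T, T, 0.0),
        "T_m": np.where((T > lo_T) & (T < hi_T), T, 0.0),
        "T_b": np.where(T <= lo_T, T, 0.0),
        "Gx_t": np.where(Gx >= hi_G, Gx, 0.0),
        "Gx_m": np.where((Gx > lo_G) & (Gx < hi_G), Gx, 0.0),
        "Gx_b": np.where(Gx <= lo_G, Gx, 0.0),
    }
    # the nine combination maps, e.g. "T_t*Gx_t" for T^t G_x^t
    for tk in ("T_t", "T_m", "T_b"):
        for gk in ("Gx_t", "Gx_m", "Gx_b"):
            maps[tk + "*" + gk] = maps[tk] * maps[gk]
    return maps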
Subsequently, we determined the optimal weights, w_i (where i = 1, 2, ..., 15), that yield the combination most closely reflecting the CNN feature map. This was achieved using the following equation:
m_c = w_1 T̃^t + w_2 T̃^m + w_3 T̃^b + ... + w_15 T̃^b G̃_x^b,
where m_c represents the composite map created using the topographical characteristics maps to resemble the CNN feature map.
We used stochastic gradient descent for the optimization of the weights (w_1 to w_15). This method iteratively refines the weights using the least squares method, which measures the difference between m_c and the CNN feature maps. During each iteration, a subset of data is used to calculate the gradient of the loss function with respect to w, guiding the adjustments required to better align with the CNN feature map. The weight update follows the equation:
w_new = w_old - η·∇_w L(w_old),
where w_new and w_old are the updated and previous weight vectors, respectively, η is the learning rate, and ∇_w L(w_old) is the gradient of the loss function L with respect to w at the previous iteration.
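Because the composite depends linearly on the weights, the fit is an ordinary least-squares problem; a minimal gradient-descent sketch (ours; full-batch steps replace stochastic mini-batches for brevity, and the learning rate is illustrative):

import numpy as np

def fit_weights(base_maps, cnn_map, lr=1e-3, n_iter=5000):
    # base_maps: (15, Nz, Nx) standardized base maps; cnn_map: standardized target
    X = base_maps.reshape(base_maps.shape[0], -1)  # (15, Nz*Nx)
    y = cnn_map.ravel()
    w = np.zeros(X.shape[0])
    for _ in range(n_iter):
        resid = w @ X - y                      # current composite minus target
        w -= lr * 2.0 * (X @ resid) / y.size   # gradient of the mean squared error
    return w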
Figure <ref> shows the optimized weights for each topographical characteristics map. According to this figure, the two most prominent surface features across various surface types are T^t and T^tG_x^t. Given that high roughness elements and positive gradients in the streamwise direction significantly influence pressure drag, this analysis suggests that the CNN model predominantly focuses on the topographical elements of rough surfaces that induce pressure drag to predict Δ U^+. This is consistent with the analysis in the previous sections, where high values were distributed in the counterflow areas of roughness elements.
Accordingly, the results discussed in this section corroborate the analyses presented in the preceding section <ref>, demonstrating that our model primarily focuses on the peaks of rough surfaces and the positive gradients of these peaks in the direction of fluid flow. Therefore, given the close correlation between pressure drag for roughness heights larger than the viscous sublayer and the frontal area of rough surfaces, our model focuses on the pressure drag on rough surfaces. This analysis aligns with the findings of previous studies <cit.>, which have shown that pressure drag significantly contributes to the total drag in a fully rough regime. Additionally, the less precise distribution of CNN feature maps in planar areas of rough surfaces or regions below the mean height indicates the limitations of the model in addressing forces other than the pressure drag. Moreover, our model does not accurately represent the force weakening in the shadowed areas of rough surfaces. In the following section, we will explore drag prediction on surfaces with negative skewness, where the pressure drag is not the predominant factor, unlike on other surfaces analyzed in previous sections, to clarify the limitations of our model further.
§.§ Limitations in predicting negative-skw surfaces
Although our CNN model has proven effective in predicting Δ U^+ based on the mechanism of the drag inducement for S_Gauss, S_pos, S_ES_x, and S_ES_z, it encounters challenges in accurately predicting S_neg when compared with these surface types. As indicated in table <ref>, the MAPE for S_neg is almost twice as high as that for the other surfaces. More importantly, the similarity between the DNS drag maps and CNN feature maps for S_neg is significantly lower than that for the other surface types (see figure <ref>). Consequently, this section discusses these limitations and investigates the reasons behind the reduced prediction accuracy and limited physics learnability for S_neg.
In the DNS drag maps of figure <ref> and figure <ref>, the pits in the DNS map exhibit lower values than the planes, whereas in the CNN feature map, the pits show higher values than the planes. This occurs because the model fails to capture accurately the dominant physics of turbulent drag on S_neg surfaces, primarily focusing on regions with positive slopes and peaks while neglecting the plane and pit regions, similar to its performance in predicting the drag on S_Gauss, S_pos, S_ES_x, S_ES_z. Specifically, for S_neg, the average of the absolute SSIM between the sampled topographical maps and the CNN feature maps is 0.039, and that between the sampled DNS drag maps and the CNN feature maps is 0.058. These values are lower than the SSIM values between the CNN feature maps and the DNS drag maps compared with other surface types, as shown in table <ref>, where the average absolute SSIM for all types is 0.492.
According to the wavenumber domain analysis, S_neg shows a closer alignment of the CNN feature map with the topographical map than with the DNS drag map unlike other surface types (see figure <ref>). Specifically, the average ED between the sampled topographical map and the CNN feature map was 3.920, which is smaller than the ED between the sampled DNS drag map and the CNN feature map, which was 10.231. In other words, the scale of spatial patterns in the CNN feature map of S_neg more closely resembles that in the topographical map, diverging from the distributions observed in the DNS drag map. This congruence is also evident in the k_x^+ k_z^+ Φ (see figure <ref>). These results suggest that the CNN model did not accurately capture the spatial patterns for S_neg, unlike other previously analyzed surface types.
From an examination of previous studies, the inaccurate prediction of S_neg can be attributed to several factors. According to <cit.>, the ratio of the pressure drag force (F_p) to the total drag force (F_tot), which includes both viscous and pressure drag forces, is higher for surfaces with positive or zero-skw values than for those with negative-skw values. Specifically, on surfaces with negative-skw values, characterized by planar and pit features similar to our S_neg, the F_p/F_tot ratio is reported to be smaller than 0.3. This suggests a relatively minor role of F_p in F_tot for S_neg compared with the surfaces with zero or positive-skw values.
The study by <cit.> explored the influence of skewness on the drag of rough surfaces. The authors discovered that peak-dominant surfaces, or rough surfaces with skw values ranging from zero to positive, induce a higher drag compared with surfaces with negative-skw values. They proposed that the lower drag induction on negative-skw surfaces is due to flow skimming over surface depressions. Thus, the limited accuracy of the CNN model in predicting S_neg surfaces, coupled with its diminished ability to capture physics beyond the pressure drag distribution, highlights its limited focus on learning the pressure drag when predicting Δ U^+ and its reduced understanding of other factors influencing the total drag.
Furthermore, the complex flow phenomena inherent to S_neg, as described by <cit.>, contribute to the challenges of predicting and understanding the physics of S_neg. Surfaces with negative-skw values, featuring grooves and classified as either k-type or d-type based on their groove aspect ratio, exhibit complexities owing to recirculation vortices affecting the logarithmic layer offset. These complexities highlight the limitations of both the CNN model and the dataset in comprehensively understanding and learning the intricate flow dynamics of S_neg.
§ CONCLUSION
This study demonstrates the effectiveness of CNNs in predicting Δ U^+, a critical parameter for evaluating turbulent drag on rough surfaces induced by fluid flow. The key findings of this study include the following:
* Our developed CNN model predicts Δ U^+ using only the rough surface topography as input, eliminating the need for extracting surface parameters and manually selecting them.
* The feature map generated by our CNN model closely resembles the DNS drag maps, which indicate the distribution of drag generated by a flow over a rough surface. The model primarily identifies regions with high roughness elements. Additionally, it focuses on positive slopes in the wall-normal direction of roughness elements, which are linked with the frontal area of the rough surfaces and are strongly correlated with the pressure drag. Consequently, the CNN feature map exhibits elongated patterns in the spanwise direction, which are also observed in the DNS drag maps. Therefore, our model predicts drag based on the dominant drag-generation mechanism of rough surfaces with positive or zero skewness.
* Although our CNN model effectively predicts Δ U^+ for surfaces where the pressure drag is dominant, it exhibits diminished predictive accuracy for surfaces with negative skewness, where the pressure drag is not the primary source of drag. For these surfaces, the CNN feature map also shows reduced similarity to the DNS drag maps. This underscores the limitations of the CNN model in capturing the drag mechanisms of surfaces where pressure drag is not the predominant factor. Moreover, flow phenomena other than pressure drag, such as the effects of shadowed areas or recirculation vortices in negatively skewed surface pits, are not well captured.
Future studies should aim to improve the learnability of ANNs for the complex physics of drag on rough surfaces to enable more robust and accurate drag predictions across different surface types. This objective requires expanding the training dataset to encompass a wider variety of rough surface patterns, such as k- and d-type negative skewness surfaces and nonhomogeneous rough surfaces such as those introduced by <cit.>. Furthermore, considering that the rough surfaces in this study are composed solely of randomly arranged roughness elements, integrating data on surfaces with regular arrangements into the dataset, as suggested by the study of <cit.>, could enhance the diversity of the dataset. Additionally, developing a larger training dataset is essential for utilizing feature extraction through the “vision transformer", which has recently demonstrated significant capabilities in computer vision. Moreover, although this study primarily investigated surfaces increasing drag (Δ U^+ > 0), the inclusion of surfaces that decrease drag (Δ U^+ < 0) under certain conditions, such as riblets <cit.>, could offer valuable insights into the dynamics of rough surfaces. This strategy could assist in designing drag-minimizing surfaces by employing machine-learning methods.
Furthermore, based on this study, we aim to develop a prediction model capable of visualizing drag distribution and instantly predicting the flow drag generated on the surface by processing diverse images of rough surfaces.
§ ACKNOWLEDGEMENTS
This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea Government (MOTIE) (RS-2023-00243974, Graduate School of Digital-based Sustainable Energy Process Innovation Convergence), the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2022R1F1A1066547), and the Inha University Research Grant. SMHK, ZS and SB acknowledge the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish Energy Agency for funding the research.
§ DECLARATION OF INTERESTS
The authors report no conflict of interest.
§ VALIDATION OF THE DNS SOLVER
We employed the same resolution criteria as those reported in <cit.> and <cit.>. We validated the grid convergence of the DNS solver by halving and doubling the y grid numbers used in this study.
The grid numbers and resolutions utilized in this grid convergence test are summarized in table <ref>. Figure <ref> illustrates the mean velocity profile in the x-direction for each grid resolution. When GCS_fine is considered as the ground truth, the absolute percentage errors at y_ref are 0.969% for GCS_base and 0.808% for GCS_coarse.
§ ARCHITECTURAL FEATURES OF THE CNN MODEL
The architecture of our model was designed to include several important features. First, it utilizes a residual network (ResNet) framework, as proposed by <cit.>, which is extensively employed to address the vanishing gradient problem. Second, it adopts periodic boundary padding to reflect the periodic boundary condition used in DNS, ensuring consistency between the simulation and the model input. Third, the architecture uses GAP to create a feature map that effectively emphasizes critical areas on the rough surface that are important for predicting Δ U^+. Finally, it features a parallel structure with various kernel sizes to improve the ability of the model to detect features of rough surfaces at different scales.
The ResNet architecture incorporates skip connections, which create direct pathways to previous layers, effectively addressing the vanishing gradient problem. This problem involves the diminishing gradients of the loss function during the training of deep neural networks, leading to minimal parameter updates. The skip connections mitigate this issue by ensuring a consistent and effective flow of gradients throughout the network.
GAP, as outlined by <cit.>, is crucial for generating a two-dimensional feature map that highlights significant regions in the data, thereby aiding the decision-making processes of the CNN. By employing this pooling method, our CNN constructs a feature map that significantly contributes to the prediction of Δ U^+ from the input rough surface. This representation, termed a “CNN feature map" in this study, indicates significant areas on rough surfaces utilized by our CNN model for drag prediction.
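To make these ingredients concrete, the following minimal PyTorch sketch combines circular (periodic) padding, residual blocks, and GAP into a Δ U^+ regressor. The class names and layer sizes are illustrative assumptions rather than the exact architecture (the parallel multi-kernel branches are omitted for brevity); the defaults n_blocks = 3 and n_filters = 48 follow the BO result reported in the hyperparameter appendix.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with circular padding, mirroring the periodic DNS boundaries."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv2d(channels, channels, kernel_size,
                               padding=pad, padding_mode="circular")
        self.conv2 = nn.Conv2d(channels, channels, kernel_size,
                               padding=pad, padding_mode="circular")
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return self.act(out + x)  # skip connection mitigates vanishing gradients

class DragNet(nn.Module):
    """Topographical map (1 channel) -> scalar Delta U+ via ResNet blocks and GAP."""
    def __init__(self, n_blocks=3, n_filters=48):
        super().__init__()
        self.stem = nn.Conv2d(1, n_filters, 3, padding=1, padding_mode="circular")
        self.blocks = nn.Sequential(*[ResBlock(n_filters) for _ in range(n_blocks)])
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.head = nn.Linear(n_filters, 1)

    def forward(self, x):
        f = self.blocks(self.stem(x))        # feature maps underlying the "CNN feature map"
        return self.head(self.gap(f).flatten(1))

model = DragNet()
print(model(torch.randn(4, 1, 64, 64)).shape)  # torch.Size([4, 1])
```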
§ DATA PREPROCESSING
§.§ Data augmentation
Hydrodynamically smooth surfaces, denoted as S_smooth, were included to enhance the diversity of the dataset. As U_R^+ in equation <ref> equals U_S^+ for these surfaces, the resulting Δ U^+ is consistently zero. This allowed us to create a smooth surface dataset without using additional DNS.
Furthermore, the dataset was expanded by reflecting surfaces along the x-axis. This mirrored surface retains its original Δ U^+ value, making it suitable for augmentation. This strategy effectively doubles the dataset size without requiring additional computations. Figure <ref> illustrates the original and mirrored surfaces.
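A minimal NumPy sketch of this mirroring augmentation is given below; which array axis corresponds to the streamwise x-direction is an assumption of the sketch.

```python
import numpy as np

def augment_with_mirror(surfaces, delta_u_plus):
    """Double the dataset by reflecting each surface along the x-axis.

    The mirrored surface keeps its original Delta U+ label, so no extra DNS is needed.
    """
    mirrored = surfaces[:, ::-1, :]  # flip along the (assumed) streamwise axis
    surfaces_aug = np.concatenate([surfaces, mirrored], axis=0)
    labels_aug = np.concatenate([delta_u_plus, delta_u_plus], axis=0)
    return surfaces_aug, labels_aug

# toy example: 10 surfaces of 64x64 height values
s, y = augment_with_mirror(np.random.rand(10, 64, 64), np.random.rand(10))
print(s.shape, y.shape)  # (20, 64, 64) (20,)
```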
§.§ Data partitioning
The expanded dataset of topographical maps was divided into training, validation, and test sets in the proportions of 60%, 20%, and 20%, respectively, as illustrated in figure <ref>.
To optimize model training and minimize biases, the rough surfaces and their corresponding Δ U^+ values were randomly shuffled. This procedure ensured an even distribution of data throughout the CNN training phase. All the data were then standardized using equation <ref>, based on the mean and variance of the training dataset.
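A minimal sketch of the shuffle-split-standardize pipeline follows; whether standardization uses a single global mean and variance, as assumed here, or per-pixel statistics is an implementation detail not fixed by the text.

```python
import numpy as np

def split_and_standardize(X, y, seed=0):
    """Shuffle, split 60/20/20, and standardize with training-set statistics only."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr, n_va = int(0.6 * len(X)), int(0.8 * len(X))
    splits = {"train": idx[:n_tr], "val": idx[n_tr:n_va], "test": idx[n_va:]}
    mu = X[splits["train"]].mean()   # training statistics avoid data leakage
    sigma = X[splits["train"]].std()
    return {name: ((X[i] - mu) / sigma, y[i]) for name, i in splits.items()}
```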
§ OPTIMIZATION OF HYPERPARAMETERS
The optimization of hyperparameters is essential for improving the performance of ANNs. A thorough analysis of these parameters was performed to refine the CNN model. Bayesian optimization (BO) was employed for this purpose. BO uses a probabilistic model to predict the performance of the objective function, enabling an efficient exploration of the hyperparameter space. This method is particularly beneficial in scenarios involving high-dimensional optimization, where exhaustive searches are computationally infeasible. In this study, the hyperparameters optimized for the CNN model through BO are the number of ResNet blocks, N_b = 3, and the number of filters, N_f = 48.
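A hedged sketch of such a hyperparameter search using Optuna is shown below; Optuna's default Tree-structured Parzen estimator serves here as a practical stand-in for Gaussian-process-based BO, and the search ranges, trial budget, and the validation_loss helper are illustrative assumptions (DragNet refers to the architecture sketch above).

```python
import optuna

def objective(trial):
    n_blocks = trial.suggest_int("n_blocks", 1, 6)      # number of ResNet blocks N_b
    n_filters = trial.suggest_int("n_filters", 16, 64)  # number of filters N_f
    model = DragNet(n_blocks=n_blocks, n_filters=n_filters)
    return validation_loss(model)  # hypothetical: train and return the validation error

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)  # e.g. {"n_blocks": 3, "n_filters": 48}
```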
|
http://arxiv.org/abs/2405.09948v1 | 20240516095221 | Mitigating Text Toxicity with Counterfactual Generation | [
"Milan Bhan",
"Jean-Noel Vittaut",
"Nina Achache",
"Victor Legrand",
"Nicolas Chesneau",
"Annabelle Blangero",
"Juliette Murris",
"Marie-Jeanne Lesot"
] | cs.CL | [
"cs.CL"
] |
Toxicity mitigation consists in rephrasing text in order to remove offensive or harmful meaning. Neural natural language processing (NLP) models have been widely used to target and mitigate textual toxicity. However, existing methods fail to detoxify text while simultaneously preserving the initial non-toxic meaning. In this work, we propose to apply counterfactual generation methods from the eXplainable AI (XAI) field to target and mitigate textual toxicity. In particular, we perform text detoxification by applying local feature importance and counterfactual generation methods to a toxicity classifier distinguishing between toxic and non-toxic texts. We carry out text detoxification through counterfactual generation on three datasets and compare our approach to three competitors. Automatic and human evaluations show that recently developed NLP counterfactual generators can mitigate toxicity accurately while better preserving the meaning of the initial text as compared to classical detoxification methods. Finally, we take a step back from using automated detoxification tools, and discuss how to manage the polysemous nature of toxicity and the risk of malicious use of detoxification tools. This work is the first to bridge the gap between counterfactual generation and text detoxification and paves the way towards more practical application of XAI methods.
§ INTRODUCTION
Online textual toxicity can be considered as rude, aggressive and degrading attitudes exhibited on online platforms, ranging from harmful to hateful speech. Hate speech is defined as aggressive or offensive language against a specific group of people who share common characteristics, such as religion, race, gender, sexual orientation, sex or political affiliation <cit.>. Such toxic content has multiplied on the Internet in recent years <cit.>, raising concerns about its multi-faceted negative impact, such as the potential to threaten victims' psychological and physical well-being <cit.> or to be used as a medium for criminal actions <cit.>.
Toxic text data can also have a negative impact when used to train large language models (LLMs): recent advances in natural language processing (NLP) and the development of LLMs such as GPT-3 <cit.>, or LaMDA <cit.> have been made possible by utilizing vast quantities of textual data available on the Internet. These models have demonstrated a high capacity to generate plausible text, while raising several concerns about harmful content generation <cit.>, and bias amplification <cit.> coming from the training texts. Thus, LLMs' ability to generate toxic content may contribute to the rapid spread of such content online, as more and more content is synthetically generated by chatbots <cit.>.
To overcome the rapid development of online toxic content and curb its societal impact, automatic toxicity processing methods have been developed to detect and process harmful content on online communities and digital media platforms <cit.>. In particular, text detoxification (or toxicity mitigation) aims to rewrite toxic text in order to remove (or mitigate) toxicity while preserving the initial non-toxic meaning and maintaining plausibility. Several methods based on neural NLP models have been developed to perform text detoxification <cit.> by generating text under constraint, or by detecting toxic content and modifying it. While these methods succeed in significantly lowering textual toxicity, they generally fail to preserve the initial non-toxic content. Besides, automatic toxicity processing tools raise major ethical questions regarding the risks related to their robustness and the role of humans involved.
In this paper we propose to address toxicity targeting and mitigation by applying eXplainable AI (XAI) methods, more precisely Local Feature Importance (LFI) and counterfactual example generation <cit.>. The former aims at detecting important input features to explain a model prediction. Counterfactual example generation (see <cit.> for a survey) instead explains a model's prediction by identifying the minimal changes that enable flipping the outcome of a classifier.
The main contributions of this work are as follows:
* We show that LFI and counterfactual generation can be applied to a toxicity classifier respectively to target toxicity and to perform toxicity mitigation.
* We propose _tigtec, a toxicity mitigation method based on a recently developed counterfactual example generator: <cit.>.
* We conduct both automatic and human experiments to show that _tigtec reaches competitive performance in text detoxification.
* We discuss risks and opportunities related to automatic toxicity detection and mitigation tools and define recommendations.
As an illustration, Figure <ref> shows an initial toxic text and its detoxified versions respectively obtained from our proposed method, <cit.> and <cit.>. Counterfactual detoxification leads here to more sparse, plausible and context preserving text as compared to the other methods.
The paper is organized as follows: in Section Background and Related Work we first recall basic elements about toxicity mitigation and summarize the desired characteristics of detoxification methods. The second section shows how common XAI methods can be used to (1) target toxicity and (2) generate plausible and content-preserving detoxified texts. Experimental results discussed in the next section highlight that text detoxification through counterfactual generation achieves competitive results in terms of toxicity mitigation, content preservation and plausibility, as compared to state-of-the-art competitors. It also compares the ability of different LFI methods to target toxic content. Experimental evaluation is performed, both automatically and with a human-grounded protocol. Since the use of automatic toxicity processing raises several critical concerns, we finally discuss in the last section risks and opportunities around the use of toxicity mitigation methods. As a result, we discuss how to take full account of the polysemous nature of toxicity, manage the risk of malicious use of detoxification tools and favor human-in-the-loop processes.
§ BACKGROUND AND RELATED WORK
In this section we recall the context of automatic text detoxification, whose objective is to remove toxicity while preserving the initial non-toxic content. We present the task of text detoxification and discuss existing methods that aim to detoxify text using neural NLP models. Finally, we introduce the XAI principles used in the next section to perform toxicity detection and mitigation.
§.§ Automatic toxicity processing background
In the following, we use the terms text detoxification and toxicity mitigation interchangeably, as their definitions depend on that of toxicity.
§.§.§ Definition and objective
Textual toxicity can be defined in multiple ways <cit.> and can take various forms, such as rude, offensive or hateful speech, potentially causing online harm to isolated people or minority groups <cit.>. Automatic toxicity processing can essentially take two forms: detection and mitigation. Toxicity detection can be either based on prior knowledge (vocabulary, regex) or obtained from a fine tuned toxicity classifier f : 𝒳→𝒴 mapping an input text representation space 𝒳 to an output space 𝒴 to distinguish toxic and non-toxic texts <cit.>. Training language models to classify toxic and non-toxic texts is difficult, as it requires access to datasets labeled based on an implicit definition of toxicity derived from human annotators <cit.>. Toxicity mitigation consists in rewriting a toxic text while preserving the non-toxic meaning. This task is even more difficult because it requires disentangling toxic and non-toxic meanings to plausibly modify the former while preserving the latter.
§.§.§ Expected characteristics of text detoxification
Several desirable properties have been proposed to assess automatically text detoxification. We organize them into three categories. When ground truth detoxified text is unavailable, we use the previously introduced f classifier as an oracle to evaluate the toxicity level of the supposedly detoxified text.
Accuracy (ACC) assesses the extent to which the generated texts are accurately detoxified with respect to f. Accuracy can be measured by computing either the rate of successful changes or the average toxicity logit of the generated texts using f.
Proximity or content preservation (CP) evaluates how close two texts are. Textual similarity can be defined in two different ways. The first one consists of evaluating textual proximity based on word sequence co-occurrences, with metrics such as self-BLEU <cit.>, ROUGE <cit.>, METEOR <cit.> or the Levenshtein distance. The second way of evaluating textual proximity is to measure semantic similarity from word-level embeddings <cit.> or sentence-level embeddings <cit.>.
Finally, the detoxified text has to be plausible, or fluent. Plausibility is mostly evaluated automatically with the perplexity (PPL) score obtained from language models such as LSTM <cit.> or GPT-2 <cit.>.
§.§ Toxicity mitigation with neural NLP models
This section presents existing methods leveraging neural NLP models to classify and generate plausible text for text detoxification. Two categories are distinguished: Text Style Transfer (TST) and Masking and Reconstructing (M&R).
§.§.§ Text Style Transfer
TST (see <cit.> for a survey) aims to alter the stylistic attributes of an initial text while preserving its content that is unrelated to the target style. Text detoxification can be achieved through TST, where the initial style is characterized by the presence of toxicity, and the target style is defined by its absence. Style transfer is usually performed by generating text with neural NLP decoders guided by an NLP toxicity classifier. In general, TST methods vary based on the language model used for text generation and the method of text generation steering.
A first TST approach <cit.> uses an encoder-decoder architecture based on recurrent neural networks (RNN) to generate non-toxic text, using a toxicity convolutional neural network (CNN) classifier to steer the style transfer. Another method <cit.> fine-tunes a text-to-text T5 model <cit.> by using a denoising and cyclic auto-encoder loss. Finally, <cit.> uses a pre-trained T5-based paraphraser model and a class-conditionned language model to steer the text generation.
TST methods generally detoxify text accurately but struggle to preserve its non-toxic meaning <cit.>.
§.§.§ Masking and Reconstructing
Toxicity mitigation can be sequentially done by (1) targeting toxic content, (2) masking it, and (3) modifying it. Masking and Reconstructing (M&R) approaches generally enable performing text detoxification while preserving the non-toxic meaning. Once the toxic content is targeted, mask infilling is usually performed with a neural NLP encoder. In general, M&R methods differ in the way they target toxic content, and the neural NLP model used to perform mask infilling.
A first M&R approach <cit.> performs text detoxification by retrieving potentially harmful Part-Of-Speech (POS) based on a predefined vocabulary of toxic words, generating non-offensive POS substitution candidates, and editing the initial text through mask infilling with a RoBERTa NLP encoder for unacceptable candidates. <cit.> identifies tokens to be masked using a logistic bag-of-words classifier and performs mask infilling using a BERT NLP encoder. Finally, the most recent M&R method called <cit.> detects POS that could convey toxic meaning by comparing likelihoods from two BART NLP encoder-decoders fine-tuned on toxic and non-toxic content, respectively. The targeted potentially toxic content is then replaced by non-toxic content by mixing token probabilities from these two encoder-decoders and a third neutral model.
On average, M&R yields better results than TST in terms of content preservation, and performs equally regarding toxicity mitigation <cit.>.
§.§ XAI for NLP
In the following, we consider the neural NLP toxicity classifier f : 𝒳→𝒴 introduced in the previous section, and a text x = [t_1,...,t_|x|] ∈𝒳 represented as a sequence of tokens with f(x) = y. 𝒴 can either be a binary space that distinguishes toxic and non-toxic texts or a multi-class space that categorizes several levels of toxicity.
§.§.§ Local Feature Importance
A Local Feature Importance (LFI) function g : 𝒳→ℝ^|x| explains a prediction by a vector [z_1,...,z_|x|] where z_i is the contribution of the i-th token to the prediction. The higher the contribution, the more important the token to explain the prediction of the classifier f. Three types of LFI methods can be distinguished: perturbation-based such as <cit.>, gradient-based such as <cit.>, and attention-based such as self-attention in the case of a Transformer classifier <cit.>. Perturbation-based LFI perturbs and resamples feature values to compute feature importance, whereas gradient-based LFI is based on the classifier's backpropagated gradient activity.
§.§.§ Counterfactual explanations
Counterfactual explanations emphasize what should be different in an input instance to change the outcome of a classifier <cit.>. Counterfactual examples provide contrastive explanations by simulating alternative causes to assess if a specific event (here the predicted class) still happens or not <cit.>. The counterfactual example generation can be formalized as a constrained optimization problem. For a given classifier f and an instance of interest x, a counterfactual example x_cf must be close to x but predicted differently. It is defined as:
x_cf = argmin_z ∈𝒳 c(x,z) s.t. f(z) ≠ f(x)
where c : 𝒳×𝒳→ℝ is a cost function that aggregates several expected counterfactual characteristics, such as the textual distance. The counterfactual explanation is then the difference between the generated counterfactual example and the initial data point, x_cf - x. Many desirable characteristics for counterfactual explanations have been proposed <cit.>, such as sparsity, defined as the l_0 norm of x_cf - x, plausibility to make sure that the counterfactual example is not out-of-distribution <cit.> and actionability <cit.>. In the following section, we present a few methods for generating textual counterfactuals by comparing and relating them with the toxicity mitigation task.
§ WHEN XAI MEETS TEXT DETOXIFICATION
This section describes how XAI methods can be used to tackle text detoxification. We show that LFI methods can foster toxic content targeting and we illustrate how to apply counterfactual generation methods for performing text detoxification.
§.§ Targeting toxic POS with Local Feature Importance
Toxic POS targeting involves identifying the elements in a toxic text that induce its toxicity. While toxicity can be easily defined in part by a predefined lexical field, it can also take more complex forms, such as sarcasm, synonyms, or associations that are difficult to detect automatically.
Let f be a toxicity classifier and let x be a toxic text with f(x)=. Applying LFI to f highlights important tokens that explain why x has been classified as toxic by f. This way, toxic POS detection can be performed by applying LFI methods to f on texts classified as toxic.
Toxic POS detection with LFI does not require the definition of a predefined toxicity vocabulary; it relies only on a model trained to discriminate between toxic and non-toxic texts. Toxicity is thus detected in a data-driven fashion based on a fine-tuned neural NLP model. The use of LFI methods applied to f makes it possible to detect complex forms of toxicity, as recent neural NLP models such as BERT take the context into account to make their predictions. Toxic POS detection through LFI depends on the ability of the f classifier to accurately discriminate between toxic and non-toxic texts. A toxicity classifier with low accuracy might misclassify toxic and non-toxic texts, leading to non-toxic important tokens being highlighted by LFI explanations.
Figure <ref> illustrates the principle of toxic POS targeting with LFI applied to a toxicity classifier, where the tokens "f**k" and "b***h" are assessed as important to predict that the first text is toxic, whereas the tokens "ugliest" and "b***h" are highlighted for the second text. In the next section, we experimentally study the relevance of this approach by comparing perturbation-based, attention-based and gradient-based LFI methods.
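As a hedged illustration, the following sketch computes such attention-based token importances with the Hugging Face transformers library, averaging the CLS-token attention over the heads of the last layer (the aggregation also used in our experiments); the classifier checkpoint path is hypothetical.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical path to a fine-tuned BERT toxicity classifier.
name = "path/to/bert-toxicity-classifier"
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name, output_attentions=True)

def attention_importance(text):
    """Token importance as the CLS-token attention, averaged over the last-layer heads."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = clf(**inputs)
    last = out.attentions[-1][0]      # (heads, seq, seq) for the single input
    scores = last.mean(dim=0)[0]      # average over heads, keep the CLS query row
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
    return sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])
```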
§.§ Toxicity mitigation with counterfactual generation
We propose to perform toxicity mitigation through counterfactual generation with respect to the f toxicity classifier. We postulate that, following the notations of Equation <ref>, toxicity mitigation can be performed by setting x as a toxic instance of interest with f(x) =. Therefore, the objective is to find x_cf that minimizes function c such that f(x_cf) =. The lack of ground truth detoxified texts is then overcome through the use of f as an oracle to guide text detoxification while keeping the non-toxic content. This way, text detoxification through counterfactual generation consists in detecting texts classified as toxic by f and generating their related detoxified counterfactual examples.
Among the numerous expected attributes of counterfactual examples, we highlight those that are common with the expected characteristics of text mitigation introduced in the previous section:
* Proximity is identical to content preservation (CP) that can be measured in NLP with either sparsity (number of token changes between the initial text and its related counterfactual example) or semantic similarity (computed from sentence embedding similarity).
* Plausibility that can be treated in NLP as the linguistic fluency and measured with the perplexity (PPL).
Textual counterfactual generation methods can be of two types: Text editing heuristics and Counterfactual generation with large language models <cit.>. Text editing heuristics address textual counterfactual generation by slightly modifying the input text whose prediction is to be explained. Important tokens are targeted and modified with mask language models to switch the outcome of the classifier, making this approach very similar to the M&R way of detoxifying text. Text editing heuristics differ in the way they target important tokens and in the language model used to modify the initial text. Regarding the former, these methods mostly target tokens to be modified by applying LFI methods to the classifier to be explained. For example, <cit.> applies , <cit.> gradient-based approaches, <cit.> leverages rationalization methods, and <cit.> employs or . Next, the mask language models used to perform mask infilling are essentially T5, RoBERTa or BERT. This way, M&R methods and Text editing heuristics differ only in the way they target toxic POS. As mentioned in the previous section, the former detect tokens to change either from prior knowledge (vocabulary), from a by-design interpretable model (logistic model), or from expert-anti-expert disagreement, whereas Text editing heuristics apply LFI methods to a toxicity classifier. Finally, text is modified using the same language models. We propose to group these two kinds of approaches together under the name of "target-then-replace" methods. Figure <ref> shows the whole target-then-replace counterfactual detoxification process, where the detoxification finally consists in performing the following two token changes: f**k → heck and b***h → girl.
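The following minimal sketch illustrates a single target-then-replace step with off-the-shelf Hugging Face pipelines: a targeted token is masked, candidate infills are generated by a mask language model, and the candidate that most lowers the toxicity score is kept. The toxicity checkpoint path and its "toxic" label are assumptions, the masking is word-level for simplicity, and real methods (e.g., the tree search of our generator) are more elaborate.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")
toxicity = pipeline("text-classification", model="path/to/toxicity-classifier")  # hypothetical

def toxicity_score(text):
    out = toxicity(text)[0]
    # assumes the classifier exposes a "toxic" label
    return out["score"] if out["label"] == "toxic" else 1 - out["score"]

def replace_token(text, token):
    """One target-then-replace step: mask one targeted token, then pick the
    infill candidate that lowers the classifier's toxicity score the most."""
    masked = text.replace(token, fill.tokenizer.mask_token, 1)
    candidates = [c["sequence"] for c in fill(masked, top_k=10)]
    return min(candidates, key=toxicity_score)
```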
Counterfactual generation with large language models (CF-LLM) methods build counterfactual examples by leveraging pre-trained generative language models in the same way as TST text detoxification.
These methods differ in the language model used to generate text and the way the model is steered towards a specific objective. For example, <cit.> learns perturbations to steer text generation with BART <cit.> and fine-tunes GPT-2 to generate counterfactual examples. This way, TST and CF-LLM methods differ mainly in the way generative language models are steered towards a specific style or label. We group these two families of approaches together under the name of "steered text generation" methods. Figure <ref> summarizes the connection between TST text detoxification and
Counterfactual generation with large language models on the one hand, and M&R methods and Text editing heuristics on the other.
Finally, counterfactual generation and text detoxification can be (1) defined in the same way with the objective to find a small change to reach a target state, (2) categorized in two similar families of methods, namely steered text generation and target-then-replace and (3) evaluated with common metrics to assess accuracy, proximity and plausibility.
§ EXPERIMENTAL SETTINGS
This section presents the automatic and human experimental studies conducted across three datasets to perform toxicity mitigation through counterfactual generation.
§.§ Experimental protocol
Datasets
We perform toxicity mitigation on three toxicity datasets from <cit.>. Microagression.com (MAgr) is a public blog containing socially-biased interactions with offending quotes.
Social Bias Frames (SBF) is a corpus of offensive content from various online sources. We use a subset of SBF from the microaggressions subreddit where the texts have been labeled as harmful by annotators. DynaHate is a dataset of hate comments that are difficult to detect for a hate-speech classifier. Toxicity mitigation is run on texts initially classified as toxic in all three cases. MAgr contains 951 toxic texts, SBF 460, and DynaHate 500. The three datasets are available in the GitHub project of the original paper[<https://github.com/shallinan1/MarcoDetoxification/tree/main/datasets>].
Counterfactual generator and competitors
We instantiate the method proposed in the previous section by choosing as counterfactual generation method <cit.>. is a target-then-replace textual counterfactual generator that implements several LFI methods to target important tokens to be changed. iteratively masks and replaces tokens with a BERT mask language model following a tree search policy based on beam search. has shown great performance, leading to a competitive compromise in terms of success rate, sparsity and content preservation as compared to other counterfactual generation methods. We use the following settings to run : we first train a BERT classifier on a toxicity dataset from Kaggle[<https://www.kaggle.com/datasets/rounak02/imported-data>] to learn to distinguish toxic texts from non-toxic ones. The classifier accuracy after training is 94%. This way, counterfactual text detoxification is performed with a classifier that has been trained on a different dataset from the ones used for evaluation.
Toxicity mitigation is then performed by generating counterfactual examples starting from toxic texts to reach a non-toxic state. In the following, we call _tigtec our toxicity mitigation method based on counterfactual generation. _tigtec is run in three different versions, targeting toxic POS with three different LFI methods: , and . In particular, we aggregate self-attention as in the original paper by averaging the attention coefficients related to the CLS token over the attention heads in the last layer of the BERT f classifier. _tigtec is compared to three state-of-the-art text toxicity mitigation methods: <cit.>, and <cit.>. The code used to run all the methods is not provided for anonymity reasons and will be available upon acceptance.
Automatic evaluation
We use the 5 metrics previously introduced to assess toxicity mitigation.
In particular, the toxicity metrics are based on a pre-trained toxicity classifier. The library used to import the pre-trained toxicity classifier is and the model backbone is . This toxicity classifier is different from the one used to steer counterfactual toxicity mitigation. The success rate is computed with the accuracy (%ACC) from the classifier, assessing in a binary fashion whether the evaluated text is toxic or non-toxic. %ACC is defined as the number of non-toxic texts over the total number of evaluated texts, with respect to the pre-trained toxicity classifier. The average toxicity score (SCORE) is obtained from the last layer of the classifier before the layer.
The sparsity (%S) is computed with the normalized word-based Levenshtein distance. The content preservation (%CP) is computed with the cosine similarity between Sentence Transformer <cit.> embeddings to evaluate the semantic proximity between the initial toxic text and its detoxified version. The library used to import the Sentence Transformer is and the model backbone is .
Text plausibility is measured with the perplexity score <cit.> and compared to the perplexity of the original text (ΔPPL). This way, a ΔPPL score lower than 1 indicates that the text plausibility increases, whereas a ΔPPL higher than 1 means that the detoxified text is less plausible. This score is computed based on the exponential average cross-entropy loss of Gemma-2B <cit.>, a recently developed small generative language model outperforming GPT-2 while having approximately the same size. The library used to import the pre-trained model is and the backbone is . Due to the presence of outliers in the entropy values used to calculate perplexity, perplexity is aggregated using the median rather than the mean operator.
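For concreteness, the sketch below shows how the %CP and ΔPPL metrics can be computed; the sentence-embedding backbone and the use of GPT-2 as an openly available stand-in for Gemma-2B are assumptions for illustration, not the exact backbones of our experiments.

```python
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForCausalLM, AutoTokenizer

st = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # illustrative backbone

def content_preservation(original, detoxified):
    """%CP: cosine similarity between sentence embeddings."""
    e = st.encode([original, detoxified], convert_to_tensor=True)
    return util.cos_sim(e[0], e[1]).item()

lm_tok = AutoTokenizer.from_pretrained("gpt2")        # stand-in for Gemma-2B
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text):
    """exp of the average cross-entropy loss of a causal language model."""
    ids = lm_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(input_ids=ids, labels=ids).loss
    return torch.exp(loss).item()

def delta_ppl(original, detoxified):
    return perplexity(detoxified) / perplexity(original)  # < 1 means more plausible
```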
Human-grounded evaluation
In addition to automatic evaluation, we perform a human-grounded experiment to compare _tigtec to , and in terms of toxicity mitigation performance. It consists in asking 5 annotators to rank detoxified texts by toxicity level obtained by applying _tigtec, , and on 20 randomly selected texts from each dataset. The order of appearance of the toxicity mitigation methods and the dataset is randomized, so that there is no spatial bias in information processing. Annotators can rank texts at the same level if necessary.
Before running the experiment, annotators are given the same instructions. To make sure that they annotate based on the same common knowledge, we define the textual toxicity as "violent, aggressive or offensive language that may focus on a specific person or group of people sharing a common property. This common property can be gender, sexual orientation, ethnicity, age, religion or political affiliation.". Annotators all have a MSc degree in data analytics or machine learning and have a good knowledge of English.
§.§ Results
Global results
Table <ref> shows the experimental results obtained by running each method on the same datasets. In this table, _tigtec has been run using self-attention, as in the original paper, to target toxic POS. For each dataset, _tigtec leads to the most content-preserving texts, with the highest %CP and %S scores. On the other hand, _tigtec performs on average worse than and in terms of detoxification accuracy and score across all datasets. Still, the toxicity of texts generated by _tigtec is on average lower than that of over all text corpora. While and mitigate the most toxicity, the resulting detoxified texts differ significantly from the initial ones in terms of sparsity and semantic proximity. generates the least plausible text across all datasets and degrades text plausibility, whereas and improve it. In particular, produces the most plausible text. This result is linked to the fact that utilizes a paraphrase language model to generate text that is intended to be plausible. However, modifies the initial text the most in terms of sparsity.
The high perplexity level of the text generated by _tigtec can be partially attributed to the mask language model used for generating new text: it is significantly smaller than the encoder-decoder models used by and for text generation. The model used by _tigtec is a small 66M-parameter DistilBERT masked language model, whereas the encoder-decoder models used by and to generate text are a 139M-parameter BART and a 220M-parameter T5, respectively. Using a bigger mask language model such as BERT-base or BERT-large would improve the plausibility of the text generated by _tigtec. _tigtec still generates significantly more plausible text as compared to .
Human evaluation
Figure <ref> shows the results from the human-grounded experiment where human annotators rank methods' outputs by level of toxicity. achieves the lowest level of toxicity on DynaHate and MAgr, which is consistent with the automatic analysis. _tigtec and produce less toxic texts as compared to on the DynaHate dataset. Toxicity is overall at the same level across _tigtec, and on SBF.
This way, automatic and human evaluation indicate that applied to a toxicity classifier offers another possible compromise between toxicity, meaning preservation and text plausibility as compared to other state-of-the-art existing methods.
Ablation study
Table <ref> shows the experimental results obtained by running three different versions of _tigtec on the three datasets of interest. Each _tigtec instance is defined by the LFI method used to target toxic POS. Table <ref> shows that , and lead to similar results in terms of toxicity mitigation, content preservation and text plausibility. These results highlight that LFI methods of a different nature (perturbation, attention or gradient-based) can all yield good results.
Toxicity mitigation through counterfactual generation methods like has to be performed by choosing the appropriate LFI method to target toxicity based on the available model information. For example, is appropriate if no information (internal parameters, gradients) is available about the classifier f used to counterfactually mitigate toxicity, due to its model-agnostic nature. On the contrary, if f gradients are accessible, the use of is indicated since it is less computationally costly than . Finally, if all f parameters are accessible, using is appropriate because it is available at no cost.
Since gradually masks and replaces the tokens in the original toxic text based on LFI, we postulate that the sparser the detoxified texts an LFI method induces, the better its performance, as it targets the most discriminating tokens of the initial text. This way, among the LFI methods accurately mitigating toxicity, the ones yielding the sparsest texts give the most faithful explanations (i.e., target toxic POS the most accurately).
§ ON THE USE AND MISUSE OF TOXICITY DETECTION AND MITIGATION TOOLS
Online toxicity is a systemic problem with complex and multiple roots <cit.>. Automatic toxicity detection and mitigation do not in themselves address the factors causing online toxic content generation, but they offer technological means of adaptation to its rapid online development. The use of such tools raises critical ethical and technical considerations. In this section we identify some of the risks associated with the use of toxicity mitigation tools, and propose good practice rules to limit these risks.
§.§ Diversity of values and control of information
In this work, we have considered a usual hate speech definition as "aggressive or offensive language that can be focused on a specific group of people who share common property such as religion, race, gender, sexual orientation, sex or political affiliation". This definition is just one of many used by institutions and platforms to characterize hate speech <cit.>. In particular, hate speech characterization can focus either on violence and hate incentives or on the objective of directly attacking. The choice of a specific definition can have a direct impact on the way online toxicity is automatically processed with automatic toxicity processing tools.
In addition, the perceived toxicity of language can vary based on identity and beliefs <cit.>. For instance, conservative annotators can show a higher propensity to label African American English dialect as toxic while being less likely to annotate anti-Black comments as harmful <cit.>. Such annotator bias can be reflected in the datasets used to detect toxicity. Recommendation No. 1: When building databases to train ML models to detect toxicity, data annotators must be selected to represent this diversity of values. Besides, toxicity mitigation with NLP must be done by carefully selecting the toxicity dataset used to train a toxicity classifier, in order to make sure that the values implicitly encoded in the classifier match those expected.
Toxicity mitigation is less restrictive than full detection and removal of toxic messages, since the non-toxic part of the initial text is supposed to be retained. However, having a comment moderated by a toxicity mitigation algorithm can be perceived as a censorship and unjustified information control mechanism. Recommendation No. 2: The toxicity definitions chosen and the datasets used to train the toxicity detector have to be made transparently accessible. Making this information transparent and easily accessible is more likely to ensure a relationship of trust between the content generator and the platform on which the text is posted. The vision chosen to define toxicity could then be subject to deliberative questioning by the users.
§.§ Against malicious use
A toxicity mitigation tool can be misused in various ways. Since toxicity mitigation implies to learn how to detect toxic content and replace it, the same process can be carried out in the opposite way to poison text. This way, counterfactually generating toxicity would consist in starting from a non-toxic text and slightly modifying it to make it toxic using the same toxicity classifier used to perform toxicity mitigation. Such a way of using the tool would automatically turn non-toxic texts into more toxic ones while still being plausible. Recommendation No. 3: One way of preventing counterfactual toxic content generation is to fine-tune the neural models performing text modification on detoxified text corpora. A toxicity vocabulary can also be used to prevent the language model from generating text within it. However, these solutions cannot provide a complete guarantee against someone using the tool as a toxicity amplifier to poison text.
Another misuse of a toxicity mitigation tool is to use it as an adversarial attack generator to make a toxic text seem non-toxic. Adversarial attacks are small perturbations of data instances fooling a classifier with imperceptible changes <cit.>, which bring them formally close to counterfactuals. Since toxicity mitigation methods are based on the use of a toxicity classifier to steer the detoxification process, these methods are subject to adversarial attacks. In this manner, a dishonest user could hijack a detoxification tool to find the smallest modifications to the initial texts, leading a toxicity classifier to falsely assess that a text is correctly detoxified. Recommendation No. 4: A recent work <cit.> proposes a method to robustly detect textual adversarial attacks based on the computation of similarities between a given input embedding and the training distribution. Another way to prevent adversarial attacks is to make the toxicity classifier more robust through counterfactual data augmentation <cit.>.
Toxicity mitigation methods can also be reverse engineered to discover the rules used by an online platform to detect and modify harmful contents. This can lead to a change in the terms used, expressed in a seemingly neutral way, in order to continue publicly expressing hateful content online <cit.>. Toxicity classifiers have to be frequently updated and fine-tuned on updated datasets integrating changes in the vocabulary used to express toxic ideas.
§.§ The inaccuracy of automatic toxicity processing
State-of-the-art toxicity mitigation algorithms do not remove toxicity with perfect precision. Therefore, deploying a toxicity mitigation algorithm fully automatically is particularly risky since it could let harmful content spread. We propose to use this kind of tool as the first layer of hate content processing before integrating humans into the loop.
Behind any platform, a lot of content must be reviewed and online moderation is partly performed by human labor <cit.>. By being exposed to disturbing toxic content, human moderators can develop psychological and emotional distress <cit.>.
Toxicity mitigation tools have the potential to induce a socio-technical change, suggesting textual intervention and reducing exposure for content moderators. Recommendation No. 5: We suggest using hate content detectors and mitigation methods based on the level of toxicity of the text. Text with the highest level of toxicity could simply be deleted, as it would be unlikely to be modified without completely altering its original meaning. Intermediate toxicity levels could be handled by a toxicity mitigation algorithm in order to preserve the general meaning of the text while proposing a softened version. This way, content moderators' exposure to the most hateful content would be significantly limited, and the more ambiguous content would be preprocessed by the mitigation algorithm to propose a more acceptable first version, still requiring moderation.
§.§ Ecological impact
The use of NLP neural models to mitigate toxicity induces a non-negligible carbon consumption <cit.>. Most of the detoxification methods rely on neural NLP models for text generation under control. For instance, both and use three language models to perform toxicity mitigation.
These language models have several million (or billion) parameters, which often require the use of high-emission clusters to process large quantities of data. These methods have to be used with caution and alternatives to neural NLP have to be considered whenever possible.
§ DISCUSSION
In this work we showed that XAI methods can be applied to a toxicity classifier to target toxic POS with LFI and mitigate toxicity with counterfactual generation. Counterfactual detoxification with the counterfactual generator enables us to find a new compromise in terms of toxicity lowering, content preservation, and textual plausibility.
Counterfactual toxicity mitigation is highly dependent on the f toxicity classifier used to steer detoxification. If f is unreliable and only performs well on its training set, the risk of incorrectly indicating that the text has been detoxified is high. Therefore, the choice of f and the toxicity training data must be made with caution to avoid incorrect toxicity assessments during detoxification. The f classifier can be fine-tuned on a more specific dataset if the detoxification task is related to a precise kind of toxicity, such as racism or sexism.
Counterfactual detoxification has been tested by applying three different versions of with , , or . There is a wide range of other LFI methods that could be used to target important tokens to explain a toxicity prediction, such as <cit.> or <cit.>. Besides, other counterfactual generators such as <cit.>, <cit.> or <cit.> could be used to perform toxicity mitigation. We believe that these counterfactual generation methods could lead to other levels of compromise between toxicity lowering, text plausibility and content preservation.
This paper is the first to show the extent to which fields such as automatic toxicity processing and explainable AI, which have developed in parallel, actually share many similarities and can be mutually beneficial.
§ CONCLUSION
This paper formalized how LFI and counterfactual generation methods can be used to target textual toxic content and perform toxicity mitigation. _tigtec leads to competitive results, with state-of-the-art performance in terms of content preservation while accurately detoxifying text and generating plausible text. _tigtec is versatile since it can be used with various types of LFI methods (such as attention, gradient, and perturbation) to target toxic POS. This work is the first attempt to recognize the systemic similarity of these tasks and address detoxification through counterfactual generation. While counterfactual toxicity mitigation may yield competitive results, it also poses risks in terms of malicious use. In particular, counterfactual detoxifiers could be hijacked to generate adversarial attacks or toxic content from non toxic sources, which calls for the implementation of robust good practices.
§ ETHIC STATEMENT
Each participant in the human evaluation signed an informed consent form outlining the project's purpose and details, and the intended use of the data they would generate. The data were anonymized and processed only by the authors. The data produced are stored in a file in accordance with the General Data Protection Regulation (GDPR) regulations in force. Participation in the study was fully voluntary, and it was possible to stop performing the labeling tasks at any time. The consent form used is anonymized and presented in the Appendix, Figure <ref>. The authors of this paper do not represent any organization or institution engaged in data labeling activities. This study was conducted for research purposes only.
For anonymity reasons, we do not provide a link to the GitHub of the paper. The code will be available upon acceptance to facilitate reproduction and further research.
§ APPENDIX
§.§ Human-grounded protocol participant consent form
Each participant signed an informed consent form containing the project purpose and details and the intended use of the data they would generate. Figure <ref> shows an anonymized version of the consent form.
|
http://arxiv.org/abs/2405.08689v1 | 20240514151139 | Learning How to Dynamically Decouple | [
"Arefur Rahman",
"Daniel J. Egger",
"Christian Arenz"
] | quant-ph | [
"quant-ph"
] |
School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona 85287, USA
IBM Quantum, IBM Research Europe - Zurich, Rüschlikon 8803, Switzerland
carenz1@asu.edu
School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona 85287, USA
Current quantum computers suffer from noise that stems from interactions between the quantum system that constitutes the quantum device and its environment.
These interactions can be suppressed through dynamical decoupling to reduce computational errors.
However, the performance of dynamical decoupling depends on the type of the system-environment interactions that are present, which often lack an accurate model in quantum devices.
We show that the performance of dynamical decoupling can be improved by optimizing its rotational gates to tailor them to the quantum hardware.
We find that compared to canonical decoupling sequences, such as CPMG and XY4, the optimized dynamical decoupling sequences yield the best performance in suppressing
noise in superconducting qubits.
Our work thus enhances existing error suppression methods which helps increase circuit depth and result quality on noisy hardware.
Learning How to Dynamically Decouple
Christian Arenz
May 20, 2024
====================================
§ INTRODUCTION
Noise currently limits quantum computers from harnessing their full potential.
In the long term, quantum error correction is expected to overcome this issue <cit.>.
In the short term, noise mitigation and suppression techniques are critical to improve quantum device performance <cit.>.
Error mitigation is designed to reduce noise, typically, in expectation values <cit.>.
The computation is executed on the noisy quantum computer multiple times to either extrapolate to a zero-noise limit <cit.>, or to cancel the noise on average <cit.>.
By contrast, error suppression methods, such as dynamical decoupling <cit.> and pulse-efficient transpilation <cit.>, reduce the presence of noise directly in the quantum circuits.
In this work we focus on the well-established noise and error suppression technique dynamical decoupling (DD).
Inspired by nuclear magnetic resonance spectroscopy (NMR) <cit.>, the theory of DD was first developed by Lorenza Viola, Emanuel Knill, and Seth Lloyd in 1998 <cit.> as an open-loop control technique.
DD suppresses errors by decoupling the system from its environment through the application of a sequence of pulses which, in the ideal case, compose to the identity.
The utility of DD has been demonstrated in a wide range of quantum systems, such as coupled nuclear and electron spins <cit.>, trapped ions <cit.>, electron spins <cit.>, and superconducting qubits <cit.>. Since the introduction of the DD framework in the 1990s, DD has also become a viable method to suppress noise and errors in quantum computing <cit.>.
For example, DD can suppress crosstalk <cit.> and improve the performance of superconducting qubit based quantum devices <cit.> in general.
DD sequences can be designed from first principles <cit.> or with numerical simulations, leveraging tools such as genetic algorithms <cit.> and machine learning <cit.>.
In recent years, DD has become a major error suppression method for noisy superconducting quantum computers <cit.>.
Indeed, the method is easy to apply as a simple transpilation pass that inserts delays and pulses into a quantum circuit.
Furthermore, simple sequences such as X-X, where X is a π-rotation around the qubit's x-axis, already yield excellent results <cit.>.
More elaborate sequences, such as staggered X-X <cit.> and staggered XY4, improve, for instance, the execution of dynamic circuits by cancelling cross-talk <cit.>.
Crucially, the performance of a DD sequence depends on the interactions present in the quantum hardware.
In superconducting qubits <cit.>, a good model of these interactions is typically not known, a case familiar to optimal control <cit.> that can be overcome with closed-loop optimization <cit.>. Similarly, it is possible to tailor DD sequences to the quantum hardware at hand by learning them through genetic algorithms <cit.>. However, in hardware, DD pulses are not perfect, potentially introducing additional noise and errors through the controls that implement the DD pulses, thereby diminishing the quality of the designed DD sequence.
To overcome these limitations, in this work we tailor the DD sequences to the hardware and quantum circuits to execute.
This is achieved by optimizing the rotational angles of the gates in the DD sequence in a closed-loop with the quantum hardware.
A classical optimizer is fed the cost function value that is reconstructed from quantum samples and that is sensitive to the quality of the DD pulses, see Fig. <ref>.
This manuscript is structured as follows.
We introduce in Sec. <ref> two commonly employed DD sequences CPMG and XY4.
Next, in Sec. <ref> we develop the theoretical framework of how optimal parameters in DD sequences are found on quantum hardware, which we refer to as learning dynamical decoupling (LDD).
We demonstrate in Sec. <ref> the utility of LDD on IBM Quantum hardware by comparing the performance of LDD to CPMG and XY4 to suppress noise in two experiments.
We show that LDD outperforms CPMG and XY4 for suppressing noise present during mid-circuit measurements and noise resulting from increasing the depth of a quantum circuit.
We conclude in Sec. <ref>.
§ BACKGROUND: DYNAMICAL DECOUPLING
DD can in general suppress generic interactions through pulses that rotate around multiple axes <cit.>.
However, DD is most resource-efficient when tailored to the specific type of interactions at hand <cit.>.
Furthermore, the most effective DD sequence depends on the noise type present in the physical system.
The spin echo <cit.> in NMR can be seen as a DD experiment where a single Pauli X gate refocuses coherent errors. The “Carr–Purcell–Meiboom–Gill” (CPMG) DD sequence is an extension of the spin echo with two symmetric insertions of an X gate <cit.>.
The resulting pulse sequence is
CPMG≡τ/2 - X - τ - X - τ/2,
where τ is the duration of the free evolution.
Multiple CPMG sequences can be concatenated one after another, see Fig. <ref>(a).
While the CPMG sequence can suppress homogeneous dephasing along one axis, it cannot suppress noise stemming from generic system-environment interactions.
By contrast, the XY4 DD sequence <cit.>, defined by
XY4 ≡ Y - τ - X - τ - Y - τ - X - τ,
and shown in Fig. <ref>(b), is a universal DD sequence that can suppress generic system-environment interactions.
It employs π-rotations around the x and y-axis described by Pauli operators X and Y, respectively.
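To make the timing structure of these sequences concrete, the following sketch builds one CPMG and one XY4 block as Qiskit circuits; the delay duration and the single-qubit register are illustrative assumptions, not parameters taken from the experiments reported below.

```python
from qiskit import QuantumCircuit

def cpmg_block(tau: int) -> QuantumCircuit:
    """One CPMG block: tau/2 - X - tau - X - tau/2 (durations in dt units)."""
    qc = QuantumCircuit(1)
    qc.delay(tau // 2, 0)
    qc.x(0)
    qc.delay(tau, 0)
    qc.x(0)
    qc.delay(tau // 2, 0)
    return qc

def xy4_block(tau: int) -> QuantumCircuit:
    """One XY4 block: Y - tau - X - tau - Y - tau - X - tau."""
    qc = QuantumCircuit(1)
    for gate in (qc.y, qc.x, qc.y, qc.x):
        gate(0)
        qc.delay(tau, 0)
    return qc

# Concatenate two CPMG blocks on an idling qubit, as in Fig. 1(a).
idle = cpmg_block(160).compose(cpmg_block(160))
```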
However, the effectiveness of these DD sequences critically depends on the noise, i.e., the detrimental interactions that are present in the system, which is often challenging to infer in superconducting quantum devices <cit.>.
As such, the performance of a DD sequence can vary substantially across different device architectures.
§ LEARNING HOW TO SUPPRESS NOISE
We use tools from closed-loop optimal control <cit.> and optimization <cit.> to learn optimal DD sequences without precise knowledge of a noise model.
Tong et al. <cit.> demonstrate the usefulness of this approach by optimizing with a genetic algorithm the placement of DD gates on idling qubits in a quantum circuit to achieve, for instance, a higher success probability in the Bernstein-Vazirani algorithm compared to canonical DD sequences.
In their approach, they chose DD gates from the fixed set {I_±, X_±, Y_±, Z_±}, where I_+, X_+, Y_+, Z_+ are the identity and Pauli matrices and the minus subscript indicates an added phase of π.
DD gates themselves are prone to errors, such as errors in the rotation angles.
Furthermore, the optimal rotation axis of the gates in the DD sequence may depend on the type of noise in the system.
We address these issues by adopting a different optimization approach from Tong et al. <cit.>.
Instead of fixing a set of DD gates and optimizing over the DD sequence structure to find the best DD sequence <cit.>, we optimize the rotational parameters entering in the chosen DD sequence to improve performance. This approach is similar to finding noise resilient quantum circuits through machine learning <cit.>. After introducing the theory behind such a learning dynamical decoupling approach, we demonstrate on IBM Quantum hardware in two experiments that the optimized LDD sequences yield the best performance (compared to XY4 and CPMG) in suppressing errors.
§.§ Theory
We consider a quantum circuit described by the ideal quantum channels 𝒰_j applied in sequence,
ρ_ideal=∏_j𝒰_j(ρ_0),
to an initial state ρ_0=|ψ_0⟩⟨ψ_0|, which we assume is pure, to create a desired target state ρ_ideal=|ψ_ideal⟩⟨ψ_ideal|.
Here, we concern ourselves with noise described by a collection of typically unknown quantum channels ℳ_j, in which we include potential mid-circuit measurements, that act between the unitary channels 𝒰_j when qubits are idling.
The noisy quantum circuit then takes the form ∏_jℳ_j𝒰_j.
Each noise channel ℳ_j can describe unitary and non-unitary errors.
To suppress noise induced by ℳ_j we divide ℳ_j into N+1 channels ℳ_j,k so that ∏_k=1^N+1ℳ_j,k=ℳ_j, and insert a LDD sequence of the form shown in Fig. <ref>(c).
The LDD sequence is described by parameterized unitary quantum channels ℛ_x⃗(·)=R_x⃗(·)R_x⃗^† that are applied between the ℳ_k's where the LDD gates R_x⃗ are parameterized by x⃗∈ℝ^M.
For simplicity, we assume that the LDD parameters x⃗ are the same for each LDD gate R_x⃗.
If we include N/2 LDD gates between the noise channels ℳ_j,k, followed by N/2 applications of the corresponding inverse R_x⃗^† to ensure that the total LDD sequence composes to the identity in the absence of noise, as depicted in Fig. <ref>, the noise channel ℳ_j becomes
ℳ_j, x⃗:=ℳ_j,N+1∏_k=N/2^Nℛ_x⃗^†ℳ_j,k∏_k=1^N/2ℛ_x⃗ℳ_j,k.
The state resulting from the noisy circuit with LDD is then given by
ρ(x⃗)=∏_jℳ_j,x⃗𝒰_j(ρ_0).
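As a minimal numerical illustration of the sandwiched channel ℳ_j,x⃗ above, the sketch below interleaves N = 4 copies of a single-qubit dephasing channel (an assumed stand-in for the unknown ℳ_j,k) with the LDD rotations R_x⃗ and their inverses; channels act on density matrices as ρ ↦ ∑_i K_i ρ K_i^†, and the noise strength p is an arbitrary choice.

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)

def rz(a):  # e^{-i a/2 Z}
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def ry(a):  # e^{-i a/2 Y}
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def r_gate(theta, phi, lam):
    # R(theta, phi, lam) = e^{-i theta/2 Z} e^{-i phi/2 Y} e^{-i lam/2 Z}
    return rz(theta) @ ry(phi) @ rz(lam)

def dephase(rho, p=0.02):
    # Assumed noise slice M_{j,k}: phase flip with probability p.
    return (1 - p) * rho + p * Z @ rho @ Z

def ldd_noise_channel(rho, x, n=4):
    """M_{j,x}: n/2 noise slices each followed by R, then n/2 each
    followed by R^dagger, then the trailing slice M_{j,N+1}."""
    R = r_gate(*x)
    for _ in range(n // 2):
        rho = R @ dephase(rho) @ R.conj().T
    for _ in range(n // 2):
        rho = R.conj().T @ dephase(rho) @ R
    return dephase(rho)

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
print(ldd_noise_channel(rho0, x=(0.0, np.pi, 0.0)))  # pi pulses about y
```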
To optimize the parameters x⃗ we need a cost function J(x⃗) that is sensitive to the quality of the circuits in Eq. (<ref>).
For small circuits, a natural choice for a figure of merit or cost function is the fidelity error,
J(x⃗)=1-⟨ψ_ideal|ρ(x⃗)|ψ_ideal⟩,
where F=⟨ψ_ideal|ρ(x⃗)|ψ_ideal⟩ is the fidelity with respect to the ideal state.
However, state tomography scales exponentially with system size.
For large circuits, a scalable cost function can be built in multiple ways.
As done in Refs. <cit.>, we can invert the circuit with mirroring, in which each 𝒰_j^† is applied in reverse order such that ρ_ideal=ρ_0.
Alternatively, one can reduce the original quantum circuit to a Clifford circuit as done, for instance, in Refs. <cit.>.
Here, the single-qubit gates in a circuit (with the CNOT as the two-qubit gate) are replaced by Clifford gates such that the whole circuit is a Clifford gate.
In hardware where parameters are encoded in virtual-Z gates, this will preserve the structure and timing of the underlying pulses, thereby leaving most noise sources such as T_1, T_2 and cross-talk unchanged at the pulse-level.
We can then compute ρ_ideal with efficient Clifford based simulators.
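One assumed way to build such a Clifford proxy circuit is to snap every virtual-Z angle to the nearest multiple of π/2, which keeps the pulse schedule (and hence most noise sources) intact while making the ideal output efficiently computable; a minimal sketch:

```python
import numpy as np

def nearest_clifford_angles(angles):
    # Snap each virtual-Z angle to a multiple of pi/2; the resulting
    # circuit is Clifford and classically simulable, while the pulse
    # structure and timing are unchanged.
    return [np.pi / 2 * round(a / (np.pi / 2)) for a in angles]
```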
Solving the optimization problem
min_x⃗∈ℝ^MJ(x⃗),
yields the optimal parameter values x⃗^* of the LDD sequence that minimize J(x⃗).
Since we do not know the noise processes described by ℳ_j, we minimize J in an iterative, variational quantum algorithm type fashion by using quantum and classical computing resources in tandem <cit.>. By measuring the output at the end of the quantum circuit we estimate J, while a classical search routine is employed to update the parameters.
§.§ Case studies on IBM hardware
We study the performance of LDD in two different experiments carried out on IBM Quantum hardware with Bell pairs, a valuable resource.
For instance, Bell pairs enable quantum gate teleportation <cit.>, and similar known states, generated in a resource factory, enable circuit cutting <cit.>.
In the first experiment we suppress noise during mid-circuit measurements.
In the second experiment we suppress noise resulting from an increasing circuit depth.
In both cases we minimize fidelity loss due to noise by increasing the fidelity of preparing the Bell state |ψ_ideal⟩=|Φ_+⟩=1/√(2)(|00⟩+|11⟩) on two qubits q_i and q_j within an n>2 qubit system, starting from ρ_0=|0⟩⟨ 0|^⊗ n.
To infer the value of J(x⃗) in Eq. (<ref>) for the Bell state, it suffices to measure the expectation values of the X_iX_j, Y_iY_j and Z_iZ_j Pauli operators with respect to ρ(x⃗), rather than performing full state tomography <cit.>, i.e,
J(x⃗) =1-1/4⟨1+X_iX_j-Y_iY_j+Z_iZ_j ⟩_ρ(x⃗).
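A sketch of this cost function evaluated from measured bitstring counts, assuming counts_xx, counts_yy, counts_zz were collected after the appropriate basis-change gates on qubits q_i and q_j (names and shot handling are illustrative):

```python
def expval(counts, i, j):
    """Parity expectation <P_i P_j> from bitstring counts.
    Note: the bit order within each string depends on the
    endianness convention of the backend."""
    shots = sum(counts.values())
    signed = sum(c * (-1) ** (int(b[i]) + int(b[j]))
                 for b, c in counts.items())
    return signed / shots

def bell_cost(counts_xx, counts_yy, counts_zz, i=0, j=1):
    # J = 1 - (1 + <XX> - <YY> + <ZZ>) / 4 for the target |Phi_+>
    return 1.0 - 0.25 * (1.0 + expval(counts_xx, i, j)
                         - expval(counts_yy, i, j)
                         + expval(counts_zz, i, j))
```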
We employ parameterized decoupling operations given by arbitrary single-qubit rotations
R_x⃗=R(θ, ϕ, λ)=e^-iθ/2Ze^-iϕ/2Ye^-iλ/2Z,
where in both experiments we repeat R(θ,ϕ,λ) and its inverse in total N=4 times, described by Eq. (<ref>).
Here, we only consider three angles x⃗=(θ,ϕ,λ) to parameterize the full LDD sequence.
All qubits with an LDD sequence thus share the same parameter values.
This limits the size of the search space.
On IBM Quantum hardware, these single-qubit rotations are implemented by three parameterized virtual-Z rotations <cit.> and two √(X) pulses.
Therefore, in LDD we optimize the angles in these virtual-Z rotations.
We pick the Simultaneous Perturbation Stochastic Approximation (SPSA) gradient descent method <cit.> to solve the optimization problem in Eq. (<ref>) starting with (θ, ϕ, λ)=(0,0,0).
We allow SPSA a total of 100 iterations.
At each iteration SPSA requires only two estimations of the objective function, regardless of the number of optimization parameters. The SPSA hyperparameter “perturbation” is set to its default value and the hyperparameter “learning rate” is calibrated by the optimizer <cit.>.
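For reference, a minimal SPSA loop with the standard gain decays (a sketch; the optimizer used in the experiments additionally calibrates the learning rate, and estimated_cost_J is a placeholder for the sampled cost of Eq. (<ref>)):

```python
import numpy as np

def spsa_minimize(cost, x0, iters=100, a=0.1, c=0.1, seed=0):
    """Minimize cost(x) with SPSA: two cost evaluations per iteration,
    independent of the number of parameters."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        a_k = a / k ** 0.602   # standard SPSA gain decays (Spall)
        c_k = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher draw
        grad = (cost(x + c_k * delta) - cost(x - c_k * delta)) / (2 * c_k) * delta
        x = x - a_k * grad
    return x

# Example: start from (theta, phi, lam) = (0, 0, 0) as in the experiments.
# x_opt = spsa_minimize(estimated_cost_J, x0=[0.0, 0.0, 0.0])
```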
Below we compare LDD with CPMG and XY4 where DD sequences are inserted when qubits are idling. Throughout the two experiments we concatenate two CPMG and XY4 sequences and adjust the duration τ to match the idling times of the qubits.
§.§.§ Suppressing noise during mid-circuit measurements
A mid-circuit measurement (MCM) involves measuring qubits at intermediate stages within a quantum circuit.
MCMs have various applications including quantum error correction (QEC) <cit.>, quantum teleportation <cit.>, reducing the depth of a quantum circuit <cit.>, circuit cutting <cit.>, and analyzing complex quantum behaviour <cit.>.
Unfortunately, MCMs may introduce noise on neighbouring qubits of the physical device <cit.>.
As depicted in Fig. <ref>, we consider the task of preparing the Bell state |Φ_+⟩ between qubit q_0 and qubit q_2 while qubit q_1 is subject to r repeated MCMs.
Here, varying r∈{1, ..., 15} allows us to amplify the amount of noise introduced in the quantum circuit.
First, we compute F without any DD, once with r MCMs and once with a delay equivalent to r MCMs.
The experimental results are shown in Fig. <ref> (red and purple curves), where we report the median and the lower and upper quartiles of ten measurements.
In the presence of MCMs we observe a large drop in F from 0.916^{+0.004}_{-0.026} at r=1 to 0.447^{+0.052}_{-0.178} at r=15. Without MCMs, F drops from 0.905^{+0.015}_{-0.075} at r=1 to 0.386^{+0.033}_{-0.016} at r=15.
This implies that an MCM on q_1 does not have a large impact on q_0 and q_2 aside from adding an 820 ns delay.
Crucially, the fidelity decrease due to the long delays induced by r MCMs on q_1 can be mitigated by DD sequences inserted on neighbouring qubits q_0 and q_2 during the MCM measurement, see Fig. <ref>.
In Fig. <ref> we compare the performance of LDD (blue), CPMG (green), and XY4 (orange).
The LDD sequence, whose corresponding optimal parameters are shown in Table <ref>, yields the best performance, resulting in, e.g., a fidelity of 0.853^{+0.012}_{-0.007} at r=15, while CPMG and XY4 result in 0.814^{+0.012}_{-0.005} and 0.774^{+0.007}_{-0.021}, respectively.
To evaluate the reliability of the LDD sequence we compute F(x⃗^*) ten times after the learning and report the lower quartile, median, and upper quartile of these ten runs.
Due to queuing, these circuits were executed on the hardware two days after the optimal parameters x⃗^* were learnt.
This indicates that the learned parameters x⃗^* are stable in time.
We attribute the residual decay to T_1 shown for comparison in Fig. <ref> as a solid black curve.
This curve is computed as e^{-t/T_1}, where t is the idling time and T_1 = 167 μs is the average T_1 time of qubits q_0 and q_2 of ibm_hanoi.
§.§.§ Suppressing deep circuit noise
Next, we consider suppressing noise through LDD that is introduced due to increasing the depth of a quantum circuit. In particular, we consider the task of preparing the Bell state |Φ_+⟩ between two qubits located at the edges of a qubit chain with nearest neighbour interactions.
The corresponding coupling graph of the IBM Quantum device is shown in Fig. <ref>(a) where the considered qubit chain is highlighted in the dashed grey box.
In Fig. <ref>(b) we show the quantum circuit that prepares a Bell state between qubits q_0 and q_12 by bringing them into proximity with a ladder of SWAP gates.
Since the ancilla qubits are in their ground state |0⟩, we can implement each SWAP gate with two CNOT gates instead of three.
Similar gate ladders with a single CNOT at each rung often occur in quantum simulation algorithms to create unitaries generated by Pauli strings <cit.>.
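A minimal sketch of this circuit in Qiskit, exploiting the fact that a SWAP whose target is known to be in |0⟩ reduces to two CNOTs (the chain length n is the only parameter):

```python
from qiskit import QuantumCircuit

def bell_over_chain(n: int) -> QuantumCircuit:
    """Prepare |Phi_+> between qubits 0 and n-1 of an n-qubit line."""
    qc = QuantumCircuit(n)
    qc.h(0)
    qc.cx(0, 1)
    # Move one half of the pair down the chain of |0> ancillas.
    for i in range(1, n - 1):
        # SWAP(i, i+1) with qubit i+1 in |0>: two CNOTs instead of three.
        qc.cx(i, i + 1)
        qc.cx(i + 1, i)
    return qc
```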
In Fig. <ref>(c) we plot the fidelity F for preparing the Bell state between the two edge qubits of the chain in Fig. <ref>(a) as a function of the number of intermediate qubits (IQ), shown by the dashed red curve.
We see a fidelity decrease from 0.948^{+0.002}_{-0.002} for a chain consisting of 2 qubits (i.e., 0 IQ) to 0.720^{+0.024}_{-0.013} for a chain consisting of 10 qubits (i.e., 8 IQ).
To mitigate the Bell state fidelity decrease, as depicted in Fig. <ref>(b), we insert DD sequences on idling qubits and compare their performance. The results shown in Fig. <ref>(c) for the CPMG sequence (green), the XY4 sequence (orange), and the LDD sequence (blue) suggest that the best performance is obtained for LDD where the corresponding optimal parameters are given in Table <ref>.
In fact, the fidelity obtained by inserting XY4 and CPMG can even fall below the fidelity obtained without DD (we observe a significant fidelity drop for XY4 at 1 IQ), so we conclude that DD can increase the noise instead of suppressing it.
This situation is avoided by LDD as the DD sequence is tailored to the device.
§ CONCLUSION
Dynamical decoupling (DD) is a powerful noise suppression strategy that averages out detrimental processes by applying properly designed pulses to the system.
We introduced the framework of “learning dynamical decoupling” (LDD).
Instead of considering a DD sequence with fixed rotational gates, LDD optimizes directly on quantum hardware the rotational parameters in the DD gates.
We compared the performance of such optimized DD sequences with the known DD sequences CPMG and XY4 on IBM Quantum hardware.
We found that LDD outperforms both sequences in suppressing noise that occurs during mid-circuit measurements and noise that stems from increasing the depth of a quantum circuit.
The LDD sequences that we studied have by design a small number of single-qubit gates and a fixed number of rotational parameters. While we believe that performance can be increased even further by adding more optimization parameters, the results shown in Fig. <ref> for different system sizes (i.e., a different number of intermediate qubits) suggest that the number of LDD parameters can remain constant while achieving a similar performance when the system is scaled. As such, the classical optimization overhead does not need to increase when the system size increases.
Therefore, the LDD approach considered here is scalable by design.
Furthermore, inserting only a small number of single-qubit gates in LDD or DD on idling qubits to suppress noise is important for current quantum devices.
Idle times in quantum circuits typically occur when a subset of all qubits undergoes two-qubit gates.
Therefore, the shorter the two-qubit gate duration is on a device, the more compact the DD sequence needs to be.
To illustrate this consider ibm_torino and ibm_sherbrooke which have a median two-qubit gate duration of 84 ns and 533 ns, respectively [The numbers are as reported from the backends. Furthermore, ibm_torino implements two-qubit CZ gates with tunable couplers while ibm_sherbrooke uses the cross-resonance interaction.].
The duration of both the single-qubit X and √(X) gates are 32 ns and 57 ns for ibm_torino and ibm_sherbrooke, respectively.
As such, on ibm_torino we can insert up to two X or Y gates or one arbitrary single-qubit rotation during the median two-qubit gate duration.
By contrast, on ibm_sherbrooke these numbers become eight and four, respectively.
Consequently, as the two-qubit gate duration becomes comparable to the single-qubit gate duration, short DD sequences become more important.
In summary, DD is crucial to suppressing errors in noisy hardware.
As DD sequences improve – becoming more tailored to the hardware – so do the hardware results.
This motivates the strong interest in DD.
Future work may include optimizing the spacing of the DD pulses in LDD.
Furthermore, one could explore how to protect circuit-cutting resources consumed in teleportation circuits, as demonstrated in Ref. <cit.>.
Indeed, these resources are less costly to generate simultaneously; however, this has the drawback that they idle until they are consumed.
Finally, we optimized virtual-Z rotations that sandwich √(X) gates.
Future work may thus elect to directly optimize the pulses that implement the DD sequence, e.g., similar to pulse-level variational quantum algorithms <cit.>.
C. A. acknowledges support from the National Science Foundation (Grant No. 2231328). A. R. and C. A. acknowledge support from Knowledge Enterprise at Arizona State University.
We acknowledge the use of IBM Quantum services for this work.
The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.
|
http://arxiv.org/abs/2405.09259v1 | 20240515111847 | Multiconfiguration Dirac-Hartree-Fock and configuration-interaction study of $4d-3p$ x-ray transitions in Cu- and Ni-like tungsten ions | ["Karol Kozioł", "Jacek Rzadkiewicz"] | physics.atom-ph | ["physics.atom-ph"] |
Karol.Koziol@ncbj.gov.pl
Jacek.Rzadkiewicz@ncbj.gov.pl
Narodowe Centrum Badań Jądrowych (NCBJ), Andrzeja Sołtana 7, 05-400 Otwock-Świerk, Poland
The 4d → 3p X-ray transitions in Cu- and Ni-like tungsten ions have been studied theoretically.
The Multiconfiguration Dirac–Hartree–Fock (MCDHF) method and the large-scale relativistic configuration interaction (CI) method have been employed in order to take into account electron correlation effects on the wavelengths and transition rates.
It was found that the wavelengths and transition rates obtained from the MCDHF-CI method depend strongly on the size and the type of the Active Space used in the calculations.
It has been found that extending the Active Space of orbitals without careful control of the Configuration State Function basis does not always lead to good-quality MCDHF-CI results for highly ionized tungsten ions.
MCDHF-CI study of 4d-3p X-ray transitions in Cu- and Ni-like tungsten ions
Jacek Rzadkiewicz
May 20, 2024
==========================================================================
§ INTRODUCTION
Tungsten has been chosen as a plasma facing material in modern large tokamaks, including JET and ITER <cit.>.
Therefore, spectroscopic studies of tungsten ions can constitute a unique tool for diagnostics relevant for a wide range of electron temperatures, from 0.1 keV at the edge up to 25 keV in the core of the tokamak plasma <cit.>.
The tungsten spectra originating from different plasma regions with various electron temperatures consist of radiation emitted by many specific ion charges.
The spectra related to plasma temperatures above a few keV consist of many intense lines originating from Cu-like (W^45+) and Ni-like (W^46+) tungsten ions.
These lines have been observed at the ASDEX Upgrade and JT60U tokamaks in various spectral ranges.
The 4d-3p x-ray lines of W^45+ and W^46+ ions were also observed at JET in the wavelength region of 5.19–5.24 Å by means of an upgraded high-resolution x-ray diagnostic <cit.>.
In our previous paper <cit.> the x-ray transitions in Ni- and Cu-like tungsten ions in the 5.19–5.26 Å wavelength range that are relevant as a high-temperature tokamak diagnostic, in particular for JET in the ITER-like wall configuration, have been studied.
The experimental wavelengths were measured at the upgraded Shanghai Electron Beam Ion Trap with an accuracy of 0.3–0.4 mÅ, and then compared with those determined from JET ITER-like wall plasmas.
It has been found that employing the Multiconfiguration Dirac–Hartree–Fock (MCDHF) calculations extended by taking into account correlation effects (Configuration Interaction, CI, approach) brings the calculations closer to the experimental values in comparison with previous calculations found in the literature.
In that paper, theoretical studies were presented only for the x-ray lines appearing in the measured spectra, i.e. for the so-called Ni1, Ni2, Cu1, Cu2, and Cu3 lines related to the transitions between excited [Mg]3p^53d^104d^1/[Mg]3p^53d^104s^14d^1 states and [Mg]3p^63d^10/[Mg]3p^63d^104s^1 ground states, for and ions, respectively.
In the present paper we consider the wavelengths and intensities of all transitions between |[Mg]3p^53d^104d^1⟩_J=1 and |[Mg]3p^63d^10⟩_J=0 states in Ni-like tungsten (W^46+) and between |[Mg]3p^53d^104s^14d^1⟩_J=1/2,3/2 and |[Mg]3p^63d^104s^1⟩_J=1/2 states in Cu-like tungsten (W^45+).
In addition we present our methodology in more detail.
See Table <ref> for the terminology for the states.
Excited states may be divided into two groups: lower-lying states (states 1 and 2 for W^46+ and states 1–7 for W^45+) and higher-lying states (state 3 for W^46+ and states 8–11 for W^45+).
Transitions between lower-lying excited states and the ground states of W^46+ and W^45+ appear in spectra in the range 5.19–5.30 Å.
Transitions from higher-lying excited states appear in spectra in the range 4.6–4.7 Å.
§ THEORETICAL BACKGROUND
The calculations of the radiative transition energies and rates have been carried out by means of the Grasp2k v1.1 <cit.> code.
The Grasp2k code is based on the MCDHF method.
For comparison, we have also performed calculations with the Fac code, which is based on the Dirac–Hartree–Fock–Slater method <cit.>.
The methodology of MCDHF calculations performed in the present study is similar to that published earlier in many papers (see, e.g., <cit.>).
The effective Hamiltonian for an N-electron system is expressed by
H = ∑_i=1^N h_D(i) + ∑_j>i=1^N C_ij,
where h_D(i) is the Dirac operator for the ith electron and the terms C_ij account for the electron–electron interactions.
In general, the latter is a sum of the Coulomb interaction operator and the transverse Breit operator.
An atomic state function (ASF) with total angular momentum J and parity p is assumed in the form
Ψ_s (J^p ) = ∑_m c_m (s) Φ ( γ_m J^p ),
where Φ ( γ_m J^p ) are the configuration state functions (CSFs), c_m (s) are the configuration mixing coefficients for state s, and γ_m represents all information required to define a certain CSF uniquely.
The CSFs are linear combinations of N-electron Slater determinants which are antisymmetrized products of 4-component Dirac orbital spinors.
In present calculations, the initial and final states of the considered transitions have been optimized separately and a biorthonormal transformation has been used for performing the transition rate calculations <cit.>.
Following this, the so-called relaxation effect is taken into account.
In the Grasp2k code, the Breit interaction contribution to the energy is added perturbatively, after the radial part of wavefunction has been optimized.
We calculated the Breit term in the low-frequency limit (see, e.g., <cit.> for details), because the frequency-dependent term is not appropriate for virtual orbitals <cit.>.
Using a self-consistent variational approach to calculate the Breit term instead of the perturbational approach (see, e.g., <cit.> for details) causes a small effect, estimated to be about +0.3 to +0.4 mÅ at the multi-reference (MR) MCDHF level in the studied cases.
However, it has been found that this "variational effect" is significantly reduced when the active space is expanded <cit.>.
Also two types of quantum electrodynamics (QED) corrections: the self-energy (as the screened hydrogenic approximation <cit.> of the data of Mohr and co-workers <cit.>) and the vacuum polarization (as the potential of Fullerton and Rinker <cit.>) have been included.
The differences among various models for estimating the self-energy in many-electron atoms (see, e.g., <cit.> for details) cause an effect below 0.1 mÅ in the studied cases.
The radiative transition rates were calculated in the Babushkin (length) <cit.> gauge.
The accuracy of the wavefunction depends on the CSFs included in its expansion <cit.>.
The accuracy can be improved by extending the CSF set by including the CSFs originating from excitations from orbitals occupied in the reference CSFs to unfilled orbitals of the active orbital set (i.e., CSFs for virtual excited states).
This approach is called Configuration Interaction.
The CI method makes it possible to include the major part of the electron correlation contribution to the energy of the atomic levels.
In the CI approach, it is very important to choose an appropriate basis of CSFs for the virtual excited states.
It can be done by systematically building CSF sequences by extending the Active Space (AS) of orbitals and concurrently monitoring the convergence of the self-consistent calculations <cit.>.
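Schematically, such monitoring reduces to checking that successive AS layers change a level energy by less than a chosen threshold; a minimal sketch with an assumed tolerance:

```python
def as_converged(energies, tol=1e-4):
    """energies: a level energy after AS0, AS1, ... (hartree).
    The AS expansion is considered saturated once the last layer
    changes the energy by less than tol."""
    return len(energies) > 1 and abs(energies[-1] - energies[-2]) < tol
```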
The |[Mg]3p^53d^104s^0,14d^1⟩→|[Mg]3p^63d^104s^0,1⟩ transitions in Cu- and Ni-like W ions are interesting within the CI framework because: (i) the lower states of the transitions are the lowest states for the given number of electrons in the given symmetry, but the upper states of the transitions are not the lowest states in the given symmetry; (ii) for the lower states of the transitions, all the virtual orbitals are above the occupied ones, but for the upper states of the transitions, some of the virtual orbitals are above and others are below the occupied ones.
Hence, the SCF variational procedure needs to be performed carefully.
For W^46+ we have tested active spaces of virtual orbitals with n up to n=7 and l up to l=5.
Details are presented in Table <ref>.
We have considered all possible single (S) and double (D) substitutions from the 3s, 3p, 3d, 4d occupied subshells for the upper states and from the 3s, 3p, 3d occupied subshells for the lower state.
In this case, the inactive core contains n = 1, 2 subshells.
Some CSFs (the ones that have a zero matrix element with every CSF in the MR set) are excluded by using the jjreduce3 program, a part of the Grasp2k program set.
In this way the number of CSFs has been reduced by up to 35%.
The MR configurations are [Mg]3p^53d^104d^1 and [Mg]3p^63d^10 for the upper and lower states of the mentioned transitions in W^46+ (giving the AS0 active space).
This approach will be referred to as the FCI (“full CI”).
Because the size of the expansions increases rapidly with the size of the reference set, for W^45+ we used a slightly simplified model, which is a common approach (see, e.g., <cit.>).
The occupied subshells have been divided into three kinds: inactive core, active core (C), and valence (V) subshells.
The n = 4 subshells (i.e., 4s and 4d for the upper states of transitions and 4s for the lower state) are considered as valence subshells.
The n = 1, 2 subshells are an inactive core and the n = 3 subshells are an active core.
Then, for W^45+ we considered SD substitutions divided into two groups: VV (both substituted electrons are from the valence subshells) and CV (the first substituted electron is from the valence subshell and the other is from the active core subshell) substitutions.
The details are presented in Table <ref>.
The reference (AS0) configurations are [Mg]3p^53d^104s^14d^1 and [Mg]3p^63d^104s^1 for the upper and lower states of the mentioned transitions in the W^45+ ion.
This approach will be referred to as CV (“core–valence”).
In order to prove that the CV approach provides results as good as the more sophisticated FCI approach, we used the CV approach also in the case of Ni-like tungsten ions.
In this case the open subshells (3p and 4d for the upper states but nothing for the lower state) are treated as valence subshells.
The 3s and 3d subshells for the upper states and the 3s, 3p, and 3d ones for the lower state are an active core.
The n = 1, 2 subshells are an inactive core.
VV and CV SD substitutions are allowed.
The details are presented in Table <ref>.
§ RESULTS AND DISCUSSION
Firstly, we studied the convergence of the energies of the initial and final states in the CI procedure.
We considered three FCI models: FCI-d (Fig. <ref>a), when virtual orbitals with maximal l=2 are used, FCI-f (Fig. <ref>b), when virtual orbitals with maximal l=3 are used, and FCI-g (Fig. <ref>c), when virtual orbitals with maximal l=4 are used.
As one can see, for all FCI models, the energy saturation for the ground state of Ni-like tungsten (0 level) precedes the saturation for the excited states (1, 2, 3 levels).
One can conclude that the AS4g active space is large enough to reach saturation at a similar level of relative energy for each initial and final state with respect to both n and l.
The initial and final states of the transitions in W^45+ and W^46+ have fairly different occupation numbers in the outer and inner shells.
Therefore, for the AS with low n and l the correlation corrections to the energy levels are very unequal for the initial and final states of the transitions.
This is due to the fact that some important substitutions from the 4d subshell are not allowed when the AS contains only subshells with l ≤ 2.
In other words, the energy of the final (ground) state converges for lower n in AS in comparison with the energy for the initial states.
This inequality decreases for ASs with higher n and, as a result, the values of the wavelength saturate at AS3–AS4.
In the case of the FCI-d model (see Fig. <ref>) a reduction of the energy at AS4d is larger for the ground state (level 0) than for the excited states 1 and 2, while state 3 nearly does not change its energy.
The situation is fundamentally different in the FCI-f and FCI-g models.
The reduction of energy for state 0 is slightly smaller than for states 1 and 2, and the reduction of energy for state 3 is distinctly larger.
The differences in the energies of states 0, 1, 2, and 3 translate directly into the energies of the 1→0, 2→0, and 3→0 transitions (Ni2, Ni1, and Ni3 lines, respectively), calculated in the various FCI models.
A similar analysis of the convergence for the CV approach is presented in Fig. <ref>a.
In this case the reduction of energy for state 0 is smaller than for states 1 and 2 by a few eV and larger than for state 3, also by a few eV.
Wavelengths of |[Mg]3p^53d^104s^14d^1⟩_J=1/2,3/2 → |[Mg]3p^63d^104s^1⟩_J=1/2 transitions (Å) in Cu-like tungsten ions for various theoretical approaches. The labels Cu3, Cu2, and Cu1 refer to the 0→5, 0→6, and 0→7 transitions, respectively; dashes mark transitions not reported by a given work.

                   0→1        0→2        0→3        0→4        0→5 (Cu3)    0→6 (Cu2)    0→7 (Cu1)    0→8        0→9        0→10       0→11
theory:
MCDHF (MR)         5.2968     5.2878     5.2760     5.2712     5.2308[1]    5.2207[1]    5.2178[1]    4.6743     4.6685     4.6640     4.6192
MCDHF-CI (CV)      5.3028[2]  5.2935[2]  5.2789[2]  5.2769[2]  5.2370[1,2]  5.2281[1,2]  5.2251[1,2]  4.6804     4.6644     4.6624     4.6297
MCDHF-CI (CV*)     5.3010     5.2925     5.2781     5.2765     5.2364       5.2278       5.2245       4.6803[2]  4.6748[2]  4.6693[2]  4.6223[2]
experiments:
EBIT <cit.>        –          –          –          –          5.2369(3)    5.2292(3)    5.2259(4)    –          –          –          –
JET <cit.>         –          –          –          –          –            5.2295(9)    5.2263(9)    –          –          –          –
other theory:
Fac (this paper)   5.2979     5.2889     5.2770     5.2722     5.2316[1]    5.2218[1]    5.2191[1]    4.6745     4.6686     4.6641     4.6195
Fac <cit.>         –          –          5.2728     –          5.2313       5.2230       5.2197       –          –          4.6559     –
Relac <cit.>       –          –          –          –          5.2298       5.2192       –            –          –          4.6633     –
Cowan <cit.>       –          –          –          –          5.241        5.230        –            –          –          –          –
MBPT <cit.>        –          –          –          –          5.2409       5.2328       –            –          –          4.6601     4.6181
other experiments:
Ref. <cit.>        –          –          –          –          5.238(9)     –            –            –          –          –          –
NIST <cit.>        –          –          –          –          5.2379(17)   5.2289(11)   –            –          –          –          –
[1] Numbers published in Rzadkiewicz et al. <cit.>
[2] The recommended values
Transition rates of |[Mg]3p^53d^104s^14d^1⟩_J=1/2,3/2 → |[Mg]3p^63d^104s^1⟩_J=1/2 transitions (s^-1, length gauge) in Cu-like tungsten ions for various theoretical approaches. The notation A[B] means A×10^B; the theoretical uncertainties are in parentheses (see text for details), and dashes mark transitions not reported by a given work.

                   0→1             0→2            0→3            0→4             0→5 (Cu3)       0→6 (Cu2)       0→7 (Cu1)      0→8             0→9            0→10           0→11
theory:
MCDHF (MR)         3.13(13)[11]    2.03(6)[12]    1.55(2)[13]    4.78(33)[10]    9.08(16)[13]    9.11(18)[13]    1.44(3)[13]    3.12(13)[09]    4.73(9)[13]    4.46(9)[13]    2.40(7)[12]
MCDHF-CI (CV)      2.47(16)[11][1] 1.75(6)[12][1] 2.22(3)[13][1] 9.95(35)[10][1] 8.12(14)[13][1] 6.67(13)[13][1] 1.30(3)[13][1] 3.81(19)[10]    6.45(36)[12]   1.39(5)[13]    6.04(41)[11]
MCDHF-CI (CV*)     2.74(20)[11]    1.64(7)[12]    2.14(4)[13]    8.07(53)[10]    7.96(17)[13]    6.55(15)[13]    1.27(4)[13]    3.00(41)[09][1] 4.19(8)[13][1] 3.71(8)[13][1] 2.22(10)[12][1]
other theory:
Fac (this paper)   3.25[11]        2.07[12]       1.56[13]       4.41[10]        9.21[13]        9.24[13]        1.44[13]       1.07[09]        4.71[13]       4.44[13]       2.35[12]
Fac <cit.>         –               –              2.21[13]       –               8.14[13]        6.58[13]        1.31[13]       –               –              1.32[13]       –
Relac <cit.>       –               –              –              –               8.32[13]        7.80[13]        –              –               –              3.99[13]       –
MBPT <cit.>        –               –              –              –               8.03[13]        6.44[13]        –              –               –              6.35[13]       2.43[12]
[1] The recommended values
The wavelengths and transition rates of the Ni1, Ni2, and Ni3 transitions, calculated by using various theoretical approaches, are presented in Tables <ref> and <ref>.
They are compared with the experimental results presented in our previous paper <cit.>, and also compared with experimental and other theoretical data available in the literature.
One can see from Table <ref> that both the FCI-g and CV approaches are close to the experimental data for the Ni1 and Ni2 lines cited in <cit.>, surrounding the experimental values from above and below.
The MCDHF-CI values are better than those of MR MCDHF or Fac.
However, in the case of the Ni3 line, the results of FCI-g and CV differ from each other and from the experimental values.
A disagreement between other theoretical attempts to predict the Ni3 wavelength can also be observed.
A similar disagreement is observed among the theoretical predictions for the transition rate of the Ni3 transition, see Table <ref>.
There is a relatively large difference between the results from MCDHF-CI (FCI-g) and MCDHF-CI (CV), and a noticeable difference between the results of MCDHF (MR) and MCDHF-CI (CV).
The latter is important, because the MCDHF-CI approach should be treated only as a correction to the MCDHF (MR) approach, so for strong transitions it is expected that the MCDHF-CI value is not substantially different from the MCDHF value.
Studying the CSF expansion of the ASF for level 3, one can see that this ASF is strongly influenced by the CSFs related to the |[Ne]3s^13p^63d^104p^1⟩_J=1 electronic configuration.
This is the result of an accidentally close energetic neighborhood of the virtual |[Ne]3s^13p^63d^104p^1⟩_J=1 level, originating from excitations to a virtual orbital, and level 3 of W^46+ (the so-called near-degeneracy effect, because the |[Ne]3s^13p^63d^104p^1⟩_J=1 level is formally an excitation but actually has a lower energy than level 3).
Then we modified the FCI-g and CV approaches, removing two CSFs due to the |[Ne]3s^13p^63d^104p^1⟩_J=1 configuration from ASs.
The new ones are referred to as the FCI-g* and CV* approaches.
One can see from Table <ref> that this modification changes the wavelengths of the Ni1 and Ni2 lines by less than 1 mÅ.
However, the change of wavelength for the Ni3 line is substantial, and the predictions calculated by the FCI-g* and CV* models met each other.
The similar consistency of theoretical predictions can be obtained in the case of the transition rates for the Ni3 transition – see Table <ref>.
It is worth noticing that the convergence of the atomic level energies from extending the AS is very similar for the FCI-g* and CV* approaches – see Figs. <ref>d and <ref>b.
The wavelengths and transition rates calculated by means of the various theoretical approaches for ions are presented in Tables <ref> and <ref>, respectively.
CV* denotes the model in which six CSFs due to the [Ne]3s^13p^63d^104s^14p^1 configuration and one CSF due to the [Ne]3s^2 3p_1/2^1 3p_3/2^4 3d^10 4p_3/2^2 configuration are removed from the ASs.
The reason for applying this approach is similar to that for the CV* model in the case of the transitions: the |[Ne]3s^13p^63d^104s^14p^1⟩_J=1/2,3/2 and |[Ne]3s^2 3p_1/2^1 3p_3/2^4 3d^10 4p_3/2^2⟩ levels are very close energetically to higher |[Ne]3s^23p^53d^104s^14d^1⟩_J=1/2,3/2 levels.
Removing these CSFs from the ASs improves the convergence of the atomic level energies – compare Figs. <ref>a and <ref>b.
The difference between the results from the CV and the CV* is small for the transitions from lower-lying excited states, but significantly higher for the transitions from higher-lying excited states.
It is interesting to see that the transition rate for the strong Cu2 transition decreases markedly between the pure MCDHF (AS0) and the MCDHF-CI (AS4) frameworks.
This behavior may be explained by expanding the ASF for the initial state of the Cu2 transition in the LS-coupling CSF basis, and applying the rule for the selection of LS-coupling (Δ S = 0; Δ L = 0, ± 1, but not 0→0), keeping in mind that the final state is ^2S.
In the case of AS0, the initial state of the Cu2 transition is composed of 86.3% ^2P, 8.8% ^4D, and other CSFs.
In the case of AS4 it is composed of 72.7% ^2P, 7.5% ^4D, 7.3% ^2S (related to excitation to virtual orbitals), and other CSFs.
In this case the ^2S term causes a reduction of the ^2P contribution, and at the same time the ^2S →^2S transition is not allowed.
As a result, in the AS4 case the Cu2 transition rate is smaller than in the AS0 case.
The present work presents the correlation corrections to wavelengths and transition rates in various MCDHF-CI models.
The "correlation effect" on transition wavelengths is estimated to be about 4–7 mÅ in the studied cases; the effect is more significant for transition rates, whose values may differ by a factor of a few depending on the model used in the calculations.
Based on our best assessment, for analyzing future experiments we recommend the MCDHF-CI (CV) values for the Ni1 and Ni2 lines in W^46+ and the 0 → i (i = 1–7) lines in W^45+, and the MCDHF-CI (CV*) values for the Ni3 line in W^46+ and the 0 → i (i = 8–11) lines in W^45+.
The "best" numbers are chosen by comparing with experiments (when possible) and by carefully monitoring the convergence within the MCDHF-CI process.
The recommended values are marked in Tables <ref>, <ref>, <ref>, and <ref>.
The theoretical uncertainties of the wavelengths are related to the convergence with the size of the basis set and are estimated as the absolute value of the difference between the wavelength calculated at the converged limit (AS∞, the asymptotic value obtained assuming that correlation effects on the energy levels are saturated, i.e. |E(AS_n+1)-E(AS_n)|→0 as n→∞) and at AS4 for a given model, i.e. δλ = |λ^AS∞-λ^AS4|.
Such an estimation gives a 0.1 mÅ uncertainty limit for all W^46+ and W^45+ theoretical wavelengths.
A more conservative estimation, δλ = |λ^AS4-λ^AS3|, gives a 0.1 mÅ uncertainty for the W^46+ lines and 0.3–0.5 mÅ for the W^45+ lines.
In the case of transition rates, the total uncertainty also contains the uncertainty related to the difference between the rates calculated in the Babushkin (length) and Coulomb (velocity) gauges, i.e. δ A = [(A_len^AS4-A_len^AS3)^2+(A_len^AS4-A_vel^AS4)^2]^1/2.
For the MR values, the uncertainty is related only to the gauge difference, i.e. δ A = |A_len^AS0-A_vel^AS0|, so comparing these uncertainties to the other uncertainty values is not fully justified.
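For concreteness, a small sketch evaluating both uncertainty estimates exactly as defined above; the inputs are the AS3/AS4 length-gauge and AS4 velocity-gauge values, and the numbers in the usage example are illustrative, not values from the calculation.

```python
import math

def dlambda(lam_as4, lam_as3):
    # Conservative wavelength uncertainty: |lambda^AS4 - lambda^AS3|
    return abs(lam_as4 - lam_as3)

def drate(a_len_as4, a_len_as3, a_vel_as4):
    # delta A = [(A_len^AS4 - A_len^AS3)^2 + (A_len^AS4 - A_vel^AS4)^2]^(1/2)
    return math.hypot(a_len_as4 - a_len_as3, a_len_as4 - a_vel_as4)

# Illustrative inputs only: a ~2% relative uncertainty on a strong line.
print(drate(6.67e13, 6.75e13, 6.57e13) / 6.67e13)
```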
The total uncertainties in the transition rates are given explicitly in Tables <ref> and <ref>, serving as a test of the expansion model used. For the recommended values they are below 1% for the W^46+ lines, while for the W^45+ lines they are below 3% for the strong lines (A>10^13 s^-1) and below 14% for the weak lines.
It is worth noting that, in general, low uncertainties indicate a higher quality of the theoretical model.
For the Ni1 and Ni2 lines the uncertainties of the recommended values are the smallest among the considered models.
For the W^45+ lines 0 → i (i = 1–7) the CV model produces smaller uncertainties than the CV* model, which supports our choice of recommended values.
For the Ni3 line and for the 0 → i (i = 8–11) lines in W^45+ the transition rates vary considerably from one model to another. However, it is worth noting that for these lines (except the 0 → 8 line in W^45+, which needs more theoretical effort) the relative (percentage) uncertainty δ A/A is smaller for the CV* (recommended) model than for the CV model.
§ CONCLUSIONS
The energy levels of the ground and excited states of W^46+ and W^45+ ions and the wavelengths and transition probabilities of the 4d → 3p transitions were calculated by using the MCDHF-CI method.
It has been found that both MCDHF-CI approaches, FCI and CV, can provide reliable results if the active space is chosen properly.
It was found that if the highest occupied orbital in the initial or the final state of a transition is the nl orbital, then the active space should be extended to at least the (n+1)(l+1) virtual orbitals.
Our analysis indicates that the best theoretical predictions of wavelengths and transition rates in the considered spectral range can be obtained by the MCDHF-CI (CV) model with the active spaces of virtual orbitals with n up to n = 7 and l up to l = 5.
For the higher-lying states (state 3 for W^46+ and states 8–11 for W^45+) our recommended values of wavelengths and transition rates have been obtained by the CV approach with specific CSFs removed in order to reduce the negative impact of the near-degeneracy effect on the convergence in the active spaces.
This clearly shows that extending the active space of orbitals without careful control of the CSF basis does not always lead to good quality MCDHF-CI results for highly ionized tungsten ions.
Results of present study provide an important benchmark for x-ray measurements in tokamaks, in particular for JET and ITER.
References

[1] G. F. Matthews et al. (ITER-like Wall Project Team), Phys. Scr. T128, 137 (2007).
[2] H. Bolt, V. Barabash, G. Federici, J. Linke, A. Loarte, J. Roth, and K. Sato, J. Nucl. Mater. 307–311, 43 (2002).
[3] T. Pütterich, R. Neu, R. Dux, A. D. Whiteford, M. G. O'Mullane, and ASDEX Upgrade Team, Plasma Phys. Control. Fusion 50, 085016 (2008).
[4] J. Rzadkiewicz et al., Nucl. Instrum. Methods Phys. Res. Sect. A 720, 36 (2013).
[5] M. Chernyshova et al., J. Instrum. 9, C03003 (2014).
[6] A. E. Shumack et al., Rev. Sci. Instrum. 85, 11E425 (2014).
[7] T. Nakano et al., J. Phys. B: At. Mol. Opt. Phys. 48, 144023 (2015).
[8] J. Rzadkiewicz et al. (JET Contributors), Phys. Rev. A 97, 052501 (2018).
[9] P. Jönsson, X. He, C. Froese Fischer, and I. P. Grant, Comput. Phys. Commun. 177, 597 (2007).
[10] P. Jönsson, G. Gaigalas, J. Bieroń, C. Froese Fischer, and I. P. Grant, Comput. Phys. Commun. 184, 2197 (2013).
[11] M. F. Gu, Can. J. Phys. 86, 675 (2008).
[12] K. G. Dyall, I. P. Grant, C. Johnson, F. A. Parpia, and E. Plummer, Comput. Phys. Commun. 55, 425 (1989).
[13] I. P. Grant, Relativistic Quantum Theory of Atoms and Molecules, Springer Series on Atomic, Optical, and Plasma Physics, Vol. 40 (Springer, New York, 2007).
[14] K. Kozioł, C. A. Giménez, and G. A. Aucar, J. Chem. Phys. 148, 044113 (2018).
[15] R. Si, X. L. Guo, T. Brage, C. Y. Chen, R. Hutton, and C. F. Fischer, Phys. Rev. A 98, 012504 (2018).
[16] C. T. Chantler, T. V. B. Nguyen, J. A. Lowe, and I. P. Grant, Phys. Rev. A 90, 062504 (2014).
[17] B. J. McKenzie, I. P. Grant, and P. H. Norrington, Comput. Phys. Commun. 21, 233 (1980).
[18] P. J. Mohr, Phys. Rev. A 46, 4421 (1992).
[19] L. W. Fullerton and G. A. Rinker, Phys. Rev. A 13, 1283 (1976).
[20] K. Kozioł and G. A. Aucar, J. Chem. Phys. 148, 134101 (2018).
[21] F. A. Babushkin, Acta Phys. Pol. 25, 749 (1964).
[22] C. Froese Fischer, J. Phys. B: At. Mol. Opt. Phys. 43, 074020 (2010).
[23] C. Froese Fischer, J. Phys. B: At. Mol. Opt. Phys. 44, 125001 (2011).
[24] J. A. Lowe, C. T. Chantler, and I. P. Grant, Phys. Lett. A 374, 4756 (2010).
[25] Z. Fei et al., Phys. Rev. A 86, 062501 (2012).
[26] C. Froese Fischer, in Adv. Theory At. Mol. Syst., Progress in Theoretical Chemistry and Physics, Vol. 19, edited by P. Piecuch, J. Maruani, G. Delgado-Barrio, and S. Wilson (Springer Netherlands, Dordrecht, 2009), pp. 115–128.
[27] C.-Z. Dong, S. Fritzsche, and L. Y. Xie, J. Quant. Spectrosc. Radiat. Transf. 76, 447 (2003).
[28] U. I. Safronova, A. S. Safronova, S. Hamasha, and P. Beiersdorfer, At. Data Nucl. Data Tables 92, 47 (2006).
[29] J. Clementson, P. Beiersdorfer, T. Brage, and M. F. Gu, At. Data Nucl. Data Tables 100, 577 (2014).
[30] K. Fournier, At. Data Nucl. Data Tables 68, 1 (1998).
[31] P. Neill, C. Harris, A. S. Safronova, S. Hamasha, S. Hansen, U. I. Safronova, and P. Beiersdorfer, Can. J. Phys. 82, 931 (2004).
[32] J. Clementson, P. Beiersdorfer, G. V. Brown, and M. F. Gu, Phys. Scr. 81, 015301 (2010).
[33] N. Tragin, J.-P. Geindre, P. Monier, J.-C. Gauthier, C. Chenais-Popovics, J.-F. Wyart, and C. Bauche-Arnoult, Phys. Scr. 37, 72 (1988).
[34] A. E. Kramida, Can. J. Phys. 89, 551 (2011).
[35] U. I. Safronova, W. R. Johnson, A. Shlyaptseva, and S. Hamasha, Phys. Rev. A 67, 052507 (2003).
[36] U. I. Safronova, A. S. Safronova, and P. Beiersdorfer, Phys. Rev. A 86, 042510 (2012).
[37] G. Osborne et al., Can. J. Phys. 89, 599 (2011).
|
http://arxiv.org/abs/2405.09976v1 | 20240516104131 | Language-Oriented Semantic Latent Representation for Image Transmission | ["Giordano Cicchetti", "Eleonora Grassucci", "Jihong Park", "Jinho Choi", "Sergio Barbarossa", "Danilo Comminiello"] | cs.CV | ["cs.CV", "eess.SP"] |
Language-Oriented Semantic Latent Representation for Image Transmission
Giordano Cicchetti, Eleonora Grassucci, Jihong Park, Jinho Choi, Sergio Barbarossa, and Danilo Comminiello
=====================================================================================
In the new paradigm of semantic communication (SC), the focus is on delivering meanings behind bits by extracting semantic information from raw data. Recent advances in data-to-text models facilitate language-oriented SC, particularly for text-transformed image communication via image-to-text (I2T) encoding and text-to-image (T2I) decoding. However, although semantically aligned, the text is too coarse to precisely capture sophisticated visual features such as spatial locations, color, and texture, incurring a significant perceptual difference between intended and reconstructed images. To address this limitation, in this paper, we propose a novel language-oriented SC framework that communicates both text and a compressed image embedding and combines them using a latent diffusion model to reconstruct the intended image. Experimental results validate the potential of our approach, which transmits only 2.09% of the original image size while achieving higher perceptual similarities in noisy communication channels compared to a baseline SC method that communicates only through text.
The code is available at https://github.com/ispamm/Img2Img-SC/https://github.com/ispamm/Img2Img-SC/.
Semantic Communication, Semantic Coding, Generative Models, Generative Semantic Communication.
§ INTRODUCTION
In the field of communication theory and technology, the quest for more meaningful, efficient, and effective exchanges of information has led to the emergence of a fascinating research area known as Semantic Communication (SC).
The key idea behind SC is to convey the semantic information of data, possibly attached to a compressed representation of the content, rather than exchanging the whole message. In particular, semantics may be useful when the available bandwidth is insufficient for transmitting the whole data.
The sender extracts and encapsulates the semantics of the data it wants to transmit into a compact and meaningful representation. The receiver uses the received information to reconstruct semantically equivalent data rather than the original content <cit.>.
These concepts introduce numerous additional degrees of flexibility that can be strategically leveraged in system design and resource allocation. Nowadays, one of the most appropriate tools that can be employed to solve the newly posed challenges is generative deep learning <cit.>.
The recent advances of deep generative models are well known. These models can generate almost any kind of multimedia content, such as text, images, video, and audio. One of their most interesting features is the ability to generate content starting from semantic conditioning, which can be an extremely compressed version of the original data, such as text <cit.> or low-dimensional latent vectors <cit.>. This is a key point for SC and for the future generation of networks (6G), since the adoption of generative models allows for a substantial reduction in information exchange, bandwidth requirements, and latency, while leaving the perceptual results at the receiver side untouched. However, the fidelity of the generation at the receiver strictly depends on the quality of the transmitted semantics. Unfortunately, extracting a proper semantic representation from data is not straightforward and no unique recipe exists to do so. As a matter of fact, the best way to extract meaningful semantics from different types of data is still an open problem <cit.>.
One of the first approaches towards the use of generative models in SC is the employment of a variational autoencoder acting as a transceiver in a deep joint source and channel coding (DeepJSCC) system <cit.>. The adopted variational autoencoder (VAE) performs perceptual compression by extracting the original data statistics μ and σ in a reduced dimension and using them as semantic vectors transmitted through the communication channel.
Although this approach paved the way for the use of generative deep learning in SC, it has several drawbacks. The latent dimensionality cannot be dynamically changed according to network conditions since any further compression of data corresponds to a significant degradation in perceptual reconstruction performance. In addition, performance is strongly influenced by training data and the communication environment, making it effective only under certain predefined usage conditions. Lastly, VAEs often have limited expressive capacity, and therefore newer approaches use diffusion models to overcome these limitations.
Recently, a large number of new approaches that use generative models have been proposed in the field of semantic communication. Their application is tailored to almost any kind of multimedia content, especially for images <cit.> and audio <cit.>.
Interestingly, Nam et al. <cit.> propose to use text as the semantic vector transmitted over a communication channel, designing a novel framework of language-oriented semantic communication. This innovative approach involves the use, at the sender side, of an image-to-text encoder that encapsulates the semantic meaning of an image into its corresponding textual caption.
The receiver, in turn, is equipped with a large text-to-image diffusion model that, taking the textual caption as input, generates an image semantically similar to the original one.
Even though their framework is quite efficient in terms of resources needed to exchange information (bandwidth and latency), the semantic gap between original and reconstructed data may be very large. Image-to-text models, for example, are not able to produce textual descriptions that capture all the details present in the original content. As a consequence, the reconstructed images may be vague, less detailed, and perceptually very dissimilar from the original ones.
To address the low perceptual similarities in language-oriented communications, in this paper, we introduce a novel framework communicating both (i) the textual caption and (ii) the latent embedding of the image. This is inspired by Stable Diffusion <cit.>, a latent diffusion model for text-to-image generation, which first generates a compressed image from (iii) pure noise in a latent space via iterative denoising while conditioning on (i), followed by scaling up to the original image resolution via an image decoder. In our proposed method, the denoising process starts not from (iii) but from (ii) that imposes high-fidelity semantic information such as spatial locations, color, and text, complementing textual caption's low-fidelity semantic information.
We validate our approach through experiments on the image-to-image communication task. We demonstrate that the embeddings of an image generated by an encoder network effectively retain the key features of the original image and are therefore very useful for reconstructing data. On the receiver side, we adopt Stable Diffusion <cit.>, a well-known generative latent diffusion model that can be conditioned on text, image embedding, or both to generate more semantically accurate data. The conditioning choice can also be made dynamically based on network conditions: if the infrastructure is overloaded or performance is heavily degraded, we can send only a few characters describing the original data; as soon as the network improves, we can send the latent embedding to boost the generative performance of the diffusion model.
§ THE PROPOSED FRAMEWORK
In this Section, we define the proposed semantic communication framework, addressing separately its two main components: semantic extraction at the sender side and data reconstruction at the receiver side.
§.§ Semantic Latent Representation
In our novel approach, we identify as semantic vectors both the textual caption and the latent embedding of an input image. This idea stems from the fact that, in most cases, the caption alone cannot capture the full semantics of an image and can lead to poor or vague representations of it. Consequently, our semantic encoder comprises two main components: an image-to-text (I2T) converter and an image encoder.
Image-to-text: The I2T is responsible for translating an image x into a text prompt y. The latter is defined as a sequence containing Y words presented in a specific order:
y = I2T(x) = (y_1,y_2, …, y_Y)
where y_i is the i-th word and it has |y_i| characters.
As image-to-text encoder, we use the well-known vision-language pre-training framework BLIP <cit.>, which can produce accurate captions of a given image.
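As a rough illustration, the I2T stage can be reproduced with a public BLIP captioning checkpoint via Hugging Face Transformers; the checkpoint name and generation length below are our assumptions, not necessarily the exact configuration used here:

# Minimal sketch of the image-to-text (I2T) stage with BLIP.
# Checkpoint name and generation settings are illustrative assumptions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image = Image.open("example.jpg").convert("RGB")   # hypothetical input image x
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)  # caption y = (y_1, ..., y_Y)
print(processor.decode(out[0], skip_special_tokens=True))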
Image encoder: The image encoder ℰ transforms an image x∈ℝ^C × H × W from the RGB space into a latent representation 𝐳 = ℰ(𝐱). The literature offers many image encoders that can be employed; the choice depends on the domain of interest of the final application and may be tailored to different kinds of tasks. In our study, we use a pre-trained encoder from Stable Diffusion v1.5 <cit.>. Inspired by <cit.>, this encoder is part of an encoder-decoder network trained using a combination of a perceptual loss and a patch-based adversarial objective. The latent space learned by this type of model has a much smaller dimensionality than the initial RGB space; despite this, experiments demonstrate that the two are semantically and perceptually equivalent <cit.>.
In our proposed framework, we consider transmitting text prompt y along with latent image embedding z over an additive white Gaussian noise channel (AWGN). The received y and z are independently distorted by a zero-mean Gaussian noise, and the channel conditions are identified by the signal-to-noise ratio (SNR) as SNR = P_signal/P_noise where P_signal and P_noise are average received signal and noise powers, respectively. Note that the aggregate size of y and z is only 2.09% of the original image x in our experiment, underscoring the bandwidth efficiency of our proposed method. Extending this to adaptive transmissions, it could be interesting to send only y under extremely limited bandwidth or poor channel conditions, which is deferred to future studies.
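A minimal sketch of this channel model, abstracting away the 16QAM modulation used in the experiments and injecting zero-mean Gaussian noise at a target SNR (tensor shapes are illustrative):

# AWGN channel sketch: distort a tensor at a given SNR (dB).
import torch

def awgn(signal: torch.Tensor, snr_db: float) -> torch.Tensor:
    p_signal = signal.pow(2).mean()               # average signal power
    p_noise = p_signal / (10 ** (snr_db / 10.0))  # SNR = P_signal / P_noise
    return signal + torch.randn_like(signal) * p_noise.sqrt()

z = torch.randn(1, 4, 64, 64)   # e.g., a latent image embedding
z_prime = awgn(z, snr_db=7.5)   # received, noisy latent z'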
§.§ Data Reconstruction
We equip the receiver with a conditional image generative model. We adopt Stable Diffusion, as it has demonstrated superior abilities in generating images from text. The methodology adopted by Stable Diffusion is based on a Latent Diffusion Model (LDM) <cit.>, a probabilistic model that generates high-quality images by gradually transforming random noise into images. The key innovation of latent diffusion models is that they apply the diffusion process not to the raw pixel values but to an encoded latent representation of the image itself. This process can be conditioned on different signals (text, images, semantic maps, etc.). Commonly, LDMs comprise three modules: an encoder responsible for generating latent vectors, a U-Net devoted to the diffusion denoising process, and a decoder that brings latent vectors back to the image space. The novel idea behind our proposed framework is to distribute these modules between the sender and the receiver: the encoder is used at the sender to encapsulate the image in a latent representation, while the U-Net and the decoder are used to regenerate content at the receiver side.
Following the theory of diffusion models,
the conditioned U-Net ϵ_θ(z_t,t,y) has learned during training how to denoise the latent vector from pure Gaussian noise z_T to z_0 in T sampling steps. To reach this goal, a Markovian forward process is defined which injects noise at different time steps to the original latent vector according to:
𝐳_t = √(α̅_t) 𝐳_0 + √(1-α̅_t) ϵ,  ϵ ∼ 𝒩(0, 𝐈)
where {α̅_t}_t=1^T is the noise schedule and controls the amount of injected noise.
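This forward step can be written compactly in code; the linear beta schedule below is a common choice and an assumption on our part, not necessarily the schedule used by Stable Diffusion:

# Sketch of the forward (noising) process z_t = sqrt(abar_t) z_0 + sqrt(1-abar_t) eps.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # illustrative linear schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # \bar{alpha}_t

def q_sample(z0: torch.Tensor, t: int) -> torch.Tensor:
    eps = torch.randn_like(z0)                   # eps ~ N(0, I)
    return alphas_bar[t].sqrt() * z0 + (1.0 - alphas_bar[t]).sqrt() * eps

z0 = torch.randn(1, 4, 64, 64)                   # latent from the encoder
zt = q_sample(z0, t=500)                         # partially noised latent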
The U-Net ϵ_θ(z_t,t,y) is trained to predict the amount of noise injected at any given time step using the objective function:
ℒ_DM= 𝔼_z,y,ϵ∼𝒩(0,I),t[ ||ϵ - ϵ_θ(z_t,t,y)||_2^2]
To accommodate the conditioning of text prompts, the U-Net is equipped with cross-attention mechanisms. Given text prompt y, CLIP <cit.> text encoder τ_θ is used to produce its textual representation τ_θ(y) that is then injected into the cross-attention layers of the U-Net. In particular, key-value pairs (K, V) are built by projecting the text representation, while the query (Q) is built using the i-th intermediate representation of the U-Net ϵ_θ:
Attention(Q, K, V) = softmax(QK^T/√(d)) V
Q=W_Q×ϕ_i(z_t), K=W_K×τ_θ(y), V= W_V×τ_θ(y)
Here, ϕ_i(z_t) denotes the i-th intermediate representation of the U-Net implementing ϵ_θ. W_Q,W_K,W_V are learnable matrices of parameters.
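The cross-attention layer can be sketched as follows; the dimensions are illustrative and do not match any specific U-Net block:

# Cross-attention sketch: queries from U-Net features phi_i(z_t),
# keys/values from the CLIP text representation tau_theta(y).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    def __init__(self, d_unet: int = 320, d_text: int = 768, d: int = 64):
        super().__init__()
        self.W_Q = nn.Linear(d_unet, d, bias=False)
        self.W_K = nn.Linear(d_text, d, bias=False)
        self.W_V = nn.Linear(d_text, d, bias=False)
        self.d = d

    def forward(self, phi: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        # phi: (B, N, d_unet) U-Net features; tau: (B, L, d_text) text tokens
        Q, K, V = self.W_Q(phi), self.W_K(tau), self.W_V(tau)
        attn = F.softmax(Q @ K.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        return attn @ V

out = CrossAttention()(torch.randn(1, 4096, 320), torch.randn(1, 77, 768))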
In our framework, the receiver acquires from the sender either a noisy version of the text prompt y' or a combination of a noisy text prompt and noisy latent vector (y',z'). In the first case, the system starts the sampling phase from a random latent vector sampled from a normal Gaussian distribution z_T ∼𝒩(0, I). In the second case, it takes the vector z', injects noise on it to obtain a noisy version z'_T according to (<ref>) and starts the generation from it.
There is a substantial and important difference between the two approaches. In the first case, the system needs to regenerate the content from pure noise conditioned on the text prompt. In the second case, we can control the amount of noise injected into z' by setting the number of generative sampling steps to any intermediate number t from 1 to T. Few sampling steps allow for an efficient generation (less time), but the final latent vector z_0 could retain the effects of the noise introduced by the network. Many sampling steps require more computational resources but allow for a more fine-grained generation.
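The second case closely mirrors the image-to-image mode of public diffusers pipelines, where a strength parameter plays the role of the intermediate step t. The sketch below is an approximation of our receiver (the pipeline re-encodes a decoded image rather than consuming z' directly); the checkpoint name and file path are assumptions:

# Receiver-side sketch using diffusers' img2img pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

received_caption = "a red double-decker bus on a city street"  # noisy prompt y'
received_image = Image.open("received.png").convert("RGB")     # decoded from z'

restored = pipe(
    prompt=received_caption,
    image=received_image,
    strength=0.6,            # fraction of the schedule actually denoised
    num_inference_steps=30,  # T = 30, as in our experiments
).images[0]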
After the latent diffusion process, a decoder model 𝒟 takes the latent variable z_0 and upsamples it, bringing the latent variable back to the original RGB space: x̂=𝒟(z_0), x̂∈ℝ^C × H × W.
§ EXPERIMENTAL RESULTS
§.§ Simulation Settings
We use BLIP-large <cit.> as the I2T encoder. We leverage the Stable Diffusion v1.5 image encoder at the sender side, and the denoising U-Net and decoder at the receiver side. We set T=50 denoising steps for the text-only condition and T=30 for the text-and-latent-embedding scenario.
The entire diffusion process is conditioned on the text prompt encoded using CLIP <cit.>. To validate our method, we consider the Flickr 8k dataset <cit.>, which contains 8,092 samples, each with a different shape; we rescale each image to 512×512 for convenience. For text prompt transmission, each character is 8-bit ASCII coded and modulated using 16QAM. For latent embedding transmission, each float number is 64-bit encoded as defined in the IEEE 754-2008 standard and likewise modulated using 16QAM.
The metrics used to assess the quality of generated images are: Structural similarity index measure (SSIM), learned perceptual image patch similarity (LPIPS), Fréchet inception distance (FID), and CLIP score.
SSIM evaluates the distortion rate between x and x̂ at the pixel level. LPIPS and FID assess the perceptual similarity of intended and regenerated images using additional neural networks. CLIP score computes the correlation between texts and images.
All the metrics are averaged over 100 simulation runs.
§.§ Experiments
We begin our experiments by investigating the robustness of our method under adverse channel conditions. As illustrated in Fig. <ref>, when the channel introduces heavy noise, both methods perform poorly; however, the joint text and latent embedding strategy produces slightly better images.
On the contrary, when network conditions improve, our method clearly outperforms the text caption-only strategy: the SSIM metric improves from 0.1387 to 0.3321 at SNR = 10 dB. In addition, there is a marked improvement in the perceptual metrics LPIPS and FID, while the correlation between text and image (CLIP score) is left untouched.
Visual results can be appreciated in Fig. <ref>, where we display three random samples with SNR = 7.5 dB.
We continue the experimental phase by comparing the performance of the two approaches using different sampling steps. In Fig. <ref>, we compare the generation of three random images at different timesteps setting SNR=7.5 dB.
It is interesting to see the role of the noise introduced by the channel. In the text and embedding scenario, if T=0, the entire system collapses to a VAE acting as a transceiver, as no diffusion steps are performed; the noise introduced by the channel then makes the regeneration phase at the receiver side difficult. On the contrary, when T increases, the contribution of the diffusion model becomes clearer, proving its crucial role in eliminating this kind of noise. However, when T grows too much, the generated images lose fidelity with respect to the original ones. This is because the cross-attention mechanism in (<ref>) introduces more and more text conditioning during the regeneration process. Indeed, increased sampling steps correspond to heightened text conditioning and diminished latent embedding guidance, meaning that the generation is less faithful to the image embedding. For this reason, in our work, we adopt T=30 sampling steps.
We conclude our experimental investigation by analyzing the dimensionality of the data exchanged between sender and receiver.
Tab. <ref> reveals that the text-only strategy is the most lightweight approach, since at most 77 tokens can be exchanged. This limitation comes from the max_tokens hyperparameter of the CLIP text encoder. Sending the latent vector requires more communication resources (bandwidth and latency); however, despite the larger size, using the image embedding to regenerate the content allows for a more accurate generation. We can send both text and latent embeddings when the network conditions are good, while under bad conditions we can send only the text, which retains much of the initial semantics.
§ CONCLUSION
In this paper, we propose an innovative framework for semantic image-to-image communication. We identify as the semantic vector to be conveyed over a noisy channel the textual caption along with a latent representation of the input image. We leverage the generation power of an LDM to regenerate images that are semantically and perceptually aligned at the receiver side. Experiments validate our approach, demonstrating a substantial reduction of the LPIPS metric when both text and latent embeddings are used to regenerate the content. Future research might improve the efficiency of our approach by reducing the dimensionality of latent embeddings and exploiting large language models (LLMs) to compress the characters exchanged between sender and receiver, as suggested by <cit.>. It could also be interesting to extend this work to other types of multimedia content such as audio, speech, and video.
http://arxiv.org/abs/2405.10004v1 | 20240516114435 | ROCOv2: Radiology Objects in COntext Version 2, an Updated Multimodal Image Dataset | [
"Johannes Rückert",
"Louise Bloch",
"Raphael Brüngel",
"Ahmad Idrissi-Yaghir",
"Henning Schäfer",
"Cynthia S. Schmidt",
"Sven Koitka",
"Obioma Pelka",
"Asma Ben Abacha",
"Alba G. Seco de Herrera",
"Henning Müller",
"Peter A. Horn",
"Felix Nensa",
"Christoph M. Friedrich"
] | eess.IV | [
"eess.IV",
"cs.CV",
"cs.LG"
] |
ROCOv2: Radiology Objects in COntext Version 2, an Updated Multimodal Image Dataset
Johannes Rückert et al.
May 20, 2024
===================
§ BACKGROUND & SUMMARY
Recent years have seen tremendous progress in medical imaging. The advent of deep learning techniques has enabled the development of sophisticated models for image analysis tasks. Multimodal image datasets play a crucial role in the development and validation of these models. One such dataset is Radiology Objects in COntext (ROCO) <cit.>, which has enabled researchers to develop models for a wide range of tasks, including concept detection, caption generation, and image-text retrieval.
The first version of the ROCO dataset was introduced by Pelka et al. <cit.> in 2018. It includes image-caption pairs from two classes: Radiology images from multiple imaging modalities and Out-of-Class images, such as synthetic radiology figures, digital art, and portraits, from peer-reviewed publications in the open-access subset of the biomedical literature database PubMed Central (PMC) <cit.>.
The dataset contains 81,825 radiology images and 6127 out-of-class images. In addition to the images and their captions, the dataset provides keywords, Unified Medical Language System® (UMLS®) Semantic Types (SemTypes), and UMLS Concept Unique Identifiers (CUIs) for each image. This information makes the dataset suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using the UMLS concepts provided with each image, e.g., to develop systems supporting structured medical reporting. The dataset is available on GitHub (available at <https://github.com/razorx89/roco-dataset>, accessed 2024-03-12) in the form of links to the publication and scripts to download and extract the images from them.
The ROCO dataset has been used in the medical caption tasks <cit.> at the Image Retrieval and Classification Lab of the Conference and Labs of the Evaluation Forum (ImageCLEF) <cit.>.
ROCOv2 is the result of more than four years of updates and improvements to the original ROCO dataset. Due to the focus on radiological images, ROCOv2 does not include out-of-class images like ROCO, and to allow direct distribution of the images, only images from CC BY licensed articles (including CC BY-NC, but excluding CC BY-ND and CC BY-SA) are included.
Other changes include manually curated concepts, e.g., for modality of all images, anatomy and directionality of X-ray images, and improved concept extraction with the Medical Concept Annotation Toolkit v1.10.0 (MedCAT) <cit.>, which is based on a newer version of the UMLS database and uses word embeddings instead of QuickUMLS <cit.> that relies on direct dictionary matches. In addition, better concept filtering has been introduced.
The ROCOv2 dataset serves as a valuable resource for various applications and use cases in the medical domain, as it contains a vast amount of biomedical knowledge stored in the literature. One of the primary applications of this dataset is to train and evaluate models across different modalities for tasks such as image caption generation. By leveraging the multimodal nature of the dataset, researchers can develop models that accurately describe the content of radiological images, facilitating better understanding and communication of medical findings.
Furthermore, the ROCOv2 dataset can be utilized to build and train an efficient image retrieval system specifically tailored for the medical domain. Such a system would allow healthcare professionals to quickly search for relevant radiological images based on specific queries or similar case studies, enhancing the decision-making process and enabling more informed patient care. The image-caption pairs available in the dataset provide a rich foundation for training these retrieval models, ensuring accurate and relevant results. This can also be further extended to build a multimodal retrieval augmented generation (RAG) system that can be used in tasks such as generating detailed medical reports, or answering complex clinical questions.
Multimodal RAG also opens up additional possibilities beyond image retrieval. By combining the visual information from radiological images with the textual data from captions and associated medical literature, generative AI models can be trained or fine-tuned to produce more comprehensive and context-aware outputs. This approach can lead to the generation of synthetic data, which is particularly useful in cases where real-world medical data is scarce or difficult to obtain. The generated data can be used to augment existing datasets, improve model robustness, and support further research and development in the field of medical AI.
All sources of the dataset are openly available as part of the PMC Open Access Subset at the time of the publication of the dataset.
Summarizing the main contributions of this work:
* Dataset of 79,789 radiological images with associated captions and medical concepts
* Possible use cases include training of image captioning, image retrieval and pre-training models
* Concepts were automatically generated from the captions, and combined with manually curated concepts for modality (all images), body region (X-ray only), and directionality (X-ray only)
* Images and captions were extracted from openly available publications with CC BY licenses in the PMC Open Access Subset
§ RELATED WORK
Since its initial release, the ROCO dataset has been used as the foundation for generating training and test data for multiple iterations of the ImageCLEFmedical Caption tasks, up to and including the most recent edition in 2023. Beyond these tasks, the ROCO dataset has proven to be a valuable resource for medical imaging research, resulting in its inclusion in several studies over the years. For instance, Eslami et al. <cit.> investigated the effectiveness of Contrastive Language-Image Pre-training (CLIP) <cit.> for the task of Medical Visual Question Answering (MedVQA) by leveraging the ROCO dataset to fine-tune CLIP for the medical domain. They chose the ROCO dataset for training due to its comprehensive collection, which includes various imaging modalities such as ultrasound, X-ray, PET, CT, MRI, and angiography from different human body regions, such as the head and pelvis. Their research resulted in the creation of PubMedCLIP, a specialized vision encoder that outperformed the general CLIP on two MedVQA benchmarks.
In addition to the ROCO dataset, several other datasets containing both image and text data have been published and used in medical imaging research.
One of the most notable examples is the MIMIC-CXR <cit.> dataset. MIMIC-CXR is a large, publicly available collection of thorax radiology images paired with semi-structured free-text reports detailing radiological findings. The dataset includes 227,835 imaging studies from 65,379 patients, resulting in 377,110 images. Another dataset which also focuses on chest X-rays is the Open-I Indiana Chest X-ray collection <cit.>. The dataset consists of 3996 de-identified radiology reports and 8121 associated images. PADCHEST <cit.> is a large dataset consisting of more than 160,000 chest X-ray images and associated reports from 67,000 patients. The reports are annotated with 174 different radiographic findings, 19 differential diagnoses, and 104 anatomical locations, hierarchically organized and mapped to standard UMLS <cit.> terminology.
Compared to this work, MIMIC-CXR, Open-I Indiana Chest X-ray, and PADCHEST focus only on chest X-rays, whereas ROCOv2 includes a wide range of anatomical regions, medical concepts, and modalities.
In another work, Subramanian et al. <cit.> introduced MedICaT, a dataset of medical images, captions, and textual references. The dataset contains 217,060 images sourced from 131,410 open-access biomedical papers featuring captions and inline references for 74% of the figures. Additionally, manual annotations including bounding boxes for sub-figures and their corresponding subcaptions, are provided for a subset of 2069 figures. Based on the dataset, the authors introduced the task of aligning subfigures with their corresponding subcaptions in compound figures and highlighted the valuable role of inline references in facilitating image-text matching. In comparison to the ROCOv2 dataset, MedICaT contains images distributed under the CC BY-ND and CC BY-SA licences which prohibit the distribution of processed images or require the transfer under the same license.
Recently, Lin et al. <cit.> proposed PMC-CLIP, a pre-trained model that uses biomedical documents for contrastive language-image pre-training based on PMC-OA, a biomedical dataset of 1.6 million image-caption pairs collected from the Open Access subset of PMC. The dataset covers various modalities and diseases, with the majority of the image annotation samples aligned at a fine-grained level, i.e., sub-figure and subcaption. The PMC-CLIP model achieves state-of-the-art results on several downstream tasks, including image-text retrieval on ROCO, MedMNIST <cit.> image classification, and medical VQA. To focus the PMC-OA dataset on biomedical images, filtering based on a keyword search and a deep learning-based classification model is used. In contrast, ROCOv2 uses manual validation as a filtering step to achieve a high-quality radiological image dataset. Similar to the MedICaT dataset, PMC-OA contains images distributed under the CC BY-ND and CC BY-SA licenses, which are excluded from ROCOv2, so that the images of the dataset can be directly distributed.
Expanding on these datasets, the PMC-VQA <cit.> dataset has recently been introduced with a focus on the MedVQA task. The dataset contains 226,946 VQA pairs and 149,075 images, covering various medical modalities and diseases. It provides a comprehensive basis for the development and evaluation of MedVQA models. Data generation started with 381k image-caption pairs from the PMC-OA dataset. These captions were used with ChatGPT to generate five question-answer pairs per caption, which then underwent a filtering process. Experiments with models trained on the PMC-VQA dataset have demonstrated superior performance on established benchmarks such as the Visual Question Answering in Radiology (VQA-RAD) <cit.> and Semantically-Labeled Knowledge-Enhanced (SLAKE) <cit.> datasets. In addition, the authors proposed a manually verified test set that is more challenging and reflects the complexity of the real world. As the PMC-VQA dataset is based on the PMC-OA dataset, the same drawbacks apply, including no manual filtering and no filtering based on licences.
Leveraging the integration of visual and linguistic elements in medical datasets, Moor et al. <cit.> proposed Med-Flamingo, a multimodal few-shot learner adapted to the medical domain. It is based on OpenFlamingo-9B <cit.> and has been further pre-trained on paired and linked medical image-text data from publications and textbooks. Its unique strength lies in generative MedVQA, especially for open-ended questions similar to United States Medical Licensing Examination (USMLE) style problems. It has demonstrated its effectiveness by improving performance in generative MedVQA by up to 20% on clinician ratings. The model was fine-tuned using the PMC-OA dataset. This dataset and the related problems have already been discussed.
In summary, while existing datasets such as MIMIC-CXR, PADCHEST, and PMC-OA have contributed to the field of medical imaging research, they have certain limitations. These datasets either focus on specific anatomical regions (e.g., chest X-rays), have license restrictions, or do not have manual validation. ROCOv2 aims to address some of these limitations by providing a diverse, manually validated dataset covering a wide range of anatomical regions, medical concepts, and modalities. In addition, by including only images with permissive licenses, ROCOv2 allows for the distribution of the dataset, facilitating its use in various research applications.
§ METHODS
§.§ Dataset Creation
Figure <ref> shows a schematic overview of the dataset creation workflow described below.
The first step in creating the ROCOv2 dataset was to download the full PMC Open Access Subset via FTP, including all archives added until 2022-10-27. 4,798,923 archives, occupying 22 TB of disk space, were downloaded in this manner.
After extracting the archives, which include the PDF of the paper as well as any images contained in the paper and an XML representation of the paper, the more than 16 million extracted images (16,324,613 in total) were run through two binary classification models: the first filters for non-compound images, while the second filters for radiological images. The models are part of the original ROCO workflow described in <cit.> and achieved accuracies of about 90% and 98.6%, respectively.
After this filtering step, 188,537 images (102,807 articles) were left, which were further reduced to 119,140 images (67,357 articles) by filtering out images from papers not licensed under CC BY or CC BY-NC as well as images which are subject to copyright of other commercial organizations or individuals, and by removing 2056 duplicates identified using AntiDupl v2.3.10 (available at <https://github.com/ermig1979/AntiDupl/>, accessed 2023-11-10).
From the remaining images, the new validation and test sets were created using 21,545 images from 2021 and 2022 that were not previously used in the ImageCLEFmedical Caption datasets. They were manually annotated for modality (angiography, CT, MRI, PET, ultrasound, X-ray, and combined modalities), Image Retrieval in Medical Applications (IRMA) <cit.> body region (X-ray only), and directionality (X-ray only) and combined with 34,900 images that were part of the original ROCO dataset and 28,086 images that had been used in previous ImageCLEFmedical Caption datasets. Stratified random sampling based on the manually curated concepts was used to divide the images into validation and test sets, and generated concepts that did not appear in the training set were removed from the validation and test sets.
The resulting 84,530 images (46,904 articles) were filtered for valid captions.
1528 images with non-English captions were removed. In addition, very short captions without relevant information (e.g., “Figure 1”) were removed, resulting in a final dataset of 79,789 images (44,975 articles), with 59,958 images in the training set, 9904 images in the validation set, and 9927 images in the test set with 1947 unique CUIs overall, 1947 in the training set, 1760 in the validation set and 1754 in the test set. The detailed labeling and concept generation workflow is described in the next section. Compared to the dataset used in the ImageCLEFmedical Caption 2023 task, approximately 1500 non-radiological images were removed, further improving the quality of the dataset.
Of the 81,825 radiology images in the original ROCO dataset, 33,645 were incorporated into ROCOv2 <cit.>, with the rest being excluded due to their license. They were combined with 46,144 images which have been used in ImageCLEFmedical Caption challenge datasets from 2021 to 2023.
§.§ Caption Processing and Concept Extraction
To extract concepts from the captions, several pre-processing and filtering steps were performed. First, captions in languages other than English were excluded to focus the analysis on English-language concepts. This was done using the fastText <cit.> language identification model. Captions identified as non-English with a confidence level greater than 45% were excluded from the dataset. To reduce the risk of erroneously removing English captions, any caption identified as non-English with a confidence level of less than 45% was retained under the assumption that it was likely written in English. Of all the captions, 1528 were identified as non-English; French captions were the most common with a count of 1413, followed by Portuguese and Spanish with 55 and 48 captions, respectively. Next, URLs within the captions were identified and removed, as they often do not provide relevant information for concept extraction. In addition, some captions were identified as consisting entirely of LaTeX code, and these were also removed from the dataset. Empty captions and those containing minimal information, such as “xxx”, were discarded during pre-processing.
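A minimal sketch of this language filter, assuming the publicly distributed lid.176.bin identification model:

# Language filter sketch with fastText; the 45% rule mirrors the text above.
import fasttext

model = fasttext.load_model("lid.176.bin")  # download from fasttext.cc

def is_non_english(caption: str, threshold: float = 0.45) -> bool:
    labels, probs = model.predict(caption.replace("\n", " "))
    lang = labels[0].replace("__label__", "")
    return lang != "en" and float(probs[0]) > threshold

captions = ["Radiographie du thorax de face.", "Axial CT of the abdomen."]
flagged = [c for c in captions if is_non_english(c)]  # flags the French caption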
After the initial pre-processing, the remaining captions were further processed to extract relevant concepts using the Medical Concept Annotation Toolkit (MedCAT) <cit.> framework. MedCAT is a robust tool specifically designed for extracting and linking biomedical concepts from unstructured text. It was trained on the MIMIC-III <cit.> dataset (as of 2022-03-15) and links to Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) IDs. The SNOMED CT IDs were then mapped to UMLS2022AB release CUIs and semantic types (TUIs), which were then used for concept extraction and filtering.
During concept extraction, a frequency cutoff was implemented that retained only concepts that exceeded a frequency threshold of 10. In this way, low-frequency concepts, of which only a few examples were available, were effectively filtered out. By linking concepts to the UMLS, associated semantic types were filtered to focus on concepts that are likely to be visually observable and interpretable in the images. For example, concepts with the associated UMLS semantic type T029 (Body Location or Region) or T060 (Diagnostic Procedure) are relevant, while concepts of semantic type T054 (Social Behavior) cannot be derived from the image through a model. Specific concept filters were then manually applied to exclude UMLS concepts that could not be directly associated with the image content, such as temporal or qualitative aspects of certain concepts. Blacklisted concepts often contain qualifiers that would distract from the actual interest, and would also introduce bias since qualifiers are used in highly individual and variable ways by the original authors. Entity linking systems sometimes tend to incorrectly link concepts with ambiguous synonyms; e.g., C0994894 (Patch Dosage Form) may be linked if the caption refers to a region described as patchy. Frequently occurring concepts of this kind were manually mapped to the correct CUI.
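A hedged sketch of the extraction step with MedCAT; the model-pack path is hypothetical, and the two-element TUI allow-list only stands in for the full filter described above:

# Concept extraction sketch with MedCAT (v1.x API).
from medcat.cat import CAT

cat = CAT.load_model_pack("medcat_snomed_modelpack.zip")  # hypothetical path
ALLOWED_TUIS = {"T029", "T060"}  # Body Location or Region; Diagnostic Procedure

result = cat.get_entities("Axial CT of the abdomen showing a hepatic lesion.")
cuis = {
    ent["cui"]
    for ent in result["entities"].values()
    if set(ent.get("type_ids", [])) & ALLOWED_TUIS
}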
In addition to the described automatic concept extraction, further manual creation and validation of basal concepts were performed. As in the original ROCO dataset <cit.>, this mainly focused on the seven supported image modalities (angiography, combined modalities (e.g. PET/CT), CT, MRI, PET, ultrasound, and X-ray). However, for the ROCOv2 dataset, additional concepts were introduced for the X-ray modality, focusing on: (i) the displayed or described body region, and (ii) the directionality on which a projection was based. The body region classification was based on the IRMA classification <cit.>, which distinguishes eight different regions (abdomen, breast, chest, cranium, lower extremity, pelvis, spine, and upper extremity). The directionality classification was based on a reduced set of the most commonly used directions (coronal anteroposterior, coronal posteroanterior, sagittal, and transversal), but was introduced for experimental purposes only.
The reason for the manual creation of concepts is that, in many cases, the captions do not explicitly provide this information but only imply it (e.g., "T2 weighted" implies the MRI modality, "cholangiography" implies the abdominal region).
The manual concept creation pipeline involved an initial manual classification of subsets of tens of thousands (modality, directionality) or several thousand (body region) images. This was done by two annotators for image modalities and X-ray body regions, and a single annotator for X-ray directionalities. The respective annotation guidelines followed by the annotators are provided in a distilled form as supplementary material. Deep learning image classification models were then trained on these subsets to pseudo-label the remaining images as a preliminary sorting method. These were then manually curated again to resolve errors in the classification models. Finally, the quality of the manually created concepts was validated by a radiologist on representative subsets for each category, with results described in the technical validation section.
The concepts from both manual and automatic extraction were combined, with priority given to the manually curated concepts. Automatically extracted modality concepts were included only for combined modalities (DRCO). For the manually identified anatomy and directionality concepts of X-rays, no check against conflicting concepts from the automatic extraction was performed before integration.
§ DATA RECORDS
The ROCOv2 dataset files are available on Zenodo <cit.>. It contains images, captions, and concepts for training, validation, and test splits, as well as image license information; a minimal loading sketch follows the file list.
* “{train,valid,test}_images.zip”: JPEG images of various sizes taken directly from PMC with the filename format .
* “{train,valid,test}_captions.csv”: Two-column comma-separated value (CSV) files with image filenames and corresponding captions (escaped with double quotes if necessary).
* “{train,valid,test}_concepts.csv”: Two-column CSV files with image filenames and corresponding medical concept CUIs separated by semicolons. Includes manually curated and automatically generated concepts.
* “{train,valid,test}_concepts_manual.csv”: Two-column CSV files with image filenames and corresponding medical concept CUIs separated by semicolons. Includes manually curated concepts only.
* “cui_mapping.csv”: Two-column CSV file with CUIs and their canonical name.
* “license_information.csv”: Four-column CSV file with image filename, PMCID, CC BY attribution string, and PMC article link.
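A minimal loading sketch with pandas, assuming the archives have been unpacked into the working directory (the column names are assumptions based on the descriptions above):

# Load the training split of ROCOv2 with pandas.
import pandas as pd

captions = pd.read_csv("train_captions.csv")   # assumed columns: ID, Caption
concepts = pd.read_csv("train_concepts.csv")   # assumed columns: ID, CUIs
mapping = pd.read_csv("cui_mapping.csv")       # CUI -> canonical name

concepts["CUIs"] = concepts["CUIs"].str.split(";")  # semicolon-separated CUIs
train = captions.merge(concepts, on="ID")
print(train.head())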
A dataset analysis is performed as part of the technical validation in the following section.
§ TECHNICAL VALIDATION
The ROCOv2 <cit.> dataset is based on the dataset used in the medical caption task <cit.> at the ImageCLEF 2023 <cit.>, where participants had access to the training and validation sets after signing a user agreement. ImageCLEF 2023 consists of the ImageCLEFmedical, ImageCLEFfusion, and ImageCLEFaware labs, where the ImageCLEFmedical lab is divided into the subtasks MEDIQA-Sum (natural language semantic retrieval), Caption, GANs (medical image generation) and MedVQA-GI (gastrointestinal visual question answering). The ImageCLEFmedical Caption task consists of two subtasks: concept detection and caption prediction.
All results are also described in detail in the overview paper <cit.>.
Since several improvements were made to the dataset compared to the one used in ImageCLEFmedical Caption 2023, such as the removal of approximately 1500 non-radiological images and the addition of approximately 3000 missing manually curated concepts, baseline results are reported separately for both datasets.
The results for both subtasks show that the baseline models achieve similar results on the ImageCLEF dataset as the challenge participants while performing better on the ROCOv2 <cit.> dataset, showing the improved dataset quality. The baseline results, along with several years of competitive and improving scores for both subtasks in the context of the ImageCLEFmedical Caption challenges show the suitability of the dataset for training models for concept detection and caption prediction.
§.§ Concept Detection
For concept detection, participants are asked to predict a set of concepts defined by UMLS CUIs <cit.> based on the visual information provided by the radiology images, which can help in the development of systems supporting structured medical reporting. The balanced precision and recall trade-off was measured in terms of sample-averaged F1-scores, with a separate F1-score calculated for manually curated concepts.
In the ImageCLEFmedical 2023 Caption challenge, the best team achieved an F1-score of 0.5223 using an ensemble of three multi-label classification models with different architectures <cit.>. Additional results are shown in Table <ref>.
As previously mentioned, the ROCOv2 <cit.> dataset is an improved version of the dataset provided in the ImageCLEFmedical Caption task. In order to compare the challenge results to the results of the ROCOv2 dataset, two baseline models, namely an EfficientNet-B0 <cit.> and an EfficientNetv2-s <cit.>, were additionally trained and results for both datasets are given.
The implementation was developed using PyTorch v2.0.1 <cit.>, and all experiments were run on an NVIDIA® DGX-1 (available at <https://www.nvidia.com/en-gb/data-center/dgx-1/>, accessed 2023-11-10) supercomputer, with NVIDIA® V100 (available at <https://www.nvidia.com/en-us/data-center/v100/>, accessed 2023-11-10) Graphical Processing Units (GPUs) containing 16 GB of memory. The execution environment was an NVIDIA®-optimized (available at <https://github.com/NVIDIA/nvidia-docker>, accessed 2023-11-10) Docker <cit.> container, running a Deepo (available at <https://github.com/ufoym/deepo>, accessed 2023-11-10) image. All experiments were executed using a single GPU.
For both models, a grid search was performed on the validation dataset for hyperparameter tuning to identify the best combination of optimizer and learning rate. The values [1e-1,1e-2,...,1e-5] are used as candidates for the learning rate. In addition, the Adam <cit.>, Stochastic Gradient Descent (SGD), and Root Mean Square Propagation (RMSProp) optimizers were tested. After hyperparameter tuning, the final models were trained on the entire training and validation dataset. To train both models, the training augmentation pipeline includes loading the images with an image size of 1.25 times the model image size, random horizontal and vertical flipping with a probability of 0.5 each, random cropping to the image size of the model, and image normalization. The validation and test augmentation pipelines include loading the images with an image size of 1.25 times the model image size, center cropping to the image size of the model and image normalization.
The loss function used is a multi-label soft margin loss. A sigmoid activation function is used for all model outputs with a threshold of 0.5. All models are trained using mixed precision <cit.> for 20 epochs. For the remaining hyperparameters, the default values in PyTorch are used.
The model based on the EfficientNet-B0 architecture was pre-trained on the ImageNet-1k dataset <cit.>. This model was trained with a batch size of 256, a drop rate of 0.2, and an image size of 224. During hyperparameter tuning, the Adam optimizer, trained with a learning rate of 1e-3, achieved the best sample-averaged F1-score on the validation dataset for both datasets (ImageCLEFmedical Caption and ROCOv2 <cit.>). The final model achieved a sample-averaged F1-score of 0.5099 (secondary sample-averaged F1-score: 0.9309) for the ImageCLEFmedical Caption test set and a sample-averaged F1-score of 0.5811 (secondary sample-averaged F1-score: 0.9353) for the ROCOv2 test set. The EfficientNetv2-s model was pre-trained on the ImageNet-21k dataset <cit.>. This model was trained with a batch size of 92, a drop rate of 0.2, and an image size of 300. During hyperparameter tuning, the RMSProp optimizer trained with a learning rate of 1e-4 achieved the best F1-score on the validation dataset for both datasets (ImageCLEFmedical Caption and ROCOv2 <cit.>). The final model achieved a sample-averaged F1-score of 0.5215 (secondary sample-averaged F1-score: 0.9407) for the ImageCLEFmedical Caption test set and a sample-averaged F1-score of 0.5925 (secondary sample-averaged F1-score: 0.9430) for the ROCOv2 test set.
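A hedged sketch of the multi-label baseline setup (data loading and the training loop are omitted; the hyperparameters follow the text above, while everything else is a plain PyTorch/torchvision assumption):

# Multi-label concept detection baseline sketch.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

NUM_CONCEPTS = 1947  # unique CUIs in the training set
model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CONCEPTS)

criterion = nn.MultiLabelSoftMarginLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def predict(images: torch.Tensor) -> torch.Tensor:
    logits = model(images)                        # sigmoid outputs, 0.5 threshold
    return (torch.sigmoid(logits) > 0.5).float()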
§.§ Caption Prediction
The caption prediction aims to automatically generate captions for the radiology images provided. In ImageCLEFmedical Caption, the performance of caption prediction is evaluated based on BERTScore <cit.>, which is a metric that computes a similarity score for each token in the generated text with each token in the reference text. Several other metrics were also calculated and published, to illustrate how difficult the evaluation of caption similarity is: First, the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) <cit.> score was adopted as a secondary metric that counts the number of overlapping units such as n-grams, word sequences, and word pairs between the generated text and the reference. In addition to ROUGE, the Metric for Evaluation of Translation with Explicit ORdering (METEOR) <cit.> was explored, which is a metric that evaluates the generated text by aligning it to reference and calculating a sentence-level similarity score. Furthermore, the Consensus-based Image Description Evaluation (CIDEr) <cit.> metric was also adopted. CIDEr is an automatic evaluation metric that calculates the weights of n-grams in the generated text, and the reference text based on term frequency and inverse document frequency (TF-IDF) and then compares them based on cosine similarity.
Another metric used is the BiLingual Evaluation Understudy (BLEU) score <cit.>, which is a geometric mean of n-gram scores from 1 to 4. For this task, the focus was on the BLEU-1 score, which takes into account unigram precision.
Bilingual Evaluation Understudy with Representations from Transformers (BLEURT) <cit.> is specifically designed to evaluate natural language generation in English. It uses a pre-trained model that has been fine-tuned to emulate human judgments about the quality of the generated text.
CLIPScore <cit.> is an innovative metric that diverges from the traditional reference-based evaluations of image captions. Instead, it aligns with the human approach of evaluating caption quality without references by evaluating the alignment between text and image content.
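A sketch of how the primary and one secondary metric can be computed with the bert-score and evaluate packages; the default model choices inside these packages are assumptions and may differ from the exact challenge configuration:

# Caption metric sketch: BERTScore (primary) and ROUGE (secondary).
import evaluate
from bert_score import score

refs = ["Axial CT of the abdomen showing a hepatic lesion."]
hyps = ["CT scan of the abdomen with a lesion in the liver."]

_, _, f1 = score(hyps, refs, lang="en")                    # BERTScore P, R, F1
rouge = evaluate.load("rouge").compute(predictions=hyps, references=refs)
print(f"BERTScore F1: {f1.mean():.4f}, ROUGE-1: {rouge['rouge1']:.4f}")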
For the caption prediction subtask at ImageCLEFmedical 2023, the best team achieved a BERTScore of 0.6413 with an encoder-decoder framework with subsequent reinforcement learning <cit.>. Additional results are shown in Tables <ref> and <ref>.
As a baseline for the caption prediction task, a model leveraging a vision encoder-decoder architecture <cit.> was employed. The encoder component was instantiated with the base-sized (available at <https://huggingface.co/google/vit-base-patch16-224-in21k>, accessed 2023-11-10) Vision Transformer (ViT) model <cit.>. This model is based on the transformer architecture and was pre-trained on the ImageNet-21k dataset <cit.> at resolution 224x224. To initialize the decoder, the BioMedLM (available at <https://huggingface.co/stanford-crfm/BioMedLM>, accessed 2023-11-10) model was used, which is a language model based on the GPT2 architecture <cit.>. This decoder-only transformer-based model has 2.7 billion parameters and a maximum context length of 1024 tokens. The BioMedLM training data is derived from the PubMed Abstracts and PMC sections of the Pile dataset, which contains approximately 50 billion tokens from 16 million abstracts and 5 million full-text articles from the biomedical literature. This training corpus provides BioMedLM with a strong understanding of biomedical language, making it uniquely suited for tasks in the biomedical domain. The model was trained using the Huggingface Transformers library for two epochs with a batch size of two and a gradient accumulation of two. The maximum input sequence length was set to 128 tokens. As an optimizer, the Adafactor optimizer was chosen due to its efficiency in memory usage and its adaptability in adjusting learning rates. In addition, fp16 mixed precision was used during training.
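A hedged sketch of how such an encoder-decoder can be tied together with the Transformers library; whether the BioMedLM checkpoint loads out of the box as a cross-attention decoder is an assumption on our part (a plain gpt2 checkpoint is a drop-in fallback), and the token-id wiring below is illustrative:

# Vision encoder-decoder sketch: ViT encoder + GPT-style biomedical decoder.
from transformers import AutoTokenizer, VisionEncoderDecoderModel, ViTImageProcessor

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # ViT encoder
    "stanford-crfm/BioMedLM",             # GPT2-architecture decoder (assumed loadable)
)
tokenizer = AutoTokenizer.from_pretrained("stanford-crfm/BioMedLM")
image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Illustrative special-token wiring; the exact ids depend on the tokenizer.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id or tokenizer.eos_token_id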
The baseline model was first trained and evaluated on the ImageCLEFmedical Caption dataset. During this initial evaluation, the model demonstrated good performance, achieving a BERTScore of 0.6217 and a ROUGE score of 0.2318. It also obtained a CLIP score of 0.8172, a BLEURT score of 0.3093, a BLEU score of 0.1821, a METEOR score of 0.0813, and a CIDEr score of 0.1968. The same model architecture was then trained and evaluated on the ROCOv2 <cit.> dataset. The model achieved a BERTScore of 0.6241 and a ROUGE score of 0.2325, along with a CLIP score of 0.8212, a BLEURT score of 0.3131, a BLEU score of 0.1836, a METEOR score of 0.0830, and a CIDEr score of 0.2026.
§.§ Dataset Analysis
Table <ref> shows the descriptive statistics derived from the dataset. On average, each caption contains about 21 words but can range from one word to 848 words. Each article contains about 1.76 image-caption pairs on average, with some articles having up to 28 image-caption pairs. In addition, on average, captions are annotated with about 3.36 UMLS concepts, with some captions having as many as 28 concepts.
The histogram in Figure <ref> shows the distribution of the number of concepts per image-caption pair. All captions have at least one concept.
Table <ref> lists the ten most common UMLS concepts found in the dataset, excluding modality-related concepts that were set manually.
The ten most common semantic types (TUIs) are outlined in Table <ref>. These types provide insights into various medical concepts, ranging from diagnostic procedures to neoplastic processes.
§.§ Annotator-Radiologist Evaluation of Manual Concepts
To assess the overall quality and validity of the manually created concepts, an evaluation of the agreement between annotators and a radiologist was performed. To determine this, a representative subset of the dataset was created for each manual concept category: (i) modality of all images, (ii) displayed body region of X-ray modality images, and (iii) directionality of X-ray modality images. Representativeness was ensured by stratifying the corresponding labels. A radiologist then manually labeled each subset, independently going through the same process as the annotators and following the same labeling guidelines for each category.
Subsets were extracted from an internal, raw version of the original ImageCLEFmedical Caption 2023 dataset that additionally included dedicated labels for later filtering and refinement purposes (e.g., out-of-class images, mixed-class images, uncertainty regarding distinct label assignment). Their exact meaning for each category is described in the annotation guidelines in the supplementary material. This was done so as not to bias the evaluation by applying premature filtering that might have excluded images that would have been valid in a radiologist's eye.
To quantify inter-annotator agreement, confusion matrices based on annotator and radiologist labels were created and Cohen's κ <cit.> analyses were performed. Corresponding normalized and absolute confusion matrices for modalities, body regions, and directionalities are shown in Figures <ref>, <ref>, <ref>, <ref>, <ref> and <ref>. The results of Cohen's κ analysis are presented in Table <ref>.
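The agreement statistics can be reproduced with scikit-learn; the label lists below are illustrative stand-ins for the paired annotator/radiologist labels:

# Inter-annotator agreement sketch: Cohen's kappa and a normalized confusion matrix.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

annotator   = ["CT", "MRI", "X-ray", "CT", "Ultrasound"]   # illustrative labels
radiologist = ["CT", "MRI", "X-ray", "MRI", "Ultrasound"]

kappa = cohen_kappa_score(annotator, radiologist)
cm = confusion_matrix(annotator, radiologist, normalize="true")
print(f"Cohen's kappa: {kappa:.3f}")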
A Cohen's κ within an interval of [0.81, 1.00] can be interpreted as almost perfect, and within a range of [0.41, 0.60] as moderate agreement <cit.>. Thus, high values of κ=0.928 for image modalities and κ=0.886 for body regions indicate trustworthiness of manually curated concepts for both categories. However, a moderate value of κ=0.557 for directionalities highlights their experimental character. Identified reasons for decreased agreement are outlined in the Limitations section.
§ LIMITATIONS
The entire dataset is sourced from the Open Access Subset of the PMC database. This naturally introduces a bias in terms of the selected images on the one hand, and quality issues inherent to the PMC on the other hand.
One example of such quality issues is a lower image quality in the PMC archive compared to the published article. Another very rare, but often impossible to manually correct issue is the occasional mix-up of images, where images are reproduced with wrong captions, sometimes taken from a different publication.
A fundamental limitation is represented by faulty or fuzzy original captions that serve as ground truth. This became apparent during the process of manual concept creation and evaluation, where the annotators and radiologist involved reported various discrepancies. Affected captions involved, e.g., modality confusion where obvious CT images were labeled as MRI images, or ambiguous statements where pure CT images were labeled as PET/CT images because they were captured with a combined scanner unit. Another common issue was a lack of detail and context in the original captions. This included, for example, not specifying the modality or, in the case of follow-up images in a series, generally no context on the region depicted. Thus, samples may lack a sufficient set of concepts that would be needed to comprehend the contents and context of an image.
Due to the inherent imbalance in modality distribution, certain modalities such as positron emission tomography or combined modalities as per ROCO definition are relatively rare. Although this distribution reflects the rarity of these modalities within publications in general, it should be taken into account if the dataset is to be used in the context of rare modalities.
Semantic types representing concepts identifiable from images were selected based on consent and best effort, but not by dedicated medical personnel, due to the lack of a well-defined process and resources.
Additionally, to maximize sample size and variety, the dataset includes images with at least one concept. However, this may be a limiting factor, as single-concept images may lack the complexity required for comprehensive model training. Therefore, users are advised to consider the impact of single-concept images on the effectiveness of their models and adjust their selection criteria accordingly.
Several limitations apply in regards to the manually created concepts for image modalities, body regions (X-ray only), and projection directionalities (X-ray only), meant to validate automatically generated ones and to substitute missing ones.
The validation by a radiologist showed a generally high agreement with regard to modality- and (X-ray only) body region-related concepts.
With regard to the modality, only the positron emission (DRPE) and angiography (DRAN) modalities showed agreement in the moderate and substantial ranges, respectively. For the positron emission modality, this can be explained by the low sample count (n=2) in the evaluation, which fosters a strong bias under occasional disagreement. For the angiography modality, images labeled as X-ray modality by the radiologist can be ascribed to a conservative stance; e.g., barely visible traces of contrast agent without explicit mention of a performed angiography within the caption may have been labeled as X-ray modality.
With regard to the (X-ray only) body region, only the pelvis and spine regions showed agreement in the moderate and substantial ranges, respectively. For both, this can be explained by limitations in the determination of anatomical regions. For example, diseases such as osteoarthritis or advanced osteonecrosis of the hip might affect both the acetabulum and femoral head and thus might be categorized as either pelvis or lower extremity, leading to potential labeling inconsistencies. These will be resolved as part of future work through the implementation of multi-label assignment. Additionally, some anatomical regions, such as those focused on the soft tissue of the neck, could not be assigned a class at all due to minor limitations within the IRMA classification system.
Regarding the (X-ray only) directionality-related concepts, however, the achieved moderate agreement cannot be explained solely by the radiologist's conservative stance during evaluation. It further indicates the complexity of the given task, as even for experienced professionals, distinguishing between anteroposterior and posteroanterior directionalities is non-trivial without additional context. A general problem further lies in the very common lack of such additional context, as well as the deliberate reduction of directional complexity to only four classes, which occasionally does not leave room for sufficient differentiation. For instance, while labeling the directionality of a dorsopalmar hand radiograph as coronal posteroanterior is not entirely accurate, this was done here so as not to leave a notable amount of X-ray samples without a directionality label.
Due to these limitations, manual concepts have been documented distinctively, so dataset users have the possibility to decide on their own whether to use or exclude them from given concepts for images.
§ USAGE NOTES
The images are provided exactly as they appear in the PMC Open Access Subset archive and must be resized or cropped before being used in a machine learning workflow. Also note that the images provided in the PMC Open Access Subset may be of different quality than the images included in the journal.
Some possible use cases for the ROCOv2 <cit.> dataset include pre-training models for handling radiological images, building systems to support structured medical reporting, as well as building multi-label medical concept classification models and caption prediction models as done in the ImageCLEFmedical Caption tasks, which can be used to support structured medical reporting. Another use case is the evaluation of deep learning models for multi-task learning.
Please see the GitLab repository mentioned in the next section for example scripts regarding baseline models and evaluation.
§ CODE AVAILABILITY
The code is available in a GitLab repository (available at <https://gitlab.com/saviola/rocov2-code>, accessed 2023-11-10).
The folder “roco-2018” contains scripts and models for the compound figure and radiological figure detection, as taken from the original ROCO pipeline, which are used to filter all extracted images of the PMC Open Access Subset for non-compound radiological images.
The folder “baseline” contains code for the training of the baseline models for concept detection and caption prediction, which is described in the Technical Validation section.
The folder “ImageCLEF” contains the pre-processing and evaluation scripts for the ImageCLEFmedical Caption 2023 challenge tasks.
1. Pelka, O., Koitka, S., Rückert, J., Nensa, F. & Friedrich, C. M. Radiology Objects in COntext (ROCO): A multimodal image dataset. In Proceedings of the Third International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis (LABELS 2018), Held in Conjunction with MICCAI 2018, vol. 11043 of LNCS Lecture Notes in Computer Science, 180–189 (Springer, Granada, Spain, 2018).
2. National Library of Medicine. PMC Open Access Subset (2003). Dataset, https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/ (accessed 2024-03-12).
3. Pelka, O., Friedrich, C. M., García Seco de Herrera, A. & Müller, H. Overview of the ImageCLEFmed 2019 concept detection task. In Working Notes of Conference and Labs of the Evaluation Forum (CLEF 2019), vol. 2380 of CEUR Workshop Proceedings (CEUR-WS.org, 2019).
4. Pelka, O., Friedrich, C. M., García Seco de Herrera, A. & Müller, H. Overview of the ImageCLEFmed 2020 concept prediction task: Medical image understanding. In Working Notes of Conference and Labs of the Evaluation Forum (CLEF 2020), vol. 2696 of CEUR Workshop Proceedings (CEUR-WS.org, 2020).
5. Pelka, O. et al. Overview of the ImageCLEFmed 2021 concept & caption prediction task. In Working Notes of Conference and Labs of the Evaluation Forum (CLEF 2021), vol. 2936 of CEUR Workshop Proceedings, 1101–1112 (CEUR-WS.org, 2021).
6. Rückert, J. et al. Overview of ImageCLEFmedical 2022 – Caption Prediction and Concept Detection. In CLEF2022 Working Notes, vol. 3180 of CEUR Workshop Proceedings, 1294–1307 (CEUR-WS.org, Bologna, Italy, 2022).
7. Müller, H., Kalpathy-Cramer, J. & García Seco de Herrera, A. Experiences from the ImageCLEF Medical Retrieval and Annotation Tasks, 231–250 (Springer International Publishing, Cham, 2019).
8. Kraljevic, Z. et al. Multi-domain clinical natural language processing with MedCAT: The medical concept annotation toolkit. Artificial Intelligence in Medicine 117, 102083, DOI: 10.1016/j.artmed.2021.102083 (2021).
9. Soldaini, L. & Goharian, N. QuickUMLS: A fast, unsupervised approach for medical concept extraction. In Medical Information Retrieval (MedIR) Workshop, Special Interest Group on Information Retrieval (SIGIR) 2016, 4 (Pisa, Italy, 2016).
10. Eslami, S., Meinel, C. & de Melo, G. PubMedCLIP: How much does CLIP benefit visual question answering in the medical domain? In Findings of the Association for Computational Linguistics: EACL 2023, 1181–1193 (Association for Computational Linguistics, Dubrovnik, Croatia, 2023).
11. Radford, A. et al. Learning transferable visual models from natural language supervision. In Meila, M. & Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, vol. 139 of Proceedings of Machine Learning Research, 8748–8763 (PMLR, 2021).
12. Johnson, A. E. W. et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific Data 6, DOI: 10.1038/s41597-019-0322-0 (2019).
13. Demner-Fushman, D. et al. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association 23, 304–310, DOI: 10.1093/jamia/ocv080 (2015).
14. Bustos, A., Pertusa, A., Salinas, J.-M. & de la Iglesia-Vayá, M. PadChest: A large chest x-ray image dataset with multi-label annotated reports. Medical Image Analysis 66, 101797, DOI: 10.1016/j.media.2020.101797 (2020).
15. Bodenreider, O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Research 32, 267–270, DOI: 10.1093/nar/gkh061 (2004).
16. Subramanian, S. et al. MedICaT: A dataset of medical images, captions, and textual references. In Findings of the Association for Computational Linguistics: EMNLP 2020, 2112–2120, DOI: 10.18653/v1/2020.findings-emnlp.191 (Association for Computational Linguistics, Online, 2020).
17. Lin, W. et al. PMC-CLIP: Contrastive language-image pre-training using biomedical documents. In Proceedings of the Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), 525–536, DOI: 10.1007/978-3-031-43993-3_51 (2023).
18. Yang, J., Shi, R. & Ni, B. MedMNIST classification decathlon: A lightweight AutoML benchmark for medical image analysis. In 18th IEEE International Symposium on Biomedical Imaging, ISBI 2021, Nice, France, April 13-16, 2021, 191–195, DOI: 10.1109/ISBI48211.2021.9434062 (IEEE, 2021).
19. Zhang, X. et al. PMC-VQA: visual instruction tuning for medical visual question answering. CoRR abs/2305.10415, DOI: 10.48550/arXiv.2305.10415 (2023).
20. Lau, J. J., Gayen, S., Abacha, A. B. & Demner-Fushman, D. A dataset of clinically generated visual questions and answers about radiology images. Scientific Data 5, DOI: 10.1038/sdata.2018.251 (2018).
21. Liu, B. et al. SLAKE: A semantically-labeled knowledge-enhanced dataset for medical visual question answering. In 18th IEEE International Symposium on Biomedical Imaging, ISBI 2021, Nice, France, April 13-16, 2021, 1650–1654, DOI: 10.1109/ISBI48211.2021.9434010 (IEEE, 2021).
22. Moor, M. et al. Med-Flamingo: a multimodal medical few-shot learner. CoRR abs/2307.15189, DOI: 10.48550/arXiv.2307.15189 (2023).
23. Awadalla, A. et al. OpenFlamingo: An open-source framework for training large autoregressive vision-language models. CoRR abs/2308.01390, DOI: 10.48550/arXiv.2308.01390 (2023).
24. Lehmann, T. M., Schubert, H., Keysers, D., Kohnen, M. & Wein, B. B. The IRMA code for unique classification of medical images. In Huang, H. K. & Ratib, O. M. (eds.) SPIE Proceedings, DOI: 10.1117/12.480677 (SPIE, 2003).
25. Rückert, J. et al. ROCOv2: Radiology Objects in COntext Version 2, An Updated Multimodal Image Dataset, DOI: 10.5281/zenodo.10821435 (2023). Dataset, Zenodo.
26. Joulin, A., Grave, E., Bojanowski, P. & Mikolov, T. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 427–431 (Association for Computational Linguistics, Valencia, Spain, 2017).
27. Johnson, A. E. et al. MIMIC-III, a freely accessible critical care database. Scientific Data 3, DOI: 10.1038/sdata.2016.35 (2016).
28. Ali, A., Andrzejowski, P., Kanakaris, N. K. & Giannoudis, P. V. Pelvic girdle pain, hypermobility spectrum disorder and hypermobility-type Ehlers-Danlos syndrome: A narrative literature review. Journal of Clinical Medicine 9, DOI: 10.3390/jcm9123992 (2020).
29. Rückert, J. et al. Overview of ImageCLEFmedical 2023 – caption prediction and concept detection. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings (CEUR-WS.org, Thessaloniki, Greece, 2023).
30. Ionescu, B. et al. Overview of ImageCLEF 2023: Multimedia retrieval in medical, socialmedia and recommender systems applications. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, Proceedings of the 14th International Conference of the CLEF Association (CLEF 2023) (Springer Lecture Notes in Computer Science LNCS, Thessaloniki, Greece, 2023).
31. Kaliosis, P., Moschovis, G., Charalambakos, F., Pavlopoulos, J. & Androutsopoulos, I. AUEB NLP group at ImageCLEFmedical caption 2023. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1524–1548 (CEUR-WS.org, Thessaloniki, Greece, 2023).
32. Shinoda, H. et al. KDE lab at ImageCLEFmedical caption 2023. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1688–1701 (CEUR-WS.org, Thessaloniki, Greece, 2023).
33. Rio-Torto, I., Patrício, C., Montenegro, H., Gonçalves, T. & Cardoso, J. S. Detecting concepts and generating captions from medical images: Contributions of the VCMI team to ImageCLEFmedical caption 2023. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1653–1667 (CEUR-WS.org, Thessaloniki, Greece, 2023).
34. Lotfollahi, Y., Nobakhtian, M., Hajihosseini, M. & Eetemadi, S. IUST_NLPLAB at ImageCLEFmedical caption tasks 2023. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1558–1570 (CEUR-WS.org, Thessaloniki, Greece, 2023).
35. Yeshwanth, V., P, P. & Kalinathan, L. Concept detection and image caption generation in medical imaging. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1767–1775 (CEUR-WS.org, Thessaloniki, Greece, 2023).
36. Hasan, M. R., Layode, O. & Rahman, M. Concept detection and caption prediction in ImageCLEFmedical caption 2023 with convolutional neural networks, vision and text-to-text transfer transformers. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1510–1523 (CEUR-WS.org, Thessaloniki, Greece, 2023).
37. Mohamed, S. S. N. & Srinivasan, K. SSN MLRG at caption 2023: Automatic concept detection and caption prediction using ConceptNet and vision transformer. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1620–1626 (CEUR-WS.org, Thessaloniki, Greece, 2023).
38. Zhou, W. et al. Transferring pre-trained large language-image model for medical image captioning. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1776–1784 (CEUR-WS.org, Thessaloniki, Greece, 2023).
39. Tan, M. & Le, Q. V. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (ICML), 6105–6114 (2019).
40. Tan, M. & Le, Q. V. EfficientNetV2: Smaller models and faster training. In International Conference on Machine Learning (ICML), 10096–10106 (2021).
41. Paszke, A. et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS) 32, 8024–8035 (Curran Associates, Inc., 2019).
42. Merkel, D. Docker: Lightweight Linux containers for consistent development and deployment. Linux Journal 2014, 2 (2014).
43. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR) (2015).
44. Micikevicius, P. et al. Mixed precision training. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018) (OpenReview.net, 2018).
45. Deng, J. et al. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255, DOI: 10.1109/CVPR.2009.5206848 (2009).
46. Ridnik, T., Baruch, E. B., Noy, A. & Zelnik, L. ImageNet-21K pretraining for the masses. In Vanschoren, J. & Yeung, S. (eds.) Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual (2021).
47. Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q. & Artzi, Y. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020 (2020).
48. Lin, C.-Y. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, 74–81 (Association for Computational Linguistics, 2004).
49. Denkowski, M. & Lavie, A. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, 376–380, DOI: 10.3115/v1/W14-3348 (Association for Computational Linguistics, 2014).
50. Vedantam, R., Zitnick, C. L. & Parikh, D. CIDEr: Consensus-based image description evaluation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4566–4575, DOI: 10.1109/CVPR.2015.7299087 (IEEE, 2015).
51. Papineni, K., Roukos, S., Ward, T. & Zhu, W.-J. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318 (2002).
52. Sellam, T., Das, D. & Parikh, A. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 7881–7892, DOI: 10.18653/v1/2020.acl-main.704 (Association for Computational Linguistics, Online, 2020).
53. Hessel, J., Holtzman, A., Forbes, M., Le Bras, R. & Choi, Y. CLIPScore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 7514–7528, DOI: 10.18653/v1/2021.emnlp-main.595 (Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 2021).
54. Nicolson, A., Dowling, J. & Koopman, B. A concise model for medical image captioning. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1611–1619 (CEUR-WS.org, Thessaloniki, Greece, 2023).
55. Yang, B., Raza, A., Zou, Y. & Zhang, T. PCLmed at ImageCLEFmedical 2023: Customizing general-purpose foundation models for medical report generation. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1754–1766 (CEUR-WS.org, Thessaloniki, Greece, 2023).
56. Aono, M. et al. Multi-stage medical image captioning using classification and CLIP. In CLEF2023 Working Notes, vol. 3497 of CEUR Workshop Proceedings, 1387–1395 (CEUR-WS.org, Thessaloniki, Greece, 2023).
57. Vinyals, O., Toshev, A., Bengio, S. & Erhan, D. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, 3156–3164, DOI: 10.1109/CVPR.2015.7298935 (IEEE Computer Society, 2015).
58. Dosovitskiy, A. et al. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (OpenReview.net, 2021).
59. Radford, A. et al. Language models are unsupervised multitask learners (2019).
60. Cohen, J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 37–46, DOI: 10.1177/001316446002000104 (1960).
61. Landis, J. R. & Koch, G. G. The measurement of observer agreement for categorical data. Biometrics 33, 159, DOI: 10.2307/2529310 (1977).
§ ACKNOWLEDGEMENTS
The authors thank Seyedeh Delaram Mirazziroudsari, Department of Computer Science, University of Applied Sciences and Arts Dortmund, Dortmund, Germany, for her support on X-ray directionality concept implementation. The work of Louise Bloch, Raphael Brüngel, Sven Koitka, and Obioma Pelka was partially funded by a PhD grant from University of Applied Sciences and Arts Dortmund, Dortmund, Germany. The work of Ahmad Idrissi-Yaghir and Henning Schäfer was funded by a PhD grant from the DFG Research Training Group 2535 Knowledge- and data-based personalisation of medicine at the point of care (WisPerMed).
§ AUTHOR CONTRIBUTIONS STATEMENT
S.K. and O.P. conceived the models and workflows for creating the original ROCO dataset.
R.B., A.I., C.S., L.B., and J.R. wrote the original draft of the manuscript.
R.B., A.I., and H.S. performed formal dataset analysis and visualization.
A.I., S.K., O.P., J.R., and H.S. developed the software for the dataset creation workflow and evaluation.
L.B. and R.B. curated the dataset concept labels.
C.S. validated the dataset caption and concept quality.
A.A., C.F., A.H., P.H., H.M., and F.N. provided supervision.
All authors reviewed the manuscript.
§ COMPETING INTERESTS
The authors declare no competing interests.
§ SUPPLEMENTARY
|
http://arxiv.org/abs/2405.09432v1 | 20240515152428 | Analytic forms for the $e^+e^-$ annihilation cross sections around a resonance including initial state radiation | [
"Baoxin Liu",
"Zhenyu Zhang",
"Xiang Zhou"
] | hep-ph | [
"hep-ph"
] |
zhenyuzhang@whu.edu.cn
xiangzhou@whu.edu.cn
Hubei Nuclear Solid Physics Key Laboratory, School of Physics and Technology,
Wuhan University, Wuhan, Hubei 430072, P. R. China
The exact analytic forms of the cross sections including initial state radiation with the Kuraev-Fadin radiative function are obtained for e^+e^- annihilation around a resonance. Even when vacuum polarization and center-of-mass energy spread effects are taken into account, the deviation remains below 0.1%, meeting the accuracy requirements of quantum electrodynamics corrections up to 𝒪(α^2). The analytic forms lead to an enhancement in the precision of experimental measurements of physical parameters, such as branching fractions, and demonstrate significantly improved computational efficiency in the regression procedures.
Keywords Initial state radiation, cross section, branching fraction, analytic form.
Analytic forms for the e^+e^- annihilation cross sections around a resonance including initial state radiation
Xiang Zhou
May 20, 2024
==============================================================================================================
§ INTRODUCTION
The e^+e^- annihilation experiments provide cleaner experimental environments for quarkonium decays than pp̅ and hadron collision experiments. The measured quarkonium decay widths, or the corresponding branching fractions, can serve as inputs in phenomenological models or be used for comparison with theoretical predictions to test our understanding of quantum chromodynamics (QCD) <cit.>. There is an unavoidable background from the continuum process, which directly produces the final state via e^+e^- annihilation, i.e., e^+e^-→γ^*→ f <cit.>. It has been shown that the cross section from the interference term leads to imprecise branching fractions for broad resonances above the open heavy flavor threshold, such as ψ(3770) <cit.>.
A novel study shows that the ratio of the cross section from the interference term to that of the resonance is surprisingly large compared to the precision of current experiments, even for narrow resonances below the open heavy flavor threshold, such as J/ψ and ψ(3686) <cit.>. Therefore, to achieve precision measurements of the quarkonium decay widths or the corresponding branching fractions, the cross sections must be measured at no fewer than three energies around the resonance <cit.>.
Initial state radiation (ISR) is an essential quantum electrodynamics (QED) correction for the precise measurement of cross sections in e^+e^- annihilations <cit.>. In the ISR process, the electron or positron emits one or more photons before annihilation, thereby reducing the center-of-mass (c.m.) energy. While the Born cross sections of the final state are of interest, it is their corresponding ISR-corrected cross sections that are measured experimentally <cit.>. The ISR-corrected cross section can be obtained by the structure function (SF) method, which is an integral transformation of the kernel cross section by the radiative function <cit.>. The SF method, with corrections up to 𝒪(α^2), can achieve the required accuracy of about 0.1% for c.m. energy ranging from 0.2 to 10 GeV <cit.>.
The parameters of the Born cross section can be obtained through the integral method or the inverse transformation method. The integral method involves fitting the experimentally observed cross sections with the integral of the radiative function multiplied by the model function of the Born cross sections <cit.>. On the other hand, the inverse transformation method uses the Born cross section model function to fit the cross sections transformed from the experimentally observed ones, either through an iterative procedure <cit.> or by solving numerical integral equations <cit.>. The integral method is straightforward but time-consuming, as it requires numerical integration in the regression iterations <cit.>. Conversely, the inverse transformation method is time-efficient but introduces an additional uncertainty from the transformation process <cit.>.
For narrow resonance processes, such as J/ψ or ψ(2S), the widths of the resonances are much smaller than the c.m. energy spread, and the experimentally observed cross sections become strongly correlated. Therefore, the parameters of the Born cross section can only be obtained through the integral method <cit.>. Moreover, the computation time of the regression becomes even more burdensome, since an additional convolution with the c.m. energy spread must be included in the numerical integration <cit.>.
The integral method would be significantly accelerated if an analytic form of the ISR-corrected cross sections were available <cit.>. The radiative function proposed by Kuraev and Fadin (KF) consists of an exponentiated part and a finite-order leading-logarithmic (LL) part, accounting for soft multi-photon emission and hard collinear bremsstrahlung, respectively <cit.>. In 1987, R. N. Cahn was the first to derive the analytic form with the exponentiated part of the KF radiative function <cit.>. The approximate analytic form, including the LL parts of the KF radiative function and the upper limit correction via exponential expansions, was developed in subsequent works <cit.>. However, as shown below, for the ISR-corrected cross sections around J/ψ the accuracy of the approximated analytic form is only at the level of a few percent, and this error propagates at the same order into the value of the hadronic branching fraction estimated by regression. In a sense, the approximated analytic form introduces an imperceptible yet non-negligible systematic uncertainty, which is comparable to the statistical or even the total systematic uncertainty of current experiments.
In this paper, we first provide the formalism of the ISR-corrected Born cross sections and validate its precision in Section <ref>; the formulas of the Born cross sections around a resonance and the corresponding exact analytic forms of the ISR-corrected cross sections incorporating the KF radiative function, referred to as the KF analytic forms, are provided in Appendices <ref> and <ref>, respectively. Sections <ref> and <ref> introduce the formalisms that incorporate the vacuum polarization effect and the c.m. energy spread effect, respectively, and demonstrate the corresponding precisions. The fit tests of the toy Monte Carlo (MC) samples in the vicinity of J/ψ are presented in Section <ref>. Finally, Section <ref> gives discussions and conclusions.
§ ISR CORRECTED BORN CROSS SECTIONS
In the vicinity of a resonance, the amplitude
A_tot.^f of the final state f in e^+e^- colliders is the coherent sum of
both resonance A_R^f and continuum amplitudes A_C^f. The Born
cross section can be written as <cit.>
σ_Born^f(W) ∝ |A_tot.^f(W)|^2 = |A_C^f(W) + A_R^f(W)e^iϕ|^2,
where ϕ is the relative phase between the continuum
amplitude A_C^f and the resonance amplitude A_R^f. Therefore,
the Born cross section is a sum of three parts as
σ_Born^f(W) = σ_B_C^f(W)+σ_B_R^f(W)+σ_B_I^f(W),
where σ_B_C^f(W)∝|A_C^f(W)|^2,
σ_B_R^f(W)∝|A_R^f(W)|^2 and
σ^f_B_I ∝ 2Re{A_C^f*(W)A_R^f(W)e^iϕ} denote the Born cross
sections from continuum, resonance and interference contributions,
respectively.
When the kernel cross section is the Born cross section, the ISR corrected cross section σ_ISR^f is an integral of the Born cross section σ_Born^f times the radiative function <cit.>, that is
σ_ISR^f(W) = ∫_0^1-(W_min/W)^2 dx F(x,W) σ_Born^f(W√(1-x)),
where W is the c.m. energy of e^+e^- annihilation and W_min is the threshold energy equal to the invariant mass of the final states or the experimental cut off energy. The KF radiative function F(x,W) has the form <cit.>
F(x, W) = βx^β-1(1+δ) - β(1-x/2) + 1/8 β^2 [4(2-x)log(1/x) - (1+3(1-x)^2)/x · log(1-x) - 6 + x],
with
δ = 3/4 β + α/π(π^2/3 - 1/2) + β^2(9/32 - π^2/12) and β = 2α/π(2log(W/m_e) - 1).
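For readers who want to experiment with these expressions, a minimal numerical sketch of the KF radiative function follows (Python assumed; the constants and the truncation of δ follow the formulas as reconstructed above, so any residual uncertainty in that reconstruction propagates to the code):

import numpy as np

ALPHA = 1 / 137.035999     # fine-structure constant
M_E   = 0.000510999        # electron mass, GeV

def beta(W):
    # beta(W) = 2*alpha/pi * (2*log(W/m_e) - 1)
    return 2 * ALPHA / np.pi * (2 * np.log(W / M_E) - 1)

def kf_radiator(x, W):
    # Kuraev-Fadin radiative function F(x, W) as written in the text
    b = beta(W)
    delta = (0.75 * b + ALPHA / np.pi * (np.pi**2 / 3 - 0.5)
             + b**2 * (9 / 32 - np.pi**2 / 12))
    return (b * x**(b - 1) * (1 + delta)
            - b * (1 - x / 2)
            + b**2 / 8 * (4 * (2 - x) * np.log(1 / x)
                          - (1 + 3 * (1 - x)**2) / x * np.log(1 - x)
                          - 6 + x))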
The ISR corrected cross section integral σ_ISR^f is a sum of three parts, which is
σ_ISR^f(W) = σ_C^f(W) + σ_R^f(W) + σ_I^f(W),
where
σ_C^f(W) ≡∫_0^1-(W_min/W)^2dxF(x,W)σ_B_C^f(W√(1-x)),
σ_R^f(W) ≡∫_0^1-(W_min/W)^2dxF(x,W)σ_B_R^f(W√(1-x)),
and
σ_I^f(W) ≡∫_0^1-(W_min/W)^2dxF(x,W)σ_B_I^f(W√(1-x)),
are the integrals of the continuum, resonance and interference contributions, respectively.
For the process e^+e^-→μ^+μ^- in the vicinity of J/ψ, the Born cross section is <cit.>
σ_Born^μ^+μ^-(W) = 4πα^2/3W^2 · |1 + (W^2/M)·3√(Γ_eeΓ_μμ)/(α(W^2-M^2+iMΓ)) · e^iϕ|^2,
where α is the fine structure constant, M and Γ are the mass and total decay width of J/ψ, Γ_ee and Γ_μμ are the decay widths of J/ψ→ e^+e^- and μ^+μ^-, respectively. For the processes e^+e^-→ 2(π^+π^-)π^0 and ηπ^+π^- with
η→π^+π^-π^0 in the vicinity of J/ψ, the Born cross sections can be written in a
general form as <cit.>
σ_Born^5π(W) = (𝒜/W^2)^2 · 4πα^2/3W^2 · |1 + 3W^2√(Γ_eeΓ_μμ)𝒞_1 e^iϕ(1+𝒞_2 e^iΦ)/(αM(W^2-M^2+iMΓ))|^2,
where 𝒜/W^2 is the form factor, 𝒞_1 and 𝒞_2 are the ratios of amplitudes, Φ is the phase between the strong and electromagnetic decays from J/ψ. For the process of e^+e^-→ 2(π^+π^-)π^0, 𝒞_1 and ϕ are assumed to be 1 and 0, respectively <cit.>.
The cross sections from the continuum, resonance and interference contributions can therefore be derived separately. For example, the Born cross section from the continuum contribution of e^+e^-→μ^+μ^- is
σ_B_C^μ^+μ^-(W) = 4πα^2/3W^2.
The Born cross sections are composed of rational functions, while the KF radiative function contains not only exponential functions but also logarithmic functions, such as log x, which makes the ISR corrected cross section integral have an integrable singularity at the lower limit x=0 in Eq. (<ref>). Fortunately, the integral still has an analytic form. For example, the analytic form for the improper integral of log(1/x) times σ_B_C^μ^+μ^-(W) is
ℐ(W) = ∫_0^1-(W_min/W)^2dxβ^2log1/xσ_B_C^μ^+μ^-(W√(1-x))
= ∫_0^1-(W_min/W)^2dxβ^2log1/x4πα^2/3W^2(1-x)
= -β^24πα^2/3W^2∫_0^1-(W_min/W)^2dxlog x/1-x
= -β^24πα^2/3W^2[Li_2(W_min^2/W^2)-π^2/6],
where Li_2(x) is the Spence function, defined as
Li_2(z)=∫_1^1-zlog (x)/1-xd x.
Since the full analytic forms are lengthy, we collect them in Appendix <ref>.
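As a concrete illustration, the improper integral ℐ(W) above can be checked numerically; the sketch below (scipy assumed, with illustrative kinematic values) relies on the fact that scipy.special.spence(z) computes ∫_1^z log t/(1-t) dt, so that Li_2(W_min^2/W^2) in the convention above equals spence(b) with b = 1-(W_min/W)^2:

import numpy as np
from scipy.integrate import quad
from scipy.special import spence

ALPHA, M_E = 1 / 137.035999, 0.000510999      # as in the snippet above
W, W_min = 3.0969, 0.3                        # GeV; illustrative values only
beta = 2 * ALPHA / np.pi * (2 * np.log(W / M_E) - 1)
C = 4 * np.pi * ALPHA**2 / (3 * W**2)         # sigma_{B_C}^{mu mu}(W)
b = 1 - (W_min / W)**2

# brute-force quadrature of the improper integral
numeric = -beta**2 * C * quad(lambda x: np.log(x) / (1 - x), 0, b)[0]
# closed form: the text's Li_2(z) equals scipy's spence(1 - z)
closed = -beta**2 * C * (spence(b) - np.pi**2 / 6)
assert np.isclose(numeric, closed)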
In Fig. <ref>, we compare the ISR corrected cross section σ^μ^+μ^-_NI obtained through the numerical integration (NI) with the KF analytic form σ^μ^+μ^-_KF, and the approximated analytic form σ^μ^+μ^-_Approx for the e^+e^-→μ^+μ^- process around the J/ψ resonance. The relative deviation between σ^μ^+μ^-_KF and σ^μ^+μ^-_NI is less than 10^-10, coinciding with the numerical integration's error tolerance, thereby affirming the precision of our analytic form. Moreover, the relative deviation between σ_NI^μ^+μ^- and σ_Approx^μ^+μ^- reaches approximately 0.3% at the incoherent phase ϕ=90^∘ where σ_B_I^μ^+μ^- and then σ_I^μ^+μ^- vanish. Therefore, the precision is about 0.3% for the sum of the approximated analytic forms σ_C^μ^+μ^- and σ_R^μ^+μ^- where the precision of the approximated analytic form σ_R^μ^+μ^- is only 0.1% <cit.>. Conversely, at ϕ=0^∘ or 180^∘, the relative deviation between σ_NI^μ^+μ^- and σ_Approx^μ^+μ^- is about 3%. It is evident that the deviation comes from σ_I^μ^+μ^- is 3% in the approximated analytic form.
§ VACUUM POLARIZATION EFFECT
When the vacuum polarization (VP) effect is considered, the Born cross section in the vicinity of a resonance is corrected as <cit.>
σ_Born-VP^f(W) = σ_B_C^VP^f(W) + σ_B_R^VP^f(W) + σ_B_I^VP^f(W) = σ_B_C^f(W)/|1-Π_0(W)|^2 + σ_B_R^f(W) + σ_B_I^f(W)/|1-Π_0(W)|,
where Π_0(W) is the non-resonant VP factor <cit.>.
The ISR corrected cross section with the VP effect is also a sum of three parts,
σ_ISR-VP^f(W) = σ_C-VP^f(W) + σ_R-VP^f(W) + σ_I-VP^f(W),
where
σ_R-VP^f(W) ≡ σ_R^f(W) = ∫_0^1-(W_min/W)^2 dx F(x,W) σ_B_R^f(W√(1-x)),
σ_C-VP^f(W) ≡ ∫_0^1-(W_min/W)^2 dx F(x,W) σ_B_C^f(W√(1-x))/|1-Π_0(W√(1-x))|^2 ≃ σ_C^f(W)/|1-Π_0(W)|^2,
and
σ_I-VP^f(W) ≡ ∫_0^1-(W_min/W)^2 dx F(x,W) σ_B_I^f(W√(1-x))/|1-Π_0(W√(1-x))| ≃ σ_I^f(W)/|1-Π_0(W)|,
are integrals of the resonance, continuum, and interference contributions, respectively. Approximations are used in Eqs. (<ref>) and (<ref>) because the non-resonance VP factors Π_0(W) are numerically obtained <cit.>.
In Fig. <ref>, we compare the ISR corrected cross section including the VP effect, σ_NI-VP^μ^+μ^-, obtained through NI, with σ_KF-VP^μ^+μ^- obtained using the KF analytic form and σ_Approx-VP^μ^+μ^- obtained using the approximated analytic form. The relative deviation between σ_NI-VP^μ^+μ^- and σ_KF-VP^μ^+μ^- is of the order of 10^-4, primarily arising from the simplifications made in Eqs. (<ref>) and (<ref>). This deviation meets the 0.1% precision requirement of the SF method with corrections up to 𝒪(α^2) <cit.>, thereby fulfilling the criteria for practical applicability. Furthermore, the relative deviation between σ_NI-VP^μ^+μ^- and σ_Approx-VP^μ^+μ^- remains about 0.3% at ϕ=90^∘, and 3% at ϕ=0^∘ and 180^∘. It is evident that the precision of σ_Approx-VP^μ^+μ^- is directly inherited from that of σ_Approx^μ^+μ^-.
§ C.M. ENERGY SPREAD EFFECT
For narrow resonances, such as J/ψ, whose decay width is 92.6 keV, the c.m. energy spread of the e^+e^- beams is much larger than the resonance width. For example, the c.m. energy spread around J/ψ in BEPCII is less than 1 MeV <cit.>. Therefore, the effect of the c.m. energy spread must be included in the experimentally observed cross section σ_ISR-exp^f(W) through a Gaussian convolution of σ_ISR-VP^f(W):
σ_ISR-exp^f(W) = ∫_W-nS_E^W+nS_E (1/(√(2π)S_E)) exp(-(W-W')^2/(2S_E^2)) σ_ISR-VP^f(W') dW'.
In Fig. <ref>, we compare the ISR corrected cross sections including both the VP and c.m. energy spread effects: σ^μ^+μ^-_NI-exp obtained through NI, σ^μ^+μ^-_KF-exp obtained using the KF analytic form, and σ^μ^+μ^-_Approx-exp obtained using the approximated analytic form. The relative deviation between σ^μ^+μ^-_NI-exp and σ^μ^+μ^-_KF-exp remains of the order of 10^-4. Moreover, the relative deviation between σ^μ^+μ^-_NI-exp and σ^μ^+μ^-_Approx-exp is still 0.3% at ϕ=90^∘ and 3% at ϕ=0^∘ and 180^∘. It is evident that the Gaussian convolution of the c.m. energy spread does not impact the precision.
In Table <ref>, we present a comparison of the computation time for σ^f_KF-exp, σ^f_Approx-exp, and σ^f_NI-exp for the processes e^+e^-→μ^+μ^- and 2(π^+π^-)π^0. Both the KF and approximated analytic forms significantly reduce the computation time. Notably, σ^f_KF-exp yields the shortest computation time for the hadronic decay process.
§ FIT RESULTS USING TOY MC SAMPLES
We use toy MC samples for e^+e^-→μ^+μ^- and e^+e^-→ 2(π^+π^-)π^0 around J/ψ to test our analytic form and compare it with the approximated one. First, the toy MC samples for the ISR corrected cross sections of e^+e^-→μ^+μ^- are generated with 1% uncertainty at 20 c.m. energies around J/ψ. The χ^2_μ^+μ^- minimization is performed with the free parameters M, S_E and ϕ, where χ^2_μ^+μ^- is defined in Eq. (<ref>):
χ^2_μ^+μ^-(M, S_E, ϕ) = ∑_i=1^20 [(σ^μ^+μ^-_i - σ_ISR-exp^μ^+μ^-(W_i; M, S_E, ϕ))/Δσ^μ^+μ^-_i]^2,
where σ^μ^+μ^-_i is the cross section at each energy point W_i and Δσ^μ^+μ^-_i is the corresponding uncertainty. The resulting estimators M̂ and Ŝ_E and their uncertainties σ_M and σ_S_E are then taken as known information about the e^+e^- collider.
Second, the toy MC samples for the ISR corrected cross sections of e^+e^-→ 2(π^+π^-)π^0 are generated with 1% uncertainty at the same 20 c.m. energies around J/ψ. The χ^2_5π minimization is performed with the free parameters 𝒜, 𝒞_2 and Φ, and the nuisance parameters M and S_E, from which the estimators 𝒜̂, 𝒞̂_2 and Φ̂ are obtained. The χ^2_5π is defined in Eq. (<ref>):
χ^2_5π(𝒜, 𝒞_2, Φ) = ∑_i=1^20 [(σ^5π_i - σ_ISR-exp^5π(W_i; 𝒜, 𝒞_2, Φ, M, S_E))/Δσ^5π_i]^2 + ((M-M̂)/σ_M)^2 + ((S_E-Ŝ_E)/σ_S_E)^2.
The branching fraction for J/ψ→ 2(π^+π^-)π^0 satisfies <cit.>
Br = (𝒜/W^2)^2Γ_μμ|(1+𝒞_2e^iΦ)|^2.
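A schematic version of this two-step regression is sketched below; sigma_isr_exp_5pi is a hypothetical placeholder for the analytic model of Eq. (<ref>), and the data arrays stand for the toy MC cross sections:

import numpy as np
from scipy.optimize import minimize

def chi2_5pi(p, W, sig, dsig, M_hat, s_M, SE_hat, s_SE):
    # chi2 of the equation above: 20 cross-section terms plus Gaussian
    # constraints on the nuisance parameters M and S_E from the mu+mu- fit
    A, C2, Phi, M, S_E = p
    model = np.array([sigma_isr_exp_5pi(w, A, C2, Phi, M, S_E) for w in W])
    return (np.sum(((sig - model) / dsig)**2)
            + ((M - M_hat) / s_M)**2 + ((S_E - SE_hat) / s_SE)**2)

# res = minimize(chi2_5pi, x0, args=(W, sig, dsig, M_hat, s_M, SE_hat, s_SE))
# A, C2, Phi = res.x[:3]   # then Br follows from the branching-fraction relation above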
Finally, the above toy MC sample generation and χ^2 minimization are repeated 10000 times. The fit results for e^+e^-→μ^+μ^- are shown in Fig. <ref>. The estimators of M and S_E are centered around the MC truth values for both the KF and approximated analytic forms. Only the estimator ϕ of the KF analytic form is centered around the MC truth value, while the distribution of the estimator ϕ of the approximated analytic form deviates by more than 2σ from the MC truth value. The χ^2_μ^+μ^- of the KF analytic form is better than that of the approximated analytic form.
The fit results for the process e^+e^-→ 2(π^+π^-)π^0 are depicted in Fig. <ref>. Two solutions, labeled Solution I and II, are obtained. The distributions of χ^2_5π are identical for Solutions I and II, and the χ^2_5π of the KF analytic form is better than that of the approximated analytic form. Only in Solution I is the estimator Φ centered around the MC truth value of 90^∘, while in Solution II it is centered around -90^∘. Furthermore, in Solution I, only the estimator Br of the KF analytic form is centered around the MC truth value, whereas the estimator from the approximated analytic form deviates significantly. The toy MC test shows that the KF analytic form avoids a subtle yet non-negligible uncertainty that arises from employing the approximated analytic form.
§ DISCUSSIONS AND CONCLUSIONS
The ISR correction is essential for precise cross section measurements in e^+e^- annihilation. The structure function method provides an integral transformation from the Born cross sections to the experimentally observed ones. For the first time, we present the exact analytic form for the ISR corrected cross section around a resonance with the KF radiative function. Even when the vacuum polarization (VP) and c.m. energy spread effects are included, the precision still matches the accuracy requirement of the structure function (SF) method up to 𝒪(α^2) <cit.>. The analytic form can accelerate the regression procedure used to extract physical parameters, such as the branching fraction, by over 150 times. Utilizing toy MC samples, we reveal a non-negligible few-percent systematic uncertainty caused by the approximated analytic form.
For the currently running e^+e^- collision experiments, the BESIII experiment has accumulated 10 billion J/ψ events <cit.> and 3 billion ψ(3686) events <cit.>, which are at least 6 times larger than the samples used in previous BESIII measurements. Furthermore, the Belle II experiment plans to take about 500 fb^-1 of data for each vector bottomonium state <cit.>, at least tens or hundreds of times larger than the Belle samples. The KF analytic form will be helpful for handling the ISR correction properly in the currently running experiments at BESIII and Belle II, as well as in planned ones, such as the super-tau-charm factories (STCF) <cit.> and the super-J/ψ factory <cit.>.
X. Z. would like to acknowledge useful conversations with Y. D. Wang and K. Zhu. This work has been supported by the National Natural Science Foundation of China (NSFC) under Contract Nos. U2032114 and 12192265.
§ ANALYTIC FORMS FOR BORN CROSS SECTIONS
The Born cross section of the process e^+e^-→μ^+μ^- is <cit.>
σ_Born^μ^+μ^-(W) = 4πα^2/3W^2 · |1 + (W^2/M)·3√(Γ_eeΓ_μμ)/(α(W^2-M^2+iMΓ)) · e^iϕ|^2,
which can be taken apart into three parts
σ^μ^+μ^-_B_c(W) = 4πα^2/3W^2,
σ^μ^+μ^-_B_R(W) = 12π W^2Γ_eeΓ_μμ/M^2((W^2-M^2)^2+M^2Γ^2),
σ^μ^+μ^-_B_I(W) = 8πα√(Γ_eeΓ_μμ)/(M((W^2-M^2)^2+M^2Γ^2)) · ((W^2-M^2)cosϕ + MΓsinϕ).
The Born cross section of the process e^+e^-→2(π^+π^-)π^0 is <cit.>
σ_Born^5π(W) = (𝒜/W^2)^2 · 4πα^2/3W^2 · |1 + (W^2/M)·3√(Γ_eeΓ_μμ)𝒞_1 e^iϕ(1+𝒞_2 e^iΦ)/(α(W^2-M^2+iMΓ))|^2,
which can also be taken apart into three parts:
σ_B_c^5π(W) = (𝒜/W^2)^2 4 πα^2/3W^2,
σ_B_R^5π(W) = 𝒞_1^2(1+𝒞_2^2+2𝒞_2cosΦ) · 12π𝒜^2Γ_eeΓ_μμ/(M^2W^2((W^2-M^2)^2+M^2Γ^2)),
σ_B_I^5π(W) = 8πα𝒜^2√(Γ_eeΓ_μμ)/(W^4M((W^2-M^2)^2+M^2Γ^2)) × [(W^2-M^2)𝒞_1(cosϕ + 𝒞_2cosϕcosΦ - 𝒞_2sinϕsinΦ) + MΓ𝒞_1(sinϕ + 𝒞_2cosϕsinΦ + 𝒞_2sinϕcosΦ)].
§ ANALYTIC FORMS OF ISR CORRECTED CROSS SECTIONS
The ISR corrected cross section from the continuum part of e^+e^-→μ^+μ^- is
σ_C^μ^+μ^- = 4πα^2/3W^2I_0.
The ISR corrected cross section from the resonance part of e^+e^-→μ^+μ^- is
σ^μ^+μ^-_R = 6πΓ_eeΓ_μμW^2/(M^5Γ) · [(1+A/B)I_1 - (1/B)I_2] + c.c..
The ISR corrected cross section from the interference part of e^+e^-→μ^+μ^- is
σ^μ^+μ^-_I = [4πα√(Γ_eeΓ_μμ)/(M^3Γ) · (Γsinϕ - Mcosϕ) + 4πα√(Γ_eeΓ_μμ)/(M^4Γ) · W^2cosϕ(1+A/B)] I_1 - (W^2/B) · 4πα√(Γ_eeΓ_μμ)/(M^4Γ) · cosϕ · I_2 + c.c..
The ISR corrected cross section from the continuum part of e^+e^-→ 2(π^+π^-)π^0 is
σ^5π_C = 4πα^2𝒜^2/3W^6I_3.
The ISR corrected cross section from the resonance part of e^+e^-→ 2(π^+π^-)π^0 is
σ^5π_R = 𝒞_1^2(1+𝒞_2^2+2𝒞_2cosΦ) · 6π𝒜^2Γ_eeΓ_μμ/(ΓM^5W^2) · (1/(A+B) I_0 + B/(A+B) I_1) + c.c..
The ISR corrected cross section from the interference part of e^+e^-→ 2(π^+π^-)π^0 is
σ^5π_I = 4πα𝒜^2√(Γ_eeΓ_μμ)/M · { [𝒞_1(sinϕ + 𝒞_2cosϕsinΦ + 𝒞_2sinϕcosΦ)/(W^4M^2) - 𝒞_1(cosϕ + 𝒞_2cosϕcosΦ - 𝒞_2sinϕsinΦ)/(ΓW^4M)] × [B/(A+B)^2 I_0 + (B/(A+B))^2 I_1 + 1/(A+B) I_4] + 𝒞_1(cosϕ + 𝒞_2cosϕcosΦ - 𝒞_2sinϕsinΦ)/(ΓW^2M^3) · (1/(A+B) I_0 + B/(A+B) I_1) } + c.c.,
where c.c. represents the complex conjugate of the former part, A=Γ/M+i[(W/M)^2-1], B = -i(W/M)^2 and b = 1-(W_min/W)^2. The five integrations I_0, I_1, I_2, I_3 and I_4 are
I_0 = β(1+δ)ℬ(b,β,0) - bβ/2 + log(1-b)(β^2/4 + β/2 + (3/8)β^2 b) + (β^2/16)log^2(1-b) - (β^2/2) b log b - (β^2/2)[Li_2(1-b) - π^2/6 - Li_2(b)].
I_1 = β(1+δ) b^β _2F_1(1,β,1+β,-bB/A)/(Aβ) + (β^2/8 + β/2)(b/B - (A/B^2)log((A+Bb)/A)) - (β + 3β^2/4)(1/B)log((A+Bb)/A) + (3/4)β^2 {-(1/B)[Li_2((A+Bb)/(A+B)) - Li_2(A/(A+B)) + log(B/(A+B))log((A+Bb)/A)]} - (3/8)β^2 (1/B){-b + (b-1)log(1-b) + (A/B)[Li_2((A+Bb)/(A+B)) - Li_2(A/(A+B)) + log(B/(A+B))log((A+Bb)/A)]} - β^2 (1/B)[-π^2/6 + (1/2)log^2(1+(B/A)b) + Li_2(A/(A+Bb)) - log(B/A)log(1+(B/A)b)] + (1/2)β^2 (1/B){b(log b - 1) - (A/B)[-π^2/6 + (1/2)log^2(1+(B/A)b) + Li_2(A/(A+Bb)) - log(B/A)log(1+(B/A)b)]} - (1/2)β^2 (1/A){-Li_2(b) + [Li_2((A+Bb)/(A+B)) - Li_2(A/(A+B)) + log(B/(A+B))log((A+Bb)/A)]},
I_2 = β(1+δ)b^β/β + (β^2/2)Li_2(b) + b^2(β/4 + β^2/32) + b(-β - 5β^2/16) + log(1-b)(-9β^2/16 + (3/4)β^2 b - (3/16)β^2 b^2) + log b((β^2/4)b^2 - β^2 b),
I_3 = β(1+δ)ℬ(b,β,-2) + (β^2/8 + β/2)·[-1 + (1-b)^3 + b(b+3-3b)]/(2(1-b)^3) - (1/2)(β + 3β^2/4)(1/(1-b)^2 - 1) - (β^2/8)·[1 - (1-b)^2 + 2log(1-b)]/(4(1-b)^2) - (3/4)β^2 log(1-b) - (β^2/4)·b(b-1-(b-2)log b)/(1-b)^2 + (β^2/2)·b log b/(b-1) - (1/2)β^2(-Li_2(b) - (1/2)log^2(1-b)) - (β/8)·[b + log(1-b)]/(1-b),
and
I_4 = β(1+δ)ℬ(b,β,-1) + (β^2/2)(Li_2(b) - Li_2(1-b) + π^2/6) + [b/(1-b)](-3β^2/4 - β/2) + log(1-b)(β/2 - 3β^2/8) - [b log b/(1-b)]·β^2/2 + log^2(1-b)·β^2/16 - (β^2/8)·log(1-b)/(1-b).
The special functions used in the above formulas are
Li_2(z) = -∫_0^z log(1-u)/u du,
ℬ(x,a,b) = ∫_0^x t^a-1(1-t)^b-1dt,
and
_2F_1(a,b,c,x) = ∑_n=0^∞(a)_n(b)_n/(c)_nx^n/n!,
where Li_2 is the Spence function (dilogarithm), ℬ is the incomplete beta function and _2F_1 is the Gaussian hypergeometric function, with the Pochhammer symbol (a)_n=Γ(a+n)/Γ(a) and Γ(x) the Gamma function.
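If one wishes to evaluate these building blocks numerically, the mpmath library provides all three special functions for the general (including zero or negative) second argument required by ℬ(b,β,0), ℬ(b,β,-1) and ℬ(b,β,-2); a sketch with purely illustrative parameter values:

from mpmath import mp, polylog, betainc, hyp2f1

mp.dps = 30                                   # working precision

Li2  = lambda z: polylog(2, z)                # Spence / dilogarithm
Binc = lambda x, a, b: betainc(a, b, 0, x)    # incomplete beta B(x; a, b)

# illustrative values only for beta(W), b, and the complex A, B defined above
bW, b = mp.mpf('0.09'), mp.mpf('0.99')
A = mp.mpc('0.00003', '0.05')                 # A = Gamma/M + i[(W/M)^2 - 1]
B = mp.mpc('0', '-1.05')                      # B = -i (W/M)^2
lead_I0 = bW * Binc(b, bW, 0)                 # the beta*B(b, beta, 0) piece of I_0
hyp_I1  = bW * b**bW * hyp2f1(1, bW, 1 + bW, -b * B / A) / (A * bW)  # 2F1 term of I_1, dropping (1+delta)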
|
http://arxiv.org/abs/2405.10304v1 | 20240516175622 | Towards Unpolarized GPDs from Pseudo-Distributions | [
"H. Dutrieux",
"R. Edwards",
"C. Egerer",
"J. Karpie",
"C. Monahan",
"K. Orginos",
"A. Radyushkin",
"D. Richards",
"E. Romero",
"S. Zafeiropoulos"
] | hep-lat | [
"hep-lat",
"hep-ph"
] |
Hervé Dutrieux,^a Robert G. Edwards,^b Colin Egerer,^b Joseph Karpie,^b Christopher Monahan,^a Kostas Orginos,^a,b Anatoly Radyushkin,^c,b David Richards,^b Eloy Romero,^b Savvas Zafeiropoulos^d
E-mail: hldutrieux@wm.edu, edwards@jlab.org, egerer@jlab.org, jkarpie@jlab.org, cmonahan@wm.edu, kostas@wm.edu, radyush@jlab.org, dgr@jlab.org, eromero@jlab.org, savvas.zafeiropoulos@cpt.univ-mrs.fr
We present an exploration of the unpolarized isovector proton generalized parton distributions (GPDs) H^u-d(x, ξ, t) and E^u-d(x, ξ, t) in the pseudo-distribution formalism using distillation. Taking advantage of the large kinematic coverage made possible by this approach, we present results on the moments of GPDs up to the order x^3 – including their skewness dependence – at a pion mass m_π = 358 MeV and a lattice spacing a = 0.094 fm.
JLAB-THY-24-4059
Towards Unpolarized GPDs from Pseudo-Distributions
(on behalf of the HadStruc Collaboration)
May 20, 2024
=======================================================
§ INTRODUCTION
Generalized parton distributions (GPDs) <cit.> characterize the three-dimensional internal structure of hadrons in terms of the parton's longitudinal momentum fraction x, the fraction of longitudinal momentum transfer between the incoming and outgoing hadrons, or skewness ξ, and the invariant momentum transfer t. The three-dimensional nature of the distributions, in contrast to the one-dimensional parton distribution functions (PDFs), enables insights into crucial features such as the orbital angular momentum carried by the quarks and gluons <cit.>. Generalizing the elastic form factors, GPDs can also be used to define radial profiles of energy or pressure distribution of the partonic matter <cit.>.
The principal experimental means of gaining insight into GPDs is through exclusive processes, notably deeply virtual Compton scattering (DVCS) <cit.> and deeply virtual meson production (DVMP) <cit.>. There is a worldwide experimental program dedicated to these processes, including experiments at Jefferson Lab at both 6 and 12 GeV <cit.> and the upcoming Electron-Ion Collider (EIC) <cit.>. However, these experiments only give indirect access to GPDs. In the framework of collinear factorization, the amplitude of these processes depends on GPDs through a convolution with a perturbative kernel, which gives practical access to specific features of the GPDs (notably the diagonal region x ≈ξ or specific integrals of the GPDs). Therefore the insights they provide are primarily two-dimensional, or "moment-like" <cit.>, the three-dimensional behavior being obscured. A manifestation of this inverse problem is encapsulated in "shadow GPDs" <cit.>, wherein a range of GPDs can give rise to extremely similar DVCS cross-sections. In the limit of small Bjorken-x, corresponding to small x and ξ, the enhanced effects of perturbative evolution may give a hope of a better control of the uncertainty propagation in this inverse problem <cit.>. However, in the regime of moderate to large x, which precisely corresponds to the kinematical region accessible to lattice QCD, a model independent extraction of GPDs from those experimental processes seems difficult to achieve. It is especially the case in the so-called ERBL region (|x| < |ξ|), where the lack of theoretical constraints like positivity <cit.> makes the deconvolution problem particularly poorly behaved <cit.>. For example, the very different results obtained when extracting gravitational form factors from DVCS data using different parametrizations <cit.> demonstrate a relative lack of sensitivity of the current DVCS dataset to this quantity. In contrast, these form factors have been probed with great precision on the lattice using local composite operators <cit.>.
GPDs therefore provide an ideal setup where, at least for the time being, only lattice QCD can provide systematically controlled information. Besides DVCS and DVMP, an increasing number of exclusive processes with enhanced sensitivity to GPDs are being studied, but their experimental exploration is only at its beginning: double DVCS <cit.>, where a recent study of measurability looks promising <cit.>, di-photon production <cit.>, photon meson pair production <cit.>, or the general category of single diffractive hard exclusive processes <cit.>. The maturation of this new GPD phenomenology in view of the EIC therefore complements the progress of GPD extractions on the lattice.
GPDs are defined as the matrix elements of operators separated along the light cone. As in the case of the PDFs, their direct calculation is precluded on a Euclidean lattice, except for low-order moments which can be computed from local operators. New developments in the last decade have led to the possibility of computing the x-dependence of parton distributions using non-local operators with a space-like separation <cit.>. The application of some of those approaches to GPDs was performed in <cit.> and pioneering calculations have been conducted in <cit.>.
§.§ Main phenomenological results
We present a formalism for the computation of pseudo-GPDs using distillation. This technique allows us to probe the GPDs over a large kinematic range in terms of skewness and momentum transfer, as presented in Fig. <ref>. For numerical application, we use 186 combinations of initial and final momenta, giving 116 kinematic values of (ξ, t). This grants us access to Mellin moments of the GPDs that are sensitive to the skewness, on top of those only sensitive to the forward limit ξ = 0. We use a lattice ensemble with pion mass m_π = 358 MeV, lattice spacing a = 0.094 fm, and lattice volume 32^3×64. For the extraction of the GPD moments, we have used hadron momenta up to 1.4 GeV, yielding maximal Ioffe time values of ν ≤ 0.6 z/a, where z is the space-like separation of the non-local operator. This corresponds to ν_max = 3.5 using z = 6a. As demonstrated in the context of PDFs in <cit.>, we can reach momenta at least twice as large when using the distillation framework with momentum smearing, allowing for a study of the x-dependence of the parton distributions. However, we reserve this effort for a subsequent paper, as preliminary explorations at large momentum hinted at the necessity of better control of excited state contamination and other lattice systematic uncertainties.
For the reader interested in the phenomenology of GPDs beyond the framework of lattice calculations, we summarize in Fig. <ref> the main physics results of this study, in the form of six moments of the isovector GPDs H^u-d and E^u-d:
∫_-1^1 dx x^n-1[ H^u-d; E^u-d ](x, ξ, t) = ∑_k = 0 even^n-1[ A_n,k(t); B_n,k (t) ]ξ^k .
For brevity, the results are quoted at the scale μ = 2 GeV in the form of a dipole fit with the value at t=0 and a dipole mass Λ_n,k following the example:
A_n,k(t) = A_n,k(t = 0) (1-t/Λ_n,k^2)^-2 .
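Such a dipole form is a simple two-parameter least-squares problem; a minimal sketch (the arrays below are placeholders for illustration, not our lattice data):

import numpy as np
from scipy.optimize import curve_fit

def dipole(t, F0, Lam):
    # F(t) = F(0) / (1 - t/Lam^2)^2, with t < 0 in GeV^2
    return F0 / (1 - t / Lam**2)**2

t_vals = np.array([-0.17, -0.34, -0.65, -1.04])   # placeholder t values (GeV^2)
F_vals = np.array([0.93, 0.78, 0.58, 0.42])        # placeholder moment values
F_errs = np.array([0.02, 0.02, 0.03, 0.04])
popt, pcov = curve_fit(dipole, t_vals, F_vals, sigma=F_errs,
                       absolute_sigma=True, p0=[1.0, 1.0])
F0, Lam = popt                                     # value at t = 0 and dipole mass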
More sophisticated treatment of the data is presented starting from section <ref> of this document. The quoted uncertainty includes an evaluation of excited state contamination on top of the statistical uncertainty resulting from the sampling of gauge configurations. However, those results are obtained at an unphysical pion mass of 358 MeV without an attempt at extrapolating to the physical pion mass. Likewise, as we will discuss in detail further in the document, we observe clear signs of systematic effects, either attributable to the lack of continuum limit (discretization effects) or the uncertainty in the light-cone limit of our space-like matrix elements (higher-twist effects). With these limitations in mind, we hope that the results, and in particular the novel characterization of the skewness-dependent moments A_3,2,B_3,2 and A_4,2,B_4,2 can provide insights for theoretical models of GPDs and extractions from the data.
The paper is organized as follows. In section <ref>, we review the formalism of GPDs in the short-distance factorization, notably deriving the matching formulas for the moments sensitive to the skewness. In section <ref>, we present our calculation strategy on the lattice, stressing the extensive coverage of the kinematic domain enabled by distillation. In section <ref>, we describe our numerical analysis of the correlation functions. Then we present our results, first for the elastic form factors (section <ref>) and second for the generalized form factors (section <ref>).
§ THEORETICAL FORMALISM
§.§ Building light-cone GPDs from space-like matrix elements
GPDs parameterize off-forward matrix elements of quark and gluon operators with a light-like separation. In the convention of <cit.>, the leading twist-2 vector quark GPDs of the nucleon are defined according to:
F^q(x,p_f,p_i) =1/2∫ dz^-/2πe^ixP^+z^-
×⟨N(p_f,λ_f)|ψ̅^q(-z/2)γ^+Ŵ(-z/2,z/2;A)ψ^q(z/2)|N(p_i,λ_i)⟩|_z^+=0,𝐳_⊥=0_⊥ ,
=1/2P^+u̅(p_f,λ_f)[γ^+ H^q(x,ξ,t)+iσ^+νq_ν/2m E^q(x,ξ,t)]u(p_i,λ_i) ,
where Ŵ is the Wilson line in the fundamental representation, m the nucleon mass, λ_i, λ_f are the initial and final state helicities, and p_i, p_f are the initial and final state momenta that define the variables
P ≡1/2(p+p') , q ≡ p'-p , t ≡ q^2 , ξ≡ -q^+/2P^+ .
We use the spinor normalization u̅(p,λ)u(p,λ')=2mδ_λλ'.
For a Euclidean lattice QCD calculation, the light-cone definition is unsuitable. Since real-time is inaccessible, we use instead equal-time non-local operators with a space-like separation. Let us consider more general fundamental matrix elements defining GPDs of spin-1/2 nucleon without restriction on z^μ:
M^μ(p_f,p_i,z)=⟨N(p_f,λ_f)|𝒪^μ(z;A)|N(p_i,λ_i)⟩.
This matrix element has multiplicative ultraviolet divergences <cit.> that only depend on the length of the Wilson line, so it is useful to use instead the ratio:
ℳ^μ(p_f,p_i,z) = M^μ(p_f,p_i,z)/M^0(0,0,z) ,
which has a finite continuum limit. In fact, this ratio is renormalization group invariant (RGI), which means that it is independent of the scheme that is used to define the renormalized quantities in the numerator and denominator. This denominator is the same matrix element used in the analogous ratio of the forward matrix elements in <cit.>. In the case of isovector unpolarized quarks, the denominator also directly normalizes the Dirac elastic form factor at 1 when t = 0.
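In practice, the ratio is conveniently formed sample by sample so that correlated fluctuations of numerator and denominator cancel; a schematic jackknife sketch (M_num and M_den are hypothetical per-configuration arrays of the bare matrix elements):

import numpy as np

def jackknife_ratio(M_num, M_den):
    # jackknife mean and error of M^mu(p_f, p_i, z) / M^0(0, 0, z)
    N = len(M_num)
    idx = np.arange(N)
    samples = np.array([M_num[idx != i].mean() / M_den[idx != i].mean()
                        for i in range(N)])
    mean = samples.mean()
    err = np.sqrt((N - 1) * np.mean((samples - mean)**2))
    return mean, err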
Introducing the Ioffe time ν = z · P, the light-cone GPD definition (<ref>) can be expressed as:
F^q(x,p_f,p_i) = lim_z → (0,z^-,0_⊥)1/2P^+∫ dν/2πe^ixνℳ^+(p_f,p_i,z) ,
=lim_z^2 → 01/2∫ dν/2πνe^ixν z_μℳ^μ(p_f,p_i,z) .
This expression is particularly convenient as it is directly Lorentz invariant. It should be noted that the limit in Eq. (<ref>) suffers from collinear divergences, but perturbation theory allows us to predict the behavior in z^2 of the leading twist contribution when z is small enough compared to hadronic scales. However, since all our calculations are performed at z^2 < 0, there exists an uncertainty associated with the light-cone limit z^2 → 0.
The matrix element admits the Lorentz decomposition <cit.>:
ℳ^μ(p_f,p_i,z) =⟨⟨γ^μ⟩⟩𝒜_1(ν,ξ,t,z^2)+z^μ⟨⟨1⟩⟩𝒜_2(ν,ξ,t,z^2)+i⟨⟨σ^μ z⟩⟩𝒜_3(ν,ξ,t,z^2)
+i/2m⟨⟨σ^μ q⟩⟩𝒜_4(ν,ξ,t,z^2)+q^μ/2m⟨⟨1⟩⟩𝒜_5(ν,ξ,t,z^2)
+i/2m⟨⟨σ^zq⟩⟩[P^μ𝒜_6(ν,ξ,t,z^2)+q^μ𝒜_7(ν,ξ,t,z^2)+z^μ𝒜_8(ν,ξ,t,z^2)] .
The Lorentz structures preceding the amplitudes employ the following abbreviations: σ^μ z≡σ^μρz_ρ, σ^μ q≡σ^μρq_ρ, σ^zq≡σ^ρλz_ρ q_λ, and ⟨⟨Γ⟩⟩≡u(p_f,λ_f) Γ u(p_i,λ_i). ξ may be generalized to space-like separations via:
ξ = -q·z/(2P·z) = -q·z/(2ν) .
This expression coincides exactly with the definition of ξ in the light-cone limit (<ref>), but interestingly the kinematic bound satisfied by the light-cone ξ
|ξ_light-cone|≤√(-t)/√(-t+4m^2)≤ 1 ,
no longer holds for space-like z. In fact, with this definition of ξ, one can choose kinematics such that ν =0 and ξ is infinite. This is, however, not an issue as all amplitudes and matching relations appear naturally as functions of ν̅ = νξ = -q· z/2, which is always finite. The decomposition of the GPD matrix elements in terms of ν and ν̅ emerges clearly in the double distribution (DD) framework <cit.>, as shown in the Lorentz invariant relationship Eq. (<ref>).
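The generalized skewness and the light-cone bound are simple kinematic functions of the four-vectors; a short sketch (mostly-minus metric assumed, four-vectors as (E, px, py, pz)):

import numpy as np

def mdot(a, b):
    # Minkowski product with metric (+,-,-,-)
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def skewness(p_i, p_f, z):
    # generalized skewness xi = -(q.z)/(2 P.z), valid for space-like z
    q, P = p_f - p_i, 0.5 * (p_f + p_i)
    return -mdot(q, z) / (2 * mdot(P, z))

def lightcone_bound(t, m):
    # |xi| bound that holds on the light cone but not for space-like z
    return np.sqrt(-t) / np.sqrt(-t + 4 * m**2)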
The matrix element contracted with z_μ that enters the light-cone limit, Eq. (<ref>), reads:
z_μℳ^μ(p_f,p_i,z) =⟨⟨z⟩⟩𝒜_1(ν,ξ,t,z^2)+z^2⟨⟨1⟩⟩𝒜_2(ν,ξ,t,z^2)+0×𝒜_3(ν,ξ,t,z^2)
+i/2m⟨⟨σ^z q⟩⟩𝒜_4(ν,ξ,t,z^2)-νξ/m⟨⟨1⟩⟩𝒜_5(ν,ξ,t,z^2)
+i/2m⟨⟨σ^zq⟩⟩[ν𝒜_6(ν,ξ,t,z^2)-2ξν𝒜_7(ν,ξ,t,z^2)+z^2𝒜_8(ν,ξ,t,z^2)] ,
where we have used that σ^μνz_μ z_ν = 0. One can immediately identify that 𝒜_3 disappears in the contraction and 𝒜_2,8 are 𝒪(z^2) contaminations. If we introduce Ioffe-time GPDs as:
[ H^q; E^q ] (x, ξ, t) = ∫dν/2π e^ixν[ H^q; E^q ](ν, ξ, t) ,
then we can identify which amplitudes contribute to the lightcone limit by comparison with Eq. (<ref>):
H^q(ν, ξ, t) = lim_z^2 → 0[𝒜_1 - ξ A_5] ,
E^q(ν, ξ, t) = lim_z^2 → 0[𝒜_4 + ν𝒜_6 - 2ξν𝒜_7 + ξ A_5] ,
where scale dependence from regulating the collinear divergences at the z^2 → 0 limit has been suppressed.
We have used the Gordon identity to distribute the contribution of 𝒜_5 between H and E.
This relation is equivalent to the one quoted in <cit.> up to choices of conventions. The discussion in <cit.> using the DD formalism <cit.> demonstrates that the 𝒜_5 amplitude is sensitive to the D-term in the limit z^2 → 0, while the other amplitudes 𝒜_1,4,6,7 build up the double distributions in a similar fashion as here. The right-hand sides of Eqs. (<ref>) and (<ref>) are very advantageous since they can be evaluated immediately at any value of z^2 with the assurance that they converge towards the correct light-cone limit for arbitrarily small values of z^2. However, there is clearly no unique way to construct a Lorentz invariant object at z^2 < 0 which converges analytically towards the correct light-cone limit. In the absence of a D-term in the non-singlet GPDs that we consider here, we will not use the 𝒜_5 amplitude at all.
In the case of unpolarized quark PDFs, a single amplitude contributes leading twist information at lowest powers of z^2 and α_s. For helicity distributions a linear combination of two is required <cit.>, not dissimilar to Eqs. (<ref>) and (<ref>). In the case of GPDs, different choices of the quasi-GPD were compared in terms of the Lorentz invariant amplitudes <cit.>. In the light-cone limit z^2→0 at fixed ν,ξ, these definitions converge to the same final GPD. The practical differences between the definitions for realistic scales would be dominated by twist-3 contributions <cit.> in the amplitudes that come from terms proportional to the transverse momentum transfer Δ_μ^T = q^μ + 2 ξ P^μ. When considering the light-cone matrix element in Eq. (<ref>) with a generic γ^μ, when μ≠ + the dominant deviations from the leading twist matrix elements again occur in the OPE from twist-3 operators that are total derivatives of leading twist operators or from genuine quark-gluon-antiquark correlations <cit.>. In <cit.>, the effect of such twist-3 contributions is to partially restore the translational symmetry of the off-light-cone matrix element, which is broken in a leading twist calculation. As shown in <cit.> adding the 6,7 terms to Eq. (<ref>) is explicitly canceling the contribution to two such twist-3 contributions. If one could determine higher twist combinations, of any order and from the same or a different calculation, then they could be added similarly to Eq. (<ref>) and (<ref>) to improve the light-cone limit z^2→0, even if not all twist-3 contribution are separable using Lorentz invariant functions. An evaluation of the plausible size and analytical properties of the genuine higher-twist power corrections is carried out in <cit.>, where the corrections are found to be fairly mild when using the ratio normalization of Eq. (<ref>) in the range of Ioffe time relevant for this study.
§.§ Perturbative matching
The matching of unpolarized non-singlet Ioffe-time GPDs in the MS scheme to z^2-dependent pseudo-distributions in coordinate space was first presented at one-loop accuracy in <cit.> for the pion's analog of 𝒜_1. For clarity, we denote with a bar the MS quantity. The matching is given by:
F(ν,ξ,t,z^2)=F(ν,ξ,t,μ^2)-α_sC_F/2π{ln(-e^2γ_E+1/4z^2μ^2)B+L}⊗F+𝒪(z^2Λ_ QCD^2) + 𝒪(z^2 t) ,
where:
B⊗F =∫_0^1 dα [(2α/(1-α))_+cos(α̅ξν)+sin(α̅ξν)/ξν-δ(1-α)/2]F(αν,ξ,t,μ^2)
L⊗F =∫_0^1 dα [4(ln(1-α)/(1-α))_+cos(α̅ξν)-2sin(α̅ξν)/ξν+δ(1-α)]F(αν,ξ,t,μ^2),
with α̅=1-α. The standard plus-prescription is defined as
∫_0^1 dα G(α)_+f(α x)=∫_0^1 dα G(α)[f(α x)-f(x)] .
Here B is simply the ordinary leading-order evolution kernel of non-singlet GPDs, except expressed in Ioffe time instead of x. Compared to its expression in x-space, the evolution operator is much simpler. For instance, it is analytical in ξν, whereas the x-space evolution operator has non-analytic structures in ξ/x. Furthermore, the evolution/matching of Ioffe-time GPDs shares with Ioffe-time PDFs the characteristic of only involving smaller Ioffe times (the α variable in the integral runs from 0 to 1). Thus, we do not need to know the entire Ioffe-time dependence of the GPD for evolution. In contrast, the x-space evolution of GPDs will typically require knowledge of the entire x-dependence of the GPD. It is only when |x| > |ξ| that a partial knowledge of the x-dependence of GPDs is enough to evolve them[Depending on which kinematic region in (x, ξ) of the GPDs one exactly knows, it is possible to imagine a somewhat cumbersome strategy where one would use double distributions as an intermediate step before performing the evolution <cit.>. It is however potentially numerically challenging, and certainly less straightforward than the evolution of a Ioffe-time GPD.]. As a consequence, it seems that Ioffe-time GPDs are generally speaking a much friendlier object to manipulate than their x-dependent counterpart, even though they contain the same information if the entire range in ν and x is known.
The kernels governing the evolution, Eq. (<ref>), and scheme matching, Eq. (<ref>), connect the scales μ^2 in MS and z^2 at fixed values of Ioffe time, skewness, and momentum transfer. In the limit ξ→ 0, the kernels reduce to the PDF case. It is interesting to observe the effects of non-zero skewness. To isolate the skewness dependence attached purely to the perturbative matching, we introduce a ξ-independent GPD resembling a normalized isovector PDF:
F(x,ξ,t,μ^2)=15/16 x^-0.5(1-x)^2 .
The associated Ioffe-time GPD
F(ν,ξ,t,μ^2)=∫_-1^1 dx e^-ixνF(x,ξ,t,μ^2)
is presented as the red curve in the right panels of Fig. <ref> for the real part and Fig. <ref> for the imaginary part. The blue curves on the same plots represent the matched pseudo-GPD F(ν, ξ, t, z^2) where we have assumed that μ = 2 GeV, α_s = 0.28 and z = a = 0.094 fm. This yields ln(-z^2μ^2 e^2γ_E+1/4) ≈ 0.67. It can be observed that the matching effects are quite small, especially for larger values of ξ.
To understand how evolution and matching separately contribute to the effects of non-zero skewness, we show the separate convolutions B ⊗F and L ⊗F on the left panels of Figs. <ref> and <ref>. A striking feature is that the curves for various skewnesses seem to become similar as ν gets larger, but the smaller the skewness, the longer they take to fall towards this “universal” limit. Although it is difficult to assess precisely, a general rule can be observed from the plots that the universal limit is reached when ν≈ 5/ξ, and it is unclear whether the curve for ξ = 0 participates in this universal limit at all. We have checked that this remains consistently true for much larger values of ν.
In fact, this is not surprising. The x-space evolution kernel has a cusp when x = ξ, which leaves us to expect some kind of trouble in Ioffe time when ν = 𝒪(1/ξ). As we have noticed before, the relevant variable that appears consistently in our calculations is not ξ, but rather νξ. A similar study is conducted in x-space in <cit.> for the evolution kernel. It demonstrates the formation of non-analytic behavior at x = ξ through the evolution operator and shows how evolution effects are considerably increased in this region (see Fig. 3 in that paper, for instance). We observe here the same effect in Ioffe-time space, although in the range of ν available for realistic calculations, the ξ dependence of the evolution kernels is quite benign.
Additionally, one can notice in Fig. <ref> that the small ν behavior of the imaginary part of the convolutions is independent of the skewness. As we will see in the next section, this stems from the fact that the first derivative with respect to ν of the imaginary part of the non-singlet Ioffe-time GPD at ν =0 is independent of ξ due to the polynomiality property of GPDs. This also explains why the small ν behavior of the real part of the convolutions in Fig. <ref> is of order 𝒪(ν^2), but this time dependent on ξ.
Finally, a feature of the matching already noticed before in the case of PDFs or Good Lattice Cross Sections (e.g. <cit.>) is the fact that the evolution and matching convolutions have generally opposite signs in the regime of ν where data exists. The result is a reduction or even near cancellation of the dominant perturbative effect. Of course, we did not include any initial skewness dependence in the model, so such results are qualitative at best. It will be interesting to study if these canceling effects continue to hold with higher orders in perturbation theory and more sophisticated resummation techniques.
In <cit.>, the perturbatively calculated kernel for the resummed z^2 PDF step-scaling from one scale to another was studied and compared to DGLAP evolution. The kernel for the forward pseudo-PDF was found to be smaller than the MS DGLAP evolution kernel. Further comparisons should be made in future studies to other scheme-dependent methods, such as RI-Mom renormalization, where the scale-independent structure is different.
§.§ Moments of GPDs
By differentiating the defining relationship between the Ioffe-time distribution and the x-space distribution, Eq. (<ref>), one finds that:
d^n/dν^nF(ν, ξ, t, z^2)|_ν = 0 = (-i)^n ∫_-1^1 dx x^n F(x, ξ, t, z^2) .
We introduce the short-hand notation:
F_n(ξ, t, z^2) ≡∫_-1^1 dx x^n-1F(x, ξ, t, z^2) ,
where n-1 is a widely encountered convention. Then:
F(ν, ξ, t, z^2) = ∑_n=0^∞(-iν)^n/n! F_n+1(ξ, t, z^2) .
Introducing the matching relation:
F(ν, ξ, t, z^2) = ∫_0^1 dα K(α, ξν, α_s, z^2μ^2) F(αν, ξ, t, μ^2) ,
and differentiating it n times yields:
(-i)^n F_n+1(ξ, t, z^2) = ∑_k = 0^n [ n; k ]∫_0^1 dα d^n-k/dν^n-kK(α, ξν, α_s, z^2μ^2) |_ν = 0d^k/dν^kF(αν, ξ, t, μ^2)|_ν = 0 ,
= ∑_k = 0^n F_k+1(ξ, t, μ^2) [ n; k ]∫_0^1 dα (-iα)^kd^n-k/dν^n-kK(α, ξν, α_s, z^2μ^2)|_ν = 0
≡ (-i)^n ∑_k = 0^n F_k+1(ξ, t, μ^2) ξ^n-k c_n+1,n-k(α_s, z^2μ^2) ,
where
c_n+1,k(α_s, z^2μ^2) ≡ i^k [ n; k ]∫_0^1 dα α^n-kd^k/d(ξν)^kK(α, ξν, α_s, z^2μ^2)|_ν = 0 .
In this paper we are concerned with the spin-1/2 unpolarized GPDs, so K(α, ξν) is even in ξν, and thus only the c_n+1,k where k is even are non-zero. In particular, c_n+1,k is always real. The ξ dependence in Eq. (<ref>) is explicitly compatible with the well-known polynomiality property of GPDs.
We now list the first few values of c_n+1,k at order 𝒪(α_s), using the convention that
L = ln(-e^2γ_E+1/4z^2μ^2) :
skewness independent:
c_1,0 = 1
c_2,0 = 1 - α_sC_F/2π(-4/3 L + 14/3)
c_3,0 = 1 - α_sC_F/2π(-25/12 L + 47/6)
c_4,0 = 1 - α_sC_F/2π(-157/60 L + 931/90)
quadratic skewness:
c_3,2 = -α_sC_F/2π(5/12 L - 7/6)
c_4,2 = -α_sC_F/2π(11/20 L - 53/30)
The coefficient of -α_s C_FL/2π in c_n+1,0 is simply the leading-order DGLAP anomalous dimension. For ξ≠ 0, the GPD Mellin moments mix among themselves through the coefficients c_n+1, k with k > 0. It is convenient to write the matching in matrix form:
[ F_1(ξ, t, z^2); F_2(ξ, t, z^2); F_3(ξ, t, z^2); F_4(ξ, t, z^2); ⋯ ] = [ c_1,0 0 0 0; 0 c_2,0 0 0; ξ^2c_3,2 0 c_3,0 0 ; 0 ξ^2 c_4,2 0 c_4,0 ; ⋱ ]×[ F_1(ξ, t, μ^2); F_2(ξ, t, μ^2); F_3(ξ, t, μ^2); F_4(ξ, t, μ^2); ⋯ ] .
Then, by isolating the logarithmic part of the kernel, one can introduce an expression resummed at leading-logarithmic accuracy as:
[ F_1(ξ, t, μ^2); F_2(ξ, t, μ^2); F_3(ξ, t, μ^2); F_4(ξ, t, μ^2); ⋯ ] = G^-1(ξ) A(μ^2,z^2) G(ξ) ×(I - B(ξ, z^2)) ×[ F_1(ξ, t, z^2); F_2(ξ, t, z^2); F_3(ξ, t, z^2); F_4(ξ, t, z^2); ⋯ ] ,
where A is the matrix of the leading-logarithmic resummed DGLAP evolution
A(μ^2, z^2) = [ 1 0 0 0; 0 r^-4/3 0 0; 0 0 r^-25/12 0; 0 0 0 r^-157/60; ⋱ ] with r = (α_s(-1/z^2)/α_s(μ^2))^C_F/(2πβ_0) .
Here β_0 = (33-2n_f)/(12π) ≈ 0.716, G is the matrix diagonalizing the evolution of moments of GPDs (a specific type of Gegenbauer moments for leading-order evolution),
G(ξ) = [ 1 0 0 0; 0 1 0 0; -ξ^2/5 0 1 0; 0 -3ξ^2/7 0 1; ⋱ ] ,
and I is the identity matrix of the relevant dimension. The non-logarithmic part of the matching kernel is
B(ξ, z^2) = [ 0 0 0 0; 0 -a_s (-4/3l+14/3) 0 0; -ξ^2a_s(5/12l-7/6) 0 -a_s (-25/12l+47/6) 0; 0 -ξ^2a_s(11/20l-53/30) 0 -a_s (-157/60l+931/90); ⋱; ] ,
where a_s = α_s(-1/z^2) C_F / (2π) and l = ln(e^2γ_E+1/4) ≈ 0.768.
We have taken the “natural” scale of non-local lattice calculations to be -1/z^2, but one can easily choose to vary this scale. It is important to notice that the expression of Eq. (<ref>) is far from unique, and several sensible perturbative choices would result in different formulations, differing by contributions of order 𝒪(α_s^k L^m) where k ≥ 2 and m ≤ k-1. In the simpler context of PDFs, more complicated choices of perturbative approach and some of their properties were explored in <cit.>, where the existence of this ambiguity motivates the desire to compute at least the logarithmic part of the kernel directly on the lattice.
Now we introduce the polynomiality property of GPDs in the absence of a D-term because we are dealing with non-singlet distributions:
F_n(ξ, t, μ^2) = ∑_k=0, k even^n ξ^k A_n,k(t, μ^2) .
In matrix form, this reads
[ F_1(ξ, t, μ^2); F_2(ξ, t, μ^2); F_3(ξ, t, μ^2); F_4(ξ, t, μ^2); ⋯ ] = [ 1 0 0 0 0 0 ; 0 1 0 0 0 0 ; 0 0 1 ξ^2 0 0 ; 0 0 0 0 1 ξ^2 ; ⋱ ]×[ A_1,0(t,μ^2); A_2,0(t,μ^2); A_3,0(t,μ^2); A_3,2(t,μ^2); A_4,0(t, μ^2); A_4,2(t, μ^2); ⋯ ] ,
and the final matching is obtained by inserting this expression into Eq. (<ref>), and evaluating the result for a sufficient range of different values of ξ to obtain invertibility (one value of ξ for F_1,2 and two values for F_3,4 are enough).
§.§ Discrete and Lorentz symmetries of Ioffe-time GPDs
Let us first understand the symmetries of the amplitudes 𝒜_k entering the Lorentz decomposition of Eq. (<ref>) with respect to ν→-ν and ξ→-ξ. In this section, for brevity we choose a gauge where the link connecting the quark and anti-quark fields is unity. Since it is a gauge invariant quantity the discrete symmetries will not be different in other gauges. The parity (P), charge conjugation (C), and time reversal (T) properties of the Wilson line operator O^μ(z)=ψ̅(-z/2)γ^μψ(z/2) in Eq. (<ref>) and its (anti)-hermitian combination O^μ_±(z) =1/2(O^μ(z) ± O^μ(-z)) are given by:
PO^μ(z)P = (-1)^μψ̅(z/2)γ^μψ(-z/2) = (-1)^μ O^μ(-z) → PO_±^μ(z)P = ±(-1)^μ O_±^μ(z)
CO^μ(z)C = -ψ̅(z/2)γ^μψ(-z/2) = -O^μ(-z) → CO_±^μ(z)C = ∓ O_±^μ(z)
TO^μ(z)T = (-1)^μψ̅(-z/2)γ^μψ(z/2) = (-1)^μ O^μ(z) → TO^μ_±(z)T = (-1)^μ O^μ_±(z) ,
where (-1)^μ = (1, -1, -1, -1) and the operators have the following properties with respect to hermitian conjugation:
O^μ(z)^† = ψ̅(z/2) γ^μψ(-z/2) = O^μ(-z) → O^μ_±(z)^† = ± O^μ_±(z) .
By using the combined PT symmetries, the matrix element in Eq. (<ref>), written now with explicit helicity arguments, obeys
M^μ(p_f,p_i, z, λ_f, λ_i) = M^μ(p_i,p_f, z, -λ_i, -λ_f) ,
which swaps the sign of ξ. The relation between amplitudes with ±ξ can be determined term by term from this relationship once the changes in the kinematic factors in the Lorentz decomposition of Eq. (<ref>) are known. With the spinors employed in this study, the bilinears u̅(p_f,λ_f) Γ u(p_i,λ_i) equal λ_fλ_i [u̅(p_f,-λ_f) Γ u(p_i,-λ_i)]^∗ for Γ=1, γ^μ, σ^μν. Note that all structures with the tensor bilinear are accompanied by an additional factor of i that generates an additional sign when complex conjugated. Under these transformations, we can identify the following relation obeyed by the kinematic factors labeled using k ∈{1,..,8}:
K_k^μ(p_f,p_i, z, λ_f, λ_i) = λ_fλ_i Z^PT_k K_k^μ(p_i,p_f, z, -λ_i, -λ_f) ,
where Z^PT_k is the parity of the amplitudes k under ξ→-ξ, which takes the values Z^PT_3,5,7=-1 and Z^PT_k≠3,5,7=+1.
Next, consider the impact of hermiticity:
M^μ(p_f,p_i,z,λ_f, λ_i)^∗ = M^μ(p_i,p_f,-z,λ_i, λ_f) ,
which swaps the sign of ξ and ν simultaneously. The parity of the kinematic factors under hermiticity is given by Z^H_2,5,6,8=-1 and Z^H_k≠2,5,6,8=+1. Therefore, the amplitudes are related to their complex conjugates as:
k(ν,ξ) = Z^H_k k(-ν,-ξ)^∗ .
Combining this result with the previous rule, we find that the real part of the amplitudes changes with the sign of ν following Z^ν_k=Z^H_k Z^PT_k which is Z^ν_k=-1 for k=2,3,6,7,8 and Z^ν_k=+1 otherwise. The opposite is true for the imaginary parts.
To understand the implications of this behavior, consider the (anti-)hermitian operators O_±^μ(z), whose matrix elements are purely real or imaginary in the forward case, (p_f, λ_f) = (p_i, λ_i). In the off-forward case the matrix elements M_±^μ(p_f,p_i, z)=1/2(M^μ(p_f,p_i, z) ± M^μ(p_f,p_i, -z)) are given by
M_±^μ(p_f,p_i, z) = 1/2∑_k [K_k^μ(p_f,p_i, z)𝒜_k(ν,ξ,t,z^2) ± K_k^μ(p_f,p_i, -z) 𝒜_k(-ν,ξ,t,z^2)]
=1/2∑_k K_k^μ(p_f,p_i, z)[𝒜_k(ν,ξ,t,z^2) ± Z^z_k 𝒜_k(ν,ξ,t,z^2)^∗] ,
where Z^z_k is the parity of the kinematic factors in the sign of z. As can be seen, the hermitian and anti-hermitian operators expose the real and imaginary components of the amplitudes, but the relation depends on the amplitude. The phases in the matrix elements are entirely controlled by the kinematic factors and choices in defining spinors within the calculations.
Finally, let us observe how these discrete symmetries interact with the Lorentz symmetry, which materializes through the polynomiality property, Eq. (<ref>). The feature of polynomiality is an intricate relationship in the ξ dependence of the moments of GPDs. It arises naturally in the double distribution representation, which describes GPDs in x-space as (except for the D-term which is irrelevant for the non-singlet GPDs we are studying here):
H(x,ξ) = ∫_-1^1 dβ∫_-1+|β|^1-|β|dα δ(x-β - αξ) h(β,α) .
We have omitted the t-dependence and the scale of the double distribution and GPD.
It is easy to observe that the parity in α and β of the double distribution controls respectively the ξ and x parity of the GPD.
The relation between the (ν,ξ) space of Ioffe-time distributions and (β,α) of double distributions is given by integrals of the form:
𝒜(ν,ν̅=ξν) = ∫_-1^1 dβ e^iνβ∫_-1+|β|^1-|β|dα e^iν̅α h(β,α) .
Therefore, definite parity in α of the double distribution is equivalent to the same definite parity in ξ of the amplitude 𝒜. Provided the double distribution h is real, evenness in β is equivalent to 𝒜 even in ν and real, whereas oddness in β is equivalent to 𝒜 odd in ν and imaginary.
§ NUMERICAL IMPLEMENTATION
Previous calculations from the HadStruc collaboration isolated the isovector twist-2 quark PDFs <cit.> and unpolarized and helicity gluon PDFs <cit.> of the nucleon using a combination of distillation <cit.> and the pseudo-distribution formalism. We use the same gauge ensemble, featuring an m_π=358 MeV pion mass within a 32^3×64 lattice volume with an a=0.094 fm lattice spacing–we denote this ensemble as a094m358. The a094m358 ensemble is an isotropic 2+1 flavor Wilson clover fermion ensemble generated by the JLab/W&M/LANL/MIT collaboration <cit.>, with the strange quark fixed to its physical value and Wilson clover fermion sea quarks. A total of 348 configurations with four distillation “sources” evenly separated in the time extent per configuration (detailed below) are utilized to determine all matrix elements. Aspects of the a094m358 ensemble are summarized in Table <ref>–further details can be found in Refs. <cit.>.
To isolate the bare isovector matrix elements of the space-like quark bilinear in Eq. (<ref>) we require two-point functions and
connected three-point functions with the insertion of the operator ψ̅(-z/2)γ^μ W^(f)(-z/2,z/2;A)τ^3/2ψ(z/2), where the flavor isovector projector τ^3/2 is included. This projector eliminates statistically noisy disconnected diagrams and gives the difference of the u and d quark distributions.
Placing the operator symmetric about the origin, with quark and anti-quark fields at (z/2) and (-z/2), ensures the operator has transparent C,P,T and hermiticity properties, reviewed in Section <ref>.
Inserting complete sets of energy eigenstates, the spectral content of the two- and three-point correlation functions reads
C_2 pt(p⃗,T) =⟨𝒩(-p⃗,T)𝒩̅(p⃗,0)⟩=∑_n|𝒵_n(p⃗)|^2/2E_n(p⃗)e^-E_n(p⃗)T
C_3 pt^[γ^μ](p⃗_f,p⃗_i,T;z,τ) =⟨𝒩(-p⃗_f,T)ψ̅(-z/2,τ)γ^μ W^(f)(-z/2,z/2;A)ψ(z/2,τ)𝒩̅(p⃗_i,0)⟩
=∑_n',n𝒵_n'(p⃗_f)𝒵_n^†(p⃗_i)/4E_n'(p⃗_f)E_n(p⃗_i)⟨n'|𝒪^μ(-z/2,z/2;A)|n⟩e^-E_n'(p⃗_f)(T-τ)e^-E_n(p⃗_i)τ ,
with the source and sink interpolating fields 𝒩 separated by a Euclidean time T, and the momentum-dependent interpolator-state overlaps given by 𝒵_n(p⃗_i,f). The bare quark bilinear, abbreviated by 𝒪^μ(z,τ;A), is introduced between the source and sink interpolators for Euclidean times 1≤τ≤ T-1. It is the ground state, or n,n'=0 in Eqs. (<ref>), that corresponds to the physical nucleon state of interest.
Our computation of the correlation functions in Eqs. (<ref>) and (<ref>) using distillation <cit.> provides potential cost-saving benefits when mapping x-dependent GPDs because distillation enables us to efficiently calculate many combinations of source and sink momenta while reusing the most expensive components of the calculation. The Wick contracted quark fields smeared with distillation at both source and sink reduce Eqs. (<ref>) and (<ref>) into traces over products of distinct, reusable computational units in the distillation space. The independently generated hadron elementals and perambulators encode the operator construction at source/sink and quark propagation between time slices, respectively. The elementals and perambulators are shared between both two-point and three-point correlation functions (see <cit.> Fig. 1). The color and spatial degrees of freedom (a N_c × L^3 dimensional space for quarks) are projected into the lowest modes of the Laplacian (a N_vec dimensional space). Since hadronic physics is dominated by the long-distance modes of QCD, which the lowest modes of the Laplacian are a proxy for, the number of modes required to reproduce a ground state hadronic correlation function is much smaller than triple the volume of modern lattices.
However, correlators computed in distillation effectively sample the whole source and sink timeslices and therefore are equivalent to a large number of smeared point-source propagators. As a result, our computation with four time sources on 348 configurations results in much more precise matrix elements than traditional approaches.
A substantial amount of the computational resources for this calculation are expended in computing generalized perambulators, or genprops for short:
=∑_y⃗ξ_a^(i)†(T_f)D^-1_ασ;ac(T_f;τ,y⃗)Γ(τ)D^-1_ρβ;db(τ,x⃗;T_i)ξ_b^(j)(T_i) ,
where color (Lorentz) indices are denoted by Latin (Greek) letters, D^-1 are inverses of the Wilson-clover fermion operator, and the Dirac structure of the insertion is abbreviated Γ=γ_σρ^μ e^-iq⃗·y⃗δ_y⃗,x⃗+zẑW_cd(x⃗+zẑ,x⃗). The ξ^(k) are eigenvectors of the three-dimensional gauge-covariant Laplacian at a time slice T,
-∇^2(T)ξ^(k)(T)=λ^(k)(T)ξ^(k)(T) ,
that defines the distillation operator. In practice, we compute inversions against eigenvectors at both source and sink and use γ_5-hermiticity to assemble each genprop. Within the distillation approach, we only need to produce genprops with all desired Dirac matrices, Wilson line lengths, and 3-momentum transfers {q⃗}. We are then free to choose any off-forward matrix element, defined by final (initial) momenta p⃗_f (p⃗_i), that is consistent with the pre-determined set of momentum transfers {q⃗}, and compute that matrix element via comparatively cheap tensor contractions.
The distillation framework affords two particular advantages over other methods generally employed. The first is that it enables a momentum projection to be performed at each point in our two- and three-point functions. Thus it provides a far better sampling of the gauge configurations. Second, the factorization of correlation functions into the convolution of the perambulators describing the quark propagation and the elementals representing the interpolating operators admits the use of a variational method; we defer the use of this for a later work.
In this manner, distillation provides straightforward access to what has been called in the literature “symmetric” and “asymmetric” matrix elements <cit.> without repeating nearly identical inversions when varying the sources/sinks, which are otherwise required for standard Gaussian-smeared correlation functions computed with the help of sequential inversion methods <cit.>.
We consider the following nineteen distinct three-momentum transfers, aq⃗ = 2π/L × n⃗_q, within this study:
Little group  Direction  n⃗_q  |q⃗| [GeV]
Dic_4E_1  [n00]  (0,0,±1), (0,0,±2), (0,1,0), (1,0,0)  0.412, 0.824, 0.412, 0.412
Dic_2E_1  [nn0]  (0,1,±1), (1,1,0), (2,0,±2)  0.583, 0.583, 1.166
Dic_3E_1  [nnn]  (1,1,±1)  0.714
C_4EE_1  [nm0]  (0,1,±2), (2,0,±1)  0.922, 0.922
C_4EE_1  [nnm]  (1,1,±2)  1.010
where the three-momentum transfers q⃗ are grouped according to the double-cover cubic little group (see section <ref>) that leaves each n⃗_q invariant, and n,m∈ℤ\{0} with n≠ m. We find this modest set of q⃗ is capable of sampling the x-dependent GPDs across an impressive collection of skewness and momentum transfer combinations {ξ,t}, the details of which we defer to section <ref> and Appendix <ref>.
§.§ Interpolator Construction
Mass eigenstates in the continuum are labelled according to angular momentum and parity quantum numbers J^P, each of which corresponds to an irreducible representation (irrep) of the continuous rotation group O(3). Fermionic and bosonic states are subsequently described by double and single-valued representations, respectively. The projection of J along a standard axis, typically taken to be J_z, provides a further quantum number and is used to label the rows of the representation.
On a finite isotropic lattice the continuum rotational symmetry reduces to the finite-dimensional cubic group or its dual, the octahedral group O_h. Continuum mass eigenstates in this environment are characterized by their patterns of subduction across the finite number of irreps Λ of O_h. As the nucleon is our numerical arena in this study, we will be concerned with irreps of the double-cover octahedral group O_h^D. Due to this implied many-to-one mapping, the spectrum of the ground-state nucleon will be contaminated not only by excited states of the same continuum J^P quantum numbers but by higher spin states as well.
For continuum states in motion, J_z ceases to be a good quantum number and is replaced by helicity λ; J^λ then labels the irreps of the little group, the subgroup of O(3) that leaves the momentum vector invariant. Unlike the continuum little group, which is independent of momentum direction, the O_h^D symmetry is broken further into little groups that depend on the group of rotations that leave p⃗ invariant <cit.>, further compounding the mixing of (continuum) mass eigenstates.
The reduced symmetry of an isotropic cubic lattice and the mixing of distinct continuum J^P states motivates our construction of nucleon interpolators that transform irreducibly under the irreps of O_h^D and its little groups. Our strategy for isolating the 1/2^+ ground-state nucleon with distillation hinges on first building continuum interpolators 𝒪^J,P,M(p⃗=0⃗) that possess definite J^P and flavor quantum numbers <cit.>. This starting point reads symbolically
⟨p⃗=0⃗;J',P',M'|[𝒪^J,P,M(p⃗=0⃗)]^†| 0⟩ =Z_Jδ_JJ'δ_PP'δ_MM' ,
where the overlap of a continuum spin-J' state with the continuum interpolator 𝒪^J,P,M is denoted by Z_J. To reliably isolate nucleons with a momentum p⃗ within our isotropic lattice, which breaks O_h^D into its little groups, we implement an algorithm <cit.> first envisioned for mesons in flight. The starting point is continuum helicity operators, which are obtained by boosting our continuum operators at rest by an amount |p⃗| along their quantization axis ẑ, followed by a series of active rotations that enforce a change of basis from J_z to λ. This process reads symbolically as:
[𝕆^J,P,λ(p⃗)]^†=∑_M𝒟_Mλ^(J)(R)[𝒪^J,P,M(|p⃗|ẑ)]^† ,
where a continuum helicity operator 𝕆^J,P,λ(p⃗) is obtained via application of a Wigner-𝒟 matrix 𝒟_Mλ^(J)(R) onto the boosted J_z-quantized operator 𝒪^J,P,M(|p⃗|ẑ). Ref. <cit.> highlights the ambiguity within a finite cubic volume in rotating from |p⃗|ẑ to p⃗ in this manner. A consistent convention is established by splitting the rotation R into two distinct rotations,
R=R_ latR_ ref ,
where R_ ref rotates |p⃗|ẑ to a reference vector p⃗_ ref[Each O_h^D little group, defined by the rotations leaving p⃗ invariant, is assigned a reference direction such that momenta within the same little group (e.g. (0,1,0) and (-2,0,0)) can be treated consistently. For example, we assign momenta within the Dic_2 little group, such as (1,0,1), the reference direction p⃗_ ref=(0,1,1). Euler angles within the zyz-convention then rotate |p⃗|ẑ to p⃗_ ref.], and R_ lat is a lattice rotation that rotates p⃗_ ref to p⃗. In general, R_ ref may not be allowed within a cubic volume, however, its action is permissible given that it merely produces the aforementioned basis change. We have validated this procedure for all possible momenta within a finite cubic volume, with the exception of momenta of the form p⃗=(n,m,l), with distinct n,m,l∈ℤ\{0}, which are left invariant under the C_2 little group of O_h^D. Hence, we will exclude C_2-type momenta from this study. The resulting continuum helicity operators respect the following overlap criterion:
⟨p⃗;J',P',λ'|[𝕆^J,P,λ(p⃗)]^†| 0⟩=Z_J'J;P'P;λδ_λλ' .
In motion, parity P and spin J are no longer conserved quantities, because their symmetries are broken by the explicit direction p⃗, while helicity remains conserved in flight. The construction of our interpolators, or equivalently subduced helicity operators, is finalized by specifying how a continuum helicity λ subduces into a little group irrep Λ with rows μ∈{1,⋯, dim(Λ)}:
[𝕆^[J,P,|λ|]_Λ,μ(p⃗)]^†=∑_λ̂=±λ[S_Λ,μ^J,λ̂]^∗[𝕆^J,P,λ̂(p⃗)]^† ,
where the subduced helicity operators used in our study are linear combinations of each value of continuum helicity, the precise combinations of which are set by so-called subduction coefficients S_Λ,μ^J,λ̂. As a consequence, the continuum energy eigenstates as part of the spectral content of the three-point functions in Eq. (<ref>) are instead subduced eigenstates of helicity.
Interpolators constructed in this manner are assigned a compact spectroscopic notation X^2S+1L_𝒫J^P, with X denoting the hadronic state, S the overall Dirac spin, L the angular momentum induced by covariant derivatives with permutation symmetry 𝒫, and J^P the continuum spin/parity quantum numbers. We select the simplest non-relativistic and spatially-local operator, denoted N^2S_s1/2^+, to interpolate the ground-state nucleon from the vacuum. A follow-on study could make use of an expanded operator basis (such as in Refs. <cit.>) to improve excited state control within each p⃗_i/p⃗_f momentum channel.
§.§ Computing the kinematic matrix
The connection between the physically relevant matrix elements and the subduced matrix elements we extract from the three-point correlation functions was established in <cit.> for the isovector quark helicity PDFs of the nucleon. In the interest of self-containment, we highlight the points that are key to the extraction of the GPDs.
To isolate the Lorentz invariant amplitudes, the subduced matrix elements can be related to the helicity matrix elements, which requires inverting the effect of the subduction coefficients S^J,λ̂_Λ,μ, Eq. (<ref>), that describe the way the different helicity λ states are subduced into the various rows μ∈{1, dim(Λ)} of the irreps Λ. Following the conventions of <cit.>, the creation operators require the complex conjugate of the coefficient. Afterwards, the Lorentz decomposition can be applied to the continuum matrix element.
Finally, the subduced matrix elements are given by a linear combination of invariant amplitudes:
⟨p⃗_f,Λ,μ_f|𝒥^Λ_Γ,μ_Γ|p⃗_i,Λ,μ_i⟩=∑_k∑_λ_f,λ_Γ,λ_iS^Λ,J_μ_f,λ_f[S_μ_i,λ_i^Λ,J]^*𝒦_k(λ_f[J,p⃗_f];λ_i[J,p⃗_i])𝒜_k(ν,ξ,t,z^2) .
Here each term is weighted by a helicity-dependent kinematic prefactor 𝒦_k(λ_f[J,p⃗];λ_i[J,p⃗]), given as the Lorentz covariant coefficient of 𝒜_k in Eq. (<ref>), whose irreducibility under the relevant cubic (sub)group is encoded via subduction coefficients.
Following the procedure in <cit.>, one need not relate the subduced matrix element to the helicity matrix element as an intermediate step. Instead one can calculate the kinematic matrices,
K_k (μ_f, μ_i; [Λ, p⃗]) =∑_λ_f,λ_Γ,λ_iS^Λ,J_μ_f,λ_f[S_μ_i,λ_i^Λ,J]^*𝒦_k(λ_f[J,p⃗];λ_i[J,p⃗]) ,
and invert the relationship to obtain the amplitudes directly. As discussed in <cit.>, this viewpoint exposes the ever-present potential issues with mixing between states of different parity or continuum spin, all of which belong to the same finite volume irrep Λ. In QCD, the spectrum is ordered in spin, and ultimately the desired ground state is the 1/2^+ nucleon. These other states contribute to excited state contamination that can be modeled and removed. As such, the kinematic matrices will use only spin-1/2 spinors.
The construction of the subduced spinors was outlined in detail in <cit.>. The algorithm begins with constructing a continuum spinor with spin and momentum in the z direction. This spinor is then rotated, as the interpolating field was in Eq. (<ref>), to align with the correct lattice-allowed momenta. Finally, the spinor is subduced into the finite volume Λ irrep as the interpolator was in Eq. (<ref>). The convenience of parameterizing a rotation with Euler angles is that these rotations are independent of the spin representation of the object being considered. Armed with subduced spinors, we can construct the Lorentz covariant coefficients of each of the desired invariant amplitudes in their relation to the subduced matrix elements.
§.§ From the matrix element to the amplitudes
To isolate each amplitude individually we need to consider many combinations of initial/final rows as well as Γ structures. We expand upon the strategy of <cit.> to address this challenge.
In the case of the unpolarized quark forward matrix element, the nucleon matrix element M^0(p,p,z) was sufficient to isolate 𝒜_1(ν, ξ=0, t, z^2) and obtain a PDF. With so many amplitudes in the off-forward case, Eq. (<ref>), this is impossible, just as it was in the case of helicity quark and gluon matrix elements <cit.>. While fixing the momenta and separation, the matrix element is considered in the four possible initial and final spin combinations[Regardless of source or sink momentum, each cubic irrep relevant for the (continuum) J^P=1/2^+ nucleon is two-dimensional.] and for each of the four γ^μ matrices. These combined sixteen matrix elements can be used to isolate the eight unknown 𝒜_k given the matrix relation, with 𝐌 a 16×1 vector, 𝐊 a 16×8 matrix, and 𝐀 an 8×1 vector,
𝐌=𝐊𝐀 ,
where we define a column vector of fitted subduced matrix elements by
𝐌=
[ [ M^0(p_f,p_i,z)_11; M^0(p_f,p_i,z)_12; M^0(p_f,p_i,z)_21; M^0(p_f,p_i,z)_22 ]^T
[ M^1(p_f,p_i,z)_11; M^1(p_f,p_i,z)_12; M^1(p_f,p_i,z)_21; M^1(p_f,p_i,z)_22 ]^T
[ M^2(p_f,p_i,z)_11; M^2(p_f,p_i,z)_12; M^2(p_f,p_i,z)_21; M^2(p_f,p_i,z)_22 ]^T[ M^3(p_f,p_i,z)_11; M^3(p_f,p_i,z)_12; M^3(p_f,p_i,z)_21; M^3(p_f,p_i,z)_22 ]^T ]^T .
The kinematic matrix is built from each Lorentz covariant structure in Eq. (<ref>) sandwiched between initial/final-state subduced spinors:
𝐊=
[ ⟨⟨γ^0⟩⟩_11 z^0 ⟨⟨1⟩⟩_11 i⟨⟨σ^0 z⟩⟩_11 ⋯ i/2mz^0⟨⟨σ^zΔ⟩⟩_11; ⋮ ⋮ ⋮ ⋮; ⟨⟨γ^1⟩⟩_11 z^1⟨⟨1⟩⟩_11 i⟨⟨σ^1 z⟩⟩_11 ⋯ i/2mz^1⟨⟨σ^zΔ⟩⟩_11; ⋮ ⋮ ⋮ ⋮; ⟨⟨γ^2⟩⟩_11 z^2⟨⟨1⟩⟩_11 i⟨⟨σ^2 z⟩⟩_11 ⋯ i/2mz^2⟨⟨σ^zΔ⟩⟩_11; ⋮ ⋮ ⋮ ⋮; ⟨⟨γ^3⟩⟩_11 z^3⟨⟨1⟩⟩_11 i⟨⟨σ^3 z⟩⟩_11 ⋯ i/2mz^3⟨⟨σ^zΔ⟩⟩_11; ⋮ ⋮ ⋮ ⋮; ] ,
and the vector
𝐀=
[ 𝒜_1 𝒜_2 𝒜_3 𝒜_4 𝒜_5 𝒜_6 𝒜_7 𝒜_8 ]^T
contains the required amplitudes with omitted (ν,ξ,t,z^2) arguments. The subscripts in each entry of Eqs. (<ref>) and (<ref>) are integer pairs designating the sink/source irrep row combination (μ_f μ_i); red dots are employed for brevity to indicate repeated Lorentz covariant structures sandwiched between distinct (μ_fμ_i) combinations.
Given the linear system of sixteen equations for eight amplitudes (<ref>), our goal is to obtain the most probable 𝒜_k. The traditional solution to an over-constrained linear system of equations is to find the minimum of χ^2=||𝐌 - 𝐊𝐀||_2^2, where ||.||_2 denotes the l^2 norm. The solution is given by 𝐀̅ = 𝐊^+ 𝐌, where 𝐊^+ is the pseudoinverse of 𝐊 <cit.>. The pseudoinverse can be calculated via singular value decomposition (SVD). Numerically, we use a cutoff on the relative size of the smallest to the largest singular value of ϵ=10^-15 in the pseudoinverse routine of a publicly available linear algebra package. This cutoff formally shifts the matrix away from the original definition of the Moore-Penrose pseudoinverse but increases the numerical stability.
Half of the equations in Eq. (<ref>) are related to the other half by various symmetries. Ultimately the four helicity combinations only give two independent equations. For most kinematics, this is enough to reliably reconstruct the eight amplitudes 𝒜_k. A few specific situations arise however:
* If z = 0, the kinematic factors of all amplitudes except 𝒜_1,4,5 cancel. Those are therefore the only amplitudes that we can hope to extract. We will discuss the results at z = 0 specifically in section <ref>.
* If the spatial momentum transfer q⃗ = p⃗_f - p⃗_i is exactly along the z axis, several issues may appear. For some kinematic pairs and z ≠ 0 (respectively z = 0), there are fewer than eight (respectively three) singular values in the kinematic matrix, meaning that there exist exact (up to ϵ=10^-15) linear relations between some amplitudes that cannot be separated from one another. In other cases, in spite of eight different singular values, the unitary matrices entering the SVD present large similarities in their columns and result in a very bad case of uncertainty propagation. As we observe in a consistent fashion that these kinematics behave poorly, we remove them altogether. They are not included in the summary plot of Fig. <ref>.
* If z ≠ 0 and q⃗·ẑ = 0, then we observe that, due to our choice of z^μ = (0, 0, 0, z_3), only the M^3(p_f, p_i, z) matrix elements are sensitive to 𝒜_2 and 𝒜_8. In practice, this means that the M^3 matrix elements are entirely and exclusively used to characterize those two amplitudes. Although it has been argued that using M^3 introduces a finite mixing <cit.>, using it does not change the extraction of any other amplitude, and therefore does not impact our light-cone GPDs, which do not involve either 𝒜_2 or 𝒜_8.
When z = 0, the 𝒜_2 and 𝒜_8 amplitudes are absent from the decomposition and the M^3 matrix elements have a constraining power on the amplitudes 𝒜_1,4,5. We observe, however, that the difference is typically of the order of a few percent at most on 𝒜_1,4, which contain the elastic form factors (EFFs). As we will see later, we perform the extraction of the EFFs using both the z = 0 and z ≠ 0 data, and obtain very similar results up to a 𝒪(z^2) contamination which cannot be attributed to the M^3 data (since it plays no role when z ≠ 0). We give more details on the question of the use of M^3 in appendix <ref>.
One can simply test this given the actual kinematic matrix 𝐊 and a fake set of amplitudes such as 𝐀_i=i: if 𝐀≠𝐊^+ 𝐊𝐀, that is 𝐊^+𝐊≠𝐈, then the kinematic matrix lacks sufficient constraints. In testing whether 𝐀 = 𝐊^+ 𝐊𝐀, one may recognize that some amplitudes are still correctly constrained even if not all of them are. In that case, those amplitudes are still reconstructable from the given data. The amplitudes that fail to be reconstructed are those that appear in the singular vectors corresponding to the singular values that are zero. Note that this counting test is not by itself sufficient: as discussed above, uncertainty propagation can spoil the extraction even when the full set of non-zero singular values is present. We have confirmed that for the momenta combinations we use in this study 𝐊 possesses eight non-zero singular values and is associated with eight distinct right singular vectors that project onto each 𝒜_i.
As an extension to this approach, one could consider the effect of additional momentum combinations that select the same values of ν, ξ, and t to produce an even more constrained system of equations. This could especially help in reducing lattice systematic effects, such as discretization errors of 𝒪(aq). Using the relationships in section <ref> between amplitudes of ±ν and ±ξ, even more combinations can be included. In this paper, we limit ourselves to averaging the ± z_3 separations and performing an independent amplitude extraction for each pair (p⃗_f, p⃗_i). Nonetheless, we have a certain redundancy, because we have data for ±ξ and various ν at the same or very nearby t, which will enable us to reduce some of the lattice artifacts.
An alternative possibility to perform the extraction is to remove or average directly the trivial duplicate constraints that the zero singular values represent and form a true matrix inverse. The solution of this linear inverse would exactly reproduce the analog of the analytic formulas presented in <cit.> to extract the amplitudes from the matrix elements. In our calculation, the spinors required to calculate ⟨⟨Γ⟩⟩_μ_f μ_i are complicated functions of the momenta, depending upon its magnitude, the specific finite volume irreducible representation, and choices of convention. With many momenta combinations, solving the analytic formulas would require a numerical algebra implementation for practical use. The approach outlined in this section replaces the step of re-deriving these analytic solutions for varying considerations such as off-axis separations or new momentum frames with a generic linear algebra problem.
Finally, we have cross-checked the stability of our SVD extraction by using an l^1-norm minimization of ||𝐌 - 𝐊𝐀||_1, also known as least absolute deviation (LAD) estimator <cit.>. This procedure has disadvantages compared to SVD because it requires an iterative minimization for each jack-knife sample of the data, which is hundreds of times
computationally more expensive than a simple matrix operation. The results are consistent, as one can check in appendix <ref>, where we present the extraction of a full set of amplitudes in some kinematics.
§ ANALYSIS OF CORRELATION FUNCTIONS
We isolate the desired bare matrix element M^μ(p⃗_f,p⃗_i,z) using two strategies to evaluate possible systematics linked to excited state contamination. Our first procedure is the summation method <cit.>. We form an optimized ratio of three-point and two-point correlation functions,
R^μ(p⃗_f,p⃗_i,z;T,τ)=C^μ_3 pt(p⃗_f,p⃗_i,z;T,τ)/C_2 pt(p⃗_f;T)√(C_2 pt(p⃗_i;T-τ)C_2 pt(p⃗_f;τ)C_2 pt(p⃗_f;T)/C_2 pt(p⃗_f;T-τ)C_2 pt(p⃗_i;τ)C_2 pt(p⃗_i;T)) ,
where T is the temporal separation between the source/sink interpolating fields and 0≤τ≤ T is the current insertion time slice. The three-point correlation functions are computed for the values of T ∈{4,6,8,10,12,14} and every 1 ≤τ≤ T-1. For asymptotically large temporal separations (0≪τ≪ T), the ratio will become proportional to the desired matrix element,
R^μ(p⃗_f,p⃗_i,z;T,τ) τ,T→∞⟶1/√(2)×1/√(4E_0(p⃗_f)E_0(p⃗_i))M^μ(p_f,p_i,z) .
The additional factor of √(2) is due to the normalization of the isovector vector current.[
The electromagnetic current in the light-quark sector reads 𝒥^μ=2/3u̅γ^μ u-1/3d̅γ^μ d, which can be expressed in terms of isovector ρ^μ and isoscalar ω^μ components according to 𝒥^μ=ρ^μ/√(2)+ω^μ/3√(2), where ρ^μ=(u̅γ^μ u-d̅γ^μ d)/√(2) and ω^μ=(u̅γ^μ u+d̅γ^μ d)/√(2). We use the current ρ^μ to probe the isovector flavor structure of the off-forward nucleon matrix elements.] We form R^μ(p⃗_f,p⃗_i,z;T,τ) by treating both three-point and all two-point correlators as complex-valued functions, as the construction of the subduced helicity operators we use as interpolators, detailed in section <ref>, involves a delicate interplay of complex phases. A depiction of the ratio for a specific choice of p⃗_f and p⃗_i is given on the left panel of Fig. <ref>.
Then the matrix elements are extracted by summing the ratio R^μ(p⃗_f,p⃗_i,z;T,τ) over the insertion time slice τ, excluding all time separations below a threshold t_s,
ℛ_t_s^μ(p⃗_f,p⃗_i,z;T)≡∑_τ/a=t_s^T-t_sR^μ(p⃗_f,p⃗_i,z;T,τ) .
The summed ratio ℛ_t_s^μ(p⃗_f,p⃗_i,z;T) depends linearly on T, with a slope given by the bare matrix element M^μ(p_f,p_i,z); the excited-state contaminations sum into a geometric series that is exponentially suppressed. We apply the functional form
ℛ^μ_ fit(p⃗_f,p⃗_i,z;T)=C+M^μ(p_f,p_i,z)T
to extract the bare matrix element. This procedure should yield errors of the order 𝒪(e^-Δ ET), where Δ E is the energy gap between the lowest-lying effective excited state and the ground state. We show in the right panel of Fig. <ref> the impact of the exclusion of short time separations, both in the ratio method (green points) and in the exponential fits that we will describe now.
Our second procedure extracts the bare matrix elements using exponential fits, following the spectral decompositions in Eqs. (<ref>) and (<ref>).
We first perform a non-linear fit of the ground state and first excited state on the two-point correlation functions, excluding small source-sink separations. For each value of |p⃗|, we use several rotationally equivalent two-point functions to increase the robustness of our fit. As a result, we obtain for each jack-knife sample the fitted values of 𝒵_n(p⃗) and E_n(p⃗) for n = 0 and 1. The dispersion relation of the ground state is shown in Fig. <ref>.
Then we use the fitted values of 𝒵_n(p⃗) and E_n(p⃗) in the decomposition of the three-point correlation function. We first fit the three-point function assuming pure ground state dominance, that is, using only the n = 0 fits of the two-point functions. This gives the blue points in the right panel of Fig. <ref>, where we sequentially exclude the small separations τ and T-τ < t_s. It is clear that the points follow a non-constant trend, a sign of sizeable excited state contamination. Based on these results, we include both the n = 0 and n = 1 states (red points in the right panel of Fig. <ref>), meaning that we fit the three-point function with four free parameters corresponding, in principle, to ⟨ 0|𝒪 |0⟩, ⟨ 0|𝒪 |1⟩, ⟨ 1|𝒪 |0⟩ and ⟨ 1|𝒪 |1⟩. The results are now generally in very good agreement with the ratio fits. In general, we observe that the exponential fit with one excited state seems less prone to fluctuations than the ratio method when increasing the cut in t_s. Therefore, we will use the excited state exponential fit with cuts t_s = 3 and t_s = 4 to include a measure of excited state contamination.
Finally, we take the ratio with the matrix element at p⃗_f = p⃗_i = 0⃗, Λ_f = Λ_i = 1 and μ = 0 to form the RGI reduced matrix element, Eq. (<ref>).
The process of subduction detailed in section <ref> leads to dim(Λ_f)× dim(Λ_i) determinations of each matrix element M^μ(p⃗_f,p⃗_i,z), where Λ_f and Λ_i are the irreps at sink and source, respectively. The fitted matrix elements will be determined for each combination of initial and final interpolator rows μ_i/μ_f.
§ ELASTIC FORM FACTORS
§.§ Local matrix elements
Let us first consider the local limit z=0. Then, Eq. (<ref>) reduces to a decomposition resembling the standard form factors parametrizing an electromagnetic interaction with a spin-1/2 nucleon:
M^μ(p_f,p_i,0) =⟨⟨γ^μ⟩⟩𝒜_1(0,ξ,t,0)+i/2m⟨⟨σ^μ q⟩⟩𝒜_4(0,ξ,t,0)+q^μ/2m⟨⟨1⟩⟩𝒜_5(0,ξ,t,0),
=⟨⟨γ^μ⟩⟩ F_1(t)+i/2m⟨⟨σ^μ q⟩⟩ F_2(t)+q^μ/2m⟨⟨1⟩⟩𝒜_5(0,ξ,t,0) ,
where F_1(t) and F_2(t) are the familiar Dirac and Pauli form factors of the nucleon. The Ward identity q_μ M^μ=0 requires that 𝒜_5 vanishes in the local limit.[The terms q_μ⟨⟨γ^μ⟩⟩ and q_μ⟨⟨σ^μ q⟩⟩ evaluate to zero via the Dirac equation, while the term q_μ q^μ⟨⟨1⟩⟩ does not vanish.] Any departure of 𝒜_5 from zero in the local limit can thus be interpreted as the degree to which lattice artifacts systematically affect a calculation of Eq. (<ref>), since we use the local vector current, which is not the conserved vector current on the lattice.
We present in Fig. <ref> the results for the elastic form factors on the 186 pairs of momenta (p⃗_f, p⃗_i) whose kinematic coverage is displayed in Fig. <ref>. We have merged the error bars obtained with the cuts of lattice time separation t_s = 3 and 4 (see section <ref>). As expected, F_1(t) and F_2(t) are globally independent of ξ. The amplitude 𝒜_5(0, ξ, t, 0) is generally at most a few sigmas away from zero, but there seems to be a non-vanishing signal, loosely odd in ξ as expected from the discrete symmetries of section <ref>. The equivalent term in <cit.> was found to be small, but nonetheless non-zero for z=0. App. <ref> describes the effect on this systematic error from the inclusion or exclusion of M^3 data.
When using the extraction of bare matrix elements from the summation of ratios instead of the exponential fit, an even starker signal of non-vanishing 𝒜_5(t) appears. The fact that 𝒜_5(t) at z = 0 presents an enhanced sensitivity to the treatment of excited states suggests that some underestimation of excited state contamination may play a role in addition to the discretization errors in the non-vanishing value of 𝒜_5(t). On the other hand, F_1(t) and F_2(t) seem generally unaltered by the behavior of 𝒜_5(t) and its skewness dependence. In fact, performing the SVD while ignoring altogether the 𝒜_5(t) contribution does not produce a significant change in the extraction of F_1(t) and F_2(t).
A recent expansive calculation of nucleon form factors by the Nucleon Matrix Elements Collaboration <cit.> included determinations of the electric and magnetic Sachs form factors
G_E(t)=F_1(t)+t/4m^2F_2(t) , G_M(t)=F_1(t)+F_2(t) ,
on a Wilson clover fermion ensemble characterized by a pion mass (m_π∼270 MeV) that is slightly lighter than the a094m358 ensemble we consider in this work. The result is shown in Fig. <ref>. As expected, the extraction with a lighter pion mass has a more pronounced decrease in t.
The next step of our analysis consists in forming bins of data in t, as indicated by the grey regions in Fig. <ref>. This is beneficial for several reasons. First, by averaging over data with various (p⃗_f, p⃗_i) in a single bin, we hope to mitigate some lattice systematic uncertainty. For instance, we have many cases of symmetry ξ⟷ -ξ in our data, but also different magnitudes of (p⃗_f, p⃗_i) in the same bin producing different excited state behavior. Furthermore, the binning results in a decrease in the number of degrees of freedom for the final fit of the t-dependence, meaning that the empirical covariance matrix is a better estimate of the true covariance of the data. Indeed, to estimate the covariance matrix, one should typically have many more measurements than random variables. Here, with 186 kinematic values for 348 gauge configurations, the stability of the empirical covariance matrix is reduced. Trimming down the 186 kinematic values to fifteen bins improves the confidence in this estimation. Finally, the binning gives us access to a significant number of values of (ν, ξν) in each bin when z ≠ 0, which will allow for the extraction of generalized form factors in the next section.
We perform the binning in two steps. First, we fit each amplitude by a constant within each bin. Then we take the fifteen data points obtained in this fashion, and fit an overall t-dependence using a dipole fit,
F(t) = A (1 - t/Λ^2)^-2 .
Next, we displace the points in each bin towards the center of their bin using the t-slope indicated by the dipole fit. With this method, we expect to correct the intrabin t-dependence of order 𝒪(t-t_center), where t_center is the value at the center of the bin to which each point belongs. In effect, the modification is generally very small as |t - t_center| is typically below 0.02 GeV^2. A few reach 0.04 GeV^2, but only for larger values of t, where the t dependence is reduced. The only noticeable effect of intrabin t-correction arises in the first bin, where the slope in t is pronounced and the data very numerous, and we consider this therefore as a negligible effect in the overall uncertainty budget. For this reason, we will not quote an uncertainty in t in our results.
We fit the elastic form factor using either the dipole form of Eq. (<ref>) or a z-expansion <cit.> with three free parameters,
F(t) = ∑_k = 0^2 a_k z(t)^k , z(t) = √(t_cut - t) - √(t_cut - t_0)/√(t_cut - t) + √(t_cut - t_0) ,
where t_cut = 4 m_π^2 and t_0 = t_cut (1-√(1-t_max / t_cut)). We use t_max = -1.4 GeV^2. We observe that the specific values of t_cut and t_max do not produce a significant effect on the extraction. The final fit results are produced in Table <ref> and displayed in Fig. <ref>. We quote in Table <ref> both a statistical uncertainty and an excited state uncertainty obtained by comparing the results with a cut of t_s = 3 and 4 on the correlation functions. The uncertainty band in Fig. <ref> is constructed as the union of the bands obtained using the two cuts. As one can observe, both the statistical and excited state uncertainties are extremely narrow for the elastic form factor, to the point that the model dependence in the extrapolation to t = 0 becomes a significant source of uncertainty as well. The χ^2 per degree of freedom of those fits is poor, between 6 and 30. We attribute that to the fact that the precision of the measurements is such that lattice spacing errors become very noticeable and make it apparent that the Lorentz decomposition is violated on the lattice. As a result, the points do not fall exactly on a single curve, and increasing the statistics would probably only result in even worse fits.
§.§ Elastic form factors from non-local data
Although elastic form factors are typically computed using the local operator z = 0, they can be accessed using non-local operators as well. In fact, some kinematics (p⃗_f, p⃗_i) are particularly convenient for that purpose: if p_f,3 = p_i,3 = 0 and z = (0, 0, 0, z_3), then both ν and νξ are identically zero whatever the value of z^2, and one has:
F_1(t) = lim_z^2 → 0𝒜_1(ν = 0, νξ = 0, t, z^2) ,
F_2(t) = lim_z^2 → 0𝒜_4(ν = 0, νξ = 0, t, z^2) .
Using non-local operators exposes us to the risk of introducing additional uncertainty due to the evaluation of the right-hand sides at z^2 < 0. We plot on the left panel of Fig. <ref> the value of 𝒜_1(ν = 0, νξ=0, t = -1.35 GeV^2, z^2) using p⃗_f = (1, -1, 0) and p⃗_i = (-1, 1, 0) for every value of z from 0 to 6a (we average the ± z data) using the cut t_s = 3. Evidently, a noticeable z dependence can be observed.
It is important to notice that the various points on the plot are highly correlated with one another, as they all stem from the same external proton states computed on the same gauge configurations. The correlation matrix of the data is given by:
Corr = [ 1 0.96 0.93 0.86 0.76 0.64 0.51; 0.96 1 0.98 0.93 0.84 0.73 0.60; 0.93 0.98 1 0.98 0.91 0.82 0.70; 0.86 0.93 0.98 1 0.98 0.91 0.80; 0.76 0.84 0.91 0.98 1 0.97 0.90; 0.64 0.73 0.82 0.91 0.97 1 0.97; 0.51 0.60 0.70 0.80 0.90 0.97 1 ].
Neighboring points on the plot are correlated by more than 96%, and the furthest are still over 50%. Those circumstances are common in lattice QCD plots where the axis variable is the non-local separation z. This can produce counter-intuitive results.
For instance, despite the clear trend in z that appears visually, one could wish to fit our dataset by a constant. Intuitively, one would expect the constant to fall somewhere in the middle of the plot, around F_1 ≈ 0.31. Such would indeed be the case if our points were uncorrelated. But the best fit by a constant is in fact F_1 ≈ 0.28, strangely close to the result at z = 0, and in good agreement with the projection to z = 0 of the quadratic fit by α + β (z/a)^2. Excluding the local data at z = 0 produces exactly the same fit results, either for the constant or quadratic fit. However, now the constant fit is incompatible with every single point of the fitted dataset. The remarkable agreement of the constant fit with the quadratic one at z = 0 is quite surprising and happens in a general fashion in other kinematics as well. Although we will use the quadratic fit for the non-local elastic form factor, it is interesting to try to understand why a value that is significantly below every single data point can be the best constant fit of the dataset. This illustrates how such plots can be quite misleading if the very high degree of correlation is not taken into account.
One needs to remove correlations from the data. First, we diagonalize the covariance matrix Cov = P^-1 D P where D is the diagonal matrix of eigenvalues. Then the vector
P ×[ 𝒜_1(z = 0); ⋯; 𝒜_1(z = 6a) ]
is an uncorrelated random vector. However, at this stage, its central values are difficult to interpret. In our case, it is interesting to compare these uncorrelated data with what would happen if 𝒜_1 was indeed independent of z. Therefore, we form:
R = P ×[ 𝒜_1(z = 0); ⋯; 𝒜_1(z = 6a) ]/ P ×[ 1; ⋯; 1 ] ,
where the division is understood as a term-by-term ratio of the elements in both vectors. If 𝒜_1 was indeed constant in z, its uncorrelated version R would remain constant with the same value. The values of the seven uncorrelated random variables contained in R are shown on the right panel of Fig. <ref>.
Now the origin of our problem appears clearly. There exist uncorrelated linear combinations of our measurements which favor very small constant fits. In fact, it is easy to see from this example that, by carefully crafting the correlations between the data, the best fit from a given model can be arbitrarily far from any of the correlated measurements.
With this in mind, we can extract the elastic form factor using only non-local matrix elements, performing either constant or quadratic fits over z ∈{a, ..., 6a}. When using kinematics other than the special case p_f,3 = p_i,3 = 0, since ν≠ 0 and ξ≠ 0, the matrix element contains more than just the elastic form factor. But when the higher-order moments are fitted jointly, as we will describe in the next section, the entire kinematic dataset can be used to constrain the elastic form factor using the non-local data. The result is shown in Fig. <ref>. In light grey, the drifting value of 𝒜_1,4 with increasing z is depicted, while its value at z = 0 is represented by the red dots. The fit of the grey dots by a quadratic is depicted as the blue dots, barely distinguishable from the z = 0 data, or the result of a constant fit. The final fits by a z-expansion of either the local (red) or non-local (blue) elastic form factor extraction are completely compatible, although the χ^2 per d.o.f. is typically significantly smaller when using the non-local extraction. We report the results of the non-local extraction of elastic form factors in Table <ref> as well, and the dipole result serves as the standard result reported at the beginning of this document in Fig. <ref>. One will notice that the pattern of spurious z^2 dependence is clearly different between the F_1 and F_2 elastic form factors.
§ GENERALIZED FORM FACTORS
From the discussion in section <ref>, we expect that
Re 𝒜_1(ν, ξ, t, z^2) = F_1(t) - ν^2/2(A_3,0(t, z^2) + ξ^2 A_3,2(t, z^2)) + 𝒪(ν^4)+ 𝒪(Λ_ QCD^2 z^2, t z^2) ,
Im 𝒜_1(ν, ξ, t, z^2) = -ν A_2,0(t, z^2) + ν^3/6(A_4,0(t, z^2) + ξ^2 A_4,2(t, z^2)) + 𝒪(ν^5) + 𝒪(Λ_ QCD^2 z^2, t z^2) ,
up to power corrections and lattice spacing systematic uncertainties. A similar relation is obtained by substituting the appropriate combination to form the E GPD 𝒜_4 + ν𝒜_6 - 2ξν𝒜_7 and its B_n,k generalized form factors. We fit this relation at a fixed value of z in each bin, with an intrabin t correction proceeding in the same fashion as what we have described for the elastic form factors. The result using all points in the bin t ≈ -0.33 GeV^2, z = 6a, and the cut t_s = 3 is displayed in Fig. <ref>. The χ^2 per d.o.f. of this fit of 28 points with 3 parameters is close to 2 both for the real and imaginary parts. Note that although it may seem that the special kinematic point at ν = 0 corresponding to p⃗_f = (0,0,0) and p⃗_i = (1,1,0) is dragging the value of F_1(t) upwards, this is not the case. Excluding this point from the fit would give F_1(t, z=6a) = 0.687± 0.008, whereas with it included, we find the very similar F_1(t, z=6a) = 0.689± 0.007. As we have already discussed, the data favor a value of F_1(t) at z = 6a that is noticeably larger than the z = 0 data prefer, namely 0.652 ± 0.007, which lies 5σ below. One will also notice that data at larger ξ exist at smaller values of ν, since part of the average P_3 momentum is sacrificed to create the momentum transfer. The coverage in (ξ, ν) available for our study is depicted in Fig. <ref>, with the exception of a few points where p_f,3 = -p_i,3≠ 0, corresponding to ν = 0 and ξ infinite (but νξ≠ 0). As one can see from Eqs. (<ref>) and (<ref>), those kinematics probe exclusively the elastic form factor and the skewness-dependent generalized form factors, so they are particularly useful.
§.§ Matching
At higher order and in contrast to the elastic form factors, the generalized form factors are scale-dependent, and we have derived in section <ref> a perturbative prediction for their z^2 dependence. The matching of the gravitational form factors A_2,0, B_2,0 is particularly interesting to observe because our data are very precise. Let us use again the same bin as in the previous discussion. We try three different leading-order/leading-logarithmic matching procedures (a numerical sketch follows the list):
* The procedure that arises immediately from the fixed order matching in Eq. (<ref>):
A_2,0(t, μ^2) = A_2,0(t, z^2) [1+α_s(μ^2) C_F/2π(γ_1 L + d_1)] ,
where γ_1 = -4/3 is the DGLAP anomalous dimension, d_1 = 14/3 is the moment of the non-logarithmic part of the matching kernel, and L = ln((e^{2γ_E+1}/4) × (z^2/λ) ×μ^2). Here λ should be close to -1 and will be varied in the interval [-2, -1/2], as is usual, to evaluate the scale-fixing uncertainty.
* A variant in which we evaluate α_s using its leading-logarithmic 3-flavor evolution at the “natural” scale of the data λ / z^2:
A_2,0(t, μ^2) = A_2,0(t, z^2) [1+α_s(λ / z^2) C_F/2π(γ_1 L + d_1)] .
* A full leading-logarithmic matching as derived in section <ref>:
A_2,0(t, μ^2) = A_2,0(t, z^2) (α_s(λ/z^2)/α_s(μ^2))^{γ_1 C_F/(2πβ_0)} [1 + α_s(λ/z^2) (C_F/2π) (γ_1 ln(e^{2γ_E+1}/4) + d_1)] .
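To make the formulas concrete, the following Python sketch (ours, not the production analysis code) evaluates the fixed-order factor of the first procedure and the full leading-logarithmic factor of the third, assuming a 1-loop running coupling with the convention β_0 = 11 - 2n_f/3 and the values quoted below, α_s(2 GeV) = 0.28 and Λ_QCD = 165 MeV; here zsq is the (negative) Minkowski invariant z^2 in GeV^-2, so that λ/z^2 > 0. The second procedure is the first with α_s(μ^2) replaced by α_s(λ/z^2).

```python
import numpy as np

CF, gamma1, d1 = 4.0 / 3.0, -4.0 / 3.0, 14.0 / 3.0
beta0 = 11.0 - 2.0 / 3.0 * 3.0          # assumed convention, n_f = 3
Lambda_QCD = 0.165                       # GeV, so that alpha_s(2 GeV) ~ 0.28

def alpha_s(Q2):
    """1-loop (leading-logarithmic) running coupling with 3 flavors."""
    return 4.0 * np.pi / (beta0 * np.log(Q2 / Lambda_QCD**2))

def L_log(zsq, mu, lam):
    return np.log(np.exp(2.0 * np.euler_gamma + 1.0) / 4.0 * (zsq / lam) * mu**2)

def match_fixed_order(A, zsq, mu=2.0, lam=-1.0):
    """Procedure 1: fixed-order matching at alpha_s(mu^2)."""
    return A * (1.0 + alpha_s(mu**2) * CF / (2.0 * np.pi)
                * (gamma1 * L_log(zsq, mu, lam) + d1))

def match_leading_log(A, zsq, mu=2.0, lam=-1.0):
    """Procedure 3: full leading-logarithmic matching."""
    a_z, a_mu = alpha_s(lam / zsq), alpha_s(mu**2)
    evol = (a_z / a_mu) ** (gamma1 * CF / (2.0 * np.pi * beta0))
    const = gamma1 * np.log(np.exp(2.0 * np.euler_gamma + 1.0) / 4.0) + d1
    return A * evol * (1.0 + a_z * CF / (2.0 * np.pi) * const)

# example: z ~ 0.28 fm ~ 1.4 GeV^-1, so zsq ~ -2.0 GeV^-2
print(match_fixed_order(0.2, zsq=-2.0), match_leading_log(0.2, zsq=-2.0))
```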
We use the MS scale μ = 2 GeV and present the result of the three matching procedures in Fig. <ref>. We use α_s(μ^2) = 0.28, a value compatible with the usual MS value at that scale. The relative arbitrariness of this choice is balanced by the possibility of performing scale variations.
By construction, the first matching procedure does not present any divergence except for -z^2 →∞ and can be applied to as large -z^2 as desired. We observe some inconsistency at small z, followed by a stabilization in the large z regime. The second matching procedure, on the other hand, cannot be applied once λ / z^2 reaches the Λ_QCD divergence. With α_s(2 GeV)=0.28 and three active flavors, Λ_QCD = 165 MeV, which nonetheless allows us to apply the matching to all our values of z^2. However, large values of z^2 result in an extremely large scale-fixing uncertainty, as one would expect.
The full leading-logarithm calculation exhibits the smallest scale-fixing uncertainties of the three matching procedures, the increasingly large values of α_s balancing one another in the kernel. The procedure produces relatively constant results up to z = 0.3 fm, after which it is no longer constant in this bin. In principle, we do not expect a perturbative matching to explain the z^2 dependence of the data when -1/z^2 ≪ 1 GeV^2. Therefore, this situation is not particularly concerning. In fact, when applied to the E GPD in the same bin, the full leading-logarithmic matching still has the smallest scale-fixing uncertainties, and overall the best behavior, as seen in Fig. <ref>. At this stage, considering that different behaviors are observed for various bins, no strong conclusion can be drawn, especially in view of the spurious z dependence observed in the elastic form factors.
Only a few bins of the generalized form factor A_2,0 are precise enough, in terms of statistics and of the systematic difference between the cuts t_s = 3 and 4, for scale variation in the leading-logarithmic matching to produce a visible difference. Overall, scale variation does not produce a significant contribution to the uncertainty budget given the other uncertainties once the t-dependence is reconstructed in the final fit. We will therefore stick to the full leading-logarithmic matching with λ = -1 in the following. Even if this is not a practical matter at the current stage of our calculations, we foresee that applying perturbation theory at large z is not good practice when all uncertainties are under control. On the other hand, in the event that our framework retains a strong sensitivity to the leading-twist operator even at scales where perturbation theory becomes dubious, we have advocated in <cit.> that evolution operators in z^2 should be computed on the lattice. However, this program requires a higher confidence in the suppression of power corrections and other lattice systematic uncertainties than we believe to have been achieved in the present calculation.
We present in Fig. <ref> an overall picture of the matched moments in one bin in t. The importance of the large z data (and therefore large Ioffe time) to constrain the higher moments is obvious.
§.§ t-dependent generalized form factors
The results of the fits of the t-dependence of moments of GPDs are presented in Fig. <ref> for the generalized form factors A_n,0 and B_n,0 that are independent of ξ, and in Fig. <ref> for the A_n,2 and B_n,2 that are dependent on ξ. Since beyond the elastic form factor, the trend in z of the data is not discernible in a consistent fashion, we use a constant fit on z ∈{a, ..., 6a} after having matched the data to MS at 2 GeV using the leading-logarithmic matching derived in section <ref>. The colored bands result from the statistical uncertainty of the fit on data with a cut in lattice time separation of three. The uncolored band results from the union of the result with a cut of three and a cut of four. It appears that the excited state uncertainty increases as we head towards larger moments, unsurprisingly, as the signal in the data weakens. In the specific case of A_4,2 and B_4,2, the dipole fit of the data with a cut of four is extremely unstable, as some data (especially with the cut of four) are compatible with zero or negative values. As a non-linear fit is unable to cross zero, the dipole fit has major issues adapting to these data. We therefore use the z-expansion fit there, which does not suffer from any particular difficulty, to stabilize the dipole fit. The obtained results for the dipole fit in terms of value at t = 0 and mass are reported in Fig. <ref>.
§.§ Radial distributions
One of the main physical motivations behind the study of GPDs is their characterization of radial distributions of partonic properties inside hadrons (momentum, energy, pressure, etc.). Such distributions are obtained by a Fourier transform of the t-dependence of the GPD, usually at zero skewness <cit.>. The impact parameter distribution of an unpolarized quark in an unpolarized proton is given by
I(x, b⃗_⊥) = ∫d^2 Δ⃗_⊥/(2π)^2 e^-i b⃗_⊥·Δ⃗_⊥ H(x, ξ = 0, t=-Δ⃗^2_⊥) ,
whereas the GPD E allows one to characterize the unpolarized quark distribution inside a transversely polarized proton,
I^T(x, b⃗_⊥) = ∫d^2 Δ⃗_⊥/(2π)^2 e^-i b⃗_⊥·Δ⃗_⊥[H(x, ξ = 0, t=-Δ⃗^2_⊥)+ iΔ_y/2mE(x, ξ = 0, t = -Δ⃗^2_⊥)] .
Using the model-dependent extrapolation to any t provided by the previous fits, we can extract the first moments of the isovector component of these distributions,
I^u-d_n+1(b⃗_⊥) = ∫d^2 Δ⃗_⊥/(2π)^2 e^-i b⃗_⊥·Δ⃗_⊥ A^u-d_n+1,0(-Δ⃗^2_⊥) ,
I^u-d, T_n+1(b⃗_⊥) = ∫d^2 Δ⃗_⊥/(2π)^2 e^-i b⃗_⊥·Δ⃗_⊥[A^u-d_n+1,0(-Δ⃗^2_⊥) + iΔ_y/2mB^u-d_n+1,0(-Δ⃗^2_⊥)] .
We use in the following the dipole fits, as the t-dependence of the z expansion is too unconstrained at large t to give rise to sensible radial distributions. The extraction presented here is therefore strongly model dependent. Even then, the t-dependence of A_4,0 and B_4,0 is very poorly determined and therefore we only present results for n ∈{0,1,2}.
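For concreteness, the Fourier transforms above reduce, for the spherically symmetric part, to an order-zero Hankel transform of the t-dependence. A minimal numerical sketch (ours; function names are illustrative), assuming a dipole parameterisation A(t) = A(0)/(1 - t/M^2)^2, is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def dipole(t, A0, M):
    """Dipole fit A(t) = A0 / (1 - t/M^2)^2, with t <= 0 in GeV^2."""
    return A0 / (1.0 - t / M**2) ** 2

def impact_param_dist(b, A0, M):
    """I(b) = int d^2D/(2pi)^2 exp(-i b.D) A(-D^2)
            = (1/2pi) int_0^inf dD D J0(D b) A(-D^2),  b in GeV^-1."""
    val, _ = quad(lambda D: D * j0(D * b) * dipole(-D**2, A0, M),
                  0.0, np.inf, limit=200)
    return val / (2.0 * np.pi)
```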
The unpolarized moments (<ref>) are spherically symmetric in |b⃗_⊥|. We represent them in Fig. <ref>. As can already be inferred by the increasing value of the fitted dipole mass (see the summary table of Fig. <ref>), when the order of the skewness-independent moments increases, the radial extent of the distribution shrinks. This can be largely understood from a kinematic point of view, as higher-order Mellin moments are increasingly dominated by the large x domain of the distribution. The center of the proton with respect to which our radial coordinate system is defined is the barycenter of the partonic longitudinal momentum. When the longitudinal momentum fraction of the active parton x approaches one, the active parton is by necessity close to the center of the proton, so the impact parameter distribution at large x is very narrow.
The transverse impact parameter distribution moments (<ref>) are not spherically symmetric. We represent them, along with the unpolarized moments, in Fig. <ref>. We only use the central value of the dipole fits. A subtle feature of the transverse plots, which can also be observed in <cit.>, is that moments with n even (extracted from the imaginary part of the Ioffe-time distributions) seem to show a lesser distortion in the transverse impact parameter distribution than the odd moments. In <cit.>, where the u+d component is extracted as well, this also seems to hold for n = 3. Here again, the distribution shrinks at large n. One can notice that perturbation theory predicts the GPD to be independent of t when x → 1 <cit.>:
H^q(x, ξ, t) ∼(1-x)^3/(1-ξ^2)^2 , E^q(x, ξ, t) ∼(1-x)^5/(1-ξ^2)^2 .
This is equivalent to the large n GPD moments becoming independent of t, or the impact parameter distribution becoming a simple Dirac peak at b⃗_⊥ = 0.
§ CONCLUSIONS
In this work, we present the calculation of off-forward nucleon matrix elements with extensive coverage in initial and final momenta. These matrix elements scan a wide range of t and ξ relevant for understanding the three-dimensional tomography of the nucleon, through a factorization relationship to GPDs analogous to those for DVCS and DVMP. This lattice calculation was made possible by utilizing the technique of distillation to explicitly project onto the momentum states for all operators in the correlation functions. In addition to significantly improving the signal, this procedure provides a natural method for an efficient calculation by allowing many momenta to be calculated, while recycling the components for each operator.
With the current statistical precision, control over excited states is critical for an accurate analysis. We have compared results from two different methods for extracting matrix elements and from different cuts on the data. This step is critical for creating realistic error estimates for the matrix elements. The data with different cuts in Euclidean time can lead to significant deviations in the final results, as was seen in the extraction of generalized form factors. In future studies, we will use distillation to implement distinct nucleon operators for each momentum. This allows for a GEVP analysis to explicitly remove contributions from excited states. The excited state contamination will only become more difficult to control in the chiral limit, so this full control of excited state contamination will be crucial for physical extrapolations.
By using this Lorentz-invariant approach, the ν dependence of this data can be analyzed to study the Mellin moments of the GPDs. In this study, we have utilized relatively small momenta in the direction of the Wilson line, which scans the low ν region that is dominated by the lowest few moments. We extract the moments of H and E and study the t dependence and, for the first time, the ξ dependence. The ξ^2 coefficients of the moments A_n,2 and B_n,2 are larger than the ξ^0 coefficients A_n,0 and B_n,0. We also extracted radial distributions at zero skewness by performing the Fourier transform of the t dependence.
This calculation represents a step on the road to a systematically controlled calculation of nucleon tomography through GPDs. Nucleon tomography requires significantly more matrix elements than determinations of PDFs, in order to scan the full three-dimensional space of GPDs, particularly for a full study of systematic errors. We have demonstrated a method capable of tackling the problem of kinematical coverage and of studying the significant impact of excited state contamination.
§ ACKNOWLEDGMENTS
This project was supported by the U.S. Department of Energy, Office of Science, Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab. KO and HD were supported in part by U.S. DOE Grant . CM is supported in part by U.S. DOE ECA .
AR acknowledges support by U.S. DOE Grant .
This work has benefited from the collaboration enabled by the Quark-Gluon Tomography (QGT) Topical Collaboration, U.S. DOE Award .
This research was funded, in part, by l’Agence Nationale de la Recherche (ANR), project ANR-23-CE31-0019. For the purpose of open access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
Computations for this work were carried out in part on facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy. This work was performed in part using computing facilities at William & Mary which were provided by contributions from the National Science Foundation (MRI grant PHY-1626177), and the Commonwealth of Virginia Equipment Trust Fund. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC) <cit.>. In addition, this work used resources at NERSC, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract #DE-AC02-05CH11231, as well as resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. . The software codes Chroma<cit.>, QUDA<cit.>, QPhiX<cit.>, and Redstar<cit.> were used in our work. The authors acknowledge support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Nuclear Physics, Scientific Discovery through Advanced Computing (SciDAC) program, and of the U.S. Department of Energy Exascale Computing Project. The authors also acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources, like Frontera computing system <cit.> that has contributed to the research results reported within this paper. The authors acknowledge William & Mary Research Computing for providing computational resources and/or technical support that have contributed to the results reported within this paper.
We acknowledge the EuroHPC Joint Undertaking for awarding this project access to the EuroHPC supercomputer LUMI, hosted by CSC (Finland) and the LUMI consortium through a EuroHPC Extreme Scale access call. This work also benefited from access to the Jean Zay supercomputer at the Institute for Development and Resources in Intensive Scientific Computing (IDRIS) in Orsay, France under project A0080511504.
§ FULL SET OF LORENTZ AMPLITUDES
We present here the full set of Lorentz amplitudes 𝒜_k for k ∈{1, ..., 8} following the decomposition of Eq. (<ref>) for two pairs of momenta: p⃗_f = (2,1,0) and p⃗_i = (0, 1, 2) (lattice units) in Fig. <ref>, and p⃗_f = (1,-1,0) and p⃗_i = (-1, 1, 0) in Fig. <ref>. The data are presented as a function of z in lattice units. We only show positive z, as we average over ± z using the symmetries discussed in section <ref>. We remind the reader that this means that each data point therefore exists at a different “scale” of the order of -1/z^2. The amplitudes 𝒜_2,3,8, which do not enter the construction of light-cone GPDs, are represented together in the last plots, whereas the other five each get an individual plot. Let us comment on some interesting features of both figures.
Figure <ref>
* For the five amplitudes that enter the construction of light-cone GPDs, we compare a solution for the extraction of the amplitudes from the matrix elements using either the SVD (l^2 minimization) or an l^1 minimization. We find that the l^1 minimization generally produces larger uncertainties, but with central values in excellent agreement with the l^2 extraction. We consider this a validation of the robustness of the SVD method.
* For those five amplitudes, we also show the comparison between the SVD on the data with a cut in lattice time separation of 3 and 4. The larger cut produces more uncertainty, but is also in good consistency with the smaller cut. The results of this paper are jointly extracted using the cuts of 3 and 4 to account for the excited-state contribution.
* 𝒜_5 and 𝒜_7 do not show a strong deviation from 0, a situation generally observed in the dataset. 𝒜_5 contains signal of the D-term (absent in this non-singlet calculation), the kinematic higher-twist contamination Y discussed in <cit.>, and lattice systematic errors. Therefore, it is quite satisfactory that no clear signal emerges. As for 𝒜_7, it enters the construction of the light-cone GPD E, but is generally small in magnitude and barely different from 0.
* The kinematic factors of the amplitudes 𝒜_2,3,6,7,8 cancel in the limit z = 0. Therefore, we cannot evaluate those amplitudes in this limit. 𝒜_6 grows noticeably at small z. This growth is in fact necessary to counteract the discontinuity of 𝒜_4 between the value computed at z = 0 (the form factor F_2) and the value at z = 1. Adding ν𝒜_6 to 𝒜_4 to form the light-cone GPD E alleviates the discontinuity[The discontinuity of 𝒜_4 is perceivable in the first row, second column plot of Fig. <ref>. Because ν is negative for z > 0 with this choice of momenta p_f and p_i, ν𝒜_6 is negative at z = 1 and alleviates the discontinuity of 𝒜_4 to form the GPD E.].
Figure <ref>
The choice of momentum pair p⃗_f = (1,-1,0) and p⃗_i = (-1, 1, 0) means that ν and νξ are both 0 for every value of z. Therefore, we should have 𝒜_1 = F_1 + 𝒪(z^2) and 𝒜_4 = F_2 + 𝒪(z^2). 𝒜_1 presents a distinctive evolution with z, which we discuss in section <ref>. 𝒜_4 on the other hand is compatible with a constant within its uncertainty. The general features identified in the previous figure remain valid.
§ TO Γ_3 OR NOT TO Γ_3
In the lattice calculation of PDFs, the choice of matrix elements with Dirac structure γ^3 has been considered disfavored. In the Lorentz decomposition, M^μ= p^μℳ + z^μ𝒩, choosing μ parallel to the Wilson line will introduce a contribution from 𝒩 which does not appear in the light-cone definition of the PDF <cit.>. In the work of <cit.>, the matrix element of the operator ψ̅(z) Γ W(z;0)ψ(0) is calculated at 1-loop in lattice perturbation theory. The matrix elements contain terms proportional to u̅Γ u and u̅{Γ, z}u, describing a finite mixing with operators which occurs due to the breaking of chiral symmetry. As discussed in this appendix for the case of the vector structures, this pattern can be consolidated with another contribution we typically neglect or discard. For the vector current, the mixing comes from u̅{γ^μ, z}u=-2z^μu̅u, which can only impact 𝒩(ν,z^2) or 𝒜_2(ν,t,z^2) for PDFs and GPDs respectively. Since these do not contain the leading order contributions we are interested in, this contamination is benign. Similarly, for the axial Γ=γ^μγ^5 structure in the case of the pion DA <cit.>, the mixing tensor structure can only generate effects of O(z^2) which will not ultimately affect the asymptotic short distance limit of the relevant amplitude. For the vector case, the inclusion of γ_3 data has a negligible effect on the amplitudes 𝒜_1,4,5 when z/a>0.
First, for z=0, the amplitudes 𝒜_1,4 correspond to the form factors F_1,2, while 𝒜_5 should be 0 in the continuum by the Ward identity q_μ M^μ=0. The rest of the amplitudes have a factor of z in their kinematic factor and do not belong here. Using the SVD procedure in section <ref>, 𝒜_1,4,5 were extracted both including and neglecting the M^3 matrix element, shown in Fig. <ref>. The amplitudes remain quite similar, with a few percent deviations in 𝒜_1,4, relevant at our statistical precision, while the quality of 𝒜_5=0 changes slightly. After binning of the t-dependence (see section <ref>), the elastic form factors are indiscernible whether M^z is used or not. Recalling the redundancy relating half of the helicity combinations to the other half, for z=0 there were either 8 or 6 independent matrix elements used to obtain 3 unknown functions. Adding in more data without increasing the number of unknowns means that new constraints are applied when M^3 is added.
While studying the real component of the z=a data, which shows the same patterns as larger z and as the imaginary component, if the M^3 data is neglected then the 𝒜_2,8 terms are dropped, since they always contribute 0 to M^t,x,y. The two sets of results are practically identical, as shown in Fig. <ref>. In fact, as we have already mentioned in section <ref>, at z ≠ 0 the M^3 data brings exactly two constraints for two new unknown amplitudes, so it serves exactly to determine 𝒜_2,8 without any impact on the other amplitudes. The only deviations in the other amplitudes come from numerical noise, at a level of 10^-6, corresponding to the accuracy at which we computed the kinematic matrix.
|
http://arxiv.org/abs/2405.09989v1 | 20240516111832 | A Gaussian Process Model for Ordinal Data with Applications to Chemoinformatics | [
"Arron Gosnell",
"Evangelos Evangelou"
] | stat.AP | [
"stat.AP",
"stat.ME",
"stat.ML"
] |
A Gaussian Process Model for Ordinal Data with Applications to Chemoinformatics
Arron Gosnell (contact author)
and Evangelos Evangelou
University of Bath, UK
================================================================================================
With the proliferation of screening tools for chemical testing, it is now possible to create vast databases of chemicals easily.
However, rigorous statistical methodologies employed to analyse these databases are in their infancy, and further development to facilitate chemical discovery is imperative.
In this paper, we present conditional Gaussian process models to predict ordinal outcomes from chemical experiments, where the inputs are chemical compounds.
We implement the Tanimoto distance, a metric on the chemical space, within the covariance of the Gaussian processes to capture correlated effects in the chemical space. A novel aspect of our model is that the kernel contains a scaling parameter, a feature not previously examined in the literature, that controls the strength of the correlation between elements of the chemical space.
Using molecular fingerprints, a numerical representation of a compound's location within the chemical space, we show that accounting for correlation amongst chemical compounds improves predictive performance over the uncorrelated model, where effects are assumed to be independent.
Moreover, we present a genetic algorithm for the facilitation of chemical discovery and identification of important features to the compound's efficacy.
A simulation study is conducted to demonstrate the suitability of the proposed methods.
Our proposed methods are demonstrated on a hazard classification problem of
organic solvents.
Keywords: chemical space; drug discovery; genetic algorithm; molecular fingerprints; quantitative structure-activity relationships; Tanimoto distance
§ INTRODUCTION
Drug discovery is of vital importance to many fields, including agricultural sciences, chemistry, medicine, and the food and drinks industry.
Chemoinformatics, which focuses on the analysis of data from chemical compounds, can aid in the understanding of influential chemical structures and the discovery of novel drugs <cit.>.
Many chemoinformatics methods rely on quantitative structure-activity relationship (QSAR) techniques, which aim to predict biological activities from chemical structures <cit.>.
To that end, chemical graph representations are vital for understanding the relationship between chemical structures and their biological activities <cit.>.
A chemical graph is a figurative representation of a compound according to its atomic features. These graphs may alternatively be expressed as a vector of categorical features, one such example being a SMILES string, with each element depicting the presence or absence of a chemical substructure or molecular property.
Representing the compound in this way allows for the application of a range of machine learning techniques, including molecular data mining, compound diversity analysis, and compound activity prediction <cit.>.
Compounds are said to live within the chemical space, i.e., the space describing the ensemble of all organic chemical compounds <cit.>.
A central principle of chemoinformatics is that neighbouring compounds, i.e., compounds close to one another within the chemical space, share similar properties <cit.>.
The closeness, or distance, between compounds is typically measured using metrics on dichotomous feature spaces, with there being over 70 established methods for quantifying closeness in such feature spaces <cit.>.
Among these, the Tanimoto similarity is the most widely used measure of closeness <cit.>, and typically scores highest in terms of capturing the greatest level of intermolecular similarity <cit.>. The distance based on the Tanimoto similarity, known as the Tanimoto or Jaccard distance, is a proper metric <cit.>, thereby satisfying the required metric criteria, in particular the triangle inequality.
The Tanimoto similarity has been widely incorporated in a range of machine learning applications for compound discovery and property prediction. In a regression setting, <cit.> developed mixed deep neural networks, which leveraged both
chemical text (SMILES) as well as molecular descriptors (MACCS fingerprints) for predicting chemical properties, whilst <cit.> implemented random forests and deep neural networks to molecular property and
reactivity prediction.
Support vector machines <cit.> and Gaussian processes <cit.> have also been applied to molecular property prediction in a regression context.
Furthermore, molecular fingerprints have been applied to a range of classification tasks.
<cit.> implemented the molecular fingerprints of compounds, which inhibit cancer cell line growth within binary classification models. <cit.> applied convolutional neural networks and language-based models on molecular fingerprint data for several classification tasks.
A notable criticism of these approaches is the absence of a scale parameter for controlling the strength of the similarity, meaning that its effect is not properly accounted for in the model.
Motivated by the aforementioned principle, in Section <ref>, we present a novel approach to incorporating chemical distance into Gaussian process (GP) models.
The proposed GP model is defined on the chemical space, i.e., its inputs are the chemical compounds, while the values of the GP represent the effect of each compound on the outcome we wish to model.
GPs are, however, commonly defined on Euclidean spaces, and are typically applied when modelling geographical phenomena. The metrics employed for analysing chemical structures are inherently non-Euclidean. Consequently, when modeling chemical structures, it is necessary to adapt the distance metric within the GP covariance.
In Section <ref> of this paper, we provide a mathematical framework to demonstrate that, indeed, GPs can be defined on non-Euclidean spaces, such as the chemical space, by incorporating the Tanimoto metric within the GP's covariance structure.
In addition, we present suitable isotropic correlation functions adapted to live on the chemical space.
An important distinction between our proposed method and existing
approaches is that we provide the GP kernels with a scaling
parameter. To our knowledge, this is the first paper where
such kernels based on the Tanimoto metric are developed.
We focus on the case where the outcome is measured on an ordinal scale <cit.>.
The proposed model can be described as a cumulative link model with correlated random effects <cit.>.
As the likelihood of the proposed model is not available in closed form, we
apply Laplace's method to approximate the likelihood and estimate the model
parameters.
This approach is described in Section <ref>.
Thus, another contribution of this paper is the application of the Laplace approximation for estimation and prediction of ordinal data with Gaussian process random effects.
Due to the correlation structure of the GP model, we can gain information
from the effects of sampled compounds to predict the effect of unsampled compounds, a property which cannot be exploited with independent random effects, as well as provide uncertainty estimates of the proposed effects. The latter property makes GPs a particularly attractive choice to the application of drug discovery, especially when considering the cost-effectiveness of chemical production.
Exploration of the chemical space is vital for discovering new and effective compounds, and it is of particular interest to identify compounds that display high efficacy.
Since the chemical space encompasses an incredibly vast number of molecular structures, it is impossible to assess all configurations of molecular features to discover the ideal compound, making virtual screening particularly challenging.
We, therefore, require optimisation techniques to automate discovery and propose interesting regions for further exploration.
To that end, in Section <ref>, we develop a genetic algorithm, aided by the proposed model, to search over the chemical space and identify compounds of potentially high efficacy. We propose two optimality criteria that can be used for this purpose that are based on the features of the proposed model. The first criterion is based on maximising the probability that the outcome will belong to a given class, under given experimental conditions. On the other hand, it is not always possible to specify the experimental conditions, so our second criterion ignores the experimental conditions and focuses solely on the value of the GP.
Section <ref> provides
several simulation studies to demonstrate that the proposed method can
recover the true parameter values, given the true model is the GP model
with Tanimoto metric, as well as to demonstrate that the genetic algorithm
can identify the optimal compound. Moreover, Section <ref> applies the model to the practical scenario of hazard classification for organic solvents.
The computations presented in this paper were performed on a Windows 10 machine with an Intel Core i5-7300 CPU and 8GB RAM. The software R <cit.> was used for the implementation of the proposed model and the genetic algorithm, with the heavier computations implemented in Fortran 90. To conduct the analysis of the solvent data, the Python package RDKit <cit.> was used to derive each solvent's daylight fingerprint from its SMILES code.
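As a pointer for reproducing this preprocessing step, a minimal sketch is given below; the SMILES strings are arbitrary examples, and we use RDKit's Daylight-like topological fingerprint.

```python
from rdkit import Chem, DataStructs

# Daylight-like topological fingerprints from SMILES (example molecules)
mols = [Chem.MolFromSmiles(s) for s in ("CCO", "CC(=O)C", "c1ccccc1O")]
fps = [Chem.RDKFingerprint(m) for m in mols]

# pairwise Tanimoto similarity, e.g. between the first two solvents
sim = DataStructs.TanimotoSimilarity(fps[0], fps[1])
print(f"Tanimoto similarity: {sim:.3f}, distance: {1 - sim:.3f}")
```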
§ GP CLASSIFICATION BASED ON A CUMULATIVE PROBABILITY MODEL
We consider a chemical space ℂ = {c_1,…,c_m} of m distinct compounds.
In practice, m is large, but only a small number of them will be used in experiments.
We assume observed data (x_1,y_1,c_l_1),…,(x_n,y_n,c_l_n),
where, for i = 1,…,n, y_i ∈{1,2,…, C }, with
1 < 2 < … < C, is the class response, x_i ∈ℝ^p are the testing
conditions, and l_i ∈{1,…,m} indicates the compound used in the ith experiment among the m distinct compounds in ℂ.
The objective is to predict the outcome y_* given experimental conditions x_* with compound c_*, i.e., to estimate the probabilities P(y_* = j|y) for each class j ∈{1,…,C}, where y = (y_1,…,y_n).
For modelling ordinal data, the cumulative link model <cit.> is well-suited. Originally, this model has been proposed for independent observations, but has been extended by <cit.> to include a Gaussian process random effect. Our model follows the same approach, but considers more general link and correlation functions, that are suitable for chemical inputs.
Let T(·,·) represent the Tanimoto distance between pairs of compounds within the chemical space.
We define u:ℂ↦ℝ to be a GP on ℂ, such that, for any finite collection of compounds, u = (u(c_1),…, u(c_m)) is distributed according to the m-dimensional multivariate normal distribution with mean 0 and variance-covariance matrix K.
We write the (r,s)th element of the matrix K, corresponding to compounds c_r and c_s, where r,s = 1,…,m, as k_rs = σ^2 R(T(c_r,c_s),ϕ), where σ^2 denotes the variance parameter, and R(t,ϕ) denotes the correlation function at distance t with scaling parameter ϕ.
Specific forms of R(t,ϕ) are given in Section <ref>.
Let y denote the outcome of an arbitrary experiment under conditions x with compound c, and let
γ_j = P(y ≤ j|u(c)), with γ_C = 1.
Our model assumes that
G(γ_j) = η_jc = α_j + β^⊤ x + u(c), j = 1,…,C-1,
where G:(0,1) ↦ℝ is the link function, β∈ℝ^p denotes the regressor coefficients, and α_1 < … < α_C-1 are the ordered intercepts.
Link functions model the non-linear effect of the regressor variables and the GP to the cumulative probabilities.
Table <ref> lists common choices of link functions that we consider in this paper, and, in general, determine the predictive performance of the model.
<cit.> showed that link misspecification can result in biased estimates and higher prediction error. In practice, a suitable link function should be chosen based on goodness-of-fit criteria, such as cross-validation, which we describe in greater detail in Section <ref>.
Let γ_ij = P(y_i ≤ j|u(c_l_i)), j = 1, …, C, with γ_iC = 1, be the cumulative probabilities up to class j, and π_i1 = γ_i1, π_ij = γ_ij - γ_i, j-1, j = 2, …, C, be the individual class probabilities.
We assume that the distribution of each y_i is conditionally independent of y_i' for i' ≠ i given u(c_l_i).
Thus our model can be described by
y_i | u(c_l_i) ind∼Categorical(π_i), i = 1,…,n,
u ∼N_m(0, K),
where π_i = (π_i1,…,π_iC) and u is the value of the GP at the m distinct compounds.
The GP models are defined so that, if G(·) is increasing, low values of u(c) correspond to high probabilities of an outcome in the highest class, C.
To demonstrate this, we consider the odds ratio (1-γ_j)/γ_j, for j=1,…,C-1, and its behaviour as a function of u(c).
We observe that (1-γ_j)/γ_j = 1/γ_j - 1 = 1/G^-1(η_jc) - 1, where η_jc = α_j + β^⊤ x + u(c).
Therefore, if G is an increasing function, then so is G^-1, and in
that case, the odds ratio of observing a class higher than j is a decreasing function of u(c).
§ FINGERPRINTS AS A REPRESENTATION OF THE CHEMICAL SPACE
Chemical fingerprints are a widely used concept in the analysis between molecular substructures and biological activities.
Fingerprints are typically represented as κ-dimensional bit vectors, with the features being based on their chemical composition, or graph.
Each feature within the fingerprint indicates the presence of atomic
substructures, such as functional groups, ring systems, or atom arrangements. For example, a fingerprint might have a bit set to 1 if a certain functional group (like a hydroxyl group) is present within the molecule.
Figure <ref> illustrates two simple molecules and their associated fingerprints. We observe that the two molecules share a common ring. Similarity measures, such as the Tanimoto similarity, capture the intersection of molecular properties of chemical compounds through a similarity score.
The Tanimoto similarity is a measure of closeness between chemical compounds. In defining the Tanimoto similarity, consider a collection of bit vectors of the form c_r = (c_r1, c_r2,…, c_rκ), where c_ri is either 0 or 1, and not all 0, denoting the presence of feature (atomic substructure) i in the rth compound, i = 1,…,κ.
The Tanimoto similarity S_rs = S(c_r,c_s), for a pair of compounds c_r, c_s, is defined to be the number of features in common between the two compounds over the number of features in either.
More specifically,
S_rs = ⟨c_r,c_s⟩ / (⟨c_r,c_r⟩ + ⟨c_s,c_s⟩ - ⟨c_r,c_s⟩),
where ⟨c_r,c_s⟩=∑_i=1^κ c_ric_si.
By definition, 0 ≤ S_rs≤ 1.
When the two compounds have no features in common, their Tanimoto similarity is zero, i.e., S_rs=0, and when the compounds have identical features, their Tanimoto similarity is 1, i.e., S_rs=1.
An important result that justifies the use of the Tanimoto similarity as a correlation matrix of the GP is that the m× m matrix S with elements S_rs, r,s = 1,…,m, is positive definite <cit.>.
Subtracting the Tanimoto similarity from 1 converts it into a distance <cit.>, with the Tanimoto distance between compounds c_r and c_s denoted
T(c_r,c_s) = T_rs = 1- S_rs.
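In terms of raw bit vectors, (<ref>) and (<ref>) can be computed directly; a minimal sketch, using the fact that ⟨c_r,c_r⟩ equals the number of set bits of a binary vector:

```python
import numpy as np

def tanimoto_distance(c_r, c_s):
    """Tanimoto distance T_rs = 1 - S_rs between two binary fingerprints."""
    c_r, c_s = np.asarray(c_r), np.asarray(c_s)
    inter = np.dot(c_r, c_s)                        # <c_r, c_s>
    return 1.0 - inter / (c_r.sum() + c_s.sum() - inter)

# compounds c_1 and c_4 of the example below: T_14 = 1/3
print(tanimoto_distance([0, 1, 1], [1, 1, 1]))      # 0.333...
```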
Some authors <cit.> used the Tanimoto distance directly within a Gaussian kernel to model the correlation of a Gaussian process. Although the Tanimoto distance is a metric, it is non-Euclidean, and can produce non-positive definite correlations when used with spatial kernels <cit.>.
As an example, consider the chemical space ℂ = {c_1 = (0,1,1), c_2 = (1,0,1), c_3 = (1,1,0), c_4 = (1,1,1)}. The matrix of pairwise Tanimoto distances, T, and the corresponding correlation matrix R with elements R_rs = exp(-T_rs^2), are given by
T = [ 0 2/3 2/3 1/3; 0 2/3 1/3; 0 1/3; 0 ],
R = [ 1 0.6412 0.6412 0.8948; 1 0.6412 0.8948; 1 0.8948; 1 ] to 4 decimal points.
Note that the distances given in T cannot correspond to distances in some Euclidean space. To see this, suppose there exist points ε_1, …, ε_4 on some Euclidean space with pairwise distances given by T. Then, as T_14 + T_24 = T_12, T_14 + T_34 = T_13, and T_24 + T_34 = T_23, the point ε_4 must lie simultaneously in the middle of the edges of the equilateral triangle formed by ε_1, ε_2, and ε_3, which is impossible. Note also that the correlation matrix R is not positive definite as its lowest eigenvalue is about -0.036.
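This failure of positive definiteness is easy to verify numerically; the following quick check reproduces the eigenvalue quoted above:

```python
import numpy as np

a, b = np.exp(-(2 / 3) ** 2), np.exp(-(1 / 3) ** 2)   # 0.6412, 0.8948
R = np.array([[1, a, a, b],
              [a, 1, a, b],
              [a, a, 1, b],
              [b, b, b, 1]])
print(np.linalg.eigvalsh(R).min())   # about -0.036, so R is not positive definite
```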
Next, we discuss the use of the Tanimoto distance with well-known spatial kernels.
Let (ℂ,d) be a metric space. The metric d is called Euclidean if for any set of points c_1,…,c_m ∈ℂ, there exist ε_1,…,ε_m ∈ℝ^α (α depends on m), such that d(c_r,c_s) = ‖ε_r - ε_s‖ for all r,s = 1,…,m, where ‖·‖ denotes the Euclidean norm in ℝ^α.
In this case, we say that the points {c_1,…,c_m} can be isometrically embedded in a Euclidean space of dimension α.
As an example, any three points, c_1, c_2, c_3, with pairwise
distances d_ij = d(c_i,c_j), i<j ∈{1,2,3} can always be embedded
in a 2-dimensional Euclidean space, where the embedded points
ε_1, ε_2, ε_3 correspond to the
vertices of a triangle with side lengths d_12, d_13, d_23.
The following theorem, appearing in <cit.>, can be used to show that a metric is Euclidean. We denote the m× m identity matrix by I_m, and the m× m matrix of ones by J_m.
Let (ℂ,d) be a metric space.
* The metric d is Euclidean if and only if, for any set of points c_1,…,c_m ∈ℂ, the m× m matrix B = HAH is positive semi-definite, where H = I_m - m^-1 J_m, and A is the m× m matrix with elements A_rs = -d(c_r,c_s)^2/2, r,s = 1,…,m.
* Furthermore, let α = rank(B). Then, the points {c_1,…,c_m} can be isometrically embedded in a Euclidean space of dimension α, and α is the lowest dimension for which this is possible.
Now consider the chemical space ℂ = {c_1,…,c_m} with the metric d(c_r,c_s) = √(T(c_r,c_s)).
The matrix B from Theorem <ref> is B = -H(J_m - S)H/2 = HSH/2, where S is the m × m matrix with elements given by (<ref>). As S is positive definite, B is positive semi-definite and rank(B) = m-1; therefore, the points of ℂ can be embedded in an (m-1)-dimensional Euclidean space. <cit.> provide an algorithm for finding the points ε_1,…,ε_m in the Euclidean space. In the example given earlier, ε_1 = (-1/√(6), -1/√(18),-1/12), ε_2 = (1/√(6), -1/√(18),-1/12), ε_3 = (0,2/√(18),-1/12), ε_4 = (0,0,1/4) have pairwise Euclidean distances given by the square roots of the elements of T.
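In practice, the embedding can be computed by classical multidimensional scaling, i.e., by eigendecomposing B = HSH/2 and scaling the eigenvectors by the square roots of the eigenvalues; a sketch:

```python
import numpy as np

def embed_compounds(S):
    """Classical MDS embedding of m compounds from their Tanimoto
    similarity matrix S, under the Euclidean metric d = sqrt(1 - S)."""
    m = S.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    B = H @ S @ H / 2.0                  # = HAH with A_rs = -(1 - S_rs)/2
    w, V = np.linalg.eigh(B)
    w = np.clip(w, 0.0, None)            # clip tiny negative round-off
    return V * np.sqrt(w)                # rows are the embedded points

# check on the earlier example: ||eps_1 - eps_4||^2 should equal T_14 = 1/3
S = np.array([[3, 1, 1, 2], [1, 3, 1, 2], [1, 1, 3, 2], [2, 2, 2, 3]]) / 3.0
E = embed_compounds(S)
print(np.linalg.norm(E[0] - E[3]) ** 2)  # ~ 0.3333
```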
This result allows us to create a vast catalogue of isotropic correlation functions using the Tanimoto distance, based on the correlation functions used in the GP literature, which allow the GP model to have certain properties.
Table <ref> lists several choices of the GP correlation, R(t,ϕ), corresponding to compounds with Tanimoto distance t.
The independent correlation corresponds to what is commonly referred to as the mixed effects model, and is used for reference to assess the improvement when incorporating correlation.
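As an illustration of how such kernels are assembled (the exact parameterisations of Table <ref> may differ from the Gaussian and exponential forms sketched here), the GP covariance matrix can be built from the matrix of pairwise Tanimoto distances as:

```python
import numpy as np

def gp_covariance(T, sigma2, phi, kernel="gaussian"):
    """Covariance K on the chemical space from Tanimoto distances T,
    composing an isotropic kernel with the Euclidean metric d = sqrt(T)."""
    d = np.sqrt(T)
    if kernel == "gaussian":
        R = np.exp(-(d / phi) ** 2)
    elif kernel == "exponential":
        R = np.exp(-d / phi)
    elif kernel == "independent":        # mixed effects model, for reference
        R = np.eye(T.shape[0])
    else:
        raise ValueError(kernel)
    return sigma2 * R
```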
§ METHODOLOGY
Our two primary objectives are to estimate the model parameters, obtained
through maximising the model likelihood, and to estimate the probability of an
experiment falling within each ordered class, based on the
available data. The model parameters are estimated first via the maximum likelihood method. The estimates are then used to construct the predictive distribution for the GP corresponding to a future experiment and compute the probabilities of each outcome.
The likelihood of the model, as well as the class probabilities for the given data, can be written only as multidimensional integrals with no closed-form expression. Techniques based on Monte-Carlo approximations of the likelihood, such as Monte-Carlo expectation maximisation <cit.>, can be used. However, these methods lack computational efficiency and, given the high dimension of the GP, alternative methods are preferred. Therefore, we propose the use of Laplace approximation to compute the likelihood.
The size of the data in relation to the dimension of the GP is an important consideration when using Laplace approximation on binary data, first examined by <cit.>.
In particular, the sample size n should increase at a higher rate than the dimension of the GP, m.
Theoretically speaking, m is bounded above by 2^κ, where κ denotes the number of features in the fingerprint vector. However, in finite samples, m can be comparable with n, so care must
be taken when using our proposed method. Furthermore, κ can potentially
increase as more compounds are added to the database as more features are
needed to properly distinguish the compounds and ensure a rich
representation of the space.
§.§ Estimation of model parameters
Let θ = (α_1,…,α_C-1, β, σ^2, ϕ) denote the model parameters. We use the symbol f(·) to represent the probability density/mass function of the expression in the brackets.
Given the model in (<ref>), and excluding any factors that do not depend on θ or u, we have
f(y|u;θ) ∝∏_i=1^n ∏_j=1^C π_ij^1(y_i=j),
f(u;θ) ∝ |K|^-1/2exp(-1/2 u^⊤ K^-1 u),
where 1(·) denotes the indicator function. The likelihood, based on data y, is then
L(θ|y) = f(y;θ) = ∫ f(y|u;θ) f(u;θ) du.
As noted earlier, the integral in (<ref>) does not have a closed-form solution, so obtaining the maximum likelihood estimates of θ by direct maximisation of the likelihood is not possible.
To compute the likelihood, we apply the Laplace approximation, a technique which enables approximations to integrals of the form ∫ e^-g(u) du.
Letting g(u) = -log [f(y|u;θ)f(u;θ)], we may express the second-order Taylor expansion of g(u) as
g(u) ≈ g(û) + 1/2 (u - û)^⊤Ĥ (u - û),
where û denotes the point at which the function g(u) is minimised, and Ĥ denotes the Hessian matrix of g(u) at û.
By substituting (<ref>) into (<ref>), we obtain the approximation to the log-likelihood (up to a constant)
log L(θ|y) ≈ -g(û) - 1/2 log|Ĥ|.
Therefore, θ̂ may be obtained by maximising (<ref>) with respect to θ. Let 𝒥(θ,y) denote the negative Hessian matrix of (<ref>). Then, 𝒥(θ̂,y)^-1 is an estimate of the variance-covariance matrix of θ̂.
Furthermore, recognising that f(u|y) ∝exp{-g(u)}, which from (<ref>) is proportional to a multivariate normal density, leads to the approximation
u|y ∼N_m(û, Ĥ^-1) approximately as n →∞.
§.§ Detailed derivations
The logarithm of the probability mass function for y|u, from (<ref>), is given by
ℓ(y|u;θ) = ∑_i=1^n ∑_j=1^C 1(y_i=j) log(π_ij)
= ∑_i=1^n ∑_j=1^C 1(y_i=j) log(γ_ij - γ_i,j-1)
= ∑_i=1^n ∑_j=1^C 1(y_i=j) log(G(η_i,j) - G(η_i,j-1)),
where η_i,j = α_j + β^⊤ x_i + u(c_l_i), and we define α_0=-∞, γ_i,0=0.
Therefore,
∂ℓ/∂ u(c) = ∑_i=1^n ∑_j=1^C 1(y_i=j) 1(c_l_i = c) b'_ij,
∂^2 ℓ/∂ u(c)∂ u(c') = ∑_i=1^n ∑_j=1^C 1(y_i=j) 1(c_l_i = c) 1(c_l_i = c') (b”_ij - (b'_ij)^2),
where
b'_ij = G'(η_i,j)/G(η_i,j), if j = 1,
[G'(η_i,j) - G'(η_i,j-1)] / [G(η_i,j) - G(η_i,j-1)], if j = 2,…,C-1,
-G'(η_i,j-1) / [1 - G(η_i,j-1)], if j = C,
b”_ij = G”(η_i,j)/G(η_i,j), if j = 1,
[G”(η_i,j) - G”(η_i,j-1)] / [G(η_i,j) - G(η_i,j-1)], if j = 2,…,C-1,
-G”(η_i,j-1) / [1 - G(η_i,j-1)], if j = C.
Overall, we can write
∂ℓ/∂u = P^⊤Ψ_1, ∂^2 ℓ/∂u ∂u^⊤ = P^⊤Ψ_2 P,
where P is an n × m binary matrix whose ith row is 0 everywhere except at position l_i, which equals 1, Ψ_1 is an n-dimensional vector, and Ψ_2 is an n × n diagonal matrix, with elements
Ψ_1i = ∑_j=1^C 1(y_i=j) b'_ij,
Ψ_2ii = ∑_j=1^C 1(y_i=j) (b”_ij - (b'_ij)^2),
respectively, for i=1,…,n.
To find û used in the Laplace approximation, we solve
K^-1û - P^⊤Ψ̂_1 = 0,
and the Hessian is Ĥ = K^-1 - P^⊤Ψ̂_2 P, where Ψ̂_1 and Ψ̂_2 denote Ψ_1 and Ψ_2 evaluated at û.
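The inner optimisation can then be carried out by Newton iterations on g(u). The sketch below is a minimal implementation of our own (not the authors' R/Fortran code), assuming the logit link, for which G' = G(1-G) and G'' = G'(1-2G); it returns û and Ĥ:

```python
import numpy as np
from scipy.special import expit          # logistic CDF, G(x) = expit(x)

def laplace_mode(y, X, P, K, alpha, beta, n_iter=50, tol=1e-8):
    """Mode u_hat of f(u|y) and Hessian H for the ordinal model, logit link.
    y: (n,) classes in {1,...,C}; X: (n,p) covariates; P: (n,m) incidence
    matrix of compounds; K: (m,m) GP covariance; alpha: (C-1,) ordered
    intercepts; beta: (p,) regression coefficients."""
    n, m = P.shape
    Kinv = np.linalg.inv(K)
    u = np.zeros(m)
    rows = np.arange(n)
    for _ in range(n_iter):
        eta = P @ u + X @ beta
        # gamma_{i,0} = 0, gamma_{i,j} = G(alpha_j + x_i'beta + u), gamma_{i,C} = 1
        gam = np.column_stack([np.zeros(n),
                               expit(alpha[None, :] + eta[:, None]),
                               np.ones(n)])
        dG = gam * (1.0 - gam)           # G'  (zero in the padded columns)
        d2G = dG * (1.0 - 2.0 * gam)     # G''
        p = gam[rows, y] - gam[rows, y - 1]
        b1 = (dG[rows, y] - dG[rows, y - 1]) / p        # b'_{i,y_i}
        b2 = (d2G[rows, y] - d2G[rows, y - 1]) / p      # b''_{i,y_i}
        Psi1, Psi2 = b1, b2 - b1 ** 2
        grad = Kinv @ u - P.T @ Psi1                    # gradient of g(u)
        H = Kinv - P.T @ (Psi2[:, None] * P)            # Hessian of g(u)
        step = np.linalg.solve(H, grad)
        u = u - step
        if np.max(np.abs(step)) < tol:
            break
    return u, H
```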
§.§ Prediction
The approximation in (<ref>) enables the prediction of the class, y_*, for an untested compound, c_*, with GP value u_*.
We begin by evaluating the conditional distribution u_*|y to estimate the unobserved effect of the GP. Using the conditional independence of u_* and y given u, we observe that
f(u_*|y) = ∫ f(u_*|u) f(u|y) du ≈∫ f(u_*|u) f̂(u|y) du =: f̂(u_*|y),
so the density f(u_*|y) can be approximated by a Gaussian density f̂(u_*|y), whose mean and variance can be
computed using the laws of total expectation and variance.
In doing so,
𝔼[u_*|y] = 𝔼[𝔼[u_*|u]|y] = 𝔼[K_*^⊤ K^-1 u|y] = K_*^⊤ K^-1û,
Var[u_*|y] = 𝔼[Var[u_*|u]|y] + Var[𝔼[u_*|u]|y]
= 𝔼[K_** - K_*^⊤ K^-1 K_*|y] + K_*^⊤ K^-1 Var[u|y] K^-1 K_*
= K_** - K_*^⊤ K^-1 K_* + K_*^⊤ K^-1Ĥ^-1 K^-1 K_*,
where K_* = Cov(u, u_*), and K_** = Cov(u_*, u_*).
Here, we have made use of the well-known relations 𝔼[u_*|u] = K_*^⊤ K^-1 u and Var[u_*|u] = K_** - K_*^⊤ K^-1 K_* from Gaussian conditioning rules.
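In code, given û and Ĥ from the Laplace step, the predictive moments above are direct matrix computations; a minimal sketch:

```python
import numpy as np

def predict_gp(u_hat, H, K, k_star, k_ss):
    """Mean and variance of u_* | y; k_star = Cov(u, u_*), k_ss = Var(u_*)."""
    Kinv_k = np.linalg.solve(K, k_star)            # K^{-1} K_*
    mean = Kinv_k @ u_hat                          # K_*' K^{-1} u_hat
    var = k_ss - k_star @ Kinv_k + Kinv_k @ np.linalg.solve(H, Kinv_k)
    return mean, var
```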
Let y_* denote the outcome of a future experiment under conditions x_* using compound c_*. To obtain the predicted outcome, we require the probabilities P(y_* = j|y) for j=1,…,C. These can be estimated as follows:
P(y_* = j|y) = E[1(y_*=j)|y]
= E[E[1(y_*=j)|y,u_*]|y]
= E[E[1(y_*=j)|u_*]|y]
= E[π_*j|y]
= ∫π_*j f(u_*|y) du_*
≈∫π_*j f̂(u_*|y) du_*.
Equation (<ref>) is evaluated using numerical integration. In this paper, we use the Gauss-Hermite quadrature method <cit.> with 21 integration points. In fact, under the probit link, the integral in (<ref>) has an analytical expression (see Appendix <ref>), however this is not the case for general link functions.
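For a general link the integral must be computed numerically. A sketch of the Gauss–Hermite rule with 21 nodes, shown here for the logit link, uses the change of variables u_* = μ_* + √2 σ_* t to integrate against the N(μ_*, σ_*^2) density:

```python
import numpy as np
from scipy.special import expit

def class_probs_gh(mu_star, var_star, x_star, alpha, beta, n_pts=21):
    """Approximate P(y_* = j | y), j = 1,...,C, by Gauss-Hermite quadrature
    against the Gaussian approximation u_* | y ~ N(mu_star, var_star)."""
    t, w = np.polynomial.hermite.hermgauss(n_pts)      # weight exp(-t^2)
    u = mu_star + np.sqrt(2.0 * var_star) * t          # integration nodes
    eta = alpha[:, None] + x_star @ beta + u[None, :]  # (C-1, n_pts)
    gam = np.vstack([np.zeros(n_pts), expit(eta), np.ones(n_pts)])
    pi = np.diff(gam, axis=0)            # class probabilities at the nodes
    return (pi @ w) / np.sqrt(np.pi)
```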
§.§ Variance corrections to parameter uncertainty
The formula for Var[u_*|y] given in (<ref>) is a function of the model parameters, θ.
In practice, θ is unknown and is replaced by its estimate θ̂, effectively assuming that the true value of θ is θ̂.
This ignores the uncertainty in the value of θ.
<cit.> provided a correction to the prediction variance for generalised linear mixed models with independent random effects.
We follow a similar approach here to derive variance corrections to the GP estimates for our model.
Let u_* be the true value and let û_*(y,θ) = 𝔼[u_*|y] be the prediction with known θ.
We want to assess the error û_*(y,θ̂) - u_*, where θ̂ denotes the maximum likelihood estimator for θ.
We write û_*(y,θ̂) - u_* = û_*(y,θ̂) - û_*(y,θ) + û_*(y,θ) - u_* = e_1 + e_2, where e_1 = û_*(y,θ̂) - û_*(y,θ) is the additional error due to the uncertainty in θ and e_2 = û_*(y,θ) - u_* is the error had θ been known.
Note that e_1 is a function of y, but not of u_*, and 𝔼[e_2|y] = û_*(y,θ) - 𝔼[u_*|y] = 0.
Then,
𝔼[e_1e_2] = 𝔼[𝔼[e_1e_2|y]] = 𝔼[e_1𝔼[e_2|y]] = 0.
Furthermore,
e_1 = û_*(y,θ̂) - û_*(y,θ) ≈∇_θû_*(y,θ)^⊤ (θ̂- θ)
⇒Var(e_1) ≈∇_θû_*(y,θ)^⊤ℐ(θ)^-1∇_θû_*(y,θ),
where ℐ(θ) denotes the Fisher information matrix of θ, which can be estimated by 𝒥(θ̂,y).
Then,
𝔼[(û_*(y,θ̂) - u_*)^2] = 𝔼[(e_1 + e_2)^2] = Var(e_1 + e_2) = Var(e_1) + Var(e_2)
≈∇_θû_*(y,θ)^⊤ℐ(θ)^-1∇_θû_*(y,θ) + Var[u_*|y].
The second term in (<ref>) is given by (<ref>), while the first term is the variance correction due to the estimation of θ. To compute the derivatives ∇_θû_*(y,θ), note that, by (<ref>), ∇_θû_*(y,θ) = K_*^⊤ K^-1∇_θû(y,θ), where û(y,θ) is the solution to (<ref>). By differentiating both sides of (<ref>) with respect to the elements of θ, we are able to compute ∇_θû(y,θ) algebraically.
§.§ A note on computational complexity
The proposed methodology can be summarised in two steps.
* Estimation of the parameters θ by maximising (<ref>).
* Calculation of class probabilities via (<ref>) based on the estimates obtained.
Parameter estimation involves the use of a numerical optimisation procedure, such as the quasi-Newton algorithm, with construction of the Laplace approximation at each iteration. The Laplace approximation itself requires solving (<ref>), which consists of a separate quasi-Newton iteration procedure. In our experience, only a few iterations of the inner optimisation are required; however, each iteration involves the inversion of an m× m dense matrix, which has computational complexity of order O(m^3). Thus, denoting the average number of iterations for maximising (<ref>) by L_1, and the average number of iterations for solving (<ref>) by L_2, the average complexity of the first step is O(L_1 L_2 m^3).
The prediction step involves calculation of the predictive mean and variance given by (<ref>) and (<ref>) respectively. The matrices K^-1 and H^-1 would be available during the estimation step, so the computational complexity in this case would be linear in m, O(m).
§ A GENETIC ALGORITHM FOR DRUG-DISCOVERY
A fundamental aspect of chemoinformatics is the ability to identify promising
compounds without the requirement of physical testing. Due to the expanse of the chemical space and the high dimensionality of the molecular representation, assessing the performance of every compound is currently
impractical. Therefore, efficient search methods are required to guide exploration of the chemical space and propose interesting regions for further analysis.
In pursuit of the above objective, we advocate for the utilisation of a genetic algorithm. Genetic algorithms are
a family of stochastic optimisation techniques inspired by the
Darwinian model of natural selection <cit.>. They are particularly effective in the application of feature
selection <cit.>. The two defining characteristics of a genetic algorithm are the crossover and mutation rates. The crossover rate mirrors the natural process of genetic inheritance, as it involves passing on a portion of genes from each parent to the offspring population. Conversely, the mutation rate introduces randomness into the population, mimicking the occasional genetic variations observed in evolutionary cycles.
Within each iteration of the algorithm, a generation of compounds reproduce, resulting in an offspring population. The performance of the offspring population is evaluated through a fitness score. Features associated with higher fitness scores are more likely to be passed down to subsequent generations during the reproductive cycle. After a fixed number of reproductive cycles, the features associated with the highest fitness scores form the fittest individuals in the population.
To demonstrate the genetic algorithm, consider a population of an even number of compounds, k, denoted
{c_1,…,c_k}, along with their corresponding fitness, defined
below, where each compound is represented by its fingerprint
vector c_r = (c_r1,…,c_rκ), for the rth compound,
r=1,…,k. We then perform the following steps iteratively (a code sketch of the full cycle is given after the list). Each
step produces an updated population, which we also denote by {c_1,…,c_k}.
Selection Each compound is ranked according to the number of
compounds with fitness lower than that compound's fitness. The population
of compounds is then updated by sampling k elements with replacement among
{c_1,…,c_k}, with the probability of choosing a particular
compound, say c_r, being proportional to a+bρ_r, where a, b > 0
are chosen parameters of the algorithm, and ρ_r is the rank of the
rth compound.
Crossover We form k/2 pairs (c_r,c_r+1) for
r=1,3,…,k-1. For each pair, we perform crossover with probability
p_c. We sample an index λ uniformly in {1,…,κ}.
Then, we update
* c_r,i← c_r,i and c_r+1,i← c_r+1,i,
for i ≤λ, and
* c_r,i← c_r+1,i and c_r+1,i← c_r,i,
for i > λ.
Mutation For each compound, c_r, we perform mutation with
probability p_m. We sample an index δ
uniformly in {1,…,κ}. Then, we update c_rδ← 1-c_rδ.
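A minimal sketch of the full cycle is as follows; the fitness function is supplied externally (for instance, one of the model-based criteria discussed next), and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_search(fitness, kappa, k=10, n_gen=100,
                   a=10.0, b=1.0, p_c=0.8, p_m=0.1):
    """Genetic search over kappa-bit fingerprints.  `fitness` maps a
    (k, kappa) 0/1 population matrix to scores (higher = fitter)."""
    pop = rng.integers(0, 2, size=(k, kappa))
    for _ in range(n_gen):
        # Selection: rank-based resampling, probability ~ a + b * rank
        f = fitness(pop)
        rank = np.argsort(np.argsort(f)) + 1           # 1 = least fit
        prob = (a + b * rank) / np.sum(a + b * rank)
        pop = pop[rng.choice(k, size=k, p=prob)]
        # Crossover: swap the tails of consecutive pairs past a random cut
        for r in range(0, k, 2):
            if rng.random() < p_c:
                lam = rng.integers(1, kappa)
                pop[[r, r + 1], lam:] = pop[[r + 1, r], lam:]
        # Mutation: flip one uniformly chosen bit
        for r in range(k):
            if rng.random() < p_m:
                d = rng.integers(kappa)
                pop[r, d] = 1 - pop[r, d]
    return pop[np.argmax(fitness(pop))]
```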
In terms of fitness value, suppose that we are interested in identifying
the compound c_* that is more likely to lead to an outcome y_* in the
highest class, for given experimental conditions, x_*, under the
current data y. According to our model, this is achieved by the
compound with the highest value of P(y_* = C|y), which is estimated
by (<ref>) for j=C. This objective may be desirable if the
experimental conditions have been decided. An alternative objective can be
to find the compound with the lowest GP mean, given by (<ref>).
This is interpreted as finding the compound that is most likely to
correspond to the lowest GP value, and therefore, the highest probability
for the highest class, regardless of the experimental conditions. The
additional benefit of this objective is that it avoids the numerical
integration for computing the class probabilities.
A crucial part of the methodology is the use of the prediction formula (<ref>) to determine the fitness of a compound. It is therefore imperative that the predictions are accurate. In practice it is possible to test only few compounds which then form the data used to fit the model. In such cases, these compounds must be selected from a large data base in a way that they form a representative sample of the chemical space. There are several approaches used in the literature for this purpose including clustering, dissimilarity-based, cell-based, and optimisaton approaches <cit.>. <cit.> studied optimality criteria with the goal of minimising the average prediction variance that can be applied here, while <cit.> proposed an algorithm that can be used for dissimilarity-based selection.
§ SIMULATION STUDIES
We conducted various simulation studies to assess the performance of the
proposed methods. The general model is described by equations (<ref>)
and (<ref>), with specific choices for covariates and link
functions as described below.
§.§ Estimation performance
In the first study, we consider estimation of the model
parameters. The chemical space is formed by combining 5 features, producing
a total of m=2^5-1=31 distinct compounds (excluding the compound with
no active features). The data consist of n=341 experiments, where each of the
31 compounds was tested under 11 different experimental conditions. Let
y_ik, where i=1,…,11, and k=1,…,31, denote the observed
outcome at the ith experiment with compound k, which can be among C=3
categories. The model for the cumulative probabilities is
logit P(y_ik≤ j) = α_j + β x_i + u_k, j = 1,2,
with a single covariate x_i = (i-1)/10. We consider two different GP
models for , one using the Gaussian covariance, and one using the
exponential covariance, both with variance parameter σ^2 and scale
parameter ϕ. The model parameters α_1, α_2, β,
σ^2, and ϕ are considered unknown. We conducted two different
simulation studies from each model with the true parameter values chosen as
shown in Table <ref>. We performed 500 simulations in total from
each model, where we estimate the parameters using the proposed method.
Table <ref> shows the average estimate across the 500 simulations
of each parameter and for each model, in addition to the true and estimated
standard deviations, based on the inverse of the Hessian matrix of the
approximate log-likelihood. The results in Table <ref> show that the proposed method can
estimate the parameters accurately. In particular, for estimating the
parameters α_j and β, there is virtually no bias. In terms of
estimating the standard deviation using the inverse of the Hessian matrix
of the approximate log-likelihood, we observe that the proposed method
underestimates the standard deviation slightly, except for the scale
parameter ϕ, where the method overestimates the standard deviation. Figure <ref> provides an overview of the distribution of the parameter estimates in the simulation studies, with the true parameter value indicated by a red “x”. It is apparent that the estimates exhibit almost no bias (apart from the estimation of ϕ in some cases), as the true value frequently lies near the median of the estimated values.
Overall, we can conclude that the proposed methodology provides accurate
estimates and standard errors for the model parameters.
§.§ Assessment of the prediction variance formula
Next, we consider the accuracy of the variance correction
formula (<ref>). Using the simulated data from
Section <ref>, we predict the GP value at the 31 compounds
for each of the 500 simulated data sets. We then compute the empirical
variance for the GP corresponding to each compound across the 500
simulations. We compare this against the average prediction variance
estimate based on the uncorrected and corrected versions.
Table <ref> shows the average (over the 31 compounds) squared
difference between the empirical variance and the uncorrected and corrected
estimates. We observe that the corrected version is more accurate, and, in
fact, examination of the individual estimates shows that the uncorrected
version underestimates the variance. This verifies that the corrected
prediction variance formula (<ref>) is more accurate.
§.§ Assessment of the drug discovery algorithm
In the final simulation study we aim to assess the ability of the proposed
genetic algorithm of Section <ref> to identify compounds of high
efficacy. We simulated from the same four models as in
Section <ref>, except that the chemical space was increased
to 10 features, i.e. 2^10-1 = 1023 compounds (excluding the compound
with no active features). In addition, only m=90 compounds were tested.
These compounds were selected based on a space-filling criterion using the
Euclidean distance d(·,·) = √(T(·,·))
<cit.>. After fitting the model to each generated data
set, we use the proposed genetic algorithm to find the compound that
(a) corresponds to the lowest GP value, and (b) corresponds to the highest
probability of an output in the third category when x=1. The genetic
algorithm was run with population size k=10, for 100 generations, and
with parameters a=10, b=1, p_c = 0.8, and p_m=0.1. We applied this
method to 100 generated data sets simulated from each of the four models.
We then compared how highly ranked the derived compound is, compared to the
truly optimal compound as predicted by the fitted model, i.e., we
compute (<ref>) and (<ref>) for each of the 1023 compounds in
our model, which we then rank from best to worst, and then find the rank
that corresponds to the compound selected by the genetic algorithm.
Table <ref> shows how many times each rank was attained. On
average, more than 70% of the time, the genetic algorithm returned the
optimal compound, and more than 90% of the time it returned one of the top
two compounds. Of course, the results can be improved further by increasing the
population size k and the number of iterations, at additional
computational cost. This suggests that the proposed algorithm works well
for this problem.
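As an illustration of the kind of search described above, here is a minimal Python sketch of a genetic algorithm over binary fingerprint vectors with single-point crossover and bit-flip mutation, using p_c and p_m as in the simulation study. The fitness function is a placeholder for the model-based criteria (<ref>) and (<ref>), and the rank-based selection weights are a stand-in for the paper's a/b-weighted scheme, whose exact form is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2024)

def fitness(x):
    # placeholder for the model-based criterion, e.g. the predicted
    # probability of a category-3 outcome at x = 1 (hypothetical stand-in)
    return -np.sum((x - 0.5) ** 2)

def genetic_search(n_features=10, k=10, generations=100, p_c=0.8, p_m=0.1):
    pop = rng.integers(0, 2, size=(k, n_features))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # rank-based selection probabilities (stand-in for the a/b scheme)
        ranks = np.argsort(np.argsort(fit))
        probs = (ranks + 1) / np.sum(ranks + 1)
        parents = pop[rng.choice(k, size=k, p=probs)]
        children = parents.copy()
        for i in range(0, k - 1, 2):        # single-point crossover
            if rng.random() < p_c:
                cut = int(rng.integers(1, n_features))
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        mutate = rng.random(children.shape) < p_m   # bit-flip mutation
        children[mutate] = 1 - children[mutate]
        children[0] = pop[np.argmax(fit)]   # elitism: keep the current best
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]

print(genetic_search())
```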
§ HAZARD CLASSIFICATION OF ORGANIC SOLVENTS
To illustrate the proposed GP model, we consider the list of 500 organic solvents published in
<cit.>. The list contains a range of chemical information, including its classification according to the German water hazard class (WGK)
<cit.>, which classifies chemicals in three levels as slightly, obviously, and highly hazardous to water, in increasing
severity.
The other variables contained in the data set describe the properties of the solvents, including the GHS classification,
GHS hazard statements, Hansen Solubility
Parameter, boiling temperature, vapor pressure at atmospheric pressure,
density, molecular weight, and molar volume. The SMILES codes are also included; they encode the chemical graph of each molecule and are used to derive each solvent's chemical fingerprint.
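As an aside, a typical SMILES-to-fingerprint step can be sketched with RDKit as below. The specific fingerprint algorithm and parameters used for the solvents data are not stated in the text, so the Morgan fingerprint with radius 2 and 1024 bits is an assumption for illustration only.

```python
# Sketch of deriving fingerprints from SMILES with RDKit; the actual
# fingerprint type used for the solvents data is an assumption here.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smiles = ["CCO", "CC(=O)C", "c1ccccc1"]      # ethanol, acetone, benzene
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=1024) for m in mols]

# Tanimoto similarity between the first two solvents
print(DataStructs.TanimotoSimilarity(fps[0], fps[1]))
```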
After removing missing values, n=485 data points remain in the
proportions of 25%, 22%, and 52% respectively for the three WGK ordered classes.
We consider the proposed ordinal model, given by equations (<ref>) and (<ref>), with a combination of correlation and link functions given in Table <ref> and Table <ref>. In addition we fit a random forest model using the R package <cit.>,
which incorporates the chemical information provided in the data as predictors instead of the
derived fingerprint vector.
Let π̂_kj denote the estimated probability that the kth deleted
outcome is j. If the realised outcome is y_k = j', we define the
logarithmic loss by L_log(k) = -logπ̂_kj', and the
spherical loss by
L_sph(k) = -π̂_kj'(∑_j=1^Cπ̂_kj^2)^-1/2.
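In code, the two losses for a single held-out observation follow directly from the vector of estimated class probabilities; a minimal Python sketch with hypothetical numbers:

```python
import numpy as np

def log_loss(prob, j_true):
    """Logarithmic loss: prob is the vector of estimated class
    probabilities, j_true the realised class index."""
    return -np.log(prob[j_true])

def spherical_loss(prob, j_true):
    return -prob[j_true] / np.sqrt(np.sum(prob ** 2))

# hypothetical prediction for one deleted observation with C = 3 classes
p_hat = np.array([0.2, 0.3, 0.5])
print(log_loss(p_hat, 2), spherical_loss(p_hat, 2))
```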
We use 5-fold cross-validation to assess the
performance of each model, where approximately 20% of the observations from each class are randomly removed from the data in each fold and the models are fitted on the remaining data.
After fitting each model, we predict the outcome at the
removed data from the solvent's fingerprint (in the case of the ordinal
model), or chemical information (in the case of the random forest model). The log and spherical losses were averaged across the five folds.
Table <ref> shows the
average cross-validation loss for each model, together with the average time to maximise the approximate likelihood. We observe that the model with probit link and Tanimoto covariance performs best, with an average time of 26.8 seconds. The
models based on the independent correlation perform worst, indicating the relevance of the fingerprint information. We also observe that most models perform better than the random forest model, suggesting the
ordinal model is able to extract the necessary information from the
solvent's fingerprint when predicting the WGK class.
Next, we consider identifying which fingerprint features contribute the most to a solvent being classed as highly hazardous (class 3). We consider only those features that are present in more than 10% of the solvents in our data (177 features), with the remaining features fixed at 0. As an exploratory step, we find which features appear most frequently among the class-3 solvents; 3 features appear in more than 80% of the class-3 solvents, and 5 more appear in more than 70%. We then used the proposed genetic algorithm with population size k=100, number of generations 500, and parameters a=100, b=1, p_c=0.8, and p_m=0.1 to find which solvents are predicted to have the highest class-3 probability. We examine the features common to the 100 fittest members from the optimisation. Twenty features appeared in all members of the population, which includes the 3 most common among class-3 solvents in the data, and 2 of the 5 that appear more than 70% of the time. In addition, 32 features appear in at least half of the members of the population, which could be further investigated as potential drivers of hazardousness.
Referees of this paper have expressed concerns regarding the validity of
the isotropy assumption, which underpins the proposed methodology. Our
case presents a challenge, as techniques designed to assess anisotropy in
spatial data are not directly applicable due to the high dimensionality of
the embedding space, and developing methodologies tailored to this
challenge extends beyond the scope of our paper. To investigate the issue,
we consider a lower-dimensional embedding of the compound fingerprints to a
Euclidean space of dimension 2 using multidimensional scaling
<cit.>. Figure <ref> (left) shows the
lower-dimensional representation of the compounds in that space. This
allows us to compute (using the R package geoR <cit.>) the directional
semi-variogram of the predicted GP at various directions, which is shown in
Figure <ref>. Each variogram in Figure <ref> corresponds to one directional angle, at 0, 45, 90, and 135 degrees. We observe that the trajectories of the variograms are similar across these directions, suggesting the absence of anisotropy: the spatial dependence in the data is consistent across directions, which supports the use of an isotropic GP model.
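A minimal Python sketch of this diagnostic step, embedding a (here randomly generated) Tanimoto distance matrix into the plane with metric multidimensional scaling; the directional variograms would then be computed on the predicted GP values at these coordinates, as done with geoR in the paper.

```python
import numpy as np
from sklearn.manifold import MDS

# Stand-in data: 50 compounds with 16 binary features, from which we
# build the pairwise Tanimoto distance matrix D.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 16))
inter = X @ X.T                                   # common "on" bits
union = X.sum(1)[:, None] + X.sum(1)[None, :] - inter
D = 1.0 - inter / np.maximum(union, 1)            # Tanimoto distance

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(D)               # 2-D representation
print(coords[:3])
```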
§ CONCLUSION
The motivation for this paper is to provide rigorous statistical methodology for chemoinformatics with particular focus in predicting properties of chemical compounds and aiding drug discovery. We propose a GP model over the chemical space to capture the correlation in the effects of chemical compounds based on their chemical structure. The GP correlation is modelled in terms of the Tanimoto distance, which is a non-Euclidean metric on the chemical space. This approach allows us to incorporate compound similarity in our model, and implement the closeness principle of chemoinformatics.
Our findings show that the proposed GP model performs better in the application considered than the independent random effects model and the random forest model, which demonstrates that, indeed, the correlation between compounds should be taken into account. In addition, we have shown that the genetic algorithm is a suitable method for exploration of the chemical space, and can be used to propose compounds of high efficacy. Our simulation studies validated the suitability of the proposed estimation techniques.
We focused on the case where the outcome is measured in an ordinal scale, although the model can be
extended to the case where the outcome is categorical (for classification), or continuous (for regression). The change in the model for the classification task is that the link function is applied to the probabilities for each class instead of the cumulative probabilities. The derivations in this case would be similar. For the regression task, the multinomial distribution is replaced by a normal distribution, with a separate variance (noise) parameter. In this case, the model fitting process simplifies considerably as it is possible to derive the predictive distributions in closed form. We expect the proposed methodology to have similar performance in those cases.
A notable shortfall of our application is that all features within the chemical fingerprint are considered equally important. As each fingerprint consists of many features, examining each one individually would be time-consuming. A natural extension of the proposed approach is to embed the chemical space in a higher-dimensional Euclidean space to account for potential anisotropy or non-stationarity <cit.>.
Furthermore, the optimisation method is not guaranteed to produce a
realistic compound; however, this issue can be overcome by considering observable
compounds that are similar to the one derived from the optimisation method.
Another open question is the identification of important fingerprint
features for prediction. One possible research direction towards that goal
is the implementation of Shapley values in the spirit of
<cit.>.
The proposed model and techniques may be applied to other settings, such as
predicting the potency of pharmaceutical products, and properties of food
ingredients. Directions for future work include implementing sparse correlation
functions that allow application of our methods to large chemical databases.
Further analysis could also incorporate other metrics on the chemical
space, such as the cosine similarity or the dice coefficient, as well as
consider interaction effects between the GP and other covariates. Another
interesting extension would be to consider alternative representations of
the chemical space, such as those based on topological data analysis
<cit.>.
§ PREDICTION UNDER THE PROBIT LINK
In this section we derive the closed-form expression for the prediction probabilities of a future experiment under the probit link. These probabilities can be expressed in integral form in general by the right-hand side of (<ref>).
We will make use of the following lemma, which is proven in Section 3.9 of <cit.>.
Let f(z|μ,σ^2) denote the Gaussian probability density function with mean μ and variance σ^2, and let Φ(z) denote the standard normal cumulative distribution function. Then, for a ∈ℝ,
∫Φ(z-a) f(z|μ,σ^2) dz = Φ( (μ-a)/√(1+σ^2) ).
Considering the probit model, we have for the jth category,
j=1,…,C, π_*j = Φ(α_j + x_*^⊤β + u_*) -
Φ(α_{j-1} + x_*^⊤β + u_*), with the convention
α_0 = -∞ and α_C = ∞, and f̂(u_*|𝐲)
corresponds to the Gaussian density with mean μ_* = 𝔼[u_*|𝐲] given
by (<ref>), and variance σ^2_* = Var[u_*|𝐲] given
by (<ref>). Then, by (<ref>),

ℙ(y_* = j|𝐲) ≈ ∫ π_*j f̂(u_*|𝐲) du_*
= ∫ Φ(α_j + x_*^⊤β + u_*) f̂(u_*|𝐲) du_* - ∫ Φ(α_{j-1} + x_*^⊤β + u_*) f̂(u_*|𝐲) du_*
= Φ( (μ_* + α_j + x_*^⊤β)/√(1+σ^2_*) ) - Φ( (μ_* + α_{j-1} + x_*^⊤β)/√(1+σ^2_*) ).
§ ACKNOWLEDGMENTS
Arron Gosnell was supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/L015684/1. Arron Gosnell acknowledges Syngenta for partial funding.
§ DATA AVAILABILITY STATEMENT
The solvents data can be obtained from <DOI: 10.17632/b4dmjzk8w6.1>.
On the basin of attraction of a critical three-cycle of a model for the secant map
Ernest Fontich
Departament de Matemàtiques i Informàtica, Universitat de Barcelona (UB), Gran Via de les Corts Catalanes 585, 08007 Barcelona, Catalonia, Spain, and Centre de Recerca Matemàtica (CRM), 08193 Bellaterra, Barcelona, Catalonia, Spain
fontich@ub.edu

Antonio Garijo
Departament d'Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Catalonia, Spain
antonio.garijo@urv.cat

Xavier Jarque
Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via 585, 08007 Barcelona, Catalonia, and Centre de Recerca Matemàtica, Edifici C, Campus Bellaterra, 08193 Bellaterra, Catalonia
xavier.jarque@ub.edu
The first author is supported by the
grant PID2021-125535NB-I00.
The second and third authors were supported by MICIU/AEI grant PID2020-118281GB-C32(33). Both grants are funded by MICIU/AEI/10.13039/501100011033/FEDER, UE. The second author is supported by Generalitat de Catalunya 2021SGR-633. We want to thank the Thematic Research Programme Modern holomorphic dynamics and related fields, Excellence Initiative – Research University programme at the University of Warsaw.
Finally, this work has also been funded through the María de Maeztu Program for Centers and Units of Excellence in R&D, grant CEX2020-001084-M funded by MICIU/AEI/10.13039/501100011033/FEDER,UE
We consider the secant method S_p applied to a real polynomial p of degree d+1 as a discrete dynamical system on ℝ^2. If the polynomial p has a local extremum at a point α then the discrete dynamical system generated by the iterates of the secant map exhibits a
critical periodic orbit of period 3, or three-cycle, at the point (α,α). We propose a simple model map T_a,d having a unique fixed point at the origin which encodes the dynamical behaviour of S_p^3 at the critical three-cycle. The main goal of the paper is to describe the geometry and topology of the basin of attraction of the origin of T_a,d as well as its boundary. Our results concern global, rather than local, dynamical behaviour. They include that the boundary of the basin of attraction is the stable manifold of a fixed point or contains the stable manifold of a two-cycle, depending on the parameters d (even or odd) and a∈ℝ (positive or negative).
Keywords: Root finding algorithms, secant map, stable manifold, center manifold, basin of attraction.
§ INTRODUCTION
A major goal in applied and theoretical mathematical modelling is to find stable equilibria, which determine the expected behaviour of the phenomenon we are analyzing. Those equilibria are given by real (or complex) numbers, real (or complex) finite dimensional vectors, or functions belonging to an infinite dimensional space, depending on the nature of the model under consideration.
In the majority of cases, the stable equilibria determining the evolutionary steady states of any model turn out to be solutions of non-linear equations. In general, we cannot solve these equations explicitly. Accordingly, there is a long history of research studying different algorithms which efficiently find their solutions.
Among these algorithms the ones given by the special kind of discrete dynamical systems, known as
root-finding algorithms, have been shown to be the most useful. Roughly speaking, a root-finding algorithm is a system such that for most of the initial conditions the asymptotic behaviour of the corresponding iterative process tends to one of the solutions of the non-linear equation determining the equilibria of the model. We observe that the condition of convergence for most of the initial conditions is a global phenomenon rather than (only) a local one. Indeed, this is the reason why the global dynamics of root-finding algorithms has been an important subject of study for dynamicists.
Moreover, when the model has more than one steady state the phase portrait of the root-finding algorithm splits into regions where the iterates of the seeds converge to different equilibria. Consequently, two natural questions arise. On the one hand, about the boundaries of these regions: do they have simple geometry and topology? What about the restricted dynamics on these boundaries? Do they have positive measure?
On the other hand, about the stable steady states: are there other stable behaviours of the algorithm unrelated to the steady states of the model? If they exist, there would be open sets of the dynamical plane consisting of bad initial conditions for the root-finding algorithm. The answers to all these questions have had a great influence on the study of theoretical as well as applied discrete dynamical systems.
There is no discussion that the most famous root-finding algorithm is the well-known Newton's method. More concretely, assume the equation we want to solve is p(x)=0, where, to simplify the exposition, we assume p(x) to be a polynomial with x∈ℝ or x∈ℂ but the method extends to higher dimensional problems. Newton's method is the study of the dynamical system
x_n+1=N_p(x_n):= x_n-p(x_n)/p^'(x_n), x_0 ∈ℝ or x_0 ∈ℂ.
It is easy to see that N_p(α)=α if and only if p(α)=0 (if p^(ℓ)(α)=0, 1≤ℓ≤ k, one can still use (<ref>) after some modifications) and moreover if x_0≈α then {x_n:=N^n_p(x_0)}→α as n→∞. Consequently, Newton's method is a (one dimensional) dynamical system whose fixed points correspond to the roots of p and they are local attractors. In fact, it is somehow the canonical root-finding algorithm. The natural dynamical space for one dimensional Newton's method applied to degree d polynomials is ℂ (rather than ℝ) due to the fundamental theorem of algebra, and it defines one of the most studied families of rational (holomorphic) maps on the Riemann sphere <cit.>. See also <cit.> for Newton's method applied to transcendental entire maps.
Despite its fundamental role, Newton's method has some limitations, and the literature has explored other root-finding algorithms trying to avoid these weaknesses (for instance, avoiding the computation of derivatives when their computational cost is too high) or to improve the efficiency of the method under certain hypotheses (for instance, improving the local speed of convergence to the root(s) of p).
Another basic
root-finding algorithm is the secant method given by the dynamical system generated by the iterates of the 2-dimensional map
S(x,y)=S_p(x,y):=(y,y-p(y) x-y/p(x)-p(y)).
However, in contrast to Newton's method, the secant method is a two-dimensional system and does not require computing any derivative of p. Nonetheless, as before, we have that S(α,α)=(α,α) if and only if p(α)=0. Moreover if (x_0,y_0)≈ (α,α) then
{(x_n,y_n):=S^n(x_0,y_0)}→ (α,α)
at least for simple roots of p (see <cit.> for multiple roots). We refer to <cit.>, and references therein, for a detailed discussion of the phase plane (ℝ^2 and ℂ^2) of the secant method.
One (unexpected) fact of the real secant map is that there are no finite periodic points of period two or three in ℝ^2. However, it has finite periodic points of period four and some of them determine the geometry and topology of the boundaries of the immediate basin of attraction of its fixed points (see <cit.>). Moreover, if we extend the domain of the secant method to infinity (that is, if we extend the secant map to ℝℙ^2 or ℂℙ^2) a new three-cycle phenomenon arises. Indeed, in <cit.> (see also <cit.>) the authors showed that if c∈ℝ satisfies that p'(c)=0 (critical point) and p(c) p”(c)≠ 0 the secant method exhibits a critical three-cycle at (c,c)
given by
(c,c) ↦ (c,∞) ↦ (∞,c) ↦ (c,c) under S.
Moreover, the three-cycle has a basin of attraction whose geometry varies depending on the degree of the polynomial, although the geometry and topology are quite similar among polynomials of the same degree. These basins, and their disparate geometry, can be visualized in red in Figure <ref> for concrete polynomials of degrees of different parity.
The main goal of this work is to go deeper into the understanding of the geometry of the basin of the critical three-cycle by means of a model which captures the relevant information and allows us to give an accurate description.
Following the approach in <cit.> we assume, without loss of generality, that c=0 and p(0)=1. Thus, assuming also that deg(p)=d+1, the polynomial p can be written as
p(x)= 1 + a_2 x^2 + … + a_d+1x^d+1,
where d ≥ 2 and a_2a_d+1≠ 0. Using the natural extension of S at infinity, via the charts φ_1(x,y)=(1/x,y) and φ_2(x,y)=(x,1/y), and some computations (explicit in <cit.>)
the expression of S^3 near the origin is given by
S^3(x,y) = ( y - ((-a_2)^d/a_{d+1})(x+y)^d , y - 2((-a_2)^d/a_{d+1})(x+y)^d ) + 𝒪_{d+1},
where 𝒪_d+1 indicates terms bounded by (|x|+|y|)^d+1. The expression (<ref>) motivates the introduction of the following model map T_a,d which encodes the dominant terms of S^3 near the origin. Concretely,
T_a,d(x,y) = ( y - a(x+y)^d , y - 2a(x+y)^d ),
where a≠ 0 is a parameter.
We now are ready to state the main results of this paper about the basin of attraction of the origin of (<ref>), defined as
𝒜_a,d(0) = { (x,y) ∈ℝ^2 | T_a,d^n(x,y) → (0,0) as n →∞} ,
depending on the parameters a and d. Obviously, the origin is the unique fixed point of T_a,d and DT_a,d(0,0) has eigenvalues 0 and 1. The 0 eigenvalue guarantees that 𝒜_a,d(0) ≠ ∅, but a complete topological and geometric description depends on the motion on the center manifold. The main theorem describes 𝒜_a,d(0) as well as its boundary ∂𝒜_a,d(0) depending on a and d.
Let 𝒜_a,d(0) be the basin of attraction of the origin for the map T_a,d.
(a)If d is even and a ≠ 0 then 𝒜_a,d(0) is a compact set which is homeomorphic to a closed topological disk and the boundary of 𝒜_a,d(0) is the stable manifold of the origin. See Figure <ref>(a).
(b) If d is odd and a>0 then 𝒜_a,d(0) is an open, simply connected, unbounded set. Moreover, ∂𝒜_a,d(0) contains the stable manifold of a hyperbolic two-cycle {p_0,p_1} lying on ∂𝒜_a,d(0). See Figure <ref>(b).
(c) If d is odd and a<0 then 𝒜_a,d(0) is the stable manifold of the origin. Moreover, 𝒜_a,d(0) is unbounded. See Figure <ref>(c).
We finish with an important, somehow complementary, remark on the previous result to calibrate its value. On the one hand, by construction, system (<ref>) encodes the information of system (<ref>) as long as (x,y)≈ (0,0). But if one reads Theorem A carefully, one sees that it does not refer to the dynamics in a given small neighbourhood of the origin; for instance, Theorem A(b) shows that 𝒜_a,d(0), a>0, is unbounded. Hence, a priori, there is no reason to argue that Theorem A can be transported to explain the red regions in Figure <ref>. However, comparing the top right picture in Figure <ref> with Figure <ref>(a) (d even), or comparing the bottom right picture in Figure <ref> with Figure <ref>(b) (d odd), one can immediately see that (<ref>) and (<ref>) share more than expected. A better explanation for this connection, somehow global, will require future work.
In a companion paper <cit.> we study in more depth the boundary of 𝒜_a,d(0) when d is odd and a is positive, and we show that
there is a (topologically transversal) homoclinic intersection between the stable and the unstable manifolds of the hyperbolic two-cycle {p_0,p_1}, and that there are infinitely many periodic points (somehow chaotic dynamics) in ∂𝒜_a,d(0). See Figure <ref>.
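Although the proofs below are purely analytic, the trichotomy of Theorem A is easy to visualize numerically. The following Python sketch approximates 𝒜_a,d(0) by direct iteration of the model map T_a,d on a grid; the grid, iteration count, and tolerances are arbitrary choices for illustration and play no role in the arguments.

```python
import numpy as np

def T(x, y, a, d):
    s = a * (x + y) ** d
    return y - s, y - 2.0 * s

def in_basin(x, y, a, d, n_iter=200, tol=1e-8, blow=1e6):
    """Crude test of membership in A_{a,d}(0): iterate T and check
    convergence to the origin before escaping."""
    for _ in range(n_iter):
        x, y = T(x, y, a, d)
        r = abs(x) + abs(y)
        if r < tol:
            return True
        if r > blow:
            return False
    return r < 1e-3   # undecided points very near the origin count as inside

# sample the square [-1.5, 1.5]^2 on a coarse grid for d = 3, a = 1
a, d = 1.0, 3
grid = np.linspace(-1.5, 1.5, 201)
basin = np.array([[in_basin(x, y, a, d) for x in grid] for y in grid])
print(basin.mean())   # fraction of sampled points attracted to the origin
```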
The paper is organized as follows. In Section <ref> we show that T_a,d reduces to three cases: d even with a=1, and d odd with a=± 1. In Section <ref> we study the series expansions of the stable and center invariant manifolds of the origin. Theorems A(a) and A(b) are proven in Sections <ref> and <ref>, respectively. Finally, in Section <ref> we prove Theorem A(c).
§ PRELIMINARIES AND LOCAL DYNAMICS NEAR THE ORIGIN
A preliminary simple step is to show that, given d≥ 2 fixed, for most values of a ≠ 0 the maps of the family T_a,d in (<ref>) are conjugate to each other, so that we only need to deal with one or two particular values of a. See Corollary <ref> below.
We have the following statements.
(a) If d is even and a_1 and a_2 are such that a_1 a_2 ≠ 0 then T_a_1,d is conjugate to T_a_2,d.
(b) If d is odd and a_1 and a_2 are such that a_1 a_2 >0 then T_a_1,d is conjugate to T_a_2,d.
The conjugation will be a rescaling.
Given any μ∈ℝ we have that
T_a,d(μ x, μ y) = μ( y - aμ^{d-1}(x+y)^d , y - 2aμ^{d-1}(x+y)^d ) = μ T_{aμ^{d-1},d}(x,y).
Given a_1, a_2 0 we take
μ := ( a_2/a_1)^1/(d-1).
If d is even and a_1 and a_2 are two parameters with a_1 a_2 ≠ 0 we immediately have
T_a_1,d(μ x, μ y)= μ T_a_2,d(x,y).
If d is odd the same is true but the existence of the (d-1)-root requires the condition a_1a_2>0.
To study the dynamics of the family of maps given by (<ref>) it is enough to consider the cases {a=1, d≥ 2} and {a=-1, d≥ 3, d odd}.
To avoid heavy notation (depending on the parameter a=± 1) in what follows we assume a=1. We will deal with the case a=-1 (for d odd) in Section <ref>, Remark <ref>, and in Section <ref>. In particular, when a=1, we will use the simplified notation
T_d(x,y):=T_1,d(x,y)= ( y - (x+y)^d , y - 2(x+y)^d ).
We have
(a) If d is even,
T_d sends ℝ^2 onto {x≥ y}. The map T_d has two inverses
T^-1_±, d (x,y) = ( -2x+y ±( x-y)^1/d ,
2x-y )
which determine two one to one maps
T^-1_+,d:{x≥ y}→{x≥ -y} and T^-1_-,d: {x≥ y}→{x≤ -y}.
Moreover, for any x_0∈ℝ, T_d maps the line y=-x+x_0 onto the line y=x-x_0^d in a one-to-one way.
(b) If d is odd, the map T_d:ℝ^2→ℝ^2 is a homeomorphism onto ℝ^2 and its inverse map is real analytic in ℝ^2 ∖{x=y}, but not differentiable on {x=y}. Its inverse is given by
T^-1_d (x,y) = ( -2x+y + ( x-y)^1/d , 2x-y ).
Moreover, for any x_0∈ℝ, the map T_d maps bijectively the line y=-x+x_0 onto the line y=x-x_0^d.
All statements come from direct computations.
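The explicit formulas are easy to check numerically; a small Python sketch for the case d odd (the test point is arbitrary):

```python
import numpy as np

def T(x, y, d):
    s = (x + y) ** d
    return y - s, y - 2.0 * s

def T_inv(x, y, d):      # d odd, using the real d-th root of x - y
    r = np.sign(x - y) * abs(x - y) ** (1.0 / d)
    return -2.0 * x + y + r, 2.0 * x - y

d = 3
p = (0.4, -0.7)
print(T_inv(*T(*p, d), d))   # should reproduce p up to rounding
```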
§ LOCAL DYNAMICS AROUND THE ORIGIN: THE STABLE AND THE CENTER MANIFOLDS
The origin is the only fixed point of the map T_d in (<ref>). In this section we obtain information on the local dynamics near the origin from the analytic expressions (in series expansion) of the (local) invariant manifolds. The derivative of T_d at (x,y) is given by
DT_d(x,y)= ( [ -d(x+y)^{d-1}   1-d(x+y)^{d-1} ;  -2d(x+y)^{d-1}   1-2d(x+y)^{d-1} ] ),
and therefore
DT_d(0,0)= ( [ 0  1 ;  0  1 ] ).
The matrix DT_d(0,0) is independent of the parameter d. Its eigenvalues are 0 and 1 with associated eigenvectors v_1=(1,0) and v_2=(1,1), respectively. In other words the direction v_1 is super-attracting while the direction v_2 is neutral. It follows from the general theory of invariant manifolds of fixed points of maps that there is a stable invariant manifold of (0,0) being tangent to v_1 and a (non-unique) center invariant manifold of (0,0) being tangent to v_2. We denote them W^s_d(0) and W^c_d(0), respectively. According to the general theory, W^s_d(0) is analytic and W^c_d(0) is C^k for all k≥ 1.
Even if W^c_d(0) may not be unique, all its Taylor coefficients are uniquely determined.
More concretely, the local invariant manifolds can be parametrized as graphs
W^s_{d,loc}(0) = { (x, φ^s_d(x) ) | |x| < ε_0 } and W^c_{d,loc}(0) = { (x, φ^c_d(x) ) | |x| < ε_0 },
for some ε_0 >0, where
φ^s_d(x) = ∑_n=2^∞α_n(d) x^n φ^c_d(x) = x+ ∑_n=2^∞β_n(d) x^n .
We also denote by R^s_d and R^c_d the maps which encode the induced dynamics on the invariant manifolds. Thus, locally, we have
T_d( x,φ_d^s(x) ) = ( R^s_d(x), φ_d^s(R^s_d(x)) )
T_d(x,φ_d^c(x) )= ( R^c_d(x), φ_d^c(R^c_d(x)) ),
respectively.
See <cit.> for a general discussion on the theory of local invariant manifolds. In the next lemma we provide the structure of the Taylor expansion of φ^s_d and φ^c_d. The lower order terms will determine the local dynamics near the origin.
Let d≥ 2. The Taylor series of φ^s_d and φ^c_d have the following structure:

φ^s_d(x)= x^d ∑_{k=0}^∞ α_{d+k(d-1)}(d) x^{k(d-1)} and φ^c_d(x)= x+ x^d ∑_{k=0}^∞ β_{d+k(d-1)}(d) x^{k(d-1)}.

Moreover, α_d(d)= 2, α_{2d-1}(d)=4d, β_d(d)= -2^d and β_{2d-1}(d)=-3d 2^{2d-1}, and thus we have that

φ_d^s(x)=2x^d+ 𝒪(x^{2d-1}) and φ_d^c(x)=x-2^d x^d+𝒪(x^{2d-1}),

and the one-dimensional dynamics induced by T_d on the stable and center manifolds are governed by

R^s_d: x ↦ x^d+ 𝒪(x^{2d-1}) and R^c_d: x ↦ x-2^{d+1}x^d+ 𝒪(x^{2d-1}),

respectively.
See Figure <ref>(a) and (b) for the induced dynamics of the map T_d on the invariant manifolds.
To simplify the notation below we introduce the symbol {·}_n so that if Φ is a formal series around the origin, we write
Φ(x)=∑_n≥ 0{Φ}_n x^n.
We prove (<ref>) for the case of the stable manifold W^s_d(0) (see (<ref>)). Using that the stable manifold is an invariant graph for T_d we obtain that if W^s_d(0)= φ^s_d then
φ^s_d(x) - 2 [ x+ φ^s_d(x) ]^d = φ^s_d ( φ^s_d(x) - [ x+ φ^s_d(x) ]^d ).
From the above equation, some computations show that, on the one hand α_2(2)=2 and α_2(d)=0 for all d≥ 3, and on the other hand, for all n ≥ 3 we have that α_n(d) in (<ref>) can be written recursively as
α_n(d)= 2 {( x+ ∑_ j=2 ^n-1α_j(d) x^j )^d }_n + ∑_ i = 2 ^n-1α_i(d) {( ∑_j=2 ^n-1α_j(d) x^j - ( x+ ∑_ j=2 ^n-1α_j(d) x^j )^d )^i }_n.
Proving (<ref>) for the stable manifold is equivalent to seeing that in (<ref>) the coefficient α_n(d)=0 for all n≥ 2 such that n-d is not a multiple of d-1, or equivalently, not of the form n=d+k(d-1) for k≥ 0. We argue by induction. We claim that for any N≥ 1, up to order
n=d+(N-1)(d-1)
the stable manifold writes as
x^d ∑_ k =0 ^N-1α_d+k(d-1)(d) x^k(d-1) =: x^d Ψ(x^d-1).
When N=1 the result is true since α_n(d) = 0 for 2 ≤ n ≤ d-1 and α_d(d)=2. Indeed, from (<ref>), α_j(d)=0 implies α_{j+1}(d)=0 for all j=1,…, d-2. Also, α_d(d)=2 since we have a unique term of degree d with coefficient 2, coming from the first term in the right hand side of (<ref>).
Assuming the claim is true for N, we are going to prove that in the right hand side of (<ref>) the coefficients
α_n+j(d) x^n+j, j≥ 1, are involved in terms of order n+d or higher. This is easy to check for j=1. For j>1 the coefficients appear in terms of order bigger or equal than n+d+1.
In the right hand side of (<ref>) the first term is
2(x+x^d Ψ(x^d-1) + α_n+1(d) x^n+1 + …)^d
and the lower term in which α_n+1(d) appears is
2dx^d-1α_n+1(d) x^n+1 = 2dα_n+1(d) x^n+d.
The second term of the right hand side of (<ref>) can be written as
α_d(d) (x^d Ψ(x^d-1)+ α_n+1(d) x^n+1+… -
(x+x^d Ψ(x^d-1)+ α_n+1(d) x^n+1+…) ^d )^d
+ …
and the lower term in which α_n+1(d) appears is
2d (2x^d)^d-1α_n+1(d) x^n+1 = 𝒪(x^n+d(d-1)+1).
This finishes the induction.
Once the expression of φ^s_d given in (<ref>) is proved and the first terms of the expansion have been calculated we only need to justify the expression in (<ref>). For this we compute the image of a point on the stable invariant manifold only using the lowest term of the series expansion
T_d(x,φ^s_d(x)) =(2x^d-(x+2x^d)^d, 2x^d-2(x+2x^d)^d)
=(x^d+ 𝒪(x^{2d-1}), -4dx^{2d-1} + 𝒪(x^{3d-2})).
Therefore, the one-dimensional dynamics is given by
x ↦ x^d+ 𝒪(x^2d-1).
Similar computations provide the result for φ^c_d.
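The leading coefficients in the lemma can also be cross-checked symbolically by solving the invariance equation (<ref>) order by order; a small sympy sketch (here for d=3, expanding up to order 2d-1=5), offered only as a sanity check, not as part of the proof:

```python
import sympy as sp

d, N = 3, 5                      # check orders up to x^N for d = 3
x = sp.symbols('x')
a = sp.symbols(f'a2:{N + 1}')    # unknown coefficients a_2, ..., a_N
phi = sum(a[n - 2] * x**n for n in range(2, N + 1))

# invariance of the graph y = phi(x): the second component of T_d equals
# phi evaluated at the first component
inner = phi - (x + phi)**d
eq = sp.expand(phi - 2 * (x + phi)**d - phi.subs(x, inner))

sol = sp.solve([eq.coeff(x, n) for n in range(2, N + 1)], list(a), dict=True)
print(sol)   # expect a_3 = 2 and a_5 = 4*d = 12, all others 0
```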
Using the same arguments as the ones in Lemma <ref> one can get similar results for the case d odd and a=-1. The difference is the sign of some leading coefficients. More precisely if d is odd and a=-1 in the definition of T_d we have that
α_d(d)= -2, α_{2d-1}(d)=4d, β_d(d)= 2^d and β_{2d-1}(d)=-3d 2^{2d-1}, and hence we have that
φ_d^s(x)=-2x^d+ 𝒪(x^{2d-1}) and φ_d^c(x)=x+2^d x^d+𝒪(x^{2d-1}),
and the one-dimensional dynamics induced by T_d on the stable and center manifolds are governed by
x ↦ -x^d+ 𝒪(x^{2d-1}) and x ↦ x+2^{d+1}x^d+ 𝒪(x^{2d-1}),
respectively. See Figure <ref>(c) for the induced dynamics of the map T_d over the invariant manifolds in this case.
We close this section by completing the discussion above, for the case d even. We have shown in Lemma <ref> that many coefficients of the series expansion of the stable and center manifolds are zero (no mater the parity of d). Next we prove that, for d even, all non-zero coefficients of φ^s_d are positive.
Let d be even. Then, α_ℓ(d)≥ 0 for all ℓ≥ 0.
In Lemma <ref> we proved that the coefficients of the series expansion of the analytic expression of the local stable manifold at the origin satisfy certain properties. In particular we proved that all coefficients α_ℓ(d), ℓ≥ 0, of the monomials x^ℓ with ℓ d+k(d-1) for some k≥ 0 are zero. Moreover we also proved that α_d(d)=2 for all d≥ 2.
We fix d≥ 2 even. To simplify the notation we remove the dependence of the coefficients on d; that is, we write α_k:=α_k(d). Let B(x) be the auxiliary analytic function given by the series expansion

B(x) = ∑_{k=d}^∞ b_k x^k := ( x + ∑_{k=d}^∞ α_k x^k )^d.

Note that b_{k+1} depends on α_j, d≤ j≤ k. The lemma follows from the following claim.

Claim: If n≥ d then, for all d≤ k≤ n, we have α_k ≥ 2b_k ≥ 0.

We prove the claim by induction. For n=d it is obviously true because α_d=2 and b_d=1. Assuming the claim is true for n, from (<ref>) we can write
α_n+1= 2 {( x + ∑_k = d^nα_k x^k )^d }_n+1 +
{∑_i =d^nα_i [ ∑_k = d^nα_k x^k - ( x + ∑_k = d^nα_k x^k )^d ]^i }_n+1 .
The induction assumption implies
α_k ≥ 2 {( x + ∑_j = d^kα_j x^j )^d }_k ≥{( x + ∑_j = d^kα_j x^j )^d }_k
, d≤ k ≤ n.
This implies that all coefficients of the terms of order n+1 of the second term of the right hand side of (<ref>) are non-negative (because i≥ d≥ 2).
Then, we conclude from (<ref>) that α_{n+1} ≥ 2b_{n+1} ≥ 0, and the claim follows.
§ PROOF OF THEOREM A(A): THE CASE D EVEN AND A=1
Let d≥ 2 be even. From Corollary <ref> we can take a=1 to cover all cases (a ≠ 0). We simplify the notation writing T_d:=T_1,d and 𝒜_d(0):=𝒜_1,d(0). We will show that the origin belongs to the boundary of the basin, that the basin is contained in the upper half plane and that its boundary is the stable manifold of the origin.
Let us introduce some notation. Given (x_0,y_0)∈ℝ^2 we will write (x_k,y_k) = T_d^k(x_0,y_0) for k≥ 0. Set

R_d:= (1-1/d)(2d)^{-1/(d-1)}.

Note that R_2 = 1/8 and, in general, R_d<1. Finally let

𝒫 = {(x,y) | y≤ x} and 𝒫_{R_d} = {(x,y)∈ 𝒫 | y≥ 0, 0≤ x ≤ R_d}.
Since the proof of Theorem A(a) is quite long we split the arguments into several lemmas. The first one is just an observation.
We have that (x_k,y_k)∈ 𝒫 for k≥ 1 and then the sequences
{x_k}_k≥ 1 and {y_k}_k≥ 1 are monotonically decreasing.
Moreover, while x_k > - y_k the sequences are strictly decreasing.
The first assertion follows from Lemma <ref>(a). The second one follows directly from the inequalities:
x_{k+1} = y_k - (x_k+y_k)^d ≤ y_k ≤ x_k, k≥ 1 (k≥ 0 if (x_0,y_0)∈ 𝒫),
y_k+1 = y_k - 2(x_k+y_k)^d ≤ y_k, k≥ 0.
The next lemma shows that 𝒜_d(0) is a bounded set.

𝒜_d(0) is a bounded set. More concretely,

𝒜_d(0) ⊂ ℬ := ((-5R_d, R_d)× (0,2R_d)) ∪ {(0,0)}.
We decompose

ℝ^2 ∖ ℬ = ℬ_1 ∪ ℬ_2 ∪ ℬ_3 ∪ ℬ_4,

where

ℬ_1 = {(x,y) | y≤ 0}∖{(0,0)},
ℬ_2 = {(x,y) | x≥ R_d},
ℬ_3 = {(x,y) | y≥ 2R_d},
ℬ_4 = {(x,y) | x ≤ -5R_d, 0< y <2R_d},

and we argue that 𝒜_d(0) ∩ ℬ_j = ∅, 1≤ j≤ 4, so that 𝒜_d(0) ⊂ ℬ. We will use the following property: by the invariance of 𝒜_d(0) by T_d and its inverses T_{±,d}^{-1}, we have that (x_0,y_0) ∈ 𝒜_d(0) if and only if (x_k,y_k) ∈ 𝒜_d(0) for some k≥ 0.
Let (x_0,y_0) ∈ ℬ_1. Since (x_0,y_0) ≠ (0,0), if y_0=0 then
y_1=-2x_0^d<0 and if y_0<0 then
y_1≤ y_0<0. Hence, in both cases
y_1<0 and by (<ref>) the sequence of iterates cannot converge to (0,0).
Next, we claim that if (x_0,y_0) ∈ ℬ_2 then (x_1,y_1) ∈ ℬ_1.
Indeed, we consider the line {x=x_0} with x_0≥ R_d and we look at the second component of its image, Ψ_1(y):= π_y T_d(x_0, y)= y - 2(x_0+y)^d. Since d≥ 2 is even, lim_{y→±∞} Ψ_1(y) = -∞ and therefore Ψ_1 has a global maximum. Actually, it has a unique maximum, whose location is obtained from the condition Ψ_1'(y)=0 and is

y^{(m)} := (2d)^{-1/(d-1)} - x_0.

Then Ψ^{(m)}_1 := Ψ_1(y^{(m)}) = (2d)^{-1/(d-1)} - 2(2d)^{-d/(d-1)} - x_0 = R_d - x_0 and therefore y_1≤ 0.
Moreover, (x_1,y_1) ≠ (0,0) because the only preimage of (0,0) is (0,0) ∉ ℬ_2.
Next we take (x_0,y_0) ∈ ℬ_3 and we claim that (x_1,y_1) ∈ ℬ_1 ∪ ℬ_2.
Indeed, consider the line {y=y_0} with y_0 ≥ 2R_d. Its image is contained in the line {(u,v) | v=2u-y_0}, which is contained in ℬ_1 ∪ ℬ_2 because
we have that either u ≥ R_d, and the claim is true, or u < R_d and then v = 2u-y_0 < 2R_d-y_0 ≤ 0 and (x_1, y_1) ≠ (0,0).
Finally, if (x_0,y_0) ∈ ℬ_4 we claim that
(x_1,y_1) ∈ ℬ_1 ∪ ℬ_2.
Indeed, notice that x_0+y_0 < -5R_d + 2R_d =-3R_d and then (x_0+y_0)^d > (3R_d)^d.
If x_1 ≥ R_d the claim is true. If x_1 < R_d then
y_1 = y_0 - 2(x_0+y_0)^d = x_1 -(x_0+y_0)^d
< R_d -(3R_d)^d = (1-(3^d/(2d))(1-1/d)^{d-1}) R_d < 0.
We have that (x_0,y_0) ∈𝒜_d(0) if and only if (x_k,y_k)∈ 𝒫_{R_d} for all k ≥ 1.

Assume (x_0,y_0) ∈ 𝒜_d(0). By Lemma <ref>(a), x_k≥ y_k for all k≥ 1 and by Lemma <ref>, x_k < R_d for all k≥ 0. Since the sequences {x_k} and {y_k} are decreasing for k≥ 1, if there exists m >0 such that y_m <0, then y_k≤ y_m<0 for all k≥ m and (x_k,y_k) cannot converge to (0,0). Then, y_k≥ 0 for all k and the limit
y^⋆ = lim_{k→∞} y_k exists, y^⋆≥ 0 and then x_k ≥ y_k ≥ 0. As a consequence (x_k,y_k)∈ 𝒫_{R_d} for all k ≥ 1.
Conversely, let (x_0,y_0) ∈ℝ^2 and assume that (x_k,y_k)∈ 𝒫_{R_d} for all k ≥ 1. Since the sequence {y_k}_{k≥ 0} is strictly decreasing and bounded from below by 0
there exists the limit
y^⋆ = lim_k→∞ y_k ≥ 0.
From the recurrence
y_k+1 = y_k - 2 (x_k + y_k)^d
we obtain that lim _k→∞ (x_k+y_k) exists and it is 0.
This implies that -y^⋆ = lim_{k→∞} x_k ≥ 0. Then y^⋆ =0 and (x_0,y_0) ∈ 𝒜_d(0).
We now turn the attention to ∂𝒜_d(0). Our goal is to prove that ∂𝒜_d(0) coincides with the global stable manifold of the origin, W^s_d(0).
W^s_d(0) cuts the line {x=y} at some point (p,p) with 0<p < R_d.
In Lemma <ref> it is proven that the local expression of W^s_d(0) is given by the graph of an analytic function φ_d^s(x)= 2 x^d + … whose series expansion in the x-variable has all its coefficients non-negative, and therefore there exists ρ_1 >0 such that γ = {(x,φ_d^s(x)) | x∈ (0,ρ_1)} is contained in 𝒫_{R_d}. We claim that the extension of this local piece γ of W^s_d(0) eventually leaves 𝒫_{R_d}. Indeed, assume the contrary.
We globalize γ iterating with T^-1_+,d (see (<ref>)).
Let (x_0,y_0) ∈γ and denote
(x_-k,y_-k)=T^-k_+,d(x_0,y_0), k≥ 0.
We have
y_-k-1 = y_-k + 2 (x_-k-y_-k) > y_-k.
If all (x_{-k},y_{-k}) ∈ 𝒫_{R_d} we have that the sequence {y_{-k}}_{k ≥ 0} is strictly increasing and bounded, and we conclude that there exists y^⋆>0 such that
y^⋆=lim_k →∞ y_-k >0.
Moreover, from (<ref>) we have that
x_-k = (y_-k +y_-k-1 )/ 2 → y^⋆.
Now, using the recurrence
x_-k-1 = -2x_-k + y_-k + (x_-k - y_-k)^1/d,
we get that y^⋆=0, which provides a contradiction with (<ref>).
Finally, since we have seen that 𝒜_d(0) meets neither {y=0}∖{(0,0)} nor {x=R_d}, the globalization of γ has to cross {x=y}.
We denote by Γ the piece of the stable manifold W^s_d(0) from (0,0) to (p,p) contained in 𝒫_{R_d}.
We plot Γ in red colour in Figure <ref>.
Let φ^s_d be given in (<ref>).
The following properties for φ_d^s hold.
(a) There exists a unique point q̅=(q̅_x,q̅_y)∈Γ whose tangent vector has slope m=1/2.
(b) If we denote by r_0>0 the radius of convergence of φ_d^s (as a function of a complex variable) then 0<r_0<R_d, and φ^s_d is
increasing and convex in the interval (0,r_0)
and decreasing and convex in (-r_0-2φ^s_d(r_0), 0).
We observe that since all coefficients of the series expansion of φ_d^s are non-negative (see Lemma <ref>) we conclude from Vivanti-Pringsheim's Theorem <cit.> that φ^s_d as a function of a complex variable has a singularity at x=r_0>0 and
φ_0: = φ^s_d(r_0)= ∑_k ≥ dα_k r_0^k.
In fact we have that r_0<R_d< ∞ and φ_0< 2R_d since the graph of φ_d^s is contained in W_d^s(0) ⊂𝒜_d(0) and, by Lemma <ref>, 𝒜_d(0) ⊂ ℬ. In particular φ^s_d|_{(0,r_0)} is an increasing and convex function
and
lim_x → r_0^-(φ^s_d)^' (x) = + ∞.
Indeed, if (φ^s_d)^' (r_0) < ∞, then
(φ^s_d)' could be extended in a differentiable way for x>r_0. The graph close to x=r_0 would be the image by T^{-1}_{+,d} of a piece of the graph of φ^s_d, say γ_2, closer to the origin. The piece γ_2 does not contain the point (p,p) since its image is the point (-p,p), outside 𝒫_{R_d}. Therefore T^{-1}_{+,d} is analytic on γ_2 and then φ^s_d would be analytic in a neighborhood of r_0, which provides a contradiction. This establishes (<ref>) and the existence of r_0.
We have the symmetry (d is even)
T_d(x,y)=T_d(-2y-x,y).
Hence if (x,y)∈ W_d^s(0) then also (-2y-x,y)∈ W_d^s(0).
More concretely, since (x,φ^s_d(x)) ∈ W^s_d(0), (-x-2φ^s_d(x), φ^s_d(x)) ∈ W^s_d(0) and then
φ^s_d (-x-2φ^s_d(x)) = φ^s_d(x).
This means that φ^s_d is defined for x∈ (-r_0-2φ^s_d(r_0),0 ). Moreover, taking derivatives in (<ref>) we get
(φ^s_d) '(-x-2φ^s_d(x))(-1-2(φ^s_d)'(x)) = (φ^s_d)'(x),
(φ^s_d) ”(-x-2φ^s_d(x))(-1-2(φ^s_d)'(x))^2 + (φ^s_d) '(-x-2φ^s_d(x))(-2(φ^s_d)”(x)) = (φ^s_d)”(x),
and hence we can conclude that φ^s_d is decreasing and convex in (-r_0-2φ^s_d(r_0),0 ). Indeed, substituting (φ^s_d) '(-x-2φ^s_d(x)) from (<ref>) into (<ref>) we obtain
(φ^s_d) ”(-x-2φ^s_d(x))(-1-2(φ^s_d)'(x))^2
= (φ^s_d)”(x) (1- 2(φ^s_d) '(x)/1+ 2(φ^s_d) '(x)) >0.
From the previous properties there exists a unique point q̅=(q̅_x,q̅_y)∈Γ whose tangent vector has slope m=1/2.
Let
Ω_0^+ = {(x,y)∈ 𝒫_{R_d} | y ≥ φ_d^s(x), 0≤ x≤ ρ_2 },
Ω_0^- = {(x,y)∈ 𝒫_{R_d} | 0 < y< φ_d^s(x), 0≤ x≤ ρ_2 },

with ρ_2 < min{ (1/2)(4d)^{-1/(d-1)}, q̅_x }. See Figure <ref>.
The domain Ω_0^+ is invariant by T_d and
Ω_0^+ ⊂ 𝒜_d(0). Moreover,
Ω_0^- ∩ 𝒜_d(0) = ∅.
Let (x,y):=(x_0,y_0) ∈Ω_0^+. The sequence {x_k}_k≥ 0
is decreasing by Lemma <ref> . So, it is enough to show that y_1-φ^s_d(x_1) ≥ 0. Indeed
y_1-φ^s_d(x_1) = y - 2(x+y)^d - φ^s_d(y - (x+y)^d)
= y - φ^s_d(x) + φ^s_d(x) - φ^s_d(y - (x+y)^d) - 2(x+y)^d
= y - φ^s_d(x) + H(x,y),
where
H(x,y) = φ^s_d(x) - φ^s_d(y - (x+y)^d) - 2(x+y)^d .
Taking into account that φ^s_d satisfies the invariance equation
φ^s_d(x) - 2(x+φ^s_d(x))^d = φ^s_d(φ^s_d(x) - (x+φ^s_d(x))^d),
H can be rewritten as
H(x,y) = φ^s_d(φ^s_d(x) - (x+φ^s_d(x))^d) - φ^s_d(y - (x+y)^d) + 2(x+φ^s_d(x))^d - 2(x+y)^d
= ∫_0^1 [ (φ^s_d)'(ξ_t-(x+ξ_t)^d)(1-d(x+ξ_t)^{d-1}) + 2d(x+ξ_t)^{d-1} ] (φ_d^s(x) -y) dt
=: (φ^s_d(x) -y) H̃(x,y),
where
ξ_t= y +t (φ^s_d(x) -y) and hence since (x,y)∈Ω_0^+ we have
0 < φ_d^s(x) ≤ξ_t ≤ y ≤ x≤ρ_2.
From (<ref>) and (<ref>) we have
y_1-φ^s_d(x_1)=(y-φ^s_d(x))(1- H̃(x,y)),
so that it is enough to see that H̃(x,y)<1.
Given x∈ (0, ρ_2), we introduce Ψ_2(ξ)=ξ-(x+ξ)^d
for ξ∈ (0 , x). The function Ψ_2(x) is concave and we have that Ψ_2(0)=-x^d <0, with -x^d >-ρ_2^d and
Ψ_2(x)= x-(2x)^d = x(1-2^d x^d-1) > x(1-1/(2d) ) >0 by one of the conditions in the definition of ρ_2.
Hence, ξ_t-(x+ξ_t)^d ≥ -ρ_2^d for all t∈ [0,1].
Note that, by the fact that the coefficients of the expansion of
φ^s_d are non-negative (Lemma <ref>), at the symmetric point the absolute value of the derivative is smaller, i.e. for x∈[0,r_0),
|(φ^s_d)'(-x ) | ≤ (φ^s_d)'(x ) so that, for -x^d < ζ <0, |(φ^s_d)'(ζ ) | ≤ (φ^s_d)'(-ζ ) ≤ (φ^s_d)'(x^d )
≤ (φ^s_d)'(ρ_2 ) ≤ 1/2.
Then
|(φ^s_d)'(ξ_t-(x+ξ_t)^d)| <1/2.
Moreover,
2d (x+ξ_t)^d-1 < 2d(2x)^d-1≤ 1/2.
By (<ref>) and (<ref>) we obtain
H̃(x,y) < 1
and therefore
we obtain that the iterates stay on the same side of the graph of φ^s_d.
Now we deal with Ω^-_0. To prove that
Ω_0^- ∩ 𝒜_d(0) = ∅ we will see that if (x_0,y_0)
∈Ω_0^- then not all its iterates can remain in
Ω_0^-. Assume the contrary. To simplify the estimates we perform an (x-dependent) translation putting the (local) stable manifold at {y=0}. Concretely, we make the change
C(x,y) = (x, y+φ^s_d(x)). The transformed map T̃_d = C^{-1}∘ T_d∘ C is

T̃_d(x,y) = (F(x,y), G(x,y)) := ( y+φ^s_d(x) - (x+y+φ^s_d(x))^d , y+φ^s_d(x) - 2(x+y+φ^s_d(x))^d - φ^s_d(F(x,y)) ).

The domain Ω^-_0 is transformed into

Ω̃^-_0 = { (x,y) | 0<x<ρ_2, -φ^s_d(x) < y < 0 }.
Let (x_0,y_0) ∈ Ω̃^-_0.
We use again the notation
(x_k,y_k) = T̃_d^k(x_0,y_0) for k≥ 0.
Let ρ_3 ∈ (0,ρ_2] be such that
0< φ^s_d(x) < 3x^d for x∈ (0,ρ_3).
Assume that (x_k,y_k)∈ Ω̃^-_0 for all k≥ 0.
Since d is even, we also have 0< x_{k+1} ≤ y_k + φ^s_d(x_k)
< φ^s_d(x_k) ≤ x_k. Then {x_k} is also decreasing and
x_k = y_{k-1}+φ_d^s(x_{k-1})-(x_{k-1}+y_{k-1}+φ_d^s(x_{k-1}))^d ≤ φ^s_d(x_{k-1}) - x^d_{k-1} < 2 x^d_{k-1},
and inductively we get
x_k < 2^(d^k-1/d-1) x_0^d^k < (2^1/(d-1) x_0)^d^k.
Note that since 2^1/(d-1) x_0 < 2^1/(d-1)ρ_2 <1/2 then x_k→ 0. We have
G(x,0) = φ^s_d(x) - 2(x+φ^s_d(x))^d - φ^s_d(φ^s_d(x) - (x+φ^s_d(x))^d) = 0

by the invariance equation (<ref>), and

G_1(x) := ∂G/∂y (x,0) = 1 - 2d(x+φ^s_d(x))^{d-1} - (φ^s_d)'(F(x,0))(1- d(x+φ^s_d(x))^{d-1}) = 1 - 2d x^{d-1} +…,

so that

G(x,y)= G_1(x)y + G_2(x,y) with G_2(x,y) = 𝒪(y^2).

There exists ρ_4 ∈ (0,ρ_3] such that

G_1(x) > 1-ν x^{d-1} and |G_2(x,y)| < M|y|^2, x∈ (0,ρ_4), (x,y)∈ Ω̃^-_0,

for some ν > 2d and M>0.
Then, taking an iterate (x_k, y_k) such that x_k <ρ_4, relabeling it by (x_0, y_0) and starting the iteration again, we have

y_{k+1} = G(x_k, y_k) ≤ (1-ν x_k^{d-1}) y_k + My_k^2
< (1-ν x_k^{d-1}-Mφ^s_d(x_k))y_k ≤ (1-b x_k^{d-1}) y_k,

where b= ν + 3ρ_4 M.
Iterating (<ref>) we obtain
y_k < ∏_j=0^k-1 (1-bx_j^d-1) y_0
= y_0 exp∑_j=0^k-1log (1-bx_j^d-1).
The series
∑log (1-bx_j^d-1) is convergent since bx_j^d-1 tends to zero and log (1+x) > (2log 2 ) x if x∈ (-1/2, 0). Then,
y_k <y_0 exp ( S_0 ) where S_0= ∑_j=0^∞log (1-bx_j^d-1).
This means that y_k is smaller than some negative number, so that y_k cannot converge to 0 and therefore (x_0, y_0) ∉ 𝒜_d(0).
If (x_0, y_0)∈Ω_0^-, assume that all its iterates stay in 𝒫_{R_d}. Then the sequences {x_k} and {y_k} are decreasing and there exists m≥ 0 such that x_m<ρ_4 and, by the previous estimates, (x_m, y_m)∉ 𝒜_d(0).
Let Ω_r be the closure of the bounded domain whose boundary is the simple closed curve formed by the concatenation of Γ and
J_r:={(x,y) | x=y, 0< x < p} (the subscript r stands for right, in contrast to the later notation J_ℓ for left). See Figure <ref>. The domain Ω_r is invariant by T_d since the iterates cannot jump across the boundary. Moreover, there exists m≥ 1 such that
T^m_d(Ω_r) ⊂Ω ^+_0.
Then Ω_r ⊂ 𝒜_d(0). By Lemma <ref>, to obtain 𝒜_d(0) we only need to take one preimage of Ω_r by T_d.
In the light of Lemma <ref>(a) we write
Γ_± := T^-1_±,d(Γ), Ω_± := T^-1_±,d(Ω_r ).
Clearly, the sets Ω_± are contained in 𝒜_d(0). The boundaries of Ω_± are the images of the boundaries of Ω_r by T_{±,d}^{-1}. Consequently, we have
∂Ω_+=Γ_+ ∪ J_ℓ and ∂Ω_-=Γ_- ∪ J_ℓ,
where Γ_+:=T_{+,d}^{-1}(Γ) is a curve contained in
{y≥ -x} which joins (0,0) with (-p,p), Γ_-:=T_{-,d}^{-1}(Γ) is a curve contained in {y≤ -x} which joins (0,0) with (-p,p), and
J_ℓ:=T_{+,d}^{-1}(J_r) = {(x,y) | y=-x, -p< x < 0 }. Notice that every point in Γ∖{(0,0),(p,p)} has two preimages, while
T^{-1}_{+,d}(0,0)=T^{-1}_{-,d}(0,0)=(0,0) and T^{-1}_{+,d}(p,p)=T^{-1}_{-,d}(p,p)=(-p,p).
See Figure <ref>. Accordingly, the curves Γ_± join the points (0,0) and (-p,p), they are mapped bijectively onto Γ by T_d, and they determine the boundary of the basin of attraction of the origin. That is,
𝒜_d(0) = Ω_+ ∪ Ω_- ∪ Γ_+ ∪ Γ_-.
In Figure <ref> we draw Γ_- in green and Γ_+ in blue. This finishes the proof of Theorem A(a).
We can add some extra information about the geometry of W_d^s(0). See Figure <ref>.
On the one hand, direct computations from (<ref>) imply that if u=(u_1,u_2) is the tangent vector of W_d^s(0) at the point (p,p) then
DT^{-1}_{±,d}(p,p)(u) = ( [ ∞; 2u_1-u_2 ] ) ≈ ( [ 1; 0 ] ), since DT^{-1}_{±,d}(p,p) = ( [ ∞ ∞; 2 -1 ] ),
where here ∞ has to be understood as a limit.
Concretely, the tangent vector of W_d^s(0) at the point (-p,p) is horizontal.
Moreover, taking into account the symmetry
(<ref>),
the points with highest value of y in W_d^s(0) should be symmetric. Actually, they coincide with the two points q^±=(q_x^±,q_y^±) which are mapped by T_d to a point q=(q_x,q_y)∈Γ whose tangent vector has slope m=2.
§ PROOF OF THEOREM A(B): THE CASE D ODD AND A=1
For the whole section we assume that d≥ 3 is odd and a=1. The proof of Theorem A(b) is quite long and therefore we will split it into several lemmas and propositions. Roughly speaking, the strategy is as follows. First we will see that 𝒜_d(0) is open, simply connected and that [-1/2,0]×{0}⊂ 𝒜_d(0) (Proposition <ref>). Second we will show that there exists a hyperbolic two-cycle of saddle type whose unstable manifold intersects [-1/2,0]×{0}. From this we will show that the two-cycle as well as its stable manifold belong to ∂𝒜_d(0) (Proposition <ref>). And finally we will see that ∂𝒜_d(0) is unbounded (Proposition <ref>).
Let b∈(0,1/2]. We denote by Q_b⊂ℝ^2 the compact convex polygon bounded by the straight segments
A_b:={(2b^d,y)∈ℝ^2| y∈[0,2b^d] }, B_b:={(x,2b^d)∈ℝ^2 | x∈[0,2b^d]},
C_b:={(x,2x+2b^d)∈ℝ^2 | x∈[-b^d,0]}, D_b:={(-b^d,y)∈ℝ^2| y ∈ [-b^d,0] },
E_b:={(x,-b^d)∈ℝ^2| x ∈ [-b^d,0]}, F_b:={(x,1/2x-b^d)∈ℝ^2| x∈[0,2b^d]}.
We denote Q^⋆:=Q_1/2.
We have that Q^⋆⊂ 𝒜_d(0). In particular, [-1/2,0]×{0}⊂ 𝒜_d(0). Moreover, 𝒜_d(0) is open and simply connected.
From Lemma <ref> the origin is asymptotically stable and therefore 𝒜_d(0) is open (see also Figure <ref>).
The family {Q_b}_b∈ (0,1/2] is a neighbourhood basis of the origin.
We claim that T_d(Q_b)⊂ int(Q_b), b∈ (0,1/2]. See Figure <ref> for a sketch of Q_b and its image.
Assume the claim is true. This implies that Q^⋆⊂ 𝒜_d(0). Since the image of [-1/2,0]×{0} by T_d is the segment {(x,2x)∈ℝ^2 | x∈[0,(1/2)^d]}⊂ Q^⋆, we conclude that [-1/2,0]×{0}⊂ 𝒜_d(0) as desired. Moreover, since Q^⋆ is a simply connected neighbourhood of (0,0) contained in 𝒜_d(0), our proof shows again that the origin is asymptotically stable.
Finally,
𝒜_d(0) = ⋃_{k≥ 0} T_d^{-k}(int(Q^⋆)).
Since T_d is a homeomorphism, T_d^{-k}(int(Q^⋆)) is also open and simply connected, for all k.
Furthermore, since T_d^{-k-1}(int(Q^⋆)) ⊃ T_d^{-k}(int(Q^⋆)),
we conclude that 𝒜_d(0) is open and simply connected as well.
The rest of the proof is devoted to proving the claim. Hereafter we remove from the notation the dependence of the whole construction on the parameter b, unless strictly necessary. The proof consists in studying the image of each side of the boundary of Q by T_d. We will get that the image of the boundary of Q is contained in int(Q) and therefore
T_d(Q) ⊂ int(Q).
We denote by Γ_A the image of the segment A under T_d and similarly for the other pieces of the boundary.
Next, we prove that each image is contained in int(Q).
See Figure <ref>.
The image Γ_A=T_d(A). We parametrize Γ_A as follows
Γ_A= T_d(A)= {(Ψ_1(y):= y-(2b^d+y)^d,Ψ_2(y):=y-2(2b^d+y)^d) , y ∈ [0, 2b^d] }.
We check that Γ_A ⊂ Q ∩{y < x}.
The condition y < x is equivalent to
Ψ_2(y) < Ψ_1(y) for y ∈ [0, 2b^d] which is clearly true.
The condition
y> 1/2 x -b^d is equivalent to
Ψ_2(y)> 1/2Ψ_1(y) -b^d for y ∈ [0, 2b^d] which can be
written as
χ_1 (y):= 1/2 y +b^d -3/2(2b^d +y)^d >0 for y ∈ [0, 2b^d].
We have that χ_1(0)= b^d (1-3/2 2^d b^d^2-d) >0 and
χ_1(2b^d ) = 2 b^d (1-3/4 4^d b^d^2-d )>0 since b∈ (0, 1/2] and d≥ 3.
Also χ_1” (y)= -d(d-1)3/2(2b^d +y)^d-2 <0, therefore
χ_1 (y)>0.
Finally, Ψ_1(y) < y ≤ 2b^d and
Ψ_2(y) ≥ -b^d. The first claim is immediate. For the second we
consider the auxiliary function
χ_2(y):= y +b^d-2(2b^d+y)^d.
We have χ_2(0) = b^d(1-2^d+1 b^d^2-d ) >0 and
χ_2(2b^d) = 3b^d(1-1/3 2^2d+1 b^d^2-d ) >0 since d≥ 3. Moreover,
χ_2” (y) = -2d(d-1)(2b^d +y)^d-2 <0 and hence
χ_2 (y) >0.
The image Γ_B=T_d(B). We parametrize Γ_B by x:
Γ_B= T_d(B)= {(Ψ_1(x):= 2b^d-(2b^d+x)^d, Ψ_2(x):=
2b^d-2(2b^d+x)^d) , x ∈ [0, 2b^d] }.
It is immediate to see that
Ψ_1'(x)=- d(2b^d+x)^d-1<0 and Ψ_2'(x)=- 2d(2b^d+x)^d-1<0.
Therefore,
Γ_B ⊂ [Ψ_1(2b^d), Ψ_1(0)] × [Ψ_2(2b^d),Ψ_2(0)]
⊂ (0, 2b^d) × [0, 2b^d) ⊂ int(Q).
The image Γ_C=T_d(C). We parametrize Γ_C by x:
Γ_C = {(Ψ_1(x):=2x+2b^d-(3x+2b^d)^d, Ψ_2(x):=2x+2b^d-2(3x+2b^d)^d
) , x ∈ [-b^d,0] }.
Similarly as before
Ψ_1'(x) =2-3 d(3x+2b^d)^d-1>2-3d2^d-1b^d^2-d>0
and
Ψ_2'(x) =2-6 d(3x+2b^d)^d-1>2-6 d2^d-1b^d^2-d>0.
Then,
Γ_C ⊂ [Ψ_1(-b^d), Ψ_1(0)] × [Ψ_2(-b^d),Ψ_2(0)]
⊂ (0, 2b^d) × (0, 2b^d) ⊂ int(Q).
The image Γ_D=T_d(D). We parametrize Γ_D by y: Clearly,
Γ_D= {( Ψ_1(y):=y-(-b^d+y)^d, Ψ_2(y):=y-2(-b^d+y)^d ) , y∈[-b^d,0] }.
First, we check that Γ_D ⊂ [-b^d, 2b^d]× [-b^d, 2b^d].
Indeed,
Ψ'_1(y)=1-d(-b^d+y)^d-1>0, and Ψ'_2(y)=1-2d (-b^d+y)^d-1 >0 .
Then
-b^d < Ψ_1(-b^d) ≤Ψ_1(y) ≤Ψ_1(0) < 2b^d,
-b^d < Ψ_2(-b^d) ≤Ψ_2(y) ≤Ψ_2(0) < 2b^d.
The condition ψ_2(y)< 2ψ_1(y)+2b^d reads
y-2(-b^d+y)^d < 2(y-(-b^d+y)^d)+2b^d
which is satisfied if y +2b^d >0 which is the case.
The condition ψ_2(y) > 1/2ψ_1(y) -b^d reads
y-2(-b^d+y)^d > 1/2 (y-(-b^d+y)^d) -b^d
which is satisfied if
χ_3(y) := 1/2 y-3/2(-b^d+y)^d +b^d > 0.
This is indeed true because
χ_3(-b^d)=1/2 b^d + 3/2(2b^d)^d >0 and
χ_3'(y) = 1/2-3/2 d(-b^d+y)^d-1 > 0
(since d≥ 3 and b∈ (0,1/2]).
The image Γ_E=T_d(E). We parametrize Γ_E by x:
Γ_E= {( Ψ_1(x):=-b^d-(-b^d+x)^d, Ψ_2(x):=-b^d-2(-b^d+x)^d ) , x∈[-b^d,0] }.
In this case we will check that Γ_E⊂ (-b^d, 0)× (-b^d, 0) ⊂ int(Q).
Indeed, directly from the expression of the parametrization we have, Ψ_1(x) >-b^d, Ψ_2(x)>-b^d,
Ψ_1(x)< -b^d-(-2b^d)^d = -b^d(1-2^d b^d^2-d) <0 and
Ψ_2(x)< -b^d(1-2^d+1 b^d^2-d) <0.
The image Γ_F=T_d(F). We parametrize Γ_F by x:
Γ_F= {( Ψ_1(x):=1/2x-b^d-(3/2x-b^d)^d,
Ψ_2(x):=
1/2x-b^d-2(3/2x-b^d)^d) , x∈[0,2b^d] }.
In this case we will also check that
Γ_F⊂ (-b^d, 0)× (-b^d, 0) ⊂ int(Q).
We have that
Ψ_1 (0) = -b^d -(-b^d )^d > -b^d, Ψ_1 (2b^d ) = -(2b^d )^d<0 .
Moreover, since
Ψ'_1 (x) = 1/2-3/2 d(3/2x-b^d)^d-1
which is positive because
(3/2)d(3/2 x-b^d)^{d-1} ≤ (3/2)d 2^{d-1} b^{d(d-1)} ≤ (3/2)d 2^{-d^2+2d-1} ≤ 9/32 < 1/2,
we get Ψ_1 (x)∈ (-b^d ,0).
Concerning Ψ_2,
Ψ_2(0) =-b^d+2(b^d)^d >-b^d and
Ψ_2(2b^d)= -2 (2b^d)^d<0. However,
it is not (always) monotone. Depending on b and d it may have a maximum at some x_c∈ (0,b^d). The value of x_c is obtained from the condition Ψ'_2(x_c)=0. It is the positive solution of
(3/2 x_c - b^d)^{d-1} = 1/(6d).
In case x_c belongs to the interval (0, 2b^d)
we have that
Ψ_2(x_c) =1/2 x_c-b^d-(3/2x_c-b^d)^d
= 1/2 x_c-b^d-(1/6d)^d/(d-1)< -(1/6d)^d/(d-1) <0.
Thus, Ψ_2(x) <0 for x∈ [0,2b^d].
This finishes the proof.
Following the strategy for the proof of Theorem A(b) described at the beginning of the section, we start by checking that {p_0=(0,1), p_1=(0,-1)} forms a hyperbolic two-cycle.
Since DT_d(p_0) = DT_d(p_1), the chain rule implies that

DT_d^2(p_0)=DT_d^2(p_1)= DT_d(p_0) DT_d(p_1) = ( [ 3d^2-2d   3d^2-4d+1;  6d^2-2d   6d^2-6d+1 ] ).
A direct computation shows that the eigenvalues and eigenvectors of DT_d^2(p_j), j=0,1, are given by
λ^±_d = 1/2 ( 9d^2- 8d + 1 ± (3d-1)√( 9 d^2 -10d + 1))
and
( 1, m^±_d ) = ( 1, 4d/(1-d ± √(9d^2-10d+1)) ),
respectively. Finally, it is straightforward to check that
both eigenvalues are strictly positive. Moreover,
λ^-_d is strictly decreasing and λ^+_d is strictly increasing, with respect to the parameter d. We also have
lim _d→∞λ^-_d=1/9 and 1/9<λ^-_d ≤λ^-_3 = 29-8√(13)≈ 0.1556
lim_d→∞λ^+_d=∞ and
λ^+_ d ≥λ^+_ 3 = 29+8√(13)≈ 57.8444.
On the other hand, m^-_d is negative and strictly increasing while m^+_d is positive and strictly decreasing (both with respect to the parameter d). We also have
lim_{d→∞} m^-_d = -1 and -1.3028 ≈ -6/(1+√13) = m^-_3 ≤ m^-_d < -1,
lim_{d→∞} m^+_d = 2 and 2 < m^+_d ≤ m^+_3 = 6/(√13-1) ≈ 2.3028.
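These formulas can be verified symbolically; a short sympy sketch computing DT_d^2 along the two-cycle, its determinant, and the discriminant producing λ^±_d:

```python
import sympy as sp

d = sp.symbols('d', positive=True)
A = sp.Matrix([[-d, 1 - d], [-2 * d, 1 - 2 * d]])   # DT_d at p_0 (= DT_d at p_1, d odd)

M = sp.expand(A * A)             # DT_d^2 along the two-cycle
print(M)                         # [[3d^2-2d, 3d^2-4d+1], [6d^2-2d, 6d^2-6d+1]]

tr, det = M.trace(), M.det()
disc = sp.factor(tr**2 - 4 * det)
# det = d^2; disc factors as (d-1)*(3*d-1)**2*(9*d-1) = (3d-1)^2 (9d^2-10d+1),
# so lambda^± = (tr ± sqrt(disc))/2, matching the displayed formula
print(sp.simplify(det), disc)
```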
Therefore, the two-cycle {p_0, p_1} is a hyperbolic saddle point. In what follows we will denote by W^s:=W^s_{p_0,p_1} and W^u:=W^u_{p_0,p_1} the (global) stable and unstable manifolds of the periodic orbit {p_0,p_1}, respectively. We split W^u=W^u_{p_0}∪ W^u_{p_1}, where W^u_{p_j} is the (global) unstable manifold of the fixed point p_j for the map T_d^2, j=0,1. Similarly, W^s=W^s_{p_0}∪ W^s_{p_1} for the stable manifold. Consequently, we remark that W^s and W^u refer to the manifolds associated to the hyperbolic periodic orbit {p_0,p_1}, and hence they are not the manifolds associated to the origin (with a similar notation) studied and considered in Sections <ref> and
<ref>.
To simplify the notation, unless strictly necessary, we drop the dependence of λ_d^± and m_d^± with respect to the parameter d. Thus, we will write
λ^± := λ^±_d and m^± := m^±_d.
We introduce m^⋆=7/2.
Next lemma gives a precise description of the geometry of W^u_p_0 that we will use to prove that {p_0,p_1}⊂∂𝒜_d(0) and finally to prove that W^s ⊂∂𝒜_d(0).
Let 𝒯 be the closed triangle determined by the vertices
p_1=(0,-1), (1/(m^+ +1),-1/(m^+ +1)) and (1/(m^⋆+1),-1/(m^⋆+1)) .
Then, there is a local piece of W^u_p_0 (attached to p_0) tangent to the line y=1+ m^+ x contained in T_d(𝒯). Moreover, if we parametrize W^u_p_0∩{ y≤ 1} as W^u_p_0:={φ(t) | t ≥ 0}, with φ(0)=(0,1) and φ(t)∈ int(T_d(𝒯)) for t∈(0,t_0) and φ(t_0) ∈∂ T_d(𝒯), then
φ(t_0) ∈∂ T_d(𝒯) ∩{y=x} = {(s,s)| -1/(m^+ +1)≤ s ≤ -1/(m^⋆ +1)}.
See Figure <ref> (right).
The triangle 𝒯 can also be represented as
𝒯 = { (t,-1+mt) | t ∈ [0,1/(m+1)], m ∈ [ m^+ ,m^⋆] }.
The proof of this lemma will follow from an accurate description of the sets T_d(𝒯) and T_d^-1(𝒯), their relative position and geometry in the plane, and the behaviour of the map T_d^-2:T_d(𝒯) → T_d^-1(𝒯).
The shape of T_d(𝒯).
We consider the decomposition of 𝒯 into the segments
ℓ_m={(t,-1+mt), t ∈ [0,1/(m+1)]}, with m ∈ [ m^+,m^⋆].
If we write γ(t):=γ_m(t)=T_d(ℓ_m):=(x_m(t),y_m(t))=:(x(t),y(t)) we have
x(t) = mt -1 - ( (m+1)t -1)^d and y(t) = mt -1 - 2( (m+1)t -1)^d.
Thus, the first derivatives of x(t) and y(t) are given by
x'(t) = m- d(m+1) ( (m+1)t -1)^d-1 and y'(t) = m- 2d(m+1)( (m+1)t -1)^d-1.
Easy computations show that x^'(t) and y^'(t) vanish at the points
r^± = 1/m+1[ 1 ±(m/d(m+1))^1/(d-1)] and s^± = 1/m+1[ 1 ±(m/2d(m+1))^1/(d-1)],
respectively. Moreover,
0 < r^- < s^- < 1/(m+1) < s^+ < r^+ and x(1/(m+1))=y(1/(m+1))=-1/(m+1),
where t=1/(m+1) corresponds to the common maximum of x'(t) and y'(t). See Figure <ref> (left). In summary, the components x(t) and y(t) of the curve γ(t) are polynomial functions in t, having a unique minimum in the interval [0,1/(m+1)] located at t=r^- and t=s^-, respectively, and sharing the same negative value, -1/(m+1), at t=1/(m+1). See the middle picture in Figure
<ref>.
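For concreteness, a small numerical check (Python) of this ordering and of the common endpoint value, for a few sample values of d and m in the admissible range:

import numpy as np

# Sanity check of 0 < r^- < s^- < 1/(m+1) < s^+ < r^+ and of the common
# endpoint value, for sample parameters m in [m^+, m^*], m^* = 7/2.
for d in (3, 5):
    for m in (2.31, 3.0, 3.5):
        rm = (1/(m+1))*(1 - (m/(d*(m+1)))**(1/(d-1)))
        rp = (1/(m+1))*(1 + (m/(d*(m+1)))**(1/(d-1)))
        sm = (1/(m+1))*(1 - (m/(2*d*(m+1)))**(1/(d-1)))
        sp = (1/(m+1))*(1 + (m/(2*d*(m+1)))**(1/(d-1)))
        assert 0 < rm < sm < 1/(m+1) < sp < rp
        x = lambda t: m*t - 1 - ((m+1)*t - 1)**d
        y = lambda t: m*t - 1 - 2*((m+1)*t - 1)**d
        assert np.isclose(x(1/(m+1)), -1/(m+1)) and np.isclose(y(1/(m+1)), -1/(m+1))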
To conclude the description of the shape of γ, see Figure <ref> (right). Let us prove that its image can be represented as the union of two graphs with respect to the variable x (i.e., it admits a piecewise parametrization of graphs with respect to x). We write
γ(x)= (x,γ^(2)(x)) := (x,y(t(x))),
where t(x) denotes one of the two branches of the inverse of the function x(t).
Some direct computations show that
dγ^(2)/dx =dy/dt(dx/dt)^-1,
d^2γ^(2)/dx^2 =(dx/dt)^-3(d^2y/dt^2dx/dt-dy/dtd^2x/dt^2)=-(dx/dt)^-3 d(d-1)m(m+1)^2((m+1)t-1)^{d-2},
everything evaluated at the corresponding branch of t=t(x).
We denote γ^(2)_ u:=γ^(2)_ u,m and γ^(2)_ℓ:= γ^(2)_ℓ,m the functions corresponding to the upper (concave) and lower (convex) graphs, respectively.
From the previous discussion, dx/dt has a unique zero at t=r^- and it is monotone in the whole interval [0,1/(m+1)], see Figure <ref> (left).
For the upper branch, corresponding to 0≤ t <r^-, we have dx/dt<0 and therefore γ^(2)_ u(x) is increasing and concave (see (<ref>)) and for the lower branch, r^-< t ≤ 1/(m+1), we have dx/dt>0 and therefore γ^(2)_ℓ(x) is convex having a minimum at x(s^-) (see again (<ref>)). See Figure <ref> (right), where we have drawn (qualitatively) the curve γ for the values m=m^+ and m=m^⋆. We remark that γ^(2)_ u,m^+ (x) is tangent at p_0 to the line y=m^+ x+1 since it is the image of the side of 𝒯 tangent to W^u_p_1.
Since T_d sends the line {y=-x} to {y=x} the images of all the curves γ_m end up at {y=x}.
All together this determines the shape of T_d(𝒯).
Moreover, in the light of the above arguments we have that
∂ T_d(𝒯)=γ^(2)_ u,m^+∪γ^(2)_ℓ,m^+∪γ^(2)_ u,m^⋆∪γ^(2)_ℓ,m^⋆∪{(x,x) | -1/(m^+ +1) < x < -1/(m^⋆+1)}.
Hereafter we will refer to
γ^(2)_ u,m^+∪γ^(2)_ℓ,m^+ and γ^(2)_ u,m^⋆∪γ^(2)_ℓ,m^⋆
as the left and right boundaries of T_d(𝒯), respectively. See Figure <ref> (right) and Figure <ref>.
The shape of T_d^-1(𝒯). We consider the same decomposition of 𝒯 into the segments ℓ_m as in (<ref>). We denote Γ(t):=Γ_m(t)=T_d^-1(ℓ_m):=(α_m(t),β_m(t))=:(α(t),β(t)). We have
α(t) = (m-2)t -1 + ( (1-m)t +1)^1/d , β(t) = (2-m)t +1,
t∈ [0, 1/(m+1)].
Therefore, the first and second derivatives of α(t) and β(t) are given by
α'(t) = m-2 + 1-m/d ((1-m)t +1)^(1-d)/d, β'(t) = 2-m,
α”(t) = 1-d/d^2 (1-m)^2 ((1-m)t +1)^(1-2d)/d<0, β”(t)=0 .
Clearly β'(t) < 0 since m ≥ m^+ > 2. Next, we focus the attention on α^'(t). The line t=1/(m-1) is a vertical asymptote (outside the domain (0,1/(m+1))) and simple computations show that α'(t)=0 if and only if t=t^± where
t^± :=t^±_m = 1/m-1±( m-1/d^d(m-2)^d)^1/(d-1).
Some further computations show that
t^-_{m^+} <0, t^-_{m^⋆} >1/(m^⋆+1) and t^+_m > 1/(m-1), ∀ m∈ [m^+,m^⋆].
Since α^'_m^+(0)
= 4(d-1 ) (1-d+√(1-10d+9d^2))^-1 +1/d -2<0 and α^'_m^⋆(0)=1/2 (3-5/d) >0, it follows from the previous arguments and (<ref>) that α_m^+(t) is monotonically decreasing and α_m^⋆(t) is monotonically increasing in the considered domain. Consequently, Γ_m^+ and
Γ_m^⋆ can be expressed as graphs of monotone functions of the form
Γ(x)=(x,Γ^(2)(x)):= (x,β(t(x))),
where t(x):=α^{-1}(x) denotes the inverse of α(t), for m=m^+ and m=m^⋆, respectively.
We have
dΓ^(2)/dx =dβ/dt(dα/dt)^-1,
d^2Γ^(2)/dx^2 =-(dα/dt)^-3dβ/dtd^2α/dt^2=-(dα/dt)^-3(
d-1/d^2(m-2)(m-1)^2/( (1-m) t+1 )^(2d-1)/d),
everything evaluated at t=t(x).
Indeed, taking into account (<ref>), when m=m^+, Γ^(2) is increasing and convex, while when m=m^⋆, Γ^(2) is decreasing and concave. From (<ref>) we conclude that
β(t) > β (1/(m+1)) = 3/(m+1)> 3/(m^⋆+1)=2/3,
and then the preimage T^-1_d(𝒯) is above the line {y =2/3}. Finally we notice that the image by T^-1_d of the segment 𝒯∩{y=-x} is contained in (the graph of)
x=ϕ(y):=-y + (2/3 y )^1/d.
Then
ϕ'(y)=-1 +2/3d(2/3 y)^(1-d)/d < -1+3/2d< -1/2,
and the function y = ϕ^-1 (x) is decreasing in the corresponding domain.
In particular the curve Γ_m^+(x) belongs to the second quadrant, while Γ_m^⋆(x) belongs to the first one. In Figure <ref> we display T_d(𝒯) and T_d^-1(𝒯) and we can see the relative position of these two sets and the initial triangle 𝒯. We emphasize that, from the arguments above, γ_m^+(x) and Γ_m^+(x) are both tangent to the line y=m^+ x+1, but, using the convexity properties, γ_m^+(x) is below this line while Γ_m^+(x) is above it, hence their relative position illustrated in Figure <ref> is the correct one.
Consider now T_d^-2:T_d(𝒯)→ T_d^-1(𝒯). From the stable/unstable manifold theorem and the relative position and geometry of the sets T_d(𝒯) and T_d^-1(𝒯), we can conclude that, locally, W^u_p_0 exists, it is tangent to y=m^+x+1, and it is contained in T_d(𝒯). In particular we also conclude that there is a local piece of
W^u_p_1 attached to p_1 belonging to 𝒯.
We claim that W^u_p_0 may only leave T_d(𝒯) through the piece of the boundary given by ∂ T_d(𝒯) ∩{y=x} (later we will see that W^u_p_0 does leave T_d(𝒯) through this boundary). To check the claim we first observe that W^u_p_0∩{y≤ 1} can be parametrized as W^u_p_0:={φ(t)| t ≥ 0 }, with φ(0)=(0,1) (see <cit.>). Second, we suppose it leaves T_d(𝒯) either through the left or the right boundaries of T_d(𝒯), and we get a contradiction.
Let p=φ(t_0) for some t_0>0 be such that {φ(t), t∈ (0,t_0)}⊂ int(T_d(𝒯)) and p∈∂ T_d(𝒯) ∖{x=y} (that is, it leaves T_d(𝒯) through the left or right boundaries of it). Consider q:=T_d^-2(p). Since W^u_p_0 is invariant by T_d^-2 we have that q=φ(t_q) for some t_q∈ (0,t_0). However, we also have
q∈∂(T_d^-1(𝒯))∖ T_d^-1(∂𝒯∩{y=-x}),
which provides a contradiction (see Figure <ref>).
Let ℰ be the closed triangle determined by the vertices τ_0=(0,1), τ_1=(-1/2,0) and τ_2=(-1/3,0).
The next two lemmas refer to the set T_d(𝒯)∩{y≥ 0}. First we show that this set belongs to the triangular region ℰ and, second, we give relevant information on the dynamics of T_d^-2|_ℰ (and therefore on T_d(𝒯)∩{y≥ 0}). All together this implies two main properties of W^u_p_0. On the one hand, we will prove that W^u_p_0∩ ([-1/2,0]×{0}) ≠ ∅ and, on the other hand, we will prove that W^u_p_0∩{y=x} ≠ ∅ (such intersection happens in the third quadrant). See Figure <ref>.
We have
T_d(𝒯)∩{y≥ 0}⊂ℰ.
We will check that the left and the right boundaries of
T_d(𝒯)∩{y≥ 0}, given by pieces of the curves γ_m^+ and
γ_m^⋆, respectively, are contained in ℰ. By the discussion we did when analyzing the shape of T_d(𝒯) we know that both curves can be written as graphs of concave functions that only intersect at the point p_0=(0,1) (see Lemma <ref>). Moreover the slope s_m^+ of γ_m^+ at t=0 is m^+ >2 (the slope of the left side of ℰ). Indeed,
2<s_m^+≤ 6/(√(13)-1) ≈ 2.3028 .
All together implies that the statement of the lemma follows from proving that the (only) intersection of γ_m^⋆ with y=0 happens at a point x<-1/3.
Consider the second component y(t) of γ_m^⋆ defined for
t∈ [0,1/(m^⋆ + 1)].
Since y(0)=1, y(1/(m^⋆ + 1))=
-1/(m^⋆ + 1)<0 and y”(t) >0 there exists a unique t_1∈ [0,1/(m^⋆ + 1)] such that
y(t_1)=0. See Figure <ref> (center).
When d=3 we can localize t_1 with some precision. Let
t_1^-= 1/18 and t_1^+= 1/16. Both values belong to [0, r^-] where the functions x(t) and y(t) are decreasing. See (<ref>). We have
y(t_1^-)= -29/36+ 2 (27/36)^3 > 1/30 and
y(t_1^+)= -25/32+ 2 (23/32)^3 < -1/30.
This means that t_1∈ (1/18, 1/16) and that, since γ_m^⋆ is the graph of a concave function,
x(t_1) < x(t_1^-) = -29/36+ (27/36)^3 < -1/3.
By concavity we get that γ_m^⋆∩{y≥ 0}⊂ℰ.
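These three elementary estimates can be confirmed directly (Python, with m = m^⋆ = 7/2 and d = 3):

# Direct check of the d = 3 estimates above, using x(t), y(t) with m = m^*.
m, d = 3.5, 3
x = lambda t: m*t - 1 - ((m+1)*t - 1)**d
y = lambda t: m*t - 1 - 2*((m+1)*t - 1)**d
assert y(1/18) > 1/30       # = -29/36 + 2(27/36)^3
assert y(1/16) < -1/30      # = -25/32 + 2(23/32)^3
assert x(1/18) < -1/3       # = -29/36 + (27/36)^3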
To deal with a general odd value of d≥ 3, we will see that the intersection point (x(t_1), 0), viewed as a function of d, is decreasing; since x(t_1) < -1/3 already holds for d=3, it follows that x(t_1) < -1/3 for all odd d≥ 3.
Indeed, we compute the derivative of x(t_1) with respect to d.
We write y=y(t,d).
Let t_1(d) be the parameter such that y(t_1(d),d)=0.
We want to compute (x(t_1(d),d))', where prime stands for the derivative with respect to d.
Differentiating implicitly (and simplifying the notation) we have
(t_1(d))' = -∂ y/∂ d (t_1(d),d) / ∂ y/∂ t (t_1(d),d)=:-(∂ y/∂ d/∂ y/∂ t)|_(t_1(d),d)=:-(∂ y/∂ d/∂ y/∂ t).
Then,
(x(t_1(d),d))' = ∂ x/∂ t× (t_1(d))'
+ ∂ x/∂ d
=
∂ x/∂ t×(-
∂ y/∂ d / ∂ y/∂ t) + ∂ x/∂ d
=
(1/ ∂ y/∂ t) ×(
∂ x/∂ d×∂ y/∂ t
-∂ x/∂ t×∂ y/∂ d).
Taking the corresponding derivatives from equation (<ref>) and simplifying we get
(x(t_1(d),d))' =-(1/ ∂ y/∂ t)
m^⋆(1-(m^⋆+1)t)^d log(1-(m^⋆+1)t) <0.
Indeed, we are evaluating the right side of the above equation at the point (t_1(d),d) with 0<t_1(d)<r^-(d). Thus, we have
(1/ ∂ y/∂ t)|_(t_1(d),d) < 0 and 0<1-(m^⋆+1)t <1.
The next lemma tells us that, while the iterates by T_d^-2 remain
in ℰ, the sequence of their second coordinates is strictly increasing. See Figure <ref>.
Let (f(x,y),g(x,y)):=T_d^-2(x,y). If (x,y)∈ℰ then g(x,y) ≥ y and the equality only holds when (x,y)=(0,1).
From (<ref>) we have that g(x,y)=3y-6x+2(x-y)^1/d,
and then
g(x,y)≥ y in ℰ if and only if
G(x,y):=2y-6x+2(x-y)^1/d>0, ∀ (x,y)∈ℰ∖{(0,1)}.
To prove this inequality we will show that G restricted to ℰ has a global minimum
G=0 at (0,1), which is only attained at (0,1).
A direct computation shows that the partial derivatives of G cannot vanish simultaneously, therefore the minimum has to be attained at the boundary of ℰ.
It is clear that the restriction of the function G on each of the three segments of ∂ℰ is given by
χ_1(x):=
G(x,0)=2(-3x+x^1/d), x∈ [-1/2,-1/3],
χ_2(x):= G(x,2x+1)=2(1-x-(1+x)^1/d), x∈ [-1/2,0],
χ_3(x):= G(x,3x+1)=2(1-(1+2x)^1/d), x∈ [-1/3,0].
Using elementary methods we can check that indeed χ_1(x)>0, χ_2(x)≥ 0 and
χ_3(x)≥ 0 in the indicated intervals and that χ_2(x)= 0,
χ_3(x)= 0 only hold when x=0.
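The positivity of G on ℰ can also be confirmed numerically on a barycentric grid of the triangle; note that (x-y)^{1/d} has to be evaluated as the real (signed) d-th root since x-y < 0 on ℰ:

import numpy as np

# Grid check that G(x,y) = 2y - 6x + 2(x-y)^{1/d} >= 0 on the triangle E with
# vertices (0,1), (-1/2,0), (-1/3,0); the d-th root is taken as the real root.
def root(z, d):
    return np.sign(z) * np.abs(z)**(1.0/d)

for d in (3, 5, 7):
    for l1 in np.linspace(0, 1, 151):
        for l2 in np.linspace(0, 1 - l1, 40):
            xx = l1*(-1/2) + l2*(-1/3)          # barycentric sample of E
            yy = 1 - l1 - l2
            assert 2*yy - 6*xx + 2*root(xx - yy, d) > -1e-12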
The unstable manifold W^u_p_0 crosses the interval I_0:=T_d(𝒯) ∩{y=x} at some point (p,p) such that
-1/(m^+ +1) < p < -1/(m^⋆+1).
Moreover, the piece of W^u_p_0 from p_0=(0,1) to (p,p) is contained in T_d(𝒯). We also have that this piece of W^u_p_0 cuts the segment
(-1/2, -1/3) ×{0}⊂ℝ^2.
We first prove the existence of the point ( p, p).
A completely analogous procedure will be used in the proof of Proposition <ref> and in Section <ref>.
Let
I_0:= T_d(𝒯) ∩{y=x} = {(s,s)| -1/(m^+ +1)≤ s ≤ -1/(m^⋆ +1)}.
The image T^-2_d (I_0) is a curve, which is a piece of the boundary of
T_d^-1(𝒯) that, by previous arguments, has to cross the left and right boundaries of T_d(𝒯). Actually, it can be parametrized as
s ↦ (-3s+(2s)^1/d, 3s). In the study of the shape of T_d^-1(𝒯) we have seen that T_d^-1(𝒯) ⊂{y> 3/(m^⋆+1) = 2/3}.
We define
I_1 = T_d^2(T_d^-2(I_0)∩ T_d(𝒯))⊂ I_0
and, in general,
I_n = T_d^2n(T_d^-2n(I_n-1)∩ T_d(𝒯)), n≥ 1.
It is clear that
I_n ⊂ I_n-1 for all n≥ 1.
Then, I_∞=∩_n≥ 0 I_n is compact and contains the points in
T_d(𝒯) such that all their negative iterates by T_d^2 are in
T_d(𝒯).
Moreover, by Lemma <ref>, the sequence of the second components of these iterates is increasing and has to converge to 1.
Then, those points must belong to W^u_p_0 and therefore there exists (p,p) ∈ I_0 such that
(p,p) ∈ W^u_p_0∩{y=x}⊂ T_d(𝒯) ∩{y=x}; in particular this intersection is non-empty.
From Lemma <ref> the piece of W^u_p_0 from p_0=(0,1) to (p,p) must be contained in T_d(𝒯). Hence Lemma <ref> implies that W^u_p_0 cuts the segment
(-1/2, -1/3) ×{0}⊂ℝ^2.
The next propositions are devoted to showing the two further properties of 𝒜_d(0) claimed in Theorem A(b). First we show that the stable manifold of the periodic orbit belongs to ∂𝒜_d(0), and second we show that 𝒜_d(0) is unbounded, which follows because the stable manifold is unbounded.
Let d≥ 3 be odd. Then, W^s ⊂∂𝒜_d(0).
From Proposition <ref> we know that W^u_p_0 crosses the interval (-1/2,-1/3)×{0}⊂ℝ^2. Let (x_0,0)∈ W^u_p_0∩ (-1/2,-1/3)×{0}
be the first intersection point of W^u_p_0 with the segment. From Proposition <ref> we also have that [x_0,0]×{0}⊂𝒜_d(0). We have (recall that T^-1_d(p_0)=p_1 and that when d is odd T_d is one-to-one)
⋃_n=0^∞ T_d^-n([x_0,0]×{0}) ⊂𝒜_d(0) and p_j ∈ Acc({T_d^-n((x_0,0))}_n≥ 0), j=0,1,
where Acc(X) denotes the set of accumulation points of X. Since, of course, {p_0,p_1}∉𝒜_d(0) we conclude that p_j∈∂𝒜_d(0), j=0,1. Now, let q be any point in W^s_p_0 and U a small disc centered at q and let Σ⊂ U be a transversal segment to W^s_p_0 through q. On the one hand, W^s_p_0∩ U is not contained in 𝒜_d(0). On the other hand, by the λ-Lemma <cit.>, the iterates by T_d^2 of the points in Σ (close enough to W^s_p_0) accumulate on W^u_p_0. Therefore, we have
(x_0,0) ∈ Acc({T_d^n(Σ)}_n≥ 0).
Since (x_0,0)∈𝒜_d(0) and 𝒜_d(0) is open we conclude that U contains points of 𝒜_d(0). If q∈
W^s_p_1 then T_d(q)∈ W^s_p_0 and the conclusion is the same. All together this implies that q∈∂𝒜_d(0), as desired.
𝒜_d(0) is unbounded.
From the previous proposition it is enough to see that the stable manifold W^s of the hyperbolic two-cycle {p_0,p_1} is unbounded. We start by introducing some notation. See Figure <ref>. Let Q_2^⋆ and Q_4^⋆ be the closed unbounded subsets of the second and fourth quadrant defined as follows.
Q_2^⋆:={(x,y)| x≤ 0, y≥ 1} Q_4^⋆:={(x,y)| x≥ 0, y≤ -1}.
Next we split the above sets into three pieces. Concretely,
Q_2^⋆ =⋃_j=1^3 E_j Q_4^⋆ =⋃_j=1^3 D_j ,
where
E_1={x≤(y+1/2)^1/d-y | y ≥ 1}, D_1={0≤ x≤ y^1/d-y | y ≤ -1},
E_2={(y+1/2)^1/d-y ≤ x≤ y^1/d-y | y ≥ 1}, D_2={y^1/d-y ≤ x≤(y-1/2)^1/d-y | y ≤ -1},
E_3={y^1/d-y≤ x≤ 0 | y ≥ 1}, D_3={x≥(y-1/2)^1/d-y | y ≤ -1}.
We denote by {𝒥_ℓ,ℐ_ℓ} with ℓ=1,2 the straight boundaries of the above sets. That is,
𝒥_1={(x,1)| x≤ 0 }, ℐ_1={(x,-1)| x≥ 0 },
𝒥_2={(0,y)| y≥ 1 }, ℐ_2={(0,y)| y≤ -1 }.
Finally, we denote by {γ^±,σ^±} the other boundaries of the sets E_j and D_j. That is,
γ^+ = E_2∩ E_3 = {(y^1/d-y,y) | y≥ 1}={y=(x+y)^d| x≤ 0, y≥ 1},
γ^- =E_1∩ E_2= {((y+1/2)^1/d-y,y)| y≥ 1}={y=-1+2(x+y)^d| x≤ 0, y≥ 1},
σ^- =D_1∩ D_2 ={(y^1/d-y,y)| y≤ -1}= {y=(x+y)^d| x≥ 0, y≤ -1},
σ^+ =D_2∩ D_3={((y-1/2)^1/d-y,y)| y≤ -1} = {y=1+2(x+y)^d| x≥ 0, y≤ -1}.
See Figure <ref> for a qualitative picture and the relative position of all curves and sets. One can check that by construction we have
T_d(γ^-)=ℐ_1, T_d(γ^+)=ℐ_2, T_d(σ^-)=𝒥_2 and T_d(σ^+)=𝒥_1.
Consequently,
T_d(D_2)=⋃_j=1^3 E_j and T_d(E_2)=⋃_j=1^3 D_j.
We also notice that the curves γ^±(y) and σ^±(y) are graphs of monotonically decreasing functions of y. Indeed, for instance, if we write
γ^±={(γ_1^± (y),y)| y≥ 1} then
dγ_1^+/dy(y)=(1/d)(1/y)^{(d-1)/d}-1≤ 1/d-1 <0 and dγ_1^-/dy(y)=(1/2d)(2/(y+1))^{(d-1)/d}-1≤ 1/(2d)-1 <0.
Let
Ω:=T_d^-1(E_2).
According to the previous discussion it is clear that Ω⊂ D_2 (remember that ∂ E_2 = γ^+∪γ^-). We also claim that ∂Ω is given by two curves contained in D_2 which can be written as graphs of monotone functions (of y as well as of x). Of course ∂Ω = T_d^-1(γ^-) ∪ T_d^-1(γ^+). Using (<ref>) and (<ref>) we have
T_d^-1(γ^-(y))=
([ ξ_1(y); ξ_2(y) ])
:=
([ 3y-2^d-1/d(y+1)^1/d+[2^-1/d(y+1)^1/d-2y]^1/d; -3y+2^d-1/d(y+1)^1/d ]), y≥ 1.
Thus, we have
dξ_1/dy(y) =3-1/d(y+1/2)^1-d/d+1/d[ (y+1/2)^1/d-2y]^1-d/d(1/2d(y+1/2)^1-d/d-2)
≥ 3 - 1/d > 0
and
dξ_2/dy(y)=-3+1/d(y+1/2)^1-d/d≤ -3 + 1/d < 0.
Therefore, using the same formulas as the ones in (<ref>), T_d^-1(γ^-) can be written as a graph of a monotonically decreasing function (with respect to y as well as x). Similar computations lead to the same conclusion for T_d^-1(γ^+).
Claim 1. Let λ:= d^2/(1-2d)^2. Let x_0>0 and (x_0,y_0) ∈Ω. We denote (x_2,y_2) =T_d^2 (x_0,y_0). Then,
0≤ x_2 < λ x_0.
Given x_0>0, let
L := {x=x_0}∩ D_2
= { (x_0,t)| t_1= t_1(x_0) ≤ t ≤ t_2(x_0) = t_2 },
where (x_0,t_1)∈σ ^- and (x_0,t_2)∈σ ^+ and hence
t_1=(x_0+t_1)^d and t_2=2(x_0+t_2)^d+1.
The image of L by T_d can be represented by
Γ_1 (t) = ([ α(t); β(t) ])
:=T_d([ x_0; t ]) =
([ t-(x_0+t)^d; t-2(x_0+t)^d ]), t∈[t_1,t_2].
Since d-1 is even and, for (x,y) ∈ D_2, we have y>2(x+y)^d+1 and x+y<0, it follows that
α^'(t)=1-d(x_0+t)^d-1≤ 1-d(t-1/2)^(d-1)/d≤ 1-d<0.
This means that α(t) is strictly decreasing in t with
α(t_2)=(t_2+1)/2 ≤α(t)≤α(t_1)=0.
Similarly we have β^'(t)=1-2d(x_0+t)^d-1≤ 1-2d<0 and
β(t_2)=1 ≤β(t)≤β(t_1)=-t_1.
This implies that Γ_1(t) can be seen as the graph of an increasing function joining 𝒥_1 with 𝒥_2. Therefore, it crosses transversally the boundary of E_2. Let
(x^+,y^+)∈γ^+ and (x^-,y^-)∈γ^- be the corresponding intersections.
From (<ref>) and (<ref>) we have
(t_2+1)/2 ≤ x^- ≤ x for all (x,y) ∈Γ_1 ∩ E_2.
Now, given ξ∈[x^-,0] we consider the new vertical segment in E_2,
L_2:={x=ξ}∩ E_2.
By its definition T_d(L_2) is a curve joining
ℐ_1 and ℐ_2, parametrized by
Γ_2(s)=([ α_2(s); β_2(s) ])
:=
T_d([ ξ; s ])
=
([ s-(ξ +s)^d; s-2(ξ +s)^d ]), s∈[s_1,s_2],
where s_1=s_1(ξ), s_2=s_2(ξ) and
s_1=2(ξ +s_1)^d-1>1 and s_2=(ξ +s_2)^d.
A similar computation to the one in (<ref>) gives that
β_2^'(s) < α_2^'(s)≤ 1-d<0 and then
α_2(s_2)
≤α_2(s)≤α_2(s_1)
=s_1-1/2, s∈[s_1,s_2].
The claim will follow from α_2(s_1(ξ))=s_1(ξ)-1/2≤λ x_0
for all ξ∈[x^-,0].
Clearly σ^+ is above its tangent line at the point (0,-1) which is given by y=2d/1-2dx-1 (to get the slope of the line we can use implicit derivation to
y=1+2(x+y)^d at (x,y)=(0,-1)). As a consequence,
since (x_0,t_2)∈σ^+,
t_2+1>2d/1-2dx_0.
Similarly, γ^- is below its tangent line at the point (0,1) given by y=2d/1-2dx+1. Consequently, since (ξ,s_1(ξ))∈γ^-,
s_1(ξ)-1<2d/1-2dξ.
Using (<ref>), (<ref>) and (<ref>) we have that
α_2(s_1(ξ))=s_1(ξ)-1/2<d/1-2dξ≤d/1-2dx^-
≤d/1-2dt_2+1/2≤(d/1-2d)^2x_0=λ x_0.
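Note that the contraction factor λ of Claim 1 satisfies λ < 1 for every d ≥ 3 (indeed λ_3 = 9/25 and λ_d → 1/4 as d → ∞), which is what makes the claim a genuine contraction of the first coordinate; a one-line check:

# lambda_d = d^2/(1-2d)^2 < 1 for all d >= 3; e.g. lambda_3 = 9/25.
assert all(d**2 / (1 - 2*d)**2 < 1 for d in range(3, 200))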
Claim 2. Let x=x_0>0. Then, there exists a point (x_0,y)∈Ω such that (x_0,y)∈ W^s. In particular, from Proposition <ref>, we conclude that ∂𝒜_d(0) is unbounded.
Since (0,-1) is hyperbolic we already know that W^s_p_1 exists and consists of the points such that their ω-limit with respect to T_d^2 is (0,-1).
We recall that Ω⊂ T_d^2(Ω) = Q^⋆ _4.
Thus,
T_d^-2(Ω) ⊂Ω.
Let K_0:={x=x_0}∩Ω. Clearly, T_d(K_0) is a curve connecting γ^- and γ^+ and so, by construction, T_d^2(K_0) is a curve connecting ℐ_1 and ℐ_2 and crossing ∂Ω at exactly two points (remember that T_d is one-to-one): one in T_d^-1(γ^-) and the other in T_d^-1(γ^+).
We write K_1=T_d^-2(T_d^2(K_0)∩Ω)⊂ K_0⊂Ω.
Repeating this procedure we can define recursively
K_j=T_d^-2j(T_d^2j (K_j-1)∩Ω)⊂ K_j-1, j≥ 1.
Therefore, {K_j}_j≥ 0 is a sequence of nested compact sets and therefore
⋂_j≥ 0 K_j ∅.
Now, we check that if (x_0,y_0) ∈⋂_j≥ 0 K_j, then
(x_0,y_0) ∈ W^s.
Indeed, let (x_0,y_0) ∈⋂_j≥ 0 K_j. By the definition of K_j,
(x_2j,y_2j)= T_d^2j(x_0,y_0) ∈ T_d^2j (K_j-1)∩Ω⊂ T_d^2j (K_0)∩Ω
, j≥ 0.
We can prove by induction that x_2j < λ^j x_0 for all j≥ 1. Since
(x_0,y_0)∈ K_0∩Ω, by Claim 1, x_2 < λ x_0. Assuming the statement is true for j-1, since
(x_2j-2,y_2j-2) ∈ T_d^2j-2 (K_0) ∩Ω, then
(x_2j,y_2j) = T_d^2(x_2j-2,y_2j-2) satisfies
x_2j < λ x_2j-2, and hence x_2j < λ^j x_0. We conclude that x_2j→ 0. Since
(x_2j,y_2j) ∈Ω we also have y_2j→ -1.
Since x_0 can be taken arbitrarily large, we obtain that the invariant manifold is unbounded.
§ PROOF OF THEOREM A(C): THE CASE D ODD AND A=-1
According to Remark <ref>, under the parameter values d odd and a=-1, the dynamics on the center manifold of the origin is repelling and therefore the only points tending to the origin under iteration are the ones of the stable manifold of (0,0). Hence, it remains to show that the stable manifold is unbounded.
It follows from (<ref>) that for d odd and a=-1 the map T_d is a homeomorphism and we have
T_d(
[ x; y ]) =
(
[ y + (x+y)^d; y + 2(x+y)^d ])
and
T_d^-1(
[ x; y ]) =
(
[ -2x+y - (x-y)^1/d; 2x-y ]).
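As a quick sanity check, the following Python sketch verifies numerically that the two displayed maps are mutually inverse; the helper root implements the real d-th root of a possibly negative argument (d odd):

import numpy as np

def root(z, d):                 # real d-th root for odd d
    return np.sign(z) * np.abs(z)**(1.0/d)

def T(x, y, d):                 # T_d for d odd and a = -1, as displayed above
    s = (x + y)**d
    return y + s, y + 2*s

def Tinv(x, y, d):
    return -2*x + y - root(x - y, d), 2*x - y

rng = np.random.default_rng(0)
for _ in range(1000):
    d = int(rng.choice([3, 5, 7]))
    x, y = rng.uniform(-2, 2, size=2)
    assert np.allclose(Tinv(*T(x, y, d), d), (x, y), atol=1e-8)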
In a similar way as in the proof of Proposition <ref> we introduce a domain, which we expect to contain W^s, and we prove that contains points, arbitrarily far away, such that all their iterates are in the domain and moreover tend to the origin so that indeed it contains W^s.
We will take this domain in the fourth quadrant Q_4 := { x≥ 0, y≤ 0}. We define D^0⊂ Q_4 by the condition T_d^2(D^0) = Q_4. Since T_d^2 is a homeomorphism the boundary of D^0 is obtained by taking the preimage of the boundary of Q_4 with respect to T_d^2.
Consequently, the boundary of D^0 is the union of the images of the curves
σ^+_0(t) := (α^+_0 (t) , β^+_0 (t))
= T_d^-2(t,0) =(
6t +2t^1/d + (4t+t^1/d)^1/d, -6t -2t^1/d), t≥ 0,
σ^-_0(t) := (α^-_0 (t) , β^-_0 (t))
= T_d^-2(0,-t) =(
3t +2t^1/d + (2t+t^1/d)^1/d, -3t -2t^1/d), t≥ 0.
We have that
(α^±_0)' (t)>0 and (β^±_0)' (t) <0. Therefore, the curves σ^±_0 are graphs of well defined decreasing functions
h^±_0 = β^±_0 ∘ (α^± _0)^-1 from [0,∞) onto (-∞, 0].
By construction, the set D^1 := T_d(D^0)=T_d^-1(Q_4) is the domain limited by the curves
σ ^±_1 := T_d(σ^±_0(t) ), t≥ 0.
Concretely, these curves are
σ^+_1(t) =(α^+_1 (t) , β^+_1 (t)) =
T_d(σ^+ _0(t)) =
(
-2t -t^1/d ,2t ), t≥ 0,
σ^-_1(t) =(α^-_1 (t) , β^-_1 (t))
= T_d(σ^-_0(t)) = (
-t -t^1/d , t), t≥ 0.
Similarly, since (α^±_1)' (t)<0 and (β^±_1)' (t) >0, we have that σ^±_1 are graphs of decreasing functions
h^±_1 = β^±_1 ∘ (α^±_1)^-1 from
(-∞, 0] onto [0,∞).
Finally, D^2:= T_d(D^1) is the full closed fourth quadrant Q_4. For notational convenience we also define
σ^+_2(t) :=
T_d(σ^+ _1(t)) =
( t,0 ), t≥ 0,
σ^-_2(t)
:= T_d(σ^-_1(t)) = (
0 , -t), t≥ 0.
Since T_d is a homeomorphism, the curves σ^+_1 and σ^-_1 are the only preimages of the curves σ^+_2 and σ^-_2, respectively. They only intersect at the origin. The same happens with σ^+_0 and σ^-_0. Moreover, σ^+_1 is above σ^-_1 and σ^+_0 is above σ^-_0. Also, we will use that σ^+_1 is below {y=-x}
and σ^-_0 is above {y=-x}.
Indeed, these claims can be checked from (<ref>) and (<ref>) after some computations.
If (x_0,y_0) ∈ D^0 then (x_2,y_2):=T_d^2(x_0,y_0) ∈ Q_4 and x_2 ≤ x_0/2.
The first part of the statement follows from the previous construction.
We have to prove the inequality. We define
D^0_ρ ={(x,y)∈ D^0| x≤ρ},
D^1_ρ ={(x,y)∈ D^1| x≥ -ρ},
D^2_ρ ={(x,y)∈ D^2| x≤ρ}.
We will prove that, for any ρ>0,
T_d(D^0_ρ ) ⊂ D^1_ρ/2 and
T_d(D^1_ρ/2 ) ⊂ D^2_ρ/2 .
For the first inclusion
we consider the segments
{x=r}∩ D^0_ρ with 0≤ r≤ρ, parametrized by s ∈[s_-, s_+]⊂ [-r,0],
with s_- and s_+ such that (r,s_-) lies on σ_0^- and
(r,s_+) lies on σ_0^+. In particular, we have that there exists t_1≥ 0 such that
σ^-_0(t_1)=
(
3t_1 +2t_1^1/d + (2t_1+t_1^1/d)^1/d, -3t_1-2t_1^1/d)=(r,s_-).
The image of the segment can be represented by
τ_1(s) = ( τ_1^x(s), τ_1^y(s) )
:= T_d(r,s)
= ( s+(r+s) ^d, s+2 (r+s) ^d), s ∈[s_-, s_+].
Since d-1 is even,
( τ_1^x)'(s) and (τ_1^y)' (s) are positive.
This implies that the minimum of τ_1^x(s) is attained at the value s=s_-.
This point is sent by T_d to
(-t_1- t_1^1/d, t_1)
and we have
-t_1- t_1^1/d = 1/2 (-3t_1 - 2 t_1^1/d) + 1/2 t_1
≥1/2 s_-
≥ -1/2 r ≥ -1/2ρ.
Now let r be such that -ρ/2 ≤ r ≤ 0 and consider the image of the segment
{x= r}∩ D^1_{ρ/2}, parametrized by
s ∈[ s_-, s_+]⊂ [0, - r].
We write
τ_2( s) = ( τ_2^x( s), τ_2^y( s) ) := T_d( r, s)
= ( s+( r+ s) ^d, s+2 ( r+ s) ^d).
Since σ^+_1 is below {y=-x}, r+ s<0.
In this case we also have that
(τ_2^x) ' ( s) and (τ_2^y) ' ( s) are positive. Then, a bound of the maximum of τ_2^x( s) is obtained from
τ_2^x( s) ≤τ_2^x(- r)= - r ≤ρ/2.
This implies T_d(D^1_ρ/2 ) ⊂ D^2_ρ/2.
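The lemma can also be illustrated numerically: since T_d^2(D^0)=Q_4, pulling random points of Q_4 back by T_d^-2 produces points of D^0, for which the claimed halving of the first coordinate can be tested (this reuses Tinv and root from the sketch above):

# Monte-Carlo illustration: (x_0,y_0) = T_d^{-2}(x_2,y_2) with (x_2,y_2) in
# Q_4 lies in D^0, and the lemma predicts x_2 <= x_0/2.
rng = np.random.default_rng(1)
for _ in range(2000):
    d = int(rng.choice([3, 5, 7]))
    x2, y2 = rng.uniform(0, 50), -rng.uniform(0, 50)
    x0, _ = Tinv(*Tinv(x2, y2, d), d)
    assert x2 <= x0/2 + 1e-9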
Now we take ρ>0 arbitrary and define
I^0 = D^0 ∩{x=ρ}.
Its image by T_d^2 is a curve in Q_4 that joins a point on σ_2^- with a point on σ_2^+. Then this curve has to cross
σ_0^- and σ_0^+.
The set I^1 = T_d^-2 (T_d^2(I^0) ∩ D^0) ⊂ I^0
contains points such that they, together with their second iterates, belong to I^0. Repeating this procedure we define, as in the final part of the proof of Proposition
<ref>,
I^k = T_d^-2k (T_d^2k(I^k-1) ∩ D^0) .
Clearly, I^k⊂ I^k-1 so that I^k is a sequence of nested compact sets as well.
Then I^∞ = ∩_n≥ 0 I^n ≠∅.
By this construction, if (x_0,y_0) ∈ I^∞,
(x_2k,y_2k) = T_d^2k(x_0,y_0) ∈ T_d^2k(I^k-1) ∩ D^0 ⊂ D^0 for all k≥ 0.
Then, by Lemma <ref>,
0< x_2k < ( 1/2)^k x_0
and as (x_2k,y_2k)∈ D^0, the iterates
(x_2k,y_2k) converge to (0,0) which implies that (x_0,y_0) ∈ W^s_0. Since ρ is arbitrary, W^s_0 is unbounded.
Since the curves that determine the boundary of D^0 are very close together, they provide a very good approximation for the stable manifold, even far away from the origin.
|
http://arxiv.org/abs/2405.10106v1 | 20240516140055 | Advancing Set-Conditional Set Generation: Graph Diffusion for Fast Simulation of Reconstructed Particles | [
"Dmitrii Kobylianskii",
"Nathalie Soybelman",
"Nilotpal Kakati",
"Etienne Dreyer",
"Eilam Gross"
] | hep-ex | [
"hep-ex",
"hep-ph"
] |
Advancing Set-Conditional Set Generation: Graph Diffusion for Fast Simulation of Reconstructed Particles
^1 Weizmann Institute of Science, Israel
^*These authors contributed equally
{nathalie.soybelman,dmitry.kobylyansky}@weizmann.ac.il
January 2024
The computational intensity of detailed detector simulations poses a significant bottleneck in generating simulated data for collider experiments. This challenge inspires the continued development of fast simulation techniques based on machine learning to serve as efficient surrogate models.
In our approach, a network generates a set of reconstructed objects conditioned on input particle sets. Building on the success of a slot-attention-based model, we present a new architecture utilizing diffusion, showcasing an enhanced performance in the context of single jets.
Keywords: fast simulation, transformer, graph networks, slot-attention, conditional generation, diffusion
§ INTRODUCTION
For experiments at the Large Hadron Collider (LHC), the demand for simulated data has increased throughout recent data-taking periods and will surge by roughly a factor of ten during the High-Luminosity runs <cit.>.
The standard simulation pipeline used at the general-purpose experiments ATLAS and CMS is designed to closely mirror that of real data <cit.>. Initially, proton-proton collision events are generated, followed by parton shower, hadronization, and secondary decays. The set of stable, or “truth”, particles then enter the detector volume, and their material interactions are modeled on a microscopic level using Geant4 <cit.>. The resulting energy deposits in the tracker and calorimeter cells seed the formation of tracks and clusters, respectively.
Subsequently, a reconstruction algorithm refines these data into objects suitable for statistical analysis.
Depending on the algorithm, the reconstructed “particle flow objects” (PFOs) either represent particles directly or else consist of tracks and calorimeter clusters with kinematics adjusted to maintain energy flow at the jet level.
From the perspective of computational efficiency, the main constraint in the traditional simulation pipeline comes from the highly sophisticated but expensive model of particle-detector interactions provided by Geant4. In a bid to remove this constraint, various machine learning approaches are currently being explored <cit.>. These rapid surrogate models aim to drastically reduce computational time while otherwise performing similarly to the traditional simulation pipeline.
The majority of research efforts in this direction have concentrated on fast calorimeter simulation <cit.>. In this approach, the objective is to replace the slowest link in the simulation chain while preserving the other steps, including reconstruction. Utilizing the features of an incoming particle as a conditioning factor, a generative model is employed to produce the detector response, which can then be aggregated for all particles within an event <cit.>. Within the scope of the Fast Calorimeter Challenge <cit.>, a wide array of methods were developed and analyzed. Various architectures, including those based on Generative Adversarial Networks (GANs) <cit.>, normalizing flows <cit.> and diffusion models <cit.> were introduced. These approaches represent the calorimeter's response through diverse formats such as images, point clouds, or graphs.
Another approach involves the replacement of the entire simulation pipeline. In studies such as <cit.>, the final high-level objects are generated based on particles originating from the hard process. This approach is highly efficient as it encompasses all simulation steps in a single process, eliminating the need for additional processing. However, it is worth noting that this method is process-dependent, necessitating optimization and retraining of the network for each analysis. An alternative strategy entails creating a generative model for jets <cit.>. In this scenario, the constituents of the jet are generated conditioned on both the jet features and the particle type. Additionally, in this approach, the features of the high-level object are directly mapped to the hard-process parton. Unlike the previous method, this approach does not require adjustment for different analyses but does necessitate event-splitting and additional processing for other objects within the event.
In the pursuit of a process-independent, full-event method, a novel strategy for fast simulation known as FlashSim <cit.> has been introduced. This method involves utilizing stable particles to predict high-level objects such as jets, fat jets, muons, electrons, and more. For each object, a separate network is trained using distinct input and target variables.
In this study, our focus lies on developing a process-independent, end-to-end, fast simulation technique that generates reconstructed particles utilizing stable particles as input. This presents a set-to-set problem. Initially, a prototype for this approach was presented in <cit.> using a simplified version of the problem. This initial model solely considered charged particles within a jet, with smeared tracks as targets. The architecture incorporated graph neural networks with slot attention mechanisms. Now, transitioning towards a more realistic approach that encompasses both neutral and charged particles while targeting reconstructed particles, we introduce Graph Diffusion as a novel architecture, showcasing enhanced performance compared to the baseline slot attention model.
§ DATASET
The network takes as input a single jet of truth particles entering the detector. For the generation task, the target is the set of reconstructed particles. Instead of relying on a parametrized smearing model (e.g. Delphes <cit.>), we obtain reconstructed particles using a realistic detector simulation followed by a particle flow algorithm described below. Each input or output reconstructed particle is represented by its momentum, direction, and charge (p_T, η, ϕ, |q|). Notably, charge prediction is a new feature introduced in this study, as our previous work utilized a toy model focusing solely on charged particles. Generally, reconstructed particles are classified into five classes: charged hadrons, neutral hadrons, photons, electrons, and muons. Given the significant class imbalance within the dataset, we simplify the task by predicting only whether the object is neutral or charged.
§.§ Truth Event Generation
We concentrate on a localized reconstruction of particles within a single jet. For this purpose, we utilize Pythia8 <cit.> to generate a single quark with momentum ranging between 10 GeV and 200 GeV, with the initial direction randomly selected within the ranges |η| < 2.5 and -π < ϕ < π. Subsequent to parton shower and hadronization, only stable particles with momentum exceeding 1 GeV and |η| < 3.0 are retained. The set of particles contained within the jets exhibits an average cardinality of N = 6.7, with a maximum of 25 particles and a minimum of 1 particle per truth event, as measured on the whole dataset.
§.§ Detector simulation
The detector simulation uses COCOA <cit.>, a Geant4-based configurable calorimeter simulation toolkit. The detector comprises three electromagnetic and three hadronic calorimeter layers, simulating ATLAS-like materials. Geometrically, the coverage is divided into a barrel region (|η| < 1.5) and two endcaps (1.5 < |η| < 3.0). The cell depth follows a 1/coshη profile to maintain a constant effective interaction depth across η. Moreover, the tracking region of the detector is subjected to a magnetic field of 3.8 T. To avoid particles created upstream of the calorimeter, our detector simulation is simplified by assuming that the tracker contains no material.
Tracking effects are emulated by first computing the trajectories of charged particles within the magnetic field, followed by applying smearing to the track parameters q/p, θ, and ϕ. Additionally, we discard tracks originating far from the beamline, with a transverse radius exceeding 75 mm (250 mm) in the barrel (endcap) region.
Following the simulation, a topological clustering algorithm is employed to group cells based on their deposited energy, expected noise levels, and proximity. These calorimeter clusters, along with the cells belonging to them and the set of tracks, are the input for the reconstruction algorithm.
§.§ Particle Reconstruction
To reconstruct particles, we utilize a recently developed machine learning algorithm called HGPflow <cit.>. The design principle of HGPflow is to learn the energy assignment between the input set of tracks and calorimeter clusters and an output set of particles using a hypergraph prediction network <cit.>. We trained a version of HGPflow on a statistically independent dataset of 294k single jets and evaluated this model to obtain a set of reconstructed particles for each jet in our training dataset. These predictions serve as the target for our fast simulation models.
§.§ Preprocessing
To mitigate contamination from detector noise potentially converted into neutral particles during HGPflow reconstruction, we employ a filtering process. Neutral reconstructed particles located outside a cone of R=0.4 around the truth-jet axis are removed since these typically correspond to clusters of calorimeter noise. Subsequently, before inputting the object features into the networks, we apply relative scaling. For each event individually, we scale log p_T, η, and ϕ based on the mean and standard deviations of the truth particles. The resulting scaling quantities of η and log p_T are then added as global features to retain knowledge of the absolute jet properties.
Additionally, we remove events where truth particles have a |ϕ| > 2.8 to avoid edge effects.
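In code, this per-event relative scaling amounts to only a few lines; the sketch below is illustrative (the array layout and the exact set of retained global features are assumptions):

import numpy as np

def scale_event(truth, reco):
    # truth, reco: arrays of shape (N, 3) with columns [log_pt, eta, phi].
    # Standardize by the truth particles' statistics; keep the log pT and
    # eta scaling constants as global conditioning features.
    mu, sd = truth.mean(axis=0), truth.std(axis=0) + 1e-8
    glob = np.array([mu[0], sd[0], mu[1], sd[1]])
    return (truth - mu) / sd, (reco - mu) / sd, glob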
§ FAST SIMULATION ALGORITHMS
In this study, we evaluate the performance of two set-conditional set generation approaches. Both methods rely on graph neural networks, as they can naturally represent unordered, set-valued data through a graph structure. Currently, we use fully connected graphs because we have single jets. This can be easily generalized when considering full events where different event regions are loosely or not at all connected. Edges within the graph allow inter-particle relations to be encoded through message passing. Simplified illustrations of the network architectures are depicted in Fig. <ref>.
§.§ Slot Attention (SA)
The slot-attention approach, initially proposed in <cit.>, constitutes the baseline model. In this approach, the generation task is divided into two stages: cardinality and feature prediction. Cardinality is inferred from the updated representations of truth particles following message passing. Subsequently, based on the predicted cardinality, particles are initialized with random noise and iteratively refined through a slot-attention block <cit.>, which attends to the updated truth particle representations. The introduction of noise during initialization is crucial to prevent deterministic predictions. However, in our previous and current work, we recognize that the learned noise model exhibits room for improvement, indicated by the limited precision of the predictions.
The newly introduced Graph Diffusion, detailed in the subsequent section, aims to address the limitations of the slot-attention network.
§.§ Graph Diffusion (GD)
In this approach, we maintain the cardinality prediction method as originally introduced in the slot-attention approach and solely modify the feature prediction task. After noting that a single slot-attention block lacks the expressiveness needed to transform complete noise into sufficiently accurate features, we opt for a different approach. This involves replacing the feature prediction component with a diffusion process. Initially, starting with a fully noised graph, we employ a slot attention-based network to iteratively predict a feature estimate. This estimate is used to update the noised graph until we obtain the final, denoised reconstructed particles.
§.§.§ Diffusion formulation
We adopt score-based diffusion, a method where a diffusion process gradually perturbs the original data x with Gaussian noise while the neural network learns the time-dependent score function ∇_x log p(x; σ). This function enables us to reverse the noise process, starting from pure noise x_T ∼𝒩(0, 1) and iteratively denoising it to sample from the original data distribution x_0 ∼ p_data.
Our Graph Diffusion approach follows the EDM strategy introduced in <cit.> for score-based denoising diffusion models. During training, we sample noise rates σ from the log-normal distribution, defined by log(σ) ∼𝒩(-0.8, 0.8). The network receives scaled noisy data x = c_in(σ)(x_0 + σϵ) as input, where ϵ∼𝒩(0, 1). To enhance network predictions, we combine the neural network output F_θ(x; 1/4lnσ, c) with a skip connection D_θ(x; σ, c) = c_skip(σ)x + c_out(σ)F_θ(x;1/4lnσ, c), where c denotes contextual information.
The weighted loss function minimized during training is then defined as:
ℒ = E_σ, x_0, ϵ, c[λ(σ)‖D_θ(x; σ, c) - x_0‖^2]
We adopted specific design choices from <cit.>, as detailed in Tab. <ref>. During the inference process, we utilize the network predictions to compute the score function ∇_x log p(x; σ, c) = (D_θ(x; σ, c) - x)/σ^2 and solve the diffusion ordinary differential equation (ODE) using Heun's 2^nd order method <cit.>.
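To make the preconditioning concrete, the following PyTorch sketch shows one training step. The coefficients c_in, c_out, c_skip and the weight λ(σ) follow the usual EDM defaults of Karras et al.; the value of SIGMA_DATA and the signature of F_theta are illustrative assumptions, since Tab. <ref> is not reproduced here:

import torch

SIGMA_DATA = 0.5   # placeholder value; the actual choice is given in Tab. <ref>

def edm_loss(F_theta, x0, context):
    # One EDM-style training step: sample sigma, noise the data, apply the
    # preconditioned denoiser D_theta and the weighted reconstruction loss.
    sigma = torch.exp(torch.randn(x0.shape[0], 1) * 0.8 - 0.8)  # log sigma ~ N(-0.8, 0.8)
    c_skip = SIGMA_DATA**2 / (sigma**2 + SIGMA_DATA**2)
    c_out  = sigma * SIGMA_DATA / (sigma**2 + SIGMA_DATA**2).sqrt()
    c_in   = 1.0 / (sigma**2 + SIGMA_DATA**2).sqrt()
    lam    = 1.0 / c_out**2                                     # weight lambda(sigma)
    x = x0 + sigma * torch.randn_like(x0)
    D = c_skip * x + c_out * F_theta(c_in * x, 0.25 * sigma.log(), context)
    return (lam * (D - x0).pow(2)).mean()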
To improve sampling quality, we incorporate self-conditioning on the network output from the previous timestep, following a technique proposed in <cit.>. This involves concatenating x_t with the previously estimated x̃_0. For training the denoising function D_θ(x, x̃_0; σ, c) in this context, we set x̃_0 = 0 with a probability of p = 0.5, reverting to the model without self-conditioning. The remaining time, we first estimate x̃_0 = D_θ(x, 0; σ, c) and then employ it for self-conditioning.
§.§.§ Architecture description
* Input set Encoding and Cardinality Prediction
This segment remains unchanged from the slot attention approach, as depicted at the top of Fig. <ref>. The graph of truth particles characterized by p_T, η, ϕ, and |q| is embedded using a multilayer-perceptron (MLP) and subsequently updated through one round of message-passing. An overall embedding G(T) of the truth event T is aggregated using the node-level representation. It is used to predict the categorical distribution of the output set cardinality via an MLP.
* Denoising Network
This component of the Graph Diffusion receives as input the set of Pflow particles with noised features x, concatenated with self-conditioning features x̃_0 and noise level σ. These features are embedded via an MLP, along with the global representation of the truth event G(T), event scaling information, and the sinusoidal embedding of σ as contextual information. The output embedding proceeds through multi-head Slot Attention layers <cit.>, where they function as queries, while the embedding of truth particles serves as keys and values. The final representation undergoes MLP layers along with the contextual information to obtain F_θ. This is then combined with the skip connection to derive the denoised estimate D_θ. The schematic representation of this part is shown in Fig. <ref>.
* Inference
Since our goal is to predict the cardinality of the generated set of particles, we split the inference process into two steps. Initially, we predict the cardinality of the generated particles from the set of truth particles and initialize them with Gaussian noise. Subsequently, we employ Heun's 2^nd order method in conjunction with our denoising network to iteratively sample the reconstructed particle features from the noise. Following generation, objects with a Δ R > 0.5 with respect to the truth-jet are removed to stabilize against outliers.
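A sketch of the corresponding sampler, combining Heun's 2nd-order method with self-conditioning; the denoiser signature D_theta(x, x_est, sigma, context), the feature count, and the construction of the decreasing noise schedule sigmas (a list of floats ending at 0) are assumptions:

import torch

N_FEATURES = 4   # assumed layout: (log pT, eta, phi, |q|)

@torch.no_grad()
def sample(D_theta, n_particles, context, sigmas):
    # Heun's 2nd-order solver for the probability-flow ODE, with
    # self-conditioning on the previous denoised estimate.
    x = torch.randn(n_particles, N_FEATURES) * sigmas[0]
    x_est = torch.zeros_like(x)
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        x_est = D_theta(x, x_est, s, context)
        d = (x - x_est) / s                      # score drift: (x - D)/sigma
        x_next = x + (s_next - s) * d            # Euler predictor
        if s_next > 0:                           # Heun corrector
            d2 = (x_next - D_theta(x_next, x_est, s_next, context)) / s_next
            x_next = x + (s_next - s) * 0.5 * (d + d2)
        x = x_next
    return x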
§ TRAINING AND LOSS
We adopt a familiar loss function outlined in <cit.>. The central element involves the application of the Hungarian algorithm <cit.> for particle matching. For each pair of input and output particles from the two sets, we calculate the squared differences for the particle features log p_T and η. Due to the periodicity of ϕ, we use the cosine loss 2(1-cosΔϕ), which, when Taylor-expanded to second order, recovers the squared difference. Additionally, we include the Binary Cross Entropy (BCE) loss for the charge in the case of the Slot Attention model and the squared difference for Graph Diffusion.
Hungarian matching determines the pairing configuration that minimizes the sum of all pair losses per event. The total loss comprises this sum and the Cross Entropy (CE) loss for the cardinality prediction. We additionally weigh the losses for the diffusion model as defined in Eq. <ref>.
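For illustration, a NumPy/SciPy sketch of the per-event matching cost; the relative weighting of the terms and the use of a squared difference for the charge (the diffusion-model variant) are illustrative choices:

import numpy as np
from scipy.optimize import linear_sum_assignment

def event_matching_cost(truth, pred):
    # truth: (N, 4) and pred: (M, 4) arrays, columns [log_pt, eta, phi, q].
    t, p = truth[:, None, :], pred[None, :, :]
    cost = ((t[..., 0] - p[..., 0])**2                  # squared log-pT residual
            + (t[..., 1] - p[..., 1])**2                # squared eta residual
            + 2*(1 - np.cos(t[..., 2] - p[..., 2]))     # periodic phi loss
            + (t[..., 3] - p[..., 3])**2)               # charge term (squared variant)
    rows, cols = linear_sum_assignment(cost)            # Hungarian matching
    return cost[rows, cols].sum(), list(zip(rows, cols))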
The training, validation, and test datasets comprise 1,000,000, 20,000, and 300,000 truth events, respectively. The trainings were performed on an NVIDIA RTX A5000.
§ RESULTS
Given the task of set prediction, we can assess multiple aspects of network performance. Firstly, we analyze properties of the set as a whole, such as set cardinality and inclusive particle distributions. Secondly, we examine the properties of individual set constituents. Additionally, for our application, we aim to study resolution modeling, which entails examining the variation in generated particle properties given the same input values. These aspects are discussed separately in the following subsections.
§.§ Set based performance
The cardinality prediction network follows the same principles and structure for both approaches, hence, similar results are expected. Minor deviations may occur as the networks are trained simultaneously with the property prediction network using combined losses. Fig. <ref> compares the generated and target set cardinalities for different input set cardinalities. The distributions for low cardinalities peak at the truth cardinality, with the width of the distribution increasing as the number of truth particles rises. Higher particle counts in the event lead to more instances of misreconstructions and inefficiencies. For very high cardinalities, the peak position is lower than the truth cardinality since dense environments are more prone to reconstructing several neutral particles as one. Both networks closely match the target distributions, even for high cardinalities where the event count is low, with predictions aligning with targets within statistical uncertainties.
Next, we examine the marginal distributions of the set constituents depicted in Fig. <ref>. These histograms encompass all particles from all events in the test set. Graph Diffusion outperforms the baseline model in the p_T distributions for low energies. Both models perform well in the intermediate and high p_T range. The Slot Attention model slightly outperforms Graph Diffusion in the distribution of η. The bias comes from the fact that low p_T particles tend to have higher |η| and vice versa.
However, the mismatch results in a deviation of a few percent, which is not deemed very significant. The distributions of ϕ closely match the target distribution within statistical errors for both networks.
The feature prediction task involves classifying particles as charged or neutral. Given the initially predicted total cardinality, the network must learn the number of charged and neutral constituents in the set. Since the total cardinality prediction displays good agreement, we can separately compare the neutral and charged contributions to analyze class prediction performance. Fig. <ref> illustrates the difference between the number of truth and reconstructed particles per event, separated for neutral and charged constituents. The target distribution for charged particles is a delta function since track presence ensures perfect classification. However, the reconstruction of neutral particles is more prone to inefficiencies and fakes, resulting in a wider distribution. Slot Attention demonstrates excellent agreement with the target, while Graph Diffusion exhibits some misclassification, as achieving binary prediction accuracy is challenging in diffusion models.
Overall, we conclude that both models reproduce set properties to a very satisfactory extent.
§.§ Constituent based performance
A more nuanced investigation involves a per-event comparison of set constituents. While Fig. <ref> illustrates that on average, the generated particles have correct features, mismatches among certain features may occur within an event, which cannot be captured in this plot. To explore this further, we conduct Hungarian matching using the same configuration as during training. However, here, we match the truth set with the output set. With the matched pairs, we examine their differences in p_T, η, and ϕ.
These residual distributions are depicted in Fig. <ref>. For angular quantities, Graph Diffusion performs better than Slot Attention in matching the peak of the distribution but exhibits wider tails originating from outliers, particularly in η. Conversely, Slot Attention displays a narrower distribution in η, suggesting that generated particles are not sufficiently smeared with respect to the truth. This trend is also evident in the right tail of the p_T distribution. Here, Graph Diffusion demonstrates better agreement. However, both models deviate from the target distribution in the peak area. It is noteworthy that this area is primarily influenced by low-p_T particles, which are challenging to model.
To gain deeper insights, we depict the residual p_T distribution for different bins of the matched truth particle p_T in Fig. <ref>. Initially focusing on the target distributions, we observe that low-energy particles form a narrow distribution with sharply declining tails comprised of mismatched or noisy particles. As the truth p_T increases, the distributions widen,
likely driven by worsening track momentum resolution. Both models tend to overestimate the residuals for low energies. For higher energies, we observe excellent agreement for graph diffusion, while slot attention predicts distributions that are too narrow, underestimating smearing effects and predicting values too close to the truth.
We can also assess constituent-based performance at the event level. Similar to the procedure in the training loss, we can extract the Hungarian cost for each event from the truth and reconstructed particle features, which can conveniently be used as a summary metric. It is depicted in Fig. <ref>. Intuitively, this plot can be understood as follows: If the generated distribution is shifted to the left of the target, i.e., the Hungarian cost is too low, the predictions are too close to the truth and smearing effects are not well modeled. Conversely, a shift to the right indicates excessive smearing of truth quantities and possibly outliers. The double-peak structure originates from events with and without high p_T particles.
We can summarize previously made observations: Slot Attention exhibits a clear left-shift in the bulk of the data, indicating overly precise predictions. Both networks fail to match the left tail of the distribution, which consists of events with very low energies. In the right peak, we observe a slight shift to the right for the Graph Diffusion. While Graph Diffusion shows room for improvement on the edges of the distributions, it matches the bulk of the data very well and clearly outperforms the baseline model.
Acknowledging that matching-based comparison is the easiest method for evaluating the performance of constituent modeling, it's important to note a few flaws that can potentially introduce biases. Firstly, particles without a match are not included in the comparison, and since the matching is performed with the truth, a substantial number of particles can be missed, as evident from Fig. <ref>. Additionally, in some cases, a generated event may have the same number of particles as the truth, but one particle was not reconstructed, while a particle from noise with very different features was created. This scenario can occur in reality, but the matching would pair the unreconstructed truth particle with the fake one, resulting in a high Hungarian cost. We also note that when summing the costs of p_T, η, ϕ and class we weigh them so every variable on average has approximately the same contribution. Changing the balance would yield different matching results and can influence the performance metrics.
Despite these flaws, matching-based comparison still serves as a useful metric, as these effects will equally occur for both targets and predictions as long as the cardinality is modeled correctly.
§.§ Resolution performance
A crucial aspect for a good detector simulation is accurate resolution modeling. The stochastic nature of particle-material interactions leads to non-trivial smearing of truth particle features. Conducting detector simulation and reconstruction multiple times on the same truth event results in different outcomes each time. To evaluate how well our model reproduces this variability, we introduce a replica dataset, similarly to the concept introduced in <cit.>. We generate 10,000 unique truth events, and for each truth event, we run detector simulation and reconstruction 100 times. After performing truth matching for all replicas, each truth particle has a matched distribution of particles. For each truth particle, we compute the mean and standard deviation of the matched distribution as a proxy for the detector resolution. This is summarized in bins of truth p_T in Fig. <ref>.
While the means are modeled comparably well by both models, Slot Attention largely underestimates the standard deviations and therefore doesn't accurately model the resolution. However, Graph Diffusion matches the target resolution significantly better, with a slight trend towards overestimating the resolution.
It's worth mentioning that utilizing the replica dataset for training with an extended loss function, as demonstrated in <cit.>, is an option. In such a scenario, the Slot Attention model would yield better performance in resolution modelling, although Graph Diffusion would still consistently outperform it. Since we found no significant advantage for the latter, we opted for the simpler setup without replicas.
For demonstrative purposes, we present an event display with replicas in Fig. <ref>. The showcased event was selected because it contains particles across a broad p_T spectrum without exceeding a count of particles that would classify it as a rare event. The event display aligns with our previous discussions. Slot Attention fails to accurately model the distribution and does not replicate the low-energy noise. In contrast, Graph Diffusion exhibits a much closer alignment with the target distribution but is prone to a few noisy outliers.
§ CONCLUSION
In this study, we have advanced a previously introduced approach for fast simulation of particles within dense environments such as jets. Our method involves directly generating reconstructed particles from input truth particles, effectively replacing the conventional processes of detector simulation and reconstruction in a single step.
Central to our approach is the introduction of the Graph Diffusion architecture, which leverages cutting-edge diffusion techniques on graph-valued data. We conducted a comprehensive analysis of the network's performance, focusing on three key aspects: overall set quantities, features of individual set constituents, and resolution modeling, a significant challenge in detector simulation. Compared to the previous paper, GD allows good resolution modeling without the need for a training dataset containing multiple detector replicas of the same underlying truth event.
To accurately assess resolution performance, we introduced the replica dataset, enabling us to evaluate generation outcomes multiple times per ground truth. Our findings demonstrate that Graph Diffusion significantly outperforms the baseline model in resolution modeling, while exhibiting comparable performance in overall set constituent features.
Looking ahead, our ultimate objective is to develop a fast simulation package that is universally applicable across various events. Future directions could involve scaling up the model to process full events and therefore handle higher cardinalities, although the performance implications of such scaling remain uncertain, particularly for events with several hundred particles. Alternatively, implementing event partitioning could address the cardinality issue, but would require training on a diverse range of detector signatures, including jets of varying energy ranges and isolated objects.
§ ACKNOWLEDGEMENT
ED, EG, NK, DK, and NS are supported by the BSF-NSF grant 2028 and the ISF Research Center 494.
|
http://arxiv.org/abs/2405.09141v1 | 20240515071308 | Tree-Packing Revisited: Faster Fully Dynamic Min-Cut and Arboricity | [
"Tijn de Vos",
"Aleksander B. G. Christiansen"
] | cs.DS | [
"cs.DS"
] |
A tree-packing is a collection of spanning trees of a graph. It has been a useful tool for computing the minimum cut in static, dynamic, and distributed settings.
In particular, [Thorup, Comb. 2007] used them to obtain his dynamic min-cut algorithm with Õ(λ^14.5√(n)) worst-case update time.
We reexamine this relationship, showing that we need to maintain fewer spanning trees for such a result; we show that we only need to pack Θ(λ^3 log m) greedy trees to guarantee a 1-respecting cut or a trivial cut in some contracted graph.
Based on this structural result, we then provide a deterministic algorithm for fully dynamic exact min-cut, that has Õ(λ^5.5√(n)) worst-case update time, for min-cut value bounded by λ.
In particular, this also leads to an algorithm for general fully dynamic exact min-cut with Õ(m^1-1/12) amortized update time, improving upon Õ(m^1-1/31) [Goranci et al., SODA 2023].
We also give the first fully dynamic algorithm that maintains a (1+ε)-approximation of the fractional arboricity – which is strictly harder than the integral arboricity. Our algorithm is deterministic and has O(αlog^6m/ε^4) amortized update time, for arboricity at most α.
We extend these results to a Monte Carlo algorithm with O(poly(log m,ε^{-1})) amortized update time against an adaptive adversary. Our algorithms work on multi-graphs as well.
Both result are obtained by exploring the connection between the min-cut/arboricity and (greedy) tree-packing. We investigate tree-packing in a broader sense; including a lower bound for greedy tree-packing, which – to the best of our knowledge – is the first progress on this topic since [Thorup, Comb. 2007].
§ INTRODUCTION
A tree-packing is a collection of spanning trees of a graph.
Often one is interested in a tree-packing satisfying certain requirements, e.g., greedy, disjoint, which we detail where applicable.
In particular, tree-packing first appeared in the seminal works of Nash-Williams <cit.> and Tutte <cit.>.
They studied sufficient and necessary conditions for when a graph can be decomposed into k disjoint spanning subgraphs, i.e., when a graph admits a disjoint tree-packing of size k.
They provided a sufficient and necessary condition by considering partition values:
given a partition 𝒫 of the vertex set of a graph G, the value of 𝒫 is then |E(G/𝒫)|/(|V(G/𝒫)|-1), where G/𝒫 is the graph obtained by contracting 𝒫 in G.
In particular, they showed that G admits a decomposition into k disjoint spanning trees if and only if the minimum partition value is at least k.
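To make this condition concrete, consider the following minimal Python sketch (ours, not from the original papers; the function name and the edge-list representation are illustrative assumptions) which computes the partition value of a given partition:

# A minimal sketch: the partition value |E(G/P)| / (|P| - 1) of a partition P.
def partition_value(edges, partition):
    """edges: list of (u, v) pairs; partition: list of disjoint vertex sets."""
    block_of = {v: i for i, block in enumerate(partition) for v in block}
    # Edges surviving the contraction G/P are exactly those crossing two blocks.
    crossing = sum(1 for u, v in edges if block_of[u] != block_of[v])
    return crossing / (len(partition) - 1)

# Example: a 4-cycle. The singleton partition has value 4/3, which is also the
# minimum, so by Nash-Williams/Tutte the 4-cycle packs one disjoint spanning
# tree but not two.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(partition_value(edges, [{0}, {1}, {2}, {3}]))  # 4/3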
Not long after, Nash-Williams noted <cit.> that his techniques extend to answer a question with a very similar flavor: what is the smallest number of trees needed to cover all edges of a graph?
This number is now known as the arboricity of the graph, and Nash-Williams showed that the arboricity is the ceiling of the fractional arboricity, defined as α := max_{S⊆V} |E(S)|/(|S|-1).
Tree-packing, however, has not only been studied for its mathematical properties, but also for its algorithmic applications.
In a seminal paper, Gabow <cit.> showed how related techniques can be used to compute both a packing of disjoint trees and the minimum cut value of a graph, i.e., the minimum number of edges whose deletion causes the graph to become disconnected.
Such a collection of edges form a minimum cut or min-cut for short.[As per convention, we often simply write min-cut to refer to the size/value of a min-cut. Note that there can be multiple min-cuts, although there is a unique min-cut value.]
This was far from the end of the story, as Karger <cit.> used the fact that any large enough greedy tree-packing contains trees that 2-respect some min-cut, i.e., it contains a spanning tree which crosses some min-cut at most twice. Here a greedy tree-packing is defined as follows: let the weight of an edge be the number of trees it already belongs to, and require that the spanning trees in the packing form successive minimum spanning trees.
To arrive at his near-linear time algorithm for computing a min-cut, Karger combined this fact with an efficient procedure to compute the size of all cuts that 2-respect a given tree.
This observation has subsequently also found use for computing k-cuts <cit.>.
This `semi-duality' between tree-packing and min-cut has also found applications in other models of computation such as distributed computing <cit.> and in dynamic algorithms <cit.>.
Dynamic algorithms maintain a solution to a problem – for instance the size of a min-cut – as the input graph undergoes deletions and insertions of edges.
In this setting, it is not known how to maintain the smallest 2-respecting cut in a dynamic forest. Instead, Thorup <cit.> showed that packing Ω(λ^7 log^3 m) greedy trees[Throughout this paper we write log m for all logarithmic factors. We note that for simple graphs this simplifies to log n.] is sufficient to guarantee that at least one tree 1-respects a min-cut, provided that the size of the min-cut is at most λ.
He then shows that one can dynamically maintain this packing in Õ(|𝒯|^2 √(λ n)) update time[We write Õ(f) for O(f log f).] and that one can maintain the smallest 1-respecting cut efficiently in a dynamic forest. These things combine to give an exact dynamic algorithm running in Õ(λ^14.5√(n)) worst-case update time, whenever the min-cut has size at most λ.
This dependency on λ is persistent. Even the cases λ = 1 and λ = 2 are important and very well-studied in the dynamic setting <cit.>. Recently Jin, Wu, and Thorup <cit.> gave an algorithm for the case that λ≤ (log m)^o(1). For general (polynomial) values of λ, Thorup <cit.> remains the state-of-the-art.
One immediate way of improving the dependency on λ in his update time is to show that packing even fewer trees still guarantees that at least one tree 1-respects a min-cut.
This prompted Thorup to ask whether an even smaller tree-packing is always guaranteed to contain a tree that 1-respects some min-cut.
In recent work, Goranci et al. <cit.> balanced the approach of Thorup with a different approach based on expander decompositions to get an exact dynamic algorithm for all values of λ.
Again, an improvement in the λ dependency immediately yields a faster algorithm. In a similar vein to Thorup, they asked whether this is possible.
In this paper, we answer this question in the affirmative. We show that focusing solely on 1-respecting cuts might be too limited an approach in the dynamic setting.
In particular, we show that in the cases where one might need to pack many trees to ensure that at least one packed tree 1-respects a min-cut, one can instead consider a corresponding approximate partition. We show that whenever we cannot guarantee a 1-respecting min-cut, we can instead guarantee that the approximate partition contains a trivial min-cut.
We then make this approach algorithmic by showing how to maintain the trivial min-cuts of this approximate partition.
By doing so, we need only pack Ω(λ^3 log m) trees, thus resulting in a much smaller final dependency on λ.
In spite of tree-packing having found so many applications, we are still far from fully understanding them and their limits. In this paper, we show that they are not only useful for dynamically estimating the min-cut, but also for dynamically estimating the fractional arboricity, as we provide the first dynamic algorithm able to (1+)-approximate the fractional arboricity efficiently.
Finally, we study the limits of tree-packing based approaches by proving some technical results concerning tree-packing. As such, we categorize our results in three sections: dynamic min-cut, dynamic arboricity, and technical results concerning tree-packing.
§.§ Min-Cut
First, we present our results for dynamic min-cut. We obtain the following result.
theoremmincutPar
There exists a deterministic dynamic algorithm, that given an unweighted, undirected (multi-)graph G=(V,E), maintains the exact min-cut value λ if λ≤λ_max in Õ(λ_max^5.5√(n)) worst-case update time.
It can return the edges of the cut in O(λlog m) time with Õ(λ_max^5√(m)) worst-case update time.
This improves on the state-of-the-art by Thorup <cit.>, who achieves Õ(λ_max^14.5√(n)) worst-case update time. Thorup also uses this result to obtain a (1+ε)-approximate min-cut against an oblivious adversary in Õ(√(n)) time, where our result improves the polylogarithmic factors. An application where our result has more impact is deterministic exact min-cut for unbounded min-cut value λ.
theoremmincutCombi
There exists a deterministic dynamic algorithm, that given a simple, unweighted, undirected graph G=(V,E), maintains the exact min-cut value λ with amortized update time
Õ(min{m^1-1/12, m^11/13n^1/13,n^3/2}).
It can return the edges of the cut in O(λlog m) time with Õ(m^1-1/12) amortized update time.
We obtain this result by using the algorithm of Goranci et al. <cit.> for the high λ regime. When they combine this with <cit.>, they achieve Õ(m^29/31n^1/31)=Õ(m^1-1/31) amortized update time[See the updated arXiv version for the correct bounds. The SODA version of the paper states a different bound (namely Õ(m^1-1/16)), which resulted from a typo when citing <cit.>.].
We note that Goranci et al. <cit.> also provide a randomized result with Õ(n) worst-case update time against an adaptive adversary.
Further Related Work.
Jin and Wu <cit.> gave an algorithm for s-t min-cut, for min-cut up to size (log m)^o(1) with n^o(1) worst-case update time. Recently, they generalized these techniques to get n^o(1) worst-case update time for the global min-cut <cit.> up to size (log m)^o(1).
There is a further line of work on smaller values of the min-cut: in particular for graph connectivity (whether λ≥ 1) <cit.> and 2-edge connectivity (whether λ≥ 2) <cit.>.
For planar graphs, Lacki and Sankowski <cit.> provided a deterministic algorithm with Õ(n^5/6) worst-case update and query time.
There are faster algorithms for dynamic min-cut, at the cost of an approximation factor: Karger and Thorup <cit.> gave a (2+ε)-approximation in Õ(poly(log m, ε^-1)) amortized update time, and Thorup <cit.> gave a (1+ε)-approximation in Õ(√(n)) worst-case update time.
In the partially dynamic setting, Thorup <cit.> gives an algorithm with Õ(n^3/2+m) total time for the purely decremental and incremental settings. Further, there are incremental algorithms with O(λlog m) or O(log m ) amortized update time, by Henzinger <cit.> and Goranci, Henzinger, and Thorup <cit.> respectively.
§.§ Arboricity
The fractional arboricity α of a graph is defined as
α := max_{S⊆ V} |E(S)|/(|S|-1).
The arboricity is defined as ⌈α⌉, and is thus strictly easier to compute. Equivalently, we can define the arboricity as the minimum number of trees needed to cover the graph <cit.>.
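As a concrete illustration, the following brute-force sketch (exponential time, purely for intuition; all identifiers are ours) evaluates the defining maximum directly:

# A brute-force sketch of the fractional arboricity
# alpha = max over S with |S| >= 2 of |E(S)| / (|S| - 1).
from itertools import combinations

def fractional_arboricity(vertices, edges):
    best = 0.0
    for k in range(2, len(vertices) + 1):
        for S in combinations(vertices, k):
            S = set(S)
            e_in = sum(1 for u, v in edges if u in S and v in S)
            best = max(best, e_in / (len(S) - 1))
    return best

# K4 has alpha = 6/3 = 2; indeed K4 decomposes into two spanning trees.
K4 = [(u, v) for u, v in combinations(range(4), 2)]
print(fractional_arboricity(range(4), K4))  # 2.0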
Our Result.
We provide an algorithm to compute the arboricity based on tree-packing. This leads to the first dynamic (1+)-approximation of the fractional arboricity. Our algorithm is deterministic, hence also works naturally against an adaptive adversary.
theoremarboricityDetNew
There exists a deterministic dynamic algorithm, that given an unweighted, undirected (multi-)graph G=(V,E), maintains a (1+ε)-approximation of the fractional arboricity α when α≤α_max in O(α_max log^6 m / ε^4) amortized update time or a Las Vegas algorithm with O(α_max^2 m^o(1) / ε^4) worst-case update time.
This improves the approximation factor in the state-of-the-art: Chekuri et al. <cit.> give a (2+ε)-approximation of the fractional arboricity. It has O(log α_max log^2 m / ε^4) amortized update time or O(log α_max log^3 m / ε^6) worst-case update time. In fact, for simple graphs, the value of the densest subgraph is a (1+ε)-approximation for large values of α. Combining this with our result for smaller values of α, we obtain the following result.
theoremarboricitySimple
There exists a deterministic dynamic algorithm, that given a simple, unweighted, undirected graph G=(V,E), maintains a (1+ε)-approximation of the fractional arboricity α in O(log^6 m / ε^5) amortized update time or a Las Vegas algorithm with O(m^o(1)/ε^6) worst-case update time.
For multi-graphs, the value of the densest subgraph remains a 2-approximation of the fractional arboricity, even when α becomes large. To obtain an efficient algorithm for a (1+ε)-approximation, we use a sampling technique to reduce the case of large α to the case of small α. Although this is rather straightforward against an oblivious adversary, we need to construct a more sophisticated scheme to deal with an adaptive adversary.
theoremarboricity
There exists a dynamic algorithm, that given an unweighted, undirected (multi-)graph G=(V,E), maintains a (1+ε)-approximation of the fractional arboricity α against an adaptive adversary in O(log^11 m / ε^15) amortized update time or O(m^o(1) / ε^19) worst-case update time.
Against an oblivious adversary we obtain the improved amortized update time of O(log^7 m / ε^6), or worst-case update time O(m^o(1)/ε^8), see <Ref>.
We remark that the update time of many dynamic algorithms is parameterized by the (fractional) arboricity <cit.>. This shows that the fractional arboricity is an important graph parameter. These algorithms are faster when α is small, and this is exactly the regime where dynamic approximations of α are lacking. We hope that our results contribute to a better understanding of this.
Further Related Work.
Computing the (fractional) arboricity of a graph is a relatively hard problem; we do not know a static linear-time algorithm for the exact version.
Gabow showed how to compute the arboricity in Õ(m^3/2) time <cit.>. For the approximate version, much faster algorithms are known.
In particular, Eppstein gave a 2-approximation in O(m+n) time <cit.>.
Plotkin, Shmoys, and Tardos gave an FPTAS for solving fractional packing and fractional covering problems, and their algorithm applied to fractional arboricity takes Õ(mα/ε^2) time <cit.> and provides a (1+ε)-approximation, where α denotes the value of the fractional arboricity.
Toko, Worou, and Galtier gave a (1+ε)-approximation for the fractional arboricity in O(m log^3 m/ε^2) time <cit.>.
Blumenstock and Fischer <cit.> gave a (1+ε)-approximation of the arboricity in O(m log m log α/ε) time. More recently, Quanrud <cit.> gave a (1+ε)-approximation of the arboricity w.h.p. in O(m log m α(n) + n log m (log m + (log log m + log(1/ε)) α(n))/ε^3) time, where α(n) is the inverse Ackermann function.[We apologize for the abuse of notation; in the remainder of the paper the inverse Ackermann function does not appear and α always refers to the fractional arboricity itself.]
The exact fractional arboricity can be computed in Õ(mn) time <cit.>.
For dynamic arboricity, there is a deterministic exact algorithm with worst-case update time Õ(m) <cit.>.
The work of Brodal and Fagerberg <cit.> implies a 4-approximation of the arboricity in O(log m) amortized update time against an oblivious adversary.
Both for better approximations, and for the fractional arboricity, we turn to the closely related densest subgraph problem, where we want to determine the density ρ:
ρ := max_S⊆ V|E(S)|/|S|.
There have been multiple dynamic approximate densest subgraph algorithms over the last years <cit.>.
The state-of-the-art <cit.> gives a (1+ε)-approximation, which in turn implies (2+ε)-approximate fractional arboricity algorithms (for simple graphs). It has O(log^2 m log ρ / ε^4) amortized update time or O(log^3 m log ρ / ε^6) worst-case update time.
The algorithms via densest subgraph, but also <cit.>, use out-orientations to compute the approximate fractional arboricity. Although this works well for crude approximations, it seems to not lead to high precision approximations.
§.§ Tree-Packing
Definitions.
Let G=(V,E) be an unweighted, undirected graph. A tree-packing 𝒯 of G is a family of spanning trees, where edges can appear in multiple trees. The load of an edge e, denoted by L^𝒯(e), is defined as the number of trees that contain e. The relative load is defined as ℓ^𝒯(e) = L^𝒯(e)/|𝒯|. Whenever the tree-packing is clear from the context, we omit the superscript ·^𝒯. The packing value of a tree-packing is
pack_val(𝒯) := 1/max_{e∈E} ℓ^𝒯(e).
Dual to tree-packing we have a concept for partitions. For a partition 𝒫, we define the partition value as
part_val(𝒫) := |E(G/𝒫)|/(|𝒫|-1).
We now have that (see e.g. <cit.>)
Φ_G := max_𝒯 pack_val(𝒯) = min_𝒫 part_val(𝒫).
We omit the subscript and simply write Φ when the graph is clear from the context.
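For concreteness, here is a minimal sketch (ours, in the notation above; edges are represented as pairs (u, v) with u < v) computing loads, relative loads, and the packing value of a given tree-packing:

# A sketch: loads L(e), relative loads l(e) = L(e)/|T|, and the packing value
# 1 / max_e l(e) of a tree-packing given as a list of spanning trees.
from collections import Counter

def packing_stats(trees):
    load = Counter(e for T in trees for e in T)
    k = len(trees)
    rel = {e: load[e] / k for e in load}
    return load, rel, 1.0 / max(rel.values())

# Two edge-disjoint spanning trees of K4 give packing value 2 = Phi(K4).
T1 = [(0, 1), (1, 2), (2, 3)]
T2 = [(0, 2), (0, 3), (1, 3)]
print(packing_stats([T1, T2])[2])  # 2.0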
Next, we introduce ideal relative loads, following Thorup <cit.>. These loads, denoted by ℓ^*(e), are defined recursively.
* Let 𝒫^* be a partition with part_val(𝒫^*)=Φ.
* For all e∈ E(G/𝒫^*), set ℓ^*(e) := 1/Φ.
* For each S∈𝒫^*, recurse on the subgraph G[S].
Greedy Tree-Packing.
Thorup <cit.> showed that a greedy tree-packing approximates the ideal packing well. In particular, a greedy tree-packing with |𝒯| ≥ 6λ log m/ε^2 trees satisfies
|ℓ^𝒯(e)-ℓ^*(e)| ≤ ε/λ,
for all e∈ E.
Now he continues to show that if we pack Ω(λ^7 log m) greedy trees, we pack at least one tree that crosses some min-cut only once. This argument heavily relies on <Ref> with ε set to λ^{-3}.
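For intuition, here is a minimal static sketch of greedy tree-packing: the i-th tree is a minimum spanning tree where the weight of an edge is its current load, computed with Kruskal's algorithm and union-find. This is an illustration of the definition (ours), not the dynamic algorithm analyzed in this paper; it assumes a connected simple graph on vertices 0..n-1.

from collections import Counter

def greedy_tree_packing(n, edges, num_trees):
    load = Counter()
    trees = []
    for _ in range(num_trees):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        tree = []
        # Kruskal: scan edges in order of increasing load; ties broken by order.
        for e in sorted(edges, key=lambda e: load[e]):
            u, v = e
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append(e)
        for e in tree:
            load[e] += 1
        trees.append(tree)
    return trees, load

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
trees, load = greedy_tree_packing(4, K4, 12)
print(max(load.values()) / 12)  # tends to 1/Phi(K4) = 0.5 as num_trees grows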
Tree-Packing and Min-Cut.
Instead of directly improving upon the bounds above, i.e., showing that one can pack fewer trees, we investigate the relationship between tree-packing and min-cut more closely.
theoremthmCutExistence
Let G be an unweighted, undirected (multi-)graph, and let 𝒯 be a tree-packing on G. Suppose |𝒯|=Ω(λ^3 log m). Then at least one of the following holds:
* Some T∈𝒯 1-respects a min-cut of G; or
* Some trivial cut in G/{e∈ E : ℓ^𝒯(e) < 2/λ - 3/(8λ^2)} corresponds to a min-cut of G.
This result is the main technical ingredient for <Ref>, which also contains novel dynamic routines for maintaining both parts.
Number of Greedy Trees.
First of all, we can show that packing many greedy trees is actually necessary to satisfy <Ref> (up to a log m factor).
theoremTPlowerbound
Let G be an unweighted, undirected (multi-)graph. In general, a greedy tree-packing needs |𝒯|=Ω(λ/ε^{3/2}) trees to satisfy
|ℓ^𝒯(e)-ℓ^*(e)| ≤ ε/λ,
for all e∈ E, whenever ε^{-1} = O(n^{1/3}).
A Smaller Tree-Packing.
For a general tree-packing, clearly λ/ε is a lower bound on the number of trees needed to satisfy <Ref>. We show that there always exists a tree-packing attaining this.
theoremTPexistence
Let G be an unweighted, undirected (multi-)graph. There exists a tree-packing with |𝒯|=Θ(λ/ε) trees that satisfies
|ℓ^𝒯(e)-ℓ^*(e)| ≤ ε/λ,
for all e∈ E.
Although a greedy tree-packing is easy to compute and maintain, this indicates that investigating other tree-packings might be worthwhile.
Further Related Work.
Tree-packing can be formulated as a (packing) LP. One way of solving LPs is via the Multiplicative Weights Update (MWU) method, see e.g., <cit.>. In particular, this method needs at most O(ρ log m/ε^2) iterations for a packing/covering LP with width ρ <cit.>. For tree-packing, we have width ρ=Θ(λ).
The MWU method and greedy tree-packing are strongly related. In particular, Harb, Quanrud, and Chekuri <cit.> note that if one uses (a particular version of) the MWU method then the resulting algorithm would also be greedy tree-packing. Chekuri, Quanrud, and Xu <cit.> conjecture that greedy tree-packing is in fact equivalent to approximating the LP via the standard MWU method.
General packing LPs have an iteration count lower bound of Ω(ρ log m/ε^2) iterations <cit.>. It is unclear whether this also holds for the special case of tree-packing. We think it is likely that our lower bound, <Ref>, can be extended with a factor log m. However, we think that the ε dependency might be optimal. In other words, we conjecture that the MWU method for tree-packing requires Θ(λ log m/ε^{3/2}) iterations.
We note that in both greedy tree-packing and in the MWU method, there is still freedom in the update step: in greedy tree-packing there are often multiple minimum spanning trees (e.g., the first tree can be any spanning tree) and in the MWU method we can select multiple constraints. So, although we do not expect every greedy tree-packing to perform better, we might still be able to show that a specific greedy tree-packing beats the general lower bound. For example, selecting a minimum spanning tree at random.
Tree-packing is also used for minimum k-cut <cit.>. It is an interesting open question whether our techniques extend to this setting.
§.§ Technical Overview
Dynamic Graph Algorithms.
Before we give an overview of our techniques, we recall the model we are working in. The goal is to maintain a data structure for a changing graph, that maintains the solution to some problem, e.g., the value of the min-cut or the arboricity. The input graph undergoes a series of updates, which are either edge insertions or edge deletions. If the sequence of updates is fixed from the start, we say we have an oblivious adversary. If the sequence of updates can be based upon the algorithm or the state of the data structure, we say we have an adaptive adversary. We say the algorithm has amortized update time equal to t, if it spends O(σ t) time for a series of σ updates. We say the algorithm has worst-case update time equal to t if it spends at most t time after every update.
In some cases, the data structure does not store the solution to the problem explicitly, but it can be retrieved by a query, e.g., in case of the edges of a min-cut. In this case we state the required time to answer such a query.
§.§.§ Min-Cut
To obtain our dynamic min-cut result, <Ref>, we have three technical contributions. The first is showing <Ref>. Our algorithm then consists of maintaining such a tree-packing, its 1-respecting cuts, and the corresponding trivial cuts. Our second contribution is showing how to maintain such trivial cuts, and the third is a new technique to decrease the recourse of tree-packing.
Min-Cut and Tree-Packing.
First, let us sketch the proof of <Ref>. The main observation that allows one to show that some tree 1-respects a min-cut is that if some min-cut C has
∑_e ∈ C ℓ^𝒯(e) < 2,
then it must be 1-respected by some tree.
Indeed, then the average number of times a tree in 𝒯 crosses C is
1/|𝒯| ∑_e ∈ C L^𝒯(e) = ∑_e ∈ C ℓ^𝒯(e) < 2,
so some tree must 1-respect C.
Hence, if ∑_e ∈ C ℓ^*(e) is small enough, i.e., far enough below 2, then the tree-packing does not need to be very large to also concentrate the sum ∑_e ∈ C ℓ^𝒯(e) below 2.
Hence, we can restrict ourselves to the case where every min-cut C has ∑_e ∈ Cℓ^*(e) ≈ 2.
In this case, one can show that in fact every edge e participating in a min-cut has ℓ^*(e) ≥ a̅ ≈ 2/λ for some appropriately chosen a̅.
This in turn means that for a different value â only slightly smaller than a̅, the graph G/{e ∈ E: ℓ^*(e) < â} must contain trivial min-cuts.
To show this we assume this is not the case for contradiction, and then count the edges of the graph in two different ways: once using the ℓ^* values and once by summing the degrees.
The first way of counting implies that G/{e ∈ E: ℓ^*(e) < â} contains roughly â^{-1} |V(G/{e ∈ E: ℓ^*(e) < â})| ≈ (λ/2) |V(G/{e ∈ E: ℓ^*(e) < â})| edges, and the second way of counting shows via the Hand-Shaking Lemma that G/{e ∈ E: ℓ^*(e) < â} contains at least ((λ+1)/2) |V(G/{e ∈ E: ℓ^*(e) < â})| edges – a contradiction.
The rest of the proof then boils down to showing that for an arbitrary trivial min-cut X of G/{e ∈ E: ℓ^*(e) < â}, Φ_G[X] is sufficiently large.
Indeed, this implies that we do not need to pack too many trees in order to concentrate the ℓ^𝒯 values of the edges in a min-cut far above the ℓ^𝒯 values of edges in G[X].
In particular, one can ensure that all edges e in a min-cut have ℓ^𝒯(e) > â and all edges in G[X] have ℓ^𝒯(e) < â.
This then implies that X is a trivial min-cut of G/{e ∈ E: ℓ^𝒯(e) < â}.
To show this, we lower bound the value of Φ_G[X] via the objective value of an optimization problem, which encodes the size of every trivial cut in the partition 𝒫 realizing Φ_G[X].
The key property needed is that no min-cut edges are in E(G[X]) and so very few trivial cuts of G[X]/𝒫 have size smaller than λ + 1.
This allows us to bound the objective value of the optimization problem significantly above λ/2, which in turn implies that all ℓ^* values of edges in G[X] are significantly below 2/λ, thus yielding the required property. For more details, see <Ref>.
Maintaining Trivial Cuts.
Second, we describe how we maintain the trivial cuts efficiently.
The key challenge in maintaining the size of the trivial cuts, is that we do not know how to maintain an explicit representation of G/{e ∈ E: ℓ^*(e) < â}.
The difficult part is that dynamically maintaining contractions and un-contractions explicitly can be very time consuming, as we might need to look at many edges in order to assign them the correct endpoints after the operation is performed.
To circumvent this issue, we maintain a weaker, implicit representation of G/{e ∈ E: ℓ^*(e) < â}.
Instead of maintaining the graph explicitly, we only maintain the vertices of G/{e ∈ E: ℓ^𝒯(e) < â} as well as their degrees.
This weaker representation turns out to be much easier to maintain.
We maintain a dynamic connectivity data structure on the graph induced by all edges e with ℓ^𝒯(e) < â.
The connected components of this graph correspond exactly to the vertices of G/{e ∈ E: ℓ^𝒯(e) < â}.
In order to calculate the degree of these vertices, we maintain the number of edges e with ℓ^𝒯(e) ≥ â incident to each vertex v ∈ V(G).
Since the connectivity data structure supplies us with a connectivity witness in the form of a spanning forest, we can maintain a spanning tree of each connected component as a top tree <cit.>.
By weighting each vertex v ∈ V(G) by the number of incident edges e with ℓ^𝒯(e) ≥ â, we can use the top trees to dynamically maintain the sum of vertex weights of each tree.
Since loops are counted twice, the sum of vertex weights in a spanning tree corresponds exactly to the degree of the corresponding vertex in G/{e ∈ E: ℓ^𝒯(e) < â}.
By using a min-heap, we can then maintain the minimum degree of G/{e ∈ E: ℓ^𝒯(e) < â}, which can only over-estimate the size of the corresponding trivial cut.
To report the min-cut, we can then retrieve the cut by searching for each vertex with weight strictly greater than 0 in only O(log m) time, and then reporting each edge with ℓ^𝒯(e) ≥ â incident to the vertex in O(log m) time per edge. For more details, see <Ref>.
Recourse in Tree-Packing.
Third, we turn to the tree-packing. We note that the most expensive step in our algorithm is to maintain 1-respecting cuts on |𝒯|=Θ(λ_max^3 log m) minimum spanning trees. Every time such a spanning tree changes, we need to update its 1-respecting cuts in Õ(√(n)) time. Therefore, it is important to bound the recourse in these spanning trees: the number of changes to the tree-packing due to an edge update. Naively, we have O(|𝒯|^2)=O(λ_max^6 log^2 m) recourse (see e.g., <cit.>); to see this, consider an edge deletion. This edge is contained in at most |𝒯| trees. In each of these trees it needs to be replaced, which leads to a weight increase for the replacement edge. This change needs to be propagated in all following trees, so it can lead to a series of |𝒯| changes. We show that we can get away with a recourse of O(λ_max^5 log^2 m). See <Ref> for a formal statement. This improvement is independent of the improvement from <Ref> in the number of trees we need to pack, and might have applications in other instances where a greedy tree-packing is used.
The first observation is that if the graph has min-cut λ, then an edge will be in roughly |𝒯|/λ trees, not in all |𝒯|. If λ=λ_max throughout the update sequence, then this gives the result. However, λ can become arbitrarily small. To mitigate this fact, we keep log λ_max copies of the tree-packing, 𝒯_1, 𝒯_2, …, where 𝒯_i corresponds to the case where the min-cut value is in [2^i, 2^{i+1}). If λ≥ 2^i, then we have 𝒯_i as a normal tree-packing on G=(V,E). However, if λ< 2^i, then 𝒯_i is a tree-packing on (V,E∪ E_virtual), where E_virtual is some set of virtual edges ensuring that the min-cut stays large. These edges get added as λ shrinks and deleted as λ grows.
When using this global view of the connectivity, we obtain an algorithm with O(λ_max^5 log^2 m) amortized recourse. However, by inspecting the ℓ^𝒯 values, we have a much more refined view of the connectivity. With a careful analysis, this means we can obtain the same bound on the worst-case recourse. This comes down to deleting a virtual edge when it is no longer needed, and not waiting until the global min-cut has increased. We refer to <Ref> for the details.
§.§.§ Arboricity
We revisit the relationship between ideal tree-packing and the partition values. We recall that <cit.>
Φ_G := max_𝒯 pack_val(𝒯) = min_𝒫 part_val(𝒫),
which Thorup <cit.> used to recursively define the ideal relative loads ℓ^*. By a close inspection of these definitions, we can show that
α(G) = 1/min_{e∈E} ℓ^*(e).
This means that we can simply estimate α(G) by (min_e ℓ^𝒯(e))^{-1}, with ℓ^𝒯 some good approximation of ℓ^*. We recall that by packing |𝒯|=Θ(λ log m/ε^2) greedy spanning trees, we have that
|ℓ^𝒯(e)-ℓ^*(e)| ≤ ε/λ,
for all e∈ E <cit.>.
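A small sketch of the resulting estimator (ours, reusing the illustrative greedy_tree_packing routine sketched in the introduction; the example graph, a triangle with a pendant edge, has fractional arboricity 3/2):

# alpha(G) is estimated by num_trees / min_e L(e), i.e., (min_e l(e))^{-1}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
trees, load = greedy_tree_packing(4, edges, 60)
print(60 / min(load[e] for e in edges))  # approximately 1.5 = alpha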
By maintaining this many greedy spanning trees through dynamic minimum spanning tree algorithms, we obtain update time ∼α_max^4, see <Ref>. Similar to the tree-packing for min-cut, we can alter the graph. This gives an artificially high min-cut of λ=Θ(α), which both decreases the number of trees we need to pack and decreases the recourse (and hence the update time) to ∼α_max, see <Ref>. It is not a direct application of the same result, since there are even disconnected graphs with high arboricity. We make an adaptation such that we can leverage a high min-cut anyway.
Simple Graphs.
First, we consider simple graphs, to obtain a result independent of α_max. Suppose S⊆ V satisfies |E(S)|/(|S|-1)=α. Since in simple graphs we have |E(S)|≤ |S|(|S|-1), we see that
|S| = |S|(|S|-1)/(|S|-1) ≥ |E(S)|/(|S|-1) = α.
This means that for large values of α=Ω(1/ε), we have that 1/|S| ≈ 1/(|S|-1), hence we have ρ = max_{S⊆V} |E(S)|/|S| ≈ max_{S⊆V} |E(S)|/(|S|-1) = α. We combine our algorithm for bounded α_max, <Ref>, with an efficient algorithm to compute a (1+ε)-approximation of the density ρ <cit.> to obtain <Ref>. See <Ref> for more details.
Multi-Graphs.
For multi-graphs, the same argument does not hold: even in subgraphs of size |S|=2 we can have many parallel edges, so the density is only a 2-approximation of the arboricity, even for large values of α. Instead, we use a sampling approach to reduce the case of large α to small α. The idea is simple (see also e.g., <cit.>): if we sample every edge with probability Θ(log m/(ε^2 α)), we should obtain a graph with arboricity Θ(log m/ε^2). By maintaining log m copies with guesses for α=2,4,8,…, we can find the arboricity. We describe in <Ref> how this gives an algorithm against an oblivious adversary.
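A sketch of this reduction for the oblivious setting (ours; the constant C, the choice of ε, and the per-copy estimator are placeholder assumptions, not the tuned parameters of the actual algorithm):

# For each guess alpha_hat = 2^i we keep every edge independently with
# probability p = min(1, C*log(m) / (eps^2 * alpha_hat)), so the sampled copy
# has arboricity Theta(log m / eps^2) whenever alpha = Theta(alpha_hat).
import math, random

def sampled_copies(edges, m, eps=0.5, C=8):
    copies = []
    i = 0
    while 2 ** i <= m:
        p = min(1.0, C * math.log(max(m, 2)) / (eps ** 2 * 2 ** i))
        copies.append([e for e in edges if random.random() < p])
        i += 1
    return copies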
However, an adaptive adversary poses a problem: such an adversary can, for example, delete sampled edges to skew the outcome. Of course it is too costly to resample all edges after every update. A first idea is to only resample the edges incident to an updated edge. Although this would give us the needed probabilistic guarantee, it has a high update time: a vertex can have degree in the sampled graph as big as n/α. To combat this, we work with ownership of edges: each edge belongs to one of its endpoints. When an edge is updated, we resample all edges of its owner. When we assign arbitrary owners, this does not give us any guarantees on the degree yet. We remark that ownership can be seen as an orientation of the edge, where the edge is oriented away from the owner. Now we can guarantee that each vertex owns at most (1+ε)α edges by using an out-orientation algorithm <cit.>.
For more details, see <Ref>.
§.§.§ Lower Bound for Greedy Tree-Packing
To show that a greedy tree-packing needs Ω(λ/ε^{3/2}) trees to satisfy |ℓ^𝒯(e)-ℓ^*(e)| ≤ ε/λ (<Ref>), we give a family of graphs, together with a tree-packing, such that if we pack o(λ/ε^{3/2}) trees, then |ℓ^𝒯(e)-ℓ^*(e)| > ε/λ for some edge e∈ E. First, we restrict to the case λ=Φ=2. We give a graph that is the union of two spanning trees, see <Ref>. The vertical string in the graph consists of k vertices, the circular part of the remaining n-k vertices. We show that we can over-pack edges at the top of the vertical string (edge a in <Ref>), at the cost of under-packing edges at the bottom of the vertical string (edge b in <Ref>). The loads of two neighboring edges can differ by at most 1, since the packing is greedy. We show that by packing Θ(k^3) trees, we can get the optimal difference of Θ(k) between the highest and lowest edge. Since both of them are supposed to have a value of ℓ^*(a)=ℓ^*(b)=1/Φ, we obtain an error |ℓ^𝒯(a)-ℓ^*(a)| > Θ(k/k^3)=Θ(1/k^2). By setting k=Ω(ε^{-1/2}), we obtain the result.
We generalize this result for any even λ, by copying every edge in the above construction λ/2 times and packing trees on each copy in parallel. For more details, we refer to <Ref>.
§.§.§ Existence of Small Greedy Tree-Packing
Our last technical contribution is <Ref>, showing that there exists a small tree-packing. We first consider the case that ℓ^*(e)=1/Φ for every edge e∈ E. The proof is rather simple, and based on Kaiser's <cit.> elegant proof of the tree-packing theorem. This is the theorem initially proved by Tutte <cit.> and Nash-Williams <cit.> that shows that Φ (in our notation) is well defined. Another phrasing is as follows: a graph G contains k pairwise disjoint spanning trees if and only if for every partition of V(G), the graph G/ has at least k(||-1) edges. We generalize this to k trees + 1 forest and show that extending this forest to an arbitrary spanning tree gives the required packing.
Next, we use the ideal load decomposition to generalize this to any graph. The trees in our packing are simply unions of the trees on each component. For more details, see <Ref>.
§ MIN-CUT
In this section, we reexamine the relation between min-cut and tree-packing.
The main technical contribution is the following theorem.
*
Thorup <cit.> showed that if |𝒯|=Ω(λ^7 log^3 m), then some T∈𝒯 1-respects a min-cut. With this new result, we significantly decrease the number of trees we need to pack, leading to a significant speed-up.
This section is organized as follows. In <Ref>, we give a proof of <Ref>. In <Ref>, we show how to estimate the trivial cuts of part <ref> in <Ref> efficiently. In <Ref>, we show how to decrease the recourse in the tree-packing, which leads to a faster running time. In <Ref>, we provide our dynamic min-cut algorithm for bounded λ. And finally, in <Ref>, we provide our dynamic min-cut algorithm for general λ.
Preliminaries.
Before we move on to the proof of <Ref>, we first make the theorem statement more precise, and we make the notation more concise.
For some set of edges E'⊆ E, we denote G/E' for the graph where we contract all edges in E'. Suppose uv∈ E∖ E' and u and v are contracted in G/E', then we keep uv as a self-loop in G/E'. Such edges count twice towards the degree of the corresponding vertex in G/E'.
Let E^*_{∘a} := {e∈ E : ℓ^*(e) ∘ a} and E^𝒯_{∘a} := {e∈ E : ℓ^𝒯(e) ∘ a} for ∘∈{≥, >, ≤, <, =}. Further let G_a := G/E^𝒯_{<a}. Then in our new notation, part <ref> of <Ref> corresponds to a trivial cut in G_a=G/E^𝒯_{<a} for a = 2/λ - 3/(8λ^2).
Let 𝒞 ⊆ E denote the set of all edges that are contained in at least one min-cut. And let a̅ denote the largest value such that E^*_{≥a̅} ⊇ 𝒞.
We recall the definition of Φ.
We have
Φ_G := max_𝒯 pack_val(𝒯) = min_𝒫 part_val(𝒫).
We will repeatedly use the following simple lemma concerning Φ and the min-cut λ. We include a proof for completeness.
λ/2 < Φ≤λ.
We recall that Φ = min_𝒫 |E(G/𝒫)|/(|𝒫|-1), which immediately gives that
Φ ≤ min_{𝒫={A,B}} |E(A,B)|/(2-1) = λ.
Now consider an arbitrary partition 𝒫. We note that 2|E(G/𝒫)|/|𝒫| is the average size of a trivial cut in G/𝒫. Since each trivial cut corresponds to a cut in G, we get 2|E(G/𝒫)|/|𝒫| ≥ λ. As this holds for every partition, and in particular for one attaining the minimum, we see
Φ ≥ min_𝒫 |E(G/𝒫)|/(|𝒫|-1) > min_𝒫 |E(G/𝒫)|/|𝒫| ≥ λ/2.
We have the following lemma showing that a sufficiently large greedy tree-packing approximates the ideal packing, introduced in <Ref>, well.
A greedy tree-packing with |𝒯| ≥ 6λ log m/η^2 trees, for η<2, satisfies
|ℓ^𝒯(e)-ℓ^*(e)| ≤ η/λ,
for all e∈ E.
§.§ A Proof of Theorem <ref>
Let us first briefly discuss the intuition behind the proof. We start by using reasoning similar to arguments appearing in Thorup <cit.> to show that if some edge e' ∈ 𝒞 has ℓ^*(e') small enough, then any min-cut containing e' will be crossed once by some tree, even if we only pack a relatively small number of trees. This follows from the observation that if
∑_e ∈ C ℓ^𝒯(e) < 2
for some cut C, then some tree crosses C at most once. Indeed, the average number of times C is crossed by a tree is given by
1/|𝒯| ∑_e ∈ C L^𝒯(e) = ∑_e ∈ C ℓ^𝒯(e)
and at least one tree will cross C at most the average number of times. Hence, if ℓ^*(e') is small enough, i.e., far enough below 2/λ, then it is sufficient to pack O(λ^3 log m) trees in order to concentrate ∑_e ∈ C ℓ^𝒯(e) below 2. It is important to note here that in order to get this dependency on λ one has to use the fact that many of the edges in C will have ℓ^* values sufficiently far below 2/λ.
From here on, our argument goes in a completely different direction compared to those of Thorup. The starting point for the second part of <Ref> is that we may assume a̅ is sufficiently close to 2/λ.
We first use this to show that for an appropriate value â only slightly smaller than a̅, the graph G/E^*_{<â} must contain trivial min-cuts. To show this, we may assume this is not the case for contradiction, and then count the edges of the graph in two different ways: once using the ℓ^* values and once by summing the degrees. This then yields a contradiction since â^{-1} is sufficiently smaller than (λ+1)/2.
Next we exploit the fact that if ℓ^*(e) < a, then e is not in any min-cut by construction. We focus on an arbitrary trivial min-cut X of G/E^*_<â. The remaining part of the proof then boils down to showing that Φ_G[X] is sufficiently large.
To do so, we formulate an optimization problem which lower bounds the value Φ_G[X] can take, and then show that the solution to the optimization problem is large enough. To do so, we heavily exploit that no edge in G[X] belongs to a min-cut, and so only very few trivial cuts of any partition of G[X] can deviate from having degree at least λ + 1.
As outlined above, we begin by showing that we must have some 1-respecting cut if a̅ is sufficiently far below 2/λ.
If a̅ ≤ 2/λ - 1/(cλ^2) + 1/(γλ^2) with γ ≥ 4c, then <ref> holds.
By assumption there exists some e' ∈ 𝒞 such that ℓ^*(e') = a̅. Now pick any min-cut C = (A,B) containing e' and fix it for the remainder of the proof.
We will first show that |E^*_{=a̅} ∩ C| ≥ λ/2. Indeed, consider the graph G/E^*_{<a̅}. Since C separates the endpoints of e' (or C-e' is a smaller cut), it must be that C separates the endpoints of e' in G[E^*_{=a̅}]. By applying <Ref> twice, we see that Φ_{(G/E^*_{<a̅})[E^*_{=a̅}]} = a̅^{-1} ≥ λ/2, and consequently that the smallest cut in (G/E^*_{<a̅})[E^*_{=a̅}] separating the endpoints of e' has size at least λ/2. Hence, it must be that |E^*_{=a̅} ∩ C| ≥ λ/2.
Following the earlier discussion, we now want to argue that
∑_e ∈ C ℓ^𝒯(e) < 2.
This follows from the fact that we pack |𝒯| = Ω(λ log m/ε^2) trees, and so we achieve concentration ε < 3/(8cλ), and thus we can write:
∑_e ∈ C ℓ^𝒯(e) ≤ ∑_e ∈ C ℓ^*(e) + λ·ε/λ ≤ λ·(2/λ) - (λ/2)(1/(cλ^2) - 1/(γλ^2)) + ε ≤ 2 - 3/(8cλ) + ε
< 2,
which per the previous discussion implies that <ref> holds.
Following the proof outline in the beginning of this section, we next show that if a̅ is greater than or equal to the cut-off value of 2/λ - 1/(cλ^2) + 1/(γλ^2), then it is sufficiently close to 2/λ for <ref> to hold.
We begin by showing that G/E^*_{<a̅} must contain at least one trivial min-cut that corresponds to a min-cut of G.
Suppose that γ ≥ 4c > 4. Then G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)} contains at least one trivial min-cut that corresponds to a min-cut of G.
As hinted to earlier, we will assume that this is not the case for contradiction. We will then count the edges of G' = G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)} twice to reach the contradiction.
To do so, let 𝒫_0 be the partition of G with part_val(𝒫_0)=Φ. Next, we recurse on the subgraphs G[S] for each S∈𝒫_0. We write the total decomposition as 𝒫_i the partition on G[S_i], where each S_i∈ 𝒫_j for some j<i, for i∈ℐ for some index set ℐ. We are only interested in parts of the decomposition, and so we will only recurse on S_i∈ 𝒫_j if Φ_{G[S_i]} ≤ (2/λ - 1/(cλ^2) + 1/(γλ^2))^{-1}.
Now we have the containment:
E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)} ⊆ ⋃_{i∈ℐ} E(G[S_i]/𝒫_i).
Indeed, any edge in E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)} will be included since by <Ref>, we have that Φ_{G[S_i]} = (max_{e ∈ G[S_i]} ℓ^*(e))^{-1}, and so the recursion will not terminate until every edge in E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)} belongs to some subgraph.
In fact, the containment is an equality.
Next, we have
|E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)}| ≤ ∑_{i∈ℐ} |E(G[S_i]/𝒫_i)|
≤ ∑_{i∈ℐ} Φ_i (|V_i| - 1)
≤ (2/λ - 1/(cλ^2) + 1/(γλ^2))^{-1} ∑_{i∈ℐ} (|V_i| - 1),
where Φ_i is the partition value of 𝒫_i, and V_i = V(G[S_i]/𝒫_i).
Next, we show that
∑_i∈ℐ (|V_i| -1) ≤ |V(G')|-1.
We divide ℐ into different recursion depths: let ℐ_j be such that for i∈ℐ_j we have S_i at recursion depth j (i.e., it lies inside j partitions). For ease of notation, we let a singleton cluster {v} again partition into {v}. This has no effect on the sum over |V_i|-1, but ensures that all vertices make it to the lowest depth. Let r be the total recursion depth.
With this notation, we can write for 0 ≤ j < r
∑_{i ∈ ℐ_j} |V_i| = |ℐ_{j+1}|,
since we must recurse exactly once on each vertex in V_i in the next level.
Let the partition 𝒫 of G be obtained by letting S ∈ 𝒫 if and only if S ∈ 𝒫_i for some i ∈ ℐ_r.
Then, similarly to above, the introduced notation implies that
∑_{i ∈ ℐ_r} |V_i| = |V(G/𝒫)|.
We have E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)} ⊆ E(G/𝒫) by the containment above.
Thus, we note that any edge contracted in G/ is also contracted in G', and therefore we have |V(G/)| ≤ |V(G')|. Indeed, contracting an edge can never increase the size of the vertex set.
Finally, we can write
∑_{i∈ℐ} (|V_i| - 1) = ∑_{j=0}^r ∑_{i∈ℐ_j} (|V_i| - 1)
= ∑_{j=0}^r (∑_{i∈ℐ_j} |V_i|) - |ℐ_j|
= |V(G/𝒫)| - |ℐ_r| + ∑_{j=0}^{r-1} |ℐ_{j+1}| - |ℐ_j|
= |V(G/𝒫)| - |ℐ_0|
≤ |V(G')| - 1.
Now we upper bound as follows:
|E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)}|
≤ (2/λ - 1/(cλ^2) + 1/(γλ^2))^{-1} ∑_{i∈ℐ} (|V_i| - 1)
≤ (2/λ - 1/(cλ^2))^{-1} (|V(G')| - 1)
≤ (λ/2)·(1 - 1/(2cλ))^{-1} (|V(G')| - 1)
≤ (λ/2)(1 + 1/(cλ))(|V(G')| - 1)
≤ (λ/2 + 1/(2c))(|V(G')| - 1),
where we used the well-known fact that 1/(1-x) ≤ 1+2x if 0 ≤ x ≤ 1/2.
To reach a contradiction, we let 𝒫 be the final partition obtained in the above procedure
and apply the classical hand-shaking lemma to obtain:
|E^*_{≥ 2/λ - 1/(cλ^2) + 1/(γλ^2)}| = (1/2) ∑_{X ∈ 𝒫} d_{G'}(X) ≥ ((λ+1)/2) |V(G')|,
which is a contradiction for c > 1. Here we used the assumption that G' contains no trivial cut of size λ. Note that a cut of G' corresponds to a cut in G containing the exact same edges.
Finally, we show that if one were to recurse the idealized load decomposition on a partition in G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}, then the recursive call on a trivial cut will produce a new set S for which Φ_{G[S]} is sufficiently far above λ/2 to establish a separation.
Suppose a̅ ≥ 2/λ - 1/(cλ^2) + 1/(γλ^2), let 𝒫 be the partition induced in G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}, and let S ∈ 𝒫 be any trivial min-cut in this partition. Then we have
Φ_{G[S]} ≥ λ/2 + 1/2.
We will exploit the fact that no edge belonging to E(G[S]) is contained in any min-cut. We consider the minimum partition of G[S], call it 𝒫_S, and let G̃ = G[S]/𝒫_S. Then it must be the case that any vertex of degree d ≤ λ in G̃ is incident to at least λ+1-d edges in G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}. Indeed, otherwise an edge of E(G[S]) belongs to some min-cut in G, which contradicts the choice of a̅.
It now follows that any choice of G̃ with k vertices must induce a feasible solution to the following optimization problem with objective value Φ_G̃:
min_{α, β} 1/(2(k-1)) ∑_{i=1}^k (λ+1-α_i+β_i)
subject to ∀i: α_i, β_i ≥ 0,
∑_{i=1}^k α_i ≤ λ.
Indeed, for an arbitrary numbering of the vertices of G̃, we pick the unique feasible choice of α and β that satisfies d(v_i) = λ+1-α_i+β_i and minimizes |α_i|+|β_i| for all i. It then follows by the hand-shaking lemma and the definition of the partition value that:
Φ_G̃ = |E(G̃)|/(|V(G̃)|-1) = (∑_{i=1}^k d(v_i))/(2(k-1)) = 1/(2(k-1)) ∑_{i=1}^k (λ+1-α_i+β_i).
Furthermore, by the above, we know that ∑_{v ∈ V(G̃)} max{0, λ+1-d(v)} ≤ λ, since any vertex with degree d ≤ λ in G̃ must be incident to at least λ+1-d edges in G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}. By assumption, we know that S is incident to at most λ such edges.
In particular, we find that the solution to the above optimization problem lower bounds the value of Φ_G̃.
Since we are optimizing over a closed and bounded subset of ℝ^{2k}, it follows by the Extreme Value Theorem that an optimal solution exists. To find it, simply note that if any β_i > 0, we can reduce the objective value by setting β_i = 0, and if ∑_{i=1}^k α_i < λ, we can reduce the objective value by increasing any α_i. Hence, we can assume that any solution to the optimization problem has β = 0 and ∑_{i=1}^k α_i = λ. Now simple calculations yield the result:
Φ_{G[S]} ≥ 1/(2(k-1)) ∑_{i=1}^k (λ+1-α_i) = (k(λ+1) - λ)/(2(k-1)) = ((k-1)λ + k)/(2(k-1)) = λ/2 + k/(2(k-1)) > λ/2 + 1/2,
for any integer k ≥ 2.
If a̅ ≤ 2/λ - 1/(cλ^2) + 1/(γλ^2), then <ref> holds by Lemma <ref>. Otherwise, if a̅ ≥ 2/λ - 1/(cλ^2) + 1/(γλ^2), we will show that <ref> holds. To this end, let S ∈ V(G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}) be any trivial min-cut. Note that under the current assumptions and mild assumptions on c and γ (which we will verify later), Lemma <ref> guarantees the existence of such a trivial cut. If we can show that for any tree-packing with |𝒯| = Ω(λ^3 log m), we have ℓ^𝒯(e) < 2/λ - 1/(cλ^2) for all e ∈ E(G[S]) and ℓ^𝒯(e) > 2/λ - 1/(cλ^2) for all e ∈ E(G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}), it then follows that S is exactly a trivial cut of G_{2/λ - 1/(cλ^2)}, thus establishing <ref>.
Note that S is a connected component of (V, E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}), and therefore computing the ℓ^* values on G[S] corresponds to computing the ℓ^* values on G from the last depth where S was not contracted.
Indeed, here the recursive definition will recurse on exactly G[S].
Therefore we have
max_{e ∈ G[S]} ℓ^*(e) = 1/Φ_{G[S]}.
Now by this inequality, <Ref>, and <Ref> we have
max_{e ∈ G[S]} ℓ^*(e) ≤ Φ_{G[S]}^{-1} ≤ (λ/2 + 1/2)^{-1} = (2/λ)·1/(1+1/λ) ≤ (2/λ)(1 - 1/(2λ)) = 2/λ - 1/λ^2,
since for x ≤ 1 we have 1/(1+x) ≤ 1 - x/2. Now, any greedy tree-packing containing at least |𝒯| ≥ 6λ log m/ε^2 trees will by <Ref> have that:
max_{e ∈ G[S]} ℓ^𝒯(e) ≤ ε/λ + max_{e ∈ G[S]} ℓ^*(e) ≤ 2/λ - 1/λ^2 + ε/λ.
Hence, if 1/λ^2 - ε/λ > 1/(cλ^2), then we have what we wanted to show. Therefore, we only require ε/λ < (c-1)/(cλ^2) for this part to hold.
For any e ∈ E(G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}), we similarly have for any greedy tree-packing containing at least |𝒯| ≥ 6λ log m/ε^2 trees that by <Ref>
min_{e ∈ E(G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)})} ℓ^𝒯(e) ≥ -ε/λ + min_{e ∈ E(G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)})} ℓ^*(e) ≥ 2/λ - 1/(cλ^2) + 1/(γλ^2) - ε/λ.
Hence for this part we simply require ε/λ < 1/(γλ^2).
So for all of the above arguments to be valid, we require that γ ≥ 4c, c > 1, ε/λ < (c-1)/(cλ^2), ε/λ < 1/(γλ^2), and that ε < 3/(8cλ).
Now the first condition together with a choice of c ≥ 3/2 immediately implies that (c-1)/(cλ^2) ≥ 1/(γλ^2), so setting c = 2, γ = 8, and ε = 1/(16λ) works. As argued earlier, we now have that all edges in G[S] are contracted in G_{2/λ-3/(8λ^2)} and that all edges in E(G/E^*_{<2/λ - 1/(cλ^2) + 1/(γλ^2)}) are not contracted in G_{2/λ-3/(8λ^2)}, so in particular S represents the required trivial min-cut in case
<ref>, when a̅ ≥ 2/λ - 1/(cλ^2) + 1/(γλ^2). This concludes the proof of <Ref> for any greedy tree-packing with at least |𝒯| ≥ 6·16^2·λ^3 log m = 1536·λ^3 log m trees.
§.§ Estimating Trivial Cuts in G_a
In this section, we design a data structure that takes a parameter a as input and is able to report an estimate of the size of the smallest trivial cut in G_a.
Depending on the current loads, G_a might only contain one vertex, in which case the data structure returns ∞.
Formally, we show the following lemma.
Let G be a dynamic unweighted, undirected (multi-)graph, and assume we have access to a black-box dynamic algorithm that maintains a tree-packing 𝒯 on G with loads ℓ^𝒯(·).
Suppose an update to G results in P(n) loads crossing a during the update of 𝒯.
Then there is a deterministic data structure which reports a value μ such that:
* If |V(G_a)| = 1, then μ = ∞.
* Else μ = min_X ∈ V(G_a) d_G_a(X).
The algorithm has O((1+P(n))log m) amortized update time, or O((1+P(n))√(n)) worst-case update time.
Both data structures can list the edges incident to some vertex X ∈ V(G_a) with d_G_a(X) = μ in O(log m) worst-case time per edge.
Here loops are listed twice.
We briefly note that if G has min-cut λ at some time t and a = 2/λ - 3/(8λ^2), then the algorithm will return μ_t = λ at time t.
Indeed, when |V(G_a)| ≥ 2, the set of edges incident to a single vertex of G_a are super-sets of cuts in G, and so the degrees of vertices in G_a upper bounds the size of cuts in G.
Hence μ_t ≥λ.
However, under these assumptions it follows by <Ref> that some trivial cut of G_a, say around X ∈ G_a, will be a min-cut of G. Since the proof of <Ref> shows that all edges e ∈ E(G[X]) have ℓ^𝒯(e) < a, it follows that X has no loops in G_a, and we conclude that μ_t ≤ λ.
Note also that for any choice of a such that |V(G_a)| ≥ 2, we have μ_t≥λ_t since the edges incident to any vertex of G_a form a super-set of a cut, as argued above.
We briefly discuss the intuition behind the proof.
Since it can be expensive to support contractions and un-contractions, we will not maintain an explicit representation of G_a.
Instead, we will maintain the graph Γ = G[E^𝒯_{<a}] explicitly.
For each connected component of Γ, we maintain a spanning tree.
For any connected component C in Γ, we let the external degree of C be the number of endpoints of edges in E^_≥ a belonging to C.
We observe that the external degree of a connected component C corresponds exactly to the degree of the vertex X in G_a represented by C.
Thus we can maintain the degrees of vertices in G_a, by maintaining the external degrees of every component in Γ.
This can be achieved by storing the spanning tree of each component as a top tree. By storing some additional information, the top trees can compute the external degrees exactly.
Finally, we note that we only have to perform updates to Γ or the top trees, whenever some edge is inserted or deleted into G, or whenever the load of some edge crosses a.
Description of data structure.
The data structure updates as follows:
* We maintain Γ as well as a connectivity data structure on Γ:
* After an update to G, we add a newly inserted edge e to Γ if ℓ^𝒯(e) < a after the tree-packing is updated, and we delete a removed edge e from Γ if ℓ^𝒯(e) < a before the tree-packing is updated.
* We remove an edge e from Γ if its load satisfies ℓ^𝒯(e) < a before the tree-packing is updated and ℓ^𝒯(e) ≥ a after the tree-packing is updated.
* We add an edge e to Γ if its load satisfies ℓ^𝒯(e) ≥ a before the tree-packing is updated and ℓ^𝒯(e) < a after the tree-packing is updated.
* For each vertex v ∈ V(G), we maintain the degree of v in G[E^𝒯_{≥a}] as well as the edges from G[E^𝒯_{≥a}] incident to v.
* For each connected component C in Γ, we maintain the sum
S(C) := ∑_{v ∈ C} d_{G[E^𝒯_{≥a}]}(v),
as well as enough information to list all vertices v ∈ C with d_{G[E^𝒯_{≥a}]}(v) > 0.
* We maintain a min-heap containing S(C) for every connected component C of Γ.
We can report the value μ by setting μ equal to the minimum element of the min-heap. In the case where the min-heap only contains one element, we set μ = ∞.
We can report all edges by first listing the vertices v ∈ C with d_{G[E^𝒯_{≥a}]}(v) > 0, and then listing the edges incident to v in G[E^𝒯_{≥a}].
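Before detailing the efficient implementation, the following from-scratch illustration of this reporting logic may help (ours; it recomputes Γ's components with union-find instead of maintaining them via dynamic connectivity, top trees, and a min-heap, so it is a semantic sketch rather than the data structure of the lemma):

from collections import defaultdict

def trivial_cut_estimate(vertices, edges, loads, a):
    # Gamma = G[E_{<a}]: union-find over the light edges.
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    heavy = []
    for e in edges:
        u, v = e
        if loads[e] < a:
            parent[find(u)] = find(v)
        else:
            heavy.append(e)
    # External degree S(C): endpoints of heavy edges per component of Gamma.
    # A heavy loop inside a component contributes twice, matching d_{G_a}.
    S = defaultdict(int)
    for u, v in heavy:
        S[find(u)] += 1
        S[find(v)] += 1
    comps = {find(v) for v in vertices}
    if len(comps) == 1:
        return float('inf')
    return min(S[c] for c in comps)

# Example: K4 with all relative loads 1/2 and threshold a = 0.4: nothing is
# contracted, every vertex is its own component, and mu = 3 = min-cut of K4.
from itertools import combinations
K4 = list(combinations(range(4), 2))
print(trivial_cut_estimate(range(4), K4, {e: 0.5 for e in K4}, 0.4))  # 3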
Implementation.
Next we describe how to efficiently implement the above steps.
We need access to different data structures, which we list below.
To maintain the connected components of Γ, we will use the following data structures:
There exists a deterministic fully-dynamic algorithm that maintains a spanning forest of a dynamic graph in O(√(n)) worst-case update time and O(1) worst-case recourse.
There exists a deterministic fully-dynamic algorithm that maintains a spanning forest of a dynamic graph in O(log^2 m) amortized update time and O(1) worst-case recourse.
In addition to these data structures, we maintain each spanning tree as a top tree <cit.>.
We use the following interface:
For a dynamic forest, one can maintain a top tree of height O(log m) supporting the operations link(u,v), cut(u,v), and expose(v) in O(log m) worst-case update time, using only O(1) calls to create and destroy and O(log m) calls to merge and split.
The following lemma is then a routine application of top trees.
Given a dynamic forest F where each vertex v has an integer weight w(v), there is a data structure supporting the following operations in O(log m) worst-case time per operation:
* link(u,v): add an edge between u and v in F.
* cut(u,v): delete the edge between u and v in F.
* inc(v): increment the weight of v by 1.
* dec(v): decrement the weight of v by 1.
* weightsum(T): return the sum of vertex weights in the tree T.
Furthermore, for each tree T, one can list the vertices of T with weight >0 in O(log m) time per vertex.
The first two operations are already supported directly by the top tree.
To support the other three, we store as additional information for a cluster A the sum of weights of non-boundary vertices, WeightSum(A).
This information can be maintained under a C = merge(A,B) operation by updating
WeightSum(C) = WeightSum(A) + WeightSum(B) + ∑_{v ∈ (∂A ∪ ∂B) ∖ ∂C} w(v),
with only O(1) overhead.
In order to implement inc(v) and dec(v), one can call expose(v), thus turning v into a boundary vertex. Then the weight of v can be updated without invalidating any information in the top tree.
In order to answer a weightsum(T) query, we let R be the root cluster of the top tree representing T, and return
WeightSum(R) + ∑_{v ∈ ∂R} w(v).
Finally, we maintain as additional information for each cluster the number of non-boundary vertices of weight >0.
This can be done analogously to above.
We can then find all vertices of weight >0 as follows: if the current cluster is a leaf cluster, return all non-boundary vertices of weight >0. Otherwise, report all non-boundary vertices that are boundary vertices for both children, and recurse on all children containing at least one vertex of weight >0.
In the special case where the current cluster is the root cluster, we also return all boundary vertices of weight >0.
Since the top tree has height O(log m), and each recursion takes constant time, we can report each such vertex in O(log m) time per vertex.
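For reference, here is a naive implementation of the interface from the lemma above (ours; each query costs O(n) instead of the O(log m) achieved by top trees, and it is meant only to pin down the intended semantics):

from collections import defaultdict

class NaiveWeightedForest:
    def __init__(self, vertices):
        self.adj = defaultdict(set)
        self.w = {v: 0 for v in vertices}

    def link(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def cut(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def inc(self, v):
        self.w[v] += 1

    def dec(self, v):
        self.w[v] -= 1

    def _component(self, v):
        # DFS over the tree containing v.
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for y in self.adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    def weightsum(self, v):
        """Sum of vertex weights in the tree containing v."""
        return sum(self.w[x] for x in self._component(v))

    def positive_vertices(self, v):
        """Vertices of weight > 0 in the tree containing v."""
        return [x for x in self._component(v) if self.w[x] > 0]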
We now implement the data structure as follows. We use the connectivity data structure from <Ref> to maintain Γ (or <Ref> for worst-case guarantees).
Both algorithms maintain a spanning forest of Γ which we additionally store using the data structure from <Ref>.
We maintain the invariant that w(v) = d_{G[E^𝒯_{≥a}]}(v).
Then, we immediately support the required operations.
To maintain the invariant, we note that whenever an edge uv leaves Γ, we increment the weights of u and v by 1, and whenever an edge previously in G[E^𝒯_{≥a}] enters Γ, we decrement the weights of u and v by 1.
New or old edges resulting from an update to G are handled similarly.
Finally, we can implement the dynamic min-heap in O(log m) update and query time using any standard balanced binary search tree.
Correctness. Correctness follows readily from <Ref>, <Ref>, and <Ref>.
We need only verify that if Γ contains at least two vertices, then
μ = min_X ∈ V(G_a) d_G_a(X), but as noted earlier this follows from the fact that
∑_{v ∈ C} d_{G[E^𝒯_{≥a}]}(v) = d_{G_a}(X),
if X is the vertex in G_a represented by the connected component C in Γ.
Analysis. Each operation on G is supported in O(T(n) + log m) time, where T(n) is the time needed for the connectivity data structure.
Each time the load of some edge crosses the threshold a, we have to perform O(1) deletions and insertions to Γ, and update O(1) weights of vertices.
The first type of operation is supported in O(T(n) + log m) time by <Ref>, <Ref>, and <Ref>.
Indeed, the only additional operations we need account for are the updates to the min-heaps. Each insertion or deletion to Γ forces at most O(1) changes to the set of connected components, so this can be supported in O(log m) time.
The second type of operation is supported in O(log m) time directly by <Ref>.
§.§ Recourse in Tree-Packing
The goal of this section is to show that we can bound the recourse to ∼λ_max^5 when maintaining ∼λ_max^3 greedy spanning trees.
A standard argument gives that we can maintain || greedy spanning trees with O(||^2) recourse (see e.g., <cit.>), but with a more careful analysis, we can shave a factor λ_max. We start with the following lemma, which showcases the main idea.
Let |𝒯| ≥ λ log m. We can maintain |𝒯| greedy spanning trees with a recourse of O(|𝒯|^2/λ).
First, let us consider a deletion of an edge e. We observe that it can appear in at most a limited number of trees since by <Ref> (with η=1)
ℓ^𝒯(e) ≤ ℓ^*(e) + 1/λ ≤ max_{e'} ℓ^*(e') + 1/λ ≤ 1/Φ + 1/λ ≤ 3/λ,
where the third inequality follows by <Ref>, and the last inequality holds by <Ref>. So we have L^𝒯(e) = |𝒯| ℓ^𝒯(e) = O(|𝒯|/λ). In each tree where e appears, we need to do an update that can lead to a chain of |𝒯| updates. Hence we get total recourse O(|𝒯|^2/λ).
This lemma by itself does not help us yet: we are maintaining ∼λ_max^3 trees, and λ_max^6/λ can be as big as λ_max^6, when λ becomes small.
The trick is to maintain O(log λ_max) different tree-packings 𝒯_i, each of a different size.
Now, we only need the tree-packing 𝒯_i to be correct when λ_t ∈ [2^i, 2^{i+1}), and so some of the packings can be much smaller. For i such that λ_t ≥ 2^{i+1}, the tree-packing can just be seen as a truncated packing from a larger i and will be correct as well. For larger i, i.e., when λ < 2^i, we can add fake input edges to keep the min-cut larger and hence the update time smaller. Hereto, we first observe that in <Ref> we do not actually use the min-cut, but we have a more local argument: we need L^{𝒯_i}(e) to be small enough. We will exploit this by locally adding edges, such that all L^{𝒯_i}(e) stay small.
We can maintain a tree-packing of size Θ(λ_t^3 log m) in the following manner.
We maintain O(log λ_max) tree-packings 𝒯_i of various graphs, of size |𝒯_i| = Θ(2^{3i} log m), with O(2^{5i} log^2 m) worst-case recourse in tree-packing 𝒯_i, using any minimum spanning tree algorithm with O(1) worst-case recourse to maintain the individual trees. We then have that 𝒯_i, for i such that λ_t ∈ [2^i, 2^{i+1}), is a tree-packing on G of the required size.
We will maintain tree-packings 𝒯_1, 𝒯_2, …, 𝒯_{log λ_max}, where |𝒯_i| = O(2^{3i} log m). We only need the packing 𝒯_i with λ_t ∈ [2^i, 2^{i+1}). The other 𝒯_i will not necessarily correspond to a tree-packing of G, but we maintain all of them simultaneously with the stated recourse.
First, note that for 𝒯_i with 2^i ≤ λ_t the result directly holds by <Ref>. In fact, we do not need to compute these separately, but we can just use the truncated tree-packing below the cut-off value. For the rest of the proof, we focus on 𝒯_i with 2^i > λ_t.
Next, we consider an initialization step for the tree-packings 𝒯_i with 2^i > λ_0. For each of these, we do not start with only G, but we add a path graph where each edge appears with multiplicity 2^{i-4}. We call these added edges virtual. Next, we compute a tree-packing for the resulting graph. Clearly this graph now has min-cut at least 2^{i-4}. More precisely, it also means that L^{𝒯_i}(e) ≤ |𝒯_i|/2^{i-4} = 16|𝒯_i|/2^i, hence the recourse of updates to this graph is now bounded by O(2^{5i} log^2 m). Next, we delete all virtual edges with L^{𝒯_i}(e) ≤ 8|𝒯_i|/2^i. These edges were not necessary, in the sense that G already guarantees the right connectivity.
Claim 1. Deleting a virtual edge e cannot lead to L^_i(e')>16|_i|/2^i for any e'.
Proof. The last time e was not picked, there must be some path separating the end points of e where each edge has load at most 8|_i|/2^i.
In the worst case we add the entire load of 8|_i|/2^i to this path. Since all edges had load at most 8|_i|/2^i, so they now have load at most 16|_i|/2^i.
Note that the load of e will be distributed along this path before being added to an edge e' with L^_i(e')=16|_i|/2^i, as any tree previously containing e missed at least one edge from this path.
This initialization time is O(n·2^{i-4}·2^{5i}log^2 m). This can be divided in a worst-case manner over insertions in the graph, carrying out at most O(2^{5i}log^2 m) operations per insertion. The reason is that to reach λ_t∈ [2^i,2^{i+1}), we need at least n·2^i edges in G.
Now after every update, whenever a virtual edge satisfies L^_i(e)≤ 8|_i|/2^i, we delete it. Note that by Claim 1, this does not lead to any edge e' with L^_i(e')>16|_i|/2^i. So we do not get a chain of insertions and deletions.
However, one edge insertion can lead to multiple deletions. In fact, it can lead to 2^i deletions, which would take more than the stated worst-case update time. Instead of actually deleting it right away, we add each virtual edge that needs to be deleted to a deletion queue and delete one edge from the queue after any update. Note that these delayed deletions only have a positive effect on the min-cut (and hence update time). Moreover, if a virtual edge in the deletion queue later increases its load above the threshold, then it is removed from the deletion queue.
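The bookkeeping behind this queue discipline is mechanical but easy to get wrong, so we illustrate it with a minimal Python sketch; the `load` oracle and `delete_edge` callback are hypothetical stand-ins for the surrounding tree-packing data structure, not part of the algorithm's actual interface.

```python
class VirtualEdgeQueue:
    """Sketch of the delayed-deletion discipline for virtual edges of one
    packing T_i. `load` is a hypothetical oracle returning the current load
    L(e); `threshold` plays the role of 8|T_i|/2^i; `delete_edge` performs
    the actual deletion in the graph/packing."""

    def __init__(self, load, threshold, delete_edge):
        self.load = load
        self.threshold = threshold
        self.delete_edge = delete_edge
        self.queue = set()  # virtual edges awaiting deletion

    def after_update(self, virtual_edges):
        # 1. Edges whose load rose back above the threshold leave the queue.
        self.queue = {e for e in self.queue if self.load(e) <= self.threshold}
        # 2. Enqueue every virtual edge that became deletable.
        for e in virtual_edges:
            if self.load(e) <= self.threshold:
                self.queue.add(e)
        # 3. Perform exactly one delayed deletion per update.
        if self.queue:
            self.delete_edge(self.queue.pop())
```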
Further, if an edge deletion e leads to L^_i(e')>16|_i|/2^i for any e', then we keep e as a virtual edge. This guarantees that at any point in time we have L^_i(e)≤ 16|_i|/2^i for all edges e, hence our recourse stays bounded. Finally, we show correctness.
Claim 2. If λ_t∈ [2^i,2^i+1), then all virtual edges have been placed in the deletion queue.
Proof. Since λ_t∈ [2^i,2^i+1), we have
L^_i(e) ≤ |_i|ℓ^*(e)+ |_i|/λ_t ≤ |_i|max_e'ℓ^*(e')+|_i|/λ_t
≤ |_i|/Φ+|_i|/λ_t ≤ 3|_i|/λ_t≤ 3|_i|/2^i≤ 8|_i|/2^i.
Next, we need to show that the deletion queue will be empty by this time. This shows that when λ_t∈ [2^i,2^i+1), the tree-packing is correct. In Claim 3, we show that if there are k virtual edges e with L^_i(e)≥ 8|_i|/2^i, then we need to insert at least k edges to obtain λ∈ [2^i,2^i+1). Hence whenever an edge gets moved to the deletion queue, we know that enough insertions will be performed later, at which time we can carry out the deletion.
Claim 3. Suppose there are k virtual edges e with L^_i(e)≥ 8|_i|/2^i, then we need to insert at least k edges to obtain λ∈ [2^i,2^i+1).
Proof. Consider the idealized load packing of this graph (including the virtual edges). We write L^*(e):=|_i|ℓ^*(e).
We observe that for each of our k edges
L^*(e) ≥ L^_i(e) -|_i|/λ≥ 8|_i|/λ -|_i|/λ≥ 4|_i|/λ,
where the first inequality uses <Ref> (with η=1), and the second inequality uses that λ≥ 2^i.
When λ≥ 2^i, we have that
2^i ≤ λ ≤ 2Φ = 2·min_𝒫 part_val(𝒫),
where the second inequality holds by <Ref> and the equality holds by the definition of Φ. Recall that 𝒫^* denotes the partition attaining this minimum and defining the ideal loads, so we have L^*(e)≤ |_i|/Φ≤ 2|_i|/λ for any e.
To obtain L^*(e)≤ 2|_i|/λ from L^*(e) ≥ 4|_i|/λ in the idealized load packing, we know we need to at least double the number of edges across the partition ^* on this level. Hence, we need to insert at least k edges.
Whenever the deletion queue is empty, we can directly apply Claim 3 to the current virtual edges. If it is nonempty, we delete at least one edge from the queue, which decreases the number of virtual edges. We note that the number of virtual edges can increase by 1 when a deleted edge becomes virtual; in that case, there will be an additional insertion across the partition 𝒫^* on this level.
§.§ The Algorithm for Bounded λ
In this section we give the algorithm that finds the min-cut if this is below some threshold value λ_max, which appears as a parameter in the update time. Intuitively, the structure is as follows. If we know λ, then:
* We maintain a greedy tree-packing of size Θ(λ^3 log m).
* We maintain the minimum size of all 1-respecting cuts of each tree in .
* Maintain the minimal trivial cut of G_a, for a=(2λ-3)/(8λ^2).
Then by <Ref>, one of the two gives the right answer. We note that we can do Step <ref> efficiently by the following lemma.
There exists a deterministic dynamic algorithm that, given an unweighted, undirected (multi-)graph G with a dynamic spanning tree T, maintains a min-cut that 1-respects the tree in Õ(√(m)) worst-case update time.
It can return the edges of the cut in O(log m) time per edge.
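For intuition, the quantity maintained by this lemma can be computed statically by brute force: a 1-respecting cut is determined by a single tree edge, and its value is the number of graph edges with exactly one endpoint in the corresponding subtree. A minimal O(n·m)-time Python sketch of that static computation follows (a reference point only; it does not implement the Õ(√m) dynamic structure):

```python
from collections import defaultdict

def min_one_respecting_cut(n, graph_edges, tree_edges):
    """Brute-force the smallest cut of `graph_edges` that 1-respects the
    spanning tree `tree_edges` (vertices 0..n-1)."""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    best = float("inf")
    for a, b in tree_edges:
        # Vertices on a's side of the tree after removing tree edge (a, b).
        side, stack = {a}, [a]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in side and {x, y} != {a, b}:
                    side.add(y)
                    stack.append(y)
        # Cut value: graph edges with exactly one endpoint inside `side`.
        cut = sum((u in side) != (v in side) for u, v in graph_edges)
        best = min(best, cut)
    return best
```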
Both the size of the tree-packing in Step <ref>, and the graph G_a in Step <ref>, need λ, the value of the min-cut.
However, we do not know λ – as our goal is to compute it – and it changes over time. In the proof we will show how to maintain these structures for all values of λ simultaneously without incurring too much overhead.
*
We first note that w.l.o.g. we can assume that we have O(λ_max n) edges, using the connectivity sparsifier from Nagamochi and Ibaraki <cit.>. We use the dynamic version of the sparsifier from Eppstein et al. <cit.>, which guarantees that each update to original graph leads to at most 2 updates to the sparsified graph[We abuse notation slightly and denote by G the graph the rest of the algorithm is run on. I.e., G is the original graph when no sparsifier is applied, and the sparsified graph when a sparsifier is applied.] in O(λ_max√(n)) update time.
Algorithm.
* Maintain logλ_max tree-packings _1, _2, …, with |_i|=Θ(2^3ilog m).
* For each packing _i, maintain the minimum size of all cuts 1-respecting at least one tree in the packing.
* For each μ∈{1, 2, …, λ_max}, maintain the minimum trivial cut of G_{a_μ} by <Ref>, for a_μ:=(2μ-3)/(8μ^2). Here we use the tree-packing _i such that μ∈ [2^i,2^{i+1}) for G_{a_μ}:= G/{e∈ E : ℓ^_i(e) < a_μ}.
* Maintain the minimum of all cuts from <ref> and <ref>.
We output the result of Step <ref>.
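Step 4 itself is simple aggregation; a minimal Python sketch of the glue, where `packing_cuts[i]` and `trivial_cuts[mu]` are hypothetical handles exposing the minima maintained in Steps 2 and 3:

```python
def min_cut_estimate(packing_cuts, trivial_cuts):
    """Step 4 glue: every maintained value corresponds to (or over-estimates)
    some cut of G, so their global minimum is a safe combined output. In the
    real data structure this minimum lives in a heap instead of being rebuilt."""
    candidates = [h.current_min() for h in packing_cuts]
    candidates += [h.current_min() for h in trivial_cuts.values()]
    return min(candidates)
```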
Correctness.
First, we note that the minimum of Step <ref> cannot be below the min-cut. Each of the values from Step <ref> corresponds to a cut in the graph, hence can only over-estimate the min-cut. Each value from Step <ref> is a cut in a contracted version of G, hence corresponds to a cut[Technically, there are also self-loops in G_a, hence the value of a cut in G_a can be bigger than the value of the corresponding cut in G. However, this can only lead to further over-estimation.] in G or equals ∞, and thus can only over-estimate the min-cut of G.
Now let i such that λ_t∈ [2^i,2^i+1). By <Ref>, we know that if 1-respecting cuts of the tree-packing _i do not give a minimum-cut, then for μ=λ_t, we have some trivial cut in G_a_μ which is a minimum-cut. Hence either Step <ref> or Step <ref> outputs the value of the min-cut.
Update time. We analyze the update time of each of the four parts of the algorithm.
* We can maintain tree-packings with R(n)=O(λ_max^5log^2 m) worst-case recourse per tree-packing, by <Ref>.
For the update time, we need to maintain minimum spanning trees, where we have R(n) updates to these trees. We can maintain a minimum spanning tree with Õ(√(n)) worst-case update time <cit.>, so this takes Õ(λ_max^5√(n)) worst-case update time in total.
* Maintaining the minimal 1-respecting cut takes Õ(√(m))=Õ(√(λ_max n)) update time by <Ref>. Since we have R(n) updates for these trees, this takes Õ(λ_max^{11/2}√(n)) worst-case update time.
* Each of the tree-packings _i decides which edge loads cross the value a_μ for the trivial cuts corresponding to μ∈ [2^i,2^{i+1}). Let P_μ(n) denote the number of edges crossing a_μ in <Ref>. We remark that ∑_μ∈ [2^i,2^{i+1}) P_μ(n)=R(n), since any update to the tree-packing can change ℓ^_i(e) for one edge. This change is ℓ^_i(e) = L^_i(e)/|_i| → (L^_i(e)± 1)/|_i|, so a change of size 1/|_i|=O(1/μ^3) for each μ∈ [2^i,2^{i+1}). Since this change is so small, it can cross at most one value a_μ = (2μ-3)/(8μ^2) for a given i.
Since <Ref> takes O((1+P_μ(n))√(n)) worst-case update time, we get Õ(λ_max^5√(n)) worst-case update time, when summing over all i.
* We can implement this step with a min-heap, which has O(λ_maxlog m) worst-case update time.
We conclude that we have Õ(λ_max^11/2√(n)) worst-case update time.
Returning the cut edges.
If the cut is from Step <ref>, we can return the edges of the cut with O(log m) time per edge by <Ref>. The caveat is that these edges are only correct if we did not apply the connectivity sparsifier. This changes the factor √(λ_maxn) to √(m) in the update time.
If the cut is from Step <ref>, we can return the edges with O(log m) time per edge by <Ref>.
§.§ General λ
We use the following result of Goranci, Henzinger, Nanongkai, Saranurak, Thorup, and Wulff-Nilsen <cit.>.
There exists a deterministic fully dynamic algorithm that, given a simple, unweighted, undirected graph G = (V, E) with m edges and a parameter ϕ∈ (0, 1), maintains a min-cut estimate μ(G) in Õ(1/ϕ^3 +ϕ m) amortized time per edge insertion or deletion. If ϕ≥ 240/δ, where δ is the minimum degree, then the min-cut estimate is correct, i.e., μ(G) = λ(G).
It can return the edges of the cut in O(λlog m) time with the same update time.
In <cit.>, they balance this with <cit.> to obtain an update time of Õ(τ^{29/2}√(n)+m/τ)=Õ(m^{29/31}n^{1/31})=Õ(m^{1-1/31}) for τ=m^{2/31}n^{-1/31}. When we balance it with our <Ref>, we obtain the following result. We note that since <Ref> only holds for simple graphs, our combined result is also restricted to that case.
*
Let τ be a parameter to be determined later. We run the algorithm of <Ref> with λ_max=τ+1, denoted by Algorithm 𝒜, and the algorithm of <Ref> with ϕ=240/τ, denoted by Algorithm ℬ.
We note that Algorithm 𝒜 is correct when λ≤τ +1 and Algorithm ℬ is correct when λ≥τ (using that λ≤δ).
We decide which output to take as follows.
* If the current value of λ is at most τ, we use Algorithm 𝒜 for the next update.
* If the current value of λ is at least τ+1, we use Algorithm ℬ for the next update.
* Using an efficient static min-cut computation, e.g., <cit.>, we can compute the initial value and decide whether we start with Algorithm 𝒜 or ℬ.
Now correctness directly follows from the guarantees on Algorithm 𝒜 and ℬ.
Update time. We have amortized update time Õ(τ^{11/2}√(n)+τ^3+m/τ)=O(τ^{11/2}√(n) +m/τ). Balancing this gives τ=m^{2/13}n^{-1/13}, hence we have update time Õ(m^{11/13}n^{1/13})=Õ(m^{1-1/13}).
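For concreteness, the balancing in the last step reads
τ^{11/2}√(n) = m/τ ⟺ τ^{13/2} = m/√(n) ⟺ τ = m^{2/13}n^{-1/13},
at which point both terms equal m/τ = m^{11/13}n^{1/13} ≤ m^{12/13} = m^{1-1/13}, using n = O(m).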
To optimize for large values of m, we can use the connectivity sparsifier of Nagamochi and Ibaraki <cit.> again, which brings m down to min{m,λ n}. If we use this in the regime λ≤τ', we obtain running time
Õ((τ'n)^11/13n^1/13)+ Õ(τ'^3 + m/τ').
For different choices of τ' we can get the following running times (up to polylogarithmic factors):
* m^11/24n^1/2+m^13/8n^-3/2;
* n^9/7+mn^-3/7;
* m^11/52n^12/13+m^3/4.
We remark that the last one is always smaller than n^11/26+12/13+n^3/2=n^35/26+n^3/2=O(n^1.5), since G is simple.
Returning the cut edges.
We take the version of <Ref> that can return the cut edges, which has Õ(λ_max^5√(m)) amortized update time, and do not apply the connectivity sparsifier. This gives an amortized update time of Õ(m^1-1/12).
§ ARBORICITY
In this section, we first show a structural result: a relation between the fractional arboricity and the ideal relative loads of a tree-packing, see <Ref>. We then give a deterministic dynamic algorithm, that is efficient for small values of α, see <Ref>. For simple graphs, we can combine this with the state of the art for densest subgraph approximation, see <Ref>. For multi-graphs, we need to downsample the high α regime to low α regime. While this is relatively straight-forward against an oblivious adversary (<Ref>), it is much more involved against an adaptive adversary (<Ref>).
§.§ Structural Result
The idea is to show that (min_e ℓ^*(e))^{-1} = α(G). Then we can estimate α(G) by simply taking α_est = (min_e ℓ(e))^{-1}, with ℓ some good approximation of ℓ^*. For the integral arboricity, ⌈α⌉, a similar result already follows from <cit.>. We achieve this more nuanced result by using the language of ideal load decompositions from <cit.>.
In particular, we use the following observation. We provide a proof for completeness.
[<cit.>]
For each S∈𝒫^*, we have Φ_{G[S]}≥Φ.
We will prove this by contradiction. Suppose 𝒫 is a partition of S such that part_val(𝒫) < Φ. Let 𝒫'= (𝒫^*∖{S})∪𝒫 be a partition of V; then we see that
part_val(𝒫') = |E(G/𝒫')|/(|𝒫'|-1) = (|E(G/𝒫^*)|+|E(G[S]/𝒫)|)/(|𝒫^*|-1+|𝒫|-1) < (Φ(|𝒫^*|-1)+Φ(|𝒫|-1))/(|𝒫^*|-1+|𝒫|-1)= Φ,
using that |E(G/𝒫^*)|/(|𝒫^*|-1)= Φ and |E(G[S]/𝒫)|/(|𝒫|-1)=part_val(𝒫) < Φ.
Now we can show the main result.
Let G=(V,E) be an undirected, unweighted (multi-)graph, then
α(G) = 1/min_e∈ Eℓ^*(e).
We start by showing α(G) ≥ (min_e∈ Eℓ^*(e))^-1. We note that (min_e∈ Eℓ^*(e))^-1= max_i Φ_i, ranging over all partitions appearing in the ideal partitioning. Denote the graph where this is maximized by G^*, and its vertex set by X.
By <Ref>, it does not partition further; hence the minimum partition 𝒫^* of G^* is the trivial one, so G^*/𝒫^*=G^* and G^*=G[X]. Now we can conclude
(min_{e∈E} ℓ^*(e))^{-1} = max_i Φ_i = |E(G^*/𝒫^*)|/(|𝒫^*|-1) = |E(G[X])|/(|X|-1)
≤ max_{Y⊆V} |E(G[Y])|/(|Y|-1) = α(G).
Next, we show that α(G) ≤ (min_e∈ Eℓ^*(e))^-1.
Let Y⊆ V be any subset. We will show that
|E(Y)|/|Y|-1≤ (min_e∈ Eℓ^*(e))^-1.
Hence α(G) = max_{Y⊆V} |E(Y)|/(|Y|-1) ≤ (min_{e∈E} ℓ^*(e))^{-1}.
The arguments we use here are very similar to the proof of <Ref>, where we present them in more detail. We inspect the definition of ideal relative loads: let 𝒫_0 be the packing for G with pack_val(𝒫_0)=Φ. Now we recurse on the subgraphs G[S] for each S∈𝒫_0. We write the total decomposition as (𝒫_i)_{i∈ℐ} for some index set ℐ, with 𝒫_i the packing on G[S_i], where each S_i∈𝒫_j for some j< i. Now we have the disjoint union:
E(G) = ⋃_{i∈ℐ} E(G[S_i]/𝒫_i).
So in particular we have
|E(Y)| = ∑_{i∈ℐ} |E(Y) ∩ E(G[S_i]/𝒫_i)| ≤ ∑_{i∈ℐ} Φ_i (|V_{Y,i}|-1),
where Φ_i is the packing value of 𝒫_i, and V_{Y,i}=Y∩ V(G[S_i]/𝒫_i). Note that the inequality follows from the fact that if E(Y) ∩ E(G[S_i]/𝒫_i) contained more than Φ_i (|V_{Y,i}|-1) edges, then one could contract all such edges to get a new partition with at most |V_{Y,i}|-1 fewer vertices but more than Φ_i (|V_{Y,i}|-1) fewer edges, which contradicts the choice of Φ_i.
Next, we prove that
∑_i∈ℐ (|V_Y,i| -1) = |Y|-1.
We divide ℐ in different depths of recursion: let ℐ_j be such that for i∈ℐ_j we have S_i∩ Y in recursion depth j (i.e., it is in j partitions before)[For ease of notation, we let a singleton cluster {v} again partition into {v}. This has no effect on the sum over |V_Y,i| -1, but ensures that all vertices make it to the lowest depth.]. Let r be the total recursion depth. Then we can write
∑_{i∈ℐ} (|V_{Y,i}|-1) = ∑_{j=0}^{r} ∑_{i∈ℐ_j} (|V_{Y,i}|-1)
= ∑_{j=0}^{r} (∑_{i∈ℐ_j} |V_{Y,i}|) - |ℐ_j|
= ∑_{j=1}^{r} |ℐ_j| - |ℐ_{j-1}|
= |ℐ_r| - |ℐ_0|
= |Y|-1.
Now we use this as follows:
|E(Y)|/(|Y|-1) ≤ (∑_{i∈ℐ} Φ_i (|V_{Y,i}|-1))/(|Y|-1)
≤ (max_{i∈ℐ} Φ_i)·(∑_{i∈ℐ} (|V_{Y,i}|-1))/(|Y|-1)
= max_{i∈ℐ} Φ_i
= (min_{e∈E} ℓ^*(e))^{-1},
where the last equality holds by definition of ℓ^*.
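As a concrete sanity check, take G = K_4: the trivial partition attains Φ = 6/(4-1) = 2 (partitions into two or three classes give values 4 and 3), the ideal decomposition stops there, and every edge gets ℓ^*(e) = 1/2. The theorem then gives α(K_4) = (min_e ℓ^*(e))^{-1} = 2, matching the direct computation max_{Y⊆V} |E(Y)|/(|Y|-1) = 6/3 = 2.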
Next, we use the fact that a greedy tree-packing approximates an ideal packing well to give an approximation of the fractional arboricity.
A greedy tree-packing with ||≥ 24α^2 log m/ε^2 trees satisfies, for ε∈ (0,1),
|1/min_{e∈E} ℓ^(e) - α| ≤ εα.
The tree-packing contains
24α^2 log m/ε^2 ≥ 6(1+ε)^2 α^2 log m/(λε^2) = 6λ log m/(λε/(α(1+ε)))^2
trees. We note[Here we use that λ≤α. One way to see that is λ≤Φ = 1/max_{e∈E} ℓ^*(e) ≤ 1/min_{e∈E} ℓ^*(e) = α.] that λε/(α(1+ε)) ≤ λε/α ≤ ε < 1, so we can apply <Ref> with η = λε/(α(1+ε)) to obtain
|min_{e∈E} ℓ^(e) - min_{e∈E} ℓ^*(e)| ≤ ε/(α(1+ε)).
Now we see that
1/min_{e∈E} ℓ^(e) - α ≤ 1/(min_{e∈E} ℓ^*(e) - ε/(α(1+ε))) - α
=
1/(1/α - ε/(α(1+ε))) - α
= ((1+ε)/((1+ε)-ε) - 1)·α
= εα.
The proof that α - 1/min_{e∈E} ℓ^(e) ≤ εα is analogous.
§.§ Dynamic Algorithm for Bounded α
In this section, we show how to dynamically maintain the estimate 1min_e∈ Eℓ^(e), giving us the arboricity estimate. We start with a warm-up giving the result almost directly by plugging in a deterministic minimum spanning tree algorithm.
§.§.§ Warm-Up
There exists a deterministic dynamic algorithm that, given an unweighted, undirected (multi-)graph G=(V,E), maintains a (1+ε)-approximation of the fractional arboricity α when α≤α_max in O(α_max^4 log^6 m/ε^4) amortized update time, or a Las Vegas algorithm with O(α_max^4 m^{o(1)}/ε^4) worst-case update time.
By <Ref>, all we need to do is maintain Θ(α_max^2 log m/ε^2) greedy spanning trees, and then maintain the minimum over ℓ^(e).
The latter can simply be done by maintaining a min-heap, which has an update time of O(log m).
For the former, we note that any edge insertion or deletion can lead to overall O((α_max^2 log m/ε^2)^2) updates to the spanning trees <cit.>.
We can use a deterministic dynamic minimum spanning tree algorithm with
* O(log^4 m) amortized update time deterministically <cit.>; or
* m^o(1) worst-case update time (Las Vegas algorithm) <cit.>.
Multiplying these update times with the number of spanning tree updates per insertion/deletion gives the result. Note that both subsume the O(log m) update time for maintaining the min-heap.
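To make the warm-up concrete, the following self-contained Python sketch computes a static greedy tree-packing by repeatedly taking a minimum spanning forest with respect to the current loads, and reads off the estimate 1/min_e ℓ^(e); the dynamic algorithm maintains exactly this invariant incrementally. The graph and the number of trees below are illustrative choices of ours, not prescribed parameters.

```python
import itertools

def greedy_packing_alpha_estimate(n, edges, num_trees):
    """Static sketch of the estimator: pack `num_trees` greedy spanning
    forests (Kruskal on weights = current loads) and return num_trees/min load.
    `edges` is a list of (u, v) pairs on vertices 0..n-1; parallel edges allowed."""
    m = len(edges)
    load = [0] * m  # load[i] = number of trees containing edge i

    def min_spanning_forest():
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        picked = []
        for i in sorted(range(m), key=lambda i: load[i]):  # lightest loads first
            u, v = edges[i]
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                picked.append(i)
        return picked

    for _ in range(num_trees):
        for i in min_spanning_forest():
            load[i] += 1
    return num_trees / max(min(load), 1)  # guard: assumes every edge got packed

def brute_force_alpha(n, edges):
    """Exact fractional arboricity max_S |E(S)|/(|S|-1); exponential time."""
    return max(
        sum(u in S and v in S for u, v in edges) / (len(S) - 1)
        for k in range(2, n + 1)
        for S in map(set, itertools.combinations(range(n), k))
    )

# K_4 has alpha = 6/3 = 2; the greedy estimate converges to it.
K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(brute_force_alpha(4, K4), greedy_packing_alpha_estimate(4, K4, 200))
```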
§.§.§ Recourse in Tree-Packing
Next, we show how we can bound the recourse in the tree-packing to shave a factor α_max. This is similar to <Ref>, but has additional complications: to bound the recourse we need to guarantee a high min-cut, and we need to show we can keep an artificially high min-cut in a graph with certain arboricity. Note that there are even disconnected graphs with linear arboricity, so this is a non-trivial adaptation.
*
By <Ref>, all we need to do is maintain Θ(α^2 log m/(λε^2)) greedy spanning trees, and then maintain the minimum over ℓ^(e). As opposed to <Ref>, we do not do this once with α_max, but keep logα_max copies corresponding to different values of α. The goal is to establish O(||^2/α) recourse instead of the trivial O(||^2) recourse. We do this by utilizing that the recourse is O(||^2/λ), and artificially increasing the min-cut to Θ(α). This simultaneously means we only need to maintain ||=Θ(α log m/ε^2) greedy spanning trees.
To be precise, we maintain tree-packings _1, _2, …, _{logα_max}, where |_i|=O(2^i log m/ε^2). We only need the packing _i when α∈ [2^i,2^{i+1}). If α∉ [2^i,2^{i+1}), we still need the update times to hold, but we do not need the output to be correct.
Each tree-packing _i is the tree-packing of some graph G_i, where we show that α=α(G_i) if α∈ [2^i,2^i+1). As opposed to <Ref>, we do not guarantee the stronger statement that G_i=G in this case.
First, we assume that G_i initially has min-cut λ≥ 2^{i-1}.
This means that initially ℓ^_i(e) ≤ ℓ^*(e)+1/2^{i-1} ≤ 1/Φ_{G_i}+1/2^{i-1} ≤ 2/2^{i-1}+1/2^{i-1}=O(1/2^i), using <Ref> with η=1. Throughout the updates, we keep an edge e as a virtual edge if deleting e would cause any edge e' to reach ℓ^_i(e') ≥ 16/2^i. By the same arguments as in <Ref>, this gives amortized recourse O(|_i|^2/2^i) for _i.
We delete a virtual edge e if ℓ^_i(e)< 8/2^i after some update. This cannot lead to the load of any other edge rising above 16/2^i, see Claim 1 in <Ref>.
Since any virtual edge was inserted as a real edge at some point, we can amortize the cost of this deletion.
Next, we show that the virtual edges do not interfere with the arboricity if 2^i ≤α, in which case 8/2^i ≥ 8/α. We do this with two claims.
Claim 1. If ℓ^_i(e)≥ 8/α then ℓ^*(e) ≥ 4/α.
Proof. By <Ref> (with η=1) we have ℓ^*(e)≥ℓ^_i(e)-1/λ≥ 8/α-4/α = 4/α, where the last step uses λ ≥ 2^{i-1} > α/4.
Claim 2. Let ℓ^*(e) > 1/α and let S^*⊆ V be such that α=|E(S^*)|/(|S^*|-1). Then e ⊄ S^*.
Proof. Consider the partition induced by the last level of the ideal load decomposition 𝒫^*, i.e., the classes of 𝒫^* are the connected components induced by edges f with ℓ^*(f) = 1/α.
Note that by arguments similar to <Ref> and <Ref>, we have 1) that e ∈ E(G/𝒫^*) and 2) that |E(G/𝒫^*)|/|𝒫^*|-1 < α.
Now suppose the classes of 𝒫^* are P_1, …, P_t. W.l.o.g., possibly by renumbering, we can assume that |S^*∩ P_j| ≠∅ for all 1≤ j ≤ i and |S^*∩ P_j| = ∅ for all j > i. If i = 1, we are done, so assume i > 1.
Observe first that the graph induced by P_1, …, P_i in G/𝒫^* contains strictly less than α(i-1) edges.
Indeed, suppose for contradiction this is not so. If i = t then the supposition contradicts 2), and if i < t, then one would need to contract P_1, …, P_i in every level except for the last level of the ideal load decomposition, thus contradicting the choice of ^*.
Finally, we claim that for some s ∈ [i], we must have |E(G[S^*∩ P_s])| > α(|S^*∩ P_s|-1) contradicting that G has arboricity α.
Indeed, observe that
α = |E(S^*)|/(|S^*|-1) ≤ (|E(P_1, P_2, …, P_i)|+∑_{j=1}^i |E(S^*∩ P_j)|)/(|S^*|-1)
= (|E(P_1, P_2, …, P_i)|+∑_{j=1}^i |E(S^*∩ P_j)|)/((i-1)+ ∑_{j=1}^i (|S^*∩ P_j|-1))
< (α(i-1)+∑_{j=1}^i |E(S^*∩ P_j)|)/((i-1) + ∑_{j=1}^i (|S^*∩ P_j|-1)),
where we used that |S^*| = ∑_j = 1^i |S^*∩ P_j| = i + ∑_j = 1^i (|S^*∩ P_j|-1).
Finally, the claim follows by observing that the final inequality is false if |E(S^*∩ P_j)| ≤α(|S^*∩ P_j|-1) for all 1≤ j ≤ i.
Combining the two claims above, we see that any edge with ℓ^_i(e)≥ 8/2^i ≥ 8/α is not part of a subgraph for some choice of S^* achieving the arboricity. So α(G_i) ≤α(G). Since G⊆ G_i, we also have α(G) ≤α(G_i), thus we conclude α(G)=α(G_i). This means that it suffices to approximate the arboricity in G_i.
Now we discuss an initialization step to guarantee that we have λ≥ 2^{i-1}. We do this by vertex insertions. We initialize all our data structures on a graph with n vertices, but no edges. We add an edge only if both endpoints have degree at least 2^{i-1}. Note that this corresponds to inserting a vertex together with its edges once its degree reaches this boundary. This means we insert at most two vertices at any time.
When we insert such a vertex, and it has fewer than 2^{i-1} edges towards the other vertices already present, we add the remainder as virtual edges. This guarantees min-cut λ≥ 2^{i-1} at any time.
Observe that an edge (or its virtual representative) is added and deleted at most twice.
Claim 3. If deg(v)≤ 2^{i-1}, then v∉ S^* for any S^*⊆ V such that 2^i≤α=|E(S^*)|/(|S^*|-1).
Proof. Let S^*⊆ V be such that 2^i≤α=|E(S^*)|/(|S^*|-1), and suppose v∈ S^*. Then
|E(S^*∖{v})|/(|S^*∖{v}|-1)= (|E(S^*)|-|E(S^*,{v})|)/(|S^*|-2)≥(α(|S^*|-1)-2^{i-1})/(|S^*|-2) > α,
since α ≥ 2^i > 2^{i-1}; this contradicts the maximality of α.
Next, we consider the update time: each such vertex insertion takes O(|_i|^2/2^i) amortized update time. We show that we can perform such a vertex insertion in O(|_i|^2) time; since it required 2^{i-1} insertions to reach this degree, this gives the amortized bound.
We consider each of the |_i| trees in the packing and perform all updates to it simultaneously. This consists of the 2^{i-1} insertions, plus O(2^{i-1}·|_i|/2^i)=O(|_i|) recourse from the earlier trees (since each edge ends up in at most O(|_i|/2^i) trees). In total this gives O(|_i|(|_i|+2^{i-1}))=O(|_i|^2) time.
As before, we use a deterministic dynamic minimum spanning tree algorithm with O(log^4 m) amortized update time deterministically <cit.> for maintaining the trees in each packing.
This gives update time O(2^i log^6 m/ε^4) for each _i, hence the update times sum up to O(α_max log^6 m/ε^4) amortized update time.
The worst-case result is obtained in the same way, with the exception that the insertions cannot be amortized over the edge insertions. Hence the worst-case update time is proportional to |_i|^2. Using that we maintain the minimum spanning trees in n^o(1) time <cit.>, we obtain the result.
§.§ Dynamic Result for Simple Graphs
We will use the following approximation for the densest subgraph problem, where we say that ρ̃ is a (1-ε)-approximation of ρ when (1-ε)ρ≤ρ̃≤ρ.[This is equivalent to a (1+ε)- or a (1±ε)-approximation by re-scaling. We use the (1-ε)-version for ease of notation in the proof.]
There exists a deterministic dynamic algorithm that, given an unweighted, undirected (multi-)graph G=(V,E), maintains a (1-ε)-approximation of the density ρ in O(log^3 m/ε^4) amortized update time or O(log^4 m/ε^6) worst-case update time.
We use this result in the high arboricity regime (α≥ 1/ε), where the density is a good approximation of the fractional arboricity for simple graphs.
*
The algorithm is as follows:
* Maintain a fractional arboricity estimate α̃ with α_max=Θ(1/ε) using <Ref>;
* Maintain a (1-ε)-estimate ρ̃ of the densest subgraph using <Ref>;
* If α≤Θ(1/ε), output α̃, else output ρ̃.
By <Ref>, we know that α̃ is a (1+ε)-approximation of the fractional arboricity if α≤α_max. We next show that ρ̃ is a (1+ε)-approximation if α≥ 1/ε. Because these ranges overlap, and an update can change the fractional arboricity by at most 1, we can easily see when we should switch from one estimate to the other.
Note that we always have
ρ̃≤ρ = max_S⊆ V|E(S)|/|S| = |E(S^*)|/|S^*|≤|E(S^*)|/|S^*|-1≤max_S⊆ V|E(S)|/|S|-1= α,
for some S^*⊆ V. Now let S^* be such that
max_S⊆ V|E(S)|/|S|-1=|E(S^*)|/|S^*|-1.
Using that G is simple, we have |E(S^*)| ≤ |S^*|(|S^*|-1)/2, so for α≥ 1/ε
|S^*| ≥ 2|E(S^*)|/(|S^*|-1) = 2α ≥ 1/ε.
We use this to see that
α - ρ = |E(S^*)|/(|S^*|-1) - max_{S⊆V} |E(S)|/|S| ≤ |E(S^*)|/(|S^*|-1) - |E(S^*)|/|S^*|
= |E(S^*)|·(|S^*|-(|S^*|-1))/(|S^*|(|S^*|-1)) = α/|S^*| ≤ εα.
Rearranging gives us
ρ̃ ≥ (1-ε)ρ ≥ (1-ε)^2 α.
Setting ε ← ε/3 gives that ρ̃ is a (1-ε)-approximation of the fractional arboricity α.
Concerning the update time, we need O(log^6 m/ε^5) amortized update time for small α, and O(log^3 m/ε^4) amortized update time for large α.
For the small α regime we have O(m^{o(1)}/ε^6) worst-case update time, and O(log^4 m/ε^6) worst-case update time for the high α regime.
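The inequality α − ρ ≤ α/|S^*| driving this proof can be checked directly on small graphs; a brute-force Python sketch (exponential time, purely illustrative):

```python
import itertools

def density_and_arboricity(n, edges):
    """Brute force rho = max |E(S)|/|S| and alpha = max |E(S)|/(|S|-1)."""
    rho = alpha = 0.0
    for k in range(2, n + 1):
        for S in map(set, itertools.combinations(range(n), k)):
            e_in = sum(u in S and v in S for u, v in edges)
            rho = max(rho, e_in / len(S))
            alpha = max(alpha, e_in / (len(S) - 1))
    return rho, alpha

# On K_10: alpha = 45/9 = 5 and rho = 45/10 = 4.5, so the gap is alpha/|S*| = 0.5.
K10 = [(u, v) for u in range(10) for v in range(u + 1, 10)]
print(density_and_arboricity(10, K10))
```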
§.§ Downsampling for Multi-Graphs
As shown in <Ref>, for multi-graphs, the number of spanning trees we need to pack scales with α_max. In this section, we show how to use a standard sampling technique (see e.g., <cit.>) to get rid of this dependency.
theoremThmArbMulti
There exists a dynamic algorithm that, given an unweighted, undirected multi-graph G=(V,E), maintains a (1+ε)-approximation of the fractional arboricity α in O(log^7 m/ε^6) amortized update time or with O(m^{o(1)}/ε^8) worst-case update time. The algorithm is correct with high probability against an oblivious adversary.
The idea is to maintain log m graphs, denoted by H_i, which are initialized by sampling each edge with probability p_i=24c log m/(2^i ε^2) (for i s.t. p_i<1). Now if α = Θ(2^i), then H_i has fractional arboricity Θ(log m/ε^2). To compute this, on each graph H_i, we run the algorithm of <Ref> with α_max = Θ(log m/ε^2). First, we show that if α∈ [2^{i-1},2^{i+2}), then H_i gives the correct answer. We do not prove anything about the output of the other graphs H_j, but simply disregard their output.
If α = O(log m/ε^2), we just look at G itself. So assume α = Ω(log m/ε^2).
Let i be such that 2^{i-1}≤α < 2^{i+2}.
We show correctness in three parts.
* We show that for a set S⊆ V that satisfies α = |E(S)|/(|S|-1), we have w.h.p. that
1/p_i·|E_{H_i}(S)|/(|S|-1) ≥ (1-ε)α.
* We show that for any S⊆ V we have w.h.p. that
1/p_i·|E_{H_i}(S)|/(|S|-1) ≤ (1+ε)α.
* In H_i, the fractional arboricity is at most α_max= Θ(log m/ε^2).
Part 1. Since we sample each edge with probability p_i, we immediately have that 𝔼[|E_{H_i}(S)|]=p_i|E(S)|. Now by a Chernoff bound we obtain
ℙ[1/p_i·|E_{H_i}(S)|/(|S|-1) < (1-ε)α] = ℙ[|E_{H_i}(S)| < (1-ε)p_iα(|S|-1)]
= ℙ[|E_{H_i}(S)| < (1-ε)p_i|E(S)|]
≤ e^{-p_i|E(S)|ε^2/2}
≤ e^{-(24c log m/(2^i ε^2))·α(|S|-1)·ε^2/2}
≤ m^{-c(|S|+2)},
using that α≥ 2^{i-1}. This allows us to union bound over all sets S of size |S|, O(n^{|S|}) many, all sizes |S|, for n sizes, and m updates.
Part 2.
Also this follows by a Chernoff bound. Here we use the upper bound on the expectation.
ℙ[1/p_i·|E_{H_i}(S)|/(|S|-1) > (1+ε)α] = ℙ[|E_{H_i}(S)| > (1+ε)p_iα(|S|-1)]
≤ e^{-p_iα(|S|-1)ε^2/3}
≤ e^{-(24c log m/(2^i ε^2))·α(|S|-1)·ε^2/3}
≤ m^{-c(|S|+2)},
using that α≥ 2^{i-1}. Again, this allows us to union bound over all sets S of size |S|, O(n^{|S|}) many, all sizes |S|, for n sizes, and m updates.
Part 3.
Since α < 2^{i+2}, we get by Part 2 that the fractional arboricity in H_i is w.h.p. at most p_i·(1+ε)α ≤ (24c log m/(2^i ε^2))·2^{i+3}=O(log m/ε^2).
Note that since we assume the adversary to be oblivious, the probabilistic guarantees from above hold for any graph, in particular for the graph after t updates.
To see which H_i to look at, we use the approximation from before the update:
* If the current estimate of α is at most Θ(log m/ε^2), we consider the estimate on G.
* If the current estimate of α lies in [2^i,2^{i+1}), we use the estimate from H_i for the next update.
* Using an efficient static algorithm, e.g., <cit.>, we can compute an initial approximation to decide with which H_i to start.
Since the estimate of H_i is correct up to a (1+ε) factor, and the arboricity can change by at most 1 per update, we have that if before the update our estimate satisfies 2^i ≤ α̃ < 2^{i+1}, then after the update α ≤ (1+ε)α̃+1 < (1+ε)2^{i+1}+1 ≤ 2^{i+2}, and similarly 2^{i-1} ≤ α. Hence the estimate from H_i is a (1+ε)-approximation.
Update time.
Whenever an edge gets deleted from G, we delete it from H_i, if it appears there. Whenever an edge gets inserted to G, we insert it in H_i with probability p_i.
By simply maintaining the data structures on each H_i, we obtain an algorithm that works against an oblivious adversary. This algorithm has amortized update time O(log^7 m/ε^6) or worst-case update time O(m^{o(1)}/ε^8) for each H_i.
Now we note that p_i+1=p_i/2, so the probability that an update needs to be processed in H_i+1 is half as big as the probability that it needs to processed in H_i. In the first (relevant) H_i, edges are sampled with probability ≤ 2^-1. So we have m/2 updates to this H_i in expectation, and m w.h.p. by a Chernoff bound. Using the same argument for each subsequent H_i, we get m+m/2+m/4+…=2m updates in total w.h.p. Hence running the algorithm for the log m copies has as many updates as for one copy, and we obtain the result.
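A static Python sketch of one level of this scheme (oblivious adversary): subsample G with probability p_i, then rescale the small arboricity of H_i. The exact brute-force computation below stands in for running the bounded-α algorithm of <Ref> on H_i; function names and default parameters are illustrative choices of ours.

```python
import itertools
import math
import random

def alpha_exact(n, edges):
    # Exponential-time fractional arboricity; demo stand-in for the dynamic estimator.
    return max(
        (sum(u in S and v in S for u, v in edges) / (len(S) - 1)
         for k in range(2, n + 1)
         for S in map(set, itertools.combinations(range(n), k))),
        default=0.0,
    )

def downsampled_alpha_estimate(n, edges, i, c=2, eps=0.5, seed=0):
    """Keep each edge with p_i = min(1, 24*c*log(m)/(2^i * eps^2)) and rescale
    the arboricity of the sampled graph H_i by 1/p_i."""
    m = len(edges)
    p = min(1.0, 24 * c * math.log(max(m, 2)) / (2 ** i * eps ** 2))
    rng = random.Random(seed)
    H = [e for e in edges if rng.random() < p]
    return alpha_exact(n, H) / p
```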
§.§ Downsampling Against an Adaptive Adversary
For our algorithm against an adaptive adversary, we use the same set-up as before: again, we have log m sampled graphs H_i for different regimes of α. However, we need to resample more often to fend off adversarial attacks. We first consider a naive way of doing this, and then describe a more involved process.
Naive Resampling.
An adaptive adversary can attack our sampling, for example by deleting our sampled edges. This forces us to introduce some form of resampling. The most straightforward way to do this is, for any inserted or deleted edge uv, to resample all edges adjacent to either u or v. This guarantees that we cannot over- or under-sample the edges of one vertex, and one can show that this is enough to preserve the fractional arboricity. The downside is that this approach is slow: each vertex can have degree up to n, even in a graph with bounded fractional arboricity. Instead, we will assign ownership of each edge to a vertex, such that we can resample more efficiently.
Fancy Resampling.
First we compute out-orientations such that each vertex has at most α(1+) out-edges. To each vertex, we assign its out edges. So every edge is assigned to a vertex, and each vertex has at most α(1+) edges assigned to it. Now upon an update to uv, we recompute the out-orientation, and then resample all out-edges of each vertex for which its set of out-edges changed.
There is one more complication that we address in the next paragraph where we provide the full description of the algorithm: we cannot afford to resample in each H_i. However, where resampling is too costly, it is also unnecessary.
Algorithm Description.
The main algorithm consists of the following steps.
* Maintain out-orientation with maximum out-degree (1+)α.
* Maintain the algorithm of <Ref> with α_max=Θ(log m/ε^4).
* Upon update to uv, i.e., an out-edge of u, do for each i= 1, 2, …, log m:
* Resample for H_i all out-edges of u with probability p_i := 8(c+3)log m/(2^i ε^4), using e.g. <cit.>.
* If this leads to at most Θ(log m/ε^4) changes to H_i, process them as updates in H_i's fractional arboricity algorithm, <Ref> with α_max=Θ(log m/ε^4). If it leads to more changes, do nothing.
We note that if resampling leads to more than Θ(log m/ε^4) changes, then p_i α ≥ p_i d^+(u)/(1+ε) = Ω(log m/ε^4), so α = Ω(2^i), hence H_i is not the correct graph to look at. In particular, before we want to use H_i, we need α, and hence d^+(u), to decrease, which means we will resample its out-edges before we need the estimate from this graph.
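In code, the per-update work of Step 3 looks roughly as follows; `orientation` and the handles in `levels` are assumed interfaces of the surrounding data structures (the out-orientation of <Ref> and the bounded-α instances), not real APIs.

```python
import random

def process_resampling(orientation, levels, probs, change_budget, rng=random):
    """Sketch of Step 3: every vertex whose out-edge set changed resamples all
    of its out-edges in each H_i with probability p_i. H_i is only touched when
    the number of induced changes stays within its budget Theta(log m / eps^4);
    otherwise alpha = Omega(2^i) and H_i's output is not being used anyway."""
    for v in orientation.changed_vertices():
        out_edges = orientation.out_edges(v)
        for i, H in enumerate(levels):
            flips = []
            for e in out_edges:
                keep = rng.random() < probs[i]
                if H.contains(e) != keep:
                    flips.append((e, keep))
            if len(flips) <= change_budget[i]:
                for e, keep in flips:
                    (H.insert if keep else H.delete)(e)
```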
As before, in <Ref>, we use the current output of the algorithm to see which H_i we should use for the output after the next update.
Correctness.
Next, we show that the given algorithm always maintains a correct estimate of the fractional arboricity.
At any moment, we maintain a (1+)-approximation of the fractional arboricity.
Let i such that 2^i-1≤α < 2^i+2.
Let H_i be the corresponding sampled graph, where each edge is sampled with probability p_i = 8(c+3)log m/(2^i ε^4). Since any action of the adversary leads to resampling all out-edges of all affected vertices, we can treat these resamplings as independent random events. We prove that with high probability our result holds. Hence even many attacks on the same vertex will not lead to breaking our guarantees.
For the proof, we consider |E_{H_i}(S)|/(|S|-1), and aim to show this is roughly equal to p_i·|E(S)|/(|S|-1). To be precise, we need to show two parts:
* We show that for a set S⊆ V that satisfies α = |E(S)|/(|S|-1), w.h.p.
1/p_i·|E_{H_i}(S)|/(|S|-1) ≥ (1-ε)α.
* We show that for any S⊆ V we have w.h.p. that
1/p_i·|E_{H_i}(S)|/(|S|-1) ≤ (1+ε)α.
Instead of working with the standard adaptive adversary, who can attack us by implicitly attacking our sampling, we show that the algorithm holds against a stronger adversary: at any moment in time the adversary is allowed to point at a vertex that needs to resample all its out-edges. In total the adversary can call for O(m·ε^{-4}log^3 m)=O(m^2) resamples, since we have m updates, which each lead to O(ε^{-4}log^3 m) recourse in the out-orientations.
If the update sequence is longer than m^2, we can build new versions of the data structure in the background via periodic rebuilding.
For each part, we make a case distinction on the size of S as follows
* |S| ≤ 2/ε; or
* |S| > 2/ε.
Part <ref>.
We note that at least (1-ε)|S|-1 vertices u∈ S have at least εα out-neighbors in S. This follows from a simple pigeonhole argument: let k denote the number of vertices that have at least εα out-neighbors. Then we see that
k(1+ε)α +(|S|-k)εα ≥ α(|S|-1)
k ≥ |S|-1-ε|S| = (1-ε)|S|-1.
Next, consider a vertex u∈ S with d^+(S,u)≥εα out-neighbors in S. In H_i this vertex has in expectation d^+(S,u)p_i out-neighbors; to show our result with high probability, we need to be more precise.
We say a vertex u with d^+(S,u) ≥ εα is good for S if the sampled degree of u in S satisfies
d_{H_i}^+(S,u) ≥ (1-ε) d^+(S,u)p_i.
If u is not good, it is bad. Let B(S) be the set of bad vertices for S. Using a Chernoff bound, we see that the probability that a vertex u is bad for S is
ℙ[d_{H_i}^+(S,u) < (1-ε)d^+(S,u)p_i] ≤ e^{-ε^2 d^+(S,u)p_i/2}
≤ e^{-ε^3 α·8(c+3)log m/(2^i ε^4)/2}
≤ e^{-ε^3·2^{i-1}·8(c+3)log m/(2^i ε^4)/2}
≤ m^{-(c+3)·2/ε}.
Part <ref><ref>.
We can use a union bound over all u and all S of size at most 2/ε to show that every such u is good for all such S.
So w.h.p. we have that for |S| ≤ 2/ε and for u with d^+(S,u) ≥ εα we have d_{H_i}^+(S,u) ≥ (1-ε)d^+(S,u)p_i. Further, we have at most ε|S|+1 vertices with fewer than εα out-neighbors in S, in total covering at most (ε|S|+1)εα edges in S. Combining these two facts gives us:
1/p_i·|E_{H_i}(S)|/(|S|-1) ≥ (1-ε)·(α(|S|-1)-(ε|S|+1)εα)/(|S|-1)
≥ (1-ε)α(1-(ε|S|+1)ε/(|S|-1))
≥ (1-ε)(1-4ε)α,
where we use that |S| ≥ 2 and ε ≤ 1/2.
Now setting ε ← ε/5 gives the result.
Part <ref><ref>.
Next, we consider |S| > 2/ε.
If there are at most |B(S)| ≤ ε(|S|-1) bad vertices u∈ S, then at least (1-2ε)|S| good vertices have at least εα out-neighbors. Hence this guarantees
1/p_i·|E_{H_i}(S)|/(|S|-1) ≥ (1-ε)·(α(|S|-1)-(ε|S|+1)εα-ε(|S|-1)α)/(|S|-1)
≥ (1-ε)α(1-(ε|S|+1)ε/(|S|-1)) - εα
≥ (1-ε)(1-4ε)α - εα.
Setting ε ← ε/6 gives the result.
Now we compute the probability that there are more than ε(|S|-1) bad vertices.
Earlier we showed that every time a vertex is resampled, it is bad for S with probability at most m^{-(c+3)·2/ε}.
Now, using this terminology, we want to show that the event |B(S)| > ε(|S|-1) occurs with small enough probability.
We have assumed the adversary can do at most m^2 resampling attacks.
At least ε(|S|-1) bad resamplings for S have to happen for |B(S)| > ε(|S|-1).
It follows that the event |B(S)| > ε(|S|-1) implies the existence of a subset T ⊂ [m^2] with |T| = ε(|S|-1)+1 such that every resampling in T is bad for S.
Hence, we find that:
ℙ(|B(S)| > ε(|S|-1)) ≤ ∑_{T∈[m^2]_{ε(|S|-1)+1}} ℙ(T ⊂ B(S))
≤ ∑_{T∈[m^2]_{ε(|S|-1)+1}} (m^{-(c+3)·2/ε})^{|T|}
≤ m^{2(ε(|S|-1)+1)}·(m^{-(c+3)·2/ε})^{ε(|S|-1)+1}
≤ m^{2(ε(|S|-1)+1)}·(m^{-2(c+3)})^{(|S|-1)+1}
≤ m^{-2c(|S|-1)}
≤ m^{-c|S|},
since |S| ≥ 2.
Here for a set A, we denoted by A_k = {A' ⊂ A: |A'| = k} the set of all subsets of A of size k.
We have at most n^t choices for S of size t, so union bounding over all n choices of t and all n^t choices of S for each t gives that some S is bad with probability at most:
∑_t n^t m^-ct≤ n · m^-(c-1)≤ m^-(c-2)
By union bounding over all m^2 different choices of G throughout the update sequence, we find that the bad event does not happen for any choice of S or G with probability at least m^-(c-4), and so reassigning c ← c+4 gives that it does not happen with high probability.
Part <ref>.
In this case, we say that a vertex is good for S:
* If d^+(S,u) ≥ εα: if the sampled degree of u in S satisfies
d_{H_i}^+(S,u) ≤ (1+ε) d^+(S,u)p_i.
* If d^+(S,u) < εα: if the sampled degree of u in S satisfies
d_{H_i}^+(S,u) ≤ 3εα p_i.
Again, if u is not good, it is bad.
If d^+(S,u) ≥ εα, we can show that u is good for S with probability at least 1-m^{-(c+3)·2/ε}, again by applying a Chernoff bound, analogous to Part <ref>.
Let u∈ S be a vertex with d^+(S,u) < εα; we show that the probability that these vertices are bad is also small.
Indeed, we have
ℙ[d^+_{H_i}(S,u) > 3εα p_i] = ℙ[d^+_{H_i}(S,u) > (3εα/d^+(S,u))·p_i d^+(S,u)]
≤ ℙ[d^+_{H_i}(S,u) > (1+2εα/d^+(S,u))·p_i d^+(S,u)]
≤ exp(-(2εα/d^+(S,u))^2·p_i d^+(S,u)/(2+2εα/d^+(S,u)))
≤ exp(-2εα p_i·(2εα/d^+(S,u))/(4εα/d^+(S,u)))
= exp(-εα p_i)
≤ m^{-(c+3)·2/ε^3},
since εα p_i ≥ ε·2^{i-1}·8(c+3)log m/(2^i ε^4) ≥ 2(c+3)log m/ε^3.
We conclude that in any case the probability that u is bad for S is at most m^{-(c+3)·2/ε}.
Part <ref><ref>.
Again, we can union bound over all u and all S with |S| ≤ 2/ε, and all O(m^2) updates, to get that all vertices are good w.h.p.
Now we see that if we let k ≥ 1 denote the number of vertices that have d^+(S,u) < εα, then
|E_{H_i}(S)| = ∑_{u∈S: d^+(S,u)≥εα} d^+_{H_i}(S,u) + ∑_{u∈S: d^+(S,u)<εα} d^+_{H_i}(S,u)
≤ (|S|-k)(1+ε)α p_i + k·3εα p_i.
So we see that
1/p_i·|E_{H_i}(S)|/(|S|-1) ≤ (1/p_i)·((|S|-k)(1+ε)α p_i + k·3εα p_i)/(|S|-1)
≤ ((|S|-1)(1+ε) + 3εk)/(|S|-1)·α
≤ (1+ε+3ε|S|/(|S|-1))·α
≤ (1+7ε)α.
Setting ε ← ε/7 gives the result.
Part <ref><ref>.
Finally, we consider |S| > 2/ε.
We first note
d_{H_i}^+(S,u) ≤ d_{H_i}^+(u) ≤ (1+ε)p_iα,
where the last inequality holds by a Chernoff bound independent from S (so we only need to union bound over all u∈ V).
If at most ε(|S|-1) vertices u∈ S are bad for S, then in the worst case they achieve equality in <Ref>. We now sum the bad and the good vertices, using the result on good vertices analogous to Part <ref><ref>, to see
1/p_i·|E_{H_i}(S)|/(|S|-1) ≤ (1/p_i)·(ε(|S|-1)(1+ε)p_iα + (1+7ε)p_iα(|S|-1))/(|S|-1) ≤ (1+9ε)α.
Setting ε ← ε/9 gives the result.
The fact that w.h.p. there are at most ε(|S|-1) bad vertices is analogous to Part <ref><ref>.
Bounding α_max.
Finally, we remark that the fractional arboricity in H_i is at most α_max=Θ(log m/ε^4).
Since α < 2^{i+2}, we get by Part <ref> that the fractional arboricity in H_i is w.h.p. at most p_i·(1+ε)α ≤ (8(c+3)log m/(2^i ε^4))·2^{i+3}=O(log m/ε^4). Hence by <Ref>, we maintain a (1+ε)-approximation of the fractional arboricity in H_i, which scales to a (1+ε)-approximation of the fractional arboricity in G by Parts <ref> and <ref>.
Putting it all together.
For the update time, we will need the following lemma.
Given an unweighted, undirected (multi-)graph, we can maintain (1+ε)α out-orientations with O(ε^{-6}log^3 m·logα) worst-case update time. The orientation maintained by the algorithm has O(ε^{-4}log^2 m·logα) recourse.
Note that this algorithm works for multi-graphs, if one keeps all parallel edges in balanced binary search trees. When rounding the fractional orientation one has to take care of all 2-cycles before further processing things, but these can be identified and handled using another balanced binary search tree sorted by fractional orientation from one vertex to another.
This ensures that the refinement contains no parallel edges, and therefore only the orientation of one parallel copy is stored implicitly and can be looked up when necessary.
Note that the above things can be implemented in O(log m) time, but this does not increase the running time as it is done in parallel to other more expensive steps.
*
We use the algorithm as described above.
Correctness follows from <Ref>. To obtain the bounds on the update time we note the following.
We maintain log m graphs H_i, on which we apply <Ref> with α_max=Θ(log m/ε^4), which needs O(log^7 m/ε^8) time per update. By the recourse of the out-orientation, <Ref>, we need to resample the out-edges of O(log^3 m/ε^4) vertices for each H_i per update to G. Every resample leads to O(log m/ε^3) edge updates in H_i. So in total each H_i has O(log^4 m/ε^7) updates per update to G.
Multiplying this with the aforementioned update time for H_i, we obtain
log m · O(log^4 m/ε^7) · O(log^7 m/ε^8) = O(log^{12} m/ε^{15})
amortized update time. For the bound with worst-case update time, we have by <Ref> that each update takes O(m^{o(1)}/ε^{12}) time, so the worst-case update time becomes O(m^{o(1)}/ε^{19}).
§ A LOWER BOUND FOR GREEDY TREE-PACKING
In this section, we will show the following theorem:
*
To do so, we construct a family of graphs, and give an execution of greedily packing trees on these graphs such that |ℓ^(e)-ℓ^*(e)| > ε/λ if ||=o(λ/ε^{3/2}).
We first do this for λ=2, then we note that we can obtain the result for any even λ by essentially copying this construction λ/2 times. To get an intuition for the proof, we recommend the reader to look ahead to the figures. In the right part of <Ref>, we depict the constructed graph with λ=2. This graph is a very uniform graph; every edge e has ℓ^*(e)=1/2 (see <Ref>). The packing of trees is depicted in <Ref> and <Ref>, where we can see that certain edges are over-packed and others are under-packed. The over-packed edges will get a value ℓ^(e) well above 1/2=ℓ^*(e), giving the result.
The construction works for any tuple (λ, k, n) ∈ (2ℤ) ×ℤ_≥ 1×ℤ_≥ 10 with k = 𝒪(n^1/3).
Given n and k satisfying these requirements, we first show the construction for λ = 2.
We extend the construction to any λ∈ 2ℤ afterwards.
Before we present the family of graphs, we first introduce a simple operation which preserves a partition value of 2.
Operation: Replace any vertex v by two vertices v' and v” connected by two parallel edges. The edges around v can be distributed to v' and v” in any arbitrary fashion.
The following lemma is then straight-forward to check.
Given a graph G with Φ(G) = 2 such that the trivial partition 𝒫 = {u}_u ∈ V(G) is a minimum partition, then for all v ∈ V(G), any graph G_v obtained by performing a valid version of the operation on v also has Φ(G_v) = 2 and that the trivial partition 𝒫 = {w}_w ∈ V(G_v) is a minimum partition.
Observe that any graph H that satisfies the conditions Φ(H) = 2 and that the trivial partition 𝒫 = {u}_u ∈ V(H) is a minimum partition is exactly the union of two disjoint spanning trees.
Indeed, suppose first that H is a union of 2 disjoint spanning trees. It then follows by <cit.> that Φ(H)≥ 2. Since the trivial partition induces partition value exactly 2, this direction follows.
To see the other direction, observe that Φ(H) = 2 implies that one can pack two disjoint spanning trees of H by <cit.>. Since the trivial partition achieves this minimum, H must in fact be the disjoint union of two spanning trees.
Finally, observe that performing the operation and placing each new edge in a different spanning tree yields a new graph which is also exactly the union of two disjoint spanning trees.
Next, consider the following family of graphs indexed by n and k: for given n and k, we let G_n,k be the following graph.
Begin with the complete simple graph on 3 vertices K_3.
Pick an arbitrary edge and insert a parallel copy of it. Denote by v^1_1 the vertex not incident to any parallel edges.
Denote by v^1_2 and v^1_3 the other two vertices (the choice can be made arbitrarily, but is fixed once made). Let e^1_1 be the edge v^1_1v^1_2 and e^1_2 the edge v^1_1v^1_3. See the left part of <Ref> for an illustration.
Next, we perform the operation on v^1_3 to get two new vertices v' and v”.
We distribute the edges incident to v^1_3 as follows: all edges incident to v^1_3 become incident to v' except for one of the parallel edges, which becomes incident to v”.
We then (re-)assign v^1_3 = v', v^2_1 = v^1_2, v^2_2 = v^1_3, v^2_3 = v”, e^2_1 = v^2_1v^2_2, and e^2_2 = v^2_1v^2_3.
Having constructed v^i_1, v^i_2, and v^i_3, we get v^{i+1}_1, v^{i+1}_2, and v^{i+1}_3 similarly to above:
we perform the operation on v^i_3 to get two new vertices v' and v”.
We distribute the edges incident to v^i_3 as follows: all edges incident to v^i_3 become incident to v' except for one of the parallel edges, which becomes incident to v”.
We then (re-)assign v^i_3 = v', v^i+1_1 = v^i_2, v^i+1_2 = v^i_3, v^i+1_3 = v”, e^i+1_1 = v^i+1_1v^i+1_2, and e^i+1_2 = v^i+1_1v^i+1_3. See the middle part of <Ref> for an illustration.
We perform the above step 2k-1 times. Each step increases the number of vertices by 1, and so the resulting graph has 3+(2k-1) = 2(k+1) vertices.
We then perform the following step n-2(k+1) times:
perform the operation on v^2k_3 to get two new vertices v' and v”. Let all edges incident to v^2k_3 become incident to v' except for the parallel edges incident to v^2k_3 which become incident to v”.
Then re-assign v^2k_3 = v' and denote the two new parallel edges by f^n-2(k+1)_1 and f^n-2(k+1)_2 (the choice can be made arbitrarily, but is fixed once made).
Having constructed f^i_1 and f^i_2, we can construct f^i-1_1 and f^i-1_2 mutatis mutandis to above. See the right part of <Ref> for an illustration.
Each time the second step is performed, the number of vertices is also increased by one, so in total the graph has n vertices.
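The construction is explicit enough to write down; a minimal Python sketch building G_{n,k} as a multigraph edge list (function and variable names are ours, and the final assertion checks the edge count 2(n-1) of a union of two spanning trees):

```python
def build_G(n, k):
    """Sketch of the lower-bound graph G_{n,k}, following the two splitting
    phases described above. Vertices are 0..n-1; edges is a multiset list."""
    assert k >= 1 and n >= 2 * (k + 1)
    edges = [(0, 1), (0, 2), (1, 2), (1, 2)]  # K_3 plus one parallel copy of (1,2)
    v2, v3 = 1, 2                             # v^1_2, v^1_3 (vertex 0 is v^1_1)
    nxt = 3                                   # next fresh vertex id

    # Phase 1 (2k-1 splits): one copy of the parallel pair moves to the new vertex.
    for _ in range(2 * k - 1):
        w, nxt = nxt, nxt + 1
        edges.remove((v2, v3))                # one parallel copy, re-attached below
        edges += [(v2, w), (v3, w), (v3, w)]  # moved copy + fresh parallel pair
        v2, v3 = v3, w                        # relabel v^{i+1}_2, v^{i+1}_3

    # Phase 2 (n-2(k+1) splits): both parallel copies move to the new vertex.
    o = v2                                    # other endpoint of the pair at v3
    for _ in range(n - 2 * (k + 1)):
        w, nxt = nxt, nxt + 1
        key = (min(o, v3), max(o, v3))
        edges.remove(key); edges.remove(key)  # detach the pair from v3 ...
        edges += [(o, w), (o, w)]             # ... and re-attach it to v'' = w
        edges += [(v3, w), (v3, w)]           # the two new parallel edges f_1, f_2
        o = w

    assert nxt == n and len(edges) == 2 * (n - 1)
    return edges
```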
By repeatedly applying <Ref>, we find the resulting graph is the disjoint union of two spanning trees. Hence, we have that:
For valid choices of n,k, we have that G_n,k satisfies that Φ(G_n,k) = 2 and that the trivial partition 𝒫 = {u}_u ∈ V(G_n,k) is a minimum partition.
Next, we will specify a packing of G_n,k consistent with <Ref>.
We consider the pairs of edges Y_i consisting of e^i_1 and e^i_2. See <Ref> for an illustration.
After having packed the first 2j trees, we say that Y_i is at level 2α if all edges in Y_i have been packed exactly α+j times.
We say that Y_i is at level 2α + 1 if e^i_1 has been packed α + 1+j times and e^i_2 has been packed α+j times.
If Y_i is at level β we write lev(Y_i) = β.
We use the following definition:
We say that the tree-packing on G_k,n is in standard position if the following holds:
* || = 2j for some j ∈ℤ.
* For all i: Y_i is at level β_i for some β_i.
* For all i<j, we have that β_i ≥ β_j and that |β_i-β_{i-1}| ≤ 1.
* β_2k-1≥ 0.
* For all i: f^i_2 has been packed j times.
* There is some ι such that for all i ≥ι: f^i_1 has been packed j times, and for all i < ι f^i_1 has been packed j-1 times.
We let the vector β = (β_1, β_2, …, β_2k) ∈ℤ^2k be the level profile of .
Next we will show that if is in standard position with lev(Y_i) = lev(Y_i+1) for some i, then we can increase the level of some Y_j without decreasing the level of any pair by adding only O(k) trees to while still ensuring that ends up in standard position.
The final tree-packing is then achieved by applying the above procedure in a systematic way O(k^2) times.
Note that the empty packing is in standard position.
Before showing this, we first show the following lemma.
Let be a greedy tree-packing in standard position on G_n,k with || = 2j and β_2k≥ 0.
Let i and s ≥ 1 be such that 1) either β_i = β_{i+1} = … = β_{i+s} > β_{i+s+1}, or β_i = β_{i+1} = … = β_{i+s} and i+s = 2k, and 2) either β_{i-1} > β_i or i = 1.
Then there is a greedy tree-packing ' in standard position on G_{n,k} with 2j + 2(⌈s/2⌉+1) trees such that the level profile β' of ' satisfies β'_l = β_l + [i = l] - [i+s=l][Here [P] denotes the Iverson bracket, which evaluates to 1 if P is true and 0 otherwise.] for all l.
We will show the lemma assuming that β_i = β_i+1 = … = β_i+s > β_i+s+1 and that β_i-1 > β_i. The other cases follow from analogous arguments.
We pack trees in pairs. Consider first the following trees T and T' that greedily extends . The edge e^l_1 is in T if lev(Y_l) is even, e^l_2 is in T if lev(Y_l) is odd, and all edges f^p_1 are in T.
In order to verify that T indeed extends in a greedy manner, observe that since is in standard position, it follows by downward induction on l that T extends greedily. See <Ref> for an illustration.
Similarly, we let e^j_2 in T' if lev(Y_j) is even, e^j_1 in T' if lev(Y_j) is odd, and all edges f^j_2 are in T.
Exactly as above, it follows by induction that T' greedily extends ∪ T, and that ∪{T, T'} is a greedy tree-packing in standard position with the same level profile as .
We can extend as above as many times as we would like to obtain a greedy tree-packing
∪⋃_{l=1}^{⌈s/2⌉+1}{T_l, T'_l}
in standard position with the same level profile as .
Consider first the case where lev(Y_i) = lev(Y_i+1) is even.
Then, we can perform the swap of e^i_2 for e^i+1_1, while keeping T_1 a greedy extension of .
Similarly, we can now perform the swaps e^i_1 for e^i_2 and e^i+1_1 for e^i+2_2 in T'_1.
After these swaps, T'_1 is again a greedy extension of ∪ T_1. See <Ref> for an illustration.
Next, we can change T_2 by swapping e^i_1 for e^i_2 and e^i+3_1 for e^i+2_2.
Then we change T'_2 by swapping e^i_2 for e^i_1, e^i+4_2 for e^i+3_1.
We can continue this process mutatis mutandis, until at some point we pack neither e^i+s_1 nor e^i+s_2.
For even s, this happens in T'_{s/2}, and for odd s, this happens in T_{⌈s/2⌉}.
For even s, we make no more changes. For odd s, we still need to change T'_{⌈s/2⌉}: we swap e^i_2 for e^i_1 and e^{i+s}_2 for e^{i+s}_1.
It follows by induction on s that
' = ∪⋃_{l=1}^{⌈s/2⌉+1}{T_l, T'_l}
is a greedy tree-packing in standard position with the claimed level profile. If needed, we can increase the size of the tree-packing to the required size by adding another pair of T and T' (based on the levels of ' and not of ).
In the case where lev(Y_i) = lev(Y_i+1) is odd, we only alter T'_1.
Here, we exchange e^i+1_1 for e^i_2.
Next we perform the following swaps on T_2; we swap e^i_2 for e^i_1, and we swap e^i+2_2 for e^i+1_1.
We change T'_2 by swapping e^i_1 for e^i_2 and we swap e^i+3_1 for e^i+2_2.
We can continue this process mutatis mutandis, until at some point we pack neither e^i+s_1 nor e^i+s_2.
For even s, this happens in T_{s/2+1}, and for odd s, this happens in T'_{⌈s/2⌉}.
For odd s, we stop here. For even s, we still need to pack T'_{s/2+1} before stopping. Before doing so, we swap e^i_1 for e^i_2 and e^{i+s}_1 for e^{i+s}_2.
It follows by induction on s that
∪⋃_{l=1}^{⌈s/2⌉+1}{T_l, T'_l}
is a greedy tree-packing in standard position with the claimed level profile.
We can now use this lemma to extend the packing.
Let be a greedy tree-packing in standard position on G_n,k with || = 2j and β_2k∈{0,1}.
Let i and s > 0 be such that 1) either β_i = β_{i+1} = … = β_{i+s} > β_{i+s+1}, or β_i = β_{i+1} = … = β_{i+s} and i+s = 2k, and 2) either β_{i-1} > β_i or i = 1.
Then there is a greedy tree-packing ' in standard position on G_n,k with 2j + 2(2k+1) trees such that the level profile β' of ' satisfies β'_l = β_l + [i = l] for all l.
We will show the lemma assuming that β_i = β_i+1 = … = β_i+s > β_i+s+1 and that β_i-1 > β_i. The other cases follow from identical arguments.
We begin by letting t = i, s' = s. Then we apply <Ref> with t and s' as arguments to get a new packing with only 2j + 2(s'2+1) trees. We then let t = t+s' and s' be the smallest non-negative integer such that lev(Y_t)>lev(Y_t+s'). Again we apply <Ref> with t and s' as arguments. We do this recursively, until t+s' = 2k.
At this point, we have a tree-packing ' of size at most 2j+4k. Indeed, in the worst-case s' = 0 in every iteration, leaving us with at most 2k recursive calls.
If the initial level of Y_2k was 1, we pack T with e^2k_2 and f^ι_1 swapped. Then we pack T', but with e^2k_1 and e^2k_2 swapped.
Note here that T and T' should be constructed with respect to the levels of ' and not .
If the initial level of Y_2k was 0, we pack T with e^2k_1 and f^ι_1 swapped. Then we pack T', but with e^2k_1 and e^2k_2 swapped. Again T and T' should be constructed with respect to the levels of ' and not .
Finally, we can, if necessary, pad with un-altered copies of T and T' (based on the levels of the final tree-packing above) to achieve a greedy tree-packing of the form
∪⋃_{l=1}^{2k+1}{T_l, T'_l}
in standard position with ι one larger than before.
Observe that the ultimate tree-packing has level profile
β̂_l = β_l + [l = 2k] + ∑_{(t,s')} ([l = t] - [l = t+s']) = β_l + [l = i],
since the sum telescopes along the recursion: the first call starts at t = i and the last one ends at t+s' = 2k,
as claimed.
To obtain the final tree-packing of G_n,k we do as follows.
Beginning from the empty packing, which is in standard position, we repeatedly apply <Ref>. The goal is to achieve a level profile of the form (2k-1, 2k-2, …, 0).
To do so, assume that we have constructed a level profile of the form (i,i, …, i, i-1, i-2, …, 0).
We then apply <Ref> on j beginning with j=1, then j=2 and so on up to and including j=2k-i to increase the j^th coordinate to i+1.
Since we apply <Ref> ∑_j = 1^2k j = k(2k+1) times, and each application extends the tree-packing with 2(2k+1) trees, the resulting tree-packing contains 2k(2k+1)^2 trees.
Since the tree-packing is in standard position, we observe that some edge e has been packed ||/2+k times.
By <Ref> we have that ℓ^*(e) = 1/2, so if
k/|| = k/(2k(2k+1)^2) = |ℓ^(e) - ℓ^*(e)| ≤ ε/2
we have that
ε^{-1} ≤ (2k+1)^2,
i.e., k = Ω(ε^{-1/2}).
In particular, the lemma now follows for λ = 2. Indeed, the above construction with a specific k = O(ε^{-1/2}) yields a tree-packing with Ω(λ·ε^{-3/2}) trees that does not achieve the required concentration.
Note that this only holds for large enough ε, with ε^{-1} ∈ O(n^{1/3}), since otherwise ι might become too big for the argument in <Ref> to go through.
To generalize the statement to any λ∈ 2ℤ, let λ = 2s.
Then we get G_s,n,k by duplicating every edge of G_n,k s times.
We get a greedy tree-packing _s by replacing each tree T in the tree-packing on G_n,k by s parallel and isomorphic copies of T, each using their own set of edges.
In total |_s| = 2k(2k+1)^2·s, and so the calculation now becomes:
k/|_s| = k/(2k(2k+1)^2·s) = |ℓ^(e) - ℓ^*(e)| ≤ ε/(2s),
and again we conclude that
ε^{-1} ≤ (2k+1)^2
and obtain the theorem exactly as before.
§ EXISTENCE OF SMALL TREE-PACKING
The goal of this section is to show that for all graphs G there exists a tree-packing that approximates the ideal load decomposition well.
Formally, we will prove the following.
*
We restate the theorem slightly. We recall that λ/2 < Φ≤λ by <Ref>. Hence the statement is equivalent to ||=Θ(Φ/ε) trees guaranteeing
|ℓ^(e)-ℓ^*(e)| ≤ ε/Φ,
for all e∈ E.
To show this, we first inspect the easier case, where the trivial partition 𝒫 = {v_i}_i = 1^n achieves the minimum partition value Φ_G of G.
To do so, we will generalize Kaiser's simple proof of the tree-packing theorem <cit.>, to packing trees plus one forest. The proof is very similar to the proof in <cit.>, we include it for completeness. We start by introducing some notation.
Let k≥ 1. A k-decomposition of a graph[By abuse of notation, we use both for a k-decomposition and a tree-packing. In the end, this will correspond to essentially the same packing.] is a k-tuple of spanning subgraphs such that {E(T_i) : 1≤ i≤ k} is a partition of E(G).
We define the sequence (_0,_1, …, _∞) of partitions of V(G) associated with as follows. First _0={V(G)}. For i≥ 0, if there exists c∈{1, …, k} such that the induced subgraph T_c[X] is disconnected for some X∈_i, then let c_i be the least such c, and let _i+1 consist of the vertex sets of all components of T_c_i[X], where X ranges over all the classes of _i. Otherwise, the process ends by setting _∞=_i, and we set c_j=k+1 and _j=_i for all j≥ i.
The level, (e), of an edge e∈ E(G) (w.r.t. ) is defined as the largest i (possibly ∞) such that both endpoints of e are contained in one class in _i.
When and are partitions of V(G), we say that refines , denoted by ≤ Q, if every class of is a subset of a class of .
Finally, we define a strict partial order on k-decompositions. Given two k-decompositions 𝒯 and 𝒯', we set 𝒯 ≺ 𝒯' if there is some finite j≥ 0 such that both of the following hold
* for 0≤ i< j, 𝒫_i=𝒫_i' and c_i=c_i'[Here we use 𝒫_i' and c_i' to denote the partitions/values corresponding to 𝒯'.],
* either 𝒫_j < 𝒫_j', or 𝒫_j=𝒫_j' and c_j< c_j'.
Let G be a graph on vertex set V(G) = {v_1, v_2, …, v_n} such that the trivial partition 𝒫 = {v_i}_i = 1^n achieves the minimum partition value Φ_G of G. Then there exists an edge-disjoint packing consisting of ⌊Φ_G⌋ spanning trees and one forest F on exactly (Φ_G-⌊Φ_G⌋)(n-1) edges.
The idea is the following: pick a k-decomposition 𝒯 that contains ⌊Φ_G⌋ disjoint spanning trees and (at most) one disjoint spanning subgraph F on (Φ_G-⌊Φ_G⌋)(n-1) edges, and that, subject to these constraints, is maximal with respect to the partial order ≺. Note that this is a k-decomposition for k=⌈Φ_G⌉, and that the forest F is present iff Φ_G is non-integer. If Φ_G is an integer, the statement is exactly the tree-packing theorem and follows from, e.g., <cit.>; so we assume it is not. In that case F=T_k.
The claim now is that, for 𝒫 = 𝒫_∞, F has at least (Φ_G-⌊Φ_G⌋)(n-1) non-parallel, inter-partition edges. If F is a forest, we are done.
We first argue that if F is not a forest, then F must contain a cycle in G/𝒫. To see this, consider G/𝒫. Since the T_i and F all induce trees inside each class of 𝒫, we conclude that F must have at least (Φ_G-⌊Φ_G⌋)(|𝒫|-1) edges in G/𝒫. Moreover, F contains a tree on each class P∈𝒫, so we have at least n-|𝒫| edges inside the classes.
Now we see that
(Φ_G-⌊Φ_G⌋)(n-1) = |E(F)| ≥ (Φ_G-⌊Φ_G⌋)(|𝒫|-1) + (n-|𝒫|).
Rearranging gives
(Φ_G-⌊Φ_G⌋)(n-|𝒫|) ≥ n-|𝒫|,
and thus for |𝒫|≠ n, Φ_G-⌊Φ_G⌋ ≥ 1, a contradiction. We conclude that |𝒫|=n, and thus any cycle in F is a cycle in F/𝒫.
Let e be an edge in a cycle of F=T_k of minimum level, and set m=lev(e). Let P be the class of 𝒫_m containing both endpoints of e. Since e joins different components of T_c_m[P], we have c_m≠ k, and the unique cycle C in T_c_m+ e contains an edge with only one endpoint in P. Thus for some edge e'∈ C we have lev(e')< m. Let e' be such an edge of lowest level. Let Q be the class of 𝒫_lev(e') containing both endpoints of e'. Observe that V(C)⊆ Q. We now create a new k-decomposition with e and e' swapped: let 𝒯' be the k-decomposition obtained from 𝒯 by replacing T_c_m with T_c_m+e-e' and T_k with T_k-e+e'.
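The exchange can also be phrased operationally. Below is a hedged sketch (helper names are ours) of the swap, assuming T_c_m is a spanning tree stored as a list of sorted edge tuples without parallel copies, and `level` maps an edge to its level.

# Sketch (ours) of the exchange step: add e to T_{c_m}, remove a lowest-level
# edge e' of the created cycle, and move e' into T_k.

def tree_path(tree_edges, s, t):
    """Edges of the unique s-t path in the tree given by `tree_edges`."""
    adj = {}
    for u, v in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    parent = {s: None}
    stack = [s]
    while stack:
        v = stack.pop()
        for w in adj.get(v, []):
            if w not in parent:
                parent[w] = v
                stack.append(w)
    path, v = [], t
    while parent[v] is not None:
        path.append(tuple(sorted((parent[v], v))))
        v = parent[v]
    return path

def exchange(decomposition, c_m, k, e, level):
    """Replace T_{c_m} by T_{c_m}+e-e' and T_k by T_k-e+e' (0-based indices)."""
    u, v = e
    cycle = tree_path(decomposition[c_m], u, v)      # C minus the edge e itself
    e_prime = min(cycle, key=level)                  # an edge of lowest level
    decomposition[c_m] = [f for f in decomposition[c_m] if f != e_prime] + [e]
    decomposition[k] = [f for f in decomposition[k] if f != e] + [e_prime]
    return decomposition

Taking the minimum over the whole path is enough here, since every edge of C other than e has level at most m, and by the argument above some edge of C has level strictly below m.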
Next, it is easy to check that 𝒯 ≺ 𝒯', contradicting that 𝒯 was maximal. First, we show that for i≤ m we have 𝒫_i'=𝒫_i and c_i'=c_i. We do this by induction. For i=0 we have 𝒫_0'={V(G)}=𝒫_0 and c_0'=k=c_0, so the base case is clear. Now we assume that the statement holds for 0≤ i<m, and we prove it for i+1.
Let S be an arbitrary class of 𝒫_i+1; by definition, T_c_i[S] is connected. We want to show that T'_c_i'[S] is also connected. By the induction hypothesis, c_i'=c_i. If c_i∉{c_m,k}, then T'_c_i'[S]=T_c_i'[S]=T_c_i[S], which is connected. Since i<m, we are only left with the case c_i=k. We have that E(T_k)∖ E(T_k')={e}, so if not both endpoints of e are in S, then T_k'[S] is connected as well. If S does contain both endpoints of e, then P⊆ S, because every class of 𝒫_i+1 containing both endpoints of e contains P. Hence T_k'[S] is connected. We see that 𝒫_i+1 ≤ 𝒫_i+1'. By maximality of 𝒯, we conclude that 𝒫_i+1 = 𝒫_i+1'.
Next, we show that c_i+1'=c_i+1. Let R∈𝒫_i+1 and c<c_i+1. Since 𝒫_i+1 = 𝒫_i+1', we have R∈𝒫_i+1'. By definition of c_i+1, T_c[R] is connected. As before, we can argue that T_c'[R] is connected. Hence, c_i+1'≥ c_i+1. Again, by maximality of 𝒯, we get that we must have c_i+1'=c_i+1.
Now we look at the next step. We have from the above that 𝒫_m' = 𝒫_m and c_m'=c_m, so the classes of 𝒫_m+1' are the vertex sets of the components of T'_c_m[U] for U∈𝒫_m. For U∈𝒫_m∖{P}, we have T'_c_m[U]=T_c_m[U], so their components coincide. The graph T'_c_m[P] equals T_c_m[P] with the extra edge e connecting two components of T_c_m[P]. Hence 𝒫_m+1< 𝒫_m+1', so also 𝒯 ≺ 𝒯', contradicting 𝒯 being maximal.
Now the result follows easily for graphs whose minimum partition value is achieved by the trivial partition.
Let G be an unweighted, undirected (multi-)graph where the trivial partition 𝒫 = {v_i}_i = 1^n achieves the minimum partition value Φ_G of G.
Then there exists a tree-packing 𝒯 with |𝒯|=Θ(Φ_G/ε) trees that satisfies
|ℓ^𝒯(e)-ℓ^*(e)| ≤ ε/Φ_G,
for all e∈ E.
W.l.o.g., assume that 1/ε is an integer. We replace every edge in G by 1/ε copies to obtain G'. Note that ℓ^*_G'(e)=ℓ^*_G(e) and Φ_G'=Φ_G/ε. By <Ref>, we see that we can pack ⌊Φ_G'⌋ spanning trees and one forest on exactly (Φ_G'-⌊Φ_G'⌋)(n-1) edges.
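As an aside, the blow-up itself is straightforward; a minimal sketch (ours), assuming 1/ε is integral as stated:

# Hedged sketch (ours): replace every edge by 1/eps tagged parallel copies.
def duplicate_edges(edges, eps):
    copies = round(1 / eps)
    assert abs(copies - 1 / eps) < 1e-9, "w.l.o.g. 1/eps is an integer"
    # the tag i keeps parallel copies distinguishable in the multigraph G'
    return [(u, v, i) for (u, v) in edges for i in range(copies)]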
Now let 𝒯 consist of these ⌊Φ_G'⌋ spanning trees, together with one tree that is the forest extended to a spanning tree in an arbitrary way. Then we have that L^𝒯(e)=1/ε or L^𝒯(e)=1/ε+1. We see that in the first case
|ℓ^𝒯(e)-ℓ^*(e)| = |(1/ε)/⌈Φ_G/ε⌉ - 1/Φ_G| = 1/Φ_G - (1/ε)/⌈Φ_G/ε⌉ = (⌈Φ_G/ε⌉ - Φ_G/ε)/(⌈Φ_G/ε⌉·Φ_G)
≤ 1/(⌈Φ_G/ε⌉·Φ_G) ≤ ε/Φ_G.
And in the second case we have
|ℓ^𝒯(e)-ℓ^*(e)| = |(1/ε+1)/⌈Φ_G/ε⌉ - 1/Φ_G| = |(Φ_G/ε + Φ_G - ⌈Φ_G/ε⌉)/(⌈Φ_G/ε⌉·Φ_G)|
≤ |(Φ_G/ε - ⌈Φ_G/ε⌉)/(⌈Φ_G/ε⌉·Φ_G)| + |Φ_G/(⌈Φ_G/ε⌉·Φ_G)|
≤ 2ε/Φ_G,
where for the first inequality we use the triangle inequality, and for the second inequality we use the computation from the first case. Now, by setting ε ← ε/2, we obtain the result.
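Both case bounds are easy to confirm numerically; a small check (ours), with math.ceil standing in for ⌈·⌉:

# Numeric check (ours) of the two cases, for loads 1/eps and 1/eps + 1.
import math

for phi in (1.5, 3.0, 7.25):
    for eps in (0.5, 0.25, 0.1):          # all with 1/eps integral
        m = math.ceil(phi / eps)          # |T| = ceil(Phi_G / eps) trees
        case1 = abs((1 / eps) / m - 1 / phi)
        case2 = abs((1 / eps + 1) / m - 1 / phi)
        assert case1 <= eps / phi + 1e-12
        assert case2 <= 2 * eps / phi + 1e-12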
At last, we are ready to show the general case.
We recall the definition of the ideal relative loads: ℓ^*(e) is defined recursively as follows (a small sketch follows the list).
* Let 𝒫^* be a partition with pack_val(𝒫^*)=Φ.
* For all e∈ E(G/𝒫^*), set ℓ^*(e) := 1/Φ.
* For each S∈𝒫^*, recurse on the subgraph G[S].
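A hedged Python sketch of this recursion follows; the names are ours, `optimal_partition` is an assumed oracle returning a partition achieving the minimum partition value, and we take that value to be |E(G/𝒫^*)|/(|𝒫^*|-1), consistent with the arithmetic used earlier.

# Sketch (ours) of the recursive definition of the ideal loads l*.
def ideal_loads(G, optimal_partition):
    vertices, edges = G                      # edges: tuples of vertices
    loads = {}
    if len(vertices) <= 1 or not edges:
        return loads
    Pstar = optimal_partition(G)             # assumed oracle, list of vertex sets
    if len(Pstar) < 2:                       # guard: nothing to contract
        return loads
    cls = {v: i for i, X in enumerate(Pstar) for v in X}
    quotient = [e for e in edges if cls[e[0]] != cls[e[1]]]
    phi = len(quotient) / (len(Pstar) - 1)   # partition value of P*
    for e in quotient:                       # edges surviving in G/P*
        loads[e] = 1 / phi
    for i, X in enumerate(Pstar):            # recurse on each class G[X]
        sub = [e for e in edges if cls[e[0]] == i and cls[e[1]] == i]
        loads.update(ideal_loads((X, sub), optimal_partition))
    return loads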
We prove the lemma by induction on the depth of this recursive definition. If the depth is 0, then the minimum partition value Φ_G is achieved by the trivial partition 𝒫 = {v_i}_i = 1^n, so the base case is immediate. Suppose the statement holds up to depth i. Then for each class X∈𝒫^* there exists a tree-packing 𝒯_X such that |𝒯_X|=Θ(Φ/ε) and
|ℓ^𝒯_X(e)-ℓ^*(e)| ≤ ε/Φ,
for all e∈ E(G[X]). Note that this follows from the induction hypothesis since Φ_G[X]≥Φ, so we can set ε ← ε·Φ_G[X]/Φ. We also have a tree-packing 𝒯^* on G/𝒫^* such that |𝒯^*|=Θ(Φ/ε) and
|ℓ^𝒯^*(e)-ℓ^*(e)| ≤ ε/Φ,
for all e∈ E(G/𝒫^*).
Clearly 𝒯 = 𝒯^* ∪ (⋃_X∈𝒫^*𝒯_X), where each tree of 𝒯 is the union of the respective trees, is a tree-packing of G that satisfies the required bounds on ℓ^𝒯.
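Operationally, the gluing can be done index by index; a minimal sketch (ours), assuming all packings have been arranged to a common size t = Θ(Φ/ε) and that edges of the quotient trees are given by their original endpoints in G:

# Hedged sketch (ours): pair the i-th quotient tree with the i-th tree of
# every class packing; assumes all packings share the common size t.
def glue(quotient_packing, class_packings, t):
    glued = []
    for i in range(t):
        tree = list(quotient_packing[i])
        for T_X in class_packings:           # one packing per class X of P*
            tree.extend(T_X[i])
        glued.append(tree)
    return glued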