Dataset schema: entry_id (string, 33 chars), published (string, 14 chars), title (string, 13-172 chars), authors (sequence, 1-668 items), primary_category (115 classes), categories (sequence, 1-7 items), text (string, 3-431k chars)
http://arxiv.org/abs/2406.17775v1
20240625175912
Evidence of thermodynamics and magnetic monopole plasma formation by photon-magnon interaction in artificial spin ice
[ "D. G. Duarte", "S. F. de Souza", "L. B. de Oliveira", "E. B. M. Junior", "E. N. D. de Araujo", "J. M. Fonseca", "C. I. L. de Araujo" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Laboratory of Spintronics and Nanomagnetism (LabSpiN), Departamento de Física, Universidade Federal de Viçosa, Viçosa-MG, Brazil (affiliation of all authors); corresponding author: dearaujo@ufv.br § ABSTRACT Artificial spin ices (ASI), which host magnetic monopole quasi-particles emerging at room temperature, have been investigated as promising systems for alternative low-power information technology devices. However, restrictions associated with the intrinsic energetic connection between opposing magnetic monopoles in conventional ASI need to be overcome to achieve this purpose. Here, photon-magnon scattering in nanomagnets is examined as an approach to locally activate the collective dynamics of interacting magnetic systems at the nanoscale. Low-power white and polarized light were employed as a new tool to manipulate the magnetic monopole intensity, tuning the particles' response to an external magnetic field and inducing spontaneous magnetization flipping in the absence of a field (thermodynamics). Our findings, which show evidence of magnetic monopole plasma formation in a regular square ASI system, are explained by an analytical model of photon-magnon conversion acting directly on the ASI nanomagnet dipole. Micromagnetic simulations based on the sample parameters and on values obtained from the model show very good qualitative agreement between theory and observations for the investigated ASI system. Evidence of thermodynamics and magnetic monopole plasma formation by photon-magnon interaction in artificial spin ice C. I. L. de Araujo July 1, 2024 § INTRODUCTION Artificial Spin Ice (ASI) systems, consisting of nanomagnet arrays nanofabricated in planar geometries so as to present magnetic frustration <cit.>, have been extensively investigated in the last decade. Such nanofabricated networks have the potential to mimic, at room temperature <cit.>, properties previously observed only at very low temperatures in natural spin-ice pyrochlore crystals, such as the emergence of magnetic monopoles <cit.> and thermodynamic phase transitions <cit.>. In the most common square ASI system, the four nanomagnets at each vertex can adopt magnetic configurations with different energies <cit.>, as shown in Figure <ref>a. By describing the magnetic dipole of each nanomagnet as a magnetic charge dumbbell, it is possible to see that the least energetic configuration T1 has no residual magnetic charge and no magnetic dipole, whereas configuration T2 has no residual magnetic charge but a magnetic dipole at the vertex. Configurations T3 and T4 are the most interesting because they carry a residual magnetic charge resembling Nambu magnetic monopoles.
Such monopoles are always attached to an opposite charge by an energetic string composed of T2 configurations, which keeps the system charge balanced <cit.> (Figure <ref>b). When it comes to the goal of magnetic monopole information transport in a lower-dissipation system, compared to conventional electronics, such an energetic string poses a challenge <cit.>. To circumvent this constraint, geometries whose degenerate vacuum is composed of both T1 and T2 configurations have been investigated. These systems allow higher magnetic monopole mobility <cit.>, because they present a non-energetic string connecting magnetic monopoles <cit.>, resembling what has been called Dirac monopoles in natural crystals <cit.>. Some of the major issues that still need to be resolved are the difficulty of building a system ground-state manifold with zero magnetic charge at the vertices and the lack of a controlled method for selectively regulating magnetic charge mobility. In the context of reaching the ground-state manifold, recent research has looked into the use of nanomagnets thin enough to be almost superparamagnetic, which allows thermal activation close to room temperature <cit.>. However, the time frame reported so far to reach the lowest-energy state is too long to be useful in practical device applications. Furthermore, effective methods for manipulating magnetic monopole mobility are currently missing. Recently, we suggested a method yielding a reasonable mobility change that involves monopole screening by free electrons in metals <cit.>. That approach is limited by the permanent charge carriers of metallic thin films, resulting in non-reversibility. Controlled methods for fast, reversible monopole mobility adjustment in ASI would represent a significant next step in the development of magnetronics. A recent study <cit.> revealed an intriguing method of heating plasmonic nanomagnets using both continuous and pulsed laser light at the high power of 60 mW. The authors showed appreciable heating and a consequent nanomagnet magnetization change on timescales of μs and ns for continuous and pulsed light, respectively. In this paper, we provide a novel approach for tuning magnetic properties in ASI systems using considerably lower laser power, bypassing the nanomagnet heating process. Using an analytical model, we propose that low-energy photon incidence on traditional ASI nanomagnets can produce magnons, which alter the nanomagnet dipolar energy and hence decrease the monopole strength at the vertex, as illustrated in the cartoon of Figure <ref>c. We subsequently demonstrate experimentally, using magnetic force microscopy (MFM), a non-optical magnetic measurement technique, that such an effect can appreciably change the evolution of the vertex configurations and of the system hysteresis as a function of the external magnetic field. Micromagnetic simulations based on the material parameters and on estimates from the analytical model were used to support our findings. Besides providing a tool for magnetic monopole mobility modification and plasma formation, as will be further demonstrated, the observed behavior should be considered in general ASI investigations that use optical magnetic characterization, such as photoemission electron microscopy combined with magnetic dichroism (PEEM-XMCD) <cit.> and the magneto-optic Kerr effect (MOKE) <cit.>, since we show here that light can affect the ASI properties.
That could be the reason why the predicted ground states have not been fully demonstrated experimentally in some ASI investigations <cit.>. § MAGNON-PHOTON INTERACTION MODEL Here, we develop our theoretical model to study photon-scattering effects on the magnetization of Permalloy nanomagnets. Since we are working with a magnetized medium, previous models of photon scattering into magnons are taken into consideration when deriving our Hamiltonian <cit.>: Ĥ = - g ( b̂^†_q + b̂_q ) â^†_ω â_ω^', where g is the optomechanical coupling constant, q = ω - ω^' is the magnon frequency (ω and ω^' are the incident and scattered photon frequencies), and b̂ and â are the magnon and photon bosonic operators, respectively. Assuming that the photon scattering causes small disturbances (δŜ<<1) in the magnetization Ŝ = (Ŝ^x, Ŝ^y, Ŝ^z ), we can apply the Holstein-Primakoff transformation and obtain Ŝ^x = √(S/2)( b̂^†_q + b̂_q ), with S = |Ŝ|. Thus, eq. (<ref>) becomes: Ĥ = - G Ŝ^x â^†_ω â_ω^', where the optomagnonic coupling constant G = g√(2/S) is defined <cit.>. Thus, photons scattering into magnons can cause perturbations in the spin; these perturbations are small deviations of the projection of Ŝ in the xy plane in the limit δŜ <<1. Since G depends on the magnetization, it is important to note that the coupling shown in eq. (<ref>) only occurs in a medium whose magnetization is non-zero. Considering an isotropic, non-dissipative medium with a linear magnetization response M <cit.>, the permittivity tensor in this case is ε_i,j(M) = ε_0 (ε δ_i,j - if ∑_k ϵ_ijk M_k ), where ε_0 (ε) is the vacuum (relative) permittivity, f is a material-dependent constant and ϵ_ijk is the Levi-Civita tensor. Using the complex representation of the electric field E = (E^* + E)/2, the average energy is <cit.>: Φ = - (i f ε_0/4) ∫_V M(r)·[E^*(r)×E(r)] dr. Light undergoes a rotation of its polarization when it travels through a magnetic medium; this rotation is related to the permittivity tensor and is quantified by the Faraday angle θ_F = ω f M_S / (2c√(ε)), where c denotes the speed of light, ω the frequency of light, and M_S the saturation magnetization. Conversely, a local effective magnetic field can be produced via the permittivity tensor <cit.>: B_eff(r,t) = - (i f ε_0/4) ∫_V E^*(r,t)×E(r,t) dr. This field has an impact similar to that of spin waves, or magnons. As spin waves are closely related to the light's electric field, we may proceed further by quantizing the electric field, Ê = E_β â_β, where E_β denotes the β eigenmode of the electric field. A change in the orientation of the local magnetization may be caused by the field B_eff; this phenomenon is known as the inverse Faraday effect (IFE) <cit.>. On the other hand, the Holstein-Primakoff representation can be applied if we consider small spin variations. This allows us to derive eq. (<ref>) directly from eq. (<ref>) and, in turn, eq. (<ref>), where the optomagnonic coupling is expressed as: G^j_βγ = -i (ε_0 f/4ħ) ∑_mn ϵ_jmn ∫_V (M_j(r)/S_j(r)) E^*_βm E_γn dr. Specifically, we may describe G in terms of the saturation magnetization and the macro-spin norm, M_j/M_S = S_j/S, if we assume that the modes are homogeneous (Kittel modes) <cit.> (this is possible if the spins are in resonance). The macro spin of the nano-island is represented by the operator Ŝ. Decomposing the electric field's eigenmodes on a circular basis yields G = c θ_F λ/(4S√(ε)) <cit.>. For plane waves, λ is a factor close to 1 <cit.>, and G ∼ 1 Hz and S ∼ 10^10 are found in materials like Permalloy <cit.>.
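To make the scale of this coupling concrete, the following minimal sketch evaluates G = c θ_F λ/(4S√ε) for order-of-magnitude inputs. The specific numbers are illustrative assumptions (not values measured in this work), chosen to land near the G ∼ 1 Hz figure quoted above, and the result is treated as a frequency scale as in the text.

```python
import math

C_LIGHT = 2.998e8  # speed of light in vacuum (m/s)

def optomagnonic_coupling(theta_F, lam, S, eps_r):
    """G = c * theta_F * lam / (4 * S * sqrt(eps_r)), treated as a frequency scale."""
    return C_LIGHT * theta_F * lam / (4.0 * S * math.sqrt(eps_r))

# Illustrative, order-of-magnitude inputs (NOT fitted to the samples of this work):
theta_F = 3.0e2   # Faraday rotation per unit length (rad/m)
lam     = 1.0     # mode-overlap factor, close to 1 for plane waves
S       = 1.0e10  # macrospin norm of a single nanoisland
eps_r   = 5.0     # relative permittivity

G = optomagnonic_coupling(theta_F, lam, S, eps_r)
print(f"G ~ {G:.2f} Hz")  # lands near the ~1 Hz scale quoted in the text
```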
The constant G gives the relationship between the total magnon frequency and the photon count <cit.>. Based on this, we can estimate that the maximum magnon density per volume for low-power light is around 10^17 m^-3, and that the average magnon wavelength is 1 μm. According to equation <ref>, the spin-wave magnetic field created by this low-power light has an amplitude of around B_eff ∼ 10 mT. § RESPONSE TO MAGNETIC FIELD AS A FUNCTION OF WHITE AND CIRCULARLY POLARIZED LIGHT The studied samples are regular square ASI consisting of Permalloy nanomagnets fabricated on top of a silicon substrate (see Methods <ref>). We validated the reproducibility of our results by performing three sets of measurements in separate areas of 50 μm × 50 μm for the two specified samples. We concentrate our analysis on the results from sample q04; equivalent results for sample q08 are presented in the Supplementary Material. Figure <ref> shows the characterizations performed by magnetic force microscopy (MFM) at room temperature, with the sample in the remanent state after each stage of external magnetic field application. Because the nanomagnet dimensions used here are appropriate for devices exhibiting athermal behavior, the system maintains its magnetic configuration if not subjected to any external excitation. First, the samples were exposed to an external magnetic field to saturate the magnetization in the diagonal direction (top-right to bottom-left). The saturation is confirmed in the first of the blown-up MFM images (20 μm × 20 μm area) given in Figure <ref>a. The stray fields generated by the in-plane nanomagnet magnetization result in bright and dark patches arising from positive and negative out-of-plane magnetic force on the MFM tip. The other four frames were extracted from images obtained during the magnetization reversal process. They were acquired after steps of external magnetic field applied along the x-axis, in the sense opposite to the initial saturated magnetization. This approach was used to study the behavior of individual nanomagnet magnetization flips in relation to the external magnetic field and to the system's internal magnetization, which is primarily determined by geometrical frustration. Two main characteristics are observed as a function of the external magnetic field: the system's hysteretic behavior, obtained through the sum of the nanomagnet magnetizations, and the creation and annihilation of magnetic monopoles, obtained by vertex-configuration analysis, following the same procedure used in our recent works <cit.>. We then performed the above-mentioned characterization in larger areas of 50 μm × 50 μm, going from dark conditions to a sequence of white-light exposures with varying power. The light source was set at a distance large enough to prevent sample heating during the operation, with the temperature monitored by a commercial thermocouple. To verify that light-induced heating does not affect the ASI magnetization, we saturated the nanomagnet magnetization along the sample diagonal, with the vertices in the energetic T2 configuration, and left the sample under light exposure for 72 hours. After that period, MFM measurements showed that the imposed saturated state was still present, meaning that the light power was not sufficient to heat the system and thereby induce thermal dynamics. Figure <ref>b shows the hysteresis curves obtained by counting each nanomagnet's magnetization along the x-axis for each magnetic field step applied in the same direction.
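As an aside, the counting behind each hysteresis point can be sketched as follows: every island identified in the MFM frame is reduced to its in-plane orientation and the x-components are averaged. This is only an illustration of the procedure, not the authors' analysis script, and the angles below are made up.

```python
import numpy as np

def magnetization_x(island_angles_deg):
    """Average normalized M_x from the in-plane angles (degrees) of all islands in a frame."""
    mx = np.cos(np.radians(island_angles_deg))
    return mx.mean()

# One MFM frame reduced to island orientations: 0/180 deg for the horizontal sublattice,
# 90/270 deg for the vertical one (which does not contribute to M_x).
frame = np.array([0, 0, 180, 0, 90, 270, 0, 180])
print(round(magnetization_x(frame), 3))   # net M_x of this illustrative frame
```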
We illustrate the evolution of the nanomagnet magnetization as a function of external field under varied white-light power. Figure <ref>c shows the evolution of the percentage of magnetic vertices presenting monopole configurations under the same conditions. The presented results show a clear influence of the depolarized white-light power on the magnetic properties of the sample, with a noticeable decrease in the field required to initiate the evolution of magnetic monopoles and a decrease in the vertex population density as the light intensity increases. To study the effect of light polarization, we reproduced the previous experiment, this time using 100 W of white light with circular polarization. Figure <ref>d compares the measurements performed in the dark, under white light with 50 W of power, and under circularly polarized light from a 100 W source, whose intensity is cut in half by the polarizer as predicted by Malus's law. The large decrease in coercive field observed under polarized light is compatible with the theoretical prediction presented above for the enhancement of scattering between photons and magnons. We show later, by micromagnetic simulations, the influence of the generated magnons on the hysteresis process. A weaker effect was also observed using linearly polarized light, with a strong dependence on the relative direction between the polarization and the nanomagnet magnetization (not shown). § THERMODYNAMICS PRODUCED BY FOCUSED POLARIZED LIGHT As the last stage of the investigation, we look at possible thermodynamic effects, with nanomagnet magnetization flipping driven solely by light excitation and the characteristic ASI internal frustration. In this case, we used a micro-Raman spectroscopy setup to excite individual nanomagnets without applying a magnetic field. In Figure <ref>a, we used an argon laser with a wavelength of 632.8 nm and a beam diameter of 1.03 μm through a 50× objective lens with a numerical aperture of 0.75. The system's maximum laser power is 2.17 mW, but since we are interested in the low-power regime we used 0.5% and 5% of the maximum power, namely 10.85 μW and 0.108 mW, with each nanomagnet in a chosen row, marked in Figure <ref>a with a red dotted line, subjected to laser exposure for approximately 1 second. Figure <ref>b depicts the MFM images after laser exposure along the red dotted line. The images on the left were acquired after the saturated sample was exposed to a linearly polarized laser at 10.85 μW (top) and 0.108 mW (bottom), whereas the right panel shows the identical condition with a circularly polarized laser. It is easy to see that some vertices changed following the low-power laser exposure. At 10.85 μW, higher-energy T3 vertices (green squares) were observed for both linear and circular polarization. At 0.108 mW, even more energetic T4 configurations (blue squares) were obtained for both polarizations. Figure <ref>c shows the evolution of the vertex configurations as a function of laser power for both linear (top) and circular (bottom) polarization. In Figure <ref>c, it is possible to observe not only the formation of highly energetic vertex configurations at moderate laser power, but also the evolution of the vertex-density statistics toward the ratio T1/T2 ∼ 1/2, which is expected for degenerate systems and suggests the appearance of gauge fields with magnetic monopole plasma formation <cit.>.
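As an illustration of the vertex-configuration analysis behind these statistics, the sketch below classifies a single square-ASI vertex from the in-plane moments of its four islands. The island labels, encoding, and example values are our own simplification for illustration, not the analysis code used for the measurements.

```python
import numpy as np

# Each vertex joins four islands: W and E carry moments along +/-x, N and S along +/-y.
# "Into the vertex" means +x for W, -x for E, -y for N, +y for S.
INWARD = {"W": np.array([1, 0]), "E": np.array([-1, 0]),
          "N": np.array([0, -1]), "S": np.array([0, 1])}

def vertex_type(moments):
    """moments: dict {"N","E","S","W"} -> 2D unit vector of that island's magnetization."""
    n_in = sum(int(np.dot(moments[k], INWARD[k]) > 0) for k in moments)
    if n_in in (0, 4):
        return "T4"          # 4-in or 4-out: double monopole charge
    if n_in in (1, 3):
        return "T3"          # 3-in-1-out or 1-in-3-out: monopole charge
    net = sum(moments.values())
    return "T1" if np.allclose(net, 0) else "T2"   # 2-in-2-out: no net charge

# Example: a T3 (monopole) vertex
example = {"W": np.array([1, 0]), "E": np.array([-1, 0]),
           "N": np.array([0, -1]), "S": np.array([0, -1])}
print(vertex_type(example))  # -> "T3"
```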
This evolution of the vertex statistics is fully compatible with the theoretical framework described here, which states that increasing the power by a factor of ten results in a directly corresponding increase in the photon and magnon conversion densities. Magnon creation reduces the nanomagnet dipolar interaction, modifying the relation between J_1 and J_2 in a way similar to what was done previously in the literature <cit.> using an external magnetic field. The tests presented in this section were rigorously performed three times in different regions of the sample, resulting in data with an error margin of less than 2%. The optical-microscope setup with a focused laser allowed light to switch nanoislands because of the laser's higher photon density and coherence compared with the white light used in the earlier measurements. The experimental data suggest that, at low power, non-thermal excitations mediated by photon-assisted magnon generation are responsible for the magnetic switching. § MICROMAGNETIC SIMULATIONS To investigate the influence of magnons generated by photon interaction on the nanomagnet magnetization and, as a result, observe the effect of such a modification in an ASI system, we performed micromagnetic simulations using our system architecture and material parameters, as well as the generated magnon intensity obtained from the presented analytical model. The simulations were conducted using an open-access GPU-based micromagnetic simulation software (Mumax³) <cit.>. We first investigated the resonance frequency of a single Permalloy nanoisland as used in our studies, with dimensions of (3000 × 400 × 20) nm. To do this, we first saturated the magnetization along the nanomagnet's x-axis and then applied a sinc-shaped excitation of the form H_sinc = sin(x)/x perpendicular to the nanomagnet plane (z-axis) for 12 ns. Using the described technique, we found a nanomagnet magnon resonance frequency of 6.7 GHz, as presented in Figure <ref>a. This value is in good agreement with values previously reported in the literature <cit.> and is used for the simulation of the magnons generated in the ASI system. As mentioned in Section <ref>, magnons resulting from photon scattering have an associated effective field on the order of 10 mT. To simulate the effect of magnons on the nanomagnet magnetization reversal, we applied an AC external field with the analytically predicted intensity of 10 mT and the simulated frequency of 6.7 GHz along the z-axis, while performing a hysteresis cycle with a DC field along the x-axis (the nanomagnet length). Figure <ref>b demonstrates that the nanomagnet coercivity decreases significantly in the presence of magnons. We next applied the same procedure to a small ASI lattice with the same topology as in our experiments. Figure <ref>c shows reductions in coercivity and saturation magnetization under external field reversal in the presence of magnons, qualitatively similar to the findings given in Figure <ref>b. § CONCLUSIONS This work describes an analysis of light exposure in an artificial spin ice system made of conventional athermal Permalloy nanomagnets. We observe changes in the ASI hysteretic behavior, as well as in the evolution of magnetic monopoles, by characterizing the magnetization reversal as a function of external field under both white and circularly polarized light. In the examination of possible thermal dynamics under light exposure, white light was not sufficient to produce nanomagnet magnetization flips after 72 hours, even in an energetic saturated state.
We then exposed each nanomagnet to a low-power circularly polarized laser for 1 second, after complete magnetization saturation in the diagonal direction, and observed numerous nanomagnet flips, demonstrating ASI thermodynamics driven by light. As the best result, at 0.1 mW the new vertex configurations obtained were close to the ratio expected for the degenerate system, indicating the formation of a magnetic monopole plasma under light exposure. We explain our findings of nanomagnet magnetization switching under photon excitation using an analytical model that describes magnon creation in nanomagnets via photon conversion. Using micromagnetic simulations, we determined the nanomagnet magnon frequency. Finally, using the magnon intensity from the analytical model, we simulated the influence of such magnons on the ASI system. The good qualitative match between the experimental and theoretical hysteresis curves indicates that our model successfully explains the observed change in monopole mobility due to the action of moderate-power light. The same change in the nanomagnet dipole caused by a focused, moderate-power, circularly polarized laser beam may be responsible for the ASI thermodynamics imprinted by light. The authors thank the Brazilian agencies CNPq 432029/2018-1, FAPEMIG and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Finance Code 001 - for the financial support. § METHODS §.§ Sample fabrication To prepare the sample, a trilayer of Ta 3 nm / Py 20 nm / Ta 3 nm was sputtered on top of a p-type silicon substrate, with Ta acting as both an adhesion and a capping layer. ASI samples in square geometry were fabricated with a total lattice size of 100 μm × 100 μm. The samples q04 and q08 were obtained with different lattice parameters "a", namely 800 nm and 1200 nm, respectively. The nanomagnet size of 3000 nm × 400 nm × 20 nm was carefully chosen to maximize the magnetic sensitivity in the magnetic force microscope (MFM) while ensuring a magnetic monodomain in each nanomagnet. §.§ Magnetic force measurements The experimental procedure involved initially saturating the nanomagnets' magnetization in the positive x-axis direction, followed by steps of magnetic field up to saturation in the opposite direction (negative x-axis), without the presence of a light source. The procedure was repeated for several levels of white-light power: 25%, 50%, 75%, and 100%. MFM images were consistently captured between each field application in a sample area of 50 μm × 50 μm. To improve experimental reliability, each sample's saturation process was performed three times in various places, with an error margin of 3%. §.§ Micromagnetic simulations Micromagnetic simulations of square lattices with nine cells were performed with the open-source GPU-based software MUMAX^3 <cit.>. The Permalloy numerical parameters used in the simulations were saturation magnetization M_S=860× 10^3 A m^-1, polarization P=0.5, exchange constant A_ex=13× 10^-12 J m^-1 and Gilbert damping α=0.01. The finite-difference discretization used for the iterations was based on the Landau-Lifshitz-Gilbert (LLG) equation (Equation (<ref>)), with a cubic cell of 5 nm × 5 nm × 5 nm: ∂M/∂t = γ H_eff × M + (α/M_S) M × ∂M/∂t - u ∂M/∂y + (β/M_S) M × ∂M/∂y. We investigated the magnon resonance in a single nanomagnet using the MUMAX^3 catalog technique (Vansteenkiste, 2011).
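The resonance frequency quoted above is typically extracted by Fourier-transforming the average magnetization saved by Mumax³ after the sinc excitation. The short sketch below assumes the default table layout (time in column 0, average m_x, m_y, m_z in columns 1-3) and a uniform sampling step; it is post-processing of the solver's output, not part of the solver itself.

```python
import numpy as np

# Post-process a Mumax3 "table.txt": column 0 is time (s), columns 1-3 are <m_x,y,z>.
# (Column layout assumed from the default table output.)
data = np.loadtxt("table.txt")
t, mz = data[:, 0], data[:, 3]

dt = t[1] - t[0]                           # sampling step of the saved table
spectrum = np.abs(np.fft.rfft(mz - mz.mean()))
freqs = np.fft.rfftfreq(len(mz), d=dt)     # Hz

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"FMR peak at {peak/1e9:.2f} GHz")   # ~6.7 GHz reported for the 3000x400x20 nm island
```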
We then used the intensity from our analytical model and the frequency of the major resonance peak to apply an out-of-plane AC magnetic field to the nanomagnet and to the ASI system, while performing a hysteresis cycle with an in-plane DC field. § EXTENDED MATERIAL Here we show the results obtained for the second sample, denominated q08. It is possible to see that its behavior under white light is similar to that presented for sample q04 in Figure <ref>.
http://arxiv.org/abs/2406.18301v1
20240626123512
MSR-86K: An Evolving, Multilingual Corpus with 86,300 Hours of Transcribed Audio for Speech Recognition Research
[ "Song Li", "Yongbin You", "Xuezhi Wang", "Zhengkun Tian", "Ke Ding", "Guanglu Wan" ]
eess.AS
[ "eess.AS", "cs.CL", "cs.SD" ]
§ ABSTRACT Recently, multilingual artificial intelligence assistants, exemplified by ChatGPT, have gained immense popularity. As a crucial gateway to human-computer interaction, multilingual automatic speech recognition (ASR) has also garnered significant attention, as evidenced by systems like Whisper. However, the proprietary nature of the training data has impeded researchers' efforts to study multilingual ASR. This paper introduces MSR-86K, an evolving, large-scale multilingual corpus for speech recognition research. The corpus is derived from publicly accessible videos on YouTube, comprising 15 languages and a total of 86,300 hours of transcribed ASR data. We also describe how to use the MSR-86K corpus and other open-source corpora to train a robust multilingual ASR model that is competitive with Whisper. MSR-86K will be publicly released on HuggingFace[https://huggingface.co/datasets/Alex-Song/MSR-86K], and we believe that such a large corpus will pave new avenues for research in multilingual ASR. Index Terms: speech recognition, multilingual, corpus § INTRODUCTION Thanks to the rapid development of deep learning, research in speech recognition has gradually shifted from hybrid systems based on Hidden Markov Models to end-to-end ASR systems entirely built on neural networks <cit.>. In fact, the swift progress of end-to-end ASR has also benefited from the contribution of open-source corpora, such as the commonly used LibriSpeech <cit.> and GigaSpeech <cit.> for English, as well as AISHELL <cit.> and WenetSpeech <cit.> for Chinese. These open-source corpora have facilitated research in the field of speech recognition by both academia and industry. In the multilingual domain, the Common Voice <cit.> project, alongside the Multilingual LibriSpeech (MLS) <cit.> corpus released by Meta, has greatly promoted research in multilingual ASR. In recent times, the success of OpenAI's Whisper <cit.> model has demonstrated that big data combined with large models can yield improved performance. However, Whisper has not made its training data public, hindering researchers' ability to replicate the results. The MSR-86K corpus introduced in this paper aims to bridge this gap, further advancing research in multilingual ASR. Existing multilingual ASR corpora have two main shortcomings: firstly, most corpora are dominated by English and Western European languages, lacking sufficient linguistic diversity. Secondly, although some corpora have a broad coverage of languages, the duration of recordings for each language is often minimal, insufficient for building a usable ASR system. The MSR-86K corpus addresses these issues by ensuring substantial coverage of languages and providing enough data per language to independently train a robust ASR system. We constructed a series of protocols to automatically retrieve publicly accessible videos from YouTube and set up a data processing pipeline to automatically generate the MSR-86K corpus, significantly reducing the costs associated with data collection and labeling. Table 1 illustrates the distinctions between our MSR-86K and other public multilingual ASR corpora. Whisper is an excellent multilingual model, but its best-performing variant has a large number of parameters, which results in slower inference speed and greater memory overhead.
In this paper, we introduce how to use easily accessible unsupervised data for pre-training, followed by fine-tuning with MSR-86K and other open-source corpora, to build a robust multilingual ASR model that is faster, smaller in size, and has performance that matches or even exceeds that of the Whisper large model. The rest of the paper is organized as follows. In Section 2, the process of constructing the MSR-86K corpus is described. In Section 3, we introduce our experiments and discussions. Finally, the paper is concluded in Section 4. § CORPUS CONSTRUCTION This section describes the major steps involved in creating the MSR-86K corpus, and Figure 1 illustrates this process. §.§ Data collection Creating keyword lists. First, we generate a preliminary list of keywords by querying Wikipedia articles in the target language. Recognizing the presence of numerous non-target-language terms within these entries, we then implement a keyword filtering module to refine our list. The module selectively filters and retains terms that are likely to be significant keywords, ensuring relevance in the target language. Retrieving video IDs. Next, we use the YouTube search engine to search the keyword list, obtaining a list of video IDs. Since different keywords may lead to the same videos, it is necessary to deduplicate the video ID list. Because we intend to share the dataset, we further retain only videos that are available for public download, removing private, paid, and restricted videos. Detecting video subtitles. In order to guarantee the quality of our corpus annotations to the greatest extent, we implement a subtitle detection process for videos, selecting those that feature manually uploaded subtitles. The remaining videos, which lack subtitles, are used as unsupervised data sources, utilizing solely their audio components. Downloading audio and subtitles. We download the audio tracks of videos and their corresponding manually uploaded subtitles through the YouTube download engine[https://github.com/yt-dlp/yt-dlp] as the primary data source for MSR-86K. Additionally, we download the audio from some videos without subtitles to serve as the data source for unsupervised pre-training. Each audio file is converted into single-channel wav format, sampled at a 16 kHz rate. §.§ ASR corpus construction Text normalization. Video subtitles contain several non-semantic symbols. To streamline further processing, we need to normalize the text. This involves case conversion, removing punctuation and emojis, converting numbers, and eliminating special symbols associated with specific languages. Forced alignment. Even though video subtitles come with timestamps, many of them are not accurate, necessitating a re-alignment of the audio with the subtitles. Building on prior work, we use a pre-trained ASR model based on the connectionist temporal classification (CTC) <cit.> criterion for alignment, and take the median of the alignment scores as the cutoff for filtering. Duration balance. Due to memory constraints, subtitles are usually segmented for forced alignment. However, each segment does not necessarily correspond to the exact endpoint of the speaker's utterance, resulting in a duration distribution skewed toward short segments. To balance the audio duration and preserve the integrity of the speech content as much as possible, we conducted voice activity detection (VAD) based on the output of the CTC model, and limited the maximum duration to 20 seconds.
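Before moving on, the text-normalization step above can be sketched as below. This is only indicative: the exact rules, the number conversion, and the per-language exceptions used for MSR-86K are not specified here.

```python
import re
import unicodedata

PUNCT = re.compile(r"[^\w\s]", re.UNICODE)                   # drop punctuation, keep letters/digits
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")   # rough emoji/symbol ranges

def normalize(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)   # unify width/compatibility forms
    text = text.lower()                          # case folding
    text = EMOJI.sub(" ", text)
    text = PUNCT.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()     # collapse whitespace

print(normalize("Hello, WORLD!! 👋  Visit us…"))  # -> "hello world visit us"
```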
Figure 2 shows the duration statistics before and after VAD. LID filter. After reviewing the outcomes of forced alignment, we noticed that there were still some inaccuracies. The most common issues included mismatched languages between the audio and the subtitles, subtitles that were categorized as descriptive captions, and audio that was either purely music or completely silent. Consequently, we develop a language identification (LID) model that effectively filters out sentences where discrepancies exist between the audio and the subtitles, significantly improving the quality of the data. ASR filter. To further improve data quality, we train an ASR model using both existing open-source data and the data filtered by the LID model. This ASR model is used to decode the data processed by the LID filter and calculate the word error rate (WER). By filtering out segments with higher WER, we ensure greater accuracy in our dataset annotations. Data split. Based on the forced alignment scores, LID scores, and WER, we select a portion of the data with the highest quality to serve as the development set, while the remaining data is allocated for the training set. The distribution of durations across different languages is detailed in Table 2. For the test set, we use the test portion of the Common Voice corpus, which has undergone stringent manual verification to ensure the high quality required for multilingual ASR testing. §.§ Unsupervised corpus construction For audio without subtitles, we employ a sound event detection model to filter out music and noise, and segment the audio at points of silence into clips shorter than 30 seconds. Ultimately, we obtain a total of 200k hours of unsupervised data. § EXPERIMENTS AND DISCUSSIONS In this section, we first introduce the evaluation of the MSR-86K corpus, assessing the overall quality of the corpus. Secondly, we describe how to use unsupervised data for pre-training, and then fine-tune with MSR-86K and other open-source data to obtain a non-autoregressive multilingual speech recognition model that outperforms the whisper large model. §.§ Data evaluation To evaluate the quality of the MSR-86K corpus, we trained a monolingual model using the training set for each language. Then, we performed Beam Search decoding on the MSR-86K development set and calculated the word error rate and character error rate (CER). Our evaluation model utilizes the Transformer-CTC architecture, in which d^model=768, d^ffn=3072, d^head=12 , num_layers=12. In addition, a convolutional front-end was used to sub-sample the acoustic features by a factor of 6. Moreover, each language is equipped with its own respective vocabulary, which employs a byte-level byte-pair encoding (BPE) model with a vocabulary size of 2000. As shown in Table 2, the monolingual ASR models for 15 languages all achieved a WER or CER below 10% on their respective development sets, with some languages reaching below 5% , and an average error rate of 6.42% across all languages. Considering that our evaluation model does not employ state-of-the-art ASR models and given the spontaneous nature of YouTube audio, an overall error rate of 6.42% meets our expectations, indicating that the data quality has reached a relatively ideal level. Therefore, in practice, the development set of MSR-86K can serve as a multilingual test set of the YouTube domain for other open-source corpora. 
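The WER computation used both for the ASR filter of Section 2.2 and for the evaluation above can be sketched with a plain edit distance over word sequences; the 30% cutoff below is a placeholder, as the threshold actually used is not stated.

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate via Levenshtein distance on word sequences."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

def asr_filter(segments, max_wer=0.3):
    """segments: iterable of (subtitle_text, asr_hypothesis); keep low-WER segments."""
    return [(ref, hyp) for ref, hyp in segments if wer(ref, hyp) <= max_wer]

kept = asr_filter([("this is a test sentence", "this is a test sentense"),
                   ("good morning", "completely different audio")])
print(kept)   # only the first pair survives the (placeholder) 30% WER cutoff
```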
§.§ Multilingual ASR construction Whisper is an excellent multilingual model that performs well across mainstream languages around the world. However, Whisper has not made its training data public, making it difficult for researchers to replicate its results. Our MSR-86K corpus effectively bridges this gap and can facilitate researchers' studies on large-scale multilingual speech recognition. Additionally, the best-performing model of Whisper has a high parameter count of up to 1.55 billion, which results in slower inference speed and also requires more memory and computational resources. In this section, we explain how to leverage easily accessible unsupervised data for pre-training, and then fine-tune with MSR-86K and other existing multilingual open-source corpora to develop a multilingual ASR model that has a smaller parameter size, faster speed, and performance that is comparable to or even surpasses that of Whisper. The workflow of our multilingual ASR model training is illustrated in Figure 3. Data preparation. Whisper (v2) was trained using 680k hours of annotated data, while Whisper large-v3 has reached a scale of 5 million hours, which is daunting for the average researcher. As illustrated in Table 3, we employed our contributed MSR-86K and various other open-source multilingual corpora for our transcribed data. In addition, to reduce the model's dependency on transcribed data, we explored unsupervised pre-training methods. By leveraging the data listed in Table 3 and incorporating the unsupervised data detailed in Section 2.3, we amassed a comprehensive corpus of 400k hours. Pre-training. We first conducted unsupervised pre-training with the prepared data. Given the superior performance of HuBERT <cit.>, we chose it as the criterion for unsupervised pre-training. We used a Transformer encoder similar to the one described in Section 3.1 as the acoustic encoder, where d^model=1024, d^ffn=4096, d^head=16, num_layers=24. Fine-tuning. Next, we fine-tuned the pre-trained HuBERT model using the dataset presented in Table 3, with CTC as the training criterion. Similar to Whisper, our vocabulary is shared across all languages. We trained a byte-level BPE model with a vocabulary size of 10,000 using the texts from the corpora presented in Table 3 to establish the lexicon for CTC, to which we added an extra token to signify the blank symbol. LID Prompt-tuning. Multilingual ASR typically encounters two usage scenarios. The first scenario is where the language of the speech to be recognized is not known in advance, necessitating the model to identify it autonomously. In the second, the language information is provided in advance, guiding the model to bolster its performance in recognizing the specified language. To enable the CTC model to accommodate both scenarios, we employed the method proposed in <cit.>, using language identity (LID) as a prompt to enhance the recognition performance of the target language. NNLM Training. To further enhance the performance of the HuBERT-CTC multilingual ASR model, we trained a simple LSTM-based language model using the text from the corpora in Table 3, and employed shallow fusion for decoding. Through the four steps mentioned above, we obtained a high-performance multilingual ASR model with a total parameter size of 362M, which is substantially smaller than the Whisper large model, making it more suitable for deployment.
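The shallow-fusion step can be sketched as a simple rescoring rule: each hypothesis's acoustic (CTC) log-probability is combined with the external LM log-probability through an interpolation weight. The weight, the dummy LM, and the scores below are placeholders for illustration, not the configuration used in this work.

```python
def shallow_fusion_rescore(hypotheses, lm_logprob, lam=0.3):
    """hypotheses: list of (token_sequence, ctc_logprob).
    Re-rank by ctc_logprob + lam * lm_logprob(tokens)."""
    scored = [(tokens, am + lam * lm_logprob(tokens)) for tokens, am in hypotheses]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Dummy LM standing in for the trained LSTM language model (placeholder behaviour only):
def toy_lm_logprob(tokens):
    vocab = {"hello": -1.0, "world": -1.2, "wort": -8.0}
    return sum(vocab.get(t, -10.0) for t in tokens)

beams = [(["hello", "wort"], -4.0),    # slightly better acoustic score
         (["hello", "world"], -4.3)]
best = shallow_fusion_rescore(beams, toy_lm_logprob)[0]
print(best)   # the LM term flips the ranking toward "hello world"
```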
§.§ Multilingual ASR evaluation Due to the differences in the scale of training data, our primary benchmark for the multilingual ASR model is the Whisper large-v2 model. We use the pipeline provided by HuggingFace[https://huggingface.co/openai/whisper-large-v3] for inference, testing both models when LID is provided in advance and when it is not. Most previous papers have not tested the performance of Whisper without LID, and we believe that the results without LID are also meaningful. The recognition results for all languages were subjected to text normalization prior to the calculation of WER or CER. It is important to note that for Chinese, converting from traditional to simplified characters is necessary for calculation accuracy. As shown in Table 4, our multilingual ASR model outperforms the Whisper medium and large-v2 models across all languages, regardless of whether the LID is provided in advance or not, despite being trained with less transcribed data. It is worth mentioning that Whisper's performance significantly declines on the Common Voice English test set when the LID is not specified beforehand. This performance dip can be largely ascribed to erroneous LID predictions, which exacerbate the inherent error propagation found in autoregressive models, culminating in less-than-ideal outcomes. On the other hand, our model demonstrates robustness and maintains stable performance, unaffected by the presence or absence of LID information. The results in Table 5 once again demonstrate that our model surpasses Whisper on the MSR-86K development set, which is indicative of the advanced nature of our algorithms. § CONCLUSIONS In this paper, we introduce the MSR-86K corpus, an evolving, multilingual corpus with 86,300 hours of transcribed audio for speech recognition research. We believe that such a large-scale corpus will propel research in multilingual speech algorithms. We also hope that more researchers will contribute to open-source data and work together to advance the development of the intelligent speech field. Additionally, we explain how to effectively leverage readily available unsupervised data, MSR-86K, and other open-source corpora to train a robust ASR model that is competitive with Whisper in terms of performance but smaller in size and faster, allowing everyone to use open-source data to train their own multilingual ASR model.
http://arxiv.org/abs/2406.17713v1
20240625165355
Multi-objective Binary Differential Approach with Parameter Tuning for Discovering Business Process Models: MoD-ProM
[ "Sonia Deshmukh", "Shikha Gupta", "Naveen Kumar" ]
cs.NE
[ "cs.NE" ]
[1]Assistant Professor, Department of Computer Science and Information Technology, KIET Group of Institutions, Delhi-NCR, India [2]Associate Professor, Department of Computer Science, S.S. College of Business Studies, University of Delhi, Delhi, India [3]Professor, Department of Computer Science, University of Delhi, Delhi, India Correspondence should be addressed to Shikha Gupta: shikhagupta@sscbsdu.ac.in sonia.cs.du@gmail.com (A. Sonia Deshmukh), shikhagupta@sscbsdu.ac.in (B. Shikha Gupta), nk.cs.du@gmail.com (C. Naveen Kumar) MoD-ProM Multi-objective Binary Differential Approach with Parameter Tuning for Discovering Business Process Models: MoD-ProM A. Sonia Deshmukh1, B. Shikha Gupta2 and C. Naveen Kumar3 June 25, 2024 § ABSTRACT Process discovery approaches analyze business data to automatically uncover structured information, known as a process model. The quality of a process model is measured using quality dimensions: completeness (replay fitness), preciseness, simplicity, and generalization. Traditional process discovery algorithms usually output a single process model. A single model may not accurately capture the observed behavior and may overfit the training data. We have formulated the process discovery problem in a multi-objective framework that yields several candidate solutions, from which the end user can pick a suitable model based on the (possibly varying) local environmental constraints. We consider the Binary Differential Evolution approach in a multi-objective framework for the task of process discovery. The proposed method employs dichotomous crossover/mutation operators. The parameters are tuned using grey relational analysis combined with the Taguchi approach. We have compared the proposed approach with well-known single-objective algorithms and the state-of-the-art multi-objective evolutionary algorithm, the Non-dominated Sorting Genetic Algorithm (NSGA-II). An additional comparison via a weighted average of the quality dimensions is also undertaken. Results show that the proposed algorithm is computationally efficient and produces diversified candidate solutions that score high on the fitness functions. It is shown that the process models generated by the proposed approach are superior to, or at least as good as, those generated by the state-of-the-art algorithms. § INTRODUCTION Processes are ubiquitous in any organization. An efficient organization is built on processes that run in symphony to achieve growth and customer/employee satisfaction. In the present digital era, organizations maintain process execution information in the form of transaction logs that are amenable to analysis. However, amidst routine activities, an organization may not analyze the effectiveness of the processes being followed. Process mining aims to extract non-trivial knowledge and useful insights from the data recorded by information systems, stored in the form of an event log. In the past decade, process mining adoption has expanded considerably, evidenced by numerous industry and academic use cases, especially in auditing and healthcare, with the field maturing through enhanced tools and techniques <cit.>. The prominent process mining challenges include process discovery, conformance checking, and enhancement. Process discovery algorithms build a process model from a given event log <cit.>. Conformance checking verifies the goodness of the discovered process models.
Enhancement techniques extend or improve existing processes by identifying and removing bottlenecks, finding deviations, recommending adjustments, and repairing processes using the information in an event log <cit.>. The present work is focused on the challenge of the process discovery. Process discovery concerns itself with extracting information on existing processes to recognize the bottlenecks, deviations, and inefficiencies in the day-to-day process workflows, providing concrete steps toward business process improvement. The last decade has seen several process discovery techniques that optimize one or more quality metrics, namely, completeness (also known as replay fitness <cit.>), preciseness, simplicity, and generalization or their weighted function. Typically, process discovery algorithms output a single model. However, a single process model may not always describe the recorded behavior of the log effectively and may be a consequence of over-fitting the training data. In this paper, we present Multi-objective Differential approach in Process Mining (MoD-ProM), a process discovery algorithm that generates several competing process models, representing different trade-offs in the quality dimensions. The present work formulates process discovery as a multi-criterion problem. The proposed approach applies the Differential Evolution algorithm and optimizes Completeness and Generalization quality metrics to output several candidate process models. Subsequently, the solutions may either be evaluated by a domain expert to best suit the situation at hand or be chosen by the user based on his/her preference. The contributions of this proposal are: * A novel application of differential evolution approach for discovering a Pareto-front of the process models. * We adapted a binary version of the multi-objective differential evolution algorithm and used dichotomous operators <cit.>. * The proposed algorithm (MoD-ProM) is evaluated on ten synthetic and four real-life event logs, and results are compared with the state-of-the-art algorithms. * The parameters are tuned using grey relational analysis combined with the Taguchi approach <cit.>. * The computation of fitness functions (completeness and generalization) has been reformulated in terms of the causality relation matrix. The results reveal that the proposed approach (MoD-ProM) outperforms the compared algorithms regarding the quality of the process model. Compared to Non-dominated Sorting Genetic Algorithm II (NSGA-II) <cit.>, the proposed algorithm exhibits a lower computational cost. The competing solutions (Pareto set) generated by the proposed approach are better than the non-dominated solutions generated by NSGA-II. The remainder of this paper is organized as follows: section 2 outlines the basic concepts related to process discovery and the related work. Section 3 describes the solution strategy, and section 4 presents the results of the experiments. Finally, section 5 gives the conclusion of the paper. § BACKGROUND AND RELATED WORK §.§ Process Discovery Process discovery is an evolving domain that leverages event logs to analyze business processes and present factual insights. An event log is the starting point for process discovery algorithms and represents a business process using the case notation to correlate events. A ‘‘case” in this notation refers to an instance of a process and is also known as a trace. Each case is assigned a unique ID, called the Case ID. 
An instance of a process may involve multiple activities or tasks over many days. An occurrence of a task in the context of a particular process instance (case), along with its timestamp, is called an event. Table <ref> gives an example of an event log. In this example, 101, 102, and 103 represent the Case ID of three process instances, and T_1, T_2, …, and T_7 represent the various tasks carried out in the system. §.§.§ Visualisation of a Process Model A process model can be discovered from the given event log and may be visualized in various forms such as Business Process Modelling Notation (BPMN models), Petri nets, and Data Flow Graphs (DFGs), etc. In this paper, the discovered process model is graphically represented as a Petri net, a popular method for representation. A Petri net is a bipartite graph, composed of nodes, tokens, and directed arcs. A node could be a place (denoted by a circle) or a transition (denoted by a square). The places and the transitions are joined by directed arcs. For example, in the following figure, p_1 and p_2 are places and t_1 is a transition. A transition is also called a task. The token is the information that needs to be processed. Each place can hold zero or more tokens. In the above figure, the place p_1 holds a single token. The directed arcs can transfer one token. Transitions cannot store tokens. Arcs connect (input) places to transitions and transitions to (output) places. The state of a Petri net is given by its assignment of tokens to places. A transition is said to be enabled if each input place holds at least one token. In the following figure, t_1 transition is enabled. An enabled transition may fire at any time. When fired, the tokens in the input places are moved to the output places of the transition. Firing of a transition results in a new state of the Petri net. The following figure shows the change in the above Petri net after transition t_1 fires. A transition cannot be enabled if a token is absent (missing token) at any input place. For example, in the following figure, transition t_1 cannot be enabled. Figure <ref> depicts a Petri net that conforms to the example event log in Table  <ref>. §.§.§ Well-known Algorithms for Process Model Discovery State-of-the-art process discovery techniques include α <cit.>, α^+ <cit.>, Multi-phase miner <cit.>, Heuristics miner <cit.>, Genetic process mining (GPM) <cit.>, α^++ <cit.>, α^# <cit.>, α^* <cit.>, Fuzzy miner <cit.>, Inductive Logic Programming (ILP) <cit.> algorithms <cit.>. Other algorithms in the domain of process discovery include Evolutionary tree miner (ETM) <cit.>, Inductive miner <cit.>, Multi-paradigm miner <cit.>. <cit.> proposed a hybrid process mining approach that integrates the GPM, particle swarm optimization (PSO), and discrete differential evolution (DE) techniques to extract process models from event logs. <cit.> proposed the Fodina algorithm, an extension of the Heuristic miner algorithm <cit.>. <cit.> proposed an extension of the ETM algorithm <cit.> that discovers a collection of mutually non-dominating process trees using NSGA-II <cit.>. This algorithm optimizes replay fitness, precision, generalization, simplicity, and the number of low-level edits. §.§.§ Motivation for the Proposed Algorithm Usually, state-of-the-art process discovery algorithms output a single process model that may overfit the training data. To capture the observed behavior more accurately, we propose a multi-objective algorithm for process discovery. 
The proposed approach yields several candidate solutions. Subsequently, the solutions may either be evaluated by a domain expert to best suit the situation at hand or be chosen by the user based on the local environmental constraints (possibly varying). The proposed algorithm formulates the problem of process model discovery in a multi-objective framework using the Differential evolution approach <cit.>. Differential evolution (DE) is a versatile and stable evolutionary algorithm. It evolves individuals through the perturbation of population members with scaled differences of distinct population members. DE algorithm has consistent robust performance and is suitable for solving various numerical optimization problems <cit.>. §.§ Multi-objective Binary Differential Evolution The proposed algorithm employs a binary version of the differential evolution approach to suit the process mining domain. While DE was initially designed to operate in continuous space, <cit.> proposed a Binary DE (BDE) algorithm based on dichotomous mutation and crossover operators. The authors <cit.> verified that compared to other notable BDE variants, the dichotomous BDE improves the diversity of the population and enhances the exploration ability in the binary search space. Also, it has been shown that as compared to other BDE variants, the dichotomous algorithm does not involve any additional computation cost and is faster than other variants of BDE <cit.>. The past decade has seen the application of the DE approach to problems where the optimization of multiple objectives is required. <cit.> first proposed a DE-based approach for multi-objective real-coded optimization problems. According to <cit.>, in the case of binary-coded optimization problems, multi-objective BDE algorithms explore the decision space more efficiently than other multi-objective evolutionary algorithms. Subsequently, multi-objective BDE algorithms were also proposed <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. § MATERIALS AND METHODS A process discovery algorithm is a function that maps an event log onto a process model that best represents the behavior seen in the event log. In the present work, a process model is represented by a causality relation matrix C= (c_t_1,t_2), where t_1, t_2 ∈ [1, n] represent the tasks, c_t_1,t_2∈{0,1}, and n is the number of tasks in the given event log. That is, an individual in the population is binary-coded. We, therefore, adapted a binary version of the multi-objective differential evolution algorithm using dichotomous operators <cit.>. The steps for the proposed multi-objective differential approach for process mining (MoD-ProM) are outlined in Algorithm <ref>. These steps are explained in the following subsections. §.§ Initialization The given event log E with n tasks is first consolidated into a dependency measure matrix D indicating the degree of dependencies between tasks <cit.>. Considering the example in Figure <ref>, where T_1, T_2, …, T_7 represent the tasks. A dependency exists between activities T_1 and T_2 if, in a trace, either T_1 directly precedes T_2 or vice versa. This is indicated by the presence of either the strings T_1 T_2 or T_2 T_1 in a process instance (trace) of the event log. The strength of dependency is proportional to the frequency of occurrence of these strings. In the example log (Figure  <ref>), T_1 directly precedes T_2 whereas the string T_2 T_1 does not occur at all. That is, in the given system, task T_1 is more likely to be the cause of task T_2 than vice versa. 
Dependency measure is computed by counting the length-one-loops (for example, T_1 T_2), self-loops (for example, T_1 T_1), length-two-loops (for example, T_1 T_2 T_1), and parallel tasks (for example, T_1 T_2 and T_2 T_1 occur an equal number of times). In the example log (Figure  <ref>), T_5 and T_6 are parallel tasks and T_1 T_2 is a length-one-loop <cit.>. As proposed by <cit.>, the present work represents a process model as a causality relation matrix. To represent a process model, <cit.> have favored a causality relation matrix over the more popular Petri net since representing the population individual as a causality relation matrix makes it easier to initialize the population and define the genetic operators. While a causality relation matrix can be directly derived from the information in the event log, in Petri nets, there are places whose existence cannot be derived directly from the event log <cit.>. The mapping between a Petri net and a causality relation matrix is detailed in <cit.>. We have graphically depicted both representations for an example event log in Section <ref> titled "Process Model Representation". §.§ Objective Functions and Fitness Evaluation In the proposed algorithm, we use a novel combination of completeness and generalization as objective functions. Completeness is an important quality dimension because a discovered process model is expected to describe the behavior stored in the log. Completeness is the process of computation of all the parsed tasks while replaying the traces of the log in the model. The missing tokens in a trace and the extra ones left behind during parsing (unconsumed tokens) contribute to the penalty value. Generalization shows whether the process model accurately represents the system as it is and is not "overfitting" to the behavior observed in the event log <cit.>. Completeness <cit.> and generalization <cit.> are computed as in Algorithms <ref> and <ref> respectively. The algorithms make use of the following function for a given event log E: follows(t_1, t_2, E) = 1 if t_1t_2 is length-one-loop in E 0, otherwise follows_k(t_1, t_2, E) = 1 if t_2 is the k^th task after t_1 in E 0, otherwise The present proposal performs an additional analysis of the discovered process models by evaluating their preciseness and simplicity values. The preciseness value of a model is relative to an event log and quantifies the behavior existing in the model but not observed in the event log<cit.>. A process model with a high precision value is not expected to show behavior not observed in the event log <cit.>. Completeness and preciseness only consider the relationship between the event log and the process model. However, just a portion of all potential behavior that the system permits is recorded in the event log. Simplicity, instead of telling about the behavior observed in the event log, shows the internal structure of the discovered model. Preciseness <cit.> and simplicity <cit.> values are computed as in Algorithms <ref> and <ref> respectively. §.§ Constraints and Decision Variables For a given event log E, Dependency measure matrix D= (D(t_1, t_2)) is used to generate causality relation matrices C^i= (c_t_1,t_2^i) where i ∈ [1, N] , t_1, t_2 ∈ [1, n], N is the population size (Algorithm <ref>). The dependency measure matrix and the causality matrix correspondingly represent the constraints and the decision variables for the problem. 
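A simplified sketch of turning direct-succession counts from an event log into a Heuristics-miner-style dependency measure is shown below; the full measure used in the cited work also accounts for self-loops, length-two loops, and parallel tasks, which are omitted here for brevity.

```python
from collections import defaultdict

def dependency_matrix(traces):
    """traces: list of task sequences, e.g. [["T1","T2","T5"], ...].
    Returns D[a][b] in [0,1): a Heuristics-miner-style dependency of a -> b
    based only on direct-succession counts (loops/parallelism handled in the full measure)."""
    follows = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1
    tasks = {t for trace in traces for t in trace}
    D = {a: {b: 0.0 for b in tasks} for a in tasks}
    for a in tasks:
        for b in tasks:
            if a != b:
                ab, ba = follows[(a, b)], follows[(b, a)]
                D[a][b] = (ab - ba) / (ab + ba + 1)
    return D

log = [["T1", "T2", "T5"], ["T1", "T2", "T6"], ["T1", "T3", "T5"]]
print(round(dependency_matrix(log)["T1"]["T2"], 2))   # strong T1 -> T2 dependency (~0.67)
```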
Each causality relation matrix represents an individual of the initial population and is computed as <cit.>: c_t_1,t_2^i = 1 if r <D(t_1,t_2) 0, otherwise r ∈ [0,1) is a random number. §.§ Mutation For a population member C^i= (c_t_1,t_2^i), i ∈ [1, N] , t_1, t_2 ∈ [1, n], two other causal matrices C^r_1, C^r_2, r_1 ≠ r_2 ≠ i, r_1, r_2 ∈ [1, N] are chosen randomly from the current population. A mutant individual V^i= (v_t_1,t_2^i) is then created using the following dichotomous mutation scheme <cit.>. v_t_1,t_2^i = ((c_t_1,t_2^r_1⊕ c_t_1,t_2^r_2) rand) ( (c_t_1,t_2^r_1⊕ c_t_1,t_2^r_2) c_t_1,t_2^r_1) where rand ∈ {0,1}, denotes the AND operator, denotes the OR operator, denotes the NOT operator, and ⊕ denotes the XOR operator. Equation <ref> can also be expressed as: v_t1,t2^i= rand if c_t_1,t_2^r_1⊕ c_t_1,_t2^r_2= 1 c_t_1,t_2^r_1 if c_t_1,t_2^r_1⊕ c_t_1,t_2^r_2= 0 That is, if c_t_1,t_2^r_1 and c_t_1,t_2^r_2 are distinct, then the corresponding bit of the mutant individual v_t_1,t_2^i is randomly chosen as “0” or “1”; otherwise, v_t_1,t_2^i is set as c_t_1,t_2^r_1. §.§ Crossover The Dichotomous crossover operator <cit.> starts from the mutant individual V^i= v_t_1,t_2^i, obtained after application of the dichotomous mutation operator. In this step, the original individual C^i= (c_t_1,t_2^i) and the mutated individual V^i are used to generate a candidate individual U^i= u_t_1,t_2^i using the following equation: u_t_1,t_2^i = v_t_1,t_2^i if rand_t_1,t_2<CR_t_1,t_2 c_t_1,t_2^i, otherwise where rand_t_1,t_2 ∈ [0, 1], CR_t_1,t_2 = CR_1 if (c_t_1,t_2^r_1⊕c_t_1,t_2^r_2) = 0 CR_2 if (c_t_1,t_2^r_1⊕c_t_1,t_2^r_2) = 1 This operation uses two crossover probabilities CR_1 and CR_2 based on dichotomous psychological thinking or "black and white” thinking, with a proclivity for only seeing extremes. After mutation, to generate a candidate individual, if the bits in the randomly chosen individuals from the original population are the same (distinct), then crossover probability CR_1 (CR_2) is used. This approach induces diversity in the population and enhances the exploration ability of the proposed approach <cit.>. §.§ Selection In this section, we outline the selection procedure (Algorithm <ref>) used to determine the individuals to be preserved from the current population Pop= {C^1, C^2,…, C^N}, and the candidate population Pop_U= {U^1, U^2,…, U^N} generated after the crossover operation. The process involves identifying the non-dominated individuals. The i^th individual from the current population (parent) (C^i) is said to dominate (≺) the corresponding i^th individual in the candidate population (child) (U^i) if the parent is superior for both the objectives of completeness and generalization, that is, C^i≺ U^i = 1 if f_c(U^i) ≥ f_c(C^i) && f_g(U^i) ≥ f_g(C^i) 0, otherwise where f_c and f_g denote the completeness and generalization values respectively. If the parent (child) dominates the child (parent), then the parent (child) is preserved while the child (parent) is discarded. When neither parent nor child is superior to each other, both the parent and the child are retained.After eliminating dominated individuals, the number of remaining non-dominated individuals will be between N and 2*N. Since the population size to be carried for the next generation is N, a truncation procedure based on non-dominated sorting (Algorithm <ref>) and crowding distance (Algorithm <ref>) is applied <cit.>. 
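The variation and selection steps above can be condensed into a short NumPy sketch. The code below mirrors the dichotomous mutation and crossover rules and the parent-child comparison, but it is illustrative only: the function names are ours, the comparison direction in the dominance test must match how completeness and generalization are scored, and the non-dominated sorting and crowding-distance truncation described next are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dichotomous_mutation(c_r1, c_r2):
    # Where the two randomly chosen individuals differ, take a random bit;
    # where they agree, copy the bit of the first one.
    differ = c_r1 != c_r2
    random_bits = rng.integers(0, 2, size=c_r1.shape)
    return np.where(differ, random_bits, c_r1), differ

def dichotomous_crossover(c_i, v_i, differ, cr1=0.2, cr2=0.5):
    # CR_1 applies where the two individuals agreed, CR_2 where they differed;
    # 0.2 and 0.5 are the tuned values reported later in the paper.
    cr = np.where(differ, cr2, cr1)
    take_mutant = rng.random(size=c_i.shape) < cr
    return np.where(take_mutant, v_i, c_i)

def dominates(scores_a, scores_b):
    # scores_a dominates scores_b when it is at least as good in both
    # objectives (completeness, generalization); flip the inequality if the
    # objective values are computed as penalties to be minimized.
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    return bool(np.all(a >= b))

def survivors(parent, child, scores):
    # Keep the dominating individual; keep both when neither dominates.
    fp, fc = scores(parent), scores(child)
    if dominates(fp, fc) and not dominates(fc, fp):
        return [parent]
    if dominates(fc, fp) and not dominates(fp, fc):
        return [child]
    return [parent, child]

if __name__ == "__main__":
    n = 5                                  # number of tasks
    pop = [rng.integers(0, 2, size=(n, n)) for _ in range(4)]
    v, differ = dichotomous_mutation(pop[1], pop[2])
    u = dichotomous_crossover(pop[0], v, differ)
    print(u)                               # candidate causality relation matrix
```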
Non-dominated sorting algorithm (Algorithm <ref>), involves finding rank 1 individuals of the population that are not dominated by any other individual. Rank 2 is assigned to those individuals of the population that are dominated by rank 1 individuals, and so on. If the number of non-dominated solutions is greater than the population size N, Euclidean distance is used to truncate individuals from the most crowded region (Algorithm <ref>). If the rank 1 individuals are less than N, then rank 2 individuals are added, and so on. Pseudo code for Multi-objective Differential Approach for Process Mining (MoD-ProM) § RESULTS AND DISCUSSION §.§ Experimentation The proposed algorithm is tested on both synthetic and real-world datasets (Table <ref>). Over the last decade, BPI challenge event logs have become important real-world benchmarks in the data-driven research area of process mining. The proposed algorithm is tested for three BPI event logs, namely, BPI 2012 <cit.>, BPI 2013 <cit.> and BPI 2018 <cit.>, varying in the number of tasks, number of traces, and their domain. BPI 2012 is one of the most studied datasets in process mining. This dataset contains 13,087 traces, and 23 tasks and is derived from a structured real-life loan application procedure released to the community by a Dutch financial institute. The BPI 2013 dataset is from the IT incident management system of Volvo Belgium with 7554 traces and 13 tasks. BPI 2018 covers the handling of applications for EU direct payments for German farmers from the European Agricultural Guarantee Fund. BPI 2018-reference dataset contains 43802 traces and 6 tasks. The proposed algorithm is also tested on a real-life medical event log containing events of sepsis cases from a hospital with 1000 traces and 16 tasks <cit.>. The proposed algorithm is also run for synthetic logs (ETM, g2-g10 <cit.>). The proposed approach is compared with state-of-the-art algorithms, α^++ <cit.>, Heuristic Miner <cit.>, Genetic Miner <cit.>, ILP <cit.> and Inductive Miner <cit.> algorithms. For the compared algorithms, the completeness, preciseness, and simplicity values for the synthetic datasets are taken as reported by <cit.>. However, <cit.> does not report the value of generalization for these datasets. For the models generated using the Prom tool, α^++, Heuristic Miner, Genetic Miner, and ILP algorithms, the generalization value is computed using the Cobefra tool <cit.>. We have also compared the proposed strategy with the NSGA-II algorithm for process discovery. In the proposed multi-objective differential approach for process mining (MoD-ProM), the population size is set to 100, and the value of control parameters CR_1 and CR_2 is tuned using grey relational analysis combined with the Taguchi approach (Section <ref>). The algorithm is run for a maximum of 100 iterations as the proposed algorithm converges before 100 iterations for most datasets. The total number of runs is fixed at 30. §.§ Parameter Tuning To find values of the crossover probabilities, CR_1 and CR_2 suitable for the domain of process discovery, the grey relational analysis combined with the Taguchi approach is used <cit.>. Taguchi method efficiently determines optimal settings of numerous process variables with a minimal set of experiments. Taguchi method suggests replication of the experiment to achieve improved accuracy of the results. Taguchi L16 orthogonal array (OA) design containing 16 experimental runs is used. 
The results for completeness and generalization are shown in Table <ref> and Figure <ref>. Taguchi's signal-to-noise (S/N) ratios, which are log functions of the desired output, serve as objective functions for optimization <cit.>. The optimization of numerous performance variables requires a comprehensive assessment of the S/N ratio; grey relational analysis is used in this study to address this issue <cit.>. In the grey relational analysis combined with the Taguchi approach, the experimental data is first normalized using Equation <ref> to remove the effect of differing units and to reduce variability, as presented in Table <ref>: x^*_i(k) = (x_i(k) - min(x_i(k))) / (max(x_i(k)) - min(x_i(k))), where i = 1,…, m; k = 1,…, n; m is the number of experimental runs and n is the number of responses. Here x_i(k) denotes the original value of the k^th response for the i^th experimental run, x^*_i(k) denotes the normalized value after data pre-processing, max(x_i(k)) denotes the largest value of x_i(k), and min(x_i(k)) denotes the smallest value of x_i(k). The next step is to calculate the grey relational coefficient, ξ_i(k), from the normalized values using the following equation (Table <ref>): ξ_i(k) = (Δ_min + ξΔ_max) / (Δ_0i(k) + ξΔ_max), where Δ_0i is the deviation of the comparability value x_i(k) from the reference value x_0(k), Δ_0i(k) = ||x_0(k) - x_i(k)||, and Δ_min and Δ_max are the minimum and maximum values of the absolute difference Δ_0i. ξ is the distinguishing coefficient, with ξ ∈ [0,1]; the value 0.5 is used for experimentation <cit.>. The next step is to compute the grey relational grade (GRG) using the following equation (Table <ref>): γ_i = (1/n)∑_k=1^n ξ_i(k), where γ_i is the grey relational grade for the i^th experiment. The multiple responses are thus converted to a single grade, which is used for the optimization. From the GRG values, the effects of each process parameter at its different levels are plotted in Figure <ref>. Using these results, the optimal settings for the parameters CR_1 and CR_2 are derived as 0.2 and 0.5, respectively. §.§ Analysis of the Results The proposed algorithm (MoD-ProM) is run for the real-life and synthetic datasets, and the values of the quality dimensions, namely completeness (f_c), preciseness (f_p), simplicity (f_s), and generalization (f_g), for the discovered non-dominated solutions are shown in Tables <ref> and <ref>, respectively. The proposed approach is compared with the NSGA-II algorithm for process discovery. Tables <ref> and <ref> present the values of the quality dimensions for the discovered non-dominated solutions for the real-life and synthetic datasets, respectively. Pareto curves for the non-dominated solutions of NSGA-II and the proposed multi-objective differential evolution for process mining (MoD-ProM) are plotted for comparison (Figures <ref> and <ref>). The Pareto curves show that in 12 out of 14 datasets, the results of the proposed algorithm are superior to those of the NSGA-II algorithm. We also compute the convergence rate and per-iteration computation time for NSGA-II and the proposed MoD-ProM over 30 runs (Figures <ref>, <ref>, and <ref>). While in 2 datasets the algorithms (NSGA-II, MoD-ProM) show a similar convergence rate, in 8 out of 14 datasets the proposed MoD-ProM converges faster than NSGA-II, demonstrating the superior exploration ability of the proposed approach. Figure <ref> shows that in all cases, the proposed algorithm is superior to NSGA-II in terms of running time per iteration.
It is evident from the results that NSGA-II is computationally more expensive than the proposed MoD-ProM algorithm. The proposed algorithm is also compared with Genetic Miner, Heuristic Miner, α^++, ILP, and Inductive Miner. To rank the proposed approach and the traditional algorithms, additional comparison based on a weighted average <cit.> of the quality dimensions is made (Table <ref>). <cit.> proposed a weighted average computation methodology suitable to the process mining domain, as follows: Weighted Sum = (10* f_c+(1*f_p)+(1*f_s)+(1*f_g))/13 where for a given process model, f_c, f_p, f_s, and f_g denote the completeness, preciseness, simplicity, and generalization values, respectively. A higher weight is assigned to completeness as the process model should be able to reproduce the behavior expressed in the event log. Table <ref> shows the quality dimensions for the process model discovered by the state-of-the-art algorithms. The results (Table <ref>) show that the proposed algorithm produces superior-quality process models for all the datasets in terms of the weighted average. It is also observed that the models generated through the optimization of a combination of completeness and generalization exhibit superior values for the other quality dimensions. §.§.§ Process Model Representation As discussed earlier (Section <ref> on Initialization), the proposed approach represents a process model as a causality relation matrix <cit.>. However, many state-of-the-art approaches use other semantics, such as Petri net, BPMN models, DFGs, etc. Petri net is possibly the more popular technique for visualizing the discovered process model. We apply the methodology given by <cit.> to map between a Petri net and a causality relation matrix. To better explain our results, we have graphically depicted the discovered models (causality relation matrices) as Petri nets for the ETM event log. ETM is a popular dataset in the literature comprising seven tasks. Being a small dataset, it is feasible to show (Figure <ref>) the causality relation matrices and the corresponding Petri nets of the four models discovered by the proposed algorithm (MoD-ProM). For the ETM dataset, the ProM tool generated Petri nets for the state-of-the-art algorithms is shown in Figure  <ref>. To compare with the proposed approach, the Petri net of the model with the highest completeness value, discovered by the proposed MoD-ProM, is also drawn in Figure  <ref>. The completeness or replay fitness <cit.> quantifies the ability to replay the trace from an event log onto the Petri net <cit.>. That is, a process model (Petri net) will exhibit a perfect completeness value if every process instance in the given event log can be replayed (simulated) in the Petri net. For the ETM dataset (Figure  <ref>), it is observed that the proposed MoD-ProM algorithm and the ILP algorithm can replay every process instance in the event log. It is observed that the process model discovered by Inductive miner and α^++ do not replay some of the traces, such as (a,b,c,f,g) and (a,c,d,f,g). Traces (a,b,c,d,e,g) and (a,c,b,d,f,g) are not replayed by the model generated by the heuristic miner algorithm. Similarly, the process model generated by the genetic miner algorithm does not replay (a,c,d,b,f,g) and (a,d,c,b,f,g). § CONCLUSIONS AND FUTURE WORKS While conventional process mining algorithms generate a unique process model from the event logs, multi-objective evolutionary algorithms generate several candidate models. 
The goodness of the generated process models is measured using quality dimensions such as completeness, generalization, simplicity, and preciseness. A practitioner in the field of process mining may select the most appropriate process model based on the domain requirement. For example, if a user requires a model that replays the maximum number of traces, he/she may pick the model with the better completeness value <cit.>. In this paper, we use the idea of differential evolution to generate a Pareto front in the domain of process discovery, a first attempt in this direction. The proposed algorithm performs optimization using completeness and generalization as objective functions. These two quality dimensions make a good pair: a model with a high generalization value can help in improving the current system and can be used for designing future improved processes, while completeness is important because a discovered process model is expected to describe the behavior stored in the log. The experiments were run for ten synthetic and four real-life datasets and were repeated 30 times for each dataset. The results are compared with state-of-the-art process discovery algorithms such as α^++, Heuristic Miner, Genetic Miner, ILP, and Inductive Miner, and also with NSGA-II for process discovery. The results show that the models generated by the proposed approach exhibit higher values for all the quality dimensions than those of the compared approaches, indicating the discovery of "good" process models. The non-dominated solutions generated by the proposed approach (MoD-ProM) are better than those generated by the NSGA-II algorithm for process discovery. The Pareto curve shows that the results of the proposed algorithm are superior to, or at least as good as, those of the NSGA-II algorithm. In terms of computational time, the MoD-ProM algorithm performs consistently better than NSGA-II for all datasets. In summary, we present a novel proposal for process model discovery. The approach employs a multi-objective differential evolution method to optimize the novel combination of completeness and generalization. Results show that the proposed approach is computationally efficient in discovering good-quality process models. However, the proposed approach is limited by hardware availability. In the future, we plan to evaluate the applicability of recent multi-objective algorithms <cit.> in the domain of process discovery and study their computational complexity. In addition, to address the computational intensity and time consumption of process discovery for large event logs, we can explore parallel implementations (multi-core processors, GPU-based processing, and distributed computing environments) of the proposed algorithm. § DATA AVAILABILITY Previously reported data (event logs) were used to support this study and are described in Section <ref> on Experimentation. These prior studies (and datasets) are cited at the relevant places within the text (References <cit.>). § CONFLICTS OF INTEREST The author(s) declare(s) that there is no conflict of interest regarding the publication of this paper. § FUNDING STATEMENT This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
http://arxiv.org/abs/2406.18455v1
20240626160514
System for Measurement of Electric Energy Using Beacons with Optical Sensors and LoRaWAN Transmission
[ "Łukasz Marcul", "Mateusz Brzozowski", "Artur Janicki" ]
cs.NI
[ "cs.NI", "cs.SY", "eess.SY", "C.2.5; H.4.m" ]
System for Measurement of Electric Energy Using Beacons with Optical Sensors and LoRaWAN Transmission   Łukasz Marcul and Mateusz Brzozowski OneMeter Ltd. ul. Dobrzańskiego 3 20-262 Lublin, Poland Email: {Lukasz.Marcul, Mateusz.Brzozowski}@onemeter.com   Artur Janicki Warsaw University of Technology ul. Nowowiejska 15/19 00-665 Warsaw, Poland Email: Artur.Janicki@pw.edu.pl This work was supported by the National Centre for Research and Development within the Smart Growth Operational Programme (agreement No. POIR.01.02.00-00-0352/16-00). July 1, 2024 § ABSTRACT In this article, we present the results of experiments on finding an efficient radio transmission method for an electric energy measurement system called OneMeter 2.0. This system offers a way of collecting energy usage data from beacons attached to regular, non-smart meters. In our study, we compared several low power wide area network (LPWAN) protocols, out of which we chose the LoRaWAN protocol. We verified the energy consumption of a LoRa-based transmission unit, as well as the transmission range between network nodes in urban conditions. We discovered that LoRaWAN-based transmission was highly energy-efficient and offered decent coverage, even in a difficult, dense urban environment. Smart metering; Smart grids; LoRaWAN; Beacon; Optical sensor; AMI. § INTRODUCTION Real-time energy consumption monitoring is a must in our times, as the economic and societal costs of energy production are growing. The European Commission required its member countries to equip at least 80% of their electricity customers with intelligent metering systems by 2020 <cit.>. This was supposed to lead to the creation of smart power grids <cit.>, allowing for easy monitoring and management of country-wide and EU-wide power consumption. The process of installing smart meters is very costly and time-consuming, so it is no wonder that most of the EU countries, as described in the next section, did not meet the above deadline. Therefore, to improve the deployment process of smart metering, we proposed a system called OneMeter 2.0 <cit.>, which used energy-efficient beacons, usually with optical sensors, communicating, e.g., via the IEC 62056-21 protocol. The system adds intelligent functionality to existing, popular, non-smart electronic meters, called Automated Meter Reading (AMR) <cit.>, equipped with an optical port or even only with a blinking LED diode, without the need to install smart meters at all. In this study, we focus on finding an efficient radio communication protocol to be used for communication between the beacons and the cloud. We will choose a suitable low power wide area network (LPWAN) protocol and then verify its energy efficiency and the radio coverage it offers. Our paper is structured as follows: in Section <ref>, we will briefly describe the problem of smart metering deployment. Next, in Section <ref> we will describe the OneMeter 2.0 system, including the proposed usage of an LPWAN-based communication system.
Next, we will describe our experiments (Section <ref>), followed by their results, presented in Section <ref>. Finally, we will conclude in Section <ref> with a plan for the future of our work. § SMART METERING CHALLENGES §.§ Smart Metering in Europe According to the data of the European Commission <cit.>, so far, six EU members have achieved a full roll-out of smart meters: Denmark, Estonia, Finland, Italy, Spain, and Sweden. In 2022, France had about 92% penetration, the Netherlands about 88%, and Portugal 52%, with full coverage expected by 2025. In Austria, Latvia, Poland, and the UK, the household penetration was significantly lower, with Austria at 47%, Poland at 15%, and the UK at 49%. In the rest of the EU countries, the deployment of smart meters has varied significantly. This means, for example, in Poland, where there are 18 million metering points, only less than 2.7 million are equipped with smart meters. However, a remarkable part of the remaining electricity meters are equipped with optical ports, which are normally used for billing readouts but can be equally used to access the meter readouts using an optical sensor. §.§ Existing Solutions Several solutions exist that aim to acquire energy consumption data from existing electronic, non-smart meters. The Rhino Company offers the so-called RhinoAMI AP device <cit.>, which accesses electronic meters via a DIN bus using a cable connection. Metering data can then be transmitted further using a GPRS or Ethernet connection. The device requires an external 5-12 V power source. Smappee <cit.> is another cable solution, offered currently at 229 EUR, which, in contrast to the previously described system, uses an electromagnetic sensor clipped to the phase cable supplying an electrical installation, e.g., in an apartment or an office. A dedicated application allows the monitoring of the current energy consumption. A proprietary Non-Intrusive Load Monitoring (NILM) algorithm helps to recognize individual electrical appliances. The Smappee metering system is powered by a 100-230 V main supply. It is noteworthy that Smappee, in fact, estimates the consumption instead of reading it from the meter. A device called mReader®Opto, produced by NUMERON <cit.>, uses an optical sensor to communicate with the meter over the IEC 62056-21 protocol. It requires a USB connection to connect with a smartphone or a computer. It can work on a battery, but only for ca. 2h. The same producer also offers a gateway called smartBOX, which allows a remote transmission of the meter readouts over a network or GPRS. REDZ Smart Communication Technologies offers another device with an optical sensor: KMK 118 Bluetooth Optical Probe <cit.>. Its functionality is similar to that of the previously described device, but here, cable communication is replaced by wireless communication. The device can be battery powered, but the battery life is reported to be only “greater than 24h”. The device is offered at the price of 180 EUR. § OUR SOLUTION We have developed a system that utilizes small, energy-efficient beacons with optical sensors to read data directly from electricity meters. In contrast to other existing solutions, our beacons are energy efficient, allowing them to work on a single battery for over a year. We propose employing either smartphones or dedicated gateways (e.g., LoRaWAN-based ones) to transfer measurement data to the cloud, as shown schematically in Figure <ref>. 
Thanks to a cloud-based data platform and the possibility of using user smartphones, our solution enables fast and cheap deployment of the AMI infrastructure using the existing, non-smart electricity meters. The details of the proposed solution are described below. §.§ Beacon with Optical Sensor A small bottle cap-shaped beacon of 32 mm diameter (compatible with the IEC 62056-21 interface) was designed, equipped with an optical sensor, LED diode, Nordic Semiconductor's processor nRF51, flash memory, Bluetooth Low Energy (BLE) radio components, and a 3.0V battery (CR2032 or double AA). The beacon is attached magnetically to an electronic meter equipped with an optical port. The optical sensor is designed with a miniature silicon photodiode of high radiant sensitivity and a low-power comparator. The optical sensor, together with the IR LED diode, are able to set up communication with a meter using the IEC 62056-21 (old: IEC 1107) or SML (Smart Message Language) protocol. The amount of measurement data acquired from the meter depends on the meter's model – some of the meters present only the absolute active energy, while the others allow the readout of more detailed information, such as positive and negative active energy, or reactive energy. The processor was programmed in such a way that the beacon performs a readout of the meter every 15 min and stores the metering data in the flash memory. The BLE component allows other BLE devices to connect to the beacon to download metering data or to transmit the readout in real-time through BLE advertisement. §.§ Data Platform The data platform provides gathering, analysis, and visualizations of the collected metering data. The platform was realized using the MongoDB database with a set of proprietary analytic algorithms. A web-based user interface allows the visualization of energy consumption data. The user is able to enter information about their tariff. The cost estimation of the consumed energy can be calculated thanks to the tariff data imported to the database for various energy re-sellers. The platform provides tools to generate reports showing consumption profiles for chosen date ranges and information about maximum power demand, including, for example, information on the percentage of time a certain power threshold was exceeded. §.§ Transmitting Measurement Data Using LPWAN Network While in our previous work <cit.>, we showed using smartphones as gateways to transmit measurement data from beacons to the cloud, in this study, we focus on using a dedicated gateway running an LPWAN protocol. Such an option may be used in urban areas to collect energy measurement data from multiple meters located, e.g., in a closed area or in a block of flats. It can also be advantageous in rural areas with less developed infrastructure, as depicted in Figure <ref>. Various LPWAN protocols were considered, such as DASH7, LoRa, LTE-M, SigFox, NB-IoT <cit.>. Each of them has their advantages and drawbacks, as compared in Table <ref>. Considering the transmission range, the range is from 1 km in urban areas up to 40 km in rural areas; the furthest are offered by LoRa and Sigfox. For the former, a spectacular record was achieved for the distance between transmitter and receiver in favorable conditions: 702 km <cit.>. Systems using LTE-derived standards do not require the installation of a radio gateway due to the existence of dedicated LTE infrastructure of mobile telephony. Sigfox standard is not officially supported in several countries (Poland included). 
Also, the LTE-M and NB-IoT infrastructure was insufficient in Poland at the time of running our experiments. Therefore, the cost of deploying communication links using these protocols would be very high. However, it is noteworthy that using LTE-M would be advantageous for prosumers and other advanced users, as this standard offers support for increased data transfer, which can be purchased from the telco provider. As for LoRa, its popularity is constantly growing. The advantage of the LoRa standard is the possibility of using free transmission, e.g., via The Things Network[<https://www.thethingsnetwork.org/>] or ChirpStack[<https://www.chirpstack.io/>]. Contrary to that, the cost of sending information using LTE-M or NB-IoT depends on the size of the packets. Considering the above factors and additional characteristics (e.g., security aspects), we chose the LoRaWAN standard for our experiments. § EXPERIMENTAL SETUP In the experiments described in this work, we planned to verify the energy consumption of the LoRaWAN-based transmission unit, the time required for transmission, and the coverage offered in real conditions. In this case, we focused on the urban environment. As the base module, we used Nordic Semiconductor's nRF52 Development Kit with nRF52832 processor. Its core element is the ARM Cortex M4 microcontroller with a 60 MHz clock speed, 512 kB flash memory, 64 kB RAM memory, 32 configurable I/O ports, and automatic processor power supply control system in the range of 1.7-3.6 V. It is very energy-effective, its max. current should not exceed 8 mA during CPU operations, 50 µA in sleep mode, and 2 µA in deep sleep mode. To enable radio transmission using LoRa protocol, we extended it with a radio component: SX1261MB2BAS device with the SX1261 processor (QFN24) from Semtech, with a radio frequency switch PE4259 from Peregrine Semiconductor and 14AC8253 antenna, as visualized in Figure <ref>. The nominal voltage of the module is 3.3 V, and the transceiver is designed to operate in the non-commercial band in the voltage range 1.8–3.7 V, taking into account sleep and standby modes to increase the module's energy savings. The maximum allowed transmission clock frequency is 16 MHz. The crystal oscillator used (EXS00A-CS06465 32 MHz) meets the required frequency drift limitation at a level not higher than ±30 ppm to ensure stable radio transmission. The SX1261 processor offers a maximum link budget of 163 dB, transmitter power of 15 dBm, and receiver sensitivity of -137 dBm. Considering the antenna's gain equal 2.15 dBi, the equivalent isotropically radiated power (EIRP) will be 16.15 dB, assuming transmission losses at the level of 1 dB. As the LoRaWAN gateway, we chose The Things Gateway TTN-001-868-1.0, with LG8271–based radio transceiver. It offers data transmission with a power of up to 14 dBm, as well as reception on eight transmission channels. The receiver sensitivity (for a bandwidth of 125 kHz), according to the supplier documentation, was from -126 dBm to -140 dBm. For research purposes, we used The Things Network server, with The Things Stack toolkit. When measuring the time and energy required by LoRaWAN transmission, we changed the payload size from 1 to 50B, which is the range of a typical payload with energy consumption data. 
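Before moving to the range experiments, the link-budget arithmetic quoted above can be scripted as a quick sanity check. The helper names below are ours, and the example deliberately ignores receive-antenna gain and any fading margin.

```python
def eirp_dbm(tx_power_dbm, antenna_gain_dbi, losses_db):
    # Equivalent isotropically radiated power: TX power plus antenna gain
    # minus feed/switch losses (all quantities in dB units).
    return tx_power_dbm + antenna_gain_dbi - losses_db

def max_path_loss_db(eirp, rx_sensitivity_dbm, rx_antenna_gain_dbi=0.0, margin_db=0.0):
    # Rough maximum tolerable path loss before the received signal
    # drops below the receiver sensitivity.
    return eirp + rx_antenna_gain_dbi - margin_db - rx_sensitivity_dbm

if __name__ == "__main__":
    # SX1261 end node: 15 dBm TX power, 2.15 dBi antenna gain, ~1 dB assumed losses.
    node_eirp = eirp_dbm(15.0, 2.15, 1.0)
    print(round(node_eirp, 2))                            # 16.15 dBm
    # Pessimistic gateway sensitivity of -126 dBm (datasheet range -126..-140 dBm).
    print(round(max_path_loss_db(node_eirp, -126.0), 2))  # 142.15 dB
```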
As for the experiments with LoRaWAN transmission range, we used two cases: * indoor transmission, when we measured the propagation of the radio signal within a multi-story building; * outdoor transmission, when we measured the LoRaWAN coverage in the urban area. For experiments with indoor transmission, we used a 6-floor building with a basement, made of reinforced concrete elements, with a LoRaWAN gateway installed on the top floor. A cross-section of the building is depicted in Figure <ref>. Such a setup is typical for collecting data from sensors installed on the electric meters, which are very often located in the staircase of a building. We measured the received LoRaWAN signal energy using the received signal strength indicator (RSSI) value, in dBm, and signal-to-noise ratio (SNR), measured in dB. We experimented with various spreading factor (SF) values to see if they had an impact on signal propagation and, as a consequence, on RSSI and SNR values. When measuring the quality of the outdoor transmission, we kept SF=7, to consider the worst possible scenario. We measured the signal strength when moving the measuring terminal in the neighborhood of the building with the LoRaWAN gateway located in Warsaw, Poland. We used seven LoRaWAN transmission channels. During the measurements, both indoor and outdoor, we used the transmission with bandwidth BW=125 kHz, code rate CR=4/5, preamble size of 8 symbols, 2B cyclic redundant code (CRC), and adaptive data rate (ADR) off. § RESULTS The minimum recorded current value (in sleep mode) was approximately 490 µA. The obtained value is an order of magnitude higher than expected: we expected approx. 50 µA, considering the base and radio modules' energy demand in sleep mode. The probable cause was a software error, and the base module processor did not turn off some peripheral modules. We also researched the relationship between the time and energy costs of message transmission for various payload size (see Figures <ref> and <ref>). The visible step functions suggest that the selection of the appropriate data size is important, i.e., when another threshold is exceeded, adding a few bytes does not result in an increase in cost transmission. With a transfer of 50 B for SF = 7 and SF = 11, approximately 19 nAh and 240 nAh of energy per byte were consumed, respectively. Similarly, we observed 2.76 ms and 30 ms radio band usage time. Assuming the selected information payload size, transferring 3 kB of data per day would require 57 µAh and 720 µAh of energy and 8.28 s and 90 s of transmission time, respectively. Assuming a battery capacity of 1000 mAh, its linear capacity decline, and no degradation cells (for rough estimation purposes only), the end device would be able to transmit data for almost 48 and 4 years, respectively. The indoor range measurement results, shown in Figure <ref>, indicate no significant difference in signal quality when using different SF values, which is most probably caused by the shape of the staircase. Of the values tested, the best results can be attributed to the SF=9 configuration. The 20 m distance (i.e., 6 floors), including the 175 cm-thick reinforced concrete ceiling, reduced the signal strength by 50 dBm and the SNR by around 15 dB. Considering these results, and also the gateway sensitivity (reported as being in the range from -140 dBm to -126 dBm), the intra-building LoRaWAN coverage in such types of buildings can be expected when the end device and the gateway are no more than 8-10 floors away. 
Therefore, for higher buildings of this type, it is recommended that the LoRaWAN gateways be placed in the middle of the building. The results of the outdoor measurements are depicted in Figure <ref>. We observed that the maximum distance between nodes that still ensured successful LoRaWAN data transmission was approx. 360 m. We also observed that, while the signal strength decreased significantly in the close vicinity of the gateway, it yielded RSSI values greater than -100 dBm along the long, straight streets. We think that this was the result of positive signal interference, which can be advantageous in future deployments. It must be remembered, however, that in our experiments we used an indoor gateway, while in reality an external unit will be used. Therefore, the coverage should be considerably greater. § CONCLUSION AND FUTURE WORK In this paper, we showed the results of experiments on using the LoRaWAN protocol for the transmission of energy measurement data. This research was a part of the OneMeter 2.0 project, which developed an electric energy measurement system based on beacons attached to regular, non-smart meters. In our study, we researched the energy consumption of a LoRa-based transmission unit, as well as the transmission range between network nodes in urban conditions. We discovered that LoRaWAN transmission was highly energy-efficient and offered decent coverage, even in a difficult, dense urban environment. Future work will involve experiments with transmission in rural areas, where the poor networking infrastructure will require radio LPWAN solutions, out of which LoRaWAN seems to be one of the best candidates.
http://arxiv.org/abs/2406.17948v1
20240625215125
Transversality for perturbed special Lagrangian submanifolds
[ "Emily Autumn Windes" ]
math.DG
[ "math.DG", "math.SG" ]
Transversality for perturbed special Lagrangian submanifolds   Emily Autumn Windes § ABSTRACT In this paper, we prove a transversality theorem for the moduli space of perturbed special Lagrangian submanifolds in a 6-dimensional manifold equipped with a generalization of a Calabi-Yau structure. These perturbed special Lagrangian submanifolds arise as solutions to an infinite-dimensional Lagrange multipliers problem which is part of a proposal for counting special Lagrangians outlined by Donaldson and Segal in <cit.>. More specifically, we prove that this moduli space is generically a set of isolated points. § INTRODUCTION The prospect of extending gauge-theoretic ideas, originally developed for dimensions 2, 3 and 4, to manifolds with special holonomy in dimensions 6, 7 and 8 was explored by Donaldson and Thomas in <cit.> and by Donaldson and Segal in <cit.>. In particular, manifolds with special holonomy come equipped with calibrations, whose corresponding calibrated submanifolds are of great interest to both physicists and mathematicians. In certain respects, these calibrated submanifolds are analogous to J-holomorphic curves and exhibit many connections to other gauge-theoretic objects such as instantons and monopoles. Donaldson, Thomas, and Segal put forth proposals for how one might count various calibrated submanifolds in the hope of developing new invariants for manifolds with special holonomy. There are several technical reasons why such a program is not straightforward. This direction was explored further by Joyce <cit.>, Doan and Walpuski <cit.>, and others. This paper is concerned specifically with the situation in dimension 6. It was discovered by Hitchin <cit.> that metrics with special holonomy in dimensions 6 and 7 are deeply connected to the existence and properties of differential forms whose pointwise GL(n,ℝ)-orbit is open. Such forms are called stable. One very interesting feature of dimension 6 is the fact that an SU(3)-structure on a 6-manifold is equivalent to a choice of a stable 3-form ρ and a stable 4-form τ satisfying certain algebraic conditions (see <Ref>). Furthermore, a Calabi-Yau structure on a 6-manifold is a choice of an SU(3)-structure (ρ,τ) where the 3-form ρ and the 4-form τ along with their Hitchin duals (see <ref>) are closed. In the Calabi-Yau setting, the 3-form ρ corresponds to the real part of the holomorphic volume form and its Hitchin dual ρ̂ corresponds to the imaginary part. In a 6-dimensional Calabi-Yau manifold, both ρ and ρ̂ are calibrations, whose calibrated submanifolds are called special Lagrangian. In <cit.>, Donaldson and Segal pointed out that special Lagrangian submanifolds can be characterized as solutions to a certain Lagrange multipliers problem defined purely in terms of these stable forms. We briefly summarize this Lagrange multipliers problem here. See <ref> for a more rigorous exposition. Suppose that (M,ρ,τ) is a manifold equipped with a pair of closed, stable forms (ρ,τ) ∈Ω^3(M) ×Ω^4(M). Any Calabi-Yau manifold satisfies this condition, but as we will see, we will also consider more general settings. Next, fix a 3-dimensional submanifold L_0⊂ M.
Let L be any nearby 3-submanifold representing the same homology class as L_0 and let Q be a cobordism connecting L_0 and L. Then, roughly speaking, one can define a functional f_τ on the space of submanifolds by integrating the 4-form τ over Q f_τ(L) = ∫_Qτ. Of course, this functional is not well-defined since it depends on the choice of cobordism Q. However, it is well-defined on the covering space of the space of embeddings of a 3-manifold into M as explained in <ref>. Next, let C_ρ be the set of 3-submanifolds in M, all diffeomorphic to L_0, representing the same homology class and satisfying the condition that ρL = 0. Note that in the Calabi-Yau case, a special Lagrangian submanifold which is calibrated by the 3-form ρ̂ always satisfies this condition. The Donaldson-Segal Lagrange multipliers problem is to find the critical points of f_τ restricted to the set C_ρ. In a Calabi-Yau manifold, special Lagrangian submanifolds are critical points of the Lagrange functional arising from this Lagrange multipliers problem. This fact begs the question of whether it is possible to construct a Floer theory for special Lagrangian submanifolds. However, a classical result of McLean <cit.> says that for any special Lagrangian submanifold L, the moduli space of nearby special Lagrangians is a smooth manifold with dimension equal to the first Betti number of L. Thus, with the goal of defining a Floer theory in mind, we want to perturb the SU(3)-structure underlying the Calabi-Yau structure on our 6-manifold so that the critical points of the relevant functional are isolated. The fact that 7 is 6+1 gives us a natural space of perturbations. This is because any 6-manifold M with an SU(3)-structure can be embedded as a hypersurface in a 7-dimensional cylinder ℝ× M equipped with a G_2-structure arising in a canonical way from the SU(3)-structure. Specifically, if (ρ,τ) ∈Ω^3(M) ×Ω^4(M) defines an SU(3)-structure on M, then ψ = dt ∧ρ + τ is a stable 4-form on ℝ× M corresponding to a G_2-structure on ℝ× M. If the SU(3)-structure on M is a Calabi-Yau structure, the metric associated to the 4-form ψ on ℝ× M will have holonomy SU(3) contained in G_2. On the other hand, if (M,ρ,τ) is a 6-manifold equipped with a pair of stable forms (ρ,τ) ∈Ω^3(M) ×Ω^4(M), the condition that this pair (ρ,τ) gives rise to a G_2-structure on ℝ× M is a weaker condition than the requirement that (ρ,τ) defines an SU(3)-structure on M. Let ℛ_G_2 = { (ρ,τ) ∈Ω^3(M) ×Ω^4(M) ψ = dt ∧ρ + τ is stable on ℝ× M } be the set of G_2-pairs on a 6-manifold M. Thus, if we start with a Calabi-Yau 6-manifold M, we can perturb the underlying SU(3)-structure (ρ,τ) in such a way that it is no longer an SU(3)-structure but nonetheless gives rise to a G_2-structure on ℝ× M. Since the condition that a form be stable is an open condition, any small perturbation of the of the pair (ρ,τ) remains in the space of G_2-pairs since such a perturbation corresponds to a small perturbation of the 4-form ψ. As long as ρ and τ, and hence the 4-form ψ, are closed, the Lagrange-multipliers problem described above can still be defined. The solutions to this more general Lagrange-multipliers problem generalize the notion of a special Lagrangian. They are no longer calibrated submanifolds since we no longer require that the Hitchin duals of ρ and τ are closed but nonetheless retain several useful features typically enjoyed by calibrated submanifolds. The Euler-Lagrange equations corresponding to the Donaldson-Segal Lagrange multipliers problem take the following form. 
Let (M,ρ,τ) be a 6-manifold equipped with a G_2-pair (ρ,τ) where both ρ and τ are closed. A 3-submanifold L ⊂ M is a solution to the Lagrange multipliers problem if and only if there exists λ∈ C^∞(L) such that τ_N + dλ∧ρ_N = 0 ρL = 0. These equations first appeared in <cit.> without much explanation. The details of their derivation can be found in <ref>. See <Ref> for an explanation of the notation. Here, they will be referred to as the perturbed special Lagrangian equations (perturbed SL equations). In this case, function λ is the Lagrange-multiplier and plays a similar role to the Lagrange-multipliers found in calculus. When M is a Calabi-Yau manifold with holomorphic volume form Ω and symplectic structure ω, the 3-form ρ = Re(Ω) and the 4-form τ = 1/2ω^2. In this case, the solutions to the perturbed SL equations are special Lagrangian submanifolds together with constant functions λ. When M is not necessarily Calabi-Yau, a submanifold L solving the perturbed SL equations for some function λ is called a perturbed special Lagrangian submanifold. The perturbed SL equations can also be considered separately from the Lagrange multipliers set-up. In this case, it is not necessary to require that ρ and τ be closed. One could still hope to construct a numerical invariant for 6-manifolds equipped with a G_2-pair in this way. The key connection between solutions to the perturbed SL equations and the 7-dimensional setting is as follows. Suppose that (M,ρ,τ) is a 6-manifold with a G_2-pair and that (λ,L) is a solution to the perturbed SL equations. Then the graph of the function λ over L ⊂ M is an associative submanifold (see <Ref>) of ℝ× M. For this reason, we call the pair (λ,L) a graphical associative. See <Ref> for proof of this fact. As a consequence, the perturbed special Lagrangian equations are elliptic since the deformation theory of associative submanifolds in a G_2-manifold is governed by an elliptic operator <cit.>. The moduli space ℳ(A,P;(ρ,τ)) is defined to be the set of all submanifolds L in M, diffeomorphic to a particular 3-manifold P and representing a fixed homology class A ∈ H_3(M;ℤ), which satisfy the perturbed SL equations for some λ. The definition of a G_2-pair is flexible enough to prove transversality for ℳ(A,P;( ρ,τ)). More precisely, let P be a closed, oriented 3-manifold and M a closed, oriented 6-manifold equipped with a G_2-pair (ρ,τ) ∈Ω^3(M) ×Ω^4(M). We prove Fix a homology class A ∈ H_3(M;ℤ) and a closed 3-manifold P. There is a residual subset ℛ_reg of ℛ_G_2 such that the moduli space ℳ(A,P;(ρ,τ)) is a collection of isolated points whenever (ρ,τ) ∈ℛ_reg. This statement continues to hold if ℛ_G_2 is the set of closed G_2-pairs. When the G_2-pair (ρ,τ) is of class C^ℓ, the proof of this theorem is a consequence of the implicit function theorem for Banach spaces. In order to extend the result to smooth G_2-pairs, we must apply the so-called Taubes trick which is a standard method from symplectic geometry (see for example chapter 3 of <cit.>). In order to utilize the Taubes trick, we must prove an elliptic regularity theorem (<Ref>) and a compactness theorem (<Ref>) for associative submanifolds with bounded second fundamental form and bounded volume. This is significant because the solutions to the perturbed SL equations are not necessarily minimal submanifolds. If the elements of the moduli space ℳ(A,P;(ρ,τ)) are to be counted, then we must determine if or when the moduli space is compact. 
Towards this end, we we adapt the definition of a tamed G_2-structure (first introduced in <cit.>) to define a tamed G_2-pair on M. This is analogous to the notion of a tamed almost complex structure first introduced by Gromov in order to control the energy of J-holomorphic curves. In our case, a G_2-pair (ρ,τ) is tamed by a second pair (ρ',ω') ∈Ω^3(M) ×Ω^2(M) where both ρ' and ω' are closed and stable. See <ref> for details. We have Let (M,ρ',ω') be a closed, 6-manifold equipped with taming forms (ρ',ω') ∈Ω^3(M) ×Ω^2(M). Suppose that (ρ,τ) is a (ρ',ω')-tame G_2-pair. Then every element in ℳ(A,P;(ρ,τ)) has a topological volume bound depending only on the homology class of ρ'. More details about tamed G_2-structures can be found in <cit.>. Although in general we cannot expect that the second fundamental form of a perturbed SL submanifold to be bounded, we nonetheless expect the volume bound in the previous proposition to give us a compactification of the moduli space using rectifiable currents. However, such a compactification must necessarily contain singular objects which are not entirely understood. Such singular objects were studied by Joyce in <cit.>. We hope to explore these topics along with other topics related to developing a Floer theory for special Lagrangian submanifolds in future papers. §.§ Organization The paper is organized as follows. <Ref> includes a review of results about stable forms in dimensions 6 and 7 as well as a brief discussion about the different contexts in which this work might be applied. <Ref> contains information about G_2-structures on cylinders of the form ℝ× M where M is a 6-manifold equipped with a pair of stable forms. The notion of a G_2-pair is defined and its relationship to SU(3)-structures is discussed. In <ref>, we adapt the definition of a tamed G_2-structure to define a tamed G_2-pair. <Ref> contains a detailed description of the Lagrange multipliers problem first appearing in <cit.>. The perturbed SL equations for this Lagrange multipliers problem are defined. In <ref>, we show that solutions to the perturbed SL equations correspond to associative submanifolds in a 7-dimensional cylinder. We then use this fact to prove that the perturbed SL equations are elliptic. In <ref> we derive volume bounds for perturbed SL submanifolds in a 6-manifold with a tamed G_2-pair. <Ref> contains regularity and compactness results for associative submanifolds. Finally, in <ref> we apply all of the above to prove a transversality result for the moduli space of perturbed special Lagrangian submanifolds. §.§ Acknowledgments This paper would not have been possible without the continuous support, guidance, and expertise of my supervisors Aleksander Doan and Boris Botvinnik. I also thank Thomas Walpuski, Jason Lotay, Robert Bryant, Lorenzo Foscolo, and Costante Bellettini for several enlightening discussions on topics related to this paper. Lastly, I thank Jesse Madnick and Oliver Edtmair for helpful suggestions and feedback on rough drafts of this paper. § STABLE FORMS A form ϕ∈Λ^p(ℝ^n)^* is called stable if its GL(n,ℝ)-orbit in Λ^p(ℝ^n)^* is open. Stable forms and their connections to metrics with special Holonomy were first studied in <cit.>. Precisely which manifolds admit stable forms and in which degrees was worked out in <cit.>. There it was proved that stable 3-forms only exist in dimensions 6, 7, and 8. 
In the present paper, we will be mainly interested in stable forms in dimension 6, but will rely heavily on the connection to special geometry in 7 dimensions. It should be noted however that although there exist stable 3-forms in dimension 8, metrics on 8-manifolds with Spin(7) holonomy do not arise from them in the same way that metrics with special holonomy arise from stable 3-forms on 7-manifolds. If a manifold M admits a stable p-form α, then it has a G-structure where G is the pointwise stabilizer of α. Let α represent a stable 2-form in any dimension, or a stable 3-form in dimension 6, 7, or 8. In both these cases, the stabilizer group preserves a volume form denoted by vol(α). It was discovered by Hitchin that metrics with special holonomy in both 6- and 7-dimensions have variational characterisations in terms of the functional which inputs a stable form and outputs the total volume of the manifold with respect to a choice of an invariant volume form. Next, we discuss features of stable forms in 6 and 7 dimensions in more detail. §.§ Dimension 6 In dimension 6, stable 2-forms, 3-forms, and 4-forms are possible. Their stabilizers and conventional volume forms are as follows: * A stable 2-form ω is a non-degenerate 2-form. The group Sp(6,ℝ) can be defined to be the set of linear transformations of ℝ^6 that preserve a non-degenerate 2-form, and is a real, non-compact, connected, simple Lie group. The associated volume form is vol(ω) = 1/6ω^3 which is also known as the Liouville volume form. * A stable 4-form τ also has stabilizer Sp(6,ℝ). To define vol(τ), use the isomorphism I:Λ^4(ℝ^6)^* ≅Λ^2ℝ^6 ⊗Λ^6(ℝ^6)^*. Then I(τ)^6∈Λ^6ℝ^6⊗( Λ^6 (ℝ^6)^*)^3≅( Λ^6(ℝ^6)^*)^2. Define vol(τ) = ( I(τ)^6)^1/2. * A stable 3-form ρ has stabilizer SL(3,ℂ) (in which case it's called positive) or stabilizer SL(3,ℝ) ×SL(3,ℝ) (in which case it's called negative). Fix a 3-form ρ∈Λ^3(ℝ^6)^*. Define a map K_ρ(v) = v ⌟ρ∧ρ∈Λ^5(ℝ^6)^* ≅ℝ^6⊗Λ^6(ℝ^6)^*. Then the positive forms are those forms with tr(K)^2 < 0. In this case, we define vol(ρ) = *√(-tr(K)^2)∈Λ^6 (ℝ^6)^* . In this paper, we will only be concerned with stable 3-forms whose orbit is SL(3,ℂ). From now on, a whenever a 3-form is referred to as stable, we actually mean positive. For more information about how the volume forms are defined, see the appendix of <cit.>. In the same paper, Hitchin uses the homogeneous behavior of the map from stable forms to volume forms to define a Hitchin dual which for a stable form α will be denoted by α̂. Another special feature of dimension 6 is that a positive 3-form determines an almost complex structure, a fact that was explained in <cit.>. The Hitchin duals for stable 2- and 4-forms, and for positive 3-forms on ℝ^6 are described below explicitly. * For a stable 2-form ω, ω̂ = 1/2ω^2. * For a positive 3-form ρ, ρ̂ is the unique 3-form ρ̂ such that ρ + i ρ̂ is a nowhere vanishing complex volume form with respect to the complex structure determined by ρ. * For a stable 4-form τ, τ̂ is the unique nondegenerate 2-form satisfying τ = 1/2τ̂^2. In fact, we can use the concept of stable forms to define SU(3) structures on 6-manifolds in much the same way that G_2 structures on 7-manifolds are often said to be defined by a stable 3-form. An SU(3)-structure on a 6-manifold M is a pair of differential forms (ω, ρ) such that * ω is a stable 2-form * ρ is a positive 3-form * the following algebraic conditions are satisfied: ω∧ρ = 0, 1/6ω^3 = 1/4ρ∧ρ̂. In this case, the stabilizer of the pair (ω, ρ) will be exactly SU(3). 
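For example, on ℝ^6 with coordinates x_1,…,x_6, the standard flat model is the pair ω_0 = dx_1∧ dx_2 + dx_3∧ dx_4 + dx_5∧ dx_6 and ρ_0 = Re( (dx_1 + i dx_2) ∧ (dx_3 + i dx_4) ∧ (dx_5 + i dx_6) ) = dx_1∧ dx_3∧ dx_5 - dx_1∧ dx_4∧ dx_6 - dx_2∧ dx_3∧ dx_6 - dx_2∧ dx_4∧ dx_5, whose Hitchin dual is ρ̂_0 = dx_1∧ dx_3∧ dx_6 + dx_1∧ dx_4∧ dx_5 + dx_2∧ dx_3∧ dx_5 - dx_2∧ dx_4∧ dx_6. A direct computation shows that ω_0∧ρ_0 = 0 and 1/6ω_0^3 = 1/4ρ_0∧ρ̂_0 = dx_1∧⋯∧ dx_6, so the algebraic conditions above are satisfied and the stabilizer of the pair (ω_0,ρ_0) is the standard SU(3) ⊂ GL(6,ℝ).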
An SU(3)-structure can be equivalently defined in terms of a stable 4-form and a stable (positive) 3-form, where the 2-form appearing in the last condition is the Hitchin dual of the 4-form. Throughout the paper, ω and ω' will always denote a 2-form, τ will always denote a 4-form, and ρ and ρ' will always denote a 3-form. Various conditions may be placed on the stable forms, most of which have been studied in some detail. Suppose that (M,ρ,τ) is a manifold with an SU(3)-structure. * If both ρ and ρ̂ are closed then M is a complex threefold with a trivial canonical bundle. Manifolds such as these are not necessarily Calabi-Yau manifolds because they are not necessarily Kähler. These manifolds were studied in <cit.> where they are referred to as non-Kähler Calabi-Yau manifolds. There are many simple examples such as S^1 × S^3. * If both ρ and τ are closed, then M is said to have a half-flat SU(3)-structure. Half-flat structures are important to the study of hypersurfaces in ℝ^7 and were first studied by Calabi. They have since showed up again in physics and geometry. A slightly more restricted sub-class of six manifolds with half-flat SU(3)-structures are the nearly Kähler manifolds whose Riemannian cones have holonomy equal to G_2. See <cit.> for more details about nearly Kähler manifolds and <cit.> for an explanation of half-flat SU(3)-structures. * If τ̂ is closed, then M is a symplectic manifold with a compatible almost-complex structure. Note that τ̂ being closed also implies that τ is closed since τ = 1/2τ̂^2. * If τ, τ̂, ρ, and ρ̂ are all closed, then M is a Calabi-Yau manifold. Note that we could also drop the requirement that (ρ,τ) forms an SU(3) structure and simply study manifolds that carry a pair of stable forms. In this paper, we will usually consider pairs of stable forms that determine a G_2 structure in 6 +1 dimensions. We will not always require that they are closed. More details can be found in the following sections. Recall that the existence of an almost complex structure implies the existence of a non-degenerate (i.e. stable) 2-form, since one can be constructed out of an almost complex structure and any Riemannian metric. This is true in any dimension. Therefore in dimension 6, the existence of a stable 3-form implies the existence of a stable 2-form since a stable 3-form defines an almost complex structure. Next we review the 7-dimensional situation. §.§ Dimension 7 Fix an identification ℝ^7≅Im𝕆 of ℝ^7 with the imaginary octonions. Then the multiplication on 𝕆 endows ℝ^7 with * an inner product g g(u,v) := -Re(uv) * a cross-product u × v := Im(uv) * a 3-form φ(u,v,w):= g(u × v, w) * an associator [·,·,·]: Λ^3ℝ^7→ℝ^7 [u,v,w] := (u × v) × w + ⟨ v,w⟩ u - ⟨ u,w⟩ v * and a 4-form ψ(u,v,w,z) := g( [u,v,w],z ). One may always choose coordinates x_1,…,x_7 on ℝ^7 so that the 3-form φ above can be written as φ_0 = dx_123 + dx_145 + dx_167 + dx_246 - dx_257 - dx_347 - dx_356. The notation dx_i_1,…,i_k denotes the wedge product dx_i_1∧… dx_i_k. The following relations always hold ( u ⌟φ) ∧( v ⌟φ) ∧φ = 6g(u,v)vol_g _gφ = ψ. The subgroup of GL(7,ℝ) which fixes φ_0 is the compact, connected, simple Lie group G_2 G_2 = { A ∈GL(7,ℝ) A^* (φ_0) = φ_0 }. The group G_2 also preserves the standard metric and orientation when acting on ℝ^7. In particular, G_2 also fixes the 4-form ψ_0 = *φ_0, where the Hodge star is taken with respect to the metric determined by φ. 
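Explicitly, in the coordinates above, the dual 4-form is ψ_0 = *φ_0 = dx_4567 + dx_2367 + dx_2345 + dx_1357 - dx_1346 - dx_1256 - dx_1247, where the Hodge star is taken with respect to the metric and orientation determined by φ_0.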
These forms are stable with respect to <Ref> since the Lie group G_2 is 14-dimensional and dimGL(7,ℝ) - dimΛ^3(ℝ^7) = 49 - 35 = 14, implying that their GL(7,ℝ)-orbit is open. From this perspective, the Hitchin dual of φ is ψ. There also exist stable 3-forms on ℝ^7 with stabilizer split G_2. These are analogous to the negative stable 3-forms in dimension 6, and will not be considered in this paper. Throughout, a stable 3-form on a 7-dimensional manifold will always be one whose pointwise stabilizer is the compact real Lie group G_2. It should be noted that the stabilizer of the 4-form ψ = *φ is actually ± G_2 = G_2 ∪ (-Id) G_2. Therefore a stable 4-form on a 7-dimensional vector space determines an inner-product but not an orientation. A 7-dimensional manifold X equipped with a global, stable 3-form φ has a G_2-structure and is called a G_2-manifold. If φ is closed, then X is called a manifold with a closed G_2-structure. If *φ = ψ is closed, then X is called a manifold with a co-closed G_2-structure. If φ is both closed and co-closed, then X is called a manifold with a torsion-free G_2-structure. Manifolds with a G_2 structure φ are automatically equipped with a metric g_φ, a cross-product ×_φ, an associator [ ·,·,·]_φ, and a stable 4-form ψ = *_φφ. Equivalently, we may refer to a 4-form ψ on an oriented 7-manifold as a G_2 structure. The following result is due to Fernandez and Gray. Let X be a 7-dimensional manifold equipped with a G_2-structure φ. Let g be the associated metric. Then the following are equivalent.
* Hol(g) ⊆ G_2
* dφ = d^*φ= 0
If g is a metric with holonomy contained in G_2, φ is the corresponding stable 3-form, and ∇ is the Levi-Civita connection of g, then ∇φ = 0. The form ∇φ is called the torsion of φ, so this lemma justifies the above terminology. For more details about G_2 structures and G_2 manifolds, see for example <cit.>.
§ TRANSLATION-INVARIANT G_2-STRUCTURES
In this section, we investigate how G_2-structures on a cylinder of the form ℝ× M, where M is a 6-manifold, relate to pairs of stable forms on M. This discussion will also allow us to set some terminology that will be used in the following sections. The Lie group SU(3) is a subgroup of the Lie group G_2. Indeed, if one chooses a vector v_0∈ℝ^7 then the subgroup of G_2 that preserves v_0 is isomorphic to SU(3). Therefore there are interesting relationships between manifolds with G_2-structures and manifolds with SU(3)-structures. Throughout this section, let M be a 6-manifold, X = ℝ× M, and (ρ, ω) ∈Ω^3(M) ×Ω^2(M). A pair of stable forms on a 6-manifold M that define a G_2-structure on the cylinder ℝ× M will be called a G_2-pair. These pairs can consist of a stable 3-form and a stable 2-form or a stable 3-form and a stable 4-form. In the sections that follow, the Greek letter ω will always denote a 2-form while the Greek letter τ will always denote a 4-form. Set ℛ_G_2 = { (ρ,ω) ∈Ω^3(M) ×Ω^2(M) : φ = ρ + dt ∧ω∈Ω^3_stable(X) } or ℛ_G_2 = { (ρ,τ) ∈Ω^3(M) ×Ω^4(M) : ψ = dt ∧ρ + τ∈Ω^4_stable(X) }. A pair of stable forms (ρ,ω) ∈Ω^3(M) ×Ω^2(M) is a G_2 pair if and only if the pair (ρ̂,τ) of their Hitchin duals is a G_2 pair. One can see this by showing that if ψ = dt ∧ρ̂ + τ is stable then the metric associated to φ = dt ∧ω + ρ must be non-degenerate, and vice-versa. However, it is not clear to us whether the two define the same G_2-structure. Similarly, we will let ℛ_SU(3) denote the set of SU(3)-structures on M.
That is ℛ_SU(3) = { (ρ,ω) ∈Ω^3(M) ×Ω^2(M) (ρ,ω) is an SU(3)-structure} or ℛ_SU(3) = { (ρ,τ) ∈Ω^3(M) ×Ω^4(M) (ρ,τ) is an SU(3)-structure}. In the case of SU(3)-structures, a pair (ρ,τ) is exactly equivalent to a pair (ρ,ω) where τ is the Hitchin dual of ω since in this case, the metric defined on the cylinder is the product metric and therefore the same whether or not we choose the 4-form or the 2-form. The set ℛ_SU(3) is a subset of ℛ_G_2 and they are not the same as the following standard lemmas illustrate. The pair (ρ,ω) ∈Ω^3(M) ×Ω^2(M) defines an SU(3)-structure on M with metric g_M if and only if φ = ρ + dt ∧ω is a G_2-structure that defines the product metric g_φ = dt^2 + g_M on X = ℝ× M. The pair (ρ,ω) defines a Calabi-Yau structure on M if and only if φ = ρ + dt ∧ω is a torsion-free G_2-structure on X = ℝ× M. In this case, Hol(g_φ) ⊆SU(3). The proofs of Lemmas <ref> and <ref> follow from the proof of Proposition 11.1.1 in <cit.>. There are cases where both ρ∈Ω^3(M) and ω∈Ω^2(M) are stable, but φ = ρ + dt ∧ω is not. For example, let M = ℝ^6 with coordinates x_1, … , x_6. Set ρ = dx_135+dx_632 + dx_254 + dx_416 ω = dx_63 + dx_25 + dx_41. One can check that ρ and ω are stable. However, ω∧ρ = dx_63254 + dx_25416 + dx_41632≠ 0 so (ρ,ω) is not an SU(3)-structure. Next, let x_0 denote an extra coordinate spanning ℝ. Set φ = ρ + dx_0 ∧ω = dx_135+dx_632 + dx_254 + dx_416 + dx_063 + dx_025 + dx_041. Then one can compute that (∂_x_1⌟φ) ∧( ∂_x_1⌟φ) ∧φ = 0 and thus g_φ is not a definite form, so φ is not stable. There are cases where (ρ,ω) ∈Ω^3(M) ×Ω^2(M) do not define an SU(3)-structure but φ = ρ + dt ∧ω does define a G_2-structure. In this case, the metric g_φ does not define a product metric by <Ref>. To see this, let ρ = dx_135 + dx_632 + dx_254 + dx_416 and ω = dx_12 + dx_34 + dx_56. Then φ= dx_135+dx_632 + dx_254 + dx_416 + dx_012 + dx_034 + dx_056. Since being stable is an open condition, small modifications of φ_0 are still stable. For example, φ' = dx_135+dx_632 + dx_254 + dx_416 + dx_012 + dx_034 + dx_056 + Kdx_123 where K is a constant is still stable for small enough K. Note that this is equivalent to changing ρ out for ρ + Kdx^123. Note also that ( ρ + Kdx_123) ∧ω = Kdx_12356≠ 0 so after this modification, (ρ,ω) do not form an SU(3)-structure. Furthermore, a computation shows that (∂_x_3⌟φ')∧( ∂_x_0⌟φ' ) ∧φ' = 6g_φ'(∂_x_3,∂_x_0)vol_g_φ'≠ 0 but, for any product metric g = dx_0^2 + g_M, we have g(∂_x_3,∂_x_0) = dx_0^2(0,∂_x_0) + g_M(∂_x_3,0) = 0. Therefore, g_φ' does not define a product metric on ℝ× M for any metric g_M on M. Suppose that (ρ,ω)∈Ω^3(M) ×Ω^2(M) is a G_2 pair, but not necessarily an SU(3) structure. Then there exists a stable 2-form ω' such that (ρ,ω') is an SU(3)-structure on M. This follows from the fact that M = {0}× M is a hypersurface in ℝ× M. Let ×_φ denote the cross-product associated to φ. Then, given any unit normal vector field n along M, we can define an almost complex structure J_nv = n ×_φ x for all v ∈ TM, and a stable 3-form given by ρ = ι^*φ where ι:M →ℝ× M is the inclusion. Let ω' be defined by ω'(u,v) = g_φ(u,-J_nv) for all u,v ∈ TM. The pair (ρ,ω') defines an SU(3)-structure. The SU(3)-structures of this type were first studied in <cit.> and more recently by in <cit.>. We emphasize that, in example given after <Ref>, the vector field ∂_x_0 is not normal to M with respect to the metric g_φ'. The space ℛ_SU(3) is a deformation-retract of ℛ_G_2. Clearly ℛ_SU(3)⊆ℛ_G_2. Let (ρ,ω) ∈ℛ_G_2. Let t denote the ℝ-coordinate on ℝ× M and set φ = ρ + dt ∧ω as usual. 
Choose a unit vector field n along { 0 }× M that is normal to { 0 }× M with respect to the metric g_φ. Note that even if g_φ is not a product metric on ℝ× M, the vector field ∂_t is nonvanishing, and therefore homotopic to n. Let { n_s }_s=0^1 be a smooth family of nowhere-vanishing vector fields satisfying n_0 = ∂_t and n_1 = n. If φ does define the product metric, simply set n = ∂_t = n_s for all s ∈ [0,1]. Then for each s define ρ = ι^*φ J_s(·) = n_s ×_φ· ω_s(·,·) = g_φ( ·, -J_s(·)). Since there exists a normal vector field for each pair (ρ,ω) ∈ℛ_G_2, this construction defines a map F:ℛ_G_2× I →ℛ_G_2 by F( (ρ,ω),s ) = (ρ,ω_s). Clearly F( (ρ,ω),0 ) = (ρ,ω). On the other hand, F( (ρ,ω),1 ) ∈ℛ_SU(3) (see also proposition 4.1 of <cit.>). Clearly if (ρ,ω) ∈ℛ_SU(3), then F( (ρ,τ),1 ) is the identity. Thus, F is a deformation retract as desired. The above lemma also applies to G_2-pairs of the form (ρ,τ) where τ is a 4-form. § TAMED STRUCTURES In this section we develop the notion of a tamed G_2-pair. The idea of a tamed structure was first introduced by Gromov in <cit.> in order to control the energy of J-holomorphic curves in a symplectic manifold. Let M be a smooth manifold equipped with a closed, stable 2-form ω and an almost complex structure J. The almost complex structure is called ω-tame if it satisfies ω(v, Jv) > 0 for every nonzero tangent vector v ∈ TM. Let (Σ, j) be a Riemann surface. Then a map u:Σ→ M satisfying J ∘ du = du ∘ j is called a J-holomorphic curve. Such maps are hugely important to symplectic geometry <cit.>. The energy of a J-holomorphic curve u is defined to be E(u) = 1/2∫_Σ*du_J^2 vol_Σ, where the norm du^2_J depends on the choice of J. When J is ω-tame, one can prove the energy identity E(u) = ∫_Σ u^* ω which is a topological invariant depending only on the homology class represented by u. See Lemma 2.2.1 of <cit.> for more details about the symplectic case. This result is essential to proving compactness results for moduli spaces of J-holomorphic curves. In <cit.> a similar notion of taming and tamed forms was introduced for manifolds with special holonomy in 6, 7 and 8 dimensions. This definition is most easily stated for the 7-dimensional case. We will adapt this definition to 6-manifolds equipped with a G_2-pair. In our situation, the role of J-holomorphic curves is played by ψ-associative submanifolds of a 7-manifold. Let X = ℝ^7 and fix an identification of ℝ^7 with the imaginary octonions. Then the associator (from <ref>) vanishes precisely on the 3-dimensional subspaces of X that are associative, hence the name. Suppose that (X,φ,ψ) is an oriented 7-dimensional manifold with a (not necessarily torsion-free) G_2-structure. Let [·, ·, ·] denote the associator corresponding to ψ and x ∈ X. We say that an oriented 3-plane V ⊂ T_x X is ψ-associative if [·, ·, ·]V≡ 0 and φV > 0. Similarly, if ι: P → X is a 3-dimensional submanifold of X, then ι(P) is called ψ-associative if ι^*[·, ·, ·] ≡ 0 and ι^* φ > 0. The following definition allows us to relax the requirement that a G_2-structure be torsion-free, but retain volume-boundedness of associative submanifolds. For an in-depth explanation of the benefits and drawbacks of this condition as well as several equivalent conditions, see section 2.6 of <cit.>. Suppose that (X,ψ) is a 7-manifold equipped with a stable 4-form ψ. 
Then we say that a closed 3-form φ' tames ψ if there exists a positive constant K such that for all x ∈ X and for all ψ-associative, oriented 3-planes V ⊂ T_x X, vol_V≤ K φ'|_V, where vol_V denotes the volume form on V with respect to the metric induced by the one given by the G_2-structure associated to ψ. If ψ is co-closed, then φ = *ψ always tames ψ. Note that the definition of a taming 3-form does not stipulate that it must be stable. However, the taming condition does imply this. Let (X,ψ) be a 7-manifold equipped with a stable 4-form. Suppose that the 3-form φ' tames ψ. Then φ' is stable. It suffices to let X = ℝ^7 and choose ψ to be a stable 4-form on ℝ^7. Suppose that a 3-form φ' tames ψ, but is not a stable form. Note that a stable 3-form φ determines a symmetric, bilinear form on ℝ^7 via the formula 6 g_φ(u,v)vol_φ = (u ⌟φ) ∧ (v ⌟φ) ∧φ. This form is positive-definite precisely when φ is positive, and negative-definite precisely when φ is negative, so if φ' is not stable, then g_φ' is neither positive definite nor negative definite. That means there exists u ∈ℝ^7 such that (u ⌟φ') ∧ (u ⌟φ') ∧φ' = 0. We show that this contradicts the assumption that φ' tames ψ. Note that if u ⌟φ' = 0 then φ' clearly does not tame ψ since u is contained in some ψ-associative 3-plane. There are two remaining cases.
Case 1. Suppose that (u ⌟φ') ∧ (u ⌟φ')= 0. Let { e_i }_i = 1^7 be a g_φ'-orthogonal basis for ℝ^7 with e_1 = u and let { e^i}_i =1^7 be the dual basis. Then (u ⌟φ') ∧ (u ⌟φ') = 0 implies that u ⌟φ' must be decomposable. That is, u ⌟φ' = A e^ij for some constant A with i, j ≠ 1. Next, let v ∈ℝ^7 be a vector orthogonal to the 3-plane spanned by u, e_i, and e_j with respect to the metric g_ψ. Then V = span{ u,v, u ×_ψ v }, where ×_ψ is the cross-product defined by the 4-form ψ, is ψ-associative. But v ⌟ u ⌟φ' = v ⌟ Ae^ij = 0 since v is orthogonal to e_i and e_j. This means that φ'|_V = 0, so φ' does not tame ψ.
Case 2. Suppose that (u ⌟φ') ∧ (u ⌟φ') ∧φ' = 0 but (u⌟φ') ∧ (u ⌟φ') ≠ 0. Let β = (u ⌟φ') ∧ (u ⌟φ'). Choose another vector v ≠ u. As before, let V = span{ u,v, u ×_ψ v } and note again that V is ψ-associative. Choose w_1 orthogonal to V with respect to g_ψ. Then let w_2 = u ×_ψ w_1. Since V is ψ-associative, w_2 is also orthogonal to V with respect to g_ψ. Do this process again to get a basis { w_1, w_2 = u ×_ψ w_1, w_3, w_4 = u ×_ψ w_3 } for V^⊥_ψ, the g_ψ-orthogonal complement of V. If β|_V^⊥≠ 0 then that means that φ' = A_1 u ∧ w_a ∧ w_b + A_2 u ∧ w_c ∧ w_d + A_3 u ∧ v ∧ (u ×_ψ v) + (possibly other terms) for some constants A_i. Then β∧φ' ≠ 0, which is a contradiction. Therefore it must be the case that β|_V^⊥ = 0. But then φ' does not have a term of the form A_1 u ∧ w_a ∧ w_b + A_2 u ∧ w_c ∧ w_d because that would imply that β had a term of the form A w_a ∧ w_b ∧ w_c ∧ w_d and would therefore not vanish on V^⊥. However, this is also impossible since { u,w_1,w_2 } also spans an associative 3-plane.
We can extend <Ref> to 6-dimensional manifolds as follows. Let (ρ,τ) ∈Ω^3(M) ×Ω^4(M) be a G_2 pair as in <Ref>. We say that the pair (ρ', ω') ∈Ω^3_closed(M) ×Ω^2_closed(M) tames (ρ,τ) if ψ = τ + dt ∧ρ and φ' = ρ' + dt ∧ω' comprise a tamed G_2-structure as in <Ref>. As in the 7-dimensional case, if (ρ',ω') tames a G_2-pair, then both ρ' and ω' must be stable. Thus they are also a G_2-pair. The condition that a pair tames another is always an open condition. In fact, the set of G_2 3-forms which tame a G_2 4-form is an open, convex cone (see Proposition 2.8 in <cit.>).
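The cone property is immediate from the definition (a one-line check in the notation fixed above): if φ'_1 and φ'_2 are closed 3-forms taming ψ with constants K_1 and K_2, then for every ψ-associative oriented 3-plane V and all a, b > 0 we have (aφ'_1 + bφ'_2)|_V ≥ (a/K_1 + b/K_2) vol_V, so the closed 3-form aφ'_1 + bφ'_2 tames ψ with constant (a/K_1 + b/K_2)^-1. Openness is the part that requires the argument cited above.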
For example, in the setting that is of most interest to us, if we start out with a 6-manifold (M,ρ,τ) equipped with an SU(3)-structure and consider the associated 7-manifold X = ℝ× M with the associated G_2-structure ψ=τ + dt∧ρ, then not only does *ψ always tame ψ but any nearby 4-form also tames ψ. Equivalently, G_2-pair near (ρ̂,ω) tames (ρ,τ). § THE LAGRANGE MULTIPLIERS PROBLEM The purpose of this section is to describe the Lagrange multipliers problem that is at the center of this paper. As we will see, the solutions to this Lagrange multipliers problem are special Lagrangian submanifolds in the special case where we start with a Calabi-Yau manifold. These ideas first appeared in <cit.>. The material from the previous section will be used to show that when the relevant structure is tamed, the solutions to the following Lagrange multipliers problem have bounded volume. §.§ Set-up Much of the notation in what follows is taken from <cit.> and adapted to this 6-dimensional setting. Suppose that M is a closed, 6-manifold equipped with a G_2-pair (ρ,τ) ∈Ω^3(M) ×Ω^4(M). Suppose also that both ρ and τ are closed, but do not require that their Hitchin duals are closed. Fix a closed, oriented, 3-manifold P and homology class A ∈ H_3(M;ℤ). Let ℱ = {ι:P→ M ι is smooth embedding , [ι] = A, ι^*ρ̂ > 0 }. The tangent space to ℱ at ι is T_ιℱ = Γ( ι^*TM ). Let 𝒢 denote the group of orientation-preserving diffeomorphisms of P and define 𝒮 = ℱ/𝒢 which can be identified with the space of oriented, 3-dimensional submanifolds of M diffeomorphic to P along which ρ̂ is positive. Let [ι] denote the equivalence class in 𝒮 of an element ι∈ℱ. The tangent space T_[ι]𝒮 is the quotient T_[ι]𝒮 = Γ(ι^*TM)/{ dι∘ X X ∈Γ(TP) }. Since (ρ,τ) determines a G_2 structure on ℝ× M, it also determines a metric on ℝ× M and therefore an induced metric on M. This tangent space therefore may be identified with space of normal vector fields on ι P in M. We will let Nι denote the normal bundle of ι P whenever we want to take this perspective. Next, fix a particular embedding ι_0 of P into M. Let ℱ̃ denote the universal cover of ℱ based at ι_0. That is, ℱ̃ = {ι̃: [0,1] × P → M ι̃(0,·) = ι_0, ι̃(t,·) = ι_t∈ℱ t ∈ [0,1] } / ∼ where ι̃∼ι̃' if ι̃ and ι̃' have the same endpoints and are smoothly homotopic. The group of orientation-preserving diffeomorphisms 𝒢 also has a covering space 𝒢̃, which is the group of smooth isotopies from [0,1] to Diff(P) starting at the identity. Let 𝒮̃ = ℱ̃/𝒢̃. Define a functional, f_τ: ℱ̃→ℝ by f_τ(ι̃) = ∫_[0,1] × Pι̃^*τ. This functional is well-defined since τ is closed. Its derivative df_τ is a one-form on ℱ given by ( df_τ)_ι(n) = ∫_Pι^*( n⌟τ) for all n ∈ T_ιℱ. Note that both f_τ and its differential are gauge invariant in the sense f_τ( g̃^*ι̃) = f_τ(ι̃) for any g̃∈𝒢̃. Similarly, ( df_τ)_g^*ι( g^*n ) = ( df_τ)_ι(n) for any g ∈𝒢. Also note that if n and n' are in the same equivalence class in T_[ι]𝒮, then ( df_τ)_ι(n) = ( df_τ)_ι(n') since ( df_τ)_ι(v) = 0 for all v ∈Γ(Tι P). Thus f_τ descends to a functional on 𝒮̃ and df_τ descends to a one-form on 𝒮. Next, we define the constraint. Let c: ℱ→Ω^3(P) denote the function given by c(ι) = ι^*ρ. Similarly, let c̃:ℱ̃→Ω^3(P) denote the function given by c̃(ι̃) = ι_1^*(ρ). Then let C_ρ = c^-1(0) and C̃_ρ = c̃^-1(0). Note that C_ρ is also gauge invariant in the sense that if ι^*ρ = 0, then all elements in the equivalence class [ι] also satisfy this condition. Let 𝒞_ρ = C_ρ / 𝒢 and 𝒞̃_ρ = C̃_ρ / 𝒢̃. Next, we want to prove that 𝒞_ρ is a submanifold of 𝒮. 
First, we prove a lemma. Suppose that [ι] ∈𝒞_ρ, so that ι^*ρ = 0 and ι^*ρ̂ > 0. Then for every p ∈ P, the map N_ι(p)ι→Λ^2( T_p^* P ) given by n ↦ι^*(n ⌟ρ) is an isomorphism. Let g be the metric on M determined by the G_2 pair (ρ,τ). There is an orthogonal splitting of the k-forms on M with respect to the metric. We fix the following notation for this splitting throughout the proof. Let N denote the normal bundle of ι P and T denote the tangent bundle. Let p ∈ P and x = ι(p). Then
Λ^2( T_x^*M ) = Λ^2(N^*_x) ⊕( N_x^*⊗ T_x^*) ⊕Λ^2( T_x^*) = Λ^2,0⊕Λ^1,1⊕Λ^0,2
Λ^3( T_x^* M ) = Λ^3(N_x^*) ⊕( Λ^2( N^*_x ) ⊗ T_x^* ) ⊕( N_x^* ⊗Λ^2( T_x^* ) ) ⊕Λ^3( T_x^* ) = Λ^3,0⊕Λ^2,1⊕Λ^1,2⊕Λ^0,3
Λ^4( T_x^* M ) = ( Λ^3( N_x^* ) ⊗ T_x^* ) ⊕( Λ^2( N_x^* ) ⊗Λ^2( T_x^* ) ) ⊕( N_x^* ⊗Λ^3( T_x^* ) ) = Λ^3,1⊕Λ^2,2⊕Λ^1,3
Then we may write ρ_x in components as follows: ρ_x = ρ^3,0 + ρ^2,1 + ρ^1,2 + ρ^0,3. The following observations are immediately apparent:
* ρ^0,3 = 0 since ι^*ρ = 0
* ρ^3,0≠ 0 since ι^*ρ̂ > 0.
Since N_x and Λ^2( T_x^* ) have the same dimension, it suffices to show that the kernel of the map n ↦ι^*(n⌟ρ) is trivial. Suppose that there exists n ∈ N_x such that ι^*(n ⌟ρ) = 0. Then it must be the case that n⌟ρ∈Λ^1,1⊕Λ^2,0. Let J denote the almost complex structure on T_x M determined by ρ. Then a formula from <cit.> tells us that *(n⌟ρ) = -Jn ∧ρ∈Λ^2,2⊕Λ^1,3, which forces Jn ∧ρ^3,0 = 0 and hence Jn ∈ N_x. Here, when we write -Jn ∧ρ we mean the wedge product of the metric dual of -Jn with ρ. This contradicts the fact that ρ + iρ̂ must be a complex volume form since ρ is a stable 3-form. Note that this lemma is not true if ι^*ρ̂ vanishes at p. If ι P is a special Lagrangian submanifold in a Calabi-Yau manifold, then this condition is automatically satisfied. For all [ι] ∈𝒞_ρ, the cokernel of dc_[ι]:T_[ι]𝒮→Ω^3(P) is isomorphic to ℝ. Note that dc_[ι](n) = ι^*( ℒ_nρ) = ι^*( d(n⌟ρ) ), since ρ is closed, and where ℒ denotes the Lie derivative. So any 3-form in the image of dc_[ι] is clearly exact. Furthermore, <Ref> says that the image of dc_[ι] is all exact 3-forms on P. Therefore, since any 3-form on P is closed, P being 3-dimensional, the result follows. Finally, we can state the Lagrange multipliers problem: find the critical points of f_τ|_C̃_ρ. Next, we briefly review the finite-dimensional situation.
§.§ Finite dimensional review
Suppose that π: E → M is a rank k vector bundle over an n-dimensional manifold M. Let s: M → E be a section of E and f: M →ℝ a function. Define Z = s^-1(0). Let ds:TM → TE denote the full derivative of s. Whenever we have a connection, let π_V denote the vertical projection and let Ds = π_V ∘ ds. If ds_p has constant rank r + n along Z, so that ds_p has the same rank for each p ∈ Z, then Z is a properly embedded submanifold of codimension r in M. Suppose that V and W are vector spaces and A: V → W is a linear map. Denote the dual map by A^*: W^* → V^*. Then im A^* = (ker A)^0, the annihilator of ker A. The first isomorphism theorem tells us there is an isomorphism A̅: V/ker A →im A satisfying A = A̅∘π where π is the quotient map; that is, A factors as V →V/ker A →im A, where the first map is π and the second is A̅. Furthermore, π = A̅^-1∘ A. Let T ∈ V^*. Then T ∈ (ker A)^0 if and only if T(v) = 0 for all v ∈ker A, if and only if ker A ⊂ker T, if and only if T factors through π. In other words, T ∈ (ker A)^0 if and only if there exists T̅: V/ker A →ℝ satisfying T = T̅∘π; that is, T factors as V →V/ker A →ℝ, where the first map is π and the second is T̅. We have T = T̅∘π = T̅∘( A̅^-1∘ A ) = ( T̅∘A̅^-1) ∘ A. Then L̅ = T̅∘A̅^-1 is a linear functional on im A which may be extended (non-uniquely) to a linear functional L on W satisfying T = L ∘ A.
Thus T ∈ (ker A)^0 if and only if T ∈im A^*. Note that this lemma is equivalent to the statement: ker A ⊆ker g if and only if g = f ∘ A for some f ∈ W^*. Two functionals L_1 and L_2 extending L̅ must agree on im A. If A is surjective, then im A = W, so there is a unique extension. If A has rank r < dim W, then the space of possible extensions has the same dimension as ker A^*, which by the rank-nullity theorem is equal to dim W - r. In the context described above, a point p ∈ Z ⊂ M is a critical point of f|_Z if and only if there exists λ∈ E_p^* such that df_p = λ∘ Ds_p. The set of λ's satisfying <ref> is an affine space of dimension (k - r), where k is the rank of E and r is the rank of Ds_p along Z. Note that p being a critical point of f|_Z implies that ker Ds_p ⊆ker df_p. Therefore, by <ref>, p is a critical point of f|_Z if and only if df_p = λ∘ Ds_p for some λ∈ E_p^*. The last statement is a direct consequence of the remark. The Lagrange function is a function on E^* defined by: Λ(λ) = f(π (λ)) - (λ∘ s ∘π) (λ). For convenience, let S(λ) = (λ∘ s ∘π) (λ). Note that S is a section of E^**. Let Λ be the Lagrange function. Then λ∈ E^* is a critical point of Λ if and only if p = π(λ) is a critical point of f|_Z and λ satisfies df_p = λ∘ Ds_p. We have dS_λ(λ̇) = (π_V (λ̇))( s(π (λ)) ) + (λ∘ Ds ∘ dπ)(λ̇). Therefore, dΛ_λ (λ̇) = (df_p ∘ dπ) (λ̇) - dS_λ(λ̇). For the forward direction, suppose that p = π(λ) is a critical point of f|_Z and that df_p = λ∘ Ds_p; in particular p ∈ Z, so s(π(λ)) = 0. So dΛ becomes dΛ_λ = (df_p - λ∘ Ds_p) ∘ dπ≡ 0 on T_λE^*, by <Ref>. Conversely, since dΛ_λ (λ̇) = 0 for all λ̇∈ T_λE^*, it is zero in particular for λ̇∈ker d π. In this case, π_V(λ̇) = λ̇, which means s(π(λ)) = 0. Therefore (df_p - λ∘ Ds_p) ∘ dπ≡ 0, and since dπ is surjective, df_p = λ∘ Ds_p; so p is a critical point of f|_Z, again by <Ref>. Analogous results hold in infinite dimensions. See for example <cit.>.
§.§ The perturbed SL equations
In this section, we derive the Euler-Lagrange equations for the above Lagrange multipliers problem. These equations also appeared in <cit.> but without much explanation. Returning to the context of <ref>, we define the Lagrange functional Λ by analogy with the finite-dimensional setting. Note that C^∞(P) is dual to Ω^3(P) in the sense that there is a pairing C^∞(P) ×Ω^3(P) →ℝ, (f,α) ↦∫_P fα. Thus, the Lagrange functional Λ:C^∞(P) ×ℱ̃→ℝ is given by Λ(λ,ι̃) = ∫_[0,1] × Pι̃^* τ + ∫_Pλι^*ρ. Here, ι = ι̃(1,·). The tangent space of C^∞(P) ×ℱ at (λ,ι) is T_(λ,ι)( C^∞(P) ×ℱ) = C^∞(P) ×Γ(ι^*TM). As with f_τ, the functional Λ is 𝒢̃-invariant and therefore descends to a functional on 𝒮̃. Furthermore, its derivative dΛ is a 𝒢-invariant one-form on C^∞(P) ×ℱ given by dΛ_(λ,ι)(l,n) = ∫_Pι^*(n⌟τ) + ∫_Pλι^*( d(n⌟ρ) ) + ∫_P lι^*ρ. Note that since ρ is closed, Stokes' theorem gives dΛ_(λ,ι)(l,n) = ∫_Pι^*(n ⌟τ) + ∫_P dλ∧ι^*( n⌟ρ) + ∫_P lι^*ρ. If v ∈Γ(Tι P), then dΛ_(λ,ι)(l,v) = dΛ_(λ,ι)(l,0). Therefore dΛ descends to a one-form on C^∞(P) ×𝒮. The following notation was used in <cit.> and will also be used in this paper. Let α be a k-form on a manifold M. Let ι:P→ M be an embedding of a manifold P into M and suppose that ι^* α = 0. Then α defines an ι^*T^*M-valued (k-1)-form on P, called α_N, given by α_N(v_1, …, v_k-1) = α( ι_* v_1, … , ι_* v_k-1,·) for any (v_1, …, v_k-1) ∈ TP. In the presence of a metric, α_N takes values in N^*ι. The above construction shows the following. We have dΛ_(λ,ι)(l,n) = 0 for all (l,n) ∈ T_(λ,ι)( C^∞(P) ×ℱ) if and only if the following two equations are satisfied: τ_N + dλ∧ρ_N = 0, ι^*ρ = 0. Let (M,ρ,τ) be a 6-dimensional manifold equipped with a G_2-pair (ρ,τ) ∈Ω^3(M) ×Ω^4(M). Let P be an oriented, closed, 3-manifold.
A pair (λ,ι) ∈ C^∞(P) ×ℱ which solves <ref> is called a graphical associative for reasons that will become clear in the next section. These are the Euler-Lagrange equations for the Lagrange multipliers problem and will henceforth be referred to as the perturbed special Lagrangian (SL) equations. We say that the pair (λ,ι) is a critical point of the functional Λ if and only if (λ,ι) satisfies these equations. By the Lagrange multipliers theorem, a pair (λ,ι) is a critical point of Λ if and only if there exists ι̃∈ℱ̃ such that ι = ι̃(1,·) and ι̃ is a critical point of f_τC̃_̃ρ̃. Note that the perturbed SL equations are 𝒢-invariant. Thus the solutions may be thought of up to diffeomorphism. The following lemma appeared in <cit.> and shows how the above setup relates to Calabi-Yau manifolds. If (M,J,g,Ω) is a 6-(real) dimensional Calabi-Yau manifold, then (Ω) is a calibration whose calibrated submanifolds are called special Lagrangians. Equivalently, if ω is the Kähler form for the Calabi-Yau metric g, then the special Lagrangian submanifolds, L, are precisely the 3-dimensional submanifolds satisfying ωL = 0 Re(Ω)L = 0. Therefore it is easy to see that if, in the Calabi-Yau case, we let τ = 1/2ω^2 and ρ = Re(Ω), a submanifold ι P is special Lagrangian if and only if the embedding ι together with λ = constant is a critical point of Λ (see <Ref>). Due to this fact we hope to define a Floer theory for the critical points of Λ. However, we will need to consider solutions to <ref> other than those for which λ is a constant because of the following theorem from <cit.> which shows that the critical points of Λ and therefore f_τC̃_ρ cannot usually be isolated. Suppose that M is a Calabi-Yau manifold and L ⊂ M is a special Lagrangian submanifold. Then the moduli space of nearby special Lagrangians is a smooth manifold of dimension equal to the first Betti number of L. Of course, if we think of f_τC̃_ρ as a functional on embeddings rather than on submanifolds (i.e. embeddings up to diffeomorphism) the critical points are not isolated since if ι is a critical point, so is g^*ι for any g ∈𝒢. In the following sections, we will prove that if one perturbs the special Lagrangian equations the critical points of Λ will become isolated (up to diffeomorphism). § ELLIPTICITY AND THE G2-CYLINDER In this section, we express the perturbed SL equations in terms of a section of an infinite-dimensional vector bundle and state the definition of the moduli space. Then we restrict our discussion to a slice of the action of the diffeomorphism group. We prove that the linearization of the relevant section is an elliptic operator by exploiting the relationship between 6 and 7 dimensions. Throughout, M will be a 6-manifold equipped with * a smooth G_2-pair (ρ',ω') ∈Ω^3(M) ×Ω^2(M) such that both ρ' and ω' are closed * a smooth G_2-pair (ρ,τ) ∈Ω^3(M) ×Ω^4(M) such that both ρ and τ are closed and tamed by (ρ',ω'). Also, let g denote the metric on M induced by the G_2-pair (ρ',ω'). More precisely, since φ'= ρ' + dt ∧ω' is a closed G_2-structure on ℝ× M, there is a corresponding metric on ℝ× M. This induces the metric, which we call g, on { 0 }× M which we identify with M. As before, P will be a closed 3-manifold. In this section, we deal with the space of smooth embeddings ι:P → M such that ι belongs to a fixed homology class A and such that both ι^* ρ̂ and ι^*ρ' are positive. Let ℱ denote this space. Next, consider the vector bundle ℰ→ C^∞(P) ×ℱ whose fiber at (λ,ι) is ℰ_(λ,ι) = Ω^3(P) ×Ω^3(P,ι^*T^* M). 
Solutions to the perturbed SL equations then correspond to the zero set of the following section of this bundle. L:C^∞(P) ×ℱ → ℰ (λ,ι) ↦ ( ι^*ρ,τ_N + dλ∧ρ_N ). Let 𝒩̃( A,P;(ρ,τ) ) denote the moduli space of solutions to the perturbed SL equations. That is, 𝒩̃ = 𝒩̃( A,P;(ρ,τ) ) = { (λ,ι) ι∈ℱ, ι^*ρ = 0, τ_N + dλ∧ρ_N = 0 }. Then 𝒩̃ can be identified with the zero set of L 𝒩̃ = 𝒩̃( A,P;(ρ,τ) ) = L^-1(0). As in the finite-dimensional case, we often want to neglect the Lagrange multiplier λ. Let π_2:C^∞(P) ×ℱ→ℱ be the projection and let ℳ̃ = π_2( 𝒩̃ ). This is the moduli space of perturbed special Lagrangians. A few remarks about this definition of the moduli space are in order. * The tilde above ℳ is there to remind us that we have not yet taken the quotient with 𝒢. * At the moment, we do not need to keep track of the parameters A, P and (ρ,τ) so we drop them from the notation. Later on, we will want to add them back in. * Also at the moment, we have defined the moduli space in such a way that all its elements are smooth. In <ref> we will want to allow for non-smooth elements. The results of <ref> will imply that if (ρ,τ) are of class C^ℓ then there exists a diffeomorphism ϕ such that if (λ,ι) is a C^3 solution to the perturbed SL equations, (λ∘ϕ,ι∘ϕ) is also of class C^ℓ. §.§ Restricting to a slice The group of orientation-preserving diffeomorphisms of P acts freely on C^∞(P) ×ℱ by composition (see <cit.> for details) and 𝒮 = ℱ/𝒢 is a Fréchet manifold. However, we will later need to work with Banach spaces and will want to identify elements in (C^∞(P) ×ℱ)/ 𝒢 near a particular element with normal vector fields of varying regularity. Restricting to a slice of the action of 𝒢 removes this complication. The metric g corresponding to the G_2 pair (ρ',ω') induces an inner-product on the tangent spaces of C^∞(P) ×ℱ as follows. Suppose that n_1, n_2 ∈ T_ιℱ = Γ(ι^*TM) and l_1,l_2 ∈ T_λC^∞(P) . Define ⟨ n_1, n_2⟩ = ∫_P g(n_1,n_2) ι^* ρ' and ⟨ l_1,l_2⟩ = ∫_P l_1 l_2 ι^*ρ'. These equations define a 𝒢-invariant metric on C^∞(P) ×ℱ. Using the exponential map corresponding to the metric g on M, a neighborhood U_(λ,ι) of (λ,ι) can be identified with an open set in T_(λ,ι)( C^∞(P) ×ℱ). Under this identification, let S_(λ,ι)⊂ U_(λ,ι) denote the subset corresponding to the orthogonal complement of the 𝒢-orbit of (λ,ι). We refer to S_(λ,ι) as a local slice for (λ,ι) and it consists of pairs (l,n) where n is a normal (with respect to g) vector field along ι P. This definition makes sense because S_(λ,ι) is transverse to the 𝒢-orbit of (λ,ι) in C^∞(P) ×ℱ. Let 𝒩^S denote the local moduli space of graphical associatives given by 𝒩^S = 𝒩^S(A,P;(ρ,τ)) = { (λ, ι) ∈ L^-1(0) (λ,ι) ∈ S } where S is a local slice as described above. As before, also define ℳ^S = π_2(𝒩^S). Let S be a local slice. Let LS denote the restriction of the section L to S. §.§ Ellipticity Next, we prove The linearization of LS at a point (λ, ι) ∈𝒩^S is a self-adjoint, elliptic operator. First we review some definitions. Let E,F be vector bundles over a manifold M and let T:Γ(E) →Γ(F) be a differential operator of order k. Let (x,v) ∈ T^*M and e ∈ E_x be given. Find a smooth function g(x) and a section f ∈Γ(E) such that dg_x = v and f(x) = e. Then the principal symbol of T is defined by σ(T)(x,v)e = L( (g - g(x))^k f )(x) ∈ F_x A symbol σ is called elliptic if for all (x,v) ∈ T^* M, the linear map σ(x,v): E_x → F_x is an isomorphism. A differential operator T of order k is called elliptic if its principal symbol is elliptic. 
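To fix conventions, here are two textbook examples worked out with this definition (note that no combinatorial factor 1/k! is included). For the exterior derivative d:Ω^0(M) →Ω^1(M), choosing f with f(x) = e and g with dg_x = v gives σ(d)(x,v)e = d( (g - g(x)) f )(x) = e v, while for the flat Laplacian Δ = ∑_i ∂^2/∂ x_i^2 acting on functions on ℝ^n, σ(Δ)(x,v)e = Δ( (g - g(x))^2 f )(x) = 2 |v|^2 e. The latter is invertible for every v ≠ 0 (as usual, ellipticity only concerns nonzero covectors), so Δ is elliptic, whereas the symbol of d is injective but not surjective.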
Next, let (λ, ι) be a graphical associative contained in a local slice S. Let N ι be the normal bundle of ι P with respect to the metric determined by (ρ',ω'). Then the linearization of LS is given by D: C^∞(ι P) ×Γ (Nι) →Ω^3(P) ×Ω^3 (P, N^* ι) D_(λ, ι)(l,n) = ( ι^* d(n⌟ρ) , d( n ⌟(τ + dλ∧ρ) )_N + dl ∧ρ_N ). In matrix form D_(λ,ι)(l,n) = [ (d( · )∧ρ)_N d( · ⌟ (τ+ dλ∧ρ))_N; 0 ι^* d( · ⌟ρ) ][ l; n ]. Note that D is an operator of order 1. Let (x,α_x) ∈ T^* ι P and choose (k,n_x) ∈ℝ× N_x ι. Let σ_τ,ρ denote the symbol of D. Furthermore, let η = τ + dλ∧ρ. Then <Ref> gives σ_τ,ρ(x,α_x) (k,n_x) = ( α_x ∧(n_x ⌟ρ_x), (α_x ∧(n_x ⌟η_x))_N + kα_x ∧ρ_N_x) or, in matrix form σ_τ,ρ(x,α_x) = [ · (α_x ∧ρ_N) α_x ∧( · ⌟(τ + dλ∧ρ) )_N; 0 ( · ⌟ρ) ∧α ][ k; n ]. In order to check that σ_ρ,τ is an isomorphism for all choices of (x,α_x) ∈ T^* ι P, we view our problem from the perspective of 7-dimensional G_2 geometry since from this perspective, the symbol is significantly more simple. The next definition shows how the 6- and 7-dimensional perspectives are related: Suppose that (λ, ι) ∈ C^∞(P) ×ℱ. Define ι_λ: P → ℝ× M ι_λ(p) = ( λ(p) , ι(p) ). Then ι_λP is the graph of the function λ over ι P. The following lemma finally allows us to justify our term graphical associative. For any graphical associative (λ, ι), ι_λ P as defined above is an associative submanifold of X = ℝ× M with respect to the G_2 4-form ψ = τ + dt ∧ρ. Note that any vector field tangent to ι_λ P can be written in the form u = u + dλ(u) ∂_t where u is a vector field on ι P. Let (u,v,w) be three such vector fields tangent to ι_λ P and calculate ι_λ^*(u ⌟ v ⌟ w ⌟ψ). It will be easy to see that since ι^*(ρ) = 0 and (τ + dλ∧ρ)_N = 0, ι_λ^*(u ⌟ v ⌟ w ⌟ψ) also vanishes. The above lemma also allows us to prove that when M is an actual Calabi-Yau manifold, then the only solutions to the perturbed SL equations are special Lagrangians. In the above situation, the only solutions (λ,ι) to the perturbed SL equations are those for which ι P is a special Lagrangian submanifold of M and dλ = 0. We already know that solutions with dλ = 0 correspond to special Lagrangian submanifolds in M, so it suffices to show that any solution (λ,ι) satisfies dλ = 0. However, note that if ι_λ P is a graphical associative submanifold in ℝ× M, it must be volume-minimizing in its homology class. On the other hand, note that the volume of ι_0P where ι_0(p)= (0,ι(p)) is always less than or equal to the volume of ι_λ where equality holds if and only if λ is constant. So the fact that ι_λ P is associative means that λ must be constant. Just as special Lagrangian submanifolds are the critical points of a functional, associative submanifolds are also critical points of a functional. In fact, the situation is much simpler in this case since we do not have any constraints. Therefore the idea is to write the symbol of σ_τ,ρ in terms of the symbol of the action functional for associative submanifolds which is easily seen to be elliptic. §.§ The associative action functional A detailed description of the following material was given in <cit.>. Here it is slightly generalized. Suppose that (X,ψ,φ') is a 7-dimensional manifold equipped with a closed, stable 3-form φ' and a closed φ'-tame 4-form ψ. Let A ∈ H_3(X;ℝ). Let P be a closed, oriented, 3-dimensional manifold. Then define ℱ(X) = {ι: P → X ι∈ C^∞ is an embedding, ι^*φ' > 0, and ι∈ A }. We also have the covering space versions of these objects exactly as in <ref>. 
The 4-form ψ defines a 𝒢-invariant action functional on ℱ̃(X) by integration: f_ψ(ι̃) = ∫_[0,1] × Pι̃^* ψ. This descends to a 𝒢-invariant, horizontal one-form on ℱ(X) (df_ψ)_ι(n) = ∫_Pι^*(n⌟ψ). Similar to the 6-dimensional case, ι is a critical point of f_ψ if and only if ψ_N = 0. In other words, if and only if ι P is an associative submanifold of X. Therefore, it's easy to see that the critical points of f_ψ are exactly the associative submanifolds of X. As before, we can write down the symbol for the operator associated to the equation ψ_N = 0 and restrict ourselves to a local slice of the action of the diffeomorphism group. Let N ι denote the normal bundle of any embedding ι:P → X with respect to the metric determined by φ'. Note that local slices of the action of 𝒢 on ℱ(X) can also be defined in this context. Let α_x ∈ T_x^* ι P and n_x ∈ N_x ι. Then σ_ψ(α_x)(n_x) = ( α_x ∧ (n_x ⌟ψ) )_N. Now suppose that X = ℝ× M where M is a 6-dimensional manifold with a closed, tamed, G_2 pair. Let (λ,ι) be a critical point of Λ. Then note that N_(t,x)ι_λ≅ℝ⊕ N_x ι P. Under the identification N_(t,x)ι_λ≅ℝ⊕ N_x ι, we have σ_τ,ρ(α)(k, n) = σ_ψ(α)(n + k ∂_t) where α is an arbitrary 1-form on ι P and α is the corresponding 1-form on ι_λ P Note that the image of σ_ψ(α) lies in Λ^3(T^*_(x,t)ι_λ P) ⊗ N^*_(x,t)ι_λ≅Λ^3(T^*_(x,t)ι_λ P) ⊕( Λ^3(T^*_(x,t)ι_λ P) ⊗ N^*_x ι) So we can write σ_ψ as a matrix with respect to this splitting: σ_ψ(α)(k,n)= [ A B; C D ][ k; n ]. Let α be an arbitrary 1-form on ι_λ P. When n = 0, we have σ_ψ(α)(k, 0) = α∧ (k ∂_t ⌟ψ)_N = k α∧ρ_N which is an N^*_(x,t)ι_λ-valued 3-form on ι_λ P. However, note that ρ is defined on ι P. So actually, we can view this as an N^*_x ι-valued 3-form on ι_λ P. Therefore, we can conclude that A = · α∧ρ_N C = 0. Furthermore, when k = 0, we have: σ_ψ(α)(0,n) = α∧( n⌟τ + n⌟(dt ∧ρ) )_N. We know that B is an N^*_x ι -valued 3-form on ι_λ P. If h: ι P →ℝ× M by h(x) = ( λ(x),x ), then B(n) = h^*(α∧ (n⌟τ + n⌟(dt ∧ρ))_N). On the other hand, D is a 3-form on ι_λ P. To see what D is, we can contract A with the volume form on ι_λ P. The term which has a dt remaining will be D. We have B = α∧ (· ⌟τ + · ⌟( dλ∧ρ))_N D = α∧ (· ⌟ρ). After comparing this to equation (<ref>), we see that the claim is proved. Suppose that ι P is an associative submanifold of a manifold X with G_2-structure. Let { e_1 , e_2, e_3 } be an orthonormal basis for T_p ι P and let α = a_1 e^1 + a_2 e^2 + a_3 e^3 be an arbitrary 1-form on ι P. Let n be an arbitrary vector normal to ι P at p. Then (e_3 ⌟ e_2 ⌟ e_3 ⌟( α∧ (n ⌟ψ) ) )^♭ = a_1(e_1 ⌟ e_2 ⌟ n ⌟ψ)^♭ + a_2(e_3 ⌟ e_1 ⌟ n ⌟ψ)^♭ + a_3(e_2 ⌟ e_1 ⌟ n ⌟ψ)^♭ = a_1(e_3 × n) + a_2(e_2 × n) + a_3(e_1 × n) = α× n where we have used the fact that e_1, e_2, e_3 and n are all orthogonal, and the fact that ψ_N vanishes on ι P. In this sense, the symbol σ_ψ is just the dual of the symbol for the Dirac operator = ∑_i = 1^3 e_i ×∇_e_i whose kernel describes deformations of associative submanifolds and is elliptic, and self-adjoint. §.§ Self-adjointness The operator D_(λ,ι) is self-adjoint in the following sense. Suppose that ι: P →ℝ× M is an embedding. Let Nι denote the normal bundle of this embedding and N^*ι its dual. Then there is an isomorphism Γ(N^*ι) Ω^3(P,N^*ι) given by the pairing: Γ(Nι) ⊗Ω^3(P,N^*ι) →ℝ (n,α) ↦∫_Pn ⌟α. Let ι_λ:P →ℝ× M be the embedding associated to a graphical associative (λ,ι) ∈ C^∞×ℱ as in <Ref>. This identification, the above isomorphism, and the metric dual allow us to think of the operator D_(λ,ι) as a map between Nι_λ and itself. 
Explicitly, let n_1, n_2 ∈Γ N ι_λ. Let ψ = τ + dt ∧ρ as usual. Then define D:Γ(ι_λN) →Γ(ι_λN) n ↦( d(n⌟ψ)_N )^♭ where the metric dual is taken with respect to the metric g_ψ defined by ψ. The following lemma was essentially proved by McLean in <cit.>, and again by Joyce in <cit.>. The operator D above is self-adjoint. Therefore, <Ref> implies that σ_ρ,τ is also elliptic and self-adjoint which is what we set out to prove. § VOLUME BOUNDS In light of <Ref>, we now have a natural way to identify solutions to the Lagrange multipliers problem with associative graphs in a cylinder ℝ× M. If ( X, ψ, φ' ) is a 7-manifold with a closed, tamed G_2-structure, and ι:P → X is a ψ-associative submanifold of X, then <Ref> implies Vol_g_ψ(ι P) ≤ K ⟨ [φ'], [ι]⟩ for some positive constant K, where [ι] is the homology class represented by ι. We have an analogous volume bound for graphical associatives. Suppose that ( M,ρ',ω' ) is a 6-dimensional manifold with a G_2-pair (ρ',ω') ∈Ω^3(M) ×Ω^2(M). Also suppose that (ρ,τ) is a (ρ',ω')-tame G_2 pair and that (λ,ι) ∈𝒩(A,P;(ρ,τ)). Then both the volume of ι P and *dλ_L^2 are bounded. Suppose that (λ,ι) is a graphical associative. Let ι_λ denote the associated embedding ι_λ:P →ℝ× M, as before. We have the following metrics. * g_M : the metric on M given by the G_2-pair (ρ,τ) * g_X : the metric on X = ℝ× M given by the G_2-structure ψ = τ + dt ∧ρ * g = dt^2 + g_M: the product metric on ℝ× M. Since (ρ,τ) is not necessarily an SU(3)-structure, g_X does not necessarily equal g. Fix a point p ∈ P and let { e_i }_i=1^3 be a basis for T_p P. Also let { v_i }_i = 1^3 = { dι( e_i ) } { u_i }_i=1^3 = { dι_λ( e_i ) }. Choose { e_i } so that v_i and { u_i } are orthonormal with respect to g_M and g respectively. Denote the g_M-dual of v_i by v^i. Similarly, denote the g-dual of u_i by u^i. Let μ be the function on ι P defined by λ∘ι^-1. Observe that u_i = v_i + dμ(v_i)dt. Let vol_ι_λ denote the g-volume form on ι_λ P and vol_ι the g_M-volume form on ι P. Then we have ( vol_ι_λ)_(t,x) = u^1 ∧ u^2 ∧ u^3. We compute u^1 ∧ u^2 ∧ u^3 = ( v^1 + dμ(v_1) dt ) ∧( v^2 + dμ(v_2) dt ) ∧ + ( v^3 + dμ(v_3) dt ) = v^1 ∧ v^2 ∧ v^3 + dμ(v_1) v^2 ∧ v^3 ∧ dt - dμ(v_2)v^1 ∧ v^3 ∧ dt + dμ(v_3)v^1 ∧ v^2 ∧ dt. Next, let L: ι P →ℝ× M be the map defined by L(x) = ( μ(x),x ). Then the above implies L^*( vol_ι_λ) = vol_ι + *_ιdμ∧ dμ = vol_ι( 1 + *dμ^2 ) Where *_ι denotes the Hodge star on ι P with respect to the metric induced by g_M. Next, since ι_λ P is associative by assumption, and since (ρ',ω') tames (ρ,τ), there exists a constant K > 0 such that for all (t,x) ∈ℝ× M, ( vol_ι_λ)_(t,x)≤ Kφ'T_(t,x)ι_λP. Note that this is still true even though we are using the metric g instead of g_X to define the volume form vol_ι_λ. Since L^*(φ') = dμ∧ω' + ρ', we have ∫_ι Pvol_ι( 1 + *dμ^2 ) ≤ K ∫_ι P dμ∧ω' + ρ' ⇒Vol_g_M(ι P) + *dμ_L^2^2 ≤ K ⟨ [dμ∧ω'],[ι]⟩ + K ⟨ [ρ'],[ι]⟩ = K⟨ [ρ'],[ι]⟩. Clearly the L^2-norm of dμ is bounded if and only if the L^2-norm of dλ is bounded. § ELLIPTIC REGULARITY In this section, we prove some regularity results that will be used in the proofs of the main theorems. The results in this section mainly follow from standard elliptic bootstrapping methods. We will work exclusively in the 7-dimensional setting for this section. In <ref> we will apply these to the 6-dimensional setting described above. 
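The bootstrapping below repeatedly invokes the interior Schauder estimate, which we record here in the form we use (a standard statement; the constant depends on the domains, on the ellipticity constants, and on the Hölder norms of the coefficients): if L is a second order, uniformly elliptic operator with coefficients in C^k,α on a domain U and Lu = f, then for any compactly contained subdomain U' ⊂ U, ‖ u ‖_C^k+2,α(U')≤ C ( ‖ f ‖_C^k,α(U) + ‖ u ‖_C^0(U)). In particular, a C^2,α solution of an equation whose coefficients improve whenever the solution does gains one degree of regularity at each step, which is how the proofs in this section proceed.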
§.§ Set-up on ℝ^7
We begin by reviewing the so-called associator equation from <cit.>, which is a nonlinear PDE whose solutions are functions whose graphs are associative with respect to the standard G_2-structure on ℝ^7. We then generalize this equation for non-flat G_2-structures. Let ℍ denote the quaternions and 𝕆 denote the octonions. Then identify ℝ^7≅Imℍ×ℍ≅Im𝕆. Furthermore, let 1,𝐢,𝐣,𝐤,𝐞,𝐢𝐞,𝐣𝐞,𝐤𝐞 be the standard basis for 𝕆. In particular, any point x ∈ℍ has the form x = x_0 + x_1 𝐢 + x_2 𝐣 + x_3 𝐤. Suppose that x,y and z are octonions. Define the two- and three-fold cross products to be x × y = -1/2( x̅y - y̅x ) and x × y × z = 1/2( x(y̅z) - z(y̅x) ). There are two important operators which are ingredients in the associator equation. Let U be a domain in Imℍ and let f:U →ℍ be a C^1 map. Then the Dirac operator on f is defined to be D(f) = -∂ f/∂ x_1𝐢 - ∂ f/∂ x_2𝐣 - ∂ f/∂ x_3𝐤. The first order Monge-Ampère operator on f is defined to be σ(f) = ∂ f/∂ x_1×∂ f/∂ x_2×∂ f/∂ x_3. Let f:U →ℍ be a C^1 map. Then the graph of f is an associative submanifold of Imℍ×ℍ if and only if f satisfies D(f) = σ(f). This equation is known as the associator equation and is a first order, nonlinear PDE. It is well-known that C^1 minimal submanifolds are smooth, and that associative submanifolds are minimal. Therefore a corollary to this theorem is that the graph of an associative, C^1 map f:U →ℍ is smooth. Let ψ_0 denote the standard G_2 4-form on ℝ^7. Let β be a 4-form on ℝ^7 which is small enough so that ψ = ψ_0 + β is still stable. We now explain how to formulate a modified associator equation with respect to this new 4-form. To state this generalization we require the following constructions. Let
* g_0 be the G_2 metric on ℝ^7 determined by ψ_0 and
* g_ψ be the G_2 metric on ℝ^7 determined by ψ.
Also fix an orientation on ℝ^7. To each stable 4-form, we also associate an associator [·,·,·]_ψ:Λ^3(ℝ^7) →ℝ^7 by requiring that g_ψ( [v_1,v_2,v_3]_ψ,v_4 ) = ψ(v_1,v_2,v_3,v_4) for all v_1,v_2,v_3,v_4 ∈ℝ^7. Let [·,·,·]_ψ^0:Λ^3(ℝ^7) →ℝ^7 denote the map determined by replacing g_ψ with g_0 in the above formula. That is, g_0([v_1,v_2,v_3]_ψ^0,v_4) = ψ(v_1,v_2,v_3,v_4) for all v_1,v_2,v_3,v_4 ∈ℝ^7. Then define β̂(·,·,·):Λ^3(ℝ^7) →ℝ^7 by the equation ψ(v_1,v_2,v_3,v_4) = ψ_0(v_1,v_2,v_3,v_4) + β(v_1,v_2,v_3,v_4) = g_0( [v_1,v_2,v_3]_ψ_0,v_4 ) + g_0( 2β̂(v_1,v_2,v_3),v_4 ), so that [·,·,·]_ψ^0 = [·,·,·]_ψ_0 + 2 β̂(·,·,·). Finally, thinking of Im𝕆 as Imℍ⊕ℍ, define projection operators π_ℍ: Im𝕆 →ℍ and π_Imℍ: Im𝕆 →Imℍ. Then set β̂_ℍ = π_ℍ∘β̂ and β̂_Imℍ = π_Imℍ∘β̂. Next, suppose that f:U →ℍ is a C^1 map. Let f^i denote ∂ f/∂ x_i for simplicity. Then the tangent space of the graph of f at each point is spanned by u = 𝐢 + f^1𝐞, v = 𝐣 + f^2𝐞, w = 𝐤 + f^3𝐞. Therefore we may think of β̂_ℍ as a differential operator on f by defining β̂_ℍ(f)𝐞 = β̂_ℍ(u,v,w). As before, let U be a domain in Imℍ. Let f:U →ℍ be a C^1 map. Then the graph of f is associative with respect to ψ = ψ_0 + β if and only if it satisfies the equation D(f) - σ(f) + β̂_ℍ(f) = 0.
Also recall that 1/2[x,y,z]_ψ_0 = Im( x × y × z ). Therefore, if the graph of f is ψ-associative, we have 1/2[u,v,w]^0_ψ = Im(u × v × w) + β̂(u,v,w) = Im( 𝐢( f^2× f^3) + 𝐣( f^3 × f^1 ) + 𝐤( f^1 × f^2 ) ) + β̂_Imℍ(f) + ( σ(f) - D(f) )𝐞 + β̂_ℍ(f)𝐞 = 0 by formulas in <cit.>. In particular, σ(f) - D(f) + β̂_ℍ(f) = 0 as desired. Conversely, suppose that σ(f) - D(f) + β̂_ℍ(f) = 0. Then 1/2[u,v,w]^0_ψ = Im( 𝐢( f^2× f^3) + 𝐣( f^3 × f^1 ) + 𝐤( f^1 × f^2 ) ) + β̂_Imℍ(f). But g_0([u,v,w]^0_ψ,z) = g_ψ( [u,v,w]_ψ,z ) = 0 whenever z = u,v or w since [u,v,w]_ψ is orthogonal to u,v, and w with respect to the metric g_ψ. Since u = 𝐢 + f^1𝐞, this means that g_0( [u,v,w]_ψ^0,u ) = g_0( [u,v,w]_ψ^0,𝐢) + g_0( [u,v,w]_ψ^0,f^1𝐞) = 0. Similarly for v and w. But since each term in <ref> is contained in Imℍ we conclude that g_0( [u,v,w]_ψ^0,𝐢) = g_0( [u,v,w]_ψ^0,𝐣) = g_0( [u,v,w]_ψ^0,𝐤) = 0 Therefore [u,v,w]^0_ψ = 0 so the graph of f is associative as desired. §.§ Regularity and compactness Next, we apply elliptic bootstrapping methods to the modified associator equation to prove a regularity theorem and a compactness theorem for embeddings with bounded second fundamental form which will both play a crucial part in the transversality results to follow. Since the G_2-structures we are interested in in this paper are not necessarily torsion-free, their corresponding associative submanifolds are not necessarily minimal. This is why we cannot rely on regularity results about minimal submanifolds. Recall that a subset S of a Riemannian manifold M is a C^ℓ submanifold if it can be covered by C^ℓ charts of M. It is a standard result that a subset of a C^ℓ manifold M, where ℓ≥ 1, is a C^ℓ submanifold if and only if it is the image of a C^ℓ embedding. Furthermore, if S is the image of a C^1 embedding ι:P → M and if S is also a C^ℓ submanifold, then there exists a C^1 diffeomorphism ϕ of P such that ι∘ϕ is a C^ℓ embedding. See <cit.> for more details. Let ℓ≥ 2 be an integer. Let (X,ψ) be a closed, oriented 7-manifold equipped with a stable C^ℓ 4-form ψ. Let ι:P → X be an embedding of a closed 3-manifold P into X so that ι P is a C^3 associative submanifold with respect to ψ. Then ι P is a C^ℓ submanifold. In particular, if ψ is smooth, then so is ι P. First note that at every point x ∈ι P, there is a small neighborhood U of x in ι P which can be regarded as a graph over T_x ι P. With this in mind, identify T_xι P with Imℍ, and N_x ι with ℍ. Let r be the distance from the origin in T_x X. Let α∈ (0,1) be a number. It suffices to prove that a C^2,α function f:U→ℍ whose graph is associative with respect to a 4-form ψ = ψ_0 + β on Imℍ⊕ℍ, where β is O(r) and C^ℓ-1,α, is in C^ℓ,α. Assume that f is such a function. Then f satisfies D(f) - σ(f) + β̂_ℍ(f) = 0 due to the fact that ι P is associative. Applying D to both sides of this equation yields Δ(f) - (D ∘σ)(f) + (D ∘β̂_ℍ)(f) = 0 where Δ(f) is the Laplacian. Let L_1 = (D∘σ)(f) which can be calculated explicitly L_1(f) = -( f^11× f^2 × f^3 )𝐢 - ( f^1 × f^12× f^3 ) 𝐢 - ( f^1× f^2× f^13) 𝐢 - ( f^21× f^2 × f^3 )𝐣 - ( f^1 × f^22× f^3 ) 𝐣 - ( f^1 × f^2 × f^23) 𝐣 - ( f^31× f^2 × f^3 ) 𝐤 - ( f^1 × f^32× f^3 ) 𝐤 - ( f^1 × f^2 × f^33) 𝐤. We see that L_1 is a nonlinear, second order operator with smooth coefficients. However, we can associate a linear second order operator L_1^f to L_1 with coefficients in C^1,α depending on the first derivatives of f simply by declaring the parts of L_1(f) that are first derivatives of f to be coefficients. Next, we address the operator (D ∘β̂_ℍ). 
It is useful to first get a better understanding of the operator β̂_ℍ. For the moment, let { e_i } be a basis for ℝ^7 and let { e^i } be its dual basis. Then any C^ℓ-1,α 4-form on ℝ^7 can be written as β = ∑ A_ijkl e^ijkl where A_ijkl:ℝ^7→ℝ are C^ℓ-1,α functions. On the other hand, β̂:Λ^3(ℝ^7) →ℝ^7 is given by β̂ = ∑ A_ijkle^ijk_l. Next, let ℝ^7≅Imℍ⊕ℍ as usual. Let x = (x_1,x_2,x_3) be coordinates on Im(ℍ). At a point (x,f(x)), we have β̂(f)(x) = ∑ A_i(x,f(x))B_i(x)e_i where A_i come from the coefficients of β and B_i are products of the first partial derivatives of f. We see that β̂(f):ℝ^3→ℝ^7. Projecting onto the last 4 coordinates gives us the map β̂_ℍ(f):ℝ^3 → ℝ^4 x ↦ ∑_i = 4^7A_i(x,f(x))B_i(x)e_i. By the Leibniz rule, (D ∘β̂_ℍ) is a nonlinear differential operator with first and second order parts. Let L_2 denote the second order part and L_3 denote the first order part. Both L_2 and L_3 are nonlinear with coefficients in C^ℓ-2,α. Similar to above, we can associate a linear second order operator L_2^f to L_2 whose coefficients depend on the first derivatives of f. In this case however, the fact that the equation for β̂_ℍ(f) involves the composition of f, which is C^2,α with the function A_i, which is C^ℓ,α, means that the linear, second order operator L_2^f has coefficients in C^1,α^2. The linear, first order operator L_3^f also has coefficients in C^1,α^2. Altogether, let L^f = Δ - L_1^f + L_2^f + L_3^f. We must check that L^f is elliptic. The principal symbol of L^f is a 3 by 3 symmetric matrix. Its diagonal components are B_11 = 1 + ( ·× f^1 × f^2)𝐢 + b_11 B_22 = 1 + ( f^1 ×·× f^3) 𝐣 + b_22 B_33 = 1 + ( f^1 × f^2 ×·)𝐤 + b_33. Its off-diagonal components are B_12 = ( f^1 ×·× f^3 )𝐢 + ( ·× f^2 × f^3 ) 𝐣 + b_12 B_13 = ( f^1 × f^2 ×·)𝐢 + ( ·× f^2 × f^3 ) 𝐤 + b_13 B_23 = ( f^1 × f^2 ×·)𝐣 + ( f^1 ×·× f^3 ) 𝐤 + b_23. The terms b_ij come from L_2^f and depend on the coefficients of β, which are C^ℓ-1,α^2 and O(r) and the first derivatives of f which can be made as small as we like since we can choose the neighborhood in the tangent plane of ι P over which we are graphing to be as small as we like. Thus we see that the principal symbol of L^f is a small perturbation of the identity matrix and L^f is therefore elliptic with coefficients in C^1,α^2. Standard results from Schauder theory and elliptic regularity allow us to conclude that since f solves L^f(f) = 0, there is a number α∈ (0,1) for which f is a C^3,α function on U. That means that all the first derivatives of f are in fact contained in C^2,α for some α∈ (0,1). So the same argument implies that f is in C^4,α which means that L^f has coefficients in C^3,α. This process can be repeated up until the first derivatives of f are contained in C^ℓ-2,α and f is in C^ℓ-1,α. Then f is in fact in C^ℓ. Further bootstrapping is prevented by the regularity of the coefficients of β. Next we prove a compactness result for embeddings with bounded second fundamental form. This will be invoked when applying the “Taubes trick” in <ref>. Suppose that X is a 7-manifold and let ι: P → X be an embedding. Fix any Riemannian metric on X. For any x,y ∈ι P let d_X(x,y) denote the distance between x and y in X. That is, the distance between x and y as points in X. On the other hand, let d_ι P(x,y) denote the distance between x and y in ι P. Let II(ι) denote the second fundamental form of ι P and vol(ι P) denote the volume with respect to the metric. Let ℓ≥ 1 be an integer, α∈ (0,1) a number, and let p ≥3/1-α. 
Let X be a closed 7-manifold equipped with sequence of stable 4-forms ψ_a converging to a stable 4-form ψ in the C^ℓ,α topology. Suppose also that ι_a:P → X is a sequence of W^3,p embeddings such that for all x,y ∈ P, * d_X( ι_a (x),ι_a(y) ) ≥1/K d_ι_a P( ι_a(x),ι_a(y) ) * *II(ι_a)_L^p≤ K * vol(ι_a P) ≤ K for large enough a and some fixed constant K. Also assume that ι_a is ψ_a associative for each a. Then there exists a subsequence ι_b of ι_a and a sequence of diffeomorphisms ϕ_b of P such that ι_b ∘ϕ_b converges in the C^ℓ,α-topology to ι, an embedding also satisfying (i) and (ii). The Sobolev embedding theorem guarantees that ι_a ∈ C^2,α. Thus, <Ref> implies that there is a sequence ϕ_a of diffeomorphisms such that ι_a ∘ϕ_a ∈ C^ℓ+1,α. In order to simplify notation, we therefore just assume that ι_a ∈ C^ℓ +1,α. The main theorem in <cit.> implies that there exists an embedding ι:P → X satisfying conditions (i) and (ii), a subsequence ι_b of ι_a, and a sequence of diffeomorphisms ϕ_b such that ι_b ∘ϕ_b converges to ι in the C^2-topology. The condition (i) above guarantees that ι is also an embedding. Furthermore, ι is ψ-associative and therefore also in C^ℓ+1. We just need to show that ι_a converges in the C^ℓ,α-topology. Let x ∈ι P and let U ⊂ι P be an open set in ι P containing x small enough so that it can be represented as a graph of a C^ℓ+1,α function f. Let V = ι^-1(U) and U_a = ι_a(V). Then, since ι_a converges to ι, each U_a can be represented as a graph of a function f_a over T_x ι P. Section 6 in <cit.> together with the assumption (ii) guarantees that there is a uniform bound on the C^2 norm of f_a. Therefore there is a uniform bound on the C^0,α norm of the first derivatives of f_a. This in turn implies that there is a uniform bound on the C^0,α norm of the coefficients of the operator L^f_a. Thus, the Schauder estimates imply that there is a uniform bound on the C^2,α norm of f_a. This in turn implies that there is in fact a uniform bound on the C^1,α norm of the coefficients of L^f_a. This process can be repeated to obtain a uniform bound on the C^ℓ+1,α norm for f_a. Therefore, after passing to a subsequence f_a converges to f in the C^ℓ,α topology. Thus ι_a also converges to ι in the C^ℓ,α-topology as desired. § TRANSVERSALITY In this section we prove that for a generic choice of a G_2-pair, the moduli space ℳ is a collection of isolated points. §.§ Another slice The group ℝ acts on C^∞(P) by addition of a constant. Let O_λ denote the ℝ-orbit of λ∈ C^∞(P). Note that the L^2-orthogonal complement to the space of constant functions is I(P) = { h ∈ C^∞(P) ∫_P h vol_P = 0 }. The quotient space C^∞(P)/ ∼ where f ∼ g if and only if f = g + c for some constant c can therefore be identified with I(P). Let S be a local slice for the action of 𝒢 on C^∞(P) ×ℱ as in <Ref>. Let I × S denote the set of pairs (λ,ι) where ι∈ℱ / 𝒢 and where λ∈ C^∞(P)/𝒢 is also a representative of an element in I(P), as above. Note that if (λ,ι) is a graphical associative, so is (λ + c,ι) where c is any constant. Therefore restricting to the slice I amounts to neglecting solutions up to translation in the G_2 cylinder. Let ℰ denote the bundle over I × S whose fiber at (λ,ι) is Ω^3(P) ×Ω^3(P,N^*ι). Define LI× S:I× S →ℰ (λ,ι) ↦( ι^*ρ,τ_N + dλ∧ρ_N ). The local moduli space of perturbed special Lagrangians is defined to be ℳ^I× S = ℳ^I × S( A,P;(ρ,τ)) = LI × S^-1(0). Let D_(λ,ι) denote the linearization of LI× S. We make the following observations about D_(λ,ι). 
Note that before restricting to the slice I, the operator D_(λ,ι) always had a kernel of dimension at least 1 since the set { (c,0) } for constants c was always contained in the kernel. This set is no longer in the domain of D_(λ,ι) after restricting the section L to the slice I. On the other hand, the kernel of D_(λ,ι) at least sometimes contains other elements. Suppose that (M,ρ,τ) is an actual Calabi-Yau manifold. Then (0,ι) ∈ I × S where ι:P → M is a special Lagrangian submanifold are solutions to the perturbed SL equations. In this case, elements (l,n) in the kernel of D_(λ,ι) satisfy * d ι^*(n⌟ρ) = 0 * d (n ⌟τ)_N = 0. Note that the first equation is true whenever n ⌟ρ is a closed 2-form on ι P. The second equation can be re-written using the formulas in <cit.> as n ⌟τ = * (n^♯∧ω) = (Jn)^♯∧ω. Then, d( (Jn)^♯∧ω) = d(Jn)^♯∧ω + (Jn)^♯∧ d ω. This vanishes whenever (Jn)^♯ is a closed 1-form since in the Calabi-Yau case, ω is already closed. These considerations motivate the following definitions. Let ℛ = { (ρ,τ) ∈ℛ_G_2⊂Ω^3(M) ×Ω^4(M) (ω',ρ') tames (ρ,τ)} be the space of parameters. A pair (ρ,τ) ∈ℛ is called regular (depending on A and P) if D_(λ, ι) is surjective for all (λ, ι) ∈ℳ^I,S(A,P;(ρ,τ)). Let ℛ_reg denote the set of all regular pairs (ρ, τ) ∈ℛ. The space ℛ should be compared to the space 𝒥 of ω-tame almost complex structures in the symplectic setting. As is the case in that situation, transversality theorems still hold for weaker definitions of the space of parameters. Note, for example, that we do not require that ρ and τ form an SU(3)-structure, nor are they required to be closed. These conditions are only needed in order for the Lagrange multipliers problem described in <Ref> to apply but are not needed to define the moduli space or the transversality results. The theorems in this section are consequences of the implicit function and Sard-Smale theorems which only apply in the Banach space setting. We address that next. §.§ Banach space setup Let W^k,p_I × S denote the W^k,p completion of I × S. Similarly, let ℰ^k,p denote the W^k,p completion of the bundle over I × S whose fiber at (λ,ι) is Ω^3(P) ×Ω^3(P,N^*ι). Suppose that (λ,ι) ∈ C^∞(P) ×ℱ and S_(λ,ι) is a local slice for (λ,ι). Then since each element of S_(λ,ι) corresponds to a normal vector field along ι_λ P, the W^k,p completion of S_(λ,ι) consists of W^k,p normal vector fields along ι_λ P. Therefore if I is a local slice for λ∈ C^∞(P)/𝒢, W^k,p_I× S is identified with a subset of W^k,p(P,Nι×ℝ). Let ℳ=ℳ( A,P;(ρ,τ) ) = ℳ̃/∼ be the quotient space obtained by modding out by the action of ℝ×𝒢 on C^∞(P) ×ℱ which is given by (c,ϕ) · (λ,ι) = ( (λ∘ϕ) + c, ι∘ϕ). This definition of ℳ corresponds to our previous definition, because after restricting to the slice I, the projection from 𝒩 to ℳ is a bijection. Endow ℳ with the quotient subspace topology coming form the C^∞ topology on C^∞(P) ×ℱ. The following is a consequence of the implicit function theorem for Banach spaces. The moduli space ℳ( A,P;(ρ,τ) ) is a set of isolated points whenever (ρ,τ) ∈ℛ_reg. Fix a local slice I × S for a particular element (λ_0,ι_0) ∈ℳ. We first prove the result for the local moduli space ℳ^I × S. Let L:I × S →ℰ denote the map defined by <ref>, dropping the restriction to I × S from the notation. Since we want to apply the implicit function theorem, extend L to a map, also called L, to L: W^k,p_I × S→ℰ^k-1,p. where k ≥ 3 is an integer and p > 3. 
If (λ,ι) ∈ L^-1(0), then the Sobolev embedding theorem implies that (λ,ι) ∈ C^2,α so <Ref> implies that (λ, ι) is smooth. In this way, the local moduli space ℳ^I× S is identified with the subset L^-1(0) in W^k,p_I × S. Next, note that any Riemannian metric on M induces a trivialization of ℰ^k-1,p over W^k,p_I × S via parallel transport along geodesics in M. Choose any such metric. For any (λ,ι) ∈ W^k,p_I × S, let the parallel transport maps be denoted by T_0(λ,ι):ℰ^k-1,p_(λ_0,ι_0)→ℰ^k-1,p_(λ,ι) which is an isomorphism. Each fiber of ℰ^k-1,p is a Banach space. Therefore the map L̅ given by L̅:W^k,p_I × S → ℰ^k-1,p_(λ_0,ι_0) L̅(λ,ι) = T_0(λ,ι)^-1( L(λ,ι) ) is a smooth map between Banach spaces. Also, L̅(λ,ι) = 0 L(λ, ι) = 0 (λ,ι) ∈ℳ. By the definition of , 0 is a regular value of L and therefore also of L̅. So the implicit function theorem says that ℳ^I × S is a Banach manifold with T_(λ,ι)ℳ = DL̅(λ,ι). Since the operator D_(λ,ι) is Fredholm with index zero whenever (λ,ι) ∈ℳ, this shows that ℳ^I× S is a set of points isolated in the W^k,p-topology. It remains to show that (λ,ι) is isolated with respect to the C^∞ topology. Suppose that (λ,ι) ∈ℳ^I × S is not isolated in the C^∞ topology. Then every C^∞ neighborhood of (λ,ι) contains a second point (λ',ι') ∈ℳ^I × S. This is a contradiction since every W^k,p neighborhood of (λ,ι) contains a C^∞ neighborhood. Since the above is true for every local slice I × S, the result follows for ℳ. In the next theorem, we will want to work with C^ℓ,α parameters. Define ℛ^ℓ,α = { (ρ,τ) ∈ C^ℓ,α( M,Λ^3T^*M ⊕Λ^4 T^*M ) (ω',ρ') tames (ρ,τ)} where α∈ (0,1) is a number. Define the local universal moduli space ℳ^I× S( A,P;ℛ^ℓ,α) to be the set of pairs ( (λ,ι),(ρ,τ) ) such that (ρ,τ) ∈ℛ^ℓ,α and (λ,ι) is a solution to the perturbed SL equations contained in the C^ℓ,α completion of the local slice I × S. Next, we prove that ℳ^I× S( A,P;ℛ^ℓ,α) is a Banach submanifold of W^k,p_I × S×ℛ^ℓ,α. First, a Lemma. Let (ρ,τ) ∈ℛ^ℓ,α and suppose that (λ,ι) ∈ C^ℓ,α solves the perturbed SL equations with respect to (ρ,τ). Also assume that ι^*ρ > 0 as usual. Then for every nonzero (l,n) ∈ C^ℓ,α( P,ℝ× Nι), there exists a pair (α,β) where α is a closed 3-form on M and β is a closed 4-form on M such that ∫_Pι(n⌟β) + ∫_P dλ∧ι^*(n⌟α) + ∫_P lι^*α > 0. Case 1: (n ≠ 0 but l = 0) In this case, we may choose α = 0 and show that the first term is nonzero, which follows from exactly the same arguments used to prove proposition A.2 of <cit.>. That is, choose p ∈ P. Let U be a neighborhood of p and let V be a tubular neighborhood of U. Since n ≠ 0, it must be nonzero somewhere in U. Therefore, we can let f be a function supported in V such that df(n) ≥ 0 and df(n) > 0 somewhere. Let η be a 3-form on M with ι^*ηU = vol_PU and n ⌟dηV = 0. Then, set β = d(fη) = df ∧η + f dη. We have: ∫_P ι^*(n ⌟ (df ∧η + f dη )) = ∫_P df(n) vol_P > 0 as desired. Case 2: (l ≠ 0 but n = 0). In this case, only the last term matters. But this case is easy since we can just choose U and V small enough so that l never vanishes. Then let f be a function on M such that ι^*(f)U = 1/lU. Choose the 2-form ν on M such that df ∧ν is vol_P when pulled back to U. Then let α = d(f ν). Case 3: (l ≠ 0 and n ≠ 0). This is the same as in Case 1 since we can choose α = 0 and guarantee that the first term is positive. Let I × S be a local slice. Let ℓ≥ 3 and 3≤ k ≤ℓ be integers. Let α∈ (0,1) be a number and p ≥3/1-α. Then ℳ^I× S( A,P;ℛ^ℓ,α) is a separable Banach submanifold of W^k,p_I × S×ℛ^ℓ,α. 
Note that when 3 ≤ k ≤ℓ, ℳ^I× S( A,P;ℛ^ℓ,α) can be regarded as a subset of W^k,p_I × S×ℛ^ℓ,α. For k ≥ 1, consider the (trivial) Banach space bundle ℰ^k-1,p→ W^k,p_I × S×ℛ^ℓ,α whose fiber over ((λ,ι),(ρ,τ) ) is ℰ^k-1,p_( (λ,ι),(ρ,τ) ) = W^k-1,p( P,Λ^3(T^*P)⊕( Λ^3(T^*P)⊗ N^*ι) ) Let L be a section given by L:W^k,p_I× S×ℛ^ℓ,α → ℰ^k-1,p ( (λ,ι),(ρ,τ) ) ↦ ( ι^*ρ,τ_N + dλ∧ρ_N ). When k ≥ 3, <Ref> applies, so ℳ^I× S( A,P;ℛ^ℓ,α) = L^-1(0). In order to show that the universal moduli space is a submanifold, we need to show that the differential DL( (λ,ι),(ρ,τ) ) :W^k,p( P,ℝ⊕ Nι) × C^ℓ,α( M,Λ^3(T^*M) ⊕Λ^4(T^*M) ) → W^k-1,p( P,Λ^3(T^*P)⊕( Λ^3(T^*P)⊗ N^*ι) ) is surjective at every point ( (λ,ι),(ρ,τ) ) ∈ L^-1(0). The differential of L can be written in two components which are denoted by DL( (λ,ι),(ρ,τ) ) = D_(λ,ι) + D_(ρ,τ). The same arguments from <ref> show that D_(λ,ι) is a Fredholm operator. Therefore DL( (λ,ι),(ρ,τ) ) has a closed image when ( (λ,ι),(ρ,τ) ) ∈ L^-1(0), so it suffices to show that its image is dense. We first consider the case k =1. Although in this case, L^-1(0) cannot be identified with the universal moduli space since the bootstrapping arguments of <ref> don't apply, D_(λ,ι) is still Fredholm so we can still prove that the operator DL is surjective on L^-1(0). Then, it will follow from elliptic regularity that DL is surjective for larger values of k. Suppose that the image of DL at ( (λ,ι),(ρ,τ) ) ∈ L^-1(0) is not dense. Then there exists a nonzero η = (η_1,η_2) ∈ L^q( P,Λ^3(T^*P) ⊕( Λ^3(T^*P) ⊗ N^* ι) ) where 1/p + 1/q = 1, such that * ∫_P⟨η,D_(λ,ι)(l,n)⟩ι^*ρ' =0 (l,n) ∈ W^1,p( P,ℝ⊕ Nι) * and ∫_P⟨η, D_(ρ,τ)(α,β)⟩ι^* ρ' = 0 (α,β) ∈ C^ℓ,α( M,Λ^3(T^*M) ⊕Λ^3(T^*M) ) Now, (i) implies that η∈ W^1,p since D_(λ,ι) is Fredholm and self-adjoint. Furthermore, D^* η = 0. In particular, the Sobolev embedding theorem implies that η is continuous. On the other hand, for any (α,β) ∈ T_(ρ,τ)ℛ^ℓ,α, D_(ρ,τ)(α,β) = ( ι^*α,β_N + dλ∧α_N ). Therefore, letting l the metric dual of η_1 on P and letting n be the normal vector field corresponding to η_2, we have ∫_P⟨ (η_1,η_2),D_(ρ,τ)(α,β)⟩ι^*ρ' = ∫_Pι^*(n⌟β) + ∫_P dλ∧ι^*(n ⌟α) + ∫_Plι^*α = 0. by (ii). But <Ref> shows that this is a contradiction. The fact that DL( (λ,ι),(ρ,τ) ) is surjective whenever ( (λ,ι),(ρ,τ) ) ∈ L^-1(0) for k ≥ 2 follows from elliptic regularity. As mentioned before, as long as 3 ≤ k ≤ℓ, the universal moduli space is equal to L^-1(0). Therefore the result follows from the implicit function theorem. Since W^k,p_I× S×ℛ^ℓ,α is separable, so is ℳ^I × S( A,P;ℛ^ℓ,α). Lastly, we prove that the set of parameters for which the moduli space ℳ( A,P;(ρ,τ) ) is a zero dimensional manifold is “generic”. More precisely, The set ℛ_reg is a residual subset of ℛ. Let ℛ^ℓ,α_reg be the subset of ℛ^ℓ,α for which the associated operator D_(λ,ι) from <Ref> is surjective whenever (λ,ι) solves the perturbed SL equations. First, we show that ℛ^ℓ,α_reg is residual is ℛ^ℓ,α. For that purpose, consider the projection π:ℳ^I× S( A,P;ℛ^ℓ,α) → ℛ^ℓ,α ( (λ,ι),(ρ,τ) ) ↦ (ρ,τ). By <Ref>, this is a map between separable Banach manifolds. Furthermore, dπ( (λ,ι),(ρ,τ) ):T_( (λ,ι),(ρ,τ) )ℳ^I× S( A,P;ℛ^ℓ,α) → T_(ρ,τ)ℛ^ℓ,α ( (l,n),(α,β) ). ↦ (α,β) Note that if ( (l,n),(α,β) ) is in the tangent space of the local universal moduli space, then D_(λ,ι)(l,n) + D_(ρ,τ)(α,β) = 0. Therefore both the kernel and cokernel of dπ( (λ,ι),(ρ,τ) ) are the same as that of D_(λ,ι). So dπ( (λ,ι),(ρ,τ) ) is surjective if and only if D_(λ,ι) is surjective. 
Therefore, ℛ^ℓ,α_reg is precisely the set of regular values of π. The above holds for any local slice I × S. Thus, for large enough ℓ, the Sard-Smale theorem implies that ℛ^ℓ,α_reg is residual. In order to extend the argument to smooth parameters, we must use the so-called Taubes trick, similar to what is done in order to prove that the set of regular ω-tame almost complex structures is residual in the symplectic case. Let K > 0 be a constant. Consider the set ℛ_reg,K⊂ℛ of all smooth, stable, (ρ',ω')-tame, G_2 pairs (ρ,τ) such that D_(λ,ι) is surjective for every graphical associative (λ,ι) which satisfies the following two conditions. * For any two points q,q' ∈ι P let d_M(q,q') denote the distance between them with respect to the metric induced by (ρ',ω') on M. Similarly, let d_ι P(q, q') denote the distance between them with respect to the induced metric on ι P. Then the first condition for (λ,ι) to be contained in ℛ_reg,K is d_M(ι(x),ι(y)) ≥1/K d_ι P(ι(x),ι(y)) * Next, let p > 0 be a number. Let II(ι_λ) denote the second fundamental form of the embedding ι_λ:P →ℝ× M as in <Ref>. The second condition is that *II(ι_λ)_L^p≤ K. Note that the conditions (i) and (ii) are both 𝒢-invariant. Also note that since (ρ,τ) is (ρ',ω')-tame, the volume of every graphical associative is bounded by the same constant. Since we are no longer directly relying on Banach spaces, we no longer need to restrict ourselves to a slice of the action of 𝒢. Also, every graphical associative (λ,ι) satisfies (i) and (ii) for some constant K > 0. Therefore ℛ_reg = ⋂_K>0ℛ_reg,K. Therefore we must prove that each ℛ_reg,K is open and dense in the C^∞-topology. First we prove that each is open by proving that its complement is closed. Assume that a sequence (ρ_a,τ_a) converging to (ρ,τ) in the C^∞ topology is contained in the complement of ℛ_reg,K. Then for each a there exists a (ρ_a,τ_a) graphical associative (λ_a,ι_a) satisfying (i) and (ii) such that D_(λ_a,ι_a) is not surjective. Each graphical associative (λ,ι) has an associated associative embedding ( ι_a )_λ_a and the conditions (i) and (ii) ensure that <Ref> applies. Thus, there exists a subsequence ( λ_b,ι_b ) of ( λ_a,ι_a ) and a sequence of diffeomorphisms ϕ_b such that ( ι_b )_λ_b∘ϕ_b →ι_λ in the C^∞ topology. The condition (i) guarantees that not only is ι an embedding, but ι_λ is also graphical since we used the distance in M instead of the distance in ℝ× M. The limit also satisfies both (i) and (ii). Furthermore D_(λ,ι) is also not surjective. Therefore (ρ,τ) is not contained in ℛ_reg,K. Therefore ℛ_reg,K is closed for any K > 0. Finally, we prove that ℛ_reg,K is dense in ℛ with respect to the C^∞ topology. At this point, the argument is almost identical to the same argument in the symplectic case. See for example section 3.2 in <cit.>. We include it here for completion. First, let ℛ^ℓ,α_reg,K be the obvious C^ℓ,α-version of ℛ_reg,K. Note that ℛ_reg,K = ℛ^ℓ,α∩ℛ. Then <Ref> still applies, so ℛ^ℓ,α_reg,K is open in ℛ^ℓ,α with respect to the C^ℓ,α topology. Let (ρ_0,τ_0) ∈ℛ. Since ℛ^ℓ,α_reg is dense in ℛ^ℓ,α by the Sard-Smale theorem, then there exists a sequence ( ρ_ℓ,τ_ℓ) ∈ℛ^ℓ,α_reg (here the index ℓ depends on (ℓ,α)) such that for all ℓ≥ℓ_0 for large enough ℓ_0, *(ρ_0,τ_0) - (ρ_ℓ,τ_ℓ)_C^ℓ,α≤ 2^-ℓ. Since (ρ_ℓ,τ_ℓ) ∈ℛ^ℓ,α_reg,K and since ℛ^ℓ,α_reg,K is open in the C^ℓ,α-topology, there exists ϵ_ℓ > 0 depending on (ℓ,α) such that for every (ρ,τ) ∈ℛ^ℓ,α, *(ρ,τ) - (ρ_ℓ,τ_ℓ)_C^ℓ,α < ϵ_ℓ ⇒ (ρ,τ) ∈ℛ^ℓ,α_reg,K. 
Choose (ρ_ℓ',τ_ℓ') ∈ℛ to be any smooth element such that ‖(ρ_ℓ',τ_ℓ') - (ρ_ℓ,τ_ℓ)‖_C^ℓ,α < min{ϵ_ℓ,2^-ℓ}. Then (ρ_ℓ',τ_ℓ') ∈ℛ^ℓ,α_reg,K∩ℛ = ℛ_reg,K. Hence the sequence (ρ_ℓ',τ_ℓ') converges to (ρ_0,τ_0) in the C^∞ topology. We have therefore shown that ℛ_reg is the intersection of a countable number of open, dense sets, so it is residual, as desired.
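For reference, the genericity argument of this section can be condensed to one line (nothing new is claimed here):

ℛ_reg = ⋂_K>0 ℛ_reg,K, with each ℛ_reg,K open in ℛ (its complement is closed by the compactness theorem) and dense (by the Sard–Smale theorem combined with the C^ℓ,α-approximation argument above); hence, as a countable intersection of open dense sets, ℛ_reg is residual.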
http://arxiv.org/abs/2406.18817v1
20240627011644
Correspondence-Free Non-Rigid Point Set Registration Using Unsupervised Clustering Analysis
[ "Mingyang Zhao", "Jingen Jiang", "Lei Ma", "Shiqing Xin", "Gaofeng Meng", "Dong-Ming Yan" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Correspondence-Free Non-Rigid Point Set Registration Using Unsupervised Clustering Analysis. Mingyang Zhao^1, Jingen Jiang^2, Lei Ma^3*, Shiqing Xin^2, Gaofeng Meng^4,5, Dong-Ming Yan^4,5* (^1CAIR, HKISI, CAS; ^2Shandong University; ^3Peking University; ^4MAIS, CASIA; ^5UCAS; *Corresponding authors). July 1, 2024. [Figure (teaser): Non-rigid registration on 3D point sets. The blue and gray models represent the source and target point clouds, respectively, while the yellow models are our registration results. Our method achieves successful registrations even for shapes with challenging deformations.] § ABSTRACT This paper presents a novel non-rigid point set registration method that is inspired by unsupervised clustering analysis. Unlike previous approaches that treat the source and target point sets as separate entities, we develop a holistic framework in which they are formulated as clustering centroids and clustering members, respectively. We then adopt Tikhonov regularization with an ℓ_1-induced Laplacian kernel instead of the commonly used Gaussian kernel to ensure smooth and more robust displacement fields. Our formulation delivers closed-form solutions, theoretical guarantees, independence from dimensions, and the ability to handle large deformations. Subsequently, we introduce a clustering-improved Nyström method to effectively reduce the computational complexity and storage of the Gram matrix to linear, while providing a rigorous bound for the low-rank approximation. Our method achieves high-accuracy results across various scenarios and surpasses competitors by a significant margin, particularly on shapes with substantial deformations. Additionally, we demonstrate the versatility of our method in challenging tasks such as shape transfer and medical registration. Code release: https://github.com/zikai1/CVPR24_PointSetReg § INTRODUCTION Non-rigid point set registration aims to optimize a non-linear displacement field that accurately aligns one geometric shape with another. Due to its fundamental importance, non-rigid registration plays a dominant role in a wide range of applications, such as scene reconstruction <cit.>, pose tracking <cit.>, animation <cit.>, deformable shape manipulation and editing <cit.>, and so on. However, given two point sets, one acting as the source and the other as the target, non-rigid registration presents a highly ill-posed and much more complex challenge compared to the rigid counterpart. This increased complexity is primarily attributed to the additional freedom of deformations allowed in non-rigid registration, especially when dealing with shapes that exhibit large deformations (<ref>). To enhance the registration quality for shapes undergoing large deformations, numerous pioneering methods have been actively researched.
Rather than directly optimizing the registration process, these methods usually employ a two-step approach <cit.>. First, they perform shape matching by identifying corresponding points between the source and target shapes without considering geometry deformations. Then, they estimate the alignment transformation based on the established correspondences via off-the-shelf registration techniques. While there has been significant attention and research dedicated to the initial shape matching stage, the exploration of direct registration methods for handling large deformations, without relying on shape matching, is comparatively limited and poses substantial challenges <cit.>. In this work, we address the problem of non-rigid point set registration without correspondences, with a specific emphasis on point sets exhibiting large deformations. To overcome this challenge, we present a fresh perspective and introduce a novel method. Our approach reformulates the non-rigid deformation process as an unsupervised clustering problem within the context of machine learning. Unlike previous approaches that treat the two point sets as separate entities, we consider them as integral parts of a whole. Concretely, we designate the source point set as the clustering centroids, while the target one as the clustering samples. This holistic treatment enables us to leverage the interplay between these two sets. Then the dynamic optimization and update of the clustering centroids correspond to the underlying deformation of the source shape. We highlight the advantages of our novel registration function, which is built on clustering analysis, from both information theory and convex optimization perspectives. Furthermore, we provide closed-form solutions to our objective function during each iteration, which enables fast and efficient implementations. We introduce a sparsity-induced Laplacian kernel (ℓ_1-norm) in the Tikhonov regularization to ensure that the displacement field of clustering centroids remains as smooth as possible. This differs from the commonly used Gaussian kernel and exhibits higher robustness, as demonstrated by experimental results. Additionally, we leverage clustering analysis to adopt the improved Nyström low-rank approximation <cit.>, which reduces the computational complexity and storage requirements of the Gram matrix to linear. Meanwhile, we give a rigorous proof of the approximation error bound associated with the Laplacian kernel. Our method is independent of spatial dimensions, allowing us to evaluate and compare its performance in both 2D and 3D settings. The experimental results demonstrate the superiority of our method compared to baselines by a large margin. This is particularly evident in scenarios involving large deformations, such as shape transfer and medical data registration. Our contributions can be summarized as follows: * We propose a novel and correspondence-free method for non-rigid point set registration, utilizing unsupervised clustering analysis. The method achieves impressive results across various settings and mitigates the challenge without explicit correspondences. * We incorporate the Laplacian kernel function for robust displacement regularization and provide a rigorous theoretical analysis to prove the approximation error bound of the Nyström low-rank method. * Our method is dimension-independent, offering closed-form solutions during optimization, and significantly improves performance in handling large deformations. 
§ RELATED WORK We review the work that is closely aligned with ours. Readers are directed to  <cit.> for comprehensive studies. Non-rigid registration. Differing from shape matching that focuses on finding inlier correspondences, non-rigid registration aims to optimize the displacement field. Various pioneering algorithms employ an optimization paradigm that minimizes both the data and penalty terms simultaneously. Amberg  <cit.> extended the rigid iterative closest point algorithm <cit.> to non-rigid settings, while Yao  <cit.> recently improved non-rigid ICP regarding both accuracy and efficiency through deformation graph optimization. Coherent Point Drift (CPD) <cit.> and GMM <cit.> developed probabilistic frameworks by minimizing the negative logarithm likelihood function to enhance the robustness for non-rigid point set registration. Ma  <cit.> further incorporated the shape context descriptor <cit.> to establish shape correspondences for better 2D registration. Hirose <cit.> recently formulated CPD in a Bayesian setting, which effectively overcomes CPD's limitations and delivers impressive results. With the advancement of deep learning, neural network-based methods have also been proposed for non-rigid point set registration <cit.>. Most of them utilize neural networks to extract features for point correspondences and then apply classical methods such as non-rigid ICP for registration. Instead of focusing on shape matching and heavily rely on data annotations, our method is unsupervised and reasons from a case-by-case geometric perspective. This allows us to achieve faithful registrations that are more generalizable to unknown categories. Deformation representation. The representation of the deformation field is a key component in non-rigid registration. Several existing works are based on thin plate spline functions <cit.>, which can be viewed as a regularization of the second-order derivatives of the transformations <cit.>. Another line of researches utilize kernel functions or a reproducing kernel Hilbert space to describe the deformation field <cit.>. However, many of these methods are limited to the Gaussian kernel due to the reliance on fast Gauss transform  <cit.>. Recently, the Multi-Layer Perception (MLP) network has been employed to represent the deformation field by mapping input coordinates to signal values <cit.> and the deformation degree is controlled by frequencies. These methods have shown promising results in dynamical reconstruction and scene flow estimation, which are typically considered less challenging tasks compared to dealing with large deformations. § PRELIMINARIES ON CLUSTERING ANALYSIS As one of the representative unsupervised learning frameworks, clustering analysis plays a fundamental role in various scientific research domains <cit.>. The pioneering work <cit.> explored clustering metrics for rigid point cloud registration. In contrast, we distinguish ourselves by addressing a more challenging non-rigid problem, which we have completely reformulated as a clustering process with a different objective function. We present a concise overview on two commonly used clustering approaches: fuzzy clustering and Elkan k-means clustering analysis. 
§.§ Fuzzy Clustering Analysis Given a dataset 𝐗={x_i∈ℝ^n}_i=1^M, fuzzy clustering analysis solves the following problem: min_𝐔,𝐕∑_j=1^C∑_i=1^M(u_ij)^r||x_i-v_j||_2^2, s.t.∑_j=1^Cu_ij=1,u_ij∈[0, 1], where 𝐔=[u_ij]_M× C∈ℝ^M× C is the fuzzy membership degree matrix, 𝐕={v_j∈ℝ^n}_j=1^C is the set of clustering centroids consisting of C∈ℤ_+ classes, and r∈[1,+∞) is the fuzzy factor, which controls the clustering fuzziness. To enhance the clustering performance on unbalanced datasets, Miyamoto <cit.> proposed the inclusion of cluster size controlling variables α=[α_1, ⋯, α_C]∈ℝ^C in  <ref>, and thus classes with more samples may lead to higher fuzzy membership degree. Since Euclidean distance-based clustering algorithms are primarily suitable for spherical data, Mahalanobis distance is latter introduced to generalize the fuzzy clustering analysis to accommodate ellipsoidal structures <cit.>. Recently, <cit.> combined the merits of previous fuzzy clustering approaches and developed a novel clustering framework based on the ℓ_2,p norm, which achieves appealing results on a set of clustering analysis tasks: min_𝐔,𝐕,Σ,α∑_j=1^C∑_i=1^Mu_ij||Σ_j^-1/2(x_i-v_j)||_2^p +u_ijlog|Σ_j|+λu_ijlogu_ij/α_j, s.t.  |Σ_j|=θ_j, ∑_j=1^Cu_ij=1, ∑_j=1^Cα_j=1, u_ij, α_j∈[0, 1]. where λ∈ℝ^+ is a regularization parameter, and Σ_j∈𝕊^n_++≜{𝐀∈ℝ^n× n|x^T𝐀x>0, ∀x∈ℝ^n} denotes the covariance matrix of the j-th class, with the corresponding determinant equivalent to |Σ_j|∈ℝ. We explore the application of this clustering analysis framework to non-rigid point set registration and demonstrate its superior performance over previous registration approaches. §.§ Elkan k-Means Clustering In contrast to fuzzy clustering analysis, the k-means algorithm <cit.> has emerged as one of the most widely used clustering methods due to its simplicity. Elkan k-means clustering further introduced the triangle inequality into the k-means framework to avoid unnecessary distance calculations, which dramatically speeds up the primary k-means clustering process. More details of Elkan k-means clustering can be found in <cit.>. § PROPOSED METHOD Problem formulation. Given two point sets 𝐗={x_i∈ℝ^n}_i=1^M and 𝐘={y_j∈ℝ^n}_j=1^N, where 𝐗 and 𝐘 are named as the target and the source, separately, the objective of non-rigid point set registration is to find the optimal deformation map 𝒯 that minimizes the shape deviation between 𝒯(𝐘)≜𝐘+ν(𝐘) and 𝐗, where ν represents the displacement filed acting on each source point y_j. §.§ Clustering-Induced Non-Rigid Registration Observations. We notice that during the clustering process, the spatial position of clustering centroids 𝐕 are dynamically updated until the distance between the centroids and their members is minimized. This dynamic process bears resemblance to the iterative update of each source point 𝒯(y_j). Inspired by this, we propose to formulate non-rigid registration as an unsupervised clustering process. We consider 𝐘 as the clustering centroids and 𝐗 as the clustering members. We customize <ref> to optimize the overall clustering loss by min F(𝐔,α,Σ,ν)=∑_j=1^C∑_i=1^Mu_ij||Σ_j^-1/2(x_i-(y_j+ν(y_j)))||_2^2 +u_ijlog|Σ_j|+λu_ijlogu_ij/α_j, s.t. |Σ_j|=θ_j, ∑_j=1^Cu_ij=1, ∑_j=1^Cα_j=1, u_ij, α_j∈[0,1]. Here we set p=2 to ease the computation, which also ensures closed-form solutions as derived in the following. Regularization. As in <cit.>, we incorporate Tikhonov regularization <cit.> to promote smoothness in the displacement field of clustering centroids. 
Thus, our objective function is optimized to find the optimal locations of clustering centroids as follows: min F(𝐔,α, Σ,ν)+ζℛ(ν), where ζ is a trade-off parameter. ℛ(·) is an operator that penalizes the high-frequency component of ν if we consider it in the Fourier domain, , ℛ(ν)=∫_𝐑^nd𝐬||ν̃(𝐬)||_2^2/K̃(𝐬). K(𝐬) is a kernel function regarding the frequency variable 𝐬, and f̃ indicates the Fourier transform of the function f. §.§ Virtues of the Newly-Defined Function We provide a theoretical analysis of <ref> from both information theory and optimization perspectives. This analysis allows us to highlight the virtues of our newly introduced loss function for non-rigid point set registration. Information theory view. We re-write F(𝐔, α, Σ,ν) as ∑_j=1,i=1^C,Mu_ij||Σ_j^-1/2(x_i-(y_j+ν(y_j)))||_2^2 +u_ijlog(|Σ_j|/α_j^λ)-λ H(𝐔), where H(𝐔)=-∑_j=1^C∑_i=1^Mu_ijlog(u_ij) is the entropy of 𝐔. From the perspective of information theory, this entropy regularization term serves to push F(𝐔,α,Σ,ν) towards 𝐔 with a uniform distribution that makes H(𝐔) the maximal and thus drags F(𝐔,α,Σ,ν) away from the sparse 𝐔. This not only enhances the smoothness of the feasible set, but also improves the computational stability during optimization, , avoiding lim_u_ij→0log(u_ij)=-∞ <cit.>. Optimization view. Alternatively, from an optimization point of view, u_ijlog(u_ij) is a convex function in terms of u_ij with λ controlling the degree of convexity. Moreover, u_ijlog(u_ij) acts as a barrier function that restricts u_ij to the range of [0, 1] and prevents it from taking values outside this range <cit.>. §.§ Closed-Form Solutions Our method enables closed-form solutions for each variable during the optimization step as derived in the following. Update of 𝐔. We fix α, Σ, ν and update 𝐔, which becomes a convex optimization problem. Utilizing the Lagrangian multiplier and ignoring parameters that are irrelevant to 𝐔, we obtain ℒ(𝐔,β)=∑_j=1^C∑_i=1^Mu_ij||Σ_j^-1/2(x_i-(y_j+ν(y_j)))||_2^2 +u_ijlog|Σ_j|+λu_ijlogu_ij/α_j+∑_i=1^Mβ_i(∑_j=1^Cu_ij-1), where β={β_i∈ℝ}_i=1^M are the set of Lagrangian multipliers. By equating ∂ℒ/∂𝐔=0, we have 𝐔=(diag(𝐀1_C))^-1𝐀 Here 𝐀=exp(-𝐃/λ)diag(α⊙|Σ|), 𝐃=[d_ij]_M× C∈ℝ^M× C is a squared Euclidean distance matrix with d_ij=Σ_j^-1/2(x_i-(y_j+ν(y_j)))_2^2, exp(·) is the element-wise exponential operator of matrices, diag(𝐳) is an operator that creates a square diagonal matrix with the vector 𝐳 on its main diagonal, and |Σ|=[|Σ_1|, ⋯, |Σ_C|]^T∈ℝ^C. 1_C is the C-dimensional vector of all ones, and ⊙ represents the element-wise Hadamard product of two matrices or vectors. Update of α. Likewise, the closed-form solution with respect to α is α=1/M𝐔^T1_M, which formally quantifies the clustering size for each class. Update of Σ. For simplicity, we relax each clustering centroid's covariance matrix to be isotropic, , Σ_j=σ^2𝐈, where 𝐈∈ℝ^n× n is the identity matrix. This ensures a closed-form solution to variance σ^2: σ^2=tr(𝐗^Tdiag(𝐔^T1_M)𝐗-(2(𝐔𝐗)^T+𝐓^Tdiag(𝐔1_C))𝐓)/n× M, where tr(·) is the matrix trace operator. Update of ν. By leveraging the Riesz’s representation theorem <cit.>, the closed-form solution to the regularization term ν can be expressed as ν(y)=∑_j=1^Cc_jK(y,y_j)+∑_η=1^Nd_ηψ_η(ν), where {c_j∈ℝ}_j=1^C are the coefficient scalars, K(·,·) is the kernel function defined in  <ref>, and {ψ_η}_η=1^N represent a set of basis in the N-dimensional null space of ℛ(ν), which is typically composed by a set of polynomials for most choices of the stabilizer ℛ(ν). 
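Before turning to the kernel choice for ν, the closed-form updates for 𝐔, α, and σ² derived above can be written as a few lines of NumPy. This is only an illustrative sketch, not the authors' released MATLAB implementation: all variable names are ours, the covariances are taken isotropic and shared (Σ_j = σ²𝐈) as in the relaxation above, so the common |Σ| factor cancels under row normalization, and the σ² update is written in its equivalent weighted-residual form.

import numpy as np

def update_U(X, T, sigma2, alpha, lam=0.5):
    # Closed-form fuzzy membership update with shared isotropic covariance Sigma_j = sigma2 * I.
    # Since |Sigma_j| is then identical for every centroid, the diag(alpha * |Sigma|) factor
    # reduces to alpha under the row normalization.
    D = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1) / sigma2   # d_ij = ||x_i - t_j||^2 / sigma^2
    A = np.exp(-D / lam) * alpha[None, :]
    return A / A.sum(axis=1, keepdims=True)                        # each row sums to one

def update_alpha(U):
    # alpha = (1/M) U^T 1_M : soft cluster-size (mixing) weights.
    return U.mean(axis=0)

def update_sigma2(X, T, U):
    # Weighted-residual form of the closed-form variance update:
    # sigma^2 = sum_ij u_ij ||x_i - t_j||^2 / (n * M), using sum_ij u_ij = M.
    M, n = X.shape
    sq = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    return float((U * sq).sum() / (n * M))

In each outer iteration these three updates are interleaved with the kernel-based solve for the displacement field ν described next.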
In contrast to previous approaches that commonly utilize a Gaussian Radial Basis Function (RBF) <cit.>, we adopt the sparsity-induced Laplacian kernel with the robust ℓ_1-norm to characterize the displacement field ν, , K(y_i, y_j)=exp(-γy_i-y_j_1), γ>0 in which y_i-y_j_1 is the Manhattan distance between the two input vectors. Compared to the RBF kernel, the Laplacian kernel exhibits stronger robustness due to its considerably thicker tails, as illustrated in <ref>. We also validate this conclusion through subsequent experiments. Since the Laplacian kernel is positive definite, we obtain ψ_η≡0 <cit.>. By evaluating ν(y) at 𝐘={y_j∈ℝ^n}_j=1^C, following <cit.>, the coefficient vector 𝐜=[c_1, c_2, ⋯, c_C]^T∈ℝ^C is recovered from the following linear system: 𝐜=(𝐋+ζσ^2diag(𝐔1_C)^-1)^-1(diag(𝐔1_C)^-1𝐔𝐗-𝐘), where 𝐋 is the Gram matrix with l_ij=K(y_i, y_j). Therefore, the newly deformed shape 𝐓 from the source point set 𝐘 becomes 𝐓=𝒯(𝐘)=𝐘+𝐋𝐜. §.§ Improved Nyström Low-Rank Approximation The matrix inverse operation in <ref> leads to a computational complexity of O(C^3) and a memory requirement of O(C^2). Previous approaches often employ the fast Gauss transform (FGT) <cit.> to reduce memory usage and accelerate computation. However, FGT is merely limited to the Gaussian kernel. To circumvent this issue, BCPD <cit.> combined the Nyström method <cit.> and the KD tree search <cit.> for acceleration. However, there are still two major issues that remain unresolved. (1) Due to the random sampling scheme used in BCPD, it is unclear how effective the Nyström approximation performs. (2) In order to address convergence issues when σ^2 becomes small, BCPD need to switch from Nyström approximation to KD tree search. This transition may affect the optimization trajectory. To overcome these challenges, we opt to use clustering analysis instead of random sampling. Concretely, we first employ the fast Elkan k-means algorithm (<ref>) to partition 𝐘 into C' disjoint clusters 𝐏_i⊂𝐘, with the corresponding clustering centroids as {z_i∈ℝ^n}_i=1^C' (C'≪ C). Then, we adopt the improved Nyström approximation <cit.> for efficient and consistent optimization: 𝐋≈𝐄𝐖^-1𝐄^T, where 𝐄=[e_ij]∈ℝ^C× C' and 𝐖=[w_ij]∈ℝ^C'× C' are the low-rank Laplacian kernel matrices, with elements e_ij=K(y_i, z_j) and w_ij=K(z_i, z_j). By incorporating clustering analysis, we achieve two key benefits: (1) rigorously proving the error bound of the Nyström approximation for our utilized Laplacian kernel, and (2) as demonstrated through experiments, providing encouraging results for non-rigid point set registration without compromising the optimization trajectory. The low-rank approximation error ϵ=𝐋-𝐄𝐖^-1𝐄^T_F in terms of the Laplacian kernel is bounded by ϵ≤4√(2)T^3/2γ√(C'q)+2C'γ^2 TqW^-1_F, where ·_F is the matrix Frobenious norm, T=max_i|𝐏_i|, q=∑_j=1^Cy_j-z_c'(j)_2^2 is the clustering quantization error with c'(j)= argmin_i=1, ⋯, C'y_j-z_i_2, and γ is the Laplacian kernel bandwidth defined in Eq. (<ref>). Please see the Supplementary Material. § EXPERIMENTAL RESULTS We perform extensive experiments to demonstrate the performance of the proposed method and compare it with state-of-the-art approaches from both 2D and 3D categories. Implementation details. Given a pair of point sets, for better numerical stability, we first perform shape normalization to make them follow the standard normal distribution. However, the registration evaluation is still based on the original inputs through denormalization. 
The Laplacian kernel bandwidth γ is set to 2 by default, and the number of clustering centroids in Elkan k-means equals 0.3C for a better trade-off between registration accuracy and efficiency. During optimization, we fix the two weight coefficients λ=0.5 and ζ=0.1, which deliver impressive performance across various scenes. Our algorithm is implemented in MATLAB, on a computer running an AMD Ryzen 5 3600XT (3.8 GHz). We leverage publicly available implementations of baseline approaches for assessment, with their parameters either fine-tuned by ourselves or fixed by the original authors to achieve their best results. Evaluation criteria. As in <cit.>, we adopt the Root Mean Squared Error (RMSE) to quantitatively assess the registration accuracy. For point sets with known ground-truth correspondences, we compute the squared distance between corresponding points directly. However, for point sets without annotated correspondences, such as distinct types of geometries, we identify the corresponding point pairs through the nearest neighbor search. Accordingly, the RMSE is defined as: RMSE(𝒯(𝐘),𝐗)=√(Tr{(𝒯(𝐘)-𝐗)^T(𝒯(𝐘)-𝐗)}/M), where 𝒯(𝐘) and 𝐗 are the deformed and the target point sets, respectively. §.§ 2D Non-Rigid Point Set Registration For 2D non-rigid point set registration, we utilize the benchmark IMM hand dataset <cit.> for evaluation. This dataset encompasses 40 real images, showing the left hands of four distinct subjects, each contributing 10 images. As illustrated in Fig. <ref>, the hand shape is described through 56 key points extracted from the contour of the hand. We employ the first pose from each group of hands as our target point set, while the remaining poses of the same subject serve as the source point sets. The quantitative comparison results with state-of-the-art 2D registration approaches including MR-RPM <cit.>, BCPD <cit.>, GMM <cit.>, and ZAC <cit.> are reported in <ref>. We report the average RMSE for each subject along with the average registration timing of each method. As observed, our method consistently outperforms the comparative approaches with higher registration accuracy and efficiency across all subjects. Although it does not need to construct initial point correspondences, like the shape context <cit.> used in MR-RPM, our method still delivers RMSE that is orders of magnitude lower than that of most competitors, highlighting its compelling advantages. The qualitative comparison results regarding the inputs in the third row of <ref> are presented in Fig. <ref>. Robustness. We further investigate the robustness of the designed method against external disturbances including noise and occlusion. We add Gaussian noise with zero mean and varying standard deviations σ∈[0.01, 0.06] to all of the source point sets defined in the above section. Additionally, we randomly erase several points, around 3%∼20% of the source, to construct a range of occluded geometries. <ref> summarizes the average RMSE values across all subjects. It can be observed that our method still achieves the highest or comparable registration accuracy in all settings, highlighting its stability and robustness. Qualitative comparison results are presented in the Supplementary Material.
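The evaluation criteria above amount to only a few lines of code; the sketch below uses a hypothetical helper name, and the nearest-neighbor pairing is applied only when ground-truth correspondences are unavailable, as described in the evaluation criteria.

import numpy as np
from scipy.spatial import cKDTree

def rmse(T_def, X, corresponded=True):
    # RMSE between the deformed source T_def (M x n) and the target X.
    # Without known correspondences, pair each deformed point with its nearest target point first.
    if not corresponded:
        _, idx = cKDTree(X).query(T_def)   # nearest-neighbor correspondences
        X = X[idx]
    diff = T_def - X
    return float(np.sqrt((diff ** 2).sum() / len(T_def)))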
§.§ 3D Non-Rigid Point Cloud Registration Since our method is dimension-independent, we further substantiate its efficacy on 3D point clouds and compare it with eight state-of-the-art 3D registration or deformation approaches, including BCPD <cit.>, GBCPD <cit.>, Fast_RNRR <cit.>, AMM_NRR <cit.>, Sinkhorn <cit.>, as well as network-based ones Nerfies <cit.>, NDP <cit.>, and NSFP <cit.>. For efficiency, we downsample the point clouds from datasets FAUST <cit.> and TOSCA <cit.> using voxel grid filtering, with a point size of 3,000∼4,000. More experiments are presented in the Supplementary Material. -0.3cm Registration for real human scans.  <ref> and <ref> report the quantitative and qualitative comparison results on the FAUST human dataset, respectively. The evaluation is conducted using six sets of subjects in six different and challenging poses for each subject. We first perform intra-class registration, , deforming the first human geometry to match the other poses for the same subject. Then, to validate the capability of the designed approach against large deformations, we further conduct an inter-class registration test by aligning the first human pose of the i-th subject to all the poses of the (i+2)-th subject (i=1, 2, 3, 4). The statistical results summarized in <ref> demonstrate that our method achieves the highest registration accuracy across all subjects and outperforms competitors by a significant margin, even several orders of magnitude higher. Notably, while achieving remarkable accuracy, our method also maintains efficiency comparable to most competitors, making it a highly practical and effective solution. The qualitative comparison results in <ref> indicate that our method not only ensures higher-quality deformations but also recovers the geometric details as well as the topology of the target subject more accurately and faithfully. -0.3cm Registration for larger deformations. We further verify whether the proposed method improves the registration performance for point cloud pairs with much larger deformations. We evaluate four classes of animals from the TOSCA dataset <cit.> and report the average RMSE for each class of them. As illustrated in <ref>, the source and target point sets exhibit significant pose differences, making the registration quite challenging. <ref> summarizes the quantitative comparison results. We exclude Fast_RNRR, AMM_NRR, and Sinkhorn from our analysis because they exhibit significant deviations from the target poses, rendering the error metrics unreliable. Our method consistently outperforms all the baselines by a large margin. <ref> demonstrates that our method delivers highly stable and accurate registration results for point clouds with large deformations, even without the point-wise correspondences. §.§ Ablation Study Effect of the improved Nyström method. <ref> reports the registration error, running time, and the matrix approximation error (defined in <ref>), between the improved or clustered Nyström approximation method (Ours) and the random one on two randomly extracted FAUST models. We vary the approximation ratio R∈[0.02, 0.4] with Δ R=0.02. It can be seen that our method obtains significant registration and timing performance boost and decrease in matrix approximation error by a large margin, especially on lower ratios. -0.4cm -0.5cm Laplacian VS. Gaussian. <ref> summarizes both the quantitative and qualitative comparison results between the kernel functions of Gaussian and Laplacian. 
We validate the merits and robustness of the Laplacian kernel by aligning the source Bunny model <cit.>, contaminated by noise with σ∈[0, 0.06], to a randomly deformed Bunny. The kernel bandwidth γ is varied in [1, 3]. We report the RMSE and the average number of iterations at convergence in <ref>. We find that the Laplacian kernel consistently outperforms the Gaussian kernel across all settings, suggesting the merits of the sparsity-induced ℓ_1 norm. Moreover, the Laplacian kernel delivers faster convergence and more accurate registration results (see the Supplementary Material). §.§ Applications Shape transfer. As depicted in <ref>, we apply the proposed method to transfer shapes belonging to different categories that require substantial deformations. We first transfer two geometries with identical topology (sphere and cube) and then proceed to transfer CAD models from ShapeNet <cit.>, which presents a more challenging task. Results indicate the effectiveness of our method in achieving accurate shape deformation while faithfully preserving the geometric details of the source shapes. Notably, our method consistently produces high-quality deformation results even when the shapes possess significantly distinct topology. More results on shape transfer are presented in the Supplementary Material. Medical registration. Deforming a standard medical template to match scans captured from individual patients is a crucial step in medical data analysis. In <ref>, we demonstrate the efficacy of our method by aligning a 3D inhale lung volume to two exhale lungs <cit.> and two brain vessels <cit.>, extracted from real-world CT and MRA images. Despite the presence of complex structures, large deformations, and mutual interference, our method consistently achieves impressive results in accurately deforming the template models to align with the target shapes. § CONCLUSIONS We proposed an algorithm for solving non-rigid point set registration without prescribed correspondences. The key contribution of our method lies in reformulating non-rigid registration as an unsupervised clustering process that simultaneously enables holistic optimization, dimension independence, closed-form solutions, and the handling of large deformations. Moreover, we introduce the ℓ_1-induced Laplacian kernel to achieve a more robust solution than the Gaussian kernel and provide a rigorous approximation bound for the Nyström method. Our method achieves higher-quality results than traditional methods and recent network models, particularly on geometries that exhibit significant deformations. We also showcase its applicability in challenging tasks such as shape transfer and medical registration. Acknowledgements. This work is partially funded by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB0640000), National Science and Technology Major Project (2022ZD0116305), National Natural Science Foundation of China (62172415, 62272277, 62376267), and the innoHK project.
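As a compact summary of the kernel and low-rank machinery from the method section, the following NumPy sketch builds the Laplacian Gram matrix, the clustered Nyström factors 𝐋 ≈ 𝐄𝐖^-1𝐄^T, and the linear solve for the displacement coefficients. This is our own illustrative re-implementation, not the authors' released MATLAB code: the membership matrix U is taken as M×C, so transposes appear where the text's compact notation omits them, generic k-means stands in for Elkan k-means, and a production version would exploit the low-rank structure via the Woodbury identity instead of reconstructing the full Gram matrix.

import numpy as np
from scipy.cluster.vq import kmeans2

def laplacian_gram(A, B, gamma=2.0):
    # K(a, b) = exp(-gamma * ||a - b||_1): the sparsity-induced Laplacian kernel.
    return np.exp(-gamma * np.abs(A[:, None, :] - B[None, :, :]).sum(-1))

def nystrom_laplacian(Y, n_landmarks, gamma=2.0):
    # Clustered ("improved") Nystrom factors: landmarks are k-means centroids of the
    # source Y rather than random samples, giving L ~= E W^{-1} E^T.
    Z, _ = kmeans2(Y, n_landmarks, minit='points')
    E = laplacian_gram(Y, Z, gamma)          # C  x C'
    W = laplacian_gram(Z, Z, gamma)          # C' x C'
    return E, W

def solve_displacement(Y, X, U, sigma2, E, W, zeta=0.1):
    # Solve (L + zeta * sigma2 * diag(w)^{-1}) c = diag(w)^{-1} U^T X - Y with w = U^T 1_M,
    # then deform the source: T = Y + L c.  This toy version reconstructs the C x C Gram
    # matrix from the Nystrom factors for clarity only.
    w = np.maximum(U.sum(axis=0), 1e-12)     # soft number of target points per centroid
    Dinv = np.diag(1.0 / w)
    L = E @ np.linalg.solve(W, E.T)
    c = np.linalg.solve(L + zeta * sigma2 * Dinv, Dinv @ (U.T @ X) - Y)
    return Y + L @ c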
http://arxiv.org/abs/2406.18453v1
20240626160110
Towards Human-Level 3D Relative Pose Estimation: Generalizable, Training-Free, with Single Reference
[ "Yuan Gao", "Yajing Luo", "Junhong Wang", "Kui Jia", "Gui-Song Xia" ]
cs.CV
[ "cs.CV" ]
Towards Human-Level 3D Relative Pose Estimation: Generalizable, Training-Free, with Single Reference. Yuan Gao*, Yajing Luo*, Junhong Wang, Kui Jia, Gui-Song Xia. Y. Gao is with the School of Electronic Information, Wuhan University, Wuhan, China. E-mail: ethan.y.gao@gmail.com Y. Luo and G.-S. Xia are with the School of Computer Science, Wuhan University, Wuhan, China. E-mails: {yajingluo, guisong.xia}@whu.edu.cn J. Wang is with MoreFun Studio, Tencent Games, Tencent, Shenzhen, China. E-mail: junhongwang@tencent.com K. Jia is with the School of Data Science, The Chinese University of Hong Kong, Shenzhen, China. E-mail: kuijia@cuhk.edu.cn Corresponding authors: Gui-Song Xia, Yuan Gao. * indicates equal contribution. July 1, 2024 § ABSTRACT Humans can easily deduce the relative pose of a previously unseen object, without labeling or training, given only a single query-reference image pair. This is arguably achieved by incorporating (i) 3D/2.5D shape perception from a single image, (ii) render-and-compare simulation, and (iii) rich semantic cue awareness to furnish (coarse) reference-query correspondence. Existing methods implement (i) by a 3D CAD model or well-calibrated multiple images and (ii) by training a network on specific objects, which necessitate laborious ground-truth labeling and tedious training, potentially leading to challenges in generalization. Moreover, (iii) was less exploited in the paradigm of (ii), even though the coarse correspondence from (iii) is able to enhance the compare process by filtering out non-overlapped parts under substantial pose differences/occlusions. Motivated by this, we propose a novel 3D generalizable relative pose estimation method by elaborating (i) with a 2.5D shape from an RGB-D reference, (ii) with an off-the-shelf differentiable renderer, and (iii) with semantic cues from a pretrained model like DINOv2. Specifically, our differentiable renderer takes the 2.5D rotatable mesh textured by the RGB and the semantic maps (obtained by DINOv2 from the RGB input), then renders new RGB and semantic maps (with back-surface culling) under a novel rotated view. The refinement loss comes from comparing the rendered RGB and semantic maps with the query ones, back-propagating the gradients through the differentiable renderer to refine the 3D relative pose. As a result, our method can be readily applied to unseen objects, given only a single RGB-D reference, without labeling or training. Extensive experiments on LineMOD, LM-O, and YCB-V show that our training-free method significantly outperforms the state-of-the-art supervised methods, especially under the rigorous ^∘ metrics and the challenging cross-dataset settings. The code is available at <https://github.com/ethanygao/training-free_generalizable_relative_pose>.
3D Relative Pose Estimation, Differentiable Renderer, Zero-Shot Unseen Generalization, Single Reference, Label/Training-Free Refinement. § INTRODUCTION Recent years have witnessed great progress in 3D object pose estimation <cit.>, which estimates the 3D rotation of an object depicted in a query RGB image. As a key to facilitating interaction with real-world objects, 3D object pose estimation attracts increasing attention from various areas including computer vision, virtual/augmented reality, robotics, and human-computer interaction <cit.>. To date, the community shows great interest in generalizable 3D object pose estimation <cit.> owing to its wide applicability, which focuses on the generalization to previously unseen objects, preferably in a zero-shot manner[We discuss the relatively easier instance- or category-level object pose estimation in the Related Work Sect. <ref> and <ref>, respectively.]. Existing generalizable 3D object pose estimation methods can be categorized according to how they exploit the reference information, i.e., using a CAD model, multiple images, or a single image as references, as shown in Fig. <ref>. Specifically, most existing methods leverage a 3D CAD model <cit.> or multiple images <cit.> for template matching or feature extraction, while the requirement of laborious 3D scanning (for the CAD-based methods) or multiple-image pose labeling (for most multi-image methods) severely limits their applicability. On the other hand, recent methods propose to reframe the generalizable object pose estimation task as relative pose estimation between a query and a reference images from an unseen object, which is termed as generalizable relative object pose estimation <cit.>. By treating the reference pose as canonical, estimating the relative pose between the reference-query pair successfully bypasses the laborious 3D scanning (of the CAD reference) or dense views calibration (of the multiple-image reference). However, existing methods rely on a large amount of well-labeled poses between the query-reference pairs to effectively train a neural network, thereby imposing the challenge of acquiring high-quantity training data <cit.>. Moreover, the generalizability of some network-based methods may be hindered by the training datasets. We empirically find that after pretrained on an external large-scale dataset such as Objaverse <cit.>, the current state-of-the-art methods <cit.> need to perform in-dataset finetune[The in-dataset finetuning denotes that the finetune set comes from the same dataset with the testing set, while not including the testing objects.] before testing on the in-dataset unseen objects, which impedes the cross-dataset generalize-ability. In this context, we work towards universally applicable zero-shot 3D generalizable relative pose estimation, where (i) the object is agnostic/unseen from a cross-dataset, (ii) only a single RGB-D image is available for reference without a 3D CAD model or multi-view images, and (iii) the ground-truth (relative) pose label is not available. In other words, we aim to establish a novel 3D generalizable (in terms of both objects and datasets) relative pose estimation method given only one reference and one query image, without labeling or training. This is extremely challenging due to the mixture of incomplete shape information and missing reference-query correspondence, which leads to a severely degraded optimization problem. 
Our method is inspired by the fact that humans can easily infer the relative pose under the aforementioned rigorous setting, even with large pose differences or severe occlusions. We hypothesize that such intelligence is accomplished through (i) perceiving 3D/2.5D shapes from a single image, (ii) conducting render-and-compare simulations via imagination, and (iii) understanding rich semantic cues of the object. For example, given two viewpoints of an unseen animal, humans are able to infer the 3D/2.5D shape of that animal, then identify the correspondences of the animal eyes, noses, ears, etc, and finally rotate and render the 3D/2.5D model until its projection matches the other view. Note that the semantic cues have the potential to deal with the (self-) occluded missing parts, thus enhancing the comparison process, e.g., an animal tail can be simply ignored in the render-and-compare simulations if it only appears in one image and is (self-) occluded in the other. The above analysis motivates us to break down our difficulties and fulfill those three requirements. Concretely, we achieve this by formulating a label/training-free framework through an off-the-shelf differentiable renderer following the render-and-compare paradigm. Our input shape to the differentiable renderer is an RGB- and semantic-textured 2.5D mesh of the reference (avoiding the difficult 3D hallucination of an unseen object). Based on this, we construct a pose refinement framework, where the differentiable renderer takes an initial pose to render projections, then back-propagates the gradients from the projection loss (between the rendered and the query images) to refine the initial pose. Specifically, our method starts with an RGB-D reference and an RGB query, where their semantic maps can be obtained by leveraging an advanced pretrained model DINOv2 <cit.> with the RGB inputs[Note that our method possesses the potential of using only an RGB reference, please see the discussion in Sect. <ref> (Applicability) and Sect. <ref> (Limitations and Future Works) for more details. Moreover, our method works reasonably well even without the DINOv2 semantic maps on the LineMOD dataset, as illustrated in Table <ref>.]. We leverage an easy-to-use differentiable renderer nvdiffrast <cit.>, which takes the RGB- and semantic-textured 2.5D mesh of the reference as input, then renders new RGB and semantic maps (with back-surface culling) under a novel rotated view. The pose refinement loss comes from comparing the rendered RGB and semantic maps with the query ones, which flows the gradients through the differentiable renderer to refine the 3D relative pose. As a result, our method can be readily applied to unseen objects from an arbitrary dataset without labeling or training. In summary, we propose a novel 3D generalizable relative pose estimation method, which takes only an RGB-D reference and an RGB query pair, without requiring the ground-truth pose labels or training. We achieve this by formulating a pose refinement framework via an off-the-shelf differentiable renderer under the render-and-compare paradigm. Our method does not involve training a network, which naturally possesses zero-shot generalize-ability in terms of both unseen objects and datasets. We conducted extensive experiments on LineMOD <cit.>, LM-O <cit.> and YCB-V <cit.> datasets. 
The results from our training-free method exhibit significant improvement over the state-of-the-art supervised methods, e.g., for ^∘ metric on the LineMOD dataset <cit.> and the YCB-V dataset <cit.>, our label- and training-free method outperforms the supervised state-of-the-art results by 29.98% and 14.28%, respectively. §.§ Taxonomy and Applicability of Our Method Taxonomy. The taxonomy of our methods in generalizable pose estimation, in terms of , , as well as the and the of the reference and the query images, is illustrated in Table <ref>. Our method falls under the category of label/training-free with a single RGB query and a single RGB-D reference. Applicability. Among Table <ref>, the proposed method shares the closest setting to the human intelligence on relative pose estimation that is able to generalize to unseen objects from an arbitrary dataset, with only an additional one-time-collection depth map for the reference image. We have testified in supplementary material that our method can still deliver good estimations with an imprecise depth map, which implies the potential to fully distill human intelligence by a generalizable depth estimator. We note that training a generalizable depth estimator is beyond the scope of, and may introduce distractions to, our current focus. In addition, our method also incorporates the segmentation maps of both query and reference objects as input, which can be obtained by pretrained segmentation models such as SAM <cit.>, FastSAM <cit.> and Grounded SAM <cit.>. We chose not to delve into these segmentation techniques extensively either, for the same sake of minimizing potential distractions. § RELATED WORK §.§ Instance-level 6D Pose Estimation Current object pose estimation can be categorized into instance-level, category-level, and generalizable methods based on different problem formulations. For instance-level methods, there are roughly three categories: direct regression-based, correspondence-based, and refinement-based. Direct regression-based methods <cit.> predict the object pose directly through a neural network. Correspondence-based methods <cit.> estimate the 2D-3D/3D-3D correspondence between the 2D images and 3D object models, followed by PnP solvers <cit.> to calculate 6D poses. Additionally, refinement-based methods <cit.> incorporate refinement-based steps to improve the prediction performance. However, instance-level methods are trained on instance-specific data and rely heavily on CAD models to render numerous training data. Consequently, their application is limited to the objects on which they were trained. §.§ Category-level 6D Pose Estimation In category-level methods, the test instances are not seen during training but belong to known categories. Most methods achieve this by either alignment or directly regressing. Alignment-based methods <cit.> first propose a Normalized Object Coordinate Space (NOCS) <cit.> as a canonical representation for all possible object instances within a category. A network is then trained to predict the NOCS maps and align the object point cloud with the NOCS maps using the Umeyama algorithm <cit.> to determine the object pose. This method typically constructs the mean shape of specific categories as shape priors using offline categorical object models, and the networks are trained to learn deformation fields from the shape priors to enhance the prediction of NOCS maps. 
In contrast, directly regressing methods <cit.> avoid the non-differentiable Umeyama algorithm and often focus on geometry-aware feature extraction. For instance, CASS <cit.> contrasts and fuses shape-dependent/pose-dependent features to predict both the object's pose and size directly. Fs-net <cit.> leverages 3D Graph Convolution for latent feature extraction, and designs shape-based and residual-based networks for pose estimation. However, while category-level methods strive to address different instances within the same category, their capacity to predict the poses of objects from entirely new categories remains limited, highlighting the ongoing need to broaden the scope of object pose estimation to encompass unfamiliar objects. §.§ Generalizable 6D Pose Estimation Generalizable algorithms aim to enhance the generalizability of unseen objects without the need for retraining or finetuning. Methods in this category can be classified as CAD-based <cit.> or multi-view reference-based <cit.>. For CAD-based approaches, CAD models are often used as prior knowledge for direct feature matching or template generation. In particular, ZeroPose <cit.> performs point feature extraction for both CAD models and observed point clouds, utilizing a hierarchical geometric feature matching network to establish correspondences. Following ZeroPose, SAM-6D <cit.> proposed a two-stage partial-to-partial point matching model to construct dense 3D-3D correspondence effectively. Instead, Template-Pose <cit.> utilizes a CAD model to generate a collection of templates and selects the most similar one for a given query image. Similarly, OSOP <cit.> renders plenty of templates and estimates the 2D-2D correspondence between the best matching template and the query image to solve the object pose. MegaPose <cit.> proposed a coarse network to classify which rendered image best matches the query image and generate an initial pose. Subsequently, multi-view renderings of the initial pose are produced, and a refiner is trained to predict an updated pose. Multi-view reference-based methods can be further divided into feature matching-based and template matching-based approaches. For the former, multi-view reference-based feature matching methods mainly aim to establish 2D-3D correspondences between the RGB query image and sparse point cloud reconstructed by reference views or 3D-3D correspondences between the RGB-D query and RGB-D reference images. For instance, FS6D <cit.> designed a dense prototype matching framework by extracting and matching dense RGBD prototypes with transformers. After the correspondence is established, Umeyama <cit.> algorithms are utilized for pose estimation. OnePose/Onepose++ <cit.> apply the Structure from Motion (SfM) method to reconstruct a sparse point cloud of the unseen object using all reference viewpoints. They then employ an attention-based network to predict the correspondence between 2D pixels and the reconstructed point clouds to estimate the object pose. For the latter, Multi-view references can be reviewed as templates for retrieval when plenty of views exist, or used to reconstruct the 3D object models for template rendering, similar to the CAD-based methods. As an illustration, Gen6D <cit.> selects the closest reference view for the query image, and then refines the pose through the 3D feature volume constructed from both the reference and query images. Notably, Gen6D requires more than 200 reference images for initial pose selection. 
On the contrary, LatentFusion <cit.> reconstructs a latent 3D representation of an object to obtain an end-to-end differentiable reconstruction and rendering pipeline, and then estimates the pose through gradient updates. Since a 3D object representation can be reconstructed utilizing the multi-view information, FoundationPose <cit.> proposed a unified framework to support both CAD-based and multi-view setups. When no CAD model is available, they leverage multi-view references to build a neural implicit representation, which is then used for render-and-compare. §.§ Generalizable Relative Pose Estimation Recent methods <cit.> highlight the importance of formulating object pose estimation as a relative pose estimation problem. Specifically, <cit.> and <cit.> address situations where only a single-view reference image is available. <cit.> show that some state-of-the-art feature matching approaches <cit.>, <cit.>, <cit.> fail to generate reliable correspondences between the reference-query pair, while energy-based methods <cit.> struggle to capture 3D information. Instead, 3DAHV <cit.> introduces a hypothesis-and-verification framework for generating and evaluating multiple pose hypotheses. Following 3DAHV, DVMNet <cit.> directly lifts the 2D image features to 3D voxel information in a hypothesis-free way, computing the relative pose in an end-to-end fashion by aligning the 3D voxels. § METHOD Following the render-and-compare paradigm, current generalizable pose estimation methods often rely on rotatable 3D CAD models or well-calibrated multi-view images, which makes it challenging to acquire the 3D CAD models or to perform the expensive pose calibration, especially for previously unseen objects. We instead focus on the generalizable relative pose estimation defined in <cit.>, which aims to estimate the relative pose between a reference-query pair, using only a single reference whose arbitrary pose serves as the canonical frame (no calibration required). Our method differs from <cit.> in not requiring labeled relative poses to train an estimation network. §.§ Overview Taking an RGB query and an RGB-D reference as input, our method establishes a refinement optimization under the render-and-compare framework, by leveraging a 2.5D (i.e., RGB-D) shape of the reference, a pair of semantic maps for both the query and the reference acquired by a pretrained DINOv2 model <cit.> along with the corresponding RGB maps, and a differentiable renderer to backpropagate the gradients. Note that the 2.5D shape is exploited due to the inherent difficulty of accurately hallucinating the 3D shape of unseen objects when relying solely on a single RGB-D image. This challenge further complicates the task of relative pose estimation, as the hallucinated 3D shape would have to align precisely with the query to achieve a successful estimation. Formally, by using subscripts r and q to denote the reference and the query, our method starts with an RGB pair I_r and I_q for the reference and the query, as well as a depth map D_r for the reference. We propose to estimate the relative pose between I_r and I_q, assisted by D_r. To this end, we first infer the semantic maps S_r and S_q from I_r and I_q exploiting a pretrained DINOv2 model <cit.>. Then, we construct a 2.5D mesh model M_r for the reference object based on D_r, to formulate a 2.5D mesh textured with the RGB and semantic maps, ℳ_r = {M_r, I_r, S_r}. Subsequently, the textured 2.5D reference mesh ℳ_r is rotated with an (arbitrary) initial pose P by a differentiable renderer <cit.> to generate novel I_r(P) and S_r(P). 
Finally, the generated I_r(P) and S_r(P) are compared with the query I_q and S_q, producing a refinement loss whose gradients are back-propagated to P through the differentiable renderer. Our method operates the render-and-compare procedure in a self-supervised and network-free manner, without labels or training. The overview of the proposed method is illustrated in Fig. <ref>. We detail the comprising elements of our method in the following sections, i.e., semantic map estimation in Sect. <ref>, textured 2.5D mesh reconstruction in Sect. <ref>, and label/training-free refinement via differentiable renderer in Sect. <ref>. §.§ Semantic Map Estimation In order to estimate the relative pose, human intelligence may unconsciously infer the semantics of the reference-query pair. Subsequently, coarse correspondence can be established with those semantics, resulting in three benefits: it (i) helps to filter out the large non-overlapping parts under a substantial pose difference, (ii) alleviates the influence of occlusions, and (iii) eases the otherwise degraded optimization of the relative pose. Benefiting from the rapid development of large pretrained models, an elegant off-the-shelf semantic feature extractor is available in DINO/DINOv2 <cit.>, which shows strong zero-shot generalizability to diverse (even texture-less) objects (see Fig. <ref> for examples). We thus incorporate the off-the-shelf DINOv2 model <cit.> to acquire the rich semantics of the unseen input objects. Specifically, we utilize DINOv2 <cit.> as the semantic feature extractor Φ(·), which takes an RGB image I and produces a set of semantic features F ∈ℝ^w × h × d. In order to texture F onto the 2.5D model and facilitate novel-pose rendering, we use principal component analysis (PCA) to reduce the dimension of F from d to 3, obtaining a semantic map S: S = PCA(Φ(I)), PCA: ℝ^w × h × d→ℝ^w × h × 3. By feeding Eq. (<ref>) with I_q and I_r, we obtain the semantic maps for the query and the reference, S_q and S_r, respectively. §.§ Textured 2.5D Mesh Reconstruction In this section, we reconstruct a rotatable 2.5D model of the reference given its depth map D_r, which is subsequently used to generate novel renderings through the differentiable renderer. Note that our design avoids the challenging 3D hallucination of an unseen object from the depth map, as the hallucinated 3D shape would have to consistently align with the query for relative pose estimation. Specifically, given the depth map D_r of the reference, we lift the coordinates of the image plane into 3D space and obtain the 2.5D point cloud X_r ∈ℝ^N × 3 of the front surface. We then reconstruct the corresponding 2.5D mesh M_r from X_r, to facilitate the rasterization in the renderer. Since the xy coordinates of X_r are sampled regularly from the 2D image grid, reconstructing M_r from X_r can be easily achieved by Delaunay triangulation <cit.>. Finally, we texture M_r with both the color and the semantic maps, obtaining ℳ_r = (M_r, I_r, S_r) for rendering under novel poses. Note that, as discussed in Sect. <ref> (Applicability), our method could potentially use only an RGB reference, with the depth map estimated by an off-the-shelf generalizable depth estimator. Good estimation under an imprecise, noisy depth is validated in the supplementary material. We leave training a generalizable depth estimator to future work to avoid possible distractions in this paper. 
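To make these two preprocessing steps concrete, the sketch below illustrates (i) the PCA projection of patch features to a 3-channel semantic map and (ii) the lifting of a masked depth map into a 2.5D front-surface mesh via Delaunay triangulation on the pixel grid. It is only an illustrative sketch under assumptions, not the paper's implementation: the random feats tensor stands in for DINOv2 ViT-L patch tokens, and the intrinsics K, image size, and mask are made-up values.

```python
import numpy as np
import torch
from scipy.spatial import Delaunay

def semantic_map_pca(feats: torch.Tensor) -> torch.Tensor:
    """Project (h, w, d) patch features to a 3-channel semantic map S via PCA."""
    h, w, d = feats.shape
    flat = feats.reshape(-1, d)
    flat = flat - flat.mean(dim=0, keepdim=True)           # center before PCA
    _, _, v = torch.pca_lowrank(flat, q=3)                  # top-3 principal directions
    s = flat @ v                                            # (h*w, 3) projections
    s = (s - s.min(0).values) / (s.max(0).values - s.min(0).values + 1e-8)
    return s.reshape(h, w, 3)                               # normalized to [0, 1] for texturing

def depth_to_25d_mesh(depth: np.ndarray, K: np.ndarray, mask: np.ndarray):
    """Back-project a masked depth map into front-surface vertices X_r and triangles M_r."""
    v_px, u_px = np.nonzero(mask & (depth > 0))
    z = depth[v_px, u_px]
    x = (u_px - K[0, 2]) * z / K[0, 0]                      # pinhole back-projection
    y = (v_px - K[1, 2]) * z / K[1, 1]
    verts = np.stack([x, y, z], axis=1)                     # (N, 3) front-surface points
    # xy comes from the regular pixel grid, so a 2D Delaunay triangulation is valid here
    faces = Delaunay(np.stack([u_px, v_px], axis=1)).simplices
    return verts, faces

if __name__ == "__main__":
    feats = torch.randn(32, 32, 1024)                       # stand-in for DINOv2 ViT-L tokens
    print(semantic_map_pca(feats).shape)                    # torch.Size([32, 32, 3])
    K = np.array([[572.4, 0.0, 64.0], [0.0, 573.6, 64.0], [0.0, 0.0, 1.0]])
    depth = np.full((128, 128), 0.8)
    mask = np.zeros_like(depth, dtype=bool); mask[32:96, 32:96] = True
    verts, faces = depth_to_25d_mesh(depth, K, mask)
    print(verts.shape, faces.shape)
```

Triangulating the 2D pixel coordinates rather than the 3D points is what keeps the reconstruction a simple front-surface (height-field) mesh, in line with the design choice of avoiding full 3D hallucination.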
§.§ Label/Training-Free Refinement via Differentiable Renderer Our last module of label/training-free refinement is constructed by a differentiable renderer, which takes the textured 2.5D reference mesh ℳ_r and a pose P as input, then renders a novel RGB image and a novel semantic map under the view P. By implementing the pose P as a random variable, the render-and-compare/reprojection loss can be back-propagated directly to P, ensuring the label/training-free and zero-shot unseen generalization merits of our proposed method. Formally, by assuming a perspective camera, we leverage a recent differentiable renderer <cit.>, denoted as ℛ, to generate novel RGB and semantic maps, I_r(P) and S_r(P), from the textured 2.5D reference mesh ℳ_r, an arbitrary pose P, and the camera intrinsics K: I_r(P), S_r(P) = ℛ(P, ℳ_𝓇, K) Back Surface Culling. As the reconstructed mesh is only 2.5D representing the front surface, it is crucial to conduct the back-surface culling during the rendering to filter out the incorrect back-facing polygons. Specifically, for every triangle of the mesh, we first calculate the dot product of their surface normal and the camera-to-triangle (usually set to [0,0,1]) and then discard all triangles whose dot product is greater or equal to 0 <cit.>. Please also see the ablation with and without the back-surface culling in Table <ref>. Finally, the pose P can be optimized to align the rendered I_r(P) and S_r(P) with the query I_q and S_q, with the re-projection loss calculated by: L(P) = L_1{I_r(P); I_q} + L_2{S_r(P); S_q}, where L(P) is the final loss to optimize the pose P, and we implement both losses by the multi-scale structural similarity (MS-SSIM) <cit.> as the following: L_1 = 1 - ms-ssim{I_r(P); I_q}, L_2 = 1 - ms-ssim{S_r(P); S_q}, Equation (<ref>) enables us to optimize P simply by gradient descent. Initialization. As revealed in the majority of prior arts <cit.>, a good initialization significantly boosts the performance of the render-and-compare framework. To this end, we implement our initialization by evenly sampling candidate poses on a sphere and chasing the best one. Specifically, we first sample m viewpoints (azimuth and elevation angles) uniformly using a Fibonacci lattice <cit.>, then uniformly sample n in-plane rotation angles for each viewpoint, producing t=m*n poses as the initializing candidates. By rendering both RGB and semantic maps of those candidate poses, we are able to calculate the re-projection loss by Eq. (<ref>) (without back-propagation in this phase) and choose the pose with the minimal loss as our initialization P^init. Given the initialized pose P^init, we perform N iterations with gradient back-propagation to carry out the label/training-free refinement via the differentiable renderer. Our algorithm is detailed in Algorithm <ref>. § EXPERIMENTS In this section, we extensively validate our method on benchmark datasets including the LineMOD <cit.>, YCB-V <cit.>, and LineMOD-Occlusion (LM-O) <cit.> datasets. We detail the experimental setup in the following. §.§ Experimental Setups State-of-the-art Methods for Comparison. As shown in Table <ref>, there does not exist a method applying the challenging setting of label/training-free and a single reference-query pair like ours. Therefore we choose the state-of-the-art methods that share the closest experimental setups, which are ZSP <cit.>, LoFTR <cit.>, RelPose++ <cit.>, 3DAHV <cit.>, and DVMNet <cit.>. 
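As a concrete aside on the back-surface culling step of Sect. <ref>: because the reference mesh only models the front surface, back-facing triangles must be discarded before rasterization. The toy sketch below, with made-up vertices and the camera-to-triangle direction assumed to be [0, 0, 1], illustrates the normal test; it is not the paper's implementation.

```python
import numpy as np

def cull_back_faces(verts: np.ndarray, faces: np.ndarray,
                    view_dir: np.ndarray = np.array([0.0, 0.0, 1.0])) -> np.ndarray:
    """Keep only triangles whose normal faces the camera (dot(normal, view_dir) < 0)."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    normals = np.cross(v1 - v0, v2 - v0)        # un-normalized face normals
    keep = normals @ view_dir < 0.0             # discard dot >= 0 (back-facing polygons)
    return faces[keep]

if __name__ == "__main__":
    # Two triangles with opposite winding: only the front-facing one survives.
    verts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 2, 1]])
    print(cull_back_faces(verts, faces))        # [[0 2 1]] (its normal points toward the camera)
```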
Specifically, for ZSP, although it was originally proposed to process multiple queries, it is able to accept one RGB-D query as input; we report its performance based on the single RGB-D query and single RGB-D reference pair. For LoFTR, we use its pretrained weights released by the authors <cit.>. The weights of DVMNet, 3DAHV, and RelPose++ are retrained on demand to achieve their best performance (for the details, see the following Benchmark Experiments, and the table captions of Table <ref>, Table <ref> and Table <ref>). Datasets. The experiments are carried out on three benchmark object pose estimation datasets. The LineMOD dataset <cit.> comprises 13 sequences, each depicting a single low-textured real object under varying lighting conditions in approximately 1,200 images. LineMOD-Occlusion (LM-O) <cit.> consists of 1,214 images of 8 occluded objects extracted from the LineMOD dataset; the average visible fraction of objects in LM-O is 79.45%. YCB-V <cit.> encompasses over 110,000 real images featuring 21 objects characterized by severe occlusion and clutter, with an average visible object fraction of 87.15%. Evaluation Metric. Following <cit.> and <cit.>, we report the mean rotation error (Mean Err) across sampled reference-query pairs. We also evaluate the accuracy metrics Acc@t^∘, i.e., the percentage of predictions whose rotation error is within t = 5/10/15/30^∘, which can be more rigorous (especially for small t) and better characterize the performance. The pose difference in degrees between the ground-truth relative rotation Δ R_gt and the predicted ΔR̂ is calculated via the geodesic distance D: D = arccos((tr(Δ R_gt^TΔR̂) - 1)/2) · 180 / π. Benchmark Experiments. The in-dataset networks of the state-of-the-art DVMNet, 3DAHV, and RelPose++ methods need to be trained on a leave-out subset that comes from the same dataset as the testing subset but does not include the testing objects. For a fair comparison, on the LineMOD dataset, we follow the experiments in DVMNet <cit.> and 3DAHV <cit.> to evaluate 5 objects (i.e., benchvise, camera, cat, driller, duck). For the YCB-V experiments, we design a similar training protocol to enable the comparison with DVMNet, 3DAHV, and RelPose++, where we randomly sample 8 objects (i.e., tuna_fish_can, pudding_box, banana, pitcher_base, mug, power_drill, large_clamp, foam_brick) for evaluation, leaving the remaining 13 objects to train these three methods. Following DVMNet <cit.>, we evaluate 3 unseen objects on the LM-O dataset (i.e., cat, driller, and duck). Since the challenging LM-O dataset is typically used solely for evaluation, we directly use the same weights for DVMNet and 3DAHV that were trained in the LineMOD experiments. Since results on the rigorous Acc@t^∘ metrics are not reported in the 3DAHV <cit.> and DVMNet <cit.> papers, we retrain them using the code released by the authors for this evaluation. Moreover, as a label/training-free method, the performance of our method can be assessed on all the objects of the LineMOD, YCB-V, and LM-O datasets, without the need to leave out any training data or leverage any external dataset. We report the performance of our method on the complete LineMOD, YCB-V, and LM-O datasets in Tables S3, S4, and S5 of the supplementary material. In-dataset and Cross-dataset Evaluation. Beyond generalization to unseen objects, we also test dataset-level generalization for the state-of-the-art network-based methods DVMNet and 3DAHV, reporting both the in-dataset and the cross-dataset performance. 
In short, in-dataset and cross-dataset differ in whether the network needs to be finetuned on a subset that comes from the same dataset with the testing set (though not including the testing objects). Therefore, a good cross-dataset performance demonstrates better generalization in terms of the dataset, as the network only needs to be trained once on a large-scale external dataset without finetuning. Specifically, for the in-dataset experiments, we follow the exact training protocols of DVMNet <cit.> and 3DAHV <cit.>, which first pretrain on an external large-scale dataset Objaverse <cit.> then fintune on a certain dataset (e.g., LineMOD or YCB-V). For cross-dataset experiments, we use the pretrained weights from Objaverse directly without finetuning. Note that the evaluation of our method, ZSP, and LoFTR does not involve a finetuning phase, demonstrating that our method, ZSP, and LoFTR naturally generalize to an arbitrary dataset[This is achieved by that 1) the pose estimation phase of our method, ZSP, and LoFTR are general and do not involve learning a network, and 2) they all use generalizable feature extractors, i.e., DINOv2 or LoFTR.]. Reference-Query Pair Generation. We follow DVMNet <cit.> and 3DAHV <cit.> to generate the reference-query pairs with sufficient overlaps for training and testing. Specifically, given a reference rotation R_r and a query rotation R_q, we first convert the rotation matrices R_r and R_q to Euler angles (α_r, β_r, γ_r) and (α_q, β_q, γ_q). Since the in-plane rotation γ does not influence the overlaps between the reference and query pair, it is set to 0 and converted back to the rotation matrix, i.e., R̃ = h(α, β, 0) with h being Euler-angle to rotation matrix transformation. The overlap between the query and the reference is measured by the geodesic distance (i.e., the pose difference in degree) between their in-plane-omitted rotation matrices R̃_̃q̃ and R̃_̃r̃ using Eq. (<ref>). Finally, following DVMNet <cit.> and 3DAHV <cit.>, we select the sampled pairs with D̃ less than 90^∘. Following DVMNet <cit.> and 3DAHV <cit.>, for each object, we generate 1000 pairs for testing, and 20000 pairs for training DVMNet, 3DAHV, and RelPose++. Figure <ref> illustrates the histograms depicting the statistics of the pairwise pose difference (geodesic distance between rotation matrices R_r and R_q) on the three datasets. All the experiments are carried out on the same testing reference-query pairs. Implementation Details. For semantic feature extraction, we employ the output tokens from the last layer of the DINOv2 ViT-L model <cit.>. We use nvdiffrast <cit.> as our differentiable renderer. We uniformly sample m=200 viewpoints and n=20 in-plane rotations (resulting in 4000 initialization candidates), the maximal iteration number for differentiable rendering is set to N=30. To backpropagate the refinement losses, we use an Adam optimizer <cit.> of 0.01 initial learning rate and decay by a scheduler. All the experiments are conducted on a single NVIDIA 4090 GPU. §.§ Experimental Results on the LineMOD Dataset The results on the LineMOD dataset are illustrated in Table <ref>. We paste the performances of RelPose++ from the 3DAHV paper <cit.>. We leave the ^∘ performance of RelPose++ blank as those were not reported in <cit.> and the (pre-) training code of RelPose++ on the external large-scale Objaverse dataset is not available. Table <ref> shows that our label and training-free method significantly outperforms the supervised state-of-the-art DVMNet w.r.t. all the metrics. 
In addition, the state-of-the-art methods DVMNet and 3DAHV face challenges in generalizing across different datasets, i.e., their in-dataset results substantially outperform their cross-dataset counterparts. In contrast, our approach, which trains no network, inherently generalizes across diverse datasets. In particular, our method significantly outperforms DVMNet (in-dataset) by 21.6% and 32.08% w.r.t. the rigorous Acc@t^∘ metrics. The qualitative results of our method are shown in Fig. <ref>, and comparisons with different methods are presented in Fig. S3 of the supplementary material. Our results on all the LineMOD objects are detailed in Table S3 of the supplementary material. §.§ Experimental Results on the YCB-V Dataset To compare with the state-of-the-art DVMNet <cit.>, 3DAHV <cit.> and RelPose++ <cit.>, we follow the protocols discussed in Sect. <ref> (In-dataset and Cross-dataset Evaluation) to obtain the in-dataset and cross-dataset performance of DVMNet <cit.> and 3DAHV <cit.>, while RelPose++ is trained on the YCB-V dataset only. The performance on the YCB-V dataset is reported in Table <ref>, where our method exhibits a significant improvement of 11.02% and 17.83% over the state-of-the-art DVMNet (in-dataset) on the challenging Acc@t^∘ metrics. We showcase the qualitative results of our method on the YCB-V dataset in Fig. <ref>, and those across different methods can be found in Fig. S2 of the supplementary material. Our results on all the YCB-V objects are shown in Table S5 of the supplementary material. §.§ Experimental Results on the LM-O Dataset Finally, we carry out experiments on the challenging LM-O dataset with severe occlusions. Following DVMNet <cit.>, we conduct the experiments on three unseen objects of the LM-O dataset, i.e., cat, driller, and duck. We note that the LM-O dataset is typically used solely for evaluation; therefore, the results of DVMNet and 3DAHV are evaluated using the weights finetuned on LineMOD. Since the weights of RelPose++ for the LineMOD dataset have not been released yet and LM-O (with only 8 objects) cannot provide sufficient leave-out data to train RelPose++, we do not include RelPose++ in this comparison. The results in Table <ref> demonstrate the promising performance of our method on the severely occluded LM-O dataset. We showcase our performance on the LM-O dataset in Fig. <ref>, and comparisons across different methods are illustrated in Fig. S2 of the supplementary material. Our results on all the LM-O objects can be found in Table S4 of the supplementary material. We observe that our results in terms of Mean Err are inferior to the in-dataset results of the state-of-the-art DVMNet and 3DAHV (though our method exhibits better Acc@t^∘ results). This can be attributed to the extensive occlusions present in the LM-O dataset, which lead to numerous testing pairs lacking adequate overlap. Consequently, those testing pairs are difficult for all the methods to handle (and also challenging for humans). We show such samples as failure cases in Fig. <ref> of Sect. <ref>, and investigate the angle error distribution (ranging from 0 to 180 degrees) on the LM-O dataset in Fig. S1 of the supplementary material. The statistics reveal that at lower angle error thresholds (e.g., Acc@t^∘ for t = 10, 20), our approach substantially outperforms both DVMNet and 3DAHV. 
This indicates that for test pairs with sufficient overlaps (i.e., match-able testing pairs), our method delivers superior performance compared to the state-of-the-art DVMNet and 3DAHV. § ABLATION ANALYSIS We carefully investigate the following issues by ablation: 1) the contribution of each comprising element of our method, including the back-surface culling, and the usage of RGB or semantic modality in Sect. <ref>; 2) the effects of different initialization strategies in Sect. <ref>; 3) the effects of different refinement iterations in Sect. <ref>; 4) the inference time statistics of our method in Sect. <ref>; and 5) the failure cases illustrations from the LM-O dataset in Sect. <ref>. §.§ The Contributions of the Proposed Comprising Elements Despite the simplicity of our method, we are interested in investigating the influences for each of our comprising elements, namely the back-surface culling, and the usage of RGB or semantic modality. We perform those ablations on the LineMOD, and the results are reported in Table <ref>. As expected, removing each of our comprising elements results in a decreased performance, because all of them are exploited with clear motivations. Nonetheless, the encouraging observation is that our method is able to deliver promising results using only the RGB modality without the semantic map. This further extends the applicability of our method when the pretrained DINOv2 model is not available or when the DINOv2 model cannot produce reasonable outputs (though the latter case could be rare). §.§ Effects of Different Initialization Strategies The pose estimation performance under the render-and-compare paradigm is largely affected by the initialization <cit.>. In the following, we investigate different initializations including: 1) random initialization, where we randomly sample candidate poses and choose the best one; and 2) uniform initialization, where the candidate poses are uniformly sampled from a Fibonacci lattice with in-plane rotations <cit.>, as detailed in Sect. <ref> (Initialization). For the latter, we also examine different densities of the sampling, i.e., the Fibonacci lattice viewpoints including 100 and 200, and in-plane rotations including 20 and 50. Table <ref> illustrates the performance of different initialization strategies using the LineMOD dataset, which demonstrates that 1) the uniform initialization outperforms the random initialization, and 2) uniform initialization with denser sampling leads to better performance. In our experiments, we choose uniform initialization with 4000 samples (200 Fibonacci lattice viewpoints times 20 in-plane rotations) to balance the performance and the efficiency. §.§ Effects of Different Refinement Iterations. Table <ref> illustrates the impact of the iteration numbers for our label/training-free refinement using the LineMOD dataset. It shows that the improvement becomes marginal after the iteration number N exceeds 30. We thus set the iteration number to N=30 to achieve a balance between performance and efficiency. §.§ The Statistics of Our Inference Time We collect the inference time per reference-query pair, averaged across the LineMOD datasets on a single 4090 GPU. We report the runtime for each stage of our method in Table <ref>. Note that the initialization is efficient with much more candidate samples than the refinement, because those initializing candidate samples can be evaluated in parallel without backpropagation. 
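To illustrate why the initialization stage remains cheap despite its 4,000 candidates, the sketch below samples candidate rotations from a Fibonacci lattice with in-plane rolls and scores them without any gradient computation. It is a self-contained illustration rather than the paper's code: the actual method scores candidates with the re-projection loss of Eq. (<ref>), whereas here a stand-in scorer (the geodesic rotation error, in degrees, to a hidden target rotation) is used so the example runs on its own, and the rotation-composition convention is an assumption.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def candidate_rotations(m=200, n=20):
    """m Fibonacci-lattice viewpoints x n in-plane rolls -> (m*n, 3, 3) rotation matrices."""
    golden = (1 + 5 ** 0.5) / 2
    i = np.arange(m)
    azimuth = 2 * np.pi * i / golden
    elevation = np.arcsin(1 - 2 * (i + 0.5) / m)       # uniform in sin(elevation)
    rolls = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([rot_z(az) @ rot_x(el) @ rot_z(r)
                     for az, el in zip(azimuth, elevation) for r in rolls])

def geodesic_deg(R1, R2):
    """Rotation error in degrees (the geodesic distance used for Mean Err / Acc@t)."""
    cos = np.clip((np.trace(R1.T @ R2) - 1) / 2, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

if __name__ == "__main__":
    target = candidate_rotations(50, 7)[123]           # hidden "ground-truth" pose
    cands = candidate_rotations(200, 20)               # 4000 candidates, no gradients needed
    scores = np.array([geodesic_deg(R, target) for R in cands])
    best = cands[scores.argmin()]
    print(len(cands), "candidates; best initialization is",
          round(float(scores.min()), 2), "degrees from the target")
```

Since every candidate is merely scored and never back-propagated through, the evaluations are independent and can be batched or parallelized, which is consistent with the runtime breakdown reported in Table <ref>.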
Table <ref> demonstrates the efficiency of our method, with a per-pair runtime of 4.85 seconds in total. §.§ Illustrations of the Failure Cases We show our failure cases on the LM-O dataset in Fig. <ref>, where the query and the reference do not share sufficient overlap. We regard such extremely degraded cases as a limitation and discuss them in Sect. <ref> (Limitations and Future Works). § DISCUSSIONS AND CONCLUSIONS Limitations and Future Works. Our method has the following two limitations. Firstly, our method requires the depth information of the reference object as an input. Although this is a one-time requirement per object, the need for depth data can restrict the applicability of our method when a depth sensor is absent. To acquire the depth of the reference image, we evaluated several advanced monocular depth estimation algorithms, including <cit.>. However, we found that these methods often struggle to generalize across different object types. Despite this, our empirical results, presented in Table S1 of the supplementary material, demonstrate that our method remains robust with imprecise depth (simulated by adding noise to the ground-truth depth). This suggests that the current limitation is likely to be overcome once an object-generalizable depth estimator becomes available. Secondly, our method is likely to fail in the severely degraded scenario where the query and the reference do not share adequate overlap (possibly caused by occlusions, e.g., Fig. <ref>). Future research combining render-and-compare with object completion (with minimal inconsistent hallucination) is a promising direction to explore. Another future direction is to adaptively determine the loss weights of the RGB pair and the semantic pair in Eq. (<ref>) (preferably at each refinement step), though we empirically show that simply using equal weights (i.e., both set to 1) already leads to promising results. Conclusions. In this paper, we addressed challenging generalizable relative pose estimation in a rigorous setting where only a single RGB-D reference and a single RGB query are available and no pose labels are given a priori. We establish our label- and training-free method following the render-and-compare paradigm, by exploiting 1) the 2.5D (i.e., RGB-D) rotatable reference mesh, 2) the semantic maps of both query and reference (extracted by the pretrained large vision model DINOv2), and 3) a differentiable renderer to produce and back-propagate losses to refine the relative pose. We carried out extensive experiments on the LineMOD, LM-O, and YCB-V datasets. The results demonstrate that our label/training-free approach surpasses the performance of state-of-the-art supervised methods, particularly excelling under the rigorous Acc@t^∘ metrics.
Positive and monotone fragments of FO and LTL
Denis Kuperberg, Quentin Moreau
§ ABSTRACT We study the positive logic on finite words, and its fragments, pursuing and refining the work initiated in <cit.>. First, we transpose well-known logic equivalences into positive first-order logic: is equivalent to , and its two-variable fragment with (resp. without) successor available is equivalent to with (resp. without) the “next” operator available. This shows that despite previous negative results, the class of -definable languages exhibits some form of robustness. We then exhibit an example of an -definable monotone language on one predicate that is not -definable, refining the three-predicate example from <cit.>. Moreover, we show that such a counter-example cannot be -definable. § INTRODUCTION In various contexts, monotonicity properties play a pivotal role. For instance, the field of monotone complexity investigates negation-free formalisms, and has turned out to be an important tool for complexity in general <cit.>. From a logical point of view, a sentence is called monotone (with respect to a predicate P) if increasing the set of values where P is true in a structure cannot make the evaluation of the formula switch from true to false. This is crucial e.g. when defining logics with fixed points, where the fixed-point binders μ X can only be applied to formulas that are monotone in X. Logics with fixed points are used in various contexts, e.g. to characterise the class PTime on ordered structures <cit.>, as extensions of linear logic such as μMALL <cit.>, or in the μ-calculus formalism used in automata theory and model-checking <cit.>. Because of the monotonicity constraint, it is necessary to recognise monotone formulas, and to understand whether a syntactic restriction to positive (i.e. negation-free) formulas is semantically complete. Logics on words have also been generalised to inherently negation-free frameworks, such as the framework of cost functions <cit.>. This motivates the study of whether the semantic monotonicity constraint can be captured by a syntactic one, namely the removal of negations, yielding the class of positive formulas. For instance, the formula ∃ x, a(x) states that an element labelled a is present in the structure. It is both monotone and positive. However, its negation ∀ x, ¬ a(x) is neither positive nor monotone, since it states the absence of a, and increasing the domain where the predicate a is true in a given structure could make the formula become false. Lyndon's preservation theorem <cit.> states that on arbitrary structures, every monotone formula of First-Order Logic () is equivalent to a positive one ( syntactic fragment). The case of finite structures was open for two decades until Ajtai and Gurevich <cit.> showed that Lyndon's theorem does not hold in the finite, later refined by Stolboushkin <cit.> with a simpler proof. Recently, this preservation property of was more specifically shown to fail already on finite graphs and on finite words by Kuperberg <cit.>, implying the failure on finite structures with a more elementary proof than <cit.>. 
However, the relationship between monotone and positive formulas is still far from being understood. On finite words in particular, the positive fragment was shown <cit.> to have undecidable membership (with input an formula, or a regular language), which could be interpreted as a sign that this class is not well-behaved. This line of research can be placed in the larger framework of the study of preservation theorems in first-order logic, and their behaviour in the case of finite models, see <cit.> for a survey on preservation theorems. In this work we will concentrate on finite words, and investigate this “semantic versus syntactic” relationship for fragments of and Linear Temporal Logic (). We will in particular lift the classical equivalence between and <cit.> to their positive fragments, showing that some of the robustness aspects of are preserved in the positive fragment, despite the negative results from <cit.>. This equivalence between and is particularly useful when considering implementations and real-world applications, as satisfiability is -complete while satisfiability is non-elementary. It is natural to consider contexts where specifications in LTL can talk about e.g. the activation of a sensor, but not its non-activation, which would correspond to a positive fragment of LTL. We could also want to syntactically force such an event to be “good” in the sense that if a specification is satisfied when a signal is off at some time, it should still be satisfied when the signal is on instead. It is therefore natural to ask whether a syntactic constraint on the positivity of formulas could capture the semantic monotonicity, in the full setting or in some fragments corresponding to particular kinds of specifications. We will also pay a close look at the two-variable fragment of and its counterpart. It was shown in <cit.> that there exists a monotone -definable language that is not definable in positive . We give stronger variants of this counter-example language, and show that such a counter-example cannot be defined in [<]. This is obtained via a stronger result characterizing -monotone in terms of positive fragments of bounded quantifier alternation. We also give precise complexity results for deciding whether a regular language is monotone, refining results from <cit.>. The goal of this work is to understand at what point the phenomenon discovered in <cit.> come into play: what are the necessary ingredients for such a counter-example (-monotone but not positive) to exist? And on the contrary, which fragments of are better behaved, and can capture the monotonicity property with a semantic constraint, and allow for a decidable membership problem in the positive fragment. §.§ Outline and Contributions We begin by introducing two logical formalisms in <Ref>: First-Order Logic (<ref>) and Temporal Logic (<ref>). Then, we lift some classical logical equivalences to positive logic in <Ref>. First we show that , and are equivalent in <Ref>. We prove that the fragment with (resp. without) successor predicate is equivalent to with (resp. without) and operators available in <Ref> (resp. <Ref>). In <Ref>, we give a characterisation of monotonicity using monoids (<Ref>) and we deduce from this an algorithm which decides the monotonicity of a regular language given by a monoid (<Ref>), completing the automata-based algorithms given in <cit.>. 
This leads us to the <Ref> which states that deciding the monotonicity of a regular language is in Ł when the input is a monoid while it is - when the input is a DFA. This completes the previous result from <cit.> showing -completeness for NFA input. Finally, we study the relationship between semantic and syntactic positivity in <Ref>. We give some refinements of the counter-example from <cit.> (a regular and monotone language -definable but not definable in ). Indeed, we show that the counter-example can be adapted to with the binary predicate "between" in <Ref> and we show that we need only one predicate to find a counter-example in in <Ref>. We also consider a characterization of [<] from Thérien and Wilke <cit.> stating that [<] is equivalent to Σ_2 ∩Π_2 where Σ_2 and Π_2 are fragments of with bounded quantifier alternation. We show that -monotone is characterized by Σ_2^+ ∩Π_2^+. At last, we show that no counter-example for can be found in (without successor available) in <Ref>. We conclude by leaving open the problem of expressive equivalence between and -monotone, as well as decidability of membership in for regular languages (see <Ref>). § FO AND LTL We work with a set of atomic unary predicates Σ = {a_1,a_2,...a_|Σ|}, and consider the set of words on alphabet . To describe a language on this alphabet, we use logical formulas. Here we present the different logics and how they can be used to define languages. §.§ First-order logics Let us consider a set of binary predicates, =, ≠, ≤, <, ≻ and ⊁, which will be used to compare positions in words. We define the subsets of predicates _0 := {≤,<, ≻, ⊁}, _< := {≤,<} and _≻ := {=, ≠, ≻, ⊁}, and a generic binary predicate is denoted $̱. As we are going to see, equality can be expressed with other binary predicates in_0and_<when we have at least two variables. This is why we do not need to impose that=belongs to_0or_<. The same thing stands for≠. Generally, we will always assume that predicates=and≠are expressible. Let us start by defining first-order logic: Let be a set of binary predicates. The grammar of [] is as follows: φ, ψ::= |⊤|(̱x,y) | a(x) |φψ|φψ|∃ x, φ|∀ x, φ|φ, where $̱ belongs to. Closedformulas (those with no free variable) can be used to define languages. Generally speaking, a pair consisting of a worduand a functionfrom the free (non-quantified) variables of a formulaφto the positions ofusatisfiesφifusatisfies the closed formula obtained fromφby replacing each free variable with its image by. Let φ, a formula with n free variables, x_1, ..., x_n, and u a word. Let be a function of {x_1,...,x_n} in [[0,|u|-1]]. We say that (u,) satisfies φ, and we define u,φ by induction on φ as follows: * u,⊤ and we never have u,, * u, x < y if (x) < (y), * u, x ≤ y if (x) ≤(y), * u, ≻(x,y) if (y) = (x)+1, * u, ⊁(x,y) if (y) ≠(x)+1, * u, a(x) if a ∈ u[(x)](note that we only ask inclusion here), * u, φψ if u, φ and u, ψ, * u, φψ if u, φ or u, ψ, * u, ∃ x, φ(x,x_1,...,x_n) if there is i of u such that we have u, ∪ [x ↦ i] φ, * u, ∀ x, φ(x,x_1,...,x_n) if for any index i of u, u, ∪ [x ↦ i] φ, * u,φ if we do not have u,φ. For a closed formula, we simply note u φ. Here is an example: The formula φ = ∃ x, ∀ y, (x=y a(y)) describes the set of non-empty words that admit at most one a. For example, {a}{a,b} does not satisfy φ because two of its letters contain an a, but {a,b,c}{b}∅ does satisfy φ. The predicates ≻ and ⊁ can be expressed in [_<] with three variables. 
If there are no restriction on variables, in particular if we can use three variables, all binary predicates in _0 can be expressed from those in _<. Thus, we will consider the whole set of binary predicates available when the number of variables is not constrained, and we will note for [_0] or [_<], which are equivalent, and similarly for . Let us now turn our attention to, the set of first-order formulas without negation. We recall definitions from <cit.>. The grammar of is that of without the last constructor, . Let us also define monotonicity properties, starting with an order on words. A word u is lesser than a word v if u and v are of the same length, and for any index i (common to u and v), the i-th letter of u is included in the i-th letter of v. When a word u is lesser than a word v, we note u v. Let L be a language. We say that L is monotone when for any word u of L, any word greater than u belongs to L. formulas are monotone in unary predicates, i.e. if a model (u,) satisfies a formula φ of , and u v, then (v,) satisfies φ. We will also be interested in other logical formalisms, obtained either by restricting, or several variants of temporal logics. First of all, let us review classical results obtained when considering restrictions on the number of variables. While anformula on words is always logically equivalent to a three-variable formula <cit.>, two-variable formulas describe a class of languages strictly included in that described by first-order logic. In addition, the logicis equivalent to Linear Temporal Logic (see below). Please note: these equivalences are only true in the framework on word models. In other circumstances, for example when formulas describe graphs, there are formulas with more than three variables that do not admit equivalents with three variables or less. The set is the subset of formulas using only three different variables, which can be reused. We also define for formulas with three variable and without negation. Similarly, we define and with two variables. The formula ∃ y, ≻(x,y) (∃ x, b(x) (∀ z, z ≥ x z < y a(z) )) (a formula with one free variable x that indicates that the letter labeled by x will be followed by a factor of the form aaaaa. ..aaab) is an formula, and even an formula: there is no negation, and it uses only three variables, x, y and z, with a reuse of x. On the other hand, it does not belong to . §.§ Temporal logics Some logics involve an implicit temporal dimension, where positions are identified with time instants. For example, Linear Temporal Logic (LTL) uses operators describing the future, i.e. the indices after the current position in a word. This type of logic can sometimes be more intuitive to manipulate, and present better complexity properties, see introduction. As mentioned above,is not equivalent to. On the other hand, it is equivalent to, a restriction ofto its unary temporal operators. To begin with, let us introduce, which is equivalent to. The grammar of is as follows: φ, ψ::= |⊤| a |φψ|φψ|φ|φψ|φψ|φ. Removing the last constructor gives the grammar of . This logic does not use variables. To check that a word satisfies anformula, we evaluate the formula at the initial instant, that is to say, the word's first position. Theconstructor then describes constraints about the next instant, i.e. the following position in the word. So the worda.u, whereais a letter, satisfiesφif and only if the suffixusatisfiesφ. 
The constructionφψ(φuntilψ) indicates that the formulaψmust be verified at a given point in time and thatφmust be verified until then. We defineφψas being equal to (φψ). Let us define this formally: Let φ be an formula, and u = u_0...u_m-1 be a word. We say that u satisfies φ and define u φ by induction on φ as follows: * u ⊤ and we never have u, * u a if a ∈ u[0], * u φψ if u φ and u ψ, * u φψ if u φ or u ψ, * u φ if u_1...u_m-1φ, * u φψ if there is i∈[[0,m-1]] such that u_i...u_m-1ψ and for all j∈[[0,i-1]], u_j...u_m-1φ, * u φψ if u (ψ (ψ∧φ)) or for all i∈[[0,m-1]] we have u_i...u_m-1ψ, * u φ if we do not have u φ. Let us call φψ the formula (φψ), for any pair (φ,ψ) of formulas. The advantage of is that and can be redefined from . The notation for is regularly found in the literature.is included in Temporal Logic,. While the former speaks of the future, i.e. of the following indices in the word, thanks to,and, the latter also speaks of the past. Indeed, we introduce,(since) andthe respective past analogues of,and. The grammar of is as follows: φ, ψ::= |ϕ|ϕψ|φψ. Similarly, the grammar of is that of extended with , and . As for , we will write φψ for (φψ). We also note φ, φ, φ̋ and φ for ⊤φ, ⊤φ, φ and φ respectively. The formulas φ and φ mean respectively that the formula φ will be satisfied at least once in the future ( as Future), and that φ will always be satisfied in the future ( as Global). Similarly, the operators (as Past) and $̋ are the respective past analogues ofand. When evaluating anorformula on a wordu=u_0… u_m, we start by default on the first positionu_0. However, we need to define more generally the evaluation of aformula on a word from any given position: Let φ be a formula, u = u_0...u_m-1 a word, and i∈[[0,m-1]]. We define u,i φ by induction on φ: * u,i ⊤ and we never have u, * u,i a if a ∈ u_i, * u,i φψ if u,i φ and u,i ψ, * u,i φψ if u,i φ or u,i ψ, * u,i φ if u, i+1 φ, * u,i φψ if there is j∈[[i,m-1]] such that u,j ψ and for all k∈[[i,j-1]], u,k φ, * u,i ψφ if u,i (ψφ), * u,i φ if we do not have u,i φ, * u,i φ if u,i-1 φ, * u,i φψ if there is j∈[[0,i]] such that u,j ψ and for all k∈[[j+1,i]], u,k φ. Finally, let us introduceand, the Unary Temporal Logic and its positive version. Thelogic does not use theoroperator, but only,andto talk about the future. Similarly, we cannot useorto talk about the past. The grammar of is as follows: φ, ψ::= |⊤| a |φψ|φψ|φ|ϕ|φ|φ|φ̋|φ|φ . We define define [,,,̋] from this grammar by deleting the constructors and . The grammar of is obtained by deleting the last constructor, and similarly, we define [,,,̋] by deleting the negation in [,,,̋]. In the above definition, $̋ andcan be redefined withandthanks to negation, but are necessary in the case of. When two formulasφandψare logically equivalent, i.e. admit exactly the same models, we denote it byφ≡ψ. Note that a closedformula can be equivalent to anformula, since their models are simply words. Similarly, we can haveφ≡ψwhenφis anformula with one free variable (having models of the form(u,i)) andψis aorformula, this time not using the default starting position forsemantics. § LOGICAL EQUIVALENCES We want to lift to positive fragments some classical theorems of equivalence between logics, such as these classical results: * and define the same class of languages. * and define the same class of languages. §.§ Equivalences to FO+ We aim at proving the following theorem, lifting classical results fromto: The logics , and describe the same languages. 
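As a side illustration of the finite-word semantics recalled above, the short evaluator below checks satisfaction of a formula at a position of a finite word. It is not part of the paper: it uses the standard operator names (X for next, U for until, F/G for eventually/globally), represents letters as sets of unary predicates as in the definitions of Sect. <ref>, and adopts the common convention that X fails at the last position of a finite word.

```python
# Words are lists of sets of atomic predicates, e.g. [{"a"}, {"a", "b"}, set()].
def sat(u, i, f):
    """Does (u, i) satisfy the formula f, given as a nested tuple in prefix form?"""
    op = f[0]
    if op == "atom":              # a: true iff the predicate belongs to the i-th letter
        return f[1] in u[i]
    if op == "not":
        return not sat(u, i, f[1])
    if op == "and":
        return sat(u, i, f[1]) and sat(u, i, f[2])
    if op == "or":
        return sat(u, i, f[1]) or sat(u, i, f[2])
    if op == "X":                 # next: requires a position i+1 in the finite word
        return i + 1 < len(u) and sat(u, i + 1, f[1])
    if op == "U":                 # f[1] U f[2]
        return any(sat(u, j, f[2]) and all(sat(u, k, f[1]) for k in range(i, j))
                   for j in range(i, len(u)))
    if op == "F":                 # eventually
        return any(sat(u, j, f[1]) for j in range(i, len(u)))
    if op == "G":                 # globally, on the remaining finite suffix
        return all(sat(u, j, f[1]) for j in range(i, len(u)))
    raise ValueError(op)

# "some position holds b and every earlier position holds a":  a U b
phi = ("U", ("atom", "a"), ("atom", "b"))
print(sat([{"a"}, {"a"}, {"b"}], 0, phi))   # True
print(sat([{"a"}, set(), {"b"}], 0, phi))   # False
```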
The set of languages described by is included in the set of languages recognised by . The proof is direct, see Appendix <ref> for details. Fromto, we can interpret inall constructors of. Let us introduce definitions that will be used in the proof of the next lemma. Let (φ) be the quantification rank of a formula φ of defined inductively by: * if φ contains no quantifier then (φ) = 0, * if φ is of the form ∃ x, ψ or ∀ x, ψ then (φ) = (ψ) + 1, * if φ is of the form ψχ or ψχ then (φ) = max((ψ),(χ)). A separated formula is a positive Boolean combination of purely past formulas (which do not depend on the present and future), purely present formulas (which do not depend on the past and future) and purely future formulas (which do not depend on the past and present). We will adapt previous work to show the following auxiliary result: Let φ be a formula with possible nesting of past and future operators. There is a separated formula of that is equivalent to φ. Our starting point is the proof given by Kuperberg and Vanden Boom in <cit.>, which proves the equivalence between generalisations of the logics and , to the so-called cost and cost . When specialised to and , this corresponds to the case where negations appear only at the leaves of formulas. This brings us closer to our goal. First of all, <cit.> proves a generalised version of the separation theorem from <cit.>. In <cit.>, it is proven that any formula of is equivalent to a separated formula, and a particular attention to positivity is additionally given in <cit.>. Indeed <cit.> also shows that such a Boolean combination can be constructed while preserving the formula's positivity. One can also check <cit.> to verify that positivity of a formula is kept when separating the formula. Thus, a formula in can be written as a Boolean combination of purely past, present and future formulas themselves in . Now we are ready to show the main result of this section: The set of languages described by is included in the set of languages recognised by . We follow <cit.>, which shows a translation from to by induction on the quantification rank. We have adapted this to suit our needs. Let φ(x) be an formula with a single free variable. Let us show by induction on (φ) that φ is equivalent to a formula of . Initialisation: If (φ) is zero, then φ(x) translates directly into the formula. Indeed, disjunctions and conjunctions translate immediately into . Furthermore, unary predicates of the form a(x) translate into a and binary predicates trivialize into ⊤ and (e.g. x<x translates into and x=x into ⊤). For example, (x ≤ x a(x)) (b(x) c(x)) x < x translates into (⊤ a) (b c). Heredity: Suppose that any free single-variable formula of quantification rank strictly less than (φ) translates into a formula, and (φ) is strictly positive. If φ is a disjunction or conjunction, we need to transform its various clauses. So, without loss of generality, let us assume that φ(x) is of the form ∃ y, ψ(x,y) or ∀ y, ψ(x,y). Let us denote a_1, ... a_n where n is a natural number, the letters (which are considered as unary predicates) in ψ(x,y) applied to x. For any subset S of [[1,n]], we note ψ^S(x,y) the formula ψ(x,y) in which each occurrence of a_i(x) is replaced by ⊤ if i belongs to S and by otherwise, for any integer i of [[1,n]]. We then have the logical equivalence: ψ(x,y) ≡⋁_S ⊆ [[1,n]]( ⋀_i ∈ S a_i(x) ⋀_i ∉ S a_i(x) ψ^S(x,y) ). We are going to show that the negations in the above formula are optional. Let us note: ψ^+(x,y) ≡⋁_S ⊆ [[1,n]]( ⋀_i ∈ S a_i(x) ψ^S(x,y) ). 
Let us then show the equivalence of the formulas ψ(x,y) and ψ^+(x,y) using the monotonicity of ψ as an formula. First of all, it is clear that any model satisfying ψ(x,y) satisfies ψ^+(x,y). Conversely, suppose ψ^+(x,y) is satisfied. We then have a subset S of [[1,n]] such that (∧_i ∈ S a_i(x)) ψ^S(x,y) is satisfied. In particular, according to the values taken by the unary predicates in x, there exists a subset S' of [[1,n]] containing S such that (∧_i ∈ S' a_i(x)) (∧_i ∉ S' a_i(x)) ψ^S(x,y) is satisfied. Now, ψ is monotone in the different predicates a_1,...,a_n. So (∧_i ∈ S' a_i(x)) (∧_i ∉ S' a_i(x)) ψ^S'(x,y) is also satisfied, and ψ(x,y) is therefore satisfied. The rest of the proof is similar to the proof from <cit.>: the quantifiers on y commute with the disjunction on S and the conjunction on i of the formula ψ^+. We can therefore fix a subset S of [[1,n]] and simply consider ∃ y, ψ^S(x,y) or ∀ y, ψ^S(x,y). We then replace ψ^S(x,y) with a formula that depends only on y by replacing each binary predicate of the form (̱x,z) with a unary predicate _(z). For example, we can replace x<z, z<x or x=z by a unary predicate _>(z), _<(z) or _=(z). We then obtain a formula ψ'^S(y) on which we can apply the induction hypothesis (since there is only one free variable). This yields a formula χ from , equivalent to ψ'^S(y) and we have: ∃ y, ψ^S(x,y) ≡χχχ,    and   ∀ y, ψ^S(x,y) ≡χ̋χχ. Let χ' be the formula obtained (χχχ or χ̋χχ). The resulting formula χ' then involves unary predicates of the form _. We then use <Ref> to transform χ' into a positive Boolean combination of purely past, present and future positive formulas, where predicates _$̱ trivialize into⊤or. For example,_<trivializes into⊤in purely past formulas, intoin purely present or future formulas. This completes the induction. From a formula in, we can construct an equivalent formula in. Ultimately, we can return to a future formula. Indeed, we want to evaluate inx=0, so the purely past formulas, isolated by the separation lemma (<Ref>), trivialize intoor⊤. Now, to translate a closed formulaφfromto, we can add a free variable by settingφ'(x) = φ (x=0). Then, by the above,φ'translates into a formulaχfrom, logically equivalent toφ. We can now turn to the proof of <Ref>. By <Ref>, we have the inclusion of the languages described by in those described by , which is trivially included in . By <Ref>, the converse inclusion of into holds. So we can conclude that the three logical formalisms are equi-expressive. §.§ Equivalences in fragments of FO+ The languages described by [_0] formulas with one free variable are exactly those described by formulas. First, let us show the to direction. In the proof of <Ref>, as is classical, three variables are introduced only when translating . By the same reasoning as for , it is clear that translating introduces two variables. It remains to complete the induction of <Ref> with the cases of , , $̋ and, but again we can restrict ourselves to future operators by symmetry: * [φ](x) = ∃ y, x < y [φ](y) ; * [φ](x) = ∀ y, y ≤ x [φ](y). For the converse direction fromto, we draw inspiration from <cit.>. This proof is similar to that of <cit.> used previously in the proof of <Ref>: we perform a disjunction on the different valuations of unary predicates in one free variable to build a formula with one free variable. However, the proof of <Ref> cannot be adapted as it is, since it uses the separation theorem which does not preserve the membership of a formula to, see <cit.>. 
However, the article <cit.> uses negations and we must therefore construct our own induction case for the universal quantifier that is treated in <cit.> via negations. The beginning of the proof is identical to that of <Ref>. Using the same notations, let us consider a formulaψ^S(x,y)with no unary predicate applied tox. We cannot directly replace binary predicates with unary predicates, because this relied on the separation theorem. Let us consider, as in <cit.>, the position formulas,y<x ⊁(y,x),≻(y,x),y=x,≻(x,y)andx < y ⊁(x,y), whose set is denotedΤ. We then have the logical equivalence:ψ^S(x,y) ≡⋁_τ∈Ττ(x,y) ψ_τ^S(y) ≡⋀_τ∈Ττ(x,y) ψ_τ^S(y),whereψ_τ^S(y)is obtained from the formulaψ^S(x,y)assuming the relative positions ofxandyare described byτ. The above equivalence holds becauseΤforms a partition of the possibilities for the relative positions ofxandy: exactly one of the five formulasτ(x,y)fromΤmust hold. Sincexandyare the only two variables, any binary predicate involvingxis a binary predicate involvingxandy(or else it involves onlyxand is trivial). Binary predicates are therefore trivialized according to the position described byτ. For ψ^S(x,y) = ⊁(x,y) a(y) (∀ x, x ≤ y b(y)) and for the position formula τ = y<x ⊁(y, x), we have ψ_τ^S(y) = ⊤ a(y) (∀ x, x ≤ y b(y)). We do not replace the bound variable x. We have obtained a formula with one free variable, so we can indeed use the induction hypothesis. We use disjunction in the case of an existential quantifier (as in <cit.>) and conjunction in the case of a universal quantifier. We then need to translate∃ y, τ(x,y) ψ_τ^S(y)and∀ y, τ(x,y) ψ_τ^S(y), which we note respectively[τ]_∃and[τ]_∀, in, for any position formulaτ. For readability we omitψ_τ^Sin this notation, but[τ]_∃and[τ]_∀will depend onψ_τ^S. In each case, we noteχfor theformula obtained by induction fromψ_τ^S(y):1.5[ [y<x ⊁(y,x)]_∃≡χ,; [y<x ⊁(y,x)]_∀≡χ̋,; [≻(y,x)]_∃≡≻(y,x)]_∀≡χ,; [y=x]_∃≡ [y=x]_∀≡χ,; [≻(x,y)]_∃≡≻(x,y)]_∀≡χ,; [x < y ⊁(x,y)]_∃≡χ,; [x < y ⊁(x,y)]_∀≡χ. ]The logic [_<] is equivalent to [,,,̋]. For the right-to-left direction, it suffices to notice that the predicates used to translate the constructors of [,,,̋] in the previous proof belong to _<. For the left-to-right direction, simply replace the set Τ in <Ref> proof by Τ' = { y<x, y=x, x<y }. Once again, we obtain an exhaustive system of mutually exclusive position formulas that allow us to trivialize binary predicates. The proof of <Ref> can thus be lifted immediately to this case. We showed that several classical logical equivalence results can be transposed to their positive variants. § CHARACTERISATION OF MONOTONICITY So far, we have focused on languages described by positive formulas, from which monotonicity follows. Here, we focus on the monotonicity property and propose a characterisation. We then derive a monoid-based algorithm that decides, given a regular languageL, whether it is monotone, refining results from <cit.> focusing on automata-based algorithms. §.§ Characterisation by monoids We assume the reader familiar with monoids (see Appendix <ref> for detailed definitions). We will note(,·)a monoid and_Lthe syntactic monoid of a regular languageLand≤_Lthe syntactic order. Let L⊆^* be a regular language. Then L is monotone if and only if there is an order on _L compatible with the product · and included in ≤_L which verifies: ∀ (u,v) ∈^* ×^*, u v h(u) h(v), where h denotes the canonical projection. The proof is left in <Ref>. Let L⊆α^* be a regular language, and ≤_L be its syntactic order. 
The language L is monotone if and only if we have: ∀ (s,s') ∈^2, s ⊆ s' h(s) ≤_L h(s'), where h:^*→ M_L denotes the canonical projection onto the syntactic monoid. For the left-to-right direction let L be a monotone language and s ⊂ s'. Let m and n be two elements of _L such that mh(s)n ∈ h(L). Since h : ^* →_L is surjective, let u ∈ h^-1(m) and v ∈ h^-1(n). Then usv ∈ L since h recognises L. So us'v ∈ L by monotonicity of L. Thus mh(s')n ∈ h(L). We can conlude that h(s) ≤_L h(s'). For the converse direction, suppose that ≤_L verifies the condition of <Ref>. We can remark that ≤_L is compatible with the product of the monoid. Therefore, the conditions of <Ref> are verified by ≤_L. §.§ An algorithm to decide monotonicity We immediately deduce from <Ref> an algorithm for deciding the monotonicity of a regular languageLfrom its syntactic monoid. Indeed, it is sufficient to check for any pair of letters(s,s')such thatsis included ins'whetherm h(s) n ∈ h(L)impliesm h(s') n ∈ h(L)for any pair(m,n)of elements of the syntactic monoid, wherehdenotes the canonical projection onto the syntactic monoid. This algorithm works for any monoid that recognisesLthrough a surjectiveh:^*→ M, not just its syntactic monoid. Indeed, for any monoid, we start by restricting it toh(^*)to guarantee thathis surjective. Then, checking the above implication is equivalent to checking whethers ≤_L s'for all letterssands'such thatsis included ins'. This is summarised in the following proposition: There is an algorithm which takes as input a monoid (,) recognising a regular language L through a morphism h and decides whether L is monotone in O(||^2||^2). It was shown in <cit.> that deciding monotonicity is-complete if the language is given by an NFA, and in P if it is given by a DFA. We give a more precise result for DFA, and give also the complexity for monoid input: Deciding whether a regular language is monotone is in Ł when the input is a monoid while it is - when it is given by a DFA. See Appendix <ref> for the proof. § SEMANTIC AND SYNTACTIC MONOTONICITY The paper <cit.> exhibits a monotone language definable inbut not in. The question then arises as to how simple such a counter-example can be. For instance, can it be taken in specific fragments of, such as. This section presents a few lemmas that might shed some light on the subject, followed by some conjectures. From now on we will writeAthe alphabet. §.§ Refinement of the counter-example in the general case In <cit.>, the counter-example language that is monotone and-definable but not-definable uses three predicatesa,bandcand is as follows:K = ((abc)^*)^↑∪ A^* ⊤ A^*.It uses the following words to find a strategy for Duplicator in_k^+:u_0 = (abc)^n and u_1 = (abbcca)^n abbc,wherenis greater than2^k, andstis just a compact notation for the letter{s,t}for any predicatessandt. This in turns allows to show the failure on Lyndon's preservation theorem on finite structures <cit.>. Our goal in this section is to refine this counter-example to more constrained settings. We hope that by trying to explore the limits of this behaviour, we achieve a better understanding of the discrepancy between monotone and positive. In Section <ref>, we give a smaller fragment ofwhere the counter-example can still be encoded. In Section <ref>, we show that the counter-example can still be expressed with a single unary predicate. This means that it could occur for instance inwhere the specification only talks about one sensor being activated or not. 
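Before turning to these refinements, note that the decision procedure of the Proposition above is simple enough to prototype directly: given a finite monoid recognising L through h, it suffices to test, for every pair of letters s ⊆ s' and every pair (m,n) of monoid elements, that m·h(s)·n ∈ h(L) implies m·h(s')·n ∈ h(L). The sketch below is only an illustration on a made-up two-element monoid (with OR as product, recognising languages over a single unary predicate); it is not taken from the paper.

```python
from itertools import product

def is_monotone(elements, mul, h, accept, letter_order):
    """Check: for all letter pairs s <= s' and all (m, n) in M x M,
    m * h(s) * n in h(L)  implies  m * h(s') * n in h(L)."""
    for s, s_up in letter_order:                  # pairs of letters with s included in s_up
        for m, n in product(elements, repeat=2):
            if mul[mul[m][h[s]]][n] in accept and mul[mul[m][h[s_up]]][n] not in accept:
                return False
    return True

# Toy instance: a single unary predicate, so the letters are "0" (empty) and "1" ({a}),
# ordered by 0 <= 1.  The two-element monoid {0, 1} with OR as its product recognises
# the languages below through the morphism h mapping each letter to itself.
elements = [0, 1]
mul = [[0, 1], [1, 1]]                            # OR multiplication table: mul[x][y] = x or y
h = {"0": 0, "1": 1}
order = [("0", "0"), ("0", "1"), ("1", "1")]

print(is_monotone(elements, mul, h, accept={1}, letter_order=order))  # "some a" -> True
print(is_monotone(elements, mul, h, accept={0}, letter_order=order))  # "no a"   -> False
```

On this toy instance the language “some letter contains a” is reported monotone, while “no letter contains a” is not, as expected; the nested loops match the quadratic bound in the size of the monoid and of the alphabet.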
§.§.§ Using the between predicate First, let us define the “between” binary predicate introduced in <cit.>. <cit.> For any unary predicate a (not only predicates from Σ but also Boolean combinations of them), a also designates a binary predicate, called a between predicate, such that for any word u and any valuation ν, (u,ν) satisfies a(x,y) if and only if there exists an index i strictly between ν(x) and ν(y) such that the letter u_i satisfies a, where u_i is the i-th letter of u. We denote  the set of between predicates and ^+ the set of between predicates associated with the set of positive unary predicates. It is shown in <cit.> that ^2[_0 ∪ ] is strictly less expressive than . There exists a monotone language definable in [_0 ∪ ] which is not definable in [_0 ∪ ^+]. We can use the same words u_0 and u_1 defined above with the following language: K ∪ A^*( ab^2 ∪ bc^2 ∪ ca^2 ∪ abca ∪ bcab ∪ cabc)A^*. Indeed, in <cit.>, it is explained that we need to look for some “anchor position” to know whether a word belongs to K. Such positions resolve the possible ambiguity introduced by double letters of the form ab, which could play two different roles for witnessing membership in ((abc)^*)^↑. Indeed, if ab appears in a word, we cannot tell whether it stands for an a or a b. In contrast, anchor letters have only one possible interpretation. They may be singletons ({a}, {b}, {c}) or consecutive double letters such as abca, which can only be interpreted as bc. Here, we accept any word containing an anchor of the second kind. This means that in the remaining words we will only be interested in singleton anchors. Thus, we need only two variables to locate consecutive anchors, and between predicates to check that the letters between the anchors are double letters. See Appendix <ref> for a more detailed description of a formula. §.§.§ Only one unary predicate Now, let us show another refinement. We can lift K to a counter-example where the set of predicates Σ is reduced to a singleton. As soon as there is at least one unary predicate, there exists a monotone language definable in first-order logic but not by any positive formula. Suppose Σ is reduced to a singleton. Then A is reduced to two letters, which we note 0 and 1, with 1 greater than 0. We will encode each predicate from {a,b,c} and a new letter # (the separator) into A^* as follows: [a] = 001, [b] = 010, [c] = 100, [#] = 100001. Thus, the letter ab will be encoded by [ab] = 011, etc. We will encode the language K as follows: [K] = (([a][#][b][#][c][#])^*)^↑ ∪ A^* 1 (A^4 \ 0^4) 1 A^* ∪ A^* 1^5 A^*. First, we can notice that [K] is monotone. Let us show how the separator [#] is used. Let w be a word over A^*. If w contains a factor of the form 1u1 where u is a word of 4 letters containing the letter 1, then w immediately belongs to [K]. This is easy to check with a formula of our fragment, so we can suppose that w does not contain such a factor. Similarly, we can suppose that 1^5 (corresponding to ⊤ in the original K) is not a factor of w. Then, it is easy to locate a separator, since 100001 will always be a separator factor. Therefore, we can locate the factors coding letters in w. Then we can do the same thing as in <cit.> to find a suitable formula: we have to fix some anchors (factors coding letters whose roles are not ambiguous, as explained in the proof of <Ref>) and check whether they are compatible. For example, suppose w contains a factor of the form [a][#]([ab][#][bc][#][ca][#])^n[bc]. Then [a] is an anchor. The last factor [ca][#][bc] is also an anchor since it can only be interpreted as ([a][#][b])^↑.
Since there are no anchors in between [a] and [bc] we just have to verify their compatibility. Here it is the case: in between the anchors, each [ab] can be interpreted as [b]^↑, [bc] as [c]^↑ and [ca] as [a]^↑. If we were to replace [a] with [c], [c] would still be an anchor but would not be compatible with [bc]. This achieves the description of an -formula for [K]. Furthermore, it is not -definable. Indeed, let k∈ be an arbitrary number of rounds for an ^+-game. We can choose n > 2^k such that Duplicator has a winning strategy for u_0 and u_1 defined as follows: [u_0] = ([a][#][b][#][c][#])^n and [u_1] = ([ab][#][bc][#][ca][#])^n[ab][#], where [ab] = 011, [bc] = 110 and [ca] = 101. We can adapt the strategy for u_0 and u_1 (from <cit.>) to [u_0] and [u_1]. For example, if Spoiler plays the i-th letter of a factor [bc], then it is similar to playing the letter bc in u_1. Thus, if Duplicator answers by playing the j-th b or c in u_0, then he should answer by playing the i-th letter of the j-th [b] or [c] respectively, for any natural integers i and j. In the same way, if Spoiler plays in a separator character, then Duplicator should answer by playing the same letter of the corresponding separator character in the other word according to the strategy. §.§ Stability through monotone closure It has been shown by Thérien and Wilke <cit.> that languages[_<]-definable are exactly those who are bothΣ_2-definable andΠ_2-definable whereΣ_2is the set of-formulas of the form∃ x_1,....,x_n ∀ y_1,...,y_m φ(x_1,...,x_n,y_1,...y_m)whereφdoes not have any quantifier andΠ_2-formulas are negations ofΣ_2-formulas. Hence,Σ_2 ∪Π_2is the set of-formulas in prenex normal form with at most one quantifier alternation. Moreover, Pin and Weil <cit.> showed thatΣ_2describes the unions of languages of the formA_0^*.s_0.A_1^*.s_1.....s_t.A_t+1^*, wheretis a natural integer,s_iare letters fromAandA_iare subalphabets ofA. Even though we do not know yet whethercaptures the set of monotone-definable languages, we can state the following theorem: The set Σ_2^+ ∩Π_2^+ of languages definable by both positive Σ_2-formulas (written Σ_2^+) and positive Π_2-formulas (written Π_2^+) is equal to the set of monotone -definable languages. In order to prove <Ref>, we shall introduce a useful definition: For any language L, we write L^ = ((L^c)^↓)^c the dual closure of L, where L^c stands for the complement of L and L^↓ is the downwards monotone closure of L. It is straightforward to show that L^ is the greatest monotone language included in L for any language L. In particular, a monotone language is both equal to its monotone closure and its dual monotone closure. Now, let us show the following lemma: The set Σ_2^+ captures the set of monotone Σ_2-definable languages. First, it is clear that Σ_2^+ describes monotone Σ_2-definable languages. Next, it is enough to show that the monotone closure of a Σ_2-definable language is Σ_2^+-definable. So let us consider a Σ_2-definable language L. Since a disjunction of Σ_2^+ formulas is equivalent to a Σ_2^+ formula, we can suppose thanks to <cit.> that L is of the form A_0^*.s_0.A_1^*.s_1.....s_t.A_t+1^* as explained above. Therefore, L^↑ is described by the following Σ_2^+-formula: ∃ x_0,...,x_t, ∀ y, x_0 < ... < x_t ⋀_i=0^t s_i(x_i) ⋀_i=0^t+1 ( x_i-1<y<x_i ⇒ A_i(y)), where B(x) means ⋁_b ∈ B b(x) for any subalphabet B, x_-1 < y < x_0 means y<x_0 and x_t < y < x_t+1 means x_t < y. 
This immediately gives the following lemma which uses the same sketch proof: The set Σ_2^- (Σ_2-formulas with negations on all predicates) captures the set of downwards closed Σ_2-definable languages. We can now deduce the following lemma: The set Π_2^+ captures the set of monotone Π_2-definable languages. Then again, we only need to show the difficult direction. Let L be a Π_2-definable language. It is enough to show that L^ is Π_2^+-definable according to <Ref>. By definition of Π_2, the complement L^c of L is Σ_2-definable. Hence, (L^c)^↓ is definable by a Σ_2^--formula φ given by <Ref>. Therefore, φ is a formula from Π_2^+ describing L^. Finally, we can prove <Ref>: Thanks to <cit.>, it is straightforward that any language from Σ_2^+ ∩Π_2^+ is monotone and -definable. Let L be a monotone -definable language. In particular, L belongs to Σ_2 and is monotone. Thus, by <Ref>, L belongs to Σ_2^+. Similarly, L belongs to Π_2^+ by <Ref>. This last result shows how close to capture monotone-definable languagesis. However, it does not seem easy to lift the equivalenceΣ_2 ∩Π_2 = to their positive fragments as we did for the other classical equivalences in <Ref>. Indeed, the proof from <cit.> relies itself on the proof of <cit.> which is mostly semantic while we are dealing with syntactic equivalences. This immediately implies that a counter-example separating-monotone fromcannot be in[_<]as stated in the following corollary: Any monotone language described by an [_<] formula is also described by an formula. If the monotone closureL^↑of a languageLdescribed by a formula of[_<]is in, nothing says on the other hand thatL^↑is described by a formula of[_<], or even of[_0]as the counterexampleL=a^*bc^*de^*shows. The monotone closureL^↑cannot be defined by an[_0]formula. This can be checked using for instance Charles Paperman's online software: <https://paperman.name/semigroup/>. Notice that the software uses the following standard denominations: DA corresponds to[_<], and LDA to[_0]. We give the following conjecture, wherecan stand either for[_<]or for[_0]* A monotone language is definable in if and only if it is definable in . * It is decidable whether a given regular language is definable in Since we can decide whether a language is definable inand whether it is monotone, the first item implies the second one. § PROOF OF LEMMA <REF> Let us show the lemma by induction on the formula. We inductively construct for any formula φ of , a formula φ^(x) of with one free variable that describes the same language. This just amounts to remove the negation case in the classical proof, no additional difficulty here. * =, * ⊤ = ⊤, * a = a(x), * (φψ)(x) = φ^(x) ψ^(x), * (φψ)(x) = φ^(x) ψ^(x), * (φ)(x) = ∃ y, ≻(x,y) φ^(y), * (φψ)(x) = ∃ y, x ≤ y ψ(y) ∀ z, (z < x y ≤ z φ(z)), * (ψφ)(x) = (φψ )(x)∨(∀ y,y<x∨φ(y)). The translation of a formula φ of into a closed formula of is therefore ∃ x, x=0 φ(x), where x=0 is short for ∀ y, y ≥ x. This construction makes it possible to reuse the variables introduced. This is why we can translate the formulas of into . § MONOIDS §.§ Algebraic definitions A semigroup is a pair (𝐒, ) where is an associative internal composition law on the non-empty set 𝐒. We allow ourselves the abuse of language which consists in speaking of the semigroup 𝐒 instead of the semigroup (𝐒, ). A monoid is a pair (, ) which is a semigroup, and which has a neutral element noted 1_ (or simply 1 when there is no ambiguity), i.e. which verifies: ∀ m ∈, 1 m = m 1 = m. Let (, ) and (', ∘) be two monoids. 
An application h defined from a monoid (M,·) into a monoid (M',∘) is a morphism of monoids if: ∀ (m_1,m_2) ∈ M^2, h(m_1 · m_2) = h(m_1) ∘ h(m_2), and h(1_M) = 1_M'. Similarly, if M and M' are just semigroups, h is a morphism if it preserves the semigroup structure. Let (M,·) be a monoid, and ≤ an order on M. We say that ≤ is compatible with · if: ∀ (m,m',n,n') ∈ M^4, (m ≤ n ∧ m' ≤ n') ⇒ m·m' ≤ n·n'. Let L be a language and (M,·) a finite monoid. We say that M recognises L if there exists a monoid morphism h from (A^*,·) into (M,·) such that L = h^-1(h(L)). Let L be a regular language, and u,v ∈ A^* be any two words. We define the equivalence relation of indistinguishability denoted ∼_L on A^*. We write u ∼_L v if: ∀ (x,y) ∈ A^* × A^*, xuy ∈ L ⟺ xvy ∈ L. Similarly, we write u ≤_L v if: ∀ (x,y) ∈ A^* × A^*, xuy ∈ L ⇒ xvy ∈ L. The preorder ≤_L is called the syntactic preorder of L. Let L be a regular language. We define the syntactic monoid of L as M_L = A^*/∼_L. This is effectively a monoid, since ∼_L is compatible with left and right concatenation. Moreover, the syntactic monoid recognises L through the canonical projection. Moreover, we can see that the preorder ≤_L naturally extends to an order compatible with the product on the syntactic monoid. We will use the same notation to designate both the preorder ≤_L and the order induced by ≤_L on M_L, which we will call the syntactic order. §.§ Proof of Lemma 29 The right-to-left direction follows from the definition of monotone languages. Indeed, suppose we have a language L and an order ⪯ on its syntactic monoid that verifies the assumptions. Let u be a word in L, and v a word with u ⊑ v. By hypothesis, we have h(u) ⪯ h(v). Again by hypothesis, the order ⪯ is included in ≤_L, so since h(u) ∈ h(L), we also have h(v) ∈ h(L), and thus v belongs to L. We can conclude that L is monotone. Conversely, let us consider a regular language L, and note h its canonical projection onto its syntactic monoid. Let → be the binary relation induced by ⊑ on M_L, i.e. such that m → n if there are words u and v such that m = h(u), n = h(v) and u ⊑ v. The transitive closure of →, denoted →^*, is then an order relation. First of all, it is clearly reflexive and transitive. Then, to show antisymmetry, it is sufficient to show that →^* is included in ≤_L. Let m and n be two elements of M_L such that m →^* n. By definition, there are m_1, m_2, ..., m_p, p elements of M_L, such that m → m_1 → m_2 → ... → m_p → n, where p is a natural number. We then have u_0, u_1, u_1', u_2, u_2', ..., u_p, u_p', and u_p+1 such that m = h(u_0), m_1 = h(u_1) = h(u_1'), m_2 = h(u_2) = h(u_2'), ..., m_p = h(u_p) = h(u_p') and n = h(u_p+1), and u_0 ⊑ u_1, u_1' ⊑ u_2, u_2' ⊑ u_3, ..., u_p' ⊑ u_p+1. Now let x and y be two words (they constitute a context). By monotonicity of L, if xu_0y belongs to L, then xu_1y belongs to L. Then, since h(u_1) = h(u_1'), if xu_1y belongs to L, then so does xu_1'y. We immediately deduce that if xu_0y belongs to L, then so does xu_p+1y. This proves that →^* is included in ≤_L. So →^* is an order, which we note ⪯. Let us check its compatibility with the operation of the monoid. Let m, m', n and n' be elements of M_L such that m ⪯ n and m' ⪯ n'. First, let us assume m → n and m' → n'. We then have u, u', v and v' representing m, m', n and n' respectively, such that u ⊑ v and u' ⊑ v'. So we have uu' ⊑ vv' and thus mm' ⪯ nn'. Now, if we only have m →^* n and m' →^* n', then we have finite sequences (m_i)_i=1^p and (m_i')_i=1^p, which we can assume to be of the same length p by reflexivity of →, such that m → m_1 → ... → m_p → n and m' → m_1' → ... → m_p' → n'. So we have mm' ⪯ m_1 m_1', but also m_1 m_1' ⪯ m_2 m_2', ..., m_p m_p' ⪯ nn'.
We then obtain the inequality mm' ⪯ nn' by transitivity. Finally, it is clear that if u ⊑ v then h(u) ⪯ h(v). The relation ⪯ therefore satisfies the constraints imposed. §.§ Proof of Proposition 32 First, in the algorithm from <Ref>, at any given time, we only need to code two letters from A and two elements from the monoid M. So we can code S and S' with |Σ| bits and increment them through the loop in order to go through the whole alphabet. For example, if Σ = {a,b,c} then a is coded by 001, {a,b} by 011, and so on. In the same way, we only need 2⌈log_2(|M|)⌉ bits to code (m,n). Using lookup tables for applying the function h, the product ·, and testing membership in F, all operations can be done in Ł. Thus, the algorithm from <Ref> is in Ł. To decide whether a DFA 𝒜 describes a monotone language, we can compute the NFA 𝒜^↑ by adding, to each transition (q_0,a,q_1) of 𝒜, every transition (q_0,b,q_1) with b greater than a. Thus, 𝒜^↑ describes the monotone closure of the language recognised by 𝒜. Then, 𝒜 recognises a monotone language if and only if there is no path from an initial to a final state in the product automaton 𝒜̄ × 𝒜^↑, where 𝒜̄ is the complement of 𝒜, obtained by simply switching accepting and non-accepting states. As NFA emptiness is in NL, DFA monotonicity is in NL as well. Now, let us suppose we have an algorithm which takes a DFA as input and returns whether it recognises a monotone language. Notice that the DFA emptiness problem is still NL-hard when restricted to automata not accepting the empty word ε. We will use this variant to perform a reduction to DFA monotonicity. Suppose we are given a DFA 𝒜 on an alphabet A which does not accept ε. We build an automaton 𝒜' on A ∪ {⊤} by adding the letter ⊤ to A in 𝒜, but without any ⊤-labelled transition. Now, let us equip A ∪ {⊤} with an order ≤ such that a ≤ ⊤ for any letter a of A. Then the new automaton 𝒜' recognises a monotone language if and only if 𝒜 recognises the empty language. Indeed, suppose we have a word u of length n accepted by 𝒜. Then 𝒜' would accept u but not ⊤^n, which is bigger than u. Reciprocally, if 𝒜 recognises the empty language then so does 𝒜', and the empty language is a monotone language. Thus, the monotonicity problem is NL-complete when the input is a DFA. § AN FO2[<,S,BE]-FORMULA FOR THE COUNTER-EXAMPLE Let us give a formula for the counter-example from <ref>. Let us notice that the successor predicate is definable in [_< ∪ ], so results from <cit.> about the fragment [<,] apply to [_0 ∪ ] as well. So it is easy to describe A^*( ⊤ ∪ ab^2 ∪ bc^2 ∪ ca^2 ∪ abca ∪ bcab ∪ cabc )A^* and to state that factors of length 3 are in (abc)^↑. Now, for any atomic predicates s and t (i.e. s,t ∈ {a,b,c}), let us pose: φ_s,t = ∀ x, ∀ y, ( s(x) ∧ t(y) ∧ x<y ∧ ⋀_d ∈Σ d(x,y) ) ⇒ ψ_s,t(x,y), where ψ_s,t(x,y) is a formula stating that the two anchors are compatible, i.e. either they both use the “upper component” of all the double letters between them, or they both use the “bottom component”. Recall that ⋀_d ∈Σ d(x,y) means that there is no singleton letter between x and y. For example, ψ_a,b(x,y) is the disjunction of the following formulas:
bc(x+1) ∧ ab(y-1),
ab(x+1) ∧ ca(y-1),
x+1 = y.
Indeed, the first case corresponds to using the upper component of bc and ab: the anchor a in position x is followed by the upper b in position x+1, which should be consistent with the upper a in position y-1 followed by the anchor b in position y, the factor from x+1 to y-1 being of the form (bccaab)^+. Similarly, the second case corresponds to the bottom component.
The last case corresponds to anchors directly following each other, without an intermediary factor of double letters. This case appears only for(s,t)∈{(a,b),(b,c),(c,a)}Now using the conjunction of all formulasφ_s,twheresandtare atomic predicatesa,b,c, we build a formula for the language of <ref>. § GAMES Erhenfeucht-Fraïssé games and their variants are traditionally used to prove negative expressivity results offragments. This is why we were interested in Erhenfeucht-Fraïssé games matching fragments of. Although we did not manage to use them in the present work, we include here a variant that could be suited for provinginexpressibility results. We note _k^n+[](u_0,u_1), the Ehrenfeucht-Fraïssé game associated with ^n+[] at k turns on the pair of words (u_0,u_1). When there is no ambiguity, we simply note _k^n+(u_0,u_1). In _k^n+(u_0,u_1), two players, Spoiler and Duplicator, play against each other on the word pair (u_0,u_1) in a finite number k of rounds. Spoiler and Duplicator will use tokens numbered 1, 2, ..., n to play on the positions of the words u_0 and u_1. On each turn, Spoiler begins. He chooses δ from {0,1} and i from [[1,n]] and moves (or places, if it has not already been placed) the i numbered token onto a position of the word u_δ. Duplicator must then do the same on the word u_1-δ with the constraint of respecting binary predicates induced by the placement of the tokens, and only in one direction for unary predicates. More precisely, if _0 and _1 are the valuations that to each token (considered here as variables) associates the position where it is placed in u_0 and u_1 respectively, then * for any binary predicate (̱x,y), (u_0,_0)(̱x,y) if and only if (u_1,_1)(̱x,y), * for any unary predicate a(x) in Σ, if (u_0,_0) a(x) then (u_1,_1) a(x). If Duplicator cannot meet the constraint, he loses and Spoiler wins. In particular, for any i∈[[1,n]], if the letter s_0 indicated by the token i on the word u_0 is not included in the letter s_1 indicated by the token i on the word u_1, then Spoiler wins. If after k rounds, Spoiler has not won, then Duplicator is declared the winner. Let L be a language and n a natural number. The language L is definable by a formula of ^n+[] if and only if there exists a natural number k such that, for any pair of words (u_0,u_1) where u_0 belongs to L but u_1 does not, Spoiler has a winning strategy in _k^n+[](u_0,u_1). We generalise the proof from <cit.>, which treats the case of , using a classical construction for with a bounded number of variables. Let n be a natural number. Let us introduce the concept of initial configuration. For two words u_0 and u_1 of lengths l_0 and l_1 respectively, and two functions of 0, 1, 2, ..., or n variables among x_1, ... x_n, _0 and _1 with values in [[0,l_0-1]] and [[0,l_1-1] ] respectively, the game _k^n+[](u_0,u_1) has initial configuration (_0,_1) if token i is placed in position _0(x_i) on word u_0, when _0(x_i) is defined, for any integer i from [ [1,n]], and similarly with u_1 for the valuation _1. We then claim that for any natural number k and any formula φ of ^n+[] (possibly with free variables) of quantification rank at most k, and for all models (u_0,_0) and (u_1,_1), Duplicator wins the game _k^n+[](u_0,u_1) with initial configuration (_0,_1), if and only if: u_0,_0 φ u_1,_1 φ. Indeed, starting from the induction from the article <cit.>, we have to adapt the base case to the set of binary predicates considered. 
The proof is then similar: each element of the set of binary predicates can impose a constraint in ^n+[] which is reflected in the constraint on the positions of the tokens. Then, in the induction, we need to modify the valuation update. Indeed, as the number of variables (and therefore of tokens) is limited to n, when a variable x already in use is encountered, we do not need to add a variable to the valuation ν being constructed, but rather to modify the value taken by ν in x, so as to construct a new valuation ν'.
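As a complement to the appendices, the DFA-based test used in the proof of Proposition 32 above can also be sketched in a few lines of Python: we explore the product of the input DFA (whose non-accepting states play the role of its complement) with the upward-closure NFA, and look for a reachable witness of a word that lies in the monotone closure but not in the language. The automaton encoding below (a transition dictionary for a complete DFA) is our own illustrative choice.

from collections import deque

def dfa_is_monotone(alphabet, leq, delta, init, accepting):
    # delta: dict (state, letter) -> state of a complete DFA
    # leq:   function (a, b) -> bool, the order on letters
    start = (init, init)
    seen = {start}
    queue = deque([start])
    while queue:
        q, p = queue.popleft()  # q: DFA run, p: one run of the upward-closure NFA
        if q not in accepting and p in accepting:
            return False        # a word of the monotone closure is rejected by the DFA
        for b in alphabet:
            q2 = delta[(q, b)]
            for a in alphabet:
                if leq(a, b):   # the closure NFA may read any smaller letter
                    nxt = (q2, delta[(p, a)])
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    return True

Reachability in this pair product is what places the problem in NL, as argued in the proof above.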
http://arxiv.org/abs/2406.17771v1
20240625175605
Violation of $γ$ in Brans-Dicke gravity
[ "Hoang Ky Nguyen", "Bertrand Chauvineau" ]
gr-qc
[ "gr-qc", "astro-ph.SR", "hep-th" ]
[ ]hoang.nguyen@ubbcluj.ro Department of Physics, Babeş-Bolyai University, Cluj-Napoca 400084, Romania [ ]bertrand.chauvineau@oca.eu Université Côte d'Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, Nice cedex 4, France § ABSTRACT 2pt The Brans Class I solution in Brans-Dicke gravity is a staple in the study of gravitational theories beyond General Relativity. Discovered in 1961, it describes the exterior vacuum of a spherical Brans-Dicke star and is characterized by two adjustable parameters. Surprisingly, the relationship between these parameters and the properties of the star has not been rigorously established. In this Proceeding, we bridge this gap by deriving the complete exterior solution of Brans Class I, expressed in terms of the total energy and total pressure of the spherisymmetric gravity source. The solution allows for the exact derivation of all post-Newtonian parameters in Brans-Dicke gravity for far field regions of a spherical source. Particularly for the γ parameter, instead of the conventional result γ_ PPN=ω+1/ω+2, we obtain the analytical expression γ_ exact=ω+1+(ω+2) Θ/ω+2+(ω+1) Θ where Θ is the ratio of the total pressure P_∥^*+2P_⊥^* and total energy E^* contained within the mass source. Our non-perturbative γ formula is valid for all field strengths and types of matter comprising the mass source. Consequently, observational constraints on γ thus set joint bounds on ω and , with the latter representing a global characteristic of the mass source. More broadly, our formula highlights the importance of pressure (when ≠0) in spherical Brans-Dicke stars, and potentially in stars within other modified theories of gravitation. Violation of γ in Brans-Dicke gravity Bertrand Chauvineau June 25, 2024 ===================================== Background—Brans–Dicke gravity is the second most studied theory of gravitation besides General Relativity. It represents one of the simplest extensions of gravitational theory beyond GR <cit.>. It is characterized by an additional dynamical scalar field ϕ which, in the original vision of Brans and Dicke in 1961, acts like the inverse of a variable Newton `constant' G. The scalar field has a kinetic term, governed by a (Brans-Dicke) parameter ω in the following gravitation action S=1/16π∫ d^4x√(-g)[Φ ℛ-ω/Φg^μν∂_μΦ∂_νΦ] In the limit of infinite value for ω, the kinetic term is generally said to be `frozen', rendering Φ being a constant value everywhere. In this limit, if the field ϕ approaches its (non-zero) constant value in the rate 𝒪(1/ω), the term ω/Φg^μν∂_μΦ∂_νΦ would approach zero at the rate 𝒪(1/ω) and hence become negligible compared with the term Φ ℛ, effectively recovering the classic Einstein–Hilbert action. [It has been shown that for non-static and/or in the presence of singularity, the rate of convergence is 𝒪(1/√(ω)). This topic is beyond the scope of this Proceeding however, as we shall only consider a static and regular case here. For more information, we refer the reader to our recent work <cit.>, where we also reviewed the literature on the 𝒪(1/√(ω)) anomaly.]4pt Together with its introduction <cit.>, Brans also identified four classes of exact solutions in the static spherically symmetric (SSS) setup <cit.>. The derivation of the Brans solutions was explicitly carried out by Bronnikov in 1973 <cit.>. Of the four classes, only the Brans Class I is physically meaningful, however. 
It can recover the Schwarzschild solution in its parameter space.4pt For comparison with observations or experiments, Brans derived the Robertson (or Eddington-Robertson- Schiff) β and γ post-Newtonian (PN) parameters based on his Class I solution: β_ PPN =1 γ_ PPN =ω+1/ω+2 The γ parameter is important as it governs the amount of space-curvature produced by a body at rest and can be directly measured via the detection of light deflection and the Shapiro time delay. The parametrized post-Newtonian (PPN) γ formula recovers the result γ_ GR=1 known for GR in the limit of infinite ω, in which the BD scalar field becomes constant everywhere. Current bounds using Solar System observations set the magnitude of ω to exceed 40,000 <cit.>.4pt We should emphasize that the “conventional” results (<ref>) and (<ref>) were derived under the assumption of zero pressure in the gravity source. It should be noted that these formulae can also be deduced directly from the PPN formalism for the Brans–Dicke action, without resorting to the Brans Class I solution <cit.>. The PPN derivation relies on two crucial approximations: (i) weak field and (ii) slow motions. Regarding the latter approximation, an often under-emphasized point is that not only must the stars be in slow motion, but the microscopic constituents that comprise the stars must also be in slow motion. This translates to the requirement that the matter inside the stars exert low pressure, characterizing them as “Newtonian” stars. 4pt The purpose of our paper is two-fold. Firstly, the analytical form for the exterior vacuum contains two adjustable parameters. The issue in determining them from the energy and pressure profiles inside the mass source has not been rigorously addressed in the literature. Establishing their relationships with the mass source would typically require the full machinery of the Tolman–Oppenheimer–Volkoff (TOV) equations tailored for Brans–Dicke gravity <cit.>. Moreover, solving the TOV equations, even in the simpler theory of GR, generally requires numerical methods except for a few isolated, unrealistic cases such as incompressible fluids. Therefore, at first glance, deriving a concrete expression for these relationships might seem elusive. Surprisingly, as we shall show in this Proceeding, this view is overly pessimistic. It turns out that the full machinery of the TOV equation is not necessary. Instead, only a subset of the field equation and the scalar equation of BD will be needed. This is because only two equations are required to fix the two free parameters of the exterior vacuum. We shall present a rigorous yet parsimonious derivation, which only became available through our recent publication <cit.>.4pt Secondly, the complete solution enables the derivation of any PN parameters applicable for far-field regions in static spherical Brans-Dicke stars. As we shall show in this Proceeding, the derivation is non-perturbative and avoids the two PPN approximations requiring the weak field and the low pressure mentioned above. 4pt The material presented in this Proceeding was developed during the preparation of our two recent papers <cit.>. For a more detailed exposition of the conceptualization and technical points, we refer the reader to these papers.12pt The field equations and the energy-momentum tensor—It is well documented <cit.> that upon the Weyl mapping {g̃_μν:=Φ g_μν, Φ̃:=lnΦ}, the gravitational sector of the BD action can be brought to the Einstein frame as ∫ d^4x√(-g̃)/16π[ℛ̃-(ω+3/2)∇̃^μΦ̃∇̃_μΦ̃]. 
The Einstein-frame BD scalar field Φ̃ has a kinetic term with a signum determined by (ω+3/2). Unless stated otherwise, we shall restrict our consideration to the normal (“non-phantom”) case of ω>-3/2, where the kinetic energy for Φ̃ is positive.8pt The field equations are R_μν-ω/Φ^2∂_μΦ∂_νΦ-1/Φ∂_μ∂_νΦ+Γ_μν^λ∂_λlnΦ =8π/Φ(T_μν-ω+1/2ω+3T g_μν) ∂_μ(√(-g) g^μν∂_νΦ)=8π/2ω+3T√(-g) In the isotropic coordinate system which is static and spherically symmetric, the metric can be written as ds^2=-A(r)dt^2+B(r)[dr^2+r^2(dθ^2+sin^2θ dφ^2)] It is straightforward to verify, from Eqs. (<ref>)–(<ref>), that the most general form for the energy-momentum tensor (EMT) in this setup is T_μ^ν =diag(-ϵ, p_‖, p_, p_) where the energy density ϵ, the radial pressure p_∥ and the tangential pressure p_⊥ are functions of r. Note that the EMT is anisotropic if p_∥≠ p_⊥. The trace of the EMT is T=-ϵ+p_∥+2p_⊥ . 12pt The Brans Class I vacuum solution outside a star—It is known that the scalar–metric for the vacuum is the Brans Class I solution (which satisfies Eqs. (<ref>)–(<ref>) for T_μν=0) <cit.>. In the isotropic coordinate system (<ref>), the solution reads <cit.> {[ A=(r-k/r+k)^2/λ; B=(1+k/r)^4(r-k/r+k)^2-2Λ+1/λ; Φ=(r-k/r+k)^Λ/λ ]. for r⩾ r_* where r_∗ is the star's radius, and λ^2=(Λ+1)^2-Λ(1-Λ/2ω) Since λ and Λ are linked by (<ref>), this solution involves two independent parameters, which one chooses to be (k, Λ).12pt The field equations in the interior—For the region r⩽ r_*, substituting metric (<ref>) and the BD field Φ(r) into Eq. (<ref>) and the 00-component of Eq. (<ref>) and using the EMT in Eq. (<ref>), the functions A(r), B(r), Φ(r) satisfy the 2 following ordinary differential equations (ODEs): (r^2√(AB)Φ^')^' =8π/2ω+3[-ϵ+p_‖+2p_]r^2√(AB^3) (r^2Φ√(B/A)A^')^' =16π[ ϵ+ω+1/2ω+3(-ϵ+p_‖+2p_)]r^2√(AB^3) These equations offer the advantage of having both their left hand sides in exact derivative forms. Let us integrate Eqs. (<ref>) and (<ref>) from the star's center, viz. r=0, to a coordinate r>r_∗. The (A,B,Φ) functions are then given by (<ref>) at r. For r>r_*, both r^2√(AB)Φ^' and r^2Φ√(B/A)A^' terms that enter the left hand sides of (<ref>) and (<ref>) are r-independent, since the right hand sides of these equations vanish in the exterior vacuum. On the other hand, regularity conditions inside the star impose Φ^'(0)=A^'(0)=0 (i.e. no conic singularity) and finite values of the fields themselves. The calculation yields k Λ/λ =4π/2ω+3∫_0^r_∗dr r^2√(AB^3)[-ϵ+p_‖+2p_] and k/λ =4π/2ω+3∫_0^r_∗dr r^2√(AB^3)× [(ω+2)ϵ+(ω+1)(p_‖+2p_)] . Let us note that r^2√(AB^3) is the square root of the determinant of the metric, up to the sinθ term. (Accordingly, the integrals in the right hand sides of Eqs. (<ref>) and (<ref>) are invariant through radial coordinate transformations, since the combination r^2√(AB^3)sinθ is equivalent to √(-g).) We then can define the energy's and pressures' integrals by E^∗ = 4π∫_0^r_∗dr r^2√(AB^3) ϵ P_‖^∗ = 4π∫_0^r_∗dr r^2√(AB^3) p_∥ P_^∗ = 4π∫_0^r_∗dr r^2√(AB^3) p_⊥ . Inserting in (<ref>) and (<ref>), we obtain k/λ=E^*[ω+2/2ω+3+ω+1/2ω+3 Θ] and Λ=Θ-1/ω+2+(ω+1) Θ in which the dimensionless parameter Θ is defined as Θ:=P_∥^*+2 P_⊥^*/E^* . Together with (<ref>) and (<ref>), these expressions provide a complete expression for the exterior spacetime and scalar field of a spherical BD star. To the best of our knowledge, this prescription was not made explicitly documented in the literature, until our recent works <cit.>.4pt For a perfect fluid, p_‖=p_≡ p, thence P_‖^∗=P_^∗≡ P. 
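As a quick numerical illustration of the relations just derived, the short Python snippet below computes Θ, Λ, λ and k from the integrated source quantities E^*, P_∥^* and P_⊥^*, together with γ = 1+Λ, anticipating the post-Newtonian discussion that follows. It is only a sketch of the algebra (in geometric units), reading the relation between λ and Λ as λ^2 = (Λ+1)^2 - Λ(1 - ωΛ/2); the function and variable names are our own.

import math

def brans_exterior_parameters(omega, E_star, P_par_star, P_perp_star):
    # Theta: ratio of total pressure to total energy of the source
    theta = (P_par_star + 2.0 * P_perp_star) / E_star
    # Lambda and lambda fix the Brans Class I exterior, per the expressions above
    Lam = (theta - 1.0) / (omega + 2.0 + (omega + 1.0) * theta)
    lam = math.sqrt((Lam + 1.0) ** 2 - Lam * (1.0 - omega * Lam / 2.0))
    # k/lambda from the integrated 00-equation, hence k itself
    k = lam * E_star * ((omega + 2.0) + (omega + 1.0) * theta) / (2.0 * omega + 3.0)
    gamma_exact = 1.0 + Lam  # equals (omega+1+(omega+2)Theta)/(omega+2+(omega+1)Theta)
    return theta, Lam, lam, k, gamma_exact

# a pressureless source (Theta = 0) recovers gamma_PPN = (omega+1)/(omega+2)
print(brans_exterior_parameters(omega=10.0, E_star=1.0, P_par_star=0.0, P_perp_star=0.0))

For ω = 10 and a pressureless source this returns γ = 11/12 ≈ 0.917, the familiar PPN value, while letting Θ → 1 drives Λ → 0 and γ → 1.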
The equations (<ref>), (<ref>) and (<ref>) fully determine the exterior solution (<ref>) once the integrals (<ref>)–(<ref>) are known, with these integrals being fixed by the stellar internal structure model. This explicitly determines the particles' motion outside the star, in both the remote regions and the regions close to the star. The (β, γ, δ) PN parameters—In remote spatial regions, a static spherically symmetric metric in isotropic coordinates can be expanded as <cit.>: ds^2 =-(1-2 M/r+2β M^2/r^2+…)dt^2 +(1+2γ M/r+3/2δ M^2/r^2+…)(dr^2+r^2dΩ^2) in which β and γ are the Robertson (or Eddington-Robertson-Schiff) parameters, whereas δ is the second-order PN parameter (for both light and planetary-like motions). It is straightforward to verify that the Schwarzschild metric yields β_ Schwd=γ_ Schwd=δ_ Schwd=1 The metric in Eq. (<ref>) can be re-expressed in the expansion form ds^2 =-(1-4/λk/r+8/λ^2k^2/r^2+…)dt^2+(1+4/λ(1+Λ)k/r+2/λ^2(4(1+Λ)^2-λ^2)k^2/r^2+…)(dr^2+r^2dΩ^2) Comparing Eq. (<ref>) against Eq. (<ref>) and setting M=2 k/λ we obtain β_ exact = 1 γ_ exact = 1+Λ δ_ exact = 1/3(4(1+Λ)^2-λ^2) where we have used the subscript “exact” for emphasis. Note that Λ directly measures the deviation of the γ parameter from GR (γ_GR=1). From Eq. (<ref>), Λ depends on both ω and Θ. Finally, we arrive at γ_ exact = ω+1+(ω+2) Θ/ω+2+(ω+1) Θ which can also be conveniently recast as γ_ exact = γ_ PPN+Θ/1+γ_ PPN Θ by recalling that γ_ PPN=ω+1/ω+2. To our knowledge, the closed-form expression (<ref>) for γ was absent in the literature until our recent works <cit.>. Regarding δ: δ_ exact =1/[ω+2+(ω+1) Θ]^2[(ω^2+3/2ω+1/3) +(2ω^2+19/3ω+13/3)Θ+(ω^2+25/6ω+13/3)Θ^2] Figure <ref> shows contour plots of γ_ exact and δ_ exact as functions of γ_ PPN (i.e., ω+1/ω+2) and Θ. In addition, with the aid of Eqs. (<ref>) and (<ref>), Eq. (<ref>) produces the active gravitational mass M =2ω+4/2ω+3 E^*+2ω+2/2ω+3 (P_∥^*+2P_⊥^*) where the contribution of pressure to the active gravitational mass is evident <cit.>. Degeneracy at ultra-high pressure—For Θ→1^-, both γ and δ go to 1, their GR counterpart values. Generally speaking, for Θ→1^-, since Λ→0 and λ→1 regardless of ω (provided that ω∈(-3/2,+∞)), the value of k approaches k→ω+2/2ω+3 E^*+ω+1/2ω+3 (P_∥^*+2P_⊥^*) The ω-dependence is thus absorbed into k, and the Brans Class I solution degenerates to the Schwarzschild solution A=(r-k/r+k)^2, B=(1+k/r)^4, Φ=1, for r⩾ r_*. Therefore, ultra-relativistic Brans-Dicke stars are indistinguishable from their GR counterparts, as far as their exterior vacua are concerned. This fact can be explained by the following observation: for ultra-relativistic matter, the trace of the EMT vanishes, per Eq. (<ref>). The scalar equation (<ref>) then simplifies to □ Φ=0 everywhere. Coupled with the regularity condition at the star center, this ensures a constant Φ throughout the spacetime, which is now described by the Schwarzschild solution. Consequently, the scalar degree of freedom in BD gravity is suppressed in the ultra-relativistic limit. This prompts the intriguing question of whether Birkhoff's theorem is fully restored in this limit. Discussions—Formulae (<ref>) and (<ref>) are the essential outcome of this Proceeding: * Non-perturbative approach: Our derivation is non-perturbative in nature. It makes use of the integrability of the 00-component of the field equation (<ref>), along with the scalar field equation (<ref>).
* Parsimony: Our derivation relies solely on the scalar field equation and the 00-component of the field equation, without the need for the full set of equations, specifically the 11- and 22- components of the field equation [Note that establishing the functional form of the Brans Class I solution still requires the full set of equations.]. The additional physical assumptions employed are the regularity at the star's center and the existence of the star's surface separating the interior and the exterior. * Universality of results: The final formulae, (<ref>) and (<ref>), hold for all field strengths and all types of matter (whether convective or non-convective, for example). We do not assume the matter comprising the stars to be a perfect fluid or isentropic. * Higher-derivative characteristics: In contrast to the one-parameter Schwarzschild metric, the Brans Class I solution depends on two parameters, i.e. the solution is not only defined by its gravitational mass, but also by a scalar mass besides the gravitational one <cit.>. The exterior BD vacuum should reflect the internal structure and composition of the star. This expectation is confirmed in Eqs. (<ref>) and (<ref>), highlighting the role of the parameter Θ. * Role of pressure: Figure <ref> shows contour plots of γ_ exact and δ_ exact as functions of ω+1/ω+2 and Θ. There are three interesting observations: * An ultra-relativistic limit, ≃1^-, would render γ_ exact≃1, regardless of ω. * For Newtonian stars, i.e. low pressure (≈0), the PPN result is a good approximation regardless of the field strength. * A joint measurement of γ and δ in principle can determine ω and Θ. However, due to the non-linear relationships in (<ref>) and (<ref>), for a given pair of {γ, δ}, multiple solutions for {ω, Θ} can exist. A measurement of a third PN parameter (apart from β) in principle can resolve the multiplicity problem. Conclusion—We have derived the exact analytical formulae, (<ref>) and (<ref>), for the PN parameters γ and δ for spherical mass sources in BD gravity. The derivation relies on the integrability of the 00-component of the field equation, rendering it non-perturbative and applicable for any field strength and type of matter constituting the source. The conventional PPN result for BD gravity γ_ PPN=ω+1/ω+2 lacks dependence on the physical features of the mass source. In the light of our exact results, the γ_ PPN should be regarded as an approximation for stars in modified gravity under low-pressure conditions. Our findings expose the limitations of the PPN formalism, particularly in scenarios characterized by high star pressure. It is reasonable to expect that the role of pressure may extend to other modified theories of gravitation.12pt Acknowledgments—BC thanks Antoine Strugarek for helpful correspondences. HKN thanks Mustapha Azreg-Aïnou, Valerio Faraoni, Tiberiu Harko, Viktor Toth, and the participants of the XII Bolyai–Gauss–Lobachevsky Conference (BGL-2024): Non-Euclidean Geometry in Modern Physics and Mathematics (Budapest, May 1-3, 2024) for valuable commentaries. 10 BransDicke-1961C. H. Brans and R. Dicke, Mach's Principle and a Relativistic Theory of Gravitation, Phys. Rev. 124, 925 (1961) Brans-1962C. H. Brans, Mach's Principle and a relativistic theory of gravitation II, Phys. Rev. 125, 2194 (1962) Nguyen-2023-BDKGH. K. Nguyen and B. Chauvineau, 𝒪(1/√(ω)) anomaly in Brans-Dicke gravity with trace-carrying matter, https://arxiv.org/abs/2402.14076arXiv:2402.14076 [gr-qc] Bronnikov-1973K. A. 
Bronnikov, Scalar-tensor theory and scalar charge, Acta Phys. Polon. B 4, 251 (1973), http://s3.cern.ch/inspire-prod-files-1/1a28c080a733a1b776867157a30efd12Link to pdf Will1C. M. Will, Theory and Experiment in Gravitational Physics, second edition, Cambridge University Press, Cambridge, 2018 Will2C. M. Will, The Confrontation between General Relativity and Experiment, Living Rev. Relativ. 17, 4 (2014), http://doi.org/10.12942/lrr-2014-4doi.org/10.12942/lrr-2014-4 WeinbergS. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, John Wiley & Sons, New York, 1972 Nguyen-compact-star-2H. K. Nguyen and B. Chauvineau, An optimal gauge for Tolman-Oppenheimer-Volkoff equation in Brans-Dicke gravity (in preparation) 2024-gamma-PLBB. Chauvineau and H.K. Nguyen, The complete exterior spacetime of spherical Brans-Dicke stars, Phys. Lett. B 855, 138803 (2024), https://arxiv.org/abs/2404.13887arXiv:2404.13887 [gr-qc] 2024-gamma-EPJCH. K. Nguyen and B. Chauvineau, Impact of Star Pressure on g in Modified Gravity beyond Post-Newtonian Approach, https://arxiv.org/abs/2404.00094arXiv:2404.00094 [gr-qc] Baez-2005J. C. Baez and E. F. Bunn, The Meaning of Einstein's Equation, Amer. Jour. Phys. 73, 644 (2005), https://arxiv.org/abs/gr-qc/0103044arXiv:gr-qc/0103044 Ehlers-2005J. Ehlers, I. Ozsvath, E. L. Schucking, and Y. Shang, Pressure as a Source of Gravity, Phys. Rev. D 72, 124003 (2005), https://arxiv.org/abs/gr-qc/0510041arXiv:gr-qc/0510041
http://arxiv.org/abs/2406.18312v1
20240626125137
AI-native Memory: A Pathway from LLMs Towards AGI
[ "Jingbo Shang", "Zai Zheng", "Xiang Ying", "Felix Tao", "Mindverse Team" ]
cs.CL
[ "cs.CL", "cs.AI" ]
AI-native Memory: A Pathway from LLMs Towards AGI Jingbo Shang, Zai Zheng, Xiang Ying, Felix Tao, Mindverse Team ==================================================================================== § ABSTRACT Large language models (LLMs) have shown the world sparks of artificial general intelligence (AGI). One opinion, especially from some startups working on LLMs, argues that an LLM with nearly unlimited context length can realize AGI. However, they might be too optimistic about the long-context capability of (existing) LLMs – (1) recent literature has shown that their effective context length is significantly smaller than their claimed context length; and (2) our reasoning-in-a-haystack experiments further demonstrate that simultaneously finding the relevant information from a long context and conducting (simple) reasoning is nearly impossible. In this paper, we envision a pathway from LLMs to AGI through the integration of memory. We believe that AGI should be a system where LLMs serve as core processors. In addition to raw data, the memory in this system would store a large number of important conclusions derived from reasoning processes. Compared with retrieval-augmented generation (RAG), which merely processes raw data, this approach not only connects semantically related information more closely, but also simplifies complex inferences at query time. As an intermediate stage, the memory will likely be in the form of natural language descriptions, which can be directly consumed by users too. Ultimately, every agent/person should have its own large personal model, a deep neural network model (thus AI-native) that parameterizes and compresses all types of memory, even the ones that cannot be described in natural language. Finally, we discuss the significant potential of AI-native memory as the transformative infrastructure for (proactive) engagement, personalization, distribution, and social interaction in the AGI era, as well as the incurred privacy and security challenges, with preliminary solutions. § INTRODUCTION Large language models (LLMs), pre-trained on massive text corpora and instruction-tuned on expert annotations (and also via reinforcement learning with human feedback), such as the GPT series from OpenAI <cit.>, the Gemini series from Google <cit.>, the Claude series from Anthropic <cit.>, the Llama series from Meta <cit.>, and the Mixtral series from Mistral <cit.>, have demonstrated significant potential as general task solvers, going beyond language modeling itself. These models can follow complicated human instructions and perform multi-step reasoning when necessary <cit.>. Therefore, it is a consensus that LLMs are becoming fundamental building blocks towards artificial general intelligence (AGI) <cit.>. Long-context processing capability is vital for LLMs, and is therefore one of the most popular directions in LLM research. For example, the original GPT-4 has a context window of 32K tokens <cit.>, and the most recent GPT-4-turbo and GPT-4o models can process 128K tokens; Gemini 1.5 claimed a context window of 1M or 10M tokens <cit.>. Researchers in academia have also explored length extrapolation <cit.> and ways to combat position bias <cit.>, with some works claiming “unlimited” context lengths.
Following this trend, an increasing number of people, especially from startups working on LLMs, argue that an LLM with super long or even unlimited context can realize AGI by putting all raw data into the context and relying entirely on the LLM to complete all necessary reasoning in one step to get the final result for each query. While nowadays LLMs can take super or even infinitely long inputs and produce an output without throwing a runtime error, it is still unknown whether these models can appropriately utilize the provided long contexts. We argue that similar to a human's cognitive load <cit.>, the maximum amount of content LLMs are capable of handling might be inherently limited depending on the task they are performing. However, most previous evaluations for long-context models are based on perplexity or a simple synthetic retrieval task while overlooking the effectiveness on more complex tasks. According to a recent benchmark following more complicated tasks <cit.>, most, if not all, LLMs over-claimed their context lengths. For example, GPT-4 models, which claim to have a context of 128K, only has an effective context of 64K; ChatGLM <cit.>, another model claimed to have a context of 128K, ends up with only 4K. We further develop reasoning-in-a-haystack evaluations following the LLM-as-personal-assistant scenarios and demonstrate that simultaneously finding the relevant information from a long context and conducting reasoning is nearly impossible. We believe that AGI should be a system, where LLMs are more like Processors and LLM's context is like a RAM. Using Processor and RAM alone is not even enough for a computer, nor AGI. To complete this system, we will at least need (long-term) Memory, which plays a role of disk storage. Retrieval-augmented LLMs that sift through numerous relevant contexts to answer a query <cit.> can be viewed as a special case here by defining the Memory as raw data only. However, Memory is beyond the raw data, as it should be generated and organized, including many results that require reasoning from the raw data. In addition to downstream applications, Memory shall be able to be directly consumed by users. Acknowledging the necessity of Memory, we then discuss the forms of Memory and how to facilitate the interaction between Memory and LLM (e.g., loading the right data from “disk” to “RAM”). As an intermediate stage, the memory will likely be in the form of natural language descriptions. This is in line with many existing information extraction and knowledge discovery works and we will construct a “Memory Palace” for each agent/person. Ultimately, every agent/person should have its own large personal model (LPM), a deep neural network model (thus AI-native) that parameterizes and compresses all types of memory, even the ones cannot be described by natural languages. From this compression perspective, this LPM can be a LLM too. Finally, we discuss the significant potential of AI-native memory as the transformative infrastructure for AI-native (proactive) engagement, personalization, distribution, and social in the AGI era, as well as the incurred privacy and security challenges with preliminary solutions. In summary, our main points are * LLM itself is not enough for AGI. It is very challenging and even impossible to build an LLM with truly unlimited context length, so the model can put all raw data into the context and complete all necessary reasoning in one step for a particular query. * Memory is a keystone towards AGI. 
AGI should be a system, where LLMs are more like Processors, LLM's context is like a RAM, and Memory plays a role like a disk. * There can be at least two different ways to generate and organize Memory. The first solution is following the Information Extraction/Generation ideas of constructing a “Memory Palace”. The second solution falls in the line of compressing the Memory as a neural network (maybe LLM too). § LLMS WITH UNLIMITED CONTEXT LENGTH ARE NOT THE ANSWER FOR AGI As the LLMs have demonstrated the world with the sparks of AGI <cit.>, an increasing number of people, especially from some startups working on LLMs, argue that an LLM with super long or even unlimited context can achieve AGI by putting all raw data into the context and relying entirely on the LLM to complete all necessary reasoning in one step to get the final result. There are two key assumptions behind this long-context direction, and they must hold true at the same time; otherwise, this argument would fail automatically. Assumption 1: LLMs can effectively find the necessary information from a super long or even unlimited context, i.e., the needle-in-a-haystack capability. Assumption 2: LLMs can conduct all the required, complicated inferences based on the raw inputs in one step, i.e., the long-context reasoning capability. According to the current literature and our experiments (will be presented in this section), people might be too optimistic about the long-context capability of (existing) LLMs – (1) recent literature <cit.> has shown that their effective context length is significantly smaller than their claimed context length; and (2) our reasoning-in-haystack experiments in Section <ref> further demonstrate that simultaneously finding the relevant information from a long context and conducting reasoning is nearly impossible. More details will be covered in the remainder of this section. §.§ Effective Context Length of Existing LLMs is Limited There are several proprietary LLMs claimed very long context lengths. For example, the original GPT-4 has a context window of 32K tokens <cit.>, and the the most recent GPT-4-turbo and GPT-4o models can process 128K tokens; Gemini 1.5 claimed a context window of 1M or 10M tokens <cit.>. There are also a number of works, mostly from academia, extending the open-source LLMs to long context lengths, by either adding more fine-tuning with long contexts or modifying the (relative) attention calculations without changing the model parameters <cit.> Needle-in-a-haystack (NIAH) The needle-in-a-haystack test is commonly adopted in these long-context LLM works to demonstrate that the LLMs can retrieve the “needle” (e.g., a specific number or sentence) from the “haystack”, i.e., a long irrelevant/background text. Effective Context Length The effective context length is defined as the maximum length that the testing LLM can outperform a strong baseline. Specifically in  <cit.>, the baseline is chosen as LLAMA-2-7B (chat), a popular open-source LLM with a 4K context length that is very affordable for serving. All the testing LLMs have a claimed context length at least 32K. According to the Table 3 in <cit.>, most, if not all, LLMs overclaimed their context lengths. For example, GPT-4 <cit.>, which claims to have a context of 128K, only has an effective context of 64K; ChatGLM <cit.> claims to have a context of 128K, but its effective context is only 4K. 
Therefore, we believe that super long/unlimited effective context is very difficult to achieve, and the effective context size in existing long-context solutions has not fundamentally improved. There are still many fundamental obstacles in technology in the future. §.§ Reasoning-in-a-haystack is Very Difficult for Existing LLMs Going beyond the traditional NIAH tasks that focus solely on retrieval-based abilities, we propose a new reasoning-in-a-haystack task, aimed at validating LLMs' capability when the retrieval and reasoning are required simultaneously. Figure <ref> shows an overview of the reasoning-in-a-haystack evaluation pipeline. We start with the real data from Mebot[<https://me.bot/>. We would like to acknowledge to the users who have agreed to our experiment use for their data.] of Mindverse AI. Mebot is a “second me” product based on LLMs. For each user, it creates personalized models that can be applied across various scenarios. Specifically, it emphasizes on organizing the user's memories while ensuring privacy and security, providing personalized services and inspiration based on these memories. §.§.§ Experiment Setups The experiment details are described as follows. Haystack, Needle, and Query: A more challenging setting We constructed 8 haystacks for different users to increase the diversity and difficulty of the test cases. Each haystack, served as a chronologically organized compilation of users' notes and session messages, was created with the explicit consent of the users. The data was sourced from Mebot users and meticulously filtered to ensure the absence of contradictory information in each query-needle pair. These data contains note and chat. Each note includes title, summary, and content; each chat session involves a (multi-turn) dialogue between user and Mebot. We designed 6 distinct, well-structured query-needle pairs, each with a corresponding true answer, as exemplified in Appendix <ref>. All pairs are in the context of Mebot and are close-ended to ensure feasibility for automated evaluation. The number of hops, which represents the reasoning steps required to obtain the final result, is set to 1, 2, and 3. Furthermore, we experimented two different ways to distribute the needles in the haystack as follows. * Multi-needle: Every needle is evenly distributed in the haystack. For example, if there are 5 needles, they are placed at depths of 0%, 20%, 40%, 60%, and 80%. * Single-needle: All the needles are combined together and distributed at the depth of 40% or 60%. Note that our constructed haystack, needle, and query shall be viewed as significantly more challenging than previous NIAH works, where the relevance between haystack and needle-query pair is nearly minimal. Compared Provider LLMs We selected , and as the Provider to be evaluated, as , are two of the most advanced models and serve as a preferable baseline. The prompt settings used for these providers are illustrated in Appendix <ref>. Evaluator LLM, True Answer, and Evaluation Criteria Due to the closed-world nature of our needle-query construction, we first generate a true answer by LLM and then refine it manually to ensure accuracy and fairness for evaluation. The introduction of true answer makes the evaluator's job much easier as it only needs to compare provider's answer with well-designed true answer; there is no need to refer to the needles to handle more complex reasoning during the evaluation. To ensure consistency in our evaluation, we used (temperature=0) as the evaluator for all cases. 
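To make the needle-placement scheme above concrete, a small helper along the following lines can bury needles at prescribed fractional depths of a haystack; the token-level splitting, the placeholder data and the function name are our own illustrative choices rather than the exact implementation used for the experiments.

def place_needles(haystack: str, needles: list, depths: list) -> str:
    # insert each needle at the given fractional depth (0.0-1.0) of the haystack,
    # measured in whitespace-separated tokens
    assert len(needles) == len(depths)
    tokens = haystack.split()
    positions = [int(len(tokens) * d) for d in depths]
    # insert from the deepest position first so earlier insertion points stay valid
    for pos, needle in sorted(zip(positions, needles), reverse=True):
        tokens[pos:pos] = needle.split()
    return " ".join(tokens)

if __name__ == "__main__":
    haystack = "filler " * 1000                      # stand-in for a user's notes and chats
    needles = ["needle-1", "needle-2", "needle-3", "needle-4", "needle-5"]
    multi = place_needles(haystack, needles, [0.0, 0.2, 0.4, 0.6, 0.8])   # multi-needle setting
    single = place_needles(haystack, [" ".join(needles)], [0.4])          # single-needle setting

In the single-needle setting all needles are concatenated and buried at one depth (40% or 60%), which reduces the retrieval difficulty relative to spreading them evenly through the haystack.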
The evaluation criteria are presented in Appendix <ref>. For the same provider LLM, we iterate through all needle-query pairs and conduct experiments on 8 haystacks to obtain an average score, which is a number between 0 and 10, the higher, the better. §.§.§ Results As shown in Figure <ref>, the most recent LLMs from OpenAI, and both show poor performance with long texts and multiple hops, supporting our aforementioned arguments on LLMs and AGI. Checking the score trend over the number of hops and the context length, it is obvious that the quality of responses is negatively correlated both of them, indicating that LLMs struggle with extended texts and multiple reasoning steps. Also, the results confirm that the multi-needle setting is more challenging than the single-needle one, because combining all the needles together reduces the retrieval difficulty. Remarkably, and perform similarly on this task. According to livebench results (<https://livebench.ai/>), outperforms in reasoning tasks, while excels in language tasks. Since our task combines these two aspects, similar results for both models are consistent with the literature. §.§ Remarks and Discussions The current reasoning ability of LLMs is insufficient. Without a new paradigm that significantly improves reasoning ability, it is very unrealistic to rely entirely on LLMs to complete all necessary retrieval and reasoning in one step. Drawing the connections with human learning and reasoning, the context of LLMs is like short-term (working) memory. Even with super long/unlimited effective long-context LLMs, they can only solve problems based on very long short-term memory – every time, the LLMs work everything from the scratch. Intuitively, this is less efficient and effective than saving and organizing the important conclusions from the history. Therefore, the most ideal approach here is to timely transform important conclusions into long-term memory for better future use. This points us to AI-native memory. § AGI SHOULD BE A SYSTEM WITH AI-NATIVE MEMORY AGI shall be a System like a computer, where LLMs are like Processors and the context of LLM is like RAM. To complete this system, we must have (long-term) Memory as disk storage. RALM/RAG is an elementary version of Memory Retrieval-augmented LLMs (RALMs) that sift through numerous relevant contexts to answer a query <cit.> can be viewed as a special case here by defining the Memory as raw data only. While some people want to leverage RALMs for AGI, the main starting point of these methods were to solve the lack of domain knowledge in LLMs. Therefore, these methods are designed to solve the problem that the long-context supported by LLM itself is not long enough. As discussed earlier, relying solely on the super long context of LLM itself cannot realize AGI. So RALM/RAG doesn't work either. Memory is beyond the raw data, as it should be generated and organized, including many results that require reasoning from the raw data. In addition to downstream applications, Memory shall be able to be directly consumed by users. What is AI-Native Memory? We believe the ultimate form of AI-Native Memory is a deep neural network model (thus AI-native) that parameterizes and compresses all types of memory, even the ones cannot be described by natural languages. In order to ensure the privacy of the Memory across different users who interacted with the same AGI agent, we argue that the best practice is to maintain one Memory model for each individual user. 
Therefore, we refer to this Memory model between the AGI agent and a particular user as the Large Personal Model (LPM) of this user. The LPM records, organizes, indexes, and arranges every detail about the individual, ultimately providing interfaces for users to directly access memories and for downstream applications (such as personalized generation, recommendations, etc.) to utilize useful, complete contexts. In a sense, the LPM acts as an upgraded “Retrieval-Augmented” role. Its superiority lies in the transformation of original data through extensive “reasoning” (i.e., organizing, indexing, etc.), rather than merely recording. Note that the LPM will evolve as the user interacts with LPM, creating a Data Flywheel. We envision three levels of the implementations of LPM as follows, with increasing complexity. * L0: Raw Data. This approach is similar to directly applying RALM/RAG to raw data, defining Memory as all raw data. * L1: Natural-language Memory refers to the memory that can be summarized as natural language forms, such as short bio of the user, a list of significant sentences or phrases, and preference tags. * L2: AI-Native Memory refers to the memory that doesn't necessarily need to be described in natural language, learned and organized through model parameters. Each LPM will be a neural network model. From a technical perspective, the production, organization, consumption, and maintenance of the LPM need to be addressed. The rest of this section will give a deep dive into L1 and L2. §.§ L1: Natural-language Memory In L1, the Memory will include a set of natural-language descriptions, such as keywords/tags, phrases, sentences, and even paragraphs. These are highly relevant to information extraction and knowledge discovery, including phrase mining <cit.>, entity recognition <cit.>, relation extraction <cit.>, text summarization <cit.>, taxonomy construction <cit.>, etc. It will also cover different modalities as sources of the Memory, such as image, audio, video, and even sensor signals from wearable devices. The developers of the L1 LPM have to specify the schemes. For example, various useful Memory types can be defined, including but not limited to * (Short) Bio, a general description of the user, typically with a few sentences. * Topics of interest to the user, which can be seen as a collection of tags (e.g., “politics”, “basketball”). * Preferences include a user's preferences for various things. The preference and topic are different because knowing a preference typically (implicitly) excludes the other side of the preference (e.g., detailed vs. concise expressions, cost-effective vs. luxury products, aisle vs. window seats). * Social Connections include the user's social relationships, such as who and which organizations have been mentioned. The Memory can be categorized by granularity too. Taking the topics as example, we can have the following examples from fine-grained to coarse-grained. * Summarized Sentences: Each interaction with the user can be summarized into sentences. Such summaries are just one level beyond the raw data in L0. There can be redundancies, but they should not contradict each other. * Fine-grained Tags: Very precise tags that summarize Memory at a very detailed level. These tags are typically explicitly mentioned by the user. * Coarse-grained Tags: Starting from fine-grained tags, one can roll up the granularity to obtain more general tags. 
For example, expanding from a player's name (e.g., Michael Jordan) to the sport league (e.g., NBA), and from the sport league to the sport itself (e.g., Basketball). It is important to keep the granularity not too far from the original fine-grained tags, so that the user is still interested. * Global: Every user should have a high-level summary, similar to what the user would say during ice-breaking sessions. This includes fun facts, personal hobbies, etc. The Memory is never only about (generalized) extractions. It requires more complex inference and reasoning. * Memory can include information that is inferred from a single conversation, for example, through summarizing and reflecting. * Memory can be derived from cross-session interactions. This is essentially pattern mining – deducing global information from user behavior across a few interactions. This can be achieved by sampling and chaining memories by tags/sentences and then running an LLM inference. For example, one can gather all recent Memory about basketball and then ask an LLM to find a trend. §.§ L2: AI-Native Memory In L2, the Memory goes beyond natural-language forms and becomes a neural network model; therefore, we name it “AI-Native”. This model aims to encode all the memories of the user. The L2 LPM can be viewed as a personalized version of world models <cit.>. It shall be able to predict the user's behavior based on the user's history. To this end, the L2 LPM can also make suggestions while the user is adding new inputs, like an auto-completion. Note that L2 is not simply a parameterized version of L1. It shall generalize to more subtle patterns that cannot be defined by the system designers. It is an end-to-end solution without handcrafted schemes. One can expect that “prompting” the L2 model can recover the information that the developers would define in L1. Privacy and Security Our envisioned LPM separates user histories, as all the LPMs are trained independently, so there is no concern that an LPM will leak a user's information to others. Data and model security is another issue to pay attention to. An LLM can be an L2 LPM The memory encoding in L2 can be viewed as a compression of the raw data that is as lossless as possible. From this compression perspective, choosing an LLM as the LPM and continuing training with the language-modeling objective on the user history becomes a very intuitive solution. At the same time, finding the underlying patterns in the memory, so that the model will be able to generate novel reasoned memories/preferences, is an important feature of L2. One can expect that an L2 model should be able to generate all the L1 memory. Therefore, we can leverage the L1 results as additional data for supervised fine-tuning of the L2 model. In summary, one can obtain an LPM via a combination of language-modeling “pre-training” and instruction-following “fine-tuning” based on the user history. Remarkably, one shall be able to prompt the L2 LPM to uncover all the L1 information because the model will likely reach nearly zero training error. Challenges and Potential Solutions There are several challenges and open problems that require more research and thought. * Training Efficiency. One intuitive but computationally complex method is for each user to fine-tune their own LLM. A possible implementation would involve learning how to generate Memory from raw data and how to produce the required Memory based on the current context within an end-to-end neural network model.
A compromise method is to use LoRA <cit.> to fine-tune a personal LLM for each user. Our initial experiments suggest that a LoRA 7B model is enough to capture the memory of a single user, as the training data size is several orders of magnitude smaller than the typical pre-training data size of LLMs. * Serving Efficiency. As more L2 LPMs are deployed for users, new infrastructure is needed for serving these models. This is more challenging than serving a single generic LLM for all users, since LPMs are customized for different users. One advantage of using LoRA models is that different LPMs can still share common layers in the neural architecture. We plan to develop a new serving framework that combines the computations in the common layers of different LoRA models, so that concurrent queries can be put into batches to increase the throughput and also reduce the serving cost. Another direction to explore is to offload the L2 LPM serving to the user's edge device, e.g., a smartphone, after we quantize the model. * Cold Start, a common problem in training deep neural networks, is a straightforward challenge in L2. We argue that the L2 LPM should only be trained once the user has accumulated sufficient data. Otherwise, one can always roll back to L1 to offer some initial personalization experience. Another idea is to use role-play methods <cit.> to generate synthetic data and lower the entry bar of the L2 LPM for users. * Catastrophic Forgetting and Conflict Resolution. It is important to ensure that new memory is learned while preventing catastrophic forgetting of old memory. There are also cases where newly added, correct memory should override previous wrong information. There is already some pioneering research along this line <cit.>. § CONCLUSIONS AND OUTLOOKS In this paper, we highlight the limitations of LLMs in achieving AGI due to the impracticality of unlimited context length. We propose that AGI should function as a system where LLMs act as processors, their context as RAM, and memory as a disk. Efficient memory is crucial, and we suggest two solutions: (1) constructing a “Memory Palace” using Information Extraction/Generation techniques for structured storage, and (2) compressing memory into a neural network for efficient retrieval. These approaches can be combined to create a robust memory system for AGI. In our vision, the Memory is strongly associated with the user and, at the same time, agnostic to the specific applications. We believe that in the future, an AGI agent will first interact with the AI-Native Memory and see if it can supply the necessary information. If not, it is the AI-Native Memory's job to interact with the real user to figure out more information. Therefore, AI-Native Memory will be the core of all interactions and personalizations between users and AGI agents. Note that personalization here is not only traditional content recommendation, but a type of recommendation service that marks the beginning of the AI journey. An accurate and efficient AI-Native Memory will enable numerous applications, such as memory-augmented chat, recommendations, building situational memory, auto-completion for the user's input, and integrating personal models based on relationships in social networks.
In conclusion, AI-native memory has significant potential as the transformative infrastructure for (proactive) engagement, personalization, distribution, and social interaction in the AGI era; we have also discussed the privacy and security challenges it incurs, together with preliminary solutions. § APPENDIX § AN EXAMPLE OF MULTI-NEEDLE REASONING-IN-A-HAYSTACK § PROMPT TEMPLATE § EVALUATION CRITERIA § DETAILED EXPERIMENT RESULTS
http://arxiv.org/abs/2406.19234v1
20240627145838
Seeing Is Believing: Black-Box Membership Inference Attacks Against Retrieval Augmented Generation
[ "Yuying Li", "Gaoyang Liu", "Yang Yang", "Chen Wang" ]
cs.CR
[ "cs.CR", "cs.AI" ]
§ ABSTRACT Retrieval-Augmented Generation (RAG) is a state-of-the-art technique that enhances Large Language Models (LLMs) by retrieving relevant knowledge from an external, non-parametric database. This approach aims to mitigate common LLM issues such as hallucinations and outdated knowledge. Although existing research has demonstrated security and privacy vulnerabilities within RAG systems, making them susceptible to attacks like jailbreaks and prompt injections, the security of the RAG system's external databases remains largely underexplored. In this paper, we employ Membership Inference Attacks (MIA) to determine whether a sample is part of the knowledge database of a RAG system, using only black-box API access. Our core hypothesis posits that if a sample is a member, it will exhibit significant similarity to the text generated by the RAG system. To test this, we compute the cosine similarity and the model's perplexity to establish a membership score, thereby building robust features. We then introduce two novel attack strategies: a Threshold-based Attack and a Machine Learning-based Attack, designed to accurately identify membership. Experimental validation of our methods has achieved a ROC AUC of 82%. § INTRODUCTION Large Language Models (LLMs) have demonstrated impressive capabilities in language generation tasks <cit.>. Despite these advancements, LLMs still struggle with issues such as hallucinations and a limited ability to process long-tailed factual knowledge <cit.>. Retrieval-Augmented Generation (RAG) addresses these limitations by incorporating information from external knowledge databases into the generative process, thereby reducing hallucinations and enhancing the management of long-tail knowledge <cit.>. However, despite utilizing external non-parametric data stores, RAG systems continue to face significant security and privacy issues. A growing body of research is dedicated to understanding the security vulnerabilities of RAG systems. Hu et al.<cit.> demonstrated that generating specific short prefixes can lead RAG systems to produce erroneous outputs. Similarly, PoisonedRAG, designed by Zou et al.<cit.>, employs specialized prompts to elicit unexpected outputs from LLMs. Additionally, Zeng et al.<cit.> have shown that RAG systems can leak private information. Despite these findings, the security of the external knowledge databases used by RAG systems remains underexplored, with only a few studies focusing on this issue. Qi et al.<cit.> developed a technology for prompt-injected data extraction that effectively extracts data from the databases of RAG systems. Furthermore, Anderson et al. <cit.> employed a method in which an LLM is prompted to answer 'yes' or 'no' to determine whether a specific sample exists in the database. This paper explores the use of Membership Inference Attacks (MIA)<cit.> to assess whether samples are included in the knowledge datasets of RAG systems, with a focus on evaluating the privacy and security of these external datasets. Traditionally used to determine if a sample was part of a model's training dataset, MIAs are well-recognized for assessing model privacy<cit.>. However, conventional MIA techniques, which depend on the parameterized structures of pre-trained models, are not well-suited for RAG systems that rely on non-parametric external knowledge bases for information sourcing.
Our core insight is that if a sample resides in the RAG system’s knowledge database, the content generated by the LLMs will be similar to that sample. This occurs because the RAG system tends to retrieve the top-k passages most similar to the input sample <cit.>. Once a sample is identified within the database, it is retrieved and integrated into the input prompt. This integration enables the LLMs to access relevant knowledge and generate a response that exhibits a higher degree of similarity to the original sample, drawing directly from the sample itself to generate answers. Although the idea is simple, implementing our method in the RAG system presents two primary challenges. First, since LLMs synthesize both retrieved data and their inherent knowledge to generate responses, the output text may not perfectly align with the target sample. This discrepancy poses a challenge in accurately assessing the similarity between the generated text and the target sample. Second, even after calculating a similarity score, determining an appropriate threshold that enables the attacker to accurately assess whether the target sample is present in the external knowledge base remains difficult. To address these challenges, we have developed a novel method that can accurately determine the presence of a sample within the knowledge database of a RAG system, using only black-box API access. As shown in Fig <ref>, we initially use the first half of the target sample as the prompt to generate text via the RAG system. We then compute the cosine similarity between the entire target sample and the generated text, along with the model's perplexity, to establish a robust membership score. Additionally, we employ both threshold traversal-based and machine learning-based methods across these two dimensions to accurately classify members and non-members. Our experiments demonstrate that our attack method can effectively identify samples within the target knowledge base and is superior to the four state-of-the-art MIA methods. § METHOD We introduce a novel method utilizing MIA technology specifically designed for RAG systems. This method can accurately determine whether a sample is included in the knowledge database of a RAG system. As illustrated in Fig <ref>, our approach requires only the generated content from the RAG system to calculate the membership score, enabling its application through black-box API access. This makes it universally applicable to any RAG system, regardless of its underlying architecture or dataset. §.§ Membership Score Our goal is to utilize a set of N query samples, denoted as q_1, ..., q_N, to determine whether each sample q_i exists in the RAG system's external knowledge database D. To achieve this, we first divide each query q_i into two parts: the prompt text p_i and the remaining text r_i, where q_i = {p_i ⊕ r_i}. The RAG system retrieves the top k samples that are most similar to p_i. If q_i is present in the knowledge database, it will be retrieved as external knowledge D, and the model will generate a response g_i based on the content of q_i, resulting in text that is highly similar to the original prompt q_i. To quantitatively assess this similarity, we transform both g_i and q_i from text to embeddings and calculate their cosine similarity Similarity_i. This calculation is performed using the following equation: Similarity_i = ϕ(q_i) ·ϕ(g_i)/(‖ϕ(q_i)‖ ‖ϕ(g_i)‖) Here, ϕ(·) represents the function mapping the text to its embedding vector, and · denotes the dot product operation.
The term ‖ϕ(·)‖ denotes the Euclidean norm of the vector ϕ(·). The inherent variability in the text generated by LLMs, which predict the probability of the next token rather than producing a deterministic output <cit.>, means that the similarity of generated text can fluctuate. Consequently, using similarity alone is not sufficiently robust for determining membership. Therefore, we also incorporate the perplexity of the generated text. Perplexity measures the average uncertainty of the model when predicting each word. If a sample belongs to the database, the model's predictions are expected to be more certain, resulting in lower perplexity. We denote the perplexity of the i-th sample as Perplexity_i. Finally, we combine Similarity_i and Perplexity_i into a two-dimensional vector that serves as the membership score for each sample: Membership_Score_i = {(Similarity_i ⊕ Perplexity_i) | i = 1, 2, ..., N} This composite score provides a more robust feature for deciding whether each sample exists in the knowledge database, enhancing our method's accuracy and reliability. §.§ Attack Threshold-based Attack: Since Membership_Score_i is a two-dimensional feature vector containing similarity and perplexity information, we employ a traversal method to explore all possible threshold vectors θ = (θ_similarity, θ_perplexity) and identify the optimal membership threshold vector θ^*. A sample is classified as a member if Similarity_i ≥θ_similarity and Perplexity_i ≤θ_perplexity. Conversely, it is classified as a non-member otherwise. For each candidate θ, we compute the corresponding Area Under the Curve (AUC) value, which reflects the trade-off between the true positive rate (TPR) and the false positive rate (FPR). The selection of the optimal threshold is determined by the expression: θ^* = arg max_θ AUC(θ) This process aims to find the threshold θ^* that maximizes the AUC value. The AUC metric is particularly suitable for evaluating binary classification tasks because it balances the ability to correctly identify members against the risk of incorrectly identifying non-members as members. Ultimately, the threshold θ yielding the highest AUC value is selected as the optimal threshold θ^*. Machine Learning-based Attack: In addition to employing a simple threshold method, we have developed a supervised attack model that leverages the capabilities of machine learning algorithms to learn nonlinear decision boundaries. This approach enables the model to discern the distribution properties of the similarity and perplexity features, thus allowing it to automatically differentiate members from non-members. For each sample i, we utilize a two-dimensional feature vector as input: X_i = [Similarity_i, Perplexity_i], with the corresponding label y_i = 1 indicating that sample i is a member, and y_i = 0 indicating a non-member. The goal is for the model to learn a mapping function f, which predicts the probability of membership status based on the feature vector: f(X_i) = P(y_i = 1 | X_i) It is critical to note that any supervised learning model suitable for binary classification can be utilized for this task. To efficiently select the best model and its parameters, we employ the AutoML framework AutoGluon <cit.>. AutoGluon streamlines the process of searching and evaluating multiple models and hyperparameter settings, ultimately identifying the model that achieves the best performance.
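To make the two attacks concrete, the sketch below re-implements the membership score and the threshold traversal in a minimal form. The embed and perplexity helpers are placeholders for the actual embedding model and the LLM-based perplexity computation (not reproduced here), and the grid search is a simplified stand-in for the traversal described above; the machine-learning variant would simply feed the same two features into an AutoML tool such as AutoGluon.

import numpy as np
from itertools import product
from sklearn.metrics import roc_auc_score

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def membership_score(target_text, generated_text, embed, perplexity):
    """Two-dimensional feature (Similarity_i, Perplexity_i); embed and
    perplexity are user-supplied callables (placeholders in this sketch)."""
    sim = cosine_similarity(embed(target_text), embed(generated_text))
    return sim, perplexity(generated_text)

def threshold_attack(scores, labels, n_grid=50):
    """Sweep (theta_sim, theta_ppl); predict member when sim >= theta_sim
    and ppl <= theta_ppl, and keep the pair whose binary predictions give
    the highest AUC on the labelled calibration set."""
    sims = np.array([s for s, _ in scores])
    ppls = np.array([p for _, p in scores])
    y = np.array(labels)
    best_theta, best_auc = None, -1.0
    for t_sim, t_ppl in product(np.linspace(sims.min(), sims.max(), n_grid),
                                np.linspace(ppls.min(), ppls.max(), n_grid)):
        pred = ((sims >= t_sim) & (ppls <= t_ppl)).astype(int)
        auc = roc_auc_score(y, pred)
        if auc > best_auc:
            best_theta, best_auc = (t_sim, t_ppl), auc
    return best_theta, best_auc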
§ EVALUATION §.§ Experimental Settings Dataset: We utilize the HealthCareMagic-100k dataset, a QA dataset consisting of 24,665 samples derived from authentic conversations between patients and doctors on HealthCareMagic.com. For our experiments, this dataset is divided into two distinct parts: 80% of the samples are used as the external knowledge base for the model, representing member samples. The remaining 20% of the samples are designated as non-member samples. Baseline: We evaluate our methods against four commonly-used MIA techniques. The Loss Attack <cit.>, a classic approach in MIA, classifies a target sample as a member or non-member based on the computed loss from the model. The Zlib Entropy Attack <cit.> refines this by calibrating the target sample’s loss using the sample’s zlib compression size. Additionally, two State-of-the-Art (SOTA) methods are evaluated: the Neighborhood Attack <cit.> generates samples similar to the target and compares their model losses to that of the target sample, using the differences to determine membership. The Min-k% Prob Attack <cit.> calculates membership by focusing on the k% of tokens with the lowest likelihoods in a sample and computing the average probabilities. Implementation: The entire experimental framework was implemented using Python 3.8 and Pytorch 1.11.0. Our experiments were conducted on a standard computational platform featuring Ubuntu 20.04, equipped with an Intel Xeon Platinum 8352V CPU and two NVIDIA RTX 4090 GPUs. For the language model, we employed LLaMA-2-7b-chat-hf <cit.>, and the retriever used was all-MiniLM-L6-v2 <cit.>, configured to retrieve the top 5 chunks demonstrating the highest cosine similarity. We selected 100 member samples from the external knowledge database for training, and an additional 100 for testing. Similarly, we chose 100 non-member samples from the remaining dataset for training, with another 100 set aside for testing. For the QA dataset, we used the question component as the prompt text, configuring the system prompt to state: "Answer the question based on the context above." Evaluation Metric: Following previous MIA studies <cit.>, we primarily use ROC AUC to measure the model's ability to correctly classify members and non-members. Additionally, we employ several other metrics to comprehensively assess the model’s performance: Accuracy, which represents the overall proportion of correctly classified samples; Precision, which measures the proportion of predicted members that are actual members, thus evaluating the reliability of the model's positive classifications; Recall, which aids in detecting true positives; and F1-Score, a comprehensive performance indicator that reflects both the precision and the robustness of the attack model. §.§ Results §.§.§ Attack Results The effectiveness of MIA for RAG systems hinges on their ability to accurately classify samples and determine their presence in the RAG knowledge database. We evaluated our method against four conventional MIA techniques, as shown in Table <ref>. Traditional MIAs performed poorly in this context because they are primarily designed for parameterized neural network models and do not adapt well to the non-parametric nature of RAG systems, resulting in suboptimal attack outcomes. Among the traditional approaches, the Neighborhood Attack demonstrated superior ROC AUC and PR AUC values compared to the other three methods.
In this method, we consider five neighborhoods per sample; however, this method is computationally intensive as it requires generating neighborhoods for each sample, effectively doubling the training data. In contrast, our Threshold-based Attack achieved a ROC AUC of 0.801 and a PR AUC of 0.839, while the Machine Learning-based Attack performed even better, with a ROC AUC of 0.82 and a PR AUC of 0.898. These results indicate that our methods can effectively capture member information from the RAG system knowledge database. This effectiveness is attributed to the high similarity and low uncertainty between member samples and the samples generated by the RAG system. §.§.§ Different Metric Attack To further demonstrate the effectiveness of our feature extraction method, we conducted experiments using various metrics as sample features. "Single-similarity" utilizes only sample similarity as a feature. "Multi-prompt" involves the model generating five answers, with the similarity of each answer calculated separately. The prompt used is: “Answer the question based on the context above. Generate 5 responses, each ending in '/end'.” "Multi-sample" applies simple text augmentation, such as shuffling and synonym replacement, to generate five similar texts and calculates their similarities separately. "Multi-metric" employs five features: similarity, confidence, perplexity, loss, and entropy. As depicted in Tables <ref> and <ref>, the Single-similarity approach achieves high ROC AUC and Precision in the Threshold-based Attack but exhibits very low Recall and F1-Score, suggesting that this method may be overly conservative. It tends to predict a sample as positive only under high certainty, resulting in a high accuracy rate but a low number of positive predictions. In contrast, in the Machine Learning-based Attack, Single-similarity performs poorly overall. This is likely due to the randomness in LLM-generated documents, indicating that relying solely on similarity is not robust enough for the model to learn effective features. Both the Multi-prompt and Multi-sample methods exhibit low recall rates. It is evident that generating multiple texts to compute similarity does not significantly enhance performance; notably, the Multi-prompt method exhibits the poorest results. This underperformance may stem from the model's tendency to rely on its internal knowledge rather than on the external knowledge database when generating multiple outputs. Indeed, the Multi-metric approach demonstrates the best performance by integrating features from multiple dimensions into a comprehensive analysis. However, this method cannot be implemented through a black-box approach. It requires access to detailed model information, such as loss, cross-entropy, and confidence, which are derived from the output vector of the last MLP layer. In contrast, "similarity" and "perplexity" can be computed directly from the model's output text. Our findings indicate that using just these two indicators enables a robust attack on the RAG system knowledge database, utilizing only black-box API access. § CONCLUSION This paper focuses on the susceptibility of external knowledge databases in RAG systems to Membership Inference Attacks (MIA). Employing black-box API access, we developed and validated two innovative attack strategies: the Threshold-based Attack and the Machine Learning-based Attack.
The results, which demonstrate the effectiveness of our approaches in identifying membership within RAG systems, confirm the vulnerability of these databases. By disclosing these vulnerabilities, we hope to raise awareness among practitioners and policymakers about potential safety issues within RAG databases, underscoring the importance of developing enhanced security protocols to protect sensitive data.
http://arxiv.org/abs/2406.19208v1
20240627143235
Study of the $Ω_{ccc}Ω_{ccc}$ and $Ω_{bbb}Ω_{bbb}$ dibaryons in constituent quark model
[ "Pablo Martín-Higueras", "David R. Entem", "Pablo G. Ortega", "Jorge Segovia", "Francisco Fernández" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-lat", "nucl-th" ]
[]pablo.higueras@alu.uhu.es Departamento de Ciencias Integradas y Centro de Estudios Avanzados en Física, Matemática y Computación, Universidad de Huelva, 21071 Huelva, Spain []entem@usal.es Grupo de Física Nuclear, Universidad de Salamanca, E-37008 Salamanca, Spain Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain []pgortega@usal.es Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain []jsegovia@upo.es Departamento de Sistemas Físicos, Químicos y Naturales, Universidad Pablo de Olavide, E-41013 Sevilla, Spain []fdz@usal.es Grupo de Física Nuclear, Universidad de Salamanca, E-37008 Salamanca, Spain Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain § ABSTRACT Dibaryons are the simplest systems in which the baryon-baryon interaction, and hence the underlying quark-quark interaction, can be studied in a clear way. Although the only dibaryon known today is the deuteron (and possibly the d^*), fully heavy dibaryons are good candidates for bound states because in such systems the kinetic energy is small and the high symmetry of the wave function favours binding. In this study, the possible existence of Ω_cccΩ_ccc and Ω_bbbΩ_bbb dibaryons is investigated in the framework of a constituent quark model that satisfactorily describes the deuteron, the d^*(2380) and the NN interaction. J^P=0^+ candidates are found in both systems with binding energies of the order of MeV. 12.39.Pn, 14.40.Lb, 14.40.Rt Study of the Ω_cccΩ_ccc and Ω_bbbΩ_bbb dibaryons in constituent quark model Francisco Fernández July 1, 2024 ============================================================================ § INTRODUCTION Understanding the nucleon-nucleon interaction has been one of the priority problems in Nuclear Physics since Yukawa's one pion exchange theory. The subsequent development of QCD paved the way to describe the strong interactions in terms of quark degrees of freedom and facilitated enlarging the field to other flavors like charm and bottom. Dibaryons are the simplest systems in which these studies can be addressed in a transparent way. Until recently, the only well-established bound state of two baryons was the deuteron. Then, in 2011, another unstable light dibaryon, the d^*(2380), was reported by the WASA-at-COSY Collaboration <cit.> from the double pionic fusion reaction pn→ dπ^0π^0. This resonance can be described as a nonstrange ΔΔ dibaryon with I(J^P)=0(3^+). In 1989, Goldman noted that due to the special symmetry of such a state, any model based on confinement and gluon exchange should predict it <cit.>. The long history of the search for dibaryons in the light quark sector can be found in Ref <cit.>. It is well known that the binding of the deuteron is due to the coupling of the ^3S_1 and ^3D_1 partial waves by one-pion exchange tensor interactions. Similarly, the binding of the d^*(2380) can be explained in terms of Goldstone-boson exchanges <cit.>. These two systems then prove that the interaction binding these dibaryons arises from QCD chiral symmetry breaking in the light quark sector. Another interesting system is the fully heavy dibaryon. In such a system the relativistic effects are negligible and the kinetic energy is small. As originally pointed out by Bjorken <cit.>, the triply-charmed baryon Ω_ccc is stable against strong interactions.
This fact opens the possibility to study systems like Ω_cccΩ_ccc or Ω_bbbΩ_bbb. Moreover, in contrast to the deuteron and the d^* case, the latter systems provide an ideal scenario to explore the baryon-baryon interaction in an environment free of chiral dynamics. In this work we will focus on the study of the fully heavy dibaryons. Two recent Lattice QCD calculations have explored these systems: Ref. <cit.> showed that Ω_cccΩ_ccc is loosely bound by 5.68(0.77) MeV, while Ref. <cit.> found a very deep Ω_bbbΩ_bbb state with a binding energy of 81_-16^+14 MeV. These conclusions are confirmed by several quark model calculations, but are contradicted by others. For example, Huang et al. <cit.>, using a constituent quark model based on the one-gluon exchange interaction and the resonating group method, studied the possible bound states of the Ω_cccΩ_ccc and Ω_bbbΩ_bbb, among others. They found a J^P=0^+ bound state for the Ω_cccΩ_ccc system with a binding energy of 2.5 MeV and another Ω_bbbΩ_bbb state bound by 0.9 MeV, contrary to naive expectations. Deng <cit.> performed a study of the di-Δ^++, di-Ω_ccc and di-Ω_bbb systems using a naive one-gluon exchange quark model and a chiral quark model including π and σ exchanges between quarks. Obviously, in this case these parts of the interaction apply only to the light quarks, but the set of parameters is different in the two models. Both studies predict very shallow di-Ω_ccc and di-Ω_bbb states with binding energies around 1 MeV. Using a different model, namely QCD sum rules, Wang <cit.> found for each of the di-Ω_ccc and di-Ω_bbb systems two J^P=0^+ and J^P=1^- states that are slightly below their respective thresholds. On the other hand, several studies within the quark model have ruled out the existence of fully heavy dibaryons. In Ref <cit.> the authors investigated the existence of bbbccc dibaryons and extrapolated their results to the properties of the bbbbbb and cccccc systems. They found no bound states for Ω_cccΩ_ccc or Ω_bbbΩ_bbb combinations. On the other hand, Alcaraz-Peregrina et al. <cit.> used the Diffusion Monte Carlo technique to describe fully heavy compact six-quark arrangements. They found that all the hexaquarks have smaller masses than those of their constituents, i.e., all the hexaquarks are bound systems. However, their masses are also larger than those of any pair of baryons into which they can be divided. This means that each hexaquark is unstable with respect to its splitting into two baryons. Finally, two more calculations, in the framework of the constituent quark model <cit.> or the extended chromomagnetic model <cit.>, showed that all the fully heavy dibaryons lie above their corresponding baryon-baryon thresholds. In view of this controversial situation, since different approaches lead to quite different conclusions, we will study the possible existence of Ω_cccΩ_ccc and Ω_bbbΩ_bbb dibaryons using the constituent quark model of Ref. <cit.> and its extension to the heavy quark sector <cit.>, which has been able to describe a large variety of hadronic phenomenology. In particular, the model reproduces the properties of the deuteron <cit.> and predicts the existence of the d^*(2380) as a ΔΔ dibaryon <cit.>. Although the binding energy of the d^* predicted in the latter references is smaller than the experimental value, it is also worth mentioning that these studies were performed without coupling to the NN channel. The paper is structured as follows. In Sec.
<ref> we describe the main aspects of our theoretical model, giving details about the wave functions used to describe Ω_ccc (Ω_bbb) baryons and the way we derive the Ω_cccΩ_ccc interaction using the Resonating Group Method (RGM). Section <ref> is devoted to presenting our results for the possible dibaryons. Finally, we summarize and give some conclusions in Sec. <ref>. § THEORETICAL FORMALISM §.§ The constituent quark model Our theoretical framework is a QCD-inspired constituent quark model (CQM) proposed in Ref. <cit.> and extended to the heavy quark sector in Ref. <cit.>. The main pieces of the model are the constituent light quark masses and Goldstone-boson exchanges, which appears as consequences of spontaneous chiral symmetry breaking of the QCD Lagrangian together with perturbative one-gluon exchange (OGE) and nonperturbative color confining interactions. Following Diakonov <cit.>, a simple Lagrangian invariant under chiral transformations can be written as ℒ = ψ̅(i ∂ - M(q^2) U^γ_5) ψ , where M(q^2) is the dynamical (constituent) quark mass and U^γ_5 = e^iλ _aϕ ^aγ _5/f_π is the matrix of Goldstone-boson fields that can be expanded as U^γ _5 = 1 + i/f_πγ^5λ^aπ^a - 1/2f_π^2π^aπ^a + … The first term of the expansion generates the constituent quark mass, while the second term gives rise to a one-boson exchange interaction between quarks. The main contribution of the third term comes from the two-pion exchange which has been simulated by means of a scalar-meson exchange potential. In the heavy quark sector, chiral symmetry is explicitly broken and Goldstone-boson exchange does not occur. However, the full interaction constrains the model parameters through the light-meson phenomenology <cit.>. Thus, OGE and confinement are the only remaining interactions between the heavy quarks. The OGE potential is generated from the vertex Lagrangian ℒ_qqg = i√(4πα_s) ψ̅γ_μ G^μ_cλ^cψ, where λ^c are the SU(3) colour matrices, G^μ_c is the gluon field and α_s is the strong coupling constant. The scale dependence of α_s allows a consistent description of light, strange and heavy mesons. Its explicit expression can be found in, e.g., Ref. <cit.>, α_s(μ)=α_0/ln(μ^2+μ_0^2/Λ_0^2) Regarding the confinement potential, it is well known that multi-gluon exchanges produce an attractive linearly rising potential proportional to the distance between infinite-heavy quarks <cit.>. However, sea quarks are also important components of the strong interaction dynamics that contribute to the screening of the rising potential at low momenta and eventually to the breaking of the quark-antiquark binding string <cit.>. Our model tries to mimic this behaviour with a screening potential at high distances. Then, the full interaction between heavy quarks is given by V_ij(r) = [ -a_c(1-e^-μ_c r) + Δ + α_s(μ)/41/r] (λ⃗_i ·λ⃗_j) V_ij^S(r) = -α_s(μ)/41/6m_im_je^-r/r_0(μ)/rr_0^2(μ) (σ⃗_i ·σ⃗_j) (λ⃗_i ·λ⃗_j) V_ij^T(r) = -1/16α_s(μ)/m_im_j S_ij (λ⃗_i ·λ⃗_j)× ×[1/r^3-e^-r/r_g(μ)/r (1/r^2+ 1/3r_g^2(μ)+1/rr_g(μ) ) ] where r_0(μ)=r̂_0 m_n/2μ with μ the reduced mass of the (ij) heavy quark pair, λ⃗ are the colour matrices, σ⃗ the spin matrices and S_ij=3(σ⃗_i·r̂)(σ⃗_j·r̂)-(σ⃗_i·σ⃗_j) the tensor operator of the (ij) pair with r⃗ their relative position. All the parameters of the model are given in Table <ref>. We have not included the spin-orbit interaction parts coming from the one-gluon exchange and confinement because they should give small contributions in this calculation. 
For the same reason, the spin-tensor terms are neglected in the calculation of the Ω_ccc (Ω_bbb) masses, but are included in the Ω_cccΩ_ccc (Ω_bbbΩ_bbb) interaction. §.§ The wave function of the Ω_ccc(Ω_bbb) A precise definition of the wave functions of the Ω_ccc and Ω_bbb baryons (henceforth Ω_QQQ) is an essential part of the calculation, because it defines the size of the baryon, which is important for the baryon-baryon interaction. Once we know the quark-quark interaction, the Ω_QQQ wave function can be calculated by solving the Schrödinger equation with the Gaussian Expansion Method (GEM) <cit.>. In the GEM framework one makes an expansion in gaussian wave functions but instead of using only one set of Jacobi coordinates, one includes the lowest orbital angular momentum wave functions using the three sets of possible Jacobi coordinates. The reason to use different sets is that lowest angular momentum wave functions in one set generate higher angular momentum wave functions in the other sets, which makes it a very numerically efficient way to include such high angular momentum components. However the wave function given by GEM would be quite complicated and would make the calculation of the dibaryon interaction slow. Alternatively, for the calculation of the dibaryon interaction (a choice that will be justified later), the following orbital wave function can be used ϕ(p⃗_ξ_1,p⃗_ξ_2) = [ 2b^2/π]^3/4 e^-b^2 p_ξ_1^2[ 3b^2/2π]^3/4 e^-3b^2/4 p_ξ_2^2 where p_ξ_i are the Jacobi coordinates defined as p⃗_ξ_1 = 1/2 (p⃗_1 - p⃗_2) p⃗_ξ_2 = 2/3p⃗_3 - 1/3 (p⃗_1 + p⃗_2) In the notation we use for the baryon calculation this corresponds to mode 3 of the GEM basis using only one gaussian with angular momentum zero and the parameters ν=1/4b^2 and λ=1/3b^2. Notice that, fixing the relation between the parameters of the gaussians ν and λ to these values (ν=3/4λ), the orbital wave function is totally symmetric, which is necessary to get a totally antisymmetric wave function for the lowest-energy baryon. The spin wave function also has to be symmetric, which implies S=3/2, and the color wave function will be a color singlet. So our wave function for the baryon is ψ_B = ϕ(p⃗_ξ_1,p⃗_ξ_2) χ_B ξ_c[1^3] with χ_B=(((1/2 1/2)1 1/2)3/2) the spin wave function and ξ_c[1^3] a singlet color wave function. Using the wave function of Eq. (<ref>), the kinetic energy is given by T = ⟨ψ_B | p_ξ_1^2/m + 3p_ξ_2^2/4m | ψ_B ⟩ = 3/(2m b^2) For the interaction energy we can evaluate ⟨ψ_B | V_12 | ψ_B ⟩ and multiply by 3, since we have 3 interactions between equivalent quarks. It is easier to evaluate it in coordinate space. The wave function in coordinate space is ϕ_B(r_3,R_3) = [ 1/2π b^2]^3/4 e^-r_3^2/4b^2[ 2/3π b^2]^3/4 e^-R_3^2/3b^2 and so ⟨ψ_B | V_12 | ψ_B ⟩ = 4π[ 1/2π b^2]^3/2∫_0^∞ r_3^2 dr_3 e^-r_3^2/2b^2 V(r_3) The mean distance between quarks is given by √(⟨ r_ij^2 ⟩) = √(3) b and the mass is given by M = 3m_Q + T + 3 ⟨ψ_B | V_12 | ψ_B ⟩ Finally, the value of the b parameter is obtained by minimizing the mass ∂ M/∂ b=0 Although the GEM method provides a more complete description of the wave function as mentioned before, the calculation is simplified if we use the analytical wave function of Eq. (<ref>). In Table <ref> we show the results of the Ω_ccc and Ω_bbb wave functions using the mass minimization procedure and compare with the GEM solution to justify the use of the simple wave function given by Eq. (<ref>).
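As a side remark, the minimization just described is straightforward to reproduce numerically. The sketch below works in natural units (b in GeV^-1) and uses purely illustrative placeholder values for the quark mass and potential parameters — they are not the model parameters of Table <ref> — and the constant Δ term of the potential is omitted:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Illustrative placeholders only; NOT the model parameters of Table <ref>.
m_Q   = 4.8       # heavy-quark mass in GeV
a_c   = 0.5       # screened-confinement strength in GeV
mu_c  = 0.35      # screening parameter in GeV
alpha = 0.3       # strong coupling at the relevant scale
color = -8.0/3.0  # <lambda_i . lambda_j> for a quark pair inside a colour-singlet baryon

def V12(r):
    """Central quark-quark potential (screened confinement + Coulomb-like OGE); r in GeV^-1."""
    return (-a_c * (1.0 - np.exp(-mu_c * r)) + alpha / (4.0 * r)) * color

def mean_V12(b):
    """<psi_B|V_12|psi_B> for the single-gaussian wave function of Eq. (<ref>)."""
    norm = 4.0 * np.pi * (1.0 / (2.0 * np.pi * b**2))**1.5
    return norm * quad(lambda r: r**2 * np.exp(-r**2 / (2.0*b**2)) * V12(r), 0.0, np.inf)[0]

def mass(b):
    """M(b) = 3 m_Q + 3/(2 m_Q b^2) + 3 <V_12>(b), all in GeV."""
    return 3.0*m_Q + 1.5/(m_Q*b**2) + 3.0*mean_V12(b)

res = minimize_scalar(mass, bounds=(0.2, 5.0), method="bounded")
print(f"b_min = {res.x:.3f} GeV^-1 = {0.19733*res.x:.4f} fm,  M(b_min) = {res.fun:.3f} GeV")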
From the table we see a reasonable agreement for the sizes and energies in both cases, although the agreement is better in the beauty sector. The minimal values for b are given by b_ min=0.15679 fm for the Ω_bbb and b_ min=0.25172 fm for the Ω_ccc. § THE Ω_QQQΩ_QQQ INTERACTION The system under study has six identical quarks. Then, the baryon-baryon total wave function must be fully antisymmetric. As the wave function of the baryons is already antisymmetric, the antisymmetrizer operator is just given by, 𝒜 = 1-9P_36 In order to obtain the effective baryon-baryon interaction from the underlying quark dynamics we use the Resonating Group Method (RGM) <cit.>. Then, we need to solve the projected Schrödinger equation, 0=( p'^2/2μ_ΩΩ -E ) χ(P⃗') +∫ ( ^ RGMV_D(P⃗',P⃗_i) + ^ RGMK(P⃗',P⃗_i) )χ(P⃗_i) d^3 P_i where P⃗' (P⃗_i) is the relative Ω_QQQ-Ω_QQQ final (initial) momentum, E=E_T-2M_Ω the relative energy of the system with respect to the threshold, ^ RGMV_D(P⃗',P⃗_i) is the direct kernel, ^ RGMK(P⃗',P⃗_i) is the exchange kernel and μ_ΩΩ is the reduced mass of two Ω_QQQ baryons. Here, M_Ω is M_Ω = 3 m_Q + 3/(2m_Q b^2) + 3 E_int E_int = ⟨ V_ij⟩ = ∫ d^3 q e^-q^2b^2/2⟨ V_ij(q) ⟩ The direct term will be zero in the present model since the color coefficients, (λ⃗_i·λ⃗_j), are zero between color singlets. Then, the full interaction is driven by exchange diagrams, which take into account the quark rearrangement between baryons. The exchange kernel can be written as, ^ RGMK(P⃗',P⃗_i) = ^ RGMT(P⃗',P⃗_i) + ^ RGMV_ij E (P⃗' , P⃗_i ) - E_T ^ RGMN(P⃗',P⃗_i) where ^ RGMT(P⃗',P⃗_i) is the exchange kinetic term, ^ RGMN(P⃗',P⃗_i) is a normalization term and ^ RGMV_ij E(P⃗',P⃗_i) is the exchange potential (for explicit expressions see, e.g., Refs. <cit.>). § RESULTS Let us first study the Ω_bbbΩ_bbb system. One of the states in S wave is the J^P=0^+, which corresponds to the ^1S_0 and ^5 D_0 partial waves. As in the case of the deuteron, S and D waves are mixed. We first calculate the binding energy considering the parameter b and the reduced mass given by the minimization procedure. Without tensor interactions they are decoupled and only one bound state appears, in the ^1S_0 partial wave. The binding energy of this state is E=-1.9859 MeV. The ^5 D_0 partial wave is not bound. If we include the tensor interaction of OGE the partial waves are coupled and the binding energy increases very slightly to E=-1.9876 MeV. The probability of the D wave is only 6.6· 10^-4%. As in the deuteron, the binding energy results from a sizeable cancellation between the kinetic and interaction parts. The mean value of the kinetic energy is ⟨ T ⟩ = 11.2 MeV, while for the interaction we have ⟨ V ⟩ = -13.2 MeV. The confinement interaction dominates and gives the needed attraction to bind the system. If we exclude the OGE we get E=-7.3698 MeV with ⟨ T ⟩ = 20.9 MeV and ⟨ V ⟩ = -28.3 MeV. The potential for the ^1S_0 partial wave is given in Fig. <ref>. The relative wave functions are shown in Fig. <ref>. If we consider b=√(⟨ r_ij^2 ⟩)/√(3) and the reduced mass given by the GEM calculation we get a binding energy in the coupled case of E=-1.81 MeV. With b given by the minimization procedure and the reduced mass given by GEM we get E=-1.9754 MeV. The effect of the different reduced mass is very small, and the effect of the different b parameters dominates. In principle with only one gaussian one should use the value given by the minimization procedure, but this gives us a feeling of the uncertainty due to the simplification of the wave function.
Although the binding energy varies a little bit, in both cases the system is bound. Another possible state is the J^P=2^+ which includes the ^5S_2, ^1D_2, ^5D_2 and ^5G_2 partial waves. None of them are bound. One could expect a bound state for the ^5S_2 partial wave, but in this case one can see that the potential coming from the λ⃗_i ·λ⃗_j has the opposite sign for S=2 with respect to S=0. So if we have attraction for S=0, this implies repulsion for S=2. Higher partial waves are more difficult to bind. Antisymmetry implies L+S= even and parity is given by P=(-1)^L. So for P=+, the spin S has to be even. This means that 1^+ and 3^+ can be only in D or G waves, which will be difficult to bind as was seen for the 0^+ and 2^+ states. In more detail: * We start with the 1^+ state and include ^5 D_1 which is the only partial wave. It should be the same as the ^5D_2 partial wave with the exception of the contribution of the OGE tensor interaction. It does not bind. * For the 3^+ state we have the ^5 D_3 and ^5G_3 partial waves and they do not bind. We give in Fig. <ref> the Fredholm determinant for the 4 different J^+ quantum numbers, where we can see that only the 0^+ channel binds. Regarding possible P=- states, this would imply odd partial waves and odd total spin. We have analyzed the J^P={0^-,1^-,2^-,3^-}, finding no additional bound states. Again, in Fig. <ref> the Fredholm determinant for the 4 different J^- quantum numbers is shown, where we can see that no bound state is predicted. Concerning the Ω_cccΩ_ccc system, the situation is similar to the Ω_bbbΩ_bbb system, and we only find a bound state in the 0^+ channel. The binding energy is E=-0.7104 MeV with a D-state probability of 1.7· 10^-3%. The mean values of kinetic and interaction terms are ⟨ T ⟩ = 7.46 MeV and ⟨ V ⟩ = -8.17 MeV. In this case we used the b parameter from the minimization procedure and the reduced mass from the GEM. Using the reduced mass from the minimization procedure the binding energy changes to E=-0.7288 MeV, and using both parameters from the GEM it changes to E=-0.62 MeV. §.§ Dependence on the model parameters We analyze the dependence on the parameters of the model for the J^P=0^+ state to see in which parameter space region the system will not bind. In all cases we use the minimization procedure to obtain b and μ_ΩΩ. The dependence on the quark mass m_q is shown in Fig. <ref>. Notice that some of the parameters of the potential depend on m_q since we use scale dependent parameters. We see that the system still binds when the quark mass is reduced down to m_q∼ 800-900 MeV. Our model has an effective string tension given by σ = 8/3 a_c μ_c = 0.1537 GeV^2 We plot the parameters b, M_QQQ and E as a function of the string tension in Fig. <ref>. We vary the value of μ_c from 0.15 to 0.85 fm^-1 and leave a_c unchanged so the saturation energy does not change. We see that for higher string tension values (our value is lower than some determinations) the binding energy will increase. Our confinement effective potential is V(r) ∼σ(1-e^-μ_c r)/μ_c We vary the value of μ_c from 0.15 to 0.85 fm^-1 and change a_c so that σ does not change. This is the same interval we used when we changed the string tension σ. The saturation energy changes as σ/μ_c. Notice that the interaction region is ∼√(3) b so if x≡√(3) b μ_c ≪ 1 the potential in the interacting region is basically linear. In this calculation we got x=0.062 to x=0.39 in the charm sector and x=0.039 to x=0.24 in the bottom sector.
For μ_c→ 0 the potential becomes more linear in the interaction region. The results of varying the saturation are shown in Fig. <ref>. We see that the dependence of the properties of the Ω_QQQ, b and M_QQQ, on the saturation point is smaller than on the string tension σ, as one would expect. For the binding energy of the Ω_QQQ dibaryon we also see a smaller dependence. Notice that when μ_c → 0 the binding energy increases, so a linear confinement potential should give more binding. Finally we can study the dependence of the binding energy on the size of the baryon. For that we keep all the parameters unchanged and only vary the parameter b in the RGM calculation. Results are shown in Fig. <ref>. With bigger sizes we get less binding, but b has to be increased by much more than the difference between the sizes of the variational and GEM calculations before binding is lost, which shows that the system would still bind using the exact wave function. This argument is more robust for the bottom sector but it should also work in the charm sector. The result should be seen as an upper bound of the binding energy, since we are using a variational calculation. Also other channels may be involved, but since we are considering the lowest-energy channel, including more channels will provide more attraction. We can conclude that the Chiral Quark Model binds the Ω_QQQΩ_QQQ system in both cases, when Q is a bottom or a charm quark. These molecular states are analogs of two-atom molecules, where the direct interaction is zero for neutral atoms, as in our model for colorless objects. § SUMMARY In this work we have studied the possible existence of fully-heavy dibaryons in the charm and bottom sectors. The main conclusion we found is that, using a wave function which minimizes the mass of the Ω_ccc (Ω_bbb) baryons, the six c quarks or the six b quarks can form bound states with J^P=0^+ quantum numbers. The binding energy of the charm dibaryon is E_b=-0.71 MeV, while in the bottom case the binding energy is slightly larger, E_b=-1.98 MeV, which is reasonable due to the larger mass of the bottom quark. The J^P=0^+ state corresponds to the coupling of ^1S_0 and ^5D_0 partial waves, but with a very small ^5D_0 component. No further bound states are found in other partial waves. This work has been partially funded by EU Horizon 2020 research and innovation program, STRONG-2020 project, under grant agreement no. 824093; Ministerio Español de Ciencia e Innovación under grant nos. PID2022-141910NB-I00 and PID2022-140440NB-C22; and Junta de Andalucía under contract Nos. PAIDI FQM-370 and PCI+D+i under the title: ”Tecnologías avanzadas para la exploración del universo y sus componentes” (Code AST22-0001).
http://arxiv.org/abs/2406.18182v1
20240626085817
Perspective on properties of renormalization schemes at high loops
[ "J. A. Gracey" ]
hep-th
[ "hep-th" ]
LTH 1374 Perspective on properties of renormalization schemes at high loops § INTRODUCTION Over the last decade or so there have been significant developments in the high loop order renormalization of gauge theories. For instance the β-function of Quantum Chromodynamics (QCD) is known to high precision, <cit.>. Results for the renormalization group functions in other schemes such as kinematic ones are not available to as many loops. For instance the QCD β-function in the momentum subtraction (MOM) schemes of Celmaster and Gonsalves, <cit.>, is only available at four loops for the Landau gauge, <cit.>. Once these core quantities are known the next stage is to determine the perturbative expansion of observables to the same order of precision. In this respect while such quantities are invariably evaluated in the scheme there is no a priori reason why this scheme should be preferred over any other. They can equally well be determined in a MOM scheme for instance. In either case the observable will be available to a finite order in the coupling constant expansion and therefore would only be an approximation to the true value at some momentum scale. If one had a high enough number of terms then the theory uncertainty for an experimental measurement ought to be insignificant. Then the question arises as to how to arrive at an uncertainty value for the perturbative series truncation. One idea is to use the discrepancy in the value of the perturbative series when determined in several different schemes. Indeed an exploratory study of this idea was provided recently in <cit.> for the R ratio and the Bjorken sum rule. For example using experimental data for the former quantity in the , the MOM schemes of <cit.> as well as the mini-MOM scheme of <cit.>, estimates from the three and four loop expressions were extracted for α_s^(M_Z), <cit.>. These respectively were 0.13281 ± 0.00197^ +0.01171 _-0.00986 and 0.13185 ± 0.00053^ +0.01072 _-0.00999. Here the error on the average is the envelope of the scheme values and the average value is the centre of the envelope, <cit.>. We record that these estimates are for an idealized situation where resonances and quark mass effects have not been taken into account. The exercise was carried out by ignoring these aspects so that the scheme issue would be the sole focus of the study. It is reasonably apparent that the uncertainty reduces with increasing loop order. While this is encouraging what would be interesting is to include additional schemes in the analysis to check whether this improves the uncertainty in the sense of tightening it. We will review some recent activity in this direction for QCD here by discussing a suite of schemes introduced in <cit.> and extended to five loops in <cit.>. § BASICS We begin by recalling the basics behind defining a renormalization scheme. The Lagrangian is presented in terms of bare or classical variables that are not the optimum ones for describing quantum phenomena due to the presence of infinities. Therefore the variables have to be redefined in terms of renormalized ones which will lead to predictions from the field theory that are devoid of divergences. The procedure to enact this is far from unique. There is a requirement that after renormalization previously divergent Green's functions are finite.
Two criteria are used to define a scheme. First for renormalizable theories the Green's functions that are divergent are evaluated at a specific momentum configuration using the regularized Lagrangian. Then the combination of renormalization constants associated with that function is defined by a specified method called a scheme. The most basic scheme is the minimal subtraction one where only the poles with respect to the regulator are removed by fixing the unknown terms of the relevant renormalization constants. There are other schemes that render the Green's functions finite. This can be illustrated with a simple massless cubic theory with Lagrangian L  = 1/2( ∂_μϕ)^2  + g/6ϕ^3 where the coupling constant is g and the critical dimension is six. In terms of bare variables the divergent Green's functions are Γ_2(p)  = ⟨ϕ_0(p) ϕ_0(-p) ⟩  ,  Γ_3(p_1,p_2,p_3)  = ⟨ϕ_0(p_1) ϕ_0(p_2) ϕ_0(p_3) ⟩ . To illustrate the structure of the Green's functions after renormalization we note that for Γ_2(p) it will take the two following forms Γ_2(p,-p) |_p^2=μ^2 = { μ^2 [ 1  + ∑_n=1^∞ a_n g^2n]  ;  μ^2 } after renormalization in the and MOM schemes respectively where a_n are finite contributions. Here MOM denotes the momentum subtraction scheme of <cit.> which has the prescription that the Green's function takes its tree value at the subtraction point. For the vertex function there are many more potential schemes given the larger choice of subtraction points. For example one can nullify an external momentum, which is infrared safe in six dimensions despite being an exceptional configuration, which introduces the schemes of <cit.> defined by Γ_3(p,-p,0) |_p^2=μ^2 = { g  + ∑_n=1^∞ b_n g^2n+1  ;  g } Equally there are schemes for non-exceptional configurations one of which is the symmetric point one considered in <cit.> and defined by Γ_3(p_1,p_2,-p_1-p_2) |_p_i^2=μ^2 = { g  + ∑_n=1^∞ c_n g^2n+1  ;  g } for i = 1 and 2 with p_3^2 = μ^2 and c_n are constants like b_n but different in value since the subtraction points are not equivalent. The MOM scheme given in (<ref>) is not unique since others can be constructed using different momentum configurations. For instance the interpolating momentum () subtraction scheme of <cit.> defined by Γ_3(p_1,p_2,p_3) |_p_1^2=p_2^2=μ^2,p_3^2=ωμ^2 = { g  + ∑_n=1^∞ d_n g^2n+1  ;  g } is one such scheme that depends on the parameter ω defined for instance by ω = p_3^2/p_1^2 = p_3^2/p_2^2 where the ω → 1 limit recovers the symmetric point of <cit.>. However the most general situation for a 3-point vertex would regard these two dimensionless momentum ratios as independent. The wide variety of ways the coupling can be renormalized opens up the possibility of having hybrid schemes. These can be constructed where for example the 2-point function is rendered finite in say but the vertex function is made finite in one of the momentum subtraction schemes or its generalization to a 2-variable scheme. Finally it is worth noting that the β-function of each of the schemes such as MOM, and carries information about the subtraction point itself via the finite parts that are related to a_n, b_n, c_n and d_n. This first becomes evident at three loops in a single coupling theory or two loops in a gauge theory for a non-zero covariant gauge parameter which produces a β-function which is gauge parameter dependent.
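To make the distinction between the two prescriptions concrete, the following toy sketch (a generic one-loop two-point function with symbolic pole and finite parts, not the actual ϕ^3 expressions) shows how the same divergent amplitude is rendered finite by a minimal and by a momentum-type subtraction:

import sympy as sp

eps, a, p2 = sp.symbols('epsilon a p2', positive=True)
A, B = sp.symbols('A B')          # generic one-loop pole and finite parts

# Toy regularized two-point function at the subtraction point p^2 = mu^2:
#   Gamma_2 = p^2 [ 1 + a ( A/eps + B ) ] + O(a^2)
gamma2_bare = p2 * (1 + a*(A/eps + B))

Z_min = 1 - a*A/eps               # minimal subtraction: remove the pole only
Z_mom = 1 - a*(A/eps + B)         # MOM-type: remove pole and finite part as well

to_order_a = lambda expr: sp.series(sp.expand(expr), a, 0, 2).removeO()

print(to_order_a(Z_min*gamma2_bare))   # -> p2*(1 + a*B): finite, with an a_n-type remainder
print(to_order_a(Z_mom*gamma2_bare))   # -> p2: the tree value at the subtraction point

The surviving finite pieces in the first case are precisely the a_n-type contributions referred to above, while the MOM-type prescription enforces the tree value at the subtraction point by construction.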
§ EXAMPLES To illustrate some of the properties of the scheme developed in <cit.> for QCD we have calculated the renormalization group functions in that scheme for ϕ^3 theory to five loops. Two components are required to achieve this. One is the explicit form of the five loop renormalization group functions, which are already available in <cit.>, and the other is the same quantities in the scheme as well as their corresponding four loop renormalization constants. The latter were computed recently in <cit.> using the Forcer package provided in <cit.> written in Form, <cit.>. To use Forcer for six dimensional computations required the determination of the Forcer masters to high order in the ϵ expansion in d = 6 - 2ϵ dimensions as (<ref>) is dimensionally regularized. The masters were deduced via the Tarasov method, <cit.>, and provided in <cit.> up to weight 9 to be on an equivalent level to the four dimensional ones of the original package, <cit.>. The five loop renormalization group functions were then deduced via properties of the renormalization group equation. For instance the couplings in the respective schemes are related by g_(μ)  = Z_g^/Z_g^ g_(μ) where Z_g^ ≡  Z_g^( a_(a_) ) and a = g^2. Then the β-functions are related by β_^ϕ^3 ( a_ )  = [ β_^ϕ^3( a_ ) ∂ a_/∂ a_]_→ where the mapping on the right hand side means the coupling is replaced by the inverse relation to (<ref>). The field anomalous dimension can be deduced by a similar equation. Consequently we have, <cit.>, β_^ϕ^3(a) = 3/4 a^2 - 125/144 a^3 + [ - 1296 ζ_3 + 26741 ] a^4/10368 +  [ - 1370736 ζ_3 + 2177280 ζ_5 - 2304049 ] a^5/186624 +  [ 389670912 ζ_3^2 + 3307195440 ζ_3 + 89151840 ζ_5       - 5640570432 ζ_7 + 2190456157 ] a^6/26873856 +  O(a^7) where ζ_n is the Riemann zeta function. One interesting property is manifest and that is the absence of the even zetas, ζ_4 and ζ_6, which are present in the scheme β-function. Indeed this property is not restricted to (<ref>) as it has been noted previously in QCD in <cit.> and more recently checked for the core renormalization group functions, <cit.>. More generally the criteria for the absence of π in β-functions was formulated in the no-π theorem, <cit.>, having been motivated by observations in <cit.>, and also discussed more recently in <cit.> in the multicoupling context. One application of the six dimensional Forcer masters is to use them to explore a generalization of the scheme definition. That scheme removed the finite part of the 2- and 3-point functions at the subtraction point, where one leg of the latter had a nullified momentum, in addition to the poles in ϵ. This can be modified to the case where all the higher order powers in the ϵ expansion are removed too with the scheme being designated the maximal subtraction scheme and labelled by , <cit.>. The result of renormalizing ϕ^3 theory in this scheme is to produce renormalization group functions in six dimensions which are formally equivalent to those of the scheme. However this is not the case in the regularized theory where ϵ ≠ 0 when β_^ϕ^3(a) ≠ β_^ϕ^3(a) since the coefficient of the O(ϵ^n) term of a renormalization group function is related to the coefficient of the O(ϵ^n-1) term of the corresponding renormalization constant for n ≥ 1. This ϵ dependence in the β-function and other renormalization group functions is necessary to ensure that the critical exponent ω̂ = β^'(a^⋆) is the same in all schemes where a^⋆ is the Wilson-Fisher fixed point in d-dimensions. 
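The chain of steps just described, namely mapping the coupling, inverting the map and transforming the β-function, can be prototyped directly in sympy. The sketch below uses generic placeholder coefficients for the map and for the original β-function rather than the actual ϕ^3 or QCD values; it only illustrates the mechanics of the scheme change and makes the two-loop scheme independence of a single-coupling β-function visible.

from sympy import symbols, expand, solve, diff

a, aS = symbols('a a_S')
c1, c2 = symbols('c1 c2')            # placeholder finite-part data defining the new scheme
b0, b1, b2 = symbols('b0 b1 b2')     # placeholder beta-function coefficients in the original scheme

aS_of_a = a + c1*a**2 + c2*a**3      # truncated coupling map a_S(a)
beta_old = b0*a**2 + b1*a**3 + b2*a**4

# Invert the map order by order: a(a_S) = a_S + d1*a_S^2 + d2*a_S^3 + ...
d1, d2 = symbols('d1 d2')
ansatz = aS + d1*aS**2 + d2*aS**3
eq = expand(aS_of_a.subs(a, ansatz)) - aS
inv = solve([eq.coeff(aS, 2), eq.coeff(aS, 3)], [d1, d2])
a_of_aS = ansatz.subs(inv)

# beta_S(a_S) = [beta(a) * d a_S/d a] with a -> a(a_S), truncated to the trustworthy order
beta_new = expand((beta_old*diff(aS_of_a, a)).subs(a, a_of_aS))
beta_new = sum(beta_new.coeff(aS, k)*aS**k for k in (2, 3, 4))

print(beta_new)   # b0 and b1 are unchanged: the first two coefficients are scheme independent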
For (<ref>) we note that ω̂ only depends on rationals and ζ_n for 3 ≤ n ≤ 5 to O(ϵ^5). The scheme is not restricted to (<ref>) and was actually introduced for QCD in <cit.> being developed originally at two loops. As QCD has more than one cubic vertex with several involving different fields means there are several ways of nullifying external legs leading to quite a few schemes. Subsequently the β-functions of several of these schemes were provided to four loops <cit.>. More recently the renormalization group functions for eight possible schemes were determined to five loops in <cit.>. The same renormalization group method as discussed for ϕ^3 theory above was used. In other words QCD was renormalized to four loops in the respective schemes using the bare 2- and 3-point functions provided in <cit.> which were computed using the Forcer package. Since the five loop QCD scheme renormalization group functions are available for an arbitrary colour group in <cit.> all the ingredients are known to deduce the same data for the schemes of <cit.>. For instance, the Yang-Mills (YM) β-function for the scheme is, <cit.>, β^_ (a,0) = - 11/3 C_A a^2 - 34/3 C_A^2 a^3 + [ - 6499/48 C_A^3 + 253/12ζ_3 C_A^3 ] a^4 + [ - 10981313/5184 C_A^4 - 3707/8ζ_3 d_A^abcd d_A^abcd/ - 8/9d_A^abcd d_A^abcd/. .        + 6215/24ζ_5 d_A^abcd d_A^abcd/ + 97405/576ζ_5 C_A^4 + 1116929/1728ζ_3 C_A^4 ] a^5 + [ - 8598255605/165888 C_A^5 - 1161130663/73728ζ_7 C_A^5 . .        - 35208635/3072ζ_7 C_A d_A^abcd d_A^abcd/ - 28905223/2304ζ_3 C_A d_A^abcd d_A^abcd/. .        - 15922907/9216ζ_3^2 C_A^5 + 131849/3456 C_A d_A^abcd d_A^abcd/. .        + 4595789/384ζ_3^2 C_A d_A^abcd d_A^abcd/ + 7284505/1152ζ_5 C_A d_A^abcd d_A^abcd/. .        + 30643529/2048ζ_3 C_A^5 + 1667817635/55296ζ_5 C_A^5 ] a^6  +  O(a^7) where the second argument of the β-function is the gauge parameter α, denotes a scheme based on the triple gluon vertex, C_A, C_F, T_F are the usual colour factors and d_A^abcd d_A^abcd is the rank four Casimir in the adjoint representation of dimension . Clearly (<ref>) is devoid of even zetas as are all the other five loop QCD renormalization group functions, <cit.>. § GENERALITIES Having provided an instance of a class of schemes with particular properties it is worthwhile considering scheme changes from a more general perspective. First we define the coupling renormalization constant and that of another scheme S by Z_g  =  1  + ∑_n=1^∞∑_m=1^n z_g nma^n/ϵ^m    ,     Z_g^ S =  1  + ∑_n=1^∞∑_m=0^n z_g nm^ Sa_ S^n/ϵ^m where there are finite contributions at each loop order in the scheme S. The respective couplings are perturbatively related by a_ S = ∑_n=0^∞ c_n a^n+1 . It is straightforward to show the connection the coefficients have with Z_g^ S with the few terms given by c_0 = 1    ,    c_1  =  -  2 z_g 10^ S   ,    c_2  =  7 (z_g 10^ S)^2  -  2 z_g 20^ S c_3 = -  30 (z_g 10^ S)^3  +  18 z_g 10^ S z_g 20^ S -  2 z_g 30^ S illustrating that the c_i depend solely on the finite parts of Z_g^ S. For instance the coupling constant mapping from the scheme to the scheme in the Landau gauge is, <cit.>, a_ = a + 16 a^2 + [ 93427/192 - 169/4ζ_3 ] a^3 + [ 129114635/6912 - 1822913/576ζ_3 - 124835/192ζ_5 ] a^4 + [ 4050665663/4608 - 393488663/2304ζ_3 + 980775/512ζ_3^2 . .       + 1055749471/36864ζ_7 - 1387483355/9216ζ_5 + 1335 ζ_4 ] a^5 + O(a^6) for the SU(3) colour group and three active quarks. While this contains ζ_4 at four loops it is known that this term is key to the absence of ζ_4 in the renormalization group functions. 
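For orientation, the coupling constant map just quoted can be evaluated numerically. The short Python sketch below, using mpmath for the zeta values, converts the exact coefficients above (SU(3), three active quarks, Landau gauge) to floating point and applies the truncated series to a sample value of the expansion parameter. The sample value is illustrative, the normalization of a follows the source's conventions, and the scheme labels themselves were lost in the extraction of the equation.

from mpmath import zeta, mpf

z3, z4, z5, z7 = zeta(3), zeta(4), zeta(5), zeta(7)

# Coefficients of a^1 ... a^5 in the Landau-gauge map quoted above (SU(3), three active quarks)
coeffs = [
    mpf(1),
    mpf(16),
    mpf(93427)/192 - mpf(169)/4*z3,
    mpf(129114635)/6912 - mpf(1822913)/576*z3 - mpf(124835)/192*z5,
    (mpf(4050665663)/4608 - mpf(393488663)/2304*z3 + mpf(980775)/512*z3**2
     + mpf(1055749471)/36864*z7 - mpf(1387483355)/9216*z5 + 1335*z4),
]

for k, ck in enumerate(coeffs, start=1):
    print(f"a^{k}: {float(ck):+.6e}")

a_ref = 0.01                     # illustrative value of the reference-scheme expansion parameter
print("mapped coupling:", float(sum(ck*a_ref**k for k, ck in enumerate(coeffs, start=1))))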
The location of the various ζ_n terms in (<ref>) is mirrored in the coupling constant maps for the other QCD schemes. One reason for the example rests in the potential connection with the C-scheme of <cit.>. That is a scheme which has its roots in the relation of the Λ parameters of two different schemes being related exactly by a one loop calculation <cit.>. The relation depends on the one loop finite part of the coupling renormalization or z_g 10^ S for an arbitrary scheme. In <cit.> the four loop coupling constant map is provided for the C-scheme to for SU(3) and three quark flavours. It too shares the property of the QCD scheme renormalization group functions in that no even zetas appear to five loops in various physical observables. So there is a possibility that one of the schemes could correspond to the C-scheme. The specifics of the C-scheme renormalization prescription have not been recorded. Instead only the coupling constant map has been provided, <cit.>, for three quarks. In particular, <cit.>, a_C(a)  =  a  - 9/4 C a^2  - [ 3397/2592 + 4C - 81/16 C^2 ] a^3  +  O(a^4) where C is a free parameter within the C-scheme framework that can be tuned to reduce uncertainties on observables. Its origin can be traced back to the Λ ratio between two schemes. In that respect it is akin to z_g 10^ S and therefore we can use (<ref>) to see if a connection can be made to one of the schemes. Examining (<ref>) we note that ζ_3 appears at O(a^3) but is not present in (<ref>) at the same order. A ζ_3 could be manufactured with a suitable choice of C but that would mean ζ_3 would be present at O(a^2). There are no such contributions in any of the QCD scheme mappings at that order even when we consider those with a non-zero gauge parameter. So we believe the C-scheme does not correspond to any of the schemes. However what all the schemes and C-scheme coupling mappings have in common is the ζ_4 term at O(a^5) in (<ref>) with the same coefficient. It can be shown, <cit.>, that this term is directly responsible for the absence of ζ_4 in the β-functions of these schemes. We can also consider another general approach but in a different direction which is to extend the scheme. Instead of a scheme depending on one variable ω this can be replaced by the two independent parameters x  = p_1^2/p_3^2   ,    y  = p_2^2/p_3^2 for 3-point vertices. Consequently the renormalization functions will depend on x and y. For example the two loop Feynman gauge SU(3) Yang-Mills β-function derived from the quark-gluon vertex is . β^qqg_xy(a,1) |_ = -  11 a^2 + [ [ 9 x^3 - 9 x^2 y - 54 x^2 - 9 x y^2 + 64 x y + 81 x + 9 y^3 . .         - 54 y^2 + 81 y - 36 ] Φ_1(x,y) Δ - 1064/5Δ^2 . .       - [ 27 x^2 - 68 xy - 54 x + 41 y^2 - 68 xy + 27 ] ln(xy) Δ] 5a^3/12Δ^2 where Φ_1(x,y) = 1/λ[ 2 _2(-ρ x) + 2 _2(-ρ y) + ln( y/x) ln( (1+ρ y)/(1+ρ x)) . .       + ln(ρ x) ln(ρ y) + π^2/3] with Δ(x,y) = x^2  -  2 x y  +  y^2  -  2 x  -  2 y  +  1 λ(x,y) = √(Δ)   ,   ρ(x,y)  = 2/[1-x-y+λ(x,y)] . The x and y dependence in the two loop term does not contradict any general properties since the β-function is gauge dependent. It is only in the scheme of a single coupling theory that the β-function is scheme independent to two loops. In schemes such as that which produced (<ref>) one can determine the perturbative expansion of observables as a function of x and y. These parameters can then be varied to explore the uncertainty properties of the observable. 
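Reading _2 as the dilogarithm Li_2, the auxiliary function Φ_1 and its arguments can be coded directly from the definitions above. A minimal mpmath sketch, valid as written in the region where Δ(x,y) > 0 (an analytic continuation is needed elsewhere), is:

from mpmath import mp, polylog, log, sqrt, pi

mp.dps = 30

def Delta(x, y):
    return x**2 - 2*x*y + y**2 - 2*x - 2*y + 1

def Phi1(x, y):
    lam = sqrt(Delta(x, y))
    rho = 2/(1 - x - y + lam)
    return (1/lam)*(2*polylog(2, -rho*x) + 2*polylog(2, -rho*y)
                    + log(y/x)*log((1 + rho*y)/(1 + rho*x))
                    + log(rho*x)*log(rho*y) + pi**2/3)

# Example point inside the region Delta > 0
print(Phi1(0.1, 0.2))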
In this respect x and y play a similar role to that of the free parameter C of the C-scheme but have a different origin. Moreover they could be restricted to a particular domain by constraints on the kinematics. § CONCLUSIONS One of the main observations is the renormalization group equations of both QCD and scalar ϕ^3 theory are free of even zetas up to five loops in the schemes. Moreover it has been shown that in the critical dimension of the latter theory the and scheme renormalization group functions are equivalent. Indeed given the nature of both scheme definitions this property is probably applicable to theories other than ϕ^3 theory. In terms of usefulness of the results the availability of data on more schemes will mean that the estimate on the uncertainty deriving from the truncation of the perturbative expansion of an observable could in principle be improved. Finally while it is encouraging that the no-π theorem of <cit.> appears to hold to five loops in the schemes of <cit.> it remains to be seen whether this continues to high loop order. There is evidence that the theorem may breakdown at very high loop order from the analysis of <cit.>[We are grateful for David Broadhurst's comments on this point.]. This work was carried out with the support of the STFC Consolidated Grant ST/T000988/1. For the purpose of open access, the author has applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising. 99 1 D.J. Gross & F.J. Wilczek, Ultraviolet behavior of nonabelian gauge theories, https://doi.org/10.1103/PhysRevLett.30.1343 Phys. Rev. Lett. 30 (1973), 1343. 2 H.D. Politzer, Reliable perturbative results for strong interactions?, https://doi.org/10.1103/PhysRevLett.30.1346 Phys. Rev. Lett. 30 (1973), 1346. 3 W.E. Caswell, Asymptotic behavior of nonabelian gauge theories to two loop order, https://doi.org/10.1103/PhysRevLett.33.244 Phys. Rev. Lett. 33 (1974), 244. 4 D.R.T. Jones, Two loop diagrams in Yang-Mills theory, https://doi.org/10.1016/0550-3213(74)90093-5 Nucl. Phys. B75 (1974), 531. 5 O.V. Tarasov, A.A. Vladimirov & A.Yu. Zharkov, The Gell-Mann-Low function of QCD in the three loop approximation, https://doi.org/10.1016/0370-2693(80)90358-5 Phys. Lett. B93 (1980), 429. 6 T. van Ritbergen, J.A.M. Vermaseren & S.A. Larin, The four loop beta function in quantum chromodynamics, https://doi.org/10.1016/S0370-2693(97)00370-5 Phys. Lett. B400 (1997), 379 [hep-ph/9701390]. 7 P.A. Baikov, K.G. Chetyrkin & J.H. Kühn, Five-loop running of the QCD coupling constant, https://doi.org/10.1103/PhysRevLett.118.082002 Phys. Rev. Lett. 118 (2017), 082002 [arXiv:1606.08659]. 8 F. Herzog, B. Ruijl, T. Ueda, J.A.M. Vermaseren & A. Vogt, The five-loop beta function of Yang-Mills theory with fermions, https://doi.org/10.1007/JHEP02(2017)090 JHEP 02 (2017), 090 [arXiv:1701.01404]. 9 T. Luthe, A. Maier, P. Marquard & Y. Schröder, The five-loop beta function for a general gauge group and anomalous dimensions beyond Feynman gauge, https://doi.org/10.1007/JHEP10(2017)166 JHEP 10 (2017), 166 [arXiv:1709.07718]. 10 K.G. Chetyrkin, G. Falcioni, F. Herzog & J.A.M. Vermaseren, Five-loop renormalisation of QCD in covariant gauges, http://doi.org/10.1007/JHEP10(2017)179 JHEP 10 (2017), 179 [arXiv:1709.08514]. 11 W. Celmaster & R.J. Gonsalves, The renormalization prescription dependence of the QCD coupling constant, http://doi.org/10.1103/PhysRevD.20.1420 Phys. Rev. D20 (1979), 1420. 12 J.A. 
Gracey, Two loop QCD vertices at the symmetric point, https://doi.org/10.1103/PhysRevD.84.085011 Phys. Rev. D84 (2011), 085011 [arXiv:1108.4806]. 13 A. Bednyakov & A. Pikelner, Four-loop QCD MOM beta functions from the three-loop vertices at the symmetric point, https://doi.org/10.1103/PhysRevD.101.071502 Phys. Rev. D101 (2020), 071502(R) [arXiv:2002.02875]. 14 R.M. Mason & J.A. Gracey, Kinematic scheme study of the O(a^4) Bjorken sum rule and R ratio, https://doi.org/10.1103/PhysRevD.108.116018 Phys. Rev. D108 (2023), 116018 [arXiv:2309.17112]. 15 L. von Smekal, K. Maltman & A. Sternbeck, The strong coupling and its running to four loops in a minimal MOM scheme, https://doi.org/10.1016/j.physletb.2009.10.030 Phys. Lett. B681 (2009), 336 [arXiv:0903.1696]. 16 E. Braaten & J.P. Leveille, Minimal subtraction and momentum subtraction in QCD at two loop order, https://doi.org/10.1103/PhysRevD.24.1369 Phys. Rev. D24 (1981), 1369. 17 J.A. Gracey, Explicit no-π^2 renormalization schemes in QCD at five loops, https://doi.org/10.1103/PhysRevD.109.036015 Phys. Rev. D109 (2024), 035015 [arXiv:2311.13484]. 18 C. Sturm, Y. Aoki, N.H. Christ, T. Izubuchi, C.T.C. Sachrajda & A. Soni, Renormalization of quark bilinear operators in a momentum-subtraction scheme with a nonexceptional subtraction point, https://doi.org/10.1103/PhysRevD.80.014501 Phys. Rev. D80 (2009), 014501 [arXiv:0901.2599]. 19 M. Kompaniets & A. Pikelner, Minimally subtracted six loop renormalization of O(n)-symmetric ϕ^4 theory and critical exponents, Phys. Lett. B817 (2021), 136331 [arXiv:2101.10018]. 20 M. Borinsky, J.A. Gracey, M.V. Kompaniets & O. Schnetz, Five loop renormalization of ϕ^3 theory with applications to the Lee-Yang edge singularity and percolation theory, https://doi.org/10.1103/PhysRevD.103.116024 Phys. Rev. D103 (2021), 116024 [arXiv:2103.16224]. 21 J.A. Gracey, Four loop renormalization in six dimensions using Forcer, https://arxiv.org/abs/2405.00413 arXiv:2405.00413 [hep-th]. 22 B. Ruijl, T. Ueda & J.A.M. Vermaseren, Forcer, a Form program for the parametric reduction of four-loop massless propagator diagrams, https://doi.org/10.1016/j.cpc.2020.107198 Comput. Phys. Commun. 253 (2020), 107198 [arXiv:1704.06650]. 23 J.A.M. Vermaseren, New features of Form, https://arxiv.org/abs/math-ph/0010025 math-ph/0010025. 24 O.V. Tarasov, Connection between Feynman integrals having different values of the space-time dimension, https://doi.org/10.1103/PhysRevD.54.6479 Phys. Rev. D54 (1996), 6479 [hep-ph/9606018]. 25 O.V. Tarasov, Generalized recurrence relations for two loop propagator integrals with arbitrary masses, https://doi.org/10.1016/S0550-3213(97)00376-3 Nucl. Phys. B502 (1997), 455 [hep-ph/9703319]. 26 K.G. Chetyrkin & A. Rétey, Renormalization and running of quark mass and field in the regularization invariant and schemes at three loops and four loops, https://doi.org/10.1016/S0550-3213(00)00331-X Nucl. Phys. B583 (2000), 3 [hep-ph/9910332]. 27 K.G. Chetyrkin & A. Rétey, Three-loop and three-linear vertices and four-loop β functions in massless QCD, https://arxiv.org/abs/hep-ph/0007088 hep-ph/0007088. 28 P.A. Baikov & K.G. Chetyrkin, The structure of generic anomalous dimensions and no-π theorem for massless propagators, https://doi.org/10.1007/JHEP06(2018)141 JHEP 06 (2018), 141 [arXiv:1804.10088]. 29 P.A. Baikov & K.G.
Chetyrkin, Transcendental structure of multiloop massless correlators and anomalous dimensions, https://doi.org/10.1007/JHEP10(2019)190 JHEP 10 (2019), 190 [arXiv:1908.03012]. 30 P.A. Baikov & K.G. Chetyrkin, Four loop massless propagators: an algebraic evaluation of all master integrals, https://doi.org/10.1016/j.nuclphysb.2010.05.004 Nucl. Phys. B837 (2010), 186 [arXiv:1004.1153]. 31 I. Jack, No-π schemes for multicoupling theories, https://doi.org/10.1103/PhysRevD.109.045007 Phys. Rev. D109 (2024), 045007 [arXiv:2311.12766]. 32 B. Ruijl, T. Ueda, J.A.M. Vermaseren & A. Vogt, Four-loop QCD propagators and vertices with one vanishing external momentum, https://doi.org/10.1007/JHEP06(2017)040 JHEP 06 (2017), 040 [arXiv:1703.08532]. 33 D. Boito, M. Jamin & R. Miravitllas, Scheme variation of the QCD coupling and hadronic τ decays, https://doi.org/10.1103/PhysRevLett.117.152001 Phys. Rev. Lett. 117 (2016), 152001 [arXiv:1606.06175]. 34 O. Schnetz, Numbers and functions in quantum field theory, https://doi.org/10.1103/PhysRevD.97.085018 Phys. Rev. D97 (2018), 085018 [arXiv:1606.08598].
http://arxiv.org/abs/2406.17660v1
20240625155032
Grass: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients
[ "Aashiq Muhamed", "Oscar Li", "David Woodruff", "Mona Diab", "Virginia Smith" ]
cs.LG
[ "cs.LG" ]
Neuro-Modeling Infused EMT Analytics Qing Shen, Graduate Student Member, IEEE, Yifan Zhou, Member, IEEE, Peng Zhang, Yacov A. Shamash,  Fellow, IEEE, Xiaochuan Luo, Senior Member, IEEE, Bin Wang, Senior Member, IEEE, Huanfeng Zhao, Member, IEEE, Roshan Sharma,  Member, IEEE, Bo Chen,  Member, IEEE This work was supported in part by the National Science Foundation under Grant No. ITE-2134840 and in part by ISO New England. This work relates to the Department of Navy award N00014-24-1-2287 issued by the Office of Naval Research. The U.S. Government has a royalty-free license throughout the world in all copyrightable material contained herein. Q. Shen, Y. Zhou, P. Zhang, Y. A. Shamash and H. Zhao are with the Department of Electrical and Computer Engineering, Stony Brook University, NY, USA (e-mails: qing.shen, yifan.zhou.1, p.zhang, yacov.shamash@stonybrook.edu, huanfengzhao@gmail.com). X. Luo and B. Wang are with ISO New England, Holyoke, MA, USA (e-mails: xluo, bwang@iso-ne.com). R. Sharma and B. Chen are with Commonwealth Edison, Chicago, IL, USA (e-mails: roshan.sharma, bo.chen@comed.com). ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT =-1 Large language model (LLM) training and finetuning are often bottlenecked by limited GPU memory. While existing projection-based optimization methods address this by projecting gradients into a lower-dimensional subspace to reduce optimizer state memory, they typically rely on dense projection matrices, which can introduce computational and memory overheads. In this work, we propose Grass (GRAdient Stuctured Sparsification), a novel approach that leverages sparse projections to transform gradients into structured sparse updates. This design not only significantly reduces memory usage for optimizer states but also minimizes gradient memory footprint, computation, and communication costs, leading to substantial throughput improvements. Extensive experiments on pretraining and finetuning tasks demonstrate that Grass achieves competitive performance to full-rank training and existing projection-based methods. Notably, Grass enables half-precision pretraining of a 13B parameter LLaMA model on a single 40GB A100 GPU—a feat infeasible for previous methods—and yields up to a 2× throughput improvement on an 8-GPU system. Code can be found at <https://github.com/aashiqmuhamed/GRASS>. 
§ INTRODUCTION =-1 Pretraining and finetuning large language models (LLMs) are often memory bottlenecked: storing model parameters, activations, gradients, and optimizer states in GPU memory is prohibitively expensive. As an example, pretraining a LLaMA-13B model from scratch under full bfloat16 precision with a token batch size of 256 requires at least 102 GB memory (24GB for trainable parameters, 49GB for Adam optimizer states, 24GB for weight gradients, and 2GB for activations), making training infeasible even on professional-grade GPUs such as Nvidia A100 with 80GB memory <cit.>. Existing memory efficient system-level techniques like DeepSpeed optimizer sharding/offloading <cit.> and gradient checkpointing <cit.> trade off throughput for memory advantages which slow down pretraining. As models scale, the memory and compute demands of increasingly large LLMs continue to outpace hardware advancements, highlighting the need for advances in optimization algorithms beyond system-level techniques. =-1 Various optimization techniques have been proposed to enhance the efficiency of LLM training. One prominent approach is parameter-efficient finetuning (PEFT), such as Low-Rank Adaptation (LoRA), which reparameterizes weight matrices using low-rank adaptors <cit.>. This significantly reduces the number of trainable parameters, yielding smaller optimizer states and gradients. However, despite its efficiency, LoRA and its derivatives <cit.> often underperform compared to full-rank finetuning <cit.>. Variants like ReLoRA <cit.> extend LoRA to pretraining by periodically updating the full matrix with new low-rank updates, but it still requires a costly initial full-rank training warmup which makes it impractical in memory-constrained scenarios. =-1 To allow for full-rank pretraining and finetuning, another approach for memory-efficient LLM training involves designing adaptive optimizers <cit.>. One such class, memory-efficient subspace optimizers, utilizes projection matrices (P) to project high-dimensional gradients into a lower-dimensional space and performs optimization within the subspace. This projection significantly reduces the memory footprint required to store optimizer states. Existing methods such as <cit.> and <cit.> employ dense projection matrices, which introduce additional memory and computational overhead. In contrast, we employ structured sparse matrices for P, demonstrating their advantages in memory, computation, and communication efficiency across both pretraining and finetuning. Our main contributions include: =-1 * We introduce Grass, a novel method that enables full parameter training of LLMs with structured sparse gradients. By leveraging sparse projection matrices, Grass significantly reduces memory consumption and communication overhead compared to existing projection-based optimization techniques. We theoretically motivate and empirically analyze effective ways to construct the sparse projection matrix for Grass. =-1 * We conduct extensive experiments on both pretraining and finetuning tasks, demonstrating that Grass converges faster in wall-clock time than existing projection-based methods due to its additional compute efficiency benefits. Grass exhibits minimal performance degradation (<0.1 perplexity gap) compared to full-rank training on the 1B parameter LLaMA model while achieving a 2.5× reduction in memory footprint. 
=-1 * We present an optimized PyTorch implementation of Grass for modern hardware, incorporating implementation tricks to enhance training throughput, stability, and scalability. For pretraining a 1B LLaMA model, Grass achieves a 25% throughput increase on a single GPU and up to a 2× throughput improvement on 8 GPUs over full-rank training and . Furthermore, Grass's low memory footprint enables half-precision training of a 13B LLaMA model on a single 40GB A100 GPU, a feat that existing projection-based optimization methods cannot achieve. § A UNIFIED VIEW OF MEMORY-EFFICIENT SUBSPACE OPTIMIZERS (MESO) =-1 High memory usage of full-rank training. Standard full-rank training of the weight matrix W ∈ℝ^m × n in any linear layer of an LLM involves 1) computing the full-parameter gradient G_W ∇ L(W) and 2) using it to update the model weights and optimizer states: 1 S^(t+1), Δ W^(t)← (S^(t), ∇ L(W^(t))) W^(t+1)← W^(t) + Δ W^(t) Here, denotes the optimizer's update function, which uses the current optimizer state S^(t) and the gradient to compute the updated state S^(t+1) and a learning-rate-adjusted weight update Δ W^(t) (see Appendix <ref> for the pseudocode for the Adam optimizer). However, storing both the gradient and optimizer state incurs significant memory overhead – for example, an additional 3mn floats for Adam – motivating the need for more memory-efficient optimization techniques. We discuss these techniques in the following sections, while Appendix <ref> covers additional related work. =-1 Memory-efficient optimization in a subspace. To minimize the memory usage of the optimizer state, memory-efficient subspace optimizers () restrict the optimization to a subspace defined by a projection matrix P ∈ℝ^m × r (r ≪ m) through the following objective: min_A ∈ℝ^r × n L(W_0 + PA). Applying an off-the-shelf optimizer like Adam to learn the smaller matrix A reduces the optimizer state size to O(rn), which can be much smaller than the O(mn) used in full-rank training. We provide the pseudocode of this optimization procedure in Algorithm <ref>, which unifies both existing methods and our proposed method[This algorithm version never materializes the A matrix, but is equivalent as we show in Appendix <ref>.]. We highlight the key parts of this algorithmic framework below. =-1 Computing the projection matrix, [baseline][fill=green!20,anchor=base, inner sep=0pt, outer sep=0pt]_P. Employing a fixed P throughout training confines the search to its column space, limiting the learned model's expressiveness. To address this, MeSO methods periodically recompute P every K iterations with different choices (Algorithm <ref>): <cit.> independently samples each entry of P from 𝒩(0, 1/r), whereas Grass <cit.> sets P to be the top-r left singular vectors of the full-parameter gradient matrix ∇ L(W) obtained through a Singular Vector Decomposition (SVD). Despite these differences, a commonality among prior works is the choice of dense matrices for P. In our work, we explore the use of sparse matrices as an alternative and propose several principled choices for such matrices in Section <ref>. =-1 Optimizer state update, [baseline][fill=yellow!20,anchor=base, inner sep=0pt, outer sep=0pt] . Updating P can modify the subspace optimization landscape. Different methods have proposed distinct strategies for updating the existing optimizer state S^(t). We describe our strategy in Section <ref>. =-1 Projection of the full gradient, [baseline][fill=orange!20,anchor=base, inner sep=0pt, outer sep=0pt]P^⊤∇ L(W^(t)). 
MeSO methods require projecting the m × n full parameter gradient matrix ∇ L(W^(t)) into a lower-dimensional subspace r × n via left multiplication with P^⊤. Existing methods compute this projection by first materializing the full gradient matrix ∇ L(W^(t)) in memory before performing the left projection multiplication. In contrast, leverages the associative property of matrix multiplication and the sparse structure of P to compute this projection without materializing the full gradient. This yields considerable computational and memory savings, detailed in Section <ref>. These efficiencies also extend to the weight update step, [baseline] [fill=blue!20,anchor=base, inner sep=0pt, outer sep=0pt]W^(t) + α P Δ^(t+1);, due to the sparsity of P. Here, the scale factor α (also used in ) adjusts the effective learning rate of these linear layer weight matrices relative to other trainable model parameters. § 1011.5: A MORE-EFFICIENT 1011.5MESO OPTIMIZER =-1 Unlike prior MeSO methods that employ dense projection matrices, (GRAdient Structured Sparsification) utilizes a sparse projection matrix P ∈ℝ^m × r, where each column p_j ∈ℝ^m has at most one non-zero entry (p_j_0 ≤ 1, ∀ j ∈ [r]). This structure effectively constrains the subspace optimization to update only r rows of the full weight matrix W, inducing structured row-sparsity in the gradients – hence the name . By periodically updating P, learns different rows of W in different iterations, resembling a generalized form of coordinate gradient descent. We dive into the efficiency benefits of this sparse projection and various methods for constructing P in the following subsections. §.§ Efficiency gains of =-1 Efficient Storage of P. In Grass, the sparse projection operator P^⊤∈ℝ^r × m can be expressed as the product of a diagonal scaling matrix ρ∈ℝ^r × r and a binary selection matrix B ∈{0, 1}^r × m which selects a single j-th row in G_W for its i-th row B_ij = 1. Both ρ and B can be efficiently stored using r instead of mr floats, making more memory-efficient in optimizer-related storage (Optimizer column in <ref>). =-1 Efficient Gradient Projection. avoids computing and storing the full gradient matrix G_W ∈ℝ^m × n for projection ([baseline][fill=orange!20,anchor=base, inner sep=0pt, outer sep=0pt]P^⊤ G_W) , unlike existing MeSO methods <cit.>. Leveraging the chain rule, we express G_W = (∇_y L)^⊤ X, where ∇_y L ∈ℝ^b × m is the gradient of the loss with respect to the layer outputs and X ∈ℝ^b × n represents the input activations, with b being the token batch size. This allows us to apply the associative rule and compute[Implementation-wise, we only need to define a custom backward pass for the PyTorch linear layer.] the sparse gradient projection efficiently as ρ ((B∇_y L^⊤)X). This insight yields significant advantages in compute, memory, and communication: =-1 ∙ Compute savings: By exploiting this regrouped multiplication, computes the projection in just rbn + rn FLOPs. In contrast, dense projection methods like and require mbn + rmn FLOPs, making over m/r times more computationally efficient. This significant advantage arises from 1) leveraging the associative rule, 2) the equivalence of left multiplication by ρ to a simple row-wise scaling (costing only nr FLOPs), and 3) the cost-free row selection performed by left multiplication with B. =-1 ∙ Memory savings: 's multiplication order eliminates the need to ever materialize the full gradient matrix, directly yielding the projected result. 
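A minimal PyTorch sketch of this fused projection is given below. It illustrates the factorisation ρ((B ∇_y L^⊤) X) inside a custom backward pass; it is not the authors' released implementation, the class and variable names are ours, and the index/scale construction in the usage example is a stand-in for the row-selection strategies described later.

import torch

class GrassLinearFn(torch.autograd.Function):
    # Linear layer whose backward emits only the (r, n) projected weight gradient
    # rho((B dL/dy^T) X) and never materialises the full (m, n) gradient.

    @staticmethod
    def forward(ctx, x, weight, idx, scale, store):
        ctx.save_for_backward(x, weight, idx, scale)
        ctx.store = store                    # plain Python list collecting projected gradients
        return x @ weight.t()                # (b, m)

    @staticmethod
    def backward(ctx, grad_out):             # grad_out = dL/dy, shape (b, m)
        x, weight, idx, scale = ctx.saved_tensors
        g_proj = grad_out[:, idx].t() @ x    # (r, b) @ (b, n) -> (r, n)
        ctx.store.append(scale.unsqueeze(1) * g_proj)
        grad_x = grad_out @ weight           # input gradient flows back to earlier layers as usual
        return grad_x, None, None, None, None

# Usage sketch: r selected rows, unit scales (Top-r / non-replacement style)
m, n, b, r = 64, 32, 16, 8
x = torch.randn(b, n, requires_grad=True)
w = torch.randn(m, n)
idx = torch.randperm(m)[:r]                  # stand-in for row-norm based index selection
scale, store = torch.ones(r), []
GrassLinearFn.apply(x, w, idx, scale, store).sum().backward()
print(store[0].shape)                        # torch.Size([8, 32]): the row-sparse projected gradient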
This saves memory by avoiding the storage of mn floats required by other methods (see the Grad column in <ref>). Importantly, this memory advantage is independent of and can be combined with layerwise weight update techniques <cit.>, which reduce memory by processing gradients one layer at a time. =-1 ∙ Communication savings: During distributed training, existing MeSO methods like and communicate the full m × n gradient matrix across workers, leading to a communication cost of O(mn). Since is implemented in the backward pass, it can directly compute and communicate the r × n projected gradient without materializing the full gradient, reducing communication volume to O(rn) (Comm column in <ref>). =-1 Efficient Weight Update. The weight update step, [baseline] [fill=blue!20,anchor=base, inner sep=0pt, outer sep=0pt]W^(t) + P Δ^(t+1);, also benefits from the sparsity of P in . Instead of constructing the full m × n update matrix PΔ^(t+1), which is row-sparse, directly computes and applies the updates to the r nonzero rows. This reduces the computational cost to just 2rn FLOPs, compared to the rmn + mn FLOPs required by dense update methods like and . §.§ Choices of sparse P We now discuss concrete choices for [baseline][fill=green!20,anchor=base, inner sep=0pt, outer sep=0pt]_P by specifying how to construct ρ and B for P^⊤ = ρ S. To simplify the notation, we denote the index of the only non-zero entry in the j-th row of B by σ_j ∈ [m]. We consider both stochastic and deterministic approaches to construct {σ_j}_j=1^r and {ρ_jj}_j=1^r. =-1 A. Stochastic construction of P. Since σ_j ∈ [m] is a categorial variable, a natural approach is the with-replacement sampling of σ_j i.i.d.∼Multinomial(1, q), with the probability of sampling any integer k∈[m] given by q_k. To ensure the unbiasedness[See the proof of this statement in Appendix <ref>.] of the reconstructed gradient 𝔼[PP^⊤ G_W] = G_W for its optimization convergence benefits, we set ρ_jj = 1/√(r · q_σ_j) after sampling σ_j. To set the multinomial distribution parameter q, we consider two different principles: =-1 * The Variance-reduction principle: Here we want to minimize the total variance of the gradient estimate PP^⊤ G_W. The optimal q is given by the following theorem (proof in Appendix <ref>): Among all the Multinomial(1, q) distributions, the one that is proportional to the row norms of G with q_k = G_k_2/∑_i=1^m G_i_2 minimizes the total variance of the gradient estimate PP^⊤ G. =-1 We call this method Multinomial-Norm. * The Subspace-preservation principle: =-1 When P is fixed for a large K number of iterations and the gradient is low-rank <cit.>, reducing the variance of the gradient estimate could be less important than preserving the low-rank subspace of G_W upon projection. To achieve this, we set q_k proportional to the squared row norms of G_W (q_k ∝G_k^2) and call this method Multinomial-Norm^2. This q distribution gives us approximate leverage score sampling <cit.>, which ensures high probability preservation of the low-rank subspace with little additive error (see Appendix <ref>). =-1 In addition to these two principled unbiased sampling with replacement methods, we also experiment with the Uniform Distribution with q_k = 1/m as a baseline. Furthermore, we explore the non-replacement sampling counterparts (-NR) for each of the three distributions. Since it is analytically intractable to guarantee unbiasedness in this case, we set ρ_jj = 1 for the NR methods. =-1 B. Deterministic construction of P. 
We consider minimizing the gradient reconstruction error in Frobenius norm PP^⊤ G_W - G_W_F^2 as the principle to choose P. One minimizing solution sets all ρ_jj = 1 and {σ_j}_j=1^r to be the indices of rows of G_W with largest row-norms. We call this _P method Top-r. Compute cost. Unlike , only requires computing row norms of G_W but not an SVD in the update step. (_P column in <ref>). Furthermore, no additional memory is consumed for SVD as in . §.§ Implementation Details -1 Updating the Optimizer State. Updating the projection matrix P in can lead to significant shifts in the selected rows of the parameter matrix W between iterations. Since different rows of W may have distinct gradient moment statistics, we reset the optimizer states to zero during the [baseline][fill=yellow!20,anchor=base, inner sep=0pt, outer sep=0pt]; step. To further stabilize training after such updates, we implement a learning rate warmup phase. This combined approach effectively mitigates training instabilities, particularly those observed in smaller models during pretraining. =-1 Distributed Training. Since updates the projection matrix during each worker's backward pass in distributed training, synchronizing the selected indices across workers is necessary. To minimize communication overhead, we first compute the gradient G_W and then sketch it by sampling r columns based on their norms, resulting in a sketched matrix G_comm∈ℝ^m × r. An all-reduce operation is performed on G_comm, ensuring all workers access a consistent version of the sketch before sampling indices. Furthermore, we implement custom modifications to prevent PyTorch DDP <cit.> from allocating memory for full gradients in our implementation (see <ref> for details). § EXPERIMENTS §.§ Pretraining Performance Experimental setup. =-1 We compare[We compare against in Section <ref> and <ref> as it was primarily intended for finetuning in the original work.] Grass against Full-rank (without gradient projection) and by pretraining LLaMA-based models <cit.> in BF16 on the cleaned C4 subset of Dolma <cit.>. We train without data repetition over a sufficiently large amount of data, across a diverse range of model sizes (60M, 350M, 1B). We adopt a LLaMA-based architecture with RMSNorm and SwiGLU activations <cit.>. For both and , we fix the frequency K at 200 iterations, α at 0.25, use a consistent rank r, and project the linear layers within the attention and feed-forward layers. P is applied to project the smaller dimension of G_W to achieve the best memory-performance tradeoff <cit.>. We use the same batch size and tune the learning rate individually for each method (see Appendix <ref>). Results. As shown in <ref>, Grass matches and approaches Full-rank's performance within a perplexity gap of less than 1 even when r/d_model=8. In <ref>, for the 1B model we see that this gap disappears when we look at perplexity vs. training time (as opposed to tokens seen) on a single A100 GPU, where due to increased pretraining throughput Grass closely follows the Full-rank loss curve with <0.1 perplexity gap. §.§ Finetuning Performance Experimental setup. =-1 We evaluate Grass, LoRA, Full-rank, , and on the GLUE NLU benchmark <cit.> by finetuning a pretrained RoBERTa-Base model <cit.> with a sequence length of 128 in float32 (results on the dev set). For all the optimization methods, we restrict them to only optimize the linear layers in the attention and MLP layers for three epochs with individually tuned learning rates. We set rank r=8 for all the low-rank methods. 
For the MeSO methods, we set the update frequency K=100 and tune the scale factor α for each method. (See more details in Appendix <ref>.) =-1 Results. In <ref>, Grass Top-r performs competitively with LoRA, , and even though Grass exhibits a reduced memory footprint and improved training throughput compared to these methods as we show in Section <ref>. §.§ Instruction-finetuning Performance Experimental setup. =-1 We compare Grass against Full finetuning, , , and LoRA on instruction finetuning using a LLaMA-7B model <cit.> pretrained on 1T tokens. We finetune on Alpaca <cit.> (52k samples) and a 100k sample subset of FLAN v2 <cit.> from Tulu <cit.> (due to FLAN v2’s scale), using BF16 precision, batch size 64, and a source and target sequence length of 512. All methods, except for Full finetuning which updates all parameters, are restricted to only update the linear layers in the attention and MLP layers with rank r=64 . We finetune for 1000 steps on Alpaca (1.26 epochs) and 1500 steps on Flan v2 (1.08 epochs). Additional hyperparameters are in <ref>. Following prior work <cit.>, we assess the instruction-tuned models' average 5-shot test performance on the MMLU benchmark <cit.> (57 tasks). =-1 Results. As shown in Table <ref>, Grass performs competitively with full-parameter finetuning, , , and LoRA during instruction finetuning on both Alpaca and Flan v2. Furthermore, Section <ref> demonstrates that, at r = 64, Grass not only matches LoRA's performance but also boasts a lower memory footprint and an 18% throughput increase. Because Grass can perform higher rank training with multiple projection matrix updates, it is expected to further outperform the rank-constrained LoRA in more challenging tasks with larger datasets. §.§ Efficiency analysis =-1 Pretraining Throughput. Figure <ref> compares the BF16 pretraining throughput (tokens/s) of Grass and relative to Full-rank, across model sizes, for both regular and projection update[The regular update iteration doesn't invoke _P but only updates the parameters, while the projection update step performs both.] steps. We use rank r=64 on attention and feedforward layers, sequence length 256, and total batch size 1024 on a single 80GB A100 GPU. See Appendix <ref> for detailed settings. We did not employ activation checkpointing, memory offloading, or optimizer state partitioning in our experiments. =-1 While Grass exhibits lower throughput than Full-rank at 60M parameters (due to customized matrix multiplication overhead), Grass significantly outperforms both at 1B and 7B parameters, achieving 26% and 33.8% higher throughput than Full-rank, and 27% and 26.7% higher than (for the regular step). 's projection update overhead is minimal, unlike 's costly SVD computations. The throughput advantage for Grass is expected to grow with larger batch sizes, benefiting further from its lower memory footprint compared to other methods. Appendix <ref> provides further throughput comparisons across different ranks, showing that Grass achieves its highest relative throughput gains at rank (r = 64), with diminishing returns as rank increases or model size decreases. Finetuning Throughput. =-1 <ref> compares the BF16 finetuning throughput of Grass, , and LoRA across various LLaMA model sizes, focusing on the regular step. Unlike the pretraining throughput benchmark, we finetune only the attention and MLP layers using r=64. 
We maintain a uniform local batch size, sequence length 256, and total batch size of 1024 across all methods (detailed hyperparameters are provided in Appendix <ref>). For the 7B parameter model, Grass achieves throughput improvements of 26% and 18% over and LoRA, respectively. Appendix <ref> provides further throughput comparisons across ranks 8, 16, 32, and 64, demonstrating that Grass consistently maintains its throughput advantage across these ranks. =-1 Pretraining Memory. Figure <ref> benchmarks the BF16 memory footprint of pretraining against Full-rank and across various model sizes (token batch size 256, rank (r=128)), focusing on the regular training step. consistently exhibits a lower memory footprint than both Full-rank and , with the memory reduction increasing with model size. This advantage stems from 's reduced gradient and optimizer memory (due to its sparse projection matrices). At 13B parameters, uses 70% less memory than Full-rank and 45% less than . Beyond the memory advantage in the regular update iteration, Grass is also more memory efficient in the projection update iteration compared to its counterpart : requires converting the full gradient to float32 for SVD computation when computing the projection matrix, making it unable to pretrain the 13B LlaMA model in BF16 at rank (r = 128) on an 80GB GPU. In contrast, is capable of pretraining the 13B model on ranks up to r=768 on a 40GB GPU and up to r = 1024 on a 48GB GPU. =-1 Finetuning Memory. Appendix <ref> and <ref> compare the memory footprint of and LoRA during LLaMA finetuning. demonstrates a memory advantage of roughly 1GB over LoRA when finetuning the 7B parameter model in BF16 at rank (r=64). However, as the batch size increases, activations dominate the memory footprint, and the memory usage of and LoRA becomes comparable. =-1 Communication. Figure <ref> benchmarks the (weak scaling <cit.>) throughput (tokens/sec) of training a 3B parameter LLaMA model on a multi-GPU L40 compute node with a peak all-reduce bandwidth of 8.64 GB/s as we scale the number of participating GPUs. We use a token batch size of 4096 per worker (local batch size 16, sequence length 256). , by communicating only the projected gradients, achieves significantly higher throughput (2× on 8 GPUs) compared to both Full-rank and . §.§ Ablations =-1 Effect of Rank. Figure <ref> presents ablations on the impact of the subspace rank r for during pretraining of a 350M parameter LLaMA model on the C4 subset of Dolma. Increasing the rank generally leads to better training losses for the same number of updates, but with diminishing returns. Additionally, since enables full-parameter training, we observe that training at rank r = 128 for 80k steps is more effective than training at rank r = 512 for 40k steps. Grass can therefore be used to trade-off memory and computational cost where in a memory-constrained setting one could select a lower rank and train longer. =-1 Effect of Update Frequency. Figure <ref> analyzes the impact of update frequency on the convergence of during pretraining of a 60M-parameter LLaMA model on the Realnews subset of C4 <cit.>. Both overly frequent and infrequent updates to the projection matrix hinder convergence. Optimal convergence is achieved within an update frequency range of 200 to 500 iterations. =-1 _P Methods. Table <ref> evaluates our proposed methods to compute the sparse projection P matrix (in Section <ref>) for during pretraining of a 60M LLaMA model on 500M tokens from the RealNews subset of C4. 
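For reference, the index/scale constructions compared in this ablation can be sketched as follows. This is a simplified illustration of the variants described earlier for choosing the sparse P, not the authors' exact code, and the function name and defaults are ours.

import torch

def build_projection(row_norms, r, method="top_r", replacement=True):
    # Returns (idx, scale): the selected row indices sigma_j and the diagonal entries of rho.
    m = row_norms.numel()
    if method == "top_r":                         # deterministic: largest row norms, rho_jj = 1
        return torch.topk(row_norms, r).indices, torch.ones(r)
    if method == "uniform":
        q = torch.full((m,), 1.0/m)
    elif method == "multinomial_norm":            # variance-reduction principle
        q = row_norms/row_norms.sum()
    elif method == "multinomial_norm2":           # subspace-preservation principle
        q = row_norms**2/(row_norms**2).sum()
    idx = torch.multinomial(q, r, replacement=replacement)
    if replacement:                               # unbiased reconstruction: rho_jj = 1/sqrt(r q_sigma_j)
        return idx, 1.0/torch.sqrt(r*q[idx])
    return idx, torch.ones(r)                     # the -NR variants simply use unit scales

print(build_projection(torch.rand(64), 8, "multinomial_norm2", replacement=False))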
We additionally consider the Frozen Top-r method as a baseline by computing top indices once only at iteration 0. We notice that stochastic strategies employing non-replacement biased (NR) sampling generally surpass their with replacement unbiased (R) counterparts. Within the unbiased strategies (R), the variance reduction approach (Multinomial-Norm-R) outperforms the subspace preservation method (Multinomial-Norm^2-R), while their biased (NR) counterparts exhibit comparable performance. Both Multinomial-Norm^2-NR and Top-r are competitive with , while Uniform sampling underperforms. Similar trends in performance across sampling methods are observed during finetuning (Table <ref>). We find that uniform sampling is more effective for pretraining than finetuning, likely because the norm distribution is more uniform at the onset of pretraining. § CONCLUSION AND FUTURE WORK =-1 In this work, we introduce Grass, a novel memory-efficient subspace optimization method for LLM pretraining and fine-tuning by leveraging sparse gradient projections. Grass significantly reduces the memory footprint of optimizer states and gradients and eliminates the need to materialize the full gradients during the projection step, leading to substantial computational efficiency gains. Our experimental results demonstrate that Grass achieves comparable performance to full-rank training and existing projection-based methods while offering a substantial memory reduction and throughput increase across various model sizes and tasks. Future work will explore extending Grass to utilize diverse structured sparsity patterns and investigating strategies for dynamically adjusting the projection rank based on hardware and model size. § LIMITATIONS While Grass offers compelling advantages in memory efficiency and training throughput, there are several aspects that warrant further investigation and potential improvements. Implementation Complexity. Unlike drop-in optimizer replacements, Grass requires integrating custom linear layers into the Transformer architecture, as the sparse projection operations occur during the backward pass. While this involves minimal code modifications, it introduces a slight complexity barrier for adoption compared to simply switching optimizers. Nonetheless, the significant gains in performance and memory efficiency outweigh this minor overhead. Scalability to Larger Models. Our empirical evaluation primarily focused on model scales up to 13B parameters. The effectiveness of Grass for significantly larger LLMs, exceeding hundreds of billions of parameters, requires further examination. Similarly, as batch sizes increase, the memory savings from sparse projection might become less prominent compared to the activation memory footprint. Exploring strategies to mitigate this potential issue, such as combining Grass with activation checkpointing techniques, would be beneficial. Hyperparameter Sensitivity. Grass's performance depends on hyperparameters like rank (r) and update frequency (K). While our experiments provide insights into suitable ranges for these hyperparameters, a more comprehensive analysis of their impact on training dynamics, particularly as model scales increase, is crucial for maximizing performance and generalizability. Developing methods to automatically and adaptively tune these hyperparameters could further enhance Grass's applicability. § ETHICAL CONSIDERATIONS We acknowledge the potential ethical implications associated with large language models. These include: Misuse Potential. 
LLMs, being powerful text generation tools, can be misused to create harmful or misleading content, including disinformation, hate speech, and spam. While our work focuses on improving training efficiency, we strongly advocate for responsible use of LLMs and encourage further research on safeguards against malicious applications. Bias Amplification. LLMs are trained on massive text corpora, which can inherently contain biases and stereotypes. These biases can be amplified during training, leading to potentially discriminatory or unfair outputs. While Grass is unlikely to exacerbate this bias, we recognize the importance of addressing this issue through careful data curation, bias mitigation techniques, and ongoing monitoring of LLM behavior. Environmental Impact. Training large LLMs requires significant computational resources, which can have a substantial environmental footprint. Our work aims to reduce the computational cost and energy consumption of LLM training, contributing to more sustainable and environmentally responsible practices in NLP research. Data and Licensing Considerations. We have carefully considered the ethical implications of the datasets used in this work which are publicly released and have followed accepted privacy practices at creation time. * MMLU and GLUE are released under the permissive MIT license, allowing for broad research use. * Alpaca is also distributed under the MIT license. * FLAN uses the Apache license, which permits both academic and commercial applications. * Dolma utilizes the ODC Attribution License, promoting open data sharing and reuse. We strictly adhere to the license terms and intended use of these datasets, ensuring responsible handling of data and compliance with ethical guidelines. We acknowledge the ongoing need for critical assessment and transparency regarding data sources, potential biases, and licensing implications in LLM research. § OPTIMIZER FUNCTIONS =-1 In Equation (<ref>) and Algorithm <ref>, we use functions and to abstractly represent any stateful optimizer's initialization and update function. Here we provide concrete implementations of these functions for Adam <cit.> in Algorithm <ref> and <ref>.[For any matrix Z ∈ℝ^c × d, we have Z^∘ 2 and Z^∘1/2 to respectively denote the matrix which is the elementwise square and elementwise square root of Z.] We assume the parameter matrix Z and its gradient ∇_Z L is of generic shape ℝ^c × d. § DERIVATION OF THE UNIFIED ALGORITHM OF MEMORY-EFFICIENT SUBSPACE OPTIMIZERS As we have described in Section <ref>, MeSO optimizers solve the subspace optimization problem under the projection matrix P ∈ℝ^m × r: 1 min_A ∈ℝ^r × n L(W_0 + PA) by applying an off-the-shelf optimizer . Since we want to start at the initial weight matrix W_0, A is initialized to be the zero matrix: 1 A^(0) ← 0_r × n S^(0) ←(A^(0)) and updated through 1 S^(t + 1), Δ^(t+1) ←(S^(t), d/dA L(W_0 + PA^(t))) A^(t+1) ← A^(t) + Δ^(t+1) By chain rule, we have d/dA L(W_0 + PA^(t)) = P^⊤∇ L(W_0 + PA^(t)). When MeSO updates the projection matrix to be P_new, we can treat the new subspace optimization as having its W_0^new = W_0^old + P_old A^(t) and re-initializing A^(t) at 0_r× n in addition to an optimizer state update using [baseline][fill=yellow!20,anchor=base, inner sep=0pt, outer sep=0pt] . The pseudocode of this algorithm where we maintain the value of the A matrix is given in Algorithm <ref>. By defining W^(t) W_0 + PA^(t), we can easily see that Algorithm <ref> is equivalent to Algorithm <ref> presented in the main paper. 
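Since the Adam pseudocode referred to here lives in algorithm listings that are not reproduced in this extracted text, the following Python sketch gives a standard Adam init/update pair in the same spirit, acting on a generic c × d block such as the r × n projected gradient. It is a textbook Adam implementation, not a copy of the paper's algorithm listings; the final commented line uses illustrative names (idx, scale, alpha) for the weight update.

import torch

def adam_init(shape):
    # State S = (step count, first moment M, second moment V) for a c x d parameter block
    return {"t": 0, "m": torch.zeros(shape), "v": torch.zeros(shape)}

def adam_update(state, grad, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    # Returns the updated state and the learning-rate adjusted increment Delta
    state["t"] += 1
    state["m"] = betas[0]*state["m"] + (1 - betas[0])*grad
    state["v"] = betas[1]*state["v"] + (1 - betas[1])*grad**2
    m_hat = state["m"]/(1 - betas[0]**state["t"])      # bias correction
    v_hat = state["v"]/(1 - betas[1]**state["t"])
    return state, -lr*m_hat/(v_hat.sqrt() + eps)

# One subspace step: only the r x n block (equivalently r rows of W) is ever touched
r, n = 8, 32
state = adam_init((r, n))
projected_grad = torch.randn(r, n)      # stands in for P^T grad L(W) from the backward pass
state, delta = adam_update(state, projected_grad)
# W[idx, :] += alpha * scale.unsqueeze(1) * delta     # applying P*Delta updates only r rows of W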
§ ADDITIONAL RELATED WORK Memory-Efficient Optimization. Several works aim to reduce the memory footprint of adaptive optimizer states. Techniques include factorizing second-order moment statistics <cit.>, quantizing optimizer states <cit.>, and fusing backward operations with optimizer updates to minimize gradient storage <cit.>. Grass is orthogonal to these approaches and proposes a gradient projection-based adaptive optimizer that significantly reduces memory costs by relying on projected gradient statistics. =-1 Gradient Compression. In distributed and federated training, several gradient compression methods have been introduced to reduce the volume of transmitted gradient data. Common approaches include: =-1 * Quantization: Quantization aims to reduce the bit precision of gradient elements. Examples include 1-bit SGD <cit.>, SignSGD <cit.>, 1-bit Adam <cit.>, TernGrad <cit.>, and QSGD <cit.>. =-1 * Sparsification: This involves transmitting only a small subset of significant gradient elements. Random-k and Top-k element select k random or largest-magnitude elements, respectively to transmit. Top-k generally exhibits better convergence <cit.>, and requires communicating both values and indices <cit.>. =-1 * Low-Rank Decomposition: This involves factorizing a gradient matrix M ∈ℝ^n × m as M ≈ PQ^⊤ for transmission, where P ∈ℝ^n × r and Q ∈ℝ^m × r with r ≪min(n, m). ATOMO <cit.> employs SVD for decomposition, while Power-SGD <cit.> utilizes power iteration for more efficient low-rank factorization. Unlike existing methods, Grass introduces a novel approach by employing sparse projection of gradients to enhance memory efficiency in both local and distributed training contexts. § ENSURING UNBIASED GRADIENT RECONSTRUCTION In this section, we formally state the theorem that gives the form of the sampling distribution for σ_j and ρ_jj that ensures the reconstructed gradient PP^⊤ G_W is unbiased which we describe in Section <ref>. Let B ∈{0, 1}^r × m be the sparse binary matrix with the unique non-zero index of j-th row being σ_j ∈ [m]. Let σ_j i.i.d.∼Multinomial(1, q)) (q ∈ℝ^m with the probability of sampling integer k ∈ [m] being q_k). If we correspondingly let the diagonal value of the diagonal matrix ρ to be ρ_jj1/√(r q_σ_j), then for the random projection matrix P = (ρ B)^⊤, we have 𝔼[PP^⊤ G] = G for any (gradient) matrix G ∈ℝ^m × n. Here we first write down the form of the random matrix product PP^⊤. Let e_j ∈ℝ^m be the unit column vector with j-th coordinate being 1 and all other coordinates being zero. Then by definition, the j-th row vector of B is e_σ_j^⊤. PP^⊤ = B^⊤ρ^⊤ρ B = [ 5pte_σ_1 … 5pte_σ_r ]_m × r ×diag(1/r· q_σ_1, …, 1/r· q_σ_r) ×[ - e_σ_1^⊤ -; ⋮; - e_σ_r^⊤ - ]_r × m = 1/r∑_i=1^r 1/q_σ_i e_σ_i e_σ_i^⊤ In <ref>, we have decomposed the matrix PP^⊤ into the average of r random rank-1 matrices each of which depends on on the randomness of a unique σ_i. By linearity of expectation and the i.i.d. property of {σ_i}_i=1^r, we have 𝔼[PP^⊤] = 1/r∑_i=1^r 𝔼 [1/q_σ_i e_σ_i e_σ_i^⊤] = 𝔼 [1/q_σ_1 e_σ_1 e_σ_1^⊤] Since σ_1 have a probability of q_k to take the value of integer k ∈ [m], we have 𝔼 [1/q_σ_1 e_σ_1 e_σ_1^⊤] = ∑_k=1^m q_k ·1/q_k e_k e_k^⊤ = I_m × m Thus we have proved that 𝔼[PP^⊤] = I_m × m. By linearity of expectation, for any matrix G ∈ℝ^m × n, we thus have 𝔼[PP^⊤ G] = G and the proof is complete. § PROOF OF THEOREM <REF> Here we restate the complete version of Theorem <ref> and then present its proof. 
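Before restating the theorem in full, the unbiasedness property just proved can also be checked numerically. A small Monte-Carlo sketch, with an arbitrary strictly positive sampling distribution q and illustrative sizes, is:

import torch

torch.manual_seed(0)
m, n, r, trials = 10, 6, 4, 50_000
G = torch.randn(m, n)
q = torch.rand(m); q = q/q.sum()                        # any strictly positive distribution over rows

acc = torch.zeros(m, n)
for _ in range(trials):
    sigma = torch.multinomial(q, r, replacement=True)   # sigma_j i.i.d. ~ Multinomial(1, q)
    w = 1.0/(r*q[sigma])                                # rho_jj^2 = 1/(r q_sigma_j)
    est = torch.zeros(m, n)
    est.index_add_(0, sigma, w.unsqueeze(1)*G[sigma])   # P P^T G = sum_j rho_jj^2 e_{sigma_j} e_{sigma_j}^T G
    acc += est
print((acc/trials - G).abs().max())                     # close to zero, up to Monte-Carlo error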
[Complete statement of Theorem <ref>] Let B ∈{0, 1}^r × m be the sparse binary matrix with the unique non-zero index of the j-th row being σ_j ∈ [m]. Let σ_j i.i.d.∼Multinomial(1, q) (q ∈ℝ^m with the probability of sampling integer k ∈ [m] being q_k). Given σ_j, we correspondingly set the diagonal value of the diagonal matrix ρ to ρ_jj = 1/√(r q_σ_j) and define P = (ρ B)^⊤. This induces an unbiased gradient estimator of G ∈ℝ^m × n: PP^⊤ G. Among all the gradient estimators induced by different parameter values q of the multinomial distribution, the one that is proportional to the row norms of G, with q_k = ‖G_k‖_2/∑_i=1^m ‖G_i‖_2, minimizes the total variance of the gradient estimate PP^⊤ G. We first write down the total variance of the estimator PP^⊤ G: 𝔼 tr[(PP^⊤ G)^⊤ (PP^⊤ G)] - tr[𝔼[PP^⊤ G] 𝔼[PP^⊤ G]^⊤] = tr[G^⊤𝔼[PP^⊤ PP^⊤] G] - tr[GG^⊤] Since only the first term in <ref> is a function of P and thus depends on the value of q, we first focus on analytically deriving the form of 𝔼[PP^⊤ PP^⊤]. By the expression in <ref>, we have: PP^⊤ PP^⊤ = 1/r^2∑_i=1^r ∑_j=1^r 1/q_σ_i1/q_σ_j e_σ_i e_σ_i^⊤ e_σ_j e_σ_j^⊤ = 1/r^2∑_i=1^r 1/q_σ_i^2 e_σ_i e_σ_i^⊤ e_σ_i e_σ_i^⊤ + 1/r^2∑_i=1, j=1, i≠ j^r [1/q_σ_i e_σ_i e_σ_i^⊤] [1/q_σ_j e_σ_j e_σ_j^⊤] = 1/r^2∑_i=1^r 1/q_σ_i^2 e_σ_i e_σ_i^⊤ + 1/r^2∑_i=1, j=1, i≠ j^r [1/q_σ_i e_σ_i e_σ_i^⊤] [1/q_σ_j e_σ_j e_σ_j^⊤] In the last step, we use the fact that for any i, e_σ_i^⊤ e_σ_i = 1. Now we take the expectation of <ref>. By applying linearity of expectation and the i.i.d. property of {σ_j}, we have 𝔼[PP^⊤ PP^⊤] = 1/r diag(1/q_1, …, 1/q_m) + (r-1)/r· I_m × m As a result, we can express the first term in <ref> as tr[G^⊤𝔼[PP^⊤ PP^⊤] G] = 1/r tr[G^⊤ diag(1/q_1, …, 1/q_m) G] + (r-1)/r tr[GG^⊤] If we represent the rows of G as column vectors {G_k}_k=1^m, then the only term in <ref> that depends on q can be expressed as tr[G^⊤ diag(1/q_1, …, 1/q_m) G] = tr[∑_k=1^m 1/q_k G_k G_k^⊤] = ∑_k=1^m 1/q_k tr[G_k G_k^⊤] = ∑_k=1^m ‖G_k‖_2^2/q_k Based on these derivations, minimizing the total variance is therefore equivalent to minimizing <ref>. From now on, we denote by λ_i ≜ ‖G_i‖_2 the 2-norm of the i-th row of matrix G. Solving the variance-minimization problem: As we have shown, minimizing the total variance of PP^⊤ G leads to the following optimization problem: min_q ∑_i=1^m λ_i^2/q_i subject to ∑_i=1^m q_i = 1, q_i ≥ 0 for all i. Here we first ignore the inequality constraint q_i ≥ 0 and solve the relaxed problem: min_q ∑_i=1^m λ_i^2/q_i subject to ∑_i=1^m q_i = 1 The Lagrangian L for this relaxed constrained optimization is: L(q, μ) = ∑_i=1^m λ_i^2/q_i + μ(∑_i=1^m q_i - 1) where μ is the Lagrange multiplier for the equality constraint. The stationarity condition for the Lagrangian gives us ∂ L/∂ q_i = -λ_i^2/q_i^2 + μ = 0, ∀ i ∈ [m], together with ∑_i=1^m q_i = 1. Assuming not all λ_i are zero, this gives us q_i^* = λ_i/∑_j=1^m λ_j Since this optimal solution to <ref> also lies in the constraint space of <ref>, it is also the optimal solution of the optimization we care about. Thus we have shown that the distribution parameter q that minimizes the total variance of the gradient estimate is proportional to the row 2-norms of G. § ROW NORMS AND SUBSPACE EMBEDDING PROPERTY The following result is from <cit.>; it can be roughly stated as: sampling with squared row norms preserves subspaces up to additive error with high probability. Let 𝐀∈ℝ^m × d_1 with rows 𝐚_t. Define a sampling matrix 𝐐∈ℝ^m × m using row-sampling probabilities: p_t ≥‖𝐚_t‖^2/‖𝐀‖_F^2.
If r ≥4p_A ln2d_1/δ/β^2, then with probability at least 1 - δ, it follows that: 𝐀^⊤𝐀 - 𝐀̃^⊤𝐀̃≤ϵ𝐀^2. Considering the singular value decompositions (SVDs) of 𝐀 and 𝐁, we have: 𝐀^⊤𝐁 - 𝐀^⊤𝐐^⊤𝐐𝐁 = 𝐕_A 𝐒_A 𝐔_A^⊤𝐔_B 𝐒_B 𝐕_B^⊤ - 𝐕_A 𝐒_A 𝐔_A^⊤𝐐^⊤𝐐𝐔_B 𝐒_B 𝐕_B^⊤. We may now directly apply Lemma <ref>, with respect to the appropriate sampling probabilities. One can verify that the sampling probabilities are proportional to the sum of the rescaled squared norms of the rows of 𝐀 and 𝐁. Let 𝐖∈ℝ^m × d_1 and 𝐕∈ℝ^m × d_2 be orthogonal matrices, and let 𝐒_1 and 𝐒_2 be positive diagonal matrices in ℝ^d_1 × d_1 and ℝ^d_2 × d_2, respectively. Consider row sampling probabilities: p_t ≥1/𝐒_1_F^2𝐖^⊤𝐒_1^2 𝐖_t + 1/𝐒_2_F^2𝐕^⊤𝐒_2^2 𝐕_t. If r ≥(8(p_1 + p_2)/β^2) ln2(d_1+d_2)/δ, then with probability at least 1 - δ, it holds that: 𝐒_1 𝐖^⊤𝐕𝐒_2 - 𝐒_1𝐖^⊤𝐐^⊤𝐐𝐕𝐒_2≤ϵ𝐒_1𝐒_2. § DETAILED BREAKDOWN OF COMPUTE, MEMORY, AND COMMUNICATION VOLUME In this section we provide detailed breakdown of the compute, memory, and communication volume for different optimization methods. We focus our discussion to a single weight matrix W ∈ℝ^m × n and its gradient G ∈ℝ^m × n. We describe the relevant notation and parameter shape below: * By chain rule, we have G = (∇_y L)^⊤ X, where ∇_y L is a b × m matrix, X is an b × n matrix, where m ≤ n and b is the token batch size usually much larger than m,n. Here we assume ∇_y L and X are constructed ahead of time and we are interested in the memory, floating-point operations, and communication volume to construct the gradients G, update the optimizer state, and update the parameter weights. * P is an m × r projection matrix with r ≪ m. * C is the number of optimizer operations per gradient element. * For Grass, we can decompose P^⊤ = ρ B where ρ is a r× r diagonal scaling matrix, B ∈0, 1^r× m is a sparse binary row selection matrix. Both left multiplication by ρ and B can be computed efficiently. We compare various optimization strategies: Full, , LoRA, ReLoRA, , and our proposed method Grass. All numbers for each method are computed based on the implementation original papers. We additionally consider Efficient , which combines with our proposed efficient matrix associativity implementation for reduced FLOPs and a custom hook for reduced communication. As we shall see, even compared to this more efficient implementation of , our method still enjoys competitive advantages. §.§ Compute Requirements <ref> details the FLOPs (per worker) calculation for the baselines and Grass. We provide a breakdown of the computation cost of each step in the Regular optimization step as well as the computation cost of computing the new projection matrix. As we can see, is considerably more compute-efficient than all other methods – most importantly, its compute cost does not contain the most expensive term mbn unlike all the other published methods. Although Efficient also avoids full parameter gradient computation mbn by using our proposed multiplication rule, it still pays a much higher cost when it computes and performs the weight update (rmn + mn) compared to (2rn). §.§ Memory Requirements <ref> summarizes the memory requirements for the various baselines and Grass when we use Adam as the (internal) optimizer for each method. 
* In terms of storing the weight parameters, every method needs to store the full parameter matrix of shape m × n, while LoRA and ReLoRA also require storing the low-rank updatable parameters (the B and A matrices). * In terms of the optimizer state, LoRA and ReLoRA need to store both the first and second moment estimates for their B and A matrices. For all the MeSO methods, the optimizer state of the implicit A matrix needs to be stored. Besides, these methods also need to store the projection matrix P. Here, unlike the other MeSO methods, which employ dense P matrices, Grass can store its sparse projection matrix P using 2r numbers instead of mr numbers. * In terms of the gradient memory, with our proposed regrouped matrix multiplication implementation, Grass never materializes the full parameter gradient matrix, thus reducing the gradient memory size to only the projection result of shape r × n. §.§ Communication Volume <ref> summarizes the communication volume of gradients (per device) for various methods when we use distributed data parallel (DDP) training. Here all the existing methods perform the all-reduce on the full-parameter gradient. In contrast, Grass never materializes the full parameter gradient and performs the all-reduce directly on the projected matrix, reducing the communication volume from mn to nr. Algorithm: Distributed Grass Training with PyTorch DDP § DISTRIBUTED DATA PARALLEL IMPLEMENTATION To optimize memory usage in PyTorch's Distributed Data Parallel (DDP) framework <cit.>, we implement strategic modifications to our model architecture aimed at enhancing distributed training efficiency (see Algorithm <ref>). Specifically, we designate the weights in the linear layers as non-trainable to circumvent the default memory allocation for full-sized gradient matrices. Instead, we introduce virtual, trainable parameters, occupying merely 1 byte each, linked to each weight matrix. These virtual parameters hold the compressed gradient of the corresponding weight matrix in their gradient attribute. This method capitalizes on DDP's asynchronous all-reduce capabilities while preventing unnecessary memory allocation. § EXPERIMENT HYPERPARAMETERS §.§ Pretraining We introduce details of the LLaMA architecture and hyperparameters used for pretraining. Table <ref> shows the dimensions of LLaMA models across model sizes. We pretrain models on the C4 subset of Dolma [<https://huggingface.co/datasets/allenai/dolma>]. C4 is a colossal, clean version of Common Crawl designed to pretrain language models and word representations in English <cit.>. For pretraining, we use a maximum sequence length of 256 for all models, with a batch size of 262,144 tokens. For all baseline experiments, we adopt learning rate warmup for the first 1000 steps, and use cosine annealing for the learning rate schedule, decaying to 10% of the initial learning rate. The MeSO methods use a projection matrix update frequency of 200. Grass uses an additional warmup at each update for 200 steps and resets optimizer states for the 60M and 350M training runs, while the 1B run does not require resetting optimizer states. Both the 60M and 350M Grass pretraining jobs use Top-r selection, while the 1B job uses multinomial sampling without replacement. For all methods and model sizes, we tune the learning rate over the set {0.01, 0.005, 0.001, 0.0005, 0.0001}, and the best learning rate is chosen based on the validation perplexity (or train perplexity when a validation set does not exist, as for Dolma). All MeSO models use a scale factor α=0.25.
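The learning-rate schedule stated above (linear warmup for the first 1000 steps, then cosine annealing down to 10% of the peak value) can be written as a small helper. This is a hedged sketch of the schedule as described, not the training code; the function name and defaults are ours.

import math

def lr_at_step(step, total_steps, peak_lr, warmup_steps=1000, final_frac=0.1):
    # Linear warmup to peak_lr, then cosine annealing down to final_frac * peak_lr.
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return peak_lr * (final_frac + (1.0 - final_frac) * cosine)

# e.g. lr_at_step(0, 10000, 5e-4) == 0.0, lr_at_step(1000, 10000, 5e-4) == 5e-4,
# and lr_at_step(10000, 10000, 5e-4) == 5e-5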
We find that is sensitive to hyperparameters and exhibits loss spikes and divergence at the prescribed learning rates in the paper (0.01) particularly at the 1B scale, and as a result we have to train using reduced learning rates where we no longer observe such spikes. The learning rates of Grass and are higher than the full model which would display instability at values greater than 0.001. Unless otherwise specified, we average losses using a window of 15 steps. We use Adam with the default hyperparameters (β_1 = 0.9, β_2 = 0.999, ϵ=10^-8). All models were trained on four 80GB A100 GPUs. The training times were as follows: 100 GPU hours for the 60M model, 200 GPU hours for the 250M model, and 650 GPU hours for the 1B model. §.§ Finetuning We finetune the pretrained RoBERTa-Base[<https://huggingface.co/FacebookAI/roberta-base>] model <cit.> on the GLUE benchmark[<https://huggingface.co/datasets/nyu-mll/glue> ] <cit.> using the pretrained model on Hugging Face. GLUE is a natural language understanding benchmark and includes a variety of tasks, including single sentence tasks like CoLA <cit.>, SST-2 <cit.>; similarity and paraphrase tasks like MRPC <cit.>, QQP, STS-B <cit.>; and inference tasks such as MNLI <cit.>, QNLI <cit.>, RTE and WNLI <cit.>. We report accuracy for SST-2, MNLI, QNLI and RTE. For CoLA and STS-B, we use Matthew’s Correlation and Pearson-Spearman Correlation as the metrics, respectively. For MRPC and QQP, we report the average of F1 score and accuracy. We report the best performance out of three seeds due to the instability of the method. We train all models for 3 epochs using a max sequence length of 128, and a batch size of 32. We report the best performance at the end of an epoch. We use a projection update frequency of 100 for all methods. We tuned the learning rate and scale factor α for , , LoRA and Grass from { 1e-5, 2e-5, 3e-5, 4e-5, 5e-5 } and scale factors {1,2,4,8, 16}. We apply the projection matrices or LoRA to target modules “query”, “value”, “key”, “intermediate.dense” and “output.dense” and use a rank r=8. We use Adam with the default hyperparameters (β_1 = 0.9, β_2 = 0.999, ϵ=10^-8). All experiments were run on a single A100 GPU in under 24 hours. Table <ref> shows the hyperparameters used for finetuning RoBERTa-Base for Grass. §.§ Instruction Tuning =-1 We finetune the pretrained LLaMA 7B [<https://huggingface.co/huggyLLaMA/LLaMA-7b>] model from HuggingFace on the 52k samples from Alpaca [<https://huggingface.co/datasets/tatsu-lab/alpaca>], and the 100k samples from Flan-v2 in Tulu [<https://huggingface.co/datasets/arazd/tulu_flan/>]. We evaluate the finetuned model on the MMLU [<https://huggingface.co/datasets/cais/mmlu>] benchmark <cit.>, which covers 57 tasks including elementary mathematics, US history, computer science, and law. We use a constant learning rate that we tune in { 1e-5, 2e-5, 3e-5, 4e-5, 5e-5 } for each method and use a constant scale factor α = 16. (see <ref>). We use Adam with the default hyperparameters (β_1 = 0.9, β_2 = 0.999, ϵ=10^-8). Additionally, we use a source and target sequence length of 512. All experiments use 4 A100 80GB GPUs and take about 48 GPU hours overall. Alpaca Prompt Format The Alpaca prompt format is designed to generate context-dependent text completions. Here, the prompt consists of a task description followed by specific input providing further context. 
An example of the structured prompt in Alpaca is provided below:

ALPACA_PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request. "
        "### Instruction:{instruction}### Input:{input}### Response: "
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        "### Instruction:{instruction} ### Response: "
    ),
}

Flan Prompt Format The FLAN-v2 dataset in the JSON Lines format contains detailed conversational exchanges between a user and an assistant. Each line in the raw file represents a single conversational instance, encapsulated as a JSON object with multiple messages. Our processing script reads these lines and formats them: * iterates over each line in the file, parsing the JSON to extract the conversation. * collects and concatenates all user messages to form the input text for each instance. * extracts the assistant's response to form the corresponding output text. * outputs a simplified JSON structure with `input` and `output` fields for each conversational instance. §.§ Throughput Benchmarking We benchmark pretraining throughput on a single 80GB A100 GPU and an AMD EPYC 7763 64-core processor using a total batch size of 1024, rank 64, and a sequence length of 256 across models. We use the following per-device batch sizes: 60M (256), 350M (64), 1B (16), 7B (16), 13B (1). The 7B model runs out of memory (OOM) when training at full rank, so the estimated throughput covers only the forward and backward pass without an optimizer update (an overestimate). Full-rank training and the dense-projection baseline, unlike Grass, cannot train the 13B model on the 80GB GPU, so we skip this data point. The throughput estimate is based on 200 iterations on the C4 dataset. We benchmark finetuning throughput on a single 80GB A100 GPU using a total batch size of 1024, rank 64, and a sequence length of 256 across models. We use the following per-device batch sizes: 60M (256), 350M (64), 1B (16), 7B (16), 13B (1). Grass, the dense-projection baseline, and LoRA are applied only to the attention and MLP linear layers, while the other weights are set as non-trainable. The throughput estimate is based on 200 iterations. §.§ Communication Benchmarking For the weak-scaling throughput experiments we use a local batch size of 16, a total batch size of 16 × num_workers, and a projection rank of 256 across all methods and model sizes. §.§ Ablations For the ablation experiments on the effect of the update frequency and the choice of projection (sampling) method, we pretrain using 500M tokens from the RealNews subset of C4 <cit.>. The RealNews subset[<https://huggingface.co/datasets/allenai/c4>] contains 1.81M lines in the train set and 13.9K lines in the validation set. § EXPERIMENTS: PRETRAINING MEMORY For estimating memory for pretraining, we use a token batch size of 256 and a rank r=128 across models. We do not use the layerwise trick of <cit.>, since it is currently inefficient during distributed training. As the GPU memory usage of a specific component is hard to measure directly, we estimate the memory usage of the weight parameters and optimizer states for each method on different model sizes. The estimation is based on the number of original parameters, the model dimensions, and the number of low-rank parameters, all trained in BF16 format. As an example, to estimate the memory requirements for the 13B model, we compute memory consumption across different components: activations, parameters, gradients, and optimizer states.
Parameter Definitions Let the following variables define our 13B model's configuration: * L: sequence length (256) * B: batch size (1) * D: model hidden size (5120) * N: number of layers (40) * H: number of attention heads (40) * V: vocabulary size (32000) * r: rank (128) §.§ Activation Memory Calculation The activation memory calculation is conducted by accounting for each significant computation within the model layers, including attention mechanisms and feed-forward networks. Each term in <ref> considers the BF16 precision used for storing the activations. §.§ Memory Calculation for Parameters and Gradients Memory for parameters and gradients is estimated as follows: * Total number of parameters across all layers: Computed by summing up all parameter tensors within the model. * Parameter memory in bytes: Total number of parameters multiplied by 2 (assuming BF16 precision). * Gradient memory: For Full-rank and GaLore this equals the parameter memory if all parameters are trainable and gradients are stored in BF16. For Grass this equals the projected gradient memory corresponding to the trainable parameters. §.§ Optimizer State Memory Calculation * The Adam optimizer in pure BF16 precision stores the first and second moment estimates for each parameter, requiring 2mn floats for a weight matrix with dimensions m × n. * MeSO methods, including Grass, reduce optimizer state memory by projecting gradients into a lower-dimensional subspace. Grass, using sparse projections, needs 2r + 2nr floats to store the first and second moment estimates of the compressed gradient (G_C ∈ℝ^r × n) and the sparse projection matrix (P ∈ℝ^m × r). and , which use dense projection matrices, require mr + 2nr floats for the optimizer states. §.§ Total Memory Estimation The total memory required for the model during training is calculated by summing the memory for parameters, gradients, activations, and optimizer states, along with any additional memory overhead as per the adaptation method used. For Grass applied to the 13B model, the memory costs are detailed as follows: * Total Parameters: Approximately 13 Billion * Activation Memory: 1936.25 MB * Parameter Memory: 24825.79 MB * Gradient Memory: 1230.79 MB * Optimizer State Memory: 2461.72 MB * Extra Memory (for largest parameter tensor): 312.50 MB * Total Memory: 30767.05 MB § EXPERIMENT: FINETUNING MEMORY In <ref> and <ref>, we compare the finetuning memory footprint of Grass and LoRA when finetuning a LLaMA model at various scales (350M, 1B, 7B) using token batch sizes of 256 and 2048 (4×512), respectively. Both methods are applied to all linear layers with a fixed rank of 64. Our analysis reveals that at larger batch sizes, activations predominantly contribute to the memory footprint, resulting in comparable memory usage between Grass and LoRA. We estimate memory requirements for finetuning using the same aproach from Section <ref> but only accounting for the gradients and optimizer states corresponding to the trainable (instead of all the) parameters. Furthermore, LoRA requires storing in addition to X (the input to the layer), the activations corresponding to the low-rank input XA to compute the gradient of B, where A and B are the low-rank adapters <cit.>. This results in an additional memory requirement for LoRA of 2 BLr bytes per linear layer. § EXPERIMENTS: THROUGHPUT <ref> compares the normalized pretraining throughput (using the Full model) of Grass and across 60M, 350M, and 1B model sizes. 
We find that the throughput advantage of Grass over and Full is >25% for the 1B model at rank 64. The throughput approaches that of the full model, as model size decreases or projection rank increases. <ref> compares the finetuning throughput across ranks 8, 16,32, and 64 for the Grass, , and LoRA baselines. For the ranks commonly used for finetuning (8-64) the throughput advantage of Grass remains about the same. § EXPERIMENTS: ADDITIONAL ABLATIONS Comparison with other baselines In <ref>, we report the validation perplexity of various other baselines on a LLaMA 1B pretraining task on the RealNews subset of C4. The attention and feedforward layers in all models are projected to a rank of 256, or use low rank adapters of this rank. We find that the training perplexities are lower while the validation perplexities are higher than in <ref> for the 60M model due to overfitting on the RealNews dataset. All models use an update frequency of 200, and we tune the learning rate and scale factor α per model. In addition to Grass and , we also include the ReLoRA baseline <cit.> without any full-rank training warmup, the baseline where P has entries drawn from 𝒩(0,1/r), and the CountSketch baseline where P^⊤ is a CountSketch matrix with r rows with one nonzero entry from {± 1} per column. The CountSketch projection has been previously applied to embedding layer gradients which are sparse in prior work <cit.>, but shows larger variance and poorer convergence rates for dense gradients. We see that Grass is competitive with , while ReLoRA, , and CountSketch fall short. One way to interpret this is in terms of variance of the gradient sketches— Grass being data dependent and based on row norms can better approximate the gradient low rank subspace than a data agnostic sketch like or CountSketch <cit.>. Grass with Adafactor We pretrain the LLaMA 1B model with Grass and Full-rank in BF16 on the Realnews subset of C4 using the Adafactor optimizer <cit.> as an alternative to Adam for . Adafactor achieves sub-linear memory cost by factorizing the second-order statistics using a row-column outer product. For Grass we use learning rate 0.005, α=0.25, r=256, K=200, batch size 512, optimizer restart with a restart warmup of 100 steps and no initial warmup. For Full-rank training, we use learning rate 0.0005, batch size 512, and 1000 initial linear learning rate warmup steps. In <ref> we report the train perplexity and see that Grass is within 1 perplexity point of Full-rank, demonstrating its ability to work with other inner off-the-shelf optimizers beyond Adam. =-1 Coverage of indices. In <ref>, we plot the coverage defined as the union of indices selected over n update projection steps divided by the total indices per layer. We plot the coverage for the 60M LLaMA model pretrained on the C4 RealNews subset, for n=15 updates with K=200 steps between updates. Here, with the rank 128 and the the number of rows m=512, a uniform sampling with replacement over 15 iterations should on average cover 1 - ((1 - 1/512)^128)^15≈ 97.66% of all the 512 indices in each layer. Empirically, all sampling methods exhibit good coverage with the Multinomial-Norm^2-NR being close to uniform. Top-r and Multinomial-Norm^2-R oversample indices in certain layers, suggesting potential areas for further investigation into their utility in pruning strategies. In <ref> and <ref> we plot the aggregated sampled indices over 15 iterations of 60M LLaMA pretraining on the RealNews subset of C4. 
We see that while Multinomial-Norm^2-NR and Top-r attain similar performance in terms of perplexity, the sampled indices can be quite different, with Top-r tending to oversample indices in particular layers.
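To close this appendix, the following self-contained NumPy sketch (ours, not part of the released code) numerically checks three of the statements above: that the sparse projection P = (ρ B)^⊤ gives an unbiased reconstruction PP^⊤G, that row-norm sampling yields a smaller total variance than uniform sampling, and that the regrouped product (P^⊤(∇_y L)^⊤)X equals P^⊤G without ever forming the m × n gradient.

import numpy as np

rng = np.random.default_rng(0)
m, n, r, b = 64, 32, 8, 16

# A gradient-like matrix with a few heavy rows, so that row-norm sampling differs from uniform.
G = rng.normal(size=(m, n)) * (1.0 + 9.0 * (rng.random(m) < 0.1))[:, None]

def sample_P(q):
    # sigma_j ~ Multinomial(1, q) with replacement; P = (rho B)^T with rho_jj = 1/sqrt(r q_sigma_j)
    idx = rng.choice(m, size=r, p=q)
    P = np.zeros((m, r))
    P[idx, np.arange(r)] = 1.0 / np.sqrt(r * q[idx])
    return P, idx

def one_estimate(q):
    P, _ = sample_P(q)
    return P @ (P.T @ G)

def bias_and_variance(q, trials=2000):
    est = np.stack([one_estimate(q) for _ in range(trials)])
    return np.linalg.norm(est.mean(axis=0) - G), est.var(axis=0).sum()

q_uniform = np.full(m, 1.0 / m)
row_norms = np.linalg.norm(G, axis=1)
q_rownorm = row_norms / row_norms.sum()
print(bias_and_variance(q_uniform))    # mean approaches G as trials grow; larger total variance
print(bias_and_variance(q_rownorm))    # mean approaches G as well; smaller total variance

# Regrouped multiplication: P^T G = (P^T (dL/dy)^T) X, so the m x n gradient is never formed.
dLdy, X = rng.normal(size=(b, m)), rng.normal(size=(b, n))
P, _ = sample_P(q_rownorm)
print(np.allclose(P.T @ (dLdy.T @ X), (P.T @ dLdy.T) @ X))   # True by associativity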
http://arxiv.org/abs/2406.17980v1
20240625232833
Acceleration and radiation: Classical and quantum aspects
[ "Felipe Ignacio Portales Oliva" ]
gr-qc
[ "gr-qc", "hep-th" ]
UNIVERSIDADE FEDERAL DO ABC Graduate Program in Physics (Programa de Pós-Graduação em Física) Felipe Ignacio Portales Oliva Acceleration and Radiation: Classical and Quantum Aspects Santo André, SP 2023
http://arxiv.org/abs/2406.18104v1
20240626064332
Searching for the Signature of Fast Radio Burst by Swift/XRT X-ray Afterglow Light Curve
[ "Hsien-chieh Shen", "Takanori Sakamoto", "Motoko Serino", "Yuri Sato" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Physical Sciences, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuo-ku, Sagamihara, Kanagawa 252-5258, Japan ksei@phys.aoyama.ac.jp, tsakamoto@phys.aoyama.ac.jp, mserino@phys.aoyama.ac.jp X-rays: bursts_1 — gamma-ray burst: general_2 — radio continuum: general_3 Searching for the Signature of Fast Radio Burst by Swift/XRT X-ray Afterglow Light Curve Hsien-chieh Shen,^* Takanori Sakamoto,^* Motoko Serino,^* and Yuri Sato Received ***; accepted *** =================================================================================================== § ABSTRACT A new type of cosmological transient, dubbed fast radio bursts (FRBs), was recently discovered. The source of FRBs is still unknown. One possible scenario of an FRB is the collapse of a spinning supra-massive neutron star. <cit.> suggests that the collapse can happen shortly (hundreds to thousands of seconds) after the birth of supra-massive neutron stars. The signatures can be visible in X-ray afterglows of long and short gamma-ray bursts (GRBs). For instance, a sudden drop (decay index steeper than -3 to -9) from a shallow decay (decay index shallower than -1) in the X-ray afterglow flux can indicate the event. We selected the X-ray afterglow light curves with a steep decay after the shallow decay phase from the Swift/XRT GRB catalog. We analyzed when the decay index changed suddenly by fitting these light curves to double power-law functions and compared it with the onset of FRBs. We found none of our GRB samples match the onset of FRBs. § INTRODUCTION Fast radio bursts (FRBs) are enigmatic transient radio bursts with a typical frequency of 100 MHz to GHz. These radio bursts have a typical duration of several milliseconds, a high dispersion measure (DM), and the inferred total energy release of ∼ 10^39 erg. According to the observational properties, FRB's origin is considered a compact object located at a cosmological distance. Since the discovery of Lorimer burst in 2006 <cit.>, hundreds of FRBs have been identified by radio observatories all over the world, such as the Canadian Hydrogen Intensity Mapping Experiment (CHIME) <cit.> and Australian Square Kilometre Array Pathfinder (ASKAP) <cit.>. However, the origin and the emission mechanism are still under debate. There are more than 50 theoretical models for FRBs, and young magnetars have been put forward as the leading source candidate for repeating FRBs <cit.>. In contrast, the origin of non-repeating FRBs is still unknown. <cit.> show that a binary neutron star (BNS) merger could be one of the coincidence sources of non-repeating FRBs. A non-repeating CHIME/FRB event, FRB 20190425A, located within the gravitational waves's (GW) sky localization area of LIGO, was detected 2.5 hours after the GW event, GW 190425. According to the GW data, GW 190425 is consistent with a BNS. However, the chirp mass and the total mass are significantly larger than those of any known BNS system <cit.>. FRB 20190425A is a bright FRB with a fluence of 31.6 ± 4.2 Jy ms and a duration of 380 ± 2 μs, and has an unusually low DM of 128.2 pc cm^-3. Considering the temporal, spatial, and DM of GW 190425 and FRB 20190425A, <cit.> claims the chance coincidence between unrelated FRB and GW events to be 0.0052 (2.8 σ). Although no FRB-associated GRB event has been reported, <cit.> claimed the detection of a coherent radio flash 76.6 minutes after a short GRB event, GRB 201006A. 
This radio flash is offset by 27^'' from the GRB location, which has a chance probability of ∼ 0.5 % (2.6 σ), considering measurement uncertainties. However, its low significance detection warns against a further multi-wavelength search to claim the association between an FRB and a GRB <cit.>. On the other hand, although the hard X-ray counterpart of FRBs is still not clear <cit.>, the recent association between FRB 200428 and the Galactic magnetar SGR 1935+2154 suggests a magnetar as an origin of FRBs <cit.>. This observation also indicates that a bright, unknown magnetar flare can be a hard X-ray counterpart of FRBs, which could be identified as a GRB. One possible scenario of FRBs proposed by <cit.> is that a spinning supra-massive neutron star loses centrifugal support and collapses into a black hole. In this case, FRBs would happen several thousand to a million years after the birth of the supra-massive neutron star. On the other hand, <cit.> suggest that such implosions can happen in supra-massive neutron stars shortly ∼ 10^4 s after their births, and the signatures can be visible in X-ray afterglows of some long and short GRBs. X-ray afterglow of GRB shows several different decay phases. Figure <ref> shows one of the typical X-ray afterglow light curves obtained by the Swift/X-ray Telescope (XRT) <cit.>. The light curve consists of 4 components: (I) an initial steep decay phase, (II) a shallow decay phase, (III) a normal decay phase, and (IV) a jet break phase. At first, the X-rays decay rapidly in the first few hundred seconds, which is explained as the tail of a prompt emission. After that, the X-ray luminosity attenuates gently for 10^3∼ 10^4 s. This shallow decay phase requires continuous energy injection into the blast wave <cit.>, which would be consistent with a spinning-down neutron star engine. At the last part of the X-ray light curves, a normal decay phase with a typical decay index α ≃ -1 could be observed. In some GRBs, a further steepening (decay index α ≃ -2) is detected after the normal decay phase, which is interpreted as a jet break feature <cit.>. By contrast, in some X-ray afterglow light curves of GRBs, there are X-ray plateaus followed by an extremely steep decay, with a decay index steeper than -3, sometimes reaching -9 (upper panel of figure <ref>). Here, we called this phase a late-time steep decay. This sudden drop suggests that the emission stops abruptly, and it can happen when a rapidly spinning-down magnetar collapses into a black hole. The epoch could be the emission epoch of the FRB as suggested in <cit.>. In this paper, we investigate the possibility of the FRB counterparts as GRBs by using the data of the Neil Gehrels Swift Observatory <cit.>. Section 2 introduces our search for the event that matches the scenario of <cit.> and shows how we select and analyze the sample data. We show our results in section 3 and discuss the connection between FRBs and GRBs in section 4. All quoted errors in this work are at the 68% confidence level. § OBSERVATIONS We searched for the X-ray afterglow light curves, which have a late-time steep decay from the Swift/XRT GRB catalog <cit.>. We extracted the time when the decay index suddenly changed by fitting these light curves. Then, we compared the break time with the onset of an FRB to find if there is any associated event between an FRB and a GRB. About 1500 X-ray afterglow light curves exist in the Swift/XRT GRB catalog between 2004 and 2022. 
We use the following procedure to select our sample to find light curves with a steep decay (temporal decay index is steeper than -3) after the shallow decay phase. First, we classified all the light curves into ten types by shape, the overall X-ray flux, and the decay index from the automatic light curve fitting parameters in the Swift/XRT GRB catalog (figure <ref>). Then, we picked up the light curves, which have a late-time steep decay. Our targets of the GRBs with a late-time steepening correspond to type 1, type 3, and type 5 in our classification. After the classification, we picked up 86 light curves. We excluded the same time intervals of X-ray flares identified on the Swift/XRT GRB catalog, and fitted these light curves to the following double power-law functions <cit.>, F = F_0[ (t/t_b)^ω α_1 + (t/t_b)^ω α_2]^-1/ω, where t_b is a break time, ω describes the sharpness of the break, α_1 is the decay index in the shallow decay phase, and α_2 is the decay index after t_b. Here we fixed ω = 10. The reason for re-fitting the light curve is to obtain a robust result for our purpose. Accepting the fit of a double power-law function over a simple power-law, we request the F-test probability of less than 0.15. For instance, figure <ref> compares the fitting results of two least significant GRBs based on the Swift/XRT GRB catalog and ours. As can be seen, for GRB 080919 (figure <ref> left), the light curve shows a clear steepening in the decay index from -0.98 to -4.50 as in our fitting, whereas the fitting based on the Swift/XRT GRB catalog gives a simple power-law as the best-fit function and shows a decay index of -2.21. The χ^2/d.o.f. for a simple power-law fit is 34/4 while the χ^2/d.o.f. for the double power-law fit is 5/2. The F-test probability is 0.13, which makes the double power-law function more significant based on our criterion. As another example, our fitting for GRB 201017A (figure <ref> right) shows a sudden change in the decay index from -0.28 to -3.83, whereas the automatic fitting of the Swift/XRT GRB catalog shows a simple decay index of -1.07. The F-test probability between the double power-law fit and the simple power-law fit of GRB 201017A is 0.10. After re-fitting, we removed the samples that did not meet our requirements based on the fitting result. Our requirement is the late time steep decay index should be steeper than -3 (α_2 > 3). If α_2 is steeper than -3 within the error, we included it in our sample to maximize the sample. As a result, we selected 51 light curves as our samples in this paper. Our sample includes 42 long GRBs <cit.> and nine short GRBs <cit.>. § RESULTS Figure <ref> shows the XRT light curves of our 51 samples, and table <ref> summarizes the fitting result and the GRB location information from the Swift/XRT observations. The histograms of the decay index α_1 and α_2, and the break time from a shallow to a steep decay of each classified type are shown in figure <ref>. We compared the break time t_b within ± 1 hour to the onset of reported FRBs (536 FRBs in the CHIME/FRB Catalog 1 <cit.> and FRBs detected by other telescopes from the online FRB catalog <cit.>). We find no FRB event matches the time window of t_b and the position. The closest coincident event in time is GRB 171209A and FRB 171209, which was detected by the Parks telescope at 20:34:23.5 UT. The FRB happened 24 minutes after t_b. 
However, since the position difference between GRB 171209A and FRB 171209 is 74^∘, those two events are not associated because the position difference is larger than the localization accuracy of the events. § DISCUSION The observer's direction could be a critical point in inquiring about the reason for the absence of FRB-associated GRBs. Generally, a GRB is observed when we see the jet from the on-axis direction (observer A of figure <ref>). Even if an FRB occurred, the GRB ejecta would absorb radio emission from the central engine. In contrast, it may be able to see an FRB without a GRB signal when an observer sees the event from an off-axis direction (observer B of figure <ref>). In this off-axis scenario with a BNS origin, if the event happens at a near distance, a GW signal could also be observed. This can be the case of FRB 20190425A (2.5 hrs after GW 190425) reported by <cit.>. In our scenario, although it is difficult to detect a GRB with an FRB signal, the origin of FRBs could be the same as that of short GRBs, which is a BNS merger. The environment of FRBs and GRBs is also a point to compare. <cit.> showed that most of the FRB host galaxies' stellar mass and star formation rate prefer a medium to old population, which implies that the environment of FRB is inconsistent with that of long GRBs but more consistent with short GRBs. On the other hand, the event rate of FRB (R_ FRB(L > 10^37 erg s^-1) ∼ 10^7-10^8 Gpc^-3 yr^-1 <cit.>) is much higher than that of long GRBs (R_ l-GRB ∼ 1.3 Gpc^-3 yr^-1 <cit.>) and short GRBs (R_ s-GRB ∼ 7.5 Gpc^-3 yr^-1 <cit.>). In our picture, if the open angle of the GRB jet was θ_j ∼ 5 degrees, the chance we could detect FRBs from the off-jet angle would be almost three orders of magnitude higher than the chance we observe an on-axis GRB. Therefore, only some FRBs could be explained by our scenario. However, our search of the Swift GRB samples is incomplete, considering the limited field of view and the sensitivity of the Swift Burst Alert Telescope (BAT) <cit.>. The peak energy flux of our samples observed by BAT ranged from 2.4 × 10^-8 to 5.8 × 10^-6 erg s^-1 cm^-2. We need to increase the samples by combining data from multiple GRB observatories and also search for weak GRBs by the upcoming new X-ray transient facility, such as the Einstein Probe <cit.> and the HiZ-GUNDAM mission <cit.>. Non-repeating FRBs are the high-priority target for unraveling the connection between FRBs, GRBs, and GW events. However, it is difficult to catch these transient events because we never know where and when they will come. Also, the error range of the gravitational wave detector is as extensive as ∼100 deg^2, and the detectors of GRB or GW sometimes look at a different sky than the telescopes of the FRB do. For this reason, the radio observatory with a large field of view is essential. Bustling Universe Radio Survey Telescope in Taiwan (BURSTT), a new fisheye radio software telescope with a large field of view of ∼10^4 deg^2, can detect and localize ∼100 nearby FRBs per year <cit.>. Thanks to the wide field of view, BURSTT could discover a large sample of FRBs and achieve immediate multi-wavelength and multi-messenger follow-up observation. § SUMMARY We investigate the case suggested by <cit.> using the extensive X-ray afterglow data of Swift. We elasticated and selected 51 samples from the Swift/XRT X-ray afterglow data. We found no GRB-associated FRBs in our samples. 
In future work, we would like to combine data from multiple GRB observatories and compare the onset between GRBs and FRBs with a broader time window. Also, we would like to apply a multivariate adaptive regression splines (MARS) technique <cit.> to improve our light curve fitting. A radio telescope with a large field of view, such as BURSTT, and an upcoming high-sensitivity X-ray transient facility, such as the Einstein Probe and the HiZ-GUNDAM, are needed to unveil the association between FRB, GRB, and GW. We thank the referee for their careful reading and their suggestions that substantially improved the quality of this paper. We would like to thank T. Hashimoto and S. Yamasaki for variable comments. This research was supported by JST SPRING, Grant Number JPMJSP2103 (HS) and partially supported by JSPS KAKENHI Grant Nos. 22KJ2643 (YS). This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. [Abbott et al.(2020)]Abbott2020 Abbott, B. P., et al. 2020, , 892, L3 [Barthelmy et al.(2005)]Barthelmy2005 Barthelmy, S. D., et al. 2005, Space Sci. Rev., 120, 143 [Bochenek et al.(2020)]Bochenek2020 Bochenek, C. D., Ravi, V., Belov, K. V., Hallinan, G., Kocz, J., Kulkarni, S. R., & McKenna, D. L. 2020, , 587, 59 [Burrows et al.(2005)]Burrows2005 Burrows, D. N., et al. 2005, Space Sci. Rev., 120, 165 [CHIME/FRB Collaboration et al.(2019)]CHIMEFRB_Collaboration CHIME/FRB Collaboration, et al. 2019, , 566, 230 [CHIME/FRB Collaboration et al.(2021)]CHIMEFRB_Catlog1 CHIME/FRB Collaboration, Amiri, M., Andersen, B. C., et al. 2021, , 257, 59 [e.g., DeLaunay et al.(2016)]DeLaunay2016 DeLaunay, D. D., et al. 2016, , 832, L1 [Evans et al.(2009)]Evans2009 Evans, P. A., et al. 2009, , 397, 3, 1177 [Falcke and Rezzolla(2013)]FalckeRezzolla2013 Falcke, H., & Rezzolla, L. 2013, , 562, 137, 6 [e.g., Friedman(1991)]Friedman1991 Friedman, J. H. 1991, AnSta, 19, 1 [Gehrels et al.(2004)]Gehrels2004 Gehrels, N., et al. 2004, , 611, 1005 [Gehrels et al.(2005)]Gehrels2005 Gehrels, N., et al. 2005, , 437, 851 [Hotan et al.(2021)]ASKAP Hotan, A. W., et al. 2021, PASA, 38, e009s [Li & Zhang(2020)]LiY2020 Li, Y., & Zhang, B. 2020, , 899, 1, L6 [Liang et al.(2007)]Liang2007 Liang, E. W., Zhang, B.-B., & Zhang, B. 2007, , 670, 565 [Lin et al.(2022)]Lin_BURSTT Lin, H.-H., et al. 2022, , 134, 094106 [Lorimer et al.(2007)]Lorimer2007 Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J., & Crawford, F. 2007, Sci., 318, 777 [Luo et al.(2020)]Luo2020 Luo, R., Men, Y., Lee, K., Wang, W., Lorimer, D. R., & Zhang, B. 2020, , 494, 1, 665-679 [Michilli et al.(2018)]Michilli2018 Michilli, D., et al. 2018, , 553, 182 [Moroianu et al.(2023)]Moroianu2023 Moroianu, A., Wen, L., James, C. W., Ai, S., Kovalam, M., Panther, F. H., & Zhang, B. 2023, Nature Astron., 7, 579 [Petroff et al.(2016)]FRBCAT Petroff, E, et al. 2016, , 33, e045, 7 [Popham et al.(1993)]Woosley1993 Popham, R., Woosley, S. E., & Fryer, C. 1999, , 518, 356 [Rhoads(1999)]Rhoads1999 Rhoads, J. E. 1999, , 525, 737 [Rowlinson(2023)]Rowlinson2023 Rowlinson, A., et al. 2023, arXiv:2312.04237 [Sakamoto et al.(2021)]Sakamoto2021 Sakamoto, T., Troja, E., Lien, A., Zhang, B., Cenko, S. B., Cunningham, V., & Berger, E. 2021, , 908, 137 [Sarin et al.(2024)]Sarin2024 Sarin, N., Clarke, T. A., Magnall, S. J., Lasky, P. D., Metzger, B. D., Berger, E., & Sridhar, N. 2024, arXiv:2404.08048 [Wanderman & Piran(2010)]Wanderman2010 Wanderman, D., & Piran, T. 2010,, 406, 1944 [Yonetoku et al.(2020)]Yonetoku2022 Yonetoku, D., et al. 
2020, Proc. SPIE, 11444, id. 114442Z [Yuan et al.(2022)]Yuan2022 Yuan, W., et al. 2022, Handbook of X-ray and Gamma-ray Astrophysics. Edited by Cosimo Bambi and Andrea Santangelo, Springer Living Reference Work, ISBN: 978-981-16-4544-0, id. 86 [Zhang et al.(2006)]BingZhang2006 Zhang, B., Fan, Y. Z., Dyks, J., Kobayashi, S., Mészáros, P., Burrows, D. N., Nousek, J. A., & Gehrels,  N. 2006, , 642, 354 [Zhang(2014)]BingZhang2014 Zhang, B. 2014, , 780, 2, L21 [Zhang & Wang(2018)]ZhangGQ2018 Zhang, G. Q., & Wang, F. Y. 2018, , 852, 1
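As an illustration of the light-curve selection procedure in Section 2, where each light curve is fitted with the smoothly broken double power law of Eq. (<ref>) with ω fixed to 10, and the fit is preferred over a single power law when the F-test probability is below 0.15, the following is a rough, self-contained sketch on synthetic data. It is not the analysis code used for the Swift/XRT catalog; the synthetic light curve, noise model, initial guesses, and parameter bounds are our own choices, and scipy is assumed to be available.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

# Smoothly broken (double) power law of Sec. 2, with the sharpness omega fixed to 10.
def double_pl(t, F0, tb, a1, a2, omega=10.0):
    return F0 * ((t / tb)**(omega * a1) + (t / tb)**(omega * a2))**(-1.0 / omega)

def single_pl(t, F0, a):
    return F0 * t**(-a)

# Synthetic light curve: shallow decay (index ~ -0.3) breaking to a steep decay (~ -4) at t_b ~ 3000 s.
rng = np.random.default_rng(1)
t = np.logspace(2, 5, 60)
flux = double_pl(t, 1e-10, 3.0e3, 0.3, 4.0) * rng.lognormal(0.0, 0.1, t.size)
sigma = 0.1 * flux

p_single, _ = curve_fit(single_pl, t, flux, p0=[1e-8, 1.0], sigma=sigma, maxfev=10000)
p_double, _ = curve_fit(double_pl, t, flux, p0=[1e-10, 1e3, 0.5, 3.0], sigma=sigma,
                        bounds=([0.0, 1e2, 0.0, 0.0], [1e-6, 1e5, 3.0, 9.0]))

def chi2(model, popt):
    return np.sum(((flux - model(t, *popt)) / sigma)**2)

chi2_s, dof_s = chi2(single_pl, p_single), t.size - 2
chi2_d, dof_d = chi2(double_pl, p_double), t.size - 4
F = ((chi2_s - chi2_d) / (dof_s - dof_d)) / (chi2_d / dof_d)
p_value = f_dist.sf(F, dof_s - dof_d, dof_d)
print(p_double, p_value)    # accept the double power law when p_value < 0.15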
http://arxiv.org/abs/2406.18057v1
20240626042643
An Eulerian Meshless Method for Two-phase Flows with Embedded Geometries
[ "Anand S Bharadwaj", "Pratik Suchde", "Prapanch Nair" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Anand S Bharadwaj^1,* (Corresponding author: anandbharadwaj1950@gmail.com), Pratik Suchde^2, Prapanch Nair^1 ^1 Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India ^2 University of Luxembourg, 2 avenue de l'universite, L-4365 Esch-sur-Alzette, Luxembourg § ABSTRACT We present a novel Eulerian meshless method for two-phase flows with arbitrary embedded geometries. The spatial derivatives are computed using the meshless generalized finite difference method (GFDM). The sharp phase interface is tracked using a volume fraction function. The volume fraction is advected using a method based on the minimisation of a directional flux-based error. For stability, the advection terms are discretised using upwinding schemes. In the vicinity of the embedded geometries, the signed distance function is used to populate the surface of the geometries to generate a body-conforming point cloud. Consequently, the points on the boundaries participate directly in the discretisation, unlike conventional immersed-boundary methods where they are either used to calculate momentum deficit (for example, continuous forcing) or conservation losses (for example, cut-cell methods). The boundary conditions are, therefore, directly imposed at these points on the embedded geometries, opening up the possibility for a discretisation that is body-conforming and spatially varying in resolution, while retaining the consistency of the scheme. Keywords: Meshless methods, Two-Phase Flows, Generalized Finite Difference Method, Embedded geometries § INTRODUCTION §.§ Two-phase flows The simulation of two-phase flows in complex shaped domains, a ubiquitous phenomenon in industrial problems, aggregates different numerical challenges such as consistency, conservation, capturing of interface sharpness, satisfaction of no-slip at the immersed boundary and spatial resolution. Often, the problems of complexly shaped solid boundaries and two-phase interface evolution are tackled separately; few solvers attempt to address both these phenomena in the same framework <cit.>. Traditionally, two-phase flows have been solved in Eulerian Cartesian mesh-based methods using a variety of techniques: front tracking, volume-of-fluid (VOF), level set (LS) and coupled level set-volume of fluid (CLSVOF) methods. While front-tracking methods <cit.> capture interfacial phenomena sharply using Lagrangian markers, they are challenging for flows with topological changes such as break-up and coalescence. In VOF, LS and CLSVOF, the phase interfaces are implicitly modelled using fields with their own transport equation. These methods can handle topological changes. In VOF <cit.>, the advection of a phase volume fraction function, α∈ [0,1], is solved using a finite volume discretization. The interface is reconstructed using simple line interface (SLIC) <cit.>, piecewise linear (PLIC) <cit.> or parabolic reconstruction (PROST) <cit.>. The VOF method, despite showing good conservation properties, can be challenging in estimating interface curvature. The VOF methodology is also used to embed solid boundaries sharply within a fluid domain <cit.>. In the Level set method <cit.>, interfaces are tracked as level sets of a function (typically the signed distance function) that are evolved using the advection equation.
While this method is more accurate for calculating curvature, for reasonable mass conservation, the level set function needs to be reinitialized at regular intervals. To achieve conservation and accuracy, the coupled level set-volume of fluid (CLSVOF) method <cit.> was introduced. The level set function is reinitialized by using the volume fraction near the interface. A variation of the CLSVOF method was proposed by Tsui et al. <cit.> called the conservative interpolation scheme for interface tracking (CISIT), in which the interface is identified by the contour of the volume fraction field at α = 0.5. The level set function is then reinitialized with respect to this interface position. Since mesh generation and manipulation are expensive, the above approaches have been introduced in the context of Cartesian meshes. Adapting the resolution of Eulerian meshes and conforming the computational nodes to immersed bodies are two major challenges that limit their application. Meshless methods are a widely used alternative to Eulerian mesh-based methods for simulating flows with interfaces. In Lagrangian meshless methods, the interfaces evolve naturally as the particles representing the phases move. The moving particle semi-implicit (MPS) <cit.> method has been used to solve free-surface and two-phase flows. The method relies on moving particles that interact with other particles in the neighbourhood based on the particle number density. Smoothed particle hydrodynamics <cit.> is a popular method that has found widespread use for free-surface flows and two-phase flows. The generalized finite difference method <cit.>, in its Lagrangian form, is yet another meshless method that has found extensive use in two-phase flows and free-surface flows. In the Eulerian framework, the application of meshfree methods to free-surface and two-phase flows is relatively rare. The radial basis function (RBF) method has found application in an Eulerian framework to multiphase phase-change problems involving fluids. Abbaszadeh and Dehghan <cit.> apply the RBF method to the Shan-Chen model<cit.>. Dehghan and Najafi<cit.> apply the RBF framework for liquid-solid phase change by solving the Stefan problem. In the work of Heydari et al. <cit.> a Lagrangian-Eulerian approach using SPH has been proposed to solve a free-surface flow in an Eulerian framework away from the free-surface and a Lagrangian framework close to the free surface. However, the relevance of Eulerian meshless methods for problems with complex and evolving interfaces is seldom explored. In the present work, a two-phase Eulerian meshless solver that captures interfaces in a manner similar to the above-mentioned CISIT method is introduced. The novelty of the method is that it leverages the advantages of the Eulerian as well as the meshless frameworks. While the Eulerian framework obviates the need to recalculate neighbourhoods of points at each time step, the meshless framework obviates the need for an apriori mesh. A volume fraction of one of the phases (α) is declared as a field variable and the interface is located at α=0.5. The volume fraction is advected with the flow and a directional flux-based error minimization scheme is proposed for the convective terms of the advection equation to be solved in the meshless framework. This scheme allows the use of upwinding in the directional fluxes used in the minimization procedure. 
Additionally, an interface sharpening method is proposed that makes use of the step nature of the volume fraction function and assists with the retention of the interface sharpness. For modelling surface tension, the level-set function needs to be reinitialized from an initial estimate that is constructed from the volume fraction such that zero-contour of the level-set function coincides with the interface i.e. the 0.5-contour of α. §.§ Methods for embedded geometries In mesh-based non-body conforming frameworks, immersed-boundary methods <cit.> have been widely used to simulate flows past complex and moving geometries. They offer the advantage of not requiring a body-conformal mesh, thus, alleviating the difficulties of mesh generation. Consequently, to satisfy the no-slip and no-penetration boundary conditions at the embedded surface, forcing techniques are used. A variety of forcing techniques have been proposed depending on their suitability for specific flow problems. They can be broadly classified as - continuous and discrete forcing methods. In continuous forcing methods <cit.>, a forcing function is added to the governing equations in the vicinity of the embedded surface such that the necessary boundary condition is satisfied. In discrete forcing methods <cit.>, cells in the vicinity of the embedded surface are flagged and the solution is reconstructed in these cells using a suitable interpolation such that the boundary condition at the embedded surface is satisfied. As with the case in forcing, to estimate the flow variables at the surface such as pressure, shear stress and the loads acting on the geometry, suitable interpolation strategies are necessary <cit.>. In the present work, we present a method to accommodate objects in a flow as embedded surfaces in a non-conformal point cloud, in a manner similar to immerse-boundary methods. Conforming the point cloud to the embedded surface would be advantageous for an accurate representation of the shape of the surface as well as capturing the boundary layer effects. To achieve this, we propose a method to generate a conformal point cloud on an arbitrary embedded geometry from a non-conforming initial arrangement of points. This can also be applied to moving geometries. Thus the surface participates in the computation of spatial gradients without the need for interpolation. This ensures direct enforcement of boundary conditions, in contrast to the introduction of source terms in the momentum equation in immersed-boundary methods. Additionally, the surface quantities such as pressure and shear stresses can be extracted directly from the surface points, without a need for interpolation. The paper is organized as follows. Sec. <ref> discusses methodology – the governing equations, the Generalized Finite Difference Method (GFDM), the interface tracking algorithm and the generation of conformal point clouds for arbitrary embedded surfaces. Sec. <ref> presents different test cases for validating the model. Finally, Sec. <ref> presents the conclusions and indicates some future extensions of the proposed model. § METHODOLOGY In this section, we present the new Eulerian meshless two-phase flow solver with embedded geometries. We present the governing equations, followed by the generalized finite difference method (GFDM) that discretises the governing equations. 
Subsequently, we propose the interface tracking algorithm with emphasis on an interface sharpening method and directional flux-based minimization in the solution to the volume fraction advection equation. Finally, we propose a method that conforms a point cloud in a regular lattice onto an arbitrary geometry. §.§ Governing equations The incompressible Navier-Stokes equations are considered : ∇·𝐮 = 0 , ∂𝐮/∂ t + (𝐮·∇) 𝐮 = -1/ρ∇ p + νΔ𝐮 + 𝐠 , where 𝐮 denotes the velocity, p, the pressure, 𝐠, the acceleration due to gravity, ρ, the density and ν, the kinematic viscosity, respectively. The projection method <cit.> is used to solve the above equations. The momentum equations are first marched to solve for a provisional velocity field. 𝐮_i^* - 𝐮_i^n/Δ t = - 𝐅_c|^n + 𝐅_v|^n + 𝐠. The convective and viscous terms are denoted as 𝐅_c and 𝐅_v respectively. The superscript `n' denotes the time level and `*' denotes the provisional values. The provisional velocity field generally does not satisfy the divergence-free condition. The pressure p is coupled to the velocity through the Poisson equation: ∇^2 p^n+1 = ρ/Δ t∇·𝐮^* + 1/ρ∇ p^n+1·∇ρ . The second term on the RHS of the above equations may be non-zero in situations where two phases of different densities exist in the flow giving rise to a non-zero density gradient in the vicinity of the interface. Having solved the above pressure Poisson equation, the velocity is then corrected as 𝐮^n+1 = 𝐮^* - Δ t/ρ∇ p^n+1 . §.§ Generalized Finite Difference Method The generalized finite difference method (GFDM) is a meshless method that estimates derivatives of flow variables at a point from a set of neighbours that are a part of the point cloud using a weighted least squares error minimization procedure as discussed below <cit.>. Consider a point i which has a neighbourhood of points, j ∈ S_i. S_i denotes the support region around the point i. This is illustrated in Fig. <ref>. Using a monomial basis up to a prescribed degree in two spatial dimensions, such as M_i(x,y) = [ 1 Δ x Δ y Δ x^2 Δ y^2 Δ x Δ y; ], differential operators can be derived for a non-uniformly discretized field. Here, Δ x = x - x_i and Δ y = y - y_i. As an example, we derive the procedure for the Laplacian operator. Let us denote the Laplacian operator at a point j in the neighbourhood of point i as C^Δ_ij. Applying the Laplacian operator to each of the monomial basis elements in Eq. <ref>, we get ∑_j ∈ S_i C^Δ_ij (1) = 0 , ∑_j ∈ S_i C^Δ_ij (Δ x_j) = 0 , ∑_j ∈ S_i C^Δ_ij (Δ y_j) = 0 , ∑_j ∈ S_i C^Δ_ij (Δ x_j^2) = 2 , ∑_j ∈ S_i C^Δ_ij (Δ y_j^2) = 2 and ∑_j ∈ S_i C^Δ_ij (Δ x_j Δ y_j) = 0, which become the consistency conditions for the operator. This can be rewritten concisely as, 𝐕_i C⃗^⃗Δ⃗_i = b⃗_i, where, 𝐕_i = [ … 1 …; … Δ x_j …; … Δ y_j …; … Δ x_j^2 …; … Δ y_j^2 …; … Δ x_j Δ y_j …; ] , C⃗^⃗Δ⃗_i = [ C^Δ_i1; ⋮; C^Δ_ij; ⋮; C^Δ_iN; ] and b⃗_i = [ 0; 0; 0; 2; 2; 0; ]. Henceforth we drop the index i for the tensorial quantities, for brevity, as the ensuing discussion concerns a given particle i. Eq. <ref> is used in minimizing the functional J = ∑_j ∈ S_i(C^Δ_ij)^2/w_ij. Following the weighted least squares procedure, we get C⃗^⃗Δ⃗ = 𝐖𝐕^T (𝐕𝐖𝐕^T)^-1b⃗ where 𝐖 is a diagonal matrix with its entries given by the weight of the point j ∈ S_i w.r.t. the point i, 𝐖 = [ w_i1 … 0; ⋮ ⋱ ⋮; 0 … w_iN ]. In this work, the weights are assigned using the Gaussian function. 
The weight of a point j in the neighbourhood of a point i is w_ij = 1/π h^2exp(-|r⃗_i - r⃗_j|^2/h^2), where r⃗_i and r⃗_j are the position vectors of point i and j, respectively, and h is the smoothing length which is typically chosen to be the radius of the circle in Fig. <ref>. The choice of the weight function does not significantly alter the numerical results according to <cit.>, and this is observed in our simulations as well. Having derived the Laplacian operator from Eq. <ref>, the Laplacian of a general function ϕ at the point i may be approximated as ∇^2 ϕ |_i ≈∑_j ∈ S_i C^Δ_ijϕ_j. The process is identical the for other operators. For the first order derivatives in x and y, the operator C^Δ_ij in Eq. <ref> is simply replaced with the corresponding operators— C^x_ij and C^y_ij, respectively. It is noted that the operators can also be derived by minimizing the truncation error of the Taylor series. The approach using Taylor series and the approach using monomials (as detailed above) are mathematically equivalent <cit.>. For work on higher order operators in GFDM, readers are referred to <cit.>. We, now, look at the discretization of the convective terms (𝐅_c) from the Navier-Stokes equations (Eqs. <ref>). From the x-momentum equation, the convective terms are F_c^x = ∂ f/∂ x + ∂ g/∂ y, where f = u^2 and g = uv. The derivatives are computed as ∂ f/∂ x = ∑_j ∈ S_i C^x_ij f_j, ∂ g/∂ y = ∑_j ∈ S_i C^y_ij g_j . The same procedure is repeated for the convective terms of the y-momentum equation. The viscous term 𝐅_v involves the Laplacian of the velocity components. F_v^x = ν∇^2 u = ν∑_j ∈ S_i C^Δ _ij u_j , F_v^y = ν∇^2 v = ν∑_j ∈ S_i C^Δ _ij v_j . For the pressure Poisson equation, the Laplacian operator is applied to the pressure field in the same manner as above. In the discretized form, the pressure Poisson equation (Eq. <ref>) can be written as ∑_j ∈ S_i C^Δ_ij p^n+1_j = ρ_i/Δ t∑_j ∈ S_i[C^x_ij u^ *_j + C^y_ij v^*_j ]+ 1/ρ_i[∑_j ∈ S_i C^x_ij p^n+1_j ∑_j ∈ S_i C^x_ijρ_j + ∑_j ∈ S_i C^y_ij p^n+1_j ∑_j ∈ S_i C^y_ijρ_j ]. At the boundaries, Dirichlet boundary conditions are imposed by simply setting the pressure or velocity to the prescribed value. On the other hand, Neumann boundary conditions are imposed using the differential operators. As an example, let's impose a zero-Neumann condition for pressure at a boundary point b with a boundary normal n⃗^b = (n_x^b, n_y^b) such that ∇ p_b ·n⃗^b = 0. ∇ p_b ·n⃗^b = n_x^b ∑_j ∈ S_b C^x_bj p_j + n_y^b ∑_j ∈ S_b C^y_bj p_j = 0. We rearrange the terms to get an expression for pressure at the boundary as p_b = - n_x^b ∑_j ∈ S_b j ≠ b C^x_bj p_j + n_y^b ∑_j ∈ S_b j ≠ b C^y_bj p_j/ n_x^b C^x_bb + n_y^b C^y_bb. §.§ Interface tracking for two-phase flows The interface tracking method used in this work is inspired from the CLSVOF-bsaed CISIT technique <cit.>. At each point of the point cloud, a volume fraction is defined as below: α = {[ 1 , if i is in phase 1; 0 , if i is in phase 2; ]. . At the vicinity of the interface between the two phases, α varies smoothly from α=0 to α=1 and the interface location is identified as α = 0.5. The volume fraction is advected at the velocity of the flow ∂α/∂ t + ∇· (𝐮α) = 0 . The flux is determined by a directional flux-based minimization procedure described in Sec. <ref>. In case surface tension terms are included in the momentum equation, a level set (signed distance) function, ϕ, is required to estimate the surface curvature. 
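As an aside, the weighted least-squares construction of the GFDM operators described above can be illustrated with a short NumPy sketch: it assembles the Laplacian stencil C^Δ for a single point from a scattered neighbourhood using the Gaussian weights, and checks it on a quadratic field, for which the consistency conditions make the discrete Laplacian exact. This is our own illustration, not the authors' solver; the neighbourhood, smoothing length and test function are arbitrary choices.

import numpy as np

rng = np.random.default_rng(3)

# Scattered neighbourhood S_i around a central point (the first entry is the point i itself).
xi = np.array([0.0, 0.0])
nbrs = np.vstack([xi, xi + 0.1 * rng.uniform(-1, 1, size=(20, 2))])
dx, dy = (nbrs - xi).T
h = 0.15                                         # smoothing length (support radius)

# Monomial basis evaluated at the neighbours, Gaussian weights, and consistency conditions b.
V = np.vstack([np.ones_like(dx), dx, dy, dx**2, dy**2, dx * dy])       # 6 x N
W = np.diag(np.exp(-(dx**2 + dy**2) / h**2) / (np.pi * h**2))          # N x N
b = np.array([0.0, 0.0, 0.0, 2.0, 2.0, 0.0])     # consistency conditions for the Laplacian

# C = W V^T (V W V^T)^{-1} b  -- the weighted least-squares stencil.
C = W @ V.T @ np.linalg.solve(V @ W @ V.T, b)

phi = 3.0 * nbrs[:, 0]**2 + 2.0 * nbrs[:, 1]**2 - nbrs[:, 0] * nbrs[:, 1]
print(C @ phi)   # approx 10.0, the Laplacian of phi; exact up to round-off for quadratic fields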
In the case of flows with surface tension (not dealt with in this paper), the level set function needs reinitialization (as provided in <cit.>, for example) for mass conservation. §.§.§ Interface sharpening method While advecting the volume fraction as per Eq. <ref>, a hyperbolic partial differential equation, the discretization has a tendency to diffuse the sharp change of α at the interface owing to dissipative errors in upwind schemes. Higher order discretizations such as TVD with limiting and WENO schemes preserve the sharpness better than first-order schemes, although a certain amount of dissipation is inevitable. Here, we propose a method to preserve the sharpness at the interface, in addition to the choice of the discretization. The method relies on the fact that the volume fraction, α, is a step function, in principle, and therefore, the possible values it can assume are either 0 or 1, with a narrow region at the interface where it changes smoothly from 0 to 1. This narrow region widens due to the effect of dissipative numerical errors. Let us define a step function, ψ', such that ψ' = 1- 2 α. ψ', therefore, varies linearly from -1 to 1 in the vicinity of the interface, with its extent of diffusion same as that for α. Further, we define ψ such that ψ = sign(ψ'). ψ takes on the values -1 and 1 and is discontinuous at the interface. We now use the smoothing operation as a filter on ψ at each point i, as defined below, such that the sharp discontinuity gets diffused over a narrow region over which it smoothly varies from -1 to 1. ψ_i^(1) = ∑_j ∈ S_i w_ijψ_j/∑_j ∈ S_i w_ij, where w_ij is a weight used for the smoothing. This smoothing is necessary to maintain numerical stability. Also, the smoothed region is narrower than the diffused region that forms due to the dissipative errors, thus making it a sharper representation of the interface. Further, the volume fraction is revised as α = (1-ψ^(1))/2. This method for interface sharpening can be done at regular intervals as the flow evolves. However, doing it too often can lead to an erroneous evolution of the solution. In the results section, we assess the effect of frequency of interface sharpening on the solution for the Rayleigh-Taylor instability. §.§.§ Directional flux-based minimization Let us consider the convective term of Eq. <ref> ∇· (𝐮α) = ∇·𝐅. The flux vector 𝐅 = [ f g ]^T , where f = uα and g = vα. In contrast to finite-volume methods, the meshless method presented here does not deal with cells and their faces as a part of the discretization. Thus, to estimate a flux, we consider a fictitious interface between a point i and its neighbour point j, as shown in Fig. <ref>. The directional flux at the interface I (for each i–j pair) with unit normal 𝐧̂ is given by 𝐅_I ·𝐧̂ = [ f_I g_I ]^T ·𝐧̂ The unit normal 𝐧̂ is the unit vector pointing from point i to point j. Expanding the terms f_I and g_I w.r.t point i using Taylor Series, we get f_I = f_i + f_x,iΔ x + f_y,iΔ y + e_f, g_I = g_i + g_x,iΔ x + g_y,iΔ y + e_g, where f_x,i and f_y,i are the partial derivatives of f w.r.t x and y at the point i, respectively, g_x,i and g_y,i are the partial derivatives of g w.r.t x and y at the point i, respectively, and e_f and e_g are the higher order terms in the series expansion. Substituting these expressions in Eq. <ref>, we get F_I ·𝐧̂ = (f_i n_x + f_x,iΔ x n_x + f_y,iΔ y n_x) + (g_i n_y + g_x,iΔ x n_y + g_y,iΔ y n_y ) - e Here, e is the accumulated higher order term. 
Rearranging the above equation, e = f_i n_x + g_i n_y + f_x,iΔ x n_x + g_y,iΔ y n_y - (F_I ·𝐧̂ - f_y,iΔ y n_x - g_x,iΔ x n_y ) For a given point i (dropping the subscript henceforth, for tensorial terms), there will be N such equations corresponding to N neighbours such that 𝐄 = [ e_1 ⋯ e_j ⋯ e_N ]^T = 𝐌𝐚 - 𝐝, where 𝐌 = [ n_x,1 n_y,1 Δ x_1 n_x,1 Δ y_1 n_y,1; ⋮ ; n_x,j n_y,j Δ x_j n_x,j Δ y_j n_y,j; ⋮ ; n_x,N n_y,N Δ x_N n_x,N Δ y_N n_y,N ] , 𝐚 = [ f_i; g_i; f_x,i; g_y,i; ] , 𝐝 = [ d_1; ⋮; d_j; ⋮; d_N; ]. Here, 𝐝 is a N× 1 tensor, whose elements are d_j = (F_I ·𝐧̂ - f_y,iΔ y n_x - g_x,iΔ x n_y ), where all the quantities are evaluated between the point i and neighbour j. Standard interface solution reconstruction schemes developed for the finite-volume framework can be directly applied to the term F_I ·𝐧̂. For the terms f_y,i and g_x,i, the regular differential operators are used as shown below. f_y,i = ∑_j ∈ S_i C^y_ij f_j , g_x,i = ∑_j ∈ S_i C^x_ij g_j . We, now, minimize the below functional (J) w.r.t 𝐚 for a given point i: J = 𝐄^T 𝐄. The minimization leads to 𝐚 = (𝐌^T 𝐌)^-1 𝐌^T 𝐝. We use the third (f_x,i) and the fourth (g_y,i) elements of the solution vector 𝐚 in the solution to the advection equation Eq. <ref>. §.§ Generation of point clouds conforming to embedded geometries The present method starts with a non-conformal point cloud much like Cartesian meshes used in mesh-based immersed-boundary methods. However, in contrast to immersed-boundary methods, we introduce additional points on the surface of the embedded geometry which participate in the discretization of the governing equations directly. This is in contrast to the continuous forcing used in the immersed-boundary methods, where fields are interpolated at the immersed points. This provides a way to impose the boundary conditions exactly at the embedded surface. The points within the geometry are discarded for the case of stationary embedded geometry, reducing the memory footprint of the solver. The procedure to form a conformal point cloud is elaborated here. We begin with a set of points that do not conform to the geometry, as shown in Fig. <ref>(a). The geometry is identified by a set of marker points (that are of relatively higher resolution than the point cloud) and the surface normal at each of these marker points. A signed distance function is defined for the points in the point cloud ξ_i,EG = |X⃗_i - X⃗_m| sign((X⃗_i - X⃗_m)·n⃗_m) , where X⃗_i and X⃗_m are the positions of the point i of the point cloud and the marker point m, respectively, and n⃗_m is the unit outward surface normal at the marker point m. To populate points on the surface of the embedded geometry, a band of points called the insertion band, belonging to the point cloud, is considered such that the signed distance function in the insertion band lies in the range ξ_min < ξ_i,EG < ξ_max. Here, ξ_min is set to the order of the point cloud resolution near the embedded surface and ξ_max is set to a value that is larger than ξ_min by a factor (∼ 5, based on trial and error). In Fig. <ref>(b), the insertion band is represented by blue points that lie within the contour lines of ξ_min and ξ_max. For each point i in the insertion band, a corresponding surface point s is constructed as X⃗_s = X⃗_i - ξ_i,EG∇ξ_i,EG/|∇ξ_i,EG|. Here, ∇ξ_i,EG/|∇ξ_i,EG| is the normal to the embedded geometry that passes through the point i. 
This surface point, s, is appended to the point cloud if it lies at a distance of at least ξ_min from all the points in the insertion band as well as the previously appended surface points. This is essential to avoid blowing up of the differential operators when the two points of the point cloud are too close. The procedure outlined here provides a general framework for moving and deforming embedded geometries since the creation of the surface points relies on the signed distance function, ξ_EG, which evolves according to ∂ξ_EG/∂ t + 𝐮·∇ξ_EG = 0. In scenarios where only stationary bodies and rigid-body motion are involved, the Lagrangian markers can directly be used to populate the surface of the embedded geometries, without the need for the use of the insertion band. All points k of the non-conformal point cloud that satisfy ξ_k,EG < ξ_min are discarded in the case of stationary geometry or temporarily deactivated in case of moving/deforming geometries. Even though the test cases presented here involve only stationary rigid bodies, we use the insertion band approach for the sake of generalisation. § RESULTS In this section, we present test cases to demonstrate different features of the solver, as remarked in Table. <ref>. §.§ Heat equation on an irregular domain: a convergence study In this test case <cit.>, we assess the order of accuracy of the method in an irregularly shaped domain by comparing the numerical solution with the analytical solution of the heat equation ∂ T/∂ t = Δ T, with the Robin boundary condition ∇ T ·𝐧̂ + T = f. The analytical solution is given by T(x,y,t) = e^-2tcos x cos y. For the numerical simulation, the points in the domain are generated from a set of uniformly spaced points with the irregular boundary embedded inside the set of points. The embedded boundary is characterized in polar coordinates as r = 0.4cos(8θ) + π, θ∈ [0,2π] The initial condition for the simulation is obtained by setting t=0 in Eq. <ref>. The simulation is performed till t=1 unit. At t=1, the error in the numerical solution is quantified as E = 1/N∑_i=1^N e_i = 1/N∑_i=1^N |T^analytical_i - T^numerical_i | We define smoothing length (h) at a point i as the average distance of the points in the neighborhood from point i. The neighborhood is chosen to be the 20 nearest neighbors for all the test cases. Therefore, for refined point clouds, the smoothing length is smaller than that for coarser point clouds for the same number of neighbours. Fig. <ref> shows the plot of the error versus the smoothing length (h) for four different point clouds generated by embedding the irregular boundary inside uniform point clouds of varying resolution. The uniform point clouds consist of 100 × 100, 150 × 150, 200 × 200 and 250 × 250 points discretizing a square domain of 8 units length and are identified as PC_1, PC_2, PC_3 and PC_4, respectively. The error drops with decreasing smoothing length (i.e. for higher resolution of points) and the order of convergence lies between 1 and 2, as seen. Fig. <ref> shows the error contour plots for three different resolutions of point cloud, PC_2, PC_3 and PC_4. It is seen from the contour legend that the error decreases consistently with increase in resolution. §.§ Flow past a circular cylinder This test case verifies a single-phase flow with an embedded surface. We consider the flow past a circular cylinder at a Reynolds number, Re = 40. An illustration of the domain and the boundary conditions are shown in Fig. <ref>. 
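Before detailing the cylinder set-up, the convergence study above can be reproduced with a few small helpers; the marker resolution and the sampling of the error are illustrative choices.

import numpy as np

def analytic_T(x, y, t):
    # Analytical solution T = exp(-2t) cos(x) cos(y)
    return np.exp(-2.0 * t) * np.cos(x) * np.cos(y)

def boundary_markers(n=2000):
    # Marker points on the irregular embedded boundary r = 0.4 cos(8*theta) + pi
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = 0.4 * np.cos(8.0 * theta) + np.pi
    return r * np.cos(theta), r * np.sin(theta)

def mean_abs_error(T_num, x, y, t=1.0):
    # Error measure E = (1/N) * sum |T_analytical - T_numerical|, evaluated at t = 1
    return np.mean(np.abs(analytic_T(x, y, t) - T_num))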
The domain spans 30D in the stream-wise direction and 20D in the transverse direction, D being the cylinder diameter. The cylinder is placed at 10D from the inflow boundary. An initial set of points is obtained from a mesh with a high resolution at the location of the cylinder and its wake. Fig. <ref>(a) shows this initial point cloud with the cylinder as an embedded surface. Using the procedure outlined in Sec. <ref>, a conformal point cloud is generated. Fig. <ref>(b) and (c) show the entire conformal point cloud and a zoomed-in version of it close to the cylinder, respectively. The points that lie inside the cylinder and just outside the cylinder (ξ < ξ_min) are discarded and the surface points are appended to the rest of the points outside (ξ > ξ_min), resulting in a body-conforming point cloud. The condition of no-slip is enforced directly at the points that populate the surface of the cylinder. For Re = 40, there is no vortex shedding as is well known and the solution reaches a steady state. Fig. <ref>(a) shows the u-velocity contours of the flow. The wake behind the cylinder is symmetric about the horizontal plane passing through the centre of the cylinder. Fig. <ref>(b) shows the variation of coefficient of pressure (C_p) along the surface of the cylinder, paramatrized by the angle made with the negative x-axis (Θ). It is seen from Fig. <ref>(b) that the present model matches closely with the results of Choi et al.<cit.>. We reiterate that in the present method, the surface points are directly involved in the discretization and therefore, extracting surface quantities (C_p, for example) is quite straightforward, in contrast to traditional immersed-boundary methods where certain interpolation techniques would need to be employed owing to the non-conformal nature of the mesh. §.§ Rayleigh-Taylor Instability In this test case, we simulate the Rayleigh-Taylor instability in which a heavier fluid is present on top of a lighter fluid with gravity acting downwards. Since this configuration is inherently unstable, a small perturbation causes the heavier fluid to flow downwards, displacing the lighter fluid. With time, the interface assumes a mushroom-like appearance. The parameters for the simulation are taken with Liu et al. <cit.> as the reference. Jeong et al. <cit.> and Duan et al.<cit.> also present this test case for purposes of validation. The domain and the boundary conditions are shown in Fig. <ref> along with a small downward perturbation in the interface. We consider a density ratio, ρ_h/ρ_l=3, where ρ_h and ρ_l denote the densities of the heavier and lighter fluids respectively. The kinematic viscosity of both fluids is ν_h = ν_l = 0.01 m^2/s and the surface tension between the fluids is neglected. The same test case is also simulated using the Gerris flow solver <cit.>, which uses the finite volume method and the volume of fluid method for interface advection. For the Gerris simulation, we chose a cell resolution of 1/2^7 m. At t=0s, the interface is given a perturbation as shown below y ={ 1.0, when x<0.25 and x>0.75 1 - δsin[2π(x-0.25)], otherwise. . Here δ is assigned a value of 0.06. Fig. <ref>(a)-(c) show the evolution of the interface at different instants of time. At t=0.4s, the initial perturbation grows as seen in Fig. <ref>(a). At t=0.8s, the mushroom-like shape starts to develop as more of the heavier fluid flows downwards (Fig. <ref>(b)). At t=1.2s, the mushroom-like shape is prominent as seen in Fig. <ref>(c). 
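As a brief aside on the set-up just described, the perturbed initial interface and the corresponding volume fraction field can be written down directly; assigning alpha = 1 to the upper, heavier fluid is an assumption of this sketch.

import numpy as np

def rayleigh_taylor_alpha(x, y, delta=0.06):
    # Interface height: y = 1 for x < 0.25 or x > 0.75, else y = 1 - delta*sin(2*pi*(x - 0.25))
    y_int = np.where((x >= 0.25) & (x <= 0.75),
                     1.0 - delta * np.sin(2.0 * np.pi * (x - 0.25)),
                     1.0)
    return np.where(y > y_int, 1.0, 0.0)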
Both our result and that of <cit.> compare well with the Gerris simulation until t=0.4s. Owing to greater resolution, the Gerris simulation shows features with greater curvature compared to either of the meshless results. At t=1.2s, both the meshless methods predict the mushroom's length reasonably well. However, the volume of the heavier fluid is visibly under-predicted by <cit.>. Now, we present the effect of interface sharpening introduced in Sec. <ref> at different frequencies. A measure to evaluate the change in volume fraction (Δ), due to advection, over the entire domain is defined as Δ = 1/N∑_i^N |α_i - α_i^(-)|. Here, the summation is applied over all the points of the domain, α_i denotes the volume fraction at point i and α_i^(-) denotes the volume fraction at the last instance of interface sharpening, at the same point. Clearly, Δ takes a value in the range [0,1.0]. Figs. <ref>(a)-(c) show the interface at the same instance for three different frequencies of interface sharpening that correspond to Δ = 1.0, 0.1 and 0.05 respectively. Higher values of Δ would imply the sharpening is performed less frequently. It can be seen that the interface becomes quite diffuse when the sharpening is not performed, as seen in Fig. <ref>(a). The sharpness of the interface improves when the frequency of sharpening is increased by changing Δ from 0.1 to 0.05, as seen in Fig. <ref>(b) and (c). It is important to note that reducing Δ to very low values may lead to a situation where the sharpening algorithm interferes with the natural evolution of the interface according to the advection equation. For this test case and the other two-phase simulations presented in this paper, Δ = 0.05 has been used. §.§ Two-phase Dam Break We consider a 2D dam break problem, as illustrated in Fig. <ref>. The water column (denoted by the dark blue shade) collapses under gravity and we use the proposed method to capture the interface movement. The test case considers water and air as the two phases. The pressure Poisson equation (Eq. <ref>) when used for large density ratios (ρ_water/ρ_air≈ 1000), results in numerical instabilities. The gradient of density ∇ρ is high at the interface when the density ratio is high. We substitute for ρ as <cit.>. ρ = e^γ. Consequently, ∇ρ = e^γ∇γ = ρ∇γ . The pressure Poisson equation, now, becomes ∇^2 p^n+1 = ρ/Δ t∇·u⃗^* + ∇ p^n+1·∇γ The above expression is used instead of Eq. <ref> and is relatively more stable at high density ratios. Fig. <ref> shows the evolution of the water-air interface. The black dots are interface positions extracted from the previous work that present the same test case <cit.>. We note a close comparison at t=0.1s and t=0.2s. At t=0.3s and t=0.4s, the interface position at the right wall, predicted by the present model is slightly lower than that from the simulations of Ubbink <cit.>. A possible cause is the dissipative error in the numerical advection of the volume fraction (given by Eq. <ref>), in which we use a first-order upwind method for reconstruction of fluxes at the fictitious interfaces. Higher order reconstructions are not explored as a part of this work and will be taken up in the future work. §.§ Filling of a mould with core In this test case, we consider a flow problem that the present method is targeted to solve i.e. two-phase flows with embedded geometries. We consider a mould with a circular core as the flow domain, as shown in Fig. <ref>(a). The dimensions of the geometry are shown in Fig. <ref>(b) <cit.>. 
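The sharpening machinery used in the frequency study above can be summarised in a short sketch that combines the filter of the interface-sharpening method with the Delta-based trigger; the neighbour lists and weights are assumed to come from the point-cloud data structure.

import numpy as np

def sharpen_interface(alpha, neighbours, weights):
    # One application of the sharpening filter: psi = sign(1 - 2*alpha), smooth, revise alpha
    psi = np.where(alpha <= 0.5, 1.0, -1.0)
    psi1 = np.array([np.dot(w, psi[nb]) / np.sum(w)      # smoothed psi^(1)
                     for nb, w in zip(neighbours, weights)])
    return 0.5 * (1.0 - psi1)                             # alpha = (1 - psi^(1)) / 2

def needs_sharpening(alpha, alpha_last, threshold=0.05):
    # Delta = (1/N) * sum |alpha_i - alpha_i^(-)|, compared against the chosen threshold
    return np.mean(np.abs(alpha - alpha_last)) >= threshold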
Starting from a uniform set of points, the conformal point cloud is generated using the method described in Sec. <ref>. The ratio of the density of liquid to that of the gas (ρ_l/ρ_g) of 100 and a Reynolds number of 100 (based on the inflow liquid velocity and unit length), are considered. A time-step Δ t = 0.0001s is used and the distance between two adjacent points of the uniform point cloud is δ x = 0.01m. δ x is also used as the resolution for the population of points on the embedded surfaces. At the mould and core walls, we apply a boundary condition that depends on the volume fraction. When the volume fraction is zero, denoting gas phase, the walls act as outflow such that the gas flows out without any resistance. On the other hand, when the volume fraction is unity, denoting liquid, the fluid encounters the slip-wall condition. Fig. <ref>(a)-(f) show the filling of the mould cavity at different instants of time. The left column of the figure shows the two phases, with red denoting the liquid (the heavy fluid) and blue denoting the gas (the light fluid). The middle and the right columns show the u and v velocity contours. The liquid enters the mould cavity from the bottom (Fig. <ref>(a) and (b)). As this happens, the gas is forced out of the mould cavity as seen in the velocity contours. When the liquid jet impinges on the core, it splits the jet into two as shown in Fig. <ref>(c) and (d). The jets, subsequently, reach the mould wall as seen Fig. <ref>(e). The liquid flows along the wall as it continues to fill the cavity, as seen in Fig. <ref>(f). This test case, motivated by its application to casting industry, demonstrates the feasibility of our method for two phase flows with embedded boundaries. § CONCLUSIONS This paper presents an Eulerian meshless solver for two-phase flows with embedded geometries. The advantages of both the Eulerian framework and the meshless framework are captured in this solver. Owing to the Eulerian framework, neighbourhood search and the differential operator calculations are not required to be performed at every timestep, as is the case with Lagrangian methods. The meshless aspect of the model retains the advantage that a prior mesh is not necessary for the computation. Additionally, the meshless framework would be advantageous when point cloud adaptations are performed, as planned in future extensions of the method. The limitation of using an Eulerian approach is that the interface separating the phases needs to be tracked using a field variable, requiring explicit mass conservation. In contrast, in the Lagrangian approaches the interface evolves naturally as the points move ensuring better mass conservation. The two-phase model uses a volume fraction to track the phase and the interface movement is captured through the advection of the volume fraction. The solution of the advection equation in the meshfree framework requires the use of a direction flux-based minimization procedure, elaborated in this paper. An interface sharpening approach is proposed as an auxiliary method to retain the sharpness of the interface. A method to generate point clouds that are conformal to arbitrary geometries starting with a non-conformal point cloud, is also proposed. The test cases validate the model for both two-phase flows and flows with embedded geometries. The final test case which involves the filling of a mould with a core, illustrates the use of the model for flows with two phases as well as embedded geometries. 
Future directions would include enhancing the accuracy of the method, incorporating surface tension forces in the model, and adapting the point cloud in the vicinity of the interface and of embedded geometries. Two-phase flow through porous cavities and the filling of dies in casting manufacturing are potential areas of application of the present model. § ACKNOWLEDGEMENTS Anand S Bharadwaj would like to acknowledge the Science and Engineering Research Board (SERB) for funding through the National Post Doctoral Fellowship Scheme. Prapanch Nair would like to acknowledge the SERB for funding through the Startup Research Grant SRG/2022/000436.
http://arxiv.org/abs/2406.19223v1
20240627144908
T-FREE: Tokenizer-Free Generative LLMs via Sparse Representations for Memory-Efficient Embeddings
[ "Björn Deiseroth", "Manuel Brack", "Patrick Schramowski", "Kristian Kersting", "Samuel Weinbach" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
§ ABSTRACT Tokenizers are crucial for encoding information in Large Language Models, but their development has recently stagnated, and they contain inherent weaknesses. Major limitations include computational overhead, ineffective vocabulary use, and unnecessarily large embedding and head layers. Additionally, their performance is biased towards a reference corpus, leading to reduced effectiveness for underrepresented languages. To remedy these issues, we propose T-FREE, which directly embeds words through sparse activation patterns over character triplets and does not require a reference corpus. T-FREE inherently exploits morphological similarities and allows for strong compression of embedding layers. In our exhaustive experimental evaluation, we achieve competitive downstream performance with a parameter reduction of more than 85% on these layers. Further, T-FREE shows significant improvements in cross-lingual transfer learning. § FROM TEXT REPRESENTATIONS FOR MACHINE LEARNING Large language models (LLMs) have shown remarkable abilities in processing natural language and various data types. The tokenizer, an essential part of any language-based LLM, splits input text into subwords and converts textual data into integer representations. It is built by populating a fixed-size vocabulary based on statistical frequencies in a reference corpus <cit.>. With the LLM's trained embedding layers, these integers are converted into floating-point representations <cit.>. These components significantly shape the training objectives and influence what an LLM can process, interpret, and generate. Despite advances, the basic principles of tokenization and embeddings have remained largely unchanged in recent years. Our code is available at <https://github.com/Aleph-Alpha/trigrams>. Although this approach has served the LLM community well, and influential figures aim to tokenize all kinds of data to “lead a new industrial revolution”, it has significant inherent weaknesses. For one, tokenizers require dedicated training and, as such, additional computational resources. Design choices and errors at this stage can negatively impact the downstream model <cit.>. Any tokenizer's vocabulary is heavily optimized for the reference corpus, leading to strong drops in performance for, e.g., underrepresented languages. We also show that the resulting vocabulary of tokenizers is poorly utilized, where up to 34% of tokens are near duplicates with limited additional information. Despite that, the corresponding embeddings are trained independently. These issues have caused a significant expansion in the size of vocabularies and their corresponding embedding layers, with billions of parameters being allocated exclusively for text encoding and decoding. To remedy these issues and challenge the traditional views, we propose a paradigm shift in how we embed and decode text for LLMs. We present tokenizer-free sparse representations for memory-efficient embeddings (T-FREE) as an alternative to tokenizers. We directly embed each word in the input text with sparse activation patterns over hashed character triplets. Consequently, we eliminate the need for subword tokens, thus retaining near-optimal performance across languages.
Additionally,  explicitly models character overlaps between morphologically similar words without the need to learn an embedding for each variant from scratch. We argue that the converged embedding of such similar words should remain close and, thus, can be heavily compressed[The English language contains about 500k words, while “native fluency” is achieved at 10k words <cit.>.]. This exploitation of similarities allows  to reduce the size of the embedding layers by 87.5%[Compared to our 64k unigram baseline.] and the average encoding length of text by 56%[Compared to Mistral 32k avg. of EN, DE, RU, VI, AR.]. In addition to the inherent benefits of , the approach remains highly competitive on standard downstream model performance benchmarks. Finally, for transfer learning to an unseen language, the model quickly improves performance, while the tokenizer baseline shows only minor adaptation. Our contributions can be summarized as follows: * We systematically demonstrate the inherent weaknesses of common tokenization and embedding approaches. * We propose , a powerful and efficient alternative for tokenizer-free LLMs. * We exhaustively evaluate hyperparameters of  on established benchmarks by training 1B LLMs from scratch. Our comparison against equally trained models with classic tokenization demonstrates competitive performance despite the significant reduction in compute resources and parameters. * We demonstrate the capabilities of  for cross-lingual transfer on continual pre-training on a 3B LLM. § CLASSIC TOKENIZATION PRINCIPLES Before we derive  in detail, let us first establish some basics of how LLMs traditionally encode and decode text. Most LLM operations are performed in floating-point numbers through a series of matrix multiplications and non-linear activation functions. Consequently, we require techniques that map discrete textual inputs into floating-point representations and inversely transform the predictions of the model back to text. Traditionally, the first step in this process is to split any textual input into small chunks referred to as tokens. Generally, these tokens can take arbitrary formats, spanning numerous characters, a single or even multiple words, and may also contain special characters. The latter can be particularly useful to encode programming languages. A tokenizer comprises the steps and rules that are necessary to dissect a text into a sequence of tokens. Importantly, the total number of tokens is restricted, and we refer to the set of Each token in the vocabulary is assigned an integer token-id, wherefore tokenizers produce a sequence of token-ids for any textual input. Next, a large matrix of dimensions vocab size × hidden size, an LLM's embedding layer, maps each token-id to an internal representation as a floating point vector (cf. Fig. <ref>). To produce new text, generative models are trained auto-regressively. That is, they iteratively predict the next token, which is appended to the input text. Therefore, the training objective is formulated as a classification problem: a one-label prediction of the next token over the entire vocabulary. Consequently, the last layer of the model—the LM head—is a projection into the size of the vocabulary and thus also of dimension vocab size × hidden size. For decoding, we can, for example, always select the token with the highest assigned value, which is called greedy sampling. The output text is produced by looking up the corresponding text snippet of each predicted token-id in the vocabulary. 
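To make the classic pipeline concrete, a minimal sketch of the embedding lookup and greedy decoding described above follows; the shapes and names are illustrative and the transformer body is treated as a black box.

import numpy as np

def embed(token_ids, E):
    # E has shape (vocab_size, hidden_size); returns (sequence_length, hidden_size)
    return E[token_ids]

def greedy_next_token(hidden_last, H):
    # LM head H has shape (vocab_size, hidden_size); one score per vocabulary entry
    logits = H @ hidden_last
    return int(np.argmax(logits))   # token-id, mapped back to text via the vocabulary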
Generally, it is desirable to encode any text in as few tokens as possible to reduce computational cost. At the same time, different semantic concepts should be separated into distinct tokens to ensure good language comprehension. The combination of both objectives is usually best satisfied by encoding each word as one token. §.§ Tokenizer Algorithms The vast majority of LLMs utilize a tokenizer built with one of two approaches. Both progressively build up tokenization rules and their vocabulary based on statistics in a reference corpus. Byte Pair Encoding (BPE). BPE <cit.> starts with a set of all characters as individual tokens. Progressively, the most frequent token pairs occurring together in the training documents are merged. The resulting new token and the merging rule are added, and the training is completed In order to encode text with the trained tokenizer, BPE splits the input into individual characters and applies the lowest-ranking merge rule until no more are applicable. This exhaustive search can become computationally intensive, especially for long input sequences and large vocabularies. Unigram. Unigram <cit.> operates inversely to BPE. First, it splits the training corpus into a large set of reference words and their respective frequencies. The vocabulary is initially populated with all possible substrings of these words. At each iteration, Unigram computes a loss of the current vocabulary with respect to the training corpus for all possible tokenizations. The least influential tokens are then removed until the desired vocabulary size is reached. For text encoding, the Viterbi algorithm is applied to determine the most preferred segmentation of a given word based on the ranked available tokens. The text decoding in both cases maps directly back into the vocabulary list and the respective sub-words. To ensure that every word can be represented, a “byte-fallback” into unicode is often used for characters not present in the vocabulary. §.§ Facing the Flaws Common to both methods is a set of distinct flaws. Large Vocabularies F1) Words that do not appear in the vocabulary are split into multiple tokens and, as such, require more compute during model inference and training. To avoid out-of-vocabulary words and to achieve the best downstream representations on a diverse set of languages and tasks, researchers tend to use ever larger vocabularies. Although some models still rely on a 32k vocabulary <cit.>, more recent releases go up to 128k <cit.> or even beyond 250k <cit.>. Large vocabularies, in turn, require large embedding and head layers. For example, Command-R <cit.> with a hidden dimension of 12,288 and a vocabulary of 256,000 tokens uses 6.3B parameters only for the embedding and head layer. Naturally, a large number of parameters complicate model training and may require advanced sharding techniques such as “model parallelism”. Even the tokenization itself can become computationally intense for large documents and vocabularies. Naturally, embedding matrices of this scale are generally not an option for smaller “on-the-edge” models. Nevertheless, they still occupy a large portion of parameters in smaller models, e.g. 40% for Gemma-2B <cit.>. Duplicate Tokens F2) Furthermore, the allocated vocabulary is expected to be poorly utilized due to the statistically likely occurrence of near-duplicate tokens. Most prominently, a significant portion of tokens appears multiple times, only differing in capitalization or the existence of a leading whitespace (cf. Sec <ref>). 
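The extent of such near duplicates can be estimated with a simple scan over a vocabulary; the whitespace markers checked here ('Ġ' for byte-level BPE, '▁' for SentencePiece) are common conventions, and the exact counting rules behind the numbers reported later may differ.

def near_duplicate_fraction(vocab):
    # Fraction of tokens that only differ from an earlier token by case
    # and/or a leading-whitespace marker
    seen, duplicates = set(), 0
    for token in vocab:
        key = token.lstrip("Ġ▁ ").lower()
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return duplicates / max(len(vocab), 1)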
For example, to spell all 64 substrings and variations of the word “_words”[_ represents a whitespace.], we require a total of 37 unique tokens (cf. App. Tab. <ref>). Since the corresponding embeddings of all tokens are independent and randomly initialized, the representation of each duplicate token needs to be learned from scratch without exploiting morphological synergies. Further, large embedding layers are purely utilized since some tokens will rarely occur. The corresponding embedding weights of these tokens are thus seldom active while still requiring compute. Training data overfitting F3) As discussed above, these tokenizers require dedicated training before the actual model training. In addition to the added computational overhead, the data selection and potential mistakes during tokenizer training have significant impact on the subsequent LLM <cit.>. For natural language, for example, this paradigm may result in a vocabulary tailored to one language (usually English) and consequently drops in performance for others, especially those not explicitly included. The resulting LLM may still be somewhat adapted to other languages since many similar low-level structures <cit.>. However, its overall training and inference performance will not be as efficient as we demonstrate. In contrast,  addresses all of these disadvantages. It is computationally efficient and performs good tokenization across languages without duplicates. It drastically reduces the parameters required for text encoding, exploiting word spelling similarities. Importantly, none of these improvements sacrifices downstream model performance. § A key motivation for  is the intuition that minor differences in spelling, like leading whitespaces or capitalization, do not hold enough entropy to justify entirely independent tokens.  directly encodes morphological similarities by representing each word as a multi-label encoding of its character triplets. This designed overlap between words allows us to significantly reduce the size of embedding layers. We now derive 's approach to text encoding and decoding and discuss implications on LLMs in general. We provide a visualization of each step in Fig. <ref> and pseudo-code in App. <ref>. §.§ Text Encoding Step 1: Word splitting. First, we rigorously split the text by digits and non-alphanumeric characters. The resulting splits, therefore, contain entire words, digits, or special characters. We consider each digit separately, as it is standard in SOTA LLMs (cf. Tab. <ref>). Specifically, we include the 10 digits 0 to 9, and otherwise, we rely on attention to comprehend larger numbers or mixtures with characters. By definition, we represent each word with a prefixed and suffixed whitespace. In particular, we assume that an entire word is encoded into a single embedding, and analogously, we predict an entire word at once. Consequently, we no longer need to explicitly model whitespace as a character and eliminate near-duplicate tokens. Nonetheless, we add a dedicated “whitespace” and “non-whitespace” token to the tokenizer. These special tokens allow us to model cases where substrings should (not) be concatenated with whitespace, e.g., single digits of larger numbers. To reduce their need, we further add a rule-set that favors whitespace in front or after certain characters. Generally, we prefer to add no whitespace after a digit embedding and similarly no whitespace before punctuation. A detailed description of the rule set is found in App. <ref>. Considering the example in Fig. 
<ref>, the input text “Hello_word!” would be tokenized as [`Hello',`word',`!']. Step 2: Encoding. Next, we define a robust hash function that uniformly encodes a token into n descriptors, where n usually equals the word-length[Only exceptions are unicode fallbacks.]. Specifically, we apply convolutions of size three and byte-wise stride to each word. This operation yields a set of character triplets, which we refer to as “trigrams”. Consequently, “Hello” is decomposed into {_He,Hel,ell,llo,lo_}. Trigrams usually contain enough information about the relationship between letters to reassemble the word from the unordered set. Subsequently, we project each trigram descriptor into a sparse hidden representation vector of m “active entries” on the embedding layer. Specifically,  calculates m numerical hashes of each trigram, which can be considered as identifiers. We map these into the LLMs embedding matrix by calculating each hash value v to identify the active indices. The selection of vocab size v is further explained in Step 3. Overall, we obtain n· m total activations for any single word. To further exploit word similarities and bootstrap training, we calculate k∈ [0,m) out of these hash calculations with the lowercased trigram. This mapping from trigram to hidden representation is static and can be precomputed[Note that there are only 256^3≈ 16.7M trigrams.]. Step 3: Aggregation. Similar to classic embedding approaches (cf. Fig. <ref>)  also utilizes an embedding matrix of dimension v with hidden size h. However, we do not have a fixed vocabulary, whose size dictates v. Instead, we can independently choose v as a hyperparamter with words and trigrams sharing individual entries to better encode similarities. Lastly, we sum all n· m embedding entries to produce the final one embedding corresponding to a word, such as “Hello”. §.§ Training Objective & Text Decoding As 's representation of a word is now a multitude of activations, we reflect this change in the LM head, as well (cf. Decode sections in Fig. <ref>, App. Alg. <ref>,<ref>). In particular, we change the target loss function from classic single-label binary cross-entropy (BCE) to a multi-label (ML) BCE over all n· m activations of the next word targets: ℒ^ML_BCE = - ∑_j=1^v [y_jlog(ŷ_j) + (1-y_j)log(1-ŷ_j)], for ŷ being the LM's prediction and y the binary target vocab labels with ∑_j=1^v y_j = n· m. Analogously, for decoding the next token with , we first assemble a dictionary of all possible next words and pre-compute their activation patterns. Importantly, only n· m out of v entries will be non-zero for each word, and since we choose m << v, the dictionary matrix can be encoded as a sparse matrix, thus improving performance. We multiply this dictionary matrix with the predicted logits values of the LLM to finally obtain the argmax prediction of the next token. §.§ Bridging the Gaps Notably, this paradigm shift to a multi-class vocabulary allows for more semantically robust decoding. With the classical approach, the distinctly noisy learning process can lead to unrelated concepts appearing among the top predictions (cf. `House' and `Car' in Fig. <ref>). This effect can have a significant impact on next token sampling and potentially devastative outcomes for model modifications such as compression <cit.>. In contrast, the trigrammification and resulting embedding overlap of similar words with   inherently favors similar words during decoding (cf. `ouse' in Fig. <ref>). 
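The trigram hashing and aggregation described above can be sketched compactly; the concrete hash function (MD5 salted with the hash index) is an assumption of this sketch, since only the use of m numerical hashes per trigram is specified.

import hashlib
import numpy as np

def trigrams(word):
    # Character triplets with pre-/suffixed whitespace, e.g. 'Hello' -> _He, Hel, ell, llo, lo_
    padded = f"_{word}_"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def activations(word, v=8000, m=10, k=0):
    # Indices of the n*m active embedding rows; k of the m hashes use the lowercased trigram
    idx = set()
    for tri in trigrams(word):
        for a in range(m):
            t = tri.lower() if a < k else tri
            h = int(hashlib.md5(f"{a}|{t}".encode()).hexdigest(), 16)
            idx.add(h % v)
    return sorted(idx)

def word_embedding(word, E, m=10, k=0):
    # Sum the selected rows of the embedding matrix E of shape (v, h)
    return E[activations(word, v=E.shape[0], m=m, k=k)].sum(axis=0)

Decoding reuses the same patterns: the binary activation vectors of all candidate words form a sparse dictionary matrix, which is multiplied with the predicted logits, and the argmax row yields the next word.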
Moreover, activations in the embedding and LM head are more uniformly distributed, leading to better parameter utilization, and more stable model behavior. Lastly, our design of a robust hash function on words adresses the afore mentioned flaws (Sec. <ref>) as the results of the next section demonstrate. § EMPIRICAL EVALUATIONS We continue with an empirical demonstration of the performance of , and how it remedies the flaws of standard tokenizers as outlined in Sec. <ref>. To thoroughly analyze the performance differences, we designed three consecutive experiments: First, we perform hyperparameter ablations on a series of 1B parameter models, which achieve competitive scores on standard benchmarks with a reduced vocabulary, which in turn addresses F1. Second, we analyze the duplicates in the tokenizers of recent LLMs with respect to F2. Notably, is by design free of duplicates. Lastly, we look at F3 and evaluate the performance of various tokenizers across languages. Further, we trained 3B parameter models on English and continued training on German data to practically investigate language adaptability. has better tokenization performance across languages and outperforms classic tokenizers on language transfer. §.§ Experimental Details First, let us clarify some details about our experimental setup. We provide more details for each section in the Appendix. Data and Code. We use the slimpajama dataset <cit.> as our English and Occiglot Fineweb v0.5 <cit.> as our German data corpus. Both datasets contain a diverse range of content and have been extensively filtered and deduplicated. As a baseline, we trained BPE and Unigram tokenizers of sizes 32k and 64k on a random 20GB slimpajama sample using Sentencepiece[<https://github.com/google/sentencepiece>]. More details are described in App. <ref>. To ensure fair comparisons, we trained 1B and 3B models from scratch for the baselines and using our adjusted code base[<https://github.com/Aleph-Alpha/trigrams>]. LLM Pre-Training. All models are transformer, decoder-only architectures similar to Llama-2. We solely change the tokenizer, embedding layer and LM head. Consequently, ablations with smaller sizes of v result in a lower overall parameter count, heavily skewing the comparison in favor of the baseline. For hyper-parameter ablations of , we train 1B models for 50k steps with 2k sequence length and 1k total batch size. We then scale up the baseline and models to 3B parameters and train for 110k steps on slimpajama with 4k sequence length. For the multilingual learning experiment, we continue training this English 3B model at a lower learning rate for another 20k steps on German Occiglot data with a 20% replay of English. Evaluation. We evaluate tokenizer performance in isolation using fertility measurements similar to <cit.>. Fertility benchmarks the number of tokens required per word with 1.0 thus being the optimal value. Specifically, we compare different tokenizers across 5 diverse languages on the respective data from Wikipedia. Downstream benchmark comparisons are performed on 18 standardized benchmarks[<https://github.com/EleutherAI/lm-evaluation-harness>] in English that measure a wide variety of LLM capabilities, including general language modeling, question answering, and common sense reasoning. To evaluate german and english in comparison we use german translations of the Hellaswag, Truthfulqa and Arc-Challenge benchmarks[<https://github.com/bjoernpl/GermanBenchmark>]. 
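The fertility measurements used here reduce to a simple ratio of produced tokens to reference words; the callable interfaces in the sketch below are illustrative.

def fertility(tokenize, reference_words, documents):
    # Fertility = tokens produced per reference word; 1.0 is the optimal value
    n_tokens = sum(len(tokenize(doc)) for doc in documents)
    n_words = sum(len(reference_words(doc)) for doc in documents)
    return n_tokens / n_words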
We built 's prediction dictrionary, from the top 80k words that occurring in slimpajama, and additional top 20k words from the German Occiglot data. §.§ performs at 8k vocab size We present the results of our hyperparameter ablation study of for 1B models in Fig. <ref>. All scores are reported as differences to the Unigram 64k baseline and for fixed parameters m=10 and k=0. Generally,  remains highly competitive with the baseline as all versions outperform the Unigram model on some of the benchmarks. Further, we achieve the best results for a vocab size v of 8k at which  outperforms the baseline on average. In contrast, a vocab size of ≤ 2k seems insufficient with devastating outliers. We performed further ablations on parameters m and k, which are outlined in App. <ref>. These results demonstrate that successfully addresses the flaw of large vocabularies and embedding layers (cf. F1 in Sec. <ref>). We are able to achieve competitive performance with only 12.5%[8k instead of 64k.] of the embedding parameters using  instead of Unigram. Note, that we do not adjust any other model parameters when reducing vocab size. As such, the benchmark results compare a Unigram model with 1.07B parameter against a model with 0.84B parameters (for v=8k). Consequently, we demonstrate that an LLM using  instead of Unigram performs better, despite having over 20% fewer parameters. §.§ has by design no duplicates Let us now look into (near) duplicate tokens in commonly used tokenizers (cf. F2 in Sec. <ref>). In general, there are three types of overlaps in vocabularies: 1) The same token with and without capitalization, 2) with and without leading whitespace, and 3) dedicated tokens for multiple digits. In Tab. <ref>, we report the percentage of duplicate tokens for our baseline tokenizers and commonly used models. Overall, between 15% and 35% of the available vocabulary is spent on (near) duplicate information with limited differences in entropy. Generally, tokenizers contain the most duplicates for capitalization, slightly fewer for whitespaces, and only a few duplicate digits. The relative amount of overlap tends to decrease with larger vocabularies, although Gemma marks an inglorious exception. In contrast,  is inherently designed to be free of duplicates. We can even adjust the parameter k to explicitly model the overlap of words to their lowercase representations. Consequently, all variants are inherently well represented in the emedding layer. §.§ has less fertility across, Finally, we investigate the versatility of tokenizers beyond their (main) language (cf. F3 in Sec. <ref>). We report the fertility of our baselines and other popular models in English, German, and three dissimilar languages that also contain significant character-level differences in Tab. <ref>. Common to all tokenizers is a significantly decreasing performance for non-English languages, especially for Russian and Vietnamese. Naturally, larger vocabulary sizes tend to have better multilingual coverage , in particular to language groups close to English, but still suffer from significant performance drops. In comparison, the tokenization of , which is mainly based on whitespace splitting, provides comparably good performance across all 5 languages[More detailed evaluations are found in App. <ref>.]. The increases in fertility for Russian or Vietnamese remain small and there is no performance difference for German or Arabic. 
Note that these synergies were explicitly modeled, and no reference corpus is needed to train and bias the fertility of . Consequently,  allows for easier and more efficient model adaptation to low-resource languages. We now explicitly show the devastating consequences of biased tokenizers on the language transfer capabilities of LLMs. As discussed above, we first train 3B models for  and Unigram on English, and then transition to German. Through more ablations, we fixed the activations to m=7 and the lowercase trigram overlap to k=3. Fig. <ref> shows the performance average on the English and German versions of the standard benchmarks. The baseline performance in German is already improved with , indicating that syntactic and semantic similarities between the languages are better captured in the learned representations. Additionally, almost achieves the English-level performance on German after 20k training steps. In contrast, the classical tokenizer variant improves only marginally with the same amount of training. We, again, do not adjust any other model parameters when reducing the vocab size. As such,  uses 10% fewer parameters than the baseline (2.77B instead of 3.11B) and still strongly outperforms the Unigram variant. More detailed evaluations are found in App. <ref>. § DISCUSSION Prior research has demonstrated that the mapping into a sparse hidden representation and the training of a dense aggregation layer as applied in , is a universal function approximator <cit.>. These results provide further theoretical motivation for our approach. allows for significant compression of an LLMs' vocabulary by more than 85% without performance degradation. Notably, the affected embedding and head layers are by far the largest in LLMs in terms of parameter count. They are also the most influential to an LLM, as they dictate the mapping between text and numerical representations. For one, these massive improvements allow for better utilization of billions of parameters in large models. The compression of  in particular paves the way to building better low-resource models, by reducing model size and training cost and improving adaptability. For example, in our experiments without pipe or model-parallelism, we were able to triple the micro-batch size, yielding faster training iterations. Furthermore, we observed more stable loss curves for , in particular for higher learning rates. These improvements may be attributed to the explicit modeling of similar words, the removal of duplicates, and the less volatile multi-label training target. Further, the uniform hashing distributes gradients evenly amongst the available vocab size, in contrast to classical approaches. We provide further details in App. <ref>,<ref>. The rules we use for obtaining word representations are universal and well-defined at pre-training time. They do not change over time, particularly neither when adding languages later on. also lowers computational costs due to its low fertility and easy-to-process whitespace splitting. Consequently, pre-processing, training and inference of an LLM all require less compute. Lastly, allows to explicitly model and steer the decoding process at inference time, by altering the available dictionary. Consequently, hallucinations will likely be reduced due to fewer “generic fall-back” word splits. Moreover, one can dynamically add or remove words. It is worth pointing out that 's compression benefits can also be combined with traditional tokenizers. 
Instead of the simple whitespace splitting one could keep traditional tokenization and trigramify “classic tokens”. § RELATED WORK Few alternatives to BPE and Unigram have been found in recent LLMs and research. The naive approach of splitting the input text into bytes or characters maximizes fertility and thus increases computational requirements. Consequently, prior research has proposed methods for merging bytes, e.g., through state-space models <cit.>. However, these approaches still result in performance degradation. Finally, linguistically motivated approaches have built tokenizers based on known morphological rules <cit.>. However, these methods are usually tailored to specific applications and are usually too costly and error-prone for large, general-purpose models. Other works on weight tying, have halved the parameters of embedding and head layers by using the same matrix in both <cit.>. Currently, LLMs do not apply weight tying, though, due to its negative impact on performance and the available compute. § CONCLUSION In this work we present , an alternative to tokenizers with a simple and explicitly modeled robust hash function on words. It removes the need and pitfuls to limit “a models potential” to a “pre-pre-trained” tokenizer. We, moreover, fundamentally shift the established target of training language models, previously designed as a single-label problem, into a multi-label prediction based on word similarities. Similarities in particular include leading whitespaces and uppercase variations, for which tokenizers add specific tokens that are independently trained from scratch. These contributions allow us to train language models more robust, more adaptable when continuing pre-training with a new language, and with a significantly (to 12.5%) reduced parameter size without a decrease in benchmark scores. Due to the special role of the matrices, the latter in particular allows one to increase micro-batchsize, which further accelerates training time. Finally, the consequent convolution-like encoding achieves SOTA fertility scores across most languages and enables by design synergies to similar language groups. We demonstrated the latter showing that our 3B almost achieved “native-language” performance after a small amount of language-transfer training steps, in contrast to the tokenizer baseline. § LIMITATIONS With we propose a fundamentally different approach to text encoding and decoding in LLMs. Due to the intense resources required to train LLMs, we have focused on evaluating models up to 3B parameters. Evaluations on even larger models and training datasets remain a relevant point of investigation for future work. Nonetheless, we observed an easy transfer from 1B to 3B parameters, and we will continue to train and release more advanced models. We expect to experience some numerical instabilities for very long words since single-word embeddings are calculated as the sum of their n · m activations. However, less than 2% of the entire slimpajama dataset contains words with more than 10 characters (cf. App. <ref>), and we did not encounter any issues with the benchmarks. Consequently, such potential instabilities remain statistically insignificant. Nonetheless, we could adequately tackle long outliers with an additional split rule based on the words length. Similarly, we did not thoroughly study the effect of repetitive trigrams in words. These did also not occur frequently enough to have any measurable effect on our experiments. 
As of now, we only accumulate a word pattern in a binary fashion, not accounting for trigrams appearing multiple times in a single word. As a fallback, one could again, split words at the position of repetitions. Although 's fertility on code is on par with that of LLama2 (cf. App. <ref>), it could be further improved by explicitly modeling code patterns. In this work, we have focused on natural language and leave detailed evaluations of in downstream coding tasks for future research. Furthermore, we did not investigate languages entirely relying on Unicode byte-encodings, such as Chinese. Finally, we only studied a single constructed hash function for . As this work paves the way to model required language features more explicitly, we are looking forward to variations of the proposed  method. § ACKNOWLEDGMENTS We gratefully acknowledge support by the German Center for Artificial Intelligence (DFKI) project “SAINT”, the Hessian Ministry of Higher Education, the Research and the Arts (HMWK) cluster projects “The Adaptive Mind” and “The Third Wave of AI”, and the ICT-48 Network of AI Research Excellence Center “TAILOR” (EU Horizon 2020, GA No 952215). § APPENDIX § ALGORITHM Alg. <ref>,<ref>,<ref>,<ref>,<ref> show the core steps to encode text into embeddings, and decode text from model predictions with . Here, regex.split denotes an algorithm that splits text based on a regular expression, hash denotes an arbitrary hash function like md5, % denotes the mathematical operation. In style of python, f'{token}_' denotes text formatting to indicate the string with content of variable token being followed by an underscore, and EL[i] denotes the i-th entry of matrix EL and 'string'[i:i+3] three consecutive characters in the text string starting from position i, where 's' is at position 0. Finally, v≈ 8,000 is the chosen vocabulary size, d≈ 100,000 is the chosen dictionary size, h≈ 3,072 the LLMs hidden size. Finally, 0^h denotes a zero vector of dimension h and 1^v× d a matrix with entries 0 or 1. Note that we included some normalization steps in Alg. <ref>, which we surprisingly found not beneficial for Alg. <ref> in our ablations. § WHITESPACE ENCODING By default our model is trained to predict full words separated by whitespaces. To not be limited to this use-case, we add a special “non-whitespace” and “whitespace” token. We empirically evaluated each exception occuring in code tokenization. To further reduce its fertility, we favor “non-whitespace” before one of the following characters: .,;:#?!=-+*/)<>[] @ We further prefer non-whitespace after one of the following characters: #=-+*/'(̈<[ ^ @ As such, the text “In 2024” would result in the split “[In,2,0,2,4]” without the need of any special annotations, while “In20 24” resolves to “[In,<no_ws>,2,0,<ws>,2,4]”. Finally, to further improve code fertility, we merge consecutive <ws> and newline tokens up to 3 times, i.e.8consecutive whitespaces would result in a single <|8<ws>|> token. § TOKENIZER TRAININGS WITH SENTENCEPIECE For training of a unigram tokenizer with the current sentencepiece library, a 20GB reference data corpus reaches the limit of our available 1TB Ram compute node. 
We thus randomly sample 20GB of the slimpajama dataset and run the following statement for training of the actual tokenizer: spm_train –input=20GB_sample.txt–model_prefix=unigram_64k –vocab_size=64000 –character_coverage=0.99 –model_type=unigram –byte_fallback=true –split_by_number=true –split_by_whitespace=true –train_extremely_large_corpus=true–split_digits=true –allow_whitespace_only_pieces=true–remove_extra_whitespaces=false –normalization_rule_name=nfkc –num_threads 64 –eos_id=0 –bos_id=-1 –unk_id=2 –pad_id=1 –eos_piece="<|endoftext|>" –pad_piece="<|padding|>" –unk_piece="<|unknown|>" § TRAINING CONFIGURATIONS §.§ 1B Training Parameters are listed in Tab. <ref>. §.§ 3B Training Parameters are listed in Tab. <ref>. § FERTILITY ANALYSIS We subsequently provide further experimental details on the fertility analysis conducted with respect to F3, Sec. <ref>. As a reference dataset, we used the November 23 dump of Wikipedia in the respective languages. We derived reference tokenization using UDPipe <cit.>. A tokenizer's fertility is then calculated by dividing its total token count for a document by the number of tokens produced by UDPipe. We present results for more models on 8 languages in Tab. <ref>. We also evaluated the white-space tokenization of for code. For 22 programming languages, we took 10k random documents each from the starcoder dataset [<https://huggingface.co/datasets/bigcode/starcoderdata>]. Since ground truth text splitting for code is hard to establish, we instead report the normalized sequence length with respect to a reference tokenizer. We here used Llama-2 and report results in Tab. <ref>. Since 's tokenization achieves an NSL close to1.0, it performs roughly on par with Llama-2. § TOKEN OVERLAP/DUPLICATES For the empirical evaluation regarding F2, cf. Sec. <ref>, we present more exhaustive results with additional models in Tab. <ref>. § TRAINING STABILITY Memory footage comparing classic tokenizers to is found in Fig. <ref>. Note that the hashing step of Alg. <ref> uniformly distributes gradients amongst the available vocabulary, as discussed in Sec. <ref>. This is in contrast to classic tokenizers, as they depend on a bijective single-label mapping, and as such each vocabulary entry update is dependent on its the occurance frequency of the corresponding token within the dataset. Moreover, we explicitly let trigram activations overlap with their lowercase version. We assume that these are responsible for the more stable training dynamics as shown in Fig. <ref>. Moreover, we found that the lowercase overlap bootstraps learning as shown with the downstream benchmark ablations Fig. <ref>. § HYPERPARAMETER ABLATIONS Some 1,500 determined experiments later... Albeit pretty scarse, some more hyper-parameter ablations are found in Fig. <ref>,<ref>. We will continue to polish and add more... § SOME STATISTICS Trigram combinatorics. As there are more thanvpossible words, there will naturally be some overlap in the activations between words. However, assuming an embedding dimension ofv ≈ 8,000,m ≈ 8activations per trigram, and a word of lengthn = 5, there are (in theory)v n · m≈ 10^108unique activation patterns. This overlap can be interpreted as an interpolation between input states. For entirely independent inputs, this overlap should be kept small as the results cannot benefit from the states of the shared activations. As such, we require a robust hash function on text, i.e. 
a mapping from text into sparse activation patterns, for which the overlap of activations is proportional to the similarity of the input words. We model this through trigrams and, as such, letter-similarity. Tokenizer Duplicates. Tab. <ref> shows the curse of token-based vocabularies: to produce all 64 upper-case and whitespace variations of the word “_words”, one requires on average 3 tokens per variation. Dataset Coverages. Fig. <ref> shows the covered percentage of the entire dataset, by word length, for all slimpajama datasets. If we can successfully encode all words of length ≤ 10, we can cover ≥ 95% of the entire slimpajama dataset. Conversely, we would only require outlier handling/additional splits for the remaining 5% of longer words (cf. Sec. <ref>). Fig. <ref> and Fig. <ref> show dataset coverage (y-axis) of the top-n words and trigrams (x-axis) for each slimpajama category. Notably, 10k trigrams and 100k words consistently cover >95% of each slimpajama category. § MORE BENCHMARKS We used the code of the Eleuther eval harness and evaluated each benchmark in 0-shot and 2-shot settings. All 18 benchmarks are found in Fig. <ref> and Fig. <ref>.
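To make the robust trigram hashing sketched in Alg. <ref> and § SOME STATISTICS more tangible, the following minimal Python sketch maps a word to a sparse binary activation pattern over a vocabulary of size v, with m salted-hash activations per trigram and an explicit lowercase overlap. The helper names, the md5-based salting, and the default values are illustrative assumptions rather than the reference implementation.

```python
import hashlib

V = 8000   # assumed vocabulary (embedding) size, v
M = 8      # assumed number of activations per trigram, m

def _trigram_slots(trigram: str) -> set[int]:
    # Derive M pseudo-independent slot indices from one trigram by salting an
    # md5 hash; any robust hash followed by a modulo reduction would do here.
    slots = set()
    for salt in range(M):
        digest = hashlib.md5(f"{salt}:{trigram}".encode("utf-8")).hexdigest()
        slots.add(int(digest, 16) % V)
    return slots

def word_activation_pattern(word: str) -> set[int]:
    # Binary accumulation: a slot is either active or not, regardless of how
    # often a trigram re-occurs within the word.
    padded = f"{word}_"                      # mark the trailing word boundary
    active: set[int] = set()
    for text in {padded, padded.lower()}:    # explicit lowercase overlap
        for i in range(len(text) - 2):
            active |= _trigram_slots(text[i:i + 3])
    return active

if __name__ == "__main__":
    a, b = word_activation_pattern("Words"), word_activation_pattern("words")
    print(len(a), len(a & b))   # similar spellings share most of their pattern
```

Because the accumulation is binary, repeated trigrams within a word do not change the pattern, which corresponds to the limitation discussed in the text above.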
http://arxiv.org/abs/2406.17858v2
20240625180211
Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection
[ "Jialun Pei", "Ruize Cui", "Yaoqian Li", "Weixin Si", "Jing Qin", "Pheng-Ann Heng" ]
cs.CV
[ "cs.CV" ]
Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection ^1The Chinese University of Hong Kong, Hong Kong, China ^2The Hong Kong Polytechnic University, Hong Kong, China ^3Shenzhen Institute of Advanced Technology, CAS, Shenzhen, China wx.si@siat.ac.cn Jialun Pei et al. Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection Jialun PeiEqual contribution.1, Ruize Cui⋆2, Yaoqian Li1, Weixin Si 3(), Jing Qin2, Pheng-Ann Heng1 July 1, 2024 =============================================================================================================== § ABSTRACT Laparoscopic liver surgery poses a complex intraoperative dynamic environment for surgeons, where remains a significant challenge to distinguish critical or even hidden structures inside the liver. Liver anatomical landmarks, , ridge and ligament, serve as important markers for 2D-3D alignment, which can significantly enhance the spatial perception of surgeons for precise surgery. To facilitate the detection of laparoscopic liver landmarks, we collect a novel dataset called L3D, which comprises 1,152 frames with elaborated landmark annotations from surgical videos of 39 patients across two medical sites. For benchmarking purposes, 12 mainstream detection methods are selected and comprehensively evaluated on L3D. Further, we propose a depth-driven geometric prompt learning network, namely . Specifically, we design a Depth-aware Prompt Embedding (DPE) module that is guided by self-supervised prompts and generates semantically relevant geometric information with the benefit of global depth cues extracted from SAM-based features. Additionally, a Semantic-specific Geometric Augmentation (SGA) scheme is introduced to efficiently merge RGB-D spatial and geometric information through reverse anatomic perception. The experimental results indicate that  obtains state-of-the-art performance on L3D, with 63.52% DICE and 48.68% IoU scores. Together with 2D-3D fusion technology, our method can directly provide the surgeon with intuitive guidance information in laparoscopic scenarios. Our code and dataset are available at <https://github.com/PJLallen/D2GPLand>. § INTRODUCTION Laparoscopic liver surgery allows surgeons to perform a variety of less invasive liver procedures through small incisions, enabling faster patient recovery and superior cosmetic outcomes <cit.>. However, it is difficult for surgeons to distinguish critical anatomical structures in the complex and variable laparoscopic surgical environment, making it heavily dependent on the experience of the surgeon. In this regard, augmented reality techniques tailored for laparoscopic liver surgery are urgently desired to provide surgeons with auxiliary information for precise resection and surgical risk reduction. The primary step in achieving augmented reality clues is to automatically identify guiding markers on key frames from intraoperative 2D videos and preoperative 3D anatomy samples, respectively, to assist in intraoperative decision-making. Liver anatomical landmarks, e.g., anterior ridge and falciform ligament, have been validated as effective consistent information for 2D-3D alignment<cit.>. As shown in fig1, using 2D and 3D landmarks as references, internal liver structures are available for intraoperative fusion for enhanced visual guidance. However, accurate laparoscopic landmark detection remains challenging due to the lack of annotated datasets and how to comprehensively exploit the geometric information in video frames. 
Traditionally, landmarks in laparoscopic augmented reality are defined as points or contours <cit.>. In intricate surgical environments, however, the performance of existing structure-based methods suffers from the instability of detection accuracy due to susceptibility to interruptions and tissue deformation together with the lack of global geometric information <cit.>. Additionally, traditional landmarks fail to provide semantic information for precise correspondence between 2D and 3D medical images, which has great importance for estimating cross-dimensional spatial relationships in laparoscopic liver surgery. To address these challenges, we adapt silhouettes, ridges, and ligaments from laparoscopic video frames as landmarks, which are continuous anatomies with clear semantic features in the preoperative 3D anatomy, facilitating efficient 2D-3D alignment. However, existing laparoscopic liver landmark datasets lack sufficient annotations for training deep learning-based landmark models <cit.>. To address the limited sample of liver landmarks, we build the current largest-scale laparoscopic liver landmark dataset, named L3D. Specifically, we invite four senior surgeons to select 1152 critical frames from surgical videos of 39 patients at two medical sites, while labeling each frame with three types of semantic landmarks. Based on the proposed L3D dataset, we contribute a systematic study on 12 mainstream baselines <cit.>. We observe that existing detection methods concentrate more on semantic feature capture and edge detection while ignoring global geometric features of the liver region, especially the depth information <cit.>. Hence, we delve into a straightforward and effective framework that leverages depth maps and pre-trained large vision models to enhance the accuracy of detecting laparoscopic liver landmarks. In this work, we introduce a depth-driven geometric prompt learning network called . Specifically, we first employ an off-the-shelf depth estimation model to generate depth maps that provide inherent anatomic information. Considering that Segment Anything Model (SAM)-based approaches <cit.> have shown superior performance in extracting global high-level features in surgical scenes, we adopt a pre-trained SAM encoder combined with the CNN encoder to respectively extract RGB multi-level features and depth geometric information. Then, a Bi-modal Feature Unification (BFU) module is designed to integrate RGB and depth features. To distinguish highly similar landmark characteristics in laparoscopic liver surgery, we propose a Depth-aware Prompt Embedding (DPE) operation to highlight geometric attributes guided by prompt contrastive learning and produce class-aware geometric features. Moreover, we propose a Semantic-specific Geometric Augmentation (SGA) scheme to effectively fuse class-aware geometric features with RGB-D spatial features, where a reverse anatomic attention mechanism is embedded to focus on the perception of anatomical structures and overcome the difficulty of capturing ambiguous landmarks. Extensive experimental results on the L3D benchmark show that  achieves a promising performance. Our method has great potential to be applied in augmented reality-based intra-operative guidance for laparoscopic liver surgery. § L3D DATASET To facilitate the detection of laparoscopic liver landmarks, we establish a landmark detection dataset, termed L3D. Relevant information about patients and annotation is shown in tab:table1. 
To provide enhanced visualization guidance efficiently during the ever-changing surgical environment, we extract key frames from laparoscopic liver surgery videos to annotate liver landmarks according to the suggestions of surgeons. To this end, four surgeons are invited to select key frames and label them, two of whom perform the labeling and the other two check the labels. The selection criterion for the keyframes is to allow the surgeon to observe the global view of the liver, which can greatly reduce anatomical misperception during complex laparoscopic liver surgery. In our dataset, the ridge landmark is defined as the lower anterior ridge of the liver, and the ligament landmark is defined as the junction between the falciform ligament with the liver. In addition, the visible silhouette is also considered as a landmark category. Our dataset is collected from two medical sites, and all surgeries are liver resections for hepatocellular carcinoma (HCC). The annotators screen 1,500 initial frames from 39 patient surgery videos with an original resolution of 1920*1080, and retain 1,152 key frames after checking. We divide all samples in L3D into three sets, where 921 images are used as the training set, 122 images as the validation set, and 109 images as the test set. To ensure the fairness of the experiment, images from the same patient are not shared across these sets. § METHODOLOGY overview outlines the architecture of the proposed . Our model first takes key frame images from laparoscopic liver surgery as inputs and further generates depth maps using an off-the-shelf depth estimation network (AdelaiDepth <cit.>) as auxiliary inputs to supplement the geometric information. Then, we employ a ResNet-34 encoder <cit.> for RGB spatial feature extraction together with a frozen SAM encoder <cit.> for depth geometric cue acquisition. Notably, the original RGB frames are encoded through a CNN encoder to capture lower-level features for anatomical structure identification, while depth maps mainly provide global shape attributes and geometric insights. Thanks to the transformer-based structure and pre-training with large amounts of natural images, the SAM encoder exhibits heightened sensitivity towards global geometric features from the depth modality. We conduct ablation studies for different encoder combinations in ablate. Subsequently, depth feature F_d is passed into the proposed Depth-aware Prompt Embedding (DPE) module to highlight geometric attributes under the guidance of semantic prompts and then output the class-aware geometric features F_G^s,l,r. In parallel, the Bi-modal Feature Unification (BFU) module is applied to incorporate RGB feature F_rgb and F_d, producing integrated features F_f. Then, we interact geometric features F_G^s,l,r focusing on different landmark categories with the fused RGB-D features through our Semantic-specific Geometric Augmentation (SGA) scheme to obtain augmented unified features F_a. Finally, a CNN decoder is used to produce the detection maps. The following subsections will elaborate on the key components of . §.§ Depth-aware Prompt Embedding To capitalize on the advantages of pre-trained foundation models while reducing the computational costs for fine-tuning, we maintain the SAM encoder frozen in our model. Nonetheless, it still requires further guidance for extracting semantic geometry features related to landmark anatomy. 
To address this challenge, we propose three randomly initialized efficient class-specific geometric prompts and the DPE module to guide the extraction of geometric information related to different classes from the features derived from the SAM encoder. As shown in overview(a), we initially execute matrix multiplication between the input F_d and the geometric prompts, generating spatial attention maps to highlight regions associated with specific classes. Moreover, for each attention map, an element-wise multiplication is applied to depth features with a residual operation to obtain class-activated geometric features F_G^s,l,r. In addition, the proposed DPE module relies on discriminative prompts to guide the class-specific geometry feature extraction. However, it is challenging to learn precise class-specific prompts due to the highly similar landmark characteristics of the liver. To enhance prompt discriminativeness for better guidance, we apply the contrastive learning technique as illustrated in overview(b). Here we take the silhouette prompt P_s as an example. Given the ground truth of the silhouette landmark and F_d, a dot product is conducted on them, followed by taking the channel-wise mean values to obtain the reference embeddings R_s. Upon obtaining all reference embeddings of the three landmark classes, we modify the NT-Xent Loss <cit.> as the contrastive loss, formulated as follows: ℒ_cl = 1/N∑_l∈Llogexp(P_l·R_l/τ)/∑_k∈Lexp(P_l·R_k/τ), where N = 3 is the number of classes, L = {s, l, r} denotes the set of all classes, and τ refers to the temperature-scaled parameter. This contrastive learning strategy enhances the distinctiveness of the class-specific prompt representations. §.§ Geometry-enhanced Cross-modal Fusion Bi-modal Feature Unification. To capture holistic landmark features, we propose a BFU module to merge CNN-based lower-level structural features and SAM-based global geometric features. As depicted in overview(c), we first adaptively adjust the channel weights of F_rgb and F_d with Squeeze and Excitation (SE) blocks <cit.> and add them together. Afterward, we embrace the local and global average pooling modules to unify F_rgb and F_d at different scales and output the fused feature F_f. Semantic-specific Geometry Augmentation. To further inject the class-activated geometric information from feature F_G^s,l,r into the fused feature F_f, we present the SGA scheme shown in overview(d). We concatenate each class-specific feature in F_G^s,l,r with the fused feature F_f respectively, and then obtain the corresponding augmented feature F_a^s,l,r by 3×3 convolutional block. Subsequently, we concatenate all three semantic geometric features and generate the final augmented feature F_a. Considering the high similarity between anatomical structure and surrounding tissue features, we also embed a reverse anatomical perception module in the SGA to improve the sensitivity to ambiguous anatomical structures. Inspired by reverse attention<cit.>, we apply a sigmoid function and reverse the attention weights to yield the anatomic attention maps. Afterward, we interplay the attention map with F_f via element-wise multiplication to predict anatomical features. Here, we use the dice loss as the anatomic Loss ℒ_ana to supervise the anatomic learning. §.§ Loss Function In addition to the above-mentioned contrast loss and anatomic loss, we also add the segmentation loss ℒ_seg to the overall loss function to supervise the final landmark detection map. 
In summary, the total loss function can be defined as: ℒ_total = λ_segℒ_seg + λ_clℒ_cl + λ_anaℒ_ana,  ℒ_seg = 1/N∑_l∈L(ℒ_dice^(l) + ℒ_bce^(l)), where ℒ_dice^(l) denotes the Dice Loss, ℒ_bce^(l) denotes the binary cross-entropy (BCE) loss. λ_seg, λ_cl, and λ_ana are the balancing parameters for ℒ_seg, ℒ_cl, and ℒ_ana, respectively. All balancing parameters are set to 1 for optimal performance. § EXPERIMENTS §.§ Implementation Details The proposed  is developed with PyTorch, and the training and testing processes are executed on a single RTX A6000 GPU. We run 60 epochs for training with a batch size of 4. A frozen pre-trained SAM-B <cit.> is implemented in the depth encoder. We resize all the images to 1024×1024 and apply random flip, rotation, and crop for data augmentation. The Adam optimizer is used with the initial learning rate of 1e-4 and weight decay factor of 3e-5. In addition, the CosineAnnealingLR scheduler is applied to adjust the learning rate to 1e-6. For evaluation, we utilize the Intersection over Union (IoU), Dice Score Coefficient (DSC), and Average Symmetric Surface Distance (Assd) as evaluation metrics. §.§ Comparison with State-of-the-Art Methods We compare the proposed  with 12 cutting-edge methods on the L3D test set. For a fair comparison, these methods are divided into two types: (1) Non-SAM-based models, including UNet <cit.>, COSNet <cit.>, ResUNet <cit.>, DeepLabV3+ <cit.>, UNet++ <cit.>, HRNet <cit.>, TranUNet <cit.>, and SwinUNet <cit.>, and (2) SAM-based models, including SAM-Adapter <cit.>, SAMed <cit.>, AutoSAM <cit.>, and SAM-LST <cit.>. All compared models were trained to converge with their official implementations. As shown in Table. <ref>,  outperforms competitors on all evaluation metrics. Compared to the top-ranked model SAMed, our method improves 1.51% on DSC, 1.49% on DSC, and 2.17 pixels on Assd metrics with 44.52M fewer parameters, demonstrating the effectiveness of utilizing depth-aware prompt and semantic-specific geometric augmentation for landmark detection. Besides, we observe that non-SAM-based methods exhibit inferior performance compared to most SAM-based methods. It illustrates that the global geometric information extracted by the pre-trained SAM encoder can enhance the perception of landmark features. fig3 also exhibits the visual results of  and other well-performed methods. We can see that our method provides more accurate detection of liver landmarks while mitigating the impact of occlusion by other tissues and surgical tools. §.§ Ablation Study Ablations for Key Designs. Table <ref> shows the contribution of each key design in  on the L3D test set. Notably, all variants are trained with the same settings as mentioned in implement. The baseline (M.1) comprises a ResNet-34 encoder and frozen SAM-B encoder, and we directly concatenate RGB and depth features before feeding them into the decoder. Overall, each component contributes to the performance of our model in varying degrees. Specifically, M.2 and M.6 show the effectiveness of our BFU module in merging RGB and depth features. Based on M.2, M.3 and M.5 sequentially integrate our DPE and contrastive loss ℒ_cl to further enhance the model performance. Further, M.4 adds the SGA scheme to M.2, resulting in 1.07% and 0.89% improvements in DSC and IoU, respectively, indicating the advantages of geometric cues. Backbone Selections. 
To explore the effect of different backbones in feature extraction across RGB and depth modalities, we conduct additional ablation experiments on L3D with the CNN-based encoder and the SAM-based encoder. As shown in ablation2,  achieves the optimal performance when leveraging the ResNet-34 encoder for RGB inputs and the SAM encoder for depth modality. This experiment further validates the description in method that the ResNet-34 encoder is more effective in capturing lower-level anatomical structural features while SAM excels in extracting global geometric features. § CONCLUSION This paper proposes a novel geometric prompt learning framework, , for liver landmark detection on key frames of laparoscopic videos. Our method utilizes depth-aware prompt embeddings and semantic-specific geometric augmentation to explore the intrinsic geometric and spatial information, improving the accuracy of landmark detection. Moreover, we release a new laparoscopic liver landmark detection dataset, L3D, to advance the landmark detection community. Experimental results indicate that  outperforms cutting-edge approaches on L3D, demonstrating the effectiveness of our method in capturing anatomical information in various surgeries. We hope this work can pave the way for extracting consistent anatomical information from 2D video frames and 3D reconstructed geometries, thereby directly promoting 2D-3D fusion and providing surgeons with intuitive guidance information in laparoscopic scenarios. §.§.§ The work was supported in part by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No.: T45-401/22-N), in part by a grant from Hong Kong Innovation and Technology Fund (Project No.: MHP/085/21), in part by a General Research Fund of Hong Kong Research Grants Council (project No.: 15218521), in part by grants from National Natural Science Foundation of China (62372441, U22A2034), in part by Guangdong Basic and Applied Basic Research Foundation (2023A1515030268), in part by Shenzhen Science and Technology Program (Grant No.: RCYX20231211090127030), and in part by Guangzhou Municipal Key R&D Program (2024B03J0947). §.§.§ The authors have no competing interests to declare. splncs04
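As an illustrative companion to the loss formulation above (ℒ_total = λ_segℒ_seg + λ_clℒ_cl + λ_anaℒ_ana), the following PyTorch-style sketch shows one way the per-class Dice+BCE segmentation term, the temperature-scaled prompt contrastive term (written here in its standard negative-log, i.e., cross-entropy, form), and the anatomic Dice term could be combined. Tensor shapes, helper names, and default weights are assumptions for illustration and do not reproduce the released implementation.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    # pred: logits (B, H, W) for one landmark class; target: float binary mask (B, H, W)
    p = torch.sigmoid(pred)
    inter = (p * target).sum(dim=(1, 2))
    denom = p.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def prompt_contrastive_loss(prompts, refs, tau=0.07):
    # prompts, refs: (N, C) class-specific prompt and reference embeddings, N = 3 classes
    logits = prompts @ refs.t() / tau           # similarity of each prompt to every reference
    labels = torch.arange(prompts.size(0), device=prompts.device)
    return F.cross_entropy(logits, labels)      # pull each prompt toward its own class reference

def total_loss(seg_logits, seg_masks, prompts, refs, ana_pred, ana_mask,
               lam_seg=1.0, lam_cl=1.0, lam_ana=1.0):
    # seg_logits/seg_masks: per-class lists over {silhouette, ligament, ridge}
    l_seg = sum(dice_loss(p, t) + F.binary_cross_entropy_with_logits(p, t)
                for p, t in zip(seg_logits, seg_masks)) / len(seg_logits)
    l_cl = prompt_contrastive_loss(prompts, refs)
    l_ana = dice_loss(ana_pred, ana_mask)       # reverse anatomic perception supervision
    return lam_seg * l_seg + lam_cl * l_cl + lam_ana * l_ana
```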
http://arxiv.org/abs/2406.19267v1
20240627153543
Analysis of Multistage Feedforward Operational Transconductance Amplifiers using Single-Pole Approximation
[ "Taeju Lee" ]
physics.ins-det
[ "physics.ins-det", "cs.SY", "eess.SY" ]
On the stability of fracton gravity Evangelos Afxonidis[afxonidisevangelos@uniovi.es], Alessio Caddeo[caddeoalessio@uniovi.es], Carlos Hoyos[hoyoscarlos@uniovi.es], Daniele Musso[mussodaniele@uniovi.es] ============================================================================================================================================================================= fancy LEE: ANALYSIS OF MULTISTAGE FEEDFORWARD OTA USING SINGLE-POLE APPROX. A high-gain wideband operational transconductance amplifier (OTA) is an essential block for applications requiring high data rates. As technology nodes scale down, the minimum length of transistors shrinks and the intrinsic gain is reduced. Therefore, design techniques that achieve a high-gain wideband operation are essential in analog amplifier design. Also, appropriate frequency compensation is required to guarantee the reliable operation of an OTA. This paper presents analysis results of the OTAs that combine feedforward paths and multistage amplifiers to achieve high-gain wideband operation as well as frequency compensation. To analyze multistage feedforward OTAs and provide an intuitive design method, the single-pole approximation model is employed for each substage of the OTA. Using the single-pole approximation model, the analysis is carried out from the two-stage OTA to the four-stage OTA in this work. Analog front-end, amplifier, compensation technique, feedforward, high gain, multistage, operational transconductance amplifier (OTA), wideband. § INTRODUCTION [findent=2pt, nindent=0pt]Semiconductor technology has continued to develop, which has significantly advanced electronic devices in terms of power, speed, and size. As CMOS technology has advanced rapidly, computing engines and communication devices with high data rates become available. In these applications that require high data rates, a high-gain wideband operational transconductance amplifier (OTA) is essential for various analog blocks. As technology nodes shrink, digital circuits can be developed to occupy less area, consume lower power, and enable high-speed operation. However, when developing analog circuits such as OTAs using advanced CMOS technologies, it must be carefully designed while considering the performance such as stability and gain. As CMOS technology advances, the minimum length of transistors decreases, improving the speed and power efficiency of digital circuits such as flip-flops and logic gates. However, an intrinsic gain of a single transistor is degraded due to the minimized length, causing gain errors in negative feedback systems. To overcome this gain degradation, the OTA can be developed by cascading multiple gain stages <cit.>–<cit.>. Especially, a two-stage OTA has been widely employed in bias circuits, analog front-ends, analog-to-digital converters, and wireless receivers for applications such as communications <cit.>–<cit.> and sensor interfaces <cit.>–<cit.>. As the number of cascaded gain stages increases, additional poles are created, causing phase shift and degrading stability in feedback systems. Therefore, in an OTA used in a feedback system, the technique called Miller compensation is widely used to compensate for stability by controlling phase shift <cit.> and <cit.>. This paper provides the analysis results of the multistage feedforward OTAs. Section II reviews Miller compensation which is a widely used method to secure stability for feedback systems. 
Section III describes the analysis results of applying the single-pole approximation model from the two-stage design to the four-stage design. Finally, the conclusion is drawn in Section IV. All analysis results are verified using a 65-nm CMOS technology. § GENERAL CONSIDERATION Two-stage OTAs are widely used to provide a sufficient gain, but an additional pole contributes to phase shift, ultimately degrading the phase margin (PM). To mitigate the PM degradation, Miller compensation has been widely employed in numerous OTA designs. As shown in Fig. <ref>(a), the Miller capacitor Cm is placed between the input and output nodes of the second stage M2, splitting poles and ultimately improving the PM. At the interstage node X, the capacitance looking into the gate terminal of M2 increases to (1+Av2)Cm, shifting the dominant pole toward the origin in low frequencies <cit.>. When looking into the output node of M2 at very high frequencies, the resistance decreases from ro2||ro3 to approximately Ro1||(1/gm2)||ro2||ro3≈ 1/gm2 since Cm becomes a low-impedance path at high frequencies <cit.>. Accordingly, when Cm is employed, the pole at node X moves toward the origin, the pole at the output node moves away from the origin, and two poles are split <cit.>. However, employing Cm also provides the feedforward path from node X to the output node, contributing to a negative phase shift. In other words, as Cm creates the feedforward path, the gain becomes zero at ωz called a right-half-plane (RHP) zero. In Fig. <ref>(a), the RHP zero is obtained by solving the condition of VXCmωz=VXgm2 which means that the small-signal output becomes zero at ωz <cit.>. Considering the gate-drain parasitic capacitance of M2, Cgd2, ωz can be rewritten as gm2/(Cm+Cgd2). The RHP zero contributes to a negative phase shift as with a pole, resulting in the PM degradation. As a solution to remove the RHP zero, the compensation resistor Rz can be series-connected with Cm as shown in Fig. <ref>(b). After adding Rz, ωz is obtained as 1/[Cm(gm2^-1-Rz)] by solving VXgm2=VX[Rz+(Cmωz)^-1]^-1 <cit.>. Therefore, the RHP zero can move to the left-half plane by satisfying Rz > gm2^-1. The left-half-plane (LHP) zero contributes to a positive phase shift, which can cancel out a pole through appropriate design techniques and eventually improve the PM. The following analysis covers the compensation technique with LHP zeros generated by feedforward paths implemented using active devices. For simplicity, the analysis is first conducted from the two-stage structure using the single-pole approximation model and expanded to the four-stage structure in the same manner. § MULTISTAGE FEEDFORWARD OTA §.§ Two-Stage Design For simplicity of multistage analysis, a two-stage feedforward OTA is first analyzed using the diagram shown in Fig. <ref>(a). When neglecting the decoupling capacitor Cd, the gains of the main path are expressed as Av1=gm1Ro1 and Av2=gm2(Ro2||RoF1). The gain of the feedforward path is AvF1=gmF1(Ro2||RoF1). Then, the overall transfer function of the two-stage structure is given by Vout/Vin(s)|2-stage = Av1Av2/(1+s/ωp1)(1+s/ωp2)+AvF1/(1+s/ωp2) = (Av1Av2+AvF1)[1+AvF1s/(Av1Av2+AvF1)ωp1]/(1+s/ωp1)(1+s/ωp2) The multistage feedforward OTAs used in this work are designed based on <cit.>. Eqs. (1) and (2) are obtained in the same way as <cit.>. From Eq. (2), the voltage gain at low frequencies can be obtained as gm1Ro1gm2(Ro2||RoF1)+gmF1(Ro2||RoF1) and approximated as gm1Ro1gm2(Ro2||RoF1). In Fig. 
<ref>(a), the poles at the first and second output stages are obtained as ωp1=1/(Ro1Co1) and ωp2=1/(Ro2||RoF1)(Co2+CoF1). From Eq. (2), the LHP zero generated by the feedforward path of gmF1 is expressed as ωz1=ωp1[(Av1Av2+AvF1)/AvF1]=ωp1[(gm1Ro1gm2/gmF1)+1]. If gmF1≫ gm1,gm2, ωz1 can be approximated as ωp1. Therefore, the first pole ωp1 can be canceled out by choosing gmF1≫ gm1,gm2. This ultimately improves the stability and PM. Although the PM is improved by employing a feedforward path, the voltage gain is degraded by RoF1. Therefore, to isolate the output resistance of the feedforward path from the main path, the decoupling capacitor Cd can be placed between the feedforward path output and the main path output <cit.>. Compared to the design that places Cd at all inputs and outputs of feedforward paths <cit.>, this work places Cd only at the output of the outer feedforward path to save the design area. By placing Cd as shown in Fig. <ref>(a), the output impedance of the feedforward path becomes high at low frequencies and the effective output resistance of the OTA is dominated by Ro2. Therefore, the voltage gain changes from gm1Ro1gm2(Ro2||RoF1) to gm1Ro1gm2Ro2. Also, the dominant pole ωp2 approximately changes from 1/(Ro2||RoF1)(Co2+CoF1) to 1/(Ro2Co2) as the output impedance of the feedforward path is isolated from the main path. As the frequency increases, Cd operates almost as a short circuit. Accordingly, ωp1 and ωz1 located at high frequencies are almost the same as in the case without Cd. Fig. <ref>(b) shows the simulation results of gain and PM based on the two-stage OTA shown in Fig. <ref>(a): using only the two-stage OTA without the feedforward path significantly degrades the PM while achieving a high gain (red line), adding the feedforward path to the main path improves the PM but the gain is reduced due to the output resistance of the feedforward path (blue line), and combining the feedforward path with Cd compensates for the gain while ensuring sufficient PM (green line). Fig. <ref>(b) is obtained using the schematic shown in Fig. <ref> by setting Cd = 2 pF, VDD = 1.2 V, Vcm = 0.6 V, and Iref = 400 μA. The output load impedance is modeled as 10 MΩ||2 pF on Vout– and Vout+. Fig. <ref> shows the overall schematics of the fully differential two-stage feedforward OTA. The input stage gm1 is implemented by employing the cross-coupled structure M2 and the diode-connected load M3, which improves the voltage gain by maximizing the output resistance of the first stage <cit.>. The output stages gm2 and gmF1 are implemented as the current reuse structure. The common-mode voltage of each output stage is set using the common-mode feedbacks CM1 and CM2. gm1 consumes 273 μA. gm2 and gmF1 consume 1.48 mA and 3 mA, respectively. Each CMn=1,2 consumes 104 μA. Throughout this paper, the multistage feedforward OTAs are implemented by employing the bias block and the common-mode feedback, as shown in Figs. <ref>(b) and (c). §.§ Three-Stage Design Extending Eqs. (1) and (2) to a three-stage structure, the OTA produces three poles and two zeros in Fig. <ref>(a). However, obtaining poles and zeros becomes complex as the number of stages extends. Therefore, a three-stage feedforward OTA is approximated by replacing the internal two-stage feedforward structure with a single-pole system as shown in Fig. <ref>(b). As discussed in a two-stage feedforward OTA shown in Fig. <ref>(a), ωp1 can be canceled out by ωz1 through a feedforward path. 
Accordingly, a two-stage feedforward OTA can be approximated as a single-pole system with a pole of 1/(Ro2||RoF1)(Co2+CoF1) and a gain of gm1Ro1gm2(Ro2||RoF1) as shown in Fig. <ref>(b). The Miller capacitors, Cm1 and Cm2, are employed in Figs. <ref>(a) and (b). However, to simplify the circuit analysis, Cm1 and Cm2 are first neglected. Extending Eq. (1) to the three-stage structure while neglecting Cd, the overall transfer function of Fig. <ref>(b) is given by Vout/Vin(s)|3-stage = A2-stageAv3/(1+s/ω^'p1)(1+s/ω^'p2)+AvF2/(1+s/ω^'p2) where A2-stage is an approximated voltage gain of the internal two-stage structure, Av3 is the output stage gain of the main path, AvF2 is the output stage gain of the feedforward path, ω^'p1 is a pole at the output of the internal two-stage structure, and ω^'p2 is an output pole. Neglecting Cd for simplicity, each stage has a gain as follows: A2-stage=gm1Ro1gm2(Ro2||RoF1), Av3=gm3(Ro3||RoF2), and AvF2=gmF2(Ro3||RoF2). To obtain poles and zero in Fig. <ref>(b), Eq. (3) can be rewritten as (A2-stageAv3+AvF2)[1+AvF2s/(A2-stageAv3+AvF2)ω^'p1]/(1+s/ω^'p1)(1+s/ω^'p2) In Eq. (4), the voltage gain at low freqeuncies is defined as gm1Ro1gm2(Ro2||RoF1)gm3(Ro3||RoF2)+gmF2(Ro3||RoF2). This gain can be approximated by gm1Ro1gm2(Ro2||RoF1)gm3(Ro3||RoF2). Two poles are obtained as ω^'p1=1/(Ro2||RoF1)(Co2+CoF1) and ω^'p2=1/(Ro3||RoF2)(Co3+CoF2). The zero by the feedforward path of gmF2 is expressed as ω^'z1 = ω^'p1(A2-stageAv3/AvF2+1) = ω^'p1[gm1Ro1gm2(Ro2||RoF1)gm3/gmF2+1] In Fig. <ref>(b), considering Cd at low frequencies, the output impedance of the feedforward path becomes high. At low frequencies, the output impedance is therefore dominated by Ro3 and Co3 while ignoring RoF2 and CoF2. Accordingly, the voltage gain from Eq. (4) can be expressed as gm1Ro1gm2(Ro2||RoF1)gm3Ro3. Also, the dominant pole ω^'p2, which is in a relatively lower frequency range than ω^'p1 and ω^'z1, changes from 1/(Ro3||RoF2)(Co3+CoF2) to 1/(Ro3Co3). As the frequency increases, Cd becomes almost a short circuit. Therefore, ω^'p1 and ω^'z1, which are in the relatively higher frequency range, remain similar to the state in the absence of Cd. Recall that ωz1=ωp1[(gm1Ro1gm2/gmF1)+1] from the two-stage feedforward OTA. If gmF1 is large enough, ωz1 can be designed as ωp1, and ωp1 can be canceled out by ωz1. Similar to this operation, in Eq. (5), ω^'z1 can be designed as ω^'p1 if gmF2 is large enough. However, in practical design, there is a limit to increasing gmF2 and gm1Ro1gm2(Ro2||RoF1) is also large, so ω^'z1 is generally designed to be larger than ω^'p1. Fig. <ref>(a) shows the phase shift of the three-stage feedforward OTA and the phase peaking is generated due to the difference between ω^'p1 and ω^'z1. Fig. <ref>(b) presents the simulated Bode plots of the three-stage feedforward OTA according to gmF2 and Cd. By including the feedforward amplifier gmF2 (blue line), the PM is improved but the difference between ω^'p1 and ω^'z1 accompanies the phase peaking. Also, employing Cd improves the voltage gain at low frequencies (green line). To further improve the PM, the Miller capacitors, Cm1 and Cm2 shown in Fig. <ref>, are employed. As shown in Fig. <ref>(c), the PM is improved through Miller compensation <cit.>, which exchanges the positions of ω^'p1 and ω^'p2, but the bandwidth is sacrificed. ω^'z1 also moves slightly toward the origin as ω^'p1 approaches the origin. In this three-stage OTA design, Cd is set to 2 pF, and Cm1 and Cm2 are set to 50 fF. 
This means that Cd makes the low-impedance path than Cm1 and Cm2. Therefore, the OTA is still dominated by the feedforward path than the Miller compensation path at high frequencies, which means that Eq. (5) is still valid to describe ω^'z1. Comparing the three-stage feedforward OTA and the two-stage feedforward OTA, the voltage gain improves as the number of stages increases. However, the three-stage structure shows a rapid phase change because ω^'z1 is not completely designed as ω^'p1 in an actual circuit design. Figs. 5(b) and (c) are obtained using the fully differential three-stage feedforward OTA shown in Fig. <ref>. The bias block and common-mode feedback are the same as those shown in Figs. <ref>(b) and (c). The output load impedance is modeled as 10 MΩ||2 pF on Vout– and Vout+. The supply voltage VDD is set to 1.2 V. The common-mode voltages of Vout+/– and Vin+/– are set to 0.6 V. The current consumption of each stage is as follows: gm1 (274 μA), gm2 (794 μA), gmF1 (821 μA), gm3 (1.64 mA), gmF2 (3.28 mA), and CMn=1,2,3 (104 μA). Note that each CMn=1,2,3 sets the common-mode voltage using Vcm (= 0.6 V) as shown in Fig. <ref>(c). §.§ Four-Stage Design Recall that ωz1=ωp1[(gm1Ro1gm2/gmF1)+1] in the two-stage OTA, then the two-stage feedforward OTA can be approximated as a single-pole system if ωz1=ωp1 by making gmF1 large enough. By employing the approximated model of the two-stage feedforward OTA, the three-stage design also can be approximated as shown in Fig. <ref>(b). Then, the zero of ω^'z1 is obtained as ω^'p1[(gm1Ro1gm2(Ro2||RoF1)gm3/gmF2)+1] as shown in Eq. (5). Similar to ωz1 of the two-stage design, if gmF2 is large enough, ω^'z1 becomes ω^'p1, and ω^'p1 can be canceled out by ω^'z1. Therefore, theoretically, the three-stage OTA can be simplified into a two-pole system by making gmF1 large and further simplified into a single-pole system by making gmF2 sufficiently large. Fig. <ref> shows the four-stage feedforward OTA including the approximated circuit model of the preceding stages. Note that the Miller capacitors Cm1,2,3 are employed in the four-stage design, but will be first neglected in the analysis for simplicity and discussed in the Bode plot results. In the red box of Fig. <ref>, if gmF1 is large enough, the two-stage design can be simplified as a single-pole system employing G^'m=(gm1Ro1)gm2, R^'o=Ro2||RoF1, and C^'o=Co2+CoF1. In the blue box of Fig. <ref>, if gmF1 and gmF2 are large enough, the three-stage design can be simplified into a two-pole system and further simplified into a single-pole system by employing G^''m, R^''o, and C^''o. In Eq. (4), G^''m is approximately obtained as gm1Ro1gm2(Ro2||RoF1)gm3. R^''o and C^''o are obtained as Ro3||RoF2 and Co3+CoF2, respectively. Leveraging approximated single-pole models of the two- and three-stage OTAs while neglecting Cd in Fig. <ref>, the transfer function of the four-stage OTA can be given by Vout/Vin(s)|4-stage = A3-stageAv4/(1+s/ω^''p1)(1+s/ω^''p2)+AvF3/(1+s/ω^''p2) = (A3-stageAv4+AvF3)[1+AvF3s/(A3-stageAv4+AvF3)ω^''p1]/(1+s/ω^''p1)(1+s/ω^''p2) where A3-stage=G^''mR^''o, Av4=gm4(Ro4||RoF3), and AvF3=gmF3(Ro4||RoF3). The poles of the three- and four-stage outputs are expressed as ω^''p1=1/(Ro3||RoF2)(Co3+CoF2) and ω^''p2=1/(Ro4||RoF3)(Co4+CoF3), respectively. In Eq. 
(7) obtained using the single-pole approximation model, the zero of the four-stage feedforward OTA is given by ω^''z1 = ω^''p1(A3-stageAv4/AvF3+1) = ω^''p1[gm1Ro1gm2(Ro2||RoF1)gm3(Ro3||RoF2)gm4/gmF3+1] Table 1 summarizes the zeros according to the number of stages. Recall that ω^'z1 and ω^''z1 are obtained assuming that gmF1 and gmF2 are large enough. Similar to two- and three-stage OTAs, for improving the PM in the four-stage feedforward OTA, gmF3 must also be large enough to cancel out ω^''p1. From the equations of ωz1, ω^'z1, and ω^''z1 as summarized in Table 1, it can be concluded that the transconductance of the feedforward path must be sufficiently large to secure the PM. Assuming that gmF1 and gmF2 are large enough, Eq. (7) can be rewritten as Vout/Vin(s)|4-stage = A4-stage(1+s/ωz1)(1+s/ω^'z1)(1+s/ω^''z1)/(1+s/ωp1)(1+s/ω^'p1)(1+s/ω^''p1)(1+s/ω^''p2) where A4-stage can be approximated by gm1Ro1gm2(Ro2||RoF1)gm3(Ro3||RoF2)gm4(Ro4||RoF3). The four-stage OTA shown in Fig. <ref> consists of four poles and three zeros in Eq. (9). Ideally, by making gmF1 large enough, [1+(s/ωp1)] in the denominator can be canceled out by [1+(s/ωz1)] in the numerator. Similarly, [1+(s/ω^'p1)] in the denominator can be canceled out by [1+(s/ω^'z1)] in the numerator as gmF2 becomes large enough. Eventually, the four-stage OTA can be approximated by a single-pole system assuming that [1+(s/ω^''p1)] in the denominator is canceled out by [1+(s/ω^''z1)] in the numerator as gmF3 becomes sufficiently large. As mentioned above, to satisfy the condition that the poles are canceled out by the zeros, all feedforward stages need to be designed to have sufficiently large transconductances while sustaining gmF1≪ gmF2≪ gmF3 as shown in Table 1. However, as the number of stages increases, the burden on the transconductance of the feedforward path increases for canceling out poles, which is associated with design area and power consumption. In other words, to cancel out a pole by a zero in each stage, the transconductance ratio between the output stages of each order must be greater than the gain of the previous stage. Therefore, the following conditions must be satisfied: (gmF1/gm2) ≫ gm1Ro1, (gmF2/gm3) ≫ gm1Ro1gm2(Ro2||RoF1), and (gmF3/gm4) ≫ gm1Ro1gm2(Ro2||RoF1)gm3(Ro3||RoF2). However, considering DC bias conditions, power consumption, and design area, it becomes more difficult to satisfy the above conditions as the number of stages increases. For these reasons, it becomes difficult to cancel out a pole that is generated as the number of stages increases, ultimately leading to the degradation of the PM. Fig. <ref>(a) shows the simulated Bode plots obtained using the four-stage feedforward OTA without Miller compensation. Employing the feedforward path with gmF3 and without Cd alleviates the phase shift at high frequencies compared to the design without the feedforward path (blue and red lines in Fig. <ref>(a)). However, the zero is located at a higher frequency than the pole, and the zero does not completely cancel out the pole, leading to the PM degradation. As shown in the green line in Fig. <ref>(a), by isolating the output feedforward path using Cd, the voltage gain is improved from G^''mR^''ogm4(Ro4||RoF3) to G^''mR^''ogm4Ro4 at low frequencies. Also, the dominant pole ω^''p2 changes from 1/(Ro4||RoF3)(Co4+CoF3) to 1/(Ro4Co4) as Cd is employed. The other poles and zeros located at high frequencies are almost unaffected by Cd since Cd becomes almost a short circuit. As shown in Fig. 
<ref>(b), the PM can be improved by employing the Miller capacitors Cm1,2,3 shown in Fig. <ref>. Similar to Miller compensation in the two-stage design, the poles and zeros can be located closely together through pole splitting <cit.>. Therefore, the zero can effectively cancel out the pole that shifts toward high frequencies by pole splitting, resulting in an improved PM. However, the pole that moves toward the origin through pole splitting reduces the bandwidth. In this analysis, Cm1 and Cm3 are set to 10 fF, and Cm2 is set to 200 fF. Also, the load impedance is set to 10 MΩ||2 pF on Vout for this analysis. Note that the output load impedance is set to 10 MΩ||2 pF in analyzing all multistage feedforward OTAs throughout this work, and all output stages are designed using large device size to achieve a large transconductance. Accordingly, when analyzing the OTAs without Miller capacitors, it is assumed that ωp2, ω^'p2, and ω^''p2 are the dominant poles. Fig. <ref> shows the fully differential feedforward OTA used to obtain the results shown in Fig. <ref>. In Fig. <ref>, the bias block and common-mode feedbacks CMn=1,2,3 are the same as the circuits used in two- and three-stage OTAs. The common-mode voltages of Vin+/– and Vout+/– are set to 0.6 V. Cd is set to 2 pF. The current consumption of the four-stage feedforward OTA is as follows: gm1 (273 μA), gm2 (273 μA), gmF1 (820 μA), gm3 (1.61 mA), gmF2 (1.64 mA), gm4 (1.63 mA), gmF3 (3.28 mA), and CMn=1,2,3 (104 μA). The OTA is driven using a 1.2 V supply voltage. Table 2 summarizes the overall transconductances used in this analysis according to the number of stages. Note that the transconductance ratio is defined as gmF(k)/gm(k+1), k=1,2,3 in this analysis. As discussed above, gmF(k)/gm(k+1) must be designed as large as possible to secure sufficient phase margin in a multistage feedforward OTA. However, as the OTA order increases, achieving sufficient gmF(k)/gm(k+1) is a challenging task and becomes even more difficult when considering DC bias conditions, power consumption, and design area. Therefore, the analysis is carried out by setting gmF(k)/gm(k+1) to be larger than 2 in this work, as summarized in Table 2. § CONCLUSION In this work, the analysis of multistage feedforward OTAs is conducted by employing the single-pole approximation model. In each multistage feedforward OTA, the first two-stage OTA is modeled as a single-pole system by assuming that the first feedforward transconductance is large enough. Then, the next stage is again approximated identically to the approximation method in the previous stage by leveraging the previous single-pole system and making the second feedforward transconductance sufficiently large. Although the single-pole approximation model becomes difficult to apply as the number of stages increases in actual circuit design, it provides an intuitive way to analyze and design multistage feedforward OTAs. 00 r1 B. K. Thandri and J. Silva-Martinez, “A robust feedforward compensation scheme for multistage operational transconductance amplifiers with no Miller capacitors,” IEEE J. Solid-State Circuits, vol. 38, no. 2, pp. 237–243, Feb. 2003. r2 H. Jung, D. R. Utomo, S.-K. Han, J. Kim, and S.-G. Lee, “An 80 MHz bandwidth and 26.8 dBm OOB IIP3 transimpedance amplifier with improved nested feedforward compensation and multi-order filtering,” IEEE Trans. Circuits Syst. I: Regul. Pap., vol. 67, no. 10, pp. 3410–3421, Oct. 2020. r3 X. Yang and H.-S. 
Lee, “Design of a 4th-order multi-stage feedforward operational amplifier for continuous-time bandpass delta sigma modulators,” in Proc. IEEE Int. Symp. Circuits Syst., May 2016, pp. 1058–1061. r4 B. Razavi, Design of Analog CMOS Integrated Circuits, 2nd ed. (New York: McGraw-Hill, 2016). r5 R. G. H. Eschauzier, L. P. T. Kerklaan, and J. H. Huijsing, “A 100-MHz 100-dB operational amplifier with multipath nested Miller compensation structure,” IEEE J. Solid-State Circuits, vol. 27, no. 12, pp. 1709–1717, Dec. 1992. r6 J. Lee and S. H. Cho, “A 1.4-μW 24.9-ppm/^∘C current reference with process-insensitive temperature compensation in 0.18-μm CMOS,” IEEE J. Solid-State Circuits, vol. 47, no. 10, pp. 2527–2533, Oct. 2012. r7 S. Yun et al., “A 2.4/5 GHz dual-band low-noise and highly linear receiver with a new power-efficient feedforward OPAMP for WiFi-6 applications,” IEEE Access, vol. 11, pp. 137264–137273, Dec. 2023. r8 M. S. Kappes, “A 2.2-mW CMOS bandpass continuous-time multibit Δ–Σ ADC with 68 dB of dynamic range and 1-MHz bandwidth for wireless applications,” IEEE J. Solid-State Circuits, vol. 38, no. 7, pp. 1098–1104, Jul. 2003. r9 K. J. de Langen and J. H. Huijsing, “1-GHz operational amplifier with multipath nested Miller compensation,” in Proc. IEEE Int. Symp. Circuits Syst., May 1994, pp. 517–520. r10 A. K. George, J. Lee, Z. H. Kong, and M. Je, “A 0.8 V supply- and temperature-insensitive capacitance-to-digital converter in 0.18-μm CMOS,” IEEE Sens. J., vol. 16, no. 13, pp. 5354–5364, Jul. 2016. r11 X. Zou et al., “A 100-channel 1-mW implantable neural recording IC,” IEEE Trans. Circuits Syst. I: Regul. Pap, vol. 60, no. 10, pp. 2584–2596, Oct. 2013. r12 S.-J. Kim et al., “A 0.5-V sub-μW/channel neural recording IC with delta-modulation-based spike detection,” IEEE Asian Solid-State Circuits Conf., Nov. 2014, pp. 189–192. r13 T. Lee, M. K. Kim, H. J. Lee, and M. Je, “A multimodal neural-recording IC with reconfigurable analog front-ends for improved availability and usability for recording channels,” IEEE Trans. Biomed. Circuits Syst., vol. 16, no. 2, pp. 185–199, Apr. 2022.
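As a numerical illustration of the single-pole approximation used above, the two-stage transfer function of Eqs. (1)–(2) can be evaluated directly. The short NumPy sketch below uses arbitrary assumed element values (not the design values of Table 2) and shows that the feedforward zero ωz1 = ωp1[(gm1Ro1gm2/gmF1)+1] approaches ωp1 only when gmF1 is made much larger than gm1Ro1gm2, i.e., the pole-zero cancellation condition discussed in Section III.

```python
import numpy as np

# Illustrative element values (assumptions, not the paper's design values)
gm1, gm2, gmF1 = 1e-3, 5e-3, 50e-3       # transconductances, S
Ro1, Ro2, RoF1 = 200e3, 100e3, 100e3     # output resistances, ohm
Co1, Co2, CoF1 = 50e-15, 2e-12, 1e-12    # node capacitances, F

Ro2p = Ro2 * RoF1 / (Ro2 + RoF1)         # Ro2 || RoF1
Av1, Av2, AvF1 = gm1 * Ro1, gm2 * Ro2p, gmF1 * Ro2p
wp1, wp2 = 1 / (Ro1 * Co1), 1 / (Ro2p * (Co2 + CoF1))

def H(w):
    s = 1j * w
    main = Av1 * Av2 / ((1 + s / wp1) * (1 + s / wp2))   # main path of Eq. (1)
    ff = AvF1 / (1 + s / wp2)                            # feedforward path
    return main + ff

wz1 = wp1 * (Av1 * Av2 / AvF1 + 1)        # LHP zero from Eq. (2)
print(f"wp1 = {wp1:.3e} rad/s, wz1 = {wz1:.3e} rad/s (wz1/wp1 = {wz1/wp1:.1f})")

w = np.logspace(3, 11, 400)
mag_db = 20 * np.log10(np.abs(H(w)))
phase_deg = np.unwrap(np.angle(H(w))) * 180 / np.pi
print(f"low-frequency gain = {mag_db[0]:.1f} dB, phase at top of sweep = {phase_deg[-1]:.1f} deg")
```

With these placeholder values wz1 sits well above wp1; scaling gmF1 up moves the ratio toward 1 and the phase response toward that of a single-pole system, mirroring the gmF(k)/gm(k+1) requirement summarized in Table 1.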
http://arxiv.org/abs/2406.17897v1
20240625191246
Pixel-weighted Multi-pose Fusion for Metal Artifact Reduction in X-ray Computed Tomography
[ "Diyu Yang", "Craig A. J. Kemp", "Soumendu Majee", "Gregery T. Buzzard", "Charles A. Bouman" ]
eess.IV
[ "eess.IV" ]
Pixel-weighted Multi-pose Fusion for Metal Artifact Reduction in X-ray Computed Tomography Diyu Yang^1, Craig A. J. Kemp^2, Soumendu Majee^3§, Gregery T. Buzzard^1, and Charles A. Bouman^1 ^1Purdue University-Main Campus, West Lafayette, IN 47907. ^2Eli Lilly and Company, Indianapolis, IN 46225. ^3Samsung Research America, Mountain View, CA 94043. Diyu Yang was supported by Eli Lilly and Company. Gregery T. Buzzard was supported by NSF grant CCF-1763896. Charles A. Bouman was supported by the Showalter Trust. July 1, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This work was done while Soumendu Majee was employed at Purdue University. § ABSTRACT X-ray computed tomography (CT) reconstructs the internal morphology of a three dimensional object from a collection of projection images, most commonly using a single rotation axis. However, for objects containing dense materials like metal, the use of a single rotation axis may leave some regions of the object obscured by the metal, even though projections from other rotation axes (or poses) might contain complementary information that would better resolve these obscured regions. In this paper, we propose pixel-weighted Multi-pose Fusion to reduce metal artifacts by fusing the information from complementary measurement poses into a single reconstruction. Our method uses Multi-Agent Consensus Equilibrium (MACE), an extension of Plug-and-Play, as a framework for integrating projection data from different poses. A primary novelty of the proposed method is that the output of different MACE agents are fused in a pixel-weighted manner to minimize the effects of metal throughout the reconstruction. Using real CT data on an object with and without metal inserts, we demonstrate that the proposed pixel-weighted Multi-pose Fusion method significantly reduces metal artifacts relative to single-pose reconstructions. Inverse problems, Computed tomography, Model based reconstruction, Plug-and-play, Multi-agent consensus equilibrium, Metal artifact reduction. § INTRODUCTION X-ray computed tomography (CT) imaging is widely used in industrial <cit.> and medical <cit.> applications for non-destructive visualization of internal sample morphology. X-ray CT uses a series of projection images from various angles to reconstruct a 3D array of attenuation coefficients that describe the sample <cit.>. Traditional CT reconstruction methods, such as Filtered Back Projection (FBP), use projection images acquired around a single rotation axis. However, for some objects, projection images from different rotation axes may contain complementary information. In such cases, more accurate reconstructions could in principle be achieved by collecting projection images from multiple rotation axes (or poses), and then performing a joint or multi-pose reconstruction. Multi-pose reconstruction is motivated by samples with dense materials such as metal, which produce substantial, spectrally-dependent attenuation in the projection data. 
Standard reconstructions from such data exhibit bright or dark streaks, commonly known as metal artifacts, radiating out from the metal object <cit.>. These metal artifacts are strongly influenced by the measurement pose of the object <cit.>, and specific regions of the object may be corrupted in one pose but artifact-free in another. Therefore, multi-pose reconstruction holds the potential to mitigate metal artifacts by effectively taking the best quality information from each pose. One approach to multi-pose reconstruction is to transform separate reconstructions to a common pose and fuse them using a post-processing technique such as averaging <cit.>, pixel-wise minimum/maximum <cit.>, or weighted sum <cit.>. However, this approach operates in the image domain and hence does not fully exploit the complementary information in sinogram measurements. In contrast, Kostenko et al. <cit.> and Herl et al. <cit.> adapt the scanner geometry of multiple poses to a common pose to produce a joint reconstruction from sinogram data. However, this method is complex to implement, since it requires reprogramming the system matrix and integrating pose information into the reconstruction software. Metal artifacts can also be reduced by the use of Model-based Iterative Reconstruction (MBIR) <cit.>, which leverages the regularity within reconstructions to compensate for noisy or limited measurements. More recently, MBIR has evolved to include Plug-and-Play (PnP) <cit.>, which uses a denoiser to model the prior distribution of an image, and Multi-Agent Consensus Equilibrium (MACE), which allows multiple agents to represent various objectives in an inverse problem <cit.>. One notable advantage of MACE is its formulation as an equilibrium problem, making it applicable even in cases where there is no well-defined cost function to minimize. In this context, Yang et al. introduced a Multi-pose Fusion approach to perform a joint tomographic reconstruction using multiple poses of a single object <cit.>. In this paper, we build on the prior work of <cit.>, and propose a pixel-weighted Multi-pose Fusion algorithm, which reduces metal artifacts by integrating CT measurements from multiple poses of a single object (see Figure <ref>). As illustrated in Fig. <ref>, the proposed algorithm uses MACE to integrate information from the various poses, with each pose being represented by a single agent. The final reconstruction is then computed as the fixed point of a set of equilibrium equations. This allows for a straightforward, modular implementation using standard CT reconstruction software. A primary novelty of the proposed method is that the output of different MACE agents are fused in a pixel-weighted manner to minimize the effects of metal throughout the reconstruction. We present experimental results for the problem of non-destructive evaluation (NDE) using measured CT data, and demonstrate that pixel-weighted Multi-pose Fusion method is effective in reducing metal artifacts and improving image quality. In this research, we make the following novel contributions: * We introduce a mathematical framework based on MACE for integrating multi-pose CT data. * We introduce a innovative pixel-weighted averaging that can be used to fuse multi-pose image data both directly and in conjunction with MACE. Finally, we present experimental results for the problem of non-destructive evaluation (NDE) using measured CT data. 
These results demonstrate that the Multi-pose Fusion reconstruction method is effective in reducing metal artifacts and improving image quality. § PROBLEM FORMULATION In multi-pose CT imaging, multiple sets of CT scans are taken using different poses of the object, as illustrated in Fig. <ref>. Notice that the imaging geometry is the same for each pose. However, in practice some poses provide more useful information, particularly when the object contains dense or even opaque components that may obscure portions of the object. The objective of multi-pose CT reconstruction is then to perform a joint MBIR reconstruction from scans acquired from multiple poses of the object. Let y_k ∈ℝ^M_k be the sinogram measurements for the k^th pose, where k∈{0,...,K-1}. Then our goal is to recover x∈ℝ^N, the image vector containing attenuation coefficients in the reconstruction coordinate system. For each pose k, we also define a transformation function x_k = T_k x where x is the object represented in the common reconstruction coordinate system and x_k is the object represented in the k^th pose. So intuitively, T_k transforms the raster sampled object from the common reconstruction coordinate system to the posed coordinate system. In practice, T_k typically implements a rigid body transformation <cit.>, so it requires that the discretized function be resampled on the transformed sampling grid. This process requires some form of typically spline-based interpolation algorithm <cit.>. We will also require an approximate inverse transformation T_k^-1. Since both transforms require interpolation and resampling, we note that they will not in general be exact inverses of each other. Using this notation, the forward model for each pose has the form y_k = A_k T_k x + w_k , where A_k ∈ℝ^N× M_k is the scanner system matrix and w_k∼ N(0, αΛ_k^-1) is independent additive noise, each for the k^th pose. The joint MBIR reconstruction for the multi-pose problem is then given by x^* = min_x{∑_k=0^K-1 f_k(x)+h(x)} with data fidelity terms given by f_k (x) = -log p(y_k |x) + where f_k (x) = 1/2 ‖ y_k-A_kT_k x ‖_Λ_k^2 , and a prior term given by h(x)=-log p(x) that imposes regularity. Notice that direct implementation of (<ref>) is difficult since it requires that software be written to minimize a sum of complex tomographic reconstruction terms each with a different transformation T_k. Alternatively, one can compute the MBIR reconstruction by using consensus ADMM <cit.> to minimize the sum of K+1 terms consisting of h and the K terms in (<ref>). However, this approach has two serious disadvantages. First, the proximal map terms required for each pose will be very computationally expensive to compute. Second, we can improve reconstruction quality by replacing the prior term h(x) with a PnP denoiser. § MACE FORMULATION OF PIXEL-WEIGHTED MULTI-POSE FUSION In this section, we introduce the MACE framework for solving the multi-pose reconstruction problem <cit.>. §.§ Agent formulation We start by describing the agents in our approach. Let x^' = F_k (x) be an agent for the k^th pose. Intuitively, this agent should take a reconstruction x and return a reconstruction x^' that better fits the measurements y_k associated with the data from pose k. One approach would be to use (<ref>) directly in a proximal map. However, this would be computationally expensive and difficult to compute since it requires that the transformation T_k be integrated into the reconstruction software. 
Alternatively, we propose to use a Conjugate Proximal Map given by F_k (v) = T_k^-1 F (T_k v ; y_k ) , where F(v; y) is the standard proximal map in reconstruction coordinates given by F(v; y ) = min_x{1/2‖ y-A_k x ‖_Λ_k^2 + 1/2σ^2‖ x - v‖^2}. Notice that the conjugate proximal map of (<ref>) can be computed easily since it requires only the computation of the standard proximal map of (<ref>) in the standard coordinates and pre- and post-composition with the spline-based maps T_k and T_k^-1. In fact, software for computing the proximal map in (<ref>) is openly available <cit.>. The conjugate proximal map is exactly equivalent to the conventional proximal map based on (<ref>) when the rigid body transformations T_k and T_k^-1 are exact inverses of each other (see [appendix:conj_prox_map]Appendix A). In practice, we note that T_k and T_k^-1 will generally not be exact inverses due to the nonlinear interpolation, hence the solution to the conjugate proximal map is an approximation to the conventional proximal map. For the prior model, we will use a BM4D denoiser <cit.>. We denote this agent by F_K. The MACE agents are concatenated together to form a single operator F: ℝ^(K+1)N→ℝ^(K+1)N, defined as: F ( x)= [ F_0(x_0), ⋯ ,F_K(x_K)], where x=[ x_0,...,x_K] denotes the full MACE state. §.§ Pixel-weighted MACE The MACE state vector x contains multiple, potentially inconsistent reconstructions x_0,...,x_K. In order to produce a single coherent MACE reconstruction, we define a pixel-weighted averaging operator G_M ( x)= [ x̅_ M( x) , ⋯ , x̅_ M( x) ] , where x̅_ M( x) is a pixel-weighted average of the input vector components given by x̅_ M( x)=1/1+β∑_k=0^K-1M_kx_k + β/1+βx_K. Here, β>0 controls the amount of regularization relative to the data-fitting agents, and M_k ∈ℝ^N × N is a diagonal weight matrix specific to each data-fitting agent satisfying the property [ M_k]_ii≥ 0 ∑_k=0^K-1 M_k = I . Intuitively, G_ M( x) computes a weighted average of the components in x according to the weight matrices M_0, ..., M_K-1 as well as the regularization parameter β, and then returns a state vector formed by replicating this average K+1 times. Notice that M_k provides a mechanism to weight each pixel in each pose separately, which could be valuable in the metal artifact scenario, where certain components from specific poses are corrupted. The design of this weight matrix is discussed in Section <ref>. Using this notation, the MACE equilibrium equation is F ( x^*)= G_M ( x^*) where x^* solves the equation, and the final reconstruction is then given by x^*=x̅_ M( x^*). This equation enforces that all agents have the same output value (consensus) and that the vectors δ^*_k = x^*_k - F_k(x^*_k) satisfy x̅_ M(δ) = 0 (equilibrium) <cit.>. §.§ Computing the MACE Solution It can be shown that the solution to (<ref>) is also the fixed point of the operator T=(2 G_M-I)(2 F-I) (see [appendix:MACE_fixed_point]Appendix B). One popular method of finding such a fixed point is Mann iteration w←(1-ρ) w+ρ T w, where ρ∈ (0,1) controls the convergence speed. Algorithm <ref> shows the general method of solving MACE with Mann iterations. The algorithm starts from an initial reconstruction x^(0), and uses Mann iterations to find the equilibrium point between the prior and forward model terms. 
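The MACE formulation above is straightforward to prototype at toy scale. In the sketch below the data agents are closed-form conjugate proximal maps for small dense matrices, the pose transforms are orthogonal so that T_k^-1 = T_k^T, and a mild shrinkage operator stands in for the BM4D prior agent; the pixel weights M_k and the parameter β are chosen so that all agents are weighted equally. This is only a minimal illustration of the Mann iteration described above, not reconstruction software.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 16, 24, 2
sigma, beta, rho = 1.0, 0.5, 0.5       # beta = 0.5 makes all K+1 agents equally weighted

A   = [rng.normal(size=(M, N)) for _ in range(K)]
T   = [np.linalg.qr(rng.normal(size=(N, N)))[0] for _ in range(K)]   # stand-in rigid transforms
Lam = [np.eye(M) for _ in range(K)]
x_true = rng.normal(size=N)
y   = [A[k] @ T[k] @ x_true + 0.01 * rng.normal(size=M) for k in range(K)]

def prox(v, k):
    """Standard proximal map F(v; y_k) in the pose-k coordinates (closed form)."""
    H = A[k].T @ Lam[k] @ A[k] + np.eye(N) / sigma**2
    return np.linalg.solve(H, A[k].T @ Lam[k] @ y[k] + v / sigma**2)

def agent(v, k):
    """Conjugate proximal map F_k(v) = T_k^{-1} F(T_k v; y_k); here T_k^{-1} = T_k^T."""
    return T[k].T @ prox(T[k] @ v, k)

def denoiser(v):
    """Toy prior agent standing in for BM4D: mild shrinkage toward the mean."""
    return 0.9 * v + 0.1 * v.mean()

# Diagonal pixel weights M_k >= 0 with sum_k M_k = I (uniform here).
Mk = [np.full(N, 1.0 / K) for _ in range(K)]

def F(state):                                     # stacked agent operator
    return [agent(state[k], k) for k in range(K)] + [denoiser(state[K])]

def G(state):                                     # pixel-weighted averaging operator G_M
    xbar = sum(Mk[k] * state[k] for k in range(K)) / (1 + beta) + beta / (1 + beta) * state[K]
    return [xbar.copy() for _ in range(K + 1)]

state = [np.zeros(N) for _ in range(K + 1)]
for _ in range(200):                              # Mann iteration: w <- (1-rho) w + rho (2G-I)(2F-I) w
    Fw  = F(state)
    mid = [2 * Fw[k] - state[k] for k in range(K + 1)]
    Gm  = G(mid)
    state = [(1 - rho) * state[k] + rho * (2 * Gm[k] - mid[k]) for k in range(K + 1)]

x_hat = G(state)[0]                               # reconstruction = weighted average of the state
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative reconstruction error
```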
From <cit.>, when the agents F_0,...,F_K are all proximal maps of associated cost functions f_0,...,f_K-1,h, and all agents are equally weighted (M_k=1/K+1I for k∈{0,...,K-1}), this equilibrium point is exactly the solution to the consensus optimization problem of (<ref>). § PIXEL-WEIGHTED AVERAGING FOR METAL ARTIFACT REDUCTION We leverage the weight matrix M_k in (<ref>), which provides a mechanism of applying a separate weight to each pixel in each pose, to propose a pixel-weighted averaging algorithm for metal artifact reduction. Given an initial reconstruction x^(0) in the reconstruction coordinates, we locate the metal and object components through the binary masks b^metal, b^object∈ℝ^N, where 1 indicates a metal/object pixel, and 0 elsewhere: [b^metal]_i= 1, [x^(0)]_i>τ_metal 0, [x^(0)]_i ≤τ_metal [b^object]_i= 1, [x^(0)]_i>τ_object 0, [x^(0)]_i ≤τ_object where τ_metal,τ_object∈ℝ are threshold values to identify the metal and object pixels. We then transform the image masks from the reconstruction pose to each of the measurement poses k ∈{0,..., K-1 }: b_k^metal = T_k b^metal b_k^object = T_k b^object For each pose k, we compute a distortion image D_k ∈ℝ^N, which estimates the level of distortion at each pixel in the associated measurement pose: [D_k]_i = [A_k^tA_kb_k^metal]_i/[A_k^tA_kb_k^object]_i+ϵ, where small ϵ>0 prevents division by zero. Here, D_k uses both the location of the metal components (b_k^metal) and the scanner geometry (A_k) to predict the pixel-wise level of distortion. Intuitively, a larger entry in D_k indicates that the associated projections traverse a longer distance through metal, which could lead to more corruptions in the reconstructed pixel. The weight matrices M_0, ..., M_K-1 are then calculated using a softmax function across all distortion images: [M_k]_ii = exp-α[T_k^-1D_k]_i/∑_m=0^K-1exp-α[T_m^-1D_m]_i, where T_k^-1 transforms the distortion image from the measurement pose to the common reconstruction pose, and α≥ 0 controls the range of the weight matrices. Subsequently, pixel-weighted Multi-pose Fusion (Algorithm <ref>) uses these weight matrices to produce a joint reconstruction that selectively fits the informative measurements from different poses. Notice that the proposed algorithm can also work as a post-processing method to directly integrate the images from different poses. We call this method the pixel-weighted post-processing. The pseudo-code for this method is depicted in Algorithm <ref>. The algorithm takes standard CT reconstructions x_0,...,x_K-1 as inputs, each from a distinct pose. For each pose, the distortion image D_k and subsequently the weight matrix M_k are computed. The post-processed image is formed by taking the pixel-weighted average among the input reconstructions x_0,...,x_K-1. § EXPERIMENTAL RESULTS We evaluate the effectiveness of pixel-weighted Multi-pose Fusion on a real cone beam CT dataset featuring two distinct measurement poses. The object of interest (Fig. <ref>) is a plastic component with four removable metal disks. The object is scanned from two different poses (Fig. <ref>) using a North Star Imaging X50 X-ray CT system. The experimental specifications are detailed in Table <ref>. We perform a pixel-weighted Multi-pose Fusion with Algorithm <ref>, and compare the results with various reconstruction and post-processing methods listed below: * PnP, vertical pose: PnP recon from vertical pose. * PnP, horizontal pose: PnP recon from horizontal pose. * Averaging: Averaging of the PnP results. 
* Pixel-weighted averaging: Pixel-weighted averaging of the PnP results. * Baseline MPF: Baseline Multi-pose Fusion. The agents are equally weighted. * Pixel-weighted MPF (proposed): Pixel-weighted Multi-pose Fusion. A separate weight is assigned to each pixel in each agent. For a fair comparison, we use the same BM4D denoiser in PnP and Multi-pose Fusion algorithms. Fig. <ref> shows the results with different methods. The comparison of PnP results reveals varying metal artifact characteristics across different poses. The artifacts mainly manifest as horizontal streaks in the vertical pose (in red boxes), while appearing as shadowy artifacts between adjacent metal disks in the horizontal pose (in yellow boxes). Compared to simple averaging, pixel-weighted averaging further reduces metal artifacts by selectively integrating the informative components from different poses. Pixel-weighted Multi-pose Fusion further reduces metal artifacts by incorporating this pixel-weighted averaging mechanism into MACE framework, which produces the best image quality with significantly reduced metal artifacts. § CONCLUSION In this paper, we have introduced a novel CT reconstruction method called pixel-weighted Multi-pose Fusion. Our method uses Multi-Agent Consensus Equilibrium (MACE), an extension of Plug-and-Play, as a framework for integrating projection data from different poses. A primary novelty of the proposed method is that the output of different MACE agents are fused in a pixel-weighted manner to minimize the effects of metal throughout the reconstruction. Our experiment demonstrated that pixel-weighted Multi-pose Fusion delivers a significant reduction in metal artifacts compared to single-pose reconstruction and post-processing methods. § APPENDIX §.§ Relationship between Conventional Proximal Map and Conjugate Proximal Map The conjugate proximal map for the k^th pose is given by F_k (v) = T_k^-1min_x{1/2‖ y_k-Ax ‖_Λ_k^2 + 1/2σ^2‖ x-T_kv‖^2}. We show that the conjugate proximal map (<ref>) is exactly equivalent to the conventional proximal map based on (<ref>) when the transformation mappings T_k, T_k^-1 satisfy the following conditions for all x∈ℝ^N: * x=T_k^-1T_kx (inverse condition). * ‖ x ‖ = ‖ T_k x‖ (rigid body transformation) With a change of variable x=T_kx_k, we may rewrite the conjugate proximal map (<ref>) as F_k (v) = T_k^-1min_T_kx_k{1/2‖ y_k-AT_kx_k ‖_Λ_k^2 + 1/2σ^2‖ T_kx_k-T_kv‖^2} = min_x_k{1/2‖ y_k-AT_kx_k ‖_Λ_k^2 + 1/2σ^2‖ T_k(x_k-v)‖^2} = min_x_k{1/2‖ y_k-AT_kx_k ‖_Λ_k^2 + 1/2σ^2‖ x_k-v‖^2}, where (<ref>) requires the inverse condition <ref>, and (<ref>) requires the rigid body transformation condition <ref>. Notice that after substituting the dummy variable x_k with x in (<ref>), the conjugate proximal map is exactly equivalent to the conventional proximal map: F̃_k (v) = min_x{1/2‖ y-A_k T_k x ‖_Λ_k^2 + 1/2σ^2‖ x-v‖^2} §.§ Reformulating MACE as a Fixed-point Problem We follow <cit.> and show that MACE equation (<ref>) can be reformulated as a fixed point problem. For notation simplicity, we rewrite the weight matrices as M_k^' = 1/1+βM_k , 0 ≤ k ≤ K-1 β/1+βI, k=K . With this notation, the weighted averaging operator (<ref>) can be rewritten as: x̅_ M( x)=∑_k=0^KM_k^' x_k. By definitions of x̅_ M and G_ M, we have x̅_ M( G_ M( x)) = ∑_k=0^K M_k^'[ G_ M( x) ]_k = ∑_k=0^K M_k^'x̅_ M( x) = x̅_ M( x), where (<ref>) holds because ∑_k=0^K M_k^' = I. 
This gives the following idempotence property: G_ M( G_ M( x)) = G_ M( x). Since G_ M is also linear, it satisfies (2 G_ M- I)^-1 = (2 G_ M- I). From this, we may reformulate the MACE equation (<ref>) as a fixed-point problem: (2 G_ M- I)(2 F- I)( x^*) = x^*, or T x^*= x^*, where T=(2 G_ M- I)(2 F- I).
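The involution property used here is easy to confirm numerically: for any diagonal pixel weights M_k ≥ 0 with sum_k M_k = I and any β > 0, the stacked operator (2 G_M - I) squares to the identity. A short check with random weights and arbitrary toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, beta = 5, 3, 0.7

# Diagonal pixel weights M_k >= 0 with sum_k M_k = I, plus the prior weight beta.
raw = rng.uniform(size=(K, N))
Mk  = raw / raw.sum(axis=0)                                   # columns sum to one
Mp  = np.vstack([Mk / (1 + beta),
                 beta / (1 + beta) * np.ones((1, N))])        # the M'_k defined above

def G(state):                          # state has shape (K+1, N)
    xbar = (Mp * state).sum(axis=0)    # pixel-weighted average
    return np.tile(xbar, (K + 1, 1))   # replicated K+1 times

state = rng.normal(size=(K + 1, N))
once  = 2 * G(state) - state
twice = 2 * G(once) - once
print(np.allclose(twice, state))       # True: (2 G_M - I) is its own inverse
```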
Indefinite Causal Structure and Causal Inequalities with Time-Symmetry
Luke Mrini, Lucien Hardy
====================================================================================================================
§ ABSTRACT Time–reversal symmetry is a prevalent feature of microscopic physics, including operational quantum theory and classical general relativity. Previous works have studied indefinite causal structure using the language of operational quantum theory; however, these rely on time-asymmetric conditions to constrain both operations and the process matrix. Here, we use time-symmetric operational probabilistic theory to develop a time-symmetric process matrix formalism for indefinite causal structure. This framework allows for more processes than previously considered and a larger set of causal inequalities. We demonstrate that this larger set of causal inequalities offers new opportunities for device–independent certification of causal non-separability by violating new inequalities. Additionally, we determine that the larger class of time-symmetric processes found here is equivalent to those with Indefinite Causal Order and Time Direction (ICOTD) considered by Chiribella and Liu <cit.>, thereby providing a description of these processes in terms of process matrices. § INTRODUCTION The notion of causal structure—the relations between events that characterize them as timelike, spacelike, or lightlike separated—is of central importance in information theory and in fundamental physics. The distinction between past and future, or lack thereof, plays a foundational role in quantum theory, classical general relativity, information theory, and physics more broadly. We use the term time-symmetry to describe a theory which is invariant under some form of time-reversal, whether it be the 𝒞𝒫𝒯-invariance of the Standard Model or any combination of 𝒯 with other discrete transformations. In this work, we develop a formalism to explicate the full range of causal structures that are possible in a quantum theory with time-symmetry, specifically in the setting of two parties acting on finite-dimensional Hilbert spaces. Our work establishes a clear relationship between formalisms for causal structures with and without time-symmetry. We also derive a set of causal inequalities in a theory-independent manner whose violation can characterize certain exotic causal structures as lying outside the realm of classical causality. The range of circuit architectures that may be used in an information-processing setting is limited by the possible causal structures and time-orderings available among circuit elements. Traditional circuits using classical or quantum information rely only on definite timelike separations between circuit elements in order to signal forward in time and perform computations. A wider range of causal structures may be utilized in information processing when quantum mechanical considerations are taken into account. These hypothesized, indefinite causal structures have been shown to permit new types of circuits that offer a computational advantage over traditional quantum circuits <cit.> and enhanced communication protocols <cit.>.
Another motivation for studying exotic causal structures comes from quantum gravity where it is expected that superpositions of the spacetime metric, which determines the causal relationship between any pair of points, will play an important role <cit.>. Quantum indefiniteness of the metric gives rise to indefinite causal structure <cit.>, an area of research that has seen increasing attention in recent years <cit.>. Oreshkov, Costa, and Brukner <cit.> demonstrated that if one only assumes that quantum mechanics holds locally in its usual form, without imposing a global causal structure, it is possible that exotic causally non-separable processes may emerge <cit.>. They demonstrated this by studying a class of superoperators called process matrices that generalize the notion of a density matrix and assign probabilities to operational circuits. Causally non-separable processes have associated probability distributions that are not consistent with any convex mixture of definite causal structures. Some of these processes violate causal inequalities <cit.>, demonstrating inconsistency with definite causal structure as Bell's inequalities do for non-locality. One example of a causally non-separable process—the quantum switch<cit.>—has been realized in the laboratory <cit.> and has been shown to violate a set of device-independent causal inequalities <cit.>. These experiments presumably took place in a definite spacetime, therefore in these cases the causal non-separability of the quantum switch is not due to spacetime indefiniteness in any quantum gravity sense. Rather, it is likely due to a phenomenon such as “time-delocalized quantum systems,” proposed by Oreshkov <cit.>. This is discussed further at the end of Section <ref>. It remains to be seen whether other exotic causal structures besides the quantum switch may be realized in physical circuits, or whether causal non-separability may be produced experimentally involving genuine spacetime indefiniteness. It is conventional wisdom that signalling only happens forward in time, that is, a past event must not be influenced by a future choice. Existing work on indefinite causal structure relies on this time asymmetry by constraining local operations to prohibit signalling backwards in time. An operation, as explained in Appendix <ref>, refers to a party acting in a compact region of spacetime who may transform physical systems and report classical information. This is a central concept to the current work and will be expanded on throughout the text. Similarly, the causal inequalities are derived by supposing that processes with definite causal structure satisfy certain time-asymmetric no-signalling constraints <cit.>. This unequal treatment of past and future lies in tension with the time-reversal symmetry of classical general relativity. In fact, time-symmetry is a prevalent feature of microscopic physics more generally. The Standard Model of Particle Physics is invariant under the simultaneous reversal of charge, parity, and time known as 𝒞𝒫𝒯-invariance <cit.>. This example demonstrates how time-symmetric microscopic physics does not necessarily entail invariance under time-reversal alone, but that it might be accompanied by other discrete transformations. Dynamics in quantum theory as governed by the Schrödinger equation are time-symmetric. 
The standard treatment of the measurement process in quantum theory introduces time asymmetry, however, Aharonov, Bergmann and Lebowitz (ABL) showed in the 1960's that the von Neumann model of measurement can be formulated in a time-symmetric fashion at the microscopic level <cit.>. Standard operational quantum theory is time-asymmetric because operations are constrained to be trace non-increasing in the forward time direction. However, a time-symmetric formulation of operational quantum theory is also possible <cit.>. A time-symmetric operational probabilistic theory (TSOPT), of which operational quantum theory is a special case, was recently constructed in <cit.>. At the level of macroscopic statistical physics, time-asymmetry necessarily arises due to the second law of thermodynamics, but this can always be reduced to time-symmetric microscopic physics. Therefore, it is of interest to develop a time-symmetric formalism for indefinite causal structure—this is the primary goal of the current work. Existing work by Chiribella and Liu <cit.> partially addresses this question from a different perspective by studying the case where the direction of the flow of time through an operation is indefinite. In this work, a broader class of processes was uncovered relative to those that can be described in the time-forward process matrix formalism of Ref. <cit.>, offering new computational advantages <cit.>. They also demonstrated recently that processes with both indefinite causal order and time direction (ICOTD) can maximally violate any causal inequality <cit.>. Indefinite causal order refers to the causal ordering between multiple parties, while indefinite time direction refers to the flow of time within a single party's operation. We sometimes use “indefinite causal structure” as a generic term to refer to either or both of these. The ICOTD processes coincide with the full set of processes allowed in our time-symmetric process matrix framework. Chiribella and Liu obtain this set of processes by studying so-called “bidirectional devices,” devices having the property that an input-output inversion results in another valid operation. Examples of such devices include half-wave and quarter-wave plates in quantum optics. A process with indefinite time direction known as the Quantum Time Flip has been realized experimentally by Guo et al. <cit.>. The approach of Chiribella and Liu does not manifestly incorporate time-symmetry since still the conditions used to constrain physical operations treat the past and future distinctly. To address this concern in the current work, we treat operations in a time-symmetric way from the onset. Our approach offers a unified framework to describe ICOTD process in terms of process matrices and makes clear the distinction between the time-symmetric (TS) and time-forward (TF) approaches. The basic ingredients of standard operational probabilistic theory are closed laboratories localized in space and time which take physical systems as input, perform local operations, and output new physical systems. The reader who is unfamiliar with quantum measurement theory (e.g. quantum channels, measurement operations, the Choi-Jamiołkowski isomorphism) may wish to consult Appendix <ref> for a brief introduction. The closed laboratory makes some classical information called an “outcome” available after the operation, which may in some instances be interpreted as a measurement outcome. 
In the TSOPT developed in <cit.>, an additional classical variable called an “income”—the time-reversed counterpart of an outcome—is available before the operation is performed. An income may be interpreted as the initial state of a measuring apparatus or the initial value of a classical ancilla. To remember the difference between inputs/outputs and incomes/outcomes, one may use the following mnemonic: `p' is for physical and `c' is for classical. One of these local laboratories may be illustrated diagrammatically as a box with various wires coming out of it, as in Fig. <ref>. In quantum theory, these boxes correspond to maps between Hermitian operators on input and output Hilbert spaces. To illustrate the interpretation of the income variables, consider the following example, illustrated in Fig. <ref>. Alice wants to go to bed, so before she enters her bedroom, she decides she is going to turn off the light. Once she enters, she sees that the light is either on or off. If it is on, she flips the light switch off, and if it is off, she does nothing. Assuming that an external observer does not have access to Alice's memory of whether the light switch was initially on or off, the classical information of the initial state of the light switch is only available before Alice's operation is carried out. This is the hallmark feature of an income. Put differently, an external observer watching a video in reverse of Alice's actions only finds out the initial state of the light switch at the end of the reversed video. An income is distinguished from a setting, which is some classical information that the agent, Alice, has free choice to determine at the time of the operation, independent of any external influences. In this example, the setting is Alice's decision to turn off the light and get ready for bed. Independent of the initial state of the light switch or any other factors, Alice has the agency to make any one of four possible decisions of how to operate on the light switch (two possible initial states times two final states equals four deterministic operations). In this example, the inputs and outputs are classical, but they could be quantum systems. Boxes and wires can be joined together to form circuits, representing experiments carried out in a compact region of spacetime. For example, consider the circuit in Fig. <ref>. Circuits with no loose wires represent the joint probability of obtaining the values of variables displayed in readout boxes (e.g. the income and outcome variables u,v,a,x in this example). This circuit consists of three operations labelled 𝖠, 𝖡, and 𝖢, and represents the joint probability distribution p(u,v,a,x) to observe the values u,v,a, and x in a single run of the experiment. In Ref. <cit.> diagrammatic operator tensor notation (which originally appeared in Ref. <cit.>) was adapted to the time-symmetric setting. The operator tensor notation looks, formally, like the diagrammatic operational notation (such as that used in Fig. <ref>). While the operator tensor notation has certain advantages, in the present work we will use the Choi-Jamiołkowski representation as used in Ref. <cit.> since this notation is standard for work on process matrices. Different input/output wires in circuits may carry different types of physical systems. Additionally, different income/outcome wires may carry classical variables which take value in sets of different cardinality. We denote the cardinality of a classical variable, say x for instance, by N_x. 
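Returning briefly to the light-switch example, the toy enumeration below lists the four deterministic operations on a classical bit (the possible settings) and the resulting joint distribution over the income (the initial switch state, taken here to be uniformly random) and the final state. For Alice's choice, "always off", the final state carries no information about the income, which is why the initial state is classical information available only before the operation.

```python
from itertools import product

# The four deterministic operations on a classical bit (the light switch): each is a
# function from the initial state (the income) to the final state. Choosing one of
# them is Alice's setting.
settings = {
    "always off": lambda s: 0,
    "always on":  lambda s: 1,
    "do nothing": lambda s: s,
    "flip":       lambda s: 1 - s,
}

for name, rule in settings.items():
    # Income uniformly random; p(income, final) for this setting.
    table = {(income, rule(income)): 0.5 for income in (0, 1)}
    print(f"{name:10s} -> p(income, final) = {table}")
# For "always off" the final state is 0 for both incomes, so an observer who sees
# only the final state (or watches the video in reverse) learns the income last.
```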
The boxes labeled with an `𝖱' indicate the transmission of random information such that each possible value for the classical variable is equally probable. In particular, a readout box x sandwiched between two 𝖱 boxes (see Fig. <ref>) results in a circuit with a constant probability 1/N_x. We also have the identity that summing, or marginalizing, over the variable in a readout box produces a wire with nothing on it (Fig. <ref>). If the corresponding probability p(u,v,a,x) of Fig. <ref> is non-negative for all values of u,v,a and x and all boxes 𝖡 and 𝖢, this is equivalent to the complete positivity of operation 𝖠. This complete positivity condition can be imposed in quantum theory by placing conditions on the operator associated with the operation 𝖠. In quantum theory, the complete positivity condition of an operation M^A_IA_O_a,x in the Choi-Jamiołkowski (CJ) representation can be written in operator language p(u,v,a,x) = _A_IA_OB[M^A_IA_O_a,x M^A_IB_u M^A_OB_v] ≥ 0 ∀ M^A_IB_u, M^A_OB_v. Complete positivity is the first physicality constraint for operations in TSOPT. The second physicality constraint is called double causality and is illustrated in Fig. <ref>. The box denoted with an `𝖨' is the ignore operation. In quantum theory, the `𝖨' box is taken to be the identity operator so that the corresponding Hilbert space is effectively “ignored” through a partial trace when calculating probabilities. In the operator language of quantum theory, the double causality constraints can be written, _A_O∑_x M_a,x^A_IA_O = 1^A_I, 1/N_a_A_I∑_aM_a,x^A_IA_O = 1/N_x1^A_O. We call the first condition here forward causality, and the second backward causality. These constraints are related to trace preservation, since if outcomes (incomes) are averaged over, the trace of the output (input) state is equal to the trace of the input (output) state. While these constraints may appear to be time-asymmetric due the N_a and N_x factors, there is a freedom in the normalization of the `𝖱' and `𝖨' boxes that can be used to shift these factors between the two constraints. In the operator tensor notation as used in <cit.> it is possible to absorb these factors into the corresponding symbols such that the double causality constraints are in time-symmetric form. In the current presentation, the placement of these factors was chosen to be the most intuitive for a reader familiar with the time-forward perspective. A quantum operation M^A_IA_O_a,x satisfying Eqn. (<ref>) and Eqn. (<ref>) is termed physical. Note that the physicality constraints Eqns. (<ref>, <ref>) are manifestly time-symmetric. By restricting to circuits composed of physical operations, we are guaranteed to be dealing with processes that may equally well be run forward or backward in time. In Section <ref>, we use the language of TSOPT to derive conditions that are satisfied by any process with a definite causal order. These conditions result in a set of forward and backward causal inequalities—a larger set than those considered previously by Branciard et al. <cit.>. In Section <ref>, we discuss the set of constraints that characterizes the process matrices consistent with time-symmetric quantum theory. We then expand the most general time-symmetric process matrix in a basis of Hermitian operators in order to analyze new types of allowed processes. In Section <ref>, we present an example of a process matrix that violates a backward causal inequality but not a forward causal inequality. 
Hence, the time-symmetric process matrix formalism offers new opportunities to certify causal non-separability through backward causal inequality violation. We also show that the time-symmetric process matrix framework naturally incorporates processes with an indefinite time direction by explicitly constructing the process matrix associated to the so-called “quantum time flip” <cit.>. The formalism presented here thus offers a unified framework for indefinite causal order and indefinite time direction. Further, we discuss in this section the one-to-one correspondence between operations in the time-symmetric (TS) and time-forward (TF) theories. This correspondence is broken in the presence of process matrices, due to the possibility for post-selection in the TS theory. Finally, we conclude in Section <ref>. § PROCESSES WITH DEFINITE CAUSAL STRUCTURE We begin by studying circuits that correspond to a process with definite causal order. Consider two parties, Alice and Bob, performing operations satisfying the time-symmetric physicality conditions of Fig. <ref> and Fig. <ref>. Note that in this section we use TSOPT without restricting to quantum theory in order to remain completely general. The most general operational circuit with the definite causal ordering A≼ B (Bob does not precede Alice) is given in Fig. <ref>. Here, we allow Alice and Bob to have setting choices α and β, respectively <cit.>. Each box is taken to be physical. For this reason, we make explicit the possibility for pre-selection and post-selection by including the variables u and v, respectively. If the pre-selection variable u, for instance, is marginalized over, then by Eqn. (<ref>) the lowermost box in the circuit is required to be the identity operation. We would like to allow for the possibility that a non-trivial physical system, such as a density matrix that is not maximally mixed, is known to be available at the beginning of the experiment. This necessitates the presence of the pre-selection variable u. In the spirit of time-symmetry, we similarly include the post-selection variable v. Additionally, we allow for the possibility of ancillary systems that may be entangled with Alice and Bob's inputs/outputs. The circuit Fig. <ref> corresponds to a probability distribution p^A≼ B(a,b,x,y,u,v|α,β). We can apply the double causality rules to obtain linear constraints on these probabilities. Marginalizing over the post-selection variable v and Bob's outcome y, the circuit reduces to that of Fig. <ref> (i). This computation goes through by applying the identity of Fig. <ref>, followed by the identity Fig. <ref>, and finally by applying double causality Fig. <ref> twice. Note that an `𝖨' box connected to multiple wires factors into individual `𝖨' boxes on each wire, just as an identity operator acting on a tensor product of Hilbert spaces does. The dependence on Bob's setting β is made trivial, while the dependence on Bob's income b factors out from the rest of the circuit. Thus we arrive at the familiar no-signalling constraint p^A≼ B(a,b,x,u|α,β) = 1/N_bp^A≼ B(a,x,u|α), ∀ a,b,x,u,α,β. This equation states that Bob cannot signal to Alice unless there is post-selection—either in the post-selection variable v or in Bob's outcome y. In other words, Eqn. (<ref>) states a statistical independence between Bob's setting β and the variables a, x, α of Alice's laboratory. This is what is meant by no-signalling. Now, we repeat the analysis by marginalizing over the pre-selection variable u and Alice's income a. 
The circuit reduces in this case to that in Fig. <ref> (ii). The dependence on Alice's setting α and her outcome x becomes trivial, resulting in an additional constraint p^A≼ B(b,x,y,v|α,β) = 1/N_xp^A≼ B(b,y,v|β), ∀ b,x,y,v,α,β. This constraint is the time-reversal of Eqn. (<ref>), stating that Alice cannot signal to Bob unless there is pre-selection—either in the pre-selection variable u or in Alice's income a. This inability to signal forward in time seems unfamiliar, yet, in real experiments there is almost always some form of pre-selection present, and hence the apparent issue is evaded. Whenever a measuring apparatus is prepared in some initial configuration, or the state of some physical system is known before the start of the experiment, there is pre-selection present and it is possible to signal forward in time. It is straightforward to repeat the previous analysis for the definite causal ordering B≼ A (Alice does not precede Bob), illustrated in Fig. <ref>. We similarly arrive at two linear constraints: p^B≼ A(a,b,y,u|α,β) = 1/N_ap^B≼ A(b,y,u|β), ∀ a,b,y,u,α,β, stating no-signalling from Alice to Bob without post-selection, and p^B≼ A(a,x,y,v|α,β) = 1/N_yp^B≼ A(a,x,v|α), ∀ a,x,y,v,α,β, stating no-signalling from Bob to Alice without pre-selection. Biparite causally separable correlations are defined as those which are consistent with a definite causal ordering A≼ B, B ≼ A, or a convex mixture of the two. That is, p(a,b,x,y,u,v|α,β) = q p^A≼ B(a,b,x,y,u,v|α,β) + (1-q) p^B≼ A(a,b,x,y,u,v|α,β) for some q∈ [0,1]. Following the analysis of Branciard et al. Ref. <cit.>, we arrive at a set of causal inequalities that are necessarily satisfied by causally separable correlations. This derivation relies only on forward causality (the first condition in Fig. <ref>) and is therefore associated with the time-forward perspective. We adopt the terminology of Branciard et al. in naming the first of these GYNI (Guess Your Neighbor's Income): 1/N_aN_b∑_a,b,x,yδ_x,b δ_y,a p(x,y|α,β,a,b,u,v) ≤1/2. This can be understood as placing an upper bound on the probability for success in a (time-forward) game where Bob's objective is to match his outcome to Alice's income, and Alice's objective is to match her outcome to Bob's income. The second causal inequality is called LGYNI (Lazy Guess Your Neighbor's Income): 1/N_α N_β N_a N_b∑_α,β, a,b,x,yδ_α(y⊕ a),0 δ_β(x⊕ b),0 p(x,y|α,β,a,b,u,v) ≤3/4. The symbol `⊕' denotes addition modulo 2. This second inequality corresponds to a modified game where a player is only required to guess their neighbor's income if their own setting is equal to one, otherwise they are free to produce any outcome they desire. In the time-forward setting of Ref. <cit.>, this is an exhaustive list of the causal inequalities for two parties. The causal polytope is the high-dimensional convex structure of causally separable probability distributions defined by Eqn. (<ref>) and no-signalling constraints <cit.>. In the time-forward formalism, the GYNI and LGYNI causal inequalities describe all of the non-trivial facets of the causal polytope. There are more in the time-symmetric formalism. With time-symmetry, there is automatically a time-reversed counterpart for each of the causal inequalities. They are derived using only backwards causality (the second condition in Fig. <ref>) and are associated to the time-backward perspective. 
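Before writing down the time-reversed inequalities, it helps to evaluate the forward functionals on an explicit strategy. The sketch below uses a deterministic, causally ordered strategy with A ≼ B and pre-selection: Bob learns Alice's income and outputs y = a, while Alice, who cannot see b, always outputs x = 0. All variables are binary and the pre- and post-selection variables are suppressed. The strategy reaches, but does not exceed, the causal bounds of 1/2 and 3/4.

```python
import numpy as np
from itertools import product

# p[x, y, alpha, beta, a, b] = p(x, y | alpha, beta, a, b) for a deterministic
# A <= B strategy: Bob outputs y = a (Alice forwards her income), Alice outputs x = 0.
p = np.zeros((2, 2, 2, 2, 2, 2))
for alpha, beta, a, b in product(range(2), repeat=4):
    p[0, a, alpha, beta, a, b] = 1.0

def gyni(p):
    """(1/(N_a N_b)) sum_{a,b,x,y} delta_{x,b} delta_{y,a} p(x,y|...); settings fixed to 0."""
    return sum(p[b, a, 0, 0, a, b] for a in range(2) for b in range(2)) / 4.0

def lgyni(p):
    """(1/16) sum over alpha,beta,a,b,x,y of delta_{alpha(y+a),0} delta_{beta(x+b),0} p(x,y|...)."""
    total = 0.0
    for x, y, alpha, beta, a, b in product(range(2), repeat=6):
        if alpha * ((y + a) % 2) == 0 and beta * ((x + b) % 2) == 0:
            total += p[x, y, alpha, beta, a, b]
    return total / 16.0

print(gyni(p), lgyni(p))   # 0.5 and 0.75: the causal bounds are reached but not violated
```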
These include a time-reversed GYNI: 1/N_xN_y∑_a,b,x,yδ_x,b δ_y,a p(a,b|α,β,x,y,u,v) ≤1/2, as well as a time-reversed LGYNI: 1/N_α N_β N_x N_y∑_α,β, a,b,x,yδ_β(y⊕ a),0 δ_α(x⊕ b),0 p(a,b|α,β,x,y,u,v) ≤3/4. We refer to the first two inequalities Eqns. (<ref>, <ref>) as the forward inequalities and the last two Eqns. (<ref>, <ref>) as the backward inequalities. Any causally separable process necessarily satisfies both the forward and backward inequalities. Note that Eqn. (<ref>) and Eqn. (<ref>) are valid in these forms only up to two settings each, i.e. 1≤ N_α, N_β≤ 2. A violation of a causal inequality certifies the presence of indefinite causal structure in a given process. In Section <ref>, we study correlations in time-symmetric quantum theory without assuming a definite causal order. We will then use the causal inequalities to certify that there exist causally non-separable processes that are consistent with time-symmetric quantum theory. § INDEFINITE CAUSAL STRUCTURE IN TIME-SYMMETRIC QUANTUM THEORY §.§ The time-symmetric process matrix Consider a bipartite process where Alice and Bob act by quantum operations M_a,x^A_IA_O∈ℒ(ℋ^A_I⊗ℋ^A_O) and M_b,y^B_IB_O∈ℒ(ℋ^B_I⊗ℋ^B_O) in the CJ representation. We use the notation ℒ(ℋ^X) to denote the space of linear operators on a Hilbert space ℋ^X. Without loss of generality, we temporarily omit the dependence on settings α and β. Alice has an income a and an outcome x. Bob has an income b and an outcome y. We require that Alice and Bob's operations are physical, satisfying completely positivitiy Eqn. (<ref>) and double causality Eqn. (<ref>). In the time-symmetric scenario, input and output Hilbert spaces are of the same dimension which we denote by d_A d_A_I = d_A_O and d_B d_B_I = d_B_O. The most general probabilities that can be associated to Alice and Bob's quantum operations are represented by the circuit in Fig. <ref>. The box labelled W represents a Hermitian operator W_u,v^A_IA_OB_IB_O∈ℒ(ℋ^A_I⊗ℋ^A_O⊗ℋ^B_I⊗ℋ^B_O) called the process matrix <cit.>. The classical variables u ad v, which we call the pre-selection and post-selection variables, represent information that is available before and after the experiment. The process matrix can be thought of as a generalization of a density matrix since it determines probabilities in an analogous way p(a,b,x,y,u,v) = _A_IA_OB_IB_O[W_u,v^A_IA_OB_IB_O·(M_a,x^A_IA_O⊗ M_b,y^B_IB_O) ]. More complicated objects can also be encoded in the process matrix, including quantum channels, quantum channels with memory, and, as we will see, indefinite causal structures. As opposed to earlier works <cit.> which implicitly condition on the pre-selection variable, here we study the joint probability p(a,b,x,y,u,v) among all circuit variables, including the pre-selection variable u and post-selection variable v. The transition between joint and conditioned probabilities is not straightforward in the present context since the normalization factor in Bayes' rule will depend non-trivially on Alice and Bob's choices of operations. An analogous phenomenon occurs in the context of the ABL rule <cit.>, where probabilities with pre-selection and post-selection in quantum theory contain a non-trivial normalization factor. When considering only pre-selection (or only post-selection) in the process matrix, the normalization factor is constant and it is trivial to transition between joint and conditioned probabilities. 
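As a quick numerical aside, the trace rule above can be evaluated directly once an ordering of the Hilbert spaces is fixed. The sketch below orders them as A_I ⊗ A_O ⊗ B_I ⊗ B_O, lets both parties apply the deterministic "discard the input and reprepare the maximally mixed state" operation (whose CJ operator is (1/2)1), and uses two toy process matrices chosen purely for illustration; both have trace d_A d_B and return unit total probability, as the normalization constraints require.

```python
import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
d_A = d_B = 2

def kron(*ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

def prob(W, MA, MB):
    """p = Tr[ W (M^{A_I A_O} (x) M^{B_I B_O}) ] with ordering A_I, A_O, B_I, B_O."""
    return np.trace(W @ np.kron(MA, MB)).real

# Deterministic 'discard and reprepare maximally mixed': CJ operator (1/2) 1^{X_I X_O},
# which satisfies both forward and backward causality.
M_triv = 0.5 * np.eye(4)

W_flat = 0.25 * kron(I2, I2, I2, I2)                            # completely uncorrelated process
W_chan = 0.25 * (kron(I2, I2, I2, I2) + kron(I2, Z, Z, I2))     # z-correlation from A_O to B_I

for W in (W_flat, W_chan):
    assert np.all(np.linalg.eigvalsh(W) >= -1e-12)   # positive semi-definite
    assert np.isclose(np.trace(W), d_A * d_B)        # trace equals d_A d_B
    print(prob(W, M_triv, M_triv))                   # 1.0 in both cases
```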
It will turn out that this is the case for the time-forward process matrices, but not the full set of time-symmetric process matrices. In order to associate non-negative probabilities to circuits, we require that the process matrix is a positive semi-definite operator W_u,v^A_IA_OB_IB_O≥ 0 ∀ u,v. We also require that the process matrix associates normalized probabilities. For simplicity, we define the averaged operations M^A_IA_O≡∑_a,xM_a,x^A_IA_O, M^B_IB_O≡∑_b,yM_b,y^B_IB_O. Then the requirement of normalized probabilities is equivalent to imposing ∑_u,v_A_IA_OB_IB_O[ W_u,v^A_IA_OB_IB_O·(M^A_IA_O⊗ M^B_IB_O) ]=1, for all physical, averaged operations M^A_IA_O and M^B_IB_O. As shown in Appendix <ref>, the requirement of normalized probabilities translates into four linear constraints on the process matrix. We adopt the notation of Ref. <cit.> to write _XW 1/d_X1^X⊗_X[W] as the “trace part” of an operator W∈ℒ(ℋ^X). We also use _[1-X]W W - _XW to denote the “traceless part” of W. Then the normalization constraints on the process matrix can be written: ∑_u,v_A_IA_OB_IB_O[W_u,v] = d_Ad_B, ∑_u,v_B_IB_O[1-A_I][1-A_O]W_u,v =0, ∑_u,v_A_IA_O[1-B_I][1-B_O]W_u,v =0, ∑_u,v_[1-A_I][1-A_O][1-B_I][1-B_O]W_u,v =0. The first of these ensures that probabilities are normalized if Alice and Bob act trivially (M^A_IA_O = (1/d_A)1^A_IA_O), as illustrated in Fig. <ref>. If Alice's and/or Bob's operation also contains a non-trivial traceless part, the remaining three constraints guarantee that probabilities remain normalized. An additional set of constraints must be imposed on process matrices with no post-selection or no pre-selection, i.e. u or v is marginalized (summed), respectively. Process matrices with the post-selection variable marginalized satisfy additional no-signalling constraints: ∑_v _A_I[1-B_O]W_u,v = 0, ∑_v _B_I[1-A_O]W_u,v = 0, ∑_v _[1-A_O]_[1-B_O]W_u,v =0. The derivation of these conditions is given in Appendix <ref>. These three constraints guarantee that if one of the parties marginalizes over their outcome, then they cannot signal backward in time to the other party. This principle is respected by the no-signalling constraints derived in Section <ref> and we assert that it remains true in indefinite causal structures. Similarly, there are three constraints for a process matrix with the pre-selection variable marginalized: ∑_u _A_O[1-B_I]W_u,v = 0, ∑_u _B_O[1-A_I]W_u,v = 0, ∑_u _[1-A_I]_[1-B_I]W_u,v =0. These guarantee that if one party marginalizes over their income, they cannot signal forward in time to the other party. The normalization conditions in Eqns. (<ref>-<ref>) are redundant in light of the no-signalling conditions of Eqns. (<ref>-<ref>). We include them anyway for completeness, in case one is interested which assumptions lead to which constraints, and for comparison with other works in the literature. Eqn. (<ref>) is not redundant and must be imposed independently. The process matrices with non-trivial pre-selection/post-selection (both u and v are not marginalized) form the largest class of bipartite process matrices. These process matrices satisfy positivity, Eqn. (<ref>), and must never assign probabilities greater than unity, but otherwise they are unconstrained. This is analogous to how an operation satisfies similar minimal constraints until either the outcome or income is marginalized, following which it must satisfy forward or backward causality. Chiribella and Liu study this same class of processes in Appendix D of Ref. 
<cit.>, referrring to them as processes with “Indefinite Causal Order and Time Direction” (ICOTD). They demonstrate that the classical processes with ICOTD form a sufficiently large set to produce any arbitrary probability distribution, even in the N-party case. In particular, they show that these classical ICOTD processes can achieve the algebraic maximum of every causal inequality. §.§ Expansion in basis operators The constraints for a process matrix with no pre-selection and/or no post-selection restrict the kinds of terms that are allowed to appear in the process matrix. It is instructive to expand in a basis set of Hermitian operators {σ^X_μ}_μ=0^d_X^2-1, σ_μ^X ∈ℒ(ℋ^X). It is always possible to choose a basis such that the μ=0 operator is the identity, σ_0^X = 1^X, and all others are traceless, _Xσ_j^X = 0, j>0. Further, we require this set of operators to be orthogonal under the Hilbert-Schmidt inner product _X[σ_μ^Xσ_ν^X] = d_X δ_μν. With these properties, any Hermitian operator on ℋ^X can be decomposed as a linear combination of basis operators. In particular, an operation in the CJ representation has the decomposition M^X_IX_O = ∑_μν𝒳_μνσ_μ^X_Iσ_ν^X_O∈ℒ(ℋ^X_I⊗ℋ^X_O). When there is no risk for confusion, we omit the tensor product `⊗'. Then specifying the operation M^X_IX_O amounts to specifying a set of coefficients 𝒳_μν. The double causality constraints Eqn. (<ref>) impose restrictions on the 𝒳_μν, 𝒳_00 = 1/d_X, 𝒳_μ0 = 0, 𝒳_0ν = 0, as can be checked by taking the partial traces of Eqn. (<ref>). Then we can write the most general, physical operation in the form M^X_IX_O = 1/d_X( 1^X_IX_O + ∑_ij>0𝒳_ijσ_i^X_Iσ_j^X_O). The coefficients 𝒳_ij with i,j>0 are free to take any values so long as the resulting M^X_IX_O is positive semi-definite. Note that there are fewer allowed terms here in the time-symmetric formulation than were found in the time-forward case (Ref. <cit.>, see Supplementary Methods). This is due to the fact that the double causality conditions contain an additional constraint (backward causality) that was absent previously in the time-forward formalism. Now, we use the set of basis Hermitian operators to expand the bipartite process matrix W_u,v^A_IA_OB_IB_O = ∑_μναβw_μναβ(u,v)σ_μ^A_Iσ_ν^A_Oσ_α^B_Iσ_β^B_O, as well as the physical operations of Alice and Bob, M^A_IA_O = 1/d_A( 1^A_IA_O + ∑_ij>0𝒜_ijσ_i^A_Iσ_j^A_O), M^B_IB_O = 1/d_B( 1^B_IB_O + ∑_ij>0ℬ_ijσ_i^B_Iσ_j^B_O). The coefficients w_μναβ(u,v) have a dependence on the pre-selection and post-selection variables which we will suppress from here on for simplicity of notation. Given these decompositions, we can rewrite the requirement of normalized probabilities in Eqn. (<ref>), 1 = ∑_u,v_A_IA_OB_IB_O[W_u,v^A_IA_OB_IB_O·(M^A_IA_O⊗ M^B_IB_O) ] = ∑_u,v(d_Ad_Bw_0000 + d_A/d_B∑_ij>0w_ij00𝒜_ij + d_B/d_A∑_lm>0w_00lmℬ_lm + d_Ad_B∑_ijlm>0w_ijlm𝒜_ijℬ_lm) using the defining properties of the set of orthogonal operators {σ_μ^X}_μ. The total probability is required to be unity for any choice of Alice and Bob's operations, that is, for any choice of 𝒜_ij and ℬ_ij. This results in the following requirements: ∑_u,vw_0000 = 1/d_Ad_B, ∑_u,vw_ij00 = 0, ∑_u,vw_00lm = 0, ∑_u,vw_ijlm = 0, ∀ i,j,l,m>0. We can further constrain the coefficients w_μναβ by enforcing Eqns.(<ref>-<ref>). The constraints corresponding to no post-selection can be written ∑_v w_0αβ i = 0, ∑_v w_α i 0 β = 0, ∑_v w_α i β j = 0, ∀α, β≥ 0, i,j>0. The constraints corresponding to no pre-selection are ∑_u w_α 0 i β = 0, ∑_u w_i αβ 0 = 0, ∑_u w_i α j β = 0, ∀α, β≥ 0, i,j>0. 
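Both sets of coefficient conditions are convenient to check numerically. The sketch below computes Hilbert-Schmidt coefficients in the Pauli basis for (i) the CJ operator of the identity channel, a deterministic physical operation, confirming 𝒳_00 = 1/d_X with vanishing 𝒳_i0 and 𝒳_0j (these particular checks are insensitive to the transpose freedom in the CJ convention), and (ii) a toy qubit process matrix whose only non-trivial term is a σ^A_Oσ^B_I correlation; the example W is chosen purely for illustration.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X  = np.array([[0., 1.], [1., 0.]])
Y  = np.array([[0., -1j], [1j, 0.]])
Z  = np.diag([1., -1.])
sigma = [I2, X, Y, Z]      # sigma_0 = identity, sigma_{1,2,3} traceless, Tr[s_i s_j] = 2 delta_ij

def coeffs(op, n):
    """Hilbert-Schmidt coefficients of an operator on n qubit factors."""
    w = np.zeros((4,) * n, dtype=complex)
    for idx in product(range(4), repeat=n):
        basis = np.eye(1)
        for i in idx:
            basis = np.kron(basis, sigma[i])
        w[idx] = np.trace(op @ basis) / 2**n
    return w

# (i) Deterministic physical operation: CJ operator of the identity channel on a qubit.
M_id = sum(np.kron(sigma[i], sigma[i].conj()) for i in range(4)) / 2
cM = coeffs(M_id, 2)
print(cM[0, 0].real)                                       # X_00 = 1/d_X = 0.5
print(np.abs(cM[1:, 0]).max(), np.abs(cM[0, 1:]).max())    # X_i0 = X_0j = 0

# (ii) Toy bipartite process matrix: identity term plus one sigma^{A_O} sigma^{B_I} term.
def kron4(a, b, c, e):
    return np.kron(np.kron(a, b), np.kron(c, e))
W = 0.25 * (kron4(I2, I2, I2, I2) + kron4(I2, Z, Z, I2))
cW = coeffs(W, 4)
for idx in np.argwhere(np.abs(cW) > 1e-12):
    print(tuple(idx), cW[tuple(idx)].real)
# Expected: (0,0,0,0) -> 0.25 = 1/(d_A d_B) and (0,3,3,0) -> 0.25, i.e. a single
# sigma^{A_O} sigma^{B_I} coefficient, which survives all of the constraints above.
```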
The most general, physical bipartite process matrix can be written as a sum W_u,v^A_IA_OB_IB_O = 1/d_Ad_B(p_0(u,v)1^A_IA_OB_IB_O + σ_u,v^TS + σ_u,v^TF + σ_u,v^TB + σ_u,v^ISO) where the operators σ_u,v^TS, σ_u,v^TF, σ_u,v^TB, σ_u,v^ISO are Hermitian operators in ℒ(ℋ^A_I⊗ℋ^A_O⊗ℋ^B_I⊗ℋ^B_O) which may depend non-trivially on u and v. The factor p_0(u,v) is the joint probability for observing the values u,v with all other variables marginalized when Alice and Bob act trivially (M^A_IA_O = (1/d_A)1^A_IA_O). This of course must satisfy ∑_u,vp_0(u,v) = 1. The labels on the final four terms stand for “Time-Symmetric,” “Time-Forward,” “Time-Backward,” and “Isolated,” respectively. These operators are defined by: σ_u,v^TS = ∑_ij>0(a_ijσ_i^A_Iσ_j^A_O + b_ijσ_i^B_Iσ_j^B_O) + ∑_ijkl>0c_ijklσ_i^A_Iσ_j^A_Oσ_k^B_Iσ_l^B_O, σ_u,v^TF = ∑_i(d_iσ_i^A_I + e_iσ_i^B_I) + ∑_ij>0f_ijσ_i^A_Iσ_j^B_I + ∑_ijk(g_ijkσ_i^A_Iσ_j^A_Oσ_k^B_I + h_ijkσ_i^A_Iσ_j^B_Iσ_k^B_O), σ_u,v^TB = ∑_i(m_iσ_i^A_O + n_iσ_i^B_O) + ∑_ij>0o_ijσ_i^A_Oσ_j^B_O + ∑_ijk(q_ijkσ_i^A_Iσ_j^A_Oσ_k^B_O + r_ijkσ_i^A_Oσ_j^B_Iσ_k^B_O), σ_u,v^ISO = ∑_ij>0(s_ijσ_i^A_Iσ_j^B_O + t_ijσ_i^A_Oσ_j^B_I). Each of the 15 sets of coefficients appearing here—a_ij, b_ij, c_ijk, etc.—have a dependence on u and v that has not been written explicitly to save space. The first operator σ_u,v^TS represents the terms that are only available with both pre-selection and post-selection, since ∑_u σ_u,v^TS = ∑_vσ_u,v^TS = 0, as is required by Eqn.(<ref>) and Eqn. (<ref>). The second operator σ_u,v^TF represents the terms that are available in the time-forward picture, but not in the time-backward picture since they vanish when there is no pre-selection: ∑_uσ_u,v^TF = 0. This is required by Eqn. (<ref>). Likewise, σ_u,v^TB represents the terms available in the time-backward picture, but not the time-forward picture since they vanish when there is no post-selection: ∑_vσ_u,v^TB = 0. This is required by Eqn. (<ref>). The final operator σ_u,v^ISO represents terms which may be found in an isolated process, that is, one that does not require pre-selection or post-selection. This may be interpreted as a process that does not rely on any external resources such as density matrices which are not maximally mixed, or non-maximal measurements (these concepts are time-reversal counterparts). These terms are always available even if one marginalizes over u and/or v. Thus, in the time-symmetric process matrix formalism presented here, four classes of processes are arranged naturally in a hierarchical structure, illustrated in Fig. <ref>. Processes with both pre-selection and post-selection form the largest class (TS). These processes are only constrained to give probabilities which are non-negative and not greater than unity. Otherwise, these processes may contain any of the terms listed in Eqn. (<ref>). One conclusion from this analysis is that the time-symmetric operations, which are more constrained than time-forward operations, result in less constrained process matrices. A consequence of the maximal size of the set of TS process matrices is that arbitrary bipartite probability distributions may be produced given the appropriate process and measurements. This is consistent with the results of Liu and Chiribella <cit.> who studied this same class of processes from a different perspective. A sub-class (TF) is formed by the time-forward processes—those that do not involve post-selection. 
These processes may contain terms from σ_u,v^TF and σ_u,v^ISO and coincide with the known set of bipartite process matrices studied by Oreshkov, Costa, and Brukner <cit.>. Another sub-class (TB) is formed by the time-backward processes, which contain terms from σ_u,v^TB and σ_u,v^ISO. Naturally, every time-backward process is the time-reversal of a time-forward process. Finally, the smallest sub-class (ISO) is formed by the isolated processes. These lie at the intersection of the time-forward and time-backward processes and contain terms only from σ_u,v^ISO. As can be seen from Fig. <ref>, a variety of processes with definite causal structure are possible in the time-symmetric formalism. The simplest examples are obtained by setting all coefficients in Eqn. (<ref>) to zero apart from one of them. For example, a process like the example in the ISO category in Fig. <ref> can be obtained by choosing some of the t_ij in Eqn. (<ref>) to be non-zero. The interpretation of this process is a single quantum channel from Alice to Bob with no pre-selection or post-selection. A non-constant p_0(u,v) indicates the presence of ancillary systems in the corresponding process (see, for example, Fig. <ref>)—at least for causally separable processes. When Alice and Bob act trivially, there can only be correlation between u and v if there is some other ancillary channel through which signalling can occur. Choosing more complicated combinations of non-zero coefficients in Eqn. (<ref>) can sometimes result in causally non-separable processes. We will see an example of this in Section <ref>. The physical interpretation of a causally non-separable process is not immediately clear, although it is commonly thought to be a result of quantum indefiniteness of some kind. This could be due to a spacetime metric in coherent superposition as in Ref. <cit.>, where thin, spherical mass shells are put into a superposition of radii in order to implement the quantum switch <cit.>. Another possible mechanism for causally non-separable processes has been suggested by Oreshkov <cit.>. The proposal is based on the notion of time-delocalized quantum systems, which are “nontrivial subsystems of the tensor products of Hilbert spaces associated with different times.” In particular, Oreshkov has proposed time-delocalized systems as an explanation for realizations of the quantum switch in earthbound laboratories <cit.>, since these experiments supposedly took place in definite near-Minkowski spacetime and cannot be due to spacetime indefiniteness. § RESULTS §.§ Forward and backward causal inequalities We have found that the known causal inequalities Eqns. (<ref>, <ref>) remain valid in the time-symmetric formalism. We have also found that the time-symmetric process matrix formalism results in all of the ICOTD processes considered by Chiribella and Liu <cit.>. The ICOTD set of processes is known to be large enough to violate all causal inequalities. An example of a causal inequality violating-process was studied previously by Oreshkov, Costa, and Brukner <cit.> where two parties deal only with qubits. This bipartite process is given by the following process matrix: W^A_IA_OB_IB_O = 1/4[ 1^A_IA_OB_IB_O + 1/√(2)( σ_z^A_Oσ_z^B_I + σ_z^A_Iσ_x^B_Iσ_z^B_O)], where σ_z and σ_x are Pauli matrices. There are implicit identity operators whenever the action on a particular Hilbert space is not specified within a term. One can check that the process matrix Eqn. (<ref>) satisfies the time-symmetric constraints Eqns. 
(<ref>-<ref>) and is a positive semi-definite, Hermitian operator. This process matrix requires pre-selection, but not post-selection. This can be seen by comparing with the expansion in basis operators of Eqn. (<ref>). We do not write any dependence on the pre-selection variable u because there could be many ways to implement this particular process matrix with different way to depend on u. Consider Alice's operation to consist of a measurement in the z basis with outcome x followed by a preparation of a state in the z basis determined by the income a. Alice's operation in the CJ representation takes the form M^A_IA_O_a,x = 1/4[1+(-1)^xσ_z]^A_I⊗ [1+(-1)^aσ_z]^A_O. Consider Bob to operate according to two settings. If β=1, Bob measures in the z basis with outcome y and prepares the maximally mixed state. If β=0, Bob measures in the x basis with outcome y and prepares a state in the z basis determined by the income b, if y=0, or determined by b⊕ 1, if y=1. Overall, this is encoded in the CJ representation of Bob's operation M^B_IB_O_b,y[β] = 1/2β [1+(-1)^yσ_z]^B_I⊗1^B_O + 1/4(β⊕ 1)[1+(-1)^yσ_x]^B_I⊗ [1+(-1)^b+yσ_z]^B_O. The operations of Eqn. (<ref>) and Eqn. (<ref>) are completely positive and satisfy double causality. These were adopted from Ref. <cit.> with one minor modification. In Ref. <cit.>, Bob prepares an arbitrary state when β=1. To satisfy double causality, and not only forward causality, this state must be the maximally mixed state. This modification does not alter the resulting causal inequality violation. With these operations, Alice and Bob have a probability of success p_LGYNI = 2+√(2)/4 > 3/4 in the forward LGYNI game that exceeds the bound of 3/4 for causally separable processes. This is a violation of the forward LGYNI causal inequality Eqn. (<ref>), certifying the presence of causal non-separability. Meanwhile, Alice and Bob's probability of success in the backward LGYNI game is p̃_LGYNI = 1/2 < 3/4, which does not violate the backward LGYNI causal inequality Eqn. (<ref>). The time-symmetric process matrix formalism allows us to consider the time-reversal of Eqn. (<ref>), W̃^A_IA_OB_IB_O = 1/4[ 1^A_IA_OB_IB_O + 1/√(2)( σ_z^A_Oσ_z^B_I + σ_z^A_Iσ_x^A_Oσ_z^B_O)], and infer immediately that this too is a valid process matrix. To take the time-reversal of a process matrix, one must swap operators on the following Hilbert spaces: A_O ↔ B_I and A_I ↔ B_O. This reverses inputs and outputs and also reverses the order of the two parties Alice and Bob. The last term in the reversed process matrix W̃ is type A_IA_OB_O and is not allowed by the time-forward constraints of Ref. <cit.>, but is allowed here in the time-symmetric formulation as long as there is post-selection. Using the time-reversals of Eqns. (<ref>, <ref>) as Alice and Bob's operations, which are guaranteed to be valid operations satisfying the time-symmetric constraints, results in a probability of success p_LGYNI = 1/2 < 3/4 in the forward LGYNI game. This is not a violation of the forward LGYNI inequality Eqn. (<ref>). Therefore, it is not possible to certify the causal non-separability of W̃ with these operations using forward causal inequalities alone. However, Alice and Bob's probability of success in the backward LGYNI game is p̃_LGYNI = 2+√(2)/4 > 3/4 and consistutes a causal inequality violation. This example demonstrates that it is possible to violate a causal inequality and not its time-reversal, and that the backward causal inequalities Eqns. 
(<ref>, <ref>) make it possible to certify more processes as causally non-separable than was previously possible. It would be interesting in future research to determine whether there exists a process that can simultaneously violate forward and backward causal inequalities, and whether such a process offers any new computational advantages or physical insights. It would also be interesting to computationally generate an exhaustive list characterizing the facets of the causal polytope <cit.> determined by Eqns. (<ref>-<ref>). There may exist exotic causal inequalities beyond those presented here, perhaps some that cannot naturally be associated to a particular time direction but rather mix the forward and backward directions. §.§ The quantum time flip Chiribella and Liu characterized a class of time-symmetric operations called bidirectional devices in Ref. <cit.>. They demonstrated that it is possible in quantum theory to operate a bidirectional device in a coherent superposition of the forward time direction and backward time direction. Such a process is said to have indefinite time direction. It has been claimed that the notion of an indefinite time direction could not be captured by the process matrix formalism <cit.>. Here, we find that by manifestly incorporating time-symmetry into the process matrix framework it is indeed possible to describe processes with indefinite time direction. An example of a process with indefinite time direction is the quantum time flip, illustrated in Fig. <ref>. Alice performs an operation satisfying the time-symmetric conditions Eqn. (<ref>) and Eqn. (<ref>). Meanwhile, Bob has access to a control qubit whose value determines the time direction in which Alice's operation is performed. If the control qubit reads 0, Alice's operation acts in the forward time direction, and if the control qubit reads 1, Alice's operation acts in the backward time direction. This scenario is described by a process matrix W_QTF = [1^A_IA_O⊗|0⟩^B_I⟨0| + SWAP^A_IA_O⊗|1⟩^B_I⟨1|] ρ^A_IB_I[1^A_IA_O⊗|0⟩^B_I⟨0| + SWAP^A_IA_O⊗|1⟩^B_I⟨1|] satisfying the time-symmetric constraints Eqn. (<ref>) and Eqns. (<ref>-<ref>). The operator ρ is a density matrix (unit trace) that describes the composite system of the control qubit together with the quantum state Alice acts upon. Effectively, what W_QTF does if the control qubit reads 0 is send Alice's component of ρ into the input terminal of Alice's operation, and if the control qubit reads 1, it sends it into the output terminal of Alice's operation. This is always possible in the time-symmetric process matrix framework because operations are assumed to be valid in both time directions. The operator SWAP^A_IA_O is the usual unitary gate defined by SWAP^A_IA_O = ∑_i,j=1^d_A|i⟩^A_I⟨j|⊗|j⟩^A_O⟨i| in the computational bases of ℋ^A_I and ℋ^A_O. In the case of qubits (ℋ^A_I = ℋ^A_I = 2), this can be represented as a matrix SWAP^A_IA_O = [ 1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1 ]. There is some freedom here in how the computational bases of ℋ^A_I and ℋ^A_O are to be identified relative to one another. This freedom is hypothesized to be equivalent to the choice of the input-output inversion map Θ described by Chiribella and Liu in Ref. <cit.>. The process matrix W_QTF implements the quantum time flip as described in Ref. <cit.>, as can be seen by writing the resulting correlations p_QTF(a,b,x,y) = _A_IA_OB_IB_O[W_QTF(M^A_IA_O_a,x⊗ M^B_IB_O_b,y)] and expanding Alice and Bob's operations in Kraus operators. 
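A quick numerical look at the bracketed operator appearing in W_QTF is also possible. The sketch below only constructs the controlled input-output flip U = 1^A_IA_O⊗|0⟩⟨0| + SWAP^A_IA_O⊗|1⟩⟨1| and confirms that it is Hermitian and unitary (in fact an involution); conjugating a positive semi-definite operator by it therefore preserves positivity, which is what makes W_QTF ≥ 0 automatic. How ρ^A_IB_I is embedded into the larger space is left aside here.

```python
import numpy as np

I4   = np.eye(4)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)   # SWAP on A_I (x) A_O, as given above
P0   = np.diag([1.0, 0.0])                     # |0><0| on the control qubit B_I
P1   = np.diag([0.0, 1.0])                     # |1><1| on the control qubit B_I

# Controlled flip on A_I (x) A_O (x) B_I: identity if the control reads 0, SWAP if 1.
U = np.kron(I4, P0) + np.kron(SWAP, P1)

print(np.allclose(U, U.conj().T))      # Hermitian
print(np.allclose(U @ U, np.eye(8)))   # unitary and an involution
```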
The time-symmetric process matrix framework offers a unified framework for studying processes with indefinite causal structure and indefinite time direction, synthesizing the approaches of Ref. <cit.> and Ref. <cit.>. The time-symmetric constraints presented here characterize all processes incorporating indefinite causal structure and/or indefinite time direction consistent with time-symmetric quantum theory. §.§ Time-symmetric versus time-forward With definite causal structure, time-symmetric (TS) and time-forward (TF) quantum theory are equivalent. This can be seen by the existence of a one-to-one correspondence between operations in the two theories. The fact that every TS operation has a counterpart in TF theory is straightforward to prove: every TS operation satisfies forward causality automatically, so simply conditioning on the income variable results in a collection of valid TF operations, illustrated in Fig. <ref>. The value of the income in the TS operation can be interpreted as a setting in the resulting collection of TF operations. Therefore, a TS operation corresponds to a particular collection of TF operations which all together satisfy backwards causality. The other direction of the proof is slightly more involved. It can be shown that, given a TF operation M̅^A_IA_O_a, a TS operation can be constructed: M^A_IA_O_a,x = 1/N_aM̅^A_IA_O_x δ_a,1 + 1/d_A N_x (N_a-1)∑_a' > 11^A_I⊗(N_a1^A_O - _A_IM̅^A_IA_O_x )δ_a,a'. The TS operation is constructed so that pre-selecting on the income a=1 gives the TF operation, that is, 1/p(a=1) M^A_IA_O_1,x = N_a M^A_IA_O_1,x = M̅^A_IA_O_x. The TS operation M^A_IA_O_a,x constructed in this way is guaranteed to be positive so long as the number of incomes satisfies N_a≥ d_A N_x, and it is guaranteed to satisfy double causality so long as M̅^A_IA_O_x satisfies forward causality. Therefore, we are free to interpret any TF operation as a TS operation conditioned on the income a=1. Any circuit composed of TS operations is equivalent to a collection of circuits composed of TF operations obtained by pre-selecting on the possible combinations of income values. On the other hand, any circuit composed of TF operations is equivalent to a circuit composed of TS operations constructed by Eqn. (<ref>) all pre-selected on the income value a=1. Thus, the same probability distributions can be obtained in both theories, and TF and TS quantum theory with definite causal structure are equivalent. This equivalence fails to remain true when the assumption of definite causal structure is relaxed. As discussed in the text following Eqn. (<ref>), the TS process matrices form a larger subspace in the space of linear operators ℒ(ℋ^A_I⊗ℋ^A_O⊗ℋ^B_I⊗ℋ^B_O) than the TF process matrices. It is possible that one could construct an alternate formulation of TF quantum theory with indefinite causal structure that allows for post-selection in the process matrix. If only the process matrices averaged over the possible post-selection values are required to satisfy the usual TF process matrix constraints, it is possible an equivalence could be established between TS and TF theory. However, in their current forms, TS and TF quantum theory with indefinite causal structure are distinct. Given the same set of operations (according to the correspondence established in this section), there is a larger space of allowed probability distributions in TS theory than TF theory. To demonstrate this inequivalence concretely, consider the following simple scenario. 
To avoid developing the formalism for single-party process matrices, we present an example with two parties. However, the inequivalence between TS and TF process matrix theory manifests even with one party. Alice and Bob have inputs and outputs which are qubits, d_A=d_B = 2, and their operations are given as follows: M^A_IA_O_1,1 = 1/4(1^A_IA_O + cσ_z^A_O), M^A_IA_O_2,1 = 1/4(1^A_IA_O - cσ_z^A_O), M^B_IB_O_1,1 = 1/21^B_IB_O, where 0≤ c≤ 1 is a free parameter and σ_z is the Pauli z operator. Alice's TS operation corresponds to a TF operation M̅^A_IA_O_1 = 1/2(1^A_IA_O + cσ_z^A_O) by the construction of Eqn. (<ref>) with N_a=2. Bob's operation is both a valid TS and TF operation. The process matrix is W = 1/4(1^A_IA_OB_IB_O+wσ_z^A_O) where 0< w≤ 1 is a constant. W is a valid process matrix only in the TS or TB theories, and is only permitted if post-selection is present. Then we can calculate p(1,1,1,1) ≡[W (M^A_IA_O_1,1⊗ M^B_IB_O_1,1)] = 1/2(1+cw). Thus it is possible to have non-trivial dependence on the parameter c in TS theory, however, this is impossible in TF theory. This is due to the fact that the term resulting from cσ_z^A_O in Alice's operation is orthogonal to every possible term in a TF process matrix. Given this set of operations, which are valid in both theories, it is possible to demonstrate concretely the inequivalence between TS and TF process matrix theory in their current forms. § CONCLUSION In this paper, we have developed a formalism for causal inequalities and process matrices in the setting of time-symmetric operational theory. We use a modified type of operation which includes incomes as well as outcomes and satisfies a time-symmetric set of constraints. By studying processes with definite causal structure, we arrived at twice as many causal inequalities as those previously known for two parties. Each causal inequality from the time-forward setting (GYNI and LGYNI) has a time-reversed counterpart in our formalism, Eqns. (<ref>, <ref>). We demonstrated in Section <ref> that this larger set of causal inequalities offers new opportunities to certify the causal non-separability of certain processes which violate one of the backward inequalities. It remains to be shown whether these four causal inequalities form an exhaustive list for two parties, or whether there may exist additional inequalities. By requiring non-negative and normalized probabilities, while allowing for both pre-selection and post-selection, we derived the largest possible set of process matrices for two parties. This maximal set corresponds to the ICOTD processes of Chiribella and Liu <cit.>, which were shown to maximally violate every causal inequality. In our formalism, process matrices without post-selection satisfy an additional set of constraints to prevent certain types of backwards signalling, see Eqns. (<ref>-<ref>). The process matrices satisfying these constraints are precisely those of the TF setting <cit.>. This demonstrates that the primary distinction between process matrices in the TS and TF settings is the possibility for post-selection. With post-selection permitted, the set of process matrices in the TS formalism (or ICOTD processes) contain certain processes with indefinite time direction. One example of such a process is the quantum time flip, where the time direction of Alice's operation is determined by an ancillary control qubit, which may be prepared in a coherent superposition. The process matrix for the quantum time flip is written explicitly in Section <ref>. 
A question for future research is to determine whether processes with indefinite time direction violate any causal inequalities that cannot be violated by a process with definite time direction. It would be interesting to find a process (a process matrix together with a set of operations) that violates both the forward and backward versions of a causal inequality. If there exists a normalized probability distribution which violates both the forward and backward versions of an inequality, then by the results of Chiribella and Liu <cit.> we are guaranteed to be able to find an ICOTD process which produces this probability distribution. We expect that if such a process exists, it will involve indefinite time direction. This process will therefore exist in the full set of time-symmetric processes, but not the sub-class of time-forward processes (see Fig. <ref>). § ACKNOWLEDGMENTS We would like to thank Časlav Brukner, Giulio Chiribella, Djordje Minic, Ognyan Oreshkov, and Aldo Riello for helpful discussions. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. § FUNDAMENTALS OF QUANTUM MEASUREMENT THEORY This Appendix offers a brief introduction to quantum measurement theory for the unfamiliar reader. For a more comprehensive introduction, we refer the reader to the standard textbook on quantum information and quantum computation by Nielsen and Chuang <cit.>. In the main text, we will rely primarily on the notion of an operation in the Choi–Jamiołkowski (CJ) representation. Here, the more commonly known time-forward formulation is presented, while in the main text we discuss the adaptation to the time-symmetric setting. A quantum channel takes a quantum system, represented by a density matrix in the finite-dimensional case, and maps it to a new quantum system. Mathematically, this corresponds to a completely positive, trace-preserving (CPTP) map. If the input and output quantum systems are represented by density matrices ρ^I ∈ℒ(ℋ^I), ρ^O ∈ℒ(ℋ^O) acting on their respective Hilbert spaces, a quantum channel is a CPTP map 𝒞: ℒ(ℋ^I) →ℒ(ℋ^O), ρ^I ↦ρ^O, between the sets of Hermitian, linear operators on two Hilbert spaces. Complete positivity is the requirement that positive operators are mapped to positive operators, even in the presence of ancillary systems. This requirement is equivalent to the positivity of the induced map 1^k ⊗𝒞: ℒ(ℋ^A⊗ℋ^I) →ℒ(ℋ^A⊗ℋ^O) for all k, where ℋ^A is the Hilbert space of an auxiliary quantum system and dim ℋ^A = k is its dimension. Trace-preservation is the requirement that Tr_O[𝒞(ρ^I)] = 1 for all normalized density matrices ρ^I ∈ℋ^I. Complete positivity and trace-preservation guarantee that density matrices are mapped to density matrices under the action of the quantum channel. We may loosen the requirement of trace-preservation and consider maps which are trace non-increasing, that is, 0 ≤ Tr_O[𝒞(ρ^I)] ≤ 1 for all unit-normalized density matrices ρ^I ∈ℋ^I. Trace non-increasing channels offer a natural description of quantum measurements. We may label a collection of trace non-increasing channels with subscripts, 𝒞_x, where x is the outcome of a given measurement. Then the probability associated with the measurement outcome x is p(x) ≡ Tr_O[𝒞_x(ρ^I)]. These probabilities are guaranteed to be non-negative so long as ρ^I is a valid density matrix.
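As a toy illustration of the measurement formalism just described (not an example taken from the paper), the following numpy snippet realizes a two-outcome operation {𝒞_x} through Kraus operators K_x, checks that the marginalized channel is trace-preserving, and evaluates p(x) = Tr_O[𝒞_x(ρ^I)] for a sample density matrix; the probabilities come out non-negative and normalized, as claimed.

```python
import numpy as np

# Two-outcome operation on a qubit: measure in the z basis (outcome x) and
# re-prepare |0>.  Each C_x(rho) = K_x rho K_x^dag is CP and trace non-increasing.
K = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # K_0 = |0><0|
     np.array([[0.0, 1.0], [0.0, 0.0]])]   # K_1 = |0><1|

# The marginalized channel sum_x C_x is trace-preserving: sum_x K_x^dag K_x = 1.
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))

rho = np.array([[0.7, 0.2], [0.2, 0.3]])   # a valid (positive, unit-trace) state
p = [float(np.trace(k @ rho @ k.conj().T).real) for k in K]  # p(x) = Tr_O[C_x(rho)]
assert min(p) >= -1e-12 and np.isclose(sum(p), 1.0)
print(p)  # [0.7, 0.3]
```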
To guarantee normalized probabilities, that is, ∑_x p(x) = ∑_x _O[𝒞_x(ρ^I)] = 1, we require that the channel ∑_x𝒞_x, defined linearly, resulting from marginalization over the outcome x is a CPTP channel. We refer to the collection {𝒞_x}_x as an operation to indicate that it represents a set of generalized measurements, that is, trace non-increasing channels with associated outcomes. This is not to be confused with the more familiar projective measurements—operations with the additional requirement that the constituent channels act projectively on quantum states. We will see in Section <ref> of the main text that the definition of an operation is modified in the time-symmetric setting to include incomes in addition to outcomes. This is explained in detail in the main text. A useful representation of an operation {𝒞_x}_x is given by the Choi–Jamiołkowski (CJ) isomorphism. To construct the isomorphism, consider the (un-normalized) maximally entangled state |Φ^+⟩ = ∑_i=0^ (ℋ^I)-1|i⟩⊗|i⟩∈ℋ^I⊗ℋ^I. on two copies of the input Hilbert space ℋ^I where {|i⟩∈ℋ^I} forms an orthonormal basis. Then we can construct M^IO_x := 1^I⊗𝒞_x(|Φ^+⟩⟨Φ^+|) ∈ℒ(ℋ^I⊗ℋ^O). This is the CJ representation of the operation {𝒞_x}_x. The complete positivity of 𝒞_x is equivalent to the positive semi-definiteness of M^IO_x, and trace-preservation becomes the condition ∑_x _O[M^IO_x] = 1^I. We call this condition forward causality. Again, in the time-symmetric scenario the definition of an operation is modified, and we are instead left with two constraints which we call collectively double causality, see Eqn. (<ref>). The CJ representation of time-symmetric operations is the main ingredient in our discussion of quantum theory throughout this paper. § DERIVATION OF TIME-SYMMETRIC PROCESS MATRIX CONSTRAINTS The requirement of normalized probabilities as in Eqn. (<ref>) is satisfied iff the process matrix W_u,v^A_IA_OB_IB_O satisfies the following four constraints: ∑_u,v_A_IA_OB_IB_O[W_u,v] = d_Ad_B, ∑_u,v_B_IB_O[1-A_I][1-A_O]W_u,v =0, ∑_u,v_A_IA_O[1-B_I][1-B_O]W_u,v =0, ∑_u,v_[1-A_I][1-A_O][1-B_I][1-B_O]W_u,v =0. In this proof, we follow the technique of Araújo et al. in Appendix B of Ref. <cit.> to derive the necessary and sufficient conditions for a valid process matrix. First, we prove that the conditions stated in the theorem are necessary. Take two Hermitian operators x∈ℒ(ℋ^A_I⊗ℋ^A_O), y∈ℒ(ℋ^B_I⊗ℋ^B_O) acting on the Hilbert spaces of Alice and Bob, respectively. Then we can construct operators 𝒳^A_IA_O = _[1-A_I][1-A_O]x + 1^A_IA_O/d_A, 𝒴^B_IB_O = _[1-B_I][1-B_O]y + 1^B_IB_O/d_B that satisfy the normalization constraints Eqn. (<ref>) for any choice of x and y. Requiring normalized probabilities when Alice and Bob use 𝒳 and 𝒴 as their quantum operations amounts to requiring ∑_u,v_A_IA_OB_IB_O[W_u,v^A_IA_OB_IB_O (_[1-A_I][1-A_O]x + 1^A_IA_O/d_A) ⊗(_[1-B_I][1-B_O]y + 1^B_IB_O/d_B) ] = 1 for all Hermitian x and y. Then we can consider four possibilities: (x=y=0) In this case, the constraint reduces to ∑_u,v_A_IA_OB_IB_O[W_u,v] = d_Ad_B. (x≠ 0, y=0) The term from the x=y=0 remains and equals one, while the additional term for x≠ 0 must equal zero to preserve the normalization. The normalization constraint then reduces to ∑_u,v_A_IA_OB_IB_O[_[1-A_I][1-A_O]x· W_u,v^A_IA_OB_IB_O]=0. Now we make use of the following fact. The pre-subscript _XM acting on a Hermitian operator M∈ℒ(ℋ_X⊗ℋ_Y) is self-adjoint in the Hilbert-Schmidt inner product, that is, _XY[_XM_1^XY· M_2^XY] = _XY[M_1^XY·_XM_2^XY]. 
This can be shown simply, _XY[(1_X⊗_X[M_1^XY])· M_2^XY] = _Y[_X[M_1^XY]·_X[M_2^XY]] = _XY[M_1^XY· (1_X⊗_X[M_2^XY])]. Following the same logic, the operation _[1-X]M acting on M is self-adjoint in the Hilbert-Schmidt inner product. This requires that ∑_u,v_A_IA_OB_IB_O[x·_[1-A_I][1-A_O]W_u,v^A_IA_OB_IB_O]=0 for all Hermitian x. The only Hermitian operator that is orthogonal to all other Hermitian operators is the zero operator. Therefore, ∑_u,v_B_IB_O[_[1-A_I][1-A_O]W_u,v]=0. This is equivalent to the second constraint in the theorem. (x=0, y≠ 0) Following the same steps but swapping Alice and Bob's roles gives the third constraint: ∑_u,v_A_IA_O[_[1-B_I][1-B_O]W_u,v]=0. (x,y ≠ 0) In this case the normalization constraint reduces to ∑_u,v_A_IA_OB_IB_O[(_[1-A_I][1-A_O]x⊗_[1-B_I][1-B_O]y)· W_u,v^A_IA_OB_IB_O]=0. We can use the fact that the action of _[1-X] is self-adjoint to have each of these act on W ∑_u,v_A_IA_OB_IB_O[(x⊗ y)·_[1-A_I][1-A_O]_[1-B_I][1-B_O]W_u,v^A_IA_OB_IB_O]=0. Since this is true for all Hermitian x and y, we find that ∑_u,v_[1-A_I][1-A_O]_[1-B_I][1-B_O]W_u,v^A_IA_OB_IB_O=0. In conclusion, we have shown that the four constraints in the theorem are a necessary consequence of the four cases considered here. Finally, we show that the four conditions in the theorem are sufficient for normalized probabilities. We rewrite Alice's averaged operation in the form M^A_IA_O = _[1-A_I][1-A_O]M^A_IA_O + _A_IM^A_IA_O + _A_OM^A_IA_O - _A_IA_OM^A_IA_O by trivially adding and subtracting the partial traces of M^A_IA_O. Alice's operation is required to satisfy normalization constraints as in Eqn. (<ref>). We can use this to write M^A_IA_O = _[1-A_I][1-A_O]M^A_IA_O + 1^A_IA_O/d_A. We can write Bob's operation in the analogous way. Inserting these expressions into Eqn. (<ref>) gives ∑_u,v_A_IA_OB_IB_O[W_u,v^A_IA_OB_IB_O (_[1-A_I][1-A_O]M^A_IA_O + 1^A_IA_O/d_A) ⊗(_[1-B_I][1-B_O]M^B_IB_O + 1^B_IB_O/d_B)] = 1. Expanding the tensor product results in four terms. The term ∑_u,v_A_IA_OB_IB_O[1^A_IA_O/d_A⊗1^B_IB_O/d_B· W_u,v^A_IA_OB_IB_O] is equal to one if the first constraint from the theorem is met by the process matrix W_u,v. The remaining three terms vanish due to the remaining three constraints in the theorem. Therefore, the four conditions are sufficient to guarantee normalized probabilities. § PROCESS MATRICES WITHOUT PRE-SELECTION AND POST-SELECTION One of the main lessons from our study of processes with definite causal order in Section <ref> is that signalling forward in time is not possible without pre-selection, and signalling backward in time is not possible without post-selection. This guiding principle imposes further constraints on process matrices that do not feature pre-selection or post-selection. Post-selection can be accomplished either by conditioning on a value for the post-selection variable v in the process matrix, or by conditioning on Alice or Bob's outcome x or y. See Fig. <ref> for the definitions of these variables. Consider first the case when v and y are marginalized. For causally separable processes, it is clear that Bob cannot signal backward in time to Alice. In diagrammatic notation, this fact is demonstrated in Fig. <ref>. By applying the double causality rules, it can be seen that Alice's output is always ignored in either definite causal order, A≼ B or B≼ A. We assert that even in a causally non-separable process, Bob cannot signal backward in time to Alice with this marginalization, so Alice's outcome must be ignored. 
In quantum theory, this can be written ∑_v,y _A_IA_OB_IB_O[W^A_IA_OB_IB_O_u,v(M_a,x^A_IA_O⊗ M_b,y^B_IB_O)] = ∑_v,y_A_IA_OB_IB_O[_A_OW^A_IA_OB_IB_O_u,v(M_a,x^A_IA_O⊗ M_b,y^B_IB_O)]. This must hold for all operations M_a,x^A_IA_O and M_b,y^B_IB_O. Therefore we find that _A_IA_OB_IB_O[∑_v_[1-A_O]W^A_IA_OB_IB_O_u,v(M_a,x^A_IA_O⊗∑_y M_b,y^B_IB_O)] = 0. This is a statement of the orthogonality between ∑_v_[1-A_O]W^A_IA_OB_IB_O_u,v and the subspace generated by operators of the form M_a,x^A_IA_O⊗∑_y M_b,y^B_IB_O under the Hilbert-Schmidt inner product. Keeping in mind that Alice and Bob's operations satisfy the double causality rules Eqn. (<ref>), this results in the constraints Eqns. (<ref>, <ref>). Marginalizing over v and x instead and repeating this analysis results in the remaining constraint Eqn. (<ref>). A similar argument can be made for processes without pre-selection. Consider the case when u and a are marginalized. Then Alice cannot signal forward in time to Bob, and Bob's input must be ignored. This translates into the orthogonality equation: _A_IA_OB_IB_O[∑_u_[1-B_I]W^A_IA_OB_IB_O_u,v(∑_aM_a,x^A_IA_O⊗ M_b,y^B_IB_O)] = 0 for all operations M_a,x^A_IA_O and M_b,y^B_IB_O. Then a process matrix without pre-selection must satisfy Eqns. (<ref>, <ref>). Repeating the argument by marginalizing u and b results in the final constraint Eqn. (<ref>). unsrt
http://arxiv.org/abs/2406.17669v1
20240625155819
Capacity-Achieving Gray Codes
[ "Venkatesan Guruswami", "Hsin-Po Wang" ]
cs.IT
[ "cs.IT", "cs.DS", "math.IT" ]
Capacity-Achieving Gray Codes Venkatesan Guruswami and Hsin-Po Wang Received September 25, 2023; accepted June 06, 2024 ======================================================= § ABSTRACT To ensure differential privacy, one can reveal an integer fuzzily in two ways: (a) add some Laplace noise to the integer, or (b) encode the integer as a binary string and add iid BSC noise. The former is simple and natural while the latter is flexible and affordable, especially when one wants to reveal a sparse vector of integers. In this paper, we propose an implementation of (b) that achieves the capacity of the BSC with positive error exponents. Our implementation adds error-correcting functionality to Gray codes by mimicking how software updates back up the files that are getting updated (“coded Gray code”). In contrast, the old implementation of (b) interpolates between codewords of a black-box error-correcting code (“Grayed code”). [0] Research supported in part by NSF grant CCF-2210823 and a Simons Investigator Award. We gratefully acknowledge the hospitality of the Simons Institute for a dedicated semester on error-correcting codes where this work was carried out. We extend our deepest gratitude and respect to the late Jim Simons (1938–2024) for his commitment to investing in mathematics and science. [0] Emails: {venkatg, simple} @berkeley.edu. § INTRODUCTION Differential privacy is the art of publishing collective facts without leaking any detail of any user. A mathematically rigorous way to do so is adding noise to an aggregation function that is Lipschitz continuous (sometimes of bounded variation) in every argument. More concretely, suppose that we are interested in a feature φ: {0, 1}^n → [m] that satisfies |φ(u) - φ(u')| < 1, for u = (u_1, …, u_i, …, u_n) and u' = (u_1, …, 1-u_i, …, u_n), i.e., changing the data of the ith user does not change the feature too much. Then publishing φ(u) + L, where L follows the Laplace distribution with decay rate ε, is ε-differentially private <cit.>. That is, Pr{φ(u) + L < t }≤exp(ε) Pr{φ(u') + L < t} for any number t ∈ℝ, meaning that a data broker will have a hard time telling if u_i is 0 or 1. Publishing φ(u) + L is called the Laplace mechanism <cit.>. It is optimal privacy-wise as (<ref>) holds with equality half of the time. But it turns out to be randomness-costly and space-inefficient when we have many features φ_1, …, φ_ℓ to publish, wherein only k ≪ℓ of them are non-zero[ For example, φ_i(u) could be the number of times the ith English word was mentioned in a forum archive u. Most word counts are going to be zero.] for a given u. In this case, the Laplace mechanism will add noise to all φ_i(u) and then publish all ℓ of them. For one, this means that we are forced to sample the Laplace distribution ℓ times. Even if we can afford that, the output will be Ω(ℓlog m) in size (m is an upper bound on the φ's) while the raw data is only O(k log(ℓ) log(m)). A brilliant idea of Lolck and Pagh <cit.>, which is a generalization of an earlier work by Aumüller, Lebeda, and Pagh <cit.>, reduces the space requirement as well as the sampling cost. The idea is that, instead of working on the ordered field ℝ, we encode each φ_i(u) as a binary string (φ_i(u)) ∈{0, 1}^1× n and put the bits of (φ_i(u)) at n random places on a tape of length Θ(kn). This is illustrated in Figure <ref>. Note that (φ_i_1(u)) and (φ_i_2(u)) might end up choosing the same random places. Such a collision is resolved, fairly, by putting a random bit there. We also put a random bit at every empty place.
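A toy sketch may help fix ideas. The snippet below is entirely illustrative: the function names and parameter choices are ours, the positions are drawn freshly at random rather than derived from the feature index, and no decoder is attempted. It contrasts the plain Laplace mechanism with the random-tape idea described above: each nonzero feature value is written as an n-bit string at n random tape positions, and collisions as well as empty positions receive fair random bits.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, eps):
    # Classic Laplace mechanism for a sensitivity-1 feature: value + Lap(1/eps).
    return value + rng.laplace(scale=1.0 / eps)

def scatter_onto_tape(features, n_bits, tape_len):
    """Toy sketch of the random-tape idea: each nonzero feature value is written
    as an n_bits binary string whose bits are placed at n_bits random positions
    on a tape of length Theta(k * n_bits); colliding and untouched positions are
    filled with fair random bits."""
    claims = {}                                  # position -> list of claimed bits
    for value in features:
        bits = [(value >> j) & 1 for j in range(n_bits)]
        positions = rng.choice(tape_len, size=n_bits, replace=False)
        for pos, bit in zip(positions, bits):
            claims.setdefault(int(pos), []).append(bit)
    tape = np.empty(tape_len, dtype=np.uint8)
    for pos in range(tape_len):
        claimed = claims.get(pos, [])
        tape[pos] = claimed[0] if len(claimed) == 1 else rng.integers(0, 2)
    return tape

print(laplace_mechanism(42, eps=0.5))
print(scatter_onto_tape([5, 17, 3], n_bits=8, tape_len=8 * 8 * 3))
```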
These random bits will play the role of the Laplace noise—protecting privacy by making precise decoding impossible. One problem remains: To what extent can we translate the binary tape back to real numbers? This motivates the definition of robust Gray codes.
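For orientation (this is standard background, not the paper's construction), the classical binary reflected Gray code already has the property that nearby integers receive encodings differing in a single bit; the robust, coded Gray codes studied here add error-correcting functionality on top, so that the noisy tape can still be decoded to a nearby value. A minimal implementation of the classical code, with illustrative function names:

```python
def gray(i: int) -> int:
    # Binary reflected Gray code: consecutive integers map to codewords that
    # differ in exactly one bit.
    return i ^ (i >> 1)

def gray_inverse(g: int) -> int:
    # Invert by accumulating the prefix XOR of the codeword's bits.
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

# Unit-distance property and correctness of the inverse on 8-bit values.
assert all(bin(gray(i) ^ gray(i + 1)).count("1") == 1 for i in range(255))
assert all(gray_inverse(gray(i)) == i for i in range(256))
```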
http://arxiv.org/abs/2406.19055v1
20240627100320
SimpleFusion: A Simple Fusion Framework for Infrared and Visible Images
[ "Ming Chen", "Yuxuan Cheng", "Xinwei He", "Xinyue Wang", "Yan Aze", "Jinhai Xiang" ]
cs.CV
[ "cs.CV" ]
SimpleFusion Ming Chen et al. Huazhong Agricultural University Huazhong University of Science and Technology {mchen, hxwxss}@webmail.hzau.edu.cn, xwhe@mail.hzau.edu.cn † Equal contribution. ^()Corresponding author. SimpleFusion: A Simple Fusion Framework for Infrared and Visible Images Ming Chen1^† Yuxuan Cheng1^† Xinwei He1^() Xinyue Wang1 Yan Aze2 Jinhai Xiang1 July 1, 2024 ===================================================================================== § ABSTRACT Integrating visible and infrared images into one high-quality image, also known as visible and infrared image fusion, is a challenging yet critical problem for many downstream vision tasks. Most existing works utilize pretrained deep neural networks or design sophisticated frameworks with strong priors for this task, which may be unsuitable or lack flexibility. This paper presents SimpleFusion, a simple yet effective framework for visible and infrared image fusion. Our framework follows the decompose-and-fusion paradigm, where the visible and the infrared images are decomposed into reflectance and illumination components via Retinex theory, followed by the fusion of the corresponding components. The whole framework is designed with two plain convolutional neural networks without downsampling, which can perform image decomposition and fusion efficiently. Moreover, we introduce a decomposition loss and a detail-to-semantic loss to preserve the complementary information between the two modalities for fusion. We conduct extensive experiments on challenging benchmarks, verifying the superiority of our method over previous state-of-the-art methods. Code is available at https://github.com/hxwxss/SimpleFusion-A-Simple-Fusion-Framework-for-Infrared-and-Visible-Images § INTRODUCTION Image fusion aims to automatically combine images of distinct but complementary sensors into a high-quality image, which can greatly facilitate extensive downstream applications, such as remote sensing <cit.>, medical imaging <cit.> and video surveillance <cit.>. The commonly fused image types include but are not limited to visible, infrared, computed tomography (CT), and magnetic resonance imaging (MRI). Among them, infrared and visible image fusion (IVIF) is a particularly active research direction, owing to the ubiquity of the sensors involved (i.e., infrared and RGB sensors) and their highly complementary properties. Visible images are better at capturing rich appearance information at high spatial resolution, yet they are vulnerable to illumination variation or disguise. In contrast, infrared images can naturally complement them by capturing the thermal radiation of the scene. Fusing the two modalities therefore enables more robust and accurate perception. In general, IVIF can be formulated as a decompose-and-fusion problem. The decomposition step typically decomposes the source images into several components according to signal processing techniques such as multi-scale transform <cit.>, sparse representation <cit.>, and subspace theory <cit.>. The second step, fusion, aims to integrate and enhance the corresponding components of the source images to derive a high-quality target image. In the past few years, deep learning-based image fusion methods have emerged as a dominant direction in this field. They typically work by utilizing deep neural networks to decompose features from the source images and then learning to fuse them into high-quality target images.
Naturally, designing an appropriate framework is essential. Most works utilize pre-trained convolutional neural networks such as VGG19 and ResNet50 for this task. However, the deep features may dilute the details and may not be a good fit for the low-level fusion task. For low-level tasks, preserving low-level features such as edges, illuminations, and contours is of paramount importance. Another important research line is to design an auto-encoder architecture for fusion. However, it often involves a handcrafted fusion strategy for better performance. Recently, LRRNet <cit.> has developed a sophisticated fusion network guided by low-rank representation. Despite outstanding performance, such an intricate architecture needs to be designed with special care and thus lacks flexibility. In this work, inspired by Retinex theory, we introduce a simple yet effective framework named SimpleFusion for the infrared and visible image fusion task. By design, it only consists of two plain two-streamed convolutional neural networks (CNN). One two-streamed CNN decomposes the visible image I into reflectance R and illumination L following I=R ∘ L, where ∘ indicates the elementwise product. While the other two-steamed CNN mines corresponding enhancement components from the infrared image to enhance R and L respectively. The whole framework does not perform feature downsampling and is trained end-to-end, which supports image decomposition and fusion efficiently. Our framework has the following merits. First, it intrinsically improves the robustness of image fusion under different lighting scenarios with the Retinex theory. Second, it does not perform a downsampling process, thereby fusing the final results to derive the enhanced images is rather natural and flexible, inducing not extra effort for fusion. Moreover, image fusion is a low-level task and keeps the resolution along the convolution layers, reducing low-level detail information loss. Lastly, it is simple yet effective. Compared with LRRNet, it simply utilized plain CNN, which is designed with fewer priors on the architecture design. Without bells and whistles, SimpleFusion outperforms existing state-of-the-art methods by a large margin. For instance, on the challenging TNO <cit.> dataset, SimpleFusion achieves 6.9045, 89.4448, 13.8089 and 0.10570 on Entropy, Standard Deviation, Mutual Information, and Nabf (the modified fusion artifacts measure), respectively, which are superior to the second-best method LRRNet by a large margin. To summarize, the contributions of this work are as follows: * We follow Retinex theory and propose to perform visible and infrared image fusions by decomposing the visible images and then learning to mine components in the infrared images to enhance each component, and such a design naturally endows our methods to deal with low-light scenarios. * We present a simple yet effective framework named SimpleFusion, which only adopts plain convolutional neural networks for decomposition and fusion while having fewer priors on the architecture compared with existing works. * Extensive experiments are conducted on several image fusion benchmarks, demonstrating that SimpleFusion outperforms existing methods by a large margin. § RELATED WORK Traditional methods. Traditional image fusion methods mainly include weighted average-based fusion <cit.>,transform-domain fusion <cit.>, feature-based fusion<cit.> and image pyramid-based fusion <cit.>. 
These traditional image fusion methods have their own advantages in various application scenarios and requirements, but they generally suffer from insufficient robustness and are not suitable for complex scenes <cit.>. Deep Learning-based methods. With the development of deep learning technology, an increasing number of neural network-based image fusion methods <cit.> have begun to receive attention and have achieved excellent results. For instance, FusionGAN <cit.> uses an adversarial framework involving a generator and a discriminator to tackle fusion tasks. Despite impressive fusion results, significant detail loss remains in the outputs. To address this, the authors developed FusionGANv2 <cit.>, an improved version aimed at enhancing detail preservation. Nonetheless, it encounters challenges with generalization performance. The U2fusion <cit.> network is designed for multiple fusion tasks. Using the Elastic Weight Consolidation (EWC) algorithm and sequential training strategy, it allows a single model to adapt effectively to various fusion tasks without weight decay. However, the architectural design of the fusion network was not addressed.Architectures based on transformers have also been applied to image fusion tasks. For example, fusion methods like SwinFusion <cit.> and the YDTR <cit.>. However, the design of these network architectures still requires substantial experimental exploration to discover an excellent fusion network structure.To address this issues, novel approaches have emerged based on the strategy of combining representation models with deep learning, such as CUNet <cit.> and LRRNet <cit.>. The network architecture of CUNet <cit.> is guided by several optimization problems and multi-modal convolutional sparse coding (MCSC). LRRNet <cit.> is a representation learning guided two-stage fusion network. Its learnable representation model used for source image decomposition exhibits strong interpretability, making image fusion tasks no longer a black art. Low-light enhancement. In 1986, EDWIN H. LAND introduced the retinex theory into the field of image processing, proposing the concept of retinex computation <cit.>. Until 2004, Zia-ur Rahman and others developed this concept into a comprehensive automated image enhancement algorithm known as Multi-Scale Retinex with Color Restoration (MSRCR) <cit.>. In recent years, the Retinex theory has seen significant development in the field of image enhancement, such as RetinexNet <cit.> and PairLIE <cit.>. RetinexNet model learns solely through key constraints, including consistent reflectance shared between low-light and normal-light image pairs and smoothness of illumination. Building on this, subsequent brightness enhancement of the illumination is achieved by an enhancement network called Enhance-Net, which also performs joint denoising of reflectance, thus accomplishing image enhancement. PairLIE <cit.> not only simplifies the network structure and reduces handcrafted priors but also achieves performance comparable to state-of-the-art methods. These low-light image enhancement methods based on the Retinex theory and decomposition ideas have provided us with great inspiration. § METHOD §.§ Problem Formulation and Challenges Given a visible image I_v∈ℛ^H× W × C and an infrared image I_r∈ℛ^H× W × 1, the objective is to learn a fusion network f(·) which integrates the two sources into a high-quality image I_q∈ℝ^H × W × C that simultaneously preserves the thermal radiation and rich appearance information, i.e., I_q = f(I_v, I_r). 
Here H, W, and C represent the width, height, and the number of channels for the images. There are several obstacles to designing an effective fusion framework. (1) The modality gap between visible and infrared images is huge. Visible images, which are typically composed of three RGB channels, carry rich textural and color information for the scene. However, infrared images have only one-channel robust yet low-contrast thermal radiation about the environment. Therefore, the high incompatibility of the two modalities makes it hard to reconcile them to produce a high-quality output. (2) It is difficult to keep the modality-specific information during fusion. Visible and Infrared images have their distinct patterns, these modality-specific properties help describe the same regions of the environment from different perspectives. However, they can be easily lost by disturbance from the other modality during the fusion process. (3) Visible images are sensitive to lighting conditions, and it is hard to determine appropriate complementary cues from the infrared modalities for enhancement both efficiently and effectively. §.§ SimpleFusion Overview. SimpleFusion follows the decompose-and-fusion paradigm. As shown in Fig. <ref>, SimpleFusion is a two-stream framework with one stream decomposing the visible images and the other for infrared images. Each stream is just the plain convolutional neural network without a resolution reduction layer, thereby the fused image can be naturally derived by directly combining the outputs of the two streams and removing the need to design a specialized decoder. The decomposing formulation follows the Retinex theory for the visible image, which has been widely adopted in low-light enhancement fields. Given an input visible image I_v, it aims to decompose it into illumination component L and reflectance R: I_v = L ∘ R, where ∘ represents the element-wise product. In our work, we utilize two encoders, denoted by Φ_Ill(·) and Φ_Ref(·) to ensure the decompositions under the following constraints: argmin_L, R ||L∘ R - I_v|| + λ_L ℒ_sm(L) + λ_R ℒ_sm(R) where L=Φ_Ill(I_v), R=Φ_Ref(I_v) are the estimated illumination and reflectance, and ℒ_sm denotes regularizer which is enforced on the estimated illumination and reflectance, respectively. For the corresponding infrared image, we also decompose it into two components, with one enhancing the illumination components while the other enhancing the reflectivity component for the visible image images. The decomposition form is similar to the Retinex decomposition for visible images, except that we treat the visible image as the main modality and the infrared decomposition results are as the supplement. We simply instantiate another two-stream encoder of the same structure to achieve such a decomposition. After decomposition, we can simply derive high-quality images by combining the decomposition results. Decomposition network. Image fusion itself is a low-level task with weak semantic reliance. Therefore, how to maintain the low-level modality-specific details is essential. Previous architecture typically downsamples the images into low-resolution feature maps and then makes a great effort to recover the details by upsampling. In this paper, we design our decomposition framework by keeping the resolutions along the layers, which greatly facilitates the following fusion process and keep the important local modality-specific information, giving us satisfactory performance. 
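To make the overview concrete, here is a minimal PyTorch-style sketch of the two-stream, no-downsampling design and of the elementwise fusion rule given below (I_fusion = (L_vi + L_ir) · (R_vi + R_ir)). It is our own reading, not the released code: the hidden channel width is a guess, the output channel counts are chosen so that each component can be added elementwise to its visible counterpart, and the training losses are omitted; layer counts and activations follow the description in the next paragraph.

```python
import torch
import torch.nn as nn

class PlainStream(nn.Module):
    """One plain stream: five 3x3 convolutions with no downsampling, ReLU after
    the first four layers and a sigmoid at the end (hidden width is illustrative)."""
    def __init__(self, in_ch, out_ch, width=64):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(3):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, out_ch, 3, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class SimpleFusionSketch(nn.Module):
    """Hedged sketch of the decompose-and-fuse pipeline: Retinex-style decomposition
    of the visible image (illumination L, reflectance R) plus the corresponding
    enhancement components from the infrared image, fused elementwise."""
    def __init__(self):
        super().__init__()
        self.ill_vis = PlainStream(3, 1)   # L_vi: one-channel illumination
        self.ref_vis = PlainStream(3, 3)   # R_vi: three-channel reflectance
        self.ill_ir  = PlainStream(1, 1)   # illumination enhancement from infrared
        self.ref_ir  = PlainStream(1, 3)   # reflectance enhancement from infrared

    def forward(self, visible, infrared):
        L_vi, R_vi = self.ill_vis(visible), self.ref_vis(visible)
        L_ir, R_ir = self.ill_ir(infrared), self.ref_ir(infrared)
        return (L_vi + L_ir) * (R_vi + R_ir)   # I_fusion = (L_vi+L_ir)·(R_vi+R_ir)

vis, ir = torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128)
print(SimpleFusionSketch()(vis, ir).shape)   # torch.Size([1, 3, 128, 128])
```

Because no stream downsamples, the decomposed maps already share the input resolution, so the fusion step is literally a sum and an elementwise product.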
More specifically, the decomposition network is a two-stream architecture for visible images, where one stream is to estimate illumination components (denoted as Ill-Net), and the other stream (denoted as Ref-Net). Each stream is implemented with the same convolutional neural network structure consisting of 5 3 × 3 convolutional layers. We utilize ReLU layers as the first four layers. For the last layer, the sigmoid function is leveraged to normalize the outputs to [0, 1]. Following Retinex theory, the Ill-Net output is a one-channel illumination map L ∈ℝ^H × W × 1, and the Ref-Net is a 3-channel output R ∈ℝ^H × W × 3. Infrared images contain complementary clues to supplement the visible images to highlight the salient targets. To achieve this goal, we evaluate the contributions of the infrared images on each component of the visible images. We utilize an architecture of the same two-stream structure to estimate for enhancement of the illumination and reflectivity, respectively, with one stream producing I_i∈ℝ^H × W × 3 and the other stream producing R_i∈ℝ^H × W × 1. Fusion layer. After decomposing the visible images and estimating the contributions of infrared images for enhancement, we then fuse them into high-quality images. Note that the resolutions are kept during the convolution process, therefore fusing process is rather simple, which is formulated as follows: I_fusion = (L_vi + L_ir) · (R_vi + R_ir) SimpleFusion can be seen as a decoder-free network, and eliminate the needs to restore high resolutions from the low-resolution maps, which may dilute the details during the downsampling process. Without the downsampling layers, it can best preserve low-level visual information while also facilitate fusion with minimal effort. SimpleFusion is trained to learn to decompose the visible and infrared images and then fuse them into one desired image with improved background details and highlighted targets. To this end, it is important to ensure consistency for the decomposition to ensure data fidelity, and at the same time regularize each decomposed component for smoothness. We simply follow PairLIE<cit.> and leverage the decomposition loss. Besides, we also follow LRRNet <cit.> and adopt the detail-to-semantic information loss, which can better preserve the complementary information from source images. These details are elaborated in the following sections. §.§ Decomposition Loss Following PairLIE <cit.>, the decomposition loss includes the Projection term, the Reflectance consistency one, and Retinex one. We describe them below. Projection loss. Retinex decomposition does not consider disturbance components like noise in the image. Therefore, it is beneficial to remove these useless parts in the image before performing decomposition. We simply utilize projection loss, which discards these noise features by projecting the image into another clean one, which is formulated as: L_P = I_ vi - i_ vi_22 where i_ vi refers to the projected image for input image I_ vi. It helps to transform the raw image into a clean one for decomposition. Reflectance consistency loss. Reflectance maps that are extracted from the visible images indicate the inherent and invariant physical properties of the objects. We enhance it by incorporating the related components extracted from the infrared images. To this end, it is expected to ensure their matching quality for a better fusion. 
We further apply consistency loss L_C to improve the matching quality, which is formulated as follows: L_C = R_ vi - R_ ir_22 where R_v and R_i represent the reflectance maps and the related enhancing components from visible images and infrared images, respectively. Retinex loss. Retinex loss is adopted to ensure the Retinex decomposition. Specifically, this loss consists of four terms: the reconstruction loss to ensure data fidelity after reconstruction, two consistency terms for reflectance and illumination, and one smooth term for the initial illumination. Mathematically, it is defined as follows: L_R = L ∘ R - i _22 + R -i/stopgrad(L) _22 + L - L_0_22 + ▿ L _1 where i refers to the projected image, L_0 denotes the initial illumination estimation, ▿ denotes gradients along vertical and horizontal directions. According to the above equation, L ∘ R - i _22 is the reconstruction term ensuring minor information loss. R -i/stopgrad(L) _22 adds consistency over the estimated reflectances based on the illuminations. Here we detach the gradients from the illuminations for training stability. L_0 is computed by taking maximum value along the channel dimensions ((i.e.), R,G and B): L_0 = max_c ∈{R, G, B}I^c (x) §.§.§ Final Decomposition loss. The final decomposition loss function for training our model is given as: L_Decomp = ω_ 0·L_P + ω_ 1·L_C + ω_ 2·L_R where ω_ 0 , ω_ 1,ω_ 2 denote the weights. Based on previous works <cit.>, ω_ 0 , ω_ 1,ω_ 2 are set to 500, 1, 1 respectively. §.§ Detail-to-Semantic Loss We follow LRRNet <cit.> and utilize the detail-to-semantic information loss function, which is superior at preserving the complementarity of the visible and infrared images for the fusion process. The loss function is computed by exploiting representations from VGG-16 <cit.> pretrained on ImageNet <cit.>. Pixel-level loss. Compared with the infrared image, the visible image reflects more visual local details. Therefore, we utilize pixel-level loss L_pixel to enforce the fused image to have similar visual information as the visible image. Mathematically, it is formulated as follows: L_pixel = ||I_fusion - I_vi||^2_F where ||·||_F represents Frobenius norm operation. Shallow-level loss. According to the first convolutional block outputs, we define the shallow-level loss L_shallow, expecting the shallow visual representations of fused images close to that of visible images. The loss is given by: L_shallow =Φ(I_fusion) 1 - Φ(I_vi) 1_F2 where Φ(·)1 represents the first conv-block outputs from the pretarined VGG-16. Middle-level loss. Middle-level loss is calculated based on the features from the second and third convolutional blocks. The mid-level features generally reflect perceptual features such as textual and shape information in the images, which are exhibited in both visible and infrared images. Mathematically, it is defined as: L_middle =∑_k=2^3 β ^ k Φ(I_fusion) ^ k - [w_iΦ(I_ir) ^ k + w_vΦ(I_vi) ^ k ]_F2 where β^k is the balanced weights for the k-the conv-block, w_v and w_i are the balanced weights for visible and infrared images, respectively. In practice, w_v is set to a smaller value than w_i since the visual image is the main modality that contains more visual information. We set w_v to 0.5 in our framework. Deep-level loss. We use infrared images to guide the fused images to maintain semantic information. Gram Matrix is applied to both infrared and the fused images to extract such information. 
The loss function L_deep is defined as follows: L_deep = Gram(Φ(I_fusion) ^ 4) - Gram(Φ(I_ir) ^ 4)_F2 The final detail-to-semantic loss is constructed as follows: L_D2S = γ_1·L_pixel + γ_2·L_shallow + L_middle + γ_4·L_deep where γ_1, γ_2, γ_4 are the balanced weights. Note that for the low-level image fusion task, the local details are more important and should be set to a larger weight. Therefore, we set γ_1 to 10 to preserve more local details. §.§ Overall Loss Function We combine the decomposition and the detail to semantic losses to train our framework: L_total = λ * L_Decomp + L_D2S where λ balances the magnitude difference between the decomposion and detail to semantic loss functions. We empirically set it to 1000 for better results. § EXPERIMENTS §.§ Experimental Setups Datasets. Following previous works <cit.>, our approach leverages the KAIST <cit.> dataset, which comprises 95,328 pairs of infrared-visible light images. We randomly selected 20,000 pairs from this dataset as our training set. Additionally, we have combined two public datasets to create a robust test set. Specifically, the test set is composed of 21 pairs of data from the TNO <cit.> test set and an additional 40 pairs of data from the VOT2020-RGBT <cit.> dataset. This combination provides a diverse and extensive set of data for evaluating the performance of our framework. Implementation details. We implement SimpleFusion with PyTorch and perform optimization with ADAM. The learning rate is set to 1 × 10^-5. We randomly select 20,000 pairs of images from the KAIST <cit.> dataset as training data, with input images converted to gray and compressed to 128 × 128. The model is trained on a single NVIDIA RTX 3090, using a batch size of 8 for 4 epochs. Evaluation metrics. To evaluate our model, a comprehensive set of four quantitative metrics has been employed, which encompasses Entropy (En), Standard Deviation (SD), Mutual Information (MI), and the modified fusion artifacts measure (Nabf). For these metrics, the higher the values, the better (except Nabf). §.§ Comparisons with State-of-the-arts We compare our method with 10 representative image fusion frameworks: an encoder-decoder based method DenseFuse <cit.>, a GAN based method FusionGAN <cit.>, a CNN-based general framework IFCNN <cit.>, an ISTA-based algorithm CUNet <cit.>, a residual fusion network RFN-Nest <cit.>, a Res2Net-based algorithm Res2Fusion <cit.>, a transformer-based framework YDTR <cit.>, a Swin-transformer-based method SwinFusion <cit.>, a unified fusion network U2Fusion <cit.>, and a representation learning guided fusion network LRRNet <cit.>. Fusion results on TNO. Table <ref> summarizes the comparison results with existing state-of-the-art methods on TNO. As shown, SimpleFusion achieves the best scores across three metrics (EN, SD and MI), particularly with a significant improvement in SD. In terms of Nabf, we obtain competitive performance when compared with existing state-of-the-arts, suggesting that the image exhibits a large spatial variation in grayscale values, resulting in higher pixel contrast and richness of detail and contrast. Fusion results on VOTRGBT-TNO. Following LRRNet <cit.>, 40 pairs of images are selected from VOT2020-RGBT <cit.> and TNO <cit.> to construct a new test dataset. According to quantitative results in Table <ref>, we can observe that SimpleFusion further improves the SD metric on this diverse dataset, significantly outperforming previous methods. 
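For reference, two of the reported metrics have simple, commonly used definitions (this is the usual reading in the fusion literature, not code from the paper): EN is the Shannon entropy of the grey-level histogram and SD is the standard deviation of pixel intensities; MI and Nabf require joint statistics of the fused and source images and are omitted from this sketch.

```python
import numpy as np

def entropy_and_sd(img_uint8):
    # EN: Shannon entropy of the 8-bit grey-level histogram (bits).
    hist = np.bincount(img_uint8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    en = -(p * np.log2(p)).sum()
    # SD: standard deviation of pixel intensities.
    sd = img_uint8.astype(float).std()
    return en, sd

fused = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder image
print(entropy_and_sd(fused))
```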
Note that a higher SD (standard deviation) in an image indicates that the variation or distribution of pixel values within the image is more extensive or diverse. These performance improvements on this metric manifest richer and more diverse details for the fused images, which may facilitate downstream feature extraction and further analysis. §.§ Ablation study Impact of γ_2 and w_i. The loss functions involve a set of hyperparameters to be tuned. In this section, we mainly investigate the impact of hyper-parameters γ_2, γ_4 and w_i. While for (ω_0, ω_1, ω_2,γ_1 and w_v) in Eq. <ref>, Eq. <ref> and Eq. <ref>, we empirically set them according to <cit.>. Our ablation experiments are summarized in Tab. <ref>. As shown, when γ_2 = 1.5 and w_i = 2.0, our model obtains the best in terms of En, SD and MI. However, at the same time, our model performs the worst on the metric Nabf. It implies that the fused image contains excessive noise and is visually perceived as unnatural. When γ_2 = 0.1 and w_i = 2.0, our model reaches the best scores on Nabf. However, it performs poorly on the other metrics. Overall, SimpleFusion has a satisfactory performance across all metrics when γ_2 = 2.5 and w_i = 2.0, which are our default configurations in all our following experiments. Visualization. Fig. <ref> compares typical fusion results of different methods. Observing the red box in Fig.<ref>, subjective evaluations show that fusion images generated by methods such as CUNet, YDTR, and SwinFusion appear blurry and lack texture details. On the other hand, methods like DenseFuse, FusionGAN, Res2Fusion, U2Fusion, and LRRNet preserve textures but may introduce noise into the fused images. Observing the yellow box in Fig.<ref>, subjective evaluations show that fusion images generated by methods such as DenseFuse, CUnet, RFN-Nest, Res2Fusion, TDTR, U2Fusion, and LRRNet appear to significantly lack the features of the target(the “man”).In contrast, in the fused images generated by IFCNN, FusionGAN, and SwinFusion, the features of the target are very prominent, but the edge transitions still lack sharpness. In the images produced by our SimpleFusion method, the target features are prominent and the transitions at the image edges are sharp enough. Furthermore, the output from our fusion network yields a more natural-looking image. § CONCLUSION In this paper, we have presented a simple yet effective image fusion framework for visible and infrared images. Compared with existing works, our framework only adopts plain convolutional neural networks with much fewer priors in the architecture design, thereby being more flexible. In our framework, for the visible images, a two-stream CNN is utilized to decompose it into illuminance and reflectance. For infrared images, we calculate the related components to enhance illuminance and reflectance, respectively. Our whole framework keeps the resolution along the layers which supports fusing each component with minor efforts. Extensive experiments have been done to prove its superiority. However, our framework has many hyperparameters in the loss for tunning. In the future, we plan to adaptively just them in our framework instead of manually tuning them. § ACKNOWLEDGEMENT This work is supported by the National Natural Science Foundation of China (No.62302188); Hubei Province Natural Science Foundation (No.2023AFB267); Fundamental Research Funds for the Central Universities (No.2662023XXQD001). splncs04
http://arxiv.org/abs/2406.19286v1
20240627155928
Mass composition of ultra-high energy cosmic rays from distribution of their arrival directions with the Telescope Array
[ "Telescope Array Collaboration", "R. U. Abbasi", "Y. Abe", "T. Abu-Zayyad", "M. Allen", "Y. Arai", "R. Arimura", "E. Barcikowski", "J. W. Belz", "D. R. Bergman", "S. A. Blake", "I. Buckland", "B. G. Cheon", "M. Chikawa", "T. Fujii", "K. Fujisue", "K. Fujita", "R. Fujiwara", "M. Fukushima", "G. Furlich", "N. Globus", "R. Gonzalez", "W. Hanlon", "N. Hayashida", "H. He", "R. Hibi", "K. Hibino", "R. Higuchi", "K. Honda", "D. Ikeda", "N. Inoue", "T. Ishii", "H. Ito", "D. Ivanov", "A. Iwasaki", "H. M. Jeong", "S. Jeong", "C. C. H. Jui", "K. Kadota", "F. Kakimoto", "O. Kalashev", "K. Kasahara", "S. Kasami", "S. Kawakami", "K. Kawata", "I. Kharuk", "E. Kido", "H. B. Kim", "J. H. Kim", "J. H. Kim", "S. W. Kim", "Y. Kimura", "I. Komae", "V. Kuzmin", "M. Kuznetsov", "Y. J. Kwon", "K. H. Lee", "B. Lubsandorzhiev", "J. P. Lundquist", "H. Matsumiya", "T. Matsuyama", "J. N. Matthews", "R. Mayta", "K. Mizuno", "M. Murakami", "I. Myers", "K. H. Lee", "S. Nagataki", "K. Nakai", "T. Nakamura", "E. Nishio", "T. Nonaka", "H. Oda", "S. Ogio", "M. Onishi", "H. Ohoka", "N. Okazaki", "Y. Oku", "T. Okuda", "Y. Omura", "M. Ono", "A. Oshima", "H. Oshima", "S. Ozawa", "I. H. Park", "K. Y. Park", "M. Potts", "M. S. Pshirkov", "J. Remington", "D. C. Rodriguez", "C. Rott", "G. I. Rubtsov", "D. Ryu", "H. Sagawa", "R. Saito", "N. Sakaki", "T. Sako", "N. Sakurai", "D. Sato", "K. Sato", "S. Sato", "K. Sekino", "P. D. Shah", "N. Shibata", "T. Shibata", "J. Shikita", "H. Shimodaira", "B. K. Shin", "H. S. Shin", "D. Shinto", "J. D. Smith", "P. Sokolsky", "B. T. Stokes", "T. A. Stroman", "Y. Takagi", "K. Takahashi", "M. Takamura", "M. Takeda", "R. Takeishi", "A. Taketa", "M. Takita", "Y. Tameda", "K. Tanaka", "M. Tanaka", "Y. Tanoue", "S. B. Thomas", "G. B. Thomson", "P. Tinyakov", "I. Tkachev", "H. Tokuno", "T. Tomida", "S. Troitsky", "R. Tsuda", "Y. Tsunesada", "S. Udo", "F. Urban", "D. Warren", "T. Wong", "K. Yamazaki", "K. Yashiro", "F. Yoshida", "Y. Zhezher", "Z. Zundel" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Physics, Loyola University Chicago, Chicago, Illinois 60660, USA Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Department of Physics, Loyola University Chicago, Chicago, Illinois 60660, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute of Physics, Academia Sinica, Taipei City 115201, Taiwan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: KIPAC, Stanford University, Stanford, CA 94305, USA Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Presently at: Purple Mountain Observatory, Nanjing 210023, China Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Kofu, Yamanashi 400-8511, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan The Graduate 
School of Science and Engineering, Saitama University, Saitama, Saitama 338-8570, Japan Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Kofu, Yamanashi 400-8511, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, Tokyo City University, Setagaya-ku, Tokyo 158-8557, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Faculty of Systems Engineering and Science, Shibaura Institute of Technology, Minato-ku, Tokyo 337-8570, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Korea Institute of Geoscience and Mineral Resources, Daejeon, 34132, Korea Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Deceased Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia mkuzn@inr.ac.ru Service de Physique Théorique, Université Libre de Bruxelles, Brussels 1050, Belgium Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Department of Physics, Yonsei University, Seodaemun-gu, Seoul 120-749, Korea Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Center for Astrophysics and Cosmology, University of Nova Gorica, Nova Gorica 5297, Slovenia High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake 
City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Faculty of Science, Kochi University, Kochi, Kochi 780-8520, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Department of Physical Sciences, Ritsumeikan University, Kusatsu, Shiga 525-8577, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan College of Engineering, Chubu University, Kasugai, Aichi 487-8501, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Quantum ICT Advanced Development Center, National Institute for Information and Communications Technology, Koganei, Tokyo 184-8795, Japan Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Sternberg Astronomical Institute, Moscow M.V. 
Lomonosov State University, Moscow 119991, Russia Presently at: NASA Marshall Space Flight Center, Huntsville, Alabama 35812, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Department of Physics, School of Natural Sciences, Ulsan National Institute of Science and Technology, UNIST-gil, Ulsan 689-798, Korea Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Department of Physics, School of Natural Sciences, Ulsan National Institute of Science and Technology, UNIST-gil, Ulsan 689-798, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Department of Physics, Tokyo 
University of Science, Noda, Chiba 162-8601, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Earthquake Research Institute, University of Tokyo, Bunkyo-ku, Tokyo 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Graduate School of Information Sciences, Hiroshima City University, Hiroshima, Hiroshima 731-3194, Japan Institute of Particle and Nuclear Studies, KEK, Tsukuba, Ibaraki 305-0801, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA petr.tiniakov@ulb.be Service de Physique Théorique, Université Libre de Bruxelles, Brussels 1050, Belgium Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Graduate School of Science and Engineering, Tokyo Institute of Technology, Meguro, Tokyo 152-8550, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan CEICO, Institute of Physics, Czech Academy of Sciences, Prague 182 21, Czech Republic Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA College of Engineering, Chubu University, Kasugai, Aichi 487-8501, Japan Department of Physics, Tokyo University of Science, Noda, Chiba 162-8601, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA The Telescope Array Collaboration § ABSTRACT We use a new method to estimate the injected mass composition of ultrahigh cosmic rays (UHECRs) at energies higher than 10 EeV. The method is based on comparison of the energy-dependent distribution of cosmic ray arrival directions as measured by the Telescope Array experiment (TA) with that calculated in a given putative model of UHECR under the assumption that sources trace the large-scale structure (LSS) of the Universe. 
As we report in the companion letter, the TA data show large deflections with respect to the LSS which can be explained, assuming small extra-galactic magnetic fields (EGMF), by an intermediate composition changing to a heavy one (iron) in the highest energy bin. Here we show that these results are robust to uncertainties in UHECR injection spectra, the energy scale of the experiment and galactic magnetic fields (GMF). The assumption of weak EGMF, however, strongly affects this interpretation at all but the highest energies E > 100 EeV, where the remarkable isotropy of the data implies a heavy injected composition even in the case of strong EGMF. This result also holds if UHECR sources are as rare as 2 × 10^-5 Mpc^-3, that is the conservative lower limit for the source number density. Mass composition of ultra-high energy cosmic rays from distribution of their arrival directions with the Telescope Array Z. Zundel ======================================================================================================================== § INTRODUCTION Ultra-high energy cosmic rays (UHECR) are charged particles of high energies E > 1 EeV (1 EeV = 10^18 eV) that are reaching the Earth from space. The UHECR spectrum is showing a steep decline at highest energies <cit.>, indicating some specific physical process. The nature of this steepening is related to UHECR mass composition at these energies — the type of particles constituting the UHECR flux. If the cutoff in the injected spectrum is high enough, the steepening of the observed spectrum is associated with the Greisen-Zatsepin-Kuzmin (GZK) process — the scattering of primary UHECR on the cosmic background radiation <cit.>. In this case the observed flux is enriched by protons, either primary or secondary. At the same time, a lower injection cutoff manifests itself in the observed spectrum directly. In this case the flux at high energies consists of the same nuclei that were injected in sources <cit.>. Therefore, by estimating the UHECR mass composition at highest energies one could discriminate between these two scenarios. However, this is a challenging task for standard UHECR measurement techniques. The flux of UHECR is tiny, of order 1  km^-2 sr^-1 yr^-1 at E ≳ 1 EeV and as small as ∼ 10^-2  km^-2 sr^-1 yr^-1 at highest energies of E ≳ 100 EeV. Therefore, they can be detected only indirectly via extensive air showers (EAS) of secondary particles they produce in Earth's atmosphere. The standard technique of the mass composition measurement employs the fluorescence detectors (FD) that are observing the ultraviolet light that EASs emit while propagating through the atmosphere. Extracting the distribution of atmospheric depths of shower maxima (X_max) from the FD data and fitting it with simulated EAS of various primary particles one can estimate the observed UHECR mass composition <cit.>. Being the most reliable mass composition measurement technique up to date, this method is still prone to uncertainties of high-energy hadronic models. In addition, FD measurements are possible only in moonless nights, which reduces the initially small UHECR statistics at the highest energies to ∼ 10% of its full value. As a result, the FD measurements do not cover the physically most interesting region of highest energies. 
In composition measurements with surface detectors (SD) the uncertainty due to high-energy hadronic interaction models is either inherited from the FD by the cross-calibration <cit.> or follows directly from the SD Monte-Carlo (MC) <cit.>, but in both cases the results are less accurate than those of the FD. There are also interesting proposals of mass composition reconstruction using neural networks <cit.>, but the fundamental problem of hadronic model dependence is not yet solved in this approach either. Finally, the Pierre Auger observatory is now undergoing a surface detector upgrade that would allow it to measure the electromagnetic and muonic parts of showers separately <cit.>. These measurements are expected to improve the composition-related discriminating power of the surface detector observations. An alternative idea, to use the UHECR anisotropy as a measure of their charge and hence mass composition, has been proposed in Ref. <cit.>. There are a number of studies on the measurement of the UHECR anisotropy <cit.>, as well as several theoretical approaches that use these measurements to unveil UHECR sources and mass composition <cit.>. Our method has the advantage that it uses only the most robust UHECR observables: arrival directions and energies. Comparing the energy-dependent distribution of UHECR arrival directions over the sky with the distribution expected in a generic model of sources with a given injected composition, one can constrain this composition from the data. The key ingredient of the method is the test statistics (TS) that summarizes the information contained in the arrival directions of the given event set in a single number: the mean deflection of the events from the sources that are assumed to follow the Large Scale Structure of the Universe. Due to the shrinking of the attenuation horizon and the decrease of magnetic deflections, at the highest energies the UHECR flux is expected to consist of isolated sources with different degrees of smearing for different primaries. This potentially allows one to constrain the composition even at the highest energies where the experimental statistics is small. The method is applied to the TA data with E > 10 EeV in the companion letter <cit.>. From the physical point of view, the most interesting result is the indication of a heavy mass composition at energies higher than 100 EeV. In this paper we focus on the impact of various uncertainties that affect the compatibility of the composition models with the data: the parameters of the injected UHECR spectra, the systematics of the energy scale, the uncertainties of the galactic and extragalactic magnetic fields, and the effect of a small number density of sources. We show that most of these uncertainties have a negligible impact on the physical result mentioned above. The paper is organized as follows: in Sec. <ref> we briefly introduce the Telescope Array experiment, the reconstruction procedure, and the data set used. In Sec. <ref> we describe the analysis method used in this study and give the details of the simulation of the mock UHECR sets. In Sec. <ref> we present the resulting constraints on composition models from the TA data. In Sec. <ref> we evaluate the impact of various uncertainties on these results. Sec. <ref> contains concluding remarks. § EXPERIMENT, DATA AND RECONSTRUCTION Telescope Array <cit.> is the largest cosmic-ray experiment in the Northern Hemisphere. It is capable of detecting EAS in the atmosphere initiated by cosmic particles of EeV energies and higher.
The experiment is located at 39.3^∘ N, 112.9^∘ W in Utah, USA, and has operated in a hybrid mode since May 2008. It includes the surface detector and 38 fluorescence telescopes grouped into three stations. The SD consists of 507 scintillator stations of 3 m^2 each, placed on a square grid with 1.2 km spacing, and covers an area of ∼ 700 km^2. The duty cycle of the SD is about 95% <cit.>. We use the standard TA SD reconstruction procedure as described in Refs. <cit.>. Each event is reconstructed by separate fits of the shower geometry and the lateral distribution function (LDF), which allows one to determine the shower arrival direction, the core location and the signal density at a distance of 800 m from the core, S_800. The latter quantity together with the zenith angle is used to reconstruct the primary energy by making use of lookup tables derived from a full Monte-Carlo of EASs and the detector response <cit.>. Finally, the energy is rescaled by a correction factor 1/1.27 to match the energy scale of the calorimetric TA FD technique. The resolution of the arrival direction reconstruction is estimated as 1.5^∘ at E ≥ 10 EeV <cit.>. The energy resolution is found to be 18% in terms of the logarithm of the ratio of reconstructed to thrown energies, ln (E_ rec/E_ MC), for E_ MC≥ 10 EeV <cit.>. The systematic uncertainty of the energy scale coming from the FD is estimated to be 21% <cit.>. To ensure proper reconstruction of the primary particle parameters the following quality cuts are imposed <cit.>:
* E ≥ 10 EeV;
* zenith angle ≤ 55^∘;
* number of “good” detectors in the fit ≥ 5;
* χ^2/ d.o.f. ≤ 4 for both geometry and LDF fits;
* pointing direction error ≤ 5^∘;
* σ_S_800/S_800 ≤ 0.25;
* the detector with the largest signal is surrounded by 4 working detectors: there must be one working detector to the left, right, down and up of it on the grid, though these do not have to be its immediate neighbors.
These are the standard TA cuts used for anisotropy studies. In addition, we also eliminate the events induced by lightnings that mimic the EAS <cit.>. The lightning events are taken from the Vaisala lightning database compiled by the U.S. National Lightning Detection Network (NLDN) <cit.>. We correlate the list of the NLDN lightning events detected within 15 miles from the Central Laser Facility of the TA during the full time of TA operation with the list of TA events. We remove all the TA events that occur within 10 minutes before or after the NLDN lightnings. This cut was shown to reduce the total exposure by less than 1% <cit.>. In the present study we use the TA SD data set obtained during 14 years of operation from May 11, 2008 to May 10, 2022. The total number of events passing all the cuts is 5978; 19 of these events have energies larger than 100 EeV, including the highest energy event with E = 244 EeV <cit.>. § ANALYSIS Our analysis closely follows that of Ref. <cit.>. It is based on the computation and comparison of the same, properly defined test statistics (TS) for both the UHECR data set and mock sets simulated under the assumption of various injected compositions. The general outline of the method is the following. In general, each composition model is characterized by the fractions of injected species, spectral slopes and cut-off energies for each species. These parameters are not independent as the model has to reproduce the observed UHECR spectrum.
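For concreteness, the per-species injection parameters of such a model can be collected in a small container. The following Python sketch is purely illustrative (the class and field names are ours, not part of the analysis code), with example values taken from the injection spectra quoted later in this section.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class SpeciesInjection:
    """Injection parameters of one primary species in a composition model."""
    fraction: float                    # injected fraction (the parameter we constrain)
    gamma: float                       # power-law index of the injection spectrum
    e_cut_eev: Optional[float] = None  # sharp cutoff energy in EeV; None = no cutoff

# A putative proton-iron mix with the spectral parameters used in this analysis.
example_model: Dict[str, SpeciesInjection] = {
    "p":  SpeciesInjection(fraction=0.5, gamma=2.55),
    "Fe": SpeciesInjection(fraction=0.5, gamma=1.95, e_cut_eev=560.0),
}
```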
In this analysis we limit ourselves to a simplified set of models in which each species independently is injected in such a way (see the details below) as to reproduce the observed spectrum in the energy range of interest. In this case the fractions of injected species are independent parameters — those which we aim to constrain. For given fractions we generate a large mock set of UHECR events distributed according to flux maps computed for this composition with full account of attenuation and propagation effects. The sources are assumed to trace the Large Scale Structure of the Universe. All other parameters affecting the UHECR flux distribution are fixed by some conservative assumptions as will be discussed below. Second, we define the test statistics that quantifies only the overall magnitude of the deflections of a given event set with respect to the LSS. This TS only involves parameters that are most robustly measured by the experiment: the event arrival directions and energies. At the third step, we compute this TS for each mock event set and for the actual TA data set, and quantify the compatibility of each composition model with the data by means of the likelihood method. Finally, we estimate the impact of uncertainties of other parameters affecting the UHECR flux: shapes of injection spectra, galactic and extragalactic magnetic fields, energy scale of the experiment and UHECR source number density. Varying these parameters in their experimentally allowed ranges we estimate how robust our conclusions about the composition are. §.§ Simulation of mock event sets We now discuss the details of the generation of the mock event sets that are used to compare a given model to observations. We first compute the flux maps for different injected species at different observation energies taking into account the UHECR injection spectrum, propagation, deflections by the magnetic fields and the detector effects. Thus we get a set of basic maps for various primaries at various energies. Then we combine these maps with fractions of primaries corresponding to a particular composition model and use the resulting map to generate mock UHECR event sets. The construction of the basic flux maps F_i, k, where i denotes the injected particle type and k the detected particle energy, is organized as follows. UHECR sources are assumed to trace the luminous matter distribution in the Universe. This can be achieved, on a statistical basis, by assigning each galaxy from a complete volume-limited sample an equal intrinsic luminosity in UHECR. In practice, we use instead a flux-limited galaxy sample derived from the 2MRS galaxy catalog <cit.>. We cut out dim galaxies with mag > 12.5 so as to obtain a flux-limited sample with a high degree of completeness, and eliminate galaxies beyond 250 Mpc. We assign progressively larger flux to more distant galaxies to compensate for the observational selection inherent in a flux-limited sample (see Ref. <cit.> for the exact procedure). In a similar way, we assign larger weights to the galaxies within ± 5^∘ from the Galactic plane to compensate for the catalog incompleteness in this region. We also cut out galaxies at distances closer than 5 Mpc as they are too few to be treated statistically (this is equivalent to assuming that there are no sources closer than this distance; if such sources exist they have to be added individually). Finally, we assume that sources beyond 250 Mpc are distributed uniformly with the same mean density as those within this distance. 
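A minimal sketch of this catalog preparation, under stated assumptions: the galaxy sample is given as NumPy arrays, `completeness(d)` stands in for the selection function of the cited weighting procedure, and `plane_boost` is a placeholder for the Galactic-plane correction.

```python
import numpy as np

def source_weights(dist_mpc, mag, gal_lat_deg, completeness, plane_boost=1.0):
    """Statistical weights that turn the flux-limited 2MRS-like sample into a
    proxy for a volume-limited one.  `completeness(d)` is a hypothetical
    function giving the fraction of galaxies still in the sample at distance d
    after the mag <= 12.5 cut; its exact form is defined in the cited reference."""
    keep = (mag <= 12.5) & (dist_mpc > 5.0) & (dist_mpc < 250.0)
    w = np.zeros_like(dist_mpc, dtype=float)
    w[keep] = 1.0 / completeness(dist_mpc[keep])           # up-weight distant galaxies
    w[keep & (np.abs(gal_lat_deg) < 5.0)] *= plane_boost   # Galactic-plane compensation
    return w
```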
The space distribution of sources obtained in that way is completely fixed. The source number density, ρ, in this model is corresponding to that of all galaxies: ρ≃ 10^-2 Mpc^-3 <cit.>. It should be emphasized that we use this source distribution for both the generation of basic UHECR mock sets and for the computation of the TS, see next Subsection. The source number densities as low as ρ≃ 10^-5 Mpc^-3 and even lower are not excluded experimentally <cit.> (see, however, recent studies that are placing more stringent limits <cit.>). In case of such rare sources one would expect that the TS based on the catalog of all galaxies would show lower sensitivity to mass composition. In Sec. <ref> we describe mock set simulations for low source number density and discuss this issue quantitatively. We set UHECR injection spectra by fitting the TA and Auger observed spectra with the SimProp v2r4 <cit.> propagated spectra for each primary separately. As a result, the following spectra are adopted for our basic expected UHECR flux: power law with the indexes -2.55, -2.20, -2.10 and without injected energy cut-off for protons, helium and oxygen, respectively; power law with the index -1.50 and with a sharp cut-off at 280 EeV for silicon; power law with the index -1.95 and with a sharp cut-off at 560 EeV for iron. The spectra for protons, helium and iron are derived from the fits to TA observed spectrum <cit.>, while the spectra for oxygen and silicon are adapted from Ref. <cit.>, where combined fits to TA <cit.> and Auger <cit.> were performed, taking into account an energy rescaling between the two experiments <cit.>. Note that the shape of the injected spectrum cutoff is not important in our setup, according to discussion of Ref. <cit.>. We show some examples and details of the spectra fitting in the Appendix. The secondary protons generated during propagation of injected primary nuclei through the interstellar medium are taken into account for helium and oxygen nuclei. We adapt the method and the approximations used in Ref. <cit.>. In particular, we assume that all nuclei of atomic weight A injected with E > 10 A EeV immediately disintegrate into A protons having the energy 1/A times the injected energy of the nucleus each. For a power-law injection of nuclei with index γ and no cutoff, this results in the following number of secondary protons N_p above a given threshold E_ min: N_p(≥ E_ min) = A^2-γ N_A(≥ E_ min). Because of the cutoff in the injection spectra the secondaries generated by silicon and iron nuclei drop out of the energy range E > 10 EeV that we consider in this study. More details on the approximations of UHECR propagation used are given in Ref. <cit.>. We also found that for iron primary the observed spectrum can be fitted almost equally well by the injection with and without cutoff. For no-cutoff spectrum the injection slope is -1.89 and the observed flux is supplemented with secondary protons. We choose the injection for iron with the cutoff — this choice is conservative because it yields larger mean deflections. In Sec. <ref> we study how our results change if we use the no-cutoff iron injection instead. We also discuss the effect of varying spectral indexes within their uncertainties. Finally, following the approximations of Ref. <cit.> we assume the remnants of the primary nuclei, that are attenuated upon the propagation through the interstellar medium, at detection have the same charge as primary nuclei have at injection. 
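As a quick worked example of the secondary-proton counting rule quoted above (a sketch only; the helium index is the one adopted in this section):

```python
def secondary_protons_above(n_nuclei_above, mass_number, gamma):
    """N_p(>= E_min) = A**(2 - gamma) * N_A(>= E_min) for a power-law injection
    of nuclei with index gamma and no cutoff, as used here for helium and oxygen."""
    return mass_number ** (2.0 - gamma) * n_nuclei_above

# Helium (A = 4, gamma = 2.20): each injected He nucleus above threshold
# contributes about 4**(-0.2) ~= 0.76 secondary protons above the same threshold.
print(secondary_protons_above(1.0, 4, 2.20))
```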
As shown in that study, the assumption that the attenuated remnants keep the charge of the injected nuclei leads to percent-level errors with respect to a full MC simulation of the propagation. Moreover, in the context of our study these corrections would act in a conservative direction, making the simulated deflections larger and the composition models more compatible with the data (see next Section). We also consider an injection composition model from the Auger study <cit.>. Namely, we use their best-fit model with a power-law spectrum E^-0.96 and a rigidity-dependent exponential cutoff of a special form. The fractions of the separate mass components are fixed at 1 EeV: f_p = 0, f_He = 0.673, f_N = 0.281, f_Si = 0.046, f_Fe = 0. To get the appropriate spectrum at Earth taking into account the attenuation and secondaries, we use the results of the propagation for this model obtained with the code of Ref. <cit.>. The results are obtained in the energy range 32 ≤ E ≤ 80 EeV due to the limitations of the mentioned code. The deflections in the GMF (see below) for this model are estimated according to the average charge of the observed composition at a given energy. As we plan to use the deflections of UHECRs from their sources as a variable discriminating between particle types, the effect of cosmic magnetic fields on the expected UHECR flux is of primary importance. The UHECR deflections by the galactic magnetic fields are implemented as follows. In general, the galactic magnetic field has regular and random components. For the regular field, we adapt the model of Ref. <cit.> for our basic UHECR flux picture and the model of Ref. <cit.> for testing the robustness of the results. The correction of the UHECR flux for the deflections in the regular GMF is done by the standard backtracking technique. The deflections in random magnetic fields, both Galactic and extragalactic, are modeled as a smearing of the flux with the von Mises-Fisher distribution f_θ(α) defined as f_θ(α) = exp(2cosα/θ^2) / (2πθ^2 sinh(2/θ^2)), where the parameter θ is the smearing angle. The magnitude of the smearing is proportional to the combination Bq/E and is different for UHECR species of different charges q and energies E. The galactic random field is non-uniform over the sky: the dependence of the mean deflections √(⟨θ^2 ⟩) (equivalently, the smearing angle) on the Galactic latitude has been estimated from the dispersion of Faraday rotation measures of extragalactic sources in Ref. <cit.>. The following empirical relation has been obtained for protons of E=40 EeV: √(⟨θ^2⟩) ≤ 1^∘/sin^2 b + 0.15, b being the Galactic latitude. Note that this formula is purely phenomenological and independent of any assumptions about the morphology or coherence length of the random GMF. We adopt this relation conservatively, treating it as an equality (i.e., assuming maximum deflections) and rescaling it for other species and energies according to magnetic rigidity. Subtleties of the implementation of a non-uniform smearing are described in Ref. <cit.>. The deflections in the extra-galactic magnetic field are set to zero in our basic flux model. This corresponds to either B_ EGMF≪ 1 nG for the correlation length λ∼ 1 Mpc or B_ EGMF≪ 0.1 nG for a cosmological-scale λ. A detailed discussion of possible UHECR deflections in the EGMF, as well as a quantitative estimate of their effect on our results, is given in Sec. <ref>. Finally, we add instrumental effects to our flux maps in order to fully reproduce the observed UHECR flux picture. We add a uniform smearing by 1^∘ to account for the angular resolution of TA.
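A sketch of the random-field smearing described above, assuming degrees for the empirical latitude relation and radians for the profile; the relation is treated as an equality and rescaled with rigidity, and the divergence near the Galactic plane is left as in the formula (the actual implementation subtleties are in the cited reference).

```python
import numpy as np

def vmf_profile(alpha_rad, theta_rad):
    """von Mises-Fisher smearing profile f_theta(alpha) used for random fields."""
    return (np.exp(2.0 * np.cos(alpha_rad) / theta_rad**2)
            / (2.0 * np.pi * theta_rad**2 * np.sinh(2.0 / theta_rad**2)))

def random_gmf_smearing_deg(gal_lat_deg, energy_eev, charge):
    """Smearing angle 1 deg / sin^2(b) + 0.15 for protons at 40 EeV, rescaled
    in proportion to Z / E (i.e. inversely with magnetic rigidity)."""
    b = np.radians(gal_lat_deg)
    theta_proton_40 = 1.0 / np.sin(b) ** 2 + 0.15   # degrees, protons at 40 EeV
    return theta_proton_40 * charge * (40.0 / energy_eev)
```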
This 1^∘ smearing only slightly affects the flux maps for protons at high energies in the Galactic pole regions, where the deflections due to the random and regular GMF may become comparable to 1^∘. We should also note that the accuracy of our procedure of flux map construction is also 1^∘, which defines the overall accuracy of our method. Finally, we modulate the flux maps by the geometrical exposure of the TA SD. We bin the energy in 20 logarithmic bins per decade starting from 5 EeV, and generate a flux map for each injected species and each energy bin. Several examples of the resulting model flux maps F_i,k for injected protons and iron and for different energies are shown in Fig. <ref>. Each map is a continuous function of the direction that is normalized to a unit integral over the sphere. It can be interpreted as the probability density of observing an event from the direction n. Given the flux map F_i,k it is straightforward to generate a set of UHECR events that follow the corresponding distribution by throwing random events and accepting them with the probability F_i,k( n) according to their direction n. We generate the energies of the events in a mock set according to the reconstructed TA spectrum <cit.> and additionally smear the energies with a Gaussian function of a width corresponding to the TA SD energy resolution of 18% (for the Auger best-fit composition model we do not perform this smearing in order not to narrow down the available energy range, which is already not wide). Each event is thrown using the flux map of the given species and of the energy bin it falls into. We generate a large number of events in each mock event set so as to make the statistical uncertainty of the corresponding TS negligible. §.§ Test-statistics The appropriate choice of the test statistics and the corresponding observable is very important for our method. We want the TS to depend on the overall magnitude of the deflections but be insensitive to their particular directions. We would expect that such an observable would not depend strongly on the details of the regular magnetic field, but mainly on its overall magnitude. While the existing GMF models agree on the overall magnitude of the galactic field within ∼ 50%, the magnitudes of the deflections in various composition models differ by a factor of 1 to 26, according to the particle charges. Therefore we expect that a TS that is sensitive mainly to the deflection magnitude would distinguish between different composition models despite the relatively poor knowledge of the galactic magnetic field. Such an observable is inspired by the case of purely random UHECR deflections, which are characterized by a single parameter, the width of the Gaussian spread of a point source. More accurately, we use the von Mises-Fisher distribution (<ref>). By analogy, we choose to characterize a given set of events by its typical deflection angle with respect to the sources in the LSS. To compute this quantity we construct another set of sky maps Φ_k(θ_100) that are simplified analogs of the flux model maps F_i, k.
Namely, each map Φ_k(θ_100) is derived from the same LSS source distribution, with the flux attenuated as for protons with injection spectrum index 2.55 taken at the detected energy E_k, and uniformly smeared with the angle θ = (100  EeV/E_k) ·θ_100, where θ_100 is the composition-discriminating parameter to be determined from fitting to the data or mock sets: given the set of events with directions n_i one can determine the value of θ_100 by computing the θ_100-dependent test statistics TS(θ_100) = -2 ∑_k ( ∑_i ln Φ_k(θ_100, n_i)/Φ_ iso( n_i) ). Here the internal sum runs over the events in the energy bin k and we have included a standard normalization factor -2. For convenience we also included the normalization factor Φ_ iso( n_i) = Φ(∞, n_i) that corresponds to the isotropic distribution of sources: a uniform flux map modulated by the exposure function. The energy binning here is the same as for the model flux maps F_i, k. The parameter θ_100 ranges from 1^∘ to 200^∘, where the first value comes from the experiment resolution and the second one corresponds to the size of the TA field of view (FoV) and mimics the isotropic distribution. One can infer the value of θ_100 for a given event set by finding the TS minimum with respect to it. This minimum, θ_100^ min, is interpreted as the typical deflection angle with respect to the sources in the LSS. The width of the minimum, σ(TS(θ_100)), characterizes the uncertainty of the deflection angle, and the square root of the minimum depth, |TS(θ_100^ min)|^1/2, measures the significance of the departure of a given set from isotropy in standard deviations. A detailed discussion of the TS choice and construction is given in the study <cit.>. Several examples of maps Φ_k(θ_100) are shown in Fig. <ref>. We should stress that one and the same TS is used to quantify any mock event set with an arbitrary injected composition, or a data set. The applicability of such a TS to event sets generated with different assumptions about the UHECR flux is justified by tests of the statistical power of the TS in distinguishing these event sets. These tests were performed in Ref. <cit.> as well as in the present study (see the next two sections). For a sufficiently large event set the TS yields a deep and narrow minimum at some value θ_100^ min that, within our approach, is the single characteristic of a given composition model. Comparing the values of θ_100^ min for various models with the TS distribution for the data one may determine to what extent each of these models is compatible with the data. To make the picture more precise we estimate the compatibility separately in several energy ranges (not to be confused with the technical energy bins E_k used for the TS construction). Namely, we use logarithmic energy ranges of 0.25 decade starting from 10 EeV, with the fifth range being the open interval E > 100 EeV. § RESULTS The distributions of the TS for the TA SD data in the five energy ranges are shown in Fig. <ref>. We stress again that the same parameter θ_100, the typical deflection rescaled to the energy E=100 EeV, is measured in the different energy ranges. Even before comparing these distributions with the simulated models one can notice three important points. First, all the curves show a steep rise at small values of θ_100. This implies that at all considered energies the data is incompatible with small deflections of the events from the LSS.
Second, while at low energies the data does not show any clear preference for any deflection magnitude (all the minima are shallow, if present at all), at 56 ≲ E ≤ 100 EeV the minimum exists at θ_100^ min = 30.8^∘, implying that the data exhibit a correlation with the LSS at more than the 2 σ level as compared to isotropy. Note that this value is global and should not be penalized, as no scanning over any parameters was performed. Finally, at energies E > 100 EeV the data shows no hint of a minimum and prefers complete isotropy. This last remarkable feature is discussed and physically interpreted in our companion letter <cit.>. In Fig. <ref> we confront the same data TS distributions with the values of θ_100^ min of various pure and mixed composition models. As the TS is non-Gaussian we explicitly show the 1 σ and 2 σ error bars around the TS minima, which are shown with black points. We adjust the statistics of the mock event sets to be ∼ 1000 times larger than that of the real data in each energy bin so that the statistical uncertainties of the model predictions are negligible. One can see that, with our basic assumptions about the UHECR flux model, the light or even the intermediate mass composition is in tension with the data. The situation is even more interesting at the highest energies, where even pure iron is hardly compatible with the data. Something else one can see from Fig. <ref> is that the sensitivity of the proposed TS to the composition models is not constant in energy and results from a competition between two different trends: the evolution of the expected flux with energy and the simultaneous change in the event statistics. At lower energies, the expected flux from the LSS is almost uniform with a very small density contrast (modulo the experiment's exposure) due to the large contribution of remote, uniformly distributed sources and the larger deflections in magnetic fields. At higher energies, on the contrary, the map contrast increases greatly due to the simultaneous shrinking of the UHECR horizon and the decrease of magnetic deflections. On the other hand, the statistics decreases at high energies. It appears that the first trend wins: even the small event statistics at the highest energies gives a better sensitivity in terms of the mass composition discrimination than the large event statistics at lower energies. The non-monotonic behavior of the model predictions from bin to bin is a result of a complicated interplay between various factors affecting the evolution of the model flux maps with energy, such as the fraction of the isotropic component, the flux focusing and secondary source images that might appear due to large deflections in the lower energy flux maps, the ratio between the mean total deflection of a given mass component at a given energy and the size of the TA FoV, etc. These effects make it difficult to predict qualitatively the evolution of θ_100^ min with energy and composition, especially in the case of more than one mass component. Still, a global trend of θ_100^ min increasing with the mass of the injected particles is visible in each energy range. We should also mention that the results in separate bins of the observed energy are not completely independent, as they are projected onto partially overlapping bins of the injected energy. It is also visible that the TS has better model separation power for event sets where the deflections of the separate mass components do not differ significantly.
This is the main reason for the counter-intuitive result of the method's higher model separation power at lower energies where all deflections are larger: both proton and iron deflections are close to isotropic at low energies, while at higher energies proton deflections are small but iron deflections are still close to isotropic. The method reaches its best sensitivity at the highest energies E > 100 EeV, where the total mean deflections of all studied composition components are within the FoV and the sources are more distinct in the sky map. Therefore, all the composition lines are below our adopted “isotropic” value θ_100^ min = 200^∘. In this case the lines are not degenerate, which allows us to distinguish between several strongly deflected composition models that are indistinguishable at lower energies. § UNCERTAINTIES In this section we discuss the impact of our theoretical assumptions and experimental uncertainties on the mass composition constraints. Note that in our approach all the uncertainties have an impact on the positions of the model lines but do not affect the data points. To estimate the impact of each uncertainty we compare the model predictions computed for the basic value of a given parameter with the predictions computed for a varied value of this parameter. The main sources of uncertainties are discussed below one by one. §.§ Injection spectra The injection spectra fits described in Section <ref> yield a value of the spectrum index for each primary and a 1σ interval around it. We compute the composition model lines varying the index within these intervals and compare the result with the basic lines. We use proton-iron mix models for the tests of the uncertainties related to the spectrum and the energy scale. The fitted values of the injection spectrum indices are 2.55^+0.04_-0.03 for protons and 1.95^+0.04_-0.04 for iron. We set the index for one primary to its best fit value and vary the index for the other primary. Among all the resulting models we choose the one with the maximum deviation from the best-fit injection model. This happens to be the model with γ_p = 2.55 and γ_Fe = 1.91. The resulting comparison is shown in Fig. <ref>, left panel. One can see that the impact of the variation of the injection index on the model line position is negligible. There is also an uncertainty associated with the presence of the cutoff in the injection spectrum for heavy primaries. For heavy primaries the spectrum fits with and without cutoff are equally viable, while leading to quite different expected flux model maps. Therefore it is instructive to test the change in the composition results due to the assumption of a no-cutoff injection for iron. The comparison is shown in Fig. <ref>, right panel. One can see that when there is no cutoff in the iron spectrum, the predicted value of θ_100^ min is much lower (obviously, due to the large fraction of secondary protons in the flux), so that for instance at 32 ≲ E ≲ 56 EeV it is even hard to reconcile any composition with the data. As the data in general disfavors small deflections, our basic model of iron injection with cutoff (and hence without secondaries) is conservative. §.§ Systematic uncertainty of the energy scale In the standard TA SD energy reconstruction procedure the overall energy scale is set to that established in the fluorescence measurements <cit.>. Therefore, the systematic uncertainty of the SD energy measurement is given by that of the FD energy scale, which was found to be 21% <cit.>.
We estimate the impact of this uncertainty on our composition results by shifting the energies of all the events in a mock set to the lower or the upper edge of the systematic uncertainty band. The results are shown in Fig. <ref>, where the left panel corresponds to the situation when measured energy is systematically higher than the real one and the right panel — to the opposite situation. One can see that the difference in model lines due to this uncertainty grows with energy, but does not exceed the difference between the light and heavy composition models. It is also worth noting that the inconsistency of the data at E > 100 EeV with a light or intermediate composition is robust to this uncertainty. §.§ Galactic magnetic fields A strength of the regular GMF component is known to be several μ G from Faraday rotation measures of extragalactic sources and from some other observations <cit.>. However, its general structure is unknown since a reconstruction of a 3D field from its 2D projection on the sky is ambiguous. Several proposed phenomenological models <cit.> should be used with caution. We also should note that some models predict quite large magnetic fields in the galactic halo <cit.>. The estimated UHECR deflections in these fields for some directions in the sky can be enhanced significantly with respect to “basic” phenomenological models. However, these deflections are in general less than those expected in models of strong EGMF (see discussion in the next Subsection). Our main initial motivation for a new TS (<ref>) was to minimize the impact of GMF uncertainty on the results of the composition estimation. It is therefore interesting to see to which extent this works in practice. To estimate the impact of the GMF uncertainty we compare the TS predictions in one and the same composition model generated with our reference GMF model <cit.> (PT'11) and with the model of Ref. <cit.> (JF'12). The comparison is shown in Fig. <ref>, left panel. One can see that, as expected, the change in the predicted value of the TS with the change of the GMF model is small. Remarkably, in the majority of cases the predictions for the two GMF models are remaining compatible with the data within the same number of sigmas. §.§ Extra-galactic magnetic fields The extra-galactic magnetic field is much more uncertain than the Galactic one. Only loose bounds on EGMF strength in voids were so far derived from observations: the constraint by ∼ 10^-15 G from below <cit.> and by ∼ 10^-9 G from above <cit.>. The correlation length of a field of non-cosmological origin should not be larger than 1 Mpc <cit.>. There are also constraints from CMB on a field of cosmological origin with unbounded correlation length: its strength should not exceed 5 × 10^-11 G <cit.>. The contribution of the EGMF in the voids to the UHECR deflections is the largest for the field strength at its upper-limit value B = 1.7 nG and maximum correlation length of λ = 1 Mpc. In this case the deflections for the protons at 100 EeV are as large as 7^∘ (we assume a distance traveled to be 250 Mpc — the limit of our source catalog). Note that this estimation is a conservative upper bound, as this deflection is assumed to be the same for all sources irrespective of their distance from us. Moreover, the deflection is computed for the detected energy of the particle, while in reality it is accumulated during the whole path of the particle while its energy is higher. 
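A toy estimate of the corresponding uniform smearing angle, anchored to the ∼ 7^∘ quoted above for protons at 100 EeV and rescaled with charge and detected energy as described in the text (an assumption-laden sketch, not the actual propagation treatment):

```python
def extreme_egmf_smearing_deg(energy_eev, charge):
    """Uniform source smearing for the strong-EGMF case: about 7 degrees for
    protons at 100 EeV (B = 1.7 nG, lambda = 1 Mpc, 250 Mpc path), scaled as Z/E."""
    return 7.0 * charge * (100.0 / energy_eev)

# e.g. iron (Z = 26) at 100 EeV is smeared by roughly 7 * 26 ~ 180 degrees,
# i.e. its expected flux map is essentially isotropic in this scenario.
```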
We call this scenario “extreme EGMF” and model it with a uniform smearing of the catalog sources (according to particle charge and energy) before applying the deflections in the GMF. Examples of the UHECR flux model maps used for the mock UHECR set simulation for protons and iron nuclei in the extreme EGMF scenario are shown in Fig. <ref>. Apart from the global EGMF in voids, a magnetic field can also exist inside extragalactic structures such as filaments. These fields require separate consideration in case our Galaxy itself is situated in a magnetized filament. While upper limits exist on the magnetic field strength of filaments in general <cit.>, even the presence of such a structure around the Milky Way is unclear from observations, let alone its magnetic field properties. Therefore, for a possible estimation of these fields it is reasonable to resort to the results of structure formation simulations <cit.>. For instance, the recent constrained simulation of the EGMF in the local Universe <cit.> shows the presence of a ≃ 5 Mpc-large local filament around the Milky Way magnetized to ∼ 0.3 - 3 nG over most of its volume in the most conservative case. The impact of this field on UHECR deflections would be smaller than that of our “extreme EGMF” scenario even if its correlation length equals the size of the filament, λ≃ 5 Mpc. Therefore, we consider the “extreme EGMF” scenario as the most conservative one in terms of deflections. In Fig. <ref> we show the comparison of the predicted TS for the same composition model computed without EGMF deflections and with deflections in the extreme EGMF scenario. One can see that the presence of the EGMF affects the model predictions significantly, so that even a pure proton composition becomes consistent with the data at the ∼ 2 σ level at low energies. However, this does not hold at energies E > 100 EeV, where all compositions lighter than silicon are still inconsistent with the data. To reconcile the proton or helium composition models with the data at E > 100 EeV at least at the 2σ level the EGMF should be stronger than 20 nG for λ = 1 Mpc, which is far beyond the upper limit discussed earlier in this section. We should also stress that this conclusion is conservative, since our procedure for estimating the EGMF deflections, described at the beginning of this subsection, definitely overestimates them. §.§ Source number density The largest uncertainty of the composition models in our method is related to the UHECR source number density. As described earlier, the TS is computed assuming the conservative model where all galaxies are equally luminous sources of UHECR. The source number density is thus ρ≃ 10^-2 Mpc^-3 <cit.>. However, the UHECR sources may be much rarer than ordinary galaxies. Constraints on the source number density were placed by the Pierre Auger observatory in Ref. <cit.>. For the scenario of sources in the LSS and at energies higher than 80 EeV the conservative 95% C.L. constraint is ρ > 2 × 10^-5 Mpc^-3. However, this bound assumes deflections of events not larger than 30^∘, which does not cover scenarios with heavy nuclei even in the case of deflections in the GMF only. There are two recent studies that place more stringent lower limits on the UHECR source number density: ρ > 1.0 × 10^-4 Mpc^-3 <cit.> and ρ≳ 3 × 10^-4 Mpc^-3 <cit.>.
However, in the first of these works the density is constrained only for sources emitting heavy particles, while in the second one the constraints are put at energies E ≃ 32 EeV, while the sources at higher energies can be more rare. At the same time, the viable UHECR sources being discussed recently include FR-I and Seyfert galaxies with ρ≥ 10^-4 Mpc^-3 in both cases, or even an order of magnitude more frequent low-luminosity AGNs <cit.>. To test the robustness of the TS predictions to source number density we keep the source catalog for the TS computation fixed to our basic one and vary the catalogs used for mock event sets generation, while keeping all other model parameters fixed. Namely, we test the conservative value from the Auger constraints: ρ = 2 × 10^-5 Mpc^-3 and the benchmark value ρ = 10^-4 Mpc^-3. We do not want to tie ourselves to any specific source class model, therefore we produce the test rare source catalogs from our basic all-galaxies catalog. For such low source number densities only a few or a few tens of source can be found in the local Universe and hence in the GZK sphere. Therefore, the expected flux map starts to depend on the particular positions of these sources in the sky. To avoid this statistical issue we generate a number of mock source catalogs and compute the TS for each of them separately. The catalogs are volume limited samples generated by random selection from the original 2MRS catalog. We generate 20 catalogs for both ρ = 10^-4 Mpc^-3 and ρ = 2 × 10^-5 Mpc^-3 scenarios to keep the accuracy of the conclusion at the 95 % level. Among mock catalog realizations, in both ρ = 10^-4 Mpc^-3 and ρ = 2 × 10^-5 Mpc^-3 cases we pick the catalog that gives the results that are most discrepant from that of the basic 2MRS catalog. The examples of respective UHECR flux model maps used for mock UHECR sets simulation for protons and iron nuclei are shown in Fig. <ref>. The results are shown in Fig. <ref>. One can see that the discrepancy in TS between the basic scenario and the one with ρ = 10^-4 Mpc^-3 is not very large and does not exceed the difference between light and heavy composition models. Therefore almost all of the conclusions that can be made for the basic source model stay in force. The discrepancy between the basic scenario and the one with ρ = 2 × 10^-5 Mpc^-3 is more pronounced, so that the light and intermediate compositions are mostly consistent with the data in the lower energy bins. However, in the highest energy bin, the heavy composition is still preferred, while the light and intermediate compositions are in tension with the data. We conclude that our method of composition estimation is robust to all the considered uncertainties, at least at highest energies. At the same time, we suppose that one of the reasons of the degradation of the TS sensitivity in the case of rare sources is partial sky coverage of the TA experiment. For instance, at high energies and large deflections the sources are rare and some of them could contribute to the expected UHECR flux, while being outside the TA field of view. We have tested that in this situation the TS model separation power degrades. Conversely, for a full sky coverage even images of very smeared sources are fully inside the FoV and the TS separation power is higher. Therefore we expect that the sensitivity of our method would improve if we use the UHECR data from the full sky, for instance by combining TA and Pierre Auger data in the style of Auger-TA Anisotropy Working Group studies <cit.>. 
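For illustration, the random thinning used above to build the rare-source mock catalogs could be sketched as follows; the parent-sample size and the Poisson draw are our assumptions, while the actual catalogs are volume-limited samples drawn from 2MRS as described in the text.

```python
import numpy as np

def draw_rare_source_catalog(n_parent_sources, rho_mpc3, rng):
    """Randomly select parent-catalog indices so that the mean source number
    density inside the 250 Mpc sphere equals rho (e.g. 1e-4 or 2e-5 Mpc^-3)."""
    volume_mpc3 = 4.0 / 3.0 * np.pi * 250.0 ** 3
    n_draw = min(n_parent_sources, rng.poisson(rho_mpc3 * volume_mpc3))
    return rng.choice(n_parent_sources, size=n_draw, replace=False)

# Twenty mock catalogs per density, matching the ensemble size used in the text.
rng = np.random.default_rng(1)
catalogs = [draw_rare_source_catalog(40_000, 2e-5, rng) for _ in range(20)]
```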
§ CONCLUSION In this paper we have used the novel method proposed in Ref. <cit.> to estimate the UHECR injected mass composition from the distribution of their arrival directions. We improved the original version of the method by attributing all the statistical uncertainties to the data and not to the composition models and therefore making the comparison of the models more transparent. We also applied the developed method to the Telescope Array SD data. We tested several injected compositions: pure protons, helium, oxygen, silicon and iron, as well as a proton-iron mix in different proportions. For each model we propagated the injected particles taking into account the effects of their attenuation, production of secondary species, deflection in magnetic fields and modulation with the detector exposure. We then compared, separately in 5 energy ranges above 10 EeV, the resulting sky distributions of the mock events with the actual TA SD data. To assess quantitatively the compatibility of a given model with the data we calculated for both the test statistics (<ref>) as a function of the angle, and compared the positions of the minima, which represent typical deflection angle of UHECR in a given set with respect to their sources in the LSS. The results presented in Fig. <ref> indicate large deflections of UHECR, significantly larger than would normally be expected for a light composition. The main result of the present paper is the thorough investigation of the stability of the new composition results with respect to all possible uncertainties: injected spectra, experiment energy scale, galactic and extragalactic magnetic fields and source number density. We found that the preference for heavy composition is robust to the first three of these uncertainties at all energies. In the presence of a large extragalactic magnetic field or for very rare sources light composition becomes marginally compatible with the data, but at highest energies the composition should be heavy in both of these cases. We discuss the physical implications of the latter result in the short letter accompanying this study <cit.>. * § FITS FOR INJECTED SPECTRA The fits for the injected spectra for protons and iron nuclei are shown in Fig. <ref>. The TA data from Ref. <cit.> was used for these fits. For iron the fit for the injection spectrum with cutoff at 560 EeV is shown. The respective χ^2/ d.o.f. values are 1.80 for protons and 2.01 for iron nuclei. § ACKNOWLEDGEMENTS The authors would like to thank the former member of the Telescope Array collaboration Armando di Matteo, who kindly provided the simulations of UHECR propagation and respective fits of attenuation curves for the purposes of this study. The Telescope Array experiment is supported by the Japan Society for the Promotion of Science(JSPS) through Grants-in-Aid for Priority Area 431, for Specially Promoted Research JP21000002, for Scientific Research (S) JP19104006, for Specially Promoted Research JP15H05693, for Scientific Research (S) JP19H05607, for Scientific Research (S) JP15H05741, for Science Research (A) JP18H03705, for Young Scientists (A) JPH26707011, and for Fostering Joint International Research (B) JP19KK0074, by the joint research program of the Institute for Cosmic Ray Research (ICRR), The University of Tokyo; by the Pioneering Program of RIKEN for the Evolution of Matter in the Universe (r-EMU); by the U.S. 
National Science Foundation awards PHY-1806797, PHY-2012934, and PHY-2112904, PHY-2209583, PHY-2209584, and PHY-2310163, as well as AGS-1613260, AGS-1844306, and AGS-2112709; by the National Research Foundation of Korea (2017K1A4A3015188, 2020R1A2C1008230, & 2020R1A2C2102800) ; by the Ministry of Science and Higher Education of the Russian Federation under the contract 075-15-2024-541, IISN project No. 4.4501.18 by the Belgian Science Policy under IUAP VII/37 (ULB), by the European Union and Czech Ministry of Education, Youth and Sports through the FORTE project No. CZ.02.01.01/00/22_008/0004632, and by the Simons Foundation (00001470, NG). This work was partially supported by the grants of The joint research program of the Institute for Space-Earth Environmental Research, Nagoya University and Inter-University Research Program of the Institute for Cosmic Ray Research of University of Tokyo. The foundations of Dr. Ezekiel R. and Edna Wattis Dumke, Willard L. Eccles, and George S. and Dolores Doré Eccles all helped with generous donations. The State of Utah supported the project through its Economic Development Board, and the University of Utah through the Office of the Vice President for Research. The experimental site became available through the cooperation of the Utah School and Institutional Trust Lands Administration (SITLA), U.S. Bureau of Land Management (BLM), and the U.S. Air Force. We appreciate the assistance of the State of Utah and Fillmore offices of the BLM in crafting the Plan of Development for the site. We thank Patrick A. Shea who assisted the collaboration with valuable advice and supported the collaboration’s efforts. The people and the officials of Millard County, Utah have been a source of steadfast and warm support for our work which we greatly appreciate. We are indebted to the Millard County Road Department for their efforts to maintain and clear the roads which get us to our sites. We gratefully acknowledge the contribution from the technical staffs of our home institutions. An allocation of computing resources from the Center for High Performance Computing at the University of Utah as well as the Academia Sinica Grid Computing Center (ASGC) is gratefully acknowledged.
http://arxiv.org/abs/2406.19168v1
20240627134102
Emergent limit cycles, chaos, and bistability in driven-dissipative atomic arrays
[ "Victoria Zhang", "Stefan Ostermann", "Oriol Rubies-Bigorda", "Susanne F. Yelin" ]
quant-ph
[ "quant-ph" ]
Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Physics Department, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA § ABSTRACT We analyze the driven-dissipative dynamics of subwavelength periodic atomic arrays in free space, where atoms interact via light-induced dipole-dipole interactions. We find that depending on the system parameters, the underlying mean-field model allows four different types of dynamics at late times: a single monostable steady state solution, bistability (where two stable steady state solutions exist), limit cycles and chaotic dynamics. We provide conditions on the parameters required to realize the different solutions in the thermodynamic limit. In this limit, only the monostable or bistable regime can be accessed for the parameter values accessible via light-induced dipole-dipole interactions. For finite size periodic arrays, however, we find that the mean-field dynamics of the many-body system also exhibit limit cycles and chaotic behavior. Notably, the emergence of chaotic dynamics does not rely on the randomness of an external control parameter but arises solely due to the interplay of coherent drive and dissipation. Emergent limit cycles, chaos, and bistability in driven-dissipative atomic arrays Susanne F. Yelin July 1, 2024 ================================================================================= § INTRODUCTION Out-of-equilibrium dynamics of long-range interacting many-body quantum systems gives rise to a series of fascinating phenomena. Examples include ergodicity breaking <cit.>, quantum many-body scars <cit.>, time crystals <cit.> and quantum chaos <cit.>. While many of these phenomena are usually studied and realized for closed quantum systems, coupling many-body systems to an environment and adding drive can allow enhanced control over the dynamics and the emergence of additional phenomena that have no counterpart in closed systems <cit.>. In this work, we study a particular realization of a driven-dissipative system — a coherently-driven array of two-level atoms in free space. If the interatomic distance is small enough, light-induced coherent and dissipative long-range interactions give rise to strongly non-linear dynamics in the resultant non-integrable spin model <cit.>. The recent emergence of technologies that offer enhanced control over the arrangement of individual atoms <cit.> or quantum emitters <cit.> in potentially subwavelength geometries requires detailed theoretical analysis of the non-linear dynamics expected for these systems. While many aspects of the transient dynamics were investigated over the past years <cit.>, their steady-state properties in the driven-dissipative case were only very recently studied and experimentally analyzed  <cit.>. In this work, we go beyond investigating the features of the steady states and perform an in-depth study of the late-time driven-dissipative dynamics of two-dimensional periodic arrays of atoms interacting via light-induced dipole-dipole interactions (see fig:model). Our analysis is based on a mean-field model of the spin-degrees of freedom, which circumvents the complexity of the exponentially growing Hilbert space and allows for investigating large arrays of atoms. 
We first study configurations where all atoms are permutationally symmetric, which naturally occurs for rings and infinite one- and two-dimensional arrays of emitters, in the thermodynamic limit. For these configurations, an effective single particle model that describes the dynamics of a single atom in an effective mean-field generated by all the surrounding atoms provides first analytic intuition. We show that the late-time dynamics resulting from this model can in general exhibit monostability (a single steady state solution exists) and bistability (two steady state solutions exist), as well as the emergence of limit cycles (suggesting potential ergodicity breaking) and chaotic dynamics. However, the nature of the effective parameters one obtains for the underlying physical model of dipole-dipole interacting atoms is such that only the mono- and bistability can be accessed in practice. Nonetheless, we demonstrate that all four types of dynamics can be realized for finite extended arrays with as little as nine atoms. Transitions between the different regimes can be induced by simply tuning the lattice spacing or the driving strength. This is particularly remarkable for the chaotic regime, which arises solely due to the interplay of dissipation (random quantum jumps) and coherent drive and does not require to add randomness to the system parameters. § MODEL We consider a driven array of N two-level atoms with ground state |g_i⟩ and excited state |e_i⟩ located at positions 𝐫_i, as illustrated in fig:model. Applying the Markov approximation and tracing out the field degrees of freedom yields the equations of motion for the atomic operators in the Heisenberg picture <cit.> d Ô/d t=i/ħ[Ĥ, Ô]+ℒ(Ô)+ℱ(Ô). Assuming the system is driven on resonance, the Hamiltonian in the rotating frame reads Ĥ = ħ∑_i,j≠ i^N J_ijσ̂_i^+ σ̂_j^- + ħΩ/2∑_i=1^N(σ̂_i^+ + σ̂_i^-), where we have introduced σ̂_i^+ = |e_i ⟩⟨ g_i | (σ̂_i^- = |g_i ⟩⟨ e_i |) as the raising (lowering) operator for atom i. The first term describes coherent exchange interactions between the atoms mediated by the vacuum electromagnetic field, while the second corresponds to a plane wave drive perpendicular to the atomic array with Rabi frequency Ω. The dissipative nature of the system is described by the Lindbladian ℒ(Ô) =∑_i, jΓ_i j/2(2 σ̂_i^+Ôσ̂_j^--σ̂_i^+σ̂_j^-Ô-Ôσ̂_i^+σ̂_j^-). The light-induced coherent J_ij and dissipative Γ_ij interactions between atoms i and j are obtained from the Green's tensor for a point dipole in vacuum G, given in Appendix <ref>, via J_i j-i Γ_i j / 2 =-3 πγ_0/ω_0𝐝^†·𝐆(𝐫_i j, ω_0) ·𝐝, where r_ij = r_i- r_j is the vector connecting atoms i and j and d is the transition dipole moment. For the remainder of this work we choose d = (0, 0, 1)^T. Here, Γ_ii = γ_0 corresponds to the spontaneous decay rate of a single atom in vacuum, and the Lamb shift J_ii is included in the definition of the resonance frequency. The last term in fig:model, ℱ(Ô), represents the quantum Langevin noise that arises from vacuum fluctuations <cit.>. Assuming white noise, the expectation value ⟨ℱ(Ô)⟩ vanishes. Because we are ultimately interested in the expectation values of atomic operators ⟨Ô⟩, we drop ℱ(Ô) from here onward to simplify notation. 
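As a concrete illustration of how the coupling matrices entering eq:Hamiltonian and eq: dissipative can be evaluated, the sketch below implements the free-space Green's tensor quoted in the appendix (with the contact delta term omitted) together with the relation between J_ij - iΓ_ij/2 and d†·G·d. The prefactor is written here as 3πγ0/k0 with k0 = ω0/c, the diagonal is set by hand to Γ_ii = γ0 and J_ii = 0 (the Lamb shift being absorbed into ω0), and the array size and spacing in the example are illustrative assumptions.

```python
import numpy as np

def greens_tensor(r_vec, k):
    """Free-space dyadic Green's tensor G(r, omega) for a point dipole (delta term omitted)."""
    r = np.linalg.norm(r_vec)
    kr = k * r
    rhat = r_vec / r
    term1 = 1.0 + 1j / kr - 1.0 / kr**2
    term2 = -1.0 - 3j / kr + 3.0 / kr**2
    return np.exp(1j * kr) / (4.0 * np.pi * r) * (term1 * np.eye(3) + term2 * np.outer(rhat, rhat))

def coupling_matrices(positions, lam0=1.0, gamma0=1.0, dvec=(0.0, 0.0, 1.0)):
    """Coherent (J) and dissipative (Gamma) couplings for dipoles polarized along dvec."""
    k = 2.0 * np.pi / lam0
    d = np.asarray(dvec, dtype=float)
    n = len(positions)
    J = np.zeros((n, n))
    Gamma = gamma0 * np.eye(n)          # Gamma_ii = gamma0; Lamb shift absorbed into omega_0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            g = d @ greens_tensor(positions[i] - positions[j], k) @ d
            c = -3.0 * np.pi * gamma0 / k * g          # J_ij - i Gamma_ij / 2
            J[i, j] = c.real
            Gamma[i, j] = -2.0 * c.imag
    return J, Gamma

if __name__ == "__main__":
    a = 0.2                                             # lattice spacing in units of lambda_0 (assumed)
    pos = np.array([[ix * a, iy * a, 0.0] for ix in range(6) for iy in range(6)])
    J, Gamma = coupling_matrices(pos)
    print("Gamma positive semidefinite:", np.linalg.eigvalsh(Gamma).min() >= -1e-9)
    print("Gamma_eff for the first (corner) atom:", Gamma[0, 1:].sum())
```

The printed positive-semidefiniteness check anticipates the bound on Γ_eff for permutationally symmetric arrays discussed below.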
Using the relations σ_i^± = (σ_x^i ±σ_y^i)/2, we obtain the equations of motion for the Pauli operators of the k^th atom from fig:model, d σ̂_x^k/dt = - 1/2γ_0 σ_x^k + ∑_i ≠ k J_kiσ̂_y^iσ_z^k + 1/2∑_i ≠ kΓ_kiσ̂_x^iσ̂_z^k, d σ̂_y^k/dt = - Ωσ̂_z^k - 1/2γ_0 σ̂_y^k - ∑_i ≠ k J_kiσ̂_x^iσ̂_z^k + 1/2∑_i ≠ kΓ_kiσ̂_y^iσ̂_z^k, d σ̂_z^k/dt = Ωσ̂_y^k - γ_0(σ̂_z^k + 1) - ∑_i ≠ k J_ki (σ̂_y^iσ̂_x^k - σ̂_x^iσ̂_y^k) -1/2∑_i ≠ kΓ_ki(σ̂_x^iσ̂_x^k + σ̂_y^iσ̂_y^k). Obtaining the full solution of these equations requires calculating additional equations for the higher-order operators σ̂^i_yσ̂^k_z, σ̂^i_xσ̂^k_z, σ̂^i_yσ̂^k_x, σ̂^i_xσ̂^k_y, σ̂^i_xσ̂^k_x, σ̂^i_yσ̂^k_y, which in turn will again depend on Pauli strings with higher-weight. The number of equations required to describe the system exactly grows exponentially with the number of atoms, making an exact solution of the master equation (<ref>) unfeasible for large arrays. Hence, in this work, we instead rely on a mean-field approximation of eq:pauli to drastically reduce the number of equations. We apply the mean-field decoupling for the two-point correlators, ⟨ÂB̂⟩ = ⟨Â⟩⟨B̂⟩, and obtain from eq:pauli the equations of motion for the expectation values s_x,y,z^k≡⟨σ_x,y,z^k⟩. They read d s_x^k/dt = - 1/2γ_0 s_x^k + ∑_i ≠ k J_ki s_y^is_z^k + 1/2∑_i ≠ kΓ_ki s_x^is_z^k, d s_y^k/dt = - Ω s_z^k - 1/2γ_0 s_y^k - ∑_i ≠ k J_ki s_x^is_z^k + 1/2∑_i ≠ kΓ_ki s_y^is_z^k, d s_z^k/dt = Ω s_y^k - γ_0(s_z^k + 1) - ∑_i ≠ k J_ki (s_y^is_x^k - s_x^is_y^k) - 1/2∑_i ≠ kΓ_ki(s_x^is_x^k + s_y^is_y^k). This set of equations is at the core of the analysis presented below. § PERMUTATIONALLY SYMMETRIC CONFIGURATIONS We first study the evolution of the system at the mean-field level for permutationally symmetric configurations where all atoms are indistinguishable due to spatial symmetries, s_x, y, z≡ s_x, y, z^k for all k ∈{1, ..., N}. This is the case for periodic arrays in the thermodynamic limit N →∞ and for finite geometries such as rings of atoms with perpendicular polarization. Defining the effective coherent and dissipative interaction strengths, J_eff = ∑_i=2^N J_1 i and Γ_eff = ∑_i=2^N Γ_1 i, eq: mean-field simplifies to three equations d s_x/dt = J_eff s_y s_z -1/2 (γ_0 - Γ_eff s_z) s_x , d s_y/dt = - Ω s_z - J_eff s_x s_z -1/2 (γ_0 - Γ_eff s_z) s_y , d s_z/dt = Ω s_y - γ_0 (s_z + 1) - 1/2Γ_eff(s_x^2 + s_y^2). This is an effective single particle model that exactly describes the mean-field dynamics of permutationally symmetric configurations <cit.>. §.§ Role of effective collective interactions The dynamics of the system at late times strongly depends on the effective collective interactions Γ_eff and J_eff. To understand this dependency, we characterize the range of possible driven-dissipative dynamics for the permutationally symmetric configuration described by  (<ref>) as a function of the effective dissipative interaction Γ_eff and the drive strength Ω at a fixed effective coherent interaction J_eff = 3 γ_0 [see fig: thermodynamic phase diagram(a)]. Note that similar results are observed for other values of J_eff. We combine an analytical and a numerical analysis to characterize the different parameter regimes of eq: thermodynamic mean-field. Analytically, we characterize the physicality and stability of the equilibrium points of eq: thermodynamic mean-field. 
Solving for the steady state spin expectation values (ds^k_α/dt = 0, α∈x,y,z) of eq: thermodynamic mean-field yields three equilibrium points {s_x^ss, s_y^ss, s_z^ss}, which can be nicely distinguished by their s_z^ss value [see blue trace in fig: lattice spacing cut(a)]. Mathematically, the steady state solutions may be complex. However, as expectation values must be real, only the purely real fixed points are physically meaningful. We therefore call a steady state solution physical if the expectation values of s_x^ss, s_y^ss and s_z^ss are real-valued. We then determine the stability by analyzing the eigenvalues of the Jacobian matrix evaluated at each physical equilibrium point and use the following classification terms: (i) Focus-Node: the Jacobian has one real eigenvalue and a pair of complex-conjugate eigenvalues. All eigenvalues have real parts of the same sign; the focus-node is stable if the sign is negative and unstable if the sign is positive. (ii) Saddle-Focus: the Jacobian has one real eigenvalue and a pair of complex-conjugate eigenvalues; the sign of the real eigenvalue is opposite that of the real part of the complex-conjugate pair. We say the saddle-focus has a one-dimensional stable manifold if the real eigenvalue is negative; otherwise, it has a two-dimensional stable manifold. A saddle-focus is always unstable along at least one direction in phase space. Based on the physicality and stability of all equilibrium points (which depend on J_eff, Γ_eff and Ω), we can draw conclusions on the nature of the resulting dynamics. We say the system is monostable if there is only one stable steady state solution. Conversely, we say the system is bistable if there are two stable steady state solutions. When the system has a saddle-focus equilibrium point, non-trivial behaviors such as limit cycles and chaotic dynamics can emerge. Numerically, we characterize the different parameter regimes by evaluating how the distance between two neighboring trajectories evolves as a function of time. More precisely, we initialize all atoms in their ground state and evolve eq: thermodynamic mean-field up to γ_0 t = 2000 to ensure the orbit has reached its attractor, the set of states toward which the system tends to evolve. The attractor can either be a point, a cycle, or a chaotic structure. We then choose the state of the system at a random late time close to γ_0 t = 2000, and apply a small displacement with length d ≪ 1 to this state. We then propagate the states with and without displacement for an additional γ_0t = 200 and compute the distance between both orbits as a function of time. We repeat this process for five initial states and eight displacement vectors (see Appendix <ref> for details), and finally obtain d_avg/d_0, where d_avg is the average distance at the final time and d_0 is the average value of d. This quantity is inspired by the determination of the Lyapunov exponent <cit.>, a key quantity used to characterize the dynamics of non-linear systems. While similar conclusions can be reached by analyzing the Lyapunov exponent, d_avg/d_0 is the more robust quantity for the system studied here. In particular, it distinguishes the different dynamical behaviors: d_avg/d_0 grows, stagnates and shrinks when the system respectively evolves into chaotic dynamics, a limit cycle, or a stable steady state point. 
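The analytical part of this characterization can be reproduced numerically. The sketch below locates the real equilibrium points of eq: thermodynamic mean-field with a root finder and classifies them by the eigenvalues of the hand-derived Jacobian; the parameter values in the example are illustrative, and the labels follow the terminology introduced above.

```python
import numpy as np
from scipy.optimize import fsolve

def rhs(s, jeff, geff, omega, g0=1.0):
    sx, sy, sz = s
    a = -0.5 * (g0 - geff * sz)
    return np.array([jeff*sy*sz + a*sx,
                     -omega*sz - jeff*sx*sz + a*sy,
                     omega*sy - g0*(sz + 1.0) - 0.5*geff*(sx**2 + sy**2)])

def jacobian(s, jeff, geff, omega, g0=1.0):
    sx, sy, sz = s
    a = -0.5 * (g0 - geff * sz)
    return np.array([[a,          jeff*sz,          jeff*sy + 0.5*geff*sx],
                     [-jeff*sz,   a,                -omega - jeff*sx + 0.5*geff*sy],
                     [-geff*sx,   omega - geff*sy,  -g0]])

def classify_fixed_points(jeff, geff, omega, g0=1.0, n_guess=200, seed=0):
    """Find real equilibrium points from random initial guesses and classify them."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_guess):
        guess = rng.uniform(-1.0, 1.0, 3)
        sol, info, ier, _ = fsolve(rhs, guess, args=(jeff, geff, omega, g0), full_output=True)
        if ier != 1 or np.max(np.abs(rhs(sol, jeff, geff, omega, g0))) > 1e-9:
            continue
        if any(np.allclose(sol, p, atol=1e-6) for p, _ in points):
            continue
        ev = np.linalg.eigvals(jacobian(sol, jeff, geff, omega, g0))
        has_pair = np.any(np.abs(ev.imag) > 1e-9)        # one real eigenvalue + complex pair
        if np.all(ev.real < 0):
            label = "stable " + ("focus-node" if has_pair else "node")
        elif np.all(ev.real > 0):
            label = "unstable " + ("focus-node" if has_pair else "node")
        else:
            kind = "saddle-focus" if has_pair else "saddle"
            label = f"{kind}, {int(np.sum(ev.real < 0))}D stable manifold"
        points.append((sol, label))
    return points

if __name__ == "__main__":
    for s, label in classify_fixed_points(jeff=3.0, geff=-2.0, omega=1.5):
        print(np.round(s, 3), label)
```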
We identify four regimes with different stability properties [see fig: thermodynamic phase diagram(a)]: (i) In the monostable (Mono) regime, the only physical equilibrium point is a stable focus-node. All initial conditions evolve to this point at late times. This is confirmed by the distance between neighboring trajectories, d_avg/d_0, which decreases for a system initially in the ground state. A representative steady state trajectory is depicted on the Bloch sphere in fig: thermodynamic phase diagram(b). (ii) In the bistable (Bi) regime, all three equilibrium points are physical. However, two are stable focus-nodes, while the third is a saddle-focus with a two-dimensional stable manifold. Consequently, all initial conditions reach one of the two stable fixed points, resulting in bistability. This phenomenon is illustrated by the Bloch sphere in fig: thermodynamic phase diagram(e) for two distinct initial conditions. Because of the stable nature of the trajectories, small displacements d result in trajectories evolving into the same stable fixed point and lead to a decrease in d_avg/d_0. (iii) In the (LC/Ch) regime, the only physical equilibrium point is a saddle-focus with a one-dimensional stable manifold. In this regime, only limit cycle and chaotic behavior are possible. Both types of attractors are distinguished via d_avg/d_0, which remains constant in the former case [white regime in fig: thermodynamic phase diagram(a)] and grows in the latter [red regime in fig: thermodynamic phase diagram(a)]. Representative limit cycle and chaotic trajectories are shown in fig: thermodynamic phase diagram(c) and fig: thermodynamic phase diagram(d), respectively. (iv) In the (LC/Ch/Mono) regime, all three equilibrium points are physical: a stable focus-node, a saddle-focus with a one-dimensional stable manifold, and a saddle-focus with a two-dimensional stable manifold. The system can thus evolve to a fixed point, a limit cycle, or chaotic behavior depending on the initial state and the values of Ω, Γ_eff and J_eff. For the parameters considered in fig: thermodynamic phase diagram(a), the ground state is in the basin of attraction of the fixed point, and d_avg/d_0 consequently decreases. From fig: thermodynamic phase diagram(a), it becomes apparent that limit cycles and chaos only arise when Γ_eff < -γ_0. That is, only mono- and bistability are possible for Γ_eff ≥ -γ_0. Intuitively, we can understand this phenomenon from the term -1/2(γ_0 - Γ_eff s_z) ≡ α that appears in eq: thermodynamic mean-field sx and eq: thermodynamic mean-field sy. Noting that s_z^ss < 0, α can be expressed for s_z < 0 as α = -1/2(γ_0 - Γ_eff s_z) = -1/2 γ_0 (1+Γ_eff/γ_0 |s_z|). For Γ_eff ≥ -γ_0, we obtain α < 0. Then, the terms α s_x in eq: thermodynamic mean-field sx and α s_y in eq: thermodynamic mean-field sy are loss terms. For Γ_eff < -γ_0, however, we obtain α > -γ_0 / 2. In particular, one can attain α > 0, giving rise to gain in the system and thereby enabling the emergence of limit cycles and chaos. §.§ Dynamics for permutationally symmetric atomic arrays For atomic arrays with full permutation symmetry such as rings and periodic lattices with a single atom per unit cell, the minimum effective dissipative interaction is bounded from below by Γ_eff = -γ_0. This follows from the property that the dissipation matrix Γ=(Γ_i j)_i, j=1^N is positive semidefinite <cit.>. Let 1_N denote the N × 1 column vector of ones. From the definition of a positive semidefinite matrix, it follows that γ_Γ ≡ 1/N 1_N^⊤ Γ 1_N ≥ 0. 
Because all atoms are identical, ∑_i=1^N Γ_i, j = γ_0 + Γ_eff independently of j. Then γ_Γ = 1/N(γ_0 + Γ_eff) 1_N^⊤1_N = γ_0 + Γ_eff≥ 0. It readily follows that Γ_eff≥ -γ_0, which implies that arrays with full permutation symmetry cannot exhibit limit cycle and chaotic behaviors regardless of the driving strength Ω. While beyond the scope of the present work, it is worth investigating if engineering alternative dissipation channels or adding incoherent drive can enable the emergence of limit cycles and chaos in such arrays. Nonetheless, bistability persists in both one-dimensional and two-dimensional arrays within a specific range of lattice spacings. In  fig: lattice spacing cut(a), we plot s_z^ss as a function of the drive strength Ω for a lattice spacing corresponding to monostability (magenta trace) and another corresponding to bistability (blue trace), for two-dimensional square arrays in the thermodynamic limit. We define the bistable width L as the length of the interval [Ω_1, Ω_2] that supports bistability. We find L by numerically computing the minimum and maximum Ω such that all three equilibrium points are physical. The effective couplings for atomic chains and two-dimensional square arrays are plotted in fig: lattice spacing cut(b), while the corresponding bistable widths L are shown as a function of spacing a in fig: lattice spacing cut(c). Bistability for permutationally symmetric geometries occurs for a ≲ 0.27 λ_0 for two-dimensional squared arrays and a ≲ 0.14 λ_0 for chains. In the case of two-dimensional arrays, additional narrow bistable regimes emerge for spacings where J_eff and Γ_eff diverge due to constructive interference, for a/λ_0 = 1, a/λ_0 = √(2), etc. § FINITE SIZE ARRAYS As discussed in Section <ref>, permutationally symmetric geometries such as rings and infinite one- and two-dimensional arrays do not exhibit limit cycles or chaotic dynamics. This also implies that the dynamics of a minimal square array of four atoms (which is identical to a four-atom ring) is also limited to mono- and bistability at the mean-field level. This raises the question of how increasing the number of particles beyond N=4 and breaking the full permutation symmetry affects the system's dynamics. In this section, we use the full mean-field model described in eq: mean-field to study the late-time behavior of finite two-dimensional atomic square arrays with N > 4. Breaking the full permutation symmetry indeed extends the accessible dynamics beyond just mono- and bistability, and we uncover both limit cycle and chaotic behaviors in lattices containing as little as nine atoms (see Appendix <ref>). For the set of equations (<ref>), an analytical stability analysis is intractable. We rely on the numerical stability analysis based on the system's response to a perturbation once it reaches its attractor. In fig: mean-field phase diagram(a), we show the stability analysis for a square lattice with N=36 atoms initially in the ground state. Exemplary phase space trajectories on the Bloch sphere corresponding to steady state, limit cycle, and chaotic dynamics are shown respectively in fig: mean-field phase diagram(b) through  fig: mean-field phase diagram(d). These dynamics can be nicely distinguished by the growth of the separation distance d_avg/d_0 between two nearby trajectories (for details see Appendix <ref>). Note that similar results can be obtained for other atom numbers (see <ref>). The size of the regime exhibiting non-trivial limit cycle or chaotic dynamics changes with system size. 
Based on the findings presented in section <ref>, this regime is expected to vanish for N→∞ once the lattice approaches the thermodynamic limit. Beyond limit cycles and chaos, we also observe that bistability and dual behaviors persist at the mean field level for finite sized arrays. fig: bistability/dual behavior(a) illustrates an example of bistability within a nine atom square array, where different initial states lead the central atom to evolve towards distinct steady states. fig: bistability/dual behavior(b) illustrates a dual behavior example, where the central atom can evolve towards a steady state or a limit cycle depending on the initial state. Note that we have not ruled out the possibility of other dual and more complex behaviors, such as multistability. While such an analysis warrants further study, it goes beyond the scope of the present work. Finally, it is worth noting that the amplitude of limit cycle and chaotic dynamics is significantly reduced for the averaged spin components (e.g. s_z =1/N∑_k s_z^k). Although each atom individually exhibits either limit cycles or chaotic behavior, they oscillate or fluctuate out of phase and centered around different average values. Consequently, the average spin expectation value exhibits much smaller oscillations in time. § CONCLUSIONS AND OUTLOOK We have analyzed the mean-field dynamics of a driven dissipative array of atoms in a two-dimensional periodic lattice. Based on an effective single particle mean-field model that captures permutationally symmetric configurations, we determined the possible non-trivial dynamics that can occur in such a system by tuning the effective parameters. Interestingly we find that only mono- and bistability persist for the parameter space accessible for dipole-dipole coupled atom arrays with permutationally symmetric geometry. However, this limitation vanishes for the case of finite system sizes, where we find limit cycles, chaotic dynamics, and bistability in addition to the trivial monostable steady state solution. The different types of dynamics can be accessed by simply changing the spacing of the lattice or the strength of the driving field. While a treatment of the full quantum model is intractable for the system sizes required to attain non-ergodic dynamics over a wide range of parameters, our findings motivate further studies including quantum correlations. Because theoretical treatments in this case will always be limited to approximate numerical methods, this also renders an exciting avenue for experiments. Our findings unveil the fascinating many-body dynamics that the new generation of subwavelength optical lattice or tweezer experiments can uncover by combining dissipation and drive. In the full quantum case, the chaotic dynamics could also provide an intriguing route towards fast scrambling or the generation of spin squeezed states in driven dissipative light-matter systems. The observed non-ergodic behavior in dissipative finite-size systems with long-range interactions also provides an ideal test bed to analyze the role of dissipation and the interaction range for ergodicity breaking in open quantum systems <cit.>. Hence, our work paves the way for future explorations of many-body quantum dynamics in open quantum systems and has potential implications for technological advancements in areas such as quantum sensing, information processing and quantum simulation. The authors would like to thank Ana-Maria Rey and Na Li for useful discussions. V.Z. 
was supported by the Harvard College Research Program and the Herchel Smith Undergraduate Science Research Program. S.O. is supported by a postdoctoral fellowship of the Max Planck Harvard Research Center for Quantum Optics. O.R.B. acknowledges support from Fundación Mauricio y Carlota Botton and from Fundació Bancaria “la Caixa” (LCF/BQ/AA18/11680093). S.F.Y would like to acknowledge funding from NSF through the CUA PFC PHY-2317134, the Q-SEnSE QLCI OMA-2016244 and PHY-2207972. The numerical results were obtained using the Quantumoptics.jl package <cit.> partially using Harvard University’s FAS Research Computing infrastructure. § GREEN'S FUNCTION The Green's function for a point dipole that determines the interaction strength J_ij and the collective dissipation Γ_ij in eq:Hamiltonian and eq: dissipative can be written in Cartesian coordinates as <cit.> G_αβ(𝐫,ω) = e^i k r/4π r[ ( 1 + i/kr - 1/(kr)^2) δ_αβ. + . (-1 - 3i/kr + 3/(kr)^2) r_α r_β/r^2] + δ_αβδ^(3)(𝐫)/3k^2, where k=ω_0/c, r=|𝐫|, and α,β=x,y,z. § DETERMINING THE SEPARATION DISTANCE BETWEEN TWO NEARBY TRAJECTORIES For the dynamical system of the form 𝐱̇ =𝐟(𝐱, γ_0 t), the maximal Lyapunov exponent λ indicates the chaotic or regular nature of orbits and characterizes the rate of separation of infinitely close trajectories <cit.>. It is known that in a three-dimensional system, λ > 0 for a chaotic attractor; λ = 0 for a limit cycle; and λ < 0 for a steady state solution <cit.>. We estimate λ by monitoring the rate of the change of the distance |Δ𝐱̃(γ_0 t)| between a pair of initially close trajectories, where <cit.> |Δ𝐱̃(γ_0 t)| ≈|Δ𝐱̃(0)| e^λγ_0 t. In particular, for the thermodynamic limit, we implement the following algorithm given an effective dissipative interaction strength Γ_eff and driving field strength Ω: * Solve eq: thermodynamic mean-field [which takes the form of eq: model] up to γ_0 t = 2000 with all atoms initially in their respective ground state. * Choose 𝐱_0 = 𝐱(γ_0 t = 1600) as a point on the attractor. * For each separation vector 𝐝 in the list [(ϵ, 0, 0), (-ϵ, 0, 0), (0, ϵ, 0), (0, -ϵ, 0), (0, 0, ϵ), (0, 0, -ϵ), (ϵ, ϵ, ϵ), (-ϵ, -ϵ, -ϵ)], choose 𝐱_0+𝐝 as the nearby point, where ϵ = 10^-5. These eight separation vectors are chosen to encompass the three dimensional space of the Bloch sphere. * Advance each orbit up to γ_0 t = 200 for both 𝐱_0 and 𝐱_0+𝐝 as initial conditions. * Calculate the separation distance |Δ𝐱̃(γ_0 t)| between the two nearby orbits, and compute the average final separation distance d_end from γ_0 t= 180 to γ_0 t = 200. * Define t_fit as the first instance in which |Δ𝐱̃(γ_0 t_fit)| = 1/2 |d_end - d|. Fit an exponential curve up to t_fit to estimate the Lyapunov exponent λ. * Repeat steps 3-6 for 𝐱_0 = 𝐱(γ_0 t = 1700), 𝐱_0 = 𝐱(γ_0 t = 1800), 𝐱_0 = 𝐱(γ_0 t = 1900), 𝐱_0 = 𝐱(γ_0 t = 2000) for a total of forty trials. Average over all trials to obtain an average estimate of λ and an average value d_avg for d_end. For the finite mean field model in eq: thermodynamic mean-field, the procedure is analogous, except that the separation vector 𝐝 is applied uniformly to the corresponding coordinate of every particle. For instance, the separation vector 𝐝 = (ϵ, 0,0) becomes 𝐝 = (ϵ, ϵ, …, ϵ_N times , 0,0, …, 0_N times , 0,0, …, 0_N times ), and likewise for other separation vectors in eq: separation vectors. Here, we use the notation 𝐱 = (s_x^1, .., s_x^N, s_y^1, .., s_y^N, s_z^1, .., s_z^N). 
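A minimal sketch of this separation-distance diagnostic, applied to the effective single-particle model, is given below. It follows steps 1 to 7 above, except that the exponential fit of step 6 is skipped (only d_avg/d_0 is returned) and the empirical mean of the displacement lengths is used for d_0; the parameter values in the example are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, jeff, geff, omega, g0=1.0):
    """Effective single-particle mean-field equations."""
    sx, sy, sz = s
    a = -0.5 * (g0 - geff * sz)
    return [jeff*sy*sz + a*sx,
            -omega*sz - jeff*sx*sz + a*sy,
            omega*sy - g0*(sz + 1.0) - 0.5*geff*(sx**2 + sy**2)]

def separation_growth(jeff, geff, omega, eps=1e-5, g0=1.0):
    """Average late-time separation d_avg/d_0 between neighbouring trajectories."""
    args = (jeff, geff, omega, g0)
    # step 1: relax from the ground state onto the attractor
    base = solve_ivp(rhs, (0.0, 2000.0), [0.0, 0.0, -1.0], args=args,
                     dense_output=True, rtol=1e-9, atol=1e-11)
    displacements = eps * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0],
                                    [0, 0, 1], [0, 0, -1], [1, 1, 1], [-1, -1, -1]], dtype=float)
    d0 = np.mean(np.linalg.norm(displacements, axis=1))       # empirical average initial separation
    d_end = []
    for t0 in (1600.0, 1700.0, 1800.0, 1900.0, 2000.0):       # steps 2 and 7: five reference states
        x0 = base.sol(t0)
        ref = solve_ivp(rhs, (0.0, 200.0), x0, args=args, dense_output=True,
                        rtol=1e-9, atol=1e-11)
        t_late = np.linspace(180.0, 200.0, 50)                # average over the last 20 time units
        for d in displacements:                               # steps 3 to 5
            pert = solve_ivp(rhs, (0.0, 200.0), x0 + d, args=args, dense_output=True,
                             rtol=1e-9, atol=1e-11)
            sep = np.linalg.norm(ref.sol(t_late) - pert.sol(t_late), axis=0)
            d_end.append(sep.mean())
    return np.mean(d_end) / d0

if __name__ == "__main__":
    # illustrative parameters, in units of gamma_0
    print("d_avg/d_0 =", separation_growth(jeff=3.0, geff=-4.0, omega=2.0))
```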
However, for our models in eq: mean-field and eq: thermodynamic mean-field, we find that λ is unable to distinguish between steady states and limit cycles. In particular, while λ > 0 for chaotic behavior (see fig: distance growth(c)) as desired, λ < 0 for both steady states and limit cycles (see fig: distance growth(a) and fig: distance growth(b)). This arises due to the sensitivity of the fitting process in Step 6 to the choice of t_fit. Nonetheless, the separation distance itself, rather than the rate of separation, nicely characterizes the three different dynamical behaviors, as shown in fig: thermodynamic phase diagram(a) and fig: mean-field phase diagram(a). Here, d_0 = (2+√(3))/3 ϵ is the average initial separation distance in the thermodynamic limit, and d_0 = √(N)(2+√(3))/3 ϵ in the finite-size case. We define d_avg/d_0 ≤ 10^-1 as steady state, d_avg/d_0 > 10^0 as chaos, and intermediate values as limit cycles. § DEPENDENCY OF NON-STEADY STATE DYNAMICS ON N In the main text, we noted that limit cycles and chaotic behavior arise in two-dimensional square arrays with as few as nine atoms. In fig: N dependency, we show the growth of the separation distance d_avg between two nearby trajectories as a function of N from N = 3^2 up to N=12^2 with all atoms initially in the ground state. These finite system sizes are currently available in state-of-the-art experimental setups. We note that there appears to be no clear relationship between N and the size of the nontrivial limit cycle and chaotic regimes.
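To make such an N-dependence study concrete, the sketch below assembles the dipole couplings for small square arrays and integrates the full mean-field equations eq: mean-field from the ground state. Instead of the full d_avg/d_0 diagnostic it uses the late-time fluctuations of a central atom's s_z as a rough indicator of non-steady-state dynamics; the spacing, drive strength and integration times are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def couplings(positions, gamma0=1.0, lam0=1.0):
    """J and Gamma matrices for z-polarized dipoles (free-space Green's tensor, delta term omitted)."""
    k = 2 * np.pi / lam0
    n = len(positions)
    J, G = np.zeros((n, n)), gamma0 * np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            rv = positions[i] - positions[j]
            r = np.linalg.norm(rv)
            kr = k * r
            rz = rv[2] / r                                # z-component of the unit separation vector
            gzz = np.exp(1j*kr) / (4*np.pi*r) * ((1 + 1j/kr - 1/kr**2)
                                                 + (-1 - 3j/kr + 3/kr**2) * rz**2)
            c = -3 * np.pi * gamma0 / k * gzz             # J_ij - i Gamma_ij / 2
            J[i, j] = J[j, i] = c.real
            G[i, j] = G[j, i] = -2 * c.imag
    return J, G

def rhs(t, s, J, G, omega, g0=1.0):
    """Mean-field equations for N atoms; s = (s_x, s_y, s_z) concatenated."""
    n = len(s) // 3
    sx, sy, sz = s[:n], s[n:2*n], s[2*n:]
    Jo, Go = J - np.diag(np.diag(J)), G - np.diag(np.diag(G))
    dsx = -0.5*g0*sx + (Jo @ sy)*sz + 0.5*(Go @ sx)*sz
    dsy = -omega*sz - 0.5*g0*sy - (Jo @ sx)*sz + 0.5*(Go @ sy)*sz
    dsz = (omega*sy - g0*(sz + 1) - ((Jo @ sy)*sx - (Jo @ sx)*sy)
           - 0.5*((Go @ sx)*sx + (Go @ sy)*sy))
    return np.concatenate([dsx, dsy, dsz])

# crude scan over array size (spacing and drive in units of lambda_0 and gamma_0 are illustrative)
for side in (3, 4, 5, 6):
    a = 0.15
    pos = np.array([[ix*a, iy*a, 0.0] for ix in range(side) for iy in range(side)])
    J, G = couplings(pos)
    n = side * side
    s0 = np.concatenate([np.zeros(n), np.zeros(n), -np.ones(n)])
    sol = solve_ivp(rhs, (0, 600), s0, args=(J, G, 2.5), rtol=1e-7, atol=1e-9,
                    t_eval=np.linspace(400, 600, 400))
    c_idx = (side // 2) * side + side // 2                # a central atom of the array
    sz_c = sol.y[2*n + c_idx]
    print(f"N = {n:3d}: late-time std of s_z = {sz_c.std():.3e}")
```

A vanishing late-time standard deviation indicates relaxation to a fixed point, while a sizable value signals persistent limit-cycle or chaotic oscillations.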
http://arxiv.org/abs/2406.18164v1
20240626082444
NeBuLa: A discourse aware Minecraft Builder
[ "Akshay Chaturvedi", "Kate Thompson", "Nicholas Asher" ]
cs.CL
[ "cs.CL", "cs.LG" ]
§ ABSTRACT When engaging in collaborative tasks, humans efficiently exploit the semantic structure of a conversation to optimize verbal and nonverbal interactions. But in recent “language to code" or “language to action" models, this information is lacking. We show how incorporating the prior discourse and nonlinguistic context of a conversation situated in a nonlinguistic environment can improve the “language to action" component of such interactions. We fine-tune an LLM to predict actions based on prior context; our model, NeBuLa, doubles the net-action F1 score over the baseline on the task of <cit.>. We also investigate our model's ability to construct shapes and understand location descriptions using a synthetic dataset. § INTRODUCTION High-level building agents use conversation in a collaborative task to combine information about the extant conversation, the world, and prior actions to execute new instructions. Such agents interpret messy or vague language, produce actions, then reassess the situation, ask questions or take in corrections from other agents to optimize their actions. Successful collaborative conversations are vital for efficiently performing complex interactive tasks. In this paper, we study the messy language of ordinary human collaborative conversation, and how a large language model can learn to execute instructions from such conversations. We isolate several factors that affect this task. The first factor is the interaction between linguistic and nonlinguistic contexts. Previous work has shown that at least some context is needed to understand and carry out conversationally given instructions <cit.>. We improve on that work by first establishing a baseline by using the entire exchange up to an instruction i as a context for an LLM to interpret i. Our LLM model, NeBuLa (Neural Builder with Llama), trained on the Minecraft Dialogue Corpus (MDC) <cit.>, achieves a net-action F1 score that is almost double that of <cit.>. 
Using the Minecraft Structured Dialogue dataset (MSDC) <cit.>, which provides semantic relations between MDC dialogue moves and nonlinguistic actions, we show that particular discursive components of the linguistic and nonlinguistic context are necessary and sufficient for the LLM to understand an instruction to the degree provided by the baseline. Analysing NeBuLa's performance revealed two other factors that importantly adversely affect its performance. An instruction in the MSDC has two basic components: a description of a shape in terms of four parameters—numbers of components, colors, arrangement and orientation— and the description of a location where the shape should be placed. Human Architects often use analogies to everyday objects that may be challenging to process, and in addition shape descriptions are often underspecified, meaning that one could perform the instruction correctly in various ways. Location descriptions in the Minecraft world are also quite difficult to process and highly underspecified. For example, put a tower in a corner could be correctly located in any of the four corners of the Minecraft board. We address this problem in two ways: first by further finetuning NeBuLa on a synthetic dataset to improve its performance in building basic shapes and locating them appropriately; and secondly, and more importantly, by revising the evaluation metric used by <cit.> to reflect more realistically the semantics of location expressions. We show that, on our synthetic dataset, NeBuLa achieves high accuracy as per our intuitive metric in performing basic instructions. After some preliminaries and discussion of prior work (Section <ref>), we present the NeBuLa model and its baseline performance in Section <ref>, and then a necessary and sufficient discourse feature to get scores equivalent to the baseline in Section <ref>. In Section <ref>, we explain several issues associated with Minecraft corpus. We try to address these issues in Section <ref>. In this section, we explain our evaluation metric for underspecified instructions, as well as experiments on our synthetic datasets. § RELATED WORK MDC <cit.> introduced a corpus of two person dialogues situated in a simulated Minecraft environment. The dialogues record conversations about collaborative tasks, in which an Architect and a Builder cooperate to build sometimes complex 3-dimensional shapes out of blocks of six different colors. The Architect provides instructions, while the Builder is tasked with translating these instructions into actions. The Builder sometimes asks questions, and the Architect may correct themselves or the Builder, or both, concerning both linguistic and nonlinguistic moves. The corpus accurately reflects the variety and complexity of actual cooperative conversation. Details on the MDC are in Table <ref>. Instructions to code: Neural Builder and variants The MDC <cit.> incentivized the development of an algorithm that could predict sequences of actions from instructions. The actions involved basic moves of placing or removing blocks from certain positions in the environment. <cit.> trained a model consisting of a GRU <cit.> to handle textual input coupled with a CNN to integrate information from the current state and a GRU to predicted an action sequence. 
Although they experimented with several training regimes, the best performance came from one in which the model is given the sequence of conversational moves that follows an action sequence (assumed to be instructions), the Builder's next action sequence, and the subsequent sequence of linguistic moves, and from this input predicts the following action sequence. (See Figure <ref>). The net-action F1 metric evaluates a model's prediction based on the exact color and coordinate match between the model's predicted sequence and the Builder's gold action sequence. In general, <cit.> showed that the problem of predicting action sequences from natural language instructions in naturally occurring dialogue remains extremely challenging. Their Neural Builder had a net-action F1 of 0.20 on the MDC test set. <cit.> propose a somewhat different task from <cit.>; they try to predict when the Builder should execute an action and when they should instead ask for a clarification question. To this end, they annotated all Builder dialogue moves with a taxonomy of dialogue acts. They then specified a single specific action under the execution label instead of a sequence of actions. Thus, their set-up is not directly comparable to that of <cit.>. <cit.> added dialogue acts to Minecraft utterances, but they did not evaluate the effect of these dialogue acts on the Neural Builder's predictions of actions. Dialogue acts are a partial step towards a full discourse structure; they provide labels for various dialogue moves, but the full discourse structure that we propose to use involves relations between moves. These relations are important as they tell us how to link different parts of, for instance, an instruction into a coherent whole. As we aim to demonstrate in this paper, discourse structure can help to clean up datasets for training and thereby improve training. MSDC <cit.> provided full discourse annotations for the Minecraft corpus, known as the Minecraft Structured Dialogue Corpus (MSDC), using the discourse theory and annotation principles of SDRT <cit.> extended to a multimodal environment, in which both nonlinguistic actions and discourse moves can enter into semantic relations like Elaboration, Correction, and Narration <cit.>. They followed annotation practices given for the STAC corpus <cit.>. <cit.> also adapted the parser from <cit.> to predict discourse structures for the Minecraft corpus with relatively high reliability. Statistics on the MSDC are in Table <ref>. LLMs in robotics Parallel to this work, there has been an increasing amount of research in aiding virtual or real robots with tasks by using LLMs to translate natural language instructions into code that programs the robot to perform the relevant actions <cit.>. This research is directly relevant to our work, as we use LLMs to go from natural language to a pseudo-code of pick-and-place statements. However, whereas <cit.> focus on optimizing the translation from instructions, typically one instruction, to various different coding paradigms, we focus on how linguistic and nonlinguistic interactions affect the resulting action sequence. As our results and previous results on the MDC show, producing actions from interactive conversation with frequently underspecified instructions, which are also dependent upon the discourse and nonlinguistic contexts for proper interpretation, is a much more challenging task than translating well-crafted, unambiguous instructions into code. 
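For concreteness, one plausible reading of the net-action F1 metric described above is sketched below: the predicted and gold sequences are reduced to their net placements and removals and scored with exact colour-and-coordinate matching. This is a reconstruction rather than the official scorer, and corner cases (for example, removing a pre-existing block and re-placing a different colour at the same cell) may be handled differently in the released evaluation code.

```python
def net_actions(seq):
    """Replay an action sequence and return its net effect (simplified bookkeeping).

    Actions are tuples: ('place', colour, x, y, z) or ('pick', x, y, z)."""
    placed, removed = {}, set()
    for act in seq:
        if act[0] == "place":
            _, colour, x, y, z = act
            placed[(x, y, z)] = colour
        else:                                   # ('pick', x, y, z)
            _, x, y, z = act
            if (x, y, z) in placed:
                del placed[(x, y, z)]           # placing then picking the same cell cancels out
            else:
                removed.add((x, y, z))          # otherwise a pre-existing block is removed
    return ({("place", c) + xyz for xyz, c in placed.items()}
            | {("pick",) + xyz for xyz in removed})

def net_action_f1(predicted, gold):
    """Set-based F1 over the net actions of the two sequences."""
    p, g = net_actions(predicted), net_actions(gold)
    if not p or not g:
        return 0.0
    overlap = len(p & g)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)

# toy example: the gold place/pick/place sequence reduces to a single net placement
gold = [("place", "yellow", -1, 1, 0), ("pick", -1, 1, 0), ("place", "yellow", -1, 4, 0)]
pred = [("place", "yellow", -1, 4, 0), ("place", "yellow", -1, 5, 0)]
print(net_action_f1(pred, gold))   # 0.67: one of two net predicted placements matches the gold
```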
In addition, we show that to predict a relevant action from the instruction i_n+1 in the MDC environment, it is not sufficient to use a context with just the penultimate instruction i_n and previous action sequence a_n. § NEBULA: AN LLM FOR PREDICTING ACTION SEQUENCES We've seen that <cit.>'s evaluation method for neural agents gives rather poor results. Observations of the results of neural Builder anecdotally yielded no ending configurations that matched those in the gold. The training scheme of <cit.> assumes, in effect, that Architect instructions and Builder actions follow one another with regularity. An unfortunate consequence of this assumption is that actions are individuated by the conversational turns that immediately precede and follow them. <cit.> initiate a new action sequence whenever there is a linguistic move of any kind. But that's not realistic, as bits of text don't always yield a well-formed or even an underspecified instruction. You might have a clarification question from the Builder in between two action sequences that are in fact carrying out one and the same action as in Figure <ref>. Builders in the MDC frequently ask questions with respect to the initial instruction about the actions they are simultaneously making; answers to those questions may affect the actions, but it doesn't mean that there are two distinct series of actions pertaining to two distinct instructions, one before the question and its response and one after. In addition, the Builder sometimes starts to build before the instruction sequence is complete; intuitively, the initial actions form a coherent action sequence with the actions that are subsequent to the further instruction. These observations show that the assumptions of <cit.> about how actions are individuated are too simple. Different conversational moves will change and make more precise the shape and position of the structure intended by the initial instruction. <cit.> note that different conversational moves can help conceptualize actions differently. For example, in many Minecraft sessions, an initial instruction gives the Builder an action type that might be realized in many different ways. Something like build a tower of 5 blocks is an action type for which a concrete realization would have to specify the color, perhaps the nature of the building blocks, and a location. As the conversation evolves and unless the Architect corrects their instruction, the type of action to be performed becomes more and more specified. A simple baseline alternative to the scheme proposed by <cit.> that addresses the difficulties we just mentioned is to see how a model performs with the complete prior conversation and action sequences up to the predicted action. This was not an option for <cit.>'s model, but more recent LLMs are capable of doing this. We used Llama-2-7B, Llama-2-13B and Llama-3-8B models to take as context all the conversation and action sequences up to action sequence a_n to predict a_n. We fine-tuned Llama on the MDC's <cit.> training set. All the models were finetuned for 3 epochs using QLoRA method <cit.>. Table <ref> in the appendix provides details of computing resources and the hyperparameters for finetuning. Table <ref> shows the net-action F1 scores on the validation and test set of MDC. All the finetuned LLMs significantly improved scores in comparison with the F1 0.20 score of Neural Builder <cit.>. Llama-3-8B essentially doubled the baseline score of 0.20. 
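For reference, a minimal sketch of the 4-bit QLoRA fine-tuning setup with the Hugging Face transformers/peft stack is given below. The base model name follows the text, but the training file, its field names, the prompt format and the LoRA hyperparameters are illustrative assumptions (the hyperparameters actually used are listed in Table <ref> of the appendix), and exact argument names can differ across library versions.

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          Trainer, TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-8B"            # base model named in the text
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# each training example: dialogue + action history up to a_n, followed by a_n (assumed format)
def tokenize(example):
    return tokenizer(example["context"] + "\n" + example["target_actions"],
                     truncation=True, max_length=4096)

# "mdc_train.jsonl" and its fields are hypothetical names for the serialized MDC training split
dataset = (load_dataset("json", data_files="mdc_train.jsonl")["train"]
           .map(tokenize, remove_columns=["context", "target_actions"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nebula-qlora", num_train_epochs=3,
                           per_device_train_batch_size=1, gradient_accumulation_steps=8,
                           learning_rate=2e-4, bf16=True, logging_steps=50),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```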
In the rest of the paper, we refer to Llama-3-8B finetuned on MDC as NeBuLa. The finetuned model, NeBuLa, is available here[<https://huggingface.co/akshay107/NeBuLa>]. § USING DISCOURSE STRUCTURE TO IMPROVE NEBULA The ideal way to model the instructional interactions is to have two ongoing, interleaved processes that interact and influence each other. On the one hand, there is the evolving conversational structure that helps conceptualize the nonlinguistic actions; on the other, there is the sequence of actions that also affects continuations of the given conversational context. Using the discourse parser of <cit.>, we made a first approximation of these interleaved processes by determining necessary and sufficient situated, conversational conditions for computing instructions. An analysis of the discourse structure in the MSDC shows a large scale pattern of so-called Narrative arcs. These arcs delimit portions of discourse structure linked by Narration relation. Each portion begins with an instruction i_n by the Architect, terminates with an action sequence a_m, and involves a negotiation between Architect and Builder about the action sequence to be performed. The negotiation may be extremely short, where the narrative portion then contains just i_n,a_m. On the other hand, it might be complex negotiation involving a number of EDUs related by relations like Elaboration. It may also involve questions of clarification or confirmation question by the Builder, in which case the instruction evolves through the portion. A narrative arc may also involve actions by the Builder that the Architect will correct with a linguistic move that will then result in a nonlinguistic action that revises or corrects the prior actions of the Builder. The end of the negotiation is the action sequence that finally carries out the instructions to the satisfaction of the Architect. Figure <ref> illustrates a narrative arc starting at Architect turn one with a new instruction that results (in green) in an action sequence in Builder turn two. The Builder then asks a complex, alternative question to confirm that this is the right move. The Architect replies to the question, in effect correcting (in red) the Builder's previous action, which then results in an action sequence in Builder turn four that corrects the previous builder action. These arcs are relatively self-contained and are recoverable automatically to a relatively high degree by the parser of <cit.>. So, instead of providing the entire conversation history as in Section <ref> to the model, we provide the world-state at the beginning of the Narrative arc in terms of net place actions, and the discourse within the Narrative arc up to the present instruction i_n. We finetune Llama-3-8B on MDC training set using this input. We refer to the resultant model as NeBuLa+N (NeBuLa trained on Narrative arcs). Table <ref> gives scores on the validation and test set of MDC for NeBuLa+N(arration). From the table, we can see that the scores are comparable with original NeBuLa. This shows that the Narrative arc is sufficient for action prediction. To study whether the Narrative arc is necessary as well, we evaluated NeBuLa+N on those cases where i_na_ni_n+1 had less content than the Narrative arc. There were 254 such cases in MDC test set. For these samples, we looked at performance of NeBuLa+N when worldstate along with the Narrative arc is given as input. This is denoted as NeBuLa+N/N in Table <ref>. 
Similarly, we looked at performance of NeBuLa+N when worldstate along with i_na_ni_n+1 is given as input. This is denoted as NeBuLa+N/i_na_ni_n+1 in Table <ref>. As we can see, the score for NeBuLa+N/i_na_ni_n+1 is significantly lower (∼ 10%) than NeBuLa+N/N. This shows that Narrative arcs are crucial for the task of action prediction. § PROBLEMS WITH THE MINECRAFT CORPUS In Minecraft, the Architect makes use of several location descriptions. These descriptions are often anaphoric to blocks placed in prior instructions, such as place another block next to that one (one that was placed on previous Builder turn); locations are also sometimes vaguely designated (towards the centre) or underspecified (in a corner, along an edge, n blocks/spaces in from an edge/from the centre). Although the Minecraft environment presents (x,y,z) coordinates, the human participants never used them. This could be because, in the Minecraft environment, players can move their avatars around the board to get different perspectives, which makes it hard to establish an absolute coordinate system. As a result, the net-action F1 metric, which evaluates a model's action sequence based on whether the block placements match exactly in terms of block color and coordinates with the corresponding gold builder action, is often inappropriate. For instance, if the Builder puts down a block at one corner after receiving the instruction in a corner whereas NeBuLa chooses another corner, the metric would give NeBuLa zero credit whereas intuitively it still did the right thing. To summarize, the evaluation metric treats vague instructions as completely precise ones, and considers one instantiation of an instruction (i.e. the action sequence of Builder in the gold data) to be the only ground truth. Another related issue is highlighted in Figure <ref> where the action sequence for the Architect's instruction gets truncated by a question from the Builder “there?". In this case, for the aforementioned instruction, only the first three actions (place yellow -1 1 0, pick -1 1 0, place yellow -1 4 0) constitute the ground truth. To conclude, the underspecified instructions with multiple plausible instantiations, coupled with the strict nature of the metric, puts an upper bound on how much the net-action F1 score can improve on this dataset. More importantly, it doesn't reveal what a model with a high F1 score actually does learn. We attempt to answer this in the next section. § EVALUATING NEBULA ON SYNTHETIC DATASET Given the issues associated with Minecraft Corpus and the evaluation metric, we test NeBuLa on simple scenarios using a more just metric. We begin by testing NeBuLa's ability to construct simple shapes, such as, square, row, rectangle, tower, diagonal, diamond, cube of specific size and understand location (i.e. corner, centre, edge) and orientation descriptions (i.e. horizontal/vertical). We refer to all these shapes as level-1 structures. To do so, we construct a level-1 dataset of 1368 instructions. Some of these instructions simply ask to construct a shape of specific size like “Build a 3× 3 red square.", while others are more detailed, for example, “Build a 3× 3 red horizontal square at the centre." For rows/diagonals/towers, we vary size from 3 to 9. For squares, the size varies from 3× 3 to 5× 5. For cubes, we only use 3× 3× 3. For rectangles, we use sizes m× n, where m≠ n, m× n<30 and 4<=m<=8. For diamonds, we use two variants to describe size “m blocks on a side" and “axes 2m+1 long", where 3<=m<=6. 
We use orientation descriptions (i.e. horizontal/vertical) for squares, rectangles, and diamonds. To evaluate NeBuLa on these instructions, we use simple binary functions is_square(C), is_tower(C) etc. for each shape. These functions take as input the predicted construction C and returns True if C is the desired shape, and False otherwise. For example, is_tower checks whether all the blocks have the same value for X and Z (as Y is the vertical dimension) and Y values are distinct and form a sequence 1,2, ..., n where n is the number of predicted blocks. For an instruction, we first evaluate if the predicted shape is correct. For correct shapes, we evaluate whether the size/color and location/orientation is correct (for instructions where location/orientation was specified). For an instruction with location description like Build a red tower in a corner, the location is considered correct if the predicted tower is in any of the four corners. Table <ref> gives the result of NeBuLa on level-1 dataset. We don't report color accuracy in the table, as NeBuLa always got the color correct. From the table, we can see that NeBuLa already has a decent command of basic shapes like towers, rows, and diagonals. However, it struggled with shapes like rectangle, square, cube, and diamond. It never correctly constructed diamonds, which might be because there were very few instances of diamonds in Minecraft corpus. For squares and rectangles which were correctly predicted, the model scored very high on orientation accuracy. However, the model has quite low location accuracy across all the correctly predicted shapes. The model rarely achieved an accuracy of above 50%, even with our relaxed evaluation method for locations. As a second step, we look at NeBuLa's ability to understand location descriptions, in particular ones that are anaphorically specified. To do so, we start with an instantiation (randomly chosen from the set of correct instantiations) for the 1368 instructions in level-1 dataset. So, for a level-1 instruction such as “Build a 3× 3 red square.", we have a 3× 3 red square already present in the grid. Now given a level-1 structure in the grid, we design level-2 instructions which require placing or removal of a specific color block. For place instructions, we use location descriptions like on top of, to the side of, touching, and not touching. So an example of level-2 place instruction is “place a blue block on top of that." where that refers to the level-1 structure in the grid. Similarly, for removal instructions, we have the simple instruction “remove a block" and more complex instructions including location descriptions like you just placed. We also have additional location descriptions for certain level-1 structures such as end for rows, diagonals; top, bottom for towers; corner for cube; centre for cube, odd-size squares and towers. An example of level-2 remove instruction is “remove the top block ." Similar to level-1, we evaluate NeBuLa on level-2 dataset by making use of binary functions like is_ontopof(b,C), is_touching(b,C) where C is the level-1 structure already present in the grid and b is the predicted block. As an example, for on top of, we check whether there is no block in C which is directly above the block b, and there is a block in C underneath block b. Table <ref> shows that the model did quite well, with the exception of the instruction involving not touching as location description. 
Otherwise, the results indicate that NeBuLa has a good knowledge of basic anaphoric location descriptions. We then examined NeBuLa's errors with on top of. We found that the failure cases mostly were a result of the model placing multiple blocks instead of just one on the given level-1 structure. That is, the model does not always understand a block as a single block. In light of these cases, when we check whether all the blocks in predicted b are on top of C, the accuracy improves from 74.2% to 97.2%. Thus, some of the difficulties NeBuLa had with instructions come from what might be a limited understanding of the semantics and pragmatics of indefinite and numerical noun phrases. §.§ Finetuning NeBuLa on Shapes and Locations Our evaluation on level-1 and level-2 data shows that NeBuLa struggles with squares, rectangles, diamonds, and “not touching" place instructions. To tackle this, we used a subset of the two datasets to augment the training data for NeBuLa. From level-1 data, we took the following subset for training: squares of size 3× 3, diamonds of size 3 (or axes 5 spaces long), and rectangles of sizes 4× 3 and 5× 4. From level-2 data, we took those “touching/not touching” instances where the level-1 structure is square or rectangle. Out of total 363 instances for touching/not touching, there were 109 such instances. We then finetuned NeBuLa by combining the Minecraft training with this subset of level-1 and level-2 data. The rest of the level-1 and level-2 data was used for testing. Table <ref> shows NeBuLa's performance on level-1 test set after finetuning. As before, we found that NeBuLa always got the color correct. From the table, we can see that the shape accuracy improved significantly for squares, rectangles, and diamonds in comparison with Table <ref>. Although the location accuracy is still low, it has improved in comparison with original NeBuLa. Interestingly, we also see that NeBuLa has perfect shape accuracy on cube although cube was not part of the training set. Finally, for correctly predicted shapes, NeBuLa achieved a perfect orientation accuracy. Table <ref> shows the results on level-2 test set for NeBuLa after finetuning. Here also, we can see that NeBuLa's accuracy remains very high on almost all of the simple instructions with the anaphoric location descriptions. Furthermore, its accuracy increased significantly for “not touching" instructions. This jump in accuracy is significant enough to conclude that NeBuLa has learned the concept of “contact", at least for our synthetic dataset. On the minecraft test set, we found that NeBuLa's performance remained high with an average precision of 0.40, recall of 0.414 and a net action F1 of 0.391. As we can see, these scores are at-par with the baseline NeBuLa. § CONCLUSIONS AND FUTURE WORK We have introduced NeBuLa, an LLM based action prediction model, for the Minecraft Dialogue Corpus. As a baseline, NeBuLa uses the entire Minecraft dialogue up to action a_n to predict a_n. We showed that this baseline doubles the net action F1 scores of <cit.>. We then showed that certain discourse structures provided necessary and sufficient information for inferring actions to the level of the baseline. We also analyzed NeBuLa's errors on Minecraft corpus and provided additional finetuning to improve the model's ability to interpret underspecified shape descriptions and anaphorically-specified locations using our synthetic dataset. 
This allowed us to analyze the shortcomings of the net-action F1 metric, and address them using a more realistic evaluation metric. Our evaluation metric captures the notion of relative location, but leaves exact locations typically underspecified, in accordance with our semantic intuitions. For future work, we plan to apply this metric (or a similar relative location metric) on the Minecraft corpus. Given the improvement in performance of NeBuLa after finetuning on our synthetic dataset, we hypothesize that in a more controlled collaborative task, with some pedagogical instructions to the Architect, NeBuLa could contribute as a useful interface for conversational robots that interact with humans. § LIMITATIONS The MSDC contains a great deal of discourse information, including a full discourse structure analysis. We have only used some of this information. Potentially, we could leverage more information from this dataset to improve NeBuLa's action prediction performance. We also need to extend our constraints to cover other frequent anaphoric location descriptions in addition to on top of X and to the side of X. Locutions like in front of/ behind, underneath, hanging off, next to (X) all have underspecified parameters of either orientation, distance or direction that allow for several correct placements, once X has been identified. We need to evaluate NeBuLa on these expressions as well. Finally, we need to reevaluate NeBuLa's predictions as well as builder actions in the MDC with our more appropriate metric, which is suited to the underspecified shape and location descriptions used in the corpus. § ETHICS STATEMENT Our work here has been to improve the capacities of AI systems in interactive tasks where conversation can be used to optimize performance on collaborative actions. We see no direct ethical concerns that arise from this work. However, conversationally more capable robots, which could be one downstream application of this work, might require additional conversational strategies as constraints to ensure that participating humans retain the final say with regards to the actions in the collaborative tasks. [Asher, 1993]asher:1993 Asher, N. (1993). Reference to Abstract Objects in Discourse. Kluwer Academic Publishers. [Asher et al., 2016]asher:etal:2016 Asher, N., Hunter, J., Morey, M., Benamara, F., and Afantenos, S. (2016). Discourse structure and dialogue acts in multiparty dialogue: the stac corpus. In 10th International Conference on Language Resources and Evaluation (LREC 2016), pages 2721–2727. [Asher et al., 2020]asher:etal:2020 Asher, N., Hunter, J., and Thompson, K. (2020). Modelling structures for situated discourse. Dialogue & Discourse, 11:89–121. [Asher and Lascarides, 2003]asher:lascarides:2003 Asher, N. and Lascarides, A. (2003). Logics of Conversation. Cambridge University Press, New York, NY. [Bennis et al., 2023]bennis:etal:2023 Bennis, Z., Hunter, J., and Asher, N. (2023). A simple but effective model for attachment in discourse parsing with multi-task learning for relation labeling. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3404–3409. [Bonial et al., 2021]bonial:etal:2021 Bonial, C., Abrams, M., Traum, D., and Voss, C. (2021). Builder, we have done it: evaluating & extending dialogue-amr nlu pipeline for two collaborative domains. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 173–183. 
[Bonial et al., 2020]bonial:etal:2020 Bonial, C., Donatelli, L., Abrams, M., Lukin, S., Tratz, S., Marge, M., Artstein, R., Traum, D., and Voss, C. (2020). Dialogue-amr: abstract meaning representation for dialogue. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 684–695. [Cho et al., 2014]cho-etal-2014-properties Cho, K., van Merriënboer, B., Bahdanau, D., and Bengio, Y. (2014). On the properties of neural machine translation: Encoder–decoder approaches. In Wu, D., Carpuat, M., Carreras, X., and Vecchi, E. M., editors, Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics. [Dettmers et al., 2023]qlora Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L. (2023). Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314. [Hunter et al., 2018]hunter:etal:2017 Hunter, J., Asher, N., and Lascarides, A. (2018). Situated conversation. Semantics and Pragmatics, 11(10). doi: 10.3765/sp.11.10. [Jayannavar et al., 2020]jayannavar:etal:2020 Jayannavar, P., Narayan-Chen, A., and Hockenmaier, J. (2020). Learning to execute instructions in a minecraft dialogue. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 2589–2602. [Liang et al., 2023]liang:etal:2023 Liang, J., Huang, W., Xia, F., Xu, P., Hausman, K., Ichter, B., Florence, P., and Zeng, A. (2023). Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 9493–9500. IEEE. [Narayan-Chen et al., 2019]narayan:etal:2019 Narayan-Chen, A., Jayannavar, P., and Hockenmaier, J. (2019). Collaborative dialogue in minecraft. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405–5415. [Shi et al., 2022]shi:etal:2022 Shi, Z., Feng, Y., and Lipani, A. (2022). Learning to execute actions or ask clarification questions. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2060–2070. [Singh et al., 2023]singh:etal:2023 Singh, I., Blukis, V., Mousavian, A., Goyal, A., Xu, D., Tremblay, J., Fox, D., Thomason, J., and Garg, A. (2023). Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523–11530. IEEE. [Thompson et al., 2024]thompson:etal:2024 Thompson, K., Hunter, J., and Asher, N. (2024). Discourse structure for the Minecraft corpus. In Calzolari, N., Kan, M.-Y., Hoste, V., Lenci, A., Sakti, S., and Xue, N., editors, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 4957–4967, Torino, Italia. ELRA and ICCL. [Yu et al., 2023]yu:etal:2023 Yu, W., Gileadi, N., Fu, C., Kirmani, S., Lee, K.-H., Arenas, M. G., Chiang, H.-T. L., Erez, T., Hasenclever, L., Humplik, J., et al. (2023). Language to rewards for robotic skill synthesis. arXiv preprint arXiv:2306.08647. § APPENDIX Table <ref> gives the hyperparameters used for finetuning NeBuLa along with the computing resources. We adapted the finetuning code from the following repository[<https://github.com/mlabonne/llm-course/blob/main/Fine_tune_Llama_2_in_Google_Colab.ipynb>]. We provide level-1 and level-2 synthetic data and NeBuLa trained on minecraft and the synthetic data here[<https://huggingface.co/akshay107/NeBuLa>].
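As a rough illustration of the finetuning setup referenced above, the sketch below follows the general pattern of the linked notebook (parameter-efficient LoRA finetuning of a causal LLM with the Hugging Face stack). The base model identifier, hyperparameter values, dataset file and field names, and exact argument names (which vary across library versions) are assumptions here, not taken from Table <ref> or the released code.

from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder; not necessarily NeBuLa's actual base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Minecraft dialogues plus the level-1/level-2 synthetic subset, each example as one text string.
train_data = load_dataset("json", data_files="nebula_train.json")["train"]  # hypothetical file

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")  # assumed values

args = TrainingArguments(output_dir="nebula-finetuned", per_device_train_batch_size=4,
                         num_train_epochs=3, learning_rate=2e-4, logging_steps=50)  # assumed values

trainer = SFTTrainer(model=model, train_dataset=train_data, peft_config=peft_config,
                     dataset_text_field="text", tokenizer=tokenizer, args=args)
trainer.train()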
http://arxiv.org/abs/2406.18848v1
20240627023825
Temporally Multi-Scale Sparse Self-Attention for Physical Activity Data Imputation
[ "Hui Wei", "Maxwell A. Xu", "Colin Samplawski", "James M. Rehg", "Santosh Kumar", "Benjamin M. Marlin" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT

Wearable sensors enable health researchers to continuously collect data pertaining to the physiological state of individuals in real-world settings. However, such data can be subject to extensive missingness due to a complex combination of factors. In this work, we study the problem of imputation of missing step count data, one of the most ubiquitous forms of wearable sensor data. We construct a novel and large-scale data set consisting of a training set with over 3 million hourly step count observations and a test set with over 2.5 million hourly step count observations. We propose a domain knowledge-informed sparse self-attention model for this task that captures the temporal multi-scale nature of step-count data. We assess the performance of the model relative to baselines and conduct ablation studies to verify our specific model designs.

*Data and Code Availability This paper uses the All of Us dataset[<https://www.researchallofus.org/>], which is publicly available upon registration. Data processing and modeling code is available at <https://github.com/reml-lab/allofus-imputation>.

*Institutional Review Board (IRB) This research does not require IRB approval.

§ INTRODUCTION

Step count data collected by smart watches and activity trackers is one of the most ubiquitous forms of wearable sensor data. These data have the potential to provide valuable and detailed information about physical activity patterns and their relationship to other facets of health over long time spans. These data also have the potential to provide valuable contextual information for just-in-time adaptive interventions that target improving levels of physical activity or decreasing sedentary behavior <cit.>. However, wearable sensor data are subject to complex missingness patterns that arise from a variety of causes including device non-wear, insecure device attachment and devices running out of battery <cit.>. Importantly, these missingness issues can hinder the utility of wearable sensor data both for improving the understanding of health behaviors and for providing actionable contexts in the case of adaptive interventions. Indeed, the presence of missing step count data is a problem for traditional statistical analyses that aim to relate physical activity levels to other health events and to the effect of interventions <cit.>. Missing step count data is also a problem when practitioners seek to use these data as inputs to common supervised and unsupervised models that require complete data as input <cit.>, as well as when step count data is used in the reward function for reinforcement learning-based adaptive interventions <cit.>.

In this paper, we consider the problem of imputing missing step count data at the hourly level. This problem has a number of significant challenges due to the presence of high variability in patterns of physical activity both through time for a single person and between different people. This variability can be attributed to a collection of factors that are exogenous to step count data itself including an individual's levels of restedness and busyness, environmental factors such as weather and temperature, changes in daily routine, seasonal effects, onset of and recovery from illness and other major life events. Making progress on these challenges necessitates both carefully designed, domain-informed models and the availability of large-scale step count datasets.
To address the need for a large-scale data set, we curate a training set consisting of hourly step count data from 100 individuals. The average step count time series length is over 50,000 hourly observations per person in the training set, yielding a total of over 3 million hourly step count observations. We curate a test set consisting of data from 500 individuals including over 2.5 million observed hourly step count instances. This data set is based on minute-level Fitbit step count data collected as part of the All of Us research project <cit.>. The All of Us data set is freely available to registered researchers[<https://www.researchallofus.org>]. To address the modeling challenges, we introduce a novel sparse self-attention model inspired by the transformer architecture <cit.>. The proposed model uses sparse attention to handle the quadratic complexity of the standard dense self-attention mechanism, which is not practical given long time series as input. Importantly, the sparse self-attention mechanism is designed to be temporally multi-scale in order to capture diurnal, weekly, and longer time-scale correlations. The specific design used is informed by an analysis of hourly step count autocorrelations. Finally, we design an input feature representation that combines a time encoding (hour of day, day of week) with a temporally local activity pattern representation.

We compare our proposed model to a broad set of prior models and approaches including a convolutional denoising autoencoder that achieved state-of-the-art performance on missing data imputation in actigraphy data <cit.>. The results show that our model achieves statistically significant improvements in average predictive performance relative to the prior approaches considered, at the p<0.05 level. We further break down performance by missing data rate and ground truth step count ranges. Finally, we visualize attention weights and relative time encodings to investigate what the proposed model learns and conduct an ablation study of the key components of the proposed model. We begin by discussing related work in Section <ref>, and then describe our dataset in Section <ref>. We describe our proposed self-attention imputation model in Section <ref>. In Section <ref>, we describe our experimental methods and in Section <ref>, we report our experimental results.

§ RELATED WORK

In this section, we briefly review general missing data imputation methods for time series, prior work on sparse self-attention, and prior work specifically on step count imputation models.

Imputation Methods for Time Series The missing data imputation problem has been intensively studied in both statistics <cit.> and machine learning <cit.>. Commonly used baseline methods include mean imputation <cit.>, regression imputation <cit.>, k-nearest neighbors (kNN) imputation, and multiple imputation by chained equations (MICE) <cit.>. Both regression imputation and MICE are model-based approaches that aim to impute missing values as functions of observed variables, while kNN is a non-parametric approach. More recently, the machine learning community has focused on neural network-based imputation methods for time series including the use of recurrent neural networks (RNNs) <cit.> and generative adversarial networks (GANs) <cit.>. <cit.> introduced the gated recurrent unit with decay (GRU-D) model for irregularly sampled and incomplete time series data, which takes into account missingness patterns and time lags between consecutive observations <cit.>.
In the imputation setting, uni-directional RNN models like GRU-D are typically outperformed by bi-directional RNN models such as the M-RNN <cit.> and BRITS <cit.>. While basic GAN models for fully observed data require only a generator and discriminator, training these models using partially observed data can require architectural or training modifications. <cit.> trained a GAN model in two stages to select noise capable of generating samples most similar to the original values. <cit.> proposed E^2GAN, which uses an autoencoder architecture as the generator, enabling end-to-end training and eliminating the need for two-stage training. Additionally, <cit.> (SSGAN) introduced a temporal remainder matrix as a hint to the discriminator to facilitate training. SSGAN also used time series class labels to guide the generation procedure, with USGAN providing an alternative that does not require class labels. In this work, we focus on self-attention-based imputation models trained using empirical risk minimization (ERM). Self-attention-based models are well-known to have improved parallelization compared to RNN-based models <cit.>. The use of ERM-based training (e.g., prediction loss minimization) avoids stability issues inherent to current GAN-based model training algorithms <cit.>. Our primary modeling contribution focuses on making self-attention models computationally efficient for long time series of step counts using sparsity. We discuss prior work on sparse self-attention in the next section.

Sparse Self-Attention Many methods have attempted to address the quadratic complexity of self-attention computations using sparsity <cit.>. For instance, the vision transformer <cit.> and Swin transformer <cit.> apply self-attention on non-overlapping patches in an image. The sparse transformer <cit.> and axial transformer <cit.> separate the full attention map into several attention steps using multiple attention heads. Several authors have also investigated learnable sparsity mechanisms. Deformable DETR <cit.>, Reformer <cit.> and Routing Transformer <cit.> retrieve the most relevant keys for each query using learnable sampling functions, locality-sensitive hashing, and k-means, respectively. The drawback of these approaches is that they typically require longer training times. Our proposed model uses a fixed, multi-timescale sparsity pattern that is designed specifically for step count data.

Step Count Imputation <cit.> used kNN imputation for step count data collected from accelerometers and magnetometers. <cit.> employed multiple imputation methods combined with both parametric (e.g., regression imputation) and non-parametric approaches (e.g., hot deck imputation) to impute missing daily and hourly step count data. <cit.> proposed a zero-inflated Poisson regression model to handle zero step count intervals more effectively. <cit.> used a convolutional denoising autoencoder architecture that exhibited superior performance compared to multiple other approaches including mean imputation, Bayesian regression and the zero-inflated model by <cit.>. In this work, we focus on model-based single imputation and compare to a wide range of baseline and current state-of-the-art approaches on large-scale data.

§ DATA SET DEVELOPMENT

In this section, we describe the curation and preprocessing methods we apply to develop the data set used in our experiments. Flowcharts summarizing our methods are provided in Appendix <ref>.
Data Set Extraction Our data set is derived from the All of Us research program Registered Tier v6 data set <cit.>. All of Us is an NIH-funded research cohort with an enrollment target of one million people from across the U.S. The v6 data set includes minute-level step count and heart rate data collected using Fitbit devices from 11,520 adult participants. While the All of Us research program directly provides daily step count summaries derived from these data, we focus on the finer-grained problem of imputing missing step count data at the hourly level. This timescale is highly relevant for applications like the analysis of adaptive interventions that need access to finer-grained step count data to assess the proximal effects of actions. Further, due to devices running out of battery during the day and temporary device non-wear, the base data set contains substantial partial within-day missingness that can be usefully imputed to support a variety of downstream analyses.

We begin by rolling up the minute-level Fitbit time series for each participant into an hourly time series. We use one-hour-long blocks aligned with the hours of the day. Each block is represented by the total observed steps within that hour, the average heart rate within that hour, and the number of minutes of observed data (the wear time) within that hour. The range of minutes of wear time for each hourly block is 0-60. We define hourly blocks with zero minutes of wear time as missing, and hourly blocks with at least one minute of wear time as observed (our modeling approach will specifically account for observed hourly blocks with different wear times). Imputation model training requires holding out observed data to use as prediction targets, thus increasing the amount of missing data seen by models during training. Also, learning on more complete data makes it easier for models to identify appropriate physical activity structure in the data. Therefore, we form a training set of individuals with low to moderate levels of natural missing data. Specifically, we select for the training set the 100 participants with the most observed hourly blocks among those with at least one 180-day-long segment of step count data containing no run of missing hourly data longer than three days. The resulting training data set consists of over 3 million observed hourly blocks with an average time series length of over 50,000 hours per training set participant. Since many participants do not wear their devices between 11:00pm and 5:00am and the observed step count data for those who do is almost always 0 (presumably due to sleep), we focus on predicting step counts in the interval of 6:00am to 10:00pm (we use data outside of this range as part of the feature representation for some models). The maximum missing data rate among the training participants is 20% within the 6:00am to 10:00pm time frame. Appendix <ref> provides comparisons between the 100 participants in our training cohort and all 11,520 participants in the All of Us Fitbit dataset.

To form a test set, we first exclude the training participants. Next, we select a total of 100 participants for each of five missing data level bins [0%, 20%), [20%, 40%), [40%, 60%), [60%, 80%), and [80%, 100%). We again assess missing data within the 6:00am to 10:00pm time frame. For the [0%, 20%) bin, we apply the same filtering criteria as for the training set and select 100 participants at random from those meeting the criteria.
For the remaining bins, we select participants at random with no additional criteria. This yields a total of 500 test participants with a total of approximately 2.5 million observed hourly blocks.

Data Set Pre-Processing Once the data set is extracted, we apply several pre-processing steps. First, to deal with partially observed hourly blocks, the model that we construct uses step rates as features instead of step counts. The step rate associated with an hourly block is defined as the observed step count divided by the observed wear time. When making predictions for observed hourly blocks, the model predicts a step rate, but the loss is computed between the observed step count and a predicted step count formed by combining the predicted step rate with the observed wear time. Further, we use the mean and standard deviation of each participant's step rate and heart rate data (ignoring outliers beyond the 99.9th percentile) to compute statistics for z-normalization <cit.> of step rates and heart rates. This z-normalization step is applied separately to each participant's data to provide an initial layer of robustness to between-person variability. In order to enable vectorized computations over time series with missing data, we use zero as a placeholder for missing data values and use an auxiliary response indicator time series to maintain information about which blocks are missing and which are observed. Finally, the raw Fitbit time series provided by the All of Us research program were shifted by a randomly selected number of days for each participant as part of a set of privacy-preserving transformations. In order to enable models to learn common behavior patterns with respect to day of the week, we select a reference participant and align all other participants to that participant by considering all shifts of between 0 and 6 days. We use similarity in average daily step counts as the alignment criterion. While we cannot be certain that this process recovers the correct shift, it will decrease variability relative to the baseline of not applying this correction.

§ PROPOSED MODEL

In this section, we formally define the step count imputation problem within the multivariate context, and introduce our temporally multi-scale sparse self-attention model architecture.

Problem Definition We denote by 𝒟={𝐂^(n)_l,t|n=1,…,N, l=1,…,L, t=1,…,T_n} a dataset of N participants, where each participant is represented by a multivariate time series, 𝐂^(n)∈ℝ^L × T_n with L features and T_n hourly blocks. T_n varies across participants, while the number of features L is constant. In our case, the base features associated with each hourly block include step count, step rate, heart rate, day of the week, hour of the day and minutes of wear time. When considering data from a single participant, we drop the (n) superscript for brevity. For each hourly block t, we define the response indicator r_t as shown in Equation <ref> to indicate whether the participant's Fitbit data at a given hourly block is observed (i.e. with at least one minute of wear time). We let 𝐂_w,t be the wear time. While heart rates may contain missing values, our focus in this study is not on imputing them. We also note that the hour of the day, day of the week and the wear time itself are always completely observed.

r_t = 1 if 𝐂_w,t > 0, and 0 otherwise.

We let 𝐂_s,t be the step count at time t. The problem is thus to impute 𝐂_s,t when r_t = 0 from the observed data.
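Before formalizing further, the following minimal sketch illustrates how the hourly representation described above (step rates, wear time, the response indicator, and per-participant z-normalization) could be assembled; the pandas-based implementation and the column names are our assumptions, not the released pipeline.

import pandas as pd

def hourly_blocks(minute_df):
    # minute_df: one participant's minute-level records with columns
    # ['timestamp', 'steps', 'heart_rate'] (hypothetical names), one row per observed minute.
    g = minute_df.set_index('timestamp').resample('1H')
    hourly = pd.DataFrame({
        'steps': g['steps'].sum(),             # total observed steps in the hour
        'heart_rate': g['heart_rate'].mean(),  # average heart rate in the hour
        'wear_time': g['steps'].count(),       # minutes with observed data (0-60)
    })
    hourly['r'] = (hourly['wear_time'] > 0).astype(int)  # response indicator r_t
    hourly['step_rate'] = (hourly['steps'] / hourly['wear_time']).where(hourly['r'] == 1)
    # Per-participant z-normalization statistics, ignoring outliers beyond the 99.9th percentile.
    sr = hourly.loc[hourly['r'] == 1, 'step_rate']
    sr = sr[sr <= sr.quantile(0.999)]
    hourly['step_rate_z'] = ((hourly['step_rate'] - sr.mean()) / sr.std()).fillna(0.0)  # zeros as placeholders
    return hourly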
The observed data includes Fitbit data from other time steps as well as other observed data at time step t. Crucially, we can only train and assess imputation models on originally observed hourly blocks in the dataset since they have ground-truth Fitbit data values. Thus, instead of imputing originally missing hourly blocks that do not have ground-truth values, we hold out hourly blocks with observed values, consider them as “artificially missing", then use models to predict their original observed values.

Model Overview We propose a model architecture based on dot-product self-attention <cit.>. As noted previously, the standard transformer architecture uses dense self-attention, which has quadratic cost in the length of an input time series. This is highly prohibitive for long time series. Indeed, our training data set has an average time series length of 50,000 hours per participant. This is longer than the context window used in some versions of GPT-4 <cit.>. Thus, the first key component of our proposed architecture is the design of a sparse self-attention structure for step count imputation. Based on domain knowledge combined with data analysis, we propose a self-attention mechanism based on a multi-timescale context window. The second key component of the architecture is the feature representation. While transformer models applied to text data typically use a base token embedding computed from fully observed data, we require an input representation that is specific to this task. We propose a local activity profile representation (LAPR) that represents hourly blocks with a temporally local window of activity data.

Sparse Self-Attention In order to construct a self-attention-based model for long time series, we need to drastically reduce the number of hourly blocks attended to by each query for each missing hourly block. To begin, let 𝒯={1,…,T} be the set of all the hourly blocks from a given participant and |𝒯| be the size of this set. We define the set 𝒜^(t)⊆𝒯 to be a subset of hourly blocks that a query at time t is allowed to attend to. For improved computational efficiency, we require |𝒜^(t)| ≪ |𝒯| for all t. However, in the missing data context, even if a time point t is allowed to attend to a time point t', time point t' may not have observed data. We define a mask function m(t, t') in Equation <ref> that indicates both whether time point t can attend to time point t' and whether time point t' is observed.

m(t, t') = 1 if t' ∈𝒜^(t) and r_t' = 1, and 0 otherwise.

The key question is then how to define the self-attention sets 𝒜^(t). Based on domain knowledge, we expect that hourly blocks t' that are close in time to a given target hourly block t will carry information useful to make predictions at time t. However, we also expect that hourly blocks t' corresponding to the same hour of the day as a target block t on nearby days may carry information useful to make predictions at time t. Similarly, we expect that hourly blocks t' corresponding to the same hour of the day and the same day of the week for nearby weeks may also carry information useful to make predictions at time t. In Figure <ref>, we present the hourly step count autocorrelation function for our data set to confirm these expectations. First, we can see that the autocorrelation is highest for the smallest time lags, indicating high correlation between nearby hourly blocks. However, we can also see strong correlations at time lags of 24 hours (1 day) and 168 hours (1 week).
This confirms our expectations regarding the general correlation structure of the data. Based on these observations, we propose the multi-timescale context window shown in Figure <ref> as our sparse self-attention set 𝒜^(t). Letting d be the day number of the target hourly block t, the context window includes data from days d-7 to d+7 as well as days d± 7k for k∈{2,3,4,5}. Given that time points t' with hour of the day closer to the target hour t have higher correlations, we limit the context window to include time points t' with hours of the day close to that of t. Letting h be the hour of the day for the target hourly block t, the context window includes hours h-4 to h+4. Of course, the center of the context window, which corresponds to the target hourly block t, is not included in the sparse self-attention set. This yields a total self-attention set size of (2×4+1)(2×(7+4)+1)-1 = 206.

Feature Representation Individual hourly blocks are featurized in terms of step count, step rate, average heart rate, wear time minutes, hour of the day, and day of the week. However, the target hourly block has its Fitbit features (i.e. step count, step rate and heart rate) unobserved. A self-attention computation based on comparing the observed features of the target hourly block to the corresponding features in blocks in the self-attention set would thus be limited to expressing similarity based on hour of the day and day of the week. To overcome this problem, we augment the representation of an hourly block's step rate data using a window of activity data from t-W to t+W. We refer to this as the “local activity profile" representation (LAPR) of an hourly block. It allows for learning much richer notions of similarity between hourly blocks within the multi-scale context windows based on comparing their local activity profiles. As described in Section <ref>, missing values in the LAPR feature representation are themselves imputed using a baseline approach.

Proposed Model The proposed model is summarized in the equations below. s_t is the predicted step rate at time t. a_tt' is the attention weight from hourly block t to hourly block t'. m(t,t') is the sparse attention mask function defined in Equation <ref>. The sparse attention mask ensures that the attention weight is 0 for time points t' that are not included in the sparse self-attention context window as well as for points t' with missing Fitbit data.

s_t = ∑_t' ≠ t a_tt' v_t'

a_tt' = m(t,t') exp(𝐪_t^⊤𝐤_t' + θ_𝙸(t,t')) / ∑_u ≠ t m(t,u) exp(𝐪_t^⊤𝐤_u + θ_𝙸(t,u))

The primary components of the self-attention computation are the value v_t', the query vector 𝐪_t, the key vector 𝐤_t' and the relative time embedding θ_𝙸(t,t'). The value v_t', the query vector 𝐪_t and the key vector 𝐤_t' are produced using distinct neural network-based transformations of the input features for their respective time points. To begin, the local activity profile representation (LAPR) is processed through an encoder network Conv → LayerNorm → ReLU → Average Pool. This encoder extracts more abstract features and also helps prevent overfitting by lowering the input dimension. The output of the encoder is then concatenated with the other available features. For the key and the value, this includes the hour of the day and day of the week features as well as the Fitbit features of that specific time point. For the query, the Fitbit features for the target time point t are not observed, so the LAPR is concatenated with the hour and day features only.
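A compact sketch of the context-window construction and the masked attention step just described is given below (PyTorch-style, since the appendix states the model is implemented in PyTorch). Tensor shapes, variable names, and the exact feature wiring are our assumptions rather than the released implementation.

import torch

# Relative (day, hour) offsets of the multi-timescale context window.
day_offsets = list(range(-7, 8)) + [s * 7 * k for k in (2, 3, 4, 5) for s in (-1, 1)]
hour_offsets = range(-4, 5)
CONTEXT = [(dd, dh) for dd in day_offsets for dh in hour_offsets if not (dd == 0 and dh == 0)]
assert len(CONTEXT) == 206  # (2*4+1) * (2*(7+4)+1) - 1

def sparse_attention(q, K, V, theta, mask):
    # q:     (B, D)      query vector for the target hourly block t
    # K, V:  (B, 206, D) and (B, 206, 1) keys/values for the context-window positions
    # theta: (206,)      learned relative-time bias, one parameter per window position
    # mask:  (B, 206)    1 if the position is inside A(t) and observed (r = 1), else 0
    # (assumes at least one observed block in each window)
    scores = torch.einsum('bd,bnd->bn', q, K) + theta      # q^T k + theta_I(t, t')
    scores = scores.masked_fill(mask == 0, float('-inf'))  # masked positions get zero weight
    a = torch.softmax(scores, dim=-1)                      # attention weights a_tt'
    return torch.einsum('bn,bnk->bk', a, V)                # predicted step rate s_t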
We use a one-hot encoding representation for the hour and day features. The resulting representation is projected through linear layers to produce the final query, key and value representations. To encode information based on the time difference between the target hourly block t and another block t', a relative time encoding θ_𝙸(t,t') is employed. Essentially, the model provides an attention bias parameter for each position in the context window. This allows the model to learn that some relative positions in the context window are valuable to attend to regardless of the similarity in feature values at those relative locations for a particular instance. The function θ_𝙸(t,t') returns the value of the relative time encoding bias parameter for time point t' in the context window centered at time t. If t' falls outside of the context window, this function returns 0.

Loss Function and Training The output of the model is an unconstrained hourly step rate. We convert the hourly step rate to a step count using the transformation 𝐂_w,t·min(1.5· s_max, max(0, s_t)), where 𝐂_w,t is the observed wear time for time t, and s_max is the maximum training set step rate observed for the participant. This ensures that the step count is always non-negative and clips the maximum predicted step rate to avoid predicting outlying values. We use mean absolute error (MAE) between true and predicted step counts as the loss function during model training. We use a stochastic gradient descent-based training approach where each batch contains instances sampled from different participants. We compute the MAE with equal weight on all samples in the batch. Additional hyper-parameter optimization and training details can be found in Appendix <ref>.

§ EXPERIMENTS

In this section, we describe the baseline and prior methods that we compare to. We also provide experimental protocol and evaluation metric details.

Baselines We compare our proposed model to several commonly used strategies for imputing missing values in time series data, as well as to the state-of-the-art imputation method proposed by <cit.>. We group methods into several categories. Simple filling methods include zero fill, forward fill, backward fill, the average of forward and backward fill (Avg.F+B), mean fill, micro mean fill and median fill. Here, mean filling uses the mean of the hourly step count computed over a specified set of hourly blocks, while micro mean filling uses the total step count divided by the total wear time, where the totals are computed over a specified set of hourly blocks. The mean, micro mean and median based methods are applied in four variations corresponding to computing the imputation statistic over different sets of hourly blocks. All are applied on a per-participant basis. For example, in the “Participant" variant we compute a per-participant imputation statistic over all available data for a single participant and then apply it to all missing hourly blocks for that participant. In the “DW+HD" variant, we compute an imputation statistic per hour of the day and day of the week for each participant and apply it to all missing data from that hour of day and day of week combination for that participant. The kNN model includes two variants: uniform, which assigns uniform weights to neighbors, and softmax, where weights depend on an RBF kernel based on the distances between the target hourly block and its neighbors.
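As an illustration, the DW+HD median variant described above could be implemented roughly as follows; the pandas column names and the per-cell fallback rule are our simplifying assumptions, not the released code.

import pandas as pd

def dwhd_median_fill(df: pd.DataFrame) -> pd.Series:
    # df: one participant's hourly blocks with columns
    # ['step_rate', 'r', 'dow', 'hod'] (hypothetical names); r is the response indicator.
    observed = df[df['r'] == 1]
    medians = observed.groupby(['dow', 'hod'])['step_rate'].median()
    fallback = observed['step_rate'].median()  # participant-level fallback when a (dow, hod) cell has no data
    fill = df.apply(lambda row: medians.get((row['dow'], row['hod']), fallback), axis=1)
    return df['step_rate'].where(df['r'] == 1, fill)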
Finally, model-based baseline methods include linear regression imputation, iterative imputation (which iteratively estimates variables with missing values from other observed variables <cit.>), the state-of-the-art convolutional denoising autoencoder (CNN-DAE) model of <cit.>, the RNN models BRITS <cit.> and MRNN <cit.>, the USGAN model of <cit.>, and the attention model SAITS <cit.>.

Handling Missing Input Features Multiple models that we consider, including basic regression imputation and the proposed model, will have missing values in their input feature representations. We address missing data in the LAPR feature representation using DW+HD median imputation. This choice is made since DW+HD median filling is the most accurate of the basic imputation methods on these data and often outperforms kNN imputation. For mean and median imputation methods, if there are no observed hourly blocks associated with a specific hour of the day or day of the week, we apply participant-level median imputation to all the hourly blocks associated with that particular hour of the day or day of the week. For more information on how we handle feature missingness in other baseline models, please refer to Appendix <ref>.

Data Partitioning The proposed model and multiple baseline approaches include hyper-parameters that need to be set. To accomplish this, we apply a 10-fold stratified random sampling validation approach to the training data set described in Section <ref>. We use a stratified approach because the target step count variable is significantly skewed toward low step count values, as seen in Figure <ref>. When holding out instances, it is thus important to match these statistics since an over- or under-abundance of large step count values can have a large effect on validation set performance estimates. We use per-participant uniform density bins in the stratified sampling. In terms of the data partitioning scheme, we allocate 80% of instances in each split for training, 15% for validation and 5% for an in-domain test set. However, in this work we focus on the fully held out test set described in Section <ref> to provide results covering multiple levels of missing data.

Hyper-Parameter Optimization The stratified train/validation splits are used to select hyper-parameters for all kNN-based and model-based approaches including the proposed model. Details including model configurations, selected hyper-parameters and full training procedures can be found in Appendix <ref>.

Model Evaluation We evaluate trained models on the completely held out test set as described in Section <ref>. Results are reported per missing data bin as well as overall. We report results in terms of Macro Mean Absolute Error (MAE). This is the mean over participants in the test set of the mean absolute error per test participant, which is defined in Equation <ref>.

Macro MAE = 1/N ∑_n=1^N 1/|ℳ^(n)| ∑_m_n=1^|ℳ^(n)| AE_m_n

where m_n ∈ℳ^(n) is the index of a single hourly block to be imputed from the set of missing hourly blocks ℳ^(n) of participant n, N is the number of participants in the dataset and |ℳ^(n)| is the number of imputed hourly blocks from participant n. As a measure of variation, we report ± 1.96 times the standard error of the mean, yielding a 95% confidence interval on mean predictive performance.
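Putting the prediction post-processing described earlier (the step-rate clipping transform) and this evaluation metric together, a minimal array-based sketch with assumed variable names:

import numpy as np

def to_step_count(pred_rate, wear_time, s_max):
    # Clip the predicted step rate to [0, 1.5 * s_max] and scale by the observed wear time.
    return wear_time * np.minimum(1.5 * s_max, np.maximum(0.0, pred_rate))

def macro_mae(abs_errors_per_participant):
    # abs_errors_per_participant: list of 1-D arrays, one array of absolute errors per participant.
    per_participant_mae = np.array([errs.mean() for errs in abs_errors_per_participant])
    mean = per_participant_mae.mean()
    half_width = 1.96 * per_participant_mae.std(ddof=1) / np.sqrt(len(per_participant_mae))
    return mean, half_width  # Macro MAE and its 95% confidence half-width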
For models where hyper-parameters are selected using the 10 validation splits, we determine the optimal hyper-parameter values using the validation set and average the test predictions of the 10 corresponding models to form a final test prediction. For personalized baseline models (e.g., participant-level mean imputation), we use imputation statistics computed from the test data set. This is necessary because these approaches are applied per-person and the test set consists of completely held-out individuals with no overlapping data in the training set. This biases these results in favor of the baselines.

§ RESULTS

In this section, we present step count imputation results on the 500-participant test set. Further, we visualize the attention maps and relative time encodings learned by the proposed model to analyze what the model learns from data. Finally, we provide the results of an ablation study varying components used in the self-attention model.

Overall Imputation Results Table <ref> shows the overall imputation results (last column) for each method. Methods highlighted in blue have statistically significantly lower error than other methods in their group (p<0.05). Methods highlighted in red have statistically significantly lower error than other methods across all groups (p<0.05). As we can see, our sparse self-attention model achieves the best overall performance and does so with statistical significance relative to all other methods considered.

Imputation Results by Missingness Rate The remaining columns in Table <ref> show the imputation results for each missingness rate interval. As we can see, our sparse self-attention model achieves the best performance on all but the highest missing data rate bin. On participants with extremely high missing rates (i.e. ≥ 80%), DW+HD Median Fill performs best and is better than our self-attention model with statistical significance (p<0.05). This is likely due to the fact that at over 80% missing data, the context windows for the proposed model will contain relatively few observations while the LAPR feature vectors will be heavily influenced by the baseline imputation method used. It may be possible to further improve performance for high missing rate bins by using adaptive context window sizes and alternative LAPR construction methods, or by adaptively smoothing the model's prediction towards that of simpler models as the volume of observed data in the context window decreases.[We note that all 95% confidence intervals reported in the table represent ± 1.96 times the standard error of the mean MAE for each model. These intervals are wide due to variability across participants in our dataset. However, the paired t-test depends instead on the distribution of per-participant differences in performance between two models.]

Imputation Results by Step Count We further analyze the imputation results by breaking the overall performance down based on ground truth step count bins for different models. The performance is evaluated in terms of micro MAE per ground truth step-count bin. The first plot in Figure <ref> shows the test error rate of the proposed model per ground truth step count bin. We can see that the model has higher error on bins corresponding to higher ground truth step counts. This is perhaps not surprising, as high ground truth hourly step counts occur much more rarely than low step counts, as seen in Figure <ref>.
The remaining plots in Figure <ref> present the ratio of the error obtained by the DW+HD Median, kNN-Softmax and MRNN approaches (the best other models in their groups) to that obtained by the proposed model. Ratios above 1 indicate that the alternative models have higher error than the proposed model. We can see that the proposed model not only outperforms the alternative models overall, it does so with respect to almost all individual ground truth step count bins.

Attention and Relative Time Encoding Visualization Figure <ref> shows the attention weights averaged over all instances, the attention weights averaged over specific example days, and the relative time encodings. From these visualizations, we can see that the model produces overall average attention weights that match expectations based on the autocorrelation function shown in Figure <ref>. The time points with consistently high attention relative to the target hour are Δ t=± 1 hr, ± 1 day, ± k weeks. Further, we can see that the average attention weights are not the same for all days of the week. The model produces different average attention weights for different days. Lastly, we can see a clear difference between the relative time encoding structure and the overall average attention weights, clearly indicating that both the input features and the relative time encoding influence the attention weights.

Ablation Study We conduct an ablation study to test the impact of local context window sizes and different architecture components used in our sparse self-attention model. Macro MAE of the held-out test samples is used to measure performance. We first consider the effect of changing both the number of weeks represented in the context window and the number of hours. The results are shown in Figure <ref>. We see that as the number of weeks and the number of hours are increased, the prediction error decreases. These results support the importance of using wider context windows spanning multiple weeks. The model used in the main results corresponds to hours=4 and weeks=5. We next consider the impact of the relative time encoding and local activity profile representation (LAPR). Removing the relative time encoding increases the overall test error from 261.68 ± 10.62 to 262.91 ± 10.75. While the error increases, the increase is not statistically significant. When removing the LAPR from the model's input features, the error increases to 278.75 ± 11.93. This increase in error is significant, indicating that the LAPR provides a valuable performance boost to the model relative to using the base features associated with each hourly block.

§ CONCLUSIONS

In this work, we consider the problem of imputing missing step count data collected by wearable devices. To enable this research, we curated a novel dataset consisting of 100 training participants and 500 test participants with more than 5.5 million total hourly step count observations extracted from the All of Us dataset. We proposed a customized model for this task based on a novel multi-timescale sparse self-attention structure to mitigate the quadratic complexity of the standard dense self-attention mechanism. Our experiments show that the proposed model outperforms the considered baseline approaches and prior state-of-the-art CNN-based models on fully held out test data. Further, we present ablation studies showing the importance of both the activity profile input representation that we propose and the multi-timescale attention computation.
We note that although our model and feature representations were specifically designed for step-count data in this paper, the same structures could also be helpful for modeling other behavioral and physiological processes (e.g. heart rates) with similar quasi-periodic and multi-timescale structures across days, weeks and months. In terms of limitations, we first note that computational considerations limited the total data volume that could be used for model training in this work. While we opted to use a training data set containing fewer participants with higher observed data rates, designs using randomly selected training participants with similar total training data volume would also be feasible. While there may be concern that the training set is not representative of the data set overall, the test set is indeed a fully held out and representative stratified random sample, and the proposed model achieves superior overall performance on this test set. Next, we note that the missing data mechanism used when evaluating models is effectively a missing completely at random (MCAR) mechanism. However, the per-step count results presented in Figure <ref> provide information about the distribution of predictive performance conditioned on true step counts. In terms of future work, we plan to extend the proposed model to a multi-layer architecture to mitigate the fact that the input feature representation currently relies on simple imputation. Applying the model in multiple layers may further improve performance by providing more accurate local activity profile representations. In addition, we plan to extend the model to produce probabilistic predictions to support multiple imputation workflows and to extend the model architecture to several related tasks including step count and sedentary interval forecasting. Finally, we plan to evaluate the impact of the imputations produced by the model when applied as part of a data analysis procedure that aims to quantify the association between physical activity as measured by step count data and a related health condition or intervention outcome.

§ ACKNOWLEDGMENTS

This work was partially supported by the National Institutes of Health National Cancer Institute, Office of Behavior and Social Sciences, and National Institute of Biomedical Imaging and Bioengineering through grants U01CA229445 and 1P41EB028242, as well as by a Google Cloud Research Credits Program credit award. We gratefully acknowledge All of Us participants for their contributions, without whom this research would not have been possible. We also thank the National Institutes of Health’s All of Us Research Program for making available the participant data examined in this study.

§ DATA CURATION AND PREPROCESSING PIPELINE

Figure <ref> and Figure <ref> demonstrate how we curate the training cohort and preprocess the data.

§ COMPARISON BETWEEN TRAINING COHORT AND ALL ALL OF US PARTICIPANTS

Figure <ref> and <ref> compare the statistics of the training cohort of 100 participants with the entire All of Us Fitbit dataset of 11,520 participants.

§ MODEL CONFIGURATIONS, HYPER-PARAMETERS AND TRAINING PROCEDURES

In this section, we introduce the details about all the models used in our experiments, including configurations, hyper-parameters and training procedures.

§.§ Multi-Timescale Sparse Self-Attention Model

We fix the length of local activity profile representations (LAPR) as 2W+1 = 2×72+1 = 145.
The configuration of the LAPR encoder network is: Conv: out_channels=1, kernel_size=49, stride=1, padding=24, with no bias; Average Pool: kernel_size=7 and stride=6. The model is trained using the Adam optimizer with a batch size of 20,000 for 30 epochs. The learning rate is searched within {0.1, 0.01, 0.001}. We conduct early stopping based on validation Micro MAE for each split. Validation Micro MAE averaged over 10 splits is used to choose the best hyper-parameters. We train the model using two NVIDIA Tesla T4 GPUs with 32 CPUs and 208 GB RAM within the All of Us workspace. The model is implemented using PyTorch 1.13.1.

§.§ Filling Methods

All the filling methods impute missingness on the level of unnormalized step rates (i.e. before instance z-normalization). Micro mean, mean and median based methods compute statistics at all levels (e.g. the participant level) using the data from 6:00am to 10:00pm, while Forward and Backward Fill based methods are allowed to use data outside this period.

§.§ Regression Imputation

We set the regression function to be linear. Input features of the linear regression model include (1) normalized step rates and heart rates from all the blocks in the context window, except for the center one, and (2) day of the week and hour of the day one-hot vectors of the center hourly block. The LAPR is not applied, as it was found to decrease performance. Missing step rates and heart rates are filled with zeros, which exhibits superior performance compared to DW+HD median filling. The model has the same context window size, training protocol and loss function as our proposed model. We set the batch size as 50,000 and search for the learning rate within {0.1, 0.01, 0.001, 0.0001}. The Adam optimizer is used to train the model for 20 epochs with a learning rate of 0.001.

§.§ k-Nearest Neighbors (kNN) Imputation

We search for nearest neighbors within all the observed data of the same participant that the missing block comes from. The neighbors are not limited to the 6:00am to 10:00pm period. Input features are the LAPR with the same length (i.e., 145) as used in the proposed model. Two variations are tested: (1) uniform weighting (kNN-Uniform) and (2) an RBF-kernel-based method (kNN-Softmax), where the similarity between the missing hourly block and its neighbors depends on squared distances in the feature space. We search for the number of nearest neighbors in {1, 7, 14, 21, 28, 35} for both, and the RBF parameter within {0.1, 0.01, 0.001, 0.0001, 0.00001} for kNN-Softmax.

§.§ Multiple Iterative Imputation Method (Iterative Imputation)

Our model is similar to the Multiple Imputation by Chained Equations (MICE) method, which uses chained equations and linear regression models to impute every variable conditioned on the others. However, during the training phase, the algorithm performs a deterministic imputation instead of probabilistic sampling. The input features are the same as used for regression imputation. Since day of the week and hour of the day are always observed, they only serve as input features while imputing other variables, and are themselves never imputed. Figure <ref> provides an example of our specified imputation order regarding positions in the context window. Each linear regression model in the chained equation is trained using mini-batch SGD with a batch size of 10,000 for 2 epochs. The number of imputation iterations is set to 2. During inference, we perform multiple imputations for each position by sampling from a Gaussian distribution. Please refer to the code for details.
§.§ Convolutional Denoising Autoencoder (CNN-DAE)

We use a symmetric encoder-decoder architecture to implement CNN-DAE. The encoder consists of three 1D convolutional layers, each followed by BatchNorm and ReLU activation. Correspondingly, the decoder includes three 1D transposed convolutional layers, with the first two layers followed by Batch Normalization and ReLU activation. The configurations of the convolutional and transposed convolutional layers are in Table <ref>. For the input features, we include both z-normalized step rates and heart rates within the same context window used in the proposed method. Since the CNN model structure is not suitable for use with the multi-scale context window, we apply it at the hourly level to a contiguous time span. Furthermore, the LAPR is not employed in CNN-DAE, as it did not yield better performance. We fill the missingness with zeros. The Adam optimizer with a batch size of 50,000 is used to train the model of each split for 20 epochs. The learning rate is searched within {0.1, 0.01, 0.001}.

§.§ BRITS

We adhere to the settings in the original paper, using LSTM as the RNN architecture. Input features are the same as for the proposed model. However, we found that the LAPR did not help to improve performance, so we did not use it here. The context window is chronologically flattened, enabling the RNN model to process information sequentially. We impute both heart rates and step rates at each time step. Notably, we found that the auxiliary heart rate imputation task indeed helps the step rate imputation task for BRITS, so we keep both of them during training. The best hyper-parameters are selected based on the optimal validation Micro MAE of step counts of the center hourly blocks. Training the BRITS model spans 30 epochs with a batch size of 10,000 and a learning rate of 0.01. The LSTM hidden dimension is searched within {4, 8, 16, 32}.

§.§ MRNN

MRNN consists of an interpolation block and an imputation block. In the interpolation block, we apply two bidirectional GRU models to interpolate the missing values, one for step rates and the other for heart rates. Day of the week (DW) and hour of the day (HD), which are always observed, are not input into the interpolation block since it operates within each data stream with missing values. In contrast, they are input to the imputation block. The context window is consistent with that used in the proposed model. As suggested by the original paper, missing values outside of the center hourly block are filled with zeros. We found that DW+HD median filling does not perform as well as zero-filling. As with BRITS, the context window is flattened in chronological order for the RNN to process. We also found that the LAPR can improve MRNN performance, as with our proposed model, so these features are used when reporting the results. To stay consistent with the other models, we employ Mean Absolute Error (MAE) instead of Mean Squared Error (MSE) for model training, different from the original paper. We train MRNN for 40 epochs with a batch size of 20,000 and a learning rate of 0.01. The GRU hidden dimension in the interpolation block is searched within {4, 8, 16, 32}.

§.§ USGAN

We employ the BRITS model as the generator and a bidirectional GRU model as the discriminator. The generator configurations align with those outlined in Section <ref>. As our data does not have explicit labels for each time series, we omit the classifier component mentioned in the original paper.
In contrast to the original implementation, which updates the discriminator five times after each generator update, updating the discriminator only once results in more stable training and improved performance in our case. We train the USGAN model for 30 epochs with a batch size of 10,000 and a learning rate of 0.01. The RNN hidden dimensions for both the generator and discriminator are explored within {4, 8, 16, 32, 64}. Additionally, we search for the weight of the discriminator loss during training, which balances it with the BRITS loss, within {0.1, 0.3, 0.5, 0.7, 0.9, 1.0}.

§.§ SAITS

We use a learning rate of 0.01 and a batch size of 10,000 when training the model. We fix the number of transformer layers at 2 and search for the hidden representation dimension d_model and the output dimension of each layer d_v within {4, 8, 16, 32}. We leveraged the same multi-scale context window[We note that the vanilla SAITS model uses dense self-attention, which is not feasible in our case due to the long time series data.] as in our proposed model, as well as the same feature set, including the LAPR.
http://arxiv.org/abs/2406.18640v1
20240626180000
Neutrino emission in cold neutron stars: Bremsstrahlung and modified urca rates reexamined
[ "Salvatore Bottaro", "Andrea Caputo", "Damiano Fiorlillo" ]
hep-ph
[ "hep-ph", "astro-ph.HE", "nucl-th" ]
CERN-TH-2024-092 a]Salvatore Bottaro b]Andrea Caputo c]Damiano Fiorillo date [a]School of Physics and Astronomy, Tel-Aviv University, Tel-Aviv 69978, Israel [b]Department of Theoretical Physics, CERN, Esplanade des Particules 1, P.O. Box 1211, Geneva 23, Switzerland [c]Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark salvatoreb@tauex.tau.ac.il andrea.caputo@cern.ch damiano.fiorillo@nbi.ku.dk Neutrino emission in cold neutron stars is dominated by the modified urca (murca) process and nucleon-nucleon bremsstrahlung. The standard emission rates were provided by Friman and Maxwell in 1979, effectively based on a chiral Lagrangian framework with pion and rho meson exchange, supplemented by Landau parameters to describe short-range interactions. We reevaluate these rates within the same framework, correcting several errors and removing unnecessary simplifications, notably the triangular approximation – Fermi momenta of protons and leptons negligible compared to the neutrons one – in MURCA, and quantify their importance. The impact of rho meson exchange, previously argued to cancel with interference effects, is actually quite relevant. Altogether, the cooling rates are reduced by as much as a factor 2. We provide comprehensive analytical formulas encompassing all contributions, designed for straightforward numerical implementation. Our results are particularly relevant for astrophysical studies of neutron stars evolution and for studies of physics beyond the standard model, where the emission of new particles – such as axions – is typically computed within the same framework we adopt here. Neutrino emission in cold neutron stars: Bremsstrahlung and modified urca rates reexamined [ July 1, 2024 =========================================================================================== § INTRODUCTION Neutron stars (NS) are fascinating objects. It was 1932 when – just one month before the discovery of the neutron – Landau conjectured the existence of cold dense stars in a conversation with Bohr and Rosenfeld. These “scary stars" (“unheimliche Sterne") would have small radii and enormous densities, as also suggested independently in 1934 by Baade and Zwicky. A few years later, in 1939, Oppenheimer and Volkoff described the first model of neutrons stars, made of a free neutron gas, but it was only in 1967 that pulsars were observed and shortly after Gold proposed the now accepted view that they are fast rotating neutron stars. Typical neutron star masses vary in the range M_ NS∼ 1-2 M_⊙ with a radius of the order of R ∼ 10 km, therefore with very large densities, beyond nuclear, ρ_0 ∼ 2.5 × 10^14 g/cm^3 or n_0 ∼ 0.15 fm^-3. These extreme stars form when the degenerate iron core of a massive star at the end of its life evolution becomes unstable and collapses, leading to a type II supernova explosion. The first stages of the life of the new born are decisively determined by neutrino emission, which dominates the cooling of the neutron star. At the beginning the inner temperature is very high, around T ∼ 10^10-10^11 K = 1-10 MeV, but it quickly drops to T ∼ 10^9 K ∼ 100 keV after 1-10 years. After ∼ 10^4-10^5 years, the temperature drops to T ∼ 2 × 10^8 K, and surface photon emission finally takes over in the cooling process. Clearly, neutrino cooling processes in nuclear environments are of paramount importance for the study of these fascinating stars. 
The first detailed computations of neutrino emission rates in neutron stars traces back to 50 years ago <cit.>. Usually, the direct decay of a neutron n→ p+ e^-+ν_e is inhibited by the strong degeneracy, so the most important processes to consider are neutrino-antineutrino bremsstrahlung n + n → n + n + ν_ℓ + ν̅_ℓ, n + p → n + p + ν_ℓ + ν̅_ℓ, p + p → p + p + ν_ℓ + ν̅_ℓ, where ℓ =e, μ, τ indicates the neutrino flavor, and modified URCA (MURCA) processes, where a charged lepton is also emitted, both in the neutron branch n + n → n + p + ℓ + ν̅_ℓ, n + p + ℓ→ n + n + ν_ℓ , and the proton branch p + n → p + p + ℓ + ν̅_ℓ, p + p + ℓ→ p + n + ν_ℓ. The standard reference for the emission rates associated to these processes is the seminal paper written by Friman and Maxwell in 1979 <cit.> (hereafter ), in which the authors provided a detailed description of all the computations within the framework of one-pion exchange (OPE), but quantifying also the effect of short-distance interactions beyond OPE. The topic has been reconsidered later on, for example by Yakovlev and Levenfish <cit.> with a focus on reduction factors due to possible proton superfluidity, and by Maxwell himself with a focus on the potential presence of hyperons <cit.>. Even without superfluidity, the use of in-vacuum interactions has been questioned, as the medium polarization might significantly renormalize the properties and interactions of nucleons <cit.>. Nevertheless, standard reviews <cit.> and fast public numerical codes like http://www.astroscu.unam.mx/neutrones/NSCool/NSCool, still use the results from the original work , which represents hitherto the most complete and detailed reference for the topic. The ramifications of these results go well beyond pure astrophysics, as NS cooling is a sensitive probe of emission of particles beyond the Standard Model (BSM) <cit.>, where it is usually treated using NSCool, in turn relying on . Given the potential impact of neutrino cooling in NS across multiple fields, especially in recent times, we believe a detailed understanding of its rate is needed. Here we reconsider the historical work of , and we show that several corrections arising from the purely particle side of the calculation can alter the cooling rates by more than a factor 2. We do not aim for a state-of-the-art description of the nuclear physics, but stick to the framework adopted in . Our results correct for multiple small inconsistencies in the original treatment, at the same time relieving some of the approximations of the original paper; specifically: * we model the long-range interaction as an OPE+ρ-meson exchange. The latter was argued in  to compensate with the exchange contribution, at least for the nn bremsstrahlung, but we do not seem to recover this result. The impact of ρ-meson exchange recently entered also the particle physics community, where it was argued to significantly impact the emission rate of axions <cit.>, although here it was incorrectly accounted for in the non-tensor channels contributing to np bremsstrahlung. We take the occasion to rectify the impact of this contribution; * for the first time, we go beyond the triangular approximation that the Fermi momentum of protons and electrons are negligible; one can already see that this is usually smaller than the neutron Fermi momentum only by about a factor 3 or less, so it is worth considering more carefully what is the impact of this approximation. 
When all is put together, neutrino emission rates, compared to the results in , differ by a factor 2 or more in some cases. Even though our results may not be the final answer, since we still stick with the nuclear framework of , it is a direct update of the most commonly used approaches to describe NS cooling, especially in the context of bounds on BSM physics. Thus, we believe that our results could be an updated go-to recipe in this context. This work is organized as follows. We first introduce our framework, describing the adopted nucleon-nucleon potential. Then, we pass to compute the emission rates for both bremsstrahlung and MURCA. For each of them we provide contact with previous literature and we highlight the impact of different effects on the final answer. We finally provide a compact expression also for the neutrino absorption rate and then conclude. § NUCLEAR PHYSICS FRAMEWORK The emission of weakly interacting particles from dense nuclear matter is notoriously challenging to describe. The intrinsic many-body nature of the system does not allow in principle to discuss properties of individual nucleons. However, one can usually adopt the Landau paradigm to describe such a system of strongly interacting fermions by means of non-interacting quasi-particles, representing collective degrees of freedom of the system which behaving nearly as individual particles. We will adopt this viewpoint in what follows. The main challenge in using Landau's theory of Fermi liquids is that the properties of individual quasi-particles – mass, coupling to neutrinos – become phenomenological, to be determined from comparison with experiment. Here, in order to stick to a well-defined framework, we will make a set of simplifying assumptions, mostly driven by the comparison with the often-used work by Friman and Maxwell . In particular, we will describe nuclear matter in terms of non-interacting, non-relativistic nucleons. The dispersion relation of quasi-particles close to their Fermi surface is determined by their Landau effective masses. For the results shown in the text, we will always use bare values for the masses, to provide a comparison with the results of  which does not include potential differences from this additional source of uncertainty. All emissivities scale with a fixed power of the nucleon mass, so introducing a definite prescription for the effective masses can always be done in a simple way. In asymmetric nuclear matter, the proton effective mass may be lower than the neutron one, an effect we do not consider in this work, aiming at a precision of the order of 10%. This choice is especially helpful in comparing our results with the classic FM ones. The rate of neutron star cooling is now determined by emission of neutrinos from nucleons. On-shell emission processes from an individual nucleon could only happen by direct neutron decay n→ p+e^- +ν_e and inverse beta decay p+e^-→ n+ν_e, but both processes are strongly suppressed by the degeneracy of dense nuclear matter. Hence, neutrino emission happens mostly from off-shell nucleons interacting with each other. This implies the need for a detailed discussion of the quasi-particle interaction in the Fermi liquid of nucleons. Unfortunately, this is a topic clouded in unavoidable complexity and uncertainties. The forward scattering amplitude in various interaction channels might be related, using Landau's theory of a Fermi liquid, to specific thermodynamical coefficients (e.g. compressibility, spin susceptibility, ...) 
which can in principle be measured in heavy nuclei, although one should stress that such measurements always refer to symmetric nuclear matter, whereas neutron stars are obviously neutron-dominated. Here we follow FM, which used the Fermi liquid parameters extrapolated from Refs. <cit.>. For low momentum exchange, such that kr_0≪ 1, where k is the typical momentum transfer and r_0 is the typical nucleon radius, the amplitude is isotropic, and the Landau parameters are directly representative of the scattering amplitude. We stress that these parameters directly map to the scattering amplitude, not the scattering potential; hence, they allow us to circumvent the need for a perturbative treatment of the interaction potential. The Landau parameters are by definition unable to capture the effects of the tensor interaction driven primarily by pion exchange, which vanish for vanishing momentum transfer. However, tensor interactions are the dominant contribution for most neutrino-emission processes, and in fact are the only contribution for nn bremsstrahlung. Here, we assume that the long-range tensor interaction can be described as in vacuum by the pion exchange, with a reduction at shorter scales which we model as an exchange of a ρ meson. In this sense, we follow again , except that we extract the coupling of the ρ meson from the Bonn potential <cit.>. It must also be noted that assuming the in-vacuum interaction is not quite adequate, since the medium polarization could significantly renormalize the tensor interaction <cit.>; in Ref. <cit.>, for example, the resulting nn bremsstrahlung emission was shown to be renormalized by up to a factor 2[However, in Eq. 3 of Ref. <cit.> a symmetry factor of 2, rather than 4, is reported for nn bremsstrahlung. Presumably a clear estimate of the impact of these medium corrections requires some dedicated analysis, also in view of the corrections we point out in this paper, which we do not attempt here.]. Our choice of sticking to the simpler framework of  allows us to perform a one-to-one comparison with their results to show that significant differences arise already at this level, while a detailed treatment of the medium polarization would certainly be warranted in future works. Hence, our final framework to describe nucleon-nucleon interaction includes short-range interactions inferred from the Landau parameters, and longer range contributions from OPE+ρ-meson exchange. We find that the former have negligible impact on neutrino-neutrino emission, a feature already identified in . On the other hand, as we will see, ρ exchange does significantly affect the emission. Overall, the effective non-relativistic potential that we use for nucleon-nucleon interaction is V(𝐤) = f + f' τ_1 ·τ_2 + g σ_1 ·σ_2 + g'_k τ_1 ·τ_2 σ_1 ·σ_2 + h'_k (σ_1 ·) (σ_2 ·) τ_1 ·τ_2, where f, f', g are the constant Landau parameters for the relevant channel and ≡ k is the exchanged momentum in the scattering. The spin-spin interaction g'_k receives both a constant contribution from the associated Landau parameter and a momentum-dependent contribution arising from the exchange of the ρ g'_k ≡ g' - C_ρ f_π^2/m_π^2k^2/k^2+m_ρ^2. For the meson parameter here we follow the table at pag. 37 for the Bonn model <cit.> and take m_ρ = 769 MeV for the mass of the ρ meson, and C_ρ = 1.4 for its coupling strength, while f_π≃ 1 and should not be confused with the pion decay constant. 
For the Landau parameters instead we adopt the following parametrisation <cit.> {f, f', g, G} = π^2/2 m_ N p_ F(n){F_0, F_0', G_0, G'_0}, where F_0' = 0.7, G_0 = G_0' = 1.1 and where p_ F is the neutron Fermi momentum. As already noted by , and as we confirm in our results, the Landau parameter F_0 drops in all the relevant rates. Finally, the last operator in Eq. <ref> reads h'_k ≡ - f_π^2/m_π^2(k^2/k^2 +m_π^2 - C_ρk^2/k^2 +m_ρ^2), where we stress the sign difference between the pion and ρ contributions. As a matter of fact, Ref. <cit.> included ρ-meson exchange only in the tensor coupling h'_k, but this was justified by the choice of only determining nn bremsstrahlung, where tensor interaction is the only contribution. This was subsequently followed by Ref. <cit.> which adopted the simple rule k^2/k^2 +m_π^2→k^2/k^2 +m_π^2 - C_ρk^2/k^2 +m_ρ^2, which however is wrong if applied to the spin-spin channel. Hence, the corresponding results for np scattering in Ref. <cit.> – which usually is the dominant contribution to axion emission in supernovae – do not consistently incorporate the ρ-meson exchange reduction. Since here we consider not only np bremsstrahlung but also MURCA, we account for the consistent prescription. We find it useful – especially for the particle physicist reader – to stress that the contributions due to OPE+ρ exchange can be derived from the following Lagrangian ℒ = - i g_πN̅γ_5 τ^a N π^a- i g_ρN̅γ_μτ^a N ρ^a,μ- i f_ρ/4 m_ NN̅σ_μντ^a N (∂^μρ^a,ν - ∂^νρ^a,μ) , where τ^a are the usual Pauli matrices, N = [ χ_p; χ_n ], with χ_ p,n being the spinor fields associated with the proton and the neutron, while π^a = (π^+ + π^-/√(2), i (π^+ - π^-)/√(2), π^0) are the pion fields. Taking the non relativistic limit one can derive the following relations between the coupling constant in this Lagrangian and the parameters for the potential in Eq. <ref> g_π = 2 m_ N f_π/m_π, C_ρ = (g_ρ + f_ρ)^2 m_π^2/4 m_ N^2 =g_ρ^2(1 + r)^2 m_π^2/4 m_ N^2 , where we introduced the tensor to vector ratio of the ρ-meson, r ≡ f_ρ/g_ρ. Following Ref. <cit.> we fix r = 6.1 and g_ρ = √(4π× 0.41)≃ 2.3. The inclusion of the ρ meson primarily serves the purpose of reducing the OPE potential which otherwise at large momentum transfer would saturate to a constant value and largely overpredict the interaction energy. Finally, for completeness, we also report the piece of the non-relativistic Lagrangian describing the electroweak interactions; for the charged-current interactions this is ℒ= G/√(2)(χ_p^†(δ^μ_0-g_A σ^iδ^μ_i)χ_n ℓ̅γ_μ(1-γ_5)ν_ℓ+h.c.), where g_A ≃ 1.27 is the axial vector constant, G = G_ Fcosθ_C with G_ F being the weak Fermi coupling constant and θ_C the Cabibbo angle, χ_n,p are two-components Pauli spinors while ℓ and ν_ℓ are standard Dirac spinors. For the neutral-current interactions with neutrinos, we use ℒ= G_ F/2√(2)(χ_p^†(c_vδ^μ_0-g_A σ^iδ^μ_i)χ_p -χ_n^†(δ^μ_0-g_A σ^iδ^μ_i)χ_n )ν_ℓγ_μ (1-γ_5)ν_ℓ+h.c., with c_v=1-4sin^2θ_W and θ_W is Weinberg's angle. In concluding this section, we wish to emphasize that the numerical results presented herein utilize the Lagrangian couplings in their bare form, akin to the treatment of nucleon masses. It is essential to note, however, that these couplings are expected to get in-medium modifications. For example, axial coupling g_A is expected to experience quenching at finite densities <cit.>. Our results can be rescaled accordingly. 
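As an illustration of how this interaction potential can be evaluated in practice, the following Python sketch collects the momentum-dependent couplings g'_k and h'_k together with the Landau parametrisation above. The pion mass value and the use of natural units (MeV) are our assumptions; the remaining constants follow the Bonn-model values quoted in the text.

# Minimal sketch of the nucleon-nucleon potential components in natural
# units (MeV). The pion mass value is an assumption (not quoted in the
# text); the other constants follow the stated Bonn-model choices.
import numpy as np

m_pi  = 135.0      # MeV, assumed neutral-pion mass
m_rho = 769.0      # MeV
C_rho = 1.4
f_pi  = 1.0
m_N   = 939.0      # MeV, bare nucleon mass

def landau_params(p_F):
    """Constant Landau parameters f', g, g' for neutron Fermi momentum p_F (MeV).
    F_0 is omitted since, as noted in the text, it drops from the rates."""
    norm = np.pi**2 / (2.0 * m_N * p_F)
    F0p, G0, G0p = 0.7, 1.1, 1.1
    return norm * F0p, norm * G0, norm * G0p   # f', g, g'

def g_prime_k(k, p_F):
    """Spin-spin channel: Landau g' plus the rho-exchange reduction."""
    _, _, gp = landau_params(p_F)
    return gp - C_rho * (f_pi / m_pi)**2 * k**2 / (k**2 + m_rho**2)

def h_prime_k(k):
    """Tensor channel: one-pion exchange minus rho exchange."""
    return -(f_pi / m_pi)**2 * (k**2 / (k**2 + m_pi**2)
                                - C_rho * k**2 / (k**2 + m_rho**2))

# Example: momentum transfer of order the neutron Fermi momentum.
p_F = 340.0  # MeV, roughly nuclear saturation density
print(h_prime_k(p_F), g_prime_k(p_F, p_F))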
§ NEUTRINO EMISSIVITY With the nucleon interaction potential at hand, we can proceed to compute the neutrino emissivity for both bremsstrahlung and MURCA. The general form for single-flavor neutrino emissivity (we will will later account for the proper flavor multiplicities) reads Q_ν = 𝒮∫ d^3p_ν,1/(2π)^3 2 ω_1∫∏_i=1^4 d^3p_i/(2π)^3∫d^3 p_ℓ/(2π)^3 2 ω_ lω_ν (2π)^4 δ(P_1 + P_2 - P_3 - P_4 - P_ℓ - Q_ν) ℱ∑_ spins |ℳ|^2, where 𝒮 is a symmetry factor for identical particles, equal to 1/4 for neutron-neutron and proton-proton bremsstrahlung, 1/2 for MURCA and 1 for neutron-proton bremsstrahlung. In Eq. <ref>, capital letters P=(E, ) denote the 4-momentum of the corresponding particle. In particular, _i and E_i are the nucleon momenta and energies respectively, ω_ℓ is the energy of the second lepton emitted (either charged leptons for MURCA or neutral leptons for bremsstrahlung ), and we also defined the phase space factor ℱ, which reads ℱ= f_1 f_2 (1-f_3)(1 - f_4) for bremsstrahlung and ℱ=f_1 f_2 (1-f_3)(1 - f_4)(1-f_ℓ) for MURCA, where f_i are the fermion distribution functions with the appropriate chemical potentials. Finally, ω_ν is the total energy emitted into neutrinos (therefore ω_ν = ω_1 + ω_ℓ for bremsstrahlung and ω_ν = ω_1 for MURCA). In all that follows, we assume that neutrinos freely escape and we can neglect their distribution functions in the Boltzmann equation, an extremely good approximation already a few seconds after the formation of a neutron star. Moreover, we will work under the assumption of strong degeneracy for all nucleons, as well as for muons and electrons, so that scattering processes only involves fermions close to the Fermi surface with momentum p_ F,i and width ∼ T/p_ F,i≪ 1. For the nucleons, we do not include the usual factor 2m_ N in the denominator appearing in a relativistic treatment, which would cancel out with a corresponding factor in the normalization of the relativistic spinors in the matrix element. Thus, we simply consider the nucleon wavefunctions normalized according to the condition NN=1, more appropriate for non-relativistic calculations. Finally, we do not include a factor 2 for the spin of the particles in the integration over the phase space, which means that in our squared amplitude calculations we always have to sum, not average, over the spins. §.§ Bremsstrahlung emission There are two types of processes to consider in this case: one with two identical nucleons (either protons or neutrons) in the initial and final states, and one with a neutron and a proton scattering off each other. We treat the two cases separately as both the amplitudes squared and the phase spaces differ. §.§.§ Neutron-neutron (proton-proton) bremsstrahlung The squared amplitude for identical nucleons bremsstrahlung is |ℳ|^2 = 64 g_A^2 G_ F^2 ω_1 ω_2/(ω_1+ω_2)^2[ h'^2_l + h'^2_k + h'_kh'_l ( 1 - 3 (·)^2 ) ]. Here we have already averaged over the directions of the outgoing leptons; since their momenta are much smaller than the nucleon momenta, they are emitted essentially isotropically with uncorrelated directions. Only the tensor interaction contributes to the squared matrix element, as one can easily understand by noting that neutrino emission couples to the total spin of the nucleon pair, and all the other interactions conserve the total spin. 
In a more compact form we can write |ℳ|^2=64g_A^2G_ F^2ω_1ω_2/(ω_1+ω_2)^2|m|^2, where we introduced the reduced squared amplitude |m|^2, defined as |m|^2=(f_π/m_π)^4([l^2/(l^2+m_π^2)-C_ρl^2/(l^2+m_ρ^2)]^2+[k^2/(k^2+m_π^2)-C_ρk^2/(k^2+m_ρ^2)]^2 + [l^2/(l^2+m_π^2)-C_ρl^2/(k^2+m_ρ^2)][k^2/(k^2+m_π^2)-C_ρk^2/(k^2+m_ρ^2)] ), where ≡_1-_3, ≡_2-_3, with _i being the spatial momenta of the involved nucleons. In writing the squared amplitude, we used the fact that · vanishes for nucleons exactly on the Fermi surface, and thus is suppressed for very small T/μ_N. The integrals over the Fermi distributions are easily evaluated in the limit T→ 0, using the result that ∫_-∞^+∞∏_i=1^4 dx_i δ(x_1+x_2-x_3-x_4-ξ)/(e^x_1+1)(e^x_2+1)(e^-x_3+1)(e^-x_4+1)=1/1-e^-ξ2π^2ξ/3(1+ξ^2/4π^2); it proves convenient to reinstate the integral over the nucleon momenta forcing them to be on the Fermi surface. Thus, we can rewrite the emissivity as Q_ν=G_ F^2 g_A^2 m_ N^4/96π^12∫ω_1^2 dω_1 ω_2^2 dω_2 T^2(ω_ν^2+4π^2T^2)/1-e^-ω_ν/T∏_i=1^4 d^3 p_iδ(p_i^2-p_F,i^2) δ^(3)(_1+_2-_3-_4) |m|^2, where we already included the symmetry factor 𝒮=1/4. The integrals over the nucleons phase space are strongly constrained by the delta functions. Since the matrix element |m|^2 depends only on ||=|_3-_1| and ||=|_4-_1|, it is most convenient to reparameterize the phase space integration in terms of these differences. After performing all of the integrals except those on k=|| and l=||, we are left with Q_ν^ NN= g_A^2G_ F^2m_ N^4/48 π^10∫_0^∞ dω_1 dω_2 ω_1^2 ω_2^24π^2T^2+(ω_1+ω_2)^2/e^(ω_1+ω_2)/T-1_ℐ×∫_0^2p_ Fdk∫_0^√(4p_ F^2-k^2)dl|m|^2/√(4p_ F^2-k^2-l^2)_J ≡ g_A^2G_ F^2m_ N^4/48 π^10ℐ J, where J≡ 2J_1+J_2 and J_1= f_π^4/m_π^4π p_ F[ϕ(α)+ϕ(β)C_ρ^2-2C_ρΦ(α,β)], J_2=f_π^4 /m_π^4π p_ F[Ψ(α,α)-2 C_ρΨ(α,β)+C_ρ^2 Ψ(β,β)]. Here p_ F is the Fermi momentum of either neutrons or protons, α=m_π/2p_ F, β=m_ρ/2p_ F, and ϕ(α)=1+α^2/2(1+α^2)-3/2α arctan(1/α), Φ(α,β)=1-α^3 arctan(1/α)-β^3 arctan(1/β)/α^2-β^2, Ψ(α,β)=1-α arctan(1/α)-β arctan(1/β) +αβ/√(1+α^2 +β^2)arctan(√(1+α^2+β^2)/αβ). The energy integral ℐ in Eq. <ref> reads instead ℐ = ∫_0^∞dω_1 dω_2 ω_1^2 ω_2^24π^2T^2+ω^2/e^ω/T-1= = 1/32∫_0^∞ dω∫^ω_-ω dδ (ω + δ)^2(ω - δ)^2/e^ω/T-1(ω^2 + 4 π^2 T^2) = 164 π^8 T^8/4725, and therefore Q_ν^ NN = 41 G_ F^2 m_ N^4 T^8/56700 π^2 J. Eq. <ref> includes all diagrams and the exchange of the ρ meson. In Fig. <ref>, we show our complete result (red curve) and directly compare it with the final numerical results Eq. 65a in , which is the standard expression used also in public numerical code http://www.astroscu.unam.mx/neutrones/NSCool/NSCool. This latter – in its public version – has the following rate for nucleon-nucleon bremsstrahlung in units of erg/cm^3/s qbrem_nn=n_ν· 7.4d19 · mstn(i)^4 · (kfn(i)/1.68d0) · alpha_nn· beta_nn * (t/1.d9)^8, where t is the temperature, normalised here to 10^9 K, n_ν is the number of neutrinos flavors, alpha_nn=0.59, beta_nn = 0.56 are kept constant in the code at their nuclear density values (although they should be a function of density), kfn(i) is the nucleon Fermi momentum in units of fm^-1,and mstn(i) represent the nucleon mass in units of the bare one. This expression coincides in fact with Eq. 65a of  . In the comparison plot we fix the number of emitted neutrino flavors to 3, so Eq. <ref> has been multiplied by 3; correspondingly, we show the result of Eq. 52 of  multiplied by 3, or equivalently their Eq. 65a multiplied by 3/2 since there they accounted for two-flavor emission. 
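A hedged numerical sketch of the master formula above (per neutrino flavor) is given below, together with the FM/NSCool expression just quoted for comparison. The pion mass, the MeV-to-cgs conversion constants, and the retention of the g_A^2 factor from the intermediate expression for Q_ν^NN are our assumptions.

# Numerical sketch of the nn-bremsstrahlung emissivity per neutrino flavor,
# J = 2*J1 + J2, compared with the FM/NSCool expression quoted in the text.
# m_pi, the unit conversion, and the g_A^2 factor (kept as in the
# intermediate expression) are assumptions.
import numpy as np

G_F, g_A = 1.166e-11, 1.27                       # MeV^-2, axial coupling
m_N, m_pi, m_rho = 939.0, 135.0, 769.0           # MeV (m_pi assumed)
C_rho, f_pi = 1.4, 1.0

def phi(a):
    return 1 + a**2 / (2 * (1 + a**2)) - 1.5 * a * np.arctan(1 / a)

def Phi(a, b):
    return 1 - (a**3 * np.arctan(1/a) - b**3 * np.arctan(1/b)) / (a**2 - b**2)

def Psi(a, b):
    r = np.sqrt(1 + a**2 + b**2)
    return 1 - a*np.arctan(1/a) - b*np.arctan(1/b) + a*b/r*np.arctan(r/(a*b))

def Q_nn(p_F, T, beta_nn=1.0):
    """Emissivity per flavor in MeV^5; p_F, T in MeV. beta_nn is the optional
    FM short-range suppression factor (0.56 at nuclear density)."""
    a, b = m_pi / (2*p_F), m_rho / (2*p_F)
    pref = (f_pi/m_pi)**4 * np.pi * p_F
    J1 = pref * (phi(a) + C_rho**2*phi(b) - 2*C_rho*Phi(a, b))
    J2 = pref * (Psi(a, a) - 2*C_rho*Psi(a, b) + C_rho**2*Psi(b, b))
    J = 2*J1 + J2
    return 41 * g_A**2 * G_F**2 * m_N**4 * T**8 * J * beta_nn / (56700 * np.pi**2)

# Conversion MeV^5 -> erg cm^-3 s^-1 (hbar*c = 197.327 MeV fm, hbar in MeV s).
MeV5_to_cgs = 1.602e-6 / ((197.327e-13)**3 * 6.582e-22)

kfn, T9 = 1.68, 1.0                              # fm^-1, T/1e9 K
p_F, T = kfn * 197.327, T9 * 0.08617             # MeV
print("this framework:", Q_nn(p_F, T, beta_nn=0.56) * MeV5_to_cgs, "erg/cm^3/s")
print("FM / NSCool   :", 7.4e19 * (kfn/1.68) * 0.59 * 0.56 * T9**8, "erg/cm^3/s")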
For both expressions we fix the neutron mass to be its in-vacuum value; in order to implement any in-medium prescription for the effective mass, m̃_N = f(p_ F) m_ N, it will be sufficient to scale our results by f(p_ F)^4. We also multiply Eq. <ref> by the factor beta_nn = 0.56. This is an extra suppression factor introduced by   in their OPE potential to capture “short-range correlations induced by the hard core of the NN interaction" (see their Eq. 17 and Tab. I).   stated that the result in their Eq. 52 was a good representation of the final neutrino emissivity because of a compensation between two missing effects: the “exchange diagrams" contribution and the ρ-exchange suppression. However, we do not observe this compensation and our result for nuclear densities, i.e k_f ∼ 1.7 fm^-1∼ 335 MeV (indicated by a gray dashed vertical line in Fig. <ref>), and above is considerably smaller. Given the simplicity of the master formula <ref>, it can be easily implemented when doing numerical comparison within the framework of OPE+ρ-meson exchange. §.§.§ Neutron-proton bremsstrahlung The structure of the calculation for neutron-proton bremsstrahlung is very similar to the previous one. After setting the symmetry factor 𝒮 of Eq. <ref> to one, instead of 1/4, one needs to change the squared amplitude, which now reads |ℳ|^2 =   64 G_ F^2 g_A^2 ω_1 ω_2/ω^2[ h'^2_k +2h'^2_l+ 4 (h'_k-h'_l)(g'_k - g'_l+f'-g) +6(g'_k - g'_l+f' - g)^2  - 2 h'_l h'_k (1-(·)^2)] ≡   64 G_ F^2 g_A^2 ω_1ω_2/(ω_1+ω_2)^2|m|^2, where now we notice the presence of all Landau parameters (except for f, which does not contribute). The Feynman diagrams for this process are depicted in Fig. <ref>; diagrams from a) to d) constitute the t-channel, where the exchange momentum is , while diagrams from e) to h) are the u-channel, where the exchange momentum is . We notice that our expression for the amplitude, even neglecting Landau terms, differs from that of for a factor of 2 missing in the third interference term. Due to our assumed form for the interaction potential, the matrix element |m|^2 depends only on the modules of the momentum exchange in the t-channel and in the u-channel. Assuming p_p≪ p_n, the latter =_4-_1 is dominated by the momentum of the neutron -_1 on the Fermi surface, and _4 is only a small correction. Since the matrix element itself |m|^2 is a slowly-varying function of l=||, we will expand J to the first non-vanishing order in ϵ=p_p/p_n as J^ np= π/2∫_0^2p_p dk[|m|^2(k,p_n)+p_n∂_l|m|^2(k,p_n)ϵ^2/4(1-3k^2/4p_p^2)+p_n^2∂_l^2|m|^2(k,p_n)ϵ^2/4(1-k^2/4p_p^2)] We have checked that for typical values inside the core of the neutron star, keeping only the first term of the expansion leads to few %. Putting all factors together we find Q_ν^ np= 41 g_A^2G_ F^2m_n^2m_p^2 T^8/14175 π^2J^ np; keeping only the leading term in Eq. <ref> the final integration can be done analytically J^ np = π p_p[f_π^4/m_π^4(2η^2(m_π)+4C_ρ^2η^2(m_ρ)+ϕ(α_p)+3C_ρ^2 ϕ(β_p)+2C_ρΦ(α_p,β_p). -2(η(m_π)+C_ρη(m_ρ))(ψ(α_p)+C_ρψ(β_p))-4C_ρ^2η(m_ρ)ψ(β_p))+6(g-f')^2 .+4f_π^2/m_π^2(g-f')(ψ(α_p)+2C_ρψ(β_p)-η(m_π)-2C_ρη(m_ρ))] ≡π p_p𝒥(α_p, β_p), where α_p=m_π/(2p_p), β_p=m_ρ/(2p_p), and η(m)=p_n^2/(m^2+p_n^2). The functions ϕ(α), Φ(α,β) are defined in (<ref>) and (<ref>), respectively, while ψ(α) is given by ψ(α)=1-αarctan(1/α). Again, this result must be multiplied by the number of neutrino flavors, N_ν = 3, to obtain the total luminosity. 
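The leading-order np-bremsstrahlung emissivity can be sketched in the same way. The Python snippet below transcribes J^np = π p_p 𝒥(α_p, β_p) and the prefactor of Q_ν^np per neutrino flavor; the pion mass, the use of bare nucleon masses (so that m_n^2 m_p^2 = m_N^4), and the unit conversion are again our assumptions.

# Sketch of the leading-order np-bremsstrahlung emissivity per flavor.
import numpy as np

G_F, g_A = 1.166e-11, 1.27                       # MeV^-2
m_N, m_pi, m_rho, C_rho, f_pi = 939.0, 135.0, 769.0, 1.4, 1.0

phi = lambda a: 1 + a**2/(2*(1 + a**2)) - 1.5*a*np.arctan(1/a)
Phi = lambda a, b: 1 - (a**3*np.arctan(1/a) - b**3*np.arctan(1/b))/(a**2 - b**2)
psi = lambda a: 1 - a*np.arctan(1/a)

def Q_np(p_n, p_p, T):
    """Emissivity per flavor in MeV^5; momenta and T in MeV."""
    a, b = m_pi/(2*p_p), m_rho/(2*p_p)
    eta = lambda m: p_n**2/(m**2 + p_n**2)
    # Landau parameters from the earlier parametrisation (F0' = 0.7, G0 = 1.1).
    norm = np.pi**2/(2*m_N*p_n)
    fp, g = 0.7*norm, 1.1*norm
    pion = (f_pi/m_pi)**4 * (
        2*eta(m_pi)**2 + 4*C_rho**2*eta(m_rho)**2 + phi(a) + 3*C_rho**2*phi(b)
        + 2*C_rho*Phi(a, b)
        - 2*(eta(m_pi) + C_rho*eta(m_rho))*(psi(a) + C_rho*psi(b))
        - 4*C_rho**2*eta(m_rho)*psi(b))
    landau = 6*(g - fp)**2 + 4*(f_pi/m_pi)**2*(g - fp)*(
        psi(a) + 2*C_rho*psi(b) - eta(m_pi) - 2*C_rho*eta(m_rho))
    J_np = np.pi * p_p * (pion + landau)          # leading term of the expansion
    return 41*g_A**2*G_F**2*m_N**4*T**8*J_np/(14175*np.pi**2)

MeV5_to_cgs = 1.602e-6/((197.327e-13)**3*6.582e-22)
p_n = 340.0                                      # MeV
p_p = 85.0*(p_n/340.0)**2                        # MeV, FM prescription
print(Q_np(p_n, p_p, T=0.08617)*MeV5_to_cgs, "erg/cm^3/s at T = 1e9 K")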
If we put C_ρ and the Landau parameters to zero in our expression, and neglect the interference term -2 η(m_π) ψ(α_p), we recover Eq. 53a+53b of without Landau parameters. Our expression with the Landau parameters is different than that reported in , due to cancellations with the interference term, neglected in . In particular, we notice that the parameter g' disappears completely. However, we have checked that this discrepancy is not the leading one in the final numerical difference between our results and the previous literature. In Fig. <ref> we compare our complete emission rate (red curves) with the emission rates implemented in http://www.astroscu.unam.mx/neutrones/NSCool/NSCool (blue curve) qbrem_nn=n_ν· 1.5d20 · mstn(i)^2 mstp(i)^2 · (kfp(i)/1.68d0) · alpha_np· beta_np· (t/1.d9)^8, where alpha_np=1.06, beta_np = 0.66 are kept constant in the code, kfp(i) is the proton Fermi momentum in units of fm^-1,and mstp(i) represents the proton mass in units of the bare one. This expression coincides with Eq. 65b of and includes the contribution of the Landau parameters. The latter are seen to slightly decrease the emissivity, due to the negative contribution of the interference term. However, the values of these parameters, and therefore their impact on the emissivity, should not be taken as precise, given that they are only estimated for the case of nuclear-symmetric matter. Our analytical expressions allow to easily retrieve the emissivity for arbitrary values of the Landau parameters, directly assessing the impact of this uncertainty. In all curves we fixed also N_ν = 3 for the number of neutrinos flavors, T = 10^9 K, the nucleon masses to their in vacuum values, and the proton momentum to be p_p = 85 (p_n/340 MeV)^2 MeV, following . The numerical discrepancy originates partially from the interference term -2η(m_π)ψ(α_p), not included in on the grounds that its enhancement to the emission would be approximately compensated by the ρ-meson exchange. However, we notice that the result in Eq. 70 of is already the full result, while the authors seem to multiply it by an extra factor of 2 because of “group II diagrams". It is unclear what these diagrams are, but it may be that this erroneous extra factor of 2 led to the claimed compensation with the ρ-meson effect. Consequently, for nuclear densities and above, the emission rate used in present numerical codes and most of the literature seems to be overestimated by a factor ∼ 2. §.§ MURCA We now pass to consider MURCA processes. Here we compute the rates of both the neutron branch n + n → n + p + ℓ + ν̅_ℓ, n + p + ℓ→ n + n + ν_ℓ and the proton branch p + n → p + p + ℓ + ν̅_ℓ, p + p + ℓ→ p + n + ν_ℓ, where ℓ=e,μ. The rate for direct and inverse process of neutrino emission are identical; this follows from the approximation that the neutrino energy is much smaller than the nucleon and electron energies, so that the hadronic matrix elements are independent of the neutrino energy, and from detailed balance. Thus, we introduce a factor of 2 to account for both processes (we always consider the combined emission rate of neutrinos and antineutrinos; the two are of course identical by the same argument). We perform the computation for a generic lepton generation; the total emissivity in this case is then obtained by summing over the contributions of electrons and muons alone, since tau leptons are to heavy to be produced. This factor, however, is not a simple factor 2, since muons are not fully relativistic. 
In this case we also find it useful to first compute the neutrino emission rate and then the final emissivity. The two are easily related as Q_ν^ MURCA = 2∫d^3 p_ν/(2π)^3ω_νΓ_ν^ MURCA; where we remind the factor 2 for the direct and inverse processes. This is particularly convenient because with the neutrino emission rate at hands, one can also easily obtain the neutrino absorption rate, another important quantity in the physics of neutron stars. §.§.§ Neutron branch We start studying the neutron branch, which is supposed to be the most relevant one for typical equation of states <cit.>. To make contact with basically all previous literature, we first perform the computation in the triangular approximation, i.e neglecting protons and leptons momenta p_n ≫ p_p ,p_ℓ in the Dirac-δ for the conservation of momentum. We then proceed to the perform the computation dropping this approximation, and quantifying the discrepancy. Triangular approximation The squared amplitude for the neutron branch MURCA process n(_1)+n(_2)→ n(_3)+p(_4)+ℓ(_ℓ)+ν̅(_ν), in the triangular approximation is |ℳ|^2 =64G^2g_A^2 ω_νω_ℓ/(ω_ν+ω_ℓ)^2[12(f'-g)^2+21/4(f_π/m_π)^4p_n^4/(p_n^2+m_π^2)^2(1-C_ρp_n^2+m_π^2/p_n^2+m_ρ^2)^2] ≡ 64 G^2g_A^2ω_ν/μ_ℓ|m|^2 . where μ_ℓ is the Fermi energy and where we have assumed that p_n≫ p_p,p_e, so that ≈-_2, ≈ -_1, and ·=- p_n^2/2. Notice that even for mildly relativistic or non-relativistic muons, the leptonic trace still leads to the characteristic ω_νω_ℓ product in the numerator, since any term proportional to the momentum of the particle averages to zero due to isotropy of the emitted neutrino. In the last step we also used the fact that electrons and muons are degenerate in the NS core, with ω_ℓ∼μ_ℓ≫ω_ν≈ T. We highlight that when computing the complete amplitude squared, all the Landau parameters but f' and g drop out in the triangular approximation. This was not appreciated in , where in the computation of the “exchange terms" Landau parameters were not included. We now compute the emission rate Γ^ M, T_ω_ν= 1/21/2ω_ν∫d^3 p_1/(2π)^3∫d^3 p_2/(2π)^3∫d^3 p_3/(2π)^3∫d^3 p_4/(2π)^3∫d^3 p_ℓ/(2π)^3(2 ω_ℓ)64G^2g_A^2ω_ν/μ_ℓ|m|^2× × (2π)^4δ(E_1+E_2-E_3-E_4-ω_ℓ-ω_ν)δ^(3)(_1+_2-_3-_4-_ℓ-_ν)× × f_1f_2(1-f_3)(1-f_4)(1-f_ℓ), where the superscript “M,T" stands for “MURCA, Triangular", and the initial factor 1/2 is the symmetry factor for identical particles in the initial state. Using the fact that |m|^2 is constant and the degeneracy of the nucleons, then Γ^ M, T_ω_ν= G^2g_A^28m_n^3m_pp_n^3p_p|m|^2/ (2π)^11p_ℓ/μ_ℓ∫dΩ_l∏_i=1^4dΩ_iδ^(3)(_1+_2-_3-_4-_ℓ-_ν)× ×∫dω_e∏_i=1^4dE_iδ(E_1+E_2-E_3-E_4-ω_ℓ-ω_ν)f_1f_2(1-f_3)(1-f_4)(1-f_ℓ) ≡ G^2g_A^28m_n^3m_pp_n^3p_p|m|^2/ (2π)^11p_ℓ/μ_ℓ 𝒜ℰ, where p_ℓ if the Fermi momentum of the emitted lepton (either an electron or a muon). The integral over the energies can be easily performed in the complex plane and gives ℰ=1/24(ω_ν^2+9π^2T^2)(ω_ν^2+π^2T^2)/e^ω_ν/T+1, while the angular integral (under the triangular approximation) reads 𝒜=128π^4/p_n^3. 
Putting everything together we get Γ^ M, T_ω_ν =8G^2g_A^2p_pm_n^3m_p|m|^2/3(2π)^7(ω_ν^2+9π^2T^2)(ω_ν^2+π^2T^2)/e^ω_ν/T+1 =8G^2g_A^2m_n^3m_pp_p/3(2π)^7(ω_ν^2+9π^2T^2)(ω_ν^2+π^2T^2)/e^ω_ν/T+1p_ℓ/μ_ℓ ×[12(f'-g)^2+21/4(f_π/m_π)^4p_n^4/(p_n^2+m_π^2)^2(1-C_ρp_n^2+m_π^2/p_n^2+m_ρ^2)^2] Finally, the total energy loss rate is 𝒬^ M, T_ν = 2 (1+p_Fμ/μ_μ)×11513/120960πG^2g_A^2f_π^4/m_π^4m_n^3m_pp_pT^8× ×1/2×21/16×[32/7(m_π/f_π)^4(f'-g)^2+2(η(m_π)-C_ρη(m_ρ))^2] ≡ 2 (1+p_Fμ/μ_μ)×11513/120960πG^2g_A^2f_π^4/m_π^4m_n^3m_pp_pT^8α_ MURCA, where the factor of 2 in front of everything takes into account the inverse reaction, n + p + l → n + n + ν_ℓ, where we assumed the electrons to be fully relativistic (and therefore p_e ≃μ_e), and where α_ MURCA= 1/2×21/16×[32/7(m_π/f_π)^4(f'-g)^2+2(η(m_π)-C_ρη(m_ρ))^2] . Compared to Eq. 56 of , in addition to the factor 21/16 coming from the addition of the u-channel and interference terms, we get a different structure of the Landau parameters and a factor of 1/2 in the OPE plus ρ-exchange terms. We have been able to reproduce the amplitude squared in Eq. 39 of , including Landau parameters, which is the sum of t-channel and u-channel amplitudes squared (the two are the same). However, we have not been able to trace back the factor of 2 discrepancy in their final emission rate, nor the meaning of the so called “exchange diagrams" in this case, which would correspond to our u-channel diagrams. The inconsistency extends to their Eq. 75, where the ratio between emissivity with and without "exchange terms" should be ∼ 0.65, not ∼ 1.3, based on their own rates. Nevertheless, it seems that all the subsequent literature <cit.> has adopted this erroneous factor of two in the emission rate. Then, the inclusion of the ρ-meson further suppresses the emissivity, which has been therefore overestimated by more than a factor of 2 at nuclear densities. The MURCA process is typically the most important for NS cooling. Given its particular relevance, in this case to make the comparison between our findings and previous results, we use a realistic NS profile provided in http://www.astroscu.unam.mx/neutrones/NSCool/NSCool for a NS of 1  M_⊙ with the Akmal-Pandharipande-Ravenhall (APR) equation of state (EOS) <cit.> . In Fig. <ref> we show the comparison for this NS radial profile between our result for the triangular approximation (red curve), Eq. <ref>, and the one reported in http://www.astroscu.unam.mx/neutrones/NSCool/NSCool (blue curve) again in units of erg/cm^3/s Murca_n = 8.55d21 · mstn(i)^3 · mstp(i) · (kfe(i) + kfm(i))/1.68d0 · alpha_n · beta_n · (t/1.d9)^8, with alpha_n =1.76d0-0.63d0 · (1.68d0/kfn(i))^2 and beta_n =0.68d0. This equation coincides with Eq. 65b of , with the addition of the muon contribution. Apart from the differences mentioned above, we notice two further issues with this expression: the presence of the Fermi momentum of the electron, p_e, rather than the one of the proton; the muon contribution is overestimated because it doesn't take into account that, contrary to electrons, muons are not relativistic or mildly relativistic. These two problems were already noticed recently in Ref. <cit.>. However, we stress that these corrections alone tend to make the output of http://www.astroscu.unam.mx/neutrones/NSCool/NSCool deviate further from our exact computation. Fig. <ref> makes manifest that the results presented in the literature overestimate the MURCA emission rate by more than a factor of 2. 
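For reference, a minimal numerical sketch of the triangular-approximation neutron-branch emissivity is given below for the electron channel alone (the muon channel adds the factor p_Fμ/μ_μ as in the equation above). The pion mass, the Cabibbo angle, and the unit conversion are our assumptions, and bare nucleon masses are used so that m_n^3 m_p = m_N^4.

# Sketch of the triangular-approximation MURCA (neutron branch) emissivity,
# electron channel only; G = G_F cos(theta_C).
import numpy as np

G_F, cos_thC, g_A = 1.166e-11, 0.975, 1.27       # MeV^-2, -, -
m_N, m_pi, m_rho, C_rho, f_pi = 939.0, 135.0, 769.0, 1.4, 1.0

def Q_murca_n_triangular(p_n, p_p, T, beta_n=1.0):
    """Neutron-branch MURCA emissivity (direct + inverse) in MeV^5; inputs in
    MeV. beta_n is the optional FM short-range suppression factor (0.68)."""
    G = G_F * cos_thC
    eta = lambda m: p_n**2 / (m**2 + p_n**2)
    norm = np.pi**2 / (2 * m_N * p_n)            # Landau parametrisation
    fp, g = 0.7 * norm, 1.1 * norm
    alpha = 0.5 * (21/16) * ((32/7) * (m_pi/f_pi)**4 * (fp - g)**2
                             + 2 * (eta(m_pi) - C_rho*eta(m_rho))**2)
    lepton = 1.0                                 # p_e ~ mu_e for electrons
    return (2 * lepton * 11513 / (120960*np.pi) * G**2 * g_A**2
            * (f_pi/m_pi)**4 * m_N**4 * p_p * T**8 * alpha * beta_n)

MeV5_to_cgs = 1.602e-6 / ((197.327e-13)**3 * 6.582e-22)
p_n = 1.68 * 197.327                             # MeV (k_F = 1.68 fm^-1)
p_p = 85.0 * (p_n/340.0)**2                      # MeV, FM prescription
print(Q_murca_n_triangular(p_n, p_p, T=0.08617, beta_n=0.68) * MeV5_to_cgs,
      "erg/cm^3/s at T = 1e9 K")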
The biggest impact comes from the ρ-meson exchange, whose contribution is not compensated by other factors when everything is computed self-consistently. We verified that instead the impact of the Landau parameters is modest. General expression We now want to check the impact of the triangular approximation; as a simple reason for doubting its full applicability, we note that if the proton and electron momenta are aligned their sum would be essentially identical to the neutron Fermi momentum. Two things need to be changed: the squared amplitude and then the corresponding angular integration, while the rest stays the same. The squared amplitude |ℳ|^2 can be written as usual in terms of a reduced one |ℳ|^2 = 64G^2ω_ν/ω_l |m|^2. In turn, the reduced amplitude squared can be written as |m|^2 = |m_u|^2+ |m_t|^2 + 2 (m_u^* m_t), where the first two terms are the u and t channels amplitude squared, while the third one is their interference. With the usual momentum assignments n(_1)+n(_2)→ n(_3)+p(_4)+ℓ(_ℓ)+ν̅(_ν), the t-channel amplitude squared is written as follows |m_t|^2 = g_A^2[6(f'^2+g^2)-2(3f'g'_k+3(f'+2g)g'_k̃+f'h'_k+(f'+2g)h'_k̃)+. +.(3g'^2_k+9g'^2_k̃+2g'_kh'_k+h'^2_k+6g'_k̃h'_k̃+3h'^2_k̃)]+3(g'_k-g'_k̃)^2+2(g'_k-g'_k̃)(h'_k-h'_k̃) +(h'_k-h'_k̃)^2+2h'_kh'_k̃(1-(·)^2) where = _1 - _3 and = _2 - _4. The u-channel piece, |m_u|^2, is the same but with → = _2 - _3 and → = _1 - _4. The interference term reads instead 2 (m_u^* m_t) = g_A^2[-24 f' g + 2 (2 f' (3( g'_k̃ + g'_l̃ )+ h'_k̃ + h'_l̃) + 3 g(g'_k + g'_k̃ + g'_l + g'_l̃).. +. g(h'_k + h'_k̃ + h'_l + h'_l̃))-(g'_l (h'_k + h'_k̃) + g'_l̃ (h'_k + 5 h'_k̃) +h'_k h'_l - h'_k̃ h'_l - h'_k h'_l̃ + 3 h'_k̃ h'_l̃ +. +. g'_k (3 g'_l + 3 g'_l̃ + h'_l + h'_l̃) + g'_k̃ (3 g'_l + 15 g'_l̃ + h'_l + 5 h'_l̃))+ +. h'_k h'_l(1 - (·)^2) - 2 h'_k̃ h'_l(1 - (·)^2) - 2 h'_k h'_l̃(1 - (·)^2) + 2 h'_k̃ h'_l̃(1 - (·))^2]+ +[(g'_l-g'_l̃)(h'_k-h'_k̃)+(g'_k-g'_k̃)(h'_l-h'_l̃)-(h'_k-h'_k̃)(h'_l-h'_l̃)+3(g'_l-g'_l̃)(g'_k-g'_k̃)+. +.2 h'_k h'_l(1 - (·)^2) - 2 h'_k̃ h'_l(1 - (·)^2) - 2 h'_k h'_l̃(1 - (·)^2) + 2 h'_k̃ h'_l̃(1 - (·)^2)] where x_ij, with i, j = k, k̃, l, l̃ is the cosine of the angle between the corresponding vectors. The emission rate is then Γ^ MURCA,n_ω_ν=8G_ F^2p_n^3p_pp_lm_n^3m_p/(2π)^11μ_ℓℰ∫dΩ_ℓ∏_i=1^4dΩ_iδ^(3)(_1+_2-_3-_4-_ℓ-_ν)|m|^2 = 256G_ F^2p_n^3p_pp_l m_n^3m_p ℰ/(2π)^11μ_ℓ1/p_n^3 p_p p_ℓ∫ d^3p_ℓ∫∏_i=1^4 d^3 p_i δ(p_i^2 - p_F,i^2) δ^3(_1 + _2 - _3 - _4 - p_ℓ) |m|^2 = 256G_ F^2m_n^3m_pℰ/(2π)^11μ_ℓ∫∏_i=1^4 d^3 p_i δ(p_i^2 - p_F,i^2) |m|^2, where in the last step we used the spatial δ of Dirac to integrate the electron momentum. At this point, in order to perform the angular integrations more easily, we find it useful to perform the following change of basis (with unit determinant) _1, _2, _3, _4 →_2, , , _4. With this transformation, the rate reads Γ^ MURCA,n_ω_ν =256 G^2m_n^3m_pℰ/ (2π)^11μ_ℓ∫ d^3 p_2 δ(p_2^2 - p_n^2)_p_n/2∫ dΩ_2∫ d^3 p_4 δ(p_4^2 - p_p^2)_p_p/2∫ dΩ_4∫ d^3 l d^3 l̃δ(|-_2|^2-p_n^2)× ×δ(|+_4|^2-p_n^2)δ(p_ℓ^2-|+|^2)|m|^2 = =16 G^2m_n^3m_pℰ/(2π)^9μ_ℓ∫_Σ dldl̃∫_0^2π dϕ∫_0^2π dβ |m|^2, where the region Σ is defined by Σ: |l̃-p_n|<p_p, 0<l<2p_n, l+l̃>p_ℓ, |l-l̃|<p_ℓ where it is understood that all the scalar products fixed by the delta functions must be consistently replaced in the reduced squared amplitude. The angles ϕ and β are the azimuthal angles compared to the plane containing and of _2 and _4, respectively. 
They are defined through the relations sinϕ=_2·(×)/p_nll̃√(1-x_2l^2)√(1-x_ll̃^2), sinβ=_4·(×)/p_pll̃√(1-x_4l̃^2)√(1-x_ll̃^2), where x_2l=_2·/(p_n l) and x_4l̃=_4·/(p_p l̃) are fixed by the kinematic constraints in (<ref>). The remaining four integrals must be done numerically after having written the reduced amplitude squared in this new convenient basis. Finally, the emissivity is given by Q_ν^ MURCA, n=11513/483840G^2m_n^3m_p/(2π)^3μ_ℓ∫_Σ dldl̃∫_0^2π dϕ∫_0^2π dβ |m|^2. We show our final results for Eq. <ref> in Fig. <ref> with (solid red) and without (dashed) ρ-meson exchange. Compared to our triangular approximation (shaded red curve), the full numerical results differ by a factor ∼ 20-30 % at most. Somewhat surprisingly – given that in the considered NS model p_p + p_e ∼ p_n – the triangular approximation provides quite precise results. In all cases the temperature was fixed to T = 10^9 K, the nucleon masses to their bare values, the Landau parameters to the values quoted in and we fixed the short-range physics suppression factor to beta_n =0.68d0. §.§.§ Proton branch The results described in the previous sections are for the MURCA neutron branch. Now we provide the results also for the proton branch and check its relevance. Assuming the following momentum assignments p(_1)+n(_2)→ p(_3)+p(_4)+ℓ(_ℓ)+ν̅(_ν) and based on crossing-symmetry arguments, it is easy to realize that the squared amplitude for the proton branch can be obtained by replacing → -, → -, and ↔. Thus we can immediately write the neutrino emission rate as Γ^ MURCA,p_ω_ν=16 G^2m_nm_p^3ℰ/(2π)^9μ_ℓθ(p_l+3p_p-p_n)∫_Σ dldl̃∫_0^2π dϕ∫_0^2π dβ |m|^2, where the angles ϕ and β are defined as before while the region Σ now is given by Σ: |l-p_n|<p_p, 0<l̃<2p_p, l+l̃>p_ℓ, |l-l̃|<p_ℓ, and finally Q_ν^ MURCA, p=11513/483840G^2m_p^3m_n/(2π)^3μ_ℓθ(p_l+3p_p-p_n)∫_Σ dldl̃∫_0^2π dϕ∫_0^2π dβ |m|^2. We show the result for the proton branch (yellow curves), compared with the neutron branches (red curves) in Fig. <ref>, where we also show the proton branch implemented in http://www.astroscu.unam.mx/neutrones/NSCool/NSCool (blue curve) qmurca_p=8.55d21 · mstn(i) · mstp(i)^3 · (kfe(i)/1.68d0)· (kfe(i)+ + 3.d0*kfp(i)-kfn(i))^2/(8.d0 · kfe(i) · kfp(i)) · alpha_p · beta_p · (t/1.d9)^8, where alpha_p, beta_p are the same as for the neutron branch. The proton branch is always subdominant but not by a large amount, although this may change when effective masses are properly taken into account, because the effective proton mass in NS is usually smaller than the effective neutron mass. For completeness, in Fig. <ref> we also show the results in an approximation analogous (dotted red) to the triangular one for the neutron branch. In the proton branch case this approximation amounts to take k and l to be the maximum exchanged momenta <cit.> 𝒬^ MT, p_ν = 2 ×(∑_ℓ=e,μ(p_ℓ+3p_p-p_n)^2/μ_ℓ)×11513/645120πG^2g_A^2m_n^3m_pT^8× ×[2(f'-g)^2+(f_π/m_π)^4(p_n-p_p)^4/((p_n-p_p)^2+m_π^2)^2(1-C_ρ(p_n-p_p)^2+m_π^2/(p_n-p_p)^2+m_ρ^2)^2]. This result – which we label “Proton Branch Approx." in our figure – agrees very precisely with the output of http://www.astroscu.unam.mx/neutrones/NSCool/NSCool, and it is also very similar to the exact computation. § NEUTRINO ABSORPTION RATE As MURCA and its inverse is the dominant process of neutrino emission, their time-reversed versions n + n + ν_ℓ→ n + p + ℓ, n+p+ℓ+ν_ℓ→ n+n are among the primary absorption mechanisms in neutron star matter (the primary one for old neutron stars). 
The absorption rates can be directly related to the emission ones from the principle of detailed balance; once the emission rate Γ^ em_ν(ω_ν) for a neutrino with energy ω_ν is known, the absorption rate Γ^ abs_ν(ω_ν) can be found as Γ^ abs_ν(ω_ν)=Γ^ em_ν(ω_ν) e^ω_ν/T. Notice that there is a somewhat conventional choice in the definition of absorption rate; in a given environment, the evolution of the neutrino distribution is driven by the full collisional term (∂ f_ν/∂ t)_ coll=Γ^ em_ν(ω_ν)(1-f_ν)-Γ^ abs_ν(ω_ν) f_ν, which can be rewritten as (∂ f_ν/∂ t)_ coll=Γ^ em_ν(ω_ν)-Γ̃^ abs_ν(ω_ν) f_ν, where we defined an enhanced absorption rate Γ̃^ abs_ν(ω_ν)=Γ^ em_ν(ω_ν)+Γ^ abs_ν(ω_ν)=Γ^ em_ν(ω_ν) (1+e^ω_ν/T) which also accounts for the suppression in the inverse emission process caused by the Pauli exclusion principle. This is simply called the absorption rate in ; here for clarity we will adopt the more conventional definition Γ^ abs_ν(ω_ν). Thus, in the triangular approximation the absorption rate (for the neutron branch, which is the dominant one) is easily found to be Γ^ abs_ω_ν =2G^2g_A^2m_n^3m_pp_p/(2π)^7(ω_ν^2+9π^2T^2)(ω_ν^2+π^2T^2)/e^-ω_ν/T+1p_ℓ/μ_ℓ ×[16(f'-g)^2+(f_π/m_π)^47p_n^4/(p_n^2+m_π^2)^2(1-C_ρp_n^2+m_π^2/p_n^2+m_ρ^2)^2], which also corresponds to the emission rate with the substitution ω_ν→ - ω_ν. In Fig. <ref> we show the neutrino mean free path (MFP) for ω = T = 100 keV (red curves) and ω = T = 1 MeV (blue curves) with (solid) or without (dashed) the contribution from ρ exchange. We also fixed p_p = 85 (p_n/340 MeV)^2 MeV, T = 10^9 K, bare nucleon masses and standard values for the Landau parameters. The MFP exceeds by several orders of magnitude the typical radius of a NS for both cases, with or without the inclusion of a ρ meson. We also checked that the impact of Landau parameters in these conditions is very modest. § CONCLUSIONS The cooling of NSs via neutrino emission plays a fundamental role in their evolution, especially in its early stage. A clear understanding of its quantitative impact is therefore necessary to compare with the evolution of surface temperature and luminosity of isolated NSs. Obviously this process is directly affected by multiple uncertainties from nuclear physics, especially in regards to how the nucleon-nucleon interaction is modeled, both in vacuum and in the dense nuclear medium. However, the large nuclear physics uncertainties should not obscure the importance of the particle physics framework and approximations used to obtain the cooling rates. If anything, the existence of already large uncertainties in the nuclear physics sector should push us further into clarifying completely the particle physics aspects – i.e., phase-space and matrix element evaluation – of this process. In this work, we have moved from this push into a complete reevaluation of the cooling rates from nn bremsstrahlung, np bremsstrahlung, and MURCA processes. For all three processes, we have found a wide range of differences compared to the seminal treatment in . Generally, these differences seem to come mainly from the counting of different groups of diagrams, and the neglect of interference diagrams for certain processes. We have also folded in the suppression of the nucleon interaction potential at large momentum exchange, modeled as a rho-meson exchange, which was in principle present in  but argued to cancel with the interference diagrams. 
We do not find evidence for this cancellation, especially in the cooling rate as a function of density; rather, neglecting the rho-meson exchange provides an additional cause for overestimation of the cooling rates. Further, we have gone beyond the conventional triangular approximation, in which the Fermi momenta of protons and electrons are neglected compared to the ones of neutrons; while by itself this seems to be a 20-30% effect, piled up with the remaining differences it conspires to create significant discrepancies with the results from previous literature. We refer to our main text for a detailed discussion of each of the differences in treatment, and their impact, for each of the processes. Here we rather focus on the implications for the phenomenological studies of neutron star cooling. We find that the cooling rates for MURCA, the dominant process, can be even a factor 2 lower when integrated over the entire star, and even more for the space-dependent emission rate per unit volume. For the bremsstrahlung processes, which are subdominant, similar discrepancies are found. These differences are evaluated using the same short-range interactions as , modeled with Landau parameters which are however highly uncertain. Our analytical treatment is particularly suitable to evaluate the impact of these uncertainties, and flexible enough to be adapted to specific choices of the Landau parameters. These discrepancies come entirely from particle physics aspects; to maintain a consistent and fair comparison, we have stuck to the same framework as  for the modeling of the nucleon-nucleon interaction potential. A more refined treatment of this point would certainly be of interest, but we notice that NS cooling is anyway at present generally described following . The impact of these corrections on the full evolution of NSs due to their cooling needs to be assessed. To illustrate the significance of these order-one factors, we present in Fig. <ref> the time evolution of the luminosity for a NS of one solar mass, with APR EOS and without superfluidity, computed using NSCool. The dashed curve represents the result using the incorrect MURCA rate, while the solid curve shows the result with our corrected rate. We compare these theoretical curves with the properties of the isolated neutron star RX J1605.3 <cit.>. Notably, with the old rates, the APR EOS almost fails to match the observations within their error bars. However, using the correct MURCA emissivity leads to an excellent fit. Intriguingly, this can have an important impact beyond purely astrophysical questions. A precision treatment of NS cooling, compared with the time evolution of isolated NSs, is also the basis of powerful constraints on novel particle emission, and especially for the QCD axion <cit.>. In addition, the axion emission is evaluated in the same framework as the neutrino one, and therefore must also be reassessed. Thus, axion constraints would be directly affected by the results of this work. We will proceed with a reassessment of the bounds on the QCD axion in a forthcoming work <cit.>. § ACKNOWLEDGMENTS We wish to thank Georg Raffelt and Toby Opferkuch for detailed comments on the manuscript, Bengt L. Friman for instructive email exchanges about nucleon effective masses, and Joachim Kopp and Edoardo Vitagliano for helpful discussions. AC also thanks Stefan Stelzl for enlightening discussions on related topics. 
SB is supported by the Israel Academy of Sciences and Humanities & Council for Higher Education Excellence Fellowship Program for International Postdoctoral Researcher. DFGF is supported by the Villum Fonden under Project No. 29388 and the European Union’s Horizon 2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement No. 847523 “INTERACTIONS.” § EXPLICIT MURCA COMPUTATION Given the confusion in the literature and the various order one factors we corrected, we find it useful to report here an explicit computation. In particular, we proceed in computing everything in a non-relativistic framework; however, for the pion and ρ-meson exchange we also checked our results using a relativistic framework, computing the amplitude squared with FeynCalc 9.3.1 <cit.> and then performing a non-relativistic expansion of the final results. In this appendix we limit ourselves to MURCA, which is in any case the most relevant process for NS cooling. We start by considering a single Landau parameter as a simple jumping off point. Therefore, we take the following effective non-relativistic potential for nucleon-nucleon interaction V_g(𝐤) = g σ_1 ·σ_2, and we start computing the amplitude for the t-channel from the diagrams a) to c) in Fig. <ref>. For the diagram a) the amplitude reads ℳ_a = iGg/√(2)ℓ^μ/ωχ_3^†σ^j χ_1 χ_4^†τ^- (δ_μ 0 - g_A δ_μ iσ_i) σ^j χ_2, where in isospin space we have χ_4^† = [ 0 1 ], χ_2 = [ 1; 0 ] and τ^- = [ 0 0; 1 0 ]. So in this case the isospin structure gives just a trivial factor χ_4^†τ^- χ_2 = 1, so that the amplitude is simply ℳ_a = iGg/√(2)ℓ^μ/ωχ_3^†σ^j χ_1 χ_4^† (δ_μ 0 - g_A δ_μ iσ_i) σ^j χ_2. Analogously for the diagram b) we have ℳ_b = iGg/√(2)(-ℓ^μ/ω) χ_3^†σ^j χ_1 χ_4^†σ^j(δ_μ 0 - g_A δ_μ iσ_i) χ_2, where the minus sign of difference comes from the nucleon propagator (emission from an initial rather than final leg). The diagram c) gives null contribution, because χ_3^†τ^- χ_1 = 0, being nucleons 3 and 1 both neutrons. Therefore the total amplitude for the t-channel reads ℳ_t = ℳ_a + ℳ_b = iGg g_A/√(2)ℓ^i/ωχ_3^†σ^j χ_1 χ_4^†(σ^j σ^i - σ^iσ^j)χ_2 = √(2) Gg g_A l^i/ωϵ^ijkχ_3^†σ^j χ_1 χ_4^†σ^k χ_2, where in the last step we used the relation σ^j σ^i - σ^iσ^j = -2 i ϵ^ijk σ^k and where we notice that the vector part has canceled out. In the non-relativistic theory is then very easy to compute the amplitude squared summed over spins, which in this particular case reads ∑_ spins |ℳ_t|^2 = 16 G^2 g^2 g_A^2/ω^2ω_1ω_2 ϵ^ijkϵ^ij'k' Tr( σ^j σ^j') Tr(σ^k σ^k')_4 δ^jj'δ^kk' = 384 G^2g^2g_A^2/ω^2ω_1ω_2, where we used ∑_ spins l^il^j = 8 ω_1 ω_2 δ^ij for the leptonic current and in the last step we also used the relation of the Levi-Civita tensor ϵ^ijkϵ^ijk = 6. We observe that the contribution from the u-channel, corresponding to the diagrams d) to f) in Fig. <ref>, mirrors that of the t-channel, while the interference between the two channels cancels out when summing over spins, as terms with an odd number of Pauli matrices always vanish. Consequently, the final result for the amplitude squared of these Landau parameters is simply twice the value given in Eq. <ref> ∑_ spins |ℳ|^2 = 768 G^2g^2g_A^2/ω^2ω_1ω_2, in agreement with Eq. 39 of . We now pass to consider the OPE term in the nucleon-nucleon potential V(𝐤)^ OPE = h'_π,k (σ_1 ·) (σ_2 ·) τ_1 ·τ_2, where h'_π,k≡ - f_π^2/m_π^2k^2/k^2 +m_π^2. 
In this case the amplitudes for the t-channel diagrams a) and b) with neutral pion exchange read ℳ_a = iG ℓ^μ/√(2)ωχ_3^† h'_π,k (σ·) τ^a χ_1 χ_4^†τ^- [ δ_μ 0 - g_A δ_μ iσ^i] τ^a (σ·) χ_2, ℳ_b = iG/√(2)( -ℓ^μ/ω) χ_3^† h'_π,k (σ·) τ^a χ_1 χ_4^†τ^a (σ·) τ^- [ δ_μ 0 - g_A δ_μ iσ_i ] χ_2. One can verify that the isospin structure forces a =3 and that χ_4^†τ^- τ^3 χ_2 =1, while χ_4^†τ^3 τ^- χ_2 = - 1. Therefore we have ℳ_a + ℳ_b = √(2) i G l^μ/ωh'_π,kχ_3^† (σ·)χ_1 χ_4^†[ δ_μ 0(σ·) - 1/2g_A δ_μ i{σ^i, σ^j }^j]χ_2 = √(2) i G l^μ/ωh'_π,kχ_3^† (σ·)χ_1 χ_4^†[ δ_μ 0(σ·) - g_A δ_μ j^j]χ_2. For the diagram c) we have instead ℳ_c = iG/√(2)( -ℓ^μ/ω) h'_π,kχ_4^†τ^a (σ·) χ_2 χ_3^†τ^a (σ·) τ^- [δ_μ0 - δ_μ iσ_i] χ_1, and one can check that in this case the isospin structure is such that τ^1 and τ^2 contribute both in the same way. Therefore we are left with ℳ_c = -√(2) i G l^μ/ωh'_π,kχ_4^† (σ·) χ_2 χ_3^† (σ·) [δ_μ0 - δ_μ iσ_i] χ_1. Summing everything we notice that the vector part cancels and the final amplitude is ℳ^ OPE_t = √(2) i G g_A l^i/ω(f_π/m_π)^2k^2/k^2+m_π^2[_iχ_3^† (σ·)χ_1 χ_4^†χ_2 - χ_4^† (σ·) χ_2 χ_3^† (σ·)σ_i χ_1]= = √(2) i G g_A l^i/ω(f_π/m_π)^2k^2/k^2+m_π^2[_iχ_3^† (σ·)χ_1 χ_4^†χ_2 - χ_4^† (σ·) χ_2 χ_3^†χ_1 _i .   + . i ϵ^ijkχ_4^† (σ·) χ_2 χ_3^†σ_kχ_1 _j], where in the second line we used the relation σ_jσ_i = δ_ji - i ϵ^ijkσ_k. This amplitude coincides with Eq. 32 of . Squaring and summing over spins we get ∑_ spins |ℳ^ OPE_t |^2 = 16 G^2g_A^2 ω_1ω_2/ω^2(f_π/m_π)^4 (k^2/k^2+m_π^2)^2[4 Tr(σ^iσ^i') _i_i' +.   +.ϵ^ijkϵ^ij'k'Tr(σ^aσ^a')Tr(σ^jσ^j')_a_a'_j_j'] = 256 G^2g_A^2 ω_1ω_2/ω^2(f_π/m_π)^4(k^2/k^2+m_π^2)^2, which is half of the result in Eq. 39 of . The u-channel is the same with l instead of k ∑_ spins |ℳ^ OPE_u |^2 = 256 G^2g_A^2 ω_1ω_2/ω^2(f_π/m_π)^4(l^2/l^2+m_π^2)^2; in triangular approximation k ∼ l ∼ p_n, so that the two contributions are the same. This explains the factor of 2 in Eq. 39 of , which therefore takes into account both t and u channels, neglecting the interference between the two channels. The latter can be easily computed as follows 2 ∑_ spinsℳ^ OPE_t ℳ^ OPE, †_u = -32 G^2g_A^2 ω_1ω_2/ω^2h'_π,kh'_π,l(_iχ_3^† (σ·)χ_1 χ_4^†χ_2 - χ_4^† (σ·) χ_2 χ_3^†χ_1 _i + +i ϵ^ijkχ_4^† (σ·) χ_2 χ_3^†σ_kχ_1 _j)(_iχ_3^† (σ·)χ_2 χ_4^†χ_1 - χ_4^† (σ·) χ_1 χ_3^†χ_2 _i + i ϵ^ijkχ_4^† (σ·) χ_1 χ_3^†σ_kχ_2 _j)^† = -32 G^2g_A^2 ω_1ω_2/ω^2h'_π,kh'_π,l(ϵ^ijkϵ^ij'k'_δ^jj'δ^kk'-δ^jk'δ^j'kTr[ (σ·) σ_k'σ_k (σ·) ]_j _j' + - 2 i _i _j'ϵ^ij'k'Tr[ (σ·)(σ·)σ_k'] + 2 i _i _j'ϵ^ij'k'Tr[ (σ·)σ_k'(σ·)] ) = 64 G^2g_A^2 ω_1ω_2/ω^2(f_π/m_π)^4(k^2/k^2+m_π^2)(l^2/l^2+m_π^2)[-3 + (·)^2] , which coincides with the third term in Eq. 71 of upon using the Lagrange's identity for their cross-product squared. bibi
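As a quick cross-check of the spin algebra quoted above, the following short script evaluates 16 ε^{ijk}ε^{ij'k'} Tr(σ^jσ^j') Tr(σ^kσ^k') numerically and reproduces the coefficient 384, assuming the stated leptonic spin sum 8 ω_1 ω_2 δ^{ij}.

# Numerical cross-check of the Pauli-matrix / Levi-Civita algebra behind the
# coefficient 384 in the single-Landau-parameter amplitude squared.
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])            # Pauli matrices
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0       # Levi-Civita tensor

total = 0.0
for i in range(3):
    for j in range(3):
        for k in range(3):
            for jp in range(3):
                for kp in range(3):
                    total += (eps[i, j, k] * eps[i, jp, kp]
                              * np.trace(sigma[j] @ sigma[jp]).real
                              * np.trace(sigma[k] @ sigma[kp]).real)

print(16 * total)   # -> 384.0, reproducing the coefficient in the text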
http://arxiv.org/abs/2406.19107v1
20240627113427
FDLite: A Single Stage Lightweight Face Detector Network
[ "Yogesh Aggarwal", "Prithwijit Guha" ]
cs.CV
[ "cs.CV" ]
tight_itemize 0000 empty FDLite: A Single Stage Lightweight Face Detector Network Yogesh Aggarwal Indian Institute of Technology Guwahati Assam India yogesh_aggarwal@iitg.ac.in Prithwijit Guha Indian Institute of Technology Guwahati Assam India pguha@iitg.ac.in =============================================================================================================================================================================================== § ABSTRACT Face detection is frequently attempted by using heavy pre-trained backbone networks like ResNet-50/101/152 and VGG16/19. Few recent works have also proposed lightweight detectors with customized backbones, novel loss functions and efficient training strategies. The novelty of this work lies in the design of a lightweight detector while training with only the commonly used loss functions and learning strategies. The proposed face detector grossly follows the established RetinaFace architecture. The first contribution of this work is the design of a customized lightweight backbone network (BLite) having 0.167M parameters with 0.52 GFLOPs. The second contribution is the use of two independent multi-task losses. The proposed lightweight face detector (FDLite) has 0.26M parameters with 0.94 GFLOPs. The network is trained on the WIDER FACE dataset. FDLite is observed to achieve 92.3%, 89.8%, and 82.2% Average Precision (AP) on the easy, medium, and hard subsets of the WIDER FACE validation dataset, respectively. § INTRODUCTION Face detection is an essential first step for several computer vision applications like face tracking, face recognition, gender classification and emotion recognition. Its primary objective is the precise localization of face region(s) within an image. Challenges arise particularly in dense crowds (small faces) and adverse conditions such as variations in face pose, low lighting, occlusions, and poor image quality (blur). An optimal face detection system should be able to localize faces in images with high accuracy while operating at low computational costs. Traditional face detection techniques relied on hand-crafted features along with sliding window techniques <cit.>. Among these, the Viola-Jones face detector <cit.> have been widely used. Most state-of-art face detection systems are benchmarked on the widely used WIDER FACE dataset <cit.>. This dataset includes images with various challenging scenarios including blur, pose variations, illumination changes, small faces, and occlusions. Accordingly, the face images are also annotated into easy, medium, and hard categories. Notably, even on the easy subset of the WIDER FACE dataset, the Viola-Jones detector achieves an Average Precision of 41.2%. This is significantly lesser than the performance of MTCNN (one of the earlier deep network-based proposals), which achieves 85.1% on the easy subset. Recent face detection methodologies have leveraged deep learning frameworks for increased precision over traditional methods. These approaches have utilized diverse convolutional neural network (CNN) structures to extract visual features, have incorporated attention modules and improved detection mechanisms. These advancements have yielded substantially improved results on benchmark datasets such as WIDER FACE. Examples of these systems include cascade CNN <cit.>, RCNN series <cit.>, single-shot face detectors <cit.>, and RetinaFace <cit.>. 
These face detection systems draw inspiration from the recent advancements in deep learning-based generic object detection methods <cit.>. Nevertheless, the performance improvement has led to increased computational demands (FLOPs) for employing these face detectors. This heavy computational requirement arises from utilizing conventional CNN backbones such as ResNet50/101/152 <cit.>, VGG16 <cit.>, and DenseNet121 <cit.>. Such heavy computation cost makes it hard to deploy such systems for real time applications, especially involving edge devices. Consequently, researchers have focused on the development of lightweight face detection systems. Existing works have proposed efficient face detection systems by employing lightweight feature extractor backbones such as the MobileNetV1 <cit.> series, ShuffleNetV2 <cit.> series, and others. Additionally, several face detection methods have emerged through design with the help of customized backbones <cit.>. These face detection systems have achieved significantly higher accuracy than traditional methods in crowded environments while slightly trailing behind the computation intensive face detectors. An efficient face detector for real-time applications on edge devices needs to operate with low computation costs without sacrificing accuracy. Accordingly, this work aims to reduce the floating point computations in the network without significantly compromising the face detection accuracy. The proposed face detector FDLite is motivated by the RetinaFace architecture <cit.>. It consists of a customized lightweight backbone network (BLite), feature pyramid network (FPN), cascade context prediction modules (CCPM), and detector head (D). Specifically, this work contributes the lightweight customized backbone BLite and the use of two independent multi-task losses. The proposed face detector FDLite is found to provide competitive (or better) performance against 11 state-of-art approaches. The major contributions of this work are as follows: * Proposal of a customized backbone network BLite with 0.167M parameters and 0.52 GFLOPs. * The use of two independent multi-task losses in the detector head. * A lightweight face detector network FDLite with 0.26M parameters and 0.94 GFLOPs. It achieves Average Precision (AP) scores of 92.3%, 89.8%, and 82.2% on the easy, medium, and hard subsets of the WIDER FACE validation dataset. § RELATED WORK Several existing deep network based face detectors <cit.> are known for high performance but they operate with high computation cost (Table <ref>). Researchers have also proposed several lightweight face detectors with accuracies higher than the classical approaches. This work focuses on the design of lightweight face detectors. Accordingly, only lightweight face detectors are briefly reviewed next. The cascade CNN based face detectors <cit.> are considered as lightweight ones due to their low computational requirements. In this framework, candidate windows are initially generated across the input image. A cascade of networks classify these candidate windows as either face or non-face and simultaneously perform bounding box regression while discarding the irrelevant ones. The face prediction is progressively refined through this network cascade. The MTCNN  <cit.> is the most popular among these approaches. The development of single-stage object detection frameworks (such as SSD <cit.> and RetinaNet <cit.>) led to the proposals of single-stage face detectors <cit.> with specific architectural modifications. 
However, these face detectors utilized computation-intensive backbone networks. Consequently, several lightweight face detection systems have been devised, employing customized backbones like LFFD  <cit.> and FaceBoxes <cit.> (shown in <ref>). In FaceBoxes, the incorporation of Rapidly Digested Convolutional Layers (RDCL) facilitated real-time face detection on the CPU, while the integration of Multiple Scale Convolutional Layers (MSCL) allowed for handling faces of various scales by enriching receptive fields. Additionally, a novel anchor densification strategy was introduced to enhance the recall rate of small faces. Meanwhile, LFFD <cit.> introduced a novel customized backbone and presented a receptive field (RF) anchor-free strategy aimed at overcoming the limitations associated with previous anchor-based <cit.> ones. At that time, these networks <cit.> achieved the best accuracy in the lightweight face detector category (greater than 70% AP on the hard subset of the WIDER FACE validation dataset) with less than 10 GFLOPs (shown in <ref>). The emergence of classification networks such as MobileNetV1 and V2 <cit.> is notable. These networks utilize techniques like depth-wise separable convolution and inverted bottleneck blocks. This development has led to the creation of lighter versions of backbone networks like MobileNetV1x0.25 <cit.>. After the introduction of lightweight backbone networks, RetinaFace <cit.> and Progressiveface <cit.> integrated lighter adaptations of MobileNetV1 (MobileNetV1x0.25). These networks <cit.> achieved top accuracy in the lightweight face detection segment, with approximately 88% AP on the hard set of the WIDER FACE validation dataset, all within a computation of less than 1.5 GFLOPs. A face detector based on YOLOv5 architecture <cit.> (YOLOv5n0.5) introduced a novel face detection model by employing a lighter variant of the ShuffleNetV2 network (ShuffleNetV2X0.5) <cit.>. This network utilized only 0.56 GFLOPs but archived approximately 73% AP on the hard set of the WIDER FACE validation dataset (shown in <ref>). Recently, the face detector EResFD <cit.> achieved the lowest computation cost while maintaining good accuracy (80.43% AP) on the hard set of the WIDER FACE <cit.> validation subset, albeit exhibiting degraded accuracy on Easy and Medium subsets (as shown in <ref>) (less than 90% AP). Efforts to reduce face detectors persist, but lightweight versions remain critical for edge devices, aiming for lower GFLOPs while maintaining accuracy across various faces of different sizes. § PROPOSED WORK The proposed face detector FDLite is motivated by the design of RetinaFace <cit.>. Accordingly, FDLite has the following key components – (a) a customized backbone (BLite) network (Subsection <ref>) for extracting image features, (b) a Feature Pyramid Network (FPN) <cit.>, (c) Cascade Context Prediction Modules (CCPM) <cit.>, and (d) the Detector Head (D). The customized backbone BLite (Subsection <ref>) is utilized for spatial feature extraction from input image 𝐈 (of size w× h× 3). BLite is pre-trained with the ImageNet1K dataset <cit.>. The Feature Pyramid Network FPN accepts spatial feature maps from intermediate convolutional layers of BLite to provide enhanced feature maps 𝐏_i (i ∈{1,2,3}) of different spatial resolutions. The FPN enriches semantic information by enhancing the edges and corners while bringing out the structural characteristics of face outlines <cit.>. 
The three feature map outputs of FPN (𝐏_𝐢, i∈{1,2,3}) are processed through their corresponding Cascade Context Prediction Modules CCPM^u_i (u ∈1,2). The first module CCPM^1_i receives 𝐏_i as input. The output of CCPM^1_i is provided as input to CCPM^2_i. The CCPM enhances the capability to detect smaller facial features. Subsequently, the refined feature maps obtained from CCPM^1_i and CCPM^2_i are integrated into the corresponding detector head D_i. Each detector head consists of the following three sub-networks for – (a) face classification task, (b) face bounding box localization, and (c) five facial landmark detection. §.§ BLite: The Customized Backbone A major contribution of this work is the proposal of the customized backbone BLite (shown in Figure fig:Backbone). The input image 𝐈 (of size w × h × 3) is first processed by an Initial Feature Extractor (IFE) layer to generate an initial feature tensor 𝐂_in∈ℝ^w/4×h/4× k_in. The IFE layer consists of a cascade of one convolutional unit CBL(m× n× k@q;s,p,g) and two bottleneck units CDw(k,q,s). Here, CBL(m× n× k@q;s,p,g) refers to the application of q number of m × n × k convolution kernels with stride s, padding p, and group convolution parameter g (Notably, g=1 signifies no group convolution) followed by batch normalization, and LeakyRelU activation. CDw(k,q,s) consists of two CBL units in cascade – CBL(1 × 1 × k@q;1,0,1) followed by CBL(3 × 3 × q@q;s,1,q). Here, k is the channel dimension of the input feature map, and q is that of the output feature map. Note that only the second CBL unit employs a group convolution with g = q with stride s. The initial feature tensor 𝐂_in is further refined through three layers L_1, L_2 and L_3. The feature tensor 𝐂_i∈ℝ^w/2^i+1×h/2^i+1× k_i is processed by the layer L_i to produce 𝐂_i+1∈ℝ^w/2^i+2×h/2^i+2× k_i. The CBL blocks and Feature Refinement Units (FRU) are connected in cascade within each layer L_i. The design of FRU is motivated by that of inception module <cit.> and has residual connections <cit.> between input and output. The FRU with an input feature tensor of k channels is designated by FRU(k). It does not change the input feature tensor's spatial resolution and channel dimension. The network also uses Max-Pooling units to reduce the spatial resolution of the feature tensors. A m × n Max-Pooling unit with stride s and padding p is denoted as MP(m× n;s,p). The FRU module processes the input feature map using convolution kernels at multiple scales. This allows the network to discern patterns across different resolutions. The resulting features are amalgamated via depth concatenation (refer to Figure <ref>). Initially, the feature map F_in undergoes convolution with LeakyReLU activation. This is refered as CL(3× 3× k_in@k_in;1,1,1)[CL(m× n× k@q;s,p,g) denotes a convolution operation utilizing q number of m × n × k kernels with a stride s, padding p, and a group convolution parameter g (where g=1 indicates no group convolution).]. Subsequently, the output of CL(3× 3× k_in@k_in;1,1,1) serves as input to two convolutional layers, namely CL(3× 3× k_in@k_in/2;1,1,1) and CL(1× 1× k_in@k_in/2;1,0,1). The outputs of these layers are then concatenated along the channel dimensions. This concatenated feature map is refined by a CL(3× 3× k_in@k_in;1,1,1) convolutional layer. Finally, a residual connection is established by adding the initial feature map F_in to the refined feature map, thereby addressing the vanishing gradient issues. 
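To make the block notation above concrete, the following PyTorch sketch shows one possible realization of the CBL, CDw, CL and FRU units. It is an illustration rather than the authors' released implementation; in particular, the LeakyReLU negative slope, the bias settings, and the dummy input size used in the shape check are assumptions.

```python
import torch
import torch.nn as nn

def CBL(k, q, kernel=3, stride=1, padding=1, groups=1):
    """CBL(m x n x k @ q; s, p, g): convolution -> batch norm -> LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(k, q, kernel, stride, padding, groups=groups, bias=False),
        nn.BatchNorm2d(q),
        nn.LeakyReLU(0.1, inplace=True),   # negative slope assumed
    )

def CDw(k, q, s=1):
    """CDw(k, q, s): 1x1 pointwise CBL followed by a 3x3 depthwise CBL (g = q, stride s)."""
    return nn.Sequential(
        CBL(k, q, kernel=1, stride=1, padding=0),
        CBL(q, q, kernel=3, stride=s, padding=1, groups=q),
    )

def CL(k, q, kernel=3, stride=1, padding=1):
    """CL(...): plain convolution followed by LeakyReLU (no batch norm)."""
    return nn.Sequential(
        nn.Conv2d(k, q, kernel, stride, padding),
        nn.LeakyReLU(0.1, inplace=True),
    )

class FRU(nn.Module):
    """Feature Refinement Unit FRU(k): multi-scale branches, channel concatenation,
    refinement, and a residual connection; spatial size and channels are unchanged
    (k is assumed even so the two k/2 branches concatenate back to k channels)."""
    def __init__(self, k):
        super().__init__()
        self.stem = CL(k, k, kernel=3, padding=1)
        self.branch3x3 = CL(k, k // 2, kernel=3, padding=1)
        self.branch1x1 = CL(k, k // 2, kernel=1, padding=0)
        self.refine = CL(k, k, kernel=3, padding=1)

    def forward(self, x):
        y = self.stem(x)
        y = torch.cat([self.branch3x3(y), self.branch1x1(y)], dim=1)
        return x + self.refine(y)

# Shape check for the IFE cascade CBL(7x7x3@8;2,3,1) -> CDw(8,16,1) -> CDw(16,32,2)
ife = nn.Sequential(CBL(3, 8, kernel=7, stride=2, padding=3), CDw(8, 16, 1), CDw(16, 32, 2))
print(ife(torch.randn(1, 3, 640, 640)).shape)   # torch.Size([1, 32, 160, 160]) = w/4 x h/4 x 32
```

The depthwise group convolution (g = q) in the second CBL of each CDw unit is what keeps the parameter count and GFLOPs of BLite low relative to a standard convolution of the same width.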
The proposed backbone BLite consisting of the IFE and three layers (L_1, L_2, L_3) (Figure <ref>) is described as follows[The details of the BLite backbone in terms of number of parameters and floating point operations are presented in Table 1 of the supplementary material.]. Initial Feature Extractor (IFE) – It has a cascade of one CBL unit CBL(7× 7× 3@8;2,3,1) along with two CDw units (CDw(8,16,1) and CDw(16,32,2)). The output of IFE (𝐂_in∈ℝ^w/4×h/4× 32) is fed as input to L_1. Layer 1 (L_1) – It has a cascade of one CBL unit (CBL(3× 3× 32@64;2,1,1)), two FRU units (2 × FRU(64)), one CDw unit (CDw(64,64,1)) and another FRU unit (FRU(64)). The output of L_1 (C_1 ∈ℝ^w/8×h/8× 64) is fed as input to L_2 and FPN. Layer 2 (L_2) – It has a cascade of one CBL unit (CBL(3× 3× 64@128;2,1,1)), two FRU units (2 × FRU(128)), one CDw unit (CDw(128,128,1)) and another FRU unit (FRU(128)). The output of L_2 (C_2 ∈ℝ^w/16×h/16× 128) is fed as input to L_3 and FPN. Layer 3 (L_3) – It has a cascade of one max-pooling (MP(3× 3;2,1)) along with three CDw units (CDw(128,128,1), CDw(128,256,1), and CDw(256,256,1)). The output of L_3 (C_3 ∈ℝ^w/32×h/32× 256) is fed as input to FPN. The feature map 𝐂_i obtained from layer L_i of BLite is fed to the FPN to get an enhanced feature 𝐏_i. It is further refined through the CCPM modules (CCPM_i^1 and CCPM_i^2) whose output is fed to the detector head D_i. §.§ Detector Head The i^th detector head D_i consists of the following three sub-networks. First, a classification sub-network (CLS_i) trained with cross-entropy loss to differentiate between faces and non-faces. Second, a sub-network responsible for determining the coordinates of the face bounding boxes. This is known as the bounding-box regression head (BBOX_i) and is trained using the SmoothL1 loss <cit.>. Third, a sub-network dedicated to the localization of five facial landmark coordinates of detected faces. This is named the landmark regression head (LANDM_i) and is trained by using the SmoothL1 loss. The consolidated output from each task-specific sub-networks (CLS_i, BBOX_i and LANDM_i) across all detection layers (D_i) generates a single tensor after reshaping and vertical concatenation operation (C_v). These tensors (CLS, BBOX, and LANDM) are subsequently used for training the network for corresponding task-specific loss function (as shown in <ref>). §.§ Multi-task losses Building on prior anchor-based detectors <cit.>, the goal is to optimize the detection objective by concurrently classifying and regressing anchor boxes, along with landmark point regression. This entails minimizing a multi-task loss for each anchor, denoted as j: ℒ_u = ℒ_cls^u(p_j,p̂_j) + λ_1p_jℒ_box^u(t_j,t̂_j) + λ_2p_jℒ_landm^u(l_j,l̂_j) ℒ^u_cls, ℒ^u_box, and ℒ^u_landm represent the face classification loss (associated with the detector head CLS), bounding box regression loss (associated with the detector head BBOX), and landmark regression loss (associated with the detector head LANDM), respectively. The classification loss function ℒ^u_cls(p_j, p̂_j) compares actual label p_j of the anchor point j and predicted probability p̂_j. If the anchor point is a positive example of a face, p_j is set to 1, and otherwise set to 0. The binary cross-entropy is used to compute classification loss ℒ^u_cls. The face bounding box regression loss for the j^th positive anchor is denoted as ℒ^u_box(t_j, t̂_j) <cit.>. 
The variables t_j={t_x, t_y, t_w , t_h} and t̂_j={t̂_x, t̂_y, t̂_w,t̂_h} represent the {center-abscissa, center-ordinate, width, height} of the ground-truth bounding box and predicted bounding box respectively. This work uses the bounding box regression loss proposed in <cit.>. The landmark regression loss ℒ^u_landm(l_j, l̂_j) is similar to ℒ^u_box <cit.> with five landmark points. Here, l_j={(l^x1_j, l^y1_j), … (l^xm_j, l^ym_j), … (l^x5_j, l^y5_j) } and l̂_j={(l̂^x1_j, l̂^y1_j), … (l̂^xm_j, l̂^ym_j), … (l̂^x5_j, l̂^y5_j) } are the respective coordinates of ground-truth and predicted facial landmark points. Facial landmark regression employs a target normalization approach based on the anchor center, which is similar to the bounding box center regression. This work uses the landmark regression loss proposed in <cit.>. The FDLite face detector employs the sliding anchor technique <cit.> for multi-task learning, wherein a predefined set of bounding boxes (referred to as anchor boxes) of various scales are systematically slided across an image. These anchor boxes serve as reference templates to cover faces of different sizes and aspect ratios. Employing the sliding anchor technique enhances the recall rate of face detection. The proposed detector FDLite utilizes two independent multi-task losses (ℒ_u, u ∈{1,2}) to facilitate multi-level <cit.> face classification and face localization in an end-to-end framework. The output feature map of CCPM^1_i is fed as input to D_i and the resulting tensors are used for computing the multi-task loss ℒ_1. Similarly, the output feature map of CCPM^2_i is fed as input to D_i, and the resulting tensors are used for computing the multi-task loss ℒ_2. The combination of these two losses yields a more precise face prediction. Here, the first multi-task loss ℒ_1 predicts the bounding boxes using regular anchor selection techniques <cit.>. The second multi-task loss ℒ_2 refines these classification and regression predictions. However, in this study, both multi-task loss functions independently employ regular anchor selection techniques during training. Despite utilizing the same detector head (D_i), the input to the detector head differs: for multi-task loss ℒ_1, it is sourced from CCPM^1_i, whereas for multi-task loss ℒ_2, it comes from CCPM^2_i. The proposed framework utilizes multi-task learning for whole network optimization, which integrates several tasks into a unified model. So finally, the combined multi-task losses, ℒ_1 and ℒ_2, are minimized for any given training anchor j (as elaborated in <ref>). ℒ_Total = ℒ_1 + ℒ_2 § EXPERIMENTAL SETUP Baseline Models – The performance of FDLite is compared against 11 state-of-art models. These are RetinaFace-Lite <cit.>, Progressiveface <cit.>, SCRFD-10DF <cit.>, MTCNN <cit.>, Faceboxes-3.2x <cit.>, LFFDv1 <cit.>, LFFDv2 <cit.>, YOLOv5s  <cit.>, YOLOv5n <cit.>, YOLOv5n0.5 <cit.> and EResFD <cit.>. Table <ref> presents the comparative performance analysis results. Datasets – FDLite is tested using two standard datasets – WIDER FACE <cit.> and FDDB <cit.>. FDLite is trained and validated using the WIDER FACE dataset and FDDB is only used for testing. A multi-scale testing strategy is used to evaluate the results on WIDER FACE <cit.>, whereas the original images are used for the evaluation on FDDB. The WIDER FACE dataset includes 32,203 images with 393,703 annotated bounding boxes outlining faces. 
These images were randomly sampled from 61 diverse scene categories, presenting various challenges such as pose, scale, occlusion, expression, and illumination variations. The dataset is split into train, validation, and test subsets, comprising 12,883, 3,226, and 16,094 images, respectively. Moreover, five facial landmark points <cit.> are utilized during training. Conversely, the FDDB dataset consists of 2,845 images with 5,171 annotated bounding boxes delineating faces, capturing variations in poses and occlusions. Anchor Setting – At each detection layer (i ∈1, 2, 3), three distinct anchor sizes are employed at every location in the input image. The anchor sizes are determined relative to the original image size as 2^ia_i, 3/2×2^ia_i, and 2^i+1a_i. Here, a_i = 4*2^i represents the down-sampling factor of each detection layer. These anchors maintain a 1:1 aspect ratio, covering areas ranging from 16×16 to 512×512 pixels in the input image. In the training phase, anchors are classified based on their overlap with ground-truth boxes, using the intersection over union (IoU) metric. In the case of multi-task loss ℒ_1, anchors surpassing an IoU threshold of 0.7 are labeled as face anchors, while those falling below 0.3 are classified as non-face anchors, with other anchors disregarded during training. Conversely, for multi-task loss ℒ_2, anchors exceeding a threshold (here set to 0.35) are designated as face anchors, while the rest are labeled as background or negative. Notably, most anchors (over 99%) are classified as negative. To mitigate the substantial imbalance between positive and negative examples during training, online hard example mining (OHEM) <cit.> is employed in both multi-task losses. This involves sorting negative anchors based on their loss and selecting the highest-ranking ones. This selection process ensures that the ratio between negative and positive samples is maintained at a minimum of 7:1 in both multi-task losses <cit.>. Training Details – The proposed face detector is trained using the SGD optimizer, starting with a learning rate of 1×10^-3, a momentum factor of 0.9, and a weight decay of 5×10^-4. The training is conducted over 130 epochs, while the learning rate is reduced by a factor of 10 at epochs 100 and 120. The training process utilized NVIDIA Tesla V100 GPUs with a batch size of 8. Testing Details –The performance of FDLite on the WIDER FACE dataset is computed by following standard evaluation procedures <cit.>. For testing on WIDER FACE, we follow the standard practices of [36, 68] and employ flip as well as multi-scale (the short edge of the image at [500, 800, 1100, 1400, 1700]) strategies. The face confidence scores are acquired for all anchors through the classification sub-networks within the detector head. Subsequently, anchors with confidence scores surpassing the threshold of 0.02 are chosen for the face detection process. Finally, the non-maximum suppression (NMS) algorithm is applied, using a Jaccard overlap of 0.4<cit.>. This algorithm generates the final results by selecting the top 750 highly confident detections for each image <cit.>. § RESULTS AND DISCUSSION This section provides a comprehensive assessment of the proposed face detector FDLite. The effectiveness of FDLite is assessed by comparing its performance with state-of-the-art models using the WIDER FACE and FDDB benchmark datasets. Additionally, an ablation analysis is presented to study the impact of different model components. 
Results on WIDER FACE Dataset – The performance of the proposed face detector is compared against 11 baseline algorithms (Section <ref>). The following observations can be made from the results presented in Table <ref>. * FDLite achieves the respective average precision (AP) scores of 92.3%, 89.9%, and 82.1% on Easy, Medium, and Hard subsets of the WIDER FACE validation dataset. * FDLite outperforms all baseline face detection frameworks, with the exception of ProgressiveFace <cit.>) in terms of performance on the hard subset of the WIDER FACE validation dataset while maintaining lower floating point operations (GFLOPs) and network size (parameters in millions). * FDLite has lesser floating point operations (in GFLOPs) compared to all baseline face detectors except EResFD and YOLOv5n0.5. However, FDLite outperforms both EResFD and YOLOv5n0.5 in terms of mAP (Table <ref>). * FDLite has lesser parameters compared to all baseline face detectors except for EResFD. Nonetheless, FDLite notably outperforms EResFD and YOLOv5n0.5 in terms of mAP (Table <ref>). The FDLite face detector achieved competitive (or better) performance (average precision of 92.3%, 89.9% and 82.2% on easy, medium, and hard subsets of the WIDER FACE validation dataset) with only 0.94G FLOPs and 0.24M parameters with respect to the state-of-art models. Results on FDDB Dataset – FDLite undergoes assessment on the FDDB dataset without additional training to showcase its effectiveness across diverse domains. With 1,000 false positives, FDLite achieves a TPR of 97.86%, a performance comparable to existing methods. Ablation Study – The following ablation analysis experiments are performed to study the effect of different model components. * Effect of pre-trained backbone – Employing the BLite pre-trained backbone (trained on ImageNet1K dataset) with the FDLite face detector resulted in performance improvements across all four versions (Refer to Table <ref>). Approximately 1% , 1% and 2% respective performance improvements were observed on the easy, medium, and hard subsets of the WIDER FACE validation set. * Effect of CCPM module – Ablation experiments showed that substituting SSH <cit.> with CCPM resulted in slight accuracy improvements across the easy and medium subsets of the WIDER FACE validation set. Additionally, there was a 0.5% increase in accuracy for the hard subset. This trend persisted across configurations using either single or dual multi-task losses. * Effect of two multi-task loss – This ablation experiment examines the effect of employing two multi-task losses on FDLite's performance. Integrating them with the SSH module resulted in slight performance improvements. Approximately 0.5%, 1% , 2% respective improvements were noted on the easy, medium and hard subsets of the WIDER FACE validation dataset. However, the improvements were more significant when the two multi-task losses were used with the CCPM module. Notably, around 2% performance improvement was observed solely on the hard subset of the WIDER FACE validation set, while the performance on other subsets remained unchanged. Qualitative Performance Analysis – Figure <ref> shows the results of face detection in images involving challenging scenarios like occlusions, blur and small faces. These face detection results highlight the effectiveness of the proposed face detector FDLite in overcoming commonly encountered face detection challenges. 
§ CONCLUSION This work presented a lightweight face detector FDLite (0.24M parameters and 0.94 GFLOPs) with a novel customized backbone BLite (0.167M parameters and 0.52 GFLOPs). It applied two independent multi-task losses in the face detector heads. The proposal was validated on two standard datasets (WIDER FACE and FDDB) and benchmarked against 11 state-of-the-art approaches. The proposal achieved competitive accuracy with a much lesser number of network parameters and floating point operations. This work has focused on reducing the number of network parameters and computations by designing a customized backbone while using standard loss functions and training strategies. Thus, it can be extended by exploring novel loss functions and learning strategies for increasing performance without increasing network complexity. ieee
http://arxiv.org/abs/2406.18878v1
20240627041221
Gluonic contributions to the pion parton distribution functions
[ "Jiangshan Lan", "Chandan Mondal", "Xingbo Zhao", "Tobias Frederico", "James P. Vary" ]
hep-ph
[ "hep-ph", "nucl-th" ]
1,2,3]Jiangshan Lan jiangshanlan@impcas.ac.cn 1,2,3]Chandan Mondal mondal@impcas.ac.cn 1,2,3]Xingbo Zhao xbzhao@impcas.ac.cn 4]Tobias Fredericocor1 tobias@ita.br iowa]James P. Vary jvary@iastate.edu [1]Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China [2]School of Nuclear Physics, University of Chinese Academy of Sciences, Beijing, 100049, China [3]CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China [4]Instituto Tecnológico de Aeronáutica, DCTA, 12228-900 São José dos Campos, Brazil [iowa]Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA [cor1]Corresponding author § ABSTRACT We investigate the role of a dynamical gluon in the pion within the Basis Light-Front Quantization (BLFQ) framework and compare it with the solution of the Minkowski space Bethe-Salpeter equation, focusing on contributions beyond the valence state. Particularly in BLFQ, we identify the effect on the pion structure of the dynamical chiral symmetry breaking by the enhancement of the spin-flip matrix element, through the |qq̅g⟩ component of the light-front wave function and associated gluon parton distribution function (PDF). We explicitly show an enhancement of the low-x contribution in the quark PDF associated with the large spin-flip matrix element, necessary to provide the π-ρ mass splitting. Pion, Light-Front dynamics, Quark and Gluon Distributions § INTRODUCTION The light-front representation of the hadron <cit.> carries the full complexity of Quantum Chromodynamics (QCD), with their constituents, namely dressed quarks and gluons strongly interacting to build an eigenstate of a mass squared operator. Such a description implies the dynamical coupling of an infinite set of Fock components forming the hadron eigenstate. In practice a truncated Light-front Fock-space is adopted as a hadron within Basis Light-Front Quantization (BLFQ) <cit.>. To date, the valence and valence plus one gluon states were coupled to describe, e.g. light-mesons <cit.>, where the confinement is introduced in the squared mass operator acting on the valence sector and the |qq̅ g⟩ originates from the coupling with the qq̅ sector through the off-diagonal matrix elements of the QCD LC Hamiltonian <cit.>, in such a way that the gluon plays an important dynamical role. Indeed, in the pion case the qq̅ and qq̅ g sectors each carry about 50% of the total normalization <cit.>, indicating the important role of the dynamical dressed gluon, even in the presence of a confining interaction acting in the valence channel. We note that dynamical symmetry breaking should involve the coupling with an infinite number of Fock-states to dress the constituent quarks and at the same time provide the large splitting between the nearly massless pion (the Goldstone boson) and the rho meson. In this connection, the coupling of the valence state with the higher Fock-components can be cast into an effective interaction, as indicated by the “Iterative Resolvent method" <cit.>. This effective interaction has been examined in Ref. <cit.>, where it was proposed to enhance the spin flip matrix element of the effective quark-gluon coupling QCD LF-Hamiltonian by introducing a large effective quark vertex mass (m_f). This mechanism has been implemented with success in BLFQ to split the pion and rho meson masses <cit.>. 
We should point out that the relevance of the higher Fock components of the pion state has been recently addressed phenomenologicaly with a new parametrization containing qq̅, qq̅qq̅, qq̅ g and qq̅ gg components in Ref. <cit.>, where those components were fitted simultaneously to describe the experimental data on the pion PDFs <cit.> and electromagnetic form factor. On the other hand, four-dimensional field theoretical approaches within the Dyson-Schwinger (DS) and Bethe-Salpeter (BS) frameworks describe the pion as the Goldstone boson originated by the spontaneous breaking of the chiral symmetry in the light-quark sector. These non-perturbative frameworks dress the light-quarks in the SU(3) flavor sector and split the pion and the rho meson as well as the kaon and K^* meson (see e.g. <cit.>). Those approaches have been formulated in Euclidean space, and the connection to the LF Fock expansion for any meson is not direct and demands some elaboration to access the parton distribution (see e.g. <cit.>, as well as in Lattice QCD <cit.>). However, the solution of the pion Bethe-Salpeter equation (BSE) in Minkowski space, like the one developed in Ref. <cit.> with massive constituent quarks and gluons, allows access to the valence component as well the inclusive contribution of the higher-Fock components to structure observables <cit.>. It was found that the pion admits a significant contribution from higher LF Fock-components with the valence carrying 70% of the normalization. The BSE in ladder approximation allows the coupling of the valence state with an infinite set of Fock-components (see e.g. <cit.>). So far, one can only separate out the valence component of the wave function from the BS amplitude, by eliminating the relative LF time through the integration on the LF energy, leaving only the longitudinal momentum fraction and transverse momentum, which characterize the arguments of valence wave function (see e.g. the detailed discussion in Refs. <cit.>). Ideally, BLFQ and DS/BS frameworks applied to QCD should lead to the same results for the physical observables of a given hadron. However, truncations and insertions of confinement in different ways in the these approaches will, on their own, lead to different answers. Furthermore, in BLFQ the enhancement of the spin-flip matrix element is expected to have a distinctive hallmark on the pion structure at low-x from the contribution of the |qq̅ g⟩ Fock-component. In this context, our aim in this paper is twofold: first, explore both the quark and gluon longitudinal momentum fraction (x) distribution, or the PDFs, from the qq̅g component computed within a continuous approach, analyzing different parametrizations, without resorting to the discretization adopted in the BLFQ method to describe the pion; and, second, compare the BLFQ results with the BSE results for the contribution to the quark PDF from the higher Fock sectors after subtraction of the valence part from the total PDF. Within this aim, we will determine the main characteristics of these contributions, particularly concerning the breaking of the symmetry around x=1/2. We note that the BSE model reproduces the pion experimental space-like electromagnetic form factor, as shown in Ref. <cit.>. Furthermore, we will provide a practical method to generate the qq̅g contribution to the pion from the leading spin-antialigned valence wave function that can be used in a range of applications. 
§ THEORETICAL FRAMEWORK The bound state in LF field theory can be obtained by solving an eigenvalue problem of the Hamiltonian in a frame with a vanishing total transverse momentum (P⃗_⊥=0): P^-P^+|Ψ⟩=M^2|Ψ⟩ , where P^±=P^0 ± P^3 represent the LF Hamiltonian, P^-, and the longitudinal momentum, P^+, of the system, respectively. The eigenvalue M^2 is the mass squared of the bound state. The LF Hamiltonian we use contains the LF QCD Hamiltonian and confinement, P^-= P^-_ QCD +P^-_C <cit.>. With one dynamical gluon, the LF QCD Hamiltonian in the LF gauge A^+=0 <cit.> reads P_ QCD^-= ∫dx^-d^2 x^⊥[1/2ψ̅γ^+m_0^2+(i∂^⊥)^2/i∂^+ψ +1/2A_a^i[m_g^2+(i∂^⊥)^2] A^i_a +g_sψ̅γ_μT^aA_a^μψ +1/2g_s^2ψ̅γ^+T^aψ1/(i∂^+)^2ψ̅γ^+T^aψ] , where ψ and A^μ are the quark and gluon fields, respectively. T^a is the half Gell-Mann matrix, T^a=λ^a/2, and γ^+=γ^0+γ^3, where γ^μ represents the Dirac matrix. The first two terms in Eq. (<ref>) are the kinetic energies of quark and gluon, while the last two terms describe their interactions with coupling constant g_s. m_0 and m_g are the bare mass of quarks and the model gluon mass, respectively. Using the Fock sector dependent renormalization scheme <cit.>, we introduce a mass counter term, δ m_q= m_0 -m_q, in the leading Fock sector to regularize the quark self-energy. Here, m_q is the renormalized quark mass. Apart from this, we introduce a different quark mass m_f to parameterize the nonperturbative effects in the vertex interactions <cit.>. The confinement in the leading Fock sector includes transverse and longitudinal confining potentials <cit.>, P_ C^-P^+=κ^4{x(1-x) r⃗_⊥^ 2-∂_x[x(1-x)∂_x]/(m_q+m_q̅)^2} , where κ is the strength of the confinement, and r⃗_⊥=√(x(1-x))(r⃗_⊥ q-r⃗_⊥q̅) represents the holographic variable <cit.>. The pion state vector obeys the eigenvalue equation (<ref>) with M^2≡ M^2_π, and the state vector can be expressed on the null-plane, i.e. x^+=0, through a Fock expansion of the form (see e.g. <cit.>) | Ψ⟩ = ∑_i_q s_q∑_i_q̅ s_q̅∫[∏_j=q, q̅d^3 p_j/(2π)^32p^+_j]2P^+(2π)^3 ×δ^(3)(p⃗_q + p⃗_q̅ - P⃗)ψ^(s_q, s_q̅)_qq̅;(i_q,i_q̅)(x_q, p⃗_q⊥, x_q̅, p⃗_q̅⊥) × b^†_i_q s_q(p⃗_q)d^†_i_q̅ s_q̅(p⃗_q̅)|0⟩ + ∑_i_q s_q∑_i_q̅ s_q̅∑_λ a∫[∏_j=q, q̅, gd^3 p_j/(2π)^32p^+_j] × 2P^+(2π)^3δ^(3)(p⃗_q + p⃗_q̅ + p⃗_g - P⃗)ψ^(s_q,s_q̅,λ)_qq̅g;(i_q,i_q̅,a)({ x, p⃗_⊥}) × b^†_i_q s_q(p⃗_q)d^†_i_q̅ s_q̅(p⃗_q̅)a^†_λ a(p⃗_g)|0⟩ + ⋯ , where { x, p⃗_⊥}≡{ x_q, p⃗_q⊥, x_q̅, p⃗_q̅⊥, x_g, p⃗_g⊥} satisfy the momentum conservation: p⃗_q⊥+p⃗_q̅⊥+ p⃗_g⊥=0  and   x_q+ x_q̅+ x_g=1 . Here b^†_i_q s_q (d^†_i_q̅ s_q̅) is the constituent (anti)quark creation operator, a^†_λ a is the creation operator for the constituent gluon and s_q, s_q̅, and λ=± 1 denote the helicity of the quark, antiquark and gluon, respectively. The BLFQ LF Hamiltonian contains confinement only in the leading Fock-sector, while the Hamiltonian in higher Fock-sector sectors is built from Eq. (<ref>) considering that the effective degrees of freedom are dressed quarks and gluons with constituent masses. Therefore, by inserting the Fock expansion (<ref>) truncated at second order in  (<ref>), one can derive the following equation for the LF wave function of the qq̅g sector, which is also valid within the BLFQ approach: ψ^(s_q,s_q̅,λ)_qq̅g;(i_q,i_q̅,a) = 1/M^2_π - M^2_0,qq̅g[Vψ^(s_q, s_q̅)_qq̅;(i_q,i_q̅) ] , where the mass-squared operator of the free qq̅g system reads M^2_0,qq̅g = ∑_j=q, q̅, gp⃗^ 2_j⊥ + m^2_j/x_j , and V denotes the interaction connecting the qq̅ and qq̅g sectors. 
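As a small numerical illustration of the last two relations, the snippet below evaluates the free invariant mass squared M²_{0,qq̄g} and the energy denominator 1/(M²_π − M²_{0,qq̄g}) that weights [Vψ_qq̄]. The kinematics and constituent masses used here are placeholders chosen only for illustration; they are not the fitted parameter sets discussed later in the paper.

```python
import numpy as np

def free_mass_squared(xs, pTs, ms):
    """M0^2 = sum_j (pT_j^2 + m_j^2) / x_j for the free qqbar-g system,
    with longitudinal fractions xs, transverse momentum magnitudes pTs (GeV)
    and constituent masses ms (GeV)."""
    xs, pTs, ms = map(np.asarray, (xs, pTs, ms))
    return float(np.sum((pTs**2 + ms**2) / xs))

def energy_denominator(M_pi, xs, pTs, ms):
    """Weight 1/(M_pi^2 - M0^2) multiplying [V psi_qqbar] in the qqbar-g amplitude."""
    return 1.0 / (M_pi**2 - free_mass_squared(xs, pTs, ms))

# Illustrative kinematics (assumed values, not model parameters):
# quark, antiquark and gluon with x = (0.4, 0.4, 0.2) and modest transverse momenta.
print(energy_denominator(M_pi=0.140,
                         xs=[0.4, 0.4, 0.2],
                         pTs=[0.2, 0.1, 0.3],
                         ms=[0.35, 0.35, 0.6]))
# The result is negative, as expected: the pion lies below the free qqbar-g threshold.
```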
In the present work, we truncate the Fock-space up to the qq̅ g sector, and the interaction that couples this sector with the valence sector in the light-cone gauge is written below explicitly with the momentum arguments: [Vψ^(s_q, s_q̅)_qq̅;(i_q,i_q̅) ] = g_s √(2)/x_q + x_g∑_i_1 s_1T^a_i_q i_1W^(s_q,s_1)_λ(p_q, p_g) ×ψ^(s_1, s_q̅)_qq̅;(i_1, i_q̅)(x_q̅, p⃗_q̅⊥) - g_s √(2)/x_q̅ + x_g ×∑_i_1 s_1T^a_i_1 i_q̅W̅^(s_1, s_q̅)_λ(p_q̅, p_g)ψ^(s_q, s_1)_qq̅;(i_q, i_1)(x_q, p⃗_q⊥) , where p_q(q̅)≡{ p^+_q(q̅),p⃗_q(q̅)⊥} and p_g≡{ p^+_g,p⃗_g⊥}. The basic quark-gluon-quark matrix element is given by W^(s_q,s_1)_λ(p_q, p_g) = 1√(2)u̅_s_q(p_q) ε^*_λ(p_g) u_s_1(p_q + p_g) , and the corresponding matrix element for antiquarks reads W̅^(s_1, s_q̅)_λ(p_q̅, p_g) = 1√(2)v̅_s_1(p_q̅+ p_g) ε^*_λ(p_g) v_s_q̅(p_q̅) , where u_s_q and v_s_q̅ are the light-cone helicity spinors. The matrix elements (<ref>) and (<ref>) have been tabulated in Ref. <cit.>. For clarity, we write the spin-flip matrix elements: W^(+,-)_λ(p_q, p_g) = m_fx_1 - x_q/√(x_1 x_q)δ_λ, - , W^(-,+)_λ(p_q, p_g) = -m_fx_1 - x_q/√(x_1 x_q)δ_λ, + , W̅^(+,-)_λ(p_q̅, p_g) = λ m_fx_1 - x_q̅/√(x_1 x_q̅) , W̅^(-,+)_λ(p_q̅, p_g) = 0 , where x_1 = x_q + x_g (x_q̅ + x_g) for (anti)quark matrix elements. The parameter m_f controls their magnitude which, in BLFQ, rules the split between the pion and rho meson masses, lowering the pion to its small mass in the hadronic scale. The contribution from the qq̅g sector to the quark PDF is Δ u_q(x_q) = 1/(2π)^6∑_i_q s_q∑_i_q̅ s_q̅∑_λ a∫d^2p_q⊥d^2p_g⊥dx_g/4x_qx_g(1 -x_q - x_g ) ×|ψ^(s_q,s_q̅,λ)_qq̅g;(i_q,i_q̅,a)({ x, p⃗_⊥})|^2 . The corresponding expression for the antiquark PDF is obtained through the exchange q →q̅. The gluon PDF is similarly given by u_g(x_g) = 1/(2π)^6∑_i_q s_q∑_i_q̅ s_q̅∑_λ a∫d^2p_q⊥dx_qd^2p_g⊥/4x_qx_g(1 -x_q - x_g) ×|ψ^(s_q,s_q̅,λ)_qq̅g;(i_q,i_q̅,a)({ x, p⃗_⊥})|^2 . The summed squared qq̅g LF wave function entering Eqs. (<ref>) and (<ref>) takes the form ∑_i_q s_q∑_i_q̅ s_q̅∑_λ a |ψ^(s_q,s_q̅,λ)_qq̅g;(i_q,i_q̅,a)({ x, p⃗_⊥})|^2 = N(N - 1)g_s^2/(M^2_π - M^2_0,qq̅g)^2∑_s_q s_q̅∑_λ∑_s_1 s_2 {[W^(s_q,s_1)_λ(p_q, p_g)]^*[W^(s_q,s_2)_λ(p_q, p_g)]/(x_q + x_g)^2 ×[ψ^(s_1, s_q̅)_qq̅(x_q̅, p⃗_q̅⊥)]^* ψ^(s_2, s_q̅)_qq̅(x_q̅, p⃗_q̅⊥) + [W̅^(s_1, s_q̅)_λ(p_q̅, p_g)]^*[W̅^(s_2,s_q̅)_λ(p_q̅, p_g)]/(x_q̅ + x_g)^2 × [ψ^(s_q, s_1)_qq̅(x_q, p⃗_q⊥)]^* ψ^(s_q, s_2)_qq̅(x_q, p⃗_q⊥) - 2Re[[W^(s_q,s_1)_λ(p_q, p_g)]^*W̅^(s_2,s_q̅)_λ(p_q̅, p_g)/(x_q + x_g)(x_q̅+ x_g) × [ψ^(s_1, s_q̅)_qq̅(x_q̅, p⃗_q̅⊥)]^* ψ^(s_q, s_2)_qq̅(x_q, p⃗_q⊥) ] } , where we have chosen to denote in the valence wave function the spectator quark or antiquark momenta in the gluon radiation process. In the present work it will be assumed that the valence wave function is dominated by spin-antialigned component, ψ^(+, -)_qq̅=-ψ^(-, +)_qq̅ (see e.g. <cit.>), and the aligned contribution will thus be neglected. § RESULTS AND DISCUSSION In the present work, the gluon contributions to the pion PDFs are studied. In particular, we compute the qq̅g contribution to the quark PDF and gluon PDF by using the formalism outlined in Sec. <ref>. The results with an input model valence wave function will then be compared to those of BLFQ <cit.> and those of BSE. The varied inputs of our model are the quark mass m_q entering the kinetic part (see Eq. (<ref>)), the quark mass entering the interaction (m_f) and the gluon mass m_g. In the present study we consider three different parameter sets which are given in Table <ref>. 
Namely, in Model I and II we use m_q and m_g from BLFQ <cit.>. However, in the first case m_f=m_q instead of m_f = 5.69 GeV that gives experimental mass splitting between π and ρ. The last set is using the values of the masses as in the BSE <cit.>. The coupling constant g_s=1.92 and other parameters of BLFQ are taken from Ref. <cit.>. For simplicity in the present study we will use a power-law form <cit.>: ψ_ pl(x, p⃗_⊥) = N[1 +(A_0, eff(x, p⃗_⊥)/4 - m^2_q)/β^2]^-s, in the place of the valence amplitude. N is a normalization constant and the parameters s and β will be determined through a fit to either BLFQ or BSE results. Moreover, the effective function A_0, eff(x, p⃗_⊥) is chosen as A_0,eff(x_q,p⃗_q⊥) = p^2_q⊥ + m^2_q/x_q + p_q̅⊥^2 + m^2_q/ x_q̅ , where in the actual calculations of the qq̅ g contribution to the momentum distributions we have used: p⃗_q̅=-p⃗_q⊥-p⃗_g⊥ and x_q̅=1-x_q-x_g , corresponding to the final momentum of the antiquark after the gluon is radiated and q is the spectator quark. When q̅ is the spectator an analogous expression is used, by exchanging the momenta of the quark with the antiquark. This simple recipe takes into account the damping of the loop integral in Eq. (<ref>) close to x_g→ 1, and the decrease of the gluon distribution in Eq. (<ref>) close to the end-point. We plan in the future to use a more general form of the valence wave function which reflects the dynamical content of the BLFQ Hamiltonian and BS equation. Note that we fit s and β parameters with the contribution of the valence state to the quark PDF. In this case, the function A_0,eff(x_q,p⃗_q⊥) reduces to the standard mass squared function, namely: M^2_0,qq̅(x, p⃗_⊥) = p⃗^ 2_⊥ + m^2_q/x(1-x) , which was used in the fitting of the qq̅ leading Fock-sector momentum distributions from the valence state obtained with BLFQ and BSE calculations. We find that the parameters s=1.4 and β/m_q=1.16 reproduce well the valence PDF of both the BLFQ and the BSE as shown in the upper panel of Fig. <ref>. In the middle panel of Fig. <ref>, we compare the results for the qq̅g contribution to the quark PDF for the sets I, II, III with the BSE calculation for the beyond-valence contribution and also the result of the BLFQ. From the figure, it is seen that the Model II qualitatively agrees with the BLFQ, as it should. Namely, the large bump at low-x is reproduced. By comparing the results for Model I and II, it can be concluded that the mentioned bump is related to the large value of m_f = 5.69 GeV, used in Model II. Note that the reproduction of the spectrum, i.e. a small pion mass, requires the large value of m_f <cit.>. Furthermore, it can also be seen in the middle panel of Fig. <ref> that the BSE result differs quite significantly from the Model III. But, one should notice in such a comparison that the BSE result contains an infinite number of contributions of the form qq̅ng where the number n=1, ⋯, ∞ is the number of gluons. Additionally, the BSE calculation was performed in the Feynman gauge. The discrepancy between Model II and the BLFQ result can presumably be explained by the use of a simple analytical form in the numerical calculations and the fact that the BLFQ is using a discretization of the longitudinal fractions not used in the perturbative method developed in this work. The valence and qq̅ contribution to the PDF computed within the BLFQ and BSE frameworks, respectively is compared with the qq̅g for Model II and III in the lower panel of Fig. <ref>. 
It can be concluded that the second Fock sector is important at small-x. As expected, the valence component dominates at larger values of x. In this work we also studied the impact of the gluon mass m_g on the quark and gluon PDFs in the pion. The results for the qq̅g contribution to the quark PDF are shown in the upper panel of Fig. <ref> for Model II and III that use two different values of m_g, i.e. the values given in Table <ref>, as well as for a vanishing gluon mass. As seen in the figure, an increase of the gluon mass leads to a shift of the quark PDF to lower values of x for both models. However, for Model III with m_f = m_q the effect is more pronounced compared to Model II having a large value of m_f. Similarly, we show in the lower panel of Fig. <ref> the results for the gluon PDF. The behavior is now the opposite, i.e., a larger m_g gives a gluon PDF shifted towards larger-x. In this figure, we also compare those results with the gluon PDF computed within BLFQ. The perturbative results agree qualitatively with those from BLFQ. However, the latter framework provides a PDF slightly shifted towards higher values of x. Model II has a distribution peaked much more to the right compared to Model III, i.e. increasing the mass m_f leads to a larger ⟨ x ⟩_g of the gluon. § CONCLUSION In this work, we studied the pion |qq̅g⟩ contributions within BLFQ and compared to the calculations done for the Minkowski space BSE of the contribution to the quark PDF beyond the leading qq̅ Fock sector. In the BLFQ case, we identified the effect of the dynamical chiral symmetry breaking in the pion quark PDF, namely the enhancement of the spin-flip matrix element impacts the |qq̅g⟩ Fock-component of the LF wave function and associated gluon PDF. We investigated that by exploring different sets of parameters. Noticeably, we explicitly showed that the low-x peaked contribution to the quark PDF is directly associated with the large spin-flip matrix element, necessary to provide the π-ρ mass splitting. Particularly, the explored framework in the light-cone gauge can be applied to other pion models of the valence state to build the |qq̅g⟩ component and eventually provide insights into the roles of the higher Fock-components. We can foresee some further steps to apply our methodologies. For example, one may look for the qq̅ g component extracted from phenomenological parametrizations, like the one developed in Ref. <cit.>, to further support the enhancement of the spin-flip matrix element. Meanwhile, this method could be applied to separate the qq̅ g component starting with the valence pion wave function obtained within DS/BSE approaches, although in covariant gauges. These are future challenges in the perspective to deepen our understanding of the pion LF wave function in the Fock-space with dressed constituents. The other direct manifestation of the higher Fock component with massive gluons appears in the clear separation between the peaks of the gluon PDF and the contribution to the quark PDF when the spin-flip matrix element is enhanced to provide the π-ρ splitting. On the other side, with parameters from the pion BS model in Minkowski space, where the enhancement of the spin-flip matrix element is quite mild, the gluon and quark PDF from the qq̅ g component peaks around the same position at x∼ 0.15. The Fock components beyond the valence from the BS model in Minkowski space provide a contribution to the quark PDF that is peaked around x∼ 0.3. 
The source of this difference could be associated with the extension of the quark-gluon vertex, which was tested here in a qualitative way, providing the shift from x∼ 0.15 to around ∼ 0.3 of the peak in the PDF for the contribution of the qq̅ g state. The application to the nucleon to compute the qqqg component is another challenge that could begin with the recent BLFQ results for the proton <cit.>, and using proton valence models from Minkowski space dynamics (see e.g.  <cit.>). § ACKNOWLEDGEMENTS The authors would like to thank Dr. Emanuel Ydrefors for his assistance in resolving issues encountered at all stages of this work. J.L. is supported by Special Research Assistant Funding Project, Chinese Academy of Sciences, by the Natural Science Foundation of Gansu Province, China, Grant No.23JRRA631, and by National Natural Science Foundation of China, Grant No. 12305095. C.M. is supported by new faculty start up funding the Institute of Modern Physics, Chinese Academy of Sciences, Grants No. E129952YR0. C.M. also thanks the Chinese Academy of Sciences Presidents International Fellowship Initiative for the support via Grants No. 2021PM0023. X.Z. is supported by new faculty startup funding by the Institute of Modern Physics, Chinese Academy of Sciences, by Key Research Program of Frontier Sciences, Chinese Academy of Sciences, Grant No. ZDB-SLY-7020, by the Natural Science Foundation of Gansu Province, China, Grant No. 20JR10RA067, by the Foundation for Key Talents of Gansu Province, by the Central Funds Guiding the Local Science and Technology Development of Gansu Province, Grant No. 22ZY1QA006, by National Natural Science Foundation of China, Grant No. 12375143, by National Key R&D Program of China, Grant No. 2023YFA1606903 and by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB34000000. This research is supported by Gansu International Collaboration and Talents Recruitment Base of Particle Physics (2023-2027), and supported by the International Partnership Program of Chinese Academy of Sciences, Grant No.016GJHZ2022103FN. This work is a part of the project INCT-FNA #464898/2014-5. This study was financed in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under the grant 306834/2022-7 (TF). We thank the FAPESP Thematic grant #2019/07767-1. J. P. V. is supported by the U.S. Department of Energy under Grant No. DE-SC0023692. A portion of the computational resources were also provided by Taiyuan Advanced Computing Center. sort compress elsarticle-num
http://arxiv.org/abs/2406.18695v1
20240626185732
Learning to Correct for QA Reasoning with Black-box LLMs
[ "Jaehyung Kim", "Dongyoung Kim", "Yiming Yang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Sequence Graph Network for Online Debate Analysis Quan Mai, 1 Susan Gauch, 1 Douglas Adams, 2 Miaoqing Huang 1 1Department of Electrical Engineering and Computer Science, 2Department of Sociology and Criminology University of Arkansas Fayetteville, Arkansas, USA {quanmai, sgauch, djadams, mqhuang}@uark.edu July 1, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT An open challenge in recent machine learning is about how to improve the reasoning capability of large language models (LLMs) in a black-box setting, i.e., without access to detailed information such as output token probabilities. Existing approaches either rely on accessibility (which is often unrealistic) or involve significantly increased train- and inference-time costs. This paper addresses those limitations or shortcomings by proposing a novel approach, namely (Correct for improving QA reasoning of Black-Box LLMs). It uses a trained adaptation model to perform a seq2seq mapping from the often-imperfect reasonings of the original black-box LLM to the correct or improved reasonings. Specifically, the adaptation model is initialized with a relatively small open-source LLM and adapted over a collection of sub-sampled training pairs. To select the representative pairs of correct and incorrect reasonings, we formulated the dataset construction as an optimization problem that minimizes the statistical divergence between the sampled subset and the entire collection, and solved it via a genetic algorithm. We then train the adaptation model over the sampled pairs by contrasting the likelihoods of correct and incorrect reasonings. Our experimental results demonstrate that significantly improves reasoning accuracy across various QA benchmarks, compared to the best-performing adaptation baselines.[The code will be available at <https://github.com/bbuing9/CoBB>.] § INTRODUCTION Large language models (LLMs) have achieved significant advancements in various NLP tasks, demonstrating exceptional capabilities in understanding and generating text <cit.>. Nevertheless, LLMs still present notable limitations, such as biased opinions toward specific groups <cit.> or inaccurate predictions for infrequent topics <cit.>, primarily due to the imperfections in the knowledge acquired during pre-training <cit.>. Consequently, it is essential to control and adapt the responses of LLMs to achieve optimal performance for specific use cases. Representative methods include fine-tuning on supervised training datasets <cit.> and input-level optimization through prompt engineering and retrieval augmentation <cit.>. However, these approaches require huge training costs or exhibit limited adaptation performance, respectively. To address these challenges, prior works have focused on training relatively smaller models using responses from LLMs and human supervision, then generating adapted responses while assuming that the LLM parameters are fixed or inaccessible (i.e., black-box). One approach assumes that the output token probabilities are available <cit.>, but this is often unrealistic. 
Although <cit.> recently proposed training a verifier and employing beam search to obtain adapted responses without this assumption, this method results in computationally expensive training and inference pipelines. Alternatively, <cit.> introduced a straightforward seq2seq learning framework to enhance the alignment of black-box LLMs. However, extending this framework to other tasks is challenging, particularly in terms of constructing the training dataset and ensuring the effectiveness of the training method. In this paper, we propose a simple yet efficient framework, learning to Correct for QA reasoning of Black-Box LLMs (). Our key idea is to learn a seq2seq mapping from the original reasoning of black-box LLM to correct and improved reasoning, by training an adaptation model initialized with a relatively small open-source LLM. After training, the adaptation model can be easily deployed during inference as a single additional module, as illustrated in Figure <ref>. Specifically, we firstly sample multiple chain-of-thought reasonings from black-box LLM and label their correctness using ground-truth human labels. Then, from all possible pairs of correct and incorrect reasonings, we subsample a few representative pairs that preserve the characteristics of the entire set. To identify such a subset, we formulate an optimization problem that minimizes the statistical divergence between the subset and the entire set, solving it via a genetic algorithm. Finally, using this optimized subset, we train the adaptation model to simultaneously increase the likelihood of correct reasoning and decrease the likelihood of incorrect reasoning for the given input and reasoning. An overview of is presented in Figure <ref>. We demonstrate the effectiveness of in improving QA reasoning with black-box LLMs through extensive evaluations on four different QA datasets. For instance, achieved average accuracy improvements of 6.2% and 2.2%, compared to the original black-box and previous state-of-the-art adaptation methods, respectively. Furthermore, we found that the adaptation model trained for a specific black-box LLM could generalize to adapt other LLMs, including both API-based and open-source models, which is crucial for efficient deployment in practice. Additionally, our in-depth analyses reveal how improves and corrects the reasoning from the black-box LLMs. We hope our work provides valuable insights into LLM adaptation research, which is becoming increasingly important for the future success of LLMs in real-world applications. § RELATED WORKS §.§ Steering and adapting LLMs' responses While recent LLMs have demonstrated remarkable success in various tasks, steering and adapting their responses for the specific domain or user is still essential for achieving optimal performance <cit.>. Fine-tuning on human- or machine-labeled datasets is a straightforward approach <cit.>, but this method incurs significant costs due to the need to update the vast number of trainable model parameters, particularly for large-scale LLMs like GPT-4 <cit.> (>100B parameters). Consequently, prompt engineering <cit.> and retrieval augmentation <cit.> are often preferred, as these methods only require modifying the inputs to LLMs. However, recent observations indicate that input-level modifications alone are insufficient for adequately steering LLMs’ responses in the desired direction <cit.>, likely due to the absence of learnable parameters and learning from human supervision. 
In this work, we propose an alternative way to steer and adapt LLMs using a trainable model and human supervision, without updating the target LLMs. §.§ Learning to adapt black-box LLMs As the scale of LLMs continues to increase, and their parameters often remain inaccessible (i.e., black-box), the need to adapt their responses without updating their parameters has gained significant attention. A common approach involves introducing a relatively small trainable model to learn adaptation from the original responses of the black-box LLM. One line of work focuses on learning to adapt output probabilities <cit.>, but this method is impractical when the output probabilities of black-box LLMs are inaccessible. To address this limitation, <cit.> propose a verification-based approach, generating the adapted responses in multiple steps via beam search, where scores are calculated using a learned verifier. However, this method increases the costs of training and inference due to the iterative computation between the black-box LLM and the verifier, and deploying the beam search. On the other hand, <cit.> demonstrate that a simple seq2seq modeling approach can effectively improve the alignment of black-box LLMs. Despite its effectiveness, this method is limited for the other tasks, in terms of constructing the training dataset and ensuring the effectiveness of the training method. To overcome these limitations, we propose a novel approach to construct an effective training dataset, along with an improved training objective. § : LEARNING TO CORRECT FOR QA REASONING WITH BLACK-BOX LLMS In this section, we introduce our framework for learning to Correct for improving QA reasoning with Black-Box LLMs (). We begin with an overview of the problem setup in Section <ref>. Next, in Section <ref>, we present how to construct an effective dataset for training the adaptation model. This dataset is created by solving an optimization problem using a genetic algorithm, to preserve the characteristics of the entire set of correct and incorrect reasoning pairs from black-box LLM. Finally, we describe a training scheme in Section <ref>, where the adaptation model is trained by contrasting the likelihoods of positive and negative reasonings. The full procedure of is outlined in Algorithm <ref>, and an overview is provided in Figure <ref>. §.§ Preliminaries Let denote black-box LLM as ℳ, which generates an original output sequence (e.g., reasoning) 𝐲_o for a given input sequence (e.g., question) 𝐱, i.e., 𝐲_o∼ℳ(·|𝐱). Then, our goal is to obtain an adaptation model π_θ, that can generate the adapted output (e.g., improved reasoning) 𝐲_a from given 𝐱 and 𝐲_o: 𝐲_a∼π_θ(·|𝐱,𝐲_o). For example, <cit.> initialize π_θ with a pre-trained open-sourced LLM, and fine-tune it by minimizing a supervised cross-entropy: ℒ_ SFT(θ) = - logπ_θ(𝐲_a|𝐲_o, 𝐱), where 𝐱, 𝐲_o, 𝐲_a∼𝒟={(𝐱^i, 𝐲_o^i, 𝐲_a^i)}_i=1^N. To improve the LLM's alignment regarding helpfulness and harmlessness, <cit.> construct 𝒟 using weaker LLMs (e.g., Alpaca-7B) for 𝐲_o and stronger LLMs (e.g., GPT-4) or human annotations for 𝐲_c, respectively. In our case, we assume that a human-annotated QA dataset 𝒬={(𝐪^i, 𝐚^i)}_i=1^M is available, where 𝐪 is question of target task and 𝐚 is the ground-truth answer. Then, our goal is to train adaptation model π_θ, which is also initialized with open-sourced LLM, using 𝒬 and obtain the improved reasoning with ℳ for this task. 
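As an editorial illustration of the preliminaries above (and not part of the original implementation), the following minimal Python sketch spells out the supervised objective ℒ_SFT: the adaptation model π_θ is conditioned on the concatenation of the question and the black-box LLM's original reasoning, and a token-level cross-entropy is applied only to the adapted reasoning 𝐲_a. The tensor layout and masking convention are assumptions made for the sketch.

```python
# Minimal sketch (not the authors' code) of L_SFT = -log pi_theta(y_a | y_o, x),
# computed only over the tokens of the adapted reasoning y_a.
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, targets: torch.Tensor,
             answer_mask: torch.Tensor) -> torch.Tensor:
    """logits:      (T, V) next-token logits of pi_theta over the sequence [x ; y_o ; y_a].
    targets:     (T,)   next-token ids of the same sequence.
    answer_mask: (T,)   1.0 where the target token belongs to y_a, else 0.0,
                 so the prompt part (x and y_o) is conditioned on but not trained on."""
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return -(token_logp * answer_mask).sum() / answer_mask.sum().clamp(min=1.0)

# Toy usage with random tensors (shapes are illustrative).
T, V = 12, 50
loss = sft_loss(torch.randn(T, V), torch.randint(0, V, (T,)),
                torch.tensor([0.0] * 6 + [1.0] * 6))
```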
§.§ Optimizing dataset to learn from effective reasoning pairs via genetic algorithm Collecting and labeling of training pairs. To train adaptation model π_θ using 𝒬, we first collect positive and negative reasonings from ℳ. Specifically, for each 𝐪,𝐚∼𝒬, we sample K different reasonings {𝐲_ cot,k}_k=1^K using few-shot chain-of-thought prompt p_ cot <cit.>: 𝐲_ cot,k∼ℳ(·|𝐪,p_ cot). Then, if the prediction by 𝐲_ cot is correct (i.e., equal to the answer 𝐚), we assign this reasoning to the positive reasoning set, 𝒴_ pos. If not, we assign this reasoning to the negative reasoning set, 𝒴_ neg. Remarkably, we denote that there are some cases whether (1) ℳ can't generate any correct reasoning (i.e., 𝒴_ pos = ∅) or (2) there is no incorrect reasoning (i.e., 𝒴_ neg = ∅). For (1), we utilize answer-augmented prompting <cit.> to generate the reasoning to support the given answer 𝐚 and augment 𝒴_ pos with it. For (2), we randomly select the reasoning of another sample and augment 𝒴_ neg with it, to fully utilize the samples in 𝒬. Solving optimization to find effective reasoning pairs via genetic algorithm. With the collected 𝒴_ pos and 𝒴_ neg, we want to construct the training dataset 𝒟 to train π_θ, composed of the triplet of the question 𝐪, positive reasoning 𝐲_p, and negative reasoning 𝐲_n. However, the number of possible combinations between positive and negative reasonings is quadratically increased, i.e., |𝒴_ pos| × |𝒴_ neg|; it can be too large trained within the limited iterations and there can be large redundancy within the constructed dataset. To tackle this challenge, we propose to subsample a few representative positive and negative reasoning pairs, that can preserve the characteristics of all the possible combinations. Specifically, for each 𝐪, let denote the set of all the possible pairs of positive and negative reasonings as 𝒫= 𝒴_ pos×𝒴_ neg. Then, for each pair in 𝒫, we calculate its likelihood difference under π_θ: P = {π_θ(𝐲_p|𝐪)-π_θ(𝐲_n|𝐪) | 𝐲_p,𝐲_n∈𝒫}. Then, we propose to find a subset 𝒫_ sub⊂𝒫 which minimizes d(P_ sub, P), where P_ sub is obtained from 𝒫_ sub similar to Eq. <ref> and d(·,·) is a distance between two sets. Here, we assume the elements of both P and P_ sub are samples from two different normal distributions and then consider 2-Wasserstein distance <cit.> between them: d(P_ sub, P) = (μ - μ_ sub)^2 + (σ - σ_ sub)^2, where μ, σ^2 are the empirical mean and variance of P and μ_ sub, σ_ sub^2 the empirical mean and variance of P_ sub, respectively. We empirically observe that this 2-Wasserstein distance is better than other possible metrics such as KL divergence. However, finding P_ sub that minimizes the distance (Eq. <ref>) is non-trivial, as this selection of representative samples problem is NP-hard <cit.>, and the current objective includes the quadratic terms. To mitigate these challenges, we use a genetic algorithm <cit.>, which progressively optimizes the solution by iterating (1) acquiring a new candidate by perturbing the current solution and (2) updating the solution when the candidate achieves a better optimization objective. We consider a new random sampling of 𝒫_ sub as the perturbation, and obtain 𝒫^*_ sub after T iterations: 𝒫^*_ sub = (𝒫, P, M, T), where M is the size of the subset and a detailed description is presented in Algorithm <ref>.[During the experiments, we fix M as |𝒴_ neg|.] We observed that the genetic algorithm found a good solution within a few iterations and it only requires small additional computations (see Table <ref>). 
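To make the subset-selection step concrete, the sketch below (an illustration rather than the authors' Algorithm) follows the description above: candidate subsets of size M are repeatedly redrawn at random as the perturbation, scored by the 2-Wasserstein objective between Gaussian fits of the subset's and the full set's likelihood differences, and retained only when the objective improves. The exact perturbation, crossover, or stopping rules of the original genetic algorithm may differ.

```python
import random
import statistics

def wasserstein2_gauss(sub, full):
    """2-Wasserstein distance between Gaussians fitted to the two sets of likelihood differences."""
    mu, sigma = statistics.mean(full), statistics.pstdev(full)
    mu_s, sigma_s = statistics.mean(sub), statistics.pstdev(sub)
    return (mu - mu_s) ** 2 + (sigma - sigma_s) ** 2

def select_pairs(pairs, likelihood_diff, M, T, seed=0):
    """Pick M (y_p, y_n) pairs whose likelihood-difference statistics match the full set.

    pairs:           list of all (y_p, y_n) combinations for one question q.
    likelihood_diff: dict mapping each pair to pi_theta(y_p | q) - pi_theta(y_n | q).
    """
    rng = random.Random(seed)
    full = [likelihood_diff[p] for p in pairs]
    best = rng.sample(pairs, M)
    best_d = wasserstein2_gauss([likelihood_diff[p] for p in best], full)
    for _ in range(T):
        cand = rng.sample(pairs, M)      # perturbation: draw a fresh random subset
        d = wasserstein2_gauss([likelihood_diff[p] for p in cand], full)
        if d < best_d:                   # keep the candidate only if the objective improves
            best, best_d = cand, d
    return best
```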
With 𝒫^*_ sub, we construct the dataset 𝒟={(𝐪^i, 𝐲_n^i, 𝐲_p^j)}_i=1^N, where (𝐲_n, 𝐲_p) ∈𝒫^*_ sub for 𝐪.[We remark that there can be duplicated 𝐪^i, as multiple reasoning pairs are constructed for each 𝐪, 𝐚 from 𝒬.] §.§ Learning to correct by contrasting likelihoods of reasoning pairs With the constructed dataset 𝒟, we train the adaptation model π_θ to learn the seq2seq mapping from the original reasoning from black-box LLM ℳ to the correct and improved reasoning. While the supervised training with a cross-entropy (Eq. <ref>) is considerable <cit.>, we observed that this approach could be limited, especially when the target task requires careful discrimination between positive and negative reasonings. Therefore, we propose to further use the negative reasoning 𝐲_n to lower its likelihood in the output space of π_θ, while simultaneously increasing the likelihood of the positive reasoning 𝐲_p. Specifically, we construct our training objective ℒ_ train using Odds Ratio Preference Optimization (ORPO) <cit.>, which enables single-stage learning from pair-wise preference data, without the reference models. Namely, we treat 𝐲_p as preferred output and 𝐲_n as dispreferred output: ℒ_ train(θ, 𝒟) = 𝔼_𝒟[ℒ_ SFT(θ) + λ·ℒ_ OR(θ)], ℒ_ OR(θ) = -logσ( logodds_θ(𝐲_p|𝐱)/odds_θ(𝐲_n|𝐱)), where σ is a sigmoid function, λ is a hyper-parameter, and odds_θ(𝐲|𝐱) = π_θ(𝐲|𝐱)/1 - π_θ(𝐲|𝐱). Here, we use the concatenation of question 𝐪 and reasoning 𝐲 (for both 𝐲_p and 𝐲_n) as the input 𝐱, to model the seq2seq mapping between the original reasoning from ℳ (input) and the refined reasoning through π_θ (output), conditioned on 𝐪. As shown in Figure <ref>, incorporating 𝐲_n via Eq. <ref> effectively suppresses the increasing likelihood of negative reasonings. § EXPERIMENTS §.§ Setups Datasets and metrics. Following the recent work <cit.>, we evaluate on four different question-answering (QA) tasks, requiring adaptation on mathematical (GSM8K), implicit-reasoning (StrategyQA), truthful (TruthfulQA), and scientific (ScienceQA) domains. We use the train and test splits by <cit.>. To generate the reasonings for each dataset, we follow the previous chain-of-thought prompts used in prior work <cit.>, except GSM8K. In the case of GSM8K, we adopt a complex prompt <cit.>, as it yields higher accuracy compared to the previous one. During the evaluation, we sample K=5 chain-of-thought reasoning for each test question, and measure (1) the average accuracy (Avg.) across 5 reasonings, and (2) the accuracy of prediction from majority voting among them (Maj@5). For TruthfulQA, we report the average of the accuracies on helpfulness and informativeness (True + Info) following <cit.>, along with the majority voted accuracy. More details of the datasets are in Appendix <ref>. Baselines. We compare against several extensive baselines as follows: (1) Target black-box LLM: without adaptation, we use the reasoning from the target black-box LLM ℳ, (2) Initial adaptation model: we generate the reasoning from the open-sourced LLM, which is used to initialize the adaptation model π_θ, (3) Supervised Fine-Tuning (SFT): π_θ is fine-tuned with a given QA dataset 𝒬, (4) Chain-of-Thought Distillation (CoT Distill) <cit.>: instead of answer 𝐚 in original 𝒬, the positive reasoning 𝐲_p is used as the output label for input 𝐪 to fine-tune π_θ. (5) Aligner <cit.>: π_θ is fine-tuned to learn a seq2seq mapping from the concatenation of 𝐪 and 𝐲_n to 𝐲_p via cross-entropy loss (Eq. 
<ref>), (6) BBox-Adapter <cit.>: learning a verifier model to deploy beam search and generate the adapted reasoning in iterative inference and verification steps. Implementation details. For the target black-box LLM ℳ, we mainly consider , and it is used to generate the reasoning for the training adaptation model. To initialize the adaptation model π_θ, we consider <cit.>. For BBox-Adapter <cit.>, we follow the original experimental setups in the official codes. For other adaptation methods including , we commonly fine-tune π_θ for 5 epochs with a batch size of 128, using an Adam optimizer <cit.> with a learning rate of 1×10^-5 and cosine scheduler with a warm ratio of 0.03. Also, we use a temperature of 1.0 to sample the reasoning for each question. For the hyper-parameters of , we used fixed values of λ=0.1,T=1000,K=10. Here, we generate half of the reasonings from ℳ, and the remaining half from the initial π_θ for efficiency. §.§ Main results Table <ref> summarizes the experimental results on four different QA tasks, by adapting the reasoning of (i.e., target black-box LLM ℳ). First, it is observed that , which is used to initialize the adaptation model π_θ, originally exhibits significantly lower performance than the target black-box LLM. However, the model's performance is largely increased after the adaptation to the target task, regardless of the methods; it shows the importance of an additional adaptation stage for black-box LLM, using both the ground-truth human supervision and the collected reasonings of the black-box LLM. In addition, among these adaptation methods, one can observe that yields the largest improvements in most cases. Specifically, exhibits 6.2%/7.0% average accuracy (Acc.) and the majority voted accuracy (Maj@5) improvements for the target black-box LLM, on average across 4 QA tasks. Furthermore, compared to the strongest baselines, exhibits 2.2%/2.3% average improvements, respectively. Remarkably, as shown in Table <ref>, requires much smaller costs during the training of the adaptation model (≈ 20%) and the test-time inference (≈ 7%), compared to the previous state-of-the-art method (BBox-Adapter).[We follow the official implementation and hyper-parameters by the authors in <https://github.com/haotiansun14/BBox-Adapter>.] This is because directly learns a seq2seq modeling while BBox-Adapter learns to verify through the sampling and beam search. These results indicate that could serve as a more powerful yet cost-efficient adaptation method. We further demonstrate the advantage of regarding the transferability to various LLMs; namely, we deploy the adaptation model, trained with (in Table <ref>), to adapt reasonings of other LLMs including other API-based black-box LLM ( <cit.>) and open-source LLMs ( <cit.>, <cit.>, <cit.>).[For open-source LLMs, we only use the generated reasoning without access to the internal model weights or output probabilities to treat them as black-box LLM.] This result is presented in Table <ref>. Here, one can observe that successfully adapts the reasoning of various LLMs and improves the accuracies overall, even without the specific adaptation to the target LLM. To be specific, exhibits 9.1%/11.1% average accuracy (Acc.), and the majority voted accuracy (Maj@5) improvements, on average across 4 LLMs and 4 QA tasks. On the other hand, it is observed that the average accuracy on GSM8K is slightly decreased when the target LLM already exhibits a stronger performance than the LLM used to generate the training data. 
From this result and the overall improvements with the transferred adaptation model, it is inferred that the knowledge included in the constructed training dataset is more important for the effectiveness of the adaptation model, rather than the specific type of LLM used to construct the data. We present the results with the standard deviation in Appendix <ref>. §.§ Additional analyses with In this section, we provide additional analyses of . We conduct the experiments on StrategyQA and ScienceQA, by setting as the target black-box LLM ℳ and as the initialization model for the adaptation model π_θ in default. Ablation study. To validate the effectiveness of the proposed components of in Section <ref>, we perform the ablation experiments by decomposing our framework with two components of (1) the dataset construction via genetic algorithm (Eq. <ref>) and (2) the training objective to contrast the likelihood of positive and negative reasonings (Eq. <ref>). We denote these components as Gen. and Con., respectively. For comparison, we consider random subsampling when the genetic algorithm selection is not applied. Additionally, we set λ=0 when the contrastive training objective is not used. The results are presented in Table <ref>. Here, it can be observed that using the contrastive training objective significantly improves the accuracy of the adapted reasoning, and the improvements are further enhanced when the adaptation model is trained on more representative reasoning pairs. At the same time, it is observed that the proposed dataset construction is not effective without the contrastive training objective. These results indicate that adjusting the likelihood of π_θ is crucial to successfully learning the adaptation, and effective dataset construction aids by guiding where to adjust. We further present Figure <ref> to reveal the effect of contrastive training objective. Here, one can observe that the likelihood of negative reasoning is even increased compared to the initial stage, when the cross-entropy loss is only used with the positive reasoning (Eq. <ref>). However, by incorporating the contrastive objective, this problem is clearly resolved. One can also observe that its effectiveness is not sensitive to the choice of λ. Effect of different initialization for π_θ. Next, we conduct experiments to reveal the importance of the choice of open-sourced LLM to initialize π_θ. To this end, we use LLaMA2 (), which has a similar number of trainable parameters as the originally used Mistral (), for the initialization and measure the average accuracy before/after applying . The results are presented in Table <ref>; when is applied (51), it indicates that π_θ is trained with each initialization LLM and used to adapt the reasoning from . One can first notice that the accuracy of LLaMA2 is largely worse than Mistral. While the accuracy of the adapted reasoning with LLaMA2 is significantly increased, it still fails to improve the accuracy of the reasonings from the black-box LLM, unlike Mistral. This result implies that pre-trained knowledge within the open-source LLM is crucial to learning the correction of QA reasoning via , and we could benefit from the continued advances of open-source LLMs. In-depth analyses of . Lastly, we conduct additional analyses to deeply understand how works. Specifically, we try to answer the following question: how changes the (1) correctness, (2) likelihood, and (3) diversity of the reasonings of the black-box LLM. 
The corresponding experimental results are presented in the top, middle, and bottom rows of Table <ref>, respectively. First, it is observed that mostly keeps the correctness of the originally correct reasonings (100 → 92.2), while significantly improving the incorrect ones (0 → 69.72). Also, such behavior is observed in terms of the likelihood; when we measure the likelihood of reasoning 𝐲 with the trained adaptation model π_θ(𝐲), one can observe that the likelihood of originally correct reasonings is maintained and incorrect reasonings' is largely increased. Then, one potential concern might be that loses the diversity within the original reasonings, and generates the identical adapted reasonings. But, as shown in Table <ref>, it is observed that the diversity of original reasonings is well-preserved after the adaptation via ; it demonstrates that can understand the context within the original reasoning and properly incorporate it during the adaptation. § CONCLUSION In this paper, we proposed , a simple yet effective framework for learning to correct QA reasoning of black-box LLM. We propose to learn a seq2seq mapping from the original reasoning of black-box LLM to correct and improved reasoning, by training a relatively small adaptation model with the newly proposed dataset construction and training objective. Our experiments demonstrate the effectiveness of across various QA tasks and LLMs. Therefore, we believe our framework can contribute to various real-world applications that require the adaptation of black-box LLMs. § LIMITATIONS While shows promising results in our experiments, several limitations must be acknowledged. First, the effectiveness of heavily depends on the quality of the training pairs and the capability of the initial open-source LLM. While the proposed dataset construction via genetic algorithm aims to select representative pairs, the initial set of collected reasonings might still be biased <cit.> or insufficiently diverse <cit.> depending on the black-box LLM used for the reasoning generation, potentially affecting the adaptation model’s performance. Moreover, the effectiveness of our framework largely depends on the specific open-source LLM used to initialize the adaptation model, as shown in Table <ref>. While this reliance may be seen as a limitation, it also highlights a strength of our framework, as it can benefit from the rapid advancements in open-source LLM development in recent days. Secondly, requires ground-truth human labels to judge the correctness of reasonings, which can be resource-intensive and time-consuming to obtain, especially for large-scale datasets. Additionally, while demonstrates transferability across different LLMs, the adaptation performance may vary based on the specific characteristics and pre-training knowledge of the target LLMs. Lastly, the computational efficiency of , although improved compared to the baselines, can still pose challenges as it yields the fine-tuned open-source LLMs per each task which has a large number of model parameters. To address this issue, incorporating the parameter-efficient fine-tuning techniques <cit.> or distillation into a smaller model <cit.> could be effective. § BROADER IMPACT AND ETHICAL IMPLICATIONS We strongly believe that framework has the potential to provide significant positive impacts across various real-world applications. For instance, depending on the user, the interested domain could be varied, such as education, healthcare, and finance <cit.>. 
However, as highlighted in the recent study <cit.>, the accuracy of LLMs could be not sufficient if the considered domain is less frequently trained. In such a case, our framework offers an efficient solution for generating domain-specific responses without incurring huge costs, compared to the conventional solution of continual training <cit.>. At the same time, however, there are also some potential negative impacts. A primary concern is the risk of reinforcing existing biases present in the training data, whether they originate from the target black-box LLM, the human-annotated datasets, or the pre-trained knowledge of the open-source LLM used for initialization. For example, recent research has shown that state-of-the-art LLMs even exhibit biases towards specific groups <cit.>. If this kind of undesired bias is not properly removed during the training of the adaptation model, then our framework could reproduce or amplify the bias. We believe that this problem could be mitigated by incorporating additional filtering stages during the dataset construction, training, or inference <cit.>, and we remain this problem for the future direction. § ADDITIONAL EXPERIMENTAL DETAILS This section provides more details about the experimental setups in Section <ref>. We note that all of our experiments are conducted with 2 NVIDIA RTX A6000 GPUs (48GB memory) and AMD EPYC 7313 16-core Processor (3.7 max CPU Ghz). §.§ Datasets Here, we present more details of four QA tasks used in our experiments. The overall dataset description and statistics are presented in Table <ref>. Also, the examples of this dataset are presented in Figure <ref>. We follow the same train and test splits of the previous work <cit.>. ∘ StrategyQA <cit.> is a binary true/false (T/F) QA benchmark that emphasizes implicit multi-hop reasoning for strategy-based questions. Here, a strategy indicates the skill to derive subquestions from the main question. Notably, the questions in StrategyQA are not constrained to specific decomposition patterns and include strategies employed by humans in answering questions. Therefore, this benchmark requires models to infer unspoken premises and perform multiple reasoning steps to produce accurate answers, especially in cases where the answers are not immediately clear from the given information. ∘ GSM8K <cit.> is a collection of high-quality, linguistically diverse grade school math word problems. Each problem requires between 2 and 8 steps to solve and involves a series of calculations using basic arithmetic operations to determine the final answer. Consequently, solving these problems necessitates multi-step reasoning and mathematical computations based on the problem’s context. ∘ TruthfulQA <cit.> is a dataset to assess a model’s ability to produce truthful, factual, and accurate answers. It targets the common issue of AI models generating plausible yet incorrect responses, challenging their ability to recognize and maintain truthfulness. For evaluation, we follow the prior work <cit.> that utilizes prompting. ∘ ScienceQA <cit.> multi-modal question-answering dataset centered on science topics, consists of annotated answers, lectures, and explanations. The dataset originally included around 21,000 multi-modal multiple-choice questions. In our experiments, we adhere to the setup by <cit.>, which excludes questions needing image input and randomly selects 2,000 questions for training and 500 for testing, each sourced from the dataset’s original train and test subsets, respectively. 
§.§ Baselines In this section, we provide more details about each baseline. First, to generate the chain-of-thought reasoning <cit.> from LLMs for both the test and the construction of the training dataset of , we adopt the previously used few-shot chain-of-thought prompt <cit.>. The used prompts are presented in Figure <ref> In addition, as noticed in Section <ref>, we sample 5 chain-of-thought reasonings per each test sample. To this end, we use sampling with a temperature for the following baselines: Target Black-box LLM, Initial Adaptation Model, SFT, CoT Distill, and BBox-Adapter. Here, we commonly use a temperature of 1.0 except BBox-Adapter, as we use the optimized hyper-parameter (including temperature) by the authors for this baseline.[<https://github.com/haotiansun14/BBox-Adapter>] In the case of Aligner and (Ours), we generate the adapted reasoning with a greedy decoding (i.e., temperature of 0.0), as both methods receive the generated reasoning by black-box LLMs as the input and hence already includes sufficient diversity. In addition, for both methods, we consider the likelihood-based filtering mechanism for GSM8K dataset, where the adapted reasoning is only accepted when its likelihood is higher than the original one. Also, we commonly evaluate the performance of each method after the training (i.e., last checkpoint). §.§ First, in Algorithm <ref>, we present the full procedure of how the genetic algorithm is used to construct the dataset, which is introduced in Section <ref>. In addition, regarding the choice of hyper-parameters, we use λ=0.1 as this value was most efficient for the alignment fine-tuning in the original ORPO paper <cit.>; also, with the experiments on ScienceQA (shown in Figure <ref>), we similarly confirmed that this value is mostly effective. For the iterations of genetic algorithm T, we use T=1000 as it sufficiently decreases the target objective (Eq. <ref>) within considerable time. For example, the dataset construction for ScienceQA with T=1000 consumes 72 seconds and one can confirm that the improvement from more iterations is almost saturated (see Table <ref>). § ADDITIONAL QUANTITATIVE RESULTS In this section, we present more quantitative results that are not presented in the main draft. §.§ Results with standard deviation First, we present the standard deviation for the results in Tables <ref> and <ref>. Specifically, we additionally calculate the standard deviation of the accuracies among five different reasonings; hence, it is only calculated for the average accuracy (Acc.), not for the majority voted accuracy (Maj@5). These results are presented in Tables <ref> and <ref>. Here, one can observe that the improvement by is clear without the overlap between confidence intervals in most cases. §.§ GPT-4 with Next, we verify the potential of to improve the state-of-the-art black-box LLM. To this end, we consider <cit.> as a target black-box LLM and generated the adapted reasoning using (1) the adaptation model trained with (in Table <ref>) and (2) the newly trained adaptation model with . The results are presented in Table <ref>. First, it is observed that the adapted reasonings by exhibit better performance compared to the ones from ^*. These results show the importance of using better source LLM in constructing the dataset, as it can contribute to providing extensive and deeper knowledge. Nevertheless, even using for dataset construction, the performance improvement is quite limited under the current choice of . 
We suspect that this limitation might stem from the limited capacity of the current adaptation model, which is initialized by , as implicitly evidenced in Table <ref>. Therefore, if stronger open-source LLM, in terms of the number of model parameters and the overall performance, could be used as the adaptation model, we believe that our framework can learn the adaptation, even for the state-of-the-art black-box LLMs. §.§ In-depth analyses on more datasets Lastly, we further present the in-depth analysis results on StrategyQA in Table <ref>, similar to Table <ref> which is conducted on ScienceQA. Here, similar results are observed and it indicates that the interpretation presented in Section <ref> continuously makes sense across the different tasks. § ADDITIONAL QUALITATIVE EXAMPLES In this section, we present the additional qualitative examples of how the original reasoning from is adapted and corrected in Figures <ref>, <ref>, and <ref>. From these examples, one can notice that successfully corrects the reasoning while preserving lexical diversity and grammatical correctness.
http://arxiv.org/abs/2406.18138v1
20240626074338
B-TMS: Bayesian Traversable Terrain Modeling and Segmentation Across 3D LiDAR Scans and Maps for Enhanced Off-Road Navigation
[ "Minho Oh", "Gunhee Shin", "Seoyeon Jang", "Seungjae Lee", "Dongkyu Lee", "Wonho Song", "Byeongho Yu", "Hyungtae Lim", "Jaeyoung Lee", "Hyun Myung" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT Recognizing traversable terrain from 3D point cloud data is critical, as it directly impacts the performance of autonomous navigation in off-road environments. However, existing segmentation algorithms often struggle with challenges related to changes in data distribution, environmental specificity, and sensor variations. Moreover, when encountering sunken areas, their performance is frequently compromised, and they may even fail to recognize them. To address these challenges, we introduce B-TMS, a novel approach that performs map-wise terrain modeling and segmentation by utilizing Bayesian generalized kernel (BGK) within the graph structure known as the tri-grid field (TGF). Our experiments encompass various data distributions, ranging from single scans to partial maps, utilizing both public datasets representing urban scenes and off-road environments, and our own dataset acquired from extremely bumpy terrains. Our results demonstrate notable contributions, particularly in terms of robustness to data distribution variations, adaptability to diverse environmental conditions, and resilience against the challenges associated with parameter changes. Terrain segmentation; Traversable terrain; Map-wise segmentation; Off-road navigation; Field robotics § INTRODUCTION In the field of robotics, there is a growing demand for the recognition and accurate representation of the surrounding environment. In particular, recognizing terrain data for unmanned ground vehicles (UGVs) has become increasingly important <cit.>.
Numerous research efforts have been concentrated on enhancing drivable region detection, object identification <cit.>, static map generation <cit.>, labeling dynamic objects <cit.>, odometry estimation <cit.>, and global localization <cit.> by utilizing terrain estimation. However, the off-road terrain recognition, which encompasses diverse and uneven landscapes, still remains a formidable challenge. Existing ground segmentation methods primarily focus on flat urban scenes <cit.>. Xue et al. introduced a drivable terrain detection method that employs edge detection in normal maps to segment areas between curbs or walls <cit.>. Addressing non-flat and sloped terrains, Narksri et al. proposed a multi-region RANSAC plane fitting approach <cit.>. Wen et al. utilized LiDAR range- and z-images that combines features with different receptive field sizes to improve ground recognition <cit.>. Paigwar et al. put forth a learning-based terrain elevation representation <cit.>. However, these existing methods face challenges when applied to off-road and irregular bumpy terrain. Our prior work has been primarily centered on enhancing off-road autonomous driving performance. Initially, we proposed a PCA-based multi-section ground plane fitting algorithm <cit.>, and subsequently improved its robustness against outliers frequently encountered in 3D LiDAR data <cit.>. We also introduced a graph-based traversability-aware approach <cit.>. Despite our efforts to enhance ground segmentation in off-road environments such as forested areas, our previous approaches still face challenges, including the need for parameter adjustments based on data distribution and difficulties in recognizing unobservable or sunken areas. In this study, by extending our previous research <cit.>, we introduce B-TMS, a novel approach for integrating probability approach with tri-grid field (TGF)-based terrain modeling and analyzing map-wise traversable terrain regions, as illustrated in Fig. <ref>. We have overcome the limitations of existing methods and conducted evaluation across three diverse datasets, demonstrating the following contributions: * This research marks the pioneering map-wise terrain segmentation, exhibiting robustness against changes in data distribution stemming from map scale changes, for example. * Integration of BGK-based terrain model completion with our global TGF has significantly reduced the performance change gap owing to the parameter alterations. * Environmental adaptability is proved through evaluations in both urban and off-road environments, as well as in extremely bumpy terrain scenarios. § TERRAIN MODELING AND SEGMENTATION B-TMS mainly consists of initial traversable terrain search on global TGF with breadth-first traversable graph search (B-TGS), BGK-based terrain model completion, and traversability-aware global terrain model fitting modules. §.§ Initial Traversable Terrain Search on Global TGF Firstly, as proposed in our previous work <cit.>, we form the global graph structure known as the global TGF as follows: 𝐍^𝒯={𝐧^𝒯_i|i∈𝒩}, 𝐄^𝒯={𝐞^𝒯_ij|i,j∈𝒩}, where 𝐍^𝒯, 𝐄^𝒯, and 𝒩 represent a set of nodes 𝐧^𝒯_i whose center location is defined as 𝐱_i∈R^2, a set of edges 𝐞^𝒯_ij, and the total number of nodes, respectively. 3D cloud data is embedded into TGF by global xy-coordinate location with a resolution r^𝒯, then each 𝐧^𝒯_i contains the corresponding points 𝒫_i. 
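As a concrete (and simplified) illustration of this embedding step, the sketch below groups a point cloud into nodes by global xy location at resolution r^𝒯; it bins into square cells for brevity, whereas the actual TGF subdivides each cell into triangular sub-nodes. The per-node point sets 𝒫_i produced here are what the PCA-based plane fit described next operates on. This is not the released B-TMS code.

```python
# Sketch: embed a 3D point cloud into grid nodes by global xy location.
import numpy as np
from collections import defaultdict

def embed_points(points: np.ndarray, r_tgf: float) -> dict:
    """points: (N, 3) array of x, y, z coordinates in the map frame.
    Returns {cell_index: (M_i, 3) array P_i of the points falling in that cell}."""
    cells = defaultdict(list)
    keys = np.floor(points[:, :2] / r_tgf).astype(int)   # xy cell index per point
    for key, p in zip(map(tuple, keys), points):
        cells[key].append(p)
    return {k: np.asarray(v) for k, v in cells.items()}
```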
And by applying PCA-based plane fitting to 𝒫_i, the planar model 𝐏_i of 𝐧^𝒯_i can be initially defined as follows: 𝐏_i^𝖳[ 𝐦_i; 1 ] = [ 𝐬_i^𝖳 d_i ][ 𝐦_i; 1 ] = 0, where 𝐦, 𝐬, and d represent the mean point, surface normal vector, and plane coefficient, respectively. Additionally, with the obtained descending ordered eigenvalues λ_k∈1,2,3, the traversability weight w̅^𝒯_i is calculated as follows: w̅^𝒯_i = (1 - λ_3,i/λ_1,i) · ((λ_2,i-λ_3,i)/λ_1,i) ∈[0,1]. Please note that to facilitate BGK-based terrain model and to obtain a normalized weight, w̅^𝒯 is defined with scattering, λ_3/λ_1, and planarity, (λ_2-λ_3)/λ_1, as defined in Weinmann et al. <cit.>, which is different from <cit.>. So each node in the global TGF can be expressed as follows: 𝐧^𝒯_i = {𝐱_i, 𝒫_i, 𝐦_i, 𝐬_i^𝖳, d_i, w̅^𝒯_i}∈𝐍^𝒯. Then, to classify the initial terrain nodes, each node is classified into terrain node 𝐧^𝒯,t and others 𝐧^𝒯,o by the inclination threshold, θ^𝒯, and the threshold σ^𝒯 for the number of 𝒫_i as follows: 𝐧^𝒯_i⇒𝐧^𝒯,t_i, if cos(z_𝐬^𝖳_i) ≥cos(θ^𝒯)∧ n(𝒫_i) ≤σ^𝒯 𝐧^𝒯,o_i, otherwise , where z_𝐬^𝖳_i is a z-axis component of 𝐬^𝖳_i. To search for a set of traversable nodes in the global TGF, we adopt the B-TGS approach based on lcc(·) which determines the local convexity and concavity <cit.>. lcc(𝐞^𝒯,t_ij) confirms the local traversability between 𝐧^𝒯,t_i and 𝐧^𝒯,t_j as follows: lcc(𝐞^𝒯_ij) = true, if |𝐬_i·𝐬_j| ≥ 1 - sin(||𝐝_ij||ϵ_2) ∧ |𝐬_j·𝐝_ji| ≤ ||𝐝_ji||sinϵ_1 ∧ |𝐬_i·𝐝_ij| ≤ ||𝐝_ij||sinϵ_1 false, otherwise, where 𝐝_ji=𝐦_i-𝐦_j is the displacement vector. ϵ_1 and ϵ_2 denote the thresholds regarding normal similarity and plane convexity, respectively. As a result of the B-TGS process, only the searched traversable terrain nodes remain classified as 𝐧^𝒯,t, while the others are reclassified as 𝐧^𝒯,o. §.§ BGK-based Terrain Model Completion In the terrain model completion module, the terrain planar models of 𝐧^𝒯,o are predicted using the remaining 𝐧^𝒯,t. For the neighbor-based prediction, we propose the BGK-based terrain model prediction method on global TGF. Therefore, before predicting the terrain model of 𝐧^𝒯,o_j, we utilize the BGK function k(·,·) which estimates the likelihood of it being influenced by 𝐧^𝒯,t_i, inspired by <cit.> as follows: k(𝐧^𝒯,t_i,𝐧^𝒯,o_j)= (2+cos(2πd_ij/l))(1-d_ij/l)/3+sin(2πd_ij/l)/2π, if d_ij/l < 1 0 , otherwise where d_ij is the 2D xy-distance between 𝐦_i and 𝐱_j and l is the radius of the prediction kernel 𝒦^𝒯_j. Under the assumption that the xy-coordinates between 𝐦_j and 𝐱_i of 𝐧^𝒯,o_j are the same, the z-value of 𝐦_j can be easily predicted as follows: ℒ_z(𝒦^𝒯_j)≜ z_j = ∑_𝐧^𝒯_i^𝒦^𝒯_j k(𝐧^𝒯,t_i,𝐧^𝒯,o_j)· z_i/∑_𝐧^𝒯_i^𝒦^𝒯_j k(𝐧^𝒯,t_i,𝐧^𝒯,o_j), where ℒ_z(·) denotes the inference function of z. Furthermore, to predict 𝐬_j, we set the assumption that 𝐬_j is perpendicular to Δ = 𝐦_j-𝐦_i. So, we can model the normal vector of 𝐧^𝒯_j affected by 𝐧^𝒯_i, 𝐬_j← i as (<ref>), and 𝐬_j can also be predicted by the inference function as (<ref>). 𝐬_j← i^𝖳 = 1/||Δ||[-Δ_xΔ_z/√(Δ_x^2+Δ_y^2),-Δ_yΔ_z/√(Δ_x^2+Δ_y^2), √(Δ_x^2+Δ_y^2)] ℒ_𝐬(𝒦^𝒯_j) ≜𝐬_j = ∑_𝐧^𝒯_i^𝒦^𝒯_j k(𝐧^𝒯_i,𝐧^𝒯_j)·𝐬_j← i/∑_𝐧^𝒯_i^𝒦^𝒯_j k(𝐧^𝒯_i,𝐧^𝒯_j) where ℒ_𝐬(·) denotes the inference function of 𝐬. The plane coefficient, d_j, can be estimated by (<ref>). Lastly, for prediction of w̅_j^𝒯, we define the inference function ℒ_w(·) as follows: ℒ_w(𝒦^𝒯_j) ≜w̅_j^𝒯 = ∑_𝐧^𝒯_i^𝒦^𝒯_j k(𝐧^𝒯,t_i,𝐧^𝒯,o_j)·w̅_i^𝒯(𝐬_i·𝐬_j)/∑_𝐧^𝒯,t_i^𝒦^𝒯,o_j k(𝐧^𝒯,t_i,𝐧^𝒯,o_j), considering the similarity of normal vectors. 
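Written out in code, the completion step is a set of kernel-weighted averages. The sketch below (illustrative only, not the authors' implementation) gives the sparse BGK kernel k(·,·) and the height inference ℒ_z for a single unobserved node; the normal-vector and traversability-weight inferences follow the same pattern, with the weight inference additionally scaled by the normal similarity 𝐬_i·𝐬_j as in the last equation above.

```python
import numpy as np

def bgk_kernel(d: np.ndarray, l: float) -> np.ndarray:
    """Sparse BGK kernel as a function of the xy-distance d and kernel radius l."""
    u = d / l
    k = (2.0 + np.cos(2.0 * np.pi * u)) * (1.0 - u) / 3.0 \
        + np.sin(2.0 * np.pi * u) / (2.0 * np.pi)
    return np.where(u < 1.0, k, 0.0)    # kernel vanishes outside the radius l

def infer_node_height(x_j: np.ndarray, mean_xy: np.ndarray,
                      mean_z: np.ndarray, l: float) -> float:
    """Predict z_j of a non-terrain node centred at x_j (2D) from terrain nodes
    with 2D mean points mean_xy of shape (K, 2) and heights mean_z of shape (K,)."""
    w = bgk_kernel(np.linalg.norm(mean_xy - x_j, axis=1), l)
    return float(np.dot(w, mean_z) / w.sum()) if w.sum() > 0 else float("nan")
```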
This is because traversability is related to the similarity to existing terrain models. By utilizing our proposed BGK-based terrain model prediction on global TGF, some 𝐧^𝒯,o are reverted to 𝐧^𝒯,t. §.§ Traversability-aware Global Terrain Model Fitting Finally, in this traverasbility-aware global terrain process, every 𝐧^𝒯_i ∈𝐍^𝒯 are updated as 𝐧̂^𝒯_i ∈𝐍^𝒯. So, by applying weighted corner fitting approach to all tri-grid corners, which was proposed in our previous work <cit.>, 𝐧^𝒯∈𝐍^𝒯, which are surrounded by three weighted corners 𝐜̂_m∈1,2,3∈R^3, are updated as follows: 𝐧̂^𝒯_i = {𝐱_i, 𝒫_i, 𝐦̂_i, 𝐬̂_i^𝖳, d̂_i, w̅^𝒯_i}∈𝐍^𝒯, 𝐏̂_i = [ 𝐬̂_i d̂_i ] , 𝐦̂_i = (𝐜̂_i,1+𝐜̂_i,2+𝐜̂_i,3)/3, d̂_i = - 𝐬̂_i·𝐦̂_i , 𝐬̂_i =(𝐜̂_i,2-𝐜̂_i,1)/||𝐜̂_i,2-𝐜̂_i,1||(𝐜̂_i,3-𝐜̂_i,1)/||𝐜̂_i,3-𝐜̂_i,1||. Finally, based on the updated nodes 𝐧̂^𝒯_i in global TGF, each point 𝐩_k∈𝒫_i is segmented as follows: (𝐩_k) = , if 𝐩_k·𝐬̂_i+d̂_i ≤ϵ_3 , otherwise, where ϵ_3 denotes the point-to-plane distance threshold. § EXPERIMENTS To demonstrate our contributions, we conducted quantitative and qualitative comparisons. For quantitative evaluations, we leveraged various distributed data from single scans to accumulated partial maps from public datasets, which also provide ground-truth semantic labels and poses. The parameter specifications for our proposed method are outlined in Table <ref>. Additionally, to highlight our contributions, we introduce the dataset from extremely bumpy terrain. §.§ Dataset §.§.§ SemanticKITTI Dataset For quantitative comparison on a real-world urban scene dataset, we utilized the SemanticKITTI dataset <cit.>, which was acquried with Velodyne HDL-64E LiDAR mounted on a vehicle. It's important to note that the points labeled as , , , , , , and are considered to be the ground-truth terrain points. §.§.§ Rellis-3D Dataset For quantitative evaluation in off-road environments, we utilized the RELLIS-3D dataset <cit.>, which was acquired with Ouster OS1-64 and Velodyne Ultra Puck mounted on ClearPath Robotics WARTHOG. Specifically, we used the Ouster data, as its location serves as the basis for the provided ground-truth pose data. It's essential to note that the points labeled as , , , , , , , and are considered as ground-truth terrain points. §.§.§ Extremely Bumpy Terrain Dataset To demonstrate the robustness of the proposed method, we acquired our own dataset on the bumpy terrain environments. As shown in Fig. <ref>, this site covers from slightly to extremely bumpy terrains. This dataset was acquired using a quadruped robot, specifically the Unitree Go1, equipped with a 3D LiDAR (Ouster OS0-128) and an IMU (Xsens MTI-300). §.§ Partial Map Generation To assess segmentation performance on partial maps of various scales, we accumulated scan data with ground-truth labels and voxelized it with 0.2m resolution. The partial maps were created based on a certain number of sequential frames, with 200 poses for the RELLIS-3D dataset and 500 for the SemanticKITTI dataset. §.§ Evaluation Metrics Similar to the evaluation methods in our previous studies <cit.>, we evaluated terrain segmentation performance using standard metrics: precision (P), recall (R), F1-score (F1), and accuracy (A). However, there are ambiguous semantic labels such as of SemanticKITTI and of RELLIS-3D cover various plants, which are distinguished differently from terrain. 
To address challenges posed by ambiguous labels such as and , we conducted two evaluations considering the sensor height h_s: one including the whole data, where only points with z-values below -0.25 · h_s among the ambiguous labels were considered as ground-truth terrain, and one without these data, excluding the ambiguous labels from the metrics. § RESULTS AND DISCUSSION §.§ Resilience Against Parameter Changes We first shed light on the effect of key parameters on terrain segmentation performance, by comparing with our previous work <cit.>. Fig. <ref> illustrates changes in accuracy depending on the TGF resolution (r^𝒯), the inclination threshold (θ^𝒯), and the distance threshold (ϵ_3), both with and without considering and . The two algorithms exhibit similar performance changes in response to ϵ_3 changes. However, for r^𝒯 and θ^𝒯, which are used to establish the tri-grid field (TGF), the proposed method demonstrates significantly reduced performance variations compared to TRAVEL. This suggests that BGK-based terrain model completion on TGF addresses problems arising from the inherent limitations of constant resolution and thresholds. §.§ Robustness to Data Distribution As evident in Table <ref> and Figs. <ref> and <ref>, we conducted performance evaluations on single scans, locally accumulated maps, and large-scale partial maps. Particularly, Table <ref> indicates that, regardless of whether ambiguous labels are considered in the evaluation metrics or not, we achieved the highest F1-score and accuracy performance across off-road datasets, urban scene datasets, single scans, and partial maps. Moreover, as shown in Fig. <ref>, the results of the single scans, which vary in distribution depending on the measured distance, highlights not only the robustness to data distributions, but also the stability on the wide and narrow off-road scenes. Although the introduction of the BGK-based terrain prediction module slightly increases the computation time compared to our previous work <cit.>, it is nonetheless still suitable for real-time navigation with onboard systems. §.§ Adaptability to Diverse Environmental Conditions Figs. <ref> and <ref> illustrate qualitative performance comparisons in various environmental conditions. A closer look at the top two rows of Fig. <ref> reveals a significant reduction in false negatives, previously common in off-road regions, near walls, and under objects. This reduction aligns with the performance improvements shown in Table <ref>. Moreover, to assess in diverse terrain environments, we introduced data from extremely bumpy terrain environments. The existing approach struggles with terrain modeling failures due to three causes: a) insufficient data in unobservable areas, b) terrain model outliers caused by overhanging objects, resulting false positives commonly in off-road scenarios, and c) inappropriate terrain model estimations for bumpy areas, resulting in false negatives. Our proposed algorithm, featuring BGK-based terrain model prediction and normalized weight-based terrain model fitting, overcomes these outlier issues, enabling stable terrain model predictions. § CONCLUSION In this study, we presented a robust map-wise terrain modeling and segmentation method that combines BGK-based terrain model completion with an efficient graph and node-wise PCA-based traversability-aware terrain segmentation approach. 
Our results demonstrate the consistent outperformance of B-TMS in the face of parameter variations, changes in data distributions, and alterations in environmental conditions. Furthermore, we anticipate that the capability to predict terrain models for unobservable and sunken regions will have a positive impact on subsequent autonomous navigation algorithms, particularly contributing to improved navigation performance in off-road scenarios. However, despite the robust terrain modeling of our approach, which is based on statistical traversability analyzing the distribution of 3D data, it should also incorporate another method of traversability estimation from semantic information, similar to the approach in the research of Shaban et al. <cit.>, for safer navigation. In addition, limitations stemming from pose drift along the z-axis restrict B-TMS from properly recognizing terrains and evaluating whole maps. To address these limitations, we will focus on expanding the approach with a terrain-aware loop-closure module to enhance pose estimation performance based on the research of Lim et al. <cit.>, and extend it to whole map-based terrain recognition techniques. IEEEtran
http://arxiv.org/abs/2406.18434v1
20240626153144
Relativistic theory of the viscosity of fluids across the entire energy spectrum
[ "Alessio Zaccone" ]
hep-ph
[ "hep-ph", "cond-mat.stat-mech", "hep-th", "physics.class-ph", "physics.plasm-ph" ]
http://arxiv.org/abs/2406.17879v1
20240625183822
On the Two-parameter Matrix pencil Problem
[ "S. K. Gungah", "F. F. Alsubaie", "I. M. Jaimoukha" ]
math.NA
[ "math.NA", "cs.NA", "15A22, 47A56, 47A80, 47A25, 15A69, 65F15, 15A18, 47B47" ]
http://arxiv.org/abs/2406.18107v1
20240626064748
Delay Infectivity and Delay Recovery SIR model
[ "Christopher N. Angstmann", "Stuart-James M. Burney", "Anna V. McGann", "Zhuang Xu" ]
math.DS
[ "math.DS", "33E30, 34K99, 60K15, 92D30" ]
§ ABSTRACT We have derived the governing equations for an SIR model with delay terms in both the infectivity and recovery of the disease. The equations are derived by modelling the dynamics as a continuous time random walk, where individuals move between the classic SIR compartments. With an appropriate choice of distributions for the infectivity and recovery processes, delay terms are introduced into the governing equations in a manner that ensures the physicality of the model. This provides novel insight into the underlying dynamics of an SIR model with time delays. The SIR model with delay infectivity and recovery allows for a more diverse range of dynamical behaviours. The model accounts for an incubation effect without the need to introduce new compartments. epidemiological models; SIR models; delay differential equations; continuous-time random walk 33E30, 34K99, 60K15, 92D30 § INTRODUCTION The spread of infectious diseases through populations has been modelled by SIR models since they were first introduced by Kermack and McKendrick in 1927 <cit.>. In this model the population is split into three compartments: those susceptible (S), infective (I) and recovered (R) from the infection. Individuals in the population move through the compartments. The time-evolution of the population of the compartments is represented by a set of three coupled ordinary differential equations (ODEs). In the intervening years many extensions have been proposed to this model to account for further dynamics, including incorporating `age-of-infection' effects <cit.>, the inclusion of additional compartments or stratifying the population based on sex <cit.>. An increasingly popular way to incorporate `time-since-infection' effects is through generalising the ODEs to a system of delay differential equations (DDEs) <cit.>. These models can, in some instances, better reflect particular disease dynamics, accounting for effects such as an incubation time <cit.>. There have been many generalised SIR models proposed that incorporate a time-delay on the infectivity rate, within the `force of infection' term <cit.>. Delays have also been considered in more general epidemiological models, such as SEIR models with delayed infectivity <cit.>. There is not a singular way of incorporating these delays, and they are often included in an ad hoc manner based on a dynamical systems approach. An alternative approach is to use the underlying stochastic process to describe the evolution of the infection through the population and derive the DDEs from the governing equation of the process. We begin by deriving an SIR model from a continuous time random walk (CTRW) <cit.>, and consider an arbitrary infectivity and recovery rate <cit.>. This approach has previously been used to show the conditions under which fractional derivatives arise in generalised SIR models <cit.>. We consider the necessary forms of the infectivity and recovery in the underlying stochastic process that lead to time-delay terms in the SIR model. For the recovery, we require the time spent in the infected compartment to follow a delay exponential distribution <cit.>.
This leads to a DDE for the infected population, with the delay occurring in the recovery term. We also consider a form of the `force-of-infection' which leads to delay in the infectivity term. Both of these effects can be taken together to produce a delay infectivity and delay recovery SIR model. The range of physical parameters can be inferred from the derivation of the governing equations from a stochastic process. In Section <ref> we derive a model with delays on the infectivity and recovery and consider the critical values on the time-delays and steady states of the system. In Section <ref> we consider reductions to the delay infectivity SIR and delay recovery SIR models. Examples are shown in Section <ref> and we conclude with a discussion in Section <ref>. § DERIVATION In order to incorporate a delay infectivity and a delay recovery into an SIR model with vital dynamics, we begin by taking the general set of master equations from a CTRW. In the CTRW we consider individuals entering a compartment, waiting there for a random amount of time before leaving to another compartment until they leave the system through death. The length of time an individual spends in a compartment is drawn from a waiting time distribution. We consider an arbitrary infectivity and recovery rate as in Angstman et al. <cit.> where integro-differential equations govern the Susceptible (S), Infective (I) and Recovery (R) compartments. A full derivation of the model is provided in Appendix <ref>. The master equations for the time evolution of the epidemic are, dS(t)/dt =λ(t)-ω(t) S(t) θ(t,0)∫_0^tK_I(t-t')I(t')/θ(t',0)dt'-γ(t)S(t), dI(t)/dt =ω(t) S(t) θ(t,0)∫_0^tK_I(t-t')I(t')/θ(t',0)dt' -θ(t,0)∫_0^tK_R(t-t')I(t')/θ(t',0)dt'-γ(t)I(t), dR(t)/dt =θ(t,0)∫_0^tK_R(t-t')I(t')/θ(t',0)dt'-γ(t)R(t). In this set of equations λ(t)>0 is the birth rate and γ(t)>0 is the death rate per capita. The environmental infectivity rate is ω(t) and the probability of surviving the death process from time t' to t is captured by θ(t,t'). The initial conditions of the infectivity compartment are taken such that I(t)=0 for t<0. Individuals may only enter the Infective compartment from the Susceptible compartment, hence there is a corresponding decrease in the number of individuals in the Susceptible compartment. Similarly, the individuals who leave the Infective compartment through recovery, correspond with the flux into the Recovery compartment. The infectivity (K_I) and recovery (K_R) memory kernels are the result of taking Laplace transforms to enable us to write governing equations. The memory kernel of the recovery functions is: K_R(t)=ℒ^-1{ℒ{ψ(t)}/ℒ{ϕ(t)}}. Here ϕ(t) is the probability of not recovering from the infected state after time t, and ψ(t) is defined as the corresponding waiting time probability density function, hence, ψ(t)=-dϕ(t)/dt. The memory kernel of the infectivity is defined as: K_I(t)=ℒ^-1{ℒ{ρ(t)ϕ(t)}/ℒ{ϕ(t)}}, where ρ(t) is the age-of-infection dependent infectivity rate. If ϕ(t) is an exponential and ρ(t) is a constant, the standard SIR model is recovered. To incorporate a delay into the infectivity and recovery terms, we will choose waiting time distribution, ϕ(t), and infectivity rate, ρ(t), such that the convolution integrals will induce delays. To obtain a delayed recovery term, we take ϕ(t) to be a delay exponential distribution <cit.> with μτ_2∈[0,e^-1], such that, ϕ(t)=dexp(-μ t;-μτ_2). Here τ_2 represents a constant delay and μ^-1 is the mean of the delay exponential distribution. 
The delay exponential function is defined by the power series <cit.>, dexp(-μ t;-μτ_2)=∑_n=0^∞(-μ)^n(t-nτ_2)^n/Γ(n+1)Θ(t/τ_2-n), t/τ_2∈ℝ, and the Heaviside function is defined by Θ(y)= 0 y<0, 1 y≥ 0. The dynamics of Eq. (<ref>) are illustrated in Figure <ref> for three different delay values such that μτ_2∈ [0,e^-1]. The Laplace transform of Eq. (<ref>) is, ℒ{ϕ(t)}=1/s+μ e^-sτ, then the memory kernel becomes, K_R(t)=μδ(t-τ_2), where δ is the Dirac delta. The recovery convolution can then be explicitly obtained via the property of the Dirac delta, ∫_0^tK_R(t-t')I(t')/θ(t',0)dt' =μ∫_0^tδ(t-t'-τ_2)I(t')/θ(t',0)dt' =μI(t-τ_2)/θ(t-τ_2,0)Θ(t-τ_2). Given initial conditions where I(t)=0 for t<0, the equation can be written without the Heaviside function. Now, let's consider the conditions for a delay to be present in the infectivity term. To force a delay term into the infectivity, we take ρ(t) to be, ρ(t)=Θ(t-τ_1)ϕ(t-τ_1)/ϕ(t). This choice of ρ(t) will lead to a time-delay in the infectivity, regardless of the ϕ(t) taken. When τ_1=τ_2, ρ(t) is a hazard function of the chosen ϕ(t) distribution. The dynamics of the infectivity of ρ(t) with a delay exponential recovery survival ϕ(t) is shown in Figure <ref>. We consider three cases of τ_1 in this figure. In each case the infectivity begins at zero. The larger the value of τ_1 the longer the infectivity stays at zero before `switching on'. Larger values of τ_1 result in an increased infectivity once the infectivity `switches on'. Note, that ρ(t) has a τ_2 dependence within it due to its ϕ(t) dependence. The infectivity kernel can then be written as, K_I(t) =ℒ^-1{ℒ{Θ(t-τ_1)ϕ(t-τ_1)}/ℒ{ϕ(t)}}. Which similar to the recovery kernel, simplifies to, K_I(t)=δ(t-τ_1). The infectivity convolution can then be obtained similarly to Eq. (<ref>), hence ∫_0^tK_I(t-t')I(t')/θ(t',0)dt' =ℒ^-1{ℒ{K_I(t)}ℒ{I(t)/θ(t,0)}} =ℒ^-1{ e^-sτ_1ℒ{I(t)/θ(t,0)}} =I(t-τ_1)/θ(t-τ_1,0)Θ(t-τ_1). As with the recovery delay equation, Eq. (<ref>), the initial conditions mean the Heaviside function can be dropped. Taking these initial conditions and substituting Eqs. (<ref>) and (<ref>), into the Eqs. (<ref>), (<ref>) and (<ref>) we get the governing equations: dS(t)/ dt= λ(t)-ω(t) S(t) θ(t,0)I(t-τ_1)/θ(t-τ_1,0)-γ(t)S(t), dI(t)/dt= ω(t) S(t) θ(t,0)I(t-τ_1)/θ(t-τ_1,0) -μθ(t,0)I(t-τ_2)/θ(t-τ_2,0)-γ(t)I(t), dR(t)/dt= μθ(t,0)I(t-τ_2)/θ(t-τ_2,0)-γ(t)R(t). Now, by taking the birth, death and infectivity rates to be constant, λ(t)=λ, γ(t)=γ and ω(t)=ω respectively, we can simplify the equations to: dS(t)/dt= λ-ω e^-γτ_1 S(t) I(t-τ_1)-γ S(t), dI(t)/dt= ω e^-γτ_1S(t) I(t-τ_1)-μ e^-γτ_2 I(t-τ_2)-γ I(t), dR(t)/dt= μ e^-γτ_2 I(t-τ_2)-γ R(t). We can see in this set of equations that the parameters of the model retain their standard dimensions, as well as their interpretation. In some of the previous SIR models with time delays, there is no accounting for the e^-γτ_1 survival term in the infectivity term. While this doesn't change the correctness of these previous models, it does require a different interpretation for the remaining infectivity term, ω. It also shows that if a model considers a different size delay, the ω e^-γτ_1 term will be rescaled. This reasoning holds for the τ_2 survival term as well. We will consider this set of simplified equations for the remainder of the paper. A representation of the movement through compartments is shown in Figure <ref>. §.§ Critical Values of the Time Delays Thus far, we have not focused on the critical values of delay parameters, τ_1, τ_2 and μ. 
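Before discussing admissible parameter ranges, it is convenient to note that the delay exponential can be evaluated directly from this power series, since the Heaviside factor truncates the sum at n = ⌊ t/τ_2 ⌋. The sketch below is an illustrative implementation; the particular μ and τ_2 values are chosen only to satisfy μτ_2 ≤ e^{-1} and are not taken from the figures.

```python
import math

def dexp(t: float, mu: float, tau2: float) -> float:
    """Delay exponential dexp(-mu*t; -mu*tau2) evaluated by its power series.

    The Heaviside factor truncates the series at n = floor(t/tau2), so the sum is finite.
    """
    if t < 0.0:
        return 0.0
    if tau2 == 0.0:
        return math.exp(-mu * t)          # reduces to the ordinary exponential
    n_max = int(math.floor(t / tau2))
    return sum((-mu) ** n * (t - n * tau2) ** n / math.gamma(n + 1)
               for n in range(n_max + 1))

# Illustrative check of the admissible range: phi(t) = dexp(-mu t; -mu tau2) should
# remain a non-negative survival function when mu*tau2 <= e^{-1} (~0.368).
mu, tau2 = 0.06, 6.0                      # mu*tau2 = 0.36, inside the admissible range
assert all(dexp(0.1 * k, mu, tau2) >= 0.0 for k in range(400))
```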
We discussed in Section <ref>, that the delay exponential function is only a probability distribution when 0 ≤μτ_2 ≤ e^-1. However, there is no similar restriction on τ_1. The infectivity function, ρ(t), is only required to be non-negative. As ρ(t) is defined by Eq. (<ref>), it will be non-negative for all τ_1≥ 0, as it is composed of non-negative functions. §.§ Steady States It is straightforward to find the steady states of the model defined by Eqs. (<ref>), (<ref>) and (<ref>). We will define the steady state to be (S^*, I^*, R^*), where, lim_t→∞ S(t)=S^*,lim_t→∞ I(t)=I^*,lim_t→∞ R(t)=R^*. These steady state values will satisfy the equations: 0= λ-e^-γτ_1ω S^* I^*-γ S^*, 0= e^-γτ_1ω S^*I^*-e^-γτ_2μ I^*-γ I^*, 0= e^-γτ_2μ I^*-γ R^*. The disease-free steady state is: S^*=λ/γ,I^*=0,R^*=0. The endemic steady state is: S^*=μ e^-γτ_2+γ/ω e^-γτ_1,I^*=λ/μ e^-γτ_2+γ-γ/ω e^-γτ_1,R^*=λμ e^-γτ_2/γμ e^-γτ_2+γ^2-μ e^-γτ_2/ω e^-γτ_1. The endemic state only exists if: λω e^-γτ_1>γ(μ e^-γτ_2+γ). § REDUCTIONS The general delay infectivity and delay recovery SIR equations can be reduced to simpler SIR models. An SIR model with only a delay on the infectivity or recovery is produced by setting one of the delays to zero. This enables us to compare existing SIR models with time-delays on the infectivity to our reduced model, a delay infectivity SIR model, when τ_2=0 and τ_1>0. Note, that by setting τ_1=τ_2=0 in the delay infectivity and delay recovery SIR model, the standard SIR model is recovered. This shows that our delay infectivity and recovery SIR model is consistent with the standard SIR model. In this section we present the reduced models. §.§ Delay infectivity SIR To obtain a delay infectivity SIR model, we set τ_2=0. Hence, the set of governing equations for the SIR model with a delay infectivity term are, dS(t)/dt= λ-ω S(t) e^-γτ_1 I(t-τ_1)-γ S(t), dI(t)/dt= ω S(t)e^-γτ_1I(t-τ_1)-μ I(t)-γ I(t), dR(t)/dt= μ I(t)-γ R(t). When τ_2=0, then Eq. (<ref>) shows ϕ(t) to be exponentially distributed, hence ϕ(t)=e^-μ t. As the infectivity, Eq. (<ref>), is dependent on the recovery waiting time, we can also identify the ρ(t) that leads to the existing SIR models with infectivity delays, ρ(t)=Θ(t-τ_1)e^μτ_1. Hence, SIR models with a delay infectivity rate are underpinned by no infectivity until an individual has been infected for a τ_1 length of time, and then a constant rate. A substitution of τ_2=0 into the steady states in Eqs. (<ref>) and (<ref>) gives us the steady states for this model. §.§ Delay recovery SIR To recover the delay recovery SIR model, we set τ_1=0. Hence the set of governing equations for this model are, dS(t)/dt= λ-ω S(t) I(t)-γ S(t), dI(t)/dt= ω S(t)I(t)-μ e^-γτ_2 I(t-τ_2)-γ I(t), dR(t)/dt= μ e^-γτ_2 I(t-τ_2)-γ R(t). When τ_1=0, the infectivity, Eq. (<ref>), becomes ρ(t)=1 for t≥0. Meanwhile the infection recovery waiting time remains a delay exponential distribution. A substitution of τ_1=0 into the steady states in Eqs. (<ref>) and (<ref>) gives us the steady states for this model. § RESULTS In this section we explore the effects of different delay parameters on the delay infectivity and delay recovery SIR model. We find that changes to the delay parameters lead to significant impacts in the short term dynamics of the model but cause minimal impacts in the long term dynamics. We will consider the effect of varying the delay parameters, τ_1, τ_2 as well as the timescale parameter, μ. We note that the variation in τ_1 leads to changes in the infective rate, Eq. (<ref>). 
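The closed-form steady states and the endemic-existence condition found above are straightforward to verify numerically before varying the delays; the following sketch (function and parameter choices are ours, anticipating the values used in this section) does so.

```python
import numpy as np

def steady_states(lam, gam, omega, mu, tau1, tau2):
    """Disease-free and endemic steady states of the delay infectivity/recovery SIR model."""
    a = omega * np.exp(-gam * tau1)      # effective infectivity  omega e^{-gamma tau_1}
    b = mu * np.exp(-gam * tau2)         # effective recovery     mu e^{-gamma tau_2}
    disease_free = (lam / gam, 0.0, 0.0)
    if lam * a <= gam * (b + gam):       # endemic state exists only if lam*a > gam*(b + gam)
        return disease_free, None
    S = (b + gam) / a
    I = lam / (b + gam) - gam / a
    R = b * I / gam
    return disease_free, (S, I, R)

# parameter values in the spirit of the examples below (placeholders otherwise)
lam, gam, omega, tau1, tau2 = 0.5, 0.001, 0.02, 1.0, 0.1
mu = np.exp(-1) / tau2
df, en = steady_states(lam, gam, omega, mu, tau1, tau2)

# the endemic state should zero the right-hand sides of the governing equations
S, I, R = en
a, b = omega * np.exp(-gam * tau1), mu * np.exp(-gam * tau2)
assert abs(lam - a * S * I - gam * S) < 1e-9
assert abs(a * S * I - b * I - gam * I) < 1e-9
assert abs(b * I - gam * R) < 1e-9
```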
Variations in τ_2 and μ affect both the infectivity rate, Eq. (<ref>), and the recovery survival function, Eq. (<ref>). For this study we have taken λ=0.5, γ=0.001 and ω=0.02. We begin by varying the infectivity delay, τ_1. We set the recovery delay τ_2=0.1 and define μ=e^-1/τ_2, to ensure 0≤μτ_2≤ e^-1. The larger the infective delay, the longer the infectivity remains zero before it `turns on'. The τ_1 delay reduces the peak of the infective compartment, as compared to a standard SIR model with matching vital dynamics and a constant infective rate. The impact of varying τ_1 on the population of the Infective compartment can be seen in Figure <ref>. The larger the infective delay, the smaller the infective peak and the later the peak occurs. Of more note, the infective delay induces oscillations in the infective compartment that are not observed in the standard SIR model. The oscillations occur with a period of the order of τ_1 and persist longer for greater values of τ_1. Next, we consider the impact of varying μ on the Infective compartment. As μ is decreased, the oscillation effect is dampened. This result is presented in Figure <ref>. In the figure, τ_1=1 and τ_2=0.1. Note that μτ_2≤ e^-1 for all of the plot lines to ensure the Infective compartment remains non-negative. We see that as μ is decreased, a more standard SIR Infective compartment with a unimodal distribution is recovered, and the peak infectivity increases. Next, we consider the effect of varying τ_2. The oscillations persist when τ_2 is increased, until τ_2 is greater than τ_1. In Figure <ref>, we set τ_1=e^-1, μ=0.06 and consider larger values for τ_2. This leads to a sustained peak infection in the compartment. The larger the delay, τ_2, the longer the peak infection number is sustained. The sustained peak infectivity is an intuitive result given the vital dynamics and the choice of τ_2. Under these conditions, the infective population grows until almost the entire population is infected. Infected individuals then have two ways to leave the infected compartment, either through recovery or death. The delay τ_2 traps individuals in the infected compartment for a time τ_2 before they are able to recover, and with a much smaller death rate, the probability of dying in that window is minimal. If the vital dynamics are increased, the steady peak infectivity is minimised. A different choice of vital dynamics would be appropriate for disease processes occurring over different magnitudes of time; e.g., the disease process of influenza and a more chronic disease like human papillomavirus (HPV) would be considered over different timeframes. Lastly, the reductions of the model from subsections <ref> and <ref> are considered. When τ_2=0, we recover the delay infectivity SIR model from subsection <ref>. In this model μ is no longer bound by Eq. (<ref>) and can be made arbitrarily large. Increasing either μ or τ_1 introduces stronger oscillations into the system. When we consider the delay recovery SIR reduction, as in subsection <ref> with τ_1=0, no oscillations occur. For larger values of τ_2 we still observe a sustained infective peak, although larger choices of μ can dampen this impact. Overall, the delay parameters produce substantial impacts on the short-term dynamics of the epidemiological model. We have observed that oscillations are produced for τ_1>τ_2, and a steady peak is produced for τ_2>τ_1. However, when τ_1 and τ_2 take similar values, their effects cancel out.
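The experiments described here can be reproduced with a very simple fixed-step scheme, since only the history of I(t) is needed for the delayed terms. The following sketch integrates the constant-rate governing equations with delays on both the infectivity and the recovery; it is an illustration rather than the code used to produce the figures, and the initial conditions are arbitrary placeholders.

```python
import numpy as np

def simulate(lam=0.5, gam=0.001, omega=0.02, tau1=1.0, tau2=0.1, mu=None,
             S0=50.0, I0=1.0, R0=0.0, t_end=60.0, dt=1e-3):
    """Explicit Euler for the delay infectivity / delay recovery SIR model.
    History: I(t) = 0 for t < 0, consistent with the initial conditions of the derivation."""
    if mu is None:
        mu = np.exp(-1) / tau2                      # keeps mu * tau2 <= e^{-1}
    a, b = omega * np.exp(-gam * tau1), mu * np.exp(-gam * tau2)
    n = int(round(t_end / dt)) + 1
    d1, d2 = int(round(tau1 / dt)), int(round(tau2 / dt))
    S, I, R = np.empty(n), np.empty(n), np.empty(n)
    S[0], I[0], R[0] = S0, I0, R0
    for k in range(n - 1):
        I1 = I[k - d1] if k >= d1 else 0.0          # I(t - tau1)
        I2 = I[k - d2] if k >= d2 else 0.0          # I(t - tau2)
        S[k + 1] = S[k] + dt * (lam - a * S[k] * I1 - gam * S[k])
        I[k + 1] = I[k] + dt * (a * S[k] * I1 - b * I2 - gam * I[k])
        R[k + 1] = R[k] + dt * (b * I2 - gam * R[k])
    return np.arange(n) * dt, S, I, R

t, S, I, R = simulate(tau1=1.0, tau2=0.1)                      # tau1 > tau2: oscillations in I
t, S, I, R = simulate(tau1=np.exp(-1), tau2=2.0, mu=0.06)      # tau2 > tau1: sustained infective peak
```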
The effects of both delays are also minimised, when the vital dynamics are increased. An advantage of the delay infectivity and delay recovery SIR model is its computational tractability. The inclusion of the delay term in generalised SIR models is often done with the aim of capturing a `time-since-infection' or other historical effects. However, for many disease processes a set of integrodifferential equations may provide a more accurate way of doing so. There may be circumstances, however, where the accuracy can be sacrificed in order to be able to solve the system of equations. In such cases, the delay infectivity and delay recovery SIR model provides a way to incorporate past states as well as be able to solve the equations. § DISCUSSION We have derived an SIR model with delay terms on both the infectivity and recovery rates by defining the SIR model from an underlying stochastic process. The delay terms are a consequence of our chosen infectivity and recovery functions. Our approach leads to better understanding of the mechanics at play when time delays are added into SIR model equations. We found that setting the recovery survival function to be a delay exponential distribution returned a delay in the recovery term. In order to incorporate a delay into the infectivity term, we defined the infectivity as a hazard function of delay exponential distributions. In the literature, there have been multiple SIR models with delays on the infectivity terms presented. By deriving our model from an underlying stochastic process, we can better understand the differences between the existing models. Our approach also provides insight into the parameters in the model, in particular showing that there is a survival term that is introduced when a delay is incorporated into the system. Typically such terms have not been added into the set of equations when delay SIR models are constructed in an ad hoc manner. This leads to a different interpretation of the `infectivity' and `recovery' rates in those systems. We established critical values of the delay on the recovery term, by examining the constraints on the delay exponential distribution. It was also observed that the recovery delay impacts the infectivity function as the infectivity is defined through the recovery density function. The only restriction on the delay on the infectivity term is that it must remain positive. The steady states of the system were found as well as the condition on the endemic steady state existing. There remains the open question of the stability of these steady states, particularly under a range of initial conditions. This has been the focus of previous delay SIR and SEIR models <cit.>, however the existing models are dependent on only one delay term. There has been an approach put forth to consider the stability of first-order linear DDE systems with multiple delay terms <cit.>. The conditions under which the model reduced to having a single delay, either on the infectivity or the recovery, were established. We considered the implication of such a choice of function on the mechanisms governing the infection. The sets of equations to govern the delay infectivity SIR and delay recovery SIR models were provided. Finally we showed some of the ways the delay infectivity and delay recovery SIR model can differ from a classic SIR model. We examined the impact of varying all three parameters related to the delays. 
It was shown that when the delay in the infectivity is larger than the delay in the recovery, this induces oscillations into the Infective compartment. We showed that decreasing the timescale parameter, μ, inhibits the magnitude of the oscillations. It was also found that when the delay in the recovery is larger than the delay in the infectivity, a steady infectivity peak is produced. In many instances considering a set of integrodifferential equations may serve to be a more accurate representation of the dynamics of a disease process. However the results from our model show that through considering only two fixed time delays, a simple change in parameter values can result in very different disease dynamics. These dynamics may adequately approximate certain integrodifferential SIR models. Additionally we have demonstrated that the delay infectivity and delay recovery SIR model is highly tractable. The tractability may, in some instances, provide enough benefit to choose this model over a more accurate set of integrodifferential SIR equations. § APPENDIX We begin by defining the flux into the infected compartment, I, at time t. This will be defined as q^+(I,t). The flux is constructed recursively as: q^+(I,t)=∫_-∞^t ρ(t-t')ω(t)S(t)θ(t,t')ϕ(t-t')q^+(I,t')dt'. Here ρ(t) is the age-of-infection dependent infectivity rate and ω(t) is the environmental dependent infectivity. θ(t,t') is the probability of surviving the death process from t' to t, and we assume it is of the form, θ(t,t')=e^-∫_t'^tγ(s)ds, hence it obeys the semi-group property. The probability of surviving the transition into the recovery compartment from t' to t is defined as, ϕ(t-t'). The recovery rate, ψ(t), is subsequently defined as, ψ(t)=-dϕ(t)/dt. We also define the flux, for t<0 as, q^+(I,t)=i(-t,0)/ϕ(-t)θ(0,t), where i(-t,0) is the number of initially infected individuals. The number of infected of individuals who are infected at time t is the sum of all individuals who have become infected at some prior time and not yet recovered. This can be split into the individuals who were initially infected, represented by I_0(t), and remain infected at time t, and the sum of individuals who have been infected at some t>0. Hence we can write the number of infected individuals at time t, as, I(t)=∫_0^t θ(t,t')ϕ(t-t') q^+(I,t')dt'+I_0(t), where I_0(t), the number of initially infected individuals still infected at time t. This term can be represented as, I_0(t)=∫_-∞^0 θ(t,t')ϕ(t-t')/θ(0,t')ϕ(-t') i(-t',0)dt'. We will take the initial conditions of the infected population to be, i(-t,0)=i_0δ(-t), where δ(t) is the Dirac delta function and i_0 is a constant. This simplifies Eq. (<ref>) to, I_0(t)=θ(t,0)ϕ(t) i_0. Taking the derivative of Eq. (<ref>) with the initial conditions defined by Eqs. (<ref>) and (<ref>), we arrive at, dI(t)/dt= ω(t)S(t)(∫_0^t ρ(t-t')θ(t,t')ϕ(t-t')q^+(I,t')dt'+ρ(t)ϕ(t,0)θ(t,0)i_0) -∫_0^t θ(t,t')ψ(t-t') q^+(I,t')dt' -θ(t,0)ψ(t)i_0- γ(t)I(t). In order to write the derivative in terms of I(t) instead of q^+(I,t), we first define, F_I(t)=∫_0^t ρ(t-t')θ(t,t')ϕ(t-t')q^+(I,t')dt', and F_R(t)=∫_0^t θ(t,t')ψ(t-t') q^+(I,t')dt. Using the semi-group property and Laplace transforms, F_I(t) and F_R(t) can be written, respectively as, ℒ{F_I(t)/θ(t,0)} =ℒ{ρ(t)ϕ(t)}ℒ{q^+(I,t)/θ(t,0)}, ℒ{F_R(t)/θ(t,0)} =ℒ{ψ(t)}ℒ{q^+(I,t)/θ(t,0)}. The Laplace transform of Eq. 
(<ref>) is ℒ{I(t)-I_0(t)/θ(t,0)}=ℒ{ϕ(t)}ℒ{q^+(I,t)/θ(t,0)}, from which we can write F_I(t) as, F_I(t)=θ(t,0)∫_0^t K_I(t-t')I(t')-I_0(t)/θ(t',0)dt' where the infectivity kernel, K_I is defined as: K_I(t)=ℒ^-1{ℒ{ρ(t)ϕ(t)}/ℒ{ϕ(t)}}. Similarly, we can write F_R(t) as, F_R(t)=θ(t,0)∫_0^t K_R(t-t')I(t')-I_0(t)/θ(t',0)dt' where the recovery kernel is: K_R(t)=ℒ^-1{ℒ{ψ(t)}/ℒ{ϕ(t)}}. Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) the governing equation for the infective compartment becomes, dI(t)/dt=ω(t)S(t)θ(t,0)∫_0^t K_I(t-t')I(t')/θ(t',0)dt'-θ(t,0)∫_0^t K_R(t-t')I(t')/θ(t',0)dt'-γ(t)I(t). By considering the balance of flux between the compartments and the vital dynamics, we can write the governing equations for the susceptible, S(t), and recovery, R(t) compartments. Hence, the governing equation for the susceptible compartment is, dS(t)/dt=λ(t)- ω(t)S(t)θ(t,0)∫_0^t K_I(t-t')I(t')/θ(t',0)dt' -γ(t)S(t), and the governing equation for the recovery compartment is, dR(t)/dt=θ(t,0)∫_0^t K_R(t-t')I(t')/θ(t',0)dt'-γ(t)R(t). This gives us the full set of master equations for the time evolution of the epidemic across the Susceptible, Infective and Recovered populations. § ACKNOWLEDGEMENTS This research was funded by Australian Research Council grant number DP200100345.
http://arxiv.org/abs/2406.19060v1
20240627102056
Semi-definite optimization of the measured relative entropies of quantum states and channels
[ "Zixin Huang", "Mark M. Wilde" ]
quant-ph
[ "quant-ph", "cs.IT", "math-ph", "math.IT", "math.MP", "math.OC" ]
Semi-definite optimization of the measured relative entropies of quantum states and channels Zixin HuangSchool of Mathematical and Physical Sciences, Macquarie University, NSW 2109, Australia Centre for Quantum Software and Information, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia Mark M. WildeSchool of Electrical and Computer Engineering, Cornell University, Ithaca, New York 14850, USA July 1, 2024 =============================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The measured relative entropies of quantum states and channels find operational significance in quantum information theory as achievable error rates in hypothesis testing tasks. They are of interest in the near term, as they correspond to hybrid quantum–classical strategies with technological requirements far less challenging to implement than required by the most general strategies allowed by quantum mechanics. In this paper, we prove that these measured relative entropies can be calculated efficiently by means of semi-definite programming, by making use of variational formulas for the measured relative entropies of states and semi-definite representations of the weighted geometric mean and the operator connection of the logarithm. Not only do the semi-definite programs output the optimal values of the measured relative entropies of states and channels, but they also provide numerical characterizations of optimal strategies for achieving them, which is of significant practical interest for designing hypothesis testing protocols. § INTRODUCTION §.§ Background The relative entropy <cit.> and its generalization to Rényi relative entropy <cit.> are important distinguishability measures in information theory, finding direct operational meaning in hypothesis testing tasks <cit.> while being used to construct other entropic measures like mutual information and conditional entropy <cit.> . There are a number of quantum generalizations of these quantities <cit.>, finding operational meaning in quantum hypothesis testing tasks <cit.> while being used to construct other entropic measures like quantum mutual information and conditional entropy (see, e.g., <cit.>). In these quantum hypothesis testing tasks, one assumes that there are many copies of the states available and furthermore that it is possible to perform a collective measurement on them. The technological capabilities required to perform such a collective measurement appear to be quite challenging, and it seems one might generally need a quantum computer to do so <cit.>. The measured relative entropy <cit.> is a distinguishability measure that relaxes the requirements of quantum hypothesis testing signficantly, and it has been generalized to the Rényi family as well <cit.>. Indeed, the idea behind these measures is to evaluate classical distinguishability measures on the distributions that result from performing a measurement on a single copy of the state and then optimize them over all possible measurements. See Definition <ref> and Definition <ref> for precise definitions of the standard and Rényi measured relative entropies, respectively. 
In such a way, these quantities lead to technologically feasible strategies for quantum hypothesis testing, which consist of a hybrid approach involving quantum measurement and classical post-processing. Indeed, even though there are gaps between the fundamental error rates of quantum hypothesis testing under general, collective measurements and those that result from these hybrid quantum–classical strategies, the latter strategies are more feasible in the near term. Beyond distinguishing states, one can also distinguish quantum channels from one another, a task known as quantum channel discrimination, which has been studied extensively in quantum information <cit.> . The most general strategy allowed by quantum mechanics in such a scenario is rather complex (see <cit.>), and the technological requirements for realizing such a general strategy are even more challenging than those needed to perform a collective measurement (i.e., one would need more complex quantum computations to realize such strategies). As such, one can also consider relaxing the technological requirements for channel discrimination by considering measured relative entropies of channels, as a special case of the generalized channel divergences defined in <cit.>. See Definition <ref> and Definition <ref> for precise definitions of the standard and Rényi measured relative entropies of channels, respectively, which also include energy constraints on the channel input state. Although the diamond distance <cit.> and its energy-constrained counterpart <cit.> are in widespread use as measures of channel distinguishability (see, e.g., <cit.>), the related general notion of measured relative entropy of channels has only been explicitly defined more recently <cit.>, therein related to an operational task called sequential channel discrimination. Here we also explicitly define the measured Rényi relative entropy of channels, as a special case of the general concept from <cit.> and <cit.>. §.§ Summary of results In this paper, we prove that the measured relative entropies of quantum states and channels can be computed by means of semi-definite optimization algorithms (also known as semi-definite programs). These algorithms have a runtime that scales efficiently with the dimension of the states and the input and output dimensions of the channels, by employing known techniques for solving semi-definite programs <cit.>. Furthermore, an added benefit of these algorithms is that, not only does one obtain the optimal values of the measured relative entropies, but one also obtains numerically an optimal measurement for the measured relative entropies of states and an optimal input state and measurement for the measured relative entropies of channels. This latter capability is of significant value for applications, in which one wishes to construct a hybrid quantum-classical strategy for achieving the error rates of hypothesis testing achievable by the measured relative entropies. Our claims build upon two papers, which, coincidentally, were initially released on the quant-ph arXiv within two days of each other <cit.>. Another edifice for our claims is <cit.>. 
In more detail, the paper <cit.> established variational formulas for the measured relative entropy and measured Rényi relative entropy, while the paper <cit.> proved that the hypograph and epigraph of the weighted geometric mean have efficient semi-definite representations (here, see also <cit.>), and the paper <cit.> proved that the hypograph of the operator connection of the logarithm has an efficient semi-definite representation. Here, we essentially combine these findings to arrive at our claims. Indeed, for quantum states, our main contributions are to establish reductions of the variational formulas of <cit.> to semi-definite optimization problems involving linear objective functions and the aforementioned hypographs or epigraphs (see Propositions <ref> and <ref>). This finding is admittedly a rather direct combination of the contributions of <cit.>. However, it is ultimately useful in establishing our next contribution, which is an extension of these findings to measured relative entropies of channels. To establish these latter results, we use basic properties of weighted geometric means and the operator connection of the logarithm (see Propositions <ref> and <ref>). One benefit of our findings is that they lead to semi-definite programs involving linear matrix inequalities each of size 2d×2d when the states are d× d matrices and of size 2d_Ad_B×2d_Ad_B when the channels have input dimension d_A and output dimension d_B (see Propositions <ref>, <ref>, <ref>, and <ref> for precise statements). As such, they do not suffer from the quadratic increase in size that occurs when applying the approach from <cit.> to the Petz–Rényi and standard quantum relative entropies (however, note that there has been progress on addressing this issue more recently <cit.> ). Furthermore, it is unclear how to apply the approach from <cit.> for computing the dynamical (channel) version of these quantities. However, one of our main contributions is semi-definite programs for the measured relative entropies of channels, and the transition from our claims for states to our claims for channels is smooth, with the proofs consisting of just a few lines (see (<ref>)–(<ref>) and (<ref>)–(<ref>) for these steps). §.§ Organization of the paper The rest of our paper is organized as follows. Section <ref> establishes notation and reviews background material, including the weighted geometric mean and its properties, its hypograph and epigraph, and operator connections and their properties (especially for the logarithm). The remaining Sections <ref>, <ref>, <ref>, and <ref> provide essential definitions and detail our main results for measured Rényi relative entropy of states, measured relative entropy of states, measured Rényi relative entropy of channels, and measured relative entropy of channels, respectively. We conclude in Section <ref> with a brief summary and some directions for future research. § NOTATION AND PRELIMINARIES For a Hilbert space ℋ, we employ the following notation: c]cl 𝕃(ℋ) set of linear operators acting on ℋ ℍ(ℋ) set of Hermitian operators acting on ℋ ℙ(ℋ) set of positive semi-definite operators acting on ℋ ℙ_>0(ℋ) set of positive definite operators acting on ℋ 𝔻(ℋ) set of density operators acting on ℋ Note that 𝔻(ℋ){ρ∈ℙ (ℋ):Tr[ρ]=1}. A quantum channel is a completely positive and trace-preserving map that takes 𝕃(ℋ) to 𝕃(𝒦), where 𝒦 is another Hilbert space. We often denote a quantum channel by 𝒩 _A→ B, which indicates that the input space is 𝕃 (ℋ_A) and the output space is 𝕃(ℋ_B). 
See <cit.> for further background on quantum information theory. §.§ Weighted geometric mean and its properties Given positive definite operators X,Y∈ℙ_>0(ℋ), the weighted (operator) geometric mean X#_tY of weight t∈ℝ is defined as <cit.> X#_tY X^1/2( X^-1/2YX^-1/2) ^tX^1/2. It is alternatively denoted by G_t(X,Y) X#_tY, and we adopt this notation in what follows. The following identity holds for all t∈ℝ (see, e.g., <cit.>): G_t(X,Y)=G_1-t(Y,X), and so does the following identity for all s,t∈ℝ: G_s(X,G_t(X,Y))=G_st(X,Y). The function x↦ x^t is operator concave and operator monotone for t∈[ 0,1], operator antimonotone and operator convex for t∈[ -1,0], and operator convex for t∈[ 1,2] (see, e.g., <cit.>). The function ( X,Y) ↦ G_t(X,Y) is operator concave for t∈[ 0,1] and operator convex for t∈[ -1,0] ∪[ 1,2]. For t∈[ -1,1], this statement is a consequence of <cit.> and, for t∈[ 1,2], it is a consequence of (<ref>) and <cit.>, as well as the aforementioned operator monotonicity properties of x↦ x^t. Concavity and convexity of the function ( X,Y) ↦ G_t(X,Y) is also known as joint concavity and joint convexity of the weighted geometric mean. A useful property of the weighted geometric mean for t∈[ -1,2] is the transformer inequality <cit.>. For a linear operator K∈𝕃(ℋ), the following inequality holds for all t∈[ 0,1]: KG_t(X,Y)K^†≤ G_t(KXK^†,KYK^†), and the opposite inequality holds for all t∈[ -1,0] ∪[ 1,2]: KG_t(X,Y)K^†≥ G_t(KXK^†,KYK^† ). These inequalities are saturated when K is invertible; i.e., for all t∈[ -1,2] and invertible K, the following holds: KG_t(X,Y)K^†=G_t(KXK^†,KYK^†). The inequalities in (<ref>)–(<ref>) were proven for all t∈[ -1,1] in <cit.>, and the extension to t∈[ 1,2] follows from (<ref>) and <cit.>. See also <cit.>. §.§ Hypograph and epigraph of the weighted geometric mean For t∈[ 0,1], the operator hypograph of G_t is given by <cit.> hyp_t{( X,Y,T) ∈ℙ _>0(ℋ)×ℙ_>0(ℋ)×ℍ (ℋ):G_t(X,Y)≥ T} , and for t∈[ -1,0] ∪[ 1,2], the operator epigraph of G_t is given by <cit.> epi_t{( X,Y,T) ∈ℙ _>0(ℋ)×ℙ_>0(ℋ)×ℍ (ℋ):G_t(X,Y)≤ T} . These sets are convex due to the aforementioned concavity and convexity properties of G_t. As a consequence of <cit.>, for all rational t∈[ 0,1], the set hyp_t is semi-definite representable (see also <cit.>), and for all rational t∈[ -1,0] ∪[ 1,2], the set epi_t is semi-definite representable. This means that these sets can be represented in terms of a finite number of linear matrix inequalities <cit.> and implies that one can use the methods of semi-definite programming to optimize over elements of these sets. This fact was put to use in <cit.> for quantum information-theoretic applications, and we make use of it here as well. §.§ Operator connections Generalizing the notion of an operator geometric mean, an operator connection is defined in terms of an operator monotone function f as <cit.> P_f(X,Y) X^1/2f( X^-1/2YX^-1/2) X^1/2, where X,Y∈ℙ_>0(ℋ). This is also known as a non-commutative perspective function <cit.>. Due to <cit.>, the function ( X,Y) ↦ P_f(X,Y) is operator concave (i.e., jointly concave), and the transformer inequality holds for every linear operator K∈𝕃(ℋ): KP_f(X,Y)K^†≤ P_f(KXK^†,KYK^† ). Equality holds in (<ref>) if K is invertible; i.e., for invertible K∈𝕃(ℋ), the following equality holds: KP_f(X,Y)K^†=P_f(KXK^†,KYK^† ). The operator hypograph of P_f is given by hyp_f{( X,Y,T) ∈ℙ _>0(ℋ)×ℙ_>0(ℋ)×ℍ (ℋ):P_f(X,Y)≥ T} , and it is a convex set due to the aforementioned joint concavity of P_f(X,Y). 
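As a quick numerical check of the facts recalled in this section, the following Python sketch (helper names are ours) implements the weighted geometric mean and the operator connection, and verifies the swap identity, the composition identity, and the transformer equality for an invertible K; the connection for f = ln, which is singled out next, is included as well.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow, logm

def G(t, X, Y):
    """Weighted geometric mean X #_t Y = X^{1/2} (X^{-1/2} Y X^{-1/2})^t X^{1/2}."""
    Xh, Xmh = mpow(X, 0.5), mpow(X, -0.5)
    return Xh @ mpow(Xmh @ Y @ Xmh, t) @ Xh

def P(f, X, Y):
    """Operator connection / non-commutative perspective P_f(X, Y)."""
    Xh, Xmh = mpow(X, 0.5), mpow(X, -0.5)
    return Xh @ f(Xmh @ Y @ Xmh) @ Xh

rng = np.random.default_rng(0)
d, t = 3, 0.3
A = rng.standard_normal((d, d)); X = A @ A.T + np.eye(d)        # random positive definite X
B = rng.standard_normal((d, d)); Y = B @ B.T + np.eye(d)        # random positive definite Y
K = rng.standard_normal((d, d))                                  # invertible with probability 1

assert np.allclose(G(t, X, Y), G(1 - t, Y, X))                               # G_t(X,Y) = G_{1-t}(Y,X)
assert np.allclose(G(0.5, X, G(t, X, Y)), G(0.5 * t, X, Y))                  # G_s(X, G_t(X,Y)) = G_{st}(X,Y)
assert np.allclose(K @ G(t, X, Y) @ K.T, G(t, K @ X @ K.T, K @ Y @ K.T))     # transformer equality
assert np.allclose(K @ P(logm, X, Y) @ K.T, P(logm, K @ X @ K.T, K @ Y @ K.T))
```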
The logarithm is the main example of an operator monotone function on which we focus, other than the power functions from Section <ref> , due to its connection with relative entropy. Furthermore, there is an efficient semi-definite approximation of the hypograph of the connection of the logarithm (i.e., hyp_ln) <cit.>, which leads to semi-definite optimization algorithms for calculating measured relative entropies of states and channels. This fact was put to use in <cit.> for quantum information-theoretic applications, and we make use of it here as well. To be clear, we use the following notation later on: P_ln(X,Y) X^1/2ln( X^-1/2YX^-1/2) X^1/2. This is also related to the operator relative entropy <cit.> , for which one finds the following notation in the literature <cit.>: D_op(X‖ Y) -P_ln(X,Y) =X^1/2ln( X^1/2Y^-1X^1/2) X^1/2. § MEASURED RÉNYI RELATIVE ENTROPY OF STATES §.§ Definition and basic properties Given a probability distribution p≡( p(x)) _x∈𝒳 and a non-negative function q≡( q(x)) _x∈𝒳, the Rényi relative entropy is defined for α∈( 0,1) ∪(1,∞) as <cit.> D_α(p‖ q)1/α-1ln∑_x∈𝒳 p(x)^αq(x)^1-α, when α∈( 0,1) or when α>1 and supp(p)⊆supp(q). Otherwise, when α>1 and supp(p)⊈supp(q), it is set to +∞. The Rényi relative entropy satisfies the data-processing inequality for all α∈( 0,1) ∪ (1,∞), which means that D_α(p‖ q)≥ D_α(N(p)‖ N(q)), where N is a classical channel (i.e., a conditional probability distribution with elements ( N(y|x)) _y∈𝒴,x∈𝒳) and the notation N(p) is a shorthand for the distribution that results from processing p with N: N(p)≡( ∑_x∈𝒳N(y|x)p(x)) _y∈𝒴, with a similar meaning for N(q). The Rényi relative entropy is also monotone in the parameter α; i.e., for β>α>0, the following inequality holds for all p and q: D_α(p‖ q)≤ D_β(p‖ q). See <cit.> for a review of the classical Rényi relative entropy in (<ref>). Given a quantum state ρ and a positive semi-definite operator σ, the measured Rényi relative entropy is defined by optimizing the Rényi relative entropy over all possible measurements <cit.>: D_α^M(ρ‖σ)sup_𝒳,( Λ _x) _x∈𝒳1/α-1ln∑_x∈𝒳 Tr[Λ_xρ]^αTr[Λ _xσ]^1-α, where the supremum is over every finite alphabet 𝒳 and every positive operator-valued measure (POVM) ( Λ_x) _x∈𝒳 (i.e., satisfying Λ_x≥0 for all x∈𝒳 and ∑_x∈𝒳Λ_x=I). For α>1, the measured Rényi relative entropy is finite if and only if supp(ρ)⊆supp(σ). If the support condition holds, then it follows that the support of ( Tr[Λ_xρ]) _x∈𝒳 is contained in the support of ( Tr[Λ_xσ]) _x∈𝒳, which in turn implies that D_α^M(ρ‖σ)<+∞. If the support condition does not hold, then D_α^M(ρ‖σ)=+∞, by applying the argument in <cit.>. We now recall some basic properties of the measured Rényi relative entropies, the first of which is actually a consequence of <cit.> and the second observed in <cit.>. It suffices to optimize D_α^M(ρ‖σ) over rank-one POVMs; i.e., D_α^M(ρ‖σ)=sup_𝒳,( φ_x) _x∈𝒳1/α-1ln∑_x∈𝒳 Tr[φ_xρ]^αTr[φ_x σ]^1-α, where each φ_x is a rank-one operator such that ∑_x∈𝒳φ_x=I. This is a direct consequence of the data-processing inequality in (<ref>). Indeed, by diagonalizing Λ_x as Λ_x=∑_z∈𝒵ϕ_x,z, where each ϕ_x,z is rank one, consider that every POVM ( Λ_x) _x∈𝒳 can be understood as a coarse graining of the POVM ( ϕ_x,z) _x∈𝒳,z∈𝒵 because Tr[Λ_xρ]=∑_z∈𝒵Tr [ϕ_x,zρ]. 
By defining p_X,Z(x,z)Tr[ϕ_x,zρ] and q_X,Z(x,z)Tr[ϕ_x,zσ] and noting that one obtains p_X(x)=Tr[Λ_xρ] and q_X (x)=Tr[Λ_xσ] by marginalization (a particular kind of classical channel), the data-processing inequality in (<ref>) implies that D_α(p_X,Z‖ q_X,Z)≥ D_α(p_X‖ q_X), concluding the proof. The measured Rényi relative entropy obeys the data-processing inequality; i.e., for every state ρ, positive semi-definite operator σ, quantum channel 𝒩, and α∈( 0,1) ∪( 1,∞), the following inequality holds: D_α^M(ρ‖σ)≥ D_α^M(𝒩(ρ )‖𝒩(σ)). Observe that 1/α-1ln∑_x∈𝒳Tr[Λ _x𝒩(ρ)]^αTr[Λ_x𝒩 (σ)]^1-α =1/α-1ln∑_x∈𝒳Tr [𝒩^†(Λ_x)ρ]^αTr[𝒩 ^†(Λ_x)σ]^1-α ≤ D_α^M(ρ‖σ). In the above, we made use of the Hilbert–Schmidt adjoint 𝒩^†, which is completely positive and unital, implying that ( 𝒩^†(Λ_x)) _x∈𝒳 is a POVM. The inequality follows from the fact that D_α^M(ρ‖σ) involves an optimization over every alphabet 𝒳 and POVM. Since the inequality holds for every POVM ( Λ_x) _x∈𝒳, we conclude (<ref>). (Here we can also observe that the claim holds more generally for positive, trace-preserving maps.) It can be helpful to write the measured Rényi relative entropy in terms of the measured Rényi relative quasi-entropy: D_α^M(ρ‖σ)=1/α-1ln Q_α^M(ρ‖σ), where the latter is defined as Q_α^M(ρ‖σ){[ inf_𝒳,( Λ_x) _x∈𝒳∑ _x∈𝒳Tr[Λ_xρ]^αTr [Λ_xσ]^1-α for α∈( 0,1); sup_𝒳,( Λ_x) _x∈𝒳∑ _x∈𝒳Tr[Λ_xρ]^αTr [Λ_xσ]^1-α for α>1 ]. . One can also define the projectively measured Rényi relative entropy as D_α^P(ρ‖σ)1/α-1ln Q_α ^P(ρ‖σ), where Q_α^P(ρ‖σ){[ inf_( Π_x) _x∈𝒳∑ _x∈𝒳Tr[Π_xρ]^αTr [Π_xσ]^1-α for α∈( 0,1); sup_( Π_x) _x∈𝒳∑ _x∈𝒳Tr[Π_xρ]^αTr [Π_xσ]^1-α for α>1 ]. , with the key difference being that the optimization is performed over every projective measurement ( Π_x) _x∈𝒳 (i.e., satisfying Π_xΠ_x^'=Π_xδ_x,x^' for all x,x^'∈𝒳 in addition to the requirements of a POVM) and the size of the alphabet 𝒳 is equal to the dimension of the underlying Hilbert space of ρ and σ. It is known from <cit.> that the following equalities hold for all α∈( 0,1) ∪( 1,∞): D_α^M(ρ‖σ)=D_α^P(ρ‖σ), Q_α^M(ρ‖σ)=Q_α^P(ρ‖σ ), which is a non-trivial finding that makes use of operator concavity and convexity properties of the function x↦ x^t. Furthemore, it was noted therein that the measured Rényi relative entropy is achieved by a rank-one, projective measurement. This has practical implications for achieving the measured Rényi relative entropy because projective measurements are simpler to realize experimentally than general POVMs. §.§ Variational formulas for measured Rényi relative entropy of states As a consequence of <cit.>, the measured Rényi relative entropy has the following variational formulas for all α∈( 0,1) ∪( 1,∞): D_α^M(ρ‖σ) =sup_ω>0{1/α -1ln( αTr[ωρ]+( 1-α) Tr[ω^α/α-1σ]) } =sup_ω>0{1/α-1ln( ( Tr[ωρ]) ^α( Tr [ω^α/α-1σ]) ^1-α) } . These are a direct consequence of and equivalent to the precise expressions given in <cit.>, which are as follows: Q_α^M(ρ‖σ) ={[ inf_ω>0{αTr[ωρ]+( 1-α) Tr[ω^α/α-1 σ]} for α∈( 0,1/2); inf_ω>0{αTr[ω^1-1/α ρ]+( 1-α) Tr[ωσ]} for α∈1/2,1); sup_ω>0{αTr[ω^1-1/α ρ]+( 1-α) Tr[ωσ]} for α>1 ]. , Q_α^M(ρ‖σ) ={[ inf_ω>0{( Tr[ωρ]) ^α( Tr[ω^α/α-1 σ]) ^1-α} for α∈( 0,1); sup_ω>0{( Tr[ω^1-1/α ρ]) ^α( Tr[ωσ]) ^1-α} for α>1 ]. . Indeed, one obtains (<ref>) for α≥1/2 from the second and third expressions in (<ref>) by the substitution ω→ω^α/α-1, and similarly for getting (<ref>) from (<ref>) for α>1. 
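As a concrete numerical illustration of the variational expressions above (an addition, not part of the original text), the following sketch minimizes the α ∈ [1/2,1) objective at α = 1/2 by parametrizing ω = e^{H} with H Hermitian and applying a generic smooth optimizer; because the underlying problem is convex, the local optimum found is global, and at α = 1/2 the value can be cross-checked against the root fidelity Tr|√ρ√σ| associated with the Fuchs–Caves observable discussed below.

```python
import numpy as np
from scipy.linalg import expm, sqrtm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
d = 2
A = rng.standard_normal((d, d)); rho = A @ A.T; rho /= np.trace(rho)
B = rng.standard_normal((d, d)); sigma = B @ B.T; sigma /= np.trace(sigma)

alpha = 0.5   # branch alpha in [1/2, 1): Q = inf_w { a Tr[w^{1-1/a} rho] + (1-a) Tr[w sigma] }

def objective(x):
    H = x.reshape(d, d); H = (H + H.T) / 2        # omega = expm(H) ranges over positive definite matrices
    return (alpha * np.trace(expm((1 - 1 / alpha) * H) @ rho)
            + (1 - alpha) * np.trace(expm(H) @ sigma))

res = minimize(objective, np.zeros(d * d), method='BFGS')
Q = res.fun
D_measured = np.log(Q) / (alpha - 1)

# cross-check: at alpha = 1/2 the optimal value is the root fidelity Tr|sqrt(rho) sqrt(sigma)|
root_fid = np.sum(np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False))
assert abs(Q - root_fid) < 1e-4
```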
We note in passing that these variational formulas have found application in devising variational quantum algorithms for estimating the measured Rényi relative entropy <cit.>. Although the expressions in (<ref>)–(<ref>) are simpler than those in (<ref>)–(<ref>), the various expressions in (<ref>) are helpful for seeing that the optimizations can be performed efficiently (see Proposition <ref> for further details). To see this, let us consider the expressions in (<ref>) one at a time. For α∈( 0,1/2), the function ω↦ω^α/α-1 is operator convex because α/α-1∈( -1,0). As such, the objective function αTr[ωρ]+( 1-α) Tr[ω^α/α-1σ] is convex in ω. For α∈1/2,1), the function ω↦ω^1-1/α is operator convex because 1-1/α ∈-1,0). Then the objective function αTr [ω^1-1/αρ]+( 1-α) Tr [ωσ] is convex in ω. Finally, for α>1, the function ω↦ω^1-1/α is operator concave because 1-1/α∈( 0,1). Then the objective function αTr[ω^1-1/αρ]+( 1-α) Tr[ωσ] is concave in ω. An operator ω that achieves the optimal values in (<ref>)–(<ref>) corresponds to an observable whose eigenvectors form an optimal measurement for achieving the measured Rényi relative entropy. This point becomes clear by inspecting the proof of Proposition <ref> below. As such, being able to calculate such an observable numerically is valuable from an operational perspective, and we note here that this task is accomplished by the semi-definite optimization algorithm mentioned in Proposition <ref>. It has been known for some time that the optimal observable for α=1/2 has an analytical form <cit.>, given by G_1/2(σ^-1,ρ) and known as the Fuchs–Caves observable (see also <cit.>). In Appendix <ref>, we provide an alternative proof of (<ref>), which makes use of the inequality of arithmetic and geometric means, as well as Bernoulli's inequality. We think this proof is of interest due to its simplicity. Let us note that the expressions in (<ref>) were actually shown in the proof of <cit.> to follow from (<ref>) by means of these inequalities. §.§ Optimizing the measured Rényi relative entropy of states One of the main goals of <cit.> was to derive variational formulas for the measured Rényi relative entropy and explore applications of them in quantum information. In this section, we observe in Proposition <ref> below that there is an alternative variational representation of the measured Rényi relative entropy in terms of a linear objective function and the hypograph or epigraph of the weighted geometric mean. From this observation, we conclude that there is an efficient semi-definite optimization algorithm for computing the measured Rényi relative entropy, which makes use of the fact recalled in Section <ref> (i.e., from <cit.>). As mentioned previously, not only does this algorithm compute the optimal value of Q_α^M(ρ‖σ) for all α∈( 0,1) ∪( 1,∞), but it also determines an optimal observable ω. Another advantage of the variational representations in Proposition <ref> is that they lead to a rather rapid derivation of variational representations of the measured Rényi relative entropy of channels (see Proposition <ref>). Let ρ be a state and σ a positive semi-definite operator. For α∈( 0,1/2), Q_α^M(ρ‖σ)=inf_ω,θ>0{αTr[ωρ]+( 1-α) Tr [θσ]:θ≥ G_α/α-1(I,ω)} , for α∈1/2,1), Q_α^M(ρ‖σ)=inf_ω,θ>0{αTr[θρ]+( 1-α) Tr [ωσ]:θ≥ G_1-1/α(I,ω)} , and for α>1, Q_α^M(ρ‖σ)=sup_ω,θ>0{αTr[θρ]+( 1-α) Tr [ωσ]:θ≤ G_1-1/α(I,ω)} . For all rational α∈( 0,1) ∪( 1,∞), the quantity Q_α^M(ρ‖σ) can be calculated by means of a semi-definite program. 
More specifically, when ρ and σ are d× d matrices and p and q are relatively prime integers such that p/q=α/α-1 for α∈( 0,1/2) or p/q=1-1/α for α∈[ 1/2,1) ∪(1,∞), the semi-definite program requires O(log_2q) linear matrix inequalities each of size 2d×2d. These formulas are a direct consequence of (<ref>) and the following identities: ω^α/α-1=G_α/α-1(I,ω ), ω^1-1/α=G_1-1/α(I,ω ), while noting that the optimal value of θ in (<ref>) is equal to G_α/α -1(I,ω) and the optimal value of θ in (<ref>) and (<ref>) is equal to G_1-1/α(I,ω). As such, we have rewritten Q_α^M(ρ‖σ) for α∈( 0,1/2) in terms of the hypograph of G_α/α-1, for α∈1/2,1) in terms of the hypograph of G_1-1/α, and for α>1 in terms of the epigraph of G_1-1/α. By appealing to <cit.>, it follows that all of these quantities can be efficiently calculated for rational α by means of semi-definite programming, with complexity as stated above. § MEASURED RELATIVE ENTROPY OF STATES §.§ Definition and basic properties Given a probability distribution p≡( p(x)) _x∈𝒳 and a non-negative function q≡( q(x)) _x∈𝒳, the relative entropy is defined as D(p‖ q)∑_x∈𝒳p(x)ln( p(x)/q(x)) , when supp(p)⊆supp(q) and it is set to +∞ otherwise. It is equal to the α→1 limit of the Rényi relative entropy in (<ref>): D(p‖ q)=lim_α→1D_α(p‖ q). By virtue of the ordering property in (<ref>), we can write D(p‖ q)=sup_α∈( 0,1) D_α(p‖ q)=inf_α>1D_α(p‖ q). As a direct consequence of (<ref>) and (<ref>), the relative entropy satisfies the data-processing inequality, which means that D(p‖ q)≥ D(N(p)‖ N(q)), where we used the same notation from (<ref>). Given a quantum state ρ and a positive semi-definite operator σ, the measured relative entropy is defined by optimizing the relative entropy over all possible measurements <cit.>: D^M(ρ‖σ)sup_𝒳,( Λ_x) _x∈𝒳∑_x∈𝒳Tr[Λ_xρ ]ln( Tr[Λ_xρ]/Tr [Λ_xσ]) , where the supremum is over every finite alphabet 𝒳 and every positive operator-valued measure (POVM) ( Λ_x) _x∈𝒳 (i.e., satisfying Λ_x≥0 for all x∈𝒳 and ∑_x∈𝒳Λ_x=I). For ρ a state and σ a positive semi-definite operator, the measured relative entropy is equal to the α→1 limit of the measured Rényi relative entropy: D^M(ρ‖σ)=lim_α→1D_α^M(ρ‖σ). See Appendix <ref>. Let us note that the convergence statement above can be made more precise by invoking (<ref>) and <cit.> (see also <cit.>). Specifically, when supp(ρ)⊆supp(σ), there exists a state-dependent constant c(ρ‖σ) such that, for all δ∈(0,ln3/2c(ρ‖σ)], the following bound holds: D_1-δ^M(ρ‖σ)≤ D^M(ρ‖σ)≤ D_1-δ^M(ρ‖σ)+δ K[ c(ρ‖σ)] ^2, where Kcosh(( ln3) /2). We invoke this bound later on in Remark <ref>. The following statements can be proven similarly to Propositions <ref> and <ref>, or alternatively, they can be understood as the α→1 limit of these propositions. It suffices to optimize D^M(ρ‖σ) over rank-one POVMs; i.e., D^M(ρ‖σ)=sup_𝒳,( φ_x) _x∈𝒳∑_x∈𝒳Tr[φ_xρ ]ln( Tr[φ_xρ]/Tr [φ_xσ]) , where each φ_x is a rank-one operator such that ∑_x∈𝒳φ_x=I. The measured relative entropy obeys the data-processing inequality; i.e., for every state ρ, positive semi-definite operator σ, and quantum channel 𝒩, the following inequality holds: D^M(ρ‖σ)≥ D^M(𝒩(ρ)‖𝒩(σ)). §.§ Variational formulas for the measured relative entropy of states For a state ρ and a positive semi-definite operator σ, the following variational formulas for the measured relative entropy are known: D^M(ρ‖σ) =sup_ω>0{Tr[( lnω) ρ]-lnTr[ωσ]} =sup_ω>0{Tr[( lnω) ρ]-Tr[ωσ]+1} . The first was established in <cit.> and <cit.>, while the second was established in <cit.>. 
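Before moving on, here is a minimal concrete instance (an added cvxpy illustration) of the semi-definite route of the preceding subsection for the special case α = 1/2, where the epigraph constraint θ ≥ G_{-1}(I,ω) = ω^{-1} is exactly one Schur-complement block; general rational α would instead chain the O(log_2 q) blocks mentioned above, following the cited semi-definite representations.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
d = 3
A = rng.standard_normal((d, d)); rho = A @ A.T; rho /= np.trace(rho)
B = rng.standard_normal((d, d)); sigma = B @ B.T; sigma /= np.trace(sigma)

alpha = 0.5
theta = cp.Variable((d, d), symmetric=True)
omega = cp.Variable((d, d), symmetric=True)
I = np.eye(d)

# theta >= G_{-1}(I, omega) = omega^{-1}  <=>  [[theta, I], [I, omega]] is PSD (with omega > 0)
constraints = [cp.bmat([[theta, I], [I, omega]]) >> 0]
objective = cp.Minimize(alpha * cp.trace(theta @ rho) + (1 - alpha) * cp.trace(omega @ sigma))
prob = cp.Problem(objective, constraints)
prob.solve()

Q_half = prob.value
D_half = np.log(Q_half) / (alpha - 1)

# cross-check: the alpha = 1/2 value is the root fidelity Tr|sqrt(rho) sqrt(sigma)|
root_fid = np.sum(np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False))
print(Q_half, root_fid)   # agree to solver precision
# the eigenvectors of omega.value give (numerically) an optimal projective measurement
```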
Due to the fact that the function ω↦lnω is operator concave, it follows that the function ω↦Tr[( lnω) ρ]-Tr[ωσ]+1 is concave, which is a notable feature of the variational representation in (<ref>) that has further implications discussed in the next section. §.§ Optimizing the measured relative entropy of states In this section, we observe that there is an efficient algorithm for computing the measured relative entropy, which makes use of the observation in (<ref>) and the fact recalled in Section <ref>. Let ρ be a state and σ a positive semi-definite operator. Then D^M(ρ‖σ)=sup_ω,θ>0{Tr [θρ]-Tr[ωσ]+1:θ≤ P_ln (I,ω)} , where P_ln is defined in (<ref>). Furthermore, when ρ and σ are d× d matrices, the quantity D^M(ρ‖σ) can be efficiently calculated by means of a semi-definite program up to an additive error ε, by means of O(√(ln(1/ε))) linear matrix inequalities, each of size 2d×2d. The formula above is a direct consequence of (<ref>) and the following identity: lnω=P_ln(I,ω), while noting that the optimal value of θ in (<ref>) is equal to P_ln(I,ω). As such, we have rewritten D^M(ρ‖σ) in terms of the hypograph of P_ln(I,ω). By appealing to <cit.>, it follows that D^M(ρ‖σ) can be efficiently calculated by means of a semi-definite program with the stated complexity. An alternative approach for computing the measured relative entropy is to set α=1-2^-ℓ for ℓ∈ℕ, similar to what was done in <cit.>. By appealing to Proposition <ref> and (<ref>), it follows that, in order to achieve an error ε in computing the measured relative entropy, the semi-definite program resulting from this approach requires O(ln (1/ε)) linear matrix inequalities, each of size 2d×2d. When compared to the performance of the approach from Proposition <ref>, it is clear that this latter approach is preferred because it requires only O(√(ln(1/ε))) linear matrix inequalities to achieve the same error. § MEASURED RÉNYI RELATIVE ENTROPY OF CHANNELS §.§ Definition and basic properties Given a quantum channel 𝒩_A→ B, a completely positive map ℳ_A→ B, a Hamiltonian H_A (Hermitian operator acting on system A), and an energy constraint E∈ℝ, the energy-constrained measured Rényi relative entropy of channels is defined for all α∈( 0,1) ∪( 1,∞) as D_α,H,E^M(𝒩‖ℳ ) sup_d_R^'∈ℕ, ρ_R^'A∈𝔻(ℋ_R^'A){ D_α ^M(𝒩_A→ B(ρ_R^'A)‖ℳ _A→ B(ρ_R^'A)):Tr[H_Aρ_A]≤ E} . In what follows, for brevity, we also refer to the quantity in (<ref>) as the measured Rényi channel divergence. In (<ref>) above, the supremum is taken not only over every bipartite state ρ_R^'A but also over the reference system R^' with dimension d_R^'. We define the measured Rényi channel divergence in this general way in order to allow for all physically feasible ways of processing a channel; indeed, one prepares a state ρ_R^'A, sends it through either 𝒩_A→ B or ℳ_A→ B, and processes the systems R^'B with a measurement in order to distinguish the maps 𝒩_A→ B or ℳ_A→ B. We impose an energy constraint only on the input system A, because this is the simplest and most minimal modification of an unconstrained channel divergence and it allows for all physically plausible, yet unconstrained, reference systems. Imposing the energy constraint in this way furthermore has the benefit of leading to an efficient algorithm for computing D_α,H,E^M by semi-definite programming (see Proposition <ref>). 
Let us note that imposing an energy constraint only on the input system A is similar to the approach taken when defining the Shirokov–Winter energy-constrained diamond norm <cit.> or more general energy-constrained channel divergences <cit.>. One obtains the unconstrained measured Rényi relative entropy of channels by setting H_A=I_A and E=1, so that the “energy constraint” becomes redundant with the constraint that ρ_R^'A is a state. By appealing to (<ref>), we can also write D_α,H,E^M(𝒩‖ℳ)=1/α-1ln Q_α,H,E^M(𝒩‖ℳ), where Q_α,H,E^M(𝒩‖ℳ) {[ inf_d_R^'∈ℕ, ρ_R^'A∈𝔻(ℋ_R^'A), Tr[H_A ρ_A]≤ EQ_α^M(𝒩_A→ B(ρ_R^'A)‖ℳ_A→ B(ρ_R^'A)) for α∈( 0,1); sup_d_R^'∈ℕ, ρ_R^'A∈𝔻(ℋ_R^'A), Tr[H_A ρ_A]≤ EQ_α^M(𝒩_A→ B(ρ_R^'A)‖ℳ_A→ B(ρ_R^'A)) for α>1 ]. . Although the optimization in (<ref>) is defined to be over an unbounded space, it is possible to simplify the task by employing basic quantum information-theoretic reasoning. Indeed, as stated in Proposition <ref> below, one can write D_α,H,E^M(𝒩‖ℳ) and Q_α,H,E ^M(𝒩‖ℳ) in terms of the Choi operators of 𝒩_A→ B and ℳ_A→ B, defined as Γ_RB^𝒩𝒩_A→ B(Γ _RA), Γ_RB^ℳℳ_A→ B(Γ_RA), where the maximally entangled operator Γ_RA is defined as Γ_RA∑_i,j|i⟩⟨ j|_R⊗|i⟩⟨ j|_A, so that the reference system R is isomorphic to the channel input system A (i.e., the corresponding Hilbert spaces ℋ_R and ℋ _A are isomorphic, denoted by ℋ_R≃ℋ_A): Given a quantum channel 𝒩 _A→ B, a completely positive map ℳ_A→ B, a Hamiltonian H_A, and an energy constraint E∈ℝ, the measured Rényi channel divergence can be written as follows for all α∈( 0,1) ∪( 1,∞): D_α,H,E^M(𝒩‖ℳ)=sup_ρ_R ∈𝔻(ℋ_R), Tr[H_Aρ_A]≤ E{ D_α^M(ρ_R^1/2Γ_RB^𝒩ρ _R^1/2‖ρ_R^1/2Γ_RB^ℳρ_R^1/2)} , where ℋ_R≃ℋ_A and ρ_R=ρ_A. Equivalently, Q_α,H,E^M(𝒩‖ℳ)= {[ inf_ρ_R∈𝔻(ℋ_R) , Tr[H_Aρ_A]≤ EQ_α^M(ρ_R^1/2 Γ_RB^𝒩ρ_R^1/2‖ρ_R^1/2Γ _RB^ℳρ_R^1/2) for α∈( 0,1); sup_ρ_R∈𝔻(ℋ_R) , Tr[H_Aρ_A]≤ EQ_α^M(ρ_R^1/2 Γ_RB^𝒩ρ_R^1/2‖ρ_R^1/2Γ _RB^ℳρ_R^1/2) for α>1 ]. See <cit.> or <cit.> for a detailed proof. The only difference with the optimization above and those from  <cit.> and <cit.> is the additional energy constraint Tr[H_Aρ_A]≤ E, which follows because ρ_R^1/2Γ_RAρ_R^1/2, with ρ_R = ρ_A, is the canonical purification of the state ρ_A. §.§ Optimizing the measured Rényi relative entropy of channels In this section, we observe in Proposition <ref> that there is a variational representation of the measured Rényi channel divergence in terms of a linear objective function and the hypograph or epigraph of the weighted geometric mean. From this observation, we conclude that there is an efficient semi-definite optimization algorithm for computing the measured Rényi channel divergence, which makes use of the fact recalled in Section <ref>. Not only does this algorithm compute the optimal value of Q_α,H,E^M(𝒩‖ℳ) for all α∈( 0,1) ∪( 1,∞), but it also determines an optimal input state and measurement that achieves Q_α,H,E ^M(𝒩‖ℳ), which is of significant value for applications. The proof of Proposition <ref> results from a straightforward combination of Proposition <ref>, Proposition <ref>, and the transformer equality in (<ref>). Given is a quantum channel 𝒩_A→ B, a completely positive map ℳ _A→ B, a Hamiltonian H_A, and an energy constraint E∈ℝ. Let Γ^𝒩 and Γ^ℳ be the Choi operators of 𝒩_A→ B and ℳ _A→ B, respectively. 
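To make the objects in this reduction concrete, the following numpy sketch (the channel, Hamiltonian, input state, and helper names are placeholders of ours) builds a Choi operator from Kraus operators, checks the energy constraint on the input, and forms ρ_R^{1/2} Γ^𝒩 ρ_R^{1/2}, which can then be handed to any of the state-level routines above.

```python
import numpy as np
from scipy.linalg import sqrtm

def choi(kraus, dA):
    """Choi operator Gamma_{RB} = sum_{ij} |i><j|_R (x) N(|i><j|_A), with N given by Kraus operators."""
    dB = kraus[0].shape[0]
    G = np.zeros((dA * dB, dA * dB), dtype=complex)
    for i in range(dA):
        for j in range(dA):
            Eij = np.zeros((dA, dA), dtype=complex); Eij[i, j] = 1.0
            G += np.kron(Eij, sum(K @ Eij @ K.conj().T for K in kraus))
    return G

# placeholder example: qubit amplitude-damping channel, H_A = |1><1|, energy bound E
g = 0.3
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]]), np.array([[0, np.sqrt(g)], [0, 0]])]
Gamma_N = choi(kraus, dA=2)

H_A, E = np.diag([0.0, 1.0]), 0.4
rho_A = np.diag([0.7, 0.3])                  # Tr[H_A rho_A] = 0.3 <= E: a feasible input
rho_R = rho_A                                # proposition: rho_R = rho_A
M = np.kron(sqrtm(rho_R), np.eye(2))
state_N = M @ Gamma_N @ M                    # rho_R^{1/2} Gamma^N rho_R^{1/2}
assert np.isclose(np.trace(state_N).real, 1.0)   # a normalized state, since N is trace preserving
```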
For α∈( 0,1/2), Q_α,H,E^M(𝒩‖ℳ)=inf_Ω,Θ,ρ >0{[ αTr[ΩΓ^𝒩]+( 1-α) Tr[ΘΓ^ℳ]:; Tr[ρ]=1, Tr[Hρ]≤ E,; Θ≥ G_α/α-1(ρ⊗ I,Ω) ]} , for α∈1/2,1), Q_α,H,E^M(𝒩‖ℳ)=inf_Ω,Θ,ρ >0{[ αTr[ΘΓ^𝒩]+( 1-α) Tr[ΩΓ^ℳ]:; Tr[ρ]=1, Tr[Hρ]≤ E,; Θ≥ G_1-1/α(ρ⊗ I,Ω) ]} , and for α>1, Q_α,H,E^M(𝒩‖ℳ)=sup_Ω,Θ,ρ >0{[ αTr[ΘΓ^𝒩]+( 1-α) Tr[ΩΓ^ℳ]:; Tr[ρ]=1, Tr[Hρ]≤ E,; Θ≤ G_1-1/α(ρ⊗ I,Ω) ]} . For rational α∈( 0,1) ∪( 1,∞), the quantity Q_α,H,E^M(𝒩‖ℳ) can be calculated by means of a semi-definite program. More specifically, when Γ ^𝒩 and Γ^ℳ are d_Ad_B× d_Ad_B matrices and p and q are relatively prime integers such that p/q=α/α-1 for α∈( 0,1/2) or p/q=1-1/α for α∈[ 1/2,1) ∪(1,∞), the semi-definite program requires O(log_2q) linear matrix inequalities each of size 2d_Ad_B×2d_Ad_B. This follows by combining Proposition <ref> and Proposition <ref> and employing the transformer equality in (<ref>). Let us consider the case α∈( 0,1/2): Q_α,H,E^M(𝒩‖ℳ) =inf_ρ_R∈𝔻(ℋ_R) , Tr[H_Aρ_A]≤ EQ_α^M(ρ_R^1/2 Γ_RB^𝒩ρ_R^1/2‖ρ_R^1/2Γ _RB^ℳρ_R^1/2) =inf_Ω^',Θ^'>0, ρ_R∈𝔻(ℋ_R), Tr[H_Aρ_A]≤ E{[ αTr[Ω^'ρ_R^1/2Γ_RB^𝒩 ρ_R^1/2]+( 1-α) Tr[Θ^'ρ_R^1/2Γ_RB^ℳρ_R^1/2]:; Θ^'≥ G_α/α-1(I,Ω^') ]} =inf_Ω^',Θ^',ρ_R>0, Tr[ρ_R]=1 Tr[H_Aρ_A]≤ E{[ αTr[ρ_R^1/2Ω^'ρ_R^1/2Γ _RB^𝒩]+( 1-α) Tr[ρ_R ^1/2Θ^'ρ_R^1/2Γ_RB^ℳ]:; Θ^'≥ G_α/α-1(I,Ω^') ]} =inf_Ω,Θ,ρ_R>0, Tr[ρ _R]=1 Tr[H_Aρ_A]≤ E{[ αTr[ΩΓ_RB^𝒩]+( 1-α) Tr[ΘΓ_RB^ℳ]:; Θ≥ G_α/α-1(ρ⊗ I,Ω) ]} . The first equality follows from Proposition <ref>. The second equality follows from Proposition <ref>. The third equality follows from cyclicity of trace and the fact that the function ρ_R↦αTr[ρ_R^1/2Ω^'ρ _R^1/2Γ_RB^𝒩]+( 1-α) Tr[ρ_R^1/2Θ^'ρ_R^1/2Γ _RB^ℳ] is continuous in ρ_R, so that the optimization can be performed over the set of positive definite density operators (dense in the set of all density operators). The final equality follows from defining Ωρ_R^1/2Ω^'ρ_R^1/2, Θρ_R^1/2Θ^'ρ_R^1/2 , and noting that Ω^',Θ^'>0 ⇔ Ω,Θ>0, as well as Θ^'≥ G_α/α-1(I,Ω^') ⇔ ( ρ^1/2⊗ I) Θ^'( ρ^1/2⊗ I) ≥( ρ^1/2⊗ I) G_α/α-1(I,Ω^')( ρ^1/2⊗ I) ⇔ Θ≥ G_α/α-1(ρ⊗ I,Ω), with the final equality following from the definitions in (<ref>) and the transformer equality in (<ref>). The proofs of (<ref>) and (<ref>) follow similarly. As such, we have rewritten Q_α,H,E^M(𝒩‖ℳ) for α∈( 0,1/2) in terms of hypograph of G_α/α-1, for α∈1/2,1) in terms of the hypograph of G_1-1/α, and for α>1 in terms of the epigraph of G_1-1/α. By appealing to <cit.>, it follows that all of these quantities can be efficiently calculated for rational α by means of semi-definite programming, with the stated complexity. § MEASURED RELATIVE ENTROPY OF CHANNELS §.§ Definition and basic properties Given a quantum channel 𝒩_A→ B, a completely positive map ℳ_A→ B, a Hamiltonian H_A (Hermitian operator acting on system A), and an energy constraint E∈ℝ, the energy-constrained measured relative entropy of channels is defined as D_H,E^M(𝒩‖ℳ ) sup_d_R^'∈ℕ, ρ_R^'A∈𝔻(ℋ_R^'A){ D^M(𝒩 _A→ B(ρ_R^'A)‖ℳ_A→ B (ρ_R^'A)):Tr[H_Aρ_A]≤ E} . The motivation for this definition is the same as that given after (<ref>). Similar to what was observed in Proposition <ref>, although the optimization in (<ref>) is defined to be over an unbounded space, it is possible to simplify the optimization task as follows. Given a quantum channel 𝒩_A→ B, a completely positive map ℳ _A→ B, a Hamiltonian H_A, and an energy constraint E∈ℝ, the measured relative entropy of channels can be written as follows: D_H,E^M(𝒩‖ℳ)=sup_ρ_R∈𝔻(ℋ_R), Tr[H_Aρ_A]≤ E{ D^M(ρ_R^1/2Γ_RB^𝒩ρ_R^1/2‖ρ_R ^1/2Γ_RB^ℳρ_R^1/2)} , where ℋ_R≃ℋ_A and ρ_R=ρ_A. 
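Returning to the Rényi program above, the α = 1/2 case of the proposition can be written out explicitly, since Θ ≥ G_{-1}(ρ⊗I, Ω) = (ρ⊗I)Ω^{-1}(ρ⊗I) is a single Schur-complement block that is jointly linear in (Θ, ρ, Ω). The following cvxpy sketch is an added illustration with placeholder amplitude-damping channels and Hamiltonian; ρ⊗I is assembled entrywise so as not to rely on a Kronecker product with a variable argument, and fixing ρ recovers the state-level program.

```python
import numpy as np
import cvxpy as cp

def choi(kraus, dA):
    dB = kraus[0].shape[0]
    G = np.zeros((dA * dB, dA * dB))
    for i in range(dA):
        for j in range(dA):
            Eij = np.zeros((dA, dA)); Eij[i, j] = 1.0
            G += np.kron(Eij, sum(K @ Eij @ K.T for K in kraus))
    return G

dA, dB = 2, 2
amp = lambda g: [np.array([[1, 0], [0, np.sqrt(1 - g)]]), np.array([[0, np.sqrt(g)], [0, 0]])]
Gamma_N, Gamma_M = choi(amp(0.2), dA), choi(amp(0.6), dA)   # two channels to distinguish
H_A, E, alpha = np.diag([0.0, 1.0]), 0.5, 0.5

rho = cp.Variable((dA, dA), symmetric=True)
Theta = cp.Variable((dA * dB, dA * dB), symmetric=True)
Omega = cp.Variable((dA * dB, dA * dB), symmetric=True)

# rho (x) I_B, written as a linear expression in the entries of rho
def unit(i, j):
    M = np.zeros((dA, dA)); M[i, j] = 1.0
    return np.kron(M, np.eye(dB))
rho_kron_I = sum(rho[i, j] * unit(i, j) for i in range(dA) for j in range(dA))

constraints = [
    cp.trace(rho) == 1,
    cp.trace(H_A @ rho) <= E,
    rho >> 0,
    cp.bmat([[Theta, rho_kron_I], [rho_kron_I, Omega]]) >> 0,   # Theta >= (rho(x)I) Omega^{-1} (rho(x)I)
]
prob = cp.Problem(cp.Minimize(alpha * cp.trace(Theta @ Gamma_N)
                              + (1 - alpha) * cp.trace(Omega @ Gamma_M)), constraints)
prob.solve()
Q_half = prob.value
D_half_channel = np.log(Q_half) / (alpha - 1)   # rho.value is an optimal, energy-feasible input state
```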
The proof is the same as that given for Proposition <ref>. §.§ Optimizing the measured relative entropy of channels In this section, we observe in Proposition <ref> that there is a variational representation of the measured relative entropy of channels in terms of a linear objective function and the hypograph of the operator connection of the logarithm. From this observation, we conclude that there is an efficient semi-definite optimization algorithm for computing the measured relative entropy of channels, which makes use of the fact recalled in Section <ref>. Not only does this algorithm compute the optimal value of D_H,E^M(𝒩‖ℳ), but it also determines an optimal input state and measurement that achieves D_H,E ^M(𝒩‖ℳ), which, as mentioned previously, is of significant value for applications. The proof of Proposition <ref> results from a straightforward combination of Proposition <ref>, Proposition <ref>, and the transformer equality in (<ref>). Given is a quantum channel 𝒩_A→ B, a completely positive map ℳ _A→ B, a Hamiltonian H_A, and an energy constraint E∈ℝ. Let Γ^𝒩 and Γ^ℳ be the Choi operators of 𝒩_A→ B and ℳ _A→ B, respectively. Then D_H,E^M(𝒩‖ℳ)=sup_Ω,Θ,ρ>0{[ Tr[ΘΓ^𝒩]-Tr[ΩΓ^ℳ]+1:; Tr[ρ]=1, Tr[Hρ]≤ E,; Θ≤ P_ln(ρ⊗ I,Ω) ]} , where P_ln is defined in (<ref>). Furthermore, when Γ^𝒩 and Γ^ℳ are d_A d_B× d_A d_B matrices, the quantity D_H,E^M(𝒩 ‖ℳ) can be efficiently calculated by means of a semi-definite program up to an error ε, by means of O(√(ln(1/ε ))) linear matrix inequalities, each of size 2d_A d_B×2d_A d_B. This follows similarly to the proof of Proposition <ref>, and we provide the proof for completeness. Indeed, here we combine Proposition <ref> and Proposition <ref> and employ the transformer equality in (<ref>). Consider that D_H,E^M(𝒩‖ℳ) =sup__ρ_R∈𝔻(ℋ_R) , Tr[H_Aρ_A]≤ ED^M(ρ_R^1/2Γ _RB^𝒩ρ_R^1/2‖ρ_R^1/2Γ_RB^ℳ ρ_R^1/2) =sup_Ω^',Θ^'>0, ρ_R∈𝔻(ℋ_R), Tr[H_Aρ_A]≤ E{[ Tr[Θ^'ρ_R^1/2Γ_RB^𝒩 ρ_R^1/2]-Tr[Ω^'ρ_R^1/2Γ _RB^ℳρ_R^1/2]+1:; Θ^'≤ P_ln(I,Ω^') ]} =sup_Ω^',Θ^',ρ_R>0, Tr[ρ_R]=1 Tr[H_Aρ_A]≤ E{[ Tr[ρ_R^1/2Θ^'ρ_R^1/2Γ _RB^𝒩]-Tr[ρ_R^1/2Ω^'ρ _R^1/2Γ_RB^ℳ]+1:; Θ^'≤ P_ln(I,Ω^') ]} =sup_Ω,Θ,ρ_R>0, Tr[ρ _R]=1 Tr[H_Aρ_A]≤ E{[ Tr[ΘΓ_RB^𝒩]-Tr [ΩΓ_RB^ℳ]+1:; Θ≤ P_ln(ρ⊗ I,Ω) ]} . The first equality follows from Proposition <ref>. The second equality follows from Proposition <ref>. The third equality follows from cyclicity of trace and the fact that the function ρ_R↦Tr[ρ_R^1/2Θ^'ρ_R ^1/2Γ_RB^𝒩]-Tr[ρ_R^1/2 Ω^'ρ_R^1/2Γ_RB^ℳ]+1 is continuous in ρ_R, so that the optimization can be performed over the set of positive definite density operators (dense in the set of all density operators). The final equality follows from defining Ωρ_R^1/2Ω^'ρ_R^1/2, Θρ_R^1/2Θ^'ρ_R^1/2 , and noting that Ω^',Θ^'>0 ⇔ Ω,Θ>0, as well as Θ^'≤ P_ln(I,Ω^') ⇔ ( ρ^1/2⊗ I) Θ^'( ρ^1/2⊗ I) ≤( ρ^1/2⊗ I) P_ln(I,Ω^')( ρ^1/2⊗ I) ⇔ Θ≤ P_ln(ρ⊗ I,Ω), with the final equality following from the definitions in (<ref>) and the transformer equality in (<ref>). As such, we have rewritten D_H,E^M(𝒩‖ℳ) in terms of the hypograph of P_ln. By appealing to <cit.>, it follows that this quantity can be efficiently calculated by means of semi-definite programming, with the complexity stated above. § CONCLUSION The main contributions of our paper are efficient semi-definite optimization algorithms for computing measured relative entropies of quantum states and channels. 
We did so by combining the results of <cit.> to obtain the result for states and then we generalized these to quantum channels by using basic properties of the weighted geometric mean and operator connection of the logarithm. Our findings are of significant value for applications, in which one wishes to find numerical characterizations of technologically feasible, hybrid quantum-classical strategies for quantum hypothesis testing of states and channels. Going forward from here, we note that further work in this direction could consider combining our findings here with those of <cit.>, the latter being about measured Rényi divergences under restricted forms of measurements. We also think it is a very interesting open question, related to those from <cit.>, to determine semi-definite programs for various Rényi relative entropies of quantum channels, including those based on the sandwiched <cit.> and Petz–Rényi <cit.> relative entropies. More generally, one could consider the same question for α-z Rényi relative entropies <cit.>, and here we think the variational formulas from <cit.> could be useful in addressing this question. Acknolwedgements—We are grateful to Ludovico Lami for communicating to us that the root fidelity of states ρ and σ can be written as 1/2inf_Y,Z>0{Tr[Yρ]+Tr [Zσ]:G_1/2(Y,Z)=I}, which was a starting point for our work here. We also thank James Saunderson for several email exchanges regarding semi-definite optimization and the weighted geometric mean, related to <cit.>. ZH is supported by a Sydney Quantum Academy Postdoctoral Fellowship and an ARC DECRA Fellowship (DE230100144) “Quantum-enabled super-resolution imaging”. She is also grateful to the Cornell School of Electrical and Computer Engineering and the Cornell Lab of Ornithology for hospitality during a June 2024 research visit. MMW acknowledges support from the National Science Foundation under Grant No. 2304816 and from Air Force Research Laboratory under agreement number FA8750-23-2-0031. This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8750-23-2-0031. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory or the U.S. Government. alpha § ALTERNATIVE PROOF OF VARIATIONALS FORMULAS FOR MEASURED RÉNYI RELATIVE ENTROPIES For a state ρ and a positive semi-definite operator σ, the expression in (<ref>) holds. Let us begin with the case α∈( 0,1). Let us write an arbitrary ω>0 in terms of a spectral decomposition as ω=∑_yω_y|ϕ_y⟩⟨ϕ_y |, where ω_y>0 for all y and { |ϕ_y⟩} _y is an orthonormal set. Consider that αTr[ωρ]+( 1-α) Tr[ω^α/α-1σ] =α∑_yω_y⟨ϕ_y|ρ|ϕ_y⟩+( 1-α) ∑_yω_y^α/α-1⟨ϕ _y|σ|ϕ_y⟩ =∑_yαω_y⟨ϕ_y|ρ|ϕ_y⟩+( 1-α) ω_y^α/α-1⟨ϕ_y |σ|ϕ_y⟩ ≥∑_y[ ω_y⟨ϕ_y|ρ|ϕ_y⟩] ^α[ ω_y^α/α-1⟨ϕ_y |σ|ϕ_y⟩] ^1-α =∑_yω_y^αω_y^-α⟨ϕ_y|ρ |ϕ_y⟩^α⟨ϕ_y|σ|ϕ_y⟩^1-α =∑_y⟨ϕ_y|ρ|ϕ_y⟩^α⟨ϕ _y|σ|ϕ_y⟩^1-α. The inequality follows from the inequality of weighted arithmetic and geometric means (i.e., α b+( 1-α) c≥ b^αc^1-α for all b,c≥0 and α∈( 0,1)), applied for all y. This inequality is saturated when the following condition holds ω_y⟨ϕ_y|ρ|ϕ_y⟩=ω_y^α/α-1⟨ϕ_y|σ|ϕ_y⟩ . That is, it is saturated when the two terms being averaged are equal. 
Rearranging this, we find that saturation holds if ω_y=( ⟨ϕ_y|σ|ϕ_y⟩/⟨ϕ_y|ρ|ϕ_y⟩) ^1-α. Thus, it follows that for every projective measurement ( |ϕ _y⟩⟨ϕ_y|) _y, there exists ω>0 such that αTr[ωρ]+( 1-α) Tr [ω^α/α-1σ]=∑_y⟨ϕ_y|ρ|ϕ _y⟩^α⟨ϕ_y|σ|ϕ_y⟩^1-α. As such we conclude the desired equality for all α∈( 0,1): inf_ω>0{αTr[ωρ]+( 1-α) Tr[ω^α/α-1 σ]} =Q_α^P(ρ‖σ). Then applying (<ref>) leads to the claim for α∈( 0,1). The case α>1 follows from a very similar proof, but instead makes use of the following inequality: α b+( 1-α) c≤ b^αc^1-α, which holds for all b≥0, c>0, and α>1. This inequality is a consequence of Bernoulli's inequality, which states that 1+rx≤( 1+x) ^r holds for all r≥1 and x≥-1. Indeed, consider that α b+( 1-α) c≤ b^αc^1-α ⟺ α( b/c) +( 1-α) ≤( b/c) ^α ⟺ α( b/c-1) +1≤( b/c-1+1) ^α, so that we choose x=b/c-1≥-1 and r=α≥1 in Bernoulli's inequality. So then we conclude the following for α>1: sup_ω>0{αTr[ωρ]+( 1-α) Tr[ω^α/α-1 σ]} =Q_α^P(ρ‖σ), by applying the same reasoning as in (<ref>)–(<ref>), but the inequality in (<ref>) goes in the opposite direction for α>1. § Α→ 1 LIMIT OF THE MEASURED RÉNYI RELATIVE ENTROPY This follows because lim_α→1^-D_α^M(ρ‖σ) =sup _α∈( 0,1) D_α^M(ρ‖σ) =sup_α∈( 0,1) sup_𝒳,( Λ _x) _x∈𝒳D_α(( Tr [Λ_xρ]) _x∈𝒳‖( Tr [Λ_xσ]) _x∈𝒳) =sup_𝒳,( Λ_x) _x∈𝒳 sup_α∈( 0,1) D_α(( Tr [Λ_xρ]) _x∈𝒳‖( Tr [Λ_xσ]) _x∈𝒳) =sup_𝒳,( Λ_x) _x∈𝒳D(( Tr[Λ_xρ]) _x∈𝒳‖( Tr[Λ_xσ]) _x∈𝒳) =D^M(ρ‖σ), as noted in <cit.>, and because lim_α→1^+D_α^M(ρ‖σ) =inf _α>1D_α^M(ρ‖σ) =inf_α>1sup_𝒳,( Λ_x) _x∈𝒳D_α(( Tr[Λ_x ρ]) _x∈𝒳‖( Tr[Λ _xσ]) _x∈𝒳) =sup_𝒳,( Λ_x) _x∈𝒳 inf_α>1D_α(( Tr[Λ_xρ]) _x∈𝒳‖( Tr[Λ_xσ]) _x∈𝒳) =sup_𝒳,( Λ_x) _x∈𝒳D(( Tr[Λ_xρ]) _x∈𝒳‖( Tr[Λ_xσ]) _x∈𝒳) =D_α^M(ρ‖σ). The third equality above is non-trivial and follows from the fact that it suffices to optimize D_α(( Tr[Λ_x ρ]) _x∈𝒳‖( Tr[Λ _xσ]) _x∈𝒳) over POVMs with a finite number of outcomes (a compact and convex set) <cit.>, that the relative entropy D_α is lower semi-continuous <cit.>, and an application of the Mosonyi–Hiai minimax theorem <cit.>.
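As a quick numerical sanity check of the saturation step in Appendix A, the short sketch below (assuming only numpy; the projective measurement is an arbitrary orthonormal basis rather than an optimized one, and the states are generated at random) verifies that the choice ω_y=(⟨ϕ_y|σ|ϕ_y⟩/⟨ϕ_y|ρ|ϕ_y⟩)^1-α makes αTr[ωρ]+(1-α)Tr[ω^α/α-1σ] coincide with ∑_y⟨ϕ_y|ρ|ϕ_y⟩^α⟨ϕ_y|σ|ϕ_y⟩^1-α, as claimed.

import numpy as np

rng = np.random.default_rng(0)
d, alpha = 4, 0.5                      # illustrative dimension and Renyi parameter in (0, 1)

def random_positive(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real        # normalized, positive definite

rho, sigma = random_positive(d), random_positive(d)

# arbitrary projective measurement: eigenbasis of a random Hermitian matrix
h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
_, phi = np.linalg.eigh(h + h.conj().T)            # columns are |phi_y>

p = np.real(np.diag(phi.conj().T @ rho @ phi))     # <phi_y|rho|phi_y>
q = np.real(np.diag(phi.conj().T @ sigma @ phi))   # <phi_y|sigma|phi_y>

w = (q / p) ** (1 - alpha)                         # saturating eigenvalues of omega
lhs = alpha * np.sum(w * p) + (1 - alpha) * np.sum(w ** (alpha / (alpha - 1)) * q)
rhs = np.sum(p ** alpha * q ** (1 - alpha))
print(np.isclose(lhs, rhs))                        # True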
http://arxiv.org/abs/2406.18833v1
20240627020738
Quantum annealing-based structural optimization with a multiplicative design update
[ "Naruethep Sukulthanasorn", "Junsen Xiao", "Koya Wagatsuma", "Shuji Moriguchi", "Kenjiro Terada" ]
cs.CE
[ "cs.CE", "cs.NA", "math.NA", "quant-ph" ]
§ INTRODUCTION Quantum computing has attracted a great deal of attention as a solution method for optimization problems because it can take advantage of the unique capabilities of quantum mechanics. One of the leading algorithms is quantum annealing (QA)<cit.>, building upon the simulated annealing (SA) algorithm that is used in classical computers. The key advantage of QA is that it uses the quantum tunneling effect to penetrate barriers in the objective function landscape. As a result, the process of searching for an optimal solution is greatly accelerated <cit.>. It should be noted here that despite its robust performance, QA faces several challenges due to the limitations of current quantum hardware and thus has not yet reached its full potential. In the past decades, however, quantum computer technology has made remarkable progress, especially since the practical application of quantum engines (e.g., D-Wave device<cit.>, Fujitsu Digital Annealer <cit.>, Hitachi CMOS Annealing Machine <cit.>, Toshiba Simulated Bifurcation Machine<cit.>, and Fixstars Amplify Annealing Engine<cit.>). This advancement has not only broadened the range of feasible applications but also increased interest in exploration in a variety of fields<cit.>. Structural optimization is one of the most attractive applications because of its robustness and versatility for various industrial sectors. The obtained results serve as a blueprint for prototyping, but efficiency and performance depend on the chosen method. As a result, various optimization algorithms have been developed, including gradient-based methods <cit.>, SA <cit.>, Genetic Algorithms <cit.>, Harmony Search <cit.>, and Evolutionary Structural Optimization <cit.>. In the effort to enhance optimization techniques, QA, known for its robustness in solving optimization problems, has emerged as a promising candidate for achieving optimized structures<cit.>. So far, however, there have been relatively few studies in the literature on the application of QA for structural optimization. For example, Will and Chen <cit.> have explored a truss optimization problem using the D-Wave quantum annealer. In their approach, the structural analysis is conducted by finite element (FE) analysis with a classical computer and then QA is used to search for incremental updates to the sectional area. Their results demonstrate the feasibility of using QA to search for optimized cross sections but are limited to nine truss members. Sato et al. <cit.> proposed a quantum optimization framework for a simple heat path design of a discrete truss structure with three and five edges, for which the objective function is to minimize the temperature of the target node. In addition, in their approach, both structural analysis and optimization are performed by the variational quantum algorithms in noisy intermediate-scale quantum (NISQ) devices. While these studies have successfully applied QA to simple discrete truss problems, their scale and areas of application remain limited. Recently, Ye et al.
<cit.> developed a topology optimization method using QA, tailored for continuum structure design. In their approach, structural analysis is initially conducted on a classical computer. Subsequently, a hybrid of classical computing and QA is employed to solve optimization problems, adopting a decomposition and splitting strategy to manage complexity. This strategy reformulates the original optimization problem into a series of mixed-integer linear programs (MILPs). They demonstrate this method by designing the Messerschmitt-Bolkow-Blohm (MBB) beam. Although the speed of this method has not yet surpassed classical computers, its application provides significant evidence of the QA's potential in structural optimization. In the present paper, we propose a new quantum annealing-based optimization framework with a multiplicative update scheme for structural design. First, a QA-based optimizer is proposed by adopting the product of the QA iterative solutions as the design variable for each finite element, which characterizes the present multiplicative update scheme. Second, we develop the QUBO model that aims to minimize compliance while integrating inequality constraints through a penalty method and slack variable. The derived model is suitable for QA and allows the multiplicatively updated design variable to converge to the optimum solution, creating an optimal structure. Lastly, the robustness of the proposed framework was demonstrated by applying it to the design of truss and continuum structures, ensuring its reliability and flexibility. § OPTIMIZATION FRAMEWORK SETTING First, we introduce an update multiplier for updating an elemental design variable, denoted as α, within the FE framework. The value of α lies between 0 ≤α≤Θ, where Θ represents the maximum allowable change value. At each design iteration, the update multiplier value is obtained as the solution to the optimization problem via QA, so that the general stiffness is updated at the i-th design iteration as K^i=α^i · K^i-1    with     0 ≤α≤Θ , which is considered as an effective stiffness for the optimization problem of interest. In particular, if α = 0.5, this iteration means a 50% reduction in the value of K. Conversely, if α = 1.1, it means increasing the value of K by 10%. Through this process, the structural layout is dynamically adjusted, with unnecessary areas being reduced and more critical areas being reinforced. It should be noted that the updater α in Eq. (<ref>) is employed solely to update the effective stiffness from the design iteration (i-1) to i-th. For the structural design layout at each design iteration, it is represented by an additional variable, namely design variable α^*, which can be obtained through the multiplication of all previous updaters α^i as α^*^(j) =∏_i=1^jα^i    with   K^j = α^*^(j) K^0, where K^0 and K^j are the stiffness at the initial and j-th iterations, respectively. Here, it can be seen from Eqs. (<ref>) and (<ref>) that the value of α^* can increase until it exceeds the specified upper bound. Therefore, a truncation procedure is needed to address this problem so that the limit value is not exceeded. In this study, when the value of design variable α^* in the current design iteration approaches the limit value, Θ is set to 1, prohibiting the alpha value from increasing. In addition, it is worthwhile to notice that α^* is equivalent to the design variable from density-based approaches (e.g., the SIMP method<cit.>). 
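As a minimal illustration of the multiplicative update introduced above, together with the truncation of Θ when α^* approaches its upper bound, consider the following sketch (plain Python; the updater values fed into the loop are illustrative placeholders, whereas in the actual framework each updater is the solution returned by QA at that design iteration):

THETA = 1.2                        # assumed maximum allowable change (illustrative value)

def allowed_theta(alpha_star, theta=THETA):
    # Truncation rule: when another full increase would push alpha* past 1,
    # Theta is set to 1 so that the design variable cannot grow further.
    return 1.0 if alpha_star * theta > 1.0 else theta

alpha_star = 0.7                   # initial design variable alpha*^(1)
history = [alpha_star]
for alpha_qa in [1.2, 1.2, 0.9, 1.2]:                    # stand-ins for updaters returned by QA
    alpha_i = min(alpha_qa, allowed_theta(alpha_star))   # enforce 0 <= alpha <= Theta
    alpha_star *= alpha_i                                # alpha*^(j) = prod_i alpha^i
    history.append(alpha_star)
print([round(a, 4) for a in history])    # [0.7, 0.84, 0.84, 0.756, 0.9072]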
Thus, in a similar fashion, α^* can be used to represent the design material as α^*= 1                 : solid 0<α^*<1 : mixture 0                 : void. Thanks to the above setting, an optimization problem is established to find a design layout that achieves the target performance. In this study, the objective function is set to minimize the compliance of the structure while incorporating the material volume as the constraint. This is well-known as the standard framework for structural optimization. Once the solution is obtained, the design structure is expected to perform better than the initial design. According to this, the optimization problem can be formulated in the standard form as follows: find  : α^*(α_e) ∈{α^*_1,α^*_2,...α^*_N_e} min_α^*(α_e) : 𝐅^T 𝐔 s.t.   : 𝐊(α^*) 𝐔 =𝐅 ,   ∑_e=1^N_e V_e(α_e^*) /V_0≤V̅_target, 0 ≤α_e≤Θ, 0 ≤α^*_e≤ 1, where 𝐊 is the global stiffness matrix, 𝐅 is the external applied load vector, 𝐔 is the global nodal displacement vector in structural analysis. Also, α_e is the elemental update multiplier, N_e is the number of finite elements (or truss members), V_e (α_e^*) is the elemental volume at each design iteration, V_0 is the initial total volume, and V̅_target is a given desired ratio to the initial volume. In addition, α^* = {α^*_1, ⋯, α^*_N_e} is the set of design variables. Here, once the optimization is established, then it will be solved for the update multiplier, α, via QA. The overall steps of the proposed design framework can be summarized as follows: * Perform structural analysis using the finite element method (FEM) on a classical computer to obtain basic unknowns (e.g., displacements), and then calculate the elemental objective function. * Establish the QUBO model by encoding the updaters α_e, which multiplicatively update the elemental design variables α^*_e, into binary variables, transforming the original objective function and the volume constraint to the cost function into the QUBO formats. * Solve the QUBO cost function for the updaters α_e using QA. * Decode the binary variables back to real values and update the current design structure with α_e and then determine the design variable α^*_e with Eq. (<ref>). This process is repeated until convergence to an optimal solution is achieved within a predetermined tolerance. The schematic of the proposed framework is shown in Fig. <ref>. § METHODS This section provides details on converting the optimization problem from the previous section into a QUBO format. A key process for solving the optimization with QA is to derive a combinatorial optimization problem such as the Ising or, equivalently, QUBO model. Details are as follows. §.§ QUBO model To solve the problem in the QA framework, it is necessary to reformulate the optimization problem in a QUBO format. Depending on each particular problem, the QUBO model, i.e., the cost function, constraints must be derived in the binary variable before passing through a quantum device. We first define the cost function for the QUBO problem in the following form: f_ qubo(𝐪 )= 𝐪^T·𝐐·𝐪, where 𝐐 denotes an upper diagonal coefficient matrix, and q is the unknown binary variable vector whose components are in q ∈{0,1}. Given the property of binary variables such that q^2 = q, Eq. (<ref>) can consequently be reformulated as follows: f_ qubo( q)= ∑_i=1^n Q_i,iq_i+∑_i<j^n Q_i,jq_iq_j, where n is the number of qubits, Q_i,i and Q_i,j are the coefficients of linear and quadratic terms corresponding to the diagonal and off-diagonal entries, respectively. 
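For concreteness, the cost function above can be evaluated directly from an upper-triangular coefficient matrix by exploiting q_i^2 = q_i; the short sketch below (numpy, with an arbitrary illustrative Q for n = 3 qubits) also brute-forces the ground state, which is what the annealer is asked to find for much larger n:

import numpy as np

def qubo_cost(Q, q):
    # f_qubo(q) = sum_i Q_ii q_i + sum_{i<j} Q_ij q_i q_j for binary q,
    # which equals q^T Q q when Q is upper triangular and q_i^2 = q_i.
    q = np.asarray(q, dtype=float)
    return float(q @ Q @ q)

Q = np.array([[ 1.0, -2.0,  0.5],      # arbitrary illustrative coefficients
              [ 0.0,  3.0, -1.0],
              [ 0.0,  0.0, -0.5]])

# brute-force enumeration of the 2^n configurations (feasible only for tiny n)
configs = [[(k >> i) & 1 for i in range(3)] for k in range(8)]
best = min(configs, key=lambda q: qubo_cost(Q, q))
print(best, qubo_cost(Q, best))        # ground-state bit string and its cost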
It can be seen that no constraint term appears in Eqs. (<ref>) and (<ref>), implying that the standard QUBO problem is dedicated to an unconstrained optimization problem. However, in structural optimization problems, layouts are usually designed under specific constraints to ensure optimal performance, as presented in Eq. (<ref>). Therefore, in this study, the penalty method is employed to modify the QUBO cost function of the following form: f_ qubo(q)= f_ obj(q) + λ· g(q), where f_ obj represents the objective function as defined in the structural optimization problem by Eq. (<ref>), g denotes its constraint function, and λ is the positive penalty parameter. Note that the value of λ must be large enough to have an effective impact on the QUBO cost function so that the imposed constraints are satisfied. It should be noted that since the formulation presented above has been set up as a general design framework, both the objective function and constraint can be customized and tailored to specific applications, thus achieving the desired performance. However, this study focuses on the problems of minimizing structural compliance or equivalently maximizing structural stiffness under the volume constraint. Accordingly, the objective function in Eq. (<ref>) can be expressed using the unknown binary variable q_e for each structural element as follows: f_ obj(q_e)=𝐅^T 𝐔_minimizing compliance := 𝐔^T𝐊(α^*(q_e)) 𝐔_maximizing stiffness, Besides, there are two key aspects to be noted for the constraint function g(q). First, the second term on the right-hand side of Eq. (<ref>) is needed only when the equality in Eq. (<ref>) is active, so QA cannot be applied in a QUBO format as it is. To address this issue, an additional variable, known as the slack variable S̅, is introduced into the volume constraint in Eq. (<ref>) as a function of another unknown binary variable q_s. Second, to meet the requirements of the QUBO framework, g is commonly formulated by squaring the volume constraint function to a quadratic form. This modification allows the volume constraint in Eq. (<ref>) to be converted to an equality constraint as ∑_e=1^N_e V_e(α^*_e(q_e)) /V_0 -V̅_target+S̅(q_s) = 0, so that the constraint function is defined as g(q_e, q_s)= (∑_e=1^N_e V_e(α^*_e(q_e))/V_0 - ( V̅_target -S̅(q_s) ) )^2. As a result, the optimization framework defined in Eq. (<ref>) can be reformulated by adopting Eqs. (<ref>) and (<ref>) in the QUBO cost function to define the following minimization problem: find   : α^*(q_e) min_q_e, q_s   : f_ qubo(q_e, q_s)= -𝐔^T𝐊(α^*(q_e)) 𝐔 + λ·(∑_e=1^N_e V_e(α^*_e(q_e))/V_0 - ( V̅_target -S̅ (q_s) ) )^2 with      𝐊(α^*(q_e))𝐔=𝐅, 0 ≤α_e(q_e) ≤Θ, 0 ≤α^*_e(q_e) ≤ 1, 0 ≤S̅(q_s) ≤ 1. It is worth mentioning that the negative value of the first term of f_qubo in Eq. (<ref>) arises from the objective of the stiffness of the design structure, and that this expression is identical to the analytical sensitivity formulation in the standard problem of minimizing compliance. Thus, from this perspective, the iterative solving procedure for Eq. (<ref>) with QA can be considered similar to the sensitivity analysis procedure in standard structural optimization. §.§ Encoding As shown in Eq. (<ref>), the main variables and parameters are represented by binary variables q ∈{0,1}. Additionally, in the proposed framework, it is important that the value of the updater and slack variable, α_e and S̅, cover a specific range of real numbers in order to effectively update the layout of the design structure. 
To this end, an encoding procedure using a specific functional form of q is used to represent these real values. For simplicity, a power series expansion of the following form is adopted<cit.>: α_e(q_e) = Θ· (∑_l=-m^m2^l)^-1· (∑_l=-m^m2^l· q_e,l), S̅(q_s) = (∑_l_s=-m_s^m_s2^l_s)^-1· (∑_l_s=-m_s^m_s2^l_s· q_s,l_s), where q_e,l denotes the unknown binary variable for the l-th basis term of element e, and q_s,l_s is the unknown binary variable for the slack variable with the l_s-th basis term. The integers, m and m_s, determine the number of basis terms representing α_e and S̅, respectively. Thus, for element e, the total number of unknown binary variables is 2m, and for the entire system, it is 2m · N_e + 2m_s. Notably, the coefficient of each basis consists of two parts, a fractional term and an integer term, which vary depending on whether the powers, l and l_s, are negative or positive. Increasing the values of m and m_s means incorporating more fractional and integer terms, thereby enriching the candidate values of α_e and S̅, respectively. At the same time, however, they also imply an increase in the computational effort required by the quantum machine to search for the values of q_e,l and q_s,l_s. In this context, the functional form of the encoding is open to debate and will be left for further detailed study. §.§ QUBO formulation for QA machine The QUBO cost function in Eq. (<ref>) is rearranged as follows: f_ qubo = f_ obj(q_e) +λ (∑_e=1^N V_e(q_e)/V_0 - ( V̅_target -S̅(q_s) ) )^2 = f_ obj(q_e)+λ ((∑_e=1^N V_e(q_e) /V_0) ^2- 2(∑_e=1^N V_e(q_e)/V_0) (V̅_target-S̅(q_s)) +(V̅_target -S̅(q_s))^2) = f_ obj(q_e)_C_1 +λ (∑_e=1^N( V_e(q_e)/V_0) ^2_C_2+2∑_e<j^N(V_e(q_e) V_j(q_e)/V_0^2)_C_3- 2(∑_e=1^N V_e(q_e)/V_0) (V̅_target -S̅(q_s))_C_4 +(V̅_target -S̅(q_s) )^2_C_5). Then, the substitution of Eq. (<ref>) into each term yields C_1 =-∑_e=1^N (Φ_e^) · U_e^T(∑_l=-m^m 2^l· q_e,l)K_e· U_e, C_2 =∑_e=1^N (Φ_e)^2 (∑_l=-m^m 2^2l· q_e,l+∑_l<l_2^m 2^l+l_2+1· q_e,l q_e,l_2), C_3 = ∑_e<j^NΦ_e^·Φ_j^·(∑_l=-m^m∑_l_2=-m^m 2^l+l_2+1· q_e,l· q_j,l_2), C_4 = -(∑_e=1^NΦ_e^·(∑_l=-m^m 2^l+1· q_e,l)) V̅_target +∑_e=1^N(Φ_e^·Φ_s^·(∑_l=-m^m∑_l_s=-m_s^m_s 2^l+l_s+1· q_e,lq_s,l_s)), C_5=  V̅_target^2 - 2V̅_target·Φ_s^·(∑_l_s=-m_s^m_s 2^l_s· q_s,l_s) + (Φ_s^)^2 ·(∑_l_s=-m_s^m_s 2^2l_s· q_s,l_s+∑_l_s<l_s_2 2^l_s+l_s_2+1· q_s,l_sq_s,l_s_2), and Φ_e,j^ = ( V_e,j^/V_0) ·( Θ/∑_l=-m^m 2^l) ;   Φ_s^ = ( 1/∑_l_s=-m_s^m_s 2^l). With this QUBO format, the minimization problem in Eq. (<ref>) can be solved using an available QA computing platform. In the present study, the Amplify Annealing Engine (Amplify AE)<cit.>, a GPU-based Ising machine, is adopted to search for the ground state of the QUBO problem, with the execution time parameter for the Amplify AE machine, namely t_out, set considering the specific problem at hand. § RESULTS AND DISCUSSION §.§ Truss optimization First, the proposed framework is applied to the optimal design problem of four truss structures with different geometries and boundary conditions, as shown in Fig. <ref>. As mentioned before, the state variables (e.g., displacement) are obtained by performing standard structural analysis on a classical computer. In this particular problem, the effective stiffness K stated in Eq. (<ref>) can be a truss member e as K_e^j = α^*_e^(j) K_0 = α^*_e^(j)· E · A_e^0 /L_e, where α^*_e^(j) is the elemental design variable at the j-th design iteration, representing the ratio between the current and initial cross-sectional areas, A_e^j/A_e^0. 
Here, E is Young's modulus equal to 2 × 10^6 N/ m^2, L_e is the length and A_e^0 is the initial cross-sectional area equal to 10 mm^2 for all members. The target ratio V̅_ target to the initial total volume V_0 is fixed at 1 throughout the optimization process, and the current design volume, denoted by V_ des^j(α^*), corresponds to ∑_e=1^N_e V_e^j(α^*_e) where V_e^j(α^*_e)= α^*_e^(j)· A_e^0 × L_e. Additionally, the number of unknown binary variables is n = N_e · n_q + n_s with n_ q and n_ s being the numbers of qubits for the elemental updaters and the slack variable, both of which are set to 9 in this study. The execution timeout parameter for the Amplify AE machine is set as t_out= 5 seconds for the truss example. Meanwhile, the maximum allowable change Θ can be fixed throughout the optimization process, but we devise a two-step approach to expedite the optimization, in which the initial large value Θ_1 is first set and then reduce to Θ_2. These values are determined through trial and error for each specific problem, and so are the penalty constant λ. The iterative process for optimization is terminated after the value of the objective function changes by less than 0.005 for five consecutive iterations. For comparison purposes, reference solutions are obtained by the optimality criteria (OC) method <cit.> performed on a classical computer. Figure <ref> presents the optimization results for the truss structures having 6 and 11 members by setting the penalty parameter λ to 8× 10^4 and 5× 10^4, respectively, and the initial design variable α^*_e^(1) is set to 0.2 for all members. We have set Θ_1 = 1.5 at the first three iterations and Θ_2 = 1.05 for remaining iterations. As can be seen from each of the figures, the objective function value smoothly converges to the optimal solution within a few iterations. Also, the final configuration after convergence is the well-known optimum solution for the two-bar truss problems<cit.>. Table <ref> compares the final values of the objective function and volume ratio obtained from QA and OC methods. As can be seen, for both truss structures, the objective function values obtained from QA are slightly lower than those from the OC method. This is because for each optimization result, the volume ratio to the initial total volume of the optimized truss structure obtained from OA is slightly larger than that from the OC method. It is worthwhile to note that the volume constraint is only approximately satisfied in QA due to two main factors: the value of the penalty constant λ and the precision of the encoding. Improvement of these factors would allow for a more accurate approximation. Next, we conduct optimization for the two remaining truss structures having 21 and 31 members by setting α^*_e^(1)= 0.4. For the case with 21 members, λ is set to 5.3 × 10^4, and Θ_1 is set to 1.15 for the first five iterations, before being reduced to Θ_2 = 1.025. For the case with 31 members, λ is set to 1 × 10^3, Θ_1 is set to 1.5 for the first three iterations, and is then reduced to Θ_2 = 1.08. Optimization results are shown in Fig. <ref>, each from which we can confirm that the objective function converges monotonically to the optimal solution and agrees with that of the OC method. Moreover, the optimized shapes are well-recognized and consistent with the results reported in the literature. Again, the final objective function values are slightly lower than those of the OC method due to the larger values of the final design volume. 
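The sensitivity to the encoding that is discussed next can be made concrete with a short sketch of the power-series decoding of α_e (plain Python; m, Θ, and the bit patterns are illustrative choices, and the sum limits are taken exactly as written in the encoding equation above):

def decode_updater(q_bits, m, theta):
    # alpha_e(q) = Theta * (sum_l 2^l)^(-1) * (sum_l 2^l * q_l), with l = -m, ..., m
    weights = [2.0 ** l for l in range(-m, m + 1)]
    assert len(q_bits) == len(weights)
    return theta * sum(w * q for w, q in zip(weights, q_bits)) / sum(weights)

m, theta = 1, 1.2                     # illustrative encoding depth and allowable change
for bits in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]:
    print(bits, round(decode_updater(bits, m, theta), 4))
# All-zero bits give 0 and all-one bits give Theta, while the remaining patterns fill in
# a coarse grid in between; with few qubits this grid is sparse, which is one reason the
# volume constraint can only be satisfied approximately.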
It should be pointed out that the encoding parameters (e.g., number of qubits n_q) should be carefully set because the poor encoding may lead to the violation of the volume constraint, as illustrated in this example, and the limited number of possible candidates for the design updater α_e in QA. In other words, the imposition of a certain number of n_q, limits the solution set in QA, which could make α_e in Eq. (<ref>) to potentially overestimate the solution. Nevertheless, the difference in the final objective function value between the OC method and QA remains less than 1.3%. §.§ Continuum structure optimization In this subsection, we focus our attention on continuum structures made of linearly elastic materials. FE analysis is performed to solve the state variables on a classical computer. Here, the element stiffness matrix in Eq. <ref> is calculated as 𝐊_e^j (α^*_e)= ∫_Ω_ eα^*_e^(j)𝐁^T𝐂_0𝐁 dV, where α^*_e^(j) represents the ratio between the current and initial elemental volume, V_e^j/V_e^0, 𝐁 is the strain-displacement matrix, Ω_ e is the domain of an element, and 𝐂_0 is the elasticity matrix dependent on the material properties. In this study, isotropic elastic properties are taken as E = 2 × 10^6 N/m^2 and Poisson’s ratio ν = 0.3 under the plane strain condition. Also, for this example, the target ratio V̅_ target are kept at initial total volume V_0 throughout the optimization process, and V_e^j(α^*_e)= α^*_e^(j)· V_e^0. In order to demonstrate the performance of the proposed framework, two design domains with different boundary conditions are considered and are discretized with 10 × 20 and 20 × 10 elements, respectively, as shown in the leftmost panel of Fig. <ref> and Fig. <ref>. It is important to note that setting the number of n_ q in the current available QA machine has limitations, especially when the number of unknowns increases, significantly affecting computational time. Keeping this limitation in mind, n_ q and n_ s in this example are set to 3 as the small number to explore whether the method will converge or not with t_out = 1 second. Additionally, a two-step strategy for Θ is employed again to accelerate the optimization by initially setting Θ_1 for the first three iterations followed by taking a smaller value of Θ_2. First, we target a well-known coat-hanging problem<cit.> as shown in the leftmost panel of Fig. <ref> with α^*^(1) = 0.5, Θ_1 = 1.2, Θ_2 = 1.05, and λ is set to 7 × 10^3. The optimized topology and its evolution from the proposed design framework are shown in Fig. <ref>, along with the optimized result obtained using the OC method. It can be seen from the figure that the QA optimization results closely converge to a topology similar to the solution of this benchmark problem and that obtained using the OC method. Consequently, it shows that even with a small n_ q, convergence to the optimal solution is achievable, implying the reliability and accuracy of using QA in the proposed multiplicative update scheme for structural optimization. Next, we consider a beam-like two-dimensional structure with fixed ends as shown in the leftmost panel of Fig. <ref>. The design parameters are set as follows: α^*^(1) = 0.7, Θ_1 = 1.1, Θ_2 = 1.02, and λ = 1 × 10^5. Figure. <ref> shows snapshots illustrating the optimization process from the proposed method, which converges to a result similar to the OC method. 
However, it can be observed that some design variables, α_e^*, obtained from the OC method converge to intermediate values more than those obtained from the proposed method. Although the proposed method tends to clearly split the design variable into 0 or 1, the binary encoding process in Eq. (<ref>), plus the small number of n_q, limited the number of candidate solutions, leading to an overestimation of the design volume, although by less than 4%; see Table <ref>. Because of this tendency, as in the truss example problems, the final objective function value of the proposed method is slightly lower (less than 2.5%) than that of the OC method. Nevertheless, the histories of the objective function value and its design volume for both cases shown in Fig. <ref> indicate that the optimization results are consistent with the OC method and converge well to the optimal solutions. § CONCLUSION We have developed a novel structural design framework based on QA, into which the multiplicative update scheme for the design variable is incorporated. That is, the design variable is represented by the product of updaters, each of which is obtained as a solution provided by QA. The framework is advantageous due to its simplicity and efficiency, which facilitates convergence to the optimal solution. The QUBO form was derived for the compliance minimization problem subject to the inequality volume constraint. A power series expansion encoding process is employed to facilitate the conversion between real and binary values of the updaters so that the design variable of the QUBO model can be updated as their product. This framework has been applied to both truss and continuum structures, demonstrating its robust performance. Indeed, the optimization results indicated that the proposed design framework, utilizing QA, exhibited a good convergence to the optimal design shape for both problems, achieving results comparable to those obtained with the OC method on a classical computer. Remarkably, even with a limited number of binary variables or, equivalently, a small number of qubits, the QA-based design results converged effectively to the optimal solution. However, the final objective function value using QA was lower than that achieved with the OC method, because the design volume was slightly overestimated due to the poor expressive ability of the adopted encoding process and the optimal penalty constant. It should be noted that the parameters within the proposed framework require fine-tuning for each specific problem, particularly the penalty parameter for imposing the volume constraint in the QUBO model. Therefore, further development is needed to automate the process of finding optimal parameter values. Additionally, exploring alternative functional forms for the encoding process could further increase the efficiency of updating the design variable. § DATA AVAILABILITY The datasets generated and/or analysed during the current study are not publicly available due to an ongoing study but are available from the corresponding author upon reasonable request. § AUTHOR CONTRIBUTIONS N.S.: Optimization framework, Software, Validation, Investigation, Writing-original draft preparation; X.J. and K.W.: Quantum annealing discussion, Investigation, Review; S.M.: Investigation, Review-original draft; K.T.: Funding acquisition, Conceptualization, Methodology, Supervision, Writing- Reviewing and Editing. § ADDITIONAL INFORMATION Competing interests The authors declare no competing interests.
http://arxiv.org/abs/2406.18673v1
20240626182108
Two-gluon one-photon vertex in a magnetic field and its explicit one-loop approximation in the intermediate field strength regime
[ "Alejandro Ayala", "Santiago Bernal-Langarica", "Jorge Jaber-Urquiza", "José Jorge Medina-Serna" ]
hep-ph
[ "hep-ph", "nucl-th" ]
http://arxiv.org/abs/2406.18663v1
20240626180342
Enhanced particle acceleration in a pulsar wind interacting with a companion
[ "Valentina Richard Romei", "Benoît Cerutti" ]
astro-ph.HE
[ "astro-ph.HE" ]
Pulsar wind interacting with a companion Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France valentina.richard-romei@univ-grenoble-alpes.fr benoit.cerutti@univ-grenoble-alpes.fr Pulsar winds have been shown to be preferred sites of particle acceleration and high-energy radiation. Numerous studies have been conducted to better characterize the general structure of such relativistic plasmas in isolated systems. However, many pulsars are found in binary systems and there are currently no ab initio models available that would include both the pulsar magnetosphere and the wind of the pulsar in interaction with a spherical companion. We investigate the interaction between a pulsar wind and a companion to probe the rearrangement of the pulsar wind, assess whether it leads to an enhancement of particle acceleration, and predict the high-energy radiative signature that stems from this interaction. We consider the regime where the companion is small enough to hold between two successive stripes of the wind. We performed two-dimensional (2D) equatorial particle-in-cell simulations of an inclined pulsar surrounded by a spherical, unmagnetized, perfectly conducting companion settled in its wind. Different runs correspond to different distances and sizes of the companion. We find that the presence of the companion significantly alters the structure of the wind. When the companion lies beyond the fast magnetosonic point, a shock is established and the perturbations are advected in a cone behind the companion. We observe an enhancement of particle acceleration due to forced reconnection as the current sheet reaches the companion surface. Hence, high-energy synchrotron radiation is also amplified. The orbital light curves display two broad peaks reaching up to 14 times the high-energy pulsed flux emitted by an isolated pulsar magnetosphere. These effects increase with the growth of the companion size and with the decrease of the pulsar-companion separation. The present study suggests that a pulsar wind interacting with a companion induces a significant enhancement of high-energy radiation that takes the form of an orbital-modulated hollow cone of emission, which should be detectable by galactic-plane surveys, possibly with long-period radio transient counterparts. Enhanced particle acceleration in a pulsar wind interacting with a companion Valentina Richard Romei1 Benoît Cerutti1 Received 03 May 2024; accepted 16 June 2024 ============================================================================ § INTRODUCTION A few percent of galactic pulsars are found in binary systems <cit.>. While tight binary systems involving magnetically coupled neutron stars have been studied in the past (e.g., ), fewer studies have drawn attention to the interaction between a pulsar and a companion lying in its wind. However, it is a generic problem that has many astrophysical applications starting from binary pulsar systems, such as: (i) pulsar-neutron star such as PSR  B1913+16 <cit.> or the double pulsar system PSR J0737-3039 <cit.>; (ii) pulsar-black hole (no confirmed detection so far; however, see and ); (iii) pulsar-main sequence star, including spider pulsars such as PSR B1957+20 <cit.> or PSR J2051-0827 <cit.>; and (iv) pulsar-white dwarf, such as PSR J1141-6545 <cit.>, PSR J1909-3744 <cit.>, PSR J1738+0333 <cit.>, and PSR J0348+0432 <cit.>. 
Planets orbiting pulsars (see and references therein) form another class of relevant applications from which we expect characteristic signatures, as suggested by <cit.>, analogously to the Solar system planets and moons (e.g., for the Jupiter-Io interaction). We should also consider the case of asteroids interacting with a pulsar wind, which have been proposed to trigger repeating fast radio bursts <cit.>. As of today, the basics of such interactions remains uncertain and we are left with opened questions regarding the possible rearrangement of the magnetosphere, the strength and location of particle acceleration, and the high-energy emission originating from such systems. Ultimately, we consider whether a pulsar wind interacting with a companion may lead to a new class of long-period high-energy transients. Modelling the interaction between a pulsar wind and a companion requires us to capture both the magnetosphere’s global dynamics and the microphysical processes of the relativistic plasma responsible for particle acceleration and non-thermal radiation. To do so, we resorted to global particle-in-cell (PIC) simulations. Many studies have been conducted in order to characterize the general structure of isolated pulsar magnetospheres, whether aligned <cit.> or inclined <cit.>. These models have played a major role in the understanding of the magnetospheric structure and the kinetic processes at play. They have demonstrated that a sizeable fraction of the pulsar spindown power can be dissipated into kinetic energy. Magnetic reconnection has been identified as the predominant cause of particle acceleration and localized in the wind current sheet beyond the pulsar light cylinder. Accelerated particles escaping the reconnection layers have been shown to emit synchrotron radiation, mainly from the inner parts of the wind, thereby reproducing pulsars’ magnetospheric radiative signatures <cit.>. To our knowledge, no simulations of a pulsar magnetosphere and a spherical companion lying in the pulsar wind have been realized, neither in the framework of magnetohydrodynamics nor in that of PIC. However, the regime where a millisecond pulsar interacts with a low-mass companion star (i.e., the spider pulsar regime) has been explored with 2D PIC simulations by focusing on the outer parts of the wind, where the pulsar wind was modeled by a plane parallel striped wind (see ). In this work, we aim to probe intermediate scales, namely, scales enabling us to simulate an inclined pulsar magnetosphere and its wind. We focus our study on a spherical unmagnetized companion settled in the pulsar wind, small enough so that its diameter is shorter than the wind stripes width, which excludes the regime of spider pulsars from our field of study. We investigate the structure of the magnetosphere, the enhancement of particle acceleration, and the high-energy emission arising from this interaction. Furthermore, we probe the impacts of the companion size and separation on these diagnostics. In the following, we first introduce the numerical model employed in this study (Section <ref>). Section <ref> describes our reference case, the isolated pulsar magnetosphere, recalling the main characteristics of the striped wind and the main kinetic and observational features. Our results on the pulsar-companion interaction are reported in Section <ref>, where we first describe the wind global dynamics before discussing particle acceleration and high-energy radiation. 
In Section <ref>, we outline the main results and their implications, and we discuss future prospects. § NUMERICAL MODEL We resort to the ab-initio PIC method, which allows us to capture both the global pulsar magnetosphere as well as the kinetic processes at play. We employ the 2.5D version of the relativistic electromagnetic Zeltron code <cit.>, in spherical coordinates <cit.>, restricted to the equatorial plane. We used nearly the same setup as the one presented in <cit.>, to which we added a companion in the pulsar wind. For the sake of completeness, the full setup is described below. §.§ Initial setup The computational domain is a disk made of (4096 x 4096) cells (see Fig. <ref>). Particles are therefore confined in the rϕ-plane, but we keep the three spatial components of the electromagnetic field and of the particle velocities. The spherical grid is linear in ϕ and logarithmic in r, so as to keep the cell aspect ratio constant with radius. It is well suited for pulsar winds since the particles density and the field amplitudes are decreasing functions of the radius, meaning that the relativistic plasma skin depth, c/ω_p (where ω_p is the plasma frequency) and the Larmor radius, r_L, increase with radius. The surface of the star, r=r_⋆, determines the inner boundary and the box extends radially until 24 R_ LC, where R_ LC is the light-cylinder radius set at R_ LC=3 r_⋆. The inner boundary absorbs all particles and an absorbing layer is implemented at r_ absorb=0.9 r_ max for both particles and fields in order to mimic an open boundary after which no information can go back inwards <cit.>. This assumption is reasonable given that the fast magnetosonic point lies well inside the outer boundary (see Section <ref>). The neutron star is modeled at the center of the box as a spherical perfect conductor with a constant angular velocity of Ω_ 𝐬𝐩𝐢𝐧=(c/R_ LC) 𝐮_𝐳, where the unity vector, 𝐮_𝐳, points in the out-of-plane direction. Initially, the star is in vacuum with magnetic field lines anchored at its surface. The implementation of a dipolar magnetic field for such equatorial configuration was proven to be inadequate (see ). We therefore replace it by a split-monopole configuration <cit.>, for which the initial magnetic field is purely radial and reverses across the plane perpendicular to the magnetic moment μ, so as to ensure ∇·𝐁=0 (see Fig. <ref>). By construction of the equatorial setup, the magnetic axis must be inclined at an angle χ = π/2. The perfect conductor condition applied to the pulsar surface in the corotating frame implies, by Lorentz transformation, a non-zero electric field in the simulation frame: 𝐄_⋆=-(Ω_ 𝐬𝐩𝐢𝐧×𝐫_⋆)×𝐁_⋆/c , where 𝐄_⋆ and 𝐁_⋆ are the fields at the surface of the star. This constraint starts the rotation of the magnetic field lines at t=0. The magnetosphere is initially empty and becomes progressively filled with electron-positron pairs that are evenly injected from the surface of the star at a rate of one macroparticle per cell per timestep for each species. They account for polar cap discharge and pair creation processes <cit.>. Indeed, we neglected any other pair creation process away from the star surface, given that pairs are mainly produced in the inner magnetosphere. The presence of ions (largely dominated by pairs in terms of number density) would not modify the magnetospheric structure and their radiative signature would be largely exceeded by pair losses. This justifies our choice not to model the ion extraction from the surface. 
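The grid described at the beginning of this section can be illustrated with a few lines (numpy sketch using the quoted box parameters; it is only meant to show that logarithmic spacing in r keeps the cell aspect ratio constant with radius, and it is not the actual grid implementation of the code):

import numpy as np

r_star = 1.0                                   # stellar radius (code units)
R_LC = 3.0 * r_star                            # light-cylinder radius
N_r = N_phi = 4096                             # cells in r and phi

r_edges = np.geomspace(r_star, 24.0 * R_LC, N_r + 1)   # logarithmic in r
phi_edges = np.linspace(0.0, 2.0 * np.pi, N_phi + 1)   # linear in phi

dr = np.diff(r_edges)
dphi = np.diff(phi_edges)[0]
r_mid = 0.5 * (r_edges[1:] + r_edges[:-1])

aspect = dr / (r_mid * dphi)                   # radial over azimuthal cell extent
print(aspect.min(), aspect.max())              # nearly identical: constant aspect ratio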
We set a high surface multiplicity of κ_⋆=n_⋆/n_ GJ=10, where n_⋆ is the density of the plasma injected at the surface of the star and n_ GJ=Ω B_⋆/2π e c is the Goldreich-Julian density <cit.> at the surface of the star. This ensures that the plasma is sufficiently dense to screen the parallel electric field (𝐄·𝐁=0). We also set a high plasma magnetization at the surface of the star: σ_⋆=B^2_⋆/4πΓ_⋆ n_⋆ m_e c^2, where Γ_⋆ is the surface plasma bulk Lorentz factor. Having very high multiplicity and plasma magnetization (i.e., κ≫1 and σ≫1) ensures that the quasi force-free limit is achieved in the magnetosphere and in the wind (except deviations in the current sheet), meaning that the Lorentz force dominates over any other force: the conservation of momentum equation therefore reads ρ𝐄+𝐉×𝐁/c=0, where ρ and 𝐉 are respectively the charge and current densities. Pair plasma is injected along the magnetic field lines with an initial velocity given by the drift velocity expected for the monopole solution in the force-free limit: 𝐕_D=c𝐄×𝐁/B^2. The scale separation in the computing box is reduced by several orders of magnitude (∼ 10^2-4) compared to a realistic pulsar to resolve the plasma kinetic scales. Our simulations are therefore best suited for millisecond pulsars. The plasma skin depth d_ e is resolved by 1.14 cells (Δ r) at the surface of the star (i.e., at its minimum value d_ e^⋆), (d_ e/Δ r)_ LC∼ 10 at r=R_ LC, and d_ e globally increases with radius up to a resolution of ∼ 24 cells at the outer boundary. The current sheet that forms in the magnetosphere (see Section <ref>) has a width of the order of d_ e. At r=R_ LC, the Larmor radius resolution goes from ∼ 1 cell in the wind to ∼ 70 cells inside the current sheet. The Larmor radius resolution then gradually increases with radius up to ∼ 10 cells in the wind at the edge of the box. §.§ Companion implementation We added a companion in the wind of the central pulsar (see Fig. <ref>). For simplification purposes, the companion is chosen to be an unmagnetized perfect conductor, with no wind and no intrinsic spin. Particles hitting its surface are absorbed. We aim to study the impact of the binary separation and of the companion radius on the pulsar magnetosphere and wind. Therefore, we ran six simulations for different choices of separations (d_ comp) and radii (r_ comp). We refer to Table <ref> for the full set of parameters. The pulsar wind is globally made of two nested Archimedean-shaped stripes of alterning magnetic polarity, with a wavelength of λ=2π R_ LC, separated by current sheets (see Section <ref>). We restricted our study to companions sizes ensuring δ_ cs < r_ comp < π R_ LC , where δ_ cs is the current sheet width. Indeed, we want the companion radius to be smaller than the semi-stripe wavelength so that the whole diameter holds between two successive reconnection layers. Such choice excludes the spider pulsar regime. Given our choice of companion separations and radii (see Table <ref>), the δ_ c.s./r_ comp ratio varies from 0.15 (run D2R1) to 0.9 (run D9R05). We probe different zones of the wind by changing the separation of the companion d_ comp. In particular, we wish to investigate the impact of the companion position with respect to the fast magnetosonic point ,r_ fms. This point is defined as the radius for which the wind velocity exceeds the Alfvén speed <cit.>: Γ_ fms=(B^2/4π n m_e c^2)^1/3=(Γσ)^1/3 , where σ is the plasma magnetization. 
We need the fast magnetosonic point to be far enough from the light cylinder radius as well as from the outer boundary of the box. This constraint leads us to fix a magnetization of σ_⋆=250 at the neutron star surface, meaning σ_ LC∼ 60 at the light-cylinder radius outside the current sheet. The pulsar-companion separation is assumed to be constant in time. Indeed, even considering a compact object binary made of millisecond pulsars and for our closest separation (i.e., d_ comp=2 R_ LC), the merger time due to the emission of gravitational waves is 300 times longer than the pulsars spin period (P_ spin) and 30 times longer than their orbital period (P_ orb), according to the approximate analytical expression of merger time derived in <cit.>. The companion is settled at rest in the simulation. Even for the closest binary separation (d_ comp=2 R_ LC), the orbital period computed considering the Keplerian orbit of a millisecond pulsar binary is 10 times longer than the pulsar spin period (P_ spin). This assumption, added to the perfect conductor condition, implies that 𝐄=0 at the surface and everywhere inside the companion. §.§ Fields and particle evolution Starting from the initial conditions given above, the PIC method allows us to couple the particles evolution along with time-dependent electromagnetic fields in a self-consistent way. Firstly, the Boris push <cit.> is used to evolve particles positions and velocities according to the Abraham-Lorentz-Dirac equation. This equation accounts for radiative energy losses by adding a radiation-reaction force (𝐟_ 𝐫𝐚𝐝) to the Lorentz force: d(γ m_e 𝐯)/ d t= q (𝐄+β×𝐁)+ 𝐟_ 𝐫𝐚𝐝 , where, for each particle, 𝐯=βc is the 3-velocity, q is the electric charge, and γ is the Lorentz factor. The radiation-reaction force is implemented according to the Landau-Lifshitz formula <cit.>, in the <cit.> approximation (see for full details) as: 𝐟_ 𝐫𝐚𝐝=2/3r^2_e [(𝐄+β×𝐁)×𝐁+(β·𝐄)𝐄] -2/3r^2_e γ^2 [(𝐄+β×𝐁)^2-(β·𝐄)^2] β, where r_e=e^2/m_e c^2 is the classical electron radius. Due to numerical costs, each simulated particle represents a large number of real particles, given by the weight, w_k, with the same q/m ratio and therefore following the same trajectory in phase space. Secondly, charges and currents are deposited on the grid, based on an area-weighting deposition scheme. Thirdly, electromagnetic fields are evolved on the grid by solving Maxwell-Faraday and Maxwell-Ampère equations through the finite-difference-time-domain algorithm <cit.>. While ∇·𝐁 =0 is automatically verified under these conditions, the conservation of charge is not ensured, due to machine truncation errors. We then solve the Poisson equation with the iterative Gauss-Seidel method every 25 timesteps (Δ t). The timestep is given by half the Courant-Friedrich-Lewy critical condition. We note that general relativistic corrections on the electrodynamics of the system are neglected in this work (see however ). The full list of numerical and physical parameters chosen for the simulations is given in Table <ref>. §.§ Radiation modeling Synchrotron radiation is the main source of high-energy emission beyond the light cylinder <cit.>. 
Each macroparticle of the simulation emits a macrophoton, representing a set of physical photons and emitting the following power <cit.>: 𝒫_ rad=2/3 r_e^2 c γ^2B̃_⊥^2 , with power spectrum d𝒫_ rad/ dν = √(3)e^3B_⊥/m_e c^2(ν/ν_c)∫^+∞_ν/ν_c K_5/3 (x) dx , where K_5/3 is the modified Bessel function of order 5/3, ν is the radiation frequency, ν_c = 3eB_⊥γ^2/4π m_e c is the critical frequency, and B_⊥ is defined in Eq. (<ref>). Around pulsars, the electric and magnetic field strengths are comparable. Instead of referring to the classical expression B_⊥=B sinα (where α is the angle between 𝐁 and the direction of the particle), valid for synchrotron radiation in a pure magnetic field, we use the effective perpendicular magnetic field strength, B_⊥, that takes into account an arbitrary electromagnetic field (see for further details): B_⊥ = √((𝐄+β×𝐁)^2 - (β·𝐄)^2) . Quantum electrodynamical effects reached above the critical magnetic field, B_ QED=m_e^2c^3/ħ e, are neglected since we globally have γB_⊥≪ B_ QED in the magnetosphere. Photons are emitted along the particles momentum. This is a good approximation in the presence of strong relativistic beaming since the emission cone has a semi-aperture angle of ∼ 1/γ≪ 1. Photons then propagate freely at the speed of light, without interacting between themselves nor with the magnetic field. However, they are absorbed if they hit the star or the companion. General relativistic effects on the photons trajectories are neglected. We collect the photons on a screen at infinity. In this problem, we focus on the orbital modulation of the light curves. As previously mentioned, the companion is at rest in the simulation, under the assumption that P_ orb≫ P_ spin. However, assuming a circular orbit, we can model orbital-modulated light curves by placing observers all around the box. An isolated pulsar emits a pulsed radiation over its spin period. To compute such pulsed light curves, we need to take into account the time delay between photons emitted at different locations of the box when they reach the screen <cit.>. Nevertheless, when we focus on orbital-modulated light curves, the delay times are way shorter than the orbital timescales and we do not need to consider them. § REFERENCE CASE: ISOLATED PULSAR MAGNETOSPHERE The initial split-monopole condition represented in Figure <ref> quickly evolves as the magnetic field lines start rotating at t>0 to ensure the perfect conductor condition of the star (see Section <ref>). Pairs injected from the surface gradually fill the box. The magnetosphere converges to a stationary solution after ∼ 4.5 P_ spin (Fig. <ref>). Within the light cylinder, the magnetic equatorial regions are characterized by closed field lines that co-rotate with the star and trap the plasma, whereas open field lines of opposite polarities originating from the polar caps escape the light cylinder due to the rotation of the star and reconnect along the current sheet <cit.>. When the magnetic axis is inclined with respect to the rotation axis (recall χ=π/2 here), the current sheet takes the shape of an oscillatory structure in θ, with wavelength 2π R_ LC and angular aperture 2 χ, referred to as the “striped wind” <cit.>. A 2D equatorial cut of the current sheet results in two Archimedean spirals separated by stripes π R_ LC wide of alterning magnetic field polarities, conveying the outflowing cold relativistic wind (see Fig. <ref>a). Figure <ref>c shows a snapshot of the bulk Lorentz factor of the wind at t=12.4 P_ spin. 
On average, the bulk Lorentz factor of the wind increases almost linearly until it reaches Γ = 3.9 at the fast magnetosonic point, r_ FMS = 5.1 R_ LC. After this point, the wind continues to accelerate at a slower rate up to Γ∼ 7.5 at the box edge. Due to magnetic reconnection locally attracting particles, the wind is slowed down on the leading edge of the current sheet whereas its acceleration increases on the trailing edge of the spiral. Right after its formation near the light cylinder, the current sheet fragments due to the relativistic tearing instability <cit.>, giving rise to a chain of plasma overdensities confined in magnetic islands, called plasmoids <cit.>. Plasmoids then gradually grow by merging with each other while flowing outwards along the spirals. In between plasmoids, short current layers (referred to as X-points) allow for relativistic magnetic reconnection <cit.>, which is the main physical process responsible for magnetic energy dissipation into particles kinetic energy <cit.>. About 24 % of the Poynting flux reservoir (i.e., the spindown power extracted from the star) is consumed via magnetic reconnection and converted into kinetic energy between the light cylinder and the outer part of the box. Reconnection is particularly efficient at accelerating particles for r < 2 R_ LC. Accelerated particles escaping the X-points are trapped by plasmoids. Figure <ref>b shows the mean particle acceleration (⟨γ⟩). A significant fraction of kinetic energy is radiated away through synchrotron emission (see Fig. <ref>d). As expected, non-thermal radiation is emitted from the current sheet and decays with radius given that the density and B^2_⊥ decrease with radius as 1/r^2 (see Eq 4). In the end, 0.7 % of the spindown power is converted into high-energy radiation. Due to time delay effects, the synchrotron radiation is pulsed, with two short bright pulses per stellar spin period <cit.>. We defined a “reference flux,” computed by averaging this pulsed radiation over one pulsar spin period, to later compare it with the orbital-modulated light curves computed in the presence of the companion (see Section <ref>). § PULSAR-COMPANION INTERACTION We carry out a parametric study in order to probe the impact of the companion on the pulsar wind, depending on the binary separation (d_ comp) and on the companion size (r_ comp). Table <ref> details the parameters chosen for the six simulations we ran. §.§ Wind dynamics Figure <ref> presents the same diagnostics as the ones we show for the isolated pulsar (Section <ref>), but in the presence of a companion at d_ comp= 9 R_ LC and r_ comp=r_⋆ (run D9R1). The conductor is settled beyond the fast magnetosonic point (r_ fms∼ 5 R_ LC) for this run. We observe an alteration of the magnetosphere (Fig. <ref>, panel a), but (as expected) the perturbations are advected by the pulsar wind and, therefore, they stay in a cone behind the companion. On the contrary, when d_ comp≤ r_ fms (runs D2R1 and D5R1), perturbations propagate faster than the outflowing wind and the whole magnetosphere is affected. As the wind reaches the companion, its bulk Lorentz factor sharply drops along a wider cone, indicating the presence of a shock (see Fig. <ref>c). Given the strong magnetization of the magnetosphere, we did not expect large discontinuities in the magnetic field, nor in the density, that would provide evidence of a significant compression <cit.>. 
Figure <ref> presents, for each of our simulations, the bulk Lorentz factor of the wind averaged over several spin periods of the pulsar, to which we added the fast magnetosonic surfaces. Averaging over several spin periods enables us to get rid of the wind striped structure and of local inhomogeneities, so as to keep only the overall bulk motion. We see that for pulsar-companion separations larger than the radius of the fast magnetosonic surface (lying at r_ fms∼ 5 R_ LC according to the pulsar parameters set in this study), a shock is systematically established. However, when the companion is settled before the fast magnetosonic surface d_ comp≤ 5 r_ fms (i.e., for runs D2R1 and D5R1), no shock appears. The wind is slowed down over the whole box, more isotropically and more intensely as the orbital separation decreases. We notice that, in this case, the fast magnetosonic surface gets disrupted but keeps encompassing the companion. The local enhancement of Γ that appears in the wake of the companion is due to artefacts of the very low plasma density. When a shock is formed in a plane-parallel and uniform flow, the cone aperture angle (ζ) can be related to the relativistic Alfvénic Mach number (ℳ) computed at the apex of the cone via the following relation: sin(ζ/2) ∼1/ℳ , where ℳ=βΓ/β_ AΓ_ A, given that β_ A=√(σ/(1+σ)) <cit.> and Γ_ A are the Alfvén velocity and the Alfvén Lorentz factor, respectively. The measured values from the simulation (see ζ on the top right panel of Fig. <ref>) are in reasonably good agreement with the law presented in Eq. (<ref>), with relative errors of less than 28 %. The concordance is particularly remarkable given the high inhomogeneity of the striped wind and the tilt in the cone direction, which lead to different wind velocities reached for a same distance from the apex of the cone. Increasing the companion radius (r_ comp) leads to an increase of the cone aperture angle (ζ). The altered part of the magnetosphere (Fig. <ref> panel a) displaying enhanced particle acceleration and radiation lies inside the shocked zone, so that the aperture angle of the shocked cone also gives an upper boundary to the aperture of the emitting cone (see Fig. <ref>d). §.§ Particle kinetics The presence of the companion in the magnetosphere of the pulsar enhances particle acceleration regardless of the binary separation. This comes from the compression of the magnetic field lines reaching the companion, which results in a forced reconnection. Indeed, as can be seen in Figure <ref>, the forward part of the current sheet is first slowed down while approaching the companion (starting from a distance of about 1 r_ comp). It then bends backwards but ends up reaching the companion surface as the current sheet continues to progress radially. After the companion surface is reached, the current sheet eventually breaks apart around the obstacle and the two branches of enhanced particle acceleration flow outwards, creating two radial lines of enhanced particle acceleration. In fact, the whole cone between these two lines represents a favorable zone for particle acceleration (see Fig. <ref> panel b). However, the density is extremely low in the wake of the companion, so that most of the highly accelerated particles flow along the borders of the cone. This can be seen in Fig. <ref>d representing the emitted radiation power P_ rad, which is a good tracer of particle acceleration since the mean Lorentz factor (⟨γ⟩) is weighted by the local density. 
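Coming back to the shock geometry of the previous subsection, the aperture relation sin(ζ/2) ∼ 1/ℳ is straightforward to evaluate once Γ and σ are known at the apex of the cone. The sketch below (numpy; the (Γ, σ) pairs are purely illustrative and are not values measured in the runs) shows how the cone closes as the wind becomes more strongly super-magnetosonic:

import numpy as np

def cone_aperture_deg(gamma, sigma):
    # sin(zeta/2) ~ 1/M with M = beta*Gamma / (beta_A*Gamma_A), where
    # beta_A = sqrt(sigma/(1+sigma)) so that beta_A*Gamma_A = sqrt(sigma).
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    mach = beta * gamma / np.sqrt(sigma)
    if mach <= 1.0:
        return None                  # sub-magnetosonic flow: no shock cone forms
    return 2.0 * np.degrees(np.arcsin(1.0 / mach))

for gamma, sigma in [(5.0, 10.0), (6.0, 16.0), (7.5, 16.0)]:    # illustrative only
    print(gamma, sigma, cone_aperture_deg(gamma, sigma))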
Particle acceleration continues to increase with radius until the end of the box, since magnetic reconnection continues to operate and radiative losses diminish with radius. Figure <ref> shows the particle energy spectra for all runs. The presence of the companion induces a localized bump in the spectrum, centered on different energies depending on the binary separation. While the separation of 2 R_ LC leads to a bump centered on γ∼ 17, the separation of 5 R_ LC seems to be the most favourable to reach the highest energies, with the bump being centered on γ∼ 200. The radius of the companion does not seem to play a significant role in the particle energy spectra (see orange lines on Fig. <ref>). A domain decomposition based on the shocked zone (see Fig. <ref>) confirms that the observed hardening of the spectra entirely comes from the shocked part of the wind while, outside the shocked surface, the spectral shape is the same as for an isolated pulsar. §.§ Electromagnetic signature As previously mentioned (Section <ref>), synchrotron losses account for most of the high-energy radiation in the studied system. As can be seen in Figure <ref> for the isolated pulsar configuration, synchrotron losses originate from the particles previously accelerated in the reconnection layers and confined inside the plasmoids along the current sheet. In particular, synchrotron emission is predominant in the inner parts of the magnetosphere, where the magnetic field is stronger (B ∝ 1/r) and the density higher (n ∝ 1/r^2). However, the presence of a companion adds a dominant contribution to the usual synchrotron losses. Indeed, when the current sheet hits the conductor surface, particles are further accelerated (see previous section) and the current sheet lights up again (see Fig. <ref>). The current sheet is torn apart by the companion and the two separated branches resulting from this interaction flow radially beyond the companion, creating a hollow cone of light (as shown in Figure <ref>, panel d). Figure <ref> compares the synchrotron spectra for all runs. As expected, for all binary separations, the presence of the companion significantly enhances synchrotron losses and shifts the peaks of the spectra to higher frequencies. Here again, the separation d_ comp=5 R_ LC seems to optimise the emission of synchrotron radiation, which peaks at a higher frequency than in the other runs. The excess of synchrotron emission decreases with the binary separation. This mainly comes from the decrease of the density and the magnetic field strength with radius, despite the slower growth of γ with radius. We notice a slight upward shift of the spectrum when doubling the companion size (r_ comp=2 r_⋆), and a smaller downward shift when halving it (r_ comp=0.5 r_⋆). Figure <ref> compares, for the run D9R1, the synchrotron spectrum emitted by the shocked part of the magnetosphere with the one emitted by the unshocked part. The unshocked part of the magnetosphere displays the same spectrum as the isolated pulsar magnetosphere, indicating that the additional contribution exclusively comes from the shocked region of the magnetosphere. Orbital light curves for all runs are shown in Figure <ref>. We represent the high-energy synchrotron emission taken above the fiducial synchrotron frequency ν_0=3 e B_⋆/(4π m c), which previous studies have shown to be an appropriate threshold for studying synchrotron emission from the current sheet (see ).
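For reference, the fiducial frequency just defined is straightforward to evaluate. The Python sketch below does so with CGS constants; the surface field value passed in is purely illustrative, since the (rescaled) field used in the simulations is not quoted in this section:

import math

E_CGS = 4.8032e-10   # elementary charge [statC]
M_E   = 9.1094e-28   # electron mass [g]
C_CGS = 2.9979e10    # speed of light [cm/s]

def nu0(b_star_gauss):
    # Fiducial synchrotron frequency nu_0 = 3 e B_star / (4 pi m c), in Hz (B in gauss).
    return 3.0 * E_CGS * b_star_gauss / (4.0 * math.pi * M_E * C_CGS)

print(f"nu_0 = {nu0(1.0e5):.3e} Hz")   # B_star = 1e5 G is an assumed, illustrative value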
Light curves are integrated over the full polar angle range and are normalized by the reference flux, computed by averaging the pulsed radiation emitted by the isolated pulsar magnetosphere over P_ spin (see Section <ref>). All light curves present a significant enhancement of the radiation flux, with two broad peaks corresponding to the passing of the observer's line of sight through the edges of the hollow cone of emission (see Fig. <ref>d). The peak height varies from 3.5 to 14 times the reference flux, depending on the companion separation and size. Larger companion radii (orange lines in Fig. <ref>) induce higher peak intensities and a larger orbital phase separation between the peaks. The binary separation has an impact on the intensity of the peaks, their widths, the separation between them (ΔΦ), and the light cone orientation with respect to the pulsar-companion direction (δ_ offset). The exact dependence of these parameters on the pulsar-companion separation is presented in Figure <ref>. The upper panel shows that the total synchrotron power emitted over an orbit scales as d_ comp^-0.56 (red line). The orbital phase separation between the two peaks (light blue points in the middle panel of Fig. <ref>) decreases with separation as d_ comp^-0.68 (light blue line). We note that it scales similarly to the corresponding 1/Γ factors, computed for each run at the apex of the shocked cone (dark blue points in the middle panel of Fig. <ref>), suggesting that the peak separation is shaped by relativistic beaming effects. In Figure <ref>, the points corresponding to the 1/Γ scaling are normalized by a factor of 0.55. This discrepancy can be explained by the values of Γ inferred from the averaged maps (shown in Fig. <ref>), which erase any inhomogeneities. Indeed, as can be seen in Figure <ref>c, Γ can locally be amplified by up to a factor of 2 inside the current sheet near the companion. The lower panel demonstrates that the angular offsets of the hollow light cones with respect to the pulsar-companion direction are determined by the plasma drift velocity at the companion radius. Indeed, the values of tan (δ_ offset) for each companion separation (green crosses) are given by the corresponding drift velocity ratios V_ϕ/V_r=R_ LC/r (green line) taken at the companion location and computed according to the monopole analytical solution (derived in ). We note that the shocked cones represented in Figure <ref> are tilted, for each run, by exactly the corresponding angle δ_ offset. § DISCUSSION AND CONCLUSION In this work, we study the interaction of a pulsar wind with a similarly sized companion to assess the extent to which the pulsar magnetosphere reshapes itself. We probe the enhancement and location of particle acceleration to predict the non-thermal radiation originating from such systems. We show that the interaction between a pulsar wind and a companion significantly alters the dynamical and energetic properties of the wind. The pulsar wind is slowed down rather isotropically if the companion lies within the fast magnetosonic point. Otherwise, when the companion lies beyond the fast magnetosonic point, a shock is established and all the perturbations are advected in a cone behind the companion. Each time the outflowing wind current sheet impacts the companion surface, we observe a forced reconnection that leads to a significant enhancement of particle acceleration. Reaccelerated particles form a hollow cone behind the companion.
By doing so, they induce an orbital-modulated hollow cone of high-energy synchrotron radiation, whose intensity and aperture depend on the orbital separation and the size of the companion. Hence, non-thermal radiation from such systems is significantly enhanced as well, compared to the reference pulsed flux of an isolated pulsar magnetosphere. The 2D framework used here may introduce some artifacts. First, the magnetic field is confined in the equatorial plane, which may have an impact on the rearrangement of the field lines. It is worth noting that our results give an upper limit in terms of particle acceleration and high-energy emission. Indeed, the equatorial configuration necessarily implies that all the magnetic field lines accumulating ahead of the companion reconnect, possibly overestimating the amount of dissipation into kinetic energy. In addition, in 2D, particles accelerated by magnetic reconnection escape the X-points and are all trapped by the neighbouring plasmoids; a third dimension, however, would allow the accelerated particles to leak out of the plasmoids <cit.>. This could slightly alter the shape of the hollow light cone. As mentioned at the beginning of this study, the scale separation that we have been able to reach in these PIC simulations is reduced by several orders of magnitude compared to a realistic pulsar magnetosphere. This numerical limitation constrains the frequency range of the high-energy radiation, leaving many uncertainties on the spectral limits and the spectral shape – but not on the total energy flux, which is conserved. The presence of a companion wind would imply a larger effective surface, but should not radically alter the nature of the shock. A slight asymmetry between the two peaks appears in the light curves. Taking into account the orbital motion of the companion could potentially exacerbate this asymmetry. We predict an orbital-modulated emission that could originate from the orbital motion of asteroids, planets, or neutron stars around the pulsar, with a higher chance of observing such systems if they are seen close to edge-on. The characteristic frequency of the non-thermal radiation is expected to fall in the soft gamma-ray band. We therefore expect these transients to be observable only at Galactic distances. While not investigated in this work, radio counterparts are expected in addition to the high-energy non-thermal radiation. Indeed, in the wake of the companion, where the wind is altered, we observe the propagation of small-scale fast magnetosonic modes. According to <cit.> and <cit.>, the collision of plasmoids with each other and with the ambient magnetic field perturbs the field and induces short fast magnetosonic pulses that eventually escape the plasma as radio waves. Other coherent mechanisms of radio emission could also be at play, such as the cyclotron maser instability, which was initially invoked to describe fast radio bursts produced when an outflowing pulsar wind crosses the Alfvén wings <cit.> formed behind a companion <cit.>. Galactic radio counterparts would be of significant interest, especially in light of the recently discovered Galactic long-period radio transients <cit.>. These periodic radio transients are characterized by very long periods, ranging from tens to thousands of seconds. Their sources have not been uniquely identified yet; however, binary systems including a pulsar have been noted as possible source candidates <cit.>.
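As a rough quantitative summary of the orbital-modulation results reported above, the Python sketch below combines the empirical scalings quoted in the previous section (total synchrotron power ∝ d_comp^-0.56, peak separation ∝ d_comp^-0.68, and tan δ_offset = R_LC/d_comp); the overall normalisations are not reproduced here, so the first two quantities are relative trends only:

import math

def lightcurve_morphology(d_comp_over_rlc):
    # Relative trends only; absolute normalisations are set by the simulations.
    d = d_comp_over_rlc
    rel_power = d ** -0.56                            # total synchrotron power over an orbit
    rel_peak_sep = d ** -0.68                         # orbital phase separation between the two peaks
    delta_offset = math.degrees(math.atan(1.0 / d))   # cone tilt, tan(delta) = R_LC / d
    return rel_power, rel_peak_sep, delta_offset

for d in (2.0, 5.0, 9.0):                             # separations used in the runs, in units of R_LC
    print(d, lightcurve_morphology(d))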
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 863412). Computing resources were provided by TGCC under the allocation A0150407669 made by GENCI.
http://arxiv.org/abs/2406.18105v1
20240626064616
Excitation energies from state-specific ensemble density functionals with density-driven correlations
[ "Tim Gould", "Stephen G Dale", "Leeor Kronik", "Stefano Pittalis" ]
physics.chem-ph
[ "physics.chem-ph" ]
http://arxiv.org/abs/2406.18624v1
20240626125055
Robust Low-Cost Drone Detection and Classification in Low SNR Environments
[ "Stefan Glüge", "Matthias Nyfeler", "Ahmad Aghaebrahimian", "Nicola Ramagnano", "Christof Schüpbach" ]
eess.SP
[ "eess.SP", "cs.LG" ]
Institute of Computational Life Sciences, Zurich University of Applied Sciences, 8820 Wädenswil, Switzerland Institute for Communication Systems, Eastern Switzerland University of Applied Sciences, 8640 Rapperswil-Jona, Switzerland Armasuisse Science + Technology, 3602 Thun, Switzerland Corresponding author: Stefan Glüge (email: stefan.gluege@zhaw.ch). § ABSTRACT The proliferation of drones, or uav, has raised significant safety concerns due to their potential misuse in activities such as espionage, smuggling, and infrastructure disruption. This paper addresses the critical need for effective drone detection and classification systems that operate independently of uav cooperation. We evaluate various cnn for their ability to detect and classify drones using spectrogram data derived from consecutive Fourier transforms of signal components. The focus is on model robustness in low snr environments, which is critical for real-world applications. A comprehensive dataset is provided to support future model development. In addition, we demonstrate a low-cost drone detection system using a standard computer, sdr and antenna, validated through real-world field testing. On our development dataset, all models consistently achieved an average balanced classification accuracy of ≥ 85% at snr >-12 dB. In the field test, these models achieved an average balanced accuracy of >80%, depending on transmitter distance and antenna direction. Our contributions include: a publicly available dataset for model development, a comparative analysis of cnn for drone detection under low snr conditions, and the deployment and field evaluation of a practical, low-cost detection system. Deep neural networks, Robustness, Signal detection, Unmanned aerial vehicles Robust Low-Cost Drone Detection and Classification in Low SNR Environments Stefan Glüge1, Matthias Nyfeler1, Ahmad Aghaebrahimian1, Nicola Ramagnano2 and Christof Schüpbach3, Fellow, IEEE July 1, 2024 § INTRODUCTION Drones, or civil uav, have evolved from hobby toys to commercial systems with many applications. In particular, mini/amateur drones have become ubiquitous. With the proliferation of these low-cost, small and easy-to-fly drones, safety issues have become more pressing (e.g. spying, transfer of illegal or dangerous goods, disruption of infrastructure, assault). Although regulations and technical solutions (such as transponder systems) are in place to safely integrate uav into the airspace, detection and classification systems that do not rely on the cooperation of the uav are necessary. Various technologies such as audio, video, radar, or rf scanners have been proposed for this task <cit.>. In this paper, we evaluate different cnn for drone detection and classification using the spectrogram data computed with consecutive Fourier transforms for the real and imaginary parts of the signal. To facilitate future model development, we make the dataset publicly available. In terms of performance, we focus on the robustness of the models to low snr, as this is the most relevant aspect for a real-world application of the system. Furthermore, we evaluate a low-cost drone detection system consisting of a standard computer, sdr, and antenna in a real-world field test.
Our contributions can therefore be summarised as follows: * We provide the dataset used to develop the model. Together with the code to load and transform the data, it can be easily used for future model development. * We compare different cnn using 2D spectrogram data for detection and classification of drones based on their rf signals under challenging conditions, i.e. low snr down to -20 dB. * We visualise the model embeddings to understand how the model clusters and separates different classes, to identify potential overlaps or ambiguities, and to examine the hierarchical relationships within the learned features. * We implement the models in a low-cost detection system and evaluate them in a field test. §.§ RELATED WORK A literature review on drone detection methods based on dl is given in <cit.> and <cit.>. Both works reflect the state of the art in 2024. Different dl algorithms are discussed with respect to the techniques used to detect drones based on visual, radar, acoustic, and rf signals. Given these general overviews, we briefly summarise recent work based on rf data, with a particular focus on the data side of the problem to motivate our work. With the advent of dl-based methods, the data used to train models became the cornerstone of any detection system. Table <ref> provides an overview of openly available datasets of rf drone signals. The DroneRF dataset <cit.> is one of the first openly available datasets. It contains rf time series data from three drones in four flight modes (i.e. on, hovering, flying, video recording) recorded by two usrp sdr transceivers <cit.>. The dataset is widely used and enabled follow-up work with different approaches to classification systems, i.e. dl-based <cit.>, focused on pre-processing and combining signals from two frequency bands <cit.>, genetic algorithm-based heterogeneous integrated k-nearest neighbour <cit.>, and hierarchical reinforcement learning-based <cit.>. In general, the classification accuracies reported in the papers on the DroneRF dataset are close to 100%. Specifically, <cit.>, <cit.>, and <cit.> report an average accuracy of 99.7%, 100%, and 99.98%, respectively, to detect the presence of a drone. There is therefore an obvious need for a harder, more realistic dataset. Consequently, <cit.> investigate the detection and classification of drones in the presence of Bluetooth and Wi-Fi signals. Their system used a multi-stage detector to distinguish drone signals from the background noise and interfering signals. Once a signal was identified as a drone signal, it was classified using ml techniques. The detection performance of the proposed system was evaluated for different snr. The corresponding recordings (17 drone controls from eight different manufacturers) are openly available <cit.>. Unfortunately, the Bluetooth/Wi-Fi noise is not part of the dataset. Ozturk et al. <cit.> used the dataset to further investigate the classification of rf fingerprints at low snr by adding white Gaussian noise to the raw data. Using a cnn, they achieved classification accuracies ranging from 92% to 100% for snr ∈[-10, 30]dB. The openly available DroneDetect dataset <cit.> was created by Swinney and Woods <cit.>. It contains raw iq data recorded with a BladeRF sdr. Seven drone models were recorded in three different flight modes (on, hovering, flying). Measurements were also repeated with different types of noise, such as interference from a Bluetooth speaker, a Wi-Fi hotspot, and simultaneous Bluetooth and Wi-Fi interference. 
The dataset does not include measurements without drones, which would be necessary to evaluate a drone detection system. The results in <cit.> show that Bluetooth signals are more likely to interfere with detection and classification accuracy than Wi-Fi signals. Overall, frequency domain features extracted from a cnn were shown to be more robust than time domain features in the presence of interference. In <cit.> the drone signals from the DroneDetect dataset were augmented with Gaussian noise and sdr-recorded background noise. Hence, the proposed approach could be evaluated regarding its capability to detect drones. They trained a cnn end-to-end on the raw iq data and report an accuracy of 99% for detection and between 72% and 94% for classification. The Cardinal RF dataset <cit.> consists of the raw time series data from six drones + controller, two Wi-Fi and two Bluetooth devices. Based on this dataset, Medaiyese et al. <cit.> proposed a semi-supervised framework for uav detection using wavelet analysis. Accuracies between 86% and 97% were achieved at snr of 30 dB and 18 dB, while performance dropped to chance level for snr below 6 to 10 dB. In addition, <cit.> investigated different wavelet transforms for feature extraction from the rf signals. Using the wavelet scattering transform from the steady state of the rf signals at 30 dB snr to train SqueezeNet <cit.>, they achieved an accuracy of 98.9% at 10 dB snr. In our previous work <cit.>, we created the noisy drone rf signals dataset[<https://www.kaggle.com/datasets/sgluege/noisy-drone-rf-signal-classification>] from six drones and four remote controllers. It consists of non-overlapping signal vectors of 16384 samples, corresponding to ≈1.2 ms at 14 MHz. We added Labnoise (Bluetooth, Wi-Fi, Amplifier) and Gaussian noise to the dataset and mixed it with the drone signals at snr ∈[-20, 30] dB. Using iq data and spectrogram data to train different cnn, we found an advantage in favour of the 2D spectrogram representation of the data. There was no performance difference at snr ≥0 dB but a major improvement in the balanced accuracy at low snr levels, i.e. 84.2% on the spectrogram data compared to 41.3% on the iq data at -12 dB snr. Recently, <cit.> proposed an anchor-free object detector based on keypoints for drone rf signal spectrograms. They also proposed an adversarial learning-based data adaptation method to generate domain-independent and domain-aligned features. Given five different types of drones, they report a mean average precision of 97.36%, which drops to ≈ 55% when adding Gaussian noise at -25 dB snr. The raw data used in their work is available[<https://www.kaggle.com/datasets/zhaoericry/drone-rf-dataset>], but is unfortunately not usable without further documentation. §.§ MOTIVATION As we have seen in other fields, such as computer vision, the success of dl can be attributed to: (a) high-capacity models; (b) increased computational power; and (c) the availability of large amounts of labelled data <cit.>. Thus, given the large amount of available raw rf signals (cf. Tab. <ref>), we promote the idea of open and reusable data to facilitate model development and model comparison. With the noisy drone rf signals dataset <cit.>, we have provided a first ready-to-use dataset to enable rapid model development, without the need for any data preparation. Furthermore, the dataset contains samples that can be considered "hard" in terms of noise, i.e.
Bluetooth + Wi-Fi + Gaussian noise at very low snr, and allows a direct comparison with the published results. While the models proposed in <cit.> performed reasonably well in the training/lab setting, we found it difficult to transfer their performance to practical application. The reason was the choice of rather short signal vectors of 16384 samples, corresponding to ≈1.2 ms at 14 MHz. Since the drone signals occur in short bursts of ≈ 1.3 – 2 ms with a repetition period of ≈ 60 – 600 ms, our continuously running classifier predicts a drone whenever a burst occurs and noise during the repetition period of the signal. Therefore, in order to provide a stable and reliable classification per every second, one would need an additional “layer” to pool the classifier outputs given every 1.2 ms. In the present work, we follow a data-centric approach and simply increase the length of the input signal to ≈ 75 ms to train a classifier in an end-to-end manner. Again, we provide the data used for model development in the hope that it will inspire others to develop better models. In the next section, we briefly describe the data collection and preprocessing procedure. Section <ref> describes the model architectures and their training/validation method. In addition, we describe the setup of a low-cost drone detection system and of the field test. The resulting performance metrics are presented in Section <ref> and are further discussed in Section <ref>. § MATERIALS We used the raw rf signals from the drones that were collected in <cit.>. Nevertheless, we briefly describe the data acquisition process again to provide a complete picture of the development from the raw rf signal to the deployment of a detection system within a single manuscript. §.§ DATA ACQUISITION The drone's remote control and, if present, the drone itself were placed in an anechoic chamber to record the raw rf signal without interference for at least one minute. The signals were received by a log-periodic antenna and sampled and stored by an Ettus Research USRP B210, see Fig. <ref>. In the static measurement, the respective signals of the remote control (TX) alone or with the drone (RX) were measured. In the dynamic measurement, one person at a time was inside the anechoic chamber and operated the remote control (TX) to generate a signal that is as close to reality as possible. All signals were recorded at a sampling frequency of 56 MHz (highest possible real-time bandwidth). All drone models and recording parameters are listed in Tab. <ref>, including both uplink and downlink signals. We also recorded three types of noise and interference. First, Bluetooth/Wi-Fi noise was recorded using the hardware setup described above. Measurements were taken in a public and busy university building. In this open recording setup, we had no control over the exact number or types of active Bluetooth/Wi-Fi devices and the actual traffic in progress. Second, artificial white Gaussian noise was used, and third, receiver noise was recorded for 30 seconds from the usrp at various gain settings ([30,70] db in steps of 10 dB) without the antenna attached. This should prevent the final model from misclassifying quantisation noise in the absence of a signal, especially at low gain settings. §.§ DATA PREPARATION To reduce memory consumption and computational effort, we reduced the bandwidth of the signals by downsampling from 56 MHz to 14 MHz using the SciPy <cit.> signal.decimate function with an 8th order Chebyshev type I filter. 
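The downsampling step described above can be reproduced in a few lines. The Python sketch below (variable names are assumptions) decimates the complex baseband stream by a factor of 4, from 56 MHz to 14 MHz, using scipy.signal.decimate, whose default 'iir' setting corresponds to the 8th-order Chebyshev type I filter mentioned above:

import numpy as np
from scipy.signal import decimate

def downsample_iq(iq: np.ndarray, factor: int = 4) -> np.ndarray:
    # iq: complex baseband samples recorded at 56 MHz; the result is sampled at 14 MHz.
    return decimate(iq, factor, ftype="iir", zero_phase=True)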
The drone signals occur in short bursts with some low-power gain or background noise in between (cf. Tab. <ref>). We divided the signals into non-overlapping vectors of 1048576 samples (74.9 ms), and only vectors containing a burst, or at least a partial burst, were used for the development dataset. This was achieved by applying an energy threshold. As the recordings were made in an anechoic chamber, the signal burst is always clearly visible. Hence, we only used vectors that contained a portion of the signal whose energy was above the threshold, which was arbitrarily set at 0.001 of the average energy of the entire recording. The selected drone signal vectors x with samples i∈{1,… k} were normalised to a carrier power of 1 per sample, i.e. only the part of the signal vector containing drone bursts was considered for the power calculation (m samples out of k). This was achieved by identifying the bursts as those samples where a smoothed energy was above a threshold. The signal vectors x are thus normalised by x̂(i) = x(i) / √(1/m∑_i | x(i) | ^2), where the sum runs over the m burst samples. Noise vectors (Bluetooth, Wi-Fi, Amplifier, Gauss) n with samples i∈{1,… k} were normalised to a mean power of 1 with n̂(i) = n(i) / √(1/k∑_i | n(i) | ^2). Finally, the normalised drone signal vectors were mixed with the normalised noise vectors by ŷ(i) = (√(a)·x̂(i)+n̂(i))/√(a + 1), with the mixing factor a=10^SNR/10 (not to be confused with the vector length k), to generate the noisy drone signal vectors ŷ at different snr. §.§ DEVELOPMENT DATASET To facilitate future model development, we provide our resulting dataset[<https://www.kaggle.com/datasets/sgluege/noisy-drone-rf-signal-classification-v2>] along with a code example[<https://github.com/sgluege/noisy-drone-rf-signal-classification-v2>] to load and inspect the data. The dataset consists of the non-overlapping signal vectors of 2^20 samples, corresponding to ≈ 74.9 ms at 14 MHz. As described in Sec. <ref>, the drone signals were mixed with noise. More specifically, 50% of the drone signals were mixed with Labnoise (Bluetooth + Wi-Fi + Amplifier) and 50% with Gaussian noise. In addition, we created a separate noise class by mixing Labnoise and Gaussian noise in all possible combinations (i.e., Labnoise + Labnoise, Labnoise + Gaussian noise, Gaussian noise + Labnoise, and Gaussian noise + Gaussian noise). For the drone signal classes, as for the noise class, the number of samples for each snr level was evenly distributed over the interval of snr ∈ [-20, 30] dB in steps of 2 dB, i.e., 679-685 samples per snr level. The resulting number of samples per class is given in Tab. <ref>. In our previous work <cit.> we found an advantage in using the spectrogram representation of the data compared to the iq representation, especially at low snr levels. Therefore, we transform the raw iq signals by computing the spectrum of each sample with consecutive Fourier transforms over non-overlapping segments of length 1024 for the real and imaginary parts of the signal. That is, the two iq signal vectors ([2 × 2^20]) are represented as two matrices ([2 × 1024 × 1024]). Fig. <ref> shows four samples of the dataset at different snr. Note that we have plotted the log power spectrogram of the complex spectrum ŷ_fft as log_10|ŷ_fft| = log_10(√(Re(ŷ_fft)^2 + Im(ŷ_fft)^2)). §.§ DETECTION SYSTEM PROTOTYPE For field use, a system based on a mobile computer was used, as shown in Fig. <ref> and illustrated in Fig. <ref>. The rf signals were received using a directional left-hand circularly polarised antenna (H&S SPA 2400/70/9/0/CP).
The antenna gain of 8.5 dBi and the front-to-back ratio of 20 dB helped to increase the detection range and to attenuate unwanted interferers from the opposite direction. Circular polarisation was chosen to eliminate the alignment problem, as the transmitting antennas have a linear polarisation. The usrp B210 was used to down-convert and digitise the rf signal at a sampling rate of 14 Msps. On the mobile computer, the GNU Radio program collected the baseband iq samples in batches of one second and sent one batch at a time to our PyTorch model, which classified the signal. To speed up the computations in the model, we utilised an Nvidia GPU in the computer. The classification results were then visualised in real time in a dedicated GUI. § METHODS §.§ MODEL ARCHITECTURE AND TRAINING As in <cit.> we chose the vgg cnn architecture <cit.>. The main idea of this architecture is to use multiple layers of small (3 × 3) convolutional filters instead of larger ones. This is intended to increase the depth and expressiveness of the network, while reducing the number of parameters. There are several variants of this architecture, which differ in the number of weight layers (ranging from 11 to 19). We used the variants with a batch normalisation <cit.> layer after each convolution, denoted as VGG11_BN to VGG19_BN. For the dense classification layer, we used 256 linear units followed by 7 linear units at the output (one unit per class). A stratified 5-fold train-validation-test split was used as follows. In each fold, we trained a network using 80% and 20% of the available samples of each class for training and testing, respectively. Repeating the stratified split five times ensures that each sample was in the test set exactly once in each experiment. Within the training set, 20% of the samples were used as the validation set during training. Model training was performed for 200 epochs with a batch size of 8. The PyTorch <cit.> implementation of the Adam algorithm <cit.> was used with a learning rate of 0.005, betas (0.9, 0.999) and weight decay of 0. §.§ MODEL EVALUATION During training, the model was evaluated on the validation set after each epoch. If the balanced accuracy on the validation set increased, the model was saved. After training, the model with the highest balanced accuracy on the validation set was evaluated on the withheld test data. The performance of the models on the test data was assessed in terms of classification accuracy and balanced accuracy. As accuracy simply measures the proportion of correct predictions out of the total number of observations, it can be misleading for unbalanced datasets. In our case, the noise class is over-represented in the dataset (cf. Tab. <ref>). Therefore, we also report the balanced accuracy, which is defined as the average of the recall obtained for each class, i.e. it gives equal weight to each class regardless of how frequent or rare it is. §.§ VISUALISATION OF MODEL EMBEDDINGS Despite their effectiveness, cnn are often criticised for being "black boxes". Understanding the feature representations, or embeddings, learned by the cnn helps to demystify these models and provide some understanding of their capabilities and limitations. In general, embeddings are high-dimensional vectors generated by the intermediate layers that capture essential patterns from the input data. In our case, we chose the least complex VGG11_BN model to visualise its embeddings.
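A minimal PyTorch sketch of how such a VGG11_BN variant with the 256-unit head described above could be assembled is given below; the adaptation of the first convolution to two input channels and the exact layout of the custom head (including the activation between the two linear layers) are assumptions of this sketch rather than details taken from the paper:

import torch
import torch.nn as nn
from torchvision.models import vgg11_bn

def build_model(num_classes: int = 7) -> nn.Module:
    model = vgg11_bn(weights=None)
    # Two input channels: the spectrograms of the real and imaginary signal parts.
    model.features[0] = nn.Conv2d(2, 64, kernel_size=3, padding=1)
    # Custom head: 256 hidden units followed by one output unit per class.
    model.classifier = nn.Sequential(
        nn.Linear(512 * 7 * 7, 256),
        nn.ReLU(inplace=True),
        nn.Linear(256, num_classes),
    )
    return model

model = build_model()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005,
                             betas=(0.9, 0.999), weight_decay=0.0)
criterion = nn.CrossEntropyLoss()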
When running inference on the test data, we collected the activations at the last dense classification layer, which consists of 256 units. Given 3549 test samples, this results in a 256 × 3549 matrix. Using tsne <cit.> and umap <cit.> as dimensionality reduction techniques, we project these high-dimensional embeddings into a lower-dimensional space, creating interpretable visualisations that reveal the model's internal data representations. Our goals were to understand how the model clusters and separates different classes, to identify potential overlaps or ambiguities, and to examine the hierarchical relationships within the learned features. §.§ DETECTION SYSTEM FIELD TEST We conducted a field test of the detection system in Rapperswil at Lake Zurich. The drone detection prototype was placed on the shore (cf. Fig. <ref>) in line of sight of a wooden boardwalk across the lake, with no buildings to interfere with the signals. The transmitters were mounted on a 2.5 m long wooden pole. The signals from the transmitters were recorded (and classified in real time) at four positions along the boardwalk at approximately 110 m, 340 m, 560 m and 670 m from the detection system. Figure <ref> shows an overview of the experimental setup. At each recording position, we measured with the directional antenna at three different angles, i.e. at 0^∘ – facing the drones and/or remote controls, at 90^∘ – perpendicular to the direction of the transmitters, and at 180^∘ – in the opposite direction. Directing the antenna in the opposite direction should result in ≈ 20 dB attenuation of the radio signals. Table <ref> lists the drones and/or remote controls used in the field test. Note that the Graupner drone and remote control are part of the development dataset (cf. Tab. <ref>), but were not measured in the field experiment. We assume that no other drones were present during the measurements, so recordings where none of our transmitters were used are labelled as "Noise". For each transmitter, distance, and angle, 20 to 30 s of signal, or approximately 300 spectrograms, were classified live and recorded. The resulting number of samples for each class, distance, and angle is shown in Tab. <ref>. § RESULTS §.§ CLASSIFICATION PERFORMANCE ON THE DEVELOPMENT DATASET Table <ref> shows the mean ± standard deviation of accuracy and balanced accuracy on the test data of the development dataset (cf. Sec. <ref>), obtained in the 5-fold cross-validation of the different models. There is no meaningful difference in performance between the models, even when the model complexity increases from VGG11_BN to VGG19_BN. The number of epochs for training (#epochs) shows when the highest balanced accuracy was reached on the validation set. It can be seen that the least complex model, VGG11_BN, required the fewest epochs compared to the more complex models. However, the resulting classification performance is the same. Figure <ref> shows the resulting 5-fold mean balanced accuracy over snr ∈ [-20, 30]dB in 2 dB steps. Note that we do not show the standard deviation to keep the plot readable. In general, we observe a drastic degradation in performance from -12 dB down to near chance level at -20 dB. The vast majority of misclassifications occurred between noise and drones and not between different types of drones. Figure <ref> illustrates this fact. It shows the confusion matrix for the VGG11_BN model for a single validation on the test data for the samples with -14 dB snr.
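The balanced accuracy reported throughout can be computed directly from such a confusion matrix as the mean per-class recall. A minimal Python sketch (equivalent in spirit to scikit-learn's balanced_accuracy_score):

import numpy as np
from sklearn.metrics import confusion_matrix

def balanced_accuracy(y_true, y_pred):
    # Mean recall over classes: each class counts equally, however rare it is.
    cm = confusion_matrix(y_true, y_pred)
    per_class_recall = cm.diagonal() / cm.sum(axis=1)
    return per_class_recall.mean()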
§.§ EMBEDDING SPACE VISUALISATION Figure <ref> shows the 2D tsne visualisation of the VGG11_BN embeddings of 3549 test samples from the development dataset. It can be seen that each class forms a separate cluster. While the different drone signal clusters are rather small and dense, the noise cluster takes up most of the embedding space and even forms several sub-clusters. This is most likely due to the variety of the signals used in the noise class, i.e. Bluetooth and Wi-Fi signals plus Gaussian noise. We used tsne for dimensionality reduction because of its ability to preserve local structure within the high-dimensional embedding space. Furthermore, tsne has been widely adopted in the ml community and has a well-established track record for high-dimensional data visualisation. However, it is sensitive to hyperparameters such as perplexity and requires some tuning, i.e. different parameters can lead to considerably different results. It can be argued that umap would be a better choice due to its balanced preservation of local and global structure together with its robustness to hyperparameters. Therefore, we created a web application[<https://visvgg11bndronerfembeddings.streamlit.app>] that allows users to test and compare both approaches with different hyperparameters. §.§ CLASSIFICATION PERFORMANCE IN THE FIELD TEST For each model architecture, we performed 5-fold cross-validation on the development dataset (cf. Sec. <ref>), resulting in five trained models per architecture. Thus, we also evaluated all five trained models on the field test data. We report the balanced accuracy ± standard deviation for each model architecture for the complete field test dataset, averaged over all directions and distances, in Tab. <ref>. As observed on the development dataset (cf. Tab. <ref>), there is no meaningful difference in performance between the model architectures. We therefore focus on VGG11_BN, the simplest model trained, in the more detailed analysis of the field test results. A live system should trigger an alarm when a drone is present. Therefore, the question of whether the signal is from a drone at all is more important than predicting the correct type of drone. Hence, we also evaluated the models in terms of a binary problem with two classes, "Drone" (for all six classes of drones in the development dataset) and "Noise". Table <ref> shows that the accuracies were highly dependent on the class. Our models generalise well to the drones in the dataset, with the exception of the DJI. The dependence on direction is not as strong as expected. Orienting the antenna 180^∘ away from the transmitter reduces the signal power by about 20 dB, resulting in lower snr and lower classification accuracy. However, as the transmitters were still quite close to the antenna, the effect is not pronounced. As we have seen on the development dataset in Fig. <ref>, there is a clear drop in accuracy once the snr is below -12 dB. Apparently, we were still above this threshold, regardless of the direction of the antenna. What may be surprising is the low accuracy on the signals with no active transmitter, labelled as "Noise", in the direction of the lake (0^∘). Given the uncontrolled nature of a field test, it could well be that a drone was actually flying on the other side of the 2.3 km wide lake. This could explain the false positives we observed in that direction.
Table <ref> shows the average balanced accuracy of the VGG11_BN models on the field test data collected at different distances for each antenna direction. There is a slight decrease in accuracy with distance. However, the longest distance of 670 m appears to be too short to be a problem for the system. Unfortunately, this was the longest line-of-sight distance that could be recorded at this location. Figure <ref> shows the confusion matrix for the outputs of the VGG11_BN model of a single fold on the field test data. As with the development dataset (cf. Fig. <ref>), most of the confusion is between noise and drones rather than between different types of drones. § DISCUSSION We were able to show that a standard cnn, trained on drone rf signals recorded in a controlled laboratory environment and artificially augmented with noise, generalised well to the more challenging conditions of a real-world field test. The drone detection system consisted of rather simple and low-budget hardware (consumer-grade notebook with GPU + sdr). Recording parameters such as sampling frequency, length of input vectors, etc. were set to enable real-time detection with the limited amount of memory and computing power. This means that data acquisition, pre-processing and model inference did not take longer than the signal being processed (≈ 74.9 ms per sample in our case). Obviously, the vgg models were able to learn the relevant features for the drone classification from the complex spectrograms of the rf signal. In this respect, we did not find any advantage in using more complex models, such as VGG19_BN, over the least complex model, VGG11_BN (cf. Tabs. <ref> and <ref>). Furthermore, we have seen that the misclassifications mainly occur between the noise class and the drones, and not between the different drones themselves (cf. Figs. <ref> and <ref>). This is particularly relevant for the application of drone detection systems in security-sensitive areas. The first priority is to detect any kind of uav, regardless of its type. Based on our experience and results, we see the following limitations of our work. The field test showed that the models can be used and work reliably (cf. Tab. <ref>). However, it is the nature of a field test that the level of interference from Wi-Fi/Bluetooth noise and the possible presence of other drones cannot be fully controlled. Furthermore, due to the limited space/distance between the transmitter and receiver in our field test setup, we were not able to clearly demonstrate the effect of free-space attenuation on detection performance (cf. Tab. <ref>). Regarding the use of simple cnn as classifiers, it is not possible to reliably predict whether multiple transmitters are present. In that case, an object detection approach on the spectrograms could provide a more fine-grained prediction; see, for example, the works <cit.> and <cit.>. Nevertheless, the current approach will still detect a drone if one or more are present. We have only tested a limited set of vgg architectures. It remains to be seen whether more recent architectures, such as the pre-trained Vision Transformer <cit.>, generalise as well or better. We hope that our development dataset will inspire others to further optimise the model side of the problem and perhaps find a model architecture with better performance. Another issue to consider is the occurrence of unknown drones, i.e. drones that are not part of the training set. Examining the embedding space (cf.
<ref>) gives a first idea of whether a signal is clearly part of a known, dense drone cluster or rather falls into the larger, less dense noise cluster. We believe that combining an unsupervised deep autoencoder approach <cit.> with an additional classification part (cf. <cit.>) would allow us, first, to provide a stable classification of known samples and, second, to indicate whether a sample is known or rather an anomaly.
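As a concrete starting point for such an analysis, the 256-dimensional embeddings described above can be projected to 2D and inspected for samples falling outside the known drone clusters. The sketch below uses scikit-learn's t-SNE; the variable names and hyperparameter values are illustrative assumptions (umap-learn's UMAP could be substituted in the same way):

import numpy as np
from sklearn.manifold import TSNE

def project_embeddings(embeddings: np.ndarray, perplexity: float = 30.0) -> np.ndarray:
    # embeddings: array of shape (n_samples, 256) collected at the 256-unit layer.
    tsne = TSNE(n_components=2, perplexity=perplexity, init="pca", random_state=0)
    return tsne.fit_transform(embeddings)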
http://arxiv.org/abs/2406.19001v1
20240627084127
Optimal routing and communication strategies for autonomous reconnaissance missions
[ "Riley Badenbroek", "Relinde Jurrius", "Lander Verlinde" ]
math.OC
[ "math.OC", "90C90 (Primary) 90C59 (Secondary)" ]
§ ABSTRACT We consider an autonomous reconnaissance mission where a drone has to visit several points of interest and communicate the intel back to the base. At every point of interest, the drone has the option to either send back all available info, or continue to the next point of interest and communicate at a later stage. Both choices carry a chance of detection, in which case the mission fails. We wish to maximize the expected amount of information gathered by the mission. This is modeled by a routing problem in a weighted graph. We discuss the ILP formulation of this problem, show it is NP-complete, and use a genetic algorithm to find good solutions for up to ten points of interest. Keywords: linear programming, genetic programming, graph theory, mission planning, vehicle routing MSC: 90C90, 90C59 § INTRODUCTION The role of Unmanned Aerial Vehicles (UAVs), more commonly known as drones, in society continues to become more significant every day. The civil market alone was estimated to be worth 7.2 billion USD in 2022, and this value is expected to grow to 19.2 billion USD by 2031 <cit.>. Applications range from agriculture and disaster response to package deliveries. Military use has also become more relevant. As early as the Vietnam War, the US army deployed drones as a weapon <cit.>. When surveillance technology improved, it became clear that drones could also be used for survey missions in enemy terrain <cit.>. Moreover, these automated missions need not be performed by aerial systems: depending on the terrain and the characteristics of the mission, an unmanned ground vehicle (UGV) or an unmanned underwater vehicle (UUV) may be better suited. The ongoing wars in Ukraine and Gaza have shown the importance of hybrid warfare <cit.>. In such warfare, the distinction between different modes of warfare (conventional warfare, cyber warfare, political warfare) tends to become blurred. This makes the adversary in such a war fluid and harder to predict <cit.>. Intelligence on physical and non-physical infrastructure is key to gaining an advantage. Hence the extent to which unmanned vehicles are used for both offensive and reconnaissance missions is at an all-time high <cit.>. To expand the number of operational systems while managing costs, it is desirable to deploy systems that can operate fully independently. For a reconnaissance mission, this requires planning the complete mission before the drone leaves for enemy territory. The setting of such a mission can be stated as follows: starting from a secure base, multiple surveillance locations need to be safely reached and the acquired information has to be brought back to the base. There are three possible ways of bringing back information. At every surveillance location, there is the possibility of transmitting information back to the base camp. Hence the first option is that the drone travels to a surveillance location, gathers the info and immediately sends it back to the base. Secondly, the drone could also store the information and go to the next surveillance location. After having obtained the information there, all of the gathered information could then be transmitted together at that location. Thirdly, the drone could also store information and return to the base. In this case, the information is physically retrieved from the drone. However, each action in the mission carries the risk of detection – the drone could be spotted during flight, or transmissions might be intercepted.
Both ways of detection give away the position of the drone, ending the mission abruptly. This paper investigates how to find the optimal strategy of these reconnaissance missions. Such an optimal strategy consists of two elements: both the route and the send strategy have to be optimal to maximize the amount of retrieved information. Hence the specific questions for which an answer is sought in this research are: * In which order should the different locations be traveled to? * Where is it beneficial to make a transmission and where is it better to hold on to the gathered information? The paper is structured in five sections. After this introduction, the problem is described mathematically. A model based on weighted graphs is proposed and two ways to compute the expected value of retrieved information are discussed. Furthermore, it is investigated whether the problem can be written as an Integer Program and a motivation is given for the choice of a heuristic algorithm. In the third section, the case where one drone is deployed during the mission is examined in great detail. A genetic program that yields the optimal strategy is presented. This genetic program is tested on several mission scenarios and different algorithms are compared in terms of success rate and complexity. In the last section the scope is extended to missions with multiple drones. The best genetic program from the single drone scenario is adapted and improved such that it can also solve the multiple drone scenario. This improved algorithm is tested in the same way as the single drone scenario and a comparison is made. § MATHEMATICAL EXPLORATION OF THE PROBLEM The objective for a planning of an autonomous reconnaissance mission is to maximize the expected value of the transmitted information. This section investigates how to model such a mission mathematically and how to compute the expected value. A mission on n surveillance locations can be naturally formulated as a problem on a weighted graph G= (V, E), with V = {0, …, n-1}. The weights consist of both edge weights and vertex weights. Every edge {i,j} has weight q_ij∈ [0,1]. These edge weights correspond to the crossing probabilities: the survival chance of crossing from i to j. Moreover, every vertex i has two weights. The first one p_i ∈ [0,1] is the transmission probability: the probability of making a successful transmission at vertex i. The second weight w_i is the amount of information that can be gathered at that vertex. Having these weights and probabilities, finding the optimal strategy of the mission, boils down to finding a walk with corresponding send strategy that maximizes the expected amount of retrieved information. In such a strategy, the following rules apply: * Vertex 0 corresponds to a safe base camp: successful transmission has probability one, but there is no information to retrieve here. This means that p_0 is equal to 1 and w_0 is 0. * The walk can have repeated vertices. However, once the information has been retrieved at a vertex i, w_i is set to 0. * If information is retrieved but not transmitted, it is carried to the next vertex in the walk and can be transmitted there or further down the walk. A transmission always sends all information that has been retrieved but not yet transmitted. At the last vertex of the walk – the base camp – a transmission is always made, but this transmission can be empty. * Once a crossing or a transmission fails, the mission is over and no more information can be retrieved nor transmitted. 
Following the above rules, we formulate the expression for the expected value of the transmitted information. By considering the different vertices where information is transmitted, we compute how much information every transmission is expected to contain. Let R = [v_1, v_2, …, v_k ] with v_1 = v_k=0 be the sequence of vertices that make up the walk. Say v_i has transmission probability p_v_i and the probability of crossing from v_i to v_i+1 is q_v_iv_i+1. Moreover, let S = [v_s_0, v_s_1, v_s_2, …, v_s_m] be the subsequence of R consisting of v_s_0 = v_1, followed by the m vertices where information is transmitted. Then the probability of surviving the entire route with the corresponding send strategy is ℙ(survival) = ∏_j=1^k-1 q_v_jv_j+1∏_i=1^m p_v_s_i . And if we let X denote the random variable of the amount of well-received information, then the total expected information is given by 𝔼[X] = ∑_i=1^m( ∏_j=1^s_i -1 q_v_j v_j+1)( ∏_u=1^i p_v_s_u) ∑_ s_i-1 < h ≤ s_i w_v_h. §.§ An example of a reconnaissance mission To get better insight into the above formula and into why this problem is harder than initially expected, it is useful to look at an example. Consider a mission which is given by the graph in Figure <ref>. This graph consists of a base camp at vertex 0 and three surveillance locations at vertices 1, 2 and 3. The crossing and transmission probabilities are shown on the graph. At every surveillance location there is one unit of information to be retrieved, so w_i=1 for all i. Now assume that the following strategy is chosen: Route: (0,1,2,3,0) Send: (0,1,0,0,1). This send strategy means that a transmission is made at vertex 1 and again at the base camp upon return. Let X be the random variable for the amount of well-received information for this route and send strategy. X_1 is the random variable that gives the amount of received information at the first transmission and X_2 at the second one. By linearity of expectation: 𝔼[X] = 𝔼[X_1]+ 𝔼[X_2]. To compute 𝔼[X_1], first the probability of safely reaching vertex 1 needs to be computed, which is equal to 0.6. The transmission probability is 0.9 and there is one unit of information being transmitted. So: 𝔼[X_1] = 0.6·0.9·1 = 0.54. We make a similar computation for the second transmission. The probability of reaching the base camp is 0.6·0.9·0.3·0.9·0.6. The transmission probability is 1 and there are two units of information transmitted: the information from vertices 2 and 3. The expected amount of transmitted information with the second transmission then becomes: 𝔼[X_2] = 0.6·0.9·0.3·0.9·0.6·1·2 = 0.17496. Combining this yields the total expected value: 𝔼[X] = 0.54 + 0.17496 = 0.71496. So for this graph, if the route (0,1,2,3,0) is chosen with transmissions at vertices 1 and 0, the mission is expected to retrieve 0.71496 units of information out of the possible 3 units. In fact, this is the best strategy using a Hamilton cycle in the graph. This might be a bit disappointing when setting up a mission, as we don't even expect a third of all possible information to be retrieved with this strategy. Fortunately, we are able to find an optimal strategy with a higher expected value: Route: (0,2,3,2,0,1,0) Send: (0,0,0,0,1,1,1) has an expected value of 1.666, more than double that of the best Hamilton cycle. This shows that the best strategy is not necessarily a `pretty' route that is easy to predict. That is part of why this problem is hard to solve.
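To make the computation above concrete, the following Python sketch evaluates the expected value for an arbitrary route and send strategy under the rules stated earlier. The graph data below only contains the probabilities quoted in the worked example; the transmission probabilities at vertices 2 and 3 are placeholders that are never used by this particular route:

def expected_information(route, send, q, p, w):
    # Expected amount of well-received information for a walk `route` (starting and
    # ending at the base) and a 0/1 send strategy of the same length.
    # q[(i, j)]: crossing probabilities, p[i]: transmission probabilities, w[i]: information values.
    survive, carried, expected = 1.0, 0.0, 0.0
    collected = set()
    for step, v in enumerate(route):
        if step > 0:                                   # crossing into v
            a = route[step - 1]
            survive *= q.get((a, v), q.get((v, a), 0.0))
        if v not in collected:                         # information is retrieved only once per vertex
            carried += w[v]
            collected.add(v)
        if send[step]:                                 # attempt to transmit everything carried
            survive *= p[v]
            expected += survive * carried
            carried = 0.0
    return expected

q = {(0, 1): 0.6, (1, 2): 0.3, (2, 3): 0.9, (3, 0): 0.6}   # edges used by the example route
p = {0: 1.0, 1: 0.9, 2: 1.0, 3: 1.0}                       # p_2, p_3 are placeholders (unused here)
w = {0: 0.0, 1: 1.0, 2: 1.0, 3: 1.0}
print(expected_information([0, 1, 2, 3, 0], [0, 1, 0, 0, 1], q, p, w))  # 0.71496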
In the first place, the route of the optimal strategy is not always intuitive: it does not necessarily have the nice structure of a Hamilton cycle, nor does it necessarily go back and forth between the base camp and the other vertices. This is due to the fact that our mission graphs are not necessarily metric. Traveling between two connected vertices u and v might have a higher survival probability when going around via some other vertices instead of via the edge {u,v}. Secondly, the expected value depends on both the route and the send strategy. This means that we cannot first optimise the route and then find the optimal send strategy corresponding to this route – this would have been possible using a dynamic program. This complicates things when looking for an algorithm to solve this problem. §.§ NP-completeness Consider the following decision version of the reconnaissance problem above: Given an undirected connected graph G = (V, E) with a base vertex b ∈ V, a crossing probability q_ij∈ [0,1] for every edge {i,j}∈ E, a transmission probability p_i ∈ [0,1] for every vertex i ∈ V, a weight w_i ≥ 0 that indicates the value of the information at i ∈ V, and a real r ∈ℝ, determine if there is a reconnaissance plan whose expected value is at least r. We will show that this decision problem is NP-complete. To this end, we first bound the length of a certificate for a `yes'-instance of this decision problem. This length depends primarily on the number of time periods the drone needs to travel in an optimal solution. The reconnaissance problem on the undirected connected graph G = (V, E) has an optimal solution consisting of at most |V|^2-1 time periods. We restrict ourselves to feasible solutions where the drone does not travel in a cycle without transmitting or observing information. Removing such a cycle from the drone's walk keeps the plan feasible, and can never decrease the expected value of the transmitted information. This restriction makes the number of feasible solutions under consideration finite. It follows that this restricted problem has an optimal solution. This is also an optimal solution to the unrestricted problem, since adding cycles without transmitting or observing information cannot improve the objective value. Now let n = |V|, and let b ∈ V denote the base. Suppose that at some point in the reconnaissance plan, the drone has visited k ∈{1, …, n} unique vertices. (At the very start of the reconnaissance plan, k = 1, because the drone has only visited the base.) To maximize the number of time periods before a new vertex is observed, the drone could first move to a location from where it transmits information. The drone can spend at most k-1 time periods traveling between the already visited vertices – unless it travels in a cycle without transmitting or observing. After transmitting information, the drone can again spend at most k-1 time periods traveling between the already visited vertices before it makes a cycle without transmitting or observing a new vertex. Hence, there is an optimal solution with at most 2k-1 time periods between the observations of the k-th and (k+1)-th unique vertex. In such an optimal solution, the number of time periods until all vertices are observed is at most ∑_k=1^n-1 (2k-1) = (n-1)^2. Once all vertices are observed, the drone can take at most n-1 time periods to move to a transmission location, and at most another n-1 time periods to move back to the base. In sum, there is an optimal solution consisting of at most (n-1)^2 + 2(n-1) = n^2-1 time periods.
It follows that the length of a certificate for a `yes'-instance of the decision problem is bounded by a polynomial of its input size. In other words, the reconnaissance problem lies in NP. In fact, there are problem instances where any optimal solution consists of ||^2 - O(||) time periods, which shows that the dominant term in the bound of Theorem <ref> is tight. The dominant term in the bound in Theorem <ref> is tight. We will construct a reconnaissance problem instance and lower bound the number of time periods in any optimal solution. Let G = (, ) be a path graph such that ∈ is an end point of the path, and let n = ||. Assume the non-base vertices are labeled such that the path is (, 1, 2, …, n-1). Define the crossing probabilities _ij = 1/√(n) for all {i,j}∈, and transmission probabilities _i = 1_i = and information values _i = 1_i ≠ for all i ∈. We will show that any optimal solution to this reconnaissance problem instance consists of at least n^2 - n time periods. Let σ_k be the probability of surviving until vertex k ≥ 1 is visited for the first time, and let τ_k be the probability of surviving until the first base visit after the first visit to vertex k. The objective is then to maximize ∑_k=1^n-1τ_k. As an induction hypothesis, suppose the drone does not carry any non-transmitted information when it visits a vertex ℓ≥ 1 for the first time. At this point, σ_k is fixed for all k ≤ℓ and τ_k is fixed for all k ≤ℓ - 1. After observing the information at vertex ℓ, the drone can either move back towards the base, or move forward. If the drone moves back to vertex ℓ-1, this can only be optimal if it is part of a movement back to the base in ℓ time periods. If the drone does not move back to the base, or does so in more than ℓ time periods, the expected transmission value is decreased unnecessarily, since the crossing probabilities are smaller than one. As a result of this strategy, the objective value would satisfy ∑_k=1^n-1τ_k ≥∑_k=1^ℓτ_k = ∑_k=1^ℓ-1τ_k + σ_ℓ( 1/√(n))^ℓ. If the drone moves forward to vertex ℓ+1, it observes more information. It can gather at most n-1-ℓ units of information from the unvisited vertices with an index greater than ℓ. To send this information, the drone has to travel back to the base, which means crossing at least ℓ+1 edges. As a result of this strategy, the objective value would satisfy ∑_k=1^n-1τ_k = ∑_k=1^ℓ-1τ_k + ∑_k=ℓ^n-1τ_k ≤∑_k=1^ℓ-1τ_k + ∑_k=ℓ^n-1σ_ℓ( 1/√(n))^1+ℓ+1. Since the lower bound from (<ref>) is strictly greater than the upper bound from (<ref>) for any ℓ≥ 1, we conclude that it is optimal for the drone to move back toward the base and transmit after observing a new piece of information. It then does not carry any non-transmitted information when arriving at vertex ℓ+1, as we assumed. The total number of time periods required to visit all locations following this strategy is ∑_k=1^n-1 2k = n^2 - n. Having shown that the reconnaissance problem lies in NP, we now move on to showing it is NP-hard. To this end, we will provide a reduction from the Hamiltonian path problem with a fixed starting point. The following proposition shows that this problem is itself NP-complete. Let G = (, ) be an undirected graph such that ∈. Then, the problem of deciding if G contains a Hamiltonian path with starting point is NP-complete. The problem lies in NP. Let G' = (', ') be an undirected graph such that , t ∈'. The problem of deciding whether there is a Hamiltonian path from to t in G' is NP-complete, see e.g. Schrijver <cit.>. 
Now construct the graph G = (, ) by adding a vertex u to G', connecting it only to t. Formally, = ' ∪{u} and = ' ∪{{t,u}}. Then G contains a Hamiltonian path with starting point if and only if G' contains a Hamiltonian path from to t. We now reduce this fixed-start Hamiltonian path problem to a specific instance of the reconnaissance problem. The reconnaissance problem is NP-complete. Let G = (, ) be an undirected graph such that ∈, and pick q ∈ (0,1). Then set the crossing probabilities to _ij = q for all {i,j}∈, the transmission probabilities to _i = 1 for all i ∈, and the information value to _i = 1 for all i ∈∖{} (and _ = 0). Finally, set r = q - q^n/1-q, where n = ||. We claim that there is a reconnaissance plan with expected value at least r if and only if there is a Hamiltonian path in G starting at . If there is a Hamiltonian path starting at the base, we can construct a reconnaissance plan by letting the drone follow this path. At every vertex, the drone would observe and immediately transmit the information gathered there. The expected value of this plan is ∑_k=1^n-1 q^k = q - q^n/1-q = r. Conversely, suppose there is no Hamiltonian path starting at the base. Then, any reconnaissance plan will fall in one of two categories. * The drone does not visit all vertices. Visiting m < n vertices will take at least m-1 crossings, making the expected value of such a plan at most ∑_k=1^m-1 q^k = q - q^m/1-q < r. * The drone visits all vertices, but visits a vertex u ∈ V a second time before it has visited all other vertices. The expected value of this reconnaissance plan is the sum of n-1 powers of q, where the exponents are distinct positive integers. If the second visit to u occurs after m < n-1 crossings, the power q^m does not appear in the computation of the plan's expected value. The expected value is then at most ∑_k=1^m-1 q^k + q ∑_k=m^n-1 q^k = q - q^m + q(q^m - q^n)/1-q < r. We conclude that there is no reconnaissance plan with an expected value of at least r. The claim follows from Theorem <ref>. § MIXED-INTEGER LINEAR PROBLEM FORMULATION FOR THE AUTONOMOUS RECONNAISSANCE PROBLEM This section will formulate the autonomous reconnaissance problem as a mixed-integer linear programming problem. To this end, we fix a time horizon = {1, …, }. In order to also find routes that require fewer time periods, we add the edge {,} to the graph. In theory, we can take =n^2-1 as per Theorem <ref>. Better estimates will be discussed in Section <ref>. We will determine the drone's actions during each time period. A time period begins with the drone traveling from one location to another. If the drone survives this, the drone observes the information at its new location (if it was not observed before). Finally, the drone may or may not attempt to send the information it has gathered but not sent before. After each of these three stages, we can compute the drone's survival probability and the `expected transmission value' of the information the drone is carrying – we will define these terms in more detail below. See Figure <ref> for an overview. We first describe how we model the three stages of a time period. In Section <ref>, we model the movement of the drone through the graph. Section <ref> then describes the observation of the information at a node, while Section <ref> discusses sending the information. After that, we describe how the drone's survival probability and expected transmission value can be computed in Section <ref> and Section <ref>, respectively. 
We state the final model in Section <ref>. §.§ Moving the drone The route of the drone will be modeled by the decision variables _ijt = 1 if the drone travels over {i,j}∈ from i to j at time t ∈ 0 otherwise. Recall that {,}∈, so the drone can also stay at the base in any time period. We need a few constraints to let these decision variables describe a valid walk through the network. In every time period, the drone traverses exactly one edge, that is, ∑_{i, j}∈_ijt = 1 ∀ t ∈. To make the drone start and end at the base, we require ∑_j: {,j}∈_ j1 = ∑_i: {i,}∈_i = 1. In all other time periods, the drone starts at the location where it ended in the previous time period, meaning ∑_i: {i,j}∈_ij,t-1 = ∑_i: {j,i}∈_jit ∀ j ∈, t ∈∖{1}. §.§ Observing information When a drone arrives at a location for the first time, the information there is observed. We model this with the decision variables _it = 1 if the drone observes the information at i ∈ at time t ∈ 0 otherwise. By allowing the information at any vertex to be observed only once, that is, ∑_t ∈_it≤ 1 ∀ i ∈, the value of the information there can only be added to the accumulated information once. Since the objective is to maximize the expected value of the transmitted information, there will be an optimal solution where the information in every visited vertex i ∈ is observed the first time i is visited. After all, observing the information during later visits can only decrease the expected value of that information. Of course, the information at i ∈ can only be observed in time period t ∈ if one also arrives in i at time t. We model this by _it≤∑_j: {i,j}∈_ijt ∀ i ∈, t ∈. §.§ Sending information After the new information has been observed, the drone may or may not transmit the information. This is modeled by the decision variables _it = 1 if the drone transmits information from i ∈ at time t ∈ 0 otherwise. Similar to observations, transmissions can only occur at a location in a certain time slot if one also arrives at that location in that time slot, that is, _it≤∑_j: {i,j}∈_ijt ∀ i ∈, t ∈. (One may argue that the values of the variables x_ijt are already sufficient to determine the location of the drone, and that therefore there is no need to introduce transmission variables that are also indexed by the locations. This is true, but doing so would introduce additional non-linearities later on.) §.§ Survival probability The objective is to maximize the total expected value of the transmitted information. To compute the expected value of a transmission, we need to know the probability that the drone has survived until a time slot in which a transmission takes place. We therefore introduce two new sets of decision variables: * _t is the probability that the drone has survived from the start of the time horizon until after the moving phase in time period t ∈; * _t is the probability that the drone has survived from the start of the time horizon until after the sending phase in time period t ∈. See Figure <ref> for an illustration. To ease notation, we also fix the parameter _0 = 1 to ensure the drone leaves the base with probability one. The survival probability after moving is _t = _t-1∑_{i, j}∈_ij_ijt ∀ t ∈, where _ij is the probability of successfully moving over edge {i,j}∈. Note that (<ref>) is non-linear in the decision variables, because _t-1 is multiplied by _ijt. 
We can however linearize (<ref>) if we replace _t-1_ijt by the variable _ijt subject to the constraints _ijt≤_t-1 ∀{i,j}∈, t ∈ _ijt≤_ijt ∀{i,j}∈, t ∈ _ijt≥_t-1 - (1 - _ijt) ∀{i,j}∈, t ∈ _ijt≥ 0 ∀{i,j}∈, t ∈. The definition of _t from (<ref>) can now be written as the linear equations _t = ∑_{i, j}∈_ij_ijt ∀ t ∈. Next, the survival probability after sending is equal to the survival probability after moving, unless one performs a transmission in the time period. That means _t = _t ( 1 - ∑_i ∈ (1 - _i) _it) ∀ t ∈, where _i is the probability of a successful transmission at location i ∈. Since (<ref>) is also non-linear in the decision variables, we replace every product _t _it by the variable _it subject to the constraints _it≤_t ∀ i ∈, t ∈ _it≤_it ∀ i ∈, t ∈ _it≥_t - (1 - _it) ∀ i ∈, t ∈ _it≥ 0 ∀ i ∈, t ∈. The definition of _t from (<ref>) can now be written as the linear equations _t = _t - ∑_i ∈ (1 - _i) _it ∀ t ∈. §.§ Expected transmission value We also track the value of the non-transmitted information the drone has gathered, multiplied by the survival probability of the drone. We call this the `expected transmission value' of the drone, since it captures the expected value of a transmission made in a certain time period. There are three types of expected transmission values that we track by decision variables: * _it is the expected transmission value of the drone after moving to location i ∈ in time period t ∈; * _it is the expected transmission value of the drone after potentially observing new information at location i ∈ in time period t ∈; * _it is the expected transmission value of the drone after potentially sending information from location i ∈ in time period t ∈. See Figure <ref> for an illustration. As a matter of initialization, we fix the parameter _i0 = 0 for all i ∈. By moving from j ∈ to i ∈ at time t ∈, the expected transmission value gets multiplied by the probability of successfully moving over the edge {j,i}. In general, we get _it = ∑_j: {i,j}∈_ji_jit_j,t-1 ∀ i ∈, t ∈. Since (<ref>) is non-linear in the decision variables, we replace every product _jit_j,t-1 by the variable _jit subject to the constraints _jit≤_j,t-1 ∀{i,j}∈, t ∈ _jit≤_jit ∀{i,j}∈, t ∈ _jit≥_j,t-1 - M(1 - _jit) ∀{i,j}∈, t ∈ _jit≥ 0 ∀{i,j}∈, t ∈, where we can take = ∑_i ∈_i. The definition of _it from (<ref>) can now be written as the linear equations _it = ∑_j: {i,j}∈_ji_jit ∀ i ∈, t ∈. Next, observing new information at location i ∈ at time t ∈ adds _i _it to the expected transmission value. In general, we therefore have _it = _it + _i _it_it ∀ i ∈, t ∈. Since (<ref>) is non-linear in the decision variables, we replace every product _it_it by the variable _it subject to the constraints _it≤_it ∀ i ∈, t ∈ _it≤_it ∀ i ∈, t ∈ _it≥_it - (1 - _it) ∀ i ∈, t ∈ _it≥ 0 ∀ i ∈, t ∈. The definition of _it from (<ref>) can now be written as the linear equations _it = _it + _i _it ∀ i ∈, t ∈. Finally, making a transmission sets the expected transmission value to zero. If no transmission is made, the expected transmission value does not change. This can be modelled by the constraints _it≤_it ∀ i ∈, t ∈ _it≤ (1 - _it) ∀ i ∈, t ∈. §.§ Final model As mentioned above, the objective is to maximize the expected value of the transmitted information, which would be ∑_i ∈∑_t ∈_i _it_it. To linearize the objective, we replace every product _it_it by the variable _it subject to the constraints _it≤_it ∀ i ∈, t ∈ _it≤_it ∀ i ∈, t ∈ _it≥_it - (1 - _it) ∀ i ∈, t ∈ _it≥ 0 ∀ i ∈, t ∈. 
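All of these replacements are instances of the same standard big-M linearization of a product of a binary variable and a bounded continuous variable. As a minimal illustration, here is a sketch in Gurobi's Python interface (gurobipy, which the Performance subsection below reports using), linearizing x = y·z for a binary y and z ∈ [0, M]; for the survival-probability products, M = 1 suffices since probabilities never exceed one.

import gurobipy as gp
from gurobipy import GRB

M = 3.0  # upper bound on the continuous factor, e.g. the total information value
model = gp.Model("product_linearization")
y = model.addVar(vtype=GRB.BINARY, name="y")  # binary factor, e.g. a transmission indicator
z = model.addVar(lb=0.0, ub=M, name="z")      # bounded continuous factor, e.g. an expected transmission value
x = model.addVar(lb=0.0, name="x")            # auxiliary variable that should equal y * z

model.addConstr(x <= z)                 # x can never exceed z
model.addConstr(x <= M * y)             # y = 0 forces x = 0
model.addConstr(x >= z - M * (1 - y))   # y = 1 forces x >= z, hence x = z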
In conclusion, the final model is

max ∑_i ∈∑_t ∈ _i _it

subject to

_ijt ∈ {0,1} ∀ {i,j} ∈, t ∈,
_it, _it ∈ {0,1} ∀ i ∈, t ∈,

together with all of the constraints introduced in the preceding subsections for moving the drone, observing information, sending information, the survival probabilities, and the expected transmission values.

§.§ Performance

As was mentioned at the beginning of this section, a time horizon needs to be set for the maximum number of time periods in a solution. From Theorem <ref> we know that an optimal solution is guaranteed if we take a time horizon of |V|^2-1 periods. However, if all crossing probabilities are nonzero, the optimal route is often much shorter, as is supported by experimental results up to |V|=10 (some obtained by the method in the next section). The length of the time horizon strongly influences the run time of an implementation of the model. We have implemented the model in Python, using the Gurobi solver <cit.> on an 8-core Apple M2 processor with 8 GB of memory. First, we ran the code for the graph in Figure <ref> with 4 vertices, fixing the time horizon at 7 periods. The optimal solution, the same as described in Section <ref>, was found in 0.29 seconds. We then generated a graph with random crossing probabilities on 6 vertices. With a horizon of 9 periods, an optimal solution was found in 70.89 seconds. For a horizon of 10, the same optimal solution was found, but this took 736.62 seconds (a bit over 12 minutes). For a horizon of 12, the program ran out of memory after 538825 seconds (over 6 days). An attempt on a bigger machine with 12 CPUs and 80 GB of memory also ran out of memory before finding a solution.

§ GENETIC ALGORITHM

Even though the above Mixed Integer Linear Program allows us to solve the reconnaissance problem to optimality, it quickly becomes too slow to be useful in practical applications. This section investigates whether we can replace the program with a heuristic, namely a genetic algorithm. This algorithm was developed by Holland <cit.> in the 1970s and has been used in many applications (see e.g. §14.4 in <cit.> or §6 in <cit.> for a more in-depth summary of the method and its applications). Just as the name suggests, a genetic algorithm is based on the theory of evolution and tries to incorporate the idea of survival of the fittest. The main idea is that the algorithm is first initialized with a list of feasible solutions, i.e. the first generation. All solutions in the generation are then ranked based on their expected value, and this ranking is used to construct the next generation. The worst solutions of the generation are discarded: these genes do not survive. The best solutions are immediately copied into the next generation. This next generation is then filled with crossings of two solutions from the previous one. By repeating this process enough times and adding mutations to increase variation in the different generations, we expect that natural selection will lead to the optimal solution. A schematic depiction of how new generations are constructed is given in Figure <ref>. The advantage of this metaheuristic is that it allows us to optimize the route and the transmission strategy simultaneously. We are able to cross both elements to obtain a new strategy that hopefully inherits the good traits of its parent strategies.
Moreover, using specific crossing schemes and mutations we can make sure that all considered strategies in a generation are feasible and are not filled with routes that consist of non-existing edges. §.§ Set-up of the algorithm §.§.§ Initialization To initialize the algorithm, a list of random routes is generated. These routes are constructed by making a random walk of random length through the graph, starting from the base. Just as in the Linear Program, where the value of had to be guessed, the upper bound L_max on the length of the walk also has to be chosen here. Similarly, we also choose a lower bound L_min. Then we sample the actual length of the random walk uniformly at random from the set {L_min, L_min + 1, …, L_max -1, L_max}. Secondly, we repeatedly pick the next vertex in the walk uniformly at random from the vertices adjacent to the current position. If the last vertex of the walk is not the base vertex, the shortest path between the end of the walk and the base is added to the walk. As a rule of thumb, when considering a graph on n vertices, we picked L_min to be approximately equal to n-1 and L_max to n+100. This might seem like a pretty high upper bound, especially because realistically, reconnaissance mission do not include more than 10 surveillance locations. But even though these longer routes are not necessarily better than the shorter ones – at some point all information has been transmitted and making an additional loop doesn't change the expected value – the final part of a route with useless and unnecessary walks can become useful after a crossover. To quickly find a good send strategy for every route in the first generation, a local search algorithm is used. This is a strategy where at every vertex a transmission is made with probability π. We found that π = 1/3 gives a good starting point for the search algorithm. Next, we pick the best transmission strategy amongst all strategies at Hamming distance 1 of the current one. This process is repeated until there is no more improvement possible. Note that this method does not guarantee a global optimum. It is possible that the search gets stuck in a local maximum while there is a different send strategy with an even better expected value. Based on their expected values we make a ranking of the strategies consisting of both a route and a transmission strategy. §.§.§ Cross-over In each iteration of the algorithm, a new generation is constructed based on the previous one. As already mentioned, we first copy the 10% routes with the very best expected values directly in the next generation. Simultaneously, the 7.5% worst routes are discarded. To construct a route in the next generation, two routes are picked uniformly at random from the best 92.5%, i.e. all routes except the discarded ones. These two routes become the parents of a new route, hoping that the good genes of the parents will be inherited by the child. To cross these two parents, a vertex that is present in both routes is randomly chosen and the parent routes are sliced in two parts at the first occurrence of this vertex. From the one route we use the first part, from the other route the second part. The send strategies are sliced and put together in the exact same way. Again, this does not imply that the obtained send strategy is optimal for the newly constructed route. Moreover, it is possible that the only vertex in common is the base camp at the beginning and end of one of the routes. In that case, two new routes are picked and these are crossed instead. 
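A minimal sketch of this single-point crossover at a common vertex is given below (hypothetical helper code, not the authors' implementation). For simplicity the base camp is excluded as a cut point, and the caller is assumed to retry with two new parent strategies whenever no usable common vertex exists.

import random

def crossover(route1, send1, route2, send2):
    """Combine two (route, send) strategies by cutting both at a shared vertex."""
    common = set(route1) & set(route2)
    common.discard(0)                      # ignore the base camp as a cut point
    if not common:
        return None                        # caller picks two new parent strategies
    v = random.choice(sorted(common))
    i = route1.index(v)                    # first occurrence of v in parent 1
    j = route2.index(v)                    # first occurrence of v in parent 2
    child_route = route1[:i] + route2[j:]  # prefix of parent 1, suffix of parent 2
    child_send = send1[:i] + send2[j:]     # send strategy is sliced the same way
    return child_route, child_send

Because both parents start and end at the base and the cut vertex is shared, the child is again a valid walk from the base back to the base.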
This strategy keeps being repeated: the routes in this generation are ranked from best to worst and a new generation is constructed. After enough generations the hope is that eventually the globally optimal strategy will appear. §.§.§ Mutations To increase variation in the different generations, some of the newly crossed strategies are mutated. This means that they are slightly altered in their route or transmission strategy. Three mutations have been chosen to increase the number of considered strategies: * Added random walk. Uniformly at random, one point that is not the last one is chosen in the route. After this point, a random walk of random length is added to the route. This random walk is generated in the same way as we did for the initialization. This mutation increases the length of the route. * Vertex flip. One random vertex in the route is changed to a common neighbour of its preceding and succeeding vertex. Note that the existence of such a common neighbour is not guaranteed in a non-complete graph. * Send flip. Uniformly at random one element in the send strategy is flipped: a 0 becomes 1 or vice versa. These mutations are not always performed and for each strategy, there is at most one mutation performed. The added random walk mutation is performed with a probability of 0.01. Since this mutation increases the length of the route, a new send strategy has to be constructed as well. This is done by discarding the old send strategy and performing the same local search as for the initialization for this new route. The vertex flip mutation is performed with a probability of 0.2 and the send flip with a probability of 0.1. Since the length of the route doesn't change in either one, the send strategy is not changed after these mutations (this wouldn't make any sense for the send flip either). Having implemented these mutations, we can test whether they actually improve the genetic algorithm. At first, we considered the complete graph on 6 vertices, K_6. In this case, the genetic algorithm without mutations is able to find the same believed to be optimal strategy found using the MILP in 98% of the runs. Hence there is no need for mutations, as they increase running time without having a lot of added value. On a non-complete graph on 10 vertices, this success rated dropped a bit to 90/100 times. However, the program returns the strategy in a couple of seconds and hence, is fit for the job. When considering K_10 finding the optimal strategy becomes more interesting. At first, we have ran the genetic algorithm on the graph where every vertex that is not the base camp holds one unit of information and the transmission probabilities (diagonal entries) and crossing probabilities (off-diagonal entries) are given by the following 10 × 10-matrix: [ 1 0.95 0.87 0.93 0.99 0.96 0.92 0.88 0.9 0.93; 0.95 0.9 0.86 0.97 0.93 0.85 0.82 0.91 0.93 0.96; 0.87 0.86 0.94 0.92 0.96 0.98 0.99 0.82 0.85 0.91; 0.93 0.97 0.92 0.99 0.87 0.93 0.9 0.9 0.89 0.95; 0.99 0.93 0.96 0.87 0.9 0.94 0.82 0.85 0.92 0.9; 0.96 0.85 0.98 0.93 0.94 0.95 0.91 0.92 0.91 0.96; 0.92 0.82 0.99 0.9 0.82 0.91 0.93 0.98 0.92 0.93; 0.88 0.91 0.82 0.9 0.85 0.92 0.98 0.95 0.99 0.87; 0.9 0.93 0.85 0.89 0.92 0.91 0.92 0.99 0.94 0.85; 0.93 0.96 0.91 0.95 0.9 0.96 0.93 0.87 0.85 0.92 ]. Running the algorithm without mutations yields the following best strategy: Route: (0,4,0,5,2,6,7,8,1,3,9,3,0) Send: (0,0,1,0,0,0,1,0,0,1,0,1,1), with value 7.305181. Out of 100 runs, this strategy is found 19 times. 
Compared to the previous graphs – where the optimal strategy was almost always found – this is not a high success rate. With this performance, a lot of runs are required to be pretty sure of finding the best solution. Say we want to have a probability greater or equal than 0.95. To find the optimal strategy assuming a success chance at every try of 19/100, we need to run the algorithm h times, where 1 - ( 81/100)^h ≥ 0.95, which implies that h ≥ 15. While the algorithm still returns a strategy within a minute and this is not an infeasible number of runs, it would be more compelling to increase the success rate of a single try. To see how the genetic algorithm behaves, one can look at the best value of the strategies in each generation. This is depicted in Figure <ref> for four runs of the algorithm. The x-axis shows the generation, the y-axis the best expected value. In all four runs, it is clear that the most progress is in the first generations of the algorithm and there is only one run that keeps improving the optimal value. The other runs quickly get stuck in a local maximum. The worst local maximum value where a run gets stuck on has expected value of around 7.11, which is quite far away from the believed to be optimal 7.305181. The plot shows that the algorithm gets stuck too often in local optima. This could be caused by a lack of variation in the different generations which means that other routes are left unexplored. Indeed, the 150th generation of the algorithm contains almost always exactly only one strategy that fills the entire list: the (local) maximum where it got stuck. Thus it is clear that the variation throughout the algorithm should be increased to hopefully find the optimal strategy more often. Rerunning the algorithm with all mutations shows their added value for larger graphs. Again, we have plotted four runs of the algorithm in Figure <ref>. This plot shows that adding mutations helps the algorithm to explore more strategies and end up in the best known strategy. §.§ Comparison of the Different Versions Figure <ref> shows that the mutations can enable the genetic algorithm to get out of local optima and eventually find the believed to be best planning. It mostly increases the speed by which a run is able to find the optimal strategy. But how often does such a successful run happen? To test this, the genetic algorithm was run a hundred times for K_10. Out of these 100 runs, the optimal value was found 61 times. Given the fact that one run takes less than a minute, it is feasible to run the genetic algorithm ten times. Assuming that the optimal value is found with a probability of 61/100 per run, the probability to find this strategy after ten runs is 0.9999. To put this result into perspective, the comparison of four different genetic algorithms can be made. The first considered algorithm does not contain any mutations, the second one only contains the vertex flip mutation. Then there is the genetic algorithm with only the added random walk mutation and lastly there is the genetic algorithm with all mutations combined as explained above. There is a clear difference in the performance between the different versions. Most mutations increase the probability of finding the optimal value. However, the send flip in itself does not clearly improve the algorithm. The vertex flip on its own does perform better, but 32/100 is still not amazing. Adding the random walk increases the odds of success more. In more than half of the runs the optimal planning was found. 
The genetic algorithm where three types of mutations are combined is still a little bit better, but since the genetic algorithm includes a lot of randomness, it is hard to determine whether this is actually better than only adding a random walk. Since both algorithms have no distinguishable speed difference, they seem interchangeable. In any case, the version of the algorithm with the combination of the mutations provides a tool to quickly and accurately find the believed to be optimal planning for larger graphs. § TOWARDS A GENERALIZATION: MULTIPLE DRONES So far, we have considered surveillance missions where only one drone is deployed. It makes sense to extend this scenario to missions with multiple drones. This section provides a generalization of the model for such missions and discusses options to adapt our algorithm to this model. First of all, similar modeling choices as before need to be made. * All drones start from the safe base camp, where there is no information to be retrieved. The base camp is also the last vertex in the routes of all drones. * Routes and send strategies are determined before the mission, so there is no way of modifying the mission if a drone is intercepted by the enemy. Also, if this happens, it is assumed that the other drones can continue their mission. * All drones are allowed to retrieve the same information. This means that the information is considered to be a picture taken at a location rather than a package that is retrieved. * If information is successfully transmitted by a drone, the other drones can still send the same information. But this information should not be double counted. The expected value of transmitted information per vertex does increase if its information is sent multiple times, but can never exceed the total amount of information available at that specific vertex. The last condition requires a new way of computing the expected value, as we cannot just look at the different transmissions per drone. Therefore we derive a new formula for the expected value that can be used for any number of drones. While for the MILP, it was successful to focus on the transmission vertices, we will shift focus to how much information from each vertex is expected to reach the base camp. This enables us to compute the expected value for multi-drone missions. The expected amount of information that is safely transmitted can also be formulated as the sum of the expected fractions of each unit of information that is safely transmitted. If vertex i contains w_i units of information, then we compute 𝔼[ Y_i ]: the expected amount of information from vertex i that reaches the base. In the case of one drone, this is equal to the product of w_i with the probability that the drone safely reaches the first next transmission vertex and the probability that the transmission at this vertex is successful. So, let D_i denote the event that the drone successfully transmits the information that it collected at vertex i. Then, for a single drone mission, 𝔼[X] = ∑_i = 1^|V| w_i ·ℙ(D_i). This formula can be extended to multiple drones, but we will require some more notation. Suppose that we have ℓ drones and let D_iv be the binary random variable that equals 1 if drone i succeeds in transmitting the information from vertex v. Moreover, let I ⊆ [ℓ] = {1, …, ℓ}. 
Then for a given v, we are interested in

ℙ(⋃_i ∈ [ℓ] D_iv) = ∑_I ⊆ [ℓ], I ≠ ∅ (-1)^|I|-1 · ℙ(⋂_i ∈ I D_iv) = ∑_I ⊆ [ℓ], I ≠ ∅ (-1)^|I|-1 · ∏_i ∈ I ℙ(D_iv),

where we have used the inclusion-exclusion principle and the independence of the drones. Hence, this leads to a formula for the expected amount of information from vertex v that safely reaches the base:

𝔼[Y_v] = w_v · ∑_I ⊆ [ℓ], I ≠ ∅ (-1)^|I|-1 ∏_i ∈ I ℙ(D_iv).

Linearity of expectation then provides the total expected value:

𝔼[X] = ∑_v ∈ V 𝔼[Y_v].

§.§ Adaptation of the Genetic Algorithm

As the MILP formulation was already slow for a single-drone mission, it seems hard to generalize it to a practically relevant program for any number of drones. Therefore we immediately present a generalization of the genetic algorithm. As we have a formula for the expected value of a multi-drone strategy, we are able to rank the different strategies in the genetic algorithm.

The initialization of the algorithm is very similar to before. At random, the required number of routes is created. These routes are again allowed to be very long, i.e. L_max remains n+100 for a graph on n vertices. The transmission strategy is determined by a local search algorithm on the total expected value with a randomly generated starting point, as in Section <ref>. This means that we only flip one transmission at a time, combined over the send strategies of all drones. Concerning the cross-over, the same method as for the single-drone mission is kept. However, we first randomly permute all routes of the drones in the mission and then pairwise cross them over to obtain a new multi-drone strategy.

Also the mutations have the same flavor as in the single-drone case. First of all, there is the added random walk. This is the same mutation as described for the single drone: one of the routes is picked and a random walk is added somewhere in the walk. This continues to be a useful mutation to increase the variation in the genetic program. But it slows down the algorithm, because it requires a local search for the optimal send strategy in terms of the expected value for multiple drones. Therefore this mutation is only applied with a low probability: if a generation consists of κ strategies, the mutation probability is set at 2/κ. A second important remark is that, because of the formulation of the expected value, the different drones are still incentivized to travel to all vertices of the graph and try to gather information everywhere. However, it does make sense for the drones to fly more or less in `opposite directions' through the graph. In this case, more vertices have a high probability of their information being successfully transmitted by at least one of the drones. This inspired the reversed mutation. In this mutation, exactly one of the routes is completely reversed. The same holds for the corresponding send strategy. This mutation is computationally cheaper than the other one, so it is applied with 20% probability.

§.§ Performance of the Multi-Drone Genetic Algorithm

After having adapted the genetic algorithm to the multiple-drone scenario, we have tested this algorithm on multiple graphs using two drones. On a graph with six vertices and a sparse graph on ten vertices, the genetic algorithm remained fast and reliable.
For example, for the complete graph on 6 vertices given by the matrix

1     0.97  0.81  0.97  0.95  0.96
0.97  0.93  0.92  0.87  0.89  0.87
0.81  0.92  0.96  0.81  0.98  0.93
0.97  0.87  0.81  0.85  0.93  0.93
0.95  0.89  0.98  0.93  0.91  0.89
0.96  0.87  0.93  0.93  0.89  0.90

the returned solution is

Drone 1
  Route: (0,3,0,5,0,4,2,1,0)
  Send:  (0,0,1,0,1,0,1,0,1)
Drone 2
  Route: (0,1,0,4,2,5,0,3,0)
  Send:  (0,0,1,0,1,0,1,0,1)

which increases the expected value from 4.115090 for a single drone to 4.859738 for two drones, out of 5 possible units of information. Note that, indeed, the structure of the two routes is approximately opposite. But on the example of K_10, the current algorithm is not sufficient. The algorithm becomes too slow for practical use and does not consistently return the same optimal strategy. The best strategy found so far is given by

Drone 1
  Route: (0,5,2,6,7,8,7,6,2,5,9,3,1,0,4,0)
  Send:  (0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1)
Drone 2
  Route: (0,4,0,1,3,9,5,2,6,7,8,7,6,2,4,0)
  Send:  (0,0,1,0,1,0,0,0,0,1,0,1,0,0,0,1)

with expected value 8.653276, but it was found in less than 10% of the runs. The best solution for one drone had expected value 7.305181, showing that it does pay off for larger missions to use multiple drones, even if that strategy is not necessarily optimal. However, as further research, it would be interesting to further improve the methods used, to solve this problem for even larger instances and find the optimal solution more easily.
Octo-planner: On-device Language Model for Planner-Action Agents
Wei Chen, Zhiyuan Li, Zhen Guo, Yikang Shen
arXiv (cs.CL, cs.HC), 26 June 2024, http://arxiv.org/abs/2406.18082v1
§ ABSTRACT

AI agents have become increasingly significant in various domains, enabling autonomous decision-making and problem-solving. To function effectively, these agents require a planning process that determines the best course of action and then executes the planned actions. In this paper, we present an efficient on-device Planner-Action framework that separates planning and action execution into two components: a planner agent, or Octo-planner, optimized for edge devices, and an action agent using the Octopus model for function execution. Octo-planner first responds to user queries by decomposing tasks into a sequence of sub-steps, which are then executed by the Octopus action agent. To optimize performance on resource-constrained devices, we employ model fine-tuning instead of in-context learning, reducing computational costs and energy consumption while improving response times. Our approach involves using GPT-4 to generate diverse planning queries and responses based on available functions, with subsequent validations to ensure data quality. We fine-tune the Phi-3 Mini model on this curated dataset, achieving a 97% success rate in our in-domain test environment. To address multi-domain planning challenges, we developed a multi-LoRA training method that merges weights from LoRAs trained on distinct function subsets. This approach enables flexible handling of complex, multi-domain queries while maintaining computational efficiency on resource-constrained devices. To support further research, we have open-sourced our model weights at <https://huggingface.co/NexaAIDev/octopus-planning>. For the demo, please refer to <https://www.nexa4ai.com/octo-planner#video>.

§ INTRODUCTION

Artificial intelligence (AI) agents <cit.> have significantly transformed various industries by enabling autonomous decision-making and improving operational efficiencies <cit.>. These agents rely on a critical planning process that involves determining the optimal course of action, executing the planned actions, and summarizing the outcomes. Large Language Models (LLMs) such as Gemini-Pro <cit.> and GPT-4 <cit.> have shown potential in this domain. While these models face challenges in executing complex planning tasks at a level comparable to human performance <cit.>, they remain effective in addressing simpler tasks, thereby facilitating practical applications. One such application is the emergence of AI assistant tools from companies like MultiOn <cit.>, Simular AI <cit.>, and Adept AI <cit.>, which leverage the capabilities of LLMs to provide intelligent assistance across various domains. Additionally, consumer-oriented AI hardware products, such as Rabbit R1 <cit.>, Humane AI Pin <cit.>, and Limitless Pendant <cit.>, integrate LLMs into user-friendly devices, making intelligent assistance more accessible and driving significant traction. The success of AI agents depends on the performance of the underlying LLMs. Agents using pre-trained models without fine-tuning on task demonstrations have relatively low success rates, ranging from 12% on desktop applications <cit.> to 46% on mobile applications <cit.>, while those leveraging fine-tuned models can achieve up to 80% success rate on tasks similar to their training data <cit.>.
However, using LLMs for AI agents is costly due to high computational demands and infrastructure expenses, limiting widespread adoption. The lack of on-device AI agents restricts applications requiring real-time processing, offline functionality, or enhanced privacy. On-device AI agents offer advantages including reduced latency, offline operation, lower costs, and improved data security <cit.>. While action models like Octopus V2 achieve over 95% accuracy for function calling <cit.>, an on-device planning model is still missing. General agent frameworks use single-model in-context learning, requiring lengthy function descriptions and planning instructions in each prompt. This approach is impractical for on-device models with limited context lengths, causing high latency and battery consumption on edge devices. In this paper, we introduce Octo-planner, an on-device planning agent that addresses the key challenges of efficiency, adaptability, and resource constraints. Our Planner-Action framework separates planning and action execution into two components: a planner agent, or Octo-planner, optimized for edge devices, and an action agent using the Octopus model for function execution. By prioritizing fine-tuning over few-shot prompting, we reduce computational costs and minimize key-value (KV) cache requirements. Our approach uses GPT-4 to generate and validate planning data, which is then used to fine-tune Phi-3 Mini for on-device deployment. In-domain tests demonstrate that this fine-tuning improves planning success rates to 97%. To address multi-domain planning challenges, we developed a multi-LoRA training method that merges weights from LoRAs trained on distinct function subsets. This enables flexible handling of complex, multi-domain queries while maintaining computational efficiency on resource-constrained devices. By focusing on pre-defined functions for simpler tasks and leveraging fine-tuning, we aim to make AI agents more practical, accessible, and cost-effective for real-world applications. This work aims to contribute to the ongoing efforts to make AI more accessible and practical for everyday use. By bridging the gap between AI agent potential and edge computing constraints, we seek to facilitate the adoption of intelligent, on-device assistants across various domains. Through open-sourcing our approach, we hope to inspire further innovations in on-device AI, expanding the reach of advanced planning capabilities to a broader range of applications. § RELATED WORKS Planner agent Language models have become essential in planning agent systems. Proprietary models like OpenAI's assistant API <cit.> excel in generating strategies based on user queries and available functions. Recent advancements have further expanded the capabilities of language models in planning. The ReAct framework <cit.> integrates planning and acting for limited action spaces, while research from Alibaba Group <cit.> highlights the effectiveness of separate planning and action models for complex tasks. In robotics, language models are also increasingly applied to task-level planning <cit.>. Notable examples include SayCan <cit.>, which uses LLMs to break high-level tasks into concrete sub-tasks, and Video Language Planning (VLP) <cit.>, which enhances long-horizon planning through a text-to-video dynamics model. 
This broad application of language models in planning systems, from general strategies to specific robotics tasks, underscores their growing importance and adaptability in decision-making processes across diverse domains. Fine-tuning to replace long context Fine-tuning language models to internalize specific prompts or context information reduces input length and improves efficiency <cit.>. This approach involves training models on carefully curated, task-specific datasets. For models with limited context windows, this technique is particularly valuable as it enables more efficient query processing without sacrificing response quality. The success of fine-tuning largely depends on the use of diverse, high-quality datasets, which ensure the model can generalize across various prompt phrasings <cit.>. When implemented effectively, fine-tuning streamlines application-specific interactions, addressing both context length limitations and computational challenges in practical deployments. LoRA and Multi-LoRA Low-Rank Adaptation (LoRA) efficiently adapts pre-trained language models to specific tasks <cit.>. Unlike fine-tuning, which updates all parameters, LoRA freezes pre-trained weights and adds trainable low-rank matrices to each layer, significantly reducing trainable parameters and computational demands. Multi-LoRA extends this concept by enabling multiple task-specific adapters to be trained, combined, or switched during inference, allowing a single base model to handle various tasks efficiently <cit.>. Building on these approaches, researchers have developed several related variants to address different aspects of model adaptation: LoRA+ optimizes learning rates <cit.>, VeRA uses random projections <cit.>, AdaLoRA implements adaptive rank <cit.>, DoRA decomposes weights <cit.>, and Delta-LoRA updates pretrained weights <cit.>. These variations aim to further refine efficiency or performance in specific scenarios. § METHOD This section presents our framework for on-device Planner-Action agents. We first describe the integration of planning and action agents for efficient problem-solving. We then detail our approach to dataset design and the training process for the planning agent, including support for extensive functions and a plug-and-play capability for additional function sets. Finally, we outline our benchmark used to evaluate agent performance. §.§ Planner and action agents framework Our Planner-Action approach distinguishes itself from general agent frameworks by separating the planning and action execution processes into two components. This separation improves modularity and allows for specialized optimization of each component. The framework operates as follows: Planner Phase: Given a user query q, our planning model π_plan decomposes the task into a sequence of sub-steps. Formally: {τ_1, τ_2, ..., τ_n} = π_plan(q; F), where F is the set of available function descriptions, and τ_i is the i^th execution step. π_plan internalizes F during instruction fine-tuning. Action Phase: For each step in the execution sequence, we employ an action model π_action. At step i, given the observation of the current state O_i, the action model performs: O_i+1 = π_action(τ_i, O_i), where O_i+1 and τ_i+1 are passed to the next step for continued execution. This iterative process ensures a coherent progression through the task's substeps. For the action model, we utilize the Octopus model, which is specifically designed for on-device function calling. 
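To make the control flow concrete, here is a minimal sketch of the two-phase loop (hypothetical interfaces, not the released code: planner_model.generate and action_model.execute stand in for calls to Octo-planner and the Octopus action model):

def run_agent(query, planner_model, action_model, initial_observation):
    # Planner phase: decompose the user query into sub-steps tau_1, ..., tau_n.
    # The separator token matches the planning dataset format described below.
    plan_text = planner_model.generate(query)
    steps = [s.strip()
             for s in plan_text.replace("<|end|>", "").split("<nexa_split>")
             if s.strip()]

    # Action phase: execute each sub-step in order, threading the observation
    # forward, i.e. O_{i+1} = pi_action(tau_i, O_i).
    observation = initial_observation
    for step in steps:
        observation = action_model.execute(step, observation)
    return observation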
Figure <ref> illustrates the difference between our Planner-Action framework and the single-model approach for LLM agents. The modular design of our framework offers several advantages: * Specialization: Separating planning and action execution allows optimization of each model for its specific role, enhancing performance in complex tasks. * Scalability: Independent scaling of planning and action capabilities efficiently accommodates varying task complexities. * Interpretability: Explicit separation of phases improves transparency in the decision-making process. * Adaptability: Easier integration of domain-specific knowledge or constraints into either phase without system-wide changes. §.§ Planning dataset Our framework uses the Octopus model as the action model, requiring training only for the planner agent. We fine-tune the planner agent with the following dataset format: <|user|>{user's query}<|end|> <|assistant|> <nexa_split><nexa_split>...<nexa_split>.<|end|> Special tokens like and are used for chat model pretraining but are optional otherwise. We set n as 1-5, based on our finding that most of tasks on mobile app consist of fewer than 5 steps. The dataset generation and curation process includes: * Dataset collection: Given the available functions F, we use a large language model (GPT-4) to generate diverse queries answerable by these functions. We increase the model's temperature setting to ensure query variety. Responses are then generated in the specified dataset format. Importantly, while function descriptions are used during generation, they are not included in the final dataset. Instead, the planner model internalizes this function information during training. * Data validation: We employ the same language model as a validation tool to assess the correctness of query-response pairs. Despite some errors in the initial generation process, we found that the model effectively classifies generated content as valid or invalid, allowing us to filter out incorrect outputs and maintain dataset quality. Example data points with different number of sub-steps are shown below: 1113One-step example: query: Tell me today's stock market. response: Search for today's stock market news and latest updates<|end|> 1113Two-step example: query: Find and email me Jensen Huang's news in English. response: Find articles about Jensen Huang in English<nexa_split> Email the first article found to myself<|end|> 1113Three-step example: query: Find and increase the screen brightness by 20% to better view the quarterly report document before connecting to the office printer via Bluetooth. response: Find the quarterly report document in the system<nexa_split> Increase screen brightness by 20% to improve visibility<nexa_split> Connect to the office printer via Bluetooth to print the quarterly report<|end|> 1113Four-step example: query: Find and email the project proposal document to project.manager@company.com, then look up the submission deadline and schedule it in my calendar with a reminder. response: Locate the project proposal document<nexa_split> Send the project proposal via email to the project manager at project.manager@company.com<nexa_split> Find the project proposal submission deadline<nexa_split> Schedule a calendar event for the project proposal submission deadline with a reminder<|end|> 1113Five-step example: query: I have a meeting tomorrow morning, please find my presentation then connect to the conference room projector via Bluetooth. 
After that, increase the screen brightness then take a screenshot of the final summary slide and email it to all participants. response: Find the presentation for the meeting tomorrow<nexa_split> Connect to the conference room projector via Bluetooth<nexa_split> Increase screen brightness by 20%<nexa_split> Take a screenshot of the final summary slide<nexa_split> Email the screenshot to all meeting participants<|end|> For the visualization of the dataset collection, please see Figure <ref>. Example function descriptions are in Appendix <ref>. §.§ Benchmark design Our evaluation relies on a carefully constructed test dataset. This dataset is designed to represent the complexities of real-world planning, employing a multi-stage approach that integrates automated generation, expert validation, and empirical testing. The process begins with the automated generation of an initial dataset comprising 1,000 data points using GPT-4. These data points then undergo a rigorous quality assurance process to ensure their integrity and relevance. The quality assessment criteria are as follows: * Each step must correspond to an existing function; * The sequential order of steps must be correct. To ensure the reliability of our evaluation, we incorporate an additional phase of manual verification. This phase involves selecting a subset of examples for end-to-end model execution, thereby validating the accuracy of our results and providing a comprehensive assessment of our model's performance. For the evaluation of our proposed planning model, we employ GPT-4 as an oracle to determine the correctness of the generated plans. This choice is based on empirical observations indicating GPT-4's high proficiency in our specific use case. § EXPERIMENTAL DESIGN Our experimental design assesses the Octo-planner's performance for on-device AI agent planning. We aim to determine the optimal configuration for deploying efficient, accurate planning models on resource-constrained devices while maintaining adaptability to new domains and functions. Our experiments focus on four key areas: * Performance and efficiency trade-offs between full fine-tuning and LoRA. * Multi-LoRA accuracy in handling different function sets simultaneously. * Performance comparison across various base models and sizes. * The impact of dataset size on accuracy, ranging from 100 to 1000 training examples. We conduct supervised fine-tuning on our curated dataset, using Phi-3 Mini and a few other alternatives as the base model. Training includes both full fine-tuning and LoRA techniques. For all experiments, we set the dataset size to be 800 times the number of available functions and perform fine-tuning on an NVIDIA A100 GPU. We use optimized hyperparameters across both techniques: a learning rate of 5× 10^-6, batch size of 4, a warm-up ratio of 0.2 with 2 epochs. For LoRA, we set the target_modules to be . § RESULTS §.§ Full fine-tuning vs LoRA Table <ref> presents a detailed comparison of full fine-tuning and LoRA approaches for our planning model. Our experiments reveal significant differences in performance across these methods. Full fine-tuning achieves the highest accuracy at 98.1%, demonstrating superior performance. In contrast, LoRA performance depends on rank size. With rank 64 and alpha 256, LoRA achieves 85.1% accuracy, while reducing to rank 16 and alpha 32 drops accuracy to 72.9%. These results highlight the trade-off between model performance and computational efficiency when using LoRA. 
While full fine-tuning provides better accuracy, LoRA offers a more resource-efficient alternative, with performance varying based on rank configuration. §.§ Multi-LoRA Training and Merging While LoRA-based training is effective for specific function sets, real-world applications often require handling new or expanded sets of functions. To address this challenge, we propose merging multiple LoRA weights, each trained on distinct function subsets, with the same base model. This approach creates a composite model that combines knowledge from various function sets, offering a scalable solution for complex, multi-domain queries in resource-constrained environments. To evaluate this method, we constructed a benchmark dataset by randomly selecting functions from each LoRA's domain with equal probability, combining them into workflows. Queries and plans were generated using GPT-4. For example, when testing two merged LoRAs, queries may involve Android functions, E-commerce functions, or both, with equal likelihood. The code block below shows sample queries from our benchmark dataset and the corresponding inference results from the multi-LoRA merged model: 1113Android functions only: query: Find the quarterly report for Q3 and print it for the upcoming board meeting. response: Locate the quarterly report document for Q3<|nexa_split|>Print the located quarterly report document<|end|> 1113E-commerce functions only: query: Find ergonomic office chairs and then buy the selected one using my default shipping address and credit card response: Search for ergonomic office chairs<nexa_split>Purchase the selected ergonomic office chair using the default shipping address and credit card<|end|> 1113Android and E-commerce functions: query: Find ergonomic office chair list doc and print it for my shopping response: Search for ergonomic office chair list doc<nexa_split>Print the ergonomic office chair list doc <|end|> Table <ref> presents the performance results of our multi-LoRA merging technique. Each individual LoRA was trained with consistent hyperparameters: rank 64, lora_alpha 256, and target_modules set to "all-linear". The single-domain Android function set LoRA achieves 85.1% accuracy. When merging LoRAs from two domains (Android and E-Commerce), accuracy slightly decreases to 82.2%. Further merging yields lower accuracies: 78.9% for three domains (adding Video Streaming), and 69.7% for four domains (adding Travel). These results reveal a pattern of gradual accuracy decline as we integrate more function sets, with a steeper drop occurring after the third domain is added. §.§ Full fine-tunning with different base models Table <ref> presents the benchmark accuracy for different base models after full fine-tuning. Google Gemma 2b achieved 85.6% accuracy, while the larger Gemma 7b excelled with 99.7%. Microsoft Phi-3 also performed strongly at 98.1%. These results indicate that our framework adapts well to various on-device LLMs, with larger models generally achieving higher accuracy. §.§ Full fine-tuning with different dataset sizes Our default training dataset contains 1000 data points, evenly distributed across 1-5 step sequences (200 each) to represent varying task complexities. We investigated the impact of dataset size on model performance to optimize function set integration efficiency and address synthetic data generation costs. Table <ref> shows the benchmark accuracy for various training dataset sizes: The results show a clear correlation between dataset size and accuracy. 
The full 1000-point dataset achieves 98.1% accuracy, while reducing to 500 data points drops accuracy to 92.5%. Further reductions to 250 and 100 data points result in accuracies of 85.3% and 78.1%, respectively. These findings suggest that for optimal performance, a training dataset of more than 1000 data points is recommended. § CONCLUSION This paper introduces the Octo-planner, an on-device planning agent designed to work alongside action agents like Octopus V2. By separating planning and action execution, we improve specialization and adaptability. Our approach fine-tunes Phi-3 Mini (a 3.8 billion parameter LLM) to serve as a planning agent capable of running locally on edge devices with 97% success in in-domain tests. We've reduced computational demands, improving latency and battery life, and implemented a multi-LoRA technique for expanding model capabilities without full retraining. The Octo-planner contributes to addressing AI deployment concerns including data privacy, latency, and offline functionality. It represents an advancement towards practical, sophisticated AI agents for personal devices. By open-sourcing our model weights, we aim to drive innovation in on-device AI, promoting the development of efficient, privacy-respecting applications that enhance daily life without compromising performance or security. § LIMITATIONS AND FUTURE WORK Our current model, while effective for specific mobile phone use cases, has limitations in its broader applicability. Unlike frameworks such as ReAct, which alternate between planning steps and executing actions based on real-time feedback, our model conducts all its planning in advance. This upfront planning approach, while efficient for straightforward tasks, may be less adaptable to complex or unpredictable scenarios where conditions might change during execution. Future work will focus on exploring an iterative planning methodology that refines plans based on real-time observations, improving adaptability to dynamic environments. We also plan to investigate the integration of our planning model with diverse action models, extending its capabilities beyond mobile applications to areas such as IoT, robotics, and smart home systems. These advancements will address current limitations and expand the versatility of our on-device planning model, bridging the gap between efficient, localized AI processing and the complex demands of real-world applications. unsrt § APPENDIX §.§ Function/API description examples [style=mystyle] def get_trending_news(query, language): """ Retrieves a collection of trending news articles relevant to a specified query and language. Parameters: - query (str): Topic for news articles. - language (str): ISO 639-1 language code. The default language is English ('en'), but it can be set to any valid ISO 639-1 code to accommodate different language preferences (e.g., 'es' for Spanish, 'fr' for French). Returns: - list[str]: A list of strings, where each string represents a single news article. Each article representation includes the article's title and its URL, allowing users to easily access the full article for detailed information. """ def get_weather_forecast(location): """ Provides a weather forecast for a specified location over a given number of days. Each day's forecast includes a brief description of the expected weather conditions. Parameters: - location (str): The location for which the weather forecast is desired. Can be a city name, ZIP code, or other location identifiers. 
Returns: - list[str]: A list of strings, each representing the weather forecast for one day. Each string includes the date and a brief description of the weather conditions. Formatted in 'YYYY-MM-DD: Description' format. """ def send_email(recipient, title, content): """ Sends an email to a specified recipient with a given title and content. Parameters: - recipient (str): The email address of the recipient. - title (str): The subject line of the email. This is a brief summary or title of the email's purpose or content. - content (str): The main body text of the email. It contains the primary message, information, or content that is intended to be communicated to the recipient. Returns: """ def search_youtube_videos(query): """ Searches YouTube for videos matching a query. Parameters: - query (str): Search query. Returns: - list[str]: A list of strings, each string includes video names and URLs. """ def find_route_google_maps(origin, destination, mode): """ Computes a route using Google Maps from an origin to a destination. Parameters: - origin (str): Starting location. - destination (str): Target location. - mode (enum): Mode of transportation, options include 'driving', 'walking', 'bicycling', and 'transit'. The default mode is 'driving'. Returns: - List[str]: The string provides the route details. """ def send_text_message(contact_name, message): """ Sends a text message to the specified contact. Parameters: - contact_name (str): The name of the recipient contact. - message (str): The content of the message to be sent. This is what the recipient will receive. Returns: """ def create_contact(name, phone_number): """ Creates a new contact entry in the device's address book. Parameters: - name (str): Full name of the contact. This should include first and last name. - phone_number (str): phone number of the contact. The phone number should be provided in a standard format, preferably in E.164 format (e.g., +12345678900 for an international format). Returns: """ def set_timer_alarm(time, label): """ Sets a timer or alarm for a specified time. Parameters: - time (str): Alarm time in "HH:MM" 24-hour format. For example, "07:12" for 7:12 AM. - label (str): Custom label for the alarm, default is "alarm". Returns: """ def create_calendar_event(title, start_time, end_time): """ Schedules a new event in the calendar. Parameters: - title (str): Event title. - start_time (str): Event start time as a string in ISO 8601 format "YYYY-MM-DD-HH-MM". For example, "2022-12-31-23-59" for 11:59 PM on December 31, 2022. - end_time (str): Event end time as a string in ISO 8601 format "YYYY-MM-DD-HH-MM". Must be after start_time. For example, "2023-01-01-00-00" for 12:00 AM on January 1, 2023. Returns: """ def set_volume(level, volume_type): """ Sets the volume level for a specified type : "ring" , "media" , "alarm". Parameters: - level (int): Target volume level, from 0 (mute) to 10 (maximum). - volume_type (enum): The category of volume to adjust, select from "ring" , "media" , "alarm". Returns: """
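As a small illustration of how these function descriptions are consumed downstream, the following sketch (not part of the original appendix) splits a generated plan of the <nexa_split>-delimited form shown earlier into individual steps that an action agent could dispatch; the parse_plan helper is illustrative, not the authors' implementation.

[style=mystyle]
# Hedged sketch: splitting an Octo-planner style response into executable steps.
# The delimiter and end token follow the examples shown earlier in this paper.

def parse_plan(response: str) -> list[str]:
    """Split a '<nexa_split>'-delimited plan string into individual step descriptions."""
    plan = response.replace("<|end|>", "").strip()
    return [step.strip() for step in plan.split("<nexa_split>") if step.strip()]

if __name__ == "__main__":
    response = (
        "Find the quarterly report document for Q3<nexa_split>"
        "Print the located quarterly report document<|end|>"
    )
    for i, step in enumerate(parse_plan(response), start=1):
        print(f"Step {i}: {step}")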
http://arxiv.org/abs/2406.18506v1
20240626172014
Feferman Interpretability
[ "Joost J. Joosten", "Luka Mikec", "Albert Visser" ]
math.LO
[ "math.LO" ]
Feferman Interpretability Joost J. Joosten, Luka Mikec, and Albert Visser July 1, 2024 =================================================== We present an interpretability logic or Feferman Interpretability Logic. The provability modality can occur in with a label, as in ^. Likewise the interpretability modality can occur in with a label, as in ^. The labels indicate that in the arithmetical interpretation, the axiomatisation of the base theory will be tweaked/customised. The base theory T will always contain a minimum of arithmetic and T will be approximated by T^a in such a way that T is extensionally the same as T^a. However, T^a will inherit certain properties reminiscent of finitely axiomatised theories. After providing the logic and proving the arithmetical soundness, we set the logic to work to prove various interpretability principles to be sound in a large variety of (weak) arithmetical theories. In particular, we prove the two series of principles from <cit.> to be arithmetically sound using . To date, the arithmetical soundness of these series had only been proven using the techniques of definable cuts. § PRELUDIUM Interpretability Logic is an approach to the study of interpretability. Unlike the study of interpretability degrees and categories of theories and interpretations, the distinctive feature of interpretability logic is the internalisation and nesting of interpretability viewed as a modal connective. For example, interpretability logic allows us to study what the internal verification of the model existence lemma means in formal theories (Principle J_5; see Section <ref>). In the case of classical theories, for the primary reading of the modal connectives, there is a marked difference between provability logic and interpretability logic. Where provability logic is remarkably stable (no arithmetical theories with significantly different provability logics have been discovered), substantially different interpretability logics are realised in different (classes of) theories. Interpretability Logic turns out to be a land of two streams. Its Euphrates is the logic ILM and its Tigris the logic ILP. The logic ILM is the logic of essentially reflexive sequential theories, alternatively characterised as sequential theories with full induction w.r.t. a designated interpretation of a theory of the natural numbers.[We think that the class of theories realising ILM can be extended to a class of essentially sententially reflexive sequential theories. We probably need our arithmetical base to be IΣ_1. However, this possibility has not been studied. See <cit.> for some relevant results.] The logic ILM consists of the base logic IL, given in Section <ref>, plus the principle M: A B → (A∧ C) (B∧ C). The logic ILP is the interpretability logic of finitely axiomatised theories that interpret EA^+, a.k.a. IΔ_0+ supexp. The logic is given by IL plus the principle P: A B →(A B). Both logics were introduced around 1987 by Albert Visser.
A modal semantics for the theories was discovered soon after by Frank Veltman. See, e.g., <cit.> and <cit.>. The arithmetical completeness of ILM was proved by Alessandro Berarducci in <cit.> and by Volodya Shavrukov in <cit.>. The arithmetical completeness of ILP was proved by Albert Visser in his paper <cit.>. For more information, see e.g. <cit.>, <cit.>, <cit.>. But what happens if we distance ourselves from the rivers? There is a scarcity of results for specific theories. We do have a Kripke model characterisation of the interpretability logic of EA, a.k.a. IΔ_0+ exp; however, we do not have an axiomatisation. See <cit.>. Another case is Primitive Recursive Arithmetic. This is a theory that is neither finitely axiomatisable nor essentially reflexive. Some modest results have been obtained towards its interpretability logic, but the full characterisation is still open (<cit.>). The most salient question is: what is the interpretability logic of all reasonable theories? This is a koan-like question, since what counts as reasonable is itself part of the question. A preliminary study was done in <cit.>. See also <cit.> where a list of principles is given and verified. The principles valid in all reasonable theories will certainly be in the intersection of ILM and ILP. An example of such a principle is W: A B → A (B∧ A). This principle has both an ILM- and an ILP-proof. Interestingly, we can generalise the ILM-proof to a wide class of theories, to wit sequential theories where the interpretation of the theory of numbers satisfies . The basic idea here is that we can view the ILM-proof as using the insight that, for all models ℳ of our sequential essentially reflexive theory T, any internal model is an end-extension of the ℳ-internal T-models. This insight has a trace in all sequential theories (as discovered by Pavel Pudlák), to wit that ℳ and its internal model 𝒩 share a definable cut (modulo internally definable isomorphism). We can also generalise the ILP-proof. To do that we use a trick due to Feferman (<cit.>) to make a theory behave as if it were finitely axiomatised by modifying the representation of the axiom set. The P-style proof of W has even wider scope: it holds for all theories (with decent axiom sets) that interpret . In analogy to W, many other principles can be given M-style proofs and P-style proofs with wider scope. The aim of the present paper is to systematically study the P-style methodology and Feferman's trick. We do this by developing a modal logic that is specifically built to implement this methodology. Our present paper is a genuine extension of an earlier paper by Joost Joosten and Albert Visser, to wit <cit.>. § CONTENTS OF THE PAPER The paper contains something for everyone. You will go on a scenic tour through fascinating landscapes of the mind and see things that cannot be unremembered. This thrilling experience will be with you your whole life. This logic is a sublogic both of and , but it is not the intersection of and . (See <cit.>.) § PRELIMINARIES In this section we revisit the basic definitions and results needed in the rest of the paper: definitions and results from arithmetic, formalised metamathematics, and modal interpretability logics.
§.§ Arithmetic * Δ^ b_1-formulas; * The theory S^1_2 and theories with designated interpretations thereof; * IΔ_0+Ω_1 and its relation to S^1_2; * Theories in this paper will be Δ^ b_1-axiomatised theories with a designated interpretation of S^1_2. The main motivation for this choice is that S^1_2 is finitely axiomatisable. Thus, in our context working with S^1_2 is more convenient than working with IΔ_0+Ω_1. However, nothing substantial is entailed by our choice. * * Σ_1-collection * Poly-time decidable predicates * etc. In this paper we will be using reasoning in and over weak arithmetics. To this end, let us start by describing the theory , introduced by Buss in <cit.>. This is a finitely axiomatisable and weak first-order theory of arithmetic. The signature of is (0, , | · |, ⌊1/2·⌋, +, ×, , =, ≤). The intended interpretation of |·| is the length of its argument when expressed in the binary number system. In other words, |n| is (in the intended interpretation) equal to ⌈log_2 (n + 1) ⌉. The intended interpretation of ⌊1/2·⌋ is precisely the one suggested by the notation: dividing the argument by two and rounding the result downward. The symbol # is pronounced `smash' and has the following intended interpretation (“the smash function”): n m = 2^ |n| |m| . The remaining symbols are to be interpreted in the expected way. The motivation for the smash function is that it gives an upper bound to Gödel numbers of formulas obtained by substitution: Suppose is a formula, x a variable and t a term. Given the Gödel numbers of and t (denoted with and t, as usual), the Gödel number of (x ↦ t) will not surpass t. Of course, we need a `natural' Gödel numbering to make this happen. See below. Here and in the remainder of this paper, the assumption is that both the numeral representation and the Gödel numbers we work with are efficient. For example, we can take the Gödel number of a string of symbols to be its ordinal number in an arbitrary computationally very easy but otherwise fixed enumeration of all strings in the language of . As for the numerals, we use efficient numerals, defined recursively as follows: 0 ↦ 0; 2n + 1 ↦( 0 ×n); 2n+2 ↦( 0 ×n). Clearly, efficient numerals have about the same growth rate as the corresponding binary representations. We also require that the code of a subterm is always smaller than the entire term, and, similarly, for formulas. We will consider such codings to be natural. See <cit.> for details. An example of such a natural coding is the Smullyan coding where we code a string of letters (in a given alphabet of prime cardinality) as its number in the length-first ordering. Before introducing (some of) the axioms of , we will first define a certain hierarchy of formulas in the language of . We will say that a quantifier is bounded if it is of the form (Q x t) where t is a term[ By “(Q x t)” we mean “(∃ x)(x ≤ t ∧…)”, if Q is ∃, and (∀ x)(x ≤ t →…)” if Q is ∀.] that does not involve x. A quantifier is sharply bounded if it is of the form (Q x |t|) where t is a term that does not involve x Let Δ_0^ b, Σ_0^ b, and Π_0^ b stand for the set of formulas all of whose quantifiers are sharply bounded. We define Δ_i^ b, Σ_i^ b, and Π_i^ b for i > 0 as the minimal sets satisfying the following conditions: * If and are Σ_i^ b-formulas, then (∧) and (∨) are Σ_i^ b-formulas. * If is a Π_i^ b-formula and is a Σ_i^ b-formula, then and (→) are Σ_i^ b-formulas. * If is a Π_i-1^ b-formula, then is a Σ_i^ b-formula. 
* If is a Σ_i^ b-formula, x a variable and t is a term not involving x, then (∀ x |t|) is a Σ_i^ b-formula. * If is a Σ_i^ b-formula, x a variable and t is a term not involving x, then (∃ x t) and (∃ x |t|) are Σ_i^ b-formulas. * The first five conditions are to be repeated in the dual form: with the roles of Σ and Π, and ∃ and ∀, swapped in all places. * A formula is a Δ_i^ b-formula if it is equivalent over predicate logic both to a Σ_i^ b-formula and to a Π_i^ b-formula. Thus, this hierarchy is analogous to the standard arithmetical hierarchy, with bounded quantifiers in the role of unbounded quantifiers, and sharply bounded quantifiers in the role of bounded quantifiers. Let Φ be a set of formulas which may contain zero or more free variables. We define Φ-PIND axioms to be the formulas (x := 0) ∧ (∀ x) ((x := ⌊1/2 x ⌋) →) → (∀ x) , for all ∈Φ and all variables x. Thus, when proving facts using the schema of polynomial induction, in the inductive step we are allowed to refer to the property obtained for ⌊1/2 n ⌋. This is, of course, faster than the standard schema of mathematical induction where we can use the property obtained for n - 1. The price we pay is a stronger antecedent in the induction principle. We obtain by extending a certain list of 32 quantifier-free formulas (dubbed , see e.g. <cit.>) with all Σ_1^ b-PIND axioms. This somewhat unusually axiomatised theory has a nice connection to computational complexity, as the next theorem shows. We have the following. * Suppose ⊢ (∀ x)(∃ y) (x, y) for some Σ_1^ b-formula . Then there is a -computable function f_ such that if f_(x) = y then (x, y) holds f_ is a witnessing function for , and ⊢ (∀ x) (x, f_(x)). * Conversely, suppose f is a -computable function. Then there is a Σ_1^ b-formula _f such that _f(x, y) holds if and only if f(x) = y, and ⊢ (∀ x)(∃ y) _f(x, y). Theories in this paper will be Δ^ b_1-axiomatised theories (i.e. having -decidable axiomatisations). Moreover, we will always assume that any theory we consider comes with a designated interpretation of S^1_2. That is, when we say “a theory”, we mean a pair of an actual theory together with some singled-out and fixed interpretation of S^1_2. A principle similar to induction is that of collection, in particular Σ_1-collection. The schema (∀ n)((∀ x n)(∃ y) (x, y) → (∃ m)(∀ x n)(∃ y m) (x, y)) where is restricted to Σ_1-formulas possibly with parameters, is the Σ_1-collection schema. Collection is occasionally useful, however we will have to find ways to avoid it as it is not available in . §.§ Interpretability We refer the reader to <cit.> or <cit.> for the definitions of translation and interpretation. There is one point specific to this paper. We want to treat a translation k as an interpretation, in a given theory T, of an unspecified target theory in a given target signature Θ. To fulfill this role, T needs to prove at least the k-translations of the axioms of identity for signature Θ. However, generally, this may fail. The reason is that, even if identity as a logical connective, we treat it in translation simply as a symbol from the signature. In other words, we translate identity not necessarily to identity. Also, we need the guarantee that the domain is non-empty to satisfy the axiom ∃ x x=x. In fact, the usual treatment of interpretations fits free logic without identity best. We consider only finite signatures, so the theory of identity for signature Θ will be given by a single axiom, say 𝔦𝔡_Θ. 
Thus, what we need for the translation k to carry an interpretation at all is that T ⊢𝔦𝔡^k_Θ. We implement a simple hack to ensure that every translation carries an interpretation to some theory. We fix a default translation m that interprets 𝔦𝔡_Θ in T. We can take as the domain of m the full domain of T and translate identity to identity. The translation of the predicate symbols can be arbitrarily chosen. We can now replace k by the disjunctive interpretation k^∗ that is k in case 𝔦𝔡^k_Θ and m otherwise. Clearly, we will always have T⊢𝔦𝔡^k^∗_Θ. Moreover, if T ⊢𝔦𝔡^k_Θ, then k and k^∗ coincide modulo T-provable equivalence. The idea is now simply that the translation we quantify over are really the k^∗, so that they always carry some interpretation. We note that, in the context of interpretability logics, we are interested in translations from a signature Θ to itself. In that context, we can take as the default translation m simply the identity translation on Θ. In Section <ref>, we will strengthen our demand on translations somewhat to ensure that we do have coding in all theories we consider. §.§ Formalised provability and interpretability * Efficient numerals p Added to 2.1 * Mention naturality condition on coding. This is needed, for example in Lemma <ref> when we write Note that, by the naturality conditions on our coding, τ is bounded by p. Added to 2.1 * Formalised provability. Regular and Feferman. * Domain specifier; * Formalised interpretability. * Interpretability will in this paper be theorems interpretability, i.o.w. * k:U V :∀ϕ (_Vϕ→_Uϕ^k). * U V is a ∃Σ^ b_1 sentence. * Etc. Before introducing formalised interpretability, let us say a few words on formalised provability. For a given signature, we fix a natural formalisation aproof(p,x) of proof from assumptions. We usually leave the signature implicit. We assume that a proof from assumptions is given, Hilbert-style, as a sequence of pairs of a number giving the status of the inference step and a formula.[Of course, we do not really need the Hilbert format. However, the definition would be somewhat more complicated for, say, Natural Deduction.] Say 𝔞 tells us that the formula is an assumption. We can make aproof a Δ_1^ b-predicate. A theory T comes equipped with a representation α of its axiom set. We will write axioms_T for α. The default is that α is Δ_1^ b. We write: * proof_T(p,x) for: aproof(p,x) ∧ (∀ i length(p)) ((p)_i0 = 𝔞→ axioms_T((p)_i1)). * Pr_T(x) for ∃ p proof_T(p,x) We note that, if α is Δ_1^ b, then so is prf_T(p,x). Let us denote the efficient numeral of the (natural) Gödel number of A by A. Sufficiently strong theories (such as ) prove the Hilbert–Bernays–Löb derivability conditions (<cit.>): * for all , if T ⊢, then T ⊢𝖯𝗋_T(); * for all , T ⊢𝖯𝗋_T(→) → (𝖯𝗋_T() →𝖯𝗋_T()); * for all , T ⊢𝖯𝗋_T() →𝖯𝗋_T(𝖯𝗋_T()). These conditions, in combination with the Fixed Point Lemma, suffice to show that T ⊢𝖯𝗋_T(0=1) and, consequently, T ⊢ 0 = 1 follows from T ⊢ 𝖯𝗋_T(0=1), i.e. Gödel's second incompleteness theorem. These conditions also suffice to show that the following holds: if T ⊢𝖯𝗋_T() →, then T ⊢. Thus T is only “aware” that 𝖯𝗋_T() implies in case the conditional is trivially satisfied by the provability of its consequent. This entailment is known as Löb's rule. In fact, T is “aware” of this limitation (formalised Löb's rule): T ⊢𝖯𝗋_T(𝖯𝗋_T() →) →𝖯𝗋_T(). We can read e.g. 
the formulised Löb's rule as a propositional scheme by replacing 𝖯𝗋_T with and the variable that ranges over T-formulas by the variable that rangesover propositional modal formulas. The provability logic is the extension of the basic modal logic with an additional axiom schema representing Löb's formalised rule: ( A → A) → A. In his well-known result, Solovay <cit.> established arithmetical completeness for this logic. Upon inspection, this result works for all c.e. extensions of EA, a.k.a. IΔ_0+ exp, that are Σ_1-sound.[In a wide range of cases, we can, given the theory, redefine the representation of the axiom set in such a way that one can drop the demand of Σ_1-soundness. See, e.g., <cit.>.] The predicate 𝖯𝗋_T satisfies the following property, which is known as the Kreisel Condition, for ∃Σ^ b_1-sound theories: T ⊢ if and only if ℕ𝖯𝗋_T(). We can find alternative axiomatisations of 𝖯𝗋_T, that satisfy Property (<ref>), but behave differently w.r.t. consistency. One such axiomatisation is given in <cit.>. Say the original axiomatisation is α. We write α_x(y) for α(y) ∧ y≤ x. Let the theory axiomatised by α_x be T_x. We take: ϝ(x) iff α(x) ∧ Con(T_x). We note that we diverge from our default here: ϝ is Π_1. We take T^ϝ to be the theory axiomatised by ϝ. We need that the theory T is Σ_1-sound and reflexive to make (<ref>) work for Pr_T^ϝ. Let us call this notion Feferman-provability. As we are interested only in consistent theories, clearly this predicate has the same extension as the predicate 𝖯𝗋_T. However, it is provable within that 0 = 1 is not Feferman-provable. This is, of course, not the case with 𝖯𝗋_, as that would contradict Gödel's second incompleteness theorem. If we are dealing with a theory T with a poly-time decidable axiom set, by Theorem <ref>, there is a Σ^ b_1-predicate (actually Δ^ b_1) verifying whether a number codes a T-proof of a formula. This implies that the provability predicate, claiming that a proof exists for some given formula, is a ∃Σ^ b_1-predicate. This is convenient because for we have provable ∃Σ^ b_1-completeness: For any ∃Σ^ b_1-formula we have ⊢→_T . We now move on and consider interpretability. There are various notions of formalised interpretability[see Theorem 1.2.10. of <cit.> for a discussion on their relationships.] Here we are interested in theorems-interpretability, i.o.w. we say that k is an interpretation of V in U (we write k:U V) if and only if, (∀) (_V→_U^k). Here _V and _U are the provability predicates of V and U, respectively. We remind the reader that theorems-interpretability is -provably transitive —unlike axioms-interpretability. The k-translation of is denoted as ^k. If V is a finitely axiomatisable theory, then U V is in fact a ∃Σ^ b_1 sentence. This is due to the fact that, for finitely axiomatised theories V, their interpretability in U boils down to the provability of the translation of the conjunction of their axioms and the fact that the translation function is P-TIME. As the theories studied in this paper are all Δ^ b_1-axiomatisable, the aforementioned statement is ∃Δ^ b_1, in particular ∃Σ^ b_1. §.§ Modal interpretability logics There are many different interpretability logics in the literature. The language of interpretability logics is that of propositional logic together with a unary modal operator and a binary modal operator . We adhere to the binding convention that the Boolean operators bind as usual and that binds as strong as with all Boolean operators except → binding stronger than and binding stronger than →. 
Thus, for example, A B → A ∧ C B ∧ C will be short for: (A B) →( ( A ∧ C) (B ∧ C) ). Most interpretability logics extend a core logic called . The logic has as axioms all tautologies in the modal propositional language containing and together with all instances of the following axioms. [ L_1 ⊢ (A → B) → ( A → B); L_2 ⊢ A → A; L_3 ⊢ ( A → A) → A; ; J_1 ⊢ (A → B) → A B; J_2 ⊢ (A B ) ∧ (B C) → A C; J_3 ⊢ (A C) ∧ (B C) → A∨ B C; J_4 ⊢ A B → ( A → B); J_5 ⊢ A A ] The only rules are Necessitation Nec: ⊢ A ⊢ A and Modus ponens. We will consider extensions of by adding axiom schemes to . These logics will be named by appending the names of the new schemes to . For example, the principle P is given by ⊢ A B → (A B) and the logic arises by adding this scheme/principle to . Likewise, the principle P_0 is given by ⊢ A B → (A B) and the logic arises by adding this scheme/principle to . For later use, we prove the following easy observation. If we replace in the axiom schema J_5: ⊢ A A by J_5': ⊢ B C → B C, then the resulting logic will be equivalent to the original logic . Any formula A A is obtained from B C → B C by instantiating in the latter formula B by A and C by A. Thus, we get A A → A A since the antecedent is clearly provable without using J_5'. For the other direction, we reason in and assume B C. Now, by J_5 we get C C so that by the transitivity axiom J_2 we obtain the required B C. § TWEAKING THE AXIOM SET For finitely axiomatised theories V, we have: ⊢ U V →_ (U V), by ∃Σ^ b_1 completeness because U V is a ∃Σ^ b_1-sentence. Recall that, in this paper, as a default, all theories are assumed to be Δ^ b_1-axiomatised. If this were not the case, U V need not, of course, be a ∃Σ_1^ b-sentence, even for finitely axiomatised theories V. To mimic the P-style behaviour for an arbitrary theory V, we will modify V to a new theory V' that approximates V to obtain ⊢ U V →_ (U V'). Of course, the new theory V' should be sufficiently like V to be useful. Thus, we define a theory V' that is extensionally the same as V, but for which U V' is a statement that is so simple that under the assumption that U V, we can easily infer _ (U V'). §.§ The approximating theory defined We start with a first approximation. Given some translation k, let us define the set of axioms V' as consisting of just those axioms ϕ of V such that U⊢ϕ^k. Note that, if k:U V, then V and V' have the same axioms. However, when V is not finitely axiomatisable, in general, we cannot take the insight V≡ V' with us when we proceed to reason inside a box. In formulas: we do have k:U V ⇒ V ≡ V' but in general we do not have k:U V ⇒ ( V ≡ V'). Notwithstanding, defining V' as above is useful and works modulo some trifling details. Firstly, the definition of the new axiom set does not have the right complexity. Secondly, if the argument is not set up in a careful way, we may seem to need both Σ_1-collection and . We shall use a variation of Craig's trick so that the axiom sets that we consider will remain to be Δ_1^ b-definable. The same trick makes the use of strong principles, like Σ_1-collection and , superfluous. Let U and V be Δ_1^ b-axiomatised theories. Moreover, let k be a translation of the language of V into the language of U that includes a domain specifier. We remind the reader of Smoryński's dot notation. e.g., ṗ = ṗ functions as a term that is the arithmetisation of the map p ↦p=p. Here is our definition of VUk. VUkx (∃ p, x) (x= conj(, ṗ=ṗ) ∧ V∧Up,k). We note that this is a Σ^ b_1-formula. 
We can see that is equivalent to a Δ_1^ b-formula by describing a procedure for deciding whether is a VUk-axiom. * Is a conjunction? If not, does not qualify. Otherwise, proceed to the next step. * Is the first conjunct of , say , a V-axiom? If not, does not qualify. Otherwise, proceed to the next step. * Is the second conjunct of the form p = p and do we have Up,k? If not, does not qualify. If so, will indeed be a VUk-axiom. The following lemma tells us that verifies that k:U V implies that V and VUk are extensionally equal. Actually, VVUk always holds and does not depend on the assumption k:U V. Let U and V be Δ_1^ b-axiomatised theories. We have * ⊢ (∀ k) ( id: VVUk). * ⊢ (∀ k) ( k:U V → id:VUk V). Ad (<ref>). Reason in . We have to show: _VUkφ→_V φ. This is easily seen to be true, since we can replace every axiom φ∧ (p = p) of VUk by a proof of φ∧ (p = p) from the V-axiom φ. The resulting transformation is clearly p-time. Ad (<ref>). Reason in . Suppose k:U V and _Vφ. We set out to prove _VUkφ. Let p be a proof of φ from V-axioms τ_0, … ,τ_n. (Note that n need not be standard.) We would be done, if we could replace every axiom occurrence of τ_i in p by [∧ E,l]τ_iτ_i ∧ (q_i=q_i) where q_i would be a U-proof of τ_i^k, so that we would obtain a VUk-proof r of φ. Clearly, for each τ_i we have that _V τ_i, so that by our assumption k:U V we indeed obtain a U proof q_i of τ_i^k. However, these proofs q_i may be cofinal and, thus, we would need a form of collection to exclude that possibility to keep the resulting syntactical object r finite. It turns out that we can perform a little trick to avoid the use of collection. To this end, let τ be the (possibly non-standard) conjunction of these axioms τ_i. Note that, by the naturality conditions on our coding, τ is bounded by p. Since, clearly, we have _Vτ we may find, using k:U V, a U-proof q of τ^k. Here it is essential that we employ theorems interpretability in this paper! We may use q to obtain U-proofs of q_i of τ_ik. Clearly, we can extract appropriate q_i from q in such a way that |q_i| is bounded by a term of order |q|^2. We can now follow our original pland and replace every axiom occurrence of τ_i in p by [∧ E,l]τ_iτ_i ∧ (q_i=q_i) and obtain a VUk-proof r of φ. We find that |r| is bounded by a term of order |p|·|q|^2. So, r can indeed be found in p-time from the given p and q. For the previous lemma to hold it is essential that we work with efficient numerals p. The reader may find it instructive to rephrase the lemma in terms of provability. For U and V being Δ_1^ b-axiomatised theories we have * ⊢ (∀ k) (∀φ) ( _VUkφ→_V φ); * ⊢ (∀ k) ( k:U V →∀φ ( _VUkφ↔_V φ) ). As mentioned before, even though we have extensional equivalence of V and VUk under the assumption that k:U V, we do not necessarily have this under a provability predicate. That is, although we do have _ (_VUkφ→_V φ) we shall, in general, not have k:U V →_ (_V φ→_VUkφ). §.§ A P-like principle for the approximated theory The theory VUk is defined precisely so that it being interpretable in U is true almost by definition. This is even independent on k being or not an interpretation of V in U. The following lemma reflects this insight. For U and V, Δ_1^ b-axiomatised theories we have ⊢ (∀ k) ( k: UVUk). Reason in . Suppose p is a VUk-proof of ϕ. We want to construct a U-proof of ϕ^k. As a first step we transform p into a V-proof p'of ϕ as we did in the proof of Lemma <ref>,(<ref>): replacing all axioms φ∧ (s = s) of VUk by a proof of φ∧ (s = s) from the V-axiom φ. 
Next we transform p', using k, into a predicate logical proof q of ϕ^k from assumptions τ_i^k, where each τ_i is a V-axiom. It is well known that this transformation is p-time. Finally, each axiom τ_i extracted from p, comes from a VUk-axiom τ_i∧ (r_i=r_i), where r_i is a U-proof of τ_i^k. So our final step is to extend q to a U-proof q' by prepending the U-proofs r_i above the corresponding τ_i^k. This extension will at most double the number of symbols of q, so q'≈ q^2. As a direct consequence of this lemma, we see via necessitation that ⊢_ (∀ k) ( k: UVUk) so that in a trivial way we obtain something that comes quite close to the P-schema: ⊢ U V →_ (∀ k) ( k: UVUk). However, Equation <ref>, is somewhat strange, since the antecedent of the implication does no work at all. In this paper, we are interested in finite extensions. Fortunately, a minor modification of Equation <ref> does give information about finite extensions. Let U and V be Δ_1^ b-axiomatised theories. We have: ⊢ (∀ k) (∀) k: U (V+) →_ (k: U (VUk+)). We reason in . Suppose U (V+A). It follows that _UA^k, and, hence, _U_UA^k. We reason inside the _U. We have both _U^k and k:UVUk. We prove U (VUk+). Consider any V-sentence B and suppose _VUk+. It follows that _VUk(→). Hence, _U (→)^k. We may conclude that _U ^k, so we are done. We will need the following thinned version of Theorem <ref>, which shall be the final version of our approximation of the principle P. Let T be a Δ_1^ b-axiomatised theory and let and be T-sentences. We have: * ⊢ (∀ k) ( k: (T + ) (T + ) →_ k: (T + ) (TT+k + )), * ⊢ (∀ k) ( k: (T + ) (T + ) →_ k: (T + ) (TTk + )). For (a), we apply Theorem <ref> to T+ in the role of U, T in the role of V, and in the role of . Claim (b) follows from (a), since, clearly, TT+k extends TTk §.§ Iterated approximations We will need to apply our technique of approximating theories to theories that themselves are already approximations[An example can be found in the proof of Lemma <ref>.]. To this end we generalise the definition of approximated theories to sequences of interpretations as follows. Let V^[ U, k ] := V^[U, k]. We recursively define V^[ U_0, k_0 , …, U_n, k_n, U_n + 1, k_n +1] for n ≥ 0 to stand for ( V^[ U_0, k_0 , …, U_n, k_n]) ^ [U_n + 1, k_n + 1 ], i.e.: V^[ U_0, k_0 , …, U_n, k_n, U_n + 1, k_n +1]x (∃ p, x) ( x = ⌜∧ (ṗ = ṗ) ⌝ ∧ V^[ U_0, k_0 , …, U_n, k_n] ∧ U_n + 1p,k_n + 1). If x denotes a finite sequence U_0, k_0 , …, U_n, k_n, then we understand V^[x, U_n + 1, k_n +1] as V^[ U_0, k_0 , …, U_n, k_n , U_n + 1, k_n +1]. Theorem <ref>(a) can be adapted to this new setting, so that we get the following. Let T be a Δ_1^ b-axiomatised theory and let α and β be T-sentences. Let the variable x range over codes o sequences of pairs U_i, k_i. We have: ⊢ (∀ x)(∀ k) ( k: (T + ) (T^[x] + ) →_( (T + ) T^[x, T + , k ] + )). This is immediate from Theorem <ref>, noting that the parameter x does not affect the proof of that theorem. Again, it seems that there is no need to keep track of the formulas γ in the TT+γk definition. Therefore, we shall, in the sequel, simply work with sequences of interpretations of T in T rather than sequences of pairs of theory and interpretation. The corresponding definition is as follows where denotes the empty sequence and for a sequence x, we use x ⋆ k or sometimes simply x,k to denote the concatenation of x with k. For T a Δ_1^ b-axiomatised theory we define T^[] := T and T^[x ⋆ k] := ( T^[x] )^[T, k]. From now on, we shall write T^[k] instead of T^[ k]. 
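For instance, unfolding this definition for a two-element sequence gives a small worked instance of the notation just introduced: T^[k_0, k_1] = (T^[k_0])^[T, k_1] = (T^[T, k_0])^[T, k_1], so each entry of the sequence approximates the theory built from the previous entries.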
With the simplified notion of iteration we can formulate a friendlier P-flavoured principle. Let T be a Δ_1^ b-axiomatised theory and let α and β be T-sentences. Let x range over sequences of interpretations. We have: ⊢ (∀ x)(∀ k) ( (T + α) (T^[x] + β) →_ (k: ( T + α) ( T^[x, k ] + β) ) ). § A MODAL LOGIC FOR APPROXIMATION In this section, we will present a modal logical system to reason about interpretations and approximations based on them. §.§ The logic We proceed to articulate modal principles reflecting facts about approximations. The main idea is to label our modalities with sequences of interpretation variables. Of course, in the arithmetical part, these sequences will indeed be interpreted via some map κ as a sequence κ () of translations from the language of T to the language of T. In the next subsection we shall make the arithmetical reading precise but the idea is that A ^ B will stand for T+ α T^[κ()] + β, whenever A is interpreted by the T-sentence α and B by β. Likewise, ^ A will be interpreted as _T^[κ()]α. In the next section, we will see how we can avoid nonsensical interpretations k so that the theories T^k will always contain a minimum of arithmetic. As in <cit.>, we will call our modal system even though the system presented here slightly deviates from the one in <cit.>. We first specify the language. We have propositional variables p_0,p_1,p_2,…. We will use p,q,r,… to range over them. Moreover, we have interpretation variables k_0,k_1,k_2,…. We have one interpretation constant id. The meta-variables k,ℓ,m,… will range over the interpretation terms (i.e. interpretation variables and id). The meta-variables , , , … will range over finite sequences of interpretation variables.Including the empty sequence? The clauses below suggest not. We need empty sequences for some of our current formulations at least. Wouldn't the clause be equivalent to saying this for the empty sequence case: If A B is in the language, ... A ^k B is ... The modal language of is the smallest language containing the propositional variables, closed under the propositional connectives, including ⊤ and , and, given an interpretation term k, the modal operators ^k and ^k, and closed under the following rule. * If A ^ B is in the language and k is an interpretation term not contained in , then A^, kB is in the language. Similarly, for ^, k A. We let ^ A abbreviate ^ A. We write for ^ id, and analogously for and . The logic  has axioms ⊢ A for any propositional logical tautology A in the extended language. Moreover,  has the obvious interchange rules to govern interaction between both sides of the turnstyle ⊢ based on the deduction theorem so that , Γ⊢ C ⇔⊢⋀Γ→ C. Apart from modus ponens,  has the following axioms and rules. [ L_1 ⊢ (A → B) → ( A → B); L_2, ⊢y A →svt A; L_3 ⊢ ( A → A) → A; ; J_1 ⊢ (A → B) → A B; J_2 a ⊢ (A B ) ∧ (B C) → A C; J_2 b ⊢ (A B ) ∧^(B → C) → A C; J_3 ⊢ (A C) ∧ (B C) →A∨ B C; J_4 ⊢ A B → ( A → B); J_5, ⊢ A^^B → A^B; ; ,k ⊢,k A → A; ,k ⊢ A B → A ,k B; ; Nec ⊢ A⊢^ A; P, , k Γ,, ^ (A ^, k B)⊢ CΓ, A^ B ⊢ C; ] In the above, the rule P, , k is subject to the following conditions: * k is an interpretation variable; * k does not occur in , Γ, A,B,C; * consists of formulas of the form E^, k F→E^ F and ^ E→^, k E. §.§ Basic observations on The first group of axioms L_1-L_3 express the straightforward generalisation of the regular provability axioms. The second group of axioms J_1-J_5, are the straightforward generalisations of the interpretability axioms. 
In particular, taking all interpretations to be the identity we retrieve all the regular axioms. The third group of axioms tells us how we can vary the interpretability parameters. The Necessitation rule is as usual, and the P, , k encodes the essential behaviour of approximations. The following derivation shows how the P, , k rules implies the axiom version CHANGE THIS WORD: (A ^, k B)⊢ (A ^, k B)A^ B ⊢ (A ^, k B)⊢ (A^ B) → (A ^, k B). The use of a P-flavoured rule instead of an axiom is suggested since it better allocates flexibility in collecting all applications of Lemma <ref> and Corollary <ref> in our reasoning. To be on the safe side, we consider that  is presented using multi-sets so that we can allocate for applications of Lemma <ref> and Corollary <ref> after a P, , k rule is applied. Often we shall not mention all parameters of an axiom and, for example, just speak of the Pk rule instead of the P, , k rule. From Section <ref> onwards we shall put the logic to work. Rather than giving formal proofs as a sequence of turnstyle statements we will describe such formal proofs. In doing so, we will call the licence to use ^ E→^, k E provided by P, , k: k, and we will call [A possible strengthening of P, , k is: Γ,, ^ (A_i ^,k B_i | i<n+1)⊢ C⇒Γ, A_i^ B_i| i<n+1⊢ C [ P, , k^+ Γ,, ^ (A_i ^,k B_i | i<n+1)⊢ C; Γ, A_i^ B_i| i<n+1⊢ C ] putting the obvious conditions on occurrences of k and on . We will not consider this strengthening in the paper.] the licence to use E^, k F→E^ F: k. We observe that by taking the empty sequence we get various special cases of our axioms. For example, a special case of x,k would be ⊢k A → A. Furthermore, successive applications of ,k yield ⊢ A → A. Likewise, a special case of ,k gives us ⊢ A B → A k B. We observe that repeatedly applying ,k yields a generalisation of J_2 b: (A B ) ∧^⋆ y(B → C) → A C. Furthermore, we observe that J_1 follows from the classical J_1 principle since [ (A → B) → (A → B); → A B; → A B.; ] We also observe that, if we drop the superscripts in J_5, , we get the formula A B → A B that is equivalent over J_1, J_2 to the ordinary version A A of J_5 as we saw in Lemma <ref>. As a first and simple derivation in our system we have the following strengthening of the principle P_0 in   (recall that P_0 is the scheme A B →(A B)). Let and be arbitrary sequences of interpretations. Let consist of formulas of the form E^, k F→E^F and ^E→^, k E for some k that does not occur in , Γ, A, B, C. We have the following rule to be derivable over : Γ,, ^ (A B)⊢ CΓ, A^ B ⊢ C We assume Γ,, ^ (A B)⊢ C. By J_5, k, id we know that ⊢ A ^, k B → A B, so that we get Γ,, ^ (A ^, k B)⊢ C. An application of P, , k yields the required Γ, A^ B ⊢ C. § ARITHMETICAL SEMANTICS In order to set up arithmetical semantics, we would like to quantify over sensible translations. For example, a translation should at least map a minimum of arithmetic to provable sentences. However, how are we to separate the sensible from the non-sensical translations? In the first subsection we shall provide a construction to guarantee that we only use sensible translations. Then we shall define arithmetical semantics and prove a soundness theorem. §.§ A further modification of translations To do interpretability logic we need that we have sufficient coding possibilities in each theory we consider. Suppose we already have a theory with coding and translation k from the signature of T to the signature of T. We want to insure that T^[T,k] also has coding. 
To do this we simply have to produce an improved version of the modification trick we introduced in Section <ref>. We fix our base theory T of signature Θ. Our coding will always be implemented via an interpretation of in T. We also fix such an interpretation, say N. Let α^⋆ be a conjunction of T-axioms that implies both 𝔦𝔡_𝔄, where 𝔄 is the signature of arithmetic, and ()^N. We fix α^⋆ for the remainder of this section. We can now specify our standard modification. Define, for any translation k of the signature of T to the signature of T, the disjunctive interpretation n(k) that is k if (α^⋆)^k and 𝗂𝖽_Θ, otherwise. Over predicate logic, we have, by a simple induction, that, for any Θ-sentence φ, φ^𝗇(k) ↔ ( (α^⋆)^k ∧φ^k) ∨( (α^⋆)^k ∧φ). Since the needed induction to prove this is on the length of φ and since the proof can be uniformly constructed in p-time from φ, we have access to (<ref>) when reasoning inside . We observe that, for example, in the formula ∃ k _U ϕ^𝗌(k), the choice of whether 𝗌(k) will be equivalent to 𝗂𝖽_Θ or to k will depend on whether (α^⋆)^k holds under the _U. In contrast, in the expression ∃ k _U^[𝗌(k)]φ, the nature of U^[𝗌(k)] depends on whether (α^⋆)^k holds outside the box. Let us proceed by making some easy observations on 𝗇(k). In the following lemma, we start by observing that regardless of the nature of k, the derived 𝗌(k) provides us an interpretation of α^⋆ in T. Next, we see that any other interpretation of α^⋆ in T will also occur as an image of 𝗌. Thus, modulo T-provable equivalence, n(k) ranges precisely over all interpretations of α^⋆. We have, verifiably in , that, for all good translations k and j, * T⊢ (α^⋆)^ n(k), * for any formula ϕ we have T⊢ (α^⋆)^k → (ϕ^ n(k)↔ϕ^k), * T⊢ϕ^ n( id_Θ)↔ϕ i.o.w., n( id_Θ) is T-equivalent to id_Θ, * T⊢ϕ^ n(k∘ j)↔ϕ^ n(k)∘ n(j) i.o.w., n T-provably commutes with composition of translations. Let us prove the first claim. Reason in and let k be arbitrary. Now reason in T or more formally, under the _T. We distinguish cases. If (α^⋆)^k, then 𝗌(k) = k, and (α^⋆)^k holds by the case assumption. Otherwise, 𝗌(k) = 𝗂𝖽_Θ. The choice of T and α^⋆ (see beginning of the subsection) implies T⊢α^⋆, as required. The second claim is immediate by the induction we already discussed. The third and fourth claims are easy. We recall that where the lemma mentions the theory T^[ T, n(k) ] we really mean the theory axiomatised by TT n(k)x = ( ∃ p, φ x) (x= ⌜φ∧ (p=p)⌝ ∧ Tφ∧Tp,φ n(k)). In this formula we can expand φ n(k) as in (<ref>). ⊢∀ k _T^[ T , n(k) ]()^N. Reason in . Consider any translation k from the language of T to the language of T. Lemma <ref> tells us there is a proof in T of (α^⋆)^ n(k). Hence, we have proofs p_i of (α_i)^ s(k), for (standardly) finitely many T-axioms α_1, …, α_n. We would like to show that T^[T, n(k)] proves each of these α_i, since then T^[T , n(k)] proves ()^N. We take arbitrary α_i and put x = ⌜α_i ∧ (p_i=p_i)⌝. Clearly, this x witnesses (<ref>), the first two conjuncts of the body of (<ref>). Furthermore, T proves (α_i)^ n(k) because p_i is a proof of this formula in T. So, α_i ∧ (p_i=p_i) is an axiom of T^[T , n(k)], whence T^[T , n(k)] proves α_i. Recall that we work with the theories T that interpret and that we fix a designated interpretation N : T. We defined a variety of other theories of the form T^[x], but we did not specify what interpretation of we are supposed to bundle them with. The preceding lemma tells us that we can reuse N. 
Thus, we will take N as the designated interpretation of in the T^[x]. §.§ Arithmetical soundness As before, we fix our base theory T of signature Θ, the interpretation N of in T, the sentence α^⋆, and the mapping on translations n. Let us say that translations in the range of n are good translation. We call the formalised predicate of being good: good. As usual, the modal logics are related to arithmetic via realisations. Realisations map the propositional variables to sentences in the language of arithmetic. However, we now also have to deal with the interpretation sequences. Thus, our realisations for the arithmetical interpretation are pairs (σ, κ), where: * σ maps the propositional variables to T-sentences, and * κ maps the interpretation variables to good translations from the language of T to the language of T. We stipulate that the σ maps all but finitely many arguments to ⊤ and, likewise, that the κ maps all but finitely many arguments to n( id_Θ). The realisations are lifted to the arithmetical language in the obvious way by having them commute with the logical connectives and by taking: (^k_1, …, k_nA)^σ,κ := _T^[ ⟨κ(k_1) , …, κ(k_n)⟩ ]A^σ,κ, (A^k_1, …, k_nB)^σ,κ := (T+A^σ,κ) (T^[ ⟨κ(k_1), …, κ(k_n)⟩ ]+B^σ,κ). Here, in the context of T, we suppress the relativisation to N, it being the silent understanding that all coding is done inside N. We observe that the nested modalities make sense because of Lemma <ref>. A central point here is that we allow κ to be an internal variable. The transformation T ↦ T^[⟨κ(k_1),…κ(k_n)⟩] is, in essence, a transformation of indices of theories and can, thus, be represented internally. We note that the formula (^k_1, …, k_nA)^σ,κ will not be generally equivalent to (( A)^k_1, …, k_n)^σ,κ. A modal formula A will be arithmetically valid in T and N, w.r.t. our choice of α^⋆, iff, for all σ, we have T⊢∀κ A^σ,κ. We note that it is necessary that the quantifier over σ is external, since the substitutions are at the sentence level. However, the internal quantification over the κ makes sense since these program transformations of of indices for theories. Let T be a Δ_1^ b-axiomatisable theory containing via N. We have, relative to a fixed α^⋆, Γ⊢_ A for all σ, T⊢∀κ ( ⋀Γ^σ,κ→ A^σ,κ ). We use induction on proofs. The axiom ,k: ⊢,k A → A is directly obtained by Corollary <ref>.<ref>. and ,k are immediate from Lemma <ref>. The principles L_1 to J_4 are simple. The principle J_2b is immediate from the observation that (T+α) (T^[]+β) and _T^[](β→γ) imply (T+α) (T^[]+γ). The validity of J_5, follows from the observation that, ⊢∀ j,k∈ good (T^[U, k]+ con(T^[V, j]+B)) (T^[V, j]+B). This follows by the usual formalisation of Henkin's Theorem (see e.g., <cit.>). We now consider P^,,k which tells us that from Γ,, ^ (A ^, k B)⊢ C we may derive Γ, A^ B ⊢ C. Consequently, for the closure of arithmetical validity under this rule we assume that for all σ, T ⊢ (∀κ) ( ⋀Γ^σ, κ∧⋀^σ, κ∧( ^ (A ^,kB))^σ, κ→ C^σ, κ) and will need to prove for all σ, T ⊢∀κ ( ⋀Γ^σ, κ∧( A ^B)^σ, κ→ C^σ, κ) . To this end, we fix σ. We reason in T. We fix some κ' and assume ⋀Γ^σ, κ'∧( A ^B)^σ, κ' . We remind the reader that the modal interpretation variable k is supposed to be fresh. Let κ be as κ' with the sole exception that κ (k) : A^σ, κ'^κ'() B^σ, κ'κ (k) : A^σ, κ^κ() B^σ, κ. Note that we can that the existence of a desired choice for κ(k) is guaranteed by Assumption (<ref>). By Lemma <ref> we get ( A^σ, κ^κ(,k) B^σ, κ). 
Moreover, by Lemma <ref>(<ref>) and by Corollary <ref> we may conclude ⋀^σ, κ so that by (<ref>) we conclude C^σ, κ. Since k does not occur in C we may thus conclude C^σ, κ' which finishes the proof of (<ref>) and hence the soundness of P^,,k. We refer the reader to <cit.> for details. An important ingredient is given in Theorem <ref>. § GENERALISATIONS AND ALTERNATIVES The system leaves room for alternatives that we shall briefly discuss in this section. I suggest to drop the whole Section <ref> for the time being. The first subsection, <ref>, is clearly not well thought through. The second subsection, <ref>, is nonsense. §.§ Room for generalisations We already observed that our modal system does not directly allocate the potential extra flexibility that Lemma <ref> has over Theorem <ref> regarding different base theories. If we would like our logics to reflect the extra flexibility, we could work with sequences of pairs of formulas and translations instead of just sequences of translations. These formulas can then be added to the base theory. Similar, but even more general, is the following notion where assignments for the arithmetical interpretation are triples (σ, κ, τ), where: * σ maps the propositional variables to T-sentences, * κ maps the interpretation variables to good translations from Θ to Θ, where Θ is the signature of T, and * τ maps the interpretation variables to theories in the language of T (as given by a formula representing the axiom set). This is mucho obscuro. I guess we want finite extensions of T. Then, τ will produce Θ-sentences and we are saved from the dangerous swamp of intensionality. As before, we stipulate that the σ are ⊤ for all but finitely many arguments; the κ map to n( id_Θ) for all but finitely many arguments; and the τ map to T for all but finitely many arguments. In my proposal this would be ⊤. The assignments are lifted to the language of T in the obvious way as before, but now taking: (^k_1, …, k_nA)^σ,κ,τ := _T^[ ⟨τ(k_1), κ(k_1) ⟩, …, ⟨τ(k_n), κ(k_n) ⟩ ]A^σ,κ,τ, (A^k_1, …, k_nB)^σ,κ,τ := (T+A^σ,κ,τ) (T^[ ⟨τ(k_1), κ(k_1) ⟩, …, ⟨τ(k_n), κ(k_n) ⟩ ]+B^σ,κ,τ). This notion gives rise to the following notion of consequenceWhy do we define semantic consequence here? Don't we just want to state that Γ⊢_ A (...) holds? This is correct. What you want is the arithmetical completeness theorem, but that is beyond the horizon. and soundness of  with respect to this notion of consequence is readily proven. Γ_T A : for all σ, ⊢ (∀κ, τ) (⋀Γ^σ,κ,τ→ A^σ,κ,τ). Another way of possible generalisation is given by approximating both the interpreted and the interpreting theory. We observe that currently we only approximate the interpreted theory. Alternatively, we could label the binary modality by a pair of sequences x,y of translations with the intended reading of A ^x,y B being T^[ s(x)] + α T^[ s(y)] + β when α and β are the intended readings of A and B respectively. Such a generalisation would allow for a more sophisticated transitivity axiom: (A ^, B ) ∧ (B ^, C) → (A ^, C ). We leave these observations for future investigations. §.§ An alternative system The idea of approximating finite axiomatisability Let T be PA and let α := ⊤ and let β be con(). Then, over or over PA: αβ, but α^0β is equivalent to ^0, by the Second Incompleteness Theorem. I.o.w., this is all nonsense. Also, Luka refers to Section <ref>. We must also remove that. can be realised in a different way. 
Let A ^0 B stand for T + α + β when α and β are the arithmetical interpretation of A and B respectively. Likewise, ^0 A will stand for _α. Using this notation, we can formulate a new sound principle. Given an interpretation k, we have ⊢ k : A B →^0 (k : A ^0 B). Should we say T ⊢ instead of just ⊢? Similarly below. Assume k : A B. Clearly k : A ^0 B, by the assumption that all our base theories T extend . Since + B is finitely axiomatisable, k : A ^0 B is a ∃Σ_1^ b-statement. By ∃Σ_1^ b-completeness we get the required ^0 (k : A ^0 B). In 9.4 we refer to 6.2 as if the left hand side of the 6.2's statement was A ^0 B. So we probably need to reorganise this a bit, I guess the simplest way is to add ^0 to the LHS both in 6.1 and 6.2, and then have one more lemma stating that k: A B implies k: A ^0 B. ⊢ A B →^0 (A B). Using Lemma <ref> and noticing that from k : A ^0 B we can obtain A B just like with the rule J_5x, y of . The relation between the logic , i.e. reasoning with iterated approximations, and reasoning with ^0 and non-iterated approximations, is unknown. In particular, we do not know if both systems prove the same theorems in their common language or in the language without any interpretability variable at all. However, we do observe that both systems are sufficiently strong for the principles appearing in this paper. §.§.§ P^k We now consider P^k. Reason in . Suppose k^⋆:(T+φ) (T+ψ). Let k':=k^⋆φ id be the translation that acts like k^⋆ if φ and like id if φ. Let k:= s(k'). (This last move is only of an administrative nature, since, in the present context, k' and k will be the same in their behaviour as interpretations.) Then, we have both 2ex k:T T—: why does this matter, and why can we leave φ out of this claim? 2ex and k:(T+φ) (T+ψ). : Perhaps this (second claim) is obvious, but I wrote a proof (it makes sense to me now that I see the proof, but I wouldn't call it obvious): Let us first show that k' :(T+φ) (T+ψ). That is, let us deduce _ T+φ A^k' from the assumption _ T+ψ A. Unpacking A^k', we see that we are to show _ T+φ( φ∧ A^k^⋆) ∨( φ∧ A ). Since k^⋆ : (T+φ) (T+ψ), we have _ T+φ A^k^⋆ and thus _ T+φφ∧ A^k^⋆, as required. Now we can show that k :(T+φ) (T+ψ). Assume _ T+ψ A. Our goal is to prove _ T+φ A^k, i.e. _ T + φ( (α^⋆)^k'∧ A^k') ∨( (α^⋆)^k'∧ A ). We already have _ T+φ A^k', so it would suffice to show that _ T+φ (α^⋆)^k', i.e. _ T+φ( φ∧ (α^⋆)^k^⋆) ∨( φ∧ (α^⋆) ). Thus, we have to show _ T+φ (α^⋆)^k^⋆. We know that _T + ψα^⋆, and since k^⋆:(T+φ) (T+ψ), we must have _T + φ(α^⋆)^k^⋆. : [This is the end of my proof of that claim above] 2ex By Lemma <ref>, we have _T(k:T T^[k]).—: I would say _T (k:T + φ T^[T + φ, k]), since that doesn't rely on k:T T, and is also closer to what we actually need here. 2ex Also we have k:(T+φ)ψ, and, hence, _T(k:(T+φ)ψ). Combining, we find: _T(k:(T+φ) (T^[T + φ, k]+ψ)). 2ex By Lemma <ref>, we find that id:T≡ T^[k]. So, from _Tδ we will get _T^[k]δ. : (my alternative to the preceding two sentences) Lemma <ref> implies k : T + φ T →𝗂𝖽 : T^[ T + φ, k ] T. Thus 𝗂𝖽 : T^[ T + φ, k ] T. So, from _T δ we will get _T^[ T + φ, k ]δ. This tells us that usages of k are arithmetically sound. 2ex Moreover, Suppose m:(T+δ)(T^[k]+ε). It follows that: HERE WE SHOULD LOAD A NEW DIAGRAM PACKAGE TO DISPLAY THE ABOVE COMMENTED CODE. ALBERT, I GUESS YOU HAVE THAT ON YOUR COMPUTER So, m: (T+δ)(T+ε). : (again, my alternative for the preceding paragraph) We aim to show that the same holds for the usages of k, i.e. 
we will prove the following: T + δ T^[ T + φ, k ] + ε→ T + δ T + ε . Assume T + δ T^[ T + φ, k ] + ε. We use 𝗂𝖽 : T^[ T + φ, k ] T again. Clearly 𝗂𝖽 : T^[ T + φ, k ] + ε T + ε. Combining with our assumption, T + δ T + ε. : I'm just noting here that, as far as I can see, we didn't need k:T T for anything above. §.§ Joost's shorter proof invoking simply the earlier lemmas § ON PRINCIPLES IN In this section, we give arithmetical soundness proofs for some well-known principles that hold in all . For this purpose we will employ the system . To avoid repeating too much content from <cit.>, here we study only the following principles, but with proofs written in more detail compared to <cit.>. For other well-known principles we refer to <cit.>. W ⊢ A B→ A (B∧ A) M_0 ⊢ A B → ( A∧ C) (B∧ C) R ⊢ A B → (A C ) B ∧ C §.§ The principle W We start with the P-proof of the principle W, which we will later convert to an proof of W. ⊢W. We reason in P. Suppose A B. Then, (A B). Hence, (*) ( A → B), and, thus, (**) ( B → A). Moreover, from A B, we have A (B ∧ A) ∨ (B∧ A). So it is sufficient to show: B ∧ A B ∧ A. We have: [ B∧ A B ; (B ∧ B) ; B ∧ B ; B ∧ A . ] To prove arithmetical soundness of W we will essentially replicate the modal proof of W in . We first give a more formal version of the proof that uses the rule Px, y, k in the way we formally defined it. Afterwards we will give a more natural proof. The following holds: (A ^[k] B), (B ∧ A ^[k] B ∧^[k] B) → (B ∧ A B ∧^[k] B) ⊢_ B ∧ A B ∧ A. Reason in . Some simple uses of rules and axiom schemas of are left implicit. (A ^[k] B) assump. (B ∧ A ^[k] B ∧^[k] B) → (B ∧ A B ∧^[k] B) assump. ( A →^[k] B) by (<ref>), J_4k (^[k] B → A) by (<ref>) A ^[k] B by (<ref>), J_1 B ∧ A ^[k] B by (<ref>), J_1, J_2 B ∧ A ^[k] (B ∧^k B) by (<ref>), L_3k, J_1, J_2 B ∧ A ^[k] B ∧^k B by (<ref>), J_5k B ∧ A B ∧^[k] B by (<ref>), (<ref>) B ∧ A B ∧ A by (<ref>), (<ref>) The principle W is arithmetically valid. ⊢ A B → A B ∧ A. Reason in . By Pk and Lemma <ref> we get A B ⊢_ B ∧ A B ∧ A. (*) Now assume A B. Combining A B with (*) we get B ∧ A B ∧ A. (**) Clearly A B implies A (B ∧ A) ∨ (B ∧ A). (***) From (**) and (***) by J_3 we obtain A B ∧ A. Thus ⊢ A B → A B ∧ A, as required. The proof presented in Proposition <ref> (and Lemma <ref>) resembles the proof we gave earlier demonstrating that P⊢W. However, the resemblance is not exactly obvious; we had to turn our proof “inside-out” in order to use the rule Pk (resulting in the contrived statement of Lemma <ref>). This can be avoided by applying the rule Pk in a different way. When we want to conclude something starting from A ^x B, we introduce a fresh interpretation variable k and getWould “and the new rule now allows us to infer” be better than “get”? ^y (A ^x,k B) (for whichever y we find suitable). Now we have to be a bit more careful; we can't end the proof before we eliminate this k. We also have to be careful in how we use the rules k and k. Essentially, any proof in the new form must be formalisable in the system as it was defined earlier. Let us demonstrate this with the principle W. Reason in . Suppose that A B. By Pk we have for some k that (A ^[k] B). Hence, by J_4k, we have (*) ( A →^[k] B) and, so, (**) (^[k] B → A). Moreover, from A B, we have A (B∧ A) ∨ (B ∧ A). So it is sufficient to show B∧ A B∧ A. We have: [ B∧ A ^[k] B ; ^[k] (B ∧^[k] B) ; B ∧^[k] B ; B ∧ A . ] §.§ The principle M_0 Another good test case is the principle M_0, since both W⊬M_0 and M_0⊬W. 
Although we will later demonstrate the method for the principle R too and R⊢M_0, the proof for R is more complex. For this reason we include the principle M_0. We start with the P-proof of M_0: A B → ( A∧ C) (B∧ C). ⊢M_0. Reason in P. [ A B → (A B ) ; → ( A → B) ; → ( A ∧ C → B ∧ C) ; → A ∧ C B ∧ C ; → A ∧ C (B ∧ C) ; → A ∧ C B ∧ C ] Now we adapt this proof to fit . We will not write the more formal version of the proof (see the commentary in Subsection <ref>). P-style soundness proof of M_0 Reason in . [ A B → (A ^[k] B ) ; → ( A →^[k] B) ; → ( A ∧ C →^[k] B ∧ C) ; → A ∧ C ^[k] B ∧ C ; → A ∧ C ^[k] B ∧^[k] C ; → A ∧ C ^[k] (B ∧ C) ; → A ∧ C ^[k] B ∧ C. ; → A ∧ C B ∧ C ] §.§ The principle R As a final example, we will prove that the principle R: A B → (A C ) B ∧ C is arithmetically valid. Before we see that ⊢R, we first prove an auxiliary lemma. ⊢ (A C )∧ (A B ) → (B ∧ C ). We prove the -equivalent formula (A B ) ∧ (B → C ) → A C. But this is clear, as ⊢ (A B ) ∧ (B → C ) → A C and ⊢ C C. ⊢R. We reason in P. Suppose A B. It follows that (A B ). Using this we get: [ (A C ) (A C ) ∧ (A B ); (B ∧ C ) ; B ∧ C ] P-style soundness proof of R Reason in . We first show that (A ^[k] B ) ∧ ( A C ) →^[k] ( B ∧ C ). We show an equivalent claim (A ^[k] B ) ∧^[k] ( B → C ) → A C. Suppose that A ^[k] B and ^[k] (B → C ). Thus, A ^[k] C by J_2kb. By J_5k we get A C, as required. By necessitation, ((A ^[k] B ) ∧ ( A C ) →^[k] ( B ∧ C )). We now turn to the main proof. Suppose A B. Then, for some k, we have (A ^[k] B ) and, thus, [ ( A C ) (A C ) ∧ (A ^[k] B ) ; ^[k](B ∧ C) ; B ∧ C. ] § TWO SERIES OF PRINCIPLES In <cit.> two series of interpretability principles are presented. One series is called the broad series, denoted R^n (for n∈ω). The other series is called the slim hierarchy, denoted R_n (for n∈ω). The latter is actually a hierarchy of principles of increasing logical strength. Both series of principles are proven to be arithmetically sound in any reasonable arithmetical theory. The methods used to prove this soundness in <cit.> involve definable cuts and in essence can be carried out in the system called CuL. In the next two sections we will see how both series admit a soundness proof based on the method of finite approximations of target theories as embodied in our logic . We will also use this opportunity to state the results concerning modal semantics we obtained in collaboration with Jan Mas Rovira, which concern the two series. The proofs of these results can be found in his Master's thesis <cit.>. §.§ Arithmetical soundness of the slim hierarchy As already mentioned, the slim hierarchy 𝖱_n defined in <cit.> is actually a hierarchy. Thus, to prove arithmetical soundness it suffices to study a cofinal sub-series. In our case we will study the certain sub-series 𝖱_n. Let us define the original sequence first; even though we will use the sub-series for the most part. Let a_i, b_i, c_i and e_i denote different propositional variables, for all i∈ω. We define a series of principles as follows. 
[ R_0 := a_0 b_0 → (a_0 c_0) b_0 ∧ c_0; ; R_2n+1 := R_2n [ (a_n c_n) / (a_n c_n) ∧ (e_n+1 a_n+1);; b_n ∧ c_n/b_n ∧ c_n ∧ (e_n+1 a_n+1)]; ; R_2n+2 := R_2n+1 [b_n/ b_n ∧ (a_n+1 b_n+1);; a_n+1 / (a_n+1 c_n+1);; (e_n+1 a_n+1)/ (e_n+1 a_n+1) ∧ (e_n+1 b_n+1∧ c_n+1) ]; ] We proceed with defining the sub-series 𝖱_n (see <cit.>, below Lemma 3.1) where the 𝖱_n hierarchy exhausts the even entries of the original 𝖱_n hierarchy: 𝖷_0 := A_0 B_0 𝖷_n + 1 := A_n + 1 B_n + 1∧ (𝖷_n) 𝖸_0 := (A_0 C_0) 𝖸_n + 1 := (A_n + 1 C_n + 1) ∧ (E_n + 1𝖸_n) 𝖹_0 := B_0 ∧ C_0 𝖹_n + 1 := B_n + 1∧ (𝖷_n) ∧ C_n + 1∧ (E_n+1 A_n) ∧ (E_n + 1𝖹_n) 𝖱_n := 𝖷_n →𝖸_n 𝖹_n. For convenience, define 𝖷_-1 = ⊤. With this we have 𝖷_n ≡_ A_n B_n ∧ (𝖷_n - 1) for all n ∈ω. The first two schemas are: [ 𝖱_0 := A_0 B_0 → (A_0 C_0) B_0 ∧ C_0;; 𝖱_1 := A_1 B_1 ∧ (A_0 B_0) → (A_1 C_1) ∧ (E_1 (A_0 C_0)); B_1 ∧ (A_0 B_0) ∧ C_1 ∧ (E_1 A_0) ∧ (E_1 B_0 ∧ C_0).; ] In the proof that ⊢𝖱_n (Theorem <ref>) we use the following lemma. For all n ∈ω, and all interpretation variables k: ⊢ (A_n ^k B_n ∧𝖷_n - 1)∧𝖸_n →^k 𝖹_n . Let n = 0 and fix k. We are to prove ⊢ (A_0 ^k B_0 ∧⊤) ∧(A_0 C_0) →^k (B_0 ∧ C_0). Equivalently, ⊢ (A_0 ^k B_0) ∧^k(B_0 → C_0) → A_0 C_0. Assume (A_0 ^k B_0) ∧^k(B_0 → C_0). By J_2k b, this yields A_0 ^k C_0, whence by J_5k, A_0 C_0. Let us now prove the claim for n + 1. Fix k. Unpacking, we are to show that: ⊢ (A_n + 1^k B_n + 1∧𝖷_n) ∧ (A_n + 1 C_n + 1) ∧ (E_n + 1𝖸_n) →^k( B_n + 1∧𝖷_n∧ C_n + 1∧ (E_n+1 A_n) ∧ (E_n + 1𝖹_n) ). Equivalently, we are to show that: ⊢ (A_n + 1^k B_n + 1∧𝖷_n) ∧ (E_n + 1𝖸_n) ∧^k( (B_n + 1∧𝖷_n) → C_n + 1∨ (E_n+1 A_n) ∨ (E_n + 1𝖹_n)) → A_n + 1 C_n + 1. Assume the conjunction on the left-hand side of (<ref>). The first and the third conjunct imply A_n + 1^k B_n + 1∧𝖷_n∧( C_n + 1∨ (E_n+1 A_n) ∨ (E_n + 1𝖹_n) ), whence by weakening, A_n + 1^k 𝖷_n∧( C_n + 1∨ (E_n+1 A_n) ∨ (E_n + 1𝖹_n) ). We now aim to get A_n + 1^k C_n + 1. To this end, we set out to eliminate the last two disjuncts within (<ref>). From E_n + 1𝖸_n (the second conjunct on the left-hand side of (<ref>)) we have E_n + 1 (A_n C_n), thus E_n + 1 A_n, whence ^k (E_n + 1 A_n) by the generalised P_0 (Lemma <ref>). We now combine ^k (E_n + 1 A_n) with (<ref>), simplify and weaken to obtain A_n + 1^k 𝖷_n∧ ( C_n + 1∨ (E_n + 1𝖹_n)). Thus, we have eliminated the second disjunct within (<ref>), and we are left to eliminate (E_n + 1𝖹_n). We will now use the second conjunct on the left-hand side of (<ref>), E_n + 1𝖸_n, again. We wish to apply the rule P, k, j, so assume ^k(E_n + 1^j 𝖸_n). Combining ^k(E_n + 1^j 𝖸_n) with (<ref>) and unpacking 𝖷_n, we obtain A_n + 1^k (A_n B_n ∧𝖷_n - 1) ∧ (E_n + 1^j 𝖸_n) ∧ ( C_n + 1∨ (E_n + 1𝖹_n)). Reason under ^k. We wish to apply the rule P, j, ℓ with A_n B_n ∧𝖷_n - 1, so assume ^j( A_n ^ℓ B_n ∧𝖷_n - 1 ). Combining ^j( A_n ^ℓ B_n ∧𝖷_n - 1 ) with E_n + 1^j 𝖸_n we obtain (still under the ^k) that E_n + 1^j ( A_n ^ℓ B_n ∧𝖷_n - 1 ) ∧𝖸_n. Applying this to (<ref>) we may conclude Do you have a preference here, e.g. all same size, large only for some nesting level etc.? I am for a tasteful growth in size for the more outer brackets. A_n + 1^k ( E_n + 1^j ( A_n ^ℓ B_n ∧ (𝖷_n - 1) ) ∧𝖸_n ) ∧ ( C_n + 1∨ (E_n + 1𝖹_n)). The induction hypothesis allows us to replace A_n ^ℓ B_n ∧ (𝖷_n - 1 ) ∧𝖸_n with ^ℓ (𝖹_n). A_n + 1^k (E_n + 1^j ^ℓ (𝖹_n)) ∧ ( C_n + 1∨ (E_n + 1𝖹_n)). By J_5j, ℓ, A_n + 1^k (E_n + 1^ℓ𝖹_n) ∧ ( C_n + 1∨ (E_n + 1𝖹_n)). By our last application of P, j, ℓ and ℓ, we can substitute for ^ℓ: A_n + 1^k (E_n + 1𝖹_n) ∧ ( C_n + 1∨ (E_n + 1𝖹_n)). 
Finally, we can simplify, weaken and apply J_5k, to obtain A_n + 1 C_n + 1. We can now prove soundness for the slim hierarchy. It suffices to do this for the cofinal sub-hierarchy 𝖱_n. For all n ∈ω, ⊢𝖱_n. Let n ∈ω be arbitrary. Assume (A_n ^k B_n ∧ (𝖷_n - 1)). Clearly 𝖸_n (A_n ^k B_n ∧𝖷_n - 1) ∧𝖸_n. Now Lemma <ref> implies 𝖸_n ^k𝖹_n, whence by J_5,k, 𝖸_n ^k 𝖹_n. By the rule Pk, we can replace our assumption ^k(A_n B_n ∧𝖷_n - 1) with 𝖷_n. Furthermore, by the same application of Pk, and by k, we have 𝖸_n 𝖹_n. Thus, X_n →𝖸_n 𝖹_n, i.e. 𝖱_n. Finally, as we announced earlier, we quote the result obtained in collaboration with Jan Mas Rovira. To state the generalised frame condition for the principle R_1 (which lies strictly between 𝖱_0 and 𝖱_1) we let R^-1[E] := {x : (∃ y∈ E) xRy}, and R_x^-1[E] := R^-1[E]∩ R[x]. The frame condition for the principle R_1 with respect to generalised Veltman semantics is the following condition: ∀w,x,u,𝔹,ℂ,𝔼 (wRxRuS_w𝔹, ℂ∈𝒞(x,u) ⇒ (∃𝔹'⊆𝔹)(xS_w𝔹',R[𝔹']⊆ℂ,(∀v∈𝔹')(∀c∈ℂ) (vRcS_xR_x^-1[𝔼]⇒(∃𝔼'⊆𝔼)cS_v𝔼'))). Please see <cit.> for the proof (including a formalisation in Agda). §.§ Arithmetical soundness of the broad series In order to define the second series we first define a series of auxiliary formulas. For any n≥ 1 we define the schemata _n as follows. Old version, not sure why we wrote it like that: _n+2 := ((D_n+1 D_n+2)∧_n+1) _1 := (D_1 C), _n+1 := ((D_n D_n+1)∧_n). Now, for n≥ 0 we define the schemata R^n as follows. R^0 := A B→(A C) B∧ C, R^n+1 : = A B→( _n+1∧(D_n+1 A)) B∧ C. As an illustration we present the first three principles. [ R^0 := A B → (A C) B ∧ C;; R^1 := A B →(D_1 C) ∧ (D_1 A) B ∧ C;; R^2 := A B →[ (D_1 D_2) ∧(D_1 C)] ∧ (D_2 A) B ∧ C.; ] Given finite sequences[A finite sequence of pairs j, C where j is an arbitrary interpretation and C an arbitrary formula such that j : T + C …. If this notation stays, we might want to consider a name for these iterated-approximation sequences.] x and y and, given an interpretation k, we have ⊢ k : A ^x B →^y (k : A ^x, kA B). When working with this series it is convenient to also have the following schemas: 𝖵_1 := (D_1 C), 𝖵_n + 1 := (D_n D_n + 1→ V_n) . Alternatively, we could have defined 𝖵_n := 𝖴_n for n≥ 1. For all n ∈ω∖{0}, and all finite sequences consisting of interpretation variables: ⊢ D_n ^ C →𝖵_n. Let n = 1 and be arbitrary. We want to prove that ⊢ D_1 ^ C → (D_1 C). This is an instance of the generalised P_0 schema as we stated in Lemma <ref>. Let us now prove the claim for n + 1. Thus, we fix an arbitrary sequence of interpretations . We are to show that ⊢ D_n + 1^ C →(D_n D_n + 1→𝖵_n). Thus, reasoning in , we assume D_n + 1^ C. We now wish to apply the rule Pk with this formula, where k is an arbitrary variable not used in its left or right side or . So, assume (D_n + 1^, k C). Reason under a box. Assume D_n D_n + 1. Now D_n D_n + 1 and D_n + 1^, k C imply D_n ^, k C. By the necessitated induction hypothesis, this implies 𝖵_n. Thus, we find (D_n D_n + 1→𝖵_n), as required. For all interpretation variables k we have the following: ⊢𝖴_ n ∧ (D_ n A) ∧ (A ^k B) ^k B ∧ C. It is clear that the claim to be proved follows by necessitation, J_1, and J_5,k from the following: ⊢𝖴_ n ∧ (D_ n A) ∧ (A ^k B) →^k (B ∧ C). This formula is equivalent to (D_ n A) ∧ (A ^k B) ∧^k (B → C) →𝖵_ n . Assuming the left-hand side, we get D_n ^k C, whence V_n by Lemma <ref>. For all n ∈ω, ⊢𝖱^n. Case n = 0 is clear. Let n > 0 be arbitrary and let us prove 𝖱^n. Reason in . Assume A B. We wish to apply the rule Pk here. 
So, assume (A ^k B). We have: 𝖴_ n ∧ (D_ n A) 𝖴_ n ∧ (D_ n A) ∧ (A ^k B). Lemma <ref> and the rule J_2 imply 𝖴_ n ∧ (D_ n A) ^k B ∧ C, and by k, 𝖴_ n ∧ (D_ n A) B ∧ C. So we are done. §.§.§ A proof using ^0 (𝖲^1_2) [Since Lemma <ref> is false, I guess we must remove this subsection.] Here we present an alternative proof which avoids iterated approximations, and instead uses the idea exploited in Lemma <ref> and Lemma <ref>. The proof is essentially the same, but slightly shorter. We note here that we also wrote an alternative proof for the series R_n, but we omit it in this paper as the proofs are very similar in that case too. For all n ∈ω∖{0}: ⊢ D_n ^0 C →𝖵_n. Let n = 1. We are to prove ⊢ D_1 ^0 C → (D_1 C). This is an instance of the generalised P_0 schema (Lemma <ref>). [As mentioned near 6.2, this isn't exactly true (that this formula is an instance of ...), but we can restructure 6.1 and 6.2. Similarly when we refer to 6.1.] Let us now prove the claim for n + 1. We are to show that ⊢ D_n + 1^0 C →(D_n D_n + 1→𝖵_n). Assume D_n + 1^0 C. By Lemma <ref>, we have ^0(D_n + 1^0 C). Reason under a box. Assume D_n D_n + 1. Now D_n D_n + 1 and D_n + 1^0 C imply D_n ^0 C. By the induction hypothesis, this implies 𝖵_n, as required. Given an interpretation variable k, [I think it might be a bit unusual that we refer to the interpretation variables here, without previously mentioning variables are a part of the alternative system. We can remove them, but we then might also have to remove reference to the rules from the sequences version of AtL (J5k). The ending is a bit unclear too. I suggest to either (1) omit the proof like in 9.6 and say it's analogous, or (2) remove int. variables, add ^0, and say the J* rules etc. are analogous.] ⊢𝖴_ n ∧ (D_ n A) ∧ (A ^k B) ^k B ∧ C. It is clear that the claim to be proved follows by necessitation, J_1, and J_5,k from the following: ⊢𝖴_ n ∧ (D_ n A) ∧ (A ^k B) →^k (B ∧ C). This formula is equivalent to (D_ n A) ∧ (A ^k B) ∧^k (B → C) →𝖵_ n . On the left-hand side we get D_n ^k C. In particular, D_n ^0 C. Now 𝖵_n follows from Lemma <ref>. For all n ∈ω, ⊢𝖱^n. The proof is exactly the same as the proof of Theorem <ref>. Finally, we state the generalised frame condition for the series R^n, obtained in joint work with Jan Mas Rovira. Let n ∈ω be arbitrary. We have 𝔉⊩ R^n if and only if for all w, x_0, …, x_n-1, y, z, 𝔸, 𝔹, ℂ, 𝔻_0, …, 𝔻_n-1 we have the following: wRx_n-1R…Rx_0RyRz, (∀u ∈R[w] ∩𝔸)(∃V) uS_wV⊆𝔹, (∀u ∈R[x_n-1] ∩𝔻_n-1) (∃V) uS_x_n-1V⊆𝔸, (∀i∈{1,…,n-2})(∀u ∈R[x_i] ∩𝔻_i)(∃V) uS_x_iV⊆𝔻_i+1, (∀V ∈S_y[z]) V∩ℂ≠0, z∈𝔻_0 ⇒ (∃V⊆𝔹)(x_n-1S_wV & R[V]⊆ℂ). Please see <cit.> for the proof (including a formalisation in Agda).
http://arxiv.org/abs/2406.18694v1
20240626185713
Analytic solution to the nonlinear generation of squeezed states in a thermal bath
[ "Paul R. B. Hughes", "Marc M. Dignam" ]
quant-ph
[ "quant-ph" ]
p.hughes@queensu.ca Department of Physics, Engineering Physics and Astronomy, Queen's University, Kingston, ON K7L 3N6, Canada § ABSTRACT We model squeezed state generation in a lossy optical cavity in the presence of a thermal bath using the Lindblad master equation. We show that the exact solution is a squeezed thermal state, where thermal photons arise both from loss and from the thermal bath. We derive an exact, closed-form solution for the evolution of the quadrature uncertainty arising from pulsed degenerate spontaneous parametric down conversion in the cavity. We apply this solution under different pump conditions and show in detail how the thermal environment reduces quadrature squeezing as well as the second order coherence function. Analytic solution to the nonlinear generation of squeezed states in a thermal bath Marc M. Dignam July 1, 2024 ================================================================================== Introduction. Nonlinear optical processes such as spontaneous parametric down conversion (SPDC) and spontaneous four wave mixing are often used to generate nonclassical states of light, such as photon pairs, single-mode quadrature squeezed states, multimode squeezed states, and entangled optical modes <cit.>. Photon pairs can be used as a heralded single-photon source or as entangled two-photon states <cit.>. Quadrature squeezed states can be used to reduce the uncertainty in interferometric measurements <cit.> or to create continuous variable entanglement <cit.>. Because the nonlinear interactions are generally quite weak, one usually requires a resonator to form an optical parametric oscillator (OPO) to enhance the process. Some common resonators are microring resonators <cit.> and Fabry-Perot cavities <cit.>. The use of a resonator has been shown to lower the uncertainty in one quadrature below vacuum fluctuations at the expense of the other, but with a steady-state squeezing limit of 3 dB in the resonator <cit.>. Larger squeezing can be obtained for light coupled out of the resonator <cit.>, but within it, alternative methods are required to overcome the 3 dB limit. These include pulsed excitation <cit.>, quantum feedback <cit.>, and dissipation <cit.>. The steady-state quantum fluctuations of the state in an OPO can be derived using the Langevin equations for a stochastic process <cit.>. When loss and detuning are not considered in these systems, the signal field in the OPO is a squeezed vacuum state. Other groups have considered the effects of detuning from resonance on squeezing in the OPO in the steady state <cit.>. The effects of loss and a thermal bath on the generation and evolution of the density operator of the light in an OPO can be modeled using the Lindblad master equation (LME). Recently, it was shown that when there is no thermal bath, the exact solution to the LME is a squeezed thermal state (STS) <cit.>. At optical or near-infrared frequencies, the thermal effects of an environment on the generation and nature of the squeezed states are negligible at or below room temperature when employing SPDC in a resonator. At lower frequencies of a few tens of terahertz or less, thermal noise can have a significant effect on the generation, evolution, and final state. In particular, quadrature squeezing can be greatly reduced unless one cools the system to millikelvin temperatures <cit.>.
It is therefore important to be able to accurately and efficiently model the evolution dynamics in such systems and to quantify the effects of temperature on the final squeezed state for CW and pulsed pumping configurations. To this end, in this work, we derive the exact solution to the LME for SPDC in a lossy OPO coupled to a thermal bath and show that the density operator is that of a STS. With this exact solution, we are able to derive a closed-form solution for the evolution of the quadrature uncertainty for an arbitrary, un-chirped pump pulse and to examine the evolution of the squeezing parameter, the thermal photon number and the second-order quantum coherence function. The paper is organized as follows. We first outline the theory behind the generation of the signal field in a resonator. We show that the solution to the LME is a STS, where the thermal photon number and squeezing parameter evolution are described by three coupled first-order differential equations. Using these equations, we derive closed-form solutions for the quadrature uncertainties, which to the best of the authors' knowledge have never been derived previously. Next, we examine the transient and steady-state properties of the system excited by a constant-amplitude pump pulse, presenting an exact analytic solution for the uncertainties. Using the second order quantum coherence and the quadrature uncertainty, we investigate the nonclassicality of the light and discuss the threshold where the quadrature is squeezed below vacuum noise. Finally, we present the results for a Gaussian pump pulse and examine the relationship between the pulse amplitude and the quadrature squeezing as a function of the bath temperature. Theory. We consider the generation of a squeezed state in a single mode of a resonant cavity with frequency ω. Shown schematically in <ref>, the system consists of a resonant cavity that is coupled to a thermal bath of photons and is excited by a coherent optical pump. The pump operates at a frequency ω_p = 2ω and generates signal photons in the resonator through SPDC. The pump is a coherent state with time-dependent coherent state amplitude α(t) = α_0(t) e^-iω_p t, where α_0(t) is the pump envelope. Thus, we treat the pump classically in the undepleted pump approximation. When the interaction with the bath is neglected, the system Hamiltonian is given by <cit.> H = ħωb^† b + α(t)γb^†^2 + α^*(t) γ^* b^2, where b^† (b) is the creation (annihilation) operator of photons in the cavity and γ = ħω_p χ_eff^(2)/n_eff^2 is the coupling coefficient of the pump field to the signal field for an effective second order nonlinear susceptibility χ_eff^(2) and refractive index n_eff in the cavity. The signal mode in the cavity is coupled to a thermal bath at temperature T_b, which has a mean photon number, n_b = (exp(ħω/kT_b) - 1)^-1 at the signal frequency. The density operator of the cavity ρ(t) evolves according to the LME <cit.> d/dtρ (t) = -i/ħ[H, ρ(t)] + Γ (n_b+1) D[b](ρ(t)) + Γ n_b D[b^†](ρ(t)), where Γ is the power decay constant of the cavity photons into the bath, while D[F](ρ) ≡ Fρ F^† - 1/2{F^† F, ρ} is the dissipator, which accounts for the two-way coupling between bath and cavity. In a previous work, it was shown that for the special case where the bath is at zero temperature (n_b=0), the exact solution to the LME is a STS <cit.>.
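For readers who wish to check the analytic results that follow against a brute-force computation, the master equation above can be written down directly in a truncated Fock basis. The short Python sketch below is ours and is not part of the paper's derivation; the truncation dimension, the choice of natural units (ħ = 1) and the function names are illustrative assumptions.

import numpy as np

# Truncated Fock space of dimension N_dim (illustrative truncation, not from the paper)
N_dim = 40
b = np.diag(np.sqrt(np.arange(1, N_dim)), k=1)   # annihilation operator, <n-1|b|n> = sqrt(n)
bd = b.conj().T                                  # creation operator

def dissipator(F, rho):
    # Lindblad dissipator D[F](rho) = F rho F^dag - (1/2){F^dag F, rho}
    FdF = F.conj().T @ F
    return F @ rho @ F.conj().T - 0.5 * (FdF @ rho + rho @ FdF)

def lme_rhs(t, rho, omega, gamma_alpha, Gamma, n_b, hbar=1.0):
    # gamma_alpha(t) returns the product gamma * alpha(t) appearing in the Hamiltonian
    H = (hbar * omega * (bd @ b)
         + gamma_alpha(t) * (bd @ bd)
         + np.conj(gamma_alpha(t)) * (b @ b))
    unitary = -1j / hbar * (H @ rho - rho @ H)
    return (unitary
            + Gamma * (n_b + 1) * dissipator(b, rho)
            + Gamma * n_b * dissipator(bd, rho))

Feeding this right-hand side to any standard ODE integrator (with the density matrix flattened to a vector) should reproduce, up to truncation error, the evolution obtained analytically below.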
In this work, we prove that at non-zero temperatures, the solution is still a STS, but that the evolution of the squeezing and thermal temperature depends, in general, on the bath temperature. We find that as long as the initial state is the vacuum, a thermal state, or a STS, the exact solution to the above LME is the time-dependent STS: ρ(t) = S(ξ(t))ρ_T(n_th(t))S^†(ξ(t)), where S(ξ) = exp[ 1/2(ξ^* b^2 - ξb^†^2)] is the squeezing operator, with the time-dependent, complex squeezing factor ξ(t) = u(t)e^iϕ(t), and ρ_T(n_th) = 1/1 + n_th( n_th/1 + n_th)^b^† b is a thermal state, with a time-dependent thermal population, n_th(t). To show that this is the exact solution, we write the density operator in the form ρ(t) = S(ξ) ρ_T^1/2(n_th) O(t) ρ_T^1/2(n_th) S^†(ξ). We then need to prove that the operator, O(t) = ρ_T^-1/2(n_th)S^†(ξ)ρ(t)S(ξ)ρ_T^-1/2(n_th) is simply the identity operator for all time. In the supplementary material, we show that this is indeed the case as long as the thermal photon number, the squeezing amplitude, and phase evolve according to the following three coupled first-order differential equations: dn_th/dt = Γ[n_b cosh(2u) + sinh^2(u) - n_th], du/dt = -i/ħ(γ^* α^* e^iϕ - γα e^-iϕ) - Γ/2sinh(2u) 2n_b + 1/2n_th + 1, dϕ/dt = -2ω + 2/ħcosh(2u)/sinh(2u)(γ^* α^* e^iϕ + γα e^-iϕ). <ref>, are the dynamic equations valid for any initial STS and for any α(t). In all that follows, we restrict ourselves to unchirped pump pulses, such that α_0(t)γ = |α_0(t)γ|e^iθ, where θ is a time-independent phase and we assume that the initial state is an unsqueezed thermal state, such that u(0) = 0. To avoid a divergence on the left hand side of <ref> that arises at t=0 for an unsqueezed state, we impose the initial condition on ϕ that (γ^* α^*(0) e^iϕ(0) + γα (0)e^-iϕ(0)) = 0, or ϕ(0) = θ + π/2. Using this in <ref>, we see that for all time (γ^* α^*(t) e^iϕ(t) + γα (t)e^-iϕ(t)) = 0 and ϕ (t)=θ +π/2-2ω t. We now define the pump function, g(t) ≡ 4|α_0(t) γ|/ħΓ, which is the ratio of the pumping rate to the loss rate, such that g(t)=1 is the critical pump rate at which the injection of photons is exactly balanced by the loss. Using this definition and the above initial condition, the equations of motion become dn_th/dt = Γ[n_b cosh(2u) + sinh^2(u) - n_th], du(t)/dt = Γ g(t)/2 - Γsinh(2u)/22n_b + 1/2n_th + 1, with ϕ (t)=θ + π/2-2ω t . The above dynamic equations contain an explicit dependence on the temperature of the bath; however, in the limit that n_b = 0, they reduce to what we obtained in our previous T_b=0 work <cit.>. Additionally, in the simple case that there is no pump present, but the initial state is a thermal state that is not at the bath temperature, these equations show that the system remains a thermal state with the thermal photon number evolving as n_th(t) = n_b + (n_th(0) - n_b)e^-Γ t, a result that has been shown previously using the LME <cit.>. We can see from <ref> that the bath population increases the decay rate of the squeezing factor u(t). However, this contribution is not present for the typical initial state in which the system is in equilibrium with the environment. To see this, let n_th^0 be the thermal population that would arise if T_b=0. We define it using the equation 2n_th + 1 = (2n_b + 1)(2n_th^0 + 1). Using this in <ref>, we find that dn_th^0/dt = Γ[sinh^2(u) - n_th^0], which is exactly the evolution of the thermal photon number when the bath temperature is zero. We can also rewrite <ref> using n_th^0 as du(t)/dt = Γ g(t)/2 - Γsinh(2u)/2(2n_th^0 + 1). 
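The reduced equations of motion for n_th and u above are straightforward to integrate numerically. The following sketch is our own; the parameter values are arbitrary illustrative choices, with time measured in units of 1/Γ.

import numpy as np
from scipy.integrate import solve_ivp

def evolve_sts(g, Gamma, n_b, t_max, n_th_init=None):
    # Integrate dn_th/dt and du/dt for a pump function g(t), starting from an
    # unsqueezed thermal state (by default in equilibrium with the bath).
    if n_th_init is None:
        n_th_init = n_b

    def rhs(t, y):
        n_th, u = y
        dn_th = Gamma * (n_b * np.cosh(2 * u) + np.sinh(u) ** 2 - n_th)
        du = (0.5 * Gamma * g(t)
              - 0.5 * Gamma * np.sinh(2 * u) * (2 * n_b + 1) / (2 * n_th + 1))
        return [dn_th, du]

    sol = solve_ivp(rhs, (0.0, t_max), [n_th_init, 0.0],
                    t_eval=np.linspace(0.0, t_max, 400), rtol=1e-9, atol=1e-12)
    return sol.t, sol.y[0], sol.y[1]

# Constant pump g0 = 0.8 below the critical strength (illustrative values)
Gamma, n_b, g0 = 1.0, 0.2, 0.8
t, n_th, u = evolve_sts(lambda t: g0, Gamma, n_b, t_max=30.0)
print(u[-1], 0.5 * np.arctanh(g0))

For a constant pump, the printed squeezing factor settles at tanh^-1(g_0)/2, consistent with the steady-state result derived in the next section, and it does so independently of n_b when n_th(0) = n_b, as discussed above.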
The bath population is still implicitly present in these equations, since from <ref>, the initial value of n_th^0 is given by n_th^0(0) = n_th(0) - n_b/2n_b + 1. However, in the case where the cavity begins in a thermal state in equilibrium with the bath (n_th(0) = n_b), the evolution of n_th^0 and thus squeezing factor u are independent of the bath temperature, while the actual thermal population only depends on n_b through the prefactor of 2n_b + 1 (see <ref>) [We can see from <ref> that these properties arise not just for an initial state in equilibrium with the thermal bath, but any initial thermal population that satisfies n_th(0) = n_b + a(2n_b + 1). The dynamics of the squeezing amplitude will then be independent of the bath, but the decay will scale by the arbitrary factor a.]. We will examine the dependence of the thermal population, the total population, and the squeezing amplitude on the bath temperature and initial thermal population in more detail later in this letter. We now define the two quadrature operators, X = b^† e^-iβ(t) + b e^iβ(t), Y = -i(b^† e^-iβ(t) - b e^iβ(t)), where β (t) is the local oscillator phase. For β(t) ≡ω t, the system is squeezed in X and antisqueezed in Y. For a STS, the uncertainties in these quadratures can be shown to be given by Δ X^2 = (2n_th + 1)e^-2u and Δ Y^2 = (2n_th + 1)e^2u <cit.>, which allows us to determine the evolution of the squeezing from the evolution of n_th and u. Alternatively, we can derive differential equations for the quadrature uncertainties. Taking the time derivative of Δ X^2 and using <ref> and the hyperbolic identities, we find that d/dtΔ X^2 = [2dn_th/dt - 2(2n_th + 1) du/dt]e^-2u = Γ[ (2n_b + 1)( cosh(2u) + sinh(2u))e^-2u - (1 + g(t))(2n_th + 1)e^-2u], which can also be written as d/dtΔ X^2 = Γ[ (2n_b + 1) - (1 + g(t))Δ X^2 ]. Similarly, d/dtΔ Y^2 = Γ[ (2n_b + 1) - (1 - g(t))Δ Y^2 ]. These equations show that the squeezing dynamics depend only on the pumping strength and the thermal bath population. Furthermore, the evaluation of the quadrature squeezing only requires the solution of a single first-order differential equation, which is directly solvable using standard techniques. To the authors' knowledge, this is the first time a closed-form solution has been derived for the nonlinear generation and evolution of the quadrature uncertainty in a lossy cavity. Constant Pump and Steady State. In this section, we examine the early-time evolution and steady-state solution of the system when it is excited by a pump that has a constant strength, g_0 that is turned on at t=0. Before analysing the quadrature uncertainties directly, we return to the question of the dependence of the STS parameters on the bath temperature and initial state of the system. Recall that for the special case where n_th(0)=n_b, the squeezing factor u(t) is independent of n_b. As we now show, when the initial state is a thermal state at a different temperature from the environment, u(t) is still only weakly dependent on both the initial thermal population and the bath temperature. In <ref>, we plot the total photon number and squeezing amplitude as a function of time for a cavity pumped by a continuous pulse excitation. It is prepared in the same initial state each time but coupled to environments at different bath temperatures. From <ref>(a), we see that when n_b is increased, the photon number increases by much more than simply n_b. 
Meanwhile, <ref>(b) shows that u(t) exhibits a small dependence on n_b at early times and that even this dependence disappears as t→∞. We can determine the steady-state characteristics of the continuous wave pump by setting the derivatives in <ref> to zero. For the steady-state squeezing amplitude, we obtain u^ss = 1/2tanh^-1(g_0), which is independent of the environment and only exists for pumping below the critical pump strength, g_0 = 1. The steady-state thermal and total populations are given by n_th^ss = n_b + sinh^2(u)(2n_b + 1) = 1/2( 2n_b + 1/√(1 - g_0^2) - 1), and n^ss = 2n_b + g_0^2/2(1 - g_0^2), where, to determine the total population, we have used the relation for a STS that n = n_thcosh(2u) + sinh^2(u) <cit.>. Thus, as discussed earlier, the thermal environment adds many more photons to the system than just n_b, but it does nothing to the steady state squeezing factor. We now consider the evolution of quadrature uncertainties. The dynamic equations, <ref>, can be solved exactly for an arbitrary time-dependent pump g(t), but we first consider solutions for a constant-pump excitation, where g(t) = g_0 ≠ 1 for t>0. The exact solutions are Δ X^2(t) = 2n_b + 1/1 + g_0 + ( Δ X^2(0) - 2n_b + 1/1 + g_0) e^-Γ(1 + g_0)t, Δ Y^2(t) = 2n_b + 1/1 - g_0 + ( Δ Y^2(0) - 2n_b + 1/1 - g_0) e^-Γ(1 - g_0)t. When the system is prepared as a thermal state in equilibrium with the bath, both quadratures start with a value of 2n_b + 1, and <ref> simplify to Δ X^2(t) = 2n_b + 1/1 + g_0[ 1 + g_0 e^-Γ(1 + g_0)t] , Δ Y^2(t) = 2n_b + 1/1 - g_0[1 - g_0 e^-Γ(1 - g_0)t] . Increased squeezing in X will usually be accompanied by increased anti-squeezing in Y. Because of this, it is important to examine the merits of using a short, strong pump pulse rather than a long, weak pulse to squeeze the signal. To this end, we consider the antisqueezing at the time τ_1 at which the squeezing in X reaches the threshold value of Δ X^2 = 1. For n_th(0)=n_b>0, we have from <ref> that Δ Y^2(τ_1) = 2n_b + 1/1 - g_0[1 - ( 1/g_0g_0 - 2n_b/2 n_b + 1)^1 - g_0/1 + g_0]. In the limit of weak pumping (g_0 ≪ 1), Δ Y^2(τ_1) → 2n_b/g_0, while in the strong pumping limit (g_0 >> 1), Δ Y^2(τ_1) → 2n_b(2n_b + 1)/(g_0 - 2n_b). In both limits, increasing g_0 reduces the uncertainty in Δ Y^2 at t = τ_1. Thus, in order to avoid excess growth in the anti-squeezed uncertainty, it is always beneficial to maximize the pumping strength if a specified squeezing is desired. This is because the longer it takes to reach a desired squeezing level, the more thermal photons will be generated due to loss. We can see from <ref> that the squeezed quadrature will reach a steady-state value of Δ X_min^2 = 2n_b + 1/1 + g_0, for all values of g_0. The anti-squeezing, however, will only reach steady state for g_0<1, diverging otherwise. Below critical pumping, this anti-squeezing maximum is Δ Y^2_max = 2n_b + 1/1 - g_0. In the steady-state, the squeezing is limited to a minimum quadrature uncertainty of (2n_b + 1)/2, and therefore we cannot achieve any squeezing if n_b ≥ 0.5, which agrees with the results for thermalized squeezed states by previous authors <cit.>. We now consider the evolution of the equal-time, second order quantum coherence function <cit.> g^(2)(t)≡Tr{b^† b^† b bρ(t)}/n^2(t), which quantifies the correlation between two simultaneous photon measurements at time t, where g^(2)(t) > 1 indicates super-Poissonian statistics and photon bunching. 
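As a brief aside before turning to the coherence function, the constant-pump expressions above are easy to evaluate directly. The sketch below is ours, with arbitrary illustrative values satisfying g_0 > 2n_b; it recovers the steady-state limits and locates the time τ_1 at which the squeezed quadrature first reaches the shot-noise level.

import numpy as np

def varX_const(t, g0, Gamma, n_b):
    # Delta X^2(t) for a constant pump with n_th(0) = n_b
    return (2 * n_b + 1) / (1 + g0) * (1 + g0 * np.exp(-Gamma * (1 + g0) * t))

def varY_const(t, g0, Gamma, n_b):
    # Delta Y^2(t) for a constant pump with n_th(0) = n_b (valid for g0 < 1)
    return (2 * n_b + 1) / (1 - g0) * (1 - g0 * np.exp(-Gamma * (1 - g0) * t))

g0, Gamma, n_b = 0.8, 1.0, 0.2                 # illustrative values, g0 > 2 n_b
t = np.linspace(0.0, 60.0, 6000)
print(varX_const(t[-1], g0, Gamma, n_b), (2 * n_b + 1) / (1 + g0))   # squeezed limit
print(varY_const(t[-1], g0, Gamma, n_b), (2 * n_b + 1) / (1 - g0))   # anti-squeezed limit

# First time at which Delta X^2 drops to the shot-noise level of 1 (requires g0 > 2 n_b)
tau_1 = t[np.argmax(varX_const(t, g0, Gamma, n_b) <= 1.0)]
print(tau_1, varY_const(tau_1, g0, Gamma, n_b))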
In a thermal state g^(2)=2, while in a STS <cit.> g^(2) = 2 + (2n_th + 1)^2 sinh^2(2u)/[(2n_th + 1)cosh(2u) - 1]^2. In the large population limit (n →∞), the STS value approaches the squeezed vacuum state coherence g^(2) = 3 <cit.>. In <ref>, we plot g^(2)(t) as a function of time for two different constant pumping strengths and several different bath temperatures, all with the initial condition n_th(0)=n_b. We see that g^(2) peaks at early times before settling down to a lower steady-state value. As the environmental population is increased, the peak and steady-state values are decreased and the peak occurs at a later time. When the pump is increased, the peak is larger and occurs at an earlier time, while the steady state value is reduced. Using <ref>, we determine the steady-state coherence to be g^(2)_ss = 2 + ( (2n_b + 1)g_0/2n_b + g_0^2)^2. The coherence peak above the steady-state value arises as the state transitions from a thermal state to a STS with higher steady-state coherence. As the pumping begins, if n_b ≪ 1, the population is still small but many squeezed pairs are being created before they can be removed by loss and before the total population becomes too large; because g^(2) is normalized to the square of the total population, this results in a larger g^(2) during this time period. When the bath temperature is larger though, g^(2) is suppressed by the existing thermal population, so we find that the peak is significantly reduced or even absent. We can determine when the transition to an STS fails to create a peak in g^(2)(t) by finding the time when the coherence is maximized. In <ref>, we plot this peak time, τ_p, as a function of the pump strength and bath population. We note a clear distinction between the coherence peak times when squeezing is possible (g_0 > 2n_b, upper left) and when it is not (lower right). When squeezing is not possible, the coherence reaches the steady-state value monotonically and does not peak. We have seen from <ref> that the steady state coherence will decrease with larger pumping, while the peak value will increase. In <ref>, we compare the maximum and steady-state coherence as a function of pump strength and bath population. Again, we see that for g_0 < 2n_b, the peak value of g^(2) is nearly identical to the steady-state value [Direct comparison between the theoretical steady-state value in <ref> and the maximum of the numeric simulation shows a difference of less than 10^-8 for the region g_0 < 2n_b, which can be attributed to the limits of the computational precision.]. Arbitrary Pump Pulse. We now examine the solutions for an arbitrary pump envelope and for a Gaussian pulse. The linear, non-homogeneous ODE in <ref> has the closed-form solution <cit.> Δ X^2(t) = [ Γ(2n_b + 1)∫_0^t q(t̃) dt̃ + q(0)Δ X^2(0)] q^-1(t), where q(t) ≡exp(Γ∫ (1 + g(t)) dt). If the initial state is a thermal state in equilibrium with the bath, this simplifies to Δ X^2 (t) = (2n_b + 1)[ Γ∫_0^t q(t') dt' + q(0)] q^-1(t), which includes the bath temperature only as a prefactor. This means that for the usual initial condition of n_th(0)=n_b, one only needs to solve a single equation to obtain the time evolution of quadrature variance for all bath temperatures. We note that the expression for Δ Y^2 (t) is identical to the one for Δ X^2 (t), but with g(t)→ -g(t). We now consider the particular example of excitation by a Gaussian pulse, with an envelope given by g(t) = g_0 exp[ -1/2Γ^2(t - t_o)^2/σ^2].
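The closed-form expression for ΔX^2(t) above only requires one nested quadrature, so it is simple to evaluate for this Gaussian envelope. The sketch below is ours; the pulse parameters are illustrative choices and not those used for the figures.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def varX_pulse(t, g, Gamma, n_b):
    # Delta X^2(t) from the closed-form solution with n_th(0) = n_b,
    # taking q(t) = exp(Gamma * int_0^t (1 + g) dt') so that q(0) = 1.
    Q = cumulative_trapezoid(Gamma * (1.0 + g), t, initial=0.0)
    q = np.exp(Q)
    return (2 * n_b + 1) * (Gamma * cumulative_trapezoid(q, t, initial=0.0) + 1.0) / q

# Illustrative Gaussian pulse: g0 = 4, sigma = Gamma, centred at t_o = 5 / Gamma
Gamma, n_b, g0, t_o, sigma = 1.0, 0.5, 4.0, 5.0, 1.0
t = np.linspace(0.0, 15.0, 4000)
g = g0 * np.exp(-0.5 * Gamma**2 * (t - t_o)**2 / sigma**2)

varX = varX_pulse(t, g, Gamma, n_b)
print(varX.min())                          # minimum quadrature uncertainty
print((2 * n_b + 1) / (1 + g.max()))       # instantaneous steady-state value at the pulse peak

For pulses that are not too short compared with the cavity lifetime, the printed minimum should lie close to (2n_b + 1)/(1 + g(τ_M)), the instantaneous steady-state value at the pulse peak, which is the approximation discussed below.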
In <ref>, we plot the pump envelope as a function of time, along with the squeezed quadrature uncertainty for three different thermal bath populations. We see that the different uncertainty profiles are identical apart from the scaling factor 2n_b + 1 in <ref>. In particular, the uncertainty is minimized at the same time, τ_0 for all temperatures. Unless the pulse is very short relative to the cavity lifetime, this minimum will occur very close to the time, τ_M at which the pump reaches its maximum value. Therefore, we can approximate the pump strength at the minimum uncertainty by the maximum pump strength, as was done in Ref. <cit.>. Doing this, we obtain Δ X^2_min≈2n_b + 1/1 + g(τ_M). Note that for the given pump pulse, the quadrature squeezing below shot noise disappears for n_b≳ 2.5. However, with a well-chosen pulse strength and shaping, we see that substantially more squeezing can be achieved than is possible in steady-state. Conclusion. In this work, we derived a closed-form solution to the Lindblad master equation, and used it to determine the effects of a thermal environment on the generation of quadrature squeezing via SPDC in a lossy resonator. We proved that the solution is a squeezed thermal state, with contributions to the thermal population coming from loss to and photons from the thermal bath. We derived a closed-form solution for the evolution of the quadrature uncertainty for an arbitrary un-chirped classical pump pulse and applied it for both a constant and a Gaussian pump pulse. We found that the thermal bath reduces quadrature squeezing and the equal-time, second order coherence function. The results presented in this work can be used to help determine the temperature, loss, and pump requirements for squeezed state generation in the few-terahertz regime, where room temperature environmental photons can significantly degrade squeezing. This will be particularly relevant for squeezed state generation in microwave cavities or optomechanical systems <cit.>, where pulse optimization will be necessary to overcome thermal limitations. In future work, we plan to extend these results to two-mode cavity system to determine the effect of a thermal environment on the entanglement correlation variance and to directly examine the generation of squeezing in optomechanical systems. Acknowledgements This work was supported by the Canada Foundation for Innovation and the Natural Sciences and Engineering Research Council of Canada (NSERC).
http://arxiv.org/abs/2406.19152v1
20240627131115
Mixture priors for replication studies
[ "Roberto Macrì Demartino", "Leonardo Egidi", "Leonhard Held", "Samuel Pawel" ]
stat.ME
[ "stat.ME", "stat.AP" ]
Mixture priors for replication studies Roberto Macrì Demartino^a (corresponding author) 0000-0002-5296-6566, Leonardo Egidi^b 0000-0003-3211-905X, Leonhard Held^c 0000-0002-8686-5325, and Samuel Pawel^c 0000-0003-2779-320X ^a Department of Statistical Sciences, University of Padova, Via C. Battisti 241, Padova, 35121, Italy. ^b Department of Economics, Business, Mathematics, and Statistics “Bruno de Finetti”, University of Trieste, Via A. Valerio 4/1, Trieste, 34127, Italy. ^c Epidemiology, Biostatistics and Prevention Institute, Center for Reproducible Science, University of Zurich, Hirschengraben 84, Zurich, 8001, Switzerland THIS IS A PREPRINT WHICH HAS NOT YET BEEN PEER REVIEWED ================================================================================== § ABSTRACT Replication of scientific studies is important for assessing the credibility of their results. However, there is no consensus on how to quantify the extent to which a replication study replicates an original result. We propose a novel Bayesian approach based on mixture priors. The idea is to use a mixture of the posterior distribution based on the original study and a non-informative distribution as the prior for the analysis of the replication study. The mixture weight then determines the extent to which the original and replication data are pooled. Two distinct strategies are presented: one with fixed mixture weights, and one that introduces uncertainty by assigning a prior distribution to the mixture weight itself. Furthermore, it is shown how within this framework Bayes factors can be used for formal testing of scientific hypotheses, such as tests regarding the presence or absence of an effect. To showcase the practical application of the methodology, we analyze data from three replication studies. Our findings suggest that mixture priors are a valuable and intuitive alternative to other Bayesian methods for analyzing replication studies, such as hierarchical models and power priors. We provide the free and open source R package that implements the proposed methodology. Keywords: Bayesian inference, Borrowing, Effect size, Evidence synthesis, Historical data § INTRODUCTION The integrity and credibility of scientific research heavily rely on the replicability of its results <cit.>. However, in recent years, an increasing number of published findings have failed to replicate, leading to growing concerns about a “replication crisis” in several scientific fields <cit.>. As a consequence, there is an increasing emphasis within the scientific community on the importance of replication studies <cit.>.
Establishing the success of a replication remains a challenging task. Multiple statistical methodologies, ranging from frequentist to Bayesian paradigms and even hybrid models of both, have been suggested to quantify the degree of success a replication study achieved in replicating the original result <cit.>. Analyzing replication studies involves per definition the use of historical data – the data from the original study. Given the inherent nature of sequential information updating, Bayesian methods are natural for this purpose. Consequently, an intuitive way to incorporate historical information is to use a prior distribution based on the data from the original study for the analysis of the replication data. In its simplest form, one could use the posterior distribution of the model parameters based on the original data as the prior for the replication analysis. However, this may be problematic if there is heterogeneity between the two studies, as the resulting posterior may then conflict with both studies. This is of particular concern in the replication setting, where replication studies often show less impressive effects than their original counterparts <cit.>, often argued to happen because of stricter control of biases and researcher degrees of freedom, for example, via preregistration of the replication study. A variety of more sophisticated Bayesian methods have been proposed to mitigate potential conflict between historical and current data, and “borrow information” from the historical data in an adaptive way <cit.>. Notably, power priors <cit.>, hierarchical models <cit.>, and mixture priors <cit.> are three prominent approaches in this domain. The power prior, in its basic version, is derived by updating an initial prior distribution with the likelihood of the historical data raised to the power parameter δ, ranging between zero and one, which determines the degree to which historical data influences the prior distribution. Power priors evaluate two primary concepts of successful replication <cit.>. Firstly, they ensure the replication study confirms the presence of a tangible effect, often by assessing the effect size θ, and checking if it differs significantly from zero. Secondly, they assess how well the original data matches with the replication data, as a δ value close to one means both studies align seamlessly, while a value close to zero implies a disagreement between the original and the replication study. Hierarchical modeling offers an alternative way to incorporate historical data into Bayesian analyses. The idea is to assume a hierarchical model where the true original θ_o and replication effect sizes θ_r are sampled themselves from a distribution around an overall effect size θ. The variance τ^2 of this distribution then determines the similarity between the studies, a value of zero corresponding to identical true effects while a large value corresponds to heterogeneity. Works by <cit.> and <cit.> have effectively applied this approach in replication scenarios. Mixture priors represent yet another way to adaptively borrow information from historical data <cit.>. Essentially, a mixture prior combines a prior based on the historical data with a non-informative one, allocating distinct mixing weights to each component. The informative prior encourages information borrowing, while the non-informative prior indicates limited or no use of historical information. 
The robust meta-analytic predictive (MAP) prior presented by <cit.>, which mixes a MAP prior derived from multiple historical studies with a non-informative prior, is an example of a mixture prior used for historical data borrowing. In the replication setting, <cit.> have proposed a mixture prior modification of the reverse-Bayes method from <cit.> to limit prior-data conflict between the original study and a "sceptical prior" that is used to challenge it. The set of 21 replication studies from the Social Sciences Replication Project has also been jointly analyzed with a Bayesian mixture model to estimate an overall true positive rate and an effect size deflation factor <cit.>. However, apart from these two works, mixture prior modeling has not been applied to replication studies in any way, particularly not in its most basic form of using a mixture prior based on the original study for the analysis of the replication study. The aim of this paper is therefore to present a novel and conceptually intuitive Bayesian approach for quantifying replication success based on mixture priors. The idea is to use a mixture of the posterior distribution based on the original study and a non-informative distribution as the prior for the analysis of the replication study. The mixture weight then determines the extent to which the original and replication data are pooled. This methodology is illustrated using data from three replication studies, which were part of the replication project from <cit.>, detailed in the following Section <ref>. Section <ref> then describes the process of deriving mixture priors from data of an original study within a meta-analytic framework, presenting a general approach for integrating original data into the mixture prior. In this exploration, two distinct approaches are examined: the first fixes the mixture weight, while the second introduces uncertainty by assigning a prior distribution to the mixture weight parameter. In Section <ref>, different hypotheses regarding the underlying parameters of interest are examined. Bayes factors are derived, offering a quantitative measure of evidence for one hypothesis over another. Finally, Section <ref> provides concluding remarks about similarities and differences between the discussed method and established approaches, particularly hierarchical models and power priors. Additionally, the strengths and limitations of the mixture prior approach are emphasized, along with insights into potential extensions. § RUNNING EXAMPLE We examine a particular experiment from communication science titled "Labels", which was part of the large-scale replication project by <cit.>. The original authors hypothesized that the type of label used by a person to describe another person can indicate something about the preferences of the person themselves. For example, when someone uses the term "denier" to describe someone else who does not believe in global warming, the authors hypothesized that this is an indication that the speaker believes in global warming. The original study found evidence for this hypothesis. Its main finding was drawn from a sample of 1577 participants, which led to a standardized mean difference effect estimate of θ̂_o = 0.21 and standard error σ_o = 0.05, suggesting a positive effect of "labelling". Subsequently, this experiment was replicated by three other labs. The first replication yielded a smaller effect estimate, with θ̂_r_1 = 0.09 and σ_r_1 = 0.05.
In contrast, the other two replications reported either the same effect estimate, θ̂_r_2 = 0.21 and σ_r_2 = 0.04, or a larger one, θ̂_r_3 = 0.44 and σ_r_3 = 0.06, compared to the original study. Figure <ref> shows the effect size estimates along with their 95% confidence intervals for the original study, its three independent replications, and the pooled replication. § MIXTURE PRIOR MODELING OF REPLICATION STUDIES In the following, we use a meta-analytic framework which can be applied to a broad range of data types and effect sizes <cit.>. Define by θ the unknown effect size, with θ̂_o and θ̂_r_i being the estimated effect size from the original study and replication i = 1, …, m, respectively. As assumed by <cit.> and <cit.>, it is common to specify that the likelihood of the effect size estimates is approximately normal θ̂_o |θ∼N(θ, σ_o^2) θ̂_r_i|θ∼N(θ, σ_r_i^2), where σ_o and σ_r_i represent the standard errors of the estimates, which are assumed to be known. There are circumstances under which the effect size might need a particular transformation, such as a logit function or a log function transformation, to refine the normal distribution approximation. Additionally, adjusting the effect size for confounders via regression might also be necessary. Finally, define the pooled replication effect size estimate and its standard error by θ̂_r_p = (∑_i=1^m θ̂_r_i/σ^2_r_i)/(∑_i=1^m 1/σ^2_r_i) and σ_r_p = √(1/∑_i=1^m 1/σ^2_r_i), which are sufficient statistics for inference regarding the effect size parameter θ, that is, we have that the likelihood of a sample of independent replication studies is ∏_i=1^m N(θ̂_r_i|θ, σ^2_r_i) = K ×N(θ̂_r_p|θ, σ^2_r_p), with N(·| m,v) the normal density function with mean m and variance v and K a constant that does not depend on the effect size θ. In the following, we will investigate posterior distribution and Bayes factor analyses related to the effect size θ and based on the likelihood of the pooled replication effect size estimate and standard error θ̂_r_p|θ∼N(θ, σ^2_r_p). For both analyses, the constant K cancels out and the approach thus encompasses both the analysis of a single replication study (m = 1 so that θ̂_r_p = θ̂_r_1 and σ_r_p = σ_r_1) and the analysis of multiple replication studies (m > 1). The aim is now to develop a mixture prior for the effect size θ that combines two distinct components. The first component is derived from the original study, akin to the meta-analytic-predictive (MAP) prior described by <cit.> and <cit.>, and the second component is a normal prior that provides an alternative in case there is conflict between the replication and original data: π(θ|θ̂_o, ω) = ωN(θ|θ̂_o,σ^2_o) + (1-ω) N(θ|μ,τ^2). The mean μ and variance τ^2 of the alternative are typically specified such that the prior is proper but non-informative (e.g., μ = 0 and τ^2 large). Clearly, by setting ω = 1, we obtain a prior that leads to a complete pooling of the data from both studies, while setting ω = 0 completely discounts the original data. For 0 < ω < 1, there is a gradual compromise between these two extremes. In a mixture prior as in (<ref>), setting an appropriate mixing weight ω is a complex but crucial task. It is essential that the chosen ω accurately reflects the level of agreement between the original and replication studies. A prior that places too much weight towards the non-informative component can undermine the effectiveness of borrowing from the original study, leading to an underestimate of the real agreement between the original and replication studies.
On the contrary, a mixing weight skewed heavily towards the informative prior may result in overestimating the confidence in the similarity between the two studies, introducing a potential bias. In the following we will discuss two strategies for determining the value of ω. The first strategy involves fixing ω at a predetermined value that is considered reasonable, while the second employs an additional prior specification by taking ω as a random quantity. §.§ Fixed weight parameter After observing the replication data, the mixture prior (<ref>) is updated, yielding the posterior distribution π(θ|θ̂_o, θ̂_r, ω) = N(θ̂_r |θ, σ^2_r) {ωN(θ|θ̂_o,σ^2_o) + (1-ω) N(θ|μ,τ^2)}/f(θ̂_r |θ̂_o, ω), where the marginal likelihood is f(θ̂_r |θ̂_o, ω) = ∫_ΘN( θ̂_r |θ, σ^2_r) {ωN(θ|θ̂_o,σ^2_o) + (1-ω) N(θ|μ,τ^2)}dθ = ωN(θ̂_r|θ̂_o,σ_r^2+σ_o^2) + (1-ω)N(θ̂_r |μ, σ^2_r + τ^2). For a normal mixture model, there exists a closed-form solution for the marginal likelihood, as denoted in equation (<ref>), based on which it can be shown that the posterior is again a mixture of two normals π(θ|θ̂_o, θ̂_r, ω) = ω^'N(θ| m_1, v_1) + (1 - ω^') N(θ| m_2, v_2), with updated means and variances m_1 = (θ̂_o/σ^2_o + θ̂_r/σ^2_r) × v_1, v_1 = (1/σ^2_o + 1/σ^2_r)^-1, m_2 = (μ/τ^2 + θ̂_r/σ^2_r) × v_2, v_2 = (1/τ^2 + 1/σ^2_r)^-1, and updated weight ω^' = {1 + (1 - ω)/ω×N(θ̂_r |μ, τ^2 + σ^2_r)/N(θ̂_r |θ̂_o, σ^2_o + σ^2_r)}^-1. The two posterior components thus represent two ordinarily updated normal posteriors, while the initial weight, along with the relative predictive accuracy of the replication data under either component, determines the updated weight. The fact that for a fixed mixture weight the posterior distribution is again a mixture distribution is known from general Bayesian theory <cit.>. The mixture representation of the posterior also shows that the non-informative component has to be proper (τ^2 < ∞) to enable adaptive borrowing, as otherwise the updated weight will be ω^' = 1, leading always to a complete pooling with the historical data regardless of conflict. There are different approaches for specifying the mixture weight ω. A straightforward approach involves assigning to ω a value that is reasonable, based on domain-expert knowledge, regarding the agreement between the two studies. Alternatively, the empirical Bayes estimate of ω may be used, which represents the value that maximizes the marginal likelihood function (<ref>). Finally, in order to assess prior sensitivity, a reverse-Bayes approach <cit.> may be used to find the mixture weight such that a certain posterior distribution is obtained. Returning to the "Labels" experiment from <cit.> introduced in Section <ref>, Figure <ref> shows the shifts in the posterior distribution for the effect size (<ref>) under different fixed weights assigned to the mixture prior. Here, the non-informative prior component in (<ref>) is constructed to be a unit-information normal distribution centred at a mean μ = 0 and with variance τ^2 = 2. A unit-information prior <cit.> is structured to provide only a minimal amount of information. Essentially, its variance is set so that the prior carries an amount of information equivalent to a unit sample size. The use of unit-information priors is illustrated in several studies: <cit.> consider them for binary response models, <cit.> apply this principle to generalized linear mixed models, and <cit.> demonstrate its application in the context of generalized linear models. For further details see also <cit.>.
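The closed-form update above is simple to implement. The following Python sketch is ours (it is not the R package mentioned in the abstract); it computes the posterior mixture parameters for the "Labels" data with the unit-information component (μ = 0, τ^2 = 2), together with the pooled replication estimate and a grid-based empirical Bayes weight. The fixed weight ω = 0.5 is purely illustrative.

import numpy as np
from scipy.stats import norm

def mixture_posterior(theta_o, se_o, theta_r, se_r, w, mu=0.0, tau2=2.0):
    # Two ordinarily updated normal components and the updated weight w'
    v1 = 1.0 / (1.0 / se_o**2 + 1.0 / se_r**2)
    m1 = v1 * (theta_o / se_o**2 + theta_r / se_r**2)
    v2 = 1.0 / (1.0 / tau2 + 1.0 / se_r**2)
    m2 = v2 * (mu / tau2 + theta_r / se_r**2)
    f1 = norm.pdf(theta_r, loc=theta_o, scale=np.sqrt(se_o**2 + se_r**2))  # informative predictive
    f2 = norm.pdf(theta_r, loc=mu, scale=np.sqrt(tau2 + se_r**2))          # non-informative predictive
    w_post = 1.0 / (1.0 + (1.0 - w) / w * f2 / f1)
    return w_post, (m1, v1), (m2, v2)

# Pooled replication estimate for the "Labels" example
theta_r, se_r = np.array([0.09, 0.21, 0.44]), np.array([0.05, 0.04, 0.06])
prec = 1.0 / se_r**2
theta_p = np.sum(prec * theta_r) / np.sum(prec)
se_p = np.sqrt(1.0 / np.sum(prec))

print(mixture_posterior(0.21, 0.05, theta_p, se_p, w=0.5))

# Empirical Bayes weight: maximise the marginal likelihood over a grid of w
w_grid = np.linspace(0.001, 0.999, 999)
f1 = norm.pdf(theta_p, loc=0.21, scale=np.sqrt(0.05**2 + se_p**2))
f2 = norm.pdf(theta_p, loc=0.0, scale=np.sqrt(2.0 + se_p**2))
print(w_grid[np.argmax(w_grid * f1 + (1 - w_grid) * f2)])

Because the marginal likelihood is linear in ω, the grid-based empirical Bayes estimate sits at a boundary of the grid; the grid is used here only to keep the sketch self-contained.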
We see that varying ω within the range from 0 to 1 induces a progressive transformation of the posterior distribution. At ω = 0, the posterior distribution virtually aligns with the likelihood of the replication study, as the influence of the non-informative component is minimal compared to the replication data. Conversely, as ω increases towards 1, the posterior distribution gradually becomes more influenced by the prior associated with the original study, leading to a posterior that lies somewhere in between the replication and original likelihood, as the replication borrows information from the original study. Based on the reverse-Bayes approach <cit.>, a tipping point analysis was conducted to assess the influence of the mixing weight ω on the resulting posterior distribution. This analysis focuses on the question: “How much does the mixing weight have to change for the conclusion of the analysis to change?” Figure <ref> shows the posterior median and the 95% highest posterior density interval (HPDI) of the effect size for each weight value associated with the original study component in (<ref>). We see that the second, third, and the pooled replication scenarios are robust with respect to the choice of weights, as the effect size posterior median and its corresponding 95% HPDI remain substantially above zero across all prior weights, thereby suggesting robust evidence for a genuine effect. In contrast, the first replication is less stable, as the 95% HPDI includes zero up to about a weight of 0.1. Thus, this first replication can only be considered as providing evidence for a genuine effect if a mixture weight of at least 0.1 seems plausible, as the replication study alone (i.e., a mixture weight of zero) fails to do so. It is important to note that the posterior median can be a misleading point estimate in the case of bimodality. This does not seem to be a problem in our analysis, as only the posterior of the third replication shows a slight “hump” in the posterior distribution for certain weight values, while the posteriors of the remaining replications appear unimodal. However, if assessing bimodality by looking at the posterior density is not possible, it may be advisable to at least compute numerical summaries that quantify potential bimodality <cit.>. §.§ Prior on the weight parameter We now introduce an extension of the mixture prior in (<ref>) assuming uncertainty on the weight ω. This approach considers ω as a random quantity, requiring the specification of a prior distribution π(ω). A natural choice is a Beta distribution ω|η, ν∼Beta(η,ν), since ω is a proportion. Consequently, this formulation leads to the joint prior distribution for the effect size θ and the weight ω π(θ,ω|θ̂_o, η, ν) = π(ω|η, ν ) π(θ|ω, θ̂_o) = Beta(ω|η, ν)×{ωN(θ|θ̂_o, σ^2_o)+ (1-ω)N(θ|μ, τ^2)}, where Beta(·|η, ν) is the Beta density function with the strictly positive shape parameters η, ν > 0. Given the joint prior distribution (<ref>) and in light of the replication data, the joint posterior distribution is then π(θ,ω|θ̂_r,θ̂_o, η, ν) = N(θ̂_r |θ, σ^2_r)×Beta(ω|η, ν) ×{ωN(θ|θ̂_o, σ^2_o)+ (1-ω)N(θ|μ, τ^2)} / f(θ̂_r |θ̂_o, η,ν). The marginal likelihood in the normal mixture model with random weights can be determined through a closed-form solution, similar to Equation (<ref>). 
In this scenario, it depends on the expected value of the weight parameter ω and on the updated normal prior components of the mixture f(θ̂_r |θ̂_o, η,ν) = ∫∫N(θ̂_r |θ, σ^2_r)×Beta(ω|η, ν) ×{ωN(θ|θ̂_o, σ^2_o)+ (1-ω)N(θ|μ, τ^2)}dθdω = ∫Beta(ω|η, ν) ×{ωN(θ̂_r|θ̂_o, σ_r^2+σ_o^2) + (1-ω)N(θ̂_r |μ, σ^2_r + τ^2)}dω = {η/(η+ν)}×{N(θ̂_r |θ̂_o, σ^2_r +σ^2_o)-N(θ̂_r |μ, σ^2_r + τ^2)} + N(θ̂_r |μ, σ^2_r+ τ^2). Consequently, in the case of a random weight, the marginal likelihood is similar to that in Equation (<ref>), with the fixed weight replaced by the expected weight over the prior. By integrating θ out in (<ref>), the marginal posterior distribution of ω can be expressed as π(ω|θ̂_r,θ̂_o, η, ν) = Beta(ω|η, ν) ×{ωN(θ̂_r |θ̂_o, σ_r^2+σ_o^2) + (1-ω)N(θ̂_r |μ, σ^2_r + τ^2)} / f(θ̂_r |θ̂_o, η,ν). The marginal posterior of θ is given by π(θ|θ̂_r,θ̂_o, η, ν) = N(θ̂_r |θ, σ^2_r) ×[{η/(η+ν)}×{N(θ|θ̂_o,σ^2_o)-N(θ|μ, τ^2)}+N(θ|μ, τ^2)] / f(θ̂_r |θ̂_o, η,ν). In summary, when introducing uncertainty in the mixture weight ω via a Beta prior, the marginal likelihood of the data, the joint posterior, and the marginal posteriors of the effect size θ and the mixture weight ω are still available in closed form. Moreover, the marginal likelihood and the marginal posterior of θ are of the same form as with a fixed mixture weight ω as shown in the previous section, but with ω replaced by its expected value under its prior distribution. Figure <ref> shows the contour plot of the joint posterior distribution for the effect size θ and the weight parameter ω considering the data from the “Labels” experiment, its three replications, and the pooled replication. In our analysis, we employ a mixture prior, as in (<ref>), in which the informative prior component is derived from the original study, while the non-informative prior is a unit-information prior as in Section <ref>. Additionally, we adopt a flat prior distribution for the weight parameter, choosing a Beta(1,1). We see that for the first, second, and pooled replications, the posterior distribution is concentrated around weight parameter values close to one, reflecting the similarity between the original and replication results. In contrast, for the third replication the posterior distribution is concentrated around zero, indicating a conflict between the original study and the results of this replication. In addition, because it is based on three replications instead of just one, the posterior based on the pooled replications is much more peaked than the others. Figure <ref> displays the marginal posterior distributions of the effect size θ (left) and the weight parameter ω (right). The plot related to θ is enriched by contrasting it with the posterior distribution of θ based solely on the replication data, represented as a dashed line. This effectively illustrates the added value from integrating the original data through a mixture prior. The blue marginal posterior, corresponding to the most divergent estimate θ̂_r_3 = 0.44, shows a tendency to incorporate less information, leading to a more heavy-tailed posterior distribution despite the smallest associated standard error, σ_r_3 = 0.04, among the three external replications. The discrepancy with the original study increases the variance of the posterior distribution, as evident when comparing with the replication-only posterior shown by the dashed blue line. 
This is further highlighted in the 95% HPDI, which ranges from 0.35 to 0.52, slightly wider than the 95% HPDI of 0.36 to 0.52 observed when the replication data is analyzed without considering the original study, represented by the dashed horizontal blue bar. Conversely, the green marginal posterior, associated with the most coherent replication θ̂_r_2 = 0.21, results in a noticeably narrower 95% HPDI compared to the one derived solely from the replication data. Additionally, the magenta marginal posterior based on the pooled replication θ̂_r_p = 0.21 turns out to be the most peaked density. It is worth noting that these marginal posteriors are equivalent to those obtained when the weight parameter is fixed at ω = 0.5, as shown in Figure <ref>, because the expected value of a Beta(1,1) distribution is 0.5. The right panel in Figure <ref> shows the marginal posterior distribution for the weight parameter ω, under the assumption of a flat prior distribution for ω. Following the formula detailed in (<ref>), a flat prior yields a linearly increasing/decreasing posterior density. However, for non-flat priors (i.e., Beta(η, ν) with η≠ 1 or ν≠ 1), the posterior density of the weight ω is no longer linear. The first, second, and pooled replications, highlighted in yellow, green, and magenta respectively, display linear marginal posterior distributions that increase monotonically, indicating a peak at ω=1. This suggests compatibility between the two replications and the pooled replication with respect to the original study. Conversely, the linear marginal distribution of the third replication, illustrated in blue, exhibits a monotonically decreasing trend with the most probable value at ω = 0. This trend suggests a notable disagreement between this replication and the original study. Nevertheless, it is worth noting that the HPDIs remain considerably wide across all the replication scenarios, despite their large sample sizes. § HYPOTHESIS TESTING Estimating the parameters of a model is one aspect, but in statistical analysis one may also want to test the plausibility of different scientific hypotheses. Within the Bayesian framework, the Bayes factor is a key tool for assessing and comparing hypotheses about the parameters <cit.>. Let us consider the replication data θ̂_r and let ℋ_0 and ℋ_1 be two competing hypotheses. The Bayes factor is then given by the factor that updates the prior odds of the hypotheses to their posterior odds BF_01 = {Pr(ℋ_0 |θ̂_r)/Pr(ℋ_1 | θ̂_r)} / {Pr(ℋ_0)/Pr(ℋ_1)} = f(θ̂_r|ℋ_0)/f(θ̂_r|ℋ_1), which simplifies to the ratio of marginal likelihoods (or evidences) as shown by the second equality. As such, the Bayes factor is a quantitative tool to measure the relative evidence that we have for ℋ_0 over ℋ_1. For example, when the a priori probabilities of both hypotheses are assumed to be equal, a Bayes factor greater than one indicates that the data are more likely under ℋ_0 than ℋ_1. Conversely, a Bayes factor less than one suggests that ℋ_1 is more in agreement with the observed data. A value approximately equal to one implies that the data do not distinctly favor either model, indicating similar levels of empirical support for ℋ_0 and ℋ_1. To interpret the Bayes factor effectively, various categorizations have been proposed. One of the most notable is outlined by <cit.>, as detailed in Table <ref>. 
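Because the Bayes factor updates prior odds into posterior odds, it maps directly to a posterior probability for ℋ_0 once prior odds are fixed. The short Python sketch below illustrates this relation for equal prior odds; the Bayes factor values used are arbitrary examples, not results from the paper.

def posterior_prob_h0(bf_01, prior_odds=1.0):
    """Posterior probability of H0 implied by a Bayes factor and given prior odds."""
    post_odds = bf_01 * prior_odds
    return post_odds / (1.0 + post_odds)

for bf in [0.1, 1.0, 3.0, 10.0, 100.0]:
    print(bf, round(posterior_prob_h0(bf), 3))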
§.§ Hypothesis testing for the mixture weight ω To determine how closely the replication aligns with the original study, we may perform hypothesis testing on the mixture weight parameter ω. A key goal is testing whether the original and replication studies are consistent with each other, formulated as the hypothesis ℋ_c: ω = 1. This hypothesis may be tested against the alternative hypothesis that the original data should be entirely disregarded, indicated as ℋ_d: ω = 0. Contrary to the power prior approach <cit.>, the point hypothesis ℋ_d: ω = 0 does not lead to an improper mixture prior. As a result, the Bayes factor derived in this context does not encounter problematic issues related to the dependence on the ratio of two arbitrary constants, since it is based on the ratio of two well-defined marginal likelihoods BF_dc(θ̂_r |ℋ_d: ω = 0) = f{θ̂_r |ℋ_d: θ|ω∼ωN(θ̂_o,σ^2_o) + (1-ω) N(μ,τ^2), ω = 0} / f{θ̂_r |ℋ_c: θ|ω∼ωN(θ̂_o,σ^2_o) + (1-ω) N(μ,τ^2), ω = 1} = N(θ̂_r |μ, σ^2_r+ τ^2) / N(θ̂_r |θ̂_o, σ^2_r+ σ^2_o). A more flexible hypothesis to consider is that the data exhibit a certain level of compatibility or disagreement. A suitable hypothesis is defined by the prior class ℋ_d: ω∼Beta(1, ν), where ν > 1. In this class of distributions, the density is maximized at ω = 0 and decreases monotonically from there. This encodes a hypothesis where the importance of the original data is systematically reduced. The degree of this reduction is dictated by the parameter ν. In the asymptotic case where ν→∞, the hypothesis simplifies to ℋ_d: ω = 0, implying a complete discounting of the original data. Consequently, the Bayes factor is BF_dc{θ̂_r |ℋ_d: ω∼Beta(1, ν)} = f{θ̂_r |ℋ_d: θ|ω∼ωN(θ̂_o,σ^2_o) + (1-ω) N(μ,τ^2), ω∼Beta(1, ν) } / f{θ̂_r |ℋ_c: θ|ω∼ωN(θ̂_o,σ^2_o) + (1-ω) N(μ,τ^2), ω = 1} = [{η/(η+ν)}×{N(θ̂_r |θ̂_o, σ^2_r +σ^2_o)-N(θ̂_r |μ, σ^2_r + τ^2)} + N(θ̂_r |μ, σ^2_r+ τ^2)] / N(θ̂_r |θ̂_o, σ^2_r+ σ^2_o). §.§ Hypothesis testing for the effect size θ In the assessment of hypotheses regarding the magnitude of the effect size θ, the analysis typically involves a comparative evaluation between the null hypothesis, ℋ_0: θ = 0, which posits absence of the effect, and the alternative hypothesis, ℋ_1: θ≠ 0, suggesting the presence of an effect. The null hypothesis ℋ_0 represents a single value within the possible range of θ values, while the alternative hypothesis ℋ_1 requires a prior specification for both θ and ω. To address this, the use of a mixture prior as in equation (<ref>) is proposed. Specifically, the first mixture prior component is based on the empirical data from the original study θ̂_o, while the second component is designed to have an information content equivalent to a single observation. This approach is complemented by the specification of a suitable Beta prior for the weight parameter ω. Consequently, the Bayes factor is BF_01{θ̂_r |ℋ_1: ω∼Beta(η,ν)} = f(θ̂_r |ℋ_0: θ = 0) / f{θ̂_r |ℋ_1: θ|ω∼ωN(θ̂_o,σ^2_o) + (1-ω) N(μ,τ^2), ω∼Beta(η,ν) } = N(θ̂_r | 0, σ^2_r) / [{η/(η+ν)}×{N(θ̂_r |θ̂_o, σ^2_r +σ^2_o)-N(θ̂_r |μ, σ^2_r + τ^2)} + N(θ̂_r |μ, σ^2_r+ τ^2)]. It is important to emphasize that, as similarly discussed in <cit.> for the power parameter in the power prior approach, assigning a point mass to the weight parameter ω = 1 leads to a Bayes factor contrasting a point null hypothesis to the posterior distribution of the effect size based on the original data, which is the replication Bayes factor under normality <cit.>. 
In detail, it is BF_01{θ̂_r |ℋ_1: ω = 1} = f(θ̂_r |ℋ_0: θ = 0) / f{θ̂_r |ℋ_1: θ|ω∼ωN(θ̂_o,σ^2_o) + (1-ω) N(μ,τ^2), ω = 1} = N(θ̂_r | 0, σ^2_r) / N(θ̂_r |θ̂_o, σ^2_r + σ^2_o). Similar to the power prior formulation, the mixture prior version of the replication Bayes factor represents a generalization of the standard replication Bayes factor that provides a flexible and controlled approach for combining original and replication data. §.§ Posterior distribution and Bayes factor asymptotics In delving deeper into the proposed mixture model, a key focus is on examining the asymptotic characteristics of the marginal posterior distribution and the Bayes factor for the weight parameter. Specifically, let us consider the Bayes factor contrasting ℋ_d: θ∼N(μ, τ^2) to ℋ_c: θ∼N(θ̂_o, σ^2_o) for the replication data θ̂_r |θ∼N(θ, σ^2_r) as in (<ref>). Subsequently, the marginal posterior distribution in (<ref>) can be expressed in terms of the Bayes factor π(ω|θ̂_r, θ̂_o) = π(ω) {ωN(θ̂_r |θ̂_o, σ^2_r + σ^2_o) + (1 - ω) N(θ̂_r |μ, σ^2_r + τ^2)} / [{η/(η+ν)}×{N(θ̂_r |θ̂_o, σ^2_r + σ^2_o) - N(θ̂_r |μ, σ^2_r + τ^2)} + N(θ̂_r |μ, σ^2_r + τ^2)] = π(ω){ω + (1 - ω)BF_dc(θ̂_r)} / [{η/(η+ν)}×{1 - BF_dc(θ̂_r)} + BF_dc(θ̂_r)]. We investigate the behavior of the limiting marginal posterior distribution in (<ref>) when the Bayes factor tends to zero and when it tends towards infinity, respectively. In these cases, we have that lim_BF_dc(θ̂_r) ↓ 0π(ω|θ̂_r, θ̂_o) = π(ω) ω/𝔼_π(ω)(ω) = Beta(ω|η + 1, ν), lim_BF_dc(θ̂_r) ↑ +∞π(ω|θ̂_r, θ̂_o) = π(ω) (1 - ω)/{1 - 𝔼_π(ω)(ω)} = Beta(ω|η, ν + 1). This means that even when we find overwhelming evidence in favor of ℋ_d or ℋ_c, the posterior distribution is only slightly changed from the prior (i.e., “updated by one observation” from the prior). For example, for a flat prior with ν = η = 1, the limiting posteriors are given by the Beta(2, 1) and Beta(1, 2) distributions, respectively, which correspond to densities that are linearly increasing (decreasing) from 0 (2) to 2 (0). We can see from Figure <ref> that the marginal posteriors for the second, third, and pooled “Labels” replications are not too different from these two asymptotic distributions. While the previous calculations assumed that the Bayes factor can go to infinity or zero, thereby overwhelmingly favoring one of the contrasted models, it is unclear whether this is even possible. We therefore now aim to explore the effects on the Bayes factor (<ref>) when the standard error of the replication study σ_r becomes arbitrarily small. This could occur due to an increase in the sample size, which is typically inversely related to the squared standard error. The limiting Bayes factor as the replication standard error σ_r approaches zero is lim_σ^2_r ↓ 0BF_dc(θ̂_r) = N(θ̂_r |μ, τ^2)/N(θ̂_r |θ̂_o, σ^2_o). Consequently, for finite τ^2 and σ^2_o, the Bayes factor is bounded and cannot converge to either zero or +∞. However, if the original standard error σ_o also approaches zero, the Bayes factor in (<ref>) behaves differently. In this case, the Bayes factor approaches lim_σ^2_r, σ^2_o ↓ 0BF_dc(θ̂_r) = N(θ̂_r |μ, τ^2)/δ_θ̂_o(θ̂_r), where δ_θ̂_o(·) represents the Dirac delta function. When the standard errors from both original and replication go to zero, the Bayes factor thus shows the correct asymptotic behavior, converging to zero when their effect sizes are the same and to infinity when they are not. 
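The Bayes factors above reduce to ratios of normal predictive densities, so both the tests and the limiting behavior can be checked numerically. The Python sketch below uses hypothetical inputs (not the estimates analyzed in the next subsection) to evaluate BF_dc, BF_01, and the bounded limit of BF_dc as the replication standard error shrinks.

import numpy as np
from scipy.stats import norm

def bf_dc(theta_o, se_o, theta_r, se_r, mu=0.0, tau2=2.0):
    """BF_dc for the point hypotheses: complete discounting (omega = 0) vs pooling (omega = 1)."""
    return (norm.pdf(theta_r, mu, np.sqrt(se_r**2 + tau2)) /
            norm.pdf(theta_r, theta_o, np.sqrt(se_r**2 + se_o**2)))

def bf_01(theta_o, se_o, theta_r, se_r, eta=1.0, nu=1.0, mu=0.0, tau2=2.0):
    """BF_01 for H0: theta = 0 vs H1 with the mixture prior and a Beta(eta, nu) weight prior."""
    pred_inf = norm.pdf(theta_r, theta_o, np.sqrt(se_r**2 + se_o**2))
    pred_vag = norm.pdf(theta_r, mu, np.sqrt(se_r**2 + tau2))
    marginal_h1 = eta / (eta + nu) * (pred_inf - pred_vag) + pred_vag
    return norm.pdf(theta_r, 0.0, se_r) / marginal_h1

# Hypothetical original estimate 0.2 (SE 0.05) and a replication estimate 0.3
theta_o, se_o, theta_r = 0.2, 0.05, 0.3
print(bf_dc(theta_o, se_o, theta_r, se_r=0.04), bf_01(theta_o, se_o, theta_r, se_r=0.04))

# BF_dc stays bounded as the replication standard error vanishes
for se_r in [0.1, 0.01, 1e-4]:
    print(se_r, bf_dc(theta_o, se_o, theta_r, se_r))
print("limit:", norm.pdf(theta_r, 0.0, np.sqrt(2.0)) / norm.pdf(theta_r, theta_o, se_o))

With these default settings, the values printed in the loop approach the last printed ratio, illustrating that the Bayes factor cannot accumulate unbounded evidence from the replication alone when the original standard error stays fixed.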
§.§ Hypothesis testing for the “Labels” experiment We will now illustrate the results of the proposed hypothesis tests in Sections <ref> and <ref> using the data obtained from the “Labels” experiment, as described in Section <ref>. Specifically, a comprehensive analysis is performed to understand the behavior of these parameters in the replication study. To evaluate the agreement between the initial study and subsequent replications, Table <ref> shows the results of the hypothesis tests concerning the mixture weight parameter ω. The fourth column shows the Bayes factor contrasting two point hypotheses: ℋ_d: ω = 0 and ℋ_c: ω = 1. This analysis reveals substantial to strong evidence for ℋ_c in the first, second, and pooled replication scenarios. Conversely, the third replication study shows strong evidence favoring ℋ_d. While the Bayes factors based on the Beta(1, 2) prior under ℋ_d (fifth column) still point in the same direction, the extent of evidence is lower than for the point hypothesis. Table <ref> presents the outcomes of our hypothesis tests regarding the effect size parameter θ. Specifically, the fourth column shows the Bayes factors contrasting the null hypothesis (ℋ_0: θ = 0) to the alternative hypothesis (ℋ_1: θ≠ 0), with a Beta(1,1) prior for the weight parameter ω under ℋ_1. The results suggest that there is an absence of evidence for either hypothesis in the first replication. Conversely, the Bayes factor BF_01{θ̂_r |ℋ_1 : ω∼Beta(1, 1)} indicates strong evidence in favor of ℋ_1 for the second, third, and pooled replications. In addition, the evidence from the replication Bayes factor under normality, BF_01(θ̂_r |ℋ_1 : ω = 1), shown in the last column, leads to the same qualitative conclusions, thus corroborating the previous findings. In summary, the findings from our analysis indicate that among the three replications, only the second one aligns with the original study's results and also offers evidence for a non-zero effect. While the first replication slightly aligns with the initial findings, it fails to offer substantial evidence supporting an effect different from zero. Conversely, the third replication presents strong evidence for an effect that is non-zero but does not align with the findings of the original study. However, when pooled, the replications align with the original study's findings and provide evidence for a non-zero effect, indicating that the replication effort was successful overall. § DISCUSSION In this paper, we introduced a novel Bayesian method for analyzing data from replication studies. By using a mixture prior that mixes the posterior based on the original study with a non-informative prior, our method addresses the issue of potential conflict between original and replication study, as in such cases the information from the original study can be discounted. A crucial element is the mixture weight parameter ω. We explored two distinct strategies for setting this weight parameter. The first strategy involves fixing the weight to a specific value, for example, on the basis of expert knowledge or an empirical Bayes estimate. The sensitivity of this choice may then be assessed with a reverse-Bayes tipping point analysis <cit.>. The second strategy introduces a level of uncertainty by assigning a prior distribution to the mixture weight parameter. We then showed that the prior on the weight strategy is equivalent to the fixed weight strategy using the expected value of the weight's prior as fixed weight. 
However, the uncertain weight strategy also provides data analysts with a posterior distribution of the weight, which can be used for quantitatively assessing the degree of study compatibility, yet the extent to which this posterior can be updated from the prior was also shown to be limited. Importantly, both strategies yield the same results for the effect size when the fixed weight equals the expectation of the prior distribution. The only difference lies in the additional posterior distribution for the weight parameter. Scientists should choose between these two strategies based on the characteristics of their study. Fixed weights can be more straightforward and are based on prior knowledge. They are suitable in situations where there is reasonable confidence about the degree of agreement between the original and replication studies. A tipping point analysis can additionally help to assess how robust the analysis is to the choice of the weight. On the other hand, the random weight approach provides an additional posterior distribution for the weight parameter, showing the uncertainty related to this parameter. We also presented Bayesian hypothesis tests for assessing the magnitude of the effect size θ and for determining how closely the replications align with the original study. We analyzed the asymptotic behavior of the marginal posterior distribution for the weight parameter when the Bayes factor tends to zero or towards infinity. Moreover, we examined how the Bayes factor related to the effect size behaves as the replication study's standard error σ_r tends to zero. Our findings reveal that the Bayes factor contrasting ℋ_d: θ∼N(μ, τ^2) to ℋ_c: θ∼N(θ̂_o, σ^2_o), for finite τ^2 and σ^2_o, is inconsistent. However, when the original study's standard error σ_o also approaches zero, the behavior of the Bayes factor changes, leading to correct asymptotic behavior and consistency. The mixture prior approach we developed presents some similarities with two well-established methods in the replication setting – power priors <cit.> and hierarchical models <cit.>. All three approaches exhibit similar strengths in assessing differences between original and replication studies, providing valuable inferences that complement each other. Analogously to the heterogeneity variance and the power parameter, the mixture weight ω controls the degree of compatibility between the original and replication studies. Nevertheless, we think that our approach has some practical advantages. First, the mixture weight parameter ω seems to be a more straightforward and intuitive discounting measure, making this approach more accessible for analysts. Second, the inherent structure of the mixture prior provides computational advantages. Notably, the calculation of the marginal likelihood in the random weight scenario is similar to that in the fixed weight scenario, with the only difference being the replacement of the fixed weight with the expected weight over the prior. This is particularly evident when compared to the computationally prohibitive normalizing constant of the normalized power prior <cit.>. Finally, when multiple original studies are involved, our mixture prior approach may facilitate their inclusion in the analysis. Specifically, this can be achieved by using two or more informative components derived from the original studies, along with a non-informative component. 
Our method relies on the widely used meta-analytic assumption that the distribution of effect estimates can be accurately approximated by a normal distribution with known variance, making it adaptable to a broad range of effect sizes from various data models across different research fields. However, this assumption becomes too strong in the presence of small sample sizes and/or extreme effect size values at the boundary of the parameter space (e.g., very small or large probabilities). Future research could thus adapt our approach to specific data models (e.g., binomial or Student-t distributions), especially in the presence of small sample sizes. In this paper, we analyze replications both individually, by directly comparing each one with the original study, and simultaneously, by pooling them into a single replication without assuming heterogeneity among the replications. An alternative pooling approach would be to assume a hierarchical model for the replication effect sizes that incorporates potential between-replication heterogeneity with a heterogeneity variance parameter. However, it remains unclear how to specify this parameter or a prior distribution for it. Consequently, an opportunity for future research could be to explore methods to specify a fixed value for this additional parameter or to elicit a prior distribution for it. § SOFTWARE AND DATA AVAILABILITY All analyses were conducted in the R programming language version 4.2.3 <cit.>. The code and data to reproduce this manuscript are openly available at <https://github.com/RoMaD-96/MixRep>. We provide an R package for the analysis of replication studies using the mixture prior framework. The package is currently available on GitHub and can be installed by running (requiring the package available on CRAN). We plan to release the package on CRAN in the future. apalike
http://arxiv.org/abs/2406.17661v1
20240625155049
Neuro-Modeling Infused EMT Analytics
[ "Qing Shen", "Yifan Zhou", "Peng Zhang", "Yacov A. Shamash", "Xiaochuan Luo", "Bin Wang", "Huanfeng Zhao", "Roshan Sharma", "Bo Chen" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Neuro-Modeling Infused EMT Analytics Qing Shen, Graduate Student Member, IEEE, Yifan Zhou, Member, IEEE, Peng Zhang, Yacov A. Shamash,  Fellow, IEEE, Xiaochuan Luo, Senior Member, IEEE, Bin Wang, Senior Member, IEEE, Huanfeng Zhao, Member, IEEE, Roshan Sharma,  Member, IEEE, Bo Chen,  Member, IEEE This work was supported in part by the National Science Foundation under Grant No. ITE-2134840 and in part by ISO New England. This work relates to the Department of Navy award N00014-24-1-2287 issued by the Office of Naval Research. The U.S. Government has a royalty-free license throughout the world in all copyrightable material contained herein. Q. Shen, Y. Zhou, P. Zhang, Y. A. Shamash and H. Zhao are with the Department of Electrical and Computer Engineering, Stony Brook University, NY, USA (e-mails: qing.shen, yifan.zhou.1, p.zhang, yacov.shamash@stonybrook.edu, huanfengzhao@gmail.com). X. Luo and B. Wang are with ISO New England, Holyoke, MA, USA (e-mails: xluo, bwang@iso-ne.com). R. Sharma and B. Chen are with Commonwealth Edison, Chicago, IL, USA (e-mails: roshan.sharma, bo.chen@comed.com). ======================================== § ABSTRACT The paper presents a systematic approach to developing Physics-Informed neuro-Models (PIM) for the transient analysis of power grids interconnected with renewables. PIM proves to be an adequate digital twin of power components, taking full advantage of physical constraints while requiring only a small fraction of data for training. Three new contributions are presented: 1) A PINN-enabled neuro-modeling approach is devised to construct an accurate EMT model; 2) A data-physics hybrid learning approach is substantiated to demonstrate its data efficiency and adaptability at various levels of data availability; 3) A balanced-adaptive PIM exemplifies its applicability when multiple terms are predicted simultaneously while maintaining alignment with physical principles. Under various operational scenarios, tests on rotating and static electric components as well as an IEEE test system verify the efficacy and efficiency of the new learning-infused transient grid analytics. 
EMT modeling, inverter-based resources, physics-informed machine learning, self-adaptive learning, data-physics hybrid learning, induction machine § INTRODUCTION ElectroMagnetic Transient <cit.> (EMT) simulations, capable of capturing high-fidelity fast dynamics, are indispensable for the planning and operation of today's digitally controlled and distributed power grid progressively dominated by inverter-based resources (IBRs) and renewables. Accurately representing fast dynamics under possible scenarios is of critical importance for the resilient interconnection of distributed energy resources (DERs) into the associated grids <cit.>. However, modeling the power electronic switching of inverters and the associated controls requires highly complex component models and extremely small simulation time steps <cit.>. As a resolution, data-driven modeling has been investigated to address these long-standing challenges <cit.>. Data-driven machine learning, however, requires high-quality real-time data. In today's power grids, acquiring such data sets for abnormal situations is challenging, given that customers tend to preserve their data privacy and that abnormal events suitable for training purposes are unlikely to occur <cit.>. Even when datasets for these rare events can be obtained through simulations, they significantly increase computational burdens and storage expenses <cit.>. Another challenge arises when facing diversified operations; the neural network learned from data needs to be repeatedly retrained to be generalizable <cit.>. To address the limitations of data availability and compromised generalization, a new trend in scalable dynamic modeling is to exploit transformative physics-informed machine learning techniques <cit.> to circumvent the drawbacks of conventional purely data-driven modeling for IBR-interconnected grids. The key idea of this paper is to establish an architecture where the transient behaviors of a grid are captured by infusing EMT modeling with learning-based component models, which can essentially streamline the process of solving the differential equations of these grid components <cit.>. This is expected to result in more tractable EMT computations, achieving model plug-and-play while maintaining simulation accuracy. Nevertheless, two obstacles persist in the implementation of PINNs. While they are data-efficient, PINNs cannot be entirely data-free, as complex grids can exhibit significantly different behaviors with slight parameter changes <cit.>. Periodic synchronization with measurement data is essential to keep the learned twin model aligned with reality <cit.>. Therefore, hybrid learning enhanced with data is necessary. Additionally, unlike conventional data-driven training, the loss function in PINNs often involves multiple terms corresponding to sets of physical equations such as Ordinary Differential Equations (ODEs) or Differential Algebraic Equations (DAEs). Determining how to allocate weights for each physical term remains an open problem <cit.>. This paper addresses two bottlenecks of PINN, namely hybrid learning and training weight allocation, to enable learning-infused transient grid analytics. We propose a systematic approach to establishing a component-based full-order Physics-Informed neuro-Model (PIM) as a high-fidelity digital twin that reflects the states of its corresponding physical counterpart. It learns from historical data, adheres to the physical model, and makes predictions using real-time data. 
The contributions of PIM are threefold: * A PINN-enabled neuro-modeling method is devised to construct an accurate EMT model for perturbed power system components, leveraging physical knowledge. This approach effectively adapts to parameter changes and scenarios with limited or low-quality data. * A data-physics hybrid learning approach is established, which is enhanced and updated with data, to generate robust and accurate fast and slow dynamics at varying levels of data availability. * A balanced-adaptive learning strategy is devised and substantiated, which presents itself as a lightweight yet efficient method for weight allocation when predicting multiple terms simultaneously. The aforementioned contributions lay a solid foundation for learning-infused transient grid analytics, which is extensively validated in both dynamic component modeling and accurate EMT simulations of an IEEE test system. § ENHANCED PHYSICS-INFORMED NEURAL MODELING In this section, we present two approaches to enhancing PINN for EMT analytics: data-physics hybrid learning of power system components and balanced-adaptive physics-informed learning. §.§ Preliminaries of physics-informed learning This subsection briefly introduces the background knowledge of PINN and highlights its advantages over purely data-driven approaches. Consider a system of ordinary differential equations with a state vector u and a parameter vector γ: du/dt =R(u,γ),  u|_t=0=u_0 y =f(u,du/dt,γ) where R, f represent the governing physical laws. Consider a machine learning task to learn a neural network u_NN to represent the solution of (<ref>), i.e., the trajectory of u(t) under any given γ and u_0: u(t) = u_NN(u_0, γ) Conventional data-driven machine learning directly computes the loss function from the differences between the predictions of the neural network and the training data (denoted by u): L_data= mean (||u - u_NN||_2) However, such a data-driven learning architecture demands abundant accurate training data, underscoring the need for both data quality and quantity to address diverse scenarios. In contrast, PINN leverages the physical constraints in (<ref>) to enforce the alignment of the predicted outputs u_NN with the inherent physical equations, leading to the physics-informed loss function below: L_physics = mean (||du_NN/dt-R(u_NN,γ)||_2) Comparing L_data and L_physics, an obvious distinction is that PINN exploits the inherent physical laws in (<ref>) to construct the loss function. Through the minimization of this physics-informed loss, the prediction from the neural network is compelled to conform to the real states of the physical system, thus significantly reducing the dependency on extensive high-quality datasets. §.§ Data-physics hybrid learning of power system components One complication of the purely physics-informed method is that it requires many iterations to converge, especially for fast transient cases <cit.>. Interestingly, even if only a small amount of data is available, the efficiency of training is found to be substantially improved <cit.>. Physics-informed learning enhanced with data can turn a neural network into a fast learner. Besides, it is sometimes unrealistic to derive perfect first-principles equations. Therefore, a data-physics hybrid PIM is proposed to unlock this potential. 
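To make the contrast between the data-driven loss and the physics-informed loss concrete, the following PyTorch-style sketch evaluates both for a generic first-order system du/dt = R(u, γ). The network architecture, the right-hand side R, and the stand-in measurements are illustrative placeholders, not the component models developed later in the paper.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))  # u_NN(t)

def R(u, gamma=1.0):
    # Placeholder physics right-hand side, e.g. du/dt = -gamma * u
    return -gamma * u

t = torch.linspace(0.0, 1.0, 200).unsqueeze(1).requires_grad_(True)  # collocation points
u_nn = net(t)

# Physics-informed loss: ODE residual evaluated via automatic differentiation
du_dt = torch.autograd.grad(u_nn, t, grad_outputs=torch.ones_like(u_nn),
                            create_graph=True)[0]
loss_physics = (du_dt - R(u_nn)).norm(dim=1).mean()

# Data-driven loss: only usable where measurements exist
u_meas = torch.exp(-t.detach())          # stand-in for measured trajectories
loss_data = (net(t.detach()) - u_meas).norm(dim=1).mean()

print(loss_physics.item(), loss_data.item())

The physics residual is available at arbitrary collocation points, whereas the data term is restricted to measured samples, which is what makes the physics-informed formulation attractive when data are scarce.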
Hybrid learning can be regarded as a grey box with a compelling advantage: provided with only limited training data, it exhibits a generalization ability that a purely data-driven approach can only reach with significantly more data. In contrast, data-driven neural networks may suffer from overfitting or underfitting if the dataset is incomplete or insufficient <cit.>. Hybrid PIM thus offers a way to compensate for an incomplete data set by deploying physics information. Suppose a power system component is to be described as a set of differential-algebraic equations as in (<ref>). Substitute the neural network u_NN in (<ref>) into (<ref>): du_NN/dt =R(u_NN,γ),  u|_t=0=u_0 ŷ =f(u_NN,du_NN/dt,γ) Then one or a series of neural networks can be used to learn the dynamic models of this component. A specially designed framework for power system components is illustrated in Fig. <ref>, where the framework function in the dashed-line box stands for intermediate functions before entering the next neural network. For example, if the component being learned is a rotating device such as an induction machine, the framework function can be a Park transformation. More detailed modeling will be discussed in Subsection <ref>. The n networks can be learned in parallel or in series. Each construction of the loss function fully exploits the physical equations. For hybrid PIM learning, the loss is defined as: ℒ =min_δ L_hybrid= min_δ(L_physics+ η L_data) where δ denotes the parameters of the neural networks; η is the percentage of data acquisition, which depends on data availability. §.§ Balanced-adaptive physics-informed learning Once the data level η in (<ref>) is established, another challenge emerges in formulating an effective loss function within L_physics. Existing work in <cit.> proposes a loss scaling method to resolve the training difficulty, but its scaling is fixed throughout training. In <cit.>, the collocation point set is enhanced via an auxiliary neural network. In <cit.>, a sequential training strategy is introduced to remedy the violation of temporal causality during model training. Nevertheless, its loss calculation relies on the previous time steps, introducing an extra iteration per epoch and significantly increasing computational complexity. Moreover, these intricate methods have primarily addressed low-dimensional problems, whereas the dimensions involved in power systems are usually higher; both sequential causal training and auxiliary network approaches introduce extra training burdens and are prone to convergence difficulties. We propose a balanced and adaptive physics-informed loss (BA-loss) to tackle the challenge of formulating the high-dimensional loss terms in L_physics (see Fig. <ref>). Suppose the learning encompasses K physical loss terms L_1,L_2, …, L_K with λ_1,λ_2, …,λ_K as their weights. The challenge lies in the fact that the range of each loss term in L_physics is different, since each term carries a distinct physical meaning. Thus, it is complicated to identify a set of proper weights to balance the different loss terms. In conventional PINN algorithms, those weights are pre-defined and manually tuned, which is, as aforementioned, inefficient and ineffective. In the proposed balanced and adaptive loss, these weights can be defined as learnables that are automatically optimized during the learning process. 
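Before detailing the balanced-adaptive strategy, the data level η in (<ref>) can be sketched as a simple combination of the two loss terms, where measurements are only available for a subset of trajectories. The tensor shapes and the random stand-in values below are illustrative placeholders.

import torch

def hybrid_loss(physics_residuals, predictions, measurements, data_mask, eta):
    """Data-physics hybrid loss: the physics residual is evaluated everywhere,
    while the data term only uses the trajectories flagged as measured."""
    loss_physics = physics_residuals.norm(dim=-1).mean()
    if data_mask.any():
        err = predictions[data_mask] - measurements[data_mask]
        loss_data = err.norm(dim=-1).mean()
    else:
        loss_data = torch.zeros((), device=predictions.device)  # purely physics-informed case
    return loss_physics + eta * loss_data

# Illustrative shapes: 8 trajectories, 200 time steps, 3 states; 6 of 8 trajectories measured
res = torch.randn(8, 200, 3)
pred = torch.randn(8, 200, 3)
meas = torch.randn(8, 200, 3)
mask = torch.tensor([True] * 6 + [False] * 2)
print(hybrid_loss(res, pred, meas, mask, eta=1.0))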
To begin with, the effect of each physical term needs to be balanced into the same range as: L_physics =∑_k=1^K λ_k L_k/max_k(L_k) Without this step, further optimization will be infeasible in training. Then the loss is optimized in a self-supervised manner that ascends in the loss weight space and descends in the model parameter space: min_δmax_λ_1,λ_2, …,λ_Kℒ(δ ,λ_1,λ_2, …,λ_K) The learnable weights λ are clamped to be positive. The gradient updates of the balanced-adaptive (BA) PIM are as follows: δ ←δ-∇_δℒ(δ,λ_1,λ_2, …,λ_K) λ_k ←λ_k +∇_λ_kℒ(δ,λ_1,λ_2, …,λ_K),   ∀ k where ∇_λ_kℒ(δ,λ_1,λ_2, …,λ_K) =L_k/max_k(L_k) By constraint, λ_k >0. Moreover, ∇_λ_kℒ≥ 0, and it is zero if and only if the corresponding physical term L_k is zero, i.e., ∇_λ_kℒ=0 ⟺ L_k=0. The learnable weights become larger if the corresponding physical losses are larger, progressively functioning as a faithful penalty on the network for not closely fitting the physical constraints. The diagram of the BA-PIM is shown in Fig. <ref>. § LEARNING-BASED TRANSIENTS MODELING FOR REPRESENTATIVE COMPONENTS As discussed in Section <ref>, the kernel of our PIM algorithm is the multi-neural network structure defined in Fig. <ref> and the BA-enhanced physical loss defined in Fig. <ref>. In this section, we will demonstrate, given an arbitrary power system component, how to identify the best multi-neural network structure and the most efficient physical loss to learn the PIM. Specifically, we showcase PIM on two important power system components, i.e., induction machines and inverters, which represent rotating components and static components, respectively. §.§ Rotating component: induction machine Without loss of generality, a typical yet important rotating component, the induction machine (IM), serves as a good example. For IMs, the optimal multi-neural network structure in Fig. <ref> is to decompose the aforementioned data-physics hybrid learning into two lightweight neural networks, with the Park transformation as the connecting framework function between the two networks. The rationale behind this decomposition will be elaborated in Subsection <ref>. IMs constitute a significant portion of loads, distributed energy resources (DERs), and industrial power systems. Among existing IM models, the voltage-behind-reactance (VBR) model is known for its interpretability and efficiency <cit.><cit.>. Thus, this section combines PIM and VBR to develop a PIM-based Neural Induction Machine Model (NeuIM). §.§.§ Physical formulation of induction machine An induction machine can be represented as a set of differential algebraic equations. In the dq0 frame, the induction machine has two main parts <cit.>. First, for the electrical part, the voltage and flux linkage equations can be expressed as: λ̇=𝒮(λ, ω,ω_r, i^dq0_s,r, v^dq0_s, r_s,r) λ=ℱ(i^dq0_s,r, L_ls,lr, L_M) v^dq0_s=𝒥(r_s,ω,λ,i_s,λ̇) where λ is the flux linkage; the subscript s denotes variables and parameters associated with the stator circuits, and subscript r denotes those from the rotor circuits, e.g., r_r is the rotor resistance. 𝒮, ℱ, 𝒥 represent the physics laws governing the flux linkages. The current i^dq0, voltage v^dq0 and flux linkage λ are in the dq0 frame. Full details can be found in Appendix <ref>. L_ls, L_lr, L_M are the stator leakage, rotor leakage, and mutual inductance, respectively. However, modeling the induction machine in the dq0 frame makes it difficult to integrate directly with the three-phase power grid <cit.><cit.>. 
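Since the dq0-to-abc mapping is exactly what the framework function between the two NeuIM networks has to provide, a brief sketch of a standard Park transformation pair is given below. Sign and scaling conventions differ across references, so this should be read as one common choice rather than the exact convention of the paper's Appendix.

import numpy as np

def park(abc, theta):
    """abc -> dq0 using a common 2/3-scaled convention."""
    a = 2.0 * np.pi / 3.0
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta), np.cos(theta - a), np.cos(theta + a)],
        [-np.sin(theta), -np.sin(theta - a), -np.sin(theta + a)],
        [0.5, 0.5, 0.5],
    ])
    return T @ abc

def inv_park(dq0, theta):
    """dq0 -> abc, the inverse of the transformation above."""
    a = 2.0 * np.pi / 3.0
    Tinv = np.array([
        [np.cos(theta), -np.sin(theta), 1.0],
        [np.cos(theta - a), -np.sin(theta - a), 1.0],
        [np.cos(theta + a), -np.sin(theta + a), 1.0],
    ])
    return Tinv @ dq0

# Round-trip check on a balanced three-phase current snapshot
i_abc = np.array([1.0, -0.5, -0.5])
theta = 0.3
print(np.allclose(inv_park(park(i_abc, theta), theta), i_abc))  # True

Because the rotor angle θ evolves in time, this mapping must be re-evaluated at every simulation step, which is the computational burden that motivates keeping it outside the trained networks as a fixed framework function.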
A traditional approach to obtaining the phase-domain model from the dq0 model is through mathematical derivation based on (<ref>) and (<ref>)-(<ref>), but it is rather complicated, as presented in Appendix <ref>. As a solution, we propose a physics-informed, learning-based modeling approach that transforms the model from the dq0 frame to the phase domain. PIM is versatile and can be extended beyond induction machines to other components, such as neural synchronous machines and neural transmission lines. §.§.§ Data-physics hybrid PIM-based induction machine When an IM is connected to a power system, it directly interacts with the system through the three-phase stator current (denoted as i^abc_s) and the terminal voltage (denoted as v^abc_s), rather than the dq0 frame states as shown in (<ref>). Thus, the target is to identify an abc-phase model of the IM using PINN. We formulate the physics-informed neural model as: di_s/dt=𝒩(v_s,z), i_s|_t=0=i_0 where v_s, i_s are in the abc frame, i.e., v^abc_s, i^abc_s for clarity. v_s denotes the terminal voltages on the stator side; 𝒩 is a neural network that predicts the derivatives of i_s; i_0 is the initial value of i_s; and [θ,ω,ω_r] is denoted as z. As shown in Fig. <ref>, the kernel idea is to take advantage of the well-established physics equations of the IM to learn the NeuIM model, i.e., 𝒩 in (<ref>). Specifically, we divide 𝒩 into two sub-neural models 𝒢 and 𝒫 (see the yellow boxes in Fig. <ref>). The block 𝒢 learns the algebraic equations of the dq0 currents of the stator and the rotor (denoted as î^dq0): î^dq0= 𝒢(i_0,v_s,z) Here the measurements z, v_s and the initial values i_0 are the inputs of 𝒢; ∧ denotes outputs from neural networks. Then î_s (in the abc frame) is obtained via Park's inverse transformation and passed into block 𝒫, where 𝒫 obtains the final output dî_s/dt in the abc frame: dî_s/dt = 𝒫(i_0,î_s,v_s,z) For rotating components, the decomposition of 𝒩 into 𝒢 and 𝒫 is motivated by two reasons. First, performing the Park inversion inside training slows computation, because it must be recalculated at each time step using the dynamic variable θ. Second, separating the Park inversion from the training process enables a more direct and informative loss function, allowing for a clearer understanding of each neural network's behavior. Substitute the neural block 𝒢 defined in (<ref>) into (<ref>): λ̇=𝒮(λ,z,r_s,r, 𝒢) λ=ℱ(𝒢,L_ls,lr,L_M) The physics-informed loss function is derived based on the numerical integration of (<ref>). Without loss of generality, the integration of λ is performed by the modified Euler rule[Note that the method can be adapted to arbitrary integration algorithms.]: λ(t+Δ t) = λ(t) + Δ t/2· [λ̇'(t+Δ t)+λ̇(t)] where λ̇'(t+Δ t) is an estimation of the derivative of λ at time t+Δ t, which is calculated as λ̇'(t+Δ t) =𝒮(λ(t) + Δ t ·λ̇(t),z,r_s,r, 𝒢) according to (<ref>); Δ t is the time step. Correspondingly, the loss function for 𝒢 is developed as: min_δ L_𝒢 = mean (||Δλ(t)||_2) s.t. Δλ(t) =λ(t) + Δ t/2· [λ̇'(t+Δ t)+λ̇(t)] - λ(t+Δ t) The loss function in (<ref>) is constructed leveraging both the neural network-based phase-domain IM model and the physics-based dq0-domain IM model, thus compelling the NeuIM to align with the physics laws even without using training data. Similarly, the training model for 𝒫 is developed as: min_δ L_𝒫 = mean (||Δî_s(t)||_2) s.t. 
Δî_s(t) = Δ t/2· [dî_s(t+Δ t)/dt+dî_s(t)/dt]+î_s(t) - î_s(t+Δ t) Here, Δî_s represents the residual of the numerical integration of î_s, i.e., the output of the second neural block 𝒫. The whole process of NeuIM is shown in Fig. <ref>. Its advantages are: 1) adaptability to different operation scenarios and varied machine parameters, because the induction machine parameters (e.g., r_r, r_s, L_M) and boundary conditions (e.g., z) are incorporated into the learning model (see (<ref>)); 2) physically consistent results, because the physics laws embedded in the loss function naturally ensure that the learned model adheres to the governing equations; and 3) better generalization with limited data. For hybrid learning, the loss function of 𝒢 is: min_δ L_hybrid, 𝒢 = L_physics,𝒢+ η L_data s.t.   L_data = mean (||î_s(t) - i_s(t)||_2) where L_physics,𝒢 stands for the purely physical loss defined in (<ref>) and L_data is the loss calculated from the training data i_s. η is the percentage of data acquisition. Note that L_data will only be calculated for those time points and dimensions that have available measurements. A special case is that if no measurement is available, i_s becomes an empty set and L_data correspondingly becomes 0, indicating purely physics-informed training. §.§ Static component: grid-forming inverter In Subsection <ref>, although the data-physics hybrid learning is decomposed into multiple neural networks, the learning task within each network, such as in (<ref>), (<ref>), remains straightforward. In contrast, for static components, no framework function is required to split the learning. A single neural network suffices to predict the full-order model, with the trade-off that multiple terms with different physical meanings are calculated within one loss function simultaneously, making it an ideal candidate for evaluating the balanced-adaptive PIM (BA-PIM). Consequently, in this subsection, this persistent challenge, the formulation of an effective loss function, is addressed using the example of a grid-forming three-phase inverter. Different from Subsection <ref>, this formulation intrinsically encompasses multiple terms, making the allocation of weights a difficult problem. In the proposed BA-PIM, the training process is streamlined automatically. §.§.§ Physical formulation of an inverter Inverters play a fundamental role in the modern power grid by facilitating the transfer of energy from DC voltage sources to AC loads <cit.>. The topology of a grid-connected system is shown in Fig. <ref>, where the pink box at the right end represents the connected system. The purple box represents the grid-forming control block. A detailed block diagram of the control structure is presented in Fig. <ref>. The parameters used are listed in Table <ref>. The power controller consists of the power calculation, P-ω droop, and Q-V droop control in Fig. <ref>. The detailed equations are in Appendix <ref>. §.§.§ Balanced-adaptive physics-informed inverter Collecting all the equations from (<ref>) to (<ref>), the ODE for an inverter can be formulated as: dx/dt =𝒬(x,u, const) v^dq0_c =ℛ(x,u) 𝒬, ℛ represent the functions in Appendix <ref>. Define a neural network 𝒩 that is capable of predicting v̂^dq0_c with inputs of u and t. Here ∧ denotes the predicted values from 𝒩. Then the final output v̂^abc_c can be calculated via the inverse Park transformation: v̂^dq0_c, x̂ =𝒩(u,t) v̂^abc_c=InvPark(v̂^dq0_c,θ̂) where θ̂ is one of the predicted state variables within x̂. 
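As a concrete illustration of the mapping in (<ref>)-(<ref>), the following PyTorch-style sketch wraps a generic network that maps (u, t) to the predicted states x̂ and the dq0 capacitor voltage v̂^dq0_c, and then applies an inverse Park transformation using the predicted angle. The input dimension, state dimension, and the position of θ̂ within x̂ are illustrative assumptions, not the paper's actual inverter model.

import torch
import torch.nn as nn

class PIMInverter(nn.Module):
    """Sketch of the neural inverter model: (u, t) -> (x_hat, v_dq0_hat) -> v_abc_hat."""
    def __init__(self, n_inputs=4, n_states=13, theta_index=0):
        super().__init__()
        self.theta_index = theta_index  # assumed position of the predicted angle within x_hat
        self.backbone = nn.Sequential(nn.Linear(n_inputs + 1, 128), nn.Tanh(),
                                      nn.Linear(128, 64), nn.Tanh(),
                                      nn.Linear(64, n_states + 3))

    def forward(self, u, t):
        out = self.backbone(torch.cat([u, t], dim=-1))
        x_hat, v_dq0 = out[..., :-3], out[..., -3:]
        theta = x_hat[..., self.theta_index]
        a = 2.0 * torch.pi / 3.0
        angles = torch.stack([theta, theta - a, theta + a], dim=-1)
        v_abc = (torch.cos(angles) * v_dq0[..., 0:1]
                 - torch.sin(angles) * v_dq0[..., 1:2]
                 + v_dq0[..., 2:3])  # inverse Park applied row-wise for phases a, b, c
        return x_hat, v_dq0, v_abc

model = PIMInverter()
u = torch.randn(16, 4)   # illustrative grid-side inputs
t = torch.rand(16, 1)
x_hat, v_dq0, v_abc = model(u, t)
print(x_hat.shape, v_abc.shape)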
The core idea of physics-informed learning for the PIM-based inverter is to integrate the physical equations (<ref>) into the training loss function, ensuring that the predicted values in (<ref>) satisfy the physical constraints. Following the philosophy in Subsection <ref>, the hybrid loss function can be formulated into a physical term and a data-driven term: ℒ= L_phy+η L_data For the data-driven term L_data, define: L_data= ||v̂^dq0_c-v^dq0_c||_2+||x̂-x||_2 We then derive the physics-informed loss function based on the numerical integration of (<ref>). Without loss of generality, an estimation of x̂ is performed by the modified Euler rule: x̂'(t+Δ t)=Δ t/2[dx̂(t+Δ t)/dt+dx̂(t)/dt]+x̂(t) s.t.  dx̂(t+Δ t)/dt=Ax̂(t+Δ t)+Bu(t+Δ t)+const where dx̂(t+Δ t)/dt is an estimation of the derivative of x̂ at time t + Δ t. It is calculated using the predicted value x̂ from the neural network. Δ t is the time step. The loss function L_dx is developed as: L_dx=mean||Δ dx||_2 s.t.  Δ dx(t)= x̂'(t+Δ t)-x̂(t+Δ t) Similarly, we define L_v and L_Cx as: L_v=mean||Δv̂||_2, L_Cx=mean||ΔCx||_2 s.t.  Δv̂=Cx+Du-v̂, Δ Cx=C·x̂-Ĉx̂ Here, Cx represents an intermediate value predicted by the neural network. Due to the presence of large values in the original matrix C, precision diminishes after normalization. Therefore, the incorporation of L_Cx can serve to regularize the term Cx, thereby improving accuracy. The derivation of L_v takes full advantage of the physical equations in (<ref>). Finally, the physical term L_phy, comprising L_v, L_dx, and L_Cx, quantifies the discrepancy between the predicted values from the neural network and the values calculated using the physical equations. Given the number of terms included in L_phy, the balanced and adaptive physics-informed loss is implemented as discussed in Subsection <ref>. The losses of the balanced-adaptive PIM-based inverter are first balanced and then optimized as: ℒ =L_phy(λ _a,λ _b,λ _c)+L_data min_δ max_λ _a,λ _b,λ _cℒ(δ ,λ _a,λ _b,λ _c) L_phy =(λ_aL_v+λ_bL_dx+λ_cL_Cx)/max(L_v,L_dx,L_Cx) § CASE STUDY §.§ Validity of PIM-based induction machine This subsection verifies the performance of a PIM-based induction machine under various operational conditions and demonstrates its advantages over the data-driven method. §.§.§ Experiment settings The PIM-based NeuIM is deployed with TensorFlow 1.5 (Python 3.6) with 2 hidden layers. The ground truth of the IM dynamics is obtained by running the original VBR model in MATLAB, which is cross-validated against the results in <cit.>. The free acceleration and torque change cases use a 3-hp machine, while a 2500-hp machine is used for the fault case. The test system is an induction machine connected to an infinite bus. Key machine parameters are given in Table <ref>. The training sets comprise 6 sets of trajectories, encompassing free acceleration, torque change, and fault cases. Note that the goal is to maximize the utilization of limited data, hence the training sets are not extensive. The testing was carried out on 14 distinct sets, with additional parameter changes and noisy measurements. The training time is 0.087 s per iteration, with 5320 iterations in total for all training data. The computation time of implementing a converged PIM-based induction machine is shown in Table <ref>. The efficacy of the NeuIM is validated in three scenarios: free acceleration, torque changes, and faults, as specified in Table <ref>. A typical load torque change scenario is shown in Fig. <ref>. 
The mechanical torque T_m changes from 0 to 12 N·m at 2.05 s and stays at 12 N·m until 2.5 s; then T_m is reversed to -12 N·m and stays there until the end. This case is marked ±12 in Table <ref>. A typical fault case is shown in Fig. <ref>, where a three-phase fault is applied at 6.1 s and cleared at 6.2 s. The IM parameter L_M also changes in testing to show the adaptability of the NeuIM. §.§.§ Efficacy of the PIM-based induction machine under slow dynamics Fig. <ref> presents the performance of NeuIM under free acceleration and torque changes. Fig. <ref>-<ref> show the accuracy of predictions from the first neural network 𝒢; Fig. <ref>-<ref> show the final results of 𝒫. Trajectories of the predicted current i^q_s and the final output di^a_s/dt demonstrate a close match between the PIM-based induction machine's results and the real dynamics, verifying the accuracy of the PIM-based induction machine in capturing the relatively slow dynamics. Note that this training is purely physics-informed, meaning that no ground-truth values of i^dq0_s or i^abc_s are utilized. This underscores its effectiveness in capturing dynamic models solely through physical equations, even in the absence of data. §.§.§ Efficacy of the PIM-based induction machine under fast dynamics A three-phase short circuit is applied at the terminals at 6.1 s and cleared at 6.2 s. In training, the parameter L_M is 0.0346 H, while in the testing set L_M is 0.0531 H. Fig. <ref> illustrates the performance of the hybrid PIM-based induction machine under different portions of data. For instance, 75% (3:4) hybrid means that 75% of the training trajectories have the true values of the outputs of 𝒢, i.e., i^dq0_s,r; thus the loss for these trajectories is L_phy+L_data. For the other 25% of the trajectories, where the true values of i^dq0_s,r are unavailable, the loss only consists of L_phy. Since the derivatives of currents are not readily obtained from measurements, the training philosophy for 𝒫 is always purely physics-informed. From Fig. <ref>, it can be seen that the difficulty in training lies in the post-disturbance period [6.2 s, 6.4 s], especially when the proportion of data is 50% (1:2). The reason lies in the nature of the physical equations governing this problem, as outlined in (<ref>), which form a set of ordinary differential equations. In scenarios characterized by sudden and rapid changes, certain segments of the trajectories, such as [6.2 s, 6.4 s], exhibit relatively large derivatives or may not be continuous. This makes it challenging for a purely physics-informed neural network to identify optimal solutions. In such scenarios, Fig. <ref> shows that the hybrid NeuIM significantly enhances the convergence speed, particularly in the initial stages of training, while retaining the benefits of physics-informed learning. Fig. <ref> shows that the 1:1 NeuIM (η=1) yields satisfactory results. When the proportion of data vastly exceeds 1:1, the physical loss is overshadowed by the data-driven loss and the training becomes essentially data-driven. In the next subsection, we compare the hybrid NeuIM with a data-driven approach. In sum, the PIM-based NeuIM not only delivers high-fidelity solutions but also leverages the underlying physical mechanisms. It demonstrates the data efficiency of PIM at various levels of data availability, including scenarios with no data. §.§.§ Comparison with data-driven approach For slow dynamics, where the purely physics-informed NeuIM is used, in Fig. 
<ref>, 20% of Gaussian noise is added to the measurements. The purely data-driven deep neural network (green curve), which has the same structure as NeuIM, exhibits irregular oscillations because it is trained with limited data under noise. Meanwhile, NeuIM (purple curve) is more resilient and stays close to the real dynamics (red dotted curve) under noisy measurements (grey curve), with an error rate below 3%. For fast dynamics, Fig. <ref> shows that the hybrid NeuIM achieves a lower mean error rate, indicating better generalization across different fault contingencies. Table <ref> shows the final results from 𝒫. The hybrid physics-informed NeuIM clearly outperforms the purely data-driven DNN in accuracy, which shows that, for the same training set, NeuIM generalizes better to unseen cases and varied parameters. For slow transients such as torque changes, both NeuIM and hybrid NeuIM beat the data-driven approach. For the fault cases, the hybrid NeuIM outperforms both the data-driven method and the purely physics-informed NeuIM, achieving faster convergence and lower MSE with a 1:1 data proportion. The data-driven approach, even though it outperforms the pure NeuIM when data is available, cannot fully capture the transients when a machine parameter changes. In addition, as shown in Table <ref>, the overall computation time of NeuIM is significantly lower, which supports its scalability. §.§ Efficacy of PIM-based inverter Following the insight from NeuIM in Subsection <ref>, where the data level between L_phy and L_data is 1:1, i.e., ℒ=L_phy+L_data, this section further explores the allocation of weights within L_phy through tests on a PIM-based inverter. §.§.§ Experiment settings The case studies are performed on the IEEE 39-bus system. The interface between a Balanced-Adaptive (BA) PIM-based inverter and the rest of the system is illustrated in Fig. <ref>. The training set consists of 6 scenarios, including load changes of r_L and short-circuit faults, represented by the open switch in Fig. <ref>. Note that the goal is to maximize the utilization of limited data, hence the training sets are not extensive. The testing was carried out on 20 distinct cases, in which faults occur and clear at different times and loads change randomly between 0.8 and 1.2 times the original load. Training sets were simulated with the detailed full-order physical model in Python 3.7 and were cross-validated against RTDS results. Training and closed-loop testing are developed and implemented in Python 3.7 with Pytorch 1.13. The neural network follows a multi-layer perceptron structure with 2 hidden layers of [128,64] neurons. Key parameters are shown in Table <ref>. The data proportion follows the 1:1 split stated in Subsection <ref>. §.§.§ PIM-based inverter under varied operational conditions The dynamic simulation of the system requires assembling the PIM-based inverter model with the rest of the physical model, which is referred to as the closed-loop simulation in the following discussion: the PIM-based inverter is inserted back into the system in the step-by-step simulation, so the accuracy of the neural network outputs directly affects the results at the next time step. Fig. <ref> presents the closed-loop performance of the PIM-based inverter under fault and load change. Fig.
<ref>-<ref> show the high accuracy of the predictions under fault and load change, respectively. Trajectories of the intermediate current variables i^d_1 and i^d_2 in Fig. <ref> demonstrate a perfect match between the PIM-based inverter's results and the real dynamics, verifying the accuracy of the PIM-based inverter in capturing the transients. Fig. <ref> displays the time-series relative error of predictions made by the PIM-based inverter across 20 scenarios. It demonstrates that even in unforeseen circumstances, the PIM-based inverter maintains reasonable error rates throughout the time horizon. This highlights its effectiveness in preserving dynamic behaviors following contingencies and its satisfactory generalization beyond the training datasets. §.§.§ Comparative analysis This subsection compares the proposed BA-PIM method with an existing purely data-driven method and a baseline PINN to demonstrate its necessity and advantages. In Fig. <ref>, a baseline PINN and a data-driven deep neural network, which share the same network structure and training datasets as the proposed method, are tested under a fault case. All three methods achieve satisfactory results during open-loop training, indicating that the networks have converged. However, in the closed-loop test in Fig. <ref>, the baseline PINN (green dotted line) exhibits irregular oscillations and the data-driven method struggles to converge by the end of the trajectory. This discrepancy arises because, during the closed-loop test, the learned neural network interacts with the physical system at each time step. Consequently, even minor residual errors can accumulate over time. As shown in Fig. <ref> and Table <ref>, the baseline PINN has slight difficulty in open-loop training, necessitating more training iterations, and its poor performance in the closed-loop test indicates suboptimal network behavior. The data-driven method is notably sensitive to data quantity and quality, making it unstable when confronted with unforeseen data. Table <ref> confirms that BA-PIM surpasses the other methods. § CONCLUSION This paper devises a physics-informed neural modeling (PIM) approach to establish the continuous-time dynamics of power system components by leveraging component-level physics knowledge. Case studies of NeuIM show that for slow transients, PIM operates effectively without data, while for fast transients, the data-physics hybrid PIM performs better. An improved Balanced-Adaptive PIM (BA-PIM) reduces the difficulty of selecting loss weights through automatically optimized training. Integrated into a connected system, the PIM-based inverter is effective under various contingencies, surpassing purely data-driven methods and baseline PINNs. In a nutshell, PIM scales efficiently to large, complex systems and offers a flexible, interpretable, and observable framework, and it eliminates the need for extensive EMT trajectories as training samples. The success of PIM marks a first step toward systematically combining power system transient analysis and artificial intelligence. Each approach has its strengths and weaknesses, but they can complement each other. By exploiting both, learning-infused EMT simulations remain effective while staying as reliable as the original full-scale model-based EMT simulations.
§ §.§ Induction machine modeling The voltage and flux linkage equations in (<ref>) can be expressed as: v^dq0_s=r_si^dq0_s+ωλ^dq0_s+λ^dq0_s 0=r_ri^dq_r+(ω-ω_r)λ^dq_r+λ^dq_r λ^dq_s=L_lsi^dq_s+L_M(i^dq_s+i^dq_r) λ^dq_r=L_lri^dq_r+L_M(i^dq_s+i^dq_r) λ_0s=L_lsi_0s, λ^0_r=L_lri^0_r,0=r_ri^0_r+λ^0_r The flux linkage equations can be rearranged as follows: λ^q_s=L^”i^q_s+λ_q^”,  L_q^”=L_M^”λ^q_r/L_lr λ^d_s=L^”i^d_s+λ_d^”,  L_d^”=L_M^”λ^d_r/L_lr L^”=L_ls+L_M^”, L_M^”=(1/L_M+1/L_lr)^-1 Applying the inverse transformation to 𝐯^abc_s(t)=𝐫_s𝐢^abc_s(t)+d[ 𝐋”_s𝐢^abc_s(t) ]/dt+𝐯”_s(t) where 𝐯^”_𝐬(t)=[ 𝐊_s( θ) ]^-1[ v^”_q v^”_d 0 ], 𝐋^”_𝐬=[ L_ls+2/3L^”_M -L^”_M/3 -L^”_M/3 -L^”_M/3 L_ls+2/3L^”_M -L^”_M/3 -L^”_M/3 -L^”_M/3 L_ls+2/3L^”_M ] v^”_d =-ωλ^”_q+L_M^”r_r/L_lr^2[λ^”_d-λ^d_r+L_M^”i^d_s]+L_M^”/L_lr (ω -ω_r)λ^q _r v^”_q =ωλ^”_d+L^”_Mr_r/L_lr^2[λ^”_q-λ ^q_r+L^”_Mi^q_s]-L^”_M/L_lr (ω -ω_r)λ^d_r §.§.§ Discrete VBR model Based on the trapezoidal rule, (<ref>) can be discretized and rearranged as: 𝐯^abc_s(t)= ( 𝐫_s+2/Δ t𝐋”_s)𝐢^abc_s(t)+𝐯”_s(t)+ 𝐞_h(t) 𝐞_h(t)= ( 𝐫_s-2/Δ t𝐋”_s)𝐢^abc_s( t-Δ t )+ 𝐯”_s( t-Δ t )-𝐯^abc_s( t-Δ t ) Rewrite (<ref>) and apply the inverse Park transformation, the equivalent circuit for the stator voltage can be written as: 𝐢^abc_s( t ) =𝐆_eq𝐯^abc_s( t )-𝐡(t),𝐡(t)=𝐆_eq[ 𝐞_h(t)+𝐞_hs(t) ] where 𝐊(t)=[ 𝐊_s( θ (t) ) ]^-1[ 𝐊_1(t) 0 0 0 ][ 𝐊_s( θ (t) ) ], 𝐞_hs(t) = [ 𝐊_s( θ (t) ) ]^-1[ 𝐊_2(t) 0 ], 𝐆_eq=[ 𝐫_s+2/Δ t𝐋”_s+𝐊(t) ]^-1 𝐊_2(t)= A· B[ 2+b_11Δ t b_12(t-Δ t)Δ t b_21(t-Δ t)Δ t 2+b_22Δ t ][ λ_qr(t-Δ t) λ_dr(t-Δ t) ] +A· B^-1[ b_13Δ t 0 0 b_23Δ t ][ i_qs(t-Δ t) i_ds(t-Δ t) ] A= [ a_11 a_12 -a_12 a_11 ], a_11=L_M^”r_r/L_lr^2( L_M^”/L_lr-1 ) a_12= ω_r(t)L_M^”/L_lr, a_13=L_M^”2r_r/L_lr^2, a_23=a_13 B= [ 2-b_11Δ t -b_12(t)Δ t b_12(t)Δ t 2-b_11Δ t ], b_11=r_r/L_lr( L”_M/L_lr-1 ) b_12= -ω (t)_ω_r(t), b_13=r_r/L_lrL”_M, b_23=b_13 For the mechanical part: T_e=J(2/P)ω̇_r+T_m,  ω_r=θ̇_r T_m=3P/4·(λ_dsi_qs-λ_qsi_ds) where T_e is the electromagnetic torque output, J is the inertia of the rotor, T_m is the mechanical torque. θ_r is the electrical angular displacement of the rotor. P is the number of poles. §.§ Inverter modeling The power calculation, P-ω droop and Q-V droop control in Fig. <ref> are presented below: P=v^d_ti^d_2+v^q_ti^q_2,  Q=-v^d_ti^q_2+v^q_ti^d_2 ω=ω_n-k_p· P·ω_f/s+ω_f,  V=V_n-k_q· Q·ω_f/s+ω_f The voltage controller in Fig. <ref> can be represented as: v^d*_t=V , v^q*_t=0 dϕ^d/dt= v^d*_t-v^d_t , dϕ^q/dt= v^q*_t-v^q_t i^d*_1=Fi^d_2-ω_nC_fv^q_t +K_pv·(v^d*_t-v^d_t)+K_ivϕ_d i^q*_1=Fi^q_2-ω_nC_fv^d_t +K_pv·(v^q*_t-v^q_t)+K_ivϕ^q The current controller is: dγ^d/dt= i^d*_1-i^d_1 , dγ^q/dt= i^q*_1-i^q_1 v^d*_c=-ω_n· L· i^q_1 +K_pc·(i^d*_1-i^d_1)+K_icγ^d v^q*_c=ω_n· L· i^d_1 +K_pc·(i^q*_1-i^q_1)+K_icγ^q The matrices in (<ref>) are: x =[ ω; V; θ; ϕ^d; ϕ^q; γ_d; γ^q ],u=[ P; Q; v^d_t; v^q_q; i^d_2; i^q_2; i^d_1; i^q_1 ], u=[ v^d_t; v^q_t; i^d_d; i^q_2; i^d_1; i^q_1 ], const= [ ω_nω_f; V_nω_f; 0; 0; 0; 0; 0 ] A=[ -ω_f 0 0 0 0 0 0; 0 -ω_f 0 0 0 0 0; 1 0 0 0 0 0 0; 0 1 0 0 0 0 0; 0 0 0 0 0 0 0; 0 K_pv 0 K_iv 0 0 0; 0 0 0 0 K_iv 0 0 ] B=[ -k_pω_f 0 0 0 0 0 0 0; 0 -k_qω_f 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 -1 0 0 0 0 0; 0 0 0 -1 0 0 0 0; 0 0 -K_pv -ω_nC_f F 0 -1 0; 0 0 ω_nC_f -K_pv 0 F 0 -1 ] C=[ 0 K_pcK_pv 0 K_pcK_iv 0 K_ic 0; 0 0 0 0 K_pcK_iv 0 K_ic; 0 0 0 0 0 0 0 ] D= [ -K_pcK_pv -ω_nC_fK_pc K_pcF 0 -K_pc -ω_nL; ω_nC_fK_pc -K_pcK_pv 0 K_pcF ω_nL -k_PC; 0 0 0 0 0 0 ] ieeetr
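For reference, the linear state-space portion of the inverter model above, ẋ = Ax + Bu + const, can be integrated with the same trapezoidal rule used to discretize the VBR model. The sketch below generates such reference trajectories; it treats A, B, const, and the input function u(t) as given, and NumPy is used purely for illustration rather than reflecting the simulation code used in the paper.

```python
import numpy as np

def simulate_trapezoidal(A, B, const, u_of_t, x0, dt, n_steps):
    """Integrate dx/dt = A x + B u + const with the trapezoidal rule."""
    n_x = A.shape[0]
    I = np.eye(n_x)
    M_left = I - 0.5 * dt * A       # implicit side of the trapezoidal update
    M_right = I + 0.5 * dt * A      # explicit side
    x = np.array(x0, dtype=float)
    xs = [x.copy()]
    for k in range(n_steps):
        u_k, u_k1 = u_of_t(k * dt), u_of_t((k + 1) * dt)
        rhs = M_right @ x + 0.5 * dt * (B @ u_k + B @ u_k1 + 2.0 * const)
        x = np.linalg.solve(M_left, rhs)   # (I - dt/2 A) x_{k+1} = rhs
        xs.append(x.copy())
    return np.array(xs)
```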
http://arxiv.org/abs/2406.17720v1
20240625170954
Arboretum: A Large Multimodal Dataset Enabling AI for Biodiversity
[ "Chih-Hsuan Yang", "Benjamin Feuer", "Zaki Jubery", "Zi K. Deng", "Andre Nakkab", "Md Zahid Hasan", "Shivani Chiranjeevi", "Kelly Marshall", "Nirmal Baishnab", "Asheesh K Singh", "Arti Singh", "Soumik Sarkar", "Nirav Merchant", "Chinmay Hegde", "Baskar Ganapathysubramanian" ]
cs.CV
[ "cs.CV" ]
Accessing a New Population of Supermassive Black Holes with Extensions to the Event Horizon Telescope [ June 25, 2024 ===================================================================================================== § ABSTRACT We introduce Arboretum, the largest publicly accessible dataset designed to advance AI for biodiversity applications. This dataset, curated from the iNaturalist community science platform and vetted by domain experts to ensure accuracy, includes 134.6 million images, surpassing existing datasets in scale by an order of magnitude. The dataset encompasses image-language paired data for a diverse set of species from birds (Aves), spiders/ticks/mites (Arachnida), insects (Insecta), plants (Plantae), fungus/mushrooms (Fungi), snails (Mollusca), and snakes/lizards (Reptilia), making it a valuable resource for multimodal vision-language AI models for biodiversity assessment and agriculture research. Each image is annotated with scientific names, taxonomic details, and common names, enhancing the robustness of AI model training. We showcase the value of Arboretum by releasing a suite of CLIP models trained using a subset of 40 million captioned images. We introduce several new benchmarks for rigorous assessment, report accuracy for zero-shot learning, and evaluations across life stages, rare species, confounding species, and various levels of the taxonomic hierarchy. We anticipate that Arboretum will spur the development of AI models that can enable a variety of digital tools ranging from pest control strategies, crop monitoring, and worldwide biodiversity assessment and environmental conservation. These advancements are critical for ensuring food security, preserving ecosystems, and mitigating the impacts of climate change. Arboretum is publicly available, easily accessible, and ready for immediate use. Please see the https://baskargroup.github.io/Arboretum/project website for links to our data, models, and code. § INTRODUCTION AI advances are poised to play a crucial role in biodiversity conservation, ecology management, and agriculture. Already, AI tools have been shown to enable automated species identification, monitoring of ecological changes, and optimization of crop management <cit.>. However, standard AI approaches for biodiversity applications persistently face major challenges. Training datasets are labor-intensive and costly to create; they cover only a narrow set of visual concepts; standard vision models excel at single tasks, but require extensive retraining for new tasks; models often struggle with generalizing to unseen labels and new environments, limiting their effectiveness in real-world applications <cit.>. Models that perform well on benchmarks often fail in the wild <cit.>. Standard computer vision datasets (ImageNet and its successors) have significant limitations, including incorrectly labeled images, geographical and cultural biases, and overlapping or ill-defined labels, all of which impair the development of high-performant AI models <cit.>. Consequently, there is a critical need for large, diverse, accurately annotated datasets that are specific to biodiversity, ecology, and agricultural research <cit.>. In response to this need, several datasets have been introduced. Perhaps the most well-known (raw) pool of biodiversity images on the Web is iNaturalist <cit.>, from which several curated datasets have been sourced, among them being iNat2021 <cit.> with 2.7M images of over 10,000 species of plants, animals, and fungi. 
However, insects (which comprise a very large fraction of extant species) are under-represented in this dataset. IP102 <cit.>, Insecta <cit.>, and the more recent BioScan-1M <cit.>, are alternative datasets that focus on the Insecta Class. Perhaps the latest advance in such research is TreeOfLife-10M <cit.>, which is currently the state-of-the-art dataset of text-annotated biological images, comprising 10M images with approximately 450K unique taxonomic classes. In this paper, we make significant contributions to this body of work by curating and releasing Arboretum. This dataset includes 134.6 million captioned images of approximately 326.9K species. The dataset surpasses all existing datasets in scale by an order of magnitude, constituting the largest public, “AI-ready" dataset of curated biodiversity images. The dataset encompasses image-language paired data for a diverse set of species from birds (Aves), spiders/ticks/mites (Arachnida), insects (Insecta), plants (Plantae), fungus/mushrooms (Fungi), snails (Mollusca), and snakes/lizards (Reptilia). See Figure <ref> for representative examples, and the project https://baskargroup.github.io/Arboretum/website for details. Each image in Arboretum is sourced from the iNaturalist community science platform <cit.> and is annotated with the corresponding common (English) name, scientific (Latin) name, and taxonomic hierarchy. The metadata for each species has been vetted by domain experts, ensuring the accuracy of the text annotations for robust AI model training. We https://github.com/baskargroup/Arboretum/open-source the tooling pipeline used to curate Arboretum, and hope that the community will use this dataset as a valuable resource for further development of multimodal (vision-language) AI models for biodiversity and agriculture research. To showcase the potential of Arboretum, we further make two technical contributions. First, we train and release ArborCLIP, a suite of vision-language foundation models trained on a subset of approximately 40M Arboretum samples, comprising approximately 33K species. This subset was constructed with standard filtering criteria in order to reduce skewness in species counts. Our models show excellent generalization performance and can support zero-shot (or few-shot) classification with either common or scientific names of unseen taxa. We also anticipate that these models can be fruitfully fine-tuned in the future on datasets for specific biodiversity-related applications. Second, we rigorously quantify the performance of our foundation models on five existing fine-grained image classification benchmarks, as well on three newly curated test datasets. We find that ArborCLIP models comfortably achieve the state-of-the-art in certain settings, while both the original (OpenAI) CLIP model as well as BioCLIP <cit.> excel in certain other settings. We analyze these findings in further detail below, but overall we hope that our dataset can be used by the AI community as a testbed for further algorithmic and scaling research in fine-grained image recognition. The remainder of this paper is organized as follows. Section <ref> introduces the Arboretum dataset, the dataset's salient characteristics, and a comparison with previous work. Section <ref> details our curation methodology. Section <ref> introduces our newly proposed test datasets and their characteristics. Section <ref> details our new ArborCLIP models and their benchmark performance relative to previous work. 
Section <ref> concludes with a discussion of limitations and potential future directions. § THE ARBORETUM DATASET Characteristics. Arboretum comprises over 134.6M images across seven taxonomic classes —Aves, Arachnida, Insecta, Plantae, Fungi, Mollusca, and Reptilia. These taxonomic classes were chosen to represent the span of species — outside of charismatic megafauna — which critically impact biodiversity. The images in Arboretum span 326,888 species. Overall, this dataset nearly matches the state-of-the-art curated dataset (TreeOfLife-10M) in terms of species diversity, while comfortably exceeding it in terms of scale by a factor of nearly 13.5×. Figure <ref> shows representative image samples, and Figure <ref> displays the distribution of samples according to the seven categories (and the most frequently occurring species). Figure <ref> displays phyla, taxonomic classes, orders, and families represented in the dataset. Each image sample in Arboretum is annotated with a rich amount of curated metadata. This metadata design facilitates easy filtering of species by image count and taxonomic information. The metadata integrates common names, scientific names, and detailed taxonomic hierarchies. For the full list of fields, see Table <ref>. Along with the dataset we also release our data curation tooling pipeline, that enables users to easily access and manipulate the dataset. The pipeline allows researchers to select specific categories, visualize data distributions, and manage class imbalance effectively based on their needs. It facilitates the downloading of specific images by their URLs and provides image-text pairs and user-defined chunks to support various AI applications. This pipeline enables users to define different subsets of Arboretum in an easy manner, overall making the dataset AI-ready and easy to use, and reducing the barriers to follow-up research in AI for biodiversity. Dual-language text descriptions. We adopt both common and scientific names, since Latin is a low-resource language and current vision-language backbones do not perform well on scientific names alone in a zero-shot manner. We found that a well-structured text description that integrate common names, scientific names, and detailed taxonomic hierarchies facilitates the learning of relationships between Latin and English terms, thereby improving the models' applicability in scientific contexts <cit.>. Moreover, incorporating the taxonomic hierarchy enables models to more effectively associate visual data with taxonomic terminology <cit.>. This matches the guidelines suggested by BioCLIP <cit.> to enhance model performance and generalization. Privacy Measures: The images of Arboretum were sourced from the iNaturalist Open Dataset, whose metadata included Personally Identifiable Information (PII). This included information about observers, such as their usernames and sometimes their real names if they have chosen to share that information publicly. We removed all such fields to ensure that no PII is present in the metadata associated with Arboretum samples, ensuring the privacy of all contributors. License: During curation, we took care to include only images from iNaturalist Open Data, which are all licensed under either the , or , or licenses. This ensures that all our images are available for public research purposes. Offensive Content: Some of our URLs may point to images that users could find disturbing or harmful, such as photos of dead or dismembered animals. 
We retained these types of images since they sometimes can provide valuable scientific data about wildlife, including information on predation events, roadkill, and other occurrences relevant to conservation and biodiversity studies. Although iNaturalist relies on user contributions and community moderation to maintain the quality and appropriateness of the data, we acknowledge that the vast and diverse nature of the data means that some offensive or inappropriate content might be present. Our closest comparisons are with BioScan-1M (which appeared in NeurIPS 2023 Datasets and Benchmarks) and TreeOfLife-10M (which will appear in CVPR 2024). BioScan-1M focuses solely on the Insecta Class and provides scientific names, taxonomic ranks, as well as DNA barcodes. The TreeOfLife-10M dataset comprises 10.4 million images, integrating data from iNat2021 <cit.>, BioScan-1M, and a fresh set of image samples sourced from the Encyclopedia of Life (EOL). It also supports dual-language labels and detailed taxonomic hierarchies, and was used to train the BioCLIP vision-language model. See Table <ref> for essential differences. § DATA COLLECTION AND CURATION METHODOLOGY Challenges with iNaturalist Open Data. All of Arboretum is sourced from the iNaturalist Open Data community science platform, which (in all) comprises over 190M biodiversity-relevant observations shared by users. However, there are still significant gaps in usability for AI research. The photos and metadata, although easily downloadable, are provided in four separate metadata sheets that are not ready to use. Taxa information is encoded as numerical IDs, requiring additional API calls and non-trivial lookups to convert these into common or scientific names. The multiple metadata sheets structure is fragmented across four separate files—photos, taxa, observations, and observers—adding complexity to data integration. Managing data balance and filtering out species with too few images can lead to biases towards common (charismatic) species and an imbalanced training process. Curation of Arboretum. The iNaturalist Open Dataset comprises a collection of 250.0M images stored on an AWS S3 bucket as of May 27, 2024, with associated metadata in the form of four separate CSV files (indexed as , , , and ). Details of each of these files are provided in Section <ref> in the Appendix. While these files contain a plethora of interesitng information, they are designed for rapid lookup, and therefore not “AI-ready". To resolve this, we curate the metadata into a streamlined format for easy usage by AI practitioners. We first use the corresponding CSV files to populate an SQL database with each CSV file as its own SQL table. We create a new aggregate SQL table by joining , , and on its relational columns, and discarding columns not relevant to us. We also construct a new column in the aggregated table and populate it with the Amazon S3 URL where the image file is hosted. We then query this SQL table and extract rows corresponding to the following seven Arboretum categories: Aves, Arachnida, Insecta, Plantae, Fungi, Mollusca, and Reptilia.[As mentioned above, we choose these 7 classes since they represent non-megafauna that criticially impact biodiversity. Typically, megafauna species are well suited to standard image recognition models, but there remains a pressing need for similar efforts for these particular classes.] 
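A minimal sketch of the aggregation and filtering steps just described is given below. The column names follow the metadata fields described later in the appendix (photo_id, observation_uuid, quality_grade, taxon_id, name, ancestry); the S3 URL template and the taxon IDs used to filter the seven classes are placeholders/assumptions rather than the exact values used to build Arboretum.

```python
import sqlite3
import pandas as pd

# Illustrative template only: the real bucket layout and file extensions may differ.
S3_TEMPLATE = "https://inaturalist-open-data.s3.amazonaws.com/photos/{photo_id}/medium.jpg"

con = sqlite3.connect("inat_open_data.db")
for name in ("photos", "observations", "taxa"):
    # the metadata files are tab-separated
    pd.read_csv(f"{name}.csv", sep="\t").to_sql(name, con, if_exists="replace", index=False)

query = """
SELECT p.photo_id, o.observation_uuid, o.quality_grade,
       t.taxon_id, t.name AS scientific_name, t.ancestry
FROM photos p
JOIN observations o ON p.observation_uuid = o.observation_uuid
JOIN taxa t ON o.taxon_id = t.taxon_id
"""
agg = pd.read_sql_query(query, con)

# Keep rows whose ancestry string contains one of the seven Arboretum categories
# (taxon IDs below are placeholders, not the actual iNaturalist IDs).
ARBORETUM_CLASS_IDS = {"Aves": 111, "Insecta": 222, "Plantae": 333}   # ...and so on
keep = agg["ancestry"].astype(str).apply(
    lambda a: any(str(cid) in a.split("/") for cid in ARBORETUM_CLASS_IDS.values()))
agg = agg[keep].copy()

agg["photo_url"] = agg["photo_id"].map(lambda pid: S3_TEMPLATE.format(photo_id=pid))
agg.to_csv("arboretum_metadata.csv", index=False)
```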
The iNaturalist Open Data metadata files do not contain common name information; to reconstruct this information, we cross-match species names from the iNaturalist Taxonomy DarwinCore Archive, which is compiled monthly. Once the common name is reconstructed we append this to the SQL table. Finally, we export this table as a CSV file, which we release for public use on https://huggingface.co/datasets/ChihHsuan-Yang/ArboretumHuggingFace. Data filtering and preprocessing. As described above, Arboretum comprises well-processed metadata with full taxa information and URLs pointing to image files. The metadata can be used to filter specific categories, visualize data distribution, and manage imbalance effectively. We provide a collection of software tools that enable users to easily download, access, and manipulate the dataset. This makes it straightforward for researchers to explore different data subsets. The citizen science aspect of iNaturalist data leads to a wide variation in the number of observations per species, with some species having only a few records while others have thousands. To mitigate this, our tools can be used to apply user-defined filtering criteria to exclude species with fewer than a specified number of images. Additionally, we imposed an upper limit on the number of images per species to mitigate overfitting. To address imbalances during training (described further below in our experiments), we download the data and organize into chunked tar files using a semi-global shuffling strategy. Initially, each tar file is shuffled and divided into smaller groups. These groups were then randomly merged to form larger batches, ensuring a balanced distribution of species within each batch. This approach significantly improves dataset integrity by preventing any single species from spanning full batches. § MODELS AND BENCHMARKS We now showcase demonstrate the utility of the Arboretum dataset by creating and benchmarking ArborCLIP, a new suite of vision-language foundation models for biodiversity. §.§ Arboretum-40M To start, we first construct Arboretum-40M, a subset comprising approximately 40M samples and 33K species, which we constructed by applying the data filtering tools described above. The data included in this subset include all samples in the seven Arboretum categories posted on iNaturalist prior to January 27, 2024. We applied filtering criteria to exclude species with fewer than 30 images and imposed a maximum limit of 50,000 samples per species. We conducted semi-global shuffling and divided the data into mini-batches of approximately 50,000 samples each. From these mini-batches, 95% were randomly selected for training and validation, while the remaining 5% were reserved for testing. Detailed information can be found in Table <ref>. §.§ New Benchmarks From Arboretum, we create three new benchmark datasets for fine-grained image classification. In addition, we report results on several established benchmarks in the literature; see Table <ref>. Arboretum-Balanced. To enforce a balanced species distribution across the seven categories, we curate Arboretum-Balanced. Each category includes up to 500 species, with 50 images per species. Table <ref> in the Appendix shows each category's exact number of species. Arboretum-Unseen. To provide a robust benchmark for evaluating the generalization capability of models on unseen species, we curate Arboretum-Unseen. 
The test dataset was constructed by identifying species with fewer than 30 instances in Arboretum, ensuring that the dataset contains species that were unseen by ArborCLIP. Each species contained 10 images. Arboretum-LifeStages. To assess the model's ability to recognize species across various developmental stages, we curate Arboretum-LifeStages (see Figure <ref>). This dataset has 20 labels in total and focuses on insects, since these species often exhibits significant visual differences across their lifespan. Arboretum-LifeStages contains five insect species and utilized the observation export feature on the iNaturalist platform to collect data from 2/1/2024 to 5/20/2024 to ensure no overlap with the training dataset. For each species, life stage filters (egg, larva, pupa, or adult) were applied. §.§ ArborCLIP: New vision-language foundation models for biodiversity We use Arboretum-40M to train new CLIP-style foundation models, and then evaluate them on zero-shot image classification tasks. Following the implementation of <cit.>, we utilize a ViT-B/16 architecture initialized from the OpenAI CLIP weights <cit.>, and train for 40 epochs. We also train a ViT-L/14 model from the MetaCLIP <cit.> checkpoint for 12 epochs, and a ViT-B/16 from the BioCLIP checkpoint for 8 epochs. All training hyperparameters are included in the Appendix (Section <ref>). We compare with OpenAI's ViT-B/16 CLIP model, the BioCLIP ViT-B/16 checkpoint, and MetaCLIP-CC ViT-L/14. We publicly release all code needed to reproduce our results https://github.com/baskargroup/Arboretum/here. § EXPERIMENTAL RESULTS Metrics. For each of the benchmark datasets, we report top-1 zero-shot accuracy. For datasets where taxonomic information is available, we report accuracy over scientific names. For datasets where no taxonomic information is available, we utilize category names provided by the benchmark's authors. For our Arboretum-Life-Stages benchmark, we consider a class label to be a unique (species | life-stage) tuple, resulting in a 20-class benchmark. Finally, we report an aggregate metric which is the weighted average over unique class labels across all benchmarks in our suite. Overview of results. In Table <ref>, we report the results of our core benchmark suite. At a high level, we observe that ArborCLIP variants achieve the best accuracy averaged over benchmarks. In particular, they perform extremely well on Arboretum-Balanced (a remarkable 91.1 top-1 accuracy over 2250+ class labels). ArborCLIP also does very well on the Fungi dataset (even though the Fungi class is not central to Arboretum-40M), and the DeepWeeds dataset. Therefore, ArborCLIP exhibits strong generalization capabilities across diverse datasets. We also observe that BioCLIP performs very well on Arboretum-Unseen and BioCLIP-Rare. The reasons might be that BioCLIP has seen approximately 450K species, and there might be nontrivial overlap with the species set in Arboretum-Unseen. On the other hand, it could be that ArborCLIP suffers from forgetting issues while training on Arboretum-40M. For BioCLIP-Rare, the dataset is a subset from EOL which BioCLIP did not see before, but TreeofLife contains the majority of the EOL dataset. Limitations. We also evaluated all models on the challenging Confounding-species benchmark introduced in <cit.>, but find that all models perform at or below random chance, and do not report results here; this could be an interesting avenue for follow-up work. 
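The top-1 zero-shot protocol used in these evaluations follows the standard CLIP recipe, sketched below with the open_clip interface; the model tag, prompt template, and dataloader are illustrative and are not the exact evaluation code released with ArborCLIP.

```python
import torch
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-16", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model = model.to(device).eval()

def zero_shot_accuracy(dataloader, class_names):
    """Top-1 zero-shot accuracy over a list of class names (e.g., scientific names)."""
    prompts = [f"a photo of {name}." for name in class_names]   # illustrative template
    with torch.no_grad():
        text_feats = model.encode_text(tokenizer(prompts).to(device))
        text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
        correct = total = 0
        for images, labels in dataloader:       # images assumed already `preprocess`-ed
            img_feats = model.encode_image(images.to(device))
            img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
            preds = (img_feats @ text_feats.T).argmax(dim=-1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```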
In Table <ref> in the Appendix, we report model performance at different levels of the taxonomic hierarchy. Generally, we find that models trained on web-scraped data perform better with common names, whereas models trained on specialist datasets perform better when using scientific names. Additionally, models trained on web-scraped data excel at classifying at the highest taxonomic level (kingdom), while models begin to benefit from specialist datasets like Arboretum-40M and Tree-of-Life-10M at the lower taxonomic levels (order and species). From a practical standpoint, this is not problematic: ArborCLIP is highly accurate at the species level, and higher-level taxa can be deterministically derived from lower ones. Addressing these limitations will further enhance the applicability of models like ArborCLIP in real-world biodiversity monitoring tasks. § CONCLUDING DISCUSSION We introduce Arboretum, the largest publicly accessible dataset designed to advance AI for biodiversity applications. This dataset, curated from the iNaturalist community science platform, includes 134.6 million images, surpassing existing datasets in scale by an order of magnitude. We anticipate that Arboretum will enable the development of AI models that can enable various digital tools ranging from pest control strategies, crop monitoring, and worldwide biodiversity assessment and environmental conservation. We also believe that Arboretum can be used as a unique testbed for measuring progress on fine-grained image recognition. The success of ArborCLIP on Arboretum-Unseen underscores the importance of scaling up per-category sample size, or vertical scaling <cit.>, in achieving high accuracy on long-tailed extreme-imbalance classification. However, BioCLIP continues to exhibit superior performance on several datasets, and we believe that this is because TreeofLife-10M contains an order-of-magnitude more classes (species) than Arboretum-40M. We invite the AI community to create new subsets of Arboretum with varying degrees of balance and species diversity, and use our tooling to measure model performance against current benchmarks. § ACKNOWLEDGEMENTS This work was supported by the AI Research Institutes program supported by the NSF and USDA-NIFA under AI Institute: for Resilient Agriculture, Award No. 2021-67021-35329. This was also partly supported by the NSF under CPS Frontier grant CNS-1954556. CH, BF, AN, and KM gratefully acknowledge the support of NYU IT High Performance Computing resources, services, and staff expertise. § APPENDIX §.§ Background on CLIP and zero-shot classification Unlike traditional vision models, CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) examples, leveraging natural language supervision to enhance generalization <cit.>. CLIP's approach allows it to learn from a wide variety of images and their associated textual descriptions, making it more flexible and general compared to standard vision models. This flexibility is crucial for in various domains, including biodiversity monitoring and agriculture. For instance, CLIP models analyze digital plant specimen images, aiding in pre-processing and filtering for further analysis for agriculture purposes <cit.>. As for biodiversity, WildCLIP and KI-CLIP facilitate wildlife observation and monitoring with high accuracy and effectiveness in data-sparse settings <cit.>. 
These examples underscore the importance of developing and utilizing comprehensive datasets to fully leverage the capabilities of CLIP models in advancing biodiversity and agricultural research. §.§ The value of taxonomic information Taxonomic classification, the hierarchical arrangement of organisms into categories based on shared characteristics, is foundational in biological sciences. Taxonomy underpins various scientific, ecological, and agricultural applications. It allows for precise identification and classification of species, which is fundamental for understanding biodiversity and monitoring ecosystems. For instance, accurate species identification can aid in tracking invasive species, as noted in studies such as <cit.>. In agriculture, detailed taxonomic information helps in identifying pests and beneficial species, thereby improving pest control strategies and crop management; supports ecological research by providing insights into species interactions, distribution patterns, and evolutionary relationships <cit.>; and is essential for policy-making and conservation planning <cit.>. §.§ Scientific versus common names Although we identify the importance and need to include taxonomic information in the dataset for biodiversity, one potential challenge is the fact that this information is mostly in Latin for which text embedding models often exhibit suboptimal performance due to its status as a low-resource language <cit.>. Nonetheless, Latin remains indispensable as it is the standard for representing scientific names and taxonomic classifications. We therefore integrate common names, scientific names, and detailed taxonomic hierarchies. We believe that such an “all-encompassing” approach facilitates the learning of relationships between Latin and English terms, thereby improving the models' applicability in scientific contexts <cit.>. Furthermore, incorporating taxonomic data into the training process significantly enhances the multimodal capabilities of the models, enabling them to associate visual data with taxonomic terminology <cit.>. §.§ iNaturalist, iNaturalist Open Data iNaturalist is an online social network for sharing biodiversity information and learning about nature. It serves as a crowdsourced species identification system and organism occurrence recording tool. Users from around the world upload images, making the continuously updated dataset valuable for AI applications in biodiversity and research. Each photo includes detailed metadata: copyright status, location, uploader, time, and taxonomic classification. This diversity in image sources makes iNaturalist an excellent dataset for training AI models intended for real-world applications <cit.>. Despite its vast and diverse data, iNaturalist is not directly optimized for AI researchers: arranging this data for use in AI models like CLIP is not straightforward. Each photo has its own page on the iNaturalist website, making it difficult to download images along with all the necessary information in a streamlined manner. The iNaturalist Open Dataset aims to address some of these challenges. It is one of the world’s largest public datasets of photos of living organisms, structured as a "bucket" of images stored using Amazon Web Service's Simple Storage Service (S3). The dataset includes multiple resized versions of each photo, allowing users to download the size most useful to their research. Additionally, the dataset provides four tab-separated CSV files representing observations, observers, photos, and . 
These files are generated monthly, capturing a snapshot of the continually changing iNaturalist data. The images in the iNaturalist Open Dataset are licensed under either CC0, CC-BY, or CC-BY-NC and are open for public research. Photos with a CC0 license can be attributed as "[observer name or login], no rights reserved (CC0)". Photos with other Creative Commons licenses can be attributed as "© [observer name or login], some rights reserved ([license abbreviation])". §.§ iNaturalist Details Each image in the iNaturalist Open Dataset can be associated with its appropriate metadata through a group of four metadata CSV files, representing photos, observations, taxa, and observers. The photos metadata file contain nine distinct columns of metadata information of each photo. Of these columns, only photo_id and observation_uuid are relevant for us. The value of photo_id is a identifier number used to access individual photos, the photo’s iNaturalist page can be found by constructing a URL in this format: https://www.inaturalist.org/photos/[photo_id]. The value of observation_uuid indicates which observation the photo is associated with, it is used to map the photos metadata to the observations metadata. An observation represents one user submission of a species encounter to the iNaturalist website. One observation can have multiple photos of the same species but never multiple species. The observation metedata file contains eight distinct columns of metadata information on each observation. The columns relevant to us are observation_uuid, quality grade, and taxon_id. Each observation is given a unique number identifier indicated by its observation_uuid. iNaturalist has its own system to determining the quality of an observation and its associated photos, quality_grade represents this and can range from "Casual", "Research Grade", or "Needs ID". The value taxon_id indicates the species is represented in the observation, it is used to map the observations metadata to the taxa metadata. The taxa metadata file contains information about each specific taxon in iNaturalist, it has has six distinct metadata columns. The columns relevant to us are taxon_id, name, ancestry, and active. Each specific taxon in iNaturalist has a unique identifier number associated with it, this is its taxon_id. This taxon_id will map to the scientific name of the taxon which is represented in the name metadata column. Each taxon also has associated with it a taxonomic ancestry, this is represented as a string of taxon_ids concatenated together with "\" like so "48460/1/47115/47584/1051154". The active column indicated whether the taxon is currently in use in iNaturalist. The observer metadata file comtains information about each user within the iNaturalist site. For the purpose of machine learning research none of its three metadata columns are relevant. While the iNaturalist Open Dataset metadata files provide a plethora of interesting information, its structure makes it inherently cumbersome to use for research. To solve this, we aggregate and process the iNaturalist metadata into a concise and streamlined format for easy query and usage. First, the respective CSV files are used to populate a SQL database with each CSV file as its own SQL table. A new aggregate SQL table is created that joins the photos, observations, and taxa tables on its relational columns. Only the metadata columns we deemed relevant are kept and the extraneous non-useful metadata columns are discarded. 
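As a small illustration of the attribution formats quoted above, a helper of the following form could generate the required credit lines; the observer login and license value shown are hypothetical.

```python
def attribution(observer, license_code):
    """Credit line for an iNaturalist Open Data photo, following the formats quoted above."""
    if license_code.strip().upper() == "CC0":
        return f"{observer}, no rights reserved (CC0)"
    return f"© {observer}, some rights reserved ({license_code})"

print(attribution("jane_doe", "CC-BY-NC"))   # -> © jane_doe, some rights reserved (CC-BY-NC)
```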
One of the difficulties working with the base iNaturalist metadata files is that it does not contain the image URL, information that is critical in image downloads. We include a new column in the aggregated metadata table that explicitly links to the Amazon S3 URL in which the image is hosted. The Arboretum metadata file used for model training contains the metadata columns phylum, class, order, family, genus, species, scientific_name, common_name for the seven Arboretum categories Aves, Arachnida, Insecta, Plantae, Fungi, Mollusca, and Reptilia. To ensure that only images and metadata from the seven Arboretum categories appear in our final dataset we use the taxa table to find the taxon in our categories then use it in a SQL query on the ancestry column of our aggregated metadata table. The taxonomic rank columns are also found utilizing the ancestry metadata column. A difficulty in working with the ancestry metadata is present in that there is not a clear indication of what taxonomic rank a taxon id represents the ancestry string. This problem is exacerbated due to the presence of taxonomic ranks and dsub ranks whose presence is variable across different species. As such, a custom function is applied to each row to dynamically find the rank of each taxon id in the ancestry and then appropriately populate the taxon id to a metadata column of that rank. This process results in all taxonomies rank represented as metadata columns; only phyllum, class, order, family, genus and species are kept in the Arboretum metadata file. The scientific name of a species is found using the name metadata column of our aggregated metadata table. The common name of a species is also useful metadata information. Unfortunately, the iNaturalist Open Data metadata files do not contain the common name information of a species. To address this, we curate a lookup table of the common names in our dataset. This is obtained from the iNaturalist Taxonomy DarwinCore Archive, Having obtained the common names for each species, we append it to the Arboretum-specific metadata. §.§ Composition of Arboretum-40M See Figure <ref> and Table <ref>. § ARBORCLIP TRAINING DETAILS We use Arboretum-40M to train new CLIP-style foundation models, and then evaluate them on zero-shot image classification tasks. Following the implementation of <cit.>, we utilize a ViT-B/16 architecture initialized from the OpenAI pretrained weights for our main model, and train for 40 epochs. In addition, we also train a ViT-L/14 model from the MetaCLIP <cit.> checkpoint for 12 epochs, and a ViT-B/16 from the BioCLIP checkpoint for 8 epochs. We select the AdamW optimizer from <cit.> along with a cosine learning rate scheduler, as this has previously been shown to perform well for CLIP pretraining <cit.>. We conduct twenty rounds of hyperparameter optimization using Ray Tune <cit.> to determine the optimal learning rate, β_1, β_2 and weight decay settings. We train our models for a combined 10 days on 8xH100 nodes in bfloat16 precision <cit.> with gradient checkpointing, computing loss with local features, and utilizing static graph optimization for DDP. § ADDITIONAL ARBORCLIP RESULTS In Table <ref>, we report model performance at different levels of the taxonomic hierarchy. Generally, we find that models trained on web-scraped data perform better with common names, whereas models trained on specialist datasets perform better when using scientific names. 
Additionally, models trained on web-scraped data excel at classifying at the highest taxonomic level (kingdom), while models begin to benefit from specialist datasets like Arboretum-40M and Tree-of-Life-10M at the lower taxonomic levels (order and species). However, ArborCLIP shows a performance decline at taxonomic levels below the species level. This is likely because our training metadata structure allows for classifications solely by referring to species information. From a practical standpoint, this is not problematic for the species in our test set since ArborCLIP is highly accurate at the species level, and higher-level taxa can be deterministically derived from the lower ones. Furthermore, the OpenCLIP and MetaCLIP baselines outperform ArborCLIP on the life stages benchmark. This highlights the importance of retaining the general linguistic capabilities of the pretrained CLIP models for hybrid tasks.
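One way to combine the common name, scientific name, and taxonomic hierarchy into a single training caption, as discussed above, is sketched below. The template and the example row are illustrative, and we assume the rank columns have already been mapped from taxon IDs to rank names; the exact caption format used for Arboretum-40M may differ.

```python
TAXON_RANKS = ["phylum", "class", "order", "family", "genus", "species"]

def build_caption(row):
    """Combine taxonomy, scientific name, and common name into one caption string."""
    hierarchy = " ".join(str(row[r]) for r in TAXON_RANKS if row.get(r))
    caption = f"a photo of {hierarchy}, scientific name {row['scientific_name']}"
    if row.get("common_name"):
        caption += f", commonly known as {row['common_name']}"
    return caption

# Example with a hypothetical metadata row
row = {"phylum": "Chordata", "class": "Aves", "order": "Passeriformes",
       "family": "Corvidae", "genus": "Cyanocitta", "species": "cristata",
       "scientific_name": "Cyanocitta cristata", "common_name": "Blue Jay"}
print(build_caption(row))
```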
http://arxiv.org/abs/2406.17773v1
20240625175847
Spectrum and low-energy gap in triangular quantum spin liquid NaYbSe$_2$
[ "A. O. Scheie", "Minseong Lee", "Kevin Wang", "P. Laurell", "E. S. Choi", "D. Pajerowski", "Qingming Zhang", "Jie Ma", "H. D. Zhou", "Sangyun Lee", "S. M. Thomas", "M. O. Ajeesh", "P. F. S. Rosa", "Ao Chen", "Vivien S. Zapf", "M. Heyl", "C. D. Batista", "E. Dagotto", "J. E. Moore", "D. Alan Tennant" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
scheie@lanl.gov Los Alamos National Laboratory, Los Alamos, NM 87545, USA ml10k@lanl.gov National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Department of Physics, University of California, Berkeley, CA 94720, USA Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, USA National High Magnetic Field Laboratory, Florida State University, Tallahassee, FL 32310, USA Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA School of Physical Science and Technology, Lanzhou University, Institute of Physics, Chinese Academy of Sciences, Lanzhou 730000, China Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, USA National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Los Alamos National Laboratory, Los Alamos, NM 87545, USA Los Alamos National Laboratory, Los Alamos, NM 87545, USA Los Alamos National Laboratory, Los Alamos, NM 87545, USA Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg, D-86135 Augsburg, Germany National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg, D-86135 Augsburg, Germany Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, USA Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, USA Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA jemoore@berkeley.edu Department of Physics, University of California, Berkeley, CA 94720, USA dtennant@utk.edu Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, USA Department of Materials Science and Engineering, University of Tennessee, Knoxville, TN 37996, USA § ABSTRACT We report neutron scattering, pressure-dependent AC calorimetry, and AC magnetic susceptibility measurements of triangular lattice NaYbSe_2. We observe a continuum of scattering, which is reproduced by matrix product simulations, and no phase transition is detected in any bulk measurements. Comparison to heat capacity simulations suggest the material is within the Heisenberg spin liquid phase. AC Susceptibility shows a significant 23 mK downturn, indicating a gap in the magnetic spectrum. The combination of a gap with no detectable magnetic order, comparison to theoretical models, and comparison to other AYbSe_2 compounds all strongly indicate NaYbSe_2 is within the quantum spin liquid phase. The gap also allows us to rule out a gapless Dirac spin liquid, with a gapped ℤ_2 liquid the most natural explanation. Spectrum and low-energy gap in triangular quantum spin liquid NaYbSe_2 D. Alan Tennant July 1, 2024 ====================================================================== A quantum spin liquid (QSL) is a state of matter first predicted by P. W. Anderson in 1973, wherein spins arranged in a lattice exhibit a massively entangled and fluctuating ground state <cit.>. One defining characteristic of QSLs is their fractional excitations, which interact with each other through emergent gauge fields <cit.>. 
The potential for topological protection from decoherence makes QSLs appealing platforms for quantum technologies. However, despite decades of searching and extensive theoretical work, no unambiguous examples of a quantum spin liquid material have been found. Anderson's original prediction for a QSL state was the two-dimensional triangular lattice antiferromagnet. With nearest-neighbor exchange only, this system orders magnetically, but a small antiferromagnetic second-nearest-neighbor exchange J_2 theoretically stabilizes a QSL phase <cit.>. Though the existence of this phase is well-accepted theoretically (although not experimentally until now), it is not clear what kind of QSL such a state would be. Proposals include a gapless 𝕌_1 Dirac QSL <cit.>, a valence bond crystal <cit.>, a gapped ℤ_2 QSL <cit.>, or a chiral spin liquid <cit.>. Because numerical simulations are limited by finite size, theoretical results are ambiguous <cit.>. The best (and perhaps only) way to resolve this question would be to find a real material which harbors the triangular lattice QSL ground state. Inelastic neutron scattering studies of triangular antiferromagnets with nearest-neighbor exchange have revealed anomalous continuum scattering that cannot be explained by semiclassical theories <cit.>. The measured single-magnon dispersion was accurately reproduced using a Schwinger Boson approach, where magnons are obtained as two-spinon bound states <cit.>. This suggests that these ordered magnets are in close proximity to a gapped ℤ_2 QSL as the deconfined Schwinger Boson phase. But an unambiguous measurement of a material in the deconfined QSL phase has not been reported. A very promising class of materials is the Yb delafossites AYbSe_2 where A is an alkali metal <cit.>. These form ideal triangular lattices of magnetic Yb^3+, and appear to approximate the Heisenberg J_2/J_1 model <cit.>, see Fig. <ref>. Of these, CsYbSe_2 and KYbSe_2 have been observed to order magnetically at zero field <cit.>. However, following a trend in the periodic table that a smaller A-site element enhances J_2 and destabilizes order <cit.>, no long-range magnetic order has been observed in NaYbSe_2 <cit.>, which makes it a prime candidate for a QSL ground state. Importantly, the tunability of these compounds means that the QSL phase can be approached systematically from magnetic order (Fig. <ref>). This allows for greater confidence and rigor than studying a single compound in isolation would. Previous NaYbSe_2 studies reported a diffuse neutron spectrum that was interpreted in terms of spinon Fermi surface excitations from a QSL <cit.>, but because of 3% Na site disorder on those samples, it is not clear whether the magnetic order and coherent excitations were destroyed by small amounts of disorder (as in the ill-fated Yb^3+ QSL candidates YbMgGaO_4 <cit.> and Yb_2Ti_2O_7 <cit.>). To further clarify NaYbSe_2, we measured the inelastic neutron spectra, AC calorimetry, and AC susceptibility with high quality samples. We observe coherent excitations, lack of magnetic order, and evidence in bulk susceptibility of a 2.1 μeV gap at low temperature. This is strong evidence for a QSL ground state in NaYbSe_2 and a gapped QSL on the triangular lattice. The neutron spectra at 100 mK, shown in Fig. <ref>, show a highly dispersive continuum of excitations with a well-defined lower bound, similar to KYbSe_2 <cit.> (see supplemental materials for experimental details). 
This is qualitatively different from the spectra measured by Dai et al <cit.> on the 3% Yb/Na site-mixed sample which in contrast showed smeared our continua in k-space and diffuse spectra extending to low energies in many regions of reciprocal space. (Later in the text, we will explain why we believe our samples are free from mixing disorder.) Here, the only region of reciprocal space which has appreciable intensity down to low energies is (1/3,1/3,0), corresponding to the 120^∘ magnetic order seen in sister compounds KYbSe_2 <cit.> and CsYbSe_2 <cit.>. Down to 50 μ eV (the limit before the incoherent scattering on the elastic line obscures the scattering energy for the incident energy of E_i=1 meV), no gap in the spectrum is resolved. For comparison we also show matrix product state (MPS) calculated spectra in <ref>f-i with J_2/J_1 = 0.071 (this value derived from finite field non-linear spin wave fits <cit.>), at varying levels of exchange anisotropy Δ (see Supplemental Materials). The boundary to the quantum spin liquid phase for the isotropic model is at J_2/J_1=0.063 calculated using neural quantum states (see Supplemental Materials) locating the material in the theoretically predicted QSL phase for weak anisotropies. Because of finite size lattice effects the calculated spectra are gapped, and it is difficult to make quantitative comparisons between theory and neutron experiments. Nevertheless, the calculated spectra are consistent with the observed spectra, corroborating the idea that a J_2/J_1 model with easy-plane anisotropy is an appropriate model for NaYbSe_2. Despite intensity concentrated at (1/3,1/3,0) and similar spectra to CsYbSe_2 and KYbSe_2, we observe no static magnetic order in NaYbSe_2 in neutron scattering measurements down to 100 mK. No magnetic ordering features are visible in heat capacity down to 100 mK either, as shown in Fig. <ref>. (Note also that our sample has similar low-temperature specific heat to those reported in Refs. <cit.>. If the C/T maximum at 800 mK is an indication of sample quality, our sample is free from the site mixing reported in Ref. <cit.>.) To test whether applied hydrostatic pressure can induce order—as in KYbSe_2 wherein pressure enhanced T_N <cit.>—we also measured AC calorimetry under pressure (see Supplemental Materials) shown in Fig. <ref>b. Up to 2.0 GPa, no sharp feature as expected for an ordering transition is seen in the data (pressure-dependent thermalization issues cause the low-T specific heat to increase at low T, but this is a known artifact and would not mask a sharp ordering transition). Also in Fig. <ref>c we compare NaYbSe_2 heat capacity to KYbSe_2, with the temperature axis rescaled by the fitted J_1 <cit.>. This shows not only a lack of ordering transition, but also a smaller k_B T/J_1 ≈ 0.2 maximum heat capacity and greater low-temperature heat capacity in NaYbSe_2 relative to KYbSe_2. Comparing this to thermal pure quantum state (TPQ) simulations of the 27-site 2D triangular lattice in Fig. <ref>d, these trends are beautifully explained with a larger J_2/J_1 in NaYbSe_2: the low-temperature heat capacity is largest when J_2/J_1 ≈ 0.07 and the k_B T/J_1=0.2 bump is suppressed with larger J_2. Because the TPQ simulations are of a finite size cluster which induces an artificial energy gap, the lowest temperature trends are not quantitatively accurate. However, on a qualitative level, this is remarkable confirmation that NaYbSe_2 is indeed closer to or inside the triangular QSL phase. 
To investigate the magnetic state at lower temperatures, we measured AC susceptibility down to 20 mK with AC and DC fields applied along the a and c directions of NaYbSe_2 (see Supplemental Materials). In this case we observe a clear magnetization plateau in the B ∥ a direction at 5 T, but not for B ∥ c (note that these data were collected simultaneously on two separate crystals mounted on two separate susceptometers on the same dilution refrigerator). This agrees with previous measurements <cit.>, and indicates an easy-plane exchange anisotropy in NaYbSe_2: in the perfectly isotropic triangular model, 1/3 magnetization plateaux appear both in-plane and out-of-plane, but the out-of-plane plateau is suppressed by planar anisotropy <cit.>, although the in-plane magnetism still has a continuous rotation symmetry and similar physics is preserved. However, the most important feature in susceptibility is the low-field drop in susceptibility at 23 mK, shown in Fig. <ref>b and e. This drop occurs in both the B ∥ a and B ∥ c data. Observing such a feature at such low temperatures is prima facie evidence of high crystalline quality: any magnetic randomness or disorder in the material must involve an energy smaller than ∼ k_B T = 2.2 μeV, or else such a feature would be suppressed. Furthermore, there is no detectable frequency dependence in either direction, shown in Fig. <ref>c and f, indicating that it is not a spin-freezing transition. Rather, this indicates either a magnetic ordering transition, or a gap opening in the magnetic excitations. If the 23 mK susceptibility feature were a magnetic ordering transition, this would indicate that NaYbSe_2 is extremely close to a QSL phase, closer than any other triangular delafossite material <cit.>. The order would be extremely subtle: 23 mK corresponds to k_B T/J_1 ≈ 1/280, meaning that the amount of energy available for any type of order is very small, i.e., the order parameter saturates to a tiny value, and the system is left mostly fluctuating. However, the hypothesis of 23 mK magnetic order is unlikely, for three reasons. First, the isotropy: the drop in susceptibility is qualitatively the same along a and c. The only difference at zero field is a slightly higher peak temperature for H ∥ c at 24 mK. Antiferromagnetic order (especially the coplanar order expected for NaYbSe_2 with planar anisotropy) should produce a very different response for the field direction along which the spins order. Meanwhile, a gapped spectrum produces an isotropic response, consistent with what is observed here. Second, the magnitude: the drop in susceptibility is more than 5% without background susceptibility subtracted, whereas an ordering transition at such extremely low temperatures (in comparison to a ∼ 15 K bandwidth) would indicate very weak order with an extremely small recovered entropy, and a correspondingly weak signal in bulk properties. Indeed, in the sister compound KYbSe_2, despite a clear magnetic ordering transition at 290 mK in heat capacity and neutron diffraction <cit.>, the ordering feature in susceptibility is essentially invisible (see Supplemental Fig. 11). If NaYbSe_2 showed the same antiferromagnetic order with an ordering temperature an order of magnitude lower than that of KYbSe_2, we would expect a much weaker feature in susceptibility. Instead we observe a very strong feature, which is evidence for it arising from a gapped spectrum.
This abrupt drop is also inconsistent with spin glass behavior, where the AC susceptibility typically shows a frequency-dependent peak and a symmetric decrease both above and below the peak temperature. Third, fine-tuning: such a low transition temperature would require the system to sit exactly on the boundary between the QSL phase and AFM order. If we assume the system lies within the 120^∘ ordered phase, the dynamical exponent of the critical point would be z=1 (linearly dispersing zero modes). This implies that the effective dimension of the theory that describes the quantum phase transition is D=d+z=3+1=4. Since D=4 is the upper critical dimension (Gaussian fixed point), we expect the behavior of T_N(J_2/J_1) to be mean-field-like, i.e., T_N ∝√(J_2^c - J_2), where J_2^c is the critical value of J_2. This means that the phase boundary becomes vertical in J_2 vs T at the lowest transition temperatures. A sharp magnetic ordering transition at 23 mK (less than 0.5% of the bandwidth) would imply a system so finely tuned to the boundary that it is much easier to believe that the system lies within an extended gapped phase. Although the evidence points towards the susceptibility feature arising from a gap in the magnetic spectrum, an alternative explanation is nuclear-dipole ordering. We consider this unlikely because (i) only 30% of Yb nuclei are magnetic, which is below the percolation threshold (50%) for the triangular lattice, and (ii) 23 mK is quite high for nuclear dipole ordering, which typically occurs below 1 mK <cit.>. That said, there is a noticeable nuclear Schottky anomaly in the heat capacity data in Fig. <ref>, which indicates some splitting of the nuclear-moment energy levels. However, this does not necessarily indicate static dipolar order: the ^173Yb isotope (16% natural abundance) has a nonzero electric quadrupolar moment <cit.> whose energy levels will be split by an ionic electric field gradient at the Yb site, producing a Schottky feature without static electronic magnetism. (Furthermore, if the ordering temperature is 23 mK, a Schottky anomaly onset at 80 mK as in Ref. <cit.> would be much too high.) Therefore, the most natural explanation for this feature in susceptibility is a (2.1 ± 0.1) μeV gap in the magnetic spectrum, estimated by fitting the zero-field data with an activated form e^-Δ/k_B T (see Supplemental Fig. 9). This is too low to have been observed in the inelastic neutron experiment, which could not resolve features below 50 μeV. According to the generalized Lieb-Schultz-Mattis theorem, the existence of a low-energy gap in the absence of a phase transition for a translationally invariant S=1/2 triangular antiferromagnet implies that the ground state degeneracy must have a topological origin <cit.>. Because these materials are known to be in close proximity to a QSL phase <cit.>, this indicates that NaYbSe_2 lies within the QSL phase. This was suggested by previous refinements of the second-nearest-neighbor exchange <cit.>, but the observation of a spin gap is far more direct evidence. A further piece of evidence in favor of QSL physics is that the quantum critical effects seen in KYbSe_2 are suppressed in NaYbSe_2. More specifically, the neutron spectra in KYbSe_2 show energy-temperature scaling <cit.> due to the proximity to the quantum critical point (QCP) between the 120^∘ ordered and QSL phases (see Figure <ref>). Quantum Fisher Information is a sensitive gauge of quantum criticality <cit.>, and the elevated value of nQFI = 3.4(2) <cit.> in KYbSe_2 indicates the influence of the QCP.
In contrast, nQFI = 2.3(5) for NaYbSe_2 (see Supplemental Materials) is consistent with the material being beyond the QCP (where nQFI should be a maximum) and within the QSL phase, where spectral intensity is more distributed <cit.>. The existence of a low-energy gap allows us to rule out a gapless U(1) Dirac QSL <cit.>, but there are at least three competing theoretical options for gapped phases on the triangular lattice: (i) a resonating valence bond (gapped ℤ_2) liquid, (ii) a valence bond crystal (VBC) <cit.>, or (iii) a chiral QSL <cit.>. The data we present here are insufficient to fully resolve this debate, but the strong agreement with the Schwinger boson representation of a condensed ℤ_2 liquid in KYbSe_2 <cit.> indicates that the gap is from ℤ_2 topological order <cit.>. Furthermore, the chiral QSL and valence bond crystal break discrete symmetries (time-reversal and crystalline, respectively) and hence have a finite-temperature phase transition, whereas the ℤ_2 liquid does not. We observe neither a specific heat signature of such a phase transition down to 100 mK nor any susceptibility signature between 25 mK and 400 mK. Moreover, if the small gap were caused by a spin-Peierls instability of the U(1) Dirac spin liquid <cit.>, the momentum distribution of the intensity integrated over ω would be expected to closely resemble that of the U(1) Dirac spin liquid state. However, dynamical variational Monte Carlo calculations for J_2/J_1=0.07 and J_2/J_1=0.09 show that the K and M points have comparable integrated spectral weights <cit.>, which starkly contrasts with experimental observations. Thus, although we cannot uniquely identify the type of QSL with the measurements described here, the simplest interpretation of our results suggests a ℤ_2 liquid, consistent with Anderson's original proposal <cit.>. In conclusion, we have used neutron spectroscopy, heat capacity, and magnetic susceptibility to investigate NaYbSe_2. The susceptibility feature indicates either a phase transition or a gap. If the former, NaYbSe_2 is the closest triangular delafossite material yet to a QSL; if the latter, NaYbSe_2 is a gapped QSL. Details of the experiment strongly suggest a gap, which means NaYbSe_2 lies within the 2D triangular lattice QSL phase with a (2.1 ± 0.1) μeV gap, making it stable against perturbations. Beyond the susceptibility (i), further evidence for QSL physics is that (ii) the coherent excitations observed in the neutron spectra are consistent with simulated QSL spectra, (iii) no static magnetic order is observed in specific heat down to 100 mK, and (iv) quantum entanglement witnesses indicate that NaYbSe_2 has less divergent intensity than KYbSe_2 and lies within the QSL phase. The presence of a gap allows us to rule out the gapless U(1) Dirac QSL and suggests a gapped ℤ_2 liquid, but determining the precise nature of this ground state requires further investigation. Thus, over 50 years after Anderson's original proposal, we finally have a clean experimental realization of a triangular lattice QSL phase. § ACKNOWLEDGMENTS The work by A.O.S., M.L., K.W., S.T., V.S.Z., C.D.B., J.E.M., and D.A.T. is supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy (DOE). The neutron scattering study used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The work of P.L. and E.D. was supported by the U.S.
Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. The work by H.Z. is supported by the U.S. Department of Energy under Grant No. DE-SC0020254. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-2128556 and the State of Florida. The work by Q.Z. and J. Ma was supported by the National Key Research and Development Program of China (Grant No. 2022YFA1402700). We acknowledge helpful discussions with Johannes Knolle. § SUPPLEMENTAL MATERIALS FOR SPECTRUM AND LOW-ENERGY GAP IN TRIANGULAR QUANTUM SPIN LIQUID NAYBSE_2 § SAMPLE SYNTHESIS The samples for the neutron experiments were grown with NaCl flux and are the same as reported in Ref. <cit.>. A new batch of samples was grown for the susceptibility measurements. A mixture of 1.58 g NaCl powder, 0.23 g Yb pieces, and 0.26 g Se pieces was sealed in an evacuated quartz tube. The tube was placed vertically in a box furnace. The temperature was raised to 850^∘C at a rate of 50^∘C/hour, held there for 16 days, decreased to 750^∘C at a rate of 1^∘C/hour, and then cooled to room temperature at 100^∘C/hour. Thin, reddish plate-like crystals could be picked out after the product was washed with water. § NEUTRON EXPERIMENTS We measured the inelastic spectrum of NaYbSe_2 using the ∼ 300 mg co-aligned sample used in Ref. <cit.>, mounted in a dilution refrigerator (no magnet was used in this experiment). We measured the hhl inelastic scattering on the CNCS spectrometer <cit.> at Oak Ridge National Laboratory's Spallation Neutron Source <cit.>, measuring at E_i=3.32 meV, 1.55 meV, and 1.0 meV and rotating the sample through 180^∘ to map the neutron spectrum. We measured at T=0.1 K, and at 12 K for a background. The data are shown in main text Fig. 2, and were normalized to absolute units by normalizing the magnon mode measured in Ref. <cit.> to the nonlinear spin wave theory, such that the effective spin is 1/2. Figure <ref> shows the inelastic spectrum with an incident energy E_i = 1.0 meV, which gives an elastic-line FWHM energy resolution of 0.02 meV. With this resolution, no gap is observed at K. Figure <ref> shows the elastic scattering with the higher-resolution E_i=1.55 meV data. Temperature subtraction shows no elastic scattering at hh=(1/3,1/3), indicating an absence of long-range static magnetic order. However, this may be because the CNCS spectrometer is not sensitive enough: similar CNCS scans on KYbSe_2 showed no static magnetism at zero field <cit.>, even though triple-axis scans clearly showed the onset of elastic Bragg intensity <cit.>. Therefore, the absence of detectable NaYbSe_2 elastic scattering in these data does not necessarily indicate the absence of static magnetic order. For a more complete view of the collected scattering data, Fig. <ref> shows constant-energy slices of NaYbSe_2 with E_i=3.32 meV. Note that the magnetic signal (most clearly seen in the temperature-subtracted data) has essentially no dependence on ℓ, indicating no correlations between the triangular lattice planes. Figure <ref> also shows this, with plots of different integration widths along ℓ, which make no visible difference to the inelastic scattering pattern. Therefore, these scattering data are very two-dimensional. Figure <ref> shows the intensity at K as a function of energy transfer.
Unfortunately, because only one temperature is available, it is not possible to evaluate the presence or absence of a power-law scaling collapse to the data. Instead, we merely point out that the high energy transfer region appears to follow a power law with α = 1.74(6), consistent with the fitted KYbSe_2 value of α = 1.73(12) <cit.> (though the precise exponent depends upon the fitted energy transfer region). § AC CALORIMETRY Ac calorimetry measurements under hydrostatic pressure were performed in a piston-clamp pressure cell using Daphne oil 7373 as the pressure medium using the standard steady state technique <cit.>. The temperature oscillations were measured using an Au/0.07%Fe-chromel thermocouple, and a constantan meander was attached to the opposite side of the sample to apply heat. The heater power was varied between 25 nW and 5 μW depending on the sample temperature. The measurement frequency was continuously adjusted to keep a constant phase relationship between the applied heat and the temperature oscillations on the thermocouple. For the lowest (≤ 100 nW) powers and temperatures the frequency was fixed near 2 Hz because the signal was too small to continuously vary the frequency. Below 300 mK, it was not possible to find a frequency range where fΔT_ac was constant. This indicates that the internal relaxation of the sample is likely slower than the relaxation rate to the bath. Nonetheless, the measurement would still be sensitive to phase transitions even in this temperature range. In Fig. <ref> the NaYbSe_2 specific heat is compared to previously published KYbSe_2 data <cit.>. The “bump” in C/T is smaller in NaYbSe_2 than in KYbSe_2, while the specific heat below 300 mK is significantly larger in NaYbSe_2. This indicates more of the density of states has shifted to low energies, which is consistent with the system being within a QSL phase. Note also, in main text Fig. 3 there is no significant missing entropy in NaYbSe_2, which again is consistent with it being in a well-defined quantum ground state rather than a glassy frozen state. Figure <ref> also shows the experimental data from NaYbSe_2 and KYbSe_2 compared to the TPQ simulations. In C/T the theoretical heat capacity maximum is at higher temperature than the experimental maximum, possibly due to a finite-size-induced gap. However, on a qualitative level the resemblance between theory and experiment is strong, and the theoretical trend is consistent with NaYbSe_2 having a larger second neighbor exchange J_2 than KYbSe_2. § AC SUSCEPTIBILITY §.§ Method The ac susceptometer comprises a solenoidal coil to generate an ac magnetic field and a pair of sensing coils housed within it. The pair of sensing coils are wound in opposite directions, ensuring they possess equal mutual inductance in magnitude but opposite signs. Consequently, when two sensing coils are connected in series, the induced voltage across them becomes zero. The presence of a sample positioned in the center of one of the sensing coils induces a nonzero net voltage across the coils. This induced voltage is directly proportional to the change in magnetic flux passing through the sensing coil over time. More detailed information can be found in https://nationalmaglab.org/user-facilities/dc-field/measurement-techniques/ac-magnetic-susceptibility-dc/. This setup includes a nonzero background susceptibility. 
Based on our experience of running this setup for over ten years, we believe that the excess susceptibility near zero magnetic field is due to the coil background, although we did not perform a background measurement. The background in temperature scans is much smaller than the sample signal. Therefore, the susceptibility drop below 23 mK is due to the sample's intrinsic behavior (confirmed by the absence of such a downturn in the KYbSe_2 data, see below). We report the susceptibility in arbitrary units because of the background signal of the AC susceptometer. §.§ Additional data Figure <ref> shows the temperature-dependent AC susceptibility at zero magnetic field up to higher temperatures than in the main text Fig. 4. Paramagnetic behavior is evident up to 500 mK, with no phase transitions visible. Figure <ref> shows the susceptibility fit used to estimate the gap associated with the low-temperature drop in susceptibility, which we find to be 2.1 μeV. Figure <ref> shows additional temperature-dependent NaYbSe_2 susceptibility data for applied fields between 1 T and 12 T. For field applied along c, there are no clear features in the data indicating phase boundaries. For field along a, there are several kinks and discontinuities. The phase diagram from temperature- and field-dependent susceptibility features is plotted in panel (c) of Fig. <ref>. Finally, for comparison with NaYbSe_2, Figure <ref> shows the measured in-plane susceptibility of KYbSe_2 (which was also measured in the same cryostat at the same time, and therefore with the same temperature and field configurations, as the two NaYbSe_2 crystals). Note the absence of a gap feature in the data, which follows a 1/T divergence to the lowest temperatures. Note also that the ordering transition is not visible in the data (which is admittedly somewhat noisy), again evidencing that the 23 mK downturn in NaYbSe_2 is not from a magnetic ordering transition. § THEORETICAL SIMULATIONS §.§ MPS calculations We performed MPS simulations on the J_2/J_1 model with varying values of the XXZ anisotropy Δ <cit.>, H=J_1∑_⟨ i,j⟩(S_i^xS_j^x+S_i^yS_j^y+Δ S_i^zS_j^z)+J_2∑_⟨⟨ i,j⟩⟩(S_i^xS_j^x+S_i^yS_j^y+Δ S_i^zS_j^z) Simulations are done on a cylinder geometry with circumference C=6 and length L=36 with XC boundary conditions <cit.> on the triangular lattice, at a maximum bond dimension of χ=512, using the ITensor library <cit.>. The ground state |Ω⟩ of the model is found using the density matrix renormalization group (DMRG). The spin-spin correlation function is determined with time evolution using the time-dependent variational principle (TDVP) with a time step of dt=0.1 <cit.>: G(𝐱,t)=⟨Ω|𝐒_𝐱(t)·𝐒_c(0)|Ω⟩ where the subscript c represents the central site on the cylinder. The dynamical spin spectral function is then computed as the space-time Fourier transform of the correlation function, S(𝐪,ω)=1/N∑_𝐱∫_0^∞dt/2πe^i(𝐪·𝐱-ω t)G(𝐱,t) To remedy the finite time cutoff of the Fourier transform, Gaussian broadening of the time data, on the order of the cutoff T_max∼ 80, is applied to the correlation function before transforming <cit.>. §.§ TPQ specific heat calculations We numerically calculated the magnetic specific heat C_m for the S=1/2 AFM J_1-J_2 Hamiltonian H = J_1 ∑_⟨ i,j⟩𝐒_i ·𝐒_j + J_2 ∑_⟨⟨ i,j⟩⟩𝐒_i ·𝐒_j on a 27-site cluster (shown in Fig. <ref>) with periodic boundary conditions using the microcanonical thermal pure quantum state (TPQ) <cit.> method and the ℋΦ library <cit.>, version 3.5.2.
In this typicality-based approach, a thermal quantum state is constructed iteratively starting from a randomized initial vector, and is associated with a temperature estimated from the internal energy. To reduce statistical errors, we averaged over 15 initial vectors. Finite-size errors are expected to mainly affect the results at low temperatures <cit.>, but not to change the trend with J_2/J_1 highlighted here. §.§ Phase transition through neural quantum states (NQSs) §.§.§ NQS wave function The NQS method utilizes an artificial neural network as a variational wave function to approximate the ground state of a target model <cit.>. In a system with N spin-1/2 degrees of freedom, the Hilbert space can be spanned by the S_z basis |σ⟩ = |σ_1,...,σ_N⟩ with σ_i = ↑ or ↓. Similar to image recognition tasks, in which an artificial neural network converts every input image to a probability, in quantum many-body problems the NQS converts every input basis state |σ⟩ to a wave function amplitude ψ_σ. This gives the full quantum state as |Ψ⟩ = ∑_σψ_σ|σ⟩. In this work, we employ deep residual convolutional neural networks as the variational wave function. The network contains 16 convolutional layers, each with 32 channels and 3×3 kernels, leading to 139008 real parameters in total. The GeLU activation is applied before each convolutional layer. Circular padding is used in the convolutional layers to realize exact translation symmetry. The output after the last convolutional layer contains 32 channels, which is divided into two groups x^(1)_j and x^(2)_j, each with 16 channels, and the final wave function amplitude output of the network is given by ψ_σ = ∑_j exp (x^(1)_j + i x^(2)_j), where we sum over all elements in the 16 channels. In addition, we apply symmetries on top of the well-trained ψ_σ to project variational states onto suitable symmetry sectors. Assuming the system permits a symmetry group represented by operators T_i with characters ω_i, the symmetrized wave function is then defined as <cit.> ψ^symm_σ = ∑_i ω_i^-1ψ_T_i σ. The applied symmetry groups in Eq. (<ref>) are the D_6 group, realizing rotation and reflection symmetries, and the Z_2 group, realizing the spin inversion symmetry σ→ -σ. The deep network is trained by the MinSR method to approach the ground state of the triangular J_1-J_2 Hamiltonian <cit.>. The training employs 10000 Monte Carlo samples, with 20000 steps without symmetries followed by 10000 steps with symmetries. §.§.§ Phase transition The transition between the 120^∘-ordered and the QSL phase can be detected through the spin structure factor S(𝐪) = 1/N∑_ij C_ij e^i 𝐪· (𝐫_i - 𝐫_j), where 𝐪 denotes the momentum, and C_ij is the real-space spin-spin correlation given by C_ij = ⟨𝐒_i ·𝐒_j⟩, which is obtained from the NQS wave function by Monte Carlo sampling. The 120^∘ order is signaled by a peak in the spin structure factor S(𝐊) at 𝐊 = (4π/3, 0). In the thermodynamic limit, S(𝐊) diverges in the 120^∘ ordered phase but not in the QSL phase. Importantly, the numerical simulations are performed for large but finite systems, leading to finite structure factors in both phases. In order to minimize finite-size effects in the detection of phase transitions, the so-called correlation ratio R has been introduced <cit.>, R = 1 - S(𝐊 + δ 𝐪)/S(𝐊), where 𝐊 + δ 𝐪 represents the nearest neighboring momentum to 𝐊. The correlation ratio is a measure of the sharpness of the spin structure factor; a small numerical sketch of this estimator is given below.
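The following is a minimal, self-contained sketch of the correlation-ratio estimator defined above. For illustration, the correlations C_ij are taken to be an idealized 120^∘-ordered pattern on a small periodic triangular cluster rather than Monte Carlo estimates from the trained NQS, and the linear size L is arbitrary; only the structure-factor and ratio formulas are taken from the text.

```python
import numpy as np

# Minimal sketch of the correlation-ratio estimator R = 1 - S(K + dq)/S(K).
# Here C_ij is a synthetic 120-degree-ordered correlation pattern; in the
# actual calculation it would come from Monte Carlo sampling of the NQS.
L = 12                                   # linear cluster size (illustrative)
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
# reciprocal lattice vectors satisfying a_i . b_j = 2 pi delta_ij
b1 = 2 * np.pi * np.array([1.0, -1.0 / np.sqrt(3)])
b2 = 2 * np.pi * np.array([0.0, 2.0 / np.sqrt(3)])

# site positions of an L x L triangular cluster
idx = np.indices((L, L)).reshape(2, -1).T
pos = idx[:, 0, None] * a1 + idx[:, 1, None] * a2        # shape (N, 2)
N = len(pos)

K = np.array([4 * np.pi / 3, 0.0])       # ordering wave vector of 120-deg order
C = 0.25 * np.cos((pos[:, None, :] - pos[None, :, :]) @ K)   # synthetic C_ij

def structure_factor(q, C, pos):
    """S(q) = (1/N) sum_ij C_ij exp(i q . (r_i - r_j))."""
    phase = np.exp(1j * (pos[:, None, :] - pos[None, :, :]) @ q)
    return np.real(np.sum(C * phase)) / len(pos)

S_K = structure_factor(K, C, pos)
S_Kdq = structure_factor(K + b1 / L, C, pos)   # nearest momentum on the mesh
R = 1 - S_Kdq / S_K
print(f"S(K) = {S_K:.3f}, S(K+dq) = {S_Kdq:.3f}, R = {R:.3f}")
```

Replacing the synthetic C_ij with sampled NQS correlations for several system sizes and scanning J_2/J_1 yields the crossing-point analysis described in the following paragraph.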
As the system size N increases, R grows in the 120^∘ ordered phase and decreases in the QSL phase. Most important for the current purpose, this opposite behavior in the two phases with system sizes, generically leads to a crossing point in R for different N at the phase transition point. As shown in Fig. <ref>, the correlation ratio R for different system sizes indeed exhibits such a crossing at J_2/J_1 ≈ 0.063 signaling the phase transition. We identify two sources for uncertainties in estimating the precise quantum phase transition point, namely a variational bias and a statistical error. First, for complex quantum models such as the considered frustrated magnets we find that the variationally obtained wave function exhibits larger variational errors upon increasing system size. We observe that these errors usually have the tendency to lead to a stronger spin order and consequently to a larger correlation ratio R consistent with other works <cit.>. Therefore, our estimate for the phase transition point J_2/J_1=0.063 exhibits a bias towards larger values of J_2/J_1 so that we interpret 0.063 as an upper bound. Second, the measurement of R is based on an underlying Monte Carlo sampling scheme, which introduces statistical errors and leads to an uncertainty 0.001 in the critical J_2/J_1 value. In summary, the result provided in Fig. <ref> lead to a bound of the critical point of the form J_2/J_1 ≲ 0.063 ± 0.001. 72 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Anderson(1973)]Anderson1973 author author P. Anderson, title title Resonating valence bonds: A new kind of insulator?, https://doi.org/https://doi.org/10.1016/0025-5408(73)90167-0 journal journal Materials Research Bulletin volume 8, pages 153 (year 1973)NoStop [Savary and Balents(2016)]Savary_2016review author author L. Savary and author L. Balents, title title Quantum spin liquids: a review, https://doi.org/10.1088/0034-4885/80/1/016502 journal journal Reports on Progress in Physics volume 80, pages 016502 (year 2016)NoStop [Broholm et al.(2020)Broholm, Cava, Kivelson, Nocera, Norman, and Senthil]broholm2019quantum author author C. Broholm, author R. J. Cava, author S. A. Kivelson, author D. G. Nocera, author M. R. Norman, and author T. Senthil, title title Quantum spin liquids, journal journal Science volume 367, https://doi.org/10.1126/science.aay0668 10.1126/science.aay0668 (year 2020)NoStop [Zhu and White(2015)]PhysRevB.92.041105 author author Z. Zhu and author S. R. White, title title Spin liquid phase of the s=1/24.pt0exJ_1J_2 heisenberg model on the triangular lattice, https://doi.org/10.1103/PhysRevB.92.041105 journal journal Phys. Rev. B volume 92, pages 041105 (year 2015)NoStop [Hu et al.(2015)Hu, Gong, Zhu, and Sheng]PhysRevB.92.140403 author author W.-J. Hu, author S.-S. Gong, author W. Zhu, and author D. N. Sheng, title title Competing spin-liquid states in the spin-1/2 heisenberg model on the triangular lattice, https://doi.org/10.1103/PhysRevB.92.140403 journal journal Phys. Rev. B volume 92, pages 140403 (year 2015)NoStop [Iqbal et al.(2016)Iqbal, Hu, Thomale, Poilblanc, and Becca]PhysRevB.93.144411 author author Y. Iqbal, author W.-J. Hu, author R. Thomale, author D. Poilblanc, and author F. 
Becca, title title Spin liquid nature in the heisenberg J_1J_2 triangular antiferromagnet, https://doi.org/10.1103/PhysRevB.93.144411 journal journal Phys. Rev. B volume 93, pages 144411 (year 2016)NoStop [Saadatmand and McCulloch(2016)]PhysRevB.94.121111 author author S. N. Saadatmand and author I. P. McCulloch, title title Symmetry fractionalization in the topological phase of the spin-1/2 J_1J_2 triangular heisenberg model, https://doi.org/10.1103/PhysRevB.94.121111 journal journal Phys. Rev. B volume 94, pages 121111 (year 2016)NoStop [Wietek and Läuchli(2017)]PhysRevB.95.035141 author author A. Wietek and author A. M. Läuchli, title title Chiral spin liquid and quantum criticality in extended s=1/2 heisenberg models on the triangular lattice, https://doi.org/10.1103/PhysRevB.95.035141 journal journal Phys. Rev. B volume 95, pages 035141 (year 2017)NoStop [Gong et al.(2017)Gong, Zhu, Zhu, Sheng, and Yang]PhysRevB.96.075116 author author S.-S. Gong, author W. Zhu, author J.-X. Zhu, author D. N. Sheng, and author K. Yang, title title Global phase diagram and quantum spin liquids in a spin-1/2 triangular antiferromagnet, https://doi.org/10.1103/PhysRevB.96.075116 journal journal Phys. Rev. B volume 96, pages 075116 (year 2017)NoStop [Hu et al.(2019)Hu, Zhu, Eggert, and He]PhysRevLett.123.207203 author author S. Hu, author W. Zhu, author S. Eggert, and author Y.-C. He, title title Dirac spin liquid on the spin-1/2 triangular heisenberg antiferromagnet, https://doi.org/10.1103/PhysRevLett.123.207203 journal journal Phys. Rev. Lett. volume 123, pages 207203 (year 2019)NoStop [Kaneko et al.(2014)Kaneko, Morita, and Imada]doi:10.7566/JPSJ.83.093707 author author R. Kaneko, author S. Morita, and author M. Imada, title title Gapless spin-liquid phase in an extended spin 1/2 triangular heisenberg model, https://doi.org/10.7566/JPSJ.83.093707 journal journal Journal of the Physical Society of Japan volume 83, pages 093707 (year 2014)NoStop [Drescher et al.(2023)Drescher, Vanderstraeten, Moessner, and Pollmann]drescher_dynamical_2023 author author M. Drescher, author L. Vanderstraeten, author R. Moessner, and author F. Pollmann, title title Dynamical signatures of symmetry-broken and liquid phases in an s=1/2 Heisenberg antiferromagnet on the triangular lattice, https://doi.org/10.1103/PhysRevB.108.L220401 journal journal Physical Review B volume 108, pages L220401 (year 2023), note publisher: American Physical SocietyNoStop [Wietek et al.(2024)Wietek, Capponi, and Läuchli]PhysRevX.14.021010 author author A. Wietek, author S. Capponi, and author A. M. Läuchli, title title Quantum electrodynamics in 2+1 dimensions as the organizing principle of a triangular lattice antiferromagnet, https://doi.org/10.1103/PhysRevX.14.021010 journal journal Phys. Rev. X volume 14, pages 021010 (year 2024)NoStop [Seifert et al.(2023a)Seifert, Willsher, Drescher, Pollmann, and Knolle]seifert2023spin author author U. F. Seifert, author J. Willsher, author M. Drescher, author F. Pollmann, and author J. Knolle, title title Spin-peierls instability of the u (1) dirac spin liquid, https://doi.org/10.48550/arXiv.2307.12295 journal journal arXiv preprint arXiv:2307.12295 (year 2023a)NoStop [Sachdev(1992)]Sachdev92 author author S. Sachdev, title title Kagome´- and triangular-lattice heisenberg antiferromagnets: Ordering from quantum fluctuations and quantum-disordered ground states with unconfined bosonic spinons, https://doi.org/10.1103/PhysRevB.45.12377 journal journal Phys. Rev. 
B volume 45, pages 12377 (year 1992)NoStop [Jiang and Jiang(2023)]PhysRevB.107.L140411 author author Y.-F. Jiang and author H.-C. Jiang, title title Nature of quantum spin liquids of the s=1/2 heisenberg antiferromagnet on the triangular lattice: A parallel dmrg study, https://doi.org/10.1103/PhysRevB.107.L140411 journal journal Phys. Rev. B volume 107, pages L140411 (year 2023)NoStop [Cookmeyer et al.(2021)Cookmeyer, Motruk, and Moore]PhysRevLett.127.087201 author author T. Cookmeyer, author J. Motruk, and author J. E. Moore, title title Four-spin terms and the origin of the chiral spin liquid in mott insulators on the triangular lattice, https://doi.org/10.1103/PhysRevLett.127.087201 journal journal Phys. Rev. Lett. volume 127, pages 087201 (year 2021)NoStop [Sherman et al.(2023)Sherman, Dupont, and Moore]Sherman_2023_spectral author author N. E. Sherman, author M. Dupont, and author J. E. Moore, title title Spectral function of the J_1J_2 heisenberg model on the triangular lattice, https://doi.org/10.1103/PhysRevB.107.165146 journal journal Phys. Rev. B volume 107, pages 165146 (year 2023)NoStop [Ma et al.(2016)Ma, Kamiya, Hong, Cao, Ehlers, Tian, Batista, Dun, Zhou, and Matsuda]Ma16 author author J. Ma, author Y. Kamiya, author T. Hong, author H. B. Cao, author G. Ehlers, author W. Tian, author C. D. Batista, author Z. L. Dun, author H. D. Zhou, and author M. Matsuda, title title Static and dynamical properties of the spin-1/2 equilateral triangular-lattice antiferromagnet ba_3cosb_2o_9, https://doi.org/10.1103/PhysRevLett.116.087201 journal journal Phys. Rev. Lett. volume 116, pages 087201 (year 2016)NoStop [Ito et al.(2017)Ito, Kurita, Tanaka, Ohira-Kawamura, Nakajima, Itoh, Kuwahara, and Kakurai]Ito17 author author S. Ito, author N. Kurita, author H. Tanaka, author S. Ohira-Kawamura, author K. Nakajima, author S. Itoh, author K. Kuwahara, and author K. Kakurai, title title Structure of the magnetic excitations in the spin-1/2 triangular-lattice heisenberg antiferromagnet ba3cosb2o9, https://doi.org/10.1038/s41467-017-00316-x journal journal Nature Communications volume 8, pages 235 (year 2017)NoStop [Macdougal et al.(2020)Macdougal, Williams, Prabhakaran, Bewley, Voneshen, and Coldea]Macdougal20 author author D. Macdougal, author S. Williams, author D. Prabhakaran, author R. I. Bewley, author D. J. Voneshen, and author R. Coldea, title title Avoided quasiparticle decay and enhanced excitation continuum in the spin-1/2 near-heisenberg triangular antiferromagnet ba_3cosb_2o_9, https://doi.org/10.1103/PhysRevB.102.064421 journal journal Phys. Rev. B volume 102, pages 064421 (year 2020)NoStop [Ghioldi et al.(2018)Ghioldi, Gonzalez, Zhang, Kamiya, Manuel, Trumper, and Batista]Ghioldi_2018 author author E. A. Ghioldi, author M. G. Gonzalez, author S.-S. Zhang, author Y. Kamiya, author L. O. Manuel, author A. E. Trumper, and author C. D. Batista, title title Dynamical structure factor of the triangular antiferromagnet: Schwinger boson theory beyond mean field, https://doi.org/10.1103/PhysRevB.98.184403 journal journal Phys. Rev. B volume 98, pages 184403 (year 2018)NoStop [Ghioldi et al.(2022)Ghioldi, Zhang, Kamiya, Manuel, Trumper, and Batista]Ghioldi22 author author E. A. Ghioldi, author S.-S. Zhang, author Y. Kamiya, author L. O. Manuel, author A. E. Trumper, and author C. D. Batista, title title Evidence of two-spinon bound states in the magnetic spectrum of ba_3cosb_2o_9, https://doi.org/10.1103/PhysRevB.106.064418 journal journal Phys. Rev. 
B volume 106, pages 064418 (year 2022)NoStop [Ranjith et al.(2019a)Ranjith, Dmytriieva, Khim, Sichelschmidt, Luther, Ehlers, Yasuoka, Wosnitza, Tsirlin, Kühne, and Baenitz]Ranjinth2019 author author K. M. Ranjith, author D. Dmytriieva, author S. Khim, author J. Sichelschmidt, author S. Luther, author D. Ehlers, author H. Yasuoka, author J. Wosnitza, author A. A. Tsirlin, author H. Kühne, and author M. Baenitz, title title Field-induced instability of the quantum spin liquid ground state in the J_eff=1/2 triangular-lattice compound NaYbO_2, https://doi.org/10.1103/PhysRevB.99.180401 journal journal Phys. Rev. B volume 99, pages 180401 (year 2019a)NoStop [Ranjith et al.(2019b)Ranjith, Luther, Reimann, Schmidt, Schlender, Sichelschmidt, Yasuoka, Strydom, Skourski, Wosnitza, Kühne, Doert, and Baenitz]Ranjith2019_2 author author K. M. Ranjith, author S. Luther, author T. Reimann, author B. Schmidt, author P. Schlender, author J. Sichelschmidt, author H. Yasuoka, author A. M. Strydom, author Y. Skourski, author J. Wosnitza, author H. Kühne, author T. Doert, and author M. Baenitz, title title Anisotropic field-induced ordering in the triangular-lattice quantum spin liquid NaYbSe_2, https://doi.org/10.1103/PhysRevB.100.224417 journal journal Phys. Rev. B volume 100, pages 224417 (year 2019b)NoStop [Zhang et al.(2021a)Zhang, Ma, Li, Wang, Adroja, Perring, Liu, Jin, Ji, Wang, Kamiya, Wang, Ma, and Zhang]Zhang_2021_NYS author author Z. Zhang, author X. Ma, author J. Li, author G. Wang, author D. T. Adroja, author T. P. Perring, author W. Liu, author F. Jin, author J. Ji, author Y. Wang, author Y. Kamiya, author X. Wang, author J. Ma, and author Q. Zhang, title title Crystalline electric field excitations in the quantum spin liquid candidate NaYbSe_2, https://doi.org/10.1103/PhysRevB.103.035144 journal journal Phys. Rev. B volume 103, pages 035144 (year 2021a)NoStop [Dai et al.(2021)Dai, Zhang, Xie, Duan, Gao, Zhu, Feng, Tao, Huang, Cao, Podlesnyak, Granroth, Everett, Neuefeind, Voneshen, Wang, Tan, Morosan, Wang, Lin, Shu, Chen, Guo, Lu, and Dai]Dai_2021 author author P.-L. Dai, author G. Zhang, author Y. Xie, author C. Duan, author Y. Gao, author Z. Zhu, author E. Feng, author Z. Tao, author C.-L. Huang, author H. Cao, author A. Podlesnyak, author G. E. Granroth, author M. S. Everett, author J. C. Neuefeind, author D. Voneshen, author S. Wang, author G. Tan, author E. Morosan, author X. Wang, author H.-Q. Lin, author L. Shu, author G. Chen, author Y. Guo, author X. Lu, and author P. Dai, title title Spinon fermi surface spin liquid in a triangular lattice antiferromagnet NaYbSe_2, https://doi.org/10.1103/PhysRevX.11.021044 journal journal Phys. Rev. X volume 11, pages 021044 (year 2021)NoStop [Scheie et al.(2024a)Scheie, Ghioldi, Xing, Paddison, Sherman, Dupont, Sanjeewa, Lee, Woods, Abernathy, Pajerowski, Williams, Zhang, Manuel, Trumper, Pemmaraju, Sefat, Parker, Devereaux, Movshovich, Moore, Batista, and Tennant]scheie2024_KYS author author A. O. Scheie, author E. A. Ghioldi, author J. Xing, author J. A. M. Paddison, author N. E. Sherman, author M. Dupont, author L. D. Sanjeewa, author S. Lee, author A. J. Woods, author D. Abernathy, author D. M. Pajerowski, author T. J. Williams, author S.-S. Zhang, author L. O. Manuel, author A. E. Trumper, author C. D. Pemmaraju, author A. S. Sefat, author D. S. Parker, author T. P. Devereaux, author R. Movshovich, author J. E. Moore, author C. D. Batista, and author D. A. 
Tennant, title title Proximate spin liquid and fractionalization in the triangular antiferromagnet KYbSe_2, https://doi.org/10.1038/s41567-023-02259-1 journal journal Nature Physics volume 20, pages 74 (year 2024a)NoStop [Xie et al.(2023)Xie, Eberharter, Xing, Nishimoto, Brando, Khanenko, Sichelschmidt, Turrini, Mazzone, Naumov, Sanjeewa, Harrison, Sefat, Normand, Läuchli, Podlesnyak, and Nikitin]Xie2023 author author T. Xie, author A. A. Eberharter, author J. Xing, author S. Nishimoto, author M. Brando, author P. Khanenko, author J. Sichelschmidt, author A. A. Turrini, author D. G. Mazzone, author P. G. Naumov, author L. D. Sanjeewa, author N. Harrison, author A. S. Sefat, author B. Normand, author A. M. Läuchli, author A. Podlesnyak, and author S. E. Nikitin, title title Complete field-induced spectral response of the spin-1/2 triangular-lattice antiferromagnet csybse2, https://doi.org/10.1038/s41535-023-00580-9 journal journal npj Quantum Materials volume 8, pages 48 (year 2023)NoStop [Scheie et al.(2024b)Scheie, Kamiya, Zhang, Lee, Woods, Ajeesh, Gonzalez, Bernu, Villanova, Xing, Huang, Zhang, Ma, Choi, Pajerowski, Zhou, Sefat, Okamoto, Berlijn, Messio, Movshovich, Batista, and Tennant]Scheie_2024_Nonlinear author author A. O. Scheie, author Y. Kamiya, author H. Zhang, author S. Lee, author A. J. Woods, author M. O. Ajeesh, author M. G. Gonzalez, author B. Bernu, author J. W. Villanova, author J. Xing, author Q. Huang, author Q. Zhang, author J. Ma, author E. S. Choi, author D. M. Pajerowski, author H. Zhou, author A. S. Sefat, author S. Okamoto, author T. Berlijn, author L. Messio, author R. Movshovich, author C. D. Batista, and author D. A. Tennant, title title Nonlinear magnons and exchange hamiltonians of the delafossite proximate quantum spin liquid candidates KYbSe_2 and NaYbSe_2, https://doi.org/10.1103/PhysRevB.109.014425 journal journal Phys. Rev. B volume 109, pages 014425 (year 2024b)NoStop [Paddison et al.(2017)Paddison, Daum, Dun, Ehlers, Liu, Stone, Zhou, and Mourigal]Paddison2017 author author J. A. M. Paddison, author M. Daum, author Z. Dun, author G. Ehlers, author Y. Liu, author M. Stone, author H. Zhou, and author M. Mourigal, title title Continuous excitations of the triangular-lattice quantum spin liquid YbMgGaO_4, https://doi.org/10.1038/nphys3971 journal journal Nature Physics volume 13, pages 117 (year 2017)NoStop [Ross et al.(2011)Ross, Savary, Gaulin, and Balents]Ross_2011_YTO author author K. A. Ross, author L. Savary, author B. D. Gaulin, and author L. Balents, title title Quantum excitations in quantum spin ice, https://doi.org/10.1103/PhysRevX.1.021002 journal journal Phys. Rev. X volume 1, pages 021002 (year 2011)NoStop [Zhang et al.(2021b)Zhang, Li, Liu, Zhang, Ji, Jin, Chen, Wang, Wang, Ma, and Zhang]Zhang_2021_NYS-HC author author Z. Zhang, author J. Li, author W. Liu, author Z. Zhang, author J. Ji, author F. Jin, author R. Chen, author J. Wang, author X. Wang, author J. Ma, and author Q. Zhang, title title Effective magnetic hamiltonian at finite temperatures for rare-earth chalcogenides, https://doi.org/10.1103/PhysRevB.103.184419 journal journal Phys. Rev. B volume 103, pages 184419 (year 2021b)NoStop [Chubukov and Golosov(1991)]Chubukov_1991 author author A. V. Chubukov and author D. I. 
Golosov, title title Quantum theory of an antiferromagnet on a triangular lattice in a magnetic field, https://doi.org/10.1088/0953-8984/3/1/005 journal journal Journal of Physics: Condensed Matter volume 3, pages 69 (year 1991)NoStop [Sellmann et al.(2015)Sellmann, Zhang, and Eggert]Sellmann_2015 author author D. Sellmann, author X.-F. Zhang, and author S. Eggert, title title Phase diagram of the antiferromagnetic xxz model on the triangular lattice, https://doi.org/10.1103/PhysRevB.91.081104 journal journal Phys. Rev. B volume 91, pages 081104 (year 2015)NoStop [Quirion et al.(2015)Quirion, Lapointe-Major, Poirier, Quilliam, Dun, and Zhou]Quirion_2015 author author G. Quirion, author M. Lapointe-Major, author M. Poirier, author J. A. Quilliam, author Z. L. Dun, and author H. D. Zhou, title title Magnetic phase diagram of ba_3cosb_2o_9 as determined by ultrasound velocity measurements, https://doi.org/10.1103/PhysRevB.92.014414 journal journal Phys. Rev. B volume 92, pages 014414 (year 2015)NoStop [Goldman(1977)]GOLDMAN19771 author author M. Goldman, title title Nuclear dipolar magnetic ordering, https://doi.org/https://doi.org/10.1016/0370-1573(77)90070-9 journal journal Physics Reports volume 32, pages 1 (year 1977)NoStop [Stone(2016)]STONE20161 author author N. Stone, title title Table of nuclear electric quadrupole moments, https://doi.org/https://doi.org/10.1016/j.adt.2015.12.002 journal journal Atomic Data and Nuclear Data Tables volume 111-112, pages 1 (year 2016)NoStop [Oshikawa(2000)]Oshikawa00 author author M. Oshikawa, title title Commensurability, excitation gap, and topology in quantum many-particle systems on a periodic lattice, https://doi.org/10.1103/PhysRevLett.84.1535 journal journal Phys. Rev. Lett. volume 84, pages 1535 (year 2000)NoStop [Hastings(2004)]Hastings_2004 author author M. B. Hastings, title title Lieb-schultz-mattis in higher dimensions, https://doi.org/10.1103/PhysRevB.69.104431 journal journal Phys. Rev. B volume 69, pages 104431 (year 2004)NoStop [Hauke et al.(2016)Hauke, Heyl, Tagliacozzo, and Zoller]hauke2016 author author P. Hauke, author M. Heyl, author L. Tagliacozzo, and author P. Zoller, title title Measuring multipartite entanglement through dynamic susceptibilities, https://doi.org/10.1038/nphys3700 journal journal Nat. Phys. volume 12, pages 778 (year 2016)NoStop [Laurell et al.(2024)Laurell, Scheie, Dagotto, and Tennant]laurell2024witnessing author author P. Laurell, author A. Scheie, author E. Dagotto, and author D. A. Tennant, https://arxiv.org/abs/2405.10899 title Witnessing entanglement and quantum correlations in condensed matter: A review (year 2024), https://arxiv.org/abs/2405.10899 arXiv:2405.10899 [quant-ph] NoStop [Miksch et al.(2021)Miksch, Pustogow, Rahim, Bardin, Kanoda, Schlueter, Hübner, Scheffler, and Dressel]dresselscience author author B. Miksch, author A. Pustogow, author M. J. Rahim, author A. A. Bardin, author K. Kanoda, author J. A. Schlueter, author R. Hübner, author M. Scheffler, and author M. Dressel, title title Gapped magnetic ground state in quantum spin liquid candidate &#x3ba;-(bedt-ttf)<sub>2</sub>cu<sub>2</sub>(cn)<sub>3</sub>, https://doi.org/10.1126/science.abc6363 journal journal Science volume 372, pages 276 (year 2021), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.abc6363 https://www.science.org/doi/pdf/10.1126/science.abc6363 NoStop [Seifert et al.(2023b)Seifert, Willsher, Drescher, Pollmann, and Knolle]pollmannvbs author author U. F. P. Seifert, author J. Willsher, author M. 
Drescher, author F. Pollmann, and author J. Knolle, @noop title Spin-peierls instability of the u(1) dirac spin liquid (year 2023b), https://arxiv.org/abs/2307.12295 arXiv:2307.12295 [cond-mat.str-el] NoStop [Szasz et al.(2020)Szasz, Motruk, Zaletel, and Moore]szasz_chiral_2020 author author A. Szasz, author J. Motruk, author M. P. Zaletel, and author J. E. Moore, title title Chiral Spin Liquid Phase of the Triangular Lattice Hubbard Model: A Density Matrix Renormalization Group Study, https://doi.org/10.1103/PhysRevX.10.021042 journal journal Physical Review X volume 10, pages 021042 (year 2020), note publisher: American Physical SocietyNoStop [Chen et al.(2022)Chen, Chen, Gong, Sheng, Li, and Weichselbaum]weichselbaum author author B.-B. Chen, author Z. Chen, author S.-S. Gong, author D. N. Sheng, author W. Li, and author A. Weichselbaum, title title Quantum spin liquid with emergent chiral order in the triangular-lattice hubbard model, https://doi.org/10.1103/PhysRevB.106.094420 journal journal Phys. Rev. B volume 106, pages 094420 (year 2022)NoStop [Wang and Vishwanath(2006)]Wang06 author author F. Wang and author A. Vishwanath, title title Spin-liquid states on the triangular and kagomé lattices: A projective-symmetry-group analysis of schwinger boson states, https://doi.org/10.1103/PhysRevB.74.174423 journal journal Phys. Rev. B volume 74, pages 174423 (year 2006)NoStop [Ferrari and Becca(2019)]Ferrari_2019 author author F. Ferrari and author F. Becca, title title Dynamical structure factor of the J_1J_2 heisenberg model on the triangular lattice: Magnons, spinons, and gauge fields, https://doi.org/10.1103/PhysRevX.9.031026 journal journal Phys. Rev. X volume 9, pages 031026 (year 2019)NoStop [Ehlers et al.(2011)Ehlers, Podlesnyak, Niedziela, Iverson, and Sokol]CNCS author author G. Ehlers, author A. A. Podlesnyak, author J. L. Niedziela, author E. B. Iverson, and author P. E. Sokol, title title The new cold neutron chopper spectrometer at the spallation neutron source: Design and performance, https://doi.org/10.1063/1.3626935 journal journal Review of Scientific Instruments volume 82, pages 085108 (year 2011)NoStop [Mason et al.(2006)Mason, Abernathy, Anderson, Ankner, Egami, Ehlers, Ekkebus, Granroth, Hagen, Herwig, Hodges, Hoffmann, Horak, Horton, Klose, Larese, Mesecar, Myles, Neuefeind, Ohl, Tulk, Wang, and Zhao]mason2006spallation author author T. E. Mason, author D. Abernathy, author I. Anderson, author J. Ankner, author T. Egami, author G. Ehlers, author A. Ekkebus, author G. Granroth, author M. Hagen, author K. Herwig, author J. Hodges, author C. Hoffmann, author C. Horak, author L. Horton, author F. Klose, author J. Larese, author A. Mesecar, author D. Myles, author J. Neuefeind, author M. Ohl, author C. Tulk, author X.-L. Wang, and author J. Zhao, title title The spallation neutron source in oak ridge: A powerful tool for materials research, https://doi.org/10.1016/j.physb.2006.05.281 journal journal Physica B: Condensed Matter volume 385, pages 955 (year 2006)NoStop [Sullivan and Seidel(1968)]PhysRev.173.679 author author P. F. Sullivan and author G. Seidel, title title Steady-state, ac-temperature calorimetry, https://doi.org/10.1103/PhysRev.173.679 journal journal Phys. Rev. volume 173, pages 679 (year 1968)NoStop [Xing et al.(2019)Xing, Sanjeewa, Kim, Meier, May, Zheng, Custelcean, Stewart, and Sefat]PhysRevMaterials.3.114413 author author J. Xing, author L. D. Sanjeewa, author J. Kim, author W. R. Meier, author A. F. May, author Q. Zheng, author R. 
Custelcean, author G. R. Stewart, and author A. S. Sefat, title title Synthesis, magnetization, and heat capacity of triangular lattice materials naerse_2 and kerse_2, https://doi.org/10.1103/PhysRevMaterials.3.114413 journal journal Phys. Rev. Mater. volume 3, pages 114413 (year 2019)NoStop [Schollwöck(2011)]schollwock_density-matrix_2011 author author U. Schollwöck, title title The density-matrix renormalization group in the age of matrix product states, https://doi.org/https://doi.org/10.1016/j.aop.2010.09.012 journal journal Annals of Physics volume 326, pages 96 (year 2011)NoStop [Fishman et al.(2022)Fishman, White, and Stoudenmire]fishman_itensor_2022 author author M. Fishman, author S. White, and author E. Stoudenmire, title title The ITensor Software Library for Tensor Network Calculations, https://doi.org/10.21468/SciPostPhysCodeb.4 journal journal SciPost Physics Codebases , pages 4 (year 2022)NoStop [Haegeman et al.(2011)Haegeman, Cirac, Osborne, Pižorn, Verschelde, and Verstraete]haegeman_time-dependent_2011 author author J. Haegeman, author J. I. Cirac, author T. J. Osborne, author I. Pižorn, author H. Verschelde, and author F. Verstraete, title title Time-Dependent Variational Principle for Quantum Lattices, https://doi.org/10.1103/PhysRevLett.107.070601 journal journal Physical Review Letters volume 107, pages 070601 (year 2011), note publisher: American Physical SocietyNoStop [Haegeman et al.(2013)Haegeman, Osborne, and Verstraete]haegeman_post-matrix_2013 author author J. Haegeman, author T. J. Osborne, and author F. Verstraete, title title Post-matrix product state methods: To tangent space and beyond, https://doi.org/10.1103/PhysRevB.88.075133 journal journal Physical Review B volume 88, pages 075133 (year 2013), note publisher: American Physical SocietyNoStop [Haegeman et al.(2016)Haegeman, Lubich, Oseledets, Vandereycken, and Verstraete]haegeman_unifying_2016 author author J. Haegeman, author C. Lubich, author I. Oseledets, author B. Vandereycken, and author F. Verstraete, title title Unifying time evolution and optimization with matrix product states, https://doi.org/10.1103/PhysRevB.94.165116 journal journal Physical Review B volume 94, pages 165116 (year 2016), note publisher: American Physical SocietyNoStop [Vanderstraeten et al.(2019)Vanderstraeten, Haegeman, and Verstraete]vanderstraeten_tangent-space_2019 author author L. Vanderstraeten, author J. Haegeman, and author F. Verstraete, title title Tangent-space methods for uniform matrix product states, https://doi.org/10.21468/SciPostPhysLectNotes.7 journal journal SciPost Physics Lecture Notes , pages 007 (year 2019)NoStop [Yang and White(2020)]yang_time-dependent_2020 author author M. Yang and author S. R. White, title title Time-dependent variational principle with ancillary Krylov subspace, https://doi.org/10.1103/PhysRevB.102.094315 journal journal Physical Review B volume 102, pages 094315 (year 2020), note publisher: American Physical SocietyNoStop [Sugiura and Shimizu(2012)]PhysRevLett.108.240401 author author S. Sugiura and author A. Shimizu, title title Thermal pure quantum states at finite temperature, https://doi.org/10.1103/PhysRevLett.108.240401 journal journal Phys. Rev. Lett. volume 108, pages 240401 (year 2012)NoStop [Kawamura et al.(2017)Kawamura, Yoshimi, Misawa, Yamaji, Todo, and Kawashima]Kawamura2017 author author M. Kawamura, author K. Yoshimi, author T. Misawa, author Y. Yamaji, author S. Todo, and author N. 
Kawashima, title title Quantum lattice model solver ℋϕ, https://doi.org/10.1016/j.cpc.2017.04.006 journal journal Comp. Phys. Commun. volume 217, pages 180 (year 2017)NoStop [Ido et al.(2024)Ido, Kawamura, Motoyama, Yoshimi, Yamaji, Todo, Kawashima, and Misawa]Ido2024 author author K. Ido, author M. Kawamura, author Y. Motoyama, author K. Yoshimi, author Y. Yamaji, author S. Todo, author N. Kawashima, and author T. Misawa, title title Update of ℋϕ: Newly added functions and methods in versions 2 and 3, https://doi.org/10.1016/j.cpc.2024.109093 journal journal Comp. Phys. Commun. volume 298, pages 109093 (year 2024)NoStop [Prelov ššek and Kokalj(2018)]PhysRevB.98.035107 author author P. Prelov ššek and author J. Kokalj, title title Finite-temperature properties of the extended Heisenberg model on a triangular lattice, https://doi.org/10.1103/PhysRevB.98.035107 journal journal Phys. Rev. B volume 98, pages 035107 (year 2018)NoStop [Schnack et al.(2020)Schnack, Richter, and Steinigeweg]PhysRevResearch.2.013186 author author J. Schnack, author J. Richter, and author R. Steinigeweg, title title Accuracy of the finite-temperature Lanczos method compared to simple typicality-based estimates, https://doi.org/10.1103/PhysRevResearch.2.013186 journal journal Phys. Rev. Research volume 2, pages 013186 (year 2020)NoStop [Carleo and Troyer(2017)]Carleo_Science17_NQS author author G. Carleo and author M. Troyer, title title Solving the quantum many-body problem with artificial neural networks, https://doi.org/10.1126/science.aag2302 journal journal Science volume 355, pages 602 (year 2017)NoStop [Nomura(2021)]Nomura_JPCM21_RBMsymm author author Y. Nomura, title title Helping restricted boltzmann machines with quantum-state representation by restoring symmetry, https://doi.org/10.1088/1361-648x/abe268 journal journal Journal of Physics: Condensed Matter volume 33, pages 174003 (year 2021)NoStop [Reh et al.(2023)Reh, Schmitt, and Gärttner]Reh_PRB23_NQSsymm author author M. Reh, author M. Schmitt, and author M. Gärttner, title title Optimizing design choices for neural quantum states, https://doi.org/10.1103/PhysRevB.107.195115 journal journal Phys. Rev. B volume 107, pages 195115 (year 2023)NoStop [Chen and Heyl(2023)]Chen_arxiv23_MinSR author author A. Chen and author M. Heyl, @noop title Efficient optimization of deep neural quantum states toward machine precision (year 2023), https://arxiv.org/abs/2302.01941 arXiv:2302.01941 [cond-mat.dis-nn] NoStop [Kaul(2015)]Kaul_PRL15_TriQSL author author R. K. Kaul, title title Spin nematics, valence-bond solids, and spin liquids in SO(n) quantum spin models on the triangular lattice, https://doi.org/10.1103/PhysRevLett.115.157202 journal journal Phys. Rev. Lett. volume 115, pages 157202 (year 2015)NoStop [Pujari et al.(2016)Pujari, Lang, Murthy, and Kaul]Pujari_PRL16_QBT author author S. Pujari, author T. C. Lang, author G. Murthy, and author R. K. Kaul, title title Interaction-induced dirac fermions from quadratic band touching in bilayer graphene, https://doi.org/10.1103/PhysRevLett.117.086404 journal journal Phys. Rev. Lett. volume 117, pages 086404 (year 2016)NoStop [Nomura and Imada(2021)]Nomura_PRX21_SquareQSL author author Y. Nomura and author M. Imada, title title Dirac-type nodal spin liquid revealed by refined quantum many-body solver using neural-network wave function, correlation ratio, and level spectroscopy, https://doi.org/10.1103/PhysRevX.11.031034 journal journal Phys. Rev. 
X volume 11, pages 031034 (year 2021)NoStop [Viteritti et al.(2024)Viteritti, Rende, Parola, Goldt, and Becca]viteritti2024transformer author author L. L. Viteritti, author R. Rende, author A. Parola, author S. Goldt, and author F. Becca, @noop title Transformer wave function for the shastry-sutherland model: emergence of a spin-liquid phase (year 2024), https://arxiv.org/abs/2311.16889 arXiv:2311.16889 [cond-mat.str-el] NoStop
http://arxiv.org/abs/2406.19328v1
20240627165914
Subtractive Training for Music Stem Insertion using Latent Diffusion Models
[ "Ivan Villa-Renteria", "Mason L. Wang", "Zachary Shah", "Zhe Li", "Soohyun Kim", "Neelesh Ramachandran", "Mert Pilanci" ]
cs.SD
[ "cs.SD", "cs.LG", "eess.AS" ]
Subtractive Training for Music Stem Insertion using Latent Diffusion Models ============================================================= § ABSTRACT We present Subtractive Training[<subtractivetraining.github.io>], a simple and novel method for synthesizing individual musical instrument stems given other instruments as context. This method pairs a dataset of complete music mixes with 1) a variant of the dataset lacking a specific stem, and 2) LLM-generated instructions describing how the missing stem should be reintroduced. We then fine-tune a pretrained text-to-audio diffusion model to generate the missing instrument stem, guided by both the existing stems and the text instruction. Our results demonstrate Subtractive Training's efficacy in creating authentic drum stems that seamlessly blend with the existing tracks. We also show that we can use the text instruction to control the generation of the inserted stem in terms of rhythm, dynamics, and genre, allowing us to modify the style of a single instrument in a full song while keeping the remaining instruments the same. Lastly, we extend this technique to MIDI formats, successfully generating compatible bass, drum, and guitar parts for incomplete arrangements. § INTRODUCTION While impressive strides have been made in the field of generating fully-mixed music, the conditions for such generation are often abstract, relying on text or style descriptors <cit.>. These descriptors provide high-level guidance but little temporal or melodic control, limiting the practicality of such tools for musicians, who would like them to synergize with existing ideas or themes instead of forming completely new ones. For instance, a musician who is already proficient at a single instrument may have a musical idea that they would like to expand to other instruments. In this scenario, the ideal tool would not only `listen' to the musician's existing work but also literally build upon it by adding complementary waveforms to enrich the piece. Music is often the superposition of multiple `stems,' or audio waveforms representing the individual instruments, tracks, or performers in a piece. When summed synchronously, these audio waveforms complement one another and constitute a coherent piece of music. Thus, stems are codependent in the sense that any subset of stems imposes temporal and harmonic constraints on the remaining stems. By working within these constraints, musicians can produce a song by starting from a single musical idea and adding stems iteratively, ensuring that all stems sum together harmoniously. To aid in this process, our goal is to use existing text-to-audio diffusion models to generate stems that accompany existing music. We frame our task as a spectrogram-editing problem: given an audio spectrogram representing a musical piece and an instruction describing the stem to be added, we would like to generate a new spectrogram that adds the specified stem while maintaining the musical context and the cohesiveness of the piece. Inspired by recent work in text-based image editing <cit.>, we propose Subtractive Training for diffusion models. Our idea is to combine a large dataset of complete music mixes with 1) a variant of the dataset where a single stem has been removed, achieved by using pretrained music source separation tools, and 2) a set of edit instructions describing how the missing stem should be reintegrated, generated by combining a music captioning model with a large language model.
We then fine-tune a text-to-audio diffusion model using our complete music mixes as targets, and our incomplete music mixes and text prompts as input conditions. Our contributions are threefold. First, we show that our method can be used to generate compelling drum accompaniments to tracks that otherwise lack them. These additional stems both sound realistic and are sympathetic to the existing audio. Second, current text-to-audio diffusion models have been trained on an extremely large number of text-audio pairs, and thus can model a broad and diverse distribution of musical textures, styles, genres, and rhythms <cit.>. Since our method uses these text-to-audio diffusion models as a foundation, we show that we can control the re-insertion of a stem by modifying the text instruction. Thus, our method allows us to take a full song and modify the arrangement, timbre, and style of a specific instrument, while keeping the rest of the instruments the same. Lastly, we show that the Subtractive Training paradigm works in the space of symbolic music, by training a pitch-roll-based diffusion model from scratch to add guitar, bass, and drum stems. § BACKGROUND §.§ Text-Based Image Editing Our method can be viewed as a musical analogue to InstructPix2Pix <cit.>, an image editing procedure that trains a diffusion model to edit images based on text instructions. The procedure uses GPT-3.5 Turbo <cit.> and Stable Diffusion <cit.> to generate a large dataset of image-editing examples on which a conditional diffusion model is trained. Our method generates a similar dataset of text-guided spectrogram edits, focusing on stem insertion edits. Our task is similar to image inpainting <cit.>, where the goal is to infill masked portions of an image. However, instead of training the model to infill portions of an image that have been masked, we train the model to add audio stems that have been subtracted. Thus, in contrast to training procedures that are `masked,' our method is `subtractive,' hence the name `Subtractive Training.' §.§ Diffusion Models Diffusion models have emerged as a powerful class of generative models, particularly in the domain of image generation <cit.>. These models learn to generate samples from a data distribution by iteratively denoising a Gaussian noise signal, gradually refining it into something that represents a generated sample. Many diffusion models operate in a latent space, using an encoder-decoder framework. In this framework, a Variational Autoencoder (VAE) <cit.> is employed to extract deep latent vectors that represent the desired data (images or audio). The diffusion model is then trained to iteratively denoise Gaussian noise signals into latent vectors that can be decoded by the VAE's decoder to generate data samples. §.§ Controlled Music Generation Since WaveNet <cit.>, there has been a surge of generative music models. Some are instances of latent diffusion models as described above <cit.>. Other models use sequence modeling on audio tokens using transformers <cit.>. In the latter case, the training objective is to predict masked tokens, while our method relies on subtracted audio stems, as a natural analogue to the conception of music as a sum of individual stems. Work is progressing on music generation models controlled by lower-level features (e.g., temporal or rhythmic features). 
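To make the latent-diffusion background summarized above concrete, the sketch below shows DDPM-style ancestral sampling in a latent space. The tiny noise-prediction network, the number of steps, and the linear noise schedule are placeholders chosen for illustration; they stand in for the trained UNet and schedule of an actual text-to-audio model and are not taken from the paper.

import torch
import torch.nn as nn

# Placeholder noise-prediction network (stands in for the trained UNet).
eps_model = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

T = 50                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)    # simple linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_latent(batch=1, dim=64):
    """Iteratively denoise Gaussian noise into a latent vector."""
    z = torch.randn(batch, dim)
    for t in reversed(range(T)):
        t_emb = torch.full((batch, 1), t / T)
        eps = eps_model(torch.cat([z, t_emb], dim=-1))       # predicted noise
        mean = (z - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[t]) * noise               # ancestral sampling step
    return z

latent = sample_latent()

In a latent diffusion model, the returned latent would then be passed through the VAE decoder to obtain a spectrogram or image.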
MusicControlNet <cit.> is a music generation model with control of time-varying music attributes, like melody, dynamics, and rhythm, and is based on ControlNet <cit.>, a neural network architecture designed to add conditioning controls to pretrained text-to-image diffusion models. Concurrent work on music stem generation includes StemGen, which uses a non-regressive transformer-based architecture on audio tokens. Compared to existing work on stem generation, our method has the benefit of utilizing the incredible power of large, pretrained text-to-audio diffusion models. This allows us to control the generated stem according to a text instruction. Thus, our method is a way of distilling knowledge from large text-to-audio diffusion models for downstream applications, adding further to a field of flourishing study <cit.>. § METHOD Inspired by <cit.>, the goal of our method is to provide a pretrained text-to-audio diffusion model with a dataset of text-guided stem insertion examples. As an overview, our method involves generating a dataset of song spectrograms of complete mixes, which are coupled with the same songs missing a single stem (e.g., drums). For each pair of complete and stem-subtracted spectrograms, we also use a music captioning model and a large language model to generate a text instruction describing how the missing stem should be added to complete the spectrogram. Then, we fine-tune a pretrained text-to-audio diffusion model on the task of infilling the missing stem and reconstructing the full-mix spectrogram, given both the text instruction and the stem-subtracted spectrogram. §.§ Dataset Generation Our training procedure requires a large dataset of audio-audio-text triplets, each consisting of: * A full-mix spectrogram. * The same spectrogram, but where a single audio stem has been subtracted or removed. * Text instructions describing how each the spectrogram with a subtracted stem should be modified to re-insert the missing stem. A large dataset of such triplets does not exist. Thus, we contribute a large, novel dataset of text-guided stem-insertion examples by combining three preexisting datasets and by utilizing off-the-shelf source separation and music captioning tools. First, the data we use comes from three source datasets: * MusicCaps, a dataset of 5.5k caption-audio pairs from songs downloaded from YouTube. The captions are written by musicians, and each of the songs are 10 seconds long <cit.>. * MUSDB18, a music source separation dataset of 150 full-length music tracks (about 10 hours) with isolated drums, vocals, bass and accompaniment stems <cit.>. * MagnaTagATune, a music-tagging dataset containing 25,863 music clips, where each clip consists of 29-seconds-long excerpts belonging to one of the 5223 songs, 445 albums, and 230 artists <cit.>. We describe how we obtain our training examples in the following subsections. §.§.§ Full-Mix Spectrograms MagnaTagATune and MusicCaps already contain full-mix audio data, i.e., audio files where all instruments are playing simultaneously. In order to obtain full-mix spectrogram data from the datasets, we segment the songs into 5.11 s portions and compute magnitude spectrograms. For the MUSDB18 dataset, we combine all provided stems to obtain a full mix, then segment it into 5.11-second chunks, computing magnitude spectrograms for each segment. §.§.§ Stem-subtracted Spectrograms Our goal is to pair each full-mix spectrogram with a version of it where a specific instrument stem has been removed (e.g., drums). 
Our method of achieving this goal depends on the source dataset. Obtaining stem-subtracted audio from MUSDB18 is trivial, since each track comes pre-stemmed; we simply combine all of the stems of interest except for the subtracted stem. No clean separation of stems is provided in the MagnaTagATune or MusicCaps datasets. Thus, to subtract a particular stem from the full-mix segment, we use Demucs <cit.>, a state-of-the-art music source separation model, to decompose the full mix into the subtracted stem and the remaining mix (e.g., the mix without drums). We segment the stem-subtracted mixes into 5.11-second chunks corresponding to the same time intervals as the full-mix segments, and compute their magnitude spectrograms. This process results in a dataset of paired full-mix and stem-subtracted spectrograms, where each pair represents the same 5.11 s musical excerpt with and without the specified instrument stem. §.§.§ Edit Instructions To guide the text-to-audio diffusion model in generating the missing stem, we create a dataset of edit instructions that describe how the stem should be reintroduced. We first leverage the LP-MusicCaps captioning model to generate captions for all full-mix spectrograms. Next, we employ GPT-3.5 Turbo, a state-of-the-art language model, to generate edit instructions based on the newly generated captions. The prompt template used to generate the edit instructions takes the name of the desired instrument stem (e.g., drums), an action word (e.g., "add" or "insert"), and the segment's caption as input. The language model is then instructed to output an edit instruction describing how to add the specified stem to the clip portrayed in the caption, assuming the stem was not initially present. The inclusion of action words encourages diversity in the generated edit instructions, enhancing the richness of the resulting dataset. The complete prompt used for generating edit instructions is detailed in the Supplementary Materials. By applying these processes, we obtain a dataset consisting of 83.5k training examples, each comprising a pair of full-mix and stem-subtracted spectrograms, their corresponding captions, and a generated edit instruction. This dataset forms the foundation for our subsequent experiments and analyses. The resulting edit instructions, along with the stem-subtracted spectrograms, serve as input conditions for the diffusion model during fine-tuning. This allows the model to learn how to generate the missing stem based on both the existing musical context and the text instructions, enabling a high level of control over the generated stem's characteristics. §.§ Subtractive Learning Building upon the idea of generating missing stems based on existing musical context and text instructions, we propose a novel approach called Subtractive Learning, which we define as follows: Consider the joint distribution p(𝐱, 𝐲), where 𝐱 represents a complete data sample and 𝐲 represents an associated label or condition. In our context, 𝐱 is a full-mix audio spectrogram, and 𝐲 is an edit instruction describing how to add a specific instrument stem to the mix. We decompose 𝐱 into two components: 𝐱_partial and 𝐱_missing, such that 𝐱 = f(𝐱_partial, 𝐱_missing), where f is a function that combines the two components to reconstruct the complete data sample. In our case, 𝐱_partial represents the stem-subtracted spectrogram (e.g., a song with the drum stem removed), and 𝐱_missing represents the missing instrument stem (e.g., the drum stem). 
Our goal is to learn the conditional distribution p(𝐱_missing | 𝐲, 𝐱_partial), which corresponds to the probability of generating the missing instrument stem given the edit instruction 𝐲 and the stem-subtracted spectrogram 𝐱_partial. Diffusion models are particularly well-suited for this task, as they learn to model the data distribution by iteratively denoising a Gaussian noise signal conditioned on the input data. In our case, the diffusion model learns to generate the missing stem 𝐱_missing by conditioning on both the edit instruction 𝐲 and the stem-subtracted spectrogram 𝐱_partial. By training the model to estimate the conditional distribution p(𝐱_missing | 𝐲, 𝐱_partial), we enable it to generate the missing instrument stem that is coherent with the provided audio context and follows the given edit instruction. §.§ Fine-tuning the Diffusion Model Latent diffusion models generate data examples by beginning with a latent vector of noise and iteratively denoising it using a UNet <cit.> into a latent vector that can be decoded into a data example. Our method utilizes a pretrained text-to-audio latent diffusion model, which we fine-tune on our newly created dataset of audio-audio-text triplets. We begin the fine-tuning process by loading the weights of a pretrained latent diffusion model, and continuing its training. During the fine-tuning process, we provide the stem-subtracted spectrogram 𝐱_partial as an input to the denoising UNet, replacing the noisy latent representation. We also input the text embedding of the edit instruction 𝐲 into the diffusion model. The UNet is then trained to reconstruct full-mix spectrogram. § EXPERIMENTS §.§ Experimental Setup §.§.§ Model Architecture For our experiments, we utilize Riffusion, a latent diffusion model that generates mel spectrograms conditioned on text prompts. Riffusion was created by fine-tuning the Stable Diffusion v1-5 checkpoint to operate on audio data. The model accepts text prompts and 512x512 images as input, and outputs 512x512 mel spectrogram images. An overview of the model architecture is shown in Figure <ref>. As a baseline, we compare against SDEdit <cit.>, a diffusion-based style transfer method that is designed to edit images based on a given instruction, which we apply to Riffusion. The baseline method is similar to using our model without Subtractive Training with some nuances. We provide Riffusion with our stem-subtracted spectrogram, and give it a text-conditioning signal instructing it to re-insert the missing stem. The SDEdit baseline additionally adds a small amount of noise to the latent representation of the stem-subtracted spectrogram before the denoising process begins. §.§.§ Evaluation Dataset For evaluation, we create a separate test set using the MUSDB18 dataset <cit.>. We extract 5.11 second clips from the MUSDB18 test split and perform the same stem subtraction, mel spectrogram computation, and edit instruction generation process as we did for the training data. Using the MUSDB18 test set for evaluation helps minimize the effect of stem leakage on the generated outputs, since residual parts of the drum track cannot be used to guide the drum-insertion. This issue of spectral leakage is discussed further in Section <ref>. In total, the evaluation dataset contains 2,160 examples, each consisting of a 5.11 second full-mix clip, a corresponding stem-subtracted clip, and both a long-form text edit instruction and a shortened 5-word text caption. 
The short text captions are generated by prompting GPT-4 to summarize the full edit instructions, and are used as conditioning signals for the SDEdit baseline. §.§.§ Training Details We fine-tune the pretrained Riffusion model on our dataset using the training procedure from InstructPix2Pix <cit.>. The weights of the VAE encoder and decoder and the text encoder are frozen, and only the UNet is updated. We train the model for 300k steps with a batch size of 4 on a single NVIDIA A10G GPU, which equates to roughly 15 epochs. We use the AdamW optimizer with β_1=0.9, β_2=0.999, weight decay of 0.02, and learning rate of 10^-4 with a cosine decay schedule and 500 warmup steps. The conditioning dropout probability is set to 0.05 during training. §.§.§ Evaluation Metrics We evaluate our model and the SDEdit baseline using several metrics designed to assess the quality and diversity of the generated audio, following a similar procedure from <cit.>: * Fréchet Distance (FD): Similar to frechet inception distance in image generation, FD measures the similarity between generated and real audio features, extracted using a state-of-the-art audio classifier model called PANNs <cit.>. Lower FD indicates generated examples are more similar to real examples. * Fréchet Audio Distance (FAD): Similar to FD, but uses features from a VGGish model <cit.> instead of PANNs. FAD may be less reliable than FD due to the potential limitations of the VGGish model. * Kullback-Leibler Divergence (KLD): Measures the difference between the distributions of generated and real audio based on classifier predictions. Lower KLD suggests that the generated distribution is closer to the real data distribution <cit.>. We compute KLD for each pair of generated and target examples and report the average. * Inception Score (IS): Estimates the quality and diversity of generated examples based on the entropy of classifier predictions. Higher IS generally indicates better quality and diversity. <cit.> For the SDEdit baseline, we compare two variants using either 20 or 50 denoising steps. Our model is evaluated using 20 denoising steps. Results on all metrics are shown in Table <ref>. §.§ Extension to MIDI To further demonstrate the generalizability of Subtractive Training, we extend our approach to the domain of symbolic music generation. We represent MIDI data as 3-channel piano roll images, where each channel corresponds to a specific instrument: drums, bass, or guitar. The piano roll values are binary, indicating the presence or absence of a note at each time step and pitch. We train three separate diffusion models, one for each instrument. For our architecture and training procedure, we use the binary-noise based diffusion model described in <cit.>. We use a large dataset of Guitar Pro tabs from DadaGP <cit.> to train our models, from which we transcribe 19,433 pitch-roll chunks. The input to each model is a piano roll with two channels filled with the context instruments and the remaining channel initialized with noise. For example, the drum model takes in piano rolls with the bass and guitar parts intact, but with the drum part replaced by noise. The diffusion process then generates the missing drum part conditioned on the bass and guitar parts. Figure <ref> shows generated results from held-out data, where we can observe that notes generated with our model align well with the stems they are conditioned on. Qualitative examples are provided on the project website. 
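As a sketch of the conditioning input just described, the snippet below keeps the binary notes of the two context channels and replaces the target instrument's channel with binary noise before it is handed to the corresponding diffusion model. The piano-roll resolution and channel ordering are assumptions made for illustration.

import numpy as np

PITCHES, STEPS = 128, 256          # assumed piano-roll resolution
CHANNELS = {"drums": 0, "bass": 1, "guitar": 2}

def make_model_input(piano_roll, target="drums", rng=np.random.default_rng(0)):
    """Replace the target instrument's channel with binary noise, keep the context channels."""
    assert piano_roll.shape == (3, PITCHES, STEPS)
    x = piano_roll.copy().astype(np.float32)
    noise = rng.integers(0, 2, size=(PITCHES, STEPS)).astype(np.float32)
    x[CHANNELS[target]] = noise    # the diffusion model regenerates this channel
    return x

roll = np.zeros((3, PITCHES, STEPS), dtype=np.float32)   # toy example: mostly empty score
roll[CHANNELS["bass"], 40, ::4] = 1.0                     # a repeating bass note as context
model_input = make_model_input(roll, target="drums")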
Given any subset of the three instrument parts, the appropriate model can generate the missing part(s) based on the provided context. This extension highlights the flexibility of our approach and its potential for generating compatible instrumental parts in a symbolic music setting, enabling assistive composition tools that can suggest complementary parts based on incomplete scores or recordings. §.§ Results Table <ref> presents the evaluation results comparing our method against the SDEdit baselines. Our model outperforms both SDEdit variants across all metrics, indicating that the outputs generated by our model are significantly closer to the target audio than those produced by SDEdit. Specifically, we observe a 22.09 decrease in Fréchet Distance and a 2.78 decrease in FAD compared to the best-performing SDEdit variant. Moreover, our method achieves a substantial 2.24 decrease in KLD and a modest increase in Inception Score from 1.38 to 1.41. These results demonstrate the effectiveness of our Subtractive Training approach in generating high-quality and diverse drum stems that are well-aligned with the target audio. The superior performance of our model can be attributed to its ability to leverage the rich knowledge captured by the pretrained text-to-audio diffusion model and adapt it to the specific task of stem insertion, guided by natural language instructions. §.§ Qualitative Analysis To further assess the quality of the generated drum stems, we provide qualitative examples of generated audio on our website[subtractivetraining.github.io]. Figure <ref> displays the mel spectrograms of the original full-mix audio, the stem-subtracted input, and the generated output for two representative examples from the test set. We can see that our model inserts the drum stem into the spectrogram by preserving the original content in the background and overlaying drum onsets which span the majority of the frequency bins. Moreover, we see that the onsets aren't exactly the same as the onsets from the target, indicating that for this example, the model does not take advantage of data leakage from stem-bleeding. These examples showcase our model's ability to reconstruct realistic and coherent drum patterns that seamlessly blend with the existing instrumental parts. In addition to stem reconstruction, our method exhibits intriguing style transfer capabilities. By modifying the text instruction, we can guide the model to generate drum stems that adhere to specific genres, dynamics, or stylistic elements. For instance, we demonstrate the successful insertion of jazzy drums into a reggae song in Figure <ref>, where we can see high presence of high-frequency content compared to the full mix, which correspond to repeated snare hits, highlighting the model's flexibility in adapting to different musical contexts. We also provide an example of style transfer focused on dynamics, where the generated drum stem reflects the desired intensity and expressiveness. These and more examples are shown in our website. These qualitative results underscore the versatility and creative potential of our Subtractive Training approach. By enabling fine-grained control over the characteristics of the generated stem through natural language instructions, our method opens up new possibilities for assistive music composition and arrangement tools. 
§ DISCUSSION §.§ Limitations and Future Work While our Subtractive Training method demonstrates promising results in generating high-quality and stylistically appropriate drum stems, there are certain limitations that warrant further investigation and improvement. One notable issue is the presence of high-frequency leakage in the source-separated audio used as training data. Due to imperfections in the source separation process, slight remnants of the original drum patterns can be observed in the high-frequency range of the stem-subtracted audio. This leakage introduces a bias during training, causing the model to generate drum patterns that closely mimic the original drums. Future work should explore techniques to mitigate this leakage, such as employing more advanced source separation algorithms or incorporating additional pre-stemmed datasets to reduce the reliance on synthetically generated data. Another limitation is the model's occasional failure to generate proper drum tracks, particularly in the EDM genre. We hypothesize that this issue may be a derivative of the model's bias towards high-frequency leakage patterns. EDM often features prominent high-frequency synth sounds that the model may misinterpret as leakage, leading to the generation of unusual drum patterns that incessantly hit cymbals in an attempt to match the synth patterns. Addressing the leakage problem and improving the model's ability to distinguish between genuine high-frequency content and artifacts would likely alleviate this issue. To further enhance the quality and controllability of the generated stems, future work could explore the following directions: * Experiment with the ratio of synthetically source-separated data to pre-stemmed data: More detailed investigation on the optimal balance between synthetically generated and pre-stemmed data may help mitigate the impact of data leakage and improve the model's generalization capabilities. * Extend to other stems: Once the issues with drum stem generation are resolved, the Subtractive Training approach should be extended to generate other instrumental stems, such as bass, guitar, or vocals, to enable more comprehensive music production and arrangement tools. * Explore alternative diffusion architectures: Investigating and adapting state-of-the-art diffusion architectures specifically designed for audio generation may lead to improved performance and increased flexibility in modeling complex musical structures. * Incorporate larger and more diverse datasets: Expanding the training data to include a wider range of musical genres, styles, and instrumentation would enhance the model's versatility and ability to handle diverse musical contexts. * Refine edit instruction generation: Developing more sophisticated methods for generating edit instructions, such as leveraging state-of-the-art Music QA LLMs could improve the quality and specificity of the generated stems. By addressing these limitations and exploring the suggested future directions, we believe that Subtractive Training can be further refined and extended to become a powerful tool for assistive music composition and production. § CONCLUSION In this paper, we introduced Subtractive Training, a novel approach for synthesizing individual musical instrument stems using pretrained text-to-audio diffusion models. 
Our experimental results demonstrate the effectiveness of Subtractive Training in generating high-quality and stylistically appropriate drum stems, outperforming baseline methods across various evaluation metrics. We also extended Subtractive Training to the domain of symbolic music generation, successfully generating compatible bass, drum, and guitar parts for incomplete MIDI arrangements. Acknowledgements This work was supported in part by the National Science Foundation (NSF) under Grants ECCS-2037304 and DMS-2134248; in part by the NSF CAREER Award under Grant CCF-2236829; in part by the U.S. Army Research Office Early Career Award under Grant W911NF-21-1-0242; and in part by the Office of Naval Research under Grant N00014-24-1-2164. § APPENDIX §.§ Text Prompt The prompt used to generate the edit instructions from the music captions is built from three placeholders. The stem placeholder corresponds to the stem we want to subtract from our data, which in this case is "drums". The caption placeholder corresponds to the LP-MusicCaps-generated caption of our song. The action-word placeholder comes from the set {'Insert', 'Add', 'Generate', 'Enhance', 'Put', 'Augment'}. The action word was chosen uniformly at random, and was included in order to create linguistic variety in the instructions that would be used as training input for the model.
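A hedged sketch of how such an edit instruction could be requested is given below. The exact prompt wording used in the paper is not reproduced here, so the template string is illustrative, and the call assumes the official OpenAI Python client (openai>=1.0); only the action-word set and its uniform sampling follow the description above.

import random
from openai import OpenAI  # assumed client library

ACTION_WORDS = ["Insert", "Add", "Generate", "Enhance", "Put", "Augment"]

def make_edit_instruction(stem, caption, client=None):
    """Ask GPT-3.5 Turbo for an instruction describing how to add `stem` to the captioned clip."""
    action = random.choice(ACTION_WORDS)   # uniform choice adds linguistic variety
    prompt = (
        f"{action} {stem} to the following clip, assuming the {stem} are not present yet. "
        f"Clip description: {caption} "
        f"Reply with a single edit instruction."
    )
    client = client or OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example (requires an API key):
# instruction = make_edit_instruction("drums", "A mellow reggae groove with bass and organ.")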
http://arxiv.org/abs/2406.18283v1
20240626120848
Cascaded multi-phonon stimulated Raman scattering near second-harmonic-generation in thin-film lithium niobate microdisk
[ "Yuxuan He", "Xiongshuo Yan", "Jiangwei Wu", "Xiangmin Liu", "Yuping Chen", "Xianfeng Chen" ]
physics.optics
[ "physics.optics" ]
High-Q factor microresonators are excellent platforms for the study of nonlinear optics by confining photons in the cavity for a long time through total internal reflection, which greatly enhances the light-matter interaction and increases the energy density of the optical field in the cavity<cit.>. Lithium niobate (LN) has been identified as a promising material with a high second-order susceptibility (χ^(2))<cit.>, high electro-optic (EO) coefficient<cit.>, and a wide transparency window, making it an excellent contender for use in integrated photonics. The maturation of thin-film lithium niobate (TFLN) fabrication technologies over recent years is also paving the way for lithium niobate applications in next-generation photonic integrated circuits (PICs)<cit.>. The most advanced micro-nanofabrication techniques are now available, enabling the Q factor of TFLN microresonators to be increased to the order of 10^8, approaching its theoretical limit<cit.>. The characteristics of TFLN microresonators, including high-Q factors and small volume, facilitate the realization of various strong nonlinear effects. Previously, effects such as second-harmonic generation (SHG)<cit.>, third-harmonic generation (THG)<cit.>, sum-frequency generation (SFG)<cit.> and spontaneous parametric down conversion (SPDC)<cit.> on TFLN microresonators have been extensively studied. Furthermore, phenomena such as optical parametric oscillation (OPO)<cit.>, optical frequency comb (OFC)<cit.>, and cavity optomechanics<cit.> have also been extensively demonstrated on TFLN microresonators.
TFLN microresonators are also becoming increasingly important in optical amplification<cit.>, optical communications<cit.> and optical sensing<cit.>.Stimulated Raman scattering (SRS), a third-order nonlinear nonparametric process, has been demonstrated on a variety of photonic integration platforms, including silicon<cit.>, silicon dioxide<cit.>, diamond<cit.>, and aluminum nitride<cit.>. By aligning the pump wavelength and the Stokes light wavelength, which is related to the phonon vibrational frequency, with the cavity modes, SRS can provide new wavelengths that are different from the pump. This contributes to the realization of integrated Raman lasers with low pump levels in the continuous-wave (cw) regime. However, as a Raman-active crystalline material with different polarization configurations and multiple strong vibrational phonon branches<cit.>, the Raman effect in LN integrated photonic devices has not been extensively studied. Recent works have investigated cascaded Raman lasing<cit.> in TFLN microresonators and the effect of SRS on the Kerr comb formation<cit.>. However, the majority of the works is concentrated on the SRS in the vicinity of the optical communication band (C-Band). In order to ascertain the potential of the LN Raman effect for wavelength conversion, it is necessary to investigate the SRS in other bands of the LN microresonators.In this paper, we demonstrate the generation of cascaded multi-phonon Raman signals near SHG peak and related cascaded SFG processes by modal-phase-matching condition in an X-cut high-Q factor LN microdisk cavity under cw optical pumping at around 1543 nm. We fabricated TFLN microdisk cavity with Q factors higher than 8×10^5. The high Q and small mode volume of the WGM modes in the microdisk compensate for the small spatial mode overlap between the interacting modes, allowing the SRS effect and cascaded nonlinear effects to be observed. The spectrum of multi-phonon Raman signals and their cascaded SFG signals can be changed by tuning the pump frequency within a small interval. Furthermore, we observed the generation of multi-color visible light in the LNOI microdisk cavity under the optical microscope. We fabricate the LN microdisk resonator with a radius of 100 μ m using X-cut TFLN with a LN layer of 300 nm on the top of a silica layer with a thickness of 2 μ m, and the bottom layer is a LN substrate with a thickness of 500 μ m. We use photolithography-assisted chemo-mechanical polishing (CMP) to obtain smoother sidewalls for the microdisk and thus higher Q factors. First, a layer of chromium (Cr) is deposited on the LN film by evaporation, and photoresist is spin-coated on the surface of the film, followed by exposure of a circular pattern using ultraviolet (UV) lithography. The microdisk with a flat surface and smooth sidewalls is obtained through a process of two wet etching and CMP. Finally, the silica under the LN microdisk was etched with the buffered oxide etch (BOE) method to form pillars to support the suspended microdisk. Our experimental setup is schematically depicted in Fig. 1(a). The transmission spectrum characterizing the optical modes of the microdisk was measured before our experiment. To avoid thermal and nonlinear optical effects, the transmission spectrum is scanned using a very low input power over the wavelength range from 1520 nm to 1570 nm, as shown in Fig. 1(b). The high Q-factor of the mode around 1564.90 nm is estimated to be 8.4×10^5 by the Lorentz fitting (red solid line). 
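The fitting routine is not shown in the paper, but a loaded Q of this kind is commonly estimated by fitting a Lorentzian dip to the transmission trace and taking Q = λ_0/FWHM. The sketch below does this on synthetic data centered at 1564.90 nm; the dip depth, noise level, and initial guesses are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(lam, lam0, fwhm, depth, offset):
    """Transmission dip of a resonance centered at lam0 with full width at half maximum fwhm."""
    return offset - depth / (1.0 + ((lam - lam0) / (fwhm / 2.0)) ** 2)

# Synthetic transmission data around 1564.90 nm, standing in for the measured trace.
lam = np.linspace(1564.88, 1564.92, 400)
true_fwhm = 1564.90 / 8.4e5                       # width implied by Q = 8.4e5
data = lorentzian_dip(lam, 1564.90, true_fwhm, 0.8, 1.0)
data += np.random.default_rng(1).normal(0, 0.01, lam.size)

popt, _ = curve_fit(lorentzian_dip, lam, data, p0=[1564.90, 0.002, 0.5, 1.0])
lam0_fit, fwhm_fit = popt[0], abs(popt[1])
print(f"loaded Q ~ {lam0_fit / fwhm_fit:.2e}")    # ~8.4e5 for this synthetic trace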
Due to the lack of a tunable laser source, it is not possible to accurately measure the Q-factor in the visible range. However, we can expect the Q-factor to be similar to that of the C-band. We first investigate the SHG process in our microdisk. We perform a scanning operation on the pump with the wavelength range from 1520 nm to 1570 nm. Within the scanning range, high intensity SHG signals can be generated at multiple pump wavelengths, indicating that the phase-matching condition of SHG is widely satisfied in the microdisk. It is due to the fact that our microdisk has a large radius, which leads to the emergence of a large number of higher-order modes. It facilitates the realization of modal-phase-matching condition for the SHG process. Next, we tune the pump wavelength to 1543.59 nm and fix the power at 21.78 mW, and observe multi-phonon Raman spectral lines near the SHG peak as shown in Fig. 2. The SHG peak appears at 771.18 nm. By comparing the wavelength of these spectral lines to the wavelength of the SHG peak and calculating the difference value of the wavenumber υ̃ = 1/λ_SHG-1/λ, the frequency shifts of the spectral lines at 786.42 nm, 790.66 nm, and 827.45 nm are 251 cm^-1, 319.5 cm^-1 and 881 cm^-1. Referring to the Raman spectra of the LN crystal<cit.>, we can find that the spectral lines at 786.42 nm, 790.66 nm, and 827.45 nm are the first-order Raman signals near SHG correlated to the Raman-active phonons of (1A_1  TO), (4E  TO), and (4A_1  LO). Here A and E are the two symmetric polarization directions of the LN crystal, and TO and LO are the transverse and longitudinal optical phonon modes of the LN crystal. These three peaks are labeled by blue, green and purple pillars in Fig. 2, respectively. In addition, we also observe spectral lines which appear at 802.86 nm and 810.08 nm. By comparing the wavelength of the spectral line at 802.86 nm to the first-order Raman signal at 790.66 nm, the frequency shift is 192 cm^-1 which is corresponded to Raman active phonon of (1E  LO). Similarly, by comparing the wavelength of the spectral line at 810.08 nm to the first-order Raman signal at 786.42 nm, the frequency shift is 371 cm^-1 which is corresponded to Raman active phonon of (5E  TO). The spectral lines at 802.86 nm and 810.08 nm can be seen as cascaded Raman signals associated with the first-order Raman signals mentioned above. The relationship between Raman spectral lines and Raman active phonons are shown in Table 1. It is unlikely that these first-order and cascaded Raman signals are directly triggered by the SHG signal due to the low intensity of the SHG signal. Therefore, we believe that the multi-color Raman signals observed in our experiment originates from the Raman signals generated by the pump light in the C-Band through SHG process. Due to the limitations of the range of our spectrometer, the Raman signals near the pump wavelength cannot be observed directly. In fact, we also observe signal peaks at 845.53 nm and 893.05 nm. They could be generated by the same reasons mentioned above. However, we do not introduce them here due to the low intensity measured. While keeping the power unchanged and further tuning the pump wavelength to 1544.18 nm, we observe Raman lines emission near SHG signal accompanied by THG and SFG processes as shown in Fig 3(a). The “flat top” spectral line on the left represents the high intensity of SHG, and the “flat top” spectral line on the right probably represents the high intensity of Raman line. 
It shows that the intensity of the spectral lines exceeds the detection range of our spectrometer. The spectral line at 854.67 nm may be the cascaded Raman signal which is related to the Raman active phonons of (4A_1  LO) and (5E  TO) according to our analysis. More importantly, we record the SFG and THG signals in the visible range. They are decorated with green pillars as shown in Fig. 3. The spectral line located at 530.68 nm is generated by the SFG process of the pump light and the Raman signal near SHG which means the satisfaction of the modal-phase-matching condition. Further, we record the spectrum which is shown in Fig. 3(b) after tuning the pump wavelength to 1544.20 nm and increasing the pump power to 24.64 mW. Not only does the intensity of the SHG and Raman spectral lines near 800 nm continue to increase, but spectral lines also appear near 900 nm. These spectral lines may be generated by the SFG processes between the Raman spectral lines produced directly by the pump in the C-band. It is worth noting that there is a significant increase in the number of SFG spectral lines in the visible range. This indicates that the modal-phase-matching condition between the fundamental modes near the pump and the Raman signals near SHG gets further satisfaction under the adjustment of the pump wavelength and power. But it is hard to figure out which Raman line is involved in the process due to the "flat top" phenomenon. We also observe the spatial mode intensity profile recorded by the optical microscope. The emission of green light and red light is demonstrated in Fig. 3(c) and Fig. 3(d), respectively. It should be noted that purple light also appears under the optical microscope but we cannot record the corresponding frequency components by our spectrometer. This is mainly due to the fact that we use only one tapered fiber for input and output. The signals at the shorter wavelength cannot be efficiently coupled out from the microdisk cavity through the tapered fiber designed for telecom band coupling. Fig. 3(f) demonstrates the mixed emission of the different color we mentioned above which shows the multi-color spectrum we obtained from our experiment. Finally, we tune the pump wavelength to 1541.63 nm and the pump power to 19.54 mW, and the spectrum is shown in Figure. 4(a). The “flat top” spectral line decorated with dark blue pillar represents the high intensity of SHG. We exhibit only one Raman spectral line at 808.59 nm which may be a cascaded signal related to Raman active phonons of (1A_1  TO) and (5E  TO). Forward tuning the pump wavelength to 1541.68 nm, we record the spectrum shown in Fig. 4(b). Besides the Raman line at 808 nm, we obtain another cascaded Raman line at 850.72 nm which may be related to Raman active phonons of (4A_1  LO) and (4E  TO). Reverse tuning the pump wavelength to 1541.66 nm, we record the spectrum shown in Fig. 4(c). Besides the Raman lines mentioned above, we obtain the third Raman line at 826.46 nm which may be related to Raman active phonons of (4A_1  LO) and the forth cascaded Raman line at 802.86 nm which may be related to Raman active phonons of (4E  TO) and 1E  LO). As shown in Fig. 4, by changing the pump wavelength, we can make fine tuning of the specific Raman and cascaded Raman spectral lines in the microdisk. This is essentially due to the adjustment of the modal-phase-matching condition. 
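The wavenumber shifts quoted above (for example 251, 319.5, and 881 cm^-1 relative to the 771.18 nm SHG peak) follow directly from υ̃ = 1/λ_SHG - 1/λ; a few lines of Python reproduce them:

# Raman shift in cm^-1 between the SHG reference line and an observed spectral line,
# with wavelengths given in nm (1 nm = 1e-7 cm).
def raman_shift_cm1(lambda_shg_nm, lambda_line_nm):
    return 1.0 / (lambda_shg_nm * 1e-7) - 1.0 / (lambda_line_nm * 1e-7)

shg = 771.18
for line in (786.42, 790.66, 827.45):
    print(f"{line} nm -> {raman_shift_cm1(shg, line):.1f} cm^-1")
# 786.42 nm -> 251.3, 790.66 nm -> 319.5, 827.45 nm -> 881.8, matching the reported shifts.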
It is worth mentioning that the spectral width of the "flat top" spectral line of SHG may cause the Raman spectral lines under 800 nm to be masked. Our experiment has the characteristic of broadband matching and tunable operation. This is mainly attributed to the unique quasi-phase-matching condition in the X-cut microdisk and the multi-resonance condition of the microdisk. As transverse-electrically polarized light waves in the X-cut microdisk experience a rotating crystal orientation during propagation, the effective refractive index and effective nonlinear coefficient oscillate with the azimuthal angle of the microdisk<cit.>. This results in a relaxation of the stringent conditions for quasi-phase-matching. At the same time, the complex higher-order modes in the microdisk form a multiple-resonance background. All these conditions make the phase-matching conditions easy to fulfill and adjust for the stimulated Raman scattering in our experiments. However, the spatial modal overlap factor between these interacting modes may be small<cit.>, which can be compensated by the high quality factor of the microdisk. In conclusion, in this letter, we observed on-chip multi-phonon cascaded Raman signals near the SHG wavelength range. Cascaded Raman signals and related SFG signals are generated in a high-Q factor X-cut TFLN microdisk cavity by satisfying the modal-phase-matching condition with a cw pump operating in the C-band at an input power of about 20 mW. This work expands the way for the realization of on-chip wavelength conversion over an ultra-wide wavelength range. Funding This work was supported by the National Natural Science Foundation of China (Grant No. 12134009), and Shanghai Jiao Tong University (SJTU) (Grant No. 21X010200828). Acknowledgments The authors thank the Center for Advanced Electronic Materials and Devices (AEMD) of Shanghai Jiao Tong University (SJTU) for the support in device fabrication. Disclosures The authors declare no conflicts of interest. Data availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
http://arxiv.org/abs/2406.18220v1
20240626100824
Guiding Video Prediction with Explicit Procedural Knowledge
[ "Patrick Takenaka", "Johannes Maucher", "Marco F. Huber" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Guiding Video Prediction with Explicit Procedural Knowledge Patrick Takenaka^1,2, Johannes Maucher^1, Marco F. Huber^2,3 Institute for Applied AI, Hochschule der Medien Stuttgart, Germany^1 Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany^2 Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart, Germany^3 {takenaka,maucher}@hdm-stuttgart.de, marco.huber@ieee.org July 1, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================= empty § ABSTRACT We propose a general way to integrate procedural knowledge of a domain into deep learning models. We apply it to the case of video prediction, building on top of object-centric deep models and show that this leads to a better performance than using data-driven models alone. We develop an architecture that facilitates latent space disentanglement in order to use the integrated procedural knowledge, and establish a setup that allows the model to learn the procedural interface in the latent space using the downstream task of video prediction. We contrast the performance to a state-of-the-art data-driven approach and show that problems where purely data-driven approaches struggle can be handled by using knowledge about the domain, providing an alternative to simply collecting more data. § INTRODUCTION The integration of expert knowledge in deep learning systems reduces the complexity of the overall learning problem, while offering domain experts an avenue to add their knowledge into the system, potentially leading to improved data efficiency, controllability, and interpretability. <cit.> showed in detail the various types of knowledge that are currently being integrated in deep learning, ranging from logic rules to regularize the learning process <cit.>, to modelling the underlying graph structure in the architecture <cit.>. Especially in the physical sciences, where exact and rigid performance of the model is of importance, data-driven systems have shown to struggle on their own, and many complex problems cannot be described through numerical solvers alone <cit.>. This highlights the need for hybrid modelling approaches that make use of both theoretical domain knowledge, and of collected data. When viewing this approach from the perspective of deep learning, if the model is able to understand and work with integrated domain knowledge, it could potentially render many data samples redundant w.r.t. information gain. In addition to the recognized knowledge integration categories <cit.>, we propose to view procedural knowledge described through programmatic functions as its own category, as it is equally able to convey domain information in a structured manner as other types, while bringing with it an already established ecosystem of definitions, frameworks, and tools. Such inductive domain biases in general can help models to obtain a more structured view of the environment <cit.> and lead them towards more desirable predictions by either restricting the model hypothesis space, or by guiding the optimization process <cit.>. 
We argue that by incorporating procedural knowledge we can give neural networks powerful learning shortcuts where data-driven approaches struggle, and as a result reduce the demand for data, allow better out-of-distribution performance, and enable domain experts to control and better interpret the predictions. In summary, our contributions are: * Specification of a general architectural scheme for procedural knowledge integration. * Application of this scheme to video prediction, involving a novel latent space separation scheme to facilitate learning of the procedural interface. * Performance analysis of our proposed method in contrast to a purely data-driven approach. The paper is structured as follows: First, our proposed procedural knowledge integration scheme is introduced in Sec. <ref>, followed by its specification for the video prediction use case in Sec. <ref>. We show relevant related work in Sec. <ref> and continue by describing the concrete model and overall setup that we used in Sec. <ref>, after which several experiments regarding the model performance and feasibility are made in Sec. <ref>. § PROPOSED ARCHITECTURE We view the integrated procedural knowledge as an individual module in the overall architecture, and the learning objective corresponds to the correct utilization of this module, i.e., the learning of the program interface, to solve the task at hand. More specifically, we consider the case where the integrated knowledge is only solving an intermediate part of the overall task, i.e., it neither directly operates on the input data, nor are its outputs used as a prediction target. More formally, given data sample X and procedural module f, the model latent state z is decoded into and encoded from the function input space through learned modules M_f_in and M_f_out, respectively. Here, z corresponds to an intermediate feature map of an arbitrary deep learning model M whose target domain at least partially involves processes that are described in f. The output of M_f_out is then fused with z using an arbitrary operator ⊕. This structure is shown in Fig. <ref>. Procedural knowledge in general and programmatic functions in particular operate on a discrete set of input and output parameters. The aforementioned interface thus needs to disentangle the relevant parameters in the distributed representation and bind them to the correct inputs, and perform the reverse operation on the output side, tasks that are still challenging in many cases <cit.>. We show that in our setup we are able to learn this interface implicitly by focusing on a downstream task instead. §.§ Case Study: Video Prediction Video Prediction is an important objective in the current deep learning landscape. With it, many visual downstream tasks can be enhanced or even enabled that utilize temporal information. Example tasks are model predictive control (MPC) <cit.>, visual question answering (VQA) <cit.>, system identification <cit.>, or even content generation <cit.>. Some of these benefit even more if the system is controllable and thus, allows the integration of human intention into the inference process. This is typically done by conditioning the model on additional modalities such as natural language <cit.> or by disentangling the latent space <cit.>. More recently, researchers have shown <cit.> that object-centric learning offers a suitable basis for video prediction, as learning object interactions is difficult without suitable representations. 
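Before turning to the object-centric video prediction case study, the following is a minimal PyTorch sketch of the general scheme described above: learned maps M_f_in and M_f_out bridge the latent state z and the discrete interface of a procedural module f, and the re-encoded output is fused back into z. The use of linear maps, addition as the ⊕ operator, and the toy function f are illustrative choices, not specifics from the paper.

import torch
import torch.nn as nn

class ProceduralBlock(nn.Module):
    """Wraps a differentiable procedural function f inside a learned latent interface."""

    def __init__(self, latent_dim, f_in_dim, f_out_dim, f):
        super().__init__()
        self.f = f                                       # integrated procedural knowledge (no learned weights)
        self.m_f_in = nn.Linear(latent_dim, f_in_dim)    # decode latent -> function inputs
        self.m_f_out = nn.Linear(f_out_dim, latent_dim)  # encode function outputs -> latent

    def forward(self, z):
        f_out = self.f(self.m_f_in(z))                   # run the procedural module
        return z + self.m_f_out(f_out)                   # fuse with the latent state (⊕ = addition here)

# Toy procedural module: a fixed, differentiable mapping standing in for domain knowledge.
block = ProceduralBlock(latent_dim=128, f_in_dim=6, f_out_dim=6, f=lambda s: s * 2.0)
z_next = block(torch.randn(4, 128))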
We propose to build on top of such models, since knowledge about objects in the environment is an integral aspect of many domain processes and as such facilitates our approach. We proceed by reducing these distributed object-centric representations further to individual object properties, which are then usable by our procedural module—a simple differentiable physics engine modeling the underlying scene dynamics. We follow the approach of SlotFormer <cit.> and utilize a frozen pretrained Slot Attention for Video (SAVi) <cit.> model trained on video object segmentation to encode and decode object latent states for each frame. Also similarly, our proposed model predicts future frames in an auto-regressive manner using a specialized rollout module, with the assumption that the first N frames of a video are given in order to allow the model to observe the initial dynamics of the scene. Within the rollout module, our first goal for each object latent state is to disentangle object factors that are relevant as function input from those that are not. In our case, these are the dynamics and appearance—or Gestalt <cit.>—factors, respectively. However, as the upstream SAVi model is frozen and did not assume such disentanglement, we first have to apply a non-linear transformation on its latent space to enable the model to learn to separate the latent state into dynamics and Gestalt parts based on the inductive biases of the architecture that follows. We then use these—still distributed—latent states to obtain discrete physical state representations—i.e., in our case 3D vectors representing position and velocity—that can be processed by our explicit dynamics module in order to predict the state of the next time step. In order to avoid bottlenecks in the information flow, we introduce a parallel model that predicts both a dynamics correction and the future Gestalt state. The reasoning here is that in many cases both are dependent of each other and thus, need to be modelled jointly. Both dynamics predictions are then averaged over to produce the final dynamics state. The fused dynamics state and the predicted Gestalt state are finally concatenated to obtain the latent state of the next time step. This latent state is finally transformed non-linearly back into the latent space of the pretrained SAVi model, before it is decoded into pixel space. The rollout module can be seen in detail in Fig. <ref>. We verify in our experiments that even without additional auxiliary loss terms to regularize the latent state our model is able to correctly utilize the integrated dynamics module, indicating that the inductive bias of a correctly predicted and decoded physics state is sufficient for better visual predictions. § RELATED WORK Physics-Guided Deep Learning for Videos. The explicit representation of dynamics prevalent in a video within a deep learning model is a popular shortcut to learning the underlying concepts in the scene, and oftentimes necessary due to the inherent difficulty and ambiguity of many tasks <cit.>. The main objectives are usually the estimation of underlying system parameters and rules <cit.>, or the adherence of the model output to certain environmental constraints <cit.>, leading to more accurate predictions. With that—as is the case for neuro-symbolic approaches <cit.> in most cases—the idea is to also inherently benefit from an improvement in interpretability and data efficiency. 
A long-standing approach is to represent the dynamics by an individual module, i.e., a physics engine, and use different means to join it with a learnable model. Early work <cit.> utilized this to predict physical outcomes, while simultaneously learning underlying physical parameters. Later work extended this towards video prediction, in which the output of the physics engine is used for rendering through a learnable decoder. Some used custom decoder networks for the given task <cit.>, or integrated a complete differentiable renderer in addition <cit.>. However, these were limited to specialized use cases for the first, and required perfect knowledge of the visual composition of the environment for the latter. Another common direction is the use of Spatial Transformers (STs) <cit.>, since they allow easy integration of spatial concepts such as position and rotation in the decoding process. However, these approaches <cit.>—albeit similar to our approach—assumed that (1) no data-driven correction of physics state is necessary and (2) the visuals of the scene outside of the dynamics properties remain static and can be encoded in the network weights, limiting their applicability to more complex settings. With our proposed approach we can model such properties. For object-centric scenarios it is common to also take into account the relational structure of dynamical scenes in order to model object interactions by utilizing graph-based methods in the architecture <cit.>. Disentangled Video Dynamics. Latent factor disentanglement in general assumes that the data is composed of a set of—sometimes independent—latent factors. Once the target factors can be disentangled, control over the environment becomes possible, and as such these approaches are of special interest in generative models. Early work heavily built on top of Variational Autoencoders (VAEs) <cit.>. However, later on it was proven that inductive biases are necessary to achieve disentanglement, and earlier work instead only exploited biases in the data <cit.>. Typically, these inductive biases are in the form of factor labels <cit.>. Such models were also used for disentanglement of physical properties and dynamics <cit.>. In this domain, instead of only providing labels to achieve disentanglement, it is also common to help the model discover underlying dynamics by modeling them as Partial Differential Equations (PDEs)<cit.>. For video data that does not necessarily follow certain physical rules, some use a more general approach and focus on the disentanglement of position and Gestalt factors, with the idea that many object factors are independent of their position in the frame <cit.>. Having explicit encoding or decoding processes also helps in obtaining disentangled dynamics <cit.>. § SETUP As is done in the original SAVi paper <cit.>, we condition the SAVi slots on the first frame object bounding boxes and pre-train on sequences of six video frames, optimizing the reconstruction of the optical flow map for each frame. Experiments have shown that optical flow reconstruction leads to better object segmentations, which we find is a better proxy for evaluating correct object dynamics than video reconstruction itself. After convergence we freeze the SAVi model. For the video prediction task, we encode the initial six frames using this frozen model, and use these as initial context information for the video prediction model. 
We then let the model auto-regressively predict the next 12 frames during training—or 24 frames during validation—always keeping the most recent six frames as reference. While more than a single reference frame would not be necessary for the integrated dynamics knowledge, the six frames are instead used in the transformer-based joint dynamics and Gestalt predictor model. In order to give the model a hint about the magnitude of the dynamics state values, we condition the dynamics state of the first frame on the ground-truth state. §.§ Implementation Details For the SAVi model we mainly follow the implementations of SlotFormer <cit.> and the original work <cit.>. The encoder consists of a standard Convolutional Neural Network (CNN) with a subsequent positional embedding. To obtain slot representations for a given frame we perform two iterations of slot attention, followed by a transformer model with multi-head self attention for modelling slot interactions and a final Long Short-Term Memory (LSTM) model in order to transition the representation into the next time step. We set the number of slots to six, each with size 128. The representations obtained after the slot attention rounds are decoded into the target frames using a Spatial Broadcast Decoder <cit.> with a broadcast size of 8. For the video prediction model we denote the most recent N context frame representations of time steps t-N… t-1 in bold as z and the latent representation prediction for time step t as z in order to improve readability. Both the latent state encoder S_enc and decoder of the video prediction model S_dec are MLPs, each with a single ReLU activated hidden layer of size 128. They have shown to introduce sufficient non-linearity to allow state disentanglement. The latent state obtained from S_enc is kept the same size as the slot size and is split into two equally sized parts z_d and z_g for the subsequent dynamics and Gestalt models. The dynamics model—i.e., the explicit physics engine—takes a physical state representation consisting of a 3D position and 3D velocity of a single frame as input, which is obtained from a linear readout layer of the most recent context frame of latent state z_d, or directly from groundtruth for the very first predicted frame. The physics engine itself is fully differentiable and consists of no learnable parameters. It calculates the dynamics taking place as in the original data simulation using a regular semi-implicit Euler integration scheme. Pseudo code of this engine can be seen in Listing <ref>. Its output—consisting of again a 3D position and 3D velocity of the next timestep—is then transformed back into the latent state z_d_exp with another linear layer. For the Gestalt properties we utilize a prediction setup and configuration as in the original SlotFormer model: First, the latent state z_g is enriched with temporal positional encodings after which a multi head self attention transformer is used for obtaining future latent representations z_d_cor and z_g. Both z_d_exp and z_d_cor are merged by taking their mean, and the resulting vector z_d is concatenated with z_g in order to obtain the latent representation of the future frame. S_dec is finally used to transform this vector back into the latent representation z of SAVi, where it can be decoded into pixel space by the pretrained frozen SAVi decoder. 
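A schematic of one auto-regressive rollout step wiring these components together is sketched below; it is not the authors' code. The joint Gestalt/correction transformer is reduced to a single-frame placeholder, and dynamics_step is a trivial stand-in for the differentiable gravitational engine given in the listing that follows.

import torch
import torch.nn as nn

SLOT, HALF, STATE = 128, 64, 6          # slot size, z_d/z_g split, (position, velocity) dims

s_enc = nn.Sequential(nn.Linear(SLOT, 128), nn.ReLU(), nn.Linear(128, SLOT))   # SAVi slots -> separable latent
s_dec = nn.Sequential(nn.Linear(SLOT, 128), nn.ReLU(), nn.Linear(128, SLOT))   # separable latent -> SAVi slots
readout = nn.Linear(HALF, STATE)         # z_d -> physical state (3D position + 3D velocity)
encode_state = nn.Linear(STATE, HALF)    # physical state -> z_d_exp
predictor = nn.Sequential(nn.Linear(SLOT, 256), nn.ReLU(), nn.Linear(256, SLOT))  # placeholder for the transformer

def dynamics_step(pos, vel):             # stand-in for the gravitational engine in the listing below
    return pos + 0.25 * vel, vel

def rollout_step(slots):
    """One future-frame prediction for a batch of object slots of shape (batch, num_slots, SLOT)."""
    z = s_enc(slots)
    z_d, z_g = z[..., :HALF], z[..., HALF:]                 # dynamics / Gestalt split
    state = readout(z_d)
    pos, vel = dynamics_step(state[..., :3], state[..., 3:])
    z_d_exp = encode_state(torch.cat([pos, vel], dim=-1))   # explicit dynamics prediction
    pred = predictor(z)                                     # joint data-driven prediction
    z_d_cor, z_g_next = pred[..., :HALF], pred[..., HALF:]
    z_d_next = 0.5 * (z_d_exp + z_d_cor)                    # merge explicit and learned dynamics
    return s_dec(torch.cat([z_d_next, z_g_next], dim=-1))   # back to the frozen SAVi latent space

next_slots = rollout_step(torch.randn(2, 6, SLOT))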
[Listing lst:integrated_function] Python pseudo code of the integrated function for our data domain, which calculates a future physical state consisting of the position and velocity of each object. G in the code corresponds to the gravitational constant. As is done in the original simulation, each predicted frame is subdivided into smaller simulation steps—a standard approach for numerical physics simulations.

def dynamics_step(pos, vel):
    for sim_idx in range(simulation_steps):
        # Position delta between objects
        pos_delta = get_pos_delta(pos)
        # Squared distances between objects
        r2 = sum(pow(pos_delta, 2))
        # Calculate force direction vector
        F_dir = pos_delta / sqrt(r2)
        # Calculate force
        F = F_dir * (G * (mass / r2))
        # F = ma
        a = F / mass
        # Semi-implicit euler
        vel = vel + simulation_dt * a
        pos = pos + simulation_dt * vel
    return pos, vel

§.§ Data Our dataset consists of a simulated environment of multiple interacting objects resulting in complex nonlinear dynamics. The idea was to generate an object-centric dataset for which current state-of-the-art video prediction models struggle and where the integration of knowledge about the environment is possible and sensible. Datasets used in the existing object-centric video prediction literature either did not feature complex nonlinear dynamics, or involved non-differentiable dynamics (e.g., collisions) that are out of scope for now. However, for the latter we note that non-differentiable dynamics such as collisions could still be integrated with our approach by building a computational graph that covers all conditional pathways. Although this approach is computationally less efficient and does not directly convey collision event information to the learning algorithm, work exists <cit.> showing that this can still be exploited well enough and is simultaneously easy to implement in current deep learning frameworks with dynamic computational graphs. The future states are predicted using a simple physics engine that simulates gravitational pull between differently sized spherical objects without collisions, as in the three-body problem <cit.>. In order to keep objects in the scene, we add an invisible gravitational pull towards the camera focus point and limit the movement in the x and y directions. Objects are then rendered in 3D space using slight illumination and no background. Each object can have different material properties, which slightly change its appearance. We create 10k RGB video samples consisting of 32 frames at a spatial size of 64×64, each with corresponding optical flow and segmentation masks, using kubric <cit.>, which combines a physical simulator with a 3D rendering engine, allowing the generation of arbitrary physical scenes. We render four frames per second and subdivide each frame into 60 physical simulation steps. Each sample uses the same underlying dynamics but different starting conditions for the objects. The number of objects varies randomly per sample from three to five. For each object, we also store its physical state at each frame, consisting of the 3D world position and velocity. All objects have the same fixed mass. § EXPERIMENTS In all experiments, we compare our proposed architecture with a SlotFormer model, representing a purely data-driven approach.
To improve comparability, the transformer architectures of both our joint dynamics and Gestalt predictor G and the SlotFormer rollout module are identical. Also, both use the same underlying frozen SAVi model as object-centric encoder and decoder. We train SAVi and the video prediction models for a maximum of 100k steps each, or until convergence is detected by early stopping, using a batch size of 64. We clip gradients to a maximum norm of 0.05 and train using Adam with an initial learning rate of 0.0001. For evaluation purposes, we report the aggregated object segmentation performance over three seeds using the Adjusted Rand Index (ARI) and mean Intersection-Over-Union (mIoU) scores, in addition to their foreground (FG) variants ARI-FG and mIoU-FG, which disregard background predictions. We first analyze the baseline performance of our proposed approach in Sec. <ref>, followed by an experiment focusing on the completeness of the integrated function in Sec. <ref>. We then consider the model performance under very limited data availability in Sec. <ref> and conclude with an ablation experiment regarding the latent state separation in Sec. <ref>. §.§ Baseline Here, we integrate the complete underlying dynamics of the environment into our model. As such, we also verify the utility of still keeping a parallel auto-regressive joint Gestalt and dynamics model by replacing it with an identity function and observing the performance: with perfect knowledge about the dynamics and the initial frame appearance, the model should have all the information necessary for an accurate prediction. As we can see in Tab. <ref>, our proposed architecture outperforms a purely data-driven approach such as SlotFormer by a large margin, and comes close to the performance of the underlying SAVi model, which, in contrast to video prediction methods, has access to every video frame and simply needs to segment them. However, even when integrating perfect dynamics knowledge it is still beneficial to keep a parallel data-driven Gestalt and dynamics predictor, highlighting the need to model the dependency between appearance and dynamics in the scene. Both our models are also able to accurately predict the future object positions and velocities in the physics state space, with a Mean Absolute Error (MAE) close to 0 across all predicted frames when compared to the ground truth. Regarding the unroll performance, i.e., the frame-by-frame prediction quality, the SlotFormer model quickly deteriorates, while both variants of our architecture remain more stable over time, as seen in Fig. <ref>. As seen in Fig. <ref>, the performance decrease stems mainly from wrong dynamics, as the object shapes are kept intact even for the SlotFormer model. §.§ Inaccurate Dynamics Knowledge In the previous setup, the integrated function described the underlying dynamics perfectly and as such might allow the model to learn undesirable shortcuts. Here, we therefore evaluate whether inaccuracies in the integrated dynamics knowledge hinder its utilization. We introduce these inaccuracies by using wrong simulation time steps, which results in wrong state predictions, albeit with the same underlying dynamics. We report the results in Tab. <ref>. While the performance clearly deteriorates, it remains above that of the purely data-driven approach.
This shows that the information about the dynamics process in itself carries value for the final predictions, not only the concrete dynamics state. §.§ Data Efficiency Next, we analyze the prediction performance when using only 300 data samples, amounting to 3% of the original data. We report the results in Tab. <ref>. As expected, the performance of both models drops; however, the SlotFormer predictions are now close to random, as indicated by the very low foreground scores. In contrast, our proposed model still achieves a better overall performance than the SlotFormer model trained on the complete dataset. §.§ Joint Latent State Here we analyze whether the separation of the latent state into Gestalt and dynamics factors is necessary by working with a single latent state without separation, i.e., without the latent state encoder and decoder. As can be seen in Tab. <ref>, the performance decreases significantly when latent state separation is not performed. However, the performance remains above that of the SlotFormer model, indicating that even poor dynamics integration can be beneficial. § CONCLUSION We have introduced a scheme to integrate procedural knowledge into deep learning models and specialized this approach for a video prediction case. We have shown that the prediction performance can be significantly improved if one uses knowledge about the underlying dynamics as opposed to learning in a purely data-driven fashion. However, we also highlighted the benefit of (1) a sensible latent state separation in order to facilitate the use of the procedural knowledge, and (2) the use of a parallel prediction model that corrects the dynamics prediction and models Gestalt and dynamics interdependencies. Future work is focused on further increasing the benefit for inaccurate or incomplete knowledge integration, as this would enable use in more complex settings. Also, the current need for ground-truth conditioning in the first frame limits applicability in some settings, and as such semi-supervised or even completely unsupervised state discovery would increase the utility of our approach. Lastly, applications to downstream video prediction tasks such as MPC, VQA, or more complex system parameter estimation are all potential extensions of this work.
http://arxiv.org/abs/2406.17984v1
20240625234505
Galactic Rotation Curves of LSB Galaxies using core-halo FDM configurations
[ "Ivan Alvarez-Rios", "Tula Bernal", "Pierre-Henri Chavanis", "Francisco S. Guzman" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO", "gr-qc" ]
ivan.alvarez@umich.mx Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo. Edificio C-3, Cd. Universitaria, 58040 Morelia, Michoacán, México. tbernalm@chapingo.mx Área de Física, Depto. de Preparatoria Agrícola, Universidad Autónoma Chapingo, Km 38.5 Carretera México-Texcoco, Texcoco 56230, Edo. Méx., México chavanis@irsamc.ups-tlse.fr Laboratoire de Physique Théorique, Université Paul Sabatier, 118 route de Narbonne 31062 Toulouse, France francisco.s.guzman@umich.mx Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo. Edificio C-3, Cd. Universitaria, 58040 Morelia, Michoacán, México. § ABSTRACT In this work, we construct galactic halos in order to fit the rotation curves (RCs) of a sample of low surface brightness (LSB) galaxies. These halos are made of Fuzzy Dark Matter (FDM) with a multimode expansion of non-spherical modes that in average contribute to the appropriate density profile consisting of a core and an envelope needed to fit the rotation curves. These halos are constructed assuming a solitonic core at the center and two types of envelopes, Navarro-Frenk-White and Pseudo-Isothermal density profiles. The resulting FDM configurations are then evolved in order to show how the average density changes in time due to the secular dynamical evolution, along with a condensation process that lead to the growth of the solitonic core. Galactic Rotation Curves of LSB Galaxies using core-halo FDM configurations Francisco S. Guzmán July 1, 2024 =========================================================================== § INTRODUCTION Fuzzy Dark Matter (FDM) is a dark matter candidate, consisting of an ultralight spin zero boson that has received recent attention because it apparently solves some of the traditional problems of Cold Dark Matter (CDM), namely the cusp-core and the too-big-to-fail problems as explained in recent reviews <cit.>. The reason is that the formation of very small scale structures is prevented by the uncertainty principle for such an ultralight particle and the mass power spectrum is cut-off at small scales. In addition, the tiny mass of the boson implies smooth galactic cores as opposed to the cuspy shape obtained from predictions of CDM. At cosmic scales the model has been deeply studied in structure formation simulations (SFS) (see e.g. <cit.>), that are promising and already involve the dynamics of baryonic matter. At local scales the works concentrate on the formation of core-tail halos like those obtained in SFS, for example through the merger of multiple cores (see e. g. <cit.>) that end-up with the core surrounded by a typical granular structure that in average shows a Navarro-Frenk-White (NFW) density profile <cit.>. Construction of target density profiles is also a subject of current interest, because the wave function describing the FDM at local scales suggests a clear multimode dependency. This approach has been developed for SFS <cit.> as well at local scales with the construction of on demand multimode density profiles whose stability is studied with simulations <cit.>. Now, the boson mass m_B in the FDM, in order to address the small scale problems (core density profile and suppression of the small-scale structure) and to behave like CDM on large scales, must be of the order of m_B ∼ 10^-23-10^-21 eV. 
From the high-redshift luminosity function of galaxies we have the constraint for the boson mass m_B > 1.2 × 10^-22 eV <cit.>, while <cit.> derive a stringent constraint, indicating m_B ≳ 2 × 10^-21 eV. On the lower limit of the boson mass, the most used value is m_B ∼ 10^-22 eV in order to solve the small-scale problems of CDM. In the cosmological context, the analysis of Cosmic Microwave Background (CMB) and galaxy clustering data in e.g. <cit.>, establishes a constraint for the boson mass in the FDM model of m_B > 10^-24 eV. Considering the galaxy UV-luminosity function and reionization constraints, <cit.> determined a minimum mass requirement of m_B > 10^-23 eV. Also, from Lyman-α observations, the constraint is m_B > 10^-23 eV <cit.>. This value is in tension with the results by <cit.>, setting the minimum value for m_B > 10^-25 eV. This indicates there is no consensus on the accurate mass of the ultra-light boson and that further exploration is still necessary. Meanwhile we explore the viability of the model at local scales. Notice that self-interaction is another parameter that influences the construction and phenomenology of structures within the bosonic dark matter model, and that it could substantially change the constraints on the boson mass <cit.>. In this work, likewise in <cit.>, we focus on the construction of multimode FDM configurations, in particular with solitonic core and an envelope with NFW and Pseudoisothermal (PISO) density profiles that fit rotation curves of low surface brightness (LSB) galaxies, and study their evolution in order to study their behavior and stability properties. The article is organized as follows. In Section <ref> we describe the method we use to construct multimode halos, in Section <ref> we study the evolution of these configurations, and finally in Section <ref> we draw some conclusions. § CONSTRUCTION OF GALACTIC CORE-HALO PROFILES §.§ Basic assumptions The dynamics of FDM is modeled with the Schrödinger-Poisson (SP) system: iħ∂Ψ/∂ t = -ħ^2/2m_B∇^2Ψ + m_B V Ψ, ∇^2 V = 4π G (ρ - ρ̅), where Ψ is an order parameter related to the matter density through ρ := m_B |Ψ|^2, with m_B the boson particle mass, ħ the reduced Planck constant, G the gravitational constant, and ρ̅ = 1/|D|∫_D ρ d^3x the spatially averaged density calculated within a spatial domain D with volume |D| := ∫_D d^3x, where the construction of configurations is implemented and where the evolution is carried out. The gravitational potential V is sourced by the difference between the density and its spatial average. We want to construct solutions of the Schrödinger-Poisson (SP) system that are consistent with some galactic rotation curves. In order to construct the wave function of the core-halo, we follow a similar strategy as that designed in <cit.> and <cit.>. We assume there is a target density profile ρ_T, and the goal is to construct a corresponding wave function Ψ_0 that is consistent with this density profile and satisfies the SP system. For this, we consider the target density to be a spherically symmetric function, depending on the radial coordinate r only. This makes possible to solve Poisson equation (<ref>) in spherical symmetry which can be written as the following first order system: d V_Tdr= GM_Tr^2, d M_Tdr = 4π r^2 ρ_T, where ρ_T is the target density and M_T is the radial mass function. Once Poisson equation is solved, the resulting potential V_T is a function of r. This potential is injected into the stationary version of the Gross-Pitaevskii equation (<ref>). 
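To make this numerical step concrete, the short sketch below integrates the radial system dV_T/dr = G M_T/r^2, dM_T/dr = 4π r^2 ρ_T outward from the origin for an arbitrary spherically symmetric target profile. It is an illustrative, hedged implementation and not the authors' code: the function names, the tolerances, and the choice of fixing the integration constant by matching the exterior point-mass potential at the outer boundary are our own assumptions, and code units with G = 1 are used.

import numpy as np
from scipy.integrate import solve_ivp

def solve_radial_poisson(rho_target, r_max, G=1.0, n_points=2048):
    """Integrate dV/dr = G M / r^2 and dM/dr = 4 pi r^2 rho(r) on (0, r_max]."""
    def rhs(r, y):
        V, M = y
        dV = G * M / r**2 if r > 0.0 else 0.0   # regularity at the origin
        dM = 4.0 * np.pi * r**2 * rho_target(r)
        return [dV, dM]

    r_eval = np.linspace(1e-6 * r_max, r_max, n_points)
    sol = solve_ivp(rhs, (r_eval[0], r_max), y0=[0.0, 0.0],
                    t_eval=r_eval, rtol=1e-8, atol=1e-10)
    V, M = sol.y
    # The potential is defined up to a constant; shift it so that at r_max it
    # matches the exterior point-mass value -G M(r_max) / r_max.
    V = V - V[-1] - G * M[-1] / r_max
    return r_eval, V, M

The potential V_T(r) obtained in this way is the one injected into the stationary Gross-Pitaevskii equation.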
This equation is reminiscent of the problem of the hydrogen atom, with the notable difference being the replacement of the Coulomb potential by the potential V_T, which is written as a Sturm-Liouville problem: -(ħ^2/2m_B)(1/r^2) ∂/∂r(r^2 ∂ψ_j/∂r) + (ħ^2/2m_B)(L^2/r^2)ψ_j + m_B V_T ψ_j = E_j ψ_j, where L^2 = -(1/sinθ) ∂/∂θ(sinθ ∂/∂θ) + (1/sin^2θ) ∂^2/∂ϕ^2 is the squared angular momentum operator and j labels the eigen-state ψ_j with eigen-energy E_j. To solve this equation, we assume a separation of variables for ψ_j := ψ_nℓ m(r,θ,ϕ) = R_nℓ(r) Y_ℓ^m(θ,ϕ), where Y_ℓ^m(θ,ϕ) are the spherical harmonics and R_nℓ is expressed as R_nℓ := u_nℓ/r, with u_nℓ satisfying the following radial equation: -(ħ^2/2m_B) d^2u_nℓ/dr^2 + [ (ħ^2/2m_B) ℓ(ℓ+1)/r^2 + m_B V_T(r) ] u_nℓ = E_nℓ u_nℓ, where n, ℓ, and m are “quantum numbers”, and where we have used the identity L^2 Y_ℓ m = ℓ(ℓ+1) Y_ℓ m. We name the wave function Ψ_0 as the one that fits the target density, which can be expressed as a linear combination of the eigen-functions ψ_j: Ψ_0 = ∑_j a_j ψ_j e^-iE_j t/ħ. The density profile |Ψ_0|^2 associated with the wave function is given by |Ψ_0|^2 = (∑_j a_j ψ_j e^-iE_j t/ħ)(∑_k a_k^* ψ_k^* e^iE_k t/ħ) = ∑_j |a_j|^2 |ψ_j|^2 + ∑_j≠ k a_j a_k^* ψ_j ψ_k^* e^i(E_k - E_j)t/ħ. An essential assumption when fitting structure densities in structure formation simulations or multi-core collisions is that ρ_T is a time-averaged quantity, as well as a spatially averaged quantity along various radial directions. Therefore, we assume that the target density can be decomposed as follows: ⟨|Ψ_0|^2⟩_T→∞ := lim_T→∞ (1/T)∫_0^T |Ψ_0(t,x⃗)|^2 dt = (1/4π)∑_n,ℓ (2ℓ+1) |ã_nℓ|^2 |R_nℓ|^2, where T is the time-window used to calculate time-averages. The coefficients are written as a_nℓ m = ã_nℓ e^iΘ_nℓ m, where Θ_nℓ m are random phases with values between 0 and 2π. To derive Eq. (<ref>) we have used the identity ∑_m |Y_ℓ m(θ,ϕ)|^2 = (2ℓ+1)/4π. Another alternative is to consider a spatial average over the solid angle Ω := [0,π]×[0,2π] as follows: ⟨|Ψ_0|^2⟩_Ω := (1/4π)∫_Ω |Ψ_0(t,x⃗)|^2 dΩ = (1/4π)∑_n,ℓ (2ℓ+1) |ã_nℓ|^2 |R_nℓ|^2. In this way, temporal and spatial averages are assumed equal and the target density must satisfy ρ_T ≈ m_B ⟨|Ψ_0|^2⟩_T→∞ = m_B ⟨|Ψ_0|^2⟩_Ω. Then, we can simply write ρ_T ≈ m_B ⟨|Ψ_0|^2⟩, referring to either the angular or the time average. However, it must also hold that V_T ≈ ⟨V⟩. An important aspect of Ψ_0 is whether it corresponds to a virialized configuration or not. In order to answer this question, we calculate the quantity Q_0 = 2K_0 + W_0, where K_0 = (ħ^2/2m_B)∫_D |∇Ψ_0|^2 d^3x = -(ħ^2/2m_B)∫_D Ψ_0^* ∇^2Ψ_0 d^3x is the kinetic energy and W_0 = (m_B/2)∫_D V_T |Ψ_0|^2 d^3x is the gravitational energy. In an ideally virialized configuration Q_0 = 0. This quantity can be written in terms of a spectral decomposition as Q_0 = ∑_n,ℓ (2ℓ+1)|ã_nℓ|^2 Q_nℓ, with Q_nℓ = 2K_nℓ + W_nℓ, where K_nℓ and W_nℓ are the matrix elements of the kinetic and potential energies with respect to the basis of the eigenproblem, given by K_nℓ = -(ħ^2/2m_B)∫ R_nℓ[ d/dr(r^2 dR_nℓ/dr) - ℓ(ℓ+1)R_nℓ ] dr and W_nℓ = (m_B/2)∫ V_T R_nℓ^2 r^2 dr. On the other hand, from Eqs. (<ref>), (<ref>) and (<ref>) we obtain the identity K_0 + 2W_0 = E_0 with E_0 = (1/m_B)∫∑_nℓ R_nℓ(r)^2 (2ℓ+1) |ã_nℓ|^2 E_nℓ r^2 dr, where E_nℓ are the eigenvalues (notice that E_0 is an eigenvalue and not the total energy, which could be confused with K_0 + W_0). Therefore, we have Q_0 = 2E_0 - 3W_0 (in component form, K_nℓ + 2W_nℓ = E_nℓ and Q_nℓ = 2E_nℓ - 3W_nℓ). The mass reads M_0 = ∫ρ d^3x = m_B∫ |Ψ_0|^2 d^3x, hence M_0 = ∫∑_nℓ R_nℓ(r)^2 (2ℓ+1) |ã_nℓ|^2 r^2 dr.
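To illustrate how the radial eigenvalue problem and the angle-averaged density above can be handled numerically, a hedged sketch could look as follows. The finite-difference discretization, the tridiagonal solver, and the dictionary-based bookkeeping are our own assumptions rather than the authors' implementation; code units ħ = m_B = 1 are used, and the radial grid is assumed uniform and to start at r = Δr > 0.

import numpy as np
from scipy.linalg import eigh_tridiagonal

def radial_modes(V_T, r, ell, n_max):
    """Lowest n_max eigenpairs of -(1/2) u'' + [l(l+1)/(2 r^2) + V_T] u = E u, with u -> 0 at both ends."""
    dr = r[1] - r[0]
    diag = 1.0 / dr**2 + ell * (ell + 1) / (2.0 * r**2) + V_T
    off = -0.5 / dr**2 * np.ones(len(r) - 1)
    E, u = eigh_tridiagonal(diag, off, select='i', select_range=(0, n_max - 1))
    u = u / np.sqrt((u**2).sum(axis=0) * dr)   # normalize so that  int R^2 r^2 dr = 1
    return E, u / r[:, None]                   # R_nl = u_nl / r

def averaged_density(coeffs, R_modes, ells):
    """Angle/time-averaged density (1/4pi) sum_(n,l) (2l+1) |a_nl|^2 R_nl^2 (with m_B = 1)."""
    rho = np.zeros(R_modes[ells[0]].shape[0])
    for ell in ells:
        for n in range(R_modes[ell].shape[1]):
            rho += (2 * ell + 1) * abs(coeffs[(n, ell)])**2 * R_modes[ell][:, n]**2
    return rho / (4.0 * np.pi)

A density assembled in this way is what the fitting procedure described below compares against the target profile ρ_T.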
We numerically verify that, in general, the individual terms Q_nℓ are different from zero and can have different signs for different values of n and ℓ. That is, each individual mode of the superposition is not virialized; however, we find a superposition such that Q_0 ≈ 0. The construction of Ψ_0 reduces to the calculation of the coefficients ã_nℓ of the expansion for the target density in equation (<ref>), or equivalently (<ref>), together with a prescription that enforces the constraint Q_0 ≈ 0. Once these coefficients are determined, it becomes possible to reconstruct a wave function that is consistent with the stationary SP system and at the same time has an average density consistent with a core-halo target density. The steps to construct the FDM core-halo configuration are summarized as follows:
* Start with a given target density ρ_T. In order to have a finite integrated mass we follow the recipe in <cit.>, which suggests modulating the target density with a Gaussian e^-r^2/(2 r_0^2), with r_0 the value for which ρ_T(0)/ρ_T(r_0) ∼ 10^3.
* Solve the Poisson equation (<ref>-<ref>) for such a density in the domain r∈[0,2r_0], as also suggested in <cit.>.
* Use the resulting gravitational potential V_T to solve equation (<ref>) for all combinations of n and ℓ to be considered.
* Then find the coefficients a_j that minimize an error function between ρ := m_B |Ψ|^2 and ρ_T.
In the following subsection, we elaborate on the ingredients of step 4. §.§ Description of the fitting method The expansion of the wave function (<ref>) is determined using a Genetic Algorithm (GA), where the DNA of each organism in the population is assumed to consist of the coefficients ã_nℓ. The maximum number of genes considered is N_DNA = n_max×ℓ_max, where the quantum numbers take on the values n = 1,2,...,n_max and ℓ = 0,1,...,ℓ_max-1. The fitness function for each individual is defined by the scalar η = [1/(1+|Q_0|)] [ (1/r_max)∫_0^r_max (ρ_T - ρ)^2/ρ_T dr ]^-1, where the term 1+|Q_0| is not significant when |Q_0| < 1, but when |Q_0| > 1 the value of the fitness function decreases for profiles that move away from the virialized state. Finally, r_max = 2 r_0 is the upper boundary of the numerical domain where the eigenvalue problem (<ref>) is solved. The operation of the GA is based on the random generation of an initial population of N_org organisms. We calculate the fitness function η for all individuals and choose the k fittest organisms. Following an elitist approach, these selected individuals are carried over to the next generation. From these k organisms, N_cross are randomly chosen to cross over and produce children for the next generation; in a biological context one would typically choose N_cross = 2, but this is not a limitation in a GA and N_cross = 5 worked better. These selected parents randomly share their genetic material, namely the coefficients of the expansion, to create a new individual. This process is repeated N_org - k times until the initial population size N_org is restored. The organisms in the new generation can potentially adapt more effectively through a mutation process that works as follows. We generate a new random number β_nℓ ranging from 0 to 1, representing the likelihood that the gene ã_nℓ undergoes a mutation. Each gene has its own probability of change. Subsequently, a new random number γ_nℓ is generated, and the mutation occurs if γ_nℓ exceeds β_nℓ.
In such cases, the coefficient ã_nℓ is altered to αã_nℓ, where α is a randomly selected number within the range of -1.5 to 1.5 for all values of n and ℓ. Finally, a second type of mutation, known as differential mutation is applied. This mutation involves selecting the i-th organism with DNA defined by the coefficients ã_nℓ^(i) along with a fitness η^(i). Subsequently, two other organisms with DNA ã_nℓ^(1) and ã_nℓ^(2) are randomly selected. A new organism is then created by linearly combining these coefficients as ã_nℓ^(new,i) = ã_nℓ^(i) + δ(ã_nℓ^(1) - ã_nℓ^(2)), where δ is a number between 0 and 1, with fitness η^(new,i). If it happens that η^(new,i)>η^(i), the i-th organism is replaced by the new organism. This process is repeated for i=1,2,…,N_org. Notice that the fitness function is a norm of the error between the density of the multipolar expansion and the target density. Considering the randomness in various stages of the GA it could well happen that different sets of coefficients of the expansion, or equivalently individuals with different DNA, may have similar values of η. In this sense, the expansion of the profile can be degenerate. Now, our goal is to tune the galactic dark matter densities. Inspired by <cit.> and <cit.> we use a certain type of target density profile which we discuss below. §.§ Models for target density Core-NFW model. Simulations of binary systems, multicore mergers, and more complex scenarios like structure formation simulations reveal that the time-spatial averages of the formed structures exhibit a spherical profile with a soliton core at the center. This core, with density similar to that of the ground state of the SP system (see e.g. <cit.>), is modeled with the empirical profile <cit.>: ρ_core(r) = ρ_c[1+0.091(r/r_c)^2]^-8, where we can find the relation between the central density ρ_c and the core radius r_c from the numerical solution of the ground state using the boundary condition ψ(r=0)=1 in units where ħ = m_B = 4π G = 1. If we fix ρ_c = 1, we can find that r_c ≈ 1.30569 ± 0.000113. Using the λ-scaling relation of the SP system <cit.>, it is found that ρ_c ≈ (1.30569/r_c)^4 for an arbitrary value of the core radius r_c. With this, we can translate the central density to physical units as ρ_c = ħ^24π G m_B^2(1.30569r_c)^4 ≈ 1.983×10^7( kpc^4m_22^2r_c^4) M_⊙, where m_22 is defined as m_22 = m_B × 10^-22eV^-1, and the units for r_c are [r_c] = kpc. Outside of this core, there is an envelope region that can be approximated by the NFW profile <cit.>: ρ_NFW(r) = ρ_s/r/r_s(1+r/r_s)^2, where ρ_s and r_s are halo parameters. The complete profile of the structure takes the form <cit.>: ρ_CN(r) = ρ_sol(r)Θ(r-r_t) + ρ_NFW(r)Θ(r_t-r). In this equation, we assume continuity, which fixes one of the two halo parameters with the relation ρ_s = ρ_sol(r_t)r_t/r_s(1+r_t/r_s)^2. Core-PISO model. It is well-known that a soliton nucleus forms within the halo in the FDM model since the ground state is an attractor of the SP system. However, in the envelope region, it is possible to discuss what may be the best approximation for the average profile of the envelope. One of the alternative proposals to the NFW model is the Pseudo-Isothermal profile, which is written as ρ_PISO(r) = ρ_p/1+(r/r_p)^2, in this case, ρ_p and r_p are halo parameters. The complete profile of the structure takes the form: ρ_CP(r) = ρ_sol(r)Θ(r-r_t) + ρ_PISO(r)Θ(r_t-r), similar to the core-NFW model, in which we assume continuity in the density. 
In this case the envelope parameters can be related as: ρ_p = ρ_sol(r_t)[1+(r_t/r_p)^2]. which reduces the number of fitting parameters. §.§ Fitting of LSB galaxies LSB galaxies are dominated by dark matter, thus we assume that we can fit their rotation curves with the core-NFW or core-PISO profiles. The independent parameters of a core-NFW profile are r_c, r_t, and r_s, and for the core-PISO profile, they are r_c, r_t, and r_p. We use the same strategy presented in <cit.> to obtain the appropriate values for observational data in <cit.>. We additionally find the radius r_0 of the resulting configurations. Table <ref> provides the best-fit free parameters for these galaxies. Now, according to <cit.>, the halo surface density is nearly constant and independent of the galaxy luminosity, with value Σ_0 = ρ_0 r_0 = 140^+80_-30 M_⊙pc^-2, where ρ_0 and r_0 the halo central density and core radius <cit.>. We include in Table <ref> the corresponding surface densities for both the core-NFW and core-PISO profiles, as defined in <cit.>, for the soliton FDM configurations: Σ_0 = ρ(r_t) r_h, with r_h the radius where the density is ρ(r_h) = ρ(r_t)/4, for r_t the transition radius outside the soliton region. As discussed in Appendix L of <cit.>, the constant Σ_0 observational value is not consistent with the soliton, it decreases like 1/r^3 as the size of the soliton increases. This suggests (see Sec. VII of <cit.>) to define Σ_0 with the density at the interface between the soliton and the NFW envelope, at the transition radius r_t. As seen in Table <ref>, from the small sample we are studying, the results do not coincide with the observational value, except for ESO4880049 with the core-PISO profile. For the core-PISO profile, the results are closer to the value obtained in <cit.>. The discrepancy may arise from the density profile assumed to model their huge sample of galaxies, a Burkert profile. In our case, the core-PISO profile decays slowly and is closer to the Burkert profile. We would need to simulate a large sample of galaxies with a profile closer to Burkert's to conclude if our results are in agreement with a universal surface density of dark matter. As described earlier, there is no consensus on the correct mass of the boson, and in our study we use a boson mass m_B = 10^-23 eV because it is near the upper bound, and allows the profiles to be adjusted for the LSB galaxies in our analysis. This mass value is on the boundary with the cosmological constraints found in <cit.> from the galaxy UV-luminosity function and reionization observations, and in <cit.> from Lyman-α observations, m_B > 10^-23 eV. It also falls within the constraints provided by <cit.>, from CMB and galaxy clustering data, m_B > 10^-24 eV. Now, concerning the fitting method, the parameters parameters of the GA are a population of N_org = 200 organisms, each having n_max = l_max = 41. This implies that the DNA of each organism consists of N_DNA = 1681 genes, resulting in a total of approximately 10^5 coefficients a_nlm, a similar number as in <cit.>. During reproduction, N_cross = 5 organisms contribute to creating a new organism, selected from a pool of k = 100 parents. Additionally, for the differential mutation, we set δ = 0.1. These parameters have proven effective in identifying organisms with a fitness η≈ 10^5, or equivalently, a proportional χ^2 error 1/η≈ 10^-5 within the initial 1000 generations, and in general a virialization factor in the range |Q_0|<10^-5 in code units. 
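For concreteness, one GA generation with the ingredients described above (elitism, N_cross-parent crossover, per-gene mutation with α in [-1.5, 1.5], and differential mutation with δ = 0.1) could be sketched as below. This is a hypothetical implementation, not the authors' code: fitness is assumed to map a coefficient vector ã_nℓ to the scalar η by rebuilding the multimode density and evaluating the error integral and virialization factor, and whether α is drawn per gene or per organism is our own reading of the description.

import numpy as np

rng = np.random.default_rng(0)
N_ORG, N_DNA, K_ELITE, N_CROSS, DELTA = 200, 41 * 41, 100, 5, 0.1

def next_generation(population, fitness):
    """population: array of shape (N_ORG, N_DNA); fitness: callable returning eta for one DNA vector."""
    scores = np.array([fitness(dna) for dna in population])
    elite = population[np.argsort(scores)[::-1][:K_ELITE]]          # keep the k fittest (elitism)

    children = []
    while len(children) < N_ORG - K_ELITE:
        parents = elite[rng.choice(K_ELITE, size=N_CROSS, replace=False)]
        pick = rng.integers(0, N_CROSS, size=N_DNA)                  # gene-wise choice of parent
        child = parents[pick, np.arange(N_DNA)]
        beta = rng.random(N_DNA)                                     # per-gene mutation threshold
        mutate = rng.random(N_DNA) > beta                            # mutate gene if gamma > beta
        child = np.where(mutate, rng.uniform(-1.5, 1.5, N_DNA) * child, child)
        children.append(child)
    new_pop = np.vstack([elite, np.asarray(children)])

    # Differential mutation: a_i <- a_i + delta * (a_1 - a_2), kept only if fitter
    for i in range(N_ORG):
        j, k = rng.choice(N_ORG, size=2, replace=False)
        trial = new_pop[i] + DELTA * (new_pop[j] - new_pop[k])
        if fitness(trial) > fitness(new_pop[i]):
            new_pop[i] = trial
    return new_pop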
Using these parameters, we determined the suitable coefficients for each of the considered galaxies in Table <ref>. The results appear in Figure <ref>, which illustrate how the GA is able to construct multimode configurations that approximate the target density within the region r < r_0. Beyond this radius, the adjustment becomes more challenging, as seen after the a vertical red dotted line. § EVOLUTION OF THE GALACTIC PROFILES We investigate the evolution of the core-halo profiles described in the previous section by evolving the wave function with the fully time-dependent SP system (<ref>-<ref>), for which we use the code CAFE <cit.>. In order to prevent the wave function from decaying into an isolated solitonic profile as suggested in <cit.> and <cit.>, we implemented periodic boundary conditions that guarantee the persistence of a core surrounded by an envelope, as well as the constancy of mass and total energy. As initial conditions, we inject the wave function (<ref>) at time t=0, Ψ(0,x⃗)=Ψ_0 centered in the 3D cubic box D=[-r_0,r_0]^3. It is worth noticing that when the coefficients are fixed, the wave function can possess an overall momentum different from zero, calculated as p⃗_0 = -iħ∑_j,k a_k^* a_j ∫_D ψ_k^* ∇ψ_jd^3x. Then, we correct the initial wave function to be Ψ(0,x⃗) = Ψ_0 e^-i p⃗0 ·x⃗ / M, where M :=∫_D ρ d^3x represents the total mass in the domain. This choice ensures that the initial wave function has zero total linear momentum, and the evolution remains with the core nearly at the center of the domain. The domain was discretized with a spatial resolution of Δ = r_0/128 along the three spatial directions. To capture the temporal dynamics, a time resolution satisfying Δ t / Δ <0.25 in code units was employed, and the evolution was carried out over a time-window of 2 Gyrs. The evolution of each galaxy is depicted through snapshots of the density and velocity vector field in the z=0 plane at times t=0, 1, and 2 Gyr in Figures <ref> and <ref>. These simulations use the initial conditions with core-NFW and core-PISO target density profiles, respectively. It is evident that even though the configurations are initially near a virialized state, they evolve and in fact the configurations do not remain stationary and not even in average, instead they develop some dynamics. In order to understand better the evolution of the whole configuration, we look into the time dependence of the core mass for each of the galaxies of the sample. The core mass M_c is the integral of the density (<ref>) from the origin until r_c and its value as function of time is shown in Fig. <ref> for six of the configurations during 7Gyr. Notice that the core mass oscillates with an overall growing trend that can be understood as the accretion of matter from the granular envelope, indicating that the growth mass is due to collisional effects <cit.>, interpreted as condensation in the kinetic regime <cit.> or wave condensation <cit.>. This slow, but never ending core mass growth, has been shown to happen after the saturation time <cit.>. This core mass growth seems inevitable and the reason why possibly any configuration with granular structure will lead to evolution and core growth. As a result, the dynamics is influenced and the average density in the evolution deviates from the averages of the initial data, an effect also described in <cit.>. The implication is that the redistribution of density will also distort the rotation curve. 
We illustrate this difference by calculating the spatio-temporal density average ⟨ρ⟩, which is now only a function of the radial coordinate. Once the average density is obtained, we compute the radial rotation curve as v_RC = √(G m(r)/r), where m(r) is the mass enclosed within radius r, obtained by integrating the average density. For each halo, the results are presented in Figure <ref>. The discrepancy between the initial and the evolved configuration is noticeable. The concentration of matter in the core region clearly changes the RC, producing the characteristic peak of a concentrated mass. § CONCLUSIONS We present a method to construct FDM halos with multimode expansions characterized by a core-halo profile associated with observational rotation curves. The target density profiles consisted of a core plus an envelope with NFW or Pseudo-Isothermal profiles that fit the rotation curves of a sample of LSB galaxies. While the core is dominated by the first term in an expansion in spherical harmonics, the envelope contains the expected granular structure. When averaged over the solid angle, the density profile approaches the target density that fits the rotation curves. Even though the constructed configurations are nearly virialized, their evolution seems unavoidable and degrades the quality of the RC fittings. We then evolved these configurations for 7 Gyr and measured the core mass as a function of time. We found the generic result that the core permanently accretes matter from the granular envelope, an effect already measured after the formation and saturation time of cores <cit.>, with a slow but permanent growth that goes as t^1/8. The core-halos are stable in the “collisionless” regime but they evolve due to “collisions” (granularities) on a secular timescale. Note that this is not an instability but a natural secular dynamical evolution, accompanied by a condensation process and the growth of the soliton. This is in agreement with kinetic theory as described in <cit.>. A direct implication of the core growth is that matter concentrates near the center of the galaxy and the rotation curve develops a characteristic peak at a small radius, observed in some galaxies (e.g. in <cit.>). A lesson from our analysis is that no matter how well RCs are fitted with a core surrounded by a granular envelope, and no matter how virialized the model is, FDM configurations will evolve and be distorted by the core accretion. A considerable enhancement to this analysis would be the inclusion of luminous matter during the evolution, which, if gravitationally coupled to the FDM, would influence the dynamics of the whole structure. § ACKNOWLEDGMENTS Iván Álvarez receives support within the CONAHCyT graduate scholarship program under the CVU 967478. FSG is supported by grant CIC-UMSNH-4.9. TB and FSG are supported by CONAHCyT Ciencia de Frontera 2019 Grant No. Sinergias/304001. § CONNECTION BETWEEN THE WAVE DESCRIPTION AND THE KINETIC DESCRIPTION In this Appendix, we discuss the connection between the wave description and the kinetic description. We recall the relation between the wave superposition coefficients |ã_nl|^2 and the particle distribution function f(ϵ) following <cit.>. We then use this relation to recover the classical energy functionals and the classical virial theorem from the quantum ones in the WKB (high energy) limit.
§.§ Classical kinetic description based on the Vlasov-Poisson equations In a classical description (applying for example to stellar systems or to the time-averaged envelope of DM halos) the density is given by ρ=∫ f d v, where f( r, v,t) is the distribution function for particles of mass m_B, i.e., f( r, v,t) gives the mass density of particles with position r and velocity v at time t. It is normalized such that ∫ f d rd v=M_ envelope. We assume that the envelope is spherically symmetric with a DF f=f(ϵ) that is a function of the energy alone. Such a DF determines a steady (virialized) state of the classical Vlasov-Poisson equations. We have introduced the energy per unit mass ϵ=E/m_B=v^2/2+V, where V is the gravitational potential. Eq. (<ref>) can then be rewritten as ρ=∫_V(r)^0 f(ϵ)4π√(2(ϵ-V)) dϵ. In practice it is difficult to predict the DF f of the envelope resulting from the process of violent relaxation. Note that there is no rigorous derivation of the NFW and Burkert profiles so these profiles remain essentially empirical. Actually, a prediction of the DF may be attempted from the statistical theory of Lynden-Bell <cit.> However, this “naive” prediction leads to a DF with an infinite mass so it is necessary to take into account the evaporation of high energy particles to have a more physical model. This leads to the fermionic King model <cit.> where the “fermionic” nature of the DF arises from the specificities of the Vlasov equation in the Lynden-Bell statistical theory. In many cases “degeneracy” effects can be neglected leaving us with the classical King model. The King model determines a sequence of equilibrium states (indexed by the central concentration) ranging from a pure polytrope of index n=5/2 to an isothermal distribution (n=∞) <cit.>. It is shown that the King model at the critical point of marginal stability, just before the system undergoes the gravothermal catastrophe, gives a good agreement with the Burkert profile (see the comparison between the different density profiles reported in Fig. 18 of <cit.> and Fig. 1 of <cit.>. Therefore, the (fermionic) King model may be a relevant model of DM halos that is physically motivated. §.§ Quantum wave description based on the Schrödinger-Poisson equations in the WKB approximation The quantum wave description of DM halos is discussed in the main text. Because of the process of violent relaxation or gravitational cooling, the envelope of DM halos may be viewed as a superposition of excited states with energies E_nl and amplitude ã_nl. This is similar to the orbits of particles with energies ϵ and DF f(ϵ) in classical systems. By contrast, the core (soliton) of quantum DM halos corresponds to the ground state of the Schrödinger-Poisson equations that has no counterpart in classical systems. Here, we focus on the envelope and we consider sufficiently high energies E so that the WKB approximation can be employed (see <cit.> for details). In the WKB approximation (large E limit), the radial function is given by R_nl(r)=N_nl/r√(p(r))sin1/ħ∫_r_1^r p(r') dr'+π/4, where p(r)=√(2m_B (E_nl-l(l+1)ħ^2/2mr^2-m_B V )) is the classical radial momentum. The normalization condition is chosen such that ∫_r_1^r_2 R_nl(r)^2 r^2 dr=1 giving N_nl^2=1/∫_r_1^r_2dr/2p(r), where we have approximated the square of the sine as 1/2. In the above expressions, r_1 and r_2 are the turning points where p vanishes. The energy eigenvalues E_nl satisfy the Bohr-Sommerfeld quantization condition ∫_r_1^r_2 p(r) dr=πħ (n+1/2 ). 
To compute the time-average density of the envelope we approximate the sum over n and l in Eq. (<ref>) by integrals and write ρ(r)=1/4π∫ dϵ dl dn/dϵ R_nl(r)^2 (2l+1) |ã_nl|^2 with ϵ=E_nl/m_B. The Jacobian dn/dl can be obtained by differentiating the quantization condition from Eq. (<ref>) yielding dn/dϵ=m^2/πħ∫_r_1^r dr/p(r). Using this expression together with the WKB approximation for R_nl(r) in Eqs. (<ref>) and (<ref>), a nice cancellation of terms occurs, leaving us with ρ(r)=m_B^2/4π^2ħ∫ dϵ dl (2l+1) |ã_nl|^21/r^2p(r). For a given ϵ and r, l ranges from 0 to l_ max such that ϵ-l(l+1)ħ^2/(2m_B^2r^2)-V=0. If we assume that |ã_nl|^2 depends only on ϵ=E_nl/m_B (in agreement with the corresponding assumption that f=f(ϵ) in the classical description) the integral over l can be easily performed with the change of variables x=l(l+1) yielding ρ(r)=m_B^3/2π^2ħ^3∫ dϵ |ã_nl|^2 √(2(ϵ-V)). Comparing Eqs. (<ref>) and (<ref>) the following relation is obtained <cit.>: f(ϵ)=m_B^3/(2πħ)^3|ã_nl|^2. This equality is approximate in the sense that it is valid in the WKB limit. It is expected to hold only for eigenmodes with a high enough energy ϵ, i.e., for eigenmodes that describe the envelope of the DM halo. The soliton has to be treated independently as being the ground state of the SP equations. The interface between the soliton and the halo (with intermediate energies) may not be accurately described by the WKB approximation. In conclusion, for a spherically symmetric halo with a particle distribution function f(ϵ), the density profile is given by Eq. (<ref>) and the wave is given by Eq. (<ref>) with |ã_nl|^2 given by Eq. (<ref>) in the WKB limit, i.e., for large energies. Using this kind of construction <cit.> have shown that the time-average envelope of FDM halos obtained in numerical simulations is well-fitted by the fermionic King model <cit.> giving further support to the claim made in <cit.> that the (fermionic) King model may be a good model of the envelope of DM halos. §.§ WKB for functionals Using the WKB approximation for R_nl(r) [see Eqs. (<ref>) and (<ref>)], and proceeding as above, we find that the energy functional defined by Eq. (<ref>) reduces to E=2m_B^3/πħ^3∫ dϵ |ã_nl|^2 √(2(ϵ-V))ϵ r^2 dr. Using the identity from Eq. (<ref>) it can be rewritten as E=16π^2∫ dϵ f(ϵ)ϵ√(2(ϵ-V)) r^2 dr or as E=∫ f(ϵ)ϵ d rd v. Recalling that ϵ=v^2/2+V we obtain E=∫ fv^2/2 d rd v+2W. Finally, recalling the identity E=K+2W established in Sec. <ref> and using Eq. (<ref>), we find that the quantum kinetic energy coincides, in the high energy limit, with the classical kinetic energy K=∫ fv^2/2 d rd v. This agreement is expected, but not trivial, since K in Eq. (<ref>) is expressed in terms of the wave function ψ( r,t) – a function of position only – while K in Eq. (<ref>) is expressed in terms of the DF f( r, v,t) – a function of position and velocity (reducing to a function of the energy ϵ for spherically symmetric systems). Finally, we emphasize that the total energy (the one which is conserved) is E_ tot=K+W. It differs from the energy E (related to the eigenenergies) which is given by E=K+2W. The factor 2 arises because the system is self-gravitating (instead of being subjected to an external potential).
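As a closing numerical illustration of the correspondence derived in this appendix (and not part of the paper itself), the envelope density implied by a distribution function f(ϵ) can be recovered by direct quadrature of ρ(r) = 4π ∫_{V(r)}^{0} f(ϵ) √(2(ϵ - V(r))) dϵ and compared with the angle-averaged multimode density once f(ϵ) = m_B^3 |ã_nℓ|^2/(2πħ)^3 is inserted. The sketch below assumes callables for f and V and code units ħ = m_B = G = 1.

import numpy as np
from scipy.integrate import quad

def density_from_df(f, V_of_r, r):
    """rho(r) = 4 pi * integral_{V(r)}^{0} f(eps) sqrt(2 (eps - V(r))) d eps, assuming V < 0."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    rho = np.empty_like(r)
    for i, ri in enumerate(r):
        Vi = V_of_r(ri)
        # max(..., 0) guards against round-off at the lower integration limit eps = V(r)
        integrand = lambda eps: f(eps) * np.sqrt(max(2.0 * (eps - Vi), 0.0))
        val, _ = quad(integrand, Vi, 0.0, limit=200)
        rho[i] = 4.0 * np.pi * val
    return rho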
http://arxiv.org/abs/2406.18906v1
20240627053653
Sonnet or Not, Bot? Poetry Evaluation for Large Models and Datasets
[ "Melanie Walsh", "Anna Preus", "Maria Antoniak" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Large language models (LLMs) can now generate and recognize text in a wide range of styles and genres, including highly specialized, creative genres like poetry. But what do LLMs really know about poetry? What can they know about poetry? We develop a task to evaluate how well LLMs recognize a specific aspect of poetry, poetic form, for more than 20 forms and formal elements in the English language. Poetic form captures many different poetic features, including rhyme scheme, meter, and word or line repetition. We use this task to reflect on LLMs' current poetic capabilities, as well as the challenges and pitfalls of creating NLP benchmarks for poetry and for other creative tasks. In particular, we use this task to audit and reflect on the poems included in popular pretraining datasets. Our findings have implications for NLP researchers interested in model evaluation, digital humanities and cultural analytics scholars, and cultural heritage professionals. § INTRODUCTION Writing free verse is like playing tennis with the net down. - Robert Frost The poetic capabilities of large language models (LLMs) have been cited prominently by journalists, social media users, and even LLM developers and marketers <cit.>. Google named its first chatbot “Bard,” a traditional term for a poet and the nickname of William Shakespeare, and Anthropic named two of its 2024 Claude models after popular poetic forms, “Sonnet” and “Haiku.” Microsoft released an ad that featured its Bing chatbot writing poetry <cit.>, as well as an instruction guide for how to write poems with Copilot, including a list of suggested forms to try <cit.>. Generated poetry was also one of the first LLM outputs to go viral on social media and remains popular there <cit.>.[After ChatGPT debuted in November 2022, several LLM-generated poems and poem-like texts went viral on social media, including one in response to the prompt: “write a biblical verse in the style of the king james bible explaining how to remove a peanut butter sandwich from a VCR” <cit.>] Poetry is a lightning rod for the marketing and popular imagination of LLM capabilities because it is a signifier of human creativity and complexity, as well as a popular and culturally significant art form with a long history. But what do LLMs really know about poetry? What can they know about poetry? Prior research has focused on computational poetry generation <cit.>, summarization <cit.> and detection of individual forms <cit.>, but we need broader evaluation of a wider range of poetic forms and features, and updated audits of LLM capacities and knowledge. Poetic features uniquely combine verbal, aural, and visual elements; the substance, sound, and (in written poetry) appearance of words on the page (e.g., white space) all matter. What's more, poetry often communicates deep emotion and meaning in non-literal, ambiguous ways, employing figurative language, irony, and allusion. To measure LLMs' poetic capabilities, we develop a task to evaluate how well LLMs recognize more than 20 poetic forms and formal elements in the English language.
Poetic form captures many different poetic features, including rhyme scheme, meter, and word or line repetition (see <ref>), and it also represents a distinct kind of literary genre. Sonnets, limericks, and haiku are well-known forms, but there are also less-known, more complicated forms like sestinas (which repeat the same six endwords in an intricate pattern) or pantoums (which repeat the second and fourth lines of stanzas in an alternating pattern). Identifying poetic form is a “difficult” task—in some ways inherently so—even for expert human annotators, as we show in a small formative study. We use this task to reflect on LLMs' current poetic capabilities, as well as the challenges and pitfalls of creating NLP benchmarks for poetry and for other creative tasks. In particular, we use this task to audit and reflect on the poems included in popular pretraining datasets. A complication is that the circulation of poetry is different from other literary texts, like fiction books and long-form prose, resulting in unmeasured differences in pretraining datasets. Poems are often short and “portable”; on the web and within the publishing industry, individual poems can “travel” across multiple websites and anthologies in ways that previously studied books data <cit.> do not, resulting in increased memorization issues that will affect any poetry evaluation benchmark. We find that LLMs—particularly GPT-4 and GPT-4o—can successfully identify both common and uncommon fixed poetic forms, such as sonnets, sestinas, and pantoums, at surprisingly high accuracy levels when compared to annotations by human experts. But performance varies widely by poetic form and feature; the models struggle to identify unfixed poetic forms, especially ones based on topic or visual features. While the LLMs have most success with the poetic forms most commonly found in popular pretraining datasets, we do not see major differences when we compare model performance on poems from major online poetry institutions, popular pretraining datasets, or print books with little to no digital presence. Our findings have implications for NLP studies of poetry/creative text generation and analysis, digital humanities and cultural analytics research, as well as cultural heritage collections, libraries, and archives that include poetry. Our contributions include: -.1em * the introduction of the poetic form detection task, with a comparison to formative human study of poetry experts, * a set of benchmark evaluation experiments using 4.1k poems, * an analysis of poems found in popular pretraining data and memorized by models, * code, data (1.4k public domain poems and form annotations), and metadata (pretraining inclusion and model memorization) that we release to the public.[<https://github.com/maria-antoniak/poetry-eval>] § POETIC FORM Subjective, Fluid, Context-Dependent. Traditionally, “form” refers to “the manner in which a poem is composed as distinct from what the poem is about,” and it can also refer more broadly to “genre or kind of composition” <cit.>. Poetic form can be defined by particular patterns of sound, referred to as prosody, and/or by visual patterns. In scholarship on poetics, forms are fluid and sometimes overlapping. They exist within specific cultural and linguistic contexts, but also travel across them <cit.>. 
They are socially and historically constructed and have been the subject of heated debates <cit.>, while also demonstrating remarkable durability across time (a number of the forms we test originated over 1,000 years ago). Since we focus on a corpus of mostly English-language poetry, the forms we focus on are all common in English, although most of them originated in other languages. For “fixed” forms, there are often specific rules and complex patterns of versification, but these rules are also likely to be stretched or broken by poets <cit.>. Like other literary genres, forms serve as “frameworks of expectation” <cit.> that are called up and manipulated in meaningful ways by writers. This makes it inherently difficult and subjective to evaluate poetic form. Fixed and Unfixed Forms. We divide the poetic forms we consider into three categories: fixed forms, formal elements, and unfixed forms. Fixed forms follow particular patterns in terms of number of lines, meter, rhyme, and/or repetition. Sonnets and villanelles are both fixed forms. Formal elements, such as common stanza types and meters, may be component parts of other forms or may define a poem as a whole. For example, there are generally three quatrains—or 4-line stanzas—in a Shakespearean sonnet. But a poem made up entirely of quatrains is a “quatrain poem.” Unfixed forms are defined by particular subject matter or kinds of content, rather than by repetition and sound. These are forms like elegy (writing about loss), which come in a variety of shapes, sizes, and patterns. See <ref> for full definitions and examples. These categorizations are recognized as imperfect, and they are neither stable nor discrete. A type of poetry like haiku has a common fixed form in English—three lines consisting of 5, 7, and 5 syllables—but haiku can also refer to concise, non-narrative poems with any number of lines that tend to focus on natural imagery <cit.>. Lastly, a single poem can also belong to more than one category. For example, John Keats's “Ode on a Grecian Urn” is an ode, but it is also an example of ekphrasis (writing about art), since it describes a decorated vase. To address this complexity, we exclude poems with multiple relevant tags in the same “form group,” such as pastoral and elegy (both unfixed forms). We believe that multi-label classification is an important avenue for future work. Meta-Discussion of Poetic Form. Like Keats, many authors include the name of the form they are engaged with in the title or text of a poem itself. While in the context of NLP evaluation these explicit mentions of a poem's form may seem to “give away” the correct answer, they are a fundamental aspect of poetry and are integral to a human reading experience. Thus, we do not exclude this information from our data or task; however, we do include basic statistics about how many poems include the form in the title or text (Figures <ref>, <ref>), and we experiment with prompts where the title is and is not included. § DATA To test how well LLMs evaluate poetic form, we curate over 4.1k poems, mostly English-language, which have been tagged/categorized with their poetic forms by human annotators, and either published online or collected in books. §.§ Poetry Sources Poetry Foundation. Poetry Foundation is a non-profit that works “to amplify poetry and celebrate poets” <cit.>. The organization runs Poetry magazine, and it also hosts an online database of English-language poetry with more than 47k poems. Academy of American Poets. 
The Academy of American Poets is also a non-profit whose mission is “to support American poets at all stages of their careers and to foster the appreciation of contemporary poetry” <cit.>. The organization hosts the website Poets.orgPoets.org, which includes more than 10k poems. Manually Digitized Poetry Books. We also manually digitize a range of poetry collections and anthologies organized by form that, when searched in the international library database WorldCat, did not have obvious e-books or presences in major databases (e.g. HathiTrust Digital Library). See <ref> for full list of books. To our knowledge, the collections from the Poetry Foundation and Academy of American Poets represent the largest collections of human-labeled poetry that extend into the present day. They are both well-respected poetry institutions with significant engagement from poets and poetry scholars. Both institutions have taken great care in formatting their poems with correct white space and line breaks in the HTML of their websites—an aspect of the poems that is essential to understanding both their form and meaning. We release 1.4k public domain poems from this dataset with form annotations as well as other accompanying metadata, such as subject tags and author birth and death years, when available. We do not make in-copyright poems available. §.§ Poetry Curation and Processing We select poems in the following categories delineated by the Poetry Foundation on their website: verse forms, stanza forms, meters, and types/modes. Conceptually, as discussed in <ref>, we frame these tag categories as fixed forms, formal elements, and unfixed forms (see Table <ref>). The Academy of American Poets does not tag poems by meter or stanza form, so for these forms, we only use the Poetry Foundation as our source. We scrape up to 400 poems per available form on each of the two websites. We exclude poems that have multiple relevant tags in the same “form group,” but we allow poems that may have multiple relevant tags in different form groups, such as blank verse (formal element) and elegy (unfixed form). We preserve white space and line breaks in our dataset and see this as a central contribution. Additionally, we digitize 15 print poetry anthologies and collections tagged with each of the fixed forms that we consider, according to Library of Congress subject headings via WorldCat. §.§ Auditing Pretraining Data for Poems Online resources like Poetry Foundation are valuable in large part because they make thousands of poems available on the internet for free. However, this also means that these specific poems are more likely to be present in the training data of LLMs, leading to memorization issues that are could affect performance on our form classification task. Prior work has found significant amounts of poetry memorization in large models like GPT-3.5 <cit.>. We therefore perform initial experiments to probe pretraining datasets for the poems in our datasets. Thanks to new data resources <cit.>, we can search directly for poems in pretraining data as well as probing model outputs. Dolma. The Dolma open pretraining dataset <cit.> is a “three-trillion-token English corpus, built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials.” It includes Github, Wikipedia, WikiBooks, Reddit, Semantic Scholar, Project Gutenberg, and Common Crawl texts, resulting in a large pretraining dataset that is open to researchers. 
We query the Dolma dataset (see <ref>) using the What's In My Big Data (WIMBD) platform <cit.>.[<https://github.com/allenai/wimbd>] WIMBD allows us to search for exact strings and returns all matches along with their associated metadata, including the data source, the original web domain, the surrounding text, and other information. We split each poem into lines, and we remove lines with fewer than four whitespace-delimited tokens (otherwise, the queries are often short and generic, resulting in matches that are not reliably part of a poem). We truncate lines at 20 tokens for query efficiency. We release this data publicly to support future research. How many poems are in pretraining data? We find that about half of the poems (57%) are not present in Dolma (not even one line is detected). This does not guarantee that these poems are not present in the pretraining data for industry models, whose pretraining data is not disclosed and which likely include many in-copyright texts—but this provides us with one publicly available clue. Fig. <ref> shows the forms and the proportions of their associated poems that were detected in Dolma, categorized by the Dolma source. About 30% of our poems are found in the Common Crawl data included in Dolma, with the C4 dataset close behind. Wikipedia and Semantic Scholar contain the fewest detected poems. Overall, if at least one line from a poem is detected, it is likely that all the lines will be detected somewhere in Dolma (see Fig. <ref>). Where does poetry pretraining data come from? Examining the web domains from which the Dolma data was sourced, we find that large websites like Github, Reddit, and Google Books dominate the rankings (Table <ref>). Many poetry-specific websites like <engpoetry.com> and <poets.org> (the website of the Academy of American Poets, one of our data sources) are also present in the top ranked domains, as are domains related to books. Figure <ref> shows the distribution across data sources, with the Common Crawl dataset dominating, but some sources, e.g., Gutenberg, only containing significant percentages for certain forms like ballads and couplets. Models trained on different mixes of these sources could be more or less capable of recognizing certain forms. Are these poems memorized? We additionally replicate the tests from <cit.> by prompting GPT-4 to produce the next five lines of a poem, given its title, author, and first line (see <ref> for our prompt). We then check for any overlapping five-gram span between the model's output and the original poem text; hand-annotations for 300 random poems indicate that this is a viable method to check for memorization (97% accuracy). We find that 41% of poems are memorized by GPT-4, and 46% of these memorized poems are also found in Dolma. This indicates that more poetry data is available in the training of closed models like GPT-4 than is available in Dolma, and memorization is an issue that can be partly but not fully addressed by current open resources. § METHODS §.§ Form Classification We compare the performance of six diverse, state-of-the-art LLMs on the task of identifying more than 20 poetic forms and formal elements from a list of possible options. We test three iterations of the GPT models—GPT-3.5 Turbo, GPT-4 <cit.>, and GPT-4o <cit.>—because we are interested in the evolution of poetic capacities in LLMs over time. We also test Claude 3 Sonnet <cit.>, Llama3 <cit.>, and the open-source Mixtral 8x22B <cit.>. 
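To make the pretraining-data audit above concrete, the following Python sketch illustrates the two steps described in this section: turning a poem into exact-match query lines (dropping lines with fewer than four whitespace-delimited tokens and truncating at 20 tokens) and testing a model completion for a shared five-gram with the original poem. This is an illustrative sketch rather than the actual audit pipeline; the function names are ours, and the WIMBD search itself is abstracted away.

def prepare_queries(poem_text, min_tokens=4, max_tokens=20):
    # Build exact-match query strings for a corpus search tool such as WIMBD.
    queries = []
    for line in poem_text.splitlines():
        tokens = line.split()
        if len(tokens) < min_tokens:  # skip short, generic lines
            continue
        queries.append(" ".join(tokens[:max_tokens]))  # truncate long lines
    return queries

def ngrams(text, n=5):
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def looks_memorized(model_output, poem_text, n=5):
    # Flag a completion as memorized if it shares at least one n-gram span with the poem.
    return bool(ngrams(model_output, n) & ngrams(poem_text, n))

In practice, light normalization (for example, lowercasing and collapsing whitespace) would likely precede both steps.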
We experiment with four different zero-shot prompt types, showing the model different amounts of the poem and/or contextual information. We prompt the model with 1) only the text of the poem; 2) only the title and author; 3) only the first line of the poem; 4) only the last line of the poem. We use these different prompts to test for memorization and to better understand how different aspects of a poem, such as a title, may impact performance. We additionally ask the model to provide both an elaborated and one-word rationale for its choice as well as a confidence score. We show two example templates of the desired response format. An example prompt and response is included in <ref>. §.§ Formative Study with Human Experts We conduct a small, formative survey with 15 self-identified literature and poetry scholars, asking them to categorize four example poems from our dataset based on text alone. We purposely select four challenging and ambiguous examples based on our own domain expertise: John Crowe Ransom's https://www.poetryfoundation.org/poems/49146/piazza-piece“Piazza Piece” (sonnet); Robert Browning's https://www.poetryfoundation.org/poems/43773/prospice“Prospice” (ballad); Natalie Diaz's https://www.poetryfoundation.org/poems/56355/my-brother-at-3-am“My Brother at 3 A.M.” (pantoum); Matthew Rohrer's https://www.poetryfoundation.org/poetrymagazine/poems/57528/poem-written-with-buson-in-a-minute“Poem Written with Buson [`In a minute']” (haiku). We shared the survey in early 2024 on social media, with colleagues, and to scholars associated with the literary studies conference MLA. § RESULTS §.§ Form Classification by LLMs When prompted with only the text of a poem, the LLMs perform better overall on the fixed poetic forms than on the unfixed forms or formal elements. Classification performance for sonnets and haiku is particularly high, with F1 scores near or over 0.9 for all models except Llama3 (Table <ref>). This may be attributed to the prevalence of these forms in the training data. Yet when we average model performance by poetic feature (Table <ref>), it suggests that the models may identify forms with rhyme, meter, and fixed length more easily overall (sonnets typically depend on all three, and haiku on length and syllable count). The models generally struggle to identify forms based on repetition (see Table <ref>). However, GPT-4 and GPT-4o do well in this more uncommon poetic category, especially with sestinas (F1=0.87; 0.73), villanelles (F1=0.93; 0.92), and pantoums (F1=0.81; 0.82). This marks significant improvement from GPT-3.5 (F1=0.17, 0.62, 0.20) and is substantially stronger than Claude 3 Sonnet (F1=0.41, 0.58, 0.53), Mixtral 8x22B (F1=0.26, 0.69, 0.56), and Llama3 (F1=0.17, 0.32, 0.46). Poetic forms based on topic prove more difficult for the models, depending on the topic (Table <ref>, <ref>). Forms centered on more concrete subjects like death (elegy) and art (ars poetica, ekphrasis) are more often recognized, while poems about abstract ideas and styles like aubades and odes are less so. There are fewer forms in our dataset that depend on visual features, but most models except GPT-4 and GPT-4o falter with them, namely with concrete or pattern poetry (i.e. poems that rely on visual and typographical elements for their structure) and prose poetry (i.e. poems that don't have line breaks and look like prose). 
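For reference, the per-form scores reported in this section can be computed directly from the structured replies requested in our prompts. The sketch below is illustrative rather than our exact evaluation script (it assumes each field of the reply appears on its own line); it parses the predicted form and confidence score and computes a per-form F1 from (gold, predicted) pairs.

import re
from collections import Counter

def parse_response(text):
    # Assumes the reply follows the requested format, one field per line,
    # e.g. "1. Poetic Form: Limerick" and "4. Confidence Score : 0.98".
    form = re.search(r"Poetic Form:\s*(.+)", text)
    conf = re.search(r"Confidence Score\s*:\s*(\d(?:\.\d+)?)", text)
    return (form.group(1).strip().lower() if form else None,
            float(conf.group(1)) if conf else None)

def per_form_f1(pairs):
    # pairs: iterable of (gold_form, predicted_form); returns {form: F1}.
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold, pred in pairs:
        if pred == gold:
            tp[gold] += 1
        else:
            fn[gold] += 1
            if pred is not None:
                fp[pred] += 1
    scores = {}
    for form in set(tp) | set(fp) | set(fn):
        p = tp[form] / (tp[form] + fp[form]) if (tp[form] + fp[form]) else 0.0
        r = tp[form] / (tp[form] + fn[form]) if (tp[form] + fn[form]) else 0.0
        scores[form] = 2 * p * r / (p + r) if (p + r) else 0.0
    return scores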
§.§ Form Classification by Human Experts Though the majority of the 15 self-reported literary scholars in our formative study correctly answered sonnet and ballad for poems 1 and 2, respectively (see Figure <ref>), it was not an overwhelming majority, and answers were split between a wide variety of poetic forms, suggesting that this is not an “easy” task even for trained professionals. Poems 3 and 4 are even more interesting because they deviate slightly from conventional forms, and the majority of our literary scholar survey respondents did not accurately identify them. Yet all models except GPT-4o correctly identified Matthew Rohrer's atypically long haiku based on the text alone, and GPT-4, GPT-4o, and Llama3 correctly identified Natalie Diaz's pantoum even though Diaz varies the form slightly over the course of the poem. We see these results as promising for more robust studies that compare poetry evaluation between human experts and LLMs. §.§ Investigating Memorization Issues When prompted with only the author and title of a poem (and not the text), the models achieve nearly as high or higher classification performance in certain categories (see Figures <ref>, <ref>). For sonnets, all the models achieve F1 scores of 0.85 or higher when provided with only the title and author, and scores of 0.70 or higher when provided with only the first or last line. While this result suggests possible memorization issues, more than 40% of the sonnets in our dataset also include the word “sonnet” in their title. Similarly, the models perform better with the author/title only prompt with forms that are often named in their titles, such as aubade (56%) and ode (48%) (see Figure <ref>). We also compare model performance for poems from the major online poetry websites with a sample of manually digitized poems found only in print books (see Figure <ref>), and we see both improvements and declines in classification accuracy across different forms. Detection of pantoums improves across all models when shown the poem text but decreases when prompted with only the title and author. This improvement suggests that the models may more easily recognize conventional pantoums (since the poems from our prestigious online literary sources are less conventional and more experimental), and the decrease, while seemingly related to memorization, is more likely explained by the fact that none of the hand-digitized poems include pantoum in their title. Classification accuracy for sonnets drops the most dramatically in our hand-digitized sample, but these sonnets are, by contrast, more unconventional than the online sources in many ways, revealing the complexity and ambiguity of this task and the difficulty of curating data in these categories. When we compare performance between poems that are found and not found in Dolma's popular pretraining datasets, the results are similarly mixed (Table <ref>). Lastly, when we compare performance between poems that are likely memorized and not memorized by GPT-4 (<ref>), we see drops in performance for ballads and sonnets (especially for Llama3); however, classification performance seems relatively stable otherwise (see Table <ref>). § DISCUSSION §.§ Implications for NLP Researchers Poetry poses unique challenges to NLP systems. Our form detection task captures many of these complexities, including the need to detect rhyme, meter, topic, and both word and line repetition while allowing for artistic license. 
This differs from the detection of prose genres, whose delineations mainly rely on topics. Our results emphasize the difficulty of this task, as none of the models tested were able to achieve high test results across the forms, especially the less popular forms. Additionally, our audit of pretraining data holds important lessons for NLP researchers who are designing evaluation benchmarks, e.g., memorization is an uneven issue that is difficult to quantify, heightening the importance of open resources for auditing pretraining data. §.§ Implications for Poetry Researchers, Readers, and Digitized Collections Automatic or computationally-augmented form detection has the potential to improve discoverability of poems in digital libraries and archives. Poems were often published in periodicals, collections, and anthologies, and when these sources are digitized in full, it makes it difficult to find them as individual texts and forms. Consistent detection of structured verse forms would aid in the identification of poetic texts within digitized historical sources. Additionally, LLM evaluations may offer scholars potential insight into the legibility and durability of different poetic forms, as well as how forms relate to each other. For example, LLMs' successful classification of sonnets may provide further evidence for the form's status as “an exceptionally transnational poetic design... dispersed throughout more of the modern world than any other type of Western lyric” <cit.>. Finally, this research has implications for scholarship on the circulation and reception of poems online. Poems and/or subsections of them often circulate widely. Analyzing which lines appear in training data offers insight into where poems appear on the internet and how they travel online. § RELATED WORK §.§ Poetry Generation and Analysis Machine-generated poetry has been a focal area in NLP for many decades and has received renewed interest in the era of LLMs <cit.>. The computational analysis of poetry, including form and features like rhyme and meter, has a similarly long history that is being transformed by LLMs <cit.>. Most germane to our study, recent NLP work has specifically addressed LLMs' capacity to understand poetry. <cit.> develop a task and dataset, “PoemSum,” to evaluate how well LLMs can summarize poetry. “PoemSum” contains 3,011 poem summary/poem text pairs, which were respectively collected from the website PoemAnalysis.com and various websites. They conclude that SOTA summarization models are currently “not well-suited” for this task. We build on this work by focusing on a more specific sub-task (poetic form detection), by curating a dataset of poems tagged by form (thus attending to internal differences), and by selecting poems from well-respected poetry institutions. §.§ Literary Genre/Form Classification Automatically classifying literary texts by genre has been an active area of research in both NLP and the digital humanities. Many studies have focused on classifying fictional prose writing genres in novels <cit.>, while other work has focused on distinguishing between kinds of poetry, such as Greek epic vs. drama <cit.> and various styles of spoken free verse <cit.>. In the digital humanities, genre classification has often been used to highlight ambiguity. 
<cit.> find that features of English-language haiku are statistically distinct, yet they emphasize the importance of misclassifications for examining how “broadly distributed haiku’s influence was.” <cit.> similarly suggests that computational analysis of poetry “works, in part, because of its failures.” These scholars largely use classification to explore the fuzziness, as opposed to the rigidity, of genres and poetic forms. This is an angle that we do not fully explore in our work and view as important for future research. § CONCLUSION Our work audits current poetic capacities and training data in leading LLMs. We contribute the poetry evaluation task and release to the research community a dataset of 1.4k+ annotated public domain poems with accompanying metadata about their prevalence in popular training datasets. We also join <cit.> and others in cautioning against treating the benchmark/task as the be-all and end-all framework for NLP research. Poetry is a good example of a human output that purposely troubles neat categorization. We encourage more work that builds nuance and ambiguity into humanistic benchmarks such as this one, as well as work that places value beyond this orientation. Further research is also needed to study LLM poetic capacities in languages beyond English and to evaluate impacts on human creators (we expand on these issues in Limitations and Ethical Considerations). § LIMITATIONS In this study, we focus mostly on English-language poetry that was written and published in Europe and North America. Further, we only consider poems that were tagged by the Poetry Foundation, the Academy of American Poets, or editors of particular poetry collections (see <ref>), leaving out many other possible forms as well as poems that do not adhere neatly to forms. Poetry Foundation and the Academy of American Poets do not have comprehensive or representative (in terms of gender, race, culture, geography) collections of poems, nor do the print anthologies we digitized. Additionally, most of the poems in these collections are not tagged by form, and it is not always clear why some poems have tags and others do not. For example, on the Poetry Foundation website, Etheridge Knight and Sonia Sanchez, two late 20th-century poets associated with the Black Arts Movement, both wrote haiku series that include the word “haiku” in their titles, but they are not tagged as haiku on Poetry Foundation. While we select these resources because they are well-respected poetry institutions, we do not know how exactly these tags were applied to the poems, or who put them there. From our manual examination of the poems/tags and classification results, we found some examples where tags from either of these institutions were incorrectly applied. We do not believe this problem is extensive, but we have not manually checked every tagged poem. On these websites, and thus within our dataset, there is also an uneven distribution of poems in each form, reflecting biases related to race, class, language, and culture. For example, the ghazal is a poetic form that originated in Arabic and is popular in the Middle East and South Asia; however, ghazals are less popular, and less likely to be curated, in English-language contexts. Limericks are another popular and pervasive genre of poetry, yet they are often considered an unsophisticated genre or “light verse” form, and thus there are few of them in this particular dataset. 
There are also limitations to conceiving of poetic form as a single-label classification task, as a set of independent categories that a poem can belong to or not. Poetry is often valued for ambiguity, experimentation, and interpretive potential, so fitting neatly into a category is not necessarily what one looks for in poetic analysis. Poets also often mix and merge forms. For example, Gwendolyn Brooks developed the “https://poets.org/poem/sonnet-balladSonnet-Ballad,” and Roger Sedarat has created the “Sonnet Ghazal” <cit.>. Our approach does not account for these kinds of hybrid forms. Further, form only exists in relation to content. As foundational English literary scholars <cit.> wrote, “the reader, unlike a robot, must be able to recognize the dramatic implications of the form.” These implications only come through when form is considered as part of a broader composition with numerous intertwined elements. § ETHICAL CONSIDERATIONS Many of the poems that we asked the models to identify are currently under copyright. The poems from Poetry Foundation and Academy of American Poets are freely available online, but this is due to the fact that these institutions pay for copyright and compensate poets for their work, which is crucial for reproduction of recent texts. In the dataset we share, we only include poems that are in the public domain and whose authors died before 1929. In the U.S., copyright extends for 95 years after the date of first publication, so works published before 1929 are in the public domain. In using LLMs to evaluate poetry, there is a risk of reinforcing dominant understandings of poetic form and prosody. As has been well documented, LLMs can reproduce existing biases related to gender, race, class, and cultural background <cit.>, and there is significant existing bias in discourse surrounding poetic form. <cit.> emphasize that “Women were often underrepresented in poetry in the sixteenth, seventeenth, and eighteenth centuries” and were “absent—whether in retrospect or reality... from the festival of form that poetry became in those centuries.” And <cit.> notes that the “discourse around innovative and avant-garde poetry in the U.S.,” which has often emphasized discussions of form, “has historically constructed these categories as implicitly `white,”' pointing out that “African American poets, even when they were involved in, perhaps central to, now-canonical avant-garde movements have been marginalized or erased from literary histories.” These literary histories inform which works are included in anthologies and incorporated into digital collections, and they also influence training data. <cit.> have shown that inclusion in the 1983 edition of the Norton Anthology of Poetry was the best predictor of poem memorization in ChatGPT. This anthology represents a traditional view of the English poetic canon, favoring historical works published in the U.K. and the U.S., and excluding important works by women authors, Black and Indigenous authors and authors of color, and authors working outside Europe and North America. If the performance of LLMs improves in relation to poetic form evaluation, whose versions of form will be reproduced? Given the complex cultural, historical, and textual conditions from which poetic forms emerge, as well as the centuries-long discourse surrounding how to label, categorize, and analyze form, this work requires domain expertise, and domain experts should be included in discussions about benchmarks for complex creative and interpretive tasks. 
At the same time, domain experts may have hesitations about this kind of collaboration, given the widespread use of copyrighted material in training data, and the risks LLMs pose to authors, whose work is fundamental to literary studies. § ACKNOWLEDGEMENTS We would like to thank Dany Haddad, Yanai Elazar, and Kyle Lo for their help with Dolma and WIMBD, as well as Elizabeth Gronski for her help with early versions of this work. We are also grateful to Greg Yauney, David Mimno, and Matthew Wilkens for formative conversations and feedback about this project. This work was supported by the NEH AI for Humanists project. § APPENDIX §.§ Poetic Features by Form §.§ Additional Poetic Form Detection Results §.§ Additional Memorization Results §.§ Formative Study Results §.§ Example Prompt and LLM Response Example Prompt: Read the following poem and then respond with the form of the poem based on the following verse forms: ['sonnet', 'ballad', 'villanelle', 'haiku', 'ghazal', 'sestina', 'pantoum', 'limerick']. All of the poems have been tagged by experts as one of these forms. You must pick one of these options. Please also provide an elaborated rationale for why you think the poem is in this form, a one-word summary rationale, and a score ranking your confidence in your answer from 0 to 1. Please report the single poetic form, elaborated rationale, one-word rationale, and multiple forms in the following format. 1. Poetic Form: Ballad 2. Elaborated Rationale: This poem is a ballad because... 3. One-Word Summary: Meter 4. Confidence Score : 0.91 1. Poetic Form: Sonnet 2. Elaborated Rationale: This poem is a sonnet because... 3. One-Word Summary: Meter 4. Confidence Score : 0.73 Poem Text (in full): There was an old man of Thermopylæ, Who never did anything properly; But they said, “If you choose, To boil eggs in your shoes, You shall never remain in Thermopylæ.” Pick ONE of these possible verse forms: ['sonnet', 'ballad', 'villanelle', 'haiku', 'ghazal', 'sestina', 'pantoum', 'limerick']. Example Response (GPT-4): 1. Poetic Form: Limerick 2. Elaborated Rationale: This poem is a limerick because it strictly follows the standard limerick rhyme scheme of AABBA. The poem begins with a character and a location (an old man of Thermopylæ), and it proceeds in a humorous, light-hearted manner typical of limericks. The first two lines are longer, the next two lines are shorter, and the final line is longer again, also within the parameters of a limerick. 3. One-Word Summary: Rhyme 4. Confidence Score : 0.98 §.§ Manually Digitized Print Poetry Sources John Barrell and John Bull. 1975. A Book of English pastoral verse. Oxford University Press, New York. Wilmon Brewer. 1937. Sonnets and Sestinas. Cornhill Publishing Co., Boston. Fred Cogswell. 1986. Meditations: 50 Sestinas. Ragweed Press. Clark Coolidge. 2012. 88 Sonnets. Fence Books, Albany, NY. Dennis Daly. 2018. Pantoums. Dos Madres Press, Loveland, Ohio. Annie Finch, Marie-Elizabeth Mali, Annie Finch, and Julie Kane. 2012. Villanelles. Alfred A. Knopf, New York. Sandra M. Gilbert. 2001. Inventions of farewell: a book of elegies. W.W. Norton & Co., New York. Robert Graves. 1957. English and Scottish ballads. Heinemann, London. Scott Gutterman, editor. 2015. Sunlight on the river: poems about paintings, paintings about poems. Prestel, Munich. Jim Kacian, Philip Rowland, and Allan Burns. 2013. Haiku in English: the first hundred years. W.W. Norton & Company, New York. G. Legman. 1969. The Limerick: 1700 examples, with notes, variants, and index. 
Bell Publishing Co., New York. G. Legman. 1977. The New Limerick: 2750 Unpublished Examples, American and British. Crown Publishers. Bob Raczka. 2016. Wet cement: a mix of concrete poems. Roaring Brook Press, New York. Cor Van den Heuvel. 1986. The haiku anthology: haiku and senryu in English. Simon & Schuster, New York. Joseph Warton. 1977. Odes on various subjects (1746). Scholars’ Facsimiles & Reprints, Delmar, N.Y. Eugene Wildman. 1967. The Chicago review anthology of concretism. Swallow Press, Chicago. Emmett Williams and Something Else Press. 1967. An anthology of concrete poetry. Something Else Press, New York. Seishi Yamaguchi and Sono Uchida. 1993. The essence of modern haiku: 300 poems. Mangajin, Inc., Atlanta, Georgia. Kevin Young. 2010. The art of losing: poems of grief and healing. Bloomsbury USA, New York. Thomas Perrin Harrison. 1968. The pastoral elegy: an anthology. Octagon Books. §.§ Memorization Prompt What are the next five lines of the poem “<POEM_TITLE>” by <AUTHOR_NAME>? First Line: <FIRST_LINE> Next Lines: §.§ Poetic Forms Poetic forms can be defined and categorized in various ways. The definitions of forms and formal elements that we offer here are synthesized from information in glossaries of poetic terms available on the https://www.poetryfoundation.org/learn/glossary-termsPoetry Foundation and https://poets.org/glossaryAcademy of American Poets websites as well as from widely used poetry resources by <cit.>, <cit.>, and <cit.>. §.§.§ Fixed Forms Ballad A type of narrative poem with ties to music and oral performance. Traditional ballads often feature regular meter and stanzas. One conventional pattern is “common measure,” which consists of quatrains that rhyme ABCB and alternate iambic tetrameter and trimeter. Example Ballad: from “https://www.poetryfoundation.org/poems/50273/barbara-allenBarbara Allen” (by Anonymous) In Scarlet town, where I was born, There was a fair maid dwellin’, Made every youth cry Well-a-way! Her name was Barbara Allen. All in the merry month of May, When green buds they were swellin’, Young Jemmy Grove on his death-bed lay, For love of Barbara Allen. He sent his man in to her then, To the town where she was dwellin’; “O haste and come to my master dear, If your name be Barbara Allen... Ghazal Originally an Arabic verse form, ghazals consist of a series of couplets usually all ending in the same word. Poets may include their name in the final couplet. Example Ghazal: from “https://www.poetryfoundation.org/poetrymagazine/poems/144612/where-did-the-handsome-beloved-goWhere did the handsome beloved go?” (by Jalal Al-Din Rumi, translated by Brad Gooch and Maryam Mortaz) Where did the handsome beloved go? I wonder, where did that tall, shapely cypress tree go? He spread his light among us like a candle. Where did he go? So strange, where did he go without me? All day long my heart trembles like a leaf. All alone at midnight, where did that beloved go? Go to the road, and ask any passing traveler — That soul-stirring companion, where did he go? Go to the garden, and ask the gardener — That tall, shapely rose stem, where did he go? Go to the rooftop, and ask the watchman — That unique sultan, where did he go? Haiku Originating in Japan, haiku are concise, non-narrative poems that often focus on imagery. In English, haiku often consist of three unrhymed lines with 5, 7, and 5 syllables respectively. 
Example Haiku: “https://www.poetryfoundation.org/poems/48708/in-kyoto-In Kyoto” (by Bashō, translated by Jane Hirshfield) In Kyoto, hearing the cuckoo, I long for Kyoto. Limerick A light, often comedic verse form consisting of five lines rhymed AABBA. In traditional limericks, lines 1, 2, and 5 are trimeter, while lines 3 and 4 are dimeter, and the dominant meter is anapestic. Example Limerick: “https://www.poetryfoundation.org/poems/42910/a-young-lady-of-lynnA Young Lady of Lynn” (by Anonymous) There was a young lady of Lynn, Who was so uncommonly thin That when she essayed To drink lemonade She slipped through the straw and fell in. Pantoum A Malaysian verse form that was adapted into French and later English, which consists of a series of quatrains in which the second and fourth lines of each quatrain serve as the first and third lines of the next quatrain. Pantoums do not have a determined length. Example Pantoum: from “https://poets.org/poem/nocturne-5Nocturne” (by Sadakichi Hartmann) Upon the silent sea-swept land The dreams of night fall soft and gray, The waves fade on the jeweled sand Like some lost hope of yesterday. The dreams of night fall soft and gray Upon the summer-colored seas, Like some lost hope of yesterday, The sea-mew’s song is on the breeze. Upon the summer-colored seas Sails gleam and glimmer ghostly white, The sea-mew’s song is on the breeze Lost in the monotone of night. Sails gleam and glimmer ghostly white, They come and slowly drift away, Lost in the monotone of night, Like visions of a summer-day. They shift and slowly drift away Like lovers’ lays that wax and wane, The visions of a summer-day Whose dreams we ne’er will dream again. Sestina A complex verse form consisting of six, unrhymed, six-line stanzas followed by a three-line envoi. Each sestet includes the same six endwords in shifting, but specific patterns (below), and all six endwords also appear in the envoi. Endword pattern: 1: ABCDEF 2: FAEBDC 3: CFDABE 4: ECBFAD 5; DEACFB 6: BDFECA envoi : ECA or ACE Example Sestina: from “https://poets.org/poem/sestina-altaforteSestina: Altaforte” (by Ezra Pound) I Damn it all! all this our South stinks peace. You whoreson dog, Papiols, come! Let's to music! I have no life save when the swords clash. But ah! when I see the standards gold, vair, purple, opposing And the broad fields beneath them turn crimson, Then howl I my heart nigh mad with rejoicing. II In hot summer have I great rejoicing When the tempests kill the earth's foul peace, And the lightnings from black heav'n flash crimson, And the fierce thunders roar me their music And the winds shriek through the clouds mad, opposing, And through all the riven skies God's swords clash. III Hell grant soon we hear again the swords clash! And the shrill neighs of destriers in battle rejoicing, Spiked breast to spiked breast opposing! Better one hour's stour than a year's peace With fat boards, bawds, wine and frail music! Bah! there's no wine like the blood's crimson! IV And I love to see the sun rise blood-crimson. And I watch his spears through the dark clash And it fills all my heart with rejoicing And pries wide my mouth with fast music When I see him so scorn and defy peace, His lone might 'gainst all darkness opposing. V The man who fears war and squats opposing My words for stour, hath no blood of crimson But is fit only to rot in womanish peace Far from where worth's won and the swords clash For the death of such sluts I go rejoicing; Yea, I fill all the air with my music. 
VI Papiols, Papiols, to the music! There's no sound like to swords swords opposing, No cry like the battle's rejoicing When our elbows and swords drip the crimson And our charges 'gainst “The Leopard's” rush clash. May God damn for ever all who cry “Peace!” VII And let the music of the swords make them crimson! Hell grant soon we hear again the swords clash! Hell blot black for always the thought “Peace!” Sonnet A fourteen-line verse form, usually in iambic pentameter, and usually following a set rhyme scheme. The most common types of sonnets are Shakespearean/English, which consist of three quatrains followed by a couplet and often rhyme ABABCDCDEFEFGG, and Petrarchan/Italian, which consists of an octave followed by a sestet and often rhyme ABBAABBACDCDCD or ABBAABBACDECDE. Example Petrarchan sonnet: “https://www.poetryfoundation.org/poems/44750/sonnet-19-when-i-consider-how-my-light-is-spentWhen I consider how my light is spent” (John Milton) When I consider how my light is spent, Ere half my days, in this dark world and wide, And that one Talent which is death to hide Lodged with me useless, though my Soul more bent To serve therewith my Maker, and present My true account, lest he returning chide; “Doth God exact day-labour, light denied?” I fondly ask. But patience, to prevent That murmur, soon replies, “God doth not need Either man’s work or his own gifts; who best Bear his mild yoke, they serve him best. His state Is Kingly. Thousands at his bidding speed And post o’er Land and Ocean without rest: They also serve who only stand and wait.” Example Shakespearean Sonnet: “https://www.poetryfoundation.org/poems/44691/america-56d223e1ac025America” (Claude McKay) Although she feeds me bread of bitterness, And sinks into my throat her tiger’s tooth, Stealing my breath of life, I will confess I love this cultured hell that tests my youth. Her vigor flows like tides into my blood, Giving me strength erect against her hate, Her bigness sweeps my being like a flood. Yet, as a rebel fronts a king in state, I stand within her walls with not a shred Of terror, malice, not a word of jeer. Darkly I gaze into the days ahead, And see her might and granite wonders there, Beneath the touch of Time’s unerring hand, Like priceless treasures sinking in the sand. Villanelle A 19-line verse form originating in France, made up of five tercets followed by a quatrain, in which the first and third line of the first stanza are alternatingly repeated as a refrain in the following stanzas. Stanza 1 line 1 repeats as the third line of stanzas 2 and 4, and stanza 1 line 3 repeats as the third line of stanzas 3 and 5. These two lines also appear as the closing lines of the quatrain. Example Villanelle: “https://poets.org/poem/do-not-go-gentle-good-nightDo not go gentle into that good night” (Dylan Thomas) Do not go gentle into that good night, Old age should burn and rave at close of day; Rage, rage against the dying of the light. Though wise men at their end know dark is right, Because their words had forked no lightning they Do not go gentle into that good night. Good men, the last wave by, crying how bright Their frail deeds might have danced in a green bay, Rage, rage against the dying of the light. Wild men who caught and sang the sun in flight, And learn, too late, they grieved it on its way, Do not go gentle into that good night. Grave men, near death, who see with blinding sight Blind eyes could blaze like meteors and be gay, Rage, rage against the dying of the light. 
And you, my father, there on the sad height, Curse, bless, me now with your fierce tears, I pray. Do not go gentle into that good night. Rage, rage against the dying of the light. §.§.§ Stanza Forms Couplet A two-line stanza or two lines of verse, often but not always rhymed. Example Couplets: “https://www.poetryfoundation.org/poems/44830/interview-56d22412c4b44Interview” by Dorothy Parker The ladies men admire, I’ve heard, Would shudder at a wicked word. Their candle gives a single light; They’d rather stay at home at night. They do not keep awake till three, Nor read erotic poetry. They never sanction the impure, Nor recognize an overture. They shrink from powders and from paints ... So far, I’ve had no complaints. Tercet A three-line stanza or three lines of verse, often but not always containing a rhyme. Example Tercets: from “https://www.poetryfoundation.org/poems/47266/the-convergence-of-the-twainThe Convergence of the Twain” (Thomas Hardy) (Lines on the loss of the “Titanic”) I In a solitude of the sea Deep from human vanity, And the Pride of Life that planned her, stilly couches she. II Steel chambers, late the pyres Of her salamandrine fires, Cold currents thrid, and turn to rhythmic tidal lyres. III Over the mirrors meant To glass the opulent The sea-worm crawls — grotesque, slimed, dumb, indifferent. IV Jewels in joy designed To ravish the sensuous mind Lie lightless, all their sparkles bleared and black and blind. V Dim moon-eyed fishes near Gaze at the gilded gear And query: “What does this vaingloriousness down here?” ... Quatrain A four-line stanza or unit of verse, often, but not always containing rhyme. Example Quatrains: from “https://www.poetryfoundation.org/poems/44299/elegy-written-in-a-country-churchyardElegy Written in a Country Churchyard” (Thomas Gray) The curfew tolls the knell of parting day, The lowing herd wind slowly o'er the lea, The plowman homeward plods his weary way, And leaves the world to darkness and to me. Now fades the glimm'ring landscape on the sight, And all the air a solemn stillness holds, Save where the beetle wheels his droning flight, And drowsy tinklings lull the distant folds; ... §.§.§ Meters Free Verse Verse that does not follow a particular pattern of meter or rhyme. Example Free Verse: from “https://www.poetryfoundation.org/poems/47311/the-waste-landThe Waste Land” (T.S. Eliot) April is the cruellest month, breeding Lilacs out of the dead land, mixing Memory and desire, stirring Dull roots with spring rain. Winter kept us warm, covering Earth in forgetful snow, feeding A little life with dried tubers. Summer surprised us, coming over the Starnbergersee With a shower of rain; we stopped in the colonnade, And went on in sunlight, into the Hofgarten, And drank coffee, and talked for an hour. Bin gar keine Russin, stamm’ aus Litauen, echt deutsch. And when we were children, staying at the archduke’s, My cousin’s, he took me out on a sled, And I was frightened. He said, Marie, Marie, hold on tight. And down we went. In the mountains, there you feel free. I read, much of the night, and go south in the winter. ... Blank Verse Unrhymed iambic pentameter. 
Example Blank Verse: from https://www.poetryfoundation.org/poems/45718/paradise-lost-book-1-1674-versionParadise Lost (John Milton) Of Mans First Disobedience, and the Fruit Of that Forbidden Tree, whose mortal tast Brought Death into the World, and all our woe, With loss of Eden, till one greater Man Restore us, and regain the blissful Seat, Sing Heav'nly Muse, that on the secret top Of Oreb, or of Sinai, didst inspire That Shepherd, who first taught the chosen Seed, In the Beginning how the Heav'ns and Earth Rose out of Chaos: or if Sion Hill Delight thee more, and Siloa's brook that flow'd Fast by the Oracle of God; I thence Invoke thy aid to my adventrous Song, That with no middle flight intends to soar Above th' Aonian Mount, while it pursues Things unattempted yet in Prose or Rhime. Common Measure Quatrains consisting of alternating lines of iambic tetrameter and trimeter, rhymed ABAB. Example Common Measure: from “https://www.poetryfoundation.org/poems/44085/it-was-not-death-for-i-stood-up-355It was not death for I stood up” (Emily Dickinson) It was not Death, for I stood up, And all the Dead, lie down - It was not Night, for all the Bells Put out their Tongues, for Noon. It was not Frost, for on my Flesh I felt Siroccos - crawl - Nor Fire - for just my marble feet Could keep a Chancel, cool - And yet, it tasted, like them all, The Figures I have seen Set orderly, for Burial Reminded me, of mine - ... §.§.§ Unfixed forms Ode A formal lyric poem, which addresses or celebrates a person, place, object, or concept, usually that is not present. Odes are often longer verse forms, and their stanza patterns vary. Example Ode: from “https://www.poetryfoundation.org/poems/44477/ode-on-a-grecian-urnOde on a Grecian Urn” (John Keats) Thou still unravish'd bride of quietness, Thou foster-child of silence and slow time, Sylvan historian, who canst thus express A flowery tale more sweetly than our rhyme: What leaf-fring'd legend haunts about thy shape Of deities or mortals, or of both, In Tempe or the dales of Arcady? What men or gods are these? What maidens loth? What mad pursuit? What struggle to escape? What pipes and timbrels? What wild ecstasy? Heard melodies are sweet, but those unheard Are sweeter; therefore, ye soft pipes, play on; Not to the sensual ear, but, more endear'd, Pipe to the spirit ditties of no tone: Fair youth, beneath the trees, thou canst not leave Thy song, nor ever can those trees be bare; Bold Lover, never, never canst thou kiss, Though winning near the goal yet, do not grieve; She cannot fade, though thou hast not thy bliss, For ever wilt thou love, and she be fair! Pastoral A type of poetry and a broader creative tradition idealizing rural life. Example Pastoral: “https://www.poetryfoundation.org/poems/44675/the-passionate-shepherd-to-his-loveThe Passionate Shepherd to His Love” (Christopher Marlowe) Come live with me and be my love, And we will all the pleasures prove, That Valleys, groves, hills, and fields, Woods, or steepy mountain yields. And we will sit upon the Rocks, Seeing the Shepherds feed their flocks, By shallow Rivers to whose falls Melodious birds sing Madrigals. 
And I will make thee beds of Roses And a thousand fragrant posies, A cap of flowers, and a kirtle Embroidered all with leaves of Myrtle; A gown made of the finest wool Which from our pretty Lambs we pull; Fair lined slippers for the cold, With buckles of the purest gold; A belt of straw and Ivy buds, With Coral clasps and Amber studs: And if these pleasures may thee move, Come live with me, and be my love. The Shepherds’ Swains shall dance and sing For thy delight each May-morning: If these delights thy mind may move, Then live with me, and be my love. Aubade A poem or song welcoming or lamenting the arrival of dawn, usually with romantic themes. Example Aubade: “https://www.poetryfoundation.org/poems/51783/break-of-dayBreak of Day” (John Donne) ‘Tis true, ‘tis day, what though it be? O wilt thou therefore rise from me? Why should we rise because ‘tis light? Did we lie down because ‘twas night? Love, which in spite of darkness brought us hither, Should in despite of light keep us together. Light hath no tongue, but is all eye; If it could speak as well as spy, This were the worst that it could say, That being well I fain would stay, And that I loved my heart and honour so, That I would not from him, that had them, go. Must business thee from hence remove? Oh, that’s the worst disease of love, The poor, the foul, the false, love can Admit, but not the busied man. He which hath business, and makes love, doth do Such wrong, as when a married man doth woo. Dramatic Monologue: a poem in which a usually fictional speaker addresses a listener, who is also often imagined. Example Dramatic Monologue: from “https://www.poetryfoundation.org/poems/43768/my-last-duchessMy Last Duchess” (Robert Browning) That’s my last Duchess painted on the wall, Looking as if she were alive. I call That piece a wonder, now; Fra Pandolf’s hands Worked busily a day, and there she stands. Will’t please you sit and look at her? I said “Fra Pandolf” by design, for never read Strangers like you that pictured countenance, The depth and passion of its earnest glance, But to myself they turned (since none puts by The curtain I have drawn for you, but I) And seemed as they would ask me, if they durst, How such a glance came there; so, not the first Are you to turn and ask thus. Sir, ’twas not Her husband’s presence only, called that spot Of joy into the Duchess’ cheek; perhaps Fra Pandolf chanced to say, “Her mantle laps ... Elegy A form of poetry and broader mode of writing expressing grief or loss, often in relation to its subject’s death. Example Elegy: from “https://www.poetryfoundation.org/poems/44733/lycidasLycidas” (John Milton) Yet once more, O ye laurels, and once more Ye myrtles brown, with ivy never sere, I come to pluck your berries harsh and crude, And with forc'd fingers rude Shatter your leaves before the mellowing year. Bitter constraint and sad occasion dear Compels me to disturb your season due; For Lycidas is dead, dead ere his prime, Young Lycidas, and hath not left his peer. Who would not sing for Lycidas? he knew Himself to sing, and build the lofty rhyme. He must not float upon his wat'ry bier Unwept, and welter to the parching wind, Without the meed of some melodious tear. Concrete Poetry A type of poetry that is structured by visual effect on the page, and often emphasizes nonlinguistic aspects of writing, including typography, layout, whitespace, etc. 
Example Concrete Poetry: “https://www.poetryfoundation.org/poems/44361/easter-wingsEaster Wings” (George Herbert) Lord, who createdst man in wealth and store, Though foolishly he lost the same, Decaying more and more, Till he became Most poore: With thee O let me rise As larks, harmoniously, And sing this day thy victories: Then shall the fall further the flight in me. My tender age in sorrow did beginne And still with sicknesses and shame. Thou didst so punish sinne, That I became Most thinne. With thee Let me combine, And feel thy victorie: For, if I imp my wing on thine, Affliction shall advance the flight in me. Prose Poem A poetic composition that is not broken up into lines. Example Prose Poem: https://poets.org/poem/gitanjali-14Gitanjali, #14 (by Rabindranath Tagore) My desires are many and my cry is pitiful, but ever didst thou save me by hard refusals; and this strong mercy has been wrought into my life through and through. Day by day thou art making me worthy of the simple, great gifts that thou gavest to me unasked—this sky and the light, this body and the life and the mind—saving me from perils of overmuch desire. There are times when I languidly linger and times when I awaken and hurry in search of my goal; but cruelly thou hidest thyself from before me. Day by day thou art making me worthy of thy full acceptance by refusing me ever and anon, saving me from perils of weak, uncertain desire. Ars Poetica A poem about poetry. Example Ars Poetica: from “https://poets.org/poem/poetryPoetry” (Marianne Moore) I too, dislike it: there are things that are important beyond all this fiddle.    Reading it, however, with a perfect contempt for it, one discovers that there is in    it after all, a place for the genuine.       Hands that can grasp, eyes       that can dilate, hair that can rise          if it must, these things are important not because a high-sounding interpretation can be put upon them but because they are    useful; when they become so derivative as to become unintelligible, the    same thing may be said for all of us—that we       do not admire what       we cannot understand. The bat,          holding on upside down or in quest of something to eat, elephants pushing, a wild horse taking a roll, a tireless wolf under    a tree, the immovable critic twinkling his skin like a horse that feels a flea, the base—    ball fan, the statistician—case after case       could be cited did       one wish it; nor is it valid          to discriminate against “business documents and school-books”; all these phenomena are important. One must make a distinction    however: when dragged into prominence by half poets, the result is not poetry,    nor till the autocrats among us can be      “literalists of       the imagination”—above          insolence and triviality and can present for inspection, imaginary gardens with real toads in them, shall we have    it. In the meantime, if you demand on the one hand, in defiance of their opinion—    the raw material of poetry in       all its rawness, and       that which is on the other hand,          genuine, then you are interested in poetry. Ekphrasis Writing that uses vivid language to respond to or describe a work of visual art. Example Ekphrasis: “https://poets.org/poem/seeing-elgin-marblesOn Seeing the Elgin Marbles” (John Keats) My spirit is too weak—mortality Weighs heavily on me like unwilling sleep, And each imagined pinnacle and steep Of godlike hardship tells me I must die Like a sick eagle looking at the sky. 
Yet ‘tis a gentle luxury to weep, That I have not the cloudy winds to keep, Fresh for the opening of the morning’s eye. Such dim-conceived glories of the brain Bring round the heart an indescribable feud; So do these wonders a most dizzy pain, That mingles Grecian grandeur with the rude Wasting of old Time—with a billowy main— A sun—a shadow of a magnitude.
http://arxiv.org/abs/2406.17974v1
20240625231139
Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts
[ "Xuyang Wu", "Yuan Wang", "Hsin-Tai Wu", "Zhiqiang Tao", "Yi Fang" ]
cs.CL
[ "cs.CL", "cs.CV" ]
§ ABSTRACT Large vision-language models (LVLMs) have recently achieved significant progress, demonstrating strong capabilities in open-world visual understanding. However, it is not yet clear how LVLMs address demographic biases in real life, especially the disparities across attributes such as gender, skin tone, and age. In this paper, we empirically investigate visual fairness in several mainstream LVLMs and audit their performance disparities across sensitive demographic attributes, based on public fairness benchmark datasets (e.g., FACET). To disclose the visual bias in LVLMs, we design a fairness evaluation framework with direct questions and single-choice question-instructed prompts on visual question-answering/classification tasks [Our code can be found at <https://github.com/elviswxy/LVLM_fairness>.]. The zero-shot prompting results indicate that, despite enhancements in visual understanding, both open-source and closed-source LVLMs exhibit prevalent fairness issues across different instruct prompts and demographic attributes. § INTRODUCTION Large vision-language models (LVLMs) have successfully encoded images and text into a shared latent space, enabling better visual reasoning <cit.>. Pre-trained LVLMs can accurately interpret images and extract semantics when natural language instructions (also known as “prompts”) are meticulously designed, providing additional information for traditional vision tasks such as classification <cit.>, segmentation <cit.>, and visual question answering <cit.>. Although many studies and models have achieved remarkable results <cit.>, there is a knowledge gap in the literature regarding the fairness evaluation of recent large models. Most existing works focus on improving the accuracy and efficiency of LVLMs <cit.>, with limited attention given to their performance across different demographic groups. This oversight is critical, as it can lead to biased outcomes, potentially perpetuating stereotypes <cit.>, as illustrated in Figure <ref> from our experiments. Moreover, existing studies <cit.> have not adequately addressed the need for fairness evaluation specifically designed for contemporary large model settings. It is essential to systematically study the impact of various demographic attributes on LVLM performance. In this study, we empirically provide a detailed evaluation of LVLMs from a fairness perspective. We propose a novel evaluation framework that employs direct questions and single-choice question-instructed prompts on visual question answering/classification tasks based on the FACET benchmark <cit.>. The proposed framework analyzes the models' ability to understand and interpret images accurately while assessing any inherent biases related to visual clues such as gender, skin tone, and age. The contributions of this work are twofold: 1) We propose a novel evaluation framework to investigate visual fairness issues in LVLMs, utilizing a fairness benchmark and meticulously designed instruct prompts. 2) Our extensive experimental results demonstrate that both open-source and closed-source LVLMs exhibit fairness issues across different instruct prompts and demographic attributes. 
§ LVLMS FAIRNESS EVALUATION §.§ Dataset Construction To evaluate demographic bias in LVLMs based on attributes such as age, gender, and skin tone, we selected only images containing a single person from FACET <cit.>, a human-annotated fairness benchmark. Each image is annotated with demographic attributes, allowing us to systematically assess models' performance and identify visual fairness across different ages, genders, and skin tones in LVLMs. The statistics of our FACET dataset are shown in Table <ref>. §.§ Evaluation Framework Our LVLMs evaluation framework employs a variety of instruct prompts and a wide range of images in different scenarios. This framework is designed to assess the model's ability to understand individuals in images during prediction and classification tasks. By analyzing the results, we evaluate the model's performance across different demographic attributes, providing insights into its fairness and potential biases. Figure <ref> illustrates our proposed LVLMs fairness evaluation framework. Prompts Recent studies have shown that prompting methods are highly effective for evaluating LVLMs and LLMs <cit.>. Building on these studies, we designed specific prompts for LVLMs with different objectives by converting knowledge facts into a question-answering format. In our evaluation experiments, we use diverse instruct prompts tailored to extract person-related classes (e.g., soldier, nurse) from the images. Direct Question Prompts ask straightforward questions to gather specific information from the model, allowing for detailed responses. This approach provides in-depth insights into the model's understanding and generates rich, descriptive answers, making it ideal for exploratory analysis and assessing the model's comprehension. Single-Choice Question Prompts present a specific question with a set of predefined answers from which the model must choose, ensuring consistent and comparable responses. This method is effective for quantifying the model's accuracy and systematically detecting biases. More details of the prompts can be found in Appendix <ref>. LVLMs Inference and Formatting Results During model inference, the model generates predictions based on the instructed prompts and the content of the image. For direct question prompts, the model directly predicts the class label of the person in the image. For single-choice question prompts, the model answers based on the prompt about the person's class and the attributes in the image, providing the most probable prediction of yes, no, or unknown. Because LVLMs sometimes produce unexpected output formats (such as format errors or additional explanations), an encoder function encodes the raw outputs as o_1 and o_2 and the corresponding candidate labels as c_1 and c_2, depending on the prompt type. The encoder finds the closest match using the cosine similarity cos⟨o, c⟩ <cit.>. This method allows us to measure the likeness between the LVLMs' generated labels and the available dataset labels. More details of the encoder functions can be found in Appendix <ref>. Evaluation Metrics We evaluate the performance of the models through two main aspects. First, we assess the model's understanding of the images by examining the accuracy of the model's predictions for the class of the person depicted in the image. Second, we perform a quantitative analysis of the impact of demographic attributes on the model's predictions. More details of the demographic attributes are provided in Appendix <ref>. 
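As a concrete illustration of the label-matching step described above, the following minimal Python sketch embeds a raw model answer and the candidate class labels with a pre-trained CLIP text encoder and selects the candidate with the highest cosine similarity. The checkpoint name and helper structure are illustrative assumptions, not necessarily the exact implementation behind the reported results.

import torch
from transformers import CLIPModel, CLIPTokenizer

# Assumed checkpoint for illustration; any pre-trained text encoder could be substituted.
_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def _embed(texts):
    inputs = _tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        feats = _model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def closest_label(raw_output, candidate_labels):
    # Map a free-form answer to the candidate label with the highest cosine similarity.
    sims = _embed([raw_output]) @ _embed(candidate_labels).T  # unit vectors, so dot product equals cosine
    return candidate_labels[int(sims.argmax())]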
We follow the same fairness evaluation metric as the FACET benchmark <cit.>. Given a model f, an instruct prompt p, a set of person classes C, a demographic attribute l, and a set of images I_l^C, we evaluate the model's prediction accuracy based on recall, which is computed as R_l = recall(f(l, I_l^C, C)). The value of R_l ranges between 0 and 1, with higher values indicating more accurate model predictions. We evaluate model fairness by the disparity between demographic attributes, which is computed as D_l_1-l_2 = R_l_1 - R_l_2 = recall(f(l_1, I_l_1^C, C)) - recall(f(l_2, I_l_2^C, C)). When D > 0, the model exhibits a preference for l_1 within class c. Conversely, when D < 0, the model shows a preference for l_2 within class c. A disparity value of 0 indicates a perfectly fair model, demonstrating equal performance across all images within class c regardless of the demographic attributes l_1 and l_2. § EXPERIMENTS §.§ Experimental Settings We evaluate various LVLMs, including both closed-source and open-source models, under a zero-shot setting to assess their ability to generate accurate answers without fine-tuning. Customized prompts from our framework are used for each model evaluation based on the specific model inference setting. All experiments are conducted using NVIDIA A100 GPUs. Evaluation Models We utilize CLIP <cit.> and ViT <cit.> as our baseline models, which align visual and textual representations to enable zero-shot learning across diverse vision tasks. We report the classification results for the person class only due to model evaluation limitations. For closed-source LVLMs, we select GPT-4o <cit.> and Gemini 1.5 Pro <cit.>. For open-source LVLMs, we include LLaVa-1.5 (7B and 13B parameter versions) <cit.>, LLaVa-1.6 (34B version) <cit.>, ShareGPT4V (7B and 13B versions) <cit.>, and MiniCPM-V (8B version) <cit.>. These LVLMs have demonstrated significant vision understanding abilities across various benchmark datasets. §.§ Results and Analysis In Table <ref>, we present the overall evaluation results of recall and disparity for each demographic group (gender, skin tone, and age) from each model, based on images of 13 selected person classes. Detailed results for each class and each model are provided in Appendix <ref>. Despite improvements in recall accuracy, nearly all LVLMs exhibit fairness issues across gender, skin tone, and age, leading to biased outcomes and perpetuating existing inequalities. Models Except for the 7B-based models, the LVLMs show significant improvements in recall performance over traditional CLIP and ViT models, indicating enhanced image understanding and increasing accuracy with more model parameters. However, LVLMs have not shown significant improvements in fairness metrics, with some performing worse than the baselines. Closed-source LVLMs do not have absolute superiority over open-source LVLMs in recall performance and fairness metrics. For instance, GPT-4o and Gemini 1.5 Pro often respond with “unknown” to sensitive questions when information is insufficient, unlike open-source models, which tend to provide vague answers. This reveals that even the most accurate models can still perform inconsistently across different demographic groups. Demographic Groups In evaluating gender-based performance, LVLMs fairness assessments reveal differing disparities depending on the prompt type. Direct question prompts tend to elicit more stereotypically female attributes, while single-choice prompts lean towards male attributes. 
For the demographic attribute of skin tone, the performance under the direct question prompt shows a clear preference for lighter skin tones over darker ones. This bias is also evident in the age group evaluation, where the direct question prompt demonstrates a tendency to favor younger individuals over older ones. Prompts Across the different prompts, single-choice question prompts generally achieve higher recall than direct question prompts for the same images across all demographic groups. This trend is especially pronounced in open-source LVLMs, which show a significant performance gap. Conversely, closed-source LVLMs exhibit smaller gaps and more consistent outputs. In fairness evaluations, single-choice question prompts consistently yield lower disparity scores. § CONCLUSION AND FUTURE WORK In this paper, we proposed a novel visual fairness evaluation framework for investigating demographic bias in LVLMs. The experimental results demonstrated significant fairness gaps across gender, skin tone, and age in both open-source and closed-source LVLMs. In future work, we aim to fine-tune LVLMs by incorporating fairness constraints and bias mitigation techniques to reduce disparities. § LIMITATIONS Although our study provides a novel evaluation of LVLMs from a fairness perspective, it still has several limitations. 1) The dataset may not fully capture all real-world demographic attributes, and the design of instruct prompts may not cover all dimensions of bias. 2) The model output can vary across different versions and configurations of models, particularly with closed-source LVLMs that lack transparency. 3) Our evaluation framework might not reflect the evolving nature of biases, and the focus on gender, skin tone, and age may not cover other critical demographic factors. 4) The high computational resources required for this framework may limit its applicability. Addressing these limitations will be crucial for better evaluating fairness in LVLMs. § APPENDIX §.§ Prompts Table <ref> lists the direct question and single-choice question instruct prompts utilized in our LVLM fairness evaluation framework. §.§ Encode Functions In this study, we utilized two different text encoder methods: the CLIP text encoder and the T5 text encoder. These encoders were employed to improve the matching between the outputs from LVLMs and the selected class labels. We used the pre-trained parameters of both models to leverage their robust capabilities. §.§ Demographic Attributes For gender presentation, we aim to investigate whether the model's predictions exhibit more stereotypically male attributes or more stereotypically female attributes. For skin tone, we categorize individuals into three distinct groups based on the Monk Skin Tone Scale <cit.>: light (Monk points 1-3), medium (Monk points 4-6), and dark (Monk points 7-10) <cit.>. For age, we classify individuals into three perceived age groups: younger (under 25 years old), middle-aged (25-65 years old), and older (over 65 years old). §.§ Class-level Evaluation Results To provide a deeper understanding, we report detailed results for each individual class and each model; this supplementary information allows for an in-depth analysis of how each model performs across various person classes and demographic groups, ensuring a robust evaluation of both accuracy and fairness.
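To make the evaluation pipeline above concrete, the following minimal sketch (in Python) reproduces its two main ingredients: the cosine-similarity encoder that maps a free-form LVLM answer to the closest class label, and the per-attribute recall and disparity metrics. The sentence-encoder interface (an object exposing an encode method), the record format, and all names are illustrative assumptions rather than the released code of this work.

```python
# Minimal sketch of the fairness-evaluation pipeline (illustrative, not the
# authors' released code). Assumes `model` is any sentence-embedding text
# encoder (e.g. a CLIP or T5 text encoder) exposing an `encode(list_of_str)`
# method that returns one vector per string.
import numpy as np
from collections import defaultdict

def _embed(texts, model):
    v = np.asarray(model.encode(texts), dtype=float)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)   # unit-normalize

def closest_label(raw_answer, class_labels, model):
    """Map a free-form LVLM answer to the closest class label via cosine similarity."""
    o = _embed([raw_answer], model)        # shape (1, d)
    c = _embed(list(class_labels), model)  # shape (num_classes, d)
    return class_labels[int(np.argmax(o @ c.T))]

def recall_by_attribute(records, target_class):
    """records: dicts with keys 'attribute', 'true_class', 'pred_class',
    restricted here to images whose ground-truth class is `target_class`."""
    hits, total = defaultdict(int), defaultdict(int)
    for rec in records:
        if rec["true_class"] != target_class:
            continue
        total[rec["attribute"]] += 1
        hits[rec["attribute"]] += int(rec["pred_class"] == target_class)
    return {a: hits[a] / total[a] for a in total if total[a] > 0}

def disparity(recalls, l1, l2):
    """D_{l1-l2} = R_{l1} - R_{l2}: positive values indicate a preference for l1."""
    return recalls[l1] - recalls[l2]

# Example with made-up predictions: R_l per gender group and their disparity.
records = [
    {"attribute": "male",   "true_class": "nurse", "pred_class": "nurse"},
    {"attribute": "male",   "true_class": "nurse", "pred_class": "doctor"},
    {"attribute": "female", "true_class": "nurse", "pred_class": "nurse"},
    {"attribute": "female", "true_class": "nurse", "pred_class": "nurse"},
]
R = recall_by_attribute(records, "nurse")
print(R, disparity(R, "female", "male"))   # {'male': 0.5, 'female': 1.0}, 0.5
```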
http://arxiv.org/abs/2406.19327v1
20240627165815
Spectroscopy of magnetized black holes and topological stars
[ "Alexandru Dima", "Marco Melis", "Paolo Pani" ]
gr-qc
[ "gr-qc", "hep-th" ]
alexandru.dima@uniroma1.it marco.melis@uniroma1.it paolo.pani@uniroma1.it § ABSTRACT We study the linear response of four dimensional magnetized black holes and regular topological stars arising from dimensional compactification of Einstein-Maxwell theory in five dimensions. We consider both radial and nonradial perturbations and study the stability of these solutions, both in the frequency and in the time domain. Due to the presence of magnetic fluxes in the background, axial (i.e., odd-parity) gravitational perturbations are coupled to polar (i.e., even-parity) electromagnetic perturbations (Type-I sector) whereas polar gravitational and scalar perturbations are coupled to axial electromagnetic ones (Type-II sector). We provide a comprehensive analytical and numerical study of the radial perturbations and of the Type I sector, finding no evidence of linear instabilities (besides the already known Gregory-Laflamme instability of black strings occurring only in a certain range of the parameters), even despite the fact that the effective potential for radial perturbations of topological stars is negative and divergent near the inner boundary. Ultracompact topological stars exhibit long-lived trapped modes that give rise to echoes in the time-domain response. Their prompt response is very similar to that of the corresponding black hole with comparable charge-to-mass ratio. This provides a concrete realization of ultracompact objects arising from a well-defined theory. The numerical analysis of the Type-II sector will appear in a companion paper. Spectroscopy of magnetized black holes and topological stars Paolo Pani July 1, 2024 ============================================================= § INTRODUCTION The fuzzball program of string theory aims at describing the classical black hole (BH) horizon as a coarse-grained description of a superposition of regular quantum states <cit.>. The horizon-scale structure is provided by “microstate geometries”: solitons with the same mass and charges as a BH, but where the horizon is replaced by a smooth horizonless cap <cit.>. These solutions hinge on two distinctive features of string theory: the presence of non-perturbative entities known as D-branes, whose mass diminishes with increasing gravitational strength, and the introduction of numerous new degrees of freedom (possibly accounting for the BH entropy), which boost quantum tunneling and prevent the formation of horizons <cit.>. A key aspect of this program is that it is intrinsically higher dimensional and relies on nontrivial topologies to prevent the horizon-scale structure from collapsing. The microstates characterizing horizon-scale structure are smooth, topologically nontrivial geometries in ten dimensions. However, when reduced to four dimensions, they appear to have curvature singularities <cit.>. Dimensional reduction also gives rise to a number of Kaluza-Klein fields nonminimally coupled to gauge fields. Furthermore, regardless of the details of horizon-scale structure, the large Hilbert space describing the states that give rise to the huge BH entropy will contain coherent states. These will resemble classical solutions in low-energy theories coupled with gravity. Thus, from a four-dimensional perspective, one is left with General Relativity coupled to various forms of matter, including gauge fields and scalars, and potential singularities which are anyway well behaved from the 5D perspective. 
Einstein-Maxwell theory in five dimensions allows for magnetized black strings and regular solitons known as topological stars (TSs) <cit.>. The scope of this paper is to study the linearized dynamics and stability of these objects as a toy model for more complicated and realistic microstate geometries. These solutions are particularly interesting because they contain several ingredients of more complicated microstate geometries while keeping a certain degree of symmetry and hence being more tractable. In particular, while being regular in five dimensions, from a four dimensional perspective they contain an extra scalar field that diverges at the boundary of the TS solution, where also the metric becomes singular. This implies that extra care should be put in investigating the boundary conditions (BC) in the four-dimensional theory. Furthermore, the spherically symmetric solution has a magnetic field which mixes sectors with different parity. In general, scalar, electromagnetic (EM), and gravitational perturbations are coupled to each other in a nontrivial way, as generically expected from classical solutions of a low-energy effective theory. Due to their nontrivial structure, a natural question concerns the stability of these solutions. Linear perturbations of magnetized BHs in this theory were partially studied in <cit.>. Due to the presence of magnetic fluxes in the background, polar (i.e., even-parity) gravitational perturbations are coupled to axial (i.e., odd-parity,) EM perturbations and viceversa. We shall refer to the sector containing odd-parity (resp. even-parity) gravitational perturbations as Type-I (resp. Type-II). In <cit.>, the quasinormal modes (QNMs) of magnetized BHs in this theory were obtained for the Type-I sector, which is easier than the Type-II sector since it contains less dynamical degrees of freedom. For the case of TSs, only the linear dynamics of a test scalar field in the frequency domain has been studied <cit.>, finding different families of modes depending on parameters of the TS. In particular, in some regions of the parameter space, TSs can develop a pair of stable and unstable photon spheres which can support long-lived modes <cit.> and can give rise to echoes <cit.> in the ringdown signal at late times. From this perspective, TSs provide a concrete model for ultracompact objects <cit.> arising from a well-defined theory and are therefore an ideal testbed to investigate the phenomenology of these objects. Here, we greatly extend this program by studying the complete linearized dynamics (in which scalar, EM, and gravitational perturbations are coupled to each other) both in the frequency and in the time domain. This will also allow us to discuss the linear stability of magnetized BHs and TSs. While we will derive the equations for all kinds of perturbations, in this work we numerical solve for the dynamics of radial perturbations and of nonradial perturbations in the Type-I sector. Nonradial Type-II perturbations will be studied in a companion paper <cit.>. Overall, in the radial Type-II and nonradial Type-I sectors we found no evidence for linear instabilities[Beside a known radial instability <cit.> associated to the Gregory-Laflamme instability of black strings <cit.>, which occurs only in a certain range of the parameters, see below for further details.], even despite the fact that the effective potential for radial perturbations of TSs is negative and divergent near the inner boundary. 
For both BHs and TSs, the QNMs computed in the frequency domain are in perfect agreement with the object's response to small perturbations computed in the time domain. As found in Ref. <cit.> for test scalar perturbations, we find that TSs without a stable photon sphere have (scalar, EM, and gravitational) QNMs similar to those of BHs, which indeed resemble the gravitational w-modes of compact stars in General Relativity <cit.>. TSs with a pair of stable and unstable photon spheres have long-lived QNMs that dominate the object response at late time. As expected for ultracompact objects <cit.>, the response in the time domain is initially very similar to that of a BH with similar charge-to-mass ratio, while the late time response is governed by echoes. An example of this behavior is anticipated in Fig. <ref>, which will be discussed in detail in the rest of the paper. The same effect was observed for various classes of ultracompact objects (see, e.g., <cit.>), including BH microstates <cit.>. However, in many of the previous studies the background solution was either phenomenological or pathological, while in <cit.> only test scalar perturbations were studied, due to the complexity of the theory. To the best of our knowledge this is the first example of clean echoes appearing in the gravitational waves emitted by a consistent and stable solution to a well-defined theory. The rest of the paper is organized as follows. Section <ref> presents the setup and the various sectors of the linearized field equations, in many cases providing them in Schrödinger-like form with an analytical effective potential. Section <ref> presents our numerical results for the QNMs and linear response in time of magnetized BHs and TSs. We conclude in Sec. <ref> and provide some technical details of the computations in the appendices. Note added: While this work was nearly completion, we were informed that another group was working independently on the same problem <cit.>. Although there is significant overlap, our analysis and that of <cit.> also focus on different aspects and numerical methods and are therefore complementary to each other. We have compared several numerical results with those of <cit.>, finding excellent agreement, especially for long-lived modes. § SETUP AND MASTER EQUATIONS §.§ Five-dimensional theory, field equations, and background solutions We consider Einstein-Maxwell theory in 5D, S_5 = ∫ d^5 x √(-𝐠)(1/2κ_5^2 - 1/4_AB^AB) , yielding the covariant equations: _AB-1/2_AB + κ_5^2 ( _AC^C_B + 1/4_AB_CD^CD) = 0 ∇^B_AB = 0 This theory admits a regular solution known as TS <cit.> ds^2 =-f_S dt^2+f_Bdy^2+1/hdr^2+r^2dΩ_2^2 F = P sinθ dθ∧ dϕ where f_S=1-r_S/r , f_B=1-r_B/r , h=f_B f_S , P = ±1/κ_5√(3r_Sr_B/2) . with r_B > r_S. The solution is everywhere regular and asymptotes to four dimensional Minkowski times a circle, parametrized by the coordinate y with period 2π R_y. The case r_B≤ r_S corresponds to a magnetized black string with event horizon located at r=r_S. In the following we shall study the linear perturbations of both solutions. 
§.§ Four-dimensional compactification To study the linear perturbations of magnetized BHs and TSs, we perform a four-dimensional compactification, introducing a scalar field Φ and a gauge field A_μ for the gravity sector, and a scalar field Ξ for the EM sector: ds^2_5 = e^-√(3)/3Φds_4^2 + e^2√(3)/3Φ(dy+_μ dx^μ)^2 , 𝐅_ABdx^Adx^B =F_μνdx^μ dx^ν + ( ∂_μΞ dx^μ) ∧( dy + A_μ dx^μ) , where henceforth we define the 4D field strengths F_μν=∂_μ A_ν-∂_ν A_μ and F_μν=∂_μA_ν-∂_νA_μ. We assume that all variables are independent of the extra dimension y. While this is certainly true for the background solution, the translation symmetry along y of the latter implies that perturbations can be decomposed with a e^iky dependence, where k= p/R_y is the quantized momentum along y and p=0,1,2,... Phenomenologically we expect R_y≪ r_S, so perturbations with p≠0 are hardly excited in classical processes. We will assume p=0, so there is no y dependence in the dynamical variables[Later on we will briefly discuss radial perturbations with nonvanishing Kaluza-Klein momentum, which are relevant for the Gregory-Laflamme instability of a black string <cit.>.]. In this setup, the corresponding 4D action describes an Einstein-Maxwell-Dilaton (EMD) theory with two scalars and two gauge fields: 𝒮 = ∫ dx^4 √(-g)[ 1/2κ_4^2( R - 1/2∂_μΦ∂^μΦ -1/4e^√(3)Φ_μν^μν). +. 1/e^2( -1/4e^√(3)/3Φ F_μνF^μν -1/2e^-2√(3)/3Φ(∂_μΞ)^2) ] , giving the 4D field equations: G_μν + [ 1/2e^√(3)Φ( _μρ^ρ_ν + 1/4g_μν_ρσ^ρσ) - 1/2( ∂_μΦ∂_νΦ -1/2 g_μν∂_ρΦ∂^ρΦ) ] + κ_4^2/e^2[ e^Φ/√(3)( F_μρF^ρ_ν + 1/4g_μνF_ρσF^ρσ) - e^-2Φ/√(3)(∂_μΞ∂_νΞ-1/2g_μν∂^ρΞ∂_ρΞ)] = 0 , ∇^ρ(e^√(3)Φ_μρ) = 0 , Φ - √(3)/4e^√(3)Φ_μν^μν +κ_4^2/e^2[ 2√(3)/3e^-2Φ/√(3)(∂_μΞ)^2 - √(3)/6e^Φ/√(3)F_μνF^μν] = 0   , ∇^ρ(e^√(3)/3Φ F_μρ) = 0   , ∇^ρ(e^-2√(3)/3Φ∂_ρΞ) =0   . We introduced here the couplings κ_4^2:=κ_5^2/(2π R_y) and e^2:=1/(2π R_y), where R_y is the radius of the compact extra dimension[We will keep these coupling constants explicit but, when presenting numerical results, we will use units such that κ_4=√(8π) and e=√(4π).]. The dynamics of the 4D theory, which includes gravity, two gauge fields A_μ and A_μ, and two scalar fields Φ and Ξ, is fully equivalent to the 5D Einstein-Maxwell theory. The background 4D line element reads ds_4^2 =-f_S f_B^1/2dt^2+1/f_S f_B^1/2dr^2+r^2 f_B^1/2dΩ_2^2 ,   Φ = √(3)/2log f_B , F = ± e Q_msinθ dθ∧ dϕ = ±e/κ_4√(3/2r_Br_S)sinθ dθ∧ dϕ =0=Ξ  . Note that both the metric and the dilaton diverge at r=r_B, but the TS solution is regular in the 5D uplift. The ADM mass and magnetic charge of the background are, respectively, M = 2π/κ_4^2(2r_S+r_B) , Q_m = 1/κ_4√(3/2r_Sr_B) . The parameter space of static magnetized BHs and TSs in this theory is depicted in Fig. <ref>. This is obtained by inverting the above relations to get Q_m/M as a function of r_B/r_S. Magnetized BHs require r_B/r_S<1 which implies eQ_m/M≤eκ_4/2 π√(6)≈ 1.1547. When 0<r_B/r_S<1/2, these solutions are linearly unstable against the Gregory-Laflamme mechanism <cit.>. For TSs (r_B/r_S>1) we have eκ_4/2 π√(6)≤eQ_m/M≤√(3)eκ_4/8 π. Solutions with r_B/r_S>2 are also unstable <cit.>, as can be obtained from the aforementioned Gregory-Laflamme instability of magnetized BHs and performing a double Wick rotation (t, y, r_S , r_B) → (iy, it, r_B , r_S) which maps BHs to TSs <cit.>. We will confirm this result numerically in Sec. <ref>. Note that both magnetized BHs and TSs can have eQ_m/M>1, at variance with the four-dimensional Reissner-Nordström BH, which is not a solution to this theory. 
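As a quick numerical check of the parameter space just described, the short sketch below evaluates e Q_m/M from the ADM mass and magnetic charge given above, using the units κ_4=√(8π) and e=√(4π) adopted in the paper, and classifies the corresponding solution according to the thresholds quoted in the text. The code itself is only an illustrative aid, not part of the original analysis.

```python
# Sketch: charge-to-mass ratio e*Q_m/M as a function of r_B/r_S and the
# resulting classification (magnetized BH vs. topological star), using the
# expressions quoted above. Units: kappa_4 = sqrt(8*pi), e = sqrt(4*pi).
import numpy as np

kappa4, e = np.sqrt(8 * np.pi), np.sqrt(4 * np.pi)

def charge_to_mass(rB, rS=1.0):
    M  = 2 * np.pi / kappa4**2 * (2 * rS + rB)      # ADM mass
    Qm = np.sqrt(1.5 * rS * rB) / kappa4            # magnetic charge
    return e * Qm / M

def classify(rB, rS=1.0):
    x = rB / rS
    if x < 1:
        return "magnetized BH" + (" (Gregory-Laflamme unstable)" if x < 0.5 else "")
    if x <= 2:
        return "topological star"
    return "topological star with r_B > 2 r_S (unstable)"

for x in [0.3, 0.99, 1.2, 1.8, 2.5]:
    print(f"r_B/r_S = {x:4.2f}:  e*Q_m/M = {charge_to_mass(x):.4f}  ->  {classify(x)}")

# Sanity checks against the bounds quoted in the text:
print(charge_to_mass(1.0))   # ~1.1547 = 2/sqrt(3), the BH/TS threshold
print(charge_to_mass(2.0))   # ~1.2247 = sqrt(6)/2, the maximum over TSs
```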
While the magnetized BH is characterized by a single unstable photon sphere at r_ ph = 3/2 r_S, the TS may show a pair of stable and unstable photon spheres, depending on the parameter space <cit.>. We can classify TSs in two families: TS first kind, 3/2 < r_B/r_S≤ 2 : r_ ph^(1) = r_B , TS second kind, 1 ≤r_B/r_S≤3/2 : r_ ph^(1) = 3/2 r_S , r_ ph^(2) = r_B . TSs of the second kind have a stable photon sphere at r_ ph^(2) = r_B and an unstable one at r_ ph^(1) = 3/2 r_S, just like the magnetized BH solution. §.§ Linear dynamics In Appendix <ref> we perform a Regge-Wheeler-Zerilli <cit.> spherical-harmonic decomposition of the metric, EM, and scalar perturbations of the background. The magnetic field of the background breaks parity and enforces a coupling between even-parity gravito-scalar perturbations and odd-parity EM perturbations, and viceversa. We discuss different independent sectors below. Since F=0=Ξ on the background, the perturbations of these fields decouple from the others. Through a field redefinition, at the linear level Eqs. (<ref>) and (<ref>) can be written as those for a test Maxwell and massless scalar field, respectively, propagating on the fixed background. Since test scalar perturbations of TSs and BHs have been studied in <cit.> and Maxwell perturbations of a magnetized BH have been studied in <cit.>, here we do not discuss these further, focusing instead to the coupled system of equations. For completeness, the field equations for the decoupled vector and scalar fields are given in Appendix <ref>. We will present the main equations in the frequency domain, assuming a ∼ e^-iω t time dependence for each variable. These are the equations that will be relevant to compute the QNMs as a one-dimensional eigenvalue problem. When needed, in the end we will present also the time-domain version of the relevant equations that will be integrated with a 1+1 evolution code. §.§.§ Type-I perturbations The Type-I sector couples odd-parity metric components with even-parity EM components and is decoupled from the scalar perturbations. To derive the evolution equations we adopt and generalize the approach used in <cit.>. For l≥ 2 perturbations, we obtain a system of two equations D[(r)] + ( 2 f_S^2f_B' + 3 f_Bf_Sf_S' ) ∂_r (r) - ( 2f_Bf_S^2/r^2 + ( Λ - 2 )f_S/r^2 + 2f_S^2f_B' + f_B f_S f_S'/r - 3f_Sf_B'f_S' - f_Bf_S'^2 ) (r) - 2κ_4^2 Q_m/e r^3(r) = 0   D[(r)] + ( 2f_S^2f_B' + f_Bf_Sf_S' ) ∂_r (r) - ( 2κ_4^2Q^2_mf_S/r^4 + Λ f_S/r^2 + 2f_S^2f_B'/r - f_Sf_B'f_S' )(r) - eQ_m(Λ-2)f_S^2/r^3(r) = 0 where we defined the second-order differential operator D=(f_Bf_S^2) ∂_r^2 + ω^2 , and Λ=l(l+1). To obtain the above system we combine the (r, ϕ) and (θ, ϕ) components of the perturbed Einstein equation (<ref>) together with the radial component of the perturbed Maxwell equation (<ref>) to get a second-order equation for h_1 sourced only by f_01^+. Analogously, we can derive a second-order equation for f_01^+ from the t and r components of the Maxwell equations, combined with the Maxwell constraint f_01^+-ω f_12^+ - ∂_r f_02^+=0. The relation between metric and EM perturbations defined in Appendix <ref> and the auxiliary variables and is given in terms of h_1(r) = -ω r √(f_B)(r) , f_01^+(r) =Λ/r^2(r) . Interestingly, the above equations can be decoupled. One can first introduce a generalized tortoise coordinate defined by dρ = √(g_rr/g_tt)dr = dr/f_B^1/2f_S , which can be integrated to obtain a closed-form expression[This expression is valid both when r_B>r_S and when r_S>r_B. 
For the latter case (magnetized BHs) the integration constant should be fixed to ensure that ρ(r) is a real function outside the horizon, r>r_S.] for ρ(r). Then, making a field redefinition ^-(r) = f_B^3/4 f_S (r) , (r) = f_B^3/4κ_4/e√(2/Λ-2)(r) , we obtain ( d^2/dρ^2 + ω^2 )[ ^-; ] = B[ ^-; ] , where B = f_S/r^3[ F(r)[ 1 0; 0 1 ] +[ 0 P; P 2 r_B + 3 r_S ]] , where F(r)=Λ r - 3(13 r_B^2 r_S + 8r^2(r_B+2r_S)-r r_B(9r_B+28r_S))/16 r (r-r_B) and P=√(3(Λ-2)r_B r_S). The system above can be decoupled by performing a linear, r-independent transformation Z_1 = ℒ_1 ^- + ℒ_2 Z_2 = ℒ_2 ^- - ℒ_1 with ℒ_1 = -(2 r_B + 3 r_S) - √((2r_B + 3r_S)^2+12 Λ r_S r_B) , ℒ_2 = 2 √(3 (Λ-2) r_S r_B) . The decoupled system is ( d^2/dρ^2 + ω^2 ) Z_i = V_ eff^i Z_i i = 1,2 with the effective potentials V_ eff^1,2 = r-r_S/16 r^5 (r-r_B) [16 r^3 Λ - r^2 (8r_B + 24 r_S + 16 Λ r_B) +r(11 r_B^2 + 60 r_B r_S) - 39 r_B^2 r_S ∓ 8 r (r-r_B)√((2 r_B - 3 r_S)^2+ 12r_B r_S Λ)] . Finally, for l=1, the metric perturbation h_0 can be eliminated via a gauge transformation and the h_1 perturbation is nondynamical. Indeed, by the same combination of equations used to derive Eqs. (<ref>) and  (<ref>), in this case one can check that h_1 can be fixed as a function of f_01^+ via the relation h_1 = Q_m κ_4^2 f_B^1/2/ω e f_01^+ . Thus, for l=1 the Type-I sector reduces to a single master equation: D[(r)] + (2 f_S^2 f_B' + f_B f_S f_S')∂_r (r) -V_ eff^l=1(r) = 0 , where f_01^+(t,r) =1/r^2(t,r) and the effective potential reads V_ eff^l=1=f_S( 2κ_4^2Q^2_m/r^4 + 2/r^2 + 2f_Sf_B'/r - f_B'f_S') . §.§.§ Type-II perturbations The Type-II sector is more involved because it couples even-parity metric components and the scalar perturbations with odd-parity EM components. In Appendix <ref> we list the relevant components of the perturbed Einstein equations, along with the scalar and axial gauge perturbation equations. The discussion and solution of these equations is deferred to a companion paper, which will address the analysis of Type-II perturbations in the general case for l ≥ 1. Instead, in the next section we will focus on radial (l=0) perturbations. §.§.§ Radial perturbations Radial perturbations belong to the Type-II sector but, because l=0 gravitational and EM perturbations are nondynamical, they are much easier to study. In the radial case the metric perturbation K in front of the two-sphere submanifold and the off-diagonal perturbation H_1 can be eliminated with a coordinate choice, so one is left only with the diagonal perturbations of the (t,r) submanifold, namely H_0 and H_2. Furthermore, the only radial mode of the EM sector is even-parity, so it does not contribute to Type-II perturbations. Using Einstein's equations, one finds two constraints relating H_0 and H_2 to the dynamical scalar perturbation, which is governed by a single master equation: D[φ] + ( f_S^2f_B'+f_Bf_Sf_S' ) ∂_rφ - V_ eff^l=0φ =0 , with the effective potential V_ eff^l=0(r) =f_S^2f_B'+f_Bf_Sf_S'/r + Q_m^2κ_4^2 f_S/3r^4 + Q_m^2κ_4^2f_Sf_B'/2r^3(4f_B+rf_B') - 6f_S^2f_B'^2(f_B+rf_B')/(4f_B+rf_B')^2 + 3 r f_S f_B'^2 f_S'/4(4f_B+rf_B') . As discussed below, in the TS case the potential diverges at the boundary, V_ eff^l=0(r→ r_B)→-∞, is zero at some r>r_B, and vanishes at spatial infinity. In the magnetized BH case the potential vanishes also at r=r_S and has the standard shape. 
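As a cross-check of the closed-form potentials above, the following sketch (not part of the original analysis) implements V_eff^1,2 and V_eff^l=1 exactly as written and evaluates them on a magnetized-BH and a TS background; the parameter values and the radial grid are arbitrary illustrations. It makes explicit the behaviour discussed in the next subsection: the potentials vanish at the horizon in the BH case and diverge positively as r → r_B in the TS case.

```python
# Sketch: Type-I effective potentials as quoted above (transcribed verbatim;
# any transcription slip is ours, not the paper's). kappa_4 = sqrt(8*pi);
# l >= 2 uses V_eff^{1,2}, l = 1 uses V_eff^{l=1}.
import numpy as np

kappa4 = np.sqrt(8 * np.pi)

def V12(r, rB, rS, ell, branch=+1):
    """Decoupled l>=2 Type-I potentials; branch=+1 -> upper (minus) sign, -1 -> lower."""
    Lam = ell * (ell + 1)
    root = np.sqrt((2 * rB - 3 * rS) ** 2 + 12 * rB * rS * Lam)
    bracket = (16 * r**3 * Lam
               - r**2 * (8 * rB + 24 * rS + 16 * Lam * rB)
               + r * (11 * rB**2 + 60 * rB * rS)
               - 39 * rB**2 * rS
               - branch * 8 * r * (r - rB) * root)
    return (r - rS) / (16 * r**5 * (r - rB)) * bracket

def V_l1(r, rB, rS, Qm):
    fS, fB   = 1 - rS / r, 1 - rB / r
    dfS, dfB = rS / r**2, rB / r**2
    return fS * (2 * kappa4**2 * Qm**2 / r**4 + 2 / r**2 + 2 * fS * dfB / r - dfB * dfS)

def Qm_of(rB, rS):
    return np.sqrt(1.5 * rB * rS) / kappa4

# Magnetized BH (r_B < r_S): potentials vanish at r = r_S.
rS, rB = 1.0, 0.8
r = np.linspace(1.001 * rS, 20 * rS, 2000)
print(V12(r, rB, rS, ell=2)[0], V_l1(r, rB, rS, Qm_of(rB, rS))[0])   # ~0 near horizon

# Topological star (r_B > r_S): V_eff^{1,2} grows without bound as r -> r_B.
rS, rB = 1.0, 1.2
r = np.linspace(1.0001 * rB, 20 * rB, 2000)
print(V12(r, rB, rS, ell=2)[:3])   # large and positive toward the inner boundary
```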
§.§.§ Comparison of effective potentials As shown above, the equations for the radial (l=0) perturbations, and for all (l≥1) Type-I perturbations can be written in canonical form d^2 Ψ/dρ^2+(ω^2-V_ eff)Ψ=0 , in terms of some suitable master variable Ψ, generalized tortoise coordinate ρ, and an effective potential V_ eff. For the reader's convenience, we summarize the effective potentials in Table <ref>. These potentials are shown in Fig. <ref> for some representative values of the parameters. We consider two magnetized BH solutions (with different values of the charge) and two TSs, of the first and second kind, respectively. While the effective potentials in the BH case have the standard shape  –namely they vanish at the boundaries and display a single maximum which roughly corresponds to the unstable photon sphere – those for perturbations of a TS have a richer structure. First of all, it is easy to show that the effective potentials diverge at r=r_B. In particular, the radial-perturbation potential diverges to negative values. Despite this fact, we have not found any unstable mode or signature of linear instability for r_B<2 r_S, as later discussed. Furthermore, the shape of the effective potentials for l>0 Type-I perturbations depends strongly on the background solution: only compact TSs have an unstable photon sphere at some r>r_B, so that they display a local maximum, a cavity, and finally a positively diverging potential at r=r_B. This shape of the potential naturally supports long-lived modes <cit.>, as we will explicitly show below. §.§.§ Boundary conditions The above system of second-order differential equations is solved – both in the frequency and in the time domain – imposing suitable BCs. At infinity we require radiative purely outgoing waves in all cases. If the background solution is a BH, we impose radiative purely ingoing BCs at the horizon, r=r_S. If the background is a TS, we impose regularity of the perturbation at the boundary r=r_B. Schematically, for a generic array of perturbations Ψ in the frequency domain, we impose the series expansion Ψ = (r-r_B)^λ∑_i=0^∞ c_i (r-r_B)^i , and obtain the two independent solutions by solving the indicial equation for λ. In all cases under consideration, only one of the two solutions is regular at r=r_B. Strictly speaking, regularity is not required in the 4D compactification as long as the 5D uplift is regular. However, regularity of the perturbations in 4D ensures also regularity in 5D, despite the fact that the 4D background is singular at r=r_B. The coefficients c_i with i>0 can all be written in terms of c_0 by solving the field equations order by order near the boundary. We typically solve the field equations near both boundaries perturbatively to high order, to improve numerical accuracy. In the time domain, we perform some field/coordinate redefinition to impose the same BC. As an illustrative example, let us consider the case of Type-I dipolar perturbations. The single equation reads ”(r) + 1/(r-r_B)(r(2r_B+r_S)-3r_Br_S/r(r-r_S)) '(r) + 1/(r-r_B)( ω^2r^4 - 2 r^2 - 2 r (r_B-r_S) + 2r_Br_S/r(r-r_S)^2)(r) = 0 . Using a series expansion as in Eq. (<ref>), the indicial equation is λ(1+λ)=0, so the two linearly independent solutions are _1(r) = ∑_i=0^∞ a_i (r-r_B)^i _2(r) = (r-r_B)^-1∑_i=0^∞ b_i (r-r_B)^i + αlog(r-r_B) _1(r) . The second solution is divergent at r=r_B. A general solution would be a linear combination of the two above which, after reabsorbing some coefficients, reads (x) = (a_0 + a_1 x + ...) 
+ b_0/x + αlog(x) (c_0 + c_1 x + ...), with x=r-r_B. This suggests that the regularity condition x∂_x|_x=0=0 implies correctly b_0=0=α c_0. For the Type-II radial equation, one obtains two identical roots of the indicial equation, λ^2=0. The most general solution in the asymptotic limit r→ r_B can be written as φ(r) = ∑_i=0^∞ a_i (r-r_B)^i + log(r-r_B) ∑_i=0^∞ b_i (r-r_B)^i As explained before, we can impose x∂_xφ|_x=0=0 to ensure regularity of the solution. A similar procedure applies to any kind of perturbations, including those in coupled systems. § SPECTROSCOPY OF MAGNETIZED BHS AND TSS This section presents our numerical results for the linear perturbations of magnetized BHs and TSs. Besides the technicalities related to the coupling between scalar, EM, and gravitational perturbations of different parities, the spectrum of magnetized BHs is standard and qualitatively similar to that of a Reissner-Nordström BH. The spectrum of TSs is instead richer and strongly depends on the parameter space of the background solution. TSs of the first kind do not have a stable photon ring and their QNMs are similar to the gravitational w-modes of compact stars in general relativity <cit.> and hence qualitatively similar to BHs. TSs of the second kind have a pair of stable and unstable photon spheres, which can support long-lived QNMs, as expected for ultracompact objects <cit.>. Their response in the time domain is initially very similar to that of a BH with comparable charge-to-mass ratio, while their late time response is governed by echoes <cit.>, analogously to what observed for test scalar perturbations of microstate geometries <cit.>. §.§ Numerical methods We have computed the QNMs of magnetized BHs and TSs both in the frequency domain, solving an eigenvalue problem, and in the time domain, solving a 1+1 evolution problem and then extracting the QNMs from the inverse Fourier transform of the signal. As discussed below, the two complementary methods show excellent agreement. In the next two subsections we shall discuss some details of the numerical implementation. §.§.§ Frequency-domain computations The frequency-domain analysis is performed by computing the QNMs of both magnetized BHs and TSs employing a direct integration shooting method <cit.>. At the boundary r = r_B we expand the perturbation as in Eq. (<ref>) imposing regularity (the same holds for magnetized BHs in the vicinity of the horizon at r = r_S, imposing purely ingoing waves). Solving the equations near the boundaries as a Frobenius series one can obtain the Frobenius index λ and the coefficients c_i with i > 0 in terms of c_0. We can set c_0=1 without loss of generality using the fact that the perturbation equations are linear. For coupled systems of equations the initial conditions are generically N-dimensional and one can choose an orthogonal basis c_0=(1,0,0,..,0), c_0=(0,1,0,...,0), ..., c_0=(0,0,0,...,1) of N dimensional unit vectors as explained in <cit.>. We can then use the BC at the inner boundary to integrate the radial equations up to arbitrarily large distance. Asymptotically at infinity we expect the general solution to be a linear combination of an ingoing and an outgoing wave Ψ∼ B(ω) e^- i ω r r^λ + C(ω) e^i ω r r^-λ r → +∞ , where B(ω) and C(ω) are complex coefficients and the exponent λ can be derived by solving the equations order by order. In order to find the discrete spectrum of complex frequencies of the QNMs we impose the asymptotic BC of purely outgoing waves, namely we require B (ω) = 0. 
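The shooting strategy just outlined is straightforward to prototype. The sketch below is a simplified Python stand-in for the frequency-domain computation described here (the actual analysis uses Mathematica, as noted below): it integrates the l=1 Type-I master equation quoted earlier for a magnetized BH, starting from just outside the horizon with purely ingoing data, and evaluates a quantity proportional to the ingoing-wave amplitude B(ω) at large radius; QNM frequencies correspond to its complex zeros, located here only roughly by scanning |B(ω)| on a grid. Direct integration of this kind is known to degrade for strongly damped modes, consistent with the accuracy remarks made below.

```python
# Sketch of the direct-integration shooting step for the l = 1 Type-I master
# equation quoted earlier, on a magnetized-BH background (r_B < r_S). All
# numerical choices (radii, tolerances, frequency window) are illustrative;
# the grid minimum printed below is only a starting guess for a complex root
# finder, not a converged QNM.
import numpy as np
from scipy.integrate import solve_ivp

kappa4 = np.sqrt(8 * np.pi)
rS, rB = 1.0, 0.5
Qm = np.sqrt(1.5 * rB * rS) / kappa4

fS  = lambda r: 1 - rS / r
fB  = lambda r: 1 - rB / r
dfS = lambda r: rS / r**2
dfB = lambda r: rB / r**2

def V(r):   # V_eff^{l=1} quoted earlier
    return fS(r) * (2 * kappa4**2 * Qm**2 / r**4 + 2 / r**2
                    + 2 * fS(r) * dfB(r) / r - dfB(r) * dfS(r))

def rhs(r, y, omega):
    E, dE = y                                   # dE = dE/dr
    c1 = 2 * fS(r)**2 * dfB(r) + fB(r) * fS(r) * dfS(r)
    d2E = ((V(r) - omega**2) * E - c1 * dE) / (fB(r) * fS(r)**2)
    return [dE, d2E]

def ingoing_amplitude(omega, r0=rS * (1 + 1e-3), rmax=30.0):
    """Integrate outward with leading-order ingoing data E ~ e^{-i omega rho}
    near the horizon; return W ~ B(omega), so that W = 0 signals a QNM."""
    y0 = np.array([1.0, -1j * omega / (np.sqrt(fB(r0)) * fS(r0))], dtype=complex)
    sol = solve_ivp(rhs, (r0, rmax), y0, args=(omega,),
                    method="RK45", rtol=1e-10, atol=1e-12)
    E, dE = sol.y[:, -1]
    return 1j * omega * E - np.sqrt(fB(rmax)) * fS(rmax) * dE   # i*w*E - dE/drho

# Coarse scan of |B(omega)| over a complex-frequency window; the method loses
# accuracy for strongly damped modes, so the window is kept modest here.
wr = np.linspace(0.3, 1.0, 15)
wi = np.linspace(-0.35, -0.02, 8)
absB = np.array([[abs(ingoing_amplitude(a + 1j * b)) for a in wr] for b in wi])
j, i = np.unravel_index(np.argmin(absB), absB.shape)
print("starting guess for the fundamental l=1 QNM: omega ~", wr[i] + 1j * wi[j])
```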
(For an N-dimensional problem, this condition generalizes to the vanishing of a determinant obtained from the N-dimensional basis.) Since B is a complex function of ω, imposing B (ω) = 0 is achieved through a shooting procedure in the complex ω plane. All the computations discussed above are performed using Mathematica with large numerical precision and high-order series expansions at the boundary. §.§.§ Time-domain computations In parallel with the frequency-domain approach, we solve numerically the time evolution equations of linear perturbations and extract the QNMs using spectral analysis techniques. Specifically, by inverse-Fourier transforming the canonical equation (<ref>), we consider [d^2 /dρ^2-d^2 /dt^2-V_ eff]Ψ(t,ρ)=0 . To conduct the time evolution we resort to a custom PDE solver written in  <cit.>, based on algorithms collected in the  suite <cit.>, which is part of the  library of Open Source Software for Scientific Machine Learning. The numerical method we adopt is based on the method of lines: we approximate the spatial derivatives with a standard fourth-order finite difference stencil and employ a fourth-order Runge-Kutta algorithm for the time stepping. The boundary treatment consists in fourth-order finite difference approximations of physical BCs. For magnetized BHs, these consist in ingoing/outgoing radiative conditions, respectively at the inner/outer boundary. Instead, as discussed in the previous section <ref>, TSs require the ingoing BC to be replaced by regularity conditions at the surface r=r_B, corresponding to (r-r_B) ∂_r X → 0, where X is to be replaced with the appropriate linear perturbation. As a relevant technical detail, we report that we solved the time evolution equations expressed in the standard radial tortoise coordinate, dr_* = f_S^-1 dr, instead of the coordinate radius r or the generalized tortoise coordinate ρ defined in Eq. (<ref>). Compared to the latter, the choice of r_* allows us to avoid dealing with terms ∼ f_B^-1 in the effective potentials, which would require regularization in a neighborhood of r=r_B (equivalently, ρ=0) to avoid spurious numerical instabilities. At the same time, in the case of TSs perturbations, a grid in r_* allows having a resolution that increases like ∼ (r-r_S)^-1 near the surface of the star. Empirically, this turns out to be crucial to properly resolve the cavity effects in TSs of the second kind. In addition, we find that we need long simulations (i.e., up to t/M ∼ O( 10^310^4)) to be able to extract the long-lived modes with damping timescale ∼ O(10^5) O(10^10) with sufficient accuracy. This was made possible by adopting an ad-hoc coordinate stretching, that allowed us to push the outer boundary sufficiently far away from the effective potential as to not contaminate the simulation with spurious boundary effects. To estimate the QNM frequencies in our time evolution simulations, we produce a time series by extracting the amplitude of each linear perturbation field at an arbitrary fixed point, typically corresponding to r_ ex=20 M. We then process this signal by windowing and applying a Fast Fourier Transform to determine the corresponding spectrum of the perturbations. Then, we move to fitting each peak in two steps: first, we employ a nonlinear fit using a Lorentzian function to have an estimate of the real part of the frequency, ω_R. Unfortunately, this step does not yield an equally accurate estimate of the imaginary part of the QNM frequency. 
To obviate this inconvenience, we filter the original time series in such a way as to suppress all modes with frequencies that do not match the estimated ω_R. Then, we apply a nonlinear fit with a damped sinusoid template to the filtered signal. This provides us with a refined estimate of ω_I. An example of the power spectrum will be discussed in Sec. <ref> below. For further details on this spectral analysis technique, we refer the interested reader to previous applications to similar problems (e.g.,  <cit.>). Our implementation has been validated by comparing results against tabulated Schwarzschild QNMs <cit.> and cross-checking between the frequency-domain and time-domain frameworks. In addition, we discuss numerical convergence in Appendix <ref>. §.§ Type-I QNMs of magnetized BHs and TSs We start presenting the Type-I perturbations, which are less involved than the Type-II case. Indeed, the Type-I sector couples odd-parity gravito-scalar perturbations with even-parity EM perturbations. Since scalar perturbations have even parity and odd-parity gravitational perturbations are easier than their even-parity counterpart, in the Type-I sector we have fewer degrees of freedom and no l=0 modes. We present the QNMs in the form ω =ω_R +ω_I, typically normalizing them by the mass, i.e. ω M. Given our conventions, ω_I<0 (resp., ω_I>0) corresponds to a stable (resp., unstable) mode. §.§.§ Dipolar perturbations Dipolar (l=1) Type-I perturbations are described by a single master variable governed by Eq. (<ref>). In Table <ref> we show the QNMs for some representative examples of magnetized BHs and TSs. In particular we consider a nearly-extremal magnetized BH with e Q_m/M≈ 1.153, a TS of the first kind with e Q_m/M≈ 1.220, and a TS of the second kind with e Q_m/M≈ 1.157, so with a charge-to-mass ratio very similar to that of the BH. We show results obtained both in the frequency and in the time domain using the methods previously discussed. As evident from this and similar tables presented below, the agreement of the two methods is very good, even for higher-order overtones. QNMs with smaller quality factor, ω_R/|ω_I|, are less accurate, because in this case the direct integration method is less efficient and the accuracy of the power spectrum extracted from the time-domain signal is limited by the short duration of the mode. For BHs and first-kind TSs we show only the fundamental mode, which already has a relatively short damping time. For second-kind TSs, we find long-lived modes (with |ω_I|≪ω_R), as expected. In this case we computed several overtones up to a point in which their imaginary part is comparable to that of an ordinary BH QNM. The fundamental QNM of magnetized BHs in this sector is shown in the top panels of Fig. <ref> as a function of Q_m/M up to the nearly-extremal case. The behavior of this mode with the BH charge is qualitatively similar to that of a Reissner-Nordström BH (see, e.g., <cit.>). More interestingly, the top panels of Fig. <ref> track the behavior of some Type-I dipolar QNMs of the TS as a function of Q_m/M, in particular across the smooth transition between first- and second-kind solutions. As expected, we see that first-kind solutions have BH-like modes, with imaginary part only slightly smaller than the real one, which are akin to the so-called w-modes of a neutron star <cit.>. However, as the charge-to-mass ratio decreases, the solution develops a stable photon sphere and the mode becomes long-lived, as expected for a ultracompact object. 
We track the fundamental mode (n=0) and the first overtone (n=1). Interestingly, the damping time of the latter is longer than that of the former for any Q_m (i.e., the curves on the right panel do not cross each other), so the fundamental mode does not change during the tracking at different Q_m. For completeness, in Fig. <ref> we track the modes for different values of Q_m/M in the complex (ω_R,ω_I) plane, where the transition between first- and second-kind TSs is more evident. §.§.§ l≥2 perturbations Type-I perturbations with l≥2 are described by the two decoupled master equations (<ref>) but gravitational and EM perturbations are anyway mixed. In the decoupling limit (r_B→0), the master variables Z_1 and Z_2 are associated to gravitational and EM perturbations of a Schwarzschild BH, respectively. As such, we will refer to modes coming from the first and second equation in (<ref>) as gravitational-induced and EM-induced, respectively, even for generic values of r_B/r_S where an actual decoupling of the original perturbations is not possible. Examples of the QNMs in this sector for BHs and TSs are shown in Table <ref> and Table <ref> for the gravitational-induced and EM-induced modes, respectively. Some modes of both families are tracked as function of Q_m/M in the middle panels of Fig. <ref> and Fig. <ref> for BHs and TSs, respectively. Beside this doubling of modes, we observe the same qualitative behavior as previously discussed for l=1 Type-I modes. §.§ Type-II QNMs of magnetized BHs and TSs: radial case Let us now turn our attention to the more involved case of Type-II perturbations, which couple even-parity gravito-scalar perturbations with odd-parity EM perturbations. Also scalar perturbations are excited in this case, starting from the monopolar (l=0) perturbations. Perturbations with l≥2 and with l=1 involve three and two propagating degrees of freedom, respectively, and the resulting system of equations does not appear to be diagonalizable. We postpone the numerical analysis of Type-II perturbations with l≥1 to a companion paper <cit.>. Here we focus on the case of radial perturbations. Radial (l=0) perturbations only exist in the Type-II sector since in this case only the (even-parity) scalar perturbations are dynamical and described by Eq. (<ref>). This sector is particularly interesting because, as shown in Fig. <ref>, the effective potential for TSs is negative and divergent as r→ r_B, which might signal an instability in the spectrum, i.e. QNMs with positive imaginary part. We have searched for unstable modes and did not find any for r_B<2r_S, also in agreement with the time evolution presented below (see Sec. <ref>) which does not show any evidence for an instability. An example of the radial QNMs of magnetized BHs and TSs is presented in Table <ref>. The fundamental mode of magnetized BHs and the n=0,1 modes of TSs as a function of the charge-to-mass ratio are presented in the bottom panels of Figs. <ref> and <ref>, respectively. Finally, as previously discussed, TSs with r_B > 2 r_S are unstable under radial perturbations with purely imaginary frequency, as a consequence of the Gregory-Laflamme instability of magnetized black strings with r_B ≤ r_S/2 and the duality (t, y, r_S , r_B) → (iy, it, r_B , r_S) that maps magnetized black strings to TSs and viceversa <cit.>. In agreement with the analysis in <cit.>, for TSs with r_B>2r_S we found an unstable purely imaginary mode, i.e. ω=iω_I with ω_I>0, for radial perturbations with zero Kaluza-Klein momentum. 
As shown in Fig. <ref>, the frequency approaches zero in the r_B = 2 r_S limit, thus reaching the threshold of the Gregory-Laflamme zero mode of the corresponding black string. §.§ Time signal and comparison between magnetized BHs and TSs In the previous sections we have compared the results of QNM computation in the frequency domain with those extracted from the inverse Fourier transform of the signal in the time domain, the latter being obtained by evolving a system of 1+1 equations. Here we present the results of the time-domain analysis. We shall only show selected cases, since the qualitative features are similar in all sectors. One of our main result was anticipated in Fig. <ref> for l=2 Type-I perturbations, namely odd-parity gravitational and even-parity EM perturbations. In this example we focus on a nearly-extremal magnetized BH with a charge-to-mass ratio similar to that of a second-kind TS. In practice, we consider a BH and a TS solution slightly below and above the r_B/r_S=1 threshold, respectively. Results are normalized by the mass of the solution so, in practice, the magnetized BH and the TS have the same mass and a very similar charge (as shown in the phase diagram <ref>, it is not possible to have TSs and BHs with exactly the same charge-to-mass ratio). In this condition the effective potentials for perturbations of BHs and TSs are very similar at large distances and they remain so even at smaller distances down to the inner region, as shown in Fig. <ref> for the simpler case of l=1 Type-I perturbations (similar results apply to other sectors). In this example the potentials are very similar around the maximum, the shape of which is responsible for the prompt ringdown in the time domain <cit.>. However, near the inner boundary the behavior is completely different: the potential vanishes as r→ r_S for the BH while it diverges to positive values as r→ r_B for the TS. The latter behavior supports the long-lived modes discussed in the previous section, which dominate the signal at late times. This discussion is perfectly consistent with what shown in Fig. <ref>: the initial ringdown is almost indistinguishable between the BH and TS cases, the small differences are only due to the slightly different charge-to-mass ratio. However, after the perturbation had time to probe the inner boundary of the TS and gets reflected, the signal is dominated by echoes associated with perturbations being reflected back and forth between the inner boundary and the unstable photon sphere. This behavior is generic as long as the charge-to-mass ratio is similar, as shown in Fig. <ref> for the case of Type-I, l=1 perturbations and for l=0 perturbations. In the same plot, we also show an example of first-kind TS. Due to the absence of unstable photon sphere, in this case the response does not show long-lived modes and the prompt ringdown is completely different from the BH case, even though the charge-to-mass ratio is only ≈6% different. Thus, TSs provide a concrete model, arising as a solution to a consistent theory, in which the time signal smoothly interpolates between different regimes, including one in which a clean echo signal appears in the gravitational waves. This improves on previous studies of ultracompact objects, which either considered phenomenological backgrounds or test fields, due to the complexity of the field equations (see <cit.> for an overview). Finally, in Fig. 
<ref> we show an example of power spectrum obtained from the time evolution of Type-I perturbations with l=2 on a second-kind TS background. In this case, multiple peaks are present and these allow for a precise estimate of the QNM frequency and damping time for several overtones, as shown in the previous tables. § CONCLUSIONS We thoroughly examined the coupled scalar-EM-gravitational perturbations of magnetized BHs and TSs originating from the dimensional compactification of Einstein-Maxwell theory in five dimensions. Our results, supported by both frequency-domain and time-domain analysis, provide strong numerical evidence for the linear stability of these solutions against radial perturbations and axial gravitational perturbations (which are coupled to polar EM ones). The numerical analysis of the more involved polar perturbations (which are coupled to axial EM and scalar ones) for l≥1 will appear in a companion paper <cit.>. Overall, the perturbations that we studied in this paper can all be reduced to a single second-order differential equation and display qualitatively similar properties. In particular, we confirm the expectation that ultracompact TSs with a stable photon sphere support long-lived modes, giving rise to echoes in the (scalar, EM, and gravitational-wave) signal at late times. Thus, TSs provide a concrete model, arising as a solution to a consistent theory, in which a clean echo signal appears in the gravitational waves. To the best of our knowledge, this is the first example of a consistent solution showing echoes in the gravitational-wave signal, since previous studies of ultracompact objects considered either phenomenological backgrounds or test fields, due to the complexity of the field equations. Although we found no evidence of linear instabilities (at least in the sectors presented in this work and besides the well-known Gregory-Laflamme instability in a certain region of the parameter space), there are arguments suggesting that ultracompact objects might be unstable at the nonlinear level <cit.>. This is due to the slow (possibly logarithmic) decay in time, as discussed for microstate geometries <cit.> and for other ultracompact objects <cit.>. TSs provide a well-defined model in which the nonlinear evolution of the perturbations can be possibly studied in a relatively simple setting. We have derived the full set of equations describing the linear response of magnetized BHs and TSs. Besides analyzing in detail the Type II sector <cit.>, a natural follow-up of our analysis is to study tidal perturbations of these solutions and compute their tidal Love numbers, extending the test scalar case studied in <cit.>. We expect that the various Love numbers of a TS that can be defined in the different perturbation sectors are generically nonzero, and tend to their corresponding value in the extremal BH case as r_B/r_S→1, as it occurs in other models <cit.>. Another relevant extension is to consider spinning TSs or other topological solitons with less symmetry <cit.>. We thank Iosif Bena, Massimo Bianchi, Roberto Emparan, Pierre Heidmann, and many participants of https://indico.in2p3.fr/event/30310/Black-Hole Microstructure VI (Paris Saclay, 10-15 June 2024) for interesting discussions. We are grateful to Giorgio Di Russo and Francisco Morales who have shared their preliminary results with us <cit.>. 
This work is partially supported by the MUR PRIN Grant 2020KR4KN2 “String Theory as a bridge between Gauge Theories and Quantum Gravity”, by the FARE programme (GW-NEXT, CUP: B84I20000100001), and by the INFN TEONGRAV initiative. Some numerical computations have been performed at the Vera cluster supported by the Italian Ministry for Research and by Sapienza University of Rome. § LINEAR PERTURBATIONS IN REGGE-WHEELER-ZERILLI GAUGE Regge-Wheeler gauge, even perturbations of the metric: h_μν^ even=∑_l,m( [ f_Sf_B^1/2 H_0(t,r) H_1(t,r) 0 0; H_1(t,r) f_S^-1f_B^-1/2 H_2(t,r) 0 0; 0 0 r^2 f_B^1/2 K(t,r) 0; 0 0 0 r^2 f_B^1/2sinθ^2 K(t,r) ]) Y_lm(θ,ϕ) . The odd sector of the metric reads h_μν^ odd=∑_l,m( [ 0 0 -h_0(t,r)/sinθ∂_ϕ h_0(t,r) sinθ∂_θ; 0 0 -h_1(t,r)/sinθ∂_ϕ h_1(t,r) sinθ∂_θ; -h_0(t,r)/sinθ∂_ϕ -h_1(t,r)/sinθ∂_ϕ 0 0; h_0(t,r) sinθ∂_θ h_1(t,r) sinθ∂_θ 0 0 ]) Y_lm(θ,ϕ) The even-parity EM perturbations (for either F_μν or _μν) read f_μν^ even =∑_l,m( [ 0 f_01^+(t,r) f_02^+(t,r) ∂_θ f_02^+(t,r) ∂_ϕ; -f_01^+(t,r) 0 f_12^+(t,r) ∂_θ f_12^+(t,r) ∂_ϕ; -f_02^+(t,r) ∂_θ -f_12^+(t,r) ∂_θ 0 0; -f_02^+(t,r) ∂_ϕ -f_12^+(t,r) ∂_ϕ 0 0 ]) Y_lm(θ,ϕ) g_μν^ even =∑_l,m( [ 0 g_01^+(t,r) g_02^+(t,r) ∂_θ g_02^+(t,r) ∂_ϕ; -g_01^+(t,r) 0 g_12^+(t,r) ∂_θ g_12^+(t,r) ∂_ϕ; -g_02^+(t,r) ∂_θ -g_12^+(t,r) ∂_θ 0 0; -g_02^+(t,r) ∂_ϕ -g_12^+(t,r) ∂_ϕ 0 0 ]) Y_lm(θ,ϕ) while the odd-parity EM perturbations are f_μν^ odd =∑_l,m( [ 0 0 f_02^-(t,r)/sinθ∂_ϕ -f_02^-(t,r) sinθ∂_θ; 0 0 f_12^-(t,r)/sinθ∂_ϕ -f_12^-(t,r) sinθ∂_θ; -f_02^-(t,r)/sinθ∂_ϕ -f_12^-(t,r)/sinθ∂_ϕ 0 f_23^-(t,r) sinθ; f_02^-(t,r) sinθ∂_θ f_12^-(t,r) sinθ∂_θ -f_23^-(t,r) sinθ 0 ]) Y_lm(θ,ϕ) g_μν^ odd =∑_l,m( [ 0 0 g_02^-(t,r)/sinθ∂_ϕ -g_02^-(t,r) sinθ∂_θ; 0 0 g_12^-(t,r)/sinθ∂_ϕ -g_12^-(t,r) sinθ∂_θ; -g_02^-(t,r)/sinθ∂_ϕ -g_12^-(t,r)/sinθ∂_ϕ 0 g_23^-(t,r) sinθ; g_02^-(t,r) sinθ∂_θ g_12^-(t,r) sinθ∂_θ -g_23^-(t,r) sinθ 0 ]) Y_lm(θ,ϕ) Finally, scalar perturbations are simply decomposed as δΦ = ∑_l,mφ(t,r)/r Y_lm(θ,ϕ) δΞ = ∑_l,mξ(t,r)/r Y_lm(θ,ϕ) Note that, for clarity, we have omitted the indices (l,m) in the coefficients (which are functions of t and r) of the spherical-harmonic decomposition. The symmetry of the background guarantees that m is degenerate and perturbations with different values of l are decoupled from each other. 
§ TYPE-II EQUATIONS E_tt = f_Bf_S^2 ∂_r^2 K + (3f_Bf_S^2/r + f_S^2 f_B' + 1/2f_Bf_Sf_S') ∂_r K - f_S^2/4r( 4f_B+rf_B')∂_rH_2 + √(3)/4rf_S^2f_B'∂_rφ - ( Λ f_S/2r^2 + f_Bf_S^2/r^2 + f_S^2f_B'/r + f_Sf_S'/4r( 4 f_B + rf_B')) H_2 + (f_S/r^2 - Q_m^2κ_4^2f_S/r^4 - Λ f_S/2r^2) K - ( √(3)f_S^2f_B'/4r^2-√(3)Q_m^2κ_4^2f_S/6r^5) φ + 4Q_mκ_4^2f_S/er^3 f_23^- = 0 E_tr = 4 r f_B ∂_r K + 2f_B/f_S(2f_S-rf_S') K + 2Λ√(f_B)/ω r H_1 + √(3)f_B'φ - (4f_B+rf_B')H_2 = 0 E_tθ = f_S√(f_B)/ω∂_r H_1 + ( f_Sf_B' + 2 f_B f_S' ) /2 ω√(f_B) H_1 + H_2 + K - 2Q_mκ_4^2/e r^2 Λ f_23^- = 0 E_rr = f_S^2(4f_B+rf_B') ∂_r H_0 - 2f_S(2f_Bf_S+rf_Sf_B'+rf_Bf_S')∂_r K + √(3)f_S^2f_B' ∂_rφ - 2Λ f_S/r H_0 + f_S( 4f_Bf_S/r + 4f_Sf_B' + 4f_Bf_S' + r f_B'f_S' ) H_2 + 2ωf_S/√(f_B)(4f_B+r f_B') H_1 + 2r( 2 Q_m^2κ_4^2 f_S/r^4 + (Λ-2)f_S/r^2 - 2ω^2) K - ( √(3)f_S^2f_B'/r + 2√(3)Q_m^2κ_4^2f_S/3r^4) φ - 4Q_mκ_4^2f_S/er^3 f_23^- = 0 E_rθ = f_Bf_S (∂_r H_0 - ∂_r K ) + 2Q_mκ_4^2/eΛf_Bf_S/r^2∂_rf_23^- - 1/2(2f_Bf_S/r - f_Bf_S') H_0 + 1/2(2f_Bf_S/r + f_Sf_B' + f_Bf_S' ) H_2 + ω√(f_B) H_1 - √(3)f_Sf_B'/2rφ =0 E_θθ = f_Bf_S^2 (∂_r^2 H_0 - ∂_r^2 K ) + ( f_Bf_S^2/r + f_S^2f_B' + 3/2f_Bf_Sf_S' ) ∂_r H_0 + 2ω√(f_B) f_S ∂_r H_1 + ( f_Bf_S^2/r + 1/2f_S^2f_B' + 1/2f_Bf_Sf_S' ) ∂_r H_2 - ( 2f_Bf_S^2/r + f_S^2f_B' + f_Bf_Sf_S' ) ∂_r K - √(3)f_S^2f_B'/2r∂_rφ + ω/√(f_B)( 2f_Bf_S/r + f_Sf_B' + f_Bf_S') H_1 - (ω^2 - 3/2f_Sf_B'f_S') H_2 - (ω^2 + 2Q_m^2κ_4^2f_S/r^4) K + ( √(3)f_S^2f_B'/2r^2 + √(3)Q_m^2κ_4^2f_S/3r^5)φ + 2Q_mκ_4^2f_S/e r^4f_23^- =0 E_θϕ = H_0 - H_2 = 0 E_φ = f_Bf_S^2∂_r^2φ + (f_S^2f_B' + f_Bf_Sf_S') ∂_r φ + √(3)/2r f_S^2 f_B' ( ∂_r K - ∂_r H_2 ) + (ω^2 - Λ f_S/r^2 -Q_m^2κ_4^2f_S/3r^4-f_S^2f_B'+f_Bf_Sf_S'/r) φ - √(3)ω r f_Sf_B'/2√(f_B) H_1 - √(3) r f_Sf_B'f_S'/2 H_2 + 2√(3) Q_m^2κ_4^2 f_S/3r^3 K - 2√(3)Q_mκ_4^2f_S/3e r^3 f_23^- = 0 E_f23m = f_Bf_S^2∂_r^2f_23^- + (f_S^2f_B'+f_Bf_Sf_S') ∂_r f_23^- + (ω^2-Λ f_S/r^2) f_23^- + eQ_mΛ f_S/r^2 K - √(3) eQ_mΛ f_S/3r^3φ = 0 § DECOUPLED EQUATIONS For completeness, in this appendix we provide the field equations for the perturbations of F and Ξ on the background of a magnetized BH or TS. Since F=0=Ξ on these background, the field equations decouple at the linear level and can be written as those of a test scalar, ξ := f_B^-1/4δΞ, and a test massless gauge with with two physical degrees of freedom, 𝔈:= f_B^5/4f_S g_12^+ and 𝔅:= f_B^3/4 g_23^-, respectively. These can be derived by considering Eqs. (<ref>) and (<ref>) and linear perturbations defined in Eqs. (<ref>), (<ref>) and  (<ref>): D[ξ] + (1/2f_S^2f_B' + f_Bf_Sf_S' ) ∂_r ξ - ( Λ f_S/r^2 + f_Bf_Sf_S'/r + f_S^2f_B'/2r +3 f_S^2f_B'^2/16 f_B - f_Sf_B'f_S'/4)ξ = 0 , D[] + (1/2f_S^2f_B' + f_Bf_Sf_S' )∂_r - ( Λ f_S/r^2 + 3f_S^2f_B'/2r + 15 f_S^2 f_B'^2/16 f_B - 3f_Sf_B'f_S'/4) = 0 , D[] + (1/2f_S^2f_B' + f_Bf_Sf_S' )∂_r - ( Λ f_S/r^2 - 3f_S^2f_B'/2r + 3f_S^2f_B'^2/16 f_B + 3f_Sf_B'f_S'/4) = 0 . These equations can be cast in Schrodinger-like form by transforming to the ρ coordinate: ∂_ρ^2 ξ + [ ω^2 - f_S( Λ/r^2 + f_Bf_S'/r + f_Sf_B'/2r +3 f_Sf_B'^2/16 f_B - f_B'f_S'/4)]ξ = 0 , ∂_ρ^2 + [ ω^2 - f_S( Λ/r^2 + 3f_S f_B'/2r + 15 f_S f_B'^2/16 f_B - 3f_B'f_S'/4)] = 0 , ∂_ρ^2 + [ω^2 - f_S( Λ/r^2 - 3f_Sf_B'/2r + 3f_Sf_B'^2/16 f_B + 3f_B'f_S'/4)] = 0 . All three effective potentials above are singular at r=r_B. From Eqs. (<ref>)–(<ref>) the following indicial equations can be derived: 16λ_ξ^2 -8 λ_ξ -3 = 0 , 16λ_^2 -8 λ_ -15 = 0 16λ_^2 -8 λ_ -3 = 0 . From the latter, one can deduce that the field equations admit regular solutions corresponding to λ_ξ=3/4=λ_, and λ_=5/4. 
§ CONVERGENCE TESTS OF TIME-DOMAIN CODES To check the convergence with resolution of our results, we have simulated the time evolution of { Z_1, Z_2 } perturbations (see Sec. <ref>) of a second-kind TS (r_B =1.01 r_S) with low (L), medium (M) and high (H) resolution, corresponding to N={ 2^16, 2^17, 2^18} points on the radial grid. In Fig. <ref> we plot the relative error between medium and low resolution results, L-M, and between high and medium resolution time series, Q_n (M-H), rescaled by the appropriate convergence factor: Q_n := [(dr_L)^n-(dr_M)^n]/[(dr_M)^n-(dr_H)^n] From the comparison in Fig. <ref> we notice that during the prompt ringdown phase the convergence matches the expectations (i.e. it is compatible with fourth-order convergence), but undergoes a sudden drop in the convergence order during the first reflection of the signal on the surface of the TS. In the subsequent phase of the signal, the convergence order is then restored. One possible culprit for the loss in convergence during the first reflection could be the finite difference approximation of the regularity BC we impose at the TS boundary, although the latter is realized with a fourth-order stencil. Alternatively, we suspect that resolving the regular singular point at r=r_B requires much higher resolution, while our current results are only marginally in the convergence regime. On the other hand, this underperformance of our time-domain framework in resolving the boundary of the star does not seem to affect the robustness of the results, since the afflicted part of the signal contributes only marginally to the full spectrum. Indeed, most of the information we extract with our spectral analysis is enclosed in the part of the signal that follows the first reflection of the initial wave packet. Here the tortoise coordinate we choose allows resolving sufficiently well the effective potential and its cavity, which explains the restoration in the convergence order we observe.
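For completeness, a minimal stand-alone illustration of this three-resolution convergence test is sketched below; it uses synthetic data with a known fourth-order error law rather than the actual simulation output, and the array names are placeholders.

```python
# Sketch of the convergence test described above: with grid spacings
# dr_L = 2 dr_M = 4 dr_H, the rescaling factor is
# Q_n = [(dr_L)^n - (dr_M)^n] / [(dr_M)^n - (dr_H)^n] (= 2^n for halving), so
# for clean n-th order convergence (L - M) overlaps Q_n * (M - H).
import numpy as np

def Q_factor(n, dr_L, dr_M, dr_H):
    return (dr_L**n - dr_M**n) / (dr_M**n - dr_H**n)

# Synthetic stand-in for a field sampled at a fixed radius: exact signal plus a
# truncation error scaling as dr^4 (placeholder for the real L/M/H time series).
t = np.linspace(0.0, 50.0, 2001)
exact = np.cos(t) * np.exp(-0.05 * t)
dr_L, dr_M, dr_H = 4.0e-3, 2.0e-3, 1.0e-3           # e.g. N = 2^16, 2^17, 2^18
err = lambda dr: 10.0 * dr**4 * np.sin(3 * t)       # idealized dr^4 error term
psi_L, psi_M, psi_H = exact + err(dr_L), exact + err(dr_M), exact + err(dr_H)

n = 4
Qn = Q_factor(n, dr_L, dr_M, dr_H)                  # = 16 when the spacing halves
print("Q_4 =", Qn)

# Measured convergence order from the norms of the successive differences:
p = np.log2(np.linalg.norm(psi_L - psi_M) / np.linalg.norm(psi_M - psi_H))
print("observed order ~", p)                        # ~4 for the synthetic data
```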
http://arxiv.org/abs/2406.18435v1
20240626153404
Upper Bounds on the Mass of Fundamental Fields from Primordial Universe
[ "Hassan Firouzjahi" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ph", "hep-th" ]
Upper Bounds on the Mass of Fundamental Fields from Primordial Universe Hassan Firouzjahi[firouz@ipm.ir] School of Astronomy, Institute for Research in Fundamental Sciences (IPM) P. O. Box 19395-5531, Tehran, Iran § ABSTRACT We study the fluctuations in the vacuum zero point energy associated to quantum fields and their statistical distributions during inflation. It is shown that the perturbations in the vacuum zero point energy have large amplitudes which are highly non-Gaussian. The effects of vacuum zero point fluctuations can be interpreted as loop corrections in the primordial power spectrum and bispectrum. Requiring the primordial curvature perturbation to remain nearly scale-invariant and Gaussian imposes strong upper bounds on the mass of fundamental fields during inflation. We show that the fundamental fields cannot be much heavier than the Hubble scale during inflation, otherwise their vacuum zero point fluctuations induce large non-Gaussianities and scale-dependence in primordial perturbations. Considering the observational upper bound on the tensor-to-scalar ratio, we conclude that all fundamental fields are lighter than 10^14 GeV. § INTRODUCTION Vacuum zero point energy is a fundamental property of quantum mechanics, having its origin in the fact that operators like position and momentum do not commute. The effects of vacuum zero point energy become more pronounced in quantum field theory, where particles and antiparticles can be created and annihilated continuously in the vacuum. The reality of vacuum zero point energy was confirmed by the Casimir effect <cit.>. The role of vacuum zero point energy becomes even more significant when one deals with gravity. According to the Einstein field equations, any source of energy acts as a source of gravitation and of spacetime curvature. Locally, the effects of the quantum vacuum zero point energy appear as a cosmological constant term in the Einstein field equation. Based on the equivalence principle, one expects the energy momentum tensor associated to vacuum zero point fluctuations to be locally Lorentz invariant. Consequently, the vacuum expectation values of the pressure and the energy density are simply related to each other as ⟨ P_v⟩ =-⟨ρ_v⟩, where here and below the subscript “v" stands for vacuum. The vacuum zero point energy associated to quantum perturbations of a fundamental field with mass m is UV divergent. To regularize the quartic UV divergence, one may put a cutoff Λ, obtaining ⟨ρ_v⟩∼Λ^4. Assuming that Λ is given by a natural scale of the theory, such as the TeV scale of the Standard Model (SM) of particle physics, one obtains a vacuum zero point energy of magnitude roughly of the order (TeV)^4. Alternatively, if one assumes Λ to be of the order of the Planck mass M_P, then the vacuum energy density becomes of the order M_P^4. Of course, the trouble is that both of these predicted values are grossly in contradiction with observations. Indeed, various cosmological observations <cit.> indicate that the Universe is accelerating now with an unknown source of energy density, the so-called dark energy, which is roughly of the order (10^-3eV)^4, vastly smaller than what one may naively obtain from basic quantum field theory analysis. This is the famous old cosmological constant problem; for a review see <cit.>. 
In addition, there is a new cosmological constant problem stating why the effects of dark energy become relevant at this very late stage of the expansion history of the Universe, at redshift around z ∼ 0.3. While imposing a cutoff by hand to regularize the vacuum zero point energy is a useful approach to start with, but it is not technically correct. The simple reason is that it violates the underlying local Lorentz invariance when a cutoff scale in momentum space is introduced. A proper regularization method should respect the underlying symmetry. For this purpose, dimensional regularization scheme is more appropriate for regularizations which respects the underlying symmetries <cit.>. Employing dimensional regularizations scheme in flat background, one actually obtains <cit.>⟨ρ_v⟩ = m^4/64 π^2ln(m^2/μ^2) , in which μ is a regularization mass scale. This shows that the contribution of a fundamental field with mass m in vacuum energy is at the order m^4. The heavier is the field, the higher is its contribution in vacuum energy density. It is natural to look for vacuum zero point energy in a curved background. However, in a curved spacetime the solutions for the mode functions are non-trivial. In addition, the notion of vacuum is non-trivial in a curved background <cit.>. Therefore, it is an important question as how to regularize and renormalize the infinities in a curved spacetime to find the finite physical quantities. The vacuum zero point energy and its regularizations in a dS background are vastly studied, for an incomplete list of papers see for example <cit.>. Among many things, it is shown that in a dS background with the Hubble expansion rate H, the contribution of the massless and light fields in vacuum energy is at the order H^4. This is in contrast to the flat background where from Eq. (<ref>) one concludes that the massless fields do not contribute into vacuum energy density. However, for very heavy fields with m ≫ H, it is shown in <cit.> that ⟨ρ_v⟩ obeys the same formula as Eq. (<ref>) with subleading O(m^2 H^2) corrections. This may be understood from local Lorentz invariance and equivalence principle. Another question of interest is to look at the fluctuations of the vacuum zero point energy itself, δρ_v. This was studied in more details in <cit.> where it is shown that the fluctuations in the vacuum zero point energy is large in the sense that δρ_v ∼⟨ρ_v ⟩. Furthermore, it is shown that the distribution of the vacuum zero point energy is highly non-Gaussian in which δρ_v^3 ∼⟨ρ_v ⟩^3. In this work we study the effects of the vacuum zero point energy and its fluctuations in an inflationary background. By studying the contributions of the vacuum energy in the background energy density we obtain a weak upper bound on the mass of the fundamental fields. However, by considering the perturbations of the vacuum zero point energy and requiring that the primordial perturbations to be nearly scale invariant and Gaussian, we obtain a strong upper bound on the mass of the fundamental fields during inflation. While in this work we investigate the effects of vacuum zero point fluctuations to put constraints on the mass of fundamental fields, but the question of investigating the masses and couplings of fundamental fields during inflation were investigated extensively in the context of cosmological collider physics, for an incomplete list of papers on this direction see <cit.>. 
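It is also instructive to see how steeply the regularized flat-space result grows with the mass. The minimal sketch below evaluates m^4/(64 π^2) ln(m^2/μ^2) for a few representative masses; the renormalization scale μ = 1 TeV and the sample masses are arbitrary illustrative choices, and only the overall m^4 scaling should be taken seriously.

import numpy as np

def rho_vac_flat(m_eV, mu_eV=1.0e12):
    """Flat-space regularized vacuum energy m^4/(64 pi^2) ln(m^2/mu^2), in eV^4;
    mu_eV is an arbitrary renormalization scale (1 TeV here)."""
    return m_eV**4 / (64.0 * np.pi**2) * np.log(m_eV**2 / mu_eV**2)

for label, m in [("m = 0.5 MeV", 0.5e6),
                 ("m = 173 GeV", 173.0e9),
                 ("m = 10^14 GeV", 1.0e23)]:
    print(f"{label}: |rho_v| ~ {abs(rho_vac_flat(m)):.1e} eV^4")

# The sign of the log depends on whether m lies above or below mu; the point
# is that the magnitude grows as m^4, so the heaviest field dominates.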
§ QUANTUM FIELDS IN INFLATIONARY BACKGROUND In this section we review the quantum field perturbations in inflationary background. This analysis follow the earlier works <cit.>. We consider a scalar field χ with mass m which is minimally coupled to gravity. The background is an inflationary universe which is driven by the inflaton field ϕ. While the inflaton field rolls slowly along its classical potential V(ϕ), the field χ is stuck in its local minimum with no classical evolution. However, its is under quantum fluctuations which contribute to its vacuum energy density. We assume that the vacuum zero point energy associated to the spectator field does not dominate the background inflation dynamics. This sets an upper bound on the mass of χ field. As usual, we assume that the total cosmological constant from inflaton and the spectator field is set to zero at the end of inflation. This is another realization of the old cosmological constant problem where one requires the potential to be zero or small for a consistent expansion history of the Universe during the hot big bang cosmology. While we perform the analysis for a single fundamental scalar field, but our results can be extended to other fundamental fields with various spins. In order to regularize the UV divergences associated to vacuum zero point energy, we employ the dimensional regularization scheme and consider a D-dimensional inflationary background. To simplify further, we assume the background is nearly a dS spacetime as in standard slow-roll inflationary setups. The background metric is a D-dimensional FLRW universe with the line element, ds^2 = a(τ)^2 ( -d τ^2 + d x^2 ) , where a(τ) is the scale factor and τ is the conformal time which is related to the cosmic time via d τ = dt/a(t). In our approximation of a near dS background, we have a H τ =-1 in which H is the Hubble expansion rate during inflation which is constant in our approximation. In the above metric, d x^2 represents the line element along the D-1 spatial dimensions. To study the quantum perturbations, we introduce the canonically normalized field σ(x^μ)σ(x^μ)≡ a^D-2/2χ(x^μ) , and expand its quantum perturbations in the Fourier space as follows, σ(x^μ)=∫d^D-1𝐤/(2 π)^(D-1)/ 2(σ_k(τ) e^i 𝐤·𝐱 a_𝐤+σ^*_k(τ) e^-i 𝐤·𝐱 a_𝐤^†) , in which σ_k(τ) is the quantum mode function while a_𝐤 and a_𝐤^† are the annihilation and creation operators satisfying the following commutation relation in D-1 spatial dimension, [a_𝐤, a_𝐤^'^†]=δ^D-1(𝐤-𝐤^') . In terms of the canonically normalized field σ, the Klein-Gordon field equation takes the following form, σ_k^''(τ)+[k^2+ 1/τ^2( m^2/H^2 -D(D-2)/4) ] σ_k(τ)=0 . The above equation is similar to the Mukhanov-Sasaki equation in D-dimension dS background. Imposing the Bunch-Davies (Minkowski) vacuum deep inside the horizon, the solution for the mode function is obtained in terms of the Hankel function χ _k(τ) = a^2 - D/2σ _k(τ ) = ( - Hτ )^D - 1/2( π/4H)^1/2e^i π/2 (ν + 1/2) H_ν ^(1)( - kτ ) 1mu , where ν≡1/2 1mu√((D-1)^2- 4 β^2) , β≡m/H . From the above expression we see that ν can be either real or pure imaginary, depending on the mass m. For a light field with β <1, ν is real while for a heavy field with β≫ 1 it is a pure complex number. §.§ Vacuum Zero Point Energy We are interested in the vacuum zero point energy ρ_v associated to χ quantum fluctuations. It is convenient to define the following components of ρ_v, ρ_1 ≡1/2χ̇^2 , ρ_2 ≡1/2 g^i j∇_i χ∇_j χ , ρ_3 ≡1/2 H^2 χ^2 , so ρ_v= ρ_1+ ρ_2 +β^2 ρ_3 . 
Note our convention in which we have pulled out a factor β=m/H when defining ρ_3 so β^2 ρ_3= 1/2 m^2 χ^2. We would like to calculate the vacuum expectation values like ⟨ρ_v ⟩≡⟨ 0| ρ_v |0⟩ in which |0⟩ is the vacuum of the free theory i.e. the Bunch-Davies vacuum. Here we briefly outline the analysis, for more details see <cit.>. Let us start with ⟨ρ_1⟩. With the mode function given in Eq. (<ref>) we obtain ⟨ρ_1⟩ =μ ^4 - D/2a^2(τ ) ∫ d^D - 1 k/(2π )^D - 1| χ _k^' (τ ) |^2 , in which μ, as in standard dimensional regularization analysis, is a mass scale to keep track of the dimensionality of the physical quantities. To calculate the integral, we decompose it into the radial and angular parts as follows d^D-1 = k^D - 2 dk d^D-2Ω 1mu , in which d^D-2Ω represents the volume of the D-2-dimensional angular part, ∫d^D-2Ω= 2 π^D-1/2/Γ(D-1/2) . Defining the dimensionless variable x≡ - k τ and combining all numerical factors, we end up with the following integral, ⟨ρ_1 ⟩ = π^3-D/2μ^4-DH^D/2^1+DΓ( D-1/2) e^-πIm(ν)∫_0^∞ dx x |d/d x(x^D-1/2 H_ν^(1)(x))|^2 . Performing the integral [We use the Maple computational software to calculate the integrals like in Eq. (<ref>)], we obtain ⟨ρ _1⟩ = μ ^4 - Dπ ^ - D/2 - 1/4 1muΓ( ν + D/2 + 1/2)Γ( - ν + D/2 + 1/2)Γ( - D/2) cos( π 1muν) ( H/2)^D . Performing the same steps for ⟨ρ_2 ⟩ and ⟨ρ_3 ⟩, one can show that the following relations hold, ⟨ρ_1 ⟩ = β^2/D⟨ρ_3 ⟩ , ⟨ρ_2 ⟩ = -(D-1) ⟨ρ_1 ⟩ = - (D-1)/Dβ^2 ⟨ρ_3 ⟩ . The above relations between ⟨ρ_i ⟩ will be useful later on. Plugging the above expressions for ⟨ρ_i ⟩ in ⟨ρ_v ⟩ in Eq. (<ref>), we obtain ⟨ρ_v ⟩ = 2 β^2/D⟨ρ_3 ⟩ . Following the same steps, one can check that the following relation between the pressure P and the energy density holds <cit.>, ⟨ P_v ⟩ =- ⟨ρ_v ⟩ . This is an important result. It shows that the vacuum zero energy has the form of a cosmological constant. This is physically consistent since we calculate the contribution from the bubble Feynman diagrams and Lorentz invariance is expected to hold locally with ⟨ T_μν⟩ = ⟨ρ_v ⟩ g_μν. The above result for ⟨ρ_v ⟩ is valid for a general value of D. Now, we perform the dimensional regularization by setting D= 4-ϵ and expand ⟨ρ_v ⟩ to leading orders in powers of ϵ. As usual, the UV divergent contributions are controlled by the singular pole term ϵ^-1 which should be absorbed by appropriate counter terms. Regularizing this divergence contribution, the remaining finite contribution is obtained to be <cit.>⟨ρ_v ⟩_reg = H^4 β^2/64 π^2{ ( β^2 -2) [ ln( H^2/4πμ^2 ) + 2Ψ(ν+ 1/2) - πtan( νπ) ] + 1 - 3/2β^2 } in which Ψ(x) is the digamma function and ν is now given by setting D=4 in Eq. (<ref>), ν = 1/2√(9- 4 β^2 ) . The appearance of ln( H/μ) in ⟨ρ_v ⟩_reg is the hallmark of quantum corrections from dimensional regularization scheme. To read off the physical contribution, we need to renormalize the above finite value. This can be achieved by choosing a physical value for the mass scale parameter μ or if we compare the values of ⟨ρ⟩_reg at two different energy scales and examine its running with the change of the energy scale. As mentioned previously, depending on the mass of the field, ν in Eq. (<ref>) can be either real or imaginary. For light enough mass with β≤3/2 it is real while for heavier field it is pure complex. Let us look at the value of ⟨ρ⟩_reg in Eq. (<ref>) for some limiting cases. For a massless field with β=0, we obtain ⟨ρ_v ⟩_reg= 3 H^4/32 π^2 , (β=0) . This shows that for the massless field, the vacuum energy density scales like H^4. 
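As a consistency check, this massless limit can be recovered numerically from the full regularized expression above. The sketch below (with illustrative function and variable names) evaluates that expression with ν continued to complex values where needed; the choice μ = H is arbitrary and only enters at order β^2 in this limit.

import numpy as np
from scipy.special import digamma

def rho_vac_reg(beta, H=1.0, mu=1.0):
    """Regularized vacuum energy density in dS, as given above, in units of H^4;
    beta = m/H, mu is the (arbitrary) regularization scale."""
    nu = 0.5 * np.sqrt(complex(9.0 - 4.0 * beta**2))  # complex for beta > 3/2
    bracket = (np.log(H**2 / (4.0 * np.pi * mu**2))
               + 2.0 * digamma(nu + 0.5)
               - np.pi * np.tan(np.pi * nu))
    braces = (beta**2 - 2.0) * bracket + 1.0 - 1.5 * beta**2
    return (H**4 * beta**2 / (64.0 * np.pi**2) * braces).real

print(rho_vac_reg(1e-3))         # ~9.50e-3, approaching the massless limit
print(3.0 / (32.0 * np.pi**2))   # 3 H^4 / (32 pi^2) ~ 9.50e-3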
On the other hand, for very heavy field with β≫ 1, one obtains <cit.>⟨ρ_v ⟩_reg = m^4/64 π^2ln( m^2/4πμ^2 ) + O ( m^2 H^2) (β≫ 1) . This has exactly the same form as in flat background Eq. (<ref>). As explained before, this may be expected from local Lorentz invariance and the equivalence principle. Finally, for the intermediate mass range with β≲ 1, the vacuum energy has the form (<ref>) but with a numerical prefactor depending on β as well. As we would like to put an upper bound on the mass of the field during inflation, we consider the heavy and intermediate mass range fields. For light field where ⟨ρ_v ⟩∼ H^4, the contribution of the vacuum energy in inflation dynamics is negligible as H^4 ≪ 3 M_P^2 H^2. In our limit of interest concerning heavy and semi-heavy fields, we parametrize the zero point energy density given in Eq. (<ref>) as follows, ⟨ρ_v ⟩_reg = c_0 m^4 , in which c_0 is a numerical factor which depends on parameters such as the renormalization scale μ. As can be seen from Eq. (<ref>), c_0 depends logarithmically on m as well. §.§ Fluctuations in Vacuum Zero Point Energy As observed in <cit.>, the vacuum energy is subject to random fluctuations with large amplitudes. Denoting the statistical variation of ρ_v by δρ_v^2 ≡⟨ρ_v^2 ⟩ - ⟨ρ_v ⟩^2, it is shown in <cit.> that δρ_v^2/⟨ρ_v ⟩^2 =10 . The above result for the density contrast of the vacuum energy holds in both flat, dS as well as in black hole backgrounds. This result plays crucial role in our investigation of the upper bound on the mass of the quantum fields during inflation. The fact that δρ_v ∼⟨ρ_v ⟩ indicates that the distribution of the vacuum zero pint energy is non-linear and non-perturbative, which may generate inhomogeneity and anisotropies on small scales, see also <cit.>. Combining Eqs. (<ref>) and (<ref>) we can parametrize the fluctuation in vacuum energy density as δρ_v^2 = c_1^2 m^8 in which c_1^2 = 10 c_0^2. In addition, it is shown in <cit.> that the fluctuations in the distribution of the vacuum zero point energy is highly non-Gaussian. Denoting the skewness in vacuum zero point energy by δρ_v^3 ≡⟨( ρ_v - ⟨ρ_v ⟩)^3⟩, one obtains that[In <cit.> a slightly different definition of skewness is used, defined by δρ_v^3 ≡⟨ρ_v^3⟩ - ⟨ρ_v ⟩^3.]δρ_v^3/⟨ρ_v ⟩ ^3 = 62 . Combining this with Eq. (<ref>), we can parametrize the skewness as δρ_v^3 = c_3^3 m^12 , in which c_3 is a numerical factor related to c_0 via c_3^3= 62 c_0^3. Eq. (<ref>) indicates that the distribution of the fluctuations of the vacuum zero point energy is highly non-Gaussian. This will play crucial roles in our analysis of obtaining an upper bound for the mass of the fundamental fields. Before closing this section, we comment that while we have obtained the density contrast and the skewness associated to zero point fluctuations of an spectator field, but the same results apply for inflaton field as well. The reason is that all we needed was the solution of the mode function (<ref>) which works for all fields, whether light or heavy. The case of inflaton corresponds to β≪1. Correspondingly, the density contrast and the skewness for the zero point fluctuations of inflaton satisfy the same expressions as Eqs. (<ref>) and (<ref>). § UPPER BOUNDS ON THE MASS AND COUPLINGS Now we are ready to put constraints on the mass of the quantum fields during inflation. To simplify the discussions, here we consider the simple case where we have two fields, the inflaton field ϕ and the spectator field χ with masses m_ϕ and m_χ respectively. 
This setup can be extended to more general cases involving multiple spectator fields with different spins. Our implicit assumption is that the spectator field χ is heavy enough compared to H. As it is locked in its local minimum with zero potential, it has no classical value and with no classical contribute in energy density. It is assumed that the inflationary background is driven with the inflaton field with a classical potential V(ϕ). Both ϕ and χ are subject to quantum fluctuations so we should take into account their contributions in vacuum zero point energy, i.e. ρ_v = ρ_v^(ϕ) + ρ_v^(χ). The total energy density driving the background expansion is the sum of the inflaton classical energy density ρ_ϕ^cl and the vacuum zero point energy of both fields, i.e. ρ_tot= ρ_ϕ^cl+ ⟨ρ_v ⟩. In the slow-roll approximation we further have ρ_ϕ^cl≃ V(ϕ). §.§ Bounds from Background Expansion Our first requirement is that the vacuum energy density should be negligible compared to the inflaton classical potential. This is because we need to terminate inflation and connect it to standard hot big bang cosmology during reheating. In a sense, this is similar to old cosmological constant problem in which it is assumed that the total vacuum zero point energy is either zero or tuned to a tiny value as required in ΛCDM setup to be the source of the observed dark energy. This requirement corresponds to ⟨ρ_v ⟩≪ V(ϕ) ≃ 3 M_P^2 H^2. Combining this with Eq. (<ref>), and neglecting the prefactors such as c_0 which may induce a numerical uncertainty of order unity, we require m_ϕ , m_χ < (M_P H )^1/2 . The above upper bound on the inflaton mass is trivial, since inflaton is light in order for inflation to sustain, m_ϕ≪ H, so the condition (<ref>) is easily met for inflaton. Alternatively, we can translate this bound in terms of the reheating temperature T_r via M_P^2 H^2 ∼ T_r^4, yielding (up to numerical factors) m_χ < T_r . Using the Planck/BICEP/Keck observations constraint on the amplitude of the tensor to scalar ratio r ≲ 10^-2<cit.> yields T_r ≲ 10^15GeV. This in turn imposes the upper bound that m_χ is below the GUT scale. Although this is a useful upper bound, but it is not strong as GUT scale is just a few orders of magnitude below M_P and one may not expect that the mass of fundamental fields to be much heavier than the GUT scale. §.§ Bounds from Power Spectrum More strong upper bounds on m_χ can be obtained when we consider the perturbations. In our view, the vacuum energy density from the quantum fields provides random fluctuations in energy density. Observationally, the perturbations in energy density is the source of perturbations in CMB map with the amplitude δρ/ρ∼δ T/T ∼ 10^-5. In the analysis below, we look at the perturbations in real space. It is understood that the perturbations from vacuum zero point energy have the correlation length m^-1 so if the field is heavy, its correlation length is sub-Hubble. However, the key issue is that we look at the accumulative contributions of all UV modes when looking at statistical quantities such as ⟨(δρ( x)/ρ)^2 ⟩ locally in real space. In the spirit, this is similar to the cosmological constant problem at the background level when all UV modes contribute to the averaged quantity ⟨ρ_v ⟩. Now, we extend this view to perturbation in vacuum energy density itself. 
To start, let us consider the curvature perturbation on surface of constant energy density ζ, defined as <cit.>ζ≡ -ψ + H/ρ̇_totΔρ , in which ψ is the curvature perturbation on three-dimensional spatial hypersurface. Also, ρ_tot is the total background energy density while Δρ is the perturbation in total energy density. Note that neither Δρ nor ψ are gauge invariant but ζ is. To simplify further, we go to spatially flat gauge where ψ=0. As mentioned previously, in our case of interest ρ_tot = ρ^cl_ϕ + ⟨ρ_v⟩. Since ⟨ρ_v⟩ is the vacuum energy density which is constant by construction, as parametrized in Eq. (<ref>), we conclude that ρ̇_tot = ρ̇^cl_ϕ = - 3 H (ρ^cl_ϕ + P^cl_ϕ ) =-3 H ϕ̇^2 = -6 ϵ_H M_P^2 H^3 , in which ϵ_H ≡ -Ḣ/H^2 is the first slow-roll parameter. On the other hand, the fluctuations in energy density receive contributions from both ϕ and χ, Δρ= Δρ_ϕ + Δρ_χ, yielding ζ_flat= H/ρ̇^cl_ϕ( Δρ_ϕ + Δρ_χ) . As mentioned before, the spectator field has no classical component and its contribution is from the vacuum zero point fluctuations, Δρ_χ = ρ_v^(χ) - ⟨ρ_v^(χ)⟩≡Δρ_v^(χ) . In terms of χ quantum fluctuations, Δρ_χ is second order in χ^2 or its derivatives like χ̇^2. This is because χ is pure quantum perturbations with ⟨χ⟩=0. Also note that ⟨Δρ_v^(χ)⟩ =0. On the other hand, Δρ_ϕ can have a linear contribution in δϕ. This is because ϕ is rolling on its classical potential so Δρ_ϕ can have mixed contributions such as ϕ̇δϕ̇ or m_ϕ^2 ϕδϕ etc. We denote this mixed contribution which is linear in δϕ perturbations by Δρ_ϕ^(1). This is the standard source of perturbation in inflationary energy density in the absence of vacuum zero point energy. Similar to Eq. (<ref>), the contribution of vacuum zero point energy in Δρ_ϕ, which is second order in δϕ, is denoted by by Δρ_v^(ϕ), Δρ_v^(ϕ)≡ρ_v^(ϕ) - ⟨ρ_v^(ϕ)⟩ . Combining the standard contribution Δρ_ϕ^(1) and the contributions from the vacuum zero point fluctuations of Δρ in Eq. (<ref>), and discarding the subscript “flat" for convenience, we obtain ζ= H/ρ̇^cl_ϕ( Δρ_ϕ^(1) + Δρ_v^(ϕ) + Δρ_v^(χ)) . The two point correlation functions ⟨ζ^2 ⟩ gives the amplitude of temperature fluctuations in CMB maps. The cosmological observations such as the Planck observation indicate that the curvature power spectrum is nearly scale invariant and Gaussian <cit.>. This is because the inflaton potential is nearly flat (i.e. the background is nearly dS) and the inflaton is light compared to Hubble expansion rate, m_ϕ≪ H. The first term in Eq. (<ref>) yields the usual nearly scale-invariant power spectrum. However, the remaining two terms, Δρ_v^(ϕ) and Δρ_v^(χ), originating from the vacuum zero point fluctuations, have non-trivial scale-dependence in Fourier space. This is because they are in the form δϕ^2 and δχ^2. In addition, the index ν for the spectator field is far from the special value ν=3/2 (ν can even become complex valued) if χ is heavy. Therefore, their contributions will modify the near scale-invariance of the standard power spectrum coming from the first term in Eq. (<ref>). We present the analysis of the scale-dependence of the contributions of Δρ_v^(ϕ) and Δρ_v^(χ) in power spectrum in Appendix <ref>. Using the above expression for ζ, and noting that ⟨Δρ_ϕ^(1)⟩=0 (since it is linear in terms of δϕ fluctuations), we obtain ⟨ζ^2 ⟩ = ( H/ρ̇^cl_ϕ)^2 [ ⟨ (Δρ_ϕ^(1))^2 ⟩ + ⟨ (Δρ_v^(ϕ))^2 ⟩ + ⟨ (Δρ_v^(χ))^2 ⟩ + 2 ⟨Δρ_ϕ^(1)Δρ_v^(ϕ)⟩ + 2 ⟨Δρ_v^(χ)Δρ_v^(ϕ) ⟩] . 
As the δϕ and χ perturbations are independent, one can easily show that the last term in big bracket above vanishes. More specifically, ⟨Δρ_v^(χ)Δρ_v^(ϕ) ⟩ = ⟨Δρ_v^(χ)⟩⟨Δρ_v^(ϕ)⟩ =0 . On the other hand, the fourth term in the big bracket is cubic in δϕ perturbations. Since we assume that δϕ perturbations are Gaussian, this contribution is suppressed compared to the first term. Now, noting that ⟨ (Δρ_v)^2 ⟩ =δρ_v^2= c_1^2 m^8 for both ϕ and χ fields, we obtain _ζ≡⟨ζ^2 ⟩ = ( H/ρ̇^cl_ϕ)^2 [ ⟨ (Δρ_ϕ^(1))^2 ⟩ + c_1^2 (m_ϕ^8+ m_χ^8) ] = _ζ^(0)(1+ Δ_ζ/_ζ^(0)) , where the fractional correction in power spectrum is given by Δ_ζ/_ζ^(0)≡c_1^2 (m_ϕ^8+ m_χ^8)/⟨(Δρ_ϕ^(1))^2⟩ . In order to simplify the analysis, we have assumed that the coefficient c_1 is independent of mass. Of course, this assumption can be relaxed which only affects the numerical prefactors in our following analysis. The first term in Eq. (<ref>) gives the usual contribution in curvature perturbation power spectrum in the absence of zero point contributions, which is given by _ζ^(0) = ( H/ρ̇^cl_ϕ)^2 ⟨ (Δρ_ϕ^(1))^2 ⟩ = H^2/8 π^2 ϵ_H M_P^2 . Combining the above formula for _ζ^(0) with Eq. (<ref>), we obtain the following expression for the fractional correction in power spectrum induced from vacuum zero point energy, Δ_ζ/_ζ^(0)= ( 4 π^2 c_1/3)^2 _ζ^(0)(m_ϕ^8+ m_χ^8)/H^8 . Note that we have calculated ⟨ζ^2 ⟩ in real space so there is no information of scale-dependence from Eq. (<ref>). To look for the scale-dependence of the power spectrum, we should look at the scale-dependence of various contributions of ζ in Eq. (<ref>) in Fourier space. As shown in Appendix <ref>, the contributions of the vacuum zero point fluctuations ⟨ (Δρ_v^(ϕ))^2 ⟩ and ⟨ (Δρ_v^(χ))^2 ⟩ have non-trivial scale-dependence in Fourier space, with Δ_ζ typically scaling like k^3. In order to keep the power spectrum to remain nearly scale-invariant, we require that the fractional corrections in power spectrum induced by vacuum zero point fluctuations in Eq. (<ref>) to be negligible, yielding c_1^2 m_ϕ^8, c_1^2 m_χ^8 ≪ H^8/ _ζ^(0). The above constraint can be refined further as follows. The spectral index associated to curvature perturbation in Fourier space is defined via n_s-1 ≡d ln_ζ(k)/d ln k . Assuming that the leading term in Eq. (<ref>) is nearly scale-invariant, we require the subleading corrections in Eq. (<ref>) not to induce too much scale-dependence. Assuming that Δ_ζ has a power law dependence on the scale k, from Eq. (<ref>) we obtain Δ_ζ/_ζ^(0)≲ 1- n_s . Using the specific form of Δ_ζ from Eq. (<ref>) this yields the following upper bound on the mass of the fields, m_ϕ, m_χ≲( 3/4 π^2 c_1)^1/4( 1- n_s/_ζ^(0))^1/8 H ≃( 1- n_s/_ζ^(0))^1/8 H , where in the final approximation we have assumed that the numerical factor ( 3/4 π^2 c_1)^1/4 is at the order unity. The above result does not impose strong upper bound on inflaton mass as we already know that m_ϕ≪ H. However, the above upper bound has important implications for the spectator field. From the COBE normalization we have _ζ≃ 2× 10^-9 while from the Planck observation <cit.> the spectral index is 1- n_s ≃ 4× 10^-2. Therefore, up to numerical factor of order unity, we conclude that m_χ≲ H. Technically speaking, the origin of this strong bound is the relation (<ref>) which indicates that the fluctuations in the vacuum zero point energy density is comparable to the vacuum zero point energy itself, i.e. δρ_v ∼⟨ρ_v⟩. This is the reason why the upper bound from perturbation in Eq. 
(<ref>) is much stronger than the bound obtained from the background in Eq. (<ref>), by an additional factor (H/M_P)^1/2. §.§ Bounds from Bispectrum To continue this line of investigation, now let us look at the non-Gaussianity induced by the fluctuations of the vacuum zero point energy. As we saw in Eq. (<ref>), the fluctuations of vacuum zero point energy is highly non-Gaussian with δρ_v^3∼⟨ρ_v ⟩^3. We expect this to induce large non-Gaussianity in curvature perturbations if the field is heavy. The non-Gaussianity parameter f_NL is roughly given by f_NL∼⟨ζ^3 ⟩/⟨ζ^2 ⟩^2 . The cosmological observations indicate that the primordial perturbations are nearly Gaussian with |f_NL| ≲ 1<cit.>. On the other hand, the inflaton potential is nearly flat and its contribution in primordial non-Gaussianity is typically negligible <cit.>. Therefore, any significant contribution in f_NL comes from the fluctuations of the vacuum zero point energy of the heavy field. Using Eq. (<ref>) for ζ, the three-point function ⟨ζ^3 ⟩ is given by ⟨ζ^3 ⟩ = ( H/ρ̇^cl_ϕ)^3 [ ⟨ (Δρ_ϕ^(1))^3 ⟩ + ⟨ (Δρ_v^(ϕ))^3 + (Δρ_v^(χ))^3 ⟩ + 3 ⟨Δρ_ϕ^(1) (Δρ_v^(ϕ))^2 ⟩ + 3 ⟨ (Δρ_ϕ^(1))^2 Δρ_v^(ϕ)⟩] In obtaining the above expression, we have used ⟨Δρ_ϕ^(1)⟩= ⟨Δρ_v^(ϕ)⟩ = ⟨Δρ_v^(χ)⟩=0. The first term in Eq. (<ref>) represents the non-Gaussianity associated to δϕ perturbations. As we discussed before, this is very small in slow-roll limit <cit.> so we ignore its contribution in f_NL. The second and third terms in Eq. (<ref>) represent the skewness in vacuum zero point energy distribution as given in Eq. (<ref>), ⟨ (Δρ_v^(ϕ))^3⟩ = c_3^3 m_ϕ^12 ⟨ (Δρ_v^(χ))^3⟩ = c_3^3 m_χ^12 . The fourth term containing ⟨Δρ_ϕ^(1) (Δρ_v^(ϕ))^2 ⟩ is odd in power of δϕ, at the order δϕ^5. Since δϕ perturbations are Gaussian, the contribution from this term, like the first term, is suppressed. Finally, the last term in Eq. (<ref>) has fourth powers of δϕ so it is not suppressed a priori. From the structure of this term, it will have the following form ⟨ (Δρ_ϕ^(1))^2 Δρ_v^(ϕ)⟩∼⟨ (Δρ_ϕ^(1))^2 ⟩⟨ρ_v^(ϕ)⟩ . Now, we can compare the last term in Eq. (<ref>) with the second term which is the contribution of the vacuum zero point fluctuations of inflaton, obtaining ⟨ (Δρ_v^(ϕ))^3⟩/⟨ (Δρ_ϕ^(1))^2 Δρ_v^(ϕ)⟩∼⟨ρ_v^(ϕ)⟩^2/⟨ (Δρ_ϕ^(1))^2 ⟩∼_ζ^(0)( m_ϕ/H)^8 . This is the same bound as in Eq. (<ref>) so we conclude that the contribution from the fluctuations of the vacuum zero point energy of inflaton is much smaller than the last term in Eq. (<ref>). Intuitively speaking, this is because the second term scales like m_ϕ^8 while the last term scales like m_ϕ^4. Since, m_ϕ≪ H, we expect that the second term to be negligible compared to the last term. Now we calculate the contributions of the dominant terms, the third and the last terms of Eq. (<ref>) in f_NL. Starting with the last term, and neglecting the numerical prefactors, its contribution in f_NL is given by f_NL|_last term∼ ( H/ρ̇^cl_ϕ)^3 ⟨ (Δρ_ϕ^(1))^2 Δρ_v^(ϕ)⟩ (_ζ^(0))^-2∼ m_ϕ^4 ( _ζ^(0))^-2 _ζ^(0) ( H/ρ̇^cl_ϕ) ∼(m_ϕ/H)^4 . As the inflaton field is light, we conclude that the above contribution in f_NL is negligible. Therefore, the dominant contribution in f_NL is entirely from the fluctuations of the vacuum zero point energy of the spectator field when it is heavy. Using our formula for skewness Eq. (<ref>) we obtain f_NL≃ ( H/ρ̇^cl_ϕ)^3 ⟨( Δρ_v^(χ))^3 ⟩ (_ζ^(0))^-2∼ m_χ^12 (_ζ^(0))^-2( _ζ^(0)/H^4)^3 ∼_ζ^(0)(m_χ/H)^12 . 
In order to be consistent with cosmological observations with |f_NL| ≲ 1, we obtain the following upper bound on the mass of the heavy field, m_χ≲ (_ζ^(0))^-1/12 H . Numerically, this upper bound is similar to the upper bound (<ref>) obtained from the power spectrum. Again, the physical reason for this strong bound is that the inflaton perturbations are nearly Gaussian while the perturbations of the zero point energy are highly non-Gaussian. While the contribution of the zero point energy of the heavy field is negligible in the background expansion (as given by bound (<ref>)), but its non-Gaussian properties are strong enough to affect the primordial bispectrum. Intuitively speaking, this situation is similar to the curvaton scenario. One can manage that the curvaton field to be subdominant in the background energy during inflation by a factor R ≪1. However, the perturbations become highly non-Gaussian with the amplitude f_NL∼ 1/R<cit.>. In conclusion, taking into account the uncertainties from the numerical prefactors, we conclude that the spectator field can not be much heavier than H, m_χ≲ H . This is the main result of this work. This conclusion has important implications for physics beyond SM. For example, from the upper bound r < 10^-2 on the amplitude of tensor to scalar spectra, we obtain the upper bound on the scale of inflation as H≲ 10^-5 M_P ∼ 10^13GeV. Considering the numerical uncertainties of order unity in our analysis, this implies that the mass of the fields in the beyond SM sector should be lighter than 10^14 GeV. In general, in terms of the parameter r, we can express the upper bound (<ref>) as follows, m_χ≲_ζ^5/12√(r) M_P ∼√(r)× 10^14 GeV . This implies that the mass of the fundamental fields are lighter that the GUT scale by a factor √(r). For example, if the scale of inflation happens to be very low, then the mass of the fundamental fields are significantly below the GUT scale. While the upper bound (<ref>) is on the mass of the fundamental field, but it can be used to put bounds on the coupling of the heavy fields to the inflaton field as well. Suppose we have the interaction L= 1/2g^2 ϕ^2 χ^2. This will induce an effective mass m_χ for the field χ given by m_χ^2 = g^2 ϕ^2 , in which ϕ is the classical value of the inflaton. Using the bounds (<ref>), we require that g ≲H/ϕ . If the spectator field χ is coupled to inflaton with a coupling much stronger than the bound (<ref>), then it induces a large mass for the spectator field, violating the bounds (<ref>). As an example, suppose we have the large field model with ϕ∼ 10 M_P and H ∼ 10^-5 M_P. Then, the bound (<ref>) requires g≲ 10^-6. §.§ Feynman Diagrams While our analysis were mostly based in real space, it is instructive to look at the corrections from vacuum zero point energy in Fourier space as well. As the perturbations Δρ_v^(χ) is quadratic in χ^2, the contribution of Δρ_v^(χ) in power spectrum of ζ_ k in Fourier space has the following form, ⟨ζ_ k_1(τ) ζ_ k_2(τ)⟩∼ (2 π)^3 δ^3( k_1 + k_2) ( H/ρ̇^cl_ϕ)^2 ∫ d^3 p |χ_ p(τ) |^2 |χ_ k_1- p(τ) |^2 , where the symbol ∼ means we discard the numerical factors and other contributions such as χ̇(τ)^2 and (∇χ)^2. As we demonstrate in Appendix <ref>, the mode functions are quite blue for massive fields. The structure of the above convolution integral therefore suggests that, for a given mode k, the contribution of the vacuum zero point energy in _ζ(k) comes from the UV modes in the integral over p. 
Here τ is any representative time when the mode of interest k has left the horizon and the leading contribution in ζ, i.e. the first term in Eq. (<ref>), freezes. This can be a few e-folds after the time of horizon crossing for the mode k or simply the time of end of inflation. As the variance δρ_v^2 and skewness δρ_v^3 are constant (independent of time), the above correlation can be calculated at any time as long as ζ freezes. Similarly, the contribution of Δρ_v^(χ) in bispectrum has the following form ⟨ζ_ k_1ζ_ k_2ζ_ k_3⟩∼ (2 π)^3 δ^3( k_1 + k_2 + k_3) ( H/ρ̇^cl_ϕ)^3 ∫ d^3 p|χ_ p(τ) |^2 |χ_ k_2+ p(τ) |^2 |χ_ k_1- p(τ) |^2 . It would be instructive to look at the above results in terms of Feynman diagrams. In Figure <ref> we have presented the Feynman diagrams for the contributions of the fluctuations of the vacuum zero point energy from the heavy field in power spectrum and bispectrum. The structure of the integrals in Eqs. (<ref>) and (<ref>) indicates that these contributions are in the form of one-loop corrections. The small scale modes that are running inside the loops yield the dominant contributions in the integrals in Eqs. (<ref>) and (<ref>). As the correction in power spectrum Δ_ζ(k) is blue, the long CMB scale modes are unaffected from the loop corrections. Instead, the corrections in power spectrum is significant on small scales. In this view, the effects of one-loop corrections here are different than the one-loop corrections in <cit.> where it is shown that short modes which experience an intermediate phase of ultra slow-roll inflation can affect the long CMB scale mode. More specifically, Δ_ζ(k) from the loop corrections in the latter setup is scale-invariant so the long CMB scale modes and the short modes are affected similarly. However, in our case Δ_ζ(k) has a strong blue scale-dependence so the long modes are protected from large loop corrections. Before closing this section we comment that the roles of the heavy spectator fields were investigated by Chen and Wang in <cit.>.[ We thank Xingang Chen for bringing <cit.> to our attention while our work was in its final stage.] In that work the authors used perturbative in-in formalism to calculate the corrections in power spectrum from quartic interactions of the type m^2 ζ^2 χ^2. To regularize the UV divergent integrals, they imposed a cut-off Λ by hand obtaining a correction of the form Δ_ζ/_ζ∼_ζ^(0) (Λ/H)^4. Comparing their result with our result Eq. (<ref>), there are two important differences. First, we do not have the cutoff Λ as we perform the regularization automatically via dimensional regularization scheme. In a sense, their Λ will be replaced by the mass of the field m. Second, their fractional correction in power spectrum scales like Λ^4/H^4 while ours scales like m^8/H^8. The reason is that they used the quartic Hamiltonian of the type m^2 ζ^2 χ^2. To obtain our scaling m^8/H^8, one should start with a cubic Hamiltonian of the form ζχ^2 in the analysis of <cit.> which yields to a Feynman diagram similar to the left panel of Fig. <ref> with a nested in-in integral. Since our result for Δ_ζ is expressed in term of m^8 we are able to put an upper bound on the mass of field while in <cit.> the bound will be imposed on Λ which was interpreted as the scale of the UV completed theory. § SUMMARY AND DISCUSSIONS In this work we have studied the implications from the fluctuations of the vacuum zero point energy associated to a fundamental field during inflation. 
At the background level, the vacuum zero point energy associated to a field with mass m contributes to the cosmological constant of the order m^4. This is the source of the infamous cosmological constant problem. There is no compelling dynamical mechanism to tune the contributions of the quantum fields in cosmological constant to be consistent with the magnitude of dark energy as observed in cosmological observations. One may simply set the cosmological constant induced by quantum fields to be zero (or very nearly zero) at the background level. However, the crucial observation is that the perturbations in the distribution of the vacuum zero point energy scales like the background vacuum energy, i.e. δρ_v ∼ m^4. While one may absorb the background vacuum zero point energy by some mechanism, however the perturbations in distribution of vacuum energy are always present. This shows another face of the cosmological constant problem, now at the level of perturbations. The fluctuations in vacuum zero point energy contribute to primordial curvature perturbations. We have shown that in order to keep the primordial perturbations to be nearly scale-invariant and Gaussian, the fundamental fields can not be significantly heavier than H. This is a strong conclusion. Of course, we are already familiar with the specific example of the inflaton field itself that it should be light during inflation. However, our analysis show that this is not unique to inflaton field. There may be a hierarchy between the mass of the fundamental fields and the inflaton field, but this hierarchy is subject to our upper bound that m ≲ H. In terms of the parameter r, our bound is translated into m ≲√(r)× 10^14GeV. This conclusion has important implications for beyond SM particle physics. For example, from the upper bound r < 10^-2, and considering numerical uncertainties of order unity in our analysis, we conclude that all fields in the beyond SM sector should be lighter than 10^14GeV. This is just below the GUT scale. While we presented the specific analysis for one spectator scalar field, but the result can be extended to other fields with different spins as well. For example, as shown in <cit.>, the fluctuations in vacuum zero point energy of the fermionic fields also satisfy the relation δρ_v ∼⟨ρ_v ⟩∼ m^4. Since this relation was the key ingredient in the derivation of our upper bound on the mass, we conclude that our upper bound applies to fermionic fields as well. This conclusion applies for massive gauge bosons with spin one as well. In addition, if we have many fields, then all of them contribute to the vacuum zero point energy. However, as in cosmological constant problem, only the heaviest field has the most dominant contribution in power spectrum and would be subject to our upper bound m ≲ H. In terms of the Feynman diagrams, the corrections from the vacuum zero point fluctuations can be interpreted as one-loop corrections in power spectrum and bispectrum. Since the spectrum of Δρ_v^(χ) is highly blue, the leading contributions from these loop corrections come from small scale modes which run inside the loop. Since the correction in power spectrum Δ_ζ(k) is blue-tilted, the loop corrections affect the short scales while the long modes, such as the CMB scale modes, are largely unaffected by these quantum loop corrections. Acknowledgments: We thank Xingang Chen, Mohammad Hossein Namjoo, Misao Sasaki and Haidar Sheikhahmadi for useful discussions and correspondences. 
This work is supported by the INSF of Iran under the grant number 4022911. § SCALE-DEPENDENCE OF ΔΡ_V^(Φ) AND ΔΡ_V^(Χ) IN Ζ In this appendix we investigate the scale-dependence of the contributions from the perturbations in vacuum zero point fluctuations Δρ_v^(ϕ) and Δρ_v^(χ) in ζ. Our goal is to show that since these contributions are quadratic in field perturbations, respectively δϕ^2 and δχ^2, then their contributions are highly scale-dependent in Fourier space. The curvature perturbation on the surface of constant energy density is given by ζ= H/ρ̇^cl_ϕ( Δρ_ϕ^(1) + Δρ_v^(ϕ) + Δρ_v^(χ)) . The first term above is the usual scale-invariant term. To see this, let us look at the mode functions, δϕ_k(τ) ,χ _k(τ) = ( - Hτ )^D - 1/2( π/4H)^1/2e^i π/2 (ν + 1/2) H_ν ^(1)( - kτ ) 1mu , where ν≡1/2 1mu√(9- 4 β^2) , β≡m/H . For inflaton field, β≪ 1 so ν≃3/2. However, for the spectator field χ, if β∼1 then ν is far from the critical value 3/2 while for larger values of β it can even be a complex number. Now let us look at the superhorizon limit where k τ→ 0. In a sense, we calculate the power spectrum at the end of inflation τ→ 0 so all modes of interests are superhorizon. Using the small argument limit of the Hankel function, we have (assuming ν is real) H_ν ^(1)( - kτ ) ≃ -i/πΓ(ν) ( -k τ/2)^-ν . We see that the mode function scales like (- k τ)^-ν on superhorizon scales. As the dimensionless power spectrum _ζ is defined via _ζ≡k^3/2 π^2 |ζ_k|^2 , we conclude that the first term in Eq. (<ref>) which is linear in δϕ_k scales like k^3 k^-2ν = k^3- 2 ν. Since for inflaton ν≃3/2, the power spectrum is nearly scale-invariant. The deviation in scale-invariance is determined by the slow-roll corrections. Now we investigate the scale-dependence of the remaining two terms in Eq. (<ref>). As both of them have similar forms, we consider the third term induced from the spectator field. Since Δρ_v^(χ)∼χ^2, its contribution in power spectrum of ζ_ in Fourier space has the following form, ⟨ζ_ k_1(τ) ζ_ k_2(τ)⟩∼ (2 π)^3 δ^3( k_1 + k_2) ( H/ρ̇^cl_ϕ)^2 ∫ d^3 p |χ_ p(τ) |^2 |χ_ k_1- p(τ) |^2 , where, as mentioned in the main text, the symbol ∼ means we discard the numerical factors and other contributions such as χ̇(τ)^2 and (∇χ)^2. As the integral in Eq. (<ref>) is UV divergent, we expect the dominant contribution to come from the modes deep inside the horizon, i.e. from modes which experience the flat Minkowski background with p →∞. In this limit χ_p ∼ p^-1/2 so ⟨ζ_ k_1(τ) ζ_ k_2(τ)⟩∼∫ d^3 p1/p^2 . As expected, the above integral is UV divergent which is the hallmark of the vacuum zero point energy and its fluctuations. After regularizing this divergence (as we did via dimensional regularization in section (<ref>)), we conclude that ⟨ζ_ k_1ζ_ k_2⟩ is nearly independent of k. Constructing _ζ∼ k^3 ⟨ζ_ k_1ζ_ k_2⟩ we conclude that _ζ is blue scaling like k^3. JHEPNoTitle
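The headline bounds obtained in the main text can be put into numbers with a short script. The inputs below (the curvature power spectrum amplitude, the spectral index, the assumed upper bound on r, and the reduced Planck mass) are the standard values quoted above; all order-unity prefactors dropped in the text are dropped here as well, so the outputs should be read as order-of-magnitude estimates only.

import numpy as np

P_zeta = 2.1e-9      # curvature power spectrum amplitude
n_s    = 0.965       # scalar spectral index
r_max  = 1.0e-2      # assumed upper bound on the tensor-to-scalar ratio
M_P    = 2.4e18      # reduced Planck mass in GeV

# scale of inflation implied by r, using the standard relation P_T = 2 H^2 / (pi^2 M_P^2)
H_max = np.pi * M_P * np.sqrt(r_max * P_zeta / 2.0)
print(f"H     <~ {H_max:.1e} GeV")

# near scale-invariance of the power spectrum: m <~ ((1-n_s)/P_zeta)^(1/8) H
print(f"m_chi <~ {((1.0 - n_s) / P_zeta)**0.125:.1f} H   (power spectrum)")

# Gaussianity, |f_NL| <~ 1: m <~ P_zeta^(-1/12) H
print(f"m_chi <~ {P_zeta**(-1.0/12.0):.1f} H   (bispectrum)")

# combined bound in physical units: m <~ P_zeta^(5/12) sqrt(r) M_P
print(f"m_chi <~ {P_zeta**(5.0/12.0) * np.sqrt(r_max) * M_P:.1e} GeV")

# Both perturbative bounds give m_chi of order a few times H, i.e. roughly
# 10^13-10^14 GeV when r is near its current upper limit.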
http://arxiv.org/abs/2406.19069v1
20240627103553
A halo model approach to describe clustering and emission of the two main star forming galaxy populations for Cosmic Infrared Background studies
[ "Giorgia Zagatti", "Erminia Calabrese", "Caterina Chiocchetta", "Martina Gerbino", "Mattia Negrello", "Luca Pagano" ]
astro-ph.CO
[ "astro-ph.CO" ]
http://arxiv.org/abs/2406.18186v1
20240626090249
The Predicament of Absorption-dominated Reionization II: Observational Estimate of the Clumping Factor at the End of Reionization
[ "Frederick B. Davies", "Sarah E. I. Bosman", "Steven R. Furlanetto" ]
astro-ph.CO
[ "astro-ph.CO" ]
0000-0003-0821-3644]Frederick B. Davies Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany 0000-0001-8582-7012]Sarah E. I. Bosman Institute for Theoretical Physics, Heidelberg University, Philosophenweg 12, D–69120, Heidelberg, Germany Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany 0000-0002-0658-1243]Steven R. Furlanetto Department of Physics & Astronomy, University of California, Los Angeles, CA 90095, USA § ABSTRACT The history of reionization reflects the cumulative injection of ionizing photons by sources and the absorption of ionizing photons by sinks. The latter process is traditionally described in terms of a “clumping factor” which encodes the average quadratic increase in the recombination rate of dense gas within the cosmic web. The importance of ionizing photon sinks during reionization is under increased scrutiny due to the short mean free path measured from stacked quasar spectra at z≃6. Here we present analytic arguments to connect the clumping factor to the mean free path by invoking ionization equilibrium within the ionized phase of the intergalactic medium at the end of (and after) reionization. We find that the latest mean free path and hydrogen photoionization rate measurements at z=5–6 imply a global clumping factor C≈12, much higher than previous determinations from radiation-hydrodynamic simulations of the reionization process. Similar values of C are also derived when applying the same procedure to observations at 2<z<5. Compared to the traditional assumption of C=3, high-redshift galaxies must produce roughly twice as many ionizing photons (∼3 photons per baryon) to reionize the universe by z∼6. This additional requirement on the ionizing photon budget may help to reconcile the reionization history with JWST observations that suggest a far greater output of ionizing photons by the most distant galaxy populations. § INTRODUCTION After the formation of baryons during the Big Bang, and their subsequent (re-)combination into atoms and the release of the cosmic microwave background (CMB), the hydrogen and helium in the Universe persisted in a predominantly neutral state. After the formation of the first stars and galaxies, the ionizing photons emitted by massive stars began to carve ionized bubbles into the surrounding intergalactic medium (IGM), beginning the epoch of reionization. The ionized bubbles from individual galaxies eventually merged, filling more and more of the cosmic volume until, by z∼5.3 <cit.>, the IGM was fully reionized. The most straightforward quantitative description of reionization was put forward by <cit.>, whose “one-zone” model for the process provides valuable intuition: dQ/dt = ṅ_ ion/⟨ n_ H⟩ - Q/t_ rec, where Q is the ionized fraction of the IGM, and the first and second terms on the right-hand side represent the source and sink terms, respectively. The sources are represented by ṅ_ ion, the emissivity of ionizing photons, while the sinks are represented by t_ rec, the recombination timescale of the ionized gas. In this work, as in <cit.>, we will assume that Q represents the volume-averaged ionized fraction. While the inside-out nature of reionization implies that a mass-averaged approach may be more appropriate, in which case an additional factor of Q arises in the sink term (e.g. ), equation (<ref>) is approximately correct in a two-phase approximation (cf. ) where the IGM is either fully ionized (x_ HII=1) or neutral (x_ HII=0). 
That is, Q represents the volume of the IGM within the ionized phase rather than a physical ionized fraction, and so it does not asymptote to a finite residual neutral fraction after reionization is complete (although see for a solution to this). Considerable effort has been undertaken to determine ṅ_ ion at early cosmic time. Such determinations typically involve a measurement of the UV luminosity function (LF) of galaxies at high redshifts (z>6), but the connection between the UV LF and the ionizing output of galaxies is still uncertain. This connection is usually parameterized as ṅ_ ion = ρ_ UVξ_ ion f_ esc, where ρ_ UV is the integral over the UV LF down to some magnitude limit, ξ_ ion is the “ionizing efficiency” which represents the average (intrinsic) spectral shape of the stellar populations between the ionizing and non-ionizing UV continuum, and f_ esc is the escape fraction of ionizing photons from the galaxies into the IGM. While observations of galaxy nebular emission lines can constrain their ξ_ ion (e.g. ), direct measurements of f_ esc (i.e., direct detections of ionizing photons) are hindered by the high opacity of the Lyman-series forests, and thus are only possible at redshifts z≲4 (e.g. ). The role of sinks has been a subject of considerable debate over the years. As mentioned above, sinks of ionizing photons enter the <cit.> formalism via t_ rec, the average recombination time of a proton in the IGM, which can be written as t_ rec = 1/C ⟨ n_ H⟩α_ HII(T), where α_ HII is the hydrogen recombination rate, and ⟨ n_ H⟩ is the mean cosmic hydrogen density. Crucially, as recombination is a collisional process, the rate depends on the squared density of ionized gas. It is common to summarize this dependence with the so-called “clumping factor,” C ≡⟨ n^2 ⟩ / ⟨ n ⟩^2, such that t_ rec = t_ rec^ uniform / C. As there is no analytic shortcut to determine C from first principles, the assumed value is typically derived from cosmological simulations. Early cosmological hydrodynamical simulations suggested C∼30 (e.g. ), implying a dominant role of recombinations in determining the reionization history. However, this high value considered all gas particles in the simulation volume. In practice, recombinations occurring inside of galaxies are already accounted for by the f_ esc parameter, so care must be taken to avoid double-counting. Later works by <cit.> and <cit.> took this exclusion into account and found values closer to C∼10, using dark-matter-only N-body simulations. But the baryons are also subject to gas pressure, which smooths their distribution relative to the dark matter field. Finally, following several works employing radiation hydrodynamic simulations of reionization <cit.>, a value of C∼2–3 is now a typical assumption in analytic reionization models in the literature. The impact of sinks is now being revisited after the measurement of a short mean free path of ionizing photons at z=6 by <cit.>, and further confirmed by <cit.>, which suggests the presence of substantially more small-scale structure in the IGM than present in traditional reionization simulations. A short mean free path can dramatically increase the number of ionizing photons required to ionize the IGM, as photons must travel long distances from ionizing sources to large-scale voids (, henceforth , see also ). However, the quantitative connection between the short mean free path and the clumping factor is not immediately obvious (e.g. 
), which has limited the extent to which the short mean free path has been taken into account by the larger reionization community. In this work, we aim to build a stronger connection between the mean free path and clumping factor at high redshift, in an attempt to unify the description of ionizing photon sinks during the reionization epoch. We first show that the clumping factor at a given redshift can be estimated from the ionizing background intensity and mean free path. We then apply this methodology to measurements of these quantities across cosmic time, finding a nearly constant value of C which is several times higher than typically assumed. We assume a Planck ΛCDM cosmology <cit.> with h=0.68, Ω_m=0.31, and Ω_b=0.049. § THE CLUMPING FACTOR AND THE MEAN FREE PATH In this section, we will investigate the connection between the clumping factor and the mean free path of ionizing photons. But first, we must define what we mean by “clumping factor,” as its exact definition varies considerably between different works. Here we define the clumping factor to be the relevant clumping factor for solving equation (<ref>) – i.e. the clumping factor that provides the correct globally averaged recombination rate, but where effects inside of galaxies that give rise to the escape fraction in the definition of ṅ_ ion are ignored. Specifically, we assume that C is the constant of proportionality between the true global (external) recombination rate ṅ_ rec = ⟨ n_e n_ HIIα_ HII⟩ and the recombination rate at the cosmic mean density, C ≡⟨ n_e n_ HIIα_ HII⟩/⟨ n_e ⟩⟨ n_ HII⟩⟨α_ HII⟩ where we assume that the fiducial recombination coefficient in the denominator is equal to the Case B recombination rate for 10,000 K gas[We note that α_ HII^B(T=10,000 K)≃α_ HII^A(T=20,000 K), another common assumption in previous works.], ⟨α_ HII⟩=α_ HII^B(T=10,000 K). The most straightforward way to connect the clumping factor to the mean free path starts with the assumption of photoionization equilibrium, which should generally hold in the ionized IGM, n_ HIΓ_ HI = ⟨ n_e n_ HIIα_ HII⟩ = C χ_e (1-x_ HI)^2 n_ H^2 α_ HII, where Γ_ HI is the photoionization rate of hydrogen and χ_e≈1.08 is the enhancement in the number of free electrons due to ionized helium[We assume that helium is singly-ionized at the same time as hydrogen, and that the (second) reionization of helium has not yet begun. While this assumption will become incorrect at z≲4 (e.g. ), the additional 8% boost to the electron density from the second ionization of helium is small compared to the uncertainties in the observed quantities we employ in  <ref>.]. Solving for C, we have C = x_ HI n_ HΓ_ HI/χ_e (1-x_ HI)^2 n_ H^2 α_ HII. The two unknowns in this expression are Γ_ HI and x_ HI. While the former can be derived from observations of the Lyα forest, and is most sensitive to low-density gas which is unambiguously resolved in simulations, the residual neutral fraction is not so simple to derive, as it is sensitive to self-shielding and geometrical effects in dense gas (e.g. ). To proceed, one can make the simplifying assumption that the neutral fraction is connected to the mean free path of ionizing photons via λ_ mfp = (n_ HIσ_ HI)^-1. In this case, similar to the “effective” clumping factor in <cit.>, we have: C = σ_ HIΓ_ HI/λ_ mfpχ_e (1-x_ HI)^2 n_ H^2 α_ HII, where we note that both the mean free path and photoionization cross section terms implicitly represent frequency-averaged values. 
This expression for the mean free path, however, makes the crucial assumption that neutral hydrogen is uniformly distributed in space. In reality, the dense self-shielded gas that gives rise to optically-thick absorption should be inhomogeneous, distributed in clumps and/or in the filaments of the cosmic web (e.g. ). Another way to associate the clumping factor with the mean free path was suggested by <cit.>, who connected the attenuation of ionizing photon flux to the recombination rate. In this model, one considers the attenuation of ionizing photon flux dF inside a slab of material with area dA and proper width ds compared to the recombinations inside said slab, -dF dA = C n_e n_ HIIα_ HII dA ds, where the flux inside the slab is attenuated following dF/ds = -F(1+z)/λ_ mfp, where the (1+z) term converts λ_ mfp from comoving to proper units. The following expression can then be derived after solving for C, C = F(1+z)/λ_ mfpχ_e (1-x_ HI)^2 n_ H^2 α_ HII, where we note again that the F and λ_ mfp terms represent values integrated over the spectrum of the ionizing background. While this expression improves upon the previous one by removing the explicit connection between the mean free path and the neutral fraction, the geometrical assumption in its derivation (i.e. the slab) introduces additional ambiguity. We suggest a third way to conceptualize (and quantify) the connection between the mean free path and the clumping factor. A typical ionizing photon passing through the IGM will travel one mean free path before being absorbed, i.e. before ionizing a hydrogen atom. Thus a “photon photo-ionization rate,” the rate at which a given ionizing photon will ionize a hydrogen atom, can be written as Γ_γ = c/λ_ mfp. The space density of photoionizations is then n_γΓ_γ = n_γ× c/λ_ mfp, where n_γ is the number density of ionizing photons. In ionization equilibrium, this rate will balance the recombination rate, i.e. n_γ c/λ_ mfp = C α_ HIIχ_e (1-x_ HI)^2 n_H^2, Solving for C as above, we find C = n_γ c/λ_ mfpα_ HIIχ_e (1-x_ HI)^2 n_ H^2, where again the n_γ and λ_ mfp terms represent frequency-averaged quantities. In all three cases, the inferred clumping factor is proportional to the strength of the ionizing background (in various forms) divided by the mean free path of ionizing photons, e.g. C∝Γ_ HI/λ_ mfp. All three methods result in quantitatively similar values for C; in this work, we adopt the third method to compute C (i.e. equation <ref>), as it appears to have the fewest explicit assumptions on the nature of the distribution of neutral gas. We have so far ignored the dependence on photon frequency of various quantities in the expressions for C above for the sake of clarity, but due to the steep frequency dependence of the photoionization cross-section <cit.>, such terms could matter at the level of a factor of a few. We write the specific number density of ionizing photons n_ν as n_ν = u_ν/hν = 4π/cJ_ν/hν, where u_ν is the specific energy density and J_ν is the specific angle-averaged mean intensity of the ionizing background. We then proceed to estimate C using the following expression: C = [4π∫_ν_ HI^4 ν_ HIJ_ν/hνλ_ν dν]×1/α_ HIIχ_e (1-x_ HI)^2 n_ H^2, where ν_ HI is the frequency of the hydrogen ionizing edge. In the following, we make the assumption that the neutral fraction is small enough that the (1-x_ HI) term can be approximated as unity – that is, we compute the clumping factor relative to a fully ionized IGM. 
We further approximate the frequency dependencies of the mean free path and ionizing background intensity as power laws with λ_ν∝ν^α_λ and J_ν∝ν^-α_ b, leading to an analytic simplification to equation (<ref>) as long as α_ b+α_λ≠1, C = 4π J_ HI/hν_ HIλ_ mfp[1-4^-(α_ b+α_λ-1)/α_ b+α_λ-1] ×1/α_ HIIχ_e n_ H^2, where J_ HI and λ_ mfp are J_ν(ν_ HI) and λ_ν(ν_ HI), respectively. The frequency dependence assumptions are largely encapsulated by the term in brackets, which is a function of α_b+α_λ, equal to 2 with our assumed power-law indices[We note that at z≳5, where the mean free path is short relative to the Hubble distance (the “absorption-limited” regime), we can write J_ν∝ϵ_νλ_ν where ϵ_ν is the average ionizing emissivity <cit.>. The implied scaling of the emissivity is ϵ_ν∝ν^-(α_b+α_λ), suggesting that our choice of α_b+α_λ=2 is reasonable (e.g. ). Even harder ionizing spectra are also plausible for young, metal-poor stellar populations (e.g. ) which would increase our estimates of C.]. In practice, as described below in  <ref>, we will convert observational constraints on Γ_ HI to J_ HI, which introduces an additional factor of roughly (α_b+3). Lower (higher) values of C would be derived if the ionizing spectra of the sources is softer (harder), or if the mean free path is a stronger (weaker) function of frequency, corresponding to a steeper (shallower) H1 column density distribution. As measurements of the ionizing background require non-zero transmission through the highly sensitive Lyα forest <cit.>, application of this method will only become possible at the very end of the reionization process. Thus the clumping factor during the majority of reionization cannot be constrained directly. At these earlier times, the inside-out nature of reionization should lead to higher densities in the ionized regions, and thus a higher recombination rate relative to the mean IGM (e.g. , see also ). We ignore this effect for simplicity in our analytic calculations, but note that this would likely increase the effective clumping factor significantly at low ionized fractions as in <cit.>. § ESTIMATING THE CLUMPING FACTOR FROM IGM OBSERVATIONS In this section, we use observational properties of the IGM to constrain the clumping factor as defined in the previous section. Here we stress that our clumping factor is an effective quantity corresponding to the entire volume of the IGM, and not a “local” clumping factor that can be applied to the density field in simulations a posteriori (see, e.g., ). In particular, our definition of the clumping factor is specifically designed for use in analytic reionization calculations like equation (<ref>) that consider the IGM as a whole <cit.>. §.§ Observations of Γ_ HI and λ_ mfp Estimating the clumping factor using the method above requires an estimate of the ionizing background intensity as well as the mean free path of ionizing photons. While most studies of the clumping factor have focused on its behavior during the reionization epoch, our formalism applies to any cosmic time where both of these quantities have been measured. For the mean free path, at z<5 we use the power-law fit to the direct measurements of quasar spectra stacked beyond the Lyman limit from <cit.> and references therein. At z>5, we use the measurements from <cit.>, who (following ) use a similar stacking method to <cit.> but additionally account for the bias due to the intense local ionizing flux from the background quasars. 
At all redshifts we assume a power-law frequency dependence of the mean free path of λ_ν∝ν^α_λ, with α_λ=1. This power-law dependence corresponds to a neutral hydrogen column density distribution function f(N) proportional to N^4/3, somewhat flatter than the distribution of lower density Lyα forest absorbers and consistent with measurements (and models) at z=2–6 (e.g. ). For the ionizing background, at z<5 we adopt the measurements of Γ_ HI from <cit.>, who calibrated a suite of hydrodynamical simulations to the mean transmitted flux of Lyα derived from a stacking analysis of SDSS quasars <cit.>, and comprehensively accounted for various sources of systematic uncertainty. At z>5, we use the constraints on Γ_ HI from <cit.>, who fit a fluctuating ionizing background model <cit.> to the Lyα forest opacity distributions from <cit.>. To determine the specific intensity at the hydrogen-ionizing edge J_ HI required by equation (<ref>), we assume that the spectrum of the hydrogen-ionizing background (i.e. ν_ HI < ν < 4 ν_ HI) is described by a power-law shape J_ν∝ν^-α_b with α_b=1.0. This spectral shape is consistent with an intrinsic ionizing emissivity proportional to ν^-2 (cf. ) filtered through the absorber distribution giving rise to λ_ν∝ν^1 as assumed above. We then determine the corresponding J_ HI by requiring that the observed Γ_ HI is reproduced by Γ_ HI = 4π∫_ν_ HI^4 ν_ HIJ_ HI (ν/ν_ HI)^-α_b/hνσ_ HI(ν) dν, where σ_ HI(ν) is the hydrogen photoionization cross-section from <cit.>. We note that the mean free path measurements from <cit.> are derived assuming specific values of the hydrogen photoionization rate (and its uncertainty) from <cit.>. To ensure self-consistency, we recompute the mean free path and corresponding uncertainties using the <cit.> constraints on Γ_ HI(z), but note that this does not make a substantial difference to our results. §.§ Estimates of the effective clumping factor With the ionizing background strength and mean free path in hand, we can now proceed to compute the clumping factor following Section <ref>. Specifically, we evaluate equation (<ref>) using the mean free path measured by <cit.> (i.e. the power-law fit from z=2–5) and <cit.>, and the ionizing background intensity implied by the photoionization rate measurements of <cit.> and <cit.>, with assumed frequency dependencies λ_ν∝ν and J_ν∝ν^-1. We show the resulting estimates of C from z=2–6 in Figure <ref>. The error bars at z<5 include only the uncertainty in Γ_ HI from <cit.>, while at z>5 they include both the uncertainty in Γ_ HI from <cit.> and in λ_ mfp from <cit.>. We find a remarkably constant value of C∼10–15 across the entire redshift range, with an average value of C≈12 at z=5–6, well above the simulation-calibrated prescriptions often used in the literature (C∼3). While the short mean free path at z=6 suggests a rather high value C≈17, the uncertainty is large enough to be consistent with all lower redshifts. §.§ Comparison to simulations In Figure <ref>, we compare our estimates of C to various determinations of C-like quantities in the literature. The solid curves from <cit.>, <cit.>, and <cit.> represent the basis behind the commonly-assumed values of C=2–3, with the ranges of estimates from more recent simulations by <cit.>, <cit.>, and <cit.> shown as shaded regions with moderately higher values up to C∼5 at z=5–6. 
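As a rough consistency check that is easy to reproduce, the short sketch below evaluates the clumping factor along the lines of the preceding two subsections: it converts a photoionization rate into J_HI under the assumed J_ν ∝ ν^-1 spectrum and then performs the frequency integral above with λ_ν ∝ ν. The z ≈ 6 inputs (Γ_HI ≈ 1.5 × 10^-13 s^-1, λ_mfp ≈ 0.75 proper Mpc) are illustrative round numbers rather than the measured posteriors, and the Verner et al. (1996) cross-section is replaced by a simple ν^-3 power law, so the sketch is a back-of-the-envelope stand-in for the full calculation rather than a reproduction of it.

```python
import numpy as np
from scipy.integrate import quad

# Constants (cgs)
h_planck = 6.626e-27      # erg s
sigma_0  = 6.35e-18       # cm^2, H I photoionization cross-section at the Lyman limit
alpha_B  = 2.59e-13       # cm^3 s^-1, case B recombination coefficient at 10^4 K
chi_e    = 1.08           # free electrons per H II from singly ionized helium
Mpc      = 3.086e24       # cm
n_H0     = 1.9e-7         # cm^-3, mean hydrogen density at z = 0 for the adopted cosmology

# Illustrative round-number inputs at z ~ 6 (not the measured posteriors)
z         = 6.0
Gamma_HI  = 1.5e-13       # s^-1
lam_mfp   = 0.75 * Mpc    # proper cm
alpha_b   = 1.0           # J_nu ~ nu^-alpha_b over 1 <= nu/nu_HI <= 4
alpha_lam = 1.0           # lambda_nu ~ nu^+alpha_lam

n_H = n_H0 * (1.0 + z)**3                      # proper mean hydrogen density

def sigma_HI(x):
    # Power-law stand-in for the Verner et al. cross-section at nu = x * nu_HI
    return sigma_0 * x**-3

# Invert Gamma_HI = 4 pi int_1^4 J_HI x^-alpha_b sigma_HI(x) / (h x) dx for J_HI
I_gamma, _ = quad(lambda x: x**(-alpha_b) * sigma_HI(x) / x, 1.0, 4.0)
J_HI = Gamma_HI * h_planck / (4.0 * np.pi * I_gamma)

# C = [ 4 pi int_1^4 J_nu / (h nu lambda_nu) dnu ] / (alpha_B chi_e n_H^2)
I_C, _ = quad(lambda x: x**(-alpha_b) / (x * x**alpha_lam), 1.0, 4.0)
C = 4.0 * np.pi * J_HI * I_C / (h_planck * lam_mfp * alpha_B * chi_e * n_H**2)

print(f"J_HI ~ {J_HI:.2e} erg s^-1 cm^-2 Hz^-1 sr^-1,  C ~ {C:.1f}")
```

For these inputs the sketch returns C ≈ 16, similar in magnitude to the z ≈ 6 value quoted above; the results presented in this work instead use the full measurements, their uncertainties, and the exact cross-section.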
It is worth noting that all of these works compute the clumping factor in different ways, but in principle they are all designed to fulfill the same role: to quantify the effect of the sink term on the progression of reionization in equation (<ref>). The dashed curves in Figure <ref> show clumping factors measured from simulations without any contribution from photoionization heating, with <cit.> employing small-volume adiabatic hydrodynamical simulations and <cit.> using a combination of small and large N-body simulations. These curves can be considered to be theoretical “maximum” values for C, and our estimates lie comfortably below them. Why, then, do we recover such a large value for C compared to the commonly-accepted value from simulations? In the absence of an unforeseen source of bias in our approach, it is possible that the C measured in simulations is not directly comparable to our value of C due to a difference in definition. Simulations are careful to compute C without including dense gas inside of halos, to avoid double-counting this gas which could be responsible for the galactic escape fraction. This is typically implemented as a density threshold, e.g. C_100 from <cit.> is measured from gas with overdensity Δ < 100. Other works include cuts on the temperature and ionization state of the gas (e.g. ). But this dense gas masking ignores the crucial possibility that dense halo gas can also be illuminated from the outside by the UV background, and potentially make up a substantial fraction of the opacity to ionizing photons streaming through the IGM. The mean free path measured in stacked quasar spectra (e.g. ) or from the column density distribution of discrete H1 absorbers <cit.>, includes encounters of ionizing photons with all gas without any regard for whether it is associated with a galaxy. Recent simulations by <cit.> which take into account the effect of IGM small-scale (∼ kpc) structure, and its relaxation dynamics after reionization heating, have found that applying a clumping factor C=5 to their coarse 1 Mpc-resolution simulation provides a decent match to their more sophisticated sink modeling. On the surface, this value is substantially lower than our estimates, but recall that our C is defined globally – this distinction is important, because the locally-defined C can be much smaller than the global one <cit.>. In fact, the scale-dependence of the clumping factor found by <cit.> suggests that the global C is ∼2 times larger than the local C on ∼1 Mpc scales, implying that our estimate for the (global) C is reasonably consistent with the model from <cit.>. § IMPLICATIONS FOR REIONIZATION Fundamentally, the purpose of estimating this particular definition of the clumping factor is to explore what implications the short mean free path of ionizing photons at z=6 has for the reionization history, and particularly, for the requirements on the number of ionizing photons that must have been emitted to complete the process. The semi-numerical simulations in examined this in the context of the way a short mean free path inhibits the ionization of the last remaining voids; here, instead, we consider solely the effect of the additional recombinations inside of ionized gas from the large clumping factor implied by IGM observations at z=5–6 as shown above. 
While analytically convenient, this choice comes at the expense of neglecting the effect of the spatial offset between ionizing sources and the last patches of neutral gas at the end of reionization (; ); we leave a closer look at that effect to future work. Nevertheless, in this section we will consider the impact of our high value of C=12 on reionization calculations involving equation (<ref>). We must first consider our model for the sources, i.e. ṅ_ ion (equation <ref>). We compute ρ_ UV(z) from the UV luminosity function parameterizations by <cit.> and <cit.>, extrapolating up to z=15, and integrating down to a fiducial limiting UV magnitude of M_ UV=-13. The redshift evolution of the luminosity function from <cit.> includes a strongly evolving suppression at the faint end leading to a rapid decline in the UV luminosity density outside of their fitting range at z>9. We thus extrapolate to higher redshift with a double-power-law fit to the evolution at 5<z<9, although we note this makes little difference to our results. We can now explore the consequences for the reionization history, evaluating equation (<ref>) across cosmic time. We adopt clumping factors of C=3, representing the traditional approach, and C=12, as determined in this work. We then tune the product of the ionizing escape fraction and ionizing efficiency f_ escξ_ ion in each case to reach a neutral fraction of 10% at z=5.9, consistent with the Lyα forest dark pixel constraint from <cit.> and with the model from , leading to a late end to reionization consistent with the most recent constraints from the Lyα forest <cit.>. The resulting reionization histories are shown in the top panel of Figure <ref>. At this fixed endpoint of reionization, and with our particular models for ṅ_ ion(z), increasing the clumping factor from C=3 to C=12 has a negligible effect on the reionization history at earlier times. In the lower panel of Figure <ref>, we show the corresponding integrated number of ionizing photons per baryon. Assuming C=3 requires 1.5 photons per baryon to complete reionization, while with C=12 the number doubles to 3.0 photons per baryon. This elevated photon budget is nevertheless still slightly below the nominal range from , but it is very similar to the dynamic sink radiative transfer models of <cit.>. This number is also consistent with the total number of recombinations (i.e. the number of emitted photons per baryon minus one) at the end of reionization in the CROC simulations <cit.>. Next, we examine the commonly-used criterion for reionization to remain complete, defined by setting Q=1 and dQ/dt=0 in equation (<ref>): ṅ_ ion, crit≥ C ⟨ n_ H⟩^2 α_ HII. We note that this expression can be re-stated as a criterion that reionization progresses at a given value of the ionized fraction (e.g. ), ṅ_ ion,crit(Q) ≥ Q C ⟨ n_ H⟩^2 α_ HII = Q×ṅ_ ion,crit. i.e. for the ionized fraction to increase with time, the number of new ionizations must be larger than the number of recombinations within the ionized phase of the IGM. In Figure <ref>, we compare the critical values of ionizing photon emissivity for ionized fractions of 25%–100% at z=6–8 with the corresponding emissivity calculated from the UV LFs versus the UV magnitude integration limit. We assume a fiducial logf_ escξ_ ion = 24.8, corresponding to e.g. 
a model with logξ_ ion=25.8, consistent with a recent determination for UV-faint galaxies with JWST <cit.>, and f_ esc=0.1, consistent with direct measurements of Lyman continuum photons from Lyman-break galaxies at z∼3 <cit.>. Under this assumption, galaxies at z=6 can maintain reionization provided that ionizing photons escape from galaxies as faint as M_ UV∼-14, while at z=7 and z=8 this would only be sufficient to continue reionizing the universe at ionized fractions of 50% and 25%, respectively. § CONCLUSION In this work, we have explored the implications of the short mean free path of ionizing photons at z≈6 <cit.> for the recombination rate in the intergalactic medium as a whole, quantified by the clumping factor C. We find a characteristic value of C≈12 at z=5–6 that is well in excess of the C=3 assumption commonly made in the literature based on cosmological radiation-hydrodynamics simulations. We attribute this difference to the way in which simulation analyses explicitly neglect dense gas within galaxy halos. While such an exclusion appears necessary to avoid double-counting the gas responsible for the galactic escape fraction, it ignores the fact that this dense gas can still absorb external photons streaming through the IGM, and thus play an important role in determining the total budget of recombinations. Compared to the typical assumption of C=3, we find that late-ending reionization histories with C=12 require roughly twice as many ionizing photons to complete the process at z≲6. However, recent observations of the ionizing efficiency of z>6 galaxies from JWST and scaling relations for the ionizing escape fraction from low-redshift Lyman continuum leakers imply a tremendous excess in the ionizing photon budget <cit.>. Due to the difference in our assumed recombination coefficient, the recombination rate in our fiducial model with C=12 is comparable to that of the C=20 model explored by <cit.> in which reionization still ends quite early at z∼7.5. We note also that the clumping factor may not provide a complete picture of the number of photons required to complete the reionization process. As shown in , the fact that the ionizing sources and neutral islands are physically offset from one another implies a large degree of attenuation, requiring ∼6 photons per baryon to reach a neutral fraction of x_ HI∼10% at z∼6; about a factor of two higher than our fiducial model here with C=12. It is possible that both a large recombination rate and a consideration of the physical offset are required to reconcile the copious ionizing photon production of the first galaxies with current constraints on the reionization history. The manuscript was completed following productive discussions with Girish Kulkarni, Laura Keating, Anson D'Aloisio, and Christopher Cain at the NORDITA workshop programme “Cosmic Dawn at High Latitudes”. SEIB is supported by the Deutsche Forschungsgemeinschaft (DFG) under Emmy Noether grant number BO 5771/1-1. aasjournal natexlab#1#1 [Atek et al.(2024)Atek, Labbé, Furtak, Chemerynska, Fujimoto, Setton, Miller, Oesch, Bezanson, Price, Dayal, Zitrin, Kokorev, Weaver, Brammer, Dokkum, Williams, Cutler, Feldmann, Fudamoto, Greene, Leja, Maseda, Muzzin, Pan, Papovich, Nelson, Nanayakkara, Stark, Stefanon, Suess, Wang, & Whitaker]Atek24 Atek, H., Labbé, I., Furtak, L. J., et al. 2024, , 626, 975, 10.1038/s41586-024-07043-6 [Becker & Bolton(2013)]BB13 Becker, G. D., & Bolton, J. S. 
2013, , 436, 1023, 10.1093/mnras/stt1610 [Becker et al.(2021)Becker, D'Aloisio, Christenson, Zhu, Worseck, & Bolton]Becker21 Becker, G. D., D'Aloisio, A., Christenson, H. M., et al. 2021, , 508, 1853, 10.1093/mnras/stab2696 [Becker et al.(2013)Becker, Hewett, Worseck, & Prochaska]Becker13 Becker, G. D., Hewett, P. C., Worseck, G., & Prochaska, J. X. 2013, , 430, 2067, 10.1093/mnras/stt031 [Bianco et al.(2021)Bianco, Iliev, Ahn, Giri, Mao, Park, & Shapiro]Bianco21 Bianco, M., Iliev, I. T., Ahn, K., et al. 2021, , 504, 2443, 10.1093/mnras/stab787 [Bosman(2021)]Bosman21MFP Bosman, S. E. I. 2021, arXiv e-prints, arXiv:2108.12446. 2108.12446 [Bosman et al.(2022)Bosman, Davies, Becker, Keating, Davies, Zhu, Eilers, D'Odorico, Bian, Bischetti, Cristiani, Fan, Farina, Haehnelt, Hennawi, Kulkarni, Mesinger, Meyer, Onoue, Pallottini, Qin, Ryan-Weber, Schindler, Walter, Wang, & Yang]Bosman22 Bosman, S. E. I., Davies, F. B., Becker, G. D., et al. 2022, , 514, 55, 10.1093/mnras/stac1046 [Bouwens et al.(2021)Bouwens, Oesch, Stefanon, Illingworth, Labbe, Reddy, Atek, Montes, Naidu, Nanayakkara, Nelson, & Wilkins]Bouwens21 Bouwens, R. J., Oesch, P. A., Stefanon, M., et al. 2021, arXiv e-prints, arXiv:2102.07775. 2102.07775 [Cain et al.(2021)Cain, D'Aloisio, Gangolli, & Becker]Cain21 Cain, C., D'Aloisio, A., Gangolli, N., & Becker, G. D. 2021, , 917, L37, 10.3847/2041-8213/ac1ace [Cain et al.(2023)Cain, D'Aloisio, Gangolli, & McQuinn]Cain23 Cain, C., D'Aloisio, A., Gangolli, N., & McQuinn, M. 2023, , 522, 2047, 10.1093/mnras/stad1057 [Chen et al.(2020)Chen, Doussot, Trac, & Cen]Chen20 Chen, N., Doussot, A., Trac, H., & Cen, R. 2020, , 905, 132, 10.3847/1538-4357/abc890 [Cullen et al.(2024)Cullen, McLeod, McLure, Dunlop, Donnan, Carnall, Keating, Magee, Arellano-Cordova, Bowler, Begley, Flury, Hamadouche, & Stanton]Cullen24 Cullen, F., McLeod, D. J., McLure, R. J., et al. 2024, , 531, 997, 10.1093/mnras/stae1211 [D'Aloisio et al.(2019)D'Aloisio, McQuinn, Maupin, Davies, Trac, Fuller, & Upton Sanderbeck]D'Aloisio19 D'Aloisio, A., McQuinn, M., Maupin, O., et al. 2019, , 874, 154, 10.3847/1538-4357/ab0d83 [Davies et al.(2021)Davies, Bosman, Furlanetto, Becker, & D'Aloisio]Davies21 Davies, F. B., Bosman, S. E. I., Furlanetto, S. R., Becker, G. D., & D'Aloisio, A. 2021, , 918, L35, 10.3847/2041-8213/ac1ffb [Davies & Furlanetto(2016)]DF16 Davies, F. B., & Furlanetto, S. R. 2016, , 460, 1328, 10.1093/mnras/stw931 [Davies & Furlanetto(2022)]DF22 —. 2022, , 514, 1302, 10.1093/mnras/stac1005 [Davies et al.(2024)Davies, Bosman, Gaikwad, Nasir, Hennawi, Becker, Haehnelt, D'Odorico, Bischetti, Eilers, Keating, Kulkarni, Lai, Mazzucchelli, Qin, Satyavolu, Wang, Yang, & Zhu]Davies24 Davies, F. B., Bosman, S. E. I., Gaikwad, P., et al. 2024, , 965, 134, 10.3847/1538-4357/ad1d5d [Emberson et al.(2013)Emberson, Thomas, & Alvarez]Emberson13 Emberson, J. D., Thomas, R. M., & Alvarez, M. A. 2013, , 763, 146, 10.1088/0004-637X/763/2/146 [Erkal(2015)]Erkal15 Erkal, D. 2015, , 451, 904, 10.1093/mnras/stv980 [Finkelstein & Bagley(2022)]FB22 Finkelstein, S. L., & Bagley, M. B. 2022, , 938, 25, 10.3847/1538-4357/ac89eb [Finlator et al.(2012)Finlator, Oh, Özel, & Davé]Finlator12 Finlator, K., Oh, S. P., Özel, F., & Davé, R. 2012, , 427, 2464, 10.1111/j.1365-2966.2012.22114.x [Gaikwad et al.(2023)Gaikwad, Haehnelt, Davies, Bosman, Molaro, Kulkarni, D'Odorico, Becker, Davies, Nasir, Bolton, Keating, Iršič, Puchwein, Zhu, Asthana, Yang, Lai, & Eilers]Gaikwad23 Gaikwad, P., Haehnelt, M. G., Davies, F. B., et al. 
2023, arXiv e-prints, arXiv:2304.02038, 10.48550/arXiv.2304.02038 [Gnedin(2022)]Gnedin22 Gnedin, N. Y. 2022, , 937, 17, 10.3847/1538-4357/ac8d0a [Gnedin(2024)]Gnedin24 —. 2024, , 963, 150, 10.3847/1538-4357/ad298e [Gnedin & Ostriker(1997)]GO97 Gnedin, N. Y., & Ostriker, J. P. 1997, , 486, 581, 10.1086/304548 [Gunn & Peterson(1965)]GP65 Gunn, J. E., & Peterson, B. A. 1965, , 142, 1633, 10.1086/148444 [Haardt & Madau(2012)]HM12 Haardt, F., & Madau, P. 2012, , 746, 125, 10.1088/0004-637X/746/2/125 [Iliev et al.(2007)Iliev, Mellema, Shapiro, & Pen]Iliev07 Iliev, I. T., Mellema, G., Shapiro, P. R., & Pen, U.-L. 2007, , 376, 534, 10.1111/j.1365-2966.2007.11482.x [Iliev et al.(2005)Iliev, Scannapieco, & Shapiro]Iliev05 Iliev, I. T., Scannapieco, E., & Shapiro, P. R. 2005, , 624, 491, 10.1086/429083 [Izotov et al.(2016)Izotov, Schaerer, Thuan, Worseck, Guseva, Orlitová, & Verhamme]Izotov16 Izotov, Y. I., Schaerer, D., Thuan, T. X., et al. 2016, , 461, 3683, 10.1093/mnras/stw1205 [Ji et al.(2020)Ji, Giavalisco, Vanzella, Siana, Pentericci, Jaskot, Liu, Nonino, Ferguson, Castellano, Mannucci, Schaerer, Fynbo, Papovich, Carnall, Amorin, Simons, Hathi, Cullen, & McLeod]Ji20 Ji, Z., Giavalisco, M., Vanzella, E., et al. 2020, , 888, 109, 10.3847/1538-4357/ab5fdc [Kannan et al.(2022)Kannan, Garaldi, Smith, Pakmor, Springel, Vogelsberger, & Hernquist]Kannan22 Kannan, R., Garaldi, E., Smith, A., et al. 2022, , 511, 4005, 10.1093/mnras/stab3710 [Kaurov & Gnedin(2015)]KG15 Kaurov, A. A., & Gnedin, N. Y. 2015, , 810, 154, 10.1088/0004-637X/810/2/154 [Madau(2017)]Madau17 Madau, P. 2017, , 851, 50, 10.3847/1538-4357/aa9715 [Madau et al.(1999)Madau, Haardt, & Rees]Madau99 Madau, P., Haardt, F., & Rees, M. J. 1999, , 514, 648, 10.1086/306975 [Mao et al.(2020)Mao, Koda, Shapiro, Iliev, Mellema, Park, Ahn, & Bianco]Mao20 Mao, Y., Koda, J., Shapiro, P. R., et al. 2020, , 491, 1600, 10.1093/mnras/stz2986 [McGreer et al.(2015)McGreer, Mesinger, & D'Odorico]McGreer15 McGreer, I. D., Mesinger, A., & D'Odorico, V. 2015, , 447, 499, 10.1093/mnras/stu2449 [McQuinn et al.(2011)McQuinn, Oh, & Faucher-Giguère]McQuinn11 McQuinn, M., Oh, S. P., & Faucher-Giguère, C.-A. 2011, , 743, 82, 10.1088/0004-637X/743/1/82 [Meiksin & White(2003)]MW03 Meiksin, A., & White, M. 2003, , 342, 1205, 10.1046/j.1365-8711.2003.06624.x [Muñoz et al.(2024)Muñoz, Mirocha, Chisholm, Furlanetto, & Mason]Munoz24 Muñoz, J. B., Mirocha, J., Chisholm, J., Furlanetto, S. R., & Mason, C. 2024, arXiv e-prints, arXiv:2404.07250, 10.48550/arXiv.2404.07250 [O'Meara et al.(2013)O'Meara, Prochaska, Worseck, Chen, & Madau]O'Meara13 O'Meara, J. M., Prochaska, J. X., Worseck, G., Chen, H.-W., & Madau, P. 2013, , 765, 137, 10.1088/0004-637X/765/2/137 [Pahl et al.(2021)Pahl, Shapley, Steidel, Chen, & Reddy]Pahl21 Pahl, A. J., Shapley, A., Steidel, C. C., Chen, Y., & Reddy, N. A. 2021, arXiv e-prints, arXiv:2104.02081. 2104.02081 [Pawlik et al.(2009)Pawlik, Schaye, & van Scherpenzeel]Pawlik09 Pawlik, A. H., Schaye, J., & van Scherpenzeel, E. 
2009, , 394, 1812, 10.1111/j.1365-2966.2009.14486.x [Planck Collaboration et al.(2020)Planck Collaboration, Aghanim, Akrami, Ashdown, Aumont, Baccigalupi, Ballardini, Banday, Barreiro, Bartolo, Basak, Battye, Benabed, Bernard, Bersanelli, Bielewicz, Bock, Bond, Borrill, Bouchet, Boulanger, Bucher, Burigana, Butler, Calabrese, Cardoso, Carron, Challinor, Chiang, Chluba, Colombo, Combet, Contreras, Crill, Cuttaia, de Bernardis, de Zotti, Delabrouille, Delouis, Di Valentino, Diego, Doré, Douspis, Ducout, Dupac, Dusini, Efstathiou, Elsner, Enßlin, Eriksen, Fantaye, Farhang, Fergusson, Fernandez-Cobos, Finelli, Forastieri, Frailis, Fraisse, Franceschi, Frolov, Galeotta, Galli, Ganga, Génova-Santos, Gerbino, Ghosh, González-Nuevo, Górski, Gratton, Gruppuso, Gudmundsson, Hamann, Handley, Hansen, Herranz, Hildebrandt, Hivon, Huang, Jaffe, Jones, Karakci, Keihänen, Keskitalo, Kiiveri, Kim, Kisner, Knox, Krachmalnicoff, Kunz, Kurki-Suonio, Lagache, Lamarre, Lasenby, Lattanzi, Lawrence, Le Jeune, Lemos, Lesgourgues, Levrier, Lewis, Liguori, Lilje, Lilley, Lindholm, López-Caniego, Lubin, Ma, Macías-Pérez, Maggio, Maino, Mandolesi, Mangilli, Marcos-Caballero, Maris, Martin, Martinelli, Martínez-González, Matarrese, Mauri, McEwen, Meinhold, Melchiorri, Mennella, Migliaccio, Millea, Mitra, Miville-Deschênes, Molinari, Montier, Morgante, Moss, Natoli, Nørgaard-Nielsen, Pagano, Paoletti, Partridge, Patanchon, Peiris, Perrotta, Pettorino, Piacentini, Polastri, Polenta, Puget, Rachen, Reinecke, Remazeilles, Renzi, Rocha, Rosset, Roudier, Rubiño-Martín, Ruiz-Granados, Salvati, Sandri, Savelainen, Scott, Shellard, Sirignano, Sirri, Spencer, Sunyaev, Suur-Uski, Tauber, Tavagnacco, Tenti, Toffolatti, Tomasi, Trombetti, Valenziano, Valiviita, Van Tent, Vibert, Vielva, Villa, Vittorio, Wandelt, Wehus, White, White, Zacchei, & Zonca]Planck18 Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, , 641, A6, 10.1051/0004-6361/201833910 [Prochaska et al.(2014)Prochaska, Madau, O'Meara, & Fumagalli]Prochaska14 Prochaska, J. X., Madau, P., O'Meara, J. M., & Fumagalli, M. 2014, , 438, 476, 10.1093/mnras/stt2218 [Prochaska et al.(2009)Prochaska, Worseck, & O'Meara]Prochaska09 Prochaska, J. X., Worseck, G., & O'Meara, J. M. 2009, , 705, L113, 10.1088/0004-637X/705/2/L113 [Raičević & Theuns(2011)]Raicevic11 Raičević, M., & Theuns, T. 2011, , 412, L16, 10.1111/j.1745-3933.2010.00993.x [Rudie et al.(2013)Rudie, Steidel, Shapley, & Pettini]Rudie13 Rudie, G. C., Steidel, C. C., Shapley, A. E., & Pettini, M. 2013, , 769, 146, 10.1088/0004-637X/769/2/146 [Satyavolu et al.(2023)Satyavolu, Kulkarni, Keating, & Haehnelt]Satyavolu23 Satyavolu, S., Kulkarni, G., Keating, L. C., & Haehnelt, M. G. 2023, , 521, 3108, 10.1093/mnras/stad729 [Shull et al.(2012)Shull, Harness, Trenti, & Smith]Shull12 Shull, J. M., Harness, A., Trenti, M., & Smith, B. D. 2012, , 747, 100, 10.1088/0004-637X/747/2/100 [Simmonds et al.(2023)Simmonds, Tacchella, Maseda, Williams, Baker, Witten, Johnson, Robertson, Saxena, Sun, Witstok, Bhatawdekar, Boyett, Bunker, Charlot, Curtis-Lake, Egami, Eisenstein, Ji, Maiolino, Sandles, Smit, Übler, & Willott]Simmonds23 Simmonds, C., Tacchella, S., Maseda, M., et al. 2023, , 523, 5468, 10.1093/mnras/stad1749 [Simmonds et al.(2024)Simmonds, Tacchella, Hainline, Johnson, McClymont, Robertson, Saxena, Sun, Witten, Baker, Bhatawdekar, Boyett, Bunker, Charlot, Curtis-Lake, Egami, Eisenstein, Hausen, Maiolino, Maseda, Scholtz, Williams, Willott, & Witstok]Simmonds24 Simmonds, C., Tacchella, S., Hainline, K., et al. 
2024, , 527, 6139, 10.1093/mnras/stad3605 [So et al.(2014)So, Norman, Reynolds, & Wise]So14 So, G. C., Norman, M. L., Reynolds, D. R., & Wise, J. H. 2014, , 789, 149, 10.1088/0004-637X/789/2/149 [Songaila & Cowie(2010)]SC10 Songaila, A., & Cowie, L. L. 2010, , 721, 1448, 10.1088/0004-637X/721/2/1448 [Spina et al.(2024)Spina, Bosman, Davies, Gaikwad, & Zhu]Spina24 Spina, B., Bosman, S. E. I., Davies, F. B., Gaikwad, P., & Zhu, Y. 2024, arXiv e-prints, arXiv:2405.12273, 10.48550/arXiv.2405.12273 [Vanzella et al.(2018)Vanzella, Nonino, Cupani, Castellano, Sani, Mignoli, Calura, Meneghetti, Gilli, Comastri, Mercurio, Caminha, Caputi, Rosati, Grillo, Cristiani, Balestra, Fontana, & Giavalisco]Vanzella18 Vanzella, E., Nonino, M., Cupani, G., et al. 2018, , 476, L15, 10.1093/mnrasl/sly023 [Verner et al.(1996)Verner, Ferland, Korista, & Yakovlev]Verner96 Verner, D. A., Ferland, G. J., Korista, K. T., & Yakovlev, D. G. 1996, ApJ, 465, 487 [Worseck et al.(2019)Worseck, Davies, Hennawi, & Prochaska]Worseck19 Worseck, G., Davies, F. B., Hennawi, J. F., & Prochaska, J. X. 2019, , 875, 111, 10.3847/1538-4357/ab0fa1 [Worseck et al.(2014)Worseck, Prochaska, O'Meara, Becker, Ellison, Lopez, Meiksin, Ménard, Murphy, & Fumagalli]Worseck14 Worseck, G., Prochaska, J. X., O'Meara, J. M., et al. 2014, , 445, 1745, 10.1093/mnras/stu1827 [Wu et al.(2021)Wu, McQuinn, Eisenstein, & Iršič]Wu21b Wu, X., McQuinn, M., Eisenstein, D., & Iršič, V. 2021, , 508, 2784, 10.1093/mnras/stab2815 [Zhu et al.(2021)Zhu, Becker, Bosman, Keating, Christenson, Bañados, Bian, Davies, D'Odorico, Eilers, Fan, Haehnelt, Kulkarni, Pallottini, Qin, Wang, & Yang]Zhu21 Zhu, Y., Becker, G. D., Bosman, S. E. I., et al. 2021, , 923, 223, 10.3847/1538-4357/ac26c2 [Zhu et al.(2023)Zhu, Becker, Christenson, D'Aloisio, Bosman, Bakx, D'Odorico, Bischetti, Cain, Davies, Davies, Eilers, Fan, Gaikwad, Haehnelt, Keating, Kulkarni, Lai, Ma, Mesinger, Qin, Satyavolu, Takeuchi, Umehata, & Yang]Zhu23 Zhu, Y., Becker, G. D., Christenson, H. M., et al. 2023, arXiv e-prints, arXiv:2308.04614, 10.48550/arXiv.2308.04614
http://arxiv.org/abs/2406.18670v1
20240626181407
Generalized Cuts and Grothendieck Covers: a Primal-Dual Approximation Framework Extending the Goemans--Williamson Algorithm
[ "Nathan Benedetto Proença", "Marcel K. de Carli Silva", "Cristiane M. Sato", "Levent Tunçel" ]
cs.DS
[ "cs.DS", "cs.DM", "math.OC" ]
§ ABSTRACT We provide a primal-dual framework for randomized approximation algorithms utilizing semidefinite programming (SDP) relaxations. Our framework pairs a continuum of APX-complete problems including MaxCut, Max2Sat, MaxDicut, and more generally, Max-Boolean Constraint Satisfaction and MaxQ (maximization of a positive semidefinite quadratic form over the hypercube) with new APX-complete problems which are stated as convex optimization problems with exponentially many variables. These new dual counterparts, based on what we call Grothendieck covers, range from fractional cut covering problems (for MaxCut) to tensor sign covering problems (for MaxQ). For each of these problem pairs, our framework transforms the randomized approximation algorithms with the best known approximation factors for the primal problems to randomized approximation algorithms for their dual counterparts with reciprocal approximation factors which are tight with respect to the Unique Games Conjecture. For each APX-complete pair, our algorithms solve a single SDP relaxation and generate feasible solutions for both problems which also provide approximate optimality certificates for each other. Our work utilizes techniques from areas of randomized approximation algorithms, convex optimization, spectral sparsification, as well as Chernoff-type concentration results for random matrices. [ Alistair Brewin^1, Liam A P Gallagher^1, Jon D Pritchett^1, Horatio Q X Wong^1, Robert M Potvliege^1, Stewart J Clark^1, Matthew P A Jones^1 July 1, 2024 ================================================================================================================================================ empty § INTRODUCTION Some of the most impressive successes for randomized approximation algorithms, utilizing semidefinite programming relaxations, have been on problems such as MaxCut <cit.>, Max2Sat <cit.>, and MaxDicut <cit.>. We define APX-complete duals for such problems, which involve what we call Grothendieck covers. Then, we design a primal-dual framework of randomized approximation algorithms for a wide range of problems, including maximum Boolean constraint satisfaction problems (CSPs) paired with their APX-complete duals, which we call Boolean CSP covering problems. Our focus is on 2-CSPs, where each constraint has at most 2 literals; this includes the MaxCut, the Max2Sat, and the MaxDicut problems. For each of these APX-complete problems, our framework transforms the randomized approximation algorithms for the primal problem to randomized approximation algorithms for their (also APX-complete) duals while preserving the approximation factor. In particular, it allows us to recover the same best known approximation factors for the new problems. For example, we provide a randomized (1/0.874)-approximation algorithm for weighted fractional dicut covers. Although the new problems have exponentially many variables, the covers produced have small support and their approximation quality relies on symmetric Grothendieck inequalities; see <cit.>. Our algorithms and analyses utilize Chernoff-type concentration results and spectral sparsification. We further describe how each APX-complete instance can be paired with a dual APX-complete instance by solving a single semidefinite program, unlike in usual scenarios where the dual is built syntactically from the primal. 
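To make the pair of definitions above concrete, the following minimal sketch enumerates all dicuts of a toy digraph, computes the maximum dicut number by brute force, and solves the fractional dicut-covering LP; the digraph and the all-ones weights are arbitrary illustrative choices, and the enumeration over all 2^|V| subsets is of course exactly what the algorithm in the theorem above avoids.

```python
from itertools import chain, combinations
import numpy as np
from scipy.optimize import linprog

# Toy digraph on 4 vertices; w plays the role of the MaxDicut instance
# and z the role of the covering instance (both all-ones here, purely for illustration).
V = range(4)
A = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
w = np.ones(len(A))
z = np.ones(len(A))

def subsets(V):
    V = list(V)
    return chain.from_iterable(combinations(V, k) for k in range(len(V) + 1))

# Incidence vectors of delta^+(U) for every U subseteq V (exponential; fine for a toy).
cuts = []
for U in subsets(V):
    U = set(U)
    cuts.append([1.0 if (u in U and v not in U) else 0.0 for (u, v) in A])
M = np.array(cuts).T               # rows = arcs, columns = dicuts

mdc = max(w @ col for col in M.T)  # maximum dicut number of (D, w)

# Fractional dicut cover: min 1^T y  s.t.  M y >= z, y >= 0.
res = linprog(c=np.ones(M.shape[1]), A_ub=-M, b_ub=-z, bounds=(0, None), method="highs")
fdc = res.fun

print("max dicut =", mdc, "  fractional dicut cover =", round(fdc, 3))
print("product >= <w, z> ?", mdc * fdc >= w @ z - 1e-9)
```

Multiplying the two optimal values and comparing with the inner product of the weight vectors illustrates, in miniature, the weak-duality inequality that underlies the simultaneous certificates developed below.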
The SDP solutions yield, via a randomized sampling algorithm, primal and dual feasible solutions along with a simultaneous certificate of the approximation quality of both solutions. Note that such a certificate has two primal-dual pairs involved: one pair intractable, and the other pair tractable. E.g., [label=(*),] * MaxDicut and weighted fractional dicut cover, * the SDP relaxation of MaxDicut and its SDP dual. Let D = (V, A) be a digraph. For each U ⊆ V, define (U) as the set of arcs leaving U. A dicut is the set (U) for some U ⊆ V. For arc weights w ∈A, the maximum dicut number of (D, w) is (D,w) maxw(U)U∈V, where (U)∈0, 1^A is the incidence vector of (U) and V denotes the power set of V. The vector of all-ones is . The dual problem we consider is fractionally covering the arcs by dicuts: for arc weights z ∈A, the fractional dicut-covering number of (D, z) is (D, z) min[]y y ∈V, ∑_U ⊆ V y_U^(U)≥ z . BrakensiekHuangPotechinZwick2023 obtained a randomized -approximation for the maximum dicut problem, where ≈ 0.87446. Our framework yields the following result. Fix β∈ (0, ). There is a randomized polynomial-time algorithm that, given a digraph D = (V,A) and z ∈A, computes w ∈A and returns U ⊆ V and y ∈V with support size (y) = O(logV) such that ∑_S ⊆ V y_S (S)≥ z holds with high probability (w.h.p.), y≤1β(D,z), and w(U)≥β(D,w). Moreover, our algorithm returns a simultaneous certificate that each of U and y is within a factor of β of the respective optimal value. Our results also allow one to start from an instance (D,w) of the primal problem (i.e., MaxDicut) and the algorithm computes a dual instance (D,z) of the fractional dicut-covering problem, along with β-approximate solutions for both and a simultaneous certificate. Analogous claims also apply to <ref>. Let (,w) be an instance of the maximum 2-satisfiability problem, i.e., is a set of disjunctive 2-clauses on two variables from x_1,…, x_n, and w ∈ is a nonnegative weight vector. Thus, each element of has the form x_i x_j, x_i x_j, or x_ix_j. Let , ^n be the set of all possible assignments for (x_1,…, x_n). For an assignment a∈, define _(a) ∈0,1^ as the binary vector indexed by such that (_(a))_C = 1 if C is satisfied by a, and 0 otherwise. The goal is to find an assignment a∈ that maximizes the inner product w_(a). Denote (,w) maxw_(a)a∈. The dual problem we consider is fractionally covering the clauses by assignments: for weights z ∈, the fractional 2-sat covering number of (,z) is 2sat(,z) min[]y y ∈, ∑_a ∈ y_a _(a) ≥ z . LewinLivnatZwick2002 provide a randomized -approximation algorithm for Max2Sat, where ≈ 0.9401. Our framework yields the following result. Fix β∈ (0, ). There is a randomized polynomial-time algorithm that, given a set of disjunctive 2-clauses on n variables and z ∈, computes w ∈ and returns an assignment a∈ and y ∈ with (y) = O(log n) such that ∑_a ∈ y_a _(a) ≥ z holds w.h.p., y≤1β2sat(, z), and w_(a)≥β(,w). Moreover, our algorithm returns a simultaneous certificate that each of a and y is within a factor of β of the respective optimal value. Our results are general enough to include all forms of Boolean 2-CSPs. A Boolean 2-CSP is a CSP where the variables x_1,…,x_n take on Boolean values (i.e., or ) and each constraint involves only two variables. Formally, we specify a Boolean constraint satisfaction problem using a set of binary predicate templates, i.e., functions from , ^2 to ,. We assume throughout that the constant function is not in . 
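As a small illustration of the objects just introduced, the sketch below builds the satisfaction vectors of a toy Max2Sat instance and evaluates the maximum 2-satisfiability number by brute force; the clause list is an arbitrary example, and the resulting 0/1 matrix is precisely the constraint matrix of the corresponding fractional covering problem, which can be solved as in the dicut sketch above. The general Boolean 2-CSP formulation defined next subsumes this construction.

```python
from itertools import product
import numpy as np

# A toy Max2Sat instance on 3 variables: +k stands for x_k and -k for its negation.
clauses = [(+1, +2), (-1, +3), (-2, -3), (+2, +3)]
w = np.ones(len(clauses))
n = 3

assignments = list(product([False, True], repeat=n))        # all 2^n assignments

def lit(a, l):
    return a[abs(l) - 1] if l > 0 else not a[abs(l) - 1]

# sat[C, a] = 1 if assignment a satisfies clause C: the satisfaction vectors, column by column.
sat = np.array([[1.0 if lit(a, i) or lit(a, j) else 0.0 for a in assignments]
                for (i, j) in clauses])

msat = max(w @ sat[:, k] for k in range(len(assignments)))
print("max 2-sat value =", msat)   # here one assignment satisfies all four clauses, so the value is 4
# Feeding `sat` and weights z into an LP of the form  min 1^T y,  sat @ y >= z,  y >= 0
# (as in the dicut sketch) gives the fractional 2-sat covering number.
```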
Let (,w) be an instance of the (Boolean) maximum 2-CSP problem, i.e., each element of is a function that sends x ∈, ^n to f(x_i,x_j) for some f ∈ and i,j ∈ [n] 1,…,n, and w ∈. We refer to an element of as a -constraint or just as a constraint. The maximum -satisfiability number of (,w) is (, w) maxw_(a)a ∈. The dual problem is: for every z ∈, the fractional -constraint covering number of (,z) is (, z) min[]y y ∈, ∑_a ∈ y_a _(a) ≥ z . By choosing distinct sets one can formulate various interesting problems. By setting x_1 x_2, we recover the MaxDicut problem via and the fractional dicut-covering problem via . Our Max2Sat results are recovered with x_1 x_2, x_1 x_2, x_1x_2. Using these choices, <ref> are special cases a more general result from our framework, which we state next. The approximation factor that appears in the statement will be defined shortly in <ref>; a self-contained version of the result will be stated later as <ref>. Let be a set of predicates in two Boolean variables. Fix β∈ (0, ). There is a randomized polynomial-time algorithm that, given a set of -constraints on n variables and z ∈, computes w ∈ and returns an assignment a∈ and y ∈ with (y) = O(log n) such that ∑_a ∈ y_a _(a) ≥ z holds w.h.p., y≤1β(, z), and w_(a)≥β(, w). Moreover, our algorithm returns a simultaneous certificate that each of a and y is within a factor of β of the respective optimal value. Our framework builds on works by GoemansWilliamson1995, Grothendieck1953, and Nesterov1998, involving approximation results. GoemansWilliamson1995,Nesterov1998 both tackle the problem of solving maxWs_UU ∈V, where the V× V matrix W belongs to the positive semidefinite cone V and s_U 2_U-∈±1^V is the signed incidence vector of U ⊆ V. We introduce a parameterization for both the domain cone of matrices W and the allowed subsets U of V. Throughout V denotes a finite set and let , ⊆V be closed convex cones, where V is the space of symmetric V-by-V matrices. Let () U⊆ Vs_Us_U∈ encode the feasible/allowed subsets of V. Our primal problem involves maximization of a quadratic form: _,(W) maxWs_U U∈() , for every W ∈. Let () denote the smallest affine space containing . For each Z in the dual cone ^* X ∈()XY≥ 0 for each Y ∈ (where we use the trace inner product), a vector y∈() is a tensor sign cover for Z if ∑_U∈() y_U^s_Us_U≽_^* Z, where as usual the notation A ≽_^* B means A - B ∈^*. Here, we are denoting by ^*X ∈nXY≥ 0 for each Y ∈ the dual cone to in the potentially larger space of symmetric matrices n — see <ref>. Our dual problem is to find a tensor sign cover y that minimizes y: _,(Z) min[]y y ∈(), ∑_U ∈() y_U^s_Us_U≽_^* Z , for every Z ∈^*. The notation `' refers to fractional elliptope vertex cover. Recall that the elliptope is the set ^V Y ∈V(Y) =, where V→^V extracts the diagonal, and its vertices are UU ∈V; see <cit.>. By fixing and varying , it is clear that _, always attributes the same value for an input matrix, whereas _, defines a continuum of relaxations, affecting feasibility via the constraint ∑_U ∈() y_U^s_U≽_^* Z on the tensor sign covers. The smaller is, the weaker the constraint on the tensor sign cover becomes. <Ref> describe SDP-based approximation algorithms for fractional covering problems. Covering problems, in general, proved to be difficult for tractable SDP relaxations. For some negative results on various SDP relaxations of vertex cover problem, see for instance <cit.>. 
In those settings, the SDP relaxations considered fail to improve on their much simpler LP-based counterparts, in terms of the approximation ratio. Thus, it is noteworthy that in our framework we obtain randomized approximation algorithms that are tight under the UGC. Another interesting feature of our results is that our conic covering problems have an exponential number of variables (and computing their optimal values is NP-hard) but we still are able to treat these covering problems algorithmically, in polynomial time, and obtain approximately optimal sparse covers. Throughout the paper, we assume that ,⊆n[n] are closed convex cones such that the following conditions hold: ⊆n, ⊆^*, ∫((^)) ≠∅, 0≠ has a strictly feasible point, where ^UU ⊆ [n] , U⊆, is the convex hull, denotes the generated convex cone containing 0, and ∫ takes the interior. We refer the reader to <ref> for the definition of strictly feasible point. Set () ^[n]∩. A randomized rounding algorithm [] for is an indexed set [] = ([Y])_Y ∈() of matrix-valued random variables sampled from UU ∈(). Define inf_Y ∈()maxα∈[[Y]] ≽_^*α Y , which we call the rounding constant for (,,[]). We shall drop the pair whenever they can be inferred by context; in particular, the rounding constant may appear as . We define a Grothendieck cover for Z ∈^* as a tensor sign cover y for Z such that y≤ (1/) (Z). Our algorithms produce tensor sign covers y with approximation factor β arbitrarily close to ; we also call such vectors Grothendieck covers. We show how to pair instances of the problems and so that, given an instance W ∈ of , we obtain an instance Z ∈^* of and we approximately solve both instances simultaneously and provide a certificate for the approximation factor of both solutions. We do the same by starting with an instance of . Note from <ref> that feasible solutions can have exponential support size. The solutions produced by our algorithm have sparse support, with the bound on the support size varying according to geometric properties of the cone . For the case that ⊆n, we rely on spectral sparsification results for positive semidefinite matrices from <cit.>. Our main results are the outcome of our framework powered by primal-dual conic relaxations, randomized rounding algorithms together with generalized Chernoff concentration results, and spectral sparsifications methods. We state our main results in <ref>. They output objects called β-certificates (see <ref>), where β is an approximation factor, which are formed by feasible solutions for both problems, together with a simultaneous certificate of their approximation quality. Assume that ⊆n, and let [] be a randomized rounding algorithm for . Fix β∈ (0, ). There exists a randomized polynomial-time algorithm that, given an instance Z ∈^* of as input, computes an instance W ∈ of and a β-certificate for (W, Z) with high probability. Dually, there exists a randomized polynomial-time algorithm that, given an instance W ∈ of as input, computes an instance Z ∈^* of and a β-certificate for (W, Z) with high probability. Both algorithms take at most O(n^2 log(n)) samples from [], and produce covers with O(n) support. If = n, then O(n log n) samples suffice. We state in <Ref> our other main result, which is similar to <ref>, however, with a slightly different assumption on the cone  and it obtains better support size. The cone is the image (d) for a linear map ^d →n, and the support size obtained is O(log(n)+log(d)). <Ref> shall follow from this result. 
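For readers who want to see the sampling step in action, the sketch below runs the classical hyperplane rounding of Goemans and Williamson on a fixed correlation matrix Y in the elliptope and turns the empirical samples into a tensor sign cover for Z = Y, using the Nesterov constant 2/π that applies when W ranges over the positive semidefinite cone and every sign pattern is allowed (the cut- and CSP-specific cones discussed in the introduction admit larger constants). The matrix Y, the sample size, and the 2% slack absorbing sampling noise are arbitrary illustrative choices, not the parameters of the actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# An arbitrary correlation matrix Y in the elliptope (PSD with unit diagonal).
G = rng.standard_normal((n, 6))
Y = G @ G.T
Y = Y / np.sqrt(np.outer(np.diag(Y), np.diag(Y)))
V = np.linalg.cholesky(Y + 1e-12 * np.eye(n))      # Y = V V^T, rows of V are unit vectors

def round_once(V, rng):
    """One sample of hyperplane rounding: a random sign pattern s_U."""
    s = np.sign(V @ rng.standard_normal(V.shape[1]))
    s[s == 0] = 1.0
    return tuple(s * s[0])                         # identify s with -s

T = 100_000
counts = {}
for _ in range(T):
    s = round_once(V, rng)
    counts[s] = counts.get(s, 0) + 1

alpha, slack = 2.0 / np.pi, 0.98                   # Nesterov's constant, plus slack for sampling noise
y = {s: c / (T * alpha * slack) for s, c in counts.items()}   # candidate cover of Z = Y

S = sum(weight * np.outer(s, s) for s, weight in y.items())
print("support size      :", len(y))
print("cover value 1^T y :", round(sum(y.values()), 3))       # ~ 1/alpha ~ 1.6
print("min eig of S - Y  :", round(np.linalg.eigvalsh(S - Y)[0], 4), "(>= 0 means y covers Y)")
```

Since any cover must satisfy the trace inequality n·1^T y ≥ tr(Y) = n, the cover of value ≈ 1.6 found here is within roughly a factor π/2 of optimal, matching the 2/π guarantee for this unconstrained case.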
§.§ Additional Related Work In addition to the above cited references, here we mention some additional related work. In the continuum of the APX-complete duals, the one for MaxCut, called fractional cut-covering problem, was previously studied: first, in the special case that z=, i.e. unweighted graphs, see <cit.>; then, in general (arbitrary nonnegative weights z), see <cit.>. We vastly generalize the results of <cit.> while keeping all the desired properties. Their results apply to the pair MaxCut and fractional cut covering, which is a single pair of APX-complete problems in the wide swath of APX-complete problem pairs covered here. Part of the unification and generalization of the primal problems we consider was proposed earlier <cit.>. Their generalization is similar to the way we use the convex cone and the generalized Grothendieck constant. However, our framework is more general than that of <cit.> in two ways: (i) we consider, as an additional generalization, a set of convex cones restricting the feasible region of the primal problem (this additional generalization helps us achieve the best approximation ratios for the duals of Max Boolean 2-CSPs); (ii) for every primal APX-complete problem in our generalized domain we associate a dual conic covering problem and provide randomized approximation algorithms which provide approximate solutions to both problems. Part of our development of the underlying theory leading to the APX-complete duals is best explained via gauge duality <cit.> and its interplay with conic duality. A closely related concept is antiblocking duality theory <cit.>. The corresponding conic generalization of antiblocking duality appeared previously in <cit.>. § FRAMEWORK FOR GENERALIZED CUTS AND TENSOR SIGN COVERS AND CERTIFICATES This section introduces our framework along with its theoretical foundations. Recall the assumptions <ref>. We define relaxations for _, and _,: for every W ∈, set ν_,(W) maxWY Y ∈, (Y) = minρρ∈, x ∈^n, (x) ≽_^* W, ρ≥x, maxWZ W ∈, x ∈^n, W ≼_^*(x), x≤ 1 . and, for every Z ∈^*, ν^_(Z) minμμ∈, Y ∈, (Y) = μ, Y ≽_^* Z maxWZ W ∈, x ∈^n, W ≼_^*(x), x≤ 1 . Our algorithms rely on solving these relaxations and then sampling using the feasible solutions found. We show the following relation between , , ν, and ν^, and the rounding constant : Let [] be a randomized rounding algorithm for . We have that ·ν(W) ≤(W) ≤ν(W) for every W ∈; ν^(Z) ≤ (Z) ≤1 ·ν^(Z) for every Z ∈^*. Note that Ws_U = WU≤ν(W) for every U ⊆ [n], so the second inequality in <ref> holds. Let Y be a feasible solution of <ref>. Since [Y] has finite support, we have that [[Y]] = ∑_U ⊆ [n][Y] = UU, which implies [[Y]] ∈(UU ∈()) = ^. Thus <ref> follows from <ref>, as (W) ≥W[[Y]]≥WY. Similarly, for every y ∈() feasible in <ref>, we have that ∑_S ∈() y_U^U∈ is feasible in <ref> with the same objective value, so the first inequality in <ref> holds. It is immediate from <ref> that for every Y ∈(), if μ Y ≽_^* Z, then μ[[Y]] ≽_^* Z, so (Z) ≤μ/. Hence <ref> follows from <ref>. We remark that <ref> and <ref> are equivalent by gauge duality; see <ref> for more details. Our discussion so far has focused exclusively on the matrix space. Indeed, the definition <ref> of , as well as the concentration results we will exploit are naturally expressed in this context. Yet, applications may require results on other spaces. For example, an approximation algorithm for the fractional dicut covering problem on a digraph D = (V, A) is about weights in A. 
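Before turning to that vector-space formulation, we note that the relaxation ν and its conic dual are ordinary semidefinite programs and can be solved with off-the-shelf software. The sketch below does so for a MaxCut-type instance W = L/4 built from the 5-cycle, in the simplest instantiation where Y is constrained to the positive semidefinite cone; it uses cvxpy, which is an assumed dependency here, and any SDP solver would serve equally well. The optimal dual vector x is the kind of certificate vector that appears in the certificates defined below.

```python
import numpy as np
import cvxpy as cp

# MaxCut-type instance: W = L/4 for the 5-cycle, so that <W, s s^T> is the cut value of s.
n = 5
Adj = np.zeros((n, n))
for i in range(n):
    Adj[i, (i + 1) % n] = Adj[(i + 1) % n, i] = 1.0
W = (np.diag(Adj.sum(axis=1)) - Adj) / 4.0

# Relaxation  nu(W) = max { <W, Y> : Y PSD, diag(Y) = 1 }.
Y = cp.Variable((n, n), symmetric=True)
primal = cp.Problem(cp.Maximize(cp.trace(W @ Y)), [Y >> 0, cp.diag(Y) == 1])
primal.solve()

# Conic dual  min { 1^T x : Diag(x) - W  PSD };  its optimal x is a certificate vector.
x = cp.Variable(n)
dual = cp.Problem(cp.Minimize(cp.sum(x)), [cp.diag(x) - W >> 0])
dual.solve()

print("nu(W): primal", round(primal.value, 3), " dual", round(dual.value, 3))
print("brute-force MaxCut of C_5:", 4.0)
```

For this instance the solver returns ν(W) ≈ 4.52 from both formulations, strictly above the true maximum cut value of 4 (the familiar 5-cycle integrality-gap example), in line with the sandwich relation between the relaxation and the combinatorial optimum stated above.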
In our setting, this mapping between vectors and matrices is built into the cone . This is natural, as the cone is central to the covering constraint of <ref>. Let ^d →n be a linear map. We assume throughout the paper that (w) ∑_i ∈ [d] w_i A_i for nonzero A_1,…,A_d ∈n. Set (d) = []∑_i ∈ [d] w_i A_i w ∈d. We have that, for every X, Y ∈n, X ≼_^* Y if and only if A_iX≤A_iY for every i ∈ [d]. One can succinctly encode the finite set of linear inequalities above with the adjoint linear map ^* n→^d, thus obtaining that X ≼_^* Y holds if and only if ^*(X) ≤^*(Y). This is similar to what is done in the entropy maximization setting; see, e.g., <cit.>. In particular, the linear map ^* recovers relevant marginal probabilities when working with random matrices. With this setup, we move to d by defining, for every w ∈d and z ∈d, _, (w) _((w)), _, (z) min _(Z) Z ∈^*, ^*(Z) ≥z , ν_, (w) ν_((w)), ν_, ^(z) min ν_^(Z) Z ∈^*, ^*(Z) ≥z ; see <ref> in <ref> for details. We highlight that ν_, ^(z) = maxzw w ∈d, x ∈^n, (w) ≼_^*(x), x≤ 1 = minμμ∈, Y ∈, (Y) = μ, ^*(Y) ≥ z . Hence, given z ∈d as input, one can compute Z ∈^* such that ν_, ^(z) = ν_^(Z), as well as solving <ref> for said Z, by solving a single convex optimization problem, namely <ref>. In this way, we can both “lift” the vector w ∈d to the matrix (w) ∈, and z ∈d to the matrix Z ∈^*, with no extra algorithmic cost. Next we discuss how to simultaneously certify the approximation quality for instances W ∈ of and Z ∈^* of . A key observation is that (W)·(Z) ≥WZ for every W ∈ and Z ∈^*, which holds since WZ≤∑_U ∈() y_U^WU≤(W) y for every feasible solution y ∈() to <ref>. In the context of gauge duality, <ref> serves as a weak duality result. Assumptions <ref> can be used to provide a strong duality result: for every W ∈ there exists Z ∈^* such that equality holds in <ref>; and for every Z ∈^* there exists W ∈ such that equality holds in <ref>. Motivated by <ref>, we define β-pairings and β-certificates. Let β∈0,1. A β-pairing on (, ) is a pair (W, Z) ∈×^* such that there exist ρ, μ∈ with WZ [eq:beta-pairing-def](<ref>a)=ρμ and βρμ[eq:beta-pairing-def](<ref>b)≤_(W)μ[eq:beta-pairing-def](<ref>c)≤ρμ[eq:beta-pairing-def](<ref>d)≤ρ_(Z) [eq:beta-pairing-def](<ref>e)≤ 1/βρμ. If = (d) for a linear map ^d →n, we say that (w, z) ∈d×d is a β-pairing if ((w), Z) is a β-pairing for some Z ∈^* such that ^*(Z) ≥ z. We define an exact pairing on (, ) to be a 1-pairing on (, ). Note that, for nonzero ρ and μ, this definition implies that βρ≤(W) ≤ρ and μ≤(Z) ≤ (1/β) μ. Thus, the definition of β-pairing establishes the idea of simultaneous approximations. We need objects which algorithmically certify that a pair (W, Z) is a β-pairing. For this, we use an analogue of <ref> for our relaxations: ν(W)·ν^(Z) ≥WZ for every W ∈ and Z ∈^*. Similar to <ref>, inequality <ref> follows from <ref> and <ref>. The advantage of <ref> over <ref> is that the quantities here are computable in polynomial time, and by <ref>, they are closely related to the quantities in <ref>. A β-certificate for (W, Z) ∈×^* is a tuple (ρ, μ, U, y, x) such that ρ,μ∈_+ are such that ρμ= WZ, U ∈() is such that Ws_U ≥βρ, y ∈_+^() is such that ∑_U' ∈() y_U'^ U' ≽_^* Z and y ≤1βμ, and x ∈^n is such that ρ≥x and (x) ≽_^* W. If = (d) for a linear map ^d →n, we say that (ρ, μ, U, y, x) is a β-certificate for (w, z) ∈d×d if (ρ, μ, U, y, x) is a β-certificate for ((w), Z) for some Z ∈^* with ^*(Z) ≥ z. If there exists a β-certificate for (W, Z), then (W,Z) is a β-pairing. 
[eq:beta-pairing-def] (<ref>a), [eq:beta-pairing-def] (<ref>b), and [eq:beta-pairing-def] (<ref>e) follow immediately from <ref>, <ref>, and <ref>, resp. The connecting part (W)μ≤ρμ≤ρ(Z) is a combination of two notions of duality: conic duality (via <ref>) and conic gauge duality (via <ref>). Item <ref> provides a feasible solution to <ref>, which implies (W) ≤ν(W) ≤ρ. Hence (W)μ≤ρμ<ref>=WZ<ref>≤(W)·(Z) ≤ρ(Z). § GENERALIZED ROUNDING FRAMEWORK AND SPARSIFICATION Let [] be a randomized rounding algorithm for , and let Y ∈(). One can roughly see from <ref> that sampling from [Y] provides a feasible solution for , as well as a feasible solution for in expectation. However, such a solution may have exponential support size. Moreover, even in well-studied special cases like the Goemans and Williamson algorithm, it is not known how to compute the marginal probabilities exactly to obtain an expression for [[Y]]. We show how to obtain a Grothendieck cover by repeated sampling from a randomized rounding algorithm so that we have polynomial support size and the approximation ratio can be controlled with high probability. We first treat the polyhedral case. Let , ∈ (0, 1). Let (d) for a linear map ^d →n. Let X Ω→ be a random matrix such that ^*[X]≥. Let (X_t)_t ∈ [T] be i.i.d. random variables sampled from X. There is ψ_,∈Θ(1) such that, if T≥ψ_,(log(d) + log(n)), then 1/T∑_t ∈ [T] X_t ≽_^* (1 - ) [X] with probability at least 1 - 1/n. The main argument in the proof of <Ref>, which appears in <ref>, relies on Chernoff's bound for each generating ray of the cone, followed by union bound on those rays. Let , ∈ (0, 1). Let (d) for a linear map ^d →n. Let [] be a randomized rounding algorithm for . Let Y ∈() be such that ^*(Y) ≥. There exists a randomized polynomial-time algorithm producing a Grothendieck cover y ∈() for Y w.h.p. such that the algorithm performs at most T O(log(d) + log(n)) samples from [Y], the support size (y) is at most T and y≤ ((1 - ))^-1. Beyond polyhedral cones, we present a rounding algorithm under the assumption that ⊆n. In this case, we leverage matrix Chernoff bounds to ensure correctness of our algorithms with high probability. We refer the reader to <ref> for a complete proof. The result is an application of <cit.>, which exploits results arising from a Matrix Chernoff bound with respect to the positive semidefinite (Löwner) order. We denote by Xmax(X), (X) the spectral norm on n. Let X be a random matrix in n such that X≤ρ almost surely, and set σ^2 [X^2]. Let (X_t)_t ∈ [T] be i.i.d. random variables sampled from X. There is ψ_ = Θ(1) such that, if T≥ψ_maxσ^2,ρlog(n), then [X] - I ≼1/T∑_t ∈ [T] X_t ≼[X] + I holds w.h.p.. The tensor sign covers we can obtain by directly applying <ref> have polynomial support size. To guarantee linear support size, we rely on the following spectral sparsification result: Let Z ∈n. Let A_1, A_2 …, A_m ∈n and c ∈m. Suppose that the semidefinite program min[] cy: y ∈m, ∑_i=1^m y_iA_i ≽ Z has a feasible solution y^*. Let ∈ (0,1). There is a deterministic polynomial-time algorithm that, given y^*, and the matrices A_1,A_2…, A_m and Z as input, computes a feasible solution y̅ with at most O(n/^2) nonzero entries and cy̅≤ (1+) cy^*. Thus, we obtain the following result, which is proved in <ref>. Let , , ∈ (0, 1). Let ⊆n. Let [] be a randomized rounding algorithm for . Let Y ∈() be such that Y ≽ I. There exists a randomized polynomial time algorithm producing a Grothendieck cover y ∈() for Y w.h.p. 
such that the algorithm performs at most O(n^2 log(n)) samples from [Y], the support size (y) is O(n/^2) and y≤ (1 + )((1 - ))^-1. § SIMULTANEOUS APPROXIMATION ALGORITHMS The last ingredient of our algorithms is to ensure the feasible solutions behave well with respect to our sampling results. Both <ref> require a numeric bound on how interior to the cone the feasible solutions are: either by requiring Y ≽ I or ^*(Y) ≥. These assumptions are necessary: <cit.> exhibits instances of the fractional cut-covering problem and optimal solutions to the SDP relaxation that require, in expectation, exponentially many samples to ensure feasibility. For a fixed element of ^* which is “central” enough, we define perturbed versions of <ref> whose feasible regions exclude these ill-behaved matrices. For concreteness, we assume that I ∈UU ∈()⊆, which can be easily verified in the examples we will work with. We now describe one of the algorithms in <ref>. Let [] be a randomized rounding algorithm for . Let β∈ (0, ). Assume we are given an instance Z ∈^* of as input. Then * nearly solve the perturbed version of <ref> to compute (μ,Y) ∈_+ × and (W, x) ∈×^n; * sample O(n^2 log n) times from [Y] to obtain a Grothendieck cover y ∈() for Z; * apply <ref> to reduce the support size of y to O(n); * choose U that maximizes Ws_U' among all U' ∈(y); * output W and the β-certificate (1, μ, U, y, x). (Steps (1)–(3) involve errors terms that are chosen small enough to guarantee our desired approximation factor β.) <Ref> proves the correctness of steps (2) and (3). This is where we crucially exploit ⊆n, so that concentration and sparsification results developed for positive semidefinite matrices can be translated to the cone ^*. That (4) will define a set U which is part of the β-certificate follows from y being a good enough estimate: we have that βρ≤Ws_U since ρμ = WZ≤∑_U' ∈() y_U'^WU'≤WUy≤(s_UW s_U) 1/βμ. <Ref> has the precise proofs. The algorithm sketched above highlights an important part of our framework. For a given instance Z ∈^* of , we obtain from the SDP solutions to <ref> an instance W ∈ of , and we then certify the pair (W, Z). This mapping among instances is something we now make explicit. Define _[] (Z, W) ∈×^* WZ = ν_^(W)·ν_^(Z). One may prove that = [] (W, Z) ∈×^* [ ∃ (μ,Y) feasible in <ref> for Z,; ∃ (ρ,x) feasible in <ref> for W,; and WZ = ρμ ]. We invite the reader to compare the RHS of <ref> with the feasible regions of <ref> and <ref>. One may see solving either SDP as fixing one side of the pair of instances and obtaining the other; i.e., as computing an element of (W) Z ∈^*(W, Z) ∈ when given W ∈ as input, or computing an element of (Z) W ∈(W, Z) ∈ when given Z ∈^* as input. In both cases, by solving a single (primal-dual pair of) SDP we obtain an element of and the objects (ρ, x) and (μ, Y) which witness the membership. We now address <ref>, in which the cone is polyhedral and not necessarily contained in n. Here, we do not require the use of sparsification, as the cover produced is already (very) sparse. Let (d) for a linear map ^d →n. Assume <ref> and that ^*(I) ≥κ for some positive κ∈. Let [] be a randomized rounding algorithm for . Fix β∈ (0, ). There exists a randomized polynomial-time algorithm that, given an instance z ∈d of as input, computes an instance w ∈d of and a β-certificate for (w, z). Dually, there exists a randomized polynomial-time algorithm that, given an instance w ∈d of as input, computes an instance z ∈d of and a β-certificate for (w, z). 
Both algorithms output covers whose support size is bounded by C · (log(d) + log(n)), where C C(κ, , β) is independent of d and n. § BOOLEAN 2-CSP Let U ⊆0∪ [n] with 0 ∈ U. Let x [n] →, be defined such that x_i = if and only if i ∈ U ∖0. Let i, j ∈ [n]. For any predicate P, we let P∈0, 1 be 1 if the predicate P is , and 0 otherwise. <Ref> defines matrices Δ_± i, ± j∈0∪ [n] such that x_ix_j = 14Δ_-i, -jU, x_i x_j = 14Δ_-i, +jU, x_i x_j = 14Δ_+i, -jU, x_i x_j = 14Δ_+i, +jU. By decomposing a predicate as a disjunction of conjunctions, one can write any Boolean function on two variables as a sum of these matrices. Thus, for any set of constraints on two variables, one can define a linear map ^→0∪ [n] such that (e_f)U = f(x) for every f ∈; here, e_f f∈0,1^ is a canonical basis vector. With this particular linear map , we say that is the polyhedral cone defined by if = (). The definitions are made so that [[Y]] ≽_^*α Y if and only if (f(x) = ) ≥α(e_f)Y for every f ∈, where x ∈ is obtained from [Y] in the following way: let U ⊆0∪ [n] be such that 0 ∈ U and U was sampled from [Y], and define x ∈ by x_i = if and only if i ∈ U ∖0. If we set Δ^n ⋃_i, j ∈ [n]Δ_± i, ± j, it is immediate that ⊆(Δ^n). The set _Δ0∪ [n]∩[] ⋃_i, j ∈ [n]Δ_± i, ± j^* has been studied — see e.g., <cit.> —, and these additional inequalities are referred to as triangle inequalities. Since ^_Δ = UU ⊆0∪ [n] (see <ref>) we have that _Δ^* = 0∪ [n] + (Δ^n). This then ensures that <ref> holds for _Δ and the polyhedral cone defined by . Let be a set of predicates in two Boolean variables. For every n ∈, let []_n be a randomized rounding algorithm for _Δ⊆n + 1. Let α≤inf[]α__Δ, , []_n[ set of -constraints on n variables,; polyhedral cone defined by . ], and fix β∈ (0, α). There exists a randomized polynomial-time algorithm that, given an instance (, z) of as input, computes w ∈ and a β-certificate for (w, z). Dually, there exists a polynomial-time randomized algorithm that, given an instance (, w) of as input, computes z ∈ and a β-certificate for (w, z). Both algorithms take at most O(log n) samples from []_n and produce covers with O(log n) support size. Set _Δ and let ^→0∪ [n] be as in <ref>. From <ref> we have that <ref> holds. Since ^*(U) = _(x), using that 2^-n I = ∑0∪ UU ⊆ [n], we see that ^*(I) computes the marginal probability of satisfying each constraint by uniformly sampling an assignment in . As the constant function is not in , any constraint is satisfied by at least 1/4 of the assignments. Hence ^*(I) ≥14. Note that since ≤ 16, we have that log() = O(log n). <Ref> then ensures we can compute β-certificates (ρ, μ, U, y, x) with (y) = O(log() + log(n)) = O(log(n)). Set x_1 x_2. For every digraph D = (V, A), each arc uv can be mapped to a constraint x_u x_v. Hence, there exists such that (D, w) = (, w) and (, z) = (, z) for every w ∈A and z ∈A. BrakensiekHuangPotechinZwick2023 — see formulation after Proposition 2.4. — define [] such that [[Y]]14Δ_+u, -v≥Y14Δ_+u, -v for every arc uv ∈ A and Y ∈(). Thus [[Y]] ≽_^* Y, so <Ref> finishes the proof. Let [Y] = U where U = 0∪i ∈ [n]x_i = for x ∈ being sampled from the algorithm defined by LewinLivnatZwick2002. Then [[Y]]14(Δ_-i, + j + Δ_+i, -j + Δ_+i, +j) = (x_i x_j) def. of and Δ ≥Y 14 3 + Y_0i + Y_0j + Y_ij by <cit.> = Y14Δ_-i, + j + Δ_+i, -j + Δ_+i, +j. The mismatch between the expression in the second line and the expression in <cit.> arises from our modelling imposing x_0 =, whereas <cit.> impose x_0 =. The case for constraints x_i x_j, x_i x_j, and x_ix_j is analogous. 
As this holds for every constraint, we have that [[Y]] ≽_^* Y, where is the polyhedral cone defined by . Thus ≤α_, , and <ref> implies the statement. § CONCLUDING REMARKS AND FUTURE DIRECTIONS Despite its generality, our framework still captures several best possible and best known results. A first aspect concerns the approximation constants of the algorithms presented. We refer to _min[]_(W)/ν_(W) W ∈ = min[]ν_^(Z)/_(Z) Z ∈^* as the integrality ratio of (, ). Equality between the two expressions above follows from gauge duality. One may see that _ is the largest β such that every (W, Z) ∈() is a β-pairing. Positivity of _ is a corollary of all norms on a finite-dimensional vector space being equivalent. It is more interesting then to consider families of triples (, , []) where [] is a randomized rounding algorithm for . <Ref> implies that _inf_(, , []) ∈≥infα_, , [](, , []) ∈α_. For example, if encodes all the cones arising from instances of the maximum cut problem, to say that _≥ is to say we have a -approximation algorithm for the maximum cut problem, and a 1/-approximation algorithm for the fractional cut covering problem. Equality in <ref> indicates that no better approximation algorithm can be obtained without strengthening the formulation (by changing ) or restricting the input instances (by changing ). Whenever arises from instances related to a specific 2-CSP, Raghavendra <cit.> shows that the triangle inequalities (i.e., _Δ) are enough, as there exists a randomized rounding algorithm ensuring equality in <ref>. It is also known that 2/π = inf_n, n n ∈ <cit.>. In this way, the algorithms in <Ref> all have tight analyses. One formulation of the “equivalence between separation and optimization” proved by GrotschelLovaszEtAl1981 is that one can compute a positive definite monotone gauge whenever one can compute its dual. In this way, whenever ν_ is the best polynomial-time computable approximation to _ under the Unique Games Conjecture <cit.> (and assuming P ≠ NP), the same immediately holds for ν_^ and _. In particular, <cit.> shows that, assuming the UGC, it is NP-hard to obtain any approximation algorithm for a Boolean 2-CSP with approximation factor better than __Δ,. Thus, the UGC implies that <Ref> is best possible unless P = NP. The support size bounds in <Ref> are asymptotically tight. If = n, it is immediate that any feasible y in <ref> for Z = I has (y)≥ n. Hence the O(n) support size in <ref> is best possible. BenedettoProencadeCarliSilvaEtAl2023 argues that (y)≥log(χ(G)) for every graph G = (V, E) whenever = _G(E), where _G(w) ∑_ij ∈ E w_ij(e_i - e_j) is the Laplacian of G. Hence the O(log n) support size in <Ref> is also best possible. Tightly related to the support size of the solutions we produce, is the number of samples necessary to ensure a good enough cover with high probability. Although <Ref> shows that O(n^2 log n) samples suffice when ⊆n, in specific cones we exploited conic concentration bounds in <ref> to obtain better sampling bounds. It is conceivable that other families of cones also admit better bounds. E.g., <cit.> offers conic concentration for hyperbolicity cones. 
Three of the natural generalizations of our framework not discussed here are: * extension to the complex field and Hermitian matrices, * extension of the intractable pairs defined by exponentially many constraints and exponentially many variables to a semi-infinite setting (infinitely many constraints in the intractable primal and infinitely many variables in the intractable gauge dual), * extension to handle general CSPs. The first two generalizations allow the treatment of many applications in continuous mathematics and engineering, including some applications in robust optimization and system and control theory. The underlying theoretical results include as a special case the Extended Matrix Cube Theorem <cit.>. § MAXQ AND FEVC, AND THEIR CONIC RELAXATIONS We denote by (S) the affine hull of the set S ⊆n, which is the intersection of all affine subspaces of n containing S. Define the dual cone of S ⊆n as S^* X ∈(S)YX≥ 0 for all Y ∈ S . For full-dimensional convex sets, our definition matches the usual definition of dual cone. In general, the dual of a set is taken in its affine hull, analogous to how the relative interior is taken with respect to the topology induced in the affine hull of the set. This becomes most relevant as <ref> allows for cones which are not full dimensional. Although for a convex cone ⊆n the set (^*) may be strictly smaller than () — and hence ^** may not be —, if is a pointed cone, then ^** =. Let , ⊆n be closed convex cones. Recall that ^UU ⊆ V, U∈. Assume ⊆n, ⊆^*, ∫^≠∅, = ∩() and ∃X∈∫() ∖0 s.t. (X) = 0, where ⊆n is a closed convex cone and n→^k is a linear map for some k ∈. Under these assumptions, we have that is pointed and () ∖0≠∅. Note that ^* is pointed, since ∫() ≠∅ by <ref>. As ^* ⊇ by <ref>, we conclude is pointed. The second part follows from <ref>, as X≠ 0 and () is the smallest affine subspace of n containing . Let , be closed convex cones such that <ref> holds, and let n→^k and be the linear transformation and cone appearing in <ref>, respectively. We write ^*^* + (^*) ⊆n. If we denote by P n→n the orthogonal projector onto () = (), then ^* = P(^*). This relationship motivates the notation in <ref>: it shows that ^* is a lifting of the cone ^*. In our setting, we will have () as the instance space, where the inputs to our gauges arise from, and n as the lifted space where optimization is performed. In this way, both ^* and its lifting ^* appear throughout our developments. From <ref> we have that ^*⊇(^*) = ()^⊥ = ()^⊥. Hence ()^⊥⊆^*. From <ref>, <ref>, and <ref> we have that ⊆^*. Since is pointed by <ref>, we have that ^** = . Finally, the orthogonal projector gives a convenient map from to ^*, since Y ≽_^* P(Y) ∈^* for every Y ∈. Indeed, for every Y ∈ we have that P(Y) ∈^*, as P() ⊆ P(^*) = ^* by <ref> and <ref>. Moreover, Y - P(Y) ∈()^⊥ = (^*) ⊆^* by <ref>, so <ref> holds. Recall the definitions of _,, _, ν_, and ν_^, along with conic dual formulations, for each W ∈ and Z ∈^*: _,(W) maxs_UW s_U U∈() , max[]ZX X ∈, UX≤ 1 for every U ∈() _,(W)_,(Z) min[]y y ∈(), ∑_U ∈() y_U^U≽_^* Z max[]ZX X ∈, UX≤ 1 for every U ∈() ; _,(W)ν_,(W) maxWY Y ∈, (Y) = minx x ∈^n, (x) ≽_^* W , max[]ZX X ∈, UX≤ 1 for every U ∈() _,(W)ν^_(Z) minμμ∈, Y ∈, (Y) = μ, Y ≽_^* Z maxZX X ∈, x ∈^n, (x) ≽_^* X, x≤ 1 . max[]ZX X ∈, UX≤ 1 for every U ∈() Our arguments rely on standard results on Conic Programming Duality — see, e.g., <cit.>. 
In particular, a strictly feasible solution to an optimization problem is a feasible solution where every conic constraint is satisfied by a point in the interior of the relevant cone. By <ref>, there exists y∈() such that ∑_S ∈()y_U^UY∈∫() and (Y) = . We may assume that y > 0. Note that αY - Z = α(Y - 1αZ)∈∫() ⊆∫(^*) for large enough α∈_++. Hence <ref> has a strictly feasible solution. From <ref> one may reformulate <ref> into an equivalent problem with a strictly feasible solution. Conic Programming Strong Duality <cit.> implies equality and attainment in <ref>. Similarly, note that Y is a strictly feasible solution to <ref>, whereas x 2(W) is a strictly feasible solution for <ref>, as n⊆^* by <ref>. Once again, Strong Duality ensures equality and attainment in <ref>. A positive multiple of (1, Y) is a strictly feasible point to <ref>. Let X∈ be as in <ref>. Without loss of generality, assume that (X) < 1. Then (12n, 12nX) is a strictly feasible solution to <ref>. Hence equality and attainment holds in <ref>. We will look at these functions through the lens of conic gauges, which are defined as follows: Let be an Euclidean space. Let ⊆ be a closed convex cone. A function φ→_+ is a gauge if φ is positively homogeneous, sublinear, and φ(0) = 0. The gauge φ is positive definite if φ(x) > 0 for each nonzero x ∈, and φ is monotone if 0 ≼_ x ≼_ y implies φ(x) ≤φ(y). Let φ→ be a positive definite monotone gauge. The dual of φ is the positive definite monotone gauge φ^^* →_+ defined by φ^(y) maxyxx ∈, φ(x) ≤ 1 for each y ∈^*. Let ϕ→ be a positive definite monotone gauge. Whenever ^** =, — in particular whenever <ref> holds — one can prove that ϕ^ = ϕ. We show that _, _, ν_, and ν^_ are positive definite monotone gauges and how they are related. Let , ⊆n be closed convex cones such that <ref> holds. Then * _ and _ are positive definite monotone gauges, dual to each other; * ν_ and ν_^ are positive definite monotone gauges, dual to each other; * _≤ν_ and ν^_≤_. The fact that _ and ν_ are gauges follows directly from their definitions in <ref> and <ref>. The monotonicity of _ and ν_ is a direct consequence of <ref>. Next we show that _ is positive definite. Let y∈() and Y∈∫() be as in <ref>. Then 0 < YW = ∑_U ∈()y_U^UW for every nonzero W ∈⊆^*. Thus there exists U ∈() such that _(W) ≥Ws_U > 0. This implies that _ is positive definite. Since _, (W) = maxWs_UU ∈()≤maxWY Y ∈, (Y) = = ν_, (W), it follows that ν_ is positive definite. Thus, _ and ν_ are positive definite monotone gauges such that _≤ν_. The fact that _ and ν^_ are gauges follows directly from <ref> and <ref>. We now prove _ to be monotone. Let P n→n denote the orthogonal projector onto (). Let Z_0, Z_1 ∈^* be such that Z_0 ≼_^* Z_1. Let y ∈() be such that ∑_U ∈() y_U^U≽_^* Z_1. By <ref>, P[] ∑_U ∈() y_U^U ≽_^* P(Z_1) = Z_1 ≽_^* Z_0 = P(Z_0). By <ref>, there exists Ŷ∈^* such that P[]∑_U ∈() y_U^U - Z_0 = P(Y). Since (P) = ()^⊥, from <ref> we conclude ∑_U ∈() y_U^U - Z_0 - Y∈()^⊥⊆^*, which ensures ∑_U ∈() y_U^U≽_^* Z_0 + Y≽_^* Z_0. Hence, by <ref>, we have that _(Z_0) ≤_(Z_1). Similarly, let Y ∈ and μ∈ be such that (Y) = μ and Y ≽_^* Z_1. Then <ref> implies P(Y) ≽_^* Z_1 ≽_^* Z_0 = P(Z_0), so P(Y - Z_0) = P(X) for some X∈^* by <ref>. Hence Y - Z_0 - X∈()^⊥⊆^* by <ref>, and hence Y ≽_^* Z_0 + X≽_^* Z_0. From <ref> we conclude ν_^ is monotone. By <ref>, there exists X∈() ∖0. Then ν_(X) > 0, and hence we may assume ν_, (X) = 1. We claim that ZX > 0 for every nonzero Z ∈^*. 
Note that this implies via <ref> that ν^_, (Z) ≥ZX > 0 for every nonzero Z ∈^*, so ν^_ is positive definite. We now prove <ref>. Let Ŷ∈^* and let P n→n be the orthogonal projector onto (). Assume that P(Ŷ) ≠ 0. By <ref>, it suffices to prove P(Ŷ)X > 0. Let ∈ (0, 1). Note that X - P(Ŷ) ∈(), since (1 - )^-1X∈ and -P(Ŷ) ∈(), so X - P(Ŷ) = (1 - )[]1/1 - X + (-P(Ŷ)) ∈(). Since X∈(), there exists > 0 such that X - P(Ŷ) ∈. Using that XŶ = P(X)Ŷ = XP(Ŷ) and P(Ŷ)Ŷ = P(Ŷ)P(Ŷ), we use <ref> to conclude 0 ≤X - P(Ŷ)Ŷ = XŶ - P(Ŷ)Ŷ = XP(Ŷ) - P(Ŷ)P(Ŷ), so <ref> holds. Since _, (Z) = min[]y y ∈(), ∑_U ∈() y_U^U≽_^* Z ≥min[]μ∈ Y ∈, (Y) = μ, Y ≽_^* Z = ν^_, (Z). it follows that _ is positive definite. Thus, _ and ν^_ are positive definite monotone gauges such that _≥ν^_. It is immediate from <ref> that _, ^ = _,. We have that ν_, (W) = maxWZ Z ∈^*, ν^_, (Z) ≤ 1 for every W ∈. Moreover, max WY Y ∈, (Y) = = max WP(Y) Y ∈, (Y) = since W ∈⊆() ≤max WZ Z ∈^*, Y ∈, (Y) = , Y ≽_^* Z by <ref> ≤max WY Y ∈, (Y) = . since W ∈⊆ Hence equality holds throughout, which implies <ref> via <ref>. § POLYHEDRAL CONES Let ^d →n be a linear map, and set (d). We have that <ref> always hold. Indeed, we may write (d) = X ∈n(X) = 0, (X) ≥ 0 with n→^k and n→^ℓ linear transformations such that there exists X∈n such that (X) = 0 and (X) > 0. Since X ∈n(X) ≥ 0 has nonempty interior, we have that <ref> holds. Note further that () = (). We further have that X ≽_^* Y if and only if ^*(X) ≥^*(Y) for every X, Y ∈n. Let , be closed convex cones such that <ref> holds, where (d) for a linear map ^d →n such that (e_i) ≠ 0 for each i ∈ [d]. If w ∈d is such that (w) = 0, then w = 0. If there exists nonzero w ∈d and such that (w) = 0, then is not pointed, contradicting <ref>. Let , ⊆n be closed convex cones such that <ref> holds, where (d) for a linear map ^d →n such that (e_i) ≠ 0 for each i ∈ [d]. Then ν_,d→ defined by ν_, (w) ν_((w)) for every w ∈d is a positive definite monotone gauge, and its dual is the positive definite monotone gauge ν_,^(z) = minν_^(Z) Z ∈^*, ^*(Z) ≥ z = minμμ∈, Y ∈, (Y) = μ, ^*(Y) ≥ z for every z ∈d. Similarly, _,d→ defined by _, (w) _((w)) for every w ∈d is a positive definite monotone gauge, and its dual is the positive definite monotone gauge _,(z) = min_(Z) Z ∈^*, ^*(Z) ≥ z = min[]y y ∈(), ∑_U ∈() y_U^^*(U) ≥ z for every z ∈d. We first prove <ref> to be a positive definite monotone gauge. As the composition of the gauge ν_, with a linear function, it is immediate that ν_, is a gauge. If 0 ≤ w ≤ v, then 0 ≼_(w) ≼_(v), so monotonicity of ν_, follows from the monotonicity part of <ref>, <ref>. Let w ∈d be such that ν_, (w) = 0. Then ν_((w)) = 0, so <ref> implies that (w) = 0. Hence w = 0 by <ref>. Thus ν_, is a positive definite monotone gauge. Hence ν_, ^(z) = max zw w ∈d, ν_, (w) ≤1 = max zw w ∈d, ν_((w)) ≤1 = max zw w ∈d, x ∈^n, (x) ≽_^* (w), x ≤1 = min μ μ∈, Y ∈, (Y) = μ, ^*(Y) ≥z . Let α > 0 be such that ((α)) < 1. Then (w, x) (α2n, 12n) is strictly feasible in the second to last optimization problem, since (x) - (w) = 12n I - α2n() ∈∫(n) ⊆∫(^*) and x < 1. For Y as in <ref>, since Y∈∫() ⊆∫(^*), we have that ^*(Y) > 0 from <ref>, and thus ^*(αY) - z = α(^*(Y) - 1α z) > 0 for α∈ big enough. Thus the last optimization problem is also strictly feasible. Hence ν_, ^(z) = min μ μ∈, Y ∈, (Y) = μ, ^*(Y) ≥z = min μ μ∈, Y ∈, Z ∈^*, (Y) = μ, Y ≽_^* Z, ^*(Z) ≥z = min ν_^(Z) Z ∈^*, ^*(Z) ≥z . 
The second equation holds because (μ, Y) ↦ (μ, Y, P(Y)) and (μ, Y, Z) ↦ (μ, Y) map feasible solutions between both problems while preserving objective value by <ref>. That <ref> is a positive definite monotone gauge follows from <ref> and <ref> as above. Hence _, ^(z) = max zw w ∈d, _((w)) ≤1 = max zw w ∈d, w^*(U) ≤1 for every U ∈() = min[] y y ∈(), ∑_U ∈() y_U^^*(U) ≥z by LP Strong Duality = min[] y Z ∈^*, y ∈(), ∑_U ∈() y_U^U ≽_^* Z, ^*(Z) ≥z by <ref> and <ref> = min (Z) Z ∈^*, ^*(Z) ≥z . by <ref> § CONCENTRATION RESULTS In this section we prove the concentration results in <ref>. First we prove the results concerning the polyhedral case. Set S ∑_t ∈ [T] X_t. Also, define x_t ^*(X_t) for every t ∈ [T], and set s ^*(S). We also denote x ^*(X), where X is the random matrix in the statement. Then, by linearity of expectation, [s] = T [x] = T [^*(X)] = T ^*([X]) ≥ T . Let i ∈ [d]. Chernoff's bound and the previous inequality imply that []s_i ≤ (1 - )[s]_i≤exp[] - ^2 [s]_i/2≤exp[] - ^2 /2T . Hence, by the union bound, ∃i ∈[d], s_i ≤(1 - )[s]_i ≤d exp[] - ^2 /2 T ≤exp[] log(d) - ^2/2 2(log(d) + log(n))/^2 = 1/n. Thus with probability at least 1 - 1/n we have that s ≥ (1 - )[s]. By <ref>, this event holds if and only if S ≽_^* (1 - ) [S]. Both <ref> and <ref> imply that ^*([[Y]]) ≥^*(Y) ≥. <Ref> implies that, with probability at least 1 - 1/n, 1/T∑_t ∈ [T] ([Y])_t ≽_^* (1 - ) [[Y]] ≽_^* (1 - ) Y. Hence y ∈() defined by (1 - )· y_U 1/Tt ∈ [T]([Y])_t = U for every U ∈() satisfies the desired properties. One case we treat separately is when = n. For this case, we use the following result by Tropp: Let X_tt ∈ T be independent random matrices in n. Let ρ∈ be such that 0 ≼ X_t ≼ρ I almost surely for every t ∈ T. Set S ∑_t ∈ T X_t. Then for every ∈ (0, 1), (S) ≤ (1 - ) ( S) ≤ n exp[]-^2/2( S)/ρ. <ref> weakens the upper bound from <cit.> using that exp(-)/(1 - )^1 - ≤exp[]-^2/2, which follows from []1 - /2/1 - = + ∑_k = 2^∞^k/2≥ + ∑_k = 2^∞^k/k = log[]1/1 - . We prove the following result. Let ∈ (0, 1), let τ, ρ∈, and let Y̅∈n_++. Let X Ω→n be a random matrix such that 0 ≼ X ≼ρY̅ almost surely, and τY̅≼[X]. Let (X_t)_t ∈ [T] be independent identically distributed random variables sampled from X, for any T ≥[]4 ρ/^2 τlog(2n) . Then, with probability at least 1 - 1/2n, 1/T∑_t ∈ [T] X_t ≽ (1 - ) τY̅. For every t ∈ [T], set Y_t Y̅^-1/2X_t Y̅^-1/2. Then 0 ≼ Y_t = Y̅^-1/2 X_t Y̅^-1/2≼ρY̅^-1/2Y̅Y̅^-1/2 = ρ I for every t ∈ [T] almost surely. Set Q ∑_t ∈ [T] Y_t. Since [X] ≽τY̅, [Q] = [] ∑_t ∈ [T] Y_t = ∑_t ∈ [T]Y̅^-1/2[X_t]Y̅^-1/2≽ T τ I, which implies that ([Q]) ≥ T τ. Hence 1 - [] 1/T∑_t ∈[T] X_t ≽(1 - ) τY̅ = 1 - Q ≽T(1 - )τI ≤1 - Q ≽(1 - )([Q]) I as ([Q]) ≥ Tτ ≤(Q) ≤(1 - ) ([Q]) ≤2n exp[]-^2([Q])/2ρ by <ref> ≤2nexp[]-^2 τ/2 ρ T as ([Q]) ≥ Tτ ≤exp[] log(2n) - ^2τ/2ρ4ρlog(2n)/^2τ = 1/2n. by <ref> For the more general case when ⊆n, we use the following result by Tropp which requires a bound on the spectral norm of the random matrix and uses its second moment: Let T ∈ be nonzero. Let X be a random matrix in n such that X≤ρ almost surely. Let (X_t)_t ∈ [T] be i.i.d. random variables sampled from X. Set σ^2 [X^2], set M [X], and set E 1/T∑_t = 1^T X_t. Then for all ≥ 0, E - M≥≤ 2nexp[] -T ^2/2/σ^2 + 2ρ/3. Using <ref>, we are ready to prove our general concentration result. If σ^2 ≥ (2/3)ρ, then -T/2^2/σ^2 + (2/3)ρ≤ -T/4^2/σ^2≤ -8σ^2log(2n)/^2^2/4 σ^2 = -2log(2n). On the other hand, if σ^2 ≤ (2/3)ρ, then -T/2^2/σ^2 + (2/3)ρ≤ -T/4^2/(2/3)ρ≤ -16ρlog(2n)/33/8ρ = -2log(2n). 
<Ref> implies []1T∑_t ∈ [T] X_t - [X] ≥≤ 2nexp[] -T/2^2/σ^2 + (2/3)ρ≤ 2nexp -2log(2n) = 1/2n. Finally, we prove <Ref> by combining <ref> with <ref>. Set Z Y^-1/2[Y] Y^-1/2. For every U ⊆ [n], Y^-1/2U Y^-1/2 = UY^-1 and (Y^-1/2UY^-1/2)^2 = UY^-1^2. Thus (Y^-1) ≥Z and (Y^-1)^2 I ≽ Z^2 almost surely. Set σ^2 (n/)^2 and ρ n/. As Y ≽ I, we have that Y^-1≼ (1/) I, so (Y^-1) ≤ n/. Since n ≥ 1 ≥ (2/3), we have that T ≥[]8/()^2n^2 log(2n) ≥8/()^2n^2/^2log(2n) = 8σ^2log(2n)/()^2≥16/3n/1/log(2n) = 16ρlog(2n)/3. Let Z_tt ∈ [T] be i.i.d. random variables sampled from Z. <Ref> implies that Y^-1/2[[Y]]Y^-1/2 - γ I ≼ Y^-1/2[]1/T∑_t ∈ [T] Z_t Y^-1/2 with probability at least 1 - 1/(2n). Assume that this event holds. Let y ∈() be defined by y_U 1/Tt ∈ [T]Z_t = U. Then y = 1 and ∑_U ∈() y_U^U = 1T∑_t ∈ [T] Z_t ≽[[Y]] - Y. <Ref> implies we can compute in polynomial time ỹ∈() with support size (ỹ)∈ O(n/^2), such that ỹ≤ 1 +, and ∑_U ∈()ỹ_U^U≽[[Y]] - Y. As ⊆n, we have that n⊆^*, so by <ref> we obtain ∑_U ∈()ỹ_U U≽_^*[[Y]] - γ Y ≽_^* Y - γ Y = (1 - ) Y. § ALGORITHMIC SIMULTANEOUS CERTIFICATES Let , ⊆n be cones such that <ref> holds. Let ∈ (0, 1). Set, for every W ∈, ν_,(W) (1 - )ν_(W) + IW min[]ρρ∈_+, x ∈^n, ρ≥ (1-)x + IW, (x) ≽_^* W . For every Z ∈^*, set ν_,^(Z) = maxZW x ∈^n, W ∈, W ≼_^*(x), (1 - )x + IW≤ 1 = minμμ∈_+, Y ∈n, Y ≽_μ I, Y ≽_^* Z, (Y) = μ. One may check that ν_, and ν_,^ are positive definite monotone gauges, dual to each other. Let σ∈ (0, 1) and set _, σ(, ) * (W, Z) ∈×^* [ ∃ (μ,Y) feasible for <ref> for Z,; ∃ (ρ,x) feasible for <ref> for W,; 3c and WZ≥ (1 - σ) ρμ ]. One may check that if (ρ, x) and (μ,Y) witness the membership (W, Z) ∈_, σ(), then (1-σ)ρμ≤WZ≤ρμ. Let ∈0,1. Then, (1 - ) ν(W) ≤ν_(W) ≤ν(W), for each W ∈, ν^(Z) ≤ν^_(Z) ≤1/1 - ν^(Z), for each Z ∈^*. We have that (1 - ) ν(W) ≤ν_(W) since ν_(W) = (1 - ) ν(W) + IW and IW≥ 0 by <ref> since W ∈⊆^*. We have that ν( W) ≥IW by <ref>. Therefore, ν_(W) = (1-) ν(W) + (W) ≤ν(W). <Ref> holds by duality. Let , ⊆n be closed convex cones such that <ref> holds, where is the polyhedral cone defined by ^d →n. Assume that <ref> holds. Set ν_, , (w) ν_,((w)) for every w ∈d. Then, for every z ∈d, ν_,,^(z) = minμ∈ Y ≽_μ I, (Y) = μ, ^*(Y) ≥ z = maxzw w ∈d, x ∈^n, (1 - )x + ((w)) ≤ 1, (x) ≽_^*(w) . Let (μ, Y) be as in <ref>. Since I ∈⊆^* by <ref> and <ref>, it follows from <ref> that ^*(I) ≥ 0. Hence ^*((1 - )Y + μI) ≥ (1 - ) ^*(Y) > 0. We thus conclude that (1 - )Y + μ I is strictly feasible in the first optimization problem in <ref>. The second problem in <ref> is also strictly feasible, as one can see by setting (w, x) (α3n, 1/3n) for α∈_++ such that ((α)) < 1. Hence the second equality in <ref> and attainment of both problems follow from Strong Duality. For the first equality in <ref>, note that ν_^(z) = min ν_^(Z) Z ∈^*, ^*(Z) ≥z = min μ∈ Z ∈^*, ^*(Z) ≥z, Y ∈n, Y ≽_ μI, Y ≽_^* Z, (Y) = μ by <ref> ≥min μ∈ Y ∈n, Y ≽_ μI, (Y) = μ, ^*(Y) ≥z , as Y ≽_^* Z implies ^*(Y) ≥^*(Z) by <ref>. Equality follows from <ref> and ^*(Y) = ^*(P(Y)) for every Y ∈n, where P n→n is the orthogonal projector on (). Let , ⊆n be closed convex cones such that <ref> holds, where (d) for a linear map ^d →n. Assume that <ref> holds. Let z ∈d. If (μ, Y) and (w, x) are feasible solutions to <ref> such that (1 - σ)μ≤zw≤μ, then (1, x) and (μ, Y) witness the membership ((w),Y) ∈_, σ(, ). It is immediate that (1, x) is feasible in <ref> for W (w). It is also clear that (μ, Y) is feasible in <ref> for Z Y. The proof follows from (1 - σ)μ≤wz≤w^*(Y) = (w)Y. Let , σ, ∈0, 1. 
Let , ⊆n be such that <ref> holds. Let W, Z ∈n be nonzero. Let (ρ̅, x) and (μ̅, Y) witness the membership (W, Z) ∈_, σ(, ). Set ρ (1 - )^-1ρ̅ and μρ^-1ZW. Let p ∈() be such that p≤ 1 + and ∑_U ∈() p_U^U≽_^*1/μ̅Y. Set β (1 - )(1 - σ)/(1 + ). Then Ws_V≥βρ and Z ≼_^*1/β∑_U ∈() p_U^U for V Ws_UU ∈(p). Set V Ws_UU ∈(p). Then VW ≥∑_U ∈() p_U/pUW ≥1/μ̅pYW ≥1/μ̅pZW by <ref> ≥1 - σ/pρ̅ by <ref> = (1 - σ)(1 - )/pρ ≥(1 - σ)(1 - )/1 + ρ. Moreover, μ = ρ^-1ZW≥ρ^-1(1 - σ)ρ̅μ̅ = (1 - σ)(1 - )μ̅. Hence Z ≼_^* Y by <ref> ≼_^* μ̅ ∑_U ∈() p_U^U ≼_^* μ/(1 - σ)(1 - ) ∑_U ∈() p_U^U ≼_^* μ1 + /(1 - σ)(1 - ) ∑_U ∈() p_U^U. Let , σ, ∈ (0, 1). Let , ⊆n be closed convex cones such that <ref> holds, where (d) for a linear map ^d →n. Assume <ref> and that ^*(I) ≥κ, for some κ∈_++. Let [] be a randomized rounding algorithm for . Set β (1 - )(1 - σ) (1-). Let (W, Z) ∈_, σ() be such that W ≠ 0 ≠ Z. Let (ρ̅,x) and (μ̅,Y) witness the membership (W, Z) ∈_, σ(). There exists a randomized polynomial-time algorithm that takes (ρ̅, x) and (μ̅, Y) as input and outputs a β-certificate (ρ, μ, y, U, x) for (W, Z) with high probability, and such that (y)≤[]2(log(d) + log(n))/κ^2 almost surely. In particular, (W,Z) is a β-pairing. Note that ρ̅, μ̅ > 0 as W ≠ 0 ≠ Z. Set ρ (1-)^-1ρ̅ and μ (1/ρ) WZ. Note that <ref> holds trivially. We also have <ref>, since (x) ≽_^* W and ρ = ρ̅/1-≥1/1-[] (1-)x + (W)≥x as I ∈ by <ref> and ⊆^* by <ref>. We now prove <ref>. Set Y̅μ̅^-1 Y. Since (μ̅, Y) is feasible in <ref> and ⊆^*, we have that Y̅≽_^* I. Thus ^*(Y̅) ≥^*(I) ≥κ by <ref> and <ref>. <Ref> ensures that one can compute y̅∈() such that, with probability at least 1 - 1/n, 1/(1 - )1/∑_U ∈()y̅_U^U≽_^*1/μ̅ Y. Setting p (1 - )^-1y̅ and 0, <ref> finishes the proof. Set τ 1 - β/, and στ/3. If we are given w ∈d as input, nearly solve <ref> with W (w) and set z ^*(Y) and Z Y. If we are given z ∈d, nearly solve <ref> and set Z Y. In both cases, we obtain ((w), Z) ∈_, σ(, ) such that ^*(Z) ≥ z, as well as the appropriate witnesses of this membership. By definition, it suffices to obtain a β-certificate for ((w), Z). <Ref> implies one can compute, in polynomial time, a β̂-certificate (ρ, μ, U, y, x) for ((w), Z) with β̂ = (1 - )(1 - σ)(1 - ). Since 0 < τ < 1 ≤ 9, we have that β̂ = [] 1 - τ/3^3 = [] 1 - τ + 13τ^2 - 127τ^3 ≥(1 - τ) = β. This implies that (ρ, μ, U, y, x) is a β-certificate. By <ref>, we have that (y)≤[]2/κ^2 (log(d) + log(n)) = []54/κτ^3(log(d) + log(n)) . Let , σ, , ∈ (0, 1). Let , ⊆n be such that <ref> holds, and assume that ⊆n. Let [] be a randomized rounding algorithm for . Set β (1 - )(1 - σ) (1-)/(1 + ). Let (W, Z) ∈_, σ(, ) be such that W ≠ 0 ≠ Z. Let (ρ̅,x) and (μ̅,Y) witness the membership (W, Z) ∈_, σ(, ). There exists a polynomial time algorithm that takes (ρ̅, x) and (μ̅, Y) as input and outputs a β-certificate (ρ, μ, y, U, x) with high probability. Almost surely, we have that (y)∈ O(n/^2) and that the algorithm takes at most []8n^2log(2n)/()^2, samples from [μ̅^-1 Y]. In particular, (W,Z) is a β-pairing. Let (W, Z) ∈_, σ(, ). Let (ρ̅, x) and (μ̅, Y) witness the membership (W, Z) ∈_, σ(, ). Note that ρ̅, μ̅ > 0 as W ≠ 0 ≠ Z. Set ρ (1 - )^-1ρ̅ and μρ^-1ZW. Then <ref> holds trivially. We also have <ref>, since (x) ≽_^* W and ρ = ρ̅/1 - ≥1/1 - (1 - )x + (W) ≥x, as I ∈ by <ref> and ⊆^* by <ref>. We now prove <ref>. Set Y̅μ̅^-1Y. Since (μ̅, Y) is feasible in <ref> and ⊆^*, we have that Y̅≽_^* I. 
<Ref> ensures one can compute, in polynomial time, y̅∈() such that y̅≤ 1 + and (y̅)∈ O(n/^2), and with probability at least 1 - 1/(2n), ∑_U ∈() y_U^U≽_^* (1 - )Y̅. Assume this event holds, and set p ((1 - ))^-1y. <Ref> finishes the proof. Let ξ, γ be such that ξ∈ (0, 1] and γ∈ (0, 1). Let [] be a randomized rounding algorithm for . Let Z ∈n. Let (μ, Y) be feasible in <ref> with μ>0. Let T ≥[]2π/^2ξlog(2n), and let (X_t)_t ∈ [T] be i.i.d. random variables sampled from [Y]. If Y ≻ 0 and Y^-1s_U≤1/ξμ for every U ⊆ [n] with ([Y] = U) > 0, then [] μ/(1 - )T∑_t ∈ [T] X_t ≽ Z ≥ 1 - 1/2n. Set Y̅μ^-1Y. Since μ>0 and (Y)=μ, we have that Y̅≻ 0 and (Y̅) = 1. Note that (Y̅^-1/2UY̅^-1/2) = μY^-1s_U≤μ1/μξ≤1/ξ. Hence 0 ≼[Y] ≼1/ξY̅ almost surely. From <ref>, we may apply <Ref> with ρ 1/ξ and τ = 2/π to conclude that with T ≥[]4ρ/^2τlog(2n) = []4/^2ξlog(2n) = []2π/^2ξlog(2n) we have that 1T∑_t ∈ [T] X_t ≽ (1 - ) Y̅ with probability at least 1 - 1/(2n). The result follows from (μ, Y) being feasible in <ref>. Set τ 1 - β/, and set στ/4. Set τ/(4 - τ), so (1 + )^-1 = 1 - τ/4. By nearly solving either <ref> or <ref>, depending on whether W ∈ or Z ∈^* was given as input, one can compute (ρ̅, x) and (μ̅, Y) witnessing the membership (W, Z) ∈_, σ(, ). <Ref> ensures one can compute, in randomized polynomial time, a β̂-certificate (ρ, μ, U, y, x) for (W, Z) with high probability, where β̂ = (1 - τ/4)^4. Since the function x ↦ (1 - x/4)^4 is convex, it overestimates 1 - x, which is its best linear approximation at x 0. Hence β̂ = (1 - τ/4)^4 ≥(1 - τ) = β. Thus (ρ, μ, U, y, x) is a β-certificate. Note that is independent of n, so from <ref> we can conclude that (y)∈ O(n), the hidden constant depending only on β. Similarly, since and do not depend on n either, we have that the algorithm takes at most []8/()^2 n^2log(2n) = []2048/^2τ^4 n^2log(2n) ∈ O(n^2log(n)) samples from [], the hidden constant depending only on β. Now assume = n. It is immediate that <ref> holds. Set Y̅μ̅^-1 Y. We have that Y̅≽ I, so Y̅^-1≼1/ I. Hence s_UY̅^-1s_U≤ n/ for every U ⊆ [n], so <ref> holds for ξ/n. <Ref> imply that we can compute, with high probability, a β̂-certificate (ρ, μ, U, y̅, x) for (W, Z) with []2π/ξ^2log(2n) = []2π/^2n log(2n) = []128 π/τ^3n log(2n) samples from . Since τ is independent of n, <Ref> implies that we can sparsify y̅∈(), which potentially has support O(n log n), into y ∈() with (y)∈ O(n). § BOOLEAN 2-CSP Let i,j ∈ [n]. Define the following matrices in 0∪ [n]: Δ_-i, -j 12(e_0 - e_i)(e_0 - e_j) + (e_0 - e_j)(e_0 - e_i), Δ_-i, +j 12(e_0 - e_i)(e_0 + e_j) + (e_0 + e_j)(e_0 - e_i), Δ_+i, -j 12(e_0 + e_i)(e_0 - e_j) + (e_0 - e_j)(e_0 + e_i), Δ_+i, +j 12(e_0 + e_i)(e_0 + e_j) + (e_0 + e_j)(e_0 + e_i). Direct computation shows that, for every B ∈^0,…,n×0,…,n, Δ_-i, -jB^ B = B(e_0 - e_i)B(e_0 - e_i), Δ_-i, +jB^ B = B(e_0 - e_i)B(e_0 + e_i) Δ_+i, -jB^ B =B(e_0 + e_i)B(e_0 - e_i), Δ_+i, +jB^ B = B(e_0 + e_i)B(e_0 + e_i). It is then routine to check that UU ⊆0∪ [n]∩_Δ = UU ⊆0∪ [n] = ^_Δ.
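The "routine" checks above are easy to confirm numerically. The following is a small illustrative sketch (not part of the paper's code) that verifies the quoted inner-product identities for the matrices Δ_{±i,±j}: both ⟨Δ_{±i,±j}, s_U s_U^T⟩/4 recovering the truth value of the corresponding pair of literals, and ⟨Δ, B^T B⟩ matching the bilinear form in B. The sign convention assumed here (s_U has entry +1 on U and -1 off U, with 0 ∈ U, and x_i true iff i ∈ U∖{0}) and the helper name delta are illustrative assumptions consistent with the definitions above.

    # Illustrative check (not from the paper) of the Delta_{+-i,+-j} identities.
    # Assumed convention: s_U in {+-1}^{0..n}, (s_U)_k = +1 iff k in U, 0 in U,
    # and x_i is "true" iff i in U \ {0}.
    import itertools
    import numpy as np

    n = 4
    e = np.eye(n + 1)  # e[0] plays the role of e_0

    def delta(si, i, sj, j):
        """Delta_{si*i, sj*j} = sym((e_0 + si e_i)(e_0 + sj e_j)^T)."""
        u = e[0] + si * e[i]
        v = e[0] + sj * e[j]
        return 0.5 * (np.outer(u, v) + np.outer(v, u))

    # 1/4 <Delta, s_U s_U^T> = 1/4 (1 + si*s_i)(1 + sj*s_j), i.e. the 0/1 truth
    # value of the conjunction of the two signed literals.
    for U in itertools.chain.from_iterable(
            itertools.combinations(range(1, n + 1), k) for k in range(n + 1)):
        U = {0, *U}
        s = np.array([1.0 if k in U else -1.0 for k in range(n + 1)])
        S = np.outer(s, s)
        for i, j in itertools.combinations(range(1, n + 1), 2):
            for si, sj in itertools.product((+1, -1), repeat=2):
                val = 0.25 * np.tensordot(delta(si, i, sj, j), S)
                want = float((si * s[i] > 0) and (sj * s[j] > 0))
                assert np.isclose(val, want)

    # <Delta, B^T B> = <B(e_0 + si e_i), B(e_0 + sj e_j)> for any B.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((n + 1, n + 1))
    i, j = 1, 3
    lhs = np.tensordot(delta(+1, i, -1, j), B.T @ B)
    rhs = (B @ (e[0] + e[i])) @ (B @ (e[0] - e[j]))
    assert np.isclose(lhs, rhs)
    print("all identities verified")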
http://arxiv.org/abs/2406.19038v1
20240627094352
Binary neutron star mergers using a discontinuous Galerkin-finite difference hybrid method
[ "Nils Deppe", "Francois Foucart", "Marceline S. Bonilla", "Michael Boyle", "Nicholas J. Corso", "Matthew D. Duez", "Matthew Giesler", "François Hébert", "Lawrence E. Kidder", "Yoonsoo Kim", "Prayush Kumar", "Isaac Legred", "Geoffrey Lovelace", "Elias R. Most", "Jordan Moxon", "Kyle C. Nelli", "Harald P. Pfeiffer", "Mark A. Scheel", "Saul A. Teukolsky", "William Throwe", "Nils L. Vu" ]
gr-qc
[ "gr-qc" ]
BNS mergers using discontinuous Galerkin-finite difference hybrid method]Binary neutron star mergers using a discontinuous Galerkin-finite difference hybrid method ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ nd357@cornell.edu § ABSTRACT We present a discontinuous Galerkin-finite difference hybrid scheme that allows high-order shock capturing with the discontinuous Galerkin method for general relativistic magnetohydrodynamics in dynamical spacetimes. We present several optimizations and stability improvements to our algorithm that allow the hybrid method to successfully simulate single, rotating, and binary neutron stars. The hybrid method achieves the efficiency of discontinuous Galerkin methods throughout almost the entire spacetime during the inspiral phase, while being able to robustly capture shocks and resolve the stellar surfaces. We also use Cauchy-Characteristic evolution to compute the first gravitational waveforms at future null infinity from binary neutron star mergers. The simulations presented here are the first successful binary neutron star inspiral and merger simulations using discontinuous Galerkin methods. Keywords: discontinuous Galerkin, Finite Difference, GRMHD, neutron star, WENO § INTRODUCTION The discontinuous Galerkin (DG) method was first presented by Reed and Hill <cit.> to solve the neutron transport equation. Later, in a series of seminal papers, Cockburn and Shu applied the DG method to nonlinear hyperbolic conservation laws <cit.>. An important property of DG methods is that they guarantee linear stability in the L_2 norm for arbitrary high order <cit.>. While this means the DG method is very robust, DG alone is still subject to Godunov's theorem <cit.>: at high order it produces oscillatory solutions. This means DG requires a nonlinear supplemental method for stability in the presence of discontinuities and large gradients. We extend the discontinuous Galerkin-finite difference (DG-FD) hybrid method developed in <cit.> to dynamical spacetimes. The method is implemented in the open-source numerical relativity code, <cit.>. Spectral-type methods have proven extremely useful in producing a large number of long and accurate gravitational waveforms from binary black hole merger simulations <cit.>, as well as other applications in relativistic astrophysics <cit.>. The Spectral Einstein Code () <cit.> performs binary neutron star merger simulations by solving the spacetime using pseudospectral methods and the magnetohydrodynamics (MHD) using finite difference methods <cit.>. However, these use completely separate grids requiring interpolation between them at every time/sub step. This interpolation adds non-trivial cost, though more importantly, 's use of large spectral elements causes significant load-imbalance and prohibitive cost as resolution is increased. This is because the spacetime is only two derivatives smoother than the MHD solution, and so the spectral approximation is less accurate at the stellar surfaces causing 's adaptive mesh refinement algorithm<cit.> to significantly increase the number of grid points in these regions. SpEC has leveraged its use of pseudospectral methods to produce relatively low-cost 10-15 orbits long simulations of binary neutron star (BNS) and black hole-neutron star (BHNS) <cit.> mergers with accuracy comparable to state-of-the art finite difference codes <cit.>. 
Given its scaling issues when attempting higher resolution simulations, however, producing the longer, higher accuracy BNS and BHNS waveforms needed by next-generation gravitational wave detectors <cit.> with SpEC would be impractical. The same issues arise when attempting to capture the growth of magnetic fields due to MHD instabilities during and after merger, as well as the expected dynamo processes leading to the production of a large scale, organized magnetic field from the small scale field generated by these instabilities. Recent simulations have demonstrated the transfer of magnetic energy from small to large scales in merger simulations <cit.>, yet even the highest resolution simulations available do not show clear convergence of the magnetic field evolution during and after merger <cit.>. The need to perform high-resolution MHD simulations is particularly acute given the importance of the large scale structure of the magnetic field on matter ejection and the electromagnetic signals powered by neutron star mergers <cit.>. By using the same method (DG or finite difference) in each element for both the spacetime and MHD, is able to achieve robust convergence during the inspiral phase while avoiding the scaling issues limiting 's ability to perform high-resolution simulations. Recently, Ref. <cit.> successfully performed long-term simulations of static spacetimes using a DG-finite volume hybrid method to evolve the Einstein-Euler system. However, inspiral and merger simulations were not presented for black hole or neutron star binaries. There are several other next-generation numerical relativity codes using DG methods, including <cit.> and <cit.>. Other next-generation codes like <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> use FD methods ( combines this with wavelet-based adaptive mesh refinement) but are aimed at using Graphics Processing Units (GPUs). The code <cit.> uses smooth particle hydrodynamics to evolve the relativistic hydrodymanics and FD methods for the spacetime. In this paper we present the improvements necessary for to simulate the inspiral and merger of a binary neutron star system using DG methods. We will present results of collapsing simulations in future work since several improvements to our algorithm currently in development will be necessary. In <ref> we briefly review the generalized harmonic (GH) equations. In <ref> we provide details about the improvements necessary since <cit.> to successfully simulate the inspiral and merger of two neutron stars. In <ref> we present numerical results from simulations of a TOV star, rotating neutron star, and a binary neutron star merger, including the first gravitational waveforms extracted using Cauchy-Characteristic Evolution <cit.>. All simulations are done using the open-source code  <cit.> using the scheme presented here. We conclude in <ref>. § EQUATIONS OF MOTION We adopt the standard 3+1 form of the spacetime metric, (see, e.g., <cit.>), ds^2 = g_abdx^a dx^b =-α^2 dt^2 + γ_ij(dx^i+β^i dt) (dx^j +β^j dt), where α is the lapse, β^i the shift vector, and γ_ij is the spatial metric. We use the Einstein summation convention, summing over repeated indices. Latin indices from the first part of the alphabet a,b,c,… denote spacetime indices ranging from 0 to 3, while Latin indices i,j,… are purely spatial, ranging from 1 to 3. We work in units where c = G = M_⊙ = 1, and use geometrized Heaviside-Lorentz units where the magnetic field is rescaled by 1/√(4π) compared to Gaussian units. 
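As a minimal illustration of the 3+1 split in the displayed line element, the sketch below (not SpECTRE code; the function name and the round-trip check are purely illustrative) assembles the spacetime metric g_ab from the lapse, shift, and spatial metric at a single point and recovers (α, β^i) back from it.

    # Illustrative sketch: g_00 = -alpha^2 + beta_k beta^k, g_0i = beta_i, g_ij = gamma_ij.
    import numpy as np

    def spacetime_metric(alpha, beta_up, gamma):
        beta_lo = gamma @ beta_up          # beta_i = gamma_ij beta^j
        g = np.empty((4, 4))
        g[0, 0] = -alpha**2 + beta_lo @ beta_up
        g[0, 1:] = g[1:, 0] = beta_lo
        g[1:, 1:] = gamma
        return g

    # Round-trip check with arbitrary (but physical) values.
    alpha = 0.9
    beta_up = np.array([0.01, -0.02, 0.005])
    gamma = np.diag([1.2, 1.1, 1.3])
    g = spacetime_metric(alpha, beta_up, gamma)

    gamma_rec = g[1:, 1:]
    beta_rec = np.linalg.solve(gamma_rec, g[0, 1:])      # beta^i = gamma^{ij} g_{0j}
    alpha_rec = np.sqrt(beta_rec @ g[0, 1:] - g[0, 0])   # alpha^2 = beta^i beta_i - g_00
    assert np.isclose(alpha_rec, alpha) and np.allclose(beta_rec, beta_up)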
We refer the reader to the literature <cit.> for a detailed description of the equations of general relativistic magnetohydrodynamics (GRMHD) and their implementation in . The generalized harmonic (GH) equations are given by<cit.>, ∂_t g_ab =(1+γ_1)β^k∂_k g_ab -αΠ_ab-γ_1β^iΦ_iab, ∂_tΦ_iab =β^k∂_kΦ_iab - α∂_iΠ_ab + αγ_2∂_ig_ab +1/2α n^c n^dΦ_icdΠ_ab + αγ^jkn^cΦ_ijcΦ_kab -αγ_2Φ_iab, ∂_tΠ_ab =β^k∂_kΠ_ab - αγ^ki∂_kΦ_iab + γ_1γ_2β^k∂_kg_ab +2α g^cd(γ^ijΦ_icaΦ_jdb - Π_caΠ_db - g^efΓ_aceΓ_bdf) -2α∇_(aH_b) - 1/2α n^c n^dΠ_cdΠ_ab - α n^c Π_ciγ^ijΦ_jab +αγ_0(2δ^c_(a n_b) - g_abn^c)𝒞_c -γ_1γ_2 β^iΦ_iab -16πα(T_ab - 1/2g_abT^c_c), where g_ab is the spacetime metric, Φ_iab=∂_i g_ab, Π_ab = n^c∂_cg_ab, n^a is the unit normal vector to the spatial slice, γ_0 damps the 1-index or gauge constraint 𝒞_a=H_a+Γ_a, γ_1 controls the linear degeneracy of the system, γ_2 damps the 3-index constraint 𝒞_iab=∂_i g_ab-Φ_iab, Γ_abc are the spacetime Christoffel symbols of the first kind, Γ_a=g^bcΓ_bca, and T_ab is the stress-energy tensor. The gauge source function H_a can be any arbitrary function depending only upon the spacetime coordinates x^a and g_ab (but not derivatives of g_ab). For the GRMHD system the trace-reversed stress-energy tensor that sources Π_ab is given by T_a b - 1/2 g_a b T^c_c = (ρ h + b^2) u_a u_b + [1/2(ρ h + b^2) - p] g_a b - b_a b_b, where u_a is the four-velocity of the fluid, ρ is the baryon rest mass density, p the fluid pressure, h the specific enthalpy, and b^a=-1/2ϵ^abcdF_cdu_b with ϵ_abcd=√(-g)[abcd], g the determinant of the spacetime metric and [abcd]=±1 with [0123]=+1 is the flat-space antisymmetric symbol. § DISCONTINUOUS GALERKIN-FINITE DIFFERENCE HYBRID METHOD IN DYNAMICAL SPACETIMES In this section we present our DG-FD hybrid method improvements necessary to simulate dynamical spacetimes. The reader is referred to <cit.> for the original algorithm and to <cit.> for improvements developed for simulating general relativistic force-free electrodynamics. §.§ Generalized harmonic spectral filter We use an exponential filter applied to the spectral coefficient c_i in order to reduce and eliminate aliasing-driven instabilities. Specifically, for a 1d spectral expansion u(x)=∑_i=0^Nc_i P_i(x), where P_i(x) are the Legendre polynomials, we use the filter c_i → c_i exp[-a(i/N)^2b]. We choose the parameters a=36 and b=64 so that only the highest spectral mode is filtered. We only apply the filter to the GH variables g_ab, Φ_iab and Π_ab. Note that the filter drops the order of convergence for the GH variables from 𝒪(N+1) to 𝒪(N) on the DG grid, but is necessary for stability. §.§ Generalized harmonic finite difference method When FD is necessary, we discretize the GH system using standard cell-centered central FD methods[See, e.g. <cit.> for a pedagogical overview.]. In general, the order of accuracy of the FD derivatives for the GH system is two orders higher than that of the GRMHD system. That is, if we use second-order monotonized central reconstruction <cit.>, we use fourth-order FD derivatives of the GH system. We apply Kreiss-Oliger dissipation <cit.> as a filter to the GH variables before taking numerical derivatives and evaluating the time derivatives. Specifically, the filtered variable ũ is given by ũ_ = u_ + ϵ F^(m)(u_), for Kreiss-Oliger operator F^(m) and ϵ∈[0,1]. The subscript refers to a grid point index. We use the filter F^(5) when using fourth-order FD derivatives where F^(5) is given by F^(5) u_= -375/8( 1/336 (u_-2 + u_+2) - 1/84 (u_y-1 + u_+1) + 1/56 u_). 
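The two filters above are simple to prototype in one dimension. The sketch below (not SpECTRE code) applies the exponential filter to modal Legendre coefficients and evaluates the quoted 5-point F^(5) stencil on point values; the function names, the periodic wrapping standing in for ghost-zone data, and the demo grids are illustrative assumptions, and the stencil indices are read as j-2,...,j+2.

    # Illustrative 1-d sketches of the exponential spectral filter and F^(5).
    import numpy as np

    def exponential_filter(c, a=36.0, b=64):
        """c_i -> c_i exp(-a (i/N)^(2b)); with a=36, b=64 only the top mode is damped."""
        N = len(c) - 1
        i = np.arange(N + 1)
        return c * np.exp(-a * (i / N) ** (2 * b))

    def kreiss_oliger_f5(u):
        """The quoted 5-point F^(5) stencil, with periodic wrapping for illustration."""
        up1, um1 = np.roll(u, -1), np.roll(u, 1)
        up2, um2 = np.roll(u, -2), np.roll(u, 2)
        return -375.0 / 8.0 * ((um2 + up2) / 336.0 - (um1 + up1) / 84.0 + u / 56.0)

    # The filter is ~1 on every mode except the highest, which is suppressed
    # to roughly machine precision:
    c = np.ones(6)                        # modes 0..5 of a P_5 expansion
    print(exponential_filter(c))          # [1, 1, 1, 1, ~1, ~2e-16]

    # F^(5) is a scaled fourth difference: O(h^4) on smooth data but O(1) on
    # grid-scale oscillations, so adding eps*F^(5)u to u acts almost only on
    # the grid-scale component.
    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    print(np.max(np.abs(kreiss_oliger_f5(np.sin(x)))))                # ~1e-5
    print(np.max(np.abs(kreiss_oliger_f5((-1.0) ** np.arange(64)))))  # ~2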
The advantage of this approach is that the number of ghost zones is not increased for dissipation, and the GH system is still solved at a higher order than the GRMHD system. We reconstruct the metric variables to subcell interfaces at the same order as the hydro variables. That is, if we use the fifth-order positivity-preserving adaptive-order (PPAO) scheme<cit.> to reconstruct the GRMHD variables, then we use unlimited fifth-order reconstruction for the metric. §.§ GRMHD finite difference method The overall method is very similar to that presented in <cit.>. However, instead of reconstructing the pressure p we now reconstruct the “temperature” T of the fluid. This is because the temperature is an independent variable in equation of state tables, so even if there are numerical errors, as long as the temperature remains positive the reconstructed primitive state is “reasonable”. §.§ Curved mesh finite difference method solves hyperbolic systems of equations of the form ∂_t u + ∂_i F^i(u) + B^i(u)∂_i u = S(u), where u are the evolved variables, F^i(u) the fluxes, B^i(u) non-conservative products, and S(u) source terms. In our DG-FD method the computational domain is divided up into non-overlapping elements or cells, which we denote by Ω_k. This allows us to write the system eq:model pde in semi-discrete form where time remains continuous. In the DG method one integrates the evolution equations eq:model pde against spatial basis functions of degree N, which we denote by ϕ_. We index the basis functions and collocation points of the DG scheme with breve Latin indices, e.g. , , k̆. The basis functions are defined in the reference coordinates of each element, which we denote by ξ^∈{ξ, η, ζ}. We use hatted indices to denote tensor components in the reference frame. The reference coordinates are mapped to the physical coordinates using the general function x^i=x^i(ξ^, t). Since the reference coordinates ξ^ are Cartesian, applying a FD scheme is comparatively straightforward to implement in the reference coordinates rather than the physical or inertial coordinates that the hyperbolic equations are written in. In order to support FD on curved meshes now solves the equations in the form ∂ u/∂ t + 1/J∂_[ J∂ξ^/∂ x^i(F^i-u v^i_g)] = S - u∂_i v^i_g, where v^i_g is the grid or mesh velocity and we can identify F^=J∂ξ^/∂ x^i F^i as the “locally Cartesian flux” in the reference coordinates. This is analogous to how DG schemes are formulated <cit.>. In practice we compute the mesh velocity divergence on the DG grid and project it to the FD grid. While this form is somewhat different from the strong form used by our DG solver, we can still rewrite the equations in a form that naturally hybridizes with a DG solver. In particular, the boundary corrections G in a DG scheme are essentially n_iF^i where n_i is the normal covector to the spatial element interface and in the logical direction is given by n^()_i=∂ξ^()/∂ x^i1/√(∂ξ^()/∂ x^jγ^jk∂ξ^()/∂ x^k) =J∂ξ^()/∂ x^i1/√(J∂ξ^()/∂ x^jγ^jkJ∂ξ^()/∂ x^k). With G^() as the boundary correction or numerical flux at the interface in direction , possibly including a high-order correction<cit.>, we can write the discretized FD evolution equation as ∂ u_(ξ,η,ζ)/∂ t = S_(ξ,η,ζ) -u_(ξ,η,ζ)𝒫(∂_i v^i_g)_(ξ,η,ζ) - 1/J_(ξ,η,ζ)[ (|n^(ξ̂)|G^(̂ξ̂)̂)_(ξ + 1/2,η,ζ) -(|n^(ξ̂)|G^(̂ξ̂)̂)_(ξ - 1/2,η,ζ)/Δξ. .+ (|n^(η̂)|G^(̂η̂)̂)_(ξ,η + 1/2,ζ) -(|n^(η̂)|G^(̂η̂)̂)_(ξ,η - 1/2,ζ)/Δη. 
.+ (|n^(ζ̂)|G^(̂ζ̂)̂)_(ξ,η,ζ + 1/2) -(|n^(ζ̂)|G^(̂ζ̂)̂)_(ξ,η,ζ - 1/2)/Δζ], where 𝒫 is the projection operator from the DG to the FD grid as defined in <cit.> and |n^()|=√(J∂ξ^()/∂ x^jγ^jkJ∂ξ^()/∂ x^k) or |n^()|=√(∂ξ^()/∂ x^jγ^jk∂ξ^()/∂ x^k) depending on which normal vector form is chosen. Since the correction G^() is exactly what is used in a DG scheme we can straightforwardly hybridize the two schemes in a conservative manner, independent of the exact DG or FD formulation used. The primary reason for using the discretized form in Eq. <ref> is for easier implementation on curved meshes since in this form the equations are simply the standard Cartesian evolution equations. A non-trivial challenge on curved meshes is populating ghost zones. In we divide the computational domain into a set of conforming collections of elements called blocks. Each block has Cartesian block-logical coordinates [-1,1]^3. These coordinates are then mapped by a possibly time-dependent map to the inertial frame in which the evolution equations are written. This is a standard approach in the spectral finite element method community and is similar to what uses<cit.>. An example domain is a 2d disk made up of five deformed rectangles. One square is in the middle surrounded by 4 wedges. The specific challenge is reconstruction at block boundaries since the block-logical coordinates of two blocks in general do not align. This requires some form of interpolation to populate ghost points for the neighbor. Currently we use simple trilinear interpolation. However, we are developing an approach based on high-order limited interpolation inspired by <cit.>. In figure <ref> we show an example of a 2d domain where the logical coordinates are not aligned. The left element must interpolate its solution to the red diamonds when populating the ghost zones for the right element. We currently use compact linear interpolation where neighbor points are not used during the interpolation. Not using neighbor points means that if the ghost zones lie outside the region enclosed by the grid points, extrapolation is used. §.§ Troubled-cell indicators One of the most important parts of the DG-FD hybrid method is the TCI that determines when to switch from DG to FD and back. We still use the relaxed discrete maximum principle (RDMP) as discussed in <cit.>. Specifically, the RDMP requires that min_𝒩[u(t^n)] - δ≤ u^⋆(t^n+1) ≤max_𝒩[u(t^n)] + δ, where 𝒩 are either the Neumann or Voronoi neighbors plus the element itself, δ is a parameter defined below that relaxes the discrete maximum principle, u are the conserved variables, and u^⋆(t^n+1) is a candidate solution at time t^n+1 computed using an unlimited DG scheme. When computing max(u) and min(u) over an element using DG, we first project the DG solution to the FD grid and then compute the maximum and minimum over both the DG and FD grid. However, when an element is using FD we compute the maximum and minimum over the FD grid only. The maximum and minimum values of u^⋆ are computed in the same manner as those of u. The parameter δ used to relax the discrete maximum principle is given by: δ = max(δ_0,ϵ{max_𝒩[u(t^n)] - min_𝒩[u(t^n)]}), where, as in <cit.>, we take δ_0=10^-7 and ϵ=10^-3. If the condition <ref> is satisfied, we say the variable u passes the RDMP. We also continue to use the Persson indicator <cit.>; however, we have changed the details. 
Specifically, consider a variable u with a 1d spectral decomposition: u(x)=∑_=0^Nc_ P_(x), where in our case P_(x) are Legendre polynomials, and c_ are the spectral coefficients. The Persson TCI essentially monitors the percentage of power in the highest spectral coefficient(s). To do this, we define û as û(x)=c_N P_N(x). and check that (N+1)^α√(∑_=0^N û_^2) > √(∑_=0^N u_^2), where (N+1)^α can be precomputed and stored. We find that this mathematically equivalent condition to our previous check <cit.>, s^Ω=log_10(√(∑_=0^N û_^2/∑_=0^N u_^2)) <s^e=-α_Nlog_10(N+1), is cheaper and better behaved in the limit of u→0. A significant change in handling initial data is that all elements start on the FD grid and then evaluate the TCI to see if restriction to the DG grid is allowed. This is particularly useful for initial data interpolated to the grid from another grid, e.g. when reading data from an elliptic solver such as <cit.>. The TCI used on the initial FD grid is essentially identical to the one used during the evolution described below, <ref>, except for the RDMP TCI. For the RDMP TCI the candidate solution is the restricted DG solution of the initial data. Below we denote time levels by superscripts. For example, u^n is the value of the variable u at time t^n while u^n+1 is the value of the variable u at time t^n+1. We also monitor several conserved magnetohydrodynamical variables, which are defined as ([ D̃; S̃_i; τ̃; B̃^i ]) = √(γ)([ ρ W; (ρ h + b^2) W^2 v_i - α b^0 b_i; (ρ h + b^2)^* W^2 - [p+b^2/2] - (α b^0)^2 - ρ W; B^i ]), where γ is the determinant of the spatial metric γ_ij, v^i is the spatial velocity of the fluid as measured by an observer at rest in the spatial hypersurfaces (“Eulerian observer”) is v^i = 1/α(u^i/u^0 + β^i), with a corresponding Lorentz factor W W = - u^a n_a = α u^0 = 1/√(1 - γ_ijv^i v^j) = √(1+γ^iju_i u_j), and B^i = F^ian_a = α F^0i. §.§.§ TCI on DG grid for GRMHD On the DG grid we require: * that min(D̃^n+1)/(√(γ^n))≥ D_min on both the DG and the projected FD grid. * that min(τ̃^n+1)/(√(γ^n))≥τ_min on both the DG and the projected FD grid. * that max(ρ^n+1)/(√(γ^n))≥ρ_atm on the DG grid. This is to ensure that we only apply the below TCI checks when the solution will not be reset to atmosphere since we would like to always use the DG solver in atmosphere. * that B̃^2≤1.0 - ϵ_B 2 τ̃√(γ) at all grid points in the DG element. * that primitive recovery is successful. * if we are in atmosphere we mark the solution as admissible. * that D̃ and the pressure p pass the Persson TCI. * that if max(√(B̃^iδ_ijB̃^j)) is above a user-specified threshold, √(B̃^iδ_ijB̃^j) satisfies the Persson TCI. * that the RDMP TCI passes for D̃, τ̃, and √(B̃^2). If all requirements are met, then the DG solution is admissible. We use (√(γ^n)) to reduce computational cost since we can use the same average on both the DG and FD grid. This eliminates the need to project √(γ) and also reduces the amount of memory bandwidth needed. §.§.§ TCI on FD grid for GRMHD In order to switch to DG from FD, we require: * that min(D̃^n+1)/(√(γ^n))≥ D_min and min(τ̃^n+1)/(√(γ^n))≥τ_min on the DG grid. * that we did not need to fix the conservative variables (see III.F of <cit.>) if we are not in atmosphere. * that D̃ and the pressure p pass the Persson TCI if we are not in atmosphere. * that the RDMP TCI passes for D̃, τ̃, and √(B̃^2). * that if max(√(B̃^iδ_ijB̃^j)) is above a user-specified threshold, √(B̃^iδ_ijB̃^j) satisfies the Persson TCI. 
If all the above checks are satisfied, then the numerical solution is representable on the DG grid. §.§ Restriction from FD to DG The restriction operator ℛ[referred to as reconstruction in <cit.>, which we find is easily confused with the reconstruction done on the FD grid] that interpolates variables from the FD to the DG grid, as presented in <cit.>, is a 3d operator in 3 spatial dimensions. This means it is a matrix of size (N+1)^3×(2N+1)^3[N is the degree of the DG basis and N+1 is the number of DG grid points per dimension.], resulting in a rather expensive matrix multiplication in the troubled-cell indicator (TCI) used on the FD grid, where we restrict D̃, p, and optionally √(B^iδ_ijB^j) from the FD grid to the DG grid. This turns out to be a non-negligible expense and so instead we apply the 1d restriction operator dimension-by-dimension. This is a stronger constraint on the DG solution than the 3d restriction, but in addition to the reduced cost it also guarantees that if the solution is constant along an axis of the element on the FD grid, it will also be constant on the DG grid. This ultimately helps reduce noise introduced through restriction. An additional two performance improvements that reduce how frequently the TCI is run on the FD grid were introduced after many of the simulations presented here were already completed. These are: * When an element switches from DG to FD, enough time must elapse for the discontinuous feature to propagate through the troubled element before a TCI check is necessary. A heuristic for choosing the number of time steps to wait before running the TCI on the FD grid is min(Δ x)/Δ t(2 N + 1) where min(Δ x) is the minimum grid spacing between FD points in the inertial frame, Δ t is the time step size, and (2N+1) is the number of FD grid points per dimension in the element. For example, for a P_5 DG-FD method with min(Δ x) / Δ t∼ 2, we should wait ∼ 22 time steps before checking the TCI after switching from DG to FD. * Instead of checking the TCI every step after the initial check, we check with a specified frequency that we typically choose to be ∼(2N-1) time steps, primarily to reduce the overhead of TCI calls. A heuristic argument for the frequency at which to check is not clear. Essentially, one wants to minimize the overhead incurred by calls to the TCI while not spending too much time using FD when DG would work. has input file options that allow controlling the two frequencies at which the TCI is applied on the FD grid. A third option added to the TCI, but not yet extensively tested, is requiring the TCI to mark the solution in an element as admissible multiple times before switching back to DG. The motivation for this is to provide additional time for the FD solver to smooth the solution and to prevent having to switch back to the FD grid soon after switching to DG. All three of these methods were necessary when studying more dynamical systems like current sheets in general relativistic force-free electrodynamics <cit.> and so is not just a characteristic of GRMHD, but dynamical systems in general. §.§ Generalized harmonic system at DG-FD interface For systems in flux-conservative and flux-balanced form, stable methods for a DG-FD hybrid scheme have been developed <cit.>. These are all based on a weak form of the system of partial differential equations. However, since the GH system is not in flux-conservative form it is not as clear how to couple the DG and FD solver. 
While weak forms for non-conservative systems exist <cit.>, these formulations are not developed for FD schemes. We opt for a simple approach. On the FD grid we use cell-centered FD stencils using the GH variables in the ghost zones as interpolated by the DG grid. On the DG grid we interpolate the GH variables on the FD grid to the interface using unlimited reconstruction and then use the DG boundary correction just as is done for flux-conservative systems. In practice we find this to be stable except when the hybrid solver rapidly switches back and forth between the DG and FD grids. However, we view that as an issue with the TCI and not with how we handle the DG-FD interface for non-conservative systems. The same behavior is observed in simulations of current sheets in general relativistic force-free electrodynamics <cit.> and the methods described at the end of <ref> result in a robust TCI that does not exhibit such pathological behavior. §.§ Outer boundary conditions We impose constraint-preserving boundary conditions on the GH constraint variables<cit.>, first-order Bayliss-Turkel-type<cit.> boundary conditions on the gauge degrees of freedom<cit.>, and a no-incoming radiation boundary condition on the physical degrees of freedom<cit.>. The boundary conditions are imposed using the Bjørhus method<cit.>. In the future we plan to use Cauchy-Characteristic Matching to impose more realistic boundary conditions on the incoming physical fields<cit.>. We impose outflow boundary conditions on the GRMHD variables, filling ghost zones reflected about the outer boundary. For the DG grid this means the primitive variables are simply copied from the interior interface to the exterior one. However, we adjust the spatial velocity. Specifically, for an outward-directed normal vector n_i at the grid points, if n_iv^i≥0 we use v^i_ghost=v^i while if n_iv^i<0 then we set v^i_ghost=v^i-n^i (n_j v^j). These boundary conditions allow us to stably evolve single and binary neutron star spacetimes for long times, though our simulations are terminated before the matter reaches the outer boundary. Because 's FD solver does not yet have the ability to handle refinement boundaries, matter within 4 code units of the boundary of the inner region (see  <ref>) is removed from the evolution (i.e. the density is set to our numerical floor). A crucial future improvement will be better handling of the matter outflows in two key ways. First, we need to add mesh refinement support to the FD solver in order to track matter outflows. Second, we plan to add the ability to impose outflow boundary conditions on the matter fields inside the computational domain. That is, rather than enforcing outflow boundary conditions in the wavezone we impose them closer to the binary and do not evolve the GRMHD system farther out. This is so that a larger computational domain can be used to track the GW emission but we can ignore low-density outflows in the wavezone to reduce computational cost. §.§ Constraint damping One non-trivial challenge in evolving the first-order GH system is choosing constraint damping parameters that allow for stable long-term evolutions while minimally decreasing the accuracy of the solution. For single neutron star simulations we use γ_0 = 0.12 exp(-r^2/7.884855^2) + 0.01, γ_1 = 0.999 exp(-r^2/30^2) - 0.999, γ_2 = 1.2 exp(-r^2/7.884855^2) + 0.01, where r is the coordinate radius r=√(x^iδ_ijx^j) with δ_ij the Kronecker delta symbol. 
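For concreteness, the single-star damping profiles above can be evaluated directly; the short sketch below (not SpECTRE code; the function names and sample radii are illustrative) shows their radial behavior.

    # Illustrative evaluation of the single-star constraint-damping profiles.
    import numpy as np

    W = 7.884855  # Gaussian width used for gamma_0 and gamma_2

    def gamma0(r):
        return 0.12 * np.exp(-(r / W) ** 2) + 0.01

    def gamma1(r):
        return 0.999 * np.exp(-(r / 30.0) ** 2) - 0.999

    def gamma2(r):
        return 1.2 * np.exp(-(r / W) ** 2) + 0.01

    r = np.array([0.0, 10.0, 100.0, 250.0])
    for name, g in (("gamma0", gamma0), ("gamma1", gamma1), ("gamma2", gamma2)):
        print(name, g(r))
    # Damping is strongest near the star; far away gamma_0 and gamma_2 settle
    # to 0.01 while gamma_1 approaches -0.999.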
For binary neutron star mergers we use γ_i = γ_i,A exp[-|x^j-x^j_A|^2/w_A^2] + γ_i,B exp[-|x^j-x^j_B|^2/w_B^2] + γ_i,C exp[-r^2/w_C^2] + γ_i,D, where x^i are the grid-frame coordinates, x^i_A,B are the locations of the centers of the neutron stars in the grid frame, and the other parameters are freely specifiable constants. For the three parameters γ_0,1,2 entering the GH equations, we use γ_0,A=γ_0,B= γ_0,C = 0.06277857994; γ_0,D=0.01; γ_1,A=γ_1,B=0; γ_1,C=0.999; γ_1,D=-0.999; γ_2,A=γ_2,B = 0.94167869922; γ_2,C = 0.19182343873; γ_2,D=0.01; w_A = w_B = 7.884855; w_C = 51.60996. These choices are drawn from our experience running similar systems, where we use the functional forms below depending on the masses of the stars M_A and M_B in solar masses. For γ_0 we use γ_0,A = 0.09/M_A, γ_0,B = 0.09/M_B, γ_0,C = 0.18/(M_A + M_B), γ_0,D = 0.01. For γ_1 we use γ_1,A = 0, γ_1,B = 0, γ_1,C = 0.999, γ_1,D = -0.999, which makes the zero-speed constraint fields propagate radially outward at the outer boundary. For γ_2 we use γ_2,A = 1.35/M_A, γ_2,B = 1.35/M_B, γ_2,C = 0.55/(M_A + M_B), γ_2,D = 0.01. Finally, the weights are given by w_A = 5.5 M_A, w_B = 5.5 M_B, w_C = 18 (M_A + M_B). § NUMERICAL RESULTS We now present numerical results from single and binary neutron star simulations. We refer to an element using (N+1)^3 DG points as a P_N element or as using a P_N scheme. The corresponding FD grid has (2N+1)^3 FD grid points. §.§ Single Star We begin our numerical evaluation of the DG-FD hybrid method by simulating several configurations of single stars in equilibrium in full 3d using the harmonic gauge H_a=0. We use an HLL Riemann solver on all elements. Time evolution is performed using a third-order Runge-Kutta method. In all tests, the domain consists of an inner cube covering the region [-17.8,17.8]^3 with a transition layer to a spherical boundary at r=100 and a surrounding spherical shell covering r∈[100, 250]. The transition layer and spherical shell are divided into 6 regions with 90^∘ opening angles (i.e. a cubed-sphere geometry). We vary the resolution across the scenarios, but by convention, the cube consists of K^3 elements, where we may have K∈[8,16,32]. Each region of the inner spherical shells consists of (K/2)^3 elements, and each region of the outer shells consists of (K/2)_r×(K/4)_θ,ϕ^2 elements. For each value of K, we may further vary the resolution by changing the number of basis functions used by each DG element. Specifically, we use P_5 through P_7 elements. This choice is uniform across the entire domain, except when K=32, in which case the shells only use P_5 elements, even if the central cube uses P_6 or P_7 ones. All single star simulations are evolved with a polytropic equation of state, p(ρ)=κρ^Γ, with polytropic exponent Γ=2 and polytropic constant κ=100. The following subsections outline the various tests that were run with this setup. §.§.§ TOV star In the case of a static, spherically symmetric star, we follow the procedures of <cit.>. Namely, we construct a star using the Tolman-Oppenheimer-Volkoff (TOV) solution <cit.>. The star's central density is ρ_c=1.28×10^-3, such that the total mass in this solution is M=1.4M_⊙. For FD cells in these simulations, we use the monotonized central reconstruction method <cit.>. We use this case to provide an in-depth exploration of the effects of resolution on our results. As such, we test all K∈[8,16,32] and all DG basis functions P_5 through P_7. In all cases, we run the simulation to t=10 ms.
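For readers who want to reproduce the background solution, a minimal sketch of integrating the TOV equations for this Γ=2, κ=100 polytrope with ρ_c=1.28×10^-3 is given below (geometric units, fixed-step RK4). It is an illustration only, not the initial-data solver used for the simulations; the step size and surface threshold are arbitrary choices.

    import numpy as np

    KAPPA, GAMMA, RHO_C = 100.0, 2.0, 1.28e-3

    def eos(rho):
        """Polytrope: pressure and total energy density e = rho * (1 + eps)."""
        p = KAPPA * rho ** GAMMA
        eps = KAPPA * rho ** (GAMMA - 1.0) / (GAMMA - 1.0)
        return p, rho * (1.0 + eps)

    def rhs(r, y):
        m, p = y
        if p <= 0.0:
            return np.zeros(2)
        rho = (p / KAPPA) ** (1.0 / GAMMA)
        _, e = eos(rho)
        dm = 4.0 * np.pi * r ** 2 * e
        dp = -(e + p) * (m + 4.0 * np.pi * r ** 3 * p) / (r * (r - 2.0 * m))
        return np.array([dm, dp])

    p_c, _ = eos(RHO_C)
    r, dr, y = 1e-6, 1e-3, np.array([0.0, p_c])
    while y[1] > 1e-12 * p_c:          # integrate outward until the pressure vanishes
        k1 = rhs(r, y)
        k2 = rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = rhs(r + dr, y + dr * k3)
        y, r = y + dr / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), r + dr
    print(f"radius ~ {r:.2f}, gravitational mass ~ {y[0]:.3f}")  # roughly 9.6 and 1.40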
The left panel of figure <ref> shows the evolution of the normalized maximum rest mass density over time. The K=8 and K=16 simulations use FD throughout the entire stellar interior, and although the K=32 simulations use DG cells for at least part of the stellar interior, the P_5 and P_6 cases still use FD cells at the stellar core. This means that all simulations except for K=32, P_7 will not have a grid point at the center and will have an offset in central density from the initial value. However, as the resolution improves, the density converges toward the initial value. The right panel of figure <ref> shows the same data in the frequency domain, demonstrating that the oscillations in the data can largely be attributed to the known radial oscillation modes <cit.>. We find that even the lowest resolution simulation resolves two of the modes well, and as resolution improves, so does the quality and quantity of resolved modes. §.§.§ Rotating neutron star To generate initial data for a uniformly rotating star, we numerically solve the Einstein-hydrostatic equilibrium equations according to the methods of <cit.>. In our case, we generate a star of gravitational mass M=1.627M_⊙ and rotational period 1.820 ms, such that the ratio of polar to equatorial radius is 0.85. We then load this initial data into the evolution code and evolve the system. For FD cells in these simulations, we use the PPAO reconstruction method <cit.>. We test this scenario at two resolutions, K∈[16,32], both using the P_6 scheme, and we run both simulations to t=5 ms. As with the TOV case, figure <ref> depicts the evolution of the maximum rest mass density over time. We see a decay in this maximum density over time, which is a known consequence of the dissipative nature of FD schemes <cit.>. For the lower resolution K=16 case, the decay is at the sub-percent level over our considered duration, and increasing the simulation resolution further reduces its effect. §.§ Binary neutron star merger §.§.§ Numerical setup To test the ability of the DG-FD hybrid method to evolve binary neutron star (BNS) systems, we perform a series of BNS simulations with both the DG-FD code and a well-established pseudospectral code. For ease of comparison the simulations use the same initial conditions, gauge choice, and constraint damping parameters. Specifically, we consider an equal-mass system with neutron stars of gravitational masses M_A=M_B=1.35M_⊙. The stars have initial separation d_0=47 km in the coordinates of our initial data. The centers of the neutron stars are initially at x^i_A=(16,0,0)M_⊙, x^i_B=(-16,0,0)M_⊙, with initial angular velocity Ω_0 (M_A+M_B)=0.0223. The neutron stars have coordinate radii R=9.48 (14 km). We generate quasi-circular data with the elliptic solver of <cit.>. We use the simple ideal gas equation of state p = κρ^Γ + ρ T = κρ^Γ + ρ(Γ-1)(ϵ - κρ^Γ-1/(Γ-1)), with T = (Γ-1)(ϵ - κρ^Γ-1/(Γ-1)) and hence ϵ = p/((Γ-1)ρ), where p is the pressure, ρ the baryon density, ϵ the specific internal energy, and T a "temperature" variable effectively parametrizing the thermal energy of the fluid. We use Γ=2 and κ=123.6. All evolutions are performed using the harmonic gauge H_a=0. Besides these choices, the two codes use very different numerical methods. The pseudospectral code uses the standard setup described in <cit.>, i.e.
a pseudospectral grid for the evolution of the GH equations made of a small number of large subdomains adapted to the geometry of the system (78 pseudospectral subdomains during inspiral, including spheres close to the compact objects and in the wave zone, and distorted cylinders and blocks in between), and finite volume evolution of the hydrodynamics equations in the Valencia formalism <cit.> with WENO5 <cit.> reconstruction from cell centers to cell faces and an HLL <cit.> approximate Riemann solver to calculate numerical fluxes on those faces. The finite difference grid is itself split during inspiral into ∼(400-700) elements, with the number of elements changing over time as matter covers an increasingly large fraction of the pseudospectral grid. For the low-resolution simulation, each FD element uses 16^3 grid points excluding ghost zones, and for the medium-resolution simulation each FD element uses 19^3 grid points excluding ghost zones. Ghost zones add 6 grid points per dimension for each simulation. Time evolution is performed using a third-order Runge-Kutta method. We run these simulations at 2 resolutions corresponding to our standard "low" and "medium" resolution settings (Δx=[0.263,0.211]M_⊙ on the finite difference grid). In the pseudospectral code, the grid both rotates and contracts with the evolution of the neutron stars, in such a way as to keep the center of mass of each star fixed on the numerical grid. The DG-FD code, on the other hand, uses a larger number of smaller elements (14592 elements in total), making use of the mixed DG-FD algorithm described above. On FD elements, we use the PPAO reconstruction method <cit.> and an HLL Riemann solver. Time evolution is performed using a third-order Adams-Bashforth method <cit.>. We show the domain decomposition in figure <ref>. The inner part of the domain is constructed from a grid of 32×16×16 cubes covering the region [-40,40]×[-20,20]×[-20,20] around the neutron stars. The outer part of the domain is a shell covering radii r∈[100,250], divided into 6 regions with 90^∘ opening angles ("JuggleBalls" geometry). Each of these regions is divided into 8×4×4 cubes (8 in the radial direction). Finally, the inner and outer regions are connected by an envelope of 10 distorted cubes, each divided into 8×8×8 elements. We vary resolution by changing the number of basis functions used by each DG element. Specifically, we use P_4 through P_7 elements. Around the neutron stars, and when using FD, our effective grid resolution is thus Δx=[0.278,0.227,0.192,0.167]. Practically, our experimentation with a range of different domain decompositions indicates that for P_4, our errors are dominated by inaccurate evolution of the GH equations in the envelope and outer region. This is not surprising given that the GH solver is running at only fourth order in this case. We will see that we observe a significant decrease in numerical error, for, e.g., the trajectories of the neutron stars, between P_4 and P_5. In the DG-FD code, the computational domain rotates with the neutron stars, but we do not perform any rescaling, i.e. the center of mass of the two neutron stars remains along the x-axis in grid coordinates, but the stars approach each other on that axis over the course of the evolution. In figure <ref> we plot the baryon rest mass density at t≈3.5 ms. The elements outlined in black use FD. We see that during inspiral the hybrid scheme uses FD methods close to the stellar surfaces and DG methods elsewhere.
During the merger itself, larger fractions of the computational domain switch to FD; how to optimize when to switch between FD and DG methods during the post-merger phase remains an open question. The two codes use the same mechanism to "correct" the evolved variables in low-density regions <cit.>, but differ in the details: one code additionally corrects the primitive variables (temperature, velocity) at densities ρ<2×10^-11 (i.e. roughly 8 orders of magnitude below the central density of the neutron star), while the other only sets velocities to small (<10^-4) values at ρ<9×10^-15 and does not correct the temperature. The two codes use density floors of ρ=10^-13 and ρ=10^-15, respectively. §.§.§ Results Each neutron star goes through about 3.5 orbits before merger, on slightly eccentric orbits. Figure <ref> provides us with more information about the numerical accuracy of the simulations. In the left panel, we show the binary separation as a function of time for all simulations. As previously mentioned, the P_4 simulation is significantly less accurate than all other simulations, likely because of the low-order methods used in the envelope. Experimentally we found that the P_4 scheme is fairly sensitive to the choice of domain decomposition in the outer regions. The P_5 through P_7 simulations show clear convergence of the merger time, with the two codes agreeing within the estimated numerical error. In the right panel we show the phase error, estimated here as the orbital phase difference with respect to the P_7 simulation. Both codes quickly approach the results of the high-resolution simulations as resolution increases. We note that the trajectory and phase do not show clean pointwise convergence at all times, due to crossings in the trajectories at the time of periastron passage, i.e. around 4-5 ms. This is particularly visible in the phase difference at times ∼5 ms. In figure <ref>, we show quantities that more directly track the error in the evolution of the GH and hydrodynamics equations. The left panel shows the violation of the gauge constraint, integrated over the entire computational domain. We see clear convergence with resolution before merger. At merger, we do not necessarily expect convergence, as we introduce a fixed source of error at the boundary of the region where we allow matter to evolve. In the right panel we plot the maximum value of the baryon density on the computational domain. Errors in the evolution of the fluid equations typically lead to a slow decrease in the value of the maximum density during inspiral, in addition to the physically expected decrease of the maximum density as each star is tidally distorted by its companion. We indeed see higher dissipation for P_4, and convergence of the evolution of the maximum density for P_5, P_6, P_7 up to contact, at least in a time-averaged sense (i.e. ignoring the oscillations of the stars that do not remain exactly in phase at all resolutions, and lead to out-of-phase oscillations of the central density). Finally, we show the first binary neutron star gravitational waveforms extracted using Cauchy-Characteristic Evolution (CCE) <cit.>. In figure <ref> we plot the real part of the (2,2) mode of the strain h using an extraction worldtube located at r=200M at the three highest resolutions. We see convergence with increasing resolution of the Cauchy evolution, while the CCE discretization errors are negligible by comparison.
We evolve the CCE equations in spherical coordinates, with the evolved variables expanded radially using a Legendre-Gauss-Lobatto basis and in spin-weighted spherical harmonics in the angular directions. The characteristic evolution uses a fifth-order Adams-Bashforth local time stepper <cit.> with an absolute error tolerance of 10^-8 and a relative tolerance of 10^-6 for the spin-weighted spherical harmonic variables, and a relative error tolerance of 10^-7 for the coordinate variables. We use l_max=20 for the expansion in spin-weighted spherical harmonics and filter out the top 2 modes. The radial grid uses 15 grid points and an exponential filter as in Eq. <ref> with a=35 and b=64. The CCE data on the initial slice is determined using the conformal factor method <cit.>. The CCE quantities like the strain, news, and Weyl scalars are output at ℐ^+ up to and including l_max=8. CCE also needs data on a worldtube from the Cauchy evolution as radial boundary conditions for the system. This data is kept at a resolution of l_max=16. We can ignore the effects of matter in our characteristic evolution because the extraction radius lies in a region containing only atmosphere. The effects of strongly gravitating matter near the extraction radii on the characteristic evolution have not been studied yet. We also perform a supertranslation using the <cit.> and <cit.> packages to set the strain to zero at retarded time zero. No time or phase alignment is done. While the code also outputs the news and all the Weyl scalars, we leave a careful analysis of CCE waveforms from BNS mergers as future work, pending a more detailed understanding of how various resolution, control system, extraction radii, and other choices affect the accuracy and convergence of the waveforms. These simulations clearly demonstrate that the DG-FD hybrid scheme is capable of accurately evolving binary neutron star systems up to and through merger, as long as sufficiently high-order methods are used in the envelope and wave zone. Note that the two codes have distinct (dis)advantages in the context of binary neutron star evolutions. The pseudospectral code allows for very cost-effective evolutions of the binary thanks to spectral domains adapted to the geometry of the system. However, the use of a small number of large elements prevents it from scaling beyond 𝒪(100) processors at low resolution. Additionally, attempting to go to higher resolution with it leads to simulations whose cost is dominated by a few elements with large numbers of basis functions, typically situated around the surfaces of the stars. This leads to extremely poor load-balancing, and has thus far limited our ability to perform higher resolution simulations. The DG-FD code, on the other hand, is less cost-effective at low resolutions but can leverage a larger number of processors and, by using FD methods close to the surface of the star, is less sensitive to high-frequency noise in that region. In the simulations presented here, which use pseudospectral domains optimized over more than 10 years of experimentation but very unoptimized DG-FD domain decompositions, the higher-resolution pseudospectral simulation cost 14.2k CPU-hrs on the Wheeler cluster at Caltech to reach t=11 ms. Wheeler has two 12-core Intel Xeon E5-2680 v3 CPUs with a base clock of 2.50 GHz per node. The P_5 simulation, with an FD grid spacing 5% coarser at the location of the neutron stars, cost 27.8k CPU-hrs but used 288 cores instead of 120. The P_7 simulation used 117k CPU-hrs.
This cost increase can be compared to the expected scaling for a finite difference method (with cost ∝Δx^-4), which would predict a cost of 95k CPU-hrs for the P_7 simulation. Similarly, the P_6 simulation cost 52k CPU-hrs to reach the same time, while the ∝Δx^-4 scaling predicts a cost of 54k CPU-hrs. These are already promising numbers that we expect will improve with better parallelization of the DG-FD hybrid algorithm. Specifically, we expect on-the-fly redistribution of elements to different processes (since FD elements are significantly more expensive than DG elements) and more optimized domain decompositions to significantly reduce the cost. Whether the DG-FD code is able to outperform the pseudospectral code's CPU time is currently unclear. However, one of the DG-FD code's primary goals is to reduce wall time by scaling to more processors. § CONCLUSIONS In this paper we gave a detailed description of our DG-FD hybrid method that can successfully solve challenging general relativistic astrophysics problems in dynamical spacetimes, including the simulation of a neutron star, a rotating neutron star, and a binary neutron star merger. Our method combines an unlimited DG solver with a conservative FD solver. Alternatively, this can be thought of as taking a standard FD code in numerical relativity and compressing the data to a DG grid wherever the solution is smooth. The DG solver is more efficient than the FD solver since no reconstruction is necessary and fewer Riemann problems need to be solved. The algorithm presented here is an extension of our previous work in static spacetimes <cit.>. The basic idea is that an unlimited DG solver is used wherever a troubled-cell indicator deems the DG solution admissible, while an FD solver is used elsewhere. Unlike classical limiting strategies like WENO <cit.>, which attempt to filter out unphysical oscillations, the hybrid scheme prevents spurious oscillations from entering the solution. This is achieved by retaking any time step using a robust high-resolution shock-capturing conservative FD scheme where the DG solution was inadmissible, either because the DG scheme produced unphysical results like negative densities, or because a numerical criterion like the percentage of power in the highest modes deemed the DG solution bad. Our DG-FD hybrid scheme was used to perform the first simulations of a rotating neutron star and of a binary neutron star merger using DG methods. We show the first gravitational waveforms obtained from binary neutron star mergers using Cauchy-Characteristic Evolution <cit.>, though we leave a detailed analysis of the waveforms to future work[Cauchy-Characteristic Evolution currently does not take the effects of matter passing through the worldtube into account and so long-term post-merger wave extraction will require careful study.]. In the future we plan to improve our handling of curved meshes to allow tracking outflows in the post-merger phase, incorporate constrained transport for ensuring ∂_i B^i=0 <cit.>, use local adaptive time stepping with a linear multi-step method <cit.>, and add adaptive mesh refinement (e.g. <cit.>), dynamic continuous load-balancing, and an optimized domain decomposition. Charm++/Converse <cit.> was developed by the Parallel Programming Laboratory in the Department of Computer Science at the University of Illinois at Urbana-Champaign. The figures in this article were produced with <cit.>, <cit.> and <cit.>. Computations were performed with the Wheeler cluster at Caltech and the mbot cluster at Cornell.
This work was supported in part by the Sherman Fairchild Foundation and by NSF Grants No. PHY-2309211, No. PHY-2309231, and No. OAC-2209656 at Caltech, and NSF Grants No. PHY-2207342 and No. OAC-2209655 at Cornell. F.F. gratefully acknowledges support from the Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-05CH11231, from NASA through grant 80NSSC22K0719, and from the NSF through grant AST-2107932. M.D. gratefully acknowledges support from the NSF through grant PHY-2110287 and support from NASA through grant 80NSSC22K0719. GL and MSB acknowledge support from NSF award PHY-2208014, the Dan Black Family Trust and Nicholas and Lee Begovich. ERM acknowledges support by the National Science Foundation under Grant No. AST-2307394 and PHY-2309210, the NSF Frontera supercomputer under grant AST21006, and Delta at the National Center for Supercomputing Applications (NCSA) through allocation PHY210074 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. ERM further acknowledges support on Perlmutter through NERSC under grant m4575. P.K. acknowledges support of the Department of Atomic Energy, Government of India, under project no. RTI4001, and by the Ashok and Gita Vaish Early Career Faculty Fellowship at the International Centre for Theoretical Sciences. § REFERENCES unsrt
http://arxiv.org/abs/2406.19396v1
20240627175958
SimLOB: Learning Representations of Limited Order Book for Financial Market Simulation
[ "Yuanzhe Li", "Yue Wu", "Peng Yang" ]
cs.CE
[ "cs.CE" ]
§ ABSTRACT Financial market simulation (FMS) serves as a promising tool for understanding market anomalies and the underlying trading behaviors. To ensure high-fidelity simulations, it is crucial to calibrate the FMS model so that it generates data closely resembling the observed market data. Previous efforts primarily focused on calibrating the mid-price data, leading to essential information loss about the market activities and thus biasing the calibrated model. The Limit Order Book (LOB) data is the fundamental data fully capturing the market micro-structure and is adopted by worldwide exchanges. However, LOB is not applicable to existing calibration objective functions because its tabular structure does not suit their vectorized input requirement. This paper proposes to explicitly learn the vectorized representations of LOB with a Transformer-based autoencoder. The latent vector, which captures the major information of the LOB, can then be applied for calibration. Extensive experiments show that the learned latent representation preserves not only the non-linear auto-correlation in the temporal axis, but also the precedence between successive price levels of the LOB. Besides, it is verified that the performance of the representation learning stage is consistent with that of the downstream calibration tasks. Thus, this work also advances FMS on LOB data for the first time. § INTRODUCTION One important and challenging issue in financial market data analysis is to discover the causes of abnormal market phenomena, e.g., flash crashes <cit.> and spoofing <cit.>. Traditional machine learning methods are well-equipped to detect the anomalies but lack financial interpretability of the causes <cit.>. Given observed market time series data, financial market simulation (FMS) aims to directly approximate its underlying data generative process, informed by the trading rules of the real market <cit.>. By simulating the underlying trading activities behind the observed market data, FMS is able to both discover how the structure of the financial market changed over time and explain the changes on the micro level <cit.>. Generally, FMS models the market as a parameterized multi-agent system M(𝐰) that mimics various types of traders and the exchange as different agents. Each trading agent only interacts with the exchange agent, who implements the real trading rules of the market. Since the 1990s, diverse trading agents have been designed to simulate chartists, fundamentalists, momentum traders, high-frequency traders and so on <cit.>, covering most of the trading behaviors recognized in real markets. By running the system, the trading agents continuously submit orders to the exchange agent, and the exchange produces the simulated market data by matchmaking those orders. Normally, the exchange publicizes the newest market data 𝐱̂(t) at a certain frequency (e.g., 1 second). Hence, the FMS can be viewed as a market data generative process, denoted as M(𝐰)=𝐗^𝐰_T={𝐱^𝐰(t)}_t=1^T for any length T∈ℕ^+. To simulate any given observed financial market data 𝐗̂_T = {𝐱̂(t)}_t=1^T, FMS requires the simulated data 𝐗^𝐰_T to closely resemble the observed data 𝐗̂_T, which forces M(𝐰) to approximate the underlying data generative process of 𝐗̂_T. This largely relies on the careful calibration of M(𝐰) by tuning the parameters 𝐰 to minimize the discrepancy between the two data sequences, denoted as D(𝐗̂_T, M(𝐰))=D(𝐗̂_T, 𝐗^𝐰_T).
Unfortunately, the calibration problem is non-trivial due to the highly non-linear interactions among the simulated agents <cit.>. In recent years, various advanced methods have emerged from both the fields of optimization and statistical inference <cit.>. The former minimizes the discrepancy using black-box optimization methods <cit.>, while the latter estimates the likelihood or posterior of 𝐗̂_T and 𝐰 on the selected samples where D(𝐗̂_T, M(𝐰)) < ϵ <cit.>. In both ways, the calibration is mostly guided by the discrepancy. Intuitively, the more information observed from the market is included in 𝐗̂_T, the closer the approximation of M(𝐰) to the underlying data generative process can be expected to be. In the literature, almost all FMS works merely consider 𝐗̂_T as the mid-price data <cit.>, ignoring the other important observable information, e.g., trading volumes, bid/ask directions, and order inter-arrival times. Consequently, the calibrated M(𝐰) cannot capture the full dynamics of the real data generative process from which 𝐗̂_T is generated. To our best knowledge, this is the first work to calibrate FMS with respect to the Limit Order Book (LOB) data, the fundamental market data widely adopted in most of the world-class securities exchanges <cit.>. The LOB is a complex data structure that continuously records all the untraded orders submitted by all traders. In the LOB, all the untraded orders fall into either the ask side (for selling) or the bid side (for buying) according to their order directions. On each side, the untraded orders are organized in the "price-first-time-second" manner <cit.>. When a new order comes, the LOB is updated immediately in one of two possible ways: if the new order's price matches any opposite price of the LOB, it will be traded and the corresponding untraded orders will be deleted from the LOB; otherwise, the new order remains untraded and is inserted into its own side of the LOB. Note that the untraded orders implicitly reflect the trading intentions of the whole market. Besides, the immediate updates of the LOB represent the most fine-grained dynamics of the market micro-structure. On this basis, simulating LOB data conceptually defines the closest approximation to the real data generative process of the market scenario of interest. Unfortunately, it is not straightforward to apply LOB data to existing discrepancy measures. Representative discrepancies like Euclidean distances <cit.> and probabilistic distances <cit.> all require 𝐗̂_T and 𝐗^𝐰_T to be vectorized inputs, while a T-step LOB is normally publicized in the form of a 10 × 4 × T matrix. That is, at each time step t, the best 10 price levels on both sides of the LOB, as well as the associated total volumes of the untraded orders, are output as 𝐱̂(t) ∈ℝ^10 × 4. More specifically, 𝐱̂(t)=[[p^b_1(t),v^b_1(t),p^a_1(t),v^a_1(t)];...;[p^b_10(t),v^b_10(t),p^a_10(t),v^a_10(t)]], where p_i and v_i denote the i-th price level and the associated total volume, and the superscripts a and b indicate the ask side and bid side (see Figure <ref>), respectively. Existing FMS works that calibrate the 1 × T mid-price can be viewed as simplifying the LOB by averaging the best ask price and the best bid price at the t-th step, i.e., m̂p̂(t)=(p^a_1(t)+p^b_1(t))/2∈ℝ^1, which naturally loses much important information about the underlying data generative process. This paper aims to learn a low-dimensional vectorized representation of the LOB data with an autoencoder, so that the well-established discrepancy measures can still be easily adopted for calibration.
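To make the data layout concrete, a small sketch of a single LOB snapshot and of the mid-price simplification is shown below; the numbers are made up for illustration and are not data from the paper.

    import numpy as np

    # One snapshot x(t): 10 levels x [bid price, bid volume, ask price, ask volume].
    snapshot = np.array(
        [[100.0 - i, 300 + 10 * i, 100.5 + i, 280 + 10 * i] for i in range(10)]
    )                                   # shape (10, 4); illustrative values only

    # A T-step LOB is a (10, 4, T) array; here we simply repeat the snapshot.
    T = 5
    lob = np.repeat(snapshot[:, :, None], T, axis=2)

    # The traditional calibration target keeps only the mid-price, the average of
    # the best bid p^b_1(t) and best ask p^a_1(t), discarding the rest of the book.
    mid_price = 0.5 * (lob[0, 0, :] + lob[0, 2, :])   # shape (T,)
    print(mid_price)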
Ideally, if the output of the decoder closely resembles the input of the encoder, the latent vector is believed to be an effective representation of the input LOB. The key challenge lies in that the LOB involves not only the non-linear auto-correlation in the temporal axis but also the precedence between successive price levels at each time step. The literature witnesses increasing efforts on analyzing LOB data with neural networks, which, however, neither adopt the autoencoder framework nor explicitly study the vectorized representations. To accurately represent these two properties in the latent vector, the proposed encoder as well as the symmetric decoder contains three blocks, i.e., a fully connected network for extracting features of the price precedence, a Transformer stack for capturing the temporal auto-correlation, and another fully connected network for dimension reduction. Three groups of experiments have been conducted. The empirical findings are as follows: Finding 1: Calibrating LOB leads to better FMS than traditional mid-price calibration; Finding 2: The vectorized representations of various LOB can be effectively learned, while the convolution layers adopted in existing works is not effective; Finding 3: Better representation of LOB, better calibration of FMS. The rest of this paper is organized as follows. Section 2 introduces the related works of FMS calibration and deep learning for LOB. Section 3 describes the proposed Transformer-based autoencoder in details. Section 4 reports 3 key empirical findings. The conclusion is drawn in Section 5. § RELATED WORK Calibration of FMS. Traditional FMS works focus on designing rule-based agents to imitate various trading behaviors. Though several phenomena generally existed in different stocks have been reproduced and explained by FMSs <cit.>, they received increasing criticisms for not being able to simulate any specific time series data but only macro properties of the market <cit.>. Since 2015, several calibration objectives have been proposed for FMS <cit.>. A straightforward way <cit.> is to simply calculate the mean square error (MSE) between the simulated mid-price vector and the observed mid-price vector. Methods of Simulated Moments calculate the statistical moments of both the observed and simulated mid-price, and measure the distances between the two vectors of moments as the discrepancy <cit.>. To better utilize the temporal properties of the mid-price vector, some information-theoretic criteria based discrepancies are proposed based on various time windows <cit.>. Francesco <cit.> tries to compares the distribution distance between the simulated and observed mid-price data. The Kolmogorov-Smirnov (K-S) test is generalized to multi-variate data, while their empirical verification was only conducted on the mid-price vector and the traded volume vector <cit.>. To summarize, existing calibration functions all require the vectorized input format and related works mainly calibrate to the mid-price data. Deep Learning for LOB. Most of the studied tasks are to predict the price movement <cit.>, a binary classification problem. Since classification with neural networks is known to be effective, the end-to-end solutions are naturally considered for predicting the price movement from LOB and the vectorized representation learning is not explicitly defined. 
In the contrary, since the FMS models are mostly rule-based (especially the matchmaking rule of updating the LOB), they are non-differentiable and cannot be trained end-to-end with the feature extraction layers. Thus, the vectorized representation learning need especial treatments in FMS. Recent works propose to generate LOB data with deep learning, which is quite similar to the goal of FMS <cit.>. However, they do not involve any validation or calibration steps to force the generated data resembling any given observed LOB, but only limited to follow some macro properties of the market <cit.>. Some of them also are not informed by the real-world matchmaking rules for updating LOB <cit.>. These issues impose significant restrictions on the use of these models as simulators, since one can hardly intervene the order streams to do "what-if" test and reveal the micro-structured causes of certain financial events <cit.>. Furthermore, they explicitly require the ground truth order streams as input to train the networks, which is not available in the setting of FMS calibration problems. Among them, the convolution layers and Long-Short Term Memory (LSTM) are the mostly used architecture. Recently, the Transformer model is adopted instead of LSTM, but the convolution layers are still kept <cit.>. This work argues for abandoning the convolution layer as it is empirically found ineffective to learning from LOB. § METHOD To feed the LOB data to the well-established calibration functions D who only accept the vectorized input, we propose to learn a function f that can represent the time series LOB data in a vectorized latent space. The requirement of f is two-fold. First, f naturally constitutes a dimension reduction process. Suppose f receives a τ step LOB 𝐗̂_τ∈ℝ^10× 4×τ, it is expected that f(𝐗̂_τ) ∈ℝ^1×τ̃ with τ̃∈ℕ^+ and τ̃≪ 40τ. Second, within the latent space of ℝ^1×τ̃, the information of LOB can be largely preserved for downstream calibration tasks. One verification is the existence of an inverse function g: ℝ^1×τ̃→ℝ^10× 4×τ such that g(f(𝐗̂_τ)) resembles the original 𝐗̂_τ. To this end, the autoencoder architecture is considered, where f constitutes the encoder network and g is modeled by the decoder. The latent vector 𝐙∈ℝ^1×τ̃ between the encoder and the decoder is the learned representation of 𝐗̂_τ, i.e., 𝐙=f(𝐗̂_τ), and g(𝐙) is the reconstructed data. The Properties of LOB. The difficulty of the representation learning lies in the specific properties of LOB. First, the time series LOB is essentially auto-correlated between successive time steps, since 𝐱̂_t+1 is generated by updating 𝐱̂(t) with incoming order streams between the time interval [t,t+1]. Second, 𝐱̂(t) comprises 10 levels of price and volume on both ask and bid sides. The price levels should strictly follow the precedence that prices at lower levels are worse than those at higher levels. That is, the bid/ask price at the i-th level should be certainly larger/smaller than the i+1-th level, where i ∈ℕ^+ and i≤ 9. Hence, the encoder network should be able to effectively handle both the non-linear temporal auto-correlation and the precedence of price levels. On this basis, the proposed encoder is comprised of three components, i.e., the feature extraction block as a fully connected network (FCN, denoted as FCN_1), a Transformer stack, and the dimension reduction block as another FCN (denoted as FCN_2). The architecture of the encoder is depicted in Figure <ref>. The Feature Extraction Block. 
The FCN_1 is designed for extracting useful features from LOB. Previous works that utilize machine learning methods for processing LOB data often pre-process a set of hand-crafted features to describe market dynamics <cit.>. Those works typically consider the price levels as independent features and seldom deal with the precedence relationship. Recent pioneer work <cit.> uses convolution neural networks (CNNs) to automatically extract the features from LOB. The intuition is that CNN may capture the precedence between the price levels through convolutions like what has been done to the pixels. However, the 10 levels of bid prices, bid volumes, ask prices, and ask volumes, have quite different scales and meanings, which is non-trivial to be convolved adequately. This work utilizes the FCN for feature extraction by 𝐡_0=FCN_1(𝐗̂_τ). First, the LOB at each time step is flattened from 𝐱̂(t)=[[p^b_1(t),v^b_1(t),p^a_1(t),v^a_1(t)];...;[p^b_10(t),v^b_10(t),p^a_10(t),v^a_10(t)]] to [p^b_1(t),v^b_1(t),p^a_1(t),v^a_1(t),...,p^b_10(t),v^b_10(t),p^a_10(t),v^a_10(t)]^𝖳 using 1 linear layer. That is, the flattened 𝐱̂(t) is into ℝ^40 × 1. The times series LOB 𝐗̂_τ is accordingly flattened in ℝ^40 ×τ and then projected into higher-dimensional space of 𝐡_0 ∈ℝ^256 ×τ to obtain a more diverse range of features, using also 1 linear layer. The Transformer Block. The second component of the encoder is the Transformer stack with L linked vanilla Transformers <cit.>. It aims to learn the temporal auto-correlation between successive time steps with the multi-headed self-attention (MSA), feed-forward layer (or say FCN), and layernorm (LN). Specifically, at each l-th stacked Transformer, it computes 𝐡_l^' =MSA(LN(𝐡_l-1))+𝐡_l-1, l = 1, …, L 𝐡_l =FCN(LN(𝐡_l^'))+𝐡_l^', l = 1, …, L where 𝐡_L is the output of the whole Transformer stack as well as the input of FCN_2. The Dimension Reduction Block. Note that, the Transformer does not change the data space and thus 𝐡_l ∈ℝ^256 ×τ, l=0,...,L. The FCN_2 is designed to reduce the dimensionality of 𝐡_L so that the LOB can be represented as the latent vector 𝐙∈ℝ^1×τ̃. For that purpose, we need to first flatten 𝐡_L as a vector using 1 linear layer. Notice that, direct concatenating each of 256 rows of 𝐡_L as a 1 × 256τ vector will lead to too large input for the successive layers of dimension reduction. Hence, we first project 𝐡_L ∈ℝ^256 ×τ to 𝐡_L+1∈ℝ^40 ×τ with 1 linear layer. Then the 𝐡_L+1 is flattened by concatenating its 40 rows to produce a vector 𝐡_L+2∈ℝ^1 × 40τ using 1 linear layer. At last, 3 linear layers are carried out to reduce 𝐡_L+2 to 𝐙={z(t)}^τ̃_t=1. The Decoder. The architecture of the decoder keeps symmetric to the encoder. The latent vector 𝐙 first passes a network upside-down of FCN_2 to increase its dimension back to ℝ^256 ×τ. Subsequently, the output of FCN_2 undergoes a stack of L vanilla Transformers. Finally, the output of the Transformers stack goes through another FCN that is upside-down of FCN_1 to obtain the reconstructed data 𝐗^r_τ={𝐱^r(t)}^τ_t=1=g(f(𝐗̂_τ)) ∈ℝ^10× 4×τ. Implementation Details. The proposed autocoder is named as SimLOB. To keep the size of SimLOB computationally tractable, the observed T steps LOB is first split into multiple segments, each of which contains τ=100 time steps as suggested by <cit.>. Through sensitive analysis in Appendix <ref>, we set L=2 and τ̃=128 by default. 
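A minimal PyTorch-style sketch of an encoder with this three-block structure is given below. The number of attention heads and the sizes of the intermediate dimension-reduction layers are not specified above and are therefore assumptions; the sketch illustrates the architecture rather than reproducing the authors' exact implementation.

    import torch
    import torch.nn as nn

    class EncoderSketch(nn.Module):
        """FCN_1 (feature extraction) -> Transformer stack -> FCN_2 (dimension reduction)."""
        def __init__(self, tau=100, d_model=256, latent=128, n_layers=2, n_heads=8):
            super().__init__()
            # FCN_1: flatten each 10x4 snapshot to 40 values, then lift to d_model features.
            self.fcn1 = nn.Sequential(nn.Linear(40, 40), nn.Linear(40, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=4 * d_model,
                                               batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
            # FCN_2: project back to 40 features per step, flatten, reduce to the latent vector.
            self.proj = nn.Linear(d_model, 40)
            self.reduce = nn.Sequential(nn.Linear(40 * tau, 1024),   # assumed hidden sizes
                                        nn.Linear(1024, 512),
                                        nn.Linear(512, latent))

        def forward(self, lob):                      # lob: (batch, tau, 10, 4)
            b, tau = lob.shape[0], lob.shape[1]
            h = self.fcn1(lob.reshape(b, tau, 40))   # per-step tokens of size d_model
            h = self.transformer(h)                  # temporal auto-correlation
            h = self.proj(h).reshape(b, -1)          # (batch, 40 * tau)
            return self.reduce(h)                    # (batch, latent)

    z = EncoderSketch()(torch.randn(2, 100, 10, 4))
    print(z.shape)                                   # torch.Size([2, 128])

The decoder would mirror this structure upside-down, mapping the latent vector back to a (tau, 10, 4) reconstruction.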
Given a pair of an original 𝐗̂_τ and its reconstructed 𝐗^r_τ with the length of τ=100 steps, the reconstruction error calculates 𝐄𝐫𝐫_r=1/4000∑^10_i∑^4_j∑^100_t (𝐱̂_i,j(t) - 𝐱^r_i,j(t))^2. And 𝐋𝐨𝐬𝐬_r=∑^M_m=1𝐄𝐫𝐫^m_r/M defines the training loss given M pairs of training data sequences, by averaging the reconstruction errors on the training data with batch size M=128. SimLOB is trained by Adam <cit.> with learning rate 1E-4 for 200 epochs. § EMPIRICAL STUDIES The following three research questions (RQs) are majorly concerned in this work. RQ-1: Can we learn to represent various LOB into vectors? What is the recommended architecture? RQ-2: Does "better representation of LOB, better calibration of FMS" generally hold? RQ-3: Is it really beneficial to calibrate FMS with LOB instead of traditional mid-price? Three groups of experiments are conducted accordingly. Group-1: the proposed SimLOB is trained with large volume of synthetic LOB data, together with 8 state-of-the-art (SOTA) networks. The reconstruction errors on both synthetic and real testing data are measured to assess the quality of the vectorized representations. The parameter sensitivity of SimLOB is also analyzed in terms of both reconstruction and calibration. Group-2: for the above 9 networks, the represented latent vectors are applied to 10 FMS calibration tasks to show the consistency between the reconstruction errors and the calibration errors. Group-3: the calibration errors in terms of the vector representation learned by SimLOB, the raw LOB and the extracted mid-price are compared. §.§ General Experimental Setup The FMS Model. The widely studied Preis-Golke-Paul-Schneid (PGPS) model is adopted as the FMS model <cit.>, which models all the traders as two types of agents: 1) 125 liquidity providers who submit limited orders; 2) 125 liquidity takers who submit market orders and cancel untraded limited orders in the LOB. The detailed workflow and configurations can be found in Appendix <ref>. In short, the PGPS model contains 6 parameters to be calibrated. The Dataset for Representation Learning. In the literature, augmenting the training data with the synthetic LOB data generated by FMS is increasingly popular <cit.>. Hence, this work trains the SimLOB with purely the synthetic LOB data generated by the PGPS model with different parameters. In general, each setting of the 6-parameter tuple of PGPS actually defines a specific market scenario. To ensure the synthetic data enjoys enough diversity, we uniformly randomly sample 2000 different settings of those 6-parameter tuples from a predefined range suggested by <cit.> (see Appendix <ref>). The PGPS runs with each setting to generate one simulated LOB for 50000 time steps, simulating 50000 seconds of the market. Then we split each of the 2000 LOB data as 500 sequences with 100 time steps. Thus, there are 1 million LOB sequences with τ=100, where 80% of those synthetic data is randomly sampled for training, and the rest 20% is taken as testing data. The real market data from the sz.000001 stock of Chinese market (during May of 2019) is also used to test the trained networks. The Compared Networks. In the literature, several networks have been designed for analyzing LOB data <cit.>. Although their tasks main focus on a different task of price trend prediction, they are indeed attempts of dealing with time series LOB. This experiment selects 7 SOTA networks by taking their architecture before the last linear layer as their representation learning layers, and all share τ̃=128. 
We follow the conventions of <cit.> to name the 8 networks as MLP<cit.>, LSTM<cit.>, CNN1<cit.>, CNN2<cit.>, CNN-LSTM<cit.>, DeepLOB<cit.>, TransLOB<cit.>. As their names suggested, despite that MLP, LSTM, CNN1, and CNN2 adopt single type networks, the other works mostly employ a Convolution-Recurrent based architecture to intuitively first extract the prices precedence and then capture the temporal auto-correlation. TransLOB employs the Transformer architecture to replace the recurrent network while still keeping the convolution layers. Their input formats and sizes follow the suggestions of their original papers and are shown in Appendix <ref>. TransLOB-L is an enlarged version of TransLOB to keep the size approximately aligned with SimLOB, by using 7 stacked Transformers instead of 2 as in original TransLOB. All the 8 compared networks are trained in the same protocol with SimLOB. The Calibration Tasks. The 10 synthetic LOB data are generated with 10 different settings of 6-parameter tuples (see Appendix <ref>), each of which contains T=3600 time steps, i.e., simulating 1 hour of the market at the frequency of 1 second. These 10 synthetic data are utilized as the target data for calibrating PGPS. Note that these data instances are challenging that they have a much higher frequency than traditional FMS works who can only calibrate to daily data <cit.>. And the length of the target data is also at least 10× longer than the existing calibration works <cit.>. The objective function for calibration employs MSE. For calibrating the mid-prices as traditional works do, Eq.(<ref>) gives D(𝐗̂_T, M(𝐰))=∑^⌈T/τ⌉_i=1∑^τ_t=1 (m̂p̂((i-1)τ+t) - mp^𝐰((i-1)τ+t))^2/⌈T/τ⌉, where M(𝐰)=𝐗_t={𝐱(t)}^T_t=1. For calibrating the learned latent vectors as this paper proposes, the calibration function is slightly different. Both the target and simulated LOB with T time steps is represented as ⌈T/τ⌉ latent vectors, each of which has the length of τ̃. Then we have Eq.(<ref>) D(𝐗̂_T, M(𝐰))=∑^⌈T/τ⌉_i=1∑^τ̃_t=1(ẑ((i-1)τ̃+t) - z^𝐰((i-1)τ̃+t)))^2/⌈T/τ⌉, where τ=100 and τ̃=128. The calibration problem can be generally defined as min_𝐰 D(𝐗̂_T, M(𝐰)). For simplicity, a standard Particle Swarm Optimizer (PSO) is employed to solve it <cit.>, and the detailed settings of PSO is given in Appendix <ref>. §.§ Group-1: Learning the Vectorized Representations of LOB The reconstruction errors on 0.2 million synthetic testing data of all 9 compared algorithms are depicted in Figure <ref>. Among them, SimLOB not only achieves the smallest averaged reconstruction error, but performs the stablest. Specifically, the distribution of the reconstruction errors of SimLOB follows a power law distribution with a mode of 0.003 (See Figure <ref> of Appendix <ref>). Almost all the CNN-based networks perform the worst, suggesting that CNN is not effective as expected to deal with LOB data. This is quite contradictory to the trend of existing works on deep learning for LOB. Next, as shown in the first 4 rows of Figure <ref>, the 10 levels of bid prices of 4 randomly selected synthetic data and the reconstructed LOB of 4 networks are visualized. For the visualizations on more data and networks, please refer to Figure <ref> of Appendix <ref>. It is clear that the reconstructed data of SimLOB successfully captures many details of the target LOB, especially the fluctuations and the precedence of the 10 price levels. Comparatively, the other networks perform quite poorer as their reconstructed price levels are smoothed. 
This implies that they do not preserve the information across both time steps and price levels. Note that these networks are trained purely on synthetic data. We directly apply them to reconstruct real market data. As seen in the last 4 rows of Figure <ref>, the real LOB fluctuates only slightly compared to the synthetic data. SimLOB can still capture the information of price precedence well and the fluctuations to some extent, though the range of prices deviates. In contrast, the compared networks all fail to reconstruct meaningful details. This suggests that SimLOB can be further generalized to more diverse LOB data with special treatments, such as training with real data. The structural settings of SimLOB are analyzed in Appendix <ref>, which suggests L=2 and τ̃=128. §.§ Group-2: Better Representation, Better Calibration Each of the 10 synthetic LOB data sequences with T=3600 is first passed through the 9 compared networks to obtain the corresponding latent vectors. Then we apply Eq.(<ref>) and PSO to PGPS to obtain the best-found parameter tuple and the corresponding simulated data. For each network, the calibration error between the target data and the simulated data is calculated using 𝐄𝐫𝐫_r. The detailed calibration errors are listed in Table <ref>. The better result of TransLOB and TransLOB-L is listed here. Among all the learned representations, the ones learned by SimLOB lead to the best calibration performance on 7 out of 10 instances and the runner-up performance on data 1 and data 9. The distribution of the calibration errors (outliers omitted) in Figure <ref> shows the advantages of SimLOB more clearly. Furthermore, by comparing Figure <ref> and Figure <ref>, it can be observed that the reconstruction errors and the calibration errors are consistent, which implies that the better the representation of the LOB is, the better the calibration of the FMS is likely to be. §.§ Group-3: Calibrating Vectorized LOB is Beneficial To demonstrate that calibrating LOB is beneficial to FMS, the above 10 synthetic LOB data are also calibrated using the traditional objective function Eq.(<ref>) with merely the mid-prices, denoted as Cali-midprice. Furthermore, one may wonder why we do not directly use the reconstruction error 𝐄𝐫𝐫_r as the calibration function to calibrate the raw LOB. Hence, we also calibrate PGPS to the raw LOB, denoted as Cali-rawLOB. Table <ref> in Appendix <ref> shows that SimLOB almost dominates Cali-midprice and Cali-rawLOB. The simulated mid-prices of SimLOB, Cali-rawLOB, and Cali-midprice on the 10 target data are also depicted in Figure <ref>. It can be intuitively seen that the simulated mid-price of SimLOB resembles the target data much more closely than those of Cali-rawLOB and Cali-midprice. Comparing SimLOB with Cali-rawLOB, 𝐄𝐫𝐫_r essentially measures the MSE between two very long vectors of length 40τ=4000 on each segment. Though it can tell the difference between two raw LOB sequences, it does not reflect the importance of the key properties of temporal auto-correlation and price precedence. Compared to Cali-midprice, SimLOB helps PGPS achieve much better performance on not only the whole LOB but also the mid-price, simply because the mid-price is basically a derivative of the LOB. To summarize, calibrating with more information about the market can help improve the fidelity of the simulation, but this requires effective latent representations. This supports the initial motivation of this work.
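To make the calibration setup of Groups 2 and 3 concrete, a minimal sketch of the latent-space objective Eq.(<ref>) is given below; here encoder stands for a trained SimLOB encoder and simulate for one run of the PGPS model, both placeholders, and a black-box optimizer such as the PSO described in the appendix would minimize calibration_objective over the 6 PGPS parameters.

    import numpy as np

    TAU, LATENT = 100, 128

    def encode_segments(lob, encoder):
        """Split a (10, 4, T) LOB into T / TAU segments and encode each one."""
        n_seg = lob.shape[2] // TAU          # assumes T is a multiple of TAU (e.g. 3600)
        z = np.zeros((n_seg, LATENT))
        for i in range(n_seg):
            z[i] = encoder(lob[:, :, i * TAU:(i + 1) * TAU])   # length-128 latent vector
        return z

    def calibration_objective(params, target_lob, encoder, simulate):
        """D(target, M(w)): squared latent distance per segment, averaged over segments."""
        simulated_lob = simulate(params, n_steps=target_lob.shape[2])   # placeholder FMS run
        z_target = encode_segments(target_lob, encoder)
        z_sim = encode_segments(simulated_lob, encoder)
        return np.mean(np.sum((z_target - z_sim) ** 2, axis=1))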
§ CONCLUSIONS This paper proposes to learn the vectorized representations of the LOB data, the fundamental data in financial market. The autoencoder framework is employed for this purpose, and the latent vector is taken as the representation. In the literature, there have been increasing research works focusing on analyzing LOB data with neural networks. However, they did not use the autoencoder framework, neither explicitly learned the vectorized representations. This work studies 8 SOTA neural architectures for LOB, and discusses that the commonly used convolution layers are not effective for preserving the specific properties of LOB. Based on that, a novel neural architecture is proposed with three components: the first fully connected network for extracting the features of price precedence from LOB, the stacked Transformers for capturing the temporal auto-correlation, and the second fully connected network for reducing the dimensionality of the latent vector. Empirical studies verify the advantages of the proposed network in terms of the reconstruction errors against existing neural networks for LOB. Moreover, it is found that the effectiveness of the representation learning positively correlated with the downstream tasks of calibrating financial market simulation model. Further empirical findings support that the financial market simulation models should be calibrated with compactly represented LOB data rather than merely mid-prices or raw LOB data. ieeetr § COMPUTATIONAL RESOURCES The experiments run on a server with 250GB memory, the Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz with 20 physical cores, and 2 NVIDIA RTX A6000 GPU. The training of each network on average requires around 50 hours in the data parallel manner with 2 A6000 GPU. The calibration of each LOB data sequence costs 2-3 hours with 20 CPU cores or say 40 threads. The simulator can only run on a single core, while the PSO can be run in parallel with 40 simulators. § THE PGPS FINANCIAL MARKET SIMULATION MODEL §.§ The Details of PGPS The Preis-Golke-Paul-Schneid (PGPS) model is a representative FMS model <cit.>, which models all the traders in the market as two types of agents: 125 liquidity providers and 125 liquidity takers. The entire simulation flowchart can be seen in Figure <ref>. The detailed settings are as follows. At each t-th time step, each liquidity provider may submit a limited order with a fixed probability of α. The probability of each limited order being either the bid side or ask side is set to 0.5. Each liquidity taker may submit a market order at a fixed probability μ and cancel an untraded limited order with a probability δ. The probability of the market order being either the bid side or ask side is q_taker(t) and 1-q_taker(t), respectively. The q_taker(t) is determined by a mean-reverting random walk with a mean of 0.5, and the mean reversion probability equals to 0.5 + |q_taker(t) - 0.5|. The increment size towards the mean is controlled by ±Δ s. Each liquidity taker may also cancel an untraded limited order with a probability δ. For each order, the volume is fixed to 100 shares. The price of an ask limited order is determined by p^a_1(t) - ⌊ -λ(t) log u)⌋ - 1 and the price of a bid limited order is calculated as p^b_1(t) - ⌊ -λ(t) log u)⌋ + 1, where λ(t) = λ_0(1 + |q_taker(t) - 0.5|/√(⟨ q_taker - 0.5⟩^2)C_λ). Here ⟨ q_taker - 0.5⟩^2 indicates a pre-computed value obtained by taking the average of 10^5 Monte Carlo iterations of (q_taker - 0.5)^2 before simulation. 
u ∼ U(0,1) is a uniform random number. The price of the market order is automatically set to the best price level at the opposite side. In summary, the PGPS model contains 250 agents and the key parameters to be calibrated are 𝐰=[δ, λ_0, C_λ, Δ_s, α, μ]. §.§ Generating Synthetic LOB by Randomly Sampling 6 PGPS Parameters The ranges of the parameters tuple 𝐰 are listed in Table <ref>, as suggested by <cit.>. The 10 calibration tasks with respect to synthetic data are also generated by randomly sampling from these ranges and simulating with M(𝐰). Here, we list the 10 groups of parameters in Table <ref> for clarity. § THE CHARACTERISTICS OF THE COMPARED NETWORKS The characteristics of the 9 compared networks are listed as in Table <ref>. The second and third columns give the acceptable input formats of each network, and the last column shows the sizes of the networks. The MLP has the largest size of 2.2E7 weights, which can be simply considered as the enlarged version of solely the proposed FCN_2. In this regard, by comparing SimLOB with MLP, we know that the Transformer stacks are important to SimLOB. By comparing SimLOB and TransLOB-L, we know that more Transformer does not lead to better results (verified later in Figure <ref> of Appendix <ref>), the CNN should be abandoned and the linear layers of FCN_1 and FCN_2 are helpful. §.§ The Calibration Algorithm Note that M(𝐰) is basically a software simulator. In this work, it is programmed on the Multi-Agent Exchange Environment (MAXE)[https://github.com/maxe-team/maxe] environment developed by University of Oxford with the MIT Lisence <cit.>. Thus the calibration problem does not enjoy useful mathematical properties like gradients. It is reasonable that more advanced black-box optimization algorithms can lead to better calibration errors. Here, for simplicity, a standard Particle Swarm Optimizer (PSO) is employed for optimizing the above problem <cit.>. The hyper-parameters of PSO follow the suggested configurations, where the population size is set to 40, the inertia weight is set to 0.8, the cognitive and social crossover parameters c1 = 0.5, c2 = 0.5. The total iteration number of PSO in each run is fixed to 100 <cit.>. § MORE RESULTS OF THE RECONSTRUCTION ERRORS The trained SimLOB is tested on 0.2 million LOB sequences with 100 time steps. The distribution of the reconstruction error 𝐄𝐫𝐫_r on each testing data is depicted in Figure <ref>, which follows a power law distribution with a mode of 0.003, implying that SimLOB performs quite stable on different LOB data. In addition to Figure <ref> in the main context, we depict the reconstructed data of the compared networks on 5 more synthetic data to demonstrate the superiority of SimLOB. In Figure <ref>, the reconstructed data of all the compared networks except for TransLOB and TransLOB-L are visualized. The situation remains the same that the reconstructed data of SimLOB resembles the target data to the most. The other networks have smoothed the target data to some extent, indicating that they are unable to fully preserve the key properties of LOB in their latent vectors. The reason of not depicting TransLOB and TransLOB-L is simply that their reconstructed data look like almost flat. § SENSITIVE ANALYSIS ON THE STRUCTURAL PARAMETERS OF SIMLOB To analyze the architecture settings in SimLOB, we made the following counterparts of SimLOB. The number of stacks in Transformer block is adjusted as 2, 4, 6 and 8, respectively. 
§ MORE RESULTS OF THE RECONSTRUCTION ERRORS The trained SimLOB is tested on 0.2 million LOB sequences with 100 time steps. The distribution of the reconstruction error 𝐄𝐫𝐫_r on the testing data is depicted in Figure <ref>; it follows a power-law distribution with a mode of 0.003, implying that SimLOB performs quite stably on different LOB data. In addition to Figure <ref> in the main text, we depict the reconstructed data of the compared networks on 5 more synthetic data sequences to demonstrate the superiority of SimLOB. In Figure <ref>, the reconstructed data of all the compared networks except TransLOB and TransLOB-L are visualized. The situation remains the same: the reconstructed data of SimLOB resembles the target data the most. The other networks have smoothed the target data to some extent, indicating that they are unable to fully preserve the key properties of LOB in their latent vectors. The reason for not depicting TransLOB and TransLOB-L is simply that their reconstructed data look almost flat. § SENSITIVITY ANALYSIS ON THE STRUCTURAL PARAMETERS OF SIMLOB To analyze the architecture settings in SimLOB, we construct the following counterparts of SimLOB. The number of stacks in the Transformer block is adjusted to 2, 4, 6 and 8, respectively. The length of the latent vector τ̃ is set to 64, 128, 256, and 512, respectively. The resulting variants of SimLOB are denoted as SimLOB_τ̃=64, SimLOB_τ̃=128, SimLOB_τ̃=256, and SimLOB_τ̃=512, where SimLOB with τ̃=128 is found to be the best choice and is used as the default setting. §.§ On the Number of Transformer Stacks In the main text, the number of Transformer stacks is recommended as L=2. With L=2,4,6,8, we found that the choice of L did not influence the reconstruction errors much, as shown in Figure <ref>. That is, the reconstruction errors on the 0.2 million testing data are around 0.025 for the 4 variants of SimLOB, and the standard deviation remains similar, i.e., about 0.095. As adding one Transformer stack introduces around 1.5 million more weights, we use the smallest tested value of L=2 as the default setting for SimLOB. §.§ On the Length of the Representation Vector The default setting of the length of the learned representation vector is τ̃=128. This setting is based on a sensitivity analysis of the parameter τ̃ with the choices 64, 128, 256, and 512. That is, we train SimLOB with those settings separately and test on the 0.2 million testing data. The reconstruction errors are listed in Table <ref>. It is shown that the longer the learned representation is, the better the reconstruction errors that can be obtained. This immediately suggests using a larger τ̃ for representation learning. On the other hand, if we look closely at Figure <ref>, this advantage decreases quickly after τ̃=128. At the same time, larger lengths make the downstream tasks more complex and do not necessarily lead to better calibration errors (see Table <ref>), where τ̃=128 yields the best calibration errors on 5 out of 10 instances. Thus, by balancing the reconstruction errors and the complexity, we choose τ̃=128, which already outperforms the SOTA, as the default setting for SimLOB. § COMPARISONS AMONG SIMLOB, CALI-RAWLOB, AND CALI-MIDPRICE ON 10 SYNTHETIC DATA To demonstrate that calibrating to LOB is beneficial to FMS, the above 10 synthetic LOB data sequences are also calibrated using the traditional objective function Eq.(<ref>) with merely mid-prices, denoted as Cali-midprice. Furthermore, one may wonder why not directly use the reconstruction error 𝐄𝐫𝐫_r as the calibration objective on the raw LOB. Hence, we also calibrate PGPS to the raw LOB, denoted as Cali-rawLOB. Table <ref> shows that SimLOB almost dominates Cali-midprice and Cali-rawLOB, winning on 9 out of 10 instances. On data 2, the result of SimLOB is also very competitive with Cali-rawLOB and far better than Cali-midprice. Between Cali-rawLOB and Cali-midprice, it is hard to tell which is significantly better. This may be the reason why traditional FMS works did not calibrate to raw LOB data.
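To make the three calibration objectives being compared concrete, the sketch below contrasts them. It assumes mean-squared-error distances and hypothetical `encoder`, `mid_price`, and simulated/target LOB arrays; the paper's exact definitions of Eq.(<ref>) and 𝐄𝐫𝐫_r may differ in normalization.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def cali_midprice_loss(sim_lob, target_lob, mid_price):
    # traditional FMS calibration: match only the mid-price series
    return mse(mid_price(sim_lob), mid_price(target_lob))

def cali_rawlob_loss(sim_lob, target_lob):
    # Cali-rawLOB: compare the raw LOB snapshots directly
    return mse(sim_lob, target_lob)

def simlob_loss(sim_lob, target_lob, encoder):
    # proposed: compare the compact latent representations learned by SimLOB
    return mse(encoder(sim_lob), encoder(target_lob))
```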
http://arxiv.org/abs/2406.19006v1
20240627084531
VideoMambaPro: A Leap Forward for Mamba in Video Understanding
[ "Hui Lu", "Albert Ali Salah", "Ronald Poppe" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Video understanding requires the extraction of rich spatio-temporal representations, which transformer models achieve through self-attention. Unfortunately, self-attention poses a computational burden. In NLP, Mamba has surfaced as an efficient alternative to transformers. However, Mamba's successes do not trivially extend to computer vision tasks, including those in video analysis. In this paper, we theoretically analyze the differences between self-attention and Mamba. We identify two limitations in Mamba's token processing: historical decay and element contradiction. We propose VideoMambaPro (VMP), which solves the identified limitations by adding masked backward computation and elemental residual connections to a VideoMamba backbone. VideoMambaPro shows state-of-the-art video action recognition performance compared to transformer models, and surpasses VideoMamba by clear margins: 7.9% and 8.1% top-1 on Kinetics-400 and Something-Something V2, respectively. Our VideoMambaPro-M model achieves 91.9% top-1 on Kinetics-400, only 0.2% below InternVideo2-6B but with only 1.2% of its parameters. The combination of high performance and efficiency makes VideoMambaPro an interesting alternative to transformer models. Code is available at https://github.com/hotfinda/VideoMambaPro. § INTRODUCTION Video understanding is a challenging task, requiring models that can extract rich spatio-temporal representations from video inputs. Transformers are powerful neural networks capable of effectively capturing temporal and spatial information from videos <cit.>. Therefore, most current state-of-the-art models for video understanding are based on transformers <cit.>. At the core of transformers is self-attention <cit.>, which learns the self-alignment between tokens in an input sequence by estimating the relative importance of a given token with respect to all other tokens. This long-range token dependency accounts for much of the success of transformer models <cit.>. However, the cost involved in computing self-attention is high, which eventually limits the application of powerful transformer models in practical settings <cit.>. Recently, alternative models with lower-cost operators have been proposed in the language processing domain, including S4 <cit.>, RWKV <cit.>, and RetNet <cit.>. Among these methods, Mamba <cit.> shows the best performance on long-range and causal tasks such as language understanding <cit.> and content-based reasoning <cit.>. Motivated by the favorable computational cost, researchers have recently extended Mamba from the NLP domain to the computer vision domain. The core adaptation involves splitting the input image into multiple regions and embedding these as continuous tokens <cit.>. For video understanding, the recently proposed VideoMamba <cit.> extracts key frames from videos as the continuous input sequence. However, compared to previous transformer-based methods, VideoMamba's performance on video benchmarks is significantly lower. For example, VideoMamba achieves 82.4% top-1 on Kinetics-400, compared to 85.2% for VideoMAE <cit.>, indicating room for improvement. In this paper, we first analyze differences in the feature extraction capabilities of transformers and Mamba. We identify two limitations of Mamba when applied to video understanding: historical decay and element contradiction. We then extend VideoMamba to mitigate these limitations.
The proposed VideoMambaPro addresses historical decay through masked backward computation in the bi-directional Mamba process, allowing the network to better handle historical tokens. To tackle element contradiction, we introduce residual connections to Mamba's matrix elements. VideoMambaPro consistently improves the performance of VideoMamba on video understanding tasks, positioning it as a strong, efficient competitor to transformers. In summary, our contributions are: * We derive a formal representation of Mamba from the perspective of self-attention and identify two limitations of Mamba in the video analysis domain. * We propose VideoMambaPro, which effectively addresses the identified limitations present in Mamba for the video understanding task. * We report strong performance on video action recognition benchmarks compared to state-of-the-art transformer methods, and surpass the original VideoMamba by clear margins. We first discuss related work. Then, we provide our theoretical analysis, before introducing the VideoMambaPro architecture. Experiments are summarized in Section <ref> and we conclude in Section <ref> § RELATED WORK Transformers. One core aspect of transformers is self-attention <cit.>. It achieves long-range interactions by measuring the similarity between tokens. Self-attention was introduced in the computer vision domain for tasks such as image recognition <cit.> and object detection <cit.>. Subsequent works (e.g., <cit.> extended vision transformers to the video domain, to achieve superior performance. However, the mechanism of self-attention, which relies on similarity measurement, introduces significant computational overhead. The bulk of the computational cost arises from matrix multiplications for all input tokens with each other. Alternative models. Recent work has introduced alternative models with reduced computational complexity, while maintaining the advantages of self-attention <cit.>. SOFT <cit.> propose to utilize Gaussian kernel function to replace the dot-product similarity, which enables a full self-attention matrix to be approximated via a low-rank matrix decomposition. Combiner <cit.> proposes to utilize the structured factorization to approximate full self-attention, realizing low computation and memory complexity. RWKV <cit.> combines parallel self-attention training with efficient recurrent neural network (RNN) inference using a linear attention mechanism. It proposes a model architecture called Receptance Weighted Key Value (RWKV) to achieve parallel computation and constant-level computational and memory complexity. RetNet <cit.> contains another variant of self-attention, by dividing the input into multiple chunks. Within each chunk, the self-attention mechanism can be computed in parallel, while information is transmitted between chunks based on an RNN. The S4 model completely abandons self-attention and, instead, builds upon a state space model <cit.>. Instead of performing individual matrix multiplications for tokens to obtain a similarity matrix, it enables the network to directly learn a global HiPPO (high-order polynomial projection operator) matrix to handle relations between tokens. Additionally, for the simultaneous input of multiple tokens, S4 proposes a convolutional processing approach, enabling parallel training and thereby accelerating the training process. Based on S4, Mamba <cit.> proposes a selection mechanism where, for each input token, a unique HiPPO matrix <cit.> is generated. 
This allows the model to selectively process input tokens, enabling it to focus on or ignore specific inputs. Due to Mamba's strong representation ability in NLP, and linear-time complexity, it has garnered widespread attention as a promising alternative to transformers. In the computer vision domain, researchers have proposed Vision Mamba <cit.> and VMamba <cit.> for tasks such as image classification and object detection. In the video domain, VideoMamba <cit.> has been proposed. However, its performance is lower than expected, with limited understanding of the causes. We argue that a systematic, mathematical analysis of Mamba from the perspective of self-attention could reveal shortcomings of Mamba's inner workings. Better understanding of these limitations allow us to develop improvements, and to close the accuracy performance gap with transformers, while enjoying the efficiency of Mamba. § THEORETICAL ANALYSIS First, we revisit Mamba from the perspective of self-attention. Then, we analyze its limitations for video understanding. We propose VideoMambaPro to address these limitations in Section <ref>. §.§ Mamba from the perspective of self-attention Self-attention. Given an input sequence 𝑋 := [𝑥_1, ⋯, 𝑥_𝑁 ] ∈ℝ^N × D_x of N feature vectors of depth D_x, self-attention <cit.> computes the output sequence 𝐘 from 𝑋 following two steps: Step 1: Compute similarity matrix. The input sequence 𝑋 is linearly projected onto three different subspaces: query 𝐐∈ℝ^N × D, key 𝐊∈ℝ^N × D, and value 𝐕∈ℝ^N × D_V: 𝐐 = 𝑋𝐖_Q^⊤; 𝐊 = 𝑋𝐖_K^⊤; 𝐕 = 𝑋𝐖_V^⊤; with 𝐖_Q,𝐖_K ∈ℝ^D × D_x, and 𝐖_V ∈ℝ^D_v × D_x the corresponding weight matrices. Specifically, 𝐐 := [𝑞_1, ⋯, 𝑞_𝑁 ]^⊤, 𝐊 := [𝑘_1, ⋯, 𝑘_𝑁 ]^⊤, and 𝐕 := [𝑣_1, ⋯, 𝑣_𝑁 ]^⊤ with vectors 𝑞_𝑖, 𝑘_𝑖, 𝑣_𝑖 for i = 1, ⋯, N the query, key, and value vectors, respectively, for input vector i. Based on 𝐐 and 𝐊, similarity matrix 𝐒∈ℝ^N × N contains the correlations between all query and key vectors: 𝐒 = softmax(𝐐𝐊^⊤/√(D)) The softmax function is applied to each row of the matrix (𝐐𝐊^⊤/√(D)). The similarity matrix 𝐒 can be denoted as: 𝐒 = [ s_11 s_12 s_13 ⋯ s_1N; s_21 s_22 s_23 ⋯ s_2N; s_31 s_32 s_33 ⋯ s_3N; ⋯ ⋯ ⋯ ⋱ ⋮; s_N1 s_N2 s_N3 ⋯ s_NN; ] where each component s_ij (i, j = 1,⋯,N) represents the similarity score between 𝑞_𝑖 and 𝑘_𝑗. Step 2: Compute output based on similarity matrix. Output sequence 𝐘 := [𝑦_1, ⋯, 𝑦_𝑁 ]^⊤ is then calculated as: 𝐘 = 𝐒𝐕 Following this equation, each output vector 𝑦_𝑖 (i = 1,⋯,N) can be written in vector form as: 𝑦_𝑖 = ∑_j=1^Ns_ij𝑣_𝑗 Any output vector 𝑦_𝑖 is a linear combination of vectors 𝑣_𝑗 (j = 1,⋯,N), with similarity score s_ij serving as coefficient. The larger the similarity score, the greater the influence of 𝑣_𝑗 on 𝑦_𝑖 <cit.>. Mamba. State Space Models (SSMs) serve as the foundation of Mamba <cit.>, and they are based on continuous systems that map a 1D function or sequence, x(t) ∈ℝ^L→ y(t) ∈ℝ^L to output sequence y(t) through a hidden state h(t) ∈ℝ^N. Formally, SSM implements the mapping as[The original SSM <cit.> employs h'(t) = Ah(t) + Bx(t), with h(t) the hidden state inherited from the previous time step t-1, and h'(t) represents the updated current hidden state, replacing h(t). Considering this approach may lead to ambiguity, we have adopted the updated description as in Mamba to avoid ambiguity.]: h(t) = 𝐀h(t-1) + 𝐁x(t), y(t) = 𝐂h(t) where 𝐀∈ℝ^N × N is the evolution matrix of the system, and 𝐁∈ℝ^N × 1, 𝐂∈ℝ^N × 1 are the projection matrices. Often, inputs are discrete rather than a continuous function x(t). 
Therefore, Mamba performs discretization, effectively creating a discrete version of the continuous system. A timescale parameter Δ is used to transform the continuous parameters 𝐀, 𝐁 into their discrete counterparts 𝐀, 𝐁, and the transformation typically employs the zero-order hold method <cit.>. This process is expressed as: 𝐀 = exp(Δ𝐀), 𝐁 = (Δ𝐀)^-1 (exp(Δ𝐀) - 𝐈) ·Δ𝐁 h_t = 𝐀h_t-1 + 𝐁x_t, y_t = 𝐂h_t. Considering that parameters 𝐀,𝐁,𝐂 in the original SSM are independent of the input data x(t) and cannot be tailored to specific input data, Mamba employs a Selective Scan Mechanism as its core operator. More precisely, three functions S_B(x), S_C(x), S_Δ(x) are introduced to associate parameters 𝐁,𝐂,Δ in Equations <ref>–<ref> to the input data x. Based on S_Δ(x), 𝐀 can also be associated with the input data x. For example, given the input x_1, functions S_Δ(x) will produce the corresponding 𝐀_1 based on Equation <ref>, and functions S_B(x) and S_Δ(x) will produce the corresponding 𝐁_1 based on Equation <ref>. 𝐂_1 is obtained based on function S_C(x). Following Equations <ref> and  <ref>, we analyze the process to obtain output sequence 𝐘 when given an input sequence 𝑋 := [𝑥_1, ⋯, 𝑥_𝑁 ] ∈ℝ^N × D_x of N feature vectors. The hidden state of each vector is denoted as: h_1 = 𝐁_1 𝑥_1 h_2 = 𝐀_2 h_1+ 𝐁_2 𝑥_2= 𝐀_2𝐁_1 𝑥_1+𝐁_2 𝑥_2 h_3 = 𝐀_3h_2+ 𝐁_3 𝑥_3 = 𝐀_3𝐀_2𝐁_1 𝑥_1+𝐀_3𝐁_2 𝑥_2 + 𝐁_3 𝑥_3 ⋯ h_N = 𝐀_Nh_N-1+ 𝐁_N 𝑥_𝑁=𝐀_N𝐀_N-1⋯𝐀_2𝐁_1 𝑥_1 + 𝐀_N𝐀_N-1⋯𝐀_3𝐁_2 𝑥_2 + 𝐀_N 𝐀_N-1⋯𝐀_4𝐁_3 𝑥_3 + ⋯⋯+ 𝐀_N𝐁_N-1𝑥_𝑁-1 + 𝐁_N𝑥_𝑁 Equations <ref> – <ref> can be written in the following matrix form: 0.85𝐇 = [h_1, h_2, h_3,⋯, h_N ]^⊤ = [ 𝐁_1 0 0 ⋯ 0; 𝐀_2 𝐁_1 𝐁_2 0 ⋯ 0; 𝐀_3𝐀_2𝐁_1 𝐀_3𝐁_2 𝐁_3 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 𝐀_N𝐀_N-1⋯𝐀_2𝐁_1 𝐀_N𝐀_N-1⋯𝐀_3𝐁_2 𝐀_N 𝐀_N-1⋯𝐀_4𝐁_3 ⋯ 𝐁_N ][ 𝑥_1; 𝑥_2; 𝑥_3; ⋮; 𝑥_𝑁 ] For output sequence 𝐘 := [𝑦_1, ⋯, 𝑦_𝑁 ]^⊤, each vector 𝑦_𝑖 (i = 1, ⋯, N) can be expressed as: 𝑦_𝑁 = 𝐂_Nh_N And in matrix form as: 𝐘 = [ 𝐂_1 0 0 ⋯ 0; 0 𝐂_2 0 ⋯ 0; 0 0 𝐂_3 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 𝐂_N; ][ h_1; h_2; h_3; ⋮; h_N ] By substituting Equation <ref> into Equation <ref>, we obtain the following expression: 0.72𝐘 = [ 𝐂_1 0 0 ⋯ 0; 0 𝐂_2 0 ⋯ 0; 0 0 𝐂_3 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 𝐂_N; ][ 𝐁_1 0 0 ⋯ 0; 𝐀_2 𝐁_1 𝐁_2 0 ⋯ 0; 𝐀_3𝐀_2𝐁_1 𝐀_3𝐁_2 𝐁_3 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 𝐀_N𝐀_N-1⋯𝐀_2𝐁_1 𝐀_N𝐀_N-1⋯𝐀_3𝐁_2 𝐀_N 𝐀_N-1⋯𝐀_4𝐁_3 ⋯ 𝐁_N ][ x_1; x_2; x_3; ⋮; x_N ] Which can be expressed as: 𝐘 = 𝐂 (𝐌𝑋) where 𝐂 and 𝐌 represent the first and second term on the right-hand side of Equation <ref>, respectively. Recall Equation <ref> that the result 𝐘 obtained by self-attention processing can be expressed as: 𝐘 = 𝐒𝐕 = (𝐒𝑋) 𝐖_V^⊤ From the perspective of self-attention, by comparing Equations <ref> and <ref>, the essence of Mamba is to generate a matrix 𝐌 similar to similarity matrix 𝐒, such that the result of 𝐌𝑋 is also based on the correlation between vectors of 𝑋. Although the final result of 𝐌𝑋 is left multiplied by a mapping matrix 𝐂, while the result of 𝐒𝑋 is right multiplied by a mapping matrix 𝐖_V^⊤, the geometric meanings of the two are the same. §.§ Limitations of Mamba in video understanding From the perspective of self-attention, the concept of Mamba is similar: both center around similarity matrices. We now analyze the differences between the similarity matrices of Mamba and self-attention, and discuss the limitations of Mamba in the context of the video understanding task. Limitation 1: Historical decay. 
Matrix 𝐌 in Equation <ref> corresponds to the second right-hand term in Equation <ref>, which is a lower triangular matrix of the form: 1.0𝐌 = [ m_11 0 0 ⋯ 0; m_21 m_22 0 ⋯ 0; m_31 m_32 m_33 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; m_N1 m_N2 m_N3 ⋯ m_NN ] By comparing 𝐌 with matrix 𝐒 in self-attention, we find that outputs in Mamba favor more recent information, because the more weights are zero the earlier the token is observed. For example, for input [𝑥_1, 𝑥_2,𝑥_3], the output 𝐌𝑥_1 in Mamba is m_11𝑥_1 while the output 𝐒𝑥_1 is s_11𝑥_1 + s_12𝑥_2 + s_13𝑥_3 in self-attention. This indicates that, in Mamba, the influence of earlier observed tokens on the final result is greatly diminished. We refer to this limitation as historical decay. In the NLP domain, more recent dialogue information often has more impact on the final judgment, so this effect is acceptable. However, in the computer vision domain, the order of the tokens has less meaning. Previous works such as Vision Mamba <cit.> and VMamba <cit.> have partly mitigated this issue by processing the token sequence in both forward and backward directions. This produces better results but no work has explained why this is effective. When processing bi-directionally, the results generated from input forward tokens [𝑥_1, ⋯,𝑥_𝑁 ], denoted as 𝐌_f𝑋, and the results generated from input backward tokens [𝑥_𝑁, ⋯,𝑥_1 ], denoted as 𝐌_b𝑋, are linearly combined to generate the final result 𝐌_bi𝑋 with 𝐌_bi being a dense matrix. As a result, the influence of historical information on the result is increased, consequently leading to better results. For example, for the input tokens [𝑥_1, 𝑥_2,𝑥_3 ], 𝐌_f𝑋 and 𝐌_b𝑋 can be expressed as: 𝐌_f𝑋 = [ f_11 0 0; f_21 f_22 0; f_31 f_32 f_33 ][ 𝑥_1; 𝑥_2; 𝑥_3 ] = [ h_1f; h_2f; h_3f ], 𝐌_b𝑋 = [ b_33 0 0; b_23 b_22 0; b_13 b_12 b_11 ][ 𝑥_3; 𝑥_2; 𝑥_1 ] = [ h_3b; h_2b; h_1b ] where f_ij represents the similarity score during the forward process, and b_ij is the similarity score in the backward direction. After bi-directional computation, with the outputs linearly combined, the results are expressed as: h_1 = h_1f+h_1b = f_11𝑥_1 + b_13𝑥_3+b_12𝑥_2+b_11𝑥_1 h_2 = h_2f+h_2b = f_21𝑥_1 + f_22𝑥_2+ b_23𝑥_3+b_22𝑥_2 h_3 = h_3f+h_3b = f_31𝑥_1 + f_32𝑥_2 + f_33𝑥_3 + b_33𝑥_3 We can write Equation <ref> in matrix form: [ h_1; h_2; h_3; ] = [ f_11+b_11 b_12 b_13; f_21 f_22+b_22 b_23; f_31 f_32 f_33+b_33; ][ 𝑥_1; 𝑥_2; 𝑥_3; ] = 𝐌_bi[ 𝑥_1; 𝑥_2; 𝑥_3; ] The bi-directional computation transforms the original matrix 𝐌 from a lower triangular matrix to a dense matrix 𝐌_bi, thereby capturing more historical information and effectively avoiding the historical decay. When extending to the case of N input tokens [𝑥_1, ⋯,𝑥_𝑁 ], 𝐌_bi can be written as 𝐌_bi= [ f_11+b_11 b_12 b_13 ⋯ b_1N; f_21 f_22+b_22 b_23 ⋯ b_2N; f_31 f_32 f_33+b_33 ⋯ b_3N; ⋮ ⋮ ⋮ ⋱ ⋮; f_N1 f_N2 f_N3 ⋯ f_NN+b_NN; ] The diagonal elements of 𝐌_bi contain duplicates of the similarity between a token and itself. For example, f_33 and b_33 each represent the similarity between token 𝑥_3 and itself. Consequently, the similarity is effectively doubled which weakens the association with other tokens. One possible approach is to adjust 𝐌_f and 𝐌_b using a weight coefficient z through a linear combination. However, learning such a parameter z that weakens the diagonal elements without affecting other elements might be challenging. Limitation 2: Element contradiction. 
By analyzing the non-zero elements m_ij in 𝐌 of Equation <ref>, it can be summarized that: m_ij=𝐀_im_i-1 j After multiple iterations, the above equation results in implicit consideration of the correlation between previous tokens and token j when computing the correlation between token i and token j. As a result, m_ij exhibits stronger contextual dependencies compared to the elements s_ij in the matrix 𝐒. This might explain why Mamba achieves better performance than transformers in the field of NLP. While this is advantageous in the NLP domain, for the computer vision domain, input tokens often lack semantic connections. The consideration of the influence of other tokens on each element, can lead to significant drawbacks. We often observe an interleaved token structure when processing images. Tokens that “belong together” might not be subsequently processed. For example, in an image classification task, input tokens [𝑥_1, 𝑥_2,𝑥_3 ] might represent image regions [dog, other, dog]. Ideally, m_31 should be a high value, and m_21 should be low. According to Equation <ref>, m_31=𝐀_3m_21, which requires the network to set 𝐀_3 to a high value to meet the requirement on m_31. However, in doing so, m_32=𝐀_3m_22 would also become larger because m_22 is also high. But, theoretically, m_32 should be low. This leads to an element contradiction. Especially for video understanding, such contradictions are common because most video regions contain background and other irrelevant information, making relevant tokens sparse. Consequently, the performance of Mamba applied to video analysis tasks is underwhelming <cit.>. § VIDEOMAMBAPRO We propose two adaptations to VideoMamba <cit.> to address the two identified limitations: historical decay and element contradiction. The resulting architecture is termed VideoMambaPro (VMP). To address historical decay, we keep the result of 𝐌_f𝑋 unchanged but we use masked computation during the backward process. Specifically, we assign a mask to the diagonal elements of 𝐌_b, setting their values to 0, and then proceed with the calculations in Equations <ref>–<ref>. We thus eliminate the duplicate similarity on the diagonal, without affecting other elements. The final 𝐌_bi is expressed as: M_bi= [ f_11 b_12 b_13 ⋯ b_1N; f_21 f_22 b_23 ⋯ b_2N; f_31 f_32 f_33 ⋯ b_3N; ⋮ ⋮ ⋮ ⋱ ⋮; f_N1 f_N2 f_N3 ⋯ f_NN; ] To solve the element contradiction issue, we propose residual SSM, which is inspired by the idea of residual connections to distribute the requirement for 𝐀_i in m_ij across multiple 𝐀_i. This helps to avoid contradictions caused by interleaved sequence structures. For example, for the previous mentioned input sequence [𝑥_1, 𝑥_2,𝑥_3 ], which represents image regions [dog, other, dog], we let m_31=𝐀_3m_21+𝐀_3. This way, the requirement for a single 𝐀_3 can be split into two parts, thus avoiding contradictions. This can be expressed as: m_ij=𝐀_im_i j-1+𝐀_i Based on these two solutions, we propose our VideoMambaPro framework, based on VideoMamba <cit.> and illustrated in Figure <ref>. Given an input video 𝑋^v ∈ℝ^3 × T × H × W, we first use a 3D convolution with a 1×16×16 size kernel to convert 𝑋 ^v into L non-overlapping patch-wise tokens 𝑋 ^p ∈ℝ^L × C, where L = t × h × w (t = T, h = H/16, w = W/16. Because SSM is sensitive to token positions, and in line with VideoMamba, we include a learnable spatial position embedding 𝑝_𝑠∈ℝ^(hw+1) × C and a temporal position embedding 𝑝_𝑡∈ℝ^t × C. 
Input tokens 𝑋 are expressed as: 𝑋 = [𝑋_cls, 𝑋] + 𝑝_𝑠 + 𝑝_𝑡 where 𝑋_cls is a learnable classification token positioned at the beginning of the sequence. The input tokens 𝑋 pass through K Mamba blocks, and the final layer's [CLS] token is used for video classification, after normalization and linear projection. § EXPERIMENTS We introduce experiment setup, followed by our main results and comparisons with the state-of-the-art. In Section <ref> and <ref> we compare against VideoMamba model variants and presents statistical tests, respectively. We investigate the effect of each of the two innovations separately in an ablation study in Section <ref>. Finally, we analyze the computation cost in Section <ref>. §.§ Experimental setup Datasets. We evaluate VideoMambaPro on five video benchmarks: (a) Kinetics-400 (K400) <cit.>. K400 comprises ∼240K training and ∼20K validation videos, each with an average duration of 10 seconds and categorized into 400 classes. (b) Something-Something V2 (SSv2) <cit.> includes ∼160K training and ∼20K validation videos. The videos in SSv2 have an average duration of 4 seconds and there are 174 motion-centric classes. (c) UCF-101 <cit.> is a relatively small dataset, consisting of ∼9.5K training and ∼3.5K validation videos. (d) HMDB51 <cit.> is also a compact video dataset, containing ∼3.5K training and ∼1.5K validation videos. (e) AVA <cit.> is a dataset for spatio-temporal localization of human actions with ∼211k and ∼57k validation video segments. Implementation. In line with VideoMamba, we introduce three models with increasing embedding dimension and number of bi-directional Mamba blocks: Tiny, Small, and Middle (details in the supplementary material). To compare with VideoMamba, we pre-train VideoMambaPro on ImageNet-1K (IN-1K). On K400, we also pre-train with IN-1K and CLIP-400M. We fine-tune on the benchmark's training set and report on the validation set. During pre-training, we follow DeiT <cit.> and we apply a center crop to obtain the 224^2 size images. We apply random cropping, random horizontal flipping, label-smoothing regularization, mix-up, and random erasing as data augmentations. We use AdamW <cit.> with a momentum of 0.9, a batch size of 1024, and a weight decay of 0.05. We employ a cosine learning rate schedule during training, 1 × 10^-3 initial learning rate over 300 epochs. The fine-tuning settings follow VideoMAE <cit.>. We resize frames to 224^2, and use AdamW with a momentum of 0.9 and a batch size of 512. Details in the supplementary materials. §.§ Comparison with state-of-the-art K400. Results appear in Table <ref>. Compared to VideoMamba, our model has slightly fewer parameters and FLOPs. This is primarily because VideoMamba employs an additional projection layer to generate the weight coefficient z to adjust 𝐀_f and 𝐀_b. See the supplementary materials for an architecture comparison. VideoMambaPro outperforms VideoMamba significantly. When pre-trained only on IN-1K, the best-performing VideoMambaPro-M achieves a top-1 accuracy of 90.3%, 7.9% higher than VideoMamba-M. When additionally pre-training on CLIP-400M, we report a top-1 performance of 91.7, surpassing all transformer models except the recent InternVideo2-6B <cit.>. The latter scores 0.4% better but is trained on more data. With 1.2% of the parameters, VideoMambaPro is much more efficient. The number of FLOPs for InternVideo2 is not reported but compared to the previous InternVideo <cit.>, inference takes only ∼5.3% of the FLOPs, while performing 0.8% better. 
Finally, when we increase the input size, we narrow the gap to InternVideo2-6B to 0.2%. SSv2. Results appear in Table <ref>. VideoMambaPro outperforms VideoMamba by 6.7–8.1%. It also outperforms several popular transformer models. Only InternVideo <cit.> and InternVideo2-6B <cit.> perform 0.8 and 1.1% better, respectively, but with more pre-training, 18.8–85.6 times more parameters, and at least 40 times more FLOPs. In line with the results for K400, we expect that the performance of VideoMambaPro will increase with more pre-training. UCF-101/HMDB51/AVA V2.2. Table <ref> shows that VideoMambaPro-M is competitive and outperforms VideoMamba by 4.2% and 11.6% on UCF-101 and HMDB51, respectively. When pre-trained only on IN-1K, VideoMambaPro-M achieves 42.2 mAP on AVA V2.2, which is 1.1% lower than Hiera-H <cit.> but with an order of magnitude fewer parameters and FLOPs (see Table <ref>). §.§ Comparison with VideoMamba on K400 In Table <ref>, we compare VideoMamba and VideoMambaPro across model size, pre-training data, and input size. The improvements of VideoMambaPro are systematic, in the range of 6.2–8.2%. Interestingly, the performance increase for the larger, better-performing models is not smaller, despite the reduced margin for improvement. More quantitative results appear in the supplementary materials. We investigate the relative performance per class when using VideoMambaPro-M with 224^2 image size pre-trained on the IN-1K dataset, compared to a VideoMamba-M baseline with the same settings. We show the relative performance for all classes of Kinetics-400 in Figure <ref>. For over 95% of the classes, VideoMambaPro shows improvement. Although there is a lower performance for certain classes, the decrease is typically limited. The majority of the classes sees a 6-10% improvement, which is substantial. For a small number of classes, VideoMambaPro performs >10% better than VideoMamba. §.§ Statistical comparison between VideoMamba and VideoMambaPro results In order to understand whether the improvements of VideoMambaPro over VideoMamba are statistically significant, we compare the results of the respective Middle models, both pre-trained on ImageNet-1K and with a spatial input size of 224 × 224. Other settings are also the same. For each test sample, we check whether it is correctly classified by either model, irrespective of the class. The summary of these results appears in Table <ref>. Based on these results, we calculate the McNemar test, which is a non-parametric test with a single degree of freedom. Essentially, it checks whether the number of items that are incorrectly classified by VideoMambaPro-M but not VideoMamba is substantially lower than the number of items misclassified by VideoMamba but not VideoMambaPro. The McNemar test is calculated as χ^2 = (n_01 - n_10)^2/(n_01 + n_10) with n_01 corresponding to the number of items that were misclassified by VideoMamba but not VideoMambaPro, and n_10 the number of items that were correctly classified by VideoMamba but misclassified by VideoMambaPro. These numbers correspond to 2,075 and 502, respectively. Based on the Chi-square distribution, the resulting value of approximately 960 corresponds to a significance level of p < 0.001. We can thus conclude that VideoMambaPro-M is statistically significantly better than VideoMamba. Because we relied on the performance reported in papers for other methods, we cannot report statistical comparisons here.
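The test statistic can be reproduced directly from the two discordant counts quoted above. A minimal sketch follows; the critical value 10.828 is the standard χ² threshold for one degree of freedom at p = 0.001.

```python
def mcnemar_chi2(n01: int, n10: int) -> float:
    """McNemar statistic (1 degree of freedom) from the two discordant counts."""
    return (n01 - n10) ** 2 / (n01 + n10)

# n01: fixed by VideoMambaPro but missed by VideoMamba; n10: the reverse
chi2 = mcnemar_chi2(n01=2075, n10=502)
critical_p001 = 10.828          # chi-square critical value, df = 1, p = 0.001
print(f"chi^2 = {chi2:.1f}, significant at p < 0.001: {chi2 > critical_p001}")
```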
§.§ Ablation: masked backward computation and residual connections We have identified two limitations that exist in VideoMamba, historical decay and element contradiction, and have introduced masked backward computation and residual connections to address these, respectively. Here, we examine the contribution of each innovation separately and jointly. We use the same settings as in the previous section, with VideoMambaPro-M and pre-training on the IN-1K+CLIP-400M dataset. We summarize the performance of VideoMambaPro-M on Kinetics-400 in Table <ref>. Both innovations result in improvements over the VideoMamba-M baseline. Using residual connections in the bi-directional Mamba block increases the top-1 accuracy by 3.3%, whereas using masked backward computation adds 4.5% top-1 accuracy. Importantly, these gains are partly independent, as witnessed by the further increase when comparing these results to the full model with both innovations applied. Similar observations are made when examining the top-5 accuracy, albeit with smaller effects. §.§ Computation cost comparison We compare the performance of our VideoMambaPro-M, with and without additional pre-training and with two input sizes, against other approaches on Kinetics-400. We visually map the top-1 accuracy against the number of parameters and FLOPs in Figures <ref> and <ref>, respectively. VMP-A is the VideoMambaPro-M model with additional training on CLIP-400M, and VMP-A+L is VideoMambaPro-M with additional training and a larger input size. The details of these models appear in Table <ref>. VideoMambaPro's performance is only surpassed by InternVideo and InternVideo2, but these come with a substantially larger number of parameters and FLOPs. The number of parameters for EVL-L (67M) is comparable to VideoMambaPro-M (69M), but the number of FLOPs is a factor of 4 higher. Moreover, the performance of EVL-L is 4.2% lower. Overall, our VideoMambaPro achieves performance on par with the state-of-the-art at a very competitive computational cost. § CONCLUSION From a mathematical comparison with self-attention, we have identified two limitations in how Mamba processes token sequences. We argue that these limitations constrain Mamba's potential, especially in video understanding tasks. To this end, we have introduced VideoMambaPro (VMP), which takes VideoMamba, introduces the masked backward State Space Model (SSM), and adds residual connections in both forward and backward SSM to address the two limitations. In experiments on Kinetics-400, Something-Something V2, HMDB51, UCF-101, and AVA V2.2, VideoMambaPro consistently demonstrates state-of-the-art or competitive performance, but with significantly lower computation cost. For example, with a top-1 performance of 91.9% on Kinetics-400, we perform only 0.2% lower than the recent InternVideo2-6B, but with only 1.2% of the parameters. This combination of high performance and efficiency makes VideoMambaPro a promising solution for video understanding tasks. § APPENDIX We provide the detailed architecture for the three VideoMambaPro models in Section <ref>. A comparison between the architectures of VideoMamba and VideoMambaPro appears in Section <ref>. The implementation details are shown in Section <ref>. §.§ VideoMambaPro architectures We present the architecture details of VideoMambaPro-Tiny, -Small, and -Middle in Tables <ref>–<ref>, respectively. The differences are in the embedding dimension (192, 384, 576) and the number of SSM blocks (24, 24, 32).
§.§ Architecture comparison with VideoMamba We compare the architectures of VideoMambaPro and VideoMamba in Figure <ref>. Compared to VideoMamba, VideoMambaPro does not have the linear layer that generates the parameter z. Additionally, our residual SSM and mask scheme do not introduce additional parameters or computational overhead, so our method has slightly fewer parameters and FLOPs. §.§ Implementation details We conduct the experiments with 16 NVIDIA A100-80G GPUs for both pre-training on ImageNet-1K and fine-tuning on the Something-Something V2 and Kinetics-400 datasets. The experiments on the smaller UCF101 and HMDB51 datasets are run with 8 A100-80G GPUs. The experiments on the AVA dataset are conducted with 32 A100-80G GPUs. The values of the hyperparameters are largely similar to those used in VideoMamba. We linearly scale the base learning rate with respect to the overall batch size, lr = lr_base × batch_size / 256. The pre-training details are shown in Table <ref>, and the fine-tuning details on Kinetics-400, SSv2, UCF101, HMDB51, and AVA V2.2 are listed in Tables <ref>–<ref>.
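As a small illustration of the optimization recipe described in this paper, the sketch below sets up AdamW with the linear learning-rate scaling rule and a cosine schedule in PyTorch. The batch size, base learning rate, and epoch count are the numbers quoted for pre-training; the β values are an assumption (the stated "momentum of 0.9" is read as β1 = 0.9), and the model is a stand-in module rather than the VideoMambaPro backbone.

```python
import torch
from torch import nn

model = nn.Linear(576, 400)          # stand-in for the VideoMambaPro-M backbone + head

base_lr, batch_size, epochs = 1e-3, 1024, 300
lr = base_lr * batch_size / 256      # linear scaling rule: lr = lr_base * batch_size / 256

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=lr,
    betas=(0.9, 0.999),              # "momentum of 0.9" interpreted as beta1
    weight_decay=0.05,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # ... one training epoch over the pre-training data would go here ...
    scheduler.step()
```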
http://arxiv.org/abs/2406.18616v1
20240626042927
Towards Large Language Model Aided Program Refinement
[ "Yufan Cai", "Zhe Hou", "Xiaokun Luan", "David Miguel Sanan Baena", "Yun Lin", "Jun Sun", "Jin Song Dong" ]
cs.SE
[ "cs.SE", "cs.AI", "cs.CL", "K.6.3" ]
Y. Cai et al. National University of Singapore cai_yufan@u.nus.edu dcsdjs@nus.edu.sg Griffith University z.hou@griffith.edu.au Peking University luanxiaokun@pku.edu.cn Singapore Institute of Technology david.miguel@singaporetech.edu.sg Shanghai Jiaotong University lin_yun@sjtu.edu.cn Singapore Management University junsun@smu.edu.sg Towards Large Language Model Aided Program Refinement Yufan Cai1 Zhe Hou2 Xiaokun Luan3 David Sanan4 Yun Lin5 Jun Sun6 Jin Song Dong1 ====================================================================================== § ABSTRACT Program refinement involves correctness-preserving transformations from formal abstract specification statements into executable programs. Traditional verification tool support for program refinement is highly interactive and lacks automation. On the other hand, the emergence of large language models (LLMs) enables automatic code generation from informal natural language specifications. However, code generated by LLMs is often unreliable. Moreover, the opaque procedure from specification to code provided by the LLM is an uncontrolled black box. We propose LLM4PR – a tool that combines formal program refinement techniques with informal LLM-based methods to (1) transform the specification to pre- and post-conditions, (2) automatically build prompts based on refinement calculus, (3) interact with the LLM to generate code, and finally, (4) verify that the generated code satisfies the refinement conditions, thus guaranteeing the correctness of the code. We have implemented our tool with GPT4 and Coq and evaluated it on the HumanEval and EvalPlus datasets. § INTRODUCTION Background. Recently, AI-powered large language models (LLMs) have advanced rapidly in mathematics, reasoning, and programming <cit.>. Industrial products like GPT4 <cit.> and Copilot <cit.> greatly assist programmers in coding-related tasks. In general, the programmer inputs a specification of the question in natural language, and then the LLM generates the associated code, essentially translating the natural language into the programming language. The end-to-end framework of deep learning models makes it possible to generate the intended program in a very flexible way. Some studies, however, show that programmers usually find it hard to trust and debug the LLM-generated code <cit.>, as the generation procedure is opaque and out of control.
Past works like Code2Inv <cit.> proposed an end-to-end learning framework to learn and generate a valid proof for a program by interacting with the proof checker. With the emergence of LLM applications, recent works investigate methods that combine LLMs with formal verification techniques for generating program properties and invariants <cit.>. Recent works based on deep learning techniques usually adopt an end-to-end framework and rely on various informal heuristics, such as chain of thought, to control the reasoning of LLMs <cit.>. The verification procedure usually involves another LLM checking the output of the LLM in a debate-like procedure <cit.>. Challenges. While the above methods show the significant potential of LLMs in code generation and program verification, questions remain about verifying and controlling the code generation procedure. Besides, LLMs often generate unsound and insecure code that users typically have no clue how to fix. Building trust and interpretability in the code generation process is extremely important in practice, since the generated code will be adopted in a larger context and should be properly maintained. As a complementary method, program refinement involves correctness-preserving transformations from formal specification statements into executable code. However, the current transformation from specifications to code based on program refinement calculus is largely designed or even implemented by hand, which is costly and laborious <cit.>. Naturally, the manual transformation of program refinement tends to be an ad-hoc procedure that is hard to generalize and apply in industry. Proposed Solution. In this work, we propose a mostly automated approach called LLM4PR that combines the formal program refinement calculus with informal LLMs to refine the specification and generate verified code step by step automatically. LLM4PR also employs automated theorem provers (ATPs) to verify the code and justify the choice of the refinement laws made by the LLMs. Our approach is complementary to LLMs, automated theorem provers, and traditional verification tools. While the formal specification still requires manual effort for the initial input in the first step, it should not be a hurdle for the formal methods community and is necessary since, otherwise, there is no correctness to speak of. Besides, our LLM also facilitates the formalization procedure, as shown in the experiments. To the best of our knowledge, LLM4PR is the first framework that combines LLMs and program refinement techniques. Motivating Example. We illustrate our motivation using a program for computing the square root of a real number. In <ref>, we show the code snippets generated by GPT4 and Copilot. The LLMs can generate almost correct code. However, these programs still contain some bugs. Both of the upper two programs are wrong for N < 1, since then N*N < N. Mathematically, the upper bound for the square root of N should be at least N+1/4, since ∀ N, (N+1/4)^2 ≥ N. We try to fix the GPT4 code with the prompt "The upper bound is wrong for N less than 1". However, the newly generated code still fails on several cases such as sqrt(5), since the variable x reaches a fixed point without terminating the loop. The final code (bottom right) with the formal constraints shows the conditions that GPT4 should obey. In contrast, our program refinement with the LLM will automatically generate prompts with constraints to generate the code and refine the specification, as shown in <ref>.
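For reference, a correct implementation consistent with the constraints discussed above can be sketched as follows. This is our own illustration, not the code produced by the tool: the initial upper bound y = N + 0.5 is one concrete choice satisfying (N + 0.5)^2 = N^2 + N + 0.25 > N for every N ≥ 0, in line with the observation that y = N is too small, and the mid-point check mirrors the termination concern mentioned for the fixed-point bug.

```python
def sqrt_approx(N: float, e: float = 1e-6) -> float:
    """Bisection square root maintaining the invariant x*x <= N < y*y."""
    assert N >= 0 and e > 0
    x, y = 0.0, N + 0.5            # (N + 0.5)**2 > N, so the invariant holds initially
    while y - x > e:               # variant: the interval width y - x
        m = (x + y) / 2.0
        if m == x or m == y:       # floating-point precision floor reached
            break
        if m * m <= N:
            x = m                  # keep x*x <= N
        else:
            y = m                  # keep N < y*y
    assert x * x <= N < y * y
    return x

print(sqrt_approx(5.0))            # ~2.2360679...
```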
Intuitively, we regard the LLMs as “constraint solvers”, whose powerful extensibility and rich background knowledge shed light on the potential of automation for program refinement. Our program refinement can passively “assert” constraints that help debugging and actively “verify” constraints that benefit code generation. Contributions. The contributions of the paper are summarized below. * A framework for mostly automated program refinement with the LLMs, including a formal specification language L_spec, a programming language L_pl associated with our program refinement calculus, and a verification strategy that verifies the outputs of LLM based on Coq and ATPs. * A GPT4 variant fine-tuned with program refinement instructions and knowledge of our defined languages and laws. * A dataset of formal specifications and an evaluation benchmark based on the samples of the HumanEval and EvalPlus datasets. § PRELIMINARIES This section introduces the background knowledge of program refinement. We mainly follow Morgan's notations in <cit.>. Specification describes what a program is expected to do. In detail, a specification contains variants, a precondition, and a postcondition, in the form variants: [precondition,  postcondition]. Variants are the list of program variables, the precondition describes the initial states, and the postcondition describes the final states of the program. Refinement of the specification is the relation between two expressions where one can solve the other. Formally, we have the following laws of refinement: Let the precondition pre and postcondition post be any FOL formula, if post' ⇛ post, then x: [pre, post] ⊑ x: [pre, post']. Let the precondition pre and postcondition post be any FOL formula, if pre ⇛ pre', then x: [pre, post] ⊑ x: [pre', post]. The relation symbol ⊑ is called refinement. For two formulae A and B, A entails B (A ⇛ B) means that in every state if A is true then B is true. Skip is a command where the final state of the program is the same as its initial state. If the precondition of the specification entails the postcondition, it can be refined by skip. If pre ⇛ post, then x: [pre, post] ⊑skip. Sequential Composition refines a single specification to two others. Let mid be any formula except for pre or post. x: [pre, post] ⊑ x:[pre, mid]; x:[mid, post]. Assignment assigns the variant with new expressions. We denote post⟨ x:=E ⟩ as a new condition that assigns all occurrences of x in post by E. If the precondition entails the new postcondition after the assignment, it can be refined by assignment. Let E be any Expression, post⟨ x:=E ⟩ assigns every x in post with E. If pre ⇛ post⟨ x:=E ⟩, then x: [pre, post] ⊑x = E. Alternation is built by guarded branches. Let GG be the disjunctive normal form of the guards G_0, G_1, , ..., G_i, ..., G_n, if pre ⇛ GG, then x: [pre, post] ⊑if_i (G_i then x: [G_i pre, post]) where if _i G_i then means if G_0 then ... else if G_i then ... . Iteration. Iterations (while loops) are built by loop conditions, invariants, and variants. An invariant inv is a formula that if is true initially, stays true for each repetition. The variant V of the iteration is chosen to guarantee the termination of the iteration. Let Inv, the invariant, be any formula; let V, the variant, be any integer-valued expression. Let GG be the disjunctive normal form of the guards G_0, G_1, ..., G_i, ..., G_n then x: [Inv, Inv GG ] ⊑ while _i(G_i do x: [Inv G_i,Inv (0≤ V<V_0)]) where V_0 is the initial value of V, while _i G_i do means while G_0 do ... 
else G_i do ... else G_n do. Expand. It expands the variant list by introducing another variant. Let x be the origin variant and y be another variant and y_0 be the initial value of y, then x: [pre, post] = (x, y): [pre, post y = y_0] Procedure. A procedure is declared by a name, some parameters, and a program. procedure N (param V:T) ≜ Prog. Given a procedure that refines procedure Proc (param f:T) ≜ f: [pre, post], with post containing no f. Let A be some expression, then w: [pre⟨ f:=A ⟩, post⟨ f:=A ⟩] ⊑ Proc(A) § FORMAL LANGUAGES IN OUR APPROACH We introduce our formal specification language L_spec used to describe the specification and the programming language L_pl for our generated code. We further define the annotated programming language for the program refinement procedure, which contains both L_spec and L_pl. Formally, it is a tuple (L_spec, L_pl) that has two parts, one for each of the above languages, respectively. As these languages closely interacted with the LLMs, we target designing languages well understood and applied by LLMs. §.§ The Specification Language L_spec Our specification language L_spec extends first-order logic (FOL) and is a subset of the language of Coq <cit.>. The LLMs are familiar with both FOL and Coq grammar. We follow the standard syntax and semantics of FOL and highlight the following notations. Variants and Constants. We use lower case words like x, y, z to denote the variants that will change in the refinement and upper case words like N, M to denote constants. Both variants and constants should be typed. Relations and Functions. We use common relation operators and function operators in SMT, such as <, =, +, -, *, /, Array[Int], Array[Int:Int]. Syntax. We define our specification based on the first-order theory and theory of arrays. The full syntax of L_spec is given in <ref>, where Specification defines the specification that needs to be refined, Definition defines the condition that the variants should satisfy, Params defines the variants and constants. In the case of atom, Expr_ denotes the previous value of the expression, Nameatom specifies the array selecting operation, and Name[atom:atom] is used for array slicing operation. The remainder of the syntax is standard FOL used in SMT solving. Semantics. We follow the standard FOL semantics defined in Coq and only present the notable elements in <ref>. Note that the theory of arrays is realized by relations and functions, similar to its treatment in the literature <cit.>. §.§ The Program Language L_pl Our program language is mainly based on While language. The language is kept simple to make it easier for the LLM to understand and generate. The complete syntax of our program language is given in <ref>. Our programming language is imperative and has data types for booleans, natural numbers, integers, float, characters, and arrays. We include the extension of Array and Assert statements. The array has a natural number index type and the reading, updating, and slicing operations. To control the size and structure of programming, we also incorporate the use of procedures. The procedure is declared by a name, some parameters, and an associated program follows <ref>. The formal semantics follows the literature <cit.>. § THE REFINEMENT LAWS IN OUR APPROACH This section introduces our program refinement laws used for interaction with the LLMs. We aim to transform the refinement laws to facilitate both LLM interaction and ATPs verification. Our defined refinement laws can be utilized by our LLM. Skip. 
Another skip law gives the variant an initial value. The new skip law utilizes the fact that the initial and final variables have the same value. Let x_0 denote the initial value of variant x, if(x=x_0) P ⇛ Q, then the specificationx: [P, Q] ⊑ Skip. Use the skip law in <ref> as P ⇛ Q. Seq. We extend a new sequential composition law to flexibly divide one specification into two parts. Let P, Q, A, B, C, D be some formulate, if(P ⇛ A) (B ⇛ C) (Q ⇛ D), then the specificationx: [P, Q] ⊑ x:[A, B]; x:[C, D]. First, use the sequential composition law in <ref>, x: [P, Q] ⊑ x:[P, B]; x:[B, Q]. Then refine the two parts with the weaken-precondition law in <ref>, x:[P, B] ⊑ x:[A, B]; x:[B, Q] ⊑ x:[C, Q]. Finally refine the second part with the strengthen-postcondition law in <ref>, x:[C, Q] ⊑ x:[C, D]. Assign. We have two assignment laws. The initialized assignment law utilizes the initial values of the variants to simplify the further proof for pre ⇛ post⟨ x:=E ⟩. The following assignment law allows any assignment in its second half provided the changed variants. Let E be any Expr in the programming language, post⟨ x:=E ⟩ replaces every x in the formula post with E. If (x = x_0) (y = y_0) pre ⇛ post⟨ x:=E ⟩, then x, y: [pre, post] ⊑x = E. Use the assignment law in <ref> as pre ⇛ post⟨ x:=E ⟩. Let E be any Expr in the programming language, post⟨ x:=E ⟩ replaces every x in the formula post with E. x : [pre, post] ⊑ x :[pre, post⟨ x:=E ⟩] ; x = E. First use the sequential composition law, x: [pre, post] ⊑ x:[pre, post⟨ x:=E ⟩]; x:[post⟨ x:=E ⟩, post]. Then refine the second part using the assignment law, x:[post⟨ x:=E ⟩, post] ⊑x = E. Alternate. The if-else alternation law is a simplified version of the original one. Intuitively, it separates the specification into a case analysis. Let P, Q, and G be some formulae, then the specification x: [P, Q] ⊑ if (G) (x:[P G, Q]) else (x:[P G, Q]). As Pre ⇛ G G based on the law of excluded middle, the lemma can be directly implied from the alternation law in <ref>. Iterate. We extend the origin iterative law to float numbers, which need to find an upper bound to guarantee the loop termination in finite time. The first new specification assigns the initial value to the invariant and the second specification preserves the invariant and changes the variant during the iteration until the negated guard condition holds. In practice, based on the convergence of monotonic sequences of real numbers, we replace the existing condition with the monotonic and bounded condition given in <ref>. To avoid infinite loops, we add the assertion to check whether the expression V decreases by at least the error bound of the floating-point precision. Let P, I, and G be some formulae, V be any variant expression, and i and M are positive integers, then the specification x: [P, I G] ⊑ x:[P, I] ; while(G) do (x: [I G, I (∃ i < M, V_i → G)) ]. First, using the sequential composition law in <ref>, x: [P, I G] ⊑ x:[P, I]; x:[I, I G]. Then refine the second part with the iteration law in <ref>. Note that we replace the condition for integer-valued variants with any variant expression for scalability. To guarantee the termination of the iteration, a state of variant should exist to negate the guard condition after finite iterations. Let P, I, and G be some formulae, V be any variant expression, then the specification x: [P, I G] ⊑ x:[P, I] ; while(G) do (x: [I G, I V < V_0]; assert V ≠ V_0). First, follow the initialized Iteration Law. 
Note that if the floating-point precision error is e, then we have ∃ i = ⌈V_0/e⌉ < M, V < 0 → G. Traverse. We build a traverse law to facilitate problems over lists with recurrence relations. The formula P contains the variants l and i, which can be equations that recursively define a sequence. Note that the following refinement should preserve the invariant P(l, i) and make progress to P(l, i+1), following induction. Let l be a list of type T, let the natural numbers m and n denote the range, and let pre and P be some formulae; then l: [pre, ∀ i:nat m ≤ i < n → P(l, i)] ⊑ l, i:[pre, l[m]]; i = m ; while(i < n) do (l, i: [P(l, i), P(l, i+1)]; i=i+1). First, using the expand law and the sequential composition law in <ref>, l, i: [pre, l[i] i=m]; l, i: [l[i] i=m, l[i] i=n]. Then, refining the second part with the initialised assignment law in <ref> and the iteration law in <ref>, we have i = m ; while(i < n) do (l,i: [P(l, i), P(l, i) 0≤ n-i < n-i_0]. Finally, using the following assignment law in <ref>, the specification becomes [P(l, i), P(l, i+1) 0≤ n-(i+1) < n-i]; i=i+1, which can be simplified to the target. §.§.§ Semantics The semantics shown in <ref> is defined according to the refinement laws proved above. § LLM4PR: PROGRAM REFINEMENT WITH LLM This section presents our approach that combines the above program refinement laws with LLMs for automation. Overview. <ref> shows an overview of our approach. The formal specification written in L_spec is first transformed into an abstract syntax tree, from which the conditions to input to the LLM are extracted. The LLM will select a predefined law to refine the specification based on the description and constraints of the formal specification, and then generate the associated code for the law. LLM4PR correspondingly generates the proviso condition of the law and builds the verification scripts to justify the code generated by the LLM. ATPs will try to automatically verify the scripts and output a success or error message. Based on the ATP result, the LLM will regenerate the code if verification failed, or LLM4PR will save the verified code and generate the new specification if it succeeded. After multiple failures, LLM4PR will trace back to the last refinement step and specification and interact with the LLM to choose another law and generate the associated code. <ref> summarizes the actions of the LLM and LLM4PR using the six predefined kinds of refinement laws. Actively Prompt. A prompt for an LLM like GPT4 refers to the instruction given to the model to elicit a response. The traditional design of prompts usually follows static templates like "Program refinement for the following specification". In this work, we regard the LLM as a constraint solver and actively build the prompt, including the associated logical formulae for the specifications, step by step. These formulae contain the constraints that the output of the LLM should satisfy. Consequently, the prompts contain the constraints written in L_spec and the previous failure history if it exists. The LLM will select the refinement law and generate the associated code based on the given prompts. As each step has its own generated specification, there is no need to add the previous refinement as history, based on the congruence of Hoare Logic. Passively Verify. After the LLM generates the choice of the law and the associated code, LLM4PR will verify them using ATPs to justify whether the code satisfies the constraints based on the condition of the selected refinement law. If the constraints are satisfied, LLM4PR will apply the refinement law to the current specification and formally generate the new specification. If the verification fails, the LLM will receive the failure message and the possible counterexamples and then try to generate another piece of code. This process repeats a limited number of times. If it still fails, LLM4PR will fall back to the last refinement step and the last specification, and the LLM will receive both the failure history and that specification.
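The prompt–generate–verify loop just described can be summarized in pseudocode. All helper names below are placeholders for the corresponding LLM4PR components, injected through a `tools` object; this is an illustrative sketch of the control flow under those assumptions, not the tool's actual API.

```python
class Backtrack(Exception):
    """Raised when no law choice could be verified for a sub-specification."""

MAX_RETRIES = 3

def refine(spec, tools, history=None):
    """Schematic LLM4PR loop. `tools` bundles the hypothetical components:
    is_executable, code_of, build_prompt, llm, proviso, atp, apply_law, compose."""
    history = history or []
    if tools.is_executable(spec):                    # nothing left to refine
        return tools.code_of(spec)
    for _ in range(MAX_RETRIES):
        prompt = tools.build_prompt(spec, history)   # constraints in L_spec + failures
        law, code = tools.llm(prompt)                # LLM picks a law and associated code
        proviso = tools.proviso(law, spec, code)     # e.g. pre => post<x:=E> for assignment
        ok, feedback = tools.atp(proviso)            # Coq / ATP check of the proviso
        if ok:
            subs = tools.apply_law(law, spec, code)  # formally derived sub-specifications
            return tools.compose(code, [refine(s, tools) for s in subs])
        history.append(feedback)                     # error message / counterexample
    raise Backtrack(f"fall back from {spec!r}")      # caller retries the previous step
```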
Refinement Procedures. The refinement procedure can be regarded as a specification tree, whose nodes are linked by the refinement laws. Each node has its specification and the possible refinement paths with the associated code. We follow the procedure defined in <ref> and save the specification trees as procedures in the refinement library for future reuse. <ref> shows an overview of the specification tree and the refinement library. The refinement procedures can be reused when new specifications contain the same precondition and postcondition. For example, the problem factorize contains predicates like modulo and isPrime, which can be retrieved from the library. Retrieval Augmented LLM with Fine-tuning. We provide the LLM with the refinement procedure library as background knowledge so that the LLM can utilize past knowledge via retrieval-augmented techniques. We also customize the LLM for our program refinement task by crafting prompts and instructions based on the refinement laws mentioned above. The LLM is fine-tuned with the examples in Morgan's book <cit.>, the formal specification language L_spec, and our program language L_pl. Specification Formalization. The first input to LLM4PR should be a formal specification, which requires the user to formalize the requirement. The LLM can auto-formalize the specification into our L_spec, but the user still needs to verify the correctness of the transformation from the informal description to the formal specification. This should not be a hurdle for the formal methods community and is necessary since, otherwise, there is no correctness to speak of. § EVALUATION In this section, we first conduct a qualitative analysis of a detailed example to show the benefits of our approach, and then a quantitative analysis on the most popular benchmark datasets, comparing against state-of-the-art LLMs. §.§ Case Study We show how LLM4PR deals with the motivating example of <ref> in <ref>. More examples are shown in <cit.>. Statements tagged with # are code comments, and the precondition and postcondition form the current specification. The verification statement is the proviso condition to apply the refinement law. Note that we omit the iteration termination check in (...) for concise presentation. In detail, the LLM first sequentially splits the original specification into two parts. Informally, the first specification defines x, y such that x^2 ≤ N < y^2, which can be implemented by assignment. Note that the assignment of y needs to satisfy the constraint in the postcondition of the specification, that is, N < y^2, eliminating the possibility of LLM bugs like y = N in <ref>. The second specification preserves the invariant x^2 ≤ N < y^2 and brings the variants x, y closer until x + e >= y, which can be implemented with the iteration. The invariant, the guard condition, and the variant can be extracted from the specification. The LLM reduces the distance between x and y by assigning x or y the mean of x and y.
It uses alternation to add another constraint to strengthen the precondition and make it easier to conclude the postcondition. Compared to the LLM-generated code, each refinement step can be verified as each has its associated specifications. LLMs, on the other hand, are used to select the refinement law and generate associated code automatically based on the generated constraints. The constraints are automatically built based on the choice of the law and the generated code in . When the refinement law is applied, the new specification will be generated based on the refinement laws with . §.§ Experiments §.§.§ Dataset and Implementations We choose the HumanEval dataset as the benchmark, which is widely used in evaluating code generation <cit.>. To evaluate with formal specifications, we transform 157 examples in the HumanEval dataset to a formal specification dataset, where 115 examples are correctly transformed by GPT4 and all formal specifications are manually checked. Note that 7 examples can not be transformed to formal specifications. Besides, to test the correctness and robustness of the generated code, we adopt the EvalPlus <cit.> dataset with the same examples but average more than 80x the number of test cases. We choose GPT4 as the base model and fine-tune it with examples from Morgan's book <cit.> and then test on the above dataset. §.§.§ Results <ref> shows the evaluation results. We choose the state-of-the-art LLMs include LLama3 <cit.>, GPT-3.5, claude-3 <cit.> and GPT4 as our baselines. The baselines' results follow the work <cit.>. To be fair, we add experiments that incorporate the formal specifications with the natural language descriptions to the GPT4. With only the natural language descriptions as input, GPT4 shows the best overall performance of all the LLMs. However, all the LLMs' performance decreases from HumanEval to EvalPlus because EvalPlus contains more challenging test cases and LLMs' generated code may have some bugs that can not pass the extra test cases. In contrast, 's performance is consistent between HumanEval and EvalPlus as the code is verified with guaranteed correctness. Theoretically, our generated code can be regarded as canonical solutions regardless of the number of test cases. Interestingly, incorporating formal specifications also enhances the GPT4 since formal specifications contain useful constraints and information. Overall, the shows better performance and generates more robust code compared to the LLMs. §.§ Limitations First, the capability of largely depends on the capabilities of LLMs and ATPs. To remedy the limitation, users can be involved in the procedure of program refinement by selecting the law, building, and checking the proof. Second, if a refinement lacks proof of the loop termination in the iterative law, we still consider it as partially correct. We create more iteration laws and the traverse law in <ref> to help avoid the termination condition, as proving termination is a hard and generally undecidable problem. Third, is designed to guide LLM in generating more robust code, not for analyzing problems and building specifications, where the latter still requires human input. However, it should not discount our approach since a definition of correctness is necessary for verified code. § RELATED WORK Program Refinement. <cit.>, <cit.>, <cit.> defines a formal method to build a program from its specification. It mainly focuses on the correctness of a given specification and refinement of a program while preserving its correctness. 
Some works propose a formalization of the refinement calculus in interactive theorem provers such as <cit.> for Isabelle and <cit.> for Coq <cit.>. Recent works utilize refinement calculus on different applications including  <cit.>. Theorem Proving. There are two main types of tools for theorem proving: Interactive Theorem Provers (ITPs) and Automated Theorem Provers (ATPs) <cit.>. ITPs, also known as proof assistants, interact with humans in the process of proof building and development, like Isabelle <cit.>, Coq <cit.>, Lean <cit.>. ATPs prove the goals automatically, including E-prover <cit.>, cvc4 <cit.>, vampire <cit.> and Z3 <cit.>. Some ITPs also incorporate automated provers like Isabelle with Sledgehammer <cit.> and Coq with Coqhammer <cit.>. Formal methods with LLM. Recent research on generating formal mathematical proofs utilizes machine learning techniques for proof search and premise selection. Existing works like GPT-f <cit.>, PACT <cit.>, Expert Iteration <cit.> use LLMs to generate actions, and the search engine tries to find possible correct steps using the actions provided by the model. Some works including HTPS <cit.>, and DT-Solver<cit.> enhance the search engine by machine learning techniques. Thor <cit.> uses the neural policy models incorporating ATPs to prove the theorems. LeanDojo <cit.> enables interaction with the proof environment Lean <cit.>. It extracts fine-grained annotations of premises in proofs from Lean, providing valuable data for premise selection. Verification with LLM. One of the key challenges of LLMs is their tendency to "hallucinate", which refers to generating information that is not just incorrect but often fabricated specious text. <cit.> sketches a self-monitoring and iterative prompting way that uses formal methods to detect the hallucination and steer the LLM to the correct specification. <cit.> builds the specialized prompt based on counterexamples provided by model checking and conducts code debugging and repairing based on LLMs. § CONCLUSION We have presented a tool for automated generation of verified code using LLMs, Coq and ATPs. We formally transform the specifications into code based on our refinement laws and LLMs. Our approach also extends the formal refinement calculus and builds active prompts to the informal LLMs. Finally, uses the ATPs to verify the refinement condition and the code based on the precondition and postcondition of the specification. Our experiments show that our method can generate more robust and correct code compared to the state-of-the-art LLMs. splncs04
http://arxiv.org/abs/2406.19115v1
20240627115245
Coagulation-flocculation process on a lattice: Monte Carlo simulations
[ "V. Blavatska", "Ja. Ilnytskyi", "E. Lähderanta" ]
cond-mat.soft
[ "cond-mat.soft" ]
^1 Institute for Condensed Matter Physics of the National Academy of Sciences of Ukraine, 79011 Lviv, Ukraine ^2 Dioscuri Centre for Physics and Chemistry of Bacteria, Institute of Physical Chemistry, Polish Academy of Sciences, 01-224 Warsaw, Poland ^3 Institute of Applied Mathematics and Fundamental Sciences, Lviv Polytechnic National University, 12 S. Bandera Str., UA-79013 Lviv, Ukraine ^4 Department of Physics, School of Engineering Science, LUT University, Yliopistonkatu 34, FI-53850 Lappeenranta, Finland ^5 Department of Physics, Universitat de les Illes Balears, Cra Valldemossa, km. 7.5, 07122, Palma, Spain viktoria@icmp.lviv.ua § ABSTRACT Coagulation-flocculation, the physicochemical process widely used for purification a wastewater, is affected both by chemical details of involved polymers and by the statistics of their conformations on a large scale. The latter aspect is covered in this study by employing a coarse-grained modelling approach based on a combination of two paradigms of statistical mechanics. One is the self-avoiding walk (SAW) which generates a range of conformations for a linear polymer of N_ SAW monomers. Another one is a non-trivial diffusion limited aggregation (DLA) process of N_ DLA impurities (referred thereafter as “particles") which describes their coagulation occurring with the probability 0< p ≤ 1 (p=1 recovers a standard DLA). DLA of diffusive particles is complemented by their irreversible adsorption on the SAW monomers occurring with the probability equal to one, both processes resulting in formation of the DLA-SAW agglomerates. The dynamics of formation of such agglomerates, as well as their fractal dimensions and internal structure are of practical interest. We consider a range of related characteristics, such as: (i) absolute N_a and relative n_a adsorbing efficiencies of SAW; (ii) effective gyration radius R_g DLA-SAW of the DLA-SAW agglomerates; and (iii) the fractal dimension D_ DLA-SAW of these aggregates. These are studied within a wide range for each parameter from a set {p,N_ DLA,N_ SAW}. 36.20.-r, 36.20.Ey, 64.60.ae Journal of Physics A: Mathematical and Theoretical Coagulation-flocculation process on a lattice: Monte Carlo simulations Viktoria Blavatska^1,2[Author to whom any correspondence should be addressed], Jaroslav Ilnytskyi^1,3, and Erkki Lähderanta^4,5 July 1, 2024 ==================================================================================================================================== § INTRODUCTION Coagulation-flocculation is an important physicochemical process to purify a wastewater off impurities <cit.>. A coagulant, typically a short polymer termed often as a “clarifying agent", neutralizes the particles’ charge allowing them to coagulate. A flocculant, added to wastewater, aids further conglomeration of particles into larger agglomerates, speeding up the process of sedimentation <cit.>. High molecular weight flocculants exhibit bridging flocculation, when a single molecule adsorbs a number of particles resulting in formation of a necklace-like structure <cit.>. Polymers commonly used in applications with inorganic solids such as clays and silts are anionic, whereas cationic polymers are used to settle organic solids such as animal waste or vegetation. They can be either synthetic, e.g. alum, lime, ferric chloride, polyaluminium, or derived from plant parts <cit.>, bacteria <cit.>, and chitosan <cit.>. 
In most wastewater purification protocols, coagulation-flocculation is an in­ter­me­diate step, prior to the membrane filtration <cit.>. In this case both processes take place in bulk. However, there are many situations when the target particles for aggregation reside on an interface, e.g. the liquid-liquid, water-air, etc. <cit.>. Their aggregation is desirable either for the consequent removal of aggregates, or for the sake of the interface stabilization and control of its properties <cit.>. In this case the problem, obviously, reduces to the case of the two-dimensional one. Therefore, both the 3D and 2D cases are equally important from the application point of view, but in this study we consider the 2D case only. The high molecular weight polymers used in bridging flocculation process are typically of linear architecture, commonly based on polyacrylamide. In particular, Gibson et al. examined seven of such polyacrylamide polymers and they found that the two of them, Drewfloc 2449 and 2468, demonstrate the highest solid removal efficiency <cit.>. In general, adsorption mechanism for such flocculation is considered to be based on hydrogen bonding between the amide or hydroxyl groups of a polymer and hydroxylated sites on particles' surfaces <cit.>. Modelling the coagulation-flocculation faces serious difficulties as it is intrinsically a multiscale process involving events ranging from an atomistic to a macroscopic scale. Indeed, interactions between particles and monomers of a polymer chain, be it a coagulant or flocculant, depends strongly on the chemical details of both. This aspect can be tackled by ab initio and/or atomistic molecular dynamics simulations. The aim of the current study is to cover an opposite, macroscopic, side of the length scales spectrum. Firstly, the coagulant is considered in an implicit way, where we just assume that due to its presence the particles are able to coagulate with the tunable probability 0<p<1. A linear high molecular weight flocculant is considered explicitly, assuming that it is able to adsorb particles irreversibly, with the probability equal to one. Our main focus will be on the macroscopic properties such as: (i) the efficiency of a flocculant to agglomerate particles; (ii) effective dimensions of agglomerates; and (iii) their fractal dimension. All these characteristics define hydrodynamic behavior of agglomerates during their removal from the wastewater via membrane filtering, gel chromatography, or other approaches <cit.>. We should mention that the modelling of the flocculation as a fractal DLA process have been done before <cit.>. Both 2D and 3D DLA were considered but for the case of a point seed only. The description of the mixed solution, containing a polymeric flocculant and impurity particles, can be performed on required large scale level by employing the lattice models of polymers <cit.>. In these terms, a flocculant is represented via the SAW of the size N_ SAW <cit.>. On the other hand, particles aggregation can be described by the DLA process of the size N_ DLA <cit.>. The concept of the DLA was initially introduced <cit.> as a discrete model of growth of dendritic clusters formed by irreversible aggregation of small particles. 
Since then, it became a paradigm in description of growth phenomena and pattern formation <cit.>, in particular as a model of colloidal particle aggregation <cit.>, growth of neuronal trees <cit.>, retinal arterial and venous vessels <cit.>, thin film nucleation growth process<cit.>, growth of microbial colonies <cit.> etc. Experimental studies reveal proportionality between the particles sedimentation rate and the flocculant molecular weight <cit.>. An explanation is given in terms of stronger adhesion, e.g., via hydrogen bonding, between the particles and the abundant hydroxyl groups in the case of the high molecular weight glycopolymers <cit.>. The result, however, might depend on the chemical composition of a flocculant. It is of interest to clarify the effect of the flocculant molecular weight, N_ SAW, originated purely from the conformational statistics of long polymers. Other parameters affecting the flocculation rate that can be pointed out are: the coagulation probability p, and concentration of the particles, provided by N_ DLA. We should note that, although both the SAW and DLA paradigms are well established and understood by now, combining them towards description of particular physical phenomena has not been exploited to its full extent yet. By doing this, one goes one step further from more simple cases of point-like seeds or the seed cores with a regular shape. Indeed, the seed core in a form of a SAW presents a complex structure that can adopt a wide range of conformations, from stretched to highly coiled ones, affecting the process of the DLA cluster growth in each case. By altering the probability of coagulation, p, one may shift an emphasis from aggregation of particles to their diffusion and vice versa, steering the process towards some specific spatial structure of a cluster. Therefore, here we link two paradigms of DLA and SAW to model coagulation-flocculation on a large scale level. In doing this, DLA describes coagulation of particles, whereas SAW represents a linear high molecular weight flocculant in a good solvent. We may note that the same model is also applicable to other related problems, such as: adsorption of colloidal particles from a dilute aqueous suspension <cit.>; nanoparticle diffusion and adsorption in polymer melts <cit.>, etc. Finally, we would like to address the issue of relative scales of a flocculant and the impurity particles in the lattice type of modelling. The dimensions of flocculant monomers and of a particle are both defined via a lattice constant a. Since a polymer is modelled as a SAW, a is lower bound by the persistent length l_p of a polymeric flocculant, with the typical values of l_p=0.5-1nm. The scale a, however, is not upper bound because of the self-similarity feature of a polymer chain at a>l_p. Therefore, the monomer can be interpreted as a subchain with essentially larger dimensions than l_p, bringing the intrinsic length scale of a problem into the realms of tens of nanoseconds or further up. More close comparison of the scales with particular real life polymers is problematic and, in general, is not intended. The layout of the paper is as follows. In the next Section <ref>, we give details of the computational methods used to construct the SAW and DLA clusters, the results are presented and discussed in Section <ref>, followed by Conclusions. § MODELS AND ALGORITHMS The growth of the SAW is performed sequentially: the nth monomer is generated at the random, but yet unoccupied, lattice site adjacent to the (n-1)th monomer. 
The process continues until the required number of monomers, N_ SAW, is created. The model is known to capture perfectly the universal configurational properties of long, flexible polymer chains in a good solvent <cit.> and has been studied in detail by both computer simulations <cit.> and analytical approaches <cit.>. The DLA process is typically initiated from a single “seed” particle. The second particle diffuses from a large distance away from a seed until it reaches it, in which case two particles merge into a cluster serving as a new seed for subsequent iterations. The process continues until the cluster of desired size N_ DLA is built. Since the perimeter sites of a cluster can be accessed more easily than those in its inner core, DLA cluster is characterized by a highly branched fractal structure. Its fractal dimension D_ DLA is found from the relation connecting the cluster size N_ DLA and its effective linear dimensions given by the gyration radius R_g_ DLA R_g_ DLA∼ N_ DLA ^1/D_ DLA, where the averaging (⋯) is performed over an ensemble of different DLA realizations. In general, the fractal dimension gives an estimate of the space-filling properties of considered structure: the closer it is to the Euclidean dimension D of embedding space, the more “dense” is the structure. Note that the fractal dimension D_ DLA, as defined by Eq. (<ref>), can be referred to as the dynamical one, describing growth of cluster dimensions R_g with the increase of its size N_ DLA. In D=2, the value D_ DLA= 1.712(2) has been found <cit.>. Alternatively, D_ DLA can be obtained from the density correlation function for the completely grown DLA cluster, and smaller value, D_ DLA≈ 1.66, is reported in this case <cit.>. Apart from a “classic” DLA, the reaction limited cluster aggregation (RLCA) is also of interest <cit.>. In this case, the non-vanishing repulsive forces between particles are taken into account, resulting in reduction of the aggregation rate, so that the aggregation (in our case, coagulation) probability is set lower than one. These two regimes, DLA and RLCA, can be attributed to the rapid and slow colloid aggregation, respectively, as defined in colloid science <cit.>. DLA with the non-trivial coagulation probability 0<p≤ 1 of diffusive particles in D=2 has been analyzed in Ref. <cit.>. In the case of rapid aggregation, p=1, a highly porous “classic” DLA cluster is reproduced, see Fig. <ref> (a). In a moderately slow aggregation regime, p<1, the process became more diffusion driven allowing particles more time to explore the voids inside a fractal structure. As the result, more compact and dense DLA clusters are created that are characterized by the increase of D_ DLA with the decrease of p, see Fig. <ref> (b). In the extremely slow aggregation regime, at p → 0, the limit of the Eden model is recovered with D_ RLCA = D <cit.>. Recently, such modified DLA process has been used to simulate aggregation of wax and asphaltene particles in a crude oil <cit.>, as well as in the antimicrobial peptide attack on supported lipid membranes <cit.>. When DLA is performed on SAW, the seed is not local and some generalizations of the standard DLA should be discussed. Note also that SAW itself a fractal object, the exact value for its fractal dimension in D=2 is found to be D_ SAW=4/3 <cit.>. The multi-seed generalization of DLA has been considered in Ref.<cit.>. 
At large separations between individual seeds, the crossover from a fractal to a uniform aggregate structure was observed at a certain length scale dependent on the concentration of mobile particles <cit.>. Another generalization of DLA is obtained for the seed with a non-zero dimensions. As shown by Wu et al. <cit.>, the fractal dimension of DLA cluster decreases with an increase of the dimensions of a seed particle. The same conclusion is derived from the studies of DLA growth on spherical surfaces of various radius <cit.>. Besides a seed of a spherical shape, its linear counterpart has also been considered. In particular, the DLA on a linear seed, reproducing a fibre, was studied in Refs. <cit.>, where an increase of its length resulted in gradual decrease of D_ DLA towards D=1, the value of Eucledean dimension of a seed, and the range of singularities becomes narrower. More complex structure of a seeding core has been also examined <cit.>. In this respect, DLA in a SAW as a seed can be interpreted as yet another extension of the standard DLA process. To construct SAW on D=2 square lattice, we use the pruned-enriched Rosenbluth Method (PERM) <cit.>, based on the Rosenbluth-Rosenbluth (RR) algoritm of growing chain <cit.> and reinforced by enrichment strategies <cit.>. The first monomer is introduced at a random site of a lattice with the coordinates x_1,y_1. Each following nth monomer is added at a randomly chosen site adjacent to the (n-1)th one, such that its coordinates {x_n,y_n} satisfy the conditions: {x_n=x_n-1± 1, y_n=y_n-1} or {x_n=x_n-1,y_n=y_n-1± 1} (n≤ N_ SAW, where N_ SAW is the total length of polymer chain). The weights W_n∼(∏_l=2^n m_l)^-1 are prescribed to each SAW configuration with n monomers, where m_l is the number of free lattice sites, where lth monomer could be potentially added. When the polymer chain of total length N_ SAW is constructed, the new one starts from the same starting point, until the desired number of growing chain configurations are obtained. Population control in PERM suggests pruning configurations with too small weights, and enriching the sample with copies of the high-weight configurations <cit.>. To this end, two thresholds values W_n^< and W_n^> are chosen depending on the running value of partition sum Z_n=∑_ conf W_n^ conf, where summation is performed over existing configurations of a chain. If the current weight W_n of an n-monomer chain is less than W_n^<, the chain is either discarded with probability 1/2, or it is kept and its weight is doubled. If W_n exceeds W_n^>, the configuration is doubled and the weight of each copy is taken as half the original weight. The pruning-enrichment control parameters are adjusted in such a way that on average 10 chains of total length N_ SAW are generated per each iteration <cit.>. At the second stage of our simulation, the ensemble of constructed SAW clusters are used as seeds for DLA process with N_ DLA particles. We draw an imaginary circle of radius R=R_0+R_max <cit.> (so-called birth circle, see Fig. <ref>) around each constructed SAW cluster, where R_max is defined as the distance between the center of DLA-SAW aggregate and the farthest adsorbed particle with respect to it, and R_0 is chosen to be sufficiently large (we used the value of R_0=50). 
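As an illustration of this first, chain-growing stage, a minimal Python sketch of Rosenbluth-style SAW growth on the square lattice is given below. It records the number m_l of admissible continuations at every step and returns their product, from which the weights entering the configuration averages are built; the PERM population control against the thresholds W_n^< and W_n^> is omitted for brevity, and walks that trap themselves are simply regrown.

```python
import random

MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_saw(n_saw, rng=random):
    """Grow one SAW of n_saw monomers; return (sites, prod_l m_l) or None if trapped."""
    sites = [(0, 0)]
    occupied = {(0, 0)}
    m_product = 1.0
    for _ in range(n_saw - 1):
        x, y = sites[-1]
        free = [(x + dx, y + dy) for dx, dy in MOVES if (x + dx, y + dy) not in occupied]
        if not free:                      # dead end: abandon this growth attempt
            return None
        m_product *= len(free)            # m_l, the number of admissible continuations
        nxt = rng.choice(free)
        sites.append(nxt)
        occupied.add(nxt)
    return sites, m_product

def sample_saws(n_saw, n_conf, rng=random):
    """Collect n_conf weighted SAW configurations, regrowing trapped walks."""
    configurations = []
    while len(configurations) < n_conf:
        walk = grow_saw(n_saw, rng)
        if walk is not None:
            configurations.append(walk)
    return configurations
```

In the full method, pruning and enrichment act on top of this elementary growth step whenever the running weight crosses the thresholds, before the constructed chains are passed to the second, DLA stage inside the birth circle.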
The model has been tested for different R_0 <cit.>, and the increase of the value of this parameter leads to an increase of average time for diffusive particles to reach the perimeter of aggregate, but does not influence the quantitative characteristics of adsorption processes. Let us define the position of ith diffusive particle (i=1,…,N_ DLA) at time t with its coordinates {X_i(t),Y_i(t)}. Each particle starts to move from the point randomly chosen on the circle, so that the coordinates of its initial position satisfy the condition X_i^2(0)+Y_i^2(0)=R^2. The particle is not allowed to cross the perimeter of a circle and moves only inside of it, so that at any time step t we have: X_i^2(t)+Y_i^2(t)≤ R^2. When a particle reaches the position such that any of its neighboring site contains a monomer of SAW, i.e. ∀ n: {X_i(t)±1=x_n, Y_i(t)=y_n} or {X_i(t)=x_n, Y_i±1=Y_n }, it is adsorbed with probability equal to one and a new particle starts to move from a new randomly chosen point on a birth circle. The following particles can be adsorbed not only to a SAW cluster, but also aggregate with the previously adsorbed particles, with a chosen probability p. The process is stopped when the desired number of particles N_ DLA are adsorbed. Note that the double averaging is needed to be performed for any observable of interest O in the considered problem. First, we perform the averaging on a fixed SAW cluster over an ensemble of M different DLA realizations O = 1/M∑_i=1^MO^i. Also the avaraging over an ensemble of C constructed SAW configurations is to be performed according to ⟨O⟩=∑_j=1^C W_N_ SAW^jO ^j/Z_N_ SAW with W_N_ SAW^j being the weight of an N_ SAW-monomer chain in jth configuration of SAW as given by (<ref>), Z_N_ SAW=∑_j^C W_N_ SAW^i and O^j is the value obtained after averaging over all constructed DLA realizations on jth SAW configuration. We applied averaging over M=10^3 DLA realizations and C up to 10^4 SAW configurations in our analysis below. § RESULTS First of all, we revisit the results for the DLA on a single particle seed constructed on a D=2 square lattice with the variable coagulation probability p <cit.>. The gyration radius of DLA cluster is defined as R_g DLA^2=1/N_ DLA^2∑_i=1^N_ DLA∑_j=1^DLA ( (X_i-X_j)^2+(Y_i-Y_j)^2). It is subsequently averaged over the ensemble of DLA realizations, according to Eq. (<ref>), providing an average value R^2_g_ DLA. The data obtained are shown in Fig. <ref>a. Reduction of the p value below 1 gives more opportunity for particles to diffuse towards the center of a cluster. This results in formation of a more densely packed structures as compared with a standard DLA, p=1, see Fig. <ref>. It is clearly seen from Fig. <ref>a, that the value of R^2_g_ DLA decreases with the decrease of p. To evaluate the values of fractal dimensions D_ DLA of generated clusters, the linear least-square fits are performed. To this end we estimate the lower cutoff for the number of particles N_ DLA^min at which the correction to scaling terms become irrelevant. The linear fits for the average gyration radius is used lnR^2_g_ DLA = A+ 2/D_ DLAln N_ DLA. The χ^2 value (sum of squares of normalized deviation from the regression line) divided by the number of degrees of freedom, DF serves as an estimate for the fit accuracy. An example is given in Table <ref>. The estimates for D_ DLA obtained as functions of p are provided in Fig. <ref>b. Now, we turn our attention to the formation of DLA on a seed in a form of SAW, see Fig. <ref>. 
It is intuitively obvious that different perimeter sites of a such cluster are accessible to diffusive particles with uneven probabilities. The particles adsorbed on the polymer directly form a solvation shell, which causes a screening effect for the newly arriving particles. As the result, when the number of particles N_a in a solvation shell reaches certain saturation value, the new particles are unable to reach the SAW core directly, but can be adsorbed by the solvation shell particles only. Let us evaluate the size of the solvation shell of the polymer seed of size N_ SAW, i.e. to estimate the number of particles N_a adsorbed directly by the polymer seed. After performing double averaging, over an ensembles of SAW and DLA configurations, we obtain the results for ⟨N_a⟩ presented in Fig. <ref>a. As expected, with the decrease of the probability p, the number of particles ⟨N_a⟩ directly adsorbed by a polymer flocculant increases. This results in formation of a more compact agglomerate, as illustrated in Fig. <ref>b. We also introduce the normalized value n_a = N_a/N_ SAW, which characterizes the efficiency of direct adsorption of diffusive particles per single monomer of the SAW seed. By analysing behaviour of ⟨n_a⟩ as function of N_ SAW (Fig. <ref>b), one notices that the direct adsorption efficiency is higher for shorter chains, and saturates at large values of N_ SAW. Let us recall that the underlying SAW seed is itself a not-regular fractal structure with its effective size (gyration radius R_g SAW) taking on a range of different values in an ensemble of possible SAW configurations. We analyzed the correlation between values of R_g SAW in given configuration of SAW and the numbers of particles in solvation shell N_a, observed in performing different DLA realizations with this SAW configuration as a seed. Corresponding results are presented on Fig. <ref>. The smaller values of R_g SAW correspond to more compact SAW configurations with higher fraction of “inner” monomers, screened from incoming diffusive particles as compared with monomers positioned closer to the SAW outer perimeter. Indeed, the number of particles N_a directly adsorbed to SAW perimeter is smaller for the case of compact SAW and increase gradually with an increase of R_g SAW. This effect is more pronounced for smaller p values (Fig. <ref>b), when direct adsorption of particles on SAW seed is dominant as compared with particle-particle coagulation, and thus is more “sensitive” to the subtle peculiarities of the underlying SAW. Note that the probability distribution for N_a as averaged over SAW and DLA realizations (see Fig. <ref>) is broader at smaller p due to the same effect. Another parameter of interest is the total number of bonds (contacts) N_ bond established between SAW and adsorbed diffusive particles. Since each the monomer of SAW can have contacts with more than one adsorbed particle, this quantity is not trivial and is expected to be larger than the number of particles N_a in a solvation shell. We prescribe labels v(n) to each nth monomer of the SAW (n=1,…,N_ SAW) with the initial values of v(n)=0. When, during the process of diffusion, the ith particle became adjacent to nth monomer and gets adsorbed, we increase the label v(n) by one, so that: * if X_i(t)=x_n± 1 and Y_i(t)=y_n then v(n)=v(n)+1, * if X_i(t)=x_n and Y_i(t)=y_n± 1 then v(n)=v(n)+1 so that at the end v(n) contains the total number of contacts established by nth monomer with adsorbed particles and N_ bond=∑_nv(n). 
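The sticking and bookkeeping rules above translate directly into the innermost step of the simulation. The sketch below follows a single particle from the birth circle until it attaches, updating the contact labels v(n); the container names and the outer loops over particles, DLA realizations, and SAW configurations are illustrative and not taken from the authors' code.

```python
import math
import random

NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def run_one_particle(saw_sites, adsorbed, contacts, p, R0=50, rng=random):
    """Diffuse one particle from the birth circle until it sticks.

    saw_sites : dict mapping lattice site -> monomer index n of the SAW seed
    adsorbed  : set of sites already occupied by adsorbed particles (updated in place)
    contacts  : dict monomer index -> v(n), number of particle contacts (updated in place)
    p         : coagulation probability for particle-particle sticking
    Returns True if the particle adsorbed directly on the polymer, False otherwise.
    """
    occupied = set(saw_sites) | adsorbed
    r_max = max(math.hypot(x, y) for x, y in occupied)
    R = R0 + r_max                                   # birth circle radius
    theta = rng.uniform(0.0, 2.0 * math.pi)
    pos = (round(R * math.cos(theta)), round(R * math.sin(theta)))
    while True:
        x, y = pos
        saw_nbrs = [saw_sites[(x + dx, y + dy)] for dx, dy in NEIGHBORS
                    if (x + dx, y + dy) in saw_sites]
        if saw_nbrs:                      # adjacent to the polymer: adsorb with probability 1
            adsorbed.add(pos)
            for n in saw_nbrs:            # one contact v(n) per adjacent monomer
                contacts[n] = contacts.get(n, 0) + 1
            return True
        if any((x + dx, y + dy) in adsorbed for dx, dy in NEIGHBORS) and rng.random() < p:
            adsorbed.add(pos)             # particle-particle coagulation with probability p
            return False
        dx, dy = rng.choice(NEIGHBORS)    # otherwise keep diffusing inside the circle
        new = (x + dx, y + dy)
        if math.hypot(*new) <= R and new not in occupied:
            pos = new
```

Summing the values stored in contacts gives N_bond for one realization, while the returned flag distinguishes particles adsorbed directly on the polymer (counted in N_a) from those attached by particle-particle coagulation.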
We introduce the value n_ bond=N_ bond/N_a, which characterizes the chelation efficiency of a polymer chain and is also of interest from the point of view of wastewater cleaning: the larger is the average number of contacts per single monomer, the stronger is the ability of a polymer flocculant to “hold” the particles, which are already adsorbed. At each fixed value of p, ⟨n_ bond⟩ is found to increase with the length of a polymer chain (see Fig. <ref>) and gradually reaches its saturation value. The larger polymer macromolecules are thus found to be more effective in “keeping” the directly adsorbed impurity particles, and this efficiency increases with decreasing the parameter p. Now we proceed to analyze the structure of agglomerates of N_ SAW+N_ DLA particles, as given by their fractal dimension. The scaling of the gyration radii ⟨R^2_g⟩_ DLA-SAW in a range of sizes of the SAW seed N_ SAW is analyzed next. This is related to a real life situations with various degree of pollution levels and of a pollutant coagulation ability. The data obtained are presented in both frames of Fig. <ref>. Here, we observe what we believe is the crossover between the solvation and DLA regimes, which is especially evident at small p, e.g. p=0.1. In this case, particles prefer to deposit theirselves on a SAW rather than to aggregate. Therefore, at low N_ DLA<2N_ SAW, the process is dominated by solvation (screening) of a SAW by particles, characterized by very slow increase of effective size ⟨R^2_g⟩_ DLA-SAW with N_ DLA. By applying least-square fitting of data in this region to the form (Eq. (1)), we obtained 1/D=0.35(3) (the dashed line in Fig. <ref>). At N_ DLA∼ 2N_ SAW, a SAW is completely screened, and since then a normal DLA starts, where the screened SAW plays a role of its seed. At sufficiently large N_DLA, the DLA scaling with D_ DLA-SAW(p=0.1)=1.868(5) is retrieved (the result of least-square fitting of data in this region to the form (Eq. (1)) are presented with solid line on Fig. <ref>), see also Table <ref>). The crossover is seen for all N_SAW being examined, and it occurs at N_ DLA∼ 2N_ SAW in all cases (as can be seen from curves of Fig. <ref>a). At yet smaller p=0.01, solvation regime has a non-linear dependence of ⟨R^2_g⟩_ DLA-SAW on N_ DLA due to strongly suppressed DLA in favour of screening the SAW, whereas at higher p>0.1 it gradually disappears, as in this case the screening of a SAW loses its priority over a DLA, see, respective curves in Fig. <ref>b. Let M denote the number of realizations of DLA processes (which is 1000 in our case) on a fixed SAW configuration. For all the monomers n=1,…,N_ SAW we sum up the number of times k(n), when diffusive particle was absorbed to this monomer. In such a way, a weight w_n=k(n)/M is prescribed to each monomer. Such distributions of the hitting points are called the "harmonic measure" <cit.>, which can be represented within the multifractal concepts. When studying physical processes on complex fractal objects, one often encounters the situation of coexistence of a family of singularities, each associated with a set of different fractal dimensions <cit.>; combination of two fractal growth processes, as in our case, is expected to lead to multifractal features as well <cit.>. The different moments of the distribution of observables scale independently, which is usually referred to as multifractality <cit.>. 
The multifractal spectrum can be used to provide information on the subtle geometrical properties of a fractal object, which cannot be fully described by its fractal dimensionality. Indeed, the growth probability distribution in DLA clusters is a typical example of multifractal phenomena <cit.>. In our case, the multifractal moments can be defined as M(q)=∑_n=1^N_ SAWw^q(n) When averaged over different configurations of the constructed SAW clusters and realizations of DLA, they scale with the gyration radius R_g of an underlying SAW core according to: ⟨M(q)⟩=R_g SAW^d(q) with the exponents d(q) being characterized by a non-linear dependence on q. To estimate the numerical values of d(q) on the basis of data obtained by us (see Fig. <ref>), the least square fitting was used. At q = 0 we just count the number of sites of the cluster of linear size R, and thus d(0) corresponds to the fractal dimension of the SAW trajectory for each p. At q>0, the set of d(q) is found to be non-trivial and dependent on p. The obtained spectrum at several values of p are given on Fig. <ref>b). At each q, the values of d(q) increase with decreasing the parameter p. This demonstrates an increase of non-uniformity distribution of harmonic measure (the hitting probabilities of underlying SAW trajectory by incoming diffusive particles are distributed more with decreasing p). § CONCLUSIONS In this paper we mapped the process of coagulation-flocculation, which is important from the environmental point of view for cleaning wastewater, onto the physical model of the DLA of N_ DLA particles that takes place on a seed represented by a SAW containing N_ SAW monomers. Within this approach, the DLA particles represent the impurities in suspension, which may sediment in form of flocs (aggregates) either spontaneously (as a result of particle-particle coagulation with probability p) or due to addition of a special agent (flocculant). The adsorbing linear polymer chain, represented by SAW here, serves as a flocculant by establishing multiple bonds with adsorbed DLA particles. Despite the fact that both paradigms are well studied over a number of years, their combination has got less attention, especially in relation to this particular application. Computer simulations are performed by means of the pruned-enriched Rosenbluth Monte Carlo algorithm on a D=2 square lattice. Both DLA and SAW processes generate clusters of complex fractal structure. We found that their combination in the D=2 space, at the special case of N_ SAW=N_ DLA, leads to formation of a flocculated agglomerate with the fractal dimension D_ DLA-SAW=1.618(5), which exceeds the fractal dimension of a pure SAW, but is smaller than D_ DLA. The result is relevant in the respect of the further removal of the agglomerate containing impurities by means of membrane filtration or gel chromatography. We also evaluated the properties related to the adsorption efficiency of SAW, such as: the average size of solvation shell ⟨ N_a ⟩, adsorbing efficiency per monomer ⟨ n_a ⟩, and a number of adsorption bonds between the SAW and the DLA particles ⟨ N_bond⟩ depending on the N_ SAW and the coagulation probability p. In particular we found, that the direct adsorption efficiency as related to a single monomer of SAW, is higher for shorter chain length, and reaches the asymptotic saturation value as N_ SAW increases. The effective dimension of DLA-SAW agglomerates is given by their gyration radii ⟨ R_g DLA-SAW^2 ⟩, it is analyzed in a wide range of N_ SAW and p. 
A crossover between two characteristic regimes have been observed at N_ DLA≫ N_ SAW. At N_ DLA≤ N_ SAW, the incoming diffusive particles are preferentially attached to monomers along the perimeter of SAW and form a solvation shell around the polymer seed; at N_ DLA≫ N_ SAW, the underlying SAW seed is already well “screened” from new incoming particles so that they mainly coagulate with particles from a solvation shell and the scaling regime of DLA is retrieved. Introducing the probabilities for the perimeter sites of underlying SAW cluster to be encountered by incoming DLA particles, we found the estimates for multifractal sets of exponents, governing the moments (<ref>) and giving more subtle characteristics of agglomeration clusters constructed by combination of two growth processes. The study contains a wide range of adjustable parameters that can be tuned towards particular chemical setup, namely: (i) molecular weight and branching type of a flocculant; (ii) relation between the impurity-flocculant adsorption and impurity-impurity coagulation; (iii) spatial arrangement of flocculants, e.g. polymer brush of variable density, etc. Such extensions of this study are planned for a future. § ACKNOWLEDGEMENTS The work was supported by Academy of Finland, reference number 334244. § REFERENCES
http://arxiv.org/abs/2406.19146v1
20240627130243
Resolving Discrepancies in Compute-Optimal Scaling of Language Models
[ "Tomer Porian", "Mitchell Wortsman", "Jenia Jitsev", "Ludwig Schmidt", "Yair Carmon" ]
cs.LG
[ "cs.LG", "cs.CL" ]
Tel Aviv University, Israel § ABSTRACT <cit.> and <cit.> developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan et al. scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., “Chinchilla”) scaling law. Counter to a hypothesis of <cit.>, we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW β_2 parameter is essential at lower batch sizes. § INTRODUCTION We consider the problem of compute-optimal language model training: given a compute budget C, we wish to predict how to best allocate it across model size N (in parameters) and dataset size D (in tokens). With pretraining budgets ever-increasing, compute-optimal scaling is a question of paramount importance. In their seminal work, <cit.> proposed a scaling law predicting that the optimal ratio of tokens to parameters decays as a power of C. (<cit.> derived many different scaling laws and focused primarily on the setting where models are trained to convergence, which also explains their very long learning rate schedule; nevertheless, the compute-optimal scaling law is arguably the most influential outcome of their paper.) This scaling law was influential in determining the size of GPT-3 and several subsequent models <cit.>. However, <cit.> challenged its validity, arguing instead that the optimal token-to-parameter ratio should be approximately independent of C, and that contemporary models had too many parameters relative to their number of training tokens. Based on this prediction, they trained a 67B parameter model called Chinchilla, which outperformed larger models with a similar compute budget. While <cit.> and subsequent work <cit.> established that following the Hoffmann et al. scaling law leads to better performance than Kaplan et al. scaling, it is still important to understand why the two works arrived at different conclusions. Is the difference due to architecture, training setup, pretraining data, results analysis, or perhaps something else entirely? The answer could teach us important lessons on how to correctly predict and perform model scaling. <cit.> hypothesize that the scaling law discrepancy is due to <cit.> not tailoring the learning rate decay schedule for each token budget separately. While they demonstrate that mismatched learning rate decay results in a higher loss, they do not show it leads to a different compute-optimal scaling law. To the best of our knowledge, this hypothesis is the only explanation offered in the literature so far. Our contribution.
In this work, we uncover three factors contributing to the discrepancy, and disprove Hoffman et al.'s hypothesis about the role of learning rate decay; <Ref> illustrates our main results. We begin by reproducing the scaling law in a Llama-derived pretraining setup using the OpenLM library <cit.> and the RefinedWeb dataset <cit.> (<Ref>a). Our first observation is that accounting for the computational cost of the decoding layer (as done in <cit.> but not in <cit.>) shifts compute-optimal scaling toward a more constant token-to-parameter ratio (<Ref>b). Second, we note that the constant-length warmup period of <cit.> is too long for smaller models, inflating the optimal number of tokens at lower compute budgets; scaling the warmup period with the model size further shifts the scaling in the direction (<Ref>c). Next, we match the learning rate decay to the token budget of each configuration we test (as <cit.> conjecture to be essential) but observe little effect on the compute-optimal scaling law (<Ref>d). Finally, we set the learning rate, batch size, and the AdamW β_2 parameters individually for each model size, leading to compute-optimal scaling that agrees closely with (<Ref>e). Notably, the latter configuration uses a constant learning rate schedule, showing that learning rate decay is not essential for the scaling law to emerge. We repeat our experiment on the OpenWebText2 dataset <cit.>, observing similar results despite performing hyperparameter tuning only on RefinedWeb. We complement our main results with the following analyses: * In the last phase of our experiments (<Ref>e) we choose different hyperparameters for each model size. To do so, we conduct a hyperparameter sweep for small-scale models and use the results to fit power laws for the optimal batch size and learning rate as a function of model parameters. This approach is inspired by <cit.>, and our hyperparameter scaling laws roughly agree. However, we observe that setting the AdamW β_2 parameter to be 0.95 is suboptimal at smaller batch sizes (128 and below), and increasing it allows establishing clear trends from our small-scale hyperparameter sweep. * We study the scaling of the optimal loss as a function of the compute budget. We show that the steps we take to settle the /discrepancy (namely shortening warmup and scaling learning rate and batch size) significantly decrease this loss at smaller scales, but only marginally improve it at larger scales. In contrast, introducing a cosine learning rate decay schedule substantially decreases the loss, with benefits persisting at larger scales. Similar to <cit.>, we observe some curvature on the optimal loss curve. Nevertheless, the optimal loss with tuned hyperparameters is fairly consistent with a saturating power law. * We calculate the computational cost of each of our experiments and plot how prediction quality improves as we consider larger training runs. We observe that the cost of our hyperparameter sweep is comparable to that of a scaling law fit experiment, but the compute saved by using a constant instead of a cosine learning rate schedule roughly makes up for that cost. Code and data release. To facilitate future research, we share our data and the code necessary to reproduce our analyses and figures at <https://github.com/formll/resolving-scaling-law-discrepencies>. Limitations. We discuss the limitations of our work in <Ref>. 
§ PRELIMINARIES AND EXPERIMENT DESIGN §.§ Notation and problem setting We train language models of size N on D tokens of data (essentially without repetition). The precise definition of N plays an important role in this paper: Unless mentioned otherwise, N denotes the number of parameters in all the linear layers of the model. That is, N excludes embedding layers, but includes the model's head: the final linear layer producing the predicted token logits. (In the models we train there is no tying of the embeddings and the head). Let FLOPs(N, D) be the amount of floating point operations (FLOPs) required to train a model of size N on D tokens. Throughout, we employ the approximation FLOPs(N, D) ≈ 6 N D. In <Ref> we compare our definition of N to the one used in <cit.>. In <Ref> we also discuss the effect of taking attention FLOPs into account and FLOP estimation approaches in other works. Let L(N, D) be the log loss (in expectation over the training data distribution and any randomness of the training procedure) obtained by a model of size N trained for D tokens.[This notation abstracts away the fact that there are many different models of size N and many different ways to train them for D tokens. Ideally, L(N, D) represents the loss attained by the optimal architecture of size N trained with the best possible training method that uses D tokens. In practice, for any value of (N, D) we consider only a single configuration, but this configuration is the result of architecture search and optimizer tuning, performed either directly or indirectly by building on prior work.] Assuming a fixed compute budget C, we aim to predict N*(C) ≜ argmin_{N > 0} L(N, C/(6N)) ≈ argmin_{N > 0} min_{D: FLOPs(N, D) = C} L(N, D), i.e., the model size yielding the smallest loss when trained with compute budget C under the approximation (<ref>). We also let D*(C) ≜ C/(6 N*(C)) and ρ*(C) ≜ D*(C)/N*(C) = C/(6 N*(C)^2) denote the optimal number of tokens and the optimal token-to-parameter ratio. To predict these quantities, we use power laws of the form: N*(C) ≈ N_0 · C^a, D*(C) ≈ D_0 · C^b, and ρ*(C) ≈ ρ_0 · C^c, and fit the exponents a, b, c and coefficients N_0, D_0, ρ_0 from data as described below. §.§ Training setup We train decoder-only Transformer language models using OpenLM <cit.>, which integrates many of the architecture and training advances in Llama <cit.> and subsequent works. We largely base our initial training configuration on the hyperparameter search in <cit.>. Our setup does not replicate <cit.>, but we match or closely approximate several key hyperparameters as discussed in <Ref>. See <Ref> for a detailed description of our setup and chosen hyperparameters. Model set. We search for compute-optimal models over a set consisting of 16 models with sizes ranging from 5M to 901M. We pick model layer numbers l and widths d such that N increases by multiples of roughly √(2) while the aspect ratio d/l stays between 32 and 64 as suggested in <cit.>. The number of attention heads in each configuration is 4, as preliminary experiments showed this is optimal for smaller models, and increasing it did not noticeably improve larger models. <Ref> in the appendix specifies all the models in our grid. Data. We perform our experiments on OpenWebText2 <cit.>, which contains roughly 30B tokens of data from Reddit and resembles the WebText2 dataset used in <cit.>, as well as the RefinedWeb dataset <cit.>, which contains roughly 600B tokens from CommonCrawl <cit.> and resembles the MassiveWeb dataset that formed roughly half of the data mix in <cit.>.
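As a rough numerical illustration of this parameter count, and of why the head matters at smaller scales (anticipating the next subsection), the sketch below tallies linear-layer parameters for a Llama-style block under the assumption of 4d^2 attention projections and a SwiGLU feed-forward block of hidden width 8d/3; these architectural details and the example width, depth, and vocabulary size are assumptions made for illustration, not the paper's exact configuration.

```python
def linear_params(width: int, n_layers: int, vocab: int, include_head: bool = True) -> int:
    """Rough count of linear-layer parameters for a Llama-style decoder.

    Assumes 4*d^2 attention projections and a SwiGLU MLP with hidden size 8d/3
    (three d x 8d/3 matrices), an approximation rather than exact bookkeeping.
    Embeddings are excluded; the head (d x vocab) is included only when asked.
    """
    per_block = 4 * width**2 + 3 * width * (8 * width // 3)
    n = n_layers * per_block
    if include_head:
        n += width * vocab
    return n

def train_flops(n_params: int, n_tokens: float) -> float:
    """C = FLOPs(N, D) ≈ 6*N*D."""
    return 6.0 * n_params * n_tokens

# Illustrative small configuration (not from the paper's grid):
d, l, v = 512, 8, 50_432
n_with_head = linear_params(d, l, v, include_head=True)
n_without = linear_params(d, l, v, include_head=False)
print(n_with_head, n_without)   # the head roughly doubles N for this small configuration
```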
Evaluation and FLOP grid. We evaluate models on 160M tokens held out from the training data. We perform the evaluation whenever the product of 6 and the number of training tokens seen so far crosses an element of a FLOP grid of the form {1.25e16· 2^i}_i=0^11. This grid plays a central role in our data analysis. We also record the average training loss every 20 steps. §.§ Data analysis Our technique for estimating the compute-optimal power law is akin to the second (IsoFLOP-based) approach of <cit.>, but differs in several details. The approach consists of two steps: directly estimating (C_i) for all C_i in our FLOPs grid, and fitting a power law to these estimates. We briefly outline each step below and provide full details in <Ref>. Estimating (C_i). For each value of C_i, we train several models from our set (<Ref>) for C_i FLOPs and extract an IsoFLOP curve of loss vs. model size (see <Ref>). For FLOP values where validation loss is not available (specifically <Ref> and <Ref>) we use the smoothed training loss instead. We estimate (C_i) and its uncertainty using a noise-and-interpolate procedure based on Gaussian noise with empirically-calibrated magnitude and Akima interpoaltion <cit.>. For every C_i, this yields a “bootstrap sample” population optimal size estimates; we take their median as the point estimate for (C_i). The procedure also yields an estimate of the log-scale standard deviation of (C_i) (shown as error bars in <Ref>). Fitting a power law. We fit power laws of the form (<ref>) by performing weighted linear regression in log space, with the weights inversely proportional to the squared log-space standard deviations computed above (i.e., log-space Gaussian maximum likelihood estimation). To obtain a point estimate for the power law parameters we fit the point estimates for each (C_i) value. To quantify uncertainty, we fit power laws to bootstrap samples, obtaining a population of _0, , and (·) samples. We construct confidence intervals from their quantiles. § MAIN RESULTS: SETTLING THE SCALING LAW DISCREPANCY In this section, we describe in detail our main results, visualized in <Ref>, tabulated in <Ref> and plotted in detail in <Ref>. The following subsections address each panel of <Ref> in order. §.§ Reproducing the scaling law To reproduce the scaling law, we match the setup of <cit.> in terms of the batch size (2^19 tokens) and in terms of the learning rate schedule (warmup for 3000 · 2^19≈1.57B tokens followed by cosine decay to zero at 2.5e5· 2^19≈131B tokens). Other configurations do not match exactly, but the suite of models we train covers a range of sizes and compute similar to <cit.>. For this reproduction only, we also take the “model size” to be the number of parameters in all linear layers except the head (last decoding layer). That is, for a model of width d and vocabulary size v, we subtract d· v from our usual definition of (see <Ref>, last column). As <Ref>a shows, with this setting we obtain a compute-optimal exponent and power law fits close to the power law 1.6e9 (C / 8.64e19)^0.88 obtained by <cit.>. §.§ Counting last layer FLOPs <cit.> chose to define model size without counting embedding parameters since they found this makes scaling laws in the infinite-compute regime more consistent across network depths <cit.>. Perhaps because their model head and embeddings had tied weights, this led them to also discount the contribution of the model head to the model's FLOPs per token <cit.>. 
However, as <Ref> reveals, not accounting for the model head leads to under-approximation that grows smoothly as model size decreases, from roughly 10% at larger models to roughly 90% at smaller models. Thus, counting the head FLOPs (i.e., using our definition of ) results in a significantly more accurate approximation. As shown in <Ref>b, switching to our model size count also reduces the exponent by more than 0.1, closer to but not all the way there. §.§ Correcting learning rate warmup Next, we address the duration of the learning rate warmup period, which <cit.> set proportionally to their full training duration, designed to reach complete convergence. <Ref> (left) shows this warmup period is too long: for smaller-scale models, the optimal number of tokens as a function of compute is less than or close to the number of warmup tokens, and therefore these models are suboptimally trained. The same issue is evident in Figure 14 (right) of <cit.> which shows that for many compute budgets the optimal number of steps is below or close to the number of warmup steps (fixed at 3000). <Ref> (left) also provides an intuitive explanation for the increased value of : at smaller compute scales, models are `forced' to use more training tokens than would otherwise be optimal in order to `escape' the long warmup period. Having escaped, the warmup Once this warmup period is escaped, the optimal number of tokens grows only slowly, leading to a fast rate of increase in the optimal model size and hence the large exponent. With the problem identified, we propose a simple heuristic for more appropriately choosing the warmup duration: for each model, we set the number of warmup tokens to be identical to the model size N. The bottom row of <Ref>b illustrates the validity of our new choice of warmup, showing that the optimal number of tokens is always at least 5 times greater than the (interpolated) duration of the warmup period corresponding to the model of the appropriate size. As is evident from this figure and from <Ref>c, shortening the warmup shifts the scaling law in the direction of further, yielding an exponent of roughly 0.6. §.§ Learning rate decay has limited impact on compute-optimal allocation With learning rate warmup corrected, we turn to study learning rate decay, which <cit.> conjecture to be a main cause of the difference between their result and <cit.>. We observe that the long 131B tokens decay period in <cit.>, which is aimed toward training to full convergence, means that their compute-constrained experiments see virtually no learning rate decay: <Ref> shows that, at our compute scales, it is never optimal to train for more than 10B, which corresponds to less than 1.5% decay with a cosine schedule. To correct this, we follow the second approach of <cit.> and choose the learning rate schedule for every model and FLOP budget individually. For each FLOP value in our grid, we pick the 7 models from <Ref> which yield token-to-parameter ratios in the range 1 to 100, and train them with a cosine learning rate schedule that decays to 1% of the maximum learning when reaching the target FLOP value.[We set the warmup period to be the minimum of the model size and 20% of the total token budget.] This is roughly twice as expensive as previous experiments, which required only a single training run for each model size (see additional discussion in <Ref>). 
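The schedules used across these experiments can be summarized in a few lines: linear warmup over min(N, 20% of the token budget) tokens, followed either by a constant learning rate or by a cosine decay to 1% of the peak at the target budget. The following is an illustrative implementation of that recipe, not the authors' training code.

```python
import math

def lr_at(tokens_seen, n_params, max_lr, total_tokens=None, final_frac=0.01):
    """Learning rate after `tokens_seen` training tokens.

    Linear warmup lasts min(n_params, 0.2 * total_tokens) tokens (or n_params
    tokens when no decay budget is given). With `total_tokens` set, the rate
    then follows a cosine decay to `final_frac * max_lr` at the token budget;
    otherwise it stays constant. Boundary conventions here are illustrative.
    """
    warmup = n_params if total_tokens is None else min(n_params, 0.2 * total_tokens)
    if tokens_seen < warmup:
        return max_lr * tokens_seen / warmup
    if total_tokens is None:
        return max_lr                     # constant schedule after warmup
    progress = min(1.0, (tokens_seen - warmup) / max(total_tokens - warmup, 1.0))
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return max_lr * (final_frac + (1.0 - final_frac) * cosine)
```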
As <Ref>d shows, adding cosine decay results in a slightly cleaner linear trend (R^2 improves from 0.993 to 0.998) and an exponent slightly closer to the scaling law (0.57 instead of 0.6), but most of the gap remains. Therefore, even with the FLOP count and warmup issues corrected, adding learning rate decay is not sufficient to reproduce the scaling law. §.§ Correcting batch size, learning rate and β_2 A final factor contributing to the /discrepancy is the choice of optimization hyperparameters, particularly the batch size: with a fixed batch size of 2^19 tokens, compute-optimal models at smaller scales train for only a few hundred steps, which is likely too little. <cit.> notice this issue, and attempt to correct for it using post-processing based on an empirical model of large-batch size training <cit.>; we return to their result at the end of this section. Here, we take the more direct approach of predicting near-optimal hyperparameters for each model size.[More specifically, we predict the optimal hyperparameters per model size when trained for 20 tokens per parameter. As we discuss in <Ref>, this choice of training budget is potentially an issue but further analysis in <Ref> suggests it does not significantly impact our results.] Since changing the batch size often also requires re-tuning the learning rate <cit.>, we sweep over both parameters for models of sizes 5M to 108M, with an additional validation sweep over models of size 220M. Initially, we kept β_2 at its previous value of 0.95. However, this led to poor results at smaller batch sizes: as the batch size gets smaller, the squared gradients become noisier, and AdamW requires more smoothing to obtain a correct denominator. Therefore, we added 0.99 and 0.999 to the sweep, obtaining improved performance on small batch sizes. In <Ref> we describe the parameter sweep in full and provide additional discussion about the role of β_2. <Ref> plots our estimates for the optimal values of batch size and learning rate for each model size. It shows clear trends, to which we fit power laws in the number of parameters . Observing good extrapolation to nearby values of , we apply these power laws (with slight rounding) to select the batch size and learning rate for all model sizes and tabulate the results in <Ref>. Our parameter tuning approach is inspired by <cit.>, who predict the optimal batch size and learning rate as a function of compute. Translating compute to model size using the scaling law, we find remarkable agreement in the batch size predictions (a difference of less than 0.05 in exponent and less than 60% in predictions over our models), and somewhat different learning rate predictions (a difference of 0.11 exponent and a factor of 2–3 in predictions), potentially due to using different weight decay. Both our results appear to contradict the conventional wisdom about the existence of a critical batch size <cit.> below which every batch size is good, finding instead an optimal batch size below which performance degrades. This suggests further tuning of β_2 or other hyperparameters may be warranted. We discuss <cit.> further in <Ref><Ref>. With the new hyperparameters, we obtain a close reproduction of the scaling law (<Ref>e) with the scaling exponent matching 0.5 to within 0.6% and the predicted model size at Chinchilla compute within 15% of Chinchilla's size. Notably, here we use a constant learning rate schedule, demonstrating that careful learning rate decay is not necessary for this scaling law to hold. 
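The hyperparameter scaling laws used here amount to ordinary least squares in log-log space. The sketch below shows such a fit; the sweep results listed are placeholder numbers for illustration only (the measured optima are tabulated in the paper), and the actual procedure additionally rounds the resulting predictions slightly before use.

```python
import numpy as np

def fit_power_law(n_params, values):
    """Fit value ≈ c * N**alpha by least squares in log-log space; return (c, alpha)."""
    log_n = np.log(np.asarray(n_params, dtype=float))
    log_v = np.log(np.asarray(values, dtype=float))
    alpha, log_c = np.polyfit(log_n, log_v, 1)
    return float(np.exp(log_c)), float(alpha)

# Hypothetical sweep optima (model size -> best batch size in tokens / best peak LR).
sizes   = [5e6, 1e7, 2e7, 4e7, 1e8]
best_bs = [6.6e4, 9.8e4, 1.4e5, 2.1e5, 3.4e5]
best_lr = [8e-3, 6e-3, 4.5e-3, 3.4e-3, 2.2e-3]

c_bs, a_bs = fit_power_law(sizes, best_bs)
c_lr, a_lr = fit_power_law(sizes, best_lr)

# Extrapolate to a larger model in the grid, e.g. N = 9.01e8:
bs_901m = c_bs * 9.01e8 ** a_bs
lr_901m = c_lr * 9.01e8 ** a_lr
```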
Finally, we reproduce the adjusted scaling law (C) = 1.3e9 (C / 8.64e19)^0.73 which <cit.> obtain by estimating the compute required to reach the same results at a sufficiently low batch size. To do so, we use our tuned hyperparameters as a proxy for suitable batch size and revert our previous corrections (head FLOP count and warmup duration). We obtain an exponent of 0.717 and good agreement with their adjusted scaling law; see <Ref> in the appendix. § ADDITIONAL ANALYSIS §.§ Trends in compute-optimal loss <Ref> shows the minimum loss achievable for each compute budget C in the experiments shown in <Ref>. We estimate the minimum loss using the same interpolation procedure we use to extract the optimal parameter number and token count . The figure shows that, at low compute scales, shortening the warmup duration and tuning hyperparameters leads to substantial loss improvements (each by up to 0.5 nat per token). However, at larger scales these interventions do not significantly improve the loss. In contrast, learning rate decay becomes increasingly beneficial as compute grows, and appears to also improve the rate of decrease in the loss. Perhaps coincidentally, the effects of overestimating the optimal loss (due to long warmup and large batch size) seem to closely offset the effect of underestimating computational cost (by discounting the contribution from the model's head): the first and last curves in <Ref> closely overlap. Similarly to <cit.> we observe a curvature in the optimal loss, while <cit.> report a near-perfect power law behavior. This difference is due to a combination of the difference in FLOP counts discussed in <Ref> and the fact that the experiments of <cit.> extend to higher compute budgets where the loss is closer to its irreducible level. Indeed, for the tuned optimizer experiment (<Ref>) we find that a saturating power law fits the optimal loss and extrapolates well, while extrapolating poorly for other experiments (see <Ref> in the appendix). This suggests that a predictable trend in ((C), (C)) is an indicator of locally-optimal hyperparameters. The exponent of our saturating power fit is approximately -0.1, twice as large as the exponent found in <cit.>. §.§ Scaling law accuracy as a function of compute We now estimate the computational cost of our scaling law experiments, quantifying the effect of the learning rate schedule, and plot how our predictions improve and become more confident with increased computation. We find that the training cost of each experiment that utilized a fixed learning rate schedule was 1.54e20 FLOPs, while the experiments that used a varying-length cosine learning rate schedule required 2.99e20 FLOPs; essentially double the compute; see <Ref> for more details. We also find that the cost of the hyperparameter sweep described in <Ref> was 2.04e20 FLOPs—slightly less than the combined cost of two scaling experiments that leveraged it (one on each dataset). Moreover, in hindsight, we could have arrived at similar hyperparameters using only models of size at most 57M and a simple heuristic for choosing β_2 based on batch size, which would have cost only 1.44e19 FLOPs. <Ref> shows the evolution of the predicted compute-optimal model size exponent , its confidence interval, and a measure of the prediction accuracy as we modulate the experiment's compute budget by truncating our FLOP grid. The figure shows that the prediction becomes steadily more accurate and confident as compute increases. 
§ DISCUSSION §.§ Related work While neural scaling laws precede the advent of large language models <cit.>, breakthroughs in model <cit.> and data <cit.> scaling allowed <cit.> to demonstrate the dramatic possibility of unbounded improvement with scale, triggering an explosion in the literature on the topic. Here we focus on the relatively fewer works that tackle optimal resource allocation under a compute constraint. For language modeling, <cit.> and <cit.> repeat subsets of the analyses in <cit.> and derive compute-optimal scaling laws. Employing Approach 3 of <cit.> (see also <cit.>), <cit.> find that, for their models, optimal scaling favors larger token-to-parameter ratios than in <cit.> and in our results. They attribute this difference to modeling improvements since <cit.> and argue the same holds for Llama 2 <cit.>. However, our setup incorporates most of the advances in Llama 2 and still produces power laws very close to <cit.>. Like us, <cit.> perform hyperparameter tuning and use isoFLOP analysis to determine compute-optimal model sizes on multiple datasets. While they arrive at an exponent on the order of <cit.> for the main dataset they study, they report a higher exponent =0.578 for OpenWebText2 (i.e., predicting a lower token-to-parameter ratio at scale), which they attribute to the superior quality of the dataset. We also study this dataset but arrive much closer to the scaling law. We conjecture the larger exponent might be due to repeating training data, which likely occurred in their experiment given the dataset's limited size and their compute budget. Settling these discrepancies could be a source of further valuable lessons on optimal model scaling. Recent work also studies compute-bounded scaling laws beyond the compute-optimal regime. Informed by the increasingly common practice of training medium-scale models beyond compute optimality <cit.>, <cit.> account for the expected inference cost of the model, showing that it naturally skews optimal settings toward smaller models. <cit.> directly predict the loss and downstream performance for models trained past the point of compute optimality, and <cit.> model joint compute-data bottlenecks. All three works rely on the law as a reference point, with <cit.> baking it into their parametric forms. Compute-optimal scaling is studied beyond the language domain, particularly in vision. <cit.> study autoregressive modeling for a variety of tasks and find scaling laws roughly consistent with the <cit.> adjusted scaling law (with exponent =0.73). That work shares the methodological issues described in the top row of <Ref> (FLOP count and long warmup), but performs hyperparameter tuning for smaller-scale models; in <Ref> we reach similar results when doing the same. <cit.> characterize the compute-efficient frontier of Vision Transformers (ViTs), while <cit.> studies compute-constrained scaling of CLIP models. However, they do not offer a power law for scaling model size with compute. <cit.> tackle model design under an inference compute constraint by fitting multi-term parametric forms to obtain predictions for the optimal ViT shape. <cit.> point out an intricate interplay between data filtering and compute constraints. Finally, <cit.> study compute-optimal scaling of MLPs and obtain exponent =0.35, suggesting that MLPs require much more rapid data growth than more sophisticated architectures. Overall, whether and to what extent scaling holds in the vision domain remains a compelling open problem.
We also remark on two themes of our paper that draw from prior work. The first is the importance of hyperparameter tuning: several works <cit.> make the case that smooth, predictable scaling laws emerge when models on all scales are properly tuned. Our work (and particularly <Ref>) provides another example of this principle and agrees with previous observations that tuning is particularly important at smaller scales. Second, previous studies <cit.>, as well as the concurrent work <cit.>, propose alternative learning rate schedules that address a key shortcoming of cosine decay: the need to commit to a step budget in advance. We consider a constant learning rate that requires no commitment at all. We show this simple choice suffices to reproduce the law and quantify the computational savings compared to a cosine schedule. However, <Ref> (and also <cit.>, among others) show that in terms of loss, the constant schedule clearly underperforms the cosine schedule. Concurrent and independent work. <cit.> also study the discrepancy between the and scaling laws. By re-analyzing data extracted from the <cit.> experiments by <cit.>, they identify the last layer FLOP count as a cause for the discrepancy. Moreover, they report on a small-scale experimental study (with model sizes up to 5M and training token counts up to 530M) in which they observe that a non-decaying learning rate schedule is sufficient for reproducing the exponent and that learning rate tuning is necessary. These results independently corroborate part of our observations in <Ref>. <cit.> do not identify the warmup duration issue we describe in <Ref>. As a consequence, when reproducing the exponent they reach a value close to 0.73 rather than the `raw' value 0.88 reported in <cit.> (see discussion in <Ref>, <Ref>, and <Ref>). In addition, our experiments roughly match the <cit.> compute budget, which is about 3 orders of magnitude larger than the budget in <cit.>, and we perform careful tuning of both the learning rate and the batch size. §.§ Limitations Computational scale is a notable limitation, as well as a defining feature, of our results: our experiments are roughly on the scale of those in <cit.> but are substantially smaller than those of <cit.>. Scaling may effectively mitigate each of the issues we identify: with scale, the contribution of the model head becomes negligible, any (fixed) warmup period eventually becomes reasonably long, and hyperparameter sensitivity decreases, as shown in <Ref> and <Ref>. Nevertheless, we believe that experimental protocols that induce correct scaling behavior at low computational budgets are crucial for developing the empirical science of machine learning, particularly in academic settings. Due to limited compute budgets, our hyperparameter sweep only targeted the smaller models in our grid, and furthermore trained each model for only 20 steps, i.e., the optimal point according to the scaling law. This raises the concern that the hyperparameters we chose unfairly favor models trained for that particular token-to-parameter ratio, and rerunning our experiment with perfect tuning for each model size and each token-to-parameter ratio would have yielded different results.
We believe this is unlikely: at small scales (where hyperparameter tuning is crucial) our original set of hyperparameters favored higher token-to-parameter ratios because they still had a sufficient number of steps to train for, and therefore choosing hyperparameters specifically for them is not likely to result in significant gains. In <Ref> we analyze our existing tuning results to estimate the potential gains from perfect tuning, and find that they are likely to have small impact on our conclusions. Moreover, transferring our hyperparameters to another dataset yields similar results. Finally, a broader limitation of compute-optimal scaling as defined by <cit.>, <cit.> and our work, is that it only concerns the pretraining loss rather than more direct measures of a model's capabilities. Here again, scale is an issue: most zero-shot and in-context capabilities do not emerge at the scales we consider here, and predicting them from small-scale proxies is an important open problem <cit.>. Instead, it is possible to study downstream performance via fine-tuning, though this may cause the clean scaling patterns seen in pretraining to break down <cit.>, potentially because the fine-tuning procedure is sensitive to the choice of hyperparameters <cit.>. § ACKNOWLEDGMENTS We thank Georgios Smyrnis, Samir Yitzhak Gadre, Achal Dave, and Mehdi Cherti for helpful discussion and assistance with OpenLM and with the JSC cluster. TP and YC acknowledge support from the Israeli Science Foundation (ISF) grant no. 2486/21 and the Adelis Foundation. MW was supported in part by a Google Fellowship. JJ acknowledges funding by the Federal Ministry of Education and Research of Germany under grant no. 01IS22094B WestAI - AI Service Center West. LW acknowledges funding from Open Philanthropy. We gratefully acknowledge compute budget granted by Gauss Centre for Supercomputing e.V. and by the John von Neumann Institute for Computing (NIC) on the supercomputers JUWELS Booster and JURECA at Jülich Supercomputing Centre (JSC). 55 urlstyle [com()]commoncrawl Common Crawl. <https://commoncrawl.org>. [Akima(1970)]akima1970new H. Akima. A new method of interpolation and smooth curve fitting based on local procedures. Journal of the ACM (JACM), 1970. [Alabdulmohsin et al.(2023)Alabdulmohsin, Zhai, Kolesnikov, and Beyer]alabdulmohsin2023getting I. Alabdulmohsin, X. Zhai, A. Kolesnikov, and L. Beyer. Getting ViT in shape: Scaling laws for compute-optimal model design. In Advances in Neural Information Processing Systems (NeurIPS), 2023. [Ba et al.(2016)Ba, Kiros, and Hinton]ba2016layer J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv:1607.06450, 2016. [Bachmann et al.(2024)Bachmann, Anagnostidis, and Hofmann]bachmann2024scaling G. Bachmann, S. Anagnostidis, and T. Hofmann. Scaling mlps: A tale of inductive bias. Advances in Neural Information Processing Systems (NeurIPS), 2024. [Bellagente et al.(2024)Bellagente, Tow, Mahan, Phung, Zhuravinskyi, Adithyan, Baicoianu, Brooks, Cooper, Datta, et al.]bellagente2024stable M. Bellagente, J. Tow, D. Mahan, D. Phung, M. Zhuravinskyi, R. Adithyan, J. Baicoianu, B. Brooks, N. Cooper, A. Datta, et al. Stable lm 2 1.6 b technical report. arXiv:2402.17834, 2024. [Besiroglu et al.(2024)Besiroglu, Erdil, Barnett, and You]besiroglu2024chinchilla T. Besiroglu, E. Erdil, M. Barnett, and J. You. Chinchilla scaling: A replication attempt. arXiv:2404.10102, 2024. [Black et al.(2022)Black, Biderman, Hallahan, Anthony, Gao, Golding, He, Leahy, McDonell, Phang, et al.]black2022gpt S. 
Black, S. Biderman, E. Hallahan, Q. Anthony, L. Gao, L. Golding, H. He, C. Leahy, K. McDonell, J. Phang, et al. Gpt-neox-20b: An open-source autoregressive language model. arXiv:2204.06745, 2022. [Brown et al.(2020)Brown, Mann, Ryder, Subbiah, Kaplan, Dhariwal, Neelakantan, Shyam, Sastry, Askell, et al.]brown2020language T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems (NeurIPS), 2020. [Cherti et al.(2023)Cherti, Beaumont, Wightman, Wortsman, Ilharco, Gordon, Schuhmann, Schmidt, and Jitsev]cherti2023reproducible M. Cherti, R. Beaumont, R. Wightman, M. Wortsman, G. Ilharco, C. Gordon, C. Schuhmann, L. Schmidt, and J. Jitsev. Reproducible scaling laws for contrastive language-image learning. In Conference on computer vision and pattern recognition (CVPR), 2023. [Chowdhery et al.(2023)Chowdhery, Narang, Devlin, Bosma, Mishra, Roberts, Barham, Chung, Sutton, Gehrmann, et al.]chowdhery2023palm A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 240 (240):0 1–113, 2023. [DeepSeek(2024)]deepseekai2024deepseek DeepSeek. Deepseek LLM: Scaling open-source language models with longtermism. arXiv:2401.02954, 2024. [Dehghani et al.(2023)Dehghani, Djolonga, Mustafa, Padlewski, Heek, Gilmer, Steiner, Caron, Geirhos, Alabdulmohsin, et al.]dehghani2023scaling M. Dehghani, J. Djolonga, B. Mustafa, P. Padlewski, J. Heek, J. Gilmer, A. P. Steiner, M. Caron, R. Geirhos, I. Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. In International Conference on Machine Learning (ICML), 2023. [Gadre et al.(2024)Gadre, Smyrnis, Shankar, Gururangan, Wortsman, Shao, Mercat, Fang, Li, Keh, Xin, Nezhurina, Vasiljevic, Jitsev, Dimakis, Ilharco, Song, Kollar, Carmon, Dave, Heckel, Muennighoff, and Schmidt]gadre2024language S. Y. Gadre, G. Smyrnis, V. Shankar, S. Gururangan, M. Wortsman, R. Shao, J. Mercat, A. Fang, J. Li, S. Keh, R. Xin, M. Nezhurina, I. Vasiljevic, J. Jitsev, A. G. Dimakis, G. Ilharco, S. Song, T. Kollar, Y. Carmon, A. Dave, R. Heckel, N. Muennighoff, and L. Schmidt. Language models scale reliably with over-training and on downstream tasks. arXiv:2403.08540, 2024. [Gao et al.(2020)Gao, Biderman, Black, Golding, Hoppe, Foster, Phang, He, Thite, Nabeshima, Presser, and Leahy]gao2020pile L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy. The Pile: An 800gb dataset of diverse text for language modeling. arXiv:2101.00027, 2020. [Goyal et al.(2017)Goyal, Dollár, Girshick, Noordhuis, Wesolowski, Kyrola, Tulloch, Jia, and He]goyal2017accurate P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch SGD: Training imagenet in 1 hour. arXiv:1706.02677, 2017. [Goyal et al.(2024)Goyal, Maini, Lipton, Raghunathan, and Kolter]goyal2024scaling S. Goyal, P. Maini, Z. C. Lipton, A. Raghunathan, and J. Z. Kolter. Scaling laws for data filtering–data curation cannot be compute agnostic. In Conference on computer vision and pattern recognition (CVPR), 2024. [Gururangan et al.(2023)Gururangan, Wortsman, Gadre, Dave, Kilian, Shi, Mercat, Smyrnis, Ilharco, Jordan, Heckel, Dimakis, Farhadi, Shankar, and Schmidt]gururangan2023openlm S. 
Gururangan, M. Wortsman, S. Y. Gadre, A. Dave, M. Kilian, W. Shi, J. Mercat, G. Smyrnis, G. Ilharco, M. Jordan, R. Heckel, A. Dimakis, A. Farhadi, V. Shankar, and L. Schmidt. OpenLM: a minimal but performative language modeling (lm) repository, 2023. <https://github.com/mlfoundations/open_lm>. [Hägele et al.(2024)Hägele, Bakouch, Kosson, Allal, Von Werra, and Jaggi]hagele2024scaling A. Hägele, E. Bakouch, A. Kosson, L. B. Allal, L. Von Werra, and M. Jaggi. Scaling laws and compute-optimal training beyond fixed training durations. arXiv:2405.18392, 2024. [Henighan et al.(2020)Henighan, Kaplan, Katz, Chen, Hesse, Jackson, Jun, Brown, Dhariwal, Gray, et al.]henighan2020scaling T. Henighan, J. Kaplan, M. Katz, M. Chen, C. Hesse, J. Jackson, H. Jun, T. B. Brown, P. Dhariwal, S. Gray, et al. Scaling laws for autoregressive generative modeling. arXiv:2010.14701, 2020. [Hestness et al.(2017)Hestness, Narang, Ardalani, Diamos, Jun, Kianinejad, Patwary, Yang, and Zhou]hestness2017deep J. Hestness, S. Narang, N. Ardalani, G. F. Diamos, H. Jun, H. Kianinejad, M. M. A. Patwary, Y. Yang, and Y. Zhou. Deep learning scaling is predictable, empirically. arXiv:1712.00409, 2017. [Hoffmann et al.(2022)Hoffmann, Borgeaud, Mensch, Buchatskaya, Cai, Rutherford, de Las Casas, Hendricks, Welbl, Clark, et al.]hoffmann2022empirical J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, et al. An empirical analysis of compute-optimal large language model training. In Advances in Neural Information Processing Systems (NeurIPS), 2022. [Hu et al.(2024)Hu, Tu, Han, He, Cui, Long, Zheng, Fang, Huang, Zhao, et al.]hu2024minicpm S. Hu, Y. Tu, X. Han, C. He, G. Cui, X. Long, Z. Zheng, Y. Fang, Y. Huang, W. Zhao, et al. MiniCPM: Unveiling the potential of small language models with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024. [Ivgi et al.(2022)Ivgi, Carmon, and Berant]ivgi2022scaling M. Ivgi, Y. Carmon, and J. Berant. Scaling laws under the microscope: Predicting transformer performance from small scale experiments. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. [Jiang et al.(2023)Jiang, Sablayrolles, Mensch, Bamford, Chaplot, Diego de las Casas, Lengyel, Lample, Saulnier, Lavaud, Lachaux, Stock, Scao, Lavril, Wang, Lacroix, and Sayed]jiang2023mistral A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, F. B. Diego de las Casas, G. Lengyel, G. Lample, L. Saulnier, L. R. Lavaud, M.-A. Lachaux, P. Stock, T. L. Scao, T. Lavril, T. Wang, T. Lacroix, and W. E. Sayed. Mistral 7B. arXiv:2310.06825, 2023. [Kaplan et al.(2020)Kaplan, McCandlish, Henighan, Brown, Chess, Child, Gray, Radford, Wu, and Amodei]kaplan2020scaling J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. arXiv:2001.08361, 2020. [Lefaudeux et al.(2022)Lefaudeux, Massa, Liskovich, Xiong, Caggiano, Naren, Xu, Hu, Tintore, Zhang, Labatut, Haziza, Wehrstedt, Reizenstein, and Sizov]xFormers2022 B. Lefaudeux, F. Massa, D. Liskovich, W. Xiong, V. Caggiano, S. Naren, M. Xu, J. Hu, M. Tintore, S. Zhang, P. Labatut, D. Haziza, L. Wehrstedt, J. Reizenstein, and G. Sizov. xformers: A modular and hackable transformer modelling library. <https://github.com/facebookresearch/xformers>, 2022. [Lieber et al.(2021)Lieber, Sharir, Lenz, and Shoham]lieber2021jurassic O. Lieber, O. Sharir, B. Lenz, and Y. Shoham. 
Jurassic-1: Technical details and evaluation, 2021. [Loshchilov and Hutter(2017)]loshchilov2017decoupled I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv:1711.05101, 2017. [McCandlish et al.(2018)McCandlish, Kaplan, Amodei, and The OpenAI Dota Team]mccandlish2018empirical S. McCandlish, J. Kaplan, D. Amodei, and The OpenAI Dota Team. An empirical model of large-batch training. arXiv:1812.06162, 2018. [Muennighoff et al.(2024)Muennighoff, Rush, Barak, Le Scao, Tazi, Piktus, Pyysalo, Wolf, and Raffel]muennighoff2024scaling N. Muennighoff, A. Rush, B. Barak, T. Le Scao, N. Tazi, A. Piktus, S. Pyysalo, T. Wolf, and C. A. Raffel. Scaling data-constrained language models. Advances in Neural Information Processing Systems (NeurIPS), 2024. [Paszke et al.(2019)Paszke, Gross, Massa, Lerer, Bradbury, Chanan, Killeen, Lin, Gimelshein, Antiga, et al.]paszke2019pytorch A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. 2019. [Pearce and Song(2024)]pearce2024reconciling T. Pearce and J. Song. Reconciling Kaplan and Chinchilla scaling laws. arXiv:2406.12907, 2024. [Penedo et al.(2023)Penedo, Malartic, Hesslow, Cojocaru, Alobeidli, Cappelli, Pannier, Almazrouei, and Launay]penedo2023refinedweb G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, H. Alobeidli, A. Cappelli, B. Pannier, E. Almazrouei, and J. Launay. The refinedweb dataset for falcon LLM: Outperforming curated corpora with web data only. In Advances in Neural Information Processing Systems (NeurIPS), 2023. [Radford et al.(2019)Radford, Wu, Child, Luan, Amodei, Sutskever, et al.]radford2019language A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models are unsupervised multitask learners, 2019. [Rae et al.(2021)Rae, Borgeaud, Cai, Millican, Hoffmann, Song, Aslanides, Henderson, Ring, Young, et al.]rae2021scaling J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv:2112.11446, 2021. [Raffel et al.(2020)Raffel, Shazeer, Roberts, Lee, Narang, Matena, Zhou, Li, and Liu]raffel2020exploring C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research (JMLR), 210 (140):0 1–67, 2020. [Rosenfeld et al.(2019)Rosenfeld, Rosenfeld, Belinkov, and Shavit]rosenfeld2019constructive J. S. Rosenfeld, A. Rosenfeld, Y. Belinkov, and N. Shavit. A constructive prediction of the generalization error across scales. arXiv:1909.12673, 2019. [Sardana and Frankle(2023)]sardana2023beyond N. Sardana and J. Frankle. Beyond Chinchilla-optimal: Accounting for inference in language model scaling laws. arXiv:2401.00448, 2023. [Scao et al.(2022)Scao, Fan, Akiki, Pavlick, Ilić, Hesslow, Castagné, Luccioni, Yvon, et al.]bigscience2022bloom T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, et al. BLOOM: A 176b-parameter open-access multilingual language model. arXiv:2211.05100, 2022. [Schaeffer et al.(2024)Schaeffer, Miranda, and Koyejo]schaeffer2024emergent R. Schaeffer, B. Miranda, and S. Koyejo. Are emergent abilities of large language models a mirage? Advances in Neural Information Processing Systems (NeurIPS), 2024. 
[Shallue et al.(2019)Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl]shallue2019measuring C. J. Shallue, J. Lee, J. Antognini, J. Sohl-Dickstein, R. Frostig, and G. E. Dahl. Measuring the effects of data parallelism on neural network training. Journal of Machine Learning Research (JMLR), 200 (112):0 1–49, 2019. [Shazeer(2020)]shazeer2020glu N. Shazeer. Glu variants improve transformer. arXiv:2002.05202, 2020. [Shoeybi et al.(2019)Shoeybi, Patwary, Puri, LeGresley, Casper, and Catanzaro]shoeybi2019megatron M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv:1909.08053, 2019. [Smith et al.(2022)Smith, Patwary, Norick, LeGresley, Rajbhandari, Casper, Liu, Prabhumoye, Zerveas, Korthikanti, et al.]smith2022using S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, J. Casper, Z. Liu, S. Prabhumoye, G. Zerveas, V. Korthikanti, et al. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. rXiv:2201.11990, 2022. [Su et al.(2024)Su, Ahmed, Lu, Pan, Bo, and Liu]su2024roformer J. Su, M. Ahmed, Y. Lu, S. Pan, W. Bo, and Y. Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:0 127063, 2024. [Tay et al.(2022)Tay, Dehghani, Rao, Fedus, Abnar, Chung, Narang, Yogatama, Vaswani, and Metzler]tay2021scale Y. Tay, M. Dehghani, J. Rao, W. Fedus, S. Abnar, H. W. Chung, S. Narang, D. Yogatama, A. Vaswani, and D. Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. In International Conference on Learning Representations (ICLR), 2022. [Touvron et al.(2023a)Touvron, Lavril, Izacard, Martinet, Lachaux, Lacroix, Rozière, Goyal, Hambro, Azhar, Rodriguez, Joulin, Grave, and Lample]touvron2023llama H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and Efficient Foundation Language Models. arXiv:2302.13971, 2023a. [Touvron et al.(2023b)Touvron, Martin, Stone, Albert, Almahairi, Babaei, Bashlykov, Batra, Bhargava, Bhosale, Bikel, Blecher, Ferrer, Chen, Cucurull, Esiobu, Fernandes, Fu, Fu, Fuller, Gao, Goswami, Goyal, Hartshorn, Hosseini, Hou, Inan, Kardas, Kerkez, Khabsa, Kloumann, Korenev, Koura, Lachaux, Lavril, Lee, Liskovich, Lu, Mao, Martinet, Mihaylov, Mishra, Molybog, Nie, Poulton, Reizenstein, Rungta, Saladi, Schelten, Silva, Smith, Subramanian, Tan, Tang, Taylor, Williams, Kuan, Xu, Yan, Zarov, Zhang, Fan, Kambadur, Narang, Rodriguez, Stojnic, Edunov, and Scialom]touvron2023llama2 H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. C. Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M.-A. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M. Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and T. Scialom. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv:2307.09288, 2023b. 
[Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin]vaswani2017attention A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. 2017. [Wortsman et al.(2024)Wortsman, Liu, Xiao, Everett, Alemi, Adlam, Co-Reyes, Gur, Kumar, Novak, Pennington, Sohl-Dickstein, Xu, Lee, Gilmer, and Kornblith]wortsman2024smallscale M. Wortsman, P. J. Liu, L. Xiao, K. E. Everett, A. A. Alemi, B. Adlam, J. D. Co-Reyes, I. Gur, A. Kumar, R. Novak, J. Pennington, J. Sohl-Dickstein, K. Xu, J. Lee, J. Gilmer, and S. Kornblith. Small-scale proxies for large-scale transformer training instabilities. In International Conference on Learning Representations (ICLR), 2024. [Zhai et al.(2022)Zhai, Kolesnikov, Houlsby, and Beyer]zhai2022scaling X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer. Scaling vision transformers. In Conference on computer vision and pattern recognition (CVPR), 2022. [Zhang et al.(2019a)Zhang, Titov, and Sennrich]zhang2019improving B. Zhang, I. Titov, and R. Sennrich. Improving deep transformer with depth-scaled initialization and merged attention. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019a. [Zhang et al.(2019b)Zhang, Li, Nado, Martens, Sachdeva, Dahl, Shallue, and Grosse]zhang2019algorithmic G. Zhang, L. Li, Z. Nado, J. Martens, S. Sachdeva, G. Dahl, C. Shallue, and R. B. Grosse. Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model. In Advances in Neural Information Processing Systems (NeuIPS), 2019b. [Zhang et al.(2022)Zhang, Roller, Goyal, Artetxe, Chen, Chen, Dewan, Diab, Li, Lin, et al.]zhang2022opt S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin, et al. OPT: Open pre-trained transformer language models. arXiv:2205.01068, 2022.
§ ADDITIONAL TRAINING SETUP DESCRIPTION Modeling. We train decoder-only Transformer <cit.> language models for next-token prediction using OpenLM, a PyTorch <cit.> library for efficient training of medium-scale language models. We use the library with largely the same configuration as <cit.>, leveraging xFormers <cit.>, bfloat16 automatic mixed precision, (qk)-LayerNorm <cit.>, SwiGLU <cit.>, depth-scaled initialization <cit.>, and rotary positional embeddings <cit.>. We use the GPT-NeoX-20B tokenizer <cit.> whose vocabulary size of 50432 closely matches the vocabulary size of <cit.>. We use a sequence length of 2048, which is twice the sequence length used in <cit.>, but we attempt to match parameters such as the batch size and warmup duration in terms of their size in tokens. We do not tie the weights of the embeddings and head layers. Optimization. Throughout the paper, we use the AdamW optimizer <cit.> to minimize the standard log loss with an additive z-loss term for stabilization <cit.> (coefficient 1e-4) as an auxiliary loss (for our analysis, we record the log loss without the z-loss term in both train and validation). As advocated for in <cit.>, we use independent weight decay <cit.> with parameter 1e-4, i.e., we set the “weight decay” parameter in the standard PyTorch AdamW implementation to be 1e-4/η, where η is the base learning rate. In <Ref> and <Ref> we describe our choice of hyperparameters in our experiments. Hardware and computational cost. We train our models on a cluster with 40GB A100 GPUs, using between 4 and 32 GPUs in parallel per training run. We use the OpenLM/PyTorch distributed data parallel implementation as well as gradient checkpointing. According to our logs, the total compute cost of all the experiments going into this paper is 22.3K GPU hours, and the total FLOP count is 3.03e+21 FLOPs. Data repetition. The datasets we work with are large enough to allow us to perform all of our training runs without any data repetition. However, due to two software issues, some experiments experienced limited data repetition. In particular, data going into our hyperparameter sweep might have been repeated up to 10 times. Moreover, on OpenWebText2, some of our larger-scale training runs might have seen data repeated up to 4 times. We believe this had limited to no impact on our results, as the hyperparameter sweep involved fairly small models unlikely to be able to memorize, while <cit.> show that 4 data repetitions have only a marginal effect on model performance. In our main experiments on the much larger RefinedWeb dataset we have verified that no data repetition occurred.
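As a rough sketch of the optimization setup described above (independent weight decay and the auxiliary z-loss), the following shows one way to wire it up in PyTorch; the β values and the function names are illustrative assumptions rather than the exact OpenLM code.

import torch
import torch.nn.functional as F

def make_optimizer(model, lr, wd_times_lr=1e-4, betas=(0.9, 0.95)):
    # "Independent" weight decay: the PyTorch AdamW `weight_decay` argument is set
    # to 1e-4 / lr so that the effective decay per step does not scale with the
    # base learning rate (the betas here are an assumption for illustration).
    return torch.optim.AdamW(model.parameters(), lr=lr, betas=betas,
                             weight_decay=wd_times_lr / lr)

def loss_with_z_term(logits, targets, z_coeff=1e-4):
    # Standard log loss plus an auxiliary z-loss penalizing the squared
    # log-partition function; only `ce` is recorded in the analysis.
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
    z = torch.logsumexp(logits, dim=-1)
    return ce + z_coeff * (z ** 2).mean(), ce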
§ ESTIMATING FLOPS AND ACCOUNTING FOR ATTENTION In this section, we compare our definition of model size to three alternatives and also discuss choices made by related work. Before that, we provide the precise expression for computing (by our definition) from the model depth l, width d, and vocabulary size v=50432. Due to efficient implementation considerations, OpenLM sets the model's feedforward dimension to d_FF = 256 ⌊(255 + 8d/3)/256⌋, i.e., 8d/3 rounded up to the nearest multiple of 256. Since each SwiGLU feedforward block has 3 d_FF × d parameter matrices, and since each attention block has 4d^2 parameters in linear layers, our total estimate is: = (3d_FF + 4d)d l + dv. We begin by considering, instead of the number of weights in linear layers, the total number of non-embedding learnable weights (e.g., including also LayerNorm gains). The fourth column of <Ref> shows that the difference between this number and is negligible. We also note that embedding layers have a negligible contribution to the model FLOP counts, since they do not require matrix-vector products. Consequently, the only non-negligible source of error in the approximation (, )=6 is the attention layers. Since in OpenLM the attention dimension is identical to the model width, <cit.> shows that the attention operation costs an additional 6 n d FLOPs per token per layer for a forward and backward pass, where n=2048 is the sequence length. Thus, if we define an effective model size of + ndl, we have that 6 captures the cost of training the model for tokens, including attention. We now consider the difference between these approximations and its effect on compute-optimal scaling laws. The fifth column of <Ref> compares to . It shows that the ratio / changes smoothly from roughly 1.1 to roughly 1.2 and back to 1.1 as our model sizes grow. We note that had this ratio been completely constant, there would have been essentially no difference between working with and working with since a power law in one would correspond directly to a power law in the other. Since in our model grid this ratio is approximately constant, we expect to see limited differences between the scaling laws resulting from each definition. <Ref> confirms this expectation, showing quantitatively and qualitatively similar results to <Ref>. Consequently, we cannot determine with certainty which definition is more appropriate for predicting compute-optimal model sizes. Nevertheless, we observe that our final experiment (with parameter tuning) predicts the optimal (effective) model size at the Chinchilla/Gopher compute scale to be about 16B parameters larger than the size predicted using our standard definition. These predictions are directly comparable since model size and effective model size are essentially identical at these scales. If we take the scaling law as ground truth, then the prediction we get using is a bit worse. Finally, we touch on a third measure of model size, which does not count the contribution of the model's head to the FLOP count. That is, we consider - dv. This is the definition that <cit.> use in their experiment, approximating the FLOP count as 6. Their main motivation for this choice is an observation that not counting embedding parameters leads to more predictive scaling laws in the unlimited compute regime. However, as the final column of <Ref> shows, this approximation leads to a large, systematic error in FLOP counts for smaller models. In <Ref> we show that this is one of the primary factors behind the /discrepancy.
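A small sketch of these counts (linear-layer parameters, the head, and the causal-attention correction) might look as follows; it mirrors the expressions above rather than any particular codebase, and the function names are our own.

import math

def openlm_param_count(depth, width, vocab=50432):
    # Parameters in linear layers plus the head, following the expression above;
    # d_FF is 8*width/3 rounded up to a multiple of 256.
    d_ff = 256 * math.floor((255 + 8 * width / 3) / 256)
    return (3 * d_ff + 4 * width) * width * depth + width * vocab

def training_flops(depth, width, tokens, seq_len=2048, vocab=50432,
                   include_attention=True):
    # 6 * N_effective * D, where N_effective adds n*d*l to account for causal attention.
    n_params = openlm_param_count(depth, width, vocab)
    if include_attention:
        n_params += seq_len * width * depth
    return 6 * n_params * tokens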
We conclude this section with an overview of the model size definitions used by related works other than <cit.>. <cit.> use as <cit.> and observe high scaling exponents as a result. <cit.> account for both linear and attention layers in their FLOP computation essentially using in their first two estimation approaches. However, their third approach appears to ignore the attention FLOPs and also count the embeddings parameters, i.e., setting '=+dv. <cit.> compare 3 definitions of model size, including , , and a hybrid of and that takes attention into account and ignores the model head. They report that the latter option gives the best prediction of the compute-optimal loss at large scales. However, we note that both <cit.> and <cit.> claim that attention costs double the FLOPs mentioned in <cit.>; we believe this is likely because they do not account for the fact that the attention is causal, meaning it requires only half the FLOPs of an unstructured matrix-vector product. Finally, <cit.> use as we do. § ADDITIONAL DATA ANALYSIS DETAILS This section provides a comprehensive description of our procedure for fitting the power law for (C). Our procedure for the power law is analogous, using the relationship (<ref>). Training loss smoothing. We smooth the training loss using a variable-length averaging window. In particular, we estimate the loss at step i as the average of the losses in steps i - pi to i+pi, for p=0.05. We also compensate for the lag introduced by logging the averaged training loss every k=20 steps by shifting the training loss's index by k/2; this compensation is quite important for matching the validation loss early in the optimization. We have verified that the smoothed training loss matches the validation loss (where it is available) roughly to the validation loss's sampling error. Fetching the loss at C_i FLOPs from a single run. To estimate the loss of a model of size trained to compute C_i we linearly interpolate the validation/training loss (in log-space) at the two steps closest to C_i / (6 B), where B is the batch size in tokens. We also require the nearest step to be within 10% of C_i / (6 B), and do not return the loss if no such step exists. For most of our experiments, we compute the validation loss precisely at step C_i / (6 B). However, for the experiments in <Ref> and <Ref>, which consider alternative definitions of we do not have validation loss samples, and we use the training loss instead. Estimating loss noise. Defining the ideal loss as the population log loss in expectation over model training, there are two sources of error in estimating it: finite samples in the validation set, and variation between training seeds. We estimate the former directly by storing the validation loss on 100 subsamples of the holdout data and find the standard deviation to be in the range 0.001–0.002 across the different experimental settings. To gauge the error due to seed variance, we train smaller-scale models from our grid on 7 seeds using the tuned hyperparameters for 20 tokens each. We find a roughly log-linear relationship (see <Ref>) between the (post-warmup) smoothed training loss and the inter-seed standard deviation. For RefinedWeb (<Ref>), it appears to saturate around the sampling error, and we heuristically assign standard deviation 0.05 to samples with loss >7, standard deviation 0.002 to samples with loss <3, and linearly interpolate the standard deviation in log-space to samples with loss in the range [3,7].
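A minimal sketch of the smoothing and loss-fetching steps above is given here. The step target C/(6·N·B) follows from the 6ND FLOP approximation; the lag handling and edge cases are deliberately simplified, and the function names are our own.

import numpy as np

def smooth_loss(losses, p=0.05):
    # Variable-length window: the loss at step i is averaged over steps i-p*i .. i+p*i.
    losses = np.asarray(losses, dtype=float)
    out = np.empty_like(losses)
    for i in range(len(losses)):
        w = int(p * i)
        out[i] = losses[max(0, i - w): i + w + 1].mean()
    return out  # a further index shift of k/2 would compensate for averaged logging

def loss_at_flops(steps, losses, C, n_params, batch_tokens, tol=0.1):
    # Log-space linear interpolation of the loss at the step implied by C = 6*N*D.
    target = C / (6 * n_params * batch_tokens)
    steps, losses = np.asarray(steps, float), np.asarray(losses, float)
    j = np.searchsorted(steps, target)
    if j == 0 or j == len(steps):
        return None
    lo, hi = j - 1, j
    if min(abs(steps[lo] - target), abs(steps[hi] - target)) > tol * target:
        return None  # nearest logged step is too far from the target
    w = (np.log(target) - np.log(steps[lo])) / (np.log(steps[hi]) - np.log(steps[lo]))
    return float(np.exp((1 - w) * np.log(losses[lo]) + w * np.log(losses[hi])))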
For OpenWebText2 we observe significantly more cross-seed variance as well as less stable loss during training (compare the loss curves in <Ref>), potentially due to a difference in document lengths. Therefore, we set our standard deviation estimate to go from 0.1 at loss 6 to 0.01 at loss 3, saturating outside the interval and log-space interpolating inside it. Estimating (C_i) and its uncertainty. Given a list of values and their respective loss samples at compute C_i, we estimate the optimal value (C_i) using the following bootstrap-like procedure. For each bootstrap sample, we add independent Gaussian noise to each loss, whose standard deviation is determined according to the heuristic formula described above. We then interpolate the curve of loss vs. using <cit.> interpolation in log-space and find the value minimizing the interpolant; this forms our population of bootstrap samples for the (C_i) estimate. We estimate their standard deviation in log-space, and take the maximum between that value and one-third of the average log-spacing on the grid (roughly 1/3log√(2)). Occasionally, appears on the edge of the grid (though we attempt to avoid this in our experiment design). If more than half of the bootstrap samples land at the edge of the grid, we omit the value of C_i from the subsequent power law fit. Otherwise, we keep only the samples outside the grid edge, and inflate the standard deviation estimate by the fraction of omitted samples. § ADDITIONAL PLOTS FOR MAIN EXPERIMENTS In <Ref> we complement <Ref> by plotting our observations and power law fits for , and for all the experiments described in <Ref>. In <Ref> we reproduce this figure for the OpenWebText2 dataset, showing consistent qualitative and quantitative results. § FITTING HYPERPARAMETERS This section provides a detailed description of our hyperparameter tuning procedure. §.§ Full parameter sweep results We perform an extensive parameter sweep over 6 models from our grid (<Ref>) with sizes between 5M and 221M parameters. For each model, we sweep over learning rates and batch sizes, as well as three values of β_2. We train each model of size for 20 tokens (i.e., following the scaling law) and record the validation loss at the end of training. Overall, our hyperparameter sweep includes 642 training runs, and we perform it on only a single dataset (RefinedWeb). <Ref> plots all the loss values recorded in the sweep. Compared to analogous plots in <cit.>, we observe more sensitivity to the choice of learning rate, particularly for smaller models. We conjecture that this is because all the models in <cit.> train for the same number of tokens, so the smaller models become fully converged for a wide range of learning rates. §.§ Estimating the optimal batch size and learning rate via interpolation To estimate the optimal batch size and learning rate for each model size, we adopt a two-stage interpolation approach. In the first stage, for each model size and batch size, we estimate the optimal learning rate by interpolating (in log-space) the loss as a function of learning rate using <cit.> interpolation, where for every learning rate we assign the lowest loss obtained from the three values of β_2. We minimize the interpolant and save its minimizing argument and minimum value. In the second stage, we repeat this procedure over the sequence of batch size and interpolated loss pairs, finding an optimal batch size for each model size.
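The interpolate-and-minimize step shared by the bootstrap procedure and the two-stage hyperparameter interpolation can be sketched as follows. SciPy's Akima interpolator stands in for the interpolation method cited above; the grid resolution, bootstrap count, and summary statistics are arbitrary illustrative choices.

import numpy as np
from scipy.interpolate import Akima1DInterpolator

def argmin_by_interpolation(x, losses, num=512):
    # Interpolate loss vs. x with an Akima spline in log-space of x and return
    # the value of x minimizing the interpolant.
    logx = np.log(np.asarray(x, dtype=float))
    spline = Akima1DInterpolator(logx, np.asarray(losses, dtype=float))
    grid = np.linspace(logx[0], logx[-1], num)
    return float(np.exp(grid[np.nanargmin(spline(grid))]))

def bootstrap_optimum(x_grid, losses, noise_std, n_boot=1000, seed=0):
    # Bootstrap-like procedure: perturb each loss with Gaussian noise of the
    # heuristic standard deviation and re-estimate the minimizer each time.
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses, dtype=float)
    samples = [argmin_by_interpolation(x_grid,
                                       losses + rng.normal(0.0, noise_std, size=losses.shape))
               for _ in range(n_boot)]
    return float(np.median(samples)), float(np.std(np.log(samples)))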
To extract an estimate of the optimal learning rate, we simply interpolate the (batch size, minimizing learning rate) sequence and evaluate it at the optimal batch size. §.§ The necessity of tuning β_2 To demonstrate the importance of tuning β_2, we repeat the analysis described above while only considering experiments with β_2=0.95. <Ref> shows the result of this experiment, illustrating that this restriction breaks part of the clean scaling trend depicted in <Ref>. §.§ Estimating the scaling law with ideal tuning We determine our learning rate and batch size scaling laws by sweeping over hyperparameters for models of sizes ≤ 108M with each model trained for 20 tokens. As discussed in <ref>, this is a limitation as it potentially “bakes in” a preference toward scaling. An ideal tuning strategy would select different hyperparameters for each model size and each compute budget, or equivalently each model size and each token-to-parameter ratio . In this section, we use the training loss data from our hyperparameter sweep to approximate such ideal tuning and estimate its effect on the compute-optimal scaling law. We do so in three steps. * Estimating suboptimality as a function of token-to-parameter ratio. We estimate the best hyperparameters for < 20 using the same interpolation logic as in <ref> but for the training loss after = tokens. (We do not consider values of below 2 since they are too close to the warmup period.) Thus, for every value of and , we obtain an estimate of the loss with optimal hyperparameters, denoted ^⋆. We also use interpolation to estimate the loss under our chosen hyperparameters for each model size (given by <Ref>), denoted . <Ref> top-left shows the (smoothed) estimated suboptimality of our hyperparameters, i.e. -^⋆ as a function of for each value of in the sweep. * Updating IsoFLOP curves. For all model sizes in <Ref> up to 220M and FLOP values in our grid up to 1.6e18, we estimate the loss attained by an ideal tuning by subtracting from our observed loss the smoothed sub-optimality estimated above at the corresponding value of ρ. For model sizes below 220M that are not present in the hyperparameter sweep, we interpolate the smoothed sub-optimality based on neighboring model sizes (while keeping the ρ fixed). We include all model size/FLOP combinations with token-to-parameter ratios between 2 and 30. To estimate the sub-optimality for token multipliers between 20 and 30 (not present in the sweep), we extend our smoothed sub-optimality measures symmetrically around ρ=20. * Re-fitting the scaling law. To estimate the effect of ideal tuning on our estimate of the compute-optimal exponent , we apply our `bootstrap' fitting procedure described in <Ref> to the updated IsoFLOP curves described above. To fit the scaling law we only use FLOP values in {2^k·1.25e16}_k=0^7. For k>7 we do not have data to estimate loss under ideal tuning. Conclusions. The top-left panel of <Ref> shows that our hyperparameters yield losses within 1e-2 of the optimal value for model sizes above 15M and values above 10. For smaller models or values, the potential loss reduction from ideal tuning is greater, as is also evident in the top-right panel of the figure. Nevertheless, the top-right panel also shows that the compute-optimal model sizes (marked by stars on the IsoFLOP curves) do not move much due to the loss reduction.
The bottom panel further reveals that for most FLOP values the difference between compute-optimal model sizes is within their estimated standard deviations. It also shows that approximating ideal hyperparameter tuning moves the estimate of the compute-optimal exponent by less than 0.032, bringing it further away from the exponent. Furthermore, since learning rate and batch size sensitivities decrease with model size, we expect ideal tuning to have an even smaller effect at larger compute budgets, so we are likely to see even better agreement in if we introduce larger models to our ideal tuning estimate (which would require extending the hyperparameter sweep to larger models as well). Indeed, dropping the two smallest FLOP values from the power law fit (that is, using FLOP values {2^k·1.25e16}_k=2^7) yields exponent =0.493 for the original observations and exponent = 0.491 for ideal tuning. Overall, we estimate that ideal hyperparameter tuning would produce similar results to our scaling-law-based hyperparameter choices. § REPRODUCING THE ADJUSTED SCALING LAW <Ref> shows that by reintroducing the FLOP count and long warmup issues from <Ref> but training with optimized hyperparameters, we recover the “adjusted” form of the <cit.> compute-optimal scaling law, given by 1.3e9 · (C/8.64e19)^0.73. <cit.> derive this scaling law by theoretically compensating for the fact that they used a too-large batch size for smaller models. It is reassuring to observe that when we add back the other issues we have identified but appropriately decrease the batch size via parameter tuning, we obtain very close agreement with this adjusted scaling law. § THE COMPUTE-OPTIMAL LOSS In <Ref> we fit a saturating power law of the form (C) = E + _0 C^-ℓ to each of the compute-optimal loss curves in our experiments. As the figure shows, the fit is predictive only for the experiment where hyperparameters are tuned. We fit the saturating power law similarly to <cit.>, by minimizing the Huber prediction loss for log(C) over ℓ, log(E) and log(_0). § THE COMPUTATIONAL COST OF OUR EXPERIMENTS We now discuss the calculation of the cost of each of our main experiments, comparing fixed learning rate schedules (constant or fixed-length cosine as in <cit.>) with a cosine schedule tailored for each FLOP budget. Each of our experiments consists of directly estimating (C) for a grid of C values of the form C_k = 2^k·1.25e16 FLOPs and k going from 0 to K=11. With a cosine learning rate schedule, each value of C requires distinct training runs, so the cost of the experiment is ∑_k=0^K m_k C_k, where m_k is the number of models we train for C_k FLOPs—between 6 and 7 in our experiments. A constant learning rate schedule offers savings since we can extract performance at different FLOP values from the same run, so the cost of the experiment is ∑_k=0^K m_k' C_k where m_k' is the number of models we train for at most C_k FLOPs. At the maximum budget we have m_K'=m_K between 6 and 7, but for all smaller k<K we have m_k' between 0 and 2 (typically 1). Thus, we save ∑_k=0^K-1 (m_k-m_k')C_k ≈∑_k=0^K-1m_k C_k, which for our doubling grid of C is roughly half the cost. For a fair comparison, when empirically summing over the cost of our experiments we omit runs where the number of tokens is more than 100 times the model size or where the loss is more than 1 nat above the optimal loss for the compute budget, since they do not contribute to the analysis; when experimenting with a cosine schedule we were more careful not to execute such runs.
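As a sketch of the saturating power-law fit described in the compute-optimal loss section above, one can optimize the Huber objective on predicted log-losses numerically. The Huber delta, the optimizer choice, and the initialization below are our own illustrative assumptions, not the values used in the experiments.

import numpy as np
from scipy.optimize import minimize

def fit_saturating_power_law(C, loss, delta=1e-3):
    # Fit loss(C) ≈ E + L0 * C**(-ell) by minimizing a Huber loss on the predicted
    # log-loss, parameterized by (log E, log L0, ell) as described in the text.
    C, loss = np.asarray(C, float), np.asarray(loss, float)
    logC, log_obs = np.log(C), np.log(loss)

    def objective(theta):
        logE, logL0, ell = theta
        pred = np.log(np.exp(logE) + np.exp(logL0 - ell * logC))
        r = np.abs(pred - log_obs)
        quad = np.minimum(r, delta)
        return np.sum(0.5 * quad ** 2 + delta * (r - quad))

    theta0 = np.array([np.log(0.8 * loss.min()), np.log(loss.max()), 0.1])
    res = minimize(objective, theta0, method="Nelder-Mead")
    logE, logL0, ell = res.x
    return np.exp(logE), np.exp(logL0), ell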
http://arxiv.org/abs/2406.18134v1
20240626073824
Assessing "Implicit" Retrieval Robustness of Large Language Models
[ "Xiaoyu Shen", "Rexhina Blloshmi", "Dawei Zhu", "Jiahuan Pei", "Wei Zhang" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Retrieval-augmented generation has gained popularity as a framework to enhance large language models with external knowledge. However, its effectiveness hinges on the retrieval robustness of the model. If the model lacks retrieval robustness, its performance is constrained by the accuracy of the retriever, resulting in significant compromises when the retrieved context is irrelevant. In this paper, we evaluate the “implicit” retrieval robustness of various large language models, instructing them to directly output the final answer without explicitly judging the relevance of the retrieved context. Our findings reveal that fine-tuning on a mix of gold and distracting context significantly enhances the model's robustness to retrieval inaccuracies, while still maintaining its ability to extract correct answers when retrieval is accurate. This suggests that large language models can implicitly handle relevant or irrelevant retrieved context by learning solely from the supervision of the final answer in an end-to-end manner. Introducing an additional process for explicit relevance judgment can be unnecessary and disrupts the end-to-end approach. [We release our model outputs at https://drive.google.com/drive/folders/1Mrx2E0E2KaDiye_oeZ2wQuC4boKOisY7?usp=sharing. The datasets used can be accessed through https://drive.google.com/drive/folders/1Pv1gWX3xiI2qs2pHddQHqn91aSZK1d1w?usp=sharing.] § INTRODUCTION Large language models (LLMs) have brought about a paradigm shift in the field of Natural Language Processing, enabling remarkable advancements in various tasks <cit.>. However, their static nature imposes limitations, preventing them from fully encompassing all specialized knowledge or keeping that knowledge current <cit.>. To mitigate this constraint, a prevailing trend involves the adoption of retrieval-augmented generation (RAG) methodologies <cit.>. By bringing in extra context from the retriever, these models can tap into external knowledge reservoirs, refining their outputs with more precise and contextually appropriate information <cit.>. Nevertheless, acquiring a reliable retriever is challenging. Since the number of candidate documents for retrieval is typically much larger than the vocabulary size of LLMs, it is often easier to generate the correct answer from the knowledge stored in the model parameters than to retrieve it <cit.>. When the retriever is imperfect, the quality of LLM generations can be significantly compromised, which often leads to poorer performance compared to scenarios where no retriever is employed at all <cit.>. The main factor influencing the quality of RAG is the retrieval robustness of the underlying model <cit.>. Ideally, a retrieval-robust model should possess two key capabilities: * Properly incorporate helpful retrieved information to provide an accurate answer. * Ignore distracting information and rely on its own internal knowledge as a fallback.[Some works take a conservative strategy of refraining from answering if the retrieved context is unhelpful. However, this limits the model's potential to the accuracy of the retriever and underutilizes LLMs' internal knowledge <cit.>.] Capability 1 pertains to scenarios where the retrieved information aids in deriving the answer, while Capability 2 pertains to scenarios where the retriever only returns distracting information.
A wide range of approaches have been proposed to improve the retrieval robustness of LLMs, which can be classified into two categories. The first category explicitly decouples Capability 1 and 2 by injecting an intermediate process to judge the relevance of retrieved information, based on which different functions are called <cit.>. The second category, on the contrary, relies on the model itself to implicitly judge the relevance of the retrieved information and generate the right answer directly <cit.>. Figure <ref> depicts the difference between explicit and implicit approaches. Despite being finer-grained, explicit approaches increase runtime latency and the risk of error propagation. They also require annotations regarding the relevance of retrieved information, which can be costly to obtain on a large scale.[Annotations can be circumvented by developing complex self-supervision or weak-supervision algorithms <cit.>, but these algorithms often come with additional costs, such as increased computation or suboptimal performance.] In this paper, we conduct a thorough analysis in a controlled setting to evaluate the “implicit” retrieval robustness of LLMs. More concretely, we aim to determine the extent to which we can uphold retrieval robustness without requiring explicit judgment of the retrieval's relevance. To conduct this analysis, we run extensive experiments with 5 question-answering tasks spanning different domains and scenarios; 5 open-source LLMs (Vicuna-7/13/33B and Llama 2-7/13B); 2 closed-source models (GPT-3.5 and GPT-4); and 3 testing scenarios (zero-shot with prompting, full fine-tuning and LoRA fine-tuning). For each experiment, we run controlled tests to evaluate Capability 1 and 2 of the models separately. Our findings can be summarized as follows: * Without fine-tuning, open-source LLMs often under-perform GPT-3.5/GPT-4 in terms of Capability 1, but match them in terms of Capability 2. Larger models generally exhibit greater resilience to distractions. * Fine-tuning on gold context enhances Capability 1 on challenging tasks, but often hits a plateau on easier tasks, accompanied by a drop in Capability 2. LoRA matches full fine-tuning in improving Capability 1 and better preserves Capability 2. * Fine-tuning on noisy context can significantly enhance Capability 2 of LLMs without affecting their Capability 1. A higher noise ratio (50%) can often lift the performance of Capability 2 to the level of non-retrieval models, except on questions requiring multi-hop or multi-turn inference. Overall, we suggest that LLMs are notably robust to noisy retrievals during fine-tuning. With a high noise ratio, the “implicit” retrieval robustness of LLMs can be remarkably effective. For most question-answering tasks that do not involve sophisticated multi-hop or multi-turn inference, relying on the model's implicit retrieval robustness may already suffice.
Various studies have explored the integration of retrieval mechanisms into generative models to enhance the quality and relevance of generated text from LLMs <cit.>. The retrieval-augmented mechanism not only improves performance but also offers a cost-effective approach to adapting the model for diverse domains by dynamically adjusting external knowledge sources <cit.>. Although improvement has been observed, the quality of generations is strongly affected by the accuracy of retrievers. Inaccuracies in retrievers can lead to the incorporation of irrelevant or misleading information, resulting in lower-quality generated content <cit.>. Retrieval-Robust Large Language Model Recognizing that the quality of text generations from LLMs is significantly influenced by the retriever's quality, various research works have been proposed to enhance the retrieval robustness of LLMs, i.e. , the model should effectively utilize accurate retrieved information while also disregarding distracting information in cases where the retriever is inaccurate <cit.>. The first line of research introduces an intermediary step to assess the relevance of retrieved information, aligning with conventional methods of step-by-step planning in text generation <cit.>. When the information is detected to be unhelpful, the model will simply fall back to use its own parameterized knowledge to answer the question. This helpfulness label is usually obtained by manual annotation <cit.>, chain-of-thought prompting on a powerful LLM <cit.>, or inspecting its effect on the model generation <cit.>. Although this step-by-step approach provides finer-grained signals, it also leads to increased runtime latency and training costs, with potential risks of error propagation <cit.>. Conversely, the alternative line of research employs an end-to-end approach to train models to autonomously discern the relevance of retrieved information from without extra helpfulness labels. The key to achieving successful end-to-end learning is to incorporate noisy retrievals, allowing the model to adjust to distracting information <cit.>. Nonetheless, existing studies lack quantitative analysis on how the retrieval robustness is influenced by factors such as the model, fine-tuning method, data, and noise ratio. Our research seeks to address this gap in the literature. § DEFINITION OF RETRIEVAL ROBUSTNESS Let q,c,a denote the question, context retrieved from an external source, and answer respectively. The variable p denotes the probability estimator from the LLM generator. In retrieval-augmented generation, the retriever retrieves some context c[Depending on the granularity of the retrieval, the context can be in the unit of documents, passages, sentences, entities, etc <cit.>.] from external sources where c can be either helpful or unhelpful depending on the accuracy of the retriever. The answer is generated from p(a|q,c) by conditioning on q and c. An LLM is considered retrieval-robust if the probability estimation p(a|q,c) remains effective regardless of the helpfulness of c. It corresponds to two different capabilities that the LLM should possess: * When c is helpful, i.e. , the correct answer a^* can be derived from the information contained in c, then it should return a^*. * When c is not helpful, it should discard the information in c and rely on its own parameterized knowledge p(a|q) to answer the question. Equation <ref> illustrates the ideal p_robust(a|q,c) from a retrieval-robust LLM mathematically, where δ is the dirac-delta function. 
p_robust(a|q,c) = δ(a-a^*) if a^* ∈ c, and p(a|q) otherwise. § EXPERIMENT SETUP Model We test 5 open-source LLMs: Vicuna-1.3-7/13/33B <cit.> and Llama 2-chat-7B/13B <cit.>, as well as two closed-source LLMs, GPT-3.5 and GPT-4 <cit.>. For open-source LLMs, we test their performance with zero-shot prompting, LoRA and full fine-tuning on task-specific datasets. For closed-source LLMs, we only report their performance by prompting them with instructions. Dataset In order to test model capabilities comprehensively, we test the models on 5 datasets covering diverse domains, question types and knowledge sources: AmbigQA <cit.>, ePQA <cit.>, Musique <cit.>, SciQ <cit.> and TopioCQA <cit.>. We specifically choose datasets with short answers because evaluating long answers is known to be challenging <cit.>. AmbigQA is a refined version of Natural Questions <cit.> after removing the ambiguity among questions. It contains general-knowledge questions answerable with Wikipedia contents. ePQA contains product-specific questions from the Amazon website. Testing on ePQA reduces the chance that the model memorizes the knowledge, since product information is tail-distributed. MuSiQue is an improved version of HotpotQA <cit.> after removing potential shortcuts. It contains questions requiring multi-hop reasoning, which have to be answered with at least two passages. SciQ contains scientific questions about physics, chemistry, etc. TopioCQA contains questions in multi-turn conversations. Table <ref> provides a summary of the datasets used. Dataset examples are in Appendix <ref>. Hyperparameter When fine-tuning models, we observe that the learning rate can have a big impact on the performance. In general, for 7B/13B models, full fine-tuning requires a small learning rate (on the scale of 1e-6) while LoRA fine-tuning requires a larger learning rate (on the scale of 1e-4). For 33B models, a small learning rate on the scale of 1e-6 is necessary. Due to the large impact of the learning rate, we perform a grid search over [1e-6, 3e-6, 5e-6, 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3, 3e-3, 5e-3] for every model fine-tuning in the following section, then choose the checkpoint with the best score.[As the learning rate increases, the behavior of the curve varies between full FT and LoRA FT. In full FT, the model performance initially improves before declining. The optimal rate falls somewhere in between. In LoRA FT, the model performance fluctuates, showing two cycles of improvement and decline, with the optimal rate located at one of the peaks.] The batch size is fixed as 64 for all runs. The model is fine-tuned for 1 epoch with the best-performing learning rate. Prompt We conduct a series of prompt-engineering trials and finalize two prompt templates: Template <ref> is used when retrieval is not involved and Template <ref> is used when retrieval is involved. For the ePQA dataset, we add an additional instruction to let the model always start with “yes/no” for binary questions to enable easier evaluation. For the TopiOCQA dataset, we further instruct the LLM to be aware that the question is within a conversation and that turns are separated by the <SEP> symbol. Details are in Appendix <ref>. Empirically, we find these templates are the best at inducing LLMs to produce answers in the desired format. In order to keep a fair comparison, we use the same set of prompts both when directly prompting the original LLMs and when fine-tuning them, such that we can quantify how fine-tuning changes the retrieval robustness.
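To illustrate how these two templates are applied, both when prompting and when building fine-tuning examples, the formatting step can be sketched as follows. The instruction wording here is a placeholder (the exact prompts are listed in the appendix), so only the placement of the question [Q] and context [C] should be taken literally.

```python
from typing import Optional

def build_prompt(question: str, context: Optional[str] = None) -> str:
    """Render a query with or without retrieved context.

    The instruction sentences are illustrative placeholders, not the paper's
    exact wording; only the [Q]/[C] structure follows the two templates.
    """
    if context is None:
        # Instruction w/o Retrieval: instruction + [Q]
        return f"Answer the question concisely.\n\nQuestion: {question}\nAnswer:"
    # Instruction w. Retrieval: instruction + [Q] + [C]
    return (
        "Answer the question concisely, using the context if it is helpful.\n\n"
        f"Question: {question}\n\nContext: {context}\n\nAnswer:"
    )

# The same function is used for zero-shot prompting and for fine-tuning data,
# so that only the presence or absence of context differs between conditions.
print(build_prompt("When was the telescope invented?", context="The telescope appeared in the Netherlands in 1608."))
```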
Template [woret] (Instruction w/o Retrieval): an instruction followed by the question [Q]. Template [wret] (Instruction w. Retrieval): an instruction followed by the question [Q] and the retrieved context [C]. Metric We evaluate the model's performance using recall, which indicates the fraction of words (excluding punctuation) from the gold answer that also appear in the model prediction. The recall metric is averaged across the test samples. This choice is made because LLMs may generate answers that are correct but longer than the concise answers in the original dataset, so using other metrics such as precision or F1 scores can significantly underestimate their performance <cit.>. Empirically, we also observe that the recall score correlates best with human evaluations, based on manually examining 100 cases from each dataset. Evaluation We evaluate the model performance under three scenarios to quantitatively measure the two capabilities of retrieval robustness: (1) when no retrieval is provided; (2) when gold retrieval is provided; and (3) when distracting retrieval is provided. The gold retrieved information is extracted from the original dataset. To acquire the distracting retrieval, we retrieve the top 10 documents from the knowledge sources of each dataset.[We adopt a dense passage retriever <cit.> trained on each knowledge source.] Subsequently, we take the document with the lowest recall score with respect to the answer as the distracting information.[Most passages selected this way have a recall score of 0 and only ∼2% of them have recall scores > 0.5, so we can consider them almost purely distracting information.] The rationale for selecting from the top-10 DPR results is to align the process with realistic use cases. If the passages were blatantly distracting, it would be too simple for the model to differentiate them. We run all model generations with beam search and a beam size of 5. § RESULTS AND ANALYSIS We evaluate how retrieval-robust different LLMs are in three scenarios: when directly prompting the original LLMs without fine-tuning them; when fine-tuning them only on gold context; and when fine-tuning them on mixed gold and distracting context. The results are presented in this order. Full results tables are in Appendix <ref>. §.§ Without Fine-Tuning Figure <ref> presents the results of directly prompting original LLMs without fine-tuning when provided with (1) no context, (2) gold context and (3) distracting context. Without Context When no context is provided, LLMs often struggle to recall exact answers from their internal knowledge. As expected, larger models generally perform better than smaller ones. While GPT-3.5 and GPT-4 outperform open-source LLMs, their advantage is not substantial. For questions involving tail product knowledge (ePQA) or requiring multi-hop inferences (Musique), GPT-3.5 and GPT-4 face the same challenges as open-source models, limiting their advantages. Notably, most questions in ePQA are binary, allowing models to achieve decent scores through random guessing. As a result, performance on ePQA appears reasonable despite the LLMs' lack of specific product knowledge. Capability 1 When gold context is provided, all LLMs exhibit large improvements across all tasks, demonstrating their remarkable capabilities in extracting the right answers from the retrieved context. As model size increases, Vicuna-series models show more consistent performance improvements. However, for Llama 2-series models, the 13B model does not exhibit a clear advantage over the 7B model, except on the easiest dataset, AmbigQA.
Nevertheless, there is still a large gap between open-source LLMs and closed-source GPT-3.5/4. This gap is more notable (>14%) on ePQA, Musique and TopioCQA as their question types and knowledge sources are more challenging. On ePQA, where a substantial amount of context is in JSON format, open-source LLMs encounter difficulty in efficiently processing information from this source. On Musique and TopioCQA, the presence of multiple items in the context and questions requires LLMs to accurately grasp the inter-dependencies among them, thereby increasing the complexity of the task. Capability 2 When distracting context is introduced, all LLMs experience a decline in performance compared to having no context at all. However, the decline with distracting context is usually much smaller than the gain from gold context, suggesting that existing LLMs are quite good at ignoring distracting context.[Previous research typically reports larger declines because they did not explicitly instruct the LLM to revert to its own knowledge when the context is unhelpful <cit.>.] The decline also varies across datasets. On datasets with tail knowledge, such as ePQA, the decline is minimal because the original LLM has almost no prior knowledge about specific products. Compared to Capability 1, there is a more consistent trend that larger models are more resilient with distracting context, suggesting that model size has a greater impact on the inherent capability for instruction following than on the understanding of additional context information. Surprisingly, powerful closed-source LLMs are even more vulnerable to distracting context, particularly on questions involving common knowledge (AmbigQA and SciQ). The largest open-source LLM we tested, Vicuna-33B, is comparable to or better than GPT-3.5/4 in terms of performance drop when faced with distracting context. In summary, when directly prompting LLMs, we have the following observations: * In terms of Capability 1, open-source LLMs significantly under-performs GPT-3.5/4, especially on challenging tasks with complex question types and knowledge sources. * In terms of Capability 2, open-source LLMs can be comparable or better than GPT-3.5/4. Larger models are more resilient with distracting context. §.§ Fine-Tuning on Gold Context While directly prompting existing LLMs can showcase remarkable performance, further task-specific fine-tuning is often necessary to fully tailor an LLM for a specific task. In order to see how task-specific fine-tuning can improve Capability 1 and 2 of LLMs, we perform full and LoRA fine-tuning on every task. During fine-tuning, the gold context is provided to teach LLMs to extract answers from the context, a common setup in retrieval-augmented training. Figure <ref> depicts the experiment results. Without Context Before fine-tuning on gold context, we first analyze the performance change when fine-tuning without context (“None” as in Figure <ref>). This can serve as an upper-bound performance for an LLM when the retrieved context is distracting (p(a|q) as in Equation <ref>). As observed, fine-tuning without context often results in limited improvement. The only exception is the TopioCQA dataset, likely because the original LLMs struggle to understand the conversational format of the input and require fine-tuning to fully grasp the task format. This supports the superficial alignment hypothesis, which suggests that fine-tuning mainly trains the model to follow task-specific formats rather than adding new knowledge <cit.>. 
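For reference, a minimal LoRA fine-tuning setup of the kind compared against full fine-tuning here can be sketched with the `transformers` and `peft` libraries. The checkpoint identifier, adapter rank, and target modules below are illustrative choices, not the paper's exact configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "lmsys/vicuna-7b-v1.3"          # one of the evaluated model families (illustrative id)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,                                    # adapter rank (assumption)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # adapt attention projections only (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # only a small fraction of the weights is trainable

# Full fine-tuning corresponds to training `model` without the two peft lines above,
# typically with a much smaller learning rate (~1e-6) than LoRA (~1e-4) and a batch size of 64.
```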
Capability 1 When fine-tuning LLMs with gold context, performance often improves significantly in terms of extracting the correct answer from the provided context. The improvement is especially pronounced on the ePQA and TopioCQA datasets, as these tasks are not inherently difficult but require adaptation to specific knowledge sources and conversational questions. On the ePQA dataset, the fine-tuned models can even outperform the closed-source GPT-3.5 and GPT-4 models. After fine-tuning, there is a more consistent trend of larger models performing better, as the variance from prompting formats is reduced. However, all open-source LLMs struggle to further improve on the AmbigQA dataset, even with task-specific fine-tuning, possibly because their initial performance is already high and adding more data alone does not yield significant improvement. Llama 2 models also hit a performance plateau on the Musique dataset. This suggests that task-specific fine-tuning alone may not be sufficient for open-source LLMs to match GPT-3.5 and GPT-4 in Capability 1. Additional factors beyond task-specific fine-tuning might be necessary to close this gap. Across all models and datasets, there is no clear advantage of full fine-tuning over LoRA fine-tuning, even though training costs associated with full fine-tuning are significantly higher. Capability 2 Despite the improvement of Capability 1, fine-tuning LLMs only on gold context can mislead them to always rely on the provided context, even when the information is distracting. This can eventually harm Capability 2, preventing LLMs from safely falling back to their internal knowledge. As observed in Figure <ref>, there is indeed some performance decrease when LLMs are provided with distracting context. The gap between the LLM's probability estimation p(a|q,c) and the ideal upper bound p(a|q) widens. However, unexpectedly, the decrease is often small compared to the big performance boost when provided with gold context, especially on the more challenging ePQA, Musique and TopioCQA datasets. This may be because existing open-source LLMs struggle to handle distracting context on these more difficult datasets, so their initial performance is already close to random, leaving little room for further decline even when fine-tuning only on gold context. On the easier AmbigQA and SciQ datasets, LoRA fine-tuning often results in less performance drop compared to full fine-tuning due to the smaller number of adjustable training parameters. In summary, when fine-tuning LLMs only on gold context, we have the following observations: * Capability 1 is improved significantly on challenging datasets, but hit a plateau on easier ones, suggesting other factors might be needed to fully close the gap with GPT-3.5/4. * Capability 2 is decreased mainly on easier datasets, potentially because the original performance on harder datasets with distracting context is already close to random. * LoRA fine-tuning is similar to full fine-tuning in terms of improving Capability 1, but better at maintaining capability 2. §.§ Fine-Tuning on Mixed Context Fine-tuning LLMs solely with gold context can reduce their robustness to distracting context, which are inevitable in real-world retrieval-augmented generation scenarios. Therefore, we further explore whether the retrieval robustness can be improved by mixing distracting context into the fine-tuning datasets. We experiment with two distraction ratios: 20% and 50%. 
All distracting context are hard negative samples from the top-10 retrieved contents with dense retrieval to simulate real-case scenarios. Capability 1 Figure <ref> illustrates the performance of LLMs when fine-tuning with varying distraction ratios and testing on gold context. The results indicate that different levels of distracting context have little impact on performance. Even when fine-tuned with 50% distracting context (i.e. the training examples with gold context is reduced to half), the models still maintain their performance on gold context. Interestingly, in several instances, especially on challenging datasets such as Musique, augmenting the fine-tuning datasets with more distracting context actually enhances performance on gold context. This suggests that Capabilities 1 and 2 are not mutually exclusive, and that incorporating some noisy context during fine-tuning can also be advantageous for Capability 1. Regarding the fine-tuning methods, LoRA fine-tuning performs similarly to full fine-tuning, with the only exception being observed on the Musique dataset for the Llama 2-7B model. This is due to the fact that fine-tuning cannot further enhance performance, allowing LoRA to preserve the original model performance to the greatest extent possible. Capability 2 After confirming that mixing distracting context into the fine-tuning dataset will not affect Capability 1, we further investigate whether it can benefit Capability 2 by testing on distracting context. The results are visualized on Figure <ref>. As can be seen, increasing the distracting ratios steadily improves the performance when provided with distracting context. On the easier AmbigQA, ePQA and SciQ datasets, after LLMs getting used to their input formats, the performance when provided with distracting context can be very close to the performance when no context is provided, i.e. , the model is not affected by the distracting context. This holds true for models of varying sizes, with LoRA fine-tuning performing similarly to full fine-tuning. On the more challenging datasets, Musique and TopioCQA, despite the steady improvement, there is still some room for growth before the model can be fully robust against distracting context. We hypothesize that the model may require more data to effectively understand longer input sequences, considering that Musique includes multiple context passages and TopioCQA involves an entire conversation as the input question. In summary, when fine-tuning LLMs on a mixture of gold and distracting context, we have the following observations: * Capability 1 is maintained, or sometimes even enhanced, when the distracting ratio is increased in the fine-tuning data. * Capability 2 gets improved steadily. On easier datasets with shorter inputs, the model can even achieve complete robustness against distracting context. § CONCLUSION Retrieval robustness is the key to determine the quality of model generations in RAG. In this paper, we conduct an extensive assessment of the “implicit” retrieval robustness of LLMs without explicitly letting models judge the relevance of the retrieved context. Our findings indicate that LLMs are remarkably adept at handling context with varied retrieval accuracy, without needing explicit relevance annotations. By incorporating a certain ratio of distracting context into the fine-tuning dataset, LLMs can maintain their ability to extract correct answers from relevant context while hardly being misled by irrelevant information. 

§ LIMITATIONS We aim to perform an extensive evaluation of the implicit retrieval robustness across various LLMs. However, due to resource and time constraints, there are several limitations of this paper. First, we select models based only on LLaMA and Llama 2 with up to 33B parameters. At the time of writing, more advanced and larger open-source models have become available. The conclusions drawn from this paper, especially the comparison between open-source LLMs and closed-source LLMs, might not hold with up-to-date models. Second, we choose only datasets with short answers for simplicity of evaluation in this paper. Long answers are also an important research direction and are attracting growing attention. When instructing models to generate more complex long answers, the retrieval robustness of LLMs needs to be re-examined. Finally, despite conducting a grid search over a wide range of learning rates, it is possible that the optimal configuration lies outside the range we considered. We also did not extensively test results with different batch sizes and data sizes, which could impact model performance in various ways. § ETHICS STATEMENT Our work’s sole aim is to study the implicit retrieval robustness of retrieval-augmented large language models. We expect minimal social risks to be associated with our efforts. § PROMPTS USED FOR LLMS §.§ W/o Retrieval Template [woret1] (AmbigQA/MuSique/SciQ): [Q]. Template [woret2] (ePQA): [PRODUCT TITLE] [QUESTION]. Template [woret3] (TopioCQA): [CONVERSATION]. §.§ W. Retrieval Template [wret1] (AmbigQA/MuSique/SciQ): [QUESTION] [CONTEXT]. Template [wret2] (ePQA): [PRODUCT TITLE] [QUESTION] [CONTEXT]. Template [wret3] (TopioCQA): [CONVERSATION] [CONTEXT]. § DATASET EXAMPLES Table <ref> shows example snippets from each of the datasets used in this paper. Musique contains at least 2 gold passages per question, as all questions require multi-hop inferences. The other datasets contain only 1 gold passage per question. When sampling distracting passages, the number of distracting passages is the same as that of gold passages. The original ePQA dataset contains one-sentence answers. In order to extract short answers from them, we apply ChatGPT to extract a short span from each annotated answer. If ChatGPT judges that the annotated answer cannot answer the question, then we discard this example. Namely, we only keep examples that ChatGPT considers valid answers, so that we can reduce the chance of noisy annotations in the original dataset. For the test data, in order to capture diverse answers per question, we manually annotated other possible spans apart from the one generated by ChatGPT. When evaluating model generations, a generation is considered correct as long as it matches any one of the gold answers. We report the maximum recall score over all possible gold answers. For all datasets, we select ∼3000 samples as the training data and 200 samples as the test data. Since our purpose is not to achieve state-of-the-art performance but rather to inspect the effects of retrieval-augmented generation, we use this data split to reduce running time. § RESULT TABLES Tables <ref>, <ref>, <ref> and <ref> show the full results presented in this paper. We only report the results with the best learning rate tried. We run all experiments on 8 Nvidia A100 GPUs. Each example is truncated to 1024 sub-tokens. On each dataset, we train the model for one epoch and select the run with the best learning rate. Each training run takes about 10 GPU hours for a 7B model, 15 hours for a 13B model and 30 hours for a 33B model.
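To make the evaluation and data construction described above concrete, the word-level recall metric (taking the maximum over all annotated gold answers), the hard-negative distractor selection from the top-10 retrieved passages, and the gold/distracting mixing used for fine-tuning can be sketched as follows. This is an illustrative reimplementation rather than the authors' code; the example field names are assumptions.

```python
import random
import re

def _words(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())    # lowercase and drop punctuation

def recall(gold_answer: str, prediction: str) -> float:
    """Fraction of gold-answer words that also appear in the prediction."""
    gold, pred = _words(gold_answer), set(_words(prediction))
    return sum(w in pred for w in gold) / len(gold) if gold else 0.0

def max_recall(gold_answers: list[str], prediction: str) -> float:
    """Maximum recall over all annotated gold answers (used for the test sets)."""
    return max(recall(g, prediction) for g in gold_answers)

def pick_distractor(gold_answers: list[str], top10_passages: list[str]) -> str:
    """Keep the top-10 retrieved passage least likely to contain the answer,
    i.e. the one with the lowest recall with respect to the gold answer(s)."""
    return min(top10_passages, key=lambda p: max_recall(gold_answers, p))

def mix_contexts(examples: list[dict], noise_ratio: float = 0.5, seed: int = 0) -> list[dict]:
    """Swap the gold context for the distractor in a `noise_ratio` fraction of the
    training examples, leaving the answer supervision unchanged
    (assumed keys: 'question', 'gold_context', 'distractor', 'answer')."""
    rng = random.Random(seed)
    out = []
    for ex in examples:
        ctx = ex["distractor"] if rng.random() < noise_ratio else ex["gold_context"]
        out.append({"question": ex["question"], "context": ctx, "answer": ex["answer"]})
    return out
```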
http://arxiv.org/abs/2406.18828v1
20240627014516
Insights into Gravitinos Abundance, Cosmic Strings and Stochastic Gravitational Wave Background
[ "K. El Bourakadi", "G. Otalora", "A. Burton-Villalobos", "H. Chakir", "M. Ferricha-Alami", "M. Bennai" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-th" ]
k.elbourakadi@yahoo.com giovanni.otalora@academicos.uta.cl andres.burton.villalobos@alumnos.uta.cl chakir10@gmail.com ferrichaalami@yahoo.fr mdbennai@yahoo.fr § ABSTRACT In this paper, we investigate D-term inflation within the framework of supergravity, employing the minimal Kähler potential. Our study reveals that this model can overcome the η-problem found in F-term models. Additionally, we explore reheating dynamics and gravitino production, emphasizing the interplay between reheating temperature, spectral index, and gravitino abundance. Our analysis indicates that gravitino production is sensitive to the equation of state during reheating, affecting the reheating temperature and subsequent dark matter relic density. Perturbation theory reveals that Primordial Gravitational Waves (PGWs) evolve according to second-order effects arising from first-order curvature perturbations. In fact, the spectral energy density of these waves is particularly relevant to Pulsar Timing Array (PTA) observations. Furthermore, we analyze gravitational waves generated by cosmic strings, providing critical constraints on early Universe dynamics and cosmic string properties. The dimensionless string tension significantly influences the stochastic gravitational wave background (SGWB) produced by cosmic strings post-inflation. Finally, the stochastic gravitational wave background (SGWB) produced by cosmic strings is influenced by the energy scales of inflation and string formation. ^1Quantum Physics and Magnetism Team, LPMC, Faculty of Science Ben M'sik, Casablanca Hassan II University, Morocco. ^2Departamento de Física, Facultad de Ciencias, Universidad de Tarapacá, Casilla 7-D, Arica, Chile. ^3Subatomic Research and Applications Team, Faculty of Science Ben M'sik, Casablanca Hassan II University, Morocco Insights into Gravitinos Abundance, Cosmic Strings and Stochastic Gravitational Wave Background M. Bennai^1 July 1, 2024 =============================================================================================== § INTRODUCTION Supersymmetric (SUSY) inflation <cit.> provides an appealing framework for connecting inflation with particle physics models rooted in grand unified theories (GUTs) <cit.>. In the simplest models, the soft supersymmetry breaking terms are crucial for aligning predictions of the scalar spectral index with cosmic microwave background (CMB) observations <cit.>,<cit.>. SUSY provides an appealing solution to the analogous hierarchy problem in the Standard Model (SM) of particle physics and also facilitates the unification of the three gauge couplings. Notably, the local version of supersymmetry — supergravity (SUGRA) — would dominate the dynamics of the early Universe when high-energy physics played a crucial role. Therefore, considering inflation within the framework of supergravity is quite natural. However, integrating inflation into supergravity is a challenging task. This difficulty arises mainly because the SUSY breaking potential term, essential for inflation, typically imparts an additional mass to the would-be inflaton, thereby disrupting the flatness of the inflaton potential <cit.>. To achieve successful inflation that aligns with observational data from large-scale structures and the anisotropy of CMB radiation, the potential of the scalar field, known as the inflaton, must be very flat. This required flatness of the potential can be achieved through the mechanisms provided by supersymmetry or supergravity. 
In supersymmetric models, the scalar potential is composed of contributions from both the F-term and D-term. Specifically, in F-term inflation models, where the vacuum energy that drives the inflationary expansion predominantly comes from the F-term, the inflaton mass is generally on the same order as the Hubble parameter H during inflation <cit.>. Consequently, achieving a sufficiently long expansion to address the horizon and flatness problems is challenging. This issue is referred to as the eta problem in inflation models within the framework of supergravity. Conversely, D-term inflation, in which the vacuum energy is supplied by the D-term, does not encounter this problem <cit.>. Therefore, from this perspective, D-term inflation is more appealing than F-term inflation. However, it has been discovered that the D-term inflation model also has its own set of issues <cit.>. For instance, from an observational standpoint, cosmic strings generated after inflation can significantly impact the spectrum of CMB anisotropy <cit.>, as this model is a type of hybrid inflation. Additionally, there are concerns about the validity of the D-term inflation potential, since the inflaton requires a large initial value on the order of the (sub-)Planck scale for a natural model parameter<cit.>. Therefore, D-term inflation appears to be under considerable scrutiny. Additionally, within the framework of (heterotic) string models, there are two further issues: the runaway behavior of the dilaton and an excessively large magnitude of the Fayet-Iliopoulos (FI) term <cit.>. Given the current and anticipated future constraints on inflaton decay via CMB limits on N_re and the competition with constraints from gravitino abundance in supersymmetric models, we revisit the issue of gravitino production following inflation. A well-understood source of gravitinos is their production through particle collisions in the thermal plasma that fills the Universe after reheating <cit.>. However, gravitinos could also have been produced by particle collisions before the reheating process was complete, either from collisions of relativistic inflaton decay products before thermalization or within any dilute thermal plasma formed by these collisions while inflaton decay was still ongoing <cit.> Cosmic strings are intriguing messengers from the early Universe due to their distinctive signatures in the stochastic gravitational wave background (SGWB). Recent NANOGrav 12.5-year data has provided evidence of a stochastic process at nanohertz frequencies, which has been interpreted as SGWB in numerous recent studies <cit.>. Relic gravitational waves (GWs) offer a fascinating window into the exploration of early Universe cosmology <cit.>. Cosmic strings generate powerful bursts of gravitational radiation that could be detected by interferometric gravitational wave detectors such as LIGO, Virgo, and LISA <cit.>. Additionally, the SGWB can be detected or constrained through various observations, including big bang nucleosynthesis (BBN), pulsar timing experiments, and interferometric gravitational wave detectors <cit.>. The paper is organized as follows, in Sec. <ref> we examine supergravity formalism, in Sec. <ref> we focus on the D-term model for the case of the minimal Kähler potential. In Sec. <ref>, we apply the previous findings in the study of the reheating process and its relation to gravitinos production. In Sec. 
<ref>, we link the primordial gravitational waves production from different epochs in the light of the chosen supergravity formalism, which is constrained by PTA observations. In Sec. <ref>, we study the gravitational waves production due to networks of cosmic strings. In Sec. <ref>, we conclude. § SUPERGRAVITY FORMALISM The supergravity lagrangian scalar part can be defined by the superpotential W( Θ _i), Kähler potential K( Θ _i,Θ _i^∗), and gauge kinetic function f( Θ _i) <cit.>. W and f are holomorphic functions of complex scalar fields, whereas the initial function K is non-holomorphic and operates as a real function of the scalar fields Θ _i and their conjugates Θ _i^∗. The three functions mentioned above are initially defined in terms of (chiral) superfields. Since our primary focus is on the scalar component of a superfield, we equate a superfield with its complex scalar counterpart, denoting both by the same symbol. The action of the complex scalar fields, when minimally coupled to gravity, comprises both kinetic and potential components. S=∫ d^4x√(-g)[ 1/√(-g)ℒ_kin-V( Θ _i,Θ _i^∗) ] . The kinetic terms of the scalar fields are dictated by the Kähler potential, denoted as K, and expressed as follows: 1/√(-g)ℒ_kin=-K_ij∗D_μΘ _iD_νΘ _j^∗g^μν, with K_ij∗=∂ ^2K/∂Θ _i∂Θ _j^∗, and D_μ denotes the covariant derivative in the gauge field context. The potential V for scalar fields Θ _i consists of two components, namely, the F-term V_F and the D-term V_D. Here the F-term component which is determined by both the superpotential W and the K ähler potential K is given by, V_F=e^K[ D_Θ _iWK_ij∗^-1D_Θ _j^∗W^∗-3| W| ^2/M_p^2] , where D_Θ _iW=∂ W/∂Θ _i+1/M_p^2 ∂ K/∂Θ _iW. Conversely, the D-term V_D is associated with gauge symmetry and is defined by the gauge kinetic function and the Kähler potential, V_D=1/2∑[ Ref_ab( Θ _i) ] ^-1g_ag_bD_aD_b, where D_a=Θ _i( T_a) _j^i∂ K/∂Θ _i+ξ _a. In this context, the subscripts ( a) and ( b) denote gauge symmetries, where g_a represents the gauge coupling constant, and T_a stands for the corresponding generator. The term ξ _a is referred to as the Fayet–Iliopoulos ( FI) term, which is non-zero exclusively when the gauge symmetry is Abelian, specifically U(1) symmetry. Keep in mind that within a supersymmetric framework, the tree-level potential during inflation comprises both an F-term and a D-term. These two components exhibit distinct characteristics, and in the context of all inflationary models, it is noteworthy that dominance is typically attributed to only one of these terms. In certain models rooted in Supergravity, such as the one under consideration here with a nonminimal Kähler potential, the exponential term e^K contributes a term V to the effective mass squared, roughly of the order of H^2 (the Hubble scale squared), affecting all scalar fields. Consequently, inflation experiences an effective mass squared V_g=∑|∂ W/∂Θ _i| ^2=3H^2 <cit.>, introducing a contribution to the slow-roll parameter η. This contribution results in a violation of the slow-roll conditions (|η|≪ 1) and leads to the so-called η-problem η =M_p^2V^''/V≃1/3( m/ H) ^2≃ 1. Addressing this challenge has prompted the proposal of various approaches. Here, we are only interested in the use of D-term hybrid inflation, which can provide positive energy in D-term potential. This allows for the realization of inflation while avoiding the η-problem. 
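As a check on these definitions, the F-term potential for the minimal Kähler potential (for which K_ij* is the identity) can be reproduced symbolically. The snippet below is only a sketch in reduced Planck units (M_p = 1) for a toy single-field superpotential W = (m/2)Φ², not the model studied next; the field and its conjugate are treated as independent symbols, the usual trick for holomorphic and antiholomorphic derivatives.

```python
import sympy as sp

m = sp.symbols('m', positive=True)
Phi, Phib = sp.symbols('Phi Phib')           # Phib plays the role of Phi^*

W, Wb = m*Phi**2/2, m*Phib**2/2              # toy superpotential and its conjugate
K = Phi*Phib                                 # minimal Kahler potential, K_{ij*} = 1

D_W  = sp.diff(W, Phi)   + sp.diff(K, Phi)  * W    # D_Phi  W  = W_Phi   + K_Phi   W
D_Wb = sp.diff(Wb, Phib) + sp.diff(K, Phib) * Wb   # D_Phi* W* = W*_Phi* + K_Phi* W*

V_F = sp.exp(K) * (D_W * D_Wb - 3*W*Wb)      # V_F = e^K [ |D W|^2 - 3|W|^2 ] for the minimal case
print(sp.simplify(V_F))
```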
§ D-TERM HYBRID MODEL Constructing an inflation model in supergravity faces a significant challenge arising from the F-term, particularly the exponential factor associated with it. Hence, achieving a positive potential energy in the D-term could pave the way for successful inflation without encountering the η problem. This insight was initially highlighted in Ref. <cit.>. Let us consider a D-term model of hybrid inflation as introduced in <cit.>, W=λ SΘ _+Θ _-, where λ is the coupling constant, and S, Θ _+, and Θ _- are three (chiral) superfields. The superpotential remains unchanged under a U(1) gauge symmetry, with assigned charges of 0, +1, and -1 for the fields S, Θ _+, and Θ _-, respectively. Additionally, it exhibits an R symmetry governing the transformation of these fields as follows: S( θ) → e^2iαS( θ e^iα) ,      Θ _+Θ _-(θ )→Θ _+Θ _-(θ e^iα), and considering the minimal Kähler potential, K=| S| ^2+|Θ _+| ^2+|Θ _-| ^2. From Eqs. (<ref>) and (<ref>) the tree level scalar potential is given by, V( S,Θ _+,Θ _-) = λ ^2e^| S| ^2+|Θ _+| ^2+|Θ _-| ^2[ |Θ _+Θ _-| ^2+| SΘ _-| ^2+| SΘ _+| ^2+( | S| ^2+|Θ _+| ^2+|Θ _-| ^2+3) | SΘ _+Θ _-| ^2] +g^2/2( |Θ _+| ^2-|Θ _-| ^2+ξ). Here, g represents the gauge coupling constant, ξ is a non-zero FI term, and we have adopted a minimal gauge kinetic function. This potential exhibits a distinctive global minimum V=0 at S=Θ _+=0,Θ _-= √(ξ). Yet, when | S| is significantly large, the potential displays a local minimum with a positive energy density at Θ _+=Θ _-=0. To determine the critical value S_c of | S|, we compute the mass matrix of Θ _+ and Θ _- along the inflationary trajectory where Θ _+=Θ _+=0. This is expressed as: V_mass=m_+^2|Θ _+| ^2+m_-^2|Θ _-| ^2, with m_+^2=λ ^2| S| ^2e^| S| ^2+g^2ξ ,         m_-^2=λ ^2| S| ^2e^| S| ^2-g^2ξ . Hence, provided that m_-^2≥ 0, corresponding to | S|≥ S_c≃ g√(ξ)/λ for S_c≲ 1, the local minimum at Θ _+=Θ _+=0 remains stable, leading to inflation driven by the positive potential energy density of g^2ξ ^2/2. Moreover, this mass split induces quantum corrections computed using the standard formula <cit.>, V_1L=1/32π ^2[ m_+^4ln( λ ^2| S| ^2e^| S| ^2+g^2ξ/ Λ ^2) +m_-^4ln( λ ^2| S| ^2e^| S| ^2+g^2ξ/Λ ^2 ) -2λ ^4| S| ^4e^| 2S| ^2ln( λ ^2| S| ^2e^| S| ^2/Λ ^2) ] , and when | S|≫ S_c, the effective potential of S S during inflation is approximated as, V( S) ≃g^2ξ ^2/2[ 1+g^2/8π ^2ln( λ ^2| S| ^2e^| S| ^2/Λ ^2) ] . Given that the potential is independent of the phase of the complex scalar field S, we can propose its real part as σ≡√(2)ReS. Subsequently, for σ _c≪σ≲ 1, the effective potential of the inflaton field σ is expressed as: V( σ) ≃g^2ξ ^2/2[ 1+g^2/ 8π ^2ln( λ ^2σ ^2/2Λ ^2) ] . During the inflationary period, the slow roll slow-roll parameters take the form ϵ ≃ g^4/32π ^4σ ^2, η ≃ -g^2/4π ^2σ ^2, and the e-folding number N is estimated as N≃2π ^2/g^2( σ _k^2-σ _end^2) . Throughout the inflationary regime, curvature perturbations arose from inflaton fluctuations. The amplitude of these perturbations in the comoving gauge ℛ <cit.>, measured on the comoving scale of 2π /k , is determined as, ℛ^2≃1/24π ^2V/ϵ=N/3 ξ ^2. Here, k represents the epoch when the k mode exited the Hubble radius in the course of inflation, as indicated in <cit.>. Conversely, gravitational wave tensor perturbations ℎ are also generated, and their amplitude on the comoving scale of 2π /k is specified in Ref. <cit.> , and calculated in this model as, h_k^2≃2/3π ^2V( σ _k) = g^2ξ ^2/3π ^2. 
Then, the tensor-to-scalar ratio r is given by r≡h_k^2/ℛ^2≃ 16ϵ where h_k^2=2H_k^2/π ^2, and for a chosen pivot scale, the power spectrum of scalar perturbation can be considered as ℛ ^2≃ A_s. § REHEATING DYNAMICS AND GRAVITINOS PRODUCTION In the initial stages, the quasi-de-Sitter phase is driven by the inflaton field, resulting in N_k e-folds of expansion. The comoving horizon scale decreases proportionally to ∼ a^-1 during this period. Following the conclusion of accelerated expansion and the subsequent expansion of the comoving horizon, the reheating phase begins <cit.>. After an additional N_re e-folds of expansion, all the energy stored in the inflaton field is completely dissipated, leading to the formation of a hot plasma with a reheating temperature of T_re. Following this phase, the Universe undergoes N_RD e-folds of expansion in a state of radiation domination before transitioning into a state of matter domination. In cosmology, we observe perturbation modes that exhibit magnitudes comparable to the size of the cosmic horizon. For instance, Planck identifies the pivot scale at k=0.05Mpc^-1 <cit.>. The comoving Hubble scale, denoted as a_kH_k=k is associated with the current timescale in relation to the moment when this particular mode crossed the horizon <cit.>, k/a_0H_0=a_k/a_enda_end/a_rea_re /a_eqa_eqH_eq/a_0H_0H_k/H_eq. Quantities represented by a subscript ( k) are calculated at the point of horizon exit. Other subscripts denote various epochs, including the end of inflation (_end), reheating (_re), radiation-matter equality (_eq), and the present time (_0). It is noteworthy that ln( a_k/a_end) =N_k, ln( a_end/ a_re) =N_re and ln( a_re/a_eq) =N_RD. A connection between the temperature at the end of reheating T_re and the CMB temperature T_0 can be established by considering factors is given by, T_re=( 43/11g_re) ^1/3( a_0T_0/k) H_ke^-N_ke^-N_re. On the other hand, the reheating duration can be expressed as, N_re=1/3( 1+ω _re) ln( 30·3/2V_end/π ^2g_reT_re^4) , and thus, the final form of reheating temperature is described as a function of the number of inflationary e-foldings N_k, the Hubble parameter H_k and the potential value at the end of inflation V_end as <cit.>, T_re=[ ( 43/11g_re) ^1/3 a_0T_0/kH_ke^-N_k[ 3^2· 5V_end/π ^2g_re ] ^-1/3( 1+ω _re) ] ^3( 1+ω _re) /3ω _re-1. To determine the reheating temperature, T_re, for a specific model, one must calculate N_k, H_k, V_end, and the potential at the end of inflation by using the formula V_end=V( σ _end), and knowing that σ _end is determined considering |η| =1. The SUGRA effects result in the coupling of the inflaton field ϕ to all matter fields, provided there is a non-zero vacuum expectation value ( VEV). The interactions with fermions are appropriately expressed in the context of the total Kähler potential, denoted as G=K+ln |W|^2, and such that: ℒ=-1/2e^G/2G_ϕ ijkϕψ ^iψ ^jφ ^k+h.c. In the given context, φ ^i represents a scalar field, and ψ ^i represents a fermion in a 2-spinor representation. We make the assumption that G_i is much smaller than 𝒪(1) for all fields, excluding the field responsible for SUSY breaking. The presence of the SUSY breaking field may lead to the suppression of the contribution proportional to G_ϕ during the inflaton decay due to interference <cit.>. In this section, we simplify our analysis by assuming the minimal Kähler potential. Consequently, the Kähler potential lacks non-renormalizable terms. 
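A short numerical illustration of the relations of the last two sections (slow-roll parameters, FI term fixed by the curvature amplitude, and the reheating temperature obtained by inverting the e-fold relation for N_re) is given below, in reduced Planck units. The parameter choices (g, N_k, ω_re, g_re, N_re) are illustrative, not fits to data, and the flattened prefactor 30·3/2 in the N_re relation is read as 45.

```python
import numpy as np

M_p  = 2.435e18      # reduced Planck mass in GeV, used only to restore units when printing
g    = 0.05          # U(1) gauge coupling (assumption)
N_k  = 55.0          # e-folds between horizon exit of the pivot scale and the end of inflation
A_s  = 2.1e-9        # curvature power; R^2 ~ N xi^2 / 3 then fixes xi
g_re = 106.75        # relativistic degrees of freedom at reheating (assumption)
w_re = 0.0           # equation of state during reheating
N_re = 10.0          # assumed duration of reheating in e-folds

xi      = np.sqrt(3.0 * A_s / N_k)                      # FI term from R^2 = N xi^2 / 3
sig_end = g / (2.0 * np.pi)                             # |eta| = 1 marks the end of inflation
sig_k   = np.sqrt(g**2 * N_k / (2.0*np.pi**2) + sig_end**2)
eps     = g**4 / (32.0 * np.pi**4 * sig_k**2)
eta     = -g**2 / (4.0 * np.pi**2 * sig_k**2)
n_s, r  = 1.0 - 6.0*eps + 2.0*eta, 16.0*eps
V_end   = 0.5 * g**2 * xi**2                            # D-term vacuum energy driving inflation

# Invert N_re = ln(45 V_end / (pi^2 g_re T_re^4)) / (3 (1 + w_re)) for T_re:
T_re = (45.0*V_end / (np.pi**2 * g_re))**0.25 * np.exp(-0.75*(1.0 + w_re)*N_re) * M_p

print(f"sqrt(xi) ~ {np.sqrt(xi)*M_p:.2e} GeV, n_s ~ {n_s:.4f}, r ~ {r:.1e}")
print(f"T_re ~ {T_re:.2e} GeV for N_re = {N_re}, w_re = {w_re}")
```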
Despite isolating the inflaton field from other fields in the global SUSY Lagrangian, the presence of (SUGRA) corrections enables its decay. The coupling constants are expanded around the vacuum state <cit.>. G_ϕ ijk=-W_ϕ/WW_ijk/W+W_ϕ ijk/W ≃ K_ϕW_ijk/W+W_ϕ ijk/W. In previous analyses <cit.>, the assumption was made that the Vacuum Expectation Values (VEVs) are effectively negligible for all fields except the inflaton. We employed the condition G_ϕ≪⟨ϕ⟩ in the final equation. Notably, we observed that the outcome remains invariant under Kähler transformation, and these constants tend to zero in the global SUSY limit. Subsequently, the decay rates are computed as Γ _3/2( ϕ⟶ψ ^iψ ^jφ ^k) ≃N_f/1536π ^3| Y_ijk| ^2( ⟨ϕ⟩/M_p) ^2 m_ϕ^3/M_p^2. Here, N_f represents the number of final states, and the Yukawa coupling Y_ijk is denoted as W_ijk. In this context, we have disregarded the masses of the particles in the final state and employed K=φ ^†φ for the inflaton. Additionally, it is assumed here that the particles ψ ^i and ψ ^j are non-identical. The decay rates of the inflaton into scalar particles align with the previously mentioned results. Indeed, considering the scalar potential, V=e^G( G^jG_j-3) , the estimated decay amplitude of ϕ ^∗⟹φ ^iφ ^jφ ^k is denoted as V_ ϕ̅ijk. Given the sizable SUSY mass of the inflaton, this amplitude is approximately proportional to e^G/2G_ϕ ijk, multiplied by the inflaton mass, m_ϕ≃ e^G/2| G_ϕ̅ ^ϕ|. The inflaton's decay into a pair of gravitinos occurs at the rate Γ. The resulting gravitino abundance Y_3/2 is then calculated, with Y_3/2 equal to the ratio of the final gravitino number density to the entropy density, n_3/2/s <cit.>. Y_3/2≃ 2× 10^-11( 10^6GeV/T_re) ( m_ϕ×( ⟨ϕ⟩) / 10^27GeV^2) ^2. The plot in Fig. <ref> displays the gravitino abundance Y_3/2 as a function of the spectral index n_s for different values of the inflaton mass m_ϕ. Each subplot corresponds to a specific m_ϕ value, ranging from 10^8 GeV to 10^14 GeV. Different curves represent various values of the equation-of-state parameter during reheating, ω, with a shaded region highlighting the Planck bound n_s=0.9649± 0.0042 according to recent observations <cit.>. From the given formalism, the abundance of gravitinos produced during the reheating phase, denoted as Y_3/2, depends significantly on the reheating temperature T_re, which is affected by ω. The gravitino-abundance curves tend towards a central value at which the lines for the different equation-of-state values converge. This central value of the abundance curves corresponds to the highest reheating temperature, and all the lines coincide within the observed bound on the spectral index n_s. Higher m_ϕ values generally lead to increased gravitino production when considering the maximum reheating temperature, while different ω values impact the reheating temperature and thus the gravitino abundance, causing Y_3/2 to decrease away from the observed bound on n_s. In fact, lower equation-of-state values tend to fall outside the n_s bound faster than higher values of ω. Consequently, this plot underscores the need to reconcile reheating scenarios with observational constraints on gravitino abundance. § GRAVITATIONAL WAVES BACKGROUND §.§ Scalar induced gravitational waves Detecting the background of primordial gravitational waves would provide strong evidence for the inflation paradigm and offer insights into the fundamental physics of the early Universe <cit.>.
Recent studies have focused on Primordial Gravitational Waves (PGW) from the period immediately following inflation <cit.>. Within the inflationary scenario, the presence of a kinetic epoch preceding inflation leads to a distinct blue tilt in the spectra of primordial gravitational waves at higher frequencies <cit.>. Gravitational waves are identified as the transverse-traceless component of metric perturbations. According to linear perturbation theory, scalar, vector, and tensor modes do not interact. Now, let us calculate the gravitational waves generated by second-order gravitational interactions resulting from first-order curvature perturbations <cit.>. Using the Newtonian gauge, the perturbed metric is expressed as <cit.>: ds^2=a^2( τ) [ -( 1+2Φ) dτ ^2+(δ_i j( 1-2Φ) +h_ij/2) dx^idx^j ] , where τ is the conformal time, Φ represents the first-order Bardeen gravitational potential, and h_ij denotes the second-order tensor perturbation. We can reformulate the spectral abundance of gravitational waves, defined as the energy density of gravitational waves per logarithmic comoving scale, as <cit.>, Ω _GW(k)=k^2/12H_0𝒯(τ_0, 𝐤)𝒫_t(k). The time evolution of a gravitational wave field, denoted as h_𝐤 (τ _i) at an initial conformal time τ _i and characterized by its tensor spectrum, can be determined by computing the GW transfer function 𝒯(τ ,𝐤)=h_𝐤(τ )/h_𝐤(τ _i) <cit.>. Here, h_𝐤(τ ) is evaluated at a conformal time η≫η _i <cit.>. The current conformal time is denoted by τ_0, and the Hubble constant is represented by H_0 <cit.>. Our focus is on the spectral energy density parameter Ω _GW(k) at Pulsar Timing Array (PTA) scales, where f∼𝒪(10-9)Hz corresponds to wavenumbers k∼𝒪 (10^6)Mpc^-1 which are significantly larger than the wavenumber linked to a mode crossing the horizon at matter-radiation equality. Stated differently, the modes observed at PTA scales crossed the horizon deep within the radiation era, well before matter-radiation equality. In the regime where k≫ k_eq, the gravitational wave spectral energy density associated with PTA signals can be expressed as <cit.> Ω _GW(f)=2π ^2f_yr^2/3H_0^2A^2( f/ f_yr) ^α. The NANOGrav measurements <cit.>, estimate ranges of several parameters which are given as follows, α =( 5-γ) with γ=13/3, f_yr is best estimated to f_yr≃ 3.1× 10^-8 and H_0 is given by H_0≡ 100h km/s/Mpc. The relationship linking the amplitude of the Pulsar Timing Array (PTA) signal, with the cosmological parameters were found to be <cit.> A=√(45Ω _m^2A_s/32π ^2(η _0k_eq)^2)c f_yr/η _0( f_yr^-1/f_⋆) ^ n_T/2√(r). The dependence on n_T in this context arises from the substantial "lever arm" between the Cosmic Microwave Background (CMB) pivot frequency, where A_s is constrained, and the frequency of the Pulsar Timing Array (PTA) signal [ 1yr^-1]. Table <ref> presents the NanoGrav proposed model for the density h^2Ω _gw(f) as a function of frequency f, scalar-to-tensor ratio r, and the amplitude of the PTA signal related to the potential parameters of the D-term hybrid model. The model for Ω _gw(f) depends on various cosmological parameters and constants defined earlier. The analytical expression for, h^2Ω _gw(f) is derived from a recent theoretical model for primordial gravitational waves. h^2Ω _gw(f) represents the fractional energy density of gravitational waves in the universe, indicating the energy carried by the gravitational wave background relative to the critical energy density needed for a flat universe. 
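As a numerical cross-check of this parametrisation, the power-law form above can be evaluated directly. The amplitude A and spectral index γ below are taken at representative NANOGrav-like values, and the Hubble parameter is an assumption, so the resulting numbers are indicative only.

```python
import numpy as np

h_hub = 0.674                       # dimensionless Hubble parameter (assumption)
H0    = h_hub * 3.2408e-18          # Hubble rate in s^-1 (100 km/s/Mpc = 3.2408e-18 s^-1)
f_yr  = 1.0 / 3.15576e7             # 1 yr^-1 in Hz, ~3.2e-8 Hz
gamma = 13.0 / 3.0                  # timing-residual spectral index quoted in the text
alpha = 5.0 - gamma                 # exponent of Omega_gw(f), here 2/3
A     = 1.9e-15                     # strain amplitude at f_yr (illustrative, NANOGrav-like)

def omega_gw(f):
    """Omega_gw(f) = (2 pi^2 f_yr^2 / 3 H0^2) * A^2 * (f / f_yr)^(5 - gamma)."""
    return (2.0*np.pi**2 * f_yr**2 / (3.0*H0**2)) * A**2 * (f / f_yr)**alpha

for f in (1e-9, 3e-9, 1e-8, 3e-8):
    print(f"f = {f:.1e} Hz -> h^2 Omega_gw(f) = {h_hub**2 * omega_gw(f):.2e}")
```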
By examining the table <ref>, one can identify the parameters that predict higher or lower densities of gravitational waves h^2Ω _gw(f) in line with NANOGrav predictions <cit.>. The amplitude of PTA scales A≲ 10^-20 provides accurate predictions for h^2Ω _gw(f) values. Furthermore, the Table <ref> offers insights into the behavior of gravitational wave background (GWB) density concerning frequency, scalar-to-tensor ratio, and the D-term model parameters, demonstrating good consistency with predicted bounds on density and frequency when fine-tuning the inflationary parameters associated with PTA signals. §.§ Primordial Gravitational Waves For primordial gravitational waves in a spatially flat FLRW background, the metric element can be expressed as follows: ds^2=a^2(τ)[-dτ^2+(δ _i j+h_i j )dx^i dx^j]. The perturbation h_i j satisfies the transverse-traceless (TT) conditions: h^i_ i=0 and ∂^i h_i j =0. The tensor perturbation h_ij ( τ,x⃗) can be decomposed into its Fourier modes, which are associated with two polarization tensors, satisfying the equation of motion h_𝐤^λ''(τ )+2ℋh_𝐤 ^λ'(τ )+k^2h_𝐤^λ(τ )=0. Here ( ^') denotes the derivative with respect to conformal time τ, where dτ = dt/a and ℋ= a^'/a. The normalized gravitational wave energy density spectrum is defined as the energy density per logarithmic frequency interval Ω _gw(k)=1/ρ _cdρ _gw/dln k, here ρ _c represents the energy density. Additionally, Ω _gw,0(k)=1/12( k^2/a_0^2H_0^2) 𝒫_h(k), knowing that 𝒫_h(k)≡k^3/π ^2∑_λ| h_𝐤^λ| ^2. By considering the scale at which horizon re-entry occurs ( k=a_hcH_hc)and analyzing the horizon re-entry scale alongside the Hubble parameter across different epochs, we can derive the present-day primordial gravitational wave spectrum for mode re-entry during the matter-dominated (M), radiation-dominated (R), and kinetic (K) eras, respectively, as follows: Ω _gw,0^( 𝑀) = 1/24Ω _m,0^2 a_0^2H_0^2/k^2𝒫_t      ( k_0<k≤ k_eq) , Ω _gw,0^( 𝑅) = 1/24Ω _r,0^2( g_∗/g_∗ 0) ( g_∗ s /g_∗ s0) 𝒫_t      ( k_eq<k≤ k_r) , Ω _gw,0^( 𝐾) = Ω _gw,0^( 𝑅) ( k/k_r)      ( k_r<k≤ k_max) , Fig. <ref> provides a detailed visualization of the primordial tensor power spectrum and the corresponding energy density parameters Ω _gw,0 across different cosmological eras: matter-dominated, radiation-dominated, and kinetic eras. In the matter-dominated era, Ω _gw,0 decreases with increasing comoving wavenumber k, indicating larger scales are more influenced by primordial tensor perturbations. During the radiation-dominated era, the energy density of gravitational waves is only affected directly by the D-term hybrid parameter ξ, while the scale k has a constant effect on Ω_gw,0. In the kinetic era, the energy density of gravitational waves is increasing with respect to the comoving scale to reach its maximum values at superhorizon scales. This comprehensive depiction illustrates how primordial gravitational waves evolve through different epochs, influenced by the universe's thermal history and expansion dynamics, and underscores the significance of these factors on the energy densities observed. § CONSTRAINTS ON GRAVITATIONAL WAVES FROM COSMIC STRINGS Cosmic strings form at the end of inflation, impacting the anisotropies observed in CMB and contributing to the creation of stochastic gravitational waves <cit.>. The dimensionless string tension, Gμ _cs, is key to understanding these phenomena, where G= 1/8π M_p^2=6.7× 10^-39GeV^-2, and μ _cs is the string's mass per unit length. 
Current CMB constraints place limits on this tension as Gμ _cs≲ 1.3× 10^-7 <cit.>. The SGWB arises from a mix of sources including inflation, cosmic strings, and phase transitions <cit.>. Specifically, inflationary tensor perturbations re-entering the horizon generate an SGWB <cit.>, leaving a unique imprint on the CMB B-mode polarization. The amplitude and scale dependence of this background are described by the tensor-to-scalar ratio r and the tensor spectral index n_T, adhering to the inflationary consistency relation r=-8n_T <cit.>. Given that r≥ 0, this implies n_T≤ 0 indicating a red spectrum <cit.>. With current limits on the tensor-to-scalar ratio, the amplitude of the inflationary SGWB at pulsar timing array (PTA) and interferometer scales remains too low for detection by these instruments, necessitating a primordial tensor power spectrum with a strong blue tilt ( n_T≥ 0) for detection <cit.>. The detection of gravitational waves from cosmic strings is primarily influenced by two key scales: the energy scale of inflation Λ _inf and the scale at which cosmic strings generate the GW spectrum Λ _cs≡√(Gμ _cs). The amplitude of the tensor mode anisotropy in the cosmic microwave background fixes the energy scale of inflation to approximately Λ _inf∼ V^1/4∼ 3.3× 10^16r^1/4 <cit.>. By applying the Planck 2σ bounds on the tensor-to-scalar ratio r, we derive an upper limit on the inflation scale, Λ _inf<1.6× 10^16GeV <cit.>. In our model, cosmic strings form post-inflation, indicating Λ _inf>Λ _cs, which results in a SGWB generated from undiluted strings. The SGWB from metastable cosmic string networks is expressed relative to the critical density as described in the folowing form <cit.>, Ω _GW( f) = 8π G/3H_0^2f( Gμ _cs) ^2Σ _n=1^∞C_n( f) P_n, = 8π G/3H_0^2f𝒢_cs(C_n,P_n). Here, 𝒢_cs(C_n,P_n) represent the term of cosmic strings contribution to GWs production, which contain the power spectrum P_n≃50/ζ( 4/3) n^4/3 that represent the gravitational waves (GWs) emitted by the n-th harmonic of a cosmic string loop, and C_n which denotes the number of loops emitting GWs observed at a specific frequency f . The number of loops emitting GWs, observed at a given frequency f is defined as <cit.>, C_n( f) =2n/f^2∫_z_min^z_maxdz/ H(z)( 1+z) ^6𝒩(𝑙,t). The integration range spans the lifetime of the cosmic string network, starting from its formation at z_max≃T_R/2.7K, with T_R being approximately 10^9GeV to its decay at z_min=( 70/H_0) ^1/2( Γ( Gμ _cs) ^2/2π× 6.7× 10^-39exp( -πκ _cs) ) ^1/4 <cit.>, where Γ≃ 50 is a numerical factor specifying the cosmic strings decay rate. Here, 𝒩(𝑙,t) represents the number density of CS loops of length 𝑙=2n/( 1+z) f. The loop density is defined by considering their formation and decay across different epochs. For the region of interest, the dominant contribution arises from the loops generated during the radiation-dominated era which is given by <cit.>, 𝒩_r(𝑙,t)=0.18/t^2/3( 𝑙+Γ Gμ _cst) ^5/2. The cosmological time and the Hubble rate with the current values of matter, radiation and dark energy densities, are respectively expressed as a function of the redshift z as t(z)=∫_z_min^z_maxdz/ H(z)( 1+z) , H(z)=H_0√(Ω _Λ+Ω _m( 1+z) ^3+Ω _r( 1+z) ^4). Fig. <ref> illustrates the density of gravitational waves Ω _GW as a function of frequency f and the cosmic string parameter Gμ _cs. The contour plot uses a logarithmic scale for both the frequency and Gμ _cs, highlighting how gravitational wave densities vary across different scales and parameters. 
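The frequency dependence implied by these ingredients can be sketched numerically. The snippet below evaluates only the harmonic sum Σ_n C_n(f) P_n from radiation-era loops, leaving out the overall prefactor of the quoted Ω_GW expression; it uses the standard radiation-era loop density with t^{3/2} in the denominator, approximates t(z) ≃ 1/(2H(z)) deep in the radiation era, and works in units with c = 1. The cosmological parameters, string tension and redshift cuts are illustrative.

```python
import numpy as np
from scipy.special import zeta

h_hub, Om, Orad, OL = 0.674, 0.315, 9.1e-5, 0.685   # illustrative cosmology
H0    = h_hub * 3.2408e-18                          # s^-1
Gmu   = 1e-11                                       # dimensionless string tension (illustrative)
Gamma = 50.0                                        # loop decay constant quoted in the text
z_eq  = 3.4e3                                       # approximate matter-radiation equality

def H(z):
    return H0 * np.sqrt(OL + Om*(1.0+z)**3 + Orad*(1.0+z)**4)

def P_n(n):
    # power emitted into the n-th harmonic, P_n ~ 50 / (zeta(4/3) n^{4/3})
    return 50.0 / (zeta(4.0/3.0) * n**(4.0/3.0))

def C_n(n, f, z_min=z_eq, z_max=1e9, points=4000):
    z = np.logspace(np.log10(z_min), np.log10(z_max), points)
    t = 1.0 / (2.0*H(z))                            # deep radiation era: t ~ 1/(2H)
    l = 2.0*n / ((1.0+z)*f)                         # loop length radiating at frequency f today
    n_loops = 0.18 / (t**1.5 * (l + Gamma*Gmu*t)**2.5)   # radiation-era loop number density
    integrand = n_loops / (H(z) * (1.0+z)**6)
    integral = 0.5*np.sum((integrand[1:] + integrand[:-1]) * np.diff(z))   # trapezoid rule
    return (2.0*n / f**2) * integral

def harmonic_sum(f, n_max=20):
    return sum(C_n(n, f) * P_n(n) for n in range(1, n_max + 1))

for f in (1e-9, 1e-8, 1e-7, 1e-6):
    print(f"f = {f:.0e} Hz -> sum_n C_n(f) P_n = {harmonic_sum(f):.3e} (arbitrary normalization)")
```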
The gravitational wave energy density is computed by integrating over the redshift z, taking into account the contributions from matter, radiation, and dark energy to the Hubble parameter H(z). The function 𝒩_r(𝑙,t) gives the number density of cosmic string loops, which is influenced by the redshift and other cosmological parameters. C_n and P_n are functions of n, the frequency f, and the string tension Gμ _cs; together they set the cosmic-string contribution 𝒢_cs(C_n,P_n) to the power spectrum of the gravitational waves. The resulting plot shows that at higher frequencies, the gravitational wave density varies more significantly with changes in Gμ _cs, indicating a strong dependence on the string tension parameter. This comprehensive depiction underscores the intricate relationship between cosmic string dynamics and gravitational wave emissions, offering insights into how different frequencies and cosmic string tensions contribute to the observable gravitational wave background. The use of a logarithmic scale for both axes ensures that a wide range of values is represented, making it easier to visualize the detailed structure and variation of Ω _GW across different cosmological scenarios. Fig. <ref> presents the cosmic strings factor 𝒢_cs as a function of frequency and the tensor-to-scalar ratio r. The contour plot uses a logarithmic scale for frequency, ranging from 10^-10 Hz to 10^-6 Hz, and spans multiple values of r=[ 0.026,0.128], reflecting different possible strengths of primordial tensor perturbations. The frequency dependence of 𝒢_cs incorporates a power-law behavior with an exponent α, representing the spectral shape of the cosmic string signal. The resulting plot visually demonstrates that 𝒢_cs increases with higher values of r and decreases with increasing frequency. The contour levels indicate the logarithmic values of 𝒢_cs, with color gradients representing its intensity. This visualization helps to elucidate the relationship between cosmic string dynamics and gravitational wave signals across a range of frequencies and tensor-to-scalar ratios. Specifically, it shows that stronger primordial tensor perturbations lead to higher 𝒢_cs values, particularly at lower frequencies, which is crucial for understanding the potential observational signatures of cosmic strings in gravitational wave experiments. This comprehensive depiction provides insights into the scale and strength of cosmic string contributions to the gravitational wave background, highlighting key dependencies and aiding in the interpretation of potential observational data in the context of early universe cosmology. § CONCLUSION In this paper, we investigated D-term inflation within the framework of supergravity, employing the minimal Kähler potential. Our study revealed that the D-term potential can circumvent the η-problem inherent in F-term models, offering a viable path to successful inflation. Mathematically, we derived the scalar potential, incorporating both F-term and D-term contributions, and demonstrated the stability conditions for the inflaton field S. The effective potential during inflation was approximated, highlighting the importance of the gauge coupling constant g and the Fayet-Iliopoulos term ξ. The slow-roll parameters and the resulting e-folding number were calculated, providing insights into the inflationary dynamics. Furthermore, we explored reheating dynamics and gravitino production, emphasizing the interplay between reheating temperature T_re, spectral index n_s, and gravitino abundance Y_3/2.
Our analysis indicated that gravitino production is sensitive to the equation of state during reheating, impacting the reheating temperature and the subsequent dark matter relic density. The study of scalar induced gravitational waves offers crucial insights into the early Universe, particularly within the inflationary paradigm. Detecting the primordial gravitational wave background can substantiate inflationary models by providing evidence for a kinetic epoch preceding inflation, characterized by a distinct blue tilt at higher frequencies. The perturbation theory framework reveals that the transverse-traceless components of these waves evolve according to second-order gravitational interactions induced by first-order curvature perturbations. The spectral energy density of these waves at various scales, particularly those relevant to Pulsar Timing Array (PTA) observations, can be quantified through analytical models, which align well with empirical data, such as that from NANOGrav. These models illustrate the dependence of gravitational wave amplitudes on key cosmological parameters, thus highlighting the interplay between inflationary dynamics and observable gravitational wave spectra. The analysis of gravitational waves generated by cosmic strings provides critical constraints on the dynamics of the early Universe and the properties of cosmic strings. The dimensionless string tension, Gμ _cs , significantly influences the stochastic gravitational wave background (SGWB) produced by cosmic strings post-inflation. Current constraints from CMB observations limit Gμ _cs≤ 1.3× 10^-7. The SGWB from cosmic strings is shaped by the energy scales of inflation and the string formation, with the amplitude of tensor mode anisotropy fixing the inflation scale around Λ _inf∼ 3.3× 10^16r^1/4. Cosmic string contributions to the gravitational wave spectrum, represented by Ω _GW(f), depend on the number density and dynamics of cosmic string loops across different cosmological epochs. These contributions exhibit a strong frequency dependence and vary significantly with the string tension parameter, highlighting the complex relationship between cosmic string properties and observable gravitational wave signals. The comprehensive models and visualizations underscore the importance of cosmic string dynamics in shaping the SGWB and provide insights into potential observational signatures in gravitational wave experiments, aiding in the broader understanding of early universe cosmology. § ACKNOWLEDGMENTS G.O. acknowledges the financial support of Fondecyt Grant 1220065. 99 A1 Dvali, G., Shafi, Q., & Schaefer, R. (1994). Large scale structure and supersymmetric inflation without fine tuning. Physical Review Letters, 73(14), 1886. A2 Copeland, E. J., Liddle, A. R., Lyth, D. H., Stewart, E. D., & Wands, D. (1994). False vacuum inflation with Einstein gravity. Physical Review D, 49(12), 6410. A3 Linde, A., & Riotto, A. (1997). Hybrid inflation in supergravity. Physical Review D, 56(4), R1841. A4 Şenoğuz, V. N., & Shafi, Q. (2005). Reheat temperature in supersymmetric hybrid inflation models. Physical Review D, 71(4), 043514. A5 Senoguz, V. N., & Shafi, Q. (2003). Testing supersymmetric grand unified models of inflation. Physics Letters B, 567(1-2), 79-86. A6 Rehman, M. U., Shafi, Q., & Wickman, J. R. (2010). Supersymmetric hybrid inflation redux. Physics Letters B, 683(2-3), 191-195. A7 Afzal, A., Ahmed, W., Rehman, M. U., & Shafi, Q. (2022). 
μ -hybrid inflation, gravitino dark matter, and stochastic gravitational wave background from cosmic strings. Physical Review D, 105(10), 103539. A8 McDonald, J. (2004). Conditions for a successful right-handed Majorana sneutrino curvaton. Physical Review D, 70(6), 063520. A9 Copeland, E. J., Liddle, A. R., Lyth, D. H., Stewart, E. D., & Wands, D. (1994). False vacuum inflation with Einstein gravity. Physical Review D, 49(12), 6410. A10 Seto, O., & Yokoyama, J. I. (2006). Hiding cosmic strings in supergravity D-term inflation. Physical Review D, 73(2), 023508. A11 Halyo, E. (1996). Hybrid inflation from supergravity D-terms. Physics Letters B, 387(1), 43-47. A12 Binetruy, P., & Dvali, G. (1996). D-term inflation. Physics Letters B, 388(2), 241-246. A13 Lyth, D. H., & Riotto, A. (1999). Particle physics models of inflation and the cosmological density perturbation. Physics Reports, 314(1-2), 1-146. A14 Jeannerot, R. (1997). Inflation in supersymmetric unified theories. Physical Review D, 56(10), 6205. A15 Linde, A. (1994). Hybrid inflation. Physical Review D, 49(2), 748. A16 King, S. F., & Riotto, A. (1998). Dilaton stabilisation in D-term inflation. Physics Letters B, 442(1-4), 68-73. A17 Ellis, J., Kim, J. E., & Nanopoulos, D. V. (1984). Cosmological gravitino regeneration and decay. Physics Letters B, 145(3-4), 181-186. A18 Ellis, J., Nanopoulos, D. V., Olive, K. A., & Rey, S. J. (1996). On the thermal regeneration rate for light gravitinos in the early universe. Astroparticle Physics, 4(4), 371-385. A19 Giudice, G. F., Riotto, A., & Tkachev, I. (1999). Thermal and non-thermal production of gravitinos in the early universe. Journal of High Energy Physics, 1999(11), 036. A19-1 Ellis, J., Linde, A. D., Nanopoulos, D. V. (1982). Inflation can save the gravitino. Physics Letters B, 118(1-3), 59-64. A20 King, S. F., Pascoli, S., Turner, J., & Zhou, Y. L. (2021). Gravitational waves and proton decay: complementary windows into grand unified theories. Physical Review Letters, 126(2), 021802. A21 Buchmuller, W., Domcke, V., & Schmitz, K. (2020).From NANOGrav to LIGO with metastable cosmic strings. Physics Letters B, 811, 135914. A22 King, S. F., Pascoli, S., Turner, J., & Zhou, Y. L. (2021). Confronting SO (10) GUTs with proton decay and gravitational waves. Journal of High Energy Physics, 2021(10), 1-38. A23 Benetti, M., Graef, L. L., & Vagnozzi, S. (2022). Primordial gravitational waves from NANOGrav: A broken power-law approach. Physical Review D, 105(4), 043520. A24 Ahriche, A., Hashino, K., Kanemura, S., & Nasri, S. (2019). Gravitational waves from phase transitions in models with charged singlets. Physics Letters B, 789, 119-126. A25 Scientific, L. I. G. O., Abbott, B. P., Abbott, R., Abbott, T. D., Abraham, S., Acernese, F., ... & Calloni, E. (2019). Search for the isotropic stochastic background using data from Advanced LIGO's second observing run. Physical Review D, 100(6), 061101. A26 Amaro-Seoane, P., Audley, H., Babak, S., Baker, J., Barausse, E., Bender, P., ... & Zweifel, P. (2017). Laser interferometer space antenna. arXiv preprint arXiv:1702.00786. A27 Goncharov, B., Shannon, R. M., Reardon, D. J., Hobbs, G., Zic, A., Bailes, M., ... & Zhang, S. (2021). On the evidence for a common-spectrum process in the search for the nanohertz gravitational-wave background with the Parkes Pulsar Timing Array. The Astrophysical Journal Letters, 917(2), L19. B1 Wess, J., & Bagger, J. (1992). Supersymmetry and supergravity (Vol. 103). Princeton university press. B2 Nilles, H. P. (1984). 
Supersymmetry, supergravity and particle physics. Physics Reports, 110(1-2), 1-162. B3 Bailin, D., & Love, A. (1994). Supersymmetric gauge field theory and string theory (p. 322). Taylor & Francis. BB3 YAMAGUCHI, Masahide. Supergravity-based inflation models: a review. Classical and Quantum Gravity, 2011, vol. 28, no 10, p. 103001. B4 Stewart, E. D. (1995). Inflation, supergravity, and superstrings. Physical Review D, 51(12), 6847. B5 Binétruy, P., & Dvali, G. (1996). D-term inflation . Physics Letters B, 388(2), 241-246. B6 Halyo, E. (1996). Hybrid inflation from supergravity D-terms. Physics Letters B, 387(1), 43-47. B7 Coleman, S., & Weinberg, E. (1973). Radiative corrections as the origin of spontaneous symmetry breaking. Physical Review D, 7(6), 1888. B8 BARDEEN, James M. Gauge-invariant cosmological perturbations. Physical Review D, 1980, vol. 22, no 8, p. 1882. Gonzalez-Espinoza:2019ajd M. Gonzalez-Espinoza, G. Otalora, N. Videla and J. Saavedra, JCAP 08 (2019), 029 Gonzalez-Espinoza:2020azh M. Gonzalez-Espinoza and G. Otalora, Phys. Lett. B 809 (2020), 135696 Leyva:2021fuo Y. Leyva, C. Leiva, G. Otalora and J. Saavedra, Phys. Rev. D 105 (2022) no.4, 043523 Leyva:2022zhz Y. Leyva and G. Otalora, JCAP 04 (2023), 030 B9 HAWKING, Stephen W. The development of irregularities in a single bubble inflationary universe. Physics Letters B, 1982, vol. 115, no 4, p. 295-297. B10 STAROBINSKY, Alexei A. Dynamics of phase transition in the new inflationary universe scenario and generation of perturbations. Physics Letters B, 1982, vol. 117, no 3-4, p. 175-178. Kofman:1994rk Mansfield, G., Fan, J., Lu, Q. (2023). Phenomenology of Spillway Preheating: Equation of State and Gravitational Waves. arXiv preprint arXiv:2312.03072. Ade:2015lrj Ade, P. A. R., Desert, F. X., Knoche, J., Giard, M., Dupac, X., Liguori, M., ... Hernandez-Monteagudo, C. (2015). Planck 2015. XX. Constraints on inflation. Astron. Astrophys., 594(arXiv: 1502.02114), A20. B11 GUTH, Alan H. et PI, So-Young. Fluctuations in the new inflationary universe. Physical Review Letters, 1982, vol. 49, no 15, p. 1110. B12 STAROBINSKY, Alexei A. Relict gravitation radiation spectrum and initial state of the universe. JETP lett, 1979, vol. 30, no 682-685, p. 131-132. B112 DAI, Liang, KAMIONKOWSKI, Marc, et WANG, Junpu. Reheating constraints to inflationary models. Physical review letters, 2014, vol. 113, no 4, p. 041302. B113 COOK, Jessica L., DIMASTROGIOVANNI, Emanuela, EASSON, Damien A., et al. Reheating predictions in single field inflation. Journal of Cosmology and Astroparticle Physics, 2015, vol. 2015, no 04, p. 047. Lopez:2021agu M. López, G. Otalora and N. Videla, JCAP 10 (2021), 021 B13 DINE, Michael, KITANO, Ryuichiro, MORISSE, Alexander, et al. Moduli decays and gravitinos. Physical Review D, 2006, vol. 73, no 12, p. 123518. B14 ENDO, Motoi, HAMAGUCHI, Koichi, et TAKAHASHI, Fuminobu. Moduli/inflaton mixing with supersymmetry-breaking field. Physical Review D, 2006, vol. 74, no 2, p. 023531. B15 Endo, M., Kawasaki, M., Takahashi, F., & Yanagida, T. T. (2006). Inflaton decay through supergravity effects. Physics Letters B, 642(5-6), 518-524. B16 TAKAHASHI, Fuminobu. Inflaton decay in supergravity and gravitino problem. In : AIP Conference Proceedings. American Institute of Physics, 2008. p. 57-60. B17 ENDO, Motoi, TAKAHASHI, Fuminobu, et YANAGIDA, T. T. Inflaton decay in supergravity. Physical Review D, 2007, vol. 76, no 8, p. 083509. B18 NAKAYAMA, Kazunori, TAKAHASHI, Fuminobu, et YANAGIDA, Tsutomu T. 
Constraint on the gravitino mass in hybrid inflation. Journal of Cosmology and Astroparticle Physics, 2010, vol. 2010, no 12, p. 010. B19 P. Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi, et al. (2020). Planck 2018 results. VI. Cosmological parameters B20 Easther, R., Lim, E. A. (2006). Stochastic gravitational wave production after inflation. Journal of Cosmology and Astroparticle Physics, 2006(04), 010. B21 An, H., Yang, C. (2024). Gravitational waves produced by domain walls during inflation. Physical Review D, 109(12), 123508. B22 Eggemeier, B., Niemeyer, J. C., Jedamzik, K., Easther, R. (2023). Stochastic gravitational waves from postinflationary structure formation. Physical Review D, 107(4), 043503. K1 El Bourakadi, K., Ferricha-Alami, M., Filali, H., Sakhi, Z., & Bennai, M. (2021). Gravitational waves from preheating in Gauss–Bonnet inflation. The European Physical Journal C, 81(12), 1144. K2 El Bourakadi, K., Asfour, B., Sakhi, Z., Bennai, M., & Ouali, T. (2022). Primordial black holes and gravitational waves in teleparallel Gravity. The European Physical Journal C, 82(9), 792. K3 El Bourakadi, K., Bousder, M., Sakhi, Z., & Bennai, M. (2021). Preheating and reheating constraints in supersymmetric braneworld inflation. The European Physical Journal Plus, 136(8), 1-19. K4 Sakhi, Z., El Bourakadi, K., Safsafi, A., Ferricha-Alami, M., Chakir, H., & Bennai, M. (2020). Effect of brane tension on reheating parameters in small field inflation according to Planck-2018 data . International Journal of Modern Physics A, 35(30), 2050191. K5 Bousder, M., El Bourakadi, K., & Bennai, M. (2021). Charged 4d einstein-gauss-bonnet black hole: Vacuum solutions, cauchy horizon, thermodynamics. Physics of the Dark Universe, 32, 100839. K6 El Bourakadi, K., Sakhi, Z., & Bennai, M. (2022). Preheating constraints in α-attractor inflation and Gravitational Waves production. International Journal of Modern Physics A, 37(17), 2250117. K7 El Bourakadi, K., Koussour, M., Otalora, G., Bennai, M., & Ouali, T. (2023). Constant-roll and primordial black holes in f (Q, T) gravity. Physics of the Dark Universe, 41, 101246. K8 El Bourakadi, K., Ferricha-Alami, M., Sakhi, Z., Bennai, M., & Chakir, H. (2024). Dark matter via baryogenesis: Affleck–Dine mechanism in the minimal supersymmetric standard model. Modern Physics Letters A, 2450060. K8-1 Bourakadi, K. E., Chakir, H., Khlopov, M. Y. (2024). Leptogenesis Effects on the Gravitational Waves Background: Interpreting the NANOGrav Measurements and JWST Constraints on Primordial Black Holes. arXiv preprint arXiv:2401.05311. B23 Ebadi, R., Kumar, S., McCune, A., Tai, H., Wang, L. T. (2024). Gravitational waves from stochastic scalar fluctuations. Physical Review D, 109(8), 083519. C01 MATARRESE, Sabino, PANTANO, Ornella, et SAEZ, Diego. General-relativistic approach to the nonlinear evolution of collisionless matter. Physical Review D, 1993, vol. 47, no 4, p. 1311. C02 MATARRESE, Sabino, PANTANO, Ornella, et SAEZ, Diego. General relativistic dynamics of irrotational dust: Cosmological implications . Physical review letters, 1994, vol. 72, no 3, p. 320. C03 MCCORMICK, Stephen. First law of black hole mechanics as a condition for stationarity. Physical Review D, 2014, vol. 90, no 10, p. 104034. C04 DOMENECH, Guillem. Scalar induced gravitational waves review. Universe, 2021, vol. 7, no 11, p. 398. C1 Basilakos, S., Nanopoulos, D. V., Papanikolaou, T., Saridakis, E. N., & Tzerefos, C. (2024). 
Gravitational wave signatures of no-scale Supergravity in NANOGrav and beyond. Physics Letters B, 138507. C2 Kohri, K., & Terada, T. (2018). Semianalytic calculation of gravitational wave spectrum nonlinearly induced from primordial curvature perturbations. Physical Review D, 97(12), 123532. C3 Maggiore, M. (2000). Gravitational wave experiments and early universe cosmology. Physics Reports, 331(6), 283-367. C4 Baumann, D., Steinhardt, P., Takahashi, K., & Ichiki, K. (2007). Gravitational wave spectrum induced by primordial scalar perturbations. Physical Review D, 76(8), 084019. C5 Vagnozzi, S. (2021). Implications of the NANOGrav results for inflation. Monthly Notices of the Royal Astronomical Society: Letters, 502(1), L11-L15. C6 Zhao, W., Zhang, Y., You, X. P., & Zhu, Z. H. (2013). Constraints of relic gravitational waves by pulsar timing arrays: Forecasts for the FAST and SKA projects. Physical Review D, 87(12), 124012. C6-1 Afzal, A., Agazie, G., Anumarlapudi, A., Archibald, A. M., Arzoumanian, Z., Baker, P. T., ..., NANOGrav Collaboration. (2023). The NANOGrav 15 yr data set: search for signals from new physics. The Astrophysical Journal Letters, 951(1), L11. C6-2 Agazie, G., Anumarlapudi, A., Archibald, A. M., Arzoumanian, Z., Baker, P. T., Bécsy, B., ..., NANOGrav Collaboration. (2023). The NANOGrav 15 yr data set: evidence for a gravitational-wave background. The Astrophysical Journal Letters, 951(1), L8. C6-3 Agazie, G., Alam, M. F., Anumarlapudi, A., Archibald, A. M., Arzoumanian, Z., Baker, P. T., ..., NANOGrav Collaboration. (2023). The NANOGrav 15 yr data set: Observations and timing of 68 millisecond pulsars. The Astrophysical Journal Letters, 951(1), L9. C6-4 Wu, Y. M., Chen, Z. C., Huang, Q. G. (2023). Search for stochastic gravitational-wave background from massive gravity in the NANOGrav 12.5-year dataset. Physical Review D, 107(4), 042003. C7 Kuroyanagi, S., Chiba, T., & Sugiyama, N. (2009). Precision calculations of the gravitational wave background spectrum from inflation. Physical Review D, 79(10), 103501. C8 Afzal, A., Agazie, G., Anumarlapudi, A., Archibald, A. M., Arzoumanian, Z., Baker, P. T., ... & NANOGrav Collaboration. (2023). The NANOGrav 15 yr Data Set: Search for Signals from New Physics. The Astrophysical Journal Letters, 951(1), L11. C9 Agazie, G., Anumarlapudi, A., Archibald, A. M., Arzoumanian, Z., Baker, P. T., Bécsy, B., ... & NANOGrav Collaboration. (2023). The NANOGrav 15 yr data set: Evidence for a gravitational-wave background. The Astrophysical Journal Letters, 951(1), L8. C10 Agazie, G., Alam, M. F., Anumarlapudi, A., Archibald, A. M., Arzoumanian, Z., Baker, P. T., ... & NANOGrav Collaboration. (2023). The NANOGrav 15 yr Data Set: Observations and Timing of 68 Millisecond Pulsars. The Astrophysical Journal Letters, 951(1), L9. C11 Arzoumanian, Z., Baker, P. T., Blumer, H., Bécsy, B., Brazier, A., Brook, P. R., ... & NANOGrav Collaboration. (2020). The NANOGrav 12.5 yr data set: search for an isotropic stochastic gravitational-wave background. The Astrophysical Journal Letters, 905(2), L34. C12 Arzoumanian, Z., Brazier, A., Burke-Spolaor, S., Chamberlin, S. J., Chatterjee, S., Christy, B., ... & NANOGrav Collaboration. (2016). The NANOGrav nine-year data set: limits on the isotropic stochastic gravitational wave background. The Astrophysical Journal, 821(1), 13. C13 Guo, S. Y., Khlopov, M., Liu, X., Wu, L., Wu, Y., & Zhu, B. (2023). Footprints of Axion-Like Particle in Pulsar Timing Array Data and JWST Observations. 
arXiv preprint arXiv:2306.17022. C14 Arzoumanian, Z., Baker, P. T., Blumer, H., Bécsy, B., Brazier, A., Brook, P. R., ... & NANOGrav Collaboration., The NANOGrav 12.5 yr data set: search for an isotropic stochastic gravitational-wave background. The Astrophysical journal letters 905 (2), L34 (2020). C15 Gao, T. J., & Yang, X. Y., Double peaks of gravitational wave spectrum induced from inflection point inflation. The European Physical Journal C, 81 (6), 1-10 (2021). C16 Vagnozzi, S. (2023). Inflationary interpretation of the stochastic gravitational wave background signal detected by pulsar timing array experiments. Journal of High Energy Astrophysics. C17 Zhao, W., Zhang, Y., You, X. P., & Zhu, Z. H. (2013). Constraints of relic gravitational waves by pulsar timing arrays: Forecasts for the FAST and SKA projects. Physical Review D, 87(12), 124012. K9 Sahni, V., Sami, M., & Souradeep, T. (2001). Relic gravity waves from braneworld inflation. Physical Review D, 65(2), 023518. K10 Figueroa, D. G., & Tanin, E. H. (2019). Inconsistency of an inflationary sector coupled only to Einstein gravity. Journal of Cosmology and Astroparticle Physics, 2019(10), 050. K11 Giovannini, M. (1998). Gravitational wave constraints on post-inflationary phases stiffer than radiation. Physical Review D, 58(8), 083504. K12 Riazuelo, A., & Uzan, J. P. (2000). Quintessence and gravitational waves. Physical Review D, 62(8), 083506. J1 Akrami, Y., Arroja, F., Ashdown, M., Aumont, J., Baccigalupi, C., Ballardini, M., ... & Tomasi, M. (2020). Planck 2018 results-IX . Constraints on primordial non-Gaussianity. Astronomy & Astrophysics, 641, A9. J2 Aghanim, N., Akrami, Y., Ashdown, M., Aumont, J., Baccigalupi, C., Ballardini, M., ... & Roudier, G. (2020). Planck 2018 results-VI . Cosmological parameters. Astronomy & Astrophysics, 641, A6. J3 Vagnozzi, S. (2021). Implications of the NANOGrav results for inflation. Monthly Notices of the Royal Astronomical Society: Letters, 502(1), L11-L15. J4 Benetti, M., Graef, L. L., & Vagnozzi, S. (2022). Primordial gravitational waves from NANOGrav: A broken power-law approach. Physical Review D, 105(4), 043520. J5 Caprini, C. (2015, April). Stochastic background of gravitational waves from cosmological sources. In Journal of Physics: Conference Series (Vol. 610, No. 1, p. 012004). IOP Publishing. J6 Kuroyanagi, S., Takahashi, T., & Yokoyama, S. (2021). Blue-tilted inflationary tensor spectrum and reheating in the light of NANOGrav results. Journal of Cosmology and Astroparticle Physics, 2021(01), 071. J7 Liddle, A. R., & Lyth, D. H. (1993). The Cold dark matter density perturbation. Physics Reports, 231(1-2), 1-105. J8 Ade, P. A. R., Ahmed, Z., Aikin, R. W., Alexander, K. D., Barkats, D., Benton, S. J., ... & (Keck Array and bicep2 Collaborations). (2018). Constraints on Primordial Gravitational Waves Using Planck, WMAP, and New BICEP2/Keck Observations through the 2015 Season. Physical review letters, 121(22), 221301. J9 Vagnozzi, S. (2021). Implications of the NANOGrav results for inflation. Monthly Notices of the Royal Astronomical Society: Letters, 502(1), L11-L15. J10 Ahmed, W., Junaid, M., Nasri, S., & Zubair, U. (2022). Constraining the cosmic strings gravitational wave spectra in no-scale inflation with viable gravitino dark matter and nonthermal leptogenesis. Physical Review D, 105(11), 115008. J11 Easther, R., Kinney, W. H., & Powell, B. A. (2006). The Lyth bound and the end of inflation. Journal of Cosmology and Astroparticle Physics, 2006(08), 004. 
J12 Akrami, Y., Arroja, F., Ashdown, M., Aumont, J., Baccigalupi, C., Ballardini, M., ... & Savelainen, M. (2020). Planck 2018 results-X. Constraints on inflation. Astronomy & Astrophysics, 641, A10. J13 Blanco-Pillado, J. J., & Olum, K. D. (2017). Stochastic gravitational wave background from smoothed cosmic string loops. Physical Review D, 96(10), 104046. J14 Buchmuller, W., Domcke, V., & Schmitz, K. (2020). From NANOGrav to LIGO with metastable cosmic strings. Physics Letters B, 811, 135914. J15 Auclair, P., Blanco-Pillado, J. J., Figueroa, D. G., Jenkins, A. C., Lewicki, M., Sakellariadou, M., ... & Kuroyanagi, S. (2020). Probing the gravitational wave background from cosmic strings with LISA. Journal of Cosmology and Astroparticle Physics, 2020(04), 034.
http://arxiv.org/abs/2406.19335v1
20240627170834
$L^\infty$-sizes of the spaces Siegel cusp forms of degree $n$ via Poincaré series
[ "Soumya Das" ]
math.NT
[ "math.NT" ]
http://arxiv.org/abs/2406.18739v1
20240626201003
RetroGFN: Diverse and Feasible Retrosynthesis using GFlowNets
[ "Piotr Gaiński", "Michał Koziarski", "Krzysztof Maziarz", "Marwin Segler", "Jacek Tabor", "Marek Śmieja" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT Single-step retrosynthesis aims to predict a set of reactions that lead to the creation of a target molecule, which is a crucial task in molecular discovery. Although a target molecule can often be synthesized with multiple different reactions, it is not clear how to verify the feasibility of a reaction, because the available datasets cover only a tiny fraction of the possible solutions. Consequently, the existing models are not encouraged to explore the space of possible reactions sufficiently. In this paper, we propose a novel single-step retrosynthesis model, RetroGFN, that can explore outside the limited dataset and return a diverse set of feasible reactions by leveraging a feasibility proxy model during the training. We show that RetroGFN achieves competitive results on standard top-k accuracy while outperforming existing methods on round-trip accuracy. Moreover, we provide empirical arguments in favor of using round-trip accuracy, which expands the notion of feasibility with respect to the standard top-k accuracy metric. § INTRODUCTION The rising interest in machine learning has led to the development of many deep generative models for de novo drug design <cit.>. Such approaches can propose novel molecules with promising properties (e.g. high binding affinity score) predicted by other machine learning models <cit.>; however, these virtual compounds eventually need to be synthesized and evaluated in the wet lab. This motivates the development of reliable (retro)synthesis planning algorithms able to design a synthesis route for an input molecule. Retrosynthesis aims to recursively decompose a target compound into simpler molecules forming a synthesis tree. The leaves of the tree are purchasable molecules from which the synthesis process can start, and the tree itself is a synthesis recipe. By traversing the tree bottom-up and performing the reactions defined by the tree nodes, one will eventually obtain the target molecule. The construction of such a tree usually consists of two components: a single-step retrosynthesis model that decomposes a molecule <cit.>, and a multi-step planning algorithm that guides the recursive decomposition to obtain the full synthesis tree <cit.>. In this paper, we focus on single-step retrosynthesis, which predicts a reaction that is likely to synthesize a given molecule. In practice, many feasible reactions can lead to a given product. Since the success of a synthesis plan depends on factors that may vary over time (e.g. the availability or cost of reactants), the retrosynthesis model should ideally return all possible reactions. In other words, we would like to produce a diverse set of feasible reactions leading to the requested product. However, the available datasets cover only a fraction of feasible reactions, so for many of the included products, a lot of alternative reactions are missing. This limitation of current reaction datasets causes two major issues that we address in this paper. First, the typical evaluation of retrosynthesis models involves the use of top-k accuracy, which verifies how many of the top-k reactions returned by the model are included in a given dataset. Our analysis performed on the USPTO-50k test split <cit.> reveals that on average more than 100 feasible reactions returned by the examined retrosynthesis models are ignored by top-k accuracy (see <Ref>).
Since it is practically impossible to include all possible reactions in a finite dataset, one remedy relies on employing a machine learning model, which reliably assesses the reaction feasibility. This approach is applied in round-trip accuracy, a less exploited alternative to the top-k accuracy metric. Our experimental study demonstrates that replacing top-k accuracy with top-k round-trip accuracy decreases the number of ignored reactions to less than 20 elements (<Ref>) while being robust to non-trivially unfeasible reactions (<Ref>). In consequence, round-trip accuracy should be taken into account as a complementary metric to standard top-k accuracy. We recommend reporting it in future papers to make model evaluation more comprehensive (see <Ref> for detailed analysis). Second, since typical evaluation relies on using top-k accuracy (instead of round-trip accuracy), the existing retrosynthesis models are not encouraged to explore the space of feasible reactions well. Taking inspiration from the construction of round-trip accuracy, we employ a machine learning model, which rewards the retrosynthesis model for returning highly feasible reactions (not only those included in a fixed dataset). The main contribution of the paper is the development of a RetroGFN model that can explore beyond the dataset and return a diverse set of feasible reactions (<Ref>). RetroGFN is based on the recent GFlowNet framework <cit.> which enables exploration of the solution space and sampling from that space with probability proportional to the reward function, e.g., reaction feasibility. In consequence, GFlowNets can sample a large number of highly scored and diverse solutions. Our RetroGFN model leverages this property, sampling a large number of feasible reactions. It outperforms existing methods on the round-trip accuracy metric while achieving competitive results on the top-k accuracy. To summarize, our contributions are: * We provide empirical arguments for the importance of reporting the round-trip accuracy in the single-step retrosynthesis model evaluation (<Ref>). * We develop RetroGFN: a model based on the GFlowNet framework that generates diverse and feasible reactions. To our knowledge, we are the first to adapt GFlowNets for retrosynthesis (<Ref>). We make the code publicly available[<https://github.com/gmum/RetroGFN>]. * We benchmark the state-of-the-art single-step retrosynthesis models and show that our RetroGFN outperforms all considered models on the round-trip accuracy while achieving competitive results on the top-k accuracy (<Ref>). § RELATED WORK Single-step Retrosynthesis. The single-step retrosynthesis problem is well-known in the drug-discovery community. The methods in this field can be roughly divided into template-based and template-free. The former are based on reaction templates (also called rules, see <Ref>), which describe the graph-level transformation of molecules that are encountered in the reactions <cit.>. Templates provide a strong inductive bias as they form a fixed set of possible transformations that the retrosynthesis model can perform. Template-free approaches, on the other hand, do not rely on a template and aim to generate the transformation of the product (the change of the bonds and atoms between reactants and product) <cit.> or generate the product from scratch <cit.>. Our RetroGFN is a template-based model, but it is not limited to a fixed set of templates as it composes them using pre-defined patterns. 
The template composition process was inspired by RetroComposer <cit.> but remains substantially different: we implement the generation process in the GFlowNets framework; we use more general patterns; we parametrize the second phase to be order-invariant; we guarantee the second phase ends with product and reactant patterns that can be mapped; and finally, we map the atoms using a machine learning model (while RetroComposer uses a heuristic). GFlowNets. GFlowNets <cit.> are a class of generative methods devoted to sampling from high-dimensional distributions. GFlowNets were originally proposed as an alternative to MCMC (offering the benefits of amortization) and reinforcement learning (displaying a mode-seeking behavior, that is the ability to discover multiple diverse modes), and were later shown to be equivalent to special cases of other generative methods <cit.>. The diversity, in particular, is a desired property in multiple scientific discovery tasks <cit.>, including molecule <cit.>, biological sequence <cit.>, crystal <cit.>, conformer <cit.> and DNA-encoded library <cit.> generation tasks. Importantly, from a scientific discovery standpoint, GFlowNets have also been used in the active learning context, both in the multi-objective <cit.> and multi-fidelity <cit.> settings. § IMPORTANCE OF ROUND-TRIP ACCURACY In this section, we set up the single-step retrosynthesis problem, discuss the limitations of the widely used top-k accuracy metric, and argue for the relevance of the round-trip accuracy. §.§ Single-Step Retrosynthesis Single-step retrosynthesis is focused on predicting reactions that could lead to a given product (see <Ref>(a)). The retrosynthesis model is evaluated with a reaction dataset D={(R_1, p_1), ..., (R_n, p_n)} containing reaction tuples, where p_i denotes a product and R_i is a set of reactants that can synthesize the product p_i. During inference, the model is requested to return at most k reactions for every product from the dataset, which are expected to be sorted from the most to the least probable. §.§ Limitations of Top-k Accuracy Top-k accuracy is one of the most widely used metrics in retrosynthesis. To calculate it for the entire dataset, we first compute the support function F_ACC for every product p, which informs whether the ground-truth reaction was found in the top-k results returned by the model f: F_ACC(f, p, k) = 1[∃ i ≤ k : (f(p)_i, p) ∈ D], where f(p)_i is the i-th set of reactants proposed by the model for product p. Top-k accuracy denotes the fraction of ground-truth reactions that were retrieved by the model and can be written as ACC(f, k) = (1/n) ∑_{i=1}^{n} F_ACC(f, p_i, k). Top-k accuracy works under the assumption that all sensible reactions for a given product are contained in the dataset. However, there are often many different ways to make a product, and it would be too expensive to try all of them; as a result, real datasets are highly incomplete. In particular, it turns out that this assumption is not true for the USPTO-50k dataset <cit.>, which is the most widely used benchmark in the retrosynthesis community. To show this, we gathered all reactions returned by any of the considered retrosynthesis models (see <Ref>) that are not included in USPTO-50k. Among 8409 reactions ranked top-1 by any considered model, 76 of them can be found in the USPTO-MIT dataset <cit.>. While top-k accuracy ignores these feasible reactions, the round-trip accuracy can account for a significant portion of them (see <Ref>).
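To ground the definition above, here is a minimal sketch of how top-k accuracy could be computed for a model that returns a ranked list of reactant sets per product. The use of RDKit for SMILES canonicalization and the toy data structures are assumptions of this sketch, not something prescribed by the paper.

```python
from rdkit import Chem

def canonical(smiles: str) -> str:
    """Canonical SMILES so that string comparison means molecule identity."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else smiles

def canonical_set(reactants):
    """Order-independent key for a set of reactant SMILES."""
    return tuple(sorted(canonical(s) for s in reactants))

def top_k_accuracy(model_outputs, dataset, k):
    """
    model_outputs: dict product_smiles -> ranked list of reactant lists (most probable first)
    dataset:       list of (reactant_list, product_smiles) ground-truth pairs
    Returns the fraction of ground-truth reactions recovered within the top-k predictions.
    """
    hits = 0
    for reactants, product in dataset:
        truth = canonical_set(reactants)
        preds = model_outputs.get(product, [])[:k]
        if any(canonical_set(r) == truth for r in preds):
            hits += 1
    return hits / len(dataset)

# Toy usage with a single esterification example (illustration only)
dataset = [(["CCO", "CC(=O)O"], "CCOC(C)=O")]
model_outputs = {"CCOC(C)=O": [["CC(=O)O", "CCO"], ["CCO", "CC(=O)Cl"]]}
print(top_k_accuracy(model_outputs, dataset, k=1))   # -> 1.0
```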
§.§ Relevance of Round-Trip Accuracy The top-k round-trip metric uses a wider notion of feasibility than top-k accuracy. For a single product, the top-k round-trip accuracy value denotes the percentage of feasible reactions among the top-k reactions returned by the machine learning model. The feasibility is estimated with a forward reaction prediction model F, which is a fine-tuned Chemformer <cit.> (see <ref>). The exact formula for top-k round-trip accuracy calculated on a product p and retrosynthesis model f is given by: F_Round(f, p, k) = (1/k) ∑_{i=1}^{k} 1[p ∈ F(f(p)_i)], where F(f(p)_i) is the set of products predicted by the forward model F for a set of reactants f(p)_i. In other words, the metric measures how many reactions proposed by a backward model f can be back-translated by a forward model F. We report the top-k round-trip accuracy for the entire dataset D, which can be written as Round(f, k) = (1/n) ∑_{i=1}^{n} F_Round(f, p_i, k). Therefore, round-trip accuracy assesses both the diversity and feasibility of the returned reactions. <Ref> shows the ratio of feasible reactions ignored by top-k accuracy and round-trip accuracy. The metrics were computed on the USPTO-50k dataset and the "real" feasibility was assessed with USPTO-MIT. Note that the number of ignored feasible reactions is highly underestimated, as USPTO-MIT is by no means exhaustive. The space of all reactions is enormous, and even simple manipulations of the leaving groups of reactants (e.g. changing Cl to Br) are likely to result in many feasible reactions that were never screened in the wet lab. While all the feasible reactions cannot be included in the dataset directly, the round-trip accuracy can account for some portion of them by leveraging the generalization properties of deep learning. The fact that our round-trip accuracy can account for strictly more feasible reactions than top-k accuracy is essential in the context of drug design, even at the cost of an increased number of non-feasible reactions accounted as feasible. This is because the loss caused by the inability to synthesize a drug is drastically higher than the cost of performing an unsuccessful synthesis experiment <cit.>. §.§ Reliability of Round-Trip Accuracy To assess the reliability of the round-trip accuracy, we want to estimate what percentage of non-feasible reactions the round-trip accuracy will treat as feasible (we call this metric acceptance accuracy). The problem with constructing a set of non-feasible reactions is that they are very rarely reported in the literature. All reactions from the USPTO-50k and USPTO-MIT datasets that we consider in this paper are feasible. Therefore, to obtain non-feasible reactions, we assume that for every set of reactants from USPTO-MIT all of its possible outcomes were reported. Under this assumption, we can create a non-feasible reaction by taking a set of reactants from USPTO-MIT and a product that is not a possible outcome. We create an initial set of such reactions by applying random forward templates to the m sets of reactants from USPTO-MIT (test split). Then we select a subset C of size m/10 of the obtained reactions so that all reactions have distinct products and reactants. Then, for every reaction (r, p) ∈ C, we gather 9 sets of reactants from USPTO-MIT with as high a Tanimoto similarity to r as possible and add them to C. As a result, for every product p from C, we have a set of 10 corresponding reactions that are non-feasible in a non-trivial way.
We additionally ensure that the sets of reactants are distinct across the reactions. If a set of reactants were shared between some reactions, then only one of those reactions would be accepted by a forward model, artificially increasing the acceptance accuracy. The acceptance accuracy of round-trip accuracy (and the underlying forward model) is reported in <ref>. We see that the forward model accurately rejects even the most challenging reactions obtained by forward reaction template application. §.§ Generalization of Round-Trip Accuracy Round-trip accuracy can be generalized to use a wider class of machine learning models as a reaction feasibility proxy. We propose such a generalized metric in <ref> and evaluate the baseline models on it. § RETROGFN RetroGFN is a single-step retrosynthesis model, meaning it predicts a set of molecules that could react to form a given target product (see <Ref> a)). A product is represented as an annotated graph G=(V, T, E), where nodes V={v_1, v_2, ..., v_n} correspond to the molecule's atoms along with associated atom symbols (types) T, and edges E are bonds. Additionally, each node and edge has an associated vector of features that will be used when embedding a molecule. §.§ Reaction Templates and Patterns Several existing single-step retrosynthesis models, including ours, work with (backward) reaction templates. A reaction template can be seen as a regular expression on graphs (see <Ref>). It describes the transformation of a product into the reactants and consists of the product pattern (left side of the regular expression) and a set of reactants' patterns (right side). [Figure: Illustration of a single-step retrosynthesis (a) and a corresponding reaction template (b). Atoms from a product pattern on the left side of the template are mapped to atoms from reactant patterns on the right side (red C:i is mapped to blue C:i).] The atoms of the product pattern are mapped to atoms of the reactants' patterns. Reaction templates provide a strong inductive bias to the model while limiting it to a fixed set of possible transformations. However, we extend the covered reaction space by introducing a template composition process inspired by RetroComposer <cit.>. In this approach, we choose the reaction center where the template is going to be applied and compose a concrete template step by step using building blocks called patterns. We extract the templates from the train split of USPTO-50k, following <cit.>. Each template is then split into product and reactant patterns (see <Ref> b)). We denote the set of all encountered product patterns by PPS and the analogous set of reactant patterns by RPS. The patterns do not include any molecular regular expression (SMARTS) and can be represented similarly to molecules, as annotated graphs. §.§ Generation Process Given a product, our RetroGFN composes an appropriate template in three phases: * The first phase determines a reaction center: a product pattern matched to the product. * The second phase gathers the reactant patterns. * The third phase constructs an atom mapping between the atoms of the product pattern and the reactants' patterns. In the end, the obtained template is applied to the given product and results in a final set of reactants. <Ref> shows an example of the composition process, while a detailed description of each phase can be found further in the section. The core component of a GFlowNet model is a forward policy P_F(a | s) describing the probability of taking action a in the state s.
The generation process samples a sequence of states and actions τ=(s_1, a_1, ..., s_k, a_k, t) called a trajectory, where t is a terminal state. In RetroGFN, an initial state s_1 is an input product, the intermediate states s_i correspond to the partially constructed template, and the terminal state t stores a final template along with the result of its application to the product. We group the states into three phases, and the specific definition of P_F(a | s) depends on the phase i: P^i_F(a | s) = exp(α · score_i(s, a)) / ∑_{a' ∈ A^i(s)} exp(α · score_i(s, a')), where score_i is a phase-specific score function parameterized with a neural network and A^i(s) is the set of possible actions that can be taken from s in the i-th phase. The policy is simply a softmax with temperature coefficient α over the scores of all possible actions A^i(s). The score functions for all the phases share a common Graph Neural Network (GNN) encoder, denoted GNN_1, that, given a product p=(V, T, E), embeds its nodes' features: GNN_1(p) ∈ ℝ^{n × d}, where n is the number of product nodes and d is the embedding size. We overload the notation and let GNN_1(v_j) denote the embedding of a product node v_j ∈ V. The GNN architecture we use is similar to the one from LocalRetro: a stack of MPNN layers with a single Transformer layer <cit.> on top. Details can be found in <ref>. First Phase. A state s in the first phase is an input product p. The action space A^1(s) contains all possible atom matchings of product patterns from PPS to the product p. An action a ∈ A^1(s) contains the matched product pattern pp ∈ PPS and the matched atom indices I={i_1, ..., i_m}. The value of i_j is the index of the product atom matched with the j-th product pattern atom. To compute score_1(s, a), we aggregate the representations of the matched product nodes and feed them into a multi-layer perceptron MLP_1: ℝ^d → ℝ: score_1(s, a) = MLP_1(∑_{i ∈ I} GNN_1(v_i)). After the action is chosen and applied, the generation process transitions directly to the second phase. Second Phase. The second phase iteratively adds reactant patterns to the composed template. At the beginning of the phase, the list of reactant patterns is empty. The second phase action a is a reactant pattern rp_j ∈ RPS that is going to be added to the template. The score_2(s, a) function concatenates the information from the previous phase and the reactant patterns collected so far (denoted as R) and feeds it to MLP_2: ℝ^{3d} → ℝ^{|RPS|} that predicts the score for all the reactant patterns in RPS: score_2(s, a) = MLP_2(∑_{i ∈ I} GNN_1(v_i) | Emb_PPS(pp) | ∑_{rp ∈ R} Emb_RPS(rp))_j, where | denotes concatenation. Here we select the j-th score returned by MLP_2, as it corresponds to the reactant pattern rp_j from the action. The index embedding e = Emb_A(a) is a function that looks up the index of the element a in the set A and assigns that index a learnable embedding e ∈ ℝ^d (e.g. Emb_PPS assigns a unique learnable embedding to every pp ∈ PPS). At the end of this phase, we want to be sure that every atom from the product pattern can be mapped to some atom of the reactant patterns. Originally, each pattern had some atom mapping in the template it comes from (see <Ref>). [Figure: Illustration of a pattern before (left) and after (right) mapping removal. The mappable atoms of the pattern are colored blue.] Although those explicit mappings are inadequate in the newly composed template, we can leverage the knowledge that an atom was originally mapped. For every pattern, we construct a set of mappable atoms that consists of the pattern's atoms that were mapped in the original template (see <Ref>).
The composed template is allowed to map only the mappable atoms. We ensure that all mappable atoms in the composed template can be mapped by properly restricting the action space A^2(s). Third Phase. The third phase creates a mapping between atoms of the product and reactant patterns. An action a is an atom mapping (j, k, l) ∈ M that links the j-th node from the product pattern pp with the l-th mappable node of the k-th reactant pattern from the list of reactant patterns R. The score_3(s, a) is given by the formula: score_3(s, a) = MLP_3(GNN_1(v_{i_j}) | GNN_2(v_{kl})), where v_{i_j} is the product node matched with the j-th node of the product pattern, and v_{kl} is the l-th node of the k-th reactant pattern from R. To embed the reactant pattern nodes, we introduce a second encoder GNN_2 with the same architecture as GNN_1. The action space A^3(s) contains all possible atom mappings. We call an atom mapping between two nodes possible when the atom symbols of the nodes are the same and neither of the nodes was previously mapped. The third phase ends when every node from the product pattern is mapped, resulting in a template that can be applied to the reaction center chosen in the first phase. The obtained reaction forms the terminal state t. §.§ Training We trained our RetroGFN with a modified version of the Trajectory Balance objective from <cit.>, which for a trajectory τ = (s_1, a_1, s_2, a_2, ..., s_k, a_k, t) is given by the formula: ℒ(τ) = ( log [ F(s_1) ∏_{i=1}^{k} P_F(a_i | s_i) / ( R(t) P_B(a_k | t) ∏_{i=2}^{k} P_B(a_{i-1} | s_i) ) ] )^2. The main difference from the original formulation comes from the fact that our RetroGFN is conditioned <cit.> on the product from the initial state s_1. Therefore, for every initial state, we estimate the incoming flow separately using the function F(s_1), which is essentially an index embedding F(s) = Emb_P(s) ∈ ℝ that looks up the set of training products P and returns a learnable scalar (note that we only evaluate F(s) during training). As a backward policy P_B(a | s), we use a uniform distribution over the possible actions that could lead to state s. The reward is an exponential reward of the form R(x) = exp(β f(x)), where f is a feasibility proxy. It can be a machine learning model that predicts the feasibility of a reaction or an indicator of whether the forward reaction prediction model was able to backtranslate the reaction x. In the main part of the paper, we evaluate the former, while experiments with the latter can be found in <ref>. The forward model used during training is distinct from the one used in the round-trip evaluation. §.§ Inference During inference, the retrosynthesis model is given a product and requested to output at most N reactions sorted from the most to the least promising. RetroGFN samples the reactions using the trained forward policy P_F(a | s) and orders them by the estimated probability. The probability of a reaction represented by a terminal state t is estimated by summing the probabilities of all sampled trajectories that end with t: p(t) = ∑_{τ: t ∈ τ} ∏_{(s, a) ∈ τ} P_F(a | s). To increase the accuracy of the estimation, we sample K · N trajectories. We leave the exploration of other estimation methods for future work. The details on the architecture and hyperparameters of both training and inference can be found in <Ref>.
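Before moving to the experiments, the snippet below sketches the conditional trajectory-balance objective described above for a single trajectory, assuming PyTorch. The function and variable names are ours, the log-probabilities are toy constants standing in for the outputs of the forward and backward policies, and the reward follows the exponential form R(x) = exp(β f(x)) from the text; this is an illustration, not the authors' implementation.

```python
import torch

def trajectory_balance_loss(log_f_s1, log_pf_steps, log_pb_steps, log_reward):
    """
    Conditional trajectory-balance loss for one trajectory:
        L(tau) = ( log F(s_1) + sum_i log P_F(a_i|s_i)
                   - log R(t) - sum_i log P_B(a_{i-1}|s_i) )^2
    log_f_s1:     scalar tensor, learnable log-flow of the conditioning product
    log_pf_steps: 1-D tensor of forward-policy log-probabilities along the trajectory
    log_pb_steps: 1-D tensor of backward-policy log-probabilities along the trajectory
    log_reward:   scalar tensor, log R(t) = beta * feasibility_proxy(t)
    """
    delta = log_f_s1 + log_pf_steps.sum() - log_reward - log_pb_steps.sum()
    return delta ** 2

# Toy usage with made-up numbers (illustration only)
log_f_s1 = torch.nn.Parameter(torch.tensor(0.0))     # F(s_1) as a learnable scalar
log_pf = torch.log(torch.tensor([0.5, 0.25, 0.8]))   # forward policy log-probs
log_pb = torch.log(torch.tensor([1.0, 0.5, 1.0]))    # uniform backward policy log-probs
beta, feasibility = 8.0, 0.9                         # exponential reward R = exp(beta * f)
loss = trajectory_balance_loss(log_f_s1, log_pf, log_pb,
                               torch.tensor(beta * feasibility))
loss.backward()                                      # gradients flow into log_f_s1 (and, in practice, the policy nets)
print(float(loss))
```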
§ EXPERIMENTS This section describes the benchmark methodology and results of our RetroGFN models compared to the current state-of-the-art. Tables <ref>, <ref>, <ref> and <ref> show that our RetroGFN outperforms all considered models on round-trip accuracy while achieving competitive results on the top-k accuracy. §.§ Setup Datasets. We compared the considered methods on two datasets: USPTO-50k, a default choice for benchmarking retrosynthesis models, and USPTO-MIT, which we use as a generalization benchmark for models trained on USPTO-50k. We used the commonly used splits for both datasets <cit.>. We refined the USPTO-MIT to ensure there is no overlap between it and the USPTO-50k train split. Retrosynthesis Models. We compared our RetroGFN to well-known and recent state-of-the-art models: GLN <cit.>, MEGAN <cit.>, MHNreact <cit.>, LocalRetro <cit.>, RootAligned <cit.>, RetroKNN <cit.>, and Chemformer <cit.>. We used the wrappers of the original implementations and checkpoints from the Syntheseus repository[<https://github.com/microsoft/syntheseus>]. We used the evaluation procedure from Syntheseus that queries the model for 100 reactions, removes the duplicates, and truncates the list of reactions for every product to be no larger than 50. The same output was used to calculate both standard and round-trip metrics. Forward Model. A forward (reaction prediction) model takes a set of reactants as an input and outputs a set of possible products. As a backbone, we used a pre-trained Chemformer model from <cit.>. We fine-tuned two forward models: Chemformer-Eval, which was used to estimate the reaction feasibility in the round-trip accuracy (see <ref>), and Chemformer-Train, which guided RetroGFN during the training (see <ref>). Chemformer-Train was fine-tuned on the train split of USPTO-50k, while Chemformer-Eval used both the train and test splits of USPTO-50k. §.§ Results on USPTO-50k The top-k round-trip accuracy results for the USPTO-50k dataset can be found in <ref>. We observe that for k>1 RetroGFN consistently outperforms all the models. The absolute and relative advantage of RetroGFN over the second-best model on top-k round-trip increases with k, indicating that the model can return a large set of diverse and feasible reactions. Note that the forward model used during the training of RetroGFN was trained on a different data split than the one used for evaluation. In <Ref>, we can find standard top-k accuracy results. Our method performs competitively with state-of-the-art single-step retrosynthesis models, especially for larger values of k, which is arguably more important for retrosynthesis search than k=1. We observe that for k > 1, our model consistently outperforms other models, producing plenty of feasible reactions. The absolute and relative advantage of RetroGFN over the second-best model on top-k FTC increases with k: from 3.6%p and 5.9% for k=3 to 9%p and 30.8% for k=50. The good results of RetroGFN on standard metrics and its exceptional performance on FTC are evidence that one can greatly improve the results on FTC without sacrificing the performance on standard metrics. Interestingly, the Pearson correlation between top-k accuracy and top-k FTC for k=1 seems relatively high (corr=0.6, p-value=0.12), but it becomes insignificant for k > 1, indicating that the standard accuracy metric and FTC are complementary (one can greatly improve upon the FTC without improving on standard metrics). §.§ Generalization Results on USPTO-MIT We evaluated the models trained on USPTO-50k further on the USPTO-MIT dataset to assess their generalization properties (<Ref> and <ref>).
The evaluation of both standard and round-trip accuracy metrics echoes the results on USPTO-50k: RootAligned is the best on top-k accuracy, while our model achieves SOTA results on round-trip metrics. As in the USPTO-50k case, the absolute and relative advantage of RetroGFN over the second-best model on top-k round-trip increases with k. §.§ Leveraging the Forward Model In <ref>, we study a simple model-agnostic way of leveraging Chemformer-Train to maximize the results of the round-trip accuracy metric. While this approach significantly improves the round-trip accuracy results, it drastically decreases the standard top-k accuracy, especially for larger values of k. We leave the development of other methods of incorporating the Chemformer-Train model into the training pipeline for future work. § CONCLUSIONS In this paper, we provided empirical arguments for the importance of reporting the round-trip accuracy in single-step retrosynthesis model evaluation. Leveraging the GFlowNet framework, which is designed for tasks where plenty of sensible solutions are desired, we developed a RetroGFN model that achieves competitive results on top-k accuracy and performs outstandingly on the top-k round-trip accuracy. We discuss the limitations of the paper in <Ref>. § ACKNOWLEDGEMENTS The research of P. Gaiński and M. Śmieja was supported by the National Science Centre (Poland), grant no. 2022/45/B/ST6/01117. The research of J. Tabor was supported by the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund in the POIR.04.04.00-00-14DE/18-00 project carried out within the Team-Net program. The research of M. Koziarski was supported by funding from CQDM Fonds d'Accélération des Collaborations en Santé (FACS) / Acuité Québec and Genentech. We gratefully acknowledge Poland's high-performance Infrastructure PLGrid (ACK Cyfronet Athena, HPC) for providing computer facilities and support within computational grant no PLG/2023/016550. For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. § RETROGFN DETAILS All neural networks in RetroGFN used the same hidden dimension h=200. To obtain initial node and edge features for products, we used the featurization from <cit.> implemented in the DGL library <cit.>. For the reactant patterns, we used the same edge featurization and a custom node featurization that accounted for atom type, degree, aromaticity, whether the atom was mapped in the original template, the relative charge difference between product and reactant atom in the original template, and the analogous implicit hydrogen difference. The node features for both products and reactant patterns were enriched with a random walk positional encoding <cit.> of size n_random_walk=16. The product node encoder GNN_1 consists of num_layer_1=4 layers of the MPNN convolution <cit.> and one Transformer layer with num_heads=8. The reactant pattern encoder GNN_2 differs only in the number of layers, num_layer_2=3. The multi-layer perceptrons MLP_1, MLP_2, and MLP_3 had one hidden layer (with hidden dimension h) and used the GeLU activation function. During training, we used a combination of three sampling methods: 1) standard exploratory sampling from the forward policy P_F with some ϵ probability of taking random actions, 2) backward sampling from a replay buffer <cit.>, and 3) backward sampling from the dataset D.
Backward sampling starts with a terminal state and samples the trajectory in the backward direction using the backward policy. During training, the probability of taking a random action in the forward policy was set to ϵ=0.05, the number of sampled forward trajectories in the batch was n_forward=16, and the analogous numbers for backward dataset trajectories and backward replay buffer trajectories were n_dataset=96 and n_replay=16. The model was trained with the Adam optimizer <cit.> with a learning rate lr=0.0005 (with other parameters set to the default values in the torch implementation) for n_iterations=25000 iterations. In the evaluation, the forward policy temperature was set to α=0.7. During inference, we sampled K · N trajectories to accurately estimate the reaction probability. For USPTO-50k, we set K=20, while for USPTO-MIT, due to limited computational resources, we set K=10. All the hyperparameters were chosen manually based on the top-k accuracy and round-trip accuracy estimated on the USPTO-50k validation split. § GENERALIZATION OF ROUND-TRIP ACCURACY We propose a generalization of round-trip accuracy that allows the use of a wider class of machine-learning models to assess the reaction feasibility. We call this metric Feasible Thresholded Count (FTC). For a single product, the top-k FTC value denotes the percentage of feasible reactions among the top-k reactions returned by the model. The feasibility is estimated with an auxiliary model described further in this section. The exact formula for top-k FTC calculated on a product p and retrosynthesis model f is given by: F_FTC(f, p, k) = (1/k) ∑_{i=1}^{k} 1[RFM(f(p)_i) ≥ t], where RFM(f(p)_i) ∈ [0, 1] is the output of the reaction feasibility model for the i-th reaction proposed by f, and t is a feasibility threshold given by the user. We assume that RFM(x) = 1 for any reaction x ∈ D. We report the top-k FTC for the entire dataset D, which can be written as FTC(f, k) = (1/n) ∑_{i=1}^{n} F_FTC(f, p_i, k). §.§ Reaction Feasibility Model (RFM) The Reaction Feasibility Model (RFM) is a model that takes a reaction x as an input and outputs its feasibility, i.e. the probability that the reaction is feasible: RFM(x) ∈ [0, 1]. In this paper, we develop an RFM baseline that can be used as a benchmark in future work. Architecture. Our RFM implementation consists of two GINE <cit.> Graph Neural Networks (GNN) with a Transformer <cit.> layer and attention pooling at the top that create product and reactant embeddings, which are then concatenated and fed into the final prediction layer. Checkpoints for USPTO-50k. To train the model, we augmented the USPTO-50k dataset with negative (non-feasible) reactions using two methods: 1) application of existing forward templates to obtain a novel product from existing reactants, 2) swapping a product in the reaction with another product that is similar to the original one in terms of Tanimoto similarity. Such an approach ensured that the generated negative reactions are not trivially unfeasible (they use an existing template and/or the product is not strikingly different from the reactants), but still are very unlikely to occur in reality (the original reactants were reported to return a different product). We obtained a reaction feasibility dataset with a negative-to-positive ratio of 5:1. We trained two distinct checkpoints of feasibility models: RFM-Train-50k and RFM-Eval-50k. RFM-Train was trained only on the train split of the reaction feasibility dataset and was then used to calculate the reward in the RetroGFN during the training.
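As an illustration of the FTC metric defined above, the sketch below computes it for a toy set of predictions, using a stub feasibility model and the convention RFM(x) = 1 for reactions present in the dataset. The function names, threshold value, and data layout are assumptions made for this sketch, not part of the paper.

```python
def ftc(model_outputs, dataset, feasibility_fn, k, threshold=0.5):
    """
    Top-k Feasible Thresholded Count:
        FTC(f, k) = (1/n) * sum_p (1/k) * sum_{i<=k} 1[RFM(f(p)_i) >= t]
    model_outputs:  dict product -> ranked list of reactant sets (each a tuple of SMILES)
    dataset:        list of (reactant_set, product) ground-truth pairs; reactions present
                    in the dataset are treated as RFM(x) = 1
    feasibility_fn: callable (reactant_set, product) -> score in [0, 1]
    """
    known = {(tuple(sorted(r)), p) for r, p in dataset}
    products = [p for _, p in dataset]
    total = 0.0
    for p in products:
        preds = model_outputs.get(p, [])[:k]
        feasible = 0
        for r in preds:
            key = (tuple(sorted(r)), p)
            score = 1.0 if key in known else feasibility_fn(r, p)
            feasible += int(score >= threshold)
        total += feasible / k
    return total / len(products)

# Toy usage with a stub feasibility model (illustration only)
dataset = [(("CCO", "CC(=O)O"), "CCOC(C)=O")]
model_outputs = {"CCOC(C)=O": [("CC(=O)O", "CCO"), ("CCO", "CC(=O)Cl")]}
stub_rfm = lambda reactants, product: 0.8           # pretend every unseen reaction scores 0.8
print(ftc(model_outputs, dataset, stub_rfm, k=2))   # -> 1.0 with threshold 0.5
```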
§.§ Experiments We trained the RetroGFN using the RFM-Train model as a feasibility proxy and compared it on top-k accuracy and our FTC metric. We used the same hyperparameters as in <ref>, but with n_dataset=80, n_replay=16, n_forward=32 and β=12. The results (Tables <ref>, <ref>, <ref> and <ref>) mimic the ones from the main paper: our RetroGFN outperforms the baselines on the FTC metric while obtaining competitive results on the standard top-k accuracy. The experiments show that our RetroGFN can leverage any machine-learning feasibility proxy. We believe that training a reliable and powerful feasibility proxy is a promising direction for future work. § ABLATIONS In this section, we study a simple model-agnostic way of leveraging the Chemformer-Train to maximize the results of the round-trip accuracy metric. The idea is to filter out, during the evaluation, the results that are not backtranslated by the Chemformer-Train model. Tables <ref> and <ref> show that such an approach significantly improves the round-trip accuracy results, but at the cost of a drastic decrease in standard top-k accuracy, especially for larger values of k. § LIMITATIONS AND DISCUSSION This section briefly discusses the limitations of the paper. §.§ Round-trip Accuracy The main limitation of the top-k round-trip accuracy is that it relies on the forward reaction prediction model, which suffers from both false negative and false positive errors. However, we believe that there is an inherent epistemic uncertainty within the notion of feasibility (we cannot screen all the reactions) and any sensible retrosynthesis metric will have some portion of false negatives (it will not take all feasible reactions into account). In comparison to top-k accuracy, our round-trip accuracy has a strictly lower number of false negatives, while keeping false positives at a decent level. We believe that the round-trip accuracy will benefit from further improvements of the forward reaction prediction model, and we leave this for future work. §.§ RetroGFN Top-k Accuracy. The main limitation of our RetroGFN method is its results on top-k accuracy for k < 5. At first glance, it looks like a trade-off necessary to achieve excellent results on the round-trip accuracy. We argue that it may be caused by three things: 1) other hyperparameters of the model are not optimal for top-k accuracy, 2) the GFlowNet framework struggles with a spiky reward function, and 3) the parametrization of the composition process is sub-optimal. It is possible that further refinements of the method could improve the results. Leveraging Chemformer-Train. The fact that RetroGFN leverages the Chemformer-Train checkpoint can be seen as an unfair advantage because a similar Chemformer-Eval model is used in the round-trip accuracy computation. However, we think that fairness comes from the fact that all models use the same data splits for training or evaluation. The models differ in the way they learn from the training data, and leveraging the Chemformer-Train is yet another way of learning. It does not inject any new knowledge that cannot be extracted from the training data. Once the round-trip accuracy metric is established, it becomes reasonable to optimize it using Chemformer-Train. Moreover, we believe that Chemformer-Eval and Chemformer-Train are expected to be similar because they have similar goals: 1) to extract as much information from the train and test split as possible, and 2) to extract as much information from the train split as possible. It is sensible then that they share architecture.
The difference should come from the data split used for training. § COMPUTATIONAL RESOURCES We ran all the experiments on Nvidia V100 and A100 GPUs. The training of our model takes no more than 48h per checkpoint. When experimenting with the architecture and different feasibility proxy models, we trained no more than 100 checkpoints. For all the baselines, we used already trained checkpoints and only evaluated them on USPTO-50k and USPTO-MIT. The evaluation time depends on the model, but in total, it took no more than 400 GPU hours. It gives the upper bound of 5200 GPU hours for the total experimenting costs.
http://arxiv.org/abs/2406.19098v1
20240627112654
Double Mpemba effect in the cooling of trapped colloids
[ "Isha Malhotra", "Hartmut Löwen" ]
cond-mat.soft
[ "cond-mat.soft" ]
AIP/123-QED Isha.Malhotra@hhu.de Institut für Theoretische Physik II: Weiche Materie, Heinrich-Heine-Universität Düsseldorf, 40225 Düsseldorf, Germany § ABSTRACT The Mpemba effect describes the phenomenon that a system at a hot initial temperature cools faster than at an initial warm temperature in the same environment. Such an anomalous cooling has recently been predicted and realized for trapped colloids. Here, we investigate the freezing behavior of a passive colloidal particle by employing numerical Brownian dynamics simulations and theoretical calculations with a model that can be directly tested in experiments. During the cooling process, the colloidal particle exhibits multiple non-monotonic regimes in cooling rates, with the cooling time decreasing twice as a function of the initial temperature—an unexpected phenomenon we refer to as the Double Mpemba effect. Additionally, we demonstrate that both the Mpemba and Double Mpemba effects can be predicted by various machine learning methods, which expedite the analysis of complex, computationally intensive systems. Double Mpemba effect in the cooling of trapped colloids Hartmut Löwen July 1, 2024 ======================================================= § INTRODUCTION The Mpemba effect challenges conventional understanding by proposing that hot water can cool and freeze faster than its cooler counterpart, contrary to intuitive expectations <cit.>. Despite extensive experimental investigations into this phenomenon in water, a consensus regarding its underlying cause remains elusive <cit.>. Recent research advances have demonstrated that the Mpemba effect is not limited to the freezing of water but occurs in a variety of contexts. This phenomenon has been identified in granular gases <cit.>, inertial suspensions <cit.>, Markovian models <cit.>, optical resonators <cit.>, spin glasses <cit.> and quantum systems<cit.>. Notably, it has also been observed in colloidal particle systems undergoing rapid thermal quenching <cit.>. In its simplest form, single particles are confined within one-dimensional asymmetric double-well potential, replicating the liquid and frozen states of water. The synthesis of experimental findings and theoretical insights, unravel the mechanisms driving this intriguing effect <cit.>, thereby advancing our comprehension of its fundamental principles. In this study, we examine the cooling process of a trapped colloid within a potential featuring two repulsive walls shown in Fig. <ref>b and discover that it exhibits a pronounced Mpemba effect, occurring not just once but twice if the initial temperature is varied (Fig. <ref>a and <ref>c) a phenomenon which we report as Double Mpemba effect. Furthermore, we explore how imposed bath temperatures influence the type of Mpemba -normal, or Double - that the system exhibits. We have generalized a simple theoretical framework proposed by Kumar et al.<cit.> that explains the observations of numerical simulations and quantitatively agrees with the analysis based on the eigenfunction expansion of the Fokker-Planck equation <cit.>. Furthermore, traditional experimental and computational approaches to studying the Mpemba effect often face challenges due to the complexity and variability of the parameters involved. To overcome, these challenges, we propose a novel approach that leverages theoretical modeling and machine learning <cit.> to predict the colloidal Mpemba effect with high accuracy. 
To illustrate the Mpemba effect, imagine two systems with temperatures ranging from warm to hot. Typically, when these systems are cooled to a set cold bath temperature, we would expect that the hotter the system, the longer it would take to cool. However, the Mpemba effect occurs when the hot system cools faster than the warm one. In the case of a passive colloid in an asymmetrical potential, this happens because the hot particle has enough residual energy to overcome the barrier and quickly settles into the cold state. In contrast, a warm particle, with less residual energy, takes longer to cross the barrier. We show the existence of the Double Mpemba effect and that the key factors influencing the Mpemba effect are not just the residual energy but also the initial state of the system and the final bath temperature. This finding broadens our understanding of the Mpemba effect and highlights the complexity of cooling dynamics in these systems. § MODEL AND SIMULATION TECHNIQUE We explore the process of cooling for a Brownian colloidal particle confined within a double well potential through numerical simulations. The symmetry of the double well potential is broken either by tilting the potential or by placing it asymmetrically in a domain (see Fig. 1b). The motion of the Brownian particle, experiencing fluctuations at temperature T and undergoing overdamped motion, is described in one spatial dimension by the equation: dx/dt = -1/γ∂_x U(x) + η(t) where η(t) represents Gaussian white noise with zero mean and variance <η(t)η(t^')> = 2 D_T δ(t-t^'). Here, the noise strength corresponds to the translational diffusion constant D_T of the particle, which is determined by the temperature T, given by the Stokes-Einstein relation D_T = k_B T/γ, where k_B denotes the Boltzmann constant and γ represents the friction coefficient. The particle is subjected to an external double well potential similar to the one in <cit.>, defined as follows:
U(x) =
  -F_0 x                          if x < x_min,
  F_1 [(1-x^2)^2 - 0.5 x]         if x_min < x < x_max,
  F_0 x                           if x > x_max.
The components in Eq. <ref> that scale with F_0 signify the presence of repulsive barriers positioned at x_min and x_max, such that the forces are constant beyond x_min and x_max, while the component proportional to F_1 describes an asymmetric potential featuring two minima at x_a and x_b of varying heights and a maximum at x^*. The length of the confining box, denoted as ℓ = |x_max - x_min|, serves as a convenient unit of length. When this is combined with the translational diffusion constant, it yields a natural time scale expressed as τ_D = ℓ^2/D_T. Throughout this paper, the temperatures are defined in units of F_0ℓ. To gain quantitative insight into the relaxation process, we quantify the distance between the target equilibrium distribution π_bath(x) and the probability distribution P(x, t) of a particle generated from Eq. <ref> during the cooling process <cit.>. To construct this distance measure, we discretize the spatial components of both π_bath(x) and P(x, t) into N grid points, resulting in π_i, bath and P_i(t), respectively. The distance measure is then defined as: 𝒟(t) = 1/N∑_i=0^N |P_i(t) - π_i, bath|. A minimal numerical sketch of this simulation protocol is given below. In the following, we present a theoretical formula by generalizing the approach proposed by Kumar et al. for calculating the cooling time scale of particles starting from various initial temperatures T_initial.
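The Brownian dynamics scheme above can be sketched with a simple Euler-Maruyama integrator. The Python code below is an illustrative sketch only: the function names, default parameter values and box boundaries are assumptions, and the initial positions are meant to be drawn from the equilibrium distribution at T_initial before evolving the ensemble at the bath temperature.

import numpy as np

def force(x, F0=1.0, F1=1.0, x_min=-1.5, x_max=1.5):
    """-dU/dx for the piecewise double-well potential U(x), vectorized."""
    inner = F1 * (4.0 * x * (1.0 - x**2) + 0.5)    # central region x_min < x < x_max
    return np.where(x < x_min, F0, np.where(x > x_max, -F0, inner))

def evolve(x_init, T_bath, t_total, dt, gamma=1.0, kB=1.0, rng=None):
    """Euler-Maruyama integration of dx/dt = -dU/dx / gamma + eta(t) at T_bath."""
    rng = np.random.default_rng() if rng is None else rng
    D = kB * T_bath / gamma                         # Stokes-Einstein diffusion constant
    x = np.array(x_init, dtype=float)               # ensemble of independent particles
    traj = []
    for _ in range(int(t_total / dt)):
        x = x + force(x) / gamma * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
        traj.append(x.copy())
    return np.array(traj)

def distance_measure(positions, pi_bath, bins):
    """D(t) = (1/N) sum_i |P_i(t) - pi_i,bath| on a common grid."""
    P, _ = np.histogram(positions, bins=bins)
    P = P / P.sum()
    return np.abs(P - pi_bath).mean()

Here pi_bath is the target Boltzmann histogram on the same bins, normalized to unit sum, and D(t) is monitored until it decays to the noise level, as described in the text.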
The occupation ratios/probabilities N_a(T) and N_b(T), which indicate the probability of a particle in the left-hand domain (-∞, x^*) and right-hand domain (x^*, ∞) respectively, at a temperature T with β=1/k_B T in equilibrium is given as: N_a(T) = ∫_-∞^x^*exp(-β U_a(x)) dx /∫_-∞^∞exp(-β U_a(x)) dx N_b(T) = 1- N_a(T) The time scale for cooling is approximately given as <cit.>: t_c ≈τ_D |N_a(T_initial) - N_a(T_bath)| × exp(Δ E_i/k_B T_bath) The Arrhenius-like exponential factor accounts for the diffusion over an energy barrier that a particle originally in the potential hole at x_a, will escape to x_b crossing the barrier at x^* (see Fig. <ref>b)<cit.>. The expressions Δ E_a = U(x^*) - U(x_a) and Δ E_b = U(x^*) - U(x_b) define the energy barriers for the potential minima at x_a and x_b, respectively. Here, Δ E_i is equal to Δ E_a if N_a(T_initial) > N_a(T_bath), and it is equal to Δ E_b otherwise. § RESULTS In Fig. <ref>a, we present the cooling curve calculated from the theory as a function of different initial temperatures. The cooling time t_c has a double minima, indicating the presence of the Double Mpemba effect for our chosen parameters. Numerical simulations further confirm theoretical predictions where we calculate the distance measure 𝒟(t) as defined in Eq. <ref> in Figs. 2b and 2c. From this measure, we can extract a cooling time t_c^sim, defined as the time at which 𝒟(t) has decayed to zero or, in our case, to the noise level. We show that particles at temperatures T_2 and T_4 cool very quickly, while particles at temperatures T_1 and T_3 take longer to relax, fully consistent with the theoretical calculations. To understand the effect of different bath temperatures, we present the particle distribution and calculate the normalized cooling time from the theoretical model at various bath temperatures, as shown in Fig. <ref>a and Fig. <ref>b, respectively. We observe that at T_bath = 1.7 × 10 ^-4 F_0 ℓ, the system exhibits the normal Mpemba effect, whereas at T_bath = 1.7 × 10 ^-3 F_0 ℓ, a strong Mpemba effect is observed. At T_bath = 3.4 × 10 ^-4 F_0 ℓ, the cooling time shows double minima, indicating the presence of the Double Mpemba effect. To validate the theoretical model and numerical simulations, we employ a recent approach that relates the Mpemba effect to an eigenvalue expansion <cit.>. The probability density p(x,t) can be represented as an infinite sum of eigenfunctions of the Fokker-Planck equation, which governs its evolution. The theory subsequently predicts that the density function is primarily influenced by the first two terms of the infinite series at long times. p(x,t) ≈π(x;T_bath) + a_2(T) v_2(x) e^-λ_2 t where the coefficients a_2(T) is a real number that depends on the initial temperature and the potential energy. This approach shows that the cooling time at different initial temperatures is proportional to the second eigenvalue coefficient |a_2(T)| of the Fokker-Planck equation. In Fig. <ref>c, we display the normalized |a_2(T)| at different bath temperatures and demonstrate that the cooling times in Fig. <ref>b quantitatively agree with the values of |a_2(T)|. Finally, in Fig. <ref>d, we illustrate the cooling time plots for different types of Mpemba effects. To understand the origin of different Mpemba effects, we examine the role of the first term defined in Eq. <ref>. 
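Since the occupation ratio and the Arrhenius factor drive the discussion that follows, a small numerical sketch of these two quantities is given here. The integration window, grid size, and default parameter values are illustrative assumptions, and U(x) is the piecewise potential defined in the previous section.

import numpy as np

def U(x, F0=1.0, F1=1.0, x_min=-1.5, x_max=1.5):
    """Piecewise double-well potential defined in the previous section."""
    inner = F1 * ((1.0 - x**2) ** 2 - 0.5 * x)
    return np.where(x < x_min, -F0 * x, np.where(x > x_max, F0 * x, inner))

def occupation_left(T, x_star, x_lo=-25.0, x_hi=25.0, n=20001, kB=1.0):
    """N_a(T): Boltzmann weight of the left domain (-inf, x*], truncated to [x_lo, x_hi]."""
    x = np.linspace(x_lo, x_hi, n)
    u = U(x)
    w = np.exp(-(u - u.min()) / (kB * T))           # shift by u.min() for numerical stability
    left = x <= x_star
    return np.trapz(w[left], x[left]) / np.trapz(w, x)

def cooling_time(T_init, T_bath, x_star, x_a, x_b, tau_D=1.0, kB=1.0):
    """Arrhenius-like estimate t_c ~ tau_D |N_a(T_init) - N_a(T_bath)| exp(dE_i / (kB T_bath))."""
    Na_i = occupation_left(T_init, x_star)
    Na_b = occupation_left(T_bath, x_star)
    dE_a = float(U(x_star) - U(x_a))                # barrier seen from the well at x_a
    dE_b = float(U(x_star) - U(x_b))                # barrier seen from the well at x_b
    dE = dE_a if Na_i > Na_b else dE_b
    return tau_D * abs(Na_i - Na_b) * np.exp(dE / (kB * T_bath))

Scanning T_init at fixed T_bath with cooling_time is how the theoretical cooling-time curves discussed below can be obtained.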
In cases where the initial occupation probability matches the occupation probability at the final bath temperature, the particle relaxes very quickly as it does not need to cross the barrier to hop from the potential well at x_a to x_b. In Fig. <ref>, we show the occupation probability N_b(T), which indicates the likelihood of a particle being in the right-hand domain (x^*, ∞) as a function of the initial temperature T_initial. If, for a given T_bath, N_b(T) = N_b(T_bath), the particle immediately relaxes to the cold distribution, resulting in a pronounced Mpemba effect. Conversely, if N_b(T) ≠ N_b(T_bath), the particle must overcome the potential barrier to reach the equilibrium distribution at T_bath. Since crossing this barrier takes a considerable amount of time, the particle relaxes slowly. In Fig. <ref>, we observe that N_b(T) decreases monotonically with increasing temperature for the given parameters as indicated in the caption. This implies that hotter particles will take longer to relax to the cold distribution compared to warmer particles, indicating the absence of the Mpemba effect. Conversely, the inset displays N_b(T) with non-monotonic behavior for a set of parameters mentioned in the caption, suggesting the presence of Mpemba, Double Mpemba, and strong Mpemba effects at A, B, and C, respectively. The Mpemba effect is characterized by the non-monotonic behavior of N_b(T), and a strong Mpemba effect is observed when N_b(T) = N_b(T_bath). For the Double Mpemba effect, there are two temperatures where N_b(T) = N_b(T_bath), indicating a rapid transition to the cold distribution twice during the cooling process. Finally, we employ machine learning (ML) to study the Mpemba effect without relying on computationally intensive calculations of eigenvectors or cooling times across varying initial temperatures. Utilizing machine learning algorithms, we can effectively classify and predict different Mpemba effect types solely based on observed data patterns. Initially, we establish a robust training dataset using the previously described theoretical model, which includes parameters such as the potential and the bath temperature. This dataset is structured for subsequent analysis as follows:
{ x_min, x_max, T_bath, F_0, F_1 } →
  0 if No Mpemba,
  1 if Mpemba,
  2 if Double Mpemba.
By systematically varying these parameters, we generate a robust dataset that encompasses a wide range of scenarios. This dataset is then utilized to train multiple machine learning models, including logistic regression (LR) <cit.>, decision trees (DT) <cit.>, random forests (RF) <cit.>, and K-nearest neighbors (K-NN) <cit.>. The performance of these models is evaluated using the F1 score <cit.>, a metric that balances precision and recall to provide a comprehensive measure of model accuracy. The detailed process is illustrated in Fig. <ref>a. Initially, the dataset is generated through theoretical models, computational simulations, or experiments; in the current work, we have used the theoretical model detailed previously. Subsequently, various machine learning models are built and their performances are measured. The best-performing model is then selected to predict the Mpemba effect under different conditions. In Fig. <ref>b, we present the F1 scores of different machine learning models in predicting the No Mpemba, Mpemba, and Double Mpemba effects. The random forest exhibits the highest F1 score among these models across all three Mpemba scenarios.
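A minimal version of this classification pipeline can be written with scikit-learn. In the sketch below the feature matrix and labels are random placeholders standing in for the dataset generated from the theoretical model, and the forest size and split ratio are illustrative choices rather than the values used in this work.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# X holds rows of (x_min, x_max, T_bath, F_0, F_1); y holds 0 (No Mpemba),
# 1 (Mpemba) or 2 (Double Mpemba) as assigned by the theoretical model.
# Here both are random placeholders standing in for the generated dataset.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 5))
y = rng.integers(0, 3, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)

# Per-class F1 scores for the three Mpemba categories.
print(f1_score(y_te, clf.predict(X_te), average=None))

Any of the other classifiers mentioned above (LR, DT, K-NN) can be swapped in for the random forest in the same pipeline.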
§ CONCLUSIONS We have investigated the influence of potential parameters and bath temperature on the manifestation of different types of Mpemba effects, demonstrating how these factors can fundamentally alter the relaxation process and lead to the Double Mpemba effect, characterized by a cooling time that exhibits two minima as a function of the initial temperature. Furthermore, we generalized a simple theoretical framework that quantitatively aligns with the analysis based on the eigenfunction expansion of the Fokker-Planck equation <cit.>. Additionally, we have integrated our theoretical model with advanced machine-learning techniques to enhance the predictability of this intriguing phenomenon. Future research could explore the application of our findings to other systems exhibiting the Mpemba effect. It would be particularly interesting to examine how varying bath temperatures and different types of potentials affect the Mpemba effect in systems such as active colloids and many-particle systems. This model can also be used to study the Mpemba effect in quantum systems <cit.>, offering insights into the relaxation dynamics and thermal behaviors of complex quantum systems. A promising avenue for future research is to deepen our investigation into the Kovacs effect using our model <cit.>, aiming to uncover its intricate dynamics and implications in diverse physical systems. Our results can be tested in real-space experiments of colloidal particles in an optical double-well potential <cit.>. To study the effect of different initial temperatures T_initial, the external potential has to be switched from one potential to another that is proportional to the initial one. § ACKNOWLEDGEMENTS I.M. acknowledges support from the Alexander von Humboldt Foundation. H.L. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG) within the project LO 418/29.
http://arxiv.org/abs/2406.18451v1
20240626160035
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
[ "Jonas Ngnawé", "Sabyasachi Sahoo", "Yann Pequignot", "Frédéric Precioso", "Christian Gagné" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
§ ABSTRACT Despite extensive research on adversarial training strategies to improve robustness, the decisions of even the most robust deep learning models can still be quite sensitive to imperceptible perturbations, creating serious risks when deploying them for high-stakes real-world applications. While detecting such cases may be critical, evaluating a model's vulnerability at a per-instance level using adversarial attacks is computationally too intensive and unsuitable for real-time deployment scenarios. The input space margin is the exact score for detecting non-robust samples, but it is intractable for deep neural networks. This paper introduces the concept of margin consistency – a property that links the input space margins and the logit margins in robust models – for efficient detection of vulnerable samples. First, we establish that margin consistency is a necessary and sufficient condition to use a model's logit margin as a score for identifying non-robust samples. Next, through comprehensive empirical analysis of various robustly trained models on the CIFAR10 and CIFAR100 datasets, we show that they exhibit strong margin consistency, with a strong correlation between their input space margins and the logit margins. Then, we show that we can effectively use the logit margin to confidently detect brittle decisions with such models and accurately estimate robust accuracy on an arbitrarily large test set by estimating the input margins only on a small subset. Finally, we address cases where the model is not sufficiently margin-consistent by learning a pseudo-margin from the feature representation. Our findings highlight the potential of leveraging deep representations to efficiently assess adversarial vulnerability in deployment scenarios. § INTRODUCTION Deep neural networks are known to be vulnerable to adversarial perturbations, visually insignificant changes in the input resulting in the so-called adversarial examples that alter the model's prediction <cit.>. They constitute actual threats in real-world scenarios <cit.>, jeopardizing their deployment in sensitive and safety-critical systems such as autonomous driving, aeronautics, and health care. Research in the field has been intense and has produced various adversarial training strategies to defend against the vulnerability to adversarial perturbations with bounded ℓ_p norm (e.g., p=2, p=∞) through augmentation, regularization, and detection <cit.>, to cite a few. The empirical robustness (adversarial accuracy) of these adversarially trained models is still far behind their high performance in terms of accuracy. It is typically estimated by assessing the vulnerability of samples of a given test set using adversarial attacks <cit.> or an ensemble of attacks such as the standard AutoAttack <cit.>. The objective of that evaluation is to determine if, for a given normal sample, an adversarial instance exists within a given ϵ-ball around it. Yet, this robustness evaluation over a specific test set gives a global property of the model but not a local property specific to a single instance <cit.>.
Beyond that particular test set, obtaining this information for each new sample would typically involve rerunning adversarial attacks or performing a formal robustness verification, which in certain contexts may be computationally prohibitive in terms of resources and time. Indeed, in high-stakes deployment scenarios, knowing the vulnerability of single instances in real-time (i.e., their susceptibility to adversarial attacks) would be valuable, for example, to reduce risk, prioritize resources, or monitor operations. Current research lacks efficient and scalable ways to determine the vulnerability of a sample in a deployment context. The input space margin (i.e., the distance of the sample to the model's decision boundary in the input space), or input margin in short, can be used as a score to determine whether the sample is non-robust and, as such, likely to be vulnerable to adversarial attacks. Computing the exact input margin is intractable for deep neural networks <cit.>. These input margins may not be meaningful for fragile models with zero adversarial accuracies as all samples are vulnerable (close to the decision boundary). However, for robustly trained models, where only certain instances are vulnerable, the input margin is very useful in identifying the critical samples. Previous research studies have explored input margins of deep neural networks during training, focusing on their temporal evolution <cit.>, and their exploitation in improving adversarial robustness through instance-reweighting with approximations <cit.> and margin maximization <cit.>. However, to the best of our knowledge, no previous research studies the relationship between the input space margin and the logit margin of robustly trained deep classifiers in the context of vulnerability detection. In this paper, we investigate how the deep representation of robust models can provide information about the vulnerability of any single sample to adversarial attacks. We specifically address whether the logit margin as an approximation of the distance to the decision boundary in the feature space of the deep neural network (penultimate layer) can reliably serve as a proxy of the input margin for vulnerability detection. When this holds, we will refer to the model as being margin-consistent. The margin consistency property implies that the model can directly identify instances where its robustness may be compromised simply from a simple forward pass using the logit margin. Fig. <ref> illustrates this idea of margin consistency. The following contributions are presented in the paper: •=1em =0em =0em * We introduce the notion of margin consistency[Code available at: <https://github.com/ngnawejonas/margin-consistency>], a property to characterize robust models that allow using their logit margin as a proxy estimation for the input space margin in the context of non-robust sample detection. We prove that margin consistency is a necessary and sufficient condition to reliably use the logit margin for detecting non-robust samples. * Through an extensive empirical investigation of pre-trained models on CIFAR10 and CIFAR100 with various adversarial training strategies, mainly taken from RobustBench <cit.>, we provide evidence that almost all the investigated models display strong margin consistency, i.e., there is a strong correlation between the input margin and the logit margin. 
* We confirm experimentally that models with strong margin consistency perform well in detecting samples vulnerable to adversarial attacks based on their logit margin. In contrast, models with weaker margin consistency exhibit poorer performance. Leveraging margin consistency, we can also estimate the robust accuracy on an arbitrarily large test set by estimating the input margins only on a small subset. * For models where margin consistency does not hold, exhibiting a weak correlation between the input margin and the logit margin, we simulate margin consistency by learning to map the model's feature representation to a pseudo-margin with a better correlation through a simple learning scheme. § METHODOLOGY §.§ Notation and Preliminaries Notation We consider f_θ: ℝ^n→ℝ^C a deep neural network classifier with weights θ trained on a dataset of samples drawn iid from a distribution 𝒟 on a product space 𝒳×𝒴. Each sample x in the input space 𝒳⊂ℝ^n has a unique corresponding label y∈𝒴 = {1,2,…,C}. The prediction of x is given by ŷ(x) = argmax_j ∈𝒴 f_θ^j(x), where f_θ^j(x) is the j-th component of f_θ(x). We consider that a deep neural classifier is composed of a feature extractor h_ψ: 𝒳→ℝ^m and a linear head with C linear classifiers {w_j,b_j} such that f_θ^j(x) = w_j^⊤ h_ψ(x) + b_j. The predictive distribution p_θ(y|x) is obtained by taking the softmax of the output f_θ(x). A perturbed sample x' can be obtained by adding a perturbation δ to x within an ϵ-ball B_p(x,ϵ), an ℓ_p-norm ball of radius ϵ>0 centered at x; B_p(x,ϵ) := {x' : ‖x'-x‖_p = ‖δ‖_p < ϵ}. The distance ‖x'-x‖_p = ‖δ‖_p represents the perturbation size, defined as (∑_i=1^n|δ_i|^p)^1/p. In this paper, we will focus on the ℓ_∞ norm (‖x‖_∞ = max_i=1,…,n|x_i|), which is the most commonly used norm in the literature. Local robustness Different notions of local robustness exist in the literature <cit.>. In this paper, we equate local robustness to ℓ_p-robustness, a standard notion corresponding to the invariance of the decision within the ℓ_p ϵ-ball around the sample <cit.> and formalized in terms of ϵ-robustness. A model f is ϵ-robust at point x if for any x' ∈ B_p(x,ϵ) (x' in the ϵ-ball around x), we have ŷ(x')=ŷ(x). For a given robustness threshold ϵ, a data instance is said to be non-robust for the model if this model is not ϵ-robust on it. This means it is possible to construct an adversarial sample from that instance in its vicinity (i.e., within an ϵ-ball distance from the original instance). A sample vulnerable to adversarial attacks is necessarily non-robust. This notion of local robustness can be quantified in the worst case or, on average, inside the ϵ-ball. We focus here on the worst-case measurement given by the input margin, also referred to as the minimum distortion or the robust radius <cit.>. The input space margin is the distance to the decision boundary of f in the input space. It is the norm of a minimal perturbation required to change the model's decision at a test point x: d_in(x) = inf{‖δ‖_p : δ∈ℝ^n s.t. ŷ(x)≠ŷ(x + δ)} = sup{ϵ: f is ϵ-robust at x}. An instance x is non-robust for a robustness threshold ϵ if d_in(x) ≤ϵ. Evaluating Eq. <ref> for deep networks is known to be intractable in the general case. An upper bound approximation can be obtained using a point x_0', the closest adversarial counterpart of x in ℓ_p norm, by d̂_in(x) = ‖x - x_0'‖_p (see Fig. <ref>). The logit margin is the difference between the two largest logits. For a sample x classified as i = ŷ(x) = argmax_j ∈𝒴 f_θ^j(x), the logit margin is defined as (f_θ^i(x) - max_j, j≠ i f_θ^j(x)) > 0.
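In practice, d_out requires a single forward pass, and the upper bound d̂_in only requires the boundary point returned by a minimal-perturbation attack such as FAB. The PyTorch-style sketch below is illustrative: the function names are assumptions, and the adversarial points are assumed to be supplied by an external attack.

import torch

def logit_margin(model, x):
    """d_out(x): difference between the two largest logits (one forward pass)."""
    with torch.no_grad():
        logits = model(x)                       # shape (batch, C)
    top2 = logits.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]

def input_margin_upper_bound(x, x_adv, p=float("inf")):
    """d_in_hat(x) = ||x - x'_0||_p, with x'_0 the closest adversarial point
    found by a minimal-perturbation attack (e.g. FAB), supplied by the caller."""
    delta = (x - x_adv).flatten(start_dim=1)
    return torch.linalg.vector_norm(delta, ord=p, dim=1)

The first function gives the cheap deployment-time score; the second is only needed offline, when the attack-based estimate of the input margin is available.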
The logit margin is an approximation of the distance to the decision boundary of f_θ in the feature space. The decision boundary in the feature space around z=h_ψ(x), the feature representation of x, is composed of (C-1) linear decision boundaries (hyperplanes) DB_ij = {z' ∈ℝ^m: w_i^⊤z'+b_i = w_j^⊤z'+b_j} (j≠ i). The margin in the feature space is, therefore, the distance to the closest hyperplane, i.e. min_j, j≠ i d(z, DB_ij), where the distance d(z, DB_ij) from z to a hyperplane DB_ij has a closed-form expression: d(z, DB_ij) = inf{‖η‖_p : η∈ℝ^m s.t. z+η∈DB_ij} = (f_θ^i(x) - f_θ^j(x)) / ‖w_i - w_j‖_q, where ‖·‖_q is the dual norm of ‖·‖_p, with q = p/(p-1) for p>1 <cit.>. When the classifiers w_j are equidistant (‖w_i - w_j‖_q = ω > 0, ∀ i, j), the margin becomes: min_j, j≠ i (f_θ^i(x) - f_θ^j(x))/ω = 1/ω min_j, j≠ i (f_θ^i(x) - f_θ^j(x)) = 1/ω (f_θ^i(x) - max_j, j≠ i f_θ^j(x)), where the last factor in parentheses is the logit margin. Under the equidistance assumption, the logit margin is therefore proportional (equal up to a scaling factor) to the margin in the feature space. We will denote the logit margin of x by d_out(x): d_out(x) = f_θ^i(x) - max_j, j≠ i f_θ^j(x). §.§ Margin Consistency A model is margin-consistent if there is a monotonic relationship between the input space margin and the logit margin, i.e., d_in(x_1) ≤ d_in(x_2) ⇔ d_out(x_1) ≤ d_out(x_2), ∀x_1, x_2 ∈𝒳. A margin-consistent model preserves the relative position of samples to the decision boundary from the input space to the feature space. A sample further from (closer to) the decision boundary in the input space remains further from (closer to) the decision boundary in the feature space with respect to other samples, as illustrated in Fig. <ref>. We can evaluate margin consistency by computing the Kendall rank correlation (τ∈ [-1,1]) between the output scores and the input margins over a test set. The Kendall rank correlation tests the existence and strength of a monotonic relationship between two variables. It makes no assumption on the distribution of the variables and is robust to outliers <cit.>. Perfect margin consistency corresponds to an absolute value of 1, and 0 means the absence of margin consistency. §.§ Non-robust Samples Detection Non-robust detection can be defined as a score-based binary classification task where non-robust samples constitute the positive class, and the input margin d_in induces a perfect discriminative function g for that:
g(x) = 1_[d_in(x)≤ϵ](x) =
  1 if x is non-robust,
  0 if x is robust.
If a model is margin-consistent, its logit margin can also be a discriminative score to detect non-robust samples. The following theorem establishes that this is a necessary and sufficient condition. Therefore, the degree to which a model is margin-consistent should determine the discriminative power of the logit margin. If a model is margin-consistent, then for any robustness threshold ϵ, there exists a threshold λ for the logit margin d_out that perfectly separates non-robust samples from robust samples. Conversely, if for any robustness threshold ϵ, d_out admits a threshold λ that perfectly separates non-robust samples from robust samples, then the model is margin-consistent. Proof sketch. Fig. <ref> presents the intuition behind the proof of Theorem <ref>. For the first part of the theorem (see Fig. <ref>), if there is a monotonic relationship between d_in and d_out (margin consistency), any point x with d_in less than the threshold ϵ (non-robust) will also have d_out less than λ=d_out(x_0) (with d_in(x_0)=ϵ). For the second part (see Fig.
<ref>), if there are two points x_1 and x_2 with non-concordant d_in and d_out (no margin consistency), then for a threshold ϵ_0 between d_in(x_1) and d_in(x_2), they will both have different classes, but no threshold of d_out (horizontal line) can classify them both correctly. The complete proof of Theorem <ref> is deferred to Appendix <ref>. Common metrics for detection include <cit.>: the Area Under the Receiver Operating Curve (AUROC), which measures the ability of a model to distinguish between the positive and negative classes across all possible thresholds; the Area Under the Precision-Recall Curve (AUPR), which evaluates the trade-off between precision and recall and is less sensitive to imbalance between positive and negative classes; and the False Positive Rate (FPR) at a 95% True Positive Rate (TPR) (FPR@95), which is crucial in systems where missing positive cases can have serious consequences, such as minimizing the number of vulnerable samples missed. The AUROC and AUPR of a perfect classifier are 1, while they are 0.5 for a random classifier. §.§ Sample Efficient Robustness Evaluation Margin consistency enables empirical robustness evaluation over an arbitrarily large test set by only estimating the input margins of a small subset of test samples. For a robustness evaluation at threshold ϵ (e.g., ϵ=8/255 in ℓ_∞ norm on CIFAR10 and CIFAR100), we randomly sample a small subset of the large test set and determine the threshold λ for the logit margin that corresponds to ϵ. The threshold λ is then used to detect vulnerable samples. With the true labels of these test sets, we can determine the proportion of correct non-vulnerable samples, which is the standard robust accuracy, as described in Algorithm <ref>. A naive way to set the threshold λ at line 6 of Algorithm <ref> would be to set it to the detection threshold at α=95% TPR or α=90% TPR, but the logit margin threshold could vary from one model to another; therefore, a better way is to select it by tuning over values α ≥ 0.80 and keeping the one that gives the best approximation of the robust accuracy in terms of the absolute error on the small subset X_s. The same logic applies if we want to estimate the vulnerability of a large dataset without the labels. § EVALUATION §.§ Experimental Setup Datasets and models We investigate various pre-trained models on the CIFAR10 and CIFAR100 datasets. The majority of models were loaded from the RobustBench model zoo[https://github.com/RobustBench/robustbench] <cit.>, with a few more models that are ResNet-18 <cit.> models we trained on CIFAR10 with Standard Adversarial Training <cit.>, TRADES <cit.>, Logit Pairing (ALP and CLP, <cit.>), and MART <cit.>, using the experimental setup of <cit.>. Input margin estimation This is done using the FAB attack <cit.>, which is an attack that minimally perturbs the initial instance. <cit.> used it in their adversarial training strategy as a reliable way to compute the closest boundary point given enough iterations. We perform the untargeted FAB attack without restricting the distortion, to find the boundary for all the samples in the test set instead of constraining the perturbation inside a given ϵ-ball as when evaluating robustness. As a sanity check for the measured distances, we compare the ratio of correct samples x with estimated input margins greater than ϵ=8/255 with the robust accuracy in ℓ_∞ norm measured with AutoAttack <cit.> at ϵ=8/255 (a short sketch of this comparison, together with the threshold calibration of Algorithm <ref>, is given below).
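Both the sanity check and the subset-based calibration reduce to a few lines of array manipulation. The sketch below is illustrative (the array names are assumptions): d_in_hat holds the attack-estimated input margins, d_out the logit margins, and correct the per-sample correctness flags.

import numpy as np

def robust_accuracy_from_margins(d_in_hat, correct, eps=8.0 / 255.0):
    """Fraction of test samples that are correctly classified and have an
    estimated input margin larger than eps (to be compared with the
    AutoAttack robust accuracy at the same eps)."""
    d_in_hat = np.asarray(d_in_hat, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.mean(correct & (d_in_hat > eps)))

def calibrate_logit_threshold(d_out, d_in_hat, eps=8.0 / 255.0, tpr=0.95):
    """Pick the logit-margin threshold lambda reaching a given TPR on the
    non-robust (d_in_hat <= eps) samples of a small calibration subset."""
    non_robust = np.asarray(d_in_hat) <= eps
    scores = np.asarray(d_out)[non_robust]
    # lambda such that a fraction `tpr` of the non-robust samples satisfy d_out <= lambda
    return float(np.quantile(scores, tpr))

In the subset-based procedure, calibrate_logit_threshold would be run on the small labelled subset for several values of the TPR level, and the resulting lambda applied to the logit margins of the full test set.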
Both quantities estimate the same thing, with a mean and a maximum absolute difference over the models respectively of 1.3 and 6.1 for CIFAR10, and 0.48 and 0.75 on CIFAR100, which are reasonable (cf. Fig. <ref> in appendix <ref> for the comparison for all the models). The estimation of the input margins over the 10,000 test samples allows us to create, for a given threshold ϵ, a pool of vulnerable samples that can be successfully attacked at threshold ϵ and non-vulnerable samples that were not able to be attacked. Training and distance estimations were run on an NVIDIA Titan Xp GPU (1x). §.§ Results and Analysis Correlation analysis The results presented in Fig. <ref> show that the logit margin has a strong correlation (up to 0.86) with the input margin, which means that those models exhibit a degree of margin consistency. The plots are given with standard error for the y-axis values in each interval. However, we also observe that two models (i.e., DI0 <cit.> and XU80 <cit.> WideResNets on CIFAR10) have a weaker correlation. We show in Sec. <ref> that we can learn to map the feature representation of these models to a pseudo-margin that reflects the distance to the decision boundary in the input space. Vulnerable samples detection We present the results for the robustness threshold ϵ=8/255 in Table <ref>. As expected with the strong correlations, the performance on the non-robust detection task is excellent. We can note that the metrics are lower for the two models with low correlations, with particularly very high FPR@95. The performance remains quite good with different values of ϵ (cf. appendix <ref>). Sample Efficient Robustness Estimation We recover the robust accuracy of the investigated models evaluated with AutoAttack over the 10,000 test samples using only a small subset. Fig. <ref> shows the absolute error of the estimation for 500 samples. The estimations are still decent with 100 samples (cf. Fig. <ref> in appendix <ref>). Margin Consistency and Lipschitz Smoothness A neural network f is said to be L-Lipschitz if ‖f(x_1) - f(x_2)‖ ≤ L‖x_1 - x_2‖, ∀ x_1, x_2. Lipschitz smoothness is important for adversarial robustness because a small Lipschitz constant L guarantees that the network's output cannot change by more than L times the change in the input. There are strategies to directly constrain the Lipschitz constant to achieve 1-Lipschitz networks <cit.>. Empirical adversarial training strategies aim to achieve Lipschitz smoothness indirectly. Note, however, that Lipschitz continuity does not imply margin consistency: consider, for example, two points x_1 and x_2 with 0 < d_in(x_1) < d_in(x_2). While the L-Lipschitz condition implies that d_out(x_i) ≤ L d_in(x_i) for i=1,2, it is clearly possible to have d_out(x_2) < d_out(x_1), thus violating the margin consistency condition. Fig. <ref> and <ref> show that the strength of the correlation, i.e. the level of margin consistency, does not depend on the robust accuracy. Insight into when margin consistency may hold? We hypothesize that margin consistency can occur when the feature extractor h_ψ behaves locally as an isometry (preserving distances, up to a scaling factor κ), i.e., ‖h_ψ(x) - h_ψ(x')‖_p = κ‖x - x'‖_p. We can experimentally see that there is a high correlation between the input margin ‖x - x'‖ and the distance between the feature representations of x and x' (Fig. <ref>). Given an input sample x, by definition d_out(x) = ‖z - z'‖_p, where z = h_ψ(x) and z' is the orthogonal projection of z onto the boundary hyperplane.
The points z, z' and h_ψ(x') will form a right triangle so the side z-z'_p will directly correlate with side h_ψ(x)-h_ψ(x')_p. §.§ Learning a Pseudo-Margin For the two models that are weakly margin-consistent, we are proposing to directly learn a mapping that maps the feature representation of a sample to a pseudo-margin that reflects the relative position of the samples to the decision in the input space. We use a learning scheme similar to the one of <cit.>, with a small ad hoc neural network for learning the confidence of the instances (cf. Fig. <ref> in appendix <ref>). Given some samples with estimations of their input margins, the objective is to learn to map their feature representation to a pseudo-margin that correlates with the input margins. This learning task can be seen as a learning-to-rank problem. We use a simple learning-to-rank algorithm for that purpose, which is a pointwise regression approach <cit.> relying on the mean squared error as a surrogate loss. For the experiment, we used a similar architecture and training protocol as <cit.> with a fully connected network with five dense layers of 512 neurons, with ReLU activations for the hidden layers and a sigmoid activation at the output layer. We learn using 5000 examples sampled randomly from the training set, with 20% (1000 examples) held as a validation. Fig. <ref> and Table <ref> show the improved correlation on the learned score compared to the logit margin for both models. The plots are given with standard error for the y-axis values in each interval. The network has learned to recover the relative positions of the samples from the feature representation. § RELATED WORK Detection tasks in machine learning are found to be of three main types: •=1em =0em =0em * Adversarial Detection The goal of adversarial detection <cit.> is to discriminate adversarial samples from clean and noisy samples. An adversarial example is a malicious example found by adversarially attacking a sample; it has a different class while being close to the original sample. A vulnerable (non-robust) sample is a normal sample that admits an adversarial example close to it. The two detection tasks are very distinct. Adversarial detection is a defence mechanism like adversarial training; <cit.> has established that both tasks are equivalent problems with the same difficulty. * Out-of-Distribution (OOD) detection In OOD detection <cit.>, the objective is to detect instances that have a different label from the labels on which the model was trained on. For example, for a model trained on the CIFAR10 dataset, samples from the SVHN dataset are OOD samples for such a model. * Misclassification Detection (MisD) It consists in detecting if the classifier's prediction is incorrect. This is also referred to as Failure Detection <cit.> or Trustworthiness Detection <cit.>. MisD is often used for selective classification (classification with a reject option) <cit.> to abstain from predicting samples on which the model is likely to be wrong. A score for non-robust detection cannot tell if the sample is incorrect, as a vulnerable sample could be from any side of the decision boundary. Recent work by <cit.> shows that input margins can predict the generalization gap only in specific constrained directions that explain the variance of the training data but not in general. Formal robustness verification aims at certifying whether a given sample is ϵ-robust or if it is not an adversarial counter-example can be provided <cit.>. 
Some complete exact methods based on solving Satisfiability Modulo Theory problems <cit.> or Mixed-Integer Linear Programming <cit.> provide formal certification given enough time. However, in practice, they are tractable only up to 100,000 activations <cit.>. Incomplete but effective methods based on linear and convex relaxation methods and Branch-and-Bound methods <cit.> are faster but conservative, without guaranteed certifications even if given enough time. Scaling them to larger architectures such as WideResNets and large Transformers is still challenging even with GPU accelartion<cit.>. <cit.> converts the problem of finding the robust radius (input margin) as a local Lipschitz constant estimation problem. Computing the Lipschitz constant of Deep Nets is NP-hard <cit.> and <cit.> proved that there is no efficient algorithm to compute the local Lipschitz constant. The estimation provided by <cit.> requires random sampling and remains computationally expensive to obtain a good approximation. Vulnerability detection with margin-consistent models does not provide certificates but an empirical estimation of the robustness of a sample as evaluated by adversarial attacks. At scale, it can help filter the samples to undergo formal verification and a more thorough adversarial attack for resource prioritization. § LIMITATIONS AND PERSPECTIVES Vulnerability detection scope The scope of this work is ℓ_p robustness measured by the input space margin; the minimum distortion that changes the model's decision while this does not give a full view of the ℓ_p robustness. Samples may be at the same distance to the decision boundary and have unequal unsafe neighbourhoods given an average estimation over the ϵ-neighbourhood considered. The average estimation of local robustness for a given ϵ-neighborhood remains an open problem, so whether it is possible to extract other notions of robustness from the feature representation efficiently could be a potential avenue for further exploration. Attack-based verification The margin consistency property does not rely on attacks; however, its verification and the learning of a pseudo-margin with an attack-based estimation may not be possible if the model cannot be attacked on sufficient samples. The implicit assumption is that we can always successfully provide the closest point to the decision with a sufficient budget. This is a reasonable assumption since the studied models are not perfectly robust, and the empirical evidence so far with adaptive attacks is that no defence is foolproof, which justifies the need to detect the non-robust samples. In some cases, we might need to combine with an attack such as CW-attack <cit.> to find the closest adversarial sample. Influence of terminal phase of training The work of <cit.> shows that when deep neural network classifiers are trained beyond zero training error and beyond zero cross-entropy loss (aka terminal phase of training), they fall into a state known as neural collapse. Neural collapse is a state where the within-class variability of the feature representations collapses to their class means, the class means, and the classifiers become self-dual and converge to a specific geometric structure, an equiangular tight frame (ETF) simplex, and the network classifier converges to nearest train class center. This implies that we may lose the margin consistency property. 
While neural collapse predicts that all representations collapse on their class mean, in practice, perfect collapse is not quite achieved, and it is precisely the divergence of a representation from its class mean (or equivalently its classifier's class mean) that encodes the information we seek about the distance to the decision boundary in the input space. Exploring the impact of the neural collapse on margin consistency as models tend toward a collapsed state could provide valuable insights into generalization and adversarial robustness. § CONCLUSION This work addresses the question of efficiently estimating local robustness in the ℓ_p sense at a per-instance level in robust deep neural classifiers in deployment scenarios. We introduce margin consistency as a necessary and sufficient condition to use the logit margin of a deep classifier as a reliable proxy estimation of the input margin for detecting non-robust samples. Our investigation of various robustly trained models shows that they have strong margin consistency, which leads to a high performance of the logit margins in detecting vulnerable samples to adversarial attacks and estimating robust accuracy on arbitrarily large test sets using only a small subset. We also find that margin consistency does not always hold, with some models having a weak correlation between the input margin and the logit margin. In such cases, we show that it is possible to learn to map the feature representation to a better-correlated pseudo-margin that simulates the margin consistency and performs better on vulnerability detection. Finally, we present some limitations of this work, mainly the scope of robustness, the attack-based verification and the impact of neural collapse in terminal phases of training. Beyond its highly practical importance, we see this as a motivation to extend the analysis of robust models and the properties of their feature representations in the context of vulnerability detection. § ACKNOWLEDGEMENTS This work is supported by the https://deel.quebec/DEEL Project CRDPJ 537462-18 funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Consortium for Research and Innovation in Aerospace in Québec (CRIAQ), together with its industrial partners Thales Canada inc, Bell Textron Canada Limited, CAE inc and Bombardier inc.[<https://deel.quebec>] bibstyle § PROOF OF THEOREM <REF> If a model is margin-consistent, then for any robustness threshold ϵ, there exists a threshold λ for the logit margin d_out that perfectly separates non-robust samples and robust samples. Conversely, if for any robustness threshold ϵ, d_out admits a threshold λ that perfectly separates non-robust samples from robust samples, then the model is margin-consistent. Formally, for a finite sample S and non-negative values ϵ≥ 0, λ≥ 0, we define: A^S_ϵ:={x ∈ S: d_in(x) ≤ϵ} and B^S_λ:={x∈ S: d_out(x) ≤λ}. We say that d_out perfectly separates non-robust samples from robust samples if for any finite sample S⊆𝒳 and every ϵ≥ 0 there exists λ≥ 0 such that A^S_ϵ = B^S_λ. Necessity: Let's assume that the model is not margin-consistent, i.e., there exist two samples x_1 and x_2 such that d_out(x_1)≤ d_out(x_2) and d_in(x_1)>d_in(x_2). By taking S={x_1,x_2} and ϵ=d_in(x_2) we have that A^S_ϵ={x_2}. However for any λ≥ 0, if x_2∈ B^S_λ, then d_out(x_1)≤ d_out(x_2)≤λ and so x_1 ∈ B^S_λ. Therefore d_out does not perfectly separates non-robust samples from robust samples. Sufficiency: Let's assume that the model is margin-consistent. 
Let S be a finite sample and consider a threshold ϵ. Let x_0 be the element of the finite set A^S_ϵ with maximum d_in(x_0) and d_out(x_0), and set λ = d_out(x_0). Since the model is margin-consistent, we have, for x∈ S: x ∈ A^S_ϵ ⇔ d_in(x) ≤ ϵ ⇔ d_in(x) ≤ d_in(x_0) ⇔ d_out(x) ≤ d_out(x_0) (by margin consistency) ⇔ d_out(x) ≤ λ ⇔ x ∈ B^S_λ. This means we have A^S_ϵ = B^S_λ, which shows that d_out perfectly separates non-robust samples from robust samples. § SUPPORTING MATERIALS §.§ Input Margins Estimation Sanity Check One way to verify the reliability of our estimated input margins is to compare the robust accuracy measured by AutoAttack at ϵ=8/255 and the proportion of correctly classified test samples with estimated input margins greater than ϵ; both quantities should be approximately equal, which happens to be the case (see Fig. <ref>). §.§ Detection Performance at Different Values of ϵ We present in Fig. <ref> the performance of the detection for various values of the robustness threshold. We can see that the strong margin consistency allows the logit margin to be a good proxy for detection at various thresholds. Note that below ϵ=2/255 and beyond ϵ=16/255, the ratio of vulnerable points to non-vulnerable points becomes too imbalanced, with little to no positive instances beyond ϵ=32/255. §.§ Sample Efficient Robust Accuracy Estimation We plot the variation of the absolute error with subset size for the approximation of the AutoAttack robust accuracy by the estimation of Algorithm <ref>. Results are presented in Fig. <ref> and Fig. <ref> for CIFAR10 and CIFAR100, respectively. From 100 samples, the approximation is already good for some models. §.§ Pseudo-margin Learning Setup The architecture and learning setup for the pseudo-margin are inspired by <cit.>. A multilayer perceptron (Fig. <ref>) learns a pseudo-margin from the feature representations of the samples by minimizing the mean-squared error loss between the output pseudo-margin and an estimation of the input margin. §.§ Verification of Equidistance Assumption of the Linear Classifiers Eq. <ref> in Sec. <ref> shows that we can approximate the margin in the feature space by the logit margin if the classifiers w_j are equidistant, i.e. if ‖w_i - w_j‖ takes (approximately) the same value for all i, j ∈ {1,…,C} with i ≠ j. For each model, we computed the C(C-1)/2 possible values of the distances between pairs of classifiers (45 for CIFAR10 and 4950 for CIFAR100). We confirm this hypothesis for our investigated models in Fig. <ref> by plotting the boxplot of the distribution of values. For each model, the values vary only in a small range.
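The pairwise distances used in this check can be read directly off the weight matrix of the linear head. The sketch below is illustrative: it assumes a PyTorch nn.Linear head and uses the Euclidean norm by default, while the appropriate dual norm can be passed via norm_ord.

import itertools
import torch

def pairwise_classifier_distances(linear_head, norm_ord=2):
    """All C(C-1)/2 distances ||w_i - w_j|| between rows of the linear head."""
    W = linear_head.weight.detach()          # shape (C, feature_dim)
    return [torch.linalg.vector_norm(W[i] - W[j], ord=norm_ord).item()
            for i, j in itertools.combinations(range(W.shape[0]), 2)]

# Example with a random 10-class head (45 pairwise values, as for CIFAR10):
head = torch.nn.Linear(640, 10)
d = pairwise_classifier_distances(head)
print(len(d), min(d), max(d))

The spread of these values (e.g. visualized as a boxplot) is what indicates how well the equidistance assumption holds for a given model.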
http://arxiv.org/abs/2406.18636v1
20240626180000
The Pristine Inner Galaxy Survey (PIGS) X. Probing the early chemical evolution of the Sagittarius dwarf galaxy with carbon abundances
[ "Federico Sestito", "Anke Ardern-Arentsen", "Sara Vitali", "Martin Montelius", "Romain Lucchesi", "Kim A. Venn", "Nicolas F. Martin", "Julio F. Navarro", "Else Starkenburg" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
Probing the early chemical evolution of the Sagittarius dwarf galaxy with carbon abundances Department of Physics and Astronomy, University of Victoria, PO Box 3055, STN CSC, Victoria BC V8W 3P6, Canada sestitof@uvic.ca Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Instituto de Estudios Astrofísicos, Universidad Diego Portales, Av. Ejército Libertador 441, Santiago, Chile Millenium Nucleus ERIS Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands Dipartimento di Fisica e Astronomia, Università degli Studi di Firenze, Via G. Sansone 1, I-50019 Sesto Fiorentino, Italy Université de Strasbourg, CNRS, Observatoire astronomique de Strasbourg, UMR 7550, F-67000 Strasbourg, France Max-Planck-Institut fur Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands We aim to constrain the chemo-dynamical properties of the Sagittarius (Sgr) dwarf galaxy using carbon abundances. Especially at low metallicity, these reveal the early chemical evolution of a system, tracing the supernovae (SNe) that contributed and how much of their ejecta made it into the next stellar generation. Our sample from the Pristine Inner Galaxy Survey (PIGS) includes ∼ 350 metal-poor ([Fe/H] <-1.5) stars in the main body of Sgr with good quality spectroscopic observations. Our metal-poor Sgr population has a larger velocity dispersion than metal-rich Sgr from the literature, which could be explained by outside-in star formation, extreme Galactic tidal perturbations and/or the presence of a metal-rich disc/bar + a metal-poor halo. The average carbon abundance [C/Fe] in Sgr is similar to that of other classical dwarf galaxies (DGs) and consistently lower than in the Milky Way by ∼0.2-0.3 dex at low metallicity. The interstellar medium in DGs, including Sgr, may have retained yields from more energetic Population III and II supernovae (SNe), thereby reducing the average [C/Fe]. Additionally, SNe Ia, producing more Fe than C, would start to contribute at lower metallicity in DGs/Sgr than in the Galaxy. The presence of a [C/Fe] gradient for Sgr stars with ≳-2.0 (∼ 6.8× 10^-4 dex arcmin^-1) suggests that SNe Ia contributed in the system at those metallicities, especially in its inner regions. There is a low frequency of carbon-enhanced metal-poor (CEMP) stars in our Sgr sample. At higher metallicity/carbon abundance (mostly CEMP-s) this may be due to photometric selection effects, but those are less likely to affect CEMP-no stars. We propose that, given the lower average [C/Fe] in DGs, using the same CEMP definition ([C/Fe] >+0.7) as in the Galaxy under-predicts the number of CEMP stars in DGs, and for Sgr a cut at [C/Fe]∼ +0.35 may be more appropriate, which brings the frequency of CEMP stars in agreement with that in the Galaxy. The Pristine Inner Galaxy Survey (PIGS) X Federico Sestito1 Anke Ardern-Arentsen2 Sara Vitali3,4 Martin Montelius5 Romain Lucchesi6 Kim A. Venn1 Nicolas F. Martin7,8 Julio F. Navarro1 Else Starkenburg9 Received XX; accepted YY ==================================================================================================================================================================== § INTRODUCTION The Sagittarius (Sgr) dwarf galaxy <cit.>, located approximately 26.5 kpc away from us towards the inner Galactic regions <cit.>, experienced its first in-fall into the Milky Way (MW) about 5 Gyr ago <cit.>. 
As it is being tidally stripped by the MW, its core and two stellar streams are now visible in the Sky <cit.>, as well as various associated globular clusters <cit.>. Given its proximity, it is an ideal test-bed for galactic chemo-dynamical models. The star formation history (SFH) of Sgr is characterised by multiple star formation episodes, investigated with both high-resolution spectroscopy <cit.> and photometric techniques <cit.>. So far, studies have typically focussed on metal-rich and relatively young stars, given that they are the prevalent population. Further complicating the study of the oldest/metal-poor stars is the strong overlap in the colour-magnitude diagram between the Milky Way bulge population and stars in Sgr <cit.>, especially on the blue, metal-poor side of the red giant branch (RGB) of Sgr. However, the most metal-poor stars are key to understanding the early chemical evolution of Sgr. An efficient way to discover new members in dwarf galaxies is to use the exquisite Gaia <cit.> astrometry and photometry alone <cit.> or to couple it with metal-poor dedicated photometric surveys, e.g. the Pristine survey <cit.>, as done in the Pristine dwarf galaxy survey <cit.>. Along those lines, the Pristine Inner Galaxy Survey (PIGS) targets metal-poor stars towards the inner regions of the MW <cit.>, as well as the Sagittarius dwarf galaxy <cit.>. The latter work investigated the metallicity distribution of ∼50,000 Sgr candidate members as a function of their spatial location, and identified the largest sample of Sgr candidate members with ≤-2.0 (∼1200 stars). From PIGS, <cit.> followed-up with MIKE high-resolution spectroscopy 12 very metal-poor (VMP, ≤-2.0) Sgr members, the largest and most complete detailed chemical abundance analysis of the VMP Sgr component <cit.>. The authors interpreted the chemical pattern of the most metal-poor stars as the result of a variety of type II supernovae and asymptotic giant branch stars. A wide range of energetic supernovae and hypernovae with intermediate mass (10-70) are needed to account for the chemical abundances of the lighter elements up to the Fe-peak. The chemical trend of the heavier elements is interpreted as a mixture of yields from compact binary mergers and massive (up to ∼120) fast-rotating stars (up to ∼300). Investigating the origin of carbon in a given stellar population is crucial to understand various astrophysical topics, for example the types of supernovae contributing in a given system, nucleosynthesis in massive stars and binary interaction mechanisms <cit.>. At low metallicity, many stars are found to be carbon-enhanced. Populations of these so-called carbon-enhanced metal-poor (CEMP) stars, with [C/Fe] >+0.7, are powerful probes of the underlying stellar population and the star formation history. Some CEMP stars are thought to carry the imprint of the first generations of supernovae, these are called CEMP-no stars and have sub-solar Ba, [Ba/Fe] <0.0 <cit.>. It has been suggested that classical DGs have a lower CEMP-no fraction than the MW halo and ultra-faint dwarfs (UFDs) <cit.>. Other types of CEMP stars are typically the products of mass transfer from binary interaction with a former asymptotic giant branch (AGB) star companion. These are Ba-rich ([Ba/Fe] >+1.0) due to slow-process channels taking place in the AGB companion and are called CEMP-s stars <cit.>. The latter group is important to understand the properties of binary populations. 
In particular, their properties are instructive to understand the nucleosynthetic channels, convection and non-convective processes <cit.>; the interaction mechanisms, such as the physics of Roche-lobe over-flow and wind accretion <cit.>; and their influence on the measurement of the velocity dispersion in a system and its dynamical mass <cit.>, such as its dark matter content. From medium-resolution spectroscopy, metallicities and carbon abundances have been measured in only 11 VMP stars in Sgr <cit.>. In this work, we use the data release of the PIGS low/medium-resolution spectroscopic campaign <cit.> to select the largest sample of low-metallicity (≤-1.5) Sgr members (356 stars) with measured metallicity, [C/Fe], and radial velocity to date. The dataset and a discussion on the photometric selection effects due to the Pristine filter is reported in Section <ref>. The dynamical properties of the metal-rich and metal-poor populations in Sgr are outlined in Section <ref>. A comparison of the [C/Fe] abundances in Sgr with respect the other classical dwarf galaxies (DGs) and the MW halo and inner Galaxy is discussed in Section <ref>. We discuss the types and frequencies of CEMP stars in Sgr in Section <ref>, including a suggestion that the definition of CEMP might need revision in DGs. Conclusions are summarised in Section <ref>. § THE PRISTINE INNER GALAXY SURVEY (PIGS) PIGS targets the most metal-poor stars in the inner regions of the Milky Way <cit.>, using a metallicity-sensitive narrow CaHK filter mounted at CFHT (MegaCam). Among the photometric metal-poor candidates, ∼ 13 235 stars have been observed with the Anglo Australian Telescope (AAT) using the AAOmega+2dF spectrograph. We will refer to them as the PIGS/AAT sample, which is publicly available <cit.>. The AAT setup acquired spectra with low-resolution (R∼ 1800) in the blue and with medium-resolution (R∼ 11 000) around the calcium triplet. The analysis is described in detail in <cit.>, but, briefly, the two arms were fit simultaneously with the code[<http://github.com/callendeprieto/ferre>] <cit.> to obtain stellar parameters (effective temperature and surface gravity), metallicities, and carbon abundances. The radial velocities (RVs) were derived by cross-correlation of the calcium triplet spectra with synthetic templates. §.§ PIGS target selection from photometry Some of the PIGS/AAT fields overlap with the core of the Sagittarius dwarf galaxy and, in four fields, Sgr stars were specifically targeted. Two fields were observed in 2018 and served as a pilot program (, ), two additional fields with more Sgr candidates were observed in 2020 (, ). For the 2018 observations, Sgr stars were selected to be within a radius of 0.6 mas yr^-1 around proper motion μ_α = -2.7 mas yr^-1 and μ_δ = -1.35 mas yr^-1 and parallax - parallax_error < 0.05 mas. This was relaxed a little in 2020, to a radius of 1 mas yr^-1 around those proper motions and the parallax - parallax_error < 0.1 mas. In 2020, suspected variable stars were removed using the flux error and the number of observations <cit.>. Both selections were done using Gaia DR2 <cit.>. The photometric calibration of the PIGS CaHK photometry was slightly different when the targets were selected compared to the current, final photometric catalogue, but changes are not expected to be major for the Sgr fields. 
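To make the astrometric part of this pre-selection concrete, the short sketch below reproduces the 2018 and 2020 proper-motion and parallax cuts quoted above. It is only illustrative: the column names follow the Gaia archive convention (pmra, pmdec, parallax, parallax_error, assumed here to be numpy arrays in mas/yr and mas), μ_α is treated as Gaia's pmra, and the CaHK colour-colour cuts described next are not included.

import numpy as np

def sgr_astrometric_preselection(pmra, pmdec, parallax, parallax_error, year=2020):
    """Proper-motion + parallax pre-selection of Sgr candidates (sketch).

    2018 fields: within 0.6 mas/yr of (mu_alpha, mu_delta) = (-2.7, -1.35)
    and parallax - parallax_error < 0.05 mas; 2020 fields: radius 1.0 mas/yr
    and parallax - parallax_error < 0.1 mas (values quoted in the text).
    """
    pm_radius, plx_limit = (0.6, 0.05) if year == 2018 else (1.0, 0.10)
    pm_distance = np.hypot(pmra - (-2.7), pmdec - (-1.35))
    return (pm_distance < pm_radius) & ((parallax - parallax_error) < plx_limit)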
For the fields from 2018, Sgr candidates were selected using a horizontal line in (CaHK - G)_0 - 2.5(BP-RP)_0 to select the best ∼100 Sgr targets per field (and the rest of the AAT fibres were filled with inner Galaxy targets). Observed targets can be seen as black/small coloured points in the Pristine colour-colour diagrams in the left-hand panels of Figure <ref>, compared to all Sgr candidates in the fields in grey. A red cut at (BP-RP)_0 = 1.7 was also made. For the fields in 2020 a different strategy was used, the focus was completely on Sgr and inner Galaxy stars were mostly used as fillers if no fibres could be placed on Sgr stars. Sgr candidates were selected in two ways. The first group contained all stars brighter than G_0 = 15.5 and bluer than a [M/H] = -1.0 MIST isochrone <cit.>, this was to get some red and bright targets that would have been missed in the 2018 selection. The next group contained the most promising metal-poor candidate stars according to CaHK, again using a horizontal selection in the colour-colour diagram, this time with factor of 3.0 instead of 2.5 in front of (BP-RP)_0. These selections can be seen as black/small coloured points in the right-hand panels of Figure <ref>. A colour cut of 1.0 < (BP-RP)_0 < 1.8 was also made. §.§ Selection effects with reference to CEMP stars Photometric selections of metal-poor stars are plagued by selection effects against carbon-rich stars, especially for cooler stars <cit.>. This is because carbon has many molecular features in the spectrum, affecting both the narrow-band and broad-band photometry. We empirically investigate possible selection effects in our Sgr sample by comparing the location of our observed Sgr/AAT sample in the Pristine colour-colour diagram with known CEMP stars from <cit.>. We select giant stars within the relevant Sgr range, making cuts on log g < 2.5 and -3.0 < [Fe/H] < -2.0. Almost all Y16 stars after this cut have T_eff > 4500 K. We use the synthetic CaHK catalogue from <cit.>, derived from Gaia XP spectra <cit.>, and cross-match it with Y16 to obtain Pristine colour-colour diagram positions for these stars. All CaHK uncertainties for the Y16 stars are less than 0.075 mag, with more than 80% less than 0.05 mag. For the metal-poor regime in the Sgr/PIGS colour-colour diagrams, PIGS CaHK uncertainties are typically less than 0.025 mag. Large symbols in Figure <ref> are CEMP stars from <cit.> in the relevant Sgr range. Unfortunately, the Y16 catalogue does not contain many cool giants in this metallicity range, but a small sample of 48 stars remains that can be used. What is clear is that the CEMP stars are mostly not where they are expected to be, given their metallicity – they are further down in the colour-colour diagrams. A similar conclusion for the Pristine survey was reached by <cit.>, who reported that these stars have higher photometric metallicities than their spectroscopic metallicities <cit.>. Analogously, the SkyMapper survey, which is targeting metal-poor stars with the v filter also in the CaHK region, found a similar bias against CEMP stars, especially for those stars with very large carbon-enhancement <cit.>. For the 2018 fields (left column of Figure <ref>), a large fraction of Y16 CEMP stars falls outside the selected region (y-axis ≲-0.9). These are mostly stars with > -2.5 and/or [C/Fe] >+1.8 – the regime where CEMP-s stars dominate. Stars with < -2.5 and [C/Fe] < +1.8 fall within the selected range – this combination of and [C/Fe] is in the regime of the Group II/CEMP-no stars. 
In the 2020 fields (right column of Figure <ref>), more Sgr stars were targeted and the selection boundary lies slightly lower in the colour-colour diagram. More Y16 CEMP stars now overlap with the selection range, although very much at the edge. The biases are similar to those of the 2018 selection, although a few more stars with < -2.5 and [C/Fe] >+2.0 are included now. From this analysis, we conclude that CEMP-no stars with moderate carbon-enhancement should likely be included in our selection (especially for the 2020 fields, where the majority of our sample comes from), but a large fraction of CEMP-s stars would likely have been excluded. Finally, we note that the Y16 sample does not have any stars cooler than 4500 K with < -2.5 or with > -2.5 and [C/Fe] <+1.5. It is therefore difficult to estimate the biases against these stars, although we expect them to be worse for such cool stars. Our analysis in this work is focused on slightly warmer stars so the details of these stars are not crucial. §.§ Sagittarius spectroscopic sample used in this work For this work, to remove the MW contamination from the Sgr candidates, a selection of the Sgr members is made based on the Gaia DR3 proper motions, position on the sky, and radial velocity. In particular, we use the reduced proper motions for Sgr[μ_α - μ_α = μ_α + 2.69 - 0.009Δα +0.002Δδ + 0.00002Δα^3, μ_δ - μ_δ = μ_δ + 1.35 + 0.024Δα +0.019Δδ + 0.00002Δα^3, where Δα,Δδ are differences in RA and Dec of each star from the centre of the system (α_0=283.764 deg, δ_0=-30.480 deg)], as defined in <cit.>. This takes into account that the proper motion of the members changes as a function of the coordinates. We assume a star to be a Sgr member if it has a reduced proper motion of less than 0.6 mas yr^-1 as in <cit.> and <cit.>. Additionally, Sgr members have RVs in the range from 100 to 200 <cit.>. Finally, we limit our analysis to stars with RA >280^∘. This leads to a sample of 834 kinematically selected PIGS/AAT Sgr stars. Not all the AAT spectra have enough good quality to obtain reliable measurements of [Fe/H] and [C/Fe]. Therefore, bad measurements are removed from the kinematical selection using the flag , as suggested in <cit.>. This flag is based on the S/N of the blue spectra, the χ^2 and the CaT not being double-lined. This further cut leads to 631 Sgr members with available chemistry. The stars with bad S/N in the AAT sample are partly due to issues with the 2dF fibre placement (see discussion in ), which were particularly severe for the two fields observed in 2020 – this is why the upper/right parts of these fields in RA/Dec (see top-left panel of Fig. <ref>) do not have many stars in the final Sgr cut. The stellar parameter grid used in is limited to 4500 ≤T_eff (K)≤ 7000 and 1 ≤log g≤ 5, implying that for stars at the edge of this grid, a wrong model atmosphere might have been adopted to derive the and [C/Fe]. For the Sgr stars, this is particularly an issue at the cool end (see the bottom right panel of Figure <ref>); we, therefore, remove stars with T_eff < 4510 K to avoid stars close to the cool limit of the grid. For warm stars, the [C/Fe] abundances may not be reliable, we therefore remove stars with T_eff > 5700 K. Because we are interested in the chemistry, we only keep stars with reasonable uncertainties on and [C/Fe] (< 0.5 dex). After these cuts, the PIGS/AAT Sgr sample consists of 437 stars. 
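A minimal implementation of the kinematic membership cut, using the reduced proper motions defined in the footnote above, could look as follows. Coordinates are in degrees, proper motions in mas/yr and radial velocities in km/s; the polynomial coefficients are copied from the text, and whether Δα should carry a cos δ factor is not specified there, so the sketch simply uses raw coordinate differences.

import numpy as np

SGR_RA0, SGR_DEC0 = 283.764, -30.480   # adopted centre of Sgr (deg)

def sgr_kinematic_members(ra, dec, pmra, pmdec, rv):
    """Sgr membership sketch: reduced proper motion < 0.6 mas/yr,
    100 < RV < 200 km/s, and RA > 280 deg, as described in the text."""
    dra, ddec = ra - SGR_RA0, dec - SGR_DEC0
    red_pmra = pmra + 2.69 - 0.009 * dra + 0.002 * ddec + 2e-5 * dra**3
    red_pmdec = pmdec + 1.35 + 0.024 * dra + 0.019 * ddec + 2e-5 * dra**3
    pm_ok = np.hypot(red_pmra, red_pmdec) < 0.6
    rv_ok = (rv > 100.0) & (rv < 200.0)
    return pm_ok & rv_ok & (ra > 280.0)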
However, in this work we are mainly interested in stars with <-1.5, which results in a final selection of 356 metal-poor PIGS/AAT Sgr members with good measurements of , [C/Fe], and RV. A table of the Sgr members updated to Gaia DR3 will be available as online material. The PIGS/AAT sample (13 235 stars, grey dots), the stars from the kinematical cut (834, coral circles), the final selection (356 stars, blue circles), and Sgr members from APOGEE DR17 <cit.> are shown in Figure <ref>. The figure displays the position on the sky zoomed in on the Sgr fields (top left panel), the reduced proper motion space (top right), the -RV space (bottom left), and the Kiel diagram (bottom right). PIGS/AAT stars in grey dots that lie within the red circle in proper motion space (top right) do not have RV compatible with Sgr, and, similarly, PIGS/AAT stars in grey dots with similar RV as Sgr (bottom left) do not match its proper motion. The Kiel diagram clearly shows an overdensity of stars at the cool edge of the grid, which has been removed as outlined above. Most PIGS/AAT Sgr stars have 1.0 < log g < 2.5 and 4500 K < T_eff < 5300 K. Part of this work is focused on very carbon-rich objects (Section <ref>), so it is important to be certain that our spectroscopic quality cuts do not bias against such stars. The main quality cuts of relevance are the S/N and the χ^2. The S/N is determined from the spectra independently of the fit, in two regions (4000-4100 Å and 5000-5100 Å), and is not expected to be strongly affected by the carbon abundance, so cutting on it is unlikely to introduce a bias against CEMP stars. If cannot find a good fit or there are many bad regions in the spectrum, the χ^2 will be high. We inspect all fits of Sgr candidates with bad S/N or bad χ^2 by eye, and identify two clearly carbon-rich stars that are badly fitted, with a high χ^2. Both of these are very cool, very carbon-enhanced and intermediate/very metal-poor, and they will be discussed in Section <ref>. § ON THE RV DISTRIBUTION The RVs of the PIGS/AAT Sgr sample fall within the overall distribution of stars in Sgr's core, ranging between 100-200 <cit.>, see also Figure <ref>. Various studies have pointed out that the metal-poor population of Sgr, both in the core and in the stream, is more spatially extended and has a larger velocity dispersion σ_RV and a larger systemic velocity <RV> than the more metal-rich population <cit.>. With the PIGS/AAT Sgr sample, we update these quantities using a more metal-poor, and likely older, population than previous work. Sgr stars have been divided into two populations, the metal-poor (<-1.5) from PIGS/AAT and the metal-rich (>-0.6) from APOGEE DR17. The number of stars in these two populations as a function of the projected elliptical distance from Sgr's centre is shown in Figure <ref>. The metal-rich population dominates over the metal-poor one in the very inner regions, until a projected elliptical distance of ∼0.25 half-light radii (r_h). Then the two groups from the two surveys are similarly populated. The metal-poor and the metal-rich populations are then divided into two sub-groups according to their projected elliptical distances: the inner group at <0.25 r_h vs the outer at ≥0.25 r_h. To derive the systemic RV and the RV dispersion, a Bayesian framework embedded in a Monte Carlo Markov chain, based on the Metropolis-Hastings algorithm, is employed. The prior probability distribution is a step function and it expects these quantities to be in the ranges 90≤RV≤220 and σ_RV≤40. 
The likelihood is a Gaussian distribution centred on the systemic RV and with a dispersion that takes into account the intrinsic RV dispersion of the system and the uncertainties of the RV measurements. The systemic RV, <RV>, vs velocity dispersion, σ_RV, are displayed in Figure <ref> and reported in Table <ref>. As reference, Figure <ref> also displays the values for the populations from <cit.>, for metal-rich stars (>-0.6, blue small circle) and metal-poor stars (≤-0.6, but almost no stars with < -1.0, black small circle). We checked for possible systematics between the APOGEE and PIGS radial velocities by comparing both surveys to Gaia radial velocities (not limited to Sgr to have many more stars). The difference ΔRV(PIGS - Gaia) = +0.5 <cit.> and ΔRV(APOGEE - Gaia) = +0.2, implying there is only a ∼ 0.3 systematic difference between APOGEE and PIGS. The overall metal-poor (large blue circle) and metal-rich (large black circle) populations have a systemic RV of 145.4±0.9 and of ∼142.6±0.7, respectively. These values are compatible with the ones inferred by <cit.> adopting a different cut in and a different dataset. The difference in the systemic RV between these populations is significant, given the uncertainties and the precision of the RVs. We did not take into account projection effects. Recently, <cit.> modelled the RV distributions of Sgr and M54, and inferred a difference of 4 between M54 (magenta cross marker) and the main body of Sgr (magenta small circle), with mean radial velocities of 139.6 ± 0.9 for M54 and 143.7 ± 0.7 for the main body, with a velocity gradient in the main body. Our results suggest that there is additionally a difference between MP and MR Sgr field populations – there appears to be an increasing mean RV going from M54, to metal-rich field stars, to metal-poor field stars. In agreement with previous work on the stream and core <cit.>, we find that the overall metal-poor population has a velocity dispersion larger than the metal-rich counterpart, in our case σ_RV∼ 17 vs σ_RV∼ 12, respectively. Also to be noted from Figure <ref>: the inner populations in our analysis (large squares), both MP and MR, have lower RV dispersion and lower systemic RV than their respective outer populations (large plus markers). For all the populations and subgroups, the velocity dispersion and the systemic velocity are found to be considerably higher than the values for M54 (magenta cross marker). The MP and MR populations should not be contaminated by many M54 members. §.§ Internal and external mechanisms in play Various internal and external mechanisms can affect the chemo-dynamical properties of a system. For instance, the internal morphology can play a role. In this regard, a dynamically hotter MP and a colder MR population with weak rotation has been proposed to indicate the presence of a metal-rich thick and rotating disc or bar surrounded by a more dispersed and metal-poor stellar halo in Sgr <cit.>. The presence of such a rotating disc/bar would also explain some chemo-dynamical properties of the stellar streams associated with Sgr <cit.>. The fact that the MR population, either in the inner or in the outer regions, has a lower velocity dispersion and a lower systemic RV than the MP supports the idea that these two groups populate two different structures, such as a “disc/bar” and a stellar “halo” of Sgr. If so, projection effects on the bar are another ingredient explaining the different systemic RV from the MP group. 
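For concreteness, a minimal Metropolis-Hastings implementation of the estimator described above (step-function priors on the systemic RV and σ_RV, Gaussian likelihood broadened by the individual RV uncertainties) is sketched below. The proposal scales, chain length, starting point and burn-in fraction are arbitrary illustrative choices, not necessarily those used in the actual analysis.

import numpy as np

def rv_posterior_mh(rv, rv_err, n_steps=20000, step=(1.0, 0.5), seed=0):
    """Metropolis-Hastings sketch for the systemic RV and dispersion.

    Flat (step-function) priors: 90 <= <RV> <= 220, 0 < sigma_RV <= 40.
    Gaussian likelihood with total variance sigma_RV^2 + rv_err_i^2 per star.
    Returns the posterior chain after a simple burn-in cut.
    """
    rng = np.random.default_rng(seed)

    def log_post(mu, sigma):
        if not (90.0 <= mu <= 220.0 and 0.0 < sigma <= 40.0):
            return -np.inf
        var = sigma**2 + rv_err**2
        return -0.5 * np.sum((rv - mu) ** 2 / var + np.log(2.0 * np.pi * var))

    mu, sigma = np.median(rv), np.std(rv)
    lp = log_post(mu, sigma)
    chain = []
    for _ in range(n_steps):
        mu_new = mu + step[0] * rng.normal()
        sigma_new = sigma + step[1] * rng.normal()
        lp_new = log_post(mu_new, sigma_new)
        if np.log(rng.uniform()) < lp_new - lp:
            mu, sigma, lp = mu_new, sigma_new, lp_new
        chain.append((mu, sigma))
    return np.array(chain)[n_steps // 4:]   # discard a simple burn-in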
Additionally, outside-in star formation has been proposed as a mechanism to explain the different spatial and kinematical properties of the MP and MR populations in DGs, such as the gradient in the velocity dispersion <cit.>. In this scenario, the oldest MP population would form everywhere in the system, and its supernovae would enrich the ISM. Some of the gas might then have sunk to the inner regions with time, forming younger and more metal-rich stars that are more gravitationally bound to the system. As a result, the MP population would be more spatially extended and kinematically hotter than the MR one, with the latter confined mostly to the inner regions with a lower velocity dispersion. The main external mechanisms that can affect the dynamical properties of a DG are merging events and tidal stripping. In the case of the former, stars are heated up by the accreted system, and the less bound ones, such as those in the outskirts, are likely to be the most affected. The additional gas from the accreted system (if it has any) can then sink into the inner regions, triggering the formation of new, more metal-rich stars <cit.>. In addition, tidal stripping influences the distribution and kinematics of the less bound outskirts of a system. In fact, the ongoing stripping of Sgr resulted in the formation of the Sgr stellar streams, which are known to be more metal-poor on average than the core <cit.>. It has been proposed that Sgr has interacted gravitationally with the MW for more than 8 Gyr, with its first pericentric passage likely to have happened around 5-6 Gyr ago <cit.>. The MR population in Sgr has an estimated age spanning from 4 to 8 Gyr – its star formation, or part of it, might have been triggered by Galactic perturbations at, or close to, the first pericentric passage. Investigations of simulated galaxies reveal that the extreme tidal effects that Sgr is undergoing might have affected the system's morphology, e.g. they could have reshaped its disc (if it had one) into a prolate rotating bar structure <cit.>. §.§ Comparison to other DGs The velocity dispersions of the other six classical DGs <cit.> are also reported in Figure <ref> as a reference. The velocity dispersion of the MR population in Sgr is similar to that of Fornax, which is the highest in the DG compilation. The σ_RV for the MP population in Sgr is significantly higher than the averages for the other DGs. This could be due to an observational bias: the σ_RV of the reference galaxies is calculated from their overall populations, which are mostly more metal-rich than the MP population in Sgr. As an example, the velocity dispersion of the overall population in Sculptor is around 7, while restricting to the more dispersed metal-poor stars would give a σ_RV∼10-12 <cit.>. In addition, Sgr has a higher total mass than the other DGs reported in Figure <ref> and is experiencing Galactic tidal stripping that is far more extreme than in the other systems <cit.>; both effects concur to inflate the σ_RV of this system. § CARBON TRENDS IN SAGITTARIUS We next focus our attention on the chemistry of Sgr, specifically the abundance of carbon. As discussed in the Introduction, carbon abundances can trace the early chemical evolution of a system <cit.>. Is the level of carbon in Sgr similar to that in the other classical DGs? What about in comparison with the inner Galaxy and the MW halo?
To answer these questions, in Figure <ref> we present the average [C/Fe] ratio as a function of the metallicity for Sgr (red circles) compared to the classical DGs (left panel) and compared to the inner Galaxy and the MW halo (right panel). All carbon abundances have been corrected for evolutionary effects according to <cit.>, see <cit.> for details. We find that the Sgr carbon abundance slightly rises with decreasing metallicity. §.§ Halo and inner Milky Way There are a number of studies that have explored the carbon abundance of low-metallicity stars in the MW halo, and to a lesser extent in the inner Galaxy. <cit.> showed that trends involving carbon abundances are very sensitive to the assumptions made in the synthetic spectroscopic grids (e.g. the model atmospheres, the adopted atomic and molecular data) and the employed pipeline, with large systematic offsets between different literature samples (see their Figure 4). To not bias our conclusions, in the comparison with the halo and the inner Galaxy, we restrict ourselves to [C/Fe] measured within PIGS and the Pristine survey, which have all been derived with the same methodology. The inner Galaxy PIGS/AAT sample is selected from <cit.>, restricted to those stars with good measurements of stellar parameters, metallicities and carbon abundances, as in our Sgr sample. An additional cut is imposed to select stars with similar surface gravity as the bulk of the Sgr sample (log g<2.3) and to remove the region of early asymptotic giant branch stars (eAGBs) whose carbon abundances have been altered by stellar evolution <cit.>. This selection is composed of 2318 stars with <-1.5 (grey circles in the right-hand panel of Figure <ref>). Additionally, this sample is split into two sub-groups according to their Galactic apocentric distances, those that remain confined in the inner Galaxy (apocentre <3 kpc, grey crosses) and the “halo interlopers” (apocentre >8 kpc, grey plus markers). The former and the latter are composed of 1032 and 276 stars, respectively. For the MW halo, we include the Pristine medium-resolution follow-up sample from <cit.>, with carbon abundances corrected for spurious log g determinations following <cit.>. This sample has a less restrictive cut on the surface gravity, namely log g <3.0. Although the same trend is visible for the Milky Way and Sgr samples, namely a rise in carbon abundance with decreasing metallicity, the average carbon abundances are higher in the Milky Way samples compared to Sgr, and the rise appears to be less steep in Sgr. The [C/Fe] difference between Sgr and the Milky Way starts at ∼ 0.1 dex for = -1.6 and increases to 0.3-0.4 dex for < -2.5. The average carbon abundance of the MW (inner regions and the halo) is also higher than most of the classical DGs, except for Fornax (Fnx, gold squares). The difference in carbon abundances can be interpreted as a different population of SNe II and AGB stars that contributed to the chemical enrichment of the dwarf galaxies in comparison with the one of the Galaxy. In particular, a higher contribution of faint and core-collapse SNe could provide a higher [C/Fe] ratio <cit.>. The physical and chemical properties of the building blocks that contributed to the formation of the proto-Galaxy are still under discussion <cit.>, as well as the importance of an ancient in-situ component <cit.>. Did the early building blocks have a chemical evolution similar of the present UFDs? 
What about their masses and sizes, or, in other words, are the building blocks comparable to classical DGs or to smaller UFDs <cit.>? For the PIGS inner Galaxy sample, there is a slight difference in the average level of carbon abundance between the “confined” (plusses, lower [C/Fe]) and “halo interloper” (crosses, higher [C/Fe]) samples, of the order of 0.05-0.10 dex. This could potentially be connected to different building blocks contributing to these populations, e.g. more chemically evolved ones to the confined population and more chemically pristine systems to the halo population (see also the discussion in ). We further discuss the connection to dwarf galaxies and their chemical evolution in Section <ref>. §.§.§ Note on possible systematics As previously discussed, the PIGS/AAT inner Galaxy and the Sgr stars have been analysed with the same methodology applied to the same AAT spectra, and the <cit.> sample has been analysed with the same methodology as well, so systematic differences should hopefully be minimal. One caveat here is that [α/Fe] is fixed in the analysis, to +0.4. However, various high-resolution spectroscopic works showed that the majority of the inner Galaxy VMP stars have similar [α/Fe] compared to typical halo stars <cit.>, and the α-abundances are also very similar between the MW and Sgr in the VMP regime <cit.>. Therefore, we should not expect significant biases in the analyses due to [α/Fe] differences. We note that the magnitude of the evolutionary carbon correction following <cit.> also depends on the natal nitrogen abundances of stars, which may differ for each formation site, but are all assumed to be [N/Fe] = 0.0 in the calculations. However, the predicted effect on the carbon corrections is much smaller than the difference we find between Sgr and the Milky Way – Figure 1 of <cit.> shows that for a = -2.3 star, the difference in the carbon correction between a [N/Fe] of -0.5 and +0.5 at birth is at most ∼0.05 dex. Therefore, a different average level of [N/Fe] between the MW and Sgr would not impact our findings. The evolutionary corrections may also potentially be better or worse in some parts of the parameter space (e.g. depending on log g), so it is crucial to compare stars in similar evolutionary phases. We attempted this by limiting the reference samples in log g, but the distributions of evolutionary phases are not exactly the same. What might be the effect of photometric selection effects on trends of carbon? As discussed previously, very carbon-rich stars are likely excluded from our selection because they look too metal-rich. Could our selection be biased even for “carbon-normal” stars, selecting only those with relatively lower carbon abundances? This is unlikely to be the case, especially for < -2, given that the carbon features are relatively weak for carbon-normal VMP stars and given that our selection was not only targeting VMP stars, but also probed the slightly more metal-rich population. Finally, we checked potential systematics on the mean [C/Fe] and its trend with metallicity as a function of the surface gravity. As a sanity check, we repeated the exercise of Figure <ref>, restricting the Sgr and MW compilations to stars with 1.8<log g <2.3 (for lower log g, the evolutionary carbon corrections become more important). We find no qualitative or quantitative differences between this more strict cut and the one applied to produce Figure <ref>. 
However, we note that the MW halo sample from <cit.> would not have enough stars to populate all the metallicity bins for this limited log g selection. §.§ Dwarf galaxies To compare the average [C/Fe] of Sgr with classical DGs, stars with ≤ -1.5 have been selected from the DG members summarised in <cit.>, <cit.>, and <cit.>. The compilation from <cit.> is composed of 442 stars (16 CEMP-no) and distributed in 7 classical DGs, namely Canes Venatici I (CVn I, 1 star, ), Carina (Car, 8 stars, ), Draco (Dra, 161 stars, ), Fornax (Fnx, 14 stars, ), Sculptor (Scl, 173 stars, ), Sextans (Sex, 4 stars, ), Ursa Minor (UMi, 81 stars, ). The compilations from <cit.> and <cit.> include members of the Large Magellanic Cloud (LMC) for a total of 21 stars (no CEMP). The systems from these compilations, excluding CVn I and CEMP-no stars, are displayed in Figure <ref> with coloured circles, diamonds, squares, and plusses. The average level of [C/Fe] in Sgr is within the wide range of the 7 classical DGs. In particular, the average carbon abundance in Sgr appears to be higher than in Scl for >-2.4 by up to ∼0.3 dex. Compared to Car, Sgr's [C/Fe] level is also higher, for ≲-2.4 by at least ∼0.3 dex. As proposed by <cit.>, the strikingly low amount of [C/Fe] in Scl and Car might be explained by a strong imprint of hypernovae from Pop III stars. Thus, classical DGs and stars with such low carbon level might be crucial for understanding the energy distribution of the primordial generation of stars <cit.>. Another nucleosynthetic channel that contributes to lower the [C/Fe] is from SNe Ia, in which the production of Fe exceeds that of C <cit.>. This event might be responsible for lowering the [C/Fe] in Dra and UMi for ≳-2.5, as also shown in <cit.>. Chemical abundance analysis from <cit.> reveals that the level of [C/Fe] in Dra strongly decreases around ∼-2.5, such as the metallicity at which SNe Ia starts to kick in. Similarly, <cit.> discovered that the contribution of SNe Ia in UMi starts at ∼-2.1. In Sgr, the contribution of SNe Ia is absent in the VMP regime. However, <cit.> suggest that the trend of [Co/Fe] at ≳-2.0 might be an indication of a possible contribution of SNe Ia in Sgr. This can also explain the lower [C/Fe] at ≳-2.0 compared to the more metal-poor bins. A more thorough investigation of this metallicity regime in Sgr will be explored by PIGS in a coming paper (Vitali et al., in prep.). <cit.> discussed the early chemical enrichment phase of Sgr from the detailed chemical abundances of 11 VMP stars. The chemical pattern of Sgr stars has been interpreted as the result of a mixture of Pop III and II stars contributing <cit.>. In particular, intermediate-mass high-energy and hypernovae are needed to explain the abundance patterns of the lighter elements up to the Fe-peak, while compact binary merger events and fast-rotating (up to ∼300) intermediate-mass to massive metal-poor stars (∼25-120) are needed to account for the level of the heavy elements. No evidence for contributions from pair-instability supernovae has been found in <cit.>. This mixture of various energetic SNe events appears to be common in classical DGs, and therefore explain the similarity in [C/Fe] between these systems and their lower level compared to the MW, see Section below for a further discussion on this topic. 
§.§ The different supernovae enrichment The different amount of [C/Fe] among the classical DGs and their lower level compared to the MW can be interpreted as the imprint of a different chemical evolution and a different efficiency in retaining the ejecta of SNe. For instance, the chemical evolution models from <cit.> suggest that DGs would have been polluted by a mixture of SNe II from Population III and II stars vs a more pristine population of SNe II in the building blocks of the MW halo <cit.>. The higher fraction of Pop II would have contributed to partially lower the average [C/Fe] <cit.>. In addition, the ISM of classical DGs is considered to be homogeneously mixed, therefore able to have retained the ejected yields from the most energetic events <cit.>, such as high-energy SNe II, hypernovae, and potential pair-instability SNe II. The retention of the ejected yields from the most energetic events would lower the average amount of [C/Fe], given they would produce more Fe than C <cit.>. While there is a consensus that massive systems would contribute to the formation of the MW <cit.>, it is still an open question whether the MW's building blocks resembled UFDs or DGs in terms of their ISM efficiency in retaining SNe yields or regarding their star formation history or their initial mass function. We interpret the higher average [C/Fe] of the MW as an indication that the ISM efficiency of the MW's building blocks is similar to UFDs, hence unable to retain the most energetic events <cit.>. Therefore, the ISM of the building blocks of the MW, should be the fossil of the lower energetic events only <cit.>. Additionally, if inhomogeneous chemical enrichment is in place, asymptotic giant branch stars (AGBs) can also be an extra source for the level of carbon, even at lower metallicities <cit.>. Figure <ref> also shows a difference in the average [C/Fe] between the MW halo and the inner Galaxy, especially those stars confined within 3 kpc. Recently, <cit.> suggested that a potential dearth of CEMP stars in the inner Galaxy could be due to the very high star formation rates at early times. The star formation would be so intense that stars massive enough to explode as pair-instability SNe would form, which would lower the average [C/Fe] compared to the halo. However, no star carrying the imprint of pair-instability SNe has been found so far in the Galaxy <cit.>. Furthermore, SNe Ia can concur to lower the average [C/Fe] in a given system <cit.>. The contribution of SNe Ia might start at ≳-2.5 in some classical DGs <cit.>, and likely between -2.0≲≲-1.5 for Sgr <cit.>. This is not the case for the MW, where SNe Ia starts to kick in at higher metallicities, ∼-1.0 <cit.>. Therefore, the lower average [C/Fe] at ≳-2.5 in DGs and at ≳-2.0 in Sgr can also be caused by the contribution of SNe Ia. §.§ The radial gradient of [C/Fe] Our sample is large enough and covers enough of Sgr to test whether there may be any radial gradients in [C/Fe]. To avoid potential systematic effects in [C/Fe] between radial bins due to differences in stellar parameter coverage, we limit the sample to 1.8< log g < 2.3 for this analysis. We find that the general picture of our results does not change compared to using a more generous cut or the full sample, but the behaviour is cleaner for the limited sample. The median [C/Fe] as a function of the projected elliptical distance is shown in Figure <ref>. 
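One simple way to quantify the gradient discussed in the next paragraph is a straight-line fit of [C/Fe] against projected elliptical distance after the cuts just described. The sketch below uses hypothetical column names and ordinary least squares instead of binned medians, so it only illustrates the procedure rather than reproducing the actual analysis.

import numpy as np

def cfe_radial_gradient(r_ell, cfe, logg, feh, feh_range=(-2.0, -1.5)):
    """Linear [C/Fe] gradient in dex per unit of projected elliptical distance.

    Applies the cuts described in the text (1.8 < logg < 2.3 and a metallicity
    window; CEMP stars are assumed to have been removed already) and fits a
    straight line with ordinary least squares.
    """
    sel = (logg > 1.8) & (logg < 2.3) & (feh > feh_range[0]) & (feh <= feh_range[1])
    slope, intercept = np.polyfit(r_ell[sel], cfe[sel], 1)
    return slope, intercept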
The Sgr PIGS/AAT sample is divided into two sub-groups, the low-metallicity (blue circles, -2.5≤≤-2.0) and a slightly more metal-rich group (navy circles, -2.0 < ≤ -1.5), and removing CEMP stars from the calculations. There is a net positive [C/Fe] gradient for the slightly more metal-rich sub-group, with a difference of ∼+0.25 dex between the very inner region and the outskirts of Sgr. This leads to a positive gradient in [C/Fe] of about ∇[C/Fe]∼ 0.23 dex r_h^-1 or ∼ 8.8× 10^-2 dex kpc^-1 or ∼ 6.8× 10^-4 dex arcmin^-1. Regarding the low-metallicity sub-group, a mild positive gradient is visible if the innermost bin is not considered. In this case, the difference in [C/Fe] would be ∼0.2 dex between the inner to the outer Sgr's regions. To be taken into account, uncertainties on the average [C/Fe] are larger for the low-metallicity sub-group than the more metal-rich one. Is the more pronounced gradient at higher metallicities connected to a different chemical enrichment between the two populations? A couple of concurrent mechanisms might explain these gradients: outside-in star formation and the contribution of SNe Ia. The former, as discussed in Section <ref>, implies that the oldest and most metal-poor stars should form everywhere in the system and would carry a similar imprint of nucleosynthetic events, if also homogeneous mixing applies to the system. In the case that the ISM is not completely homogeneously mixed between the inner regions and the outskirts, these two regions might carry different level of [C/Fe]. Likely, the outskirts would be less efficient in retaining the more energetic events as the inner regions, resulting in a higher average [C/Fe]. The stellar feedback from the first supernovae would expel the gas outside the system, which then later would be re-accreted onto the inner regions, where slightly more metal-rich stars would form. These relatively more metal-rich inner stars might carry the imprint of SNe Ia as well. As discussed in Section <ref>, SNe Ia can lower the average [C/Fe] <cit.>, and the higher contribution of these events in the inner regions would explain the positive gradient in [C/Fe]. This result would be an indication, in addition to the trend of [Co/Fe] in <cit.>, that SNe Ia might have started to kick in in Sgr at metallicities between -2.0<<-1.5, well below what was previously inferred <cit.>. § CEMP STARS As discussed in the Introduction, CEMP stars are of interest because they probe the properties of the First Stars and early chemical evolution (CEMP-no) and of binary populations (CEMP-s). Next, we investigate the properties of CEMP stars in Sgr with the PIGS/AAT Sgr data set, which is much larger than previous literature samples with [C/Fe] in Sgr. To our sample of carbon measurements in Sgr, we add those of <cit.> and <cit.>, who observed metal-poor Sgr stars with the Magellan Echellette (MagE) Spectrograph, measuring [C/Fe] for 4 and 18 targets, respectively. These stars have metallicities in the range -3.1 ≲≲ -1.5, similarly to the PIGS/AAT range. None of these stars are CEMP according to the standard definition ([C/Fe] >+0.7). Other Sgr members with measured [C/Fe] that are not included are the targets analysed in <cit.> and from APOGEE DR17. <cit.> measured [C/Fe] in 12 stars with metallicity -2.95 ≲≲ -1.40. These targets were observed with UVES high-resolution spectrograph at VLT. However, as shown in <cit.>, the [C/Fe] ratios from <cit.> are systematically lower than the ones from <cit.>, <cit.>, and this work <cit.>. 
APOGEE stars are not included, since the C-measurements are in non-local thermodynamic equilibrium (non-LTE) and in the infra-red, which have offsets compared to LTE measurements in the optical <cit.>. §.§ New CEMP stars in Sgr The distribution of [Fe/H] vs A(C) for Sgr stars is shown in Figure <ref> (blue circles). According to the classical definition of CEMP stars ([C/Fe] > +0.7), only 3-4 stars in the PIGS/AAT Sgr sample are classified as CEMP. One of them (red pentagon) has previously been studied in <cit.>, and was confirmed to be a CEMP-s star based on the over-abundance of s-process elements ([Ba/Fe] ∼+1.2). For the other two CEMP candidates, Ba measurements are not available. We compare the distribution of metallicities and carbon abundances with those for the inner Galaxy (grey circles) and DGs <cit.>. Note that the DG sample only includes carbon-normal and spectroscopically confirmed CEMP-no stars. Without measurements of Ba or Sr, it is not possible to classify CEMP stars with certainty, although a rough classification can be made based on [Fe/H] and A(C) alone <cit.>. CEMP-s stars typically have higher A(C) than CEMP-no stars and are more common at higher metallicities, and a tentative separation between the two groups has been placed at A(C) = 7.1 <cit.> and [Fe/H]≳-3.3. It is not entirely clean – there is some known contamination when using such a simple division without detailed chemistry, for example the <cit.> CEMP-s star lies in the CEMP-no region based on [Fe/H] and A(C) alone, and some DG CEMP-no stars lie in the CEMP-s region. Similarly, a contamination of CEMP-no in the CEMP-s region is also found for MW halo stars <cit.>. However, without better data, we may propose that the two new Sgr CEMP stars are likely of the CEMP-s kind given their metallicity and high carbon abundances. §.§.§ Two cool candidate CEMP stars We noticed that there are two stars in the AAT/Sgr sample (not passing our quality cuts, based on χ^2) that by eye appear to be very carbon-rich from their spectrum. These stars, Pristine_185524.38-291422.5 (Gaia DR3 source_id = 6761678859361894912) and Pristine_190122.55-304744.3 (6760545743905626496) are highlighted with pink circles in the Pristine colour-colour diagram in the top left and right panels of Figure <ref>. It is curious that one of them is located above the main Sgr sequence. They are also shown on the CMD with large red symbols in the top panel of Figure <ref>. The same star that is an outlier in the colour-colour diagram is located beyond the metal-rich side of the RGB, which is also curious. If the star is truly a Sgr star (and there is no reason to suspect it is not given its radial velocity and proper motions), it cannot be an intrinsic carbon star, because it is not evolved enough. Both stars have T_eff∼ 4500 K, which is at the cool boundary of the grid, therefore they could be even cooler. Inspection of the spectroscopic fit shows that the fit is bad in both the blue and the CaT regions: there is a strong discrepancy between the carbon features in the star and those in the FERRE grid, although it is clear that the star is very carbon-rich. This is potentially due to the assumptions on nitrogen in the grid (see below). To further constrain the stellar parameters for these stars, we employ a different grid of synthetic spectra originally created for use in the Segue Stellar Parameter Pipeline (, , grid from Y.S. Lee, private communication). 
An important difference between the and grids is that the former assumes [N/Fe] = 0, while the latter assumes [C/N] = 0 – this is potentially particularly important for fitting the CN features in the CaT. We use a cool subset of the grid with the following stellar parameters: T_eff = [4000, 4250, 4500, 4750] K, log g = 1.0 (we checked that varying log g does not make a difference), from -3.0 to -1.0 in steps of 0.25 dex and [C/Fe] from 0.0 to +3.0 in steps of 0.25 dex. After normalising both the observed and synthetic spectra with a running median of 200 pixels (50 Å), we search for the best matching spectrum by minimising the residuals. We do this separately for the CaT and the blue and combine the χ^2 values afterwards, giving more weight to the CaT because of its high resolution and because it is less sensitive to the shape of the molecular bands. For both of the stars there is no clear best-fit stellar parameter combination, because there are strong degeneracies between T_eff, [Fe/H] and [C/Fe]. For Pristine_185524.38-291422.5, the outlier in photometry, the main constraint is placed on the absolute carbon abundance: for the 5% best fits, A(C) = 8.7 ± 0.4 (mean and standard deviation). The mean metallicity is -1.5 ± 0.4 and the temperature is not well-constrained within the limit of our small grid. The other star, Pristine_190122.55-304744.3, is more metal-poor and slightly less carbon-rich – the mean A(C) = 8.0 ± 0.5 and [Fe/H] = -2.2 ± 0.5 for the 5% best fits, and the temperature is again not well-constrained. For each of these stars, we present one of the best matching synthetic spectra in Figure <ref>, with the observed spectrum in black and the synthetic one in red. We applied a by-eye linear normalisation to the blue arm synthetic spectrum to roughly match the shape of the observed spectrum rather than showing the normalised version, so the match is not perfect. We conclude that these stars are likely CH- or CEMP-s stars. The location of the more metal-rich star in the Pristine colour-colour diagram and the CMD is likely strongly affected by the very large carbon bands, causing the star to look fainter and redder compared to where a “normal” metal-poor star would be. This effect appears to be less strong for the more metal-poor star, although it is on the border of having been included in our selection according to Figure <ref>. Such extreme stars have likely been missed in other selections of metal-poor stars as well, in DGs and the Milky Way, possibly leading to an underestimate of the number of binary mass-transfer type stars at intermediate metal-poor metallicities. §.§ Fraction of CEMP stars In the Galactic halo, the cumulative fraction of CEMP stars for < -2.0 has been found to be of the order of 20-30%, rising to 30-40% for < -3.0 <cit.>. There are various caveats complicating the exact determination of the overall CEMP and separate CEMP-no and CEMP-s fractions in the Galactic halo <cit.>, but the consensus is that there is a significant fraction of these stars at low metallicity. As shown in Figure <ref>, only three out of 356 PIGS/AAT Sgr stars is classified as CEMP and none from <cit.> and <cit.>, giving a total percentage of ∼3% for < -2.0 and ∼5% for < -2.5 – much lower than that claimed in Galactic halo samples. 
This could partially be the result of our photometric metal-poor candidate selection being biased against carbon-rich stars, especially those at slightly higher metallicity (> -2.5) and/or higher carbon abundance ([C/Fe] >+1.5) – the realm of the CEMP-s stars. The CEMP fraction in Sgr is also low for < -2.5, and we find that none of the 8 Sgr stars with < -2.7 are CEMP. This is interesting given that in our test of the selection function in Section <ref>, we found that CEMP-no stars in this metallicity range should typically not have been excluded from our selection. This finding is consistent with previous observations suggesting that classical DGs are poor in CEMP-no stars in comparison to the MW and UFDs <cit.>. §.§ Redefining CEMP stars in DGs Given that the average carbon abundance is ∼0.3 dex lower in Sgr compared to the Milky Way (Figure <ref>), is it fair to use the same definition of carbon-enhancement as in the Milky Way? This seems to be a generic question for classical DGs, as most of them have lower average [C/Fe] than the Milky Way, as discussed in the previous section, and they would therefore need a larger carbon “boost” to be classified as CEMP. The LMC also has lower carbon abundances compared to the MW halo (although similar to the inner Galaxy), with a dearth of CEMP stars <cit.>. The first definition of CEMP stars was [C/Fe] >+1.0 <cit.>, which was refined empirically by <cit.> to [C/Fe] >+0.7 based on a sample of observations of MW stars, using the gap between carbon-normal stars and outliers with high carbon abundances. This definition is therefore a relative one, specifically for the Milky Way “field” population, raising the question whether it should it be redefined for dwarf galaxies. Inspecting Figure <ref>, there are a significant number of Sgr stars that appear to be outliers in A(C) from the main Sgr trend, although they do not make it to above the classical CEMP definition of [C/Fe] >+0.7. For ≲-2.5 in the PIGS/AAT inner Galaxy sample, the average [C/Fe] ≈ +0.3 with a dispersion of 0.2 dex (conservative estimate) – meaning that the [C/Fe] =+0.7 CEMP definition selects stars that are ∼ 2 σ outliers, roughly 0.4 dex higher than the mean trend. The average [C/Fe] in Sgr in the lowest metallicity bins (≲ -2.5) is ∼-0.05, therefore, adopting a similar conservative dispersion, stars with [C/Fe] > -0.05 + 0.4 > +0.35 could be considered outliers in Sgr, and therefore CEMP. This working definition of CEMP stars in Sgr is shown in Figure <ref> with a dashed red line. Using this new definition, ∼20 Sgr members would be classified as CEMP stars (vs 3-4 from the classical definition). This would lead to a carbon-enhanced percentage of ∼15% for < -2.0, which is much less in tension with the results in the MW <cit.>. The percentage would be ∼12% for -2.5< < -2.0 and ∼30% for < -2.5 (or ∼35% if only Sgr/AAT data are considered), compatible with the frequency of CEMP stars in the MW. Similarly, for Dra, UMi, and Scl (selecting stars between -2.4<<-1.9), the new [C/Fe] threshold for a member star to be a CEMP would be ∼+0.3,+0.3,+0.1, respectively. This new limit would suggest that the percentage of CEMP in Dra, UMi, and Scl would be ∼16%, 27%, 19%, respectively. However, the latter values refer only to the CEMP-no population, given that the compilation from <cit.> does not contain CEMP-s stars. 
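The relative CEMP criterion proposed above can be expressed compactly: the threshold is the mean [C/Fe] of the carbon-normal stars in a system plus roughly twice a conservative dispersion. In the sketch below the 2σ factor and the 0.2 dex dispersion floor follow the numbers quoted in the text; in practice the carbon-normal mean should be computed after clipping obvious outliers.

import numpy as np

def relative_cemp_threshold(cfe_carbon_normal, n_sigma=2.0, sigma_floor=0.2):
    """System-dependent CEMP threshold: mean [C/Fe] + n_sigma * dispersion."""
    mean_cfe = np.mean(cfe_carbon_normal)
    sigma = max(np.std(cfe_carbon_normal), sigma_floor)
    return mean_cfe + n_sigma * sigma

# For Sgr at the lowest metallicities, a mean [C/Fe] of about -0.05 with a
# conservative 0.2 dex dispersion gives a threshold of roughly +0.35, to be
# compared with the canonical +0.7 adopted for the Milky Way.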
Investigations of the chemical properties of CEMP candidates based on our relative CEMP definition will be necessary to test whether they show differences in their abundance patterns compared to stars in the bulk of the carbon-metallicity distribution, and whether they are truly a different population of stars. § SUMMARY The chemo-dynamical properties of the low-metallicity regime of the Sagittarius dwarf galaxy are explored using the low/medium-resolution AAT spectra observed by the Pristine Inner Galaxy Survey (PIGS). The PIGS dataset contains measurements of RVs, stellar parameters, , and [C/Fe] for stars towards the inner Galaxy and Sgr. We summarise below the main conclusions of this work: * We provide a clean list of low-metallicity (≤-1.5) members stars selected according to their RV from AAT and proper motion and on-sky position from Gaia, as in <cit.>, and updated to DR3 (Figure <ref>). A table updated to Gaia DR3 will be available as online material. * The metal-poor (≤-1.5) population (PIGS/AAT) of Sgr has a larger velocity dispersion and systemic RV than the metal-rich (≥-0.6, APOGEE) as shown in Figures <ref> and <ref>. Additionally, the velocity dispersion and the systemic RV increase in the outer regions for both populations. This effect might be caused by the contribution of various mechanisms, such as the complex structure in Sgr (MR/disc + MP/halo), the outside-in star formation, and the extreme Galactic tidal perturbations acting in the system. * The average [C/Fe] of Sgr is similar to the range displayed by the other classical DGs (Figure <ref>). However, the level of [C/Fe] is higher in Sgr than in Carina and Sculptor. This can be explained by differences in the IMF of these systems, with the ejecta of more energetic SNe II events retained in the ISM of Carina and Sculptor. * The average [C/Fe] of Sgr, and of the other classical DGs, is lower than in the MW at fixed , either when compared to inner Galactic or halo-like stars (Figure <ref>). The ISM of classical DGs might have been able to retain the ejecta of energetic events, such as hypernovae, while this would not have been the case for the building blocks of the Galaxy, where stochasticity might have played an important role. In this scenario, classical DGs should display the imprint of Population III and II high energy SNe II, which would act to lower the average [C/Fe]. Instead, less energetic events, faint- and core-collapse SNe II from a more pristine population should be imprinted in the stars of the MW building blocks, hence the higher [C/Fe]. * SNe Ia can also lower the average [C/Fe]. This kind of event would be already present at ∼-2.0 in classical DGs and absent in the MW stars at the same metallicities. Indications of the SNe Ia contributions in Sgr starting at -2.0<<-1.5 are the lower median [C/Fe] at these metallicities vs the higher [C/Fe] at lower metallicities (see Figure <ref>) and also the lower [C/Fe] in the inner regions (see Figure <ref>), inhabited by a slightly more metal-rich population. The presence of SNe Ia at the aforementioned metallicities would also be confirmed by the trend of [Co/Fe] found by <cit.>. * We find a positive [C/Fe] gradient of ∇[C/Fe]∼ 0.23 dex r_h^-1 or ∼ 8.8× 10^-2 dex kpc^-1 or ∼ 6.8× 10^-4 dex arcmin^-1 for stars with -2.0<<-1.5 (Figure <ref>), which we interpret as the effect of contributions by SNe Ia. * We identify 4 new CEMP stars in Sgr. 
Figure <ref> suggests that the empirical distinction between CEMP-s and CEMP-no solely based on A(C) does not work well for Sgr and the classical DGs. We therefore cannot reach definitive conclusions on the nature of the new CEMP stars, however, we may propose that they are likely of the CEMP-s kind given their and high A(C). * The AAT spectra of two carbon-rich candidates, Pristine_185524.38-291422.5 and Pristine_190122.55-304744.3, are re-analysed with the grid of synthetic spectra (Figure <ref>) because they had high χ^2 in the fit and were at the edge of the grid. They have [Fe/H] ∼ -1.5 and -2.2 with very high carbon abundances (A(C) ∼ 8.8 and 8.0, respectively), making them CH- or CEMP-s candidates. The C-bands of the former star strongly affect its colour, magnitude and its position in the Pristine colour-colour diagram (Figures <ref> and <ref>). Similar stars could have been missed in other metal-poor (DG) selections as well. * The photometric selection effects in the various PIGS fields that include Sgr targets are discussed, showing there is a bias against CEMP stars in the sample (Figure <ref>), specifically those of the CEMP-s (binary interaction) type. CEMP-no stars (connected to early chemical evolution), however, are less likely to have been excluded from the selection and their frequency in our sample should be largely unbiased. * Following the classical definition of CEMP stars ([C/Fe] >+0.7), the fraction of CEMP stars in our sample is very low: ∼3% for < -2.0 and ∼6% for < -2.5. However, the low mean abundance of [C/Fe] in Sgr (and other classical DGs) as well as the clear presence of outliers of the distribution at “intermediate” carbon abundances, lead us to propose a new definition for CEMP stars. Rather than a fixed threshold, the limit should depend on the average [C/Fe] of a given system. For Sgr, stars with [C/Fe] ≳+0.35 can be considered CEMP in this case, as they are outliers from the bulk of the system's distribution (see Figure <ref>). The new frequency of CEMP in Sgr according to this definition would be ∼12% for -2.5 < < -2.0 and ∼30-35% for < -2.5, much more in agreement with frequencies in the MW. This work, which complements the high-resolution investigation by <cit.>, provides a novel glimpse into the early chemical evolution of Sgr by exploring its level of carbon. These works will be beneficial for upcoming spectroscopic surveys, for example 4DWARFS <cit.>, which will observe a larger number of stars in the Sgr core and in its streams. We acknowledge and respect the lək^ wəŋən peoples on whose traditional territory the University of Victoria stands and the Songhees, Esquimalt and WSÁNEĆ peoples whose historical relationships with the land continue to this day. We thank the Australian Astronomical Observatory, which have made the PIGS spectroscopic follow-up observations used in this work possible. We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. We thank Vini Placco for calculating the carbon evolutionary corrections. We thank Young Sun Lee for providing the SSPP synthetic spectra. FS and KAV thank the National Sciences and Engineering Research Council of Canada for funding through the Discovery Grants and CREATE programs. AAA acknowledges support from the Herchel Smith Fellowship at the University of Cambridge and a Fitzwilliam College research fellowship supported by the Isaac Newton Trust. 
SV thanks ANID (Beca Doctorado Nacional, folio 21220489) and Universidad Diego Portales for the financial support provided. SV acknowledges the Millennium Nucleus ERIS (ERIS NCN2021017) and FONDECYT (Regular number 1231057) for the funding. NFM gratefully acknowledges support from the French National Research Agency (ANR) funded project “Pristine” (ANR-18-CE31-0017), along with funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834148). ES acknowledges funding through VIDI grant “Pushing Galactic Archaeology to its limits” (with project number VI.Vidi.193.093), which is funded by the Dutch Research Council (NWO). This research has been partially funded through a Spinoza award by NWO (SPI 78-411). The spectroscopic follow-up used in this work was based on selection from observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada–France–Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France <cit.>. This work made extensive use of TOPCAT <cit.>. Author contribution statement. FS led the analysis and the various discussions in this work, contributed to writing most of this draft, and created most of the Figures. AAA led the PIGS/AAT target selection and observations, co-led the PIGS/AAT spectroscopic analysis with David Aguado (not a co-author here), analysed the cool candidate CEMP stars discussed in Section <ref>, created the respective section with figures, was closely involved in shaping the manuscript, and contributed to the scientific discussion. SV contributed to the discussion and revision of the paper. MM identified one of the cool candidate CEMP stars using photometry and contributed to the discussion. RL provided the [C/Fe] dataset of the various dwarf galaxies. KAV, NFM, JFN, and ES provided insightful scientific and editorial comments on the manuscript.
http://arxiv.org/abs/2406.18529v1
20240626175713
Confident Natural Policy Gradient for Local Planning in $q_π$-realizable Constrained MDPs
[ "Tian Tian", "Lin F. Yang", "Csaba Szepesvári" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT The constrained Markov decision process (CMDP) framework emerges as an important reinforcement learning approach for imposing safety or other critical objectives while maximizing cumulative reward. However, the current understanding of how to learn efficiently in a CMDP environment with a potentially infinite number of states remains under investigation, particularly when function approximation is applied to the value functions. In this paper, we address the learning problem given linear function approximation with q_π-realizability, where the value functions of all policies are linearly representable with a known feature map, a setting known to be more general and challenging than other linear settings. Utilizing a local-access model, we propose a novel primal-dual algorithm that, after Õ(poly(d) ϵ^-3) queries (here Õ(·) hides logarithmic factors), outputs with high probability a policy that strictly satisfies the constraints while nearly optimizing the value with respect to a reward function. Here, d is the feature dimension and ϵ > 0 is a given error. The algorithm relies on a carefully crafted off-policy evaluation procedure to evaluate the policy using historical data, which informs policy updates through policy gradients and conserves samples. To our knowledge, this is the first result achieving polynomial sample complexity for CMDP in the q_π-realizable setting. § INTRODUCTION In the classical reinforcement learning (RL) framework, optimizing a single objective above all else can be challenging for safety-critical applications like autonomous driving, robotics, and Large Language Models (LLMs). For example, it may be difficult for an LLM agent to optimize a single reward that fulfills the objective of generating helpful responses while ensuring that the messages are harmless <cit.>. In autonomous driving, designing a single reward often requires reliance on complex parameters and hard-coded knowledge, making the agent less efficient and adaptive <cit.>. Optimizing a single objective in motion planning involves combining heterogeneous quantities like path length and risk, which depend on conversion factors that are not necessarily straightforward to determine <cit.>. The constrained Markov decision process (CMDP) framework <cit.> emerges as an important RL approach for imposing safety or other critical objectives while maximizing cumulative reward <cit.>. In addition to the single reward function optimized under a standard Markov decision process (MDP), a CMDP considers multiple reward functions, with one designated as the primary reward function. The goal of a CMDP is to find a policy that maximizes the primary reward function while satisfying constraints defined by the other reward functions. Although the results of this paper can be applied to multiple constraint functions, for simplicity of presentation we consider the CMDP problem with only one constraint function. Our current understanding of how to learn efficiently in a CMDP environment with a potentially infinite number of states remains limited, particularly when function approximation is applied to the value functions. 
Most works studying the sample efficiency of a learner have focused on the tabular or simple linear CMDP setting (see related works for more details). However, there has been little work in the more general settings such as the q_π-realizability, which assumes the value function of all policies can be approximated by a linear combination of a feature map with unknown parameters. Unlike Linear MDPs <cit.>, where the transition model is assumed to be linearly representable by a feature map, q_π-realizability only imposes the assumption on the existence of a feature map to represent value functions of policies. Nevertheless, the generality of q_π-realizability comes with a price, as it becomes considerably more challenging to design effective learning algorithms, even for the unconstrained settings. For the general online setting, we are only aware of one sample-efficient MDP learning algorithm <cit.>, which, however, is computationally inefficient. To tackle this issue, a line of research <cit.> applies the local-access model, where the RL algorithm can restart the environment from any visited states - a setting that is also practically motivated, especially when a simulator is provided. The local-access model is more general than the generative model <cit.>, which allows visitation to arbitrary states in an MDP. The local-access model provides the ability to unlock both the sample and computational efficiency of learning with q_π-realizability for the unconstrained MDP settings. However, it remains unclear whether we can harness the power of local-access for CMDP learning. In this paper, we present a systematic study of CMDP for large state spaces, given q_π-realizable function approximation in the local-access model. We summarize our contributions as follows: * We design novel, computationally efficient primal-dual algorithms to learn CMDP near-optimal policies with the local-access model and q_π-realizable function classes. The algorithms can return policies with small constraint violations or even no constraint violations and can handle model misspecification. * We provide theoretical guarantees for the algorithms, showing that they can compute an ϵ-optimal policy with high probability, making no more than Õ((d) ϵ^-3) queries to the local-access model. The returned policies can strictly satisfy the constraint. * Under the misspecification setting with a misspecification error ω, we show that our algorithms achieve an Õ(ω) + ϵ sub-optimality with high probability, maintaining the same sample efficiency of Õ((d) ϵ^-3). § RELATED WORKS Most provably efficient algorithms developed for CMDP are in the tabular and linear MDP settings. In the tabular setting, most notably are the works by <cit.>. Work by <cit.> have showed their algorithm uses no more than Õ(SA/(1-γ)^3 ϵ^2) samples to achieve relaxed feasibility and Õ(SA/(1-γ)^5 ζ^2 ϵ^2) samples to achieve strict feasibility. Here, the γ∈ [0,1) is the discount factor and ζ∈ (0, 1/1-γ] is the Slater's constant, which characterizes the size of the feasible region and hence the hardness of the CMDP. In their work, they have also provided a lower bound of Ω(SA/(1-γ)^5 ζ^2 ϵ^2) on the sample complexity under strict feasibility. However, all the aforementioned results all scale polynomially with the cardinality of the state space. For problems with large or possibly infinite state spaces, works by <cit.> have used linear function approximations to address the curse of dimensionality. 
All these works, except <cit.>, make the linear MDP assumption, where the transition function is linearly representable. Under the generative model, for the infinite horizon discounted case, the online algorithm proposed in <cit.> achieves a regret of Õ(√(d)/√(K)) with Õ(√(d)/√(K)) constraint violation, where K is the number of iterations. Work by <cit.> is able to achieve a faster O(ln(K)/K) convergence rate for both the reward suboptimality and constraint violation. For the online access setting under linear MDP assumption, <cit.> achieve a regret of Õ(poly(d) poly(H) √(T)) with Õ(poly(d) poly(H) √(T))) violations, where T is the number of episodes and H is the horizon term. <cit.> presented an algorithm that achieves a sample complexity of Õ(d^3 H^6/ϵ^2), where d is the dimension of the feature space and H is the horizon term in the finite horizon CMDP setting. In the more general setting under q_π-realizability, the best-known upper bounds are in the unconstrained MDP setting. In the unconstrained MDP setting with access to a local-access model, early work by <cit.> have developed a tree-search style algorithms under this model, albeit in the tabular setting. Under v^*-realizability, <cit.> presented a planner that returns an ϵ-optimal policy using O((dH/ϵ)^||) queries to the simulator. More works by <cit.> have considered the local-access model with q_π-realizability assumption. Recent work by <cit.> have shown their algorithm can return a near-optimal policy that achieves a sample complexity of Õ(d/(1-γ)^4 ϵ^2). § PROBLEM FORMULATION §.§ Constrained MDP We consider an infinite-horizon discounted CMDP (, , P, γ, r, c, b, s_0) consisting a possibly infinite state space with finite actions , a transition probability function P: ×→_, a discount factor γ∈ [0,1), a reward function r:×→ [0,1], a constraint function c:×→ [0,1], a constraint threshold b, and a fixed initial state s_0 ∈. Given any stationary randomized policy π : →_ and the reward function, we define the action-value function with respect to a state-action pair (s,a) as q_π^r(s,a) ≐[∑_t=0^∞γ^t r(S_t, A_t) | S_0 = s, A_0 = a ]. The expectation is taken over randomness of the trajectory induced by the interaction between policy π and transition function P. The action-value function of the constraint function q_π^c is defined similarly to the reward function. For the state-value function of a state s ∈, we have π(s) ≐π(· | s), π(s, ·). Likewise, the value function of the constraint function v_π^c is defined similarly to the reward function. The objective of the CMDP is to find a policy π that maximizes the state-value function π starting from a given state s_0 ∈, while ensuring that the constraint π(s_0) ≥ b is satisfied: max_π∈Π_randπ(s_0) s.t. π(s_0) ≥ b, where Π_rand is the set of stationary randomized policies of the CMDP. We assume the existence of a feasible solution to <ref> and let π^* denote the solution to <ref>. A quantity unique to CMDP is the Slater's constant, which is denoted as ζ≐max_ππ(s_0) - b. Slate's constant characterizes the size of the feasibility region, and hence the hardness of the problem. Because the state space can be large or possibly infinite, we use linear function approximation to approximate the values of stationary randomized policies. 
Let ϕ: 𝒮×𝒜→ℝ^d be a feature map; we make the following assumption: (q_π-realizability) There exist B > 0 and a misspecification error ω≥ 0 such that for every π∈Π_rand there exists a weight vector w_π∈ℝ^d with ‖w_π‖_2 ≤ B satisfying |q_π(s,a) - ⟨w_π, ϕ(s,a)⟩| ≤ω for all (s,a) ∈𝒮×𝒜. We assume access to a local-access model, where the agent can only query the simulator at states that have been encountered in previous simulations. Then, our goal is to design an algorithm that returns a near-optimal mixture policy π̂, whose performance can be characterised in two ways. For a given target error ϵ > 0, relaxed feasibility requires a returned policy π̂ whose sub-optimality gap v^r_π^*(s_0) - v^r_π̂(s_0) is bounded by ϵ, while allowing for a small constraint violation. Formally, we require π̂ such that v^r_π^*(s_0) - v^r_π̂(s_0) ≤ϵ s.t. v^c_π̂(s_0) ≥ b - ϵ. On the other hand, strict feasibility requires a returned policy π̂ whose sub-optimality gap v^r_π^*(s_0) - v^r_π̂(s_0) is bounded by ϵ while not allowing any constraint violation. Formally, we require π̂ such that v^r_π^*(s_0) - v^r_π̂(s_0) ≤ϵ s.t. v^c_π̂(s_0) ≥ b. §.§ Notations For any integer i, we let [i] = { 1, ⋯, i} and [0, i] = {0, 1, ⋯, i}. For any integers i_1, i_2, we let [i_1, ⋯, i_2] mean {i_1, i_1 + 1,⋯, i_2}. For any real number x ∈ℝ, we let ⌊x⌋ denote the largest integer i such that i ≤ x. For a vector of values x ∈ℝ^d, we use ‖x‖_1 = ∑_i |x_i|, ‖x‖_2 = √(∑_i x_i^2), and ‖x‖_∞ = max_i |x_i|. We let Proj_[a_1, a_2](λ) ≐argmin_p ∈ [a_1, a_2] |λ - p|, and clip_[a_1, a_2](y) ≐min{max{y,a_1}, a_2 }. For any two positive numbers a, b, we write a = O(b) if there exists an absolute constant c > 0 such that a ≤ c b. We use Õ to hide any polylogarithmic terms. § CONFIDENT-NPG-CMDP, A LOCAL-ACCESS ALGORITHM FOR CMDP §.§ A primal-dual approach We approach solving the CMDP problem by framing it as an equivalent saddle-point problem: max_πmin_λ≥ 0 L(π, λ), where L: Π_rand×ℝ_+→ℝ is the Lagrange function. For a policy π∈Π_rand and a Lagrange multiplier λ∈ℝ_+, we have L(π, λ) ≐ v^r_π(s_0) + λ(v^c_π(s_0) - b). Let (π^*, λ^*) be the solution to this saddle-point problem. By an equivalence to an LP formulation and strong duality <cit.>, π^* is the policy that achieves the optimal value in the CMDP as defined in <ref>, and the optimal Lagrange multiplier is λ^* ≐argmin_λ≥ 0 L(π^*, λ). Therefore, solving <ref> is equivalent to finding the saddle point of the Lagrange function. A typical primal-dual algorithm that finds the saddle point proceeds in an iterative fashion, alternating between a policy update using a policy gradient and a dual-variable update using mirror descent. The policy gradient is computed with respect to the primal value q^p_π_k, λ_k = q^r_π_k + λ_k q^c_π_k, and the mirror-descent step is computed with respect to the constraint value v^c_π_k+1(s_0) = ⟨π_k+1(· | s_0), q^c_π_k+1(s_0, ·)⟩. Given that we do not have access to an oracle for exact policy evaluations, we must collect data to estimate the primal and constraint values. If we have the least-squares estimates of q^r_π_k and q^c_π_k, denoted by Q̂^r_k and Q̂^c_k, respectively, then we can compute the least-squares estimate Q̂^p_k = Q̂^r_k + λ_k Q̂^c_k to be the estimate of the primal value q^p_π_k, λ_k. Additionally, we can compute V̂^c_k+1(s_0) = ⟨π_k+1(· | s_0), Q̂^c_k(s_0, ·)⟩ to be the least-squares estimate of the constraint value v^c_π_k+1(s_0). Then, for any given (s,a) ∈𝒮×𝒜, our algorithm makes a policy update of the following form: π_k+1(a|s) ∝π_k(a|s) exp(η_1 Q̂^p_k(s,a)), followed by a dual-variable update of the following form: λ_k+1←λ_k - η_2 (V̂^c_k+1(s_0) - b), where η_1 and η_2 are the step sizes. 
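To make the iteration above concrete, the following is a minimal sketch of the primal-dual loop (softmax policy update driven by Q̂^p_k = Q̂^r_k + λ_k Q̂^c_k, followed by a projected dual step) on a small synthetic CMDP. Exact tabular policy evaluation stands in for the least-squares estimates of the actual algorithm, and the toy transition kernel, rewards, threshold b, step sizes, and dual cap U are illustrative assumptions rather than values prescribed by the paper.

```python
import numpy as np

# Toy CMDP (3 states, 2 actions); P, r, c, b and all constants below are illustrative assumptions.
rng = np.random.default_rng(0)
nS, nA, gamma, s0, b = 3, 2, 0.9, 0, 3.0
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = distribution over next states
r = rng.uniform(size=(nS, nA))                  # primary reward r(s, a) in [0, 1]
c = rng.uniform(size=(nS, nA))                  # constraint reward c(s, a) in [0, 1]

def q_pi(pi, reward):
    """Exact q_pi for a tabular policy; stands in for the least-squares estimates Q-hat."""
    P_pi = np.einsum("san,sa->sn", P, pi)       # state-to-state kernel under pi
    r_pi = np.einsum("sa,sa->s", reward, pi)
    v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    return reward + gamma * P @ v               # q(s, a) = reward(s, a) + gamma * E[v(S')]

K, eta1, eta2, U = 2000, 0.05, 0.05, 10.0       # iterations, step sizes, cap on the dual variable
pi = np.full((nS, nA), 1.0 / nA)                # pi_0 = uniform policy
lam, avg_pi = 0.0, np.zeros((nS, nA))
for k in range(K):
    q_p = q_pi(pi, r) + lam * q_pi(pi, c)       # primal value q^p = q^r + lam * q^c
    pi = pi * np.exp(eta1 * q_p)                # pi_{k+1}(a|s) proportional to pi_k(a|s) exp(eta1 Q^p)
    pi /= pi.sum(axis=1, keepdims=True)
    v_c = pi[s0] @ q_pi(pi, c)[s0]              # constraint value of pi_{k+1} at s0
    lam = np.clip(lam - eta2 * (v_c - b), 0.0, U)   # projected dual (mirror-descent) step
    avg_pi += pi / K                            # running average, a stand-in for the mixture policy

print("reward value:", avg_pi[s0] @ q_pi(avg_pi, r)[s0],
      "constraint value:", avg_pi[s0] @ q_pi(avg_pi, c)[s0], ">= b =", b)
```

With exact evaluation the dual variable settles near a value for which the averaged policy roughly saturates the constraint; the paper's contribution is to retain this behaviour when the q-values must instead be estimated from local-access rollouts.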
§.§ Core set and least-squares estimates To construct the least-squares estimates, let us assume for now that we are given a set of state-action pairs, which we call the core set 𝒞. By organizing the feature vectors of the state-action pairs row-wise into a matrix Φ_𝒞∈ℝ^|𝒞| × d, we can write the covariance matrix as V(𝒞, α) ≐Φ_𝒞^⊤Φ_𝒞 + α I. For each state-action pair in 𝒞, suppose we have run Monte Carlo rollouts using the rollout policy π with the local-access simulator to obtain an averaged Monte Carlo return, collected in the vector q̄. Then for any state-action pair (s,a) ∈𝒮×𝒜, the least-squares estimate of the action value q_π is defined to be Q̂(s,a) ≐⟨ϕ(s,a), V(𝒞, α)^-1Φ_𝒞^⊤q̄⟩. Since the algorithm can only rely on estimates for policy improvement and constraint evaluation, it is imperative that these estimates closely approximate the true action values. In the local-access setting, an algorithm may not be able to visit all state-action pairs, so we cannot guarantee that the estimates will closely approximate the true action values for all state-action pairs. However, we can ensure the accuracy of the estimates for a subset of states. Given 𝒞, let us define the set of state-action pairs whose features satisfy the condition ‖ϕ(s,a)‖_V(𝒞, α)^-1≤ 1; we call this set the action-cover of 𝒞: ActionCov(𝒞) ≐{ (s,a) ∈𝒮×𝒜: ‖ϕ(s,a)‖_V(𝒞, α)^-1≤ 1 }. Following from the action-cover, we have the cover of 𝒞. For a state s to be in the cover of 𝒞, the pair (s,a) must be in the action-cover of 𝒞 for all of its actions a ∈𝒜. In other words, Cov(𝒞) ≐{s ∈𝒮 : ∀ a ∈𝒜, (s,a) ∈ ActionCov(𝒞)}. For any s ∈ Cov(𝒞), we can ensure that the least-squares estimate Q̂(s,a) defined by <ref> closely approximates the true action value q_π(s,a) for all a ∈𝒜. However, such a core set is not available before the algorithm is run. Therefore, we need an algorithm that builds a core set incrementally in the local-access setting while planning. To achieve this, we build our algorithm on CAPI-QPI-Plan <cit.>, using a similar methodology for core-set building and data gathering. For the pseudo-code of our algorithm Confident-NPG-CMDP, please see <ref>. §.§ Core set building and data gathering to control the accuracy of the least-squares estimates Confident-NPG-CMDP does not collect data in every iteration but collects data at intervals of m = O ( ln(1+c) (ϵ^-1 (1-γ)^-1) ) iterations, where c ≥ 0 is set by the user. Setting c to a non-zero value places an upper bound of 1+c on the per-trajectory importance sampling ratio used in the off-policy evaluation, and m is then set accordingly. The total number of data collection phases is L = K/m. When c is set to 0, we have L = K, recovering a purely on-policy version of the algorithm. In the iteration that corresponds to a data collection phase, the algorithm performs on-policy evaluation. Between any two data collection phases, the algorithm performs m-1 off-policy evaluations reusing the data collected during the on-policy iteration. The importance sampling ratio is used in the LSE subroutine (<ref> in <ref>) for computing an unbiased estimate q̄_k used in <ref>. 
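A small numerical sketch of the regularized least-squares estimate and the coverage test defined in this subsection follows. The feature map, the core set, the noisy Monte Carlo targets q̄, and all constants are synthetic assumptions; the point is only to show how V(𝒞, α), Q̂, ActionCov and Cov fit together.

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha, B = 8, 0.01, 1.0

w_true = rng.normal(size=d)
w_true *= B / np.linalg.norm(w_true)            # q_pi(s, a) = <w_true, phi(s, a)> (no misspecification)

def phi(s, a):                                  # illustrative bounded feature map, ||phi|| = 1
    f = np.cos(np.arange(1, d + 1) * (0.7 * s + 1.3 * a + 0.5))
    return f / np.linalg.norm(f)

core = [(s, a) for s in range(3) for a in (0, 1)]          # core set C (both actions of states 0..2)
Phi = np.array([phi(s, a) for s, a in core])               # |C| x d feature matrix
q_bar = Phi @ w_true + 0.01 * rng.normal(size=len(core))   # averaged Monte Carlo returns (noisy)

V = Phi.T @ Phi + alpha * np.eye(d)                        # V(C, alpha) = Phi^T Phi + alpha I
w_hat = np.linalg.solve(V, Phi.T @ q_bar)
Q_hat = lambda s, a: phi(s, a) @ w_hat                     # Q(s, a) = <phi(s, a), V^-1 Phi^T q_bar>

def weighted_norm(s, a):                                   # ||phi(s, a)||_{V(C, alpha)^-1}
    f = phi(s, a)
    return np.sqrt(f @ np.linalg.solve(V, f))

def covered(s, actions=(0, 1)):                            # s in Cov(C) iff every action passes the test
    return all(weighted_norm(s, a) <= 1.0 for a in actions)

for s in (0, 2, 9):                                        # states 0 and 2 are in C; state 9 is not
    err = max(abs(Q_hat(s, a) - phi(s, a) @ w_true) for a in (0, 1))
    print(f"state {s}: covered={covered(s)}, "
          f"max norm={max(weighted_norm(s, a) for a in (0, 1)):.2f}, |Q_hat - q_pi|={err:.3f}")
```

States whose features are well represented in the core set pass the certainty check and inherit the accuracy guarantee; features with a large component outside the span of Φ_𝒞 typically fail it, which is exactly what triggers the core-set extension described next.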
Similar to CAPI-QPI-Plan of <cit.>, Confident-NPG-CMPD maintains a set of core sets (_l)_l ∈ [L+1] and a set of policies (π_k)_k ∈ [K]. Due to the off-policy evaluations, Confident-NPG-CMDP also maintains a set of data (D_l)_l ∈ [L]. Initially, all core sets are set to the empty set, all policies are set to the uniform policy, and all D_l are empty. The program starts by adding any feature vector of s_0, a for all a ∈ that are not in the action-cover of C_0. Those feature vectors are considered informative. For the informative state-action pairs, the algorithm adds an entry in D_0 and set the value to the placeholder ⊥. Then the problem finds an l ∈ [L] in <ref> of <ref>, such that the corresponding D_l has an entry that does not have any roll-out data. If there are multiple phases with the placeholder value, then start with the lowest level phase l ∈ [L] such that D_l contains the placeholder value. When such a phase is found, a running phase starts, and is denoted by ℓ in <ref>. We note that only during a running phase, will the next phase core set _ℓ+1 be extended by <ref>, the policies be updated by <ref>, and dual variable updated by <ref>. During the roll-out performed in Gather-data subroutine (<ref> in <ref>), if any s,a ∈× is not in the action-cover of _ℓ, it is added to _0. Once a state-action pair is added to a core set by <ref> and <ref>, it remains in that core set for the duration of the algorithm. This means that any _l, l ∈ [L+1] can grow in size and be extended multiple times during the execution of the algorithm. When any new state-action pair is added to _l, the least-square estimate should be recomputed with the newly added information. This would mean the policy need to be updated and data rerun. We can avoid restarting the data collection procedure by following a similar update strategy of CAPI-QPI-Plan <cit.> for updating the policy and the dual variable. When a new state-action pair is added to _ℓ, for each corresponding iteration k ∈ [k_ℓ, ⋯, k_ℓ+1-1], the least-squares estimate is recomputed by the LSE subroutine (<ref> in <ref>) with the newly added information. However, the refreshed least-squares estimate k is only used to update the policy of states that are newly covered by _ℓ (i.e., s ∈ Cov(_ℓ) ∖ Cov(_ℓ+1)) using the update <ref>. For any states that are already covered by _ℓ (i.e., s ∈ Cov(_ℓ+1)), the policy remains unchanged as it was first updated by <ref> with the least-squares estimate at that time. The primal estimate k in line  <ref> of <ref> captures the value with which π_k+1 is updated. We want the accuracy guarantee of k(s,a) with respect to π_k, λ_k(s,a) not just for π_k but for an extended set of policies defined as follows: For any policy π from the set of randomized policies Π_rand and any subset 𝒳⊆, the extended set of policies is defined as: Π_π, 𝒳 = {π' ∈Π_rand|π(· | s) = π'(· | s) for all s ∈𝒳}. By maintaining a set of core sets, gathering data via the Gather-data subroutine (<ref> in <ref>), making policy updates by line  <ref>, and dual variable updates by <ref>, we have: Whenever the LSE-subroutine on line  <ref> of Confident-NPG-CMDP is executed, for all k ∈ [k_ℓ, ⋯, k_ℓ+1-1], for all s ∈ Cov(C_ℓ) and a ∈, the least-square estimate k(s,a) satisfies the following, | k(s,a) - q_π_k', λ_k^p(s,a) | ≤ϵ' for all π_k' ∈Π_π_k, Cov(_ℓ), where ϵ' = (1+U)(ω + √(α) B + (ω + ϵ) √(d̃)) with d̃ = Õ(d) and U is an upper bound on the optimal Lagrange multiplier. 
Likewise, | k(s,a) - q_π_k'^c(s,a) | ≤ω + √(α) B + (ω + ϵ) √(d̃) for all π_k' ∈Π_π_k, Cov(_ℓ), The accuracy guarantee of <ref> and <ref> are maintained throughout the execution of the algorithm. By lemma 4.5 of <cit.> (restated in <ref> in <ref>), for any _l^past to be a past version of _l and π_k^past be the corresponding policy associated with _l^past, then we have Π_π_k, Cov(_l)⊆Π_π_k^past, Cov(_l^past). If we have <ref> and <ref> being true for any policy from Π_π_k^past, Cov(_l^past), then it will also be true for any future π_k. § CONFIDENT-NPG-CMDP SATISFIES RELAXED-FEASIBILITY With the accuracy guarantee of the least-square estimates, we prove that at the termination of Confident-NPG-CMDP, the returned mixture policy K satisfies relaxed-feasibility. We note that because of the execution of <ref> in <ref>, at termination, one can show using induction that all the _l for l ∈ [L+1] will be the same. Therefore, the cover of _l for all l ∈ [L+1] are also equal. Thus, it is sufficient to only consider _0 at the termination of the algorithm. By <ref> of <ref>, we have ensured s_0 ∈ Cov(_0). By the primal-dual approach discussed in <ref>, we have reduced the CMDP problem into an unconstrained problem with a single reward of the form r_λ = r + λ c. Therefore, we can apply the value-difference lemma (<ref>) of Confident-NPG in the single reward setting (see <ref>) to Confident-NPG-CMDP. Then, we can show the value difference between π^* and K can be bounded, which leads to: Let δ∈ (0,1] be the failure probability, ϵ > 0 be the target accuracy, and s_0 be the initial state. Assuming for all s ∈ Cov(_0) and all a ∈, |k(s,a) - π_k', λ_k(s,a)| ≤ϵ' and |k(s_0,a) - π_k'(s_0,a)| ≤ (ω + √(α) B + (ω + ϵ) √(d̃)) for all π_k' ∈Π_π_k, Cov(_0), then, with probability 1-δ, Confident-NPG-CMDP returns a mixture policy π̅_K that satisfies the following, π^*(s_0) - K(s_0) ≤5 ϵ'/1-γ + (√(2 ln(A))+1)(1+U)/(1-γ)^2 √(K), b - K(s_0) ≤ [b-K(s_0)]_+ ≤5 ϵ'/(1-γ)(U-λ^*) + (√(2 ln(A)) +1)(1+U)/(1-γ)^2(U-λ^*) √(K), where ϵ' ≐ (1+U)(ω + (√(α)B + (ω + ϵ)√(d̃))) with d̃ = Õ(d), and U is an upper bound on the optimal Lagrange multiplier. By setting the parameters to appropriate values, it follows from <ref> that we obtain the following result: With probability 1-δ, the mixture policy K = 1/k∑_k=0^K-1π_k returned by confident-NPG-CMDP ensures that π^*(s_0) - K(s_0) = Õ( √(d) (1-γ)^-2ζ^-1ω) + ϵ, K(s_0) ≥ b - (Õ(√(d) (1-γ)^-2ζ^-1ω) + ϵ). if we choose n = Õ(ϵ^-2ζ^-2(1-γ)^-4d), α = O (ϵ^2 ζ^2 (1-γ)^4), K= Õ(ϵ^-2ζ^-2(1-γ)^-6), η_1 = Õ( (1-γ)^2 ζ K^-1/2), η_2 = ζ^-1 K^-1/2, H = Õ ((1-γ)^-1), m = Õ(ϵ^-1ζ^-2 (1-γ)^-2), and L = K/ = Õ(ϵ^-1(1-γ)^-4) total number of data collection phases. Furthermore, the algorithm utilizes at most Õ(ϵ^-3ζ^-3 d^2 (1-γ)^-11) queries in the local-access setting. Remark 1: In the presence of misspecification error ω > 0, the reward suboptimality and constraint violation is Õ(ω) + ϵ with the same sample complexity. Remark 2: Suppose the Slader's constant ζ is much smaller than the suboptimality bound of Õ(ω) + ϵ, and it is reasonable to set ζ = ϵ. Then, the sample complexity is Õ(ϵ^-6 (1-γ)^-11d^2), which is independent of ζ. Remark 3: We note that our algorithm requires the knowledge of the Slater's constant ζ, which can be estimated by another algorithm. § CONFIDENT-NPG-CMDP SATISFIES STRICT-FEASIBILITY In this section, we show that the returned mixture policy K by Confident-NPG-CMDP satisfies the strict feasibility. 
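Before developing that argument, the following sketch illustrates how the parameter choices of the corollary above, and the tightened constraint threshold used for strict feasibility below, might be instantiated. The absolute constants and polylogarithmic factors hidden by the O(·)/Õ(·) notation are not specified in the text, so the constant C and the concrete expressions here are placeholders, not the paper's prescriptions.

```python
import math

def relaxed_feasibility_params(eps, zeta, gamma, d, delta=0.05, C=1.0):
    """Illustrative instantiation of the corollary's parameter choices (relaxed feasibility).
    C and the log term stand in for the constants/polylogs hidden by O(.) and O~(.)."""
    one = 1.0 - gamma
    log = math.log(max(d, 2) / delta)
    n     = math.ceil(C * d * log / (eps ** 2 * zeta ** 2 * one ** 4))  # rollouts per core-set entry
    K     = math.ceil(C * log / (eps ** 2 * zeta ** 2 * one ** 6))      # primal-dual iterations
    H     = math.ceil(math.log(4.0 / (eps * one)) / one)                # horizon: gamma^H/(1-gamma) <~ eps/4
    m     = math.ceil(C * log / (eps * zeta ** 2 * one ** 2))           # iterations between data collections
    eta1  = one ** 2 * zeta / math.sqrt(K)
    eta2  = 1.0 / (zeta * math.sqrt(K))
    alpha = C * (eps * zeta * one ** 2) ** 2
    return dict(n=n, K=K, H=H, m=m, L=math.ceil(K / m), eta1=eta1, eta2=eta2, alpha=alpha)

def tightened_threshold(b, eps, zeta, gamma, C=1.0):
    """Surrogate constraint level for strict feasibility: b' = b + O(eps * (1 - gamma) * zeta)."""
    return b + C * eps * (1.0 - gamma) * zeta

print(relaxed_feasibility_params(eps=0.1, zeta=0.2, gamma=0.9, d=10))
print("tightened threshold b':", tightened_threshold(b=3.0, eps=0.1, zeta=0.2, gamma=0.9))
```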
In order to obtain an ϵ-optimal policy that satisfies constraint: K≥ b, we consider a more conservative CMDP that we call it the surrogate CMDP. The surrogate CMDP is defined by the tuple (, , P, r, c, b', s_0, γ), where b' ≐ b + for a ≥ 0. We note that b' ≥ b and the optimal policy of this surrogate CMDP is defined as follows: ∈π(s_0) s.t. π(s_0) ≥ b' . Notice that is a more conservative policy than π^*, where π^* is the optimal policy of the original CMDP objective <ref>. By solving this surrogate CMDP using Confident-NPG-CMDP and applying the result of <ref>, we obtain a K that would satisfy π̅^*(s_0) - K(s_0) ≤ϵ̅ s.t. K(s_0) ≥ b' - ϵ̅, where ϵ̅= Õ (ω) + ϵ. Expanding out b', we have K(s_0) ≥ b + - ϵ̅. If we can set such that - ϵ̅≥ 0, then K(s_0) ϵ≥ b, which satisfies strict-feasibility. We show this formally in the next theorem, where = O(ϵ (1-γ) ζ) and is incorporated into the algorithmic parameters for ease of presentation. With probability 1-δ, a target ϵ > 0, the mixture policy K returned by confident-NPG-CMDP ensures that π^*(s_0) - K(s_0) ≤ϵ and K(s_0) ≥ b, if assuming the misspecification error ω≤ϵζ^2 (1-γ)^3 (1+√(d̃))^-1, and if we choose α = O (ϵ^2 ζ^3 (1-γ)^5 ), K = Õ(ϵ^-2ζ^-4 (1-γ)^-8), n = Õ(ϵ^-2ζ^-4(1-γ)^-8 d), H = Õ((1-γ)^-1), m = Õ(ϵ^-1ζ^-2 (1-γ)^-3), and L = K/ = Õ((ϵ^-1ζ^-2 (1-γ)^-5)) total data collection phases. Furthermore, the algorithm utilizes at most Õ(ϵ^-3ζ^-6(1-γ)^-14 d^2 ) queries in the local-access setting. § CONCLUSION We have presented a primal-dual algorithm for planning in CMDP with large state spaces, given q_π-realizable function approximation. The algorithm, with high probability, returns a policy that achieves both the relaxed and strict feasibility CMDP objectives, using no more than Õ(ϵ^-3 d^2 (ζ^-1(1-γ)^-1)) queries to the local-access simulator. Our algorithm does not query the simulator and collect data in every iteration. Instead, the algorithm queries the simulator only at fixed intervals. Between these data collection intervals, our algorithm improves the policy using off-policy optimization. This approach makes it possible to achieve the desired sample complexity in both feasibility settings. apalike § CONFIDENT-NPG IN A SINGLE REWARD SETTING The pseudo code of Confident-NPG with a single reward setting is the same as Confident-NPG-CMDP in <ref>, except that <ref> to <ref> will not appear in Confident-NPG. Additionally, the LSE subroutine returns just , and the policy update will be with respect to . For the complete pseudo code of Confident-NPG in the single reward setting, please see <ref>. In the following analysis, for convenience, we omit the superscript r. §.§ The Gather-data subroutine Given a core set , a behaviour policy μ, a starting state-action pair (s,a) ∈× along with some algorithmic parameters, the Gather-data subroutine (<ref>) will either 1) return a newly discovered state-action pair, or 2) return a set of n trajectories. Each trajectory is generated by running the behaviour policy μ with the simulator for H consecutive steps. For i = 1,...,n, let τ^i_s,a denote the ith trajectory starting from s,a to be {S_0^i = s, A_0^i = a, R_1^i, C_1^i, ⋯, S_H-1^i, A_H-1^i, R_H^i, C_H^i, S_H^i}. Then the i-th discounted cumulative rewards G(τ^i_s,a) = ∑_h=0^H-1γ^h R_h+1^i. For a target policy π, then the empirical mean of the discounted sum of rewards (s,a) = 1/n∑_i=1^n ρ(τ^i_s,a) G(τ^i_s,a), where ρ(τ^i_s,a) = Π_h=1^H-1π(A^i_h | S_h^i)/μ(A^i_h | S_h^i) is the importance sampling ratio. 
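A minimal sketch of the importance-weighted Monte Carlo return q̄(s,a) = (1/n) Σ_i ρ(τ^i_{s,a}) G(τ^i_{s,a}) just defined, on a synthetic MDP. The environment, the behaviour policy μ, and the target policy π (chosen close to μ, in the spirit of the 1+c cap discussed next) are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA, gamma, H, n = 4, 2, 0.9, 40, 200
P = rng.dirichlet(np.ones(nS), size=(nS, nA))       # toy transition kernel
r = rng.uniform(size=(nS, nA))                      # rewards in [0, 1]

mu = rng.dirichlet(np.ones(nA), size=nS)            # behaviour policy that generated the data
pi = 0.5 * mu + 0.5 / nA                            # target policy, kept close to mu

def rollout(policy, s, a):
    """One H-step trajectory from (s, a): S_0..S_{H-1}, A_0..A_{H-1} and rewards R_1..R_H."""
    S, A, R = [s], [a], []
    for _ in range(H):
        R.append(r[S[-1], A[-1]])
        s_next = rng.choice(nS, p=P[S[-1], A[-1]])
        A.append(rng.choice(nA, p=policy[s_next]))
        S.append(s_next)
    return S[:H], A[:H], np.array(R)

def q_bar(s, a):
    total = 0.0
    for _ in range(n):
        S, A, R = rollout(mu, s, a)
        rho = np.prod(pi[S[1:], A[1:]] / mu[S[1:], A[1:]])   # product over h = 1..H-1 of pi/mu
        total += rho * np.sum(gamma ** np.arange(H) * R)     # rho(tau) * G(tau)
    return total / n

print("importance-weighted return q_bar(0, 0) =", q_bar(0, 0))
```

Keeping π multiplicatively close to μ is what keeps the product of H-1 ratios, and hence the variance of this estimator, under control; this is the role of the constraint on f in the relation between target and behaviour policies introduced next.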
For some given s̅ and a̅, we establish the following relationship between the target policy π and the behavior policy μ: π(a̅|s̅) ∝μ(a̅|s̅) exp(f(s̅, a̅)) s.t. sup_s̅, a̅ |f(s̅,a̅)| ≤ln(1+c)/2H, where f(s̅,a̅) : ×→ℝ^+ be a function and c ≥ 0 is given and a constant. By establishing the relationship in <ref>, the importance sampling ratio ρ(τ^i_s,a) can be bounded by 1+c as it is proven in the following lemma: Suppose the state-action pairs { (S_h, A_h) }_h=1^H-1 extracted from a trajectory τ≐{S_0, A_0, R_1, S_1, A_1, ⋯, S_H-1, A_H-1, R_H}∼μ, have their behavior policy μ related to a target policy π by the relation in <ref>. In this case, the per-trajectory importance sampling ratio ρ(τ) = Π_h=1^H-1π(A_h|S_h)/μ(A_h|S_h)≤ 1+c. Let l ≐sup_s,a |f(s,a)|, where f is defined in <ref>. For any (s,a) ∈{(S_h, A_h)}_h=1^H-1, π(a|s) = μ(a|s)exp(f(s,a))/∑_a'μ(a'|s) exp(f(s,a'))≤μ(a|s) exp(l)/∑_a'μ(a'|s) exp(-l) ≤μ(a|s) exp(2l). By assumption, l ≤ln(1+c)/2H, then exp(2lH) ≤exp(2H ln(1+c)/2H) ≤ 1+c. Then, we can show that for all (s,a) ∈, |(s,a) - q_π(s,a)| ≤ϵ, where ϵ > 0 is a given target error. Additionally, the accuracy guarantee of |(s,a) - q_π(s,a)| ≤ϵ continues to holds for the extended set of policies defined in <ref>. Formally, we state the main result of this section. For any s,a ∈×, 𝒳⊂, the Gather-data subroutine will either return with ((s',a'), True) for some s' ∉𝒳, or it will return with (D[(s,a)], False), where D[(s,a)] is a set of n independent trajectories generated by a behavior policy μ starting from (s,a). When Gather-data returns False for (s,a), we assume 1) the behavior policy μ and target policy π for all the states and actions encountered in the trajectories stored in D[(s,a)] satisfy <ref> and 2) (s,a) is an unbiased estimate of _π', s,a [∑_h=0^H-1γ^h R_h+1] for all π' ∈Π_π, 𝒳. Then, the importance-weighted return (s,a) constructed from D[(s,a)] according to <ref> will, with probability 1-δ', |(s,a) - q_π'(s,a) | ≤ϵ for all π' ∈Π_π, 𝒳. The proof follows similar logic to Lemma 4.2 <cit.>. Recall D[(s,a)] stores n number of trajectories indexed by i, where each trajectory τ^i_s,a = (S_0^i = s, A_0^i=a, R_0^i,...,S_H-1^i) ∼μ. The per-trajectory importance sampling ratio ρ(τ^i_s,a) = Π_h=1^H-1π(A_h^i|S_h^i)/μ(A_h^i|S_h^i), and the return is ∑_h=0^H-1γ^h R_h+1^i. By the triangle inequality, |(s,a) - q_π(s,a) | = |1/n∑_i=1^nΠ_h=0^H-1π(A_h^i|S_h^i)/μ(A_h^i| S_h^i)∑_h=0^H-1γ^h R^i_h - q_π(s,a)| ≤ |1/n∑_i=1^nρ(τ^i_s,a) ∑_h=0^H-1γ^h R_h+1^i - _π, s,a∑_h=0^H-1γ^h R_h+1| + |E_π,s,a∑_h=0^H-1γ^h R_h - q_π(s,a)|. The goal is to bound each of the two terms in <ref> by ϵ/4 so that the sum of the two is ϵ/2. By assumption, π, μ for all {(S_h^i, A_h^i)}_h=1^H-1 extracted out of the i-trajectory τ_s,a^i satisfies <ref>. Second, (s,a) is assumed to be an unbiased estimate of _π, s,a[ ∑_h=0^H-1γ^h R_h+1]. Note that each ρ(τ_s,a^i) ∑_h=0^H-1γ^h R_h+1^i for all i = 1,...,n are independent random variables such that ρ(τ_s,a^i) ∑_h=0^H-1γ^h R_h+1^i ∈[0, 1+c/1-γ]. This is because 1) ∑_h=0^H-1γ^h R_h+1^i≤1/1-γ since the rewards take values in the range of [0,1], and 2) ρ(τ_s,a^i) ≤ 1 + c by <ref>. We apply Hoeffding's inequality: (|1/n∑_i=1^n ρ(τ^i_s,a) ∑_h=0^H-1γ^h R_h+1^i - _π, s, a∑_h=0^H-1γ^h R_h+1| > ϵ) ≤ 2exp(-2 n ϵ^2/( 1+c/1-γ)^2). Then, we have with probability 1-δ'/2, where δ' = 2exp(-2 n (ϵ/4)^2/( 1+c/1-γ)^2), the first term in <ref>, |1/n∑_i=1^nρ(τ^i_s,a) ∑_h=0^H-1γ^h R_h+1^i - _π, s,a∑_h=0^H-1γ^h R_h+1| ≤ϵ/4. 
For the second term in <ref>, |E_π,s,a∑_h=0^H-1γ^h R_h - q_π(s,a)| = |_π,s, a∑_h=H^∞γ^h R_h| ≤γ^H/1-γ. By the choice of H, we have γ^H/1-γ≤ϵ/4. Putting everything together, we get |(s,a) - q_π(s,a)| ≤ϵ/2. To get the final result, we need to bound |q_π(s,a) - q_π'(s,a)| ≤ϵ/2, so that |(s,a) - q_π'(s,a) | ≤ | (s,a) - q_π(s,a)| + |q_π(s,a) - q_π'(s,a) | ≤ϵ. Recall that π and π' differs in distributions over states that are not in 𝒳. For a trajectory {S_0=s, A_0 =a, S_1,...}, let T be the smallest positive integer such that state S_T ∉𝒳, then the distribution of the trajectory S_0=s, A_0=a, S_1,...,S_T are the same under P_π, s,a and P_π',s,a because π(· |s) = π'(· |s) for all s ∈𝒳. Then, |q_π(s,a) - q_π'(s,a)| = | _π,s,a[∑_t=0^T-1γ^t R_t +γ^T v_π(S_T) ] - _π', s,a[∑_t=0^T-1γ^t R_t + γ^T v_π'(S_T)]| =| _π,s,a[γ^T v_π(S_T) ] - _π', s,a[γ^T v_π'(S_T)]| = ∑_s' ∈𝒳,a' P_π,s,a(S_T-1 = s', A_T-1 = a') P(S_T | s', a') γ^T v_π(S_T) - ∑_s' ∈𝒳,a' P_π',s,a(S_T-1 = s', A_T-1 = a') P(S_T | s', a') γ^T v_π'(S_T) = ∑_s' ∈𝒳, a' P_π, s,a(S_T-1 = s', A_T-1 = a') P(S_T|s',a') γ^T (v_π(S_T) - v_π'(S_T)) ≤1/1-γ∑_s' ∈𝒳, a' P_π, s,a(S_T-1 = s', A_T-1 = a') P(S_T|s',a') γ^T = 1/1-γ∑_s' ∈𝒳, a'∑_s̅∉𝒳∑_t=1^∞ P_π, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') γ^t = 1/1-γ∑_t=1^H-1∑_s' ∈𝒳, a'∑_s̅∉𝒳 P_π, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') γ^t + 1/1-γ∑_t=H^∞∑_s' ∈𝒳, a'∑_s̅∉𝒳 P_π, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') γ^t ≤1/1-γ∑_t=1^H-1∑_s' ∈𝒳, a'∑_s̅∉𝒳 P_π, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') + γ^H/1-γ∑_s' ∈𝒳, a'∑_t=0^∞γ^t P_π, s,a(S_t+H-1 = s', A_t+H-1 = a') ≤1/1-γ∑_t=1^H-1∑_s' ∈𝒳, a'∑_s̅∉𝒳 P_π, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') + γ^H/(1-γ)^2∑_s' ∈𝒳, a' (1-γ) d_π,s,a(s',a') ≤1/1-γ∑_t=1^H-1∑_s' ∈𝒳, a'∑_s̅∉𝒳 P_π, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') + γ^H/(1-γ)^2. By the law of total probability, P_π,s,a(S_t = s', A_t = a') = ∑_s_1,...,s_t, a_1,...,a_tΠ_i=0^t P(S_i+1 | S_i = s_i, A_i = a_i) Π_i=1^t π(A_i = a_i | S_i = s_i) = ∑_s_1,...,s_t, a_1,...,a_tΠ_i=0^t P(S_i+1 | S_i = s_i, A_i = a_i) Π_i=1^t π(A_i = a_i | S_i = s_i)/μ(A_i = a_i | S_i = s_i)μ(A_i = a_i | S_i = s_i) ≤∑_s_1,...,s_t, a_1,...,a_tΠ_i=0^t P(S_i+1 | S_i = s_i, A_i = a_i) Π_i=1^t exp(2l) μ(A_i = a_i | S_i = s_i) ≤exp(2tl) ∑_s_1,...,s_t, a_1,...,a_tΠ_i=0^t P(S_i | S_i = s_i, A_i = a_i) Π_i=1^t μ(A_i = a_i | S_i = s_i) ≤ (1+c) ∑_s_1,...,s_t, a_1,...,a_tΠ_i=1^t P(S_i | S_i-1 = s_i, A_i-1 = a_i) Π_i=1^t μ(A_i = a_i | S_i = s_i) = (1+c) P_μ,s,a(S_t = s', A_t = a'). To get from <ref> to <ref>, recall from the proof of <ref>, we have π(a|s)/μ(a|s)≤exp(2l), for l ≐sup_s,a f(s,a). To go from <ref> to <ref>, we note that exp(2tl) ≤exp(2Hl) since t ∈ [1, H), and exp(2Hl) ≤ 1+c for any l that satisfies the constraint of <ref>. In summary, we have |q_π(s,a) - q_π'(s,a)| ≤1/1-γ∑_t=1^H-1∑_s' ∈𝒳, a'∑_s̅∉𝒳 (1+c) P_μ, s,a(S_t-1 = s', A_t-1 = a') P(S_t = s̅ | s',a') + γ^H/(1-γ)^2 ≤1+c/1-γ P_μ,s,a(1 ≤ T < H) + ϵ/4. To bound P_μ,s,a(1 ≤ T < H), let us recall that during the execution of Gather-data subroutine (<ref>), everytime the simulator returns a state-action pair that is not in the action-cover (i.e., dooes not pass the certainty check), the subroutine stops. For each s,a ∈𝒳, during any of the n rollouts, let I_i(s,a) denote the indicator event that during 1 ≤ T < H, S_T ∉𝒳 in rollout i. Then _μ,s,a[I_i(s,a)] = P_μ, s,a(1 ≤ T < H). 
We know that I_i(s,a) = {0, 1} and then for any ϵ > 0, by another Hoeffdings inequality, with probability 1-δ'/2, |_μ,s,a[I_i(s,a)] - 1/n∑_i=1^n I_i(s,a) | ≤ϵ(1-γ)/4(1+c), When gather-data subroutine returns, all indicators I_i(s,a) = 0 for all (s,a) ∈𝒳 and i ∈ [n], then we have P_μ,s,a(1 ≤ T < H) ≤ϵ(1-γ)/4(1+c). Putting everything together, we have the result. §.§ The LSE subroutine Given a core set , a set of trajectories, a behaviour policy μ, a target policy π, the LSE subroutine (<ref>) returns a least-square estimate Q of q_π. If the core set is empty, we define Q(·, ·) to be zero. Then, for a target accuracy ϵ > 0 and a uniform misspecification error ω defined in <ref>, we have a bound on the accuracy of with respect to q_π as given by the next lemma. [Lemma 4.3 of <cit.>] Let π be a randomized policy. Let = {(s_i, a_i)}_i ∈ [N] be a set of state-action pairs of set size N ∈ℕ. Assume for all i ∈ [N], |(s_i,a_i) - q_π(s_i, a_i) | ≤ϵ. Then, for all s, a ∈×, |Q(s,a) - q_π(s,a) | ≤ω + ϕ(s,a)_V(, α)^-1( √(α)B + (ω + ϵ)√(N)). Let w^* be the parameter that satisfies inf_w ∈^d, w_2 ≤ Bsup_s,a ∈× |ϕ(s,a)^⊤ w - q_π(s,a)| ≤ϵ, for all s,a ∈× and an ϵ > 0. Let w̅^* = V(C)^-1∑_s̅, a̅∈ϕ(s̅, a̅) ϕ(s̅, a̅)^⊤ w^*. Note that w̅^* is obtained w.r.t to state-action pairs in . For any s,a ∈×, |Q(s,a) - q_π(s,a)| ≤ |ϕ(s,a)^⊤ (w - w̅^*)| + |ϕ(s,a)^⊤ (w̅^* - w^*)| + |ϕ(s,a)^⊤ w^* - q_π(s,a)|. By applying q_π-realizability assumption (<ref>), we have |ϕ(s,a)^⊤ w^* - q_π(s,a)| ≤ϵ. To bound the second term in <ref>, we have |ϕ(s,a)^⊤(w̅^* - w^*)| ≤ϕ(s,a)_V(C)^-1w̅^* - w^*_V(C) ≤ϕ(s,a)_V(C)^-1V(C)^-1∑_s̅,a̅∈ C((ϕ(s̅,a̅) ϕ(s̅,a̅)^⊤ + α I)w^* - α I w^*) - w^*_V(C) =ϕ(s,a)_V(C)^-1-α V(C)^-1 w^*_V(C) =αϕ(s,a)_V(C)^-1w^*_V(C)^-1 ≤αϕ(s,a)_V(C)^-1w^*_1/α I ≤αϕ(s,a)_V(C)^-1√(1/α) B = √(α) B ϕ(s,a)_V(C)^-1. Let α be the smallest eigenvalue of V(C), then by eigendecomposition, V(C) = Q Λ Q^-1≥ Q(α I) Q = α QQ^⊤≥α I since QQ^⊤ is orthonormal. This implies that V(C)^-1≤1/α I, which leads to <ref>. Finally, we bound the first term in <ref>. By definition of least-square, = Φ w^* + ξ, where ξ is the error. Recall that w = V(C)^-1∑_s̅, a̅∈ϕ(s̅, a̅) (s̅, a̅), and by assumption that for any s̅, a̅∈, |(s̅, a̅) - q_π(s̅, a̅)| ≤ϵ, then |ξ(s̅, a̅)| = |(s̅, a̅) - ϕ(s̅, a̅)^⊤ w^*| ≤ |(s̅, a̅) - q_π(s̅, a̅)| + |q_π(s̅, a̅) - ϕ(s̅, a̅)^⊤ w^*| ≤ϵ + ω. It follows that for all s, a ∈×, | ϕ(s,a)^⊤(w - w̅^*)| = |⟨ V()^-1∑_s̅, a̅∈ϕ(s̅, a̅) ((s̅, a̅) - ϕ(s̅, a̅)^⊤ w^*), ϕ(s,a) ⟩| = |⟨ V()^-1∑_s̅, a̅∈ϕ(s̅, a̅) ξ(s̅, a̅), ϕ(s,a) ⟩| ≤∑_s̅, a̅∈ |⟨ V()^-1ϕ(s̅, a̅) ξ(s̅, a̅), ϕ(s,a) ⟩| ≤ (ω + ϵ) ∑_s̅, a̅∈ |⟨ V()^-1ϕ(s̅, a̅), ϕ(s,a) ⟩| ≤ (ω + ϵ) √(||)√(∑_s̅, a̅∈⟨ V()^-1ϕ(s̅, a̅), ϕ(s,a) ⟩^2) by Holder's inequality ≤ (ω + ϵ) √(||)√(ϕ(s,a)^⊤ V()^-1(∑_s̅, a̅∈ϕ(s̅, a̅) ϕ(s̅, a̅)^⊤) V()^-1)ϕ(s,a) ≤ (ω + ϵ) √(||)√(ϕ(s,a)^⊤(I - α V()^-1) V()^-1ϕ(s,a)) ≤ (ω + ϵ) √(||)√(ϕ(s,a)^⊤ V()^-1ϕ(s,a)) because V()^-1≤ (1/α)I = (ω + ϵ) √(||)ϕ(s,a)_V()^-1. Altogether, for any s,a ∈×, |Q(s,a) - q_π(s,a)| ≤ω + ϕ(s,a)_V()^-1( √(α) B + (ω+ϵ) √(||)). §.§ The accuracy of least-square estimates Given a core set and a target policy π, <ref> ensures that |Q(s,a) - q_π(s,a)| = O(ω + ϵ) for any s within Cov(). This accuracy comes from the fact that for all s ∈ Cov(), the feature vector ϕ(s,a) satisfies ϕ(s,a)_V(, α)^-1≤ 1 for all a ∈. In this section, we verify whether this accuracy is maintained under the framework of our algorithm, which includes constructing the core set, updating action-values, and improving policy updates. 
All of which plays a role in maintaining the accuracy of the Q estimates. We note that policy improvements can only occur during a running phase ℓ. When all (s,a) pairs in _ℓ have their placeholder value ⊥ replaced by trajectories, <ref> executes <ref> to <ref>. During each iteration from k_ℓ to k_ℓ+1-1, the LSE subroutine is executed. The accuracy of k is used to bound the estimation error in <ref>. Therefore, we will first verify that the accuracy guarantee of k(s,a), used in <ref>, is indeed satisfied by the main algorithm and maintained throughout its execution. Once a state-action pair is added to a core set, it remains in that core set for the duration of the algorithm. This means that any _l for l ∈ [L+1] can grow in size. When a core set _l is extended during a running phase when ℓ = l, the least-square estimates will need be updated based on the newly extended _l containing newly discovered informative features. However, the improved estimates will only be used to update the policy of states that are newly covered, which are states that are in Cov(_ℓ) ∖ Cov(_ℓ+1). We break down k to reflect the value based on which π_k+1 is updated, k(s,a) ←k(s,a) if s ∈ Cov(_ℓ+1) Q_k(s,a) if s ∈ Cov(_ℓ) ∖ Cov(_ℓ+1) initial value 0 if s ∉Cov(_ℓ), where Q_k(s,a) = _[0, 1/1-γ]k(s,a). Respective to k, we have the corresponding policy update as follows, π_k+1(a|s) ←π_k+1(a|s) if s ∈ Cov(_ℓ+1) ∝π_k(· | s) exp(η_1 _[0, 1/1-γ]Q_k(s,a) ) if s ∈ Cov(_ℓ) ∖ Cov(_ℓ+1) π_k(a|s) if s ∉Cov(_ℓ). For all s ∈ Cov(_ℓ), the k(s, ·) will be the value of the least-square estimate at the time the policy makes an NPG update of the form: π_k+1(· | s) ∝π_k(· | s) exp(η_1 _[0, 1/1-γ]Q_k(s,a) ). This is because at the end of the loop after <ref> is run, the next phase core set _ℓ+1 = _ℓ, which make any states that are covered by _ℓ also be covered by _ℓ+1. A state that was once newly covered by _ℓ will no longer be newly covered again. If the algorithm was to make an NPG update for states that are newly covered in some loop, in any subsequent loop with the same value ℓ, the policy will remain unchanged. By updating policies according to <ref> of <ref> with resepect to k, we have: [Lemma 4.5 of <cit.>] For any l ∈ [L], let _l^past be any past version of _l and π_k^past for k ∈ [k_l, ⋯, k_l+1-1] be the corresponding policies associated with _l^past, then at any later point during the execution of the algorithm, π_k ∈Π_π_k, Cov(_l)⊆Π_π_k^past, Cov(_l^past) for all k ∈ [k_l, ⋯, k_l+1-1]. For any l ∈ [L], for any states to have been covered by _l, they will continue to be covered by _l throughout the execution of the algorithm. Let us consider a past version of _l and denote it as _l^past. For any states to have been covered by _l^past, it will continue to be covered by any future extensions of _l. This is because V(_l^past, α) ≽ V(_l, α) and therefore Cov(_l^past) ⊆ Cov(_l). Whenever LSE-subroutine of Confident-NPG is executed, for all iterations k ∈ [k_ℓ, …, k_ℓ+1-1], for any (s,a) ∈_ℓ, the importance weighted return k(s,a) is an unbiased estimate of _π_k', s,a[∑_h=0^H-1 R_h+1] for all π_k' ∈Π_π_k, Cov(_ℓ). When LSE-subroutine is executed with ℓ = l, we consider two scenarios. Case 1: The trajectories for a (s,a) ∈_ℓ are generated and stored in D_l[(s,a)] for the first time. Case 2: The trajectories for a (s,a) ∈_ℓ were already generated and stored in D_l[(s,a)] during a previous iteration when ℓ = l. 
Case 1: We consider the case when LSE-subroutine is executed with ℓ = l, where the trajectories of a (s,a) ∈_l is generated and saved to D_l[(s,a)] for the first time. For any trajectories to have been saved to D_l would mean that the encountered states within the trajectories are in Cov(_l). Otherwise, the Gather-data would have returned `discovered is True', and a newly discovered state-action pair would be added to _0, interrupting the roll-out procedure. Consequently, no trajectories would have been saved to D_l[(s,a)]. We let τ_s,a^i denote the i-th trajectory {S_0^i = s, A_0^i = a, R_1^i, S_1^i,...,S_H-1^i, A_H-1^i, R_H^i} generated by π_k_l interacting with the simulator, and there are n such trajectories stored in D_l[(s,a)]. Then, for all k ∈ [k_l, ⋯, k_l+1-1], the return k(s,a) = 1/n∑_i=1^n Π_h=1^H-1π_k(A_h^i|S_h^i)/π_k_l(A_h^i|S_h^i) G(τ_s,a^i). The behavior policy π_k_l is updated in a previous loop through the algorithm when ℓ = l-1. By the time LSE-subroutine is executed, π_k_l will have been the most recent π_k_l to generate the data. For subsequent iterations, starting with k = k_l + 1 up to k = k_l+1-1, the policy π_k is updated based in iteration k-1. Thus, by the time LSE-subroutine is executed for any k within [k_l + 1, …, k_l+1-1], both the most recent policy π_k and the behaviour policy π_k_l are available for the computation of the importance sampling ratio: ρ_k(τ_s,a^i) = Π_h=1^H-1π_k(A_h^i|S_h^i)/π_k_l(A_h^i|S_h^i). For k = k_l, the importance sampling ratio ρ_k(τ_s,a^i) = 1. The importance weighted return ρ_k(τ_s,a^i)G(τ_s,a^i) is an unbiased estimate of _π_k, s,a[G(τ_s,a^i)]: _π_k_l, s, a[ ρ_k(τ_s,a^i) ∑_h=0^H-1γ^h R_h+1^i ] =_π_k_l, s, a[ δ(s,a) P(S_1| S_0 = s,A_0 = a) π_k(A_1|S_1).... π_k(A_H-1|S_H-1)/δ(s,a) P(S_1 | S_0=s,A_0=a) π_k_l(A_1|S_1)...π_k_l(A_H-1|S_H-1)∑_h=0^H-1γ^h R_h+1^i ] =_π_k, s, a[∑_h=0^H-1γ^h R_h+1^i ], where δ(s,a) is the dirac-delta function. Because all of the states encountered in the trajectories are in Cov(_l), the importance sampling ratio ρ_k(τ_s,a^i) would have been produced by any policy π_k_l' ∈Π_π_k_l, Cov(_l) and π_k' ∈Π_π_k, Cov(_l) by <ref>. Thus, the return ρ_k(τ_s,a) G(τ_s,a^i) is an unbiased estimate of E_π_k', s,a[G(τ_s,a^i] for all π_k' ∈Π_π_k, Cov(_l), and this is true for all i = 1,...,n. Consequently, k(s,a) is an unbiased estimate of _π_k', s,a[∑_h=0^H-1 R_h+1] for all for all π_k' ∈Π_π_k, Cov(_l). Case 2: We consider the case when LSE-subroutine is executed with ℓ =l, where the trajectories of a (s,a) ∈_ℓ is generated in a previous loop through the algorithm with ℓ = l. Let us denote _l at the time of this data acquisition as _l^past, and the stored data as D_l^past[(s,a)]. The trajectories stored in D_l^past[(s,a)] were generated from the past behaviour policy π_k_l^past interacting with the simulator. Likewise, we let π_k^past to denote the past target policies at the time after 26 was executed for each of the iterations in the range of [k_l + 1, ⋯, k_l+1-1]. Finally, we also have the corresponding τ_s,a^i,past to denote the i-th trajectory that is stored in D_l^past[(s,a)]. By the time LSE-subroutine is run with ℓ =l again, by the same arguments made for Case 1, for any k ∈ [k_l, ⋯, k_l+1-1] and any i =1,...,n, the importance weighted return ρ_k(τ_s,a^i,past)G(τ_s,a^i,past) is an unbiased estimate of _π̃_k, s,a[G(τ_s,a^i,past)] for all π̃_k^past∈Π_π_k^past, Cov(_l^past). By <ref>, the most recent π_k ∈Π_π_k^past, Cov(_l^past). 
Then, ρ_k(τ_s,a^i,past) G(τ_s,a^i, past) is an unbiased estimate of _π_k, s,a[G(τ_s,a^i, past)]. By <ref>, any π_k' ∈Π_π_k, Cov(_l)⊆Π_π_k^past, Cov(_l^past) and π_k_l' ∈Π_π_k_l, Cov(_l)⊆Π_π_k^past, Cov(_l^past) would have produced the same importance sampling ratio, this implies that ρ_k(τ_s,a^i,past) G(τ_s,a^i, past) is an unbiased estimate of _π_k' , s,a[G(τ_s,a^i, past)] for all π_k' ∈Π_π_k, Cov(_l). Once D_l^past[(s,a)] is populated with trajectories, D_l^past[(s,a)] remain unchanged throughout the execution of the algorithm. Therefore, G(τ_s,a^i, past) will never change again. Thus, G(τ_s,a^i) = G(τ_s,a^i,past), and we have ρ_k(τ_s,a^i)G(τ_s,a^i) is an unbiased estimate of _π_k', s,a[G(τ_s,a^i)] for all π_k' ∈Π_π_k, Cov(_l), and this is true for all i = 1,...,n. Consequently, k(s,a) is an unbiased estimate of _π_k', s,a[∑_h=0^H-1 R_h+1] for all for all π_k' ∈Π_π_k, Cov(_l). Whenever LSE-subroutine of Confident-NPG is executed, for any s ∈ Cov(_ℓ), the behaviour policy π_k_ℓ(· |s) and target policy π_k(· |s) for k ∈ [k_ℓ+1, ⋯, k_ℓ+1-1] satisfy <ref>. Recall that the behavior policy π_k_ℓ is updated in a previous loop through the algorithm when ℓ = l-1. By the time LSE-subroutine is executed, π_k_ℓ will have been the most recent π_k_ℓ to generate the data. For the on-policy iteration where k = k_ℓ, the policy π_k_ℓ serves as both the target and behavior policy, making <ref> trivially satisfied. For subsequent iterations, starting with k = k_ℓ + 1 up to k = k_ℓ+1-1, in iteration k-1, the policy π_k would have either undergone an NPG update in the form of <ref> for the first time or remain unchanged in the form of <ref> with respect to some past π_k and π_k_ℓ. Either way, for any s ∈ Cov(_ℓ), the target policy π_k and behaviour policy π_k_ℓ would be related to each other in form of <ref>. Since t(s,a) ≤1/1-γ for any t ∈ [k_ℓ, k-1], then it follows that η_1 ∑_t=k_ℓ^k-1t(s,a) ≤η_1 (k-k_ℓ) 1/1-γ≤η_1 ( -1)/1-γ. By choosing η_1 = (1-γ)√(2ln(||)/K), m = ln(1+c)/2(1-γ)ϵln(4/ϵ(1-γ)^2), and K = 2 ln(A)/(1-γ)^4 ϵ^2, we have η_1 ( -1) /1-γ≤η_1 m/1-γ = ln(1+c)/2H. By the time LSE-subroutine is executed for any k within [k_ℓ + 1, …, k_ℓ+1-1], the most recent target policy π_k and behaviour policy π_k_ℓ will satisfy <ref> for all states in the cover of _ℓ. [q-bar unchanged] For any l ∈ [L], once an entry (s,a) in _l is populated with trajectories and stored in D_l[(s,a)] and a k(s,a) for a k ∈ [k_l, ⋯, k_l+1-1] is computed in the LSE subroutine in LSE-subroutine for the first time, the k will maintain unchanged as that value for remainder of the algorithm's execution. Recall the i-th trajectory τ_s,a^i stored in D_l[(s,a)] forms the return G(τ_s,a^i) = ∑_h=0^H-1γ^h R_h+1^i. By extracting out all of the state-action pairs {(S_h^i, A_h^i)}_h=1^H-1 from τ_s,a^i, we have the importance sampling ratio for trajectory i: ρ(τ_s,a^i) = Π_h=1^H-1π_k(A_h^i|S_h^i)/π_k_l(A_h^i|S_h^i). Then, k(s,a) = 1/n∑_i=1^n ρ(τ_s,a^i) G(τ_s,a^i). We see that the value of k(s,a) is influenced by the importance sampling ratio and the returns. The returns G(τ_s,a^i) for all i = 1,...,n are based on data D_l[(s,a)]. Once D_l[(s,a)] is populated with trajectories, D_l[(s,a)] remain unchanged for the rest of the algorithm's execution. The only source that can change the value of k throughout the execution of the algorithm is the importance sampling ratio. Thus, let us consider the running phase with ℓ = l where k(s,a) for any k ∈ [k_l, ⋯, k_l+1-1] is computed the first time in the LSE subroutine in LSE-subroutine. 
First of all, the behavior policy π_k_l would already have been updated in a previous loop through the algorithm with ℓ = l-1. For subsequent iterations, starting with k = k_l + 1 up to k = k_l+1-1, the policy π_k is updated in iteration k-1. Thus, by the time LSE-subroutine is executed for any k within [k_l + 1, …, k_l+1-1], both the most recent policy π_k and the behaviour policy π_k_l would have been updated by line 26. In line 26, for any s that are newly covered by _l (i.e., s ∈ Cov(_l) ∖ Cov(_l+1)), the policy π_k(·|s) makes an update, which will remain unchanged in any subsequent loops through the algorithm when ℓ = l again. This is because of line 30, the state s that is considered newly covered in one loop through the algorithm will be added to _l+1 by the end of the loop, making the state no longer as a newly covered state. In summary, π_k(·|s) would have been updated once and remain unchanged as that update for the remainder of the execution of the algorithm. By the time LSE-subroutine is executed for any k within [k_l + 1, …, k_l+1-1], the most recent policy π_k and the behaviour policy π_k_l for all states in the cover of _l would have already been updated and remain unchanged throughout the execution of the algorithm. Since all of the states in the trajectories are in the cover of _l, the importance sampling ratio would also have been set once and remain as that value for rest of the algorithm's run. When LSE-subroutine is run, k(s,a) for a k ∈ [k_l, ⋯, k_l+1-1] will also remain unchanged throughout the execution of the algorithm. [q-bar accuracy] Whenever LSE-subroutine of Confident-NPG is executed, for a δ' ∈ (0,1], for all k ∈ [k_ℓ, ⋯, k_ℓ+1-1], the corresponding k(s,a) for all (s,a) ∈_ℓ has with probability 1-δ', |k(s,a) - q_π_k'(s,a) | ≤ϵ for all π_k' ∈Π_π_k, Cov(_ℓ). Since ℓ takes on a specific value l ∈ [L] in each loop through the algorithm, if <ref> holds for all k ∈ [k_l, ⋯, k_l+1-1], then the condition continues to hold for the reminder of the algorithm's execution. We aim to apply <ref> to each individual (s,a) ∈_ℓ. To do this, we must confirm the two necessary conditions of the lemma for each (s,a) ∈_ℓ, demonstrating that |k(s,a) - q_π_k(s,a)| for all k ∈ [k_ℓ, ⋯, k_ℓ+1-1] is indeed bounded by ϵ for all (s,a) ∈_ℓ. We note that by the time LSE subroutine is run in <ref>, all of the (s,a) ∈_ℓ would have trajectories generated successfully by Gather-data and be stored in D_ℓ[(s,a)]. For any trajectory to be stored in D_ℓ, it would mean that all of the states in the trajectories are in the cover of _ℓ, and by <ref>, these states will continue to be covered by _ℓ throughout the execution of the algorithm. By picking any (s,a) ∈_ℓ, we apply <ref> to all of the states s in the trajectories stored in D_ℓ[(s,a)], then we have the behaviour policy π_k_ℓ(· |s) and target policy π_k(· |s) for k ∈ [k_ℓ+1, ⋯, k_ℓ+1-1] satisfy <ref>. Second, by <ref>, the importance weighted return k(s,a) is unbiased estimate of any _π_k',s,a[∑_h=0^H-1 R_h+1] for all π_k' ∈Π_π_k, Cov(_ℓ). Altogether, by <ref>, we can ensure <ref> holds. Since ℓ takes on a specific value l ∈ [L] in each loop through the algorithm, if <ref> holds for all k ∈ [k_l, ⋯, k_l+1-1], then the condition continues to hold for the reminder of the algorithm's execution. For any (s,a) ∈_l, by the time q_k(s,a) is calculated for the first time and used in LSE subroutine, by <ref>, the q_k(s,a) will remain as that same value throughout the execution of the algorithm. 
So, if <ref> holds for k(s,a), then the condition will continue to hold for the remainder of the algorithm's execution. At any time during the execution of the main algorithm, for all l ∈ [0, L], the size of each _l is bounded: |_l| ≤ 4 d ln(1 + 4/α) ≐d̃ = Õ(d), where the α is the smallest eigenvalue of V(, α) and N is the radius of the Euclidean ball containing all the feature vectors. Whenever LSE-subroutine of Confident-NPG is executed, for all k ∈ [k_ℓ, ⋯, k_ℓ+1-1], for all s ∈ Cov(C_ℓ) and a ∈, the least-square estimate k(s,a) satisfies the following condition | k(s,a) - q_π_k'(s,a) | ≤ϵ' for all π_k' ∈Π_π_k, Cov(_ℓ), where ϵ' = ω + √(α) B + (ω + ϵ) √(d̃). We prove the result by induction similar to Lemma F.1 of <cit.>. We let _ℓ^-, π_k^-, k^- to denote the value of variable _ℓ, π_k, k at the time when lines 16 to 31 were most recently executed with ℓ =l in a previous loop through the algorithm. If such time does not exist, we let their values be the initialization values. Only after the execution of line 30 will _ℓ^- change and as well as _ℓ+1, and this is the only time that _ℓ+1 can be changed. Therefore, at the start of a new loop, we see that _ℓ+1 = _ℓ^-. This also holds at the initialization of the algorithm, we conclude that at the start of each loop, Cov(_ℓ+1) = Cov(_ℓ^-). At initialization, k = 0 for any k ∈ [0,K] and C_l=() for all l∈ [L]. By applying <ref> (Lemma 4.3 of <cit.>), for any s,a ∈×, |k(s,a) - q_π'(s,a)| ≤ω+√(α) B ≤ϵ', which satisfies <ref> for all k. Next, let us consider the start of a loop after ℓ is set and assume that the inductive hypothesis holds for the previous time <ref> to <ref> were executed with the same value of ℓ. For any s ∈ Cov(_ℓ-1^-), policy π_k_ℓ(· |s) would have already been set in a previous loop with value ℓ-1 and remains unchanged in the current loop. By <ref> and <ref>, we have for any s ∈ Cov(_ℓ-1^-), |k_ℓ^-(s,·) - q_π_k_ℓ'(s,·)| ≤ω + √(α) B + (ω + ϵ) √(d̃) for π_k_ℓ'∈Π_π_k_ℓ^-, Cov(_ℓ-1^-), where ϕ(s,·)_V(_ℓ-1^-, α)^-1≤ 1 because s ∈ Cov(_ℓ-1^-) and |C_ℓ-1^-| ≤d̃ by <ref>. Recall by definition, k_ℓ = k_ℓ^-, π_k_ℓ = π_k_ℓ^-, C_ℓ = C_ℓ-1^-, and Cov(_ℓ) = Cov(_ℓ-1^-). It follows that for any s ∈ Cov(_ℓ), |k_ℓ(s,·) - q_π_k_ℓ'(s,·)| ≤ϵ' for π_k_ℓ'∈Π_π_k_ℓ, Cov(_ℓ). For any s that is already covered by _ℓ (i.e., s ∈ Cov(_ℓ^-)), and for any off-policy iteration k ∈ [k_ℓ+1, ⋯, k_ℓ+1-1], k(s, ·) = k^-(s, ·). Additionally, the policy π_k(· |s) would already have been set in a previous loop with the same value of ℓ and remains unchanged in the current loop. For s ∈ Cov(_ℓ^-), by <ref> and <ref>, |k^-(s,·) - q_π_k'(s,·)| ≤ω + √(α) B + (ω + ϵ) √(d̃) for π_k'∈Π_π_k^-, Cov(_ℓ^-), where ϕ(s, ·)_V(_ℓ^-, α)^-1≤ 1 because s ∈ cov(_ℓ^-) and |_ℓ^-| ≤√(d̃) by <ref>. By <ref>, Π_π_k, Cov(_ℓ)⊆Π_π_k^-, Cov(_ℓ^-). By definition, k(s, ·) = k^-(s, ·) for s ∈ Cov(_ℓ+1) = Cov(_ℓ^-), |k(s,·) - q_π_k'(s,·)| ≤ϵ' for any π_k' ∈Π_π_k, Cov(C_ℓ+1). Finally, for any s that is newly covered by _ℓ (i.e., s ∉Cov(_ℓ+1)), and for all k ∈ [k_ℓ, ⋯, k_ℓ+1-1], k(s, ·) = Q_k(s, ·). By <ref> and <ref>, we have |Q_k(s,·) - q_π_k'(s,·)| ≤ω + √(α) B + (ω + ϵ) √(d̃) for π_k'∈Π_π_k, Cov(_ℓ), where ϕ(s, ·)_V(_ℓ, α)^-1≤ 1 and |_ℓ| ≤d̃ by <ref>. 
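The core-set size bound above is what keeps the query complexity polynomial in d. A quick empirical sketch of the mechanism: a feature is added to the core set only when it fails the certainty check ‖ϕ(s,a)‖_{V(𝒞,α)^{-1}} ≤ 1, and each addition inflates the covariance matrix, so the set stops growing after roughly d̃ = Õ(d) additions. The stream of random unit-norm features is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
d, alpha = 16, 0.01
V = alpha * np.eye(d)                      # V(C, alpha) with an initially empty core set
core_size = 0

for _ in range(20_000):                    # stream of candidate phi(s, a) met during roll-outs
    f = rng.normal(size=d)
    f /= np.linalg.norm(f)                 # bounded features, ||phi(s, a)|| <= 1
    if f @ np.linalg.solve(V, f) > 1.0:    # fails the certainty check ||phi||_{V^-1} <= 1
        core_size += 1                     # (s, a) would be added to the core set
        V += np.outer(f, f)                # covariance grows, so future weighted norms shrink

print(f"core-set size: {core_size}, bound 4 d ln(1 + 4/alpha) = {4 * d * np.log(1 + 4 / alpha):.0f}")
```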
For any δ' ∈ (0, 1], a target accuracy ϵ > 0, misspecification error ϵ, and initial state s_0 ∈, with probability at least 1-δ', the value difference between any π∈Π_rand and the mixture policy K returned by Confident-NPG has the following value-difference error: v_π(s_0) - v_K (s_0) ≤4 ϵ'/1-γ + 1/K (1-γ)∑_k=0^K-1_s' ∼ d_π(s_0), s' ∈ Cov(_0)[ k(s', ·), π(· | s') - π_k(· | s') ]. By <ref> of <ref>, one can use induction to show that by the time Confident-NPG terminates, all the _l for l ∈ [L+1] will be equal. Therefore, the cover of _l for all l ∈ [L+1] are also equal. Thus, it is sufficient to only consider _0 at the end of the algorithm. By <ref> of <ref>, s_0 ∈ Cov(_0). For any l ∈ [L], define policy for k ∈ [k_l, ⋯, k_l+1-1] as follows, π^+_k(· | s) = π_k(· | s) if s ∈ Cov(_l) π(· | s) otherwise. For any l ∈ [L], and for any s ∈ Cov(_l), k ∈ [k_l, ⋯, k_l +1-1], v_π(s) - v_π_k(s) = v_π(s) - v_π_k^+(s) + v_π_k^+(s) - v_π_k(s) = 1/1-γ_s' ∼ d_π(s)[q_π_k^+(s', ·), π(· | s') - π_k^+(· | s') ]_I by performance difference lemma + q_π_k^+(s, ·), π_k^+(· | s) - q_π_k(s, ·), π_k(· | s)_II, where d_π(s) is the discounted state occupancy measure induced by following π from s. To bound term II, we note that for any s ∈ Cov(_l), we have π_k^+(· | s) = π_k(· | s) and both π_k, π_k^+(· | s) ∈Π_π_k, Cov(_l). By <ref>, we have for any s ∈ Cov(_l), a ∈, |k(s,a) - q_π_k'(s, a)| ≤ϵ' for any π_k' ∈Π_π_k, Cov(_l). Then, for any s ∈ Cov(_l), a ∈, |q_π_k^+(s, a) - q_π_k(s, a)| ≤ | q_π_k^+(s, a) - k(s,a)| + |k(s,a) - q_π_k(s,a)| ≤ 2 ϵ'. It follows that for any s ∈ Cov(_l), q_π_k^+(s, ·), π_k^+(· | s) - q_π_k(s, ·), π_k(· | s) = π_k(· | s), q_π_k^+(s, ·) - q_π_k(s, ·) ≤π_k(· | s), |q_π_k^+(s, ·) - q_π_k(s, ·)| ≤q_π_k^+(s, ·) - q_π_k(s, ·)_∞π_k(· |s)_1 ≤ 2 ϵ'. To bound term I, we note that for any s ∉Cov(_l), π^+_k(· | s) = π(· | s) and π_k^+ ∈Π_π_k, Cov(_l), then 1/1-γ_s' ∼ d_π(s)[q_π_k^+(s', ·), π(· | s') - π_k^+(· | s') ] = 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[q_π_k^+(s', ·), π(· | s') - π_k^+(· | s') ] + 1/1-γ_s' ∼ d_π(s), s' ∉Cov(_l)[q_π_k^+(s', ·), π(· | s') - π_k^+(· | s') ] = 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[q_π_k^+(s', ·), π(· | s') - π_k^+(· | s') ] = 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[q_π_k^+(s', ·) - k(s', ·), π(· | s') - π_k^+(· | s') ] + 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[k(s', ·), π(· | s') - π_k^+(· | s') ] ≤1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[q_π_k^+(s', ·) - k(s', ·)_∞π(· | s') - π_k^+(· | s') _1 ] by Holder's inequality + 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[k(s', ·), π(· | s') - π_k^+(· | s') ] ≤2 ϵ'/1-γ by <ref> and π^*(·|s') - π_k^+(· | s')_1≤ 2 + 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[k(s', ·), π(· | s') - π_k(· | s') ] = 2 ϵ'/1-γ + 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[k(s', ·), π(· | s') - π_k(· | s') ] In summary, for any l, k ∈ [k_l, k_l+1-1], v_π(s) - v_π_k(s) ≤4 ϵ'/1-γ + 1/1-γ_s' ∼ d_π(s), s' ∈ Cov(_l)[k(s',·), π^*(· | s') - π_k(· | s')]. Putting everything together, the value difference can be bounded as follows, 1/K∑_k=0^K-1(v_π(s_0) - v_π_k(s_0)) = 1/K∑_l=0^L∑_k= k_l^k_l+1-1(v_π(s_0) - v_π_k(s_0)) ≤1/K∑_l=0^L∑_k=k_l^k_l+1-14 ϵ'/1-γ + 1/K(1-γ)∑_l=0^L∑_k=k_l^k_l+1-1_s' ∼ d_π(s_0), s' ∈ Cov(_l)[ k(s', ·), π(· | s') - π_k(· | s') ] ≤4 ϵ'/1-γ + 1/K (1-γ)∑_k=0^K-1_s' ∼ d_π(s_0), s' ∈ Cov(_0)[ k(s', ·), π(· | s') - π_k(· | s') ]. Going from <ref> to <ref> is because when the algorithm terminates, all the _l are the same, and hence the Cov(C_0) = Cov(C_1) = ... = Cov(C_L). 
§ CONFIDENT-NPG-CMDP We include the proofs of lemmas that appear in prior works and supporting lemmas that are helpful proving the lemmas in the main sections. The lemmas that appear in the main section will have the same numbering here. §.§ The accuracy of least-square estimates Once a state-action pair is added to a core set, it remains in that core set for the duration of the algorithm. This means that any _l for l ∈ [L+1] can grow in size. When a core set _l is extended during a running phase when ℓ = l, the least-square estimates will need be updated based on the newly extended _l containing newly discovered informative features. However, the improved estimates will only be used to update the policy of states that are newly covered, which are states that are in Cov(_ℓ) ∖ Cov(_ℓ+1). Therefore, we break down k to reflect the value based on which π_k+1 is updated, that is for s, a ∈×, k(s,a) ←k(s,a) if s ∈ Cov(_ℓ+1) k(s,a) if s ∈ Cov(_ℓ) ∖ Cov(_ℓ+1) initial value 0 if s ∉Cov(_ℓ), where k(s,a) = _[0, 1/1-γ]k(s,a) + k_[0, 1/1-γ]k(s,a). Respective to k, we have the corresponding policy update as follows, π_k+1(a|s) ←π_k+1(a|s) if s ∈ Cov(_ℓ+1) ∝π_k(· | s) exp(η_1 _[0, 1/1-γ]k(s,a) ) if s ∈ Cov(_ℓ) ∖ Cov(_ℓ+1) π_k(a|s) if s ∉Cov(_ℓ), For all s ∈ Cov(_ℓ), the k(s, ·) will be the value of the least-square estimate at the time the policy makes an NPG update of the form: π_k+1(· | s) ∝π_k(· | s) exp(η_1 _[0, 1/1-γ]Q_k(s,a) ). This is because at the end of the loop after <ref> of <ref> is run, the next phase core set _ℓ+1 = _ℓ, which make any states that are covered by _ℓ also be covered by _ℓ+1. A state that was once newly covered by _ℓ will no longer be newly covered again. If the algorithm was to make an NPG update for states that are newly covered in some loop, in any subsequent loops with the same value ℓ, the policy will remain unchanged. 1 Whenever LSE-subroutine on <ref> of Confident-NPG-CMDP is executed, for all k ∈ [k_ℓ, ⋯, k_ℓ+1-1], for all s ∈ Cov(C_ℓ) and a ∈, the least-square estimate k(s,a) satisfies the following, | k(s,a) - q_π_k', λ_k^p(s,a) | ≤ϵ' for all π_k' ∈Π_π_k, Cov(_ℓ), where ϵ' = (1+U)(ω + √(α) B + (ω + ϵ) √(d̃)) with d̃ = Õ(d). Likewise, | k(s,a) - q_π_k'^c(s,a) | ≤ω + √(α) B + (ω + ϵ) √(d̃) for all π_k' ∈Π_π_k, Cov(_ℓ), By the primal-dual approach, we have reduced the CMDP problem into an unconstrained problem with a single reward of the form r_λ = r + λ c. The proof of this lemma is a direct application of <ref> in the single reward setting along with a few adjustments. The result of <ref> depends on <ref> (q-bar accuracy). For <ref> to be true, one of the requirement is that the behaviour policy π_k_ℓ and the target policy π_k must satisfy the <ref>. The lemma that verifies this condition is <ref>, and all of the modifications would need be made in <ref> for the CMDP setting. The main modification to <ref> for the CMDP setting is to recognize that the value k for all k ∈ [0,K] are in the range of 0 and 1+U/1-γ. The upper bound value is the result of the primary reward function taking values in the range of [0, 1] and the dual variable taking values in the range of [0, U]. The value U is defined in <ref> for relaxed-feasibility and in <ref> for strict-feasibility, and it is an upper bound on the optimal dual variable (i.e., λ^* ≤ U). From this value modification, we have the step size η_1 = 1-γ/1+U√(2 ln(||)/K). The next modification is the interval of data collection m. 
For the total number of iterations K = 9(√(2 ln(||)) + 1)^2(1+U)^2/(1-γ)^4 ϵ^2 and H = ln((90√(d̃)(1+U))/((1-γ)^3 ϵ))/1-γ. Then, it follows that m = (1+U)ln(1+c)/2ϵ (1-γ) ln((90√(d̃)(1+U))/((1-γ)^3 ϵ)). With these changes, we apply <ref> to validate one of the conditions for <ref>. Unlike the single-reward setting, the primal estimate k(s,a) also depends on the dual variable k. By the time policy π_k+1 is being updated with respect to k, the λ_k would have been set. For the on-policy iteration k= k_ℓ, the dual variable λ_k is set in a prior loop. For off-policy iterations k ∈ [k_ℓ+1, ⋯, k_ℓ+1-1], the dual variable λ_k would have been set in the k -1 iteration. Thus, in any iteration k ∈ [k_ℓ, ⋯, k_ℓ+1-1], the most recent λ_k will be available for the construction of k. Due to the execution of <ref> and <ref>, the initial state s_0 would be guaranteed to be covered by C_ℓ. If s_0 ∈ Cov(_ℓ) for the first time, then λ_k makes a mirror descent update in <ref> using k(s_0) at the time of the update and remains that value for the reminder execution of the algorithm. This is because at the end of the loop after <ref> is run, the next phase core set _ℓ+1 = _ℓ, making any states that are covered by _ℓ also be covered by _ℓ+1. By <ref>, these states will remain covered by _ℓ+1 for the rest of the algorithm execution. If the algorithm was to set the dual variable to the mirror descent update, in any subsequent loops with the same ℓ again, the dual variable λ_k remains unchanged. Therefore, by <ref>, k will also remain unchanged as the value when it is computed for the first time in line  <ref>. With all the conditions of <ref> satisfied, we can apply the result to . For k, the result follows from <ref>. § RELAXED-FEASILIBILITY [Lemma 4.1 of <cit.>] Let λ^* be the optimal dual variable that satisfies min_λ≥ 0max_ππ(ρ) + λ (π(ρ) - b). If we choose U = 2/ζ(1-γ), then λ^* ≤ U. Let π^*_c(ρ) ≐π(ρ), and recall that ζ≐π^*_c(ρ) - b > 0, then π^*(ρ) = max_πmin_λ≥ 0π(ρ) + λ (π(ρ) - b). By <cit.>, π^*(ρ) = min_λ≥ 0max_ππ(ρ) + λ (π(ρ) - b) = max_ππ(ρ) + λ^* ( π(ρ) - b) ≥π^*_c(ρ) + λ^* ( π^*_c(ρ) - b) ≥π^*_c(ρ) + λ^*ζ. After rearranging terms, we have λ^* ≤π^*(ρ) - π^*_c(ρ)/ζ≤1/ζ(1-γ). By choosing U = 2/ζ(1-γ), we have λ^* ≤ U. R^p(π^*, K) = ∑_k=0^K-1_s' ∼ d_π^*(s_0), s' ∈ Cov(_0)[π^*(· |s') - π_k(·|s'), k(s', ·) + kk(s', ·)], R^d(λ, K) = ∑_k=0^K-1 (k - λ) (k(s_0) - b). 2 For any failure probability δ∈ (0,1], target accuracy ϵ > 0, and initial state s_0 ∈, with probability 1-δ, Confident-NPG-CMDP returns a mixture policy π̅_K that satisfies the following, π^*(s_0) - K(s_0) ≤5 ϵ'/1-γ + (√(2 ln(A))+1)(1+U)/(1-γ)^2 √(K), b - K(s_0) ≤ [b-K(s_0)]_+ ≤5 ϵ'/(1-γ)(U-λ^*) + (√(2 ln(A)) +1)(1+U)/(1-γ)^2(U-λ^*) √(K), where ϵ' ≐ (1+U)(ω + (√(α)B + (ω + ϵ)√(d̃))) with d̃ = Õ(d). We apply <ref> with π being the optimal policy π^* for CMDP, k = k + kk instead of k, and <ref> instead of <ref> of the single reward setting, then we have 1/K∑_k=0^K-1π^*, λ_k(s_0) - π_k, λ_k(s_0) ≤4 ϵ'/1-γ + 1/K(1-γ)∑_k=0^K-1_s' ∼ d_π^*(s_0), s' ∈ Cov(_0)[ k(s', ·) + kk(s', ·), π^*(· | s') - π_k(· | s') ] = 4 ϵ'/1-γ + R^p(π^*, K)/K(1-γ). By Proposition 28.6 of <cit.>, the primal regret R^p(π^*, K) ≤1+U/1-γ√(2K ln(||)) with η_1 = 1-γ/1+U√(2 ln(||)/K). Expanding the left hand side of <ref> in terms of ,, 1/K∑_k = 0^K-1π^*(s_0) - π(s_0) + 1/K∑_k = 0^K-1k (π^*(s_0) - π_k(s_0)) ≤4 ϵ'/1-γ + 1+U/(1-γ)^2√(2 ln(A)/K). 
Furthermore, 1/K∑_k=0^K-1λ_k (π_k(s_0) - π^*(s_0)) ≤1/K∑_k=0^K-1λ_k (π_k(s_0) - b) = 1/K∑_k=0^K-1λ_k(π_k(s_0) - k(s_0)) + λ_k(k(s_0) - b) ≤ϵ' + R^d(0,K)/K ≤ϵ' + U/(1-γ) √(K). Note that λ_k(π_k(s_0) - k(s_0)) ≤ U(ω + √(α) B + ω√(d̃) + ϵ√(d̃)) ≤ϵ', with d̃ defined in <ref>. The update to the dual variable is a mirror descent algorithm. By Proposition 28.6 of <cit.>, the dual regret R^d(0, K) ≤U√(K)/1-γ with η_2 = U(1-γ)/√(K). Altogether, 1/K∑_k = 0^K-1π^*(s_0) - π(s_0) ≤4 ϵ'/1-γ + 1+U/(1-γ)^2√(2 ln(A)/K) + ϵ' + U/(1-γ)√(K) ≤5 ϵ'/1-γ + (√(2 ln(A))+1)(1+U)/(1-γ)^2 √(K) For bounding the constraint violations, we first incorporate R^d(λ, K) into <ref> and rearrange terms to obtain: 1/K∑_k=0^K-1π^*(s_0) - π_k(s_0) + λ/K∑_k = 0^K-1(b - π_k(s_0)) ≤1/K∑_k=0^K-1 (k - λ) (π_k(s_0) - b) + 4 ϵ'/1-γ + (1+U)√(2 ln(A))/(1-γ)^2 √(K) = 1/K∑_k=0^K-1 (k - λ) (π_k(s_0) - k(s_0)) + 1/K∑_k=0^K-1 (k - λ) (k(s_0) - b) + 4 ϵ'/1-γ + (1+U)√(2 ln(A))/(1-γ)^2 √(K) = ϵ' + R^d(λ, K)/K + 4 ϵ'/1-γ + (1+U)√(2 ln(A))/(1-γ)^2 √(K) ≤5 ϵ'/1-γ + (1+U)(√(2 ln(||))+1)/(1-γ)^2 √(K) There are two constraint cases. Case one is b - π(s_0) ≤ 0 (no violation), for which case, λ = 0. Case two is b - π(s_0) > 0 (violation), for which case, λ = U. With these choices, R^d(λ, L) is increasing in λ. Using notation [x]_+ = max{x, 0}, we have 1/K∑_k=0^K-1π^*(s_0) - π_k(s_0) + U/K[∑_k=0^K b - π(s_0)]_+ ≤5 ϵ'/1-γ + (1+U)(√(2 ln(||))+1)/(1-γ)^2 √(K). By Lemma B.2 of <cit.>, we have [b-K(s_0)]_+ ≤5 ϵ'/(1-γ)(U-λ^*) + (√(2 ln(A)) +1)(1+ U)/(1-γ)^2(U-λ^*) √(K). 1 With probability 1-δ, the mixture policy K = 1/k∑_k=0^K-1π_k returned by confident-NPG-CMDP ensures that π^*(s_0) - K(s_0) = 5(1+U)(1+√(d̃))/1-γω + ϵ, K(s_0) ≥ b - (5 (1+U)(1+√(d̃))/(1-γ)ω + ϵ). if we choose n = 1013 (1+c)^2(1+U)^2 d̃/ϵ^2 (1-γ)^4ln(4 d̃(L+1)/δ), α = (1-γ)^2 ϵ^2/225(1+U)^2 B^2, K= 9(√(2 ln(||)) + 1)^2(1+U)^2/(1-γ)^4 ϵ^2, η_1 = 1-γ/1+U√(2 ln(||)/K), η_2 = U(1-γ)/√(K), H = ln((90√(d̃)(1+U))/((1-γ)^3 ϵ))/1-γ, m = (1+U)^2ln(1+c)/2ϵ (1-γ) ln((90√(d̃)(1+U))/((1-γ)^3 ϵ)), and U = 2/ζ(1-γ). Furthermore, the algorithm utilizes at most Õ(d^2 (1+U)^3 ϵ^-3(1-γ)^-8) queries in the local-access setting. From <ref>, we have π^*(s_0) - K(s_0) ≤5 ϵ'/(1-γ) + (√(2 ln(A))+1)(1 + U)/(1-γ)^2 √(K), b - K(s_0) ≤5 ϵ'/(1-γ)(U-λ^*) + (√(2 ln(A))+1)(1 + U)/(1-γ)^2(U-λ^*)√(K), Let C ≐1/ζ(1-γ) for a ζ∈ (0, 1/1-γ]. By <ref>, we chose U = 2C and λ^* ≤ C. It follows that 1/U-λ^*≤1/C = ζ(1-γ) ≤ 1, and thus the right hand side of <ref> is upper bounded by the right hand side of <ref>. Recall ϵ' ≐ (1+U) (ω + (√(α)B+ (ω + ϵ) √(d̃))). Then, the goal is to set the parameters H, n, K, and α appropriately so that the A, B and C of the following expression, when added together, is less than ϵ: 5(1+U)(1+√(d̃))ω/1-γ + 5(1+U) √(α)B/1-γ_A + 5(1+U)ϵ√(d̃)/1-γ_B + (√(2 ln(A))+1))(1 + U)/(1-γ)^2√(K)_C. First, we set n appropriately so that the failure probability is well controlled. The failure probability depends on the number of times Gather-data subroutine (<ref>) is executed. Gather-data is run in phase [0,L]. Each phase has at most d̃ elements, and recall d̃ is defined in <ref>. Therefore, Gather-data would return success at most d̃ times. Altogether, Gather-data can return success at most d̃ (L+1) times, each with probability of at least 1-δ' = 1 - δ/(d̃ (L+1)). By a union bound, Gather-data returns success in all occasions with probability 1-δ. 
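For intuition, the primal and dual updates whose regrets are bounded above can be sketched as follows; this is a schematic single-state illustration with placeholder names (the softmax helper, the clipping range, and the way the constraint estimate at s_0 is obtained are simplifications), not the paper's actual algorithm, which updates only newly covered states and reuses cached estimates.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def primal_dual_step(pi_s, qhat_r, qhat_c, lam, vhat_c_s0, b, eta1, eta2, U, gamma):
    """One conceptual iteration at a single covered state s.

    pi_s      : current policy over actions at s
    qhat_r/c  : estimated reward / constraint action values at s
    lam       : dual variable in [0, U]
    vhat_c_s0 : estimated constraint value at the initial state s_0
    """
    cap = 1.0 / (1.0 - gamma)
    q_bar = np.clip(qhat_r, 0.0, cap) + lam * np.clip(qhat_c, 0.0, cap)

    # NPG / exponentiated-gradient step: pi_{k+1}(.|s) proportional to pi_k(.|s) * exp(eta1 * q_bar)
    new_pi = softmax(np.log(pi_s + 1e-12) + eta1 * q_bar)

    # dual step on the Lagrangian, projected back to [0, U]
    new_lam = float(np.clip(lam - eta2 * (vhat_c_s0 - b), 0.0, U))
    return new_pi, new_lam
```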
By setting H = ln((90√(d̃)(1+U))/((1-γ)^3 ϵ))/1-γ and n = 1013 (1+c)^2(1+U)^2 d̃/ϵ^2 (1-γ)^4ln(4 d̃(L+1)/δ), we have for all l ∈ [0,L], k ∈ [k_l, ⋯, k_l+1-1], the |k(s,a) - q_π_k'(s,a) | ≤(1-γ)ϵ/15(1+U)√(d̃) hold for all π_k' ∈Π_π_k, Cov(_l) with probability at least 1-δ. Then, this is used in the accuracy guarantee of the least-square estimate (<ref>) and finally in the suboptimality bound of <ref>. Then, we can set B of <ref> to be less than ϵ/3. By setting K = 9(√(2 ln(||)) + 1)^2(1+U)^2/(1-γ)^4 ϵ^2, we have C of <ref> be less than ϵ/3. Finally, we set α = (1-γ)^2 ϵ^2/225(1+U)^2 B^2 and have A of <ref> to be less than ϵ/3. Altogether, we have the reward suboptimality satisfying <ref> and constraint satisfying <ref>. For the query complexity, we note that our algorithm does not query the simulator in every iteration, but at fixed intervals, which we call phases. Each phase is m iterations in length. There are total of L = K/≤ K/m = Õ ( (1+U) (1-γ)^-3ϵ^-1) phases. In each phases, Gather-data subroutine (<ref>) can be run. Each time Gather-data returns success with trajectories, the subroutine would have made at most nH queries. Gather-data is run for each of the elements in _l, l ∈ [0,L]. By the time the algorithm terminates, all _l's are the same. Since there are at most Õ(d) elements in each _l, the algorithm will make a total of nH(L+1)|_0| number of queries to the simulator. Since we have H = Õ((1-γ)^-1), n = Õ((1+U)^2 d ϵ^-2 (1-γ)^-4) and L = Õ((1+U)ϵ^-1(1-γ)^-3), the sample complexity is Õ(d^2 (1+U)^3 (1-γ)^-8ϵ^-3). § STRICT-FEASIBILITY Let be defined as in <ref> and π^* be the optimal policy of CMDP. Then, for a > 0, π^*(s_0) - (s_0) ≤λ^* , where λ^* is the optimal dual variable that satisfies min_λ≥ 0max_ππ(s_0) + λ (π(s_0) - b'). (s_0) = max_πmin_λ≥ 0π(s_0) + λ (π(s_0) - b'). By <cit.>, (s_0) = min_λ≥ 0max_ππ(s_0) + λ (π(s_0) - b') = max_ππ(s_0) + λ^* ( π(s_0) - b') ≥π^*(s_0) + λ^* ( π^*(s_0) - (b + )) ≥π^*(s_0) + λ^*(b - b - ) because π^*(s_0) ≥ b = π^*(s_0) - λ^* . After rearranging the terms, we get the result. Let λ^* be the optimal dual variable that satisfies min_λ≥ 0max_ππ(s_0) + λ (π(s_0) - b'). If we choose U = 4/ζ(1-γ), then λ^* ≤ U requiring that ∈ (0, ζ/2). Let π^*_c(s_0) ≐π(s_0), and recall that ζ≐π^*_c(s_0) - b > 0, then (s_0) = max_πmin_λ≥ 0π(s_0) + λ (π(s_0) - b') By <cit.>, (s_0) = min_λ≥ 0max_ππ(s_0) + λ (π(s_0) - b') = max_ππ(s_0) + λ^* ( π(s_0) - b') ≥π^*_c(s_0) + λ^* ( π^*_c(s_0) - (b + )) = π^*_c(s_0) + λ^*(ζ - ). If we require ∈ (0, ζ/2), then we have (s_0) ≥π^*_c(s_0) + λ^*(ζ - ζ/2) = π^*_c(s_0) + λ^*ζ/2 After rearranging terms in <ref>, we have λ^* ≤2((s_0) - π^*_c(s_0))/ζ≤2/ζ(1-γ). By choosing U = 4/ζ(1-γ), λ^* ≤ U. 2 With probability 1-δ, a target ϵ > 0, the mixture policy K returned by confident-NPG-CMDP ensures that π^*(s_0) - K(s_0) ≤ϵ and K(s_0) ≥ b, if assuming the misspecificiation error ω≤(1-γ)/40(1+U)(1+ √(d̃)), and if we choose = ϵ(1-γ)ζ/8, α = ^2 (1-γ)^2/1600 (1+U)^2 B^2, K = 64 (√(2 ln(||))+1)^2(1+U)^2/(1-γ)^4 ^2, n = 7200 (1+c)^2 d̃ (1+U)^2/^2(1-γ)^4ln(4 d̃ (L+1)/δ), H = ln(240(1+U)√(d̃)/ (1-γ)^3)/1-γ, m = 4(1+U) ln(1+c)/(1-γ)ln(240(1+U)√(d̃)/ (1-γ)^3), U = 4/ζ(1-γ). Furthermore, the algorithm utilizes at most Õ(d^2 (1+U)^3 (1-γ)^-11ϵ^-3ζ^-3) queries in the local-access setting. Let λ^* be the optimal dual variable that satisfies the Lagrangian primal-dual of the surrogate CMDP defined by <ref> (i.e., λ^* = _λ≥ 0max_ππ(s_0) + λ (π(s_0) - b')). 
π^*(s_0) - K(s_0) = [π^*(s_0) - (s_0) ]_surrogate suboptimality + [ (s_0) - K(s_0) ]_Confident-NPG-CMDP suboptimality ≤λ^* + ϵ̅, where ϵ̅= 5(1+U)(1+√(d̃))ϵ/1-γ + 5(1+U) √(α)B/1-γ + 5(1+U)ϵ√(d̃)/1-γ + (√(2 ln(A)) +1)(1 + U)/(1-γ)^2√(K). By <ref>, π^*(s_0) - (s_0) ≤λ^*. We can further upper bound λ^* by U = 4/ζ(1-γ) using <ref> and requiring ∈(0, ζ/2). Together with <ref>, we have Confident-NPG return K s.t. π^*(s_0) - K(s_0) ≤4/ζ(1-γ) + ϵ̅ and b' - K(s_0) ≤ϵ̅. Now, we need to set such that 1) ∈( 0, ζ/2) and 2) - ϵ̅≥ 0 are satisfied. If we choose = ϵ(1-γ)ζ/8, then the first condition is satisfied. This is because ϵ∈( 0, 1/1-γ], and thus ≤ζ/8 < ζ/2. Next, we check if our choice of = ϵ(1-γ)ζ/8 satisfies - ϵ̅≥ 0. For the condition - ϵ̅≥ 0 to be true, we make an assumption on the misspecification error ω≤(1-γ)/40(1+U)(1+ √(d̃)), and pick n, α, K, η_1, η_2, H, m to be the values outlined in this theorem. Consequently, we have ϵ̅= 1/2. Then, we have ensured the condition - ϵ̅≥ 0 is satisfied. We note that because ζ∈(0, 1/1-γ), we have ϵ̅≤ϵ/16≤ϵ. following from <ref>, we have π^*(s_0) - K(s_0) ≤ϵ and b' - K(s_0) ≤/2. Then it follows that b+ /2≤K(s_0). Strict-feasilbility is achieved. For the query complexity, we note that our algorithm does not query the simulator in every iteration, but at fixed intervals, which we call phases. Each phase is m iterations in length. There are total of L = K/≤ K/m = Õ ((1+U)(1-γ)^-3^-1) phases. In each phase, Gather-data subroutine (<ref>) can be run. Each time Gather-data subroutine returns with trajectories, the subroutine would have made at most nH queries. Gather-data is run for each of the element in _l, l ∈ [0,L]. By the time the algorithm terminates, all _l's are the same. Since there are at most Õ(d) elements in each _l, the algorithm will make a total of nH(L+1)|_0| number of queries to the simulator. Since we have H = Õ((1-γ)^-1), n = Õ((1+U)^2 d (1-γ)^-4^-2), L = Õ ((1+U)(1-γ)^-3^-1), and = ϵζ (1-γ)/8, the sample complexity is Õ(d^2 (1+U)^3 (1-γ)^-11ϵ^-3ζ^-3).
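To get a rough sense of the magnitudes these parameter choices imply, the short script below plugs arbitrary example values of ε, γ, ζ, δ, d̃, the action-set size, and the constants c and B into the formulas of this theorem; none of these numbers come from the paper, and L+1 inside the logarithm is crudely replaced by K for illustration.

```python
import math

# purely illustrative inputs (assumptions, not values used in the paper)
eps, gamma, zeta, delta = 0.1, 0.9, 0.5, 0.05
d_tilde, A, c, B = 50, 10, 1.0, 1.0

U = 4.0 / (zeta * (1.0 - gamma))
eps_t = eps * (1.0 - gamma) * zeta / 8.0          # the epsilon-tilde of the theorem
K = 64.0 * (math.sqrt(2.0 * math.log(A)) + 1.0) ** 2 * (1.0 + U) ** 2 / ((1.0 - gamma) ** 4 * eps_t ** 2)
H = math.log(240.0 * (1.0 + U) * math.sqrt(d_tilde) / (eps_t * (1.0 - gamma) ** 3)) / (1.0 - gamma)
alpha = eps_t ** 2 * (1.0 - gamma) ** 2 / (1600.0 * (1.0 + U) ** 2 * B ** 2)
n = (7200.0 * (1.0 + c) ** 2 * d_tilde * (1.0 + U) ** 2 / (eps_t ** 2 * (1.0 - gamma) ** 4)
     * math.log(4.0 * d_tilde * (K + 1.0) / delta))   # K used in place of L+1 only for illustration

print(f"U = {U:.0f}, eps~ = {eps_t:.2e}")
print(f"K = {K:.2e}, H = {H:.0f}, n = {n:.2e}, alpha = {alpha:.2e}")
```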
http://arxiv.org/abs/2406.18744v1
20240626201914
Quantum Resources Required for Binding Affinity Calculations of Amyloid beta
[ "Matthew Otten", "Thomas W. Watts", "Samuel D. Johnson", "Rashmi Sundareswara", "Zhihui Wang", "Tarini S. Hardikar", "Kenneth Heitritter", "James Brown", "Kanav Setia", "Adam Holmes" ]
quant-ph
[ "quant-ph" ]
Department of Physics, University of Wisconsin – Madison, Madison, WI, USA Corresponding author: mjotten@wisc.edu HRL Laboratories, LLC, Malibu, CA, USA HRL Laboratories, LLC, Malibu, CA, USA HRL Laboratories, LLC, Malibu, CA, USA Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA Research Institute for Advanced Computer Science (RIACS), USRA, Moffett Field, CA qBraid Co., 111 S Wacker Dr., Chicago, IL 60606, USA qBraid Co., 111 S Wacker Dr., Chicago, IL 60606, USA qBraid Co., 111 S Wacker Dr., Chicago, IL 60606, USA qBraid Co., 111 S Wacker Dr., Chicago, IL 60606, USA HRL Laboratories, LLC, Malibu, CA, USA Corresponding author: aholmes@hrl.com § ABSTRACT Amyloid beta, an intrinsically disordered protein, plays a seemingly important but not well-understood role in neurodegenerative diseases like Alzheimer's disease. A key feature of amyloid beta, which could lead to potential therapeutic intervention pathways, is its binding affinity to certain metal centers, like iron and copper. Numerically calculating such binding affinities is a computationally challenging task, involving strongly correlated metal centers. A key bottleneck in understanding the binding affinity is obtaining estimates of the ground state energy. Quantum computers have the potential to accelerate such calculations, but it is important to understand the quantum resources required. In this work, we detail a computational workflow for binding affinity calculations for amyloid beta utilizing quantum algorithms, providing estimates of the quantum resources required at both the logical and hardware level. Quantum Resources Required for Binding Affinity Calculations of Amyloid beta Adam Holmes July 1, 2024 ============================================================================ § INTRODUCTION Proteins which contain a metal ion cofactor, known as metalloproteins, account for nearly half of all proteins in nature, with functions ranging from photosynthesis to nitrogen fixation to other important biological processes <cit.>. Metal-protein interactions represent strongly correlated systems, where standard classical electronic structure techniques fail to accurately predict features such as coordination and dynamics <cit.>. Quantum computers have long been predicted to be able to solve such strongly correlated problems exponentially faster than classical computers <cit.>. In fact, nitrogenase, an important protein in natural nitrogen fixation, has, through its primary cofactor, the iron molybdenum cofactor (FeMo-co), long served as an example of a problem that future, fault-tolerant quantum computers may be able to solve, and it is often used as a basis for quantum resource estimation <cit.>. In their biological function, certain metalloproteins, specifically the protein amyloid beta (Aβ) interacting with metal ions such as copper, iron, and zinc, have been linked to neurodegenerative diseases such as Alzheimer's disease <cit.>. These interactions have been studied via many classical techniques, including molecular dynamics (MD), quantum mechanics / molecular mechanics (QM/MM), and density functional theory (DFT) <cit.>, but a lack of accuracy has resulted in conflicting predictions for coordination schemes and other chemical quantities. Higher accuracy calculations are therefore necessary to fully understand Aβ's role in the onset of neurodegenerative diseases. 
In this paper, we provide specific instances of Aβ-metal ion systems and further provide estimates of the required quantum resources to perform accurate calculations on such systems. § PROBLEM STATEMENT AND UTILITY Amyloid-beta is a protein that is critical to understanding the pathogenesis of Alzheimer's Disease (AD). Created by proteases breaking down Amyloid Precursor Protein (APP), Amyloid-Beta (Aβ) is a disordered protein and the subject of many leading AD hypotheses. Chief among these hypotheses is the amyloid cascade hypothesis and the oligomer cascade hypothesis (are soluble low molecular weight oligomers causing toxicity or is it the abnormal accumulation of plaques in the brain? <cit.>). To understand this aggregation and to design targeted drugs, a key challenge is understanding how Aβ interacts with metal ions such as zinc, copper, iron, and platinum <cit.>. Therefore, it is important to correctly model these metalloproteins and calculate their metal binding affinities. There are many possible metal-binding domains that could be studied; AB16 (shown in Fig. <ref>) is considered the minimal metal-binding domain for the larger Aβ protein (Protein Databank (PDB) ID: 1ZE9) <cit.>. At physiologically relevant conditions (pH 6.5), many possible different binding sites are found for AB16. Nuclear magnetic resonance (NMR) studies support His6, His13, His14, and Glu11 as the binding sites <cit.>; one such region, His6, is shown interacting with a copper ion in Fig. <ref>. Experimental studies are faced with the challenges of highly disordered structure, the rapid kinetics of aggregation, and solvent effects when identifying coordination schemes, demonstrating the need for computational tools. Computational studies have been performed on AB16, with a variety of techniques used, such as QM/MM, classical MD, and DFT. Each approach has its own shortcomings, and often suggests alternative coordination schemes such as with oxygen, COAla2, Tyr10, Asp1, N-terminal nitrogen, or even water as the fourth coordination site <cit.>. The specific computational task is to calculate the metal binding affinity of the AB16 protein. The specific process for calculating this is detailed in the workflow below. Knowing the metal binding affinity provides insight into how AB16 interacts with metal ions and can be used to inform a broader theory to understand its role in plaque aggregation and to potentially design targeted drugs. For instance, a correct understanding of metal-protein coordination sites can allow us to understand the mechanism and kinetics of protein oligomerization and aggregation, show what ions contribute to this effect, and what physiological factors are necessary for this process to occur. A binding energy based mechanistic understanding can be the key to studying one of the earliest points of AD diagnosis. This, in turn, can be useful in future drug discovery and design pipelines, which could help alleviate the burden of AD. The specific problem we study, that of the AB16 protein, is only one in a family of possible Aβ metalloproteins. Larger proteins (such as AB40 or AB42) could also be potential computational targets. The larger set of metalloproteins also include systems important for photosynthesis, nitrogen fixation, and water oxidation <cit.>. The techniques described here for AB16 could be applied to the wider family of metalloproteins. 
Alzheimer's disease has a large economic impact; the Alzheimer's Association reported that in 2024 the total payments for health care, long-term care and hospice services for people age 65 and older with dementia were estimated to be $360 billion <cit.>. It is certainly not expected that calculations of the metal-binding affinity of AB16 will directly lead to a therapeutic for AD. To attempt to quantify the potential utility of a successful computational solution for the metal-binding affinity in Aβ, we instead look at the National Institutes of Health (NIH) RePORTER tool, which reports, among other things, the research expenditures of the NIH in an open, searchable manner <cit.>. Since 2005, the NIH has spent around $8.9 billion on awards for projects mentioning `amyloid beta' (and other variants, such as `beta amyloid'). From that set of awards, a total of around $280 million went to projects that mention the word `computational' in the abstract. Therefore, we estimate that a technique or device which could solve the computational problems detailed in this paper would have a utility of at least $280 million. § WORKFLOW For this application, the goal is to understand how Aβ interacts with various metal ions. While there are many ways to attack this problem, one workflow, following the quantum mechanics/molecular mechanics (QM/MM) approach, is as follows <cit.>. First, an experimental or previously studied geometry is taken from a database, such as the Protein Data Bank. Specific choices for this geometry (such as the AB16 protein) are discussed elsewhere. This structure is then reoptimized at a coarse level (that of, say, classical force fields) using geometry optimization techniques. This level of optimization results in several local minima of similar quality, which are then chosen to be analyzed at a more detailed level of theory, here using a QM/MM approach. The QM/MM step performs additional optimization, where a small region is treated fully quantum mechanically (at, say, a hybrid functional DFT level), and the rest of the protein is treated via molecular mechanics. The various geometries found from this additional optimization are then analyzed at an even more accurate level of theory. Here, we use the fragment molecular orbital (FMO) <cit.> method to divide the AB16 protein into multiple smaller fragments. The ground state energy of these fragments is then found directly using the most accurate level of theory (either a full configuration interaction (FCI)-like algorithm on classical computers or a quantum phase estimation (QPE) algorithm on quantum computers). The energies of each fragment are combined via the FMO algorithm to get the overall energy for the various conformations (geometries). These energies are compared to determine which conformations are most energetically favorable. The metal binding affinity can then be calculated by comparing the energy of the protein (which would be calculated via a similar QM/MM technique) with and without the additional metal ions. Several corrections to the energy, such as dispersion and solvation, may be applied. This workflow is shown diagrammatically in Fig. <ref>. The accuracy afforded at the lowest level of the calculation, at the individual fragment level, is necessary to distinguish between multiple possible structures. QM/MM techniques have been applied to the Aβ system before, using much less accurate techniques compared with the proposed FMO + FCI or QPE method. 
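As a rough illustration of how the last two steps of this workflow fit together, the sketch below assembles a total energy from fragment energies in the FMO spirit (monomer energies plus pairwise corrections) and forms the binding affinity as an energy difference between the metal-bound and metal-free systems. The function names, dictionary layout, and toy numbers are hypothetical, and the real pipeline also includes the embedding fields, dispersion, and solvation corrections mentioned above.

```python
from itertools import combinations

def fmo2_total_energy(monomer_E, dimer_E):
    """FMO2-style assembly: E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J).

    monomer_E : dict fragment_id -> fragment energy (Hartree)
    dimer_E   : dict (id_I, id_J) -> dimer energy (Hartree)
    """
    total = sum(monomer_E.values())
    for i, j in combinations(sorted(monomer_E), 2):
        if (i, j) in dimer_E:
            total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
    return total

def binding_affinity(E_complex, E_protein, E_ion):
    # Delta E_bind = E(protein + ion) - E(protein) - E(ion); more negative means stronger binding
    return E_complex - E_protein - E_ion

# toy numbers in Hartree, purely illustrative
E_complex = fmo2_total_energy({1: -500.20, 2: -310.10}, {(1, 2): -810.50})
E_protein = fmo2_total_energy({1: -500.10, 2: -310.10}, {(1, 2): -810.30})
print(binding_affinity(E_complex, E_protein, E_ion=-0.15), "Hartree")
```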
One study, using such less-accurate techniques, found 8 structures, all lying within about 40 kJ/mol of each other, which, due to an expected accuracy of only ~20 kJ/mol, all had to be considered equally probable <cit.>. Overall, the energies of dozens of geometries, both with and without an additional metal ion, need to be calculated in an optimization / dynamical loop, resulting in hundreds of energy evaluations each. §.§ Specific Hamiltonian We use the AB16 protein (PDB ID: 1ZE9) <cit.> as our protein of interest. The choice for the specific protein structure is motivated by the fact that it is the minimal metal coordination domain within the larger AB42 structure. With a structure identified at physiologically relevant pH by solution NMR, AB16 is a realistic and biologically relevant starting point. Furthermore, this structure has been previously studied using classical techniques, showcasing interest from the community as well as providing previous results to benchmark against. The atomic structure of AB16 is shown in Fig. <ref>. As part of our workflow, we divide the protein into individual fragments which are solved separately and whose solutions are combined classically. The most interesting, and likely most difficult, fragments to solve are the ones which involve the metal ion. One such potential binding site, His6, with a copper ion, is shown in Fig. <ref>. Our fragmentation scheme results in 15 fragments. Given the positions of the atoms, we represent our Hamiltonian using Gaussian type orbitals (GTO). To provide a wide range of resource estimates, we use several basis sets, including, in order of increasing numbers of basis functions, STO-3G, 6-31g* and cc-pvdz. The largest basis sets (Dunning's correlation consistent basis sets <cit.>) studied here are consistent with those that are used for highly-accurate quantum chemistry studies. No active space is used; we directly correlate all electrons. §.§ Specific Algorithm Descriptions §.§.§ Fragment Molecular Orbital The first step in the pipeline is generating fragments from the larger protein sequence. This is done using the Fragment Molecular Orbital (FMO) method <cit.>. This method preserves chemical information by breaking bonds heterolytically and moving a bond with a proton. The method follows an energy decomposition approach, where the energy of the entire molecule is constructed by summing over monomer fragment energies, dimer fragment energies, and so on. Overlapping fragment energies are then subtracted. At the monomer step, each monomer's electronic density relaxes with respect to the electric field of all other monomers and is recalculated until a self-consistent cycle converges. At the dimer step, dimer energies are calculated in the field of the monomer densities (which are not recalculated) <cit.>. §.§.§ Quantum Phase Estimation with Double-Factorized Qubitization Quantum phase estimation (QPE) is one of the core algorithms for solving quantum chemistry problems on quantum computers <cit.>. The QPE algorithm to estimate the eigenvalue of a unitary U can be characterized by the following key steps: * Initialization. The algorithm starts with two registers. The ancilla register is initialized to the ⊗ |+⟩ state. The data register is prepared in a state that has suitable overlap with the desired eigenstate of the unitary operator U. * Controlled Unitary Operations. A series of controlled unitary operations U^2^k with integer k are applied to the data register, conditioned on the state of the ancilla qubits. 
By varying k for each operation, different powers of the phase are encoded (phase kick-back) into the ancilla qubits. Implementation of the controlled unitaries in this step constitutes the major contribution to resource estimation. * Inverse Quantum Fourier Transform (QFT). The ancilla register is now in a superposition state encoding the eigenvalue of U. The inverse Quantum Fourier Transform is then applied to this register, and upon measuring the ancilla register, one gets a bit string that is an estimate of the phase of the eigenvalue of U. The desired accuracy of the phase estimate determines the number of ancilla qubits needed and hence is a key factor for resource estimation. For QPE to be applied to ground state energy estimation, an initial state with suitable overlap with the ground state is prepared and evolved in time under the action of the Hamiltonian, H. Various methods of time-evolution can be used, including Trotterization <cit.> and qubitization <cit.>. We use the double-factorized qubitization algorithm of Ref. <cit.>. In qubitization, the time evolution is performed not through the direct Hamiltonian, H, but instead through a walk operator, W=e^i sin^-1(H) <cit.>. Compared with the Trotterization of the time-evolution unitary exp[-itH], qubitization provides a considerable reduction in gate depth at the cost of additional logical qubits. The technique of qubitization holds great promise for molecular systems in terms of the T-gate complexity <cit.>. For our specific amyloid-β application, we use the standard quantum chemistry electronic Hamiltonian, H = ∑_ij,σ h_ij a^†_(i,σ) a_(j,σ) + 1/2∑_ijkl,σρ h_ijkl a^†_(i,σ) a^†_(k,ρ)a_(l,ρ) a_(j,σ), where h_ij and h_ijkl are the one- and two-electron integrals (computed via a standard quantum chemistry package, e.g., pyscf <cit.>); σ and ρ index spin; and a_p are Fermion raising and lowering operators. The Hamiltonian, eq. (<ref>), is decomposed through the so-called double-factorization procedure, H_DF = ∑_ij,σh̅_ij a^†_i,σ a_j,σ+ 1/2∑_r∈ [R](∑_ij,σ∑_m ∈ [M^(r)]λ_m^(r)R⃗_m,i^(r)R⃗_m,j^(r) a^†_i,σ a_j,σ)^2. This expression is derived from Eq. <ref> through a two-step factorization of the two-electron tensor terms. We refer the readers to Ref. <cit.> for the derivation and more details. Using the Majorana representation of fermion operators, the double-factored Hamiltonian H_DF is mapped into a sum of squares of one-body Hamiltonians, and the walk operator can then be synthesized. § RESOURCE ESTIMATES We estimate the quantum resources required to solve for the ground state energy of each fragment using the Azure Quantum Resource Estimator <cit.> implemented in the Azure Quantum Development Kit <cit.>. For logical resource estimates, we use an accuracy cutoff of 1 mHa, slightly below chemical accuracy, for the double-factorized qubitization algorithm. For physical resource estimates, we use a model consistent with error rates and gate times for optimistic superconducting qubits implementing a surface code with a total error budget of 1%. §.§.§ Resource Estimation Details We use the Azure Quantum Resource Estimator (AzureQRE) <cit.> to provide both the logical and physical resource estimates. We briefly describe several important features of the AzureQRE here; more details can be found in Ref. <cit.>. The AzureQRE takes the definition of a logical circuit and compiles it into a Quantum Intermediate Representation. At the physical level, it assumes a 2D nearest-neighbor layout that has the ability to perform parallel operations. 
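Before turning to the error-correction overhead, it may help to make the Hamiltonian-construction step of the previous subsection concrete. The sketch below uses pyscf to obtain molecular-orbital integrals for a small stand-in molecule and performs the first eigenvalue factorization of the two-electron tensor; the test geometry and truncation threshold are arbitrary choices, an actual AB16 fragment has hundreds of orbitals, and the second factorization and walk-operator synthesis are omitted.

```python
import numpy as np
from pyscf import gto, scf, ao2mo

# Tiny stand-in molecule; an actual AB16 fragment is far larger.
mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="6-31g*")
mf = scf.RHF(mol).run()

n = mf.mo_coeff.shape[1]
h1 = mf.mo_coeff.T @ mf.get_hcore() @ mf.mo_coeff          # one-electron integrals h_ij (MO basis)
eri = ao2mo.restore(1, ao2mo.kernel(mol, mf.mo_coeff), n)  # two-electron integrals h_ijkl, shape (n,n,n,n)

# First factorization step: eigendecompose the two-electron tensor viewed as an (n^2 x n^2) matrix.
V = eri.reshape(n * n, n * n)
w, U = np.linalg.eigh(V)
keep = np.abs(w) > 1e-6                                    # arbitrary truncation threshold
print(f"{int(keep.sum())} retained factors R out of {n * n} for n = {n} orbitals")
# Each retained eigenvector, reshaped to (n, n), is a one-body matrix; a second eigendecomposition
# of each of these yields the lambda_m^(r) and R_m^(r) entering H_DF above.
```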
Because the qubits are inherently noisy, we must estimate the overhead of quantum error correction; specifically, we estimate the overhead using the surface code <cit.>. The distance, d, of the surface code parameterizes the level of error suppression and is adjusted based on the logical depth and physical qubit parameters. Logical qubit movement and multi-qubit measurements are assumed to be performed via lattice surgery operations. The cost to implement T gates is estimated via the use of T state distillation factories <cit.>. We use physical qubit parameters consistent with an optimistic superconducting qubit device, with 50 ns gate times, 100 ns measurement times, and 10^-4 Clifford and non-Clifford error rates. Within the AzureQRE, this has the name . §.§.§ Resource Estimation Results Logical resource estimates are shown in Fig. <ref> for all 15 fragments over the various basis sets. The specific estimates for the fragments containing metal ions are shown with stars; these fragments are the ones expected to have the strongest correlation, necessitating the accuracy provided by quantum computers. The number of logical qubits necessary grows linearly with the number of orbitals; the total number of qubits needed is about an order of magnitude larger than the number of orbitals used to represent the problem due to the additional ancilla qubits needed to perform the double-factorized qubitization algorithm. The number of logical T gates grows clearly as O(n^5), with no clear difference between the fragments with and without the metal ion. To calculate the His6 binding site with a copper ion in the 6-31g* basis set, we estimate that 4728 logical qubits implementing 1.17e14 T gates would be required to perform the double-factorized qubitization algorithm. This system is potentially the smallest system of practical interest, with 192 orbitals. The 6-31g* basis set is the smallest basis set that has the potential to provide somewhat accurate results, though larger basis sets are likely needed to lower the basis set error. To perform such a deep circuit, quantum error correction will be necessary. Physical resource estimates, utilizing quantum error correction, are shown in Fig. <ref>. The total number of physical qubits is now in the millions. For the His6 binding site with a copper ion in the 6-31g* basis set, 7 million physical qubits would be required, with an estimated runtime of 8.8e8s, a little over 28 years. § CONCLUSION In this paper, we provide a specific example of a biologically-relevant, computational metallorganic problem: calculating the metal-binding affinity of the AB16 protein, which is relevant for the study of Aβ's role in neurodegenerative diseases, such as Alzheimer's disease. Through a specific computational workflow, involving QM/MM, FMO, and QPE, we provide detailed quantum resource estimates for solving this problem. The utility of solving this computational problem is at least $91 million, as evidenced by the NIH funding reports. This protein is only one of the many proteins which contain a metal ion cofactor. It is expected that a device which could solve the problems discussed here could likely provide interesting insights into other metalloproteins, such as FeMoco <cit.>. The smallest problem of practical interest, that of the His6 binding site interacting with a copper ion, would require 7 million physical qubits with a run time of a little over 28 years. 
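The headline physical numbers can be loosely sanity-checked with generic surface-code rules of thumb; the constants below (a logical error rate of roughly 0.1(p/p_th)^((d+1)/2) per qubit per code cycle, about 2d² physical qubits per logical qubit, a handful of measurement rounds per code cycle, and d code cycles per sequential T gate) are textbook approximations that ignore the AzureQRE's detailed layout, routing, and T-factory accounting, so they only land in the same ballpark as the reported figures.

```python
T_count = 1.17e14            # logical T gates for the His6 + Cu fragment in 6-31g* (from above)
n_logical = 4728             # logical qubits for the same instance
p, p_th = 1e-4, 1e-2         # assumed physical error rate and surface-code threshold
budget = 0.01                # total error budget

def logical_error_per_cycle(d):
    return 0.1 * (p / p_th) ** ((d + 1) / 2)

# smallest odd code distance keeping the accumulated logical error under budget,
# assuming every logical qubit is active for ~d cycles per T gate
d = 3
while n_logical * T_count * d * logical_error_per_cycle(d) > budget:
    d += 2

cycle_time = 6 * 100e-9                  # ~6 measurement rounds of 100 ns per code cycle (assumption)
phys_qubits = n_logical * 2 * d ** 2     # data + ancilla patches only, no routing or T factories
runtime = T_count * d * cycle_time       # T gates executed one after another

print(f"code distance d ~ {d}")
print(f"~{phys_qubits / 1e6:.1f}M physical qubits before T factories")
print(f"~{runtime:.1e} s (~{runtime / 3.15e7:.0f} years)")
```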
This points to the need to reduce overheads across the board, from a more compact chemical description, to better quantum algorithms, better error correction schemes, and faster physical operations. For the AB16 chemical description, active spaces may be an area to potentially create a more compact chemical description, though active space selection has to be carefully considered to maintain accuracy. § ACKNOWLEDGEMENTS This material is based upon work supported by the Defense Advanced Research Projects Agency under Contract No. HR001122C0074. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency. This work is supported by Wellcome Leap as part of the Quantum for Bio Program. ZW acknowledges support from DARPA under IAA 8839, Annex 130 through NASA Academic Mission Services (contract NNA16BD14C). The authors thank John Carpenter for his support in creating high-resolution figures for this paper.
http://arxiv.org/abs/2406.18254v1
20240626110425
Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning
[ "Zhijie Nie", "Richong Zhang", "Zhangchi Feng", "Hailang Huang", "Xudong Liu" ]
cs.IR
[ "cs.IR", "cs.AI", "cs.MM" ]
CCSE, Beihang University Beijing China niezj@act.buaa.edu.cn Corresbonding author: zhangrc@act.buaa.edu.cn. CCSE, Beihang University Beijing China zhangrc@act.buaa.edu.cn CCSE, Beihang University Beijing China zcmuller@buaa.edu.cn CCSE, Beihang University Beijing China huanghl@act.buaa.edu.cn CCSE, Beihang University Beijing China liuxd@act.buaa.edu.cn § ABSTRACT Cross-lingual Cross-modal Retrieval (CCR) is an essential task in web search, which aims to break the barriers between modality and language simultaneously and achieves image-text retrieval in the multi-lingual scenario with a single model. In recent years, excellent progress has been made based on cross-lingual cross-modal pre-training; particularly, the methods based on contrastive learning on large-scale data have significantly improved retrieval tasks. However, these methods directly follow the existing pre-training methods in the cross-lingual or cross-modal domain, leading to two problems of inconsistency in CCR: The methods with cross-lingual style suffer from the intra-modal error propagation, resulting in inconsistent recall performance across languages in the whole dataset. The methods with cross-modal style suffer from the inter-modal optimization direction bias, resulting in inconsistent rank across languages within each instance, which cannot be reflected by Recall@K. To solve these problems, we propose a simple but effective 1-to-K contrastive learning method, which treats each language equally and eliminates error propagation and optimization bias. In addition, we propose a new evaluation metric, Mean Rank Variance (MRV), to reflect the rank inconsistency across languages within each instance. Extensive experiments on four CCR datasets show that our method improves both recall rates and MRV with smaller-scale pre-trained data, achieving the new state-of-art[Our codes can be accessed at <https://github.com/BUAADreamer/CCRK>]. <ccs2012> <concept> <concept_id>10002951.10003317.10003371.10003386.10003387</concept_id> <concept_desc>Information systems Image search</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002951.10003317.10003371.10003381.10003385</concept_id> <concept_desc>Information systems multi-lingual and cross-lingual retrieval</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002951.10003317.10003359.10003362</concept_id> <concept_desc>Information systems Retrieval effectiveness</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> [500]Information systems Image search [500]Information systems multi-lingual and cross-lingual retrieval [300]Information systems Retrieval effectiveness Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning Xudong Liu ================================================================================================= § INTRODUCTION Recently, significant progress has been made in the cross-modality <cit.>, and the cross-lingual <cit.> domains, leading to increased interest in the more general cross-lingual cross-modal scenarios. In the cross-lingual cross-modal domain, Cross-lingual Cross-modal Pre-training (CCP) <cit.> is first explored, followed by Cross-lingual Cross-modal Retrieval (CCR) <cit.> as the first downstream task independently studied. 
CCR aims to achieve image-text retrieval in multi-lingual scenarios with a single model, preventing the high latency associated with text translation from other languages to English in real-time web searches. In general, modern dense retrieval matches the results for a query by a particular distance metric (e.g., Euclidean distance or cosine similarity), which implies that dense retrieval methods should push queries and their semantically similar candidate items closer than other random pairs in the high-dimensional space. Thus, the core of the retrieval task lies in aligning the semantic spaces of queries and candidate sets, regardless of whether they are in different languages or different modalities. Recent studies show that contrastive learning based on pairwise data is effective in cross-lingual and cross-modal retrieval tasks. For example, CLIP <cit.>, which is only pre-trained by aligning different modalities using contrastive learning, has achieved remarkable performance in zero-shot cross-modal retrieval; on the other hand, aligning the representations from different modalities (or different languages) before fusing them can reduce the difficulty of fusion and significantly improve the performance of downstream cross-modal tasks including retrieval, question answering and reasoning <cit.>. As a result, the existing works in CCP directly piece together the alignment ideas from the cross-modal or cross-lingual domains, feeding pairwise data into the encoders one pair at a time, such as an image-text pair or a bi-lingual text pair. Specifically, the existing methods use the following two ideas to align different modalities: (1) considering English as the anchor for bridging vision with other languages, which means that the images are aligned to the English texts only, while the texts in other languages are also aligned to the English texts only <cit.>, or (2) aligning each image with the text in a random language at a time during pre-training <cit.>. However, the desirable alignment process is more complex in cross-lingual cross-modal scenarios. Intuitively, the semantics of the texts in multiple languages need to be aligned jointly with those from vision, which cannot be achieved with pairwise data. With theoretical derivations and empirical studies (Section <ref>), we find that applying either of the two above ideas to CCP will result in two problems of inconsistency (Figure <ref>). Specifically, regarding English as the bridge in inter-modal alignment may cause error propagation, resulting in inconsistent performance on Recall@K across different languages in CCR; aligning the image with only the text in a random language at a time may lead to an optimization direction bias, resulting in inconsistent ranks of different languages within an instance. We highlight that the latter problem is more insidious since it cannot be directly reflected by Recall@K, which is almost the only reported evaluation metric of CCR <cit.>. 
In addition, two commonly used pre-training tasks for capturing fine-grained correlation between modalities, Multi-lingual Image-Text Matching (MITM) <cit.> and Cross-modal Masked Language Modeling (CMLM) <cit.>, can be more easily superimposed on the novel contrastive paradigm with the help of hard negative sampling. Based on the three pre-training tasks, we propose a pre-trained model, CCR^k. For the evaluation of CCR, as a complement to Recall@K, we propose a new evaluation metric, Mean Rank Variance (MRV), to reflect the rank inconsistency across the different languages within an instance. Extensive experiments on four public CCR datasets demonstrate that our method effectively solves the above two problems and achieves a new state-of-the-art. The contributions of this paper can be summarized as follows: * We analyze two problems of inconsistency existing in the current CCP methods and point out their impact on the performance of CCR for the first time. * We propose a simple but effective 1-to-K contrastive paradigm as an alternative to the traditional 1-to-1 contrastive paradigm in CCR to solve these problems. * We propose Mean Rank Variance (MRV) to better reflect retrieval performance across languages and modalities, which is used to complement Recall@K and evaluate the rank consistency across languages in each dataset sample. * We propose CCR^k, a CCP model with the novel 1-to-K contrastive paradigm. We pre-train four variants of CCR^k with different numbers of languages and data scales. The largest variant, CCR^10-E, which is still pre-trained with fewer languages and less data than all baselines, achieves new SOTA on four CCR datasets. § BACKGROUND This section overviews recent advances in cross-lingual cross-modal pre-training and cross-lingual cross-modal retrieval. Due to space limitations, we only focus on works related to image-text retrieval in cross-lingual scenarios. §.§ Cross-Lingual Cross-Modal Pre-Training Cross-lingual Cross-modal Pre-training (CCP) <cit.> is generalized from cross-modal pre-training <cit.> and cross-lingual pre-training <cit.>, and aims to develop a representation learning model that captures the relationships across different modalities and different languages simultaneously. Current methods can be broadly divided into three categories based on their model architectures. Cross-Lingual Style The first class of methods follows the model architecture in the cross-lingual domain, where a pre-trained cross-modal model (e.g., CLIP <cit.>) is required. Then, the pre-trained model is tuned to a cross-lingual version by aligning the representations of English texts and non-English texts while freezing both the visual and English textual backbone. The representatives of these methods are multi-lingual CLIPs <cit.>. The idea behind these methods is using English as a bridge between vision and other languages. Cross-Modal Style The second class of methods follows the model architecture in the cross-modal domain, where multi-lingual image-text pairs are required. Due to the difficulty of collecting multi-lingual image-text pairs in practice, translation models are usually used to translate the English text in the existing image-text pairs to other languages <cit.>. Then, at most one non-English text is adopted to form an image-text pair with the image at a time, keeping consistent with the input form of the cross-modal model <cit.>. The representatives of these methods are UC^2 <cit.> and TD-MML <cit.>. 
The idea behind these methods is aligning the image with the text in a language at a time to improve the performance across languages. Cross-Modal Cross-Lingual Style The third class of methods references the architectures in both cross-lingual and cross-modal domains. The same multi-lingual encoders are responsible for encoding the texts in both image-text pairs and parallel corpora for a unified framework. The representatives of these methods are xUNITER <cit.>, M^3P <cit.>, and CCLM <cit.>. The idea behind these methods is using a unified framework to combine the ideas from the first and second class of methods. §.§ Cross-Lingual Cross-Modal Retrieval Cross-lingual Cross-modal Retrieval (CCR) <cit.> is one of the downstream tasks that have been focused on in cross-lingual cross-modal scenarios. MURAL <cit.> demonstrates that high performance in CCR can be achieved through pre-training with contrastive learning over large-scale datasets. <cit.> pre-train only a fusion encoder for CCR using pre-extracted image region features. More recently, IGLUE <cit.>, a cross-lingual cross-modal benchmark, was proposed with two new retrieval datasets, xFlickr&CO and WIT. In addition, IGLUE explores several cross-modal pre-training models (such as ViLBERT <cit.> and xUNITER <cit.>), and evaluates them on two new datasets by directly translating the texts in other languages to English, demonstrating that these models serve as strong baselines. <cit.> apply cross-lingual teacher learning to transfer CLIP to other languages. <cit.> proposed a noise robustness CCR method to improve the performance when training on the noisy translated data. To the best of our knowledge, our work in this paper is the first exploration of the consistency in cross-lingual cross-modal retrieval. In addition, our newly proposed 1-to-K contrastive learning pre-training task and the evaluation metric MRV have not previously appeared in CCR and related fields. § PROBLEM OF INCONSISTENCY IN CCR In this section, we first explore two alignment problems in the existing CCP methods under the perspective of contrastive learning, then point out their impacts on the performance of CCR. §.§ Preliminary In the loss functions for alignment, there may be only the anchor with its positive samples (e.g., Mean Squared Error (MSE)) and the optional negative samples (e.g., InfoNCE Loss <cit.>, which is commonly used in contrastive learning). When these loss functions are used, the anchor is optimized by the alignment direction, which points from the anchor to the positive sample. Intuitively, the alignment direction brings the anchor and positives together in the semantic space. In advance, we give the required notation for the follow-up content in this section. For simplicity, we only consider the case where one image needs to be aligned with two texts from two different languages, and the subsequent conclusions can be easily generalized to more languages. Let î, t̂_m and t̂_n denote the normalized representations of the image, the text in language m, and the text in language n, respectively. We define α=∠(î,t̂_m), β=∠(î,t̂_n) and γ=∠(t̂_m,t̂_n), where ∠(.,.) represents the angle of two same dimensional representations. §.§ Inconsistency in Recall@K Theoretical Analysis. The methods following the cross-lingual architecture implicitly rely on English as a bridge in inter-modal alignment between the other language and vision. 
In this setting, we consider the situation in which the other-language text representation is the anchor, where it is aligned to its positive sample, the English text representation. However, in theory, it should be aligned to the image representation. Without loss of generality, if we regard language m as English and language n as another language, then the practical alignment direction is t̂_m-t̂_n, while the correct alignment direction is î-t̂_n (Figure <ref>). Then we have the following results: Suppose that θ is the angle between the practical and correct alignment directions of t̂_n. If and only if English texts can be aligned well with images, i.e. α tends to 0, then θ will converge to 0. Empirical Observation. We find the inter-modal alignment process is so difficult that English texts cannot be aligned well with images. Specifically, the loss value can drop by 5 to 6 orders of magnitude in the text-modal (uni-modal) scenario <cit.>, while it drops by only 2 orders of magnitude in cross-modal contrastive learning <cit.> (Figure <ref>). This means that the alignment between English texts and images is not ideal, and if English texts are used to connect images and texts in other languages, there is a risk of error propagation through the intra-modal alignment, resulting in a worse alignment between non-English texts and images. Impact of inconsistency. As this problem persists during pre-training, its impact is global and can be revealed by the uneven performance under different language settings. As shown by the results of M^3P and UC^2 in Table <ref>, the performance gap among different language scenarios is clear even though the instance number per language has been kept nearly consistent during pre-training <cit.>. §.§ Inconsistency in Rank Theoretical Analysis. The methods that follow the cross-modal architecture consider each language separately aligned to vision, thus avoiding error propagation in the intra-modal alignment. However, they suffer from another, local problem of inconsistency. In this setting, we consider the situation in which the image is the anchor, where its optimal alignment coordinates should satisfy: (1) min (∠(î,t̂_m)+∠(î,t̂_n)) and (2) ∠(î, t̂_m) = ∠(î, t̂_n). Combining the two conditions above, î should be drawn to the midpoint of the minor arc corresponding to t̂_m and t̂_n, i.e., the correct alignment direction is (t̂_m+t̂_n)/‖t̂_m+t̂_n‖-î. However, the image is aligned with only one of the text representations at a time under the cross-modal setting. Without loss of generality, if we regard t̂_m as the alignment target, the practical alignment direction of î can be considered as t̂_m-î (Figure <ref>). Then we have the following results: Suppose that ω is the angle between the actual alignment direction and the correct optimization direction of î. If and only if the English text can be aligned well with the text in the other language, i.e. γ tends to 0, then ω will converge to 0. Empirical Observation. We find that the representations obtained by popular multi-lingual text encoders are not aligned according to semantics after reducing the dimensionality of the representations with t-SNE <cit.>. Instead, they remain irregularly distributed (Figure <ref>). As a result, the alignment direction of the image may not favor all languages when the model only sees the text in one language at a time, which might result in inconsistent performance among the semantically similar texts in different languages. Impact of inconsistency. 
As this problem appears dynamically in different instances for different languages during pre-training, the impact of this problem is local. Very different retrieval results will be obtained (1) when the texts in different languages are retrieved using the same image or (2) when the same image is retrieved using the texts in different languages but with the same semantics. Unfortunately, Recall@K can only reflect the overall performance of the model on each language in the whole dataset but cannot reflect the inconsistent performance across languages within an instance. § METHOD The section is organized as follows: some necessary notations are first introduced in Section <ref>; a novel 1-to-K contrastive method is then proposed to solve the inconsistency problems in Section <ref>; a pre-training model, CCR^k, is further presented to combine 1-to-K contrastive learning with other common pre-training tasks in a unified framework in Section <ref>; finally, a new evaluation metric called Mean Rank Variance (MRV) is proposed in Section <ref>, which evaluates the rank consistency across languages in an instance. §.§ Notation Let D = (I, T_1, T_2, ..., T_K) denote a multi-lingual image-text dataset, consisting of the instance (i_j, t_j1, t_j2, ..., t_jK) ∼ D, where j indexes the instance, i_j is the image in this instance, t_jk is the text in the k-th language in this instance, and K refers to the total number of languages in the dataset. If it is clear from the context, we will remove the subscript j or jk for brevity. §.§ 1-to-K Contrastive Learning To solve both problems in the previous section, the key is that the texts in all languages should be aligned with the semantically similar images all at once. Obviously, it is not possible to do this by aligning pairs of data. Even if uniformly sampling one from the texts in all languages and combining it with the corresponding image to form an image-text pair, the second problem remains. The effective way is therefore to form the texts in all languages and the image directly into a tuple as the input. Accordingly, we propose a 1-to-K contrastive learning approach to solve this problem. For simplicity, let t̂ and î represent the normalized text and image representations, respectively. Then, the optimization objective of 1-to-K contrastive learning can be formulated as follows: ℒ_kcl^i2t = -1/K∑_k^Klogexp(î_j^T t̂_jk/τ)/∑_k'^K exp(î_j^T t̂_jk'/τ) + ∑_n,n≠ j^N∑_k'^Kexp(î_j^T t̂_nk'/τ) ℒ_kcl^t2i = -logexp(t̂_jk^T î_j /τ)/exp(t̂_jk^T î_j /τ) + ∑_n, n≠ j^Nexp(t̂_jk^T î_n /τ) where K is the number of languages and N is the number of negative instances. It is worth noting that there exists literature on multiple positive contrastive learning in other fields <cit.>, where all positive items are accumulated in the numerator and the overall probability of the positive terms is pushed towards 1. Instead, we further set the label of each positive item to 1/K to ensure equal contribution from each language. Note that increasing the number of multi-lingual texts used as input to the encoders only results in a small increase in GPU memory and training time since the text encoders are usually more lightweight than image encoders in CCP <cit.> and most of the computations involved are matrix operations that support parallelism. The changes in memory usage and training time before and after applying 1-to-K contrastive learning are detailed in Appendix <ref>.
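As an illustration of the objective above, the following is a minimal PyTorch-style sketch of the 1-to-K contrastive loss, assuming in-batch negatives, a batch of N instances each paired with K translated captions, and the soft 1/K labels described in the text; the function name, tensor layout, and temperature default are illustrative assumptions rather than details taken from the released implementation.

import torch
import torch.nn.functional as F

def one_to_k_contrastive_loss(image_emb, text_emb, tau=0.07):
    # image_emb: (N, d) image representations; text_emb: (N, K, d) texts in K languages
    N, K, d = text_emb.shape
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1).reshape(N * K, d)
    logits = img @ txt.t() / tau                                   # (N, N*K) similarities
    # image-to-text: each image has K positives (its own translations), soft label 1/K each
    i2t_targets = torch.zeros(N, N * K, device=logits.device)
    rows = torch.arange(N, device=logits.device).repeat_interleave(K)
    i2t_targets[rows, torch.arange(N * K, device=logits.device)] = 1.0 / K
    loss_i2t = -(i2t_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    # text-to-image: each of the N*K texts has exactly one positive image
    t2i_targets = torch.arange(N, device=logits.device).repeat_interleave(K)
    loss_t2i = F.cross_entropy(logits.t(), t2i_targets)
    return loss_i2t + loss_t2i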
§.§ Pretraining Model: CCR^k Based on the proposed 1-to-K contrastive learning, we further propose a CCP model named CCR^k. Specifically, we combine 1-to-K contrastive learning with two other common CCP tasks and balance positive and negative samples by hard sample mining. As shown in the middle of Figure <ref>, we adopt the common framework in cross-lingual cross-modal pretraining <cit.>, which consists of a multi-lingual text encoder f(·), a visual encoder g(·) and a fusion encoder ϕ(·,·) with image-to-text cross-attention. §.§.§ Hard Sample Mining Incorporating cross-attention between the image representation and the text representations in all languages can greatly increase the pre-training time. Therefore, we use the hard sample mining strategy proposed by <cit.> for both positive and negative samples. With this strategy, the model only needs to reconstruct the hardest positive samples in the CMLM task and to distinguish the hardest negative samples in the MITM task. In subsequent sections, we use t_j^pos to represent the hard positive sample for texts and t_j^neg and i_j^neg to represent the hard negative samples for texts and images, respectively. Please refer to Appendix <ref> for sampling details. §.§.§ Multi-lingual Image-Text Matching (MITM) The MITM task is a binary classification task that aims to identify whether the semantics of a given image-text pair match. This task is often regarded as an image-text bi-directional prediction problem. Specifically, in the image-to-text direction, the model is trained to select the right one from the hard positive and hard negative text samples. Let u_ cls be the representation output by the fusion encoder; then the loss function of MITM can be expressed as ℒ_mitm^i2t = -logexp(ψ(u_ cls^ p))/exp(ψ(u_ cls^ p))+exp(ψ(u_ cls^ nt)) where ψ∈ℝ^d × 2 is the binary-classification head, d is the representation dimension, u_ cls^ p is obtained from ϕ(t̂_j^ pos, î_j) and u_ cls^ nt is obtained from ϕ(t̂_j^ neg, î_j). Similarly, for the text-to-image direction, the matching objective can be expressed as ℒ_mitm^t2i = -logexp(ψ(u_ cls^ p))/exp(ψ(u_ cls^ p))+exp(ψ(u_ cls^ ni)) where ψ∈ℝ^d × 2 is the same binary-classification head that is used in Eqn. (<ref>) and u_ cls^ ni is obtained from ϕ(t̂_j^ pos, î_j^ neg). §.§.§ Cross-Modal Masked Language Modeling (CMLM) The cross-modal masked language modeling task aims to reconstruct the masked tokens using both textual contextual information and image information. Let t_j^ mask be the variant of t_j^ pos whose partial tokens are masked, and û_j^ mask be the fusion encoder output corresponding to t_j^ mask; then the loss function for this task can be expressed as ℒ_cmlm = -logexp(ρ(û_j^ mask, w^+_j))/∑_w_j∈𝒲exp(ρ(û_j^ mask, w_j)) where ρ: (ℝ^d ×𝒲)→ℝ^1 is a score function to evaluate the matching degree of a given contextual representation with a given token, w^+_j is the original token of the masked location and 𝒲 is the vocabulary list. We use the special token [MASK] to replace 15% of the tokens in each text, following BERT <cit.>. §.§.§ Optimization Objective Note that contrastive loss, image-text matching, and masked language modeling have been verified in numerous prior works <cit.> to converge together when co-optimized, so we directly sum them here without additional hyper-parameters for weighting the different losses.
Thus, the final optimization objective can be expressed as ℒ = ℒ_kcl^i2t + ℒ_kcl^t2i + ℒ_mitm^i2t + ℒ_mitm^t2i + ℒ_cmlm §.§ Evaluation Metric: Mean Rank Variance While Recall@K is the common metric used in CCR, it can only reflect the overall performance for a single language. In this section, we introduce a new evaluation metric, Mean Rank Variance (MRV), to measure the rank consistency across different languages within an instance. Figure <ref> illustrates the difference between MRV and Recall@K in their calculation methods. MRV for K languages can be computed in both Image-to-Text Retrieval (TR) and Text-to-Image Retrieval (IR) tasks. For example, in the TR task, given an image i_j and a text set in a particular language {t_jk}_j=1^N, the similarities between the image and the text set are computed first. Then the text set is sorted by these similarities in descending order and the rank of t_jk is denoted as Rank_jk. For each i_j, we can loop through k from 1 to K to obtain {Rank_jk}_k=1^K, and average them to obtain Rank_j. Similarly, in the IR task, we denote the rank of retrieving the image i_j using the text t_jk as Rank_jk and the average rank obtained by retrieving i_j using all K languages as Rank_j. Thus, MRV for K languages, which is denoted as MRV_K, can be expressed as MRV_K = 1/NK∑_j^N ∑_k^K |Rank_jk - Rank_j|^2 Note that there is no trade-off between Recall@K and MRV_K, which means that when Recall@1=1 holds for all K languages, MRV_K=0 also holds. MRV_K better reflects the alignment consistency of the local semantic space. Such consistency is significant in certain scenarios, such as cross-border e-commerce, to ensure consistency in the results retrieved when the queries are in different languages but have the same semantics. § EXPERIMENT §.§ Experiment Setup §.§.§ Pre-training Datasets For pre-training, we mainly use Conceptual Captions 3M (CC3M) <cit.>, which currently has only 1.8 million image-text pairs from the web due to the inaccessibility of image hyperlinks. To verify the scalability of our approach, we further introduce 3 additional cross-modal web datasets, including SBU Caption <cit.>, Visual Genome <cit.> and COCO <cit.>. For the translated version of the texts, we use the 6-language (English, German, French, Czech, Japanese, and Chinese) translated texts in CC3M provided by UC^2 <cit.> as well as the same 6-language translated texts in the other three datasets, provided by CCLM <cit.> for fair comparisons. To further verify the generalizability of our method to more languages, we use the M2M-100-large model <cit.> to translate the English text in the datasets into an additional 4 languages (Spanish, Indonesian, Russian, and Turkish), following <cit.>. Therefore, the total number of text languages used for evaluation is 10, which covers all languages in xFlickr&CO. We plan to open-source these translated texts for research. §.§.§ Baseline CCR^k proposed in this paper is mainly an improvement of the training optimization objective in the pre-training phase, so we mainly compare it with other CCP models, including xUNITER <cit.>, UC^2 <cit.>, M^3P <cit.>, TD-MML <cit.> and CCLM <cit.>. These methods have been briefly described in Section <ref>, while for more details on them, please refer to Appendix <ref>. §.§.§ The Variants of CCR^k We report the performance of four model variants pre-trained with different data, which are as follows: * CCR^6 pre-trained using CC3M with 6-language texts. * CCR^10 pre-trained using CC3M with 10-language texts.
* CCR^6-E pre-trained using CC3M, COCO, VG and SBU with 6-language texts. * CCR^10-E pre-trained using CC3M, COCO, VG and SBU with 10-language texts. §.§.§ Evaluation Datasets and Protocols We evaluate our methods on four popular CCR datasets, including xFlickr&CO <cit.>, WIT <cit.>, Multi30K <cit.>, and COCO <cit.>. Although the images in xFlickr&CO are derived from the original Flickr30K and COCO, the multi-lingual texts in xFlickr&CO are manually re-annotated. Therefore, the performance on xFlickr&CO may not be strongly correlated with that on Multi30K and COCO. For both xFlickr&CO and WIT, we evaluate our models using two protocols: fine-tuning on the English train set (Zero-Shot) and fine-tuning on 100 instances of other languages based on English fine-tuned models (Few-Shot). For Multi30K and COCO, we also use two evaluation protocols: fine-tuning on the English train set (Zero-Shot) and fine-tuning on each language train set (Fine-Tune). Note that the results on WIT under the few-shot scenario are not reported because IGLUE <cit.> does not provide the corresponding evaluation protocol. For more details, please refer to Appendix <ref>. §.§ Implementation Details Following <cit.>, the image encoder is initialized using the 12-layer Swin Transformer <cit.>, and the multi-lingual encoder and fusion encoder are initialized using the pre-trained XLM-R <cit.>, which consist of 6 layers for each. We provide a detailed comparison of the model architecture and initialization sections between CCR and other baselines in Appendix <ref>. Also, keeping consistent with <cit.> for a fair comparison, τ in Eqn. (<ref>) and (<ref>) are set as 0.07. The AdamW <cit.> optimizer with 1e-4 learning rate, 0.01 weight decay, and first 3% linearly warm-up steps is used. The batch size on each GPU is set to 64. The pre-training experiments were conducted on 2 NVIDIA A100s, while fine-tuning was done on 1 A100. We pre-train all models for 30 epochs. With the acceleration of PyTorch DDP <cit.>, it takes approximately 4 days to pre-train for 30 epochs on CC3M with 6 languages. In addition, we provide the hyper-parameters used for fine-tuning all four datasets in Appendix <ref>. §.§ Main Performance We report the performance of all four variants of CCR^k and baselines in Table <ref>. Note that the results of CCLM-3M on WIT are not reported in Table <ref> as we find that there is a significant overlap between the WIT test set and the pre-training data of CCLM. Unless otherwise noted, we use ISO 639-1 Abbreviations to represent specific languages in subsequent tables. The table mapping the two-letter codes to the specific language is provided in Appendix <ref> for convenience. Recall Rates With a smaller scale pre-trained data (#images and #texts) and fewer language numbers than the baselines, CCR^10-E achieves SOTA results under both zero-shot and few-shot (or fine-tuning) setting for all CCR datasets, demonstrating the good generalizability and transferability of CCR^k among different languages. When comparing the performance difference among the four variants of CCR^k, we can find that (1) CCR^10 use more languages compared to CCR^6, causing it to improve the performance on the newly added languages while hurting Recall@K of the original languages existing in CCR^6, possibly due to the increased difficulty of alignment across more languages; (2) CCR^6-E achieves higher Recall@K and lower MRV on the original languages compared to CCR^6 after introducing more pre-training data. 
Consistency Evaluation of Recall@K Recall that one of the inconsistency problems leads to inconsistent recall@K in different languages. As seen in Table <ref>, all baselines perform better in English than in other languages on Multi30K and COCO because English is used as a bridge between the visual and other languages during their pre-training. Benefitting from the 1-to-K contrastive paradigm, all four variants of CCR^k maintain significantly smaller inter-language gaps on these two datasets. Among them, CCR^10-E maintains the smallest performance gap across languages on Multi30K and COCO in the zero-shot scenario, even though this scenario is more favourable for English-related retrieval. More surprisingly, when CCR^k is fine-tuned in each language separately, the performance gap on various languages almost disappears, which reflects the promising application of CCR^k in practical applications. Consistency Evaluation of Rank Recall that the other problem results in the inconsistency of rank. The motivation behind proposing MRV is that Recall@K cannot reflect such differences across languages within an instance. Therefore, we calculate MRV for four languages (EN, DE, JA, and ZH) on xFlickr&CO and four languages (EN, DE, FR, and CS) on Multi30K, which are denoted as MRV_4 in Table <ref>. We also report MRV_4 of all compared models except TD-MML based on the checkpoints obtained from the official IGLUE GitHub repository [<https://github.com/e-bug/iglue>]. It can be found that MRV_4 for CCLM, which uses 1-to-1 contrastive learning, has improved substantially compared to M^3P and UC^2, while CCR^k can improve further and achieve the lowest MRV. Similar to Recall@K, adding more languages (CCR^6 → CCR^10 and CCR^6-E → CCR^10-E) will result in a higher MRV due to the capacity constraints of the model and the elevated difficulty of the optimization objective. §.§ Ablation Study To verify the effectiveness of each model component, we conduct ablation experiments by removing critical components. The ablated variants we consider are as follows: w/o KCL: 1-to-K Contrastive Learning (KCL) is replaced with 1-to-1 contrastive learning; w/o H-MITM: Hard sample mining for MITM is replaced with random uniform sampling from the candidate set; w/o H-CMLM: Hard sample mining for CMLM is replaced with uniform sampling from the candidate set. Due to space constraints, we only report results for CCR^6 and CCR^10-E under the zero-shot setting in Table <ref>. Note that the other two variants also show a similar trend. As can be seen from the results, each pre-training task and sampling approach proposed to contribute to the improvement in both Recall@K and MRV_4. More specifically, 1-to-K contrastive learning has the largest improvement for all metrics, while 1-to-1 contrastive learning is still better than the results without contrastive learning. Hard sample mining positively affected both MITM and CMLM downstream tasks. §.§ Further Study §.§.§ Pure Contrastive Learning In fact, CCR^k is proposed to ensure that the model's parameter number and pre-training tasks are similar to other baselines. However, neither MITM and CMLM tasks nor the fusion encoder is necessary for the retrieval task. Therefore, we further compare the effect of 1-to-K and 1-to-1 contrastive learning on Recall@K and MRV with the fusion encoder removed, while other settings remain consistent with CCR^6. As seen from Figure <ref>, 1-to-K contrastive learning can still lead on both xFlickr&CO and Multi30k. 
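Because the MRV_4 comparisons above hinge on how the metric is computed, here is a small NumPy sketch of the MRV_K definition, assuming the per-language ranks Rank_jk have already been collected into an N x K array; the function name and array layout are my own illustrative choices, not part of the evaluation toolkit.

import numpy as np

def mean_rank_variance(ranks):
    # ranks[j, k]: rank obtained for the j-th query under language k
    ranks = np.asarray(ranks, dtype=float)          # shape (N, K)
    mean_rank = ranks.mean(axis=1, keepdims=True)   # average rank per instance
    return float(np.mean((ranks - mean_rank) ** 2)) # 1/(NK) * sum of squared deviations

# perfectly consistent ranks give 0, e.g. mean_rank_variance([[3, 3, 3, 3]]) == 0.0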
§.§.§ Loss and Performance To better understand why our method works, we record the 1-to-1 contrastive loss and 1-to-K contrastive loss during the pre-training process of “CCR^6” and “CCR^6 -w/o KCL”, respectively. In addition, we evaluate the checkpoints every 5 epochs on Multi30K under zero-shot setting and plot the results in Figure <ref>. The figure shows that 1-to-K contrastive learning performs better at all evaluated checkpoints. Attributed to the absence of directional bias, when pre-training with 1-to-K contrastive learning, the corresponding loss values remain lower than those when using 1-to-1 contrastive learning. §.§.§ T-SNE Visualization A T-SNE visualization similar to that in Section <ref> is shown in Figure <ref> and Figure <ref>, which contains 10 instances randomly sampled in xFlickr&CO. Comparing to 1-to-1 contrastive learning, 1-to-K contrastive learning enables higher discrimination between instances and a more balanced distribution within instances. In addition, a case study on failure alignment is provided in Appendix <ref> for potential further improvement. § CASE STUDY After manually analyzing the wrong cases in xFlickr&CO, which are not correct under some language settings, we summarized two typical causes of matching errors: fine-grained semantic matching errors and pseudo-negative samples. We give some cases for each of them in Figure <ref>. Since images are more presentable and comprehensible than texts, we only use the error cases from the text-to-image retrieval (IR) task. The first four cases demonstrate a fine-grained semantic matching error. For example, the concept of “headband” in the first case is so specialized that the image can match all other features when retrieved using German (DE) and Turkish (TR). The last two cases show a pseudo-negative sample error, where the images retrieved actually match the text semantics, but these matching relationships are missing annotations in the dataset. For example, in the fifth case, both images retrieved for the "hockey game" matched the textual description, yet only one is labelled as correct in the xFlickr&CO dataset. § DISCUSSION The Novelty of 1-to-K Contrastive Learning The proposed modification is not groundbreaking but based on traditional 1-to-1 contrastive learning. However, recall that 1-to-1 contrastive learning, which has been carried over from the cross-lingual or cross-modal domains, is still the dominant paradigm in CCP. The call to change a task's pre-training paradigm is usually tough. Changing to 1-to-K contrastive learning is minimal yet effective and easily applicable to the existing CCR models based on SimSiam networks. The Significance of the Consistency in CCR Maintaining consistency in CCR is important. For example, in a cross-border e-commerce business, consistency in recall across languages ensures that the entire retrieval system can be supported by a single fundamental model. Further, the query with the same semantics issued by different native-speaking customers should be expected to return the same results, meaning there needs to be good consistency in rank across different languages within an instance. If we evaluate the retrieval model with Recall@K on each language only, the true performance of the CCR model will not be reflected. Further Consistency Ensuring equal contributions across languages in all aspects is challenging. For instance, XLM-R, CCR^k's cross-lingual encoder, is trained on the 2.5TB CommonCrawl Corpus encompassing 100 languages. 
Discrepancies in data sizes between high-resource and low-resource languages within this corpus, like the 100GB English data versus the 0.1GB Sundanese data, impede XLM-R from achieving uniform performance across languages. Balancing language contributions during pre-training could help narrow the performance gap but would require substantial computational resources, which we will explore in future studies. § CONCLUSION In this paper, we first analyze the two problems of inconsistency existing in the current CCP methods and point out their impact on CCR via theoretical analysis and empirical studies. Then we propose a 1-to-K contrastive paradigm and a CCP model, CCR^k, based on it, which equally aligns all languages with vision at once, effectively improving the consistency in CCR. In addition, a new evaluation metric, MRV, is proposed to portray the consistency of each language rank within each instance. Exclusive experiments on the four CCR datasets show that our model scales well and achieves new SOTA on both Recall@K and MRV. § ACKNOWLEDGEMENTS This work was supported by the National Science and Technology Major Project under Grant 2022ZD0120202, in part by the National Natural Science Foundation of China (No. U23B2056), in part by the Fundamental Research Funds for the Central Universities, and in part by the State Key Laboratory of Complex & Critical Software Environment. ACM-Reference-Format § ISO 639 LANGUAGE CODES We give the ISO-691 codes for all the language codes that appear in the main text and appendices in Table <ref> for reference. § SUPPLEMENT ON EXPERIMENT SETUP §.§ Baseline This section details the baselines used for comparison and compares key information about their architectures and pre-training processes in Table <ref>. xUNITER <cit.> is a multi-lingual variant of UNITER <cit.>, which follows the architecture of UNITER and the parameters are initialized with XLM-R_ base <cit.>. It also has a twin, mUNITER, which is initialized using mBERT <cit.>. Considering that xUNITER works better, we ignore the results of mUNITER in this paper. xUNITER and mUNITER are pre-trained using image-English text pairs and parallel corpus alternately composed of batch. UC^2 <cit.> presents the first MT-augmented pre-training model that pivots primarily on images and complementary on English to learn cross-lingual cross-modal representation from large-scale of multi-lingual image-to-text pairs. Two new pre-training tasks, Masked Region-to-Token Language Modeling and Visual Translation Language Modeling, are proposed to facilitate the model to obtain better alignment between vision and different languages. M^3P <cit.> combines multi-lingual pre-training and multi-modal pre-training into a unified framework via multitask Learning. multi-modal code-switched training is proposed to further alleviate the issue of lacking enough labeled data for non-English multi-modal tasks and avoid the tendency to model the relationship between vision and English text. TD-MML <cit.> uses translated data for multi-lingual multi-modal learning, which are applied in both pre-training and fine-tuning data with the existing CCP model. In order to prevent the model from learning from low-quality translated texts, two metrics are proposed for automatically removing the low-quality translation texts from the resulting datasets. CCLM <cit.> is a CCP framework that unifies cross-lingual pretraining and cross-modal pretraining with shared architectures and objectives. 
Contrastive learning is introduced for cross-modal and cross-lingual alignment, respectively. §.§ Evaluation Dataset xFlickr&CO is a novel dataset purposed by ICLUE <cit.> and collected by combining 1000 images from Flickr30K and COCO respectively. The existing captions from <cit.> and <cit.> are used for English and Japanese, while the captions are from crowd-source for the other 6 languages. WIT means “Wikipedia-based Image-Text” dataset <cit.> collected instances from the websites of Wikipedia in 108 languages. For training, a subset of 500K captions is randomly sampled from the English training set of WIT. For evaluation, the WIT test data released as part of its corresponding Kaggle competition[<www.kaggle.com/c/wikipedia-image-caption>] is used. Multi30K extends Flickr30K <cit.> from English to German, French and Czech. It contains 31,783 images obtained from Flickr and provides five captions per image in English and German, and one caption per image in French and Czech. Dataset splits are defined as the original Flickr30K. COCO extends the original COCO Caption <cit.> by translating the captions into Japanese and Chinese. The Japanese and Chinese subsets consist of 820k and 20k captions respectively. Following previous work, we use the same train, dev, and test splits for English and Japanese as defined by <cit.>. For Chinese, we use the COCO-CN split <cit.>. § IMPLEMENTATION DETAILS §.§ Evaluation Protocols Zero-Shot Only pre-training and fine-tuning on the English train set, then evaluate the test set in each target language. Few-Shot Fine-tune First pre-training and fine-tuning on English train set. Then twice fine-tuning 100 labeled instances in a target language and evaluating the test set of this target language. Single-Language Fine-tune First pre-training and fine-tuning on English train set. Then, fine-tuning the training set of the target language and evaluating the test set of this target language. §.§ Hyperparameter Setting For zero-shot xFlickr&CO and WIT, we first fine-tune the model on the English training set, and then evaluate zero-shot and few-shot performance in other languages. Following <cit.>, for both zero-shot and few-shot experiments, we use AdamW optimizer with β_1 = 0.9 and β_2 = 0.999; weight decay is set to 0.01; learning rate scheduler is linear. The all hyper-parameters used are shown in Table <ref>. §.§ The Method of Hard Negative Sampling For positive samples, given an image i_j, its associated set of texts (t_j1, t_j2, ..., t_jK) can be regarded as positive samples. Among these texts, the hardest positive sample t_ik^pos can be identified as the text that aligns worst with the image, and the degree of alignment can be estimated by computing the cosine similarity between the image and text representations. Accordingly, we can sample the index k^pos of the hardest positive sample from a specific distribution T, which can be expressed as t_j^pos = t_ik^pos, k^pos∼ T, where P_T(k) = 1-t̂_jk^T î_j/∑_k'^Kt̂_jk'^T î_j where T is a multinomial distribution. For negative samples, if the image and the text from different tuples are well aligned, they can be regarded as hard negative samples for each other. Also, we estimate the degree of alignment using the cosine similarity and sample the index of the negative example from a multinomial distribution. Thus, the process of obtaining the hard negative image can be expressed as i_j^neg = i_j^neg, j^neg∼ R, where P_R(j') = ∑_k^Kt̂_jk^T î_j'/∑_j' ≠ j^N∑_k^Kt̂_jk^T î_j' where R is a multinomial distribution. 
Similarly, we can obtain the hard negative text for each image in the batch. §.§ The Method of Rank We obtain the representations from the text encoder and image encoder outputs and rank the candidates by cosine similarity. For CCR^k and ablation models containing the fusion encoder, we re-rank only the top N candidates using the Fusion encoder to better adapt to the web-scale data. Specifically, we use the projection head used for the multi-lingual image-text matching task to predict the match probability between the query and each shortlisted candidate and re-rank the candidates regarding this probability only. In our experiment, N is 256 for COCO and 128 for the other three datasets. § TIME AND MEMORY COMPARISON We compare the model's training time and GPU memory consumption for different language numbers of translated texts, which are reported in Table <ref>. The results in the table are the average results measured while keeping other external conditions constant as much as possible. It is easy to find that both training time and memory usage increase linearly with the number of languages. Specifically, the training time increases by 4.2 min per language for 1 Epoch, while the memory footprint increases by 710 MB per language per Nvidia A100 40GB.
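As a companion to the sampling procedure in the appendix above, the following is a rough PyTorch sketch of the multinomial hard-positive and hard-negative selection, reading the positive distribution as proportional to 1 minus the cosine similarity and the negative distribution as proportional to the similarity summed over languages; the clamping, names, and exact normalization are illustrative assumptions rather than the released code.

import torch

def sample_hard_examples(img_emb, txt_emb):
    # img_emb: (N, d) normalized image representations
    # txt_emb: (N, K, d) normalized text representations
    sim = torch.einsum('nd,nkd->nk', img_emb, txt_emb)                    # (N, K) cosine similarities
    # hard positive: the worse a caption is aligned with its image, the more likely it is picked
    pos_weights = (1.0 - sim).clamp(min=1e-6)
    k_pos = torch.multinomial(pos_weights, num_samples=1).squeeze(1)      # (N,)
    # hard negative image: well-aligned images from *other* instances
    cross = torch.einsum('nkd,md->nm', txt_emb, img_emb).clamp(min=1e-6)  # (N, N)
    cross.fill_diagonal_(0.0)                                             # exclude the instance itself
    j_neg = torch.multinomial(cross, num_samples=1).squeeze(1)            # (N,)
    return k_pos, j_neg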
http://arxiv.org/abs/2406.18438v1
20240626154243
Geometrical finiteness for automorphism groups via cone conjecture
[ "Kohei Kikuta" ]
math.AG
[ "math.AG", "math.GR", "math.GT", "14J50, 20F67" ]
Geometrical finiteness for automorphism groups via cone conjecture Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka, 560-0043, Japan. kikuta@math.sci.osaka-u.ac.jp School of Mathematics, The University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh, Scotland, EH9 3FD, United Kingdom. § ABSTRACT This paper aims to establish the geometrical finiteness for the natural isometric actions of (birational) automorphism groups on the hyperbolic spaces for K3 surfaces, Enriques surfaces, Coble surfaces, and irreducible symplectic varieties. As an application, it can be seen that such groups are non-positively curved: CAT(0) and relatively hyperbolic. In the case of K3 surfaces, we additionally provide a dynamical characterization of the relative hyperbolicity, and the first counterexample to Mukai's conjecture concerning the virtual cohomological dimension of automorphism groups. Kohei Kikuta July 1, 2024 ================ § INTRODUCTION §.§ Geometrical finiteness Geometrical finiteness is one of the central notions in the study of Kleinian groups or, more generally, discrete subgroups of semisimple Lie groups of real rank one. This term describes associated quotient orbifolds such that all the interesting geometry goes on in some compact subset. Many people have contributed to the development of this idea, and among them, Bowditch provided several equivalent definitions of geometrical finiteness <cit.>. §.§ Main results The study of automorphism groups of algebraic varieties is a classical subject in algebraic geometry. For certain algebraic varieties X, the torsion-free part of the Néron–Severi group admits a hyperbolic lattice structure, and the representation of the automorphism group has a finite kernel, which establishes a connection to hyperbolic geometry. For example, in the case of minimal algebraic surfaces of Kodaira dimension 0, the kernel is finite if and only if such surfaces are either K3 or Enriques (see Remark <ref>). We denote the associated isometric action on the hyperbolic space by (X)→(^ρ_X-1), where ρ_X is the Picard rank of X.
The main result of this paper is the geometrical finiteness for automorphism groups of such varieties further satisfying the Morrison–Kawamata cone conjecture: the existence of a rational polyhedral fundamental domain for the action on the effective nef cone. We also have a similar result for birational automorphism groups. The precise statement is as follows. Let G_X be one of the following groups: (1) the automorphism group (X) of a K3 surface X over a field of characteristic not 2. (2) the automorphism group (X) of a non-supersingular K3 surface X over a field of characteristic 2. (3) the automorphism group (X) of an Enriques surface X over an algebraically closed field of characteristic not 2. (4) the automorphism group (X) of a Coble surface X over an algebraically closed field of characteristic 0. A Coble surface V is a rational surface with |−K_V|=ϕ and |−2K_V|=B_1+⋯+B_n where B_i is a non-singular rational curve and B_i ∩ B_j=ϕ (i ≠ j). This is called a terminal Coble surface of K3 type in Dolgachev and Zhang. (5) the automorphism group (X) of an irreducible symplectic variety X over a field of characteristic 0. (6) the birational automorphism group (X) of an irreducible symplectic variety X over a field of characteristic 0. Then the representation G_X→(^ρ_X-1) is geometrically finite. Geometric group theory studies infinite groups with isometric actions on non-positively curved spaces, such as CAT(0) or hyperbolic spaces mentioned in this paper. As a corollary, the following can be derived from classical results for geometrically finite Kleinian groups: [Corollary <ref> and <ref>] Let G_X be a group as in Theorem <ref>. (1) G_X is CAT(0). (2) G_X is either virtually abelian or non-elementary relatively hyperbolic. As further applications, in the case of K3 surfaces, we obtain a dynamical characterization of the relative hyperbolicity using entropy (Corollary <ref>), and the first counterexample to Mukai's conjecture concerning the relation between the virtual cohomological dimension of automorphism groups and the Mordell–Weil rank of elliptic fibrations (Example <ref>). §.§ Organization of the paper Section <ref> briefly provides the basics of geometrically finite Kleinian groups from hyperbolic geometry and various cones appeared in algebraic geometry. In Section <ref>, we prove the main result (Theorem <ref>). However, prior to this, we prove a general statement (Theorem <ref>) in terms of hyperbolic geometry, which is the technical part of this paper. Several applications of Theorem <ref> are explained in the final section. §.§ Related works * Taiki Takatsu independently proves the geometrical finiteness for automorphism groups of K3 surfaces over the complex number field in a different way <cit.>. As an application, he proves that the virtual cohomological dimension of automorphism groups is determined by the covering dimension of the blown-up boundaries of their ample cones. He also provides an affirmative example of Mukai's conjecture using sphere packings. * Kurnosov–Yasinsky announced a result on the CAT(0) property for automorphism groups and birational automorphism groups of irreducible symplectic varieties over in a conference proceeding <cit.>. They adopted a similar approach as in <cit.>. By the above corollary, Theorem <ref> is a broader generalization of their results. The author would like to thank Koji Fujiwara, Tomohiro Fukaya, Shin-ichi Oguni, and Taiki Takatsu for valuable discussions and helpful comments. 
This work is supported by JSPS KAKENHI Grant Number 21K13780. § PRELIMINARIES §.§ Geometrically finite representations We briefly recall the basics of hyperbolic geometry and Kleinian groups, and provide the definition of geometrical finiteness. For further details, readers are referred to <cit.>. A lattice is a finitely generated free abelian group endowed with an integral non-degenerate symmetric bilinear form. Let Λ be a lattice of signature (1,n). Fix one of the two connected components of {v∈Λ_| v^2>0} called a positive cone and denoted by . The hyperboloid model ^n of the hyperbolic space is defined by ^n:={x∈| (x,x)=1} endowed with the metric d determined by cosh d(x,y):=(x,y). The boundary of ^n is defined by ∂^n:=(∂\{0})/_>0. The set of rational points on ^n is given by ^n():=^n∩Λ_. Similarly, the set ∂^n() of rational points on ∂^n is the image of (∂∩Λ_)\{0} with respect to the quotient ∂\{0}→∂^n. We set ^n:=^n∪∂^n and ^n():=^n()∪∂^n(). In this paper, we also consider other isometric models as necessary: the conformal ball model ^n and the upper half-space model ^n. For a subset S⊂^n, S denotes the closure of S in ^n. A subset S of ^n is convex if, for each pair of distinct points x,y∈ S, the geodesic segment [x,y] is contained in S. For a convex open subset S⊂^n (or its closure in ^n), a side of S is a maximal convex subset of the topological boundary ∂ S. A generalized polytope in ^n is a convex hull of finitely many points in ^n. More precisely, for x_1,⋯,x_s∈^n and c_1,⋯,c_t∈∂^n, the convex hull of {x_1,⋯,x_s,c_1,⋯,c_t}⊂^n is the smallest convex subset P⊂^n containing {x_1,⋯,x_s} such that P∩∂^n={c_1,⋯,c_t}. A horoball is an open ball in ^n tangent to ∂^n. Let O(Λ_) (resp. O(Λ)) be the orthogonal group of Λ_ (resp. Λ). We write O^+(Λ_) for the index two subgroup of transformations preserving the cone , which is naturally isomorphic to the isometry group (^n). We define a discrete subgroup O^+(Λ):= O^+(Λ_)∩ O(Λ)<(^n). Each element g∈(^n) is classified as: * Elliptic: g fixes a point in ^n. * Parabolic: g fixes no point in ^n and fixes a unique point in ∂^n. * Hyperbolic: g fixes no point in ^n and fixes exactly two points in ∂^n. Note that, for a fixed point of a parabolic (resp. hyperbolic) isometry in O^+(Λ), a representative fixed isotropic vector in ∂ is rational (resp. irrational). In the case of a K3 surface, the existence of a primitive isotropic integral vector is equivalent to the existence of an elliptic fibration; moreover, such a vector is a parabolic fixed point if and only if the corresponding elliptic fibration admits a section of infinite order. We also recall the following standard notions <cit.>. * A collection of subsets of a topological space Y is locally finite if and only if for each point y of Y, there is an open neighborhood U of y in Y such that U meets only finitely many members of the collection. * A subset C of ^n is convex if for each pair of distinct points x,y of C, the geodesic segment [x,y] is contained in C. * Let C be the closure of a nonempty convex open subset of ^n. A side of C is a nonempty, maximal, convex subset of the topological boundary ∂ C. * A connected closed subset D of ^n is a fundamental domain for a group Γ of isometries of ^n if the members of {gD^∘ | g∈Γ} are mutually disjoint, and ^n=⋃_g∈ΓgD. * A fundamental domain D for a group Γ of isometries of ^n is locally finite if {gD | g∈Γ} is a locally finite collection of subsets of ^n. * A convex fundamental domain D for a group Γ of isometries of ^n is exact if for each side S of D there is an element g∈Γ such that S=D∩ gD.
* A generalized polytope in ^n is a convex hull of finitely many points in ^n. A subgroup of (^n) is Kleinian if it is discrete. A Kleinian group is elementary if it is virtually abelian, that is, it contains an abelian subgroup of finite index. Each elementary Kleinian group Γ is also classified into the following three types: * Elliptic type: Γ fixes a point in ^n, or equivalently, Γ is finite. * Parabolic type: Γ has a unique fixed point in ∂^n, or equivalently, Γ has a free abelian subgroup of finite index generated by parabolic elements. * Hyperbolic type: Γ has two fixed point in ∂^n, or equivalently, Γ has an infinite cyclic subgroup of finite index generated by a hyperbolic element. Let Γ be a Kleinian group. A point c∈∂^n is a limit point of Γ if there exists x∈^n and a sequence {g_i}_i=1^∞⊂Γ such that {g_ix}_i=1^∞ converges to c. The limit set L(Γ) of Γ is the set of all limit points of Γ. Note that a fixed point of either a parabolic or hyperbolic element of Γ is a limit point of Γ. For each x∈^n, we have L(Γ)=Γ.x∩∂^n, hence L(Γ) is a Γ-invariant closed subset of ∂^n. Let Γ be a Kleinian group. The following are equivalent: * Γ is elementary. * L(Γ) is finite, especially |L(Γ)|≤2. * Γ has a finite orbit in ^n. Thus, for each c∈∂^n, its stabilizer Γ_c is elementary. The set C(Γ) is the convex hull of L(Γ) in ^n, that is, the smallest convex subset of ^n satisfying C(Γ)∩∂^n=L(Γ). C(Γ) is Γ-invariant and closed in ^n, and when Γ is non-elementary, any Γ-invariant closed subset of ^n contains C(Γ). The quotient C(Γ)/Γ is called the convex core of the hyperbolic orbifold ^n/Γ. We introduce the notion of the geometrical finiteness, which is significant in this paper. * A Kleinian group Γ<(^n) is geometrically finite if Γ is finitely generated, and vol(C_ϵ(Γ)/Γ)<∞ for some ϵ>0, where C_ϵ(Γ) is a ϵ-neighborhood of C(Γ) in ^n. * Let G be a group. A representation G→(^n) is geometrically finite if the image is a geometrically finite Kleinian group, and the kernel is finite. The following is one of several characterizations of geometrically finite Kleinian groups. Let Γ be a Kleinian group. The following are equivalent: * Γ is geometrically finite. * There exists a Γ-invariant, pairwise disjoint collection {V_λ}_λ of open horoballs at parabolic fixed points of Γ, such that the quotient (C(Γ)\⋃_λ V_λ)/Γ is compact. Let ⊂^n be a convex subset and Γ a group of isometries of ^n preserving . A connected closed subset Π_ of ^n is a fundamental domain for the action of Γ on if the members of {gΠ_^∘ | g∈Γ} are mutually disjoint, and =⋃_g∈ΓgΠ_. A fundamental domain Π_ for for the action of Γ on is locally finite if {gΠ_ | g∈Γ} is a locally finite collection of subsets of . A convex fundamental domain Π_ for the action of Γ on is exact if for each side S of Π_ there is an element g∈Γ such that S=Π_∩ gΠ_. §.§ Groups and Cones in algebraic geometry Let X be a smooth projective variety over a field K. The automorphism group (resp. birational automorphism group) of X is denoted by (X) (resp. (X)). We write N^1(X) for the torsion-free part of the Néron–Severi group (X). Its rank is called the Picard rank and denoted by ρ_X. NS(X): group of divisor classes on X modulo algebraic (resp. numerical) equivalence. N^1(X):=Num(X): group of divisor classes on X modulo algebraic (resp. numerical) equivalence. NS(X)/tor = N^1(X) We define several notions of cones in N^1(X)_ as follows: * The positive cone _X is the connected component of the set {v∈ N^1(X)_| v^2>0} containing ample classes. 
* The cone ^+_X⊂_X is the convex hull of _X∩ N^1(X)_ in N^1(X)_. * The ample cone _X⊂_X is the cone generated by ample divisor classes. * The nef cone _X⊂_X is the closure of _X in N^1(X)_. * The cone ^+_X⊂_X is the convex hull of _X∩ N^1(X)_ in N^1(X)_. * The effective cone _X⊂ N^1(X)_ is the cone generated by integral curve classes. * The effective nef cone ^e_X:=_X∩_X. * The movable cone _X⊂_X is the cone generated by movable divisor classes. * The cone ^+_X⊂_X is the convex hull of _X∩ N^1(X)_ in N^1(X)_. A closed cone C⊂ N^1(X)_ is rational polyhedral if C is the convex hull of finitely many rational vectors, i.e. elements of N^1(X)_. Note that in any case of Theorem <ref>, N^1(X) admits a lattice structure of signature (1,ρ_X-1): the intersection form for surfaces, and the Beauville–Bogomolov–Fujiki form for irreducible symplectic varieties. For a rational polyhedral cone C⊂ N^1(X)_, the convex subspace C∩^ρ_X-1 is a generalized polytope whose vertices lie in ^ρ_X-1(). § PROOF OF THEOREM <REF> §.§ General statement We shall prove the following general statement on the geometrical finiteness. Throughout this subsection, let Λ be a lattice of signature (1,n) and fix a positive cone ⊂Λ_. Let Γ be a subgroup of O^+(Λ) and be a Γ-invariant closed convex subset of ^n. Suppose that there exists a fundamental domain Π_ for the action of Γ on satisfying the following conditions: * Π_ is locally finite and exact, * Π_ is a generalized polytope whose vertices lie in ^n(). Then Γ is geometrically finite. Let Γ<(^n) be a Kleinian group. If there exists a Γ-invariant closed convex subset ⊂^n satisfying following conditions: * admits a locally finite, exact fundamental domain Π_, * Π_ is a generalized polytope whose vertices lie in ^n()∪∂^n(), then Γ is geometrically finite. To prove this theorem, we take a similar approach as in <cit.>. Elementary Kleinian groups are geometrically finite. Therefore, in the following, we assume that Γ is non-elementary. The set Π_∩ L(Γ) is nonempty and finite. On the contrary, suppose that Π_∩ L(Γ) is empty. By L(Γ)⊂C(Γ)⊂=Γ.Π_, L(Γ) is empty, which contradicts that Γ is non-elementary. Since Π_ is a generalized polytope, Π_∩ L(Γ) is finite. The following is a key to prove Theorem <ref>. Π_∩ L(Γ) consists of parabolic fixed points of Γ. Since any c∈Π_∩ L(Γ) is rational, the stabilizer Γ_c is elementary of either elliptic or parabolic type, hence it is enough to show that Γ_c is infinite. We pass to the upper half-space model ^n and conjugate Γ so that c=∞. Let v:^n→^n-1 be the vertical projection. We define the subset U of as follows U:=∪{gΠ_| g∈Γ such that c∈ gΠ_}. We now show that vU= v. Since {gΠ_}_g∈Γ is locally finite, vU is closed in v. Let us show that vU is open in v. For any z∈ vU, we take an element w∈ v^-1z. If w is an inner point of U, then z is also. If not, we may assume that w∈∂ U lies in a vertical side (i.e. a side whose closure contains c=∞) of f_1Π_ for some f_1∈Γ. There exist f_2,⋯,f_s∈Γ and a sufficiently small open ball B(w) at w such that w∈ f_iΠ_ for each i and ∩ B(w)⊂∪^s_j=1 f_jΠ_. Furthermore, we can assume that B(w) does not meet non-vertical sides of each f_iΠ_. Note that <cit.> is available since Π_ is a locally finite, exact fundamental domain and a generalized polytope. For some f_i_1∈Γ, if f_i_1Π_ has a side that intersects a vertical one of f_1Π_ containing w, then the side of f_i_1Π_ is also vertical by <cit.>. Hence, any side of f_i_1Π_ containing w is vertical. 
If some f_i_2Π_ has a side that intersects a vertical one of f_i_1Π_ containing w, the same argument implies that any side of f_i_2Π_ containing w is vertical. Repeating this argument, we find that each f_iΠ_ contains c, namely ∪^s_j=1 f_jΠ_⊂ U. We have z∈ v∩ vB(w) = v(∩ B(w)) ⊂∪^s_j=1 vf_jΠ_⊂ vU. It turns out that vU is open in v. Hence vU= v. By the finiteness of Π_∩Γ.c, we have elements h_1,⋯,h_k∈Γ such that Π_∩Γ.c={h^-1_jc| j=1,⋯,k}. Note that, for g∈Γ, c∈ gΠ_ if and only if g∈∪^k_j=1Γ_c h_j. Therefore, v={ vgΠ_ | g∈∪^k_j=1Γ_c h_j}. On the contrary, suppose that Γ_c is finite. Then we have ∪^k_j=1Γ_c h_j={g_1,⋯,g_m} for some elements of Γ. For any y∈, there exists y_0∈ g_j_0Π_ for some g_j_0∈Γ such that the hyperbolic geodesic ray, or equivalently, the vertical ray [y,c)⊂^n satisfies that [y,c)∩ g_j_0Π_=[y_0,c). Thus, taking a sufficiently small open ball B(c) at c in ^n, we have ∩ B(c)⊂∪^m_j=1g_jΠ_. By c∈ L(Γ)=Γ.x∩∂^n for any x∈ g_1Π^∘_, there exists an infinite sequence {f_j}_j≥1⊂Γ such that {f_jx}_j⊂∩ B(c), hence {f_jx}_j⊂∪^m_j=1g_jΠ_. However, this contradicts that Π_ is a fundamental domain. Set Π_∩ L(Γ)={c_1,⋯,c_l}⊂∂^n(). There exists an open horoball B_i at each c_i∈Π_∩ L(Γ) such that elements in the collection {gB_i | g∈Γ, i=1,⋯,l} are pairwise disjoint. The claim is proved in <cit.>, but we provide a proof here for the reader's convenience. For c_i∈Π_∩ L(Γ), let e_i∈∂∩Λ be the corresponding primitive isotropic integral vector. We explicitly define an open horoball B_i at c_i as follows B_i:=B_e_i:={x∈^n  |  (x,e_i)<1/2}. For distinct primitive isotropic integral vectors e,e'∈∂∩Λ and x∈^n, by <cit.> and <cit.>, we have 1≤(e,e')≤ 2(x,e)(x,e'), which implies that B_e and B_e' are disjoint. Furthermore, it is easy to check that gB_e=B_ge for each g∈Γ, which completes the proof. Therefore, we obtain the Γ-invariant, pairwise disjoint collection :={gB_i | g∈Γ, i=1,⋯,l} of open horoballs at parabolic fixed points of Γ. Π_∩ C(Γ)\⋃_i=1^l B_i is bounded. Assume that Π_∩ C(Γ)\⋃_i=1^l B_i is unbounded. Then there exists an unbounded sequence {a_j}_j≥1⊂Π_∩ C(Γ)\⋃_i=1^l B_i. We can take a subsequence {a'_j} convergent to a point y in the set (Π_∩ C(Γ))∩∂^n ⊂Π_∩ L(Γ) ={c_1,⋯,c_l}, hence y=c_i_0 for some i_0. Since Π_ is finite-sided, a'_j is in the horoball B_i_0 for sufficiently large j, which contradicts the assumption. Proof of Theorem <ref>. Let π:^n→^n/Γ be the quotient map. Then it easily follows from C(Γ) ⊂ that π(C(Γ)\⋃_ gB_i ) ⊂π(Π_∩ C(Γ)\⋃_i=1^l B_i) . By the claim above, Π_∩ C(Γ)\⋃_i=1^l B_i is bounded, so the right-hand side of (<ref>) is compact, and thus the left-hand side is also. By Theorem <ref>, Γ is geometrically finite. §.§ Cone conjecture implies Geometrical finiteness We now complete the proof of Theorem <ref> using Theorem <ref> and the cone conjecture for each case. Proof of Theorem <ref>. Set τ:=ρ_X-1 for simplicity. Case (1): Let X be a K3 surface over a field of characteristic not 2. Let us recall that the representation (X)→ O^+((X)) has a finite kernel (see <cit.>), and the image is denoted by Γ_X. For the Weyl group W_X of X, we define Γ_X:=Γ_X⋉ W_X. We apply Theorem <ref> with Λ:=(X) and :=_X^+∩^τ.
Fix an element h:=H/√((H,H))∈ for some ample class H∈(X) such that the stabilizer Γ_X,H is trivial. _X^+は(X)_coneより, geodesically connected and geodesically complete. よって<cit.>より, stabilizerがtrivialとなる, この元に関するDirichlet domainを考える. ample class(の正規化)がこのDirichlet domainの内部にないとすると, ample classがample coneを生成することに矛盾する. よってあるample class(の正規化)がこのDirichlet domainの内部に存在し, これは内部の元なのでstabilizerは自明. We here recall the cone conjecture: Let X be a K3 surface over a field K. Suppose that the characteristic of K is not 2 or X is not supersingular. Then D^+_X:={x∈^+_X  | (H,x)≤(H,gx) for any g∈Γ_X} is a rational polyhedral fundamental domain for the action of Γ_X on ^+_X. A closed subset Π'_:=D^+_X∩^τ is a fundamental domain for the action of Γ_X on . Clearly, Π'_ is a subset of the Dirichlet domain (<cit.>) for the action of Γ_X on ^τ. Thus, Π'_ is locally finite as a fundamental domain, which implies that is closed in ^τ, hence is proper (i.e. any closed ball is compact). is also geodesically connected and geodesically complete in the sense of <cit.>. By <cit.>, the Dirichlet domain Π_ defined by Π_:= {x∈ |  d(h,x)≤ d(h,gx) for any g∈Γ_X}, is a locally finite, exact fundamental domain for the action of Γ_X on . Therefore, the inclusion Π'_⊂Π_ is actually an equality. Since Π'_ is also a generalized polytope whose vertices lie in ^τ(), Π_ satisfies all the conditions in Theorem <ref>. Case (2): The proof is the same as Case (1). Case (3): Let X be an Enriques surface over an algebraically closed field of characteristic not 2. The representation (X)→ O^+(N^1(X)) has a finite kernel (see <cit.>), and the image is denoted by Γ_X. For the subgroup W^ nod_X of O^+(N^1(X)) generated by reflections associated with classes of nodal curves in X, we define Γ_X:=Γ_X⋉ W^ nod_X. The proof is similar to Case (1). We apply Theorem <ref> with Λ:=N^1(X) and :=_X^e∩^9. The cone conjecture is as follows: Let X be an Enriques surface over an algebraically closed field of characteristic not 2. Then D^+_X:={x∈^+_X  | (H,x)≤(H,gx) for any g∈Γ_X} is a rational polyhedral fundamental domain for the action of Γ_X on ^e_X. Case (4): In this paper, a Coble surface means a terminal Coble surface of K3 type in the sense of Dolgachev–Zhang <cit.>. Let X be a Coble surface over an algebraically closed field of characteristic 0, and Y→ X the K3 cover with covering involution σ. A natural representation (X)→ O^+((X))≃ O^+((Y)^σ)< O^+((Y)) has a finite kernel (see <cit.>), and the image in O^+((Y)^σ) is denoted by Γ_X. For the subgroup R_X of O^+((Y)^σ) called σ-equivariant reflection group (<cit.>), we define Γ_X:=Γ_X⋉ R_X. The proof is similar to Case (1). We apply Theorem <ref> with Λ:=(Y)^σ≃(X), _X:=_Y∩(Y)^σ, and :=(_Y^e)^σ∩^τ≃_X^e∩^τ. The cone conjecture is as follows: Let X be a Coble surface over an algebraically closed field of characteristic 0. Then D^+_X:={x∈^+_X  | (H,x)≤(H,gx) for any g∈Γ_X} is a rational polyhedral fundamental domain for the action of Γ_X on (^e_Y)^σ. Case (5): Let X be an irreducible symplectic variety over a field of characteristic 0. The representation (X)→ O^+(N^1(X)) has a finite kernel (see <cit.>), and the image is denoted by Γ_X. We apply Theorem <ref> with Λ:=N^1(X) and :=_X^+∩^τ. Fix an element h:=H/√((H,H))∈ for some ample class H∈ N^1(X) such that the stabilizer Γ_X,H is trivial. The cone conjecture is as follows: Let X be an irreducible symplectic variety over a field of characteristic 0. 
Then D^+_X:={x∈_X  | (H,x)≤(H,gx) for any g∈Γ_X} is a rational polyhedral fundamental domain for the action of Γ_X on ^+_X. In this case, a closed subset Π_:=D^+_X∩^τ is nothing but the Dirichlet fundamental domain for the action of Γ_X on . The remaining arguments are the same as Case (1). Case (6): Let X be an irreducible symplectic variety over a field K of characteristic 0. The representation (X)→ O^+(N^1(X)) has a finite kernel (see <cit.>), and the image is denoted by Γ_X. Let W^ Exc_X_K be the subgroup of O^+(N^1(X_K)) generated by reflections associated with classes of prime exceptional divisors on X_K, and R_X the Gal_K-fixed part in W^ Exc_X_K (<cit.>), where Gal_K is the absolute Galois group of K. Note that R_X acts faithfully on N^1(X). We define Γ_X:=Γ_X⋉ R_X< O^+(N^1(X)). The proof is similar to Case (1). We apply Theorem <ref> with Λ:=N^1(X) and :=_X^+∩^τ. The cone conjecture is as follows: Let X be an irreducible symplectic variety over a field of characteristic 0. Then D^+_X:={x∈^+_X  | (H,x)≤(H,gx) for any g∈Γ_X} is a rational polyhedral fundamental domain for the action of Γ_X on _X^+. § APPLICATIONS We present several applications of Theorem <ref>. §.§ Non-positively curved properties Throughout this subsection, let G_X be a group as in Theorem <ref>. We first see that G_X is non-positively curved: CAT(0) and relatively (Gromov) hyperbolic, see <cit.> for definitions. G_X is CAT(0). It is well-known that each geometrically finite Kleinian group Γ is CAT(0) via the isometric action on the CAT(0) space C(Γ)\⋃_λ V_λ with the induced length metric, called the truncated convex hull of the limit sets, where {V_λ} is a Γ-invariant pairwise disjoint collection of open horoballs at parabolic fixed points of Γ (<cit.>). Since the representation G_X→(^ρ_X-1) has a finite kernel, the isometric action of G_X on the truncated convex hull is also proper and cocompact. Historically, non-elementary geometrically finite Kleinian groups are the original examples of relatively hyperbolic groups. Recall that a (relatively) hyperbolic group is called elementary if it is virtually cyclic. G_X is either virtually abelian or non-elementary relatively hyperbolic. The claim is clear by Theorem <ref>. Note that since the isometric action on ^ρ_X-1 has a finite kernel, Γ_X is non-elementary relatively hyperbolic (resp. virtually abelian) if and only if G_X is non-elementary relatively hyperbolic (resp. virtually abelian). Note also that virtual abelianity (and virtual cyclicity) is a quasi-isometry invariant: by the results of Harmonic analysis, cohomology, and the large-scale geometry of amenable groups, being virtually abelian, being virtually ^n, and being quasi-isometric to ^n are all equivalent, so virtual abelianity is preserved under quasi-isometries.
Recall also the following properties of relatively hyperbolic groups (under the convention that there are finitely many finitely generated peripheral subgroups): relative hyperbolicity is a quasi-isometry invariant (https://arxiv.org/abs/math/0605211), and since virtual cyclicity is a quasi-isometry invariant, being non-elementary is preserved under quasi-isometries as well; any abelian subgroup of rank at least 2 is conjugate to a peripheral subgroup (https://arxiv.org/abs/math/0504271); and a virtually abelian group that is not virtually cyclic is not hyperbolic relative to any finite collection of proper finitely generated subgroups (https://arxiv.org/abs/math/0504271). We also obtain a criterion for relative hyperbolicity. G_X is non-elementary relatively hyperbolic if and only if G_X contains the rank 2 free group F_2. This is a direct corollary of the strong Tits alternative: for every subgroup H<G_X, either H is virtually abelian or H contains F_2. Note that relatively hyperbolic groups satisfy the Tits alternative (<cit.>), and virtual solvability is equivalent to virtual abelianity in this case by <cit.>. §.§ Dynamical characterizations for K3 surfaces In this subsection, let X be a K3 surface over an algebraically closed field K. We further suppose that the characteristic of K is not 2 or X is not supersingular as in Theorem <ref>. For each automorphism f∈(X), the entropy is numerically defined as the logarithm of the spectral radius of its induced action on (X)_. We say that (X) has zero entropy if the entropy of every automorphism of X is zero; otherwise, (X) has positive entropy. Note that, over the complex number field, the entropy is actually equal to the topological entropy due to Gromov–Yomdin <cit.>. The following is a characterization of virtual abelianity via entropy. Note that there exists a numerical criterion for positivity of the topological entropy due to Gromov–Yomdin. Namely, an automorphism has positive topological entropy if and only if the induced linear map on the cohomology has an eigenvalue of radius greater than 1. Suppose that (X) is infinite. * For ρ_X=2, (X) is virtually cyclic and has positive entropy. * For ρ_X=3, (X) is virtually abelian if and only if (X) is virtually cyclic. * For ρ_X=4, (X) is virtually abelian if and only if either (X) is virtually cyclic and has positive entropy, or (X) has zero entropy. * For ρ_X≥ 5, (X) is virtually abelian if and only if (X) has zero entropy. Most of the proof follows from <cit.> and <cit.>, but note that we use hyperbolicity (see Example <ref>) in the case of ρ_X=2,3. By the alternative in Corollary <ref>, we obtain a dynamical and numerical characterization of relative hyperbolicity. Suppose that the Picard rank of X is at least 5. Then (X) is non-elementary relatively hyperbolic if and only if (X) has positive entropy. §.§ Examples We collect several examples in this subsection. [<cit.>] For an integer ρ'∈{3,4,⋯,18}, there exists a complex K3 surface of Picard rank ρ' such that (X) is virtually abelian of rank at most 8. Let X be a K3 surface as in Theorem <ref>. If its Picard rank is at most 3, then (X) is (possibly elementary) hyperbolic by <cit.>. [cf. <cit.>] The following K3 surfaces over an algebraically closed field have positive entropy, and hence their automorphism groups are non-elementary relatively hyperbolic. * Kummer surfaces in characteristic not 2. * K3 surfaces covering an Enriques surface, unless (X)≃ U⊕ E_8⊕ D_8 in characteristic not 2. * Singular K3 surfaces. * Supersingular K3 surfaces in characteristic not 2.
We shall consider two specific examples of singular K3 surfaces over . * Let X_F be the Fermat quartic. Then by Shioda <cit.>, X_F admits an elliptic fibration of Mordell–Weil rank 6. Therefore, (X_F) is relatively hyperbolic, but not hyperbolic. * Let X_3 and X_4 be the K3 surfaces whose transcendental lattices are of the form (X_3) = [ 2 1; 1 2; ] and (X_4) = [ 2 0; 0 2; ] respectively, see <cit.> and <cit.>. By Vinberg (<cit.>), their automorphism groups are virtually free, hence hyperbolic. Let X be a K3 surface as in Theorem <ref>. If its Picard rank is at least 2 and X does not contain any (-2) curve, then Γ_X is a uniform lattice in (^ρ_X-1), hence hyperbolic: indeed, if there is no (-2) curve then Nef=NS_R, so =^τ; hence Π_ has finite volume and Γ_X is a lattice, and since C(Γ_X)=^fin, the lattice is moreover uniform. Furthermore, in the case of ρ_X≥3, it is easy to construct a quasi-isometric embedding of ^2 into (X) since (X) itself is quasi-isometric to ^ρ_X-1 by the Švarc–Milnor lemma. Therefore (X) is not virtually free due to Bonk–Kleiner <cit.>. To the best of our knowledge, all known explicit descriptions of automorphism groups that are hyperbolic so far have been virtually free, for example, free products of finite groups. We here present examples in the case of Enriques (resp. Coble) surfaces over an algebraically closed field of characteristic 2 (resp. positive characteristic), which are not covered by Theorem <ref>. * <cit.> (see also <cit.>):  Let X be an ordinary unnodal Enriques surface (in characteristic 2, ordinary Enriques surfaces are also called singular or μ_2-surfaces). Then (X) is a finite index subgroup of the orthogonal group O^+(E_10) of E_10 preserving the positive cone, where E_10:=U⊕ E_8 is the unique even unimodular lattice of signature (1,9). It is well-known that O^+(E_10) is a lattice (in the sense of hyperbolic geometry) in (^9), and hence is geometrically finite. Thus, (X) is also geometrically finite. * <cit.>:  Let X be an unnodal Coble surface. Then the representation (X)→ O^+((X)) has a finite kernel and an image isomorphic to a finite index subgroup of O^+(E_10), hence is geometrically finite as in (i). §.§ Mukai's conjecture In 2018, Mukai conjectured the relation between the virtual cohomological dimension of automorphism groups of elliptic K3 surfaces and the Mordell–Weil rank of elliptic fibrations. Let X be a complex elliptic K3 surface. Then we have ((X))=max_f{(f)}, where f:X→^1 is any elliptic fibration and (f) is the Mordell–Weil group of the Jacobian fibration of f. As an application of the results in this paper, we can provide the first counterexample to Mukai's conjecture as follows. Let X_0 be a complex elliptic K3 surface of Picard rank 3. We further suppose that X_0 does not contain any (-2) curve. Note the well-known fact that a group is virtually free if and only if its virtual cohomological dimension is at most 1. Since (X_0) is a non-virtually-free hyperbolic group as in Example <ref>, we have ((X_0))≥2.
On the other hand, we generally have (f)≤ρ_X_0-2=1 for any elliptic fibration f:X_0→^1, thus the equation (<ref>) does not hold. Fix a positive integer a≥ 2. Let X_a be a complex K3 surface with (X_a) = [ 2a 0 0; 0 -2a 0; 0 0 -2a; ]. Clearly, X_a is elliptic and does not contain any (-2) curve. Thus (X_a) is a non-virtually-free hyperbolic group as in Example <ref>, hence we have ((X_a))≥2. On the other hand, for any elliptic fibration f:X_a→^1, we have (f)≤ρ_X_a-2=1, thus the equation (<ref>) does not hold.
http://arxiv.org/abs/2406.18380v1
20240626142121
KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning
[ "Roman Bresson", "Giannis Nikolentzos", "George Panagopoulos", "Michail Chatzianastasis", "Jun Pang", "Michalis Vazirgiannis" ]
cs.LG
[ "cs.LG" ]
KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning
Roman Bresson, Giannis Nikolentzos, George Panagopoulos, Michail Chatzianastasis, Jun Pang, Michalis Vazirgiannis
§ ABSTRACT In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs typically consist of a sequence of neighborhood aggregation (a.k.a. message passing) layers. Within each of these layers, the representation of each node is updated through an aggregation and transformation of its neighbours' representations at the previous layer. The upper bound for the expressive power of message passing GNNs was reached through the use of MLPs as the transformation, due to their universal approximation capabilities.
However, MLPs suffer from well-known limitations, which recently motivated the introduction of Kolmogorov-Arnold Networks (KANs). KANs rely on the Kolmogorov-Arnold representation theorem, rendering them a promising alternative to MLPs. In this work, we compare the performance of KANs against that of MLPs in graph learning tasks. We perform extensive experiments on node classification, graph classification and graph regression datasets. Our preliminary results indicate that while KANs are on-par with MLPs in classification tasks, they seem to have a clear advantage in the graph regression tasks. § INTRODUCTION Graphs are structural representations of information which are useful for modeling many types of data. They arise naturally in a wide range of application domains, and their abstract nature offers increased flexibility. Typically, the nodes of a graph represent entities, while the edges capture the interactions between them. For instance, in social networks, nodes represent individuals, and edges represent their social interactions. In chemo-informatics, molecules are commonly modeled as graphs, with nodes corresponding to atoms and edges to chemical bonds. In other settings, molecules can also be nodes, with edges capturing their ability to bond with one another. In many cases where graph data is available, there exist problems that cannot be solved efficiently using conventional tools (graph algorithms) and require the use of machine learning techniques. For instance, in the field of chemo-informatics, the standard approach for estimating the quantum mechanical properties of molecules leverages computationally expensive density functional theory computations <cit.>. Machine learning methods could serve as a more efficient alternative to those methods. Recently, Graph Neural Networks (GNNs) have been established as the dominant approach for learning on graphs <cit.>. Most GNNs consist of a series of message passing layers. Within a message passing layer, each node updates its feature vector by aggregating the feature vectors of its neighbors and combining the emerging vector with its own representation. A lot of recent work has focused on investigating the expressive power of GNNs <cit.>. There exist different definitions of expressive power, however, the most common definition is concerned with the number of pairs of non-isomorphic graphs that a GNN model can distinguish. Two graphs are isomorphic if there exists an edge-preserving bijection between their respective sets of nodes. In this setting, a model is more expressive than another model if the former can distinguish all pairs of non-isomorphic graphs that the latter can distinguish, along with other pairs that the latter cannot <cit.>. Furthermore, an equivalence has also been established between the ability of GNNs to distinguish non-isomorphic graphs and their ability to approximate permutation-invariant functions on graphs <cit.>. This line of work gave insights into the limitations of different models <cit.>, but also led to the development of more powerful architectures <cit.>. Most maximally-expressive GNN models rely on multi-layer perceptrons (MLPs) as their main building blocks, due to their universal approximation capabilities <cit.>. The theorem states that any continuous function can be approximated by an MLP with at least one hidden layer, given that this layer contains enough neurons. 
Having said that, in practice the models suffer from several limitations due to non-convex loss functions, algorithms without convergence guarantees and a notorious lack of interpretability that hinders their applicability in several domains. Recently, Kolmogorov-Arnold Networks (KANs) <cit.> have emerged as promising alternatives to MLPs. They are based on the Kolmogorov-Arnold representation theorem <cit.> which states that a continuous multivariate function can be represented by a composition and sum of a fixed number of univariate functions. KANs substitute the learnable weights and pre-defined activation functions of MLPs, with learnable activations based on B-splines and summations. The initial results demonstrate that KANs have the potential to be more accurate than MLPs in low dimensions, while simultaneously being more interpretable. In this paper, we present a thorough empirical comparison between GNNs that use KANs to update node representations and GNNs that utilize MLPs to that end. Our work is orthogonal to prior work that studies the expressive power of GNNs. Here, we empirically compare models that are theoretically equally expressive in terms of distinguishing non-isomorphic graphs against each other, and we study the impact of the different function approximation modules (KANs or MLPs) on the model's performance. We evaluate the different GNN models on several standard node classification, graph classification and graph regression datasets. The rest of this paper is organized as follows. Section <ref>, provides an overview of the tasks we address in this paper, as well as a description of message passing GNNs and Kolmogorov-Arnold networks. In section <ref>, we introduce the  (Kolmogorov-Arnold Graph Isomorphism Network) and  (Kolmogorov-Arnold Graph Convolution Network) models, which are variants of existing GNNs, and which leverage KANs to update node features within each layer. In section <ref>, we present extensive empirical results comparing the above models with their vanilla counterparts in several tasks. Finally, section <ref> concludes the paper. § BACKGROUND §.§ Considered Graph Learning Tasks Before presenting the tasks on which we focus in this study, we start by introducing some key notation for graphs. Let ℕ denote the set of natural numbers, {1,2,…}. Then, [n] = {1,…,n}⊂ℕ for n ≥ 1. Let G = (V,E) be an undirected graph, where V is the vertex set and E is the edge set. We denote by n the number of vertices and by m the number of edges, n = |V| and m = |E|. Let g V → [n] denote a bijective mapping from the space of nodes to set [n]. Let 𝒩(v) denote the the neighbourhood of vertex v, the set {u |{v,u}∈ E}. The degree of a vertex v is (v) = |𝒩(v)|. Each node v ∈ V is associated with a d-dimensional feature vector 𝐱_v ∈ℝ^d, and the feature matrix for all nodes is represented as 𝐗∈ℝ^n × d. Thus, 𝐱_v is equal to the g(v)-th row of 𝐗. In node classification, each node v ∈ V is associated with a label y_v that represents a class. The task is to learn a function that maps nodes to their class labels, to learn a function f_node such that f_node(v, G, 𝐗) = y_v. In graph regression/classification, the dataset consists of a collection of N graphs G_1, …, G_N along with their class labels/targets y_G_1, …, y_G_N. The task is then to learn a function that maps graphs to their class labels/targets, to learn a function f_graph such that f_graph(G, 𝐗) = y_G, which can be discrete or continuous, for graph classification or graph regression, respectively. 
The standard approach for learning such predictors (both for node- and graph-level tasks) is to first embed the nodes of the graph(s) into some vector space. That is, we aim to learn 𝐇 = (G, X) ∈ℝ^n × d_e where d_e denotes the embedding dimension. Then, the g(v)-th row of matrix 𝐇 represents the embedding of node v. Let 𝐡_v denote this embedding. For node-level tasks, we can use 𝐡_v to directly predict the class label/target of node v. For graph-level tasks, we also need to apply a readout function on all the representations of the graph's nodes to obtain a representation 𝐡_G =(𝐇) for the entire graph. One particularly desirable property of such models is permutation invariance. That is, the embedding 𝐡_G of a graph needs to be the same regardless of the ordering of its nodes. Indeed, these orderings do not hold any semantic meaning and different orderings give rise to isomorphic graphs. Permutation invariance is achieved at the readout step by utilizing a permutation-invariant operation over the rows of 𝐇, such as the sum, max or mean operators. §.§ Graph Neural Networks One of the most widely-used paradigms for designing such permutation invariant models is the message passing framework <cit.>, which consists of a sequence of layers; within each layer, the embedding of each node is computed as a learnable function of its neighbors' embeddings. Formally, the embedding 𝐡_v^(ℓ)∈ℝ^d_ℓ at layer ℓ is computed as follows: 𝐡_v^(ℓ) = ϕ^(ℓ)(𝐡_v^(ℓ-1), ⊕_u∈𝒩(v)𝐡_u^(ℓ-1)) where ⊕ is a permutation-invariant aggregation function (e.g., mean or sum), and ϕ^(ℓ) is a differentiable function (e.g., a linear transformation or an MLP) that combines and transforms the node's previous embedding with the aggregated vector of its neighbors. As discussed above, in this paper, we focus on the functions that different GNN models employ to update node representations. Many existing GNNs use a 1-layer perceptron (a linear mapping followed by a non-linear activation function) within each neighborhood aggregation layer to update node features <cit.>. For instance, each layer of the Graph Convolutional Network (GCN) <cit.> is defined as follows: 𝐡_v^(ℓ) = σ(𝐖^(ℓ)∑_u∈𝒩(v) ∪{v}𝐡_u^(ℓ-1)/√(((v)+1) ((u)+1))) where σ is a non-linear activation and 𝐖^(ℓ) is a trainable weight matrix. However, the 1-layer perceptron is not a universal approximator of multiset functions <cit.>. Thus, the emerging GNN might not be expressive enough for some tasks. For this reason, more recent models use MLPs instead of 1-layer perceptrons to update node representations <cit.>. It is well-known that standard message passing GNNs are bounded in expressiveness by the Weisfeiler-Leman (WL) test of isomorphism <cit.>. While two isomorphic graphs will always be mapped to the same representation by such a GNN, some non-isomorphic graphs might also be assigned identical representations. A model that can achieve the same expressive power as the WL test, given sufficient width and depth of the MLP, is the Graph Isomorphism Network (GIN) <cit.>, which is defined as follows: 𝐡^(ℓ)_v = ^(ℓ)((1+ϵ^(ℓ)) ·𝐡^(ℓ-1)_v + ∑_u∈𝒩(v)𝐡^(ℓ-1)_u) where ϵ^(ℓ) denotes a trainable parameter, and ^(ℓ) a trainable MLP. The GIN model can achieve its full potential if proper weights (for the different ^(ℓ) layers and ϵ^(ℓ)) are learned. However, in practice, GIN might fail to learn those weights due to limited training data and due to limitations of the employed training algorithm (stochastic gradient descent).
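For concreteness, a GIN-style layer as in the equation above can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the implementation evaluated later in the paper: it assumes a dense adjacency matrix, and the class and argument names are chosen here purely for exposition.

import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """Minimal GIN-style message passing layer (dense adjacency, for illustration)."""
    def __init__(self, in_dim, out_dim, hidden_dim=64):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))   # trainable epsilon
        self.mlp = nn.Sequential(                 # MLP used as the update function
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x, adj):
        # x: (n, d) node features, adj: (n, n) adjacency matrix without self-loops
        neigh_sum = adj @ x                       # sum of the neighbours' features
        return self.mlp((1 + self.eps) * x + neigh_sum)

# Toy usage: a path graph on 4 nodes with 8-dimensional features
x = torch.randn(4, 8)
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
h = GINLayer(8, 16)(x, adj)                       # -> shape (4, 16)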
These practical difficulties have motivated a series of works which focus on improving the training procedure of GNNs. For example, Ortho-GConv is an orthogonal feature transformation that can address GNNs' unstable training <cit.>. Other works have studied how to initialize the weights of the MLPs of the message passing layers of GNNs. It was shown that adopting the weights of converged MLPs as the weights of the corresponding GNNs can lead to performance improvements in node classification tasks <cit.>. On the other hand, there exist settings where there is no need for complex learning models. This has led to the development of methods for simplifying GNNs. This can be achieved by removing the nonlinearities between the neighborhood aggregation layers and collapsing the resulting function into a single linear transformation <cit.>, or by feeding the node features into a neural network which generates predictions and then propagating those predictions via a personalized PageRank scheme <cit.>. §.§ Kolmogorov-Arnold Networks Presented as an alternative to the MLP, the Kolmogorov-Arnold Network (KAN) architecture has recently attracted a lot of attention in the machine learning community <cit.>. As mentioned above, this model relies on the Kolmogorov-Arnold representation theorem, which states that any continuous multivariate function f: [0,1]^d →ℝ can be written as: f(𝐱) = ∑_i=1^2d+1Φ_i(∑_j=1^d ϕ_ij(𝐱_j)) where all Φ_□ and ϕ_□ functions are univariate, and the sum is the only multivariate operator. Equation (<ref>) can be seen as a two-step process. First, a different set of univariate non-linear activation functions is applied to each dimension of the input, and then the outputs of those functions are summed up. The authors rely on this interpretation to define a Kolmogorov-Arnold Network (KAN) layer, which is a mapping between a space A ⊆ℝ^d and a different space B ⊆ℝ^d' (identical in use to an MLP layer). Such a layer consists of d × d' trainable functions {ϕ_ij, 1≤ i ≤ d', 1≤ j ≤ d}. Then, for 𝐱∈ A, we compute its image 𝐱' as: 𝐱_i' = ∑_j=1^d ϕ_ij(𝐱_j) Stacking two such layers, one with input dimension d and output dimension 2d+1, and another with input dimension 2d+1 and output dimension 1, we obtain Equation (<ref>), and the derived model is a universal function approximator. This seemingly offers a complexity advantage compared to MLPs, since the number of univariate functions required to represent any multivariate function from [0,1]^d to ℝ^d' is at most (2d^2 + d) × d', whereas the universal approximation theorem for the MLP requires a possibly infinite number of neurons. However, as stated by the original paper, the behavior of such univariate functions might be arbitrarily complex (fractal, non-smooth), thus leading to them being non-representable and non-learnable. MLPs relax the arbitrary-width constraint by stacking finite-width layers. Likewise, KANs relax the arbitrary-complexity constraints on the non-linearities by stacking KAN layers. Thus, the output of the network is given by: y = (𝐱) = Φ_L ∘Φ_L-1∘⋯∘Φ_1(𝐱) where Φ_1,…,Φ_L are KAN layers. The original paper uses splines (trainable piecewise-polynomial functions) as nonlinearities. This allows the model to retain a high expressivity for a relatively small number of parameters, at the cost of enforcing some local smoothness. A layer ℓ is thus a d_ℓ× d_ℓ-1 grid of splines. The degree used for each spline (called spline order), as well as the number of splines used for each function (called grid size), are both hyperparameters of the architecture.
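To make the layer definition above concrete, the following sketch parameterizes each univariate function ϕ_ij as a learnable combination of a small set of fixed basis functions. For simplicity it uses Gaussian bumps on a uniform grid in place of the B-splines of the original KAN paper, so it is a conceptual sketch under that substitution and not the implementation relied upon in the experiments; all names are illustrative.

import torch
import torch.nn as nn

class SimpleKANLayer(nn.Module):
    """One KAN layer: out_i = sum_j phi_ij(x_j), each phi_ij spanned by K fixed basis functions."""
    def __init__(self, in_dim, out_dim, grid_size=5, x_min=-1.0, x_max=1.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, grid_size))  # (K,)
        self.width = (x_max - x_min) / (grid_size - 1)
        # one coefficient per (output unit, input dimension, basis function)
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, grid_size))

    def forward(self, x):                          # x: (N, in_dim)
        # evaluate every basis function on every input coordinate
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)  # (N, in_dim, K)
        # phi_ij(x_j) = sum_k coef[i, j, k] * basis_k(x_j), then sum over j
        return torch.einsum("njk,ijk->ni", basis, self.coef)                      # (N, out_dim)

# Stacking d -> 2d+1 -> 1 mirrors the Kolmogorov-Arnold construction for d = 8
kan = nn.Sequential(SimpleKANLayer(8, 17), SimpleKANLayer(17, 1))
y = kan(torch.rand(32, 8))                         # -> shape (32, 1)

In this sketch, grid_size plays the role of the grid size discussed above, while the spline order has no direct analogue because the basis is fixed rather than piecewise-polynomial.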
Even though KANs were introduced very recently, they have already been applied to different problems such as satellite image classification <cit.> and the prediction of the pressure and flow rate of flexible electrohydrodynamic pumps <cit.>. So far, most efforts have focused on time series data <cit.>. For instance, KANs have been evaluated in the satellite traffic forecasting task <cit.>. Furthermore, they have been combined with architectures that are traditionally leveraged in time series forecasting tasks such as the Long Short-Term Memory Network <cit.> and the Transformer <cit.>. The work closest to ours is the one reported in <cit.>, where the authors propose FourierKAN-GCF. This is a GNN model designed for the task of graph collaborative filtering where the feature transformation in the neighborhood aggregation layers is performed by KANs. § KAN-BASED GNN LAYERS We next derive variants of the GIN and GCN models which use KANs to transform the node features instead of fully-connected layers or MLPs. §.§ The  Layer To achieve its maximal expressivity, the GIN model relies on the MLP architecture and its universal approximator property. Since KAN is also a universal function approximator, we could achieve the same expressive power using KANs in lieu of MLPs. We thus propose the  model, which is defined as follows: 𝐡^(ℓ)_v = ^(ℓ)((1+ϵ) ·𝐡^(ℓ-1)_v + ∑_u∈𝒩(v)𝐡^(ℓ-1)_u) With theoretically-sound KANs (with arbitrarily complex components), this architecture is exactly as expressive as the vanilla GIN model with arbitrary layer width. While this is not guaranteed with the spline-based implementation with limited grid size, the empirical results in the original paper demonstrate the great expressive power of KANs <cit.>, especially for small models and settings where regularity is expected. §.§ The  Layer GCN-based architectures have achieved great success in node classification tasks. While in our experiments we evaluate  on node classification datasets, the objective advantage of GCN over GIN on some of them does not facilitate a fair estimation of KANs' potential in this context. To this end, we also propose a variant of the GCN model. Specifically, we substitute the parameters and ReLU function of the standard GCN <cit.> model with a single KAN layer (defined in Equation (<ref>)) to obtain the  layer: 𝐡^(ℓ)_v = Φ^(ℓ)( ∑_u∈𝒩(v) ∪{v}𝐡_u^(ℓ-1)/√(((v)+1) ((u)+1))) where Φ^(ℓ) denotes a single KAN layer. In the familiar matrix formulation, where 𝐀̃ = 𝐀 + 𝐈 is the adjacency matrix with self-loops and 𝐃̃ the diagonal degree matrix of 𝐀̃, the node update rule of  can be written as: 𝐇^(ℓ) = Φ^(ℓ)( 𝐃̃^-1/2𝐀̃𝐃̃^-1/2𝐇^(ℓ-1)) where the different rows of 𝐇^(ℓ) store the representations of the different nodes of the graph.
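Under the same caveats as the earlier sketches, the layers of this section simply swap the update function of their vanilla counterparts for KAN layers. A minimal dense-adjacency sketch, reusing the SimpleKANLayer sketch from the background section and using illustrative class names (KAGINLayer and KAGCNLayer), is given below.

import torch
import torch.nn as nn

class KAGINLayer(nn.Module):
    """GIN-style aggregation followed by a two-layer KAN update (illustrative only)."""
    def __init__(self, in_dim, out_dim, hidden_dim=16):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.kan = nn.Sequential(SimpleKANLayer(in_dim, hidden_dim),
                                 SimpleKANLayer(hidden_dim, out_dim))

    def forward(self, x, adj):                     # adj: (n, n) adjacency without self-loops
        return self.kan((1 + self.eps) * x + adj @ x)

class KAGCNLayer(nn.Module):
    """Symmetrically normalized aggregation followed by a single KAN layer (illustrative only)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.kan = SimpleKANLayer(in_dim, out_dim)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))       # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return self.kan(norm @ x)                  # D^{-1/2} (A + I) D^{-1/2} X, then KAN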
§ EMPIRICAL EVALUATION In this section, we compare the  and  models with the GIN and GCN models in the following tasks: node classification, graph classification and graph regression. The code for reproducing the results is available at <https://github.com/RomanBresson/KAGNN>. All models are implemented with PyTorch <cit.>. For KAN layers, we rely on a publicly available implementation[https://github.com/Blealtan/efficient-kan]. §.§ Node classification Datasets. To evaluate the performance of GNNs with KAN layers in the context of node classification, we use 7 well-known datasets of varying sizes and types, including homophilic (Cora, Citeseer <cit.> and Ogbn-arxiv <cit.>) and heterophilic (Cornell, Texas, Wisconsin, Actor) networks. The homophilic networks are already split into training, validation and test sets, while the heterophilic datasets are accompanied by fixed 10-fold cross validation indices. Experimental setup. For every dataset and model, we tune the values of the hyperparameters. Specifically, we choose the values that perform best on the validation set (lowest validation error). To find these values, we use the Optuna package <cit.>. We set the number of Optuna trials to 100. For all models, the learning rate is chosen from [10^-3, 10^-2], the number of message passing layers from { 1,2,3,4}, the hidden dimension size from {8,9,…, 128} and the weight decay from { 0, 0.0005}. For  and , we also choose the grid size from {3,4,5} and the spline order from {1,2,3,4}. Once the best hyperparameter values are found, we evaluate the models on the test set. For the homophilic networks, we initialize and train 10 different models (we use 10 different random seeds). We then evaluate the 10 models on the test set and report the average accuracy. For the heterophilic datasets, we tune each model's hyperparameters within each fold and we report the average accuracy across the 10 folds. Results. The results are given in Table <ref>. We can see that  outperforms GIN on all but one dataset. On some datasets, the difference in performance between the two models is significant. For example, on Citeseer and Cora,  offers a respective absolute improvement of 20.89% and 16.17% in accuracy over GIN. On the other hand,  outperforms GCN on only 2 out of the 7 considered datasets. On some datasets (e.g., Ogbn-arxiv), GCN significantly outperforms . With regard to the different families of models, there is no clear winner. The  and GIN models are the best performing methods on 3 datasets, while the  and GCN models achieve the highest accuracy on the rest of the datasets. Overall, the results in the node classification experiments are mixed. The use of KANs in the GIN architecture brings performance improvements on 6/7 datasets, but 5/7 experiments overlap in the confidence intervals. Such an overlap also occurs in 4 experiments of the GCN-based models, but overall we can contend that KAN has a more positive impact on the GIN architecture than on the GCN architecture. With regard to the optimal hyperparameters, we found that  required a larger grid size on average compared to . In general, a grid size of 4 was chosen in 8/14 experiments, and the predominant spline order was 1, appearing in 8/14 experiments, while 3 appeared 2 times and 4 only once. The number of hidden layers and their sizes varied significantly across datasets, but overall  had a substantially larger average hidden dimension size (75.4, compared to 48 for ), as 128 was chosen for 3 datasets compared to only 1 for . This is in contrast to previous findings contending that GIN-based models require more complex learning procedures, a pattern that seems to persist with KANs. Training times. We present in Table <ref> the training time per epoch for different configurations of the  and GIN models. We observe that for a given number of message passing layers and hidden dimension size, the  model is computationally more expensive than GIN. If the grid size and spline order hyperparameters of  are set to 1, the difference in running time between the two models is very small (less than 0.02 seconds per epoch for all configurations). However, the complexity of  increases as the grid size and spline order increase.
Overall, our results suggest that the running time of  is slightly greater than that of GIN, and by no means prohibitive. §.§ Graph Classification Datasets. In this set of experiments, we compare the  model against GIN on standard graph classification benchmark datasets <cit.>. We experiment with the 7 following datasets: (1) MUTAG, (2) DD, (3) NCI1, (4) PROTEINS, (5) ENZYMES, (6) IMDB-B, (7) IMDB-M. The first 5 datasets come from bio- and chemo-informatics, while the last 2 are social interaction datasets. Experimental setup. We follow the experimental protocol proposed in <cit.>. Thus, we perform 10-fold cross-validation to obtain an estimate of the generalization performance of each method, while within each fold a model is selected based on a 90%/10% split of the training set. We use the splits provided in <cit.>. We use the Optuna package to select the model that achieves the lowest validation error. We set the number of iterations of Optuna equal to 100. For a fair comparison, we set the number of message passing layers of both models to a fixed value for each dataset. Based on preliminary experiments, on MUTAG, PROTEINS, IMDB-B and IMDB-M, we set the number of layers to 2. On DD, we set it to 3. On ENZYMES, we set it to 4 and finally, on NCI1, we set it to 5. To produce graph representations, we use the sum operator. The produced graph representations are fed to an MLP (for GIN) or a KAN (for ) layer which computes the output. We train each model for 1,000 epochs by minimizing the cross entropy loss. We use the Adam optimizer for model training <cit.>. We apply batch normalization <cit.> and dropout <cit.> to the output of each message passing layer. We also use early stopping with a patience of 20 epochs. For both models, we choose the number of hidden layers from { 2,3,4} and the dropout rate from [0.0,0.5]. For GIN, we choose the hidden dimension size from { 8,9,…,256} and the learning rate from [10^-5, 10^-2]. For , we choose the hidden dimension size from { 2,3,…,128}, the grid size from {1,2,…,16}, the spline order from { 1,2,…,8} and the learning rate from [10^-4, 10^-2]. For the IMDB-B and IMDB-M datasets, where there are no node features, we annotate nodes with one-hot encodings of their degrees, up to 35 (all degrees above 35 are set equal to 35). Once the best hyperparemeters are found for a given split, we train 3 different models on the training set of the split (in order to limit the impact of random initialization) and evaluate them on the test set of the split. This yields 3 test accuracies, and we compute their average. Results. Table <ref> illustrates the average classification accuracies and the corresponding standard deviations of the two models on the different datasets. We observe that the two models achieve similar levels of performance.  outperforms GIN on 5 out of the 7 datasets. However, the difference in performance between the two models is very small. This suggests that the two models are similar in terms of expressive power. On ENZYMES, however,  was found to perform much worse than GIN. Note that ENZYMES consists of more classes (6 classes in total) than the rest of the datasets, while ENZYMES and PROTEINS are the only datasets where the nodes of the graphs are annotated with continuous features (on the rest of the datasets, they are annotated with one-hot encodings). We hypothesize that the difference in performance is due to the inability of KANs to handle those continuous features. 
We thus normalized the node features within each fold by removing the mean of each feature (computed from the training samples) and then dividing by the corresponding standard deviation. We re-conducted the experiment and the average accuracy increased by approximately 6% (48.77% instead of 42.94%). Therefore, it turns out that in some settings it might be harder for KAN layers to handle continuous features than for MLPs. Training times. We give in Table <ref> an overview of the training times for different GIN and  architectures. We provide the number of parameters of each architecture. We notice that, for the same number of parameters, KAN is slower than its MLP counterpart. This is particularly sensitive to grid size and spline order. This makes sense since, using splines, each parameter involves more complex computations than the usual multiplication/summation of traditional MLP neurons. Moreover, some of this performance difference might come from how optimized the implementation is. It would be an interesting next step to study the relation of size to performance for KANs and MLPs, since, intuitively, the expressivity of splines should allow for smaller networks. §.§ Graph Regression Datasets. We experiment with two molecular datasets: (1) ZINC-12K <cit.>, and (2) QM9 <cit.>. ZINC-12K consists of 12,000 molecules. The task is to predict the constrained solubility of molecules, an important chemical property for designing generative GNNs for molecules. The dataset is already split into training, validation and test sets (10,000, 1,000 and 1,000 graphs in the training, validation and test sets, respectively). QM9 contains approximately 134,000 organic molecules. Each molecule consists of Hydrogen (H), Carbon (C), Oxygen (O), Nitrogen (N), and Flourine (F) atoms and contain up to 9 heavy (non Hydrogen) atoms. The task is to predict 12 target properties for each molecule. The dataset was divided into a training, a validation and a test set according to a 80%/10%/10% split. Experimental setup. We perform grid search to select values for the different hyperparameters. For both models, we choose the number of hidden layers from {2,3,4}, and the learning rate from {10^-3, 10^-4}. For GIN, we choose the hidden dimension size from {32,64,128,256,512,1024}, while for , we choose it from { 4,8,16,32,64,128,256}. For , we also select the grid size from {1,3,5,8,10} and the spline order from { 3,5}. To produce graph representations, we use the sum operator. The emerging graph representations are finally fed to an MLP (for GIN) or a KAN (for ) layer which produces the output. We train each model for 1,000 epochs by minimizing the mean absolute error (MAE). We use the Adam optimizer for model training <cit.>. We also use early stopping with a patience of 20 epochs. For ZINC-12K, we also use an embedding layer which maps node features into 100-dimensional vectors. We choose the configuration that achieves the lowest validation error. Once the best configuration is found, we run 10 experiments and report the average performance on the test set. For both datasets and models, we set the number of message passing layers to 4. On QM9, we performed a joint regression of the 12 targets. Results. The results are shown in Table <ref>. We observe that on both considered datasets,  significantly outperforms the GIN model. Note that these datasets are significantly larger (in terms of number of samples) compared to the graph classification datasets of Table <ref>. 
More specifically, KAGIN offers an absolute improvement of approximately 0.11 and 0.03 in MAE over GIN. Those improvements suggest that KANs might be more effective than MLPs in regression tasks. § CONCLUSION In this paper, we have investigated the potential of Kolmogorov-Arnold networks in graph learning tasks. Since the KAN architecture is a natural alternative to the MLP, we developed two GNN architectures,  and , respectively analogous to the GCN and GIN models. We then compared those architectures against each other in both node- and graph-level tasks. In the classification tasks, there does not appear to be a clear winner, with each architecture outperforming the other on some datasets. In the graph regression task, however, preliminary results seem to indicate that KAN has an advantage over MLP. This paper shows, through its preliminary results, that such KAN-based GNNs are valid alternatives to the traditional MLP-based models. We thus believe that these models deserve the attention of the graph machine-learning community. Finally, we discuss potential advantages that KANs might have over MLPs, and leave their investigation for future work. First, their ability to accurately fit smooth functions could prove highly relevant on datasets where variables interact with some regular patterns. Second, their interpretability could be leveraged to provide explanations on learned models, giving insights into the nature of interactions among entities. Finally, a thorough study of the effect of the different hyperparameters could be conducted, making it possible to fully exploit the richness of splines while retaining small networks. § ACKNOWLEDGEMENTS This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation and partner Swedish universities and industry.
http://arxiv.org/abs/2406.18958v2
20240627074059
AnyControl: Create Your Artwork with Versatile Control on Text-to-Image Generation
[ "Yanan Sun", "Yanchen Liu", "Yinhao Tang", "Wenjie Pei", "Kai Chen" ]
cs.CV
[ "cs.CV" ]
: Create Your Artwork with Versatile Control on Text-to-Image Generation
Yanan Sun1 Yanchen Liu1,2 Yinhao Tang1 Wenjie Pei2 Kai Chen1
Shanghai AI Laboratory; Harbin Institute of Technology, Shenzhen
now.syn@gmail.com wenjiecoder@outlook.com {liuyanchen, tangyinhao, chenkai}@pjlab.org.cn
July 1, 2024
Figure: Multi-control image synthesis of . Our model supports free combinations of multiple control signals and generates harmonious results that are well-aligned with each input. The input control signals fed into the model are shown in a combined image for better visualization.
§ ABSTRACT The field of text-to-image (T2I) generation has made significant progress in recent years, largely driven by advancements in diffusion models. Linguistic control enables effective content creation, but struggles with fine-grained control over image generation. This challenge has been explored, to a great extent, by incorporating additional user-supplied spatial conditions, such as depth maps and edge maps, into pre-trained T2I models through extra encoding. However, multi-control image synthesis still faces several challenges. Specifically, current approaches are limited in handling free combinations of diverse input control signals, overlook the complex relationships among multiple spatial conditions, and often fail to maintain semantic alignment with provided textual prompts. This can lead to suboptimal user experiences. To address these challenges, we propose , a multi-control image synthesis framework that supports arbitrary combinations of diverse control signals. develops a novel Multi-Control Encoder that extracts a unified multi-modal embedding to guide the generation process. This approach enables a holistic understanding of user inputs, and produces high-quality, faithful results under versatile control signals, as demonstrated by extensive quantitative and qualitative evaluations. Our project page is available at <https://any-control.github.io>. § INTRODUCTION In recent years, the field of text-to-image (T2I) generation has experienced significant advancements, leading to unprecedented improvements in generated image quality and diversity, primarily attributed to the introduction of diffusion models <cit.>. While linguistic control allows for effective and engaging content creation, it also presents challenges in achieving fine-grained control over image generation. This challenge is extensively explored in <cit.>, where an additional network is employed to encode and inject the user-supplied control signal into a pre-trained T2I model such as Stable Diffusion <cit.>, so as to exert influence over the image generation process. Built upon <cit.>, subsequent approaches <cit.> present unified architecture designs for managing multiple spatial conditions. However, the task of multi-control image synthesis remains challenging in the following aspects: (1) accommodating free combinations of input conditions, (2) modeling complex relationships among multiple spatial conditions, and (3) maintaining compatibility with textual prompts. We refer to these three challenges as input flexibility, spatial compatibility, and textual compatibility, respectively. Input flexibility. The first challenge arises from the arbitrary combinations of available control signals that users may supply.
The amount and modality of control signals provided by users vary, placing high demands on the input flexibility of the model. However, existing methods <cit.> typically employ fixed-length input channels, limiting their ability to accommodate diverse inputs. Other approaches <cit.> adopt a mixture-of-experts (MoE) design to handle a varying number of conditions, which can result in unforeseen artifacts when processing unseen combinations of inputs. Spatial compatibility. Secondly, control signals are not isolated; instead, they collectively influence the composition of a complete image. It is crucial to consider the relationships among these control signals, especially when managing occlusions among multiple spatial conditions. Unfortunately, current algorithms commonly combine multiple conditions through weighted summation with hand-crafted weights, easily leading to undesired blending results, or even causing low-response control signals to disappear when addressing occlusions. Textual compatibility. Ultimately, textual compatibility emerges as an important factor influencing user experience. Typically, the textual descriptions govern the content of generated images, whereas spatial conditions supply the structural information. Nevertheless, a lack of communication between the textual and spatial conditions often leads current algorithms to prioritize accommodating the spatial conditions, thereby disregarding the impact of textual prompts. In summary, generating comprehensive and harmonious results that satisfy both textual prompts and multiple spatial conditions presents a significant challenge for multi-control image synthesis. To tackle the challenges of input flexibility, spatial compatibility, and textual compatibility, we propose , a controllable image synthesis framework that supports arbitrary combinations of diverse control signals. At the core of  is the Multi-Control Encoder, which plays a crucial role in ensuring coherent, spatially and semantically aligned multi-modal embeddings. This novel component allows the framework to extract a unified representation from various control signals, enabling a truly versatile and high-performing multi-control image synthesis framework. Specifically, the Multi-Control Encoder is driven by multi-control fusion blocks and multi-control alignment blocks applied in turns, with a set of query tokens uniting the two seamlessly. The multi-control fusion block is employed to aggregate compatible information from multiple spatial conditions through the query tokens. A cross-attention transformer block is employed on the query tokens and the visual tokens of spatial conditions extracted from a pre-trained visual encoder. Therefore, the rich spatial controllable information is passed to the query tokens, which will be utilized in the multi-control alignment block. The multi-control alignment block is used to guarantee compatibility among all forms of control signals by aligning all other signals to the textual signal. A self-attention transformer block is employed on the query tokens and textual tokens. The query tokens contain spatial controllable information, while the textual tokens carry semantic information. Through information exchange between the query and textual tokens, both types of tokens are able to represent compatible multi-modal information. By alternating multi-control fusion and alignment blocks over several turns, the query tokens achieve a comprehensive understanding of the versatile user inputs, carrying highly aligned and compatible information.
This capability empowers our method to handle complex relationships among conditions and uphold strong compatibility with textual prompts. Consequently, this approach fosters smoother and more harmonious control over the generated images. Furthermore, transformer blocks with attention mechanisms inherently excel in accommodating a variety of control signals, and thus enable free combinations of user inputs. In summary, our contributions are as follows: *  proposes a novel Multi-Control Encoder comprising a sequence of alternating multi-control fusion and alignment blocks to achieve comprehensive understanding of complex multi-modal user inputs. *  supports flexible combinations of user inputs, regardless of the amount and modality of different control signals. *  produces more harmonious and natural high-quality outcomes, demonstrating state-of-the-art performance in multi-control image synthesis. § RELATED WORK §.§ Text-to-image Generation T2I diffusion models <cit.> have emerged as a promising approach for generating high-quality images from textual prompts. Diffusion models <cit.>, originally developed for image generation, have been adapted to the T2I domain, offering a novel perspective on the problem. These models leverage the concept of iterative denoising, where the generation process unfolds step-by-step, progressively refining the image quality. The diffusion process allows for better control over the generated images by conditioning on both the text input and intermediate image representations at each diffusion step. Recent advances in T2I diffusion models have explored various techniques to enhance the generation process, such as introducing attention mechanisms to better align textual and visual features <cit.> and operating in latent space <cit.> to achieve complexity reduction and detail preservation. While T2I diffusion models have shown promising results, there is still ongoing research to address challenges such as controllability (the focus of this paper) in the context of diffusion-based T2I generation. §.§ Controllable Image Synthesis Text descriptions guide the diffusion model to generate user-desired images but are insufficient for fine-grained control over the generated results. The fine-grained control signals are diverse in modality. For instance, layout constraints are introduced to arrange the locations of given objects; a number of works <cit.> thoroughly explore synthesizing images with high layout alignment given semantic-aware boxes. Besides, the segmentation map <cit.> is another popular control signal for controlling the layout and object shape of generated images. InstanceFusion <cit.> proposes a method to support location control in more free-form inputs such as points, scribbles, boxes and segmentation maps. Highly detailed control can be achieved by structural signals such as sketches <cit.>. Depth maps <cit.> can provide control over the depth of field of the generated images. While layout, structure and depth control all outline the generated image through spatial alignment, content control <cit.> enables the personalization of the generated appearance at the semantic level through an additional image input. Studies <cit.> propose general framework designs to process diverse spatial conditions instead of control-specific designs, while both spatial and content control are jointly taken into consideration in works <cit.>.
Specifically, considering the powerful generation ability of T2I models, ControlNet <cit.> proposes to utilize a trainable copy of the UNet encoder in the T2I diffusion model to encode extra condition signals into latent representations, and then applies zero convolutions to inject them into the backbone of the UNet of the diffusion model. This simple but effective design shows generalized and stable performance in spatial control, and is thus widely adopted in various downstream applications. However, ControlNet is a single-modality framework and requires a separate model for each modality. To address this, unified ControlNet-like models <cit.> are proposed to handle diverse control signals with only one multi-modality model. Another advantage of these methods is that they can support multi-control image synthesis. They adopt fixed-length input channels or an MoE design with hand-crafted weighted summation to aggregate conditions. Nevertheless, these methods fall short in handling conditions with complex relations and struggle to generate harmonious, natural results under various control signals. § METHOD In this section, we first give a preliminary overview of Stable Diffusion <cit.> and ControlNet <cit.>. Subsequently, we introduce , featuring a pioneering  crafted for extracting a unified representation with compatible information for multiple control signals. Finally, we expound on our training dataset and strategy. Figure <ref> depicts the architecture of AnyControl and the Multi-Control Encoder. §.§ Preliminary Stable Diffusion. T2I generation introduces text as a condition in diffusion models. In the forward pass, Gaussian noise is gradually added to the sample over a series of steps, while the backward process learns to recover the image by estimating and eliminating the noise under the text guidance. In this paper, we build on Stable Diffusion <cit.>, one of the most popular T2I diffusion models, to develop  for multi-control image synthesis. The Stable Diffusion model operates the diffusion and denoising process in latent space rather than pixel space to reduce computation cost. It adopts a UNet-like <cit.> structure as its backbone, comprising downsampling blocks, a middle block and upsampling blocks. The text guidance is encoded through the CLIP <cit.> text encoder and integrated into the UNet through a CrossAttention block after each ResBlock <cit.>. If we use Z to denote the noise features derived from the last ResBlock and Y to denote the embeddings encoded by the text encoder, the output noise features Z from the CrossAttention block can be obtained by Q = W_q(Z), K = W_k(Y), V = W_v(Y), Z = Softmax(QK^T/√(d))V, where W_q, W_k and W_v are projection layers and d is the dimension of the embedding space. ControlNet. ControlNet <cit.> is developed to adapt the Stable Diffusion model to spatial conditions. To be specific, it locks the parameters of Stable Diffusion and makes a trainable copy of the encoding layers in the UNet. The two parts are connected by zero convolution layers with zero-initialized weights to progressively increase the spatial control influence as training progresses. This design empowers ControlNet to achieve robust controllable image generation while preserving the quality and capabilities of the Stable Diffusion model.
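As a point of reference, the text cross-attention written out above can be sketched as a simplified single-head module; this is illustrative code with hypothetical names, not the actual Stable Diffusion, ControlNet, or  implementation.

import torch
import torch.nn as nn

class TextCrossAttention(nn.Module):
    """Single-head sketch of the cross-attention that injects text embeddings into UNet features."""
    def __init__(self, feat_dim, text_dim, attn_dim=64):
        super().__init__()
        self.w_q = nn.Linear(feat_dim, attn_dim, bias=False)   # W_q acts on noise features Z
        self.w_k = nn.Linear(text_dim, attn_dim, bias=False)   # W_k acts on text embeddings Y
        self.w_v = nn.Linear(text_dim, attn_dim, bias=False)   # W_v acts on text embeddings Y
        self.proj = nn.Linear(attn_dim, feat_dim)

    def forward(self, z, y):
        # z: (B, N, feat_dim) noise features, y: (B, L, text_dim) text embeddings
        q, k, v = self.w_q(z), self.w_k(y), self.w_v(y)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        return self.proj(attn @ v)                              # updated noise features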
§.§ . Similar to ControlNet, in our , we also lock the pre-trained Stable Diffusion model, and instead design a  for understanding complex control signals. We first obtain three types of tokens, i.e., textual tokens 𝒯, visual tokens 𝒱 and query tokens 𝒬. The textual tokens are extracted with the CLIP text encoder from the textual prompts, while the visual tokens are obtained from a pre-trained visual encoder (e.g., the CLIP image encoder) applied to all of the user-provided spatial conditions in image form. The query tokens are defined as a set of learnable parameters. To address the three challenges discussed in the introduction, namely input flexibility, spatial compatibility and textual compatibility, we develop the multi-control encoder via alternating multi-control fusion blocks and multi-control alignment blocks united by the query tokens. Multi-Control Fusion. The multi-control fusion block aims to extract compatible information from various spatial conditions. This is accomplished by utilizing a cross-attention transformer block to facilitate interactions between the query tokens and the visual tokens of all spatial conditions. Specifically, suppose that there are n spatial conditions in image form of various modalities, including depth, segmentation, etc. We can obtain the visual tokens 𝒱_i,j for condition C_i from the j-th block of the pre-trained visual encoder. Here, we use [𝒱_1,j, 𝒱_2,j, …, 𝒱_n,j] to represent the visual tokens for all the spatial conditions from the j-th block. Then the interactions in the multi-control fusion block can be formulated as 𝒬_j = CrossAttention(𝒬_j, [𝒱_1,j+P, 𝒱_2,j+P, …, 𝒱_n,j+P]), where P denotes a shared learnable positional embedding added to each 𝒱_i,j for better alignment between the query tokens and the visual tokens. After this process, the spatially controllable information encoded in the visual tokens is passed on to the query tokens. Multi-Control Alignment. Although the various controllable information is integrated into the query tokens, it is challenging to infer the priority of spatial control signals within overlapping regions due to the absence of a global condition that indicates the relationships among spatial conditions. Fortunately, textual prompts can serve as a global control that regulates the content of the generated image. Therefore, in the multi-control alignment block, we facilitate the interactions between the query tokens and the textual tokens with a self-attention transformer block. Before we encode the textual prompts into tokens, we append a textual task prompt at the tail of the user-provided text to address the modality discrepancy among diverse spatial conditions. Then we concatenate the query tokens 𝒬 and textual tokens 𝒯 together and perform the self-attention as [𝒬_j+1, 𝒯_j+1] = SelfAttention([𝒬_j, 𝒯_j]). With self-attention, the query tokens, which carry the mixed controllable information, exchange information with the textual tokens and thus can achieve semantic alignment with the user prompts. Alternating Fusion and Alignment. To ensure that the information from all control signals is aligned and compatible, we employ the multi-control fusion and alignment blocks alternately for multiple turns. Notably, we utilize multi-level visual tokens for fine-grained spatial control. Specifically, in each turn, the visual tokens consumed in the cross-attention transformer block are extracted from a different level of the pre-trained visual encoder, considering that the spatial conditions are diverse in controlling level, e.g., layout control such as segmentation maps and structural control such as edge maps. Therefore, multi-level visual tokens are necessary for the multi-control fusion blocks at different depths.
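The alternation of fusion and alignment can be summarized by the schematic sketch below. For brevity it strictly alternates one cross-attention (fusion) and one self-attention (alignment) block per turn, whereas, as noted next, the implementation inserts the cross-attention block after every two self-attention blocks; the module and dimension names here are assumptions made for the illustration.

import torch
import torch.nn as nn

class MultiControlEncoderSketch(nn.Module):
    """Query tokens alternately attend to visual tokens (fusion) and mix with text tokens (alignment)."""
    def __init__(self, dim, num_queries=256, num_turns=4, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))  # learnable query tokens
        self.pos = nn.Parameter(torch.zeros(1, 1, dim))             # shared positional embedding P
        self.fusion = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_turns)])
        self.align = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_turns)])

    def forward(self, visual_tokens_per_level, text_tokens):
        # visual_tokens_per_level[j]: (B, sum_i L_i, dim), tokens of all spatial conditions at level j
        # text_tokens: (B, L_txt, dim), user prompt with the appended task prompt
        B = text_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        for j in range(len(self.fusion)):
            vis = visual_tokens_per_level[j] + self.pos             # multi-level visual tokens
            q, _ = self.fusion[j](q, vis, vis)                      # multi-control fusion
            seq = torch.cat([q, text_tokens], dim=1)
            seq, _ = self.align[j](seq, seq, seq)                   # multi-control alignment
            q, text_tokens = seq[:, :q.size(1)], seq[:, q.size(1):]
        return q                                                    # unified multi-modal embedding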
After several turns, the query tokens retain well-aligned compositional information, serving as a unified multi-modal representation of the user inputs. This design empowers AnyControl in multi-control image synthesis even with occlusion, generating high-quality harmonious results with high spatial and textual compatibility. Our multi-control encoder shares a similar idea with Q-Former <cit.>; however, AnyControl incorporates many dedicated designs for multi-control image synthesis, such as the appended textual task prompt, the additional shared position embeddings across all the conditions and the usage of multi-level visual tokens. In implementation, to save computation cost, we insert the cross-attention block after every two self-attention blocks. Another natural advantage of our AnyControl lies in the input flexibility. AnyControl, utilizing transformer blocks with the attention mechanism, has a natural advantage in accommodating free combinations of user inputs. Previous methods adopt either the design of fixed-length input channels or an MoE structure, as illustrated in Figure <ref>. The former limits the freedom of user inputs, while the latter, the MoE design, supports combining flexible inputs with hand-crafted weighted summation, leading to laborious adjustments to the combination weights. §.§ Training Datasets. [Figure: Visualization of aligned and unaligned conditions. The first row shows the aligned case, where pixels at the same location of all the control signals describe the same object. Conditions in the second and third rows describe the foreground and background respectively, together contributing to a complete image and constructing the unaligned case.] We adopt the training dataset for multi-control image synthesis, MultiGen, presented in <cit.>. This dataset is built from LAION <cit.> with aesthetics scores above 6. Low-resolution images are removed and finally 2.8M images are kept. Different methods are utilized to extract the control signals. Unfortunately, there is a domain gap between the combinations of the spatial conditions at training and inference time, i.e., during training, all the spatial conditions extracted from the same image are fully aligned, while the multiple spatial conditions accepted from users are typically not. User-provided conditions usually come from multiple image sources, so the extracted spatial conditions are not always aligned and sometimes have occlusion in the overlapping region, which requires the model to handle the spatial conditions in the right arrangement according to the depth of the target scene. To relieve the discrepancy, we collect a subset of unaligned data as shown in Figure <ref>. To be specific, we utilize the images in the Open Images <cit.> and MSCOCO <cit.> datasets, which are rich in objects, to build the synthetic data. Given an image and the mask of a foreground object, we recover the background image within the masked region using an inpainting tool <cit.>. We discard images with too-small or too-large objects, and finally produce 0.44M images as supplementary unaligned training data. Training Strategy. When utilizing the unaligned data for training, we take the combination of spatial conditions for the foreground object and the inpainted background image together, while treating the original image as the target.
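A minimal sketch of how such an unaligned training sample could be assembled is given below; the inpainting call and the condition extractors stand in for the tools cited above and are placeholders, not actual APIs of any particular library.

```python
import random
import numpy as np

def build_unaligned_sample(image: np.ndarray, object_mask: np.ndarray,
                           caption: str, inpaint, extractors):
    """Sketch of the synthetic unaligned-data recipe described above.
    `inpaint` stands in for an object-removal inpainting tool and `extractors`
    for condition estimators (depth, edge, ...); both are assumptions."""
    # Recover the background scene behind the masked foreground object
    # (images with too-small or too-large objects are discarded upstream).
    background = inpaint(image, object_mask)
    # Pick (possibly different) condition types for foreground and background.
    fg_extract = random.choice(extractors)
    bg_extract = random.choice(extractors)
    fg_condition = fg_extract(image) * object_mask   # mask assumed broadcastable
    bg_condition = bg_extract(background)
    # The two partial conditions jointly describe the full scene, while the
    # original image is kept as the training target.
    return {"conditions": [fg_condition, bg_condition],
            "text": caption,
            "target": image}
```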
During training, for the data with fully aligned spatial conditions, we randomly pick two conditions for each training sample; for the synthetic unaligned data, we randomly pick one condition for the foreground object and one for the inpainted background image, respectively. We randomly drop all the conditions at a rate of 0.05 to enable classifier-free guidance, and also randomly drop the textual prompts at a rate of 0.05 to let the model learn from pure spatial conditions only. § EXPERIMENTS We validate the effectiveness of AnyControl with Stable Diffusion <cit.> of version 1.5 on four types of conditions, including Edge <cit.>, Depth <cit.>, Segmentation <cit.>, and Human Pose <cit.>. We compare our AnyControl with state-of-the-art methods, including Multi-ControlNet and Multi-Adapter, the versions of ControlNet <cit.> and T2I-Adapter <cit.> which support multiple spatial conditions, as well as Uni-ControlNet <cit.>, Cocktail <cit.>, UniControl <cit.>, DiffBlender <cit.> and CnC <cit.>, with extensive qualitative and quantitative results. Implementation details, including the network structure and the hyper-parameters of training and inference, can be found in the supplementary material. §.§ Qualitative Results In this section, we analyze input flexibility, spatial compatibility, and also the compatibility with text, style and color control. Input Flexibility. There are three ways to process free combinations of spatial conditions from users: 1) MoE design (e.g., UniControl <cit.>, Multi-ControlNet <cit.>); 2) Attention design (AnyControl); 3) Composition design, which merges spatial conditions of the same type into one image so that methods with fixed-length input channels can work smoothly. In MoE-based methods, the composition of different conditions is achieved by hand-crafted weighted summation. Instead, our AnyControl adopts the attention mechanism to learn the composition weights dynamically, achieving superior performance on multi-control image synthesis as shown in Figure <ref> and Figure <ref>. In addition, a “sticker” artifact is observed in the results of Cocktail <cit.> with the composition design. [Figure: Spatial compatibility. AnyControl is capable of inferring the relationships not only between conditions but also between the generated objects and the environment.] Spatial Compatibility. In Figure <ref>, we provide comparisons given various conditions with occlusion, demonstrating the superiority of AnyControl in handling complex multi-control synthesis. Blending issues are difficult to avoid in previous methods <cit.> with only a trivial design for multi-control combination. Figure <ref> further shows examples of AnyControl dealing with the relative spatial positions of two conditions, where it generates high-quality results with the correct spatial relation between conditions. For example, given different layout arrangements, the cat and teddy bear generated by AnyControl are always positioned in a natural way, rather than mixed within the occluded region. Another notable observation is that AnyControl shows a surprising ability to deal with the interaction between generated objects and the corresponding environment. When placed on the same horizontal line, the cat and teddy bear are seated on the same plane, while the teddy bear sits on a stage when its vertical position is raised. These advantages stem from the introduction of the Multi-Control Encoder, which strengthens the interactions between all control signals and consequently achieves a comprehensive understanding of complex user inputs.
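To make the contrast between hand-crafted and learned composition discussed above explicit, the following toy sketch juxtaposes the MoE-style weighted summation with an attention-based combination of condition features; it is purely illustrative and is not the implementation of any of the compared methods.

```python
import torch

def moe_style_combination(cond_feats, weights):
    """MoE-style aggregation: the combination weights are hand-crafted and
    typically need laborious manual tuning per input combination."""
    return sum(w * f for w, f in zip(weights, cond_feats))

def attention_style_combination(query, cond_tokens):
    """Attention-style aggregation: the composition weights are inferred
    dynamically, so free combinations of conditions need no re-weighting."""
    keys = torch.cat(cond_tokens, dim=1)                      # (B, sum_n, D)
    scores = query @ keys.transpose(-2, -1) / keys.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ keys               # (B, n_q, D)
```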
Text Compatibility. Textual prompts play an important role in controllable image synthesis, as text is typically the primary means of communication between humans and T2I models, while spatial conditions play auxiliary roles in providing fine-grained information. Therefore, while being guided by spatial conditions, maintaining the compatibility with textual prompts is essential. However, existing methods commonly prioritize the response to spatial conditions and miss important messages in textual prompts. For instance, in the third row of Figure <ref>, other methods either totally neglect the “Transformers Robot” information and thus fail to bind the concept with the input human pose, or only partially respond to “Robot” while dropping the “Transformers” information. On the contrary, our method responds to all the important information in the multi-modality user inputs and produces harmonious results. Compatibility with Style and Color Control. [Figure: AnyControl with style and color controls. The first two cases take style, depth and edge controls, while the last further takes color control.] As a plug-and-play model, AnyControl can be integrated with existing conditional generation methods in a convenient way. We take style and color controls as examples to demonstrate the effectiveness of AnyControl in collaboration with other plug-and-play modules for wider applications. Specifically, we enhance AnyControl with decoupled cross-attention <cit.> in the UNet to employ style and color controls. The compositional outcomes are visually depicted in Figure <ref>, revealing the generation of high-quality results that adhere to style, color and spatial constraints. §.§ Multi-Control Benchmark: COCO-UM Most existing methods evaluate multi-control image synthesis on the MSCOCO validation set with fully spatially aligned conditions extracted by different algorithms. However, we argue that evaluation on a dataset with well-aligned multi-control conditions cannot reflect the ability of methods to handle occluded multiple conditions in practical applications, given that user-provided conditions are typically collected from diverse sources which are not aligned. Therefore, we construct an Unaligned Multi-control benchmark based on the MSCOCO validation set, COCO-UM for short, for a more effective evaluation of multi-control image synthesis. The construction pipeline is similar to that used in the unaligned data synthesis described in Section <ref>. That is, we decompose an image into a background image and a foreground image outlined by an object mask, and then recover the background image through inpainting tools <cit.>. Additionally, after obtaining the recovered background image, we remove the bad cases with low image quality, such as cases where a new object is generated within the hole region rather than being filled by the background scene. Finally, we construct an occluded multi-control dataset with 1480 samples. §.§ Quantitative Evaluation For a thorough quantitative evaluation, we employ various evaluation metrics, including FID <cit.> for the generated image quality and CLIP-Score <cit.> for the alignment with the textual prompt. For the condition fidelity, we adopt RMSE for the depth map and edge map, and mPA and mAP for the segmentation map and pose map, respectively. Multi-Control Synthesis Evaluation. We evaluate multi-control synthesis on COCO-UM. As depicted in Table <ref>, our method outperforms other multi-control methods on FID and CLIP-Score by a large margin.
This remarkable achievement signifies that AnyControl is capable of processing complex combinations of multiple spatial conditions, and generates high-quality harmonious results well aligned with the textual prompts and spatial conditions. Single-Control Synthesis Evaluation. For a comprehensive evaluation, we also conduct comparisons on each single condition, as tabulated in Table <ref>. To be specific, we evaluate single-control synthesis on the full validation set of MSCOCO, that is, COCO-5K, since there is no occlusion in the single-control scenario. Overall, AnyControl outperforms existing single- and multi-control methods on most metrics, illustrating the superiority of our method over existing methods. Ablation Study on Unaligned Data. Unaligned data is provided to close the gap between the alignment of input conditions during training and inference. That is, in training, conditions are totally aligned, while, in testing, the control signals from users contributing to a whole image are almost not aligned at all. [Table: Ablation study of training with and without unaligned data on COCO-UM. Metrics: FID↓ / CLIP↑. Without unaligned data: 52.10 / 25.62; with unaligned data: 44.28 / 26.40.] In Table <ref>, we provide comparisons on FID and CLIP-Score of AnyControl trained with and without unaligned data. As illustrated, a large improvement on FID and CLIP-Score is observed through the data expansion on occluded cases. The introduction of unaligned data during training strengthens AnyControl in modeling complex multi-control synthesis, especially in the occluded cases. §.§ Discussion Although the number of input spatial conditions is not limited in AnyControl, we observe a miss-blending issue, as shown in Figure <ref>, when the number of spatial conditions is very large, such as 8 in this case. [Figure: Miss-blending issue under too many spatial conditions.] The possible reasons are as follows: 1) the limited ability of the CLIP text encoder in understanding complex textual prompts with numerous concepts; 2) too many visual tokens in the cross-attention transformer block result in a decrease in the accuracy of the softmax, and thus weaken AnyControl in precise multi-control understanding. We leave this issue as future work. § CONCLUSION In conclusion, we propose AnyControl, a multi-control image synthesis framework based on a public T2I model, to address the limitations of existing methods in accommodating diverse inputs, handling relationships among spatial conditions, and maintaining the compatibility with textual prompts. AnyControl supports free combinations of versatile control signals and develops a Multi-Control Encoder that enables a holistic understanding of multi-modal user inputs. We achieve this by employing alternating multi-control fusion and alignment blocks united by a set of query tokens. This approach enables AnyControl to model complex relationships among diverse control signals and to extract a multi-control embedding with compatible information. Our method produces high-quality natural outcomes, positioning it as a state-of-the-art solution for multi-condition image generation. The advancements introduced by AnyControl contribute to the broader goal of enhancing controllable image synthesis and pushing the boundaries of T2I generation. Appendix § IMPLEMENTATION DETAILS Network. The detailed structure of our AnyControl is depicted in Figure <ref>. We build our AnyControl on Stable Diffusion of version 1.5.
Similar to ControlNet <cit.>, we make a trainable copy of the UNet encoding blocks for adapting to the controlling information, while totally freezing the pre-trained weights of the Stable Diffusion model. In our Multi-Control Encoder, the number of query tokens is set to 256, enabling detailed controllable-information extraction. The additional position embedding, with the same length as the query tokens, is shared by all input spatial conditions. We take the pre-trained weights of Q-Former <cit.> as the initialization for the Multi-Control Encoder, except for the query tokens and the additional position embedding, which are randomly initialized. Hyper Parameters. We train AnyControl on 8 A100 GPU cards with a batch size of 8 on each GPU. We train the model for a total of 90K iterations with an initial learning rate of 1e-5. During inference, we set the classifier-free guidance scale to 7.5. In all the experiments, we adopt the DDIM <cit.> sampler with 50 timesteps for all the compared methods. § UNALIGNED DATA When producing the synthetic unaligned dataset, we utilize the ground-truth object masks with an area ratio in [0.1, 0.4] to outline the foreground object, since too-small or too-large objects lead to undesired recovered background images. PowerPaint <cit.> is a multi-task inpainting model supporting text-guided object inpainting, context-aware image inpainting as well as object removal. Here, we adopt the “object removal” mode for the unaligned data construction. More visualizations of the synthetic unaligned data are in Figure <ref>. § HAND-CRAFTED WEIGHT ADJUSTMENT As shown in Figure <ref>, multi-control methods with hand-crafted weights, e.g., Multi-ControlNet <cit.>, usually require a series of laborious weight adjustments according to the synthesized results, while ours automatically infers the combination weights and extracts a unified multi-control embedding, thus producing harmonious results. § MULTI-LEVEL VISUAL TOKENS Although the visual tokens from the last transformer block of the pre-trained visual encoder have already aggregated rich information, they are not sufficient to convey fine-grained controllable information. [Table: Multi-level visual tokens. We gradually enable the visual tokens from the deepest level to the shallowest level. Levels 1–6: FID↓ = 45.64, 43.73, 43.69, 43.67, 43.74, 44.28; CLIP↑ = 26.35, 26.40, 26.39, 26.39, 26.38, 26.40.] We conduct ablation experiments on the levels of visual tokens passed from the visual encoder to the multi-control encoder. Table <ref> shows that integrating visual tokens from more intermediate levels improves FID, with the performance saturating at the 4th level. § MORE QUALITATIVE RESULTS More qualitative results on multi-control synthesis are shown in Figure <ref>. Results of single-control synthesis, including depth map, edge map, segmentation map and human pose, are shown in Figure <ref> to Figure <ref>, respectively.
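For quick reference, the training and inference settings stated in this appendix can be collected into a configuration sketch; the dictionary keys are illustrative, while the values follow the text.

```python
# Training / inference settings stated above, collected as a config sketch
# (key names are illustrative; values follow the text).
anycontrol_config = {
    "base_model": "Stable Diffusion v1.5",
    "num_query_tokens": 256,
    "encoder_init": "Q-Former weights (query tokens / position embedding random)",
    "gpus": 8,                        # A100 cards
    "batch_size_per_gpu": 8,
    "train_iterations": 90_000,
    "initial_learning_rate": 1e-5,
    "cfg_scale": 7.5,                 # classifier-free guidance at inference
    "sampler": "DDIM",
    "sampling_steps": 50,
    "unaligned_mask_area_ratio": (0.1, 0.4),
    "inpainting_mode": "object removal",   # PowerPaint
}
```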
http://arxiv.org/abs/2406.19125v1
20240627121607
Entanglement Harvesting and Quantum Discord of Alpha Vacua in de Sitter Space
[ "Feng-Li Lin", "Sayid Mondal" ]
hep-th
[ "hep-th" ]
Department of Physics, National Taiwan Normal University, Taipei, 11677, Taiwan; ^α fengli.lin@gmail.com, ^β sayid.mondal@gmail.com The CPT-invariant vacuum states of a scalar field in de Sitter space, called α-vacua, are not unique. We explore the α-vacua from the quantum information perspective with a pair of Unruh-DeWitt (UDW) detectors coupled to a scalar field through either monopole or dipole coupling, placed at time-like zero separation or space-like antipodal separation. The analytical form of the reduced final state of the UDW detectors is derived. We study the entanglement harvesting and quantum discord of the reduced state, which characterize the quantum entanglement and quantum correlation of the underlying α-vacua, respectively. Our results imply that the quantum entanglement gravitated by de Sitter gravity behaves quite differently for time-like and space-like separations. It experiences “sudden death" for the former and grows for the latter as the measuring time or the value of α increases. This demonstrates the nonlocal nature of quantum entanglement. For the quantum discord, we find no “sudden death" behavior, and it experiences superhorizon suppression, which explains the superhorizon decoherence in the inflationary universe scenario. Overall, the time-like and space-like quantum entanglement and correlation differ in their dependence on α, the measuring time and the spectral gaps, with details discussed in this work. § INTRODUCTION De Sitter space is intriguing in many different ways. On the one hand, it is one of the simplest spacetimes because it is maximally symmetric. On the other hand, it is a highly dynamic spacetime adopted to explain inflationary universe scenarios. The existence of the eternal Hubble horizon initiates the primordial curvature fluctuations, which leave their signature in the cosmic microwave background (CMB) observed in today's universe <cit.>. Besides, in the static patch, the existence of a cosmic event horizon mimics the one associated with black holes, for example through the thermal properties related to Hawking-like radiation. These thermal properties are also encoded in the vacuum Wightman function of a quantum field in de Sitter space and manifest in the infinite number of imaginary double poles in its spectral representation. Furthermore, the full isometry of de Sitter space also allows a variety of vacuum states of a quantum field, the so-called α-vacua <cit.>, which have nontrivial UV and IR properties such as acausal correlations. Therefore, it is worth exploring further the physical properties of the vacua of de Sitter space. Among many properties, the quantum information perspective has been studied less for de Sitter space. Quantum fluctuations of a relativistic field carry both energy and quantum correlations, and it is natural to ask how the Hubble horizon affects long-range quantum correlations such as quantum entanglement or non-classical mutual information. The answers to these questions can help understand the quantum information perspective of the primordial fluctuations and their imprints in the CMB. Moreover, the zero-point energy fluctuations lead to the famous cosmological constant problem, which could result in a de Sitter space. This has puzzled theoretical physicists for decades, and the resolution still seems elusive.
As information and energy are twin partners of a physical entity such as quantum fields, the insight about the quantum information of de Sitter space may help to uncover the mysterious cosmological constant in the long run. This further motivates the study of quantum information problems in de Sitter space. Due to the almost infinite degrees of freedom, directly studying the quantum information of a relativistic quantum field is a formidable task. Despite that, many important results have been obtained in the past few decades. For example, the entanglement entropy of vacuum states between adjacent regions is shown to obey the area-law <cit.>. Also, the Reeh-Schlieder theorem <cit.> showed that the entanglement of a quantum field exists among regions separated at all scales. Introducing new tools such as holographic principle or its manifest via AdS/CFT duality <cit.> helps to explore the quantum information of nontrivial quantum field theories by their dual descriptions <cit.>. On the other hand, a more conservative way is to explore the quantum information of a relativistic quantum field by some elementary probe, by which we can watch or monitor the change in quantum information due to its interaction with the environmental quantum field. A typical probe for such a purpose is the atomic or qubit-like Unruh-DeWit (UDW) detector <cit.>. Via environmental scalar as a quantum channel, the distant UDW detectors can build up long-range entangled states violating Bell's inequality; this confirms the implication of the Reeh-Schlieder theorem <cit.>. Extending the above discussions from the Minkowski spacetime to the curved spacetime yields more interesting phenomena because gravity force, even as a classical entity, can cause nontrivial effects on the vacuum states. For example, the self-force may induce radiation damping for the UDW detector via the fluctuation-dissipation theorem <cit.> and induce decoherence <cit.>, reflecting either the thermal nature of the Rindler vacuum for an accelerating detector or of the cosmic horizon or the event horizon of a black hole. Inspired by these results and the aforementioned motivation to explore the quantum information perspective of de Sitter space, this paper will consider a pair of UDW detectors coupled to the scalar field in the de Sitter space. By evolving the total system to arrive at its final state in the far future, we will see how the strong gravity affects the quantum information content of the UDW detectors via the coupling to the vacuum channel of the environmental scalar. By tracing out the scalar field part of the total final state, we obtain the reduced density matrix of the UDW detectors in a mixed state. Examining this reduced mixed state, we can read out the effect of de Sitter gravity on its quantum information content and uncover the gravity effect on the quantum fluctuations of the scalar field. Two characteristics of quantum information considered in this paper for the reduced final are entanglement harvesting <cit.> and quantum discord <cit.>. The first explores the time scale and energy gap dependences of the quantum entanglement generated by strong gravity and then harvested by the UDW detectors. The second examines the amount of non-classical quantum correlation generated by the gravity of de Sitter space. These two quantum information characteristics are related but not equivalent. 
For example, we will see the “sudden death" behavior for the entanglement harvesting at the superhorizon scale or large energy gap, but not for the quantum correlation. These quantum informational quantities, in general, are nonlocal and will depend on the separation scale between two UDW detectors. However, the general separation will be quite difficult for the analytical calculations. To avoid technical involvement, we will consider two extreme cases: the zero and antipodal separations, representing the time-like and space-like separations, respectively. For the same reason, we will only consider a conformally coupled scalar, of which the corresponding Green functions of vacuum states take the simpler form. With this technical simplification, we can obtain the analytical form of the reduced density matrix of the final state for the UDW detectors in the saddle point approximation, which is formally valid for large measuring time scales. This will avoid the numerical inaccuracy arising from the peculiar iϵ prescription when evaluating the quantum informational quantities of the reduced states. Based on these results, we will study the dependence of quantum information on the measuring time scale and energy gaps of the UDW detectors and examine how they behave at the superhorizon scale. Besides, we consider both monopole and dipole couplings of UDW detectors to the environmental scalar. In some cases, we will see the essential differences between two different ways of coupling. Our results serve as the prototype for the quantum correlation between two different regions separated by either time-like or space-like distances in de Sitter space and can shed new insight into the quantum information perspective of the primordial fluctuations or quantum gravity. The rest of the paper is organized as follows. In the next section, we will briefly review the basics of the UDW detector, concurrence, and discord, scalar vacuum states in de Sitter space, and finally derive and classify some issues of the spectral representations of Wightman Green functional of de Sitter α-vacua. In section <ref>, we obtain the matrix elements of the reduced density of a pair of UDW detectors in terms of the spectral density of Wightman functions in the saddle point approximation. Based on the analytical results of section <ref>, in section <ref> and <ref>, we present the numerical plots respectively for entanglement harvesting and quantum discord to demonstrate their dependence on measuring time scale, energy gaps and the variety of vacua. We finally conclude our paper in section <ref>. § BRIEF REVIEW ON UDW DETECTORS, THE QUANTUM CHARACTERISTICS OF X-STATES AND Α-VACUA OF DE SITTER SPACE In this section, we sketch the basics of UDW detectors, quantum characteristics of the resultant final X-states of their reduced dynamics, and α-vacua of a scalar field in de Sitter space. At the same time, we set up the notations for our constructions and calculations. Despite this being a review section, the quantum discord for the resultant X-states of the UDW detectors is obtained here for the first time. Besides, in the last part of this section, we also give the analytical forms for the spectral representations of Wightman functions probed by two static UDW detectors with zero or antipodal separation. §.§ Probing environmental vacuum state by UDW detectors The UDW detector <cit.> is a particle detector that serves as a local probe of the quantum field. 
For simplicity, we will consider a two-level system (qubit) for such a particle detector, with its worldline trajectory denoted as x(τ) and parameterized by its own proper time τ. This qubit-type UDW detector is characterized by the energy gap Ω separating the ground state |0⟩ from the excited state |1⟩, and by its monopole interaction Hamiltonian with the environmental scalar field ϕ(x), which in the interaction picture is expressed as H^(0)(τ)=g χ(τ)(e^i Ωτ |1⟩⟨ 0| +e^-i Ωτ |0⟩⟨ 1|) ⊗ϕ(x(τ)) . Here, the superscript (0) in H^(0) refers to monopole coupling, and we also introduce a coupling constant g to tune the interaction strength and the window function χ(τ) to characterize the duration of the interaction of the UDW detector with ϕ(x). In this work, we will use the typical Gaussian-type window function, χ(τ)=1/(2 π)^1 / 4e^-τ^2/4 T^2 , where the normalization is chosen so that ∫_-∞^∞ dτ [χ(τ)]^2 = T, and the parameter T can be considered as the interaction time scale, or its inverse as the resolution energy scale, for measuring the field fluctuation by the UDW detector. In this work, we would like to study the quantum information perspective of the quantum field fluctuations in its vacuum states probed by a pair of qubit-type UDW detectors. Compared to the scheme of using just a single UDW detector, we can explore the quantum entanglement or quantum correlation between the detectors, which we label as A and B. The associated quantities for each detector will be indicated by the subscripts A and B. To characterize the strong gravitational effect on the relativistic quantum information, we will prepare an initial state that contains minimal quantum information content and then examine the influence of gravity through the quantum evolution generated by the interaction Hamiltonian Hamil_int_mono. The simplest initial state for such a purpose is the direct product of the vacuum states of all the involved partners, i.e., |Ψ⟩_i:=|0⟩_A ⊗ |0⟩_B ⊗ |Λ⟩ , where |0⟩_A,B and |Λ⟩ are the corresponding vacuum states of the UDW detectors and the scalar field. We then evolve the initial state |Ψ⟩_i, prepared at the coordinate time t=-∞, to the final total state |Ψ⟩_f at t=∞, which in the interaction picture can be expressed as |Ψ⟩_f = 𝒯exp[-i ∫_-∞^∞ dt [dτ_A/dt H^(0)_A(τ_A) + dτ_B/dt H^(0)_B(τ_B) ] ] |Ψ⟩_i , where 𝒯 is the time ordering operator, and dτ_A, B/dt are the corresponding boost factors from the comoving frames to the laboratory frame. The final total state |Ψ⟩_f encodes all the gravitational effects on the quantum fluctuations and quantum information of the coupled system. However, due to the complication of the quantum field, with its infinite number of degrees of freedom, it is not easy to examine it directly. As the quantum evolution entangles the UDW detectors and the field, we can then glimpse the imprint of gravity on the quantum field fluctuations by using the UDW detectors as the probe. This implies we can examine the reduced final state of the UDW detectors by taking the partial trace of |Ψ⟩_f⟨Ψ| over ϕ. Up to O(g^2) this results in the reduced density matrix ρ_AB, which takes the famous form of X-states <cit.>[Bell states and Werner states <cit.> are special examples of the X-states.], and is dictated by the Wightman function of ϕ, which we denote by W_Λ(x, x^'):=⟨Λ|ϕ(x) ϕ(x^')| Λ⟩.
In the basis of |i⟩_A ⊗ |j⟩_B={|00⟩,|01⟩,|10⟩,|11⟩}, it is explicitly given by <cit.> ρ_A B=([ 1-P_A-P_B 0 0 X; 0 P_B C 0; 0 C^* P_A 0; X^* 0 0 0 ])+𝒪(g^4), where P_D = g^2 ∫_-∞^∞ dτ_D ∫_-∞^∞ d τ_D' e^-i Ω_D (τ_D-τ_D') W_χΛ(x_D(τ_D),x_D(τ_D')) D ∈{A,B}, C= g^2 ∫_-∞^∞ dτ_A ∫_-∞^∞ dτ_B e^- i ( Ω_A τ_A - Ω_B τ_B ) W_χΛ(x_A(τ_A),x_B(τ_B) ) , X =-g^2 ∫_-∞^∞ dτ_A ∫_-∞^∞ dτ_B e^-i( Ω_A τ_A + Ω_B τ_B )[ θ[t_B(τ_B)-t_A(τ_A)] W_χΛ(x_A(τ_A),x_B(τ_B) ) + θ[t_A(τ_A)-t_B(τ_B)] W_χΛ(x_B(τ_B),x_A(τ_A) ) ] , with θ[t] denoting the Heaviside step function of the coordinate time t, and the windowed Wightman function is defined by W_χΛ(x_F(τ_F),x_G(τ_G)):=χ(τ_F) χ(τ_G) ⟨Λ| ϕ(x_F(t_F)) ϕ(x_G(t_G))|Λ⟩ , F,G ∈{A,B} . Here, P_D=A, B, by definition, is the transition probability for a single UDW detector, which yields the transition rate ∼ P_D/T for characterizing the Unruh effect for a constantly accelerating UDW detector <cit.>. This can be verified by the fact that ρ_A=_B ρ_AB= diag(1-P_A, P_A) + 𝒪(g^2). The eigenvalues of ρ_AB are given by <cit.> (up to 𝒪(g^2)), λ_0,1,±=0, 1- P_A- P_B, 1/2( P_A+ P_B ±√((P_A-P_B)^2+ 4 |C|^2)) . Note that the positive condition for λ_- requires P_A P_B/|C|^2≥ 1 . This condition is usually satisfied for generic vacuum states of ϕ. Furthermore, if the two UDW detectors are identical, i.e., Ω_A=Ω_B (and also χ_A=χ_B), then P_A=P_B:=P. In this case, the eigenvalues of ρ_AB are 0, 1-2 P, P+|C|, P-|C|. Moreover, if two UDW detectors are also placed at the same spatial location, then it is straightforward to see C=P, so that ρ_AB becomes a rank-2 matrix. It is also straightforward to extend the above consideration into the dipole interactions with the interaction Hamiltonian taking this form <cit.> H^(2)(τ)=g χ^μ(τ)(e^i Ωτ |1⟩⟨ 0| +e^-i Ωτ |0⟩⟨ 1|) ⊗Φ_μ(x(τ)) , where the superscript (2) in H^(2) refers to dipole coupling, and Φ_μ can be either ∂_μϕ for a scalar-dipole interaction, or F_0 μ for an electric-dipole interaction with F_μν the Maxwell's field strength. We introduce the local tetrad e^I_μ and its inverse along the worldline to define the local vector field Φ_I:=Φ_μ e^μ_I and the local vector window function χ^I:=χ^μ e^I_μ which will be again chosen to be Gaussian-type. Then, the reduced density matrix for the pair of UDW detectors up to O(g^2) will take the same X-state form as in dtec_den_mat. However, the windowed Wightman function defined in win_G and used in PJ-defX will be replaced by W_χΛ(x_F(τ_F), x_G(τ_G)):=χ^I(τ_F) χ^J(τ_G) ⟨Λ| Φ_I(x_F(t_F)) Φ_J(x_G(t_G)) |Λ⟩ , F,G ∈{A,B} . We will provide the explicit forms of the above windowed Green functions later. §.§ Characterizing quantumness: concurrence and discord Given a ρ_AB, we can characterize the quantum information by some entanglement measure to characterize the entanglement harvested by the UDW detectors from the vacuum fluctuations of the scalar field. A common quantity for evaluating the entanglement of a mixed state is concurrence <cit.>, which we will adopt. The concurrence for the X-state of dtec_den_mat up to O(g^2) can be found to be <cit.>, 𝒞(ρ_A B)=2 max[0,|X|-√(P_A P_B)] . The concurrence is an entanglement monotone, so its value quantifies the amount of quantum entanglement. Since we start with a pair of UDW detectors in their unentangled product of ground states, any entanglement quantified by the nonzero concurrence for the reduced final state can be seen as harvesting from the entanglement of the environmental scalar gravitated by the de Sitter space during the time evolution. 
Thus, this quantity can be coined as entanglement harvesting <cit.>. By studying the scale dependence of the entanglement harvesting, we can uncover the gravity effect of de Sitter space on the generation of entanglement in the scalar vacua at different length/time scales. Another quantity adopted in this paper for characterizing the quantum coherence of a X-state is the quantum discord <cit.>, which measures the difference between two natural extensions of classical mutual information between A and B. Thus, the quantum discord is used to characterize the quantumness of correlations between subsystems, which does not necessarily involve quantum entanglement. In classical information theory, the mutual information I(A,B):=S(A)+S(B)-S(AB)=S(A)-S(A||B) where S(A), S(B) and S(AB) are the Shannon entropy of the subsystem A and B, and total system AB, respectively; and S(A||B)=S(AB)-S(B) is the relative entropy. A natural extension to quantum mutual information is to replace the Shannon entropy with the von Neumann entropy, e.g., S(A)=-_A ρ_A lnρ_A. Alternatively, one can define the quantum mutual information through its operational meaning, i.e., obtaining the information about A by observing B. Introduce the projective measurement basis {B_k} for performing measurements on subsystem B, one can define alternative quantum mutual information other than I(A,B) by J(A,B)=S(A)- min_{ B_k }∑_k p_k S(A|| B_k) , where p_k:=_B(B_k ρ_AB). The measurement destroys quantum correlation so that J(A,B) quantifies the classical correlation. Thus, the quantum discord quantifies the pure quantum correlation by defining it as D(A,B):= I(A,B)- J(A,B) = min_{ B_k }∑_k p_k S(A|| B_k) - S(A||B) ≥ 0 . The subadditivity of entanglement entropy ensures the last inequality. The quantum discord vanishes when ρ_AB is in the pointer states, i.e., environment-induced superselection <cit.> such that ρ_AB=∑_k B_k ρ_AB B_k. Evaluation of quantum discord for generic X-state was studied extensively <cit.>. Following the procedure outlined in <cit.> to evaluate the quantum discord for ρ_AB of dtec_den_mat, we find that the conditioned entropy ∑_k p_k S(A|| B_k) is independent of the choices of measurement basis { B_k } up to O(g^2) so that there is no need for minimization to obtain quantum discord[The measurement-basis independence of conditioned entropy ∑_k p_k S(A|| B_k) is a special feature for ρ_AB we consider up to O(g^2), in which all the measurement-basis dependence happens to vanish. Otherwise, for a generic X-state, ∑_k p_k S(A|| B_k) will generally depend on the choice of { B_k } as discussed in <cit.>. ]. The resultant quantum discord is D(A,B) = g^2 2 ln 2[ (P_A+ P_B) ln (P_A P_B - C^2)- 2 P_A ln P_A - 2 P_B ln P_B + √((P_A-P_B)^2 + 4 C^2)lnP_A + P_B + √((P_A-P_B)^2 + 4 C^2) P_A + P_B - √((P_A-P_B)^2 + 4 C^2)] . For the identical UDW detectors with P_A=P_B:=P, it can be reduced to D(A,B)= g^2 ln 2[ (P+|C|) ln (P+|C|) + (P-|C|)ln (P-|C|) -2 P ln P ] , which can be further reduced to D(A,B)=2 g^2 P if these two identical UDW detectors are placed at the same spatial position so that C=P. Thus, considering up to O(g^2) of two identical UDW detectors in the same position, their quantum discord is proportional to the transition probability. §.§ Euclidean vacuum and α-vacua in de Sitter space As shown in the previous subsection, the final reduced density matrix of the UDW detectors is dictated by the Wightman function of the scalar field. 
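Before specifying these Wightman functions, the quantities of the previous subsection can be collected into a short numerical sketch; it is only a sketch, assuming that the inputs P_A, P_B, C and X are already the O(g²) matrix elements defined above, and keeping the overall g²/(2 ln 2) prefactor of the discord explicit as written in the text.

```python
import numpy as np

def x_state(PA, PB, C, X):
    """X-state reduced density matrix rho_AB of the detector pair, up to O(g^2)."""
    return np.array([[1 - PA - PB, 0,          0,  X],
                     [0,           PB,         C,  0],
                     [0,           np.conj(C), PA, 0],
                     [np.conj(X),  0,          0,  0]], dtype=complex)

def concurrence(PA, PB, X):
    """Concurrence of the X-state: 2 max[0, |X| - sqrt(P_A P_B)]."""
    return 2.0 * max(0.0, abs(X) - np.sqrt(PA * PB))

def quantum_discord(PA, PB, C, g=1.0):
    """Quantum discord of the X-state as quoted above; it requires the
    positivity condition P_A P_B > |C|^2 discussed in the previous subsection."""
    c = abs(C)
    r = np.sqrt((PA - PB) ** 2 + 4 * c ** 2)
    bracket = ((PA + PB) * np.log(PA * PB - c ** 2)
               - 2 * PA * np.log(PA) - 2 * PB * np.log(PB)
               + r * np.log((PA + PB + r) / (PA + PB - r)))
    return g ** 2 * bracket / (2 * np.log(2))
```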
As we aim to study the vacuum states of the scalar field in de Sitter space probed by the UDW detectors, we need to understand the basics of the corresponding Wightman function. In this subsection, we briefly review the necessary materials for our consideration. We will consider the vacuum states invariant under the full isometry group O(1,4) of de Sitter space, which includes the disconnected components related, for example, by the antipodal mapping. Such vacuum states are called α-vacua <cit.>, including the Euclidean vacuum state, also known as the Bunch-Davies vacuum. For this purpose, we consider the 4-dimensional de Sitter space, denoted as dS_4, in the global coordinates, which can cover the patches connected by the disconnected components of O(1,4). The group O(1,4) is also the full Lorentz group of 5-dimensional Minkowski space, in which we can consider dS_4 as the embedded hyperbola, with the embedding constraint -X_0^2+X_1^2+X_2^2+X_3^2+X_4^2=L^2, where L is the radius of the cosmic horizon of dS_4. One can solve this embedding relation for X^M(x), with x^μ the chosen 4-dimensional coordinates for dS_4. Then the O(1,4)-invariant length interval between two points x and x' is given by P(x,x')= 1/L^2 η_MN X^M(x) X'^N(x') . A dS_4 vacuum state |Λ⟩ and the Wightman function can be defined by the mode decomposition of the scalar field operator ϕ(x) as follows: ϕ(x)=∑_n [ a_n ϕ_n(x) + a_n^†ϕ^*_n(x)], where a_n (a^†_n) is the annihilation (creation) operator of the eigenmode ϕ_n of the Klein-Gordon equation in dS_4 with appropriate boundary conditions, so that a_n |Λ⟩ =0 , ∀n . The corresponding Wightman function is then given by W_Λ(x,x')= ⟨Λ|ϕ(x)ϕ(x') |Λ⟩ = ∑_n ϕ_n(x) ϕ^*_n(x') . It should be a function of P(x,x') with an appropriate iϵ prescription, with ϵ=0^+, to take care of the pole due to light-like separation, where P(x,x')=0. By definition, a vacuum state should obey the isometry of the underlying spacetime. In dS_4, this is either SO(1,4) for the static patch or O(1,4) for the global patch. For the latter, the associated Hadamard Green function, i.e., the symmetrized Wightman function, explicitly H(x,x')=∑_n (ϕ_n(x) ϕ^*_n(x') + ϕ^*_n(x) ϕ_n(x')), should be invariant under the CPT (charge-parity-time reversal) map, which sends a point x into its CPT conjugate point x̅<cit.>, i.e., (t,x⃗)→ (-t,-x⃗) under the antipodal map. This implies that the vacuum state is CPT invariant and the eigenmodes {ϕ_n } should satisfy ϕ_n(x̅)=ϕ_n^*(x) , ∀n . In this work, we will consider the O(1,4)-invariant vacuum states, which include the Euclidean vacuum and the α-vacua <cit.>. For such a purpose, we need to consider dS_4 in the global coordinates. The global coordinate system of dS_4 can be defined through the following embedding relations, which solve hyperbola, X^0 = L sinh(t/L) , X^1=L cosh(t/L) cosχ , X^a = L cosh(t/L) sinχ (cosθ, sinθcosϕ, sinθsinϕ), a=2,3,4 . From the above, we can turn the 5-dimensional Minkowski metric into the following dS_4 metric in the global coordinates, g_μν dx^μ dx^ν=-d t^2+L^2 cosh ^2(t/L) (d χ^2+sin ^2χ(d θ^2+sin ^2θ d ϕ^2)), where t∈(-∞, ∞), (χ, θ) ∈[0, π] and ϕ∈[0,2 π]. In these coordinates, the P(x,x') of W_E1 becomes P(x,x')= -sinh(t/L) sinh(t^'/L) + cosh(t/L) cosh(t^'/L) cosΔχ, where, without loss of generality, we have chosen x=(t,Δχ,θ,ϕ) and x'=(t',0,θ,ϕ).
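As a quick numerical cross-check of this expression (a sketch only: L is set to 1, the iϵ prescription is suppressed, and the function names are illustrative), the invariant P(x,x') computed directly from the embedding coordinates X^M reproduces the global-coordinate form above:

```python
import numpy as np

def embedding(t, chi, theta=0.0, phi=0.0, L=1.0):
    """Embedding X^M(x) of dS_4 into 5d Minkowski space, global coordinates."""
    X0 = L * np.sinh(t / L)
    X1 = L * np.cosh(t / L) * np.cos(chi)
    r = L * np.cosh(t / L) * np.sin(chi)
    return np.array([X0, X1,
                     r * np.cos(theta),
                     r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi)])

def invariant_P(x, xp, L=1.0):
    """P(x,x') = eta_MN X^M(x) X'^N(x') / L^2 with eta = diag(-1,+1,+1,+1,+1)."""
    eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])
    return embedding(*x, L=L) @ eta @ embedding(*xp, L=L) / L**2

# Check against the closed form for x=(t, dchi, theta, phi), x'=(t', 0, theta, phi):
t, tp, dchi = 0.7, -0.3, 0.4
closed = -np.sinh(t) * np.sinh(tp) + np.cosh(t) * np.cosh(tp) * np.cos(dchi)
assert np.isclose(invariant_P((t, dchi), (tp, 0.0)), closed)
```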
A particular vacuum state, called Euclidean vacuum or Bunch-Davies vacuum <cit.> and denoted by |E⟩, of which the Wightman function is given by <cit.> W_E(x, x^')=Γ(3/2-ν) Γ(3/2+ν)/16 π^2 L^2_2 F_1(3/2-ν, 3/2+ν, 2 ; 1+P(x,x')/2) , ν=√(9/4- m^2 L^2 -12 ξ) with m the scalar's mass and ξ the coupling constant of ϕ^2 R. For simplicity, in this work, we will only consider the case with ν=1/2, which can be identified as a conformally coupled massless scalar. In this case, Euclid_Wight_fn_d4 can be simplified to W_E(x, x^')=1/8 π^2 L^21/(1- P(x,x^')) . This Wightman function can also be obtained by using the embedding relation embed_1 to replace X^M in the Wightman function of the 5-dimensional Minkowski space for a massless scalar, that is, W_E(x,x') := ⟨ E|ϕ(x)ϕ(x') |E⟩ =-1 4π^21/(X^0(x)-X^0(x') - i ϵ )^2 - |X⃗(x)-X⃗(x')|^2 . The associated Hadamard function will inherit the O(1,4) invariance of its parent 5-dimensional Hadamard function; thus, the Euclidean vacuum is O(1,4) invariant. The Euclidean vacuum is not the only O(1,4)-invariant state. To see this, we first perform a global Bogoliubov transformation for all eigenmodes, i.e., ϕ_n(x)=coshα ϕ_n(x)+e^iβsinhα ϕ^⋆_n(x), where the parameters α∈ [0,∞) and β∈ (-π,π) are real. Use this set of modes to expand the scalar field operator ϕ(x)=∑_n ( ã_n ϕ̃_n(x) + h.c.) to define the so-called α-vacua |α, β⟩ with ã_n|α,β⟩ =0<cit.>. The corresponding Wightman function W_α,β=⟨α,β| ϕ(x) ϕ(x') |α, β⟩ =∑_n ϕ̃_n(x) ϕ̃^*(x')_n is given by W_α,β(x, x^')=cosh ^2 α W_E(x, x^')+sinh ^2 α W_E(x̅, x̅^') +1/2sinh 2 α(e^-iβW_E(x, x̅^')+e^iβW_E(x̅, x^')) . In the above, we have used the relation O14_c. It is straightforward to see that the associated Hadamard Green function is invariant under the CPT map only if β=0<cit.>. Otherwise, the vacuum state is invariant under SO(1,4) but not under CPT map. We will refer to these O(1,4)-invariant vacuum states as the α-vacua, denoted as |α⟩, and the associated Wightman function by W_α(x,x'). The vacuum states |α,β⟩ have been revived in <cit.> to discuss the dS/CFT correspondence. In <cit.>, these states are parameterized by a single complex number; we denote it as α̃ with Re α̃<0 to distinguish from the α in |α,β⟩. Thus, we have |α̃⟩=|α,β⟩, which was shown in <cit.> to be a squeezed state of the Euclidean vacuum, i.e., |α̃⟩ = exp[ ∑_n ( c (a_n^†)^2 - c^* (a_n)^2 ) ] |E⟩, c:=1 4(lntanh (- Reα̃ 2) ) e^-i Imα̃ and the relation between α̃ and α, β are given by α=tanh^-1(exp(Re(α̃))), and β=Im(α̃) by noting that alpha_mode can also be expressed as <cit.> ϕ̃_n ≡ N_α^'(ϕ_n(x)+e^α^'ϕ^⋆_n(x)), N_α^'≡1/1-e^2 Re(α^') . §.§ Spectral representations of Wightman function and its antipodal counterparts As the reduced final state ρ_AB of UDW detectors depends on the Wightman function W_E(x,x') and its antipodal counterparts, we need to evaluate it either analytically or numerically. However, the required iϵ prescription usually makes the numerical error hard to control. Thus, we will evaluate it analytically by obtaining its spectral representation. However, as W_E(x,x') depends on both t and t', we may not be able to have spectral representation if W_E(x,x') cannot be reduced to a function of a single variable of the linear combination of t and t'. For the Wightman function Euclid_wightman2 with P(x,x') of Pxx_g considered in this paper, we see this is the case if Δχ 0, π. 
Thus, in this work, we will only consider the two static UDW detectors separated by Δχ=0,π, and denote the corresponding W_E(x,x') as W_E^-(x,x') and W_E^+(x,x'), respectively. These two cases are also the extremal cases for time-like and space-like separations. For simplicity, we will choose the static detector's worldline time to be the same as the coordinate time. Using P(x,x') of Pxx_g for Δχ=0,π and the defining equation Euclid_wightman2 for the Wightman function, we have W_E^-(x,x') = a_0/1 - cosh(s_–iϵ) and W_E^+(x,x') = a_0/1 + cosh(s_+) with s_∓:=t∓ t'/L , a_0:=1 8 π^2 L^2 . The i ϵ prescription is inherited from W_E1. However, there is no causality issue for the antipodal (or any space-like) separation, so there is no need to provide iϵ prescription for W_E^+(x,x'). We can obtain the spectral representation of W^∓_E(x,x') by Fourier transform over s_∓. Restrict the spectral density to be bounded below, i.e., ω≥ 0, and turn this Fourier integral into a contour integral on the complex s_∓ plane. Using the residue theorem for the double poles [W^∓_E(x,x') has pure imaginary double poles at s_-=i 2 n π with residue 2 i ω e^2 n πω, and at s_+= i (2n+1) π with residue 2 i ω e^2 (n+1) πω for n∈ℤ, respectively.] in the lower half s_∓ plane without including n=0 poles for W_E^-(x,x'), we obtain the spectral representation, W_E^-(x,x') = ∫_0^∞ dω ρ_0,0(ω)  e^i ω L (t-t') , W_E^+(x,x') = ∫_0^∞ dω ρ_0,1(ω)   e^i ω L (t + t') , with the spectral densities ρ_ℓ,k(ω) := 2 a_0 (T L)^ℓ ω^ℓ+1 e^k πω e^2πω-1 , for ℓ=0,2 , where ℓ=0 for monopole coupling and ℓ=2 for dipole coupling, which will be discussed later. Note that the Boltzmann factor 1 e^2πω-1 reflects the thermal nature of the de Sitter vacua with the temperature related to the Hubble scale, i.e., here ω is dimensionless with Hubble as the basic unit. To evaluate the Wightman function for the α-vacua, i.e., wightman_alpha, we also need to obtain the spectral representation for the CPT-conjugate partners: W_E^∓(x̅,x̅'), W_E^∓(x̅,x') and W_E^∓(x,x̅'). The CPT map sends a point x=(t,x⃗) to x̅=(-t,x⃗)[The CPT map was referred to as the antipodal map in <cit.>, which is understood as the antipodal map X^M → -X^M on the embedding hyperbola.]. Based on the Pxx_g, applying the CTP map and obtaining the spectral density by Fourier transform via residue theorem, we have, W^-_E(x̅,x')=W^-_E(x,x̅')= a_0/1 + cosh(∓ s_-) = ∫_0^∞ dω ρ_0,1(ω)  e^i ω L (t - t') and W^+_E(x, x̅') = a_0/1 - cosh(s_+ -i ϵ)= ∫_0^∞ dω ρ_0,0(ω)  e^i ω L (t+t') , W^+_E(x̅,x') = a_0/1 - cosh(-s_+ - iϵ)= ∫_0^∞ dω ρ_0,2(ω)  e^i ω L (t+t') . Two coordinate arguments in Wxx_3 are in antipodal separation, so there is no need for iϵ prescription. In contrast, the two coordinate arguments in Wxx_4 or Wxx_5 are in zero separation, so we must reinstall the iϵ prescription [In Wxx_3, we have chosen the -iϵ for W^+_E(x, x̅') by not changing the original iϵ prescription under the CPT map. However, it seems the alternative is also possible. When defining the Wightman function for the (α,β)-vacua by wightman_alpha, this ambiguity can be absorbed by changing the sign of β.]. Also, in Wxx_5 it is in the s_+ + iϵ prescription, so that the n=0 poles are included so that ρ_0,0(ω) is Wxx_4 is changed to ρ_0,2(ω). Finally, if we perform the CPT map on both coordinate arguments of W^∓(x,x'), it will not change the spatial separation but reverse their time order. This is the same as swapping the two coordinate arguments, changing the s_–iϵ to s_-+iϵ. 
Thus, we have W^-_E(x̅, x̅') = W^-_E(x',x) = a_0/1 - cosh(-s_- -i ϵ)= ∫_0^∞ dω ρ_0,2(ω)  e^i ω L (t-t') , W^+_E(x̅,x̅') = W^+_E(x',x) = a_0/1 + cosh(-s_+ )= ∫_0^∞ dω ρ_0,1(ω)  e^i ω L (t+t') = W_E^+(x,x') . Similarly, we have W^+_E(x̅',x)=W_E^+(x̅,x') , W^+_E(x',x̅)=W_E^+(x, x̅') , and W^-_E(x̅',x)=W^-_E(x',x̅)=W^-_E(x̅,x')=W^-_E(x,x̅') . The above results for the CPT map of argument swapping of W^-_E(x,x') agree with the ones in <cit.>. The above are the spectral representations for the Wightman functions used for the monopole coupling of UDW detectors to a massless conformal scalar. We can generalize to the cases for dipole coupling. Recall the defining equation dipole_W of the corresponding windowed Wightman function for dipole coupling; we will only use the scalar dipole for simplicity so that Φ_I(x)=e^μ_I(x)∂_μϕ(x). For static UDW detectors at χ=0,π, the worldline tetrads are e^μ_0=(1,0,0,0), e^μ_1=(0,Lcosht L,0,0), and e^μ_2,3=(0,0,0,0). If the window function χ^1(τ) is nonzero, the cosht L factor in e^μ_2 will prevent from obtaining the spectral representation with the same reason discussed before. Therefore, we will only consider the case with the window functions χ^1=0 and χ^0 given by chi_w. Then, the corresponding Wightman functions denoted by W_∓(x,x') for zero and antipodal separation are give by W_E^∓(x,x')= T^2 ∂_t ∂_t' W_E^∓(x,x')=T^2 L^2 a_0 (2 ±cosh s_∓ ) (1∓cosh(s_∓-iϵ) )^2 . Here, we have compensated the dimension of ∂_t by the resolution time scale T given in chi_w when defining the window function. It turns out that W_E^∓ has pure imaginary double poles at s_-=i 2 n π with residue 2 i ω^3 e^2 n πω, and at s_+= i (2n+1) π with residue 2 i ω^3 e^2 (n+1) πω for n∈ℤ, respectively. Based on this, the spectral representations of the Wightman functions for the dipole coupling are just to replace the spectral density ρ_0,k(ω) in their monopole coupling counterparts by ρ_2,k(ω). Based on the above results for the Euclidean vacuum, we can now obtain the Wightman function for the |α,β⟩ vacua. For the zero separation, we denote the Wightman function by W^(ℓ,-)_α, β(x,x'), and for the antipodal separation by W^(ℓ,+)_α, β(x,x') with ℓ=0 for monopole coupling and ℓ=2 for dipole coupling. Based on wightman_alpha, we can write them in a more unified way as follows: W^(ℓ,∓)_α,β(x,x') := ∑^2_k=0 f^∓_k w_k^∓ , ℓ=0,2 , with the component spectral representations w^∓_k:=∫_0^∞ dω ρ_ℓ,k(ω) e^i ω L (t ∓ t') . The coefficients f^∓_k encode the information about (α,β)-vacua. Explicitly, f_0^-= cosh^2α , f_1^-= sinh 2αcosβ , f_2^-= sinh^2α ; f_0^+=1 2sinh 2α e^-i β , f_1^+= cosh 2α , f_2^+=1 2sinh 2α e^i β . In order to calculate X of defX, we also need to have W^(ℓ,∓)_α,β(x',x). Using the swapping rules Wxx_6, Wxx_7, swap_r_1 and swap_r-2, we obtain W^(ℓ,∓)_α,β(x',x) = f^∓_0 w^∓_2 + f^∓_1 w^∓_1 + f^∓_2 w^∓_0 . § FINAL STATES OF UDW DETECTORS OF ZERO AND ANTIPODAL SEPARATIONS IN DE SITTER VACUA Given the spectral representation of W^(ℓ,∓)_α,β(x,x') in W_f_1-W_f_4, we can calculate the corresponding reduced density matrix ρ_AB defined in dtec_den_mat-defX for the zero and antipodal separations with monopole and dipole couplings to a massless conformal scalar field. Inspecting PJ and defC, we note that P_D=A,B can be thought of as a special case of C with zero separation and the same Ω for the pair of UDW detectors. Thus, we will first evaluate C, from which we can obtain P_D straightforwardly. 
According to W_f_1, C for zero and antipodal separation denoted respectively by C^- and C^+ will take the following form (we sometimes omit the ℓ labeling without confusion for simplicity) C^∓= g^2 ∑_k=0^2 f^∓_k c^∓_k , where f_k^∓ as given in W_f_3 and W_f_4 encodes the information of (α,β)-vacua, and c^∓_k := ∫_0^∞ dω ρ_ℓ,k(ω) ∫_-∞^∞ dt_A ∫_-∞^∞ dt_B χ(t_A) χ(t_B) e^-i (Ω_A t_A- Ω_B t_B) e^i ω L (t_A ∓ t_B) . Introduce the following new variables t_±=t_A ± t_B , and Ω_±=Ω_A ±Ω_B 2 , so that ∫_-∞^∞∫_-∞^∞ dt_A dt_B χ(t_A) χ(t_B) e^-i (Ω_A t _A ∓Ω_B t_B) , = 1 2∫_-∞^∞∫_-∞^∞ dt_- dt_+ χ(t_- √(2)) χ(t_+ √(2)) e^-i (Ω_∓ t _+ + Ω_± t_-):=[⋯]_∓ . Using temp_1 and ∫_-∞^∞ dt χ(t√(2)) e^- i Ω t=2(2π)^1/4 T e^-2 T^2 Ω^2 to carry out the double time integrals, we get c^∓_k = 2√(2π) T^2 ∫_0^∞ dω ρ_ℓ,k(ω) exp{- 2 ( T L)^2 [ (ω -Ω_± L )^2 + (Ω_∓ L )^2 ] } . There is no closed form for the above integral; instead, we perform it using the saddle point approximation for large T/L, similar to <cit.>. That is, using ∫_0^∞ dω R(ω) e^-(T L)^2 Q(ω)≃√(2 π Q”(ω_0))(L T) R(ω_0) e^-(T L)^2 Q(ω_0) , for large T/L , with ω_0 ≥ 0 a strict minimum such that g'(ω_0)=0 and R(ω_0) 0. Casting saddle_c_1 into the form of saddle_c_2, we find ω_0=Ω_± L , Q(ω_0)= 2 (Ω_∓ L)^2 , Q”(ω_0)= 4 . Thus, the resultant C^∓ is C^∓=g^2 2π(T L)^ℓ+1∑_k f_k^∓ e^-2 (T L)^2 (Ω_∓ L)^2ρ̅_ℓ,k[Ω_±L] , where the dimensionless spectral density is defined by ρ̅_ℓ,k[ω]:= ω^ℓ+1 e^k πω e^2πω-1 . It is related to the spectral density of rho_lk by ρ_ℓ,k(ω) =2 a_0 (T L)^ℓρ̅_ℓ,k[ω]. From the fact that lim_ω→ 0ρ̅_ℓ,k[ω] = 1 2πδ_ℓ,0 and C_f_1, we can obtain P_D=A,B by taking Ω_-→ 0^+ and Ω_+=Ω_D of C^-, and the result is P_D = g^2 2π(T L)^ℓ+1∑_k f_k^- ρ̅_ℓ,k[Ω_D L] . The above results for C^∓ and P_D hold for the (α,β)-vacua, which include the Euclidean vacuum with only nonzero f_0^-=f_0^+=1. Moreover, the transition probability P_D of P_D_fab agrees with the one in <cit.> by using ab_rel for a comparison. In the case with identical UDW detectors, by construction C^-=P_D, but C^+=δ_ℓ,0g^2 4π^2(T L)^ℓ+1∑_k f_k^+ e^-2 (T L)^2 (Ω_D L)^2 . Note that C^+=0 for the identical UDW detectors with dipole coupling. Finally, we calculate the matrix element X of ρ_AB. First, note that the factor e^-i(Ω_A t_A -Ω_B) in C is replace by e^-i(Ω_A t_A + Ω_B). There are two terms with different time orderings and the arguments of the Wightman function swapped. Denote X for zero and antipodal separations by X^- and X^+, respectively, then from W_f_1 and W_f_swap they will take the following form X^∓ = - g^2 [ f_0^∓(x^∓_<,0 +x^∓_>,2) + f_1^∓(x^∓_<,1 +x^∓_>,1) + f_2^∓(x^∓_<,2 +x^∓_>,0) ] , := - g^2 ∑_k=0^2 f_k^∓x̃^∓_k , with x^∓_s,k :=∫_0^∞ dω ρ_ℓ,k(ω) [⋯]_+ e^iω L t_∓ θ(t_s) , where s=< or > with t_<=-t_- and t_>=t_-, and [⋯]_+ ∼1 2∫ dt_- dt_+ ⋯ e^-i (Ω_+ t _+ + Ω_- t_-) is defined in temp_1. First, the two theta functions in x̃_1=x_<,1+x_>,1 can be combined into unity. Carrying out the Gaussian integrals over t_± and the saddle point approximation for the integral over ω, we arrive x̃^∓_1 := 2π T^2 e^-2 (T L)^2 (Ω_± L)^2ρ_ℓ,1[Ω_∓ L] . On the other hand, the two theta functions in x̃_k=0,2 cannot be combined into unity so that the integral over t_- will yield the imaginary error function erfi[z]=-erfi[-z]. Carrying out the integrals over t_± yields x^∓_s,k=0,2=√(2π) T^2 ∫_0^∞ dω ρ_ℓ,k=0,2(ω) e^-2 (T L)^2 ((Ω_± L)^2 + (ω - Ω_∓ L)^2 )[1∓ i erfi[√(2)(T L) (ω-Ω_∓ L) ] ] . Perform the saddle point approximation for the above integral. 
The imaginary error function vanishes at the saddle point ω_0=Ω_- L. This then results in x̃^∓_0=x̃^∓_2 = π T^2 e^-2 (T L)^2 (Ω_± L )^2(ρ_ℓ,0[Ω_∓ L] + ρ_ℓ,2[Ω_∓ L] ) . Combine all the above results, we obtain X^∓ for the zero and antipodal separations in the (α,β)-vacua are X^∓ = -g^2 4π(T L)^ℓ+1 e^-2 (T L)^2 (Ω_± L )^2[ ( ρ̅_ℓ,0[Ω_∓ L] + ρ̅_ℓ,2[Ω_∓ L] ) (f_0^∓ + f_2^∓) + 2 ρ̅_ℓ,1[Ω_∓ L] f_1^∓] . Again, this result for X^∓ holds for the (α,β)-vacua, including the Euclidean one with only nonzero f_0^-=f_0^+=1. For the identical UDW detectors with Ω_-→ 0 and Ω_+=Ω_D, we have X^-_D=-δ_ℓ,0g^2 4π^2(T L)^ℓ+1 e^-2 (T L)^2 (Ω_D L )^2∑_k=0^2 f^-_k , and X^+_D=-g^2 4π(T L)^ℓ+1[ ( ρ̅_ℓ,0[Ω_D L] + ρ̅_ℓ,2[Ω_D L] ) (f_0^+ + f_2^+) + 2 ρ̅_ℓ,1[Ω_D L] f_1^+ ] . We note that X^-_D=0 for the dipole coupling. In summary, the analytical forms of the elements of the reduced density matrix for a pair of UDW detectors at zero and antipodal separation with monopole/dipole coupling in the (α,β)-vacua of de Sitter space are given in C_f_1, P_D_fab and X_f_0 supplemented with rhobar. These are the key results of this paper. So far, we have written the expressions of our key results by adopting L as the unit for measuring T, and 1/L to measure Ω[This is different from the usual convention in the entanglement harvesting, e.g., <cit.> by using 1/T as the frequency unit and T as the length unit (by setting light speed c=1) because T is the overall measuring time. However, in de Sitter space, L is a universal infrared cutoff. It is more natural to adopt it as the basic unit to measure other physical quantities.]. For simplicity, when presenting our results for entanglement harvesting and quantum discord in the next two sections, we will omit L (or setting L=1) and treat both Ω and T as dimensionless quantities with respect to the basic units defined by L. Thus, the superhorizon scale means T is larger than O(L) or Ω is smaller than O(1/L). Finally, we emphasize that our analytical results are obtained by the saddle point approximation for the ω-integral. The saddle point approximation is valid in the large T limit. However, for all cases considered, only one dominant saddle exists. It is then possible for the results to remain approximately valid even beyond the large T regime. In the numerical plots shown below, we will sometimes also plot the regimes with small T. As a consistency check for the small L regime, we will ensure the purity ρ^2_AB is less than unity for all the numerical plots presented below. § ENTANGLEMENT HARVESTING FROM DE-SITTER VACUA Based on the analytical results of the reduced density matrix given in the last section, we will apply them to calculating the concurrence of def_concurrence. This represents the entanglement harvesting from the de sitter vacuum states by the UDW detectors. We then present the results in the numerical plots to demonstrate the dependence of entanglement harvesting on the energy gaps and the measuring time scales of the detectors. Due to the variety of dependent factors, we first need to clarify the logic of our presentation. We will start with the results of the Euclidean vacuum in the first subsection and the (α,β)-vacua in the second one. The simplicity of the Euclidean vacuum helps to capture the essential features of the gravitating quantum information by their energy gap and time scale dependencies. Then, we will examine the effect of the variety of de Sitter vacuums and present the corresponding results by also showing the dependence on the values of (α,β). 
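As a companion to these scans, the closed-form elements just summarized (C_f_1, P_D_fab and X_f_0 with rhobar) can be packaged into a short numerical sketch for identical detectors; this is only a sketch of how such scans can be generated, with g and L set to 1 and with helper names that are ours rather than the code used for the figures.

```python
import numpy as np

def rho_bar(omega, ell, k):
    """Dimensionless spectral density rho_bar_{ell,k}[omega]; the omega -> 0
    limit equals 1/(2 pi) for ell=0 and 0 for ell=2, as used in the text."""
    if omega == 0.0:
        return 1.0 / (2 * np.pi) if ell == 0 else 0.0
    return omega ** (ell + 1) * np.exp(k * np.pi * omega) / (np.exp(2 * np.pi * omega) - 1)

def f_coeffs(alpha, beta):
    """Coefficients f_k^- and f_k^+ encoding the (alpha, beta)-vacuum."""
    fm = [np.cosh(alpha) ** 2, np.sinh(2 * alpha) * np.cos(beta), np.sinh(alpha) ** 2]
    fp = [0.5 * np.sinh(2 * alpha) * np.exp(-1j * beta), np.cosh(2 * alpha),
          0.5 * np.sinh(2 * alpha) * np.exp(1j * beta)]
    return fm, fp

def elements_identical(Omega, T, alpha=0.0, beta=0.0, ell=0):
    """P, C^-, C^+, X^-, X^+ for two identical detectors (g = L = 1)."""
    fm, fp = f_coeffs(alpha, beta)
    pref = T ** (ell + 1) / (2 * np.pi)
    gauss = np.exp(-2 * T ** 2 * Omega ** 2)
    P = pref * sum(fm[k] * rho_bar(Omega, ell, k) for k in range(3))
    Cm = P                                      # zero separation, identical detectors
    Cp = pref * gauss * sum(fp[k] * rho_bar(0.0, ell, k) for k in range(3))
    Xm = -pref / 2 * gauss * ((rho_bar(0.0, ell, 0) + rho_bar(0.0, ell, 2)) * (fm[0] + fm[2])
                              + 2 * rho_bar(0.0, ell, 1) * fm[1])
    Xp = -pref / 2 * ((rho_bar(Omega, ell, 0) + rho_bar(Omega, ell, 2)) * (fp[0] + fp[2])
                      + 2 * rho_bar(Omega, ell, 1) * fp[1])
    return P, Cm, Cp, Xm, Xp

def concurrence(P, X):
    """C(rho_AB) = 2 max[0, |X| - P] for identical detectors, where sqrt(P_A P_B) = P."""
    return 2.0 * max(0.0, abs(X) - P)
```

Scanning these elements over Ω, T, α and β then gives the concurrence maps discussed below.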
In each subsection, we will first consider the monopole coupling and then the dipole coupling. As the quantum information is usually nonlocal, it is important to observe the effect of the separation between two UDW detectors. Therefore, we will juxtapose the results of zero and antipodal separations for all the numerical plots. For simplicity, we will only present the plots for the identical UDW detectors. §.§ de Sitter Euclidean vacuum In this subsection, we analyze concurrence for two identical UDW detectors, i.e., Ω_A=Ω_B=Ω as a function of the detector energy gap and interaction time in the Euclidean vacuum in de Sitter space. We start with the scenario of monopole coupling Hamil_int_mono and subsequently examine the case of dipole coupling dipole_H. §.§.§ Monopole coupling in Euclidean vacuum In <ref> and <ref>, we present the concurrence as a function of the detector energy gap Ω for different measuring time T, for zero spatial separation and antipodal separations, respectively. We observe for both cases that concurrence attains a maximum value before monotonically diminishing to zero as the detector energy becomes large. Interestingly, in the case of zero spatial, there is a phenomenon akin to the “sudden death” of concurrence occurring at a large detector energy gap. Furthermore, we depict the concurrence as a function of T for various Ω in <ref> and <ref> for zero and antipodal separations, respectively. We also notice the phenomena of “sudden death” of the concurrence when we plot it as a function of T for the case of zero spatial separation, while for antipodal separation, the concurrence increases monotonically with time. In the current setup, we use the UDW detectors to probe the entanglement structure of the underlying scalar vacuum. Thus, the zero and antipodal separations of UDW detectors probe the short-range and long-range quantum entanglement, respectively. From the above results, we can see that the short-range and long-range entanglements behave differently as a function of the energy gaps and the measuring time scale. In particular, “sudden death” only occurs for the short-range entanglement, not the long-range one. Besides, the <ref> implies that the Euclidean vacuum generates more long-range entanglement over time, characterized by the growing concurrence for the antipodal separation. This contrasts with the zero separation cases where the concurrence decays to zero at large T. To better comprehend the dependence of concurrence on various parameter spaces, we provide a density plot of concurrence as a function of Ω and T in <ref>. The white curve in <ref> indicates the “sudden death" of concurrence for zero spatial separation. On the other hand, this special feature is absent for antipodal separation in <ref>. With this overview picture, we see that the silent and active regions of entanglement harvesting, with zero and high concurrence, are located quite differently for zero and antipodal separations. The active region is located at low but nonzero T and Ω part in <ref> but in the low Ω and high T part in <ref>. The last feature implies that the long-range entanglements in the de Sitter Euclidean vacuum grow over time, but the short-range ones decay. §.§.§ Dipole coupling in Euclidean vacuum We now examine entanglement harvesting for two identical dipole-coupling UDW detectors in Euclidean vacuum in de Sitter space. As noted in XmD, X^-_ℓ=2=0 for zero separation, this yields zero concurrence. 
On the other hand, there is nonzero concurrence for the antipodal separation. This implies that the UDW detectors cannot explore the short-range but the long-range quantum entanglement through the dipole coupling. Here, we just present the density plot of the concurrence for the antipodal separation, as shown in <ref>. Compared to <ref> of monopole-coupling, the active region of the concurrence shifts to higher T and Ω part with larger values by a factor of 100. §.§ de Sitter α-vacua We now consider the entanglement harvesting for the α-vacua (with β=0) or the generic (α,β)-vacua. The parameter β is periodic with a period of 2π, and the parameter α is a non-negative real number. It is important to mention that the purity ρ_AB^2 can exceed one for α larger than the order of unity. The exact critical value of α where the purity becomes ill-defined depends on Ω and T. This implies that the O(g^2) approximation breaks down for large α. Consequently, we will restrict our consideration for the α-vacua with α≤ 1.5 to ensure a well-defined density matrix ρ_AB. Again, we start our analysis with the monopole coupling and, subsequently, the dipole coupling. §.§.§ Monopole coupling in α-vacua We first consider the α-vacua (i.e., β=0). For a small α value, we find the dependences of the concurrence on Ω and T are similar to the ones in <ref> and <ref> of the Euclidean vacuum. However, this may not be the case for larger α. In <ref>, we show the dependence of the concurrence on α for a fixed value of T and a few different values of Ω for the zero and antipodal separation. Interestingly, we notice that the “sudden death" of the entanglement occurs at some value of α for the zero separation scenario only. On the other hand, in the case of antipodal separation, the concurrence grows with α monotonically. Similar α dependence can be obtained by fixing Ω and varying T. The above α dependence of concurrence implies that increasing α suppresses the short-range entanglement harvested by the UDW detectors but enhances the long-range entanglement for a given Ω and T. From the previous discussions, it is clear that concurrence exhibits interesting behavior as a function of the detector energy gap, measuring time scale and the parameter α. To elucidate this behavior more comprehensively, we provide the following density plots. If we fix α, the density plot for the concurrence as a function of Ω and T shows an analogous pattern as illustrated in <ref> of Euclidean vacuum. Thus we do not include it for brevity. Instead, we present the density plots of the concurrence as a function of α and Ω with a fixed T in <ref>, and a function of α and T with a fixed Ω in <ref>. These density plots reconfirm the implication that the α-vacua prefer long-range entanglement harvesting. Finally, we show concurrence for the generic (α,β)-vacua in <ref> and <ref> for zero and antipodal separations, respectively. Interestingly, we observe the novel phenomena of “sudden death and revival" of the entanglement when tuning β for the antipodal separation. Otherwise, the α-dependence of concurrence that we notice from these density plots confirms the implication previously discussed. §.§.§ Dipole coupling in α-vacua We now proceed to discuss entanglement harvesting by two dipole-coupling distinct detectors in the α-vacua. As noted in XmD, X^-_D=0 for the dipole coupling for two identical detectors, resulting in vanishing concurrence for the case of zero separation. Therefore, we only need to consider the antipodal separation scenario. 
We first depict a density plot in <ref> to present an overview of T and Ω dependence of the concurrence for an α-vacuum with a typical value of α=0.05. It can be compared with the antipodal separation results of Euclidean vacuum for either the monopole-coupling one in <ref> or the dipole-coupling one in <ref>. Moreover, in <ref>, <ref>, and <ref>, we provide the density plot of concurrence as a function of α and Ω, α and T, and α and β respectively. We then compare the plots with the monopole-coupling counterparts illustrated in <ref>, <ref>, and <ref>. They exhibit similar features, including the growth of concurrence with α and the phenomena of “sudden death and revival" upon tuning β, but with different overall magnitudes compared to their monopole-coupling counterparts. § QUANTUM DISCORD OF DE SITTER VACUA To characterize the non-classical quantum correlation produced by the gravitating scalar vacuum states, in this section, we will apply the analytical results of the reduced density matrix of the UDW detectors provided in section <ref> to obtain the quantum discord D of def_discord. We subsequently present the results in numerical plots to demonstrate the dependence of gravitating quantum correlation on the measuring time scale and energy gaps of the detectors. The logic of the presentation is similar to that adopted for entanglement harvesting in section <ref>. As noted in def_discord, the quantum discord D=D(A,B) depends only on P_D=A,B and C but not on X. For the identical UDW detectors, it can be further reduced to QD_id. The latter leads to D=2 g^2 P_D for the zero separation. It also leads to D=0 for the dipole-coupling identical UDW detectors of antipodal separation since C^+_ℓ=2=0. Due to these two special trivial cases for identical UDW detectors, we will consider the quantum discord for the non-identical UDW detectors that can be characterized by Ω_B and the parameter of energy gap difference δ :=Ω_A - Ω_B . Given that the quantum discord D(A,B)def_discord is symmetric when exchanging A and B, we simply assume δ≥ 0 without loss of generality. Moreover, D(A,B) is defined only for real P_D<cit.>, which is only true if β=0 or δ=0. Since we will mostly investigate scenarios with nonzero δ, our analysis will focus on non-identical UDW detectors in α-vacua only, i.e., β=0, when considering the dependence of D on δ, Ω_B, T and α for both monopole- and dipole-coupling with zero and antipodal separations. §.§ de Sitter Euclidean vacuum In this subsection, we study quantum discord between two distinct UDW detectors in the Euclidean vacuum. We first consider the monopole-coupling and, subsequently, the dipole-coupling. §.§.§ Monopole coupling in Euclidean vacuum In <ref> and <ref>, we show the behavior of quantum discord D as a function of the energy gap difference δ for a given set of (Ω_B, T) for zero and antipodal separations respectively. It shows that D decays to zero when δ becomes an order of Ω_B for both cases, although their detailed decay patterns differ. This implies the quantum correlations are heavily suppressed by the incompatibility of spectral gaps between the UDW detectors. To further explore the interplay between the T and separation dependences of D, we present the plots <ref> and <ref>, for zero and antipodal separations respectively. As it can be observed from these plots <ref> and <ref>, unlike the entanglement harvesting there is no “sudden death" behavior for quantum discord. 
For both zero and antipodal separations, the corresponding discords decay to zero for sufficiently large T. However, in <ref>, D reaches a maximum before decaying to zero, whereas in <ref>, it decreases monotonically to zero. This implies that the coherence of long-time correlations is hard to maintain. In terms of length scale, the above results imply that there is no quantum correlation beyond an order of a few Hubble scales, i.e., no super-horizon quantum correlation, as clearly shown in <ref> for antipodal separation. This corresponds to the decoherence of superhorizon quantum fluctuation in the inflationary universe scenario. On the other hand, we see in <ref> that the quantum entanglements of the superhorizon scales are not suppressed. This highlights an interesting contrast between quantum entanglement and quantum correlation at superhorizon scales. Moreover, the magnitude and the range of T for nonzero D are larger in the zero separation than in the antipodal separation. This agrees with the expectation that the short-range quantum correlations are more vibrant than the long-range ones. Finally, to obtain an overview of the δ and T dependence of D, we present the corresponding density plots in <ref> for both zero and antipodal cases for a fixed value of Ω_B. We notice that the quantum discord extends more along the T-direction in <ref> for zero separation, but more along the δ-direction in <ref> for antipodal separation. The former is more constrained by the decoherence due to the incompatibility of spectral gaps, and the latter is more by the decoherence of superhorizon quantum correlations. §.§.§ Dipole coupling in Euclidean vacuum To compare with the monopole-coupling counterparts illustrated in <ref>, we present the density plots of quantum discord as a function of δ and T for dipole-coupling detectors in <ref> and <ref>, for zero and antipodal separations respectively. By comparison, the active region of the zero separation shrinks, with the lower T region becoming silent. On the other hand, the active region of the antipodal separation is now located at the large δ part instead of the smaller one, which is now silent. Despite the overall magnitude being down by two orders compared to the monopole-coupling counterpart, it is still novel to see that the spectral incompatibility of the UDW detectors will enhance the quantum correlation in the dipole-coupling cases. However, the superhorizon suppression remains. §.§ de Sitter α-vacua We will now consider quantum discord for the non-identical UDW detectors coupled to the scalar field in the α-vacua. As before, we first consider the monopole-coupling and then dipole-coupling. We have four model parameters Ω_B, δ, T and α. In what follows, we will present the density plots of quantum discord as a function of two model parameters, with the values of the other two being fixed. To ensure consistency and facilitate straightforward comparison, we will use the following fixed parameter values throughout our analysis: Ω_B=0.5, T=1.5, δ=0.1, and α=0.1. §.§.§ Monopole coupling in α-vacua We present three sets of density plots for the monopole-coupling case, contrasting zero and antipodal separations. The first one is depicted in <ref> and <ref> for zero and antipodal separation respectively, which shows the δ and T dependence of D. This can be compared to <ref> and <ref> for the Euclidean vacuum scenario. 
They differ slightly and share all the relevant features, such as the suppressions of the quantum fluctuations due to superhorizon decoherence or spectral incompatibility. The second set of density plots is shown in <ref> and <ref> for zero and antipodal separations, which exhibits the interplay between the α and δ dependence of D. We observe that increasing the value of α enhances the quantum discord for the zero separation cases significantly, while there is a minimal effect for the antipodal separation cases. The overall magnitude for the zero separation scenario is about four orders larger than the one for the antipodal separation. In both cases, the spectral incompatibility diminishes the quantum discord. The third set of density plots is presented in <ref> and <ref> for zero and antipodal separations, which exhibits the interplay between the α and T dependence of D. We again see that increasing the value of α can enhance the quantum discord for both zero and antipodal separations. However, their ways of enhancement are different. In <ref>, the quantum discord of larger T gets more enhanced, which is in contrast with <ref>, where the quantum discord of lower T (T≃ 0.25) gets more enhanced. In conclusion, our results demonstrate that the α-vacua have a non-trivial effect on quantum discord, but the superhorizon decoherence remains. §.§.§ Dipole coupling in α-vacua Finally, we present the quantum discord for the dipole-coupling of UDW detectors in α-vacua. We parallel what we have presented for the monopole coupling cases for comparison. The first set of density plots is shown in <ref> and <ref> for zero and antipodal separations, to exhibit the δ and T dependence of D. Its difference from its monopole-coupling counterpart of <ref> is quite similar to the corresponding difference for Euclidean vacuum, i.e., the difference of <ref> from <ref>. Thus, turning on α will not drastically change the overall patterns of the quantum discord of the Euclidean vacuum. The second set of density plots is shown in <ref> and <ref> for zero and antipodal separations, to exhibit the interplay between the α and δ dependence of D. Compared to its monopole coupling counterpart of <ref>, the overall magnitude and pattern of the active region of the zero separation cases do not change much. On the other hand, for the antipodal separation cases, the overall magnitude is reduced by more than four orders and can be considered zero. The third set of density plots is shown in <ref> and <ref> for zero and antipodal separations, which is in parallel with its monopole-coupling counterpart of <ref>, to exhibit the interplay between the α and T dependence of D. The change in the overall magnitude and patterns compared to the monopole-coupling counterpart is similar to that of the second set of density plots. Overall, we see that the patterns in the cases of monopole and dipole couplings are quite similar. However, the overall magnitude remains the same for the zero separation but decreases by a few orders for the antipodal separation. § CONCLUSION De Sitter space is the most simple dynamical spacetime and plays an important role in the inflationary universe scenario for initiating the primordial curvature perturbations from the fluctuations of quantum fields. Besides, the CPT invariant vacuum states of a quantum field in de Sitter space, called α-vacua, are not unique. In this paper, we have explored the de Sitter space and the associated scalar α-vacuum states by a pair of probe UDW detectors. 
Our results demonstrated how the de Sitter gravity affects the relativistic quantum information of the vacuum polarizations. In this work, we, in particular, have studied two quantum information quantities, concurrence and quantum discord of the reduced final state of the UDW detectors. The concurrence characterizes the quantum entanglement harvested by the UDW detectors from the vacuum states of the environmental scalar. Notably, the quantum discord for the reduced state is obtained analytically in this paper for the first time in the literature. It characterizes the non-classical quantum correlations between UDW detectors, which reflect the gravitated quantum correlation of the scalar's vacuum states. These two quantities, though quantum, are intrinsically quite different, especially on the nonlocal features. For this purpose, we considered a pair of UDW detectors in either time-like zero separation or space-like antipodal separation. As the reduced state is obtained by tracing out the environmental scalar field in curved spacetime, it is usually hard to have an analytical form, and the calculations of the derived quantum information quantities are usually based on numerical analysis of the integrals of the Wightman functions. Due to the nontrivial iϵ prescription of the Wightman function, the results will be subjected to numerical error for nontrivial background spacetime. The de Sitter space, though time-dependent, is maximally symmetric, so the Wightman function has a simple analytical spectral representation. Exploiting this and the saddle point approximation, we have obtained the analytical form of the reduced state in the α-vacuum states of de Sitter space for the first time in the literature. Based on the analytical reduced states, we compute the corresponding concurrence and quantum discord. We then present the numerical density plots to understand how their patterns are affected by the interplays of four model parameters: two spectral gaps of UDW detectors, the measuring time, and the value of α labeling the α-vacua. From the patterns of these density plots, we draw our main conclusions as follows. By increasing the measuring time or the value of α, we observed “sudden" death behavior for the short-range quantum entanglements probed by UDW detectors in zero (or time-like) separation, but not for the long-range ones probed by UDW detectors in antipodal (or space-like) separation. Moreover, the long-range quantum entanglements grow with the measuring time and the values of α. This implies that the de Sitter gravity enhances the long-range entanglement. On the other hand, for the quantum discord, we found that there exists suppression of quantum discord at the superhorizon scale, which can be characterized by the measuring time scale. This conforms to the folklore about the decoherence of the quantum correlation at the superhorizon scales in the inflationary universe scenario. This is also consistent with the intuition that it is difficult to maintain long-time quantum correlations. Several minor points, as drawn from the density plots, such as the dependence on the spectral (in)compatibility of UDW detectors in time-like and space-like separations, have also been noted in the paper. Overall, our study and results have helped one gain more tools and insights to understand the de Sitter space from the perspective of relativistic quantum information. 
§ ACKNOWLEDGEMENT The work of FLL is supported by Taiwan's NSTC with grant numbers 109-2112-M-003-007-MY3 and 112-2112-M-003-006-MY3, and the work of SM is supported by Taiwan's NSTC with grant numbers 112-2811-M-003-014. 10gibbons1983very G. Gibbons, S. Hawking and S. Siklos, The Very Early Universe: Proceedings of the Nuffield Workshop, Cambridge 21 June to 9 July, 1982, Cambridge University Press (1983). Mottola:1984ar E. Mottola, Particle Creation in de Sitter Space, https://doi.org/10.1103/PhysRevD.31.754Phys. Rev. D 31 (1985) 754. PhysRevD.32.3136 B. Allen, Vacuum states in de sitter space, https://doi.org/10.1103/PhysRevD.32.3136Phys. Rev. D 32 (1985) 3136. PhysRevD.98.065014 A. Higuchi and K. Yamamoto, Vacuum state in de sitter spacetime with static charts, https://doi.org/10.1103/PhysRevD.98.065014Phys. Rev. D 98 (2018) 065014. PhysRevD.65.104039 R. Bousso, A. Maloney and A. Strominger, Conformal vacua and entropy in de sitter space, https://doi.org/10.1103/PhysRevD.65.104039Phys. Rev. D 65 (2002) 104039. Srednicki:1993im M. Srednicki, Entropy and area, https://doi.org/10.1103/PhysRevLett.71.666Phys. Rev. Lett. 71 (1993) 666 [https://arxiv.org/abs/hep-th/9303048 hep-th/9303048]. Callan:1994py C.G. Callan, Jr. and F. Wilczek, On geometric entropy, https://doi.org/10.1016/0370-2693(94)91007-3Phys. Lett. B 333 (1994) 55 [https://arxiv.org/abs/hep-th/9401072 hep-th/9401072]. Holzhey:1994we C. Holzhey, F. Larsen and F. Wilczek, Geometric and renormalized entropy in conformal field theory, https://doi.org/10.1016/0550-3213(94)90402-2Nucl. Phys. B 424 (1994) 443 [https://arxiv.org/abs/hep-th/9403108 hep-th/9403108]. Schlieder1965SomeRA S. Schlieder, Some remarks about the localization of states in a quantum field theory, Communications in Mathematical Physics 1 (1965) 265. Witten:2018zxz E. Witten, APS Medal for Exceptional Achievement in Research: Invited article on entanglement properties of quantum field theory, https://doi.org/10.1103/RevModPhys.90.045003Rev. Mod. Phys. 90 (2018) 045003 [https://arxiv.org/abs/1803.04993 1803.04993]. Sanders:2008gs K. Sanders, On the Reeh-Schlieder Property in Curved Spacetime, https://doi.org/10.1007/s00220-009-0734-3Commun. Math. Phys. 288 (2009) 271 [https://arxiv.org/abs/0801.4676 0801.4676]. Maldacena:1997re J.M. Maldacena, The Large N limit of superconformal field theories and supergravity, https://doi.org/10.4310/ATMP.1998.v2.n2.a1Adv. Theor. Math. Phys. 2 (1998) 231 [https://arxiv.org/abs/hep-th/9711200 hep-th/9711200]. Ryu:2006bv S. Ryu and T. Takayanagi, Holographic derivation of entanglement entropy from AdS/CFT, https://doi.org/10.1103/PhysRevLett.96.181602Phys. Rev. Lett. 96 (2006) 181602 [https://arxiv.org/abs/hep-th/0603001 hep-th/0603001]. Chen:2021lnq B. Chen, B. Czech and Z.-z. Wang, Quantum information in holographic duality, https://doi.org/10.1088/1361-6633/ac51b5Rept. Prog. Phys. 85 (2022) 046001 [https://arxiv.org/abs/2108.09188 2108.09188]. PhysRevD.14.870 W.G. Unruh, Notes on black-hole evaporation, https://doi.org/10.1103/PhysRevD.14.870Phys. Rev. D 14 (1976) 870. DeWitt:1980hx B.S. DeWitt, Quantum gravity: The new synthesis , in General Relativity: An Einstein Centenary Survey, pp. 680–745 (1980). summers1985vacuum S.J. Summers and R. Werner, The vacuum violates bell's inequalities, Physics Letters A 110 (1985) 257. Summers:1987ze S.J. Summers and R. Werner, Maximal Violation of Bell's Inequalities Is Generic in Quantum Field Theory, https://doi.org/10.1007/BF01207366Commun. Math. Phys. 110 (1987) 247. VALENTINI1991321 A. 
Valentini, Non-local correlations in quantum electrodynamics, https://doi.org/https://doi.org/10.1016/0375-9601(91)90952-5Physics Letters A 153 (1991) 321. Reznik:2002fz B. Reznik, Entanglement from the vacuum, https://doi.org/10.1023/A:1022875910744Found. Phys. 33 (2003) 167 [https://arxiv.org/abs/quant-ph/0212044 quant-ph/0212044]. PhysRevA.71.042104 B. Reznik, A. Retzker and J. Silman, Violating bell's inequalities in vacuum, https://doi.org/10.1103/PhysRevA.71.042104Phys. Rev. A 71 (2005) 042104. Wilson-Gerow:2024ljx J. Wilson-Gerow, A. Dugad and Y. Chen, Decoherence by warm horizons, https://arxiv.org/abs/2405.00804 2405.00804. Danielson:2021egj D.L. Danielson, G. Satishchandran and R.M. Wald, Gravitationally mediated entanglement: Newtonian field versus gravitons, https://doi.org/10.1103/PhysRevD.105.086001Phys. Rev. D 105 (2022) 086001 [https://arxiv.org/abs/2112.10798 2112.10798]. Danielson:2022sga D.L. Danielson, G. Satishchandran and R.M. Wald, Killing horizons decohere quantum superpositions, https://doi.org/10.1103/PhysRevD.108.025007Phys. Rev. D 108 (2023) 025007 [https://arxiv.org/abs/2301.00026 2301.00026]. Danielson:2022tdw D.L. Danielson, G. Satishchandran and R.M. Wald, Black holes decohere quantum superpositions, https://doi.org/10.1142/S0218271822410036Int. J. Mod. Phys. D 31 (2022) 2241003 [https://arxiv.org/abs/2205.06279 2205.06279]. Dhanuka:2022ggi A. Dhanuka and K. Lochan, Unruh DeWitt probe of late time revival of quantum correlations in Friedmann spacetimes, https://doi.org/10.1103/PhysRevD.106.125006Phys. Rev. D 106 (2022) 125006 [https://arxiv.org/abs/2210.11186 2210.11186]. Gralla:2023oya S.E. Gralla and H. Wei, Decoherence from horizons: General formulation and rotating black holes, https://doi.org/10.1103/PhysRevD.109.065031Phys. Rev. D 109 (2024) 065031 [https://arxiv.org/abs/2311.11461 2311.11461]. Salton:2014jaa G. Salton, R.B. Mann and N.C. Menicucci, Acceleration-assisted entanglement harvesting and rangefinding, https://doi.org/10.1088/1367-2630/17/3/035001New J. Phys. 17 (2015) 035001 [https://arxiv.org/abs/1408.1395 1408.1395]. Martin-Martinez:2015eoa E. Martin-Martinez and B.C. Sanders, Precise space–time positioning for entanglement harvesting, https://doi.org/10.1088/1367-2630/18/4/043031New J. Phys. 18 (2016) 043031 [https://arxiv.org/abs/1508.01209 1508.01209]. PhysRevD.93.044001 E. Martín-Martínez, A.R.H. Smith and D.R. Terno, Spacetime structure and vacuum entanglement, https://doi.org/10.1103/PhysRevD.93.044001Phys. Rev. D 93 (2016) 044001. Henderson:2017yuv L.J. Henderson, R.A. Hennigar, R.B. Mann, A.R.H. Smith and J. Zhang, Harvesting Entanglement from the Black Hole Vacuum, https://doi.org/10.1088/1361-6382/aae27eClass. Quant. Grav. 35 (2018) 21LT02 [https://arxiv.org/abs/1712.10018 1712.10018]. Kukita:2017etu S. Kukita and Y. Nambu, Harvesting large scale entanglement in de Sitter space with multiple detectors, https://doi.org/10.3390/e19090449Entropy 19 (2017) 449 [https://arxiv.org/abs/1708.01359 1708.01359]. Koga:2019fqh J.-i. Koga, K. Maeda and G. Kimura, Entanglement extracted from vacuum into accelerated Unruh-DeWitt detectors and energy conservation, https://doi.org/10.1103/PhysRevD.100.065013Phys. Rev. D 100 (2019) 065013 [https://arxiv.org/abs/1906.02843 1906.02843]. Perche:2022ykt T.R. Perche, B. Ragula and E. Martín-Martínez, Harvesting entanglement from the gravitational vacuum, https://doi.org/10.1103/PhysRevD.108.085025Phys. Rev. D 108 (2023) 085025 [https://arxiv.org/abs/2210.14921 2210.14921]. Mendez-Avalos:2022obb D. 
Mendez-Avalos, L.J. Henderson, K. Gallock-Yoshimura and R.B. Mann, Entanglement harvesting of three Unruh-DeWitt detectors, https://doi.org/10.1007/s10714-022-02956-xGen. Rel. Grav. 54 (2022) 87 [https://arxiv.org/abs/2206.11902 2206.11902]. ollivier2001introducing H. Ollivier and W.H. Zurek, Introducing quantum discord, arXiv preprint quant-ph/0105072 (2001) . Henderson:2001wrr L. Henderson and V. Vedral, Classical, quantum and total correlations, https://doi.org/10.1088/0305-4470/34/35/315J. Phys. A 34 (2001) 6899 [https://arxiv.org/abs/quant-ph/0105028 quant-ph/0105028]. yu2005evolution T. Yu and J. Eberly, Evolution from entanglement to decoherence of bipartite mixed" x" states, arXiv preprint quant-ph/0503089 (2005) . rau2009algebraic A. Rau, Algebraic characterization of x-states in quantum information, Journal of physics a: Mathematical and theoretical 42 (2009) 412002. PhysRevA.40.4277 R.F. Werner, Quantum states with einstein-podolsky-rosen correlations admitting a hidden-variable model, https://doi.org/10.1103/PhysRevA.40.4277Phys. Rev. A 40 (1989) 4277. ali2010quantum M. Ali, A. Rau and G. Alber, Quantum discord for two-qubit x states, Physical Review A 81 (2010) 042105. Koga:2018the J.-I. Koga, G. Kimura and K. Maeda, Quantum teleportation in vacuum using only Unruh-DeWitt detectors, https://doi.org/10.1103/PhysRevA.97.062338Phys. Rev. A 97 (2018) 062338 [https://arxiv.org/abs/1804.01183 1804.01183]. PhysRevLett.80.2245 W.K. Wootters, Entanglement of formation of an arbitrary state of two qubits, https://doi.org/10.1103/PhysRevLett.80.2245Phys. Rev. Lett. 80 (1998) 2245. chen2011quantum Q. Chen, C. Zhang, S. Yu, X. Yi and C. Oh, Quantum discord of two-qubit x states, Physical Review A 84 (2011) 042313. PhysRevA.88.014302 Y. Huang, Quantum discord for two-qubit x states: Analytical formula with very small worst-case error, https://doi.org/10.1103/PhysRevA.88.014302Phys. Rev. A 88 (2013) 014302. yurischev2015quantum M.A. Yurischev, On the quantum discord of general x states, Quantum Information Processing 14 (2015) 3399. Chernikov:1968zm N.A. Chernikov and E.A. Tagirov, Quantum theory of scalar fields in de Sitter space-time, Ann. Inst. H. Poincare A Phys. Theor. 9 (1968) 109. Niermann:2024fvi L. Niermann and L.C. Barbado, Particle detectors in superposition in de Sitter spacetime, https://arxiv.org/abs/2403.02087 2403.02087. Henderson:2018lcy L.J. Henderson, R.A. Hennigar, R.B. Mann, A.R.H. Smith and J. Zhang, Entangling detectors in anti-de Sitter space, https://doi.org/10.1007/JHEP05(2019)178JHEP 05 (2019) 178 [https://arxiv.org/abs/1809.06862 1809.06862]. Maeso-Garcia:2022uzf H. Maeso-García, J. Polo-Gómez and E. Martín-Martínez, How measuring a quantum field affects entanglement harvesting, https://doi.org/10.1103/PhysRevD.107.045011Phys. Rev. D 107 (2023) 045011 [https://arxiv.org/abs/2210.05692 2210.05692].
http://arxiv.org/abs/2406.19330v1
20240627170147
Non-spinning tops are stable
[ "Iosif Bena", "Giorgio Di Russo", "Jose Francisco Morales", "Alejandro Ruipérez" ]
hep-th
[ "hep-th" ]
Iosif Bena [a], Giorgio Di Russo [b], Jose Francisco Morales [b], Alejandro Ruipérez [b]; dirusso@roma2.infn.it, iosif.bena@ipht.fr, morales@roma2.infn.it, alejandro.ruiperez@roma2.infn.it. [a] Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, F-91191 Gif-sur-Yvette, France; [b] Dipartimento di Fisica, Università di Roma “Tor Vergata” & INFN Roma 2, Via della Ricerca Scientifica 1, 00133, Roma, Italy. We consider coupled gravitational and electromagnetic perturbations of a family of five-dimensional Einstein-Maxwell solutions that describes both magnetized black strings and horizonless topological stars. We find that the odd perturbations of this background lead to a master equation with five Fuchsian singularities and compute its quasinormal mode spectrum using three independent methods: Leaver, WKB and numerical integration. Our analysis confirms that odd perturbations always decay in time, while spherically symmetric even perturbations may exhibit, for certain ranges of the magnetic fluxes, instabilities of Gregory-Laflamme type for black strings and of Gross-Perry-Yaffe type for topological stars. This constitutes evidence that topological stars and black strings are classically stable in a finite domain of their parameter space. Non-spinning tops are stable July 1, 2024 ============================ § INTRODUCTION The physics of black holes is responsible for some of the deepest puzzles in our quest to formulate a unified quantum theory of gravity. On one hand, black holes have an entropy proportional to the area of the event horizon in Planck units and hence, according to Statistical Mechanics, an enormous number of states (e^10^90 for the Sgr A* black hole at the center of the Milky Way). On the other hand, uniqueness theorems indicate that General Relativity is not able to distinguish any of these states, and this, in turn, leads to violations of quantum unitarity. It is also possible to argue <cit.> that the only way to avoid such violations is if these states give different physics from that of the classical General-Relativity black-hole solution at the scale of the event horizon. However, constructing such states is no easy feat: since the black-hole horizon is null, any horizon-sized object we can construct using normal four-dimensional matter will immediately fall in. The only mechanism to avoid gravitational collapse in a classical theory is to use (or mimic) higher-dimensional theories with nontrivial fluxes wrapping topologically-nontrivial cycles <cit.>, of the type one naturally finds in String Theory. Furthermore, the black-hole horizon is perhaps the only thing in our universe that grows when gravity becomes stronger. Hence, if one wants to construct horizon-sized extreme compact objects (ECOs) that replace the black hole, these objects must contain non-perturbative solitonic degrees of freedom, which become lighter and larger when gravity becomes stronger. For supersymmetric and extremal non-supersymmetric black holes there is an almost-20-year history of string-theory and supergravity constructions of such horizon-sized ECOs[See <cit.> and references therein], which are also known as microstate geometries, fuzzball geometries, or “topological stars” <cit.>.
However, constructing and analyzing topological stars for non-supersymmetric black holes is much more challenging, requiring in general solving non-linear PDE's that, absent supersymmetry, do not factorize. Besides some artisanal solutions <cit.>, there are now three systematic routes to build such solutions. The oldest is the floating-JMaRT factorization <cit.>, which can in principle produce non-extremal rotating solutions, but which unfortunately does not seem to produce solutions that have the same charges and angular momenta as non-extremal black holes with a macroscopic horizon <cit.>. The second is to use numerics in a consistent truncation to three-dimensional supergravity <cit.> to produce microstrata <cit.> that have the same charges as non-extremal asymptotically-AdS rotating black holes. The third is to use the Bah-Heidmann factorization <cit.> to produce non-extremal non-rotating multi-bubble topological stars, which can have both flat-space and AdS asymptotics <cit.>. Absence of rotation notwithstanding, this latter method appears to be the most prolific at generating solitonic solutions with non-extremal black-hole charges and mass, including Schwarzschild microstates <cit.>. The main question about these non-extremal microstate geometries is whether they are stable or unstable, and what are the physical implications of their stability or absence thereof. Since the asymptotically-AdS_3 solutions are dual to coherent CFT states that have both right movers and left movers, one can argue that generic solutions with a large number of identical CFT strands should be unstable <cit.>, much like the JMaRT solution is <cit.>. However, this argument does not extend to solutions without an AdS near-horizon region, such as Schwarzschild microstates. The purpose of this paper is to investigate the stability under coupled gravitational and electromagnetic perturbations of the simplest spherically-symmetric topological star, which can be constructed as the four-dimensional Euclidean Schwarzschild (ES) solution times time, to which one adds magnetic flux on the ES bolt. The solution is specified by two parameters, r_s and r_b, that interpolate between a magnetic black string solution with a finite-area event horizon when r_s> r_b and a smooth horizonless solution when r_s< r_b. Throughout this paper we will call the former the black string and the latter the top star. The top star solution is one of the key ingredients in the building of the more general multi-bubble Bah-Heidmann solutions, and hence our investigation is the first step in a programme to determine the stability or instability of these solutions and the physical consequences thereof[The stability of top stars under scalar perturbations was established in <cit.>.]. The equations governing the perturbations around the black string and top star are the same, and couple the electromagnetic fluctuations to the gravitational ones. They separate into a rather intractable parity-even sector and a more tractable parity-odd sector, on which we focus in this paper. We find that in this sector the perturbations separate into two coupled systems of ODE's. We show that one of these systems boils down to two independent second-order ODE's for the so-called master variables. These ODE's have five Fuchsian singularities, and cannot be mapped to a Heun equation or confluent version thereof.
Hence, they are harder to solve than the scalar perturbations of black holes <cit.>, of D-brane bound states <cit.>, and of topological stars <cit.>, where the linear dynamics is described by confluent forms of the Heun equation. We primarily rely on the Leaver method to find the quasinormal-mode (QNM) frequencies of topological stars and magnetic black strings. We then verify the results against those obtained through direct numerical integration and WKB analysis. The WKB approximation works well when the orbital number of the perturbation is large. Direct numerical integration of the differential equation produces reliable results for frequencies with a small imaginary part. Our calculations show that the QNM frequencies of odd perturbations always have negative imaginary parts, so they represent modes decaying in time. For even perturbations, we are able to separate the equations only for spherically symmetric perturbations, and we find stable solutions for all topological stars with r_s < 2 r_b and black strings with r_b< 2 r_s, in agreement with <cit.>. Our analysis provides a unifying picture that interpolates between the Gross-Perry-Yaffe-type instability of top stars with zero or small magnetic fields <cit.> and the Gregory-Laflamme instability <cit.> of magnetized black strings with zero or small magnetic charge <cit.>. Both instabilities manifest themselves through the existence of QNMs that blow up in time. The key result of our calculation is that in the regimes of parameters where these instabilities are absent, all other perturbations decay in time. Hence, in these regimes, both the top star and the magnetized black strings are stable! In Section <ref> we derive the system of ordinary differential equations governing the odd sector of the coupled gravitational and electromagnetic perturbations. In Section <ref> we compute the spectrum of QNM frequencies for odd perturbations. In Section <ref> we determine the regimes of Gregory-Laflamme instability. Finally, in Section <ref> we present some conclusions and future directions. Note added: When this work was nearly complete, we were informed that another group was working independently on the same problem <cit.>. Although there is significant overlap between our results, our analysis and that of <cit.> focus on different aspects and numerical methods, and are therefore complementary to each other. We have compared some of our numerical results to those of <cit.>, finding very good agreement. § GRAVITATIONAL AND ELECTROMAGNETIC PERTURBATIONS We consider solutions of Einstein-Maxwell theory in five dimensions describing magnetically-charged topological stars and black strings. Our goal is to study linear perturbations of the metric and gauge field around these backgrounds. The study of perturbations around magnetically charged backgrounds poses a technical problem well known in the literature on black-hole perturbation theory <cit.>: the usual decoupling between even and odd perturbations does not occur. Here we circumvent this problem by dualizing the vector field into a two-form potential C_μν with field strength F_μνρ=3 ∂_[μC_νρ]. In terms of this field, the equations of motion are R_μν-1/2g_μνR = 1/4(F_μαβF_ν^αβ-1/6g_μνF_αβγF^αβγ) , ∇_μF^μνρ = 0 .
We study linear perturbations around solutions of (<ref>) and (<ref>) with metric g̅_μν and two-form potential C̅_μν given by <cit.> s̅^2 = -f_s(r) t^2+ r^2/f_s(r) f_b(r)+r^2 (θ^2 +sinθ^2 ϕ^2)+f_b(r) y^2 , C̅ = √(3r_s r_b)/r t∧ y , where r_s, r_b are some real positive numbers and f_s(r)=1-r_s/r , f_b(r)=1-r_b/r . There are two regimes of interest: Black string: r_b<r_s Topological star: r_s<r_b The first is called the black string regime <cit.>, in which the solution has an event horizon at r=r_s. In the second, the solution describes a topological star <cit.>, which is a smooth horizonless solution provided the coordinate y is periodically identified with period y∼ y+4π r_b^3/2/√(r_b-r_s) . For this periodicity the spacetime smoothly ends at the hypersurface r=r_b. The geometry near r=r_b is ℝ_t×ℝ^2×𝕊^2. §.§ Linear perturbations We consider perturbations of the background metric and two-form potentials, g_μν=g̅_μν+h_μν , C_μν=C̅_μν+c_μν . Following <cit.>, we separate the perturbations according to their transformations under parity in even- and odd-types. For the background solutions we are considering, even and odd perturbations do not couple. Therefore, we can study them separately. Here we focus on odd perturbations, which are considerably simpler. These are given by h_μν x^μ x^ν = 2 e^- iω t∑_a=t,r,y h_a(r) x^a [- 1/sinθ∂ Y_ℓ m/∂ϕθ+ sinθ∂ Y_ℓ m/∂θϕ] , c_μν x^μ∧ x^ν = 2 e^- iω t∑_a=t,r,y c_a(r) x^a ∧[- 1/sinθ∂ Y_ℓ m/∂ϕθ+ sinθ∂ Y_ℓ m/∂θϕ] , where Y_ℓ m(θ, ϕ) are the spherical harmonics satisfying [1/sinθ∂/∂θ(sinθ∂/∂θ)+1/sin^2θ∂^2/∂ϕ^2+ℓ (ℓ+1)]Y_ℓ m=0 . Since the background is spherically symmetric there is no dependence on m, so from now on we set m=0. In addition, we consider no dependence on the coordinate y for the time being. Plugging (<ref>) and (<ref>) into the field equations, (<ref>) and (<ref>), and expanding to linear order in the perturbations, one finds two decoupled systems of ordinary differential equations: one for the (h_t,h_r,c_y) perturbations h_r'(r)+[ r(r_s+r_b) -2 r_s r_b/r (r-r_s)(r-r_b) ] h_r(r)+i r^3 ω h_t(r)/(r-r_b) (r-r_s)^2=0 , h_t'(r)-2 h_t(r)/r+ i[ω^2 r^3 -(ℓ+2)(ℓ-1)(r-r_s) /ω r^3] h_r(r) +√(3r_b r_s) c_y(r)/ r(r-r_b)=0 , c_y” (r) +r_s c_y' (r) /r (r-r_s)+[ (r^5 ω ^2-ℓ (ℓ+1) r^2 (r-r_s)+3 r_b r_s (r_s-r)) /r^2 (r-r_b) (r-r_s)^2] c_y (r) +i √(3) (ℓ+2) (ℓ-1) √(r_b r_s) h_r(r) /r^4 ω=0 , and one for (c_t,c_r,h_y) perturbations c_r'(r)+ [ r (r_b+r_s)-2 r_b r_s/r (r-r_b) (r-r_s)] c_r(r)+i r^3 ω c_t(r) /(r-r_b) (r-r_s)^2 =0 , c_t'(r) +[ ℓ (ℓ+1) (r-r_s)-r^3 ω ^2]c_r (r) /ir^3 ω-√(3)ℓ( ℓ+1) √(r_b r_s) h_y(r) /r (r-r_b) =0 , h_y” (r)+r_s h_y' (r)/r(r-r_s)+[(r^5 ω ^2-(r-r_s) (r (ℓ (ℓ+1) r-2 r_b)+3 r_b r_s))/r^2 (r-r_b) (r-r_s)^2] h_y(r) -i √(3) c_r(r) √(r_b r_s)/r^4 ω =0 . Before studying the complete system, we discuss two limits of interest. §.§ Schwarzschild black string and soliton limits The limits are the r_b → 0 and the r_s→ 0 limits. In the first the background solution corresponds to the “Schwarzschild black string” (meaning a four-dimensional Schwarzschild solution times a transverse direction). The second limit corresponds to a horizonless solution which we are going to call the Schwarzschild soliton. These solutions are mapped one into another via a double Wick rotation along the coordinates t and y and an exchange of r_s↔ r_b. This does not imply, as we are going to see, that the equations for the perturbations around these backgrounds coincide (up to an exchange of r_s ↔ r_b). 
The reason is that here we are considering perturbations with no dependence in y. Therefore, the perturbations are not mapped one into each other by the double Wick rotation, even if the background solutions are. * Schwarzschild black string: r_b→0. In this limit the full system reduces to four independent second-order differential equations for h_r, h_y, c_r, c_y. When recast into Schrödinger form, they are given by Ψ_s” (r) +Q_s(r)Ψ_s (r) =0 , where Q_s is the spin-s Regge-Wheeler potential <cit.> Q_s(r)=r^4ω^2-ℓ(ℓ+1)r(r-r_s)+s^2 r_s (r-r_s) + r_s^2/4/r^2 (r-r_s)^2 , and Ψ_s∼{[ c_r s=0; h_y,c_y s=1; h_r s=2 ]. . * Schwarzschild soliton: r_s→0. The four differential equations can be cast in Schrödinger form, Ψ_ϵ”+Q_ϵ(r)Ψ_ϵ=0 , with Q_ϵ =r^3ω^2-ℓ(ℓ+1)r + 2ϵ r_b/r^2(r-r_b) , and Ψ_s ∼{[ c_r,c_y ϵ=0; h_r,h_y ϵ=1; ]. . §.§.§ The (h_r,c_y,h_t) system We focus on the system (<ref>) describing the perturbations (h_t,h_r,c_y). The first equation can be solved for h_t, leading to a coupled system of two differential equations for h_r, c_y. The system can be decoupled by the linear transformation, h_r = g_h(r) [Ψ_+(r)+Ψ_-(r) ] , c_y = g_c(r)[ (1+γ) Ψ_+(r) +(1-γ) Ψ_-(r) ] , with g_h(r)=r^7 2 (r-r_b)(r-r_s)^3/2 , g_c(r)=- i (2 r_b+3 r_s) 2 ω√(3 r_b r_s)r^1 2 (r-r_s)^1 2 , and γ≡√(1+12(ℓ+2)(ℓ-1)r_b r_s/(2r_b+3r_s)^2) . In terms of these variables the system reduces to two decoupled ordinary differential equations of the Schrödinger form, Ψ”_±+Q_±Ψ_±=0 , with Q_± = r^3/(r-r_s)^2(r-r_b)[ω^2-ℓ(ℓ+1)/r^2+(2r_b+3r_s)(1±γ)+2r_s( ℓ^2+ℓ+1)/2r^3. . -r_s[(2r_b+3r_s)(1±γ)+8r_b+3/2r_s]/2r^4+15r_br_s^2/4r^5] . Let us remark that (<ref>) is a differential equation with five Fuchsian singularities: three regular ones at r=0,r_s,r_b, and two colliding at r=∞. Crucially, this radial equation cannot be mapped to a Heun equation or any of its confluent versions, as is the case for the scalar perturbations in the topological star geometry, which are described by a Confluent Heun equation <cit.>, similar to all black holes in the Kerr-Newman family <cit.>. §.§.§ The (h_y, c_t, c_r) system We can proceed analogously for the system (<ref>). We solve the first equation for c_t, which yields a system of two coupled second-order ODEs. Then we look for a change of variables of the form [ h_y; c_r; ]= [ g_1+(r) g_1-(r); g_2+(r) g_2-(r); ][ Φ_+; Φ_-; ] , and fix the functions g_1+, g_2-, g_2+ and g_2- such that the system decouples in two independent second-order ODEs for Φ_+ and Φ_-. We find that for this system this not possible. However, for the choice of functions g_1+=g_2+=1 , g_1-=-g_2-=r^3/(r-r_b)(r-r_s) , we find that the second system (<ref>) reduces to the following differential equations Φ”_±+r_s/r(r-r_s)Φ'_±+ [-4 r_b r_s +r(2r_b+r_s)]ω±i√(3 r_b r_s) r[1+ℓ(ℓ+1 )ω^2]/2r^2(r-r_b)(r-r_s)Φ_∓ + [r^3ω^2/(r-r_b)(r-r_s)^2±i√(3 r_b r_s) /2r(r-r_b)(r-r_s)(ℓ (ℓ+1) ω+1/ω). .+2r(r_b-ℓ (1+ℓ)r)-r_s(r+2r_b)/2r^2(r-r_b)(r-r_s)]Φ_±=0 . Then we can use the equation which does not contain derivatives of Φ_- (equivalently, Φ_+) and solve it (for Φ_-). After plugging the resulting expression in the remaining equation, we get a fourth-order ODE for Φ_+. Given its complexity, we omit the details and the study of the QNMs. § QNMS OF TOPOLOGICAL STARS AND BLACK STRINGS In this section we compute the QNM spectrum associated to the (h_t,h_r,c_y) perturbations, whose dynamics is described by the master equations (<ref>). 
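Since every ingredient of the master potentials Q_± is explicit, the two limits discussed above provide a convenient consistency check. The Python sketch below (illustrative only; the sample values of r, ω and ℓ are arbitrary) implements Q_± together with the limiting Regge-Wheeler and soliton potentials, and verifies numerically that Q_+ reduces to the spin-2 (respectively ϵ = 1) potential and Q_− to the spin-1 (respectively ϵ = 0) potential as r_b → 0 (respectively r_s → 0).

```python
import numpy as np

def gamma_pm(l, r_s, r_b):
    return np.sqrt(1.0 + 12.0 * (l + 2) * (l - 1) * r_b * r_s / (2 * r_b + 3 * r_s)**2)

def Q_master(r, omega, l, r_s, r_b, sign=+1):
    """Potential of the master equation Psi'' + Q_pm Psi = 0 for the (h_t, h_r, c_y) sector."""
    g = gamma_pm(l, r_s, r_b)
    a = (2 * r_b + 3 * r_s) * (1 + sign * g)
    bracket = (omega**2 - l * (l + 1) / r**2
               + (a + 2 * r_s * (l**2 + l + 1)) / (2 * r**3)
               - r_s * (a + 8 * r_b + 1.5 * r_s) / (2 * r**4)
               + 15 * r_b * r_s**2 / (4 * r**5))
    return r**3 / ((r - r_s)**2 * (r - r_b)) * bracket

def Q_regge_wheeler(r, omega, l, r_s, s):
    """Spin-s potential of the Schwarzschild black-string limit r_b -> 0."""
    return (r**4 * omega**2 - l * (l + 1) * r * (r - r_s)
            + s**2 * r_s * (r - r_s) + r_s**2 / 4.0) / (r**2 * (r - r_s)**2)

def Q_soliton(r, omega, l, r_b, eps):
    """Potential of the Schwarzschild-soliton limit r_s -> 0."""
    return (r**3 * omega**2 - l * (l + 1) * r + 2 * eps * r_b) / (r**2 * (r - r_b))

if __name__ == "__main__":
    r, omega, l = 3.0, 0.7, 2
    # r_b -> 0: Q_+ reduces to the spin-2 (h_r) and Q_- to the spin-1 (c_y) Regge-Wheeler potential
    print(Q_master(r, omega, l, r_s=1.0, r_b=0.0, sign=+1) - Q_regge_wheeler(r, omega, l, 1.0, s=2))
    print(Q_master(r, omega, l, r_s=1.0, r_b=0.0, sign=-1) - Q_regge_wheeler(r, omega, l, 1.0, s=1))
    # r_s -> 0: Q_+ reduces to the eps = 1 and Q_- to the eps = 0 soliton potential
    print(Q_master(r, omega, l, r_s=0.0, r_b=1.0, sign=+1) - Q_soliton(r, omega, l, 1.0, eps=1))
    print(Q_master(r, omega, l, r_s=0.0, r_b=1.0, sign=-1) - Q_soliton(r, omega, l, 1.0, eps=0))
```

All four differences vanish to machine precision, confirming that the ± branches of the master equation interpolate between the spin-2/spin-1 black-string modes and the ϵ = 1/ϵ = 0 soliton modes.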
We will mainly use a semi-analytical method developed by Leaver <cit.>, which has been recently applied to the study of scalar perturbations of topological stars, <cit.>. Nevertheless, we will compare the results obtained using Leaver's method against those obtained using the WKB approximation and via a direct numerical integration of the differential equation. QNMs correspond to solutions of the Schrodinger-type equation (<ref>) satisfying the boundary conditions[We omit the ± subindex in the master variables for the sake of clarity.] Ψ(r) r→ r_0∼ (r-r_0)^λ_0 , Ψ(r) r→∞∼ r^λ_∞ e^iω r . with r_0= {[ r_b (top star); r_s (black string); ]. , λ_0 ∈{[ ℝ_+ (top star); i ℝ_- (black string); ]. . Namely, we are imposing outgoing boundary conditions at infinity for both background solutions. The difference lies in the boundary conditions imposed at r_0. For top stars we just demand regularity at the cap: r=r_b. Instead, for the black string we impose incoming boundary conditions at the horizon: r=r_s. The specific values of λ_0 and λ_∞ can be obtained by solving the differential equation around r=r_0 and r=∞. The details are given in sections <ref> and <ref>. Solutions satisfying these boundary conditions exist only for a discrete choice of complex frequencies ω_n: the QNMs. The rest of the section is organized as follows. First, in sections <ref> and <ref> we briefly describe the WKB approximation and the numerical integration methods we are going to use to further confirm the results obtained using Leaver's method. The latter will be discussed in sections <ref> (top star) and <ref> (black string). §.§ WKB approximation A rough estimate of the QNM frequencies, ω, can be obtained from a WKB semiclassical approximation of the wave solution <cit.> around the “light rings”: extrema of the effective potential -Q(r) where both Q and its first derivative vanish: Q(r_c;ω_c)=Q'(r_c;ω_c)=0 . To estimate the QNM frequencies, we promote ω_c to a complex number (adding a small imaginary part) and demand that the integral between two zeros r_± of Q(r) satisfies the Bohr-Sommerfeld quantization condition: ∫_r_-^r_+√(Q(r;ω))dr=π(n+1/2) . The solution to linear order in the imaginary part of ω is given by the WKB formula, namely ω_n^ WKB=ω_c - i(n+1/2) √(∂_r^2 Q)/∂_ω Q|_ω=ω_c,r=r_c . §.§ Direct numerical integration QNM frequencies can be alternatively obtained via a direct numerical integration of the differential equation in the domain r∈ [r_0,∞] with boundary conditions (<ref>). Since both r=r_0 and r=∞ are singular points, boundary conditions cannot be imposed exactly at these points, but one can choose some arbitrary points nearby. The procedure has several steps: First, one determines the solution (with the appropriate boundary conditions) as a Taylor expansion around r_0. Then, one evaluates the solution at a point r_0+ϵ with ϵ small, and uses the result as a boundary condition to find a numerical solution, Ψ_0. One then proceeds in the same way to compute a numerical solution Ψ_∞ satisfying outgoing boundary conditions at infinity. The QNM frequencies are then obtained by requiring the matching of the two functions at an intermediate point. This is equivalent to demanding the vanishing of the Wronskian: Ψ_0'(r;ω) Ψ_∞(r;ω)- Ψ_0(r;ω) Ψ_∞'(r;ω)=0 , at some point r in the middle. We notice that the Wronskian is independent of the choice of this point. Thus, (<ref>) becomes an equation for ω whose solutions are the QNMs. 
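A schematic implementation of this matching procedure is sketched below in Python (an illustration, not the production code used for the tables): the local data at r_0 and at large radius are seeded with the leading behaviours (r − r_0)^λ_0 and e^{iωr} (the power-law prefactor r^λ_∞ is dropped, which only affects subleading 1/r corrections of that branch), the two branches are integrated with a standard ODE solver, and the Wronskian is evaluated at an intermediate radius. In practice one seeds the boundary data with several terms of the local series expansions and locates the complex zeros of the mismatch with a root finder; in the demo, the indicial exponent is the ingoing one read off from the near-horizon form of Q in the r_b → 0 limit, and the trial frequencies are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_psi(Q, omega, r_span, psi0, dpsi0):
    """Integrate Psi'' + Q(r, omega) Psi = 0 as a complex first-order system."""
    rhs = lambda r, y: [y[1], -Q(r, omega) * y[0]]
    return solve_ivp(rhs, r_span, [complex(psi0), complex(dpsi0)],
                     rtol=1e-9, atol=1e-12, dense_output=True)

def wronskian_mismatch(Q, omega, r0, lam0, r_match, r_far, eps=1e-3):
    """W = Psi_0' Psi_inf - Psi_0 Psi_inf' at r_match; QNM frequencies are its complex zeros."""
    # branch seeded just off r0 with the leading local behaviour (r - r0)^lam0
    left = integrate_psi(Q, omega, (r0 + eps, r_match), eps**lam0, lam0 * eps**(lam0 - 1.0))
    # branch seeded at large radius with the outgoing behaviour Psi ~ e^{i omega r}
    right = integrate_psi(Q, omega, (r_far, r_match),
                          np.exp(1j * omega * r_far), 1j * omega * np.exp(1j * omega * r_far))
    pl, dpl = left.sol(r_match)
    pr, dpr = right.sol(r_match)
    return dpl * pr - pl * dpr

if __name__ == "__main__":
    # illustration in the r_b -> 0 (spin-2 Regge-Wheeler) limit with r_s = 1 and l = 2
    Q = lambda r, w: (r**4 * w**2 - 6.0 * r * (r - 1.0) + 4.0 * (r - 1.0) + 0.25) / (r**2 * (r - 1.0)**2)
    for w in (0.70 - 0.10j, 0.75 - 0.18j):   # trial frequencies, not yet roots
        lam0 = 0.5 - 1j * w                  # ingoing indicial exponent read off from Q near r = r_s
        print(w, abs(wronskian_mismatch(Q, w, r0=1.0, lam0=lam0, r_match=3.0, r_far=40.0)))
```

The QNM frequencies are then the complex ω at which the printed mismatch vanishes, which one locates by scanning or by a Newton/secant iteration.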
When comparing this method against the Leaver method, we will bear in mind that this numerical method is known to be numerically stable only for frequencies with a small imaginary part <cit.>. §.§ QNMs: top stars In this section we compute the QNM spectrum of top stars using Leaver's method. As explained in the previous section, top stars correspond to smooth horizonless geometries ending a cap at r=r_b with r_b> r_s. Hence, QNMs in this geometry correspond to solutions of (<ref>) satisfying regular boundary conditions at the cap and behaving as an outgoing wave at infinity. Given this, we consider the following ansatz Ψ(r)=e^iω r r^-3 2 (r-r_s)^λ_s∑_n=0^∞ c_n(r-r_b/r-r_s)^n , with λ_s=1/2+ iω 2 (r_b+2 r_s) . The expansion (<ref>) satisfies the required boundary conditions at the cap and infinity, and solves (<ref>) near r=r_b and near r→∞. Plugging (<ref>) into (<ref>) yields a four-term recursion relation, α_n c_n+1+β_n c_n+γ_n c_n-1+δ_n c_n-2=0 , n≥ 0 . The explicit expressions of the coefficients α_n, β_n, γ_n and δ_n for Ψ_+ are α _n = -(n+1) (n+2) , β _n = ℓ (ℓ+1) +2 n(n+2) +1-γ +r_s(3(1+γ)-2n(n-2))/2 r_b-3 i (n+1) ω r_b-ω ^2 r_b^3/r_b-r_s , γ _n = -n (n+1)+1+γ+r_s (3 γ-4 (n-3) n-2 ℓ (ℓ+1)-5)/2 r_b+1/2 i ω(2 n r_b+r_b+(10 n-7) r_s)+ω ^2 r_b^2 (r_b+7 r_s)/4 (r_b-r_s) , δ _n = (n-2) r_s (n-2 i ω r_s-2)/r_b-1/4ω r_s (ω r_b+4 i (n-2)+4 ω r_s)-ω ^2 r_s^3/r_b-r_s . The coefficients of the recursion for Ψ_- are obtained by sending γ→-γ. Truncating n to some large number N, the recursion relation (<ref>) becomes an N-dimensional matrix equation M· c=0, where c is a vector containing the c_n coefficients entering in the ansatz (<ref>). Therefore, a non-trivial solution exists only if the determinant of M vanishes, which provides the equation satisfied by the QNM frequencies. However, we are not going to use the vanishing of the determinant to find the QNMs. Instead, we find more convenient to first tridiagonalize M, α'_0 = α_0, β'_0=β_0 , α'_1 = α_1, β'_1=β_1, γ'_1=γ_1 α'_n = α_n, β'_n=β_n-δ_n/γ'_n-1α'_n, γ'_n = γ_n-δ_n/γ'_n-1β'_n, δ'_n=0 , n≥2 and then use the fact that the integrability of the three-term recursion boils down to the continuous fraction equation: β'_n +α'_nγ'_n/β'_n-1-α'_n-2γ'_n-1/β'_n-2-…+α'_nγ'_n+1/β'_n+1-α'_n+1γ'_n+2/β'_n+2-…=0 . The QNM frequencies can be obtained by solving the above equation for a particular n, let us say n=0. In practice, the continuous fraction is truncated by keeping N terms, with N a large enough number. In Figure <ref> (left) the result for the QNM frequencies is displayed for the ℓ=2 mode in a top star with r_s=0.8, r_b=1 with N=50. The plots on the right show the convergence of the method, which typically occurs for N≥ 30. In Table <ref> we display the results for QNM frequencies for the fundamental mode n=0 as ℓ varies from 2 to 10. The results are compared against those obtained from a direct numerical integration of the differential equation, showing an excellent agreement. We omit the results based on a WKB approximation of the solution as the latter fails in reproducing the imaginary part of the QNM frequencies for the low n modes. Finally, in appendix <ref> we collect more results for various representative choices of r_s, r_b, ℓ. In all the solutions we have checked, QNM frequencies have negative imaginary parts, suggesting the stability of top stars against metric and electromagnetic odd perturbations. 
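For concreteness, the bookkeeping of the procedure just described (the forward Gaussian elimination that removes the δ_n term from the four-term recursion, followed by the evaluation of the resulting continued fraction) can be organized as in the following Python sketch. This is an illustration rather than the code used for the tables: the coefficient functions alpha, beta, gamma, delta, encoding α_n, β_n, γ_n, δ_n of the equations above (or their black-string analogues of the next subsection) as functions of n and ω, must be supplied by the user.

```python
import numpy as np

def reduce_to_three_term(alpha, beta, gamma, delta, omega, N):
    """Forward elimination of   alpha_n c_{n+1} + beta_n c_n + gamma_n c_{n-1} + delta_n c_{n-2} = 0
    into a three-term recursion with primed coefficients (delta'_n = 0)."""
    a = [alpha(0, omega), alpha(1, omega)]
    b = [beta(0, omega), beta(1, omega)]
    g = [0.0, gamma(1, omega)]              # gamma'_0 multiplies c_{-1} and never enters
    for n in range(2, N):
        d = delta(n, omega)
        a.append(alpha(n, omega))
        b.append(beta(n, omega) - d * a[n - 1] / g[n - 1])
        g.append(gamma(n, omega) - d * b[n - 1] / g[n - 1])
    return a, b, g

def continued_fraction(a, b, g):
    """Leaver continued fraction  b'_0 - a'_0 g'_1/(b'_1 - a'_1 g'_2/(b'_2 - ...)),
    evaluated backwards from the truncation depth N = len(b); QNMs are its complex zeros."""
    frac = 0.0
    for n in range(len(b) - 1, 0, -1):
        frac = a[n - 1] * g[n] / (b[n] - frac)
    return b[0] - frac

def secant_root(f, z0, z1, tol=1e-10, maxit=100):
    """Minimal complex secant iteration for the quantization condition f(omega) = 0."""
    f0, f1 = f(z0), f(z1)
    for _ in range(maxit):
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z2 - z1) < tol:
            return z2
        z0, f0, z1, f1 = z1, f1, z2, f(z2)
    return z1
```

A QNM frequency is then a complex zero of f(ω) = continued_fraction(*reduce_to_three_term(alpha, beta, gamma, delta, ω, N)), which can be located with secant_root starting from a WKB or direct-integration estimate, increasing the truncation depth N until the result stabilizes (N of order 30 to 50 in the cases shown here).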
§.§ QNM: black string The analysis for the black string r_b< r_s follows mutatis mutandis the same steps than that for topological stars, but now incoming boundary conditions have to be imposed at the horizon r=r_s. Therefore, the ansatz now becomes Ψ(r)=e^iω r r^-3 2 (r-r_s)^λ_s (r-r_b)^λ_b∑_n=0^∞ c_n(r-r_s/r-r_b)^n , with λ_s = 1 2 +ω r_s^3 2√(r_b-r_s) , λ_b = 1+ iω 2( r_s-r_b) ( 2 r_s^2-r_b^2-r_b r_s+2 √( r_s^3(r_s-r_b) )) . Plugging this into (<ref>) yields again a four-term recursion relation as in (<ref>), but this time with coefficients given by: α _n =-(n+1)^2 r_s+2 i (n+1) ω r_s^2/σ , β _n =-1/2 r_s (-2 γσ ^2+5 γ -2 ℓ^2-2 ℓ+2 n^2 σ ^2-6n^2-6 n σ ^2+2 n+2 σ ^2+1) -i (σ +1) ω(2 nσ ^2-6 n σ +12 n+σ ^2+5 σ -2) r_s^2/2 σ-(σ +1)^3 ω ^2 r_s^3/σ ^2 , γ _n = 1/2 r_s(-2 γσ ^2+5 γ +2 ℓ^2 σ ^2-2 ℓ^2+2 ℓσ ^2-2 ℓ+4 n^2σ ^2-6 n^2-16 n σ ^2+16 n+14 σ ^2-9) -i (σ +1)^2ω(n σ ^3-2 n σ ^2+6 n σ -6 n-2 σ ^3+4 σ ^2-10σ +8) r_s^2/σ+(σ ^2-8 σ +8) (σ+1)^4 ω ^2 r_s^3/4 σ ^2 , δ _n = -i (2 n-5) (σ -2) (σ-1) (σ +1)^3 ω r_s^2/2 σ-(n-3) (n-2) (σ -1) (σ +1)r_s +(σ -2)^2 (σ -1) (σ +1)^5 ω ^2 r_s^3/4 σ ^2 , with σ=√( 1-r_b r_s) . Figure <ref> shows some QNM frequencies (different overtones) r_s=1, r_b=0.8 and ℓ=2, again for the Ψ_+ perturbation. Those for n=0 with ℓ varying from 2 to 10 are displayed in Table <ref>, together with a comparison against the WKB and direct integration methods. As we can see, we find an pretty good agreement. Finally, other representative examples are collected in the Appendix <ref>. For r_b=0 we reproduce the results for Schwarzschild black holes, as expected. As in the analysis of the topological star, we find that for all choices of r_s, r_b, ℓ we have checked the imaginary parts of the QNM frequencies of black strings are always negative, strongly suggesting the stability of these solutions against this type of perturbations. § GREGORY-LAFLAMME INSTABILITIES Black strings and branes in dimensions higher than four typically exhibit a classical instability, known as the Gregory-Laflamme (GL) instability <cit.>. A characteristic feature of the GL perturbations is that they have a momentum, p, along the internal directions. Typically, the instability appears when p is smaller than a certain threshold value, p_⋆, while the system is stable under perturbations with p>p_⋆. This suggests that the mode with p=p_⋆ (which is referred to as the threshold unstable mode) is time independent. Based on the existence of a threshold time-independent mode, the domain of GL stability of black strings was determined in <cit.> to be r_s<2 r_b. Furthermore, <cit.> used the double Wick rotation symmetry that exchanges black strings with topological stars, r_s↔ r_b and ω↔ i p to argue that top stars are stable when r_b < 2 r_s.[In the analysis of <cit.>, the role of the threshold mode is played by a mode with p=0 and ω = i p_⋆.] We also know that when r_b is fixed and r_s=0, the top star becomes Euclidean Schwarzschild times time, and this solution suffers from a Gross-Perry-Yaffe instability <cit.>. Hence, we expect the instability of top stars with 0< r_s < r_b/2 to be of the same type <cit.>. Here we review the arguments of <cit.>, adapting them to the more general perturbations where both p and ω are non-vanishing. We then apply the Leaver method to compute the QNM frequencies and show directly the existence of modes with positive imaginary parts outside of the stability domains. 
Let us consider the following even perturbation of the metric and two-form with ℓ=0 (hence, spherically-symmetric), h_μνx^μx^ν = e^ i p y- iω t[ h_1(r) r^2+r^2 k(r) (θ^2+sin^2θ ϕ^2) + 2 h_2(r) r y + 2 h_3(r) r t ] , c_μνx^μx^ν = 2 e^ i p y- iω t c(r) t∧ y . One can check that for any choice of p and ω this is not pure gauge: h_μν cannot be written as ∇_(μζ_ν). The field equations (<ref>) and (<ref>) allow us to solve for c, h_1, h_2 and h_3 in terms of k, and we find that the latter satisfies the second-order differential equation d dr[ A(r) k'(r) ] - B(r) k(r) =0 , with A(r) =(r-r_b) (r-r_s)/ W(r)^2 , B(r) = r^3 W(r) [p^2 (r-r_s)-ω ^2 (r-r_b)]+2 (r-r_b) (r-r_s) [p^2 (2 r_b-r_s)-ω ^2 (2 r_s-r_b)]/ W(r)^3 (r-r_b) (r-r_s) , and W(r)=p^2 (4 r-3 r_s)-ω ^2 (4 r-3 r_b) . We look for solutions to (<ref>) satisfying the boundary conditions, k(r)r→ r_0∼ (r-r_0)^λ_0 , k(r)r→∞∼ e^ i r √(ω^2-p^2) , where r_0=max[r_s, r_b] and λ_0 is determined by solving the equation near r=r_0 and imposing incoming (for the black string) or regular (for the top star) boundary conditions at r=r_0. The GL modes correspond to solutions of (<ref>) satisfying (<ref>), and exhibiting an exponential decay at infinity <cit.>: Re( i√(ω^2-p^2)) <0 . In order to review the arguments presented in <cit.> in a unified way, let us first consider perturbations where p is real and ω is purely imaginary and (<ref>) is automatically satisfied. Multiplying (<ref>) by k(r), integrating it over the domain [r_0 , ∞], and integrating by parts, we obtain: ∫^∞_r_0 r [A(r) (k'(r))^2+ B(r) k(r)^2]=∫^∞_r_0 d[ A(r) k(r) k'(r)] =0 . The vanishing of the right hand side follows from the fact that A(r) vanishes at r_0 and k(r) vanishes at infinity. In addition if p is real and ω purely imaginary, both A(r) and B(r) are real and positive in the domain: top star: r_s< r_b < 2 r_s , black string: r_b< r_s < 2 r_b . Thus, the only (real) solution of (<ref>) is the trivial one, k=0. The geometry is therefore stable in this domain. Given this, we provide further evidence applying again Leaver's method. Since the procedure is analogous to the one described in the previous section, we shall ignore the technical details and present directly the results in Figures <ref> and <ref>, where we plot the argument of the QNM eigenvalue equation for various choices of r_s and r_b. The QNM frequencies show up as peaks on these plots. A summary of our findings is: * Black string (Figure <ref>): We find unstable modes only in the regime r_s>2r_b for p<p_⋆≈ 0.435943 in agreement with expectations in <cit.>. * Top star (Figure <ref>): We find unstable modes for r_b>2r_s and p small enough. However when p is finite, it cannot be arbitrarily small since regularity of the geometry requires the quantization condition: p=n_y/R_y=n_y √(r_b-r_s)/2 r_b^3/2 , n_y ∈ℤ , We find that all modes with n_y ≥ 1 are stable, therefore only the n_y=0 mode leads to a potential instability. § CONCLUSIONS We have calculated the QNM frequencies associated to odd perturbations of topological stars and magnetized black strings in five-dimensional Einstein-Maxwell theory. Our analysis of the QNM spectrum mainly relies on Leaver's method <cit.>, which has been suitably generalized in order to apply it to an ODE with five Fuchsian singularities, reducing the 4-term recursion relation to a 3-term one via a tri-diagonalization of the eigenvalue matrix. 
We have confirmed the expectations of <cit.> which suggest classical stability inside the domain top star: r_s< r_b < 2 r_s , black string: r_b< r_s < 2 r_b , as we have found that all the perturbations we have considered inside the above domains decay exponentially in time. Outside of these domains we have verified that topological stars suffer from a Gross-Perry-Yaffe-type instability <cit.>, whereas magnetic black strings suffer from the usual Gregory-Laflamme instability <cit.>. Such instabilities reflect themselves on the existence of QNMs with positive imaginary part which are associated to even perturbations with ℓ=0. There are several obvious next steps to our investigation. The first is to study the possible gravitational-wave signatures that may distinguish top stars from black holes: multipolar structure<cit.>, echoes <cit.> and tidal effects during the inspiral phase of black-hole merger <cit.>. The next is to consider the even perturbations, and try to solve the underlying system of ODE's numerically. This will involve a shooting problem in several functions of one variable. Systems of similar complexity have been solved by shooting in other circumstances <cit.>, so we believe this problem is within reach. Another extension of our calculation is to the running-Bolt solutions <cit.>, which are also obtained by magnetizing the bolt of Euclidean Schwarzschild times time in five dimensions. For these solutions we also expect a Gross-Perry-Yaffe-type instability for small magnetic fluxes, and possibly a stable solution for larger fluxes <cit.>. Another cohomogeneity-one solution with fluxes that can be studied using our method is the magnetized Atiyah-Hitchin solution <cit.>, which is the M-theory uplift of Type-IIA Orientifold 6-planes with fluxes. Our calculation serves as a first step towards and a benchmark in the determination of the stability or lack thereof for more generic non-extremal topological stars that have a fluxed bolt. The most generic such solutions are cohomogeneity-two <cit.>, and hence determining the quasinormal frequencies will require solving PDE's, using methods similar to those of <cit.>. These calculations should reduce in certain limits to the calculations we do here, and hence our calculations should serve as a useful benchmark for these more complicated calculations. § ACKNOWLEDGMENTS: We would like to thank Massimo Bianchi, Pablo A. Cano, Giuseppe Dibitetto, Alexandru Dima, Francesco Fucito, Pierre Heidmann, Marco Melis, Paolo Pani and David Pereniguez for stimulating discussions and exchanges. The work of IB is supported in part by the ERC Grants 787320 - QBH Structure and 772408 - Stringlandscape. The work of GDR, JFM and AR is supported by the MIUR-PRIN contract 2020KR4KN2 - String Theory as a bridge between Gauge Theories and Quantum Gravity and by the INFN Section of Rome “Tor Vergata”. § QNMS In this appendix we collect some tables displaying QNM frequencies corresponding to the perturbations Ψ_+ (left) and Ψ_- (right) for various choices of r_s, r_b and ℓ. * r_s=0.8, r_b=1: * r_s=0, r_b=1: * r_b=0.8, r_s=1: JHEP
arXiv:2406.18496v1 [astro-ph.CO] (26 Jun 2024; cross-listed: gr-qc, hep-ph, hep-th)
Λ_sCDM cosmology: Alleviating major cosmological tensions by predicting standard neutrino properties
Anita Yadav, Suresh Kumar, Cihad Kibris, Ozgur Akarsu
anita.math.rs@igu.ac.in Department of Mathematics, Indira Gandhi University, Meerpur, Haryana 122502, India suresh.kumar@plaksha.edu.in Data Science Institute, Plaksha University, Mohali, Punjab-140306, India kibrisc@itu.edu.tr Department of Physics, Istanbul Technical University, Maslak 34469 Istanbul, Turkey akarsuo@itu.edu.tr Department of Physics, Istanbul Technical University, Maslak 34469 Istanbul, Turkey § ABSTRACT In this work, we investigate a two-parameter extension of the Λ_ sCDM model, as well as the ΛCDM model for comparison, by allowing variations in the effective number of neutrino species (N_ eff) and their total mass (∑ m_ν). Our motivation is twofold: (i) to examine whether the Λ_ sCDM framework retains its success in fitting the data and addressing major cosmological tensions, without suggesting a need for a deviation from the standard model of particle physics, and (ii) to determine whether the data indicate new physics that could potentially address cosmological tensions, either in the post-recombination universe through the late-time (z∼2) mirror AdS-to-dS transition feature of the Λ_ sCDM model, or in the pre-recombination universe through modifications in the standard values of N_ eff and ∑ m_ν, or both. Within the extended Λ_ sCDM model, referred to as Λ_ sCDM+N_ eff+∑ m_ν, we find no significant tension when considering the Planck-alone analysis. We observe that incorporating BAO data limits the further success of the Λ_ sCDM extension. However, the weakly model-dependent BAOtr data, along with Planck and Planck+PP&SH0ES, favor an H_0 value of approximately 73 km s^-1 Mpc^-1, which aligns perfectly with local measurements. In cases where BAOtr is part of the combined dataset, the mirror AdS-dS transition is very effective in providing enhanced H_0 values, and thus the model requires no significant deviation from the standard value of N_ eff = 3.044, remaining consistent with the standard model of particle physics. Both the H_0 and S_8 tensions are effectively addressed, with some compromise in the case of the Planck+BAO dataset. Finally, the upper bounds obtained on ∑ m_ν≲ 0.50 eV are fully compatible with neutrino oscillation experiments. Our findings provide evidence that late-time physics beyond ΛCDM, such as Λ_ sCDM, without altering the standard description of the pre-recombination universe, can suffice to alleviate the major cosmological tensions, as indicated by our analysis of Λ_ sCDM+N_ eff+∑ m_ν. Λ_ sCDM cosmology: Alleviating major cosmological tensions by predicting standard neutrino properties Özgür Akarsu July 1, 2024 ======================================================================================================= § INTRODUCTION Insofar as the most contemporary observations are concerned, the energy budget of the present-day universe consists mostly of cold dark matter (CDM) and dark energy (DE). The standard Lambda Cold Dark Matter (ΛCDM) model, resting on these elusive dark constituents, has, without a doubt, provided a marvelous description of the observed cosmic phenomena, including the late-time accelerated expansion <cit.> via its positive cosmological constant Λ assumption, cosmic microwave background (CMB) radiation <cit.>, and its minute fluctuations, as well as the formation and growth of large-scale structures (LSS) <cit.>. As successful as it may seem, ΛCDM, has been found to be fraught with a number of cracks over the past few years. 
As the observational data keep growing and improving in precision, not only are brand-new discrepancies with independent observations emerging within the framework of the ΛCDM model, but some of the existing ones also escalate to higher degrees of significance <cit.>. The most notorious of them all is in the value of the Hubble constant H_0, known as the H_0 tension <cit.>. It captures a more-than-5σ discordance between the local measurements by the SH0ES team using the Cepheid-calibrated distance ladder approach, which finds H_0=73.04±1.04  km s^-1 Mpc^-1 (73.30 ± 1.04  km s^-1 Mpc^-1, when including high-z SN Ia) <cit.>, and the latest measurement of 73.17 ± 0.86  km s^-1 Mpc^-1 <cit.> (see also 73.22 ± 0.68 (stat) ± 1.28 (sys)  km s^-1 Mpc^-1 using Cepheids, TRGB, and SBF Distance Calibration to SN Ia <cit.>), and the value H_0=67.36±0.54  km s^-1 Mpc^-1 estimated by the CMB measurements assuming ΛCDM <cit.>. In addition to the H_0 tension, it was suggested that ΛCDM suffers from another tension, though less significant, known as the S_8 tension <cit.>; Planck-ΛCDM predicts a larger weighted amplitude of matter fluctuations, viz., S_8 = 0.830±0.016 <cit.>, than what LSS dynamical probes like weak-lensing, cluster counts, and redshift-space distortion suggest within ΛCDM. For instance, S_8 = 0.759^+0.024_-0.021 (KiDS-1000) <cit.> and S_8 = 0.759±0.025 (DES-Y3) <cit.> from low-redshift measurements are in approximately 3σ tension with the Planck-ΛCDM predicted value. While the scientific community has yet to reach a consensus on whether the H_0 tension arises from systematic errors or yet-to-be-discovered new physics, its persistence across various probes over time diminishes the possibility of systematic causes. This has led many researchers to devote substantial efforts to devising models alternative to ΛCDM. In addressing the H_0 tension, a variety of modifications to ΛCDM have been proposed, which can be broadly categorized as follows: (i) Early Universe Modifications: Introducing new physics in the pre-recombination (z≳ 1100) universe, essentially to reduce the sound horizon scale and thereby increase the H_0 value. Examples include Early Dark Energy (EDE) <cit.>, New EDE <cit.>, Anti de-Sitter-EDE <cit.>, extra radiation parameterized by the effective number of relativistic species N_ eff <cit.>, combined effects of N_ eff and EDE <cit.>, and modified gravity <cit.>, and oscillations in the inflaton potential <cit.>. (ii) Intermediate/Late Universe Modifications: Introducing new physics at intermediate to late times (0.1 ≲ z ≲ 3.0) to adjust the expansion history, viz., H(z), aligning H_0 predictions with its local measurements while remaining consistent with CMB and late-time observational data. 
Examples include the Graduated Dark Energy (gDE) <cit.>, the Λ_ sCDM model—mirror Anti de-Sitter to de-Sitter (AdS to dS) transition in the late universe—conjectured from gDE <cit.>, the Λ_ sVCDM model <cit.> (VCDM <cit.> implemention of Λ_ sCDM), the Λ_ sCDM^+ model (a stringy model of Λ_ sCDM <cit.>), Phantom Crossing Dark Energy <cit.>, Omnipotent Dark Energy <cit.>, dynamical DE on top of an AdS background <cit.>, (non-minimally) Interacting Dark Energy (IDE) <cit.> [A recent model-independent reconstruction of the IDE kernel, using Gaussian process methods as suggested in <cit.>, reveals that DE assumes negative densities for z ≳ 2, suggesting that IDE models do not preclude the possibility of negative DE densities at high redshifts.], running vacuum <cit.>, and Phenomenologically Emergent Dark Energy (PEDE) <cit.>. (iii) Ultra Late Universe Modifications: Implementing changes in either fundamental physics or stellar physics during the recent past (z ≲ 0.01) <cit.>. While our list includes some key examples of attempts to resolve the H_0 tension through new physics, it is by no means exhaustive. For a comprehensive overview and detailed classification of various approaches, one may refer to the Refs. <cit.>. However, addressing the H_0 tension while ensuring compatibility with all available data and without exacerbating other discrepancies, such as the S_8 tension, has turned out to be another challenging task. Currently, only a few models propose simultaneous solutions to both the H_0 and S_8 tensions. Among these, though not exhaustively, are the Λ_ sCDM model <cit.>, New EDE <cit.>, inflation with oscillations in the inflaton potential <cit.>, some IDE models <cit.>, sterile neutrino with non-zero masses combined with dynamical DE <cit.>, dark matter (DM) with a varying equation of state (EoS) parameter <cit.>, AdS-EDE with ultralight axion <cit.>, some running vacuum models <cit.>. However, it remains difficult to assert that any model has been widely accepted as both observationally and theoretically fully satisfactory. Among them, the (abrupt) Λ_ sCDM model stands out for its simplicity, introducing only one extra free parameter compared to the standard ΛCDM model: z_†, the redshift of the rapid mirror AdS-dS transition. We refer readers to Refs. <cit.> for more works considering dark energy assuming negative density values, (mostly) consistent with a negative (AdS-like) cosmological constant, for z ≳ 1.5-2, particularly aiming to address cosmological tensions such as the H_0 and S_8 tensions and, recently, anomalies from JWST. Additionally, Refs. <cit.> suggest such dynamics for dark energy from model-independent/non-parametric observational reconstructions and investigations. The most popular early-time solutions to the H_0 tension, such as EDE <cit.> and extra radiation parameterized by the effective number of relativistic species N_ eff <cit.>, involve inserting an additional energy component into the pre-recombination universe to reduce the sound horizon scale, thereby resulting in a higher H_0 <cit.>. However, the extent to which models reducing the sound horizon can tackle the H_0 tension is severely restricted by the fact that they yield a larger matter density ω_ m to preserve consistency with the CMB power spectrum, thereby chronically worsening the S_8 discrepancy <cit.>. 
Given that early-time modifications focus almost exclusively on the concept of shrinking the sound horizon to increase H_0, this difficulty in addressing both H_0 and S_8 tensions simultaneously turns an already challenging problem into an even more daunting one from the perspective of early-time solutions. On the other hand, it is conceivable that a post-recombination extension of the ΛCDM model that addresses the H_0 tension could remain immune to exacerbating the S_8 tension or even address it. A promising candidate is the Λ_ sCDM cosmology, inspired by the recent conjecture that the universe underwent a spontaneous mirror AdS-dS transition characterized by a sign-switching cosmological constant (Λ_ s) around z ∼ 2 <cit.>. This conjecture emerged following findings in the gDE model, which demonstrated that a rapid smooth transition from AdS-like DE to dS-like DE at z ∼ 2 could address the H_0 and BAO Ly-α discrepancies <cit.>. The Λ_ sCDM cosmology involves a sign-switching cosmological constant, a behavior that can typically be described by sigmoid functions, e.g., Λ_ s(z) = Λ_ s0 tanh[η(z_†-z)]/tanh[η,z_†], where Λ_ s0 > 0 is the present-day value of Λ_ s and η>1 determines the rapidity of the transition; the larger the η, the faster the transition. In the limit as η→∞, we approach the abrupt Λ_ sCDM model <cit.>: Λ_ s→Λ_ s0, sgn[z_†-z] for η→∞, serving as an idealized depiction of a rapid mirror AdS-dS transition, introducing only one extra free parameter to be constrained by the data, compared to the standard ΛCDM model. Detailed observational investigations of the Λ_ sCDM model suggest that it can simultaneously address the H_0, M_ B, and S_8 tensions, as well as the Ly-α, t_0, and ω_ b anomalies. It is also observed that while the model partially steps back from its achievements when the BAO (3D BAO) dataset is included in the analysis, it remains entirely compatible with the weakly model-dependent transversal BAO, i.e., 2D BAO <cit.>. These phenomenological achievements of Λ_ sCDM are now underpinned by significant theoretical progress in elucidating the (mirror) AdS-dS transition phenomenon. The authors of Refs. <cit.> assert that, despite the AdS swampland conjecture suggesting that Λ_ s seems unlikely given the AdS and dS vacua are infinitely distant from each other in moduli space, the Casimir energy of fields inhabiting the bulk can realize the AdS-dS transition conjectured in Λ_ sCDM. It was also shown in Refs. <cit.> that the Λ_ sCDM model with this abrupt/rapid transition can effectively be constructed from a particular Lagrangian containing an auxiliary scalar field with a two-segmented linear potential within a type-II minimally modified gravity framework called VCDM <cit.>. All the aforementioned successes of Λ_ sCDM, despite being one of the most minimal deviations from ΛCDM, and the ensuing theoretical developments suggest that missing pieces of the cosmic puzzle, if any, are likely to be identified in the late universe rather than the early universe. Thus, examining both early and late-time modifications within a viable model would be enlightening in our endeavor to restore cosmic concordance. Following this line of reasoning, we investigate the implications of allowing the effective number of neutrino species, N_ eff, to vary freely along with the redshift at which the mirror AdS-dS transition occurs, z_†, in the Λ_ sCDM model. 
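For concreteness, the dimensionless expansion rate of the abrupt Λ_sCDM model can be sketched in a few lines (Python/NumPy). The density parameters and z_† below are illustrative placeholders rather than best-fit values, radiation is neglected since it is irrelevant at the redshifts where the sign switch operates, and spatial flatness is assumed.

import numpy as np

def E2(z, Om0=0.30, zdag=1.7, sign_switch=True):
    """(H(z)/H0)^2 with an abrupt mirror AdS->dS transition at z = z_dagger."""
    OL0 = 1.0 - Om0                                   # flatness, radiation neglected
    lam = OL0*np.sign(zdag - z) if sign_switch else OL0
    return Om0*(1 + z)**3 + lam

z = np.array([0.0, 1.0, 1.69, 1.71, 3.0])
print(np.sqrt(E2(z)))                      # abrupt Lambda_sCDM
print(np.sqrt(E2(z, sign_switch=False)))   # LambdaCDM with the same Om0

The smaller H(z) at z > z_† relative to ΛCDM, visible in the output, is precisely the deficit that must be compensated by a larger H_0 at z < z_† in order to keep D_M(z_*) fixed, as discussed below.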
The effect of N_ eff is most pronounced when radiation dominates the universe, while the effect of z_† is most noticeable in the late matter-dominated era and beyond. The variation of N_ eff also provides an excellent avenue to assess how well Λ_ sCDM concurs with our best theory of matter, the Standard Model (SM) of particle physics, while addressing major discrepancies like the H_0 and S_8 tensions. In addition to N_ eff, we relax the minimal mass assumption of ΛCDM and allow the sum of mass eigenstates, ∑ m_ν, to be a free parameter to test the model's capabilities and its consistency with neutrino flavor oscillation experiments. Consequently, we place joint constraints on N_ eff and ∑ m_ν in both the Λ_ sCDM+N_ eff+∑ m_ν and ΛCDM+N_ eff+∑ m_ν models. See, e.g., Ref. <cit.> for a similar investigation conducted in the context of PEDE <cit.>. We refer readers to Ref. <cit.> for a comprehensive review on neutrino physics and Refs. <cit.> and references therein for recent discussions and constraints regarding neutrino properties, viz., N_ eff and ∑ m_ν, in the context of cosmology. The remainder of the paper is structured as follows: in <ref>, we present the rationale behind this work and explain the underlying physics of the possible outcomes of having a non-standard effective number of relativistic species and neutrino masses. <ref> introduces the datasets and elaborates on the methodology utilized in the observational analysis. In <ref>, we present the observational constraints on the model parameters under consideration. We then discuss the results in terms of existing tensions such as the H_0 and S_8 tensions and explore the emergence of new ones like N_ eff and the resultant Y_ p in cases where the data favor large N_ eff. Finally, we conclude with our main findings in <ref>. § RATIONALE In this section, we explore the implications of a two-parameter extension of the abrupt Λ_ sCDM model <cit.>, as well as for the ΛCDM model for comparison reasons, achieved by treating N_ eff and ∑ m_ν as free parameters. Allowing these parameters to vary can significantly impact the early universe and its associated cosmological observables. We detail these effects in the following subsections. §.§ Number of relativistic neutrino species The universe, in its history of evolution, underwent a phase of radiation (r) domination when it was filled with a soup of high energy photons (γ) and other relativistic species, such as electrons (e^-), positrons (e^+), neutrinos (ν), and anti-neutrinos (ν̅). This early universe content can collectively be treated as radiation, and its energy density, ρ_ r, can be parameterized in terms of the so-called effective number of relativistic neutrino species, N_ eff, and the energy density of photons, ρ_γ <cit.> ρ_ r = ρ_γ [ 1 + 7/8N_ eff( 4/11)^4/3]. In the instantaneous neutrino decoupling limit, the SM of particle physics, which includes three types of active neutrino flavors, suggests N_ eff = 3. However, in reality, decoupling was an extended process, and neutrinos were not entirely decoupled from the plasma with which they were initially in thermal equilibrium when the e^± annihilation began. Consequently, some of the energy and entropy were inevitably transferred from the annihilating e^± pairs to neutrinos, particularly to those at high energy tail of the neutrino spectrum, as well as photons, slightly heating and pushing them away from the Fermi-Dirac distribution <cit.>. 
Along with QED plasma corrections, the SM therefore predicts the precise value of N_ eff = 3.044 <cit.>. Any significant departure from this predicted value might hint at either new physics or non-standard neutrino properties. Ascertaining its value in various cosmological models is thus crucial for carrying out a consistency check against known particle physics and for probing the physics beyond. In this respect, the possibility of the existence of additional relativistic relics, not accommodated by the SM of particle physics, and the absence of a definitive upper bound on ∑ m_ν, leave the door open for natural and well-motivated extensions of the six-parameter ΛCDM model. These extensions can be achieved by relaxing N_ eff and ∑ m_ν, either separately or jointly. Considering them as free parameters of the extended models, the Planck CMB experiment is capable of constraining N_ eff through the damping scale and small scale CMB anisotropies (based on the damping tail) <cit.>, which finds N_ eff = 2.92_-0.38^+0.36 (95% CL, Planck TT,TE,EE+lowE) <cit.>. Similarly, the sum of neutrino masses ∑ m_ν can be constrained via CMB power spectra and lensing, placing an upper bound of ∑ m_ν < 0.24 eV (95% CL, TT,TE,EE+lowE+lensing) <cit.>. See Ref. <cit.> for model marginalized constraints on neutrino properties, N_ eff and ∑ m_ν from various cosmological data, on top of the standard ΛCDM model and its some well-known extensions. In such models, the effect of dark radiation ρ_ dr, i.e., extra relativistic degrees of freedom such as sterile neutrinos initially in thermal equilibrium with standard model bath, and any non-standard neutrino behavior, generates a deviation Δ N_ eff = N_ eff - 3.044 from the SM value N_ eff = 3.044. If the relic contribution to radiation density is due, say, to extra neutrino species, then we have: ρ_ dr = 7/8Δ N_ eff( T_ν/T_γ)^4 ρ_γ , where T_γ and T_ν are the temperatures of photons and neutrinos, respectively. When Δ N_ eff > 0, the energy density ρ_ r in the total radiation content increases since ρ_ r = ρ_ SM + ρ_ dr, resulting in an early expansion rate H(z) = √(8π G ρ_ r/3) that is enhanced compared to ΛCDM with ρ_ SM. One significant consequence of such an enhanced expansion is the reduction of the sound horizon r_* at recombination. To elaborate, the sound horizon is defined as the maximum comoving distance that acoustic waves can travel in photon-baryon plasma, from the beginning of the universe (z=∞) to the last scattering redshift (z_*), and is given by: r_*=∫_z_*^∞c_ s(z)/H(z) d z, with c_ s(z)=c/√(3(1+3ω_ b/4ω_γ(1+z))) being the sound speed in the photon-baryon fluid. Here, c is the speed of light in the vacuum, ω_ b≡Ω_ b,0h^2 and ω_γ≡Ω_γ,0h^2 are the physical baryon and photon densities, respectively, with Ω_i,0 being the present-day density parameter of the i^ th fluid and h=H_0/100 km s^-1 Mpc^-1 being the reduced Hubble constant. H(z) depends on a given model; therefore, the resultant r_* is also a model-dependent quantity. In ΛCDM, r_*∼ 144 Mpc <cit.>; on the other hand, models with greater early expansion rate H(z>z_*)>H_Λ CDM(z>z_*) have correspondingly smaller r_*<144 Mpc. Since the sound horizon r_* at decoupling represents a known distance scale, it can be used as a standard ruler to define D_M(z_*) = r_* / θ_*, where: D_M(z_*)=∫_0^z_*c dz/H(z) is the comoving angular diameter distance from a present-day observer to the surface of last scattering, and θ_* is the angular acoustic scale. 
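A rough numerical illustration of this mechanism, namely extra radiation shrinking r_*, can be obtained by evaluating the r_* integral above directly. In the sketch below (Python with SciPy), ω_γ ≈ 2.47×10^-5 corresponds to T_CMB = 2.7255 K, and the density values and z_* ≈ 1090 are indicative Planck-like numbers, not the constraints derived in this work.

import numpy as np
from scipy.integrate import quad

c = 299792.458                      # km/s
w_b, w_c, h = 0.0224, 0.120, 0.674  # illustrative physical densities and reduced Hubble constant
w_gamma, z_star = 2.47e-5, 1090.0

def r_star(Neff):
    w_r = w_gamma*(1 + 7/8*(4/11)**(4/3)*Neff)        # total radiation density
    w_m = w_b + w_c
    w_L = h**2 - w_m - w_r                            # flat universe
    H  = lambda z: 100*np.sqrt(w_m*(1 + z)**3 + w_r*(1 + z)**4 + w_L)   # km/s/Mpc
    cs = lambda z: c/np.sqrt(3*(1 + 3*w_b/(4*w_gamma*(1 + z))))          # photon-baryon sound speed
    return quad(lambda z: cs(z)/H(z), z_star, np.inf)[0]                 # Mpc

for Neff in (3.044, 3.5):
    print(Neff, r_star(Neff))       # the larger N_eff gives the smaller r_*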
θ_* is very accurately and nearly model-independently measured with a precision of 0.03% according to the spacing of acoustic peaks in the CMB power spectrum, found to be 100θ_*=1.04110±0.00031 (Planck, 68% CL, TT,TE,EE+lowE+lensing) <cit.>. This implies that any viable model introducing modifications to H(z>z_*) is expected to keep θ_* fixed at the measured value to remain concordant with the CMB. It then follows that imposing such a condition on θ_* in the case of varying r_* requires D_M(z_*) to change, hence H_0 to change as well since D_M(z_*) is much less affected by the changing N_ eff (because the integral Eq. (<ref>) is dominated by its lower limit). That is, for models reducing the sound horizon, H_0 must increase to keep θ_* fixed. In the literature, this is a generic method employed by early-time solutions that modify the pre-recombination universe but leave the post-recombination universe intact, such as EDE models <cit.>, to address the H_0 tension. Λ_ sCDM, however, with its additional switch parameter z_†, allows for non-standard low redshift evolution as the cosmological constant Λ begins dominating the energy budget in the late universe. A negative cosmological constant, Λ<0 when z>z_†, leads to a reduction in the total energy density relative to that of ΛCDM, resulting in H_Λ_ s CDM(z>z_†) < H_Λ CDM(z>z_†). Besides, both Λ_ sCDM and ΛCDM have almost the same sound horizon scale r_* because Λ has a vanishing effect on H(z) at redshifts as high as z>z_*, hence effectively the same r_* / θ_* = D_M(z_*). The deficit in H_Λ_ s CDM(z>z_†) prior to the switching must then be compensated by an enhanced H_Λ_ s CDM(z<z_†), implying a larger H_0, since the D_M(z_*) integrals in both models must yield the same result. We note in this regard that the Λ_ s CDM+N_ eff+∑ m_ν model represents a scenario accommodating both early (N_ eff) and late (z_†) time degrees of freedom, which can be constrained by observational data. Confrontation of such models with observational data might provide extremely valuable hints as to whether we should seek physics/modifications beyond/in the standard model of cosmology in the early or late universe, or both (see Refs. <cit.> for a further discussion). Λ_ s CDM+N_ eff+∑ m_ν would therefore serve as a very illuminating and powerful guide in the quest to develop a more complete and observationally consistent cosmological framework. §.§ Sum of Neutrino Masses It was long assumed in the SM of particle physics that neutrinos were massless family of leptons. However, confirmed by atmospheric and solar neutrino observations, they have been found to have non-zero, albeit very small, masses <cit.>. In this sense, what can be considered as a first step beyond the SM has come not from N_ eff measurements but from efforts to determine neutrino masses. Although their exact masses have not been pinpointed yet, we know that at least two of their mass states are massive, and neutrino oscillation experiments can place bounds on the so-called mass splittings Δ m_ij^2 = m_i^2 - m_j^2, where i,j = 1,2,3 label mass eigenstates m_i and m_j belonging to different neutrino types. Cosmological observations are sensitive to the sum of neutrino masses ∑ m_ν, which in the normal hierarchy (NH) m_1 ≪ m_2 < m_3, is given by ∑ m_ν = m_0 + √(Δ m_21^2 + m_0^2 ) + √(|Δ m_31|^2 + m_0^2), where m_0 is the lightest neutrino mass and conventionally m_0≡ m_1 in the normal mass ordering <cit.>. 
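The lower bounds on ∑ m_ν follow from simple arithmetic on the measured splittings. A minimal sketch (Python), using the Δm²_21 and Δm²_31 values from the oscillation data quoted just below and setting the lightest state to zero mass in each ordering:

import numpy as np

dm212 = 7.49e-5      # eV^2
dm312 = 2.484e-3     # eV^2 (absolute value used for the inverted ordering)

# normal hierarchy: m1 = 0
m2, m3 = np.sqrt(dm212), np.sqrt(dm312)
print("NH minimal sum:", m2 + m3)            # ~0.06 eV

# inverted hierarchy: m3 = 0
m1 = np.sqrt(dm312)
m2 = np.sqrt(dm312 + dm212)
print("IH minimal sum:", m1 + m2)            # ~0.10 eV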
Taking the lightest neutrino mass to be zero (m_1 = 0), we can use the oscillation data, Δ m_21^2 = 7.49^+0.29_-0.17× 10^-5 eV^2 and Δ m_31^2 = 2.484^+0.045_-0.048× 10^-3 eV^2, to compute the minimal sum of masses and find the lower bound ∑ m_ν∼ 0.06 eV <cit.>. Performing the same calculation for the inverted hierarchy (IH) with m_3 ≪ m_1 < m_2 yields ∑ m_ν∼ 0.1 eV. Thus, any total mass value ∑ m_ν < 0.06 eV is ruled out by the oscillation experiments. ΛCDM assumes the normal mass hierarchy with the minimal mass ∑ m_ν = 0.06 eV <cit.>; however, unless they are in conflict with observations, there is no well-justified theoretical underpinning for why neutrinos with reasonably greater mass values should not be considered in a given cosmological model. See Ref. <cit.> for model marginalized constraints on neutrino properties, N_ eff and ∑ m_ν from cosmology, on top of the standard ΛCDM model and its some well-known extensions. Provided that neutrinos are not so massive, that is ∑ m_ν < 1 eV, they are relativistic prior to recombination, behaving like radiation. After around the time of recombination, they transition from being radiation-like particles to being matter-like particles. Although massive neutrinos increase the physical density of matter ω_ m by an amount of about ω_ν≈∑ m_ν / 93 eV, at small scales they tend to erase the growth of gravitational potential wells created by CDM due to their high thermal speed. In other words, unlike CDM, they do not cluster on scales smaller than their free-streaming length, which leads to the suppression of the (late time) clustering amplitude σ_8, hence to the suppression of the growth factor S_8 = σ_8 √(Ω_ m/0.3) <cit.>. Such a feature might render massive neutrinos an effective tool in tackling the S_8 tension, especially in potential situations where pre-recombination expansion rate is hastened by a non-negligible amount of extra radiation, namely Δ N_ eff>0. This additional species causes a magnified early integrated Sachs-Wolfe effect that manifests itself as an enhancement in the heights of the first two CMB acoustic peaks (most noticeable at ℓ∼200). In order for the fit to the CMB power spectrum that is already outstanding in the baseline ΛCDM not to deteriorate, this excess power at low-ℓ can be offset by an accompanying increase in ω_ m, the impact of which is to eventually worsen the so-called S_8 tension. On the other hand, the degree to which massive neutrinos can actually counteract the effect of ω_ m-induced power is limited by the H_0 tension because large ∑ m_ν values shrink the comoving angular diameter distance to the last scattering surface given by Eq. (<ref>), shifting the acoustic peaks to low-ℓ. The fit can then simply be restored by lowering H_0. Note that this signals a strong degeneracy between H_0 and ∑ m_ν, meaning large ∑ m_ν values that are supposed to suppress S_8 act to aggravate the H_0 tension, which lends further support to the view that the simultaneous elimination of H_0 and S_8 tensions is a formidable task, particularly for models enhancing the pre-recombination expansion rate as early-time solutions (for a list of early-time solution suggestions, see Ref. <cit.>). §.§ Primordial Helium Abundance The abundance of light elements, particularly helium, is proportional to N_ eff as the early expansion rate H(z) directly affects the rate of Big Bang Nucleosynthesis (BBN). 
To understand this, consider the interaction rate per particle, Γ = n_ν⟨σ v ⟩, where n_ν is the number density of neutrinos, ⟨σ v ⟩ is the thermally-averaged cross-section of the weak interaction, and v is the relative particle speed. The amount of helium formed in the first few minutes of the universe is determined by two competing factors: H(z) and Γ. As long as Γ≫ H, neutrons and protons maintain chemical equilibrium via weak interactions. As the temperature drops below T∼ 1 MeV with expansion, the weak interaction loses efficiency, causing neutrons to go out of equilibrium and freeze the neutron-proton ratio, n_ n/n_ p, at the freeze-out temperature T_ f. We can determine T_ f using the relation Γ = n_ν⟨σ v ⟩∼ G_ F^2 T^5, as n_ν∼ T^3 and ⟨σ v⟩∼ G_ F^2 T^2, where G_ F = 1.166 × 10^-5 GeV^-2 is the Fermi coupling constant. In the early universe, dominated by radiation, the Friedmann equation can be expressed as H = √(4π^3 G/45 g_*T^4)∼√(g_*)T^2/m_ Pl, where m_ Pl = G^-1/2 = 1.22 × 10^19 GeV is the Planck mass scale, and g_* represents the effective number of degrees of freedom internal to each particle. Neutrons freeze out approximately when Γ(T_ f) ≈ H(T_ f), which implies: T_ f = ( √(g_*)/G_ F^2 m_ Pl)^1/3. Injecting extra relativistic degrees of freedom with Δ N_ eff>0 results in a higher g_*. If the relic is a fermion, g_* is adjusted as follows: g_* = g_* SM + 7/8g_ rel( T_ rel/T_γ)^4, leading to enhanced expansion rate as H∝√(g_*) T^2. Consequently, T_ f increases as it is proportional to g_*^1/6 <cit.>. The neutron fraction in equilibrium, n_ n/n_ p = e^-Δ m/T_ f, where Δ m = m_ n - m_ p = 1.293 MeV is the mass difference between neutron and proton, dictates that at higher temperatures of T_ f, neutrons not only freeze out sooner than in the standard case but also in larger numbers. While a portion of these neutrons undergo spontaneous β^- decay, most end up in He^4 nuclei, leading to increased helium production compared to the standard BBN. Using the neutron fraction n_ n/n_ p∼ 1/7, we can roughly estimate the primordial helium-4 mass fraction Y_ p: Y_ p = 2( n_ n/n_ p)/1 + (n_ n/n_ p)≈ 0.25. The modification of Y_ p due to Δ N_ eff≠ 0 can be approximated by Δ Y_ p≈ 0.013 ×Δ N_ eff <cit.>. Thus, a model with a sufficiently large Δ N_ eff>0 could easily overestimate Y_ p, limiting the scope for significant variations in N_ eff. § DATASETS AND METHODOLOGY To constrain the model parameters, we utilize multiple datasets, including the Planck CMB, BAO, BAOtr, and PantheonPlus&SH0ES. * CMB: The CMB data was obtained from the Planck 2018 legacy data release, a comprehensive dataset widely recognized for its precision and accuracy. Our analysis incorporated CMB temperature anisotropy and polarization power spectra measurements, their cross-spectra, and lensing power spectrum <cit.>. This analysis utilizes the high-ℓ likelihood for TT (where 30 ≤ℓ≤ 2508), as well as TE and EE (where 30 ≤ℓ≤ 1996). Additionally, it incorporates the low-ℓ TT-only likelihood (where 2 ≤ℓ≤ 29) based on the component-separation algorithm in pixel space, the low-ℓ EE-only likelihood (where 2 ≤ℓ≤ 29) using the method, and measurements of the CMB lensing. This dataset is conveniently referred to as Planck * BAO: We utilize 14 Baryon Acoustic Oscillation (BAO) measurements, which consists of both isotropic and anisotropic BAO measurements. 
The isotropic BAO measurements are identified as D_ V(z)/r_ d, where D_ V(z) characterizes the spherically averaged volume distance, and r_ d represents the sound horizon at the baryon drag epoch and the anisotropic BAO measurements encompass D_ M(z)/r_ d and D_ H(z)/r_ d, where D_ M(z) denoting the comoving angular diameter distance and D_ H(z) expressed as c/H(z), indicating the Hubble distance. These measurements have been derived from the extensive observations conducted by the SDSS collaboration. These measurements, which span eight distinct redshift intervals, have been acquired and continuously refined over the past 20 years <cit.>. This dataset is conveniently referred to as BAO. * Transversal BAO: The dataset comprises measurements of the BAO in 2D, specifically referred to as θ_BAO(z). These measurements are obtained using a weakly model-dependent approach and are compiled in Table I in <cit.>. The dataset originates from various public data releases (DR) of the Sloan Digital Sky Survey (SDSS), which includes DR7, DR10, DR11, DR12, DR12Q (quasars), and consistently follows the same methodology across these releases. It is noteworthy that these transversal BAO measurements tend to exhibit larger errors compared to those derived using a fiducial cosmology. This discrepancy arises because the error in the Transversal BAO methodology is determined by the magnitude of the BAO bump, whereas the fiducial cosmology approach, which is model-dependent, yields smaller errors. Generally, the error in the former approach can vary from approximately 10% to as much as 18%, while the latter approach typically results in errors on the order of a few percent <cit.>. Furthermore, a notable feature of this 2D BAO dataset is the absence of correlations between measurements at different redshifts. This absence of correlation is a result of the methodology employed, which ensures that measurements are derived from cosmic objects within separate redshift shells, preventing correlation between adjacent data bins. This dataset is conveniently referred to as BAOtr. * Type Ia supernovae and Cepheids: In the likelihood function, we integrate distance modulus measurements of Type Ia supernovae extracted from the Pantheon+ sample <cit.>, incorporating the latest SH0ES Cepheid host distance anchors <cit.>. The PantheonPlus dataset encompasses 1701 light curves associated with 1550 distinct SNe Ia events, spanning the redshift range z ∈ [0.001, 2.26]. This amalgamated dataset is conveniently denoted as PantheonPlus&SH0ES. In the context of the Λ_ sCDM+N_ eff+∑ m_ν model, the baseline comprises nine free parameters represented as 𝒫= {ω_ b, ω_ c, θ_ s, A_ s, n_ s, τ_ reio, N_ eff, ∑ m_ν, z_†}, with the first eight parameters being identical to those of the ΛCDM+N_ eff+∑ m_ν model. Throughout our statistical analyses, we adopt flat priors for all parameters: ω_ b∈[0.018,0.024], ω_ c∈[0.10,0.14], 100 θ_ s∈[1.03,1.05], ln(10^10A_ s)∈[3.0,3.18], n_ s∈[0.9,1.1], τ_ reio∈[0.04,0.125], N_ eff∈ [0,5], ∑ m_ν∈ [0,1], and z_†∈[1,3]. We employ Monte Carlo Markov Chain (MCMC) techniques to sample the posterior distributions of the model's parameters by using publicly available code <cit.> for different combinations of datasets considered in our analysis. To ensure the convergence of our MCMC chains, we have used the Gelman-Rubin criterion R-1 < 0.01 <cit.>. We have also made use of the GetDist Python package to perform an analysis of the samples. 
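For reference, a minimal per-parameter version of the Gelman–Rubin diagnostic underlying the R−1 < 0.01 convergence criterion is sketched below (Python/NumPy). This is a generic textbook estimator, not necessarily the exact implementation used in the sampling code employed here.

import numpy as np

def gelman_rubin_minus_one(chains):
    """R-1 for one parameter; 'chains' is a list of 1-D sample arrays, one per chain."""
    chains = [np.asarray(c) for c in chains]
    n = min(len(c) for c in chains)
    chains = np.array([c[-n:] for c in chains])    # equal-length tail of each chain
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()          # mean within-chain variance
    B = n*means.var(ddof=1)                        # between-chain variance
    var_hat = (n - 1)/n*W + B/n                    # pooled variance estimate
    return np.sqrt(var_hat/W) - 1.0

# usage: convergence is declared when, e.g., gelman_rubin_minus_one([chain1_H0, chain2_H0]) < 0.01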
In the last row of Table <ref>, for the model comparison, we calculate the relative log-Bayesian evidence (ln B_ij) using the publicly accessible package [https://github.com/yabebalFantaye/MCEvidencegithub.com/yabebalFantaye/MCEvidence] <cit.> to approximate the Bayesian evidence of extended Λ_ sCDM model relative to the extended ΛCDM model. We follow the convention of indicating a negative value when the Λ_ sCDM+N_ eff+∑ m_ν model is favored over the ΛCDM+N_ eff+∑ m_ν scenario, or vice versa. For the purpose of interpreting the findings, we make use of the updated Jeffrey's scale introduced by Trotta <cit.>. We classify the evidence's strength as follows: it is considered inconclusive when 0 ≤ | ln B_ij| < 1, weak if 1 ≤ | ln B_ij| < 2.5, moderate if 2.5 ≤ | ln B_ij| < 5, strong if 5 ≤ | ln B_ij| < 10, and very strong if | ln B_ij | ≥ 10. § RESULTS AND DISCUSSION We present in <ref> the marginalized constraints at a 68% CL on various parameters of the extended (abrupt) Λ_ sCDM and ΛCDM models, namely, the (abrupt) Λ_ sCDM+N_ eff+∑ m_ν and ΛCDM+N_ eff+∑ m_ν, utilizing various combinations of datasets including Planck, Planck+BAO, Planck+BAOtr, Planck+BAO+PP&SH0ES, and Planck+BAOtr+PP&SH0ES. The table also includes the relative log-Bayesian evidence (lnℬ_ij), where a negative value indicates a preference for the Λ_ sCDM+N_ eff+∑ m_ν model over the ΛCDM+N_ eff+∑ m_ν. In the current study, for the first time, we constrain the parameters N_ eff and ∑ m_ν within the framework of the Λ_ sCDM cosmology, employing the combinations of datasets in our analysis. As N_ eff and ∑ m_ν are treated as free parameters in the current study, the errors associated with the constraints are increased compared to those of the standard (abrupt) Λ_ sCDM and ΛCDM models, considering the same combinations of datasets presented in Refs. <cit.>. The analysis of CMB-alone data yields N_ eff = 2.91 ± 0.19 and H_0 = 69.00^+2.10_-3.70 km s^-1 Mpc^-1 for the Λ_ sCDM+N_ eff+∑ m_ν model, while the ΛCDM+N_ eff+∑ m_ν model results in N_ eff = 2.88 ± 0.18 and H_0 = 65.50^+2.00_-1.60 km s^-1 Mpc^-1. Considering the SH0ES measurement of H_0 = 73.04 ± 1.04 km s^-1 Mpc^-1 <cit.>, the H_0 tension is significantly alleviated to 1.3σ for the Λ_ sCDM+N_ eff+∑ m_ν model, whereas it reduces only to 3.6σ for the ΛCDM+N_ eff+∑ m_ν model. The predicted N_ eff values are consistent with the SM value of N_ eff=3.044 <cit.> within 1σ for both models. The models yield similar constraints on the total neutrino mass, with ∑ m_ν < 0.41 eV at a 95% CL for the Λ_ sCDM+N_ eff+∑ m_ν model and ∑ m_ν < 0.40 eV at a 95% CL for the ΛCDM+N_ eff+∑ m_ν model. The parameter z_† in the Λ_ sCDM+N_ eff+∑ m_ν model remains unconstrained. We also present in <ref> the one- and two-dimensional marginalized distributions of the extended Λ_ sCDM and ΛCDM model parameters at 68% and 95% CL for the Planck, Planck+BAO/BAOtr, and Planck+BAO/BAOtr+PP&SH0ES datasets. We observe a strong positive correlation between H_0 and N_ eff owing to the physical mechanism discussed in <ref>. Notably, the addition of data from low redshift probes such as BAO/BAOtr and supernova samples, which fix the late-universe evolution, helps break the geometric degeneracies and tighten the constraints on N_ eff and other related parameters. 
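Since the revised Jeffreys scale is invoked repeatedly in what follows, a trivial helper mapping |ln B_ij| to the qualitative strength categories listed in the previous section may be useful (Python); the thresholds are exactly those quoted there.

def jeffreys_strength(lnB):
    x = abs(lnB)
    if x < 1:    return "inconclusive"
    if x < 2.5:  return "weak"
    if x < 5:    return "moderate"
    if x < 10:   return "strong"
    return "very strong"

for lnB in (-0.63, -1.98, -11.73):
    print(lnB, jeffreys_strength(lnB))   # negative values favour the extended Lambda_sCDM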
With that being said, all the datasets that favor somewhat large H_0 values, with H_0≳70  km s^-1 Mpc^-1, also show preference for a relatively significant deviation from N_ eff = 3.044, except for the Λ_ s CDM+N_ eff+∑ m_ν model when subjected to Planck+BAOtr and Planck+BAOtr+PP&SH0ES, which warrants particular attention. In the Λ_ s CDM+N_ eff+∑ m_ν model, H_0 values that agree well with the ones measured using the local distance ladder approach can be realized in two ways: either by shrinking the sound horizon scale r_* due to the introduction of extra relics to the early universe, i.e., Δ N_ eff>0, while keeping the sign switch redshift z_† at large enough values where the model is not significantly distinguishable from its ΛCDM counterpart, or by adhering approximately to the standard value of neutrino species (Δ N_ eff∼ 0) and allowing a value of z_†∼2, resulting in a significantly larger H_0 due to the abrupt mirror AdS-dS transition in the late universe as discussed above. We read off from <ref> that Planck+BAOtr and Planck+BAOtr+PP&SH0ES favor exactly the latter case by placing the constraints N_ eff = 2.97±0.19 and N_ eff = 3.11^+0.13_-0.15, respectively, well consistent with the standard particle physics value of N_ eff=3.044 <cit.>. The strong degeneracy of characteristic parameter z_† of the Λ_ sCDM model is broken and it is constrained to be z_† = 1.57^+0.16_-0.22 and z_† = 1.62^+0.19_-0.30, corresponding to time periods when the dark energy density is non-negligible and therefore Λ_ s CDM+N_ eff+∑ m_ν is statistically distinct from Λ CDM+N_ eff+∑ m_ν. The upshot is that the observational data do not spoil the early universe account of the SM, keeping N_ eff, r_*, r_ d, and Y_ p at values roughly similar to those in the baseline standard ΛCDM model as shown in <ref>, but instead call for new physics or modification in the post-recombination universe. Moreover, it is noteworthy that replacement of BAOtr dataset with BAO holds back both models from attaining H_0 values consistent with SH0ES measurements, thereby from efficiently resolving the H_0 tension. Notice also that inclusion of BAO data in the analysis makes H_0 of Λ_ s CDM+N_ eff+∑ m_ν assume values rather close to those of Λ CDM+N_ eff+∑ m_ν and the effect of the mirror AdS-dS transition is weakened, especially by the low-redshift BAO, which finds the lower bounds z_† > 1.69 for Planck+BAO and z_† > 1.65 for Planck+BAO+PP&SH0ES at 95% CL. Only a moderate improvement in H_0 is achieved in the case of Planck+BAO+PP&SH0ES with N_ eff = 3.44±0.15 (3.50±0.13) and H_0=71.09^+0.81_-0.70 (70.95±0.75)  km s^-1 Mpc^-1. This improvement is, however, realized not mainly by the mirror AdS-dS transition but mostly by the relatively substantial increase in N_ eff. We stress here the fact that BAO data, which implicitly assume Planck-ΛCDM as fiducial cosmology in computing the distance to the spherical shell, push Λ_ s CDM+N_ eff+∑ m_ν towards Λ CDM+N_ eff+∑ m_ν, posing an impediment to the efficient operation of the mirror AdS-dS transition mechanism, hence to the resolution of the tensions. On the other hand, the combined Planck+BAOtr and Planck+BAOtr+PP&SH0ES datasets, incorporating the weakly model-dependent BAOtr data, yield H_0=73.10±1.40  km s^-1 Mpc^-1 and H_0=73.08±0.76  km s^-1 Mpc^-1 for the Λ_ s CDM+N_ eff+∑ m_ν model, respectively, in excellent agreement with the SH0ES measurement of H_0=73.04±1.04  km s^-1 Mpc^-1. 
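The significances quoted throughout this section are derived from the full posteriors; as a quick cross-check one can use the common Gaussian rule of thumb sketched below (Python), which symmetrizes the errors and therefore only approximately reproduces the quoted numbers. The inputs in the example are the SH0ES measurement and an illustrative CMB-alone value with its upper error taken as a symmetric width.

import numpy as np

def tension_sigma(x1, s1, x2, s2):
    """Gaussian estimate of the tension between two independent constraints."""
    return abs(x1 - x2)/np.sqrt(s1**2 + s2**2)

print(round(tension_sigma(73.04, 1.04, 69.0, 2.1), 1))   # SH0ES vs an illustrative CMB-alone H0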
However, despite the evident improvement in H_0 for Λ CDM+N_ eff+∑ m_ν as well—specifically, H_0=70.10±1.30  km s^-1 Mpc^-1 and H_0=72.23±0.74  km s^-1 Mpc^-1, respectively—this enhancement comes at the cost of a significant divergence from the SM of particle physics. It turns out that the N_ eff = 3.50±0.13 predicted by the Λ CDM+N_ eff+∑ m_ν model leads to a 3.5σ tension with the SM value of N_ eff = 3.044. In contrast, in the Λ_ s CDM+N_ eff+∑ m_ν model, we obtain N_ eff = 3.11^+0.13_-0.15, which is fully consistent with SM value of N_ eff = 3.044 within 68% CL interval. The H_0 tension also shows up in the supernova absolute magnitude M_ B, determined through the Cepheid calibration, as a ∼ 3.4σ discrepancy with the results obtained by the inverse distance ladder method utilizing the sound horizon r_ d as calibrator <cit.>, via the distance modulus μ(z_i) = m_B,i - M_B,i, where μ(z_i) = 5 log_10(1+z_i)/10 pc∫_0^z_ic d z/H(z) in the spatially flat Robertson-Walker spacetime, and m_ B,i is the SNIa apparent magnitude measured at the redshift z_i. In the case of Planck+BAO+PP&SH0ES, both extended models yield similar values of M_ B≈ -19.33 mag, which are in 2σ tension with the SH0ES calibrated value of M_ B = -19.244±0.037 mag <cit.>. This 2σ tension is reduced to 0.9σ when the BAOtr is used instead of BAO for the Λ_ s CDM+N_ eff+∑ m_ν model. Specifically, using the combined Planck+BAOtr+PP&SH0ES dataset results in M_ B = -19.281±0.021 mag for the Λ_ s CDM+N_ eff+∑ m_ν model, whereas the tension remains at the 2σ level in the Λ CDM+N_ eff+∑ m_ν model. Furthermore, when N_ eff and ∑ m_ν are relaxed within the context of ΛCDM, analyses from Planck, Planck+BAO, and Planck+BAO+PP&SH0ES yield upper bounds of ∑ m_ν <0.40 eV, ∑ m_ν<0.13 eV, and ∑ m_ν<0.13 eV, respectively. Notably, the latter two values, ∑ m_ν<0.13 eV, are exceedingly stringent, bordering on the threshold that would rule out IH, where ∑ m_ν>0.1 eV. Moreover, the combined datasets Planck+BAOtr and Planck+BAOtr+PP&SH0ES favor an even lower sum of neutrino masses, with ∑ m_ν < 0.06 eV. These upper limits are in stark contrast to the lower bounds established by flavor oscillation experiments, implying an additional complication alongside the need for new physics suggested by Δ N_ eff > 0. On the other hand, using the Λ_ s CDM+N_ eff+∑ m_ν model, the upper bounds range from ∑ m_ν < 0.35 eV to ∑ m_ν < 0.49 eV. Both bounds are completely concordant with experimental results. What is more, as shown in <ref>, the Λ CDM+N_ eff+∑ m_ν model exhibits a 2.5-3σ tension with the low redshift measurements of S_8, e.g., S_8 = 0.759^+0.024_-0.021 <cit.> (also reported as S_8 = 0.749_-0.020^+0.027 in Ref. <cit.>) from KiDS-1000 data, obtained within the standard ΛCDM framework. This indicates that there is almost no reduction in the significance of the S_8 tension, suggesting that it persists in the extended ΛCDM model without exception. On the other hand, the Λ_ s CDM+N_ eff+∑ m_ν model performs significantly better, exhibiting no S_8 tension at all when combined Planck, Planck+BAOtr, and Planck+BAOtr+PP&SH0ES datasets are used. These predict S_8 values in excellent alignment with S_8 = 0.746_-0.021^+0.026 <cit.> from KiDS-1000 data, obtained for the abrupt Λ_ sCDM model. Additionally, it reduces the tension to 1.9σ even for the Planck+BAO and Planck+BAO+PP&SH0ES datasets. 
Interestingly, despite the fact that matter density parameter Ω_ m of Λ_ s CDM+N_ eff+∑ m_ν trends towards higher values—similar to those in the Λ CDM+N_ eff+∑ m_ν model—due to the counteracting low-redshift BAO data points in case of the Planck+BAO and Planck+BAO+PP&SH0ES datasets, it still manages to yield a reasonably lower S_8. This outcome is plausible because the effectiveness of the two-parameter extension in the Λ_ sCDM model in mitigating the S_8 tension partially lies underneath how massive neutrinos are allowed to get in each model: We observe in <ref> that ∑ m_ν is anti-correlated with S_8 in both models, especially when subjected to Planck+BAOtr/BAO and Planck+BAOtr/BAO+PP&SH0ES datasets. This means that larger ∑ m_ν values further suppress σ_8. On the other hand, recalling the sum of neutrino masses ∑ m_ν is degenerate with H_0, as seen in <ref>, within the context of the Λ CDM+N_ eff+∑ m_ν model, relatively higher H_0 can only be attained unless neutrinos are too massive, which explains the rather stringent constraints, ∑ m_ν<0.06 (<0.13) eV, ∑ m_ν<0.06 (<0.13) eV found by Planck+BAOtr(BAO) and Planck+BAOtr(BAO)+PP&SH0ES, respectively. Consequently, a sufficiently large reduction in S_8 through the suppression of σ_8 to bring it to the range consistent with S_8 = 0.759^+0.024_-0.021 of ΛCDM-KiDS <cit.> cannot be accomplished. Given the obtained Ω_ m values are also not low enough, that is Ω_ m, 0≳ 0.29, we conclude that Λ CDM+N_ eff+∑ m_ν cannot be expected to simultaneously resolve both H_0 and S_8 tension without a compromise on either H_0 or S_8 under these circumstances. However, notice that the upper bounds ∑ m_ν≲ 0.50 eV provided by Planck+BAOtr/BAO and Planck+BAOtr/BAO+PP&SH0ES datasets for the Λ_ s CDM+N_ eff+∑ m_ν model are much more conservative than those obtained in the Λ CDM+N_ eff+∑ m_ν model. Correspondingly, neutrinos in the extended Λ_ sCDM are allowed to be 3 to 8 times more massive, thanks to the mirror AdS-dS transition mechanism, resulting in smaller σ_8 values. This, to some extent, enables the model to circumvent the BAO data's propensity to exacerbate the S_8 tension via an elevated Ω_ m. Nevertheless, for Planck+BAOtr and Planck+BAOtr+PP&SH0ES datasets the transition mechanism is more prominent at lower redshifts, viz., z_†∼ 1.6, implying enhanced H_0, thus lower Ω_ m than in the ΛCDM extension. Since S_8=σ_8 √(Ω_ m/0.3), neutrinos being more massive cooperate with z_† to diminish S_8 to even more compatible ranges as low as S_8 = 0.763_-0.014^+0.019 and S_8 = 0.771_-0.014^+0.022, which perfectly align with S_8 = 0.746_-0.021^+0.026 of Λ_ sCDM-KiDS <cit.>. In <ref>, we observe that when N_ eff is allowed to vary, the PP&SH0ES dataset prefers Δ N_ eff > 0, leading to higher H_0 values compared to the cases where N_ eff is fixed at the standard value of N_ eff = 3.044 (see Refs. <cit.>). An immediate consequence is that the freeze-out temperature T_ f, and thus the corresponding baryon density ω_ b, increases due to Δ N_ eff>0. This manifests as a positive correlation between N_ eff and ω_ b (or a negative correlation between r_* and ω_ b), leading to predicted primordial helium-4 abundances that exceed expected levels, as discussed in <ref>. These elevated levels are particularly discrepant with astrophysical measurements in the context of the extended ΛCDM model. In the context of Λ CDM+N_ eff+∑ m_ν model, the Planck+BAOtr+PP&SH0ES dataset favors a primordial helium-4 abundance, Y_ p = 0.2542±0.0017, in 2.3σ and 4.3σ tensions with Y_ p^ Aver et al. 
= 0.2453±0.0034 <cit.> and Y_ p^ Fields et al. = 0.2469±0.0002 <cit.>, respectively. Similarly, the Planck+BAO+PP&SH0ES dataset estimates Y_ p = 0.2541±0.0017, also creating tensions of 2.3σ and 4.2σ with these measurements. However, for the Λ_ s CDM+N_ eff+∑ m_ν model using the same Planck+BAOtr+PP&SH0ES dataset, we find Y_ p = 0.2489±0.0020, which aligns within 1.0σ of both sets of measurements, indicating no tension at all. As with H_0, the incorporation of BAO into the analysis hinders the reconciliation of Y_ p predicted by Λ_ s CDM+N_ eff+∑ m_ν with direct measurements. Nonetheless, the statistical significance of the tensions from the value Y_ p = 0.2532±0.0020 are at 2σ and 3.1σ, both of which are still more favorable than those found for Λ CDM+N_ eff+∑ m_ν. This suggests that resolving the H_0 and S_8 tensions without introducing new significant discrepancies with astrophysical observations of the primordial helium mass fraction, Y_ p, cannot be achieved by simply allowing N_ eff and ∑ m_ν as two additional free parameters in the standard ΛCDM model. However, within the Λ_ s CDM+N_ eff+∑ m_ν framework, it is possible to address both H_0 and S_8 discrepancies without creating significant tensions in parameters like Y_ p and N_ eff. It is crucial to note that the resolution of these tensions in the extended Λ_ sCDM model is not due to a broadening of error bars but primarily to a shift in the central values of the relevant parameters in the correct direction, as illustrated in <ref>. Last but not least, to assess the goodness and robustness of the statistical fit to the observational data, we provide a quantitative comparison between the Λ_ s CDM+N_ eff+∑ m_ν and Λ CDM+N_ eff+∑ m_ν models in terms of relative log-Bayesian evidence, ln ℬ_ij, according to the updated Jeffreys' scale <cit.>. The analysis yields inconclusive Bayesian evidence (ln ℬ_ij = -0.63) between models for the CMB-alone case. In contrast, we find weak statistical evidence in favor of Λ_ s CDM+N_ eff+∑ m_ν when incorporating BAO datasets, with ln ℬ_ij = -1.98 for Planck+BAO and ln ℬ_ij = -1.24 for Planck+BAO+PP&SH0ES. Remarkably, this preference is significantly enhanced to a very strong level by substituting BAOtr for BAO, yielding evidence values of ln ℬ_ij = -11.73 and ln ℬ_ij = -11.05 for Planck+BAOtr and Planck+BAOtr+PP&SH0ES, respectively. Consequently, we infer that the N_ eff + ∑ m_ν extension of the Λ_ sCDM model outperforms the ΛCDM extension in fitting the data, addressing the H_0 and S_8 tensions, and maintaining coherence with well-established theoretical predictions and observations across all datasets, as demonstrated in <ref>. Additionally, we provide a bar chart in <ref> that summarizes and visually illustrates the statistical significance of various concordances and discordances (tensions) across key cosmological and astrophysical parameters—H_0, M_B, S_8, N_ eff, and Y_ p. This comprehensive visualization reinforces our conviction that to gain a deeper understanding of the cosmos through independent observations, incorporating new physics at later times is an indispensable component of our exploration, if not the sole resolutions to the tensions. §.§ Consistency with the AAL-Λ_ sCDM model It has recently been reported in Ref. <cit.> that the mirror AdS to dS transition at low energies (in the late universe at z∼2), which characterizes the Λ_ sCDM model, can be realized through the Casimir forces inhabiting the bulk. 
This fundamental physical mechanism, proposed to substantiate the Λ_ sCDM model, suggests that the effective number of relativistic neutrino species, N_ eff, is altered by the fields that characterize the deep infrared region of the dark sector, resulting in a deviation Δ N_ eff≈ 0.25 from the standard model value of particle physics. We refer to this particular realization of the Λ_ sCDM model as AAL-Λ_ sCDM. As we previously discussed, an increase in N_ eff modifies the standard BBN by increasing the expansion rate during the BBN epoch, which leads to greater abundances of primordial Helium-4. The impact of small modifications in the expansion rate of the universe during the BBN epoch on the helium-4 mass fraction, Y_ p, can approximately be quantified using an analytical formula provided by Ref. <cit.>: Y_ p = 0.2381 ± 0.0006 + 0.0016[η_10 + 100 (S-1)], where η_10 represents the scaled baryon-to-photon ratio (η_10 = 273.9ω_ b). The parameter S, quantifying the deviation of the expansion rate during the BBN epoch, H'_ BBN, from the expansion rate in the standard BBN model, H_ SBBN, due to additional relativistic species, is given by: S = H'_ BBN/H_ SBBN = √(1 + (7/43)Δ N_ eff). We then assess the implications of the prediction Δ N_ eff≈ 0.25 within the framework of Λ_ s CDM+N_ eff+∑ m_ν and calculate Y_ p^ AAL = 0.2512± 0.0006, which we compare with the mass fractions obtained from the observational analysis detailed in <ref>. For the dataset combinations Planck+BAOtr, Planck+BAO+PP&SH0ES, and Planck+BAOtr+PP&SH0ES, the AAL-Λ_ sCDM predicted abundance for Δ N_ eff≈ 0.25 is consistent with the abundances found in Λ_ s CDM+N_ eff+∑ m_ν at less than 2σ. As expected, the same holds for the effective number of neutrino species, N_ eff = 3.294, since Y_ p and N_ eff are strongly and positively correlated. In the last two rows of the table, we observe that a 3.3σ tension in Y_ p emerges due to the relaxation of N_ eff when Λ CDM+N_ eff+∑ m_ν is analyzed using the Planck+BAO+PP&SH0ES and Planck+BAOtr+PP&SH0ES datasets. In contrast, for Λ_ s CDM+N_ eff+∑ m_ν, the tension is a mild 2.5σ and non-existent in the case of Planck+BAO+PP&SH0ES and Planck+BAOtr+PP&SH0ES, respectively. Thus, while the Λ_ s CDM+N_ eff+∑ m_ν model, when confronted with observational data, yields N_ eff and Y_ p values that are much more compatible with standard BBN than those of the Λ CDM+N_ eff+∑ m_ν model, the constraints on N_ eff and Y_ p are still compatible with their predicted values in the AAL-Λ_ sCDM model within ∼2σ. This suggests that the AAL-Λ_ sCDM could achieve similar success in fitting the data as the Λ_ sCDM model. Nevertheless, to definitively confirm our conclusions on the AAL-Λ_ sCDM, a more rigorous and comprehensive analysis should be conducted by setting N_ eff to the specific value of 3.294 in Λ_ sCDM model, as suggested by the AAL-Λ_ sCDM model, and then confronting it with the observational data using MCMC analysis. While this paper was nearing completion, a work confronting the AAL-Λ_ sCDM model, along with Λ_ sCDM and ΛCDM models, with observational data, appeared on arXiv. We refer the reader to Ref. <cit.> for further details on the observational analysis of the AAL-Λ_ sCDM model, which is dubbed Λ_ s CDM^+ in that paper. This stringy realization of the abrupt Λ_ sCDM model offers promising results both in fitting the data and resolving major cosmological tensions.
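For a quick numerical cross-check of the analytic BBN relation quoted above, the short Python sketch below evaluates Y_ p for the AAL-Λ_ sCDM prediction Δ N_ eff≈ 0.25 and reproduces the Gaussian tension levels quoted for the helium-4 measurements. The baryon density ω_ b = 0.0224 used here is an assumed Planck-like value rather than a constraint taken from this analysis, and the function names are illustrative only.

import numpy as np

def helium_mass_fraction(delta_neff, omega_b=0.0224):
    # Analytic BBN fit quoted above; omega_b is an assumed Planck-like value.
    eta10 = 273.9 * omega_b                        # scaled baryon-to-photon ratio
    S = np.sqrt(1.0 + (7.0 / 43.0) * delta_neff)   # expansion-rate factor H'_BBN / H_SBBN
    return 0.2381 + 0.0016 * (eta10 + 100.0 * (S - 1.0))

def tension_sigma(x1, err1, x2, err2):
    # Gaussian "number of sigma" between two independent estimates.
    return abs(x1 - x2) / np.sqrt(err1**2 + err2**2)

print(helium_mass_fraction(0.25))                     # ~0.2511, cf. Y_p^AAL = 0.2512 +/- 0.0006
print(tension_sigma(0.2542, 0.0017, 0.2453, 0.0034))  # ~2.3 sigma w.r.t. Aver et al.
print(tension_sigma(0.2542, 0.0017, 0.2469, 0.0002))  # ~4.3 sigma w.r.t. Fields et al.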
It incorporates both pre- and post-recombination modifications to the standard ΛCDM model, namely, the rapid mirror AdS-dS transition in the late universe (at z_†∼2) and an increased effective number of neutrino species, Δ N_ eff∼0.25. Notably, compared to the abrupt, Λ_ sCDM model, it predicts slightly higher H_0 values despite the slightly larger z_† value they found. Specifically, they report H_0=74.0 km s^-1 Mpc^-1 with z_†∼ 2.1 in AAL-Λ_ sCDM and H_0=73.4 km s^-1 Mpc^-1 with z_†∼1.9 in Λ_ sCDM, based on their dataset. § CONCLUSION The Λ_ sCDM cosmology <cit.> extends the standard model of cosmology, the ΛCDM model, by promoting its positive cosmological constant (Λ) assumption to a rapidly sign-switching cosmological constant (Λ_ s), namely, a rapid mirror AdS-dS transition, in the late universe, around z_†∼2, as first conjectured in <cit.> based on findings in the graduated dark energy (gDE) model. In its simplest, idealized form, the abrupt Λ_ sCDM model <cit.> introduces z_†, the redshift at which the mirror AdS-dS transition occurs instantaneously, as the only additional free parameter beyond the standard ΛCDM model. Detailed observational analyses of the abrupt Λ_ sCDM model have demonstrated its ability to address major cosmological tensions such as the H_0, M_B, and S_8 tensions, as well as less significant discrepancies like Ly-α and t_0 anomalies, simultaneously <cit.>. Recent theoretical advances regarding the potential physical mechanisms underlying a late-time mirror AdS-dS transition, such as those introduced in Refs. <cit.> have propelled Λ_ sCDM cosmology beyond a phenomenological framework into a fully predictive physical cosmological model. The standard Λ_ sCDM cosmology suggests a post-recombination modification to ΛCDM, leaving the pre-recombination universe as described in standard cosmology. Therefore, it is crucial to further investigate whether this framework indeed leaves the pre-combination universe unaltered if modifications related to early universe dynamics, such as variations in the number of neutrino species and the total mass of neutrinos, which are directly related to the standard model of particle physics, are allowed. In this paper, we have considered, for the first time, a two-parameter extension of the abrupt Λ_ sCDM model, as well as ΛCDM for comparison purposes. These extensions involve treating the effective number of relativistic neutrino species N_ eff=3.044 and a minimal mass ∑ m_ν = 0.06 eV of the SM of particle physics, inherent in the standard Λ_ sCDM and ΛCDM models, as free parameters to be predicted from cosmological observational analyses. We have first discussed the physical and cosmological implications of deviating N_ eff and ∑ m_ν from their standard values (see <ref>). We then conducted observational analyses to constrain the free parameters in the extended models—Λ_ sCDM+N_ eff+∑ m_ν and ΛCDM+N_ eff+∑ m_ν—using the Planck CMB, BAO (3D BAO), and alternative to this BAOtr (2D BAO), and PantheonPlus&SH0ES datasets (see <ref>)) and then discuss our findings in detail (see <ref>). Our approach presents one of the first examples of considering both late-time (introducing new physics operating in the post-recombination universe and deforming Hubble parameter) and early-time (introducing new physics operating in the pre-recombination universe and reducing the sound horizon) modifications proposed to address H_0 tension. 
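As a concrete illustration of the late-time modification discussed throughout, the minimal sketch below evaluates the background expansion of the abrupt Λ_ sCDM model, in which the cosmological-constant term simply switches sign at z_† (AdS for z>z_†, dS for z<z_†). It assumes spatial flatness and neglects radiation, and the parameter values are placeholders rather than best-fit results from this work.

import numpy as np

def E2_lambda_s(z, omega_m=0.28, z_dagger=1.7):
    # Dimensionless H^2(z)/H0^2 for the abrupt Lambda_sCDM model (flat, radiation neglected).
    # The Lambda term carries sgn(z_dagger - z): +Omega_Lambda (dS) for z < z_dagger,
    # -Omega_Lambda (AdS) for z > z_dagger.
    omega_lambda = 1.0 - omega_m
    return omega_m * (1.0 + z) ** 3 + omega_lambda * np.sign(z_dagger - z)

for z in (0.0, 1.0, 1.6, 1.8, 3.0):
    print(z, round(float(E2_lambda_s(z)), 3))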
This allowed us to assess whether data suggest late- or early-time modifications, or both (as suggested in <cit.>), to better fit the data, compared to ΛCDM, and address the cosmological tensions, particularly the H_0 tension, while remaining consistent with the SM of particle physics. In the CMB-alone analysis, we have found no tension at all in any of the parameters of interest (namely, H_0, M_B, S_8, N_ eff, Y_ p, and ω_ b) within the context of the Λ_ sCDM+N_ eff+∑ m_ν model. In contrast, for the ΛCDM+N_ eff+∑ m_ν model, while the H_0 tension is only slightly alleviated to a 3.6σ level, the so-called S_8 tension remains at a 3σ level. N_ eff values are found to be ∼2.9 for both models, consistent within 1σ with the SM of particle physics value of N_ eff=3.044. We also confronted both extended models with the combined Planck+BAO and Planck+BAO+PP&SH0ES datasets, as well as the combined Planck+BAOtr and Planck+BAOtr+PP&SH0ES datasets, considering BAOtr (2D BAO) data, which are less-model dependent, instead of the BAO (3D BAO). As anticipated, in the case of both Planck+BAO and Planck+BAO+PP&SH0ES, the extended Λ_ sCDM model approaches the extended ΛCDM model due to the opposition of Galaxy BAO data, pushing z_† to higher values to give smaller H_0. Therefore, for the combined Planck+BAO+PP&SH0ES, the moderate enhancement in H_0 within Λ_ sCDM+N_ eff+∑ m_ν is to some extent a result of Δ N_ eff∼0.4 rather than the effect of an efficient a mirror AdS-dS transition. Likewise, ΛCDM+N_ eff+∑ m_ν model relaxes the H_0 tension through Δ N_ eff∼0.5. Unfortunately, this improvement in H_0 creates a new Y_ p tension with astrophysical measurements of the primordial helium-4 abundances and demands new physics beyond the SM of particle physics. Additionally, the existing S_8 tension is worsened by the increased pre-recombination expansion in the ΛCDM+N_ eff+∑ m_ν model. On the other hand, we achieved a remarkable improvement in the fit to the data when we considered BAOtr instead of BAO in the analysis, resulting in no tension at all while simultaneously remaining fully consistent with the SM of particle physics using the Λ_ sCDM+N_ eff+∑ m_ν model. For the Planck+BAOtr and Planck+BAOtr+PP&SH0ES datasets, the H_0 tension is entirely eliminated, with the value H_0 ≈ 73 km s^-1 Mpc^-1 obtained in Λ_ sCDM+N_ eff+∑ m_ν. Additionally, N_ eff is constrained to be N_ eff∼3 and N_ eff∼3.1, respectively, both of which are in agreement with N_ eff = 3.044 at 1σ. Nevertheless, even though ΛCDM+N_ eff+∑ m_ν also resolves the H_0 tension, the model loses its coherence with the SM of particle physics and suffers from the same Y_ p tension mentioned above because of Δ N_ eff∼0.5. An important realization at this point is that a post-recombination modification at z∼1.6 in the form of a rapidly sign-switching cosmological constant Λ_ s, namely, a rapid mirror AdS-dS transition, is strongly favored over an early-time deformation of H(z) induced by Δ N_ eff>0, with the Bayesian evidence value of ln ℬ_ij∼-11. In addition, the upper bounds on the sum of neutrino masses in the Λ_ sCDM+N_ eff+∑ m_ν model are about ∑ m_ν≲0.50 eV, being consistent with the lower bounds provided by the neutrino oscillation experiments, i.e., ∑ m_ν>0.06 eV (assuming the normal ordering) and ∑ m_ν>0.10 eV (assuming the inverted ordering). 
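The oscillation lower bounds quoted above follow directly from the measured mass-squared splittings once the lightest neutrino is taken to be (nearly) massless; the short sketch below reproduces them. The splitting values are standard global-fit numbers assumed for illustration, not outputs of this analysis.

import numpy as np

# Assumed mass-squared splittings (typical global-fit values, in eV^2).
dm21_sq = 7.4e-5   # solar splitting
dm31_sq = 2.5e-3   # atmospheric splitting (magnitude)

# Normal ordering: m1 ~ 0, m2 = sqrt(dm21_sq), m3 = sqrt(dm31_sq).
sum_no = 0.0 + np.sqrt(dm21_sq) + np.sqrt(dm31_sq)
# Inverted ordering: m3 ~ 0, m1 ~ sqrt(dm31_sq), m2 = sqrt(dm31_sq + dm21_sq).
sum_io = np.sqrt(dm31_sq) + np.sqrt(dm31_sq + dm21_sq) + 0.0

print(f"minimal sum m_nu, normal ordering  : {sum_no:.3f} eV")   # ~0.06 eV
print(f"minimal sum m_nu, inverted ordering: {sum_io:.3f} eV")   # ~0.10 eV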
These bounds help to remedy the S_8 tension by suppressing clustering σ_8 when matter density Ω_ m is not sufficiently low, especially in cases where the model is subjected to Planck, Planck+BAO, Planck+BAO+PP&SH0ES datasets. However, the upper bounds placed on ∑ m_ν using ΛCDM+N_ eff+∑ m_ν are extremely tight, with ∑ m_ν < 0.06 eV for Planck+BAOtr and Planck+BAOtr+PP&SH0ES, and ∑ m_ν< 0.13 eV for Planck+BAO and Planck+BAO+PP&SH0ES. Consequently, σ_8 values preferred by these datasets are typically larger in ΛCDM+N_ eff+∑ m_ν than in Λ_ sCDM+N_ eff+∑ m_ν, partially hampering the alleviation of the S_8 tension. And, in the last section, we evaluated the Δ N_ eff≈0.25 prediction of the AAL-Λ_ sCDM model <cit.> in the extended Λ_ sCDM studied in this paper and detected no serious incompatibility between the two models. As a final remark, we note that the extensive literature attempting to address the shortcomings of the standard cosmological model by proposing modifications spanning the entire or a long history of the universe often includes regimes that are inaccessible with direct observational methods. This broad approach can be akin to looking for a needle in a haystack when trying to resolve issues that arise within the ΛCDM framework. Therefore, it might be more effective to narrow down the time/redshift scale in which a more complete cosmological framework can be sought, guided by the observational data, as suggested in Ref. <cit.>. This focused approach could help localize the possible missing physics, ideally by introducing minimal (though not necessarily trivial) modifications, aiding in a better understanding of the universe. Models possessing this property would allow for further testing via new and independent methods and ideally direct observations with current or future experiments. Certain extensions of the Λ_ sCDM model, introducing modifications on top of its directly detectable new physics around z ∼ 2, such as Λ_ sCDM+N_ eff+∑ m_ν studied in the current work, enable us to test whether the data prefer pre- or post-recombination new physics, or both (as suggested in Ref. <cit.>). Our work here provides a compelling example, highlighting the potential of new physics in the late universe around z ∼ 2 against the pre-recombination new physics closely related to the SM of particle physics. This late-time AdS-dS transition era around z ∼ 2 remains accessible to direct observations in principle, in contrast to pre-recombination epochs where we mainly rely on indirect observations. Thus, it would be worthwhile to further investigate the Λ_ sCDM model, as well as its standard extensions similar to the ones applied to the ΛCDM model as we have done here, and, perhaps even better, its realizations based on different physical theories—which usually come with different types of corrections on top of the simplest abrupt Λ_ sCDM model, see, e.g., Ref. <cit.>—can help in our quest to establish a physical cosmology better at describing cosmological phenomena than today's standard model of cosmology, i.e., the ΛCDM model. The authors thank Luis A. Anchordoqui for fruitful discussions. A.Y. is supported by a Junior Research Fellowship (CSIR/UGC Ref. No. 201610145543) from the University Grants Commission, Govt. of India. S.K. gratefully acknowledges the support of Startup Research Grant from Plaksha University (File No. OOR/PU-SRG/2023-24/08), and Core Research Grant from Science and Engineering Research Board (SERB), Govt. of India (File No. CRG/2021/004658). Ö.A. 
acknowledges the support of the Turkish Academy of Sciences within the Outstanding Young Scientist Award Program (TÜBA-GEBİP). This study was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant Number 122F124. The authors thank TUBITAK for their support.
http://arxiv.org/abs/2406.17680v1
20240625161252
End-to-End Autonomous Driving without Costly Modularization and 3D Manual Annotation
[ "Mingzhe Guo", "Zhipeng Zhang", "Yuan He", "Ke Wang", "Liping Jing" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT We propose UAD, a method for vision-based end-to-end autonomous driving (E2EAD), achieving the best open-loop evaluation performance in nuScenes, meanwhile showing robust closed-loop driving quality in CARLA. Our motivation stems from the observation that current E2EAD models still mimic the modular architecture in typical driving stacks, with carefully designed supervised perception and prediction subtasks to provide environment information for oriented planning. Although achieving groundbreaking progress, such design has certain drawbacks: 1) preceding subtasks require massive high-quality 3D annotations as supervision, posing a significant impediment to scaling the training data; 2) each submodule entails substantial computation overhead in both training and inference. To this end, we propose UAD, an E2EAD framework with an unsupervised[Following <cit.>, here we consider the methods as “unsupervised” ones as long as no manual annotation is used and required in the target task or domain.] proxy to address all these issues. Firstly, we design a novel Angular Perception Pretext to eliminate the annotation requirement. The pretext models the driving scene by predicting the angular-wise spatial objectness and temporal dynamics, without manual annotation. Secondly, a self-supervised training strategy, which learns the consistency of the predicted trajectories under different augment views, is proposed to enhance the planning robustness in steering scenarios. Our UAD achieves 38.7% relative improvements over UniAD on the average collision rate in nuScenes and surpasses VAD for 41.32 points on the driving score in CARLA's Town05 Long benchmark. Moreover, the proposed method only consumes 44.3% training resources of UniAD and runs 3.4× faster in inference. Our innovative design not only for the first time demonstrates unarguable performance advantages over supervised counterparts, but also enjoys unprecedented efficiency in data, training, and inference. Code and models for both open- and closed-loop evaluation will be released upon publication at <https://github.com/KargoBot_Research/UAD>. § INTRODUCTION Recent decades have witnessed breakthrough achievements in autonomous driving. The end-to-end paradigm, which seeks to integrate perception, prediction, and planning tasks into a unified framework, stands as a representative branch <cit.>. The latest advances in end-to-end autonomous driving significantly piqued researchers' interest <cit.>. However, handcrafted and resource-intensive supervised sub-tasks for perception and prediction, which have previously proved their utility in environment modeling <cit.>, continue to be indispensable, as shown in Fig. <ref>. Then what insights have we gained from the recent advances? It has come to our attention that one of the most enlightening innovations lies in the Transformer-based pipeline, in which the queries act as a connective thread, seamlessly bridging various tasks. Besides, the capability for environment modeling has also seen a significant boost, primarily due to complicated interactions of supervised sub-tasks. However, every coin has two sides. In comparison to the vanilla design <cit.> (see Fig. <ref>), modularized methods incur unavoidable computation and annotation overhead.
As illustrated in Fig. <ref>, the training of the recent method UniAD <cit.> takes 48 GPU days while running at only 2.1 frames per second (FPS). Moreover, modules in existing perception and prediction design require large quantities of high-quality annotated data. The financial overhead for human annotation significantly impedes the scalability of such modularized methods with supervised subtasks to leverage massive data. As proved by large foundation models <cit.>, scaling up the data volume is the key to bringing the model capabilities to the next level. Thus we ask ourselves the question: Is it viable to devise an efficient and robust E2EAD framework while alleviating the reliance on 3D annotation? In this work, we show the answer is affirmative by proposing an innovative Unsupervised pretext task for end-to-end Autonomous Driving (UAD), which seeks to efficiently model the environment. The pretext task consists of an angular-wise perception module to learn spatial information by predicting the objectness of each sector region in BEV space, and an angular-wise dreaming decoder to absorb temporal knowledge by predicting inaccessible future states. The introduced angular queries link the two modules as a whole pretext task to perceive the driving scene. Notably, our method shines by completely eliminating the annotation requirement for perception and prediction. Such data efficiency is not attainable for current methods with complex supervised modularization <cit.>. The supervision for learning spatial objectness is obtained by projecting the 2D region of interests (ROIs) from an off-the-shelf open-set detector <cit.> to BEV space. While utilizing the publicly available open-set 2D detector pre-trained with manual annotation from other domains (e.g. COCO <cit.>), we avoid the need for any additional 3D labels within our paradigm and target domains (e.g. nuScenes <cit.> and CARLA <cit.>), thereby creating a pragmatically unsupervised setting <cit.>. Furthermore, we introduce a self-supervised direction-aware learning strategy to train the planning model. Specifically, the visual observations are augmented with different rotation angles, and the consistency loss is applied to the predictions for robust planning. Without bells and whistles, the proposed UAD outperforms UniAD for 0.13m in nuScenes Avg. L2 error, and surpasses VAD <cit.> for 9.92 points in CARLA route completion score. Such unprecedented performance gain is achieved with a 3.4× inference speed, a mere 44.3% training budget of UniAD, and zero annotations, as illustrated in Fig. <ref>. In summary, our contributions are as follows: 1) We propose an unsupervised pretext task to discard the requirement of 3D manual annotation in end-to-end autonomous driving, potentially making it more feasible to scale the training data to billions level without any labeling overload; 2) We introduce a novel self-supervised direction-aware learning strategy to maximize the consistency of the predicted trajectories under different augment views, which enhances planning robustness in steering scenarios; 3) Our method shows superiority in both open- and closed-loop evaluation compared with other vision-based E2EAD methods, with much lower computation and annotation cost. § RELATED WORK §.§ End-to-End Autonomous Driving End-to-end autonomous driving can be dated back to 1988, when the ALVINN <cit.> proposed by Carnegie Mellon University could successfully navigate a vehicle over 400 meters. 
After that, to improve the robustness of E2EAD, a series of modern approaches such as NEAT <cit.>, P3 <cit.>, MP3 <cit.>, ST-P3 <cit.> introduce the design of more dedicated modularization, which integrate auxiliary information such as HD maps, and additional tasks like bird's-eye view (BEV) segmentation. Most recently, embracing advanced architectures like Transfromer <cit.> and visual occupancy prediction <cit.>, UniAD <cit.> and VAD <cit.> demonstrate impressive performance in open-loop evaluation. In this work, instead of integrating complex supervised modular sub-tasks, we innovatively propose another path proving that an efficient unsupervised pretext task without any human annotation like 3D bounding boxes and point cloud categories, can achieve even superior performance than recent state-of-the-arts. §.§ World Model In pursuit of understanding the dynamic changes in environments, researchers in the fields of gaming and robotics have proposed various world models <cit.>. Recently, the autonomous driving community introduces world models for safer maneuvering <cit.>. MILE <cit.> considers the environment as a high-level embedding and tends to predict its future state with historical observations. Drive-WM <cit.> proposes a framework to integrate world models with existing E2E methods to improve planning robustness. In this work, we propose an auto-regressive mechanism, tailored to our unsupervised pretext, to capture angular-wise temporal dynamics within each sector. § METHOD §.§ Overview As illustrated in Fig. <ref>, our UAD framework consists of two essential components: 1) the Angular Perception Pretext, aims to liberate E2EAD from costly modularized tasks in an unsupervised fashion; 2) the Direction-Aware Planning, learns self-supervised consistency of the augmented trajectories. Specifically, UAD first models the driving environment with the pretext. The spatial knowledge is acquired by estimating the objectness of each sector region within the BEV space. The angular queries, each responsible for a sector, are introduced to extract features and predict the objectness. The supervision label is generated by projecting the 2D regions of interests (ROIs) to the BEV space, which are predicted with an available open-set detector GroundingDINO <cit.>. This way not only eliminates the 3D annotation requirement, but also greatly reduces the training budget. Moreover, as driving is inherently a dynamic and continuous process, we thus propose an angular-wise dreaming decoder to encode the temporal knowledge. The dreaming decoder can be viewed as an augmented world model <cit.> capable of auto-regressively predicting the future states. Subsequently, direction-aware planning is introduced to train the planning module. The raw BEV feature is augmented with different rotation angles, yielding rotated BEV representations and ego trajectories. We apply self-supervised consistency loss to the predicted trajectories of each augmented view, which is expected to improve the robustness for directional change and input noises. The learning strategy can also be regarded as a novel data augmentation technique customized for end-to-end autonomous driving, which enhances the diversity of trajectory distribution. §.§ Angular Perception Pretext Spatial Representation Learning. Our model attempts to acquire spatial knowledge of the driving scene by predicting the objectness of each sector region within the BEV space. 
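Before the detailed formulation given next, a minimal sketch of this sector-wise objectness idea may help: partition the BEV grid into K angular bins around the ego vehicle and mark a bin positive if any cell of the projected 2D-ROI mask falls inside it. The grid size, the 4-degree sectors (K=90), and all tensor names below are illustrative assumptions, not the released UAD implementation.

import math
import torch

def angular_objectness_label(bev_mask, num_sectors=90):
    # Pool a binary BEV object mask of shape (H, W) into K angular-sector labels of shape (K,).
    # A sector is labeled 1 if any positive mask cell falls inside it.
    H, W = bev_mask.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Angle of every BEV cell with respect to the ego position, taken as the grid center.
    ang = torch.atan2(ys.float() - H / 2, xs.float() - W / 2)            # in [-pi, pi]
    sector = ((ang + math.pi) / (2 * math.pi) * num_sectors).long()
    sector = sector.clamp(max=num_sectors - 1)
    label = torch.zeros(num_sectors)
    label[sector[bev_mask > 0]] = 1.0        # any hit inside a sector marks it positive
    return label

mask = torch.zeros(200, 200)
mask[120:130, 150:160] = 1.0                 # stand-in for a region covered by projected 2D ROIs
print(angular_objectness_label(mask).nonzero().flatten())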
Specifically, taking multi-view images {𝐈_ i∈ℝ^H_ i× W_ i× 3} as input, the BEV encoder <cit.> first extracts visual information into the BEV feature F_ b∈ℝ^H_ b× W_ b× C. Then, F_ b is partitioned into K sectors with a uniform angle θ centered around ego car. Each sector contains several feature points in BEV space. Denoting feature of a sector as f∈ℝ^N×C, where N is the maximum number of feature points in all sectors, we derive angular BEV feature F_ a∈ℝ^K×N×C. Zero-padding is applied on sectors with fewer than N points. Then why do we partition the rectangular BEV feature to angular-wise formatting? The underlying reason is that, in the absence of depth information, the region in BEV space corresponding to an ROI in 2D image is a sector. As illustrated in Fig. <ref>, by projecting 3D sampling points to images and verifying their presence in 2D ROIs, a BEV object mask M∈ℝ^H_ b× W_ b×1 is generated, representing the objectness in BEV space. Specifically, the sampling points falling within 2D ROIs are set to 1, while the others are 0. It is noticed that the positive sectors are irregularly and sparsely distributed in BEV space. To make the objectness label more compact, similar to the BEV feature partition, we uniformly divide M into K equal parts. The segments overlapped with positive sectors are assigned with 1, constituting the angular objectness label Y_ obj∈ℝ^K×1. Thanks to the rapid development of open-set detection, it's now convenient to obtain 2D ROIs for the input multi-view images by feeding the pre-defined prompts (e.g., vehicle, pedestrian, and barrier) to a 2D open-set detector like GroundingDINO <cit.>. Such design is the key in reducing annotation cost and scaling up the dataset. To predict the objectness score of each sector, we define angular queries Q_ a∈ℝ^K×C to summarize F_ a. Each angular query q_ a∈ℝ^1×C in Q_ a will interact with corresponding f by cross attention <cit.>, 1.0q_ a = CrossAttention( q_ a, f), Finally, we map Q_ a to the objectness scores P_ a∈ℝ^K × 1 with a linear layer, which is supervised by Y_ obj with binary cross-entropy loss (denoted as ℒ_ spat). Temporal Representation Learning. We propose to capture the temporal information of driving scenarios with the angular-wise dreaming decoder. As shown in Fig. <ref>, the decoder auto-regressively learns transition dynamics of each sector in a similar way of world model <cit.>. Assuming the planning module predicts the trajectories of future T steps, the dreaming decoder accordingly comprises T layers, where each updates the input angular queries Q_ a and angular BEV feature F_ a based on the learned temporal dynamics. At step t, the queries Q_ a^t-1 first grasp environmental dynamics from the observation feature F_ a^ t with a gated recurrent unit (GRU) <cit.>, which generates Q_ a^t (hidden state), 1.0Q_ a^t = GRU( Q_ a^t-1, F_ a^t), In previous world models, the hidden state Q is solely used for perceiving observed scenes. The GRU iteration thus ends at t with the final observation F_ a^t. In our framework, Q is also used for predicting ego trajectories in the future. Yet, the future observation, e.g., F_ a^t+1, is unavailable, as the world model <cit.> is designed for forecasting the future with only current observation. To obtain Q_ a^t+1, we first propose to update F_ a^t to provide pseudo observations F̂_ a^t+1, 1.0F̂_ a^t+1 = CrossAttention( F_ a^t, Q_ a^t). Then Q_ a^t+1 can be generated with Eq. <ref> and inputs of F̂_ a^t+1 and Q_ a^t. 
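A rough, self-contained sketch of one roll-out step of the dreaming decoder described above: a GRU cell absorbs the current angular observation into the angular queries, a cross-attention layer produces the pseudo observation for the next step, and the queries are mapped to diagonal Gaussians whose KL divergence gives the dreaming loss introduced next. All shapes, the mean-pooling over the per-sector points, and the module choices are assumptions made for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn

K, N, C, T = 90, 32, 256, 6    # sectors, points per sector, channels, future steps (assumed)

gru = nn.GRUCell(C, C)                                      # query update with the observation
attn = nn.MultiheadAttention(C, num_heads=4, batch_first=True)
to_gauss = nn.Linear(C, 2 * C)                              # maps queries to (mu, log_sigma)

def dream_step(Q_prev, F_obs):
    # Q_prev: (K, C) angular queries; F_obs: (K, N, C) angular BEV feature.
    obs = F_obs.mean(dim=1)                                 # crude per-sector summary of the observation
    Q_new = gru(obs, Q_prev)                                # hidden-state update with the observation
    # Pseudo observation for the next step: the feature attends to the updated queries.
    F_next, _ = attn(F_obs, Q_new[:, None, :], Q_new[:, None, :])
    return Q_new, F_next

Q = torch.zeros(K, C)
F = torch.randn(K, N, C)       # in training, the true observation would be used when available
kl_terms = []
for _ in range(T):
    Q_prior = Q                                             # queries before seeing the observation
    Q, F = dream_step(Q, F)
    mu_p, logs_p = to_gauss(Q_prior).chunk(2, dim=-1)
    mu_q, logs_q = to_gauss(Q).chunk(2, dim=-1)
    prior = torch.distributions.Normal(mu_p, logs_p.exp())
    post = torch.distributions.Normal(mu_q, logs_q.exp())
    kl_terms.append(torch.distributions.kl_divergence(post, prior).mean())
loss_drm = torch.stack(kl_terms).mean()
print(loss_drm)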
Following the loss design in world models <cit.>, we respectively map Q_ a^t-1 and Q_ a^t to distributions of {μ_ a^t-1,σ_ a^t-1∈ℝ^K × C} and {μ_ a^t,σ_ a^t∈ℝ^K × C}, and then minimize their KL divergence. The prior distribution from Q_ a^t-1 is regarded as a prediction of the future dynamics without observation. In contrast, the posterior distribution from Q_ a^t represents the future dynamics with the observation F_ a^t. The KL divergence between the two distributions measures the gap between the imagined future (prior) and the true future (posterior). We expect to enhance the capability of future prediction for long-term driving safety, which is realized by optimizing the dreaming loss ℒ_ drm, ℒ_ drm = KL({μ_ a^t,σ_ a^t} || {μ_ a^t-1,σ_ a^t-1}), §.§ Direction-Aware Planning Planning Head. The outputs of the angular perception pretext contain a group of angular queries { Q_ a^t (t=1,...,T)}. For planning, we correspondingly initialize T ego queries { Q_ ego^t∈ℝ^1 × C (t=1,...,T)} to extract planning-relevant information and predict the ego trajectory of each future time step. The interaction between ego queries and angular queries is performed with cross attention, Q_ ego^t = CrossAttention( Q_ ego^t, Q_ a^t). The output ego queries { Q_ ego^t} are then used to predict the ego trajectories of future T steps. Following previous works <cit.>, a high-level driving signal c (turn left, turn right or go straight) is provided as prior knowledge. The planning head takes the concatenated ego feature F_ ego∈ℝ^T × C from { Q_ ego^t} and the driving command c as inputs, and outputs the planning trajectory P_ traj∈ℝ^T × 2, P_ traj = PlanHead( F_ ego, c), where the PlanHead is the same as UniAD <cit.>. We apply ℒ_1 loss to minimize the distance between the predicted ego trajectory P_ traj and the ground truth G_ traj, denoted as ℒ_ imi. Notably, G_ traj is easy to obtain, and manual annotation is not required in practical scenarios. Directional Augmentation. Observing that the training data is dominated by go straight scenarios, we propose a directional augmentation strategy to balance the distribution. As shown in Fig. <ref>, the BEV feature F_ b is rotated with different angles r∈R={90,180,270}, yielding the rotated representations { F_ b^r}. [Figure: Illustration of direction-aware learning strategy.] The augmented features will also be used for the pretext and planning task, and supervised by the aforementioned loss functions (e.g., ℒ_ spat). Notably, the BEV object mask M and the ground truth ego trajectory G_ traj are also rotated to provide corresponding supervision labels. Furthermore, we propose an auxiliary task to enhance the steering capability. Concretely, we predict the planning direction in which the ego car intends to maneuver (i.e., left, straight or right) based on the ego query Q_ ego^t, which is mapped to the probabilities of three directions P_ dir^t∈ℝ^1 × 3. The direction label Y_ dir^t is generated by comparing the x-axis value of the ground truth G_ traj^t(x) with the threshold δ. Specifically, Y_ dir^t is assigned to straight if -δ< G_ traj^t(x)<δ, otherwise Y_ dir^t=left/right for G_ traj^t(x)⩽-δ/ G_ traj^t(x)⩾δ, respectively. We use the cross-entropy loss to minimize the gap between the direction prediction P_ dir^t and the direction label Y_ dir^t, denoted as ℒ_ dir. Directional Consistency. Tailored to the introduced directional augmentation, we propose a directional consistency loss to improve the augmented plan training in a self-supervised manner.
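The direction-label rule and the rotation augmentation just described amount to only a few lines; a hedged sketch is given below. The default threshold δ = 1.2 m quoted in the experimental setup and the left/straight/right rule follow the text, while the class indices, tensor layout, and treatment of x as the lateral axis are illustrative assumptions.

import math
import torch

def direction_label(gt_xy, delta=1.2):
    # Per-step direction class from ground-truth waypoints of shape (T, 2): 0=left, 1=straight, 2=right.
    # Uses the x (lateral) displacement against the threshold delta, as described above.
    x = gt_xy[:, 0]
    label = torch.ones_like(x, dtype=torch.long)      # straight by default
    label[x <= -delta] = 0                            # left
    label[x >= delta] = 2                             # right
    return label

def rotate_traj(xy, deg):
    # Rotate 2D waypoints (T, 2) by deg degrees around the ego origin (used for r in {90, 180, 270}).
    a = math.radians(deg)
    R = torch.tensor([[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]])
    return xy @ R.T

gt = torch.tensor([[0.2, 5.0], [1.5, 10.0], [3.0, 15.0]])   # toy 3-step trajectory
print(direction_label(gt))                                  # tensor([1, 2, 2])
for r in (90, 180, 270):
    print(r, rotate_traj(gt, r))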
It should be noticed that the augmented trajectory predictions P_ traj^t,r incorporate the same scene information as the original one P_ traj^t, r=0, i.e., BEV features with different rotation angles. Therefore, it's reasonable to consider the consistency among the predictions and regulate the noises caused by the rotation. The planning head is expected to be more robust to directional change and input distractors. Specifically, P_ traj^t,r are first rotated back to the original scene direction, then ℒ_1 loss is applied with P_ traj^t, r=0, 1.0ℒ_ cons = 1/T·|R|∑_t=1^T∑_r^R || Rot( P_ traj^t,r) - P_ traj^t, r=0||_1, where Rot is the inverse rotation. To summarize, the overall objective for our UAD contains spatial objectness loss, dreaming loss from the pretext, and imitation learning loss, direction loss, consistency loss from the planning task, 1.0ℒ = ω_1ℒ_ spat + ω_2ℒ_ drm +ω_3ℒ_ imi + ω_4ℒ_ dir + ω_5ℒ_ cons, where ω_1,ω_2,ω_3,ω_4,ω_5 are the weight coefficients. § EXPERIMENT §.§ Experimental Setup We conduct experiments in nuScenes <cit.> for open-loop evaluation, that contains 40,157 samples, of which 6,019 ones are used for evaluation. Following previous works <cit.>, we adopt the metrics of L2 error (in meters) and collision rate (in percentage). Notably, the intersection rate with road boundary (in percentage), proposed in BEV-Planner <cit.>, is also included for evaluation. For the closed-loop setting, we follow previous works <cit.> to perform evaluation in the Town05 <cit.> benchmark of the CARLA simulator <cit.>. Route completion (in percentage) and driving score (in percentage) are used as the evaluation metrics. We adopt the query-based view transformer <cit.> to learn BEV features from multi-view images. The confidence threshold of the open-set 2D detector is set to 0.35 to filter unreliable predictions. The angle θ to partition the BEV space is set to 4^∘ (K=360^∘/4^∘), and the default threshold δ is 1.2m (see Sec. <ref>). The weight coefficients in Eq. <ref> are set to 2.0,0.1,1.0,2.0,1.0. Our model is trained for 24 epochs on 8 NVIDIA Tesla A100 GPUs with a batch size of 1 per GPU. Other settings follow UniAD <cit.> unless otherwise specified. We observed that ST-P3 <cit.> and VAD <cit.> adopt different open-loop evaluation protocols (L2 error and collision rate) from UniAD in their official codes. We denote the setting in ST-P3 and VAD as TemAvg and the one in UniAD as NoAvg, respectively. In specific, the TemAvg protocol calculates metrics by averaging the performances from 0.5s to the corresponding timestamp. Taking the L2 error at 2s as an example, the calculation in TemAvg is 1.0L2@2s=Avg(l2_0.5s,l2_1.0s,l2_1.5s,l2_2.0s), where Avg is the average operation and 0.5s is the time interval between two consecutive annotated frames in nuScenes <cit.>. For NoAvg protocol, L2@2s=l2_2.0s. §.§ Comparison with State-of-the-arts Open-loop Evaluation. Tab. <ref> presents the performance comparison in terms of L2 error, collision rate, intersection rate with road boundary, and FPS. Since ST-P3 and VAD adopt different evaluation protocols from UniAD to compute L2 error and collision rate (see Sec. <ref>), we respectively calculate the results under different settings, i.e., NoAvg and TemAvg. As shown in Tab. <ref>, the proposed UAD achieves superior planning performance over UniAD and VAD on all metrics, while running faster. 
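Since the two protocols are easy to conflate, the snippet below restates how a per-horizon L2 number is reported under each; the per-step errors are placeholder values, not measurements from the tables.

# Per-step L2 errors (meters) at 0.5 s intervals -- placeholder numbers.
l2 = {0.5: 0.20, 1.0: 0.39, 1.5: 0.60, 2.0: 0.81, 2.5: 1.10, 3.0: 1.50}

def l2_noavg(horizon):
    # NoAvg (UniAD-style): report the error at the horizon itself.
    return l2[horizon]

def l2_temavg(horizon):
    # TemAvg (ST-P3/VAD-style): average all errors from 0.5 s up to the horizon.
    steps = [t for t in sorted(l2) if t <= horizon]
    return sum(l2[t] for t in steps) / len(steps)

print(l2_noavg(2.0))    # 0.81
print(l2_temavg(2.0))   # (0.20 + 0.39 + 0.60 + 0.81) / 4 = 0.50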
Notably, our UAD obtains 39.4% and 55.2% relative improvements on Collision@3s compared with UniAD and VAD under the NoAvg evaluation protocol (e.g., 39.4%=(0.71%-0.43%)/0.71%), demonstrating the longtime robustness of our method. Moreover, UAD runs at 7.2FPS, which is 3.4× and 1.4× faster than UniAD and VAD-Base, respectively, verifying the efficiency of our framework. Surprisingly, our tiny version, UAD-Tiny, which aligns the settings of backbone, image size, and BEV resolution in VAD-Tiny, runs at the fastest speed of 18.9FPS while clearly outperforming VAD-Tiny and even achieving comparable performance with VAD-Base. This again proves the superiority of our design. More detailed runtime comparisons and analyses are presented in the appendix. We adopt the NoAvg evaluation protocol in the following ablation experiments unless otherwise specified. Recent works discuss the effect of using ego status in the planning module <cit.>. Following this trend, we also fairly compare the ego status equipped version of our model with these works. It shows that the superiority of our UAD is still preserved, which also achieves the best performance against the compared methods. Moreover, BEV-Planner <cit.> introduces a new metric named “interaction” for better evaluating the performance of E2EAD methods. As shown in Tab. <ref>, our model obtains the average interaction rate of 1.13%, obviously outperforming other methods. This again proves the effectiveness of our UAD. On the other hand, this demonstrates the importance of designing a suitable pretext for perceiving the environment. Only using ego status is not enough for safe driving. Closed-loop Evaluation. The simulation results in CARLA <cit.> are shown in Tab. <ref>. Our UAD achieves better performance compared with recent E2E planners ST-P3 <cit.> and VAD <cit.> in all scenarios, proving the effectiveness. Notably, on challenging Town05 Long benchmark, UAD greatly outperforms recent E2E method VAD by 41.32 points on the driving score and 19.24 points on route completion, respectively. This proves the reliability of our UAD for long-term autonomous driving. §.§ Component-wise Ablation Loss Functions. We first analyze the influence of different loss functions that correspond to the proposed pretext task and self-supervised trajectory learning strategy. The experiments are conducted on the validation split of the nuScenes <cit.>, as shown in Tab. <ref>. The model with single imitation loss ℒ_ imi is considered as the baseline (172). With the enhanced perception capability by the spatial objectness loss ℒ_ spat, the average L2 error and collision rate are clearly improved to 1.00m and 0.71% from 3.18m and 2.43%, respectively (173 v.s. 172). The dreaming loss ℒ_ drm, direction loss ℒ_ dir and consistency loss ℒ_ cons also respectively bring considerable gains on the average L2 error for 1.98m, 1.58m, 1.77m over the baseline model (174,175,176 v.s. 172). The loss functions are finally combined to construct our UAD (177), which obtains the average L2 error of 0.90m and average collision rate of 0.19%. The results demonstrate the effectiveness of each proposed component. Temporal Learning with Dreaming Decoder. The temporal learning with the proposed dreaming decoder is realized by Circular Update and Dreaming Loss. The circular update is in charge of both extracting information from observed scenes (Eq. <ref>) and generating pseudo observations to predict the ego trajectories of future frames (Eq. <ref>). We study the influence of each module in Tab. 
<ref>. Circular Update and Dreaming Loss respectively bring performance gains of 0.70m/0.78m on the average L2 error (173,174v.s.172), proving the effectiveness of our designs. Applying both two modules (175) achieves the best performance, showing their complementarity for temporal representation learning. Direction Aware Learning Strategy. Directional Augmentation and Directional Consistency are the two core components of the proposed direction-aware learning strategy. We prove their effectiveness in Tab. <ref>. It shows that the Directional Augmentation improves the average L2 error for considerable 0.05m (173v.s.172). One interesting observation is that applying the augmentation brings more gains for long-term planning than short-term ones, i.e., the L2 error of 1s/3s decreases for 0.01m/0.08m compared with 172, which proves the effectiveness of our augmentation on enhancing longer temporal information. The Directional Consistency further reduces the average collision rate for impressive 0.13% (174v.s.173), which enhances the robustness for driving directional change. Angular Design. We further explore the influence of the proposed angular design by removing the angular partition and angular queries. Specifically, the BEV feature is directly fed into the dreaming decoder to predict pixel-wise objectness, which is supervised by the BEV object mask (see Fig. <ref>) with binary cross-entropy loss. Besides, the ego query directly interacts with the BEV feature by cross-attention to extract environmental information. The results are presented in Tab. <ref>. r0.42 Ablation on the angular design. =0.12cm ! 2*# 2*cAngular Design 4c|L2 (m) ↓ 4cCollision (%) ↓ 1s 2s 3s gray!15Avg. 1s 2s 3s gray!15Avg. 172 - 0.78 1.31 2.01 gray!151.37 0.61 1.39 2.12 gray!151.37 173 0.39 0.81 1.50 gray!150.90 0.01 0.12 0.43 gray!150.19 When discarding the angular design, the average L2 error degrades for 0.47m, and the average collision rate consistently degrades for 1.18%. This demonstrates the effectiveness of our angular design in perceiving complex environments and planning robust driving routes. §.§ Further Analysis Planning Performance in Different Driving Scenes. The direction-aware learning strategy is designed to enhance the planning performance in scenarios of vehicle steering. We demonstrate the superiority of our proposed model by evaluating the metrics of different driving scenes in Tab. <ref>. According to the given driving command (i.e., go straight, turn left and turn right), we divide the 6,019 validation samples in nuScenes <cit.> into three parts, which contain 5,309, 301 and 409 ones, respectively. Not surprisingly, all methods perform better under go straight scenes than the steering scenes, proving the necessity of augmenting the imbalanced training data for robust planning. When applying the proposed direction-aware learning strategy, our UAD achieves considerable gains on the average collision rate of turn left and turn right scenes (UAD v.s. UAD^*). Notably, our model outperforms UniAD and VAD by a large margin in steering scenes, proving its effectiveness. Visualization of Angular Perception and Planning. The angular perception pretext is designed to perceive the objects in each sector region. We show its capability by visualizing the predicted objectness in nuScenes <cit.> in Fig. <ref>. For a better view, we transform the discrete objectness scores and ground truth to a pseudo-BEV mask. It shows that our model can successfully capture surrounding objects. Fig. 
<ref> also shows the open-loop planning results of recent SOTA UniAD <cit.>, VAD <cit.> and our UAD, proving the effectiveness of our method to plan a more reasonable ego trajectory. Fig. <ref> compares the closed-loop driving routes between Transfuser <cit.>, ST-P3 <cit.> and our UAD in CARLA <cit.>. Our method successfully notices the person and drives in a much safer manner, proving the reliability of our UAD in handling safe-critical issues under complex scenarios. Due to limited space, we present more analyses in the appendix, including 1) the influence of partition angle θ, 2) the influence of direction threshold δ, 3) different backbones and pre-trained weights, 4) replacing 2D ROIs from GroundingDINO with 2D GT boxes, 5) different settings of GroundingDINO to generate 2D ROIs, 6) the influence of pre-training to previous method UniAD and our UAD, 7) runtime analysis of each module in our UAD and modularized UniAD, 8) more visualizations, etc. §.§ Discussion Ego Status and Open-loop Planning Evaluation. As revealed by <cit.>, it's not a challenge to acquire decent performance of L2 error and collision rate (the original metrics in nuScenes <cit.>) in the open-loop evaluation of nuScenes by using ego status in the planning module (see Tab. <ref>). The question is: is open-loop evaluation meaningless? Our answer is NO. Firstly, the inherent reason for the observation is that the simple cases of go straight dominate the nuScenes testing dataset. In these cases, even a linear extrapolation of motion being sufficient for planning is not surprising. However, as shown in Tab. <ref>, in more challenging cases like turn right and turn left, the open-loop metrics can still clearly indicate the difficulty of steering scenarios and the differences in methods, which is also proved in <cit.>. Therefore, open-loop evaluation is not meaningless, while the crux is the distribution of the testing data and the metrics. Secondly, the advantage of open-loop evaluation is its efficiency, which benefits the fast development of algorithms. This view is also revealed by a recent simulator design study <cit.>, which tries to transform the closed-loop evaluation into an open-loop fashion. In our work, we thoroughly compare our model with other methods, which shows consistent improvements against previous works under various driving scenarios (straight or steering), different usage of ego status (w/. or w/o.), diverse evaluation metrics (L2 error, collision rate or intersection rate from <cit.>), and different evaluation types (open- or closed-loop). It thus again proves the importance of designing suitable pretext tasks for end-to-end autonomous driving. How to Guarantee Safety in Current Auto-Drive System? Safety is the first requirement of autonomous driving systems in practical products, especially for L4-level auto-vehicles. To guarantee safety, offline collision check with predicted 3D boxes is an inevitable post-process under current technological conditions. Then, a question naturally arises: how to safely apply our model to current auto-driving systems? Before answering this question, we reaffirm our claim that we believe discarding 3D labels is an efficient, attractive, and potential direction for E2EAD, but it doesn't mean we refuse to use any 3D labels if the relatively cheap ones are available in practical product engineering. 
For instance, solely annotating bounding boxes without object identity for tracking is much cheaper than labeling other elements like HD-map, and point-cloud segmentation labels for occupancy. Therefore, we provide a degraded version of our method by arranging an additional 3D detection head. r0.45 Ablation on the 3D detection head. =0.12cm ! 2*# 2*cDetection Head 4c|L2 (m) ↓ 4cCollision (%) ↓ 1s 2s 3s gray!15Avg. 1s 2s 3s gray!15Avg. 172 - 0.39 0.81 1.50 gray!150.90 0.01 0.12 0.43 gray!150.19 173 0.37 0.86 1.57 gray!150.93 0.02 0.17 0.55 gray!150.25 Then our model can seamlessly integrate into auto-drive products, and offline collision check is achievable. As shown in Tab. <ref>, integrating the 3D detection head doesn't bring additional improvements, which again proves the design of our method has sufficiently encoded 3D information to the planning module. In a nutshell, 1) our work can easily integrate other 3D tasks if they are inevitable under current technical conditions; 2) the experiments again prove from the side that our spatial-temporal module has already encoded important 3D clues for planning; 3) we hope our frontier work can eliminate some inessential 3D sub-tasks for both research and engineer usage of E2EAD models. § CONCLUSION Our work seeks to liberate E2EAD from costly modularization and 3D manual annotation. With this goal, we propose the unsupervised pretext task to perceive the environment by predicting angular-wise objectness and future dynamics. To improve the robustness in steering scenarios, we introduce the direction-aware training strategy for planning. Experiments demonstrate the effectiveness and efficiency of our method. As discussed, although the ego trajectories are easily obtained, it is almost impossible to collect billion-level precisely annotated data with perception labels. This impedes the further development of end-to-end autonomous driving. We believe our work provides a potential solution to this barrier and may push performance to the next level when massive data are available. plain § APPENDIX The appendix presents additional designing and explaining details of our Unpervised pretext task for end-to-end Autonomous Driving (UAD) in the manuscript. * Different Partition Angles We explore the influence of different partition angles in angular pretext to learn better spatio-temporal knowledge. * Different Direction Thresholds We explore the influence of different thresholds in direction prediction to enhance planning robustness in complex driving scenarios. * Different Backbones and Pre-trained Weights We compare the performance of different backbones and pre-trained weights on our method. * Objectness Label Generation with GT Boxes We compare the generated objectness label between using the pseudo ROIs from GroundingDINO <cit.> and ground-truth boxes on different backbones. * Settings for ROI Generation We ablate different settings for the open-set 2D detector GroundingDINO, which provides ROIs for the label generation of angular perception pretext. * Different Image Sizes and BEV Resolution We compare the performance with different input sizes of multi-view images and BEV resolutions. * Runtime Analysis We evaluate the runtime of each module of UAD and compare with modularized UniAD <cit.>, which demonstrates the efficiency of our method. * Classification of Angular Perception We evaluate the objectness prediction in the angular perception pretext, which demonstrates the enhanced perception capability in complex driving scenarios. 
* Influence of Pre-training We evaluate the influence of pre-training by detailing the training losses and planning performances with different pre-trained weights. * More Visualizations We provide more visualizations for the predicted angular-wise objectness and planning results in the open-loop evaluation of nuScenes <cit.> and closed-loop simulation of CARLA <cit.>. §.§ Different Partition Angles The proposed angular perception pretext divides the BEV space into multiple sectors. We explore the influence of partition angle θ in Tab <ref>. Experimental results show that the L2 error and inference speed gradually increase with the partition angle. The model with partition angle of 1^∘(172) achieves the best average L2 error of 0.85m. And the partition angle of 4^∘ contributes to the best average collision rate of 0.19% (174). This reveals that a smaller partition angle helps learn more fine-grained environmental representations, eventually benefiting planning. In contrast, the model with a large partition angle sparsely perceives the scene. Despite reducing the computation cost, it will also degrade the safety of the end-to-end autonomous driving system. §.§ Different Direction Thresholds The direction prediction that the ego car intends to maneuver (i.e., left, straight and right) is proposed to enhance the steering capability for autonomous driving. The label is generated with the threshold δ (see Eq. 7 in the manuscript), which determines the ground-truth direction of each waypoint in the expert trajectory. Here we explore the influence by ablating different thresholds, as shown in Tab. <ref>. Experimental results show that the L2 error gradually increases with the direction threshold. The model with δ of 0.5m (172) achieves the lowest L2 error of 0.86m. It reveals that a smaller threshold will force the planner to fit the expert navigation, leading to a closer distance between the predicted trajectory and the ground truth. In contrast, the collision rate benefits more from larger thresholds. The model with δ of 2.0m obtains the best collision rate at 2s of 0.08% (176), showing the effectiveness for robust planning. Notably, the threshold of 1.2m contributes to a great balance with the average L2 error of 0.90m and average collision rate of 0.19%. §.§ Different Backbones and Pre-trained Weights As a common sense, pre-training the backbone network with fundamental tasks like image classification on ImageNet <cit.> will benefit the sub-tasks. The previous method UniAD <cit.> uses the pre-trained weights of BEVFormer <cit.>. What surprised us is that when replacing the pre-trained weights with the one learned on ImageNet, the performance of UniAD dramatically degraded (see “Influence of Pre-training” for more details). This inspires us to explore the influence of backbone settings on our framework. As shown in Tab. <ref>, interestingly, even without any pre-training, our model still outperforms UniAD with pre-trained ResNet101 and VAD with pre-trained ResNet50. This verifies the effectiveness of our unsupervised pretext task on modeling the driving scenes. We also use publicly available pre-trained weights on detection datasets like COCO <cit.> and nuImages <cit.> to train our model, which shows better performance. These experimental results and observations demonstrate that a potentially promising topic is how to pre-train a model for end-to-end autonomous driving. We leave this to future research. 
§.§ Objectness Label Generation with GT Boxes As mentioned in the manuscript, the essence of generating the angular objectness label lies in the 2D ROIs, which come from the open-set 2D detector GroundingDINO <cit.>. Here we explore the influence of using the ground-truth 2D boxes as ROIs, which provide more high-quality samples for the representation learning in the angular perception pretext. Tab. <ref> shows that training with GT boxes achieves consistent performance gains on both ResNet50 <cit.> and ResNet101 <cit.> (173,175 v.s. 172,174). This reveals that accurate annotation does help to learn better spatio-temporal knowledge and improve ego planning. Considering the cost in real-world deployment, training with accessible pseudo labels is a more efficient way compared with the manual annotation, which also shows comparable performance in autonomous driving (172 v.s. 173 and 174 v.s. 175). §.§ Settings for ROI Generation. The quality of learned spatio-temporal knowledge highly relies on the generated ROIs by the open-set 2D detector GroundingDINO <cit.>, which are then projected as the BEV objectness label for training the angular perception pretext. We explore the influence of generated ROIs with different settings, as shown in Tab. <ref>. We take the setting with the confidence score of 0.35, prompt word of vehicle and without the Rule Filter, as the baseline (172). By appending more prompt words (e.g., pedestrian, barrier), the planning performance gradually improves (174,173 v.s.172), showing the enhanced perception capability with more diversified objects. Filtering the ROIs with overlarge size (i.e., Rule Filter) brings considerable gains for the average L2 error of 0.07m and average collision rate of 0.10% (175v.s.174). One interesting observation is that decreasing the confidence threshold would slightly improve the L2 error while causing higher collision rate (176v.s.175). In contrast, increasing the threshold obtains lower average collision rate of 0.17% and higher average L2 error of 0.98m. This reveals the importance of providing diversified ROIs for angular perception learning as well as ensuring high quality. The model with the confidence score of 0.35, all prompt words and Rule Filter achieves balanced performance with the average L2 error of 0.90m and average collision rate of 0.19%. §.§ Different Image Sizes and BEV Resolution For safe autonomous driving, increasing the input size of the multi-view images and the resolution of the built BEV representation is an effective way, which provide more detailed environmental information. While benefiting perception and planning, it inevitably brings heavy computation cost. We then ablate the image size and BEV resolution of our UAD to find a balanced version between performance and efficiency, as shown in Tab. <ref>. The results show that our UAD with ResNet-101 <cit.>, image size of 1600×900, BEV resolution of 200×200, achieves the best performance compared with previous methods UniAD <cit.> and VAD-Base <cit.> while running faster with 7.2FPS (177). By replacing the backbone with ResNet-50, our UAD is more efficient with little performance degradation (176 v.s. 177). We further align the settings of VAD-Tiny, which has an inference speed of outstanding 17.6FPS (173), to explore the influence of much smaller input sizes. Tab. <ref> shows that our UAD still achieves excellent performance even compared with VAD-Base of high-resolution inputs (175 v.s. 174). 
Notably, our UAD of this version has the fastest inference speed of 18.9FPS. This again proves the effectiveness of our method in performing fine-grained perception, as well as the robustness to fit the inputs of different sizes. §.§ Runtime Analysis Tab. <ref> compares the runtime of each module between the modularized method UniAD <cit.> and our UAD. As we adopt the Backbone and BEV Encoder from BEVFormer <cit.> that are the same in UniAD, the latency of feature extraction is similar with little difference due to different pre-processing. The modular sub-tasks in UniAD consume most of the runtime, i.e., significant 71.8% for Det&Track (31.2%), Map (19.8%), Motion (10.9%) and Occupancy (9.9%), respectively. In contrast, our UAD performs simple Angular Partition and Dreaming Decoder, which take only 14.0% (19.3ms) to model the complex environment. This demonstrates our insight that it's a necessity to liberate end-to-end autonomous driving from costly modularization. The downstream Planning Head takes negligible 1.5ms to plan the ego trajectory, compared with 9.7ms in UniAD. Finally, our UAD finishes the inference with a total runtime of 138.3ms, 3.4× faster than the 465.1ms of UniAD, showing the efficiency of our design. §.§ Classification of Angular Perception The proposed angular perception pretext learns spatio-temporal knowledge of the driving scene by predicting the objectness of each sector region, which is supervised by the generated binary angular-wise label. We show the perception ability by evaluating the classification metrics based on the validation split of the nuScenes <cit.> dataset. Fig. <ref> draws the Precision-Recall (PR) curve and Receiver-Operating-Characteristic (ROC) curve in different driving scenes (i.e., turn left, go straight and turn right). In the PR curve, our UAD achieves balanced precision and recall scores in different driving scenes, showing the effectiveness of our pretext task to perceive the surrounding objects. Notably, the performance of go straight scenes is slightly better than the steering ones under all thresholds. This proves our insight to design tailored direction-aware learning strategy for improving the safety-critical turn left and turn right scenes. The ROC curve shows the robustness of our angular perception pretext to classify the objects from complex environmental observations. §.§ Influence of Pre-training Pre-training the backbone network with fundamental tasks is a commonly used metric to benefit representation learning. As mentioned in “Different Backbones and Pre-trained Weights” of Sec. 4.4 in the manuscript, the performance of the previous SOTA method UniAD <cit.> dramatically degrades without the pre-trained weights from BEVFormer <cit.>. Here we further detail the influence by comparing the training losses and planning performances with different pre-trained weights in Fig. <ref>. Fig. <ref> shows that the training losses increase by about 20 on average when replaced with the pre-trained weights from ImageNet <cit.>. Correspondingly, the average L2 error is significantly higher than the one with the pre-trained weights from BEVFormer. This reveals that UniAD heavily relies on the perceptive pre-training in BEVFormer to optimize modularized sub-tasks. In contrast, our UAD performs comparably even without any pre-training (see Fig. <ref>), proving the effectiveness of our designs for robust optimization. 
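The precision-recall and ROC summaries discussed under "Classification of Angular Perception" reduce to standard binary-classification bookkeeping over the per-sector objectness scores; a small sketch with random stand-in scores (not real model outputs) is given below.

import torch

def precision_recall(scores, labels, threshold=0.5):
    # Precision/recall of per-sector objectness scores against binary labels at one threshold.
    pred = (scores >= threshold).float()
    tp = (pred * labels).sum()
    precision = tp / pred.sum().clamp(min=1)
    recall = tp / labels.sum().clamp(min=1)
    return precision.item(), recall.item()

torch.manual_seed(0)
labels = (torch.rand(6019 * 90) > 0.8).float()          # stand-in labels: 90 sectors per validation sample
scores = 0.5 * labels + 0.5 * torch.rand_like(labels)   # stand-in scores loosely correlated with labels
for thr in (0.3, 0.5, 0.7):
    print(thr, precision_recall(scores, labels, thr))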
§.§ More Visualizations Open-loop Planning We provide more visualizations about the predicted angular-wise objectness and planning results on nuScenes <cit.>. Fig. <ref> compares the discrete objectness scores and ground truth, proving the effectiveness of our angular perception pretext to perceive the objects in each sector region. The planning results of previous SOTA methods (i.e., UniAD <cit.> and VAD <cit.>) and our UAD are shown in Fig. <ref>. With the designed pretext and tailored training strategy, our method could plan a more reasonable ego trajectory under different driving scenarios, proving the effectiveness of our work. The third row shows the failure case of our planner. In this case, the ego car is given the “Turn Right” command when t=0 (i.e., the first frame of the driving scenario), leading to ineffectiveness of our planner in learning helpful temporal information. A possible solution to deal with this is to apply an auxiliary trajectory prior for the first several frames, and we leave this to future work. Closed-loop Simulation Fig. <ref> visualizes the predicted objectness and planning results in the Town05 Long benchmark of CARLA <cit.>. Following the setting of ST-P3 <cit.> in closed-loop evaluation, we collect visual observations from the cameras of “CAM_FRONT”, “CAM_FRONT_LEFT”, “CAM_FRONT_RIGHT” and “CAM_BACK”. It shows that the sector regions in which the surrounding objects exist are successfully captured by our UAD, proving the effectiveness and robustness of our design. Notably, the missed objects by GroundingDINO <cit.>, e.g., the black car in the camera of “CAM_FRONT_LEFT” at t=145, are surprisingly perceived and marked in the corresponding sector. This demonstrates our method has the capability of learning perceptive knowledge in a data-driven manner, even with coarse supervision by the generated 2D pseudo boxes from GroundingDINO.
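For readers who want to reproduce the label-generation step, the snippet below sketches one plausible way to lift such 2D pseudo boxes into angular-wise BEV objectness labels: each box center is back-projected onto the ground plane with the camera parameters, its azimuth in the ego frame selects a sector, and any sector hit by at least one object is marked positive. This is only an illustrative sketch under simplifying assumptions (ground-plane intersection of the ray through the box center, a hypothetical num_sectors argument), not the released implementation, which may treat depth and box extent differently.

import numpy as np

def boxes_to_sector_labels(box_centers_img, cam_intrinsic, cam_to_ego,
                           num_sectors=72, ground_height=0.0):
    """Illustrative sketch: mark BEV sectors hit by back-projected 2D box centers.

    box_centers_img: (N, 2) pixel coordinates of pseudo-box centers.
    cam_intrinsic:   (3, 3) camera intrinsic matrix.
    cam_to_ego:      (4, 4) camera-to-ego extrinsic transform.
    Returns a binary angular objectness label of shape (num_sectors,).
    """
    labels = np.zeros(num_sectors, dtype=np.float32)
    K_inv = np.linalg.inv(cam_intrinsic)
    for (u, v) in box_centers_img:
        ray_cam = K_inv @ np.array([u, v, 1.0])        # viewing ray in camera frame
        ray_ego = cam_to_ego[:3, :3] @ ray_cam         # rotate ray into ego frame
        origin = cam_to_ego[:3, 3]                     # camera center in ego frame
        if abs(ray_ego[2]) < 1e-6:
            continue                                   # ray parallel to the ground plane
        t = (ground_height - origin[2]) / ray_ego[2]   # intersect with z = ground
        if t <= 0:
            continue                                   # intersection behind the camera
        point = origin + t * ray_ego                   # ground-plane hit in ego frame
        azimuth = np.arctan2(point[1], point[0])       # angle w.r.t. ego heading
        sector = int(((azimuth + np.pi) / (2 * np.pi)) * num_sectors) % num_sectors
        labels[sector] = 1.0                           # this direction is occupied
    return labels

The sector-level formulation only asks whether a direction around the ego vehicle is occupied, which is one reason the coarse pseudo boxes can suffice as supervision.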
http://arxiv.org/abs/2406.19353v1
20240627173218
CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement
[ "Chengwen Zhang", "Yun Liu", "Ruofan Xing", "Bingda Tang", "Li Yi" ]
cs.CV
[ "cs.CV" ]
CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement
Chengwen Zhang, Yun Liu, Ruofan Xing, Bingda Tang, Li Yi
July 1, 2024
========================================================================================================

§ ABSTRACT Understanding how humans cooperatively rearrange household objects is critical for VR/AR and human-robot interaction. However, modeling these behaviors remains under-explored due to the lack of relevant datasets. We fill this gap by presenting CORE4D, a novel large-scale 4D human-object-human interaction dataset focusing on collaborative object rearrangement, which encompasses diverse compositions of object geometries, collaboration modes, and 3D scenes. Starting from 1K human-object-human motion sequences captured in the real world, we enrich CORE4D by contributing an iterative collaboration retargeting strategy that augments the motions to a variety of novel objects. Leveraging this approach, CORE4D comprises a total of 11K collaboration sequences spanning 3K real and virtual object shapes. Benefiting from the extensive motion patterns provided by CORE4D, we benchmark two tasks for generating human-object interaction: human-object motion forecasting and interaction synthesis. Extensive experiments demonstrate the effectiveness of our collaboration retargeting strategy and indicate that CORE4D poses new challenges to existing human-object interaction generation methodologies. Our dataset and code are available at https://github.com/leolyliu/CORE4D-Instructions.

§ INTRODUCTION Humans frequently rearrange household items through multi-person collaboration, such as moving a table or picking up an overturned chair together. Analyzing and synthesizing these diverse collaborative behaviors could be widely applicable in VR/AR, human-robot interaction <cit.>, and dexterous <cit.> and humanoid <cit.> manipulation. However, understanding and modeling these interactive motions remain under-explored due to the lack of large-scale, richly annotated datasets. Most existing human-object and hand-object interaction datasets focus on individual behaviors <cit.> and two-person handovers <cit.>, and they typically cover only a limited number of object instances, which makes it hard to support generalizable interaction understanding across diverse object geometries. Scaling up precise human-object interaction data is challenging. Vision-based human-object motion tracking methods <cit.> have made significant progress, yet they still suffer from low fidelity under the severe occlusion that is common in multi-human collaboration scenes. Mocap <cit.>, in turn, is expensive and hard to scale up to cover a large number of objects to be rearranged. We therefore aim to curate a large-scale, category-level human-object-human (HOH) interaction dataset with high motion quality in a cost-efficient manner. We observe that HOH collaborations mainly vary in two aspects: the temporal collaboration patterns of the two humans and the spatial relations between human and object. The temporal collaboration patterns can vary in many ways depending on scene complexity, motion range, and collaboration mode. In contrast, the spatial relations between human and object tend to be strongly homogeneous for objects from the same category, e.g., two persons holding the two sides of a chair. 
This allows retargeting interactions involving one specific instance to another using automatic algorithms, avoiding the need to capture interactions with thousands of same-category objects in the real world. The above observations make it possible for us to leverage expensive motion capture systems to capture only humans' diverse temporal collaboration patterns while leaving the richness of human-object spatial relations to automatic spatial retargeting algorithms. Using these insights, we build a large-scale dataset, , encompassing a wide range of human-object interactions for collaborative object rearrangement. includes various types of household objects, collaboration modes, and 3D environments. Our data acquisition strategy combines mocap-based capturing and synthetic retargeting, allowing us to scale the dataset effectively. The retargeting algorithm transfers spatial relation between human and object to novel object geometries while preserving temporal pattern of human collaboration. As a result, includes 1K real-world motion sequences (-Real) paired with videos and 3D scenes, and 10K synthetic collaboration sequences (-Synthetic) covering 3K diverse object geometries. We benchmark two tasks for generating human-object collaboration: (1) motion forecasting <cit.> and (2) interaction synthesis <cit.> on , revealing challenges in modeling human behaviors, enhancing motion naturalness, and adapting to new object geometries. Ablation studies demonstrate the effectiveness of our hybrid data acquisition strategy, and the quality and value of -Synthetic, highlighting its role in helping to improve existing motion generation methods. In summary, our main contributions are threefold: (1) We present , a large-scale 4D HOH interaction dataset for collaborative object rearrangement. (2) We propose a novel hybrid data acquisition methodology, incorporating real-world data capture and synthetically collaboration retargeting. (3) We benchmark two tasks for collaboration generation, revealing new challenges and research opportunities. § RELATED WORK §.§ Human-object Interaction Datasets Tremendous progress has been made in constructing human-object interaction datasets. To study how humans interact with 3D scenes, various widely-used datasets record human movements and surrounding scenes separately, regarding objects as static <cit.> or partially deformable <cit.> without pose changes. For dynamic objects, recent works <cit.> have captured human-object interaction behaviors with different focuses. Table <ref> generally summarizes the characteristics of 4D human-object-interaction datasets. To support research for vision-based human-object motion tracking and shape reconstruction, a line of datasets <cit.> presents human-object mesh annotations with multi-view RGB or RGBD signals. With the rapid development of human-robot cooperation, several works <cit.> focus on specific action types such as grasping <cit.> and human-human handover <cit.>. Our dataset uniquely captures multi-person and object collaborative motions, category-level interactions, and both egocentric and allocentric views, offering comprehensive features with the inclusion of both real and synthetic datasets. §.§ Human Interaction Retargeting Human interaction retargeting focuses on how to apply human interactive motions to novel objects in human-object interaction scenarios. 
Existing methodologies <cit.> are object-centric, which propose first finding contact correspondences between the source and the target objects and then adjusting human motion to touch specific regions on the target object via optimization. As crucial guidance of the result, contact correspondences are discovered by aligning either surface regions <cit.>, spatial maps <cit.>, distance fields <cit.>, or neural descriptor fields <cit.> between the source and the target objects, which are all limited to objects with similar topology and scales. Our synthetic data generation strategy incorporates object-centric design <cit.> with novel human-centric contact selection, creating a chance to adapt to these challenging objects using human priors. §.§ Human-object Interaction Generation Human-object interaction generation is an emerging research topic that aims to synthesize realistic human-object motions conditioned on surrounding 3D scenes, known object trajectories, or action types. To generate humans interacting with static 3D scenes, POSA <cit.> and COINS <cit.> synthesize static human poses with CVAE <cit.>, while a line of work <cit.> further presents dynamic human motions by auto-regressive manners <cit.>, diffusion models <cit.>, or two-stage designs that first generates start and end poses and then interpolates motion in-between <cit.>. InterDiff <cit.> and OMOMO <cit.> further fulfill this task for dynamic objects. To generate human-object interaction under specific action descriptions, recent works <cit.> extract text features with pretrained CLIP <cit.> encoders or LLMs <cit.> and use them to guide diffusion models <cit.>. § CONSTRUCTING CORE4D is a large-scale 4D human-object-human interaction dataset acquired in a novel hybrid scheme, comprising -Real and -Synthetic. -Real is captured (Section <ref>) and annotated (Section <ref>) from authentic collaborative scenarios. It provides human-object-human poses, allocentric RGB-D videos, egocentric RGB videos, and 2D segmentation across 1.0K sequences accompanied by 37 object models. To augment spacial relation between human and object, we present an innovative collaboration retargeting technique in Section <ref>, integrating -Real with -Synthetic, thereby expanding our collection with an additional 10K sequences and 3K rigid objects. Detailed characteristics such as data diversities are demonstrated in Section <ref>. §.§ -Real Data Capture To collect precise human-object motions with visual signals, we set up a hybrid data capturing system shown in Fig. <ref>, consisting of an inertial-optical mocap system, four allocentric RGB-D cameras and a camera worn by persons for egocentric sensing. The frequency of our system is 15 FPS. Inertial-optical Mocap System. To accurately capture human-object poses in multi-person collaboration scenarios, often involving severe occlusion, we use an inertial-optical mocap system <cit.> inspired by CHAIRS <cit.> This system includes 12 infrared cameras, mocap suits with 8 inertial-optical trackers and two data gloves per person, and markers of a 10mm radius. The mocap suits capture Biovision Hierarchy (BVH) skeletons of humans, while markers attached to the objects track object motion. Visual Sensors. Kinect Azure DK cameras are integrated to capture allocentric RGB-D signals, and an Osmo Action3 is utilized to capture egocentric color videos. The resolution of all the visual signals is 1920x1080. Cameras are calibrated by the mocap system and synchronized via timestamp. 
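As a rough illustration of the timestamp-based alignment, once all devices share a common clock each visual frame can be matched to the 15 FPS mocap clock by nearest-timestamp search, discarding matches farther than half a frame interval. The helper below is a hypothetical sketch, not the released code, and assumes both timestamp arrays are sorted and expressed in seconds on the common clock.

import numpy as np

def align_streams(mocap_ts, cam_ts, fps=15.0):
    """Match each mocap frame to the nearest camera frame by timestamp.

    mocap_ts, cam_ts: sorted 1D arrays of timestamps in seconds (common clock).
    Returns camera indices, or -1 where no frame lies within half a mocap period.
    """
    mocap_ts, cam_ts = np.asarray(mocap_ts), np.asarray(cam_ts)
    half_period = 0.5 / fps
    idx = np.searchsorted(cam_ts, mocap_ts)                 # insertion positions
    idx = np.clip(idx, 1, len(cam_ts) - 1)
    left, right = cam_ts[idx - 1], cam_ts[idx]
    nearest = np.where(mocap_ts - left <= right - mocap_ts, idx - 1, idx)
    matched = np.where(np.abs(cam_ts[nearest] - mocap_ts) <= half_period, nearest, -1)
    return matched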
Details on camera calibration and synchronization are provided in the supplementary material. Object Model Acquisition. -Real includes 37 3D models of rigid objects spanning six household object categories. Each object model is constructed by an industrial 3D scanner with up to 100K triangular faces. We additionally adopt manual refinements on captured object models to remove triangle outliers and improve accuracy. Privacy Protection. To ensure participant anonymity, blurring is applied to faces <cit.> in RGB videos, and fake facial meshes are generated via SMPL-X <cit.>. The participants all consented to releasing , and were also notified of their right to have their data removed from at any time. §.§ -Real Data Annotation Object Pose Tracking. To acquire the 6D pose of a rigid object, we attach four to five markers to the object's surface. The markers formulate a virtual rigid that the mocap system can track. With accurate localization of the object manually, the object pose can be precisely determined by marker positions captured by the infrared cameras. Human Mesh Acquisition. Aligning with existing dataset efforts <cit.>, we retarget the BVH <cit.> human skeleton to the widely-used SMPL-X <cit.>. SMPL-X <cit.> formulates a human mesh as D_smplx = M(β, θ). The body shape β∈ℝ^10 are optimized to fit the constraints on manually measured human skeleton lengths. With β computed, we optimize the full-body pose θ∈ℝ^159 with the loss function: ℒ = ℒ_reg + ℒ_j3D + ℒ_jOri + ℒ_smooth + ℒ_h3D + ℒ_hOri + ℒ_contact, where ℒ_reg ensures the simplicity of the results and prevents unnatural, significant twisting of the joints. ℒ_j3D and ℒ_jOri encourage the rotation of joints and the global 3D positions to closely match the ground truth. ℒ_h3D and ℒ_hOri guide the positioning and orientation of the fingers. ℒ_smooth promotes temporal smoothness. ℒ_contact encourages realistic contact between the hands and objects. Then using SMPL-X <cit.> M(β, θ, Φ) : ℝ^|θ| × |β|↦ℝ^3N to generate human mesh. Details on the loss functions are presented in the supplementary material. 2D Mask Annotation. We offer automatic 2D segmentation for individuals and the manipulated object to aid in predictive tasks like vision-based human-object pose estimation <cit.>. We first use DEVA <cit.> to segment human and object instances in a captured interaction image with text prompts. Then, we render human and object meshes separately on each image and select the instance with the highest Intersection-over-Union (IoU) for mask annotation. §.§ -Synthetic Data Generation In order to enrich the diversities of object geometries and human-object spatial relations, our retargeting algorithm transfers real interactions to ShapeNet <cit.> objects of the same category, thereby significantly expanding the dataset regarding the object's diversity. When transferring interactions across objects, contact points are always the key and it is important to consider whether they can be properly transferred with consistent semantics on new objects <cit.>. However, we find this insufficient when object geometries vary largely and correspondences become hard to build. We thus tackle interaction retargeting from a novel human-centric perspective where good contact points should support natural human poses and motions. We realize this idea through the pipeline depicted in Figure <ref>, which comprises three key components. First, object-centric contact retargeting uses whole contact knowledge from -Real to obtain accurate contact with different objects. 
Second, contact-guided interaction retargeting adapts motion sequences to new object geometries while considering the contact constraints. Third, a human-centric contact selection evaluates poses from interaction candidates to select the most plausible contacts. Object-centric Contact Retargeting. To acquire reasonable human poses, contact constraints on the target object are essential. We draw inspiration from Tink <cit.> and train DeepSDF on all objects' signed distance fields (SDFs). For source object SDF O_s and target object SDF O_t, we first apply linear interpolation on their latent vectors o_s and o_t and obtain N intermediate vectors o_i = N+1-i/N+1 o_s + i/N+1 o_t (1≤ i≤ N). We then decode o_i to its SDF O_i via the decoder of DeepSDF, and reconstruct the corresponding 3D mesh M_i using the Marching Cubes algorithm <cit.>. Thereby get mesh sequence ℳ = [source, M_1, M_2, ..., M_N, target] and successively transfer contact positions between every two adjacent meshes in ℳ via Nearest-neighbor searching. In addition, we leverage all contact candidates from -Real on source to form a pool of contact candidates and transfer them to target as contact constraints. Contact-guided Interaction Retargeting. For each contact constraint, interaction retargeting aims to transfer human interaction from source to target. To greatly enforce the consistency of interaction motion, we optimize variables including the object rotations R_o ∈ℝ^N×3 and translations T_o ∈ℝ^N×3, human poses θ_1,2∈ℝ^2 × N × 153, translation T_1,2∈ℝ^2 × N × 3 and orientation O_1,2∈ℝ^2 × N × 3 on the SMPL-X <cit.>. N is the frame number. We first estimate the target's motion {R_o,T_o} by solving an optimization problem as follows: R_o, T_o ⟵argmin_R_o, T_o(ℒ_f + ℒ_spat + ℒ_smooth), where fidelity loss ℒ_f evaluates the difference of the target's rotation and translation against the source, restriction loss ℒ_spat penalizes target's penetration with the ground, and smoothness loss ℒ_smooth constrains the target's velocities between consecutive frames. Given the target's motion and contact constraints, we then transfer humans' interactive motion {θ_1,2,T_1,2,O_1,2} from the source to the target by solving another optimization problem as follows: θ_1,2, T_1,2, O_1,2⟵argmin_θ_1,2, T_1,2, O_1,2(ℒ_j + ℒ_c + ℒ_spat + ℒ_smooth), where fidelity loss ℒ_j evaluates the difference in human joint positions before and after the transfer, contact loss ℒ_c computes the difference between human-object contact regions and the contact constraints, ℒ_spat and ℒ_smooth ensures the smoothness of human motion. Details on the loss designs and their motivations are provided in the supplementary material. Human-centric Contact Selection. Selecting reasonable contact constraints efficiently is challenging due to their large scale and the time-consuming interaction retargeting. We address this challenge by developing a beam search algorithm to select contact constraints from a human-centric perspective. Specifically, we train a human pose discriminator inspired by GAN-based motion generation strategies <cit.>. To train the discriminator, we build a pairwise training dataset, with each pair consisting of one positive human pose sample and one negative sample. Positive samples are encouraged to get higher scores than negative ones. We use -Real as positive samples. We add 6D pose noise Δ(α, β, γ, x, y, z) on target motion, and regard corresponding human motions generated by contact-guided interaction retargeting as negative samples. 
The loss function is: ℒ_ranking = - log(σ(R_pos - R_neg - m(S_pos, S_neg))), where S_pos and S_neg denote inputs for positive and negative samples respectively, with R_pos and R_neg being their corresponding discriminator scores. σ is Sigmoid function, and m(S_pos, S_neg) = ||Δ(α, β, γ, x, y, z)|| is human-guide margin <cit.> between positive and negative poses. This margin could explicitly instruct the discriminator to yield more significant disparities across different poses. To ensure the realism of human interactions, we also introduce an interpenetration penalty. We prioritize those with the highest discriminator scores while ensuring acceptable levels of interpenetration as the optimal contact constraints. §.§ Dataset Characteristics To better model collaborative object rearrangement interactions, we focus on diversifying our dataset in several vital areas: object geometries, collaboration modes, and 3D scenes. These ensure a comprehensive representation of real-world interactions. Diversity in Object Geometries. We design six object categories to cover the main collaborative object rearrangement interaction scenarios as Fig. <ref>(a). Categories with relatively simple geometry, uniformity, and typically exhibiting symmetry include box, board, barrel, and stick. Categories with more complex geometries and significant individual differences include chair and desk. Diversity in Collaboration Modes. We define five human-human collaboration modes in collaborative object rearrangement. Each mode represents a unique form of collaboration between two individuals, providing a new perspective and possibilities for understanding and researching collaborative behaviors. At first, we define the person with the egocentric camera as Person 2, and the other as Person 1. Collaborative carrying tasks are divided by whether Person 2 knows the goal or not. Tasks of handover and solely move alternate between the two participants. In join and leave tasks, Person 2 will either join in to help or leave halfway through, respectively. Diversity in 3D Scenes. Surrounding scenarios are set up with varying levels of scene complexity: no obstacle, single obstacle, and many obstacles (more than one). Participants are asked to navigate through these randomly placed obstacles by their own means. We observe that this typically involved behaviors including bypassing, going through, stepping over, or moving obstacles aside. § EXPERIMENTS In this section, we first present the train-test split of (Section <ref>). We then propose two benchmarks for generating human-object collaboration: human-object motion forecasting (Section <ref>), and interaction synthesis (Section <ref>). Finally, Section <ref> presents extensive studies on the collaboration retargeting approach. §.§ Data Split We construct a training set from a random assortment of real objects, combining their real motions and corresponding synthetic data. We also create two test sets from -Real for non-generalization and inner-category generalization studies. Test set S1 includes interactions with training set objects, while S2 features interactions with new objects. -Synthetic is not included in the test set, avoiding potential biases from the retargeting algorithm. Details are shown in supplementary material. §.§ Human-object Motion Forecasting Forecasting 4D human motion <cit.> is a crucial problem with applications in VR/AR and embodied perception <cit.>. Current research <cit.> is limited to individual behaviors due to data constraints. 
Our work expands this by using diverse multi-person collaborations, making the prediction problem both intriguing and challenging. Task Formulation. Given the object's 3D model and human-object poses in adjacent 15 frames, the task is to predict their subsequent poses in the following 15 frames. The human pose P_h ∈ℝ^23 × 3 represents joint rotations of the SMPL-X <cit.> model, while the object pose P_o = {R_o ∈ℝ^3, T_o ∈ℝ^3} denotes 3D orientation and 3D translation of the rigid object model. Evaluation Metrics. Following existing motion forecasting works <cit.>, we evaluate human joints position error J_e, object translation error T_e, object rotation error R_e, human-object contact accuracy C_acc, and penetration rate P_r. Details are provided in the supplementary material. Methods, Results, and Analysis. We evaluate three state-of-the-art motion forecasting methods, MDM <cit.>, InterDiff <cit.>, and CAHMP <cit.>. Table <ref> quantitatively shows these methods reveal a consistent drop in performance for unseen objects (S2) versus seen ones (S1) regarding human pose prediction. Meanwhile, errors in object pose prediction remain similar. This highlights the challenges in generalizing human collaborative motion for novel object shapes. §.§ Interaction Synthesis Generating human-object interaction <cit.> is an emerging research topic benefiting human avatar animation and human-robot collaboration <cit.>. With extensive collaboration modes and various object categories, constitutes a knowledge base for studying generalizable algorithms of human-object-human interactive motion synthesis. Task Formulation. Following recent studies <cit.>, we define the task as object-conditioned human motion generation. Given an object geometry sequence G_o ∈ℝ^T × N × 3, the aim is to generate corresponding two-person collaboration motions M_h ∈ℝ^2 × T × 23 × 3. This involves frame numbers T, object point clouds G_o, and human pose parameters for the SMPL-X <cit.> model. Evaluation Metrics. Following individual human-object interaction synthesis <cit.>, we evaluate human joint position error RR.J_e, object vertex position error RR.V_e, and human-object contact accuracy C_acc. The FID score (FID) is leveraged to quantitatively assess the naturalness of synthesized results. Details of the metric designs are presented in the supplementary material. Methods, Results, and Analysis. We utilize two advanced generative models, MDM <cit.> and OMOMO <cit.>, as baselines. MDM is a one-stage conditional motion diffusion model, while OMOMO is a two-stage approach with hand positions as intermediate results. Quantitative evaluations reveal larger errors in OMOMO when modeling multi-human collaboration compared to individual interaction synthesis by Li et al. <cit.>. Furthermore, the synthesized results have a higher FID than real motion data, indicating challenges in motion naturalness. §.§ Collaboration Retargeting User Studies. We conduct user studies to examine the quality of -Synthetic in terms of naturalness of contact and human motion. Each study comprises two collections, each with at least 100 sequences displayed in pairs on a website. Users are instructed to assess the realism of human-object contacts and the naturalness of human motions, and then select the superior one in each pair separately. Recognizing the diversity of acceptable contacts and motions, participants are permitted to deem the performances as roughly equivalent. Ablation on Contact Candidates. 
In Table <ref>.Abl.1, we only use the contact points from a source trajectory for retargeting to the target instead of resorting to the CORE4D-Real for many candidates, making the whole retargeting process similar to the OakInk <cit.> method. We observe a sharp decline in both physical plausibility and user preferences, indicating that our method compensates for OakInk's shortcomings in retargeting objects with significant geometric and scale variations. Ablation on Discriminator. In this ablation, as shown in Table <ref>.Abl.2, we omit the human pose discriminator in the collaboration retargeting. We will randomly choose a candidate from the contact candidates. There are obvious performance drops, demonstrating the critical role of the human pose discriminator in selecting appropriate candidates. Ablation on Contact Candidate Update. We exclude contact candidate update process in Table <ref>.Abl.3 experiment. This removal has weakened our method's ability to search for optimal solutions on objects, resulting in a modest degradation in penetration distance. The user study still exhibited a strong bias, indicating a perceived decline in the plausibility of both contact and motion. This ablation underscores the importance of contact candidate update within our methodology. Comparing -Synthetic with -Real. We assess the quality of CORE4D-Synthetic by comparing it with CORE4D-Real through user study. In conclusion, there is a 43% probability that users perceive the quality of both options as comparable. Furthermore, in 14% of cases, users even exhibit a preference for synthetic data. This indicates that the quality of our synthetic data closely approximates that of real data. Application of -Synthetic. Table <ref> compares the motion forecasting ability of light-weighted CAHMP <cit.>. The test set is S2 defined in Section <ref>. We assess the quality of -Synthetic by comparing No.A and No.B. No.A even have better performance on object due to enriched spacial relation between human and object in -Synthetic. No.C shows the value of the -Synthetic by largely improving the performance. Details are in supplementary material. § CONCLUSION AND LIMITATIONS We present , a novel large-scale 4D human-object-human interaction dataset for collaborative object rearrangement. It comprises diverse compositions of various object geometries, collaboration modes, and surrounding 3D scenes. To efficiently enlarge the data scale, we contribute a hybrid data acquisition method involving real-world data capturing and a novel synthetic data augmentation algorithm, resulting in 11K motion sequences covering 37 real-world and 3K virtual objects. Extensive experiments demonstrate the effectiveness of the data augmentation strategy and the value of the augmented motion data. We benchmark human-object motion forecasting and interaction synthesis on , revealing new challenges and research opportunities. Limitations. One limitation is that outdoor scenes are not incorporated due to the usage of the mocap system. Another limitation is that our data augmentation strategy currently focuses on adopting collaboration to novel object geometries while excluding human shape diversity. Integrating our retargeting approach with human shape modeling could be an interesting future direction. splncs04 Appendix The project page of is https://core4d.github.io/Project Page. Contents: * <ref>. Cross-dataset Evaluation * <ref>. Details on Real-world Data Acquisition * <ref>. Details on -Synthetic Data Generation * <ref>. 
Dataset Statistics and Visualization * <ref>. Details on Data Split * <ref>. Evaluation Metrics for Benchmarks * <ref>. Qualitative Results on Benchmarks * <ref>. Details on the Application of -Synthetic * <ref>. -Real Data Capturing Instructions and Costs * <ref>. Experiment Configurations and Codes * <ref>. URLs of Dataset, Repository, Metadata, DOI, and License * <ref>. Dataset Documentation and Intended Uses * <ref>. Author Statement § CROSS-DATASET EVALUATION To examine the data quality of -Real, we follow existing dataset efforts<cit.> and conduct the vision-based cross-dataset evaluation. We select an individual human-object-interaction dataset BEHAVE<cit.> that includes color images and select 2D human keypoint estimation as the evaluation task. Data Preparation. For a color image from -Real and BEHAVE<cit.>, we first detect the bounding box for each person via ground truth human pose and obtain the image patch for the person. We then resize the image patch to get a maximal length of 256 pixels and fill it up into a 256x256 image with the black color as the background. Finally, for each 256x256 image, we automatically acquire the ground truth 2D-pixel coordinates of 22 SMPL-X<cit.> human body joints from 3D human poses. For data split, we follow the original train-test split for BEHAVE<cit.> and merge the two test sets (S1, S2) for -Real. Task Formulation. Given a 256x256 color image including a person, the task is to estimate the 2D-pixel coordinate for each of the 22 SMPL-X<cit.> human body joints. Evaluation Metrics. P_e denotes the mean-square error of 2D coordinate estimates. Acc denotes the percentage of the coordinate estimates with the Euclidean distance to the ground truth smaller than 15 pixels. Method, Results, and Analysis. We draw inspiration from HybrIK-X<cit.> and adopt their vision backbone as the solution. Table <ref> shows the method performances on the two datasets under different training settings. Due to the significant domain gaps in visual patterns and human behaviors, transferring models trained on one dataset to the other would consistently encounter error increases. Despite the domain gaps, integrally training on both datasets achieves large performance gains on both -Real and BEHAVE<cit.>, indicating the accuracy of -Real and the value of the dataset serving for visual perception studies. § DETAILS ON REAL-WORLD DATA AQUISITION In this section, we describe our system calibration (Section <ref>) and time synchronization (Section <ref>) in detail. Moreover, we provide detailed information on loss functions of the human mesh acquisition (Section <ref>). §.§ System Calibration Calibrating the Inertial-optical Mocap System. Three reflective markers are fixed at known positions on a calibration rod, by which the 12 high-speed motion capture cameras calculate their relative extrinsic parameters, providing information about their spatial relationships. Additionally, three markers fixed at the world coordinate origin are employed to calibrate the motion capture system coordinate with the defined world coordinate. Calibrating Camera Intrinsic. The intrinsic parameters of allocentric and egocentric cameras are calibrated using a chessboard pattern. Calibrating Extrinsic of the Allocentric Cameras. We place ten markers in the camera view to locate each allocentric camera. 
By annotating the markers' 3D positions in the world coordinate system and their 2D-pixel coordinates on allocentric images, the camera's extrinsic parameters are estimated by solving a Perspective-n-Point (PnP) problem via OpenCV. Calibrating Extrinsic of the Egocentric Camera. We obtain the camera's pose information by fixing the camera to the head tracker of the motion capture suit. Similarly, ten markers are used to calibrate the relative extrinsic parameters of the first-person perspective cameras, allowing for determining their positions and orientations relative to the motion capture system. Additionally, to mitigate errors introduced by the integration of optical and inertial tracking systems, a purely optical tracking rigid is mounted on the motion camera. §.§ Time Synchronization To implement our synchronization method, we first set up a Network Time Protocol (NTP) server on the motion capture host. This server serves as the time synchronization reference for the Windows computer connected to the Kinect Azure DK. We minimize time discrepancies by connecting the Windows computer to the NTP server in high-precision mode and thus achieving precise synchronization. Additionally, we employ a Linear Timecode (LTC) generator to encode a time signal onto the action camera's audio track. This time signal serves as a synchronization reference for aligning the first-person perspective RGB information with the motion capture data. §.§ Loss Function Designs for Human Mesh Acquisition To transfer the BVH<cit.> human skeleton to the widely-used SMPL-X<cit.> model. We optimize body shape parameters β∈ℝ^10 to fit the constraints on manually measured human skeleton lengths and then optimize the full-body pose θ∈ℝ^159 with the following loss function: ℒ = ℒ_reg + ℒ_j3D + ℒ_jOri + ℒ_smooth + ℒ_h3D + ℒ_hOri + ℒ_contact. Regularization Loss ℒ_reg. The regularization loss term is defined as ℒ_reg = ∑||θ_body||^2 ·λ_body + ( ∑||θ_l_hand||^2 + ∑||θ_r_hand||^2 ) ·λ_hand, where θ_body∈ℝ^21×3 represents the body pose parameters defined by 21 joints of the skeleton, θ_l_hand∈ℝ^12 and θ_r_hand∈ℝ^12 represents the hand pose parameters. For each hand, the original SMPL-X skeleton has 15 joints with parameters θ_hand∈ℝ^15×3. However, principal component analysis (PCA) is applied to the hand pose parameters. The θ_hand parameters are transformed into a lower-dimensional space, specifically ℝ^12. λ_body=10^-3 and λ_hand=10^-4 are different weights that are used to control the regularization strength for the body and hand pose parameters, respectively. This loss ensures the simplicity of the results and prevents unnatural, significant twisting of the joints. 3D Position Loss ℒ_j3D and ℒ_h3D. The 3D position loss term is defined as ℒ_3D = ∑|| T_smplx - T_bvh||^2 ·λ_3D, where T_smplx∈ℝ^3 represents the 3D global coordinates of the joints in the SMPL-X model and T_bvh∈ℝ^3 represents the corresponding 3D global coordinates of the joints in the BVH representation. ℒ_j3D represents the 3D position loss sum for the 21 body joints, while ℒ_h3D represents the 3D position loss sum for the 30 hand joints (15 joints per hand). These two terms have different weights, set as λ_j3D=1.0 and λ_h3D=2.0, respectively. Orientation Loss ℒ_jOri and ℒ_hOri. The orientation loss term is defined as ℒ_Ori = ∑|| R_smplx - R_bvh||^2 ·λ_Ori, which is similar to ℒ_3D, except that ℛ_smplx∈ℝ^3×3 and ℛ_bvh∈ℝ^3×3 represent the rotation matrices for the adjacent joints in the SMPL-X and corresponding BVH representations, respectively. 
Specifically, body joints named head, spine, spine2, leftUpLeg, rightUpLeg, rightShoulder, leftShoulder, rightArm, leftArm, and neck are subjected to orientation loss, ensuring that their rotations relative to adjacent nodes are close to the BVH ground truth. λ_Ori is set to 0.2. Temporal Smoothness Loss ℒ_smooth. The temporal smoothness loss term is defined as ℒ_smooth = ∑_i=1^N( | | θ_i - θ_i-1||^2 ) ·λ_smooth where θ_i∈ℝ^(21+30)×3 represents the body and hand pose of the i-th frame. λ_smooth is set to 20.0. Contact Loss ℒ_contact. The contact loss term is defined as ℒ_contact = ∑( | | T_finger - T_obj||^2 ·𝒥(T_finger, T_obj) ) ·λ_contact where 𝒯_finger∈ℝ^10×3 is the global coordinates of ten fingers, and 𝒯_obj∈ℝ^10×3 is the corresponding global coordinates of the point closest to finger. 𝒥(T_finger, T_obj) is 1 when the distance between T_finger and T_obj is less than a threshold, otherwise it is 0. And λ_contact is 2.0. § DETAILS ON -SYNTHETIC DATA GENERATION In this section, we provide details on our synthetic data generation (collaboration retargeting) method. Firstly, we clarify term definitions in Section <ref>. We then explicitly introduce the whole method pipeline in detail in Section <ref>. Finally, we provide implementation details in Sections <ref> and <ref>. §.§ Term Definitions We provide definitions for the terms in our collaboration retargeting pipeline as follows. Contact Candidate: Contact candidate is a quadruple list containing all possible contact region index (person1_leftHand, person1_rightHand, person2_leftHand, person2_rightHand) on source's vertices. For each source, we record the contact regions of the four hands in each frame of each data sequence. At the beginning of the synthetic data generation pipeline, we sample contact candidates from these records. Contact Constraint: Having contact candidate on source, we apply DeepSDF-based<cit.> contact retargeting to transfer the contact regions to target. These contact regions on target are the contact constraints fed into the contact-guided interaction retargeting module. Source Interaction: During each collaboration retargeting process, we sample a human-object-human collaborative motion sequence from -Real as the source interaction to guide temporal collaboration pattern. Interaction Candidate: Sampling N contact candidates, we apply contact-guided interaction retargeting N times and have N human-object-human motion outputs, dubbed interaction candidates. These motions would be fed into the human-centric contact selection module to assess their naturalness. §.§ Method Pipeline The algorithm takes a source-target pair as input. First, we sample contact candidates from the whole -Real contact knowledge on source. For each contact candidate, we apply object-centric contact retargeting to propagate contact candidates to contact constraints on target. Sampling motion from -Real provides a high-level temporal collaboration pattern, and together with augmented low-level spatial relations, we obtain interaction candidates from the contact-guided interaction retargeting. Then, the human-centric contact selection module selects the optimal candidates, prompting a contact constraint update. After multiple iterations, the process yields augmented interactions. This iterative mechanism ensures a refined augmentation of interactions, enhancing the dataset's applicability across various scenarios. §.§ Contact-guided Interaction Retargeting The contact-guided interaction retargeting is a two-step optimization. 
We start by optimizing the motion of target. Then with target contact constraints, we optimize the poses of the two persons. Object motion retargeting. We deliberately design temporal and spatial losses to acquire consistent and smooth target motion. In the concern of efficiency, we jointly optimize all frames in a single data sequence with N frames. To guarantee the fidelity of object motion, we design the fidelity loss L_f to restrict the rotation R_o,i and the translation T_o,i with the ground-truth rotation R'_o,i and translation T'_o,i in N frames: ℒ_f = λ_f∑_i(||R'_o,i - R_o,i||_1 + ||T'_o,i - T_o,i||_1). We then address restriction on target's spatial position to avoid penetration with the ground. The spatial loss is defined as: ℒ_spat = λ_spat∑_imax (- min(height_i), 0), where min(height_i) represents the lowest spatial position of the objects per frame. A smoothness loss is designed to constrain the object pose difference between consecutive frames: ℒ_smooth = λ_smooth∑_ia_R_o,i^2 + a_T_o,i^2, where a is the acceleration of rotation and translation during N frames defined as: a_R_o,i = 2R_o,i - R_o,i-1 - R_o,i+1, a_T_o,i = 2T_o,i - T_o,i-1 - T_o,i+1, The total object motion retargeting problem is: R_o, T_o ⟵argmin_R_o, T_o(ℒ_f + ℒ_spat + ℒ_smooth). Human motion retargeting. We next optimize each person's motion based on the motion of target and the contact constraint. To acquire visually plausible motion, we design the fidelity loss ℒ_j and the smoothness loss ℒ_smooth. Besides, we utilize the contact correctness loss ℒ_c to acquire contact consistency in target interaction motion, and leverage spatial loss L_spat similar to Equation <ref> to avoid human-ground inter-penetration. To enhance motion fidelity, we define two loss functions ℒ_sr and ℒ_wr and let L_j = ℒ_sr + ℒ_wr. For joints from the human arms, despite following the correct temporal collaboration pattern, their global positions would vary concerning diverse object geometries. Therefore, we utilize oriented vectors pointing to their parent body joints to obtain a relative joint fidelity: ℒ_sr = λ_sr∑_i∑_j ∈arm‖ (P_j,i - P_parent(j),i) - (P'_j,i - P'_parent(j),i) ‖_2^2, where P_j,i denotes the 3D global position of joint j in frame i, and P' denotes ground-truth values. ℒ_wr denotes constraints on the global positions of other joints: ℒ_wr = λ_wr∑_i∑_j ∉arm‖ P_j,i - P'_j,i‖_2^2. The design of the smoothness loss is similar to Equation <ref>, penalizing huge acceleration of human SMPL-X parameters to avoid great motion differences between frames: ℒ_smooth = λ_smooth∑_i∑_j∈{1,2}(a_θ_j,i)^2 + (a_T_j,i)^2 + (a_O_j,i)^2. To leverage contact constraints, we attract human hands to the corresponding contact region on target. We select the positions of 20 fingertips of the two persons in the i-th frame as ℋ_i = {P̅_tip,i}_tip∈[1,20], where P̅ are tip positions in the object's coordinate system. The contact vertices on the target from object-centric contact retargeting are defined as 𝒞 = {P̅'_tip}_tip∈[1,20]. We minimize the Chamfer Distance (CD) between ℋ_i and 𝒞 to obtain contact consistency: ℒ_c = λ_c ∑_i CD(ℋ_i, 𝒞). The total human motion retargeting problem is: θ_1,2, T_1,2, O_1,2⟵argmin_θ_1,2, T_1,2, O_1,2(ℒ_j + ℒ_c + ℒ_spat + ℒ_smooth), In practice, we run 1,000 and 1,500 iterations respectively for object motion retargeting and human motion retargeting. The whole pipeline is implemented in PyTorch with Adam solver. The learning rate is 0.01. 
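For concreteness, the two stages can be written as a pair of standard gradient-descent loops. The skeleton below is an illustrative sketch with the loss terms abbreviated into callables that implement the weighted sums defined above; it is not the exact released implementation, and the argument names are placeholders.

import torch

def retarget_sequence(init_obj_pose, init_human_params, losses_obj, losses_human,
                      iters_obj=1000, iters_human=1500, lr=0.01):
    """Two-stage retargeting sketch: object motion first, then human motion.

    losses_obj / losses_human return the summed weighted losses
    (fidelity, contact, spatial, smoothness) defined in the equations above.
    """
    # Stage 1: optimize the target object's rotations R_o and translations T_o.
    R_o = init_obj_pose["R"].clone().requires_grad_(True)   # (N, 3) axis-angle
    T_o = init_obj_pose["T"].clone().requires_grad_(True)   # (N, 3)
    opt = torch.optim.Adam([R_o, T_o], lr=lr)
    for _ in range(iters_obj):
        opt.zero_grad()
        losses_obj(R_o, T_o).backward()
        opt.step()

    # Stage 2: optimize SMPL-X poses, translations and orientations of both persons,
    # keeping the object motion from stage 1 fixed.
    human_vars = [p.clone().requires_grad_(True) for p in init_human_params]  # theta, T, O
    opt = torch.optim.Adam(human_vars, lr=lr)
    for _ in range(iters_human):
        opt.zero_grad()
        losses_human(*human_vars, R_o.detach(), T_o.detach()).backward()
        opt.step()
    return (R_o.detach(), T_o.detach()), [v.detach() for v in human_vars]

The stage-specific loss weights used in practice are given below.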
In object motion retargeting, λ_f for rotation is 500, for translation is 0.005, λ_spat=0.01, λ_smooth=1. In human motion retargeting, λ_sr = 0.1, λ_wr = 0.003, λ_c = 1,000, λ_spat=0.01, and λ_smooth = 1. §.§ Human-centric contact selection The pairwise training dataset utilized for the human pose discriminator training comprises 636,424 pairs of data. Each pair encompasses a positive human pose S_pos∈ℝ^21×3 and a negative human pose S_neg∈ℝ^21×3. The positive human pose is sampled from the -Real. Conversely, the negative human pose is derived from the corresponding positive sample by introducing noise to its object pose, subsequently employing the original contact information to perform contact-guided interaction retargeting. The discriminator is trained by: ℒ_ranking = - log(σ(R_pos - R_neg - m(S_pos, S_neg))), iterating 1,000 epochs by the Adam solver with a learning rate 2e-4. Specifically, the noise Δ(α, β, γ, x, y, z) incorporates both rotational and translational components. The rotational noise Δ(α, β, γ) ranges from 20 to 60 degrees, while the translational noise Δ(x, y, z) falls within the range of 0.2 to 0.5 meters. The margin is computed by: m(S_pos, S_neg) = (|α| + |β| + |γ|) / 10 + (|x| + |y| + |z|) *10. During the contact constraint update process, a penetration filtering step is performed. For each frame, the penetration volume between the human and object is calculated. If the penetration volume exceeds 10^-4 cubic meters, it is considered a penetration case. If more than 2.5% of frames within an interaction candidate exhibit penetration, the entire candidate is discarded. Among the remaining candidates, the one with the highest score from the human pose discriminator is selected to proceed with the contact constraint update. § DATASET STATISTICS AND VISUALIZATION §.§ Collaboration Modes encompasses five human-human cooperation modes in collaborative object rearrangement. “Move1” refers to the scenario where two participants simultaneously rearrange objects and both are aware of the target. On the other hand, “move2” represents the scenario where objects are rearranged simultaneously, but only Person 1 knows the target. “Pass” indicates that one participant passes the object to another for relay transportation. “Join” means that Person 2 joins Person 1 in carrying the object during transportation. Lastly, “leave” signifies that Person 2 leaves during the joint transportation with Person 1. According to the different durations of the two participants' contact with the object, “move1” and “move2” can be combined into collaborative carrying tasks. “Pass” represents the task of handover and solely moving the object. Incorporating the join task and the leave task, totally comprises four different tasks (see Figure 4 in the main paper) based on the interaction between humans and objects. Fig. <ref> exemplifies the motions for each task. As depicted in Fig. <ref>, distinct characteristics are exhibited by different cooperation modes in high-level movements, thereby offering an innovative standpoint and potential for comprehending and investigating collaborative behaviors. §.§ Participants As illustrated in Fig. <ref>, a total of 31 participants, encompassing variations in height, weight, and gender, contributed to the capturing of -Real. §.§ Objects -Real has 38 objects while -Synthetic has about 3k objects. The objects encompass six categories, namely box, board, barrel, stick, chair, and desk, each exhibiting a rich diversity in surface shape and size. 
The distribution of object categories is detailed in Table <ref>. All the objects in -Real are shown in Fig. <ref>. Fig. <ref> shows samples from -Synthetic and their interpolation process. §.§ Camera Views Fig. <ref> shows the four allocentric and one egocentric views of our data capturing system. § DETAILS ON DATA SPLIT Benefiting from the diverse temporal collaboration patterns from -Real and the large data amount of -Synthetic, we randomly select a subset of real object models and construct the training set as the combination of their real (T-Real) and synthesized (T-Synthetic) collaboration motion sequences. We formulate two test sets on -Real supporting studies of both non-generalization and inner-category generalization. The first test set (S1) consists of interaction performed on the objects that appear in the training set, while the second one (S2) is composed of interaction from novel objects. Detailed data distribution of each object category is shown in Table <ref>. § EVALUATION METRICS FOR BENCHMARKS The code of our evaluation metrics is provided in https://github.com/leolyliu/CORE4D-InstructionsCode Repository. §.§ Human-object Motion Forecasting Evaluation metrics include the human joints position error J_e, the object translation error T_e, the object rotation error R_e, the human-object contact accuracy C_acc, and the penetration rate P_r. * We define J_e as the average Mean Per Joint Position Error (MPJPE) of the two persons. MPJPE represents the mean per-joint position error of the predicted human joint positions and the ground-truth values. * Translation error (T_e) and rotation error (R_e) denote the average L2 difference between the predicted object translation vectors and the ground-truth ones, and the average geodesic difference between the estimated object rotation matrices and the ground-truth ones, respectively. * Physical metrics: To assess contact fidelity, we detect contacts on the two hands of the two persons for each frame with an empirically designed distance threshold (5 centimeters). We then examine the contact accuracy (C_acc), which indicates the average percentage of contact detection errors in the predicted motions. Additionally, we examine the object penetration ratio (P_r) representing the mean percentage of object vertices inside the human meshes. §.§ Interaction Synthesis Following an existing individual human-object interaction synthesis study<cit.>, the evaluation metrics include the root-relative human joint position error RR.J_e, the root-relative human vertex position error RR.V_e, the human-object contact accuracy C_acc, and the FID score (FID). * RR.J_e denotes the average root-relative MPJPE of the two persons. The root-relative MPJPE represents the mean per-joint position error of the predicted human joint positions relative to the human root position and the ground-truth values. * RR.V_e denotes the average root-relative Mean Per Vertex Position Error (MPVPE) of the two persons. The root-relative MPVPE represents the mean per-vertex position error of the predicted human vertex positions relative to the human root position and the ground-truth values. * C_acc is the same as that in Section <ref>. * The Fréchet Inception Distance (FID) quantitatively evaluates the naturalness of synthesized human motions. We first train a feature extractor on -Real to encode each human-object-human motion sequence to a 256D feature vector f̅_i and acquire the ground-truth human motion feature distribution D̅={f̅_i}. 
We then replace the motions of the two persons as synthesized ones and obtain another distribution D={f_i}. Eventually, the FID denotes the 2-Wasserstein distance between D̅ and D. Since -Real provides action labels, the feature extractor is supervised-trained by fulfilling the action recognition task. The network structure of the feature extractor is a single-layer Transformer<cit.>. We provide the code of the feature extractor and pre-trained parameters in https://github.com/leolyliu/CORE4D-InstructionsCode Repository. § QUALITATIVE RESULTS ON BENCHMARKS Figure <ref> and Figure <ref> exemplify generated motions for the human-object motion forecasting task and the interaction synthesis task, respectively, where “GT” denotes the ground truth motions, and others are method predictions. Since the baseline methods do not focus on generating hand poses, we replace hand poses in ground truth with flat hands to facilitate fair comparisons. Despite diverse cooperation modes that can be generated, the baseline methods consistently encompass unsatisfactory performances including unnatural collaboration, inter-penetration, and unnatural contact. § DETAILS ON THE APPLICATION OF -SYNTHETIC To evaluate the application of -Synthetic, we use the lightweight CAHMP<cit.> to conduct the motion forecasting experiments. Unlike the experiments in section Human-object Motion Forecasting mentioned in the main paper, where 15 frames are predicted, here we predict the human-object motion for the next 10 frames given the previous 10 frames. §.§ Task Formulation Given the object's 3D model and human-object poses in adjacent 10 frames, the task is to predict their subsequent poses in the following 10 frames. The human pose P_h ∈ℝ^23 × 3 represents the joint rotations of the SMPL-X<cit.> model, while the object pose P_o = {R_o ∈ℝ^3, T_o ∈ℝ^3} denotes 3D orientation and 3D translation of the rigid object model. §.§ Evaluation Metrics Following existing motion forecasting works<cit.>, we evaluate human joints position error J_e, object translation error T_e, object rotation error R_e. Details of the three metrics can be found in Section <ref>. §.§ Results Comparing the 1K real dataset with the 0.1K real dataset supplemented with synthetic data generated through retargeting, we observed that the quality of the synthetic data is comparable to the real data. Additionally, due to the increased diversity of objects and enriched spatial relations between humans and objects in the synthetic data, it exhibits better generalization performance in object motion forecasting. Comparing the evaluation results of the 1K real dataset with the results obtained by augmenting it with additional 4K synthetic data, we observed a significant performance gain from the synthetic data. This demonstrates that the inclusion of synthetic data enhances the value of our dataset and better supports downstream tasks. § -REAL DATA CAPTURING INSTRUCTIONS AND COSTS §.§ Instructions. Target. We divide a 4m × 5m field into 20 squares and number them, and place colored labels as markers along the perimeter of the field. The following language instructs participants: "Please collaboratively move the object to the target square. You can choose any path and orientation of the object as you like. It is not necessary to be overly precise with the final position - a rough placement is fine. Do not make unnatural motions just to achieve an exact position. Do not use verbal communication with each other.". 
As for the settings when only one participant knows the target, the target square number is written on a piece of paper and shown to the participant who knows the target. And additional instructions are given as: "If you know the target, do not use language or direct body language to inform the other party (such as pointing out the location). If you do not know the target, please assist the other participant in completing the transportation.". Collaboration Mode. The instructions are given as follows to indicate different Collaboration Modes for the participants. For Collaborate mode: "Based on the target, please cooperatively transport the object, or upright any overturned tables, chairs, etc. Both participants should be in contact with the object throughout the process.". For Handover mode: "Please decide the handover point yourselves, then have one person hand the object to the other, completing the object transfer in relay.". For Leave and Join modes: "One person will transport the object throughout, while the other leaves or joins to help at a time point not disclosed to the collaborator.". Obstacle. The instructions are given as follows to guide the participants in tackling obstacles: "There are a varying number of obstacles on the field. If they get in your way, please decide on your own how to solve it using some common everyday operations. If the obstacles occupy the destination, please place the object near the destination.". §.§ Costs. Scanning the object took 30 person-hours, modeling the object into the mocap system took 27.5 person-hours, data capture took 78 person-hours, data annotation took 7 person-hours, and the user study took 60 person-hours. The wage is 100 RMB per person-hour. § EXPERIMENT CONFIGURATIONS AND CODES We evaluate existing methods for the two benchmarks on Ubuntu 20.04 with one NVIDIA GeForce RTX 3090 GPU. The code of benchmarks and relevant methods are provided in https://github.com/leolyliu/CORE4D-InstructionsCode Repository. During the quantitative evaluation, we select three random seeds (0, 42, 233) for each method, train the network respectively, and then report the mean performances and standard deviations as the evaluation results. More experimental details are provided in https://github.com/leolyliu/CORE4D-InstructionsCode Repository. § URLS OF DATASET, REPOSITORY, METADATA, DOI, AND LICENSE * Dataset project page: https://core4d.github.io/https://core4d.github.io/. * Data link: https://onedrive.live.com/?authkey= * Dataset usage instruction: https://github.com/leolyliu/CORE4D-Instructionshttps://github.com/leolyliu/CORE4D-Instructions. * Code link: https://github.com/leolyliu/CORE4D-Instructionshttps://github.com/leolyliu/CORE4D-Instructions. * Croissant metadata: https://github.com/leolyliu/CORE4D-Instructions/blob/main/metadata.jsonCroissant Metadata Link. * Schema.org metadata: https://core4d.github.io/https://core4d.github.io/. * DOI: https://doi.org/10.5281/zenodo.1160766610.5281/zenodo.11607666. * License. This work is licensed under a https://creativecommons.org/licenses/by/4.0/CC BY 4.0 license. § DATASET DOCUMENTATION AND INTENDED USES We use the documentation framework from Gebru et.al<cit.>. §.§ Motivation * For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. The dataset was created to facilitate research studies in multi-person collaboration for object rearrangement. 
The dataset can support various research topics for understanding and synthesizing collaborative behaviors, including human-object motion tracking, action recognition, human-object motion forecasting, and collaboration synthesis. * Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The dataset was created by Chengwen Zhang from Beijing University of Posts and Telecommunications, together with Yun Liu, Ruofan Xing, Bingda Tang, and Li Yi from Tsinghua University. * Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. Funding was provided by the Institute for Interdisciplinary Information Sciences at Tsinghua University. * Any other comments? None. §.§ Composition * What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The dataset comprises two parts: -Real and -Synthetic. The -Real includes 3D object models, human-object motions, allocentric RGBD videos, egocentric RGB videos, human-object segmentations, camera parameters, and action labels. The -Synthetic includes 3D object models and human-object motions. Please refer to the https://github.com/leolyliu/CORE4D-InstructionsDataset Documentation for explicit definitions of these files. * How many instances are there in total (of each type, if appropriate)? -Real includes 37 object models, 1.0K human-object motion sequences, 4.0K allocentric RGBD videos, 1.0K egocentric videos, 4.0K human-object segmentations, and 1.0K action labels. -Synthetic includes 3.0K object models and 10K human-object motion sequences. * Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The dataset is a representative sample of all possible and infinitely many multi-human collaborative behaviors for household object rearrangement. To cover as diverse collaboration as possible, we collect five typical collaboration modes in the -Real, and enrich human-object spatial relations greatly in the -Synthetic. Each collaboration sequence is complete. * What data does each instance consist of? “Raw” data (e.g., unprocessed text or images)or features? In either case, please provide a description. For -Real, each collaboration instance consists of SMPLX<cit.> models of two persons in each frame, the 3D model for the manipulated object, the object's 6D pose in each frame, four allocentric RGBD videos with camera intrinsic and extrinsic, one egocentric RGB video with the camera intrinsic, four human-object segmentation sequences, and one action label. For -Synthetic, each collaboration instance consists of SMPLX<cit.> models of two persons in each frame, the 3D model for the manipulated object, and the object's 6D pose in each frame. Details are provided in https://github.com/leolyliu/CORE4D-InstructionsDataset Documentation. 
* Is there a label or target associated with each instance? If so, please provide a description. For -Real, each collaboration instance is associated with an action label. There is no label in -Synthetic. * Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No information is missing. * Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. Relationships between individual collaboration instances include the same persons or the same objects. We provide an explicit SMPLX<cit.> shape parameter for each person, and an explicit name and 3D object model for each object. * Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. The recommended data split is provided in https://github.com/leolyliu/CORE4D-InstructionsCode Repository. * Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. Noises come from the hardware noises of the inertial-optical mocap system, the 3D scanner, and the cameras. * Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate. The dataset is fully self-contained. * Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ nonpublic communications)? If so, please provide a description. No confidential data. * Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. No. * Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No. * Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how. No. We ensure participants' anonymity by mosaicking their faces. * Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description. No. * Any other comments? None. 
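To make the per-instance contents listed above concrete, the following is an illustrative sketch of how a single CORE4D-Real collaboration instance could be organized in code. Every field name and array shape here is hypothetical and chosen only for readability; the actual file layout is specified in the Dataset Documentation linked above.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class CollaborationInstance:
    """Illustrative container for one CORE4D-Real sequence of T frames (field names are hypothetical)."""
    smplx_params: np.ndarray        # (2, T, ...) SMPL-X parameters for the two persons
    object_mesh_path: str           # path to the scanned 3D object model
    object_pose: np.ndarray         # (T, 4, 4) object 6D pose per frame as homogeneous transforms
    allocentric_rgbd: List[str]     # paths to the four allocentric RGBD videos
    egocentric_rgb: str             # path to the egocentric RGB video
    camera_intrinsics: np.ndarray   # intrinsics of the allocentric and egocentric cameras
    camera_extrinsics: np.ndarray   # extrinsics of the four allocentric cameras
    segmentations: List[str]        # paths to the four human-object segmentation sequences
    action_label: str               # action annotation for the sequence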
§.§ Collection Process * How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If the data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. The data is acquired by the method described in Section 3 of the main paper. The data quality is evaluated in Section <ref> and Section 4.4 of the main paper. * What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? How were these mechanisms or procedures validated? The data is collected by the method described in Section 3 of the main paper. The hardware qualities of the inertial-optical mocap system, the 3D scanner, and the cameras are examined by their developers. * If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? The dataset is a representative sample of all possible and infinitely many multi-human collaborative behaviors for household object rearrangement. To cover as diverse collaboration as possible, we collect five typical collaboration modes in the -Real, and enrich human-object spatial relations greatly in the -Synthetic. The dataset is not a sample from a known larger set since each motion sequence is complete. * Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? Students from Universities participated in the data collection process. They were paid 100 RMB/hour. Thanks to them all. * Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The data was created and collected between July 2023 and December 2023. The creation time and the collection time of each data instance are the same. * Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation. No. * Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? The data is the individuals' motions. * Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself. Yes. The language is: "As a data collector, you will be familiar with the working principle and usage of the optical-inertial hybrid motion capture system. You will personally wear the motion capture device to collect motion data. All the data collected in this project will be used solely for research purposes by the Yili Research Group at Tsinghua University's Institute for Interdisciplinary Information Sciences." * Did the individuals in question consent to the collection and use of their data? 
If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented. Yes. All the participants signed that: "I am aware of and agree that the collected data will be used for research purposes related to the project and may be released as a dataset." * If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate). Yes. All participants were notified that they have the right to request the removal of their data at any time in the future by reimbursing their salary and compensating for the expenses incurred in collecting their data. * Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. No. * Any other comments? None. §.§ Preprocessing/cleaning/labeling * Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section. No. * Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. N/A * Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point. N/A * Any other comments? None. §.§ Uses * Has the dataset been used for any tasks already? If so, please provide a description. Currently, the dataset has been used to establish two benchmarks, human-object motion forecasting and interaction synthesis, in Section 4 of the main paper. * Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. Yes. Please refer to https://github.com/leolyliu/CORE4D-InstructionsRelevant Works. * What (other) tasks could the dataset be used for? The dataset can support various research topics for understanding and synthesizing collaborative behaviors, including human-object motion tracking, action recognition, human-object motion forecasting, and collaboration synthesis. Besides, the dataset can potentially be further used to study robot policies for robot manipulations and human-robot collaborations. * Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms? Unknown to the authors. * Are there tasks for which the dataset should not be used? If so, please provide a description. Unknown to the authors. * Any other comments? None. 
§.§ Distribution * Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. Yes. The dataset is fully released at https://onedrive.live.com/?authkey= * How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? The dataset is distributed on Yun Liu's OneDrive Cloud Storage: https://onedrive.live.com/?authkey= DOI: 10.5281/zenodo.11607666https://doi.org/10.5281/zenodo.11607666. Project page: https://core4d.github.io/Project Page. * When will the dataset be distributed? The dataset is distributed in June 2024. * Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. The dataset is licensed under a https://creativecommons.org/licenses/by/4.0/CC BY 4.0 license. * Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions. No. * Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. No. * Any other comments? None. §.§ Maintenance * Who will be supporting/hosting/maintaining the dataset? Chengwen Zhang is supporting/maintaining the dataset. * How can the owner/curator/manager of the dataset be contacted (e.g., email address)? The curators of the dataset, Chengwen Zhang, Yun Liu, Ruofan Xing, Bingda Tang, and Li Yi, can be contacted at zcwoctopus@gmail.comzcwoctopus@gmail.com, yun-liu22@mails.tsinghua.edu.cnyun-liu22@mails.tsinghua.edu.cn, xingrf20@mails.tsinghua.edu.cnxingrf20@mails.tsinghua.edu.cn, tbd21@mails.tsinghua.edu.cntbd21@mails.tsinghua.edu.cn, and ericyi@mail.tsinghua.edu.cnericyi@mail.tsinghua.edu.cn, respectively. * Is there an erratum? If so, please provide a link or other access point. No. * Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)? The dataset update will be posted at https://github.com/leolyliu/CORE4D-InstructionsDataset Instructions and https://core4d.github.io/Project Page. * If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. No. * Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers. After a dataset update, older versions will be kept for consistency. 
These notices will be posted at https://github.com/leolyliu/CORE4D-InstructionsDataset Instructions and https://core4d.github.io/Project Page. * If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description. Others are allowed to do these and should contact the original authors. * Any other comments? None. § AUTHOR STATEMENT Author Responsibility Statement This statement is intended to emphasize the author's responsibility regarding the dataset work, including ensuring compliance with all relevant laws, regulations, and ethical guidelines. By participating in the dataset work, the author agrees and commits to the following responsibilities: * Legality: The authors guarantee that all data and materials used in the dataset work are obtained and used legally. The authors will ensure compliance with all applicable laws, regulations, and policies within their country, region, or organization. * Rights Protection: The authors will make every effort to protect the privacy rights, intellectual property rights, and other legitimate interests of individuals within the dataset. The authors will respect individuals' privacy and take appropriate measures to safeguard the security of personal identifying information. * Transparency: The authors will provide sufficient information and explanations to enable users of the dataset to understand the sources, purposes, and limitations of the data. The authors will strive to ensure that the use and publication of the dataset are transparent and traceable. * Compliance: The authors will ensure that the dataset work complies with all applicable laws, regulations, and policies. In the event of any violation of rights, the authors will bear full responsibility and be willing to accept the corresponding legal consequences and liability for damages. * Shared Responsibility: The authors require others who use the dataset to also assume appropriate responsibilities and adhere to similar obligations and guidelines to ensure the legal and responsible use of the dataset. * License Confirm: This work is licensed under a https://creativecommons.org/licenses/by/4.0/CC BY 4.0 license.
http://arxiv.org/abs/2406.17999v1
20240626010424
Entangling Schrödinger's cat states by seeding a Bell state or swapping the cats
[ "Daisuke Hoshi", "Toshiaki Nagase", "Sangil Kwon", "Daisuke Iyama", "Takahiko Kamiya", "Shiori Fujii", "Hiroto Mukai", "Shahnawaz Ahmed", "Anton Frisk Kockum", "Shohei Watabe", "Fumiki Yoshihara", "Jaw-Shen Tsai" ]
quant-ph
[ "quant-ph" ]
These authors contributed equally to this work. email: kwon2866@gmail.com (Sangil Kwon) Department of Physics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan These authors contributed equally to this work. email: kwon2866@gmail.com (Sangil Kwon) Department of Physics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan These authors contributed equally to this work. email: kwon2866@gmail.com (Sangil Kwon) Research Institute for Science and Technology, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan Department of Physics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan Department of Physics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan Department of Physics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan Research Institute for Science and Technology, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan Department of Microtechnology and Nanoscience, Chalmers University of Technology, 412 96 Gothenburg, Sweden Department of Microtechnology and Nanoscience, Chalmers University of Technology, 412 96 Gothenburg, Sweden Research Institute for Science and Technology, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan College of Engineering, Shibaura Institute of Technology, 3-7-5 Toyosu, Koto-ku, Tokyo 135-8548, Japan Department of Physics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan Research Institute for Science and Technology, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan Research Institute for Science and Technology, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan § ABSTRACT In quantum information processing, two primary research directions have emerged: one based on discrete variables (DV) and the other on the structure of quantum states in a continuous-variable (CV) space. It is increasingly recognized that integrating these two approaches could unlock new potentials, overcoming the inherent limitations of each. Here, we show that such a DV–CV hybrid approach, applied to superconducting Kerr parametric oscillators (KPOs), enables us to entangle a pair of Schrödinger's cat states by two straightforward methods. The first method involves the entanglement-preserving and deterministic conversion between Bell states in the Fock-state basis (DV encoding) and those in the cat-state basis (CV encoding). This method would allow us to construct quantum networks in the cat-state basis using conventional schemes originally developed for the Fock-state basis. 
In the second method, the √(iSWAP) gate operation is implemented between two cat states following the procedure used for Fock-state encoding. This DV-like gate operation on CV encoding not only completes the demonstration of a universal quantum gate set in a KPO system but also enables faster and simpler gate operations compared to previous SWAP gate implementations on bosonic modes. Our work offers a simple yet powerful application of DV–CV hybridization while also highlighting the scalability of this planar KPO system. Entangling Schrödinger's cat states by seeding a Bell state or swapping the cats Jaw-Shen Tsai July 1, 2024 ================================================================================ § INTRODUCTION For nearly three decades, there have been two paradigms in quantum information processing: one involves discrete variables (DVs), such as photon number (Fock) states or spin states <cit.>, whereas the other relies on the structure of quantum states in a continuous-variable (CV) space, such as Schrödinger's cat and Gottesman–Kitaev–Preskill states <cit.>. Recently, considerable efforts have focused on bridging DV and CV quantum information to overcome the limitations of each paradigm <cit.>. Parametrically driven Kerr nonlinear resonators, often referred to as Kerr parametric oscillators (KPOs) <cit.>, offer a unique testbed for this task, particularly for exploring emergent quantum properties like entanglement in interacting quantum systems. This capability is enabled by simple one-to-one conversion between Fock and cat states via parametric pump control <cit.>. In our previous work <cit.>, we experimentally demonstrated that such conversion in a superconducting KPO preserves the quantum coherence of the system, with the underlying physics being quantum tunnelling in phase space <cit.>. Furthermore, we showed that single-gate operations on cat states in a KPO can be implemented similarly to conventional gate operations on the Fock-state basis <cit.>. In this work, we introduce two straightforward methods to create entangled cat states—a valuable resource for fault-tolerant quantum computation and communication <cit.>—by extending our approach that bridges DV and CV domains. The first method is the entanglement-preserving conversion from Fock-state encoding to cat-state encoding. Although there have been studies on two interacting KPOs <cit.>, the entanglement between them and its preservation during the conversion have yet to be investigated. Such a conversion suggests the possibility of constructing quantum networks in the cat basis using conventional schemes originally developed for the Fock basis, thereby reducing experimental complexity. Thus, our demonstration highlights the potential of DV–CV hybridization and may lay new groundwork for constructing quantum networks in the cat basis. The next method is to implement the √(iSWAP) gate between two cat states in a manner almost identical to that for Fock-state encoding <cit.>. This allows us to create entangled cat states faster than previous implementations on bosonic modes <cit.>, using only a single gate pulse. Furthermore, our implementation completes the demonstration of a universal quantum gate set, alongside the single-cat gate operations from our previous work <cit.>. Such two-KPO gate operation for cat-state encoding, which we refer to as the two-cat gate, has not been demonstrated, despite its importance in showing the scalability of a KPO system as a promising platform for quantum information processing. 
Thus, our implementation supports the scalability of planar superconducting KPO systems. For both our methods, we can make analogies to seeds (from the DV domain) sprouting (in the CV domain) thanks to watering (two-photon pumping), as illustrated in Fig. <ref>a. In this paper, we denote Fock states |0⟩ and |1⟩ as |0_F⟩ and |1_F⟩, respectively. Correspondingly, the even and odd cat states are denoted as |0_C⟩ and |1_C⟩ as shown in Fig. <ref>b. In addition, we refer to the Bell states in the Fock basis as Bell–Fock states and designate the resulting entangled cat states as Bell–Cat states. § SETUP The chip used in this work is shown in Fig. <ref>c. It is the same chip used in our previous study <cit.>. The Hamiltonian of our system can be described as (see Sec. 1 of Supplementary Information for the derivation) ℋ̂(t) = Δ_1 â_1^†â_1 - K_1/2â_1^†â_1^†â_1â_1 + P_1(t)/2( â_1^†â_1^† + â_1â_1 ) + Δ_2 â_2^†â_2 - K_2/2â_2^†â_2^†â_2â_2 + P_2(t)/2( â_2^†â_2^† + â_2â_2 ) + g ( â_1^†â_2 e^+iΔ_pt + â_1â_2^†e^-iΔ_pt). Here, we are working in units where ħ=1; â_i and â_i^† are the ladder operators for the KPOi (i=1,2); Δ_i (≡ω_Ki - ω_pi/2) is the KPO-pump frequency detuning, where ω_Ki is the transition frequency between the |0_F⟩ and |1_F⟩ states, and ω_pi is the frequency of the two-photon pump; K_i is the self-Kerr coefficient; P_i is the amplitude of the pump; g is the coupling constant; and Δ_p [≡(ω_p1-ω_p2)/2] is half of the detuning between the two pumps. The Hamiltonian in Eq. (<ref>) is in the rotating frame defined by ℋ̂_0 = ∑_i (ω_pi/2)â_i^†â_i. See Supplementary Table 1 for the values of these system parameters. The cat states are generated adiabatically using the pump pulse with the profile sin^2(π t/2τ_ramp), where the ramping time τ_ramp is 1 (see Methods for more details). Throughout this work, for both KPOs, the P/K ratio is chosen to be 1.0, where the Kerr coefficient is approximately 2 MHz after ramping up the pump, and the pump detuning [Δ_1 and Δ_2 in Eq. (<ref>)] is chosen to be 1.0 MHz. Since the detuning between the two KPOs (144 MHz) is nearly 20 times larger than the coupling (8 MHz), the interaction is effectively turned off on the timescale of the measurements; thus, cat states can be generated and measured independently and simultaneously as shown in Fig. <ref>d. § RESULTS §.§ Conversion from Fock to cat We first prepare all four types of Bell–Fock state, |0_F 0_F⟩±|1_F 1_F⟩ and |0_F 1_F⟩±|1_F 0_F⟩. Subsequent two-photon pumping to each KPO converts the Bell–Fock state into the same type of Bell–Cat state; for instance, from |0_F 0_F⟩ + |1_F 1_F⟩ to |0_C 0_C⟩ + |1_C 1_C⟩ (see Fig. <ref>c for the pulse sequence). This approach relies on the fundamental property of entanglement, namely, that “entanglement is preserved under local unitary operations” <cit.>. The Bell–Fock state is prepared by activating the interaction between the KPOs by applying a parametric pulse with either the frequency ω_K1+ω_K2 or ω_K1-ω_K2 to the pump ports <cit.>. A parametric pulse with each frequency induces the transitions between |0_F0_F⟩ and |1_F1_F⟩, and between |0_F1_F⟩ and |1_F0_F⟩ based on the three-wave mixing capability of our KPOs. Using such transitions, we can create states |0_F 0_F⟩ + e^iϕ_s|1_F 1_F⟩ and |0_F 1_F⟩ + e^iϕ_d|1_F 0_F⟩, where the phases ϕ_s and ϕ_d are determined by the phase of the parametric pulse (virtual Z gate). We refer to this pulse as the Bell-preparation pulse. Rabi oscillations associated with the Bell-preparation pulse are shown in Fig. <ref>a. 
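For readers who wish to reproduce these dynamics numerically, the following is a minimal QuTiP sketch of the two-KPO Hamiltonian given above with the sin^2(π t/2τ_ramp) pump ramp. The Fock-space truncation, the purely unitary (Schrödinger) evolution, the constant detuning (the chirp described in Methods is omitted), and the value taken for Δ_p are simplifying assumptions made here for illustration; the quoted parameters (K ≈ 2 MHz, P/K = 1.0, Δ_i = 1.0 MHz, g = 8 MHz) come from the text, but this is not the authors' simulation code.

import numpy as np
import qutip as qt

# Units: angular frequency in 2*pi*MHz, time in microseconds.
N = 15                                         # Fock truncation per KPO (assumed)
a1 = qt.tensor(qt.destroy(N), qt.qeye(N))
a2 = qt.tensor(qt.qeye(N), qt.destroy(N))

K1 = K2 = 2.0 * 2 * np.pi                      # self-Kerr, ~2 MHz after the ramp
Delta1 = Delta2 = 1.0 * 2 * np.pi              # pump detuning Delta_i = 1.0 MHz
P_target = 1.0 * K1                            # P/K = 1.0
g = 8.0 * 2 * np.pi                            # KPO-KPO coupling, 8 MHz
Delta_p = 72.0 * 2 * np.pi                     # half the pump-pump detuning (assumed from the 144 MHz splitting)
tau_ramp = 1.0                                 # 1 us adiabatic ramp

H0 = (Delta1 * a1.dag() * a1 - 0.5 * K1 * a1.dag()**2 * a1**2
      + Delta2 * a2.dag() * a2 - 0.5 * K2 * a2.dag()**2 * a2**2)

def pump(t, args=None):                        # P_i(t)/2 with the sin^2 ramp profile
    return 0.5 * P_target * np.sin(np.pi * t / (2 * tau_ramp)) ** 2

# The last two terms are the exchange coupling; at Delta_p >> g the text notes the interaction is
# effectively off, so they may be dropped when generating the cats independently.
H = [H0,
     [a1.dag()**2 + a1**2, pump],
     [a2.dag()**2 + a2**2, pump],
     [a1.dag() * a2, lambda t, args=None: g * np.exp(1j * Delta_p * t)],
     [a1 * a2.dag(), lambda t, args=None: g * np.exp(-1j * Delta_p * t)]]

psi0 = qt.tensor(qt.basis(N, 0), qt.basis(N, 1))          # seed |0_F 1_F>
result = qt.sesolve(H, psi0, np.linspace(0.0, tau_ramp, 401))
cats = result.states[-1]                                  # approximately |0_C 1_C> after the ramp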
For the full characterization of such entangled quantum states, we measured the two-mode Wigner functions (2WFs) (Fig. <ref>d,e) <cit.> because the one-mode Wigner functions (1WFs) cannot provide information on entanglement—all Bell states show the same 1WF, which is identical to that of the fully mixed state (Fig. <ref>b). The 2WFs of the target Bell–Fock and Bell–Cat states are shown in Supplementary Fig. 5. We observe all essential features in the 2WF of Bell–Cat states (Fig. <ref>e). Firstly, in the Re–Re plots with (α_i)=0 (i=1,2), two red circles aligned diagonally indicate the correlation between the two KPOs, similar to the results in Ref. <cit.>. The alignment direction of the red circles represents the sign of the superposition. The colour of the centre circle, which represents the joint number parity, indicates the type of Bell state; for instance, |0 1⟩ + e^iϕ|1 0⟩ shows a blue centre regardless of whether the basis is Fock or cat. Secondly, the interference pattern in the Im–Im plot with (α_i)=0 demonstrates that the correlation is of quantum nature. Note that the patterns in Fig. <ref>d,e illustrate how the 2WFs of Bell–Fock states evolve to those of Bell–Cat states: As the pump amplitude increases, the pattern in Fig. <ref>d elongates along the diagonal axis, eventually resembling the Re–Re plots in Fig. <ref>e. Regarding the Im–Im plots of Bell–Fock states, those of |0_F 1_F⟩±|1_F 0_F⟩ are identical to the Re–Re plots, whereas the Im–Im plot of |0_F 0_F⟩±|1_F 1_F⟩ matches the Re–Re plot of |0_F 0_F⟩∓|1_F 1_F⟩, as confirmed by our measurements (not shown). The Im–Im plots in Fig. <ref>e can be interpreted as a compressed version of the plots in Fig. <ref>d along the diagonal axis. These 2WF patterns show the profound connection between quantum correlations in the Bell–Fock and Bell–Cat states. The fidelity between the experimentally created Bell–Fock states and the target Bell–Fock states is 0.81± 0.01 (the error represents the standard deviation). This fidelity was obtained by reconstructing the density matrix from the measured 2WFs (see Methods). By simulating our Bell–Fock preparation process via the Lindblad master equation with the system parameters in Supplementary Table 1, we find that approximately half of the infidelity is caused by thermal excitation and the other half by relaxations such as single-photon loss and dephasing. See Sec. 2 of Supplementary Information for more details on the simulation. The fidelity between the experimentally created Bell–Cat states and the target states is 0.61±0.04. For completely mixed cat states, this value would be 0.25. Further suppression of the fidelity during the conversion process is primarily caused by single-photon loss. The most notable symptom in the 2WF caused by single-photon loss is that the colour of the centre circle in the Re–Re plots with (α_i)=0 (i=1,2) and the interference pattern in the Im–Im plots with (α_i)=0 decay with time <cit.>, resulting in a weaker contrast than the 2WF plots of the target Bell–Cat states (shown in Supplementary Fig. 5), as shown in Fig. <ref>e. The simulation with the Lindblad master equation gives a fidelity of about 0.71, which is reasonably close to our experimental result. Dephasing caused by low-frequency noise does not affect the fidelity during and after the ramping of the pump because the cat states in KPOs are protected by the energy gap <cit.>. Experimental evidence indicates that the primary source of relaxation for cat states in a KPO is single-photon loss <cit.>. 
However, dephasing during the Bell–Fock state preparation, specifically fluctuations in the phase of the Bell-preparation pulse, causes another notable symptom as shown in Fig. <ref>e: Two corners in the Re–Re plots are slightly pink, whereas those of the target Bell–Cat states should be completely white (see Supplementary Fig. 5) <cit.>. Thus, the fidelity can be improved by suppressing single-photon loss, thermal excitation, and dephasing. If we assume T_1=T_2=100 <cit.> and a thermal photon number of 0.01, the fidelity of the Bell-Fock states is approximately 0.96 using the same Bell-preparation pulse, as simulated by the Lindblad master equation. The primary source of remaining infidelity arises from unwanted higher-state excitations due to the small Kerr coefficient. This issue can be mitigated by employing DRAG-like pulses <cit.>. Importantly, since our KPO is frequency-tunable, techniques such as spin echo are necessary to achieve a long T_2. With the same relaxation times and thermal population, the simulation shows that the fidelity of the Bell-Cat state can reach approximately 0.93 using the same cat generation pulse. The primary source of reduced fidelity in this case is population leakage out of the computational subspace caused by non-adiabatic transitions during cat generation. This leakage can be suppressed by employing a counterdiabatic or numerically optimized pulse <cit.>. (In this work, a counterdiabatic pulse was not used, unlike in our previous work <cit.>. See Methods for more information.) §.§ Two-cat gate operation One interesting and useful property of this KPO system is that we can use the same type of parametric pulse for two-qubit gate operation both in the Fock and cat state encoding <cit.>. In this work, the parametric pulse with the frequency ω_K1-ω_K2, which we used to prepare the Bell–Fock state, was also used for the gate operation. We observe the Rabi-like oscillations in the parity of each KPO, which we call the two-cat Rabi, as a function of the phase and the detuning of the parametric pulse, which we call the gate pulse (Fig. <ref>a,b). Here, the gate phase ϕ_g is the phase relative to the pumps, and the gate detuning Δ_g is the detuning from (ω_p1-ω_p2)/2. For this measurement, we first prepare |0_F1_F⟩ and convert it to |0_C1_C⟩ by applying the pumps. Then, we apply the gate pulse, in addition to the pumps, as shown in Fig. <ref>c. Note that the two KPOs exhibit the same two-cat Rabi oscillations but with opposite parities. From the simulation, we determined the gate amplitude to be 2.96 MHz (see Supplementary Fig. 4a and its caption for details). One-mode Wigner functions show that during the Rabi oscillations, the state evolves from |0_C1_C⟩ (no gate) to |1_C0_C⟩ (iSWAP). To determine the intermediate quantum state between these two points, a 2WF measurement with an additional offset in displacement is required. This is because 1WF and the Re–Re (Im–Im) plot in 2WF without an additional displacement along the imaginary (real) axes cannot distinguish the following three states: |0_C1_C⟩±i|1_C0_C⟩, which are the states after the √(iSWAP) gate, and the mixture of |0_C1_C⟩ and |1_C0_C⟩. The Re–Re plot with an additional offset shows that the state is |0_C1_C⟩ - i|1_C0_C⟩ (the plot at the bottom of Fig. <ref>d), confirming that the two-cat gate operation is the √(iSWAP) gate (see Supplementary Fig. 5). The √(iSWAP) gate time, 275 ns, is significantly faster than recent implementations of similar SWAP gate operations on bosonic modes <cit.>. 
This short gate time is possible because the beam-splitter interaction is inherently built into the Hamiltonian [Eq. (<ref>)], and the KPO system enables us to adopt schemes for gate operations in Fock-state encoding. The primary limitations on our gate time are the AC Stark-like frequency shift induced by the gate pulse above a certain amplitude threshold, which would introduce unwanted Z-gate operations, and the small cat size. Additionally, Ref. <cit.> suggested performing a similar gate operation using the frequency (ω_p1+ω_p2)/2, as we demonstrated in Fig. <ref> for the preparation of Bell–Fock states. We did not pursue this approach because the amplitude threshold for the AC Stark-like frequency shift is almost zero at (ω_p1+ω_p2)/2. Therefore, suppressing the AC Stark-like frequency shift at the circuit design level and increasing the cat size will enable faster gate operations and enhance functionalities. Similarly to the Bell–Fock state preparation, the sign of the superposition can be flipped by adding π in the phase of the two-cat gate pulse. Unlike the conversion from the Bell–Fock to Bell–Cat states, however, we cannot create a Bell–Cat state with an arbitrary phase. The reason is that once the pumps are turned on, the pump phase becomes the reference phase; consequently, we can no longer use the virtual Z gate as implied in Fig. <ref>a. Thus, the Bell–Cat state we create in this work by the two-cat gate is limited to |0_C1_C⟩±i|1_C0_C⟩. The gate-detuning dependence of the two-cat Rabi exhibits the characteristic pattern observed in cat Rabi oscillations for the X gate <cit.>. This suggests that, as pointed out in Ref. <cit.>, when mapping the dynamics of cat states to that of interacting two-level qubits, two tones with opposite gate detuning are required. In such a two-level qubit system, the same pattern can be reproduced by modulating the coupling constant with two frequencies, ω_g and 2(ω_q1-ω_q2)-ω_g, where ω_qi is the transition frequency of the two-level qubiti (i=1,2). In this case, zero detuning corresponds to ω_g = ω_q1-ω_q2. For further discussion and simulation results, see Sec. 6 of Supplementary Information. The fidelity of the |0_C 1_C⟩±i|1_C 0_C⟩ states is 0.60±0.04, which is almost identical to that achieved by the conversion from Bell–Fock to Bell–Cat states. This result is not surprising because, although the pulse length of the √(iSWAP) gate on the cat states (275 ns) is less than half that of the Bell-preparation pulse (730 ns), the contrast of the two-cat Rabi oscillations attenuates faster than that of the Rabi oscillations used for the Bell–Fock state preparation (compare Fig. <ref>b and the left plot of Fig. <ref>a). More quantitatively, the decay time of the two-cat Rabi oscillation is 3 for both KPOs, whereas that of the Rabi oscillations used in Bell–Fock state preparation is longer than 10 . The simulation using the Lindblad master equation suggests that the photon lifetime of both KPOs to reproduce the data in Fig. <ref>b is 10 (see Supplementary Fig. 4a). This photon lifetime falls within the observed range (Supplementary Table 1). Thus, the main sources of infidelity in this case are also single-photon loss and thermal excitation. Lastly, we point out that the same √(iSWAP) gate operation can be performed between cat states with different mean photon numbers. This property may provide significant flexibility when constructing a KPO-based quantum network, particularly for the scheme developed in Refs. <cit.>. 
The simulation results can be found in Supplementary Fig. 4c. § DISCUSSION To summarize, we demonstrate two intuitive methods for entangling cat states by adopting a DV–CV hybrid approach. This hybridization is achieved through Hamiltonian engineering, combining moderate Kerr nonlinearity and two-photon pumping. It enables coherent treatment of Bell–Fock and Bell–Cat states, facilitating gate operations directly on the cat basis without the need for ancilla qubits or individual Fock state control. One consequence is the entanglement-preserving conversion from Bell–Fock to Bell–Cat states. The other is the fast and simple √(iSWAP) gate operation on the cat states, thereby completing the demonstration of a universal quantum gate set. Therefore, our superconducting planar KPO system is not only a potentially scalable quantum information processing unit but also a potent platform for DV–CV hybridization. We suggest several future research directions extending this work. First, we can construct quantum networks in the cat basis. Note that our methods are compatible with previously demonstrated quantum network constructions in the Fock basis <cit.>. This means we can create more complex entangled states, such as Greenberger–Horne–Zeilinger or cluster states, in the cat basis simply by replacing transmon/Xmon qubits with KPOs and converting the basis from Fock to cat states. This approach will significantly reduce the complexity of constructing quantum networks using bosonic modes. We can also create travelling entangled-cat states by coupling our system to transmission lines <cit.>. Combining Hamiltonian engineering with dissipation engineering may enable us to create highly coherent cat states <cit.>. Finally, employing other multiphoton pumps may open new possibilities <cit.>, such as exploring condensed matter physics in time crystals <cit.> and autonomous quantum error correction <cit.>. § METHODS §.§ Cat state generation As mentioned in the main text, the ramping time of the pump for cat-state generation is 1 . This ramping time is much longer than that in our previous work (300 ns) <cit.> because the counterdiabatic pulse did not work. We believe that the reason is the reduction in Kerr coefficient from about 3 MHz to 2 MHz after ramping up the pump (see Supplementary Table 1), whereas the Kerr coefficient in Ref. <cit.> increases slightly from 2.86 MHz to 3.13 MHz. During the ramping, we change the pump frequency, i.e., chirp the pump pulse, for two reasons: One is to compensate for unwanted AC Stark-like frequency shifts in 2ω_K1 and 2ω_K2, which are approximately -10 MHz at the target pump amplitude <cit.>. The other reason is that the pump detuning must start from zero and then approach the target value adiabatically to create high-fidelity cat states. §.§ Wigner-function measurements The one-mode Wigner function of the KPO is given by <cit.> W^(i)(α_i) = 2/π[D̂^†(α_i) ρ^(i)D̂(α_i) Π̂^(i)], where D̂(α_i) = exp(α_i â_i^† - α_i^* â_i ) is the displacement operator, Π̂^(i) = exp(iπâ_i^†â_i ) is the photon-number parity operator, and ρ^(i) is the density matrix of KPOi (i=1,2). Similarly, the two-mode Wigner function is given by <cit.> W^(12)(α_1,α_2) = 4/π^2[D̂^†(α_2)D̂^†(α_1) ρ^(12)D̂(α_1) D̂(α_2) Π̂^(12)] = 4/π^2⟨Π̂^(12)(α_1,α_2) ⟩, where ρ^(12) is the density matrix of the two-KPO system and Π̂^(12) = Π̂^(1)Π̂^(2) is the joint parity of the two KPOs. 
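The two-mode Wigner function above can be evaluated numerically for any target state as the expectation value of the displaced joint-parity operator. A minimal QuTiP sketch follows; the truncation and the Bell–Fock example state are chosen here for illustration, and this is not the experimental measurement pipeline, which is described next.

import numpy as np
import qutip as qt

def two_mode_wigner(rho12, alpha1, alpha2, N):
    """W^(12)(alpha1, alpha2) = (4/pi^2) <Pi^(12)(alpha1, alpha2)> for a two-KPO state."""
    D1 = qt.tensor(qt.displace(N, alpha1), qt.qeye(N))
    D2 = qt.tensor(qt.qeye(N), qt.displace(N, alpha2))
    n_tot = qt.tensor(qt.num(N), qt.qeye(N)) + qt.tensor(qt.qeye(N), qt.num(N))
    parity12 = (1j * np.pi * n_tot).expm()                  # joint photon-number parity
    displaced_rho = D1.dag() * D2.dag() * rho12 * D1 * D2   # D1 and D2 act on different modes and commute
    return 4.0 / np.pi**2 * (displaced_rho * parity12).tr().real

# Example: Re-Re cut (Im(alpha_i) = 0) for the Bell-Fock state (|0_F 1_F> + |1_F 0_F>)/sqrt(2)
N = 10
bell = (qt.tensor(qt.basis(N, 0), qt.basis(N, 1))
        + qt.tensor(qt.basis(N, 1), qt.basis(N, 0))).unit()
rho = bell * bell.dag()
axis = np.linspace(-1.6, 1.6, 17)
W = np.array([[two_mode_wigner(rho, x1, x2, N) for x2 in axis] for x1 in axis])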
This operator can be measured by the joint probabilities of the transmons being in their ground/excited state P_jk (j,k ∈{g,e}) <cit.>: ⟨Π̂^(12)(α_1,α_2) ⟩ = P_ee + P_gg - P_eg - P_ge. In the experiment, this was accomplished by fitting the single-shot readout data (Fig. <ref>a) with a two-dimensional Gaussian function for all pixels of the Wigner function plots. The two-mode Wigner functions of the target states in Supplementary Fig. 5 are obtained using the Cahill–Glauber formula <cit.>: W^(12) (α_1,α_2) = 4/π^2[ρ^(12)T̂(α_1)T̂(α_2)] = 4/π^2∑_{n_i}=0^N_i∑_{m_i}=0^N_i∏_i=1^2⟨ n_i | T̂ (α_i) | m_i ⟩ ×⟨{m_i} | ρ^(12) | {n_i}⟩, where T̂ is the complex Fourier transform of the displacement operator, and N_i is the dimension of the Hilbert space of KPOi. For m_i ≥ n_i, ⟨ n_i | T̂ (α_i) | m_i ⟩ = √(n_i !/m_i !) (-1)^n_i (2α_i^*)^δ_i × L_n_i^(δ_i)(4|α_i|^2) exp(-2|α_i|^2), where δ_i ≡ m_i-n_i and L_n_i^(δ_i)(x) are the associated Laguerre polynomials. For m_i < n_i, we can use the following property: ⟨ n_i | T̂ (α_i) | m_i ⟩ = ⟨ m_i | T̂ (α_i^*) | n_i ⟩. §.§ Density-matrix reconstruction A two-mode Wigner function is a four-dimensional function. Since our signal-to-noise ratio is marginal, as shown in Fig. <ref>a, collecting such a large data set—13 × 13 × 13 × 13 pixels, for example—is impractical. Instead, we measured 10 two-dimensional plots, each of which has 17× 17 pixels in the range -1.6 ≤α_i ≤ 1.6 (i=1,2). Among these 10 plots, half are Re–Re plots with imaginary offset displacements and the other half are Im–Im plots with real offset displacements. For Re–Re plots, the imaginary offset displacements are given as follows (Fig. <ref>b): {((α_1),(α_2))} = {(0,0), (0,+0.82), (-1.10,+1.07), (-1.35,-1.32), (+1.35,-0.82)}. The same values are used for the real offset displacements for Im–Im plots. We found that, with the data set simulated from the target Bell–Cat states, the reconstruction fidelity is >0.99. We also checked the reconstruction fidelity of non-ideal data sets. For example, we prepared low-quality Bell–Cat states by simulating the Lindblad mater equation with T_1=10 for both KPOs after 2 waiting; the resulting fidelity between this state and the initial state was 0.57, which is similar to our results. The reconstruction fidelity from 2WFs of these low-quality states is still >0.93. For Bell–Fock states, the dimensions of the Hilbert space are set to 3 × 3. For Bell–Cat states, the dimensions of the Hilbert space are set to 8 × 8 because, for the ideal Bell–Cat states with P/K = 1 and 1 MHz of pump detuning, the occupation probability at |≥8⟩ is less than 10^-4. The algorithm for reconstruction followed the idea from Refs. <cit.>, which use gradient descent to reconstruct a density matrix with a projection step. A loss function between the measured data and that obtained from an estimated density matrix is minimized to obtain the reconstructed density matrix starting from a random initialization. We simplified the method to directly apply gradient descent (Adam <cit.>) on a matrix T, that is projected to construct an estimate of the physical density matrix using the Cholesky decomposition. At each gradient-descent step, the loss function is minimized followed by a projection step where the matrix T is converted to a lower triangular matrix with real-valued diagonal elements by discarding the upper-triangular part and making the diagonal real. This step allows us to obtain a density matrix ρ = T^† T/Tr(T^† T) that is guaranteed to be physical. 
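A minimal sketch of this reconstruction loop is given below, assuming the measured Wigner values have been flattened into a vector w_meas and the corresponding displaced joint-parity operators into an array parity_ops (both names are illustrative). Plain gradient descent stands in for Adam, and the lower-triangular/real-diagonal projection is folded into the forward pass rather than applied as a separate post-step; it illustrates the scheme described above and is not the code used in this work.

import numpy as np
import jax
import jax.numpy as jnp

def project(T):
    """Keep the lower-triangular part of T and make its diagonal real."""
    T = jnp.tril(T)
    return T - 1j * jnp.diag(jnp.imag(jnp.diag(T)))

def density_matrix(T):
    rho = T.conj().T @ T
    return rho / jnp.real(jnp.trace(rho))          # rho = T^dag T / Tr(T^dag T), guaranteed physical

def loss(params, parity_ops, w_meas):
    """Squared error between measured Wigner values and those of the estimated state."""
    T = project(params['re'] + 1j * params['im'])
    rho = density_matrix(T)
    w_model = 4.0 / np.pi**2 * jnp.real(jnp.einsum('kij,ji->k', parity_ops, rho))
    return jnp.sum((w_model - w_meas) ** 2)

grad_fn = jax.jit(jax.grad(loss))

def reconstruct(parity_ops, w_meas, dim, steps=2000, lr=5e-3, seed=0):
    """parity_ops: (M, dim, dim) displaced joint-parity operators; w_meas: (M,) measured Wigner values."""
    k1, k2 = jax.random.split(jax.random.PRNGKey(seed))
    params = {'re': 0.1 * jax.random.normal(k1, (dim, dim)),
              'im': 0.1 * jax.random.normal(k2, (dim, dim))}
    for _ in range(steps):
        grads = grad_fn(params, parity_ops, w_meas)
        params = {k: params[k] - lr * grads[k] for k in params}
    return np.array(density_matrix(project(params['re'] + 1j * params['im'])))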
The Python libraries used were QuTiP <cit.>, NumPy <cit.>, and JAX <cit.>. Data availability: All data are available in the main text or in the supplementary materials. 99 vandersypen L. M. K. Vandersypen and I. L. Chuang, NMR techniques for quantum control and computation, Rev. Mod. Phys. 76, 1037 (2005). mit P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, A quantum engineer's guide to superconducting qubits, Appl. Phys. Rev. 6, 021318 (2019). kwon S. Kwon, A. Tomonaga, G. L. Bhai, S. J. Devitt, and J.-S. Tsai, Gate-based superconducting quantum computing, J. Appl. Phys. 129, 041102 (2021). burkard G. Burkard, T. D. Ladd, A. Pan, J. M. Nichol, and J. R. Petta, Semiconductor spin qubits, Rev. Mod. Phys. 95, 025003 (2023). braunstein2005 S. L. Braunstein and P. van Loock, Quantum information with continuous variables, Rev. Mod. Phys. 77, 513 (2005). joshi2021 A. Joshi, K. Noh, and Y. Y. Gao, Quantum information processing with bosonic qubits in circuit QED, Quantum Sci. Technol. 6, 033001 (2021). eriksson2024 A. M. Eriksson, T. Sépulcre, M. Kervinen, T. Hillmann, M. Kudra, S. Dupouy, Y. Lu, M. Khanahmadi, J. Yang, C. Castillo-Moreno, P. Delsing, and S. Gasparinetti, Universal control of a bosonic mode via drive-activated native cubic interactions, Nat. Commun. 15, 2512 (2024). andersen2015 U. L. Andersen, J. S. Neergaard-Nielsen, P. van Loock, and A. Furusawa, Hybrid discrete- and continuous-variable quantum information, Nature Phys. 11, 713–719 (2015). jeong2014 H. Jeong, A. Zavatta, M. Kang, S.-W. Lee, L. S. Costanzo, S. Grandi, T. C. Ralph, and M. Bellini, Generation of hybrid entanglement of light, Nature Photon. 8, 564–569 (2014). morin2014 O. Morin, K. Huang, J. Liu, H. Le Jeannic, C. Fabre, and J. Laurat, Remote creation of hybrid entanglement between particle-like and wave-like optical qubits, Nature Photon. 8, 570–574 (2014). ulanov2017 A. E. Ulanov, D. Sychev, A. A. Pushkina, I. A. Fedorov, and A. I. Lvovsky, Quantum Teleportation Between Discrete and Continuous Encodings of an Optical Qubit, Phys. Rev. Lett. 118, 160501 (2017). sychev2018 D. V. Sychev, A. E. Ulanov, E. S. Tiunov, A. A. Pushkina, A. Kuzhamuratov, V. Novikov, and A. I. Lvovsky, Entanglement and teleportation between polarization and wave-like encodings of an optical qubit, Nat. Commun. 9, 3672 (2018). gan2020 H. C. J. Gan, G. Maslennikov, K.-W. Tseng, C. Nguyen, and D. Matsukevich, Hybrid Quantum Computing with Conditional Beam Splitter Gate in Trapped Ion System, Phys. Rev. Lett. 124, 170502 (2020). darras2023 T. Darras, B. E. Asenbeck, G. Guccione, A. Cavaillès, H. Le Jeannic, and J. Laurat, A quantum-bit encoding converter, Nat. Photon. 17, 165–170 (2023). macridin2024 A. Macridin, A. C. Y. Li, and P. Spentzouris, Qumode transfer between continuous- and discrete-variable devices, Phys. Rev. A 109, 032419 (2024). dykman M. Dykman, in Fluctuating Nonlinear Oscillators: From Nanomechanics to Quantum Superconducting Circuits, edited by M. Dykman (Oxford University Press, 2012). goto2019b H. Goto, Quantum computation based on quantum adiabatic bifurcations of Kerr-nonlinear parametric oscillators, J. Phys. Soc. Jpn. 88, 061015 (2019). wustmann2019 W. Wustmann and V. Shumeiko, Parametric effects in circuit quantum electrodynamics, Low Temp. Phys. 45, 848–869 (2019). yamaji2022 T. Yamaji, S. Kagami, A. Yamaguchi, T. Satoh, K. Koshino, H. Goto, Z. R. Lin, Y. Nakamura, and T. 
Yamamoto, Spectroscopic observation of the crossover from a classical Duffing oscillator to a Kerr parametric oscillator, Phys. Rev. A 105, 023519 (2022). yamaguchi2024 A. Yamaguchi, S. Masuda, Y. Matsuzaki, T. Yamaji, T. Satoh, A. Morioka, Y. Kawakami, Y. Igarashi, M. Shirane, and T. Yamamoto, Spectroscopy of flux-driven Kerr parametric oscillators by reflection coefficient measurement, New. J. Phys. 26, 043019 (2024). cochrane1999 P. T. Cochrane, G. J. Milburn, and W. J. Munro, Macroscopically distinct quantum-superposition states as a bosonic code for amplitude damping, Phys. Rev. A 59, 2631–2634 (1999). goto2016a H. Goto, Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network, Sci. Rep. 6, 21686 (2016). minganti2016 F. Minganti, N. Bartolo, J. Lolli, W. Casteels, and C. Ciuti, Exact results for Schrödinger cats in driven-dissipative systems and their feedback control, Sci. Rep. 6, 26987 (2016). puri2017a S. Puri, S. Boutin, and A. Blais, Engineering the quantum states of light in a Kerr-nonlinear resonator by two-photon driving, npj Quantum Inf. 3, 18 (2017). zhang2017 Y. Zhang and M. I. Dykman, Preparing quasienergy states on demand: A parametric oscillator, Phys. Rev. A 95, 053841 (2017). wang2019 Z. Wang, M. Pechal, E. A. Wollack, P. Arrangoiz-Arriola, M. Gao, N. R. Lee, and A. H. Safavi-Naeini, Quantum dynamics of a few-photon parametric oscillator, Phys. Rev. X 9, 021049 (2019). masuda2021a S. Masuda, T. Ishikawa, Y. Matsuzaki, and S. Kawabata, Controls of a superconducting quantum parametron under a strong pump field, Sci. Rep. 11, 11459 (2021). xue2022 J.-J. Xue, K.-H. Yu, W.-X. Liu, X. Wang, and H.-R. Li, Fast generation of cat states in Kerr nonlinear resonators via optimal adiabatic control, New J. Phys. 24, 053015 (2022). catGen D. Iyama, T. Kamiya, S. Fujii, H. Mukai, Y. Zhou, T. Nagase, A. Tomonaga, R. Wang, J.-J. Xue, S. Watabe, S. Kwon, and J.-S. Tsai, Observation and manipulation of quantum interference in a superconducting Kerr parametric oscillator, Nat. Commun. 15, 86 (2024). marthaler2007 M. Marthaler and M. I. Dykman, Quantum interference in the classically forbidden region: A parametric oscillator, Phys. Rev. A 76, 010102(R) (2007). venkatraman2022 J. Venkatraman, R. G. Cortinas, N. E. Frattini, X. Xiao, and M. H. Devoret, A driven quantum superconducting circuit with multiple tunable degeneracies, Preprint at https://doi.org/10.48550/arXiv.2211.04605 (2022). goto2016b H. Goto, Universal quantum computation with a nonlinear oscillator network, Phys. Rev. A 93, 050301(R) (2016). puri2020 S. Puri, L. St-Jean, J. A. Gross, A. Grimm, N. E. Frattini, P. S. Iyer, A. Krishna, S. Touzard, L. Jiang, A. Blais, S. T. Flammia, and S. M. Girvin, Bias-preserving gates with stabilized cat qubits, Sci. Adv. 6, eaay5901 (2020). grimm2020 A. Grimm, N. E. Frattini, S. Puri, S. O. Mundhada, S. Touzard, M. Mirrahimi, S. M. Girvin, S. Shankar, and M. H. Devoret, Stabilization and operation of a Kerr-cat qubit, Nature 584, 205 (2020). kanao2021b T. Kanao, S. Masuda, S. Kawabata, and H. Goto, Quantum gate for Kerr-nonlinear parametric oscillator using effective excited states, Phys. Rev. Applied 18, 014019 (2022). xu2021 Q. Xu, J. K. Iverson, F. G. S. L. Brandão, and L. Jiang, Engineering fast bias-preserving gates on stabilized cat qubits, Phys. Rev. Research 4, 013082 (2022). masuda2022 S. Masuda, T. Kanao, H. Goto, Y. Matsuzaki, T. Ishikawa, and S. 
Acknowledgments: The authors thank Adam Miranowicz, Tsuyoshi Yamamoto, Shiro Saito, Atsushi Noguchi, Shotaro Shirai, and Yoshiki Sunada for their interest in this project and helpful discussions. We also thank Kazumasa Makise of the National Astronomical Observatory of Japan for providing niobium films and the MIT Lincoln Laboratory for providing a Josephson travelling-wave parametric amplifier. This work was supported by the Japan Science and Technology Agency (Moonshot R&D, JPMJMS2067; CREST, JPMJCR1676) and the New Energy and Industrial Technology Development Organization (NEDO, JPNP16007). SA and AFK acknowledge support from the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT). AFK is also supported by the Swedish Research Council (grant number 2019-03696), the Swedish Foundation for Strategic Research (grant numbers FFL21-0279 and FUS21-0063), and the Horizon Europe programme HORIZON-CL4-2022-QUANTUM-01-SGA via the project 101113946 OpenSuperQPlus100. Author contributions: SK and JST conceived the project. SK, DH, TN, HM, and JST designed the details of the experiment. DH, TN, and SK performed the measurements and data analysis. TN and SK performed the simulations with contributions from SF. SW provided theoretical support. DH and DI wrote the software for the measurements. SA wrote the code for the density-matrix reconstruction with contributions from AFK and SK. HM managed the hardware. SK and TK designed the chip. TK fabricated the chip. SK wrote the original draft with contributions from DH and TN. All authors contributed to the review and editing of the paper. SK, FY, and JST supervised the project. JST acquired the primary funding.
Competing interests: The authors declare that they have no competing interests.
http://arxiv.org/abs/2406.18298v1
20240626123202
A model-independent determination of the sound horizon using recent BAO measurements and strong lensing systems
[ "Tonghua Liu", "Shuo Cao", "Jieci Wang" ]
astro-ph.CO
[ "astro-ph.CO" ]
School of Physics and Optoelectronic, Yangtze University, Jingzhou 434023, China; caoshuo@bnu.edu.cn School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China; Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China; jcwang@hunnu.edu.cn Department of Physics, and Collaborative Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China; § ABSTRACT We propose an improved method to determine the sound horizon in a cosmological model-independent way by using the latest observations of BAO measurements from DES, BOSS/eBOSS, and DESI surveys and gravitationally time-delay lensed quasars from H0LiCOW collaboration. Combining the 6D_Δ t plus 4D_d measurements and the reconstructed BAO datasets, we obtain a model-independent result of r_d=139.7^+5.2_-4.5 Mpc, with the precision at the ∼3.7% level, which is in agreement with the result of Planck 2018 within ∼1.7σ uncertainty. Our method is independent of cosmological parameters such as the Hubble constant, dark energy, (and, more importantly, does not involve the cosmic curvature when using the D_d measurements of the lenses, and also avoids the obstacle of mass-sheet degeneracy in gravitational lensing). Meanwhile, it does not need to consider the Eddington relation with concerning the transformation of distance. Since only two types of data are considered, the contribution of each can be clearly understood. Our results also highlight the Hubble tension and may give us a better understanding of the discordance between the datasets or reveal new physics beyond the standard model. A model-independent determination of the sound horizon using recent BAO measurements and strong lensing systems Jieci Wang July 1, 2024 =============================================================================================================== § INTRODUCTION In the early universe, baryon acoustic oscillation (BAOs) are sound waves produced by gravitational interactions between photon-baryon fluids and inhomogeneity <cit.>. During the drag period, baryons decouple from photons and freeze at a scale equal to the acoustic horizon at the drag period redshift. This scale, known as the sound horizon scale, is a standard ruler embedded in the distribution of galaxies. It is a crucial theoretical prediction of the cosmological model, dependent on the speed of sound in the baryon-photon plasma and the rate of expansion of the early Universe before matter and radiation decoupled. The Sloan Digital Sky Survey (SDSS) <cit.> and the 2dF Galaxy Redshift Survey (2dFGRS) <cit.> first detected BAO signal and demonstrated the power of the BAO as a standard ruler for cosmology. Subsequent BAO surveys including the Six-degree Field Galaxy Survey (6dFGS) <cit.>, the Baryon Oscillation Spectroscopic Survey (BOSS) <cit.>, the extended Baryon Oscillation Spectroscopic Survey (eBOSS) <cit.>, and the WiggleZ Survey <cit.> aimed to achieve more precise cosmic distance measurements at a percentage level. These measurements could offer a highly statistically significant study of the current tension problems (please see e.g. <cit.> and references therein for a more comprehensive discussion). Currently, the Dark Energy Spectroscopic Instrument (DESI) collaboration has presented Data Release 1 (DR1) of BAO measurements and showed more than 2σ evidence for the dynamical dark energy <cit.>. 
However, before using the BAO as a standard ruler for cosmology and as a powerful cosmological probe, one needs to know the comoving length of this ruler, i.e., the sound horizon r_d at the radiation drag epoch. The sound horizon r_d is usually calibrated at z≈ 1100 relying on the CMB observations. Since the length of the scale is not known, BAO can only give a relative measurement of the expansion history. This is similar to the type of Ia supernovae (SNe Ia) acting as cosmological standard candles. If the value of absolute magnitude is not known, then SNe Ia can only provide relative distances. This implies that the Hubble constant H_0 and the sound horizon r_d are closely related, that there is strong degeneracy between them, and that they link late and early cosmology. Planck cosmic microwave background (CMB) anisotropy (both temperature and polarization) data reported the r_d=147±0.30 Mpc with assuming cosmological constant plus cold dark matter (ΛCDM) model <cit.>. An alternative approach is to combine BAO measurements with other low-redshift observations. Combining standard clocks and the local H_0 measurement to the SNe and BAO, the work <cit.> obtained the r_d=142.8±3.7 Mpc and consisted of the results derived from Planck data. Similarly, considering the same data type, the work <cit.> got r_d=136.80±4.0 Mpc using the spline interpolation method for reconstruction of expansion history, when curvature Ω_K as a free parameter. Subsequently, under the assumption that the universe is flat, <cit.> inferred sound horizon r_d=143.9±3.1 Mpc. See Refs. <cit.> for more works about the sound horizon. It is important to emphasize here that most of the work to determine sound horizon is cosmological model dependent. If one considers the data from standard clocks to reconstruct the expansion history (eliminating the assumptions of the cosmological background), then one needs to have an assumption about the curvature of the universe or use it as a free parameter. Considering the current critical situation for the measurements of the Hubble constant in astronomical observations, in particular the assumptions of cosmological models, it is very necessary to realize the calibration or determination of r_d in the cosmological model-independent way. As one of the most ubiquitous phenomena in astronomy, strong gravitational lensing by elliptical galaxies directly provides absolute distance, and is a powerful tool to study the velocity dispersion function of early-type galaxies <cit.>, the distribution of dark matter <cit.>, and cosmological parameters <cit.>. In particular, gravitational lensing systems with time-delay measurements between multiple images provide a valuable opportunity for the determination of H_0. In representative work of <cit.>, the H_0 Lenses in COSMOGRAIL's Wellspring (H0LiCOW) collaboration combined the six gravitationally lensed quasars with well-measured time delays to constrain the H_0. The full dataset consists of six lenses, five of which were analyzed blindly, and four of which have both time-delay distance D_Δ t and the angular diameter distance to the lens D_d measurements. Current state-of-the-art lensing programs for time-delay with lensed quasars have great progress, such as TDCOSMO collaboration[http://tdcosmo.org] <cit.> (formed by members of H0LiCOW <cit.>, COSMOGRAIL <cit.>, STRIDES <cit.>, and SHARP. 
Recently, TDCOSMO <cit.> based on a new hierarchical Bayesian approach with the original seven lenses, six of which are from H0LiCOW, and reported H_0=67.4^+4.1_-3.5 . Previously, there has been some work considering the use of time-delay gravitational lensing to calibrate the sound horizon <cit.>. However, these works were either cosmological model dependent or introduced data like standard clock. Using these data together, it is not possible to disentangle and determine the contribution of these different probes to calibrating the sound horizon in the BAO. Inspired by the above, this work will use time-delay strong gravitational lensing systems combined with the recent BAO measurements from DES, BOSS/eBOSS, and DESI, to calibrate the sound horizon r_d in a model-independent method. The combination of these two probes has a number of advantages: first, it is independent of the cosmological model and is independent of the early universe; second, it is independent of cosmological parameters such as the Hubble constant, dark energy, (and even more no cosmic curvature is involved when using the D_d of the lenses, and also avoids the obstacle of mass-sheet degeneracy in gravitational lensing); and third, it does not need to take into account the Eddington relation with respect to the transformation of distance. Fourth, only two types of data are considered, enabling a clear understanding of the contributions of each data. This paper is organized as follows: in Section 2, we present the data used in this work and the methodology of calibration of the sound horizon. In Section 3, we give our results and discussion. Finally, the main conclusions are summarized in Section 4. § DATA AND METHODOLOGY §.§ BAO angular scale measurements The clustering of matter created by BAO provides a “standard scale" of length in cosmology. The length of this standard scale (roughly 150 Mpc at present) can be measured by astronomical surveys looking at the large-scale structure of matter, thus constraining the cosmological parameters (especially the density of baryonic matter), and further understanding the nature of the dark energy that causes the accelerated expansion of the Universe. However, when using BAO for cosmological studies, it is important to know the length of this standard ruler. During the drag epoch, baryons decoupled from photons and “froze in" at a scale equal to the sound horizon at the drag epoch redshift z_d, i.e., r_s≡ r_d(z_d), if r_s is interpreted as the sound horizon at radiation drag, then r_d=∫^∞_z_ddzc_s(z)/H(z), where z_d being the redshift of the drag epoch, c_s(z) is the sound speed, and H(z) is the Hubble parameter. The angular BAO scale θ_BAO can be written as θ_BAO =r_d/(1+z)D^A, in terms of the angular diameter distance D^A. In this work, we consider the 15 transverse BAO angular scale measurements (denoted as 2D-BAO) summarized in Table 1 of <cit.>. These values were obtained using public data releases (DR) of the Sloan Digital Sky Survey (SDSS), namely: DR7, DR10, DR11, DR12, and DR12Q (quasars), without assuming any fiducial cosmological model. It is important to note that because these transverse BAO measurements are performed using cosmology-independent methods, their uncertainties are larger than those obtained using the fiducial cosmological method. For anisotropic BAO (denoted as 3D-BAO), we considered two sources of data, a dataset from DES Y6 and BOSS/eBOSS <cit.>, and other from recent DESI DR1 data <cit.>. 
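To make the role of Eq. (2) concrete before describing the reconstruction, the angular scale subtended by the standard ruler can be evaluated for an assumed background expansion. The short Python sketch below uses a hypothetical fiducial flat ΛCDM model and an assumed value of r_d purely for illustration; no such fiducial choice enters the actual analysis, which is deliberately model-independent.

import numpy as np
from scipy.integrate import quad

# Illustrative placeholder values only (not used anywhere in the analysis).
H0, Om = 67.4, 0.315          # km s^-1 Mpc^-1, matter density parameter
c_kms  = 299792.458           # speed of light in km s^-1
r_d    = 147.0                # assumed sound horizon in Mpc

def E(z):                     # dimensionless expansion rate of flat LambdaCDM
    return np.sqrt(Om * (1 + z)**3 + 1.0 - Om)

def D_A(z):                   # angular diameter distance in Mpc (flat universe)
    chi, _ = quad(lambda zp: c_kms / (H0 * E(zp)), 0.0, z)
    return chi / (1.0 + z)

def theta_bao_deg(z):         # Eq. (2): theta_BAO = r_d / ((1+z) D_A(z))
    return np.degrees(r_d / ((1.0 + z) * D_A(z)))

print([round(theta_bao_deg(z), 2) for z in (0.1, 0.5, 1.0, 2.0)])  # degrees, decreasing with z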
The 3D-BAO angular scale measurements can be found in Table 1 of <cit.>. These BAO angular scale measurements including the 2D-BAO and 3D-BAO are shown in Fig. <ref>. As mentioned above, the joint use of other low redshift observations is necessary if a cosmological model-independent r_d calibration is to be realized. Here we consider observations from the H0LiCOW analysis of six lensed quasars with good lens modeling. The realization of a gravitational lens calibrated sound horizon requires that the BAO be able to provide the corresponding cosmological information both at the redshifts of the sources and lenses. However, both BAO and gravitational lensing data are very sparse. Therefore, we consider in this work a cosmological model-independent data reconstruction method, Gaussian Process Regression (GPR) <cit.>, to reconstruct the angular scale measurements of the observed BAO. We generate samples of reconstructed angular scale BAO measurements from the posteriors of 2D and 3D BAO datasets. The posterior sampling of GPR is realized with the code [https://github.com/dkirkby/gphist.] <cit.>. GPR is a completely data-driven reconstruction method and performs in an infinite dimensional function space without overfitting problem <cit.>. GPR works by generating large samples of functions γ(z) determined by the covariance function. The covariance between these functions can be described by the kernel function. We use here the most general and commonly used squared exponential kernel to parameterize the covariance ⟨γ(z_1)γ(z_2) ⟩ = σ_f^2 exp{-[s(z_1)-s(z_2)]^2/(2ℓ^2)}, where σ_f and ℓ are hyperparameters and are marginalized over. The γ(z) is a random function inferred from the distribution defined by the covariance, and we adopt γ(z) = ln(θ_BAO(z)) to generate more angular scale measurements by using BAO dataset. The 1000 reconstructed curves of angular scale measurements from the BAO dataset are shown in Fig. <ref>. It shows the shape of the angular scale-redshift relation of BAO data very well. It should be noted that the redshift of the BAO dataset well covers the range of H0LiCOW lensing system redshifts, we needn't extrapolate the redshift range of the reconstructed BAO dataset. §.§ Strong gravitational lensing systems from H0LiCOW Let us briefly outline the standard procedure for D_Δ t and D_d measurements used in the H0LiCOW procedure. The distances of a gravitational lens involve only the angular diameter distances. For a given strong lensing system, quasars act as a background source at redshift z_s, which is lensed by foreground elliptical galaxies (at redshift z_d), and multiple bright images of active galactic nuclei (AGN) are formed along with arcs of their host galaxies. The subscripts d and s stand for the lens galaxy and the source, respectively. The lensing time delay between any two images is determined by the geometry of the universe and the gravity field of the lensing galaxy <cit.> Δ t_ AB = D_Δ t[ϕ(θ_ A,β)-ϕ(θ_ B,β)]=D_Δ tΔϕ_ AB(ξ_ lens), where ϕ(θ,β)=[(θ-β)^2/2-ψ(θ)] is the Fermat potential at images, β is the source position, ψ is lensing potential obeying the Poisson equation ∇^2ψ=2κ, where κ is the surface mass density of the lens in units of critical density Σ_ crit=D_ s/(4π D_dD_ds), and ξ_lens denotes the lens model parameters. The cosmological background is reflected in the “time delay distance" D_Δ t=(1+z_ d)D_dD_ s/D_ ds=Δ t_ AB/Δϕ_ AB(ξ_ lens). The variability of the AGN light curve can be monitored to measure the time delay between multiple images. 
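Returning briefly to the BAO side, the Gaussian-process regression of γ(z)=ln θ_BAO(z) with the squared-exponential kernel of Eq. (3) can be prototyped in a few lines. The sketch below is a simplified stand-in for the gphist posterior sampling actually used in this work: it relies on scikit-learn, the marginalization over the hyperparameters is replaced by a point fit, and the data points shown are illustrative placeholders rather than the compilation of Table 1.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Placeholder transverse-BAO points: redshift, theta_BAO [deg], 1-sigma error [deg].
z_obs     = np.array([0.11, 0.32, 0.57, 0.72, 1.52, 2.33])
theta_obs = np.array([19.8, 8.1, 4.6, 3.9, 1.8, 1.4])
sig_theta = np.array([3.3, 1.6, 0.7, 0.7, 0.3, 0.2])

y      = np.log(theta_obs)                        # gamma(z) = ln(theta_BAO(z))
alpha  = (sig_theta / theta_obs) ** 2             # propagated variance of ln(theta)
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)   # sigma_f^2 exp(-(z1-z2)^2 / (2 l^2))
gp = GaussianProcessRegressor(kernel=kernel, alpha=alpha, normalize_y=True)
gp.fit(z_obs[:, None], y)

# Draw reconstructed curves, mirroring the 1000 posterior curves used in the text.
z_grid = np.linspace(0.05, 2.5, 200)
curves = np.exp(gp.sample_y(z_grid[:, None], n_samples=1000, random_state=0))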
The key point here is that the Fermat potential difference Δϕ_AB(ξ_lens) can be reconstructed from high-resolution lensing imaging taken with space telescopes. On the other hand, assuming an explicit model for the lens, such as the simplest singular isothermal sphere (SIS) model (the method is not limited to the SIS model; the lens model parameters are denoted ξ_lens), together with observations of the stellar kinematics of the lens galaxy, namely its light profile ξ_light, the line-of-sight (LOS) projected stellar velocity dispersion σ_v, and the anisotropy distribution of the stellar orbits β_ani, one can obtain an absolute distance measure of D_d at the lens <cit.> D_d = 1/(1+z_d) · cΔt_AB/Δϕ_AB(ξ_lens) · c^2 J(ξ_lens,ξ_light,β_ani)/σ_v^2 , where the function J captures all model components calculated from the lensing image and the photometrically weighted projected velocity dispersion (from spectroscopy). For more details on the modelling of the function J, see Section 4.6 of <cit.>. The posterior distributions for these lenses (namely RXJ1131-1231 <cit.>, PG1115+080 <cit.>, B1608+656[Except for this lens, the other lenses were analyzed blindly with respect to the cosmological parameters.] <cit.>, J1206+4332 <cit.>, WFI2033-4723 <cit.>, HE0435-1223 <cit.>), including the time-delay distances and the angular diameter distances to the lenses, can be found on the H0LiCOW website[http://www.h0licow.org.]. The redshifts of the lenses and sources, the time-delay distances, and the angular diameter distances to the lenses are summarized in Table 2 of <cit.>. Although TDCOSMO recently considered different strategies for analyzing these six lenses, the latest H_0 results released by TDCOSMO show a large reduction in precision compared to the H0LiCOW constraints on the Hubble constant. This is mainly because TDCOSMO relaxes the choice of parameterization of the lens mass profile made in the H0LiCOW lens modeling, thereby accounting for the so-called mass-sheet transformation (MST) <cit.>. To counter this increase in uncertainty, the TDCOSMO team obtained stellar kinematics from the Sloan Lens ACS (SLACS) catalogue to constrain the MST. The TDCOSMO IV likelihood products are all publicly available in this repository[https://github.com/TDCOSMO.] <cit.>. However, it is necessary to emphasize that TDCOSMO uses model assumptions and cosmological priors from SNe Ia in its lensing modeling. Until TDCOSMO achieves a fully cosmological-model-independent analysis, we still use the posterior distributions of distances publicly released by H0LiCOW. Although the lensing data used here are not the latest, the BAO data are, and this is the first work to use gravitational lensing to calibrate the BAO sound horizon in a cosmological model-independent way. § RESULTS AND DISCUSSION Let us emphasize that the calibration of the BAO sound horizon r_d using observations of strong lensing systems is straightforward: we make no assumptions about cosmological models or other cosmological parameters. Combining Eqs. (2) and (5), the BAO standard ruler r_d can be rewritten as r_d=D_Δ t(θ_BAO(z_d)-θ_BAO(z_s)), where θ_BAO(z_d) and θ_BAO(z_s) are the angular BAO scales at the redshifts of the lens and the source, respectively; here we assume a spatially flat universe and use the standard distance relation D_ds=D_s-[(1+z_d)/(1+z_s)]D_d to obtain the angular diameter distance between the lens and the source <cit.>. Combining the angular diameter distances to the lenses with Eq. (2) instead, one obtains r_d =(1+z_d)θ_BAO(z_d)D_d.
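In practice, Eqs. (7) and (8) are applied draw by draw: each posterior sample of a lens distance is paired with a reconstructed θ_BAO(z) curve evaluated at the relevant redshifts. A minimal sketch of this propagation is given below; the function and array names are placeholders, and the actual analysis uses the posterior distance samples released by H0LiCOW together with the GPR curves described above.

import numpy as np

def rd_from_time_delay(z_grid, theta_curves, z_d, z_s, D_dt_samples, rng):
    # Eq. (7): r_d = D_dt * (theta_BAO(z_d) - theta_BAO(z_s)); angles in radians, distances in Mpc.
    j = rng.integers(0, theta_curves.shape[1], size=len(D_dt_samples))
    th_d = np.array([np.interp(z_d, z_grid, theta_curves[:, k]) for k in j])
    th_s = np.array([np.interp(z_s, z_grid, theta_curves[:, k]) for k in j])
    return D_dt_samples * (th_d - th_s)

def rd_from_lens_distance(z_grid, theta_curves, z_d, D_d_samples, rng):
    # Eq. (8): r_d = (1 + z_d) * theta_BAO(z_d) * D_d.
    j = rng.integers(0, theta_curves.shape[1], size=len(D_d_samples))
    th_d = np.array([np.interp(z_d, z_grid, theta_curves[:, k]) for k in j])
    return (1.0 + z_d) * th_d * D_d_samples

# theta_curves has shape (len(z_grid), n_curves); the resulting r_d samples per lens can be
# histogrammed and summarized by the median and the 16th/84th percentiles quoted in Sec. 3.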
The 1000 curves of θ_BAO reconstructed by GPR carry the information about its uncertainty: for a given redshift point, we have a distribution of 1000 values of the BAO angular scale. Based on these distributions, and combining them with Eqs. (7) and (8), respectively, together with the skewed log-normal distributions of the time-delay and angular diameter distances provided by gravitational lensing (as published by H0LiCOW), we obtain the probability distribution functions (PDFs) of r_d for the combination of lenses and reconstructed BAO data. Strong lensing systems are in principle uncorrelated, so we obtain an individual distribution of r_d for each lens. Using the time-delay distances from H0LiCOW, the final result obtained by combining all lenses is r_d=139.5^+5.2_-4.4 Mpc (median value with the 16^th and 84^th percentiles around it), displayed in Fig. <ref>. This result is incompatible with the result of Planck 2018 within the 1σ uncertainty, but consistent within ∼1.7σ. This is quite reasonable. First of all, based on these six time-delay gravitational lenses, H0LiCOW reports H_0=73.3^+1.7_-1.8 km s^-1 Mpc^-1 in the framework of the flat ΛCDM model, whereas the Planck CMB observations give H_0=67.4±0.5 km s^-1 Mpc^-1, in 3.1σ tension with the H0LiCOW result. Moreover, the Hubble constant and the sound horizon are directly related, with a strong and negative degeneracy between them. Our results therefore also highlight the Hubble tension problem to some extent. To understand the relative contributions, we also constrain r_d with each lens individually in a model-independent manner and show the results in Fig. <ref>. Owing to their large uncertainties, the individual-lens results are all compatible with Planck's results within 1σ, except for the lens RXJ1131, whose result, r_d=131.5^+8.1_-6.4 Mpc, is in agreement within the ∼1.7σ observational uncertainty. The uncertainty of the GPR-reconstructed BAO data is almost equal to that of the observed data, and our error budget for the measurement of r_d is clearly dominated by the uncertainty of the strong-lens observations; accordingly, we do not expect any improvement in the precision, as our results show. However, such a combination already constrains the low-redshift standard ruler scale r_d at the ∼3.7% level. Combining the posterior distributions of the angular diameter distances of the four H0LiCOW lenses with the 1000 reconstructed curves from the BAO dataset, the final PDFs for r_d are reported in Table <ref> and displayed in Fig. <ref>. Compared with the time-delay-distance results, the mean values of r_d for the four individual lenses change considerably and have large uncertainties. For the combination of the four lenses with D_d measurements, the corresponding constraint is r_d=126.3^+13.3_-10.5 Mpc; the precision on r_d is only at the ∼10.5% level, but it is also in agreement with the results of Planck 2018 within ∼1.5σ. However, it is worth pointing out that, despite the poor precision of the constraints on r_d, the method using the D_d provided by gravitational lensing has two great benefits. First, it does not require assumptions about the curvature of the universe, whereas using time-delay distances does. As suggested in the work <cit.>, taking different values for the curvature of the universe might have some impact on the significance of the results for measuring r_d.
In addition, the method circumvents the mass-sheet degeneracy obstacle in gravitational lensing. For the full data set consisting of 6 lenses, 4 of which have both D_Δ t and D_d measurements, our model-independent constrained result is r_d=139.7^+5.2_-4.5 Mpc and shown in Fig. <ref>. Compared with results obtained by using 6D_Δ t alone, we see that the mean values of r_d for the full dataset have a few changes (though not significant). As pointed out in the work of <cit.>, the addition of D_d (or dispersion measurements) does not play a significant role in the H_0 estimation of the H0LiCOW analysis. However, as we mentioned above, considering different types of data for gravitational lensing has its own advantages. In order to highlight the potential of our method, it is necessary to compare our results with previous works. Compared our results with previous works, <cit.> used the D_d of three lenses from the H0LiCOW collaboration, combined relative distances from SNe Ia and BAOs with a prior of H_0 from H0LiCOW, their reported r_d=137±4.5 Mpc by using cosmography and global fitting method. The work <cit.> used the measurements of BAO, observational H(z) data, GW170817, and SNe Ia to constrain the H_0 and r_d, and obtained the results of H_0=68.58±1.7 and r_d=148.0±3.6 Mpc. These works are either cosmological model-dependent or involve more than two or more types of data, which prevents us from clearly understanding the contribution of each type of data. However, it should be stressed that, compared with assuming a specific model, a combination of strong lensing and current astronomical probes such as BAO can reduce possible bias in our work. More importantly, there is no significant increase in uncertainty. As we mentioned earlier (see Introduction), our approach effectively avoids these problems, which suggests that our approach has great potential to provide more precise and accurate measurements of r_d in the future, further precision cosmology research. § CONCLUSION In this work, we propose an improved model-independent method to calibrate sound horizon r_d by using the latest observations of BAO measurement from DES, BOSS/eBOSS, and DESI surveys and gravitationally time-delay lensed quasars from H0LiCOW collaboration. We adopt the non-parameterized method of GPR to reconstruct observed BAO angular scale measurements. Our approach has several significant advantages. First, it is independent of the model of the universe and early universe; second, it is independent of cosmological parameters such as the Hubble constant, dark energy, (and, more importantly, does not involve the curvature of the universe when using the D_d of the lenses, and also avoids the obstacle of mass-sheet degeneracy in gravitational lensing); third, it avoids the transformation of distances by the Eddington relation. Fourth, it involves only two types of data, allowing a clear understanding of the contribution of data. Such combinations of time-delay strong lensing systems and current astronomical probes such as BAO can reduce possible bias in the r_d estimation of our analysis. Combining the 6D_Δ t measurements and reconstructed BAO dataset, our model-independent result is r_d=139.5^+5.2_-4.4 Mpc with the precision at ∼3.7% level. This result changes to r_d=126.3^+13.3_-10.5 Mpc when combining 4D_d datasets with the reconstructed BAO dataset. 
Despite the poor precision of the constraints on r_d in this case, the method using D_d provided by gravitational lensing does not require assumptions about the curvature of the universe and avoids the obstacle of mass-sheet degeneracy, whereas using time-delayed distances requires assumptions to be taken about the curvature. For the full dataset of 6D_Δ t and 4D_d measurements, we obtain the result r_d=139.7^+5.2_-4.5 Mpc, which is in agreement with the results of Planck 2018 within ∼1.7σ uncertainty, the addition of D_d (or dispersion measurements) does not play a significant role in our analysis. Considering the correlation between r_d and H_0, our results also highlight the Hubble tension and may give us a better understanding of the discordance between the datasets or reveal new physics beyond the standard model. As a final remark, we are also looking forward to other new cosmological probes, such as gravitational waves as standard sirens. Recently, the work <cit.> forecast a relative precision of σ_r_d/r_d∼1.5% from forthcoming surveys such as LISA gravitational wave (GW) standard sirens and DESI or Euclid angular BAO measurements. Focusing on the available data before the true gravitational wave era, it has been possible to constrain r_d to up to 3.7% precision using our method. The methodology of this paper also applies to the r_d estimation of joint GW and BAO measurements. This shows that our method has great potential to provide more precise and accurate measurements of r_d in the future, further precision cosmology research. § ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China under Grant No. 12203009, 12122504 and 12035005; the Chutian Scholars Program in Hubei Province (X2023007); and the Hubei Province Foreign Expert Project (2023DJC040). [Wald(1984)]1984ucp..book.....W Wald R. 1984, General Relativity, (University of Chicago Press, Chicago, 1984) [Weinberg(1972)]1972gcpa.book.....W Weinberg, S. 1972, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, by Steven Weinberg, pp. 688. ISBN 0-471-92567-5. Wiley-VCH , July 1972., 688 [Eisenstein et al.(2005)]2005ApJ...633..560E Eisenstein, D. J., Zehavi, I., Hogg, D. W., et al. 2005, , 633, 560. doi:10.1086/466512 [Cole et al.(2005)]2005MNRAS.362..505C Cole, S., Percival, W. J., Peacock, J. A., et al. 2005, , 362, 505. doi:10.1111/j.1365-2966.2005.09318.x [Beutler et al.(2011)]2011MNRAS.416.3017B Beutler, F., Blake, C., Colless, M., et al. 2011, , 416, 3017. doi:10.1111/j.1365-2966.2011.19250.x [Alam et al.(2017)]2017MNRAS.470.2617A Alam, S., Ata, M., Bailey, S., et al. 2017, , 470, 2617. doi:10.1093/mnras/stx721 [Alam et al.(2021)]2021PhRvD.103h3533A Alam, S., Aubert, M., Avila, S., et al. 2021, , 103, 083533. doi:10.1103/PhysRevD.103.083533 [Blake et al.(2012)]2012MNRAS.425..405B Blake, C., Brough, S., Colless, M., et al. 2012, , 425, 405. doi:10.1111/j.1365-2966.2012.21473.x [Di Valentino et al.(2021)]2021CQGra..38o3001D Di Valentino, E., Mena, O., Pan, S., et al. 2021, Classical and Quantum Gravity, 38, 153001. doi:10.1088/1361-6382/ac086d [Abdalla et al.(2022)]2022JHEAp..34...49A Abdalla, E., Abellán, G. F., Aboubrahim, A., et al. 2022, Journal of High Energy Astrophysics, 34, 49. doi:10.1016/j.jheap.2022.04.002 [Perivolaropoulos & Skara(2022)]2022NewAR..9501659P Perivolaropoulos, L. & Skara, F. 2022, , 95, 101659. doi:10.1016/j.newar.2022.101659 [DESI Collaboration et al.(2024a)]2024arXiv240403000D DESI Collaboration, Adame, A. 
G., Aguilar, J., et al. 2024a, arXiv:2404.03000. doi:10.48550/arXiv.2404.03000 [DESI Collaboration et al.(2024b)]2024arXiv240403001D DESI Collaboration, Adame, A. G., Aguilar, J., et al. 2024b, arXiv:2404.03001. doi:10.48550/arXiv.2404.03001 [DESI Collaboration et al.(2024c)]2024arXiv240403002D DESI Collaboration, Adame, A. G., Aguilar, J., et al. 2024c, arXiv:2404.03002. doi:10.48550/arXiv.2404.03002 [Planck Collaboration et al.(2020)]2020AA...641A...6P Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, , 641, A6. doi:10.1051/0004-6361/201833910 [Heavens et al.(2014)]2014PhRvL.113x1302H Heavens, A., Jimenez, R., & Verde, L. 2014, , 113, 241302. doi:10.1103/PhysRevLett.113.241302 [Bernal et al.(2016)]2016JCAP...10..019B Bernal, J. L., Verde, L., & Riess, A. G. 2016, , 2016, 019. doi:10.1088/1475-7516/2016/10/019 [Verde et al.(2017)]2017MNRAS.467..731V Verde, L., Bernal, J. L., Heavens, A. F., et al. 2017, , 467, 731. doi:10.1093/mnras/stx116 [Macaulay et al.(2019)]2019MNRAS.486.2184M Macaulay, E., Nichol, R. C., Bacon, D., et al. 2019, , 486, 2184. doi:10.1093/mnras/stz978 [L'Huillier & Shafieloo(2017)]2017JCAP...01..015L L'Huillier, B. & Shafieloo, A. 2017, , 2017, 015. doi:10.1088/1475-7516/2017/01/015 [Shafieloo et al.(2018)]2018PhRvD..98h3526S Shafieloo, A., L'Huillier, B., & Starobinsky, A. A. 2018, , 98, 083526. doi:10.1103/PhysRevD.98.083526 [Camarena & Marra(2020)]2020MNRAS.495.2630C Camarena, D. & Marra, V. 2020, , 495, 2630. doi:10.1093/mnras/staa770 [Zhang & Huang(2021)]2021PhRvD.103d3513Z Zhang, X. & Huang, Q.-G. 2021, , 103, 043513. doi:10.1103/PhysRevD.103.043513 [Giarè et al.(2024)]2024arXiv240607493G Giarè, W., Betts, J., van de Bruck, C., et al. 2024, arXiv:2406.07493. doi:10.48550/arXiv.2406.07493 [Matsumoto & Futamase(2008)]2008MNRAS.384..843M Matsumoto, A. & Futamase, T. 2008, , 384, 843. doi:10.1111/j.1365-2966.2007.12769.x [Geng et al.(2021)]2021MNRAS.503.1319G Geng, S., Cao, S., Liu, Y., et al. 2021, , 503, 1319. doi:10.1093/mnras/stab519 [Chae(2007)]2007ApJ...658L..71C Chae, K.-H. 2007, , 658, L71. doi:10.1086/516569 [Cao et al.(2022)]2022A A...659L...5C Cao, S., Qi, J., Cao, Z., et al. 2022, , 659, L5. doi:10.1051/0004-6361/202142694 [Mellier et al.(1993)]1993ApJ...407...33M Mellier, Y., Fort, B., & Kneib, J.-P. 1993, , 407, 33. doi:10.1086/172490 [Newman et al.(2009)]2009ApJ...706.1078N Newman, A. B., Treu, T., Ellis, R. S., et al. 2009, , 706, 1078. doi:10.1088/0004-637X/706/2/1078 [Suyu et al.(2014)]2014ApJ...788L..35S Suyu, S. H., Treu, T., Hilbert, S., et al. 2014, , 788, L35. doi:10.1088/2041-8205/788/2/L35 [Bonvin et al.(2017)]2017MNRAS.465.4914B Bonvin, V., Courbin, F., Suyu, S. H., et al. 2017, , 465, 4914. doi:10.1093/mnras/stw3006 [Chen et al.(2019)]2019MNRAS.488.3745C Chen, Y., Li, R., Shu, Y., et al. 2019, , 488, 3745. doi:10.1093/mnras/stz1902 [Wong et al.(2020)]2020MNRAS.498.1420W Wong, K. C., Suyu, S. H., Chen, G. C.-F., et al. 2020, , 498, 1420. doi:10.1093/mnras/stz3094 [Millon et al.(2020)]2020A A...639A.101M Millon, M., Galan, A., Courbin, F., et al. 2020, , 639, A101. doi:10.1051/0004-6361/201937351 [Millon et al.(2020)]2020A A...642A.193M Millon, M., Courbin, F., Bonvin, V., et al. 2020, , 642, A193. doi:10.1051/0004-6361/202038698 [Eigenbrod et al.(2005)]2005A A...436...25E Eigenbrod, A., Courbin, F., Vuissoz, C., et al. 2005, , 436, 25. doi:10.1051/0004-6361:20042422 [Treu et al.(2018)]2018MNRAS.481.1041T Treu, T., Agnello, A., Baumer, M. A., et al. 2018, , 481, 1041. 
doi:10.1093/mnras/sty2329 [Birrer & Treu(2021)]2021A A...649A..61B Birrer, S. & Treu, T. 2021, , 649, A61. doi:10.1051/0004-6361/202039179 [Birrer et al.(2020)]2020A A...643A.165B Birrer, S., Shajib, A. J., Galan, A., et al. 2020, , 643, A165. doi:10.1051/0004-6361/202038861 [Aylor et al.(2019)]2019ApJ...874....4A Aylor, K., Joy, M., Knox, L., et al. 2019, , 874, 4. doi:10.3847/1538-4357/ab0898 [Wojtak & Agnello(2019)]2019MNRAS.486.5046W Wojtak, R. & Agnello, A. 2019, , 486, 5046. doi:10.1093/mnras/stz1163 [Arendse et al.(2019)]2019A A...632A..91A Arendse, N., Agnello, A., & Wojtak, R. J. 2019, , 632, A91. doi:10.1051/0004-6361/201935972 [Nunes et al.(2020)]2020MNRAS.497.2133N Nunes, R. C., Yadav, S. K., Jesus, J. F., et al. 2020, , 497, 2133. doi:10.1093/mnras/staa2036 [Favale et al.(2024)]2024arXiv240512142F Favale, A., Gómez-Valent, A., & Migliaccio, M. 2024, arXiv:2405.12142. doi:10.48550/arXiv.2405.12142 [Holsclaw et al.(2010b)]Holsclaw1 Holsclaw, T., Alam, U., Sansó, B., et al. 2010b, , 105, 241302. doi:10.1103/PhysRevLett.105.241302 [Shafieloo et al.(2012)]ShafKimLind Shafieloo, A., Kim, A. G., & Linder, E. V. 2012, , 85, 123530. doi:10.1103/PhysRevD.85.123530 [Joudaki et al.(2018)]Keeley0 Joudaki, S., Kaplinghat, M., Keeley, R., et al. 2018, , 97, 123501. doi:10.1103/PhysRevD.97.123501 [Kirkby & Keeley(2017)]GPHist Kirkby, D., Keeley, R. 2017, doi:10.5281/zenodo.999564 [Falco et al.(1985)]1985ApJ...289L...1F Falco, E. E., Gorenstein, M. V., & Shapiro, I. I. 1985, , 289, L1. doi:10.1086/184422 [Refsdal(1964)]1964MNRAS.128..307R Refsdal, S. 1964, , 128, 307. doi:10.1093/mnras/128.4.307 [Shapiro(1964)]1964PhRvL..13..789S Shapiro, I. I. 1964, , 13, 789. doi:10.1103/PhysRevLett.13.789 [Birrer et al.(2019)]2019MNRAS.484.4726B Birrer, S., Treu, T., Rusu, C. E., et al. 2019, , 484, 4726. doi:10.1093/mnras/stz200 [Suyu et al.(2013)]Suyu13 Suyu, S. H., Auger, M. W., Hilbert, S., et al. 2013, , 766, 70. doi:10.1088/0004-637X/766/2/70 [Chen et al.(2019a)]Chen19 Chen, G. C.-F., Fassnacht, C. D., Suyu, S. H., et al. 2019a, , 490, 1743. doi:10.1093/mnras/stz2547 [Suyu et al.(2010)]2010ApJ...711..201S Suyu, S. H., Marshall, P. J., Auger, M. W., et al. 2010, , 711, 201. doi:10.1088/0004-637X/711/1/201 [Jee et al.(2019)]Jee19 Jee, I., Suyu, S. H., Komatsu, E., et al. 2019, Science, 365, 1134. doi:10.1126/science.aat7371 [Wong et al.(2017)]Wong17 Wong, K. C., Suyu, S. H., Auger, M. W., et al. 2017, , 465, 4895. doi:10.1093/mnras/stw3077 [Rusu et al.(2020)]Rusu20 Rusu, C. E., Wong, K. C., Bonvin, V., et al. 2020, , 498, 1440. doi:10.1093/mnras/stz3451 [Weinberg(1972)]Weinberg1972 Weinberg, S. 1972, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity
http://arxiv.org/abs/2406.18727v1
20240626195309
Demonic variance and a non-determinism score for Markov decision processes
[ "Jakob Piribauer" ]
cs.LO
[ "cs.LO" ]
Demonic variance and a non-determinism score for Markov decision processes Jakob Piribauer July 1, 2024 ============================================================================================================== § ABSTRACT This paper studies the influence of probabilism and non-determinism on some quantitative aspect X of the execution of a system modeled as a Markov decision process (MDP). To this end, the novel notion of demonic variance is introduced: For a random variable X in an MDP ℳ, it is defined as 1/2 times the maximal expected squared distance of the values of X in two independent executions of ℳ in which also the non-deterministic choices are resolved independently by two distinct schedulers. It is shown that the demonic variance is between 1 and 2 times as large as the maximal variance of X in ℳ that can be achieved by a single scheduler. This allows defining a non-determinism score for ℳ and X measuring how strongly the difference of X in two executions of ℳ can be influenced by the non-deterministic choices. Properties of MDPs with extremal values of the non-determinism score are established. Further, the algorithmic problems of computing the maximal variance and the demonic variance are investigated for two random variables, namely weighted reachability and accumulated rewards. In the process, also the structure of schedulers maximizing the variance and of scheduler pairs realizing the demonic variance is analyzed. § INTRODUCTION In software and hardware systems, uncertainty manifests in two distinct forms: non-determinism and probabilism. Non-determinism emerges from, e.g., unknown operating environments, user interactions, or concurrent processes. Probabilistic behavior arises through deliberate randomization in algorithms or can be inferred, e.g., from probabilities of component failures. In this paper, we investigate the uncertainty in the value X of some quantitative aspect of a system whose behavior is subject to non-determinism and probabilism. On the one hand, we aim to quantify this uncertainty. In the spirit of the variance that quantifies uncertainty in purely probabilistic settings, we introduce the notion of demonic variance that generalizes the variance in the presence of non-determinism. On the other hand, we provide a non-determinism score (NDS) based on this demonic variance that measures the extent to which the uncertainty of X can be ascribed to the non-determinism. As formal models, we use Markov decision processes (MDPs, see, e.g., <cit.>), one of the most prominent models combining non-determinism and probabilism, heavily used in verification, operations research, and artificial intelligence. The non-deterministic choices in an MDP are resolved by a scheduler. Once a scheduler is fixed, the system behaves purely probabilistically. Demonic variance. For a random variable Y, the variance equals half the expected squared deviation of two independent copies Y_1 and Y_2 of Y: 𝕍(Y) ≔ 𝔼((Y-𝔼(Y))^2) = 𝔼(Y^2)-𝔼(Y)^2 = 1/2 𝔼(Y_1^2 - 2 Y_1Y_2 + Y_2^2) = 1/2 𝔼((Y_1-Y_2)^2). For a quantity X in an MDP ℳ, we obtain a random variable X^σ_ℳ for each scheduler σ.[Note that the notation X^σ_ℳ here differs from the notation used in the technical part of the paper.] The maximal variance 𝕍^max_ℳ(X) ≔ sup_σ 𝕍(X^σ_ℳ) can serve as a measure for the “amount of probabilistic uncertainty” regarding X present in the MDP.
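The identity 𝕍(Y) = 1/2 𝔼((Y_1-Y_2)^2) underlying this definition is easy to confirm numerically; a few lines of Python with an arbitrary illustrative distribution read:

import numpy as np
rng = np.random.default_rng(0)
y1 = rng.exponential(scale=2.0, size=1_000_000)    # two independent copies of the same Y
y2 = rng.exponential(scale=2.0, size=1_000_000)
print(y1.var(), 0.5 * np.mean((y1 - y2) ** 2))     # both approximate Var(Y) = 4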
However, in the presence of non-determinism, quantifying the spread of outcomes in terms of the squared deviation of two independent executions of a system gives rise to a whole new meaning: We can allow the non-determinism to be resolved independently as well. To this end, we consider two different scheduler _1 and _2 in two independent copies _1 and _2 of and define ^_1,_2_(X) 1/2𝔼((X^_1__1-X^_2__2)^2). If we now allow for a demonic choice of the two schedulers making this uncertainty as large as possible, we arrive at the demonic variance ^_(X) sup__1,_2^_1,_2_(X) of X in . To illustrate a potential use case, consider a communication network in which messages are processed according to a randomized protocol employed on different hardware at the different nodes of the network. A low worst-case expected processing time of the protocol is clearly desirable. In addition, however, large differences in the processing time at different nodes make buffering necessary and increase the risk of package losses. Consider the MDPs and in Fig. <ref> modeling such a communication protocol. Initially, a non-deterministic choice between α, β, and γ is made. Then, a final node containing the processing time X is reached according to the depicted distributions. In both MDPs, the expected value of X lies between 1 and 3 for all schedulers – with the values 1 and 3 being realized by α and γ. Furthermore, as the outcomes lie between 0 and 4, the distribution over outcomes leading to the highest possible variance of 4 is the one that takes value 0 and 4 with probability 1/2 each, which is realized by a scheduler choosing β. So, ^max_(X)=^max_(X)=4. However, the demonic variances are different: Our results will show that the demonic variance is obtained by a pair of deterministic schedulers that do not randomize over the non-deterministic choices. In , we can easily check that no combination of such schedulers and leads to a value ^,_(X) of more than 4=^β,β_(X) where β denotes the scheduler that chooses β with probability 1. In , on the other hand, the demonic variance is ^_(X)=^α,γ_(X) = 1/2𝔼((X__1^α - X__2^γ)^2) =1/2(10/16· 16) = 5. So, despite the same maximal variance and range of expected values, the worst-case expected squared deviation between two values of X in independent executions is worse in than in . Hence, we argue that the protocol modeled by should be preferred. Non-determinism score (NDS). By the definition of the demonic variance, it is clear that ^_(X) ≥^max_(X). Under mild assumptions ensuring the well-definedness, we will prove that ^_(X) ≤ 2 ^max_(X), too. So, the demonic variance is between 1 and 2 times as large as the maximal variance. We use this to define the non-determinism score (NDS) (,X) ^_(X) - ^max_(X)/^max_(X)∈ [0,1]. The NDS captures how much larger the expected squared deviation of two outcomes can be made by resolving the non-determinism in two executions independently compared to how large it can be solely due to the probabilism under a single resolution of the non-determinism. For an illustration of the NDS, four simple MDPs and their NDSs are depicted in Figure <ref>. In all of the MDPs except for the first one, a scheduler has to make a (randomized) choice over actions α and β in the initial state . Afterwards one of the terminal states is reached according to the specified probabilities. The terminal states are equipped with a weight that specifies the value of X at the end of the execution. 
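As an aside, the numbers quoted above for the MDP N of Fig. <ref> can be reproduced with a short script, and the same enumeration applies to the single-decision MDPs of Fig. <ref> once their distributions are filled in. The depicted distributions are not reproduced in this text, so the sketch below assumes outcome distributions that are merely consistent with the quoted values (α: 0 with probability 3/4 and 4 with probability 1/4; β: 0 and 4 with probability 1/2 each; γ: 0 with probability 1/4 and 4 with probability 3/4); these assumptions are hypothetical.

import itertools
import numpy as np

# Assumed terminal distributions for N (outcome -> probability).
dist = {
    "alpha": {0: 0.75, 4: 0.25},   # expectation 1
    "beta":  {0: 0.50, 4: 0.50},   # expectation 2, variance 4
    "gamma": {0: 0.25, 4: 0.75},   # expectation 3
}

def variance(d):
    e  = sum(x * p for x, p in d.items())
    e2 = sum(x * x * p for x, p in d.items())
    return e2 - e * e

def demonic_pair(d1, d2):
    # (1/2) E[(X_1 - X_2)^2] for independent executions under two deterministic choices
    return 0.5 * sum(p1 * p2 * (x1 - x2) ** 2
                     for x1, p1 in d1.items() for x2, p2 in d2.items())

# Maximal variance over all schedulers = maximum over mixtures of the three actions.
v_max = 0.0
for p_a in np.linspace(0, 1, 201):
    for p_b in np.linspace(0, 1 - p_a, 201):
        p_g = 1 - p_a - p_b
        mix = {x: p_a * dist["alpha"][x] + p_b * dist["beta"][x] + p_g * dist["gamma"][x]
               for x in (0, 4)}
        v_max = max(v_max, variance(mix))

v_dem = max(demonic_pair(dist[a], dist[b]) for a, b in itertools.product(dist, repeat=2))
print(v_max, v_dem, (v_dem - v_max) / v_max)   # 4.0, 5.0 and the resulting score 0.25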
For all of these MDPs, the maximal variance can be computed by expressing the variance in terms of the probability p that α is chosen and maximizing the resulting quadratic function. In the interest of brevity, we do not present these computations. The pair of (deterministic) schedulers realizing the demonic variance always consists of the scheduler choosing α and the scheduler choosing β making it easy to compute the demonic variance in these examples. Potential applications. First of all, the demonic variance might serve as the basis for refined guarantees on the behavior of systems, in particular, when employed in different environments. As a first result in this direction, we will prove an analogue to Chebyshev's Inequality using the demonic variance. Further, as illustrated in Example <ref>, achieving a low demonic variance or NDS can be desirable when designing systems. Hence, a reasonable synthesis task could be to design a system ensuring a high expected value of a quantity X while keeping the demonic variance of X below a threshold. Secondly, the demonic variance and the NDS can serve to enhance the explainability of a system's behavior, a topic of growing importance in the area of formal verification (see, e.g., <cit.> for an overview). More concretely, the NDS can be understood as a measure assigning responsibility for the scattering of a quantity X in different executions to the non-determinism and the probabilism present in the system, respectively. Further, considering the NDS for different starting states makes it possible to pinpoint regions of the state space in which the non-determinism has a particularly high influence. Notions of responsibility that quantify to which extent certain facets of the behavior of a system can be ascribed to certain components, states, or events have been studied in various settings <cit.>. Finally, the NDS can also be understood as a measure for the power of control when non-determinism models controllable aspects of a system. This interpretation could be useful, e.g., when designing exploration strategies in reinforcement learning. Here, the task is to learn good strategies as fast as possible by interacting with a system. One of the main challenges is to decide which regions of the state space to explore (see <cit.> for a recent survey). Estimations for the NDS starting from different states could be useful here: States from which the NDS is high might be more promising to explore than states from which the NDS is low as the difference in received rewards from such a state is largely subject to randomness. Contributions. Besides establishing general results for the demonic variance and the NDS, we investigate the two notions for weighted reachability and accumulated rewards. For weighted reachability, terminal states of an MDP are equipped with a weight that is received if an execution ends in this state. For accumulated rewards, all states are assigned rewards that are summed up along an execution. The main contributions of this paper are as follows. * We introduce the novel notions of demonic variance and non-determinism score. For general random variables X, we prove that the demonic variance is at most twice as large as the maximal variance. Furthermore, we prove an analogue of Chebyshev's inequality. For the non-determinism score, we establish consequences of a score of 0 or 1. 
* In the process, we prove a result of independent interest using a topology on the space of schedulers that states that convergence with respect to this topology implies convergence of the corresponding probability measures. * For weighted reachability, we show that the maximal and the demonic variance can be computed via quadratic programs. For the maximal variance, this results in a polynomial-time algorithm; for the demonic variance, in a separable bilinear program of polynomial size yielding an exponential time upper bound. Further, we establish that there is a memoryless scheduler maximizing the variance and a pair of memoryless deterministic schedulers realizing the demonic variance. * For accumulated rewards, we prove that the maximal variance and an optimal finite-memory scheduler can be computed in exponential time. Further, we prove that the demonic variance is realized by a pair of deterministic finite-memory schedulers which can be computed via a bilinear program of exponential size. Related work. We are not aware of investigations of notions similar to the demonic variance for MDPs. Previous work on the variance in MDPs usually focused on the minimization of the variance. In <cit.>, the problem to find schedulers that ensure a certain expected value while keeping the variance below a threshold is investigated for accumulated rewards in the finite horizon setting. It is shown that deciding whether there is a scheduler ensuring variance 0 is NP-hard. In <cit.>, the minimization of the variance of accumulated rewards and of the mean payoff is addressed with a focus on optimality equations and no algorithmic results. The variance of accumulated weights in Markov chains is shown to be computable in polynomial time in <cit.>. For the mean payoff, algorithms were given to compute schedulers that achieve given bounds on the expectation and notions of variance and variability in <cit.>. One objective incorporating the variance that has been studied on MDPs is the variance-penalized expectation (VPE) <cit.>. Here, the goal is to find a scheduler that maximizes the expected reward minus a penalty factor times the variance. In <cit.>, the objective is studied for accumulated rewards. Methodically, our results for the maximal and demonic variance of accumulated rewards share similarities with the techniques of <cit.> and we make use of some results proved there, such as the result that among expectation-optimal schedulers a variance-optimal memoryless deterministic scheduler can be computed in polynomial time. Nevertheless, the optimization of the VPE inherently requires the minimization of the variance. In particular, it is shown in <cit.> that deterministic schedulers are optimal for the VPE, while randomization is necessary for the maximization of the variance. Besides the variance, several other notions that aim to bound the uncertainty of the outcome of some quantitative aspect in MDPs have been studied – in particular, in the context of risk-averse optimization: Given a probability p, quantiles for a quantity X are the best bound B such that X exceeds B with probability at most p in the worst or best case. For accumulated rewards in MDPs, quantiles have been studied in <cit.>. The conditional value-at-risk is a more involved measures that quantifies how far the probability mass of the tail of the probability distribution lies above a quantile. In <cit.>, this notion has been investigated for weighted reachability and mean payoff; in <cit.> for accumulated rewards. 
A further measure incentivizing a high expected value while keeping the probability of low outcomes small is the entropic risk measure. For accumulated rewards, this measure has been studied in <cit.> in stochastic games that extend MDPs with an adversarial player. Finally, as the demonic variance is a measure that looks at a system across different executions, there is a conceptual similarity to hyperproperties <cit.>. For probabilistic systems, logics expressing hyperproperties that allow to quantify over different executions or schedulers have been introduced in <cit.>. § PRELIMINARIES Notations for Markov decision processes. A Markov decision process (MDP) is a tuple ℳ = (S,,P,) where S is a finite set of states, a finite set of actions, P S ×× S → [0,1] ∩ the transition probability function, and ∈ S the initial state. We require that ∑_t∈ SP(s,,t) ∈{0,1} for all (s,α)∈ S×. We say that action α is enabled in state s iff ∑_t∈ SP(s,,t) =1 and denote the set of all actions that are enabled in state s by (s). We further require that (s) ≠∅ for all s∈ S. If for a state s and all actions α∈ Act(s), we have P(s,α,s)=1, we say that s is absorbing. The paths of are finite or infinite sequences s_0 _0 s_1 _1 … where states and actions alternate such that P(s_i,_i,s_i+1) >0 for all i≥0. For = s_0 _0 s_1 _1 …_k-1 s_k, P() = P(s_0,_0,s_1) ·…· P(s_k-1,_k-1,s_k) denotes the probability of and ()=s_k its last state. Often, we equip MDPs with a reward function S×→ℕ. The size of is the sum of the number of states plus the total sum of the encoding lengths in binary of the non-zero probability values P(s,α,s') as fractions of co-prime integers as well as the encoding length in binary of the rewards if a reward function is used. A Markov chain is an MDP in which the set of actions is a singleton. In this case, we can drop the set of actions and consider a Markov chain as a tuple =(S,P,, ) where P now is a function from S× S to [0,1] and a function from S to ℕ. An end component of is a strongly connected sub-MDP formalized by a subset S^'⊆ S of states and a non-empty subset 𝔄(s)⊆(s) for each state s∈ S^' such that for each s∈ S^', t∈ S and α∈𝔄(s) with P(s,α,t)>0, we have t∈ S^' and such that in the resulting sub-MDP all states are reachable from each other. An end-component is a 0-end-component if it only contains state-action-pairs with reward 0. Given two MDPs =(S,, P,) and =(S^',^', P^',^'), we define the (synchronous) product ⊗ as the tuple (S× S^', ×^', P^⊗, (,^')) where we define P^⊗ ((s,s^'), (α,β) , (t,t^')) = P(s,α, t) · P(s^', β, t^') for all (s,s^'), (t,t^') ∈ S× S^' and (α,β)∈×^'. Schedulers. A scheduler (also called policy) for is a function that assigns to each finite path a probability distribution over (()). If ()=(^') for all finite paths and ^' with ()=(^'), we say that is memoryless. In this case, we also view schedulers as functions mapping states s∈ S to probability distributions over (s). A scheduler is called deterministic if () is a Dirac distribution for each finite path , in which case we also view the scheduler as a mapping to actions in (()). Given two MDPs =(S,, P,) and =(S^',^', P^',^') and two schedulers and for and , respectively, we define the product scheduler ⊗ for ⊗ by defining for a finite path π = (s_0,t_0) (α_0 ,β_0) (s_1,t_1) … (s_k,t_k): ⊗ (π) (α, β) = (s_0 α_0 … s_k) (α) ·(t_0 β_0 … t_k) (β) for all (α, β)∈×^'. Probability measure. We write ^_,s to denote the probability measure induced by a scheduler and a state s of an MDP . 
It is defined on the σ-algebra generated by the cylinder sets (π) of all infinite extensions of a finite path π = s_0 _0 s_1 _1 …_k-1 s_k starting in state s, i.e., s_0=s, by assigning to (π) the probability that π is realized under , which is P^(π) ∏_i=0^k-1(s_0 _0 … s_i )(_i) · P(s_i,_0,s_i+1). This can be extended to a unique probability measure on the mentioned σ-algebra. For details, see <cit.>. For a random variable X, i.e., a measurable function defined on infinite paths in , we denote the expected value of X under a scheduler and state s by 𝔼^_,s(X). We define 𝔼^min_,s(X) inf_𝔼^_,s(X) and 𝔼^max_,s(X) sup_𝔼^_,s(X). The variance of X under the probability measure determined by and s in is denoted by ^_,s(X) and defined by ^_,s(X)𝔼^_,s((X-𝔼^_,s(X))^2)=𝔼^_,s(X^2) -𝔼^_,s(X)^2. We define ^max_,s(X) sup_^_,s(X). If s=, we sometimes drop the subscript s in _,s^, 𝔼_,s^ and ^_,s(X). Mixing schedulers. Intuitively, we often want to use a scheduler that initially decides to behave like a scheduler and then to stick to this scheduler with probability p and to behave like a scheduler with probability 1-p. As this intuitive description does not match the definition of schedulers as functions from finite paths[This description would be admissible if we allowed stochastic memory updates (see, e.g., <cit.>).], we provide a formal definition: For two schedulers and and p∈ [0,1], we use p⊕ (1-p) to denote the following scheduler. For a path π = s_0 _0 s_1 _1 …_k-1 s_k, we define for an action α enabled in s_k (p⊕ (1-p)) (π) (α) p· P^(π)·(π)(α)/p· P^(π) + (1-p) · P^(π) + (1-p)· P^(π)·(π)(α)/p· P^(π) + (1-p) · P^(π) . This is well-defined for any path that has positive probability under or . The following result is folklore; a proof is included in Appendix <ref>. propositionpropmixing Let the schedulers and and the value p be as above. Then, for any path π = s_0 _0 s_1 _1 …_k-1 s_k, we have P^p⊕ (1-p) (π) (π) = p P^(π) + (1-p) P^(π). We conclude _,s^p⊕ (1-p) (A) = p_,s^(A) + (1-p) _,s^(A) for any measurable set of paths A. Hence, we can think of the scheduler p⊕ (1-p) as behaving like with probability p and like with probability (1-p). In particular, we can also conclude that for a random variable X, we have 𝔼_,s^p⊕ (1-p) (X) = p𝔼_,s^(X) + (1-p) 𝔼_,s^(X). For the variance, we obtain the following as shown in Appendix <ref>. lemmavariancemix Given , X, and two schedulers _1 and _2, as well as p∈ [0,1], let = p _1 ⊕ (1-p) _2. Then, ^_(X) = p _^_1(X) + (1-p) _^_2(X) + p (1-p) (𝔼_^_1(X) - 𝔼_^_2(X))^2. Topology and convergence of measures. Given a family of topological spaces ((S_i,τ_i))_i∈ I, the product topology τ on ∏_i∈ IS_i is the coarsest topology such that the projections p_i ∏_i∈ IS_i → S_i , (s_i)_i∈ I↦ s_i are continuous for all i∈ I. For measures (μ_j)_j∈ℕ and μ on a measure space (Ω,Σ) where Ω is a metrizable topological space and Σ the Borel σ-algebra on Ω, we say that the sequence (μ_j)_j∈ℕ weakly converges to μ if for all bounded continuous functions fΩ→ℝ, we have lim_j→∞∫ f dμ_j = ∫ f dμ. The set of infinite paths Π_ of an MDP with the topology generated by the cylinder sets is metrizable as we can define the metric d(π,π^')=2^-ℓ where ℓ is the length of the longest common prefix of π and π^'. § DEMONIC VARIANCE AND NON-DETERMINISM SCORE In this section, we formally define the demonic variance. After proving first auxiliary results, we prove an analogue of Chebyshev's Inequality using the demonic variance. 
Then, we introduce the non-determinism score and investigate necessary and sufficient conditions for this score to be 0 or 1. Proofs omitted here can be found in Appendix <ref>. Throughout this section, let = (S,, P,) be an MDP and let X be a random variable, i.e., a Borel measurable function on the infinite paths of . We will work under two assumptions that ensure that all notions are well-defined: First, note that ^max_(X)=0 implies that there is a constant c such that under all schedulers , we have _^(X=c)=1 – an uninteresting case. Furthermore, for meaningful definitions of demonic variance and non-determinism score, we need that the expected value and the variance of X in are finite. Hence, we work under the following assumption: We assume that 0<^max_(X)<∞ and that sup_| 𝔼^_(X) | < ∞. §.§ Demonic variance As described in the introduction, the idea behind the demonic variance is to quantify the expected squared deviation of X in two independent executions of , in which the non-determinism is resolved independently as well. We use the following notation: Given a path in ⊗ consisting of a sequence of pairs of states and pairs of actions, we denote by X_1 and X_2 the function X applied to the projection of the path on the first component and on the second component, respectively. Given two schedulers _1 and _2 for , we define ^_1,_2_(X) 1/2𝔼^_1⊗_2_⊗ ((X_1 - X_2)^2). Intuitively, in this definition two independent executions of are run in parallel while the non-determinism is resolved by _1 in the first execution and by _2 in the second component. As the two components in the products ⊗ and _1⊗_2 are independent, the resulting distributions of X in the two components, i.e., X_1 and X_2 are independent as well. The factor 1/2 is included as for a random variable Y, this factor also appears in the representation (Y)=1/2𝔼((Y_1-Y_2)^2) for two independent copies Y_1 and Y_2 of Y. The demonic variance is now the worst-case value when ranging over all pairs of schedulers: ^_(X) sup__1,_21/2𝔼^_1⊗_2_⊗ ((X_1 - X_2)^2). A first simple, but useful, result allows us to express ^_1,_2_(X) in terms of the expected values and variances of X under _1 and _2. lemmavariancetwo Given two schedulers _1 and _2 for , we have ^_1,_2_(X) = 1/2( _^_1(X) + _^_2(X) + (𝔼_^_1(X) - 𝔼_^_2(X))^2 ). This lemma allows us to provide an insightful graphical interpretation of the demonic variance using the standard deviation 𝕊𝔻(X)√((X)) of a random variable X: Suppose in an MDP , there are four deterministic scheduler _1,…,_4 with expected values 1, 2, 3, and 4 and variances 1, 8, 8, and 5 for a random variable X. Lemma <ref> allows us to compute the variances of schedulers obtained by randomization leading to parabolic line segments in the expectation-variance-plane as depicted in Figure <ref> (see also <cit.>). Further randomizations also make it possible to realize any combination of expectation and variance in the interior of the resulting shape. When looking for the maximal variance and the demonic variance, only the upper bound of this shape is relevant. In Figure <ref>, we now depict the standard deviations of schedulers on this upper bound over the expectation twice on two orthogonal planes. Clearly, the highest standard deviation (and consequently variance) is obtained for the expected value 2.5 in this example. The red dotted line of length √(2^max_(X)) connects the two points corresponding to this maximum on the two planes. 
Considering _2 and _4, we can also find the value √(2^_2,_4_(X)) : The blue dashed line connects the point corresponding to _2 on one of the planes to the point corresponding to _4 on the other plane. By the Pythagorean theorem, its length is √(√(^_2_(X))^2 + (𝔼^_2_(X) - (𝔼^_4_(X) )^2 + √(^_4_(X))^2 ) = √(2^_2,_4_(X)). So, finding √(2) times the “demonic standard deviation” and hence the demonic variance corresponds to finding two points on the two orthogonal graphs with maximal distance. The relation between maximal and demonic variance is shown in the following proposition. propositionboundstwo We have ^max_(X) ≤^_(X) ≤ 2^max_(X). By means of Chebyshev's Inequality, the variance can be used to bound the probability that a random variable Y lies far from its expected value. Using the demonic variance, we can prove an analogous result providing bounds on the probability that the outcomes of X in two independent executions of the MDP lie far apart. This can be seen as a first step in the direction of using the demonic variance to provide guarantees on the behavior of a system. theoremcheby We have _⊗^⊗( |X_1 - X_2| ≥ k·√(^_(X))) ≤2/k^2 for any k∈ℝ_>0 and schedulers and for . Using the result that ^_(X)≤ 2^max_(X), we obtain the following variant of the inequality providing a weaker bound in terms of the maximal variance. corollarycorcheby We have _⊗^⊗( |X_1 - X_2| ≥ k·√(^max_(X))) ≤4/k^2 for any k∈ℝ_>0 and schedulers and for . §.§ Non-determinism score We have seen that the demonic variance is larger than the maximal variance by a factor between 1 and 2. As described in the introduction, we use this insight as the basis for a score quantifying how much worse the “uncertainty” of X is when non-determinism can be resolved differently in two executions of an MDP compared to how bad it can be in a single execution. We define the non-determinism score (NDS) (,X) ^_(X) - ^max_(X)/^max_(X). By Assumption <ref>, the NDS is well-defined. By Proposition <ref>, the NDS always returns a value in [0,1]. Clearly, in Markov chains, the NDS is 0. A bit more general, we can show: propositionndszero If 𝔼^_(X) = 𝔼^_(X) for all schedulers and , then (,X)=0. In transition systems viewed as MDPs in which all transition probabilities are 0 or 1, the NDS is 1: Under Assumption <ref> in a transition system the value of X must be bounded, i.e., X∈ [a,b] for some a,b∈ℝ such that sup_π X(π)=b and inf_π X(π)=a where π ranges over all paths. Any path can be realized by a scheduler with probability 1. So, for any ε>0, there are schedulers and with ^_(X<a+ε) = 1 and ^_(X>b-ε) = 1. Then, ^,_(X) ≥1/2 (b-a - 2ε)^2. For ε→ 0, this converges to (a-b)^2/2. It is well-known that the variance of random variables taking values in [a,b] is maximal for the random variable taking values a and b with probability 1/2 each. The variance in this case is (a-b)^2/4. So, the maximal variance is (at most) half the demonic variance in this case. Consequently, the NDS is 1. Of course, a NDS of 1 does not imply that there are no probabilistic transitions in . Nevertheless, a NDS of 1 has severe implications showing that the outcome of X can be heavily influenced by the non-determinism in this case as the following theorem shows: theoremthmndsone If (,X)=1, the following statements hold: * For every ε>0, there are schedulers 𝔐𝔦𝔫_ε and 𝔐𝔞𝔵_ε with 𝔼^𝔐𝔦𝔫_ε_ (X) ≤𝔼^min_ (X) +ε and ^𝔐𝔦𝔫_ε_ (X) ≤ε, and 𝔼^𝔐𝔞𝔵_ε_ (X) ≥𝔼^max_ (X) -ε and ^𝔐𝔞𝔵_ε_ (X) ≤ε. 
* If there are schedulers _0 and _1, with ^_(X) = ^_0,_1_(X), then, for i=0 or i=1, _^_i ( X = 𝔼^min_(X)) = 1 and _^_1-i ( X = 𝔼^max_(X)) = 1. * If X is bounded and continuous wrt the topology generated by the cylinder sets, there are schedulers 𝔐𝔦𝔫 and 𝔐𝔞𝔵 with _^𝔐𝔦𝔫 ( X = 𝔼^min_(X)) = 1 and _^𝔐𝔞𝔵 ( X = 𝔼^max_(X)) = 1. The first two statements can be shown by elementary calculations. For the third statement, we use topological arguments. We view schedulers as elements of ∏_k=0^∞Distr()^Paths_^k where Paths_^k is the set of paths of length k in and prove the following result: The space of schedulers Sched() = ∏_k=0^∞Distr()^Paths_^k with the product topology is compact. So, every sequence of schedulers has a converging subsequence in this space. Further, for a sequence (_j)_j∈ℕ converging to a scheduler in this space, the sequence of probability measures (^_j_)_j∈ℕ weakly converges to the probability measure ^_. An example for a random variable that is bounded and continuous wrt the topology generated by the cylinder sets is the discounted reward: Given a reward function S →ℝ, the discounted reward of a path π=s_0_0s_1… is defined as 𝐷𝑅_λ(π) ∑_j=0^∞λ^j (s_j) for some discount factor λ∈ (0,1). First, | 𝐷𝑅_λ| is bounded by max_s∈ S |(s)| ·1/1-λ. Further, for any ε>0, let N be a natural number such that max_s∈ S |(s)| ·λ^N/1-λ<ε. Then, |𝐷𝑅_λ(π) - 𝐷𝑅_λ(ρ) |<ε for all paths π and ρ that share a prefix of length more than N. § WEIGHTED REACHABILITY We now address the problems to compute the demonic and the maximal variance for weighted reachability where a weight is collected on a run depending on which absorbing state is reached. As the NDS is defined via these two quantities, we do not address it separately here. Throughout this section, let =(S, , P,) be an MDP with set of absorbing states T⊆ S and let T→ℚ be a weight function. We define the random variable on infinite paths π by (π) = (t) if π reaches the absorbing state t∈ T, and (π) = 0 if π does not reach T. The main result we are going to establish is the following: Main result. The maximal variance ^max_() and an optimal memoryless randomized scheduler can be computed in polynomial time. The demonic variance ^_() can be computed as the solution to a bilinear program that can be constructed in polynomial time. Furthermore, there is a pair of memoryless deterministic schedulers realizing the demonic variance. The following standard model transformation collapsing end components (see <cit.>) allows us to assume that T is reached almost surely under any scheduler: We add a new absorbing state t^∗ and set (t^∗)=0 and collapse all maximal end components in S∖ T to single states s_. In s_, all actions that were enabled in some state in and that did not belong to as well as a new action τ leading to t^∗ with probability 1 are enabled. In the resulting MDP , the set of absorbing states T∪{t^∗} is reached almost surely under any scheduler. Further, for any scheduler for , there is a scheduler for such that the distribution of is the same under in and under in , and vice versa. So, w.l.o.g., assume the following: The set T is reached almost surely under any scheduler for . In the sequel, we first address the computation of the maximal variance and afterwards of the demonic variance of in . Omitted proofs can be found in Appendix <ref>. Computation of the maximal variance. 
It is well-known that the set of vectors (^_(◊ q))_q∈ T of combinations of reachability probabilities for states in T that can be realized by a scheduler can be described by a system of linear inequalities (see, e.g., <cit.>). We provide such a system of inequalities below in equations (<ref>) – (<ref>). The equations use variables x_s,α for all state-action pairs (s,α) encoding the expected number of times action α is taken in state s. Setting 1_s==1 if s= and 1_s==0 otherwise, we require x_s,α ≥ 0 for all (s,α), ∑_α∈(s) x_s,α = ∑_t∈ S,β∈(t) x_t,β· P(t,β,s) + 1_s= for all s∈ S∖ T, y_q = ∑_t∈ S,β∈(t) x_t,β· P(t,β,q) for all q∈ T. The variables y_q for q∈ T represent the probabilities that state q is reached. We can now express the expected values of and ^2 via variables e_1 and e_2 using the constraints: e_1= ∑_q∈ T y_q ·(q) and e_2= ∑_q∈ T y_q ·(q)^2. The variance can now be written as a quadratic objective function in e_1 and e_2: maximize e_2 - e_1^2. theoremWRmax The maximal value in objective (<ref>) under constraints (<ref>) – (<ref>) is ^max_(). Due to the concavity of the objective function, we conclude: corollarycorMR The maximal variance ^max_() can be computed in polynomial time. Furthermore, there is a memoryless randomized scheduler with _^()= _^max(), which can also be computed in polynomial time. Computation of the demonic variance. The demonic variance can also be expressed as the solution to a quadratic program. To encode the reachability probabilities for states in T under two distinct schedulers, we use variables x_s,α for all state-action pairs (s,α) and y_q for q∈ T subject to constraints (<ref>) – (<ref>) as before. Additionally, we use variables x_s,α^' for all state-action pairs (s,α) and y_q^' for q∈ T subject to the analogous constraints (<ref>^') – (<ref>^') using these primed variables. The maximization of the demonic variance can be expressed as maximize 1/2∑_q,r∈ T y_q · y^'_r · ((q)-(r))^2. theorembilinear The maximum in (<ref>) under constraints (<ref>) – (<ref>), (<ref>^') – (<ref>^') is ^_(). The quadratic objective function (<ref>) is not concave. However, it is bilinear and separable. This means that the variables can be split into two sets, the primed and the unprimed variables, such that the quadratic terms only contain products of variables from different sets and each constraint contains only variables from the same set. In general, checking whether the solution to a separable bilinear program exceeds a given threshold is NP-hard <cit.>. Nevertheless, solution methods tailored for bilinear programs that perform well in practice have been developed (see, e.g., <cit.>). Further, bilinearity allows us to conclude: corollaryMDdem There is a pair of memoryless deterministic schedulers and for such that _^ () = ^,_(). For the complexity of the threshold problem, we can conclude an NP upper bound. Whether the computation of the demonic variance is possible in polynomial time is left open. corollaryNPupper Given , and ϑ∈ℚ, deciding whether ^_()≥ϑ is in NP. § ACCUMULATED REWARDS Among the most important random variables studied on MDPs are accumulated rewards: Let ℳ = (S,,P,) be an MDP and let S →ℕ be a reward function. We extend the reward function to paths π=s_0_0s_1… by (π)= ∑_i=0^∞(s_i). For this random variable, we prove the following result: Main result. The maximal variance ^max_() and an optimal randomized finite-memory scheduler can be computed in exponential time. 
The demonic variance ^_() can be computed as the solution to a bilinear program that can be constructed in exponential time. Furthermore, there is a pair of deterministic finite-memory schedulers realizing the demonic variance. We provide a sketch outlining the proof strategy. For a detailed exposition, see Appendix <ref>. It can be checked in polynomial time whether 𝔼^max_() < ∞ <cit.>. If this is the case, this allows us to perform the same preprocessing as in Section <ref> that removes all end components without changing the possible distributions of <cit.>. Bounding expected values and expectation maximizing actions: After the pre-processing, a terminal state is reached almost surely. As shown in <cit.>, this allows to obtain a bound Q on 𝔼^max_ (^2) in polynomial time. Further, the maximal expectation 𝔼^max_,s() from each state s can be computed in polynomial time <cit.>. From these values, a set of maximizing actions ^max(s) for each state s can be computed. After the preprocessing, a scheduler is expectation optimal iff it only chooses actions from these sets. If a scheduler initially chooses a non-maximizing action in a state s, the expected value 𝔼_,s^() is strictly smaller than 𝔼_,s^max(). We define δ to be the minimal difference between these values ranging over all starting states and non-maximizing actions. So, δ is the “minimal loss” in expected value of received by choosing a non-maximizing action. Switching to expectation maximization: Using the values Q and δ, we provide a bound B such that any scheduler choosing a non-maximizing action with positive probability after a path π with (π)≥ B cannot realize the maximal variance. The bound B can be computed in polynomial time and its numerical value is exponential in the size of the input. It follows that variance maximizing schedulers have to maximize the future expected rewards after a reward of at least B has been accumulated. Furthermore, we can show that among all expectation maximizing schedulers, a variance maximizing scheduler has to be used above the reward bound B. In <cit.>, it is shown that a memoryless deterministic expectation maximizing scheduler that maximizes the variance among all expectation maximizing schedulers can be computed in polynomial time. So, schedulers maximizing the variance of can be chosen to behave like once a reward of at least B has been accumulated. Quadratic program: Now, we can unfold the MDP by storing in the state space how much reward has been accumulated up to the bound B. This results on an exponentially larger MDP ^'. Using the expected values 𝔼^_,s() and the variances ^_,s() under from each state s, we can formulate a quadratic program similar to the one for weighted reachability in Section <ref> for this unfolded MDP ^'. From the solution to this quadratic program, the maximal variance and an optimal memoryless scheduler for ^' can be extracted. Transferred back to , the scheduler corresponds to a reward-based finite-memory scheduler that keeps track of the accumulated reward up to bound B. As the quadratic program is convex, these computations can be carried out in exponential time. Demonic variance: For the demonic variance, the overall proof follows the same steps. Similar to the bound B above, a bound B^' can be provided such that in any pair of scheduler and realizing the demonic variance, both schedulers can be assumed to switch to the behavior of the memoryless deterministic scheduler above the reward bound B^'. 
Again by unfolding the state space up to this reward bound, the demonic variance can be computed via a bilinear program of exponential size similar to the one used in Section <ref> for weighted reachability. Furthermore, the pair of optimal memoryless deterministic schedulers in the unfolded MDP, which can be extracted from the solution, corresponds to a pair of deterministic reward-based finite-memory schedulers in the original MDP . § CONCLUSION We introduced the notion of demonic variance that quantifies the uncertainty under probabilism and non-determinism of a random variable X in an MDP . As this demonic variance is at most twice as large as the maximal variance of X, we used it to define the NDS for MDPs. The demonic variance can be used to provide new types of guarantees on the behavior of systems. A first step in this direction is the variant of Chebyshev's Inequality using the demonic variance proved in this paper. Furthermore, the demonic variance and the NDS can serve as the basis for notions of responsibility. On the one hand, such notions could ascribe responsibility for the uncertainty to non-determinism and probabilism. On the other hand, comparing the NDS from different starting states can be used to identify regions of the state space in which the non-deterministic choices are of high importance. For weighted reachability and accumulated rewards, we proved that randomized finite-memory schedulers are sufficient to maximize the variance. For the demonic variance, even pairs of deterministic finite-memory schedulers are sufficient. While we obtained upper bounds via the formulation of the computation problems as quadratic programs, determining the precise complexities is left as future work. In the case of accumulated rewards, we restricted ourselves to non-negative rewards. When dropping this restriction, severe difficulties have to be expected as several related problems on MDPs exhibit inherent number-theoretic difficulties rendering the decidability status of the corresponding decision problems open <cit.>. Of course, the investigation of the demonic variance and NDS for further random variables constitutes an interesting direction for future work. For practical purposes, studying the approximability of the maximal and demonic variance is also important. Finally, in the spirit of the demonic variance, further notions can be defined to quantify the uncertainty in X if the non-determinism in two executions of is not resolved independently, but information can be passed between the two executions. This could be useful, e.g., to analyze the potential power of coordinated attacks on a network. Formally, such a notion could be defined as sup_𝔼^_⊗ ( (X_1-X_2)^2) where ranges over all schedulers for ⊗. In this context, also using an asynchronous product of with could be reasonable. § OMITTED PROOFS OF SECTION <REF> * We prove the result by induction on k. For k=0, there is nothing to show. So, assume the induction hypothesis (IH) that the claim holds for k∈ℕ. Let π = s_0 _0 s_1 _1 …_k-1 s_k _k s_k+1 and π^' = s_0 _0 s_1 _1 …_k-1 s_k. Then, P^p⊕ (1-p) (π) = P^p⊕ (1-p) (π^') · (p⊕ (1-p)) (π^')(_k) · P(s_k,_k,s_k+1) (IH)= (p P^(π^') + (1-p) P^(π^') ) · P(s_k,_k,s_k+1) ·(p· P^(π^')·(π^')(_k)/(p· P^(π^') + (1-p) · P^(π^')) + (1-p)· P^(π^')·(π^')(_k)/(p· P^(π^') + (1-p) · P^(π^'))) = p P^(π^')·(π^')(_k) · P(s_k,_k,s_k+1) + (1-p) P^(π^')·(π^')(_k) · P(s_k,_k,s_k+1) = pP^(π) + (1-p)P^(π). This concludes the induction step. 
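The next item proves the mixing identity for the variance algebraically. As a quick sanity check of that identity, the following self-contained Python sketch compares both sides on two small outcome distributions of X; the distributions and the value of p below are ours, chosen purely for illustration.

```python
# Numerical sanity check of the identity
#   Var(p-mix) = p*Var_1 + (1-p)*Var_2 + p*(1-p)*(E_1 - E_2)^2
# for two finite outcome distributions of X (illustrative values only).

def mean(dist):
    return sum(x * q for x, q in dist.items())

def variance(dist):
    m = mean(dist)
    return sum(q * (x - m) ** 2 for x, q in dist.items())

def mix(dist1, dist2, p):
    """Distribution of X when the first behavior is used with probability p."""
    out = {}
    for x, q in dist1.items():
        out[x] = out.get(x, 0.0) + p * q
    for x, q in dist2.items():
        out[x] = out.get(x, 0.0) + (1 - p) * q
    return out

dist1 = {0.0: 0.5, 4.0: 0.5}    # distribution of X under the first scheduler
dist2 = {1.0: 0.25, 2.0: 0.75}  # distribution of X under the second scheduler
p = 0.3

lhs = variance(mix(dist1, dist2, p))
rhs = (p * variance(dist1) + (1 - p) * variance(dist2)
       + p * (1 - p) * (mean(dist1) - mean(dist2)) ** 2)
print(round(lhs, 10), round(rhs, 10))  # both print 1.344375
```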
* We express the variance of X under as ^_(X) = 𝔼^_(X^2) - 𝔼^_(X)^2 = p 𝔼^_1_(X^2) + (1-p) 𝔼^_2_(X^2) - ( p 𝔼^_1_(X) +(1-p) 𝔼^_2_(X) )^2 = p 𝔼^_1_(X^2) - p^2 𝔼^_1_(X)^2 -p(1-p)𝔼^_1_(X)^2 +p(1-p)𝔼^_1_(X)^2 + (1-p) 𝔼^_2_(X^2) - (1-p)^2 𝔼^_2_(X)^2 -p(1-p)𝔼^_2_(X)^2 +p(1-p)𝔼^_2_(X)^2 - 2 p (1-p) 𝔼^_1_(X)𝔼^_2_(X) = p^_1_(X) + (1-p) ^_2_(X) + p(1-p) (𝔼^_1_(X) - 𝔼^_1_(X))^2. § OMITTED PROOFS OF SECTION <REF> * We compute 𝔼^_1⊗_2_⊗ ((X_1 - X_2)^2) = 𝔼^_1⊗_2_⊗ (X_1^2 -2X_1X_2 + X_2^2) = 𝔼^_1_ (X^2) - 2 𝔼^_1_ (X) 𝔼^_2_ (X) + 𝔼^_2_ (X^2) (independence) = ^_1_ (X) + 𝔼^_1_ (X)^2 - 2 𝔼^_1_ (X) 𝔼^_2_ (X) + ^_2_ (X) + 𝔼^_2_ (X)^2 = ^_1_ (X) + ^_2_ (X) + (𝔼_^_1(X) - 𝔼_^_2(X))^2. * Clearly, ^_(X) ≥^max_(X) as for any scheduler , we have ^,_(X) = ^_(X). For a pair of schedulers and , let = 1/2⊕1/2. Then, by Lemma <ref>, ^_(X) = 1/2^_(X) + 1/2^_(X) + 1/4 (𝔼^_(X) - 𝔼^_(X))^2 ≥ 1/4^_(X) + 1/4^_(X) + 1/4 (𝔼^_(X) - 𝔼^_(X))^2 = 1/2^,_(X) where the last equality follows from Lemma <ref>. So, ^_(X) = sup_, ^,_(X) ≤ 2sup_^_(X) = 2 ^max_(X). * The Markov inequality states that for any random variable Y only taking non-negative values and any value a>0, we have (Y≥ a) ≤𝔼(Y)/a. Considering the random variable (X_1-X_2)^2 in ⊗, we can apply the Markov inequality to obtain _⊗^⊗( |X_1 - X_2| ≥ k·√(^_(X))) = _⊗^⊗( (X_1 - X_2)^2 ≥ k^2·^_(X) ) ≤𝔼_⊗^⊗ ((X_1-X_2)^2)/k^2·^_(X) ≤2^_(X)/k^2·^_(X) = 2/k^2. * We observe _⊗^⊗( |X_1 - X_2| ≥ k·√(^max_(X))) ≤ _⊗^⊗( |X_1 - X_2| ≥ k·√(^_(X)/2)) = _⊗^⊗( |X_1 - X_2| ≥k/√(2)·√(^_(X))) ≤ 2/(k/√(2))^2 = 4/k^2 where the last line follows from Theorem <ref> * Using Lemma <ref>, we have for any pair of schedulers and that ^,_(X) = 1/2 (_^(X) + _^(X) + (𝔼^_(X) - 𝔼^_(X) )^2) = 1/2 (_^(X) + _^(X)) where the last equality follows from our assumption. So, the variance _^(X) or the variance _^(X) is at least as large as _^,(X). Consequently, ^max_(X) = ^_(X). * . (1): First, note that 𝔼^min_(X)<𝔼^max_(X) as otherwise (,X)=0 by Proposition <ref>. Define D𝔼^max_(X) - 𝔼^min_(X). Let ε>0. Let and be two schedulers such that ^,_(X) ≥^_(X) - δ for a δ>0 depending on D and ε, which we will specify later. Define 1/2⊕1/2. Then, by Lemma <ref>, ^_(X) = 1/2^_(X) + 1/2^_(X) + 1/4 (𝔼^_(X) - 𝔼^_(X))^2 . On the other hand, by Lemma <ref>, ^,_(X) = 1/2^_(X) + 1/2^_ + 1/2 (𝔼^_(X) - 𝔼^_(X))^2. As (,X)=1, we know ^_(X) = 2 ^max_(X). So, ^,_(X) ≥^_(X) - δ≥ 2 ^_(X) - δ = ^_(X) + ^_ (X) + 1/2 (𝔼^_(X) - 𝔼^_(X))^2 - δ = ^,_(X) + 1/2 (^_(X) + ^_ (X) ) - δ. So, we conclude ^_(X) ≤ 2 δ and ^_(X) ≤ 2 δ. Now, let 𝔐𝔞𝔵 be a scheduler with 𝔼^𝔐𝔞𝔵_(X) = 𝔼^max_(X) and 𝔐𝔦𝔫 a scheduler with 𝔼^𝔐𝔦𝔫_(X) = 𝔼^min_(X). Then, 2^,_(X) ≥ 2^𝔐𝔞𝔵,𝔐𝔦𝔫_(X) - 2δ which is equivalent to ^_(X) + ^_ (X) + (𝔼^_(X) - 𝔼^_(X))^2 ≥ ^𝔐𝔞𝔵_(X) + ^𝔐𝔦𝔫_ (X) + (𝔼^𝔐𝔞𝔵_(X) - 𝔼^𝔐𝔦𝔫_(X))^2 - 2δ. Plugging in ^_(X) ≤ 2 δ, ^_(X) ≤ 2 δ and D= 𝔼^max_(X) - 𝔼^min_(X), we obtain (𝔼^_(X) - 𝔼^_(X))^2 ≥ D^2 - 6δ. W.l.o.g., assume 𝔼^_(X) ≥𝔼^_(X) and let E𝔼^_(X) - 𝔼^_(X). We conlcude E≥√(D^2 - 6δ). Now, we can specify δ depending on ε and D. We may choose any δ>0 such that δ≤ε<2 and D-√(D^2 - 6δ)<ε. Then, ^_(X) ≤ε and ^_(X) ≤ε. Further, E≥ D-ε which implies 𝔼^_(X)≥𝔼^max_(X) - ε and 𝔼^_(X)≤𝔼^min_(X) + ε. (2): The same reasoning as in item (1) applied to schedulers _0 and _1 with ^_0,_1_(X) = ^_(X) allows us first to conclude that ^_0_(X)=0 and ^_1_(X)=0. Then, we obtain (𝔼^_0_(X) - 𝔼^_1_(X))^2 = (𝔼^𝔐𝔞𝔵_(X) - 𝔼^𝔐𝔦𝔫_(X))^2. Assuming w.l.o.g., that 𝔼^_1_(X)>𝔼^_0_(X), we get 𝔼^_1_(X)=𝔼^max_(X) and 𝔼^_0_(X)=𝔼^min_(X). Together, this implies ^_0_(X=𝔼^min_(X))=1 and ^_1_(X=𝔼^max_(X))=1. 
(3): W.l.o.g., we assume that all actions are enabled in all states. As there is always at least one enabled action, we can simply let all disabled actions in a state have the same transition dynamics as some enabled action. For a natural number k, we let Δ_k{fPaths_^k →Distr()} be the set of functions from the set Paths_^k of finite paths in containing k transitions to the set of distributions Distr() over . Viewing Distr() as a subset of [0,1]^, this is a compact space with the usual Euclidean topology as it is a closed subset of [0,1]^. Similarly, we can view Δ_k as Distr()^Paths_^k which is a (finite) product of compact spaces and hence compact with the product topology. Note that in this finite product, a basis for the topology is given by products of open sets. Now, the space of schedulers can be seen as Sched() = ∏_k=0^∞Distr()^Paths_^k. Again, we equip this space with the product topology. By Tychonoff's theorem, this again results in a compact space. Now, let (ε_n)_n∈ℕ be a sequence of positive numbers converging to 0. For each n, let 𝔐𝔦𝔫_ε_n be a scheduler as in item (1). As Sched() is compact, the sequence (𝔐𝔦𝔫_ε_n)_n∈ℕ has a converging subsequence (𝔐𝔦𝔫_j)_j∈ℕ. Let 𝔐𝔦𝔫 be the limit of this subsequence. Let μ_j be the probability measure ^𝔐𝔦𝔫_j_ for each j∈ℕ and let μ be the probability measure ^𝔐𝔦𝔫_. We will show that μ_j weakly converges to μ. As shown in <cit.>, it is sufficient to show for each finite union U of elements of a countable basis of the topology on infinite paths that lim inf_j→∞μ_j(U) ≥μ(U). The set of cylinder sets forms a countable basis of the topology. So, let U=⋃_i=1^ℓ(π_i) for finite paths π_1,…, π_ℓ. W.l.o.g., we can assume that the cylinder sets (π_i) with 1≤ i ≤ℓ are disjoint as we can write U as union of cylinder sets generated by paths of the same length. So, in fact it is sufficient to prove lim inf_j→∞μ_j(C) ≥μ(C) for all cylinder set C. So, consider a finite path π=s_0_0s_1… s_k. Then, μ((π)) = ^𝔐𝔦𝔫_(π) = ∏_h=0^k-1 P(s_i,_i,s_i+1)·𝔐𝔦𝔫(s_0_0s_1… s_i)(_i). For each δ>0, the set S_δ{ ∈Sched() | |𝔐𝔦𝔫(s_0_0s_1… s_i)(_i) - (s_0_0s_1… s_i)(_i)| <δ for all 0≤ i ≤ k-1} is open in the product topology on Sched(). So, for each δ, there is an N such that 𝔐𝔦𝔫_j∈ S_δ for all j>N. Now, let Δ_δsup_∈ S_δ |^_(π) - ^𝔐𝔦𝔫_(π)|. Then, Δ_δ∈𝒪(δ). This allows us to conclude that for any δ^', there is an N^' such that |^𝔐𝔦𝔫_j_(π) - ^𝔐𝔦𝔫_(π)|<δ^' for all j>N^'. So, lim_j→∞μ_j((π)) = μ((π)) showing that μ_j weakly converges to μ. for j→∞. For bounded, continuous, Borel measurable functions X, we hence can conclude that 𝔼^𝔐𝔦𝔫_ (X) = ∫ X dμ = lim_j→∞∫ X dμ_j = lim_j→∞𝔼^𝔐𝔦𝔫_j_ (X) = 𝔼^min_(X). As also ∫ X^2 dμ = lim_j→∞∫ X^2 dμ_j and lim_j→∞^𝔐𝔦𝔫_j_ (X) = 0, we conclude 𝔼^𝔐𝔦𝔫_ (X^2) = ( 𝔼^min_(X))^2. Hence, ^𝔐𝔦𝔫_ (X) = 0. So, ^𝔐𝔦𝔫_(X= 𝔼^min_(X)) =1. The existence of the scheduler 𝔐𝔞𝔵 as claimed in the theorem can be shown analogously. § OMITTED PROOFS OF SECTION <REF> * The correctness of the constraints (<ref>) – (<ref>) is shown, e.g., in <cit.>. So, for each scheduler there is a solution to (<ref>) – (<ref>) with y_q = ^_(◊ q) for all q∈ T, and vice versa. This implies directly that the variables e_1 and e_2 defined by constraints (<ref>) in terms of these variables y_q for q∈ T satisfy e_1 = 𝔼^_() and e_2 = 𝔼^_(^2) for a scheduler corresponding to the values y_q. Hence, any value of the objective (<ref>) that is obtainable under constraints (<ref>) – (<ref>) is the variance of under some scheduler, and vice versa. 
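To make the quadratic program tangible, the sketch below evaluates the objective e_2 - e_1^2 on a toy weighted-reachability instance of our own (it is not taken from the paper). With a single non-terminal state and two actions, the constraint polytope collapses to a one-parameter family indexed by the probability q of choosing the first action, so the objective can simply be scanned.

```python
# Toy weighted-reachability instance (ours): one non-terminal state s0 with
#   action a: 0.5 -> t1 (weight 0), 0.5 -> t2 (weight 2)
#   action b: 1.0 -> t3 (weight 4)
# A memoryless randomized scheduler is given by the probability q of choosing
# action a in s0; the feasible reachability vectors are exactly those below.

weights = {"t1": 0.0, "t2": 2.0, "t3": 4.0}

def reach_probs(q):
    return {"t1": 0.5 * q, "t2": 0.5 * q, "t3": 1.0 - q}

def moments(q):
    y = reach_probs(q)
    e1 = sum(y[t] * weights[t] for t in weights)
    e2 = sum(y[t] * weights[t] ** 2 for t in weights)
    return e1, e2

best_q, best_var = 0.0, -1.0
for i in range(1001):
    q = i / 1000
    e1, e2 = moments(q)
    if e2 - e1 ** 2 > best_var:
        best_q, best_var = q, e2 - e1 ** 2

# Randomization matters here: the two deterministic choices have variance
# 1.0 (action a) and 0.0 (action b), but mixing them does strictly better.
print(best_q, best_var)  # about 0.556 and 2.778 (the exact optimum is q=5/9, 25/9)
```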
* Clearly, the objective function is concave and all constraints are linear. Hence, the maximal value of the objective function can be computed in polynomial time <cit.>. (Note that the maximization of a concave function is equivalent to the minimization of a convex function.) Furthermore, from the values x_s,α in the solution, a memoryless scheduler can be computed by setting (s)(α) = x_s,α/∑_α∈(s) x_s,α (see, e.g., <cit.>). * The statement follows analogously to Theorem <ref>. * From Theorem <ref>, we can conclude that there are schedulers and with ^_()= ^, _(). For this fixed scheduler , we can optimize ^,_() by fixing the variables y_q^' for q∈ T in the objective function (<ref>). The resulting linear program consisting of constraints (<ref>) – (<ref>) and the objective function (<ref>), is the linear program that computes the maximal expected value of the weighted reachability problem with weight function ^' T →ℚ given by (q) = ∑_r∈ T y^'_r · ((q)-(r))^2. As memoryless deterministic schedulers are sufficient to maximize weighted reachability (which is well-known, see, e.g., <cit.>), there is a memoryless deterministic scheduler maximizing ^,_() and hence ^_()= ^, _(). Analogously, we can show that can be chosen to be memoryless deterministic. * We can guess two memoryless deterministic schedulers and . The value ^,_() can then easily be computed in polynomial time and be compared to the threshold ϑ. By Corollary <ref>, this solves the threshold problem in non-deterministic polynomial time. § DETAILED EXPOSITION OF THE RESULTS OF SECTION <REF> Before we address the computation of the maximal variance and the demonic variance for accumulated rewards, we provide some prerequisites that are well-known. *Prerequisites We assume that 𝔼^max_() < ∞, which can be checked in polynomial time <cit.>. This implies that all reachable end components contain only states with reward 0. In particular, all reachable absorbing states have reward 0. Hence, we can perform the same preprocessing as in Section <ref> that introduces a new absorbing state t^∗ and removes all end components. This pre-processing does not change the distributions of that can be realized by a scheduler. For details, see also <cit.>. So, w.l.o.g., we work under the following assumption: We assume that an absorbing state t with (t)=0 is reached almost surely under any scheduler for . Under our assumption, the maximal expected accumulated reward 𝔼^max_() as well as an optimal memoryless deterministic scheduler can be computed in polynomial time <cit.>. The actions such a scheduler chooses in a state s belong to the set of maximizing actions ^max(s) that we define as ^max(s) = {∈(s) |𝔼^max_,s() = (s,) + ∑_t∈ S P(s,,t) ·𝔼^max_,t() }. Conversely, under Assumption <ref> any scheduler only choosing actions in ^max(s) in each state s maximize the expected value of . Furthermore, it is known that a bound for the expected value of ^2 can be computed in polynomial time: Let p_min be the minimal non-zero transition probability in and let R be the largest reward. Then, max_s∈ S𝔼^max_,s(^2) ≤2· |S|^2· R^2/p_min^2|S| Q. Finally, we call a scheduler reward-based if for paths π and π^' with (π)=(π^') and (π)=(π^'), the scheduler chooses the same distribution over actions. 
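As a concrete illustration of the prerequisite computations just described, the following sketch runs value iteration for the maximal expected accumulated reward from every state and extracts the expectation-maximizing action sets. The instance, its state names, rewards and probabilities are our own and purely illustrative; the sketch assumes the preprocessing has already been applied, i.e., a zero-reward absorbing state is reached almost surely under every scheduler.

```python
# Value-iteration sketch (toy instance of our own) for the maximal expected
# accumulated reward from every state and the expectation-maximizing actions.

transitions = {                       # transitions[s][a] = [(prob, successor)]
    "s0": {"a": [(0.5, "s1"), (0.5, "t")], "b": [(1.0, "s1")]},
    "s1": {"c": [(1.0, "t")]},
    "t":  {"stop": [(1.0, "t")]},     # absorbing terminal state
}
rew = {"s0": 2, "s1": 3, "t": 0}      # state-based rewards

def backup(s, succ, val):
    return rew[s] + sum(p * val[t] for p, t in succ)

def max_expected_reward(eps=1e-12):
    val = {s: 0.0 for s in transitions}
    while True:
        new = {s: (0.0 if s == "t" else
                   max(backup(s, succ, val) for succ in transitions[s].values()))
               for s in transitions}
        if max(abs(new[s] - val[s]) for s in val) < eps:
            return new
        val = new

emax = max_expected_reward()
amax = {s: {a for a, succ in acts.items()
            if abs(backup(s, succ, emax) - emax[s]) < 1e-9}
        for s, acts in transitions.items() if s != "t"}
print(emax)  # {'s0': 5.0, 's1': 3.0, 't': 0.0}
print(amax)  # {'s0': {'b'}, 's1': {'c'}}
```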
We say that a scheduler is a finite-memory scheduler if it can be implemented with a finite set of memory states that can be updated after each transition according to the state that is reached such that the decisions are based only on the memory state and the current state of the MDP. §.§ Computing the maximal variance The goal of this section is to prove the following theorem. The maximal variance ^max_() as well as an optimal randomized reward-based finite-memory scheduler can be computed in exponential time. In this section, we will show that we can restrict the supremum in the definitions of the maximal variance and the demonic variance to a class of schedulers that switch to a fixed behavior given by a memoryless deterministic scheduler once the reward on a path exceeds a computable bound B. Before we prove this result, we introduce additional notation and afterwards define the bound B. *Notation We define some notation used below. Given a scheduler and a finite path π, we denote by _π the residual scheduler of after π given by _π(ζ) (π∘ζ) for all finite paths ζ starting in (π). Further, for α∈((π)), we denote by _π,α the residual scheduler of after π that starts by choosing α and behaves like afterwards. It is given by assigning probability 1 to action α initially in state (π) and for all finite paths ζ=(π) α … is given by _π,α(ζ) (π∘ζ). Further, given a scheduler , we denote by ↑_π,α the scheduler that behaves like unless chooses α at the end of the path π. In this case, ↑_π,α makes decisions according to on the paths starting in (π) from that moment on instead of according to _π,α. *Bound B We begin by defining the bound B, after which schedulers can switch to memoryless behavior as we will prove afterwards. This bound depends on the maximal expected value Mmax_s∈ S𝔼^max_,s(), the bound Q for the expected value of ^2 given in Lemma <ref> and on the minimal loss in expected value δ that the choice of a non-maximizing action causes, which we define as follows: δmin_s∈ Smin_α∉^max(s)𝔼^max_,s () - ∑_t∈ S P(s,α,t) 𝔼^max_,t(). We define B Q + 5/2 M^2/δ +2M +1 The choice of B will become clear later. By Markov's Inequality, we know that ^_(≥ 2 M) ≤1/2. As B≥ 2 M, we conclude ^_(≥ B)≤1/2. *Switch to expectation maximization First, we show that the variance of any scheduler that chooses a non-maximizing action after a path with reward more than B can be increased. Let Σ^max be the set of schedulers with 𝔼^_,s() = 𝔼^max_,s() for all states s∈ S. Let be a scheduler for and let π be a finite path with (π)≥ B. Assume that π has positive probability under and that there is an action α∈(s) ∖^max(s) such that (π)(α)>0. Let ∈Σ^max. Then, setting ↑_π,α, we have ^_() < ^_() . Let A be the event that scheduler chooses action α at the end of path π and let p be the probability of this event under . Note that switches to the behavior of exactly in case A occurs. By the definition of B, we know that p≤1/2. Now, we aim to estimate the difference ^_() - ^_() = 𝔼^_(^2) - 𝔼^_(^2) + 𝔼^_()^2 - 𝔼^_()^2. First, we take a look at the expected values of ^2: 𝔼^_(^2) - 𝔼^_(^2) = (1-p)(𝔼^_(^2| A) - 𝔼^_(^2| A) ) + p (𝔼^_(^2| A) - 𝔼^_(^2| A) ) = p (𝔼^_(^2| A) - 𝔼^_(^2| A) ) as and behave identically under the condition A. Further, letting W (π), 𝔼^_(^2| A) - 𝔼^_(^2| A) = 𝔼^_π,α_,(π)((W+)^2) - 𝔼^_,(π)((W+)^2) = W^2 + 2 · W ·𝔼^_π,α_,(π)() + 𝔼^_π,α_,(π)(^2) - ( W^2 + 2 · W ·𝔼^_,(π)() + 𝔼^_,(π)(^2) ) † ≤ Q - 2· W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)()). Now, we estimate 𝔼^_()^2 - 𝔼^_()^2. 
Note that 𝔼^_() = (1-p)𝔼^_(| A) + p 𝔼^_(| A) and analogously for . Let C (1-p)𝔼^_(| A) ≤𝔼^max_(). Now, recall that M= max_s∈ S𝔼^max_,s(). We compute 𝔼^_()^2 - 𝔼^_()^2 = (C + p 𝔼^_(| A))^2 - (C + p 𝔼^_(| A))^2 = (C+p(W+𝔼^_,(π)()))^2 - (C+p(W+𝔼^_π,α_,(π)()))^2 = 2· C · p · ( 𝔼^_,(π)() - 𝔼^_π,α_,(π)()) +p^2 · 2· W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)()) + p^2 ( 𝔼^_,(π)()^2 - 𝔼^_π,α_,(π)()^2) ≤ 2 p M^2 + p^2 · 2 · W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)()) + p^2 M^2 Using that p<1/2, we obtain that the last line is at most p·(2 M^2 + M^2/2 + W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)()) ) Put together, we obtain ^_() - ^_() ≤ p·( Q - 2· W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)() ) + 5/2 M^2 + W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)()) ) = p·(Q + 5/2 M^2 - W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)()) ) ≤ p · (Q + 5/2 M^2 - W·δ). As W≥ B, we conclude ^_() - ^_() <0 by the definition of B. The previous theorem tells us that it is sufficient to consider schedulers that maximize the expected future accumulated reward as soon as a reward of B has been accumulated on a path. The next question we answer is which of the expectation maximizing schedulers ∈Σ^max a variance maximizing scheduler should choose above the bound B. The answer can be found in the computations for the proof of the previous theorem. A closer look at Equation (<ref>) in the previous proof makes it clear that the scheduler should maximize the expected value of ^2 among all schedulers in Σ^max: Changing the scheduler among schedulers in only influences the 𝔼^_,(π)(^2) on the right hand side of Equation (<ref>). The following lemma, which is a variation of a result shown in <cit.>, shows that such a scheduler can be computed in polynomial time: There is a memoryless deterministic scheduler ∈Σ^max with 𝔼^_,s(^2) = sup_∈Σ^max𝔼^_,s(^2) for all states s∈ S. Further, as well as 𝔼^_,s(^2) for all states s can be computed in polynomial time. Let ^max be the MDP obtained from by only enabling actions in ^max(s) in state s. It is well-known that any scheduler in Σ^max can only schedule actions in ^max. Conversely, under Assumption <ref>, any scheduler for ^max viewed as a scheduler for is in Σ^max (see, e.g., <cit.>). So, it is sufficient to show that there is a memoryless deterministic scheduler for ^max that maximizes the expected value of ^2 from every state. In <cit.>, it is shown that in an MDP, in which all schedulers have the same expected value of from any state (as in our MDP ^max), a memoryless deterministic scheduler minimizing the variance exists and this scheduler as well as its variance can be computed in polynomial time via the definition of a new weight function ^' S →ℚ. While the result is stated for the minimization of the variance in <cit.>, the same reasoning works for the maximization as well. Since all schedulers in ^max have the same expected value from a state s, a scheduler maximizing ^_,s()=𝔼^_,s(^2) - 𝔼^_,s()^2 also maximizes 𝔼^_,s(^2). So, the result of <cit.> implies that a scheduler as claimed in the lemma can be computed in polynomial time. Put together, we can conclude the following theorem: The maximal variance can be expressed as ^max_() = sup_^_() where ranges over all schedulers such that for all path π with (π)≥ B, we have _π= for the memoryless deterministic scheduler given by Lemma <ref>. Let us denote the set of these schedulers by Σ^_B in the sequel. As weighted reachability can be seen as a special case of accumulated rewards, we know that randomization is necessary in order to maximize the variance. 
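To give a feel for the magnitudes involved, the constants entering the reward bound B can be computed directly on the same toy instance used above (again ours, purely illustrative). Here Q is instantiated with the bound on the maximal expected value of the squared accumulated reward from the lemma, M is the maximal expected accumulated reward, and delta is read as the minimal one-step loss in expectation caused by a non-maximizing action; this reading of delta is our interpretation of the definition in the text.

```python
# Rough computation of the constants behind the reward bound B (toy instance).

transitions = {
    "s0": {"a": [(0.5, "s1"), (0.5, "t")], "b": [(1.0, "s1")]},
    "s1": {"c": [(1.0, "t")]},
    "t":  {"stop": [(1.0, "t")]},
}
rew = {"s0": 2, "s1": 3, "t": 0}
emax = {"s0": 5.0, "s1": 3.0, "t": 0.0}   # from value iteration
amax = {"s0": {"b"}, "s1": {"c"}}          # expectation-maximizing actions

n = len(transitions)
R = max(rew.values())
p_min = min(p for acts in transitions.values()
            for succ in acts.values() for p, _ in succ)

Q = 2 * n ** 2 * R ** 2 / p_min ** (2 * n)  # generic bound on E^max(rew^2)
M = max(emax.values())                      # bound on the expected values

# minimal loss in expectation caused by a non-maximizing action
delta = min(emax[s] - (rew[s] + sum(p * emax[t] for p, t in succ))
            for s, acts in transitions.items() if s in amax
            for a, succ in acts.items() if a not in amax[s])

B = Q + 2.5 * M ** 2 / delta + 2 * M + 1
print(Q, M, delta, B)   # 10368.0, 5.0, 1.5, about 10420.7
```

Even on this tiny instance, where the accumulated reward never exceeds 5, the generic bound B is in the tens of thousands, which makes the exponential size of the construction below unsurprising.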
While Theorem <ref> does not allow us to reduce the maximization of the variance for accumulated rewards to the case of weighted reachability, it will nevertheless allow us to construct a quadratic program whose solution is the maximal variance. To this end, we consider an unfolding of the MDP . Let B be the bound provided above and R be the maximal reward occurring in as before. We define as the MDP that keeps track of the accumulated weight until a weight of more than B has been accumulated. The state space is given by S×{0,…,⌊ B ⌋+R}. The initial state is (,0). The transition probability function P^' for (s,w)∈ S×{0,…, ⌊ B ⌋} and α∈ is given by P^'((s,w),α,(t,v)) = P(s,α,t) if v=w+(s,α), and is set to 0 otherwise. All states (s,w) with w>B are made absorbing. So, the absorbing states are divided into the two sets T_1^' T×{0,…, ⌊ B ⌋} and T_2^' S×{⌊ B ⌋+1, …, ⌊ B ⌋+R} where T is the set of absorbing states of . Now, there is a one-to-one correspondence between schedulers for and schedulers in Σ^_B: In paths are simply equipped with the additional information how much reward has been accumulated, while this information is implicitly given in by the reward function . Other than that, the paths with reward ≤ B are the same in and . For paths with length >B, schedulers in cannot make choices anymore as they reach an absorbing state. Schedulers in Σ^_B cannot make choices anymore on these paths as they switch to the behavior of . Now, we can derive a system of linear inequalities Ax≤ b for a rational matrix A and vector b computable in polynomial time from such that the variable vector x contains, among others, variables y_q for each q∈ T_1^'∪ T_2^' such that these variables capture exactly the possible combinations of reachability probabilities of the terminal states in . This system of constraints is the system given in Equations (<ref>) – (<ref>) in Section <ref> transferred to the MDP . As for weighted reachability, we introduce two new variables e_1 and e_2 that express the expected value of and of ^2 and that only depend on the variables y_q with q∈ T_1^'∪ T_2^'. For the expected value, this is straight-forward: e_1 = ∑_(s,w)∈ T_1 y_(s,w)· w + ∑_(s,w)∈ T_2 y_(s,w)· (w+𝔼^max_,s()) For the expected value of ^2, we derive e_2 = ∑_(s,w)∈ T_1 y_(s,w)· w^2 + ∑_(s,w)∈ T_2 y_(s,w)· (w+ 2w𝔼^max_,s()+ 𝔼^_,s(^2)) The fact that this set of constraints works as intended is stated in the following lemma and proved in Appendix <ref>. lemmacorrectquad For each scheduler ∈Σ^_B, there is a solution to (<ref>) – (<ref>) in which e_1 = 𝔼^_() and e_2 = 𝔼^_(^2), and vice versa. Let A_s,w be the event that terminal state (s,w)∈ T_1∪ T_2 is reached in . One of the events A_s,w occurs with probability 1 under any scheduler. In , the corresponding events are that either a absorbing state is reached after a path π with (π)≤ B or that the first prefix of a run with reward w>B ends in state s. Given a scheduler ∈Σ^_B viewed as a scheduler for , we know (see <cit.>) that there is a solution to (<ref>) such that ^_ (◊ (s,w)) = y_(s,w) for all (s,w)∈ T_1∪ T_2, and vice versa. In , this corresponds to ^_ (A_s,w) = y_(s,w). For the expected value, it is then clear that 𝔼^_()= ∑_(s,w)∈ T_1 y_(s,w)· w + ∑_(s,w)∈ T_2 y_(s,w)· (w+𝔼^max_,s()) = e_1 as the scheduler switches to the behavior of which maximizes the future expected rewards in case of event A_s,w for (s,w)∈ T_2. 
For the expected value of ^2, we compute 𝔼^_() = ∑_(s,w)∈ T_1∪ T_2𝔼^_(^2 | A_s,w) ·^_(A_s,w) = ∑_(s,w)∈ T_1 w^2 · y_s,w + ∑_(s,w)∈ T_2𝔼^_(^2 | A_s,w) · y_s,w = ∑_(s,w)∈ T_1 w^2 · y_s,w + ∑_(s,w)∈ T_2𝔼^_,s((w+)^2) · y_s,w = ∑_(s,w)∈ T_1 w^2 · y_s,w + ∑_(s,w)∈ T_2𝔼^_,s((w+)^2) · y_s,w = ∑_(s,w)∈ T_1 w^2 · y_s,w + ∑_(s,w)∈ T_2 (w^2 + 2 w 𝔼^_,s() + 𝔼^_,s(^2)) · y_s,w = e_2. Consequently, the maximal variance can be found via the following optimization objective subject to (<ref>) – (<ref>): maximize e_2 - e_1^2. As the quadratic program (<ref>) – (<ref>) is a concave maximization problem and can be constructed in polynomial time from , which is of size exponential in the size of , we conclude that the optimal variance can be computed in exponential time. Furthermore, from a solution to the quadratic program, we can extract a memoryless scheduler for as in Corollary <ref>. A memoryless scheduler for corresponds to a reward-based finite-memory scheduler for . This is the case here as a scheduler only has to keep track of the reward accumulated so far up to the bound B. Hence, we conclude: The maximal variance ^max_() as well as an optimal randomized reward-based finite-memory scheduler can be computed in exponential time. §.§ Computation of the demonic variance For the demonic variance, we follow a similar line of argumentation as we did for the maximal variance. First, we restrict the class of schedulers that we have to consider to obtain the demonic variance. Afterwards, we formulate the computation of the demonic variance as a quadratic program. To investigate the structure of schedulers necessary to obtain the demonic variance, fix a scheduler . We take a closer look at ^,_() for a scheduler : ^,_() =1/2 (^_() +^_() + (𝔼^_() -𝔼^_())^2) = 1/2 (𝔼^_(^2) - 2𝔼^_()𝔼^_() + ^_() + 𝔼^_()^2 ) We can see that in order to maximize ^,_() for fixed , the scheduler has to maximize 𝔼^_(^2) - 2𝔼^_()𝔼^_(). For simplicity, abbreviate C𝔼^_(). Using the bound M = max_s∈ S𝔼_,s^max(), we know 0≤ C ≤ M and want to find a scheduler maximizing 𝔼^_(^2) - 2· C·𝔼^_(). We will establish results analogous to the results in Section <ref> for the maximization of this expression. The proofs follow the similar ideas with slightly different calculations. *Bound B^' To provide a bound B^' in analogy to Section <ref>, we use the values Q, M, and δ defined there again. We set B^' = Q/2δ+2M+1. As before, we know that ^_(≥ B^') ≤1/2. The following result is shown following the same idea as for Theorem <ref> and is proved in Appendix <ref>. theorembounddem Let 0≤ C ≤ M. Let be a scheduler for and let π be a finite path with (π)≥ B^'. Assume that π has positive probability under and that there is an action α∈(s) ∖^max(s) such that (π)(α)>0. Let ∈Σ^max. Then, setting ↑_π,α, we have 𝔼^_(^2) - 2· C·𝔼^_() < 𝔼^_(^2) - 2· C·𝔼^_() . Let A be the event that scheduler chooses action α at the end of path π and let p be the probability of this event under . Note that switches to the behavior of exactly in case A occurs. By the definition of B^', we know that p<1/2. Now, we aim to estimate the difference 𝔼^_(^2) - 2· C·𝔼^_() - (𝔼^_(^2) - 2· C·𝔼^_()) = 𝔼^_(^2) - 𝔼^_(^2) +2· C · p · ( 𝔼^_,(π)() - 𝔼^_π,α_,(π)()) In the proof of Theorem <ref>, we computed 𝔼^_(^2) - 𝔼^_(^2) ≤ p ( Q - 2· W · (𝔼^_,(π)() - 𝔼^_π,α_,(π)())) where W=(π). So, we conclude using C≤ M 𝔼^_(^2) - 2· C·𝔼^_() - (𝔼^_(^2) - 2· C·𝔼^_()) ≤ p · ((2 M - 2 W ) ( 𝔼^_,(π)() - 𝔼^_π,α_,(π)()) + Q ) ≤ p · (2 (M-B^') δ +Q). By the choice of B^' this value is less than 0. 
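Both the maximal-variance computation above and the demonic-variance computation that follows operate on the same reward-tracking unfolding. The sketch below builds such an unfolding for a toy instance; the encoding, the deliberately small bound and the assumption of state-based rewards (as in the reward function S → ℕ used in this section) are ours.

```python
# Sketch of the reward-tracking unfolding: states are pairs (s, w) where w is
# the reward accumulated so far; once w exceeds the bound B, the state is made
# absorbing.  A step from (s, w) via any action leads to (t, w + rew[s]).

from math import floor

def unfold(transitions, rew, init, B):
    R = max(rew.values())
    cap = floor(B) + R                      # w ranges over 0 .. floor(B)+R
    unfolded = {}
    for s in transitions:
        for w in range(cap + 1):
            if w > B:                       # absorbing above the bound
                unfolded[(s, w)] = {"stay": [(1.0, (s, w))]}
                continue
            unfolded[(s, w)] = {
                a: [(p, (t, w + rew[s])) for p, t in succ]
                for a, succ in transitions[s].items()
            }
    return unfolded, (init, 0)

transitions = {
    "s0": {"a": [(0.5, "s1"), (0.5, "t")], "b": [(1.0, "s1")]},
    "s1": {"c": [(1.0, "t")]},
    "t":  {"stop": [(1.0, "t")]},
}
rew = {"s0": 2, "s1": 3, "t": 0}

unfolded, start = unfold(transitions, rew, "s0", B=6)
print(len(unfolded), start)   # 30 states ((6+3+1) copies of each), ('s0', 0)
```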
Looking again at the calculations in the preceding proof, we can observe that an optimal scheduler should in fact switch to a scheduler maximizing the expected value of ^2 among the schedulers maximizing the expected value of above the bound B^'. The demonic variance can be expressed as ^_() = sup_,^,_() where and range over all schedulers such that for all paths π with (π)≥ B, we have _π= for the memoryless deterministic scheduler given by Lemma <ref>. In order to compute the demonic variance, we use the constraints (<ref>) – (<ref>) with a vector of variables x containing the variables e_1 and e_2 as well as a copy (<ref>^') – (<ref>^') using variables x^' containing e_1^' and e_2^'. By Lemma <ref>, for each pair of schedulers and , there is a solution to (<ref>) – (<ref>) and (<ref>^') – (<ref>^') such that e_1 = 𝔼^_(), e_2 = 𝔼^_(^2), e_1^' = 𝔼^_(), e_2^' = 𝔼^_(^2), and vice versa. By Lemma <ref>, we have ^,_() = 1/2( _^() + _^() + (𝔼_^() - 𝔼_^())^2 ) = 1/2( 𝔼^_(^2) - 2 𝔼_^()𝔼_^() + 𝔼^_(^2) ). This means we can express the demonic variance via the objective function maximize 1/2(e_2 - 2e_1e_1^' + e_2^'). Together with constraints (<ref>) – (<ref>) and (<ref>^') – (<ref>^'), this is a bilinear program that can be computed from and in exponential time. Reasoning as in Corollary <ref>, we can extract memoryless deterministic schedulers for the unfolded MDP , i.e., deterministic reward-based finite-memory schedulers for . We conclude: The demonic variance ^_() as well as a pair of deterministic reward-based finite-memory schedulers and with ^_() = ^,_() can be computed via a bilinear program of exponential size computable in exponential time from . Determining the precise complexity is left as future work.
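While the exact complexity remains open, very small instances can simply be brute-forced: the corollaries above guarantee that the demonic variance is attained by a pair of memoryless deterministic schedulers, and comparing it with the maximal variance yields the non-determinism score. The sketch below does this for the toy weighted-reachability instance used earlier (ours, for illustration only).

```python
# Brute-force demonic variance and non-determinism score for a toy
# weighted-reachability instance (ours): in state s0, action a yields weight
# 0 or 2 with probability 1/2 each, action b yields weight 4 surely.

from itertools import product

outcome = {
    "a": {0.0: 0.5, 2.0: 0.5},
    "b": {4.0: 1.0},
}

def mean_var(dist):
    m = sum(x * p for x, p in dist.items())
    return m, sum(p * (x - m) ** 2 for x, p in dist.items())

stats = {act: mean_var(d) for act, d in outcome.items()}

# demonic variance: maximize 1/2 (Var_1 + Var_2 + (E_1 - E_2)^2) over pairs
# of deterministic schedulers.
dem = max(0.5 * (v1 + v2 + (m1 - m2) ** 2)
          for (m1, v1), (m2, v2) in product(stats.values(), repeat=2))

# maximal variance: scan randomized schedulers using the mixing identity.
(m_a, v_a), (m_b, v_b) = stats["a"], stats["b"]

def mix_variance(q):
    return q * v_a + (1 - q) * v_b + q * (1 - q) * (m_a - m_b) ** 2

vmax = max(mix_variance(i / 1000) for i in range(1001))
print(dem, vmax, (dem - vmax) / vmax)   # 5.0, about 2.778, NDS about 0.8
```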
http://arxiv.org/abs/2406.18846v1
20240627023558
AFBench: A Large-scale Benchmark for Airfoil Design
[ "Jian Liu", "Jianyu Wu", "Hairun Xie", "Guoqing Zhang", "Jing Wang", "Wei Liu", "Wanli Ouyang", "Junjun Jiang", "Xianming Liu", "Shixiang Tang", "Miao Zhang" ]
cs.CE
[ "cs.CE" ]
Jian Liu^1,2    Jianyu Wu^2    Hairun Xie^3    Guoqing Zhang^1,2    Jing Wang^3    Wei Liu^3    Wanli Ouyang^2    Junjun Jiang^1    Xianming Liu^1    Shixiang Tang^2    Miao Zhang^3 ^1 Harbin Institute of Technology    ^2 Shanghai Artificial Intelligence Laboratory    ^3 Shanghai Aircraft Design and Research Institute July 1, 2024 Figure: Our Airfoil Generation and Editing Software. (a) Generating diverse candidate airfoils. (b) Editing keypoints and editing physical parameters. § ABSTRACT Data-driven generative models have emerged as promising approaches towards achieving efficient mechanical inverse design. However, due to the prohibitively high cost in time and money, there is still a lack of open-source and large-scale benchmarks in this field. This is especially the case for airfoil inverse design, which requires generating and editing diverse airfoils of high geometric and aerodynamic quality following multimodal instructions, i.e., dragging points and physical parameters. This paper presents AFBench, an open-source endeavor in airfoil inverse design, including a large-scale dataset with 200 thousand airfoils and high-quality aerodynamic and geometric labels, two novel and practical airfoil inverse design tasks, i.e., conditional generation on multimodal physical parameters and controllable editing, and comprehensive metrics to evaluate various existing airfoil inverse design methods. Our aim is to establish AFBench as an ecosystem for training and evaluating airfoil inverse design methods, with a specific focus on data-driven controllable inverse design models guided by multimodal instructions, capable of bridging the gap between ideas and execution and between academic research and industrial applications. We have provided baseline models, comprehensive experimental observations, and analysis to accelerate future research. Our baseline model is trained on an RTX 3090 GPU within 16 hours. The codebase, datasets and benchmarks will be available at <https://hitcslj.github.io/afbench/>. § INTRODUCTION The airfoil inverse design problem lies at the center of automatic airfoil design: it seeks design input variables, i.e., physical parameters, that optimize an underlying objective function, e.g., aerodynamics. Previous methods can be divided into two categories: optimization methods <cit.> and data-driven methods <cit.>. First, the optimization-based methods usually design an objective function by constructing a mathematical model and leverage typical optimization algorithms, e.g., genetic algorithms <cit.>, adjoint optimization <cit.> and topology optimization <cit.>, to find the optimal input variables as the design parameters. Despite their success, these methods suffer from considerable time consumption and limited diversity of the optimal design variables, due to the constructed physical model of airfoils. Second, the data-driven methods <cit.> typically borrow ideas from the advancements in conditional generative models in artificial intelligence. Popular generative methods such as CGAN <cit.>, CVAE <cit.>, and Diffusion models <cit.> have been explored, demonstrating their effectiveness. 
However, current data-driven methods suffer from the following three drawbacks. First, the existing datasets are relatively small-scale, e.g., the design geometry dataset UIUC <cit.> contains only thousands of samples. Therefore, data-driven models trained on such datasets have limited generalization capabilities and fail to generate diverse solutions that meet the requirements. Second, the current datasets typically provide only a single condition, i.e., aerodynamic parameters, and thus cannot handle multi-condition design, e.g., simultaneously controlling the leading edge radius and the upper crest position as geometric parameters, which is a real industrial requirement in airfoil design. Third, current airfoil inverse design methods do not support progressively editing existing designs according to manual and multimodal requirements, which limits their applications in industry. For example, one of our authors from Shanghai Aircraft Design and Research Institute, who has over 10 years of experience in airfoil design, noted that each airfoil used in current commercial airplanes underwent years of progressive refinement by hundreds of engineers. To drive the development of generative models in the field of engineering design, we construct a comprehensive airfoil benchmark, AFBench, that can serve as a cornerstone for coping with the aforementioned challenges, with the following merits: (1) Tasks – Multi-Conditional Generation and Editing in Airfoil Inverse Design: We tailor the dataset to accommodate two new but more practical tasks in real airfoil design: multi-conditional airfoil generation and multimodal airfoil editing. The airfoil generation task is not limited to the previous approach of generating airfoils based solely on given aerodynamic labels such as the lift-to-drag ratio. Instead, it involves generating airfoils based on multiple intricate geometric labels proposed by our authors who are experts in airfoil design, which is more challenging but also more practical than previous single-condition airfoil generation. The newly proposed airfoil editing task currently supports editing the control points and physical parameters of the airfoil. The editing of physical parameters is not present in traditional airfoil editing software, and the movement of control points, compared to the spline interpolation in traditional software, is enabled by AI models with a broader design space. (2) Datasets – Large-scale Airfoil Datasets with High-quality and Comprehensive Geometric and Aerodynamic Labels: Regarding the aforementioned airfoil inverse design tasks, the training subset of the proposed AFBench consists of 200,000 well-designed airfoils, both synthetic and manually designed, with 11 geometric parameters and aerodynamic properties under 66 work conditions (Mach number from 0.2 to 0.7, lift coefficient from 0 to 2). To construct AFBench, we propose an automatic data engine that includes data synthesis, high-quality annotation and low-quality filtering.
Different from previous datasets, we (i) not only combine all airfoils in existing datasets such as UIUC <cit.> and NACA <cit.>, but also include 2,150 new manually-designed supercritical airfoils from Shanghai Aircraft Design and Research Institute, a category that is severely underrepresented in existing datasets; (ii) further enlarge the dataset to 200,000 airfoils by effective data synthesis with conventional physical models and unconditional generative models; (iii) annotate geometric and aerodynamic labels by CFD (computational fluid dynamics) simulation software. (3) Open-source Codebase and Benchmarks – An Open-source Codebase of Data-driven Generative Models for Airfoil Inverse Design with State-of-the-art Methods and Comprehensive Evaluation Metrics: Since a comprehensive and clean codebase for comparing and analyzing different airfoil inverse design methods is still absent, we release a comprehensive and publicly accessible codebase to facilitate future research. This codebase includes multiple existing methods, e.g., cVAE <cit.> and cGAN <cit.>, and our newly proposed primary architectures for both multi-conditional airfoil generation and controllable airfoil editing, PK-VAE, PK-VAE^2, PK-GAN, PKVAE-GAN, PK-DIFF and PK-DiT, inspired by the mainstream generative frameworks, i.e., VAE, GAN and diffusion models. To facilitate exploration and usage, we have also provided a user-friendly demo that allows different airfoil inverse design methods to be used for online generation and editing. Furthermore, different from previous benchmarks that only evaluate aerodynamic performance, we also provide the interface to evaluate the geometric quality, the aerodynamic quality and the diversity of the generated airfoils, which are also crucial for airfoil inverse design. The main contributions of this work are summarized as follows: * We propose the use of generative methods for two key tasks in airfoil design: multi-conditional airfoil generation and airfoil editing. We also establish comprehensive evaluation metrics including diversity, controllability, geometric quality and aerodynamic quality. * We propose a large-scale and diverse airfoil dataset for airfoil generative design. This dataset includes 200 thousand airfoil shapes, accompanied by detailed geometric and aerodynamic annotation labels. The dataset can provide valuable resources for training and evaluating generative models in airfoil inverse design. * We construct and open-source a codebase that encompasses generative methods in airfoil design, including foundational techniques such as cVAE and cGAN as well as advanced models like PK-GAN, PK-VAE, PKVAE-GAN and PK-DiT. We provide a user-friendly demo that allows for visualizing and experiencing airfoil design in real time. § RELATED WORK Airfoil Inverse Design. The ultimate goal of airfoil inverse design is to use algorithms to automatically find airfoils that meet the given requirements. Previous efforts <cit.> have explored datasets for investigating airfoil aerodynamic characteristics, but have largely relied on the UIUC and NACA airfoil shapes, lacking the support needed to explore large-scale airfoil models. Our dataset, on the other hand, boasts a more diverse collection of airfoils and rich annotations. Additionally, we propose AFBench, which includes airfoil generation and airfoil editing. Airfoil generation is a combination of and complement to inverse design and parameterization, as it can generate airfoils that satisfy geometric constraints based on PARSEC parameters.
We also propose a new task, airfoil editing, to allow designers to more easily find the optimal airfoil based on their experience. Conditional Generative Methods in Airfoil Design. There have been new attempts to leverage the advantages of both implicit representations and generative methods in airfoil design. A variational autoencoder <cit.> trains a model to minimize a reconstruction loss and a latent loss, and is usually optimized on the sum of these losses. <cit.> proposes two advanced CVAEs for the inverse airfoil design problem by combining conditional variational autoencoders (CVAE) with different latent distributions. A generative adversarial network <cit.> uses a generative neural network to produce an airfoil and a discriminative neural network to judge whether the airfoil is real or fake. CGAN <cit.> improves the original GAN by inputting the conditions to both the generator and the discriminator. For instance, <cit.> generates shapes with low or high lift coefficients. By inputting aerodynamic characteristics such as the lift-to-drag ratio (Cl/Cd) or shape parameters, it is possible to guide the shape generation process toward a particular airfoil. Diffusion models <cit.> are the emerging generative models <cit.> in engineering design, but there are still few attempts to apply them to airfoil inverse design. We provide a more detailed literature review in Appendix <ref>. § AUTOMATIC DATA ENGINE Since diverse airfoil datasets are not easily accessible publicly, we develop a data engine to collect 200,000 diverse airfoils, dubbed AF-200K. Our proposed AF-200K dataset first includes airfoils from two public datasets, UIUC and NACA, and then leverages our proposed data engine to generate synthetic airfoils. The data engine has three stages: (1) a synthetic airfoil generation stage; (2) a geometric and aerodynamic parameter annotation stage; (3) a low-quality airfoil filtering stage. We illustrate the data engine pipeline in Fig. <ref> and visualize generated airfoils in Fig. <ref>. §.§ Synthetic Airfoil Generation Stage Based on airfoils in UIUC and newly collected airfoils that are manually designed by COMAC (Commercial Aircraft Corporation of China), we synthesize airfoils with both physical models and unconditional generative methods. CST-assisted Generation. The CST-assisted generation first parameterizes airfoils with a physical model, i.e., the CST model, and then perturbs these parameters. Given one manually designed airfoil f_0, we parameterize the airfoil with the CST method <cit.> as p_0 = (p_0^1, p_0^2, ..., p_0^M), where M is the number of physical parameters [The exact formulation of CST models and fitting methods are detailed in the Appendix <ref>.]. Afterwards, we perturb the parameters of the airfoil with Latin hypercube sampling (LHS). Take generating N airfoils based on one manually-designed airfoil as an example: for every variable in (p_0^1, p_0^2, ..., p_0^M), we evenly divide its range into N intervals and randomly sample one value in each interval, one for each of the N generated airfoils. With Latin hypercube sampling, the generated airfoils are sampled uniformly from the CST parametric space for supercritical airfoils. The generated airfoils are illustrated in Fig. <ref>.
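To make the perturbation step concrete, the following is a minimal NumPy sketch of Latin hypercube perturbation around one set of CST parameters. The ±10% perturbation range and the function name are illustrative assumptions; the text above does not specify the actual bounds used in AF-200K.

import numpy as np

def lhs_perturb(p0, n_airfoils, rel_range=0.1, seed=None):
    # p0: baseline CST parameter vector of one manually designed airfoil
    # rel_range: assumed +/-10% perturbation bounds (not specified in the paper)
    rng = np.random.default_rng(seed)
    p0 = np.asarray(p0, dtype=float)
    m = p0.size
    lo, hi = p0 * (1.0 - rel_range), p0 * (1.0 + rel_range)
    # stratify [0, 1) into n_airfoils intervals and draw one sample per interval,
    # with an independent random permutation of the intervals for every parameter
    strata = np.stack([rng.permutation(n_airfoils) for _ in range(m)], axis=1)
    u = (strata + rng.random((n_airfoils, m))) / n_airfoils
    return lo + u * (hi - lo)   # shape: (n_airfoils, m)

Each row of the returned array is one perturbed parameter vector, which is then converted back to surface coordinates with the CST model.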
Unconditional Airfoil Generation Stage. While the airfoils generated by perturbing the parameters of CST models significantly extend the training datasets, the design space is still limited by the capability of CST models (Fig. <ref>). To further explore a more general design space, we propose two unconditional generative-model-based methods, i.e., BézierGAN <cit.> and diffusion models <cit.>, to generate airfoils for the training set. Specifically, we train BézierGAN <cit.> and diffusion models <cit.> using our selected airfoils from the UIUC dataset (referred to as UIUC-Picked). We generate 10,000 airfoils with BézierGAN and another 10,000 airfoils with the diffusion model. We detail the architectures of BézierGAN and the diffusion model in Appendix <ref>. §.§ Geometric and Aerodynamic Parameter Annotation Stage Aerodynamic Annotation. We compute the angle of attack (AoA) and drag coefficient (CD) of each airfoil under different working conditions. Specifically, we set the Reynolds number to 100,000 and vary the Mach number from 0.2 to 0.7 and the lift coefficient (CL) from 0.0 to 2.0 (in increments of 0.2). The working conditions are denoted as w_c = [Ma, CL], where Ma is the Mach number and CL is the lift coefficient. For each working condition, we pass the airfoil coordinates into XFoil <cit.> to calculate the corresponding aerodynamic labels, including the angle of attack (AoA), drag coefficient (CD), and moment coefficient (CM). Geometric Annotation. The geometric label is primarily based on the PARSEC physical parameters, with control keypoints as supplementary information. The PARSEC physical parameters (as shown in Fig. <ref>) include the leading edge radius (R_le), upper crest position (X_up, Y_up), upper crest curvature (Z_xxup), lower crest position (X_lo, Y_lo), lower crest curvature (Z_xxlo), trailing edge position (Y_te), trailing thickness (Δ Y_te), and two trailing edge angles (α_te, β_te). Figure: PARSEC physical parameters. We utilize B-spline <cit.> interpolation to convert the discrete points into a continuous representation, and then calculate the first-order and second-order derivatives, as well as the extrema. For the control keypoints, we select a subset of the airfoil surface points, approximately one-twentieth of the original number of points. The main purpose is to control the overall contour of the airfoil, ensuring that it does not undergo drastic changes. §.§ Airfoil Filtering Stage Given the airfoils generated with the parametric CST model and the generative models, we need to filter out those with low aerodynamic performance to prevent the generative model from producing low-quality airfoils. Specifically, we use a numerical solver based on the Reynolds-Averaged Navier-Stokes (RANS) equations to calculate the physical parameters of the flow fields. These parameters are then used to assess the aerodynamic performance of the generated airfoils. We set 66 work conditions (as detailed in Section <ref>), and if an airfoil fails to converge under all 66 conditions, we classify it as a poor-quality airfoil and discard it.
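As a concrete illustration of the geometric annotation stage above, the sketch below extracts a few PARSEC-style labels (crest positions, crest curvatures and trailing-edge quantities) from the upper and lower surface points using spline interpolation. It is only a sketch under simplifying assumptions: the full 11-parameter annotation additionally includes the leading edge radius and the trailing edge angles, and the exact B-spline fitting procedure used for AF-200K may differ.

import numpy as np
from scipy.interpolate import CubicSpline

def geometric_labels(upper, lower):
    # upper, lower: (K, 2) surface points with x strictly increasing from 0 (LE) to 1 (TE)
    su = CubicSpline(upper[:, 0], upper[:, 1])
    sl = CubicSpline(lower[:, 0], lower[:, 1])
    x = np.linspace(0.0, 1.0, 2001)
    x_up = x[np.argmax(su(x))]          # upper crest position
    x_lo = x[np.argmin(sl(x))]          # lower crest position
    return {
        "X_up": float(x_up), "Y_up": float(su(x_up)), "Z_xxup": float(su(x_up, 2)),
        "X_lo": float(x_lo), "Y_lo": float(sl(x_lo)), "Z_xxlo": float(sl(x_lo, 2)),
        "Y_te": float(0.5 * (su(1.0) + sl(1.0))),      # trailing edge position
        "dY_te": float(su(1.0) - sl(1.0)),             # trailing edge thickness
    }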
§ AFBENCH: DATASET PRESENTATION AND BENCHMARKING SETUP §.§ Dataset Presentation Based on the aforementioned automatic data engine, the AF-200K dataset includes a diverse collection of about 200,000 airfoils, including UIUC, NACA, supercritical airfoils, and generated airfoils, as shown in Fig. <ref>. From the UIUC dataset, we have carefully selected 1,433 airfoils with favorable aerodynamic performance from more than 1,600 original raw data entries. For the NACA airfoils, we referenced the AIRFRANS <cit.> design space, resulting in a total of 5,000 NACA 4-digit and 5,000 NACA 5-digit airfoils. The supercritical airfoil subset was generated by perturbing and expanding upon designs provided by COMAC engineers, using the CST method, yielding a total of 21,500 airfoils. To further augment the UIUC dataset, we generated 143,300 airfoils through CST-assisted generation. Additionally, we employed generative modeling approaches to synthesize 10,000 airfoils each with BézierGAN <cit.> and diffusion models <cit.>. All airfoil data are stored in the form of 2D coordinates, with each airfoil represented by 257 points. In cases where the original data did not have 257 points, we used B-spline interpolation to ensure a consistent representation. The AF-200K dataset is split into training, validation, and test sets with a ratio of 8:1:1. §.§ Airfoil Inverse Design Tasks Controllable Airfoil Generation. The controllable airfoil generation task aims at generating airfoils, each described by 257 points, given the physical parameters and control keypoints. The generated airfoils should be consistent with the given physical parameters and control keypoints while exhibiting high diversity, good geometric quality and good aerodynamic quality. Editable Airfoil Generation. The editable airfoil generation task aims at editing a given airfoil following an instruction. Specifically, the airfoil can be edited through its physical parameters and control keypoints, e.g., enlarging the leading edge radius by a factor of two, or dragging one of the control points. The edited airfoil should conform to the instruction, i.e., be consistent with the given physical parameters or control keypoints, while maintaining high diversity, good geometric quality and good aerodynamic quality. §.§ Baseline Methods As shown in Fig. <ref>, we train four families of generative models: VAE, GAN, VAE-GAN and diffusion models. PK-VAE and PK-VAE^2. Based on <cit.>, we modify the plain VAE by incorporating PARSEC parameters <cit.> and control keypoints as geometry constraints. PK-VAE^2 is a composite of VAEs: EK-VAE, EP-VAE and PK-VAE, which together enable airfoil editing. Specifically, EK-VAE achieves editing by predicting physical parameters from control keypoints, while EP-VAE predicts control keypoints from physical parameters. By training these components separately and then combining them with PK-VAE for joint training, we can achieve efficient airfoil editing. PK-GAN. Building upon the Bézier-GAN approach <cit.>, we introduce a conditional formulation in our model. We employ a technique similar to Adaptive Instance Normalization <cit.> to seamlessly integrate the condition embedding at multiple scales within the generator. Simultaneously, we adopt a similarity-based approach to blend the condition information into the discriminator. PKVAE-GAN. Inspired by <cit.>, we utilize a conditional variational autoencoder (cVAE) as the generator, and train it with a discriminator conditioned on physical parameters and keypoints. PK-Diffusion. For conditional diffusion models <cit.>, we design two variants based on U-Net <cit.> and Transformer <cit.> architectures. In the U-Net model (PK-DIFF), the encoder extracts features by stacking multiple convolutional layers, and the decoder reconstructs features likewise; the encoder outputs are concatenated with the corresponding-scale inputs of the decoder through skip connections. The time steps and conditions are mapped through MLP layers and then integrated with the input features, as sketched below.
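The following PyTorch-style sketch shows how one stage of such a conditional model might inject the diffusion time step and the physical-parameter/keypoint condition into the features. The module name, layer sizes and activation are illustrative assumptions, not the exact PK-DIFF implementation.

import torch
import torch.nn as nn

class ConditionedStage(nn.Module):
    # One encoder/decoder stage: 1-D convolution over the 257 airfoil points,
    # plus additive MLP embeddings of the time step and the condition vector.
    def __init__(self, channels, cond_dim, emb_dim=128):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.t_mlp = nn.Sequential(nn.Linear(1, emb_dim), nn.SiLU(), nn.Linear(emb_dim, channels))
        self.c_mlp = nn.Sequential(nn.Linear(cond_dim, emb_dim), nn.SiLU(), nn.Linear(emb_dim, channels))

    def forward(self, x, t, cond):
        # x: (B, C, 257) features; t: (B, 1) time steps; cond: (B, cond_dim) labels
        h = self.conv(x)
        h = h + self.t_mlp(t).unsqueeze(-1) + self.c_mlp(cond).unsqueeze(-1)
        return torch.relu(h)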
In the DiT model (PK-DIT), we also integrate time steps and conditions by employing MLPs to map them before feature extraction. Feature extraction is performed through four layers of DiT blocks. §.§ Evaluation Metrics The performance of a model depends on three factors: controllability, diversity of the generated/edited airfoils, and quality of the generated airfoils (including both geometric and aerodynamic quality). We evaluate the performance using the following metrics: * To measure how well the conditions are satisfied, we propose the label error: σ_i = | p̂_i - p_i |, i = 1, 2, ..., 11, where σ_i is the label error for the i-th physical parameter, p̂_i is the i-th physical parameter calculated from the generated airfoil, and p_i is the i-th physical parameter of the given condition. We denote {p_i}_i=1^11 as {R_le, X_up, Y_up, Z_xxup, X_lo, Y_lo, Z_xxlo, Y_te, Δ Y_te, α_te, β_te}, respectively. * To quantify the diversity of the generated airfoils, we propose the following formula: 𝒟 = (1/n) ∑_i=1^n logdet(ℒ_S_i), where n is the number of sampled subsets and the set of generated airfoils is denoted as 𝐅 = (f_1, f_2, ..., f_M). The i-th subset of the data, S_i, is a subset of 𝐅 with a smaller size N (where N < M). The matrix ℒ_S_i is the similarity matrix, calculated from the Euclidean distances between the airfoils in the subset S_i, as proposed in <cit.>. det(ℒ_S_i) denotes the determinant of the similarity matrix ℒ_S_i, and logdet(ℒ_S_i) is the natural logarithm of this determinant, which is used to prevent numerical underflow. * To measure the geometric quality of the airfoils, we propose the smoothness metric: ℳ = ∑_i=1^N Distance(P_n ⊥ P_n-1P_n+1), where P_n is the n-th point, P_n-1P_n+1 is the line connecting its adjacent points, Distance calculates the perpendicular distance from point P_n to the line P_n-1P_n+1 (accumulated over the points of an airfoil), and N represents the number of generated airfoils. * To measure the aerodynamic quality of the model, we propose the success rate. We generate airfoils and evaluate whether they converge under M different work conditions. The success rate ℛ is calculated as: ℛ = (1/N) ∑_i=1^N 𝕀( ∑_j=1^M C_j / M > 60% ), where C_j is a binary variable that takes the value 0 or 1, indicating whether the j-th work condition results in non-convergence (0) or convergence (1) for the i-th airfoil, and N represents the number of generated airfoils. Here, 𝕀(x) is the indicator function, with 𝕀(True) = 1 and 𝕀(False) = 0.
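The smoothness and diversity metrics can be computed directly from the 257×2 point sets. The sketch below implements the perpendicular-distance smoothness exactly as defined above; for the diversity term it assumes an RBF similarity kernel with bandwidth sigma, since the paper only states that the similarity matrix is built from pairwise Euclidean distances following the cited reference.

import numpy as np

def smoothness(airfoil):
    # airfoil: (257, 2); sum of perpendicular distances of each interior point
    # to the line through its two neighbours (lower means smoother)
    p_prev, p, p_next = airfoil[:-2], airfoil[1:-1], airfoil[2:]
    chord = p_next - p_prev
    rel = p - p_prev
    cross = np.abs(chord[:, 0] * rel[:, 1] - chord[:, 1] * rel[:, 0])
    return float(np.sum(cross / (np.linalg.norm(chord, axis=1) + 1e-12)))

def diversity_logdet(subset, sigma=1.0):
    # subset: (N, 514) flattened airfoils; log-determinant of an assumed RBF
    # similarity matrix built from pairwise Euclidean distances
    d2 = np.sum((subset[:, None, :] - subset[None, :, :]) ** 2, axis=-1)
    L = np.exp(-d2 / (2.0 * sigma ** 2))
    _, logdet = np.linalg.slogdet(L)
    return float(logdet)

The reported diversity score 𝒟 averages diversity_logdet over the n randomly drawn subsets S_i.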
§ BENCHMARKING RESULTS The baselines in Sec. <ref> are trained for 500 epochs with a batch size of 512. In the following, we present the results of our proposed methods on controllable airfoil generation and controllable airfoil editing, as well as an ablation study to validate the effectiveness of the dataset and methodology. §.§ Comprehensive Method Comparison Controllable Airfoil Generation. We evaluated all our baselines and report the experimental results in Tab. <ref>, from which we can make the following observations. First, our proposed AF-200K dataset is more effective than previous datasets. As the dataset size increases from UIUC (1,600) to Supercritical Airfoil (21,500), and finally to AF-200K, the label error decreases, the diversity score increases (indicating more diverse generated airfoils), and the smoothness value decreases (indicating better geometric quality). These results demonstrate that as the size and diversity of the dataset increase, the model performance increases, which validates the effectiveness of our proposed methods. Second, comparing PK-VAE and PK-GAN, PK-VAE exhibits lower label error and generates more consistent airfoil shapes, although with reduced diversity due to the strong constraints imposed by the reconstruction loss in the VAE. PK-GAN, compared to cGAN, shows that by using Bézier curves as intermediate representations it generates smoother airfoil shapes. PKVAE-GAN combines the stability of the VAE and the diversity of the GAN, positioning its performance in between. The diffusion architecture is simpler and more stable in training compared to the VAE and GAN. Comparing PK-DIFF, which operates on raw data with a U-Net architecture, with PK-DIT, which operates in a latent space, PK-DIT generates more diverse and smoother airfoil shapes. Controllable Airfoil Editing. For training the airfoil editing task, we randomly sample two airfoils as the source and target. The airfoil editing task is divided into two parts: editing the control points and editing the physical parameters. For editing the control keypoints, the model takes as input (source-physical, target-keypoint) and is expected to output an airfoil that satisfies (target-physical, target-keypoint). For editing the physical properties, the model takes as input (target-physical, source-keypoint) and is expected to output an airfoil that satisfies (target-physical, target-keypoint). The results for these two editing tasks are presented in Table <ref>. It can be observed that PK-VAE^2 outperforms PK-VAE across the board. Specifically, PK-VAE^2 achieves a lower label error in physical parameter editing and demonstrates a higher diversity score and better smoothness in keypoint editing. §.§ Ablation Study Pretrain and Finetune. To verify whether the AF-200K dataset can help the model generate airfoils with better aerodynamic capabilities, we select about 20,000 airfoils with superior aerodynamic performance from the AF-200K dataset, pre-train on the full AF-200K dataset and fine-tune on this subset. The experimental results show that fine-tuning on airfoils with high aerodynamic performance improves the model's success rate from 33.6% to 42.99% (Tab. <ref>). Different Generative Data. To evaluate the impact of different generative data on the final model performance, we select 10,000 airfoils each from NACA-GEN, CST-GEN, BézierGAN-GEN, and Diffusion-GEN, and train the model on these datasets. The results are shown in Tab. <ref>. We find that CST-GEN provides the model with the most diversity, BézierGAN-GEN yields the best smoothness, and Diffusion-GEN imparts the greatest control capability with the lowest label error. § CONCLUSION We have proposed a large-scale and diverse airfoil dataset, AF-200K, which has been demonstrated to significantly improve the capabilities of data-driven models compared to previous datasets. Additionally, we have introduced a comprehensive benchmark that evaluates the performance of mainstream generative models on the task of airfoil inverse design. This benchmark provides researchers with a valuable tool to explore more powerful inverse design methods. As the availability of data continues to expand and AI techniques advance, there is great potential to explore an even broader design space. AI-driven exploration can transcend the limitations of human experience and create innovative structures that are beyond human imagination. In complex design scenarios, AI may achieve superior outcomes compared to human experts. We believe that our methods can also offer valuable insights for 3D airfoil design.
Looking ahead, we aim to establish a more comprehensive benchmark for both 2D and 3D airfoil inverse design. The limitations of our current approach are discussed in Appendix <ref>. unsrt § APPENDIX § AF-200K DATASET We publish the AF-200K dataset, benchmark, demo and codebase at our website https://hitcslj.github.io/afbench/Page-AFBench. It is our priority to protect the privacy of third parties. We bear all responsibility in case of violation of rights, etc., and confirmation of the data license. Terms of use, privacy and License. The AF-200K dataset is published under the https://creativecommons.org/licenses/by-nc-sa/4.0/legalcodeCC BY-NC-SA 4.0, which means everyone can use this dataset for non-commercial research purpose. The original UIUC dataset is released under the https://m-selig.ae.illinois.edu/pd/pub/lsat/GPL.TXTGPL license. The original NACA dataset is released under the https://en.wikipedia.org/wiki/MIT_LicenseMIT license. Data maintenance. Data is stored in Google Drive for global users, and the AF-200K is stored in https://drive.google.com/drive/folders/1SV9Vyb0EisuG0t69YauGUyq0C5gKwRgt?usp=sharinghere. We will maintain the data for a long time and check the data accessibility on a regular basis. Benchmark and code. https://github.com/hitcslj/AFBenchAFBench provides benchmark results and codebase of AF-200K. Data statistics. For AF-200K, there are 160K airfoils for training, 20K airfoils for valuation, 20K airfoils for testing, and 200K airfoils in total. Limitations. The current aerodynamic labels are computed using the relatively coarse CFD solver XFoil. In future work, higher precision CFD simulation software can be utilized to improve the accuracy of the aerodynamic labels. § BACKGROUND AND RELATED WORK In this section, we primarily discuss three concepts: airfoil design, airfoil representation and conditional generative methods in airfoil design. §.§ Airfoil Design The essence of airfoil design is to find an airfoil that satisfies one's requirements within a vast design space. However, the traditional trial-and-error process is inefficient and costly. To address this issue, a significant amount of research has been conducted, which can be broadly categorized into airfoil parameterization <cit.>, airfoil aerodynamic performance prediction <cit.>, airfoil inverse design <cit.>, and airfoil shape optimization <cit.>. Airfoil parameterization compresses the airfoil into a few parameters, effectively reducing the design space to a parameter space, which can be searched more quickly. However, parameterization may introduce discontinuities in the design space, making it challenging to find the desired airfoil. Airfoil aerodynamic performance prediction can be divided into two main approaches: using PINNs <cit.> to solve for the aerodynamic coefficients on the airfoil surface, and employing data-driven surrogate models to quickly predict the performance of the current airfoil, approximating the traditional CFD approach. Airfoil inverse design takes the desired requirements as input and outputs an airfoil that satisfies those requirements. Airfoil shape optimization aims to find the design variables that maximize the lift-to-drag ratio (Cl/Cd). The ultimate goal of these four directions is to use algorithms to automatically find airfoils that meet the given requirements. 
Airfoil parameterization can reduce the search variables, airfoil inverse design can provide multiple candidate airfoils as initial values, airfoil aerodynamic performance prediction can use surrogate models for rapid feedback, and airfoil shape optimization can employ optimization methods to find the optimal airfoil. Previous efforts  <cit.> have explored datasets for investigating airfoil aerodynamic characteristics, but have largely relied on the UIUC and NACA airfoil shapes, lacking the breadth of data necessary to support the exploration of large-scale airfoil models. Our dataset, on the other hand, boasts a more diverse collection of airfoils and rich annotations, enabling better support for the four research directions mentioned earlier. Additionally, we have proposed AFBench, which includes airfoil generation and airfoil editing. Airfoil generation is a combination and complement to inverse design and parameterization, as it can generate airfoils that satisfy geometric constraints based on PARSEC parameters. Airfoil editing is a supplement to shape optimization, as it allows designers to more controllably find the optimal airfoil based on their experience. §.§ Airfoil Representation Airfoil representation is an evolution of airfoil parameterization. It can be broadly categorized into explicit representation and implicit representation. Explicit representation includes the most common Coordinate Point Method, as well as polynomial-based Parametric Representation Methods, such as PARSEC <cit.>, Bézier <cit.>, CST <cit.>. The former is the easiest to manipulate, but the large number of variables makes it difficult to optimize. The polynomial-based representations can reduce the design variables while ensuring the represented airfoils are smooth. Some, like PARSEC, even have intuitive geometric interpretations, such as leading edge radius and upper/lower surface peak values. Other parameterization methods, however, have design variables that are less intuitive. Additionally, their design spaces tend to be relatively small. Implicit representation primarily uses data-driven methods to compress the airfoil into a latent space. Traditional methods include SVD <cit.> and PCA <cit.>, but these linear combination approaches also result in small design spaces. More recently, neural representations have become common, where a well-trained neural network can store the airfoil information, allowing the design space to be sampled from a low-dimensional space. A representative work in this area is BézierGAN <cit.>. Our work adopts a hybrid approach, combining implicit and explicit representations. By adjusting the intuitive PARSEC parameters and control points, we can achieve airfoil generation. The neural network representation allows for a much larger design space compared to pure PARSEC parameterization. §.§ Conditional Generative methods in Airfoil Design Leveraging the advantages of both implicit representation and generative methods, there recently appears attempts to combine implicit representation and generative methods to achieve better design performance. VAE Variation Auto Encoder<cit.> trains a model to minimize reconstruction loss and latent loss, and it is usually optimized considering the sum of these losses. <cit.> proposes two advanced CVAE for the inverse airfoil design problems by combining the conditional variational autoencoders (CVAE) and distributions. 
There are two versions: N-CVAE, which combines the CVAE with a normal distribution <cit.>, and S-CVAE, which combines the VAE with a von Mises-Fisher distribution <cit.>. Both CVAE models convert the original airfoils into a latent space. In contrast, the S-CVAE enables the separation of data in the latent space, while the N-CVAE embeds the data in a narrow space. These different features are used for various tasks. GAN. A generative adversarial network <cit.> uses a generative neural network to produce an airfoil and a discriminative neural network to judge whether the airfoil is real or fake. CGAN <cit.> improves the original GAN by inputting the conditions to both the generator and the discriminator; the generator then learns to generate shapes satisfying the condition constraints. Inspired by their success in computer vision, several works have been proposed to explore the application of GANs to airfoil design. <cit.> generates shapes with low or high lift coefficients. By inputting aerodynamic characteristics such as the lift-to-drag ratio (Cl/Cd) or shape parameters, it is possible to guide the shape generation process toward a particular airfoil. Diffusion. Diffusion models <cit.> are the recent, widely adopted generative models in image and 3D computer vision. In the airfoil generation field, there are also several initial attempts to use diffusion models to generate airfoils. <cit.> uses conditional diffusion models to perform performance-aware and manufacturability-aware topology optimization. Specifically, a surrogate model-based guidance strategy is proposed that actively favors structures with low compliance and good manufacturability. <cit.> introduces compositional inverse design with diffusion models, which enables the proposed method to generalize to out-of-distribution and more complex design inputs than seen in training. <cit.> leverages the capability of a latent denoising diffusion model to generate airfoil geometries conditioned on flow parameters and an area constraint. They found that the diffusion model achieves better generation performance than GAN-based methods. § DETAILED DESCRIPTION OF AIRFOIL GENERATION BY CST METHOD The CST-assisted generation stage augments the airfoils with a physical model, i.e., the CST model. Given one airfoil f_0, we generate new airfoils with the assistance of CST by the following steps: (1) we parameterize the airfoil f_0 with the CST method <cit.> as p_0; (2) we employ Latin hypercube sampling (LHS) to perturb the CST parameters. LHS enables uniform sampling across the CST parameter space, thereby facilitating the generation of new airfoils. Parametrize the airfoils using CST. Given one manually designed airfoil f_0, we parameterize the airfoil with the CST method as p_0 = (p_0^1, p_0^2, ..., p_0^M), where M is the number of physical parameters. The CST method is a widely accepted method in airfoil design and fits the supercritical airfoil with Bernstein polynomials, which can be mathematically expressed as ζ(ψ) = C(ψ) S(ψ) + ψζ_T, where ζ = Y/c, ψ = X/c, and c represents the chord length. ζ_T = Δζ_TE/c represents the trailing edge thickness of the airfoil. Here, X and Y are the x-coordinates and y-coordinates of the airfoil. C(ψ) and S(ψ) correspond to the class function and shape function, respectively, which can be formally described as follows: C(ψ) = ψ^N_1 (1-ψ)^N_2 and S(ψ) = (ζ(ψ) - ψζ_T) / C(ψ), where N_1 and N_2 define the class of airfoils. In this paper, we choose N_1 = 0.5 and N_2 = 1.0 to represent the circular leading edge and sharp trailing edge of supercritical airfoils.
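The class/shape decomposition above can be written compactly in code. In standard CST practice the shape function is expanded in Bernstein polynomials whose coefficients are the fitted parameters; that expansion (and the function names) is an assumption here, since the text only defines S(ψ) implicitly from the surface coordinates.

import numpy as np
from scipy.special import comb

def class_function(psi, n1=0.5, n2=1.0):
    # psi = X / c in [0, 1]; n1 = 0.5, n2 = 1.0 give a round leading edge and sharp trailing edge
    return psi ** n1 * (1.0 - psi) ** n2

def shape_function(psi, a):
    # Bernstein-polynomial expansion with fitted CST coefficients a (assumed form)
    n = len(a) - 1
    return sum(a[i] * comb(n, i) * psi ** i * (1.0 - psi) ** (n - i) for i in range(n + 1))

def cst_surface(psi, a, zeta_te=0.0):
    # zeta(psi) = C(psi) * S(psi) + psi * zeta_T
    return class_function(psi) * shape_function(psi, a) + psi * zeta_te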
Perturb the CST parameters using Latin hypercube sampling (LHS) and generate new airfoils. Given the parameterized supercritical airfoil, we perturb the parameters of the airfoil with Latin hypercube sampling. Taking the generation of N airfoils based on one manually-designed airfoil as an example, for every variable in our parameterized airfoil we evenly divide its range into N intervals and randomly sample one value in each interval, one for each of the N generated airfoils. With Latin hypercube sampling, the generated airfoils are sampled uniformly from the CST parametric space for supercritical airfoils. The generated airfoils are illustrated in Fig. <ref>. § DETAILED DESCRIPTION OF AIRFOIL GENERATION BY GENERATIVE MODELS BézierGAN-GEN. BézierGAN <cit.> uses a Bézier layer to transform the network's predicted control points, weights, and parameter variables into smooth airfoil coordinates. The Bézier layer formula is as follows: X_j = (∑_i=0^n C(n,i) u_j^i (1-u_j)^n-i P_i w_i) / (∑_i=0^n C(n,i) u_j^i (1-u_j)^n-i w_i), j = 0, …, m, where C(n,i) is the binomial coefficient, n is the Bézier degree, m+1 is the number of airfoil coordinate points, and P_i, w_i, u_j are the network-predicted control points, weights, and parameter variables, respectively, which are all differentiable. By applying the aforementioned models, we uniformly sampled 10,000 latent codes from the range [0, 1], combined them with Gaussian noise to form the input z, and used the generator to produce 10,000 smooth airfoil shapes represented as 257 x 2 coordinate points. Diff-GEN. Diffusion models <cit.> are a recent advancement in generative modeling. Compared to traditional GAN models, diffusion models introduce the following main improvements. Noise schedule: instead of learning to transform noise into data directly, diffusion models gradually transform noise into data through a series of small, reversible steps, which makes the training more stable and produces higher-quality samples. Model architecture: the generator is replaced with a U-Net architecture that predicts the noise added to the data at each step, while a diffusion process progressively refines this prediction. The forward diffusion process is given by q(x_t|x_t-1) = 𝒩(x_t; √(α_t) x_t-1, (1-α_t)I), t = 1, 2, ⋯, T, where T is the total number of diffusion steps, α_t is a variance schedule controlling the amount of noise added at each step, and 𝒩 denotes a normal distribution. The model learns to reverse this process, thereby generating samples from pure noise through a series of learned denoising steps. Specifically for airfoil generation, we first encode the 257 x 2 airfoil coordinates into a 32x1 latent variable using a pretrained VAE. The diffusion model then learns to generate these latent variables, which are subsequently decoded by the VAE to produce the final airfoil shape. This approach leverages the strengths of both VAEs and diffusion models: the VAE efficiently compresses the high-dimensional airfoil data into a more manageable latent space, and the diffusion model excels at generating high-quality samples within this latent space. By combining these methods, we achieve a robust and efficient framework for generating realistic and high-quality airfoil designs. § DIFFUSION RESULTS Reverse Diffusion Process. Fig. <ref> illustrates the denoising sampling results of the diffusion model at different time steps.
It can be observed that when trained directly on raw data, the generated airfoils are not smooth. In contrast, airfoils trained in the latent space are smooth from the beginning due to the pre-trained VAE providing a performance baseline. As the reverse steps increase, the generated airfoils gradually align more closely with the given physical conditions. Aerodynamic Performance Visualization. Given the same conditions, airfoils were generated using both PK-DIFF and PK-DIT. We used a refined CFD solver OpenFOAM to calculate the flow and aerodynamic performance of these two generated airfoils. Fig. <ref> shows the distribution of the pressure coefficient around the airfoils generated by PK-DIFF and PK-DIT. Under the given working conditions [AoA = 3°, Re = 1e6], the lift coefficient (Cl) and drag coefficient (Cd) for PK-DIFF are (0.36, 0.01029), while for PK-DIT, they are (0.7335, 0.0125). The higher Cl/Cd ratio for PK-DIT indicates that the airfoil generated by PK-DIT has superior aerodynamic performance. Generate Diverse Airfoils by PK-DIT Fig. <ref> illustrates the diversity of airfoils generated by the Diffusion model. Starting from random noise, Diffusion progressively denoises the airfoil. Each denoising step can introduce different details, thus ensuring that the generated airfoils are diverse while still meeting the required conditions. § LIMITATION The current physical parameters and control keypoints used in our approach are coupled within each condition. For example, simply changing leading edge radius may not result in a feasible airfoil, as the design space may not contain such a configuration. When dealing with multiple conditions, finding and balancing the conflicts between the conditions to generate an optimal airfoil is a challenge that remains unsolved and deserves further exploration. In addition to finding better airfoil design variables, modeling the relationships between these variables is also crucial. Moreover, our method currently does not integrate airfoil shape optimization techniques into the generation process. Embedding optimization methods to produce generated airfoils with superior aerodynamic performance, surpassing manually designed airfoils, would further demonstrate the effectiveness of AI-based approaches, and is another area worth investigating. § DATASHEET * For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. * AFBench was created as a benchmark for airfoil inverse design task. The goal of this task is to find the design input variables that optimize a given objective function. Although some related datasets and works have been proposed, they do not take into account the real needs of applications. Moreover, there is still a lack of large-scale foundational data and evaluation metrics. * Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? * This dataset is presented by HIT-AIIA Lab & Shanghai AI Lab & Shanghai Aircraft Design and Research Institute. * Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. * This work was sponsored by Shanghai AI Lab. * Any other comments? * No. §.§ Composition * What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? 
Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. * AFBench comprises existing UIUC and NACA datasets, along with 2,150 manually designed supercritical airfoils and airfoils generated by models, totaling 200k samples. Each sample consists of a well-designed airfoil, accompanied by 11 geometric parameters and aerodynamic properties under 66 work conditions. We made our benchmark openly available on the AFBench github page(<https://hitcslj.github.io/afbench/>). * How many instances are there in total (of each type, if appropriate)? * For AF-200K, there are 160K airfoils for training, 20K airfoils for valuation, 20K airfoils for testing, 200K airfoils in total. * Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). * Both UIUC and NACA are open-source datasets. We use the proposed CST method and unconditional generative models to derive AF-200K dataset. For AF-200K, we use all samples of UIUC and NACA Open dataset. * What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. * Each instance consists of a well-designed airfoil, accompanied by 11 geometric parameters and aerodynamic properties under 66 work conditions. * Is there a label or target associated with each instance? If so, please provide a description. * Each instance includes various aerodynamic performance metrics such as angle of attack (AoA), drag coefficient (CD), and moment coefficient (CM), under different work conditions. Additionally, PARSEC physical parameters are provided as geometric features for each instance. * Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. * No. * Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? If so, please describe how these relationships are made explicit. * No. * Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. * We recommend using the default 8:1:1 ratio provided by AFBench for dataset partitioning. * Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. * No. * Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a future user? 
Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate. * We release the AFBench dataset on our GitHub repository: <https://github.com/hitcslj/AFBench>. More specifically, please follow the instructions provided on the website: https://hitcslj.github.io/afbench/AFBench-Webpage. Our dataset is developed based on existing airfoil dataset https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA. * Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description. * No. * Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. * No. * Does the dataset relate to people? If not, you may skip the remaining questions in this section. * No. * Does the dataset identify any subpopulations (e.g., by age, gender)? * No. * Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how. * No. * Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description. * No. * Any other comments? * No. §.§ Collection Process * How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. * Our data is developing based on published airfoil dataset https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA using a designed CST method and unconditional generative models mentioned before. * What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? * For UIUC, we write a Python script that uses Bézier interpolation to generate a smooth airfoil with a specified number of points. For NACA, we write an NACA generator script to sample the airfoil at a specified number of points. For the rest, we use CST and generative methods to generate the airfoils, then use XFoil to create the aerodynamic labels, and a Python script to calculate the geometry labels. We use hundreds of small CPU nodes and small GPU nodes for the computation. * If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? * We use full-set provided by https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA. 
* Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? * No crowdworkers were involved in the curation of the dataset. Open-source researchers and developers enabled its creation for no payment. * Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. * The AF-200K data and label was generated in 2024, while the source data UIUC v2 was created in 2020, NACA v1 was created in 1933. * Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation. * The source sensor data for UIUC and NACA had been conducted ethical review processes by UIUC Applied Aerodynamics Group and National Advisory Committee for Aeronautics airfoils, which can be referred to https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA, respectively. * Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? * We retrieve the data from the open source dataset https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA. * Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself. * The AFBench dataset is developed based on open-source dataset and following the open-source license. * Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented. * The AFBench dataset is developed on open-source dataset and obey the license. * If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate). * Users have a possibility to check for the presence of the links in our dataset leading to their data on public internet by using the search tool provided by AFBench, accessible at https://hitcslj.github.io/afbench/AFBench-Webpage. If users wish to revoke their consent after finding sensitive data, they can contact the hosting party and request to delete the content from the underlying website. Please leave the message in https://github.com/hitcslj/AFBench/issuesGitHub Issue to request removal of the links from the dataset. * Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. 
* We develop our dataset based on the open-source https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA datasets published by the UIUC Applied Aerodynamics Group and the National Advisory Committee for Aeronautics, respectively. The potential impact of the published datasets and their use on data subjects has been seriously considered. * Any other comments? * No. §.§ Preprocessing, Cleaning, and/or Labeling * Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. * Above all, we utilize B-spline interpolation to convert the discrete points into a continuous representation. Then we use the CST method to augment the dataset and XFOIL to calculate the corresponding aerodynamic labels. Additionally, we utilize PARSEC physical parameters together with control keypoints as the geometric label. Besides this, no preprocessing or labeling is done. * Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. * Yes, we provide the original open-source datasets and the augmented AF-200K dataset. * Is the software used to preprocess/clean/label the instances available? If so, please provide a link or other access point. * Yes, XFOIL is accessible at https://github.com/hitcslj/Xfoil-calhttps://github.com/hitcslj/Xfoil-cal. * Any other comments? * No. §.§ Uses * Has the dataset been used for any tasks already? If so, please provide a description. * No. * Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. * No. * What (other) tasks could the dataset be used for? * We encourage researchers to explore more diverse airfoil generation and editing, as well as optimization design. * Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? * No. * Are there tasks for which the dataset should not be used? If so, please provide a description. * Due to the known biases of the dataset, under no circumstance should any models be put into production using the dataset as is. It is neither safe nor responsible. As it stands, the dataset should be solely used for research purposes in its uncurated state. * Any other comments? * No. §.§ Distribution * Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. * Yes, the dataset will be open-source. * How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? * The data is available through <https://github.com/hitcslj/AFBench>. * When will the dataset be distributed?
* 06/2024 and onward * Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. * The AFBench dataset is published under https://creativecommons.org/licenses/by-nc-sa/4.0/legalcodeCC BY-NC-SA 4.0, which means everyone can use this dataset for non-commercial research purpose. The original UIUC dataset is released under the https://m-selig.ae.illinois.edu/pd/pub/lsat/GPL.TXTGPL license. The original NACA dataset is released under the https://en.wikipedia.org/wiki/MIT_LicenseMIT license. * Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions. * The original UIUC dataset is released under the https://m-selig.ae.illinois.edu/pd/pub/lsat/GPL.TXTGPL license, and the for the restrictions, please refer to https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC. The original NACA dataset is released under the https://en.wikipedia.org/wiki/MIT_LicenseMIT license, and the for the restrictions, please refer to http://airfoiltools.com/airfoil/naca4digitNACA * Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. * No. * Any other comments? * No. §.§ Maintenance * Who will be supporting/hosting/maintaining the dataset? * Shanghai AILab will support hosting of the dataset. * How can the owner/curator/manager of the dataset be contacted (e.g., email address)? * <https://github.com/hitcslj/AFBench/issues> * Is there an erratum? If so, please provide a link or other access point. * There is no erratum for our initial release. Errata will be documented as future releases on the dataset website. * Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? * We will continue to support AFBench dataset. * If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. * No. * Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. * Yes. We will continue to support AFBench dataset in https://github.com/hitcslj/AFBench/our github page. * If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description. 
* Yes, they can driectly developing on open scource dataset https://m-selig.ae.illinois.edu/ads/coord_database.htmlUIUC and http://airfoiltools.com/airfoil/naca4digitNACA dataset or concat us via https://github.com/hitcslj/AFBench/issuesGitHub Issue. * Any other comments? * No. § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? * Did you describe the limitations of your work? See Appendix  <ref> * Did you discuss any potential negative societal impacts of your work? * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g. for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? Our datasets and codebases are available at a https://github.com/hitcslj/AFBenchGithub repo * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? See Section  <ref> * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? See Section  <ref> * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you citep the creators? * Did you mention the license of the assets? * Did you include any new assets either in the supplemental material or as a URL? * Did you discuss whether and how consent was obtained from people whose data you're using/curating? * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
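As a brief illustration of the B-spline interpolation step mentioned in the preprocessing answers above, the following is a minimal sketch of resampling a discrete airfoil coordinate file into a smooth, fixed-length representation. It assumes SciPy is available; the file name, point count, and sampling choice are illustrative assumptions, not part of the released pipeline.

import numpy as np
from scipy.interpolate import splprep, splev

def resample_airfoil(coords, n_points=257):
    # coords: (N, 2) array of x, y points ordered around the airfoil outline
    # Fit a parametric cubic B-spline through the raw coordinates (no smoothing).
    tck, _ = splprep([coords[:, 0], coords[:, 1]], s=0.0, k=3)
    # Evaluate the spline at evenly spaced parameter values.
    u_new = np.linspace(0.0, 1.0, n_points)
    x_new, y_new = splev(u_new, tck)
    return np.column_stack([x_new, y_new])

# Example usage with a UIUC-style coordinate file (two columns: x y):
# raw = np.loadtxt("naca2412.dat", skiprows=1)
# smooth = resample_airfoil(raw)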
http://arxiv.org/abs/2406.18660v1
20240626180225
Optical modeling for the evaluation of HOWFSC on embedded processors
[ "Kian Milani", "Ewan Douglas", "Leonid Pogorelyuk", "Christopher Mendillo", "Kerri Cahoy", "Nicholas Belsten", "Brandon Eickert", "Shanti Rao" ]
astro-ph.IM
[ "astro-ph.IM" ]
Optical modeling for the evaluation of HOWFSC on embedded processors
==============================
§ ABSTRACT The correction of quasi-static wavefront errors within a coronagraphic optical system will be a key challenge to overcome in order to directly image exoplanets in reflected light. These quasi-static errors are caused by mid- to high-order surface errors on the optical elements that result from manufacturing processes. Using high-order wavefront sensing and control (HOWFSC) techniques that do not introduce non-common-path aberrations, the quasi-static errors can be corrected within the desired region of interest designated as the dark hole. For the future Habitable Worlds Observatory (HWO), HOWFSC algorithms will be key to attaining the desired contrasts. To simulate the performance of HOWFSC with space-rated processors, optical models for a 6 m class space-borne observatory and a coronagraph have been developed. Phenomena such as the Talbot effect and beamwalk are included in the simulations using combinations of ray-based modeling and end-to-end propagation techniques. After integrating the optical models with the embedded processors, simulations with realistic computation times can be performed to understand the computational hardware performance that will be needed to maintain the desired contrasts. Here, the details of the optical models are presented along with the HOWFSC methods utilized. Initial results of the HOWFSC methods are also included as a demonstration of how system drifts degrade the contrast and require dark hole maintenance.
§ INTRODUCTION At the recommendation of the Astro2020 decadal survey, NASA was advised to pursue a Habitable Worlds Observatory (HWO) equipped with a coronagraph capable of detecting exoplanets at 1E-10 contrast levels<cit.>. The Nancy Grace Roman Coronagraph Instrument will be a vital technology demonstration for the HWO coronagraph, as it will utilize high-order wavefront sensing and control (HOWFSC) methods in order to reach contrasts on the order of 1E-8. To do so, the Roman Coronagraph will utilize a "set-and-forget" HOWFSC scheme involving ground-in-the-loop operations. However, this scheme will likely be infeasible for the HWO coronagraph due to the higher sensitivity to drifts in the optical system at better contrasts. This means both reaching and maintaining the 1E-10 contrast will likely require HOWFSC iterations on time scales on the order of seconds to minutes<cit.>. To accomplish this, a continuous HOWFSC scheme will have to be implemented to perform dark hole maintenance. This scheme will require embedded processors capable of performing the HOWFSC computations within the coherence time of the coronagraphic speckles. Because flight hardware can be "frozen" up to 10 years in advance of the mission launch, work has begun on implementing these HOWFSC algorithms on radiation-hardened processors to assess their performance, identify computational bottlenecks, and evaluate the impact on the performance of the instrument. To accomplish the third goal, we have implemented a framework for modeling a potential HWO coronagraph and simulating optical disturbances such as beamwalk. Here, the details of the optical models are presented, while the methodology and implementation of HOWFSC algorithms on space-rated processors are presented in Belsten et al.<cit.> Currently, the only HOWFSC algorithms being investigated are pair-wise probing (PWP) and electric field conjugation (EFC)<cit.>.
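Since PWP and EFC are the only algorithms currently under investigation, the following minimal sketch shows the core of a single EFC iteration as a Tikhonov-regularized least-squares solve against a precomputed Jacobian. The array names, stacking convention, and regularization value are assumptions for illustration, not the implementation run on the embedded processors.

import numpy as np

def efc_step(G, E_est, beta=1e-6):
    # G     : (2*Npix, Nact) real Jacobian mapping DM actuator pokes to the
    #         stacked [Re; Im] focal-plane field in the dark hole
    # E_est : (2*Npix,) stacked field estimate, e.g. from pair-wise probing
    # beta  : Tikhonov regularization weight
    GtG = G.T @ G
    rhs = -G.T @ E_est
    # Regularized normal equations: (G^T G + beta*I) dA = -G^T E
    dA = np.linalg.solve(GtG + beta * np.eye(GtG.shape[0]), rhs)
    return dA  # actuator updates to add to the current DM command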
§ TELESCOPE MODEL To begin with, a nominal design for an off-axis three mirror anastigmat (TMA) telescope was created with Zemax to serve as a baseline for what a HWO could look like. This model is loosely based on the LUVOIR-B prescription in that it contains a similar net focal ratio of about F/36 and a similar primary mirror focal ratio of about F/3<cit.>. However, the total pupil diameter has been shrunk from the 8.4m for LUVOIR-B to 6.5m. Additionally, an unobscured circular pupil is assumed unlike the hexagonally segmented pupil of LUVOIR-B. Note that this telescope is not completely optimized for diffraction limited imaging over a large FOV that will be desired for a HWO, but merely serves as a general architecture for the optical models required. As details about the HWO become available, this model will be updated to be more accurate. Using the raytrace model, a Fresnel model of the telescope is constructed using POPPY as the backend propagation software<cit.>. This model allows for the surface roughness of the telescope optics to be taken into account when performing the complete optical propagation including the coronagraph. But prior to including the WFE from each surface, the Fresnel model is validated by comparing the footprint diameter of each optic with the calculated footprint from the raytrace model. Additionally, the wavefront at M4 is analyzed to ensure it is a reimaged pupil as is expected from the Zemax design. Finally, the PSF of the Fresnel model is validated by computing the expected resolution of the telescope with the wavelength and F-number and comparing this resolution with the PSF result of the Fresnel model. Using POPPY's StatisticalPSDWFE functionality, WFE maps are pre-computed for each optic in the telescope model. At the moment, the RMS WFE from the surface roughness of M1-M4 is 40nm, 20nm, 20nm and 15nm respectively. The current PSD of each surface is defined with a simple power law that has an index of -2.75, but this PSD will be updated in the future with more realistic parameters. These pre-computed WFEs are then used to simulate the effects of beamwalk on M2 and M3. Given M1 and M4 are pupils, beamwalk is not considered for these optics. Additionally, because M4 is a relayed pupil, we assume it can act as FSM such that beamwalk from pointing errors can be neglected on downstream optics. To implement the beamwalk, the footprint shift on M2 and M3 is assumed to be linear with respect to pointing error, so the Zemax model is used to compute the shift per milliarcsecond of pointing error. These values are found to be 0.084micron/mas and 0.908micron/mas for M2 and M3 respectively. Similar to Mendillo et al.<cit.>, the WFE maps for each surface are shifted by the appropriate values with subpixel precision. For this model, scipy.ndimage.shift (or the CuPy equivalent) are used. Figure <ref> illustrates the difference in WFE for each optic for 15mas of pointing error as well as the difference in the final pupil of the telescope computed by performing the propagation with and without the shifted WFEs. § CORONAGRAPH MODELS To perform the HOWFSC experiments with a space-rated processor in the loop, additional Fresnel models have been created to simulate images from a "true" coronagraph. 
This Fresnel model also uses POPPY for the backend propagation and runs on a standard PC that will be used to compute the "true" images while the HOWFSC algorithm running on the space-rated processor will use the images to compute DM commands that are fed back to the coronagraph model. While computations are performed on the processor, system drifts can be simulated in both the telescope and coronagraph Fresnel models to evaluate the impact of compute times. At the moment, there are no relay optics included in the design because beamwalk or other drifts from these optics are assumed to be negligible given they are after the FSM/M4. This allows the coroangraph model to be slightly simplified such that the wavefront of the telescope exit pupil is computed and directly injected into the coronagraph model rather than being propagated through relay optics. Here, only a simple vortex coronagraph is considered as VVCs have previously been considered for a HabEx mission<cit.>. Because our telescope model does not include any segmentation, the coronagraph model does not include any apodization or DM assistance for the vortex, although, an additional pupil plane where an apodizer may be placed is included so the model may be updated in the future. Figure <ref> illustrates the fundamental optical train of the coronagraph model. Here, the deformable mirrors (DMs) are being modeled using the fast convolution method described in Will et al.<cit.>. To numerically model a vortex phase mask, the same method described in Krist et al<cit.> is being used. When implementing this in a Fresnel model using POPPY, the model is separated into two segments. The first propagates from the entrance pupil of the coronagraph to the focal plane where the vortex will be located. At this point, the propagation through POPPY is ended and an FFT is used to compute the pupil plane wavefront from the focal plane data. Now, the vortex is numerically applied with additional FFTs and MFTs which output the wavefront at a pupil plane after the vortex. Angular spectrum propagation is then used to back propagate the pupil wavefront to an OAP that would collimate the beam coming from the vortex mask. A surface roughness map is applied to this wavefront and angular spectrum is used to propagate back to the pupil. This data now acts as the wavefront at the Lyot stop, so POPPY is again used to propagate from the Lyot stop through the rest of the optical train and to the image plane. With the Fresnel model acting as the true coronagraph, a compact/Fraunhofer model is also created using FFTs and MFTs for the implementation of HOWFSC algorithms. The pre-vortex WFE is then computed with the Fresnel model and injected into the compact model. In reality, phase retrieval techniques would be used to measure the WFE of an instrument and inject the measurement into the model, but no phase retrieval method has currently been implemented. Figure <ref> presents a comparison of the Fresnel model PSFs and coronagraphic images with the injected WFE. Because the pre-FPM errors are more significant to the coronagraph, the residual surface errors after the FPM are ignored in this compact model. Nonetheless, the morphology of the PSF and speckles in the compact model demonstrates agreement with the Fresnel model, so it acts as a well calibrated model for HOWFSC. 
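For intuition, a heavily simplified sketch of what the vortex focal-plane mask does is shown below: the pupil field is propagated to a focal plane, multiplied by the azimuthal phase ramp exp(i*charge*theta), and propagated on to the Lyot pupil. This single-FFT version ignores the sampling and singularity issues that the multi-resolution FFT/MFT scheme referenced above is designed to handle, and the charge value and array names are assumptions for illustration.

import numpy as np

def apply_vortex(pupil_field, charge=6):
    n = pupil_field.shape[0]
    # Pupil plane -> focal plane
    focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_field)))
    # Azimuthal phase ramp of the scalar vortex mask
    y, x = np.indices((n, n)) - n // 2
    theta = np.arctan2(y, x)
    focal = focal * np.exp(1j * charge * theta)
    # Focal plane -> Lyot pupil plane
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(focal)))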
For most HOWFSC algorithms, the computational complexity will be dependent on the number of actuators (or DM modes) being utilized and the number of pixels in the focal plane within the desired control region. To evaluate the performance of processors for varying actuator counts, two configurations of this coronagraph model are created. The first assumes smaller 34x34 actuator DMs while the second uses 68x68 actuator DMs. The larger DM model assumes the same actuator spacing, but the pupil diameter is doubled to account for the higher actuator count. The detector sampling in each model is assumed to be 5 microns, but the final imaging focal length is also doubled for the model with larger pupils such that the pixelscale is 0.354in each. Table <ref> contains the details of each models pupil sizes and actuator counts while Figure <ref> illustrates the difference in potential control regions with the higher actuator count. Both models will be used when evaluating the processors to understand how the performance will scale with more actuators and larger control regions. § HOWFSC SIMULATIONS Currently, the primary HOWFSC methods being considered for dark hole creation and maintenance are standard EFC and PWP. Two additional HOWFSC methods will be implemented in the future including modal PWP demonstrated by Pogorelyuk et al.<cit.> for more efficient dark hole maintenance along with the Jacobian-free algorithmic differentiation EFC introduced by Will et al<cit.>. For now, the models discussed above are used to perform EFC and illustrate why dark hole maintenance will be necessary, particularly for the 1E-10 contrasts. For simplicity, only monochromatic EFC is implemented, but this will be expanded to broadband EFC in the future. Figures <ref> and <ref> illustrate an example of why using a HOWFSC method for dark hole maintenance will likely be necessary. Here, EFC is originally used to create a dark hole assuming a perfectly stable telescope. This allows us to generate a dark hole with 1E-10 contrast in just 18 iterations. One noteworthy point is that due to the large WFEs assumed in the telescope optics, the Jacobian was relinearized after 9 iterations in order to reach the final contrasts indicated in the figures. After the initial EFC loop is completed, a pointing error of 15mas was injected into the telescope model. The effect of beamwalk on M2 and M3 was then computed and injected into the coronagraph model to assess the contrast degradation this level of pointing error would induce assuming no other dynamics in the coronagraph system. As a result of this beamwalk, the contrast degrades by about an order magnitude with each DM configuration. However, the new speckles can be corrected with additional iterations of EFC. For the models being used here, each DM configuration utilized 3 more EFC iteration to re-converge to 1E-10 contrast. It should be noted that contrast degradation from beamwalk and other drifts in the coronagraph system will also depend on the quality of each optic's surface roughness as the better each optic can be polished, then larger drifts can be tolerated. § CONCLUSIONS AND FUTURE WORK While a set-and-forget HOWFSC scheme is possible for more moderate contrast goals, the extreme contrast requirement of the HWO will require much more frequent dark hole maintenance with HOWFSC techniques due to drifts within the optical system. 
These drifts will have to be corrected on time scales equivalent to or smaller than the lifetime of the speckles in order for the HOWFSC maintenance to operate as intended. Here, a framework has been developed to model various configurations and parameters of a 6.5 m class telescope and coronagraph in order to evaluate the impact of HOWFSC computation times. As more details become available about potential HWO concepts, including the telescope design, coronagraph modes, and quality of optical surfaces, the models here will be updated with more accurate parameters. Future experiments with space-rated processors in the loop will also implement various other dynamics such as DM creep and slow shifts of coronagraph optics. These experiments will yield results that inform the mission about how fast HOWFSC will need to run under various conditions and where the computational bottlenecks for HOWFSC will be.
§ ACKNOWLEDGEMENTS This work is supported by the NASA Astrophysics Technology Division under APRA grant #80NSSC22K1412. This research made use of community-developed core Python packages, including: POPPY<cit.>, Astropy <cit.>, Matplotlib <cit.>, SciPy <cit.>, CuPy<cit.>, Ray<cit.>, and the IPython Interactive Computing architecture <cit.>.
http://arxiv.org/abs/2406.17744v1
20240625172952
Following Length Constraints in Instructions
[ "Weizhe Yuan", "Ilia Kulikov", "Ping Yu", "Kyunghyun Cho", "Sainbayar Sukhbaatar", "Jason Weston", "Jing Xu" ]
cs.CL
[ "cs.CL" ]
Following Length Constraints in Instructions
==============================
§ ABSTRACT Aligned instruction-following models can better fulfill user requests than their unaligned counterparts. However, it has been shown that there is a length bias in the evaluation of such models, and that training algorithms tend to exploit this bias by learning longer responses. In this work we show how to train models that can be controlled at inference time with instructions containing desired length constraints. Such models are superior in length-instructed evaluations, outperforming standard instruction-following models such as GPT4, Llama 3 and Mixtral.
§ INTRODUCTION Instruction following has emerged as one of the most important topics in AI, where the standard approach is to train instruction-tuned large language models (LLMs) to respond to human requests <cit.>. One current challenge in developing better models is that there remain open questions on how to evaluate them, which in turn means there are open questions on how to train them with appropriate rewards. It has been found that in current evaluations both humans and models tend to have a “length bias” whereby they prefer longer responses over shorter ones in pairwise preferences <cit.>. Correspondingly, training methods that follow these preferences tend to produce longer responses <cit.>. Recently, instruction-following benchmarks have incorporated length penalties into their scoring mechanisms to counteract this bias <cit.>, but this does not fix the problem at its source. In this work, we argue that the expected length of responses is ill-defined in many queries, and this ambiguity makes evaluation difficult, which in turn affects training algorithms that use these evaluation signals. To resolve this we propose that evaluation should include further disambiguating instructions that prescribe the length of the desired response. Typical requests can be ambiguous in terms of the length of the desired response; for example, without context the instruction “Give me information about Coco Gauff” could be answered by a few sentences, a few paragraphs, or a multi-page document. Yet, given the context, the intended length is often clearer, for example the expected length of the replies in the downstream application the user is interacting with, the interface being used (voice, viewed on a phone vs. a laptop) and so on. Hence adding a further instruction in a given context to the above example, such as "The answer should be 300 words or less", resolves this ambiguity.[We note that this paper itself was generated (by humans!) with the constraint that it has to be at most 8 pages.]
We show that many existing state-of-the-art instruction-following models fail to follow such maximum word length instructions adequately. To measure this we construct and evaluate models on length-instructed versions of AlpacaEval 2 <cit.> and MT-Bench <cit.> by augmenting existing prompts with length instructions. We find that, for example, GPT4-Turbo violates length constraints almost 50% of the time, highlighting a significant flaw in these models when it comes to steering their output length. We hence develop a method for improving instruction following models at length instruction following.
Our approach, Length-Instruction Fine-Tuning (LIFT), involves taking a conventional instruction following dataset and constructing augmented training data by inserting length instructions in the original prompts. We define length instructions so that the constructed preference pairs reflect both length constraints and response quality. This length instruction augmented dataset is used in finetuning a model via Direct Preference Optimization (DPO) <cit.>. We train both Llama 2 and Llama 3 models using LIFT-DPO and evaluate them on our length instructed benchmarks. See <ref> for some example length instructed generations. We find that our method leads to less length constraint violations and improved overall win rates compared to existing instruction following models. 0 We hence develop a method for improving instruction following models at length instruction following. Our approach involves taking a conventional instruction following dataset and constructing augmented training data by inserting length instructions to the original prompts. We define length instructions so that the constructed preference pairs reflect both length constraints and response quality. This length instruction augmented dataset is used in finetuning a model via Direct Preference Optimization (DPO) <cit.>. We train both Llama 2 and Llama 3 models in this way and evaluate them on our length instructed benchmarks. See <ref> for example length instructed generations. We find that our method leads to less length constraint violations and improved overall win rates compared to existing instruction following models. § RELATED WORK §.§ Length Bias in Model Alignment When optimizing for instruction following ability, reinforcement learning (RL) has been consistently observed to encourage models to produce longer responses <cit.>. <cit.> showed that simply selecting the longest data from the training set for fine-tuning is a strong baseline, and <cit.> showed that optimizing for response length is a significant factor behind RL’s reported improvements. This effect seen in training parallels that on the evaluation side, whereby both humans and models tend to favor longer responses over shorter ones <cit.>. Correspondingly, constructing preference pairs either through human feedback (RLHF) or through AI feedback (RLAIF) is likely to reflect these biases. On the other hand, longer responses are not necessarily better even if preferred by annotators <cit.>. For example they are more likely to contain inaccuracies <cit.>, which may be missed by human evaluators on challenging tasks <cit.>. Recently, instruction following benchmarks such as AlpacaEval 2 <cit.> and WildBench <cit.> have incorporated length penalties into their scoring mechanisms to counteract this bias. This is done by fitting a generalized linear model to predict the (biased) preferences given length as a feature, and then obtaining length-debiased preferences by predicting preferences after removing the length term from the regression. While this penalizes longer responses to a certain degree, it is not yet clear if this new scoring function can still be gamed by models. §.§ Length-aware Model Training Learning methods that take into account length have historically been prevalent in the task of summarization, see for example <cit.>, and in particular <cit.> for length constrained summarization. For instruction following, <cit.> investigate several mitigations, such as balancing preferences or truncating lengths, but do not find that they uniformly help. 
Both <cit.> and <cit.> propose to modify the reward model to disentangle length from quality so that they can concentrate the training on quality. <cit.> proposes modifying the DPO objective function with a length regularizer, and reports this prevents length exploitation while maintaining quality. These approaches all assume there is an optimum length of responses which the model should be trained to generate. In contrast, our work assumes desired length depends on additional context, and a good model should be capable of following length instructions (i.e., via prompting for desired length). Some production LLMs incorporate system prompts that reference output length, for example the sentence “give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions” is included in the system prompt of Claude 3 <cit.>. However, to our knowledge no systematic evaluation of such prompts has been reported. Many post-training datasets used to fine-tune popular large language models have not been released or their exact makeup detailed <cit.> hence it is difficult to immediately ascertain if such length following instructions are contained in their training setups. However, from data that has been released <cit.> it appears that the amount of such data is generally small. Instead, preference pairs are typically provided which implicitly assume a preferred target length for a given prompt, while the prompt itself does not contain length following instructions. Such preferences are well known to typically prefer longer responses over shorter ones <cit.>. § ALPACAEVAL-LI & MT-BENCH-LI:       NEW LENGTH-INSTRUCTED BENCHMARKS Strongly performing instruction following models should naturally be able to follow given length limits, as those are instructions as well. Such instructions can be a natural part of a prompt, for example "Tell me about <concept>. The answer should be 300 words or less". Depending on the use case and context, users might want responses from the same underlying model but of a different length, e.g. either a shorter or longer answer. In this section, we thus first evaluate the ability of current instruction following models to follow length instructions. In order to do this, we thus build length-instructed (LI) benchmarks, AlpacaEval-LI and MT-Bench-LI[Our length-instructed benchmarks are available at <https://github.com/facebookresearch/RAM/tree/main/projects/length_instruct>]. §.§ Augmenting General Instructions with Length Constraints To evaluate a model's length instruction-following ability, we augment existing instruction-following tasks by inserting maximum length limits as part of the instructions, as shown in the template in <ref>. This tests whether models can respond to the given query successfully, whilst also fulfilling the given length instruction. §.§.§ Target Length The choice of desired length limits might vary a lot by instruction and task. To establish a reasonable yet challenging length limit for effectively evaluating current state-of-the-art (SOTA) models, we base the target length limit on the generation lengths of three strong SOTA models: GPT-4 Turbo (11/06) <cit.>, Claude 3 Opus (02/29)[https://www.anthropic.com/news/claude-3-family] and Mistral Large (24/02)[https://mistral.ai/news/mistral-large/]. We set <MAX_LEN> in the template to the minimum generation length among these three models given the original prompts. 
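A minimal sketch of how such a per-prompt <MAX_LEN> can be computed is given below, reusing the NLTK word-count rule described in the Metrics section. The input format and the exact instruction wording are assumptions for illustration; the actual template is given in the paper's figure.

import string
from nltk.tokenize import word_tokenize

def count_words(text):
    # Word count excluding punctuation, matching the paper's metric
    return len([w for w in word_tokenize(text) if w not in string.punctuation])

def target_length(responses_by_model):
    # e.g. {"gpt4-1106": "...", "claude3-opus": "...", "mistral-large": "..."}
    return min(count_words(r) for r in responses_by_model.values())

def add_length_instruction(prompt, max_len):
    # Stand-in wording; the paper uses its own fixed template.
    return f"{prompt}\n\nAnswer the above in {max_len} words or less."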
Therefore, this length constraint varies for each individual prompt, and is short enough to be challenging, i.e., is not trivially satisfied by all SOTA models. §.§.§ Length Instruction-Following Baseline Many benchmarks evaluate models by conducting pairwise comparisons between model outputs. For given instructions, they report win rates against a strong baseline, such as GPT-4 generations using another LLM to judge the pair (LLM-as-a-Judge). To establish a strong baseline that consistently adheres to the length constraint, we employ the same minimum of three models approach from <ref>. Thus instead of a single model, each baseline response is chosen to be the shortest response generated from the three models. This ensures that the baseline generations always meet the length constraint specified in the prompt while maintaining high generation quality. Thus, for each model tested we compare its generations with this baseline in a pairwise setting. §.§.§ Metrics We propose two metrics: Length-Instructed (LI) winrates against the baseline to evaluate response quality and violation rates to measure length instruction following ability. Length Instruction Following We use violation rates (Vlt%) to measure the percentage of responses that exceed the length constraint by counting the number of words. Additionally, we report other metrics, such as the average response length (in words). To calculate the word count of a response we use the word tokenization function provided by NLTK, excluding punctuation. The exact word count function is detailed in <ref>. Response Quality We report winrates from pairwise comparisons between model and baseline generations on length-following instructions, referred to as the Length-Instructed (LI) Winrate. The winner of each pairwise comparison is determined by both the quality of the responses and adherence to the length constraints. We treat the length limit as a hard constraint. Since the baseline always satisfies the length constraint, if the model response being tested exceeds the limit it automatically loses. If the model response satisfies the length limit, we use the standard pairwise LLM-as-a-Judge comparison between the two responses, where only the original instruction without length limit is given to the judge as input. §.§ Length-Instructed AlpacaEval AlpacaEval 2 <cit.> is an evaluation task consisting of 805 general instruction following prompts from creativity, brainstorming and writing to question answering, math and reasoning tasks. We augment this task with length instructions to create the AlpacaEval-Length-Instructed (LI) benchmark as described in <ref>. Following <ref> we take the minimum generation length of the three strong LLMs as a target length for each prompt. Three out of the 805 Alpaca test instructions already have an explicit length constraint in the original prompt. We therefore only consider the remaining 802 prompts for the AlpacaEval-LI benchmark. <ref> shows the ratio of generation lengths over target instruction lengths as target lengths vary. GPT4-0409 generations exceed the target length limits almost 50% of the time (red dots), especially when target lengths are over 200 words. Claude3-Opus has a similar trend according to the scatter plot. We also include results for Mistral Large and LLAMA3-70b-Instruct in <ref>. Standard AlpacaEval 2 compares model outputs against baseline GPT-4 Turbo generations. In AlpacaEval-LI, the baseline is built from GPT4-1106, Claude3-Opus and Mistral Large as described in <ref>. 
Their respective winrates in the standard AlpacaEval 2 are 50%, 40.5% and 32.7%. This indicates that the resulting baseline is of high quality while consistently meeting the length constraint. 0 * We then vary the target length by range(-100, 101, 10), and we show line plot where the x-axis is the avg target len and the y-axis is the avg generation length. §.§ Length-Instructed MT-Bench In addition, we also extend the MT-Bench evaluation <cit.> with length instructions to test models on a wide-range of prompts. This dataset consists of 80 challenging multi-turn questions covering 8 categories of user prompts (writing, roleplay, extraction, reasoning, math, coding, STEM, and humanities/social science). We follow the same steps as described in <ref> on the MT-Bench evaluation set by sampling three length constraints for each prompt. For simplicity we only consider first turns, giving 240 MT-Bench-LI prompts. We will use this benchmark, along with AlpacaEval-LI in our experiments in <ref>. § LENGTH-INSTRUCTION FINE-TUNING (LIFT) As shown in the previous section, current SOTA models may not adhere to specific length following instructions. To improve the ability of models in length-instruction following tasks, we propose the following method, which first builds Length-Instruction Fine-Tuning (LIFT) data. This training data consists of preference pairs, which can be used for training models via RLHF or other preference optimization methods. We first assume we are given an existing pairwise preference dataset 𝒟 consisting of N triples of input prompt, winning response, and losing response (x, y_i^w, y_i^l)_i=1⋯ N. Let us denote by (y) the number of words in response y. First, we filter out any triple where the difference between (y_i^w) and (y_i^l) is less than a certain threshold T (T=10 in our experiments). We then construct an augmented dataset 𝒟^' that prepends an explicit length instruction to the input prompt x_i using the template shown in <ref> to convert it into x_i^'. We then construct new length-instructed preference pairs (x_i^', y_i^w^', y_i^l^') where the winners and losers of the pairs are determined as follows: * If (y_i^w) > (y_i^l), i.e. the winning response is longer, we construct two samples in the augmented dataset 𝒟^' by, (1) adding a length instruction to x_i that both responses satisfy (we simply use (y_i^w) + T) and the winning response and losing response remain the same, and (2) adding a length constraint uniformly sampled from the interval [(y_i^l), (y_i^w)], and y_i^w becomes the losing one due to the violation of length constraint, and y_i^l becomes the winning one. * If (y_i^w) < (y_i^l), we also construct two samples in the augmented dataset 𝒟^' by, (1) adding a length constraint to x_i that both responses satisfy (we simply use (y_i^l) + T), and (2) adding a length constraint sampled from the interval [ (y_i^w), (y_i^l) ]. In both (1) and (2), the winning response and the losing response remain the same as in the original dataset. The data construction process is also illustrated in <ref>. Having these preferences will ensure models can handle a wide-range of target lengths and prioritize the length constraint over the original preferences when necessary. We use DPO to train our models, using both datasets D and D' so that models can handle prompts with and without length instructions. § EXPERIMENTAL SETUP We empirically investigate model performance on following length instructions, and the effectiveness of our LIFT training strategy. 
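As a reference for the experiments that follow, the LIFT pair-construction rules described in the previous section can be summarized in a short sketch. The word-count threshold T=10 is taken from the text, while the instruction wording, triple format, and use of random sampling are assumptions for illustration.

import random

T = 10  # minimum word-count gap between the two responses, as in the paper

def lift_pairs(prompt, chosen, rejected, n_words):
    # n_words: callable returning the word count of a response
    lw, ll = n_words(chosen), n_words(rejected)
    if abs(lw - ll) < T:
        return []  # triple is filtered out
    def with_limit(limit):
        return f"{prompt}\n\nAnswer the above in {limit} words or less."
    pairs = []
    if lw > ll:  # the preferred (chosen) response is longer
        # (1) a limit both responses satisfy: original preference is kept
        pairs.append((with_limit(lw + T), chosen, rejected))
        # (2) a limit between the two lengths: the long winner now violates it
        limit = random.randint(ll, lw)
        pairs.append((with_limit(limit), rejected, chosen))
    else:  # the preferred (chosen) response is shorter
        pairs.append((with_limit(ll + T), chosen, rejected))
        limit = random.randint(lw, ll)
        pairs.append((with_limit(limit), chosen, rejected))
    return pairs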
We begin with a description of our experimental setup. §.§ Train Dataset & Baselines Standard Training Data We use the human-authored examples provided in the OpenAssistant (OA) dataset <cit.> for instruction fine-tuning. Following <cit.> we use 3,200 examples as 𝒟, by sampling only the first conversational turns in the English language that are high-quality, based on their human annotated rank (choosing only the highest rank 0 as chosen and rank 1 as loser). We first do supervised finetuning (SFT) on the chosen responses of 𝒟. We then further fine-tune the SFT model using the DPO loss on response pairs in 𝒟, which becomes our Standard DPO baseline. In addition, we also compare against the Length Regularized DPO (R-DPO) <cit.> baseline that penalizes longer responses by modifying the DPO loss. Length-Instructed Fine-Tuning (LIFT) Data 0 For LI supervised finetuning data, we augment the 3,200 normal SFT examples with <ref> with target length limits that all human-authored outputs always automatically satisfy (we simply add 10 to the generation length as the <MAX_LEN>). Next, We apply our LIFT method to create dataset 𝒟' from 𝒟, which yields 5,954 preference pairs with length instructions. The original dataset 𝒟 consists of 223 pairs where the two responses have less than T=10 words difference, 1,083 pairs where chosen responses are shorter than loser responses, and 1894 pairs where chosen responses are longer. As a result, 𝒟' contains 1,083 pairs where the original winning response loses due to violations of length limits. We train on 𝒟∪𝒟' with the DPO loss, which we call LIFT-DPO. §.§ Training Details In our experiments, we use two sets of base models: Llama2-70B-Base and Llama2-70B-Chat models <cit.> and Llama3-8B-Base and Llama3-8B-Instruct. Our DPO training sweeps over a range of learning rates 5e^-7 to 5e^-6 with a cosine learning rate schedule, a batch size of 16, and a dropout rate of 0.1. Specifically for DPO training, we employed a β value of 0.1. For R-DPO, we set α∈ [0.01, 0.1][We had to reverse the sign of the regularization term in Eq. 9 of <cit.>.]. All Llama2 models are trained for up to 2,000 steps and Llama3 models for up to 20 epochs, and we perform checkpoint selection for early stopping, see <ref> for more details. §.§ Evaluation Method We evaluate our models' length instruction-following capabilities on AlpacaEval-LI and MT-Bench-LI, described in <ref> and <ref>, as well as general instruction-following on the standard AlpacaEval 2 and MT-Bench benchmarks without length instructions. For AlpacaEval-LI and MT-Bench-LI, we use the same setup as in AlpacaEval 2 with GPT4 acting as a judge to measure pairwise winrates. § EXPERIMENTAL RESULTS We report AlpacaEval-LI winrates (Win(%)) and violation rates (Vlt%) for existing SOTA LLMs in <ref>, and for our training variants of Llama2-70B in <ref> and Llama3-8B in <ref>. Our findings lead to several key observations. SOTA LLMs fail to follow length instructions As demonstrated in <ref>, state-of-the-art models, such as the GPT-4 series, exhibit significant challenges in adhering to length instructions. Specifically, the latest GPT-4 model (0409) shows a high violation rate of 49.3% on our AlpacaEval-LI and 44.2% on MT-Bench-LI. In contrast, the Llama-3 instruct model series displays considerably lower violation rates. For instance, the Llama3-8B-instruct model achieves a violation rate of 7.0% on AlpacaEval-LI and 20.0% on MT-Bench-LI, but nevertherless has a lower winrate due to being a less powerful model. 
LIFT-DPO models perform well on AlpacaEval-LI and MT-Bench-LI <ref> illustrates the effectiveness of our LIFT-DPO training for Llama2 70B models, demonstrating a significant reduction in violation rates compared to both the baseline model and (standard) DPO-trained counterparts. Specifically, the Llama-2-70B-Base model, when subjected to standard DPO training, exhibits a violation rate of 65.8% on AlpacaEval-LI. However, with our LIFT-DPO training, this rate decreases dramatically to 7.1%, simultaneously improving the win rate from 4.6% to 13.6%. Similarly, for the Llama-2-70B-Chat model, standard DPO results in a violation rate of 15.1%, whereas our LIFT-DPO training reduces this rate to 2.7%, and enhances the win rate from 10.4% to 14.2%. On MT-Bench-LI, the Llama-2-70B-Base model has a violation rate of 60.8% using standard DPO training, which is reduced to 10.0% with LIFT-DPO, also boosting the win rate from 5.0% to 11.0%. For the Llama-2-70B-Chat model, the violation rate decreases from 24.2% using standard DPO to 6.7% with LIFT-DPO, with an improvement in the win rate from 10.8% to 12.5%. While the R-DPO baseline improves over standard DPO on both benchmarks especially for a higher α value, it still shows significantly higher violation rates compared to LIFT-DPO, which negatively affects R-DPO's win rates. LIFT-DPO models show no performance degradation on standard AlpacaEval 2 We further assessed our LIFT-DPO models using the standard AlpacaEval 2 benchmark, where no length instructions were added and only the original prompts from AlpacaEval 2 were utilized. The results, detailed in Appendix <ref>, indicate no performance degradation when compared to the baselines. Specifically, the Llama-2-70B-Base model achieved a win rate of 8.6% using standard DPO training, which increased to 9.9% with our LIFT-DPO training. For the Llama-2-70B-Chat model, the win rates improved from 12.6% using DPO to 12.9% with LIFT-DPO. However, the Llama-3-8B-Base models yielded a slight decrease in winrate from 7.8% with DPO to 7.2% with LIFT-DPO, although the LC (length-controlled)-winrate actually increased from 13.9% to 15.7% (as the average response length decreased). Similarly, the Llama-2-8B-Instruct models have a winrate of 25.8% with DPO, which slightly decreased to 22.7% with our LIFT-DPO training, although the LC-winrate actually increased from 26.3% to 26.5%. In summary, our LIFT-DPO models exhibit comparable performance to standard DPO when length instructions are not applied. We observed similar results on standard MT-Bench as shown in Appendix <ref>. LIFT-DPO can follow out-of-distribution length instructions better than existing methods To increase the difficulty of our AlpacaEval-LI benchmark, we can progressively decrease the limit in the length instructions by applying a scaling factor to the existing values ranging from 0.9 down to 0.1. This adjustment introduces a spectrum of challenging length constraints. We assessed the performance of various models based on Llama-2-70B-Base, including standard DPO, R-DPO and LIFT-DPO, and plotted their violation rates. The results are provided in <ref>. The analysis reveals that the standard DPO model exhibits increasingly higher violation rates as the length scale decreases, with rates escalating from below 50% to almost 100% when the scale factor is set to 0.1. This indicates significant difficulties in adhering to stringent length constraints for this model. 
The R-DPO model displays trends similar to standard DPO, suggesting that while it can reduce the generation length, it lacks the capability to precisely steer it. In contrast, our LIFT-DPO model consistently maintains a low violation rate (below 10%) across all tested length scales. We observe similar trends on MT-Bench-LI, see Appendix <ref> for details. Robustness of Length Controlled AlpacaEval Previous research has acknowledged the presence of length bias, and designers have introduced measures to mitigate it, notably through Length-Controlled (LC) AlpacaEval, which incorporates an LC winrate that considers generation length <cit.>. Despite these efforts, we find that the LC winrate can still be manipulated by adjusting the length instructions. By scaling the length constraints as we did in AlpacaEval-LI and measuring the AlpacaEval LC winrate, we observe significant fluctuations in the results, as shown in <ref>. The LC winrate varies dramatically, from 23% up to 29%. In contrast, in our work we argue that expected length is ill-defined in many queries (see motivation in <ref>), and that length instruction evaluation helps remove this ambiguity, and hence also any potential gameability. § CONCLUSION To address the issue of length bias in general instruction following, we propose length instructions, which assess models' abilities to generate responses within given length limits. We introduce two Length-Instructed (LI) benchmarks, MT-Bench-LI and AlpacaEval-LI, and show that SOTA models surprisingly fail to follow length instructions on these benchmarks. We hence propose Length-Instruction Fine-Tuning (LIFT), a method that augments existing general instruction-following examples with varying length limits. LIFT-DPO models show significant improvement in their ability to control output length while maintaining high response quality. Our length instruction following approach provides a way to compare models without length bias, as it does not suffer from the gameability of simply increasing model response length, as that leads to a violation. In addition, augmenting general instructions with length limits allows for more controllability for users in real-world use cases. 0 While there are existing evaluation measurements trying to address length bias through different avenues, we argue that they might be vulnerable to attach by steering generation lengths of our models via length instructions. Further, we believe augmenting general instructions with length limits allows for more controllability for users in different use cases. 0 To address length bias in evaluation, we introduce Length Instruction-Following evaluation to assess models' general instruction-following abilities within length limits. We show that SOTA models surprisingly fail to follow such length limits on AlpacaEval-LI and MT-Bench-LI. As a mitigation, we propose the LIFT method that augments existing general instruction-following examples with varying length limits. LIFT-DPO models trained on such length instruction-following tasks show significant improvement on steerability of output length while maintaining high response quality. While there are existing evaluation measurements trying to address length bias through different avenues, we argue that they might be vulnerable to attach by steering generation lengths of our models via length instructions. Further, we believe augmenting general instructions with length limits allows for more controllability for users in different use cases. 
§ LIMITATIONS In this paper, the length limit is set in terms of the number of words, but more generally it can be set in number of characters, or some other measure. Another direction of generalization can be allowing length instructions to be phrased using different wording instead of a fixed template, so users can specify the limit in their own words, such as “Keep the response under 100 words.”. We also did not address other kinds of length instructions such as “write at least 100 words”. While this paper attempts to address length bias in model evaluations through length instructions, this bias may also arise from a natural human preference for longer and more detailed responses. Future research could further explore human desired response lengths across different instructions. Such studies could further enhance the alignment of models with human expectations. Another possible cause of longer responses could be related to the increased computation allowance that comes with more tokens, which can benefit from future analysis. § WORD COUNT FUNCTION WE USE from nltk.tokenize import word_tokenize import string def count_words(text) -> int: # Count the number of words # while excluding punctuations return len([word for word in word_tokenize(text) if word not in string.punctuation]) § ADDITIONAL RESULTS ON SOTA MODELS' LENGTH FOLLOWING MEASUREMENTS We plot the generation lengths over target instruction lengths on AlpacaEval-LI for Mistral Large and LLAMA3-70b-Instruct in <ref>. The scatter plots reveal that both models occasionally fail to meet the length constraints. § TRAINING AND TEST LENGTH DISTRIBUTION <ref> illustrates the distribution of length constraints in our LIFT-DPO training data alongside those in AlpacaEval-LI and MT-Bench-LI. We observed that the majority of our training data features length constraints ranging from 50 to 300, a range that is consistent with that of AlpacaEval-LI. Additionally, we have depicted the distribution of length constraints in AlpacaEval-LI scaled by a factor of 0.1 in <ref>. Nearly all scaled length constraints fall below 50, constituting only a small fraction of the length constraints present in our training dataset. § CHECKPOINT SELECTION We perform checkpoint selection by saving a checkpoint every 200 steps and at the end of each epoch. We then evaluate these checkpoints using GPT-4-Turbo on a set of 253 validation examples, which are derived from various sources as outlined by <cit.>. The LI (Length-Instructed) validation set is augmented from the same validation set but includes length limits, using the minimum length from three strong LLMs in <ref>. For the standard instruction-following validation set, each new model checkpoint is evaluated by comparing its generations pairwise with those from the previous checkpoint, utilizing the AlpacaEval evaluation prompt format <cit.>. For length-instructed tasks, evaluations are conducted pairwise against a baseline from one of the three LLMs, specifically the one whose generation length matches the length limit specified in the prompt. The win rate of a model checkpoint is calculated as the average of the win rates on both the instruction-following validation set and the LI validation set. We implement early stopping if we observe a decrease in this average win rate. § MT-BENCH RESULTS In the standard MT-Bench evaluation, models employ different temperatures (including 0) for different categories during inference time. 
To expand the size of MT-Bench-LI via sampling, we standardized the temperature setting to 0.7 across all categories for pairwise baseline models as well as models being tested. However, for the standard MT-Bench evaluation reported in <ref>, we switch back to the original setup using different temperatures for different categories and assessing performance on 80 unique questions. § DECODING PARAMETERS During inference time, except for the standard MT-Bench evaluations, we apply consistent hyperparameter settings for the Llama models. For the Llama2 models, we set the temperature to 0.7, with a maximum token limit of 2048. For the Llama3 models, the temperature is adjusted to 0.6, maintaining the same top-p of 0.9, but with an increased maximum token limit of 4096. We consistently set top-p to 0.9 for AlpacaEval 2 and AlpacaEval-LI and top-p to 1.0 for MT-Bench and MT-Bench-LI. § ADDITIONAL LENGTH INSTRUCTION FOLLOWING RESULTS In our MT-Bench-LI evaluations, we progressively reduced the length instructions by applying scaling factors to the existing values, ranging from 0.9 down to 0.1. We assessed the performance of various models based on the Llama-2-70B-Base, including standard DPO, R-DPO, and LIFT-DPO, and plotted their violation rates as shown in <ref>). The results indicate that our LIFT-DPO trained model significantly outperforms both DPO and R-DPO in adhering to length constraints. Specifically, the LIFT-DPO model maintains a violation rate below 20% across all scaling factors, whereas both DPO and R-DPO models exhibit violation rates exceeding 80% when the scaling factor is reduced to less than 0.6. Additionally, we analyzed the performance of models based on Llama-3-8B-Instruct on AlpacaEval-LI under gradually reduced length limits. The observed trend is similar to that of MT-Bench-LI, as depicted in <ref>. § ALPACAEVAL RESULTS & MT-BENCH RESULTS The results of the LIFT-DPO models on standard AlpacaEval and MT-Bench are detailed in <ref> and <ref>, respectively. Our analysis reveals that the LIFT-DPO models exhibit no performance degradation when compared to the standard DPO models on these benchmarks.
http://arxiv.org/abs/2406.18856v1
20240627025355
FFN: a Fine-grained Chinese-English Financial Domain Parallel Corpus
[ "Yuxin Fu", "Shijing Si", "Leyi Mai", "Xi-ang Li" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CE" ]
FFN: a Fine-grained Chinese-English Financial Domain Parallel Corpus
Yuxin Fu, Shijing Si (corresponding author), School of Economics and Finance, Shanghai International Studies University, Shanghai, China
Leyi Mai, Xi-ang Li, School of Education, Shanghai International Studies University, Shanghai, China
=======================================================================================================================================
§ ABSTRACT Large Language Models (LLMs) have stunningly advanced the field of machine translation, though their effectiveness within the financial domain remains largely underexplored. To probe this issue, we constructed a fine-grained Chinese-English parallel corpus of financial news called FFN. We acquired financial news articles spanning January 1, 2014 to December 31, 2023 from mainstream media websites such as CNN, FOX, and China Daily. The dataset consists of 1,013 main texts and 809 titles, all of which have been manually corrected. We measured the translation quality of two LLMs – ChatGPT and ERNIE-bot, utilizing BLEU, TER and chrF scores as the evaluation metrics. For comparison, we also trained an OpenNMT model based on our dataset. We detail the problems of LLMs and provide an in-depth analysis, intending to stimulate further research and solutions in this largely uncharted territory. Our research underlines the need to optimize LLMs within the specific field of financial translation to ensure accuracy and quality.
Large Language Models, Chinese-English corpus, Financial news
§ INTRODUCTION Translation in the business and finance domain has increased in volume as well as impact due to growing globalisation, the explosion of financial transactions, and increasing business activity <cit.>. China is one of the most populous nations in the world, and arguably the growth in its wealth and spending power makes it a very attractive destination for business <cit.>. Besides, English is the dominant language in global business <cit.>. Therefore, the demand for Chinese-English translation is huge, across many areas and industry sectors. Due to the complicated nature of financial concepts, translators have to commit to a big up-front investment in order to acquire a deep knowledge of the various sub-sectors and types of texts, each with a different level of lexical complexity. Large language models (LLMs) pretrained on massive unlabeled corpora have shown impressive emergent abilities under model scaling which enable prompting for downstream applications <cit.>. However, there is little work exploring the application of LLMs to machine translation in the financial domain. Additionally, the translation performance of LLMs derives from their training datasets. If we want to study the effectiveness of large language models in Chinese-English translation within the financial domain, it is also essential to search for existing datasets in this field. Based on the above considerations, we conducted the relevant research and experiments. In this paper, our main contributions are listed below:
* We build a parallel dataset of English-Chinese news translation in the finance domain, which includes main texts and titles.
* Based on our parallel dataset, we evaluated the performance of ChatGPT and ERNIE-bot in translation, and brought in DeepL and Google for comparison, and found some unexpected feedback. * We trained an OpenNMT model based on it to evaluate the performance of the dataset. * We also provide a quantitative and qualitative analysis to disclose problems when prompting for MT, which provides insights for future study. § RELATED WORKS §.§ Large Language Models Large language models have good promise for machine translation. Reference <cit.> found that have recently shown interesting capabilities of in-context learning and can adapt to a set of in-domain sentence pairs and/or terminology while translating a new sentence. Reference <cit.> proposed a novel fine-tuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on. When it comes to ChatGPT, reference <cit.> presented a comprehensive evaluation of GPT models for machine translation and found that GPT models achieve very competitive translation quality for high resource languages, while having limited capabilities for low-resourced languages. Although Chinese is one of the high resource languages, the study of Chinese-English translation quality of LLMs is still under-explored. In this paper, we release a high-quality, human verified parallel dataset that can benchmark popular LLMs. §.§ Datasets There are some existing bilingual news datasets in Chinese and English. WikiTitles-v3 <cit.> is a dataset of titles. ParaCrawl(bonus) <cit.>, WikiMatrix <cit.> and BackTrans News <cit.> provide parallel corpus in the form of sentences. However, all these databases does not target the financial field. By contrast, <cit.> provides a Chinese–English parallel dataset which focuses on financial news, using the Financial Times website, from which they grabbed 60,473 news items from between 2007 and 2021. After browsing through the dataset, we discovered that a large number of the Chinese and English texts are not well aligned. Additionally, since the data was scraped from web pages, there are many HTML tags present. We list three examples in Table <ref>. Thus, we aim to create a database exclusively focused on Chinese and English financial news, meticulously proofread by humans to ensure alignment of sentences. §.§ Neural Machine Translation Neural machine translation (NMT) is a new methodology for machine translation that has led to remarkable improvements. Currently there are many existing NMT implementations. Many systems such as those developed in industry by Google, Microsoft, and Baidu, are closed source, and are unlikely to be released with unrestricted licenses. In addition, we found other open-source neural NMT framework. OpenNMT <cit.> is an open-source framework for neural machine translation which can be used to try out new ideas in translation, language modeling, summarization, and many other NLP tasks. So we use OpenNMT to train a model wihch focus on the translation of Chinese and English financial news. § FFN CREATION We have systematically amassed a substantial volume of financial news articles sourced from various reputable websites, including FOX[<https://www.foxnews.com/>], CNN[<https://edition.cnn.com/>], and China Daily[<http://www.chinadaily.com.cn/>]. All these financial news are freely available. The compilation spans a time-frame from January 1, 2014, to December 31, 2023. 
The dataset can be found at <https://github.com/shijing001/FFN_corpus>. We are committed to crafting a precise and high-quality evaluation dataset. As a result, we refrained from directly scraping sentences from web pages using code, because such direct scraping often results in unaligned text. Instead, we manually browsed web pages, selected several paragraphs and the title of each complete news article to add to our dataset, and repeatedly corrected the translated results during the manual screening process. The resulting dataset comprises two distinct categories: main texts, which encompass detailed content within the financial news articles, and titles, representing the headlines of these articles. The identical information is presented in both Chinese (ZH) and English (EN) versions, as delineated in Table <ref>. In contrast to the corpora of WMT <cit.>, our dataset is specifically tailored to financial news, providing content exclusively in simplified Chinese, without the amalgamation of simplified and traditional Chinese characters. Furthermore, when juxtaposed with existing Financial News datasets for text mining <cit.>, our dataset, which is manually aligned, ensures translation accuracy and is free of any HTML tags, eliminating the need for further preprocessing. Besides, our dataset stands out for its currency, covering the period from 2014 to 2023, a more recent span compared to the earlier range of 2007 to 2021. Notably, the data in our dataset is sourced from different websites than those in existing datasets, ensuring the provision of distinct data even for the same chronological year. §.§ Main text Main text refers to the primary content within financial news articles, predominantly characterized by lengthy declarative sentences that encompass various clauses. These sentences exhibit a strong contextual meaning. Given the nature of financial news, the inclusion of company names, policy clauses, legal documents, and financial terms is commonplace within these sentences. The main-text portion is not sentence-aligned but paragraph-aligned, which aims to provide the contextual background needed to examine the influence of context on the translation outcome. §.§ Titles In contrast to main texts, titles exhibit a distinct nature characterized by brevity and summarization. Essentially, a title serves as a condensed representation or key focal point of the entire article, reflecting a pronounced authorial intent. Notably, titles are often more concise, and some may lack a clear sentence structure, making it inappropriate to categorize them strictly as short sentences. Moreover, the tone employed in titles may lean towards the hyperbolic, strategically designed to captivate readers' attention, thereby differing from the more neutral tone found within paragraph sentences. It is crucial to note that, as titles are crafted by authors after a comprehensive understanding of the article, their extraction alone may result in an abrupt representation. Additionally, the inherent differences in linguistic thinking between Chinese and English contribute to variations in the titles of the same article across languages. § EXPERIMENTAL SETUP §.§ Machine Translation Models This comparative study aims to assess the performance of the selected models in the context of translating Chinese (ZH) to English (EN). By scrutinizing their respective capabilities, we seek to discern any potential advantages or differences in performance, particularly in the realm of ZH-EN translation.
This exploration is anticipated to shed light on the strengths and weaknesses of each model, contributing valuable insights to the field of machine translation and language understanding. For our comparative analysis, we have selected two distinct LLMs: ChatGPT[<https://chat.openai.com/>], a popular LLM developed by OpenAI, and ERNIE-Bot[<https://yiyan.baidu.com/>], developed by Baidu. Notably, ERNIE-Bot originates from Chinese researchers, prompting our interest in evaluating its efficacy in ZH-EN translation compared to ChatGPT. For comparison, we also choose DeepL[<https://www.deepl.com/translator>] and Google[<https://translate.google.com/>], two online translation systems. Additionally, we trained an OpenNMT model <cit.> based on the dataset "Financial News dataset for text mining" <cit.>, with our dataset serving as its test set. We wanted to evaluate how effective this existing dataset is when actually used to train models. Because the original authors of <cit.> did not manually align this dataset, we pre-processed it by manually aligning it and removing HTML tags. The resulting dataset will also be made publicly available for research at <https://github.com/shijing001/FFN_corpus>. §.§ Evaluation and Detailed Configuration We adopt BLEU <cit.>, TER <cit.> and chrF <cit.> as our evaluation metrics, all supported by SacreBLEU <cit.>. In our experiment, we pay attention to the impact of different prompt styles in guiding LLMs' translation capabilities. We initiated the experiment with two distinct types of English prompts, which were later translated from English to Chinese, as shown in Table <ref>. This allowed us to examine whether the prompt's language affects translation quality. § RESULTS AND ANALYSIS §.§ Performance of various translation systems Table <ref> displays the performance of five machine translation systems in both directions (ZH-EN and EN-ZH). Generally, DeepL and Google Translate outperform both ChatGPT and ERNIE-Bot. Especially in the translation of titles, the scores of both translation systems are superior to those of the LLMs. Particularly in the TER scores for titles, both translation systems (Google Translate and DeepL) clearly demonstrate their superiority in translation accuracy. From this table, the performance of the LLMs (ChatGPT and ERNIE-Bot) is quite similar. In terms of translation direction, the performance of LLMs in EN-ZH translation is better than in ZH-EN translation. Overall, the translation quality of the main text is better than that of the titles. From Table <ref>, the BLEU scores of the OpenNMT model (trained from scratch) are much lower than those of the LLMs and the translation software. However, this does not necessarily reflect poor performance of the OpenNMT model itself; rather, it indicates that there are still some issues with the training dataset it relies on. We speculate that the main problem lies in the fact that the dataset itself is too small, and many specialized terms are not included in it. This actually highlights an issue: there is indeed a shortage of parallel datasets for Chinese and English financial news, and relying solely on the dataset in <cit.> is insufficient. §.§ Performance of LLMs over four prompts To investigate the effects of prompts on LLMs, we utilize four prompts (two in English and two in Chinese), listed in Table <ref>. Table <ref> presents the performance of ChatGPT and ERNIE-Bot over those four prompts.
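For reference, the BLEU, TER and chrF figures reported here can be computed with the SacreBLEU Python API roughly as follows; this is a minimal sketch, and the example segments are placeholders rather than actual FFN entries:

from sacrebleu.metrics import BLEU, CHRF, TER

# System outputs, one entry per segment (placeholder data, not FFN entries).
hyps = ["The central bank kept interest rates unchanged.",
        "Stock markets rallied on Friday."]
# A single reference stream: the gold translations for the same segments.
refs = [["The central bank left interest rates unchanged.",
         "Stock markets rallied on Friday."]]

bleu, ter, chrf = BLEU(), TER(), CHRF()
print("BLEU:", round(bleu.corpus_score(hyps, refs).score, 2))
print("TER :", round(ter.corpus_score(hyps, refs).score, 2))
print("chrF:", round(chrf.corpus_score(hyps, refs).score, 2))

# For the EN-ZH direction, Chinese output should be tokenized at the
# character level, e.g., BLEU(tokenize="zh").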
Based on the standard deviation of the BLEU scores across the various prompts, the choice of prompt has a certain level of impact on the translation outputs of LLMs. § PROBLEMS OF LLMS To further investigate the specific problems of machine translation with LLMs, we conducted a manual evaluation of the translation results generated by ChatGPT and ERNIE-Bot. Through this evaluation, we discovered the following issues, which are summarized in Table <ref>. For issues unique to ChatGPT, we list them in Table <ref>. The detailed explanation of each type of error is given below. More problematic translation examples of LLMs can be found in the Appendix. The Rejection of Translation (RT) On occasion, ERNIE-Bot may decline to translate certain sentences, responding with a message such as "Please refer to relevant websites for more information, and feel free to ask me any other questions." Besides, ERNIE-Bot may provide a translation when using one prompt but reject the translation when using another prompt, indicating that the model is not stable when outputting translation results. Answer according to the Meaning of the Sentence (AMS) Another observed anomaly in ERNIE-Bot's feedback is its tendency to provide an interpretation or understanding of the given sentences instead of delivering a translation. This behavior is deemed erroneous since the model fails to fulfill the translation request as specified in our prompt. Pinyin Character Feedback (PY) In some instances, when prompted in English, ChatGPT may add Pinyin to the results, potentially lowering the overall scores. This could be because ChatGPT assumes that users prompting in English may not understand Chinese, thus including Pinyin to aid pronunciation. Traditional Chinese Results (TC) Albeit infrequently, when conducting English to Chinese translation with English prompts, ChatGPT may provide results in both simplified and traditional Chinese. Giving Notes (GN) Sometimes, ChatGPT and ERNIE-Bot may add notes to the results. This usually does not affect the output of the translation text. Multiple Outcomes (MO) Normally, a single input will result in one translation, but sometimes multiple translations are given. Reserve the Original Sentences (ROS) ERNIE-Bot may at times reserve (i.e., return) the original sentences rather than translate them, perhaps because its training data is insufficient for the translation. Information Omission (IO) LLMs may inadvertently overlook certain information during translation due to an insufficient grasp of contextual nuances. After comprehending the overall meaning of a sentence, the system might erroneously omit certain words, resulting in the loss of crucial information and hindering the reader's accurate understanding of the original text. This issue is exacerbated when translating long sentences or text with intricate grammatical structures, which strains the system's ability to capture detailed nuances, leading to potential information omission. Errors in Financial Terminology (EFT) Translation errors in financial terminology are prevalent and significantly impede readers' efficiency and comprehension. These errors often arise from the literal interpretation of technical terms. The underlying cause may be that LLMs lack the corresponding financial terms in their databases, hindering accurate translations. Mispunctuation (MIS) The occurrence of such errors primarily stems from the disparity in punctuation conventions between Chinese and English.
Chinese employs full-width (full-angle) punctuation, while English utilizes half-width punctuation, and many symbols do not have direct equivalents, potentially leading to translation inaccuracies. Furthermore, the divergent grammatical structures of Chinese and English necessitate adjustments during translation, often involving changes in punctuation. If machine translation does not appropriately address these differences, it can result in the incorrect application of punctuation marks, further contributing to translation errors. Errors in the Name of Company and Organization (ENCO) In the realm of finance, the accurate translation of company names and names of professional organizations holds significant importance. However, LLMs often exhibit a tendency to overlook these specific terms, either failing to translate them or providing translations that do not align with the actual names. This oversight can lead to confusion among readers. One plausible explanation for this issue is that language models lack corresponding data in their databases for these specific terms. Additionally, institutions are sometimes presented in the form of abbreviations, and the same abbreviation may have different referents in the financial field. In the absence of context, language models may adopt a strategy of not translating to avoid potential inaccuracies in the output. Tense (TEN) Due to the brevity and contextual limitations inherent in most titles, especially in the context of translation from Chinese to English, LLMs may encounter challenges in accurately selecting tenses. This can result in inaccuracies, with past tense phrases being mistakenly rendered as present perfect tense constructions. Extended Meaning (EM) The textual content of titles often encompasses intricate semantic nuances, integrating elements such as metaphors and personification to convey layers of meaning. However, when processed by LLMs for translation, there exists a tendency to prioritize literal interpretations, which can potentially introduce ambiguity into the translated output. This divergence in translation approach may compromise the ability of LLMs to accurately capture the nuanced essence of the original title, consequently impacting the clarity and effectiveness of the translated text. Sentence Pattern (SP) Indeed, a prevalent characteristic of titles is their deviation from complete sentence structures; instead, they commonly feature concise phrases or fragments. However, when subjected to translation by LLMs, these titles often undergo an automatic transformation into full sentences, thereby losing their distinctive structural nuances. This transformation can result in a loss of conciseness and impact, ultimately diminishing the effectiveness of the translated title in conveying its intended message. Among these problems, Pinyin character feedback, traditional Chinese results, giving notes and multiple outcomes can all be avoided by changing the prompts. However, the others actually reflect the translation performance of the LLMs themselves, and are not completely eliminated by changing the prompts. § CONCLUSION We have developed a parallel English-Chinese news translation dataset in the finance domain, comprising main texts and titles. Unlike existing datasets, our dataset has been manually verified and revised for high quality, and is current as of December 2023. This dataset can be utilized as a benchmark for evaluating the translation capabilities of LLMs.
We observed that various prompts impact LLM translation results, including issues with Pinyin character feedback, traditional Chinese output, annotations, and multiple outcomes. These issues can be mitigated by adjusting the prompts. However, LLMs still exhibit problems such as mispunctuation and errors in company names, organization names, and financial terminology, highlighting their inherent limitations. Compared to LLMs, translation software like DeepL performs better, especially in translating titles. To enhance LLM competitiveness against translation software, improvements should begin with their training datasets. § ACKNOWLEDGMENTS The authors thank the reviewers for the valuable comments that helped to improve the paper. This work was supported by the National Natural Science Foundation of China (grant number 12071302), “the Fundamental Research Funds for the Central Universities” (grant number 2022114012), and the Mentor Academic Guidance Program of Shanghai International Studies University (grant number 2022113028).
http://arxiv.org/abs/2406.18117v1
20240626071428
Resilient and Secure Programmable System-on-Chip Accelerator Offload
[ "Inês Pinto Gouveia", "Ahmad T. Sheikh", "Ali Shoker", "Suhaib A. Fahmy", "Paulo Esteves-Verissimo" ]
cs.AR
[ "cs.AR" ]
Resilient and Secure Programmable System-on-Chip Accelerator Offload Inês Pinto Gouveia1, Ahmad T. Sheikh2, Ali Shoker3, Suhaib A. Fahmy4 and Paulo Esteves-Verissimo5 Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Kingdom of Saudi Arabia Email: 1ines.pintogouveia@kaust.edu.sa, 2ahmad.sheikh@kaust.edu.sa, 3ali.shoker@kaust.edu.sa, 4suhaib.fahmy@kaust.edu.sa, 5paulo.verissimo@kaust.edu.sa July 1, 2024 § ABSTRACT Computational offload to hardware accelerators is gaining traction due to increasing computational demands and efficiency challenges. Programmable hardware, like FPGAs, offers a promising platform in rapidly evolving application areas, with the benefits of hardware acceleration and software programmability. Unfortunately, such systems composed of multiple hardware components must consider integrity in the case of malicious components. In this work, we propose Samsara, the first secure and resilient platform that derives protocols from Byzantine Fault Tolerance (BFT) to enhance the computing resilience of programmable hardware. Samsara uses a novel lightweight hardware-based BFT protocol for Systems-on-Chip, called H-Quorum, that implements the theoretical-minimum latency between applications and replicated compute nodes. To withstand malicious behaviors, Samsara supports hardware rejuvenation, which is used to replace, relocate, or diversify faulty compute nodes. Samsara's architecture ensures the security of the entire workflow while keeping the latency overhead, of both computation and rejuvenation, close to the non-replicated counterpart. MPSoC, Fault and Intrusion Tolerance (FIT), Rejuvenation, Reconfigurable Hardware, Resilience § INTRODUCTION The trend of offloading computing to hardware accelerators is gaining more traction in emerging systems and applications such as cyber-physical systems (CPS), Internet-of-Things (IoT) services, automation, and space applications <cit.>. The purpose is often twofold: accelerated computing and enhanced security. Hardware acceleration is a result of building application-specific logic, e.g., matrix multiplication and cryptographic operations, precluding OS or software interruptions or locking, and taking advantage of fine-grained parallelism; whereas security leverages hardware immutability properties to provide guarantees like tamper-resistant data processing, isolation, and the building of security abstractions <cit.>. To continuously support new use-cases while reducing the burden of hardware fabrication of a monolithic system (costly and slow), these systems are often (1) assembled and deployed as Multi-Processor Systems-on-Chip (MPSoCs) that make use of modular, cheap Commercial-off-the-shelf (COTS) components <cit.>; and (2) leverage the hardware programmability features of general-purpose reconfigurable hardware like FPGAs and CGRAs <cit.>. Unfortunately, this raises new computing integrity challenges against intrusions and faults.
Reconfigurable hardware fabrics allow new computing functionality to be loaded after fabrication <cit.>, by means of a binary file (usually called the bitstream) that maps an architecture or module (which we call a Tile henceforth) onto the fabric. While this gains the intrinsic performance and security properties of hardware after loading, it inherits some of the mutability weaknesses of software prior to loading (given the nature of the binary file), which can be prone to intrusion and benign faults <cit.>. We noticed that little has been done to ensure computing integrity when a tile is faulty or malicious (Byzantine <cit.>). Indeed, the literature has addressed this issue through several complementary approaches, e.g., focusing on encryption, isolation, and memory obfuscation <cit.>; however, these approaches assume tiles are trusted or rely on software and, thus, mutable implementations. For instance, encrypting the bitstream <cit.> prevents tampering with it at rest, e.g., in memory; however, it withstands neither a contaminated bitstream at the development phase (which can occur due to the lengthy hardware development process and the use of many tools), nor Network-on-Chip (NoC) attacks during execution <cit.>. On the other hand, containment and memory randomization <cit.> ensure that the executions of deployed co-existing tiles do not interfere, but give no guarantees on output integrity if tiles are Byzantine, e.g., corresponding to a vulnerable bitstream. Finally, Triple Modular Redundancy (TMR) is often used as a voting mechanism to ensure computing integrity <cit.>, but it is not adequate for malicious adversaries. In this paper, we introduce Samsara, the first resilient and secure computing platform architecture for programmable MPSoCs. Samsara hinges on hardware reconfigurability to ensure computing integrity by employing state-machine replication (SMR) <cit.> and rejuvenation of tiles, managed through a novel lightweight hardware-based Byzantine agreement protocol, called H-Quorum. H-Quorum is optimized for simplicity and low latency to make it feasible for hardware offload. When H-Quorum detects a fault or delay, it recovers via a rejuvenation process that is seamless to the application. Rejuvenation is used to reload (from a bitstream library) a tile facing transient faults or glitches, replace a malicious or faulty tile with a diversified version <cit.> to improve independence of failures, or relocate the tile to another location to avoid underlying glitches or compromised network routes. Diversity of bitstreams is possible by means of different logic module configurations at design time, through design with hardware description languages (HDLs) or open-source architectures and frameworks like those of RISC-V <cit.>. Samsara's architecture is designed to maintain the security of the entire workflow, including storing encrypted bitstreams, booting, execution, and rejuvenation. One of the main challenges addressed by Samsara is circumventing the known complexity and delay of Byzantine agreement, which would represent a significant overhead for hardware accelerators. Hence, we optimize H-Quorum for simplicity and latency by drawing inspiration from Quorum <cit.>, which is known to be the simplest classical Byzantine agreement protocol with the lowest theoretical latency possible <cit.>. Nevertheless, Quorum is known to be impractical in Internet-based settings, as it assumes that clients are not Byzantine and it recovers into a heavy-weight backup phase under faults (see details in Section <ref>).
We circumvent these limitations by making smart choices: Samsara employs an Application-Specific Integrated Circuit (ASIC) Controller that plays the role of a trusted "semi-leader" replica: it acts as a leader by mediating the application requests sent to the replicated tiles and collecting votes, while not participating in the computation phase. This is key to ensure that the Controller exhibits a small fingerprint (a few hundred LoC — of a hardware description language — in our case) so that it can be easily verified, and, thus, considering it a trusted-trustworthy hardware component is a valid assumption. Another main challenge in Samsara is to ensure secure rejuvenation. Rejuvenation is done while the system is running by reloading a new bitstream to the reconfigurable fabric via booting scripts. To make sure intruders cannot modify, initiate, or interrupt rejuvenation, our MPSoC architecture makes use of a simple microprocessor in which only the Controller can run these scripts. This ensures that Samsara maintains state integrity across rejuvenation, supported by a fast shared-memory state-transfer mechanism, detailed in Section <ref>. We provide a systematic proof sketch in Appendix <ref> to ensure the correctness of the entire workflow phases. As a proof of concept, we implemented Samsara on a Xilinx ZCU102 FPGA SoC <cit.>. Our evaluation of accelerator applications shows that Samsara's latency is slightly higher than a non-replicated accelerator, and up to 35.9% lower than state-of-the-art hardware-based Byzantine agreement counterparts, like iBFT <cit.> and a shared-memory implementation of MinBFT <cit.>. Additionally, the evaluation shows that the rejuvenation time is negligible, and 99.89% faster than rebooting the whole platform. The rest of the paper is organized as follows: Section <ref> discusses Samsara's system and threat models, as well as background on MPSoCs and reconfiguration. Samsara's architecture and H-Quorum are presented in Section <ref>. We then provide the evaluation of our proof of concept on a ZCU102 FPGA in Section <ref>, and discuss related work in Section <ref>. Finally, we conclude in Section <ref>. We provide correctness and liveness proof sketches in Appendix <ref>, for the reader's convenience. § SYSTEM AND THREAT MODELS §.§ System Model §.§.§ Generic Programmable MPSoC We consider a generic programmable Multi-Processor System on Chip (MPSoC) composed of: (1) a processor portion, called the Processing System (PS), that has one or more processing cores, used to run application software, alongside the basic I/O and peripherals; (2) a reconfigurable portion (e.g., FPGA, CGRA), used to deploy offloaded functionality, i.e., accelerated tasks. Examples of commercial devices that fit this model include the AMD/Xilinx Zynq-7000 <cit.> and Zynq UltraScale+ <cit.> and the Intel/Altera Agilex 7 SoC. All portions of the MPSoC are connected via an underlying reliable hardware Network-on-Chip (NoC) or network bus, such as AMBA AXI or PCIe [The main difference between a NoC and a bus pertains to the fact that the NoC uses a conceptually point-to-point approach, while a bus tends to be multipoint.]. The choice of communication medium depends on the platform's fine-grained architecture and application requirements (e.g., bandwidth). Channels may slightly delay circulated messages or data but will eventually deliver them as long as they are not under attack.
§.§.§ Samsara COTS In the context of Samsara, we propose an additional application-specific integrated circuit (ASIC) component, as part of the MPSoC PS, which implements the Controller of Samsara (as explained in Section <ref>); and a simple COTS microprocessor, MP-Boot, to handle booting and configuration of the reconfigurable portion (see Section <ref>). We propose that the MPSoC is supplied with volatile memory and hardware-assisted Tamper-Resistant Storage (TRS) (e.g., TPM <cit.> or Hardware Credential Storage <cit.>), used for tamper-proof storage and compute. Modern MPSoCs allow the addition of specific COTS components, modules, or microchips, as we do <cit.>. Future MPSoC designs leveraging chiplets also provide this flexibility. In addition to this, as detailed in Section <ref>, the reconfigurable portion is divided into partitions, i.e., tiles <cit.>, that encapsulate compute functions. A tile is an abstraction of the logic modules containing offloaded and/or accelerated tasks, placed on dynamically reconfigurable hardware regions. Samsara uses these tiles as replicas. §.§.§ Reconfiguration Implementation There are several possible implementations for the reconfigurable portion. For instance, FPGA Programmable Logic (PL) can be used to instantiate hardware accelerators in two phases: (1) loading full bitstreams that define the base infrastructure, e.g., determining the number of tile locations, networking (called routing), and other components; and (2) loading partial bitstreams for reconfiguration at runtime, used to deploy new compute logic. Alternatively, CGRAs <cit.> consist of a large array of function units (FUs) interconnected by a mesh-style network; they offer coarser-grained configuration than FPGAs. Yet another possibility is to use multiple connected GPUs or other diverse FUs, e.g., via a secure implementation of PCIe <cit.>. For the remainder of this paper, we shall focus on an FPGA-based implementation of the reconfigurable portion. For this, we first present a brief overview of MPSoCs and FPGAs to ease the understanding of the remaining sections. §.§ Background on MPSoC and FPGA This section provides a brief background on programmable Multi-Processor Systems-on-Chip (MPSoCs) and FPGA Partial Reconfiguration (PR). §.§.§ MPSoC Architecture An MPSoC is a System-on-a-Chip (SoC) which includes multiple processors, often used in embedded devices. A typical modern MPSoC <cit.> includes multiple fabricated processing cores (called hard cores) in the Processing System (PS) and programmable hardware based on Field Programmable Gate Array (FPGA) technology. The latter can be reconfigured with arbitrary custom-hardware logic after fabrication. The cost of FPGA development and deployment is negligible for low to medium volumes; however, the per-device cost is high. Application Specific Integrated Circuits (ASICs) are chips with immutable logic circuits. They are preferred for large-scale manufacturing due to the cheaper per-device cost once the high non-recurring engineering (NRE) costs are amortized. §.§.§ FPGAs FPGAs possess reconfigurable programmable logic (PL), consisting of fabric that can be programmed according to a design that is mapped into a configuration file called a bitstream. A common case is to design accelerators or softcores, and connect them to other modules such as memory controllers and I/O. The PL can be configured directly through an external interface or via the MPSoC's PS.
The PL consists of a sea of building blocks such as Configurable Logic Blocks (CLBs), programmable routing, I/O banks, Block RAMs (BRAMs), etc. Designed modules, e.g., accelerators or softcores, and their communication medium are mapped onto the aforementioned components. Traditionally, once the PL is programmed, it becomes immutable, i.e., the free and configured regions of the PL cannot be further changed without reconfiguring the whole fabric. However, dynamic partial reconfiguration (PR) allows partitions of the PL to be reconfigured at runtime <cit.>, without mutating the rest of the design. Thus, a full bitstream configures the whole PL, while partial bitstreams modify only the specified partition(s) without compromising the integrity of applications running on the remaining portions of the PL. One of the biggest advantages of PR is the ability to time-multiplex the underlying silicon for various tasks. To design a system using PR, it is necessary to floorplan the design: the PL fabric is spatially divided into Reconfigurable Partitions (RPs) that are to contain dynamically Reconfigurable Modules (RMs). When using PR, floorplans contain a static partition, containing the design modules that cannot or should not be dynamically reconfigured [These can, however, be reconfigured by reconfiguring the whole PL.]; and one or more RPs. The number and placement of these RPs cannot be changed at runtime, meaning that, in order to change the number of RPs used and the type of RMs they can hold, a full bitstream (configured with a new floorplan) must be loaded to the PL. Compatible RMs, e.g., diverse accelerators, can, however, be swapped at runtime using only partial bitstreams. We now present the threat model we assume, based on MPSoCs and FPGAs. §.§ Threat Model In this section we address Samsara's hardware, software and network/bus threat models (see Samsara's architecture in Fig. <ref> for clarity). The threat model focuses on compute integrity and availability. Confidentiality is out of scope for this paper. §.§.§ Samsara Controller and MP-Boot Samsara's Controller is a simple and easily-verifiable trusted-trustworthy ASIC and is, therefore, assumed to be tamper-proof. Similarly, the microprocessor MP-Boot has a small footprint and is used only to run booting and reconfiguration software upon a signal from the Controller. This software is stored in encrypted form in, e.g., the TPM. Therefore, this software is assumed to preserve its integrity and not be called by another entity. Both the Controller and MP-Boot are detailed in Section <ref>. §.§.§ PL Hardware Logic We assume a threat model where an adversary can inject hardware trojans <cit.> or backdoors (e.g., in coalition with PS software <cit.>) into PL Tiles during the design cycle <cit.>. That is, we assume tiles in the PL and, thus, their bitstreams may contain malicious functionality that impacts integrity. Additionally, we assume the possibility of non-malicious threats caused by unintended vulnerabilities created at design time <cit.>. We do not focus on attacks that have been successful in breaking the confidentiality of bitstreams <cit.> while they are at rest in memory. To boost resilience against a compromised tile, several redundant tiles are run simultaneously in the PL following the H-Quorum agreement protocol — explained in Section <ref>. We assume that a majority of these tiles are not simultaneously compromised in a short time window, which means they have diverse configurations or implementations to avoid common-mode failures.
For example, diverse configurations of hardware modules are a possibility that is currently supported by some vendor tools, while different implementations are also possible via hardware description languages (HDLs) or with the support of the RISC-V instruction set architecture and respective frameworks <cit.>. We also assume that tiles have a level of containment or isolation, by having the base PL design use methods like Xilinx's Isolation Design Flow (IDF) <cit.>, which provides fault containment at the FPGA module level, so that an anomaly or vulnerability may not affect other modules of the PL directly. §.§.§ General Hardware Application Cores may fail arbitrarily since these do not have access to the Controller, MP-Boot, the TPM, or the PL. For further isolation guarantees, solutions like <cit.> can be used for flexible access control. We assume no side-channel attacks on the platform, no failure of the FPGA fabric itself, and no MPSoC-wide failures. This assumption is reasonable since FPGAs and MPSoCs in general are high-end hardware that is subject to rigorous testing. Still, resilient clocks <cit.> mitigate some chip-wide common-mode faults, and the recent trend towards interconnected chiplets further improves the physical decoupling of MPSoC components. §.§.§ Software Similarly, the attacker can compromise any application software that would be executed in the PS's Application Cores (see Section <ref>) or in PL softcores (if used as tiles). However, as we shall explain later, malfunctioning PS application software does not interfere with Samsara, and software running on the PL (if used) has no means to cause trouble beyond providing incorrect output, which Samsara addresses with H-Quorum. Furthermore, to safeguard Application Cores further, their core-to-NoC interface can be augmented with solutions such as <cit.>. §.§.§ Network-on-Chip Moreover, via a compromised tile, the adversary may compromise the NoC by dropping or modifying exchanged messages <cit.>, namely by means of a software-hardware coalition <cit.>. The adversary cannot, however, spoof messages pretending to be another tile, as shown in Annex <ref>. § SAMSARA RESILIENT COMPUTING PLATFORM §.§ Architecture We present a high-level architecture of Samsara in Figure <ref>. Samsara's architecture is composed of three main components: the Controller, the Compute Platform, and the MP-Boot Utilities. The Controller manages the Compute Platform in the PL and interfaces the application requests from the Application Cores with it. Having such a critical role, the Controller is implemented in hardware (ASIC) that cannot be tampered with. To enable different configurations and extensions, the Controller stores default PL configurations and security keys in Tamper-Resistant Storage (TRS). The Compute Platform implements the compute Tiles, i.e., accelerators and critical functions, to be used by the applications. It is composed of FPGA-based tiles in the PL, loaded from bitstreams that are encrypted, stored in the Softcore library in the PS's memory, and later authenticated. Encryption provides basic design security to protect the design from copying or reverse engineering (which is not the focus of this paper), while authentication provides assurance that the bitstream provided for configuration is the unmodified bitstream created by an authorized user. Authentication verifies both the data integrity and the authenticity of the bitstream.
Authorized users can still have corrupt or malicious bitstreams, and these can also be subject to faults, hence the need for PL-side Rejuvenation, as outlined in Section <ref>. With the benefit of reconfigurable hardware, tiles are subject to updates over time, e.g., to modify functionality, which makes them less secure than fixed-function ASIC tiles. Consequently, the Compute Platform supports active replication of tiles to mask and detect faulty or misbehaving ones. This is protected and managed by the Controller. In addition, Samsara makes use of Software Utilities, i.e., the Bootloader and Tileloader (see Fig. <ref>), that are used occasionally by the Controller to manage and rejuvenate faulty tiles. These utilities are software-based components that should only interact with the PL at booting and reconfiguration time (detailed in the next section), respectively. Therefore, they are retained in memory in an encrypted form and are executed on a dedicated processor (MP-Boot) that is the only software processing unit with PL access. Furthermore, MP-Boot is not available for user-level application usage, to prevent malicious code from invoking the API that loads PL bitstreams. In our architecture, the communication between the Controller and the Compute Platform Tiles can be done, e.g., 1) through a bus such as PCIe, or 2) through shared PL memory - BRAMs. While 1) would require the use of signatures for authentication, 2) does not, as we explain in Annex <ref>. We shall use 2) for the remainder of the paper. As seen in Fig. <ref>, the Controller holds a BRAM in the PL to which it has read/write access. Here, it writes the Requests and keeps the Log of executed Requests for later state transfer after rejuvenating a tile (in the case of stateful applications). Tiles have read-only access to the Controller's BRAM in order to prevent malicious ones from modifying its contents. In turn, tiles hold a BRAM of their own, to which they have read/write access and the Controller has read-only access. In this BRAM, tiles place their Replies as well as their local Log. Requests and Replies are written up to a maximum, after which a checkpoint is taken and the BRAMs are reset to give space to new rounds of Requests/Replies. Checkpoints are saved in an on-chip SRAM outside the PL and can be fetched if needed by the Controller. Reset of the BRAMs can be triggered by the Controller or by a simple Reset IP in the PL. The Controller's memory can instead be kept outside the PL as regular SRAM or as Tamper-Resistant Storage (TRS), which could improve security due to being outside reconfigurable logic, but, depending on the specific implementation (bus, type of storage), could also increase access times. §.§ Operational Phases §.§.§ Overview Samsara operates in three phases: Bootstrapping, Execution, and Rejuvenation. In a nutshell, in the Bootstrapping phase, the Controller launches the Compute Platform. It executes the Bootloader utility that loads the main PL bitstream (i.e., the floorplan of the FPGA) and then the Tileloader utility following the configuration stored in the TRS. The configuration decides the types/versions of tiles to load from the Softcore Library as partial bitstreams, how many tile replicas are run, when and how to rejuvenate tiles, etc. When the Compute Platform is ready for use by the applications, the Execution phase begins.
In this phase, the Controller receives requests from the applications, assigns each request a unique ID, and sends them to all the tiles of the Compute Platform by running the lightweight quorum-based Byzantine agreement (detailed next). The tiles execute the same request simultaneously[Simultaneous here does not mean in lock step.] and reply to the Controller. The latter verifies whether a majority of the received responses match and forwards the reply back to the application. In case any mismatch or delay is detected, the corresponding tile is to be replaced. This announces the launch of the Rejuvenation phase. In this phase, the Controller destroys the faulty tile and replaces it with another one of the same type (i.e., refreshing it) or with a diverse version. This follows a policy defined by the Controller. Rejuvenation ends by completing the state transfer to the newly loaded tile if the application is stateful. Next, we explain the three phases of Samsara in detail. Floorplanning refers to the set of physical constraints used to control how logic is placed in the PL. The definition of dynamically reconfigurable zones is done at this stage and defines the containers these partitions can be in. §.§.§ Bootstrapping Phase This phase is only launched when the MPSoC is started. It aims at preparing the Compute Platform by loading the necessary main and partial (i.e., Tile) bitstreams to the PL, with the following steps: * The Controller sets its status: 𝐬𝐭𝐚𝐭𝐮𝐬← Loading, and accepts no requests from applications. * The Controller verifies and launches the Bootloader in the dedicated microprocessor. This loads the main bitstream to the PL. The main bitstream represents the basic configuration on which Tile bitstreams are loaded afterwards, into specific containers defined by the main one. * The Controller verifies and launches the Tileloader using the configuration stored in the TRS. The Controller passes the configuration parameters 𝐂𝐨𝐧𝐟𝐢𝐠 to the Tileloader. The retained configurations in the TRS are described in Table <ref>. * The Tileloader loads the bitstreams to the PL and notifies the Controller when the Tiles are ready. This covers the case in which the Tiles themselves are unresponsive or faulty. The Controller sets a timer while waiting for the Tiles' status to become Ready. * Tiles notify the Controller when Ready. This asserts the information sent by the Tileloader in case it is unresponsive or faulty. * If the Controller received the expected number of Ready messages from the tiles, as defined in 𝐂𝐨𝐧𝐟𝐢𝐠, before the timer's expiry, it sets its 𝐬𝐭𝐚𝐭𝐮𝐬← Ready and starts accepting application requests. Otherwise, the Controller launches the Rejuvenation phase in partial-mode if a minority of Tiles are faulty or slow, or in full-mode otherwise. §.§.§ Execution Phase This phase is dedicated to the execution of application requests through a novel lightweight quorum-based Byzantine agreement protocol that we call H-Quorum. H-Quorum has a simple message exchange pattern, depicted in Fig. <ref>, that is tailored for low latency. In a nutshell, an application sends its requests to the Controller, which mediates the requests and responses with the compute Tiles. The Controller sends each request directly to all Tiles and collects their responses. A reply is sent to the application if a majority, i.e., f+1 out of 2f+1, of the responses match. Otherwise, it recovers by launching the Rejuvenation phase. H-Quorum is inspired by Quorum <cit.>, which was proposed for Internet-based distributed service settings.
Quorum follows a single round-trip direct messaging pattern between a client and all the 3f+1 replicas, assuming f faulty replicas, thus making it the Byzantine agreement protocol with the lowest theoretical latency <cit.>. This makes it the preferred choice in our case, since low latency is paramount in hardware accelerator applications. Nevertheless, Quorum has several shortcomings that make it infeasible for Internet-based settings and actually impede its adoption <cit.>. In particular, (i) Quorum assumes the client is trusted since it directly sends requests to all replicas and collects their responses, i.e., without a primary replica as in classical protocols <cit.>; (ii) it requires 3f+1 replicas to maintain safety and liveness under partially-synchronous networks, which is deemed costly; and (iii) it requires a recovery phase to a backup protocol (e.g., PBFT <cit.>) with an expensive state-transfer mechanism due to log matching, necessary for a safe view-change under failures. Fortunately, by leveraging the hardware environment, H-Quorum tweaks Quorum in order to bypass these shortcomings as follows. A. Controller acts as a trusted half-primary. H-Quorum avoids the trusted-client issue by having the Controller mediate the messages between the application and the Tiles. Being an ASIC, the Controller is arguably highly trusted provided that its footprint is not big (as we convey in Section <ref>), so that it can be easily verified. For this, we decided to simplify the Controller, which can be seen as a "half-primary": it mediates the messages and performs the log matching, but it does not compute the requests as the Tiles do (since computation logic can have a large footprint in accelerators). B. Requiring a majority of correct tiles. H-Quorum requires only 2f+1 tiles to tolerate a maximum of f faulty tiles while maintaining liveness and safety. This is possible since the communication in Samsara is hardware-based, which provides some network containment and thus exhibits some network synchrony. This allows us to set time bounds on responses before launching the Rejuvenation phase, contrary to Quorum, which cannot differentiate between a Byzantine replica and a slow or faulty network. Even in the case where a faulty Tile is slow or attempts to launch a DoS attack <cit.> on the network bus or jam the Controller, the latter can detect this easily and launch the Rejuvenation phase, which can change the faulty Tile and/or the network routes (see next). C. Safe state-transfer upon recovery. H-Quorum's state transfer is simple as it does not require log matching, contrary to Quorum. Indeed, Samsara's Controller has a restricted memory in the PL, i.e., PLM-C in Fig. <ref>, in which it retains the state and logs that are equivalent to those of correct Tiles. This makes state transfer fast and, importantly, preserves the correct state across Rejuvenation phases (which are analogous to view-changes in Quorum). We now present the steps of H-Quorum as follows: * An application A sends a request req to the Controller C. * C creates a message m=uid, req, H(req), where uid is the message's unique ID, representing an incremented sequence number safely assigned by the Controller, and H(req) is the hash digest of req. C sends m to each tile T_i by placing the message m in the read-shared PLM-C memory. The Controller sets a timer timer_c and waits for a reply from at least f+1 tiles before expiry. * Upon receipt of m, a tile T_i accepts m after verifying its hash digest and uid.
Then, T_i computes req and puts the output rep in a response message r=uid, rep, H(rep), tid, which carries the corresponding request uid as well as T_i's ID tid. T_i then "sends" r to C by placing it in its shared PLM_i slot to be read by C. * For each tile T_i, C reads the reply r_i from the corresponding PLM_i and verifies its hash digest and uid. Then, C checks whether a majority of these replies match, as follows: * If C received matching (i.e., having equivalent output) replies rep from all the 2f+1 tiles before timer_c expires, it forwards rep to the application A. In addition, if A is a stateful application, C temporarily retains req|rep in its allocated PLM-C slot to be used for state transfer under faults. * If C received only f+1 matching replies rep before timer_c expires, it forwards rep to A; however, it also launches the Rejuvenation phase in partial-mode to replace the faulty or slow tile (see next). In addition, if A is stateful, C temporarily retains req|rep in its allocated PLM-C slot to be used for state transfer under faults. * Otherwise, if fewer than f+1 matching responses rep are received before timer_c expires, C launches the Rejuvenation phase in full-mode to replace all the tiles and their (network) routes (see next). * Finally, if agreement was successful, C marks the request as done. (A simplified behavioral sketch of this Controller-side logic is given in the evaluation section.) §.§.§ Rejuvenation Phase This phase is launched by running the Tileloader to carry out a partial-mode or full-mode replacement of the Compute Platform. Partial-mode happens through reloading new softcores to the PL, e.g., to refresh or diversify tiles; whereas full-mode rejuvenates the tiles as well as their routes (network). Both stateless and stateful applications can be considered. For the former, no state transfer upon rejuvenation is required, while for the latter rejuvenated replicas read or copy the state saved in PLM-C, depending on whether they need a local copy. Examples of stateful applications include image processing. Rejuvenation steps and modes are explained as follows: * The Controller C changes its status to 𝐬𝐭𝐚𝐭𝐮𝐬← Loading and stops accepting application requests. * C invokes the Tileloader in two possible modes: (i) partial-mode, where C launches the Tileloader to flush the faulty or slow tiles and their corresponding BRAMs in the PLMs; it then reloads the same or diverse softcores as new tiles (see next); and (ii) full-mode, where C invokes the Tileloader on the entire Compute Platform, meaning that all PL resources (softcores, BRAMs, routes, registers, etc.) are destroyed and reloaded. The latter is necessary when a majority of matching replies is not achieved or the Compute Platform is slow or unresponsive. In the case of stateful applications, the state saved in the PLM-C BRAMs is first checkpointed and saved in the on-chip SRAM before triggering the Tileloader, to ensure no state progress is lost. The Tileloader reloads the Compute Platform following a predefined policy as follows: * Refresh/Diversify: reloads the exact (Refresh) or a diverse (Diversify) softcore version to the PL. Refresh is convenient for transient faults or attacks, while Diversify boosts the Compute Platform's resilience by minimizing common-mode failures. * Replace/Relocate: reloads the softcore in the same or a different PL partition location. This precludes potential issues in the PL fabric location and changes the communication routes in the case of AXI issues/attacks.
Relocation, however, assumes that a partition for relocating the partial bitstream to a new location, together with its interface, has been taken into account at design time during floorplanning. * Scale out/in: changes the number of simultaneous tiles, adapting to severity levels, i.e., as f in 2f+1 changes. This is bounded by the min-tiles and max-tiles parameters in the configuration (see Table <ref>). * Reactive/Proactive: invokes the Tileloader in reaction to a fault raised by H-Quorum, or in a periodic fashion regardless of faults, seeking proactive resilience (especially useful against Advanced Persistent Threats). * The Tileloader notifies C with a Ready status. * C sets a timer and waits for a Ready response from the corresponding rejuvenated tiles. * If C received the expected number of Ready messages from the tiles before the timer's expiry, it sets its 𝐬𝐭𝐚𝐭𝐮𝐬← Ready, transfers the checkpointed state back to PLM-C, and starts accepting application requests. Otherwise, the Controller launches the Rejuvenation phase in partial-mode if a minority of Tiles are faulty or slow, or in full-mode otherwise. § EVALUATION The proposed framework was evaluated on a Xilinx Zynq UltraScale+ ZCU102 FPGA board, running at 100MHz. The choice of frequency has the goal of fair comparison, as the references against which we compare Samsara also run at 100MHz. We instantiated 3 reconfigurable partitions (RPs) in the PL (i.e., 3 tiles), each with 2 possible reconfigurable modules (RMs). RMs refer to the possible partial bitstreams that fit into the RPs, which are dedicated to tile implementation. The choice of the type of IPs to run on the PL, i.e., what tiles contain, is application-dependent and not a property of Samsara. RMs can only be loaded into a full bitstream with the RPs designed to contain them. However, by triggering full Rejuvenation and replacing the whole PL, it is possible to install a new full bitstream that can interface with different types of RMs. Namely, a bitstream can be designed to have more generic RP boundaries which interface more easily with different types of tiles. In order to evaluate the most complex and costly case possible for Samsara, we designed the tiles to be softcores (Xilinx MicroBlazes in standard configuration). Simpler cases include cryptographic or machine learning accelerators, or any other type of module. As a proof of concept, the Controller is simulated in the PL and not implemented as an ASIC (as that would require manufacturing and be costly to experiment with over several iterations). It is instead implemented as software running on a PL softcore. Tiles' cores and other components in the PL (e.g., memory controllers, timers, etc.) are connected using AXI4-Lite. The Bootloader and Tileloader are run on one of the FPGA's PS ARM cores. Table <ref> details the architectures/protocols evaluated in this paper. We evaluate two fault models for Samsara: (1) one where we assume the MPSoC NoC and AXI buses to be correct and thus do not use message hashes; and (2) one where we acknowledge network attacks (as described in the threat model in Section <ref>) and, thus, use hashes (Samsara HW_H in Table <ref>). Both scenarios (1) and (2) use 3 partitions as described above. We compare these implementations with a baseline of having just one Tile, in this case one softcore (SC in Table <ref>), with a regular triple-modular redundant (TMR) architecture (with and without hashing), the iBFT protocol from <cit.> and a shared-memory-based implementation of MinBFT <cit.> akin to that described in <cit.>.
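To make the measured message pattern concrete, the Controller-side agreement and rejuvenation-trigger logic of H-Quorum can be summarized by the following behavioral sketch in Python (an illustrative model only, since the actual Controller is PL/ASIC logic; the function names and the reply layout are ours, not part of the implementation):

import hashlib
from collections import Counter

F = 1                # tolerated faulty tiles
N = 2 * F + 1        # replicated tiles (3 in this evaluation)

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def controller_round(uid, tile_replies):
    """One H-Quorum round as seen by the Controller for request uid.

    tile_replies maps a tile id to the reply bytes read from that tile's
    PLM slot before timer_c expired; slow or silent tiles are absent.
    """
    # Group the collected replies by content digest (log matching).
    counts = Counter(digest(rep) for rep in tile_replies.values())
    winner, votes = counts.most_common(1)[0] if counts else (None, 0)

    if votes == N:                 # all 2f+1 replies match: just reply
        return ("reply", winner, None)
    if votes >= F + 1:             # majority matches: reply, then partial rejuvenation
        divergent = [t for t, rep in tile_replies.items() if digest(rep) != winner]
        missing = [t for t in range(N) if t not in tile_replies]
        return ("reply", winner, ("partial-rejuvenation", divergent + missing))
    return (None, None, ("full-rejuvenation", list(range(N))))  # no majority

# Example: one corrupted tile still yields a reply plus a partial rejuvenation.
print(controller_round(uid=7, tile_replies={0: b"42", 1: b"42", 2: b"bad"}))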
Using 3 Tiles is the minimum to have a majority (i.e., if f = 1, then n = 2f + 1 = 3) and allows better comparison to basic redundancy (TMR) and <cit.>, since the latter uses 3 replicas as well. The comparison with iBFT is relevant given that it too implements a low-level agreement protocol like H-Quorum. The other protocol presented in <cit.>, Midir <cit.>, is slower than iBFT and, therefore, is not as suitable for comparison. Samsara's version with no hashes also helps to compare with iBFT, which assumes a correct network. MinBFT is the most well-known state-of-the-art BFT protocol that implements architectural hybridization <cit.> and, as such, was a good target for comparison. The protocols are compared in terms of agreement latency, reconfiguration latency, latency footprint, area usage, power consumption and communication steps. Area usage and power consumption are SWaP (space, weight and power) metrics, which are relevant in the construction of on-chip, often resource-restricted systems like cyber-physical systems (CPS) and IoT. §.§ Latency Latency is measured with a combination of an AXI Timer and an AXI Interrupt Controller module in the PL, and a Xilinx timer API that outputs the cycle count. Protocol Latency Fig. <ref> compares Samsara's H-Quorum (H-Q in the figure) with 256-bit messages (with and without hashing) against our baseline (SC), iBFT and the shared-memory implementation of MinBFT, on a logarithmic scale. It is observable that the use of 3 agreeing Tiles as opposed to 1 only has a latency overhead of 1054 cycles (i.e., 10.54 μs, considering a core running at 100MHz). Additionally, H-Quorum with hardware-based hashing is still 3241 cycles faster (and 4834 cycles faster without hashing) than iBFT (no hashing), meaning it is 21.6% (and 32.2%, respectively) faster than iBFT. In Samsara, we use an SHA-256 PL module connected to each core Tile (so 3 SHA-256 modules, one for each Tile) to perform the hashing of the messages. The lower cycle count in comparison with iBFT is expected, as iBFT has more communication steps and a larger number of exchanged messages, as seen in Table <ref>. All measurements are done for a null operation, meaning the depicted latency represents the protocol latency (i.e., message exchange, which in this case is done through reads and writes from/to BRAM) and does not include request execution latency. To fairly compare with MinBFT, which has hashing implemented in software, we present the results for an implementation that uses SHA-256 in software rather than hardware, SW_H. As can be seen, H-Quorum performs faster due to having fewer protocol steps. In the FPGA realm, with hardware accelerators significantly reducing latency (e.g., SHA-256 in hardware takes on average 1593 cycles, while in software, running on a MicroBlaze core, it takes 128643 cycles), the biggest sources of latency are clock domain crossing and the nature of the bus protocol implementation for memory access (for simplicity, we used Xilinx's implementation of AXI4-Lite, which is slower than AXI4-Full and does not allow burst access). With more complex implementations of the AXI4 bus and higher clock frequencies, better performance can be achieved. Protocol Footprint Fig. <ref> shows how H-Quorum's latency (null operation) becomes more negligible as application complexity scales.
We ran SHA-256 on a 256-bit message in software (i.e., running on the MicroBlaze core and not the hardware accelerator used in Samsara) and performed complex double-precision multiplication on a one-dimensional array of very small size (10) for comparison, and observed that running the agreement protocol brings no significant overhead to operation execution. Naturally, these operations can be run on dedicated accelerators, which would replace cores in the Tiles and have lower latency. Namely, H-Quorum itself will have better performance with the envisioned ASIC Controller, as it will execute as hardware logic. However, given that we are emulating the Controller in software running on a PL core, it is fair to compare against software implementations of the aforementioned operations. Even with, e.g., a hardware implementation of SHA-256 (which we use in Samsara, and which takes 1593 cycles for a 256-bit input message), H-Quorum is not only still fast, but also does not incur great overhead from replication or agreement, as previously seen in Fig. <ref>, which shows single-core execution as taking only 1054 cycles less. Reconfiguration Latency We then analyzed reconfiguration time, measuring the time to partially reconfigure the PL (i.e., replacing a Tile, in this case a MicroBlaze Tile) against the time to fully reconfigure the whole PL, and a full platform reboot (i.e., including running the Bootloader again and initializing the PS), as seen in Fig. <ref>. The partial and full PL reconfigurations are measured by the PS's ARM core, while the full platform reboot is measured with the help of the Xilinx application development tool, Vitis, which logs the time elapsed from the beginning of the reboot process until it has successfully finished, excluding the time it takes to download the bitstreams and files to the board through JTAG. Given that, when rebooting the whole board, we do not have any cores available to perform the latency measurement, we could not measure the time elapsed when booting the board from the FPGA's SD card (as done in the PL reconfiguration measurements), and therefore we relied on the time measurement provided by the tool. As such, for the full reboot we programmed the board via JTAG, which gave us a very precise measurement from the tool. Note that the full reboot latency presented in Fig. <ref> does not account for applications loaded into the PS, besides the Bootloader and Tileloader. As can be seen, there is a significant decrease in the time required to reconfigure a Tile versus the full PL, which results from replacing only a small part of the architecture. One MicroBlaze core represents only 4.9% of the Samsara design, i.e., of the full base bitstream. The full reboot, on the other hand, requires reset and initialization of the PS, running the FSBL, and programming of the PL. The presented results were obtained with un-encrypted bitstreams, meaning they do not account for bitstream decryption; however, the reduced latency gained from PR would still hold. For stateful applications that need a local state copy, state hashing (digest) and transfer latency depend on message and checkpoint size. Fig. <ref> a) depicts H-Quorum's latency as a function of varying message sizes, again using a null operation. As expected, latency grows linearly with the rising number of memory accesses. Our AXI4-Lite interface performs writes of 32 bits, with the number of accesses scaling linearly with message length. Other bus implementation options include burst or streaming. Similarly, Fig.
<ref> b) shows the hashing latency (computed in software) for a growing checkpoint size, up to 100. §.§ Resource Usage and Power Consumption Next, we look at resource usage, as seen in Fig. <ref>. The metrics presented in the graph are LUTs (Look-Up Tables) and registers, which are the basic units of area measurement on the FPGA fabric. We divided each architecture into 3 categories: base architecture (Tiles, memory controllers, timers, interrupts, etc.), AXI Interconnect and DFX [DFX stands for Dynamic Function eXchange and refers to the Xilinx functionality used to perform dynamic partial reconfiguration at runtime.] Controller (not to be mistaken for Samsara's Controller). The DFX Controller in the PL manages the low-level loading of hardware bitstreams from memory. The AXI is separated from the rest of the architecture to show the large footprint of the bus in comparison to the rest of the design; it is still, however, part of the PL, along with the DFX Controller. The AXI Interconnect module connects several PL modules and thus multiplexes accesses among them, consuming a large number of LUTs and registers. This is the reason why the AXI footprint increases significantly from using a single Tile (SC) to using 3 (Samsara). For simplicity and easy comparison with the selected state of the art, we implemented Samsara with a main AXI Interconnect connecting most modules and used the AXI4-Lite interface. Separating it into multiple smaller ones can lower resource consumption as there is less multiplexing, while using AXI4-Full can bring lower latency. DFX is naturally only present in Samsara, as it is the only solution presented here that does dynamic partial reconfiguration, i.e., swapping Tile contents at runtime. This represents a trade-off between slightly higher area and resource usage, and the flexibility and speed of hardware reconfiguration. In the evaluated board (ZCU102), the utilization for Samsara still represents only 12.68% of available LUTs and 6.65% of available registers, with 3.24%/1.92% (respectively) for the DDR4 memory where bitstreams were stored, 2.38%/1.63% for the AXI Interconnect, and 1.59%/1.02% for the DFX Controller. Table <ref> shows the predicted power consumption for all evaluated architectures. This power analysis is taken from the implemented netlist output by the Xilinx design, synthesis and implementation tool, Vivado. It represents activity derived from constraint files, simulation files and vectorless analysis, and is provided as an estimate, not physically measured results. As expected, Samsara consumes approximately 1.297 W more than the average of the other architectures, which comes from the usage of DFX. Nevertheless, in Samsara, 2.744 W out of the 5.064 W come from the PS and not the PL, where the Tiles reside. It is also noticeable that the usage of 3 Tiles (TMR and Samsara) versus 1 Tile leads only to an increase of 0.146 W, meaning that replication, or having multiple Tiles participate in H-Quorum, does not massively affect power consumption. § RELATED WORKS TMR has been used in the realm of critical embedded systems, e.g., in the primary flight computers of Boeing 777's fly-by-wire (FBW) system <cit.>. Similarly, passive redundancy can also be seen in Airbus' dependability-oriented approach to FBW <cit.>. The concept was extended to multi-phase tightly-synchronous message-passing protocols in the CPS domain <cit.>.
Unlike TMR, Byzantine fault tolerant (BFT) SMR algorithms <cit.> aim to tolerate both accidental and malicious faults, by reaching consensus with |Q|=2f+1 out of n=3f+1 replicas. Architectural hybridization <cit.> proposes an additional trusted-trustworthy component to further reduce the size of n and Q to 2f+1 and f+1, respectively. This technique has been used in protocols such as MinBFT and CheapBFT <cit.>. BFT algorithms have traditionally been implemented on distributed systems and have seen little work in emerging MPSoC critical systems like CPS and IoT, due to their added latency and replication costs. Midir presents an architecture and an on-chip BFT-like protocol for improving the safety and resilience of low-level software running on MPSoCs as well as their access control mechanisms. It does so through minimalist hardware logic, T2H2, that provides secure voting on critical operations and access control. Similarly, iBFT <cit.> achieves efficient consensus by leveraging shared memory. Nevertheless, neither of these works allows the flexibility of accelerator reconfiguration nor dynamic rejuvenation. In <cit.>, a partitioning technique enabling the use of COTS NoC-based MPSoCs for mixed-criticality systems is proposed; however, it is intended to have a purely software implementation as a module of a real-time operating system. Another work <cit.> investigates and evaluates fault-tolerance techniques for the UltraScale+ MPSoC FPGA, but it targets only accidental faults in the form of Single-Event Upsets (SEUs). Contention on shared resources in the context of MPSoC-based safety-critical applications is explored in <cit.>. Alcaide et al. <cit.> develop safety measures in the PL of COTS MPSoCs, but do not consider PL-side faults and intrusions. In <cit.>, a secure framework to implement logic locking, extended with secure boot for Xilinx FPGAs, is presented. Furkan <cit.> provides a survey on secure FPGA configuration and security of FPGA modules. § CONCLUSION We introduced Samsara, the first secure and resilient platform for programmable hardware. Our work shows that leveraging the hardware properties of programmable hardware, like FPGAs and GPUs, paves the way to designing lightweight and low-latency BFT variants, such as H-Quorum. Interestingly, we show that hardware rejuvenation is also possible at a negligible latency overhead. To improve failure independence, Samsara supports rejuvenation to diverse implementations from a pool of versions. This is possible either through simple reconfiguration tweaks or by using off-the-shelf implementations. In particular, typical compute IP implementations, e.g., SHA-256, are available as open source. Samsara, however, imposes an additional resource utilization overhead over non-replicated systems, which we believe is a reasonable price for security and resilience. This is not as critical as in replicated computers, since a programmable fabric is often under-utilized. §.§ Safety In the context of our protocol, proving that Samsara is safe refers to ensuring Integrity is preserved. Bootstrapping: The Controller, being a simple and easily-verifiable trusted-trustworthy ASIC, cannot be tampered with and follows only the designed logic at all times. During the boot phase, it signals the MP-Boot to start execution of the Bootloader code. This microprocessor is the only software processing unit capable of accessing the PL and its memory and is triggered only by the Controller (Bootstrapping phase, step 2).
Since no other software is allowed to run on MP-Boot, the configuring API is only called by the Bootloader and Tileloader, which are kept in Tamper-Resistant Storage in encrypted form and authenticated by the Controller. Therefore, the initial configuration of the PL is only executed at boot time as demanded by the Controller and not triggered arbitrarily by malicious code. The same applies to the Tileloader during Rejuvenation. Full and partial bitstreams are stored in encrypted form in the Softcore Library and authenticated when loaded into the PL (Bootstrapping phase, step 3) by modules such as the AES-GCM engine, present in, e.g., Xilinx UltraScale+ devices as part of the Configuration Security Unit (CSU). Without knowledge of the AES-GCM key, a bitstream cannot be modified or forged. Given authenticated bitstreams can still be malicious in nature or suffer faults when deployed into the PL, the necessity for Rejuvenation remains. Furthermore, techniques such as those presented in <cit.> utilize the Xilinx Internal Configuration Access Port (ICAP) to read FPGA configuration memory at runtime and generate a hash for comparison against an expected hash with the goal of detecting runtime tampering, which the Bootloader and Tileloader can use for further verification. Since the Controller, MP-Boot and Softcore Lib are tamperproof, and the MPSoC is designed to grant access only to the required components, it follows that the Compute Platform and Tiles work as defined. Execution: During PL-side executions, to ensure message requests are not dropped, the Controller assigns a unique ID to each message, which is incremented by a hardware logic counter it implements. This counter is monotonic and used to associate sequence numbers with each operation (Execution phase, step 2), similarly to USIG in MinBFT <cit.>. Given that only the Controller has write access to its specific set of BRAM memory (PLM-C), from which the Tiles read requests, it is guaranteed that requests originate from the Controller. Additionally, requests are hashed for integrity and verified by the Tiles, which, upon execution of the request, place a reply with the same ID in hashed form in their own BRAM memory, to which only they have write access, proving its origin. Only replies with the corresponding ID in the corresponding address offset (dictated by the ID) are accepted by the Controller. The fact that messages are hashed means they cannot be forged by the bus or a malicious PL IP. Finally, the Controller matches replies with the same unique ID from all Tiles, forwarding the result to the invoking application only in the case of a majority (2f + 1 or f + 1). Otherwise, Rejuvenation is triggered based on the chosen policy. Given this processing is done by hardware logic alone, the Controller is trusted to provide the correct result to the application (Execution phase, step 4). Rejuvenation: Rejuvenation works similarly to Bootstrapping, as the Tileloader is triggered by the Controller upon a mismatch and uses a similar API to load the partial or full bitstreams into the PL. The Controller uses the parameters defined in Config, which dictate the type of softcore/IP and version to load and are stored in the TRS (Rejuvenation phase, step 2). Even in the eventuality that a wrong bitstream version was loaded, the execution phase would trigger a mismatch in the Controller, leading to another Rejuvenation. Softcore/IP digests are also registered in the Controller TRS to ensure no foreign bitstream is loaded.
Finally, state transfer is guaranteed to be correct since the Controller stores the hashed state during the Execution phase, after successfully matching each request reply, in the PLM-C BRAMs to which only it has write access. The state retains all the previous matching replies delivered, while non-delivered requests should be replayed by the application. After a successful Rejuvenation, Tiles read the hashed state from the Controller's PLM-C into their own BRAM so that they can keep building upon it if required by a stateful application. No other Tile can forge state in another Tile's BRAM given no read or write access is provided. Furthermore, the Controller issues new Requests only when Tiles are loaded and state transfer ends, as signaled by the Tileloader and the Tiles themselves. In the event of a full PL Rejuvenation, which wipes the PLM-C BRAMs, the Controller first takes a Checkpoint, saves it to the on-chip SRAM and only then triggers the Tileloader. §.§ Liveness We now detail the informal proof that availability and termination are achieved. Bootstrapping: The Bootloader and Tileloader are executed as bare-metal applications. No other application is allowed to execute on the MP-Boot; therefore, they will never be preempted. The Controller signals when execution shall take place via an interrupt, and MP-Boot's interrupt input port is connected only to the Controller. Similarly, the Config TRS is connected only to the Controller, which is trusted not to flood it, and configurations are sent to the MP-Boot by an exclusive channel connecting only the Controller and the MP-Boot (Bootstrapping phase, step 3). Since no other element of the MPSoC has access to these configurations, they shall always be available. The Controller only triggers the Bootloader at boot time, before the PL starts receiving requests; and only signals the MP-Boot to execute the Tileloader after detecting a mismatch in replies. A mismatch is observed after the PL Tiles have executed and replied; as such, before sending new requests, the Controller triggers the Tileloader, which reconfigures the PL (partially or fully) while Tiles are waiting for a new request (Bootstrapping phase, step 6). Reconfiguration is preemptive and will not fail even if the Tile is executing some rogue code or finishing logging the request in its local memory (in stateful applications). A corrupted log is also irrelevant as it is never sent to any other Tile or the Controller. The latter records its own log, which is used for state transfer and is always available in the PLM-C or the on-chip SRAM. In order to prevent Rejuvenation from stalling, the Controller sets a timer and waits for Ready responses from the corresponding rejuvenated Tiles; the Bootloader, Tileloader and Tiles notify the Controller when Rejuvenation has finished. This ensures normal operation is resumed as soon as Rejuvenation is done and that, after a pre-defined time limit, if a Tile is not ready, Rejuvenation is triggered again and the Tile is swapped for a diverse one. Execution: The Controller uses parallelized logic to receive requests and place them in a queue while it processes replies from the Tiles. Nevertheless, to ensure correct operation, the Controller only starts accepting requests when its status is Ready, meaning requests are not accepted while the PL is Rejuvenating.
Jamming the Controller may be attempted by faulty applications; however, due to hardware parallelism, the input ports and request verification logic are independent of the logic used to handle requests (i.e., sending requests to replicas, waiting for their replies, verifying agreement, etc.) and to trigger Rejuvenation. The Controller can ignore requests sent by an application if they arrive at intervals shorter than a determined delta or exceed a maximum threshold number within a time window. Additionally, requests, when issued by the Controller, are readily available to the Tiles, due to being written in the PLM-C, to which Tiles have read access. Each Tile has its own read-only access channel to the PLM-C memory controller, granting them exclusive use of the bus and ensuring there is no contention. The same goes for Tiles writing replies in their own local BRAMs and the Controller reading them. No replays happen, since each message has its own unique ID and is placed at a specific address offset according to said unique ID (Execution phase, step 3). The Controller sets a timer to wait for responses, as highlighted in step 2 of Section <ref>, to make sure faulty replicas do not delay agreement indefinitely. The Controller considers an operation successful only if it finds a majority of matching replies, which happens since a maximum of f faulty Tiles is assumed. Otherwise, Rejuvenation will happen (e.g., if one of the Tiles is faulty and another is slow). The Controller always delivers the matched reply to the waiting application (Execution phase, step 4). Rejuvenation: This is invoked only by the trusted Controller, either in Reactive or Proactive mode. Thus, the system cannot keep rejuvenating, except when the Controller notifies the MP-Boot, i.e., when faults happen or at well-defined periods (in the case of Proactive mode) (Rejuvenation phase, step 2). In the former case, the Controller invokes the Tileloader using the given Config (which is always available and secure) and parameters (full mode or partial mode). The same is observed in Bootstrapping, with the exception of state transfer. This is available since the Controller stores the state in a dedicated BRAM to which Tiles only have read access through dedicated channels. Furthermore, state is hashed so that it is copied intact to the Tile's BRAM. Finally, Tiles signal they are ready once state transfer is done; if this does not happen before the Controller's timer runs out, the Controller Rejuvenates again. After Rejuvenation completes, the Controller's status is set to ready and the Compute Platform is ready to execute new requests (Rejuvenation phase, step 5).
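To make the Execution-phase matching argued about above concrete, the following minimal Python sketch models the Controller's ID assignment, reply matching, and rejuvenation trigger. It is an illustrative behavioural model under simplifying assumptions of ours: the Tile interface (handle), the quorum constant, and the in-line hashing are stand-ins, and the real Controller is hardware logic (emulated on a PL core in the prototype).

import hashlib
from collections import Counter

F = 1                    # tolerated faulty Tiles
QUORUM = F + 1           # matching replies required (f+1 with the hybrid; 2f+1 otherwise)

class Controller:
    def __init__(self, tiles):
        self.tiles = tiles       # behavioural stand-ins for the PL Tiles
        self.next_id = 0         # models the monotonic hardware counter

    def execute(self, request: bytes):
        rid, self.next_id = self.next_id, self.next_id + 1
        digest = hashlib.sha256(rid.to_bytes(8, "big") + request).hexdigest()
        # In hardware, request and digest are written to PLM-C and replies are read
        # back from each Tile's own BRAM at the offset dictated by rid (with a timeout).
        replies = [tile.handle(rid, request, digest) for tile in self.tiles]
        votes = Counter(r for r in replies if r is not None)
        if votes and votes.most_common(1)[0][1] >= QUORUM:
            return votes.most_common(1)[0][0]   # matched reply goes to the application
        self.rejuvenate()                       # mismatch or timeout
        return None

    def rejuvenate(self):
        # Reactive policy: signal MP-Boot to run the Tileloader (partial or full mode).
        print("rejuvenation triggered")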
http://arxiv.org/abs/2406.18985v1
20240627082653
Exploiting Structured Sparsity in Near Field: From the Perspective of Decomposition
[ "Xufeng Guo", "Yuanbin Chen", "Ying Wang", "Chau Yuen" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Exploiting Structured Sparsity in Near Field: From the Perspective of Decomposition Xufeng Guo, Yuanbin Chen, Ying Wang, Member, IEEE, and Chau Yuen, Fellow, IEEE This work was supported by Beijing Natural Science Foundation under Grant 4222011, and in part by the BUPT Excellent Ph.D. Students Foundation under Grant CX2023145. (Corresponding author: Ying Wang.) Xufeng Guo, Yuanbin Chen, and Ying Wang are with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail:brook1711@bupt.edu.cn; chen_yuanbin@163.com; wangying@bupt.edu.cn). Chau Yuen is with the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798 (e-mail:chau.yuen@ntu.edu.sg). July 1, 2024 § ABSTRACT The structured sparsity can be leveraged in traditional far-field channels, greatly facilitating efficient sparse channel recovery by compressing the complexity of overheads to the level of the scatterer number. However, when experiencing a fundamental shift from planar-wave-based far-field modeling to spherical-wave-based near-field modeling, whether these benefits persist in the near-field regime remains an open issue. To answer this question, this article delves into structured sparsity in the near-field realm, examining its peculiarities and challenges. In particular, we present the key features of near-field structured sparsity in contrast to the far-field counterpart, drawing from both physical and mathematical perspectives. Upon unmasking the theoretical bottlenecks, we resort to bypassing them by decoupling the geometric parameters of the scatterers, termed the triple parametric decomposition (TPD) framework. It is demonstrated that our novel TPD framework can achieve robust recovery of near-field sparse channels by applying the potential structured sparsity and avoiding the curse of complexity and overhead. § INTRODUCTION By deploying antenna entries that significantly outnumber those used in conventional multiple-input multiple-output (MIMO) or massive MIMO (mMIMO) systems, extremely large-scale antenna arrays (ELAAs) can substantially benefit from spatial multiplexing and beamforming enhancements. This may provide a tenfold spectral efficiency enhancement for 6G wireless communication scenarios <cit.>. However, by greatly increasing the Rayleigh distance, the deployment of ELAAs also brings about a host of new near-field challenges, leading to increased complexity of channel estimation and the enlarged overhead of pilot transmission. To address these challenges, the beneficial sparsity inherent in wireless channels can be leveraged to facilitate efficient channel estimation.
This sparsity is attributed to the limited scatterer observed in the wireless propagation environment. Therefore, the direct estimate of scatterers' geometric parameters (e.g., the angles of arrival or angles of departure (AoA/AoDs)) would significantly save pilot overheads and computational complexity compared to the entry-by-entry estimation of the whole channel. Furthermore, wireless channels not only manifest general sparsity but also deliver structured sparsity. This means that the non-zero entries in the sparse channel representations adhere to specific, predetermined patterns <cit.>. Building on this insight, there are sufficient works that have investigated the compressive sensing (CS)-based approaches by leveraging sparsity and structured sparsity in traditional far-field systems <cit.>. However, the aforementioned benefits of structured sparsity in far-field communications are not guaranteed to persist in the ELAA systems under near-field conditions. The fundamental shift from planar-wave-based to spherical-wave-based modeling poses new physical peculiarities, which in turn give rise to mathematical challenges when applying structured sparsity. Regarding the physical aspect, the phase difference across the array caused by the spherical wave is not linear with respect to (w.r.t.) the antenna indices, expanding as a combination of minuscule planar waves emanating from distinct directions centered around the AoA/AoDs corresponding to the significant paths <cit.>. In this case, the propagation paths cannot be characterized as angular-domain (AD) energy impulses as is typically presented in traditional far-field channel modeling. By contrast, the energy impulses associated with significant paths are observed as AD waveforms that fluctuate around the genuine AoA/AoDs, which can be referred to as the power leakage issue <cit.>. Furthermore, the distance between the scatterers and the array also demands careful attention in the presence of spherical wave modeling. Although a custom polar-domain (PD) representation was introduced in <cit.> to mitigate the strong correlation between angle and distance for more effective near-field channel estimation, the power leakage remains. The inherent physical characteristics in near-field communications present mathematical challenges when applying structured sparsity. Owing to the power leakage, the AD energy distribution generated by a significant path does not correspond to a single non-zero element in the sparse channel representation. Instead, it produces substantial interference at irrelevant positions, termed as weak sparseness <cit.>, leading to a high incidence of false alarms in CS-based algorithms. Furthermore, the introduction of distances necessitates a complex probabilistic model to capture the full range of structured information. This not only leads to i) prohibitive multiplicative computational complexity w.r.t. the dimensionalities of all three variables; ii) but also poses significant challenges in algorithmic design, given that there currently exists no probabilistic modeling specifically tailored for such a three-dimensional configuration. In view of the discussion from both physical and mathematical perspectives, we deem that several critical issues remain to be addressed to effectively leverage the sparsities in the near-field context for facilitating efficient channel estimation. 
Specifically, the following questions have not been clearly answered: Q1: How does structured sparsity in the near-field regime contrast with that in the classical far-field scenario? Q2: What specific challenges arise when applying structured sparsity in the near-field context? Q3: How can we achieve efficient sparse channel estimation in near-field communications, by leveraging structured sparsity? In response, we revisit the structured sparsity originated from the classical compressed sensing theory, and present the novel characteristics brought by the near-field communication. Our contributions can be summarized as follows: * We first highlight the fundamental physical distinctions between far-field and near-field communications, which make the far-field sparsifying method impractical in near-field systems. * We then unmask the mathematical challenges brought by the physical peculiarities when implementing structured sparsity in near-field systems. * We demonstrate that by adopting a strategy of parametric decoupling of sparse scatterers within the channel—specifically, through a triple parametric decomposition (TPD) framework—we can bypass the challenges above. This approach opens the door to a broad spectrum of structured sparsity applications. § REVISITING STRUCTURED SPARSITY: FROM FAR FIELD TO NEAR FIELD In this section, we revisit the concept of structured sparsity inherent in traditional far-field communications. We elucidate how the traditional schemes employ the features of structured sparsity within the context of planar-wave-based modeling. Subsequently, we explore the distinctions between these methodologies when adapted for their spherical-wave-based near-field counterparts. §.§ Revisiting Structured Sparsity in Far Field What is Structured Sparsity?: Before delving into structured sparsity, it is imperative to figure out the sparsity observed in conventional far-field systems. In wireless propagation environments, signals transmitted from the source to its designated receiver experience a few propagation paths. These paths are dependent upon the clusters present in the wireless propagation environment and are termed significant paths. Each of the significant paths generates a response sequence with linear phase differences across the antennas, in the presence of the traditional planar-wave-based assumption. Therefore, each array response of the corresponding scatterer can be approximately represented as an impulse signal on the angular domain by the discrete Fourier transform (DFT). Given this, the overall channel that is a linear combination of array responses of the sparse scatterers can be pruned to a sparse one, with each non-zero entry corresponding to a scatterer on the angular domain. Beyond such sparsity, these non-zero entries in the sparse channel matrix may exhibit specific sparsity, e.g., structured sparsity. Explicitly, scatterers tend to be distributed in clusters, leading to a phenomenon known as clustered sparsity, where non-zero entries in the AD sparse vector are more likely to group together <cit.>. This specific characteristic constitutes additional structured information that can further enhance estimation accuracy and reduce overhead. By harnessing this beneficial property, the required number of observations to achieve robust sparse channel recovery can be drastically reduced, leading to much fewer communication overheads like pilots and RF chains. 
For example, in a wireless system equipped with an N-antenna array and L propagation paths, the order of the required number of RF chains and pilots can be decreased from the Nyquist sampling rate 𝒪(N/2) to 𝒪(Llog(N/L)) by using sparsity <cit.>. The CS-based schemes assisted by structured sparsity can further compress this expense to between 𝒪(L) and 𝒪( Llog(N/L) ) without significant performance loss <cit.>. How to Use Structured Sparsity?: To leverage the additional information offered by structured sparsity in far-field MIMO systems, state-of-the-art schemes have already introduced probabilistic methods in compressive sensing <cit.>. Specifically, an orthogonal or near-orthogonal dictionary basis is necessary to transform the channel into AD sparse representation. In far-field cases, since the channel can be directly expressed as a linear combination of planar waves, the columns of the DFT matrix serve as the dictionary basis <cit.>. To further characterize the prior structured information, binary supports based on Markov chains, fields, or trees can be established behind the entries of the sparse vector, referred to as hidden Markov models (HMMs) to impose the structured sparsity <cit.>. Specifically, the values and locations of the non-zero entries in the sparse signal exhibit a unique pattern that can be formulated according to the inherent characteristics of the specific channel model. By establishing the hidden support binary pattern in the HMM, we can capture such structural information in practical channels and flexibly characterize various forms of structured sparsity <cit.>. For example, a Markov chain model has been developed to capture the clustered paths in mmWave channels <cit.>. An enhanced HMM featuring a dual structure has been proposed to address the unique structured sparsity in two-hop channels within the reconfigurable intelligent surface (RIS)-aided cascaded systems <cit.>. Beyond one-layer structures, a hierarchical HMM has been constructed to capture common sparsity in uplink multi-user systems, where users share the same significant paths <cit.>. §.§ Physical Peculiarities in Near-Field Regime What Fundamentally Differs in Near Field?: In near-field communications, spherical-wave modeling becomes necessary, requiring the joint estimation of elevation and azimuth angles, as well as distances between scatterers and the base station <cit.>. The tight coupling of these three geometric parameters poses two significant challenges for their robust recovery, i.e., the power leakage and a complex triple-coupled structure of the geometric parameters, as elaborated on in the subsections that follow. §.§.§ Power Leakage As illustrated in Fig. <ref>, in far-field scenarios, planar waves emanating from scatterers can be efficiently transformed into AD sparse power peaks through DFT. These peaks unambiguously identify the significant angles. However, this clarity is compromised in near-field scenarios. In such cases, the spherical waves generated by scatterers can be conceptually represented as a multitude of micro-planar wavefronts. These wavefronts originate from a range of directions that are centered around the original significant angles. Consequently, the AD power distributions that were initially focused in distinct power peaks begin to spread into adjacent angular directions, thereby forming diffused lobes. This occurrence is known as the power leakage issue <cit.>. 
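The contrast described above is easy to reproduce numerically. The short Python sketch below (illustrative parameters of our own choosing: a ULA with half-wavelength spacing, one source at 30°, and a near-field distance of 50 wavelengths) builds one planar and one spherical array response and measures how much of each response's energy the angular-domain (DFT) basis concentrates in its strongest bin; the far-field response collapses onto essentially a single bin, whereas the near-field response leaks across many.

import numpy as np

N, d = 256, 0.5                        # ULA: number of antennas, spacing in wavelengths
theta, r = np.deg2rad(30.0), 50.0      # source angle; distance in wavelengths (near field)
n = np.arange(N) - N / 2               # symmetric antenna indices

far = np.exp(1j * 2 * np.pi * d * n * np.sin(theta))                  # planar wavefront
dist = np.sqrt(r**2 + (n * d)**2 - 2 * r * n * d * np.sin(theta))     # exact propagation distance
near = np.exp(-1j * 2 * np.pi * (dist - r))                           # spherical wavefront

for name, h in (("far-field", far), ("near-field", near)):
    s = np.abs(np.fft.fft(h))          # angular-domain (DFT) representation
    print(name, "energy in strongest AD bin:", round(float(s.max()**2 / (s**2).sum()), 2))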
§.§.§ Triple-Coupled Geometric Parameters In planar-wave-based far-field scenarios, a two-dimensional (2D) AD sparse vector is sufficient to characterize the angular directions. Upon this, identifying the directions by the clear power peaks, i.e., the elevation-azimuth angle pair, would provide the complete far-field channel state information. However, near-field scenarios introduce additional complexities due to the spherical nature of wave propagation. In this context, each spherical wave induced by the scatterer is associated not just with the elevation-azimuth angle pair but is also intricately coupled with the distance between the scatterer and the antenna array. As a result, it becomes necessary to incorporate this additional degree of freedom (DoF) related to distance into the original 2D space, leading to an intricate triple-coupled structure. Such a structure not only exacerbates the computational complexity but also hampers the capture of structured information. §.§ Mathematical Challenges of Applying Structured Sparsity in Near-Field Regime In this section, we explore the mathematical challenges of applying structured sparsity in near-field communications with existing sparsifying methods. §.§.§ Weak Sparseness CS-based schemes rely significantly on the assumption of strong sparseness, where in the presence of an appropriate dictionary basis, a sparse vector can be attained with only a few non-zero entries included. In the far-field case, the channel can be straightforwardly represented as an AD sparse vector via the DFT basis <cit.>. In Fig. <ref>, we compare the sparse channel vectors in the far field using the AD sparsifying basis and the near-field counterparts using the AD and PD sparsifying basis, respectively. The traditional AD sparse representation in the far field is characterized by a few non-zero entries, each of which corresponds to a significant path. However, the AD basis fails to identify the significant channel power in the near field due to the power leakage effect. In particular, the non-zero entries no longer maintain a clear one-to-one mapping with significant angles, termed weak sparseness. This lack of clear mapping is attributed to the intricate couplings between angles and distances under the Fresnel approximation, leading to the AD power from a significant angle leaking to neighboring angles. Unfortunately, this issue presents a major obstacle in identifying significant paths within the near-field environment. To mitigate the weak sparseness, state-of-the-art efforts have proposed the PD basis <cit.>. Although the PD sparse representation leads to more concentrated and compact non-zero entries, the weak sparseness issue still exists. The main reason is that it can only assure approximate orthogonality between the basis vectors through a coherence controlling factor, which typically enlarges the basis coherence to 0.5 <cit.>. This compromise on the orthogonality increases the ambiguity of the dictionary basis, leading to a high false detection probability, thereby offering no robust guarantee for sparse near-field channel recovery. On the other hand, the design of the coherence controlling factor comes at the cost of sacrificing the size of the PD coverage region. This limitation on the distance region becomes even more pronounced in uniform planar array (UPA) systems <cit.>.
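For reference, the coherence invoked here is the standard mutual coherence of a dictionary D = [d_1, …, d_G], namely μ(D) = max_{i ≠ j} |d_i^H d_j| / (‖d_i‖ ‖d_j‖). The orthonormal AD (DFT) basis attains μ = 0 for distinct columns, whereas the oversampled polar-domain dictionary can only keep μ near the value of roughly 0.5 stated above, which is precisely what weakens the coherence-based recovery guarantees of greedy CS solvers.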
§.§.§ 3D Structured Information The concept of structured sparsity enhances the performance of CS-based methods by introducing additional structured information within the sparse representation. This newly introduced information lies in the correlation among the non-zero entries of the sparse channel representation. A host of strategies have delved into the HMM-based probabilistic frameworks to capture this structured information <cit.>. While chain-based models adeptly extract the structured information in 1D sparse vectors, field-based models are tailored for 2D sparse matrices. However, the triple-coupled structure of geometric parameters in near-field scenarios gives rise to an intractable 3D sparse cube, where neither chain nor field-based modeling can effectively encapsulate the near-field structured information. More precisely, as shown in Fig. 3, the use of chain models to represent near-field geometric parameters always neglects the structured information in the other two dimensions. While field models have the capacity to comprehensively capture structured information spanning all dimensions, they introduce computational hurdles, specifically loops burdened with NP-hard complexities. In summary, there still exists a notable gap in devising effective approaches specifically crafted to encapsulate the 3D structured information in the near-field regime. Additionally, as illustrated in Fig. 3, the overall size of the 3D space is proportional to the product of the sizes of the elevation, azimuth, and distance dimensions, thereby contributing to the unacceptable multiplicative complexity. § TRIPLE PARAMETRIC DECOMPOSITION FOR NEAR-FIELD CHANNELS The crux of this pair of physical and mathematical problems stems from the coupling of the 3D geometric parameters under near-field conditions. Rather than additional sophisticated remedial measures addressing these challenges retroactively, we advocate for decoupling the 3D variables prior to the onset of these problems. Guided by this rationale, we propose the TPD framework, the specifics of which will be elaborated in the subsequent section. §.§ Implementation of the TPD Framework In near-field scenarios, the phase sequence at the antennas can be approximated by Fresnel expression, denoted by a quadratic polynomial, as opposed to the linear expressions commonly found in the far-field counterparts. The non-linear Fresnel expression is the root source of the near-field peculiarities and complicates the application of structured sparsity in near-field systems. Despite this, specific mathematical patterns can still be observed in the Fresnel quadratic polynomial. To elaborate, the linear term in the polynomial exhibits a linearly monotonic property, while the quadratic term is both non-negative and symmetric w.r.t. the geometry center of the antenna array. In this context, the linear term captures the planar components within the near-field wavefront, containing only directional information related to the elevation-azimuth angle pair. Conversely, the quadratic term describes the curvature information in the spherical wavefront, encapsulating the distance information. This observation provides us with a handle for decoupling distance and angle pair variables, as shown in Fig. <ref>. §.§.§ Step 1 – Decomposition Between the Angle Pair and Distance We employ a carefully designed strategy to enable the angular decomposition. 
By selecting a pair of antenna indices that are symmetrical about the origin, the expectation of their conjugate multiplication constitutes an entry of the angular-related channel response. In this case, the distance-related quadratic terms are eliminated while the angle-pair-related linear terms are preserved. Consequently, we obtain a channel response that is exclusively related to the elevation-azimuth angle pair. §.§.§ Step 2 – Decomposition of Elevation-Azimuth Angle Pair To further decouple the elevation-azimuth angle pair, Step 2 employs a similar strategy to derive a channel response that only contains the elevation or azimuth angle. For each channel entry obtained in Step 1, we consider a pairing strategy that chooses the horizontally symmetrical entry to form a new channel entry pair, whose sum constitutes the elevation-related term in the phase and the azimuth-related term in the amplitude. Since the elevation angle estimation relies solely on the phase, leveraging the channel response derived in this manner for elevation angle estimation will no longer be influenced by the azimuth. Similarly, due to the reciprocity of the elevation and azimuth angles, we can apply a vertically symmetrical channel-entry pairing strategy, obtaining a channel response solely related to the azimuth angle. Differing from the simple projection-based approach, the proposed TPD can preserve the 2D structured information in the angle pair. §.§.§ Step 3 – Distance Extraction Although Step 1 successfully decoupled the distance variable from the elevation-azimuth angle pair, we only retained the angular information while completely overlooking the distance information. Consequently, retrieving the distance information from the original near-field channel is imperative. Specifically, each channel entry is paired with the fixed entry at the UPA's geometric center. Following the approach in Step 1, we conjugate-multiply the paired entries and take their expectations. Upon this operation, all the angle-related linear and distance-related quadratic terms are fully retained. Additionally, given that we have comprehensively captured the angle pair data in the sub-problems parsed in Step 2, and considering them as constant parameters, the problem is reduced to estimating the distances of scatterers within a 1D distance space. §.§ Advantages of the TPD Framework We compared the performance of TPD with other benchmarks in estimating geometric parameters under the influence of the cluster concentration factor and the distance between scatterers and the array, as illustrated in Fig. <ref> and Fig. <ref>, respectively. Specifically, the cluster concentration factor of the von Mises-Fisher (vMF)-based channel model controls the cluster size <cit.>, a higher value of which results in a smaller cluster size. Since TPD provides a sparsifying method for the near-field channel, we compare the performance of TPD with the widely used angular-domain (AD) sparsifying method <cit.> and the recently proposed polar-domain (PD) sparsifying method, in both UPA and uniform linear array (ULA) settings <cit.>. Furthermore, to demonstrate the compatibility of TPD with the underlying algorithm, we not only used the state-of-the-art on/off-grid compressive sensing algorithm under various sparsifying methods but also compared the performance when utilizing the traditional multiple signal classification (MUSIC) algorithm <cit.>.
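Before turning to the numerical comparisons, a simplified one-dimensional (ULA) sketch of Steps 1 and 3 may help fix ideas; the UPA case treated above follows the same pattern along each array axis. With symmetric antenna index n, element spacing d, and a scatterer at angle θ and distance r, the standard Fresnel (second-order) expansion gives, up to a phase common to all antennas, ψ(n) ≈ (2π/λ) [ -n d sinθ + n²d² cos²θ / (2r) ]. Conjugate-multiplying the symmetric pair (n, -n), as in Step 1, leaves the phase ψ(n) - ψ(-n) = -(4π/λ) n d sinθ: the distance-dependent quadratic term cancels and only the angular term survives. Pairing entry n with the geometric center n = 0, as in Step 3, instead retains both terms; once the angles are fixed by Steps 1-2, distance recovery reduces to a one-dimensional search over r.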
§.§.§ Robustness Against Cluster Size The reduction in cluster size leads to an excessively small angular interval among scatterers, making them difficult to distinguish. In far-field MIMO channels, this issue can be effectively addressed by enlarging the MIMO array size, thus ensuring the angular-domain resolution is sufficient to discern the angular intervals of scatterers. However, in near-field channels, the problem of power leakage results in the overlapping of spreading spectra, which complicates the extraction of sparsity in the near-field. Consequently, the performance of the angular-domain method suffers fast degradation as the cluster concentration factor increases. While the polar-domain method can achieve robustness comparable to the proposed TPD in ULA scenarios, its excessive correlation among sparsifying bases leads to a high incidence of false detections of scatterers in UPA scenarios. In contrast, the TPD framework proposed in this study demonstrates markedly superior accuracy and robustness over traditional sparsifying methods. §.§.§ Robustness Against Distance Generally, within near-field channels, the proximity of scatterers to the array significantly exacerbates the effects of power leakage brought about by spherical waves. Conversely, the channel increasingly resembles a far-field channel model as the distance increases. Thus, the performance of traditional angular-domain-based methods deteriorates noticeably as the distance decreases. In contrast, the efficacy of polar-domain-based methods diminishes with increasing distance. This decline in performance is attributable to the reliance of the polar-domain basis on the distance grid distribution. Given the inverse-ratio-based design of distance grids <cit.>, greater distances result in sparser grid distribution, leading to decreased performance. Distinct from both angular- and polar-domain methods, after the process in Step 1 of the TPD, the elevation-azimuth angle pair is decoupled from the distance variable. In simpler terms, TPD isolates the planar wave component from the distance-associated spherical wavefront, significantly alleviating the power leakage problem and thereby enhancing robustness against the distance variable. §.§.§ Broad-Spectrum Compatibility The proposed TPD framework operates at a higher level, offering a decoupled sparse channel representation. Hence, it facilitates integration across a variety of algorithms. For instance, as demonstrated in Fig. <ref> and Fig. <ref>, the TPD framework significantly enhances the MUSIC algorithm, outperforming methods in both the angular and polar domains. The TPD framework can be successfully implemented once the antenna array possesses central symmetry and horizontal/vertical symmetry. Consequently, beyond its compatibility across different algorithms, it also boasts extensive adaptability across various scenarios. §.§.§ Reduced Structural and Computational Complexity The sparse representation facilitated by the TPD is divided into three distinct vectors, corresponding solely to elevation, azimuth, and distance, respectively. This allows for the individual capture of structured information inherent in these geometric parameters, obviating the need for more intricate models customized for a triply-coupled 3D structure. Consequently, this approach reduces the structural complexity inherent in algorithm design and allows for more flexible design. 
On the other hand, the dimension of the coupled 3D geometric parameters results from multiplying the dimensions of elevation, azimuth, and distance. If sparse recovery is directly carried out in the near-field case, the algorithm must search for the 3D geometric parameters with a solution space of 𝒪(N^2.5) in an (N × N)-antenna near-field system <cit.>. By contrast, within the proposed TPD framework, the geometric parameters shift from being contained within a 3D coupled cube to being represented by three independent vectors. This leads to a significant reduction in dimensional complexity: transitioning from the multiplicative 𝒪(N^2.5) to the additive 𝒪(2.5N). § APPLICATIONS AND FUTURE DIRECTIONS OF THE TPD FRAMEWORK In this section, we explore the prospective applications along with the future directions of the TPD framework for near-field communications. §.§ TPD Applications Enhanced by Structured Sparsity §.§.§ Clustered Sparsity in Robust AoA Detection A typical kind of structured sparsity in near-field channels is the clustering of scatterers in wireless propagation environments <cit.>. This results in grouped non-zero elements in the sparse representations. By applying the TPD process to address power leakage issues, the imposed sparsity is attained in the decoupled sparse vector for each dimension, providing more precise cluster structure representations to the underlying CS-based algorithms. §.§.§ Temporal Sparsity in Recursive Channel Tracking Besides spatial structured sparsity such as clustered sparsity, structured information also exists in the temporal domain, termed temporal sparsity. Specifically, in recursive channel tracking problems, the scatterers from the previous time slot offer prior information for the current time slot. However, within the near-field 3D geometric parameters, the structured information across time slots is hidden in the triple-coupled structure. Specifically, the state transition function is a joint function w.r.t. all the geometric dimensions. This leads to excessive computational complexity and strong correlation-induced cross-dimension interference. Fortunately, the original joint 3D parametric tracking problem can be simplified through TPD filtering into three distinct parametric tracking problems, each with a single DoF. As such, even the simplest chain-based model can be used to provide sufficient temporal information to enhance tracking accuracy. §.§.§ Common Sparsity in Location Division Multiple Access (LDMA) The distance ingredient can be employed to enhance multiple access techniques by introducing a new DoF, i.e., LDMA <cit.>. The scatterers within each user's distinct near-field channel may share the same locations in the elevation and azimuth dimensions, referred to as common sparsity <cit.>. Specifically, different users may share the same geometric parameter distribution in one dimension, despite being located at different 3D geometric positions. We can design the underlying algorithm to search for the common angles (or distances) first and then search for the distinct distances across users. Therefore, the search space can be compressed from multiplicative to additive, paving the way for low-complexity and low-overhead LDMA designs. §.§ Future Directions §.§.§ Structured Sparsity in Continuous Aperture Design Future holographic MIMO (HMIMO) surfaces aim to achieve unprecedentedly fine beamforming in free space to attain unparalleled spatial multiplexing gains <cit.>.
This will be achieved by packing a nearly infinite number of metasurface-based entries on the array plane, with the antenna spacing far less than half the wavelength. This continuous or near-continuous aperture design will profoundly revolutionize electromagnetic channel modeling. Similarly, the proposed TPD framework can also be effectively implemented in HMIMO systems. Specifically, we can redesign the dictionary based on the Fourier harmonics <cit.>, where each dictionary entry denotes a corresponding Fourier-harmonic-based wavenumber-domain (WD) element. The reason is that the continuous aperture design will lead to a near-infinite number of traditional DFT basis vectors, while the number of WD basis vectors is solely determined by the ratio of the antenna aperture and the working frequency. By applying the TPD framework to the Fourier harmonics and sparsifying the channel in the corresponding wavenumber domain, we can extend the advantages of the TPD framework from traditional discrete MIMO to HMIMO systems with a continuous antenna aperture design. §.§.§ Efficient Algorithm Design for Sparse Recovery Although the TPD framework can alleviate issues such as power leakage and weak sparseness, CS-based methods inherently suffer from quantization errors, which act as performance bottlenecks for sparse channel recovery. Specifically, offsets exist between the discrete non-zero entries in the sparse channel representations and the real geometric parameters <cit.>. Hence, a two-module algorithm design is needed to improve sparse recovery performance in the TPD framework: the first module aims to achieve sparse signal recovery in the discrete sparse domain, while the second, using techniques such as successive convex optimization and gradient descent, focuses on precisely estimating the quantization errors. With this dual-module design, we can further improve the performance of the CS-based algorithm with the assistance of structured sparsity. § CONCLUSION This article investigates the challenges, peculiarities, and applications of near-field structured sparsity. In particular, by revisiting prior works devoted to structured sparsity in the far field, we elucidate the fundamental physical peculiarities induced by the tightly coupled 3D parameters in the near field. Upon this, we expose the consequent mathematical challenges by detailing the theoretical limitations of various conventional sparsifying methods when attempting to extract near-field structured sparsity. Guided by the rationale of decoupling, a low-complexity TPD framework is proposed to decompose the triple-coupled geometric parameters, facilitating their individual sparse recovery. Then, we outline several applications in which the TPD framework can effectively be employed to realize parametric recovery based on decoupled structured sparsity. We finally envisage promising directions for future research within this domain.
http://arxiv.org/abs/2406.19283v1
20240627155553
PhysioLLM: Supporting Personalized Health Insights with Wearables and Large Language Models
[ "Cathy Mengying Fang", "Valdemar Danry", "Nathan Whitmore", "Andria Bao", "Andrew Hutchison", "Cayden Pierce", "Pattie Maes" ]
cs.HC
[ "cs.HC" ]
PhysioLLM: Supporting Personalized Health Insights with Wearables and Large Language Models Anonymous Authors § ABSTRACT We present PhysioLLM, an interactive system that leverages large language models (LLMs) to provide personalized health understanding and exploration by integrating physiological data from wearables with contextual information. Unlike commercial health apps for wearables, our system offers a comprehensive statistical analysis component that discovers correlations and trends in user data, allowing users to ask questions in natural language and receive generated personalized insights, and guides them to develop actionable goals. As a case study, we focus on improving sleep quality, given its measurability through physiological data and its importance to general well-being. Through a user study with 24 Fitbit watch users, we demonstrate that PhysioLLM outperforms both the Fitbit App alone and a generic LLM chatbot in facilitating a deeper, personalized understanding of health data and supporting actionable steps toward personal health goals. Large language model, Sleep, Conversational interface, Physiological data, Digital health app, Wearable, AI § INTRODUCTION The advent of wearable health monitors, such as Fitbit, Apple Watch, and Samsung Gear, has made it possible to continuously collect detailed physiological data, such as heart rate, activity data, and sleep stages. They bring convenience and awareness to our personal health and provide a granular look into one's habits and how they affect physiology. These data and trends can help nudge healthier behavior and may even help detect health problems <cit.>. While it is important to build accessible and accurate health monitoring systems, individuals who wish to change their habits are currently required to first deeply understand their physiological data and how it correlates with their daily routine, and then think of ways to work towards positive changes. However, users often struggle to make sense of the data and translate them into meaningful actions <cit.>. Interactions with the data are typically predefined by graphical user interfaces provided by the phone and wearables, which offer limited interaction and generic recommendations with few personalized insights. Large Language Models (LLMs) present a potentially promising solution to these challenges. For one, they enable individuals to engage in unconstrained questioning and answering in natural language <cit.>. Second, they have the potential to relate health data and behaviors to a wealth of health literature <cit.>. Lastly, LLMs have a semantic understanding of the context that could grant flexibility in producing insights based on raw data <cit.>. Integrating LLMs with physiological data offers the potential to build systems that allow users to ask questions and receive personalized responses, enhancing their understanding of their health and motivating positive behavior changes. This research addresses two main questions: (1) how to implement an LLM-based system that generates personalized insights from physiological data and communicates them through natural language, and (2) how such a system impacts users' understanding of their data and helps them develop actionable health goals.
We designed PhysioLLM, a novel system that utilizes an orchestration of LLMs to deliver personalized insights by incorporating users' own data from already available wearable health trackers together with contextual information. Different from conventional health applications, our system conducts statistical analyses of the user's data to uncover patterns and relationships within the data. As a case study, we focus on improving sleep as the main health goal. Sleeping well is one of the most important things for staying physically and mentally healthy <cit.>. The latest wearable devices offer in-depth reports on sleep, providing information on sleep timing, sleep stages and commonly used metrics such as wake time after sleep onset. They also typically provide a sleep score to indicate overall sleep quality. However, it is often not obvious to users how they can improve their sleep score or what the relationships are between their daytime activity and sleep. To understand what might improve individuals’ understanding of their data and what questions they might ask a conversational interface, we recruited actual users for an in-situ experiment. 24 adult Fitbit users shared their most recent week of Fitbit data. Each participant used a text-based chatbot that was either the complete PhysioLLM system with personal data and insights, an LLM chatbot with personal data but no access to insights, or a placebo off-the-shelf LLM chatbot with no personal data or generated insights. They filled out a survey before and after interacting with the interface that assessed their understanding of their sleep data, how motivated they felt after interacting with the interface, and how actionable their goals were based on their interactions with the interface. The results show that chatting with an LLM-based system, which provides effective personalized insights using our LLM architecture, improves one's understanding of their own health. The interface was perceived as more personalized than chatting with a generic LLM-based chatbot. In fact, the latter resulted in users having less motivation to change, and their goals were found to be less actionable. We also interviewed two sleep experts to review the personal insights generated by the system and the responses and suggestions provided to the user. Overall, the experts found the insights reasonable but noted the system's tendency to overemphasize correlation values. They suggested improving the system by providing the LLM with more background on the data generation process and tuning responses to be more modest when based on sparse data and potentially spurious correlations. In summary, the contributions of this work are: * A novel orchestration of LLMs that integrates physiological and contextual data to support conversations about personalized health insights. * An in-the-wild study with 24 users who interacted with the system, and the study insights derived from quantitative and qualitative results. * Evidence that the interface is perceived as personalized and effectively improves users' understanding of their health through personalized insights. * A preliminary evaluation by two sleep experts of the accuracy and quality of the generated personal insights and suggestions. § RELATED WORK §.§ LLMs For Health Prediction The use of LLMs for medical tasks has rapidly increased, with applications such as knowledge extraction <cit.> and disease prediction <cit.>.
Researchers found that the GPT-4 model exceeded the passing score on the United States Medical Licensing Examinations, an exam that allows individuals to practice medicine in the U.S., by over 20 points<cit.>. Med-PaLM2, a fine-tuned domain-specific medical LLM set a new state-of-the-art by scoring up to 86.5% on the MedQA dataset, a dataset containing expert answers to medical questions <cit.>. Meanwhile, researchers have finetuned LLMs for mental health specific tasks such as the prediction of stress and depression, achieving accuracies from 48% to 87% <cit.>. Taken together, these advancements highlight the substantial potential of LLMs in interpreting and reasoning about health information and their growing potential for supporting healthcare professionals. However, current approaches do not enable individuals who are not medical professionals to contextualize the knowledge with personal data and health goals. In contrast, PhysioLLM not only derives tailored insights from personal wearable health data but also allows the user to intuitively understand the implications of their data through conversations. §.§ LLM-based Data Analysis Different from fine-tuning an LLM for domain-specific tasks, another approach is to prompt a large language model to generate code to then be run by a code executor to produce calculations and graphs <cit.>. While this approach has already found its way into commercial products[<https://github.com/features/copilot>], it requires explicit knowledge of the types of analyses to run. To overcome this challenge, other systems have added multiple "chains" or nodes of LLMs where each LLM in sequence selects the appropriate analysis from a set of possible analysis actions <cit.>. While this method enables users to explore their data without prior knowledge or conducting analyses themselves, it does not incorporate personal information about the user. Additionally, it still requires users to have some understanding of potential hypotheses to test based on data trends and to suggest these for further exploration. Physiollm takes into account the context of the personal health data and formulates hypotheses based on the data a priori. As such, it guides the user through a more focused conversation that prioritizes notable discoveries. §.§ LLM for Personal Health Insight Generation Many studies integrate personal health records from an electronic health record (EHR) for effective disease prediction <cit.> or to help patients understand health records <cit.>. Health-LLM proposed by Kim et al. adapts the public health prediction tasks with wearable data to enable personal health support <cit.>. Most related to our work is PH-LLM<cit.>, a fine-tuned model for contextualizing physiological data and producing personalized insights. The work focuses on benchmarking the LLM's capability against human domain experts. Commercial systems are beginning to offer ChatGPT-based conversations to discuss training plans[<https://www.whoop.com/>] and interpretations of heart rate variability data[<https://welltory.com/>]. With the increasing availability of LLM-based services, prior research has emphasized the prediction accuracy of these models. However, it remains unclear how to effectively communicate these predictions and insights to engage users in positive behavior change. 
Our work with PhysioLLM investigates not only how an LLM can be used to create personalized insights but also how such LLM-generated insights should be delivered so that individuals can better understand their data and develop actionable plans. Stromel et al. compare the modality of the insight between text and chart and found LLM-generated text-based narrative to be more effective at helping people reflect on their data <cit.>. However, their investigation is limited to a one-turn interaction, and the data is limited to step count, whereas our system supports multi-turn conversations and explores the relationship among a variety of sensor data types to uncover relationships that may otherwise be difficult to see at a glance. § MOTIVATION AND DESIGN GOALS We hypothesize that engaging in a personalized conversation that includes actionable insights about one's health data can enhance understanding of the data and the ability to develop effective action plans towards healthy behaviors. The concept of Personalization is evident through the LLM's grounded knowledge of the user's data and its references to the data sources in its response. Actionable insights refer to the LLM-generated discoveries of trends, correlations, and patterns within the user's data, as well as actionable, follow-up questions and suggestions based on these discoveries. While current accompanying apps of wearable devices allow users to explore the collected data through graphical representations, uncovering actionable insights remains challenging. Data visualizations alone can lead to bias in interpreting their data, and one way to reduce such bias is to incorporate statistical analysis for comparison and correlation <cit.>. Additionally, although users can search for solutions to specific problems, these queries are often not contextualized within their data. In addition to making personalized and insightful responses our primary research and design goal, we designed our system with the following important principles in mind: Privacy-preserving: To safeguard user confidentiality and trust, we ensure that no identifiable information is included in the communication with third-party systems; Responsible: To maintain ethical standards and avoid potential harm, our system should never provide medical or clinical diagnoses; Accuracy: To provide reliable and trustworthy information, we ensure all responses are based on the data sources and avoid any fabrication or hallucination of values; Responsive: To create a smooth and engaging user experience, the system is designed for fast response times, making the conversation feel seamless and fluid. § PHYSIOLLM ARCHITECTURE AND IMPLEMENTATION Figure <ref> shows an overview of the system. The system consists of three main components: data preparation, insight generation, and the conversational interface. Next, we describe each component in depth. §.§ Data Preparation The quality of the responses depends on the quality and interpretability of the input data, which necessitates a process that prepares the data in formats that LLMs expect and instructs the LLMs on how to interpret the data. Initially, we thought to leverage the code-generation capabilities of LLMs to provide real-time analysis of the data. Early experiments showed that this approach fails to be consistently accurate and fast, which are two important design principles. 
In addition, the need to generate bespoke functions is rare; meaningful analyses are often in the category of fundamental statistical analysis, such as mean, variance, trends over time, and correlation between data types. Thus, the system consists of an "offline" (as opposed to real-time) preparation phase that conducts statistical analysis on and summarizes the user's data. Specifically, the process is as follows: §.§.§ Data Filtering and Alignment The Fitbit data is exported and filtered for the dates of interest. The raw data from different sensors have varying sampling rates. For example, step count is sampled every minute, heart rate is sampled every 5 minutes, and sedentary minute is sampled daily. Thus we consolidated daily values for each data type and hourly values for step count and heart rate. Accurate representation of temporal information is essential, as the subsequent steps that derive the correlations and potential causal relationships rely on the temporal dimension. Therefore, we aligned the different sensor data based on date and time considering the device's timezone. Because we are interested in the effect of daily activities on sleep quality, we adjusted the 'date of sleep' to correspond with the day following the recorded daytime activities. For simplicity, we excluded naps (i.e., not the main sleep event). In the event of missing data, an average of the weekly value was used. The final list of data is in Figure <ref>. §.§.§ Generation of Summary, Trends and Correlations After the data had been filtered and aligned, we summarized the data to extract the averages of the week, dates of min and max values, and trends. For trends, we used a permissive threshold of ±0.15 because the goal is not to perform statistical hypothesis testing but rather to provide the LLM with narrative descriptions of possible trends. The hourly step count and heart rate were plotted to show the visual pattern of one's activity and heart rate each day over a week. Then, we calculated pair-wise correlation values. An example of the pattern graph and correlation matrix plot is shown in Figure <ref>. §.§ User Modeler and Insight Generation Deeper insights such as how the data correlate with each other and the implications of the data are not apparent to a user. As such, the mere integration of the user's data in an LLM is not enough as one can obtain a similar summary from the smartwatch's accompanying app. In addition, advice one can get from searching the web is often generic. While general advice can be applicable and helpful, anomalies and edge cases are arguably important yet challenging to catch using traditional machine learning approaches. The advantage of LLMs is that: (1) they have ample prior knowledge of statistics, insights on health, and common sense; (2) they can take into account the user's profile and other contextual information, such as gender, age, and habits. To generate meta-level insights, we used OpenAI's GPT-4-turbo model (temperature=0, max token=4096), which is an LLM model capable of receiving multi-modal input. We input the user's biography (provided by the user's demographic survey), the summary and correlation matrix of the data, and the plot of the hourly trends of heart rate and step count. We tried inputting the correlation matrix as a plot, but it resulted in consistent factual errors, so a numerical representation of the matrix was used instead. The system metaprompt instructs the LLM to generate at least 10 insights. 
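To make the offline analysis and the insight-generation call concrete, a minimal sketch is given below; the column names, trend heuristic, prompt wording, and helper functions are illustrative assumptions and do not correspond to the exact implementation.

```python
import numpy as np
import pandas as pd
from openai import OpenAI


def summarize_week(df: pd.DataFrame) -> dict:
    """Offline statistics over one week of aligned daily data.

    `df` is assumed to hold one row per day with numeric columns such as
    'steps', 'resting_hr', 'active_minutes', 'sleep_score' (names are
    illustrative, not the actual Fitbit export schema).
    """
    means = df.mean(numeric_only=True).round(2).to_dict()
    day_idx = pd.Series(np.arange(len(df)), index=df.index, dtype=float)
    trends = {}
    for col in df.select_dtypes("number").columns:
        r = df[col].corr(day_idx)  # correlation of the metric with time
        trends[col] = "increasing" if r > 0.15 else ("decreasing" if r < -0.15 else "stable")
    corr = df.corr(numeric_only=True).round(2)
    return {"weekly_means": means, "trends": trends, "correlation_matrix": corr.to_dict()}


def generate_insights(summary: dict, biography: str) -> str:
    """One possible shape of the insight-generation call (GPT-4-turbo, temperature 0)."""
    client = OpenAI()
    system = (
        "You are a health-data analyst. Using the user's biography and the weekly "
        "summary, trends, and correlation matrix, produce at least 10 insights about "
        "what affects their sleep quality. Cite specific values, state your assumptions, "
        "and end each insight with a 0-10 score for how likely it is to be the most "
        "important factor."
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,
        max_tokens=4096,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"Biography: {biography}\n\nWeekly data: {summary}"},
        ],
    )
    return response.choices[0].message.content
```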
For each insight, it needs to provide reasoning, assumptions, and explanations that make use of the data. The data sources need to be specific with values, and it must use a combination of different sources of data. After each insight, it needs to give a score between 0-10 on how likely the insight is to be the most important factor affecting sleep quality. An example output of this step is shown in Figure <ref>. §.§ Conversational Interface Design The conversational interface is a text-based chatbot on a web browser that can be accessed on a phone or a laptop. The interface offers an interactive way to understand the data via a summary of data, discussions of implications, and answers to questions. The conversation is driven by an LLM which is prompt-tuned to focus on unique and personal trends and insights (Figure <ref>). We again used OpenAI's GPT-4-turbo model (temperature=0, max token=4096) but with a different system metaprompt. The model takes in the pre-generated insights and summary of the week of data as inputs. The system metaprompt of the LLM has a few critical components: role: defines the character of the LLM and its high-level role in the conversation; data: describes the expected input, including the person's biography, the summary of Fitbit data for the time period of interest, the correlation matrix, and health data trends; communication style: specifies a concise language style, avoiding overly technical jargon. task: ensures the LLM encourages users to explore all insights by suggesting relevant questions; opening format: grounds the conversation with a self-introduction, an overview of the data, derived insights, and three follow-up questions to guide user exploration. caution: anticipates and mitigates malicious or unintended uses of the LLM, such as off-topic questions; An example of the conversation is in Figure <ref>. § USER STUDY Instead of evaluating the efficacy of LLMs at predicting health concerns, our focus is on helping people understand their data and arrive at actionable insights, which we believe LLMs have the potential to support. Thus, we implemented and evaluated our system in real-world settings where actual users interacted with our system using their wearable devices and personal data. §.§ Procedure Figure <ref> shows an overview of the experimental protocol. Participants were asked to wear the Fitbit for minimally a week, including during sleep. They completed a demographics survey and a pre-survey that asked about their understanding of their data and goals after using the Fitbit App. The survey breakdown is detailed in the later section. Once participants had at least a week of data, they exported and shared their Fitbit data with the experimenters. Their raw data was securely stored and never shared with any third-party systems, including the LLMs. They then interacted with a version of our system depending on which condition group they were randomly assigned to. They needed to complete at least 10 exchanges with the chatbot. Their chat conversations were logged and shared with the experimenters. Finally, participants completed a post-survey about the interface with the same questions as the pre-survey. Participants received $15 for completing the study, which was approved by the institution's IRB. 
§.§ Conditions The study has 3 between-subject conditions (Figure <ref>): Placebo (C1): Chat with an off-the-shelf LLM with no personal information; Control (C2): Chat with an LLM that has access to a summary of their Fitbit data; Intervention (C3): Chat with an LLM that has access to their Fitbit data summary, insights on how their data correlate, and generated follow-up questions that guide the user through the insights. Note that the placebo group was still asked to share their Fitbit data, despite their summarized data never being provided to the conversational interface. §.§ Participants 50 Participants were recruited through university mailing lists, 5 participated in the pilot study, and 21 did not complete the full study and were excluded from the data analysis. This left us with 24 participants, 8 for C1, 8 for C2, and 8 for C3. The sample population has a mean age of 29.09 (SD=8.50). 12 identified as male, 12 as female. All have used a smartwatch before but may not be a Fitbit compatible watch. For consistency, we gave those who own a different type of smartwatch a Fitbit watch to wear for a week. Participants must not have any serious health or sleep concerns as our system should not provide medical diagnosis or advice. 77% typically use the Fitbit app at least once a day, 43% use LLM-based systems more than once a day, and 74% got full scores on the Cognitive Reflection Test <cit.>, where a higher score indicates individuals' ability to suppress an intuitive and spontaneous wrong answer in favor of a reflective and deliberative right answer. In our study, this test assessed participants' acceptance of statistical explanations as opposed to adhering to prior beliefs. §.§ Hypotheses and Measurements Below are the hypotheses and the corresponding metrics used to measure and compare the effectiveness of the different conditions in four outcomes of interest. The pre-survey focuses on outcomes as a result of using the Fitbit App. The post-survey contains an identical set of questions as the pre-survey to measure the difference in the outcome after interacting with the chatbot. * H1: C3>C2>C1 in improving individual's understanding of their data. Measured by: 7 qualitative questions, each followed by a quantitative self-rated confidence score (1-7), and 1 quantitative rating of the interface (1-7). * H2: C3>C2>C1 in making individuals feel motivated to improve their sleep. Measured by: 1 qualitative question, and 3 quantitative ratings of the interface (1-7). * H3: C3>C2>C1 in helping individuals form actionable goals to improve their sleep. Measured by: 1 qualitative question, and 3 quantitative ratings of the interface (1-7). * H4: C3>C2>C1 as a more personalized interface. Measured by: 2 quantitative ratings of the interface (1-7). §.§ Analysis For the quantitative results, we treat the mean of the aggregated 7-point Likert scores within each category as a continuous variable. We used a linear mixed effects (LME) model (lme4 package in R <cit.>) to account for the nested data structure, namely each subject has 2 observations: pre-survey and post-survey. We used the random intercept model, allowing each subject to have a unique intercept. The predictors of interest are Test (pre vs post) and Condition (placebo, control, and intervention), and we control for AI literacy, Fitbit use frequency, and cognitive reflection test. Intuitively, the four outcomes should not be independent as outcomes from the same person have the same underlying determinants. 
For example, one's motivation to improve their sleep could be dependent on how much they understand their data. However, we do not model the correlations among the outcomes at this point. Thus, we fit a random intercept linear mixed effect model for each outcome separately. Since our hypotheses need to compare three pairs of the conditions, and the LME model only compares two pairs (Control and Placebo, Intervention and Placebo), we conducted an post-hoc pairwise comparison using the emmeans package in R <cit.>, which produces an adjusted p-value. We also open-coded and thematically clustered participants’ qualitative questionnaire responses and conversation logs to extract trends. Specifically, we compared the qualitative responses to the knowledge questions and action plans before and after interacting with the chatbot. We also compared the post-survey action plans against the conversation log. The hypothesis is that the conversation content has a positive influence on one's knowledge about their data, ability to generate actionable plans, and confidence in their knowledge and action plans. § QUANTITATIVE RESULTS Overall, interacting with a chatbot in addition to using the Fitbit app increased users' understanding. Specifically, the post-hoc pairwise comparison reveals that both Control and Intervention groups had a statistically significant increase in understanding (estimate=1.28 and 1.05 respectively, p<.01 and p=.02 respectively) (Figure <ref>). Comparing the amount of change post-interaction between the conditions, the LME model and pairwise comparison show that the Control group had a significantly greater increase than the Placebo group (estimate=1.01, p=.03) in understanding between the post- and pre-survey results (Figure <ref>), while the difference between Intervention and Placebo groups approached significance (estimate=0.77, p=.08) (Table <ref>). On the other hand, interacting with a generic chatbot that has no personal data or tailored insights felt less personalized than using the Fitbit App, whereas the full PhysioLLM system was rated the highest for this category (Figure <ref>). The pre-survey rating varied between conditions, so the pre-post interaction differences were not significant between conditions. The LME model did not reveal any statistical significance for the fixed or interaction effects. Similarly, the Placebo group rated the generic chatbot lower for outcomes actionable and motivation compared to using the Fitbit App alone, and the full PhysioLLM system was again rated the highest for both outcomes (Figure <ref>). Comparing between conditions, The LME model shows the Control group had a significantly greater increase in supporting actionable goals than the Placebo group (estimate=1.75, p=.03) (Figure <ref>, Table <ref>). § DISCUSSION Combining numerical results and trends extracted from the qualitative results, we now discuss the system's performance in achieving our design goals. Comparing understanding pre- and post-interaction – As mentioned earlier, the quantitative data shows the control and intervention groups had a significant increase in confidence in their understanding of the data (Figure <ref>). Qualitative results revealed further that there was an increase in detail and clarity in post-survey responses. When asked if they knew what certain terminologies mean and their influence on sleep, there was a decrease in the number of "no" responses in the post-survey. 
For instance, many participants initially "vaguely" understood various sleep stages, but later described sleep's "importance for memory, emotions, other health regulation" (P202) in the post-survey. There were also several misconceptions before the interaction with the chatbot, and the responses of participants in control and interaction groups indicate a more comprehensive understanding afterward. For example, P34 initially only knew that "REM is when dream happens," but stated after the interaction that "REM (sleep) helps with memory & creativity and deep (sleep) is for restorative sleep" while citing specific percentages of sleep stages. This was also seen with HRV knowledge, where P46 described that for them, "higher HRV is correlated with better sleep," whereas they initially had not heard of HRV. When asked about what they thought had the most impact on their sleep, participants in the placebo group answered similarly in the pre- and post-survey, whereas those in control and intervention groups were able to pinpoint that physical activity during the day significantly affects their sleep (P32, P46, P402). Furthermore, participants were more specific about the timing of the activities. For example, in pre- and post-survey, participants mentioned that caffeine can make it harder to fall asleep, but post-study responses more frequently mention that "caffeine intake close to bedtime decreases sleep quality" (P33). Similarly, most participants conclusively stated in the post-survey that exercise leads to better sleep, which is an improvement from the varied and uncertain responses in the pre-survey. Comparing goals with conversation content – We were also interested in whether the interaction with the chatbot led to more personal and actionable goals. Quantitative results show that the control and intervention groups rated the interface as more personalized and relevant and that they are more confident in their ability to use their health data to improve their sleep. The quantitative survey asked participants to list three goals and explain why and how they want to achieve these goals. In both the pilot study and full study, participants adapted their goals based on chatbot feedback. In particular, participants in the intervention group related daily behaviors with specific sleep outcomes based on insights provided by the system. For example, some goals were to "reduce stressful activities late at night" so they can "go to sleep at a more consistent time" (P40) or to have "more regular medium intensity exercise" for "better sleep and HRV" (P45). In contrast, participants in the placebo group had more personal goals, with vaguer explanations and reasoning on why they wanted to achieve them. Personalization – Overall, participants had positive interactions and thought that conversations with the chatbot were "personalized" and "engaging." However, a few thought the chatbot was not personalized even though suggestions explicitly mention individual data such as steps per day or hours of sleep. A possible explanation may be that the health insights provided are well-known, making the chatbot responses appear more generic. Nonetheless, participants felt that the interface focused them on the relevant information. § PRELIMINARY EVALUATION WITH SLEEP EXPERTS We aimed to understand how human experts derive insights from physiological data and evaluate PhysioLLM-generated suggestions. Two sleep experts, B and J, were independently interviewed using an experimenter's personal data as a case study. 
They were presented with the same input (biography, summary, correlation, and trends) given to the LLM and asked to generate insights without additional context. They then reviewed the LLM's generated insights and the system's responses via its interface. Below, we summarize the main insights from both interviews. Comparison between LLM and human expert insights – We compared how human experts and PhysioLLM approached the provided information. Both experts focused on big-picture data to assess the user's sleep health, whereas the LLM concentrated on data correlations. Some insights generated from the correlation matrix were similar between the LLM and the experts. However, experts found some correlations unexpected and counterintuitive, such as an increase in sedentary minutes correlating with a higher percentage of deep sleep. The LLM justified this by suggesting it "could be due to the body's increased need to recover from activity." In contrast, the experts dismissed this correlation, noting that the step count and activity minutes indicated the person did not engage in activities intense enough to require such recovery. Expert opinion on insights – Overall, the system provided reasonable and correct explanations. Most of the explanations that experts found surprising stemmed from unexpected correlations in the data. The LLM tends to over-index the correlation values. Experts noted that correlation significance should be adjusted for the small data sample and redundant data categories. They suggested reducing comparisons by combining related values, such as aggregating different activity levels into a single value. Expert opinion on generated feedback – Expert J took the perspective of the user and thought it gave "good suggestions on the practical side," while expert B took the perspective of a medical professional. Expert B remarked that since some insights might be based on spurious statistics, the model should provide more modest comments rather than sounding certain. While acknowledging occasional over-interpretation by the model, Expert J believed that "the explanations may not matter," as users primarily seek actionable advice, such as avoiding overexertion and not exercising close to bedtime. § LIMITATION AND FUTURE WORK Limitations of sensor data – We assume most people follow a weekly routine, so we choose a week of data as the range of input data. Some correlation values can be counter-intuitive due to the short time window of data. In addition, several different health conditions can cause the same changes in sensor readings. For example, heart rate variability can be low due to stress, or because one has an infection. Because the data are inherently ambiguous, the system should not try to provide specific diagnoses based on the data, rather it should suggest testable hypotheses to the user which they can try to identify the root causes. Limitations of insights – The current implementation relies on GPT's prior knowledge during training. This is acceptable as prior work has shown that the zero-shot GPT-4 can have 84% accuracy when answering medical licensing exams <cit.>. A fine-tuned GPT for medical diagnosis can improve the accuracy and comprehensiveness of the system. The way the insights are presented could also be more diverse. Some participants wished they were given more visuals, such as graphs to represent the data the chatbot is referencing. 
In the future, the conversational interface can be directly integrated into the companying app, and the chatbot can reference the graphical representations in addition to the textual insights. Safety, privacy, and ethics – The system has embedded counter-action prompts to prevent abusive uses of the system that are beyond the system's capabilities and intended uses, but further tests on the robustness of the safety prompt are needed. The outcome of the generation should be factually accurate, especially in the domains of personal health. Mistakes such as Google's AI search feature suggesting people eat rocks[<https://www.bbc.com/news/articles/cd11gzejgz4o>] highlight the challenge of making the LLM factually grounded. However, not all mistakes are absurd and obvious. The natural, human-sounding outputs of the LLM systems are worrisomely persuasive. We made sure users knew the system was not allowed or capable of giving medical diagnoses and advice, and that the system should acknowledge its limitations. Last but not least, health and activity data is sensitive information. By design, we made sure that no raw data was sent to the LLM, and we de-identified all data and survey results. Participant pool – The participant group was recruited with some interest in improving their sleep but most had no specific sleep issues. This reduced the likelihood of our system discovering findings that were significantly different from common knowledge and suggested actions that could result in drastic behavior change. In the future, we hope to work with a broader user group with more diverse sleep patterns. Just-in-time assistance – Our system allows the user to reflect on recent but historical data. A proactive, always-on system could suggest and anticipate physiological states to help individuals take preventive measures. § CONCLUSION In this paper, we introduced PhysioLLM, a novel system that addresses the question of how to provide personalized health insights from individuals' wearables. The system orchestrates multiple LLMs and non-LLM modules to generate reliable, personal, and insightful outputs. Our user study with 24 Fitbit watch users demonstrates that PhysioLLM outperforms both the Fitbit App and a generic LLM chatbot in facilitating a deeper, personalized understanding of health data and supporting actionable steps toward personal health goals. Despite limitations, such as handling the randomness and unknowns in the data and contexts, the adaptability of our system ensures beneficial and personalized suggestions. Our system uses an off-the-shelf, general-purpose LLM so it has limited expert health knowledge; integrations of fine-tuned specialized LLMs with our system will further improve the quality of the insights. As LLM-based conversational systems become widely integrated with health apps, our study's insights are eminently important for providing the appropriate responses and enabling users to query and discover insights. Anecdotally, some participants reported deeper reflections about their sleep and adjusted daytime activities informed by the interactions with our system, which shows the promise of this system in nudging people towards positive behavior change and merits future study. The significance of this work lies in its potential to turn general-purpose LLMs into personal intelligence by contextualizing AI-enabled conversational chatbots with time-series, personal data. 
We envision that this system allows individuals to better understand how their body functions and the consequences of actions, thereby making the internal and invisible visible.
http://arxiv.org/abs/2406.19271v1
20240627153757
AutoPureData: Automated Filtering of Web Data for LLM Fine-tuning
[ "Praneeth Vadlapati" ]
cs.CL
[ "cs.CL" ]
AutoPureData: Automated Filtering of Web Data for LLM Fine-tuning Praneeth Vadlapati ================================================================================ <https://github.com/Pro-GenAI/AutoPureData> § ABSTRACT Up-to-date and reliable Large Language Models (LLMs) are consistently sought after. Typically, LLMs are trained on a fixed dataset and then deployed. However, the training data continually becomes outdated. Enabling automatic training of AI on web data raises significant concerns regarding data quality and safety due to bias, spam, and other unsafe or unwanted text. Pure data is essential for producing reliable models, and training a model on impure data may result in undesirable outcomes. This research proposes a system that collects web data and automatically filters out unwanted text with the assistance of existing trusted AI models. In the experiment, a small sample of web data was collected and filtered, demonstrating the system's effectiveness in purifying the data. § INTRODUCTION §.§ Problem Statement Millions of users interact with AI chatbots regularly. Keeping models up-to-date is crucial in domains where data changes frequently, such as news and academic research. Unfortunately, very few LLMs are continuously updated, as they do not integrate the latest data. Using search engines on demand is often time-consuming and expensive, and web data becomes reliable only after proper filtering. This research focuses on regular, automated web data collection and filtration to support up-to-date Responsible AI models. AI safety is crucial for the success of Responsible AI models, and the data used to train them should be both safe and unbiased. As "garbage in, garbage out" suggests, the input data for training or fine-tuning an LLM impacts the quality of the model <cit.>. The web is a vast source of information, but its reliability varies significantly. Currently, organizations that train LLMs automate most of the data collection process but not the filtering process. §.§ Challenges with Manual-Only Data Filtering Human experts are employed to filter the data manually. However, manual data filtering can introduce bias and errors, necessitating review by multiple experts. Hiring multiple human experts for data filtering is often time-consuming and expensive. This lengthy process can delay data preparation, preventing LLMs from staying up-to-date, especially when the data changes every second. The challenge is further compounded when the data is in multiple languages. Given the speed at which new information is created, it is essential to filter out unwanted text in an automated manner. This study aims to address these challenges and enhance the productivity of data reviewers, without intending to replace jobs. §.§ Proposed Solution and Its Benefits This paper proposes a system for continuous data collection and filtering to ensure that the dataset remains current with the latest data. NLP tasks can be performed with the help of existing trusted LLMs <cit.>. AI safety can help organizations retain users, prepare for future regulatory requirements, and avoid substantial penalties arising from biased or unsafe AI models.
The proposed system significantly reduces the time and effort required for data collection and preprocessing, thereby increasing the efficiency of the data preparation process, which is a crucial part of the model training process. Potential applications of this system span various domains, including but not limited to news aggregation and academic research. This system is more efficient and less biased than manually searching online for the latest data and allows models to adapt instantly to new information. This project aims to ensure data quality, which is crucial for the success of AI models. § LITERATURE REVIEW Penedo et al. (2024) <cit.> present the FineWeb dataset with refined deduplicated web data suitable for training, but it does not focus on filtering unsafe or unwanted text. Yexiao He et al. (2024) <cit.> introduced SHED, a method for Automated Dataset Refinement to select the most informative data for training. Biester et al. (2024) <cit.> introduced LLMClean, which includes automated data cleaning using rule-based and ML-based cleaning tools. Chen and Mueller (2023) <cit.> worked on automated data curation for fine-tuning. Existing work focuses on creating high-quality datasets by leveraging tools for data curation. However, existing research does not address regular automated filtration of diverse data, such as web data, for AI safety. This paper proposes a system that automates the data filtering process, thereby addressing a significant gap in current research. The system also filters data from untrusted sources, even if the data appears safe. § EXPERIMENT §.§ Data Collection The data source for this experiment is the web, a vast source of information. The system utilizes refined web data from FineWeb <cit.> <cit.>. The data originates from various websites, including news platforms. A small sample of 100 rows of data was collected and filtered during the experiment. §.§ Data Flagging The web contains a substantial amount of unwanted text. The data is flagged using existing trusted AI models. §.§.§ Flagging Unsafe Text LlamaGuard 2 <cit.> is employed to flag the following types of unsafe text: violent crimes, non-violent crimes, sex-related crimes, child sexual exploitation, specialized advice, privacy, intellectual property, indiscriminate weapons, hate speech, suicide & self-harm, and sexual content. According to the Model Card page <cit.>, LlamaGuard 2 has an F-1 score of 91.5% and a False Positive Rate of 4%, and is noted to be superior to other popular moderation models or APIs. §.§.§ Flagging Unsafe and Unreliable Domains LlamaGuard 2 is also utilized to flag unsafe domains. A search engine is utilized to determine whether a domain is indexed. Typically, search engines do not index unreliable domains. §.§.§ Flagging Unwanted Text Using LLM Llama 3 (8B) <cit.> is the LLM used to flag other unwanted text using the provided rules. Text not flagged by LlamaGuard 2 may be flagged in this step. The rules are designed to filter out unwanted data. The data is flagged according to a list of possible flags. Llama 3 is used to identify and flag various forms of inappropriate content, including sensitive topics, biased information, religious content, extremism, lottery, scams, misleading content, advertisements, and adversarial attacks through data poisoning. §.§ Filtering the Flagged Rows Flagged rows have been removed from the dataset to ensure its purity. The following is a comprehensive flowchart of the automated data filtering process. 
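Alongside the flowchart, a minimal code sketch of the same flag-and-filter loop is given below; the serving endpoint, model identifiers, prompt wording, and the search-index helper are assumptions made for illustration and are not the released implementation.

```python
from urllib.parse import urlparse

import pandas as pd
from openai import OpenAI

# Assumption: LlamaGuard 2 and Llama 3 (8B) are served behind an OpenAI-compatible
# endpoint (e.g. a local inference server); the paper does not specify the hosting setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

FLAGS = ("sensitive topic, biased information, religious content, extremism, lottery, "
         "scam, misleading content, advertisement, data poisoning")


def text_is_unsafe(text: str) -> bool:
    """LlamaGuard 2 replies 'safe' or 'unsafe' followed by the violated categories."""
    reply = client.chat.completions.create(
        model="llama-guard-2-8b",  # illustrative model identifier
        messages=[{"role": "user", "content": text}],
    ).choices[0].message.content
    return reply.strip().lower().startswith("unsafe")


def llm_has_flags(text: str) -> bool:
    """Ask Llama 3 to name any flags from FLAGS that apply, or to reply 'none'."""
    reply = client.chat.completions.create(
        model="llama-3-8b-instruct",  # illustrative model identifier
        messages=[{"role": "user", "content":
                   f"Possible flags: {FLAGS}.\nList the flags that apply to the text "
                   f"below, or reply 'none'.\n\n{text}"}],
    ).choices[0].message.content
    return not reply.strip().lower().startswith("none")


def domain_is_indexed(url: str) -> bool:
    """Placeholder for the search-engine index check; unindexed domains are dropped."""
    domain = urlparse(url).netloc
    raise NotImplementedError(f"query a search API for {domain}")


def purify(rows: pd.DataFrame) -> pd.DataFrame:
    """Drop any row whose text or source domain is flagged by one of the checks."""
    keep = []
    for _, row in rows.iterrows():
        ok = (not text_is_unsafe(row["text"])
              and not text_is_unsafe(row["url"])
              and domain_is_indexed(row["url"])
              and not llm_has_flags(row["text"]))
        keep.append(ok)
    return rows[pd.Series(keep, index=rows.index)].reset_index(drop=True)
```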
[Flowchart: automated data filtering pipeline: collect web data → flag rows (LlamaGuard 2 safety check, domain checks, Llama 3 flag list) → remove flagged rows.]
§ RESULTS The reasons for removing the rows are presented in Table 1 below. The flags identified by the LLM are presented in Table 2 below. Some text may have been incorrectly flagged as unsafe during the experiment.
Table 1: Detection of Unwanted Rows
Reason               Count
Unsafe text          8
Unsafe domain        3
Unindexed domain     5
More flagged by LLM  16
TOTAL                32
Table 2: Flags by LLM
Flag             Count
Sensitive topic  6
Advertisement    6
Data poisoning   3
Biased           2
Lottery          1
Scam             1
Note: Some rows have multiple flags.
§ DISCUSSION The system demonstrated efficacy in filtering out undesired text. Observations indicated that the system flagged 32 rows from a sample of 100 rows. This system represents an experimental endeavor and serves as a proof of concept. The experiment marks a progressive step towards automating the data filtering process. Enhancements to the system can be achieved through the incorporation of additional rules and flags. By automating the data filtering process, organizations stand to benefit from significant time and cost savings. The outputs generated by the system warrant further examination by human experts to ensure the integrity and impartiality of the data. Feedback garnered from multiple human experts could be instrumental in refining and improving the system. The existing manual-only data review process consumes more time, money, and resources than the automated data filtering process proposed in this research, and despite this, the manual process can still introduce bias and errors. Hence, this system is a step towards up-to-date Responsible AI models. § CONCLUSION The system has demonstrated its capability to filter out undesired text efficiently from a limited dataset of web content. This innovation holds potential for adoption across various organizations, aiming to augment the data review process. It is noteworthy that the task of flagging does not necessitate the deployment or usage of Large Language Models (LLMs); alternative Natural Language Processing (NLP) algorithms might exist that are faster and more cost-effective. Expanding the system to encompass a broader array of data sources, including research papers and books, could significantly enhance its utility. Moreover, incorporating multilingual support could extend the system's applicability, catering to a global audience. § LIMITATIONS OF STUDY During the experiment, some data may be filtered out incorrectly even if the information is legitimate. The models used in this experiment might not be the best for every task. Model selection is the responsibility of the engineers for their specific use case. The system is designed for data in only English and automatically removes data in other languages without translating or evaluating the text. The system is designed only to experiment with a new approach to data filtering on a small sample of web data and is not scalable. Research on scalable, cost-effective, production-friendly filtering methods is yet to be conducted. The system flags entire rows of data if any part of the text is unwanted. A more effective approach could involve removing only the unwanted parts of the text. The data source used is only web data.
Additional sources could be incorporated.
http://arxiv.org/abs/2406.17957v1
20240625221852
Improving Robustness of LLM-based Speech Synthesis by Learning Monotonic Alignment
[ "Paarth Neekhara", "Shehzeen Hussain", "Subhankar Ghosh", "Jason Li", "Rafael Valle", "Rohan Badlani", "Boris Ginsburg" ]
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
Improving Robustness of LLM-based Speech Synthesis by Learning Monotonic Alignment ======================================================================================== § ABSTRACT Large Language Model (LLM) based text-to-speech (TTS) systems have demonstrated remarkable capabilities in handling large speech datasets and generating natural speech for new speakers. However, LLM-based TTS models are not robust, as the generated output can contain repeating words, missing words and mis-aligned speech (referred to as hallucinations or attention errors), especially when the text contains multiple occurrences of the same token. We examine these challenges in an encoder-decoder transformer model and find that certain cross-attention heads in such models implicitly learn the text and speech alignment when trained to predict speech tokens for a given text. To make the alignment more robust, we propose techniques utilizing CTC loss and attention priors that encourage monotonic cross-attention over the text tokens. Our guided attention training technique does not introduce any new learnable parameters and significantly improves the robustness of LLM-based TTS models. [ Audio Examples: <https://t5tts.github.io/> ^*Denotes equal contribution ] § INTRODUCTION Large language models (LLMs) have revolutionized the landscape of deep generative AI with their unprecedented ability to generate coherent and contextually rich content across diverse domains. In LLM-based generative models, data is quantized into discrete tokens, which allows the formulation of data synthesis as a language modeling task. Transformer architectures such as GPT <cit.> (decoder-only) and T5 <cit.> (encoder-decoder) are trained to autoregressively generate discrete tokens given a prompt, leading to a unified architecture that can be adapted across various data domains and synthesis tasks. Particularly in the speech domain, there has been a recent surge in the use of LLMs for various speech synthesis applications such as text-to-speech (TTS) and speech-to-speech translation tasks <cit.>. TTS synthesis has traditionally been treated as a cascaded problem with an intermediate mel-spectrogram representation that is typically modelled as a regression task <cit.>. However, discrete neural audio codecs <cit.> have emerged as a promising intermediate audio representation that not only preserves audio fidelity at a high compression rate but is also suitable for training autoregressive transformer-based LLMs. Audio LLMs <cit.> have gained traction for their ability to generate audio seamlessly, eliminating the necessity for additional duration and pitch prediction models. Moreover, LLM-based speech synthesis models can scale up to large speech datasets and be prompted in diverse ways to perform tasks like zero-shot speech synthesis, multilingual speech synthesis and other audio generation tasks besides speech. Despite their remarkable achievements, LLM-based TTS models suffer from attention errors resulting in mis-aligned speech, repeating and missing words, analogous to hallucinations <cit.> exhibited by LLMs in the text domain. This issue becomes more prominent when the input text is challenging and contains repeating words. For certain inputs, the probabilistic autoregressive inference of LLM-based TTS models can result in looping or infinite silences <cit.>. This issue makes LLM-based TTS models unreliable for real-world applications.
In our work, we investigate this hallucination issue and find that attention layers of LLM-based TTS models learn an implicit alignment between text and speech tokens when trained using the next-token prediction objective. In encoder-decoder transformers, the TTS alignment is learned in certain cross-attention heads of the decoder; while in decoder-only models, the alignment is learned in the self-attention layers. Since the implicitly learned alignment in attention layers is unconstrained during training, it is not strictly monotonic which results in mis-aligned synthesis during inference. To address this challenge, we propose a learning procedure that encourages monotonic alignment in the attention layers of LLM-based TTS models, resulting in significantly more robust TTS models without modifying the architecture or introducing new parameters. We design a TTS model based on an encoder-decoder T5 <cit.> transformer architecture, which takes text and audio tokens of a reference audio as input and autoregressively predicts the audio tokens of the target audio from the decoder. To improve robustness of the TTS model, we propose a technique to guide the cross-attention head of the T5 model using a static attention prior and alignment loss that encourages monotonic attention over the text input. Our experiments demonstrate that the proposed technique significantly improves intelligibility of the synthesized audio especially for challenging text inputs. The key contributions of our work are as follows: * We propose an encoder-decoder transformer model for TTS synthesis. To the best of our knowledge, this is the first attempt at synthesizing multi-codebook neural audio codecs with an encoder-decoder architecture. * We develop an alignment learning technique to guide the cross-attention heads in our TTS model to learn monotonic alignment. Incorporating our proposed technique reduces Character Error Rate (CER) of synthesized speech from 9.03% to 3.92% on challenging texts. * We compare audio codec models based on Residual Vector Quantization and Finite Scalar Quantization (FSQ). FSQ codecs not only improve audio quality but also simplify the data representation by allowing parallel codebook prediction. § RELATED WORK AudioLM <cit.> pioneered the task of training a decoder-only LLM on discretized audio tokens from a neural codec model, for high-quality speech synthesis. Following this, several solutions utilizing decoder-only transformer architectures have been proposed such as VALL-E, UniAudio, Bark, SpeechX <cit.>. They frame audio generation as an autoregressive language modeling task using multiple discrete codebooks. Alternatively, SpeechT5 <cit.> proposes an encoder-decoder architecture for sequence to sequence translation using a unified discrete representation of text and speech. However, SpeechT5 similar to other synthesis models based on SSL representations <cit.>, does not utilize multi-codebook audio representations. In the aforementioned transformer-based TTS models, the alignment between audio and phoneme sequences is entirely learned implicitly through the attention mechanisms in the transformer. This introduces potential instability in the form of hallucinations, since the alignment is not constrained to capture the monotonic dependencies of audio and text tokens <cit.>. 
Prior research <cit.> on non-LLM spectrogram generation models has proposed solutions to learn stricter alignment between text and speech tokens by constraining the encoder-decoder attention layers in CNN-based TTS models and LSTM-based models such as Tacotron <cit.> and Flowtron <cit.>. While these techniques show promising results, they cannot be directly applied to transformer-based models, which contain multiple cross-attention layers and multiple heads per layer, and generate discrete codes as opposed to continuous spectrograms. § METHODOLOGY Our TTS model is an encoder-decoder LLM that is trained to predict acoustic codes of the target speech given the tokenized text input and acoustic codes of a reference audio from the target speaker. In this section, we first describe the tokenized representations used for text and speech. Next, we describe our model architecture and prompting setup for TTS. Finally, we propose a training procedure to learn robust text and speech alignment in the LLM. §.§ Tokenization Speech: We utilize neural audio codec models to convert the raw speech signal into a tokenized representation. Given an audio signal y=y_1 … y_t, a neural audio codec model outputs C_T × N=CodecModel(y). C_T × N is a two-dimensional acoustic matrix containing m-bit discrete codes, where T is the downsampled length and N is the number of codebooks per timestep. We consider three acoustic codec models: Encodec <cit.>, Dac <cit.> and an unpublished Finite Scalar Quantization (FSQ) <cit.> based spectral codec model <cit.>. Both Encodec and Dac use Residual Vector Quantization (RVQ) <cit.>. Due to RVQ's hierarchical architecture, we follow MusicGen <cit.>'s delay pattern scheme for modelling the codebook dependencies in RVQ. In contrast, spectral codec <cit.> has N independent codebooks. This allows us to predict the N codebooks in parallel at each timestep without using additional models or a delay pattern. To the best of our knowledge, we propose the first LLM that can predict all N codebooks in parallel and achieve high-quality speech synthesis. Text: For text, we use two tokenization schemes: sentence-piece <cit.> and phonemes. Sentence-piece tokens allow us to leverage pretrained text LLMs. To allow phoneme tokens as input, we expand the vocabulary and embedding space of the pretrained text-LLM to include additional tokens for phonemes. We train a single model to perform both phoneme-to-speech and sentence-piece-to-speech synthesis by prepending the text with the task prompt “Phoneme TTS” or “Text to Speech” respectively. §.§ Model Overview Our model is based on the T5 architecture <cit.>, with additional embedding layers and prediction heads to adapt it for the TTS task. T5 is an encoder-decoder model, where the encoder is a non-autoregressive bi-directional transformer and the decoder is an autoregressive transformer. Both the encoder and decoder networks contain N_l transformer layers. Each layer within the encoder is composed of a self-attention module and a fully connected feed-forward network. In the decoder network, each layer adds an additional cross-attention module which performs multi-headed attention over the encoder's output. To perform multi-speaker TTS, the model takes as input the tokenized text (question) and the acoustic tokens of a reference audio from the target speaker (context), and outputs the acoustic tokens of the target audio (answer).
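As a concrete illustration of this input-output layout and of the delay-pattern scheme described above, a small sketch is given below; the padding and start-of-answer ids, array shapes, and function names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

PAD_ID = 1024  # assumption: an id outside the 2**10-entry codebooks marks padding
BOS_ID = 1025  # assumption: a start-of-answer marker for the decoder


def apply_delay_pattern(codes: np.ndarray) -> np.ndarray:
    """MusicGen-style delay pattern for RVQ codes of shape [T, N].

    Codebook i is shifted i steps later, so at each decoding step the finer
    codebooks of a frame are predicted after its coarser ones. FSQ-based
    spectral codecs skip this step because their codebooks are independent.
    """
    T, N = codes.shape
    out = np.full((T + N - 1, N), PAD_ID, dtype=codes.dtype)
    for i in range(N):
        out[i:i + T, i] = codes[:, i]
    return out


def build_example(question_ids, context_codes, answer_codes,
                  context_in_encoder: bool, use_delay: bool) -> dict:
    """Lay out one training example: question (text), context and answer (codec codes)."""
    if use_delay:  # RVQ codecs (Encodec / Dac); spectral codec uses the codes as-is
        context_codes = apply_delay_pattern(context_codes)
        answer_codes = apply_delay_pattern(answer_codes)
    bos = np.full((1, answer_codes.shape[1]), BOS_ID, dtype=answer_codes.dtype)
    if context_in_encoder:
        encoder_input = {"text": question_ids, "audio": context_codes}
        decoder_input = np.concatenate([bos, answer_codes[:-1]], axis=0)
    else:  # context fed to the decoder before the answer
        encoder_input = {"text": question_ids, "audio": None}
        decoder_input = np.concatenate([context_codes, bos, answer_codes[:-1]], axis=0)
    # The next-token loss is computed only on the answer positions.
    return {"encoder_input": encoder_input, "decoder_input": decoder_input, "target": answer_codes}
```

The context_in_encoder flag corresponds to the two design options discussed next.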
We consider two design options in our experiments: feeding the context audio tokens to the encoder network with the question, or to the decoder network before the answer. We discuss the trade-offs between these two designs in Section <ref>. Note that the context and answer are represented by N codebooks per timestep. To embed such tokens, we maintain N embedding tables (Emb_i) of size (2^m × embDim), where m is the bit-width of the acoustic codes (m=10 for all acoustic tokens in this work). At each timestep, the acoustic embedding is derived by referencing and summing the embeddings from each of the N codebooks. Therefore, the acoustic embedding at a timestep t is computed as: e_t = ∑_i=1^N Emb_i(C[t,i]) Similarly, at the decoder output, predictions for each of the N codebooks are computed using a separate linear layer of size h × 2^m, where h is the hidden size of the transformer network. Therefore, for all timesteps and codebooks, we compute logits y of size T × N × 2^m. Finally, we calculate the cross entropy loss for next-token prediction as: L = CE(SoftMax(y), answer) Note that unlike past work <cit.>, our model does not use additional networks for handling multiple codebook predictions. Instead, we employ the delay pattern for representing RVQ tokens <cit.> to model codebook dependencies. §.§ Alignment Learning When the T5 model is trained for the TTS task using only the next token prediction loss, we observe that the attention-score matrix A_T× M in certain cross-attention heads exhibits the learned text and speech alignment (where T is the number of decoder timesteps and M is the number of encoder timesteps). That is, if we slice the attention-score matrix to include only the question time-steps, we observe higher attention-scores near the diagonal, indicating the desirable monotonic alignment (Figure <ref>). However, attention errors in this implicitly learned alignment can cause missing or repeating words during inference, leading to hallucinations and inaccurate generations for challenging texts. Moreover, the alignment learning using only the next token prediction loss is often unstable, and it can take several training iterations to learn a reasonable text and speech alignment, especially when training utterances are longer <cit.>. We extend prior work <cit.> and propose an alignment learning framework to guide multiple cross-attention heads of the T5 transformer model to learn robust alignment. §.§.§ Attention Prior To accelerate alignment learning, during initial training we multiply the attention-score matrices in the cross-attention heads with a static 2D beta-binomial prior. The 2D beta-binomial prior is a near-diagonal heuristic matrix that is wider near the center and narrower near the corners. Multiplying the initially random attention matrices with such a prior reduces the attention scores that are far off the diagonal, providing a desirable monotonic initialization to the cross-attention scores. Consider the attention-score matrix A_T× M^l,h between the decoder and encoder timesteps of the h^th cross-attention head in decoder layer l. We generate a static 2D prior using the 2D beta-binomial distribution between the answer and question timesteps, P_T' × M', where T' is the number of time frames in the answer tokens and M' is the number of question (text) timesteps.
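A minimal sketch of one way to construct such a prior, anneal it, and compute the CTC-based alignment loss introduced in the next subsection is shown below; the scaling factor, blank handling, and function names are illustrative assumptions rather than the exact training code.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.stats import betabinom


def beta_binomial_prior(T_ans: int, M_txt: int, omega: float = 1.0) -> np.ndarray:
    """Near-diagonal 2D beta-binomial prior P of shape [T_ans, M_txt]."""
    prior = np.zeros((T_ans, M_txt))
    positions = np.arange(M_txt)
    for t in range(1, T_ans + 1):
        a, b = omega * t, omega * (T_ans + 1 - t)
        prior[t - 1] = betabinom(M_txt - 1, a, b).pmf(positions)
    return prior


def annealed_prior(prior: np.ndarray, step: int, s1: int = 8000, s2: int = 15000) -> np.ndarray:
    """Full prior before s1, linear blend toward an all-ones matrix until s2, off afterwards."""
    ones = np.ones_like(prior)
    if step <= s1:
        return prior
    if step >= s2:
        return ones
    return ((s2 - step) * prior + (step - s1) * ones) / (s2 - s1)


def ctc_alignment_loss(attn_logprobs: torch.Tensor) -> torch.Tensor:
    """CTC over the soft alignment of one head.

    attn_logprobs is [T', M'], i.e. log-softmax over text positions for each
    answer timestep, with T' >= M' so a monotonic reduction exists.
    """
    T_ans, M_txt = attn_logprobs.shape
    logp = F.pad(attn_logprobs, (1, 0), value=-1e4)    # prepend a near-zero-probability blank class
    logp = F.log_softmax(logp, dim=-1).unsqueeze(1)    # [T', 1, M'+1]
    targets = torch.arange(1, M_txt + 1).unsqueeze(0)  # the monotonic target sequence 1..M'
    return F.ctc_loss(logp, targets,
                      input_lengths=torch.tensor([T_ans]),
                      target_lengths=torch.tensor([M_txt]),
                      blank=0, zero_infinity=True)
```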
Given this prior, we obtain the re-scaled attention scores as: A_T× M^l,h[a_s:a_e, q_s:q_e] ← A_T× M^l,h[a_s:a_e, q_s:q_e] ⊙ P_T' × M' where q_s and q_e indicate the start and end of the question timesteps (M'=q_e - q_s) and a_s and a_e indicate the start and end of the answer timesteps (T'=a_e - a_s). While q_e=M and a_e=T, the start timesteps (q_s, a_s) for slicing depend on whether we pass the context to the encoder or decoder. When passing the context as input to the encoder, q_s=context length and a_s=0; when passing the context to the decoder, q_s=0 and a_s=context length. We apply the prior to all cross-attention heads of each decoder layer. Since the target audio length, which is needed to compute the prior, is not known during inference, we cannot use the prior at inference time. Therefore, we apply the attention prior for the first S_1 training iterations. Then we linearly anneal the prior to an all-ones matrix J_T' × M' from training step S_1 to S_2, and turn off the prior after step S_2. That is, for a training step S, where S_1 ≤ S ≤ S_2, the prior matrix is obtained as: P_T' × M'^S = ( (S_2 - S) · P_T' × M' + (S-S_1) · J_T' × M' ) /(S_2 - S_1) This annealing procedure is necessary to ensure stability during training. Turning off the prior without annealing causes the loss curve to spike, since the decoder expects re-scaled attention scores for making valid predictions. In our experiments, we set S_1=8000 and S_2=15000. §.§.§ Alignment Loss The soft alignment matrix between the text and audio timesteps can be obtained by taking the softmax of the sliced attention-score matrix over the text dimension: A^soft_l,h_T'× M' = Softmax( A_T× M^l,h[a_s:a_e, q_s:q_e] ) The i^th row of this matrix, A^soft_l,h_T'× M'[i,:], represents the attention probability distribution over all text timesteps for the given answer timestep i. If we sample a prediction from such a distribution at each answer timestep, it is desirable that the resulting sequence of text timesteps is monotonic. Since the answer is typically longer than the text, there can be multiple valid monotonic reductions of the alignment matrix. To encourage valid monotonic sampling from the alignment matrix, we calculate the likelihood of all possible monotonic reductions using the Connectionist Temporal Classification (CTC) algorithm. That is, given the alignment matrix A^soft_l,h_T'× M', we obtain the alignment loss for a decoder layer and head as: L_align^l,h = CTCLoss(A^soft_l,h_T'× M', q_M') where q_M'={1, 2, … M'} is the target monotonic sequence from 1 to M'. We compute the total alignment loss for a set of cross-attention heads and layers ℙ over which we wish to enforce monotonic alignment. That is, L_align = ∑_l,h ∈ℙ L_align^l,h For set ℙ we consider i) all cross-attention heads or ii) 2 heads in each decoder layer. Observing no significant difference in intelligibility and robustness, for simplicity we apply L_align to all cross-attention heads in each layer in our experiments. § EXPERIMENTS §.§ Datasets and Models We train our T5-TTS models on a data blend containing 1.8k hours of English TTS data from four datasets: the train-clean-360 subset of LibriTTS <cit.>, HiFiTTS <cit.>, a 1000-hour subset of the LibriVox MLS dataset <cit.>, and a proprietary, 2-speaker, 63-hour dataset. The encoder and decoder transformer networks of our TTS model have 12 layers and 12 attention heads each, an embedding dimension of 768, a feed-forward layer dimension of 4096, and a dropout of 0.1.
This results in a total of 220 million parameters excluding the embedding layers. We initialize our model weights with a pre-trained T5 checkpoint trained on Pile <cit.>. To adapt the pre-trained text checkpoint for TTS, we make three modifications to the pretrained model: i) Expand the text vocabulary and corresponding embedding layers (initialized randomly) to include phoneme tokens. ii) Add randomly initialized embedding layers for each of the N codebooks of the speech tokens. iii) Expand position embeddings to a maximum length of 1536 which allows for an audio generation length of 20.5 seconds using Encodec and 17.8 seconds using Dac and spectral codec. We use a fixed context duration of 3 seconds, where context is an alternate utterance from the speaker of the target utterance. We train each of our models with a batch size of 192 distributed across 32 NVIDIA A100 GPUs, for 250,000 steps optimized with a fixed learning rate of 1e-4 using AdamW optimizer. During inference, we use multinomial Top-k sampling with k=80 and temperature=0.85. §.§ Results Alignment Learning: To assess the efficacy of our alignment learning method (Section <ref>), we train three variants of our T5 TTS model using the spectral codec: 1) T5-TTS (No Prior, No L_align ): trained without alignment learning method. 2) T5-TTS (W Prior, No L_align ): trained with attention prior but not L_align and 3) T5 TTS (W Prior, W L_align ): trained with attention prior and L_align applied to all cross-attention heads. In our initial experiments, the attention prior is crucial for training with L_align. Without the prior and with L_align, we obtain monotonic but unaligned attention maps. leading to no speech synthesis. We evaluate the models on a set of seen and unseen speakers. For seen speakers, we use 200 holdout utterances of the train-clean-360 set. For unseen speakers, we consider 200 utterances from the VCTK <cit.> speakers: 20 random speakers with 10 utterances per speaker. For each utterance, we synthesize two audios using either the sentence piece text tokenizer or the phoneme tokenizer. We evaluate the synthesized speech on intelligibility and speaker similarity. For intelligibility, we transcribe the synthesized audio through a Conformer-Transducer ASR model [66<https://hf.co/nvidia/stt_en_conformer_transducer_large>] and compute the CER and WER between the ASR transcript and the ground-truth text. For speaker similarity (SSIM), we compute the cosine similarity between the embeddings of the synthesized speech and target ground-truth audio obtained from WavLM <cit.> speaker verification model [66<https://hf.co/microsoft/wavlm-base-plus-sv>]. We report the results in Table <ref>. While all three models achieve high speaker similarity for seen speakers, the intelligibility metrics improve as we incorporate attention prior and alignment loss during training. For unseen speakers, we observe a higher speaker similarity and intelligibility when the context is fed to the T5 decoder instead of the encoder. Challenging Texts and Comparison against Prior Work: As shown in Table <ref>, the improvement in intelligibility is even more significant when we consider challenging text inputs with repeating words. We compare our models (using decoder context) with three open source LLM-based TTS models using the inference code and released checkpoints <cit.>. For this evaluation we consider a set of 100 challenging texts and choose two seen speakers (male and female) from the voice presets of each model. 
As observed, our best model outperforms the baseline models and prior LLM-based TTS models. Additionally, we synthesize audio on 100 texts from Harvard Sentences <cit.> and conduct a Mean Opinion Score (MOS) evaluation on Amazon Mechanical Turk (Table <ref>). For MOS evaluation, each listener is presented with one audio sample and asked to rate the audio on a scale of 1 to 5 with 1 point intervals. Each audio is rated by at least 10 independent listeners. For 200 audios per model, this results in a total of 2000 evaluations per model. MOS with 95% confidence intervals indicates our model outperforms prior LLM-based TTS models considered in our study. We encourage readers to listen to audio examples linked in the footnote of the first page. Codec Choice: We train three T5-TTS models with alignment learning on the three audio codecs and report results on seen speakers in Table <ref>. We find that both spectral codec and Dac significantly outperform Encodec in terms of audio naturalness. Spectral codec streamlines training by independently predicting codebooks in parallel, unlike the delay pattern scheme needed for Encodec/Dac. Additionally, spectral codec enhances synthesized speech intelligibility, demonstrated by reduced CER/WER. § CONCLUSION We present a T5-TTS model that can learn robust text and speech alignment without modifying the model architecture or requiring ground-truth text duration. We identify that attention heads in LLM-based TTS models implicitly learn text and speech alignment and can be guided to monotonically attend over the text input. Our experiments demonstrate that our alignment learning procedure improves the reliability of TTS synthesis, especially for challenging text inputs and outperforms prior LLM-based TTS models on both intelligibility and naturalness. § ACKNOWLEDGEMENTS We would also like to thank Ryan Langman for developing the spectral codec model that was used in our TTS model. IEEEtran
http://arxiv.org/abs/2406.18513v1
20240626173503
Superconductivity from domain wall fluctuations in sliding ferroelectrics
[ "Gaurav Chaudhary", "Ivar Martin" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
gc674@cam.ac.uk TCM Group, Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom ivar@anl.gov Materials Science Division, Argonne National Laboratory, Lemont, IL 60439, USA § ABSTRACT Bilayers of two-dimensional van der Waals materials that lack an inversion centre can show a novel form of ferroelectricity, where certain stacking arrangements of the two layers lead to an interlayer polarization. Under an external out-of-plane electric field, a relative sliding between the two layers can occur accompanied by an inter-layer charge transfer and a ferroelectric switching. We show that the domain walls that mediate ferroelectric switching are a locus of strong attractive interactions between electrons. The attraction is mediated by the ferroelectric domain wall fluctuations, effectively driven by the soft interlayer shear phonon. We comment on the possible relevance of this attraction mechanism to the recent observation of an interplay between sliding ferroelectricity and superconductivity in bilayer T_d-MoTe_2. We also discuss the possible role of this mechanism in the superconductivity of moiré bilayers. Superconductivity from domain wall fluctuations in sliding ferroelectrics Ivar Martin July 1, 2024 ========================================================================= Introduction- Recently ferroelectricity has been discovered in several stacking-engineered bilayers and twisted bilayers of van der Waals materials<cit.>. In bilayers of non-elemental materials that lack an inversion symmetry at the monolayer level, certain bilayer stacking arrangements lead to interlayer charge transfer. Even when the layers themselves are metallic, such charge imbalance can be interpreted as ferroelectricity. For example, AB stacked bilayers where A and B are different elements may lead to an interlayer charge transfer because of the different electron affinity of A and B elements. Remarkably, the polarization direction can be switched via the application of a transverse electric field, which reverses the stacking order to BA; this is the mechanism of the sliding ferroelectricity. Similarly, moiré engineering of these materials via small relative twist or strain leads to moiré polar domains <cit.>. Apart from bilayers of non-elemental materials, ferroelectricity was also discovered in graphene bilayers, possibly due to strong electron-electron interactions <cit.>. A unique feature of these two-dimensional ferroelectrics is that they are compatible with the in-plane metallicity and superconductivity. In bulk metals or superconductors, a long-range ferroelectric order cannot develop because any such order is screened by mobile charge carriers. However, in the two-dimensional ferroelectrics, interlayer polarization can survive the presence of in-plane conduction. Indeed, a ferroelectric metal was discovered in bilayer orthorhombic T_d-WTe_2 <cit.>. Even more surprisingly an intriguing electric-field switching of superconductivity was recently observed in T_d-MoTe_2 <cit.>. Under an applied external out-of-plane electric field, as the bilayers approach the polarization reversal, superconducting T_c is strongly enhanced, followed by a rapid drop as the system becomes fully polarized again. This suggests that superconductivity is enhanced on the domain walls that mediate ferroelectric switching. Such electric field (gating) control of superconductivity can have groundbreaking applications in superconducting devices. 
Superconductivity typically emerges due to an attractive pairing interaction between electrons, mediated by an exchange of a soft bosonic mode. In the conventional BCS theory of superconductivity, these intermediary bosonic modes are phonons <cit.>. In high-temperature cuprate superconductors, it is believed that soft antiferromagnetic fluctuations are the relevant bosonic modes <cit.>. Here, we show that in the metallic sliding ferroelectrics, fluctuations of domain walls separating domains of the opposite polar orders naturally lead to strong effective attractive electron-electron interactions, which favor intra-layer pairing. If the domain wall fluctuations are associated with structural fluctuations, i.e., phonons, we can recast this mechanism in terms of an effective electron-phonon coupling. The nature of this coupling is transverse piezoelectric, with the induced transverse polarization in the center of the domain wall being proportional to the shear strain between the layers. Dynamically, such strain fluctuations can be mediated by the interlayer shear phonons (deriving from the transverse bulk phonons with the propagation direction normal to the layers). That is in contrast to the conventional BCS mechanism, which relies on the attraction mediated by the longitudinal optical phonons <cit.>. Near the polarization switching transition, the local superconductivity at the domain walls can percolate through the entire system, leading to vanishing resistivity, possibly explaining the experimental observations in Ref. <cit.>. Hamiltonian and effective attractive interactions- To illustrate the mechanism in the simplest possible setting, we start with the two diatomic chains as shown in Fig. <ref> and described by the Hamiltonian H = ∫ dx [ψ̂^†(x) { H_e (x, ∂_x ) + H_e-p (x) }ψ̂(x) + H_p(x)]. Here H_e accounts for the electron kinetic energy contributions including the interlayer hopping, and H_p (x) = P^2(x)/(2ϵ) is the electrostatic energy for polarization P(x) and permittivity ϵ. Electrons moving in the polarized background experience the potential energy H_e-p(x) = D(x) τ_3, where D(x) = edP(x)/ϵ and d is the interlayer distance. The electron operator ψ̂(x) has spin, layer, and sublattice pseudospin structure, and the τ Pauli matrices act in the layer basis. Insulating systems can also develop in-plane (along the chains) polarization components <cit.>. In a two-dimensional metal, static in-plane polarization vanishes due to screening by the in-plane itinerant charges. Therefore, the metallic systems of interest only have interlayer polarization. In the following discussion, we focus on the electron-polarization interaction term. Since the polarization changes as the system is deformed from its equilibrium configuration, we consider a real polarization that has both an explicit spatial dependence and an implicit spatial dependence via a deformation field u(x) that can drive fluctuations in the polarization such that P(x, u(x)) = P(x, 0) + ∂ P(x, u)/∂ u|_u = 0 u(x) + O(u^2), where P(x, 0) is the static polarization before the deformation. We decompose the Hamiltonian in Eq. <ref> as H = ∫ d x [ ψ̂^†(x) {H_0(x,∂_x) + H_1(x) }ψ̂(x) + H_p(x)], where H_0 = H_e + H_e-p takes into account the electron Hamiltonian and the coupling of the static polarization to the electrons. The dynamic deformation enters the Hamiltonian through H_1 = H_u + H_e-u, where H_u is the Hamiltonian associated with the dynamic deformation itself and H_e-u accounts for its coupling to electrons. As shown in Fig.
<ref> (b), far on either side of the domain wall, where polarization is saturated, the first-order spatial derivative of polarization vanishes such that a local deformation will not induce a change in the polarization to the first order in u. At the domain wall [centred at the origin in Fig. <ref> (b)], in contrast, a deformation u(0) leads to a rapid change in polarization. Therefore, we can expect that the strongest coupling between the dynamical deformation and electrons will be achieved in the spatial region where the interlayer polarization switches its direction [See Fig. <ref> (b)], or in moiré bilayers, at the domain walls separating regularly arranged regions with opposite polarizations [See Fig. <ref> (c)]. To consider fluctuations in the deformation field, we take the harmonic approximation and follow the standard procedure to quantize the fluctuations by promoting them to bosonic operators â and using the substitution û = √(ħω/(8 κ ))(â + â^†), where κ is an effective spring constant of the restoring force and ω is the fundamental frequency. Here, we have assumed that the â-boson is a local vibration mode at the domain wall; for the moment, we assume that the modes at different domain walls are independent. The electron-boson Hamiltonian becomes H_e-u = ∫ dx g(x) (â + â^†) ψ̂^†(x) τ_3 ψ̂(x), where g(x) = ed √(ħω/ (8 ϵ^2 κ ))∂ P(x, u)/∂ u. We integrate out the â-boson and obtain an effective electron-electron interaction H_e-e = -∑_s,s'∫ dx dx' g(x) g(x') 2ħω/ħ^2ω^2- E^2 ×τ_3,ssτ_3,s's'ψ̂^†_s(x)ψ̂^†_s'(x')ψ̂_s'(x')ψ̂_s(x), where E is the electron energy. Clearly, attractive interactions are generated in the regime E < ħω for the intra-layer Cooper pair channel. We also note that the interlayer interactions are repulsive with the same strength. This is a distinctive feature of an interaction mediated by the interlayer polarization fluctuations. If the electronic states near the domain walls are approximately the layer eigenstates, we can restrict to the intra-layer attractive channel, which is fully decoupled from the interlayer repulsive channel. Phonon-induced domain wall fluctuation- To determine whether these attractive interactions can lead to substantial superconductivity in a real system, we require additional input about the microscopic origin of the domain wall fluctuations. We now consider a phonon-based origin of the fluctuations. Consider a displacement field u and decompose it over in-phase and out-of-phase interlayer displacements, such that u_in = (u_t + u_b)/2 and u_out = (u_t-u_b)/2. Assuming these displacements are independent and acting locally near a domain wall, the resultant displacements of the center of the domain wall are, respectively, u_dw = u_in and u_dw = u_out w/a, where w is the domain wall width and a is the microscopic lattice constant (this follows from the observation that a relative layer shift by a lattice constant leads to a domain wall center shifting by the distance ∼ w). As a result, the out-of-phase motion of atoms between the layers has a much larger effect on the domain wall position than the in-phase one, leading to significantly stronger coupling between electrons and the fluctuations of u_out compared to u_in [See Fig. <ref>]. Moreover, the sliding motion mechanism of the ferroelectric transition is indeed driven by the shear phonon. Therefore, we now consider the interlayer shear phonon, which is associated with q=0 out-of-phase motion of two layers, u_ph = (u_t - u_b)/2 [Fig. <ref> (a)].
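To make the locality of this coupling concrete, the sketch below evaluates the coupling profile ∝ ∂P/∂u for an assumed tanh-shaped wall, P_s(x) = P_0 tanh(x/w); the profile shape and parameter values are illustrative assumptions rather than inputs of the paper, but the w/a amplification of the shear-mode coupling follows the estimate above.

```python
# Numerical sketch: domain-wall-localized coupling g(x) ~ |dP/du| (illustrative).
# Assumes a tanh wall profile P_s(x) = P0*tanh(x/w), which is an assumed shape.
import numpy as np

P0, w, a = 1.0, 10.0, 1.0          # saturation polarization, wall width, lattice constant
x = np.linspace(-50.0, 50.0, 2001)
P_s = P0 * np.tanh(x / w)

# A rigid shift u_dw of the wall changes P(x) by -dP_s/dx * u_dw, so |dP/du_dw| = |dP_s/dx|.
dP_dx = np.abs(np.gradient(P_s, x))

# For the interlayer shear mode, u_dw = (w/a) * u_ph, amplifying the coupling by w/a.
g_shear = (w / a) * dP_dx

print("coupling at the wall center:", g_shear[len(x) // 2])   # ~ P0/a
print("coupling far from the wall:", g_shear[0])              # ~ 0
```

In this toy profile the coupling peaks at the wall center at a value of order P_0/a, independent of the wall width, and vanishes in the saturated regions, consistent (up to an O(1) factor) with the universal estimate 2P_0/a discussed later.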
From the preceding discussion, the domain wall displacement width u_dw = w u_ph/a, leads to a stronger coupling of the shear phonon to electronic states in the domain wall region. Therefore, in what follows, we will focus on the interlayer shear mode for domain wall fluctuation-induced superconductivity. Finally, we note that the domain wall fluctuations need not be caused by phonons. For example, polarization order parameter fluctuations can occur due to order parameter dynamics in the standard Ginzburg-Landau theory. In particular, the interlayer ferroelectric order in bilayer graphene is believed to be primarily due to electronic correlations, rather than the hetero-atomic structure described above, and therefore need not involve a sliding motion during polarization switching <cit.>. Nevertheless, at the polarization switching transition, the domain wall fluctuations can still lead to local attractive interactions of purely electronic origin. Infinitely-wide domain wall limit. Domain walls, by definition, separate energetically stable regions – in our case, regions with uniform saturated interlayer polarization. They represent saddle-point configurations that would be unstable under conditions of translational invariance. Being unstable, they can offer a unique electronic environment favoring superconductivity. Indeed, as we showed in the previous section, the coupling constant dP/du_out between electrons and the shear phonons is maximized at the center of the domain wall. Yet, the finite width of the domain wall may lead to suppression of superconductivity, particularly when the width becomes smaller than the superconducting coherence length <cit.>. Therefore, the estimate of the upper bound on the pairing interaction and thus T_c can be obtained assuming that the domain wall is infinitely wide, making the system translationally invariant. This limit allows us to do a direct comparison between the standard polar LO mechanism <cit.> and the shear phonon mechanisms. In this uniform limit, we can recast the effective electron-electron interaction resulting from the shear phonon coupling in the more familiar-looking form H_e-e = -∑_η,η'= ±∫ dk dq g^2(q)/ħω(q)ϕ̂^†_-η (k-q)ϕ̂^†_-η' (-k+q) ×ϕ̂_η'(-k)ϕ̂_η(k). Here, ϕ̂_̂±̂ = (ψ_s ±ψ_s̅)/√(2) are the electron operators for inter-layer bonding/anti-bonding eigenstates, which are approximately the band eigenstates in this limit; the coupling constant is g(q) = 2P_0ed √(ħ/(2m ω(q)a^2 ϵ^2)), where P_0 is the saturation value of the interlayer polarization. Notice that in this scenario not only are the interactions uniformly present throughout the system, suggesting phase coherence, but they are also fully attractive at the Fermi level. Notably, the effective coupling scales quadratically with the saturation value of interlayer polarization, which is achieved when the layers are AB (BA) stacked. Estimates of electron-phonon coupling strength. Using the approximation of an infinitely wide domain wall we can make an estimate of the strength of the proposed mechanism. Even though free carriers screen a static polarization, fluctuating polarization is precisely what generates Frölich-like electron-phonon coupling term <cit.>. In the sliding ferroelectrics, dynamical polarization can be directed either in-plane or, when induced by the interlayer charge transfer between layers, out-of-plane. The ratio of these polarizations, assuming that they are driven by very similar phonon modes, can be estimated as P_z/P_x∼ d/a. 
Generally, in the van der Waals bilayers d> a. For example, in WTe_2, the c = 15.4Å, compared to a= 3.5Å and b=6.34Å. Therefore, the ratio of the corresponding electron-electron attraction strengths can be estimated as ∼ (d/a)^2. It may in fact be even larger since the in-plane polarization fluctuations are dynamically screened by the itinerant electrons. In addition to modifying interlayer polarization, the shear phonons also affect hybridization between the vdW layers. It is instructive to compare the relative strengths of these two types of electron-phonon coupling. Changing the stacking of layers can change interlayer hybridization energy by approximately 100 meV for the typical vdW systems. This is therefore approximately the difference in energy per electron located within a stable stacking domain or at the domain wall, which corresponds to a relative layer displacement by approximately a lattice constant. To compare with the corresponding electrostatic energies in the polar metallic phase, we can take the example of in bilayer T_d-WTe_2, where experiments and first-principles calculations estimate P∼ P_0 × 10^-4 C · m^-2, where P_0 ∼ 2-6 <cit.>. Simple estimates suggest that it can lead to a typical coupling energy per electron is V ∼ 100 c meV. Here c=(P_0 d/ϵ_r)^2 is an O(1) proportionality constant for d measured in nm and ϵ_r is the dielectric constant. Therefore we find that in the domain wall region, these two coupling mechanisms to the shear phonon have comparable strengths. In the systems that develop interlayer polarization, the mere fact of the presence of polarization implies that the polarization energy dominates the hybridization (since hybridization favors equal charge distribution between layers). In real systems, both effects can work in conjunction to provide an increased electron-to-shear phonon coupling at the stacking domain walls in vdW bilayers. We note in passing, that in non-polar vdW materials, such as multilayer graphene-based superconductors, the just-described modulation of the interlayer hybridization in the domain wall regions by the shear phonon can be a viable mechanism of local pairing. In addition to the strength of the electron-phonon coupling discussed above, superconducting transition temperature depends on several other ingredients, which we discuss now. In the weak-to-intermediate coupling superconductors, the critical temperature follows T_c ∼ħω_D exp -(1+λ)/(λ-μ^*), where ω_D is the typical frequency of the most pairing-relevant phonon, λ is a dimensionless coupling constant, and μ^* is the Coulomb pseudopotential <cit.>. Because the T_c is very sensitive to the coupling constants, we only discuss here the distinctive qualitative features of the domain wall mechanism. If we neglect the phonon dispersion, the coupling constant can be decomposed as λ = 4𝒩_F V/(mω_D^2)[The q=0 frequency of the shear phonon is determined by the weak interlayer spring constant. However, its dispersion is predominantly governed by the stiff intralayer spring constant. Therefore, these phonons are highly dispersive and neglecting their dispersion is not a very good approximation.], where 𝒩_F is the electron density of state at the Fermi level and V  ∼ 4 (edP_0/a ϵ)^2 quantifies the polarization energy change across the domain wall, m is the unit cell mass, and ω is q=0 phonon frequency. Since the interlayer stacking arrangement varies spatially, the phonon frequency is influenced by a spatially varying interlayer spring constant <cit.>. 
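As an illustration of how sensitively T_c ∼ ħω_D exp[−(1+λ)/(λ−μ*)] depends on the coupling constant (and hence on any softening of the relevant phonon), the toy evaluation below plugs in assumed placeholder values of ħω_D, μ*, and λ; these numbers are not material-specific estimates for WTe_2 or MoTe_2.

```python
# Toy evaluation of T_c ~ (hbar*omega_D / k_B) * exp(-(1 + lam) / (lam - mu_star)).
# All parameter values are assumed placeholders, not estimates for a specific material.
import numpy as np

hbar_omega_D_meV = 5.0      # assumed phonon energy scale (meV)
mu_star = 0.1               # assumed Coulomb pseudopotential
k_B_meV_per_K = 0.0862      # Boltzmann constant (meV/K)

for lam in (0.3, 0.5, 0.8):
    Tc = (hbar_omega_D_meV / k_B_meV_per_K) * np.exp(-(1.0 + lam) / (lam - mu_star))
    print(f"lambda = {lam:.1f}  ->  T_c ~ {Tc:.2f} K")
```

Even modest increases of λ change T_c by orders of magnitude, which is why a local softening of the shear mode at the domain walls can matter so much.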
The weaker van der Waals forces at the AA-stacked region (domain walls), are expected to soften the shear phonon frequency compared to the uniform AB stacked bilayers, further increasing the effective coupling constant λ. Discussion- Now we return to the possible role of this mechanism in the recently discovered ferroelectric control of superconductivity in bilayer T_d-MoTe_2. Bulk T_d-MoTe_2 is a superconductor but with a very modest T_c≈ 120 mK. T_c increases as the number of layers is reduced, reaching T_c≈ 2 K in the bilayers and 7 K in the monolayer <cit.>. Notably, in the bilayer, T_c is observed to increase sharply at the switching transition of the polarization direction <cit.>. We now discuss the latter property in light of our mechanism. When the bilayers are fully polarized in either AB or BA stacking configuration, the shear phonon does not affect polarization to linear order in phonon displacement. (Similarly, the lowest-energy stacking maximizes the interlayer hybridization, eliminating the linear coupling between the shear phonon and hybridization). Therefore, it cannot provide significant attractive interaction for superconductivity, leaving only such mechanisms as the standard LO phonons operational. Under an applied electric field opposite to the polarization direction, the bilayers start to switch their polarization direction. This hysteretic process happens by forming small domains of opposite polarization. At the domain walls of these newly formed domains, local pairing becomes enhanced due to the attractive interactions from the domain wall fluctuations. However, for weak electric fields, such opposite polarization domains are small and far apart [Fig. <ref> (a)]. Therefore, the ferroelectric phase does not lead to global superconductivity (or enhancement of T_c) in transport properties (one could obtain, however, a local enhancement of pairing at the domain walls, possibly detectable with local probes). As the electric field is increased, the minority domains grow in size and as the system approaches the ferroelectric reversal, the domain walls form a percolating network creating a path for the supercurrent to flow through the system [Fig. <ref> (b)]. This leads to the appearance of superconductivity (or T_c enhancement) in transport properties in this region, as seen in the experiment. Finally, as the polarization completely switches to the opposite direction, the superconductivity is again turned off (or T_c is suppressed) with the disappearance of domain walls. The superconductivity exists in Td-MoTe_2 even away from the ferroelectric switching region. For example, bilayers have T_c ≈ 2.5 K at neutrality. There need not be a direct relation of this bulk superconductivity to our mechanism. However, it also does not stand in contradiction. It is possible that another coexisting mechanism is responsible for superconductivity in a single-domain few-layer T_d-MoTe_2. In such a case, our mechanism's role is to relatively enhance T_c near the ferroelectric switching. A similar assistive role of nematic fluctuations in enhancing T_c has been previously studied in high-temperature superconductors <cit.>. Our mechanism should equally apply to moiré bilayers of polar vdW metals. For small twist angles, the lattice relaxation effects lead to large domains of AB and BA regions separated by relatively narrow domain walls, similar to the isolated domain walls that mediate the ferroelectric switching in untwisted bilayers. 
However, in the moiré case, the domain walls automatically form a spanning network, without the need to tune to the switching point. The fluctuations of the domain walls would then lead to a spanning superconducting network and global superconductivity. When limiting occupancy to the lowest moiré subbands, increasing the twist angle may benefit superconductivity, for two reasons. First, it forces the electronic wave-functions to enter the domain wall regions (from the AB/BA stacked regions where they are normally peaked), so that they can enjoy stronger electron-phonon interactions. Also, the larger density of domain walls would lead to a larger average superfluid stiffness. We, therefore, anticipate that the peak of T_c in polar moiré systems will be reached at the largest twist angles that can still sustain the formation of polar domains. An observation of superconductivity in twisted bilayers of MoTe_2 can be an indication of the domain wall fluctuation mechanism. We are not aware of any such measurement in MoTe_2. However, the recent observation of superconductivity in twisted bilayers of WSe_2 <cit.> provides a possible realization of the domain wall fluctuation mechanism proposed here. We finally note that since the attractive interactions are enhanced at the domain walls where the polarization vanishes, creating an energetically frustrated, spatially uniform (saddle-point) configuration of AA-stacked bilayers of polar vdW metals should provide an ideal setting for such superconductivity. In this case, the shear phonons become coupled to the whole system via polar fluctuations (equivalent to the infinite domain wall limit described above). While this would be an unstable configuration in stand-alone bilayers, it is conceivable that such an arrangement can be achieved in artificially stacked multilayers. Conclusion- In conclusion, we have shown that domain wall fluctuations in sliding ferroelectrics generate local attractive interactions between electrons and can create local Cooper pairs. If the domain wall fluctuations originate from phonons, we further express this as an electron-phonon coupling, albeit the coupling occurs via a transverse piezoelectric mechanism, where the induced polarization is proportional to the dynamic strain, rather than the much-studied Fröhlich coupling between the electron charge density and the divergence of the lattice polarization (there, the induced polarization is proportional to the LO phonon displacement). Further, by considering an interlayer shear phonon as the driver of the polarization fluctuations, we have argued that this coupling can exceed the standard Fröhlich coupling and is comparable to the phonon-interlayer hybridization coupling. The proposed scenario for strong electron-phonon coupling is a special feature of few-layer TMDs that are either ordered (or on the verge of ordering) sliding ferroelectrics. Finally, we have shown that this mechanism may explain the recently discovered polar switching of superconductivity in T_d-MoTe_2. A full first-principles calculation of the electron-shear phonon coupling in metallic sliding ferroelectrics will be an interesting direction, which we leave for further studies. Acknowledgement- We thank J. Shi and A. H. MacDonald for useful discussions. This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. §.§ General coupling to polarization fluctuations
In the usual polar coupling the induced polarization is parallel to the displacement, P⃗∝u⃗; in the present case the induced polarization is transverse to it, P_z ∝ u_x, i.e., P⃗∝u⃗×ŷ, and the coupling expression written in the main text can be interpreted as a curl coupling. Even though the in-plane charge carriers prohibit an in-plane polarization order, the fluctuations can produce a dynamical in-plane polarization. Here, we consider the effect of fluctuations in the polarization to show that the mechanism in the main text is indeed related to the curl of the polarization. Consider a local energy contribution by an electron moving in a polar background dE(r⃗) = -e P⃗(r⃗)· dr⃗ and the corresponding local energy change when a small deformation is introduced, δ [ dE(r⃗)] = -e u⃗·∇_u⃗ [P⃗(r⃗)· dr⃗ ]. This can be further decomposed as δ [dE(r⃗)] = -e u⃗· [ (dr⃗·∇_u⃗) P⃗(r⃗) + dr⃗× (∇_u⃗×P⃗ (r⃗)) + (P⃗(r⃗) ·∇_u⃗) dr⃗ + P⃗(r⃗) × (∇_u⃗× dr⃗) ]. We can ignore the last two terms on the RHS because they are second order in a small parameter. For the rest, the first part can be interpreted as the conventional coupling of the electron to the polarization, u⃗· [ (dr⃗·∇_u⃗) P⃗(r⃗) ] = ∂ P_x(r⃗)/∂ u_x u_x dx + ∂ P_y(r⃗)/∂ u_y u_y dy + ∂ P_z(r⃗)/∂ u_z u_z dz (for the one-dimensional illustration considered here, cross terms such as ∂ P_x/∂ u_y do not contribute), and the second part is the term considered in this work, u⃗· [ dr⃗× (∇_u⃗×P⃗ (r⃗)) ] = [ ( ∂ P_y/∂ u_x - ∂ P_x/∂ u_y ) dy - ( ∂ P_x/∂ u_z - ∂ P_z/∂ u_x ) dz ] u_x + [ (∂ P_z/∂ u_y - ∂ P_y/∂ u_z )dz - ( ∂ P_y/∂ u_x - ∂ P_x/∂ u_y ) dx ] u_y + [ (∂ P_x/∂ u_z - ∂ P_z/∂ u_x ) dx - ( ∂ P_z/∂ u_y - ∂ P_y/∂ u_z ) dy ] u_z. Considering the in-plane phonon displacements and assuming a one-dimensional system, the curl contribution reduces to the expression in the main text. The main text discusses the interlayer shear phonon as the origin of these fluctuations. This is a longitudinal optical phonon, for which the transverse (curl) contributions to electron-phonon coupling are typically ignored. However, in the present scenario of a sliding ferroelectric, the curl contribution is larger than the longitudinal contribution. Since the origin of the polarization is an interplane charge transfer, P_z/P_a ∼ d/a, and quite generally in the van der Waals bilayer d > a. §.§ Moiré system and pairing symmetry We can extend the single domain wall picture of the main text to the case of periodic moiré modulation. For the one-dimensional scenario discussed here, a moiré superlattice can form by a small lattice mismatch between the two chains. Suppose the top chain is stretched slightly by a small deformation field 𝒰(x) = (1+ η )x, where η≪ 1. This leads to a moiré superlattice with period a_M = a/η, where a is the microscopic lattice constant, and a moiré reciprocal lattice vector G_M = 2π/a_M. Structurally, this lattice mismatch leads to a spatial modulation of the interlayer stacking arrangement that is periodic over the moiré lattice vector.
Inside a moiré unit cell, domains of AB and BA stacking are formed, where the interlayer charge transfer occurs in opposite directions, leading to a polarization profile modulated over the moiré unit cell [See Fig. <ref> (c) in the main text]. We introduce the moiré periodicity by the projection ψ̂(x) = √(a_M/L)∑_n∫ dk ψ̂_n(k) e^i(k+nG_M)x, where the momenta k are restricted to the first moiré Brillouin zone. Next, we assume that in the domain wall region, the static polarization P_s(x) = P(x, u(x)=0) smoothly changes from one saturation value to the opposite saturation value [as shown in Fig. <ref>(c) ]. In this limit, we can substitute ∂ P(x,u(x))/∂ u(x) ≈∂ P_s/∂ x, which is also moiré periodic (here u(x) denotes the domain wall displacement; the w/a amplification discussed in the main text enters once u is expressed through the phonon displacement u_ph). Therefore, in the electron-fluctuation coupling, we substitute g(x) = ed √(ħω/ (8 ϵ^2 κ ))∂ P[x, u(x)]/∂ u(x) = ∑_n g_n exp(inG_M x). After restricting the electron operators to the first moiré Brillouin zone, the electron-fluctuation interactions become H_e-u = ∑_n ∫ dk ∫ dq g_n u(q+nG_M) ψ̂^†_0(k+q) τ_3 ψ̂_0(k), where u(q) is the Fourier representation of the fluctuations, and the g_n's are the Fourier components of the moiré periodic function g[u(x)]. We can further quantize the fluctuations u(q) and integrate them out to obtain an effective electron-electron interaction H_e-e = -∑_n,n'∑_s∫ dk dq g_2n+1 g_2n'+1 2ħω(q) /[ħ^2 ω^2(q) - E^2(k)] ψ̂^†_0,s(k+q)ψ̂^†_0,s(-k-q) ψ̂_0,s(-k) ψ̂_0,s(k), where s is the layer index. Here, we have imposed the reality condition on the polarization, which implies that only the odd Fourier components are non-zero, and imposed ω(q+nG_M) = ω(q). By ignoring the dispersion in the fluctuation modes and limiting the pairing electrons to a small energy window around the Fermi level, H_e-e = - U∑_s∫ dk dq ψ̂^†_0,s(k+q)ψ̂^†_0,s(-k-q) ψ̂_0,s(-k) ψ̂_0,s(k), where U = 2/(ħω)∑_n,n' g_2n+1 g_2n'+1 is an isotropic effective interaction. In the absence of additional repulsive effects, such interactions will generically lead to conventional even-parity pairings. However, the pairing order will have moiré modulation in real space and only resembles a uniform order at length scales much larger than a_M. We mention that in Eq. <ref>, we have only considered the attractive part of the effective electron-electron interactions. In the layer basis the repulsive part of these interactions is fully decoupled from the attractive part. However, since the ψ̂_s(k) are not the band eigenstates, generally the attractive and repulsive interactions may not be fully decoupled. To show this, we consider two limiting cases. If the domain walls are very narrow, the electronic states at the domain walls predominantly come from the large AB and BA stacking regions. Therefore, the electronic states at the Fermi level have minimal layer mixing and two separate layer-polarized electronic states exist at the domain walls. These layer-polarized states are not mixed by the pairing interactions. Therefore, from Eq. <ref>, in this limit, we obtain BCS pairs described by their layer eigenstates.
Now, we consider the opposite limit, of either (i) very wide domain walls, or (ii) the ideal setup of AA-stacked bilayers. In these cases, the static polarization vanishes in the relevant region. Therefore, it is more convenient to construct the bonding and antibonding orbitals of the layer pseudospin as the electronic states at the Fermi level. In the new basis, we rewrite the full (without discarding the repulsive part) electron-fluctuation coupling term H_e-u = ∑_n ∫ dk ∫ dq g_n u(q+nG_M)ϕ̂^†_0(k+q) τ_1 ϕ̂_0(k), where the ϕ̂ = 1/√(2)(τ_1+τ_3)ψ̂ are the electron operators in the bonding/anti-bonding basis, which is approximately the diagonal basis for the infinite domain wall (AA-stacked) bilayers. We obtain an effective electronic Hamiltonian after integrating out the fluctuations H_e-e = - U∑_η∫ dk dq [ϕ̂^†_0,η(k+q)ϕ̂^†_0,η(-k-q) ϕ̂_0,-η(-k) ϕ̂_0,-η(k) + ϕ̂^†_0,-η(k+q)ϕ̂^†_0,η(-k-q) ϕ̂_0,-η(-k) ϕ̂_0,η(k) ], which leads to two-band superconductivity. Interestingly, in the bonding anti-bonding basis, the effective interactions are fully attractive. This further suggests that non-polar AA-stacking provides an ideal scenario for strong pairing from our mechanism. §.§ Universality of the electron- shear phonon coupling strength Here we discuss some universal features of the electron-shear phonon coupling in the q=0 limit. For this purpose, imagine a domain wall of width w. With a q=0 shear phonon of amplitude u_ph, the polarization develops at the domain wall as P_x ∼α q u_ph and P_z ∼α q (d/a) u_ph. Here, the proportionality constant α is determined by the saturation value of the polarization and therefore, it is system-dependent. Further, the domain wall center is displaced with an amplitude u_dw = w u_ph/a. This leads to an electron-phonon coupling ∂ P_z/∂ u_dw u_dw≈dqα/w u_dw = dqα/a u_ph. It is worthwhile to understand the above expression in the infinite domain wall limit (AA-stacking). In this limit, in the middle expression above, as w→∞, u_dw→∞ in the numerator, therefore, it is well-behaved. This can also be seen by the right expression, where the domain wall width dependence cancels out. Starting from zero polarization, its saturation value is reached when u_ph = a/2. Assuming a linear increase in the polarization as the phonon displacement reaches this saturation value, we get α≈ 2P_0/(qd), where P_0 is the saturation value of P_z. We obtain the electron-phonon coupling as ∂ P_z/∂ u_dw u_dw≈2P_0/a u_ph. The coupling constant 2P_0/a acquires a universal value, determined by the saturation value of the interlayer polarization. Based on similar arguments, and at same level of approximation, we can estimate the coupling constant due to the interlayer hybridization energy change as ∂ t/∂ u_dw u_dw∼t_AA-t_AB/a u_ph. However, since the system indeed acquires an interlayer polarization in the AB-stacking, it is expected that 2P_0 ≥ t_AA-t_AB.
http://arxiv.org/abs/2406.18972v1
20240627080313
Applying LLMs for Rescoring N-best ASR Hypotheses of Casual Conversations: Effects of Domain Adaptation and Context Carry-over
[ "Atsunori Ogawa", "Naoyuki Kamo", "Kohei Matsuura", "Takanori Ashihara", "Takafumi Moriya", "Takatomo Kano", "Naohiro Tawara", "Marc Delcroix" ]
eess.AS
[ "eess.AS", "cs.CL" ]
§ ABSTRACT Large language models (LLMs) have been successfully applied for rescoring automatic speech recognition (ASR) hypotheses. However, their ability to rescore ASR hypotheses of casual conversations has not been sufficiently explored. In this study, we reveal it by performing N-best ASR hypotheses rescoring using Llama2 on the CHiME-7 distant ASR (DASR) task. Llama2 is one of the most representative LLMs, and the CHiME-7 DASR task provides datasets of casual conversations between multiple participants. We investigate the effects of domain adaptation of the LLM and context carry-over when performing N-best rescoring. Experimental results show that, even without domain adaptation, Llama2 outperforms a standard-size domain-adapted Transformer-LM, especially when using a long context. Domain adaptation shortens the context length needed with Llama2 to achieve its best performance, i.e., it reduces the computational cost of Llama2. § INTRODUCTION Large language models (LLMs), such as GPT-4 <cit.>, PaLM2 <cit.>, and Llama2 (Large Language Model META AI) <cit.>, have now become a prominent component in modern natural language processing (NLP) and are successfully utilized in various NLP tasks, such as machine translation, text summarization, and question answering. Recently, they have been used not only in NLP tasks but also in speech-related tasks, including automatic speech recognition (ASR). A simple way to utilize LLMs in ASR is using them in the second-pass rescoring (re-ranking) of multiple ASR hypotheses represented as an N-best list or a lattice, which is obtained by the first-pass ASR decoding. Several studies have reported the usefulness of LLMs in N-best or lattice rescoring of ASR hypotheses <cit.>. Thanks to the significant progress of end-to-end (E2E) neural network modeling, the performance of ASR has greatly improved. Despite this significant progress, ASR accuracy remains unsatisfactory in some situations, such as performing ASR in daily-life environments <cit.>. The distant ASR (DASR) task of the CHiME-7 challenge provides a dataset of such challenging situations <cit.>. The dataset contains casual conversations between multiple participants at real dinner parties. LMs can be expected to play an important role in ASR of such casual conversational speech, and most of the submitted systems try to use LMs during ASR decoding and/or for rescoring ASR hypotheses <cit.>. However, the effect of using LMs is limited (the first-place system does not use any LMs <cit.>), and there is a demand for LMs to deal with such highly casual conversational speech. As described above, several studies have successfully applied LLMs for rescoring ASR hypotheses <cit.>. However, their targets are not casual conversations, and the ability of LLMs to rescore ASR hypotheses of casual conversations remains unclear (note that LLMs are not allowed to be used in the CHiME-7 challenge <cit.>). In this study, we reveal it by performing N-best ASR hypotheses rescoring using Llama2-7B <cit.>, which is one of the most representative Transformer <cit.> decoder-based causal LLMs, on the CHiME-7 DASR task. We comprehensively investigate the effects of domain adaptation of the LLM and context carry-over <cit.> when performing N-best rescoring.
We employ QLoRA <cit.> for memory efficient domain adaptation and consider various context lengths (up to 1024 tokens) in context carry-over. We conducted experiments, including experimental settings that have not been investigated in previous studies <cit.>, and thus, the experimental results and findings obtained in this study are informative for researchers in this field (note that Llama2-7B is allowed to be used in the CHiME-8 challenge <cit.>). Our main findings can be summarized as follows. * Even without domain adaptation, Llama2 significantly outperforms a standard-size domain-adapted Transformer-LM. * Both domain adaptation and context carry-over improve the Llama2 performance. * Even without domain adaptation, by considering a very long context (e.g., 1024 tokens), Llama2 captures the flow of a conversation and achieves the lowest word error rate (WER), which is achieved with the domain-adapted Llama2. * Domain adaptation shortens the context length needed with Llama2 to achieve the lowest WER, significantly reducing the computational cost of Llama2. § RELATION TO PRIOR WORK Previous studies <cit.> use both Transformer encoder-based bidirectional LLMs, such as BERT <cit.>, RoBERTa <cit.>, and ELECTRA <cit.>, and Transformer decoder-based unidirectional LLMs, such as GPT <cit.>, GPT-2 <cit.>, PaLM <cit.> and Llama1 <cit.>, but focus more on the former encoder-based LLMs. In contrast, in this study, we focus on a decoder-based LLM, i.e., Llama2 <cit.>, since recently released LLMs are mainly decoder-based, e.g., GPT-4 <cit.>, PaLM2 <cit.>, and Llama2, and we can expect their further progress. Some previous studies <cit.> use moderately conversational datasets, such as Switchboard (conversations on telephone calls) <cit.>, AMI (conversations on meetings) <cit.>, and an in-house dataset (conversations with a conversational agent) <cit.>. In contrast, in this study, we use the CHiME-7 DASR task dataset (conversations at dinner parties) <cit.>, which is much more casual and challenging than the above datasets, to reveal the applicability of LLMs for rescoring ASR hypotheses of highly casual conversations. Considering past and future contexts is useful for rescoring current ASR hypotheses, and some previous studies perform context carry-over <cit.>. The past context is used with both encoder-based bidirectional LLMs and decoder-based unidirectional LLMs, while the future context is used only with encoder-based LLMs. In this study, we utilize only the past context since we use Llama2, but we comprehensively investigate the effect of the context length by varying it in a wide range, i.e., 0 (without considering the context) to 1024 tokens. The context length investigated in this study is much longer than that investigated in the previous studies, i.e., up to 180 tokens <cit.>. § MODELS AND METHODS We introduce the LMs used in this study, the domain adaptation methods of the LMs, the N-best rescoring method with context-carry over, and text preprocessing. §.§ Language models We use Llama2-7B <cit.> as the main LLM. As a competitor, we also prepared a standard-size Transformer-LM. We used the Llama2 tokenizer (its vocabulary size is 32k BPE <cit.> tokens) as that of the standard-size Transformer-LM, and thus, we can fairly compare these two models in terms of perplexity (PPL). 
To build the standard-size Transformer-LM, we first copied the configuration of Llama2-7B and edited it to define a downsized model structure, and then we trained the configurated model from scratch using a text dataset. The model size (number of model parameters) is about 70M, i.e., 1/100 of the Llama2-7B size, which is the standard size of a Transformer-LM. This model inherits the configuration of Llama2-7B, and thus, in this study, we refer to it as Slama2-70M, i.e., Standard-size (or Smaller-size) of Llama2. Details of Slama2-70M are described in Section <ref>. We also use Llama2-7B-Chat, which is a fine-tuned version of Llama2-7B that is optimized for dialogue use cases <cit.>, since it may be more suitable than the base Llama2-7B for rescoring ASR hypotheses of casual conversation. We investigate which model is more suitable for the target in Section <ref>. §.§ Domain adaptation Llama2 is trained using massive text datasets and is expected to have general linguistic knowledge. However, conversations contained in the CHiME-7 DASR task dataset are highly casual, and thus, transcriptions of such conversations may not be included in the Llama2 training text datasets (their details are not opened <cit.>). We employ QLoRA <cit.> to adapt Llama2 to the target casual conversational domain with its memory efficient way. With QLoRA, a 4-bit quantized large number of the LLM parameters are frozen, while a small number of low-rank adapters (LoRA) <cit.> are fine-tuned using a smaller-size target-domain text dataset. As regards domain adaptation of Slama2, we perform full parameter fine-tuning. Details of domain adaptation are described in Section <ref>. §.§ N-best rescoring with context carry-over Let _i be a feature vector sequence of the ith utterance in an input utterance sequence. As the first-pass ASR decoding, an E2E ASR model decodes _i and outputs N-best ASR hypotheses (an N-best list) of the input utterance as {_i^r}_r=1^N, where _i^r is the rth rank hypothesis (token sequence). The ASR model provides the score (log-probability) for each of the N-best hypotheses as {log(_i^r|_i)}_r=1^N. Then, as the second-pass post-processing, we perform N-best rescoring. We first calculate the LM score (log-probability) for each of the N-best hypotheses as {log(_i^r)}_r=1^N using an LM. Next, for each rank, i.e., r=1,⋯,N, we combine the ASR and LM scores as, logP(_i^r|_i) = log(_i^r|_i) + αlog(_i^r) + γ|_i^r|, where α (α≥ 0) is the language weight and γ (γ≥ 0) is the reward that is given proportional to the length of _i^r. Lastly, we select the best (the highest score rank) hypothesis based on the combined score logP(_i^r|_i) in Eq. (<ref>) as the final 1-best ASR hypothesis. In the above basic N-best rescoring procedure, we focus on the current hypotheses. However, considering the past hypotheses sequence as the context is effective for rescoring the current hypotheses, especially for the conversational speech case. In this study, as with some previous studies <cit.>, we perform context carry-over in N-best rescoring. To consider the context, we modify the LM score in Eq. (<ref>) as, log(_i^r) →log(_i^r|_-L:-1^), where _-L:-1^ is the best past context (token sequence) of the length (number of tokens) L obtained by N-best rescoring for the past N-best hypotheses sequence. Note that, in this study, we do not care about the hypothesis (utterance) boundaries, i.e., the best past context can start from the middle of a past 1-best hypothesis. 
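A minimal sketch of this rescoring procedure with a Hugging Face causal LM is given below; the checkpoint name, the handling of the context/hypothesis token boundary, and the data structures are illustrative assumptions, while the default α = 0.4 and γ = 0.5 follow the values reported for Llama2 later in this section.

```python
# Sketch: N-best rescoring with context carry-over (illustrative, not the authors' code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")   # assumed checkpoint
lm = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf").eval()

def lm_logprob(hyp: str, context: str = "") -> float:
    """Approximate sum of log p(token | past) over the hypothesis tokens."""
    ids = tok((context + " " + hyp).strip(), return_tensors="pt").input_ids
    n_ctx = tok(context, return_tensors="pt").input_ids.shape[1] if context else 1  # 1 = BOS
    with torch.no_grad():
        logp = torch.log_softmax(lm(ids).logits[:, :-1], dim=-1)
    token_lp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    # Score only the hypothesis part (token boundary handling is approximate here).
    return token_lp[0, n_ctx - 1:].sum().item()

def rescore(nbest, context="", alpha=0.4, gamma=0.5):
    """nbest: list of (hypothesis_text, asr_logprob). Returns the best hypothesis."""
    best_score, best_hyp = -float("inf"), None
    for hyp, asr_lp in nbest:
        score = asr_lp + alpha * lm_logprob(hyp, context) + gamma * len(tok(hyp).input_ids)
        if score > best_score:
            best_score, best_hyp = score, hyp
    return best_hyp
```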
Note also that, as with N-best rescoring, we can perform PPL calculation with context-carry over. We comprehensively investigate the effect of the context length L by varying it in a wide range in Section <ref>. §.§ Text processing The authors of <cit.>, who submitted the second-place system of the CHiME-7 challenge, ordered utterances (sentences) in the training text dataset as, speaker 1's utterance 1, utterance 2, ..., speaker 2's utterance 1, utterance 2, ..., and trained an LM (they performed N-best rescoring by applying the same ordering to ASR hypotheses). This speaker-conditioned ordering is based on the assumption that utterances from one speaker have some consistency, and, within the speaker, the past utterances are useful in predicting the current utterance. However, this ordering ignores the flow of a conversation. We investigate which of the speaker-conditioned order or the conversational order is more suitable for the CHiME-7 DASR task in Section <ref>. Llama2 is trained using texts that preserve their original forms <cit.>, i.e., the texts preserve capitalized characters and symbols, such as commas, periods, (double) quotations, colons, question/exclamation marks, and so on. In contrast, texts used in the ASR research field, including texts in the CHiME-7 DASR task dataset, are usually heavily normalized, i.e., all the characters in the texts are lowercased, and all the symbols are removed from the texts. It is not clear whether Llama2 can appropriately treat these heavily normalized texts. However, what we can do to recover the original texts is limited. In this study, we add a period for each sentence (or hypothesis in N-best rescoring). What else we can do is capitalize the first character for each sentence (but it is difficult to recover other capitalization, e.g., named entities). We investigate whether this capitalization of the first character is effective for Llama2 in Section <ref>. § EXPERIMENTS We conducted N-best rescoring experiments using the CHiME-7 DASR task dataset <cit.> on the PyTorch <cit.> environment. We used ESPnet <cit.> for ASR model training and decoding. We also used Hugging Face Transformers <cit.> with the PEFT library <cit.> for LM training, domain adaptation, and inference. §.§ Experimental settings The CHiME-7 DASR task dataset <cit.> consists of the three datasets, i.e., CHiME-6 <cit.>, DiPCo <cit.>, and Mixer 6 <cit.>. The former two datasets contain conversations between four participants at real dinner parties, while Mixer 6 contains conversations between an interviewer and a subject. CHiME-6 and Mixer 6 have the training, development (dev), and evaluation (eval) data splits, while DiPCo has the dev and eval data splits. We used the CHiME-6 and Mixer 6 (CH6+Mx6) combined training dataset for LM domain adaptation, the CHiME-6 dev dataset for hyperparameter tuning, and all the dev and eval datasets for evaluation. Table <ref> shows details of these datasets, and further details can be found in <cit.>. As described in Section <ref>, we sorted all the sentences (utterances) in these datasets in the conversational order (not the speaker-conditioned order <cit.>) and added a period for each sentence (but we did not perform any capitalization). For domain adaptation of Llama2, we attached LoRA adapters <cit.> to all the query and value projection matrices in the attention modules of Llama2 and fine-tuned them with QLoRA <cit.> (Section <ref>) using the CH6+Mx6 training dataset shown in Table <ref>. 
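A sketch of this adapter setup with the PEFT and bitsandbytes integrations in Hugging Face Transformers is shown below; the checkpoint name and the NF4 4-bit quantization settings are assumptions on our part, while the LoRA rank, alpha, dropout, and target modules mirror the values reported in the next paragraph.

```python
# Sketch: QLoRA-style adapters on the query/value projections of a 4-bit Llama2
# (illustrative reconstruction; hyperparameters follow the values reported below).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # assumed quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # assumed checkpoint
    quantization_config=bnb_cfg,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_cfg = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # LoRA alpha scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # query and value projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # only a small fraction of weights is trainable
```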
The ratio of the number of trainable parameters against that of all parameters was 0.06%. We set the context length (number of tokens) L in Eq. (<ref>) at 0, 16, 32, 64, 128, 256, 512, and 1024, respectively. For each of these context lengths L, we concatenated past L tokens as the context to all the sentences in the dataset and performed fine-tuning. We performed one epoch QLoRA fine-tuning using the AdamW optimizer <cit.> by setting the LoRA rank, LoRA alpha scaling parameter, LoRA dropout probability, batch size, and learning rate at 8, 16, 0.05, 64, 1e-5, respectively. As a result, we obtained eight domain-adapted Llama2 models. Table <ref> shows the configuration of Slama2-70B (Section <ref>) in comparison with that of Llama2-7B <cit.>. We trained Slama2 using 1.1G tokens of the LibriSpeech text dataset <cit.>. We concatenated all the sentences (token sequences) in the dataset to form one long token sequence and split it into token sequences of length 2048, which is the maximum positional embedding length of Slama2, as shown in Table <ref>. We trained Slama2 from scratch using these token sequences and then performed domain adaptation of it. For each of the eight context lengths L, we applied the same text processing described above to the CH6+Mx6 training dataset and performed fine-tuning of Slama2 using the dataset. We performed one epoch full parameter fine-tuning using the AdamW optimizer by setting the batch size and learning rate at 64 and 5e-6, respectively. As a result, we obtained eight domain-adapted Slama2 models. As the E2E ASR model, we trained a competitive model based on a Conformer-encoder <cit.> and a structured state space (S4) decoder <cit.>, which is used in the third-place system <cit.> of the CHiME-7 challenge. Using this ASR model, we performed ASR for all the dev and eval utterances and generated 32-best ASR hypotheses for each of the utterances. We did not use any LMs in ASR decoding. As with the above-described text processing, we sorted the ASR hypotheses in the conversational order and added a period for each hypothesis. Then, using Llama2, the domain-adapted Slama2/Llama2 of the eight context lengths L (17 models in total), respectively, we performed rescoring for the 32-best ASR hypotheses. When using Llama2, we set the language weight α and the reward γ in Eq. (<ref>) at 0.4 and 0.5, respectively, and when using Slama2, we set them at 0.3 and 0.5, respectively. We optimized these values using the CHiME-6 dev dataset. We also performed token-based PPL evaluation for all the dev and eval transcriptions (correct token sequences). §.§ Results of PPL evaluation and N-best rescoring Table <ref> shows the results of PPL evaluation and N-best rescoring. First, we can confirm that, in some cases, the domain-adapted Slama2 reduces the word error rates (WERs) from the strong ASR 1-best baseline. The longer contexts bring the lower WERs (and PPLs). However, the reduction of the WERs is limited, as reported in the CHiME-7 papers <cit.>. Next, we compare the results of Slama2 and Llama2 without domain adaptation. We can confirm that, with the shorter context lengths (especially when L=0), Llama2 underperforms Slama2. However, its performance is quickly improved by considering longer contexts, i.e., by capturing the flow of a conversation. It achieves the lowest WERs by using a long context length, e.g., 512 and 1024. Finally, we compare the results of Llama2 and the domain-adapted Llama2. 
We can confirm that, unfortunately, domain adaptation does not bring further WER reduction. However, it shortens the context length needed with Llama2 to achieve the lowest WERs. This is a large advantage since the computational cost of an LLM heavily depends on the length of the input token sequence, and by using shorter context lengths, we can greatly reduce this cost. For example, the inference time when L=128 is about 1/10 of that when L=1024. As reported in <cit.>, we also confirmed that recognition errors of infrequent words, such as “claustrophobic” and “octogenarians”, were reduced by using Llama2. Llama2 steadily reduces WERs from the strong ASR 1-best baseline, but there is still room for improvement since the lowest WERs obtained with Llama2 are much higher than those of the oracle hypotheses shown in the last row of Table <ref>. §.§ Comparison of experimental settings As described in Sections <ref> and <ref>, we performed PPL evaluation on the CHiME-6 dev dataset to compare experimental settings with respect to the following three aspects: (1) whether to capitalize the first character of each sentence, (2) whether to sort utterances in the conversational order or in the speaker-conditioned order <cit.>, and (3) whether to use Llama2-7B or Llama2-7B-Chat <cit.>. Table <ref> shows the experimental results. The leftmost setting is our current experimental setting described in Section <ref>. First, we can confirm that, by capitalizing the first character of each sentence, the PPLs get slightly higher. This result indicates either that capitalization is unnecessary thanks to the robust text processing ability of Llama2, or that we need a more sophisticated approach for recovering the original text forms. Next, we can confirm that, with the shorter context lengths, the speaker-conditioned order shows lower PPLs than the conversational order, but with the longer context lengths, the trend is reversed. This result indicates that several consecutive utterances from one speaker have some consistency, while, in the longer contexts, the flow of a conversation becomes more dominant. Finally, we can confirm that, by using Llama2-Chat, the PPLs get much higher. This result indicates that the style of the dialogue text datasets used to train Llama2-Chat may be very different from that of the CHiME-7 DASR task dataset. To summarize, our current setting described in Section 4.1 seems to be reasonable. § CONCLUSION AND FUTURE WORK We investigated the applicability of LLMs for rescoring ASR hypotheses of highly casual conversations by using Llama2 <cit.> and the CHiME-7 DASR task dataset <cit.>. Llama2 steadily reduces WERs from the strong ASR 1-best baseline, mainly owing to context carry-over. Domain adaptation reduces the computational cost of Llama2 by shortening the needed context length. The experimental results and findings obtained in this study are informative for researchers in this field. Future work will include using the larger Llama2 models, i.e., 13B and 70B <cit.>, and backward LMs <cit.>.
http://arxiv.org/abs/2406.18407v1
20240626150049
Enriques surfaces of zero entropy
[ "Gebhard Martin", "Giacomo Mezzedimi", "Davide Cesare Veniani" ]
math.AG
[ "math.AG", "14J28 (Primary) 14J27, 14J50, 37E30 (Secondary)" ]
§ ABSTRACT We classify Enriques surfaces of zero entropy, or, equivalently, Enriques surfaces with a virtually abelian automorphism group.
§ INTRODUCTION By a result of Nikulin <cit.> and Barth–Peters <cit.>, the automorphism group of a very general complex Enriques surface is isomorphic to the 2-congruence subgroup of the orthogonal group of the lattice E_10 = U ⊕ E_8. This group is infinite and not virtually solvable, that is, it does not contain a solvable subgroup of finite index (see <Ref>). On the other end of the spectrum, there are families of Enriques surfaces with finite automorphism group, classified in <cit.>. Therefore, it is natural to ask whether there exist families of Enriques surfaces with an infinite, but less complicated (e.g., virtually abelian) automorphism group. Barth and Peters <cit.> found one such family over ℂ, and Mukai <cit.> sketched a proof that this is the only such family. Our main result is a complete and characteristic-free classification of Enriques surfaces with virtually abelian automorphism group.
We say that X is of type Å, type , or, respectively, type if X contains (-2)-curves with the following dual graphs: [scale=0.6] (R1) at (180:2) [nodal] ; (R2) at (135:2) [nodal] ; (R3) at (90:2) [nodal] ; (R4) at (45:2) [nodal] ; (R5) at (0:2) [nodal] ; (R6) at (315:2) [nodal] ; (R7) at (270:2) [nodal] ; (R8) at (225:2) [nodal] ; (R9) at (intersection of R2–R7 and R3–R8) [nodal] ; (R10) at (intersection of R4–R7 and R3–R6) [nodal] ; (R6)–(R7)–(R8)–(R1)–(R2)–(R3)–(R4) (R1)–(R9) (R5)–(R10) (R4)–(R5)–(R6); (R7)–(R8)–(R1)–(R2)–(R3)–(R4) (R1)–(R9) ; at (270:3) (type Ã_7); (R0) at (90:1.5) [nodal] ; (R1) at (90:1) [nodal] ; (R2) at (90:0.5) [nodal] ; (R3) at (0:0) [nodal] ; (R4) at (210:0.5) [nodal] ; (R5) at (210:1) [nodal] ; (R6) at (210:1.5) [nodal] ; (R7) at (330:0.5) [nodal] ; (R8) at (330:1) [nodal] ; (R9) at (330:1.5) [nodal] ; (R0)–(R1)–(R2)–(R3)–(R4)–(R5)–(R6) (R3)–(R7)–(R8)–(R9); at (0,-1.3) (type Ẽ_6); [scale=0.5] (R4) at (0,0) [nodal] ; (R5) at (1,1) [nodal] ; (R6) at (1,0) [nodal] ; (R7) at (2,0) [nodal] ; (R8) at (3,0) [nodal] ; (R9) at (4,0) [nodal] ; (R3) at (-1,0) [nodal] ; (R2) at (-2,0) [nodal] ; (R1) at (3,1) [nodal] ; (RX) at (-1,1) [nodal] ; (RXX) at (5,0) [nodal] ; (R2)–(R3)–(R6) (R5)–(R6)–(R9) (R1)–(R8) (R3)–(RX); [double] (R9)–(RXX); at (1.5,-2) (type ); Let X be an Enriques surface with infinite automorphism group over an algebraically closed field k of characteristic p≥ 0. Then, the following are equivalent: * The automorphism group (X) is virtually abelian. * The automorphism group (X) is virtually solvable. * The Enriques surface X is of type Å, or . We note that Enriques surfaces of type and only exist in characteristic 2 by <Ref>. Therefore, if p 2, there exists a unique family of Enriques surfaces of zero entropy and infinite automorphism group, namely the family studied by Barth–Peters. This confirms Mukai's statement in <cit.>. The classification of Enriques surfaces with virtually abelian automorphism group is closely related to the notion of entropy of automorphisms, which we recall in <Ref>. In <Ref>, we prove that an Enriques surface has virtually abelian automorphism group if and only if it has zero entropy. The classification of K3 surfaces of zero entropy has recently been completed by Yu <cit.> and Brandhorst–Mezzedimi <cit.>. By <cit.>, the K3 covers of very general Enriques surfaces of type Å have zero entropy as well. In particular, their automorphism group is infinite, but virtually abelian. This is in stark contrast to the K3 covers of Enriques surfaces with finite automorphism group, which instead have an infinite, non virtually solvable automorphism group by <cit.>. The automorphism group of Enriques surfaces of type Å was computed by Barth–Peters <cit.>. In <cit.>, they claim that the automorphism group of such surfaces is never larger than /4× D_∞, where D_∞ denotes the infinite dihedral group. This turns out to be false: indeed, we show in <Ref> that there exists a single surface in the family, whose automorphism group is a non-split extension of /2 by /4× D_∞ (cf. also <Ref>). This article is structured as follows. In <Ref>, we recall preliminaries on genus 1 fibrations on Enriques surfaces and investigate the action of the Mordell–Weil group of the Jacobian on the fibers of the fibration. We also quickly recall the notion of entropy of automorphisms. In <Ref>, we show that Enriques surfaces of type Å, and have zero entropy and we compute their automorphism groups and number of moduli. 
Finally, in <Ref>, we prove <Ref> using the connection with zero entropy given in <Ref>. §.§ Acknowledgments We thank Shigeyuki Kondō for making us aware of Mukai's report <cit.>. We are grateful to Igor Dolgachev and Matthias Schütt for helpful comments on a first draft of this article. § PRELIMINARIES An Enriques surface is a smooth and proper surface X over an algebraically closed field k with numerically trivial canonical class K_X and b_2(X) = 10. We let p be the characteristic of k. For p 2, the canonical bundle ω_X of an Enriques surface X is 2-torsion. On the other hand, recall that for p = 2, there are three types of Enriques surfaces, with different torsion component ^τ_X of the identity of their Picard scheme: classical, with ^τ_X ≅/2, ordinary, with ^τ_X ≅μ_2, and supersingular, with ^τ_X ≅α_2. Classical Enriques surfaces have a 2-torsion canonical bundle ω_X, while for ordinary and supersingular Enriques surfaces, ω_X ≅𝒪_X is trivial. Let us briefly explain the contents of this section. In <Ref>, we collect some known results about genus 1 fibrations, with particular focus on Enriques surfaces. In <Ref>, we define the Mordell–Weil group of a genus 1 fibration, and we collect several results on the action of this group on the reducible fibers. Finally, in <Ref>, we recall the definition of the algebraic entropy of automorphisms and give a characterization of Enriques surfaces of zero entropy (cf. <Ref>). §.§ Genus 1 fibrations on Enriques surfaces For a comprehensive account on genus 1 fibrations, we refer to <cit.>. Let X,Y be normal varieties over a field. A genus 1 fibration is defined as a proper, surjective and flat morphism f X → Y with f_*_X = _Y, such that the generic fiber X_η is a geometrically integral and regular curve of genus 1. The fibration f is called elliptic if X_η is smooth, otherwise it is called quasi-elliptic. Following Kodaira's notation, we recall that the non-multiple singular fibers of genus 1 fibrations are either additive and denoted by ,,,^*,^*,^*, or _n^*, or multiplicative and denoted by _n. Now let X be an Enriques surface. A half-fiber on X is a non-trivial, connected, nef divisor F with F^2 = 0 and h^0(F) = 1. Every Enriques surface carries a half-fiber (see, e.g., <cit.>). Moreover, every genus 1 fibration on X is induced by a linear system of the form |2F|, where F is a half-fiber on X. The following result characterizes the structure of half-fibers on X: Let f X →^1 be a genus 1 fibration on an Enriques surface X. * If p ≠ 2, then f is an elliptic fibration with two half-fibers, and each of them is either non-singular, or singular of multiplicative type. * If p = 2 and X is classical, then f is an elliptic or quasi-elliptic fibration with two half-fibers, and each of them is either an ordinary elliptic curve, or singular of additive type. * If p = 2 and X is ordinary, then f is an elliptic fibration with one half-fiber, which is either an ordinary elliptic curve, or singular of multiplicative type. * If p = 2 and X is supersingular, then f is an elliptic or quasi-elliptic fibration with one half-fiber, which is either a supersingular elliptic curve, or singular of additive type. §.§ Mordell–Weil group of the Jacobian In <Ref>, we saw that every genus 1 fibration f X →ℙ^1 on an Enriques surface X has a double fiber, hence in particular no section. The associated Jacobian fibration J(f) J(X) →^1 is a genus 1 fibration on a rational surface J(X) by <cit.>, and J(f) is (quasi-)elliptic if and only if f is (quasi-)elliptic. 
More precisely, by <cit.>, the fibers of f and J(f) have the same Kodaira types. The natural action of (J(f)) on the generic fiber of f extends to a regular action on X. If f X →^1 is the genus 1 fibration induced by the pencil |2F|, we put (|2F|) (J(f)), and we identify this group with a subgroup of (X) (see, e.g., <cit.>). It is well-known how this group acts on simple fibers of f: Let f X → B be a genus 1 fibration with Jacobian J(f) J(X) → B. Let b ∈ B be a point and let X_b and J(X)_b be the fibers of f and J(f) over b. Assume that X_b is simple. Then, there exists an (J(f))-equivariant isomorphism J(X)_b ≅ X_b. Since X_b is simple, f admits a section over an étale neighborhood U of b ∈ B. As the smooth locus of f is a torsor under the smooth locus of J(f) and X_U and J(X)_U are the unique relatively minimal proper regular models of the smooth part of the respective fibration, there is a (J(f))-equivariant isomorphism between X_U and J(X)_U. Restricting to a point of U lying over b, we obtain the desired isomorphism. We say that a genus 1 fibration f is extremal if (J(f)) is a finite group. Any quasi-elliptic fibration of a smooth and proper surface is extremal (see, e.g., <cit.>). It will turn out that extremal rational genus 1 fibrations with 2-elementary Mordell–Weil group play a fundamental role in the classification of Enriques surfaces of zero entropy. For the convenience of the reader, we recall in <Ref> and <Ref> the classification of extremal elliptic and quasi-elliptic fibrations on rational surfaces (cf. <cit.>). Furthermore, we know exactly how the sections of such a fibration meet the reducible fibers, and thus, using <Ref>, we observe the following: Let f X →ℙ^1 be an extremal genus 1 fibration of an Enriques surface X. If G is a simple reducible fiber of f, then (J(f)) acts on the dual graph of G as in <Ref> and <Ref>. Describing the action of (J(f)) on the half-fibers of f is, in general, more delicate. If f admits multiplicative half-fibers, then one can use the K3 cover to study this action. Recall that the symmetry group of the dual graph of a configuration of type _n with n ≥ 3 is isomorphic to D_2n, the dihedral group of order 2n. In analogy with the classical representation of D_2n≅ℤ/nℤ⋊ℤ/2ℤ, we call elements in the first factor rotations and all other elements reflections. Let f X →ℙ^1 be an elliptic fibration of an Enriques surface X. Let F be a half-fiber of f and assume that F is of type _n. Then, the following hold: * If g ∈(X) ∖(J(f)) is an involution preserving each fiber of f, then g acts as a reflection on F. * If g ∈(J(f)) acts as a rotation of odd order r on the fiber of J(f) corresponding to F, then it acts as a rotation of order r on F. * If g ∈(J(f)) acts as a rotation of even order r on the fiber of J(f) corresponding to F, then it acts as a rotation of order r/2 on F. Since f admits a half-fiber of type _n, the Enriques surface X is ordinary if p = 2. Thus, the K3 cover πX→ X is étale and a quotient by a fixed point free involution τ. Since F is half-fiber, π^-1(F) is a (necessarily simple) fiber of an elliptic fibration f on X, and since π^-1(F) → F is étale, π^-1(F) is of type _2n. The only fixed point free involution of such a configuration is a rotation of order 2, hence τ acts as such a rotation. The preimage π^-1(E) of a component E of F is the union of two components E and E' on opposite sides of π^-1(F). Now, for Claim (1), observe that g lifts to an automorphism g of X that preserves the fibers of f and is not a translation. 
Since π^-1(F) is a simple fiber, g acts as a reflection on π^-1(F). Taking the quotient by τ, we see that g acts as a reflection on F, as claimed. For Claims (2) and (3), observe that we can realize the Jacobian J(f) J(X) →ℙ^1 as the minimal resolution of the base change of J(f) J(X) →ℙ^1 along the morphism ℙ^1 →ℙ^1 given by the finite part of the Stein factorization of f ∘π. We obtain a generically finite morphism π' J(X) → J(X). Let F' be the fiber of J(f) corresponding to F. By <Ref> and since π^-1(F) is simple, the (J(f))-action on π'^-1(F') can be identified with the (J(f))-action on π^-1(F). Taking the quotient by τ, we obtain Claims (2) and (3). If f is quasi-elliptic, we can use the existence of the curve of cusps together with some lattice theory to understand the action on additive half-fibers. Let f X →ℙ^1 be a quasi-elliptic fibration of an Enriques surface X. Let F be a half-fiber of f and let R be the curve of cusps of f. Then, the following hold: * The Mordell–Weil group (J(f)) is 2-elementary. * The group (J(f)) preserves every component of F. Claim (1) follows from <Ref>. For Claim (2), we use the fact that F is of type ^*, ^*, _4^*,_2^*,_0^* or . If F is not of type _2n^*, then there are at most two simple components in F, and R meets only one of them, so the group (J(f)) preserves all simple components of F and, consequently, it preserves all components of F. Assume instead that F is of type _2n^*, and denote by C_0,…,C_3 the four simple components of F, with C_0 being the one meeting R. Every involution σ∈(J(f)) preserves another simple component of F, say C_1, and if n>0 this must be the simple component near C_0. This implies that σ preserves all double components of F, and σ either preserves C_2 and C_3 as well, or it swaps them. If n=0, then σ has two fixed points on the central component. Since p = 2 and involutions of ℙ^1 in characteristic 2 have only one fixed point, σ fixes the central component pointwise and thus σ preserves all components of F. If n=2, then |2F| has two additional reducible fibers G_1 and G_2, both of type . We consider the invariant and coinvariant lattices of σ, which we denote by (X)^σ and (X)_σ ((X)^σ)^⊥, respectively. Recall that both lattices are 2-elementary, since (X) is unimodular. Assume that σ does not preserve all components of F. Then, by considering fiber components and the curve of cusps, one easily checks that rk((X)^σ) = 9 - a and rk((X)_σ) = 1 + a, where a∈{1,2} is the number of G_i whose components are permuted by σ. Moreover, we have (-4) ⊕ (-8)^a ⊆(X)_σ, the first summand generated by C_2 - C_3 and the second summand generated by the difference of components of the G_i. This is a contradiction, since (-4) ⊕ (-8)^a has no 2-elementary overlattice. Finally, if n=4, then X is extra-special of type D̃_8, and by <cit.> we know that the group (X) acts trivially on (X), so in particular it acts trivially on F. In the following, for a given genus 1 fibration f X →ℙ^1, we let _ℙ^1(X) ⊆(X) be the subgroup of automorphisms of X preserving f and fixing the base of the fibration pointwise. Let f X →ℙ^1 be a non-isotrivial elliptic fibration of an Enriques surface X. Then, _ℙ^1(X) ≅(J(f)) ⋊ℤ/2ℤ. Every element of _ℙ^1(X)∖(J(f)) is an involution that acts with fixed points on a general fiber of f. If two such involutions fix a common point on a general fiber of f, they coincide. Let F_η be the generic fiber of f. Since X is the unique minimal proper regular model of F_η, we have _ℙ^1(X) ≅(F_η). 
Since f is non-isotrivial and elliptic, the known structure of automorphisms of elliptic curves shows that (F_η) ⊆(J(F)_η) ⋊ℤ/2ℤ, where the splitting is induced by identifying ℤ/2ℤ with the stabilizer of a geometric point of F_η. Thus, to finish the proof, it suffices to realize an involution that is not a translation. For this, let F be a half-fiber of f and pick a half-fiber F_1 on X of some other fibration such that F.F_1 = 1. This is possible by <cit.> and because X is not extra-special of type Ẽ_8 since f is elliptic (cf. <cit.>). The linear system |2F + 2F_1| induces a generically finite morphism π X →𝖣 of degree 2 by <cit.> and the pencils |2F| and |2F_1| are mapped to pencils of conics on 𝖣. Since |2F| is elliptic and its image on 𝖣 is a pencil of conics by <cit.>, π must be separable. We let g ∈(X) be the covering involution of π. Since the image of |2F| under π is a pencil of conics, we deduce that g preserves every member of |2F| and acts with a fixed point on a general member, hence g ∈_ℙ^1(X)∖(J(f)) and we are done. §.§ Entropy Let X be a smooth projective surface over an algebraically closed field k of arbitrary characteristic. For an automorphism g of X, the (algebraic) entropy of g is defined as the logarithm of the spectral radius of the pullback g^* on (X)⊗. If the base field is , the entropy of g coincides with the topological entropy of the biholomorphism g on X. The automorphism g has zero entropy if and only if all eigenvalues of the action of g on (X) are roots of unity. This happens for instance if g is periodic, i.e., if it has finite order. From the point of view of hyperbolic geometry, g has zero entropy if and only if the isometry g^*∈ O^+((X)) induced by pullback is elliptic (if g^* has finite order) or parabolic (if g^* has infinite order and preserves a nef isotropic vector in (X)), cf. <cit.>. In the case of K3 surfaces, Cantat <cit.> gives geometric descriptions of automorphisms of zero entropy. We say that the surface X has zero entropy if all its automorphisms have zero entropy. In this context, surfaces of zero entropy naturally stand out as the surfaces with the simplest dynamics and the simplest infinite automorphism groups, as we are going to show now. Recall the following characterization of Enriques surfaces with finite automorphism group: Let X be an Enriques surface. Then, the automorphism group (X) is finite if and only if every genus 1 fibration on X is extremal. The following proposition characterizes Enriques surfaces of zero entropy with infinite automorphism group in an analogous way. Let X be an Enriques surface with infinite automorphism group. Then, the following are equivalent: * The surface X has zero entropy. * The automorphism group (X) is virtually abelian. * The automorphism group (X) is virtually solvable. * There exists exactly one non-extremal genus 1 fibration on X. * There exists a genus 1 fibration that is preserved by all of (X). The proof relies on hyperbolic geometry. Denote by ℍ_X the 9-dimensional hyperbolic space associated to the hyperbolic lattice (X), and consider the natural homomorphism φ(X)→O((X))⊆O(ℍ_X) sending an automorphism g to its induced action g^* on (X). The homomorphism φ has finite kernel by <cit.>, so we can identify (X) with the discrete group of isometries Γφ((X)) up to a finite group. Recall that a discrete group G of isometries of ℍ_X is elementary if it has a finite orbit in the closure ℍ_X <cit.>. 
The group G is elementary of elliptic type if it is finite, elementary of parabolic type if it fixes a unique boundary point of ℍ_X, and elementary of hyperbolic type otherwise. Note that, if H is a subgroup of G of finite index, then H is elementary if and only if G is elementary of the same type. (1) ⇒ (2): If X has zero entropy, then all isometries in Γ are either elliptic or parabolic. Hence Γ is elementary by <cit.>, and thus virtually abelian by <cit.>. (2) ⇒ (3): This is clear. (3) ⇒ (4): Let Γ' be a solvable subgroup of Γ of finite index. By <cit.>, Γ' is elementary, and since by assumption Γ' is infinite, Γ' is either elementary of parabolic or hyperbolic type. We claim that Γ' (and thus Γ) is elementary of parabolic type. Seeking a contradiction, assume that it is of hyperbolic type. Then by <cit.>, every element of infinite order of Γ' is hyperbolic, and thus it has positive entropy. This is a contradiction, because by <Ref> there exists at least one non-extremal genus 1 fibration |2F| on X, which induces a parabolic element of Γ' of infinite order. Therefore Γ is of parabolic type, and it fixes a unique boundary point of ℍ_X, namely the point corresponding to the class of F in (X). Let |2F_1| be any genus 1 fibration of X different from |2F|. The subgroup φ((|2F_1|)) of Γ is elementary and it fixes at least two distinct points in the boundary of ℍ_X, corresponding to F and F_1, and therefore it is elementary of elliptic type, hence finite. (4) ⇒ (5): The unique non-extremal genus 1 fibration on X is preserved by all of (X). (5) ⇒ (1): Every automorphism of X preserves a genus 1 fibration, hence it preserves the class of a half-fiber F, which induces a nef isotropic class in (X). Thus, X has zero entropy. The implications (2) ⇒ (3) and (4) ⇒ (5) ⇒ (1) in <Ref> hold for every surface X. The implication (1) ⇒ (2) holds for any surface X such that the natural homomorphism (X) →O((X)) has finite kernel. Moreover, the implication (3) ⇒ (4) holds if one further assumes that X has a non-extremal genus 1 fibration. In particular, the proof of the implication (3) ⇒ (4) fails for K3 surfaces. And indeed, there exist K3 surfaces of positive entropy with virtually cyclic automorphism group (see <cit.>). The main difference, compared with Enriques surfaces, is that all genus 1 fibrations on these K3 surfaces have finite Mordell–Weil group. So, <Ref> fails for K3 surfaces because <Ref> does. § EXAMPLES The goal of this section is to show that the Enriques surfaces appearing in <Ref> have zero entropy. Along the way, we compute their automorphism groups and number of moduli. §.§ Type A7 Given an Enriques surface of type Å, we let F_0 be the (multiplicative) half-fiber of type _8 which can be found in the defining dual graph: [scale=0.6] (R1) at (180:2) [nodal,label=left:R_1] ; (R2) at (135:2) [nodal,label=left:R_2] ; (R3) at (90:2) [nodal,label=above:R_3] ; (R4) at (45:2) [nodal,label=right:R_4] ; (R5) at (0:2) [nodal,label=right:R_5] ; (R6) at (315:2) [nodal,label=right:R_6] ; (R7) at (270:2) [nodal,label=below:R_7] ; (R8) at (225:2) [nodal,label=left:R_8] ; (R9) at (intersection of R2–R7 and R3–R8) [nodal, fill=white, label=above:E_1] ; (R10) at (intersection of R4–R7 and R3–R6) [nodal, fill=white, label=above:E_2] ; [densely dashed, very thick] (R1)–(R2)–(R3)–(R4)–(R5)–(R6)–(R7)–(R8)–(R1); (R1)–(R9) (R5)–(R10); Note that the divisor F_0 is indeed a half-fiber, since E_1.F_0 = 1, so F_0 is primitive in (X). By <Ref>, Enriques surfaces of type Å in characteristic 2 are ordinary. 
Let X be an Enriques surface of type Å. Then, X admits a unique numerically trivial involution σ. More precisely, σ∈(|2F_0|) and the reduced divisorial part X_1^σ of the fixed locus of σ is X_1^σ = F_0 if p = 2, F_0' + ∑_i=0^3 R_2i+1 if p ≠ 2, where F_0' is the second half-fiber of |2F_0|. Moreover, F_0' is smooth. Since X is ordinary if p = 2, the K3 cover πX→ X is étale with covering involution τ. The preimage π^-1(F_0) of the half-fiber F_0 of type _8 is a fiber of type _16 of an elliptic fibration on X and the preimages of the two bisections E_1,E_2 of |2F_0| are four disjoint sections E_1^±,E_2^± of the induced elliptic fibration fX→ℙ^1. Choosing E_1^+ as the zero section, the section E_1^- is a 2-torsion section by the height pairing <cit.> and the quotient of X by the involution τ' obtained by composing the translation by E_1^- with τ is birational to J(X) by <cit.>. The induced generically finite morphism π' X→ J(X) sends the four sections E_1^±,E_2^± to four sections of J(f). Now, we take π'(E_1^-) ∈(J(f)) and let σ be the induced involution of X. In other words, σ is the automorphism of X induced by τ' via π. We describe the divisorial part of the fixed locus of σ. By construction, σ acts as a translation on the simple fibers of f, hence the divisorial part of the fixed locus of σ is contained in the half-fibers F_0 and F_0' (where F_0' only exists if p ≠ 2). To understand the action of σ on F_0, note that τ' preserves all components of π^-1(F_0). If p = 2, the fact that involutions of ℙ^1 have only one fixed point implies that τ' fixes every component of π^-1(F_0) pointwise, hence σ fixes F_0 pointwise. If p ≠ 2, then τ' is anti-symplectic, so its fixed locus has pure codimension 1. Thus, the smoothness of fixed loci of tame involutions implies that every other component of π^-1(F_0) is fixed pointwise. Therefore, σ fixes R_1,R_3,R_5, and R_7 pointwise. For the action of σ on F_0', first note that p ≠ 2 in this case. Then, the automorphism τ' of the previous paragraph fixes the point F_0' ∩ E_1^+, hence, again because τ' is anti-symplectic, it fixes F_0' pointwise. The smoothness of fixed loci of tame involutions implies that F_0' is smooth. In particular, we see that, in all characteristics, σ preserves all components of the defining graph of X and it is easy to check that (X) ⊗ℚ is generated by the curves in this graph, hence σ is numerically trivial. The uniqueness of σ follows from <cit.>. In fact, σ is even cohomologically trivial. This is clear if p = 2, for then K_X ∼ 0, and if p = 0, this is proved in <cit.>. Via specialization, this implies cohomological triviality of σ also in odd characteristic. Every Enriques surface of type Å has zero entropy. More precisely, |2F_0| is the unique non-extremal genus 1 fibration on X if (X) is infinite. By <Ref>, X admits a unique numerically trivial involution σ. Since σ is unique, the subgroup generated by σ is normal and hence central in (X). Thus, (X) preserves the fixed locus of σ. By <Ref>, the fibration |2F_0| is the unique genus one fibration with a half-fiber contained in the fixed locus of σ, hence (X) preserves |2F_0| and so X has zero entropy by <Ref>. Recall that, by the proof of <Ref>, the Jacobian J(f) of |2F_0| admits a 2-torsion section. It is well-known that torsion sections on rational elliptic surfaces are disjoint from the zero section <cit.>. 
Thus, if p = 2, then J(f) admits no irreducible multiplicative fibers (because 𝔾_m admits no point of order 2 in this case), and if p ≠ 2, then J(f) admits no irreducible additive fibers (because 𝔾_a admits no point of order 2 in this case). If (X) is infinite, J(f) has infinite Mordell–Weil group by <Ref>, so the _8-fiber is its only reducible fiber. From <cit.>, we conclude that if p = 2, then the singular fibers of J(f) are of type _8 and , and if p ≠ 2, they are of type _8 and _1,_1,_1,_1. In the following, we let D_∞ = ℤ⋊ℤ/2ℤ be the infinite dihedral group. Let X be an Enriques surface of type Å with infinite automorphism group. Then, the following hold: * If p = 2, then (X) ≅ℤ/2ℤ× D_∞. * If p ≠ 2, assume that F_0 and F_0' lie over [0:1] and [1:0] and let p_i = [a_i:1] with a_i ∈ k^× be the images of the four nodal fibers of |2F_0|. Set a_1 = 1. Then, * if {a_1,a_2,a_3,a_4} = {1,ζ_4,ζ_4^2,ζ_4^3} for a primitive 4-th root of unity ζ_4, then (X) is a non-split extension of ℤ/2ℤ by ℤ/4ℤ× D_∞. * if there exists a ∈ k with a^4 ≠ 1 and {a_1,a_2,a_3,a_4} = {1,-1,a,-a}, then (X) ≅ℤ/4ℤ× D_∞. * if (a) and (b) do not hold, then (X) ≅ℤ/2ℤ× D_∞. Let f X →^1 be the elliptic fibration induced by |2F_0|. By <Ref>, we know that (J(f)) contains an element of order 2, hence, by <cit.>, we have (J(f)) ≅ℤ/2ℤ×ℤ. Since f is not isotrivial, because there is a multiplicative fiber, we have _ℙ^1(X) ≅ℤ/2ℤ× D_∞ by <Ref>. It remains to study the image of the homomorphism ρ(X) → PGL_2 induced by the action of (X) on the base of f. Let g ∈(X). Then, by functoriality of the Jacobian, g induces an automorphism g' of J(X) that acts on the base of J(f) as ρ(g). By <cit.>, g' preserves the zero section, hence also the unique 2-torsion section of J(f). If p ≠ 2, an explicit computation with Weierstraß equations using Tate's algorithm <cit.> shows that J(f) can be defined by an equation of the form y^2 = x^3 + 2a_2(s,t)x^2 + t^4x, where the 2-torsion section is given by (x,y) = (0,0), the _8-fiber lies over t = 0 and the other singular fibers lie over the roots of Δ_0(s,t) = a_2^2 - t^4, where Δ(s,t) = - 64 t^8 Δ_0 is the discriminant. Since we assume that X has infinite automorphism group, the four roots of Δ_0 must be distinct. We know from <Ref> that the other half-fiber of f is smooth, and after a change of coordinates, we may assume that it lies over s=0. In particular, both Δ_0(0,1) and Δ_0(1,0) are non-zero. Since ρ(g) fixes the two points corresponding to the half-fibers of f, we know that ρ(g) fixes the points t = 0 and s = 0, so that it is given by an automorphism of the form s ↦λ s for some λ∈ k^×. Next, write a_2 = as^2 + bst + ct^2 with a,b,c ∈ k and recall that ρ(g) preserves the roots of Δ_0 = a_2^2 - t^4 = a^2s^4 + 2abs^3t + (2ac + b^2)s^2t^2 + 2bcst^3 + (c^2 - 1)t^4 and that c^2 - 1 = Δ_0(0,1) ≠ 0 and a^2 = Δ_0(1,0) ≠ 0. In particular, we can rescale coordinates to assume Δ_0(1,1) = 0. Since ρ(g) preserves the roots of Δ_0, the polynomial Δ_0(λ s ,t) must be a multiple of Δ_0(s,t). The automorphism ρ(g) changes the coefficients of the monomials in Δ_0(s,t) as follows: [a^2:2ab:2ac+b^2:2bc:c^2-1] ↦ [λ^4 a^2:2λ^3ab:λ^2(2ac+b^2):λ(2bc):c^2-1]. We deduce that λ^4 = 1, that b = 0 if λ≠ 1, and that b = c = 0 if λ^2 ≠ 1, i.e., the order of ρ(g) is n ∈{1,2,4}. Observe that, in every case, ρ(g) sends a_2 to λ^2 a_2 and that λ^4 = 1. 
Hence, up to composing with y ↦ - y, we deduce from the structure of isomorphisms between Weierstraß forms (see, e.g., <cit.>) that g' must be an automorphism of the following form: g' (s,t,x,y) ↦ (λ s ,t,λ^2 x,λ y). Now, we reverse the construction of <Ref>: the Weierstraß model of the K3 cover X of X is given by replacing (s,t) by (s^2,t^2) in Equation (<ref>), hence by y^2 = x^3 + 2a_2(s^2,t^2)x^2 + t^8x, and the Enriques involution τ is the composition of (s,t) ↦ (-s,t) with the translation by the section with (x,y) = (0,0). The automorphism g' lifts to X as g' (s,t,x,y) ↦ (√(λ) s,t,λ^2 x, λ y). This is an automorphism of order 2n, where n is the order of λ∈ k^×, and it commutes with τ, hence it yields an automorphism g”∈(X) of order 2n. Note that g”^n = σ is the cohomologically trivial involution of X. Indeed, g'^n ∘τ coincides with the translation by (x,y) = (0,0), and this induces σ on X by <Ref>. We conclude that (X) is generated by a g” as above with maximal n, an involution ι of the generic fiber of f as in <Ref>, and the translation by a generator E of the free part of (J(f)). We now have three cases: * n = 1: This happens if and only if b ≠ 0. In this case, the action of (X) on the base of f is trivial and (X) = _ℙ^1(X) ≅ℤ/2ℤ× D_∞. Note that in this case Δ_0(-1,1) ≠ 0, so this corresponds to Case (2) (c) in the statement of the proposition. * n = 2: This happens if and only if b = 0 and c ≠ 0. In this case, pick d ∈ k with d^2 + 2cd + 1 = 0 and let E be the section (x,y) = (dt^2, √(2a)dst^2). This is a section of height 1/2, hence a generator of the free part of (J(f)) by <cit.>. The automorphism g' preserves the preimage (x,y) = (dt^4,√(2a)ds^2t^4) of E in X, hence commutes with the translation by E, and hence so does g”. As g” commutes with ι, we deduce that (X) ≅ℤ/4ℤ× D_∞. Note that in this case Δ_0(-1,1) = Δ_0(1,1) = 0 but Δ_0(ζ_4,1) ≠ 0, so this corresponds to Case (2) (b) in the statement of the proposition. * n = 4: This happens if and only if b = c = 0. Here, (X) contains the automorphism group of Case (2) as a normal subgroup of index 2, hence (X) is an extension of ℤ/2ℤ by ℤ/4ℤ× D_∞. This extension does not split, since (X) acts through ℤ/4ℤ on ℙ^1, while ℤ/4ℤ× D_∞ acts through ℤ/2ℤ. Note that in this case Δ_0(ζ_4^i,1) = 0 for i = 0,1,2,3, so this corresponds to Case (2) (a) in the statement of the proposition. Next, assume that p = 2. By <cit.>, the j-map of J(f) has degree 8 and it is ρ((X))-invariant, hence the order of ρ((X)) is a power of 2. On the other hand, ρ((X)) fixes the two points corresponding to the two half-fibers, hence it must have odd order. We conclude that ρ((X)) must be trivial if p = 2, hence (X) = _ℙ^1(X) ≅ℤ/2ℤ× D_∞. The proof of <Ref> shows that Enriques surfaces of type Ã_7 form a 2-dimensional family in all characteristics. If p ≠ 2, the subfamily where (X) ≅ℤ/4ℤ× D_∞ is 1-dimensional and there is a unique Enriques surface of type Å where (X) is a non-split extension of ℤ/2ℤ by ℤ/4ℤ× D_∞. We also recall that there is a 1-dimensional family of Enriques surfaces of type A_7 with finite automorphism group in all characteristics. These are called “type ” surfaces in <cit.> and <cit.>. In <cit.>, it is claimed that the automorphism group of a surface of type Ã_7 is never larger than ℤ/4ℤ× D_∞. This is due to an erroneous calculation in the last lines of the proof of <cit.>. The Enriques surface of type Ã_7 that corresponds to Case (2) (a) in <Ref> can be realized as a member of the family considered by Barth–Peters as follows. 
Consider the double cover X of ℙ^1 ×ℙ^1 branched over the curve (v_0^2 - v_1^2)((v_0^2-v_1^2)u_0^4 + (v_0^2 + v_1^2)u_1^4)). This corresponds to the parameters (a,b,c,d) = (1,0,1,-1) in <cit.>. The minimal resolution of the quotient of X by the involution induced by ([u_0:u_1],[v_0:v_1]) ↦ ([-u_0:u_1],[-v_0:v_1]) is an Enriques surface of type Ã_7. This surface admits an automorphism of order 8 induced by ([u_0:u_1],[v_0:v_1]) ↦ ([ζ_8 u_0:u_1],[ζ_4 v_1: ζ_4 v_0]). The existence of this automorphism contradicts the last two lines of the proof of <cit.>. This model can be used to give a more explicit description of (X). We leave the details to the interested reader. §.§ Type E6 Given an Enriques surface of type , we let F_0 be the (additive) half-fiber of type ^* which can be found in the defining dual graph: (R0) at (90:1.5) [nodal,fill=white] ; (R1) at (90:1) [nodal] ; (R2) at (90:0.5) [nodal] ; (R3) at (0:0) [nodal] ; (R4) at (210:0.5) [nodal] ; (R5) at (210:1) [nodal] ; (R6) at (210:1.5) [nodal,fill=white] ; (R7) at (330:0.5) [nodal] ; (R8) at (330:1) [nodal] ; (R9) at (330:1.5) [nodal,fill=white] ; [densely dashed, very thick] (R1)–(R2)–(R3)–(R4)–(R5) (R3)–(R7)–(R8); (R0)–(R1) (R5)–(R6) (R8)–(R9); By <Ref>, Enriques surfaces of type only exist in characteristic 2 and are either classical or supersingular. Recall that classical and supersingular Enriques surfaces X in characteristic 2 have the property that h^0(X,Ω_X) = 1. The divisorial part D of the zero locus Z of a global 1-form on X is called conductrix. For general X, this conductrix is empty and Z consists of 12 reduced points by the Hirzebruch–Riemann–Roch formula. In contrast, if X is of type , then D = F_0 by <cit.>. It is clear that all automorphisms of X preserve the conductrix, so we deduce the following theorem from <Ref>. Every Enriques surface of type has zero entropy. More precisely, |2F_0| is the unique non-extremal genus 1 fibration on X if (X) is infinite. If X is an Enriques surface of type with infinite automorphism group, then (X) ≅(|2F_0|) ⋊ℤ/2ℤ and (|2F_0|) ∈{ℤ,ℤ^2}. Let f X →^1 be the elliptic fibration induced by |2F_0|. First, assume that f is non-isotrivial, so that it admits a fiber G of type _n with n ≥ 1. By <cit.>, the canonical cover πX→ X coincides with the Frobenius pullback of J(f) in a neighborhood of G, hence X has some A_1-singularities over G, so by <cit.> and because X admits a quasi-elliptic fibration by <cit.>, X is classical. The second half-fiber F'_0 of f is not of type _n by <Ref> and not of the same type as F_0, because the Picard rank of X is 10. Since (X) preserves both F_0 and F'_0, it acts on ℙ^1 through k^×. Thus, if this action is non-trivial, then the number of fibers of type _n for a given n must be odd, contradicting the possible configurations of singular fibers of f determined in <cit.>. Therefore, (X) coincides with the automorphism group of the generic fiber F_η of f, which is (J(f)) ⋊ℤ/2ℤ by <Ref>, since F_η is ordinary. By <cit.>, we know that (J(f)) is either ℤ or ℤ^2, depending on whether f has a second reducible fiber or not. Next, assume that f is isotrivial. Then, by <cit.>, f admits a second singular fiber G of type  or . A computation using Tate's algorithm shows that J(f) admits a Weierstraß equation of the form y^2 + st^2y = x^3 + at^2x^2 + bt^6, with a,b ∈ k, not both 0. The fibration J(f) admits a fiber of type ^* over t = 0 and the other singular fiber G over s = 0. 
Every automorphism of J(X) is of the form g' (s,t,x,y) ↦ (λ s, μ t, β x + b_2(s,t), y + b_1(s,t)x + b_3(s,t)) which sends the Weierstraß form to y^2 + λμ^2 st^2 y = β^3 x^3 + (β^2 b_2 + b_1^2 + a β^2 μ^2 t^2) x^2 + (λμ^2 st^2 b_1 + β b_2^2) x + b_3^2 + λμ^2 st^2b_3 + b_2^3 + a μ^2 t^2b_2^2 + b μ^6 t^6. Comparing coefficients of x, we see that s | b_1 and st | b_2, and then comparing the coefficients of x^2 yields b_1 = b_2 = 0. Moreover, λ = μ^-2 and β^3 = 1, so that the above new Weierstraß equation simplifies to y^2 + st^2 y = x^3 + a β^2 μ^2 t^2 x^2 + b_3^2 + st^2b_3 + b μ^6 t^6. Thus, if a ≠ 0, then β^2μ^2 = 1, hence μ is a third root of unity and b_3 ∈{0,st^2}, so that g', if non-trivial, is the sign involution. If a = 0, we can rescale coordinates to assume b = 1. Then, the only additional condition we have is that b_3^2 + st^2 b_3 = (1 + μ^6) t^6. If t^6 occurs on the left-hand side with non-zero coefficient, then so does st^5, which is absurd. Hence, μ^6 = 1. But then again μ^3 = 1 and b_3 ∈{0,st^2}, so that g' is either trivial or the sign involution. This shows that (J(X)) = (J(f)) ⋊ℤ/2ℤ acting trivially on the base of J(f). Thus, (X) acts trivially on the base of f and if F_η is the generic fiber of f, then the natural map φ(F_η) →(^0_F_η), whose kernel is (J(f)), factors through ℤ/2ℤ. The sign involution on the generic fiber of f has non-zero image under φ, so the proposition follows. By <cit.>, Enriques surfaces of type form a 3-di­men­sion­al family and the generic member satisfies (|2F_0|) ≅ℤ^2. The surfaces with (|2F_0|) ≅ℤ form a subfamily of dimension 2 and the ones with finite automorphism group form a subfamily of dimension 1. In each of these strata, the generic member is a classical Enriques surface and the supersingular Enriques surfaces form a subfamily of codimension 1. §.§ Type D6+A1 Given an Enriques surface of type , we let F_1 be the following (additive) half-fiber of type _2^* which can be found in the defining dual graph: [scale=0.6] (R4) at (0,0) [nodal] ; (R5) at (1,1) [nodal] ; (R6) at (1,0) [nodal] ; (R7) at (2,0) [nodal] ; (R8) at (3,0) [nodal,fill=white] ; (R9) at (4,0) [nodal,fill=white] ; (R3) at (-1,0) [nodal] ; (R2) at (-2,0) [nodal] ; (R1) at (3,1) [nodal,fill=white] ; (RX) at (-1,1) [nodal] ; (RXX) at (5,0) [nodal,fill=white] ; (R7)–(R8)–(R9) (R8)–(R1); [densely dashed, very thick] (R2)–(R7) (R5)–(R6) (RX)–(R3); [double] (R9)–(RXX); The following hold: * An Enriques surface is of type  if and only if it admits a genus 1 fibration with half-fibers of type _2^* and . This fibration is quasi-elliptic. * Every Enriques surface of type  is a classical Enriques surface in characteristic 2. * Enriques surfaces of type  exist and form a 2-dimensional family. * The conductrix of an Enriques surface of type  looks as follows: [scale=0.6] (R4) at (0,0) [nodal, label=below:1] ; (R5) at (1,1) [nodal, label=above:1] ; (R6) at (1,0) [nodal, label=below:2] ; (R7) at (2,0) [nodal, label=below:1] ; (R8) at (3,0) [nodal, label=below:1] ; (R9) at (4,0) [nodal] ; (R3) at (-1,0) [nodal, label=below:1] ; (R2) at (-2,0) [nodal] ; (R1) at (3,1) [nodal] ; (RX) at (-1,1) [nodal] ; (RXX) at (5,0) [nodal] ; (R2)–(R3)–(R6) (R5)–(R6)–(R9) (R1)–(R8) (R3)–(RX); [double] (R9)–(RXX); * Every Enriques surface of type  admits a unique non-trivial numerically trivial involution σ. Moreover, σ∈(|2F_1|). * Every Enriques surface of type  has infinite automorphism group. 
For Claim (1), observe from the defining graph of type  that the fibration |2F_1| has a second half-fiber F_1' of type  or _2, and a third reducible fiber: [scale=0.6] (R4) at (0,0) [nodal] ; (R5) at (1,1) [nodal] ; (R6) at (1,0) [nodal] ; (R7) at (2,0) [nodal] ; (R8) at (3,0) [nodal,fill=white] ; (R9) at (4,0) [nodal] ; (R3) at (-1,0) [nodal] ; (R2) at (-2,0) [nodal] ; (R1) at (3,1) [nodal] ; (RX) at (-1,1) [nodal] ; (RXX) at (5,0) [nodal] ; (R7)–(R8)–(R9) (R8)–(R1); [densely dashed, very thick] (R2)–(R7) (R5)–(R6) (RX)–(R3); [densely dashed, very thick,double] (R9)–(RXX); In particular, this fibration is extremal. As X admits an additive half-fiber, we are in characteristic p = 2 by <Ref>, and so F_1' must be of type  . By <Ref>, there exists no extremal rational elliptic fibration with these fibers if p = 2, so f must be quasi-elliptic. Conversely, if an Enriques surface admits a genus 1 fibration with the given fiber types, then p = 2 and the fibration must be quasi-elliptic by <cit.>, so the dual graph of components of fibers and curve of cusps contains the graph of type . Claim (2) follows from Claim (1) and <Ref>. For Claim (3), note that with the alternative description given in Claim (1), surfaces of type  have first been constructed in <cit.> and the conjectural number of moduli for these surfaces given in <cit.> has recently been confirmed to be 2 in <cit.>. Claim (4) follows from <cit.>. For Claim (5), we combine <Ref> and <Ref> with <Ref> to deduce that there exists a unique non-trivial σ∈(|2F_1|) that preserves all curves in the defining graph of X. The curves in the graph generate (X) over ℚ, hence σ is numerically trivial. Conversely, if σ' is a numerically trivial automorphism of X, then it preserves f and acts trivially on the base, since it preserves the three reducible fibers. If σ' had odd order, then, by the known structure of fixed loci of automorphisms of cuspidal curves, the fixed locus of σ' would contain an integral curve that has intersection number 1 with every fiber, which is absurd. Hence, σ' has even order, so it must come from (|2F_1|), and so σ' = σ. Claim (6) follows from the classification of Enriques surfaces with finite automorphism group given in <cit.>. Let X be an Enriques surface of type  and let σ be its non-trivial numerically trivial involution. Then, the support of the union of the conductrix of X and the divisorial part of the fixed locus of σ forms a configuration G_0 of type Ẽ_7: [scale=0.6] (R4) at (0,0) [nodal] ; (R5) at (1,1) [nodal] ; (R6) at (1,0) [nodal] ; (R7) at (2,0) [nodal] ; (R8) at (3,0) [nodal] ; (R9) at (4,0) [nodal] ; (R3) at (-1,0) [nodal] ; (R2) at (-2,0) [nodal] ; (R1) at (3,1) [nodal,fill=white] ; (RX) at (-1,1) [nodal,fill=white] ; (RXX) at (5,0) [nodal,fill=white] ; (R1)–(R8) (R3)–(RX); [very thick] (R2)–(R9) (R5)–(R6); [double] (R9)–(RXX); Since σ is numerically trivial, it preserves all curves that appear in the defining graph of X. 
Since non-trivial involutions on ℙ^1 in characteristic 2 have a unique fixed point, we deduce that σ fixes pointwise the curves corresponding to the black vertices in the following graph: [scale=0.6] (R4) at (0,0) [nodal] ; (R5) at (1,1) [nodal,fill=white] ; (R6) at (1,0) [nodal] ; (R7) at (2,0) [nodal] ; (R8) at (3,0) [nodal, label=below:R] ; (R9) at (4,0) [nodal, label=below:C_1] ; (R3) at (-1,0) [nodal] ; (R2) at (-2,0) [nodal,fill=white, label=below:C_1'] ; (R1) at (3,1) [nodal,fill=white, label=right:C_2] ; (RX) at (-1,1) [nodal,fill=white, label=left:C_2'] ; (RXX) at (5,0) [nodal,fill=white] ; (R2)–(R3)–(R6) (R5)–(R6)–(R9) (R1)–(R8) (R3)–(RX); [double] (R9)–(RXX); By <Ref>, the Mordell–Weil group of the genus 1 fibration |2F_2| with fiber of type _4^* in the defining diagram induces a horizontal reflection on the graph obtained by removing the right-most vertex, so we may assume without loss of generality that C_1 and C_1' resp. C_2 and C_2' are interchanged by this involution. Since σ is the only non-trivial numerically trivial automorphism, the whole automorphism group (X) commutes with σ, and hence preserves the fixed locus of σ. We conclude that, as σ fixes C_1 pointwise, it must also fix C_1' pointwise. It remains to show that the remaining components of the fixed locus of σ lie in the conductrix. Since σ∈(|2F_1|) by <Ref>, the fixed locus of σ is contained in the union of the curve of cusps R and fibers of |2F_1|. By <cit.>, the sections of the Jacobian of |2F_1| are disjoint, hence by <Ref>, every fixed point of σ on a simple fiber of |2F_1| lies on R. We conclude that C_2 is not in X^σ, so neither is C_2'. Finally, because X^σ is stable under the involution in (|2F_2|) described in the previous paragraph, the right-most vertex of the defining graph of X is also not in X^σ. This finishes the proof. Let X be an Enriques surface of type  and let G_0 the configuration of type Ẽ_7 on X described in <Ref>. Let F_0 be a half-fiber with G_0 ∈ |2F_0| and let σ∈(X) be the numerically trivial involution. Then, the following hold: * The fiber G_0 is preserved by all of (X). * The involution σ exchanges the two half-fibers of |2F_0|. * The fiber G_0∈ |2F_0| is simple and the only reducible fiber of |2F_0|. * The fibration |2F_0| is elliptic and not isotrivial with singular fibers of type ^*,_1,_1. For Claim (1), recall that by <Ref>, the support of the fiber G_0∈ |2F_0| is the union of the conductrix of X and the divisorial part of the fixed locus of the numerically trivial involution of X, hence it is preserved by the whole (X). For Claim (2), assume by contradiction that σ preserves the two half-fibers F_0, F_0' of |2F_0|. Then, σ fixes the base of |2F_0|, hence it induces an involution on the generic fiber (F_0)_η of |2F_0|. By <cit.>, (|2F_0|) is torsion-free, hence σ is not a translation, so it has fixed points on (F_0)_η. This would produce an irreducible component of the fixed locus of σ not contained in G_0, contradicting <Ref>. For Claim (3), let σ be the numerically trivial involution. By Claims (1) and (2) we have σ(G_0)=G_0 and σ(F_0)=F_0', so G_0 is not a half-fiber. Moreover, if |2F_0| has another reducible fiber G_0', then G_0 and G_0' are the only reducible fibers of |2F_0| (otherwise the rank of the lattice spanned by their components would be too big). Then, σ preserves G_0 and G_0', hence it fixes the base of |2F_0|, contradicting the fact that σ exchanges the two half-fibers of |2F_0|. 
For Claim (4), first note that |2F_0| is not extremal by Claim (3), hence it is elliptic. By <cit.>, the singular fibers of |2F_0| are either ^*,_1,_1, or G_0 is the unique singular fiber of |2F_0| and |2F_0| is isotrivial with j-invariant 0. Since X is classical, <Ref> implies that |2F_0| admits a smooth ordinary elliptic curve as half-fiber, so the latter case cannot occur. We note the following immediate consequence of <Ref> and <Ref>. Every Enriques surface of type  has zero entropy. More precisely, |2F_0| is the unique non-extremal genus 1 fibration on X. We are also able to describe the structure of the automorphism group of Enriques surfaces of type . If X is an Enriques surface of type , then (X) ≅/2× D_∞. The subgroup _^1(X) of (X) acting trivially on the base of |2F_0| is isomorphic to D_∞ by <Ref>, since |2F_0| is elliptic and not isotrivial by <Ref> and (|2F_0|) ≅ℤ by <cit.>. By <Ref> (2) and (4), the image of (X) → PGL_2 is generated by the numerically trivial involution σ. Since σ is central in (X), this yields the claim. § CLASSIFICATION The aim of this section is to prove <Ref>. Our strategy can be summarized as follows. By <Ref>, there exists a unique genus 1 fibration |2F_0| with infinite Mordell–Weil group on X. Thus, |2F_0| is preserved by all automorphisms of X and hence, in particular, by the Mordell–Weil groups of the other genus 1 fibrations on X. It turns out that this puts heavy restrictions on the Mordell–Weil groups that appear, eventually leading to the dual graphs of <Ref>. Let X be an Enriques surface with two genus 1 fibrations |2F_0| and |2F_1| such that F_0.F_1 = 1. Suppose that (|2F_1|) preserves |2F_0|, and that |2F_0| or |2F_1| is elliptic. Then, |2F_1| is extremal with reducible fibers (^*), (_4^*), (^*,_2), (^*,), (_0^*,_0^*), (_2^*,,) or (_2^*,_2,_2). In particular, (|2F_1|) ≅ (ℤ/2ℤ)^a with a ≤ 2. Since (|2F_1|) preserves the numerical classes of F_0 and F_1, it acts on (X) ≅ U ⊕ E_8 through the finite group O(E_8). Since the kernel of (X) → O((X)) is finite by <cit.>, we conclude that (|2F_1|) is finite, that is, |2F_1| is extremal. Let G_1 ∈ |2F_1| be a reducible fiber. Since F_0.F_1 = 1, there exist either at most two simple components of G_1 or one double component of G_1 that meets F_0. Since (|2F_1|) preserves the numerical classes of F_0 and F_1, the set of such components is preserved by (|2F_1|). If G_1 is simple or multiplicative, we understand the action of (|2F_1|) on G_1 by <Ref>, and <Ref>. Combining this with the previous paragraph, we see that either a simple component of G_1 has a (|2F_1|)-orbit of length ≤ 2, or a double component of G_1 has a trivial (|2F_1|)-orbit. In particular, G_1 must be of type ^*,^*,_4^*,_2^*,_0^*,, or _2. In order to see this, assume for instance that G_1 is simple of type _n^*. The orbit of a simple component of G_1 has length ≤ 2 if and only if n=4, and a double component has a trivial orbit if and only if n is even (the fixed component is the central one). The other cases are analogous. By <Ref>, the previous discussion already gives the desired claim if |2F_1| is elliptic. Thus assume that |2F_1| is quasi-elliptic; in particular, p=2 and the half-fiber F_1 is additive. There is a subgroup H ⊆(|2F_1|) of index at most 2 (resp. at most 1 if X is not classical) and which preserves the half-fiber F_0. Let E_1 be the component of F_1 meeting F_0. Since H fixes F_0 ∩ E_1 and any singular point of F_1 on E_1 and these points are distinct by <cit.>, it fixes E_1 pointwise. 
Here, we use again the fact that involutions of ℙ^1 and the cuspidal cubic have only one fixed point in characteristic 2. Thus, H fixes the base of |2F_0| and acts with a fixed point on a general fiber of |2F_0|, hence, as |2F_0| is elliptic, H contains at most one non-trivial involution (cf. <Ref>). Thus, (|2F_1|)≅ (/2)^a with a≤ 2 (resp. a ≤ 1 if X is not classical) and the claim follows by <Ref>. In the setting of <Ref>, assume that |2F_0| is not extremal. Then, |2F_1| is extremal with reducible fibers (_4^*), (^*,), (^*,_2), (_2^*,,) or (_2^*,_2,_2). Moreover, |2F_1| admits a simple reducible fiber G_1 such that F_0 meets two distinct simple components of G_1. It suffices to prove the last statement. Indeed, a fiber of type ^* has only one simple component and we know from the proof of <Ref> that a simple fiber of type _0^* can only appear if its central component meets F_0. To prove the last statement, it suffices to note that if F_0 meets only one component of every fiber of |2F_1|, then the lattice spanned by fiber components of |2F_1| that are orthogonal to F_0 has rank 8, hence |2F_0| is extremal. Thus, there must be a fiber of |2F_1| which has two distinct components meeting F_0, and this fiber is necessarily simple and reducible. Recall that by <Ref>, any Enriques surface of zero entropy with infinite automorphism group admits a unique non-extremal genus 1 fibration (necessarily elliptic), which we always denote by |2F_0|. Being preserved by the whole (X), the fibration |2F_0| is preserved by (|2F|) for every fibration |2F| on X. Let X be an Enriques surface of zero entropy with infinite automorphism group. Let |2F_0| be the unique non-extremal fibration and let F_1 be a half-fiber with F_0.F_1 = 1. Assume that X is not of type . Then, |2F_1| is extremal with reducible fibers (_4^*), (^*,) or (^*,_2). By <Ref>, we have to show that if the reducible fibers of |2F_1| are of type (_2^*,,) or (_2^*,_2,_2), then X is of type . Denote by G_1∈ |2F_1| the fiber of type _2^*. Assume first that the fiber G_1 is simple. By the proof of <Ref>, the union Γ of the components of G_1 orthogonal to F_0 is of type A_3∪ A_3. If Γ is contained in a single fiber G_0 ∈ |2F_0|, then G_1 must be simple of type _8, since the central component of G_1 is a bisection of |2F_0|, so the adjacent components must be simple in G_0. If instead there are two fibers G_0,G_0'∈ |2F_0| containing Γ, then, one of them, say G_0, must be of type _4, for otherwise |2F_0| would be extremal. We get three possible diagrams (the last two according to whether G_0 is double or simple): [scale=0.6] (R1) at (0,0) [nodal] ; (R3) at (2,0) [nodal] ; (R4) at (2,-1) [nodal] ; (R5) at (2,-2) [nodal] ; (R7) at (0,-2) [nodal] ; (R8) at (0,-1) [nodal] ; (R9) at (1,-1) [nodal] ; (RX) at (1,0.5) [nodal] ; (RXX) at (1,-2.5) [nodal] ; (R9)–(R8) (R9)–(R4) (R3)–(R5) (R7)–(R1) (R1)–(RX)–(R3) (R7)–(RXX)–(R5); [scale=0.6] (R1) at (0,0) [nodal, label=left:R_1] ; (R3) at (2,0) [nodal] ; (R4) at (2,-1) [nodal] ; (R5) at (2,-2) [nodal] ; (R7) at (0,-2) [nodal, label=left:R_2] ; (R8) at (0,-1) [nodal] ; (R9) at (1,-1) [nodal] ; (RXX) at (-1,-1) [nodal, label=left:R] ; (R9)–(R8) (R9)–(R4) (R3)–(R5) (R7)–(R1) (R1)–(RXX)–(R7); [scale=0.6] (R1) at (0,0) [nodal] ; (R3) at (2,0) [nodal] ; (R4) at (2,-1) [nodal] ; (R5) at (2,-2) [nodal] ; (R7) at (0,-2) [nodal] ; (R8) at (0,-1) [nodal] ; (R9) at (1,-1) [nodal] ; (RXX) at (-1,-2.5) [nodal] ; (R9)–(R8) (R9)–(R4) (R3)–(R5) (R7)–(R1) (R1)–(RXX)–(R7); (RXX) to[bend right=45] (R9); In the first (resp. 
In the second diagram, the (-2)-curve R is a bisection of |2F_1|. Let G_1' and G_1'' be the other two reducible fibers in |2F_1|, with components R_3, R_3' and R_4, R_4', respectively. If G_1' (or G_1'') is double, then R meets one of its components with multiplicity 1, say R_3 (resp. R_4). If instead G_1' is simple, then, by <Ref>, there is an element in MW(|2F_1|) exchanging its components, and therefore F_0.R_3 = F_0.R_3'. In particular, R.R_3 = R.R_3' = 1. In both cases, we have that R.R_3 = R.R_4 = 1, obtaining the following diagram:

[Diagram: the configuration of the second diagram above together with the curves R_3 and R_4, both meeting R; the curves R_1, R_2, R_3, R_4 and R (drawn in bold) form a simple fiber G_2 of type I_0^* with central component R.]

However, the simple fiber G_2 in bold of type I_0^* satisfies G_2.F_0 = 2, contradicting <Ref>.

On the other hand, assume that G_1 ∈ |2F_1| is a double fiber, say G_1 = 2F_1. Then p = 2 and the fibration |2F_1| is quasi-elliptic by <cit.>, or alternatively by <Ref>, because the 2-torsion subgroup of an elliptic curve in characteristic 2 has order at most 2. Denote by R its curve of cusps. By the last paragraph of the proof of <Ref>, |2F_0| has two half-fibers, hence X is classical. Thus, by <Ref>, it suffices to prove that the second half-fiber F_1' of |2F_1| is reducible (and therefore of type III). If by contradiction F_1' is irreducible, then every g ∈ MW(|2F_1|) preserves its singular point and the smooth point F_1' ∩ R, so MW(|2F_1|) acts trivially on F_1'. Since F_1' is a bisection of |2F_0|, the group MW(|2F_1|) fixes the base of the elliptic fibration |2F_0| pointwise, and it fixes the point(s) G_0 ∩ F_1' for a general G_0 ∈ |2F_0|, so |MW(|2F_1|)| ≤ 2 by <ref>, a contradiction.
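Let us recall the standard fact used in the last paragraph: for an elliptic curve E over a field of characteristic 2, multiplication by 2 is inseparable of degree 4, so E[2] ≅ ℤ/2ℤ if E is ordinary and E[2] = 0 if E is supersingular. Consequently, the Mordell–Weil group of an elliptic genus 1 fibration in characteristic 2 cannot contain (ℤ/2ℤ)^2, since its torsion injects into the group of 2-torsion points of the generic fiber of the Jacobian fibration.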
By <Ref>, every Enriques surface of type Ã, or has zero entropy. Thus, to finish the proof of <Ref>, it suffices by <Ref> to show that if an Enriques surface X admits a unique genus 1 fibration with infinite Mordell–Weil group, then X is of type Ã, or .

Let X be an Enriques surface with a unique non-extremal fibration |2F_0|. In particular, |2F_0| is preserved by every automorphism of X. By <cit.> or <cit.>, there exists a half-fiber F_1 with F_0.F_1 = 1. By <Ref>, we may assume that there is a reducible fiber G_1 ∈ |2F_1| of type I_4^* or III^*. Let Γ be the union of the components of G_1 orthogonal to F_0, and let G_0 ∈ |2F_0| be the fiber containing Γ.

Assume first that G_1 is of type I_4^*. By <Ref> and the proof of <Ref>, G_1 is simple and Γ is of type A_7. Since |2F_0| is not extremal, G_0 is a (simple or double) fiber of type I_8, or a simple fiber of type III^*, leading to the following three possible graphs:

[Dual graphs: (1) an octagon R_1, …, R_8 together with two further curves meeting R_1 and R_5, respectively; (2) the octagon together with two further curves, one meeting R_1 and R_3 and one meeting R_3 and R_5; (3) a chain of seven curves with three further curves attached to its second, fourth and sixth components.]

In the first case, X is of type Ã. In the second case, any half-fiber F_2 of type I_4 in the graph satisfies F_0.F_2 = 1, contradicting <Ref>. In the third case, any half-fiber F_2 of type I_2^* in the graph satisfies F_0.F_2 = 1, so X is of type  by <Ref>.

Assume now that G_1 is of type III^*. If G_1 is a double fiber, then Γ is of type E_7.
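(A sketch of the multiplicity count behind this, using only the standard multiplicities of a fiber of type III^*: write F_1 = E_1 + 2E_2 + 3E_3 + 4E_4 + 3E_5 + 2E_6 + E_7 + 2E_8, where E_1, …, E_7 form the long chain of the III^*-diagram and E_8 is attached to E_4. From F_0.F_1 = 1 we get Σ_i m_i(F_0.E_i) = 1, so F_0 meets exactly one of the two multiplicity 1 components E_1, E_7, transversally in one point, and no other component of F_1. Removing this end vertex from the diagram leaves a diagram of type E_7, whence Γ = E_7.)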
Since |2F_0| is not extremal, we deduce that G_0 is of type III^*. Since G_0 and G_1 share components, G_0 must be simple by <cit.>. Thus, X contains (-2)-curves with the following dual graph:

[Dual graph: a chain of seven curves with one further curve attached to its fourth component and another attached to its sixth component.]

As in the previous case, the half-fiber F_2 of type I_2^* satisfies F_0.F_2 = 1, so X is of type  by <Ref>.

If instead G_1 is a simple fiber, then Γ is of type A_7 or E_6, by <Ref>, <Ref> and <Ref>. In the first case, G_0 is a (double or simple) fiber of type I_8, while in the second case, G_0 is a (double or simple) fiber of type IV^*, or a simple fiber of type III^*. We get the following possible dual graphs:

[Dual graphs: (1) an octagon R_1, …, R_8 with one further curve meeting R_1; (2) the octagon with one further curve meeting the antipodal components R_1 and R_5; (3) three chains of lengths two, three and three attached to a common central curve; (4) the same configuration as in (3), where in addition the end component of the short chain meets the end components of the two long chains; (5) a chain of seven curves with three further curves attached to its second, fourth and sixth components.]

In the second and fourth graph, we find a half-fiber F_2 of type I_6 with F_0.F_2 = 1, contradicting again <Ref>. As above, the last graph leads to Enriques surfaces of type  by <Ref>. In the first and third case, the component of G_0 that is not contained in G_1 meets at least one component of the other reducible fiber of |2F_1| transversally by <Ref>. Thus, in these cases, we obtain the graphs of types Ã and , respectively.